The Unified Modeling Language (UML) is a general-purpose visual modeling language that is intended to provide a standard way to visualize the design of a system.
UML provides a standard notation for many types of diagrams, which can be roughly divided into three main groups: structure diagrams, behavior diagrams, and interaction diagrams (the last being a subset of behavior diagrams).
The creation of UML was originally motivated by the desire to standardize the disparate notational systems and approaches to software design. It was developed at Rational Software in 1994–1995, and Rational led its further development through 1996.
In 1997, UML was adopted as a standard by the Object Management Group (OMG) and has been managed by this organization ever since. In 2005, UML was also published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as the ISO/IEC 19501 standard. Since then the standard has been periodically revised to cover the latest revision of UML.
In software engineering, most practitioners do not use UML, but instead produce informal hand-drawn diagrams; these diagrams, however, often include elements from UML.
== History ==
=== Before UML 1.0 ===
UML has evolved since the second half of the 1990s and has its roots in the object-oriented programming methods developed in the late 1980s and early 1990s. The timeline (see image) shows the highlights of the history of object-oriented modeling methods and notation.
It is originally based on the notations of the Booch method, the object-modeling technique (OMT), and object-oriented software engineering (OOSE), which it has integrated into a single language.
Rational Software Corporation hired James Rumbaugh from General Electric in 1994 and after that, the company became the source for two of the most popular object-oriented modeling approaches of the day: Rumbaugh's object-modeling technique (OMT) and Grady Booch's method. They were soon assisted in their efforts by Ivar Jacobson, the creator of the object-oriented software engineering (OOSE) method, who joined them at Rational in 1995.
=== UML 1.x ===
Under the technical leadership of those three (Rumbaugh, Jacobson, and Booch), a consortium called the UML Partners was organized in 1996 to complete the Unified Modeling Language (UML) specification and propose it to the Object Management Group (OMG) for standardization. The partnership also included additional interested parties (for example HP, DEC, IBM, and Microsoft). The UML Partners proposed their UML 1.0 draft to the OMG in January 1997. During the same month, the UML Partners formed a task force, chaired by Cris Kobryn and administered by Ed Eykholt, to define the exact meaning of language constructs, finalize the specification, and integrate it with other standardization efforts. The result of this work, UML 1.1, was submitted to the OMG in August 1997 and adopted by the OMG in November 1997.
After the first release, a task force was formed to improve the language, which released several minor revisions, 1.3, 1.4, and 1.5.
The standards it produced (as well as the original standard) have been noted as being ambiguous and inconsistent.
==== Cardinality notation ====
As with database Chen, Bachman, and ISO ER diagrams, class models are specified to use "look-across" cardinalities, even though several authors (Merise, Elmasri & Navathe, amongst others) prefer same-side or "look-here" for roles and both minimum and maximum cardinalities. Recent researchers (Feinerer and Dullea et al.) have shown that the "look-across" technique used by UML and ER diagrams is less effective and less coherent when applied to n-ary relationships of order strictly greater than 2.
Feinerer says: "Problems arise if we operate under the look-across semantics as used for UML associations. Hartmann investigates this situation and shows how and why different transformations fail.", and: "As we will see on the next few pages, the look-across interpretation introduces several difficulties which prevent the extension of simple mechanisms from binary to n-ary associations."
=== UML 2 ===
The UML 2.0 major revision, developed by an enlarged consortium to improve the language further and reflect new experience with the usage of its features, replaced version 1.5 in 2005.
Although UML 2.1 was never released as a formal specification, versions 2.1.1 and 2.1.2 appeared in 2007, followed by UML 2.2 in February 2009. UML 2.3 was formally released in May 2010. UML 2.4.1 was formally released in August 2011. UML 2.5 was released in October 2012 as an "In progress" version and was officially released in June 2015.
The formal version 2.5.1 was adopted in December 2017.
There are four parts to the UML 2.x specification:
The Superstructure that defines the notation and semantics for diagrams and their model elements
The Infrastructure that defines the core metamodel on which the Superstructure is based
The Object Constraint Language (OCL) for defining rules for model elements
The UML Diagram Interchange that defines how UML 2 diagram layouts are exchanged
Until UML 2.4.1, the latest versions of these standards were:
UML Superstructure version 2.4.1
UML Infrastructure version 2.4.1
OCL version 2.3.1
UML Diagram Interchange version 1.0.
Since version 2.5, the UML Specification has been simplified (without Superstructure and Infrastructure), and the latest versions of these standards are now:
UML Specification 2.5.1
OCL version 2.4
It continues to be updated and improved by the revision task force, which resolves any issues with the language.
== Design ==
UML offers a way to visualize a system's architectural blueprints in a diagram, including elements such as:
any activities (jobs);
individual components of the system, and how they can interact with other software components;
how the system will run;
how entities interact with others (components and interfaces);
external user interface.
Although originally intended for object-oriented design documentation, UML has been extended to a larger set of design documentation (as listed above), and has been found useful in many contexts.
=== Software development methods ===
UML is not a development method by itself; however, it was designed to be compatible with the leading object-oriented software development methods of its time, for example OMT, the Booch method, Objectory, and especially the Rational Unified Process (RUP), with which it was originally intended to be used when work began at Rational Software.
=== Modeling ===
It is important to distinguish between the UML model and the set of diagrams of a system. A diagram is a partial graphic representation of a system's model. The set of diagrams need not completely cover the model and deleting a diagram does not change the model. The model may also contain documentation that drives the model elements and diagrams (such as written use cases).
UML diagrams represent two different views of a system model:
Static (or structural) view: emphasizes the static structure of the system using objects, attributes, operations and relationships. It includes class diagrams and composite structure diagrams.
Dynamic (or behavioral) view: emphasizes the dynamic behavior of the system by showing collaborations among objects and changes to the internal states of objects. This view includes sequence diagrams, activity diagrams and state machine diagrams.
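The two views can be seen in ordinary object-oriented code. A minimal Python sketch (the Order class and its states are invented for illustration): the attributes and operations are what a class diagram records, while the guarded state transitions are what a state machine diagram records.

```python
# Illustrative only: a class whose static structure (attributes,
# operations) belongs to the structural view, and whose state
# transitions belong to the behavioral view.

class Order:
    def __init__(self, order_id):
        # Static structure: attributes a class diagram would list.
        self.order_id = order_id
        self.state = "created"   # states: created -> paid -> shipped

    def pay(self):
        # Dynamic behavior: a guarded state transition.
        if self.state != "created":
            raise ValueError("can only pay a newly created order")
        self.state = "paid"

    def ship(self):
        if self.state != "paid":
            raise ValueError("can only ship a paid order")
        self.state = "shipped"

order = Order("A-1")
order.pay()
order.ship()
```

A state machine diagram of this class would show the three states and the two transitions; a class diagram would show only the attributes and operations.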
UML models can be exchanged among UML tools by using the XML Metadata Interchange (XMI) format.
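As a rough sketch of what interchange involves — real XMI as standardized by the OMG uses versioned namespaces and a much richer schema than shown here — one tool might serialize a model element to XML and another parse it back. The element and attribute names below are simplified assumptions, not the normative XMI vocabulary:

```python
import xml.etree.ElementTree as ET

# Simplified, XMI-like namespaces (the real URIs are versioned).
XMI_NS = "http://www.omg.org/XMI"
UML_NS = "http://www.omg.org/spec/UML"
ET.register_namespace("xmi", XMI_NS)
ET.register_namespace("uml", UML_NS)

# Build a tiny "model" containing one class named Order.
root = ET.Element(f"{{{XMI_NS}}}XMI")
model = ET.SubElement(root, f"{{{UML_NS}}}Model", {"name": "Example"})
ET.SubElement(model, "packagedElement",
              {f"{{{XMI_NS}}}type": "uml:Class", "name": "Order"})

serialized = ET.tostring(root, encoding="unicode")

# A second tool would parse the same text back into a model.
parsed = ET.fromstring(serialized)
class_names = [el.get("name") for el in parsed.iter("packagedElement")]
print(class_names)  # ['Order']
```

The round trip — model to text to model — is the essence of what XMI standardizes, so that models survive transfer between tools from different vendors.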
In UML, one of the key tools for behavior modeling is the use-case model, adopted from OOSE. Use cases are a way of specifying required usages of a system. Typically, they are used to capture the requirements of a system, that is, what a system is supposed to do.
== Diagrams ==
UML 2 has many types of diagrams, which are divided into two categories. Some types represent structural information, and the rest represent general types of behavior, including a few that represent different aspects of interactions. These diagrams can be categorized hierarchically as shown in the following class diagram:
These diagrams may all contain comments or notes explaining usage, constraint, or intent.
=== Structure diagrams ===
Structure diagrams represent the static aspects of the system. They emphasize the things that must be present in the system being modeled. Since structure diagrams represent the structure, they are used extensively in documenting the software architecture of software systems. For example, the component diagram describes how a software system is split up into components and shows the dependencies among these components.
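The dependency information a component diagram carries can also be processed mechanically. A sketch with made-up component names: given each component's dependencies, a topological sort yields an order in which the components could be built or deployed so that every dependency comes first (a real tool would also detect dependency cycles, which this sketch omits).

```python
# Hypothetical components and their dependencies, as a component
# diagram might show them: each component maps to what it depends on.
dependencies = {
    "web_ui": ["order_service"],
    "order_service": ["inventory", "billing"],
    "billing": [],
    "inventory": [],
}

def build_order(deps):
    """Depth-first topological sort: dependencies come before dependents."""
    order, seen = [], set()

    def visit(component):
        if component in seen:
            return
        seen.add(component)
        for dep in deps.get(component, []):
            visit(dep)
        order.append(component)

    for component in deps:
        visit(component)
    return order

order = build_order(dependencies)
print(order)
```

Every component appears after all of its dependencies, which is exactly the constraint the diagram's dependency arrows express.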
=== Behavior diagrams ===
Behavior diagrams represent the dynamic aspects of the system. They emphasize what must happen in the system being modeled. Since behavior diagrams illustrate the behavior of a system, they are used extensively to describe the functionality of software systems. As an example, the activity diagram describes the business and operational step-by-step activities of the components in a system.
For example, consider a staff complaint-handling process: the staff user submits a complaint to the complaints system; the complaints system forwards the complaint to the HR system; the HR system assigns the complaint to a department; the department updates the complaints system with the resolution; the complaints system requests feedback from the feedback system; the feedback system asks the staff user for feedback; and the staff user submits feedback. A sequence diagram of this flow would place the actors and systems along the top, with arrows indicating the sequence of messages between them. A sequence diagram can be drawn for each use case to show how different objects interact with each other to achieve the functionality of the use case.
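The message ordering a sequence diagram conveys can be captured directly in code. A small Python sketch (the recording helper is illustrative; the participants follow the complaint-handling example above):

```python
# Record the messages of the complaint-handling flow in the order a
# sequence diagram would show them: (sender, receiver, message).
trace = []

def send(sender, receiver, message):
    trace.append((sender, receiver, message))

send("Staff User", "Complaints System", "Submit Complaint")
send("Complaints System", "HR System", "Forward Complaint")
send("HR System", "Department", "Assign Complaint")
send("Department", "Complaints System", "Update Resolution")
send("Complaints System", "Feedback System", "Request Feedback")
send("Feedback System", "Staff User", "Provide Feedback")
send("Staff User", "Feedback System", "Submit Feedback")

# An actor's lifeline is the sequence of messages it sends or
# receives, in order:
staff_lifeline = [m for m in trace if "Staff User" in (m[0], m[1])]
print(len(trace), len(staff_lifeline))  # 7 3
```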
== Artifacts ==
In UML, an artifact is the "specification of a physical piece of information that is used or produced by a software development process, or by deployment and operation of a system." "Examples of artifacts include model files, source files, scripts, and binary executable files, a table in a database system, a development deliverable, a word-processing document, or a mail message."
Artifacts are the physical entities that are deployed on nodes (i.e., devices and execution environments). Other UML elements such as classes and components are first manifested into artifacts, and instances of these artifacts are then deployed. Artifacts can also be composed of other artifacts.
== Metamodeling ==
The Object Management Group (OMG) has developed a metamodeling architecture to define the UML, called the Meta-Object Facility. MOF is designed as a four-layered architecture, as shown in the image at right. It provides a meta-meta model at the top, called the M3 layer. This M3-model is the language used by Meta-Object Facility to build metamodels, called M2-models.
The most prominent example of a Layer 2 Meta-Object Facility model is the UML metamodel, which describes the UML itself. These M2-models describe elements of the M1-layer, and thus M1-models. These would be, for example, models written in UML. The last layer is the M0-layer or data layer. It is used to describe runtime instances of the system.
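Python's own object model provides a rough analogy for these layers (an analogy only — the MOF is a separate OMG standard): an object plays the M0 role, its class the M1 role, and the metaclass that describes what a class is plays the M2 role.

```python
# Rough analogy between MOF layers and Python's object model.

# M1: a user-defined class, analogous to a model element.
class Account:
    def __init__(self, owner):
        self.owner = owner

# M0: a runtime instance of the system.
acct = Account("alice")

# M2: the construct that describes what a class is.
# In Python that role is played by `type`.
assert type(acct) is Account   # an M0 instance is described by its M1 class
assert type(Account) is type   # the M1 class is described by the M2 level
assert type(type) is type      # the metalevel describes itself (cf. M3)
```

The last assertion mirrors how the MOF's M3 layer is defined in terms of itself, closing the tower of metamodels.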
The meta-model can be extended using a mechanism called stereotyping. This has been criticized as being insufficient/untenable by Brian Henderson-Sellers and Cesar Gonzalez-Perez in "Uses and Abuses of the Stereotype Mechanism in UML 1.x and 2.0".
== Adoption ==
As of 2013, UML was being marketed by the OMG for many contexts, but was aimed primarily at software development, with limited success.
It has been treated, at times, as a design silver bullet, which leads to problems. UML misuse includes overuse (designing every part of the system with it, which is unnecessary) and assuming that novices can design with it.
It is considered a large language, with many constructs. Some people (including Jacobson) feel that UML's size hinders learning and therefore uptake.
Microsoft Visual Studio dropped support for UML in 2016 due to lack of usage.
According to Google Trends, UML has been on a steady decline since 2004.
== See also ==
Applications of UML
BPMN (Business Process Model and Notation)
C4 model
Department of Defense Architecture Framework
DOT (graph description language)
List of Unified Modeling Language tools
MODAF
Model-based testing
Model-driven engineering
Object-oriented role analysis and modeling
Process Specification Language
Systems Modeling Language (SysML)
== References ==
== Further reading ==
Ambler, Scott William (2004). The Object Primer: Agile Model Driven Development with UML 2. Cambridge University Press. ISBN 0-521-54018-6. Archived from the original on 31 January 2010. Retrieved 29 April 2006.
Chonoles, Michael Jesse; James A. Schardt (2003). UML 2 for Dummies. Wiley Publishing. ISBN 0-7645-2614-6.
Fowler, Martin (2004). UML Distilled: A Brief Guide to the Standard Object Modeling Language (3rd ed.). Addison-Wesley. ISBN 0-321-19368-7.
Jacobson, Ivar; Grady Booch; James Rumbaugh (1998). The Unified Software Development Process. Addison Wesley Longman. ISBN 0-201-57169-2.
Martin, Robert Cecil (2003). UML for Java Programmers. Prentice Hall. ISBN 0-13-142848-9.
Noran, Ovidiu S. "Business Modelling: UML vs. IDEF" (PDF). Retrieved 14 November 2022.
Horst Kargl. "Interactive UML Metamodel with additional Examples".
Penker, Magnus; Hans-Erik Eriksson (2000). Business Modeling with UML. John Wiley & Sons. ISBN 0-471-29551-5.
Douglass, Bruce Powel. "Bruce Douglass: Real-Time Agile Systems and Software Development" (web). Retrieved 1 January 2019.
Douglass, Bruce (2014). Real-Time UML Workshop 2nd Edition. Newnes. ISBN 978-0-471-29551-8.
Douglass, Bruce (2004). Real-Time UML 3rd Edition. Newnes. ISBN 978-0321160768.
Douglass, Bruce (2002). Real-Time Design Patterns. Addison-Wesley Professional. ISBN 978-0201699562.
Douglass, Bruce (2009). Real-Time Agility. Addison-Wesley Professional. ISBN 978-0321545497.
Douglass, Bruce (2010). Design Patterns for Embedded Systems in C. Newnes. ISBN 978-1856177078.
== External links ==
Official website
Current UML specification: Unified Modeling Language 2.5.1. OMG Document Number formal/2017-12-05. Object Management Group Standards Development Organization (OMG SDO). December 2017.

Source: Wikipedia/Unified_Modeling_Language
In systems engineering, information systems and software engineering, the systems development life cycle (SDLC), also referred to as the application development life cycle, is a process for planning, creating, testing, and deploying an information system. The SDLC concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both. There are usually six stages in this cycle: requirement analysis, design, development and testing, implementation, documentation, and evaluation.
== Overview ==
A systems development life cycle is composed of distinct work phases that are used by systems engineers and systems developers to deliver information systems. Like anything that is manufactured on an assembly line, an SDLC aims to produce high-quality systems that meet or exceed expectations, based on requirements, by delivering systems within scheduled time frames and cost estimates. Computer systems are complex and often link components with varying origins. Various SDLC methodologies have been created, such as waterfall, spiral, agile, rapid prototyping, incremental, and synchronize and stabilize.
SDLC methodologies fit within a flexibility spectrum ranging from agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on lightweight processes that allow for rapid changes. Iterative methodologies, such as Rational Unified Process and dynamic systems development method, focus on stabilizing project scope and iteratively expanding or improving products. Sequential or big-design-up-front (BDUF) models, such as waterfall, focus on complete and correct planning to guide larger projects and limit risks to successful and predictable results. Anamorphic development is guided by project scope and adaptive iterations.
In project management a project can include both a project life cycle (PLC) and an SDLC, during which somewhat different activities occur. According to Taylor (2004), "the project life cycle encompasses all the activities of the project, while the systems development life cycle focuses on realizing the product requirements".
SDLC is not a methodology per se, but rather a description of the phases that a methodology should address. The list of phases is not definitive, but typically includes planning, analysis, design, build, test, implement, and maintenance/support. In the Scrum framework, for example, one could say a single user story goes through all the phases of the SDLC within a two-week sprint. By contrast, in the waterfall methodology, every business requirement is translated into feature/functional descriptions, which are then all implemented, typically over a period of months or longer.
== History ==
According to Elliott (2004), SDLC "originated in the 1960s, to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines".
The structured systems analysis and design method (SSADM) was produced for the UK government Office of Government Commerce in the 1980s. Ever since, according to Elliott (2004), "the traditional life cycle approaches to systems development have been increasingly replaced with alternative approaches and frameworks, which attempted to overcome some of the inherent deficiencies of the traditional SDLC".
== Models ==
SDLC provides a set of phases/steps/activities for system designers and developers to follow. Each phase builds on the results of the previous one. Not every project requires that the phases be sequential. For smaller, simpler projects, phases may be combined/overlap.
=== Waterfall ===
The oldest and best known is the waterfall model, which uses a linear sequence of steps. Waterfall has different varieties. One variety is as follows:
==== Preliminary analysis ====
Begin with a preliminary analysis: consider alternative solutions, estimate costs and benefits, and submit a preliminary plan with recommendations.
Conduct preliminary analysis: Identify the organization's objectives and define the nature and scope of the project. Ensure that the project fits with the objectives.
Consider alternative solutions: Alternatives may come from interviewing employees, clients, suppliers, and consultants, as well as competitive analysis.
Cost-benefit analysis: Analyze the costs and benefits of the project.
==== Systems analysis, requirements definition ====
Decompose project goals into defined functions and operations. This involves gathering and interpreting facts, diagnosing problems, and recommending changes. Analyze end-user information needs and resolve inconsistencies and incompleteness:
Collect facts: Obtain end-user requirements by document review, client interviews, observation, and questionnaires.
Scrutinize existing system(s): Identify pros and cons.
Analyze the proposed system: Find solutions to issues and prepare specifications, incorporating appropriate user proposals.
==== Systems design ====
At this step, desired features and operations are detailed, including screen layouts, business rules, process diagrams, pseudocode, and other deliverables.
==== Development ====
Write the code.
==== Integration and testing ====
Assemble the modules in a testing environment. Check for errors, bugs, and interoperability.
==== Acceptance, installation, deployment ====
Put the system into production. This may involve training users, deploying hardware, and loading information from the prior system.
==== Maintenance ====
Monitor the system to assess its ongoing fitness, and make modest changes and fixes as needed to maintain its quality. Continual monitoring and updates ensure the system remains effective.
==== Evaluation ====
The system and the process are reviewed. Relevant questions include whether the newly implemented system meets requirements and achieves project goals, whether the system is usable, reliable/available, properly scaled and fault-tolerant. Process checks include review of timelines and expenses, as well as user acceptance.
==== Disposal ====
At end of life, plans are developed for discontinuing the system and transitioning to its replacement. Related information and infrastructure must be repurposed, archived, discarded, or destroyed, while appropriately protecting security.
In the following diagram, these stages are divided into ten steps, from definition to creation and modification of IT work products:
=== Systems analysis and design ===
Systems analysis and design (SAD) can be considered a meta-development activity, which serves to set the stage and bound the problem. SAD can help balance competing high-level requirements. SAD interacts with distributed enterprise architecture, enterprise I.T. Architecture, and business architecture, and relies heavily on concepts such as partitioning, interfaces, personae and roles, and deployment/operational modeling to arrive at a high-level system description. This high-level description is then broken down into the components and modules which can be analyzed, designed, and constructed separately and integrated to accomplish the business goal. SDLC and SAD are cornerstones of full life cycle product and system planning.
=== Object-oriented analysis and design ===
Object-oriented analysis and design (OOAD) is the process of analyzing a problem domain to develop a conceptual model that can then be used to guide development. During the analysis phase, a programmer develops written requirements and a formal vision document via interviews with stakeholders.
The conceptual model that results from OOAD typically consists of use cases, and class and interaction diagrams. It may also include a user interface mock-up.
An output artifact does not need to be completely defined to serve as input of object-oriented design; analysis and design may occur in parallel. In practice the results of one activity can feed the other in an iterative process.
Some typical input artifacts for OOAD:
Conceptual model: A conceptual model is the result of object-oriented analysis. It captures concepts in the problem domain. The conceptual model is explicitly independent of implementation details.
Use cases: A use case is a description of sequences of events that, taken together, complete a required task. Each use case provides scenarios that convey how the system should interact with actors (users). Actors may be end users or other systems. Use cases may be further elaborated using diagrams, which identify the actor and the processes they perform.
System sequence diagram: A system sequence diagram (SSD) shows, for a particular use case, the events that actors generate and their order, including inter-system events.
User interface document: Document that shows and describes the user interface.
Data model: A data model describes how data elements relate to each other. The data model is created before the design phase. Object-oriented designs map directly from the data model. Relational designs are more involved.
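A sketch of that direct mapping, with hypothetical entities: a data model containing a one-to-many Customer–Order relationship translates almost mechanically into classes whose fields mirror the entities and the relationship.

```python
from dataclasses import dataclass, field

# Illustrative data model: Customer 1 --- * Order
#   Customer(customer_id, name)
#   Order(order_id, total), each belonging to one customer

@dataclass
class Order:
    order_id: int
    total: float

@dataclass
class Customer:
    customer_id: int
    name: str
    orders: list = field(default_factory=list)   # the one-to-many side

alice = Customer(1, "Alice")
alice.orders.append(Order(100, 19.95))
alice.orders.append(Order(101, 5.00))

print(len(alice.orders))  # 2
```

In a relational design the same relationship would instead become a foreign key column on the Order table, which is the sense in which relational designs are "more involved".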
=== System lifecycle ===
The system lifecycle is a view of a system or proposed system that addresses all phases of its existence to include system conception, design and development, production and/or construction, distribution, operation, maintenance and support, retirement, phase-out, and disposal.
==== Conceptual design ====
The conceptual design stage is the stage where an identified need is examined, requirements for potential solutions are defined, potential solutions are evaluated, and a system specification is developed. The system specification represents the technical requirements that will provide overall guidance for system design. Because this document determines all future development, the stage cannot be completed until a conceptual design review has determined that the system specification properly addresses the motivating need.
Key steps within the conceptual design stage include:
Need identification
Feasibility analysis
System requirements analysis
System specification
Conceptual design review
==== Preliminary system design ====
During this stage of the system lifecycle, subsystems that perform the desired system functions are designed and specified in compliance with the system specification. Interfaces between subsystems are defined, as well as overall test and evaluation requirements. At the completion of this stage, a development specification is produced that is sufficient to perform detailed design and development.
Key steps within the preliminary design stage include:
Functional analysis
Requirements allocation
Detailed trade-off studies
Synthesis of system options
Preliminary design of engineering models
Development specification
Preliminary design review
For example, as the systems analyst of Viti Bank, you have been tasked with examining the current information system. Viti Bank is a fast-growing bank in Fiji. Customers in remote rural areas find it difficult to access bank services; it takes them days or even weeks to travel to a location that provides them. With the vision of meeting the customers' needs, the bank has requested your services to examine the current system and to come up with solutions or recommendations for how it can be improved to meet those needs.
==== Detail design and development ====
This stage includes the development of detailed designs that brings initial design work into a completed form of specifications. This work includes the specification of interfaces between the system and its intended environment, and a comprehensive evaluation of the systems logistical, maintenance and support requirements. The detail design and development is responsible for producing the product, process and material specifications and may result in substantial changes to the development specification.
Key steps within the detail design and development stage include:
Detailed design
Detailed synthesis
Development of engineering and prototype models
Revision of development specification
Product, process, and material specification
Critical design review
==== Production and construction ====
During the production and/or construction stage the product is built or assembled in accordance with the requirements specified in the product, process and material specifications, and is deployed and tested within the operational target environment. System assessments are conducted in order to correct deficiencies and adapt the system for continued improvement.
Key steps within the product construction stage include:
Production and/or construction of system components
Acceptance testing
System distribution and operation
Operational testing and evaluation
System assessment
==== Utilization and support ====
Once fully deployed, the system is used for its intended operational role and maintained within its operational environment.
Key steps within the utilization and support stage include:
System operation in the user environment
Change management
System modifications for improvement
System assessment
==== Phase-out and disposal ====
Effectiveness and efficiency of the system must be continuously evaluated to determine when the product has met its maximum effective lifecycle. Considerations include: Continued existence of operational need, matching between operational requirements and system performance, feasibility of system phase-out versus maintenance, and availability of alternative systems.
== Phases ==
=== System investigation ===
During this step, current priorities that would be affected and how they should be handled are considered. A feasibility study determines whether creating a new or improved system is appropriate. This helps to estimate costs, benefits, resource requirements, and specific user needs.
The feasibility study should address operational, financial, technical, human factors, and legal/political concerns.
=== Analysis ===
The goal of analysis is to determine where the problem is. This step involves decomposing the system into pieces, analyzing project goals, breaking down what needs to be created, and engaging users to define requirements.
=== Design ===
In systems design, functions and operations are described in detail, including screen layouts, business rules, process diagrams, and other documentation. Modular design reduces complexity and allows the outputs to describe the system as a collection of subsystems.
The design stage takes as its input the requirements already defined. For each requirement, a set of design elements is produced.
Design documents typically include functional hierarchy diagrams, screen layouts, business rules, process diagrams, pseudo-code, and a complete data model with a data dictionary. These elements describe the system in sufficient detail that developers and engineers can develop and deliver the system with minimal additional input.
=== Testing ===
The code is tested at various levels in software testing. Unit, system, and user acceptance tests are typically performed. Many approaches to testing have been adopted.
The following types of testing may be relevant:
Path testing
Data set testing
Unit testing
System testing
Integration testing
Black-box testing
White-box testing
Regression testing
Automation testing
User acceptance testing
Software performance testing
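A few of these levels can be illustrated with Python's built-in unittest module. The function under test is invented for the example; the point is the contrast between a test that exercises an internal error branch (white-box flavour) and one that only checks outputs for given inputs (black-box flavour).

```python
import unittest

# Hypothetical unit under test.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTests(unittest.TestCase):
    # White-box flavour: targets a specific internal branch.
    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

    # Black-box flavour: only inputs and expected outputs.
    def test_known_values(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)
        self.assertEqual(apply_discount(19.99, 0), 19.99)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Regression testing, in this setting, is simply re-running the same suite after every change to confirm previously passing behavior still holds.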
=== Training and transition ===
Once a system has been stabilized through testing, SDLC ensures that proper training is prepared and performed before transitioning the system to support staff and end users. Training usually covers operational training for support staff as well as end-user training.
After training, systems engineers and developers transition the system to its production environment.
=== Operations and maintenance ===
Maintenance includes changes, fixes, and enhancements.
=== Evaluation ===
The final phase of the SDLC is to measure the effectiveness of the system and evaluate potential enhancements.
== Life cycle ==
=== Management and control ===
SDLC phase objectives are described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives while executing projects. Control objectives are clear statements of the desired result or purpose and should be defined and monitored throughout a project. Control objectives can be grouped into major categories (domains), and relate to the SDLC phases as shown in the figure.
To manage and control a substantial SDLC initiative, a work breakdown structure (WBS) captures and schedules the work. The WBS and all programmatic material should be kept in the "project description" section of the project notebook. The project manager chooses a WBS format that best describes the project.
The diagram shows that coverage spans numerous phases of the SDLC, and that each management control domain (MCD) maps to particular SDLC phases. For example, analysis and design are primarily performed as part of the Acquisition and Implementation domain, and system build and prototype are primarily performed as part of the Delivery and Support domain.
=== Work breakdown structured organization ===
The upper section of the WBS provides an overview of the project scope and timeline. It should also summarize the major phases and milestones. The middle section is based on the SDLC phases. WBS elements consist of milestones and tasks to be completed rather than activities to be undertaken and have a deadline. Each task has a measurable output (e.g., analysis document). A WBS task may rely on one or more activities (e.g. coding). Parts of the project needing support from contractors should have a statement of work (SOW). The development of a SOW does not occur during a specific phase of SDLC but is developed to include the work from the SDLC process that may be conducted by contractors.
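The rollup structure of a WBS can be sketched as a nested tree (the task names and day estimates below are made up): each summary element's effort is the sum of its children, so elements higher in the structure aggregate the measurable outputs beneath them.

```python
# Illustrative WBS fragment: each node is either a leaf task with an
# estimated duration in days, or a summary element with children.
wbs = {
    "1 Analysis": {
        "1.1 Collect facts": 5,
        "1.2 Scrutinize existing system": 3,
    },
    "2 Design": {
        "2.1 Screen layouts": 4,
        "2.2 Data model": 6,
    },
}

def rollup(node):
    """Total effort of a WBS element: leaf durations sum upward."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

print(rollup(wbs))  # 18
```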
=== Baselines ===
Baselines are established after four of the five phases of the SDLC, and are critical to the iterative nature of the model. Baselines become milestones.
functional baseline: established after the conceptual design phase.
allocated baseline: established after the preliminary design phase.
product baseline: established after the detail design and development phase.
updated product baseline: established after the production construction phase.
== Alternative methodologies ==
Alternative software development methods to systems development life cycle are:
Software prototyping
Joint applications development (JAD)
Rapid application development (RAD)
Extreme programming (XP)
Open-source development
End-user development
Object-oriented programming
== Strengths and weaknesses ==
Fundamentally, SDLC trades flexibility for control by imposing structure. It is more commonly used for large scale projects with many developers.
== See also ==
Application lifecycle management
Decision cycle
IPO model
Software development methodologies
== References ==
== Further reading ==
Cummings, Haag (2006). Management Information Systems for the Information Age. Toronto, McGraw-Hill Ryerson
Beynon-Davies P. (2009). Business Information Systems. Palgrave, Basingstoke. ISBN 978-0-230-20368-6
Computer World, 2002, Retrieved on June 22, 2006, from the World Wide Web:
Management Information Systems, 2005, Retrieved on June 22, 2006, from the World Wide Web:
== External links ==
The Agile System Development Lifecycle
Pension Benefit Guaranty Corporation – Information Technology Solutions Lifecycle Methodology
DoD Integrated Framework Chart IFC (front, back)
FSA Life Cycle Framework
HHS Enterprise Performance Life Cycle Framework
The Open Systems Development Life Cycle
System Development Life Cycle Evolution Modeling
Zero Deviation Life Cycle
Integrated Defense AT&L Life Cycle Management Chart, the U.S. DoD form of this concept.
In computer programming, especially functional programming and type theory, an algebraic data type (ADT) is a kind of composite data type, i.e., a data type formed by combining other types.
Two common classes of algebraic types are product types (i.e., tuples and records) and sum types (i.e., tagged or disjoint unions, coproduct types, or variant types).
The values of a product type typically contain several values, called fields. All values of that type have the same combination of field types. The set of all possible values of a product type is the set-theoretic product, i.e., the Cartesian product, of the sets of all possible values of its field types.
The values of a sum type are typically grouped into several classes, called variants. A value of a variant type is usually created with a quasi-functional entity called a constructor. Each variant has its own constructor, which takes a specified number of arguments with specified types. The set of all possible values of a sum type is the set-theoretic sum, i.e., the disjoint union, of the sets of all possible values of its variants. Enumerated types are a special case of sum types in which the constructors take no arguments, as exactly one value is defined for each constructor.
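For instance, an enumerated type in Haskell is simply a sum type whose constructors are all nullary (the type and constructor names here are illustrative, not from the article's later examples):

```haskell
-- A sum type with three variants; each constructor takes no arguments,
-- so the type has exactly three values.
data Colour = Red | Green | Blue
  deriving (Eq, Show)
```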
Values of algebraic types are analyzed with pattern matching, which identifies a value by its constructor or field names and extracts the data it contains.
== History ==
Algebraic data types were introduced in Hope, a small functional programming language developed in the 1970s at the University of Edinburgh.
== Examples ==
=== Singly linked list ===
One of the most common examples of an algebraic data type is the singly linked list. A list type is a sum type with two variants, Nil for an empty list and Cons x xs for the combination of a new element x with a list xs to create a new list. Here is an example of how a singly linked list would be declared in Haskell:
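A minimal sketch, fixing the element type to Int:

```haskell
-- A list of integers: either empty (Nil), or an Int followed by
-- the rest of the list (Cons).
data List = Nil
          | Cons Int List
```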
or
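equivalently, with the element type left as a parameter, giving the usual polymorphic list:

```haskell
-- A list of elements of any type a.
data List a = Nil
            | Cons a (List a)
```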
Cons is an abbreviation of construct. Many languages have special syntax for lists defined in this way. For example, Haskell and ML use [] for Nil, : or :: respectively for Cons, and square brackets for entire lists. So Cons 1 (Cons 2 (Cons 3 Nil)) would normally be written as 1:2:3:[] or [1,2,3] in Haskell, or as 1::2::3::[] or [1,2,3] in ML.
=== Binary tree ===
For a slightly more complex example, binary trees may be implemented in Haskell as follows:
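One declaration consistent with the pattern-matching discussion later in this article (where Node carries one Int value and two Tree values) is:

```haskell
data Tree = Empty
          | Leaf Int
          | Node Int Tree Tree
```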
or
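equivalently, parameterized over the element type:

```haskell
data Tree a = Empty
            | Leaf a
            | Node a (Tree a) (Tree a)
```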
Here, Empty represents an empty tree, Leaf represents a leaf node, and Node organizes the data into branches.
In most languages that support algebraic data types, it is possible to define parametric types. Examples are given later in this article.
Somewhat similar to a function, a data constructor is applied to arguments of an appropriate type, yielding an instance of the data type to which the type constructor belongs. For example, the data constructor Leaf is logically a function Int -> Tree, meaning that giving an integer as an argument to Leaf produces a value of the type Tree. As Node takes two arguments of the type Tree itself, the datatype is recursive.
Operations on algebraic data types can be defined by using pattern matching to retrieve the arguments. For example, consider a function to find the depth of a Tree, given here in Haskell:
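A sketch of such a depth function, assuming the Node-carries-an-Int layout used in the pattern-matching section below (the declaration is repeated so the snippet stands alone):

```haskell
data Tree = Empty
          | Leaf Int
          | Node Int Tree Tree

-- depth: one equation per constructor, so every case is handled.
depth :: Tree -> Int
depth Empty        = 0
depth (Leaf n)     = 1
depth (Node _ l r) = 1 + max (depth l) (depth r)
```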
Thus, a Tree given to depth can be constructed using any of Empty, Leaf, or Node, and must be matched against each of them to deal with all cases. In the case of Node, the pattern extracts the subtrees l and r for further processing.
=== Abstract syntax ===
Algebraic data types are highly suited to implementing abstract syntax. For example, the following algebraic data type describes a simple language representing numerical expressions:
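A declaration covering the constructors used in the example expression below (the exact constructor set is an assumption; further operators would be added analogously):

```haskell
-- Abstract syntax for a small language of numerical expressions.
data Expression = Number Int                   -- literal
                | Add Expression Expression    -- e1 + e2
                | Minus Expression Expression  -- e1 - e2
                | Mult Expression Expression   -- e1 * e2
```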
An element of such a data type would have a form such as Mult (Add (Number 4) (Minus (Number 0) (Number 1))) (Number 2).
Writing an evaluation function for this language is a simple exercise; however, more complex transformations also become feasible. For example, an optimization pass in a compiler might be written as a function taking an abstract expression as input and returning an optimized form.
== Pattern matching ==
Algebraic data types are used to represent values that can be one of several types of things. Each type of thing is associated with an identifier called a constructor, which can be considered a tag for that kind of data. Each constructor can carry with it a different type of data.
For example, considering the binary Tree example shown above, a constructor could carry no data (e.g., Empty), or one piece of data (e.g., Leaf has one Int value), or multiple pieces of data (e.g., Node has one Int value and two Tree values).
To do something with a value of this Tree algebraic data type, it is deconstructed using a process called pattern matching. This involves matching the data with a series of patterns. The example function depth above pattern-matches its argument with three patterns. When the function is called, it finds the first pattern that matches its argument, performs any variable bindings that are found in the pattern, and evaluates the expression corresponding to the pattern.
Each pattern above has a form that resembles the structure of some possible value of this datatype. The first pattern simply matches values of the constructor Empty. The second pattern matches values of the constructor Leaf. Patterns are recursive, so the data associated with that constructor is then matched against the pattern "n". In this case, a lowercase identifier represents a pattern that matches any value; the matched value is bound to a variable of that name (here, "n" is bound to the integer value stored in the data type) for use in the expression to be evaluated.
The recursion in patterns in this example is trivial, but a possible more complex recursive pattern would be something like:
Node i (Node j (Leaf 4) x) (Node k y (Node Empty z))
Recursive patterns several layers deep are used for example in balancing red–black trees, which involve cases that require looking at colors several layers deep.
The example above is operationally equivalent to the following pseudocode:
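One such pseudocode rendering of the depth function, with constructor fields accessed positionally (field1 being Node's integer, field2 and field3 its left and right subtrees):

```
function depth(a):
    switch on (a.constructor):
        case Empty:
            return 0
        case Leaf:
            return 1
        case Node:
            l = a.field2
            r = a.field3
            return 1 + max(depth(l), depth(r))
```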
The advantages of algebraic data types can be highlighted by comparison of the above pseudocode with a pattern matching equivalent.
Firstly, there is type safety. In the pseudocode example above, programmer diligence is required not to access field2 when the constructor is a Leaf, and the type system would have difficulty assigning a static type in a safe way to such traditional record data structures. Pattern matching avoids these problems: the type of each extracted value is based on the types declared by the relevant constructor, and the number of values that can be extracted is known from the constructor.
Secondly, in pattern matching, the compiler performs exhaustiveness checking to ensure all cases are handled. If one of the cases of the depth function above were missing, the compiler would issue a warning. Exhaustiveness checking may seem easy for simple patterns, but with many complex recursive patterns, the task soon becomes difficult for the average human (or compiler, if it must check arbitrary nested if-else constructs). Similarly, there may be patterns which never match (i.e., are already covered by prior patterns). The compiler can also check and issue warnings for these, as they may indicate an error in reasoning.
Algebraic data type pattern matching should not be confused with regular expression string pattern matching. The purpose of both is similar (to extract parts from a piece of data matching certain constraints); however, the implementation is very different. Pattern matching on algebraic data types matches on the structural properties of an object rather than on the character sequence of strings.
== Theory ==
A general algebraic data type is a possibly recursive sum type of product types. Each constructor tags a product type to separate it from others, or if there is only one constructor, the data type is a product type. Further, the parameter types of a constructor are the factors of the product type. A parameterless constructor corresponds to the empty product. If a datatype is recursive, the entire sum of products is wrapped in a recursive type, and each constructor also rolls the datatype into the recursive type.
For example, the Haskell datatype:
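the polymorphic list declared earlier in this article:

```haskell
data List a = Nil
            | Cons a (List a)
```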
is represented in type theory as
λα. μβ. 1 + α × β
with constructors
nil_α = roll (inl ⟨⟩)
and
cons_α x l = roll (inr ⟨x, l⟩).
The Haskell List datatype can also be represented in type theory in a slightly different form, thus:
μϕ. λα. 1 + α × ϕ α.
(Note how the μ and λ constructs are reversed relative to the original.) The original formation specified a type function whose body was a recursive type. The revised version specifies a recursive function on types. (The type variable ϕ is used to suggest a function rather than a base type like β, since ϕ is like a Greek f.) The function ϕ must also now be applied to its argument type α in the body of the type.
For the purposes of the List example, these two formulations are not significantly different; but the second form allows expressing so-called nested data types, i.e., those where the recursive type differs parametrically from the original. (For more information on nested data types, see the works of Richard Bird, Lambert Meertens, and Ross Paterson.)
In set theory the equivalent of a sum type is a disjoint union, a set whose elements are pairs consisting of a tag (equivalent to a constructor) and an object of a type corresponding to the tag (equivalent to the constructor arguments).
== Programming languages with algebraic data types ==
Many programming languages incorporate algebraic data types as a first class notion, including:
== See also ==
Disjoint union
Generalized algebraic data type
Initial algebra
Quotient type
Tagged union
Type theory
Visitor pattern
== References ==
In computer science, computational learning theory (or just learning theory) is a subfield of artificial intelligence devoted to studying the design and analysis of machine learning algorithms.
== Overview ==
Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. The algorithm takes these previously labeled samples and uses them to induce a classifier. This classifier is a function that assigns labels to samples, including samples that have not been seen previously by the algorithm. The goal of the supervised learning algorithm is to optimize some measure of performance such as minimizing the number of mistakes made on new samples.
In addition to performance bounds, computational learning theory studies the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results:
Positive results – Showing that a certain class of functions is learnable in polynomial time.
Negative results – Showing that certain classes cannot be learned in polynomial time.
Negative results often rely on commonly believed but as yet unproven assumptions, such as:
Computational complexity – P ≠ NP (the P versus NP problem);
Cryptographic – One-way functions exist.
There are several different approaches to computational learning theory based on making different assumptions about the inference principles used to generalise from limited data. This includes different definitions of probability (see frequency probability, Bayesian probability) and different assumptions on the generation of samples. The different approaches include:
Exact learning, proposed by Dana Angluin;
Probably approximately correct learning (PAC learning), proposed by Leslie Valiant;
VC theory, proposed by Vladimir Vapnik and Alexey Chervonenkis;
Inductive inference as developed by Ray Solomonoff;
Algorithmic learning theory, from the work of E. Mark Gold;
Online machine learning, from the work of Nick Littlestone.
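To make one of these concrete: the PAC model asks the learner to output, with high probability, a hypothesis with low error. A standard statement of the criterion (symbols assumed here: C a concept class, c ∈ C the target concept, D an arbitrary distribution over examples, h the learner's output hypothesis):

```latex
\forall c \in C,\ \forall D,\ \forall \varepsilon, \delta \in (0, \tfrac{1}{2}):
\quad \Pr\bigl[\operatorname{err}_D(h) \le \varepsilon\bigr] \ge 1 - \delta,
\qquad \text{where } \operatorname{err}_D(h) = \Pr_{x \sim D}\bigl[h(x) \ne c(x)\bigr]
```

and the learner must run (and draw samples) in time polynomial in 1/ε, 1/δ, and the size of the target concept, which is exactly the "feasible in polynomial time" requirement above.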
While its primary goal is to understand learning abstractly, computational learning theory has led to the development of practical algorithms. For example, PAC theory inspired boosting, VC theory led to support vector machines, and Bayesian inference led to belief networks.
== See also ==
Error tolerance (PAC learning)
Grammar induction
Information theory
Occam learning
Stability (learning theory)
== References ==
== Further reading ==
A description of some of these publications is given at important publications in machine learning.
=== Surveys ===
Angluin, D. 1992. Computational learning theory: Survey and selected bibliography. In Proceedings of the Twenty-Fourth Annual ACM Symposium on Theory of Computing (May 1992), pages 351–369. http://portal.acm.org/citation.cfm?id=129712.129746
D. Haussler. Probably approximately correct learning. In AAAI-90 Proceedings of the Eighth National Conference on Artificial Intelligence, Boston, MA, pages 1101–1108. American Association for Artificial Intelligence, 1990. http://citeseer.ist.psu.edu/haussler90probably.html
=== Feature selection ===
A. Dhagat and L. Hellerstein, "PAC learning with irrelevant attributes", in 'Proceedings of the IEEE Symp. on Foundation of Computer Science', 1994. http://citeseer.ist.psu.edu/dhagat94pac.html
=== Optimal O notation learning ===
Oded Goldreich, Dana Ron. On universal learning algorithms. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.47.2224
=== Negative results ===
M. Kearns and Leslie Valiant. 1989. Cryptographic limitations on learning boolean formulae and finite automata. In Proceedings of the 21st Annual ACM Symposium on Theory of Computing, pages 433–444, New York. ACM. http://citeseer.ist.psu.edu/kearns89cryptographic.html
=== Error tolerance ===
Michael Kearns and Ming Li. Learning in the presence of malicious errors. SIAM Journal on Computing, 22(4):807–837, August 1993. http://citeseer.ist.psu.edu/kearns93learning.html
Kearns, M. (1993). Efficient noise-tolerant learning from statistical queries. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, pages 392–401. http://citeseer.ist.psu.edu/kearns93efficient.html
=== Equivalence ===
D.Haussler, M.Kearns, N.Littlestone and M. Warmuth, Equivalence of models for polynomial learnability, Proc. 1st ACM Workshop on Computational Learning Theory, (1988) 42-55.
Pitt, L.; Warmuth, M. K. (1990). "Prediction-Preserving Reducibility". Journal of Computer and System Sciences. 41 (3): 430–467. doi:10.1016/0022-0000(90)90028-J.
== External links ==
Basics of Bayesian inference
In information science, an ontology encompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to as applied ontology.
Every academic discipline or field, in creating its terminology, thereby lays the groundwork for an ontology. Each uses ontological assumptions to frame explicit theories, research and applications. Improved ontologies may improve problem solving within that domain, interoperability of data systems, and discoverability of data. Translating research papers within every field is a problem made easier when experts from different countries maintain a controlled vocabulary of jargon between each of their languages. For instance, the definition and ontology of economics is a primary concern in Marxist economics, but also in other subfields of economics. An example of economics relying on information science occurs in cases where a simulation or model is intended to enable economic decisions, such as determining what capital assets are at risk and by how much (see risk management).
What ontologies in both information science and philosophy have in common is the attempt to represent entities, including both objects and events, with all their interdependent properties and relations, according to a system of categories. In both fields, there is considerable work on problems of ontology engineering (e.g., Quine and Kripke in philosophy, Sowa and Guarino in information science), and debates concerning to what extent normative ontology is possible (e.g., foundationalism and coherentism in philosophy, BFO and Cyc in artificial intelligence).
Applied ontology is considered by some as a successor to prior work in philosophy. However, many current efforts are more concerned with establishing controlled vocabularies of narrow domains than with philosophical first principles, or with questions such as the mode of existence of fixed essences or whether enduring objects (e.g., perdurantism and endurantism) may be ontologically more primary than processes. Applied ontology continues to receive considerable attention in artificial intelligence subfields such as natural language processing, machine translation, and knowledge representation, but ontology editors are now used in a range of fields, including biomedical informatics and industry. Such efforts often use ontology editing tools such as Protégé.
== Ontology in Philosophy ==
Ontology is a branch of philosophy and intersects areas such as metaphysics, epistemology, and philosophy of language, as it considers how knowledge, language, and perception relate to the nature of reality. Metaphysics deals with questions like "what exists?" and "what is the nature of reality?". One of five traditional branches of philosophy, metaphysics is concerned with exploring existence through properties, entities and relations such as those between particulars and universals, intrinsic and extrinsic properties, or essence and existence. Metaphysics has been an ongoing topic of discussion since recorded history.
== Etymology ==
The compound word ontology combines onto-, from the Greek ὄν, on (gen. ὄντος, ontos), i.e. "being; that which is", which is the present participle of the verb εἰμί, eimí, i.e. "to be, I am", and -λογία, -logia, i.e. "logical discourse", see classical compounds for this type of word formation.
While the etymology is Greek, the oldest extant record of the word itself, the Neo-Latin form ontologia, appeared in 1606 in the work Ogdoas Scholastica by Jacob Lorhard (Lorhardus) and in 1613 in the Lexicon philosophicum by Rudolf Göckel (Goclenius).
The first occurrence in English of ontology as recorded by the OED (Oxford English Dictionary, online edition, 2008) came in Archeologia Philosophica Nova or New Principles of Philosophy by Gideon Harvey.
== Formal Ontology ==
Since the mid-1970s, researchers in the field of artificial intelligence (AI) have recognized that knowledge engineering is the key to building large and powerful AI systems. AI researchers argued that they could create new ontologies as computational models that enable certain kinds of automated reasoning, which was only marginally successful. In the 1980s, the AI community began to use the term ontology to refer to both a theory of a modeled world and a component of knowledge-based systems. In particular, David Powers introduced the word ontology to AI to refer to real-world or robotic grounding, publishing in 1990 literature reviews emphasizing grounded ontology in association with the call for papers for an AAAI Summer Symposium, Machine Learning of Natural Language and Ontology, with an expanded version published in SIGART Bulletin and included as a preface to the proceedings. Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy.
In 1993, the widely cited web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" by Tom Gruber used ontology as a technical term in computer science closely related to the earlier ideas of semantic networks and taxonomies. Gruber introduced the term as a specification of a conceptualization: An ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as a set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy.
Attempting to distance ontologies from taxonomies and similar efforts in knowledge modeling that rely on classes and inheritance, Gruber stated (1993): Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited to conservative definitions, that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world (Enderton, 1972). To specify a conceptualization, one needs to state axioms that do constrain the possible interpretations for the defined terms.
As a refinement of Gruber's definition, Feilmayr and Wöß (2016) stated: "An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity."
== Formal Ontology Components ==
Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. Most ontologies describe individuals (instances), classes (concepts), attributes and relations.
=== Types ===
==== Domain ontology ====
A domain ontology (or domain-specific ontology) represents concepts which belong to a realm of the world, such as biology or politics. Each domain ontology typically models domain-specific definitions of terms. For example, the word card has many different meanings. An ontology about the domain of poker would model the "playing card" meaning of the word, while an ontology about the domain of computer hardware would model the "punched card" and "video card" meanings.
Since domain ontologies are written by different people, they represent concepts in very specific and unique ways, and are often incompatible within the same project. As systems that rely on domain ontologies expand, they often need to merge domain ontologies by hand-tuning each entity or using a combination of software merging and hand-tuning. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.).
At present, merging ontologies that are not developed from a common upper ontology is a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same upper ontology to provide a set of basic elements with which to specify the meanings of the domain ontology entities can be merged with less effort. There are studies on generalized techniques for merging ontologies, but this area of research is still ongoing, and only recently has the issue been sidestepped by having multiple domain ontologies use the same upper ontology, as in the OBO Foundry.
=== Upper ontology ===
An upper ontology (or foundation ontology) is a model of the commonly shared relations and objects that are generally applicable across a wide range of domain ontologies. It usually employs a core glossary that overarches the terms and associated object descriptions as they are used in various relevant domain ontologies.
Standardized upper ontologies available for use include BFO, BORO method, Dublin Core, GFO, Cyc, SUMO, UMBEL, and DOLCE. WordNet has been considered an upper ontology by some and has been used as a linguistic tool for learning domain ontologies.
=== Hybrid ontology ===
The Gellish ontology is an example of a combination of an upper and a domain ontology.
== Visualization ==
A survey of ontology visualization methods is presented by Katifori et al. An updated survey of ontology visualization methods and tools was published by Dudás et al. The most established ontology visualization methods, namely indented tree and graph visualization, are evaluated by Fu et al. A visual language for ontologies represented in OWL is specified by the Visual Notation for OWL Ontologies (VOWL).
== Engineering ==
Ontology engineering (also called ontology building) is a set of tasks related to the development of ontologies for a particular domain. It is a subfield of knowledge engineering that studies the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tools and languages that support them.
Ontology engineering aims to make explicit the knowledge contained in software applications, and organizational procedures for a particular domain. Ontology engineering offers a direction for overcoming semantic obstacles, such as those related to the definitions of business terms and software classes. Known challenges with ontology engineering include:
Ensuring the ontology is current with domain knowledge and term use
Providing sufficient specificity and concept coverage for the domain of interest, thus minimizing the content completeness problem
Ensuring the ontology can support its use cases
=== Editors ===
Ontology editors are applications designed to assist in the creation or manipulation of ontologies. It is common for ontology editors to use one or more ontology languages.
Aspects of ontology editors include: visual navigation possibilities within the knowledge model, inference engines and information extraction; support for modules; the import and export of foreign knowledge representation languages for ontology matching; and the support of meta-ontologies such as OWL-S, Dublin Core, etc.
=== Learning ===
Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Information extraction and text mining have been explored to automatically link ontologies to documents, for example in the context of the BioCreative challenges.
=== Research ===
Epistemological assumptions, which in research ask "What do you know?" or "How do you know it?", create the foundation researchers use when approaching a certain topic or area for potential research. As epistemology is directly linked to knowledge and how we come to accept certain truths, individuals conducting academic research must understand what allows them to begin theory building. Simply put, epistemological assumptions force researchers to question how they arrive at the knowledge they have.
== Languages ==
An ontology language is a formal language used to encode an ontology. There are a number of such languages for ontologies, both proprietary and standards-based:
Common Algebraic Specification Language is a general logic-based specification language developed within the IFIP working group 1.3 "Foundations of System Specifications" and is a de facto standard language for software specifications. It is now being applied to ontology specifications in order to provide modularity and structuring mechanisms.
Common logic is ISO standard 24707, a specification of a family of ontology languages that can be accurately translated into each other.
The Cyc project has its own ontology language called CycL, based on first-order predicate calculus with some higher-order extensions.
DOGMA (Developing Ontology-Grounded Methods and Applications) adopts the fact-oriented modeling approach to provide a higher level of semantic stability.
The Gellish language includes rules for its own extension and thus integrates an ontology with an ontology language.
IDEF5 is a software engineering method to develop and maintain usable, accurate, domain ontologies.
KIF is a syntax for first-order logic that is based on S-expressions. SUO-KIF is a derivative version supporting the Suggested Upper Merged Ontology.
MOF and UML are standards of the OMG
Olog is a category theoretic approach to ontologies, emphasizing translations between ontologies using functors.
OBO, a language used for biological and biomedical ontologies.
OntoUML is an ontologically well-founded profile of UML for conceptual modeling of domain ontologies.
OWL is a language for making ontological statements, developed as a follow-on from RDF and RDFS, as well as earlier ontology language projects including OIL, DAML, and DAML+OIL. OWL is intended to be used over the World Wide Web, and all its elements (classes, properties and individuals) are defined as RDF resources, and identified by URIs.
Rule Interchange Format (RIF) and F-Logic combine ontologies and rules.
Semantic Application Design Language (SADL) captures a subset of the expressiveness of OWL, using an English-like language entered via an Eclipse Plug-in.
SBVR (Semantics of Business Vocabularies and Rules) is an OMG standard adopted in industry to build ontologies.
TOVE Project, TOronto Virtual Enterprise project
== Published examples ==
Arabic Ontology, a linguistic ontology for Arabic, which can be used as an Arabic Wordnet but with ontologically-clean content.
AURUM – Information Security Ontology, an ontology for information security knowledge sharing, enabling users to collaboratively understand and extend the domain knowledge body. It may serve as a basis for automated information security risk and compliance management.
BabelNet, a very large multilingual semantic network and ontology, lexicalized in many languages
Basic Formal Ontology, a formal upper ontology designed to support scientific research
BioPAX, an ontology for the exchange and interoperability of biological pathway (cellular processes) data
BMO, an e-Business Model Ontology based on a review of enterprise ontologies and business model literature
SSBMO, a Strongly Sustainable Business Model Ontology based on a review of the systems based natural and social science literature (including business). Includes critique of and significant extensions to the Business Model Ontology (BMO).
CCO and GexKB, Application Ontologies (APO) that integrate diverse types of knowledge with the Cell Cycle Ontology (CCO) and the Gene Expression Knowledge Base (GexKB)
CContology (Customer Complaint Ontology), an e-business ontology to support online customer complaint management
CIDOC Conceptual Reference Model, an ontology for cultural heritage
COSMO, a Foundation Ontology (current version in OWL) that is designed to contain representations of all of the primitive concepts needed to logically specify the meanings of any domain entity. It is intended to serve as a basic ontology that can be used to translate among the representations in other ontologies or databases. It started as a merger of the basic elements of the OpenCyc and SUMO ontologies, and has been supplemented with other ontology elements (types, relations) so as to include representations of all of the words in the Longman dictionary defining vocabulary.
Computer Science Ontology, an automatically generated ontology of research topics in the field of computer science
Cyc, a large Foundation Ontology for formal representation of the universe of discourse
Disease Ontology, designed to facilitate the mapping of diseases and associated conditions to particular medical codes
DOLCE, a Descriptive Ontology for Linguistic and Cognitive Engineering
Drammar, ontology of drama
Dublin Core, a simple ontology for documents and publishing
Financial Industry Business Ontology (FIBO), a business conceptual ontology for the financial industry
Foundational, Core and Linguistic Ontologies
Foundational Model of Anatomy, an ontology for human anatomy
Friend of a Friend, an ontology for describing persons, their activities and their relations to other people and objects
Gene Ontology for genomics
Gellish English dictionary, an ontology that includes a dictionary and taxonomy, comprising an upper ontology and a lower ontology that focuses on industrial and business applications in engineering, technology and procurement.
Geopolitical ontology, an ontology describing geopolitical information, created by the Food and Agriculture Organization (FAO). The geopolitical ontology includes names in multiple languages (English, French, Spanish, Arabic, Chinese, Russian and Italian); maps standard coding systems (UN, ISO, FAOSTAT, AGROVOC, etc.); provides relations among territories (land borders, group membership, etc.); and tracks historical changes. In addition, FAO provides web services for the geopolitical ontology and a module maker to download modules of the geopolitical ontology in different formats (RDF, XML, and EXCEL). See more information at FAO Country Profiles.
GAO (General Automotive Ontology) – an ontology for the automotive industry that includes 'car' extensions
GOLD, General Ontology for Linguistic Description
GUM (Generalized Upper Model), a linguistically motivated ontology for mediating between client systems and natural language technology
IDEAS Group, a formal ontology for enterprise architecture being developed by the Australian, Canadian, UK and U.S. Defence Depts.
Linkbase, a formal representation of the biomedical domain, founded upon Basic Formal Ontology.
LPL, Landmark Pattern Language
NCBO Bioportal, biological and biomedical ontologies and associated tools to search, browse and visualise
NIFSTD Ontologies from the Neuroscience Information Framework: a modular set of ontologies for the neuroscience domain.
OBO-Edit, an ontology browser for most of the Open Biological and Biomedical Ontologies
OBO Foundry, a suite of interoperable reference ontologies in biology and biomedicine
OMNIBUS Ontology, an ontology of learning, instruction, and instructional design
Ontology for Biomedical Investigations, an open-access, integrated ontology of biological and clinical investigations
ONSTR, Ontology for Newborn Screening Follow-up and Translational Research, Newborn Screening Follow-up Data Integration Collaborative, Emory University, Atlanta.
Plant Ontology for plant structures and growth/development stages, etc.
POPE, Purdue Ontology for Pharmaceutical Engineering
PRO, the Protein Ontology of the Protein Information Resource, Georgetown University
ProbOnto, knowledge base and ontology of probability distributions.
Program abstraction taxonomy
Protein Ontology for proteomics
RXNO Ontology, for name reactions in chemistry
SCDO, the Sickle Cell Disease Ontology, facilitates data sharing and collaborations within the SCD community, amongst other applications (see list on the SCDO website).
Schema.org, for embedding structured data into web pages, primarily for the benefit of search engines
Sequence Ontology, for representing genomic feature types found on biological sequences
SNOMED CT (Systematized Nomenclature of Medicine – Clinical Terms)
Suggested Upper Merged Ontology, a formal upper ontology
Systems Biology Ontology (SBO), for computational models in biology
SWEET, Semantic Web for Earth and Environmental Terminology
SSN/SOSA: the Semantic Sensor Network Ontology (SSN) and the Sensor, Observation, Sample, and Actuator Ontology (SOSA) are W3C Recommendations and OGC Standards for describing sensors and their observations.
ThoughtTreasure ontology
TIME-ITEM, Topics for Indexing Medical Education
Uberon, representing animal anatomical structures
UMBEL, a lightweight reference structure of 20,000 subject concept classes and their relationships derived from OpenCyc
WordNet, a lexical reference system
YAMATO, Yet Another More Advanced Top-level Ontology
YSO – General Finnish Ontology
The W3C Linking Open Data community project coordinates attempts to converge different ontologies into a worldwide Semantic Web.
== Libraries ==
The development of ontologies has led to the emergence of services, known as ontology libraries, that provide lists or directories of ontologies.
The following are libraries of human-selected ontologies.
COLORE is an open repository of first-order ontologies in Common Logic with formal links between ontologies in the repository.
DAML Ontology Library maintains a legacy of ontologies in DAML.
Ontology Design Patterns portal is a wiki repository of reusable components and practices for ontology design, and also maintains a list of exemplary ontologies.
Protégé Ontology Library contains a set of OWL, Frame-based and other format ontologies.
SchemaWeb is a directory of RDF schemata expressed in RDFS, OWL and DAML+OIL.
The following are both directories and search engines.
OBO Foundry is a suite of interoperable reference ontologies in biology and biomedicine.
Bioportal (ontology repository of NCBO)
Linked Open Vocabularies
OntoSelect Ontology Library offers similar services for RDF/S, DAML and OWL ontologies.
Ontaria is a "searchable and browsable directory of semantic web data" with a focus on RDF vocabularies with OWL ontologies. (NB Project "on hold" since 2004).
Swoogle is a directory and search engine for all RDF resources available on the Web, including ontologies.
Open Ontology Repository initiative
ROMULUS is a foundational ontology repository aimed at improving semantic interoperability. Currently there are three foundational ontologies in the repository: DOLCE, BFO and GFO.
== Examples of applications ==
In general, ontologies can be used beneficially in several fields.
Enterprise applications. A concrete example is SAPPHIRE (Situational Awareness and Preparedness for Public Health Incidences using Reasoning Engines), a semantics-based health information system capable of tracking and evaluating situations and occurrences that may affect public health.
Geographic information systems bring together data from different sources and therefore benefit from ontological metadata, which helps to connect the semantics of the data.
Domain-specific ontologies are extremely important in biomedical research, which requires named entity disambiguation of various biomedical terms and abbreviations that have the same string of characters but represent different biomedical concepts. For example, CSF can represent Colony Stimulating Factor or Cerebral Spinal Fluid, both of which are represented by the same term, CSF, in biomedical literature. This is why a large number of public ontologies are related to the life sciences. Life science data science tools that fail to implement these types of biomedical ontologies will not be able to accurately determine causal relationships between concepts.
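The disambiguation task described above can be sketched in a few lines of Python. This is a purely illustrative toy, not a real biomedical tool: the mini-ontology, the term sets, and the function names are all invented for the example. Real systems use far richer ontologies and statistical or machine-learned context models.

```python
# Hypothetical mini-ontology: each candidate concept for the ambiguous
# abbreviation "CSF" is mapped to context terms an ontology associates with it.
ONTOLOGY = {
    "colony stimulating factor": {"cytokine", "hematopoiesis", "granulocyte"},
    "cerebrospinal fluid": {"lumbar", "puncture", "meningitis", "ventricle"},
}

def disambiguate(abbreviation, context_words):
    """Resolve an abbreviation to the concept whose terms best overlap the context."""
    scores = {
        concept: len(terms & set(context_words))
        for concept, terms in ONTOLOGY.items()
    }
    return max(scores, key=scores.get)

sentence = "CSF obtained by lumbar puncture was tested for meningitis"
print(disambiguate("CSF", sentence.lower().split()))
# With this toy data, the context overlap favors "cerebrospinal fluid".
```

The same string "CSF" would resolve to "colony stimulating factor" in a sentence about cytokines, which is exactly the named-entity disambiguation problem the ontologies address.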
== See also ==
Related philosophical concepts
Alphabet of human thought
Characteristica universalis
Interoperability
Level of measurement
Metalanguage
Natural semantic metalanguage
== References ==
== Further reading ==
Oberle, D.; Guarino, N.; Staab, S. (2009). "What is an Ontology?" (PDF). Handbook on Ontologies. pp. 1–17. doi:10.1007/978-3-540-92673-3_0. ISBN 978-3-540-70999-2. S2CID 8522608.
Fensel, D.; van Harmelen, F.; Horrocks, I.; McGuinness, D.L.; Patel-Schneider, P.F. (2001). "OIL: an ontology infrastructure for the Semantic Web". IEEE Intelligent Systems. 16 (2): 38–45. doi:10.1109/5254.920598.
Gangemi, A.; Presutti, V. "Ontology Design Patterns" (PDF). Staab & Studer 2009.
Golemati, M.; Katifori, A.; Vassilakis, C.; Lepouras, G.; Halatsis, C. (2007). "Creating an Ontology for the User Profile: Method and Applications" (PDF). Proceedings of the First IEEE International Conference on Research Challenges in Information Science (RCIS), Morocco 2007. CiteSeerX 10.1.1.74.9399. Archived from the original (PDF) on 2008-12-17.
Mizoguchi, R. (2004). "Tutorial on ontological engineering: Part 3: Advanced course of ontological engineering" (PDF). New Gener Comput. 22: 193–220. doi:10.1007/BF03040960. S2CID 23747079. Archived from the original (PDF) on 2013-03-09. Retrieved 2009-06-08.
Gruber, T. R. (1993). "A translation approach to portable ontology specifications" (PDF). Knowledge Acquisition. 5 (2): 199–220. CiteSeerX 10.1.1.101.7493. doi:10.1006/knac.1993.1008. S2CID 15709015.
Maedche, A.; Staab, S. (2001). "Ontology learning for the Semantic Web". IEEE Intelligent Systems. 16 (2): 72–79. doi:10.1109/5254.920602. S2CID 1411149.
Noy, Natalya F.; McGuinness, Deborah L. (March 2001). "Ontology Development 101: A Guide to Creating Your First Ontology". Stanford Knowledge Systems Laboratory Technical Report KSL-01-05, Stanford Medical Informatics Technical Report SMI-2001-0880. Archived from the original on 2010-07-14.
Chaminda Abeysiriwardana, Prabath; Kodituwakku, Saluka R (2012). "Ontology Based Information Extraction for Disease Intelligence". International Journal of Research in Computer Science. 2 (6): 7–19. arXiv:1211.3497. Bibcode:2012arXiv1211.3497C. doi:10.7815/ijorcs.26.2012.051 (inactive 8 December 2024). S2CID 11297019.{{cite journal}}: CS1 maint: DOI inactive as of December 2024 (link)
Razmerita, L.; Angehrn, A.; Maedche, A. (2003). "Ontology-Based User Modeling for Knowledge Management Systems". User Modeling 2003. Lecture Notes in Computer Science. Vol. 2702. Springer. pp. 213–7. CiteSeerX 10.1.1.102.4591. doi:10.1007/3-540-44963-9_29. ISBN 3-540-44963-9.
Soylu, A.; De Causmaecker, Patrick (2009). "Merging model driven and ontology driven system development approaches pervasive computing perspective". Proceedings of the 24th International Symposium on Computer and Information Sciences. pp. 730–5. doi:10.1109/ISCIS.2009.5291915. ISBN 978-1-4244-5021-3. S2CID 2267593.
Smith, B. (2008). "Ontology (Science)". In Eschenbach, C.; Gruninger, M. (eds.). Formal Ontology in Information Systems, Proceedings of FOIS 2008. ISO Press. pp. 21–35. CiteSeerX 10.1.1.681.2599.
Staab, S.; Studer, R., eds. (2009). "What is an Ontology?". Handbook on Ontologies (2nd ed.). Springer. pp. 1–17. doi:10.1007/978-3-540-92673-3_0. ISBN 978-3-540-92673-3. S2CID 8522608.
Uschold, Mike; Gruninger, M. (1996). "Ontologies: Principles, Methods and Applications". Knowledge Engineering Review. 11 (2): 93–136. CiteSeerX 10.1.1.111.5903. doi:10.1017/S0269888900007797. S2CID 2618234.
Pidcock, W. "What are the differences between a vocabulary, a taxonomy, a thesaurus, an ontology, and a meta-model?". Archived from the original on 2009-10-14.
Yudelson, M.; Gavrilova, T.; Brusilovsky, P. (2005). "Towards User Modeling Meta-ontology". User Modeling 2005. Lecture Notes in Computer Science. Vol. 3538. Springer. pp. 448–452. CiteSeerX 10.1.1.86.7079. doi:10.1007/11527886_62. ISBN 978-3-540-31878-1.
Movshovitz-Attias, Dana; Cohen, William W. (2012). "Bootstrapping Biomedical Ontologies for Scientific Text using NELL" (PDF). Proceedings of the 2012 Workshop on Biomedical Natural Language Processing. Association for Computational Linguistics. pp. 11–19. CiteSeerX 10.1.1.376.2874.
== External links ==
Knowledge Representation at Open Directory Project
Library of ontologies (Archive, Unmaintained)
GoPubMed using Ontologies for searching
ONTOLOG (a.k.a. "Ontolog Forum") - an Open, International, Virtual Community of Practice on Ontology, Ontological Engineering and Semantic Technology
Use of Ontologies in Natural Language Processing
Ontology Summit - an annual series of events (first started in 2006) that involves the ontology community and communities related to each year's theme chosen for the summit.
Standardization of Ontologies
In theoretical computer science, an algorithm is correct with respect to a specification if it behaves as specified. Best explored is functional correctness, which refers to the input–output behavior of the algorithm: for each input it produces an output satisfying the specification.
Within functional correctness, partial correctness, which requires that any answer returned is correct, is distinguished from total correctness, which additionally requires that an answer is eventually returned, i.e. that the algorithm terminates. Correspondingly, to prove a program's total correctness, it is sufficient to prove its partial correctness and its termination. The latter kind of proof (a termination proof) can never be fully automated, since the halting problem is undecidable.
For example, when successively searching through the integers 1, 2, 3, … to see if we can find an example of some phenomenon, say an odd perfect number, it is quite easy to write a partially correct program. But to say this program is totally correct would be to assert something currently not known in number theory.
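The search described above can be sketched as a short, partially correct Python program. The helper function and its naive divisor sum are ours, for illustration only.

```python
# A partially correct search for an odd perfect number. If first_odd_perfect()
# ever returns, its answer is correct by construction (partial correctness);
# whether it terminates at all is an open problem in number theory, so total
# correctness cannot currently be asserted.

def divisor_sum(n):
    """Sum of the proper divisors of n (naive method, for illustration)."""
    return sum(d for d in range(1, n) if n % d == 0)

def first_odd_perfect():
    n = 3
    while True:                    # no known bound: termination is unproven
        if divisor_sum(n) == n:    # n equals the sum of its proper divisors
            return n
        n += 2                     # examine odd numbers only
```

As a sanity check, divisor_sum(6) == 6 and divisor_sum(28) == 28, so the even numbers 6 and 28 are perfect; no odd perfect number has ever been found.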
A proof would have to be a mathematical proof, assuming both the algorithm and specification are given formally. In particular it is not expected to be a correctness assertion for a given program implementing the algorithm on a given machine. That would involve such considerations as limitations on computer memory.
A deep result in proof theory, the Curry–Howard correspondence, states that a proof of functional correctness in constructive logic corresponds to a certain program in the lambda calculus. Converting a proof in this way is called program extraction.
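As a small illustration of the correspondence, a constructive proof of A ∧ B → B ∧ A extracts to a program that swaps the components of a pair. The sketch below uses untyped Python rather than the lambda calculus, with tuples standing in for proofs of conjunctions.

```python
# Curry-Howard in miniature: a constructive proof of "A and B implies B and A"
# corresponds to a program turning evidence for the conjunction (a pair)
# into evidence for the swapped conjunction.

def swap(pair):
    a, b = pair       # conjunction elimination: take the proof apart
    return (b, a)     # conjunction introduction, in the other order

print(swap((1, "x")))  # prints ('x', 1)
```

In a typed setting the type of swap, (A, B) -> (B, A), is literally the proposition being proved; program extraction mechanizes this reading for full correctness proofs.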
Hoare logic is a specific formal system for reasoning rigorously about the correctness of computer programs. It uses axiomatic techniques to define programming language semantics and argue about the correctness of programs through assertions known as Hoare triples.
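For example, the Hoare triple {x ≥ 0} y := x + 1 {y > 0} asserts that whenever the precondition holds before the command runs, the postcondition holds afterwards. The sketch below encodes this with runtime assertions standing in for the logical conditions; unlike Hoare logic, which proves the triple for all runs, assertions only check the runs that actually occur.

```python
# Illustrative encoding of the Hoare triple  {x >= 0}  y := x + 1  {y > 0}.
# Runtime assertions stand in for the logical pre- and postcondition;
# this is a sketch, not a verification tool.

def increment(x):
    assert x >= 0    # precondition P
    y = x + 1        # command C
    assert y > 0     # postcondition Q
    return y

print(increment(41))  # prints 42
```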
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Although crucial to software quality and widely deployed by programmers and testers, software testing still remains an art, due to limited understanding of the principles of software. The difficulty in software testing stems from the complexity of software: we cannot completely test a program of even moderate complexity. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can be used as a generic metric as well. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality.
== See also ==
Formal verification
Design by contract
Program analysis
Model checking
Compiler correctness
Program derivation
== Notes ==
== References ==
"Human Language Technology. Challenges for Computer Science and Linguistics." Google Books. N.p., n.d. Web. 10 April 2017.
"Security in Computing and Communications." Google Books. N.p., n.d. Web. 10 April 2017.
"The Halting Problem of Alan Turing - A Most Merry and Illustrated Explanation." The Halting Problem of Alan Turing - A Most Merry and Illustrated Explanation. N.p., n.d. Web. 10 April 2017.
Turner, Raymond, and Nicola Angius. "The Philosophy of Computer Science." Stanford Encyclopedia of Philosophy. Stanford University, 20 August 2013. Web. 10 April 2017.
Dijkstra, E. W. "Program Correctness". U of Texas at Austin, Departments of Mathematics and Computer Sciences, Automatic Theorem Proving Project, 1970. Web.
The waterfall model is a breakdown of development activities into linear sequential phases, meaning that each phase is passed down to the next: each phase depends on the deliverables of the previous one and corresponds to a specialization of tasks.
This approach is typical for certain areas of engineering design. In software development,
it tends to be among the less iterative and flexible approaches, as progress flows in largely one direction (downwards like a waterfall) through the phases of conception, initiation, analysis, design, construction, testing, deployment, and maintenance.
The waterfall model is the earliest systems development life cycle (SDLC) approach used in software development.
When it was first adopted, there were no recognized alternatives for knowledge-based creative work.
== History ==
The first known presentation describing the use of such phases in software engineering was held by Herbert D. Benington at the Symposium on Advanced Programming Methods for Digital Computers on 29 June 1956.
This presentation was about the development of software for SAGE. In 1983, Benington republished his paper with a foreword explaining that the phases were deliberately organized according to the specialization of tasks, and pointing out that the process was not in fact performed in a strict top-down fashion, but depended on a prototype.
Although the term "waterfall" is not used in the paper, the first formal detailed diagram of the process later known as the "waterfall model" is often cited as coming from a 1970 article by Winston W. Royce. However, he commented that it had major flaws stemming from how testing only happened at the end of the process, which he described as being "risky and [inviting] failure". The rest of his paper introduced five steps which he felt were necessary to "eliminate most of the development risks" associated with the unaltered waterfall approach.
Royce's five additional steps (which included writing complete documentation at various stages of development) never took mainstream hold, but his diagram of what he considered a flawed process became the starting point when describing a "waterfall" approach.
The earliest use of the term "waterfall" may have been in a 1976 paper by Bell and Thayer.
In 1985, the United States Department of Defense adopted the waterfall model in the DOD-STD-2167 standard for working with software development contractors. For the iterations of a software development, this standard referred to "the sequential phases of a software development cycle" and stated that "the contractor shall implement a software development cycle that includes the following six phases: Software Requirement Analysis, Preliminary Design, Detailed Design, Coding and Unit Testing, Integration, and Testing".
== Model ==
Although Royce never recommended nor described a waterfall model, he criticized rigid adherence to the following phases:
System and software requirements: captured in a product requirements document
Analysis: resulting in models, schema, and business rules
Design: resulting in the software architecture
Coding: the development, proving, and integration of software
Testing: the systematic discovery and debugging of defects
Operations: the installation, migration, support, and maintenance of complete systems
Thus, the waterfall model maintains that one should move to a phase only when its preceding phase is reviewed and verified.
Various modified waterfall models (including Royce's final model), however, can include slight or major variations on this process. These variations include returning to the previous cycle after flaws are found downstream, or returning to the design phase if downstream phases are deemed insufficient.
== Supporting arguments ==
Time spent early in the software production cycle can reduce costs at later stages. For example, a problem found in the early stages (such as requirements specification) is cheaper to fix than the same bug found later on in the process (by a factor of 50 to 200).
In common practice, waterfall methodologies result in a project schedule with 20–40% of the time invested for the first two phases, 30–40% of the time to coding, and the rest dedicated to testing and implementation. With the project organization needing to be highly structured, most medium and large projects will include a detailed set of procedures and controls, which regulate every process on the project.
A further argument supporting the waterfall model is that it places emphasis on documentation (such as requirements documents and design documents) as well as source code. In less thoroughly designed and documented methodologies, knowledge is lost if team members leave before the project is completed, and it may be difficult for a project to recover from the loss. If a fully working design document is present (as is the intent of big design up front and the waterfall model), new team members and new teams should be able to familiarise themselves with the project by reading the documents.
The waterfall model provides a structured approach; the model itself progresses linearly through discrete, easily understandable and explainable phases and thus is easy to understand. It also provides easily identifiable milestones in the development process, often being used as a beginning example of a development model in many software engineering texts and courses.
Similarly, simulation can play a valuable role within the waterfall model. By creating computerized or mathematical simulations of the system being developed, teams can gain insights into how the system will perform before proceeding to the next phase. Simulations allow for testing and refining the design, identifying potential issues or bottlenecks, and making informed decisions about the system's functionality and performance.
== Criticism ==
Clients may not know the exact requirements before they see working software and thus change their requirements further on, leading to redesign, redevelopment, and retesting, and increased costs.
Designers may not be aware of future difficulties when designing a new software product or feature, in which case revising the design initially can increase efficiency in comparison to a design not built to account for newly discovered constraints, requirements, or problems.
Organisations may attempt to deal with a lack of concrete requirements from clients by employing systems analysts to examine existing manual systems and analyse what they do and how they might be replaced. However, in practice, it is difficult to sustain a strict separation between systems analysis and programming, as implementing any non-trivial system will often expose issues and edge cases that the systems analyst did not consider.
Some organisations, such as the United States Department of Defense, now have a stated preference against waterfall-type methodologies, starting with MIL-STD-498 released in 1994, which encourages evolutionary acquisition and iterative and incremental development.
== Modified waterfall models ==
In response to the perceived problems with the "pure" waterfall model, many 'modified waterfall models' have been introduced. These models may address some or all of the criticisms of the "pure" waterfall model.
These include the rapid development models that Steve McConnell calls "modified waterfalls": Peter DeGrace's "sashimi model" (waterfall with overlapping phases), waterfall with subprojects, and waterfall with risk reduction. Other software development model combinations such as "incremental waterfall model" also exist.
== Royce's final model ==
Winston W. Royce's final model, his intended improvement upon his initial "waterfall model", illustrated that feedback could (should, and often would) lead from code testing to design (as testing of code uncovered flaws in the design) and from design back to requirements specification (as design problems may necessitate the removal of conflicting or otherwise unsatisfiable/undesignable requirements). In the same paper Royce also advocated large quantities of documentation, doing the job "twice if possible" (a sentiment similar to that of Fred Brooks, famous for writing the Mythical Man Month — an influential book in software project management — who advocated planning to "throw one away"), and involving the customer as much as possible (a sentiment similar to that of extreme programming).
Royce's notes on the final model are the following:
Complete program design before analysis and coding begins
Documentation must be current and complete
Do the job twice if possible
Testing must be planned, controlled, and monitored
Involve the customer
== See also ==
== References ==
== External links ==
Understanding the pros and cons of the Waterfall Model of software development
Project lifecycle models: how they differ and when to use them
Going Over the Waterfall with the RUP by Philippe Kruchten
CSC and IBM Rational join to deliver C-RUP and support rapid business change
c2:WaterFall
Rapid application development (RAD), also called rapid application building (RAB), is both a general term for adaptive software development approaches, and the name for James Martin's method of rapid development. In general, RAD approaches to software development put less emphasis on planning and more emphasis on an adaptive process. Prototypes are often used in addition to or sometimes even instead of design specifications.
RAD is especially well suited for (although not limited to) developing software that is driven by user interface requirements. Graphical user interface builders are often called rapid application development tools. Other approaches to rapid development include the adaptive, agile, spiral, and unified models.
== History ==
Rapid application development was a response to plan-driven waterfall processes, developed in the 1970s and 1980s, such as the Structured Systems Analysis and Design Method (SSADM). One of the problems with these methods is that they were based on a traditional engineering model used to design and build things like bridges and buildings. Software is an inherently different kind of artifact. Software can radically change the entire process used to solve a problem. As a result, knowledge gained from the development process itself can feed back to the requirements and design of the solution. Plan-driven approaches attempt to rigidly define the requirements, the solution, and the plan to implement it, and have a process that discourages changes. RAD approaches, on the other hand, recognize that software development is a knowledge intensive process and provide flexible processes that help take advantage of knowledge gained during the project to improve or adapt the solution.
The first such RAD alternative was developed by Barry Boehm and was known as the spiral model. Boehm and other subsequent RAD approaches emphasized developing prototypes as well as or instead of rigorous design specifications. Prototypes had several advantages over traditional specifications:
Risk reduction. A prototype could test some of the most difficult potential parts of the system early on in the life-cycle. This can provide valuable information as to the feasibility of a design and can prevent the team from pursuing solutions that turn out to be too complex or time-consuming to implement. This benefit of finding problems earlier in the life-cycle rather than later was a key benefit of the RAD approach. The earlier a problem can be found the cheaper it is to address.
Users are better at using and reacting than at creating specifications. In the waterfall model it was common for a user to sign off on a set of requirements but then when presented with an implemented system to suddenly realize that a given design lacked some critical features or was too complex. In general most users give much more useful feedback when they can experience a prototype of the running system rather than abstractly define what that system should be.
Prototypes can be usable and can evolve into the completed product. One approach used in some RAD methods was to build the system as a series of prototypes that evolve from minimal functionality to moderately useful to the final completed system. The advantage of this besides the two advantages above was that the users could get useful business functionality much earlier in the process.
Starting with the ideas of Barry Boehm and others, James Martin developed the rapid application development approach during the 1980s at IBM and finally formalized it by publishing a book in 1991, Rapid Application Development. This has resulted in some confusion over the term RAD even among IT professionals. It is important to distinguish between RAD as a general alternative to the waterfall model and RAD as the specific method created by Martin. The Martin method was tailored toward knowledge intensive and UI intensive business systems.
These ideas were further developed and improved upon by RAD pioneers like James Kerr and Richard Hunter, who together wrote the seminal book on the subject, Inside RAD, which followed the journey of a RAD project manager as he drove and refined the RAD Methodology in real-time on an actual RAD project. These practitioners, and those like them, helped RAD gain popularity as an alternative to traditional systems project life cycle approaches.
The RAD approach also matured during the period of peak interest in business re-engineering. The idea of business process re-engineering was to radically rethink core business processes such as sales and customer support with the new capabilities of Information Technology in mind. RAD was often an essential part of larger business re-engineering programs. The rapid prototyping approach of RAD was a key tool to help users and analysts "think out of the box" about innovative ways that technology might radically reinvent a core business process.
Much of James Martin's comfort with RAD stemmed from DuPont's Information Engineering division, its leader Scott Schultz, and their respective relationships with John Underwood, who headed a bespoke RAD development company that pioneered many successful RAD projects in Australia and Hong Kong.
Successful projects included ANZ Bank, Lend Lease, BHP, Coca-Cola Amatil, Alcan, the Hong Kong Jockey Club and numerous others.
This success led both Scott Schultz and James Martin to spend time in Australia with John Underwood to understand the methods behind, and reasons for, Australia's disproportionate success in implementing significant mission-critical RAD projects.
== James Martin approach ==
The James Martin approach to RAD divides the process into four distinct phases:
Requirements planning phase – combines elements of the system planning and systems analysis phases of the systems development life cycle (SDLC). Users, managers, and IT staff members discuss and agree on business needs, project scope, constraints, and system requirements. It ends when the team agrees on the key issues and obtains management authorization to continue.
User design phase – during this phase, users interact with systems analysts and develop models and prototypes that represent all system processes, inputs, and outputs. The RAD groups or subgroups typically use a combination of joint application design (JAD) techniques and CASE tools to translate user needs into working models. User design is a continuous interactive process that allows users to understand, modify, and eventually approve a working model of the system that meets their needs.
Construction phase – focuses on program and application development tasks similar to those in the SDLC. In RAD, however, users continue to participate and can still suggest changes or improvements as actual screens or reports are developed. Its tasks are programming and application development, coding, and unit, integration, and system testing.
Cutover phase – resembles the final tasks in the SDLC implementation phase, including data conversion, testing, changeover to the new system, and user training. Compared with traditional methods, the entire process is compressed. As a result, the new system is built, delivered, and placed in operation much sooner.
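The four phases above form a strict sequence, each ending when an exit condition is met. As a minimal illustration (phase names are Martin's; the exit-criteria wording is a paraphrase and the helper function is hypothetical), the workflow can be sketched as:

```python
# The four James Martin RAD phases as an ordered workflow.
# Exit criteria below paraphrase the text; they are illustrative only.
RAD_PHASES = [
    ("Requirements planning", "management authorization obtained"),
    ("User design", "users approve a working model"),
    ("Construction", "unit, integration, and system tests pass"),
    ("Cutover", "data converted, users trained, system in operation"),
]

def next_phase(completed):
    """Return the name of the next phase given how many are done,
    or None once all four phases are complete."""
    if completed < 0 or completed > len(RAD_PHASES):
        raise ValueError("invalid number of completed phases")
    if completed == len(RAD_PHASES):
        return None
    return RAD_PHASES[completed][0]
```

For example, after the user design phase is approved (`completed=2`), the workflow moves to construction.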
== Advantages ==
In modern Information Technology environments, many systems are now built using some degree of Rapid Application Development (not necessarily the James Martin approach). In addition to Martin's method, agile methods and the Rational Unified Process are often used for RAD development.
The purported advantages of RAD include:
Better quality. By having users interact with evolving prototypes, the business functionality delivered by a RAD project can often be much higher than that achieved via a waterfall model. The software can be more usable and has a better chance of focusing on business problems that are critical to end users rather than on technical problems of interest to developers. However, this excludes other categories of what are usually known as non-functional requirements (also called constraints or quality attributes), including security and portability.
Risk control. Although much of the literature on RAD focuses on speed and user involvement, a critical feature of RAD done correctly is risk mitigation. It is worth remembering that Boehm initially characterized the spiral model as a risk-based approach. A RAD approach can focus early on the key risk factors and adjust to them based on empirical evidence collected in the early part of the process, for example by prototyping the most complex parts of the system first.
More projects completed on time and within budget. By focusing on the development of incremental units, the chances of the catastrophic failures that have dogged large waterfall projects are reduced. In the waterfall model it was common to come to a realization, after six months or more of analysis and development, that required a radical rethinking of the entire system. With RAD this kind of information can be discovered and acted upon earlier in the process.
== Disadvantages ==
The purported disadvantages of RAD include:
The risk of a new approach. For most IT shops RAD was a new approach that required experienced professionals to rethink the way they worked. Humans are virtually always averse to change and any project undertaken with new tools or methods will be more likely to fail the first time simply due to the requirement for the team to learn.
Lack of emphasis on Non-functional requirements, which are often not visible to the end user in normal operation.
Requires time of scarce resources. One thing virtually all approaches to RAD have in common is that there is much more interaction throughout the entire life-cycle between users and developers. In the waterfall model, users would define requirements and then mostly go away as developers created the system. In RAD users are involved from the beginning and through virtually the entire project. This requires that the business is willing to invest the time of application domain experts. The paradox is that the better the expert, and the more familiar they are with their domain, the more they are needed to actually run the business, and it may be difficult to convince their supervisors to invest their time. Without such commitments RAD projects will not succeed.
Less control. One of the advantages of RAD is that it provides a flexible, adaptable process. The ideal is to be able to adapt quickly to both problems and opportunities. There is an inevitable trade-off between flexibility and control: more of one means less of the other. If a project (e.g., life-critical software) values control more than agility, RAD is not appropriate.
Poor design. The focus on prototypes can be taken too far in some cases resulting in a "hack and test" methodology where developers are constantly making minor changes to individual components and ignoring system architecture issues that could result in a better overall design. This can especially be an issue for methodologies such as Martin's that focus so heavily on the user interface of the system.
Lack of scalability. RAD typically focuses on small to medium-sized project teams. The other issues cited above (less design and control) present special challenges when using a RAD approach for very large scale systems.
== See also ==
Practical concepts to implement RAD:
Graphical user interface builders, which represent the main software tools for RAD
Fourth-generation programming language, e.g. FileMaker, 4th Dimension, dBase and Visual FoxPro
Other similar concepts:
Flow-based programming
Lean software development
Platform as a service
Low-code development platforms
No-code development platform
== References ==
== Further reading ==
Steve McConnell (1996). Rapid Development: Taming Wild Software Schedules, Microsoft Press Books, ISBN 978-1-55615-900-8
Kerr, James M.; Hunter, Richard (1993). Inside RAD: How to Build a Fully Functional System in 90 Days or Less. McGraw-Hill. ISBN 0-07-034223-7.
Ellen Gottesdiener (1995). "RAD Realities: Beyond the Hype to How RAD Really Works", Application Development Trends
Ken Schwaber (1996). Agile Project Management with Scrum, Microsoft Press Books, ISBN 978-0-7356-1993-7
Steve McConnell (2003). Professional Software Development: Shorter Schedules, Higher Quality Products, More Successful Projects, Enhanced Careers, Addison-Wesley, ISBN 978-0-321-19367-4
Dean Leffingwell (2007). Scaling Software Agility: Best Practices for Large Enterprises, Addison-Wesley Professional, ISBN 978-0-321-45819-3
Scott Stiner (2016). Forbes List: "Rapid Application Development (RAD): A Smart, Quick And Valuable Process For Software Developers"
Data science is an interdisciplinary academic field that uses statistics, scientific computing, scientific methods, processes, scientific visualization, algorithms and systems to extract or extrapolate knowledge from potentially noisy, structured, or unstructured data.
Data science also integrates domain knowledge from the underlying application domain (e.g., natural sciences, information technology, and medicine). Data science is multifaceted and can be described as a science, a research paradigm, a research method, a discipline, a workflow, and a profession.
Data science is "a concept to unify statistics, data analysis, informatics, and their related methods" to "understand and analyze actual phenomena" with data. It uses techniques and theories drawn from many fields within the context of mathematics, statistics, computer science, information science, and domain knowledge. However, data science is different from computer science and information science. Turing Award winner Jim Gray imagined data science as a "fourth paradigm" of science (empirical, theoretical, computational, and now data-driven) and asserted that "everything about science is changing because of the impact of information technology" and the data deluge.
A data scientist is a professional who creates programming code and combines it with statistical knowledge to summarize data.
== Foundations ==
Data science is an interdisciplinary field focused on extracting knowledge from typically large data sets and applying the knowledge from that data to solve problems in other application domains. The field encompasses preparing data for analysis, formulating data science problems, analyzing data, and summarizing these findings. As such, it incorporates skills from computer science, mathematics, data visualization, graphic design, communication, and business.
Vasant Dhar writes that statistics emphasizes quantitative data and description. In contrast, data science deals with quantitative and qualitative data (e.g., from images, text, sensors, transactions, customer information, etc.) and emphasizes prediction and action. Andrew Gelman of Columbia University has described statistics as a non-essential part of data science. Stanford professor David Donoho writes that data science is not distinguished from statistics by the size of datasets or use of computing and that many graduate programs misleadingly advertise their analytics and statistics training as the essence of a data-science program. He describes data science as an applied field growing out of traditional statistics.
== Etymology ==
=== Early usage ===
In 1962, John Tukey described a field he called "data analysis", which resembles modern data science. In 1985, in a lecture given to the Chinese Academy of Sciences in Beijing, C. F. Jeff Wu used the term "data science" for the first time as an alternative name for statistics. Later, attendees at a 1992 statistics symposium at the University of Montpellier II acknowledged the emergence of a new discipline focused on data of various origins and forms, combining established concepts and principles of statistics and data analysis with computing.
The term "data science" has been traced back to 1974, when Peter Naur proposed it as an alternative name to computer science. In 1996, the International Federation of Classification Societies became the first conference to specifically feature data science as a topic. However, the definition was still in flux. After the 1985 lecture at the Chinese Academy of Sciences in Beijing, in 1997 C. F. Jeff Wu again suggested that statistics should be renamed data science. He reasoned that a new name would help statistics shed inaccurate stereotypes, such as being synonymous with accounting or limited to describing data. In 1998, Hayashi Chikio argued for data science as a new, interdisciplinary concept, with three aspects: data design, collection, and analysis.
=== Modern usage ===
In 2012, technologists Thomas H. Davenport and DJ Patil declared "Data Scientist: The Sexiest Job of the 21st Century", a catchphrase that was picked up even by major-city newspapers like the New York Times and the Boston Globe. A decade later, they reaffirmed it, stating that "the job is more in demand than ever with employers".
The modern conception of data science as an independent discipline is sometimes attributed to William S. Cleveland. In 2014, the American Statistical Association's Section on Statistical Learning and Data Mining changed its name to the Section on Statistical Learning and Data Science, reflecting the ascendant popularity of data science.
The professional title of "data scientist" has been attributed to DJ Patil and Jeff Hammerbacher in 2008. Though it was used by the National Science Board in their 2005 report "Long-Lived Digital Data Collections: Enabling Research and Education in the 21st Century", it referred broadly to any key role in managing a digital data collection.
== Data science and data analysis ==
Data analysis typically involves working with structured datasets to answer specific questions or solve specific problems. This can involve tasks such as data cleaning and data visualization to summarize data and develop hypotheses about relationships between variables. Data analysts typically use statistical methods to test these hypotheses and draw conclusions from the data.
Data science involves working with larger datasets that often require advanced computational and statistical methods to analyze. Data scientists often work with unstructured data such as text or images and use machine learning algorithms to build predictive models. Data science often uses statistical analysis, data preprocessing, and supervised learning.
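The contrast drawn above, description and hypothesis testing versus predictive modelling, can be made concrete with a toy example. The following sketch uses only the Python standard library to fit a minimal supervised model (ordinary least squares) and predict an unseen value; the data are invented for illustration.

```python
# A minimal supervised-learning sketch: fit y = a + b*x by ordinary
# least squares, then predict at an unseen x. Data are illustrative.
from statistics import mean

def fit_line(xs, ys):
    """Return intercept a and slope b minimizing squared error."""
    x_bar, y_bar = mean(xs), mean(ys)
    b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
    a = y_bar - b * x_bar
    return a, b

# "Analysis" describes the observed data; "science" predicts beyond it.
xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1]
a, b = fit_line(xs, ys)
prediction = a + b * 5  # predicted y at the unseen point x = 5
```

In practice, data scientists would use a library such as scikit-learn and validate the model on held-out data, but the fit-then-predict shape is the same.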
== Cloud computing for data science ==
Cloud computing can offer access to large amounts of computational power and storage. In big data, where volumes of information are continually generated and processed, these platforms can be used to handle complex and resource-intensive analytical tasks.
Some distributed computing frameworks are designed to handle big data workloads. These frameworks can enable data scientists to process and analyze large datasets in parallel, which can reduce processing times.
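The pattern these frameworks parallelize, split the data into partitions, apply a function to each, then combine the partial results, can be sketched locally. Here a thread pool stands in for a cluster of machines, purely for illustration:

```python
# Split-apply-combine: partition the data, compute partial results in
# parallel, then combine them. Real big-data frameworks distribute the
# partitions across machines; a local thread pool stands in for that.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

def parallel_total(data, workers=4):
    # Round-robin partitioning into `workers` chunks.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

Because each partition is processed independently, the same structure scales from threads on one machine to nodes in a cluster.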
== Ethical consideration in data science ==
Data science involves collecting, processing, and analyzing data which often includes personal and sensitive information. Ethical concerns include potential privacy violations, bias perpetuation, and negative societal impacts.
Machine learning models can amplify existing biases present in training data, leading to discriminatory or unfair outcomes.
== See also ==
Python (programming language)
R (programming language)
Data engineering
Big data
Machine learning
Bioinformatics
Astroinformatics
Topological data analysis
List of open-source data science software
== References ==
Digital physics is a speculative idea suggesting that the universe can be conceived of as a vast, digital computation device, or as the output of a deterministic or probabilistic computer program. The hypothesis that the universe is a digital computer was proposed by Konrad Zuse in his 1969 book Rechnender Raum (Calculating-space). The term "digital physics" was coined in 1978 by Edward Fredkin, who later came to prefer the term "digital philosophy". Fredkin taught a graduate course called "digital physics" at MIT in 1978, and collaborated with Tommaso Toffoli on "conservative logic" while Norman Margolus served as a graduate student in his research group.
Digital physics posits that there exists, at least in principle, a program for a universal computer that computes the evolution of the universe. The computer could be, for example, a huge cellular automaton. It is deeply connected to the concept of information theory, particularly the idea that the universe's fundamental building blocks might be bits of information rather than traditional particles or fields.
However, extant models of digital physics face challenges, particularly in reconciling with several continuous symmetries in physical laws, e.g., rotational symmetry, translational symmetry, Lorentz symmetry, and the Lie group gauge invariance of Yang–Mills theories, all of which are central to current physical theories. Moreover, existing models of digital physics violate various well-established features of quantum physics, as they belong to a class of theories involving local hidden variables. These models have so far been disqualified experimentally by physicists using Bell's theorem.
== See also ==
Mathematical universe hypothesis
It from bit
Simulation hypothesis
Weyl's tile argument
Natura non facit saltus
== References ==
== Further reading ==
Robert Wright, "Did the Universe Just Happen?", Atlantic Monthly, April 1988 - Article discussing Fredkin and his digital physics ideas
Electroencephalography (EEG) is a method to record an electrogram of the spontaneous electrical activity of the brain. The biosignals detected by EEG have been shown to represent the postsynaptic potentials of pyramidal neurons in the neocortex and allocortex. It is typically non-invasive, with the EEG electrodes placed along the scalp (commonly called "scalp EEG") using the International 10–20 system, or variations of it. Electrocorticography, involving surgical placement of electrodes, is sometimes called "intracranial EEG". Clinical interpretation of EEG recordings is most often performed by visual inspection of the tracing or quantitative EEG analysis.
Voltage fluctuations measured by the EEG bioamplifier and electrodes allow the evaluation of normal brain activity. As the electrical activity monitored by EEG originates in neurons in the underlying brain tissue, the recordings made by the electrodes on the surface of the scalp vary in accordance with their orientation and distance to the source of the activity. Furthermore, the value recorded is distorted by intermediary tissues and bones, which act in a manner akin to resistors and capacitors in an electrical circuit. This means that not all neurons will contribute equally to an EEG signal, with an EEG predominately reflecting the activity of cortical neurons near the electrodes on the scalp. Deep structures within the brain further away from the electrodes will not contribute directly to an EEG; these include the base of the cortical gyrus, mesial walls of the major lobes, hippocampus, thalamus, and brain stem.
A healthy human EEG will show certain patterns of activity that correlate with how awake a person is. The range of frequencies one observes is between 1 and 30 Hz, and amplitudes vary between 20 and 100 μV. The observed frequencies are subdivided into various groups: alpha (8–13 Hz), beta (13–30 Hz), delta (0.5–4 Hz), and theta (4–7 Hz). Alpha waves are observed when a person is in a state of relaxed wakefulness and are mostly prominent over the parietal and occipital sites. During intense mental activity, beta waves are more prominent in frontal areas as well as other regions. If a relaxed person is told to open their eyes, one observes alpha activity decreasing and an increase in beta activity. Theta and delta waves are not generally seen in wakefulness – if they are, it is a sign of brain dysfunction.
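The band boundaries above can be expressed as a small lookup table. Exact edges vary between sources (note that in the text the theta band stops at 7 Hz while alpha starts at 8 Hz), so this sketch simply follows the ranges as stated:

```python
# EEG frequency bands as listed in the text (Hz). Conventions differ
# slightly between sources; frequencies the text leaves unassigned
# (e.g. between 7 and 8 Hz) return None in this sketch.
BANDS = [
    ("delta", 0.5, 4.0),
    ("theta", 4.0, 7.0),
    ("alpha", 8.0, 13.0),
    ("beta", 13.0, 30.0),
]

def band_of(freq_hz):
    """Return the band name for a frequency, or None if unassigned."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return None
```

For example, a 10 Hz oscillation over occipital electrodes in a relaxed, eyes-closed subject would be classified as alpha.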
EEG can detect abnormal electrical discharges such as sharp waves, spikes, or spike-and-wave complexes, as observable in people with epilepsy; thus, it is often used to inform medical diagnosis. EEG can detect the onset and spatio-temporal (location and time) evolution of seizures and the presence of status epilepticus. It is also used to help diagnose sleep disorders, depth of anesthesia, coma, encephalopathies, cerebral hypoxia after cardiac arrest, and brain death. EEG used to be a first-line method of diagnosis for tumors, stroke, and other focal brain disorders, but this use has decreased with the advent of high-resolution anatomical imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT). Despite its limited spatial resolution, EEG continues to be a valuable tool for research and diagnosis. It is one of the few mobile techniques available and offers millisecond-range temporal resolution, which is not possible with CT, PET, or MRI.
Derivatives of the EEG technique include evoked potentials (EP), which involves averaging the EEG activity time-locked to the presentation of a stimulus of some sort (visual, somatosensory, or auditory). Event-related potentials (ERPs) refer to averaged EEG responses that are time-locked to more complex processing of stimuli; this technique is used in cognitive science, cognitive psychology, and psychophysiological research.
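Both EP and ERP rest on the same basic operation: averaging many stimulus-locked epochs so that background activity unrelated to the stimulus tends to cancel while the time-locked response survives. A minimal sketch of that averaging step, with epochs represented as plain lists of samples:

```python
# Average equal-length EEG epochs, each time-locked to a stimulus.
# Random background activity averages toward zero across epochs,
# leaving the stimulus-locked (evoked/event-related) component.
def average_epochs(epochs):
    """Return the sample-wise mean of a list of equal-length epochs."""
    if not epochs:
        raise ValueError("no epochs to average")
    n = len(epochs)
    length = len(epochs[0])
    if any(len(e) != length for e in epochs):
        raise ValueError("epochs must have equal length")
    return [sum(e[i] for e in epochs) / n for i in range(length)]
```

Real ERP pipelines add filtering, artifact rejection, and baseline correction before this step, but the averaging itself is this simple.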
== Uses ==
=== Epilepsy ===
EEG is the gold standard diagnostic procedure to confirm epilepsy. The sensitivity of a routine EEG to detect interictal epileptiform discharges at epilepsy centers has been reported to be in the range of 29–55%. Given the low to moderate sensitivity, a routine EEG (typically with a duration of 20–30 minutes) can be normal in people who have epilepsy. When an EEG shows interictal epileptiform discharges (e.g. sharp waves, spikes, spike-and-wave, etc.) it is confirmatory of epilepsy in nearly all cases (high specificity); however, up to 3.5% of the general population may have epileptiform abnormalities in an EEG without ever having had a seizure (low false positive rate) or with a very low risk of developing epilepsy in the future.
When a routine EEG is normal and there is a high suspicion or need to confirm epilepsy, it may be repeated or performed with a longer duration in the epilepsy monitoring unit (EMU) or at home with an ambulatory EEG. In addition, there are activating maneuvers such as photic stimulation, hyperventilation and sleep deprivation that can increase the diagnostic yield of the EEG.
=== Epilepsy Monitoring Unit (EMU) ===
At times, a routine EEG is not sufficient to establish the diagnosis or determine the best course of action in terms of treatment. In this case, attempts may be made to record an EEG while a seizure is occurring. This is known as an ictal recording, as opposed to an interictal recording, which refers to the EEG recording between seizures. To obtain an ictal recording, a prolonged EEG is typically performed accompanied by a time-synchronized video and audio recording. This can be done either as an outpatient (at home) or during a hospital admission, preferably to an Epilepsy Monitoring Unit (EMU) with nurses and other personnel trained in the care of patients with seizures. Outpatient ambulatory video EEGs typically last one to three days. An admission to an Epilepsy Monitoring Unit typically lasts several days but may last for a week or longer. While in the hospital, seizure medications are usually withdrawn to increase the odds that a seizure will occur during admission. For reasons of safety, medications are not withdrawn during an EEG outside of the hospital. Ambulatory video EEGs, therefore, have the advantage of convenience and are less expensive than a hospital admission, but they also have the disadvantage of a decreased probability of recording a clinical event.
Epilepsy monitoring is often considered when patients continue having events despite being on anti-seizure medications or if there is concern that the patient's events have an alternate diagnosis, e.g., psychogenic non-epileptic seizures, syncope (fainting), sub-cortical movement disorders, migraine variants, stroke, etc. In cases of epileptic seizures, continuous EEG monitoring helps to characterize seizures and localize/lateralize the region of the brain from which a seizure originates. This can help identify appropriate non-medication treatment options. In clinical use, EEG traces are visually analyzed by neurologists to look at various features. Increasingly, quantitative analysis of EEG is being used in conjunction with visual analysis. Quantitative analysis displays like power spectrum analysis, alpha-delta ratio, amplitude integrated EEG, and spike detection can help quickly identify segments of EEG that need close visual analysis or, in some cases, be used as surrogates for quick identification of seizures in long-term recordings.
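Of the quantitative displays mentioned, the alpha-delta ratio is the simplest to state: power in the alpha band divided by power in the delta band. The sketch below assumes the power spectrum has already been computed and is supplied as (frequency, power) pairs; the band edges follow common convention and are illustrative:

```python
# Alpha-delta ratio, a simple quantitative EEG measure. Input is a
# precomputed power spectrum as (frequency_hz, power) pairs; band
# edges are conventional and illustrative.
def band_power(spectrum, lo, hi):
    """Sum the power of all spectral bins with lo <= frequency < hi."""
    return sum(p for f, p in spectrum if lo <= f < hi)

def alpha_delta_ratio(spectrum):
    alpha = band_power(spectrum, 8.0, 13.0)   # alpha band
    delta = band_power(spectrum, 0.5, 4.0)    # delta band
    if delta == 0:
        raise ValueError("no delta power in spectrum")
    return alpha / delta
```

In long-term monitoring, such a scalar can be trended over time to flag segments of the recording that merit close visual review.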
=== Other brain disorders ===
An EEG might also be helpful for diagnosing or treating the following disorders:
Brain tumor
Brain damage from head injury
Brain dysfunction that can have a variety of causes (encephalopathy)
Inflammation of the brain (encephalitis)
Stroke
Sleep disorders
It can also:
distinguish epileptic seizures from other types of spells, such as psychogenic non-epileptic seizures, syncope (fainting), sub-cortical movement disorders and migraine variants
differentiate "organic" encephalopathy or delirium from primary psychiatric syndromes such as catatonia
serve as an adjunct test of brain death in comatose patients
prognosticate in comatose patients (in certain instances) or in newborns with brain injury from various causes around the time of birth
determine whether to wean anti-epileptic medications.
=== Intensive Care Unit (ICU) ===
EEG can also be used in intensive care units for brain function monitoring to monitor for non-convulsive seizures/non-convulsive status epilepticus, to monitor the effect of sedative/anesthesia in patients in medically induced coma (for treatment of refractory seizures or increased intracranial pressure), and to monitor for secondary brain damage in conditions such as subarachnoid hemorrhage (currently a research method).
In cases where significant brain injury is suspected, e.g., after cardiac arrest, EEG can provide some prognostic information.
If a patient with epilepsy is being considered for resective surgery to treat epilepsy, it is often necessary to localize the focus (source) of the epileptic brain activity with a resolution greater than what is provided by scalp EEG. In these cases, neurosurgeons typically implant strips and grids of electrodes or penetrating depth electrodes under the dura mater, through either a craniotomy or a burr hole. The recording of these signals is referred to as electrocorticography (ECoG), subdural EEG (SDE), intracranial EEG (iEEG), or stereotactic EEG (SEEG). The signal recorded from ECoG is on a different scale of activity than the brain activity recorded from scalp EEG. Low-voltage, high-frequency components that cannot be seen easily (or at all) in scalp EEG can be seen clearly in ECoG. Further, smaller electrodes (which cover a smaller parcel of brain surface) allow for better spatial resolution to narrow down the areas critical for seizure onset and propagation. Some clinical sites record data from penetrating microelectrodes.
=== Home ambulatory EEG ===
Sometimes it is more convenient or clinically necessary to perform ambulatory EEG recordings in the home of the person being tested. These studies typically have a duration of 24–72 hours.
== Research use ==
EEG and the related study of ERPs are used extensively in neuroscience, cognitive science, cognitive psychology, neurolinguistics, and psychophysiological research, as well as to study human functions such as swallowing. EEG techniques used in research are not sufficiently standardised for clinical use, and many ERP studies fail to report all of the necessary processing steps for data collection and reduction, limiting the reproducibility and replicability of many studies. Based on a 2024 systematic literature review and meta-analysis commissioned by the Patient-Centered Outcomes Research Institute (PCORI), EEG scans cannot be used reliably to assist in making a clinical diagnosis of ADHD. However, EEG continues to be used in research on mental disabilities, such as auditory processing disorder (APD), ADD, and ADHD. EEGs have also been studied for their utility in detecting neurophysiological changes in the brain after concussion; however, at this time there are no advanced imaging techniques that can be used clinically to diagnose or monitor recovery from concussion.
=== Advantages ===
Several other methods to study brain function exist, including functional magnetic resonance imaging (fMRI), positron emission tomography (PET), magnetoencephalography (MEG), nuclear magnetic resonance spectroscopy (NMR or MRS), electrocorticography (ECoG), single-photon emission computed tomography (SPECT), near-infrared spectroscopy (NIRS), and event-related optical signal (EROS). Despite the relatively poor spatial sensitivity of EEG, the "one-dimensional signals from localised peripheral regions on the head make it attractive for its simplistic fidelity and has allowed high clinical and basic research throughput". Thus, EEG possesses some advantages over some of those other techniques:
Hardware costs are significantly lower than those of most other techniques
EEG is less constrained by the limited availability of technologists, which makes it easier to provide immediate care in high-traffic hospitals.
EEG only requires a quiet room and briefcase-size equipment, whereas fMRI, SPECT, PET, MRS, or MEG require bulky and immobile equipment. For example, MEG requires equipment consisting of liquid helium-cooled detectors that can be used only in magnetically shielded rooms, altogether costing upwards of several million dollars; and fMRI requires the use of a 1-ton magnet in, again, a shielded room.
EEG can readily achieve high temporal resolution (although sub-millisecond resolution generates less meaningful data), because the two to 32 data streams generated by that number of electrodes are easily stored and processed, whereas 3D spatial technologies provide thousands or millions of times as many input data streams and are thus limited by hardware and software. EEG is commonly recorded at sampling rates between 250 and 2000 Hz in clinical and research settings.
EEG is relatively tolerant of subject movement, unlike most other neuroimaging techniques. There even exist methods for minimizing, or even eliminating, movement artifacts in EEG data.
EEG is silent, which allows for better study of the responses to auditory stimuli.
EEG does not aggravate claustrophobia, unlike fMRI, PET, MRS, SPECT, and sometimes MEG
EEG does not involve exposure to high-intensity (>1 Tesla) magnetic fields, as in some of the other techniques, especially MRI and MRS. These can cause a variety of undesirable issues with the data, and also prohibit use of these techniques with participants who have metal implants in their body, such as metal-containing pacemakers
EEG does not involve exposure to radioligands, unlike positron emission tomography.
ERP studies can be conducted with relatively simple paradigms, compared with, e.g., block-design fMRI studies
Relatively non-invasive, in contrast to electrocorticography, which requires electrodes to be placed on the actual surface of the brain.
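The storage point in the list above (a few dozen electrode data streams are easy to store and process) can be checked with simple arithmetic. The 4-byte sample size below is an assumption for illustration:

```python
# Rough data rate for a scalp EEG recording. Assumes each sample is
# stored as a 4-byte float; real systems vary.
def eeg_bytes_per_hour(channels, rate_hz, bytes_per_sample=4):
    return channels * rate_hz * bytes_per_sample * 3600

# Even the dense end of the ranges cited (32 channels at 2000 Hz)
# comes in under a gigabyte per hour.
gb_per_hour = eeg_bytes_per_hour(32, 2000) / 1e9
```

By contrast, volumetric imaging produces orders of magnitude more data per unit time, which is the hardware and software limitation the text refers to.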
EEG also has some characteristics that compare favorably with behavioral testing:
EEG can detect covert processing (i.e., processing that does not require a response)
EEG can be used in subjects who are incapable of making a motor response
EEG is a method widely used in the study of sport performance, valued for its portability and lightweight design
Some ERP components can be detected even when the subject is not attending to the stimuli
Unlike other means of studying reaction time, ERPs can elucidate stages of processing (rather than just the result)
The simplicity of EEG readily provides for tracking of brain changes during different phases of life. EEG sleep analysis can indicate significant aspects of the timing of brain development, including evaluating adolescent brain maturation.
In EEG there is a better understanding of what signal is measured as compared to other research techniques, e.g. the BOLD response in MRI.
=== Disadvantages ===
Low spatial resolution on the scalp. fMRI, for example, can directly display areas of the brain that are active, while EEG requires intense interpretation just to hypothesize what areas are activated by a particular response.
Depending on the orientation and location of the dipole causing an EEG change, there may be a false localization due to the inverse problem.
EEG poorly measures neural activity that occurs below the upper layers of the brain (the cortex).
Unlike PET and MRS, EEG cannot identify specific locations in the brain at which various neurotransmitters, drugs, etc. can be found.
Often takes a long time to connect a subject to EEG, as it requires precise placement of dozens of electrodes around the head and the use of various gels, saline solutions, and pastes to maintain good conductivity, and a cap is used to keep them in place. While the length of time differs depending on the specific EEG device used, as a general rule it takes considerably less time to prepare a subject for MEG, fMRI, MRS, and SPECT.
Signal-to-noise ratio is poor, so sophisticated data analysis and relatively large numbers of subjects are needed to extract useful information from EEG.
EEGs are not currently very compatible with individuals who have coarser or textured hair. Even protective styles can pose issues during testing. Researchers are currently trying to build better options for patients and technicians alike. Furthermore, researchers are starting to implement more culturally informed data collection practices to help reduce racial biases in EEG research.
=== With other neuroimaging techniques ===
Simultaneous EEG recordings and fMRI scans have been obtained successfully, though recording both at the same time effectively requires that several technical difficulties be overcome, such as the presence of ballistocardiographic artifact, MRI pulse artifact and the induction of electrical currents in EEG wires that move within the strong magnetic fields of the MRI. While challenging, these have been successfully overcome in a number of studies.
MRIs produce detailed images by generating strong magnetic fields that may exert potentially harmful displacement forces and torque on implanted devices. These fields also produce potentially harmful radio-frequency heating and can create image artifacts that render images useless. Due to these potential risks, only certain medical devices can be used in an MR environment.
Similarly, simultaneous recordings with MEG and EEG have also been conducted, which has several advantages over using either technique alone:
EEG requires accurate information about certain aspects of the skull that can only be estimated, such as skull radius and the conductivities of various skull locations. MEG does not have this issue, and a simultaneous analysis allows this to be corrected for.
MEG and EEG both detect activity below the surface of the cortex very poorly, and like EEG, the level of error increases with the depth below the surface of the cortex one attempts to examine. However, the errors are very different between the techniques, and combining them thus allows for correction of some of this noise.
MEG has access to virtually no sources of brain activity below a few centimetres under the cortex. EEG, on the other hand, can receive signals from greater depth, albeit with a high degree of noise. Combining the two makes it easier to determine what in the EEG signal comes from the surface (since MEG is very accurate in examining signals from the surface of the brain), and what comes from deeper in the brain, thus allowing for analysis of deeper brain signals than either EEG or MEG on its own.
Recently, a combined EEG/MEG (EMEG) approach has been investigated for the purpose of source reconstruction in epilepsy diagnosis.
EEG has also been combined with positron emission tomography. This provides the advantage of allowing researchers to see what EEG signals are associated with different drug actions in the brain.
Recent studies using machine learning techniques, such as neural networks applied to statistical temporal features extracted from frontal-lobe EEG data, have shown high levels of success in classifying mental states (relaxed, neutral, concentrating), mental emotional states (negative, neutral, positive) and thalamocortical dysrhythmia.
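As a rough illustration of that workflow (not the pipeline used in the cited studies), window-level features can be fed to an off-the-shelf classifier; everything below, including the feature counts and class labels, is synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for statistical temporal features computed on short
# EEG windows (e.g. per-window means, variances, band powers).
n_windows, n_features = 300, 20
y = rng.integers(0, 3, size=n_windows)  # 0=relaxed, 1=neutral, 2=concentrating
X = rng.normal(size=(n_windows, n_features))
X += y[:, None] * 0.5  # separate the class means so the labels are learnable

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 2))
```

Real studies differ mainly in the feature extraction step, which operates on actual EEG recordings rather than random numbers.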
== Mechanisms ==
The brain's electrical charge is maintained by billions of neurons. Neurons are electrically charged (or "polarized") by membrane transport proteins that pump ions across their membranes. Neurons are constantly exchanging ions with the extracellular milieu, for example to maintain resting potential and to propagate action potentials. Ions of similar charge repel each other, and when many ions are pushed out of many neurons at the same time, they can push their neighbours, who push their neighbours, and so on, in a wave. This process is known as volume conduction. When the wave of ions reaches the electrodes on the scalp, they can push or pull electrons on the metal in the electrodes. Since metal conducts the push and pull of electrons easily, the difference in push or pull voltages between any two electrodes can be measured by a voltmeter. Recording these voltages over time gives us the EEG.
The electric potential generated by an individual neuron is far too small to be picked up by EEG or MEG. EEG activity therefore always reflects the summation of the synchronous activity of thousands or millions of neurons that have similar spatial orientation. If the cells do not have similar spatial orientation, their ions do not line up and create waves to be detected. Pyramidal neurons of the cortex are thought to produce the most EEG signal because they are well-aligned and fire together. Because voltage field gradients fall off with the square of distance, activity from deep sources is more difficult to detect than currents near the skull.
Scalp EEG activity shows oscillations at a variety of frequencies. Several of these oscillations have characteristic frequency ranges, spatial distributions and are associated with different states of brain functioning (e.g., waking and the various sleep stages). These oscillations represent synchronized activity over a network of neurons. The neuronal networks underlying some of these oscillations are understood (e.g., the thalamocortical resonance underlying sleep spindles), while many others are not (e.g., the system that generates the posterior basic rhythm). Research that measures both EEG and neuron spiking finds the relationship between the two is complex, with a combination of EEG power in the gamma band and phase in the delta band relating most strongly to neuron spike activity.
== Method ==
In conventional scalp EEG, the recording is obtained by placing electrodes on the scalp with a conductive gel or paste, usually after preparing the scalp area by light abrasion to reduce impedance due to dead skin cells. Many systems use individual electrodes, each attached to its own wire. Some systems use caps or nets into which electrodes are embedded; this is particularly common when high-density arrays of electrodes are needed.
Electrode locations and names are specified by the International 10–20 system for most clinical and research applications (except when high-density arrays are used). This system ensures that the naming of electrodes is consistent across laboratories. In most clinical applications, 19 recording electrodes (plus ground and system reference) are used. A smaller number of electrodes are typically used when recording EEG from neonates. Additional electrodes can be added to the standard set-up when a clinical or research application demands increased spatial resolution for a particular area of the brain. High-density arrays (typically via cap or net) can contain up to 256 electrodes more-or-less evenly spaced around the scalp.
Each electrode is connected to one input of a differential amplifier (one amplifier per pair of electrodes); a common system reference electrode is connected to the other input of each differential amplifier. These amplifiers amplify the voltage between the active electrode and the reference (typically 1,000–100,000 times, or 60–100 dB of power gain). In analog EEG, the signal is then filtered (next paragraph), and the EEG signal is output as the deflection of pens as paper passes underneath. Most EEG systems these days, however, are digital, and the amplified signal is digitized via an analog-to-digital converter, after being passed through an anti-aliasing filter. Analog-to-digital sampling typically occurs at 256–512 Hz in clinical scalp EEG; sampling rates of up to 20 kHz are used in some research applications.
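The quoted gain and sampling figures can be sanity-checked directly; this is a minimal sketch (the 256 Hz rate and 1,000x–100,000x gains are just the typical values named above, not the specification of any particular device):

```python
import math

# Amplifier gain in decibels: for a voltage ratio G into a fixed impedance,
# the power gain is 10*log10(G**2) = 20*log10(G), so 1,000x-100,000x
# corresponds to 60-100 dB.
for gain in (1_000, 100_000):
    print(f"{gain}x -> {20 * math.log10(gain):.0f} dB")

# Nyquist: a sampling rate fs can only represent frequencies below fs/2,
# which is why an anti-aliasing filter must precede digitization.
fs = 256  # Hz, a typical clinical rate
nyquist = fs / 2
print(nyquist)  # → 128.0
```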
During the recording, a series of activation procedures may be used. These procedures may induce normal or abnormal EEG activity that might not otherwise be seen. These procedures include hyperventilation, photic stimulation (with a strobe light), eye closure, mental activity, sleep and sleep deprivation. During (inpatient) epilepsy monitoring, a patient's typical seizure medications may be withdrawn.
The digital EEG signal is stored electronically and can be filtered for display. Typical settings for the high-pass filter and a low-pass filter are 0.5–1 Hz and 35–70 Hz respectively. The high-pass filter typically filters out slow artifact, such as electrogalvanic signals and movement artifact, whereas the low-pass filter filters out high-frequency artifacts, such as electromyographic signals. An additional notch filter is typically used to remove artifact caused by electrical power lines (60 Hz in the United States and 50 Hz in many other countries).
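A sketch of such a display-filter chain using SciPy, assuming an illustrative 256 Hz sampling rate and US-style 60 Hz mains (the filter orders and notch Q below are arbitrary choices, not clinical standards):

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

fs = 256.0  # Hz, an illustrative sampling rate

# High-pass at 0.5 Hz (removes slow drift) and low-pass at 70 Hz (removes
# EMG-range activity), combined here as one 4th-order Butterworth band-pass.
b_bp, a_bp = butter(4, [0.5, 70.0], btype="bandpass", fs=fs)

# Notch at 60 Hz for power-line interference (use 50 Hz outside the Americas).
b_n, a_n = iirnotch(60.0, Q=30.0, fs=fs)

# Demo signal: a 10 Hz alpha-like sine plus 60 Hz mains contamination.
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

# filtfilt applies each filter forwards and backwards (zero phase), so the
# waveform morphology is not shifted in time.
y = filtfilt(b_n, a_n, filtfilt(b_bp, a_bp, x))
```

After filtering, the 60 Hz component is strongly attenuated while the 10 Hz rhythm passes essentially unchanged.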
The EEG signals can be captured with open-source hardware such as OpenBCI, and the signal can be processed by freely available EEG software such as EEGLAB or the Neurophysiological Biomarker Toolbox.
As part of an evaluation for epilepsy surgery, it may be necessary to insert electrodes near the surface of the brain, under the surface of the dura mater. This is accomplished via burr hole or craniotomy. This is referred to variously as "electrocorticography (ECoG)", "intracranial EEG (I-EEG)" or "subdural EEG (SD-EEG)". Depth electrodes may also be placed into brain structures, such as the amygdala or hippocampus, structures which are common epileptic foci and may not be "seen" clearly by scalp EEG. The electrocorticographic signal is processed in the same manner as digital scalp EEG (above), with a couple of caveats. ECoG is typically recorded at higher sampling rates than scalp EEG because of the requirements of the Nyquist theorem – the subdural signal is composed of a higher predominance of higher frequency components. Also, many of the artifacts that affect scalp EEG do not impact ECoG, and therefore display filtering is often not needed.
A typical adult human EEG signal is about 10 μV to 100 μV in amplitude when measured from the scalp.
Since an EEG voltage signal represents a difference between the voltages at two electrodes, the display of the EEG for the reading electroencephalographer may be set up in one of several ways. The representation of the EEG channels is referred to as a montage.
Sequential montage
Each channel (i.e., waveform) represents the difference between two adjacent electrodes. The entire montage consists of a series of these channels. For example, the channel "Fp1-F3" represents the difference in voltage between the Fp1 electrode and the F3 electrode. The next channel in the montage, "F3-C3", represents the voltage difference between F3 and C3, and so on through the entire array of electrodes.
Referential montage
Each channel represents the difference between a certain electrode and a designated reference electrode. There is no standard position for this reference; it is, however, at a different position than the "recording" electrodes. Midline positions such as Cz, Oz, or Pz are often used as the online reference because they do not amplify the signal in one hemisphere relative to the other. Other popular offline references are:
REST reference: an offline computational reference at a point at infinity, where the potential is zero. REST (reference electrode standardization technique) uses the equivalent sources inside the brain implied by a set of scalp recordings as a springboard to transform recordings made against any online or offline non-zero reference (average, linked ears, etc.) into recordings against a standardized zero reference at infinity.
"linked ears": which is a physical or mathematical average of electrodes attached to both earlobes or mastoids.
Average reference montage
The outputs of all of the amplifiers are summed and averaged, and this averaged signal is used as the common reference for each channel.
Laplacian montage
Each channel represents the difference between an electrode and a weighted average of the surrounding electrodes.
When analog (paper) EEGs are used, the technologist switches between montages during the recording in order to highlight or better characterize certain features of the EEG. With digital EEG, all signals are typically digitized and stored in a particular (usually referential) montage; since any montage can be constructed mathematically from any other, the EEG can be viewed by the electroencephalographer in any display montage that is desired.
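Because every channel is stored against a common reference, re-montaging is simple linear algebra. A sketch with made-up microvolt samples (electrode names follow the 10–20 system; the common reference is assumed to be Cz):

```python
import numpy as np

# Referentially recorded channels: every electrode measured against the same
# common reference. Values are invented microvolt samples.
ref = {
    "Fp1": np.array([12.0, 15.0, 11.0]),
    "F3":  np.array([10.0, 14.0,  9.0]),
    "C3":  np.array([ 8.0, 12.0,  8.5]),
}

# Sequential (bipolar) montage: difference of adjacent electrodes. The common
# reference cancels out: (Fp1 - Cz) - (F3 - Cz) = Fp1 - F3.
fp1_f3 = ref["Fp1"] - ref["F3"]
f3_c3 = ref["F3"] - ref["C3"]

# Average-reference montage: subtract the mean of all channels from each one.
avg = np.mean(list(ref.values()), axis=0)
avg_montage = {name: sig - avg for name, sig in ref.items()}

print(fp1_f3)  # → [2. 1. 2.]
```

The same cancellation argument is why a digital recording in any one referential montage suffices to reconstruct all the others.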
The EEG is read by a clinical neurophysiologist or neurologist (depending on local custom and law regarding medical specialities), optimally one who has specific training in the interpretation of EEGs for clinical purposes. This is done by visual inspection of the waveforms, called graphoelements. The use of computer signal processing of the EEG – so-called quantitative electroencephalography – is somewhat controversial when used for clinical purposes (although there are many research uses).
=== Dry EEG electrodes ===
In the early 1990s Babak Taheri, at University of California, Davis demonstrated the first single and also multichannel dry active electrode arrays using micro-machining. The single channel dry EEG electrode construction and results were published in 1994. The arrayed electrode was also demonstrated to perform well compared to silver/silver chloride electrodes. The device consisted of four sites of sensors with integrated electronics to reduce noise by impedance matching. The advantages of such electrodes are: (1) no electrolyte used, (2) no skin preparation, (3) significantly reduced sensor size, and (4) compatibility with EEG monitoring systems. The active electrode array is an integrated system made of an array of capacitive sensors with local integrated circuitry housed in a package with batteries to power the circuitry. This level of integration was required to achieve the functional performance obtained by the electrode. The electrode was tested on an electrical test bench and on human subjects in four modalities of EEG activity, namely: (1) spontaneous EEG, (2) sensory event-related potentials, (3) brain stem potentials, and (4) cognitive event-related potentials. The performance of the dry electrode compared favorably with that of the standard wet electrodes in terms of skin preparation, no gel requirements (dry), and higher signal-to-noise ratio.
In 1999 researchers at Case Western Reserve University, in Cleveland, Ohio, led by Hunter Peckham, used a 64-electrode EEG skullcap to return limited hand movements to quadriplegic Jim Jatich. As Jatich concentrated on simple but opposite concepts like up and down, his beta-rhythm EEG output was analysed using software to identify patterns in the noise. A basic pattern was identified and used to control a switch: above-average activity was set to on, below-average to off. As well as enabling Jatich to control a computer cursor, the signals were also used to drive the nerve controllers embedded in his hands, restoring some movement.
In 2018, a functional dry electrode composed of a polydimethylsiloxane elastomer filled with conductive carbon nanofibers was reported. This research was conducted at the U.S. Army Research Laboratory. EEG technology often involves applying a gel to the scalp, which facilitates a strong signal-to-noise ratio. This results in more reproducible and reliable experimental results. Since patients dislike having their hair filled with gel, and the lengthy setup requires trained staff on hand, utilizing EEG outside the laboratory setting can be difficult. Additionally, it has been observed that the performance of wet electrode sensors degrades after a span of hours. Therefore, research has been directed toward developing dry and semi-dry EEG bioelectronic interfaces.
Dry electrode signals depend upon mechanical contact. Therefore, it can be difficult to get a usable signal because of impedance between the skin and the electrode. Some EEG systems attempt to circumvent this issue by applying a saline solution. Others have a semi-dry nature and release small amounts of gel upon contact with the scalp. Another solution uses spring-loaded pin setups. These may be uncomfortable. They may also be dangerous if used in a situation where a patient could bump their head, since the pins could become lodged after an impact.
Currently, headsets are available incorporating dry electrodes with up to 30 channels. Such designs are able to compensate for some of the signal quality degradation related to high impedances by optimizing pre-amplification, shielding and supporting mechanics.
=== Limitations ===
EEG has several limitations. Most important is its poor spatial resolution. EEG is most sensitive to a particular set of post-synaptic potentials: those generated in superficial layers of the cortex, on the crests of gyri directly abutting the skull and radial to the skull. Dendrites which are deeper in the cortex, inside sulci, in midline or deep structures (such as the cingulate gyrus or hippocampus), or producing currents that are tangential to the skull, make far less contribution to the EEG signal.
EEG recordings do not directly capture axonal action potentials. An action potential can be accurately represented as a current quadrupole, meaning that the resulting field decreases more rapidly than the ones produced by the current dipole of post-synaptic potentials. In addition, since EEGs represent averages of thousands of neurons, a large population of cells in synchronous activity is necessary to cause a significant deflection on the recordings. Action potentials are very fast and, as a consequence, the chances of field summation are slim. However, neural backpropagation, as a typically longer dendritic current dipole, can be picked up by EEG electrodes and is a reliable indication of the occurrence of neural output.
Not only do EEGs capture dendritic currents almost exclusively, as opposed to axonal currents, they also show a preference for activity in populations of parallel dendrites transmitting current in the same direction at the same time. Pyramidal neurons of cortical layers II/III and V extend apical dendrites to layer I. Currents moving up or down these processes underlie most of the signals produced by electroencephalography.
EEG thus provides information with a large bias in favor of particular neuron types, locations and orientations. So it generally should not be used to make claims about global brain activity. The meninges, cerebrospinal fluid and skull "smear" the EEG signal, obscuring its intracranial source.
It is mathematically impossible to reconstruct a unique intracranial current source for a given EEG signal, as some currents produce potentials that cancel each other out. This is referred to as the inverse problem. However, much work has been done to produce remarkably good estimates of, at least, a localized electric dipole that represents the recorded currents.
=== EEG vis-à-vis fMRI, fNIRS, fUS and PET ===
EEG has several strong points as a tool for exploring brain activity. EEGs can detect changes over milliseconds, which is excellent considering an action potential takes approximately 0.5–130 milliseconds to propagate across a single neuron, depending on the type of neuron. Other methods of looking at brain activity, such as PET, fMRI or fUS have time resolution between seconds and minutes. EEG measures the brain's electrical activity directly, while other methods record changes in blood flow (e.g., SPECT, fMRI, fUS) or metabolic activity (e.g., PET, NIRS), which are indirect markers of brain electrical activity.
EEG can be used simultaneously with fMRI or fUS so that high-temporal-resolution data can be recorded at the same time as high-spatial-resolution data; however, since the data derived from each occur over different time courses, the data sets do not necessarily represent exactly the same brain activity.
There are technical difficulties associated with combining EEG and fMRI including the need to remove the MRI gradient artifact present during MRI acquisition. Furthermore, currents can be induced in moving EEG electrode wires due to the magnetic field of the MRI.
EEG can be used simultaneously with NIRS or fUS without major technical difficulties. There is no influence of these modalities on each other and a combined measurement can give useful information about electrical activity as well as hemodynamics at medium spatial resolution.
=== EEG vis-à-vis MEG ===
EEG reflects correlated synaptic activity caused by post-synaptic potentials of cortical neurons. The ionic currents involved in the generation of fast action potentials may not contribute greatly to the averaged field potentials representing the EEG. More specifically, the scalp electrical potentials that produce EEG are generally thought to be caused by the extracellular ionic currents caused by dendritic electrical activity, whereas the fields producing magnetoencephalographic signals are associated with intracellular ionic currents.
== Normal activity ==
The EEG is typically described in terms of (1) rhythmic activity and (2) transients. The rhythmic activity is divided into bands by frequency. To some degree, these frequency bands are a matter of nomenclature (i.e., any rhythmic activity between 8–12 Hz can be described as "alpha"), but these designations arose because rhythmic activity within a certain frequency range was noted to have a certain distribution over the scalp or a certain biological significance. Frequency bands are usually extracted using spectral methods (for instance Welch) as implemented for instance in freely available EEG software such as EEGLAB or the Neurophysiological Biomarker Toolbox.
Computational processing of the EEG is often named quantitative electroencephalography (qEEG).
Most of the cerebral signal observed in the scalp EEG falls in the range of 1–20 Hz (activity below or above this range is likely to be artifactual, under standard clinical recording techniques). Waveforms are subdivided into bandwidths known as alpha, beta, theta, and delta to signify the majority of the EEG used in clinical practice.
=== Comparison of EEG bands ===
The practice of using only whole numbers in the definitions comes from practical considerations in the days when only whole cycles could be counted on paper records. This leads to gaps in the definitions, as seen elsewhere on this page. The theoretical definitions have always been more carefully defined to include all frequencies. Unfortunately, there is no agreement in standard reference works on what these ranges should be; quoted values for the upper end of alpha and lower end of beta include 12, 13, 14 and 15 Hz. If the threshold is taken as 14 Hz, then the slowest beta wave has about the same duration as the longest spike (70 ms), which makes this the most useful value.
=== Wave patterns ===
Delta is the frequency range up to 4 Hz. Delta waves tend to be the highest in amplitude and the slowest. They are seen normally in adults in slow-wave sleep, and also normally in babies. Delta may occur focally with subcortical lesions and in general distribution with diffuse lesions, metabolic encephalopathy, hydrocephalus, or deep midline lesions. It is usually most prominent frontally in adults (e.g. FIRDA – frontal intermittent rhythmic delta) and posteriorly in children (e.g. OIRDA – occipital intermittent rhythmic delta).
Theta is the frequency range from 4 Hz to 7 Hz. Theta is seen normally in young children. It may be seen in drowsiness or arousal in older children and adults; it can also be seen in meditation. Excess theta for age represents abnormal activity. It can be seen as a focal disturbance in focal subcortical lesions; it can be seen in generalized distribution in diffuse disorder or metabolic encephalopathy or deep midline disorders or some instances of hydrocephalus. Conversely, this range has also been associated with reports of relaxed, meditative, and creative states.
Alpha is the frequency range from 8 Hz to 12 Hz. Hans Berger named the first rhythmic EEG activity he observed the "alpha wave". This was the "posterior basic rhythm" (also called the "posterior dominant rhythm" or the "posterior alpha rhythm"), seen in the posterior regions of the head on both sides, higher in amplitude on the dominant side. It emerges with closing of the eyes and with relaxation, and attenuates with eye opening or mental exertion. The posterior basic rhythm is actually slower than 8 Hz in young children (therefore technically in the theta range).
In addition to the posterior basic rhythm, there are other normal alpha rhythms such as the mu rhythm (alpha activity in the contralateral sensory and motor cortical areas) that emerges when the hands and arms are idle; and the "third rhythm" (alpha activity in the temporal or frontal lobes). Alpha can be abnormal; for example, an EEG that has diffuse alpha occurring in coma and is not responsive to external stimuli is referred to as "alpha coma".
Beta is the frequency range from 13 Hz to about 30 Hz. It is seen usually on both sides in symmetrical distribution and is most evident frontally. Beta activity is closely linked to motor behavior and is generally attenuated during active movements. Low-amplitude beta with multiple and varying frequencies is often associated with active, busy or anxious thinking and active concentration. Rhythmic beta with a dominant set of frequencies is associated with various pathologies, such as Dup15q syndrome, and drug effects, especially benzodiazepines. It may be absent or reduced in areas of cortical damage. It is the dominant rhythm in patients who are alert or anxious or who have their eyes open.
Gamma is the frequency range approximately 30–100 Hz. Gamma rhythms are thought to represent binding of different populations of neurons together into a network for the purpose of carrying out a certain cognitive or motor function.
Mu range is 8–13 Hz and partly overlaps with other frequencies. It reflects the synchronous firing of motor neurons in the resting state. Mu suppression is thought to reflect motor mirror neuron systems, because when an action is observed, the pattern extinguishes, possibly because the normal and mirror neuronal systems "go out of sync" and interfere with one another.
"Ultra-slow" or "near-DC" activity is recorded using DC amplifiers in some research contexts. It is not typically recorded in a clinical context because the signal at these frequencies is susceptible to a number of artifacts.
Some features of the EEG are transient rather than rhythmic. Spikes and sharp waves may represent seizure activity or interictal activity in individuals with epilepsy or a predisposition toward epilepsy. Other transient features are normal: vertex waves and sleep spindles are seen in normal sleep.
There are types of activity that are statistically uncommon, but not associated with dysfunction or disease. These are often referred to as "normal variants". The mu rhythm is an example of a normal variant.
The normal electroencephalogram (EEG) varies by age. The prenatal EEG and neonatal EEG is quite different from the adult EEG. Fetuses in the third trimester and newborns display two common brain activity patterns: "discontinuous" and "trace alternant." "Discontinuous" electrical activity refers to sharp bursts of electrical activity followed by low frequency waves. "Trace alternant" electrical activity describes sharp bursts followed by short high amplitude intervals and usually indicates quiet sleep in newborns. The EEG in childhood generally has slower frequency oscillations than the adult EEG.
The normal EEG also varies depending on state. The EEG is used along with other measurements (EOG, EMG) to define sleep stages in polysomnography. Stage I sleep (equivalent to drowsiness in some systems) appears on the EEG as drop-out of the posterior basic rhythm. There can be an increase in theta frequencies. Santamaria and Chiappa cataloged a number of the variety of patterns associated with drowsiness. Stage II sleep is characterized by sleep spindles – transient runs of rhythmic activity in the 12–14 Hz range (sometimes referred to as the "sigma" band) that have a frontal-central maximum. Most of the activity in Stage II is in the 3–6 Hz range. Stage III and IV sleep are defined by the presence of delta frequencies and are often referred to collectively as "slow-wave sleep". Stages I–IV comprise non-REM (or "NREM") sleep. The EEG in REM (rapid eye movement) sleep appears somewhat similar to the awake EEG.
EEG under general anesthesia depends on the type of anesthetic employed. With halogenated anesthetics, such as halothane, or intravenous agents, such as propofol, a rapid (alpha or low beta), nonreactive EEG pattern is seen over most of the scalp, especially anteriorly; in some older terminology this was known as a WAR (widespread anterior rapid) pattern, contrasted with a WAIS (widespread slow) pattern associated with high doses of opiates. Anesthetic effects on EEG signals are beginning to be understood at the level of drug actions on different kinds of synapses and the circuits that allow synchronized neuronal activity. Recent algorithms based on state-chart representations of EEG signals can now monitor brain states during general anesthesia, allowing the depth of sedation to be classified.
== Artifacts ==
EEG is an extremely useful technique for studying brain activity, but the signal measured is always contaminated by artifacts, which can impact the analysis of the data. An artifact is any measured signal that does not originate within the brain. Although multiple algorithms exist for the removal of artifacts, the problem of how to deal with them remains an open question. Artifacts can stem from issues relating to the instrument, such as faulty electrodes, line noise or high electrode impedance, or they may arise from the physiology of the subject being recorded. These can include eye blinks and movements, cardiac activity, and muscle activity; such artifacts are more complicated to remove. Artifacts may bias the visual interpretation of EEG data, as some may mimic cognitive activity that could affect diagnoses of problems such as Alzheimer's disease or sleep disorders. As such, the removal of artifacts from EEG data used for practical applications is of the utmost importance.
=== Artifact removal ===
It is important to be able to distinguish artifacts from genuine brain activity in order to prevent incorrect interpretations of EEG data. General approaches for the removal of artifacts from the data are prevention, rejection, and cancellation. The goal of any approach is to develop methodology capable of identifying and removing artifacts without affecting the quality of the EEG signal. As artifact sources are quite different, the majority of researchers focus on developing algorithms that will identify and remove a single type of noise in the signal. Simple filtering using a notch filter is commonly employed to reject components at the 50/60 Hz power-line frequency. However, such simple filters are not an appropriate choice for dealing with all artifacts, because for some their frequencies overlap with the EEG frequencies.
Regression algorithms have a moderate computational cost and are simple. They represented the most popular correction method up until the mid-1990s, when they were replaced by "blind source separation" methods. Regression algorithms work on the premise that all artifacts are captured by one or more reference channels. Subtracting these reference channels from the other contaminated channels, in either the time or frequency domain, by estimating the impact of the reference channels on the other channels, corrects the channels for the artifact. Although the requirement for reference channels ultimately led to this class of algorithms being replaced, they still represent the benchmark against which modern algorithms are evaluated.
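A minimal sketch of the regression idea, assuming a single EOG reference channel and a synthetic blink (the propagation coefficient and blink shape are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2048

brain = rng.standard_normal(n)    # "true" cortical signal (unknown in practice)
eog = np.zeros(n)                 # reference channel recorded near the eyes
eog[500:520] = 40.0               # a blink-like deflection
contaminated = brain + 0.3 * eog  # scalp channel picks up 30% of the blink

# Least-squares estimate of the propagation coefficient from the EOG
# reference to the scalp channel, then subtract the scaled reference.
beta = np.dot(eog, contaminated) / np.dot(eog, eog)
cleaned = contaminated - beta * eog
```

The key weakness, noted above, is the need for clean reference channels: if the EOG channel itself contains brain activity, that activity is subtracted along with the blink.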
Blind source separation (BSS) algorithms employed to remove artifacts include principal component analysis (PCA) and independent component analysis (ICA), and several algorithms in this class have been successful at tackling most physiological artifacts. Recent real-time algorithms based on the wavelet transform, called WQN, can now find and replace artifact segments in real time in the absence of artifact information. These classes of algorithms depend on the continuity of spectral energy in the different frequency bands.
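The ICA variant of blind source separation can be sketched as follows. This is a toy example on a synthetic two-channel mixture using scikit-learn's FastICA; the use of kurtosis to pick out the "spiky" artifact component is a common heuristic, not part of ICA itself:

```python
import numpy as np
from sklearn.decomposition import FastICA

n = 2000
t = np.linspace(0, 8, n)
brain = np.sin(2 * np.pi * 10 * t)                    # stand-in cortical rhythm
blink = (np.abs(((t * 2) % 2) - 1) > 0.9).astype(float)  # spiky blink-like source

# Two scalp channels, each a different linear mixture of the two sources
X = np.c_[brain + 0.5 * blink, 0.3 * brain + blink]

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)          # estimated independent components

# Identify the blink component by its kurtosis (spiky = heavy-tailed), zero it,
# and back-project the remaining components to the channel space
kurt = ((S - S.mean(0)) ** 4).mean(0) / S.var(0) ** 2
S[:, np.argmax(kurt)] = 0
cleaned = ica.inverse_transform(S)
```

The cleaned first channel correlates strongly with the original `brain` source while the blink contribution is largely removed, which is the "blind" part: no reference channel was required, in contrast to the regression methods above.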
=== Physiological artifacts ===
==== Ocular artifacts ====
Ocular artifacts affect the EEG signal significantly. Eye movements change the electric field surrounding the eyes, which distorts the electric field over the scalp and therefore the signal recorded there. Researchers disagree about the mechanism: some argue that ocular artifacts are, or may reasonably be described as, a single generator, while others argue that it is important to understand the potentially complicated underlying mechanisms. Three mechanisms have been proposed to explain the ocular artifact.
The first, and most widely held, is corneal-retinal dipole movement: an electric dipole is formed between the cornea and retina, as the former is positively and the latter negatively charged, and when the eye moves, the dipole moves with it, altering the electric field over the scalp. The second mechanism is retinal dipole movement. It is similar to the first, but holds that the potential difference, and hence the dipole, lies across the retina, with the cornea having little effect. The third mechanism is eyelid movement. The voltage around the eyes changes when the eyelid moves, even if the eyeball does not; the eyelid can be described as a sliding potential source, and the impact of blinking on the recorded EEG differs from that of eye movement.
Eyelid fluttering artifacts of a characteristic type were previously called Kappa rhythm (or Kappa waves). They are usually seen in the prefrontal leads, that is, just over the eyes, sometimes during mental activity, and usually fall in the theta (4–7 Hz) or alpha (7–14 Hz) range. They were named as a rhythm because they were believed to originate in the brain. Later study revealed that they are generated by rapid fluttering of the eyelids, sometimes so minute that it is difficult to see. They are in fact noise in the EEG reading and should not technically be called a rhythm or wave. Current usage in electroencephalography therefore refers to the phenomenon as an eyelid fluttering artifact rather than a Kappa rhythm (or wave).
The propagation of the ocular artifact is affected by multiple factors, including the properties of the subject's skull, neuronal tissues and skin, but the signal may be approximated as inversely proportional to the square of the distance from the eyes. The electrooculogram (EOG), a set of electrodes measuring voltage changes close to the eye, is the most common tool for dealing with the eye-movement artifact in the EEG signal.
==== Muscular artifacts ====
Another source of artifacts is muscle movement anywhere on the body. This class of artifact is usually recorded by all electrodes on the scalp, as it reflects myogenic (muscle-generated) electrical activity. These artifacts have no single origin; they arise from functionally independent muscle groups, so the characteristics of the artifact are not constant. The observed patterns change with the subject's sex, the particular muscle tissue and its degree of contraction. The frequency range of muscular artifacts is wide and overlaps with every classic EEG rhythm, but most of the power is concentrated in the lower part of the observed 20 to 300 Hz range, making the gamma band particularly susceptible to muscular artifacts. Some muscle artifacts may have activity at frequencies as low as 2 Hz, so the delta and theta bands may also be affected. Muscular artifacts may impact sleep studies, as unconscious bruxism (grinding of teeth) or snoring can seriously degrade the quality of the recorded EEG, and recordings from epilepsy patients may likewise be significantly affected.
==== Cardiac artifacts ====
The potential due to cardiac activity introduces electrocardiographic (ECG) artifacts into the EEG. These artifacts may be removed with the help of an ECG reference signal.
==== Other physiological artifacts ====
Glossokinetic artifacts are caused by the potential difference between the base and the tip of the tongue. Minor tongue movements can contaminate the EEG, especially in parkinsonian and tremor disorders.
=== Environmental artifacts ===
In addition to artifacts generated by the body, many artifacts originate from outside the body. Movement by the patient, or even just settling of the electrodes, may cause electrode pops, spikes originating from a momentary change in the impedance of a given electrode. Poor grounding of the EEG electrodes can cause significant 50 or 60 Hz artifact, depending on the local power system's frequency. A third source of possible interference can be the presence of an IV drip; such devices can cause rhythmic, fast, low-voltage bursts, which may be confused for spikes.
== Abnormal activity ==
Abnormal activity can broadly be separated into epileptiform and non-epileptiform activity. It can also be separated into focal or diffuse.
Focal epileptiform discharges represent fast, synchronous potentials in a large number of neurons in a somewhat discrete area of the brain. These can occur as interictal activity, between seizures, and represent an area of cortical irritability that may be predisposed to producing epileptic seizures. Interictal discharges are not wholly reliable for determining whether a patient has epilepsy or where their seizures might originate. (See focal epilepsy.)
Generalized epileptiform discharges often have an anterior maximum, but these are seen synchronously throughout the entire brain. They are strongly suggestive of a generalized epilepsy.
Focal non-epileptiform abnormal activity may occur over areas of the brain where there is focal damage of the cortex or white matter. It often consists of an increase in slow frequency rhythms or a loss of normal higher frequency rhythms. It may also appear as focal or unilateral decrease in amplitude of the EEG signal.
Diffuse non-epileptiform abnormal activity may manifest as diffuse abnormally slow rhythms or bilateral slowing of normal rhythms, such as the PBR.
Intracortical electroencephalogram electrodes and subdural electrodes can be used in tandem to discriminate artifact from epileptiform and other severe neurological events.
More advanced measures of abnormal EEG signals have also recently received attention as possible biomarkers for different disorders such as Alzheimer's disease.
=== Remote communication ===
Systems for decoding imagined speech from EEG have applications such as in brain–computer interfaces.
=== EEG diagnostics ===
The Department of Defense (DoD), the Department of Veterans Affairs (VA) and the U.S. Army Research Laboratory (ARL) collaborated on EEG diagnostics to detect mild to moderate traumatic brain injury (mTBI) in combat soldiers. Between 2000 and 2012, 75 percent of the brain injuries sustained in U.S. military operations were classified as mTBI. In response, the DoD pursued new technologies capable of rapid, accurate, non-invasive and field-capable detection of mTBI.
Combat personnel often develop PTSD and mTBI together. Both conditions present with altered low-frequency brain wave oscillations, but in opposite directions: PTSD presents with decreased low-frequency oscillations, whereas mTBI is linked to increased low-frequency oscillations. Effective EEG diagnostics can help doctors accurately identify each condition and treat injuries appropriately in order to mitigate long-term effects.
Traditionally, clinical evaluation of EEGs involved visual inspection. Quantitative electroencephalography (qEEG) instead applies computerized algorithmic methods: it analyzes a specific region of the brain and transforms the data into a meaningful "power spectrum" of the area. Accurately differentiating between mTBI and PTSD can significantly improve recovery outcomes for patients, especially since long-term changes in neural communication can persist after an initial mTBI incident.
Another common measurement made from EEG data is that of complexity measures such as Lempel-Ziv complexity, fractal dimension, and spectral flatness, which are associated with particular pathologies or pathology stages.
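One of the measures just mentioned, Lempel-Ziv complexity, counts the number of distinct phrases in a left-to-right parsing of a binarized signal. The following is a minimal illustrative implementation; binarizing at the median is a common but not universal convention:

```python
import numpy as np

def lempel_ziv_complexity(seq: str) -> int:
    """Number of phrases in a simple left-to-right LZ76-style parsing."""
    i, count, n = 0, 0, len(seq)
    while i < n:
        length = 1
        # Extend the current phrase while it already occurs in the preceding text
        while i + length <= n and seq[i:i + length] in seq[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count

def binarize(x) -> str:
    """Threshold a signal at its median to obtain a binary string."""
    return "".join("1" if v > np.median(x) else "0" for v in x)

rng = np.random.default_rng(0)
noise = rng.standard_normal(256)                  # irregular signal
sine = np.sin(np.linspace(0, 8 * np.pi, 256))     # regular signal

lz_noise = lempel_ziv_complexity(binarize(noise))  # high: many novel phrases
lz_sine = lempel_ziv_complexity(binarize(sine))    # low: repetitive structure
```

The contrast between the two values illustrates why such measures are informative: irregular activity produces many novel phrases, while regular rhythms produce few.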
== Economics ==
Inexpensive EEG devices exist for the low-cost research and consumer markets. Recently, a few companies have miniaturized medical grade EEG technology to create versions accessible to the general public. Some of these companies have built commercial EEG devices retailing for less than US$100.
In 2004 OpenEEG released its ModularEEG as open source hardware. Compatible open source software includes a game for balancing a ball.
In 2007 NeuroSky released the first affordable consumer based EEG along with the game NeuroBoy. This was also the first large scale EEG device to use dry sensor technology.
In 2008 OCZ Technology developed a device for use in video games relying primarily on electromyography.
In 2008 the Final Fantasy developer Square Enix announced that it was partnering with NeuroSky to create a game, Judecca.
In 2009 Mattel partnered with NeuroSky to release the Mindflex, a game that used an EEG to steer a ball through an obstacle course; it is by far the best-selling consumer EEG device to date.
In 2009 Uncle Milton Industries partnered with NeuroSky to release the Star Wars Force Trainer, a game designed to create the illusion of possessing the Force.
In 2010, NeuroSky added a blink and electromyography function to the MindSet.
In 2011, NeuroSky released the MindWave, an EEG device designed for educational purposes and games. The MindWave won the Guinness Book of World Records award for "Heaviest machine moved using a brain control interface".
In 2012, a Japanese gadget project, neurowear, released Necomimi: a headset with motorized cat ears. The headset is a NeuroSky MindWave unit with two motors on the headband where a cat's ears might be. Slipcovers shaped like cat ears sit over the motors so that, as the device registers emotional states, the ears move accordingly: for example, when the wearer is relaxed the ears fall to the sides, and they perk up again when the wearer is excited.
In 2014, OpenBCI released an eponymous open source brain-computer interface after a successful Kickstarter campaign in 2013. The board, later renamed "Cyton", has 8 channels, expandable to 16 with the Daisy module, and supports EEG, EKG, and EMG. The Cyton board is based on the Texas Instruments ADS1299 IC and an Arduino or PIC microcontroller, and initially cost $399 before increasing in price to $999. It uses standard metal cup electrodes and conductive paste.
In 2015, Mind Solutions Inc released the smallest consumer BCI to date, the NeuroSync. This device functions as a dry sensor at a size no larger than a Bluetooth ear piece.
In 2015, the China-based company Macrotellect released the BrainLink Pro and BrainLink Lite, consumer-grade EEG wearables providing 20 brain fitness enhancement apps on the Apple and Android app stores.
In 2021, BioSerenity released the Neuronaute and Icecap, a single-use disposable EEG headset that allows recording with quality equivalent to traditional cup electrodes.
== Future research ==
The EEG has been used for many purposes besides the conventional uses of clinical diagnosis and conventional cognitive neuroscience. An early use was during World War II by the U.S. Army Air Corps to screen out pilots in danger of having seizures; long-term EEG recordings in epilepsy patients are still used today for seizure prediction. Neurofeedback remains an important extension, and in its most advanced form is also attempted as the basis of brain computer interfaces. The EEG is also used quite extensively in the field of neuromarketing.
The EEG is altered by drugs that affect brain functions, the chemicals that are the basis for psychopharmacology. Berger's early experiments recorded the effects of drugs on EEG. The science of pharmaco-electroencephalography has developed methods to identify substances that systematically alter brain functions for therapeutic and recreational use.
Honda is attempting to develop a system to enable an operator to control its Asimo robot using EEG, a technology it eventually hopes to incorporate into its automobiles.
EEGs have been used as evidence in criminal trials in the Indian state of Maharashtra. Brain Electrical Oscillation Signature Profiling (BEOS), an EEG technique, was used in the trial of State of Maharashtra v. Sharma to show Sharma remembered using arsenic to poison her ex-fiancé, although the reliability and scientific basis of BEOS is disputed.
A lot of research is currently being carried out to make EEG devices smaller, more portable and easier to use. So-called "wearable EEG" is based on low-power wireless collection electronics and 'dry' electrodes that do not require a conductive gel. Wearable EEG aims to provide small devices that are present only on the head and can record EEG for days, weeks, or months at a time, as with ear-EEG. Such prolonged and easy-to-use monitoring could make a step change in the diagnosis of chronic conditions such as epilepsy, and greatly improve the end-user acceptance of BCI systems. Research is also being carried out on increasing the battery lifetime of wearable EEG devices through data reduction approaches.
In research, EEG is now often used in combination with machine learning. EEG data are pre-processed and then passed to machine learning algorithms, which are trained to recognize different diseases such as schizophrenia, epilepsy or dementia; they are also increasingly used for seizure detection. Machine learning allows the data to be analyzed automatically. In the long run this research is intended to build algorithms that support physicians in their clinical practice and to provide further insight into diseases. In this vein, complexity measures of EEG data are often calculated, such as Lempel-Ziv complexity, fractal dimension, and spectral flatness; it has been shown that combining or multiplying such measures can reveal previously hidden information in EEG data.
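A sketch of such a machine learning pipeline is shown below, using scikit-learn on synthetic epochs in which one class carries extra delta-band (1–4 Hz) power. The sampling rate, epoch length, feature bands and classifier are all illustrative assumptions, not a clinically validated recipe:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
fs, n_samples = 128, 256
t = np.arange(n_samples) / fs

# Synthetic two-class data: class 1 epochs carry extra 2 Hz (delta) power
epochs, labels = [], []
for label in (0, 1):
    for _ in range(100):
        e = rng.standard_normal(n_samples)
        if label:
            e += 2 * np.sin(2 * np.pi * 2 * t + rng.uniform(0, 2 * np.pi))
        epochs.append(e)
        labels.append(label)
epochs, labels = np.array(epochs), np.array(labels)

def band_power(x, lo, hi):
    """Total spectral power of each epoch in the [lo, hi) band."""
    freqs = np.fft.rfftfreq(x.shape[-1], 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[..., (freqs >= lo) & (freqs < hi)].sum(axis=-1)

# Features: power in the classic delta, theta, alpha and beta bands
bands = [(1, 4), (4, 8), (8, 13), (13, 30)]
X = np.stack([band_power(epochs, lo, hi) for lo, hi in bands], axis=1)

clf = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(clf, X, labels, cv=5)   # cross-validated accuracy
```

The pipeline mirrors the workflow described above: pre-processing (band-power feature extraction), training, and automatic classification, with cross-validation standing in for the clinical evaluation a real study would require.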
EEG signals from musical performers were used to create instant compositions and one CD by the Brainwave Music Project, run at the Computer Music Center at Columbia University by Brad Garton and Dave Soldier. Similarly, an hour-long recording of the brainwaves of Ann Druyan was included on the Voyager Golden Record, launched on the Voyager probes in 1977, in case any extraterrestrial intelligence could decode her thoughts, which included what it was like to fall in love.
== History ==
In 1875, Richard Caton (1842–1926), a physician practicing in Liverpool, presented his findings about electrical phenomena of the exposed cerebral hemispheres of rabbits and monkeys in the British Medical Journal. In 1890, Polish physiologist Adolf Beck published an investigation of spontaneous electrical activity of the brain of rabbits and dogs that included rhythmic oscillations altered by light. Beck started experiments on the electrical brain activity of animals. Beck placed electrodes directly on the surface of the brain to test for sensory stimulation. His observation of fluctuating brain activity led to the conclusion of brain waves.
In 1912, Ukrainian physiologist Vladimir Vladimirovich Pravdich-Neminsky published the first animal EEG and the evoked potential of the mammalian (dog). In 1914, Napoleon Cybulski and Jelenska-Macieszyna photographed EEG recordings of experimentally induced seizures.
German physiologist and psychiatrist Hans Berger (1873–1941) recorded the first human EEG in 1924. Expanding on work previously conducted on animals by Richard Caton and others, Berger also invented the electroencephalograph (giving the device its name), an invention described "as one of the most surprising, remarkable, and momentous developments in the history of clinical neurology". His discoveries were first confirmed by British scientists Edgar Douglas Adrian and B. H. C. Matthews in 1934 and developed by them.
In 1934, Fisher and Lowenbach first demonstrated epileptiform spikes. In 1935, Gibbs, Davis and Lennox described interictal spike waves and the three cycles/s pattern of clinical absence seizures, which began the field of clinical electroencephalography. Subsequently, in 1936 Gibbs and Jasper reported the interictal spike as the focal signature of epilepsy. The same year, the first EEG laboratory opened at Massachusetts General Hospital.
Franklin Offner (1911–1999), professor of biophysics at Northwestern University developed a prototype of the EEG that incorporated a piezoelectric inkwriter called a Crystograph (the whole device was typically known as the Offner Dynograph).
In 1947, The American EEG Society was founded and the first International EEG congress was held. In 1953 Aserinsky and Kleitman described REM sleep.
In the 1950s, William Grey Walter developed an adjunct to EEG called EEG topography, which allowed for the mapping of electrical activity across the surface of the brain. This enjoyed a brief period of popularity in the 1980s and seemed especially promising for psychiatry. It was never accepted by neurologists and remains primarily a research tool.
An electroencephalograph system manufactured by Beckman Instruments was used on at least one of the Project Gemini manned spaceflights (1965–1966) to monitor the brain waves of astronauts on the flight. It was one of many Beckman Instruments specialized for and used by NASA.
The first use of EEG to control a physical object, a robot, came in 1988. The robot would follow a line or stop depending on the subject's alpha activity: if the subject relaxed and closed their eyes, increasing alpha activity, the robot would move; opening the eyes, and thus decreasing alpha activity, caused the robot to stop.
== See also ==
== References ==
== Further reading ==
== External links ==
"A tutorial on simulating and estimating EEG sources in Matlab". Archived from the original on March 7, 2016.
"A tutorial on analysis of ongoing, evoked, and induced neuronal activity: Power spectra, wavelet analysis, and coherence". Archived from the original on November 7, 2018. | Wikipedia/Electroencephalography |
Diploma in Computer Science, originally known as the Diploma in Numerical Analysis and Automatic Computing, was a conversion course in computer science offered by the University of Cambridge. It is equivalent to a master's degree in present-day nomenclature but the title diploma was retained for historic reasons, "diploma" being the archaic term for a master's degree.
The diploma was the world's first full-year taught course in computer science, starting in 1953. It attracted students of mathematics, science and engineering. At its peak, there were 50 students on the course. UK government (EPSRC) funding was withdrawn in 2001 and student numbers dropped dramatically. In 2007, the university decided to withdraw the diploma at the end of the 2007–08 academic year, after 55 years of service.
== History ==
The introduction of this one-year graduate course was motivated by a University of Cambridge Mathematics Faculty Board Report on the "demand for postgraduate instruction in numerical analysis and automatic computing … [which] if not met, there is a danger that the application to scientific research of the machines now being built will be hampered". The University of Cambridge Computer Laboratory "was one of the pioneers in the development and use of electronic computing-machines (sic)". It had introduced a Summer School in 1950, but the Report noted that "The Summer School deals [only] with 'programming', rather than the general theory of the numerical methods which are programmed." The Diploma "would include theoretical and practical work … [and also] instruction about the various types of computing-machine … and the principles of design on which they are based." With only a few students initially, no extra staff would be needed.
University-supported teaching and research staff in the Laboratory at the time were Maurice Wilkes (head of the laboratory), J. C. P. Miller, W. Renwick, E. N. Mutch, and S. Gill, joined slightly later by C. B. Haselgrove.
In its final incarnation, the Diploma was a 10-month course, evaluated two-thirds on examination and one-third on a project dissertation. Most of the examined courses were shared by the second year ("Part IB") of the undergraduate Computer Science Tripos course, with some additional lectures specifically for the Diploma students and four of the third year undergraduate ("Part II") lecture courses also included.
There were three grades of result from the Diploma: distinction (roughly equivalent to first class honours), pass (equivalent to second or third class honours), and fail.
== Notable alumni ==
== References ==
== External links ==
University of Cambridge Computer Laboratory | Wikipedia/Cambridge_Diploma_in_Computer_Science |
A functional specification (also, functional spec, specs, functional specifications document (FSD), functional requirements specification) in systems engineering and software development is a document that specifies the functions that a system or component must perform (often part of a requirements specification) (ISO/IEC/IEEE 24765-2010).
The documentation typically describes what is needed by the system user as well as requested properties of inputs and outputs (e.g. of the software system). A functional specification is the more technical response to a matching requirements document, e.g. the product requirements document (PRD); it thus picks up the results of the requirements analysis stage. On more complex systems, multiple levels of functional specifications will typically nest within each other, e.g. at the system level, the module level and the level of technical details.
== Overview ==
A functional specification does not define the inner workings of the proposed system; it does not include the specification of how the system function will be implemented.
A functional requirement in a functional specification might state as follows:
When the user clicks the OK button, the dialog is closed and the focus is returned to the main window in the state it was in before this dialog was displayed.
Such a requirement describes an interaction between an external agent (the user) and the software system. When the user provides input to the system by clicking the OK button, the program responds (or should respond) by closing the dialog window containing the OK button.
== Functional specification topics ==
=== Purpose ===
There are many purposes for functional specifications. One of the primary purposes on team projects is to achieve some form of team consensus on what the program is to achieve before making the more time-consuming effort of writing source code and test cases, followed by a period of debugging. Typically, such consensus is reached after one or more reviews by the stakeholders on the project at hand after having negotiated a cost-effective way to achieve the requirements the software needs to fulfill.
To let the developers know what to build.
To let the testers know what tests to run.
To let stakeholders know what they are getting.
=== Process ===
In the ordered industrial software engineering life-cycle (waterfall model), the functional specification describes what has to be implemented; the subsequent systems architecture document describes how the functions will be realized using a chosen software environment. In non-industrial, prototypical systems development, functional specifications are typically written after or as part of requirements analysis.
When the team agrees that functional specification consensus is reached, the functional spec is typically declared "complete" or "signed off". After this, typically the software development and testing team write source code and test cases using the functional specification as the reference. While testing is performed, the behavior of the program is compared against the expected behavior as defined in the functional specification.
=== Methods ===
One popular method of writing a functional specification document involves drawing or rendering either simple wire frames or accurate, graphically designed UI screenshots. After this has been completed, and the screen examples are approved by all stakeholders, graphical elements can be numbered and written instructions can be added for each number on the screen example. For example, a login screen can have the username field labeled '1' and password field labeled '2,' and then each number can be declared in writing, for use by software engineers and later for beta testing purposes to ensure that functionality is as intended. The benefit of this method is that countless additional details can be attached to the screen examples.
== Examples of functional specifications ==
Advanced Microcontroller Bus Architecture
Extensible Firmware Interface
Multiboot specification
Real-time specification for Java
Single UNIX Specification
== Types of software development specifications ==
Bit specification (disambiguation)
Design specification
Diagnostic design specification
Product design specification
Software requirements specification
== See also ==
Benchmarking
Software development process
Specification (technical standard)
Software verification and validation
== References ==
== External links ==
Painless Functional Specifications, 4-part series by Joel Spolsky | Wikipedia/Functional_specification |
An entity–relationship model (or ER model) describes interrelated things of interest in a specific domain of knowledge. A basic ER model is composed of entity types (which classify the things of interest) and specifies relationships that can exist between entities (instances of those entity types).
In software engineering, an ER model is commonly formed to represent things a business needs to remember in order to perform business processes. Consequently, the ER model becomes an abstract data model, that defines a data or information structure that can be implemented in a database, typically a relational database.
Entity–relationship modeling was developed for database design by Peter Chen and published in a 1976 paper, with variants of the idea existing previously. Today it is commonly used for teaching students the basics of database structure. Some ER models show super and subtype entities connected by generalization-specialization relationships, and an ER model can also be used to specify domain-specific ontologies.
== Introduction ==
An ER model usually results from systematic analysis to define and describe the data created and needed by processes in a business area. Typically, it represents records of entities and events monitored and directed by business processes, rather than the processes themselves. It is usually drawn in a graphical form as boxes (entities) that are connected by lines (relationships) which express the associations and dependencies between entities. It can also be expressed in a verbal form, for example: one building may be divided into zero or more apartments, but one apartment can only be located in one building.
Entities may be defined not only by relationships, but also by additional properties (attributes), which include identifiers called "primary keys". Diagrams created to represent attributes as well as entities and relationships may be called entity-attribute-relationship diagrams, rather than entity–relationship models.
An ER model is typically implemented as a database. In a simple relational database implementation, each row of a table represents one instance of an entity type, and each field in a table represents an attribute type. In a relational database a relationship between entities is implemented by storing the primary key of one entity as a pointer or "foreign key" in the table of another entity.
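This foreign-key implementation can be made concrete with a small sketch using Python's built-in sqlite3 module, reusing the building/apartment example from the introduction; all table and column names are illustrative:

```python
import sqlite3

# In-memory database for illustration
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only if enabled

# Each row of "building" is one instance of the Building entity type
conn.execute("""CREATE TABLE building (
    building_id INTEGER PRIMARY KEY,
    address     TEXT NOT NULL)""")

# One building may contain many apartments; each apartment belongs to exactly
# one building. The relationship is stored as a foreign key.
conn.execute("""CREATE TABLE apartment (
    apartment_id INTEGER PRIMARY KEY,
    number       TEXT NOT NULL,
    building_id  INTEGER NOT NULL REFERENCES building(building_id))""")

conn.execute("INSERT INTO building VALUES (1, '12 Main St')")
conn.execute("INSERT INTO apartment VALUES (1, '1A', 1)")
conn.execute("INSERT INTO apartment VALUES (2, '1B', 1)")

# An apartment pointing at a nonexistent building violates the relationship
try:
    conn.execute("INSERT INTO apartment VALUES (3, '2A', 99)")
except sqlite3.IntegrityError:
    pass  # rejected, as the ER model requires

# Following the foreign key reconstructs the relationship
rows = conn.execute(
    "SELECT b.address, a.number FROM apartment a "
    "JOIN building b ON a.building_id = b.building_id "
    "ORDER BY a.apartment_id").fetchall()
```

The join at the end recovers the verbal form of the relationship: each apartment is located in exactly one building, while a building may contain zero or more apartments.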
There is a tradition for ER/data models to be built at two or three levels of abstraction. The conceptual-logical-physical hierarchy below is used in other kinds of specification, and is different from the three schema approach to software engineering.
Conceptual data model
This is the highest level ER model in that it contains the least granular detail but establishes the overall scope of what is to be included within the model set. The conceptual ER model normally defines master reference data entities that are commonly used by the organization. Developing an enterprise-wide conceptual ER model is useful to support documenting the data architecture for an organization.
A conceptual ER model may be used as the foundation for one or more logical data models (see below). The purpose of the conceptual ER model is then to establish structural metadata commonality for the master data entities between the set of logical ER models. The conceptual data model may be used to form commonality relationships between ER models as a basis for data model integration.
Logical data model
A logical ER model does not require a conceptual ER model, especially if the scope of the logical ER model includes only the development of a distinct information system. The logical ER model contains more detail than the conceptual ER model. In addition to master data entities, operational and transactional data entities are now defined. The details of each data entity are developed and the relationships between these data entities are established. The logical ER model is however developed independently of the specific database management system into which it can be implemented.
Physical data model
One or more physical ER models may be developed from each logical ER model. The physical ER model is normally developed to be instantiated as a database. Therefore, each physical ER model must contain enough detail to produce a database and each physical ER model is technology dependent since each database management system is somewhat different.
The physical model is normally instantiated in the structural metadata of a database management system as relational database objects such as database tables, database indexes such as unique key indexes, and database constraints such as a foreign key constraint or a commonality constraint. The ER model is also normally used to design modifications to the relational database objects and to maintain the structural metadata of the database.
The first stage of information system design uses these models during the requirements analysis to describe information needs or the type of information that is to be stored in a database. The data modeling technique can be used to describe any ontology (i.e. an overview and classifications of used terms and their relationships) for a certain area of interest. In the case of the design of an information system that is based on a database, the conceptual data model is, at a later stage (usually called logical design), mapped to a logical data model, such as the relational model. This in turn is mapped to a physical model during physical design. Sometimes, both of these phases are referred to as "physical design."
== Components ==
An entity may be defined as a thing that is capable of an independent existence that can be uniquely identified, and is capable of storing data. An entity is an abstraction from the complexities of a domain. When we speak of an entity, we normally speak of some aspect of the real world that can be distinguished from other aspects of the real world.
An entity is a thing that exists either physically or logically. An entity may be a physical object such as a house or a car (they exist physically), an event such as a house sale or a car service, or a concept such as a customer transaction or order (they exist logically—as a concept). Although the term entity is the one most commonly used, following Chen, entities and entity-types should be distinguished. An entity-type is a category. An entity, strictly speaking, is an instance of a given entity-type. There are usually many instances of an entity-type. Because the term entity-type is somewhat cumbersome, most people tend to use the term entity as a synonym.
Entities can be thought of as nouns. Examples include a computer, an employee, a song, or a mathematical theorem.
A relationship captures how entities are related to one another. Relationships can be thought of as verbs, linking two or more nouns. Examples include an owns relationship between a company and a computer, a supervises relationship between an employee and a department, a performs relationship between an artist and a song, and a proves relationship between a mathematician and a conjecture.
The model's linguistic aspect described above is used in the declarative database query language ERROL, which mimics natural language constructs. ERROL's semantics and implementation are based on reshaped relational algebra (RRA), a relational algebra that is adapted to the entity–relationship model and captures its linguistic aspect.
Entities and relationships can both have attributes. For example, an employee entity might have a Social Security Number (SSN) attribute, while a proved relationship may have a date attribute.
All entities except weak entities must have a minimal set of uniquely identifying attributes that may be used as a unique/primary key.
Entity-relationship diagrams (ERDs) do not show single entities or single instances of relations. Rather, they show entity sets (all entities of the same entity type) and relationship sets (all relationships of the same relationship type). For example, a particular song is an entity, the collection of all songs in a database is an entity set, the eaten relationship between a child and his lunch is a single relationship, and the set of all such child-lunch relationships in a database is a relationship set.
In other words, a relationship set corresponds to a relation in mathematics, while a relationship corresponds to a member of the relation.
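This correspondence can be made literal in a few lines of Python (the artist and song names are invented): a relationship set is simply a set of tuples, i.e. a relation in the mathematical sense, and each tuple is one relationship.

```python
# Entity sets and a relationship set modeled directly as Python sets.
songs = {"Song A", "Song B"}
artists = {"Artist X", "Artist Y"}

# "performs" relationship set: a subset of artists x songs.
# Each member of the set is a single relationship.
performs = {("Artist X", "Song A"), ("Artist Y", "Song A")}

# The relationship set is contained in the Cartesian product of the
# participating entity sets, as a mathematical relation must be.
assert performs <= {(a, s) for a in artists for s in songs}
```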
Certain cardinality constraints on relationship sets may be indicated as well.
Physical views show how data is actually stored.
=== Relationships, roles, and cardinalities ===
Chen's original paper gives an example of a relationship and its roles. He describes a relationship "marriage" and its two roles, "husband" and "wife".
A person plays the role of husband in a marriage (relationship) and another person plays the role of wife in the (same) marriage. These words are nouns.
Chen's terminology has also been applied to earlier ideas. The lines, arrows, and crow's feet of some diagrams owe more to the earlier Bachman diagrams than to Chen's relationship diagrams.
Another common extension to Chen's model is to "name" relationships and roles as verbs or phrases.
=== Role naming ===
It has also become prevalent to name roles with phrases such as is the owner of and is owned by. Correct nouns in this case are owner and possession. Thus, person plays the role of owner and car plays the role of possession rather than person plays the role of, is the owner of, etc.
Using nouns has direct benefit when generating physical implementations from semantic models. When a person has two relationships with car it is possible to generate names such as owner_person and driver_person, which are immediately meaningful.
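A minimal sketch of this naming scheme (the helper function and all names are hypothetical, not drawn from any particular modeling tool):

```python
def column_name(role: str, entity: str) -> str:
    """Derive a physical column name from a role noun and an entity name,
    e.g. role 'owner' on entity 'person' -> 'owner_person'."""
    return f"{role.lower()}_{entity.lower()}"

# Two relationships between person and car, distinguished by role nouns:
print(column_name("owner", "person"))   # owner_person
print(column_name("driver", "person"))  # driver_person
```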
=== Cardinalities ===
Modifications to the original specification can be beneficial. Chen described look-across cardinalities. As an aside, the Barker–Ellis notation, used in Oracle Designer, uses same-side for minimum cardinality (analogous to optionality) and role, but look-across for maximum cardinality (the crow's foot).
Research by Merise, Elmasri & Navathe and others has shown there is a preference for same-side for roles and both minimum and maximum cardinalities, and researchers (Feinerer, Dullea et al.) have shown that this is more coherent when applied to n-ary relationships of order greater than 2.
Dullea et al. state: "A 'look across' notation such as used in the UML does not effectively represent the semantics of participation constraints imposed on relationships where the degree is higher than binary."
Feinerer says: "Problems arise if we operate under the look-across semantics as used for UML associations. Hartmann investigates this situation and shows how and why different transformations fail." (Although the "reduction" mentioned is spurious as the two diagrams 3.4 and 3.5 are in fact the same) and also "As we will see on the next few pages, the look-across interpretation introduces several difficulties that prevent the extension of simple mechanisms from binary to n-ary associations."
Chen's notation for entity–relationship modeling uses rectangles to represent entity sets, and diamonds to represent relationships appropriate for first-class objects: they can have attributes and relationships of their own. If an entity set participates in a relationship set, they are connected with a line.
Attributes are drawn as ovals and connected with a line to exactly one entity or relationship set.
Cardinality constraints are expressed as follows:
a double line indicates a participation constraint, totality, or surjectivity: all entities in the entity set must participate in at least one relationship in the relationship set;
an arrow from an entity set to a relationship set indicates a key constraint, i.e. injectivity: each entity of the entity set can participate in at most one relationship in the relationship set;
a thick line indicates both, i.e. bijectivity: each entity in the entity set is involved in exactly one relationship;
an underlined name of an attribute indicates that it is a key: two different entities or relationships with this attribute always have different values for this attribute.
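The participation and key constraints above can be checked mechanically over a relationship set represented as tuples. This is an illustrative sketch with invented entity names, not part of any standard notation:

```python
def total(entities, rel, pos):
    """Participation constraint (double line): every entity occurs
    at least once at position `pos` of the relationship set."""
    return entities <= {t[pos] for t in rel}

def key(entities, rel, pos):
    """Key constraint (arrow): every entity occurs at most once
    at position `pos` of the relationship set."""
    occurrences = [t[pos] for t in rel if t[pos] in entities]
    return len(occurrences) == len(set(occurrences))

employees = {"e1", "e2"}
departments = {"d1", "d2"}
works_in = {("e1", "d1"), ("e2", "d1")}

print(total(employees, works_in, 0))    # True: every employee participates
print(key(employees, works_in, 0))      # True: each in at most one department
print(total(departments, works_in, 1))  # False: d2 has no employees
```

A thick line (bijectivity) would correspond to both checks holding at once.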
Attributes are often omitted as they can clutter up a diagram. Other diagram techniques often list entity attributes within the rectangles drawn for entity sets.
== Related diagramming convention techniques ==
Bachman notation
Barker's notation
EXPRESS
IDEF1X
Crow's foot notation (also Martin notation)
(min, max)-notation of Jean-Raymond Abrial in 1974
UML class diagrams
Merise
Object-role modeling
=== Crow's foot notation ===
Crow's foot notation, the beginning of which dates back to an article by Gordon Everest (1976), is used in Barker's notation, Structured Systems Analysis and Design Method (SSADM), and information technology engineering. Crow's foot diagrams represent entities as boxes, and relationships as lines between the boxes. Different shapes at the ends of these lines represent the relative cardinality of the relationship.
Crow's foot notation was in use in ICL in 1978, and was used in the consultancy practice CACI. Many of the consultants at CACI (including Richard Barker) came from ICL and subsequently moved to Oracle UK, where they developed the early versions of Oracle's CASE tools, introducing the notation to a wider audience.
With this notation, relationships cannot have attributes. Where necessary, relationships are promoted to entities in their own right: for example, if it is necessary to capture where and when an artist performed a song, a new entity "performance" is introduced (with attributes reflecting the time and place), and the relationship of an artist to a song becomes an indirect relationship via the performance (artist-performs-performance, performance-features-song).
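The promotion described above can be sketched in Python (all names invented): the attributed relationship becomes an entity set, and the two plain relationships fall out as projections of it.

```python
# "performance" entity set: each performance carries the attributes that a
# plain artist-song relationship could not hold in crow's foot notation.
performances = [
    {"id": 1, "artist": "Artist X", "song": "Song A",
     "time": "2024-06-01", "place": "Venue Z"},
]

# artist-performs-performance and performance-features-song are now
# recoverable as projections of the performance entity set:
performs = {(p["artist"], p["id"]) for p in performances}
features = {(p["id"], p["song"]) for p in performances}
print(performs, features)
```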
Three symbols are used to represent cardinality:
the ring represents "zero"
the dash represents "one"
the crow's foot represents "many" or "infinite"
These symbols are used in pairs to represent the four types of cardinality that an entity may have in a relationship. The inner component of the notation represents the minimum, and the outer component represents the maximum.
ring and dash → minimum zero, maximum one (optional)
dash and dash → minimum one, maximum one (mandatory)
ring and crow's foot → minimum zero, maximum many (optional)
dash and crow's foot → minimum one, maximum many (mandatory)
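These four pairs can be written down as a small lookup table; the symbol names and the None-for-many convention here are illustrative choices, not part of the notation itself:

```python
# The four crow's foot cardinality pairs as (minimum, maximum) bounds;
# keys are (inner symbol, outer symbol), None stands for "many".
CARDINALITY = {
    ("ring", "dash"): (0, 1),     # optional, at most one
    ("dash", "dash"): (1, 1),     # mandatory, exactly one
    ("ring", "crow"): (0, None),  # optional, many
    ("dash", "crow"): (1, None),  # mandatory, many
}

# The inner symbol gives the minimum, the outer symbol the maximum:
print(CARDINALITY[("ring", "crow")])  # (0, None)
```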
== Model usability issues ==
Users of a modeled database can encounter two well-known issues where the returned results differ from what the query author assumed. These are known as the fan trap and the chasm trap, and they can lead to inaccurate query results if not properly handled during the design of the Entity-Relationship Model (ER Model).
Both the fan trap and chasm trap underscore the importance of ensuring that ER models are not only technically correct but also fully and accurately reflect the real-world relationships they are designed to represent. Identifying and resolving these traps early in the design process helps avoid significant issues later, especially in complex databases intended for business intelligence or decision support.
=== Fan trap ===
The first issue is the fan trap. It occurs when a (master) table links to multiple tables in a one-to-many relationship. The issue derives its name from the visual appearance of the model when it is drawn in an entity–relationship diagram, as the linked tables 'fan out' from the master table. This type of model resembles a star schema, which is a common design in data warehouses. When attempting to calculate sums over aggregates using standard SQL queries based on the master table, the results can be unexpected and often incorrect due to the way relationships are structured. The miscalculation happens because SQL treats each relationship individually, which may result in double-counting or other inaccuracies. This issue is particularly common in decision support systems. To mitigate this, either the data model or the SQL query itself must be adjusted. Some database querying software designed for decision support includes built-in methods to detect and address fan traps.
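The double-counting can be reproduced in a few lines with Python's sqlite3 module (the schema and figures are invented for illustration): one master row fans out to two rows in each of two child tables, so a sum taken across both joins is inflated.

```python
import sqlite3

# Fan trap sketch: one project fans out to two staff rows and two budget
# rows; joining both children multiplies the rows, so SUM double-counts.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE project (id INTEGER PRIMARY KEY);
    CREATE TABLE staff  (project_id INTEGER, name TEXT);
    CREATE TABLE budget (project_id INTEGER, amount INTEGER);
    INSERT INTO project VALUES (1);
    INSERT INTO staff  VALUES (1, 'Ann'), (1, 'Bob');
    INSERT INTO budget VALUES (1, 100), (1, 200);
""")

# Naive query through the master table: each budget row is repeated once
# per staff row, so the total comes out as 600 instead of 300.
wrong = db.execute("""
    SELECT SUM(b.amount) FROM project p
    JOIN staff s  ON s.project_id = p.id
    JOIN budget b ON b.project_id = p.id
""").fetchone()[0]

# One fix: aggregate each one-to-many path separately.
right = db.execute("SELECT SUM(amount) FROM budget").fetchone()[0]
print(wrong, right)  # 600 300
```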
=== Chasm trap ===
The second issue is the chasm trap. A chasm trap occurs when a model suggests the existence of a relationship between entity types, but the pathway between these entities is incomplete or missing in certain instances.
For example, imagine a database where a Building has one or more Rooms, and these Rooms hold zero or more Computers. One might expect to query the model to list all Computers in a Building. However, if a Computer is temporarily not assigned to a Room (perhaps under repair or stored elsewhere), it won't be included in the query results. The query would only return Computers currently assigned to Rooms, not all Computers in the Building. This reflects a flaw in the model, as it fails to account for Computers that are in the Building but not in a Room. To resolve this, an additional relationship directly linking the Building and Computers would be required.
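A minimal sqlite3 reproduction of this example (the schema is invented, and a direct building column on the computer table stands in for the additional relationship):

```python
import sqlite3

# Chasm trap sketch: a computer not currently assigned to any room is
# invisible to a query that reaches computers only via rooms.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE room     (id INTEGER PRIMARY KEY, building TEXT);
    CREATE TABLE computer (id INTEGER PRIMARY KEY, room_id INTEGER,
                           building TEXT);
    INSERT INTO room VALUES (1, 'HQ');
    -- computer 20 is in HQ but temporarily not in any room:
    INSERT INTO computer VALUES (10, 1, 'HQ'), (20, NULL, 'HQ');
""")

# Pathway through rooms misses the unassigned computer.
via_rooms = db.execute("""
    SELECT c.id FROM room r JOIN computer c ON c.room_id = r.id
    WHERE r.building = 'HQ'
""").fetchall()

# The direct building-computer relationship closes the gap.
direct = db.execute(
    "SELECT id FROM computer WHERE building = 'HQ'").fetchall()
print(len(via_rooms), len(direct))  # 1 2
```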
== In semantic modeling ==
=== Semantic model ===
A semantic model is a model of concepts and is sometimes called a "platform independent model". It is an intensional model. At least since Carnap, it is well known that:
"...the full meaning of a concept is constituted by two aspects, its intension and its extension. The first part comprises the embedding of a concept in the world of concepts as a whole, i.e. the totality of all relations to other concepts. The second part establishes the referential meaning of the concept, i.e. its counterpart in the real or in a possible world".
=== Extension model ===
An extensional model is one that maps to the elements of a particular methodology or technology, and is thus a "platform specific model". The UML specification explicitly states that associations in class models are extensional, and this is in fact self-evident by considering the extensive array of additional "adornments" provided by the specification over and above those provided by any of the prior candidate "semantic modelling languages" ("UML as a Data Modeling Notation, Part 2").
=== Entity–relationship origins ===
Peter Chen, the father of ER modeling, said in his seminal paper:
"The entity-relationship model adopts the more natural view that the real world consists of entities and relationships. It incorporates some of the important semantic information about the real world."
In his original 1976 article Chen explicitly contrasts entity–relationship diagrams with record modelling techniques:
"The data structure diagram is a representation of the organization of records and is not an exact representation of entities and relationships."
Several other authors also support Chen's program:
==== Philosophical alignment ====
Chen is in accord with philosophical traditions from the time of the Ancient Greek philosophers: Plato and Aristotle. Plato himself associates knowledge with the apprehension of unchanging Forms (namely, archetypes or abstract representations of the many types of things, and properties) and their relationships to one another.
== Limitations ==
An ER model is primarily conceptual, an ontology that expresses predicates in a domain of knowledge.
ER models are readily used to represent relational database structures (after Codd and Date) but not so often to represent other kinds of data structure (such as data warehouses and document stores).
Some ER model notations include symbols to show super-sub-type relationships and mutual exclusion between relationships; some do not.
An ER model does not show an entity's life history (how its attributes and/or relationships change over time in response to events). For many systems, such state changes are nontrivial and important enough to warrant explicit specification.
Some have extended ER modeling with constructs to represent state changes, an approach supported by the original author; an example is Anchor Modeling.
Others model state changes separately, using state transition diagrams or some other process modeling technique.
Many other kinds of diagram are drawn to model other aspects of systems, including the 14 diagram types offered by UML.
Today, even where ER modeling could be useful, it is uncommon because many use tools that support similar kinds of model, notably class diagrams for OO programming and data models for relational database management systems. Some of these tools can generate code from diagrams and reverse-engineer diagrams from code.
In a survey, Brodie and Liu could not find a single instance of entity–relationship modeling inside a sample of ten Fortune 100 companies. Badia and Lemire blame this lack of use on the lack of guidance but also on the lack of benefits, such as lack of support for data integration.
The enhanced entity–relationship model (EER modeling) introduces several concepts not in ER modeling, but are closely related to object-oriented design, like is-a relationships.
For modelling temporal databases, numerous ER extensions have been considered. Similarly, the ER model was found unsuitable for multidimensional databases (used in OLAP applications); no dominant conceptual model has emerged in this field yet, although they generally revolve around the concept of OLAP cube (also known as data cube within the field).
== See also ==
Associative entity – Term in relational and entity–relationship theory
Concept map – Diagram showing relationships among concepts
Database design – Designing how data is held in a database
Data structure diagram – visual representation of a certain kind of data model that contains entities, their relationships, and the constraints that are placed on them
Enhanced entity–relationship model – Data model
Enterprise architecture framework – Frame in which the architecture of a company is defined
Entity Data Model – Open source object-relational mapping framework
Value range structure diagrams
Comparison of data modeling tools – Comparison of notable data modeling tools
Knowledge graph – Type of knowledge base
Ontology – Specification of a conceptualization
Object-role modeling – Programming technique
Three schema approach – Approach to building information systems
Structured entity relationship model
Schema-agnostic databases – type of databank
== References ==
== Further reading ==
Chen, Peter (2002). "Entity-Relationship Modeling: Historical Events, Future Trends, and Lessons Learned" (PDF). Software pioneers. Springer-Verlag. pp. 296–310. ISBN 978-3-540-43081-0.
Barker, Richard (1990). CASE Method: Entity Relationship Modelling. Addison-Wesley. ISBN 978-0201416961.
Barker, Richard (1990). CASE Method: Tasks and Deliverables. Addison-Wesley. ISBN 978-0201416978.
Mannila, Heikki; Räihä, Kari-Jouko (1992). The Design of Relational Databases. Addison-Wesley. ISBN 978-0201565232.
Thalheim, Bernhard (2000). Entity-Relationship Modeling: Foundations of Database Technology. Springer. ISBN 978-3-540-65470-4.
Bagui, Sikha; Earp, Richard Walsh (2022). Database Design Using Entity-Relationship Diagrams. Auerbach Publications. ISBN 978-1-032-01718-1.
== External links ==
"The Entity Relationship Model: Toward a Unified View of Data"
Entity Relationship Modelling
Logical Data Structures (LDSs) - Getting started by Tony Drewry.
Crow's Foot Notation
Kinds of Data Models -- and How to Name Them presentation by David Hay | Wikipedia/Entity–relationship_model |
Systems modeling or system modeling is the interdisciplinary study of the use of models to conceptualize and construct systems in business and IT development.
A common type of systems modeling is function modeling, with specific techniques such as the Functional Flow Block Diagram and IDEF0. These models can be extended using functional decomposition, and can be linked to requirements models for further systems partition.
In contrast to functional modeling, another type of systems modeling is architectural modeling, which uses the systems architecture to conceptually model the structure, behavior, and other views of a system.
The Business Process Modeling Notation (BPMN), a graphical representation for specifying business processes in a workflow, can also be considered to be a systems modeling language.
== Overview ==
In business and IT development the term "systems modeling" has multiple meanings. It can relate to:
the use of models to conceptualize and construct systems
the interdisciplinary study of the use of these models
the systems modeling, analysis, and design efforts
the systems modeling and simulation, such as system dynamics
any specific systems modeling language
As a field of study, systems modeling has emerged with the development of system theory and the systems sciences.
As a type of modeling, systems modeling is based on systems thinking and the systems approach. In business and IT, systems modeling contrasts with other approaches such as:
agent based modeling
data modeling and
mathematical modeling
In "Methodology for Creating Business Knowledge" (1997), Arbnor and Bjerke considered the systems approach (systems modeling) to be one of the three basic methodological approaches for gaining business knowledge, beside the analytical approach and the actor's approach (agent-based modeling).
== History ==
The function model originates in the 1950s, after other types of management diagrams had already been developed in the first half of the 20th century. The first known Gantt chart was developed in 1896 by Karol Adamiecki, who called it a harmonogram. Because Adamiecki did not publish his chart until 1931 (and in any case his works were published in Polish or Russian, languages not widely read in the West), the chart now bears the name of Henry Gantt (1861–1919), who designed his chart around 1910–1915 and popularized it in the West. One of the first well-defined function models was the Functional Flow Block Diagram (FFBD), developed by the defense-related TRW Incorporated in the 1950s. In the 1960s it was used by NASA to visualize the time sequence of events in space systems and flight missions. It is still widely used in classical systems engineering to show the order of execution of system functions.
One of the earliest pioneering works in information systems modeling has been done by Young and Kent (1958), who argued:
Since we may be called upon to evaluate different computers or to find alternative ways of organizing current systems it is necessary to have some means of precisely stating a data processing problem independently of mechanization.
They aimed for a precise and abstract way of specifying the informational and time characteristics of a data processing problem, and wanted to create a notation that would enable the analyst to organize the problem around any piece of hardware. Their efforts were focused not so much on independent systems analysis as on creating an abstract specification and an invariant basis for designing alternative implementations using different hardware components.
A next step in IS modeling was taken by CODASYL, an IT industry consortium formed in 1959, who essentially aimed at the same thing as Young and Kent: the development of "a proper structure for machine independent problem definition language, at the system level of data processing". This led to the development of a specific IS information algebra.
== Types of systems modeling ==
In business and IT development systems are modeled with different scopes and scales of complexity, such as:
Functional modeling
Systems architecture
Business process modeling
Enterprise modeling
Furthermore, like systems thinking, systems modeling can be divided into:
Systems analysis
Hard systems modeling or operational research modeling
Soft system modeling
Process based system modeling
And all other specific types of systems modeling, such as complex systems modeling, dynamical systems modeling, and critical systems modeling.
== Specific types of modeling languages ==
Framework-specific modeling language
Systems Modeling Language
== See also ==
Behavioral modeling
Dynamic systems
Human visual system model – model of the human visual system used in image processing, video processing, and computer vision
Open energy system models – energy system models adopting open science principles
SEQUAL framework
Software and Systems Modeling
Solar System model – a model that illustrates the relative positions and motions of the planets and stars
Statistical model
Systems analysis
Systems design
Systems biology modeling
Viable system model – a model of the organizational structure of any viable or autonomous system
== References ==
== Further reading ==
Doo-Kwon Baik (ed.) (2005). Systems modeling and simulation: theory and applications: third Asian Simulation Conference, AsiaSim 2004, Jeju Island, Korea, October 4–6, 2004. Springer, 2005. ISBN 3-540-24477-8.
Derek W. Bunn, Erik R. Larsen (1997). Systems modelling for energy policy. Wiley, 1997. ISBN 0-471-95794-1
Hartmut Ehrig et al. (eds.) (2005). Formal methods in software and systems modeling. Springer, 2005 ISBN 3-540-24936-2
D. J. Harris (1985). Mathematics for business, management, and economics: a systems modelling approach. E. Horwood, 1985. ISBN 0-85312-821-9
Jiming Liu, Xiaolong Jin, Kwok Ching Tsui (2005). Autonomy oriented computing: from problem solving to complex systems modeling. Springer, 2005. ISBN 1-4020-8121-9
Michael Pidd (2004). Systems Modelling: Theory and Practice. John Wiley & Sons, 2004. ISBN 0-470-86732-9
Václav Pinkava (1988). Introduction to Logic for Systems Modelling. Taylor & Francis, 1988. ISBN 0-85626-431-8 | Wikipedia/Systems_modeling |
Universal Systems Language (USL) is a systems modeling language and formal method for the specification and design of software and other complex systems. It was designed by Margaret Hamilton based on her experiences writing flight software for the Apollo program. The language is implemented through the 001 Tool Suite software by Hamilton Technologies, Inc. USL evolved from 001AXES, which in turn evolved from AXES, all of which are based on Hamilton's axioms of control. The 001 Tool Suite uses the preventive concept of Development Before the Fact (DBTF) for its life-cycle development process. DBTF eliminates errors as early as possible during the development process, removing the need to look for errors after the fact.
== Philosophy ==
USL was inspired by Hamilton's recognition of patterns or categories of errors occurring during Apollo software development.
Certain correctness guarantees are embedded in the USL grammar.
USL is regarded by some users as more user-friendly than other formal systems. It is not only a formalism for software, but also defines ontologies for common elements of problem domains, such as physical space and event timing.
== Formalism for a theory of control ==
Primitive structures are universal in that they are able to be used to derive new abstract universal structures, functions or types. The process of deriving new objects (i.e., structures, types and functions) is equivalent to the process of deriving new types in a constructive type theory.
== Implementation ==
The process of developing a software system with USL, together with its automation, the 001 Tool Suite (001), is as follows: define the system with USL; automatically analyze the definition with 001's analyzer to ensure that USL was used correctly; and automatically generate much of the design and all of the implementation code with 001's generator. USL can be used to lend its formal support to other languages.
== See also ==
Systems philosophy
IDEF
Model-driven architecture
Systems modeling language
Object process methodology
== References ==
== Further reading ==
Hamilton, M., Zeldin, S. (1976), "Higher Order Software — A Methodology for Defining Software," IEEE Transactions on Software Engineering, vol. SE-2, no. 1, Mar. 1976.
Hamilton, M. (April 1994). "Inside Development Before the Fact". (Cover story). Special Editorial Supplement. 8ES-24ES. Electronic Design.
Hamilton, M. (June 1994). "001: A Full Life Cycle Systems Engineering and Software Development Environment". (Cover story). Special Editorial Supplement. 22ES-30ES. Electronic Design.
Hamilton, M., Hackler, W.R.. (2004), Deeply Integrated Guidance Navigation Unit (DI-GNU) Common Software Architecture Principles (revised dec-29-04), DAAAE30-02-D-1020 and DAAB07-98-D-H502/0180, Picatinny Arsenal, NJ, 2003–2004.
Hamilton, M. and Hackler, W.R. (2007), "Universal Systems Language for Preventative Systems Engineering," Proc. 5th Ann. Conf. Systems Eng. Res. (CSER), Stevens Institute of Technology, Mar. 2007, paper #36.
Hamilton, M.; Hackler, W. R. (2007). "A Formal Universal Systems Semantics for SysML". 17th Annual International Symposium, INCOSE 2007, San Diego, CA, Jun. 2007.
== External links ==
Hamilton Technologies | Wikipedia/Universal_Systems_Language |
Capability Maturity Model Integration (CMMI) is a process level improvement training and appraisal program. Administered by the CMMI Institute, a subsidiary of ISACA, it was developed at Carnegie Mellon University (CMU). It is required by many U.S. Government contracts, especially in software development. CMU claims CMMI can be used to guide process improvement across a project, division, or an entire organization.
CMMI defines the following five maturity levels (1 to 5) for processes: Initial, Managed, Defined, Quantitatively Managed, and Optimizing. CMMI Version 3.0 was published in 2023; Version 2.0 was published in 2018; Version 1.3 was published in 2010, and is the reference model for the rest of the information in this article. CMMI is registered in the U.S. Patent and Trademark Office by CMU.
== Overview ==
CMMI originally addressed three areas of interest:
Product and service development – CMMI for Development (CMMI-DEV),
Service establishment, management, and delivery – CMMI for Services (CMMI-SVC), and
Product and service acquisition – CMMI for Acquisition (CMMI-ACQ).
In version 2.0 these three areas (which previously each had a separate model) were merged into a single model.
CMMI was developed by a group from industry, government, and the Software Engineering Institute (SEI) at CMU. CMMI models provide guidance for developing or improving processes that meet the business goals of an organization. A CMMI model may also be used as a framework for appraising the process maturity of the organization. By January 2013, the entire CMMI product suite was transferred from the SEI to the CMMI Institute, a newly created organization at Carnegie Mellon.
== History ==
CMMI was developed by the CMMI project, which aimed to improve the usability of maturity models by integrating many different models into one framework. The project consisted of members of industry, government and the Carnegie Mellon Software Engineering Institute (SEI). The main sponsors included the Office of the Secretary of Defense (OSD) and the National Defense Industrial Association.
CMMI is the successor of the capability maturity model (CMM) or Software CMM. The CMM was developed from 1987 until 1997. In 2002, version 1.1 was released, version 1.2 followed in August 2006, and version 1.3 in November 2010. Some major changes in CMMI V1.3 are the support of agile software development, improvements to high maturity practices and alignment of the representation (staged and continuous).
According to the Software Engineering Institute (SEI, 2008), CMMI helps "integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes."
Mary Beth Chrissis, Mike Konrad, and Sandy Shrum were the authorship team for the hard copy publication of CMMI for Development Version 1.2 and 1.3. The Addison-Wesley publication of Version 1.3 was dedicated to the memory of Watts Humphrey. Eileen C. Forrester, Brandon L. Buteau, and Sandy Shrum were the authorship team for the hard copy publication of CMMI for Services Version 1.3. Rawdon "Rusty" Young was the chief architect for the development of CMMI version 2.0. He was previously the CMMI Product Owner and the SCAMPI Quality Lead for the Software Engineering Institute.
In March 2016, the CMMI Institute was acquired by ISACA.
In April 2023, the CMMI V3.0 was released.
== Topics ==
=== Representation ===
In version 1.3 CMMI existed in two representations: continuous and staged. The continuous representation is designed to allow the user to focus on the specific processes that are considered important for the organization's immediate business objectives, or those to which the organization assigns a high degree of risks. The staged representation is designed to provide a standard sequence of improvements, and can serve as a basis for comparing the maturity of different projects and organizations. The staged representation also provides for an easy migration from the SW-CMM to CMMI.
In version 2.0 the above representation separation was cancelled and there is now only one cohesive model.
=== Model framework (v1.3) ===
Depending on the areas of interest (acquisition, services, development) used, the process areas it contains will vary. Process areas are the areas that will be covered by the organization's processes. The table below lists the seventeen CMMI core process areas that are present for all CMMI areas of interest in version 1.3.
=== Maturity levels for services ===
The process areas below and their maturity levels are listed for the CMMI for services model:
Maturity Level 2 – Managed
CM – Configuration Management
MA – Measurement and Analysis
PPQA – Process and Product Quality Assurance
REQM – Requirements Management
SAM – Supplier Agreement Management
SD – Service Delivery
WMC – Work Monitoring and Control
WP – Work Planning
Maturity Level 3 – Defined
CAM – Capacity and Availability Management
DAR – Decision Analysis and Resolution
IRP – Incident Resolution and Prevention
IWM – Integrated Work Management
OPD – Organizational Process Definition
OPF – Organizational Process Focus
OT – Organizational Training
RSKM – Risk Management
SCON – Service Continuity
SSD – Service System Development
SST – Service System Transition
STSM – Strategic Service Management
Maturity Level 4 – Quantitatively Managed
OPP – Organizational Process Performance
QWM – Quantitative Work Management
Maturity Level 5 – Optimizing
CAR – Causal Analysis and Resolution
OPM – Organizational Performance Management
=== Models (v1.3) ===
CMMI best practices are published in documents called models, each of which addresses a different area of interest. Version 1.3 provides models for three areas of interest: development, acquisition, and services.
CMMI for Development (CMMI-DEV), v1.3 was released in November 2010. It addresses product and service development processes.
CMMI for Acquisition (CMMI-ACQ), v1.3 was released in November 2010. It addresses supply chain management, acquisition, and outsourcing processes in government and industry.
CMMI for Services (CMMI-SVC), v1.3 was released in November 2010. It addresses guidance for delivering services within an organization and to external customers.
=== Model (v2.0) ===
In version 2.0 DEV, ACQ and SVC were merged into a single model where each process area potentially has a specific reference to one or more of these three aspects. Trying to keep up with the industry the model also has explicit reference to agile aspects in some process areas.
Some key differences between v1.3 and v2.0 models are given below:
"Process Areas" have been replaced with "Practice Areas" (PAs), which are arranged by levels rather than by "Specific Goals".
Each PA is composed of a "core" section (a generic, terminology-free description) and a "context-specific" section (a description from the perspective of Agile/Scrum, development, services, etc.).
Since compliance with all practices is now mandatory, the "Expected" section has been removed.
"Generic Practices" have been put under a new area called "Governance and Implementation Infrastructure", while "Specific practices" have been omitted.
Emphasis is placed on ensuring that PAs are implemented and practised continuously until they become a "habit".
All maturity levels focus on the keyword "performance".
Optional PAs covering "Safety" (two) and "Security" (five) have been added.
PCMM process areas have been merged.
=== Appraisal ===
An organization cannot be certified in CMMI; instead, an organization is appraised. Depending on the type of appraisal, the organization can be awarded a maturity level rating (1–5) or a capability level achievement profile.
Many organizations find value in measuring their progress by conducting an appraisal. Appraisals are typically conducted for one or more of the following reasons:
To determine how well the organization's processes compare to CMMI best practices, and to identify areas where improvement can be made
To inform external customers and suppliers of how well the organization's processes compare to CMMI best practices
To meet the contractual requirements of one or more customers
Appraisals of organizations using a CMMI model must conform to the requirements defined in the Appraisal Requirements for CMMI (ARC) document. There are three classes of appraisals, A, B and C, which focus on identifying improvement opportunities and comparing the organization's processes to CMMI best practices. Of these, class A appraisal is the most formal and is the only one that can result in a level rating. Appraisal teams use a CMMI model and ARC-conformant appraisal method to guide their evaluation of the organization and their reporting of conclusions. The appraisal results can then be used (e.g., by a process group) to plan improvements for the organization.
The Standard CMMI Appraisal Method for Process Improvement (SCAMPI) is an appraisal method that meets all of the ARC requirements. Results of a SCAMPI appraisal may be published (if the appraised organization approves) on the CMMI Web site of the SEI: Published SCAMPI Appraisal Results. SCAMPI also supports the conduct of ISO/IEC 15504, also known as SPICE (Software Process Improvement and Capability Determination), assessments etc.
This approach promotes that members of the EPG and PATs be trained in the CMMI, that an informal (SCAMPI C) appraisal be performed, and that process areas be prioritized for improvement. More modern approaches, which involve the deployment of commercially available, CMMI-compliant processes, can significantly reduce the time to achieve compliance. SEI has maintained statistics on the "time to move up" for organizations adopting the earlier Software CMM as well as CMMI. These statistics indicate that, since 1987, the median time to move from Level 1 to Level 2 has been 23 months, and from Level 2 to Level 3 an additional 20 months. Since the release of the CMMI, the median time to move from Level 1 to Level 2 has been 5 months, with median movement to Level 3 taking another 21 months. These statistics are updated and published every six months in a maturity profile.
The Software Engineering Institute's (SEI) team software process methodology and the use of CMMI models can be used to raise the maturity level. A new product called Accelerated Improvement Method (AIM) combines the use of CMMI and the TSP.
=== Security ===
To address user security concerns, two unofficial security guides are available. Considering the Case for Security Content in CMMI for Services has one process area, Security Management. Security by Design with CMMI for Development, Version 1.3 has the following process areas:
OPSD – Organizational Preparedness for Secure Development
SMP – Secure Management in Projects
SRTS – Security Requirements and Technical Solution
SVV – Security Verification and Validation
While they do not affect maturity or capability levels, these process areas can be reported in appraisal results.
== Applications ==
The SEI published a study saying 60 organizations measured increases of performance in the categories of cost, schedule, productivity, quality and customer satisfaction. The median increase in performance varied between 14% (customer satisfaction) and 62% (productivity). However, the CMMI model mostly deals with what processes should be implemented, and not so much with how they can be implemented. These results do not guarantee that applying CMMI will increase performance in every organization. A small company with few resources may be less likely to benefit from CMMI; this view is supported by the process maturity profile (page 10). Of the small organizations (<25 employees), 70.5% are assessed at level 2: Managed, while 52.8% of the organizations with 1,001–2,000 employees are rated at the highest level (5: Optimizing).
Turner & Jain (2002) argue that although it is obvious there are large differences between CMMI and agile software development, both approaches have much in common. They believe neither way is the 'right' way to develop software, but that there are phases in a project where one of the two is better suited. They suggest one should combine the different fragments of the methods into a new hybrid method. Sutherland et al. (2007) assert that a combination of Scrum and CMMI brings more adaptability and predictability than either one alone. David J. Anderson (2005) gives hints on how to interpret CMMI in an agile manner.
CMMI Roadmaps, which are a goal-driven approach to selecting and deploying relevant process areas from the CMMI-DEV model, can provide guidance and focus for effective CMMI adoption. There are several CMMI roadmaps for the continuous representation, each with a specific set of improvement goals. Examples are the CMMI Project Roadmap, CMMI Product and Product Integration Roadmaps and the CMMI Process and Measurements Roadmaps. These roadmaps combine the strengths of both the staged and the continuous representations.
The combination of the project management technique earned value management (EVM) with CMMI has been described. In a similar application of CMMI, Extreme Programming (XP), a software engineering method, has been evaluated against CMM/CMMI (Nawrocki et al., 2002). For example, the XP requirements management approach, which relies on oral communication, was evaluated as not compliant with CMMI.
CMMI can be appraised using two different approaches: staged and continuous. The staged approach yields appraisal results as one of five maturity levels. The continuous approach yields one of four capability levels. The differences in these approaches are felt only in the appraisal; the best practices are equivalent resulting in equivalent process improvement results.
== See also ==
Capability Immaturity Model
Capability Maturity Model
Enterprise Architecture Assessment Framework
LeanCMMI
People Capability Maturity Model
Software Engineering Process Group
== References ==
== External links ==
Official website | Wikipedia/Capability_Maturity_Model_Integration |
2D computer graphics is the computer-based generation of digital images—mostly from two-dimensional models (such as 2D geometric models, text, and digital images) and by techniques specific to them. It may refer to the branch of computer science that comprises such techniques or to the models themselves.
2D computer graphics are mainly used in applications that were originally developed upon traditional printing and drawing technologies, such as typography, cartography, technical drawing, advertising, etc. In those applications, the two-dimensional image is not just a representation of a real-world object, but an independent artifact with added semantic value; two-dimensional models are therefore preferred, because they give more direct control of the image than 3D computer graphics (whose approach is more akin to photography than to typography).
In many domains, such as desktop publishing, engineering, and business, a description of a document based on 2D computer graphics techniques can be much smaller than the corresponding digital image—often by a factor of 1/1000 or more. This representation is also more flexible since it can be rendered at different resolutions to suit different output devices. For these reasons, documents and illustrations are often stored or transmitted as 2D graphic files.
2D computer graphics started in the 1950s, based on vector graphics devices. These were largely supplanted by raster-based devices in the following decades. The PostScript language and the X Window System protocol were landmark developments in the field.
2D graphics models may combine geometric models (also called vector graphics), digital images (also called raster graphics), text to be typeset (defined by content, font style and size, color, position, and orientation), mathematical functions and equations, and more. These components can be modified and manipulated by two-dimensional geometric transformations such as translation, rotation, and scaling.
In object-oriented graphics, the image is described indirectly by an object endowed with a self-rendering method—a procedure that assigns colors to the image pixels by an arbitrary algorithm. Complex models can be built by combining simpler objects, in the paradigms of object-oriented programming.
== Background (geometry) ==
In Euclidean geometry, a translation moves every point a constant distance in a specified direction. A translation can be described as a rigid motion: other rigid motions include rotations and reflections. A translation can also be interpreted as the addition of a constant vector to every point, or as shifting the origin of the coordinate system. A translation operator is an operator
{\displaystyle T_{\mathbf {\delta } }}
such that
{\displaystyle T_{\mathbf {\delta } }f(\mathbf {v} )=f(\mathbf {v} +\mathbf {\delta } ).}
If v is a fixed vector, then the translation Tv will work as Tv(p) = p + v.
If T is a translation, then the image of a subset A under the function T is the translation of A by T. The translation of A by Tv is often written A + v.
In a Euclidean space, any translation is an isometry. The set of all translations forms the translation group T, which is isomorphic to the space itself, and a normal subgroup of the Euclidean group E(n). The quotient group of E(n) by T is isomorphic to the orthogonal group O(n):
E(n) / T ≅ O(n).
=== Translation ===
Since a translation is an affine transformation but not a linear transformation, homogeneous coordinates are normally used to represent the translation operator by a matrix and thus to make it linear. Thus we write the 3-dimensional vector w = (wx, wy, wz) using 4 homogeneous coordinates as w = (wx, wy, wz, 1).
To translate an object by a vector v, each homogeneous vector p (written in homogeneous coordinates) would need to be multiplied by this translation matrix:
{\displaystyle T_{\mathbf {v} }={\begin{bmatrix}1&0&0&v_{x}\\0&1&0&v_{y}\\0&0&1&v_{z}\\0&0&0&1\end{bmatrix}}}
As shown below, the multiplication will give the expected result:
{\displaystyle T_{\mathbf {v} }\mathbf {p} ={\begin{bmatrix}1&0&0&v_{x}\\0&1&0&v_{y}\\0&0&1&v_{z}\\0&0&0&1\end{bmatrix}}{\begin{bmatrix}p_{x}\\p_{y}\\p_{z}\\1\end{bmatrix}}={\begin{bmatrix}p_{x}+v_{x}\\p_{y}+v_{y}\\p_{z}+v_{z}\\1\end{bmatrix}}=\mathbf {p} +\mathbf {v} }
The inverse of a translation matrix can be obtained by reversing the direction of the vector:
{\displaystyle T_{\mathbf {v} }^{-1}=T_{-\mathbf {v} }.}
Similarly, the product of translation matrices is given by adding the vectors:
{\displaystyle T_{\mathbf {u} }T_{\mathbf {v} }=T_{\mathbf {u} +\mathbf {v} }.}
Because addition of vectors is commutative, multiplication of translation matrices is therefore also commutative (unlike multiplication of arbitrary matrices).
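The translation properties above are easy to verify numerically. The following sketch uses Python with NumPy (an assumption; the article itself names no language or library) to build the 4×4 homogeneous translation matrix and check the inverse and commutativity identities:

```python
import numpy as np

def translation_matrix(v):
    """Build the 4x4 homogeneous translation matrix T_v."""
    T = np.eye(4)
    T[:3, 3] = v  # place (vx, vy, vz) in the last column
    return T

p = np.array([1.0, 2.0, 3.0, 1.0])        # point in homogeneous coordinates
Tu = translation_matrix([5.0, 0.0, 0.0])
Tv = translation_matrix([0.0, 7.0, 0.0])

# T_v p = p + v
assert np.allclose(Tv @ p, [1.0, 9.0, 3.0, 1.0])
# Inverse: T_v^{-1} = T_{-v}
assert np.allclose(np.linalg.inv(Tv), translation_matrix([0.0, -7.0, 0.0]))
# Commutativity: T_u T_v = T_v T_u (= T_{u+v})
assert np.allclose(Tu @ Tv, Tv @ Tu)
```

The function name `translation_matrix` is illustrative, not part of any standard API.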
=== Rotation ===
In linear algebra, a rotation matrix is a matrix that is used to perform a rotation in Euclidean space.
{\displaystyle R={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{bmatrix}}}
rotates points in the xy-Cartesian plane counterclockwise through an angle θ about the origin of the Cartesian coordinate system. To perform the rotation using a rotation matrix R, the position of each point must be represented by a column vector v, containing the coordinates of the point. A rotated vector is obtained by using the matrix multiplication Rv. Since matrix multiplication has no effect on the zero vector (i.e., on the coordinates of the origin), rotation matrices can only be used to describe rotations about the origin of the coordinate system.
Rotation matrices provide a simple algebraic description of such rotations, and are used extensively for computations in geometry, physics, and computer graphics. In 2-dimensional space, a rotation can be simply described by an angle θ of rotation, but it can be also represented by the 4 entries of a rotation matrix with 2 rows and 2 columns. In 3-dimensional space, every rotation can be interpreted as a rotation by a given angle about a single fixed axis of rotation (see Euler's rotation theorem), and hence it can be simply described by an angle and a vector with 3 entries. However, it can also be represented by the 9 entries of a rotation matrix with 3 rows and 3 columns. The notion of rotation is not commonly used in dimensions higher than 3; there is a notion of a rotational displacement, which can be represented by a matrix, but no associated single axis or angle.
Rotation matrices are square matrices, with real entries. More specifically they can be characterized as orthogonal matrices with determinant 1:
{\displaystyle R^{T}=R^{-1},\ \det R=1.}
The set of all such matrices of size n forms a group, known as the special orthogonal group SO(n).
=== In two dimensions ===
In two dimensions every rotation matrix has the following form:
{\displaystyle R(\theta )={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{bmatrix}}.}
This rotates column vectors by means of the following matrix multiplication:
{\displaystyle {\begin{bmatrix}x'\\y'\\\end{bmatrix}}={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{bmatrix}}{\begin{bmatrix}x\\y\\\end{bmatrix}}.}
So the coordinates (x',y') of the point (x,y) after rotation are:
{\displaystyle x'=x\cos \theta -y\sin \theta ,}
{\displaystyle y'=x\sin \theta +y\cos \theta .}
The direction of vector rotation is counterclockwise if θ is positive (e.g. 90°), and clockwise if θ is negative (e.g. -90°).
{\displaystyle R(-\theta )={\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta &\cos \theta \\\end{bmatrix}}.}
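The coordinate formulas for x' and y' translate directly into code. A minimal Python sketch (the choice of language is an assumption, and `rotate` is an illustrative name):

```python
import math

def rotate(x, y, theta):
    """Rotate the point (x, y) counterclockwise by theta radians about the origin."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

# A positive angle rotates counterclockwise: (1, 0) maps to (0, 1) for theta = 90 degrees.
x2, y2 = rotate(1.0, 0.0, math.pi / 2)
assert abs(x2) < 1e-12 and abs(y2 - 1.0) < 1e-12

# A negative angle rotates clockwise: (1, 0) maps to (0, -1).
x3, y3 = rotate(1.0, 0.0, -math.pi / 2)
assert abs(x3) < 1e-12 and abs(y3 + 1.0) < 1e-12
```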
=== Non-standard orientation of the coordinate system ===
If a standard right-handed Cartesian coordinate system is used, with the x axis to the right and the y axis up, the rotation R(θ) is counterclockwise. If a left-handed Cartesian coordinate system is used, with x directed to the right but y directed down, R(θ) is clockwise. Such non-standard orientations are rarely used in mathematics but are common in 2D computer graphics, which often have the origin in the top left corner and the y-axis down the screen or page.
See below for other alternative conventions which may change the sense of the rotation produced by a rotation matrix.
=== Common rotations ===
Particularly useful are the matrices for 90° and 180° rotations:
{\displaystyle R(90^{\circ })={\begin{bmatrix}0&-1\\[3pt]1&0\\\end{bmatrix}}}
(90° counterclockwise rotation)
{\displaystyle R(180^{\circ })={\begin{bmatrix}-1&0\\[3pt]0&-1\\\end{bmatrix}}}
(180° rotation in either direction – a half-turn)
{\displaystyle R(270^{\circ })={\begin{bmatrix}0&1\\[3pt]-1&0\\\end{bmatrix}}}
(270° counterclockwise rotation, the same as a 90° clockwise rotation)
=== Scaling ===
In Euclidean geometry, uniform scaling (isotropic scaling, homogeneous dilation, homothety) is a linear transformation that enlarges (increases) or shrinks (diminishes) objects by a scale factor that is the same in all directions. The result of uniform scaling is similar (in the geometric sense) to the original. A scale factor of 1 is normally allowed, so that congruent shapes are also classed as similar. (Some school text books specifically exclude this possibility, just as some exclude squares from being rectangles or circles from being ellipses.)
More general is scaling with a separate scale factor for each axis direction. Non-uniform scaling (anisotropic scaling, inhomogeneous dilation) is obtained when at least one of the scaling factors is different from the others; a special case is directional scaling or stretching (in one direction). Non-uniform scaling changes the shape of the object; e.g. a square may change into a rectangle, or into a parallelogram if the sides of the square are not parallel to the scaling axes (the angles between lines parallel to the axes are preserved, but not all angles).
A scaling can be represented by a scaling matrix. To scale an object by a vector v = (vx, vy, vz), each point p = (px, py, pz) would need to be multiplied with this scaling matrix:
{\displaystyle S_{v}={\begin{bmatrix}v_{x}&0&0\\0&v_{y}&0\\0&0&v_{z}\\\end{bmatrix}}.}
As shown below, the multiplication will give the expected result:
{\displaystyle S_{v}p={\begin{bmatrix}v_{x}&0&0\\0&v_{y}&0\\0&0&v_{z}\\\end{bmatrix}}{\begin{bmatrix}p_{x}\\p_{y}\\p_{z}\end{bmatrix}}={\begin{bmatrix}v_{x}p_{x}\\v_{y}p_{y}\\v_{z}p_{z}\end{bmatrix}}.}
Such a scaling changes the diameter of an object by a factor between the scale factors, the area by a factor between the smallest and the largest product of two scale factors, and the volume by the product of all three.
The scaling is uniform if and only if the scaling factors are equal (vx = vy = vz). If all except one of the scale factors are equal to 1, we have directional scaling.
In the case where vx = vy = vz = k, the scaling is also called an enlargement or dilation by a factor k, increasing the area by a factor of k2 and the volume by a factor of k3.
Scaling in the most general sense is any affine transformation with a diagonalizable matrix. It includes the case that the three directions of scaling are not perpendicular. It includes also the case that one or more scale factors are equal to zero (projection), and the case of one or more negative scale factors. The latter corresponds to a combination of scaling proper and a kind of reflection: along lines in a particular direction we take the reflection in the point of intersection with a plane that need not be perpendicular; therefore it is more general than ordinary reflection in the plane.
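The scaling matrix is simply a diagonal matrix of per-axis factors, which makes its behavior easy to check numerically. A short sketch in Python with NumPy (both assumptions; `scaling_matrix` is an illustrative name):

```python
import numpy as np

def scaling_matrix(vx, vy, vz):
    """3x3 scaling matrix with a separate scale factor per axis."""
    return np.diag([vx, vy, vz])

p = np.array([2.0, 3.0, 4.0])

# Non-uniform scaling multiplies each coordinate by its own factor.
assert np.allclose(scaling_matrix(1.0, 2.0, 0.5) @ p, [2.0, 6.0, 2.0])

# Uniform scaling (all factors equal to k) scales volume by k^3,
# which shows up as the determinant of the matrix.
k = 2.0
assert np.isclose(np.linalg.det(scaling_matrix(k, k, k)), k ** 3)
```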
=== Using homogeneous coordinates ===
In projective geometry, often used in computer graphics, points are represented using homogeneous coordinates. To scale an object by a vector v = (vx, vy, vz), each homogeneous coordinate vector p = (px, py, pz, 1) would need to be multiplied with this projective transformation matrix:
{\displaystyle S_{v}={\begin{bmatrix}v_{x}&0&0&0\\0&v_{y}&0&0\\0&0&v_{z}&0\\0&0&0&1\end{bmatrix}}.}
As shown below, the multiplication will give the expected result:
{\displaystyle S_{v}p={\begin{bmatrix}v_{x}&0&0&0\\0&v_{y}&0&0\\0&0&v_{z}&0\\0&0&0&1\end{bmatrix}}{\begin{bmatrix}p_{x}\\p_{y}\\p_{z}\\1\end{bmatrix}}={\begin{bmatrix}v_{x}p_{x}\\v_{y}p_{y}\\v_{z}p_{z}\\1\end{bmatrix}}.}
Since the last component of a homogeneous coordinate can be viewed as the denominator of the other three components, a uniform scaling by a common factor s (uniform scaling) can be accomplished by using this scaling matrix:
{\displaystyle S_{v}={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&{\frac {1}{s}}\end{bmatrix}}.}
For each vector p = (px, py, pz, 1) we would have
{\displaystyle S_{v}p={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&{\frac {1}{s}}\end{bmatrix}}{\begin{bmatrix}p_{x}\\p_{y}\\p_{z}\\1\end{bmatrix}}={\begin{bmatrix}p_{x}\\p_{y}\\p_{z}\\{\frac {1}{s}}\end{bmatrix}}}
which would be homogenized to
{\displaystyle {\begin{bmatrix}sp_{x}\\sp_{y}\\sp_{z}\\1\end{bmatrix}}.}
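The trick of placing 1/s in the last homogeneous coordinate, followed by homogenization (dividing by the last component), can be verified with a few lines of Python/NumPy (both assumed; `uniform_scale_homogeneous` is an illustrative name):

```python
import numpy as np

def uniform_scale_homogeneous(s):
    """Uniform scaling by s, encoded in the last homogeneous coordinate."""
    S = np.eye(4)
    S[3, 3] = 1.0 / s
    return S

p = np.array([1.0, 2.0, 3.0, 1.0])
q = uniform_scale_homogeneous(4.0) @ p    # gives (1, 2, 3, 1/4)
q = q / q[3]                              # homogenize: divide by the last component
assert np.allclose(q, [4.0, 8.0, 12.0, 1.0])
```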
== Techniques ==
=== Direct painting ===
A convenient way to create a complex image is to start with a blank "canvas" raster map (an array of pixels, also known as a bitmap) filled with some uniform background color and then "draw", "paint" or "paste" simple patches of color onto it, in an appropriate order. In particular the canvas may be the frame buffer for a computer display.
Some programs will set the pixel colors directly, but most will rely on some 2D graphics library or the machine's graphics card, which usually implement the following operations:
paste a given image at a specified offset onto the canvas;
write a string of characters with a specified font, at a given position and angle;
paint a simple geometric shape, such as a triangle defined by three corners, or a circle with given center and radius;
draw a line segment, arc, or simple curve with a virtual pen of given width.
==== Extended color models ====
Text, shapes and lines are rendered with a client-specified color. Many libraries and cards provide color gradients, which are handy for the generation of smoothly-varying backgrounds, shadow effects, etc. (See also Gouraud shading). The pixel colors can also be taken from a texture, e.g. a digital image (thus emulating rub-on screentones and the fabled checker paint which used to be available only in cartoons).
Painting a pixel with a given color usually replaces its previous color. However, many systems support painting with transparent and translucent colors, which only modify the previous pixel values.
The two colors may also be combined in more complex ways, e.g. by computing their bitwise exclusive or. This technique is known as inverting color or color inversion, and is often used in graphical user interfaces for highlighting, rubber-band drawing, and other volatile painting—since re-painting the same shapes with the same color will restore the original pixel values.
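The self-inverting property of XOR painting can be shown in a few lines. This is a minimal sketch in Python (an assumption) over a list of 24-bit RGB pixel values; `xor_blit` is an illustrative name, not a real library call:

```python
def xor_blit(canvas, color):
    """Combine every pixel with `color` by bitwise XOR."""
    return [pixel ^ color for pixel in canvas]

canvas = [0x112233, 0xFFFFFF, 0x000000]    # 24-bit RGB pixel values
highlighted = xor_blit(canvas, 0xFFFFFF)   # XOR with white inverts each color
restored = xor_blit(highlighted, 0xFFFFFF)

# Repainting the same shape with the same color restores the original pixels.
assert restored == canvas
```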
==== Layers ====
The models used in 2D computer graphics usually do not provide for three-dimensional shapes, or three-dimensional optical phenomena such as lighting, shadows, reflection, refraction, etc. However, they usually can model multiple layers (conceptually of ink, paper, or film; opaque, translucent, or transparent) stacked in a specific order. The ordering is usually defined by a single number (the layer's depth, or distance from the viewer).
Layered models are sometimes called "21⁄2-D computer graphics". They make it possible to mimic traditional drafting and printing techniques based on film and paper, such as cutting and pasting; and allow the user to edit any layer without affecting the others. For these reasons, they are used in most graphics editors. Layered models also allow better spatial anti-aliasing of complex drawings and provide a sound model for certain techniques such as mitered joints and the even–odd rule.
Layered models are also used to allow the user to suppress unwanted information when viewing or printing a document, e.g. roads or railways from a map, certain process layers from an integrated circuit diagram, or hand annotations from a business letter.
In a layer-based model, the target image is produced by "painting" or "pasting" each layer, in order of decreasing depth, on the virtual canvas. Conceptually, each layer is first rendered on its own, yielding a digital image with the desired resolution which is then painted over the canvas, pixel by pixel. Fully transparent parts of a layer need not be rendered, of course. The rendering and painting may be done in parallel, i.e., each layer pixel may be painted on the canvas as soon as it is produced by the rendering procedure.
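The back-to-front painting described above can be sketched in a few lines of Python (an assumption; the function and data layout are illustrative, using a single grayscale value in place of a full image):

```python
def composite(layers, background):
    """Paint layers over a background in order of decreasing depth (back to front).

    Each layer is a (depth, alpha, color) tuple; colors are grayscale floats in [0, 1].
    """
    canvas = background
    for _, alpha, color in sorted(layers, key=lambda layer: -layer[0]):
        # The "over" blend: fully transparent pixels (alpha == 0) leave the
        # canvas untouched, fully opaque ones (alpha == 1) replace it.
        canvas = alpha * color + (1.0 - alpha) * canvas
    return canvas

layers = [(1, 1.0, 0.9),   # nearest layer, opaque
          (5, 0.5, 0.2)]   # farthest layer, translucent
assert composite(layers, 0.0) == 0.9   # the opaque nearest layer covers everything
```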
Layers that consist of complex geometric objects (such as text or polylines) may be broken down into simpler elements (characters or line segments, respectively), which are then painted as separate layers, in some order. However, this solution may create undesirable aliasing artifacts wherever two elements overlap the same pixel.
See also Portable Document Format#Layers.
== Hardware ==
Modern computer graphics card displays almost overwhelmingly use raster techniques, dividing the screen into a rectangular grid of pixels, due to the relatively low cost of raster-based video hardware as compared with vector graphic hardware. Most graphic hardware has internal support for blitting operations or sprite drawing. A co-processor dedicated to blitting is known as a Blitter chip.
Classic 2D graphics chips and graphics processing units of the late 1970s to 1980s, used in 8-bit to early 16-bit, arcade games, video game consoles, and home computers, include:
Atari, Inc.'s TIA, ANTIC, CTIA and GTIA
Capcom's CPS-A and CPS-B
Commodore's OCS
MOS Technology's VIC and VIC-II
Hudson Soft's Cynthia and HuC6270
NEC's μPD7220 and μPD72120
Ricoh's PPU and S-PPU
Sega's VDP, Super Scaler, 315-5011/315-5012 and 315-5196/315-5197
Texas Instruments' TMS9918
Yamaha's V9938, V9958 and YM7101 VDP
== Software ==
Many graphical user interfaces (GUIs), including macOS, Microsoft Windows, and the X Window System, are primarily based on 2D graphical concepts. Such software provides a visual environment for interacting with the computer, and commonly includes some form of window manager to aid the user in conceptually distinguishing between different applications.
The user interface within individual software applications is typically 2D in nature as well, due in part to the fact that most common input devices, such as the mouse, are constrained to two dimensions of movement.
2D graphics are very important in control peripherals such as printers, plotters, sheet cutting machines, etc. They were also used in most early video games, and are still used for card and board games such as solitaire, chess, mahjongg, etc.
2D graphics editors or drawing programs are application-level software for the creation of images, diagrams and illustrations by direct manipulation (through the mouse, graphics tablet, or similar device) of 2D computer graphics primitives. These editors generally provide geometric primitives as well as digital images; and some even support procedural models. The illustration is usually represented internally as a layered model, often with a hierarchical structure to make editing more convenient. These editors generally output graphics files where the layers and primitives are separately preserved in their original form. MacDraw, introduced in 1984 with the Macintosh line of computers, was an early example of this class; recent examples are the commercial products Adobe Illustrator and CorelDRAW, and the free editors such as xfig or Inkscape. There are also many 2D graphics editors specialized for certain types of drawings such as electrical, electronic and VLSI diagrams, topographic maps, computer fonts, etc.
Image editors are specialized for the manipulation of digital images, mainly by means of free-hand drawing/painting and signal processing operations. They typically use a direct-painting paradigm, where the user controls virtual pens, brushes, and other free-hand artistic instruments to apply paint to a virtual canvas. Some image editors support a multiple-layer model; however, in order to support signal-processing operations like blurring each layer is normally represented as a digital image. Therefore, any geometric primitives that are provided by the editor are immediately converted to pixels and painted onto the canvas. The name raster graphics editor is sometimes used to contrast this approach to that of general editors which also handle vector graphics. One of the first popular image editors was Apple's MacPaint, companion to MacDraw. Modern examples are the free GIMP editor, and the commercial products Photoshop and Paint Shop Pro. This class too includes many specialized editors—for medicine, remote sensing, digital photography, etc.
== Developmental animation ==
With the resurgence of 2D animation, free and proprietary software packages have become widely available for amateurs and professional animators. With software like RETAS, UbiArt Framework, and Adobe After Effects, coloring and compositing can be done in less time.
Various approaches have been developed to aid and speed up the process of digital 2D animation. For example, by generating vector artwork in a tool like Adobe Flash, an artist may employ software-driven automatic coloring and in-betweening.
Programs like Blender or Adobe Substance allow the user to create 3D animation, 2D animation, or a combination of both in the same software, allowing experimentation with multiple forms of animation.
== See also ==
2.5D
3D computer graphics
Computer animation
CGI
Bit blit
Computer graphics
Graphic art software
Graphics
Image scaling
List of home computers by video hardware
Turtle graphics
Transparency in graphics
Palette (computing)
Parallax scrolling
Pixel art
== References == | Wikipedia/2D_computer_graphics |
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, or the output (or both) are random variables.
There is a distinction between algorithms that use the random input so that they always terminate with the correct answer, but where the expected running time is finite (Las Vegas algorithms, for example Quicksort), and algorithms which have a chance of producing an incorrect result (Monte Carlo algorithms, for example the Monte Carlo algorithm for the MFAS problem) or fail to produce a result either by signaling a failure or failing to terminate. In some cases, probabilistic algorithms are the only practical means of solving a problem.
In common practice, randomized algorithms are approximated using a pseudorandom number generator in place of a true source of random bits; such an implementation may deviate from the expected theoretical behavior and mathematical guarantees which may depend on the existence of an ideal true random number generator.
== Motivation ==
As a motivating example, consider the problem of finding an ‘a’ in an array of n elements.
Input: An array of n≥2 elements, in which half are ‘a’s and the other half are ‘b’s.
Output: Find an ‘a’ in the array.
We give two versions of the algorithm, one Las Vegas algorithm and one Monte Carlo algorithm.
Las Vegas algorithm:
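A minimal Python sketch of the Las Vegas version (the function name and array encoding are illustrative assumptions, not from the original):

```python
import random

def find_a_las_vegas(arr):
    """Repeatedly probe random positions until an 'a' is found.
    The answer is always correct; only the running time is random."""
    while True:
        i = random.randrange(len(arr))
        if arr[i] == 'a':
            return i

arr = ['a', 'b'] * 8          # half 'a's, half 'b's
idx = find_a_las_vegas(arr)
print(arr[idx])               # always prints 'a'
```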
This algorithm succeeds with probability 1. The number of iterations varies and can be arbitrarily large, but the expected number of iterations is
{\displaystyle \lim _{n\to \infty }\sum _{i=1}^{n}{\frac {i}{2^{i}}}=2}
Since it is constant, the expected run time over many calls is {\displaystyle \Theta (1)} (see Big Theta notation).
Monte Carlo algorithm:
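A matching Python sketch of the Monte Carlo version (again, names and encoding are illustrative):

```python
import random

def find_a_monte_carlo(arr, k):
    """Probe at most k random positions; the run time is bounded by k,
    but the search may fail (return None) even though an 'a' exists."""
    for _ in range(k):
        i = random.randrange(len(arr))
        if arr[i] == 'a':
            return i
    return None  # failure after k iterations

arr = ['a', 'b'] * 8
result = find_a_monte_carlo(arr, k=20)   # fails with probability (1/2)**20
```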
If an ‘a’ is found, the algorithm succeeds, else the algorithm fails. Since each iteration independently finds an ‘a’ with probability 1/2, after k iterations the probability of finding an ‘a’ is {\displaystyle 1-\left({\frac {1}{2}}\right)^{k}}.
This algorithm does not guarantee success, but the run time is bounded. The number of iterations is always less than or equal to k. Taking k to be constant, the run time (expected and absolute) is {\displaystyle \Theta (1)}.
Randomized algorithms are particularly useful when faced with a malicious "adversary" or attacker who deliberately tries to feed a bad input to the algorithm (see worst-case complexity and competitive analysis (online algorithm)) such as in the Prisoner's dilemma. It is for this reason that randomness is ubiquitous in cryptography. In cryptographic applications, pseudo-random numbers cannot be used, since the adversary can predict them, making the algorithm effectively deterministic. Therefore, either a source of truly random numbers or a cryptographically secure pseudo-random number generator is required. Another area in which randomness is inherent is quantum computing.
In the example above, the Las Vegas algorithm always outputs the correct answer, but its running time is a random variable. The Monte Carlo algorithm (related to the Monte Carlo method for simulation) is guaranteed to complete in an amount of time that can be bounded by a function of the input size and its parameter k, but allows a small probability of error. Observe that any Las Vegas algorithm can be converted into a Monte Carlo algorithm (via Markov's inequality), by having it output an arbitrary, possibly incorrect answer if it fails to complete within a specified time. Conversely, if an efficient verification procedure exists to check whether an answer is correct, then a Monte Carlo algorithm can be converted into a Las Vegas algorithm by running the Monte Carlo algorithm repeatedly until a correct answer is obtained.
== Computational complexity ==
Computational complexity theory models randomized algorithms as probabilistic Turing machines. Both Las Vegas and Monte Carlo algorithms are considered, and several complexity classes are studied. The most basic randomized complexity class is RP, which is the class of decision problems for which there is an efficient (polynomial time) randomized algorithm (or probabilistic Turing machine) which recognizes NO-instances with absolute certainty and recognizes YES-instances with a probability of at least 1/2. The complement class for RP is co-RP. Problem classes having (possibly nonterminating) algorithms with polynomial time average case running time whose output is always correct are said to be in ZPP.
The class of problems for which both YES and NO-instances are allowed to be identified with some error is called BPP. This class acts as the randomized equivalent of P, i.e. BPP represents the class of efficient randomized algorithms.
== Early history ==
=== Sorting ===
Quicksort was discovered by Tony Hoare in 1959, and subsequently published in 1961. In the same year, Hoare published the quickselect algorithm, which finds the median element of a list in linear expected time. It remained open until 1973 whether a deterministic linear-time algorithm existed.
=== Number theory ===
In 1917, Henry Cabourn Pocklington introduced a randomized algorithm known as Pocklington's algorithm for efficiently finding square roots modulo prime numbers.
In 1970, Elwyn Berlekamp introduced a randomized algorithm for efficiently computing the roots of a polynomial over a finite field. In 1977, Robert M. Solovay and Volker Strassen discovered a polynomial-time randomized primality test (i.e., determining the primality of a number). Soon afterwards Michael O. Rabin demonstrated that the 1976 Miller's primality test could also be turned into a polynomial-time randomized algorithm. At that time, no provably polynomial-time deterministic algorithms for primality testing were known.
=== Data structures ===
One of the earliest randomized data structures is the hash table, which was introduced in 1953 by Hans Peter Luhn at IBM. Luhn's hash table used chaining to resolve collisions and was also one of the first applications of linked lists. Subsequently, in 1954, Gene Amdahl, Elaine M. McGraw, Nathaniel Rochester, and Arthur Samuel of IBM Research introduced linear probing, although Andrey Ershov independently had the same idea in 1957. In 1962, Donald Knuth performed the first correct analysis of linear probing, although the memorandum containing his analysis was not published until much later. The first published analysis was due to Konheim and Weiss in 1966.
Early works on hash tables either assumed access to a fully random hash function or assumed that the keys themselves were random. In 1979, Carter and Wegman introduced universal hash functions, which they showed could be used to implement chained hash tables with constant expected time per operation.
Early work on randomized data structures also extended beyond hash tables. In 1970, Burton Howard Bloom introduced an approximate-membership data structure known as the Bloom filter. In 1989, Raimund Seidel and Cecilia R. Aragon introduced a randomized balanced search tree known as the treap. In the same year, William Pugh introduced another randomized search tree known as the skip list.
=== Implicit uses in combinatorics ===
Prior to the popularization of randomized algorithms in computer science, Paul Erdős popularized the use of randomized constructions as a mathematical technique for establishing the existence of mathematical objects. This technique has become known as the probabilistic method. Erdős gave his first application of the probabilistic method in 1947, when he used a simple randomized construction to establish the existence of Ramsey graphs. He famously used a more sophisticated randomized algorithm in 1959 to establish the existence of graphs with high girth and chromatic number.
== Examples ==
=== Quicksort ===
Quicksort is a familiar, commonly used algorithm in which randomness can be useful. Many deterministic versions of this algorithm require O(n2) time to sort n numbers for some well-defined class of degenerate inputs (such as an already sorted array), with the specific class of inputs that generate this behavior defined by the protocol for pivot selection. However, if the algorithm selects pivot elements uniformly at random, it has a provably high probability of finishing in O(n log n) time regardless of the characteristics of the input.
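The random pivot selection described above can be sketched as follows (an out-of-place version for clarity; a production implementation would partition in place):

```python
import random

def randomized_quicksort(xs):
    """Quicksort with a uniformly random pivot, so no fixed input class
    can reliably trigger the O(n^2) worst case."""
    if len(xs) <= 1:
        return xs
    pivot = xs[random.randrange(len(xs))]
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

out = randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6])
print(out)   # [1, 1, 2, 3, 4, 5, 6, 9]
```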
=== Randomized incremental constructions in geometry ===
In computational geometry, a standard technique to build a structure like a convex hull or Delaunay triangulation is to randomly permute the input points and then insert them one by one into the existing structure. The randomization ensures that the expected number of changes to the structure caused by an insertion is small, and so the expected running time of the algorithm can be bounded from above. This technique is known as randomized incremental construction.
=== Min cut ===
Input: A graph G(V,E)
Output: A cut partitioning the vertices into L and R, with the minimum number of edges between L and R.
Recall that the contraction of two nodes, u and v, in a (multi-)graph yields a new node u′ with edges that are the union of the edges incident on either u or v, except for any edge(s) connecting u and v. Figure 1 gives an example of the contraction of vertices A and B.
After contraction, the resulting graph may have parallel edges, but contains no self loops.
Karger's basic algorithm:
begin
    i = 1
    repeat
        repeat
            take a random edge (u,v) ∈ E in G
            replace u and v with the contraction u'
        until only 2 nodes remain
        obtain the corresponding cut result Ci
        i = i + 1
    until i = m
    output the minimum cut among C1, C2, ..., Cm
end
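A runnable Python version of the pseudocode above; the edge-list graph representation and the names are illustrative choices, not part of the original:

```python
import random

def karger_min_cut(vertices, edges, m):
    """Karger's algorithm: contract random edges until two super-nodes
    remain; repeat m times and keep the smallest cut found."""
    best = None
    for _ in range(m):
        parent = {v: v for v in vertices}   # super-node of each vertex

        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v

        remaining = len(vertices)
        crossing = list(edges)              # edges between distinct super-nodes
        while remaining > 2:
            u, v = random.choice(crossing)  # uniform over remaining multi-edges
            parent[find(v)] = find(u)       # contract v's super-node into u's
            remaining -= 1
            # discard self-loops created by the contraction
            crossing = [(a, b) for (a, b) in crossing if find(a) != find(b)]
        best = len(crossing) if best is None else min(best, len(crossing))
    return best

# square with one diagonal: the minimum cut has size 2
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
cut = karger_min_cut([0, 1, 2, 3], edges, m=30)
print(cut)   # 2 with high probability
```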
In each execution of the outer loop, the algorithm repeats the inner loop until only 2 nodes remain, and the corresponding cut is obtained. The run time of one execution is {\displaystyle O(n)}, where n denotes the number of vertices.
After m executions of the outer loop, we output the minimum cut among all the results. Figure 2 gives an example of one execution of the algorithm. After execution, we get a cut of size 3.
==== Analysis of algorithm ====
The probability that the algorithm succeeds is 1 − the probability that all attempts fail. By independence, the probability that all attempts fail is
{\displaystyle \prod _{i=1}^{m}\Pr(C_{i}\neq C)=\prod _{i=1}^{m}(1-\Pr(C_{i}=C)).}
By Lemma 1, the probability that Ci = C is the probability that no edge of C is selected during iteration i. Consider the inner loop and let Gj denote the graph after j edge contractions, where j ∈ {0, 1, …, n − 3}. Gj has n − j vertices. We use the chain rule of conditional probabilities.
The probability that the edge chosen at iteration j is not in C, given that no edge of C has been chosen before, is
{\displaystyle 1-{\frac {k}{|E(G_{j})|}}}
. Note that Gj still has min cut of size k, so by Lemma 2, it still has at least
{\displaystyle {\frac {(n-j)k}{2}}}
edges.
Thus,
{\displaystyle 1-{\frac {k}{|E(G_{j})|}}\geq 1-{\frac {2}{n-j}}={\frac {n-j-2}{n-j}}}.
So by the chain rule, the probability of finding the min cut C is
{\displaystyle \Pr[C_{i}=C]\geq \left({\frac {n-2}{n}}\right)\left({\frac {n-3}{n-1}}\right)\left({\frac {n-4}{n-2}}\right)\ldots \left({\frac {3}{5}}\right)\left({\frac {2}{4}}\right)\left({\frac {1}{3}}\right).}
Cancellation gives
{\displaystyle \Pr[C_{i}=C]\geq {\frac {2}{n(n-1)}}}
. Thus the probability that the algorithm succeeds is at least
{\displaystyle 1-\left(1-{\frac {2}{n(n-1)}}\right)^{m}}
. For
{\displaystyle m={\frac {n(n-1)}{2}}\ln n}
, this is equivalent to
{\displaystyle 1-{\frac {1}{n}}}
. The algorithm finds the min cut with probability
{\displaystyle 1-{\frac {1}{n}}}
, in time
{\displaystyle O(mn)=O(n^{3}\log n)}.
== Derandomization ==
Randomness can be viewed as a resource, like space and time. Derandomization is then the process of removing randomness (or using as little of it as possible). It is not currently known if all algorithms can be derandomized without significantly increasing their running time. For instance, in computational complexity, it is unknown whether P = BPP, i.e., we do not know whether we can take an arbitrary randomized algorithm that runs in polynomial time with a small error probability and derandomize it to run in polynomial time without using randomness.
There are specific methods that can be employed to derandomize particular randomized algorithms:
the method of conditional probabilities, and its generalization, pessimistic estimators
discrepancy theory (which is used to derandomize geometric algorithms)
the exploitation of limited independence in the random variables used by the algorithm, such as the pairwise independence used in universal hashing
the use of expander graphs (or dispersers in general) to amplify a limited amount of initial randomness (this last approach is also referred to as generating pseudorandom bits from a random source, and leads to the related topic of pseudorandomness)
changing the randomized algorithm to use a hash function as a source of randomness for the algorithm's tasks, and then derandomizing the algorithm by brute-forcing all possible parameters (seeds) of the hash function. This technique is usually used to exhaustively search a sample space and making the algorithm deterministic (e.g. randomized graph algorithms)
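The method of conditional probabilities can be illustrated with the classic max-cut example: placing each vertex on a random side cuts half the edges in expectation, and fixing vertices one at a time so the conditional expectation never decreases yields a deterministic algorithm with the same guarantee. The graph and names below are illustrative:

```python
def derandomized_max_cut(vertices, edges):
    """Method of conditional expectations: place each vertex, in turn, on
    the side that cuts at least as many of its already-placed neighbors.
    The resulting cut always contains at least half of all edges."""
    side = {}
    for v in vertices:
        # count already-placed neighbors of v on each side
        on_true = sum(1 for a, b in edges
                      if v in (a, b) and side.get(b if a == v else a) is True)
        on_false = sum(1 for a, b in edges
                       if v in (a, b) and side.get(b if a == v else a) is False)
        # putting v opposite the larger group cuts more edges
        side[v] = on_false >= on_true
    cut = sum(1 for a, b in edges if side[a] != side[b])
    return side, cut

verts = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # 5-cycle
_, cut = derandomized_max_cut(verts, edges)
print(cut)   # 4, which is at least len(edges)/2
```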
== Where randomness helps ==
When the model of computation is restricted to Turing machines, it is currently an open question whether the ability to make random choices allows some problems to be solved in polynomial time that cannot be solved in polynomial time without this ability; this is the question of whether P = BPP. However, in other contexts, there are specific examples of problems where randomization yields strict improvements.
Based on the initial motivating example: given an exponentially long string of 2^k characters, half a's and half b's, a random-access machine requires 2^(k−1) lookups in the worst case to find the index of an a; if it is permitted to make random choices, it can solve this problem in an expected polynomial number of lookups.
The natural way of carrying out a numerical computation in embedded systems or cyber-physical systems is to provide a result that approximates the correct one with high probability (known as probably approximately correct computation, PACC). The hard problem associated with the evaluation of the discrepancy loss between the approximated and the correct computation can be effectively addressed by resorting to randomization.
In communication complexity, the equality of two strings can be verified to some reliability using
{\displaystyle \log n}
bits of communication with a randomized protocol. Any deterministic protocol requires
{\displaystyle \Theta (n)}
bits if defending against a strong opponent.
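A logarithmic-cost equality check can be sketched with fingerprinting: one party sends a random prime p below n² together with the string's value modulo p, and a mismatch goes undetected only if p divides the difference of the two strings. The prime range and names below follow the textbook analysis but are illustrative:

```python
import random

def random_prime(lo, hi):
    """Uniform random prime in [lo, hi) via rejection sampling with trial
    division (adequate for the small illustrative range used here)."""
    while True:
        p = random.randrange(lo, hi)
        if p > 1 and all(p % d for d in range(2, int(p ** 0.5) + 1)):
            return p

def strings_probably_equal(x_bits, y_bits):
    """One round of the fingerprinting protocol: compare x and y modulo a
    random prime p < n^2, sending only O(log n) bits instead of n."""
    n = len(x_bits)
    p = random_prime(2, max(n * n, 4))
    return int(x_bits, 2) % p == int(y_bits, 2) % p

x = '1011' * 8            # 32-bit strings
y = '1011' * 7 + '1010'   # differs from x in the last bit
print(strings_probably_equal(x, x))  # True: equal strings always pass
```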
The volume of a convex body can be estimated by a randomized algorithm to arbitrary precision in polynomial time. Bárány and Füredi showed that no deterministic algorithm can do the same. This is true unconditionally, i.e. without relying on any complexity-theoretic assumptions, assuming the convex body can be queried only as a black box.
A more complexity-theoretic example of a place where randomness appears to help is the class IP. IP consists of all languages that can be accepted (with high probability) by a polynomially long interaction between an all-powerful prover and a verifier that implements a BPP algorithm. IP = PSPACE. However, if it is required that the verifier be deterministic, then IP = NP.
In a chemical reaction network (a finite set of reactions like A+B → 2C + D operating on a finite number of molecules), the ability to ever reach a given target state from an initial state is decidable, while even approximating the probability of ever reaching a given target state (using the standard concentration-based probability for which reaction will occur next) is undecidable. More specifically, a limited Turing machine can be simulated with arbitrarily high probability of running correctly for all time, only if a random chemical reaction network is used. With a simple nondeterministic chemical reaction network (any possible reaction can happen next), the computational power is limited to primitive recursive functions.
== See also ==
Approximate counting algorithm
Atlantic City algorithm
Bogosort
Count–min sketch
HyperLogLog
Karger's algorithm
Las Vegas algorithm
Monte Carlo algorithm
Principle of deferred decision
Probabilistic analysis of algorithms
Probabilistic roadmap
Randomized algorithms as zero-sum games
== Notes ==
== References ==
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw–Hill, 1990. ISBN 0-262-03293-7. Chapter 5: Probabilistic Analysis and Randomized Algorithms, pp. 91–122.
Dirk Draheim. "Semantics of the Probabilistic Typed Lambda Calculus (Markov Chain Semantics, Termination Behavior, and Denotational Semantics)." Springer, 2017.
Jon Kleinberg and Éva Tardos. Algorithm Design. Chapter 13: "Randomized algorithms".
Fallis, D. (2000). "The reliability of randomized algorithms". The British Journal for the Philosophy of Science. 51 (2): 255–271. doi:10.1093/bjps/51.2.255.
M. Mitzenmacher and E. Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, New York (NY), 2005.
Rajeev Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, New York (NY), 1995.
Rajeev Motwani and P. Raghavan. Randomized Algorithms. A survey on Randomized Algorithms.
Christos Papadimitriou (1993), Computational Complexity (1st ed.), Addison Wesley, ISBN 978-0-201-53082-7 Chapter 11: Randomized computation, pp. 241–278.
Rabin, Michael O. (1980). "Probabilistic algorithm for testing primality". Journal of Number Theory. 12: 128–138. doi:10.1016/0022-314X(80)90084-0.
A. A. Tsay, W. S. Lovejoy, David R. Karger, Random Sampling in Cut, Flow, and Network Design Problems, Mathematics of Operations Research, 24(2):383–413, 1999.
"Randomized Algorithms for Scientific Computing" (RASC), OSTI.GOV (July 10th, 2021). | Wikipedia/Randomized_algorithms |
A metamodel is a model of a model, and metamodeling is the process of generating such metamodels. Thus metamodeling or meta-modeling is the analysis, construction, and development of the frames, rules, constraints, models, and theories applicable and useful for modeling a predefined class of problems. As its name implies, this concept applies the notions of meta- and modeling in software engineering and systems engineering. Metamodels are of many types and have diverse applications.
== Overview ==
A metamodel or surrogate model is a model of a model, i.e. a simplified model of an actual model of a circuit, system, or software-like entity. A metamodel can be a mathematical relation or algorithm representing input and output relations. A model is an abstraction of phenomena in the real world; a metamodel is yet another abstraction, highlighting the properties of the model itself. A model conforms to its metamodel in the way that a computer program conforms to the grammar of the programming language in which it is written. Various types of metamodels include polynomial equations, neural networks, Kriging, etc. "Metamodeling" is the construction of a collection of "concepts" (things, terms, etc.) within a certain domain. Metamodeling typically involves studying the input and output relationships and then fitting the right metamodels to represent that behavior.
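For instance, a simple polynomial metamodel can be fitted to a few input/output samples of a more expensive model and then used as a cheap surrogate; the model and sample points below are purely illustrative:

```python
def fit_quadratic(samples):
    """Fit an interpolating quadratic metamodel through three (x, y)
    samples using Lagrange's formula; returns a callable surrogate."""
    (x0, y0), (x1, y1), (x2, y2) = samples
    def surrogate(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return surrogate

def expensive_model(x):
    # stand-in for a costly simulation; here simply a smooth response
    return 2 * x * x + 3 * x + 1

samples = [(x, expensive_model(x)) for x in (0.0, 1.0, 2.0)]
meta = fit_quadratic(samples)
print(meta(1.5), expensive_model(1.5))   # the surrogate matches the model here
```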
Common uses for metamodels are:
As a schema for semantic data that needs to be exchanged or stored
As a language that supports a particular method or process
As a language to express additional semantics of existing information
As a mechanism to create tools that work with a broad class of models at run time
As a schema for modeling and automatically exploring sentences of a language with applications to automated test synthesis
As an approximation of a higher-fidelity model for use when reducing time, cost, or computational effort is necessary
Because of the "meta" character of metamodeling, both the praxis and theory of metamodels are of relevance to metascience, metaphilosophy, metatheories and systemics, and meta-consciousness. The concept can be useful in mathematics, and has practical applications in computer science and computer engineering/software engineering. The latter are the main focus of this article.
== Topics ==
=== Definition ===
In software engineering, the use of models is an alternative to more common code-based development techniques. A model always conforms to a unique metamodel. One of the currently most active branches of Model Driven Engineering is the approach named model-driven architecture proposed by OMG. This approach is embodied in the Meta Object Facility (MOF) specification.
Typical metamodelling specifications proposed by OMG are UML, SysML, SPEM or CWM. ISO has also published the standard metamodel ISO/IEC 24744. All the languages presented below could be defined as MOF metamodels.
=== Metadata modeling ===
Metadata modeling is a type of metamodeling used in software engineering and systems engineering for the analysis and construction of models applicable and useful to some predefined class of problems. (see also: data modeling).
=== Model transformations ===
One important move in model-driven engineering is the systematic use of model transformation languages. The OMG has proposed a standard for this called QVT for Queries/Views/Transformations. QVT is based on the meta-object facility (MOF). Among many other model transformation languages (MTLs), some examples of implementations of this standard are AndroMDA, VIATRA, Tefkat, MT, ManyDesigns Portofino.
=== Relationship to ontologies ===
Meta-models are closely related to ontologies. Both are often used to describe and analyze the relations between concepts:
Ontologies: express something meaningful within a specified universe or domain of discourse by utilizing a grammar for a controlled vocabulary. The grammar specifies what it means to be a well-formed statement, assertion, query, etc., and gives formal constraints on how terms in the ontology's controlled vocabulary can be used together.
Meta-modeling: can be considered as an explicit description (constructs and rules) of how a domain-specific model is built. In particular, this comprises a formalized specification of the domain-specific notations. Typically, metamodels follow (and always should follow) a strict rule set. "A valid metamodel is an ontology, but not all ontologies are modeled explicitly as metamodels."
=== Types of metamodels ===
For software engineering, several types of models (and their corresponding modeling activities) can be distinguished:
Metadata modeling (MetaData model)
Meta-process modeling (MetaProcess model)
Executable meta-modeling (combining both of the above and much more, as in the general purpose tool Kermeta)
Model transformation language (see below)
Polynomial metamodels
Neural network metamodels
Kriging metamodels
Piecewise polynomial (spline) metamodels
Gradient-enhanced kriging (GEK)
=== Zoos of metamodels ===
A library of similar metamodels has been called a Zoo of metamodels.
There are several types of meta-model zoos. Some are expressed in ECore. Others are written in MOF 1.4 – XMI 1.2. The metamodels expressed in UML-XMI1.2 may be uploaded in Poseidon for UML, a UML CASE tool.
== See also ==
== References ==
== Further reading ==
Saraju Mohanty (2015). "Chapter 12 Metamodel-Based Fast AMS-SoC Design Methodologies". Nanoelectronic Mixed-Signal System Design. McGraw-Hill. ISBN 978-0071825719.
Booch, G., Rumbaugh, J., Jacobson, I. (1999), The Unified Modeling Language User Guide, Redwood City, CA: Addison Wesley Longman Publishing Co., Inc.
J. P. van Gigch, System Design Modeling and Metamodeling, Plenum Press, New York, 1991
Gopi Bulusu, hamara.in, 2004 Model Driven Transformation
P. C. Smolik, Mambo Metamodeling Environment, Doctoral Thesis, Brno University of Technology. 2006
Gonzalez-Perez, C. and B. Henderson-Sellers, 2008. Metamodelling for Software Engineering. Chichester (UK): Wiley. 210 p. ISBN 978-0-470-03036-3
M.A. Jeusfeld, M. Jarke, and J. Mylopoulos, 2009. Metamodeling for Method Engineering. Cambridge (USA): The MIT Press. 424 p. ISBN 978-0-262-10108-0, Open access via http://conceptbase.sourceforge.net/2021_Metamodeling_for_Method_Engineering.pdf
G. Caplat Modèles & Métamodèles, 2008 - ISBN 978-2-88074-749-7 (in French)
Fill, H.-G., Karagiannis, D., 2013. On the Conceptualisation of Modelling Methods Using the ADOxx Meta Modelling Platform, Enterprise Modelling and Information Systems Architectures, Vol. 8, Issue 1, 4-25. | Wikipedia/Metamodeling |
Detection theory or signal detection theory is a means to measure the ability to differentiate between information-bearing patterns (called stimulus in living organisms, signal in machines) and random patterns that distract from the information (called noise, consisting of background stimuli and random activity of the detection machine and of the nervous system of the operator).
In the field of electronics, signal recovery is the separation of such patterns from a disguising background.
According to the theory, there are a number of determiners of how a detecting system will detect a signal, and where its threshold levels will be. The theory can explain how changing the threshold will affect the ability to discern, often exposing how adapted the system is to the task, purpose or goal at which it is aimed. When the detecting system is a human being, characteristics such as experience, expectations, physiological state (e.g. fatigue) and other factors can affect the threshold applied. For instance, a sentry in wartime might be likely to detect fainter stimuli than the same sentry in peacetime due to a lower criterion; however, they might also be more likely to treat innocuous stimuli as a threat.
Much of the early work in detection theory was done by radar researchers. By 1954, the theory was fully developed on the theoretical side as described by Peterson, Birdsall and Fox and the foundation for the psychological theory was made by Wilson P. Tanner, David M. Green, and John A. Swets, also in 1954.
Detection theory was used in 1966 by John A. Swets and David M. Green for psychophysics. Green and Swets criticized the traditional methods of psychophysics for their inability to discriminate between the real sensitivity of subjects and their (potential) response biases.
Detection theory has applications in many fields such as diagnostics of any kind, quality control, telecommunications, and psychology. The concept is similar to the signal-to-noise ratio used in the sciences and confusion matrices used in artificial intelligence. It is also usable in alarm management, where it is important to separate important events from background noise.
== Psychology ==
Signal detection theory (SDT) is used when psychologists want to measure the way we make decisions under conditions of uncertainty, such as how we would perceive distances in foggy conditions or during eyewitness identification. SDT assumes that the decision maker is not a passive receiver of information, but an active decision-maker who makes difficult perceptual judgments under conditions of uncertainty. In foggy circumstances, we are forced to decide how far away from us an object is, based solely upon visual stimulus which is impaired by the fog. Since the brightness of the object, such as a traffic light, is used by the brain to discriminate the distance of an object, and the fog reduces the brightness of objects, we perceive the object to be much farther away than it actually is (see also decision theory). According to SDT, during eyewitness identifications, witnesses base their decision as to whether a suspect is the culprit or not based on their perceived level of familiarity with the suspect.
To apply signal detection theory to a data set where stimuli were either present or absent, and the observer categorized each trial as having the stimulus present or absent, the trials are sorted into one of four categories:
Based on the proportions of these types of trials, numerical estimates of sensitivity can be obtained with statistics like the sensitivity index d' and A', and response bias can be estimated with statistics like c and β. β is the measure of response bias.
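The standard formulas d' = z(H) − z(F) and c = −(z(H) + z(F))/2, where z is the inverse of the standard normal CDF, H the hit rate and F the false-alarm rate, can be sketched as follows (the trial counts are invented for illustration, and rates of exactly 0 or 1 would need a correction before taking z):

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute the sensitivity index d' = z(H) - z(F) and the response
    bias c = -(z(H) + z(F)) / 2 from the four trial counts."""
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)
    c = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, c

d, c = dprime_and_criterion(hits=40, misses=10,
                            false_alarms=10, correct_rejections=40)
print(d, c)   # d' ≈ 1.683; symmetric performance gives c ≈ 0
```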
Signal detection theory can also be applied to memory experiments, where items are presented on a study list for later testing. A test list is created by combining these 'old' items with novel, 'new' items that did not appear on the study list. On each test trial the subject will respond 'yes, this was on the study list' or 'no, this was not on the study list'. Items presented on the study list are called Targets, and new items are called Distractors. Saying 'Yes' to a target constitutes a Hit, while saying 'Yes' to a distractor constitutes a False Alarm.
== Applications ==
Signal Detection Theory has wide application, both in humans and animals. Topics include memory, stimulus characteristics of schedules of reinforcement, etc.
=== Sensitivity or discriminability ===
Conceptually, sensitivity refers to how hard or easy it is to detect that a target stimulus is present from background events. For example, in a recognition memory paradigm, having longer to study to-be-remembered words makes it easier to recognize previously seen or heard words. In contrast, having to remember 30 words rather than 5 makes the discrimination harder. One of the most commonly used statistics for computing sensitivity is the so-called sensitivity index or d'. There are also non-parametric measures, such as the area under the ROC-curve.
=== Bias ===
Bias is the extent to which one response is more probable than another, averaging across stimulus-present and stimulus-absent cases. That is, a receiver may be more likely overall to respond that a stimulus is present or more likely overall to respond that a stimulus is not present. Bias is independent of sensitivity. Bias can be desirable if false alarms and misses lead to different costs. For example, if the stimulus is a bomber, then a miss (failing to detect the bomber) may be more costly than a false alarm (reporting a bomber when there is not one), making a liberal response bias desirable. In contrast, giving false alarms too often (crying wolf) may make people less likely to respond, a problem that can be reduced by a conservative response bias.
=== Compressed sensing ===
Another field which is closely related to signal detection theory is compressed sensing (or compressive sensing). The objective of compressed sensing is to recover high-dimensional but low-complexity entities from only a few measurements. Thus, one of the most important applications of compressed sensing is the recovery of high-dimensional signals which are known to be sparse (or nearly sparse) from only a few linear measurements. The number of measurements needed to recover the signal is far smaller than what the Nyquist sampling theorem requires, provided that the signal is sparse, meaning that it contains only a few non-zero elements. There are different methods of signal recovery in compressed sensing, including basis pursuit, the expander recovery algorithm, CoSaMP, and fast non-iterative algorithms. In all of the recovery methods mentioned above, choosing an appropriate measurement matrix, using probabilistic or deterministic constructions, is of great importance. In other words, measurement matrices must satisfy certain conditions such as the RIP (restricted isometry property) or the null-space property in order to achieve robust sparse recovery.
== Mathematics ==
=== P(H1|y) > P(H2|y) / MAP testing ===
In the case of making a decision between two hypotheses, H1, absent, and H2, present, in the event of a particular observation, y, a classical approach is to choose H1 when p(H1|y) > p(H2|y) and H2 in the reverse case. In the event that the two a posteriori probabilities are equal, one might choose to default to a single choice (either always choose H1 or always choose H2), or might randomly select either H1 or H2. The a priori probabilities of H1 and H2 can guide this choice, e.g. by always choosing the hypothesis with the higher a priori probability.
When taking this approach, usually what one knows are the conditional probabilities, p(y|H1) and p(y|H2), and the a priori probabilities p(H1) = π1 and p(H2) = π2. In this case,

p(H1|y) = p(y|H1)·π1 / p(y),
p(H2|y) = p(y|H2)·π2 / p(y),

where p(y) is the total probability of event y,

p(y) = p(y|H1)·π1 + p(y|H2)·π2.
H2 is chosen in case

p(y|H2)·π2 / (p(y|H1)·π1 + p(y|H2)·π2) ≥ p(y|H1)·π1 / (p(y|H1)·π1 + p(y|H2)·π2)
⇒ p(y|H2) / p(y|H1) ≥ π1 / π2

and H1 otherwise.
Often, the ratio π1/π2 is called τMAP and the ratio p(y|H2)/p(y|H1) is called L(y), the likelihood ratio. Using this terminology, H2 is chosen in case L(y) ≥ τMAP. This is called MAP testing, where MAP stands for "maximum a posteriori".
Taking this approach minimizes the expected number of errors one will make.
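The MAP rule above can be sketched numerically for two hypothetical unit-variance Gaussian hypotheses; the means and priors below are illustrative assumptions, not values from the text:

```python
from statistics import NormalDist

# H1 = stimulus absent (mean 0), H2 = stimulus present (mean 1), unit variance.
H1, H2 = NormalDist(0, 1), NormalDist(1, 1)
pi1, pi2 = 0.7, 0.3                     # a priori probabilities
tau_map = pi1 / pi2                     # MAP threshold on the likelihood ratio

def decide(y):
    L = H2.pdf(y) / H1.pdf(y)           # likelihood ratio L(y)
    return "H2" if L >= tau_map else "H1"

# An observation near 0 favors H1; a large observation overcomes the prior.
print(decide(0.2), decide(2.0))
```

Here decide(0.2) returns "H1" and decide(2.0) returns "H2": because H1 is a priori more likely, the evidence for H2 must be strong before the rule switches.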
=== Bayes criterion ===
In some cases, it is far more important to respond appropriately to H1 than it is to respond appropriately to H2. For example, if an alarm goes off, indicating H1 (an incoming bomber is carrying a nuclear weapon), it is much more important to shoot down the bomber if H1 = TRUE, than it is to avoid sending a fighter squadron to inspect a false alarm (i.e., H1 = FALSE, H2 = TRUE) (assuming a large supply of fighter squadrons). The Bayes criterion is an approach suitable for such cases.
Here a utility is associated with each of four situations:
U11: One responds with behavior appropriate to H1 and H1 is true: fighters destroy bomber, incurring fuel, maintenance, and weapons costs, and take the risk of some being shot down;
U12: One responds with behavior appropriate to H1 and H2 is true: fighters sent out, incurring fuel and maintenance costs, bomber location remains unknown;
U21: One responds with behavior appropriate to H2 and H1 is true: city destroyed;
U22: One responds with behavior appropriate to H2 and H2 is true: fighters stay home, bomber location remains unknown.
As is shown below, what is important are the differences, U11 − U21 and U22 − U12.
Similarly, there are four probabilities, P11, P12, etc., for each of the cases (which are dependent on one's decision strategy).
The Bayes criterion approach is to maximize the expected utility:

E{U} = P11·U11 + P21·U21 + P12·U12 + P22·U22
E{U} = P11·U11 + (1 − P11)·U21 + P12·U12 + (1 − P12)·U22
E{U} = U21 + U22 + P11·(U11 − U21) − P12·(U22 − U12)
Effectively, one may maximize the sum

U′ = P11·(U11 − U21) − P12·(U22 − U12),

and make the following substitutions:

P11 = π1 · ∫R1 p(y|H1) dy
P12 = π2 · ∫R1 p(y|H2) dy
where π1 and π2 are the a priori probabilities, P(H1) and P(H2), and R1 is the region of observation events, y, that are responded to as though H1 is true.
⇒ U′ = ∫R1 { π1·(U11 − U21)·p(y|H1) − π2·(U22 − U12)·p(y|H2) } dy

U′, and thus U, are maximized by extending R1 over the region where

π1·(U11 − U21)·p(y|H1) − π2·(U22 − U12)·p(y|H2) > 0
This is accomplished by deciding H2 in case

π2·(U22 − U12)·p(y|H2) ≥ π1·(U11 − U21)·p(y|H1)
⇒ L(y) ≡ p(y|H2)/p(y|H1) ≥ π1·(U11 − U21) / (π2·(U22 − U12)) ≡ τB
and H1 otherwise, where L(y) is the so-defined likelihood ratio.
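The Bayes threshold τB can be sketched numerically in the spirit of the bomber example; the utilities, priors, and Gaussian radar-return models below are illustrative assumptions, following this section's convention that H1 = bomber present and H2 = no bomber:

```python
from statistics import NormalDist

# Hypothetical utilities: letting the city be destroyed (U21) is far worse
# than a wasted scramble (U12).
U11, U12 = -1.0, -1.0          # scramble costs, bomber present / absent
U21, U22 = -1000.0, 0.0        # city destroyed vs. fighters stay home
pi1, pi2 = 0.05, 0.95          # bombers are rare a priori

tau_B = (pi1 * (U11 - U21)) / (pi2 * (U22 - U12))   # Bayes threshold

# Radar return y: N(1, 1) when a bomber is present (H1), N(0, 1) otherwise (H2).
H1, H2 = NormalDist(1, 1), NormalDist(0, 1)

def decide(y):
    L = H2.pdf(y) / H1.pdf(y)              # likelihood ratio L(y)
    return "H2" if L >= tau_B else "H1"

# The huge cost of a miss inflates tau_B, so the rule scrambles fighters
# (decides H1) on almost any radar return.
print(round(tau_B, 2), decide(0.0))
```

With these numbers τB ≈ 52.6, so even an unremarkable return (y = 0) triggers a scramble: the asymmetric utilities, not the evidence alone, drive the decision.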
=== Normal distribution models ===
Das and Geisler extended the results of signal detection theory for normally distributed stimuli, and derived methods of computing the error rate and confusion matrix for ideal observers and non-ideal observers for detecting and categorizing univariate and multivariate normal signals from two or more categories.
== See also ==
== References ==
=== Bibliography ===
Coren, S., Ward, L.M., Enns, J. T. (1994) Sensation and Perception. (4th Ed.) Toronto: Harcourt Brace.
Kay, SM. Fundamentals of Statistical Signal Processing: Detection Theory (ISBN 0-13-504135-X)
McNichol, D. (1972) A Primer of Signal Detection Theory. London: George Allen & Unwin.
Van Trees HL. Detection, Estimation, and Modulation Theory, Part 1 (ISBN 0-471-09517-6; website)
Wickens, Thomas D., (2002) Elementary Signal Detection Theory. New York: Oxford University Press. (ISBN 0-19-509250-3)
== External links ==
A Description of Signal Detection Theory
An application of SDT to safety
Signal Detection Theory by Garrett Neske, The Wolfram Demonstrations Project
Lecture by Steven Pinker | Wikipedia/Signal_detection_theory |
A data model is an abstract model that organizes elements of data and standardizes how they relate to one another and to the properties of real-world entities. For instance, a data model may specify that the data element representing a car be composed of a number of other elements which, in turn, represent the color and size of the car and define its owner.
The corresponding professional activity is called generally data modeling or, more specifically, database design.
Data models are typically specified by a data expert, data specialist, data scientist, data librarian, or data scholar.
A data modeling language and notation are often represented in graphical form as diagrams.
A data model can sometimes be referred to as a data structure, especially in the context of programming languages. Data models are often complemented by function models, especially in the context of enterprise models.
A data model explicitly determines the structure of data; conversely, structured data is data organized according to an explicit data model or data structure. Structured data is in contrast to unstructured data and semi-structured data.
== Overview ==
The term data model can refer to two distinct but closely related concepts. Sometimes it refers to an abstract formalization of the objects and relationships found in a particular application domain: for example the customers, products, and orders found in a manufacturing organization. At other times it refers to the set of concepts used in defining such formalizations: for example concepts such as entities, attributes, relations, or tables. So the "data model" of a banking application may be defined using the entity–relationship "data model". This article uses the term in both senses.
Managing large quantities of structured and unstructured data is a primary function of information systems. Data models describe the structure, manipulation, and integrity aspects of the data stored in data management systems such as relational databases. They may also describe data with a looser structure, such as word processing documents, email messages, pictures, digital audio, and video: XDM, for example, provides a data model for XML documents.
=== The role of data models ===
The main aim of data models is to support the development of information systems by providing the definition and format of data. According to West and Fowler (1999) "if this is done consistently across systems then compatibility of data can be achieved. If the same data structures are used to store and access data then different applications can share data. The results of this are indicated above. However, systems and interfaces often cost more than they should, to build, operate, and maintain. They may also constrain the business rather than support it. A major cause is that the quality of the data models implemented in systems and interfaces is poor".
"Business rules, specific to how things are done in a particular place, are often fixed in the structure of a data model. This means that small changes in the way business is conducted lead to large changes in computer systems and interfaces".
"Entity types are often not identified, or incorrectly identified. This can lead to replication of data, data structure, and functionality, together with the attendant costs of that duplication in development and maintenance".
"Data models for different systems are arbitrarily different. The result of this is that complex interfaces are required between systems that share data. These interfaces can account for between 25–70% of the cost of current systems".
"Data cannot be shared electronically with customers and suppliers, because the structure and meaning of data has not been standardized. For example, engineering design data and drawings for process plant are still sometimes exchanged on paper".
The reason for these problems is a lack of standards that will ensure that data models will both meet business needs and be consistent.
A data model explicitly determines the structure of data. Typical applications of data models include database models, design of information systems, and enabling exchange of data. Usually, data models are specified in a data modeling language.[3]
=== Three perspectives ===
A data model instance may be one of three kinds according to ANSI in 1975:
Conceptual data model: describes the semantics of a domain, being the scope of the model. For example, it may be a model of the interest area of an organization or industry. This consists of entity classes, representing kinds of things of significance in the domain, and relationship assertions about associations between pairs of entity classes. A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model. In that sense, it defines the allowed expressions in an artificial 'language' with a scope that is limited by the scope of the model.
Logical data model: describes the semantics, as represented by a particular data manipulation technology. This consists of descriptions of tables and columns, object oriented classes, and XML tags, among other things.
Physical data model: describes the physical means by which data are stored. This is concerned with partitions, CPUs, tablespaces, and the like.
The significance of this approach, according to ANSI, is that it allows the three perspectives to be relatively independent of each other. Storage technology can change without affecting either the logical or the conceptual model. The table/column structure can change without (necessarily) affecting the conceptual model. In each case, of course, the structures must remain consistent with the other model. The table/column structure may be different from a direct translation of the entity classes and attributes, but it must ultimately carry out the objectives of the conceptual entity class structure. Early phases of many software development projects emphasize the design of a conceptual data model. Such a design can be detailed into a logical data model. In later stages, this model may be translated into physical data model. However, it is also possible to implement a conceptual model directly.
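The gap between the conceptual and the logical/physical perspectives can be sketched with a toy example; the entity and table names are hypothetical, and the sketch uses Python's built-in sqlite3 module:

```python
import sqlite3

# Conceptually: "a Customer places Orders". The logical model renders that
# as tables, columns, and a foreign key; the physical model (pages, B-trees)
# is handled entirely by the engine and never appears in this code.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE "order" (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id)
    );
""")
conn.execute("INSERT INTO customer VALUES (1, 'Acme')")
conn.execute('INSERT INTO "order" VALUES (10, 1)')
row = conn.execute(
    'SELECT c.name FROM customer c '
    'JOIN "order" o ON o.customer_id = c.customer_id'
).fetchone()
print(row[0])
```

Swapping SQLite for another engine would change the physical model completely while leaving this logical table/column structure, and the conceptual model above it, untouched.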
== History ==
One of the earliest pioneering works in modeling information systems was done by Young and Kent (1958), who argued for "a precise and abstract way of specifying the informational and time characteristics of a data processing problem". They wanted to create "a notation that should enable the analyst to organize the problem around any piece of hardware". Their work was the first effort to create an abstract specification and invariant basis for designing different alternative implementations using different hardware components. The next step in IS modeling was taken by CODASYL, an IT industry consortium formed in 1959, who essentially aimed at the same thing as Young and Kent: the development of "a proper structure for machine-independent problem definition language, at the system level of data processing". This led to the development of a specific IS information algebra.
In the 1960s data modeling gained more significance with the initiation of the management information system (MIS) concept. According to Leondes (2002), "during that time, the information system provided the data and information for management purposes. The first generation database system, called Integrated Data Store (IDS), was designed by Charles Bachman at General Electric. Two famous database models, the network data model and the hierarchical data model, were proposed during this period of time". Towards the end of the 1960s, Edgar F. Codd worked out his theories of data arrangement, and proposed the relational model for database management based on first-order predicate logic.
In the 1970s entity–relationship modeling emerged as a new type of conceptual data modeling, originally formalized in 1976 by Peter Chen. Entity–relationship models were being used in the first stage of information system design during the requirements analysis to describe information needs or the type of information that is to be stored in a database. This technique can describe any ontology, i.e., an overview and classification of concepts and their relationships, for a certain area of interest.
In the 1970s G.M. Nijssen developed the "Natural Language Information Analysis Method" (NIAM), and developed this in the 1980s in cooperation with Terry Halpin into Object–Role Modeling (ORM). However, it was Terry Halpin's 1989 PhD thesis that created the formal foundation on which Object–Role Modeling is based.
Bill Kent, in his 1978 book Data and Reality, compared a data model to a map of a territory, emphasizing that in the real world, "highways are not painted red, rivers don't have county lines running down the middle, and you can't see contour lines on a mountain". In contrast to other researchers who tried to create models that were mathematically clean and elegant, Kent emphasized the essential messiness of the real world, and the task of the data modeler to create order out of chaos without excessively distorting the truth.
In the 1980s, according to Jan L. Harrington (2000), "the development of the object-oriented paradigm brought about a fundamental change in the way we look at data and the procedures that operate on data. Traditionally, data and procedures have been stored separately: the data and their relationship in a database, the procedures in an application program. Object orientation, however, combined an entity's procedure with its data."
During the early 1990s, three Dutch mathematicians, Guido Bakema, Harm van der Lek, and JanPieter Zwart, continued developing the work of G.M. Nijssen. They focused more on the communication part of the semantics. In 1997 they formalized the method Fully Communication Oriented Information Modeling (FCO-IM).
== Types ==
=== Database model ===
A database model is a specification describing how a database is structured and used.
Several such models have been suggested. Common models include:
Flat model
This may not strictly qualify as a data model. The flat (or table) model consists of a single, two-dimensional array of data elements, where all members of a given column are assumed to be similar values, and all members of a row are assumed to be related to one another.
Hierarchical model
The hierarchical model is similar to the network model except that links in the hierarchical model form a tree structure, while the network model allows an arbitrary graph.
Network model
This model organizes data using two fundamental constructs, called records and sets. Records contain fields, and sets define one-to-many relationships between records: one owner, many members. The network data model is an abstraction of the design concept used in the implementation of databases.
Relational model
is a database model based on first-order predicate logic. Its core idea is to describe a database as a collection of predicates over a finite set of predicate variables, describing constraints on the possible values and combinations of values. The power of the relational data model lies in its mathematical foundations and a simple user-level paradigm.
Object–relational model
Similar to a relational database model, but objects, classes, and inheritance are directly supported in database schemas and in the query language.
Object–role modeling
A method of data modeling that has been defined as "attribute free", and "fact-based". The result is a verifiably correct system, from which other common artifacts, such as ERD, UML, and semantic models may be derived. Associations between data objects are described during the database design procedure, such that normalization is an inevitable result of the process.
Star schema
The simplest style of data warehouse schema. The star schema consists of a few "fact tables" (possibly only one, justifying the name) referencing any number of "dimension tables". The star schema is considered an important special case of the snowflake schema.
=== Data structure diagram ===
A data structure diagram (DSD) is a diagram and data model used to describe conceptual data models by providing graphical notations which document entities and their relationships, and the constraints that bind them. The basic graphic elements of DSDs are boxes, representing entities, and arrows, representing relationships. Data structure diagrams are most useful for documenting complex data entities.
Data structure diagrams are an extension of the entity–relationship model (ER model). In DSDs, attributes are specified inside the entity boxes rather than outside of them, while relationships are drawn as boxes composed of attributes which specify the constraints that bind entities together. DSDs differ from the ER model in that the ER model focuses on the relationships between different entities, whereas DSDs focus on the relationships of the elements within an entity and enable users to fully see the links and relationships between each entity.
There are several styles for representing data structure diagrams, with the notable difference in the manner of defining cardinality. The choices are between arrow heads, inverted arrow heads (crow's feet), or numerical representation of the cardinality.
=== Entity–relationship model ===
An entity–relationship model (ERM), sometimes referred to as an entity–relationship diagram (ERD), can be used to represent an abstract conceptual data model (or semantic data model or physical data model) used in software engineering to represent structured data. There are several notations used for ERMs. As in DSDs, attributes are specified inside the entity boxes rather than outside of them, while relationships are drawn as lines, with the relationship constraints as descriptions on the line. The E-R model, while robust, can become visually cumbersome when representing entities with several attributes.
=== Geographic data model ===
A data model in Geographic information systems is a mathematical construct for representing geographic objects or surfaces as data. For example,
the vector data model represents geography as points, lines, and polygons;
the raster data model represents geography as cell matrices that store numeric values;
and the triangulated irregular network (TIN) data model represents geography as sets of contiguous, nonoverlapping triangles.
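The vector and raster views of the same geography can be sketched in a few lines; the coordinates, the one-unit cell size, and the `sample` helper are all hypothetical:

```python
# Vector data model: geography as coordinate geometry.
point = (2.0, 3.0)
line = [(0.0, 0.0), (1.0, 1.0), (2.0, 3.0)]
polygon = [(0, 0), (4, 0), (4, 4), (0, 4), (0, 0)]   # closed ring

# Raster data model: geography as a cell matrix of numeric values
# (say, elevation in metres), one cell per unit of ground.
raster = [[10, 10, 12, 13],
          [10, 11, 12, 14],
          [11, 12, 13, 15],
          [12, 13, 15, 16]]

def sample(raster, x, y, cell=1.0):
    """Look up the raster cell containing the point (x, y)."""
    return raster[int(y // cell)][int(x // cell)]

# The vector point falls in the raster cell holding the value 15.
print(sample(raster, *point))
```

The contrast is the essential one: the vector model stores the objects themselves, while the raster model stores a value for every location whether or not anything is there.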
=== Generic data model ===
Generic data models are generalizations of conventional data models. They define standardized general relation types, together with the kinds of things that may be related by such a relation type. Generic data models are developed as an approach to solving some shortcomings of conventional data models. For example, different modelers usually produce different conventional data models of the same domain. This can lead to difficulty in bringing the models of different people together and is an obstacle for data exchange and data integration. Invariably, however, this difference is attributable to different levels of abstraction in the models and differences in the kinds of facts that can be instantiated (the semantic expression capabilities of the models). The modelers need to communicate and agree on certain elements that are to be rendered more concretely, in order to make the differences less significant.
=== Semantic data model ===
A semantic data model in software engineering is a technique to define the meaning of data within the context of its interrelationships with other data. A semantic data model is an abstraction that defines how the stored symbols relate to the real world. A semantic data model is sometimes called a conceptual data model.
The logical data structure of a database management system (DBMS), whether hierarchical, network, or relational, cannot totally satisfy the requirements for a conceptual definition of data because it is limited in scope and biased toward the implementation strategy employed by the DBMS. Therefore, the need to define data from a conceptual view has led to the development of semantic data modeling techniques, that is, techniques to define the meaning of data within the context of its interrelationships with other data. As illustrated in the figure, the real world, in terms of resources, ideas, events, etc., is symbolically defined within physical data stores. A semantic data model is an abstraction that defines how the stored symbols relate to the real world. Thus, the model must be a true representation of the real world.
== Topics ==
=== Data architecture ===
Data architecture is the design of data for use in defining the target state and the subsequent planning needed to hit the target state. It is usually one of several architecture domains that form the pillars of an enterprise architecture or solution architecture.
A data architecture describes the data structures used by a business and/or its applications. There are descriptions of data in storage and data in motion; descriptions of data stores, data groups, and data items; and mappings of those data artifacts to data qualities, applications, locations, etc.
Essential to realizing the target state, Data architecture describes how data is processed, stored, and utilized in a given system. It provides criteria for data processing operations that make it possible to design data flows and also control the flow of data in the system.
=== Data modeling ===
Data modeling in software engineering is the process of creating a data model by applying formal data model descriptions using data modeling techniques. Data modeling is a technique for defining business requirements for a database. It is sometimes called database modeling because a data model is eventually implemented in a database.
The figure illustrates the way data models are developed and used today. A conceptual data model is developed based on the data requirements for the application that is being developed, perhaps in the context of an activity model. The data model will normally consist of entity types, attributes, relationships, integrity rules, and the definitions of those objects. This is then used as the start point for interface or database design.
=== Data properties ===
Some important properties of data for which requirements need to be met are:
definition-related properties
relevance: the usefulness of the data in the context of your business.
clarity: the availability of a clear and shared definition for the data.
consistency: the compatibility of the same type of data from different sources.
content-related properties
timeliness: the availability of data at the time required and how up-to-date that data is.
accuracy: how close to the truth the data is.
properties related to both definition and content
completeness: how much of the required data is available.
accessibility: where, how, and to whom the data is available or not available (e.g. security).
cost: the cost incurred in obtaining the data, and making it available for use.
=== Data organization ===
Another kind of data model describes how to organize data using a database management system or other data management technology. It describes, for example, relational tables and columns or object-oriented classes and attributes. Such a data model is sometimes referred to as the physical data model, but in the original ANSI three schema architecture, it is called "logical". In that architecture, the physical model describes the storage media (cylinders, tracks, and tablespaces). Ideally, this model is derived from the more conceptual data model described above. It may differ, however, to account for constraints like processing capacity and usage patterns.
While data analysis is a common term for data modeling, the activity actually has more in common with the ideas and methods of synthesis (inferring general concepts from particular instances) than it does with analysis (identifying component concepts from more general ones). (Presumably we call ourselves systems analysts because no one can say systems synthesists.) Data modeling strives to bring the data structures of interest together into a cohesive, inseparable whole by eliminating unnecessary data redundancies and by relating data structures with relationships.
A different approach is to use adaptive systems such as artificial neural networks that can autonomously create implicit models of data.
=== Data structure ===
A data structure is a way of storing data in a computer so that it can be used efficiently. It is an organization of mathematical and logical concepts of data. Often a carefully chosen data structure will allow the most efficient algorithm to be used. The choice of the data structure often begins from the choice of an abstract data type.
A data model describes the structure of the data within a given domain and, by implication, the underlying structure of that domain itself. This means that a data model in fact specifies a dedicated grammar for a dedicated artificial language for that domain. A data model represents classes of entities (kinds of things) about which a company wishes to hold information, the attributes of that information, and relationships among those entities and (often implicit) relationships among those attributes. The model describes the organization of the data to some extent irrespective of how data might be represented in a computer system.
The entities represented by a data model can be the tangible entities, but models that include such concrete entity classes tend to change over time. Robust data models often identify abstractions of such entities. For example, a data model might include an entity class called "Person", representing all the people who interact with an organization. Such an abstract entity class is typically more appropriate than ones called "Vendor" or "Employee", which identify specific roles played by those people.
=== Data model theory ===
The term data model can have two meanings:
A data model theory, i.e. a formal description of how data may be structured and accessed.
A data model instance, i.e. applying a data model theory to create a practical data model instance for some particular application.
A data model theory has three main components:
The structural part: a collection of data structures which are used to create databases representing the entities or objects modeled by the database.
The integrity part: a collection of rules governing the constraints placed on these data structures to ensure structural integrity.
The manipulation part: a collection of operators which can be applied to the data structures, to update and query the data contained in the database.
For example, in the relational model, the structural part is based on a modified concept of the mathematical relation; the integrity part is expressed in first-order logic and the manipulation part is expressed using the relational algebra, tuple calculus and domain calculus.
A data model instance is created by applying a data model theory. This is typically done to solve some business enterprise requirement. Business requirements are normally captured by a semantic logical data model. This is transformed into a physical data model instance from which is generated a physical database. For example, a data modeler may use a data modeling tool to create an entity–relationship model of the corporate data repository of some business enterprise. This model is transformed into a relational model, which in turn generates a relational database.
=== Patterns ===
Patterns are common data modeling structures that occur in many data models.
== Related models ==
=== Data-flow diagram ===
A data-flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. It differs from the flowchart as it shows the data flow instead of the control flow of the program. A data-flow diagram can also be used for the visualization of data processing (structured design). Data-flow diagrams were invented by Larry Constantine, the original developer of structured design, based on Martin and Estrin's "data-flow graph" model of computation.
It is common practice to draw a context-level data-flow diagram first, which shows the interaction between the system and outside entities. The DFD is designed to show how a system is divided into smaller portions and to highlight the flow of data between those parts. This context-level data-flow diagram is then "exploded" to show more detail of the system being modeled.
=== Information model ===
An information model is not a type of data model, but rather an alternative model. Within the field of software engineering, both a data model and an information model can be abstract, formal representations of entity types that include their properties, relationships and the operations that can be performed on them. The entity types in the model may be kinds of real-world objects, such as devices in a network, or they may themselves be abstract, such as for the entities used in a billing system. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations.
According to Lee (1999) an information model is a representation of concepts, relationships, constraints, rules, and operations to specify data semantics for a chosen domain of discourse. It can provide sharable, stable, and organized structure of information requirements for the domain context. More in general the term information model is used for models of individual things, such as facilities, buildings, process plants, etc. In those cases the concept is specialised to Facility Information Model, Building Information Model, Plant Information Model, etc. Such an information model is an integration of a model of the facility with the data and documents about the facility.
An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they are object models (e.g. using UML), entity–relationship models or XML schemas.
=== Object model ===
An object model in computer science is a collection of objects or classes through which a program can examine and manipulate some specific parts of its world; in other words, it is the object-oriented interface to some service or system. Such an interface is said to be the object model of the represented service or system. For example, the Document Object Model (DOM) [1] is a collection of objects that represent a page in a web browser, used by script programs to examine and dynamically change the page. There is a Microsoft Excel object model for controlling Microsoft Excel from another program, and the ASCOM Telescope Driver is an object model for controlling an astronomical telescope.
In computing the term object model has a distinct second meaning of the general properties of objects in a specific computer programming language, technology, notation or methodology that uses them. For example, the Java object model, the COM object model, or the object model of OMT. Such object models are usually defined using concepts such as class, message, inheritance, polymorphism, and encapsulation. There is an extensive literature on formalized object models as a subset of the formal semantics of programming languages.
=== Object–role modeling ===
Object–Role Modeling (ORM) is a method for conceptual modeling, and can be used as a tool for information and rules analysis.
Object–Role Modeling is a fact-oriented method for performing systems analysis at the conceptual level. The quality of a database application depends critically on its design. To help ensure correctness, clarity, adaptability and productivity, information systems are best specified first at the conceptual level, using concepts and language that people can readily understand.
The conceptual design may include data, process and behavioral perspectives, and the actual DBMS used to implement the design might be based on one of many logical data models (relational, hierarchic, network, object-oriented, etc.).
=== Unified Modeling Language models ===
The Unified Modeling Language (UML) is a standardized general-purpose modeling language in the field of software engineering. It is a graphical language for visualizing, specifying, constructing, and documenting the artifacts of a software-intensive system. The Unified Modeling Language offers a standard way to write a system's blueprints, including:
Conceptual things such as business processes and system functions
Concrete things such as programming language statements, database schemas, and
Reusable software components.
UML offers a mix of functional models, data models, and database models.
== See also ==
Business process model
Core architecture data model
Common data model, any standardised data model
Data collection system
Data dictionary
Data Format Description Language (DFDL)
Distributional–relational database
JC3IEDM
Process model
== References ==
== Further reading ==
David C. Hay (1996). Data Model Patterns: Conventions of Thought. New York: Dorset House Publishers, Inc.
Len Silverston (2001). The Data Model Resource Book Volume 1/2. John Wiley & Sons.
Len Silverston & Paul Agnew (2008). The Data Model Resource Book: Universal Patterns for Data Modeling Volume 3. John Wiley & Sons.
Matthew West (2011). Developing High Quality Data Models. Morgan Kaufmann.
In mathematics, operator K-theory is a noncommutative analogue of topological K-theory for Banach algebras with most applications used for C*-algebras.
== Overview ==
Operator K-theory resembles topological K-theory more than algebraic K-theory. In particular, a Bott periodicity theorem holds. So there are only two K-groups, namely K0, which is equal to the algebraic K0, and K1. As a consequence of the periodicity theorem, it satisfies excision. This means that it associates to an extension of C*-algebras a long exact sequence, which, by Bott periodicity, reduces to an exact cyclic 6-term sequence.
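The six-term sequence mentioned above takes a standard shape: for an extension of C*-algebras $0 \to I \to A \to A/I \to 0$, the long exact sequence wraps up, via Bott periodicity, into the cyclic diagram (the vertical maps are the index and exponential connecting maps):

```latex
\begin{array}{ccccc}
K_0(I) & \longrightarrow & K_0(A) & \longrightarrow & K_0(A/I) \\
\uparrow & & & & \downarrow \\
K_1(A/I) & \longleftarrow & K_1(A) & \longleftarrow & K_1(I)
\end{array}
```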
Operator K-theory is a generalization of topological K-theory, defined by means of vector bundles on locally compact Hausdorff spaces. Here, a vector bundle over a topological space X is associated to a projection in the C*-algebra of matrix-valued—that is, $M_n(\mathbb{C})$-valued—continuous functions over X. Also, it is known that isomorphism of vector bundles translates to Murray-von Neumann equivalence of the associated projections in K ⊗ C(X), where K is the algebra of compact operators on a separable Hilbert space.
Hence, the K0 group of a (not necessarily commutative) C*-algebra A is defined as the Grothendieck group generated by the Murray-von Neumann equivalence classes of projections in K ⊗ A. K0 is a functor from the category of C*-algebras and *-homomorphisms to the category of abelian groups and group homomorphisms. The higher K-functors are defined via a C*-algebraic version of the suspension: $K_n(A) = K_0(S^n A)$, where
SA = C0(0,1) ⊗ A.
However, by Bott periodicity, it turns out that Kn+2(A) and Kn(A) are isomorphic for each n, and thus the only groups produced by this construction are K0 and K1.
The key reason for the introduction of K-theoretic methods into the study of C*-algebras was the Fredholm index: Given a bounded linear operator on a Hilbert space that has finite-dimensional kernel and cokernel, one can associate to it an integer, which, as it turns out, reflects the 'defect' on the operator - i.e. the extent to which it is not invertible. The Fredholm index map appears in the 6-term exact sequence given by the Calkin algebra. In the analysis on manifolds, this index and its generalizations played a crucial role in the index theory of Atiyah and Singer, where the topological index of the manifold can be expressed via the index of elliptic operators on it. Later on, Brown, Douglas and Fillmore observed that the Fredholm index was the missing ingredient in classifying essentially normal operators up to certain natural equivalence. These ideas, together with Elliott's classification of AF C*-algebras via K-theory led to a great deal of interest in adapting methods such as K-theory from algebraic topology into the study of operator algebras.
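Concretely, the Fredholm index of a bounded operator $T$ with finite-dimensional kernel and cokernel is the integer

```latex
\operatorname{ind}(T) \;=\; \dim \ker T \;-\; \dim \operatorname{coker} T.
```

For example, the unilateral shift $S(e_n) = e_{n+1}$ on $\ell^2(\mathbb{N})$ is injective with one-dimensional cokernel, so $\operatorname{ind}(S) = -1$; the index is invariant under compact perturbations, which is what ties it to the Calkin algebra.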
This, in turn, led to K-homology, Kasparov's bivariant KK-theory, and, more recently, Connes and Higson's E-theory.
== References ==
Rordam, M.; Larsen, Finn; Laustsen, N. (2000), An introduction to K-theory for C∗-algebras, London Mathematical Society Student Texts, vol. 49, Cambridge University Press, ISBN 978-0-521-78334-7 | Wikipedia/Operator_K-theory |
In mathematics, algebraic L-theory is the K-theory of quadratic forms; the term was coined by C. T. C. Wall,
with L being used as the letter after K. Algebraic L-theory, also known as "Hermitian K-theory",
is important in surgery theory.
== Definition ==
One can define L-groups for any ring with involution R: the quadratic L-groups $L_*(R)$ (Wall) and the symmetric L-groups $L^*(R)$ (Mishchenko, Ranicki).
=== Even dimension ===
The even-dimensional L-groups
L
2
k
(
R
)
{\displaystyle L_{2k}(R)}
are defined as the Witt groups of ε-quadratic forms over the ring R with
ϵ
=
(
−
1
)
k
{\displaystyle \epsilon =(-1)^{k}}
. More precisely,
$L_{2k}(R)$ is the abelian group of equivalence classes $[\psi]$ of non-degenerate ε-quadratic forms $\psi \in Q_\epsilon(F)$ over R, where the underlying R-modules F are finitely generated free. The equivalence relation is given by stabilization with respect to hyperbolic ε-quadratic forms:
$$[\psi] = [\psi'] \Longleftrightarrow \exists\, n, n' \in \mathbb{N}_0 : \psi \oplus H_{(-1)^k}(R)^n \cong \psi' \oplus H_{(-1)^k}(R)^{n'}.$$
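Here $H_{(-1)^k}(R)$ denotes the hyperbolic ε-quadratic form (with $\epsilon = (-1)^k$) on the free module $R \oplus R$. In one common matrix convention it is represented by

```latex
H_{\epsilon}(R) \;=\; \left( R \oplus R,\;
\psi = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \right),
\qquad
\psi + \epsilon\,\psi^{*} \;=\; \begin{pmatrix} 0 & 1 \\ \epsilon & 0 \end{pmatrix},
```

so that its ε-symmetrization is the standard hyperbolic ε-symmetric form.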
The addition in $L_{2k}(R)$ is defined by
$$[\psi_1] + [\psi_2] := [\psi_1 \oplus \psi_2].$$
The zero element is represented by $H_{(-1)^k}(R)^n$ for any $n \in \mathbb{N}_0$. The inverse of $[\psi]$ is $[-\psi]$.
=== Odd dimension ===
Defining odd-dimensional L-groups is more complicated; further details and the definition of the odd-dimensional L-groups can be found in the references mentioned below.
== Examples and applications ==
The L-groups of a group $\pi$ are the L-groups $L_*(\mathbf{Z}[\pi])$ of the group ring $\mathbf{Z}[\pi]$. In the applications to topology, $\pi$ is the fundamental group $\pi_1(X)$ of a space $X$. The quadratic L-groups $L_*(\mathbf{Z}[\pi])$ play a central role in the surgery classification of the homotopy types of $n$-dimensional manifolds for $n > 4$, and in the formulation of the Novikov conjecture.
The distinction between symmetric L-groups and quadratic L-groups, indicated by upper and lower indices, reflects the usage in group homology and cohomology. The group cohomology $H^*$ of the cyclic group $\mathbf{Z}_2$ deals with the fixed points of a $\mathbf{Z}_2$-action, while the group homology $H_*$ deals with the orbits of a $\mathbf{Z}_2$-action; compare $X^G$ (fixed points) and $X_G = X/G$ (orbits, quotient) for the upper/lower index notation.
The quadratic L-groups $L_n(R)$ and the symmetric L-groups $L^n(R)$ are related by a symmetrization map $L_n(R) \to L^n(R)$, which is an isomorphism modulo 2-torsion and which corresponds to the polarization identities.
The quadratic and the symmetric L-groups are 4-fold periodic (the comment of Ranicki, page 12, on the non-periodicity of the symmetric L-groups refers to another type of L-groups, defined using "short complexes").
In view of the applications to the classification of manifolds, there are extensive calculations of the quadratic L-groups $L_*(\mathbf{Z}[\pi])$. For finite $\pi$, algebraic methods are used; for infinite $\pi$, mostly geometric methods (e.g. controlled topology) are used.
More generally, one can define L-groups for any additive category with a chain duality, as in Ranicki (section 1).
=== Integers ===
The simply connected L-groups are also the L-groups of the integers, as $L(e) := L(\mathbf{Z}[e]) = L(\mathbf{Z})$ for both $L = L^*$ and $L = L_*$. For quadratic L-groups, these are the surgery obstructions to simply connected surgery.
The quadratic L-groups of the integers are:
$$\begin{aligned}L_{4k}(\mathbf{Z})&=\mathbf{Z}&&\text{signature}/8\\L_{4k+1}(\mathbf{Z})&=0\\L_{4k+2}(\mathbf{Z})&=\mathbf{Z}/2&&\text{Arf invariant}\\L_{4k+3}(\mathbf{Z})&=0.\end{aligned}$$
In doubly even dimension (4k), the quadratic L-groups detect the signature; in singly even dimension (4k+2), the L-groups detect the Arf invariant (topologically the Kervaire invariant).
The symmetric L-groups of the integers are:
$$\begin{aligned}L^{4k}(\mathbf{Z})&=\mathbf{Z}&&\text{signature}\\L^{4k+1}(\mathbf{Z})&=\mathbf{Z}/2&&\text{de Rham invariant}\\L^{4k+2}(\mathbf{Z})&=0\\L^{4k+3}(\mathbf{Z})&=0.\end{aligned}$$
In doubly even dimension (4k), the symmetric L-groups, as with the quadratic L-groups, detect the signature; in dimension (4k+1), the L-groups detect the de Rham invariant.
== References ==
Lück, Wolfgang (2002), "A basic introduction to surgery theory" (PDF), Topology of high-dimensional manifolds, No. 1, 2 (Trieste, 2001), ICTP Lect. Notes, vol. 9, Abdus Salam Int. Cent. Theoret. Phys., Trieste, pp. 1–224, MR 1937016
Ranicki, Andrew A. (1992), Algebraic L-theory and topological manifolds (PDF), Cambridge Tracts in Mathematics, vol. 102, Cambridge University Press, ISBN 978-0-521-42024-2, MR 1211640
Wall, C. T. C. (1999) [1970], Ranicki, Andrew (ed.), Surgery on compact manifolds (PDF), Mathematical Surveys and Monographs, vol. 69 (2nd ed.), Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0942-6, MR 1687388 | Wikipedia/L-theory |
In mathematics, KK-theory is a common generalization both of K-homology and K-theory as an additive bivariant functor on separable C*-algebras. This notion was introduced by the Russian mathematician Gennadi Kasparov in 1980.
It was influenced by Atiyah's concept of Fredholm modules for the Atiyah–Singer index theorem, and the classification of extensions of C*-algebras by Lawrence G. Brown, Ronald G. Douglas, and Peter Arthur Fillmore in 1977. In turn, it has had great success in operator algebraic formalism toward the index theory and the classification of nuclear C*-algebras, as it was the key to the solutions of many problems in operator K-theory, such as, for instance, the mere calculation of K-groups. Furthermore, it was essential in the development of the Baum–Connes conjecture and plays a crucial role in noncommutative topology.
KK-theory was followed by a series of similar bifunctor constructions such as the E-theory and the bivariant periodic cyclic theory, most of them having more category-theoretic flavors, or concerning another class of algebras rather than that of the separable C*-algebras, or incorporating group actions.
== Definition ==
The following definition is quite close to the one originally given by Kasparov. This is the form in which most KK-elements arise in applications.
Let A and B be separable C*-algebras, where B is also assumed to be σ-unital. The set of cycles is the set of triples (H, ρ, F), where H is a countably generated graded Hilbert module over B, ρ is a *-representation of A on H as even bounded operators that commute with B, and F is a bounded operator on H of degree 1, which again commutes with B. They are required to fulfill the condition that
$[F, \rho(a)]$, $(F^2 - 1)\rho(a)$, and $(F - F^*)\rho(a)$ are all B-compact operators for every a in A. A cycle is said to be degenerate if all three expressions are 0 for all a.
Two cycles are said to be homologous, or homotopic, if there is a cycle between A and IB, where IB denotes the C*-algebra of continuous functions from [0, 1] to B, such that there is an even unitary operator from the 0-end of the homotopy to the first cycle, and a unitary operator from the 1-end of the homotopy to the second cycle.
The KK-group KK(A, B) between A and B is then defined to be the set of cycles modulo homotopy. It becomes an abelian group under the direct sum operation of bimodules as the addition, and the class of the degenerate modules as its neutral element.
There are various, but equivalent definitions of the KK-theory, notably the one due to Joachim Cuntz that eliminates bimodule and 'Fredholm' operator F from the picture and puts the accent entirely on the homomorphism ρ. More precisely it can be defined as the set of homotopy classes
$$KK(A,B) = [qA,\, K(H) \otimes B],$$
of *-homomorphisms from the classifying algebra qA of quasi-homomorphisms to the C*-algebra of compact operators of an infinite dimensional separable Hilbert space tensored with B. Here, qA is defined as the kernel of the map from the C*-algebraic free product A*A of A with itself to A defined by the identity on both factors.
== Properties ==
When one takes the C*-algebra C of the complex numbers as the first argument of KK as in KK(C, B) this additive group is naturally isomorphic to the K0-group K0(B) of the second argument B. In the Cuntz point of view, a K0-class of B is nothing but a homotopy class of *-homomorphisms from the complex numbers to the stabilization of B. Similarly when one takes the algebra C0(R) of the continuous functions on the real line decaying at infinity as the first argument, the obtained group KK(C0(R), B) is naturally isomorphic to K1(B).
An important property of KK-theory is the so-called Kasparov product, or the composition product,
$$KK(A,B) \times KK(B,C) \to KK(A,C),$$
which is bilinear with respect to the additive group structures. In particular each element of KK(A, B) gives a homomorphism of K*(A) → K*(B) and another homomorphism K*(B) → K*(A).
The product can be defined much more easily in the Cuntz picture, given that there are natural maps from qA to A, and from B to K(H) ⊗ B, that induce KK-equivalences.
The composition product gives a new category $\mathsf{KK}$, whose objects are given by the separable C*-algebras, while the morphisms between them are given by elements of the corresponding KK-groups. Moreover, any *-homomorphism of A into B induces an element of KK(A, B), and this correspondence gives a functor from the original category of the separable C*-algebras into $\mathsf{KK}$. The approximately inner automorphisms of the algebras become identity morphisms in $\mathsf{KK}$.
This functor $\mathsf{C^*\!\text{-}alg} \to \mathsf{KK}$ is universal among the split-exact, homotopy-invariant and stable additive functors on the category of the separable C*-algebras. Any such theory satisfies Bott periodicity in the appropriate sense, since $\mathsf{KK}$ does.
The Kasparov product can be further generalized to the following form:
$$KK(A, B \otimes E) \times KK(B \otimes D, C) \to KK(A \otimes D, C \otimes E).$$
It contains as special cases not only the K-theoretic cup product, but also the K-theoretic cap, cross, and slant products and the product of extensions.
== Notes ==
== References ==
== External links ==
KK-theory at the nLab
E-theory at the nLab | Wikipedia/KK-theory |
Particle physics or high-energy physics is the study of fundamental particles and forces that constitute matter and radiation. The field also studies combinations of elementary particles up to the scale of protons and neutrons, while the study of combinations of protons and neutrons is called nuclear physics.
The fundamental particles in the universe are classified in the Standard Model as fermions (matter particles) and bosons (force-carrying particles). There are three generations of fermions, although ordinary matter is made only from the first fermion generation. The first generation consists of up and down quarks which form protons and neutrons, and electrons and electron neutrinos. The three fundamental interactions known to be mediated by bosons are electromagnetism, the weak interaction, and the strong interaction.
Quarks cannot exist on their own but form hadrons. Hadrons that contain an odd number of quarks are called baryons and those that contain an even number are called mesons. Two baryons, the proton and the neutron, make up most of the mass of ordinary matter. Mesons are unstable and the longest-lived last for only a few hundredths of a microsecond. They occur after collisions between particles made of quarks, such as fast-moving protons and neutrons in cosmic rays. Mesons are also produced in cyclotrons or other particle accelerators.
Particles have corresponding antiparticles with the same mass but with opposite electric charges. For example, the antiparticle of the electron is the positron. The electron has a negative electric charge, the positron has a positive charge. These antiparticles can theoretically form a corresponding form of matter called antimatter. Some particles, such as the photon, are their own antiparticle.
These elementary particles are excitations of the quantum fields that also govern their interactions. The dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model. The reconciliation of gravity to the current particle physics theory is not solved; many theories have addressed this problem, such as loop quantum gravity, string theory and supersymmetry theory.
Experimental particle physics is the study of these particles in radioactive processes and in particle accelerators such as the Large Hadron Collider. Theoretical particle physics is the study of these particles in the context of cosmology and quantum theory. The two are closely interrelated: the Higgs boson was postulated theoretically before being confirmed by experiments.
== History ==
The idea that all matter is fundamentally composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. The word atom, after the Greek word atomos meaning "indivisible", has since then denoted the smallest particle of a chemical element, but physicists later discovered that atoms are not, in fact, the fundamental particles of nature, but are conglomerates of even smaller particles, such as the electron. The early 20th century explorations of nuclear physics and quantum physics led to proofs of nuclear fission in 1939 by Lise Meitner (based on experiments by Otto Hahn), and nuclear fusion by Hans Bethe in that same year; both discoveries also led to the development of nuclear weapons. Bethe's 1947 calculation of the Lamb shift is credited with having "opened the way to the modern era of particle physics".
Throughout the 1950s and 1960s, a bewildering variety of particles was found in collisions of particles from beams of increasingly high energy. It was referred to informally as the "particle zoo". Important discoveries such as the CP violation by James Cronin and Val Fitch brought new questions to matter-antimatter imbalance. After the formulation of the Standard Model during the 1970s, physicists clarified the origin of the particle zoo. The large number of particles was explained as combinations of a (relatively) small number of more fundamental particles and framed in the context of quantum field theories. This reclassification marked the beginning of modern particle physics.
== Standard Model ==
The current state of the classification of all elementary particles is explained by the Standard Model, which gained widespread acceptance in the mid-1970s after experimental confirmation of the existence of quarks. It describes the strong, weak, and electromagnetic fundamental interactions, using mediating gauge bosons. The species of gauge bosons are eight gluons, W−, W+ and Z bosons, and the photon. The Standard Model also contains 24 fundamental fermions (12 particles and their associated anti-particles), which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. On 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson.
The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature and that a more fundamental theory awaits discovery (See Theory of Everything). In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model, since neutrinos do not have mass in the Standard Model.
== Subatomic particles ==
Modern particle physics research is focused on subatomic particles. These include atomic constituents such as electrons, protons, and neutrons (protons and neutrons are composite particles called baryons, made of quarks), as well as particles produced by radioactive and scattering processes, such as photons, neutrinos, and muons, and a wide range of exotic particles. All particles and their interactions observed to date can be described almost entirely by the Standard Model.
Dynamics of particles are also governed by quantum mechanics; they exhibit wave–particle duality, displaying particle-like behaviour under certain experimental conditions and wave-like behaviour in others. In more technical terms, they are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. Following the convention of particle physicists, the term elementary particles is applied to those particles that are, according to current understanding, presumed to be indivisible and not composed of other particles.
=== Quarks and leptons ===
Ordinary matter is made from first-generation quarks (up, down) and leptons (electron, electron neutrino). Collectively, quarks and leptons are called fermions, because they have a half-integer quantum spin (1/2, 3/2, etc.). This causes the fermions to obey the Pauli exclusion principle, where no two particles may occupy the same quantum state. Quarks have fractional elementary electric charge (−1/3 or 2/3) and leptons have whole-numbered electric charge (0 or −1). Quarks also have color charge, which is labeled arbitrarily, with no correlation to actual light color, as red, green and blue. Because the energy stored in the interaction between quarks converts into new particles when the quarks are pulled far enough apart, quarks cannot be observed in isolation. This is called color confinement.
There are three known generations of quarks (up and down, strange and charm, top and bottom) and leptons (electron and its neutrino, muon and its neutrino, tau and its neutrino), with strong indirect evidence that a fourth generation of fermions does not exist.
=== Bosons ===
Bosons are the mediators or carriers of fundamental interactions, such as electromagnetism, the weak interaction, and the strong interaction. Electromagnetism is mediated by the photon, the quantum of light.: 29–30 The weak interaction is mediated by the W and Z bosons. The strong interaction is mediated by the gluon, which can link quarks together to form composite particles. Due to the aforementioned color confinement, gluons are never observed independently. The Higgs boson gives mass to the W and Z bosons via the Higgs mechanism; the gluon and photon are expected to be massless. All bosons have an integer quantum spin (0 or 1) and can occupy the same quantum state.
=== Antiparticles and color charge ===
Most aforementioned particles have corresponding antiparticles, which compose antimatter. Normal particles have positive lepton or baryon number, while antiparticles have negative values of these numbers. Most properties of corresponding particles and antiparticles are the same, with a few reversed; for instance, the electron's antiparticle, the positron, has an opposite charge. To differentiate between antiparticles and particles, a plus or minus sign is added in superscript; for example, the electron and the positron are denoted e− and e+. However, when a particle has a charge of 0 (equal to that of its antiparticle), the antiparticle is denoted with a line above the symbol: an electron neutrino is νe, whereas its antineutrino is ν̄e. When a particle and an antiparticle interact with each other, they annihilate and convert to other particles. Some particles, such as the photon or gluon, are their own antiparticles.
Quarks and gluons additionally carry color charge, which influences the strong interaction. The color charges of quarks are called red, green and blue (though the particles themselves have no physical color), and those of antiquarks are called antired, antigreen and antiblue. Gluons come in eight color states, combinations of a color and an anticolor, reflecting the gauge symmetry SU(3) of the strong interaction.
=== Composite ===
The neutrons and protons in atomic nuclei are baryons: the neutron is composed of two down quarks and one up quark, and the proton is composed of two up quarks and one down quark. A baryon is composed of three quarks, and a meson is composed of two quarks (one quark, one antiquark). Baryons and mesons are collectively called hadrons. Quarks inside hadrons are governed by the strong interaction and thus are subject to quantum chromodynamics (color charges). The bound quarks must have a total color charge that is neutral, or "white", by analogy with mixing the primary colors. More exotic hadrons can have other types, arrangements or numbers of quarks (tetraquarks, pentaquarks).
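The quark arithmetic above can be checked directly: the electric charge of a hadron is the sum of the charges of its constituent quarks. The following sketch (the `hadron_charge` helper and the `~` antiquark notation are invented for this illustration) computes a few familiar cases.

```python
from fractions import Fraction

# Electric charges of the six quark flavors, in units of the elementary charge e.
QUARK_CHARGE = {
    "u": Fraction(2, 3), "d": Fraction(-1, 3),
    "c": Fraction(2, 3), "s": Fraction(-1, 3),
    "t": Fraction(2, 3), "b": Fraction(-1, 3),
}

def hadron_charge(quarks):
    """Sum the quark charges; a leading '~' marks an antiquark (charge negated)."""
    total = Fraction(0)
    for q in quarks:
        if q.startswith("~"):
            total -= QUARK_CHARGE[q[1:]]
        else:
            total += QUARK_CHARGE[q]
    return total

print(hadron_charge(["u", "u", "d"]))  # proton (uud): 2/3 + 2/3 - 1/3 = +1
print(hadron_charge(["u", "d", "d"]))  # neutron (udd): 2/3 - 1/3 - 1/3 = 0
print(hadron_charge(["u", "~d"]))      # positive pion (u, anti-d): 2/3 + 1/3 = +1
```

This is why the observed hadrons always carry integer electric charge even though their constituent quarks do not.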
An atom is made from protons, neutrons and electrons. By modifying the particles inside a normal atom, exotic atoms can be formed. A simple example would be the hydrogen-4.1, which has one of its electrons replaced with a muon.
=== Hypothetical ===
The graviton is a hypothetical particle that can mediate the gravitational interaction, but it has not been detected or completely reconciled with current theories. Many other hypothetical particles have been proposed to address the limitations of the Standard Model. Notably, supersymmetric particles aim to solve the hierarchy problem, axions address the strong CP problem, and various other particles are proposed to explain the origins of dark matter and dark energy.
== Experimental laboratories ==
The world's major particle physics laboratories are:
Brookhaven National Laboratory (Long Island, New York, United States). Its main facility is the Relativistic Heavy Ion Collider (RHIC), which collides heavy ions such as gold ions and polarized protons. It is the world's first heavy ion collider, and the world's only polarized proton collider.
Budker Institute of Nuclear Physics (Novosibirsk, Russia). Its main projects are now the electron–positron colliders VEPP-2000, operated since 2006, and VEPP-4, which started experiments in 1994. Earlier facilities include the first electron–electron beam–beam collider VEP-1, which conducted experiments from 1964 to 1968; the electron–positron collider VEPP-2, operated from 1965 to 1974; and its successor VEPP-2M, which performed experiments from 1974 to 2000.
CERN (European Organization for Nuclear Research) (Franco-Swiss border, near Geneva, Switzerland). Its main project is now the Large Hadron Collider (LHC), which had its first beam circulation on 10 September 2008, and is now the world's most energetic collider of protons. It also became the most energetic collider of heavy ions after it began colliding lead ions. Earlier facilities include the Large Electron–Positron Collider (LEP), which was stopped on 2 November 2000 and then dismantled to give way for LHC; and the Super Proton Synchrotron, which is being reused as a pre-accelerator for the LHC and for fixed-target experiments.
DESY (Deutsches Elektronen-Synchrotron) (Hamburg, Germany). Its main facility was the Hadron Elektron Ring Anlage (HERA), which collided electrons and positrons with protons. The accelerator complex is now focused on the production of synchrotron radiation with PETRA III, FLASH and the European XFEL.
Fermi National Accelerator Laboratory (Fermilab) (Batavia, Illinois, United States). Its main facility until 2011 was the Tevatron, which collided protons and antiprotons and was the highest-energy particle collider on earth until the Large Hadron Collider surpassed it on 29 November 2009.
Institute of High Energy Physics (IHEP) (Beijing, China). IHEP manages a number of China's major particle physics facilities, including the Beijing Electron–Positron Collider II (BEPC II), the Beijing Spectrometer (BES), the Beijing Synchrotron Radiation Facility (BSRF), the International Cosmic-Ray Observatory at Yangbajing in Tibet, the Daya Bay Reactor Neutrino Experiment, the China Spallation Neutron Source, the Hard X-ray Modulation Telescope (HXMT), and the Accelerator-driven Sub-critical System (ADS), as well as the Jiangmen Underground Neutrino Observatory (JUNO).
KEK (Tsukuba, Japan). It is the home of a number of experiments such as the K2K experiment and its successor T2K experiment, a neutrino oscillation experiment and Belle II, an experiment measuring the CP violation of B mesons.
SLAC National Accelerator Laboratory (Menlo Park, California, United States). Its 2-mile-long linear particle accelerator began operating in 1962 and was the basis for numerous electron and positron collision experiments until 2008. Since then the linear accelerator has been used for the Linac Coherent Light Source X-ray laser as well as advanced accelerator design research. SLAC staff continue to participate in developing and building many particle detectors around the world.
== Theory ==
Theoretical particle physics attempts to develop the models, theoretical framework, and mathematical tools to understand current experiments and make predictions for future experiments (see also theoretical physics). There are several major interrelated efforts being made in theoretical particle physics today.
One important branch attempts to better understand the Standard Model and its tests. Theorists make quantitative predictions of observables at collider and astronomical experiments, which along with experimental measurements is used to extract the parameters of the Standard Model with less uncertainty. This work probes the limits of the Standard Model and therefore expands scientific understanding of nature's building blocks. Those efforts are made challenging by the difficulty of calculating high precision quantities in quantum chromodynamics. Some theorists working in this area use the tools of perturbative quantum field theory and effective field theory, referring to themselves as phenomenologists. Others make use of lattice field theory and call themselves lattice theorists.
Another major effort is in model building where model builders develop ideas for what physics may lie beyond the Standard Model (at higher energies or smaller distances). This work is often motivated by the hierarchy problem and is constrained by existing experimental data. It may involve work on supersymmetry, alternatives to the Higgs mechanism, extra spatial dimensions (such as the Randall–Sundrum models), Preon theory, combinations of these, or other ideas. Vanishing-dimensions theory is a particle physics theory suggesting that systems with higher energy have a smaller number of dimensions.
A third major effort in theoretical particle physics is string theory. String theorists attempt to construct a unified description of quantum mechanics and general relativity by building a theory based on small strings, and branes rather than particles. If the theory is successful, it may be considered a "Theory of Everything", or "TOE".
There are also other areas of work in theoretical particle physics ranging from particle cosmology to loop quantum gravity.
== Practical applications ==
In principle, all physics (and practical applications developed therefrom) can be derived from the study of fundamental particles. In practice, even if "particle physics" is taken to mean only "high-energy atom smashers", many technologies have been developed during these pioneering investigations that later find wide uses in society. Particle accelerators are used to produce medical isotopes for research and treatment (for example, isotopes used in PET imaging), or used directly in external beam radiotherapy. The development of superconductors has been pushed forward by their use in particle physics. The World Wide Web and touchscreen technology were initially developed at CERN. Additional applications are found in medicine, national security, industry, computing, science, and workforce development, illustrating a long and growing list of beneficial practical applications with contributions from particle physics.
== Future ==
Major efforts to look for physics beyond the Standard Model include the Future Circular Collider proposed for CERN and the Particle Physics Project Prioritization Panel (P5) in the US that will update the 2014 P5 study that recommended the Deep Underground Neutrino Experiment, among other experiments.
== See also ==
== References ==
== External links == | Wikipedia/High_energy_physics |
In mathematics, derived noncommutative algebraic geometry, the derived version of noncommutative algebraic geometry, is the geometric study of derived categories and related constructions of triangulated categories using categorical tools. Some basic examples include the bounded derived category of coherent sheaves on a smooth variety, $D^{b}(X)$, called its derived category, or the derived category of perfect complexes on an algebraic variety, denoted $D_{\operatorname{perf}}(X)$. For instance, the derived category of coherent sheaves $D^{b}(X)$ on a smooth projective variety can be used as an invariant of the underlying variety in many cases (if $X$ has an ample (anti-)canonical sheaf). Unfortunately, the study of derived categories as geometric objects in themselves does not have a standardized name.
== Derived category of projective line ==
The derived category of $\mathbb{P}^{1}$ is one of the motivating examples for derived noncommutative schemes because of its easy categorical structure. Recall that the Euler sequence of $\mathbb{P}^{1}$ is the short exact sequence

$$0 \to \mathcal{O}(-2) \to \mathcal{O}(-1)^{\oplus 2} \to \mathcal{O} \to 0.$$

If we consider the two terms on the right as a complex, then we get the distinguished triangle

$$\mathcal{O}(-1)^{\oplus 2} \xrightarrow{\ \phi\ } \mathcal{O} \to \operatorname{Cone}(\phi) \xrightarrow{+1}.$$

Since $\operatorname{Cone}(\phi) \cong \mathcal{O}(-2)[+1]$, we have constructed the sheaf $\mathcal{O}(-2)$ using only categorical tools. We can repeat this by tensoring the Euler sequence by the flat sheaf $\mathcal{O}(-1)$ and applying the cone construction again. Taking the duals of the sheaves, we can then construct all of the line bundles in $\operatorname{Coh}(\mathbb{P}^{1})$ using only the triangulated structure. It turns out the correct way of studying derived categories from their objects and triangulated structure is with exceptional collections.
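The next step of this iteration can be sketched explicitly (a short derivation under the setup above; the map label $\phi'$ is my own): tensoring the Euler sequence by $\mathcal{O}(-1)$ gives

```latex
% Euler sequence tensored by O(-1):
\[
0 \to \mathcal{O}(-3) \to \mathcal{O}(-2)^{\oplus 2}
  \xrightarrow{\ \phi'\ } \mathcal{O}(-1) \to 0,
\]
% so, exactly as before, the cone on the right-hand map
% recovers the next twist categorically:
\[
\operatorname{Cone}(\phi') \cong \mathcal{O}(-3)[+1].
\]
```

Iterating produces every negative twist, and dualizing produces the positive ones.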
== Semiorthogonal decompositions and exceptional collections ==
The technical tools for encoding this construction are semiorthogonal decompositions and exceptional collections. A semiorthogonal decomposition of a triangulated category $\mathcal{T}$ is a collection of full triangulated subcategories $\mathcal{T}_{1}, \ldots, \mathcal{T}_{n}$ such that the following two properties hold.

(1) For objects $T_{i} \in \operatorname{Ob}(\mathcal{T}_{i})$ we have $\operatorname{Hom}(T_{i}, T_{j}) = 0$ for $i > j$.

(2) The subcategories $\mathcal{T}_{i}$ generate $\mathcal{T}$, meaning every object $T \in \operatorname{Ob}(\mathcal{T})$ can be decomposed into a sequence of objects $T_{i} \in \operatorname{Ob}(\mathcal{T})$,

$$0 = T_{n} \to T_{n-1} \to \cdots \to T_{1} \to T_{0} = T,$$

such that $\operatorname{Cone}(T_{i} \to T_{i-1}) \in \operatorname{Ob}(\mathcal{T}_{i})$. Notice this is analogous to a filtration of an object in an abelian category such that the cokernels live in a specific subcategory.
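A standard first example of such a decomposition (well known for the projective line; the verification written out here is my own) uses the line bundles $\mathcal{O}(-1)$ and $\mathcal{O}$:

```latex
% Semiorthogonal decomposition of the derived category of P^1:
\[
D^{b}(\mathbb{P}^{1}) = \langle \mathcal{O}(-1), \mathcal{O} \rangle .
\]
% Condition (1) holds because there are no morphisms "backwards"
% in any shift:
\[
\operatorname{Hom}(\mathcal{O}, \mathcal{O}(-1)[\ell])
  \cong H^{\ell}(\mathbb{P}^{1}, \mathcal{O}(-1)) = 0
  \quad \text{for all } \ell .
\]
```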
We can specialize this a little further by considering exceptional collections of objects, which generate their own subcategories. An object $E$ in a triangulated category is called exceptional if the following property holds:

$$\operatorname{Hom}(E, E[+\ell]) = \begin{cases} k & \text{if } \ell = 0 \\ 0 & \text{if } \ell \neq 0 \end{cases}$$

where $k$ is the underlying field of the vector space of morphisms. A collection of exceptional objects $E_{1}, \ldots, E_{r}$ is an exceptional collection of length $r$ if for any $i > j$ and any $\ell$ we have

$$\operatorname{Hom}(E_{i}, E_{j}[+\ell]) = 0,$$

and it is a strong exceptional collection if, in addition, for any $\ell \neq 0$ and any $i, j$ we have

$$\operatorname{Hom}(E_{i}, E_{j}[+\ell]) = 0.$$

We can then decompose our triangulated category into the semiorthogonal decomposition

$$\mathcal{T} = \langle \mathcal{T}', E_{1}, \ldots, E_{r} \rangle$$

where $\mathcal{T}' = \langle E_{1}, \ldots, E_{r} \rangle^{\perp}$, the subcategory of objects $E \in \operatorname{Ob}(\mathcal{T})$ such that $\operatorname{Hom}(E, E_{i}[+\ell]) = 0$. If in addition $\mathcal{T}' = 0$, then the exceptional collection is called full.
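These conditions can be checked numerically in the simplest case (a sketch using the standard cohomology of line bundles on $\mathbb{P}^1$: $h^0(\mathcal{O}(d)) = d+1$ for $d \ge 0$, $h^1(\mathcal{O}(d)) = -d-1$ for $d \le -2$, all other groups zero, together with $\operatorname{Hom}(\mathcal{O}(a), \mathcal{O}(b)[\ell]) \cong H^{\ell}(\mathbb{P}^1, \mathcal{O}(b-a))$):

```python
# Dimensions of H^l(P^1, O(d)) via the standard formulas.
def h(l, d):
    if l == 0:
        return d + 1 if d >= 0 else 0
    if l == 1:
        return -d - 1 if d <= -2 else 0
    return 0  # no higher cohomology on a curve

# Ext^l(O(a), O(b)) = H^l(P^1, O(b - a))
def ext(l, a, b):
    return h(l, b - a)

E = [-1, 0]  # the collection O(-1), O

# Each object is exceptional: End = k, no nonzero self-Exts.
for a in E:
    assert ext(0, a, a) == 1 and ext(1, a, a) == 0

# Exceptional collection: no morphisms backwards, in any shift.
assert all(ext(l, 0, -1) == 0 for l in (0, 1))

# Strong: no higher Exts forwards either.
assert ext(1, -1, 0) == 0
print("O(-1), O is a strong exceptional collection on P^1")
```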
== Beilinson's theorem ==
Beilinson provided the first example of a full strong exceptional collection. In the derived category $D^{b}(\mathbb{P}^{n})$ the line bundles

$$\mathcal{O}(-n), \mathcal{O}(-n+1), \ldots, \mathcal{O}(-1), \mathcal{O}$$

form a full strong exceptional collection. He proved the theorem in two parts: first showing these objects form an exceptional collection, and second showing that the diagonal $\mathcal{O}_{\Delta}$ of $\mathbb{P}^{n} \times \mathbb{P}^{n}$ has a resolution whose terms are tensor products of pullbacks of the exceptional objects.
Technical Lemma
An exceptional collection of sheaves $E_{1}, E_{2}, \ldots, E_{r}$ on $X$ is full if there exists a resolution

$$0 \to p_{1}^{*}E_{1} \otimes p_{2}^{*}F_{1} \to \cdots \to p_{1}^{*}E_{r} \otimes p_{2}^{*}F_{r} \to \mathcal{O}_{\Delta} \to 0$$

in $D^{b}(X \times X)$, where the $F_{i}$ are arbitrary coherent sheaves on $X$.
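For $X = \mathbb{P}^1$ this resolution is especially short (a standard example; the coordinates $x_0, x_1$ and $y_0, y_1$ on the two factors are my own labeling):

```latex
% Resolution of the diagonal on P^1 x P^1:
\[
0 \to \mathcal{O}(-1) \boxtimes \mathcal{O}(-1)
  \xrightarrow{\ x_0 y_1 - x_1 y_0\ }
  \mathcal{O} \boxtimes \mathcal{O}
  \to \mathcal{O}_{\Delta} \to 0,
\]
% where E \boxtimes F denotes p_1^* E \otimes p_2^* F, and the middle
% map is the section of O(1,1) cutting out the diagonal.
```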
Another way to reformulate this lemma for $X = \mathbb{P}^{n}$ is by looking at the Koszul complex associated to

$$\bigoplus_{i=0}^{n} \mathcal{O}(-D_{i}) \xrightarrow{\ \phi\ } \mathcal{O}$$

where the $D_{i}$ are hyperplane divisors of $\mathbb{P}^{n}$. This gives the exact complex

$$0 \to \mathcal{O}\Big(-\sum_{i=0}^{n} D_{i}\Big) \to \cdots \to \bigoplus_{i \neq j} \mathcal{O}(-D_{i} - D_{j}) \to \bigoplus_{i=0}^{n} \mathcal{O}(-D_{i}) \to \mathcal{O} \to 0,$$

which gives a way to construct $\mathcal{O}(-n-1)$ using the sheaves $\mathcal{O}(-n), \ldots, \mathcal{O}(-1), \mathcal{O}$, since these are the sheaves appearing in every term of the above exact sequence except

$$\mathcal{O}\Big(-\sum_{i=0}^{n} D_{i}\Big) \cong \mathcal{O}(-n-1),$$

giving a derived equivalence of the rest of the terms of the above complex with $\mathcal{O}(-n-1)$
. For $n = 2$ the Koszul complex above is the exact complex

$$0 \to \mathcal{O}(-3) \to \mathcal{O}(-2)^{\oplus 3} \to \mathcal{O}(-1)^{\oplus 3} \to \mathcal{O} \to 0,$$

giving the quasi-isomorphism of $\mathcal{O}(-3)$ with the complex

$$0 \to \mathcal{O}(-2)^{\oplus 3} \to \mathcal{O}(-1)^{\oplus 3} \to \mathcal{O} \to 0.$$
== Orlov's reconstruction theorem ==
If $X$ is a smooth projective variety with ample (anti-)canonical sheaf, and there is an equivalence of derived categories $F : D^{b}(X) \to D^{b}(Y)$, then there is an isomorphism of the underlying varieties.
=== Sketch of proof ===
The proof starts out by analyzing two induced Serre functors on $D^{b}(Y)$ and finding an isomorphism between them. In particular, it shows there is an object $\omega_{Y} = F(\omega_{X})$ which acts like the dualizing sheaf on $Y$. The isomorphism between these two functors gives an isomorphism of the sets of underlying points of the derived categories. What then needs to be checked is an isomorphism $F(\omega_{X}^{\otimes k}) \cong \omega_{Y}^{\otimes k}$ for any $k \in \mathbb{N}$, giving an isomorphism of canonical rings

$$A(X) = \bigoplus_{k=0}^{\infty} H^{0}(X, \omega_{X}^{\otimes k}) \cong \bigoplus_{k=0}^{\infty} H^{0}(Y, \omega_{Y}^{\otimes k}).$$

If $\omega_{Y}$ can be shown to be (anti-)ample, then the Proj of these rings gives an isomorphism $X \to Y$. All of the details are contained in Dolgachev's notes.
=== Failure of reconstruction ===
This theorem fails in the case where $X$ is Calabi–Yau, since $\omega_{X} \cong \mathcal{O}_{X}$, or is the product of a variety which is Calabi–Yau. Abelian varieties are a class of examples where a reconstruction theorem could never hold. If $X$ is an abelian variety and $\hat{X}$ is its dual, the Fourier–Mukai transform with kernel $\mathcal{P}$, the Poincaré bundle, gives an equivalence

$$FM_{\mathcal{P}} : D^{b}(X) \to D^{b}(\hat{X})$$

of derived categories. Since an abelian variety is generally not isomorphic to its dual, there are equivalent derived categories without isomorphic underlying varieties. There is an alternative theory of tensor triangulated geometry where we consider not only a triangulated category but also a monoidal structure, i.e. a tensor product. This geometry has a full reconstruction theorem using the spectrum of categories.
=== Equivalences on K3 surfaces ===
K3 surfaces are another class of examples where reconstruction fails, due to their Calabi–Yau property. There is a criterion for determining whether or not two K3 surfaces are derived equivalent: the derived category $D^{b}(X)$ of a K3 surface $X$ is equivalent to that of another K3 surface, $D^{b}(Y)$, if and only if there is a Hodge isometry $H^{2}(X, \mathbb{Z}) \to H^{2}(Y, \mathbb{Z})$, that is, an isomorphism of Hodge structures. Moreover, this theorem is reflected in the motivic world as well, where the Chow motives are isomorphic if and only if there is an isometry of Hodge structures.
=== Autoequivalences ===
One nice application of the proof of this theorem is the identification of the autoequivalences of the derived category of a smooth projective variety with ample (anti-)canonical sheaf. This is given by

$$\operatorname{Auteq}(D^{b}(X)) \cong (\operatorname{Pic}(X) \rtimes \operatorname{Aut}(X)) \times \mathbb{Z},$$

where an autoequivalence $F$ is given by an automorphism $f : X \to X$, then tensoring by a line bundle $\mathcal{L} \in \operatorname{Pic}(X)$, and finally composing with a shift. Note that $\operatorname{Aut}(X)$ acts on $\operatorname{Pic}(X)$ via the polarization map, $g \mapsto g^{*}(L) \otimes L^{-1}$.
== Relation with motives ==
The bounded derived category $D^{b}(X)$ was used extensively in SGA6 to construct an intersection theory with $K(X)$ and $Gr_{\gamma}K(X) \otimes \mathbb{Q}$. Since these objects are intimately related to the Chow ring of $X$, its Chow motive, Orlov asked the following question: given a fully faithful functor $F : D^{b}(X) \to D^{b}(Y)$, is there an induced map on the Chow motives $f : M(X) \to M(Y)$ such that $M(X)$ is a summand of $M(Y)$? In the case of K3 surfaces, a similar result has been confirmed, since derived equivalent K3 surfaces have an isometry of Hodge structures, which gives an isomorphism of motives.
== Derived category of singularities ==
On a smooth variety there is an equivalence between the derived category $D^{b}(X)$ and the thick full triangulated subcategory $D_{\operatorname{perf}}(X)$ of perfect complexes. For separated, Noetherian schemes of finite Krull dimension (called the ELF condition) this is not the case, and Orlov defines the derived category of singularities as their difference using a quotient of categories. For an ELF scheme $X$ its derived category of singularities is defined as

$$D_{sg}(X) := D^{b}(X) / D_{\operatorname{perf}}(X)$$

for a suitable definition of localization of triangulated categories.
=== Construction of localization ===
Although localization of categories is defined for a class of morphisms $\Sigma$ in the category closed under composition, we can construct such a class from a triangulated subcategory. Given a full triangulated subcategory $\mathcal{N} \subset \mathcal{T}$, the class of morphisms $\Sigma(\mathcal{N})$ consists of the morphisms $s$ in $\mathcal{T}$ which fit into a distinguished triangle

$$X \xrightarrow{\ s\ } Y \to N \to X[+1]$$

with $X, Y \in \mathcal{T}$ and $N \in \mathcal{N}$. It can be checked that this forms a multiplicative system, using the octahedral axiom for distinguished triangles. Given

$$X \xrightarrow{\ s\ } Y \xrightarrow{\ s'\ } Z$$

with distinguished triangles

$$X \xrightarrow{\ s\ } Y \to N \to X[+1]$$
$$Y \xrightarrow{\ s'\ } Z \to N' \to Y[+1]$$

where $N, N' \in \mathcal{N}$, there are distinguished triangles

$$X \to Z \to M \to X[+1]$$
$$N \to M \to N' \to N[+1]$$

where $M \in \mathcal{N}$, since $\mathcal{N}$ is closed under extensions. This new category has the following properties:
It is canonically triangulated, where a triangle in $\mathcal{T}/\mathcal{N}$ is distinguished if it is isomorphic to the image of a triangle in $\mathcal{T}$.
The category $\mathcal{T}/\mathcal{N}$ has the following universal property: any exact functor $F : \mathcal{T} \to \mathcal{T}'$ with $F(N) \cong 0$ for all $N \in \mathcal{N}$ factors uniquely through the quotient functor $Q : \mathcal{T} \to \mathcal{T}/\mathcal{N}$, so there exists a functor $\tilde{F} : \mathcal{T}/\mathcal{N} \to \mathcal{T}'$ such that $\tilde{F} \circ Q \simeq F$.
=== Properties of singularity category ===
If $X$ is a regular scheme, then every bounded complex of coherent sheaves is perfect. Hence the singularity category is trivial.
Any coherent sheaf $\mathcal{F}$ which has support away from $\operatorname{Sing}(X)$ is perfect. Hence nontrivial coherent sheaves in $D_{sg}(X)$ have support on $\operatorname{Sing}(X)$.
In particular, objects in $D_{sg}(X)$ are isomorphic to $\mathcal{F}[+k]$ for some coherent sheaf $\mathcal{F}$.
=== Landau–Ginzburg models ===
Kontsevich proposed a model for Landau–Ginzburg models, which was worked out to the following definition: a Landau–Ginzburg model is a smooth variety $X$ together with a morphism $W : X \to \mathbb{A}^{1}$ which is flat. There are three associated categories which can be used to analyze the D-branes in a Landau–Ginzburg model using matrix factorizations from commutative algebra.
==== Associated categories ====
With this definition, there are three categories which can be associated to any point $w_{0} \in \mathbb{A}^{1}$: a $\mathbb{Z}/2$-graded category $DG_{w_{0}}(W)$, an exact category $\operatorname{Pair}_{w_{0}}(W)$, and a triangulated category $DB_{w_{0}}(W)$. Each of them has objects

$$\overline{P} = (p_{1} : P_{1} \to P_{0},\ p_{0} : P_{0} \to P_{1})$$

where $p_{0} \circ p_{1}$ and $p_{1} \circ p_{0}$ are multiplication by $W - w_{0}$.
There is also a shift functor $[+1]$ sending $\overline{P}$ to

$$\overline{P}[+1] = (-p_{0} : P_{0} \to P_{1},\ -p_{1} : P_{1} \to P_{0}).$$
The difference between these categories is their definition of morphisms. The most general is $DG_{w_{0}}(W)$, whose morphisms form the $\mathbb{Z}/2$-graded complex

$$\operatorname{Hom}(\overline{P}, \overline{Q}) = \bigoplus_{i,j} \operatorname{Hom}(P_{i}, Q_{j})$$

where the grading is given by $(i - j) \bmod 2$ and the differential acts on degree-$d$ homogeneous elements by

$$Df = q \circ f - (-1)^{d} f \circ p.$$

In $\operatorname{Pair}_{w_{0}}(W)$ the morphisms are the degree $0$ morphisms in $DG_{w_{0}}(W)$. Finally, $DB_{w_{0}}(W)$ has the morphisms in $\operatorname{Pair}_{w_{0}}(W)$ modulo the null-homotopies. Furthermore, $DB_{w_{0}}(W)$ can be endowed with a triangulated structure through a graded cone construction in $\operatorname{Pair}_{w_{0}}(W)$. Given $\overline{f} : \overline{P} \to \overline{Q}$ there is a mapping cone $C(f)$ with maps

$$c_{1} : Q_{1} \oplus P_{0} \to Q_{0} \oplus P_{1}, \qquad c_{1} = \begin{bmatrix} q_{0} & f_{1} \\ 0 & -p_{1} \end{bmatrix}$$

and

$$c_{0} : Q_{0} \oplus P_{1} \to Q_{1} \oplus P_{0}, \qquad c_{0} = \begin{bmatrix} q_{1} & f_{0} \\ 0 & -p_{0} \end{bmatrix}.$$

Then, a diagram

$$\overline{P} \to \overline{Q} \to \overline{R} \to \overline{P}[+1]$$

in $DB_{w_{0}}(W)$ is a distinguished triangle if it is isomorphic to a cone from $\operatorname{Pair}_{w_{0}}(W)$.
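The cone construction can be sanity-checked on a concrete example (my own choice, not from the source): for $W = z^2$ and $w_0 = 0$, take $\overline{P} = \overline{Q} = (z, z)$ on rank-one free modules and $\overline{f}$ the identity. The cone maps then compose to multiplication by $W$, so $C(f)$ is again a matrix factorization:

```python
import sympy as sp

z = sp.symbols('z')
W = z**2  # superpotential, with w_0 = 0

# Matrix factorization P = Q = (p1, p0) = (z, z): p0 * p1 = W.
# With f-bar the identity (f0 = f1 = 1), the cone maps are:
#   c1 = [[q0, f1], [0, -p1]],  c0 = [[q1, f0], [0, -p0]]
c1 = sp.Matrix([[z, 1], [0, -z]])
c0 = sp.Matrix([[z, 1], [0, -z]])

# Both composites are multiplication by W, so C(f) is a factorization of W.
assert (c1 * c0).expand() == W * sp.eye(2)
assert (c0 * c1).expand() == W * sp.eye(2)
print("C(f) is a matrix factorization of z**2")
```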
==== D-brane category ====
Using the construction of $DB_{w_{0}}(W)$ we can define the category of D-branes of type B on $X$ with superpotential $W$ as the product category

$$DB(W) = \prod_{w_{0} \in \mathbb{A}^{1}} DB_{w_{0}}(W).$$

This is related to the singularity category as follows. Given a superpotential $W$ with isolated singularities only at $0$, denote $X_{0} = W^{-1}(0)$. Then there is an exact equivalence of categories

$$DB_{w_{0}}(W) \cong D_{sg}(X_{0})$$

given by a functor induced from the cokernel functor $\operatorname{Cok}$ sending a pair $\overline{P} \mapsto \operatorname{Coker}(p_{1})$. In particular, since $X$ is regular, Bertini's theorem shows $DB(W)$ is only a finite product of categories.
=== Computational tools ===
==== Knörrer periodicity ====
There is a Fourier–Mukai transform $\Phi_{Z}$ on the derived categories of two related varieties giving an equivalence of their singularity categories. This equivalence is called Knörrer periodicity. It can be constructed as follows: given a flat morphism $f : X \to \mathbb{A}^{1}$ from a separated regular Noetherian scheme of finite Krull dimension, there is an associated scheme $Y = X \times \mathbb{A}^{2}$ and morphism $g : Y \to \mathbb{A}^{1}$ such that $g = f + xy$, where $x, y$ are the coordinates of the $\mathbb{A}^{2}$-factor. Consider the fibers $X_{0} = f^{-1}(0)$ and $Y_{0} = g^{-1}(0)$, the induced morphism $x : Y_{0} \to \mathbb{A}^{1}$, and the fiber $Z = x^{-1}(0)$. Then there is an injection $i : Z \to Y_{0}$ and a projection $q : Z \to X_{0}$ forming an $\mathbb{A}^{1}$-bundle. The Fourier–Mukai transform

$$\Phi_{Z}(\cdot) = \mathbf{R} i_{*} q^{*}(\cdot)$$

induces an equivalence of categories

$$D_{sg}(X_{0}) \to D_{sg}(Y_{0}),$$

called Knörrer periodicity. There is another form of this periodicity where $xy$ is replaced by the polynomial $x^{2} + y^{2}$. These periodicity theorems are the main computational technique, because they allow a reduction in the analysis of singularity categories.
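On the level of matrix factorizations, the passage from $f$ to $g = f + xy$ has a concrete incarnation (a standard tensor-product construction; the specific matrices below are my own illustration, not from the source): a rank-one factorization $(z^{i}, z^{n-i})$ of $f = z^{n}$ yields a rank-two factorization of $g = z^{n} + xy$:

```python
import sympy as sp

z, x, y = sp.symbols('z x y')
n, i = 5, 2
f = z**n
g = f + x*y  # the Knörrer-shifted superpotential

# Rank-one factorization of f: z**i * z**(n-i) = f.  The tensor
# construction turns it into a rank-two factorization (A1, A0) of g:
A1 = sp.Matrix([[z**i, -y], [x, z**(n - i)]])
A0 = sp.Matrix([[z**(n - i), y], [-x, z**i]])

assert (A1 * A0).expand() == (g * sp.eye(2)).expand()
assert (A0 * A1).expand() == (g * sp.eye(2)).expand()
print("(A1, A0) is a matrix factorization of z**n + x*y")
```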
=== Computations ===
If we take the Landau–Ginzburg model $(\mathbb{C}^{2k+1}, W)$ where

$$W = z_{0}^{n} + z_{1}^{2} + \cdots + z_{2k}^{2},$$

then the only singular fiber of $W$ is the one over the origin. The D-brane category of this Landau–Ginzburg model is then equivalent to the singularity category $D_{\text{sing}}(\operatorname{Spec}(\mathbb{C}[z]/(z^{n})))$. Over the algebra $A = \mathbb{C}[z]/(z^{n})$ there are indecomposable objects

$$V_{i} = \operatorname{Coker}(A \xrightarrow{\ z^{i}\ } A) = A/z^{i}$$

whose morphisms can be completely understood. For any pair $i, j$ there are morphisms $\alpha_{j}^{i} : V_{i} \to V_{j}$ where

for $i \geq j$ these are the natural projections
for $i < j$ these are multiplication by $z^{j-i}$

and every other morphism is a composition and linear combination of these morphisms. Many other cases can be explicitly computed, using the table of singularities found in Knörrer's original paper.
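The objects $V_i$ admit a quick numerical illustration (a sketch with my own encoding, not from the source): represent multiplication by $z$ on $A = \mathbb{C}[z]/(z^{n})$ as the $n \times n$ nilpotent shift matrix $N$; then $V_i = \operatorname{Coker}(z^{i})$ has dimension $n - \operatorname{rank}(N^{i}) = i$ over $\mathbb{C}$ for $0 \le i \le n$:

```python
import numpy as np

n = 4  # A = C[z]/(z^4)

# Multiplication by z on the basis 1, z, ..., z^{n-1}: the shift matrix.
N = np.zeros((n, n))
for k in range(n - 1):
    N[k + 1, k] = 1.0

# dim Coker(z^i : A -> A) = n - rank(N^i) = i for 0 <= i <= n.
for i in range(n + 1):
    coker_dim = n - np.linalg.matrix_rank(np.linalg.matrix_power(N, i))
    assert coker_dim == i
print("dim V_i = i for i = 0, ...,", n)
```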
== See also ==
Derived category
Triangulated category
Perfect complex
Semiorthogonal decomposition
Fourier–Mukai transform
Bridgeland stability condition
Homological mirror symmetry
Derived Categories notes - http://www.math.lsa.umich.edu/~idolga/derived9.pdf
== References ==
=== Research articles ===
A noncommutative version of Beilinson's theorem
Derived Categories of Toric Varieties
Derived Categories of Toric Varieties II | Wikipedia/Derived_noncommutative_algebraic_geometry |
In mathematics, a linear algebraic group is a subgroup of the group of invertible $n \times n$ matrices (under matrix multiplication) that is defined by polynomial equations. An example is the orthogonal group, defined by the relation $M^{T}M = I_{n}$, where $M^{T}$ is the transpose of $M$.
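The defining polynomial equations can be made concrete (a small sketch of my own, not from the source): the entries of $M^{T}M - I_{n}$ are polynomials in the entries of $M$, and a rotation matrix satisfies all of them while a shear does not:

```python
import math

def is_orthogonal(M, tol=1e-12):
    """Check the polynomial relations (M^T M - I)_{ij} = 0 numerically."""
    n = len(M)
    for i in range(n):
        for j in range(n):
            s = sum(M[k][i] * M[k][j] for k in range(n))  # (M^T M)_{ij}
            if abs(s - (1.0 if i == j else 0.0)) > tol:
                return False
    return True

t = 0.7  # a 2x2 rotation lies in the orthogonal group O(2)
R = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
assert is_orthogonal(R)

S = [[1.0, 2.0], [0.0, 1.0]]  # a shear does not
assert not is_orthogonal(S)
print("rotation satisfies M^T M = I; shear does not")
```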
Many Lie groups can be viewed as linear algebraic groups over the field of real or complex numbers. (For example, every compact Lie group can be regarded as a linear algebraic group over R (necessarily R-anisotropic and reductive), as can many noncompact groups such as the simple Lie group SL(n,R).) The simple Lie groups were classified by Wilhelm Killing and Élie Cartan in the 1880s and 1890s. At that time, no special use was made of the fact that the group structure can be defined by polynomials, that is, that these are algebraic groups. The founders of the theory of algebraic groups include Maurer, Chevalley, and Kolchin (1948). In the 1950s, Armand Borel constructed much of the theory of algebraic groups as it exists today.
One of the first uses for the theory was to define the Chevalley groups.
== Examples ==
For a positive integer {\displaystyle n}, the general linear group {\displaystyle GL(n)} over a field {\displaystyle k}, consisting of all invertible {\displaystyle n\times n} matrices, is a linear algebraic group over {\displaystyle k}. It contains the subgroups {\displaystyle U\subset B\subset GL(n)} consisting of matrices of the form, respectively,
{\displaystyle \left({\begin{array}{cccc}1&*&\dots &*\\0&1&\ddots &\vdots \\\vdots &\ddots &\ddots &*\\0&\dots &0&1\end{array}}\right)} and {\displaystyle \left({\begin{array}{cccc}*&*&\dots &*\\0&*&\ddots &\vdots \\\vdots &\ddots &\ddots &*\\0&\dots &0&*\end{array}}\right)}.
The group {\displaystyle U} is an example of a unipotent linear algebraic group, and the group {\displaystyle B} is an example of a solvable algebraic group called the Borel subgroup of {\displaystyle GL(n)}. It is a consequence of the Lie–Kolchin theorem that any connected solvable subgroup of {\displaystyle \mathrm {GL} (n)} can be conjugated into {\displaystyle B}. Any unipotent subgroup can be conjugated into {\displaystyle U}.
Another algebraic subgroup of {\displaystyle \mathrm {GL} (n)} is the special linear group {\displaystyle \mathrm {SL} (n)} of matrices with determinant 1.
The group {\displaystyle \mathrm {GL} (1)} is called the multiplicative group, usually denoted by {\displaystyle \mathbf {G} _{\mathrm {m} }}. The group of {\displaystyle k}-points {\displaystyle \mathbf {G} _{\mathrm {m} }(k)} is the multiplicative group {\displaystyle k^{*}} of nonzero elements of the field {\displaystyle k}. The additive group {\displaystyle \mathbf {G} _{\mathrm {a} }}, whose {\displaystyle k}-points are isomorphic to the additive group of {\displaystyle k}, can also be expressed as a matrix group, for example as the subgroup {\displaystyle U} in {\displaystyle \mathrm {GL} (2)}:
{\displaystyle {\begin{pmatrix}1&*\\0&1\end{pmatrix}}.}
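This embedding can be checked concretely. The following is a minimal sketch in plain Python (the helper names `u` and `matmul` are ours, not a standard API): matrix multiplication of such unitriangular matrices reproduces addition in the field.

```python
from fractions import Fraction

def u(a):
    """Embed a field element a as the unitriangular matrix [[1, a], [0, 1]]."""
    return ((Fraction(1), Fraction(a)), (Fraction(0), Fraction(1)))

def matmul(m, n):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(
        tuple(sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

# The embedding turns addition into matrix multiplication:
# u(a) * u(b) == u(a + b), and u(0) is the identity matrix.
assert matmul(u(3), u(5)) == u(8)
assert u(0) == ((1, 0), (0, 1))
```

In other words, a ↦ u(a) is an isomorphism from the additive group onto this subgroup U of GL(2).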
These two basic examples of commutative linear algebraic groups, the multiplicative and additive groups, behave very differently in terms of their linear representations (as algebraic groups). Every representation of the multiplicative group
G
m
{\displaystyle \mathbf {G} _{\mathrm {m} }}
is a direct sum of irreducible representations. (Its irreducible representations all have dimension 1, of the form
x
↦
x
n
{\displaystyle x\mapsto x^{n}}
for an integer
n
{\displaystyle n}
.) By contrast, the only irreducible representation of the additive group
G
a
{\displaystyle \mathbf {G} _{\mathrm {a} }}
is the trivial representation. So every representation of
G
a
{\displaystyle \mathbf {G} _{\mathrm {a} }}
(such as the 2-dimensional representation above) is an iterated extension of trivial representations, not a direct sum (unless the representation is trivial). The structure theory of linear algebraic groups analyzes any linear algebraic group in terms of these two basic groups and their generalizations, tori and unipotent groups, as discussed below.
== Definitions ==
For an algebraically closed field k, much of the structure of an algebraic variety X over k is encoded in its set X(k) of k-rational points, which allows an elementary definition of a linear algebraic group. First, define a function from the abstract group GL(n,k) to k to be regular if it can be written as a polynomial in the entries of an n×n matrix A and in 1/det(A), where det is the determinant. Then a linear algebraic group G over an algebraically closed field k is a subgroup G(k) of the abstract group GL(n,k) for some natural number n such that G(k) is defined by the vanishing of some set of regular functions.
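For instance, SL(2) fits this definition directly: it is cut out in GL(2,k) by the vanishing of the single regular function det(A) − 1,

```latex
\mathrm{SL}(2)(k) \;=\; \{\, A \in \mathrm{GL}(2,k) \;:\; a_{11}a_{22} - a_{12}a_{21} - 1 = 0 \,\},
```

a polynomial in the entries of A alone, so here no power of 1/det(A) is even needed.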
For an arbitrary field k, algebraic varieties over k are defined as a special case of schemes over k. In that language, a linear algebraic group G over a field k is a smooth closed subgroup scheme of GL(n) over k for some natural number n. In particular, G is defined by the vanishing of some set of regular functions on GL(n) over k, and these functions must have the property that for every commutative k-algebra R, G(R) is a subgroup of the abstract group GL(n,R). (Thus an algebraic group G over k is not just the abstract group G(k), but rather the whole family of groups G(R) for commutative k-algebras R; this is the philosophy of describing a scheme by its functor of points.)
In either language, one has the notion of a homomorphism of linear algebraic groups. For example, when k is algebraically closed, a homomorphism from G ⊂ GL(m) to H ⊂ GL(n) is a homomorphism of abstract groups G(k) → H(k) which is defined by regular functions on G. This makes the linear algebraic groups over k into a category. In particular, this defines what it means for two linear algebraic groups to be isomorphic.
In the language of schemes, a linear algebraic group G over a field k is in particular a group scheme over k, meaning a scheme over k together with a k-point 1 ∈ G(k) and morphisms
{\displaystyle m\colon G\times _{k}G\to G,\;i\colon G\to G}
over k which satisfy the usual axioms for the multiplication and inverse maps in a group (associativity, identity, inverses). A linear algebraic group is also smooth and of finite type over k, and it is affine (as a scheme). Conversely, every affine group scheme G of finite type over a field k has a faithful representation into GL(n) over k for some n. An example is the embedding of the additive group Ga into GL(2), as mentioned above. As a result, one can think of linear algebraic groups either as matrix groups or, more abstractly, as smooth affine group schemes over a field. (Some authors use "linear algebraic group" to mean any affine group scheme of finite type over a field.)
For a full understanding of linear algebraic groups, one has to consider more general (non-smooth) group schemes. For example, let k be an algebraically closed field of characteristic p > 0. Then the homomorphism f: Gm → Gm defined by x ↦ xp induces an isomorphism of abstract groups k* → k*, but f is not an isomorphism of algebraic groups (because x1/p is not a regular function). In the language of group schemes, there is a clearer reason why f is not an isomorphism: f is surjective, but it has nontrivial kernel, namely the group scheme μp of pth roots of unity. This issue does not arise in characteristic zero. Indeed, every group scheme of finite type over a field k of characteristic zero is smooth over k. A group scheme of finite type over any field k is smooth over k if and only if it is geometrically reduced, meaning that the base change
{\displaystyle G_{\overline {k}}} is reduced, where {\displaystyle {\overline {k}}} is an algebraic closure of k.
Since an affine scheme X is determined by its ring O(X) of regular functions, an affine group scheme G over a field k is determined by the ring O(G) with its structure of a Hopf algebra (coming from the multiplication and inverse maps on G). This gives an equivalence of categories (reversing arrows) between affine group schemes over k and commutative Hopf algebras over k. For example, the Hopf algebra corresponding to the multiplicative group Gm = GL(1) is the Laurent polynomial ring k[x, x−1], with comultiplication given by
{\displaystyle x\mapsto x\otimes x.}
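For comparison, the additive group gives a standard companion example: the comultiplication dualizes the group law, so multiplication (xy) for Gm and addition (x + y) for Ga yield

```latex
\begin{aligned}
O(\mathbf{G}_{\mathrm{m}}) &= k[x, x^{-1}], & \Delta(x) &= x \otimes x,\\
O(\mathbf{G}_{\mathrm{a}}) &= k[x],         & \Delta(x) &= x \otimes 1 + 1 \otimes x.
\end{aligned}
```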
=== Basic notions ===
For a linear algebraic group G over a field k, the identity component Go (the connected component containing the point 1) is a normal subgroup of finite index. So there is a group extension
{\displaystyle 1\to G^{\circ }\to G\to F\to 1,}
where F is a finite algebraic group. (For k algebraically closed, F can be identified with an abstract finite group.) Because of this, the study of algebraic groups mostly focuses on connected groups.
Various notions from abstract group theory can be extended to linear algebraic groups. It is straightforward to define what it means for a linear algebraic group to be commutative, nilpotent, or solvable, by analogy with the definitions in abstract group theory. For example, a linear algebraic group is solvable if it has a composition series of linear algebraic subgroups such that the quotient groups are commutative. Also, the normalizer, the center, and the centralizer of a closed subgroup H of a linear algebraic group G are naturally viewed as closed subgroup schemes of G. If they are smooth over k, then they are linear algebraic groups as defined above.
One may ask to what extent the properties of a connected linear algebraic group G over a field k are determined by the abstract group G(k). A useful result in this direction is that if the field k is perfect (for example, of characteristic zero), or if G is reductive (as defined below), then G is unirational over k. Therefore, if in addition k is infinite, the group G(k) is Zariski dense in G. For example, under the assumptions mentioned, G is commutative, nilpotent, or solvable if and only if G(k) has the corresponding property.
The assumption of connectedness cannot be omitted in these results. For example, let G be the group μ3 ⊂ GL(1) of cube roots of unity over the rational numbers Q. Then G is a linear algebraic group over Q for which G(Q) = 1 is not Zariski dense in G, because
{\displaystyle G({\overline {\mathbf {Q} }})} is a group of order 3.
Over an algebraically closed field, there is a stronger result about algebraic groups as algebraic varieties: every connected linear algebraic group over an algebraically closed field is a rational variety.
== The Lie algebra of an algebraic group ==
The Lie algebra {\displaystyle {\mathfrak {g}}} of an algebraic group G can be defined in several equivalent ways: as the tangent space T1(G) at the identity element 1 ∈ G(k), or as the space of left-invariant derivations. If k is algebraically closed, a derivation D: O(G) → O(G) over k of the coordinate ring of G is left-invariant if
{\displaystyle D\lambda _{x}=\lambda _{x}D}
for every x in G(k), where λx: O(G) → O(G) is induced by left multiplication by x. For an arbitrary field k, left invariance of a derivation is defined as an analogous equality of two linear maps O(G) → O(G) ⊗ O(G). The Lie bracket of two derivations is defined by [D1, D2] = D1D2 − D2D1.
The passage from G to {\displaystyle {\mathfrak {g}}} is thus a process of differentiation. For an element x ∈ G(k), the derivative at 1 ∈ G(k) of the conjugation map G → G, g ↦ xgx−1, is an automorphism of {\displaystyle {\mathfrak {g}}}, giving the adjoint representation:
{\displaystyle \operatorname {Ad} \colon G\to \operatorname {Aut} ({\mathfrak {g}}).}
Over a field of characteristic zero, a connected subgroup H of a linear algebraic group G is uniquely determined by its Lie algebra {\displaystyle {\mathfrak {h}}\subset {\mathfrak {g}}}. But not every Lie subalgebra of {\displaystyle {\mathfrak {g}}} corresponds to an algebraic subgroup of G, as one sees in the example of the torus G = (Gm)2 over C. In positive characteristic, there can be many different connected subgroups of a group G with the same Lie algebra (again, the torus G = (Gm)2 provides examples). For these reasons, although the Lie algebra of an algebraic group is important, the structure theory of algebraic groups requires more global tools.
== Semisimple and unipotent elements ==
For an algebraically closed field k, a matrix g in GL(n,k) is called semisimple if it is diagonalizable, and unipotent if the matrix g − 1 is nilpotent. Equivalently, g is unipotent if all eigenvalues of g are equal to 1. The Jordan canonical form for matrices implies that every element g of GL(n,k) can be written uniquely as a product g = gssgu such that gss is semisimple, gu is unipotent, and gss and gu commute with each other.
For any field k, an element g of GL(n,k) is said to be semisimple if it becomes diagonalizable over the algebraic closure of k. If the field k is perfect, then the semisimple and unipotent parts of g also lie in GL(n,k). Finally, for any linear algebraic group G ⊂ GL(n) over a field k, define a k-point of G to be semisimple or unipotent if it is semisimple or unipotent in GL(n,k). (These properties are in fact independent of the choice of a faithful representation of G.) If the field k is perfect, then the semisimple and unipotent parts of a k-point of G are automatically in G. That is (the Jordan decomposition): every element g of G(k) can be written uniquely as a product g = gssgu in G(k) such that gss is semisimple, gu is unipotent, and gss and gu commute with each other. This reduces the problem of describing the conjugacy classes in G(k) to the semisimple and unipotent cases.
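A small worked instance of the Jordan decomposition, sketched in plain Python with exact rational arithmetic (the variable names are ours): the matrix g below has the single eigenvalue 2, so its semisimple part is the scalar matrix 2I and its unipotent part is g/2.

```python
from fractions import Fraction as F

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

g   = ((F(2), F(1)),    (F(0), F(2)))
gss = ((F(2), F(0)),    (F(0), F(2)))     # diagonal, hence semisimple
gu  = ((F(1), F(1, 2)), (F(0), F(1)))     # gu - I is nilpotent, so gu is unipotent

assert matmul(gss, gu) == g               # g = gss * gu
assert matmul(gu, gss) == g               # the two factors commute
# (gu - I)^2 = 0: the strictly upper-triangular part squares to zero.
n = ((F(0), F(1, 2)), (F(0), F(0)))
assert matmul(n, n) == ((0, 0), (0, 0))
```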
== Tori ==
A torus over an algebraically closed field k means a group isomorphic to (Gm)n, the product of n copies of the multiplicative group over k, for some natural number n. For a linear algebraic group G, a maximal torus in G means a torus in G that is not contained in any bigger torus. For example, the group of diagonal matrices in GL(n) over k is a maximal torus in GL(n), isomorphic to (Gm)n. A basic result of the theory is that any two maximal tori in a group G over an algebraically closed field k are conjugate by some element of G(k). The rank of G means the dimension of any maximal torus.
For an arbitrary field k, a torus T over k means a linear algebraic group over k whose base change {\displaystyle T_{\overline {k}}} to the algebraic closure of k is isomorphic to (Gm)n over {\displaystyle {\overline {k}}}, for some natural number n. A split torus over k means a group isomorphic to (Gm)n over k for some n. An example of a non-split torus over the real numbers R is
{\displaystyle T=\{(x,y)\in A_{\mathbf {R} }^{2}:x^{2}+y^{2}=1\},}
with group structure given by the formula for multiplying complex numbers x+iy. Here T is a torus of dimension 1 over R. It is not split, because its group of real points T(R) is the circle group, which is not isomorphic even as an abstract group to Gm(R) = R*.
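The group law on T can be made explicit: identifying (x, y) with x + iy, the product is (x1x2 − y1y2, x1y2 + y1x2). A minimal sketch in plain Python (helper names `mul` and `on_circle` are ours) checks on rational points that this law preserves the circle:

```python
from fractions import Fraction as F

def mul(p, q):
    """Group law on T: multiply (x1 + i*y1)(x2 + i*y2), componentwise."""
    (x1, y1), (x2, y2) = p, q
    return (x1 * x2 - y1 * y2, x1 * y2 + y1 * x2)

def on_circle(p):
    x, y = p
    return x * x + y * y == 1

# Two rational points on x^2 + y^2 = 1 (from Pythagorean triples).
p = (F(3, 5), F(4, 5))
q = (F(5, 13), F(12, 13))

assert on_circle(p) and on_circle(q)
assert on_circle(mul(p, q))               # the product stays on the circle
assert mul(p, (1, 0)) == p                # (1, 0) is the identity element
assert mul(p, (p[0], -p[1])) == (1, 0)    # the conjugate gives the inverse
```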
Every point of a torus over a field k is semisimple. Conversely, if G is a connected linear algebraic group such that every element of {\displaystyle G({\overline {k}})} is semisimple, then G is a torus.
For a linear algebraic group G over a general field k, one cannot expect all maximal tori in G over k to be conjugate by elements of G(k). For example, both the multiplicative group Gm and the circle group T above occur as maximal tori in SL(2) over R. However, it is always true that any two maximal split tori in G over k (meaning split tori in G that are not contained in a bigger split torus) are conjugate by some element of G(k). As a result, it makes sense to define the k-rank or split rank of a group G over k as the dimension of any maximal split torus in G over k.
For any maximal torus T in a linear algebraic group G over a field k, Grothendieck showed that {\displaystyle T_{\overline {k}}} is a maximal torus in {\displaystyle G_{\overline {k}}}. It follows that any two maximal tori in G over a field k have the same dimension, although they need not be isomorphic.
== Unipotent groups ==
Let Un be the group of upper-triangular matrices in GL(n) with diagonal entries equal to 1, over a field k. A group scheme over a field k (for example, a linear algebraic group) is called unipotent if it is isomorphic to a closed subgroup scheme of Un for some n. It is straightforward to check that the group Un is nilpotent. As a result, every unipotent group scheme is nilpotent.
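Nilpotency of Un can be seen concretely for n = 3: the commutator of any two unitriangular matrices lands one step deeper in the lower central series, with only the corner entry possibly nonzero. A sketch in plain Python with exact arithmetic (the helper names are ours):

```python
from fractions import Fraction as F

def matmul(a, b):
    """Multiply two 3x3 matrices given as nested tuples."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def inv_unitri(m):
    """Inverse of a 3x3 unitriangular matrix, by the closed formula
    (I + N)^(-1) = I - N + N^2, where N is strictly upper triangular."""
    a, b, c = m[0][1], m[0][2], m[1][2]
    return ((F(1), -a, a * c - b),
            (F(0), F(1), -c),
            (F(0), F(0), F(1)))

g = ((F(1), F(2), F(3)), (F(0), F(1), F(4)), (F(0), F(0), F(1)))
h = ((F(1), F(5), F(6)), (F(0), F(1), F(7)), (F(0), F(0), F(1)))

comm = matmul(matmul(g, h), matmul(inv_unitri(g), inv_unitri(h)))
# The commutator g h g^(-1) h^(-1) has zeros just above the diagonal:
# only the (1,3) corner entry can be nonzero.
assert comm[0][1] == 0 and comm[1][2] == 0
assert comm[0][0] == comm[1][1] == comm[2][2] == 1
```

Iterating once more, commutators with such corner matrices are trivial, so U3 is nilpotent of class 2.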
A linear algebraic group G over a field k is unipotent if and only if every element of {\displaystyle G({\overline {k}})} is unipotent.
The group Bn of upper-triangular matrices in GL(n) is a semidirect product
{\displaystyle B_{n}=T_{n}\ltimes U_{n},}
where Tn is the diagonal torus (Gm)n. More generally, every connected solvable linear algebraic group is a semidirect product of a torus with a unipotent group, T ⋉ U.
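The semidirect product structure amounts to a concrete factorization: any invertible upper-triangular matrix b factors uniquely as b = t·u with t its diagonal part and u = t⁻¹b unitriangular. A minimal sketch in plain Python (variable names are ours):

```python
from fractions import Fraction as F

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# Factor b = t * u: t is the diagonal part (a point of the torus T_n),
# and u = t^(-1) b is unitriangular (a point of U_n).
b = ((F(2), F(3)), (F(0), F(5)))
t = ((F(2), F(0)), (F(0), F(5)))
u = ((F(1), F(3, 2)), (F(0), F(1)))   # each row of b divided by its pivot

assert matmul(t, u) == b
```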
A smooth connected unipotent group over a perfect field k (for example, an algebraically closed field) has a composition series with all quotient groups isomorphic to the additive group Ga.
== Borel subgroups ==
The Borel subgroups are important for the structure theory of linear algebraic groups. For a linear algebraic group G over an algebraically closed field k, a Borel subgroup of G means a maximal smooth connected solvable subgroup. For example, one Borel subgroup of GL(n) is the subgroup B of upper-triangular matrices (all entries below the diagonal are zero).
A basic result of the theory is that any two Borel subgroups of a connected group G over an algebraically closed field k are conjugate by some element of G(k). (A standard proof uses the Borel fixed-point theorem: for a connected solvable group G acting on a proper variety X over an algebraically closed field k, there is a k-point in X which is fixed by the action of G.) The conjugacy of Borel subgroups in GL(n) amounts to the Lie–Kolchin theorem: every smooth connected solvable subgroup of GL(n) is conjugate to a subgroup of the upper-triangular subgroup in GL(n).
For an arbitrary field k, a Borel subgroup B of G is defined to be a subgroup over k such that, over an algebraic closure {\displaystyle {\overline {k}}} of k, {\displaystyle B_{\overline {k}}} is a Borel subgroup of {\displaystyle G_{\overline {k}}}. Thus G may or may not have a Borel subgroup over k.
For a closed subgroup scheme H of G, the quotient space G/H is a smooth quasi-projective scheme over k. A smooth subgroup P of a connected group G is called parabolic if G/P is projective over k (or equivalently, proper over k). An important property of Borel subgroups B is that G/B is a projective variety, called the flag variety of G. That is, Borel subgroups are parabolic subgroups. More precisely, for k algebraically closed, the Borel subgroups are exactly the minimal parabolic subgroups of G; conversely, every subgroup containing a Borel subgroup is parabolic. So one can list all parabolic subgroups of G (up to conjugation by G(k)) by listing all the linear algebraic subgroups of G that contain a fixed Borel subgroup. For example, the subgroups P ⊂ GL(3) over k that contain the Borel subgroup B of upper-triangular matrices are B itself, the whole group GL(3), and the intermediate subgroups
{\displaystyle \left\{{\begin{bmatrix}*&*&*\\0&*&*\\0&*&*\end{bmatrix}}\right\}}
and
{\displaystyle \left\{{\begin{bmatrix}*&*&*\\*&*&*\\0&0&*\end{bmatrix}}\right\}.}
The corresponding projective homogeneous varieties GL(3)/P are (respectively): the flag manifold of all chains of linear subspaces {\displaystyle 0\subset V_{1}\subset V_{2}\subset A_{k}^{3}} with Vi of dimension i; a point; the projective space P2 of lines (1-dimensional linear subspaces) in A3; and the dual projective space P2 of planes in A3.
== Semisimple and reductive groups ==
A connected linear algebraic group G over an algebraically closed field is called semisimple if every smooth connected solvable normal subgroup of G is trivial. More generally, a connected linear algebraic group G over an algebraically closed field is called reductive if every smooth connected unipotent normal subgroup of G is trivial. (Some authors do not require reductive groups to be connected.) A semisimple group is reductive. A group G over an arbitrary field k is called semisimple or reductive if {\displaystyle G_{\overline {k}}} is semisimple or reductive. For example, the group SL(n) of n × n matrices with determinant 1 over any field k is semisimple, whereas a nontrivial torus is reductive but not semisimple. Likewise, GL(n) is reductive but not semisimple (because its center Gm is a nontrivial smooth connected solvable normal subgroup).
Every compact connected Lie group has a complexification, which is a complex reductive algebraic group. In fact, this construction gives a one-to-one correspondence between compact connected Lie groups and complex reductive groups, up to isomorphism.
A linear algebraic group G over a field k is called simple (or k-simple) if it is semisimple, nontrivial, and every smooth connected normal subgroup of G over k is trivial or equal to G. (Some authors call this property "almost simple".) This differs slightly from the terminology for abstract groups, in that a simple algebraic group may have nontrivial center (although the center must be finite). For example, for any integer n at least 2 and any field k, the group SL(n) over k is simple, and its center is the group scheme μn of nth roots of unity.
Every connected linear algebraic group G over a perfect field k is (in a unique way) an extension of a reductive group R by a smooth connected unipotent group U, called the unipotent radical of G:
{\displaystyle 1\to U\to G\to R\to 1.}
If k has characteristic zero, then one has the more precise Levi decomposition: every connected linear algebraic group G over k is a semidirect product {\displaystyle R\ltimes U} of a reductive group by a unipotent group.
== Classification of reductive groups ==
Reductive groups include the most important linear algebraic groups in practice, such as the classical groups: GL(n), SL(n), the orthogonal groups SO(n) and the symplectic groups Sp(2n). On the other hand, the definition of reductive groups is quite "negative", and it is not clear that one can expect to say much about them. Remarkably, Claude Chevalley gave a complete classification of the reductive groups over an algebraically closed field: they are determined by root data. In particular, simple groups over an algebraically closed field k are classified (up to quotients by finite central subgroup schemes) by their Dynkin diagrams. It is striking that this classification is independent of the characteristic of k. For example, the exceptional Lie groups G2, F4, E6, E7, and E8 can be defined in any characteristic (and even as group schemes over Z). The classification of finite simple groups says that most finite simple groups arise as the group of k-points of a simple algebraic group over a finite field k, or as minor variants of that construction.
Every reductive group over a field is the quotient by a finite central subgroup scheme of the product of a torus and some simple groups. For example,
{\displaystyle GL(n)\cong (G_{m}\times SL(n))/\mu _{n}.}
For an arbitrary field k, a reductive group G is called split if it contains a split maximal torus over k (that is, a split torus in G which remains maximal over an algebraic closure of k). For example, GL(n) is a split reductive group over any field k. Chevalley showed that the classification of split reductive groups is the same over any field. By contrast, the classification of arbitrary reductive groups can be hard, depending on the base field. For example, every nondegenerate quadratic form q over a field k determines a reductive group SO(q), and every central simple algebra A over k determines a reductive group SL1(A). As a result, the problem of classifying reductive groups over k essentially includes the problem of classifying all quadratic forms over k or all central simple algebras over k. These problems are easy for k algebraically closed, and they are understood for some other fields such as number fields, but for arbitrary fields there are many open questions.
== Applications ==
=== Representation theory ===
One reason for the importance of reductive groups comes from representation theory. Every irreducible representation of a unipotent group is trivial. More generally, for any linear algebraic group G written as an extension
{\displaystyle 1\to U\to G\to R\to 1}
with U unipotent and R reductive, every irreducible representation of G factors through R. This focuses attention on the representation theory of reductive groups. (To be clear, the representations considered here are representations of G as an algebraic group. Thus, for a group G over a field k, the representations are on k-vector spaces, and the action of G is given by regular functions. It is an important but different problem to classify continuous representations of the group G(R) for a real reductive group G, or similar problems over other fields.)
Chevalley showed that the irreducible representations of a split reductive group over a field k are finite-dimensional, and they are indexed by dominant weights. This is the same as what happens in the representation theory of compact connected Lie groups, or the finite-dimensional representation theory of complex semisimple Lie algebras. For k of characteristic zero, all these theories are essentially equivalent. In particular, every representation of a reductive group G over a field of characteristic zero is a direct sum of irreducible representations, and if G is split, the characters of the irreducible representations are given by the Weyl character formula. The Borel–Weil theorem gives a geometric construction of the irreducible representations of a reductive group G in characteristic zero, as spaces of sections of line bundles over the flag manifold G/B.
The representation theory of reductive groups (other than tori) over a field of positive characteristic p is less well understood. In this situation, a representation need not be a direct sum of irreducible representations. And although irreducible representations are indexed by dominant weights, the dimensions and characters of the irreducible representations are known only in some cases. Andersen, Jantzen and Soergel (1994) determined these characters (proving Lusztig's conjecture) when the characteristic p is sufficiently large compared to the Coxeter number of the group. For small primes p, there is not even a precise conjecture.
=== Group actions and geometric invariant theory ===
An action of a linear algebraic group G on a variety (or scheme) X over a field k is a morphism
{\displaystyle G\times _{k}X\to X}
that satisfies the axioms of a group action. As in other types of group theory, it is important to study group actions, since groups arise naturally as symmetries of geometric objects.
Part of the theory of group actions is geometric invariant theory, which aims to construct a quotient variety X/G, describing the set of orbits of a linear algebraic group G on X as an algebraic variety. Various complications arise. For example, if X is an affine variety, then one can try to construct X/G as Spec of the ring of invariants O(X)G. However, Masayoshi Nagata showed that the ring of invariants need not be finitely generated as a k-algebra (and so Spec of the ring is a scheme but not a variety), a negative answer to Hilbert's 14th problem. In the positive direction, the ring of invariants is finitely generated if G is reductive, by Haboush's theorem, proved in characteristic zero by Hilbert and Nagata.
Geometric invariant theory involves further subtleties when a reductive group G acts on a projective variety X. In particular, the theory defines open subsets of "stable" and "semistable" points in X, with the quotient morphism only defined on the set of semistable points.
== Related notions ==
Linear algebraic groups admit variants in several directions. Dropping the existence of the inverse map {\displaystyle i\colon G\to G}, one obtains the notion of a linear algebraic monoid.
=== Lie groups ===
For a linear algebraic group G over the real numbers R, the group of real points G(R) is a Lie group, essentially because real polynomials, which describe the multiplication on G, are smooth functions. Likewise, for a linear algebraic group G over C, G(C) is a complex Lie group. Much of the theory of algebraic groups was developed by analogy with Lie groups.
There are several reasons why a Lie group may not have the structure of a linear algebraic group over R.
A Lie group with an infinite group of components G/Go cannot be realized as a linear algebraic group.
An algebraic group G over R may be connected as an algebraic group while the Lie group G(R) is not connected, and likewise for simply connected groups. For example, the algebraic group SL(2) is simply connected over any field, whereas the Lie group SL(2,R) has fundamental group isomorphic to the integers Z. The double cover H of SL(2,R), known as the metaplectic group, is a Lie group that cannot be viewed as a linear algebraic group over R. More strongly, H has no faithful finite-dimensional representation.
Anatoly Maltsev showed that every simply connected nilpotent Lie group can be viewed as a unipotent algebraic group G over R in a unique way. (As a variety, G is isomorphic to affine space of some dimension over R.) By contrast, there are simply connected solvable Lie groups that cannot be viewed as real algebraic groups. For example, the universal cover H of the semidirect product S1 ⋉ R2 has center isomorphic to Z, which is not a linear algebraic group, and so H cannot be viewed as a linear algebraic group over R.
=== Abelian varieties ===
Algebraic groups which are not affine behave very differently. In particular, a smooth connected group scheme which is a projective variety over a field is called an abelian variety. In contrast to linear algebraic groups, every abelian variety is commutative. Nonetheless, abelian varieties have a rich theory. Even the case of elliptic curves (abelian varieties of dimension 1) is central to number theory, with applications including the proof of Fermat's Last Theorem.
=== Tannakian categories ===
The finite-dimensional representations of an algebraic group G, together with the tensor product of representations, form a tannakian category RepG. In fact, tannakian categories with a "fiber functor" over a field are equivalent to affine group schemes. (Every affine group scheme over a field k is pro-algebraic in the sense that it is an inverse limit of affine group schemes of finite type over k.) For example, the Mumford–Tate group and the motivic Galois group are constructed using this formalism. Certain properties of a (pro-)algebraic group G can be read from its category of representations. For example, over a field of characteristic zero, RepG is a semisimple category if and only if the identity component of G is pro-reductive.
== See also ==
The groups of Lie type are the finite simple groups constructed from simple algebraic groups over finite fields.
Lang's theorem
Generalized flag variety, Bruhat decomposition, BN pair, Weyl group, Cartan subgroup, group of adjoint type, parabolic induction
Real form (Lie theory), Satake diagram
Adelic algebraic group, Weil's conjecture on Tamagawa numbers
Langlands classification, Langlands program, geometric Langlands program
Torsor, nonabelian cohomology, special group, cohomological invariant, essential dimension, Kneser–Tits conjecture, Serre's conjecture II
Pseudo-reductive group
Differential Galois theory
Distribution on a linear algebraic group
== Notes ==
== References ==
Andersen, H. H.; Jantzen, J. C.; Soergel, W. (1994), Representations of Quantum Groups at a pth Root of Unity and of Semisimple Groups in Characteristic p: Independence of p, Astérisque, vol. 220, Société Mathématique de France, ISSN 0303-1179, MR 1272539
Borel, Armand (1991) [1969], Linear Algebraic Groups (2nd ed.), New York: Springer-Verlag, ISBN 0-387-97370-2, MR 1102012
Bröcker, Theodor; tom Dieck, Tammo (1985), Representations of Compact Lie Groups, Springer Nature, ISBN 0-387-13678-9, MR 0781344
Conrad, Brian (2014), "Reductive group schemes" (PDF), Autour des schémas en groupes, vol. 1, Paris: Société Mathématique de France, pp. 93–444, ISBN 978-2-85629-794-0, MR 3309122
Deligne, Pierre; Milne, J. S. (1982), "Tannakian categories", Hodge Cycles, Motives, and Shimura Varieties, Lecture Notes in Mathematics, vol. 900, Springer Nature, pp. 101–228, ISBN 3-540-11174-3, MR 0654325
De Medts, Tom (2019), Linear Algebraic Groups (course notes) (PDF), Ghent University
Humphreys, James E. (1975), Linear Algebraic Groups, Springer, ISBN 0-387-90108-6, MR 0396773
Kolchin, E. R. (1948), "Algebraic matric groups and the Picard–Vessiot theory of homogeneous linear ordinary differential equations", Annals of Mathematics, Second Series, 49 (1): 1–42, doi:10.2307/1969111, ISSN 0003-486X, JSTOR 1969111, MR 0024884
Milne, J. S. (2017), Algebraic Groups: The Theory of Group Schemes of Finite Type over a Field, Cambridge University Press, ISBN 978-1107167483, MR 3729270
Springer, Tonny A. (1998) [1981], Linear Algebraic Groups (2nd ed.), New York: Birkhäuser, ISBN 0-8176-4021-5, MR 1642713
== External links ==
"Linear algebraic group", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Linear_algebraic_group_action |
In mathematics, KR-theory is a variant of topological K-theory defined for spaces with an involution. It was introduced by Atiyah (1966), motivated by applications to the Atiyah–Singer index theorem for real elliptic operators.
== Definition ==
A real space is defined to be a topological space with an involution. A real vector bundle over a real space X is defined to be a complex vector bundle E over X that is also a real space, such that the natural maps from E to X and from ℂ × E to E commute with the involution, where the involution acts as complex conjugation on ℂ. (This differs from the notion of a complex vector bundle in the category of Z/2Z spaces, where the involution acts trivially on ℂ.)
The group KR(X) is the Grothendieck group of finite-dimensional real vector bundles over the real space X.
== Periodicity ==
Similarly to Bott periodicity, the periodicity theorem for KR states that KR^{p,q} = KR^{p+1,q+1}, where KR^{p,q} denotes suspension with respect to R^{p,q} = R^q + iR^p (with a switch in the order of p and q), given by
{\displaystyle KR^{p,q}(X,Y)=KR(X\times B^{p,q},X\times S^{p,q}\cup Y\times B^{p,q})}
where B^{p,q} and S^{p,q} are the unit ball and sphere in R^{p,q}.
== References ==
Atiyah, Michael Francis (1966), "K-theory and reality", The Quarterly Journal of Mathematics, Second Series, 17 (1): 367–386, doi:10.1093/qmath/17.1.367, ISSN 0033-5606, MR 0206940, archived from the original on 2013-04-15 | Wikipedia/KR-theory |
In theoretical physics, type II string theory is a unified term that includes both the type IIA and type IIB string theories. Type II string theory accounts for two of the five consistent superstring theories in ten dimensions. Both theories have {\displaystyle {\mathcal {N}}=2} extended supersymmetry, which is the maximal amount of supersymmetry (namely 32 supercharges) in ten dimensions. Both theories are based on oriented closed strings. On the worldsheet, they differ only in the choice of GSO projection. They were first discovered by Michael Green and John Henry Schwarz in 1982, with the terminology of type I and type II coined to classify the three string theories known at the time.
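The supercharge count can be reproduced by elementary spinor arithmetic; the snippet below is an illustration of the counting, not code from any physics library.

```python
# In d = 10, a Dirac spinor has 2**(d//2) = 32 complex components.
# The Weyl (chirality) condition halves this, and the Majorana (reality)
# condition makes the remaining components real, leaving 16 real
# supercharges per supersymmetry.  N = 2 then gives the maximal 32.

d = 10
dirac_complex_components = 2 ** (d // 2)                    # 32
majorana_weyl_supercharges = dirac_complex_components // 2  # 16
n_supersymmetries = 2
total = n_supersymmetries * majorana_weyl_supercharges
assert total == 32
```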
== Type IIA string theory ==
At low energies, type IIA string theory is described by type IIA supergravity in ten dimensions which is a non-chiral theory (i.e. left–right symmetric) with (1,1) d=10 supersymmetry; the fact that the anomalies in this theory cancel is therefore trivial.
In the 1990s it was realized by Edward Witten (building on previous insights by Michael Duff, Paul Townsend, and others) that the limit of type IIA string theory in which the string coupling goes to infinity becomes a new 11-dimensional theory called M-theory. Consequently the low energy type IIA supergravity theory can also be derived from the unique maximal supergravity theory in 11 dimensions (low energy version of M-theory) via a dimensional reduction.
The content of the massless sector of the theory (which is relevant in the low energy limit) is given by the
{\textstyle (8_{v}\oplus 8_{s})\otimes (8_{v}\oplus 8_{c})}
representation of SO(8), where 8_v is the irreducible vector representation, and 8_c and 8_s are the irreducible representations with odd and even eigenvalues of the fermionic parity operator, often called the co-spinor and spinor representations. These three representations enjoy a triality symmetry, which is evident from the Dynkin diagram of SO(8). The four sectors of the massless spectrum after GSO projection and decomposition into irreducible representations are
{\displaystyle {\text{NS-NS}}:~8_{v}\otimes 8_{v}=1\oplus 28\oplus 35=\Phi \oplus B_{\mu \nu }\oplus G_{\mu \nu }}
{\displaystyle {\text{NS-R}}:~8_{v}\otimes 8_{c}=8_{s}\oplus 56_{c}=\lambda ^{+}\oplus \psi _{m}^{-}}
{\displaystyle {\text{R-NS}}:~8_{c}\otimes 8_{s}=8_{s}\oplus 56_{s}=\lambda ^{-}\oplus \psi _{m}^{+}}
{\displaystyle {\text{R-R}}:~8_{s}\otimes 8_{c}=8_{v}\oplus 56_{t}=C_{n}\oplus C_{nmp}}
where R and NS stand for the Ramond and Neveu–Schwarz sectors respectively. The numbers denote the dimension of the irreducible representation and, equivalently, the number of components of the corresponding fields. The various massless fields obtained are: the graviton G_{\mu\nu} with two superpartner gravitinos \psi_m^{\pm}, which give rise to local spacetime supersymmetry; a scalar dilaton \Phi with two superpartner spinors, the dilatinos \lambda^{\pm}; a 2-form gauge field B_{\mu\nu}, often called the Kalb–Ramond field; a 1-form C_n; and a 3-form C_{nmp}. Since p-form gauge fields naturally couple to extended objects with a (p+1)-dimensional world-volume, type IIA string theory naturally incorporates various extended objects: the D0, D2, D4 and D6 branes (using Hodge duality) among the D-branes (which carry R–R charge), and the F1 string and NS5 brane among other objects.
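The dimension counts in the sector decompositions above can be verified directly. The following sketch is plain arithmetic (the sector table is transcribed from the decompositions; everything else is an illustration):

```python
from math import comb

# Each worldsheet sector tensors two 8-dimensional SO(8) representations,
# giving 64 states, which decompose into the irreducible pieces listed above.
sectors = {
    "NS-NS": [1, 28, 35],   # dilaton, Kalb-Ramond 2-form, graviton
    "NS-R":  [8, 56],       # dilatino, gravitino
    "R-NS":  [8, 56],       # dilatino, gravitino
    "R-R":   [8, 56],       # 1-form C_n, 3-form C_nmp
}

for name, dims in sectors.items():
    assert sum(dims) == 8 * 8, name

# The R-R p-form components match antisymmetric-tensor counting in the
# 8 transverse dimensions: a 1-form has C(8,1) = 8 components and a
# 3-form has C(8,3) = 56.
assert comb(8, 1) == 8
assert comb(8, 3) == 56
```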
The mathematical treatment of type IIA string theory belongs to symplectic topology and algebraic geometry, particularly Gromov–Witten invariants.
== Type IIB string theory ==
At low energies, type IIB string theory is described by type IIB supergravity in ten dimensions which is a chiral theory (left–right asymmetric) with (2,0) d=10 supersymmetry; the fact that the anomalies in this theory cancel is therefore nontrivial.
In the 1990s it was realized that type IIB string theory with the string coupling constant g is equivalent to the same theory with the coupling 1/g. This equivalence is known as S-duality.
Orientifold of type IIB string theory leads to type I string theory.
The mathematical treatment of type IIB string theory belongs to algebraic geometry, specifically the deformation theory of complex structures originally studied by Kunihiko Kodaira and Donald C. Spencer.
In 1997 Juan Maldacena gave some arguments indicating that type IIB string theory is equivalent to N = 4 supersymmetric Yang–Mills theory in the 't Hooft limit; it was the first suggestion concerning the AdS/CFT correspondence.
== Relationship between the type II theories ==
In the late 1980s, it was realized that type IIA string theory is related to type IIB string theory by T-duality.
== See also ==
Superstring theory
Type I string
Heterotic string
== References == | Wikipedia/Type_II_string_theory |
In mathematics, topological K-theory is a branch of algebraic topology. It was founded to study vector bundles on topological spaces, by means of ideas now recognised as (general) K-theory that were introduced by Alexander Grothendieck. The early work on topological K-theory is due to Michael Atiyah and Friedrich Hirzebruch.
== Definitions ==
Let X be a compact Hausdorff space and k = ℝ or ℂ. Then K_k(X) is defined to be the Grothendieck group of the commutative monoid of isomorphism classes of finite-dimensional k-vector bundles over X under Whitney sum. Tensor product of bundles gives K-theory a commutative ring structure. Without subscripts, K(X) usually denotes complex K-theory, whereas real K-theory is sometimes written as KO(X). The remaining discussion is focused on complex K-theory.
As a first example, note that the K-theory of a point is the integers. This is because vector bundles over a point are trivial and thus classified by their rank and the Grothendieck group of the natural numbers is the integers.
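The group-completion step behind this example can be made concrete. The sketch below is an illustration (all names are my own), building the Grothendieck group of the monoid of bundle ranks over a point and recovering the integers:

```python
# Grothendieck group completion of the commutative monoid (N, +),
# which computes K(point): vector bundles over a point are classified
# by rank, and formal differences of ranks give the integers.

def k_class(m, n):
    """Canonical form of the formal difference [m] - [n] of trivial
    bundle ranks: (m, n) ~ (m + k, n + k) for every k >= 0."""
    d = min(m, n)
    return (m - d, n - d)  # at most one entry is nonzero

def add(a, b):
    """Addition induced by the Whitney sum of bundles."""
    return k_class(a[0] + b[0], a[1] + b[1])

def to_int(a):
    """The isomorphism K(point) = Z."""
    return a[0] - a[1]

# [3] - [1] and [0] - [2] are inverse classes: their sum is 0 in K(point).
x = k_class(3, 1)
y = k_class(0, 2)
assert to_int(add(x, y)) == 0
assert to_int(k_class(5, 0)) == 5
```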
There is also a reduced version of K-theory, {\displaystyle {\widetilde {K}}(X)}, defined for X a compact pointed space (cf. reduced homology). This reduced theory is intuitively K(X) modulo trivial bundles. It is defined as the group of stable equivalence classes of bundles. Two bundles E and F are said to be stably isomorphic if there are trivial bundles ε₁ and ε₂ such that E ⊕ ε₁ ≅ F ⊕ ε₂. This equivalence relation results in a group, since every vector bundle can be completed to a trivial bundle by summing with its orthogonal complement.
Alternatively, {\displaystyle {\widetilde {K}}(X)} can be defined as the kernel of the map
{\displaystyle K(X)\to K(x_{0})\cong \mathbb {Z} }
induced by the inclusion of the base point x0 into X.
K-theory forms a multiplicative (generalized) cohomology theory as follows. The short exact sequence of a pair of pointed spaces (X, A)
{\displaystyle {\widetilde {K}}(X/A)\to {\widetilde {K}}(X)\to {\widetilde {K}}(A)}
extends to a long exact sequence
{\displaystyle \cdots \to {\widetilde {K}}(SX)\to {\widetilde {K}}(SA)\to {\widetilde {K}}(X/A)\to {\widetilde {K}}(X)\to {\widetilde {K}}(A).}
Let SⁿX denote the n-th reduced suspension of a space X, and define
{\displaystyle {\widetilde {K}}^{-n}(X):={\widetilde {K}}(S^{n}X),\qquad n\geq 0.}
Negative indices are chosen so that the coboundary maps increase dimension.
It is often useful to have an unreduced version of these groups, simply by defining:
{\displaystyle K^{-n}(X)={\widetilde {K}}^{-n}(X_{+}).}
Here {\displaystyle X_{+}} is X with a disjoint basepoint labeled '+' adjoined.
Finally, the Bott periodicity theorem as formulated below extends the theories to positive integers.
== Properties ==
{\displaystyle K^{n}} (respectively, {\displaystyle {\widetilde {K}}^{n}}) is a contravariant functor from the homotopy category of (pointed) spaces to the category of commutative rings. Thus, for instance, the K-theory of a contractible space is always ℤ.
The spectrum of K-theory is BU × ℤ (with the discrete topology on ℤ), i.e.
{\displaystyle K(X)\cong \left[X_{+},\mathbb {Z} \times BU\right],}
where [ , ] denotes pointed homotopy classes and BU is the colimit of the classifying spaces of the unitary groups:
{\displaystyle BU(n)\cong \operatorname {Gr} \left(n,\mathbb {C} ^{\infty }\right).}
Similarly,
K
~
(
X
)
≅
[
X
,
Z
×
B
U
]
.
{\displaystyle {\widetilde {K}}(X)\cong [X,\mathbb {Z} \times BU].}
For real K-theory use BO.
There is a natural ring homomorphism
{\displaystyle K^{0}(X)\to H^{2*}(X,\mathbb {Q} ),}
the Chern character, such that
{\displaystyle K^{0}(X)\otimes \mathbb {Q} \to H^{2*}(X,\mathbb {Q} )}
is an isomorphism.
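For a sum of line bundles, the Chern character is the sum of exponentials of the Chern roots, and its homogeneous pieces are rational polynomials in the Chern classes. A short sketch, assuming the sympy library is available (the rank-2 example and all names are illustrative):

```python
import sympy as sp

x1, x2, c1, c2 = sp.symbols('x1 x2 c1 c2')

# ch(E) = exp(x1) + exp(x2) for Chern roots x1, x2,
# truncated past cohomological degree 6.
ch = sum((x1**k + x2**k) / sp.factorial(k) for k in range(4))

# Rewritten in the elementary symmetric polynomials c1 = x1 + x2 and
# c2 = x1*x2: degree 0 gives the rank 2, degree 1 gives c1, etc.
expected = 2 + c1 + (c1**2 - 2*c2) / 2 + (c1**3 - 3*c1*c2) / 6
diff = sp.expand(ch - expected.subs({c1: x1 + x2, c2: x1 * x2}))
assert diff == 0  # the coefficients are rational, as the isomorphism over Q requires
```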
The equivalent of the Steenrod operations in K-theory are the Adams operations. They can be used to define characteristic classes in topological K-theory.
The Splitting principle of topological K-theory allows one to reduce statements about arbitrary vector bundles to statements about sums of line bundles.
The Thom isomorphism theorem in topological K-theory is
{\displaystyle K(X)\cong {\widetilde {K}}(T(E)),}
where T(E) is the Thom space of the vector bundle E over X. This holds whenever E is a spinc-bundle.
The Atiyah-Hirzebruch spectral sequence allows computation of K-groups from ordinary cohomology groups.
Topological K-theory can be generalized vastly to a functor on C*-algebras, see operator K-theory and KK-theory.
== Bott periodicity ==
The phenomenon of periodicity named after Raoul Bott (see Bott periodicity theorem) can be formulated this way:
{\displaystyle K(X\times \mathbb {S} ^{2})=K(X)\otimes K(\mathbb {S} ^{2}),}
and
{\displaystyle K(\mathbb {S} ^{2})=\mathbb {Z} [H]/(H-1)^{2}}
where H is the class of the tautological bundle on {\displaystyle \mathbb {S} ^{2}=\mathbb {P} ^{1}(\mathbb {C} )}, i.e. the Riemann sphere. Equivalently,
{\displaystyle {\widetilde {K}}^{n+2}(X)={\widetilde {K}}^{n}(X).}
{\displaystyle \Omega ^{2}BU\cong BU\times \mathbb {Z} .}
In real K-theory there is a similar periodicity, but modulo 8.
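The ring structure K(S²) = ℤ[H]/(H − 1)² can be manipulated explicitly: writing t = H − 1 for the reduced class, the relation t² = 0 truncates all products at first order. A minimal sketch (the class name KS2 is my own):

```python
# Arithmetic in K(S^2) = Z[H]/(H-1)^2.  Writing a class as a + b*(H-1),
# the relation (H-1)**2 = 0 makes multiplication truncate at first order.

class KS2:
    """Element a + b*(H - 1) of Z[H]/(H-1)^2."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __mul__(self, other):
        # (a + b t)(c + d t) = ac + (ad + bc) t, since t^2 = 0
        return KS2(self.a * other.a, self.a * other.b + self.b * other.a)
    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

H = KS2(1, 1)        # the tautological-bundle class H = 1 + t
one = KS2(1, 0)

# H is invertible with inverse 2 - H, reflecting (H - 1)^2 = 0:
H_inv = KS2(1, -1)   # 2 - H = 1 - t
assert H * H_inv == one
# H^2 = 2H - 1, the relation defining the ring:
assert H * H == KS2(1, 2)
```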
== Applications ==
Topological K-theory has been applied in John Frank Adams' proof of the "Hopf invariant one" problem via Adams operations. Adams also proved an upper bound for the number of linearly independent vector fields on spheres.
== Chern character ==
Michael Atiyah and Friedrich Hirzebruch proved a theorem relating the topological K-theory of a finite CW complex X with its rational cohomology. In particular, they showed that there exists a homomorphism
{\displaystyle ch:K_{\text{top}}^{*}(X)\otimes \mathbb {Q} \to H^{*}(X;\mathbb {Q} )}
such that
{\displaystyle {\begin{aligned}K_{\text{top}}^{0}(X)\otimes \mathbb {Q} &\cong \bigoplus _{k}H^{2k}(X;\mathbb {Q} )\\K_{\text{top}}^{1}(X)\otimes \mathbb {Q} &\cong \bigoplus _{k}H^{2k+1}(X;\mathbb {Q} )\end{aligned}}}
There is an algebraic analogue relating the Grothendieck group of coherent sheaves and the Chow ring of a smooth projective variety X.
== See also ==
Atiyah–Hirzebruch spectral sequence (computational tool for finding K-theory groups)
KR-theory
Atiyah–Singer index theorem
Snaith's theorem
Algebraic K-theory
== References ==
Atiyah, Michael Francis (1989). K-theory. Advanced Book Classics (2nd ed.). Addison-Wesley. ISBN 978-0-201-09394-0. MR 1043170.
Friedlander, Eric; Grayson, Daniel, eds. (2005). Handbook of K-Theory. Berlin, New York: Springer-Verlag. doi:10.1007/978-3-540-27855-9. ISBN 978-3-540-30436-4. MR 2182598.
Karoubi, Max (1978). K-theory: an introduction. Classics in Mathematics. Springer-Verlag. doi:10.1007/978-3-540-79890-3. ISBN 0-387-08090-2.
Karoubi, Max (2006). "K-theory. An elementary introduction". arXiv:math/0602082.
Hatcher, Allen (2003). "Vector Bundles & K-Theory".
Stykow, Maxim (2013). "Connections of K-Theory to Geometry and Topology". | Wikipedia/Topological_K-theory |
In string theory, K-theory classification refers to a conjectured application of K-theory (in abstract algebra and algebraic topology) to superstrings, to classify the allowed Ramond–Ramond field strengths as well as the charges of stable D-branes.
In condensed matter physics K-theory has also found important applications, specially in the topological classification of topological insulators, superconductors and stable Fermi surfaces (Kitaev (2009), Horava (2005)).
== History ==
This conjecture, applied to D-brane charges, was first proposed by Minasian & Moore (1997). It was popularized by Witten (1998), who demonstrated that in type IIB string theory it arises naturally from Ashoke Sen's realization of arbitrary D-brane configurations as stacks of D9 and anti-D9-branes after tachyon condensation.
Such stacks of branes are inconsistent in a non-torsion Neveu–Schwarz (NS) 3-form background, which, as was highlighted by Kapustin (2000), complicates the extension of the K-theory classification to such cases. Bouwknegt & Varghese (2000) suggested a solution to this problem: D-branes are in general classified by a twisted K-theory, that had earlier been defined by Rosenberg (1989).
== Applications ==
The K-theory classification of D-branes has had numerous applications. For example, Hanany & Kol (2000) used it to argue that there are eight species of orientifold one-plane. Uranga (2001) applied the K-theory classification to derive new consistency conditions for flux compactifications. K-theory has also been used to conjecture a formula for the topologies of T-dual manifolds by Bouwknegt, Evslin & Varghese (2004). Recently K-theory has been conjectured to classify the spinors in compactifications on generalized complex manifolds.
=== Open problems ===
Despite these successes, RR fluxes are not quite classified by K-theory. Diaconescu, Moore & Witten (2003) argued that the K-theory classification is incompatible with S-duality in IIB string theory.
In addition, if one attempts to classify fluxes on a compact ten-dimensional spacetime, then a complication arises due to the self-duality of the RR fluxes. The duality uses the Hodge star, which depends on the metric and so is continuously valued and in particular is generically irrational. Thus not all of the RR fluxes, which are interpreted as the Chern characters in K-theory, can be rational. However Chern characters are always rational, and so the K-theory classification must be replaced. One needs to choose a half of the fluxes to quantize, or a polarization in the geometric quantization-inspired language of Diaconescu, Moore, and Witten and later of Varghese & Sati (2004). Alternately one may use the K-theory of a 9-dimensional time slice as has been done by Maldacena, Moore & Seiberg (2001).
== K-theory classification of RR fluxes ==
In the classical limit of type II string theory, which is type II supergravity, the Ramond–Ramond field strengths are differential forms. In the quantum theory the well-definedness of the partition functions of D-branes implies that the RR field strengths obey Dirac quantization conditions when spacetime is compact, or when a spatial slice is compact and one considers only the (magnetic) components of the field strength which lie along the spatial directions. This led twentieth century physicists to classify RR field strengths using cohomology with integral coefficients.
However some authors have argued that the cohomology of spacetime with integral coefficients is too big. For example, in the presence of Neveu–Schwarz H-flux or non-spin cycles some RR fluxes dictate the presence of D-branes. In the former case this is a consequence of the supergravity equation of motion which states that the product of a RR flux with the NS 3-form is a D-brane charge density. Thus the set of topologically distinct RR field strengths that can exist in brane-free configurations is only a subset of the cohomology with integral coefficients.
This subset is still too big, because some of these classes are related by large gauge transformations. In QED there are large gauge transformations which add integral multiples of 2π to Wilson loops. The p-form potentials in type II supergravity theories also enjoy these large gauge transformations, but due to the presence of Chern–Simons terms in the supergravity actions these large gauge transformations transform not only the p-form potentials but also simultaneously the (p+3)-form field strengths. Thus, to obtain the space of inequivalent field strengths from the aforementioned subset of integral cohomology, we must quotient by these large gauge transformations.
The Atiyah–Hirzebruch spectral sequence constructs twisted K-theory, with a twist given by the NS 3-form field strength, as a quotient of a subset of the cohomology with integral coefficients. In the classical limit, which corresponds to working with rational coefficients, this is precisely the quotient of a subset described above in supergravity. The quantum corrections come from torsion classes and contain mod 2 torsion corrections due to the Freed-Witten anomaly.
Thus twisted K-theory classifies the subset of RR field strengths that can exist in the absence of D-branes quotiented by large gauge transformations. Daniel Freed has attempted to extend this classification to include also the RR potentials using differential K-theory.
== K-theory classification of D-branes ==
K-theory classifies D-branes in noncompact spacetimes, intuitively in spacetimes in which we are not concerned about the flux sourced by the brane having nowhere to go. While the K-theory of a 10d spacetime classifies D-branes as subsets of that spacetime, if the spacetime is the product of time and a fixed 9-manifold then K-theory also classifies the conserved D-brane charges on each 9-dimensional spatial slice. While we were required to forget about RR potentials to obtain the K-theory classification of RR field strengths, we are required to forget about RR field strengths to obtain the K-theory classification of D-branes.
=== K-theory charge versus BPS charge ===
As has been stressed by Petr Hořava, the K-theory classification of D-branes is independent of, and in some ways stronger than, the classification of BPS states. K-theory appears to classify stable D-branes missed by supersymmetry based classifications.
For example, D-branes with torsion charges, that is, with charges in the order-N cyclic group {\displaystyle \mathbf {Z} _{N}}
, attract each other and so can never be BPS. In fact, N such branes can decay, whereas no superposition of branes that satisfy a Bogomolny bound may ever decay. However the charge of such branes is conserved modulo N, and this is captured by the K-theory classification but not by a BPS classification. Such torsion branes have been applied, for example, to model Douglas-Shenker strings in supersymmetric U(N) gauge theories.
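The arithmetic of torsion charges is simply modular; a toy illustration (N and the helper charge are my own names):

```python
# Torsion D-brane charges live in Z/N: a single brane carries charge 1,
# and N coincident branes carry total charge N = 0 (mod N), so they can
# decay, while the charge of any configuration is conserved modulo N.

N = 5

def charge(n_branes):
    """K-theory torsion charge of a stack of n_branes branes, valued in Z/N."""
    return n_branes % N

assert charge(1) != 0              # one brane is stable: nonzero charge
assert charge(N) == 0              # N branes carry trivial charge and may decay
assert charge(7) == charge(7 + N)  # charge is conserved modulo N
```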
=== K-theory from tachyon condensation ===
Ashoke Sen has conjectured that, in the absence of a topologically nontrivial NS 3-form flux, all IIB brane configurations can be obtained from stacks of spacefilling D9 and anti D9 branes via tachyon condensation. The topology of the resulting branes is encoded in the topology of the gauge bundle on the stack of the spacefilling branes. The topology of the gauge bundle of a stack of D9s and anti D9s can be decomposed into a gauge bundle on the D9's and another bundle on the anti D9's. Tachyon condensation transforms such a pair of bundles to another pair in which the same bundle is direct summed with each component in the pair. Thus the tachyon condensation invariant quantity, that is, the charge which is conserved by the tachyon condensation process, is not a pair of bundles but rather the equivalence class of a pair of bundles under direct sums of the same bundle on both sides of the pair. This is precisely the usual construction of topological K-theory. Thus the gauge bundles on stacks of D9's and anti-D9's are classified by topological K-theory. If Sen's conjecture is right, all D-brane configurations in type IIB are then classified by K-theory. Petr Horava has extended this conjecture to type IIA using D8-branes.
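Over a point, where a bundle is determined by its rank, the equivalence of (D9, anti-D9) bundle pairs under direct-summing the same bundle onto both sides reduces to the Grothendieck construction; a toy sketch (all names are my own illustration):

```python
# Tachyon condensation invariant of a (D9, anti-D9) pair of bundles.
# Over a point a bundle is just its rank; the pair (E, F) is equivalent
# to (E + G, F + G) for any G, so the conserved invariant is the formal
# difference rank(E) - rank(F): the usual Grothendieck-group construction.

def condense(pair, g):
    """Direct-sum the same bundle (of rank g) onto both sides of the pair."""
    e, f = pair
    return (e + g, f + g)

def k_charge(pair):
    """The tachyon-condensation-invariant K-theory charge of the pair."""
    e, f = pair
    return e - f

stack = (9, 7)  # 9 D9-branes, 7 anti-D9-branes
assert k_charge(condense(stack, 4)) == k_charge(stack) == 2
```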
=== Twisted K-theory from MMS instantons ===
While the tachyon condensation picture of the K-theory classification classifies D-branes as subsets of a 10-dimensional spacetime with no NS 3-form flux, the Maldacena, Moore, Seiberg picture classifies stable D-branes with finite mass as subsets of a 9-dimensional spatial slice of spacetime.
The central observation is that D-branes are not classified by integral homology because Dp-branes wrapping certain cycles suffer from a Freed-Witten anomaly, which is cancelled by the insertion of D(p-2)-branes and sometimes D(p-4)-branes that end on the afflicted Dp-brane. These inserted branes may either continue to infinity, in which case the composite object has an infinite mass, or else they may end on an anti-Dp-brane, in which case the total Dp-brane charge is zero. In either case, one may wish to remove the anomalous Dp-branes from the spectrum, leaving only a subset of the original integral cohomology.
The inserted branes are unstable. To see this, imagine that they extend in time away (into the past) from the anomalous brane. This corresponds to a process in which the inserted branes decay via a Dp-brane that forms, wraps the aforementioned cycle and then disappears. MMS refer to this process as an instanton, although really it need not be instantonic.
The conserved charges are thus the non-anomalous subset quotiented by the unstable insertions. This is precisely the Atiyah-Hirzebruch spectral sequence construction of twisted K-theory as a set.
== Reconciling twisted K-theory and S-duality ==
Diaconescu, Moore, and Witten have pointed out that the twisted K-theory classification is not compatible with the S-duality covariance of type IIB string theory. For example, consider the constraint on the Ramond–Ramond 3-form field strength G3 in the Atiyah-Hirzebruch spectral sequence (AHSS):
{\displaystyle d_{3}G_{3}=Sq^{3}G_{3}+H\cup G_{3}=G_{3}\cup G_{3}+H\cup G_{3}=0}
where d3 = Sq3 + H is the first nontrivial differential in the AHSS, Sq3 is the third Steenrod square, and the last equality follows from the fact that the nth Steenrod square acting on any n-form x is x ∪ x.
The above equation is not invariant under S-duality, which exchanges G3 and H. Instead Diaconescu, Moore, and Witten have proposed the following S-duality covariant extension
{\displaystyle G_{3}\cup G_{3}+H\cup G_{3}+H\cup H=P}
where P is an unknown characteristic class that depends only on the topology, and in particular not on the fluxes. Diaconescu, Freed & Moore (2007) have found a constraint on P using the E8 gauge theory approach to M-theory pioneered by Diaconescu, Moore, and Witten.
Thus D-branes in IIB are not classified by twisted K-theory after all, but some unknown S-duality-covariant object that inevitably also classifies both fundamental strings and NS5-branes.
However the MMS prescription for calculating twisted K-theory is easily S-covariantized, as the Freed-Witten anomalies respect S-duality. Thus the S-covariantized form of the MMS construction may be applied to construct the S-covariantized twisted K-theory, as a set, without having any geometric description of just what this strange covariant object is. This program has been carried out in a number of papers, such as Evslin & Varadarajan (2003) and Evslin (2003a), and was also applied to the classification of fluxes by Evslin (2003b). Bouwknegt et al. (2006) use this approach to prove Diaconescu, Moore, and Witten's conjectured constraint on the 3-fluxes, and they show that there is an additional term equal to the D3-brane charge. Evslin (2006) shows that the Klebanov-Strassler cascade of Seiberg dualities consists of a series of S-dual MMS instantons, one for each Seiberg duality. The group {\displaystyle \mathbf {Z} _{N}} of universality classes of the {\displaystyle SU(M+N)\times SU(M)} supersymmetric gauge theory is then shown to agree with the S-dual twisted K-theory and not with the original twisted K-theory.
Some authors have proposed radically different solutions to this puzzle. For example, Kriz & Sati (2005) propose that instead of twisted K-theory, II string theory configurations should be classified by elliptic cohomology.
== Researchers ==
Prominent researchers in this area include Edward Witten, Peter Bouwknegt, Angel Uranga, Emanuel Diaconescu, Gregory Moore, Anton Kapustin, Jonathan Rosenberg, Ruben Minasian, Amihay Hanany, Hisham Sati, Nathan Seiberg, Juan Maldacena, Alexei Kitaev, Daniel Freed, and Igor Kriz.
== See also ==
Kalb–Ramond field
== Notes ==
== References ==
Bouwknegt, Peter; Evslin, Jarah; Jurco, Branislav; Varghese, Mathai; Sati, Hisham (2006), "Flux Compactifications on Projective Spaces and The S-Duality Puzzle", Advances in Theoretical and Mathematical Physics, 10 (3): 345–394, arXiv:hep-th/0501110, Bibcode:2005hep.th....1110B, doi:10.4310/atmp.2006.v10.n3.a3, S2CID 15571867.
Bouwknegt, Peter; Evslin, Jarah; Varghese, Mathai (2004), "T-Duality: Topology Change from H-flux", Communications in Mathematical Physics, 249 (2): 383–415, arXiv:hep-th/0306062, Bibcode:2004CMaPh.249..383B, doi:10.1007/s00220-004-1115-6, S2CID 6041460.
Bouwknegt, Peter; Varghese, Mathai (2000), "D-branes, B-fields and twisted K-theory", Journal of High Energy Physics, 0003 (7): 007, arXiv:hep-th/0002023, Bibcode:2000JHEP...03..007B, doi:10.1088/1126-6708/2000/03/007, S2CID 12897181.
Diaconescu, Emanuel; Freed, Daniel S.; Moore, Gregory (2007), "The M-theory 3-form and E8 gauge theory", in Miller, Haynes R.; Ravenel, Douglas C. (eds.), Elliptic Cohomology: Geometry, Applications, and Higher Chromatic Analogues, Cambridge University Press, pp. 44–88, arXiv:hep-th/0312069, Bibcode:2003hep.th...12069D.
Diaconescu, Emanuel; Moore, Gregory; Witten, Edward (2003), "E8 Gauge Theory, and a Derivation of K-Theory from M-Theory", Advances in Theoretical and Mathematical Physics, 6 (6): 1031–1134, arXiv:hep-th/0005090, Bibcode:2000hep.th....5090D, doi:10.4310/ATMP.2002.v6.n6.a2, S2CID 11647083.
Evslin, Jarah (2003a), "IIB Soliton Spectra with All Fluxes Activated", Nuclear Physics B, 657: 139–168, arXiv:hep-th/0211172, Bibcode:2003NuPhB.657..139E, doi:10.1016/S0550-3213(03)00154-8, S2CID 119350721.
Evslin, Jarah (2003b), "Twisted K-Theory from Monodromies", Journal of High Energy Physics, 0305 (30): 030, arXiv:hep-th/0302081, Bibcode:2003JHEP...05..030E, doi:10.1088/1126-6708/2003/05/030, S2CID 14606015.
Evslin, Jarah (2006), "The Cascade is a MMS Instanton", Advances in Soliton Research, Nova Science Publishers, pp. 153–187, arXiv:hep-th/0405210, Bibcode:2004hep.th....5210E.
Evslin, Jarah; Varadarajan, Uday (2003), "K-Theory and S-Duality: Starting Over from Square 3", Journal of High Energy Physics, 0303 (26): 026, arXiv:hep-th/0112084, Bibcode:2003JHEP...03..026E, doi:10.1088/1126-6708/2003/03/026, S2CID 2809191.
Hanany, Amihay; Kol, Barak (2000), "On Orientifolds, Discrete Torsion, Branes and M Theory", Journal of High Energy Physics, 0006 (13): 013, arXiv:hep-th/0003025, Bibcode:2000JHEP...06..013H, doi:10.1088/1126-6708/2000/06/013, S2CID 11424297.
Kapustin, Anton (2000), "D-branes in a topologically nontrivial B-field", Advances in Theoretical and Mathematical Physics, 4: 127–154, arXiv:hep-th/9909089, Bibcode:1999hep.th....9089K, doi:10.4310/ATMP.2000.v4.n1.a3, S2CID 853130.
Kriz, Igor; Sati, Hisham (2005), "Type IIB String Theory, S-Duality, and Generalized Cohomology", Nuclear Physics B, 715 (3): 639–664, arXiv:hep-th/0410293, Bibcode:2005NuPhB.715..639K, doi:10.1016/j.nuclphysb.2005.02.016, S2CID 16552348.
Maldacena, Juan; Moore, Gregory; Seiberg, Nathan (2001), "D-Brane Instantons and K-Theory Charges", Journal of High Energy Physics, 0111 (62): 062, arXiv:hep-th/0108100, Bibcode:2001JHEP...11..062M, doi:10.1088/1126-6708/2001/11/062, S2CID 15132458.
Minasian, Ruben; Moore, Gregory (1997), "K-theory and Ramond-Ramond charge", Journal of High Energy Physics, 9711 (2): 002, arXiv:hep-th/9710230, Bibcode:1997JHEP...11..002M, doi:10.1088/1126-6708/1997/11/002, S2CID 3095614.
Olsen, Kasper; Szabo, Richard J. (1999), "Constructing D-Branes from K-Theory", Advances in Theoretical and Mathematical Physics, 3 (4): 889–1025, arXiv:hep-th/9907140, Bibcode:1999hep.th....7140O, doi:10.4310/ATMP.1999.v3.n4.a5, S2CID 117445831.
Rosenberg, Jonathan (1989), "Continuous-Trace Algebras from the Bundle Theoretic Point of View", Journal of the Australian Mathematical Society, Series A, 47 (3): 368–381, doi:10.1017/S1446788700033097.
Uranga, Angel M. (2001), "D-brane probes, RR tadpole cancellation and K-theory charge", Nuclear Physics B, 598 (1–2): 225–246, arXiv:hep-th/0011048, Bibcode:2001NuPhB.598..225U, doi:10.1016/S0550-3213(00)00787-2, S2CID 15021358.
Varghese, Mathai; Sati, Hisham (2004), "Some Relations between Twisted K-theory and E8 Gauge Theory", Journal of High Energy Physics, 0403 (16): 016, arXiv:hep-th/0312033, Bibcode:2004JHEP...03..016M, doi:10.1088/1126-6708/2004/03/016, S2CID 119380196.
Witten, Edward (1998), "D-Branes and K-Theory", Journal of High Energy Physics, 9812 (19): 019, arXiv:hep-th/9810188, Bibcode:1998JHEP...12..019W, doi:10.1088/1126-6708/1998/12/019, S2CID 14970516.
== References (condensed matter physics) ==
Kitaev, Alexei (2009), "Periodic table for topological insulators and superconductors", AIP Conference Proceedings, 1134 (1): 22–30, arXiv:0901.2686, Bibcode:2009AIPC.1134...22K, doi:10.1063/1.3149495, S2CID 14320124.
Horava, Petr (2005), "Stability of Fermi Surfaces and K Theory", Physical Review Letters, 95 (16405): 016405, arXiv:hep-th/0503006, Bibcode:2005PhRvL..95a6405H, doi:10.1103/physrevlett.95.016405, PMID 16090638, S2CID 15197829.
Roy, Rahul; Fenner Harper (2017), "Periodic Table for Floquet Topological Insulators", Physical Review B, 96 (15): 155118, arXiv:1603.06944, Bibcode:2017PhRvB..96o5118R, doi:10.1103/PhysRevB.96.155118, S2CID 119270701.
== Further reading ==
An excellent introduction to the K-theory classification of D-branes in 10 dimensions via Ashoke Sen's conjecture is the original paper "D-branes and K-theory" by Edward Witten; there is also an extensive review by Olsen & Szabo (1999).
A very comprehensible introduction to the twisted K-theory classification of conserved D-brane charges on a 9-dimensional timeslice in the presence of Neveu–Schwarz flux is Maldacena, Moore & Seiberg (2001).
== External links ==
K-theory on arxiv.org
In mathematics, twisted K-theory (also called K-theory with local coefficients) is a variation on K-theory, a mathematical theory from the 1950s that spans algebraic topology, abstract algebra and operator theory.
More specifically, twisted K-theory with twist H is a particular variant of K-theory, in which the twist is given by an integral 3-dimensional cohomology class. It is special among the various twists that K-theory admits for two reasons. First, it admits a geometric formulation. This was provided in two steps; the first one was done in 1970 (Publ. Math. de l'IHÉS) by Peter Donovan and Max Karoubi; the second one in 1988 by Jonathan Rosenberg in Continuous-Trace Algebras from the Bundle Theoretic Point of View.
In physics, it has been conjectured to classify D-branes, Ramond-Ramond field strengths and in some cases even spinors in type II string theory. For more information on twisted K-theory in string theory, see K-theory (physics).
In the broader context of K-theory, in each subject it has numerous isomorphic formulations and, in many cases, isomorphisms relating definitions in various subjects have been proven. It also has numerous deformations, for example, in abstract algebra K-theory may be twisted by any integral cohomology class.
== Definition ==
To motivate Rosenberg's geometric formulation of twisted K-theory, start from the Atiyah–Jänich theorem, stating that $Fred(\mathcal{H})$, the space of Fredholm operators on a Hilbert space $\mathcal{H}$, is a classifying space for ordinary, untwisted K-theory. This means that the K-theory of a space $M$ consists of the homotopy classes of maps $[M\rightarrow Fred(\mathcal{H})]$ from $M$ to $Fred(\mathcal{H})$.
A slightly more complicated way of saying the same thing is as follows. Consider the trivial bundle of $Fred(\mathcal{H})$ over $M$, that is, the Cartesian product of $M$ and $Fred(\mathcal{H})$. Then the K-theory of $M$ consists of the homotopy classes of sections of this bundle.
We can make this yet more complicated by introducing a trivial $PU(\mathcal{H})$ bundle $P$ over $M$, where $PU(\mathcal{H})$ is the group of projective unitary operators on the Hilbert space $\mathcal{H}$. Then the group of maps $[P\rightarrow Fred(\mathcal{H})]_{PU(\mathcal{H})}$ from $P$ to $Fred(\mathcal{H})$ which are equivariant under the action of $PU(\mathcal{H})$ is equivalent to the original group of maps $[M\rightarrow Fred(\mathcal{H})]$.
This more complicated construction of ordinary K-theory is naturally generalized to the twisted case. To see this, note that $PU(\mathcal{H})$ bundles on $M$ are classified by elements $H$ of the third integral cohomology group of $M$. This is a consequence of the fact that $PU(\mathcal{H})$ is topologically an Eilenberg–MacLane space $K(\mathbf{Z},2)$ (its unitary group $U(\mathcal{H})$ is contractible by Kuiper's theorem), so its classifying space $BPU(\mathcal{H})$ is a $K(\mathbf{Z},3)$, and bundles are classified by $[M,K(\mathbf{Z},3)]=H^{3}(M;\mathbf{Z})$.
The generalization is then straightforward. Rosenberg has defined $K_{H}(M)$, the twisted K-theory of $M$ with twist given by the 3-class $H$, to be the space of homotopy classes of sections of the trivial $Fred(\mathcal{H})$ bundle over $M$ that are covariant with respect to a $PU(\mathcal{H})$ bundle $P_{H}$ fibered over $M$ with 3-class $H$, that is

$$K_{H}(M)=[P_{H}\rightarrow Fred(\mathcal{H})]_{PU(\mathcal{H})}.$$

Equivalently, it is the space of homotopy classes of sections of the $Fred(\mathcal{H})$ bundle associated to a $PU(\mathcal{H})$ bundle with class $H$.
== Relation to K-theory ==
When $H$ is the trivial class, twisted K-theory is just untwisted K-theory, which is a ring. However, when $H$ is nontrivial this theory is no longer a ring. It has an addition, but it is no longer closed under multiplication.

However, the direct sum of the twisted K-theories of $M$ over all possible twists is a ring. In particular, the product of an element of K-theory with twist $H$ and an element of K-theory with twist $H'$ is an element of K-theory twisted by $H+H'$. This element can be constructed directly from the above definition by using adjoints of Fredholm operators and constructing a specific $2\times 2$ matrix out of them (see reference 1, where a more natural and general $\mathbf{Z}/2$-graded version is also presented). In particular, twisted K-theory is a module over classical K-theory.
== Calculations ==
Physicists typically want to calculate twisted K-theory using the Atiyah–Hirzebruch spectral sequence. The idea is that one begins with all of the even or all of the odd integral cohomology, depending on whether one wishes to calculate the twisted $K^{0}$ or the twisted $K^{1}$, and then one takes the cohomology with respect to a series of differential operators. The first operator, $d_{3}$, for example, is the sum of the three-class $H$, which in string theory corresponds to the Neveu–Schwarz 3-form, and the third Steenrod square, so

$$d_{3}^{p,q}=Sq^{3}+H.$$

No elementary form for the next operator, $d_{5}$, has been found, although several conjectured forms exist. Higher operators do not contribute to the $K$-theory of a 10-manifold, which is the dimension of interest in critical superstring theory. Over the rationals, Michael Atiyah and Graeme Segal have shown that all of the differentials reduce to Massey products of $H$.

After taking the cohomology with respect to the full series of differentials one obtains twisted $K$-theory as a set, but to obtain the full group structure one in general needs to solve an extension problem.
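In spectral-sequence language, the starting page of this computation is the standard one (a sketch; $\mathrm{pt}$ denotes a point, and the twist enters only through the differentials):

```latex
E_{2}^{p,q} \;=\; H^{p}\!\left(M;\,K^{q}(\mathrm{pt})\right)
\;\Longrightarrow\; K_{H}^{p+q}(M),
\qquad
K^{q}(\mathrm{pt}) \;\cong\;
\begin{cases}
\mathbf{Z}, & q \text{ even},\\
0, & q \text{ odd},
\end{cases}
```

so the even (respectively odd) columns assemble the even (odd) integral cohomology, and the first possibly nonzero differential is $d_{3}=Sq^{3}+H$.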
=== Example: the three-sphere ===
The three-sphere $S^{3}$ has trivial cohomology except for $H^{0}(S^{3})$ and $H^{3}(S^{3})$, which are both isomorphic to the integers. Thus the even and odd cohomologies are both isomorphic to the integers. Because the three-sphere has dimension three, which is less than five, the third Steenrod square is trivial on its cohomology, and so the first nontrivial differential is just $d_{3}=H$. The later differentials increase the degree of a cohomology class by more than three and so are again trivial; thus the twisted $K$-theory is just the cohomology of the operator $d_{3}$, which acts on a class by cupping it with the 3-class $H$.
Imagine that $H$ is the trivial class, zero. Then $d_{3}$ is also trivial. Thus its entire domain is its kernel, and nothing is in its image. Thus $K_{H}^{0}(S^{3})$ is the kernel of $d_{3}$ in the even cohomology, which is the full even cohomology, which consists of the integers. Similarly $K_{H}^{1}(S^{3})$ consists of the odd cohomology quotiented by the image of $d_{3}$, in other words quotiented by the trivial group. This leaves the original odd cohomology, which is again the integers. In conclusion, $K^{0}$ and $K^{1}$ of the three-sphere with trivial twist are both isomorphic to the integers. As expected, this agrees with the untwisted $K$-theory.
Now consider the case in which $H$ is nontrivial. $H$ is defined to be an element of the third integral cohomology, which is isomorphic to the integers. Thus $H$ corresponds to a number, which we will call $n$. The differential $d_{3}$ now takes an element $m$ of $H^{0}$ and yields the element $nm$ of $H^{3}$. As $n$ is not equal to zero by assumption, the only element of the kernel of $d_{3}$ is the zero element, and so $K_{H=n}^{0}(S^{3})=0$. The image of $d_{3}$ consists of all elements of the integers that are multiples of $n$. Therefore, the odd cohomology, $\mathbb{Z}$, quotiented by the image of $d_{3}$, $n\mathbb{Z}$, is the cyclic group of order $n$, $\mathbb{Z}/n$. In conclusion

$$K_{H=n}^{1}(S^{3})=\mathbb{Z}/n.$$
In string theory this result reproduces the classification of D-branes on the 3-sphere with $n$ units of $H$-flux, which corresponds to the set of symmetric boundary conditions in the supersymmetric $SU(2)$ WZW model at level $n-2$.
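The cohomology of $d_{3}$ in this example reduces to linear algebra over the integers, so it can be checked mechanically. The following is a minimal sketch (the function name `twisted_k_s3` is ours, not standard) computing the kernel and cokernel of multiplication by $n$ on $\mathbb{Z}$:

```python
def twisted_k_s3(n):
    """Twisted K-theory of S^3 with twist H = n, via d_3 = multiplication by n.

    K^0 = ker(d_3 : H^0 -> H^3) and K^1 = coker(d_3) = Z/n (for n != 0).
    Returns (rank of K^0, order of the torsion part of K^1).
    """
    if n == 0:
        # d_3 = 0: kernel is all of Z (rank 1), cokernel is all of Z (rank 1).
        return (1, 1)
    # ker(m -> n*m) on Z is {0}, so K^0 = 0 (rank 0).
    k0_rank = 0
    # coker: Z / nZ has |n| elements; count the distinct residues mod |n|.
    k1_order = len({m % abs(n) for m in range(abs(n))})
    return (k0_rank, k1_order)

# With n units of H-flux the D-brane charge group on S^3 is Z/n:
print(twisted_k_s3(5))  # -> (0, 5)
print(twisted_k_s3(0))  # -> (1, 1), the untwisted answer Z and Z
```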
There is an extension of this calculation to the group manifold of SU(3). In this case the Steenrod square term in $d_{3}$, the operator $d_{5}$, and the extension problem are nontrivial.
== See also ==
K-theory (physics)
Wess–Zumino–Witten model
Bundle gerbe
== Notes ==
== References ==
"Graded Brauer groups and K-theory with local coefficients", by Peter Donovan and Max Karoubi. Publ. Math. IHÉS Nr. 38, pp. 5–25 (1970).
D-Brane Instantons and K-Theory Charges by Juan Maldacena, Gregory Moore and Nathan Seiberg
Twisted K-theory and Cohomology by Michael Atiyah and Graeme Segal
Twisted K-theory and the K-theory of Bundle Gerbes by Peter Bouwknegt, Alan Carey, Varghese Mathai, Michael Murray and Danny Stevenson.
Twisted K-theory, old and new
== External links ==
Strings 2002, Michael Atiyah lecture, "Twisted K-theory and physics"
The Verlinde algebra is twisted equivariant K-theory (PDF)
Riemann–Roch and index formulae in twisted K-theory (PDF)
In mathematics, especially in algebraic geometry and the theory of complex manifolds, coherent sheaves are a class of sheaves closely linked to the geometric properties of the underlying space. The definition of coherent sheaves is made with reference to a sheaf of rings that codifies this geometric information.
Coherent sheaves can be seen as a generalization of vector bundles. Unlike vector bundles, they form an abelian category, and so they are closed under operations such as taking kernels, images, and cokernels. The quasi-coherent sheaves are a generalization of coherent sheaves and include the locally free sheaves of infinite rank.
Coherent sheaf cohomology is a powerful technique, in particular for studying the sections of a given coherent sheaf.
== Definitions ==
A quasi-coherent sheaf on a ringed space $(X,\mathcal{O}_{X})$ is a sheaf $\mathcal{F}$ of $\mathcal{O}_{X}$-modules that has a local presentation, that is, every point in $X$ has an open neighborhood $U$ on which there is an exact sequence

$$\mathcal{O}_{X}^{\oplus I}|_{U}\to \mathcal{O}_{X}^{\oplus J}|_{U}\to \mathcal{F}|_{U}\to 0$$

for some (possibly infinite) sets $I$ and $J$.
A coherent sheaf on a ringed space $(X,\mathcal{O}_{X})$ is a sheaf $\mathcal{F}$ of $\mathcal{O}_{X}$-modules satisfying the following two properties:

$\mathcal{F}$ is of finite type over $\mathcal{O}_{X}$, that is, every point in $X$ has an open neighborhood $U$ in $X$ such that there is a surjective morphism $\mathcal{O}_{X}^{n}|_{U}\to \mathcal{F}|_{U}$ for some natural number $n$;

for any open set $U\subseteq X$, any natural number $n$, and any morphism $\varphi :\mathcal{O}_{X}^{n}|_{U}\to \mathcal{F}|_{U}$ of $\mathcal{O}_{X}$-modules, the kernel of $\varphi$ is of finite type.

Morphisms between (quasi-)coherent sheaves are the same as morphisms of sheaves of $\mathcal{O}_{X}$-modules.
=== The case of schemes ===
When $X$ is a scheme, the general definitions above are equivalent to more explicit ones. A sheaf $\mathcal{F}$ of $\mathcal{O}_{X}$-modules is quasi-coherent if and only if over each open affine subscheme $U=\operatorname{Spec} A$ the restriction $\mathcal{F}|_{U}$ is isomorphic to the sheaf $\tilde{M}$ associated to the module $M=\Gamma (U,\mathcal{F})$ over $A$. When $X$ is a locally Noetherian scheme, $\mathcal{F}$ is coherent if and only if it is quasi-coherent and the modules $M$ above can be taken to be finitely generated.

On an affine scheme $U=\operatorname{Spec} A$, there is an equivalence of categories from $A$-modules to quasi-coherent sheaves, taking a module $M$ to the associated sheaf $\tilde{M}$. The inverse equivalence takes a quasi-coherent sheaf $\mathcal{F}$ on $U$ to the $A$-module $\mathcal{F}(U)$ of global sections of $\mathcal{F}$.
Here are several further characterizations of quasi-coherent sheaves on a scheme.
== Properties ==
On an arbitrary ringed space, quasi-coherent sheaves do not necessarily form an abelian category. On the other hand, the quasi-coherent sheaves on any scheme form an abelian category, and they are extremely useful in that context.
On any ringed space $X$, the coherent sheaves form an abelian category, a full subcategory of the category of $\mathcal{O}_{X}$-modules. (Analogously, the category of coherent modules over any ring $A$ is a full abelian subcategory of the category of all $A$-modules.) So the kernel, image, and cokernel of any map of coherent sheaves are coherent. The direct sum of two coherent sheaves is coherent; more generally, an $\mathcal{O}_{X}$-module that is an extension of two coherent sheaves is coherent.
A submodule of a coherent sheaf is coherent if it is of finite type. A coherent sheaf is always an $\mathcal{O}_{X}$-module of finite presentation, meaning that each point $x$ in $X$ has an open neighborhood $U$ such that the restriction $\mathcal{F}|_{U}$ of $\mathcal{F}$ to $U$ is isomorphic to the cokernel of a morphism $\mathcal{O}_{X}^{n}|_{U}\to \mathcal{O}_{X}^{m}|_{U}$ for some natural numbers $n$ and $m$. If $\mathcal{O}_{X}$ is coherent, then, conversely, every sheaf of finite presentation over $\mathcal{O}_{X}$ is coherent.
The sheaf of rings $\mathcal{O}_{X}$ is called coherent if it is coherent considered as a sheaf of modules over itself. In particular, the Oka coherence theorem states that the sheaf of holomorphic functions on a complex analytic space $X$ is a coherent sheaf of rings. The main part of the proof is the case $X=\mathbf{C}^{n}$. Likewise, on a locally Noetherian scheme $X$, the structure sheaf $\mathcal{O}_{X}$ is a coherent sheaf of rings.
== Basic constructions of coherent sheaves ==
An $\mathcal{O}_{X}$-module $\mathcal{F}$ on a ringed space $X$ is called locally free of finite rank, or a vector bundle, if every point in $X$ has an open neighborhood $U$ such that the restriction $\mathcal{F}|_{U}$ is isomorphic to a finite direct sum of copies of $\mathcal{O}_{X}|_{U}$. If $\mathcal{F}$ is free of the same rank $n$ near every point of $X$, then the vector bundle $\mathcal{F}$ is said to be of rank $n$.
Vector bundles in this sheaf-theoretic sense over a scheme $X$ are equivalent to vector bundles defined in a more geometric way, as a scheme $E$ with a morphism $\pi :E\to X$ and with a covering of $X$ by open sets $U_{\alpha}$ with given isomorphisms $\pi^{-1}(U_{\alpha})\cong \mathbb{A}^{n}\times U_{\alpha}$ over $U_{\alpha}$ such that the two isomorphisms over an intersection $U_{\alpha}\cap U_{\beta}$ differ by a linear automorphism. (The analogous equivalence also holds for complex analytic spaces.) For example, given a vector bundle $E$ in this geometric sense, the corresponding sheaf $\mathcal{F}$ is defined by: over an open set $U$ of $X$, the $\mathcal{O}(U)$-module $\mathcal{F}(U)$ is the set of sections of the morphism $\pi^{-1}(U)\to U$. The sheaf-theoretic interpretation of vector bundles has the advantage that vector bundles (on a locally Noetherian scheme) are included in the abelian category of coherent sheaves.
Locally free sheaves come equipped with the standard $\mathcal{O}_{X}$-module operations, but these give back locally free sheaves.
Let $X=\operatorname{Spec}(R)$, with $R$ a Noetherian ring. Then vector bundles on $X$ are exactly the sheaves associated to finitely generated projective modules over $R$, or (equivalently) to finitely generated flat modules over $R$.
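As an illustration of the projective-module description, here is a sketch in Python with SymPy: a tangent-style module over the coordinate ring of the 2-sphere is cut out of a free module of rank 3 by the idempotent matrix $e=I-vv^{T}$, where $v=(x,y,z)^{T}$, working modulo the relation $x^{2}+y^{2}+z^{2}=1$ (a standard construction; the variable names here are ours):

```python
from sympy import symbols, eye, Matrix, expand, rem

x, y, z = symbols('x y z')
s = x**2 + y**2 + z**2 - 1          # relation defining the coordinate ring of S^2

v = Matrix([x, y, z])
e = eye(3) - v * v.T                 # candidate idempotent: e = I - v v^T

# e*e - e should vanish in R = k[x,y,z]/(s): every entry is divisible by s.
diff = (e * e - e).applyfunc(expand)
remainders = [rem(diff[i, j], s, x, y, z) for i in range(3) for j in range(3)]
print(all(r == 0 for r in remainders))  # -> True

# Hence e is idempotent over R, so its image in the free module R^3 is a
# finitely generated projective module, i.e. a vector bundle on Spec(R).
```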
Let $X=\operatorname{Proj}(R)$, with $R$ a Noetherian $\mathbb{N}$-graded ring, be a projective scheme over a Noetherian ring $R_{0}$. Then each $\mathbb{Z}$-graded $R$-module $M$ determines a quasi-coherent sheaf $\mathcal{F}$ on $X$ such that $\mathcal{F}|_{\{f\neq 0\}}$ is the sheaf associated to the $R[f^{-1}]_{0}$-module $M[f^{-1}]_{0}$, where $f$ is a homogeneous element of $R$ of positive degree and $\{f\neq 0\}=\operatorname{Spec} R[f^{-1}]_{0}$ is the locus where $f$ does not vanish.
For example, for each integer $n$, let $R(n)$ denote the graded $R$-module given by $R(n)_{l}=R_{n+l}$. Then each $R(n)$ determines the quasi-coherent sheaf $\mathcal{O}_{X}(n)$ on $X$. If $R$ is generated as an $R_{0}$-algebra by $R_{1}$, then $\mathcal{O}_{X}(n)$ is a line bundle (invertible sheaf) on $X$, and $\mathcal{O}_{X}(n)$ is the $n$-th tensor power of $\mathcal{O}_{X}(1)$. In particular, $\mathcal{O}_{\mathbb{P}^{n}}(-1)$ is called the tautological line bundle on the projective $n$-space.
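To make the twisting sheaves concrete: for $R=k[x_{0},\ldots,x_{N}]$ with the standard grading, the global sections of $\mathcal{O}_{\mathbb{P}^{N}}(d)$ for $d\geq 0$ are the homogeneous polynomials of degree $d$, whose dimension is the binomial coefficient $\binom{N+d}{N}$. A small sketch of the count (the function names are ours):

```python
from math import comb

def dim_graded_piece(N, d):
    """Dimension of the degree-d part of k[x_0, ..., x_N]: the number of
    monomials x_0^a_0 * ... * x_N^a_N with a_0 + ... + a_N = d."""
    return comb(N + d, N) if d >= 0 else 0

def dim_sections_O(N, d):
    """dim H^0(P^N, O(d)): homogeneous polynomials of degree d (0 if d < 0)."""
    return dim_graded_piece(N, d)

# On P^2: O(1) has the 3 sections x, y, z; O(3) has the 10 cubic monomials' span.
print(dim_sections_O(2, 1))   # -> 3
print(dim_sections_O(2, 3))   # -> 10
print(dim_sections_O(2, -1))  # -> 0 (the tautological bundle has no global sections)
```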
A simple example of a coherent sheaf on $\mathbb{P}^{2}$ that is not a vector bundle is given by the cokernel in the following sequence:

$$\mathcal{O}(1)\xrightarrow{\;\cdot\,(x^{2}-yz,\;y^{3}+xy^{2}-xyz)\;}\mathcal{O}(3)\oplus \mathcal{O}(4)\to \mathcal{E}\to 0;$$

this is because $\mathcal{E}$ restricted to the vanishing locus of the two polynomials has two-dimensional fibers, and has one-dimensional fibers elsewhere.
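The fiber-dimension jump can be checked pointwise: the fiber of $\mathcal{E}$ at a point $p$ is the cokernel of the evaluated map $k\to k^{2}$, $c\mapsto (f(p)c,\,g(p)c)$, so it has dimension 2 where both polynomials vanish and dimension 1 elsewhere. A minimal sketch (evaluating on explicit representative points, chosen by us for illustration):

```python
def fiber_dim(p):
    """Dimension of the fiber of E at p = (x, y, z): 2 minus the rank of the
    evaluated 2x1 matrix (f(p), g(p))."""
    x, y, z = p
    f = x**2 - y*z
    g = y**3 + x*y**2 - x*y*z
    rank = 0 if (f == 0 and g == 0) else 1  # a 2x1 matrix has rank 0 or 1
    return 2 - rank

# Both polynomials vanish at [0 : 0 : 1], so the fiber jumps to dimension 2 there:
print(fiber_dim((0, 0, 1)))  # -> 2
# At a point off the vanishing locus the fiber is 1-dimensional, so E cannot
# be locally free:
print(fiber_dim((1, 1, 0)))  # -> 1
```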
Ideal sheaves: If $Z$ is a closed subscheme of a locally Noetherian scheme $X$, the sheaf $\mathcal{I}_{Z/X}$ of all regular functions vanishing on $Z$ is coherent. Likewise, if $Z$ is a closed analytic subspace of a complex analytic space $X$, the ideal sheaf $\mathcal{I}_{Z/X}$ is coherent.
The structure sheaf $\mathcal{O}_{Z}$ of a closed subscheme $Z$ of a locally Noetherian scheme $X$ can be viewed as a coherent sheaf on $X$. To be precise, this is the direct image sheaf $i_{*}\mathcal{O}_{Z}$, where $i:Z\to X$ is the inclusion. Likewise for a closed analytic subspace of a complex analytic space. The sheaf $i_{*}\mathcal{O}_{Z}$ has fiber (defined below) of dimension zero at points in the open set $X-Z$, and fiber of dimension 1 at points in $Z$. There is a short exact sequence of coherent sheaves on $X$:

$$0\to \mathcal{I}_{Z/X}\to \mathcal{O}_{X}\to i_{*}\mathcal{O}_{Z}\to 0.$$
Most operations of linear algebra preserve coherent sheaves. In particular, for coherent sheaves $\mathcal{F}$ and $\mathcal{G}$ on a ringed space $X$, the tensor product sheaf $\mathcal{F}\otimes_{\mathcal{O}_{X}}\mathcal{G}$ and the sheaf of homomorphisms $\mathcal{H}om_{\mathcal{O}_{X}}(\mathcal{F},\mathcal{G})$ are coherent.
A simple non-example of a quasi-coherent sheaf is given by the extension-by-zero functor. For example, consider $i_{!}\mathcal{O}_{X}$ for

$$X=\operatorname{Spec}(\mathbb{C}[x,x^{-1}])\xrightarrow{\;i\;}\operatorname{Spec}(\mathbb{C}[x])=Y.$$
== Functoriality ==
Let $f:X\to Y$ be a morphism of ringed spaces (for example, a morphism of schemes). If $\mathcal{F}$ is a quasi-coherent sheaf on $Y$, then the inverse image $\mathcal{O}_{X}$-module (or pullback) $f^{*}\mathcal{F}$ is quasi-coherent on $X$. For a morphism of schemes $f:X\to Y$ and a coherent sheaf $\mathcal{F}$ on $Y$, the pullback $f^{*}\mathcal{F}$ is not coherent in full generality (for example, $f^{*}\mathcal{O}_{Y}=\mathcal{O}_{X}$, which might not be coherent), but pullbacks of coherent sheaves are coherent if $X$ is locally Noetherian. An important special case is the pullback of a vector bundle, which is a vector bundle.
If $f:X\to Y$ is a quasi-compact quasi-separated morphism of schemes and $\mathcal{F}$ is a quasi-coherent sheaf on $X$, then the direct image sheaf (or pushforward) $f_{*}\mathcal{F}$ is quasi-coherent on $Y$.
The direct image of a coherent sheaf is often not coherent. For example, for a field $k$, let $X$ be the affine line over $k$, and consider the morphism $f:X\to \operatorname{Spec}(k)$; then the direct image $f_{*}\mathcal{O}_{X}$ is the sheaf on $\operatorname{Spec}(k)$ associated to the polynomial ring $k[x]$, which is not coherent because $k[x]$ has infinite dimension as a $k$-vector space. On the other hand, the direct image of a coherent sheaf under a proper morphism is coherent, by results of Grauert and Grothendieck.
== Local behavior of coherent sheaves ==
An important feature of a coherent sheaf $\mathcal{F}$ is that the properties of $\mathcal{F}$ at a point $x$ control the behavior of $\mathcal{F}$ in a neighborhood of $x$, more than would be true for an arbitrary sheaf. For example, Nakayama's lemma says (in geometric language) that if $\mathcal{F}$ is a coherent sheaf on a scheme $X$, then the fiber $\mathcal{F}_{x}\otimes_{\mathcal{O}_{X,x}}k(x)$ of $\mathcal{F}$ at a point $x$ (a vector space over the residue field $k(x)$) is zero if and only if the sheaf $\mathcal{F}$ is zero on some open neighborhood of $x$. A related fact is that the dimension of the fibers of a coherent sheaf is upper-semicontinuous. Thus a coherent sheaf has constant rank on an open set, while the rank can jump up on a lower-dimensional closed subset.
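A one-line illustration on the affine line: the coherent sheaf associated to the $k[x]$-module $k[x]/(x)$ has fiber dimension 1 at the origin and 0 everywhere else, so its rank jumps up exactly on the closed point $x=0$. Sketched numerically (the fiber at $x=a$ is the cokernel of multiplication by $a$ on $k$; the function name is ours):

```python
def fiber_dim_skyscraper(a):
    """Fiber dimension at the point x = a of the sheaf associated to k[x]/(x):
    the fiber is k / (a * k), which is k when a == 0 and 0 otherwise."""
    return 1 if a == 0 else 0

# Rank 0 on the open set {x != 0}, jumping to 1 on the closed subset {x = 0}:
print([fiber_dim_skyscraper(a) for a in (-2, -1, 0, 1, 2)])  # -> [0, 0, 1, 0, 0]
```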
In the same spirit: a coherent sheaf $\mathcal{F}$ on a scheme $X$ is a vector bundle if and only if its stalk $\mathcal{F}_{x}$ is a free module over the local ring $\mathcal{O}_{X,x}$ for every point $x$ in $X$.
On a general scheme, one cannot determine whether a coherent sheaf is a vector bundle just from its fibers (as opposed to its stalks). On a reduced locally Noetherian scheme, however, a coherent sheaf is a vector bundle if and only if its rank is locally constant.
== Examples of vector bundles ==
For a morphism of schemes $X\to Y$, let $\Delta :X\to X\times_{Y}X$ be the diagonal morphism, which is a closed immersion if $X$ is separated over $Y$. Let $\mathcal{I}$ be the ideal sheaf of $X$ in $X\times_{Y}X$. Then the sheaf of differentials $\Omega_{X/Y}^{1}$ can be defined as the pullback $\Delta^{*}\mathcal{I}$ of $\mathcal{I}$ to $X$. Sections of this sheaf are called 1-forms on $X$ over $Y$, and they can be written locally on $X$ as finite sums $\sum f_{j}\,dg_{j}$ for regular functions $f_{j}$ and $g_{j}$. If $X$ is locally of finite type over a field $k$, then $\Omega_{X/k}^{1}$ is a coherent sheaf on $X$.
If $X$ is smooth over $k$, then $\Omega^{1}$ (meaning $\Omega_{X/k}^{1}$) is a vector bundle over $X$, called the cotangent bundle of $X$. Then the tangent bundle $TX$ is defined to be the dual bundle $(\Omega^{1})^{*}$. For $X$ smooth over $k$ of dimension $n$ everywhere, the tangent bundle has rank $n$.
If $Y$ is a smooth closed subscheme of a smooth scheme $X$ over $k$, then there is a short exact sequence of vector bundles on $Y$:

$$0\to TY\to TX|_{Y}\to N_{Y/X}\to 0,$$

which can be used as a definition of the normal bundle $N_{Y/X}$ to $Y$ in $X$.
For a smooth scheme $X$ over a field $k$ and a natural number $i$, the vector bundle $\Omega^i$ of $i$-forms on $X$ is defined as the $i$-th exterior power of the cotangent bundle, $\Omega^i = \Lambda^i \Omega^1$. For a smooth variety $X$ of dimension $n$ over $k$, the canonical bundle $K_X$ means the line bundle $\Omega^n$. Thus sections of the canonical bundle are algebro-geometric analogs of volume forms on $X$. For example, a section of the canonical bundle of affine space $\mathbb{A}^n$ over $k$ can be written as
$$f(x_1, \ldots, x_n)\; dx_1 \wedge \cdots \wedge dx_n,$$
where $f$ is a polynomial with coefficients in $k$.
Let $R$ be a commutative ring and $n$ a natural number. For each integer $j$, there is an important example of a line bundle on projective space $\mathbb{P}^n$ over $R$, called $\mathcal{O}(j)$. To define this, consider the morphism of $R$-schemes
$$\pi : \mathbb{A}^{n+1} - 0 \to \mathbb{P}^n$$
given in coordinates by $(x_0, \ldots, x_n) \mapsto [x_0, \ldots, x_n]$. (That is, thinking of projective space as the space of 1-dimensional linear subspaces of affine space, send a nonzero point in affine space to the line that it spans.) A section of $\mathcal{O}(j)$ over an open subset $U$ of $\mathbb{P}^n$ is then defined to be a regular function $f$ on $\pi^{-1}(U)$ that is homogeneous of degree $j$, meaning that
$$f(ax) = a^j f(x)$$
as regular functions on $(\mathbb{A}^1 - 0) \times \pi^{-1}(U)$. For all integers $i$ and $j$, there is an isomorphism $\mathcal{O}(i) \otimes \mathcal{O}(j) \cong \mathcal{O}(i+j)$ of line bundles on $\mathbb{P}^n$.
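The homogeneity condition $f(ax) = a^j f(x)$ is easy to check symbolically for a candidate section; a small SymPy sketch, where the degree-3 polynomial is a made-up example of a section of $\mathcal{O}(3)$ on $\mathbb{P}^2$:

```python
import sympy as sp

a, x0, x1, x2 = sp.symbols('a x0 x1 x2')
f = x0**2 * x1 + x1 * x2**2  # homogeneous of degree 3 in x0, x1, x2

# f(a*x) should equal a^3 * f(x), the defining condition for a section of O(3)
scaled = f.subs({x0: a*x0, x1: a*x1, x2: a*x2}, simultaneous=True)
assert sp.expand(scaled - a**3 * f) == 0
```

Multiplying two such functions adds their degrees, which is one way to see the isomorphism $\mathcal{O}(i) \otimes \mathcal{O}(j) \cong \mathcal{O}(i+j)$ on sections.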
In particular, every homogeneous polynomial in $x_0, \ldots, x_n$ of degree $j$ over $R$ can be viewed as a global section of $\mathcal{O}(j)$ over $\mathbb{P}^n$. Note that every closed subscheme of projective space can be defined as the zero set of some collection of homogeneous polynomials, hence as the zero set of some sections of the line bundles $\mathcal{O}(j)$. This contrasts with the simpler case of affine space, where a closed subscheme is simply the zero set of some collection of regular functions. The regular functions on projective space $\mathbb{P}^n$ over $R$ are just the "constants" (the ring $R$), and so it is essential to work with the line bundles $\mathcal{O}(j)$.
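Over a field, the global sections of $\mathcal{O}(j)$ on $\mathbb{P}^n$ are exactly the degree-$j$ homogeneous polynomials, a space with the degree-$j$ monomials as a basis and hence of dimension $\binom{n+j}{n}$; a quick enumeration check (the helper name is ours):

```python
from itertools import combinations_with_replacement
from math import comb

def num_global_sections(n, j):
    """Count monomials of degree j in x_0, ..., x_n, a basis of H^0(P^n, O(j))."""
    if j < 0:
        return 0  # O(j) has no nonzero global sections for j < 0
    # a degree-j monomial = a multiset of j variable indices drawn from {0, ..., n}
    return sum(1 for _ in combinations_with_replacement(range(n + 1), j))

assert num_global_sections(2, 3) == comb(2 + 3, 2) == 10  # cubics in 3 variables
assert num_global_sections(3, 0) == 1                     # only the constants
```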
Serre gave an algebraic description of all coherent sheaves on projective space, more subtle than what happens for affine space. Namely, let $R$ be a Noetherian ring (for example, a field), and consider the polynomial ring $S = R[x_0, \ldots, x_n]$ as a graded ring with each $x_i$ having degree 1. Then every finitely generated graded $S$-module $M$ has an associated coherent sheaf $\tilde{M}$ on $\mathbb{P}^n$ over $R$. Every coherent sheaf on $\mathbb{P}^n$ arises in this way from a finitely generated graded $S$-module $M$. (For example, the line bundle $\mathcal{O}(j)$ is the sheaf associated to the $S$-module $S$ with its grading lowered by $j$.) But the $S$-module $M$ that yields a given coherent sheaf on $\mathbb{P}^n$ is not unique; it is only unique up to changing $M$ by graded modules that are nonzero in only finitely many degrees. More precisely, the abelian category of coherent sheaves on $\mathbb{P}^n$ is the quotient of the category of finitely generated graded $S$-modules by the Serre subcategory of modules that are nonzero in only finitely many degrees.
The tangent bundle of projective space $\mathbb{P}^n$ over a field $k$ can be described in terms of the line bundle $\mathcal{O}(1)$. Namely, there is a short exact sequence, the Euler sequence:
$$0 \to \mathcal{O}_{\mathbb{P}^n} \to \mathcal{O}(1)^{\oplus\, n+1} \to T\mathbb{P}^n \to 0.$$
It follows that the canonical bundle $K_{\mathbb{P}^n}$ (the dual of the determinant line bundle of the tangent bundle) is isomorphic to $\mathcal{O}(-n-1)$. This is a fundamental calculation for algebraic geometry. For example, the fact that the canonical bundle is a negative multiple of the ample line bundle $\mathcal{O}(1)$ means that projective space is a Fano variety. Over the complex numbers, this means that projective space has a Kähler metric with positive Ricci curvature.
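The identification of the canonical bundle can be spelled out by taking top exterior powers in the Euler sequence, using the multiplicativity of determinant line bundles in short exact sequences:

```latex
% Determinants in the Euler sequence 0 -> O -> O(1)^{n+1} -> TP^n -> 0:
\Lambda^{n+1}\big(\mathcal{O}(1)^{\oplus\, n+1}\big)
  \;\cong\; \mathcal{O}_{\mathbb{P}^n} \otimes \Lambda^{n} T\mathbb{P}^{n},
\qquad\text{so}\qquad
\mathcal{O}(n+1) \;\cong\; \Lambda^{n} T\mathbb{P}^{n}.
% Dualizing the determinant of the tangent bundle:
K_{\mathbb{P}^n} \;=\; \big(\Lambda^{n} T\mathbb{P}^{n}\big)^{*} \;\cong\; \mathcal{O}(-n-1).
```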
=== Vector bundles on a hypersurface ===
Consider a smooth degree-$d$ hypersurface $X \subseteq \mathbb{P}^n$ defined by a homogeneous polynomial $f$ of degree $d$. Then there is an exact sequence
$$0 \to \mathcal{O}_X(-d) \to i^* \Omega_{\mathbb{P}^n} \to \Omega_X \to 0,$$
where the second map is the pullback of differential forms, and the first map sends
$$\phi \mapsto d(f \cdot \phi).$$
Note that this sequence tells us that $\mathcal{O}(-d)$ is the conormal sheaf of $X$ in $\mathbb{P}^n$. Dualizing this yields the exact sequence
$$0 \to T_X \to i^* T_{\mathbb{P}^n} \to \mathcal{O}(d) \to 0,$$
hence $\mathcal{O}(d)$ is the normal bundle of $X$ in $\mathbb{P}^n$. If we use the fact that given an exact sequence
$$0 \to \mathcal{E}_1 \to \mathcal{E}_2 \to \mathcal{E}_3 \to 0$$
of vector bundles with ranks $r_1$, $r_2$, $r_3$, there is an isomorphism
$$\Lambda^{r_2} \mathcal{E}_2 \cong \Lambda^{r_1} \mathcal{E}_1 \otimes \Lambda^{r_3} \mathcal{E}_3$$
of line bundles, then we see that there is an isomorphism
$$i^* \omega_{\mathbb{P}^n} \cong \omega_X \otimes \mathcal{O}_X(-d),$$
showing that
$$\omega_X \cong \mathcal{O}_X(d-n-1).$$
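As a numeric sanity check on $\omega_X \cong \mathcal{O}_X(d-n-1)$ in the classical case $n = 2$: a smooth plane curve of degree $d$ has $\omega_X \cong \mathcal{O}_X(d-3)$, a line bundle of degree $d(d-3)$, and this should agree with $2g - 2$ for the genus $g = (d-1)(d-2)/2$. A quick check (helper names are ours):

```python
def genus_plane_curve(d):
    # genus of a smooth plane curve of degree d
    return (d - 1) * (d - 2) // 2

def deg_canonical(d):
    # omega_X = O_X(d-3) restricted to the degree-d curve has degree d*(d-3)
    return d * (d - 3)

# degree of the canonical bundle always equals 2g - 2
for d in range(1, 12):
    assert deg_canonical(d) == 2 * genus_plane_curve(d) - 2
```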
== Serre construction and vector bundles ==
One useful technique for constructing rank 2 vector bundles is the Serre construction, which establishes a correspondence between rank 2 vector bundles $\mathcal{E}$ on a smooth projective variety $X$ and codimension 2 subvarieties $Y$, using a certain $\text{Ext}^1$-group calculated on $X$. This is given by a cohomological condition on the line bundle $\wedge^2 \mathcal{E}$ (see below).
The correspondence in one direction is given as follows: to a section $s \in \Gamma(X, \mathcal{E})$ we can associate the vanishing locus $V(s) \subseteq X$. If $V(s)$ is a codimension 2 subvariety, then it is a local complete intersection, meaning that if we take an affine chart $U_i \subseteq X$, then $s|_{U_i} \in \Gamma(U_i, \mathcal{E})$ can be represented as a function $s_i : U_i \to \mathbb{A}^2$ with $s_i(p) = (s_i^1(p), s_i^2(p))$ and $V(s) \cap U_i = V(s_i^1, s_i^2)$. Moreover, the line bundle $\omega_X \otimes \wedge^2 \mathcal{E}|_{V(s)}$ is isomorphic to the canonical bundle $\omega_{V(s)}$ on $V(s)$.
In the other direction, for a codimension 2 subvariety $Y \subseteq X$ and a line bundle $\mathcal{L} \to X$ such that
$$H^1(X, \mathcal{L}) = H^2(X, \mathcal{L}) = 0$$
and
$$\omega_Y \cong (\omega_X \otimes \mathcal{L})|_Y,$$
there is a canonical isomorphism
$$\text{Hom}((\omega_X \otimes \mathcal{L})|_Y, \omega_Y) \cong \text{Ext}^1(\mathcal{I}_Y \otimes \mathcal{L}, \mathcal{O}_X),$$
which is functorial with respect to inclusion of codimension 2 subvarieties. Moreover, any isomorphism given on the left corresponds to a locally free sheaf in the middle of the extension on the right. That is, for $s \in \text{Hom}((\omega_X \otimes \mathcal{L})|_Y, \omega_Y)$ that is an isomorphism, there is a corresponding locally free sheaf $\mathcal{E}$ of rank 2 that fits into a short exact sequence
$$0 \to \mathcal{O}_X \to \mathcal{E} \to \mathcal{I}_Y \otimes \mathcal{L} \to 0.$$
This vector bundle can then be further studied using cohomological invariants to determine if it is stable or not. This forms the basis for studying moduli of stable vector bundles in many specific cases, such as on principally polarized abelian varieties and K3 surfaces.
== Chern classes and algebraic K-theory ==
A vector bundle $E$ on a smooth variety $X$ over a field has Chern classes in the Chow ring of $X$: classes $c_i(E)$ in $CH^i(X)$ for $i \geq 0$. These satisfy the same formal properties as Chern classes in topology. For example, for any short exact sequence
$$0 \to A \to B \to C \to 0$$
of vector bundles on $X$, the Chern classes of $B$ are given by
$$c_i(B) = c_i(A) + c_1(A)\, c_{i-1}(C) + \cdots + c_{i-1}(A)\, c_1(C) + c_i(C).$$
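Equivalently, the total Chern class $c = 1 + c_1 + c_2 + \cdots$ is multiplicative in short exact sequences, $c(B) = c(A)\,c(C)$; a symbolic check that the degree-2 coefficient of the product reproduces the displayed formula, for two rank-2 bundles with formal Chern classes:

```python
import sympy as sp

t, a1, a2, c1, c2 = sp.symbols('t a1 a2 c1 c2')

cA = 1 + a1*t + a2*t**2  # total Chern class of A, graded by powers of t
cC = 1 + c1*t + c2*t**2  # total Chern class of C
cB = sp.expand(cA * cC)  # multiplicativity: c(B) = c(A) c(C)

# degree-2 piece: c_2(B) = c_2(A) + c_1(A) c_1(C) + c_2(C)
assert sp.expand(cB.coeff(t, 2) - (a2 + a1*c1 + c2)) == 0
```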
It follows that the Chern classes of a vector bundle $E$ depend only on the class of $E$ in the Grothendieck group $K_0(X)$. By definition, for a scheme $X$, $K_0(X)$ is the quotient of the free abelian group on the set of isomorphism classes of vector bundles on $X$ by the relation that $[B] = [A] + [C]$ for any short exact sequence as above. Although $K_0(X)$ is hard to compute in general, algebraic K-theory provides many tools for studying it, including a sequence of related groups $K_i(X)$ for integers $i > 0$.
A variant is the group $G_0(X)$ (or $K_0'(X)$), the Grothendieck group of coherent sheaves on $X$. (In topological terms, G-theory has the formal properties of a Borel–Moore homology theory for schemes, while K-theory is the corresponding cohomology theory.) The natural homomorphism $K_0(X) \to G_0(X)$ is an isomorphism if $X$ is a regular separated Noetherian scheme, using that every coherent sheaf has a finite resolution by vector bundles in that case. For example, that gives a definition of the Chern classes of a coherent sheaf on a smooth variety over a field.
More generally, a Noetherian scheme $X$ is said to have the resolution property if every coherent sheaf on $X$ admits a surjection from some vector bundle on $X$. For example, every quasi-projective scheme over a Noetherian ring has the resolution property.
=== Applications of the resolution property ===
Since the resolution property implies that a coherent sheaf $\mathcal{E}$ on a Noetherian scheme is quasi-isomorphic in the derived category to a complex of vector bundles
$$\mathcal{E}_k \to \cdots \to \mathcal{E}_1 \to \mathcal{E}_0,$$
we can compute the total Chern class of $\mathcal{E}$ as
$$c(\mathcal{E}) = c(\mathcal{E}_0)\, c(\mathcal{E}_1)^{-1} \cdots c(\mathcal{E}_k)^{(-1)^k}.$$
For example, this formula is useful for finding the Chern classes of the sheaf representing a subscheme of $X$. If we take the projective scheme $Z$ associated to the ideal $(xy, xz) \subseteq \mathbb{C}[x, y, z, w]$, then
$$c(\mathcal{O}_Z) = \frac{c(\mathcal{O})\, c(\mathcal{O}(-3))}{c(\mathcal{O}(-2) \oplus \mathcal{O}(-2))},$$
since there is the resolution
$$0 \to \mathcal{O}(-3) \to \mathcal{O}(-2) \oplus \mathcal{O}(-2) \to \mathcal{O} \to \mathcal{O}_Z \to 0$$
over $\mathbb{CP}^3$.
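Writing $h$ for the hyperplane class on $\mathbb{CP}^3$, so that $c(\mathcal{O}(a)) = 1 + ah$ and $h^4 = 0$, the displayed quotient becomes a truncated power series in $h$; a SymPy sketch of that arithmetic (the truncation and class names reflect these assumptions):

```python
import sympy as sp

h = sp.symbols('h')  # hyperplane class on CP^3, with h^4 = 0

# c(O) c(O(-3)) / c(O(-2) (+) O(-2)) with c(O(a)) = 1 + a*h
expr = (1 - 3*h) / (1 - 2*h)**2
c = sp.series(expr, h, 0, 4).removeO()  # truncate: h^4 = 0 on CP^3

# expands to 1 + h + 0*h^2 - 4*h^3
assert sp.expand(c - (1 + h - 4*h**3)) == 0
```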
== Bundle homomorphism vs. sheaf homomorphism ==
When vector bundles and locally free sheaves of finite constant rank are used interchangeably, care must be taken to distinguish between bundle homomorphisms and sheaf homomorphisms. Specifically, given vector bundles $p : E \to X$ and $q : F \to X$, by definition a bundle homomorphism $\varphi : E \to F$ is a scheme morphism over $X$ (i.e., $p = q \circ \varphi$) such that, for each geometric point $x$ in $X$, the induced map $\varphi_x : p^{-1}(x) \to q^{-1}(x)$ is a linear map of rank independent of $x$. Thus, it induces a sheaf homomorphism $\widetilde{\varphi} : \mathcal{E} \to \mathcal{F}$ of constant rank between the corresponding locally free $\mathcal{O}_X$-modules (sheaves of dual sections). But there may be $\mathcal{O}_X$-module homomorphisms that do not arise this way, namely those not having constant rank.
In particular, a subbundle $E \subseteq F$ is a subsheaf (i.e., $\mathcal{E}$ is a subsheaf of $\mathcal{F}$). But the converse can fail; for example, for an effective Cartier divisor $D$ on $X$, $\mathcal{O}_X(-D) \subseteq \mathcal{O}_X$ is a subsheaf but typically not a subbundle (since any line bundle has only two subbundles).
== The category of quasi-coherent sheaves ==
The quasi-coherent sheaves on any fixed scheme form an abelian category. Gabber showed that, in fact, the quasi-coherent sheaves on any scheme form a particularly well-behaved abelian category, a Grothendieck category. A quasi-compact quasi-separated scheme $X$ (such as an algebraic variety over a field) is determined up to isomorphism by the abelian category of quasi-coherent sheaves on $X$, by a result of Rosenberg generalizing a result of Gabriel.
== Coherent cohomology ==
The fundamental technical tool in algebraic geometry is the cohomology theory of coherent sheaves. Although it was introduced only in the 1950s, many earlier techniques of algebraic geometry are clarified by the language of sheaf cohomology applied to coherent sheaves. Broadly speaking, coherent sheaf cohomology can be viewed as a tool for producing functions with specified properties; sections of line bundles or of more general sheaves can be viewed as generalized functions. In complex analytic geometry, coherent sheaf cohomology also plays a foundational role.
Among the core results of coherent sheaf cohomology are results on finite-dimensionality of cohomology, results on the vanishing of cohomology in various cases, duality theorems such as Serre duality, relations between topology and algebraic geometry such as Hodge theory, and formulas for Euler characteristics of coherent sheaves such as the Riemann–Roch theorem.
== See also ==
Picard group
Divisor (algebraic geometry)
Reflexive sheaf
Quot scheme
Twisted sheaf
Essentially finite vector bundle
Bundle of principal parts
Gabriel–Rosenberg reconstruction theorem
Pseudo-coherent sheaf
Quasi-coherent sheaf on an algebraic stack
== Notes ==
== References ==
Antieau, Benjamin (2016), "A reconstruction theorem for abelian categories of twisted sheaves", Journal für die reine und angewandte Mathematik, 2016 (712): 175–188, arXiv:1305.2541, doi:10.1515/crelle-2013-0119, MR 3466552
Danilov, V. I. (2001) [1994], "Coherent algebraic sheaf", Encyclopedia of Mathematics, EMS Press
Grauert, Hans; Remmert, Reinhold (1984), Coherent Analytic Sheaves, Grundlehren der mathematischen Wissenschaften, vol. 265, Springer-Verlag, doi:10.1007/978-3-642-69582-7, ISBN 3-540-13178-7, MR 0755331
Eisenbud, David (1995), Commutative Algebra with a View toward Algebraic Geometry, Graduate Texts in Mathematics, vol. 150, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4612-5350-1, ISBN 978-0-387-94268-1, MR 1322960
Fulton, William (1998), Intersection Theory, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4612-1700-8, ISBN 978-0-387-98549-7, MR 1644323
Sections 0.5.3 and 0.5.4 of Grothendieck, Alexandre; Dieudonné, Jean (1960). "Éléments de géométrie algébrique: I. Le langage des schémas". Publications Mathématiques de l'IHÉS. 4. doi:10.1007/bf02684778. MR 0217083.
Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157
Mumford, David (1999). The Red Book of Varieties and Schemes: Includes the Michigan Lectures (1974) on Curves and Their Jacobians. Lecture Notes in Mathematics. Vol. 1358 (2nd ed.). Springer-Verlag. doi:10.1007/b62130. ISBN 354063293X. MR 1748380.
Onishchik, A.L. (2001) [1994], "Coherent analytic sheaf", Encyclopedia of Mathematics, EMS Press
Onishchik, A.L. (2001) [1994], "Coherent sheaf", Encyclopedia of Mathematics, EMS Press
Serre, Jean-Pierre (1955), "Faisceaux algébriques cohérents", Annals of Mathematics, 61 (2): 197–278, doi:10.2307/1969915, JSTOR 1969915, MR 0068874
== External links ==
The Stacks Project Authors, The Stacks Project
Part V of Vakil, Ravi, The Rising Sea
In mathematics, specifically in homology theory and algebraic topology, cohomology is a general term for a sequence of abelian groups, usually one associated with a topological space, often defined from a cochain complex. Cohomology can be viewed as a method of assigning richer algebraic invariants to a space than homology. Some versions of cohomology arise by dualizing the construction of homology. In other words, cochains are functions on the group of chains in homology theory.
From its start in topology, this idea became a dominant method in the mathematics of the second half of the twentieth century. From the initial idea of homology as a method of constructing algebraic invariants of topological spaces, the range of applications of homology and cohomology theories has spread throughout geometry and algebra. The terminology tends to hide the fact that cohomology, a contravariant theory, is more natural than homology in many applications. At a basic level, this has to do with functions and pullbacks in geometric situations: given spaces $X$ and $Y$, and some function $F$ on $Y$, any mapping $f : X \to Y$ gives rise, by composition with $f$, to a function $F \circ f$ on $X$. The most important cohomology theories have a product, the cup product, which gives them a ring structure. Because of this feature, cohomology is usually a stronger invariant than homology.
== Singular cohomology ==
Singular cohomology is a powerful invariant in topology, associating a graded-commutative ring with any topological space. Every continuous map $f : X \to Y$ determines a homomorphism from the cohomology ring of $Y$ to that of $X$; this puts strong restrictions on the possible maps from $X$ to $Y$. Unlike more subtle invariants such as homotopy groups, the cohomology ring tends to be computable in practice for spaces of interest.
For a topological space $X$, the definition of singular cohomology starts with the singular chain complex:
$$\cdots \to C_{i+1} \xrightarrow{\partial_{i+1}} C_i \xrightarrow{\partial_i} C_{i-1} \to \cdots$$
By definition, the singular homology of $X$ is the homology of this chain complex (the kernel of one homomorphism modulo the image of the previous one). In more detail, $C_i$ is the free abelian group on the set of continuous maps from the standard $i$-simplex to $X$ (called "singular $i$-simplices in $X$"), and $\partial_i$ is the $i$-th boundary homomorphism. The groups $C_i$ are zero for $i$ negative.
Now fix an abelian group $A$, and replace each group $C_i$ by its dual group $C_i^* = \mathrm{Hom}(C_i, A)$, and $\partial_i$ by its dual homomorphism
$$d_{i-1} : C_{i-1}^* \to C_i^*.$$
This has the effect of "reversing all the arrows" of the original complex, leaving a cochain complex
$$\cdots \leftarrow C_{i+1}^* \xleftarrow{d_i} C_i^* \xleftarrow{d_{i-1}} C_{i-1}^* \leftarrow \cdots$$
For an integer $i$, the $i$-th cohomology group of $X$ with coefficients in $A$ is defined to be $\ker(d_i)/\operatorname{im}(d_{i-1})$ and denoted by $H^i(X, A)$. The group $H^i(X, A)$ is zero for $i$ negative. The elements of $C_i^*$ are called singular $i$-cochains with coefficients in $A$. (Equivalently, an $i$-cochain on $X$ can be identified with a function from the set of singular $i$-simplices in $X$ to $A$.) Elements of $\ker(d)$ and $\operatorname{im}(d)$ are called cocycles and coboundaries, respectively, while elements of $\ker(d_i)/\operatorname{im}(d_{i-1}) = H^i(X, A)$ are called cohomology classes (because they are equivalence classes of cocycles).
In what follows, the coefficient group $A$ is sometimes not written. It is common to take $A$ to be a commutative ring $R$; then the cohomology groups are $R$-modules. A standard choice is the ring $\mathbb{Z}$ of integers.
Some of the formal properties of cohomology are only minor variants of the properties of homology:
A continuous map $f : X \to Y$ determines a pushforward homomorphism $f_* : H_i(X) \to H_i(Y)$ on homology and a pullback homomorphism $f^* : H^i(Y) \to H^i(X)$ on cohomology. This makes cohomology into a contravariant functor from topological spaces to abelian groups (or $R$-modules).
Two homotopic maps from $X$ to $Y$ induce the same homomorphism on cohomology (just as on homology).
The Mayer–Vietoris sequence is an important computational tool in cohomology, as in homology. Note that the boundary homomorphism increases (rather than decreases) degree in cohomology. That is, if a space $X$ is the union of open subsets $U$ and $V$, then there is a long exact sequence:
$$\cdots \to H^i(X) \to H^i(U) \oplus H^i(V) \to H^i(U \cap V) \to H^{i+1}(X) \to \cdots$$
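As a standard worked example, cover the circle $S^1$ by two open arcs $U$ and $V$ whose intersection is homotopy equivalent to two points. Since $U$ and $V$ are contractible, the only nontrivial part of the long exact sequence is:

```latex
0 \to H^{0}(S^{1}) \to H^{0}(U) \oplus H^{0}(V) \to H^{0}(U \cap V) \to H^{1}(S^{1}) \to 0,
\qquad\text{i.e.}\qquad
0 \to \mathbb{Z} \to \mathbb{Z}^{2} \to \mathbb{Z}^{2} \to H^{1}(S^{1}) \to 0.
```

The middle map sends $(a, b)$ to $(a - b,\ a - b)$ (the difference of restrictions on each of the two components), so its image has rank 1, and exactness forces $H^1(S^1) \cong \mathbb{Z}$, with all higher cohomology of the circle vanishing.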
There are relative cohomology groups $H^i(X, Y; A)$ for any subspace $Y$ of a space $X$. They are related to the usual cohomology groups by a long exact sequence:
$$\cdots \to H^i(X, Y) \to H^i(X) \to H^i(Y) \to H^{i+1}(X, Y) \to \cdots$$
The universal coefficient theorem describes cohomology in terms of homology, using Ext groups. Namely, there is a short exact sequence
$$0 \to \operatorname{Ext}_{\mathbb{Z}}^1(H_{i-1}(X, \mathbb{Z}), A) \to H^i(X, A) \to \operatorname{Hom}_{\mathbb{Z}}(H_i(X, \mathbb{Z}), A) \to 0.$$
A related statement is that for a field $F$, $H^i(X, F)$ is precisely the dual space of the vector space $H_i(X, F)$.
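A standard illustration: for the real projective plane, $H_0(\mathbb{RP}^2) = \mathbb{Z}$, $H_1(\mathbb{RP}^2) = \mathbb{Z}/2$, and $H_2(\mathbb{RP}^2) = 0$, so the universal coefficient theorem with $A = \mathbb{Z}$ and $i = 2$ reads:

```latex
0 \to \operatorname{Ext}^{1}_{\mathbb{Z}}(\mathbb{Z}/2,\,\mathbb{Z})
  \to H^{2}(\mathbb{RP}^{2},\mathbb{Z})
  \to \operatorname{Hom}_{\mathbb{Z}}(0,\,\mathbb{Z}) \to 0,
\qquad\text{so}\qquad
H^{2}(\mathbb{RP}^{2},\mathbb{Z}) \cong \operatorname{Ext}^{1}_{\mathbb{Z}}(\mathbb{Z}/2,\mathbb{Z}) \cong \mathbb{Z}/2.
```

This shows the general pattern: torsion in homology reappears in integral cohomology one degree higher.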
If $X$ is a topological manifold or a CW complex, then the cohomology groups $H^i(X, A)$ are zero for $i$ greater than the dimension of $X$. If $X$ is a compact manifold (possibly with boundary), or a CW complex with finitely many cells in each dimension, and $R$ is a commutative Noetherian ring, then the $R$-module $H^i(X, R)$ is finitely generated for each $i$.
On the other hand, cohomology has a crucial structure that homology does not: for any topological space $X$ and commutative ring $R$, there is a bilinear map, called the cup product:
$$H^i(X, R) \times H^j(X, R) \to H^{i+j}(X, R),$$
defined by an explicit formula on singular cochains. The product of cohomology classes $u$ and $v$ is written as $u \cup v$ or simply as $uv$. This product makes the direct sum
$$H^*(X, R) = \bigoplus_i H^i(X, R)$$
into a graded ring, called the cohomology ring of $X$. It is graded-commutative in the sense that:
$$uv = (-1)^{ij} vu, \qquad u \in H^i(X, R),\ v \in H^j(X, R).$$
For any continuous map $f : X \to Y$, the pullback $f^* : H^*(Y, R) \to H^*(X, R)$ is a homomorphism of graded $R$-algebras. It follows that if two spaces are homotopy equivalent, then their cohomology rings are isomorphic.
Here are some of the geometric interpretations of the cup product. In what follows, manifolds are understood to be without boundary, unless stated otherwise. A closed manifold means a compact manifold (without boundary), whereas a closed submanifold N of a manifold M means a submanifold that is a closed subset of M, not necessarily compact (although N is automatically compact if M is).
Let X be a closed oriented manifold of dimension n. Then Poincaré duality gives an isomorphism $H_i X \cong H^{n-i} X$. As a result, a closed oriented submanifold S of codimension i in X determines a cohomology class in $H^i X$, called [S]. In these terms, the cup product describes the intersection of submanifolds. Namely, if S and T are submanifolds of codimension i and j that intersect transversely, then
$$[S][T] = [S \cap T] \in H^{i+j}(X),$$
where the intersection S ∩ T is a submanifold of codimension i + j, with an orientation determined by the orientations of S, T, and X. In the case of smooth manifolds, if S and T do not intersect transversely, this formula can still be used to compute the cup product [S][T], by perturbing S or T to make the intersection transverse. More generally, without assuming that X has an orientation, a closed submanifold of X with an orientation on its normal bundle determines a cohomology class on X. If X is a noncompact manifold, then a closed submanifold (not necessarily compact) determines a cohomology class on X. In both cases, the cup product can again be described in terms of intersections of submanifolds. Note that Thom constructed an integral cohomology class of degree 7 on a smooth 14-manifold that is not the class of any smooth submanifold. On the other hand, he showed that every integral cohomology class of positive degree on a smooth manifold has a positive multiple that is the class of a smooth submanifold. Also, every integral cohomology class on a manifold can be represented by a "pseudomanifold", that is, a simplicial complex that is a manifold outside a closed subset of codimension at least 2.
For a smooth manifold X, de Rham's theorem says that the singular cohomology of X with real coefficients is isomorphic to the de Rham cohomology of X, defined using differential forms. The cup product corresponds to the product of differential forms. This interpretation has the advantage that the product on differential forms is graded-commutative, whereas the product on singular cochains is only graded-commutative up to chain homotopy. In fact, it is impossible to modify the definition of singular cochains with coefficients in the integers $\mathbb{Z}$ or in $\mathbb{Z}/p$ for a prime number p to make the product graded-commutative on the nose. The failure of graded-commutativity at the cochain level leads to the Steenrod operations on mod p cohomology.
Very informally, for any topological space X, elements of $H^i(X)$ can be thought of as represented by codimension-$i$ subspaces of X that can move freely on X. For example, one way to define an element of $H^i(X)$ is to give a continuous map f from X to a manifold M and a closed codimension-$i$ submanifold N of M with an orientation on the normal bundle. Informally, one thinks of the resulting class $f^*([N]) \in H^i(X)$ as lying on the subspace $f^{-1}(N)$ of X; this is justified in that the class $f^*([N])$ restricts to zero in the cohomology of the open subset $X - f^{-1}(N)$. The cohomology class $f^*([N])$ can move freely on X in the sense that N could be replaced by any continuous deformation of N inside M.
== Examples ==
In what follows, cohomology is taken with coefficients in the integers Z, unless stated otherwise.
The cohomology ring of a point is the ring Z in degree 0. By homotopy invariance, this is also the cohomology ring of any contractible space, such as Euclidean space Rn.
For a positive integer n, the cohomology ring of the sphere S^n is Z[x]/(x²) (the quotient ring of a polynomial ring by the given ideal), with x in degree n. In terms of Poincaré duality as above, x is the class of a point on the sphere.
The cohomology ring of the torus (S¹)ⁿ is the exterior algebra over Z on n generators in degree 1. For example, let P denote a point in the circle S¹, and Q the point (P,P) in the 2-dimensional torus (S¹)². Then the cohomology of (S¹)² has a basis as a free Z-module of the form: the element 1 in degree 0, x := [P × S¹] and y := [S¹ × P] in degree 1, and xy = [Q] in degree 2. (Implicitly, orientations of the torus and of the two circles have been fixed here.) Note that yx = −xy = −[Q], by graded-commutativity.
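The sign rule in this exterior algebra can be checked mechanically. Below is a small Python sketch (all names are our own, not from the source) of the multiplication on H*((S¹)ⁿ, Z): a basis monomial is a sorted tuple of generator indices, and the sign of a product is (−1) raised to the number of inversions needed to sort the concatenation.

```python
from itertools import combinations
from math import comb

# Toy model of the exterior algebra over Z on degree-1 generators x1..xn,
# i.e. the cohomology ring of the n-torus (S^1)^n.

def wedge(a, b):
    """Product of basis monomials a, b (tuples of indices).
    Returns (sign, monomial), or (0, None) when the product vanishes."""
    if set(a) & set(b):
        return 0, None  # repeated generator: x_i x_i = 0
    concat = list(a) + list(b)
    # count inversions = parity of the permutation sorting the concatenation
    inv = sum(1 for i in range(len(concat)) for j in range(i + 1, len(concat))
              if concat[i] > concat[j])
    return (-1) ** inv, tuple(sorted(concat))

# On the 2-torus with x = x1, y = x2:
sign_xy, xy = wedge((1,), (2,))
sign_yx, yx = wedge((2,), (1,))
assert xy == yx == (1, 2)
assert sign_xy == 1 and sign_yx == -1     # yx = -xy, matching [Q] vs -[Q]
assert wedge((1,), (1,)) == (0, None)      # x^2 = 0
# Betti numbers of (S^1)^n are binomial coefficients:
n = 3
for k in range(n + 1):
    assert sum(1 for _ in combinations(range(1, n + 1), k)) == comb(n, k)
```

The inversion-counting step is just graded-commutativity applied one transposition at a time.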
More generally, let R be a commutative ring, and let X and Y be any topological spaces such that H*(X,R) is a finitely generated free R-module in each degree. (No assumption is needed on Y.) Then the Künneth formula gives that the cohomology ring of the product space X × Y is a tensor product of R-algebras:
H*(X × Y, R) ≅ H*(X, R) ⊗_R H*(Y, R).
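With field coefficients, the Künneth formula makes Betti numbers multiplicative: the Poincaré polynomial of X × Y is the product of those of X and Y. A short Python sketch (the helper name is ours) illustrates this for the torus S¹ × S¹ and for S² × S¹:

```python
def poincare_product(p, q):
    """Multiply Poincaré polynomials given as lists of Betti numbers,
    so entry i of the result is the i-th Betti number of the product space."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

sphere2 = [1, 0, 1]   # S^2: b0 = b2 = 1
circle = [1, 1]       # S^1: b0 = b1 = 1
assert poincare_product(circle, circle) == [1, 2, 1]        # 2-torus
assert poincare_product(sphere2, circle) == [1, 1, 1, 1]    # S^2 x S^1
```

This is ordinary polynomial multiplication, mirroring the tensor product of graded vector spaces.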
The cohomology ring of real projective space RPn with Z/2 coefficients is Z/2[x]/(xn+1), with x in degree 1. Here x is the class of a hyperplane RPn−1 in RPn; this makes sense even though RPj is not orientable for j even and positive, because Poincaré duality with Z/2 coefficients works for arbitrary manifolds. With integer coefficients, the answer is a bit more complicated. The Z-cohomology of RP2a has an element y of degree 2 such that the whole cohomology is the direct sum of a copy of Z spanned by the element 1 in degree 0 together with copies of Z/2 spanned by the elements yi for i=1,...,a. The Z-cohomology of RP2a+1 is the same together with an extra copy of Z in degree 2a+1.
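In the truncated polynomial ring Z/2[x]/(xⁿ⁺¹), cup products are just addition of exponents, cut off above n. A tiny Python sketch (hypothetical helper, not from the source) makes the point that the odd-degree class x has nonzero square in H*(RP², Z/2):

```python
def cup_rp(n, i, j):
    """Cup product x^i · x^j in H*(RP^n, Z/2) = Z/2[x]/(x^(n+1)).
    Returns the exponent of the result, or None if the product is zero."""
    return i + j if i + j <= n else None

assert cup_rp(2, 1, 1) == 2      # x^2 != 0: an odd-degree class with nonzero square
assert cup_rp(2, 1, 2) is None   # x^3 = 0 in H*(RP^2, Z/2)
```

This is the standard example showing that odd-degree classes need not square to zero when 2 is not invertible in the coefficients.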
The cohomology ring of complex projective space CPn is Z[x]/(xn+1), with x in degree 2. Here x is the class of a hyperplane CPn−1 in CPn. More generally, xj is the class of a linear subspace CPn−j in CPn.
The cohomology ring of the closed oriented surface X of genus g ≥ 0 has a basis as a free Z-module of the form: the element 1 in degree 0, A1,...,Ag and B1,...,Bg in degree 1, and the class P of a point in degree 2. The product is given by: AiAj = BiBj = 0 for all i and j, AiBj = 0 if i ≠ j, and AiBi = P for all i. By graded-commutativity, it follows that BiAi = −P.
On any topological space, graded-commutativity of the cohomology ring implies that 2x2 = 0 for all odd-degree cohomology classes x. It follows that for a ring R containing 1/2, all odd-degree elements of H*(X,R) have square zero. On the other hand, odd-degree elements need not have square zero if R is Z/2 or Z, as one sees in the example of RP2 (with Z/2 coefficients) or RP4 × RP2 (with Z coefficients).
== The diagonal ==
The cup product on cohomology can be viewed as coming from the diagonal map
Δ : X → X × X, x ↦ (x, x). Namely, for any spaces
X and Y with cohomology classes u ∈ H^i(X,R) and v ∈ H^j(Y,R), there is an external product (or cross product) cohomology class u × v ∈ H^{i+j}(X × Y, R). The cup product of classes u ∈ H^i(X,R) and v ∈ H^j(X,R) can be defined as the pullback of the external product by the diagonal:
uv = Δ*(u × v) ∈ H^{i+j}(X, R).
Alternatively, the external product can be defined in terms of the cup product. For spaces X and Y, write f : X × Y → X and g : X × Y → Y for the two projections. Then the external product of classes u ∈ H^i(X,R) and v ∈ H^j(Y,R) is:
u × v = (f*(u))(g*(v)) ∈ H^{i+j}(X × Y, R).
== Poincaré duality ==
Another interpretation of Poincaré duality is that the cohomology ring of a closed oriented manifold is self-dual in a strong sense. Namely, let X be a closed connected oriented manifold of dimension n, and let F be a field. Then H^n(X,F) is isomorphic to F, and the product
H^i(X,F) × H^{n−i}(X,F) → H^n(X,F) ≅ F
is a perfect pairing for each integer i. In particular, the vector spaces H^i(X,F) and H^{n−i}(X,F) have the same (finite) dimension. Likewise, the product on integral cohomology modulo torsion with values in H^n(X,Z) ≅ Z is a perfect pairing over Z.
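One immediate numerical consequence of the perfect pairing is the palindromic symmetry b_i = b_{n−i} of Betti numbers over a field. A quick Python check on a few standard closed oriented manifolds (the Betti data below is supplied by hand, as an illustration only):

```python
# Poincaré duality forces b_i = b_{n-i} for a closed oriented n-manifold,
# i.e. the list of Betti numbers reads the same forwards and backwards.
examples = {
    "S^2":             [1, 0, 1],
    "T^2":             [1, 2, 1],
    "genus-2 surface": [1, 4, 1],
    "CP^2":            [1, 0, 1, 0, 1],
}
for name, betti in examples.items():
    assert betti == betti[::-1], name  # duality symmetry holds in each case
```

Conversely, a space whose Betti numbers are not palindromic (for example an open manifold) cannot satisfy this form of duality.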
== Characteristic classes ==
An oriented real vector bundle E of rank r over a topological space X determines a cohomology class on X, the Euler class χ(E) ∈ H^r(X,Z). Informally, the Euler class is the class of the zero set of a general section of E. That interpretation can be made more explicit when E is a smooth vector bundle over a smooth manifold X, since then a general smooth section of E vanishes on a codimension-r submanifold of X.
There are several other types of characteristic classes for vector bundles that take values in cohomology, including Chern classes, Stiefel–Whitney classes, and Pontryagin classes.
== Eilenberg–MacLane spaces ==
For each abelian group A and natural number j, there is a space K(A,j) whose j-th homotopy group is isomorphic to A and whose other homotopy groups are zero. Such a space is called an Eilenberg–MacLane space. This space has the remarkable property that it is a classifying space for cohomology: there is a natural element u of H^j(K(A,j),A), and every cohomology class of degree j on every space X is the pullback of u by some continuous map X → K(A,j). More precisely, pulling back the class u gives a bijection
[X, K(A,j)] → H^j(X,A)
for every space X with the homotopy type of a CW complex. Here [X,Y] denotes the set of homotopy classes of continuous maps from X to Y.
For example, the space K(Z,1) (defined up to homotopy equivalence) can be taken to be the circle S¹. So the description above says that every element of H¹(X,Z) is pulled back from the class u of a point on S¹ by some map X → S¹.
There is a related description of the first cohomology with coefficients in any abelian group A, say for a CW complex X. Namely, H¹(X,A) is in one-to-one correspondence with the set of isomorphism classes of Galois covering spaces of X with group A, also called principal A-bundles over X. For X connected, it follows that H¹(X,A) is isomorphic to Hom(π₁(X),A), where π₁(X) is the fundamental group of X. For example, H¹(X,Z/2) classifies the double covering spaces of X, with the element 0 ∈ H¹(X,Z/2) corresponding to the trivial double covering, the disjoint union of two copies of X.
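For instance, the 2-torus has fundamental group Z², so its double covers are counted by Hom(Z², Z/2): a homomorphism is any choice of images of the two generators. A small Python sketch (the enumeration below is our own illustration):

```python
from itertools import product

# Hom(Z^2, Z/2): a homomorphism from the free abelian group on two
# generators is determined by the images of the generators, with no
# relations to check. So H^1(T^2, Z/2) has 2 * 2 = 4 elements,
# i.e. the torus has 4 double covers (one of them the trivial,
# disconnected one, corresponding to the zero class).
homs = list(product([0, 1], repeat=2))  # (image of a, image of b) in Z/2
assert len(homs) == 4
assert (0, 0) in homs  # the zero class: the trivial double cover
```

This matches H¹(T², Z/2) ≅ (Z/2)², as computed from the exterior-algebra description of the torus's cohomology.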
== Cap product ==
For any topological space X, the cap product is a bilinear map
∩ : H^i(X,R) × H_j(X,R) → H_{j−i}(X,R)
for any integers i and j and any commutative ring R. The resulting map
H*(X,R) × H_*(X,R) → H_*(X,R)
makes the singular homology of X into a module over the singular cohomology ring of X.
For i = j, the cap product gives the natural homomorphism
H^i(X,R) → Hom_R(H_i(X,R), R),
which is an isomorphism for R a field.
For example, let X be an oriented manifold, not necessarily compact. Then a closed oriented codimension-i submanifold Y of X (not necessarily compact) determines an element of Hi(X,R), and a compact oriented j-dimensional submanifold Z of X determines an element of Hj(X,R). The cap product [Y] ∩ [Z] ∈ Hj−i(X,R) can be computed by perturbing Y and Z to make them intersect transversely and then taking the class of their intersection, which is a compact oriented submanifold of dimension j − i.
A closed oriented manifold X of dimension n has a fundamental class [X] in Hn(X,R). The Poincaré duality isomorphism
H^i(X,R) ≅ H_{n−i}(X,R)
is defined by cap product with the fundamental class of X.
== Brief history of singular cohomology ==
Although cohomology is fundamental to modern algebraic topology, its importance was not seen for some 40 years after the development of homology. The concept of dual cell structure, which Henri Poincaré used in his proof of his Poincaré duality theorem, contained the beginning of the idea of cohomology, but this was not seen until later.
There were various precursors to cohomology. In the mid-1920s, J. W. Alexander and Solomon Lefschetz founded the intersection theory of cycles on manifolds. On a closed oriented n-dimensional manifold M, an i-cycle and a j-cycle with nonempty intersection will, if in general position, have as their intersection an (i + j − n)-cycle. This leads to a multiplication of homology classes
H_i(M) × H_j(M) → H_{i+j−n}(M),
which (in retrospect) can be identified with the cup product on the cohomology of M.
Alexander had by 1930 defined a first notion of a cochain, by thinking of an i-cochain on a space X as a function on small neighborhoods of the diagonal in Xi+1.
In 1931, Georges de Rham related homology and differential forms, proving de Rham's theorem. This result can be stated more simply in terms of cohomology.
In 1934, Lev Pontryagin proved the Pontryagin duality theorem, a result on topological groups. This (in rather special cases) provided an interpretation of Poincaré duality and Alexander duality in terms of group characters.
At a 1935 conference in Moscow, Andrey Kolmogorov and Alexander both introduced cohomology and tried to construct a cohomology product structure.
In 1936, Norman Steenrod constructed Čech cohomology by dualizing Čech homology.
From 1936 to 1938, Hassler Whitney and Eduard Čech developed the cup product (making cohomology into a graded ring) and cap product, and realized that Poincaré duality can be stated in terms of the cap product. Their theory was still limited to finite cell complexes.
In 1944, Samuel Eilenberg overcame the technical limitations, and gave the modern definition of singular homology and cohomology.
In 1945, Eilenberg and Steenrod stated the axioms defining a homology or cohomology theory, discussed below. In their 1952 book, Foundations of Algebraic Topology, they proved that the existing homology and cohomology theories did indeed satisfy their axioms.
In 1946, Jean Leray defined sheaf cohomology.
In 1948 Edwin Spanier, building on work of Alexander and Kolmogorov, developed Alexander–Spanier cohomology.
== Sheaf cohomology ==
Sheaf cohomology is a rich generalization of singular cohomology, allowing more general "coefficients" than simply an abelian group. For every sheaf of abelian groups E on a topological space X, one has cohomology groups Hi(X,E) for integers i. In particular, in the case of the constant sheaf on X associated with an abelian group A, the resulting groups Hi(X,A) coincide with singular cohomology for X a manifold or CW complex (though not for arbitrary spaces X). Starting in the 1950s, sheaf cohomology has become a central part of algebraic geometry and complex analysis, partly because of the importance of the sheaf of regular functions or the sheaf of holomorphic functions.
Grothendieck elegantly defined and characterized sheaf cohomology in the language of homological algebra. The essential point is to fix the space X and think of sheaf cohomology as a functor from the abelian category of sheaves on X to abelian groups. Start with the functor taking a sheaf E on X to its abelian group of global sections over X, E(X). This functor is left exact, but not necessarily right exact. Grothendieck defined sheaf cohomology groups to be the right derived functors of the left exact functor E ↦ E(X).
That definition suggests various generalizations. For example, one can define the cohomology of a topological space X with coefficients in any complex of sheaves, earlier called hypercohomology (but usually now just "cohomology"). From that point of view, sheaf cohomology becomes a sequence of functors from the derived category of sheaves on X to abelian groups.
In a broad sense of the word, "cohomology" is often used for the right derived functors of a left exact functor on an abelian category, while "homology" is used for the left derived functors of a right exact functor. For example, for a ring R, the Tor groups ToriR(M,N) form a "homology theory" in each variable, the left derived functors of the tensor product M⊗RN of R-modules. Likewise, the Ext groups ExtiR(M,N) can be viewed as a "cohomology theory" in each variable, the right derived functors of the Hom functor HomR(M,N).
Sheaf cohomology can be identified with a type of Ext group. Namely, for a sheaf E on a topological space X, Hi(X,E) is isomorphic to Exti(ZX, E), where ZX denotes the constant sheaf associated with the integers Z, and Ext is taken in the abelian category of sheaves on X.
== Cohomology of varieties ==
There are numerous machines built for computing the cohomology of algebraic varieties. The simplest case is the determination of cohomology for smooth projective varieties over a field of characteristic 0. Tools from Hodge theory, called Hodge structures, help give computations of cohomology of these types of varieties (with the addition of more refined information). In the simplest case, the cohomology of a smooth hypersurface in P^n can be determined from the degree of the polynomial alone.
When considering varieties over a finite field, or a field of characteristic p, more powerful tools are required because the classical definitions of homology/cohomology break down. This is because varieties over finite fields will only be a finite set of points. Grothendieck came up with the idea for a Grothendieck topology and used sheaf cohomology over the étale topology to define the cohomology theory for varieties over a finite field. Using the étale topology for a variety over a field of characteristic p, one can construct ℓ-adic cohomology for ℓ ≠ p. This is defined as the projective limit
H^k(X; Q_ℓ) := lim←_{n ∈ N} H^k_et(X; Z/(ℓⁿ)) ⊗_{Z_ℓ} Q_ℓ.
If we have a scheme of finite type
X = Proj( Z[x₀, …, x_n] / (f₁, …, f_k) ),
then there is an equality of dimensions for the Betti cohomology of X(C) and the ℓ-adic cohomology of X(F_q) whenever the variety is smooth over both fields. In addition to these cohomology theories there are other cohomology theories called Weil cohomology theories, which behave similarly to singular cohomology. There is a conjectured theory of motives which underlies all of the Weil cohomology theories.
Another useful computational tool is the blowup sequence. Given a codimension ≥ 2 subscheme Z ⊂ X, there is a Cartesian square
E → Bl_Z(X)
↓        ↓
Z →      X
From this there is an associated long exact sequence
⋯ → H^n(X) → H^n(Z) ⊕ H^n(Bl_Z(X)) → H^n(E) → H^{n+1}(X) → ⋯
If the subvariety Z is smooth, then the connecting morphisms are all trivial, hence
H^n(Bl_Z(X)) ⊕ H^n(Z) ≅ H^n(X) ⊕ H^n(E).
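On the level of Betti numbers, this splitting says b_n(Bl_Z(X)) = b_n(X) + b_n(E) − b_n(Z) when Z is smooth. A minimal Python sketch (the helper name is ours), checked on the blowup of a point in P²:

```python
def blowup_betti(bX, bZ, bE):
    """Betti numbers of Bl_Z(X) from the split sequence for smooth Z:
    b_n(Bl) + b_n(Z) = b_n(X) + b_n(E). Inputs are lists of Betti numbers."""
    size = max(len(bX), len(bZ), len(bE))
    pad = lambda b: b + [0] * (size - len(b))
    bX, bZ, bE = pad(bX), pad(bZ), pad(bE)
    return [x + e - z for x, z, e in zip(bX, bZ, bE)]

# Blow up a point in P^2: Z = point, E = P^1 (the exceptional curve).
# The result picks up an extra class in degree 2 from the exceptional curve.
assert blowup_betti([1, 0, 1, 0, 1], [1], [1, 0, 1]) == [1, 0, 2, 0, 1]
```

The extra generator in degree 2 is the class of the exceptional curve E.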
== Axioms and generalized cohomology theories ==
There are various ways to define cohomology for topological spaces (such as singular cohomology, Čech cohomology, Alexander–Spanier cohomology or sheaf cohomology). (Here sheaf cohomology is considered only with coefficients in a constant sheaf.) These theories give different answers for some spaces, but there is a large class of spaces on which they all agree. This is most easily understood axiomatically: there is a list of properties known as the Eilenberg–Steenrod axioms, and any two constructions that share those properties will agree at least on all CW complexes. There are versions of the axioms for a homology theory as well as for a cohomology theory. Some theories can be viewed as tools for computing singular cohomology for special topological spaces, such as simplicial cohomology for simplicial complexes, cellular cohomology for CW complexes, and de Rham cohomology for smooth manifolds.
One of the Eilenberg–Steenrod axioms for a cohomology theory is the dimension axiom: if P is a single point, then Hi(P) = 0 for all i ≠ 0. Around 1960, George W. Whitehead observed that it is fruitful to omit the dimension axiom completely: this gives the notion of a generalized homology theory or a generalized cohomology theory, defined below. There are generalized cohomology theories such as K-theory or complex cobordism that give rich information about a topological space, not directly accessible from singular cohomology. (In this context, singular cohomology is often called "ordinary cohomology".)
By definition, a generalized homology theory is a sequence of functors hi (for integers i) from the category of CW-pairs (X, A) (so X is a CW complex and A is a subcomplex) to the category of abelian groups, together with a natural transformation ∂i: hi(X, A) → hi−1(A) called the boundary homomorphism (here hi−1(A) is a shorthand for hi−1(A,∅)). The axioms are:
Homotopy: If f : (X,A) → (Y,B) is homotopic to g : (X,A) → (Y,B), then the induced homomorphisms on homology are the same.
Exactness: Each pair (X,A) induces a long exact sequence in homology, via the inclusions f: A → X and g: (X,∅) → (X,A):
⋯ → h_i(A) → h_i(X) → h_i(X,A) → h_{i−1}(A) → ⋯,
with maps f_*, g_*, and the boundary homomorphism ∂.
Excision: If X is the union of subcomplexes A and B, then the inclusion f: (A,A∩B) → (X,B) induces an isomorphism
h_i(A, A∩B) → h_i(X, B)
for every i.
Additivity: If (X,A) is the disjoint union of a set of pairs (Xα,Aα), then the inclusions (Xα,Aα) → (X,A) induce an isomorphism from the direct sum:
⊕_α h_i(X_α, A_α) → h_i(X, A)
for every i.
The axioms for a generalized cohomology theory are obtained by reversing the arrows, roughly speaking. In more detail, a generalized cohomology theory is a sequence of contravariant functors hi (for integers i) from the category of CW-pairs to the category of abelian groups, together with a natural transformation d: hi(A) → hi+1(X,A) called the boundary homomorphism (writing hi(A) for hi(A,∅)). The axioms are:
Homotopy: Homotopic maps induce the same homomorphism on cohomology.
Exactness: Each pair (X,A) induces a long exact sequence in cohomology, via the inclusions f: A → X and g: (X,∅) → (X,A):
⋯ → h^i(X,A) → h^i(X) → h^i(A) → h^{i+1}(X,A) → ⋯,
with maps g*, f*, and the boundary homomorphism d.
Excision: If X is the union of subcomplexes A and B, then the inclusion f: (A,A∩B) → (X,B) induces an isomorphism
h^i(X,B) → h^i(A, A∩B)
for every i.
Additivity: If (X,A) is the disjoint union of a set of pairs (Xα,Aα), then the inclusions (Xα,Aα) → (X,A) induce an isomorphism to the product group:
h^i(X,A) → ∏_α h^i(X_α, A_α)
for every i.
A spectrum determines both a generalized homology theory and a generalized cohomology theory. A fundamental result by Brown, Whitehead, and Adams says that every generalized homology theory comes from a spectrum, and likewise every generalized cohomology theory comes from a spectrum. This generalizes the representability of ordinary cohomology by Eilenberg–MacLane spaces.
A subtle point is that the functor from the stable homotopy category (the homotopy category of spectra) to generalized homology theories on CW-pairs is not an equivalence, although it gives a bijection on isomorphism classes; there are nonzero maps in the stable homotopy category (called phantom maps) that induce the zero map between homology theories on CW-pairs. Likewise, the functor from the stable homotopy category to generalized cohomology theories on CW-pairs is not an equivalence. It is the stable homotopy category, not these other categories, that has good properties such as being triangulated.
If one prefers homology or cohomology theories to be defined on all topological spaces rather than on CW complexes, one standard approach is to include the axiom that every weak homotopy equivalence induces an isomorphism on homology or cohomology. (That is true for singular homology or singular cohomology, but not for sheaf cohomology, for example.) Since every space admits a weak homotopy equivalence from a CW complex, this axiom reduces homology or cohomology theories on all spaces to the corresponding theory on CW complexes.
Some examples of generalized cohomology theories are:
Stable cohomotopy groups π_S^*(X). The corresponding homology theory is used more often: stable homotopy groups π_*^S(X).
Various different flavors of cobordism groups, based on studying a space by considering all maps from it to manifolds: unoriented cobordism MO^*(X), oriented cobordism MSO^*(X), complex cobordism MU^*(X),
and so on. Complex cobordism has turned out to be especially powerful in homotopy theory. It is closely related to formal groups, via a theorem of Daniel Quillen.
Various different flavors of topological K-theory, based on studying a space by considering all vector bundles over it: KO^*(X) (real periodic K-theory), ko^*(X) (real connective K-theory), K^*(X) (complex periodic K-theory), ku^*(X) (complex connective K-theory), and so on.
Brown–Peterson cohomology, Morava K-theory, Morava E-theory, and other theories built from complex cobordism.
Various flavors of elliptic cohomology.
Many of these theories carry richer information than ordinary cohomology, but are harder to compute.
A cohomology theory E is said to be multiplicative if E^*(X) has the structure of a graded ring for each space X. In the language of spectra, there are several more precise notions of a ring spectrum, such as an E∞ ring spectrum, where the product is commutative and associative in a strong sense.
== Other cohomology theories ==
Cohomology theories in a broader sense (invariants of other algebraic or geometric structures, rather than of topological spaces) include:
== See also ==
complex-oriented cohomology theory
== Citations ==
== References ==
Dieudonné, Jean (1989), History of Algebraic and Differential Topology, Birkhäuser, ISBN 0-8176-3388-X, MR 0995842
Dold, Albrecht (1972), Lectures on Algebraic Topology, Springer-Verlag, ISBN 978-3-540-58660-9, MR 0415602
Eilenberg, Samuel; Steenrod, Norman (1952), Foundations of Algebraic Topology, Princeton University Press, ISBN 9780691627236, MR 0050886
Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York, Heidelberg: Springer-Verlag, ISBN 0-387-90244-9, MR 0463157
Hatcher, Allen (2001), Algebraic Topology, Cambridge University Press, ISBN 0-521-79540-0, MR 1867354
"Cohomology", Encyclopedia of Mathematics, EMS Press, 2001 [1994].
May, J. Peter (1999), A Concise Course in Algebraic Topology (PDF), University of Chicago Press, ISBN 0-226-51182-0, MR 1702278
Switzer, Robert (1975), Algebraic Topology — Homology and Homotopy, Springer-Verlag, ISBN 3-540-42750-3, MR 0385836
Thom, René (1954), "Quelques propriétés globales des variétés différentiables", Commentarii Mathematici Helvetici, 28: 17–86, doi:10.1007/BF02566923, MR 0061823, S2CID 120243638 | Wikipedia/Extraordinary_cohomology_theory |
K Theory is an electronic hip-hop act led by Dylan Lewman, which formerly included Dustin Musser and Malcolm Anthony. The group was founded by Dylan Lewman and Dustin Musser in 2011. They have created remixes for Flo Rida's "GDFR", Rich Homie Quan's "Flex" and Fetty Wap's "Trap Queen".
== Career ==
On January 10, 2017, former K Theory member Malcolm Anthony left K Theory, citing "years of creative differences."
== Discography ==
=== Studio albums ===
=== Extended plays ===
=== Remixes ===
In 2015 their remix of Flo Rida's "GDFR" sold over 80,000 units on Atlantic Records.
== New Trinity Music Group (Record Label) ==
In 2015, K Theory formed their own label, New Trinity Music Group. The label released two tracks a week in 2015, including their 2nd Annual 25 Days of Kristmas campaign, in which they released 25 tracks in 25 days.
== References ==
== External links ==
K Theory's SoundCloud
K Theory's Official site | Wikipedia/K_Theory |
In mathematics, a vertex operator algebra (VOA) is an algebraic structure that plays an important role in two-dimensional conformal field theory and string theory. In addition to physical applications, vertex operator algebras have proven useful in purely mathematical contexts such as monstrous moonshine and the geometric Langlands correspondence.
The related notion of vertex algebra was introduced by Richard Borcherds in 1986, motivated by a construction of an infinite-dimensional Lie algebra due to Igor Frenkel. In the course of this construction, one employs a Fock space that admits an action of vertex operators attached to elements of a lattice. Borcherds formulated the notion of vertex algebra by axiomatizing the relations between the lattice vertex operators, producing an algebraic structure that allows one to construct new Lie algebras by following Frenkel's method.
The notion of vertex operator algebra was introduced as a modification of the notion of vertex algebra, by Frenkel, James Lepowsky, and Arne Meurman in 1988, as part of their project to construct the moonshine module. They observed that many vertex algebras that appear 'in nature' carry an action of the Virasoro algebra, and satisfy a bounded-below property with respect to an energy operator. Motivated by this observation, they added the Virasoro action and bounded-below property as axioms.
We now have post-hoc motivation for these notions from physics, together with several interpretations of the axioms that were not initially known. Physically, the vertex operators arising from holomorphic field insertions at points in two-dimensional conformal field theory admit operator product expansions when insertions collide, and these satisfy precisely the relations specified in the definition of vertex operator algebra. Indeed, the axioms of a vertex operator algebra are a formal algebraic interpretation of what physicists call chiral algebras (not to be confused with the more precise notion with the same name in mathematics) or "algebras of chiral symmetries", where these symmetries describe the Ward identities satisfied by a given conformal field theory, including conformal invariance. Other formulations of the vertex algebra axioms include Borcherds's later work on singular commutative rings, algebras over certain operads on curves introduced by Huang, Kriz, and others, D-module-theoretic objects called chiral algebras introduced by Alexander Beilinson and Vladimir Drinfeld and factorization algebras, also introduced by Beilinson and Drinfeld.
Important basic examples of vertex operator algebras include the lattice VOAs (modeling lattice conformal field theories), VOAs given by representations of affine Kac–Moody algebras (from the WZW model), the Virasoro VOAs, which are VOAs corresponding to representations of the Virasoro algebra, and the moonshine module V♮, which is distinguished by its monster symmetry. More sophisticated examples such as affine W-algebras and the chiral de Rham complex on a complex manifold arise in geometric representation theory and mathematical physics.
== Formal definition ==
=== Vertex algebra ===
A vertex algebra is a collection of data that satisfy certain axioms.
==== Data ====
a vector space V, called the space of states. The underlying field is typically taken to be the complex numbers, although Borcherds's original formulation allowed for an arbitrary commutative ring.
an identity element 1 ∈ V, sometimes written |0⟩ or Ω to indicate a vacuum state.
an endomorphism
T
:
V
→
V
{\displaystyle T:V\rightarrow V}
, called "translation". (Borcherds's original formulation included a system of divided powers of
T
{\displaystyle T}
, because he did not assume the ground ring was divisible.)
a linear multiplication map
Y
:
V
⊗
V
→
V
(
(
z
)
)
{\displaystyle Y:V\otimes V\rightarrow V((z))}
, where
V
(
(
z
)
)
{\displaystyle V((z))}
is the space of all formal Laurent series with coefficients in
V
{\displaystyle V}
. This structure has some alternative presentations:
as an infinite collection of bilinear products
⋅
n
:
u
⊗
v
↦
u
n
v
{\displaystyle \cdot _{n}:u\otimes v\mapsto u_{n}v}
where
n
∈
Z
{\displaystyle n\in \mathbb {Z} }
and
u
n
∈
E
n
d
(
V
)
{\displaystyle u_{n}\in \mathrm {End} (V)}
, so that for each
v
{\displaystyle v}
, there is an
N
{\displaystyle N}
such that
u
n
v
=
0
{\displaystyle u_{n}v=0}
for
n
<
N
{\displaystyle n<N}
.
as a left-multiplication map
V
→
E
n
d
(
V
)
[
[
z
±
1
]
]
{\displaystyle V\rightarrow \mathrm {End} (V)[[z^{\pm 1}]]}
. This is the 'state-to-field' map of the so-called state-field correspondence. For each
u
∈
V
{\displaystyle u\in V}
, the endomorphism-valued formal distribution
Y
(
u
,
z
)
{\displaystyle Y(u,z)}
is called a vertex operator or a field, and the coefficient of
z
−
n
−
1
{\displaystyle z^{-n-1}}
is the operator
u
n
{\displaystyle u_{n}}
. In the context of vertex algebras, a field is more precisely an element of
E
n
d
(
V
)
[
[
z
±
1
]
]
{\displaystyle \mathrm {End} (V)[[z^{\pm 1}]]}
, which can be written
A
(
z
)
=
∑
n
∈
Z
A
n
z
n
,
A
n
∈
E
n
d
(
V
)
{\displaystyle A(z)=\sum _{n\in \mathbb {Z} }A_{n}z^{n},A_{n}\in \mathrm {End} (V)}
such that for any
v
∈
V
,
A
n
v
=
0
{\displaystyle v\in V,A_{n}v=0}
for sufficiently small
n
{\displaystyle n}
(which may depend on
v
{\displaystyle v}
). The standard notation for the multiplication is
u
⊗
v
↦
Y
(
u
,
z
)
v
=
∑
n
∈
Z
u
n
v
z
−
n
−
1
.
{\displaystyle u\otimes v\mapsto Y(u,z)v=\sum _{n\in \mathbf {Z} }u_{n}vz^{-n-1}.}
==== Axioms ====
These data are required to satisfy the following axioms:
* Identity. For any $u \in V$, $Y(1,z)u = u$ and $Y(u,z)1 \in u + zV[[z]]$.
* Translation. $T(1) = 0$, and for any $u, v \in V$, $[T, Y(u,z)]v = TY(u,z)v - Y(u,z)Tv = \frac{d}{dz}Y(u,z)v$.
* Locality (Jacobi identity, or Borcherds identity). For any $u, v \in V$, there exists a positive integer $N$ such that $(z-x)^{N} Y(u,z)Y(v,x) = (z-x)^{N} Y(v,x)Y(u,z)$.
===== Equivalent formulations of locality axiom =====
The locality axiom has several equivalent formulations in the literature, e.g., Frenkel–Lepowsky–Meurman introduced the Jacobi identity:
For all $u, v, w \in V$,
$$z^{-1}\delta\!\left(\frac{x-y}{z}\right)Y(u,x)Y(v,y)w - z^{-1}\delta\!\left(\frac{-y+x}{z}\right)Y(v,y)Y(u,x)w = y^{-1}\delta\!\left(\frac{x-z}{y}\right)Y(Y(u,z)v,y)w,$$
where we define the formal delta series by
$$\delta\!\left(\frac{x-y}{z}\right) := \sum_{s\geq 0,\, r\in\mathbb{Z}} \binom{r}{s}(-1)^{s}\, y^{r-s} x^{s} z^{-r}.$$
Borcherds initially used the following two identities: for any $u, v, w \in V$ and integers $m, n$ we have
$$(u_{m}(v))_{n}(w) = \sum_{i\geq 0}(-1)^{i}\binom{m}{i}\left(u_{m-i}(v_{n+i}(w)) - (-1)^{m} v_{m+n-i}(u_{i}(w))\right)$$
and
$$u_{m}v = \sum_{i\geq 0}(-1)^{m+i+1}\frac{T^{i}}{i!}\, v_{m+i}u.$$
He later gave a more expansive version that is equivalent but easier to use: for any $u, v, w \in V$ and integers $m, n, q$ we have
$$\sum_{i\in\mathbb{Z}}\binom{m}{i}\left(u_{q+i}(v)\right)_{m+n-i}(w) = \sum_{i\in\mathbb{Z}}(-1)^{i}\binom{q}{i}\left(u_{m+q-i}\left(v_{n+i}(w)\right) - (-1)^{q} v_{n+q-i}\left(u_{m+i}(w)\right)\right).$$
This identity is equivalent to the Jacobi identity, as can be seen by expanding both sides in all formal variables. Finally, there is a formal function version of locality: for any $u, v, w \in V$, there is an element $X(u,v,w;z,x) \in V[[z,x]]\left[z^{-1}, x^{-1}, (z-x)^{-1}\right]$ such that $Y(u,z)Y(v,x)w$ and $Y(v,x)Y(u,z)w$ are the corresponding expansions of $X(u,v,w;z,x)$ in $V((z))((x))$ and $V((x))((z))$.
=== Vertex operator algebra ===
A vertex operator algebra is a vertex algebra equipped with a conformal element $\omega \in V$, such that the vertex operator $Y(\omega,z)$ is the weight two Virasoro field $L(z)$:
$$Y(\omega,z) = \sum_{n\in\mathbb{Z}} \omega_n z^{-n-1} = L(z) = \sum_{n\in\mathbb{Z}} L_n z^{-n-2}$$
and satisfies the following properties:
* $[L_m, L_n] = (m-n)L_{m+n} + \frac{1}{12}\delta_{m+n,0}(m^{3}-m)\,c\,\mathrm{Id}_V$, where $c$ is a constant called the central charge, or rank of $V$. In particular, the coefficients of this vertex operator endow $V$ with an action of the Virasoro algebra with central charge $c$.
* $L_0$ acts semisimply on $V$ with integer eigenvalues that are bounded below.
* Under the grading provided by the eigenvalues of $L_0$, the multiplication on $V$ is homogeneous in the sense that if $u$ and $v$ are homogeneous, then $u_n v$ is homogeneous of degree $\deg(u) + \deg(v) - n - 1$.
* The identity $1$ has degree 0, and the conformal element $\omega$ has degree 2.
* $L_{-1} = T$.
A homomorphism of vertex algebras is a map of the underlying vector spaces that respects the additional identity, translation, and multiplication structure. Homomorphisms of vertex operator algebras have "weak" and "strong" forms, depending on whether they respect conformal vectors.
== Commutative vertex algebras ==
A vertex algebra $V$ is commutative if all vertex operators $Y(u,z)$ commute with each other. This is equivalent to the property that all products $Y(u,z)v$ lie in $V[[z]]$, or that $Y(u,z) \in \operatorname{End}(V)[[z]]$. Thus, an alternative definition for a commutative vertex algebra is one in which all vertex operators $Y(u,z)$ are regular at $z = 0$.
Given a commutative vertex algebra, the constant terms of multiplication endow the vector space with a commutative and associative ring structure, the vacuum vector $1$ is a unit, and $T$ is a derivation. Hence a commutative vertex algebra equips $V$ with the structure of a commutative unital algebra with derivation. Conversely, any commutative ring $V$ with derivation $T$ has a canonical vertex algebra structure, where we set $Y(u,z)v = u_{-1}v\,z^{0} = uv$, so that $Y$ restricts to a map $Y : V \to \operatorname{End}(V)$, which is the multiplication map $u \mapsto u\,\cdot$ with $\cdot$ the algebra product. If the derivation $T$ vanishes, we may set $\omega = 0$ to obtain a vertex operator algebra concentrated in degree zero.
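This correspondence is concrete enough to check by machine. Below is a minimal sketch (the coefficient-list encoding and helper names are ours, not from any library) of the commutative vertex algebra attached to the polynomial ring $\mathbb{C}[t]$ with derivation $T = d/dt$, where $Y(u,z)v = (e^{zT}u)\,v$; the assertions verify the identity and translation axioms on sample elements.

```python
from fractions import Fraction
from math import factorial

def polymul(p, q):
    """Multiply polynomials given as coefficient lists (index = power of t)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def deriv(p, k=1):
    """k-th derivative of a coefficient-list polynomial in t."""
    for _ in range(k):
        p = [i * c for i, c in enumerate(p)][1:] or [0]
    return p

def Y(u, v):
    """Vertex operator of the commutative vertex algebra C[t], T = d/dt:
    Y(u,z)v = (e^{zT}u) v.  Returns {k: coefficient of z^k, a polynomial in t}."""
    out = {}
    for k in range(len(u)):
        term = polymul([Fraction(c, factorial(k)) for c in deriv(u, k)], v)
        if any(term):
            out[k] = term
    return out

u, v = [1, 2, 3], [0, 1, 0, 4]      # u = 1 + 2t + 3t^2,  v = t + 4t^3
assert Y([1], u) == {0: u}          # identity axiom: Y(1,z)u = u
assert Y(u, [1])[0] == u            # Y(u,z)1 = u + O(z)
# translation axiom: Y(Tu,z)v = d/dz Y(u,z)v
lhs = Y(deriv(u), v)
rhs = {k - 1: [k * c for c in p] for k, p in Y(u, v).items() if k >= 1}
assert lhs == rhs
```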
Any finite-dimensional vertex algebra is commutative.
Thus even the smallest examples of noncommutative vertex algebras require significant introduction.
== Basic properties ==
The translation operator $T$ in a vertex algebra induces infinitesimal symmetries on the product structure and satisfies the following properties:
* $Y(u,z)1 = e^{zT}u$
* $Tu = u_{-2}1$, so $T$ is determined by $Y$.
* $Y(Tu,z) = \frac{d}{dz}Y(u,z)$
* $e^{xT}Y(u,z)e^{-xT} = Y(e^{xT}u, z) = Y(u, z+x)$
* (skew-symmetry) $Y(u,z)v = e^{zT}Y(v,-z)u$
For a vertex operator algebra, the other Virasoro operators satisfy similar properties:
* $x^{L_0} Y(u,z)\, x^{-L_0} = Y(x^{L_0}u,\, xz)$
* $e^{xL_1} Y(u,z)\, e^{-xL_1} = Y\!\left(e^{x(1-xz)L_1}(1-xz)^{-2L_0}u,\; z(1-xz)^{-1}\right)$
* (quasi-conformality) $[L_m, Y(u,z)] = \sum_{k=0}^{m+1}\binom{m+1}{k} z^{k}\, Y(L_{m-k}u, z)$ for all $m \geq -1$.
(Associativity, or Cousin property): For any $u, v, w \in V$, the element $X(u,v,w;z,x) \in V[[z,x]][z^{-1}, x^{-1}, (z-x)^{-1}]$ given in the definition also expands to $Y(Y(u,z-x)v,x)w$ in $V((x))((z-x))$.
The associativity property of a vertex algebra follows from the fact that the commutator of $Y(u,z)$ and $Y(v,x)$ is annihilated by a finite power of $z-x$, i.e., one can expand it as a finite linear combination of derivatives of the formal delta function in $(z-x)$, with coefficients in $\mathrm{End}(V)$.
Reconstruction: Let $V$ be a vertex algebra, and let $J^{a}$ be a set of vectors, with corresponding fields $J^{a}(z) \in \mathrm{End}(V)[[z^{\pm 1}]]$. If $V$ is spanned by monomials in the positive weight coefficients of the fields (i.e., finite products of operators $J^{a}_{n}$ applied to $1$, where $n$ is negative), then we may write the operator product of such a monomial as a normally ordered product of divided power derivatives of fields (here, normal ordering means polar terms on the left are moved to the right). Specifically,
$$Y(J^{a_1}_{n_1+1} J^{a_2}_{n_2+1} \cdots J^{a_k}_{n_k+1} 1,\, z) = {:}\frac{\partial^{n_1}}{\partial z^{n_1}}\frac{J^{a_1}(z)}{n_1!}\,\frac{\partial^{n_2}}{\partial z^{n_2}}\frac{J^{a_2}(z)}{n_2!}\cdots\frac{\partial^{n_k}}{\partial z^{n_k}}\frac{J^{a_k}(z)}{n_k!}{:}$$
More generally, if one is given a vector space $V$ with an endomorphism $T$ and a vector $1$, and one assigns to a set of vectors $J^{a}$ a set of fields $J^{a}(z) \in \mathrm{End}(V)[[z^{\pm 1}]]$ that are mutually local, whose positive weight coefficients generate $V$, and that satisfy the identity and translation conditions, then the previous formula describes a vertex algebra structure.
=== Operator product expansion ===
In vertex algebra theory, due to associativity, we can abuse notation to write, for $A, B, C \in V$,
$$Y(A,z)Y(B,w)C = \sum_{n\in\mathbb{Z}} \frac{Y(A_{(n)}\cdot B,\, w)}{(z-w)^{n+1}}\, C.$$
This is the operator product expansion. Equivalently,
$$Y(A,z)Y(B,w) = \sum_{n\geq 0} \frac{Y(A_{(n)}\cdot B,\, w)}{(z-w)^{n+1}} + {:}Y(A,z)Y(B,w){:}\,.$$
Since the normal ordered part is regular in $z$ and $w$, this can be written more in line with physics conventions as
$$Y(A,z)Y(B,w) \sim \sum_{n\geq 0} \frac{Y(A_{(n)}\cdot B,\, w)}{(z-w)^{n+1}},$$
where the equivalence relation $\sim$ denotes equivalence up to regular terms.
==== Commonly used OPEs ====
Here some OPEs frequently found in conformal field theory are recorded.
== Examples from Lie algebras ==
The basic examples come from infinite-dimensional Lie algebras.
=== Heisenberg vertex operator algebra ===
A basic example of a noncommutative vertex algebra is the rank 1 free boson, also called the Heisenberg vertex operator algebra. It is "generated" by a single vector b, in the sense that by applying the coefficients of the field b(z) := Y(b,z) to the vector 1, we obtain a spanning set. The underlying vector space is the infinite-variable polynomial ring
$\mathbb{C}[b_{-1}, b_{-2}, \cdots]$, where for positive $n$, $b_{-n}$ acts by multiplication, and $b_{n}$ acts as $n\,\partial_{b_{-n}}$. The action of $b_0$ is multiplication by zero, producing the "momentum zero" Fock representation $V_0$ of the Heisenberg Lie algebra (generated by $b_n$ for integers $n$, with commutation relations $[b_n, b_m] = n\,\delta_{n,-m}$), induced by the trivial representation of the subalgebra spanned by $b_n$, $n \geq 0$.
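These commutation relations can be verified directly on the polynomial realization. The following sketch (our own encoding of monomials as sorted tuples of negative mode indices; none of it is standard library API) implements the modes $b_n$ on $\mathbb{C}[b_{-1}, b_{-2}, \dots]$ and checks $[b_n, b_m] = n\,\delta_{n,-m}$ on sample states.

```python
from collections import defaultdict

# States of the rank 1 Fock space V_0 are linear combinations of monomials in
# b_{-1}, b_{-2}, ..., encoded as {sorted tuple of negative mode indices: coeff}.
def apply_b(n, state):
    """b_{-k} multiplies by b_{-k}; b_k acts as k * d/d(b_{-k}) for k > 0; b_0 acts by 0."""
    out = defaultdict(int)
    for mono, c in state.items():
        if n < 0:                          # creation operator
            out[tuple(sorted(mono + (n,)))] += c
        elif n > 0:                        # annihilation: n * derivative in b_{-n}
            for i, m in enumerate(mono):
                if m == -n:
                    out[mono[:i] + mono[i + 1:]] += n * c
    return {m: c for m, c in out.items() if c}

def sub(s1, s2):
    keys = set(s1) | set(s2)
    return {k: s1.get(k, 0) - s2.get(k, 0) for k in keys if s1.get(k, 0) != s2.get(k, 0)}

def commutator(n, m, state):
    return sub(apply_b(n, apply_b(m, state)), apply_b(m, apply_b(n, state)))

vac = {(): 1}
state = {(-3, -1, -1): 1}                  # b_{-1}^2 b_{-3} applied to the vacuum
assert commutator(1, -1, vac) == {(): 1}                 # [b_1, b_{-1}] = 1
assert commutator(2, -2, state) == {(-3, -1, -1): 2}     # [b_2, b_{-2}] = 2
assert commutator(1, -2, state) == {}                    # [b_1, b_{-2}] = 0
```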
The Fock space $V_0$ can be made into a vertex algebra by the following definition of the state-operator map on a basis element $b_{j_1} b_{j_2} \cdots b_{j_k}$ with each $j_i < 0$:
$$Y(b_{j_1} b_{j_2} \cdots b_{j_k},\, z) := \frac{1}{(-j_1-1)!\,(-j_2-1)! \cdots (-j_k-1)!}\; {:}\partial^{-j_1-1}b(z)\, \partial^{-j_2-1}b(z) \cdots \partial^{-j_k-1}b(z){:}$$
where ${:}\mathcal{O}{:}$ denotes normal ordering of an operator $\mathcal{O}$. The vertex operators may also be written as a functional of a multivariable function $f$ as
$$Y[f,z] \equiv {:}f\!\left(\frac{b(z)}{0!}, \frac{b'(z)}{1!}, \frac{b''(z)}{2!}, \ldots\right){:}$$
if we understand that each term in the expansion of $f$ is normal ordered.
The rank $n$ free boson is given by taking an $n$-fold tensor product of the rank 1 free boson. For any vector $b$ in $n$-dimensional space, one has a field $b(z)$ whose coefficients are elements of the rank $n$ Heisenberg algebra, whose commutation relations have an extra inner product term: $[b_n, c_m] = n\,(b,c)\,\delta_{n,-m}$.
The Heisenberg vertex operator algebra has a one-parameter family of conformal vectors $\omega_\lambda$, $\lambda \in \mathbb{C}$, given by
$$\omega_\lambda = \tfrac{1}{2} b_{-1}^{2} + \lambda b_{-2},$$
with central charge $c_\lambda = 1 - 12\lambda^{2}$.
When $\lambda = 0$, there is the following formula for the Virasoro character:
$$\operatorname{Tr}_V q^{L_0} := \sum_{n\in\mathbb{Z}} \dim V_n\, q^{n} = \prod_{n\geq 1}(1-q^{n})^{-1}.$$
This is the generating function for partitions, and is also written as q1/24 times the weight −1/2 modular form 1/η (the reciprocal of the Dedekind eta function). The rank n free boson then has an n parameter family of Virasoro vectors, and when those parameters are zero, the character is qn/24 times the weight −n/2 modular form η−n.
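The identification of this character with the partition generating function is easy to test numerically; the following sketch (plain Python, conventions ours) computes the coefficients of $\prod_{n\geq 1}(1-q^n)^{-1}$ and compares them with the partition numbers $p(0), \dots, p(10)$.

```python
# Coefficients of prod_{n>=1} (1 - q^n)^{-1} up to q^10: multiply in one
# geometric-series factor (1 + q^n + q^{2n} + ...) at a time.
N = 10
coeff = [1] + [0] * N
for n in range(1, N + 1):
    for k in range(n, N + 1):
        coeff[k] += coeff[k - n]

# dim V_k = p(k), the number of partitions of k
assert coeff == [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```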
=== Virasoro vertex operator algebra ===
Virasoro vertex operator algebras are important for two reasons: First, the conformal element in a vertex operator algebra canonically induces a homomorphism from a Virasoro vertex operator algebra, so they play a universal role in the theory. Second, they are intimately connected to the theory of unitary representations of the Virasoro algebra, and these play a major role in conformal field theory. In particular, the unitary Virasoro minimal models are simple quotients of these vertex algebras, and their tensor products provide a way to combinatorially construct more complicated vertex operator algebras.
The Virasoro vertex operator algebra is defined as an induced representation of the Virasoro algebra: if we choose a central charge $c$, there is a unique one-dimensional module for the subalgebra $\mathbb{C}[z]\partial_z + K$ for which $K$ acts by $c\,\mathrm{Id}$ and $\mathbb{C}[z]\partial_z$ acts trivially, and the corresponding induced module is spanned by polynomials in $L_{-n} = -z^{-n+1}\partial_z$ as $n$ ranges over integers greater than 1. The module then has partition function
$$\operatorname{Tr}_V q^{L_0} = \sum_{n\in\mathbb{Z}} \dim V_n\, q^{n} = \prod_{n\geq 2}(1-q^{n})^{-1}.$$
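The commutation relations at central charge zero (the Witt algebra) can be checked directly in the vector-field realization $L_n = -z^{n+1}\partial_z$; the sketch below (our own Laurent-polynomial encoding as `{power: coeff}` dicts) verifies $[L_m, L_n] = (m-n)L_{m+n}$ on a sample element. The central term is invisible here, since it lives only in the central extension.

```python
def L(n, p):
    """Witt algebra generator L_n = -z^{n+1} d/dz acting on a Laurent
    polynomial p = {power: coeff}."""
    out = {}
    for k, c in p.items():
        if c * k:
            out[k + n] = out.get(k + n, 0) - c * k
    return out

def sub(p, q):
    keys = set(p) | set(q)
    return {k: p.get(k, 0) - q.get(k, 0) for k in keys if p.get(k, 0) != q.get(k, 0)}

def scale(a, p):
    return {k: a * c for k, c in p.items() if a * c}

p = {-3: 2, 0: 5, 4: 1}    # sample Laurent polynomial 2 z^{-3} + 5 + z^4
for m in range(-3, 4):
    for n in range(-3, 4):
        bracket = sub(L(m, L(n, p)), L(n, L(m, p)))
        assert bracket == scale(m - n, L(m + n, p)), (m, n)
```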
This space has a vertex operator algebra structure, where the vertex operators are defined by:
$$Y(L_{-n_1-2} L_{-n_2-2} \cdots L_{-n_k-2}|0\rangle,\, z) \equiv \frac{1}{n_1!\, n_2! \cdots n_k!}\; {:}\partial^{n_1}L(z)\, \partial^{n_2}L(z) \cdots \partial^{n_k}L(z){:}$$
and $\omega = L_{-2}|0\rangle$. The fact that the Virasoro field $L(z)$ is local with respect to itself can be deduced from the formula for its self-commutator:
$$[L(z), L(x)] = \left(\frac{\partial}{\partial x}L(x)\right) x^{-1}\delta\!\left(\frac{z}{x}\right) - 2L(x)\, x^{-1}\frac{\partial}{\partial z}\delta\!\left(\frac{z}{x}\right) - \frac{1}{12}c\, x^{-1}\left(\frac{\partial}{\partial z}\right)^{3}\delta\!\left(\frac{z}{x}\right)$$
where c is the central charge.
Given a vertex algebra homomorphism from a Virasoro vertex algebra of central charge c to any other vertex algebra, the vertex operator attached to the image of ω automatically satisfies the Virasoro relations, i.e., the image of ω is a conformal vector. Conversely, any conformal vector in a vertex algebra induces a distinguished vertex algebra homomorphism from some Virasoro vertex operator algebra.
The Virasoro vertex operator algebras are simple, except when $c$ has the form $1 - 6(p-q)^{2}/pq$ for coprime integers $p, q$ strictly greater than 1 – this follows from Kac's determinant formula. In these exceptional cases, one has a unique maximal ideal, and the corresponding quotient is called a minimal model. When $p = q+1$, the vertex algebras are unitary representations of Virasoro, and their modules are known as discrete series representations. They play an important role in conformal field theory in part because they are unusually tractable, and for small $p$, they correspond to well-known statistical mechanics systems at criticality, e.g., the Ising model, the tri-critical Ising model, the three-state Potts model, etc. By work of Weiqiang Wang concerning fusion rules, we have a full description of the tensor categories of unitary minimal models. For example, when $c = 1/2$ (Ising), there are three irreducible modules with lowest $L_0$-weight 0, 1/2, and 1/16, and the fusion ring is $\mathbb{Z}[x,y]/(x^{2}-1,\; y^{2}-x-1,\; xy-y)$.
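The Ising fusion ring just described is small enough to verify by machine; the following sketch (our own encoding of the basis $(1, x, y)$ as coefficient triples, corresponding to the three irreducible modules) multiplies in $\mathbb{Z}[x,y]/(x^2-1,\ y^2-x-1,\ xy-y)$ and checks the fusion rules and associativity.

```python
import itertools

# Fusion ring Z[x,y]/(x^2 - 1, y^2 - x - 1, xy - y) of the c = 1/2 Ising model,
# with basis (1, x, y) corresponding to the modules of lowest weight (0, 1/2, 1/16).
TABLE = {
    (0, 0): (1, 0, 0), (0, 1): (0, 1, 0), (0, 2): (0, 0, 1),
    (1, 0): (0, 1, 0), (1, 1): (1, 0, 0), (1, 2): (0, 0, 1),
    (2, 0): (0, 0, 1), (2, 1): (0, 0, 1), (2, 2): (1, 1, 0),
}

def mul(a, b):
    """Multiply two elements given as coefficient triples over the basis (1, x, y)."""
    out = [0, 0, 0]
    for i in range(3):
        for j in range(3):
            for k in range(3):
                out[k] += a[i] * b[j] * TABLE[(i, j)][k]
    return tuple(out)

one, x, y = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert mul(x, x) == one            # epsilon x epsilon = 1
assert mul(x, y) == y              # epsilon x sigma = sigma
assert mul(y, y) == (1, 1, 0)      # sigma x sigma = 1 + epsilon
for a, b, c in itertools.product([one, x, y], repeat=3):
    assert mul(mul(a, b), c) == mul(a, mul(b, c))   # fusion is associative
```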
=== Affine vertex algebra ===
By replacing the Heisenberg Lie algebra with an untwisted affine Kac–Moody Lie algebra (i.e., the universal central extension of the loop algebra on a finite-dimensional simple Lie algebra), one may construct the vacuum representation in much the same way as the free boson vertex algebra is constructed. This algebra arises as the current algebra of the Wess–Zumino–Witten model, which produces the anomaly that is interpreted as the central extension.
Concretely, pulling back the central extension
$$0 \to \mathbb{C} \to \hat{\mathfrak{g}} \to \mathfrak{g}[t, t^{-1}] \to 0$$
along the inclusion $\mathfrak{g}[t] \to \mathfrak{g}[t, t^{-1}]$ yields a split extension, and the vacuum module is induced from the one-dimensional representation of the latter on which a central basis element acts by some chosen constant called the "level". Since central elements can be identified with invariant inner products on the finite type Lie algebra $\mathfrak{g}$, one typically normalizes the level so that the Killing form has level twice the dual Coxeter number. Equivalently, level one gives the inner product for which the longest root has norm 2. This matches the loop algebra convention, where levels are discretized by the third cohomology of simply connected compact Lie groups.
By choosing a basis $J^{a}$ of the finite type Lie algebra, one may form a basis of the affine Lie algebra using $J^{a}_{n} = J^{a} t^{n}$ together with a central element $K$. By reconstruction, we can describe the vertex operators by normal ordered products of derivatives of the fields
$$J^{a}(z) = \sum_{n=-\infty}^{\infty} J^{a}_{n}\, z^{-n-1} = \sum_{n=-\infty}^{\infty} (J^{a} t^{n})\, z^{-n-1}.$$
When the level is non-critical, i.e., the inner product is not minus one half of the Killing form, the vacuum representation has a conformal element, given by the Sugawara construction. For any choice of dual bases $J^{a}$, $J_{a}$ with respect to the level 1 inner product, the conformal element is
$$\omega = \frac{1}{2(k + h^{\vee})} \sum_{a} J_{a,-1} J^{a}_{-1} 1$$
and yields a vertex operator algebra whose central charge is $k \cdot \dim\mathfrak{g}/(k + h^{\vee})$. At critical level, the conformal structure is destroyed, since the denominator is zero, but one may produce operators $L_n$ for $n \geq -1$ by taking a limit as $k$ approaches criticality.
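For orientation, the central charge formula is easy to evaluate; the sketch below (the data values are the standard $(\dim\mathfrak{g}, h^\vee)$ pairs, supplied by us) computes $c = k\dim\mathfrak{g}/(k + h^\vee)$ for a few familiar cases.

```python
from fractions import Fraction

def sugawara_c(k, dim_g, h_vee):
    """Central charge c = k * dim(g) / (k + h_vee) of the level-k affine vertex algebra."""
    return Fraction(k * dim_g, k + h_vee)

# (dim g, dual Coxeter number h_vee) for some simple Lie algebras
DATA = {"sl2": (3, 2), "sl3": (8, 3), "e8": (248, 30)}
assert sugawara_c(1, *DATA["sl2"]) == 1                 # level-1 sl2 WZW model
assert sugawara_c(1, *DATA["sl3"]) == 2                 # level 1, simply laced: c = rank
assert sugawara_c(1, *DATA["e8"]) == 8
assert sugawara_c(2, *DATA["sl2"]) == Fraction(3, 2)
```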
== Modules ==
Much like ordinary rings, vertex algebras admit a notion of module, or representation. Modules play an important role in conformal field theory, where they are often called sectors. A standard assumption in the physics literature is that the full Hilbert space of a conformal field theory decomposes into a sum of tensor products of left-moving and right-moving sectors:
$$\mathcal{H} \cong \bigoplus_{i \in I} M_i \otimes \overline{M_i}$$
That is, a conformal field theory has a vertex operator algebra of left-moving chiral symmetries, a vertex operator algebra of right-moving chiral symmetries, and the sectors moving in a given direction are modules for the corresponding vertex operator algebra.
=== Definition ===
Given a vertex algebra $V$ with multiplication $Y$, a $V$-module is a vector space $M$ equipped with an action $Y^{M} : V \otimes M \to M((z))$, satisfying the following conditions:
* (Identity) $Y^{M}(1,z) = \mathrm{Id}_M$
* (Associativity, or Jacobi identity) For any $u, v \in V$, $w \in M$, there is an element $X(u,v,w;z,x) \in M[[z,x]]\left[z^{-1}, x^{-1}, (z-x)^{-1}\right]$ such that $Y^{M}(u,z)Y^{M}(v,x)w$ and $Y^{M}(Y(u,z-x)v,x)w$ are the corresponding expansions of $X(u,v,w;z,x)$ in $M((z))((x))$ and $M((x))((z-x))$.
Equivalently, the following "Jacobi identity" holds:
$$z^{-1}\delta\!\left(\frac{y-x}{z}\right)Y^{M}(u,x)Y^{M}(v,y)w - z^{-1}\delta\!\left(\frac{-y+x}{z}\right)Y^{M}(v,y)Y^{M}(u,x)w = y^{-1}\delta\!\left(\frac{x+z}{y}\right)Y^{M}(Y(u,z)v,y)w.$$
The modules of a vertex algebra form an abelian category. When working with vertex operator algebras, the previous definition is sometimes given the name weak $V$-module, and genuine $V$-modules must respect the conformal structure given by the conformal vector $\omega$. More precisely, they are required to satisfy the additional condition that $L_0$ acts semisimply with finite-dimensional eigenspaces and eigenvalues bounded below in each coset of $\mathbb{Z}$. Work of Huang, Lepowsky, Miyamoto, and Zhang has shown at various levels of generality that modules of a vertex operator algebra admit a fusion tensor product operation, and form a braided tensor category.
When the category of $V$-modules is semisimple with finitely many irreducible objects, the vertex operator algebra $V$ is called rational. Rational vertex operator algebras satisfying an additional finiteness hypothesis (known as Zhu's $C_2$-cofiniteness condition) are known to be particularly well-behaved, and are called regular. For example, Zhu's 1996 modular invariance theorem asserts that the characters of modules of a regular VOA form a vector-valued representation of $\mathrm{SL}(2,\mathbb{Z})$. In particular, if a VOA is holomorphic, that is, its representation category is equivalent to that of vector spaces, then its partition function is $\mathrm{SL}(2,\mathbb{Z})$-invariant up to a constant. Huang showed that the category of modules of a regular VOA is a modular tensor category, and its fusion rules satisfy the Verlinde formula.
=== Heisenberg algebra modules ===
Modules of the Heisenberg algebra can be constructed as Fock spaces $\pi_\lambda$ for $\lambda \in \mathbb{C}$, which are induced representations of the Heisenberg Lie algebra, given by a vacuum vector $v_\lambda$ satisfying $b_n v_\lambda = 0$ for $n > 0$ and $b_0 v_\lambda = \lambda v_\lambda$, and acted on freely by the negative modes $b_{-n}$ for $n > 0$. The space can be written as $\mathbb{C}[b_{-1}, b_{-2}, \cdots] v_\lambda$. Every irreducible $\mathbb{Z}$-graded Heisenberg algebra module with gradation bounded below is of this form.
These are used to construct lattice vertex algebras, which as vector spaces are direct sums of Heisenberg modules, when the image of $Y$ is extended appropriately to module elements.
The module category is not semisimple, since one may induce a representation of the abelian Lie algebra where $b_0$ acts by a nontrivial Jordan block. For the rank $n$ free boson, one has an irreducible module $V_\lambda$ for each vector $\lambda$ in complex $n$-dimensional space. Each vector $b \in \mathbb{C}^n$ yields an operator $b_0$, and the Fock space $V_\lambda$ is distinguished by the property that each such $b_0$ acts as scalar multiplication by the inner product $(b, \lambda)$.
=== Twisted modules ===
Unlike ordinary rings, vertex algebras admit a notion of twisted module attached to an automorphism. For an automorphism $\sigma$ of order $N$, the action has the form $V \otimes M \to M((z^{1/N}))$, with the following monodromy condition: if $u \in V$ satisfies $\sigma u = e^{2\pi i k/N} u$, then $u_n = 0$ unless $n$ satisfies $n + k/N \in \mathbb{Z}$ (there is some disagreement about signs among specialists). Geometrically, twisted modules can be attached to branch points on an algebraic curve with a ramified Galois cover. In the conformal field theory literature, twisted modules are called twisted sectors and are intimately connected with string theory on orbifolds.
== Additional examples ==
=== Vertex operator algebra defined by an even lattice ===
The lattice vertex algebra construction was the original motivation for defining vertex algebras. It is constructed by taking a sum of irreducible modules for the Heisenberg algebra corresponding to lattice vectors, and defining a multiplication operation by specifying intertwining operators between them. That is, if $\Lambda$ is an even lattice (if the lattice is not even, the structure obtained is instead a vertex superalgebra), the lattice vertex algebra $V_\Lambda$ decomposes into free bosonic modules as
$$V_\Lambda \cong \bigoplus_{\lambda \in \Lambda} V_\lambda$$
Lattice vertex algebras are canonically attached to double covers of even integral lattices, rather than the lattices themselves. While each such lattice has a unique lattice vertex algebra up to isomorphism, the vertex algebra construction is not functorial, because lattice automorphisms have an ambiguity in lifting.
The double covers in question are uniquely determined up to isomorphism by the following rule: elements have the form $\pm e_\alpha$ for lattice vectors $\alpha \in \Lambda$ (i.e., there is a map to $\Lambda$ sending $e_\alpha$ to $\alpha$ that forgets signs), and multiplication satisfies the relations $e_\alpha e_\beta = (-1)^{(\alpha,\beta)} e_\beta e_\alpha$. Another way to describe this is that given an even lattice $\Lambda$, there is a unique (up to coboundary) normalized cocycle $\varepsilon(\alpha, \beta)$ with values $\pm 1$ such that $(-1)^{(\alpha,\beta)} = \varepsilon(\alpha, \beta)\, \varepsilon(\beta, \alpha)$, where the normalization condition is that $\varepsilon(\alpha, 0) = \varepsilon(0, \alpha) = 1$ for all $\alpha \in \Lambda$. This cocycle induces a central extension of $\Lambda$ by a group of order 2, and we obtain a twisted group ring $\mathbb{C}_\varepsilon[\Lambda]$ with basis $e_\alpha$ ($\alpha \in \Lambda$) and multiplication rule $e_\alpha e_\beta = \varepsilon(\alpha, \beta) e_{\alpha+\beta}$ – the cocycle condition on $\varepsilon$ ensures associativity of the ring.
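One standard way to realize such a cocycle concretely is to choose it bimultiplicatively on a basis. The sketch below (the particular sign choices are one convenient representative of the cocycle class, assumed by us rather than taken from the text) does this for the A2 root lattice and checks the commutator relation, the normalization, and the 2-cocycle condition.

```python
import itertools

G = [[2, -1], [-1, 2]]     # Gram matrix of the A2 root lattice (an even lattice)

def inner(a, b):
    return sum(a[i] * G[i][j] * b[j] for i in range(2) for j in range(2))

def eps(a, b):
    """Bimultiplicative cocycle: on basis vectors eps(e_i, e_j) = (-1)^(e_i, e_j)
    for i > j and 1 otherwise, extended biadditively in the exponent."""
    s = sum(a[i] * G[i][j] * b[j] for i in range(2) for j in range(2) if i > j)
    return (-1) ** (s % 2)

add = lambda a, b: (a[0] + b[0], a[1] + b[1])
vecs = list(itertools.product(range(-2, 3), repeat=2))
for a in vecs:
    assert eps(a, (0, 0)) == eps((0, 0), a) == 1        # normalization
    for b in vecs:
        # commutator condition: e_a e_b = (-1)^(a,b) e_b e_a
        assert eps(a, b) * eps(b, a) == (-1) ** (inner(a, b) % 2)
for a, b, c in itertools.product(vecs[:6], repeat=3):
    # 2-cocycle condition, equivalent to associativity of C_eps[Lambda]
    assert eps(a, b) * eps(add(a, b), c) == eps(b, c) * eps(a, add(b, c))
```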
The vertex operator attached to the lowest weight vector $v_\lambda$ in the Fock space $V_\lambda$ is
$$Y(v_\lambda, z) = e_\lambda\, {:}\exp\textstyle\int\!\lambda(z){:} = e_\lambda z^{\lambda} \exp\!\left(\sum_{n<0} \lambda_n \frac{z^{-n}}{n}\right) \exp\!\left(\sum_{n>0} \lambda_n \frac{z^{-n}}{n}\right),$$
where $z^{\lambda}$ is shorthand for the linear map that takes any element of the $\alpha$-Fock space $V_\alpha$ to the monomial $z^{(\lambda,\alpha)}$. The vertex operators for other elements of the Fock space are then determined by reconstruction.
As in the case of the free boson, one has a choice of conformal vector, given by an element $s$ of the vector space $\Lambda \otimes \mathbb{C}$, but the condition that the extra Fock spaces have integer $L_0$ eigenvalues constrains the choice of $s$: for an orthonormal basis $x_i$, the vector $\tfrac{1}{2}\sum_i x_{i,-1}^{2} + s_{-2}$ must satisfy $(s, \lambda) \in \mathbb{Z}$ for all $\lambda \in \Lambda$, i.e., $s$ lies in the dual lattice.
If the even lattice $\Lambda$ is generated by its "root vectors" (those satisfying $(\alpha, \alpha) = 2$), and any two root vectors are joined by a chain of root vectors with consecutive inner products non-zero, then the vertex operator algebra is the unique simple quotient of the vacuum module of the affine Kac–Moody algebra of the corresponding simply laced simple Lie algebra at level one. This is known as the Frenkel–Kac (or Frenkel–Kac–Segal) construction, and is based on the earlier construction by Sergio Fubini and Gabriele Veneziano of the tachyonic vertex operator in the dual resonance model. Among other features, the zero modes of the vertex operators corresponding to root vectors give a construction of the underlying simple Lie algebra, related to a presentation originally due to Jacques Tits. In particular, one obtains a construction of all ADE type Lie groups directly from their root lattices. This is commonly considered the simplest way to construct the 248-dimensional group $E_8$.
=== Monster vertex algebra ===
The monster vertex algebra V♮ (also called the "moonshine module") is the key to Borcherds's proof of the Monstrous moonshine conjectures. It was constructed by Frenkel, Lepowsky, and Meurman in 1988. It is notable because its character is the j-invariant with no constant term, j(τ) − 744, and its automorphism group is the monster group. It is constructed by orbifolding the lattice vertex algebra constructed from the Leech lattice by the order 2 automorphism induced by reflecting the Leech lattice in the origin. That is, one forms the direct sum of the Leech lattice VOA with the twisted module, and takes the fixed points under an induced involution. Frenkel, Lepowsky, and Meurman conjectured in 1988 that V♮ is the unique holomorphic vertex operator algebra with central charge 24 and partition function j(τ) − 744. This conjecture is still open.
=== Chiral de Rham complex ===
Malikov, Schechtman, and Vaintrob showed that by a method of localization, one may canonically attach a bcβγ (boson–fermion superfield) system to a smooth complex manifold. This complex of sheaves has a distinguished differential, and the global cohomology is a vertex superalgebra. Ben-Zvi, Heluani, and Szczesny showed that a Riemannian metric on the manifold induces an N=1 superconformal structure, which is promoted to an N=2 structure if the metric is Kähler and Ricci-flat, and a hyperkähler structure induces an N=4 structure. Borisov and Libgober showed that one may obtain the two-variable elliptic genus of a compact complex manifold from the cohomology of the Chiral de Rham complex. If the manifold is Calabi–Yau, then this genus is a weak Jacobi form.
=== Vertex algebra associated to a surface defect ===
A vertex algebra can arise as a subsector of a higher-dimensional quantum field theory which localizes to a two real-dimensional submanifold of the space on which the higher dimensional theory is defined. A prototypical example is the construction of Beem, Lemos, Liendo, Peelaers, Rastelli, and van Rees, which associates a vertex algebra to any 4d N=2 superconformal field theory. This vertex algebra has the property that its character coincides with the Schur index of the 4d superconformal theory. When the theory admits a weak coupling limit, the vertex algebra has an explicit description as a BRST reduction of a bcβγ system.
== Vertex operator superalgebras ==
By allowing the underlying vector space to be a superspace (i.e., a Z/2Z-graded vector space V = V+ ⊕ V−), one can define a vertex superalgebra by the same data as a vertex algebra, with 1 in V+ and T an even operator. The axioms are essentially the same, but one must incorporate suitable signs into the locality axiom, or one of the equivalent formulations. That is, if a and b are homogeneous, one compares Y(a,z)Y(b,w) with εY(b,w)Y(a,z), where ε is −1 if both a and b are odd and 1 otherwise. If in addition there is a Virasoro element ω in the even part of V2, and the usual grading restrictions are satisfied, then V is called a vertex operator superalgebra.
One of the simplest examples is the vertex operator superalgebra generated by a single free fermion ψ. As a Virasoro representation, it has central charge 1/2, and decomposes as a direct sum of Ising modules of lowest weight 0 and 1/2. One may also describe it as a spin representation of the Clifford algebra on the quadratic space t1/2C[t,t−1](dt)1/2 with residue pairing. The vertex operator superalgebra is holomorphic, in the sense that all modules are direct sums of itself, i.e., the module category is equivalent to the category of vector spaces.
The tensor square of the free fermion is called the free charged fermion, and by boson–fermion correspondence, it is isomorphic to the lattice vertex superalgebra attached to the odd lattice Z. This correspondence has been used by Date–Jimbo–Kashiwara-Miwa to construct soliton solutions to the KP hierarchy of nonlinear PDEs.
== Superconformal structures ==
The Virasoro algebra has some supersymmetric extensions that naturally appear in superconformal field theory and superstring theory. The N=1, 2, and 4 superconformal algebras are of particular importance.
Infinitesimal holomorphic superconformal transformations of a supercurve (with one even local coordinate z and N odd local coordinates θ1,...,θN) are generated by the coefficients of a super-stress–energy tensor T(z, θ1, ..., θN).
When N=1, T has odd part given by a Virasoro field L(z), and even part given by a field
{\displaystyle G(z)=\sum _{n}G_{n}z^{-n-3/2}}
subject to commutation relations
{\displaystyle [G_{m},L_{n}]=(m-n/2)G_{m+n}}
{\displaystyle [G_{m},G_{n}]=2L_{m+n}+\delta _{m,-n}{\frac {4m^{2}-1}{12}}c}
By examining the symmetry of the operator products, one finds that there are two possibilities for the field G: the indices n are either all integers, yielding the Ramond algebra, or all half-integers, yielding the Neveu–Schwarz algebra. These algebras have unitary discrete series representations at central charge
{\displaystyle {\hat {c}}={\frac {2}{3}}c=1-{\frac {8}{m(m+2)}},\quad m\geq 3}
and unitary representations for all c greater than 3/2, with lowest weight h only constrained by h ≥ 0 for Neveu–Schwarz and h ≥ c/24 for Ramond.
An N=1 superconformal vector in a vertex operator algebra V of central charge c is an odd element τ ∈ V of weight 3/2, such that
{\displaystyle Y(\tau ,z)=G(z)=\sum _{n\in \mathbb {Z} +1/2}G_{n}z^{-n-3/2},}
G−1/2τ = ω, and the coefficients of G(z) yield an action of the N=1 Neveu–Schwarz algebra at central charge c.
For N=2 supersymmetry, one obtains even fields L(z) and J(z), and odd fields G+(z) and G−(z). The field J(z) generates an action of the Heisenberg algebra (described by physicists as a U(1) current). There are both Ramond and Neveu–Schwarz N=2 superconformal algebras, depending on whether the indexing on the G fields is integral or half-integral. However, the U(1) current gives rise to a one-parameter family of isomorphic superconformal algebras interpolating between Ramond and Neveu–Schwarz, and this deformation of structure is known as spectral flow. The unitary representations are given by discrete series with central charge c = 3 − 6/m for integers m at least 3, and a continuum of lowest weights for c > 3.
An N=2 superconformal structure on a vertex operator algebra is a pair of odd elements τ+, τ− of weight 3/2, and an even element μ of weight 1 such that τ± generate G±(z), and μ generates J(z).
For N=3 and 4, unitary representations only have central charges in a discrete family, with c=3k/2 and 6k, respectively, as k ranges over positive integers.
== Additional constructions ==
Fixed point subalgebras: Given an action of a symmetry group on a vertex operator algebra, the subalgebra of fixed vectors is also a vertex operator algebra. In 2013, Miyamoto proved that two important finiteness properties, namely Zhu's condition C2 and regularity, are preserved when taking fixed points under finite solvable group actions.
Current extensions: Given a vertex operator algebra and some modules of integral conformal weight, one may under favorable circumstances describe a vertex operator algebra structure on the direct sum. Lattice vertex algebras are a standard example of this. Another family of examples are framed VOAs, which start with tensor products of Ising models, and add modules that correspond to suitably even codes.
Orbifolds: Given a finite cyclic group acting on a holomorphic VOA, it is conjectured that one may construct a second holomorphic VOA by adjoining irreducible twisted modules and taking fixed points under an induced automorphism, as long as those twisted modules have suitable conformal weight. This is known to be true in special cases, e.g., groups of order at most 3 acting on lattice VOAs.
The coset construction (due to Goddard, Kent, and Olive): Given a vertex operator algebra V of central charge c and a set S of vectors, one may define the commutant C(V,S) to be the subspace of vectors v that strictly commute with all fields coming from S, i.e., such that Y(s,z)v ∈ V[[z]] for all s ∈ S. This turns out to be a vertex subalgebra, with Y, T, and identity inherited from V. If S is a VOA of central charge cS, the commutant is a VOA of central charge c − cS. For example, the embedding of SU(2) at level k+1 into the tensor product of two SU(2) algebras at levels k and 1 yields the Virasoro discrete series with p = k+2, q = k+3, and this was used to prove their existence in the 1980s. Again with SU(2), the embedding of level k+2 into the tensor product of level k and level 2 yields the N=1 superconformal discrete series.
BRST reduction: For any degree 1 vector v whose zero mode satisfies v0² = 0, the cohomology of this operator has a graded vertex superalgebra structure. More generally, one may use any weight 1 field whose residue has square zero. The usual method is to tensor with fermions, as one then has a canonical differential. An important special case is quantum Drinfeld–Sokolov reduction applied to affine Kac–Moody algebras to obtain affine W-algebras as degree 0 cohomology. These W-algebras also admit constructions as vertex subalgebras of free bosons given by kernels of screening operators.
== Related algebraic structures ==
If one considers only the singular part of the OPE in a vertex algebra, one arrives at the definition of a Lie conformal algebra. Since one is often only concerned with the singular part of the OPE, this makes Lie conformal algebras a natural object to study. There is a functor from vertex algebras to Lie conformal algebras that forgets the regular part of OPEs, and it has a left adjoint, called the "universal vertex algebra" functor. Vacuum modules of affine Kac–Moody algebras and Virasoro vertex algebras are universal vertex algebras, and in particular, they can be described very concisely once the background theory is developed.
There are several generalizations of the notion of vertex algebra in the literature. Some mild generalizations involve a weakening of the locality axiom to allow monodromy, e.g., the abelian intertwining algebras of Dong and Lepowsky. One may view these roughly as vertex algebra objects in a braided tensor category of graded vector spaces, in much the same way that a vertex superalgebra is such an object in the category of super vector spaces. More complicated generalizations relate to q-deformations and representations of quantum groups, such as in work of Frenkel–Reshetikhin, Etingof–Kazhdan, and Li.
Beilinson and Drinfeld introduced a sheaf-theoretic notion of chiral algebra that is closely related to the notion of vertex algebra, but is defined without using any visible power series. Given an algebraic curve X, a chiral algebra on X is a DX-module A equipped with a multiplication operation
{\displaystyle j_{*}j^{*}(A\boxtimes A)\to \Delta _{*}A}
on X×X that satisfies an associativity condition. They also introduced an equivalent notion of factorization algebra that is a system of quasicoherent sheaves on all finite products of the curve, together with a compatibility condition involving pullbacks to the complement of various diagonals. Any translation-equivariant chiral algebra on the affine line can be identified with a vertex algebra by taking the fiber at a point, and there is a natural way to attach a chiral algebra on a smooth algebraic curve to any vertex operator algebra.
== See also ==
Operator algebra
Zhu algebra
== Notes ==
=== Citations ===
== Sources == | Wikipedia/Vertex_operator_algebra |
In mathematics, a quotient algebra is the result of partitioning the elements of an algebraic structure using a congruence relation.
Quotient algebras are also called factor algebras. Here, the congruence relation must be an equivalence relation that is additionally compatible with all the operations of the algebra, in the formal sense described below.
Its equivalence classes partition the elements of the given algebraic structure. The quotient algebra has these classes as its elements, and the compatibility conditions are used to give the classes an algebraic structure.
The idea of the quotient algebra abstracts the quotient structures of ring theory (quotient rings), group theory (quotient groups), linear algebra (quotient spaces), and representation theory (quotient modules) into one common notion.
== Compatible relation ==
Let A be the set of the elements of an algebra 𝒜, and let E be an equivalence relation on the set A. The relation E is said to be compatible with (or have the substitution property with respect to) an n-ary operation f if (a_i, b_i) ∈ E for 1 ≤ i ≤ n implies (f(a_1, a_2, …, a_n), f(b_1, b_2, …, b_n)) ∈ E for any a_i, b_i ∈ A. An equivalence relation compatible with all the operations of an algebra is called a congruence with respect to this algebra.
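As a concrete illustration of the substitution property (a sketch with an assumed toy algebra, not from the article), one can check by brute force that congruence modulo 4 is compatible with addition and multiplication modulo 12, while congruence modulo 5 is not:

```python
# Toy check (illustrative): on the algebra ({0,...,11}, + mod 12, * mod 12),
# the relation a ~ b iff a ≡ b (mod 4) has the substitution property for
# both binary operations, hence is a congruence; a ≡ b (mod 5) is a mere
# equivalence relation and fails for addition.

A = range(12)

def compatible(rel, op):
    # binary (n = 2) case of the definition: (a1,b1) in E and (a2,b2) in E
    # must imply (op(a1,a2), op(b1,b2)) in E
    return all(
        rel(op(a1, a2), op(b1, b2))
        for a1 in A for b1 in A if rel(a1, b1)
        for a2 in A for b2 in A if rel(a2, b2)
    )

mod4 = lambda a, b: (a - b) % 4 == 0
mod5 = lambda a, b: (a - b) % 5 == 0
add = lambda x, y: (x + y) % 12
mul = lambda x, y: (x * y) % 12

assert compatible(mod4, add) and compatible(mod4, mul)  # mod4 is a congruence
assert not compatible(mod5, add)   # e.g. 0 ~ 10, but 0+0 = 0 and 10+10 = 8
```

The mod-5 failure shows that compatibility genuinely depends on the operations, not just on the equivalence relation.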
== Quotient algebras and homomorphisms ==
Any equivalence relation E in a set A partitions this set into equivalence classes. The set of these equivalence classes is usually called the quotient set, and denoted A/E. For an algebra 𝒜, it is straightforward to define the operations induced on the elements of A/E if E is a congruence. Specifically, for any operation f_i^𝒜 of arity n_i in 𝒜 (where the superscript simply denotes that it is an operation in 𝒜, and the subscript i ∈ I enumerates the functions in 𝒜 and their arities), define f_i^(𝒜/E) : (A/E)^(n_i) → A/E as
{\displaystyle f_{i}^{{\mathcal {A}}/E}([a_{1}]_{E},\ldots ,[a_{n_{i}}]_{E})=[f_{i}^{\mathcal {A}}(a_{1},\ldots ,a_{n_{i}})]_{E},}
where [x]_E ∈ A/E denotes the equivalence class of x ∈ A generated by E ("x modulo E").
For an algebra 𝒜 = (A, (f_i^𝒜)_(i∈I)), given a congruence E on 𝒜, the algebra 𝒜/E = (A/E, (f_i^(𝒜/E))_(i∈I)) is called the quotient algebra (or factor algebra) of 𝒜 modulo E. There is a natural homomorphism from 𝒜 to 𝒜/E mapping every element to its equivalence class. In fact, every homomorphism h determines a congruence relation via the kernel of the homomorphism,
{\displaystyle \mathop {\mathrm {ker} } \,h=\{(a,a')\in A^{2}\,|\,h(a)=h(a')\}\subseteq A^{2}.}
Given an algebra 𝒜, a homomorphism h thus defines two algebras homomorphic to 𝒜: the image h(𝒜) and the quotient 𝒜/ker h. The two are isomorphic, a result known as the homomorphic image theorem or as the first isomorphism theorem for universal algebra. Formally, let h : 𝒜 → ℬ be a surjective homomorphism. Then there exists a unique isomorphism g from 𝒜/ker h onto ℬ such that g composed with the natural homomorphism induced by ker h equals h.
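The theorem can be checked mechanically on a toy example (a sketch, not from the article; the choice of the surjection h : Z12 → Z4, x ↦ x mod 4 is an illustrative assumption):

```python
# Illustrative sketch: for h : Z_12 -> Z_4, h(x) = x mod 4, the classes of
# ker h form the quotient algebra Z_12 / ker h, and [a] -> h(a) is a
# bijection respecting the induced addition, i.e. an isomorphism onto Z_4.

A = list(range(12))
h = lambda x: x % 4

# kernel congruence: pairs of elements identified by h
ker = {(a, b) for a in A for b in A if h(a) == h(b)}

# equivalence classes of ker h, playing the role of A / ker h
classes = {frozenset(b for b in A if (a, b) in ker) for a in A}

def add_classes(c1, c2):
    # induced operation on classes: [a] + [b] = [a + b]; well defined
    # because ker h is a congruence for addition mod 12
    rep = (min(c1) + min(c2)) % 12
    return next(c for c in classes if rep in c)

# g : A/ker h -> Z_4, g([a]) = h(a); check it is a bijective homomorphism
g = {c: h(min(c)) for c in classes}
assert len(set(g.values())) == len(classes) == 4              # bijective
for c1 in classes:
    for c2 in classes:
        assert g[add_classes(c1, c2)] == (g[c1] + g[c2]) % 4  # homomorphism
```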
== Congruence lattice ==
For every algebra 𝒜 on the set A, the identity relation on A and A × A are trivial congruences. An algebra with no other congruences is called simple.
Let Con(𝒜) be the set of congruences on the algebra 𝒜. Because congruences are closed under intersection, we can define a meet operation ∧ : Con(𝒜) × Con(𝒜) → Con(𝒜) by simply taking the intersection of the congruences: E1 ∧ E2 = E1 ∩ E2.
On the other hand, congruences are not closed under union. However, we can define the closure of any binary relation E, with respect to a fixed algebra 𝒜, such that it is a congruence, in the following way:
{\displaystyle \langle E\rangle _{\mathcal {A}}=\bigcap \{F\in \mathrm {Con} ({\mathcal {A}})\mid E\subseteq F\}.}
Note that the closure of a binary relation is a congruence and thus depends on the operations in 𝒜, not just on the carrier set. Now define the join ∨ : Con(𝒜) × Con(𝒜) → Con(𝒜) as E1 ∨ E2 = ⟨E1 ∪ E2⟩𝒜.
For every algebra 𝒜, (Con(𝒜), ∧, ∨) with the two operations defined above forms a lattice, called the congruence lattice of 𝒜.
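For a small algebra the congruence lattice can be computed by brute force (a sketch, not from the article; the algebra (Z4, + mod 4) is an illustrative choice): enumerate all partitions of the carrier set and keep the compatible ones.

```python
# Illustrative sketch: compute Con(A) for A = ({0,1,2,3}, + mod 4).
# One finds exactly 3 congruences, mirroring the subgroup chain
# {0} < {0,2} < Z_4.
from itertools import product

A = [0, 1, 2, 3]
op = lambda x, y: (x + y) % 4

def partitions(s):
    # generate all set partitions of the list s
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for smaller in partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller

def is_congruence(part):
    cls = {x: i for i, block in enumerate(part) for x in block}
    return all(
        cls[op(a1, a2)] == cls[op(b1, b2)]
        for a1, b1 in product(A, A) if cls[a1] == cls[b1]
        for a2, b2 in product(A, A) if cls[a2] == cls[b2]
    )

con = [sorted(map(sorted, p)) for p in partitions(A) if is_congruence(p)]
print(len(con))  # 3: the identity relation, {{0,2},{1,3}}, and A x A
```

Meets and joins of these three congruences can then be read off directly: they form a chain, the simplest possible congruence lattice with three elements.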
== Maltsev conditions ==
If two congruences permute (commute) with the composition of relations as operation, i.e. α ∘ β = β ∘ α, then their join (in the congruence lattice) is equal to their composition: α ∘ β = α ∨ β. An algebra is called congruence-permutable if every pair of its congruences permutes; likewise a variety is said to be congruence-permutable if all its members are congruence-permutable algebras.
In 1954, Anatoly Maltsev established the following characterization of congruence-permutable varieties: a variety is congruence permutable if and only if there exists a ternary term q(x, y, z) such that q(x, y, y) ≈ x ≈ q(y, y, x); this is called a Maltsev term, and varieties with this property are called Maltsev varieties. Maltsev's characterization explains a large number of similar results in groups (take q = xy⁻¹z), rings, quasigroups (take q = (x / (y \ y))(y \ z)), complemented lattices, Heyting algebras, etc. Furthermore, every congruence-permutable algebra is congruence-modular, i.e. its lattice of congruences is a modular lattice as well; the converse is not true, however.
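Maltsev's group term can be verified directly (a sketch, not from the article): here the identities q(x, y, y) = x = q(y, y, x) are checked for q(x, y, z) = x·y⁻¹·z in the symmetric group S3, realised as permutation tuples.

```python
# Illustrative sketch: the Maltsev term for groups, q(x,y,z) = x y^{-1} z,
# checked exhaustively in S_3 (permutations of (0, 1, 2)).
from itertools import permutations

def compose(p, q):          # (p ∘ q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def q(x, y, z):             # Maltsev term: x y^{-1} z
    return compose(compose(x, inverse(y)), z)

G = list(permutations(range(3)))
assert all(q(x, y, y) == x and q(y, y, x) == x for x in G for y in G)
```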
After Maltsev's result, other researchers found characterizations based on conditions similar to that found by Maltsev but for other kinds of properties. In 1967 Bjarni Jónsson found the conditions for varieties having congruence lattices that are distributive (thus called congruence-distributive varieties), while in 1969 Alan Day did the same for varieties having congruence lattices that are modular. Generically, such conditions are called Maltsev conditions.
This line of research led to the Pixley–Wille algorithm for generating Maltsev conditions associated with congruence identities.
== See also ==
Quotient ring
Congruence lattice problem
Lattice of subgroups
== Notes ==
== References ==
Klaus Denecke; Shelly L. Wismath (2009). Universal algebra and coalgebra. World Scientific. pp. 14–17. ISBN 978-981-283-745-5.
Purna Chandra Biswal (2005). Discrete mathematics and graph theory. PHI Learning Pvt. Ltd. p. 215. ISBN 978-81-203-2721-4.
Clifford Bergman (2011). Universal Algebra: Fundamentals and Selected Topics. CRC Press. pp. 122–124, 137 (Maltsev varieties). ISBN 978-1-4398-5129-6. | Wikipedia/Quotient_(universal_algebra) |
In functional analysis, the weak operator topology, often abbreviated WOT, is the weakest topology on the set of bounded operators on a Hilbert space H such that the functional sending an operator T to the complex number ⟨Tx, y⟩ is continuous for any vectors x and y in the Hilbert space.
Explicitly, for an operator T there is a base of neighborhoods of the following type: choose a finite number of vectors xi, continuous functionals yi, and positive real constants εi indexed by the same finite set I. An operator S lies in the neighborhood if and only if |yi(T(xi) − S(xi))| < εi for all i ∈ I.
Equivalently, a net {Ti} ⊆ B(H) of bounded operators converges to T ∈ B(H) in WOT if for all y ∈ H* and x ∈ H, the net y(Tix) converges to y(Tx).
== Relationship with other topologies on B(H) ==
The WOT is the weakest among all common topologies on B(H), the bounded operators on a Hilbert space H.
=== Strong operator topology ===
The strong operator topology, or SOT, on B(H) is the topology of pointwise convergence. Because the inner product is a continuous function, the SOT is stronger than WOT. The following example shows that this inclusion is strict. Let H = ℓ²(N) and consider the sequence {Tⁿ} of right shifts. An application of Cauchy–Schwarz shows that Tⁿ → 0 in WOT. But clearly Tⁿ does not converge to 0 in SOT.
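The shift example can be made numerically explicit (a sketch, not from the article): finitely supported sequences stand in for ℓ², and the particular vectors x, y below are arbitrary choices. The pairing ⟨Tⁿx, y⟩ is eventually zero, while ‖Tⁿx‖ never shrinks.

```python
# Illustrative sketch: the right shift T on finitely supported sequences.
# For finitely supported x, y the pairing <T^n x, y> is eventually 0
# (consistent with T^n -> 0 in WOT), while ||T^n x|| = ||x|| for every n,
# so T^n x does not tend to 0 in norm (no SOT-convergence to 0).
from math import sqrt

def shift(x, n=1):
    # right shift by n: (T^n x)_k = x_{k-n}; x is a dict index -> value
    return {k + n: v for k, v in x.items()}

def pair(x, y):             # <x, y> for real finitely supported sequences
    return sum(v * y.get(k, 0.0) for k, v in x.items())

def norm(x):
    return sqrt(sum(v * v for v in x.values()))

x = {0: 1.0, 1: 2.0}        # arbitrary finitely supported vectors
y = {0: 3.0, 5: 1.0}

pairings = [pair(shift(x, n), y) for n in range(10)]
norms = [norm(shift(x, n)) for n in range(10)]
# once n exceeds the support of y, <T^n x, y> = 0, but ||T^n x|| stays ||x||
```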
The linear functionals on the set of bounded operators on a Hilbert space that are continuous in the strong operator topology are precisely those that are continuous in the WOT (actually, the WOT is the weakest operator topology that leaves continuous all strongly continuous linear functionals on the set B(H) of bounded operators on the Hilbert space H). Because of this fact, the closure of a convex set of operators in the WOT is the same as the closure of that set in the SOT.
It follows from the polarization identity that a net {Tα} converges to 0 in SOT if and only if Tα*Tα → 0 in WOT.
=== Weak-star operator topology ===
The predual of B(H) is the trace class operators C1(H), and it generates the w*-topology on B(H), called the weak-star operator topology or σ-weak topology. The weak-operator and σ-weak topologies agree on norm-bounded sets in B(H).
A net {Tα} ⊂ B(H) converges to T in WOT if and only if Tr(TαF) converges to Tr(TF) for all finite-rank operators F. Since every finite-rank operator is trace-class, this implies that WOT is weaker than the σ-weak topology. To see why the claim is true, recall that every finite-rank operator F is a finite sum
{\displaystyle F=\sum _{i=1}^{n}\lambda _{i}u_{i}v_{i}^{*}.}
So {Tα} converges to T in WOT means
{\displaystyle {\text{Tr}}\left(T_{\alpha }F\right)=\sum _{i=1}^{n}\lambda _{i}v_{i}^{*}\left(T_{\alpha }u_{i}\right)\longrightarrow \sum _{i=1}^{n}\lambda _{i}v_{i}^{*}\left(Tu_{i}\right)={\text{Tr}}(TF).}
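The rank-one case of this trace identity, Tr(T uv*) = v*(Tu), is easy to spot-check (a sketch, not from the article; the 3×3 matrix and vectors are arbitrary):

```python
# Illustrative sketch: for a rank-one F = u v*, Tr(T F) = v*(T u),
# verified for 3x3 real matrices with plain-Python linear algebra.

def matvec(T, u):
    return [sum(T[i][j] * u[j] for j in range(len(u))) for i in range(len(T))]

def outer(u, v):            # the rank-one operator u v* as a matrix
    return [[ui * vj for vj in v] for ui in u]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

T = [[1.0, 2.0, 0.0], [0.0, 3.0, 1.0], [4.0, 0.0, 1.0]]
u = [1.0, 0.0, 2.0]
v = [0.0, 1.0, 1.0]

F = outer(u, v)
lhs = trace(matmul(T, F))                              # Tr(T F)
rhs = sum(v[i] * matvec(T, u)[i] for i in range(3))    # v*(T u)
assert abs(lhs - rhs) < 1e-12
```

Linearity then gives the displayed formula for any finite sum of rank-one terms.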
Extending slightly, one can say that the weak-operator and σ-weak topologies agree on norm-bounded sets in B(H): every trace-class operator is of the form
{\displaystyle S=\sum _{i}\lambda _{i}u_{i}v_{i}^{*},}
where the series ∑i λi converges. Suppose supα ‖Tα‖ = k < ∞ and Tα → T
in WOT. For every trace-class S,
{\displaystyle {\text{Tr}}\left(T_{\alpha }S\right)=\sum _{i}\lambda _{i}v_{i}^{*}\left(T_{\alpha }u_{i}\right)\longrightarrow \sum _{i}\lambda _{i}v_{i}^{*}\left(Tu_{i}\right)={\text{Tr}}(TS),}
by invoking, for instance, the dominated convergence theorem.
Therefore the closed unit ball of B(H), and hence every norm-bounded and WOT-closed set, is compact in WOT, by the Banach–Alaoglu theorem.
== Other properties ==
The adjoint operation T → T*, as an immediate consequence of its definition, is continuous in WOT.
Multiplication is not jointly continuous in WOT: again let T be the unilateral shift. Appealing to Cauchy–Schwarz, one has that both Tn and T*n converge to 0 in WOT. But T*nTn is the identity operator for all n. (Because WOT coincides with the σ-weak topology on bounded sets, multiplication is not jointly continuous in the σ-weak topology.)
However, a weaker claim can be made: multiplication is separately continuous in WOT. If a net Ti → T in WOT, then STi → ST and TiS → TS in WOT.
== SOT and WOT on B(X,Y) when X and Y are normed spaces ==
We can extend the definitions of SOT and WOT to the more general setting where X and Y are normed spaces and B(X,Y) is the space of bounded linear operators of the form T : X → Y. In this case, each pair x ∈ X and y* ∈ Y* defines a seminorm ‖·‖_(x,y*) on B(X,Y) via the rule ‖T‖_(x,y*) = |y*(Tx)|. The resulting family of seminorms generates the weak operator topology on B(X,Y). Equivalently, the WOT on B(X,Y) is formed by taking for basic open neighborhoods those sets of the form
{\displaystyle N(T,F,\Lambda ,\epsilon ):=\left\{S\in B(X,Y):\left|y^{*}((S-T)x)\right|<\epsilon ,x\in F,y^{*}\in \Lambda \right\},}
where T ∈ B(X,Y), F ⊆ X is a finite set, Λ ⊆ Y* is also a finite set, and ε > 0. The space B(X,Y) is a locally convex topological vector space when endowed with the WOT.
The strong operator topology on B(X,Y) is generated by the family of seminorms ‖·‖_x, x ∈ X, via the rules ‖T‖_x = ‖Tx‖. Thus, a topological base for the SOT is given by open neighborhoods of the form
{\displaystyle N(T,F,\epsilon ):=\{S\in B(X,Y):\|(S-T)x\|<\epsilon ,x\in F\},}
where as before T ∈ B(X,Y), F ⊆ X is a finite set, and ε > 0.
=== Relationships between different topologies on B(X,Y) ===
The different terminology for the various topologies on B(X,Y) can sometimes be confusing. For instance, "strong convergence" for vectors in a normed space sometimes refers to norm-convergence, which is very often distinct from (and stronger than) SOT-convergence when the normed space in question is B(X,Y). The weak topology on a normed space X is the coarsest topology that makes the linear functionals in X* continuous; when we take B(X,Y) in place of X, the weak topology can be very different from the weak operator topology. And while the WOT is formally weaker than the SOT, the SOT is weaker than the operator norm topology.
In general, the following inclusions hold:
{\displaystyle \{{\text{WOT-open sets in }}B(X,Y)\}\subseteq \{{\text{SOT-open sets in }}B(X,Y)\}\subseteq \{{\text{operator-norm-open sets in }}B(X,Y)\},}
and these inclusions may or may not be strict depending on the choices of X and Y.
The WOT on B(X,Y) is a formally weaker topology than the SOT, but they nevertheless share some important properties. For example,
{\displaystyle (B(X,Y),{\text{SOT}})^{*}=(B(X,Y),{\text{WOT}})^{*}.}
Consequently, if S ⊆ B(X,Y) is convex then
{\displaystyle {\overline {S}}^{\text{SOT}}={\overline {S}}^{\text{WOT}},}
in other words, SOT-closure and WOT-closure coincide for convex sets.
== References ==
== See also ==
Topologies on the set of operators on a Hilbert space
Weak topology – Mathematical term
Weak-star operator topology | Wikipedia/Weak_operator_topology |
In mathematics, an algebra over a field (often simply called an algebra) is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure consisting of a set together with operations of multiplication and addition and scalar multiplication by elements of a field and satisfying the axioms implied by "vector space" and "bilinear".
The multiplication operation in an algebra may or may not be associative, leading to the notions of associative algebras where associativity of multiplication is assumed, and non-associative algebras, where associativity is not assumed (but not excluded, either). Given an integer n, the ring of real square matrices of order n is an example of an associative algebra over the field of real numbers under matrix addition and matrix multiplication since matrix multiplication is associative. Three-dimensional Euclidean space with multiplication given by the vector cross product is an example of a nonassociative algebra over the field of real numbers since the vector cross product is nonassociative, satisfying the Jacobi identity instead.
An algebra is unital or unitary if it has an identity element with respect to the multiplication. The ring of real square matrices of order n forms a unital algebra since the identity matrix of order n is the identity element with respect to matrix multiplication. It is an example of a unital associative algebra, a (unital) ring that is also a vector space.
Many authors use the term algebra to mean associative algebra, or unital associative algebra, or in some subjects such as algebraic geometry, unital associative commutative algebra.
Replacing the field of scalars by a commutative ring leads to the more general notion of an algebra over a ring. Algebras are not to be confused with vector spaces equipped with a bilinear form, like inner product spaces, as, for such a space, the result of a product is not in the space, but rather in the field of coefficients.
== Definition and motivation ==
=== Motivating examples ===
=== Definition ===
Let K be a field, and let A be a vector space over K equipped with an additional binary operation from A × A to A, denoted here by · (that is, if x and y are any two elements of A, then x · y is an element of A that is called the product of x and y). Then A is an algebra over K if the following identities hold for all elements x, y, z in A, and all elements (often called scalars) a and b in K:
Right distributivity: (x + y) · z = x · z + y · z
Left distributivity: z · (x + y) = z · x + z · y
Compatibility with scalars: (ax) · (by) = (ab) (x · y).
These three axioms are another way of saying that the binary operation is bilinear. An algebra over K is sometimes also called a K-algebra, and K is called the base field of A. The binary operation is often referred to as multiplication in A. The convention adopted in this article is that multiplication of elements of an algebra is not necessarily associative, although some authors use the term algebra to refer to an associative algebra.
When a binary operation on a vector space is commutative, left distributivity and right distributivity are equivalent, and, in this case, only one distributivity requires a proof. In general, for non-commutative operations left distributivity and right distributivity are not equivalent, and require separate proofs.
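As a sanity check, the three axioms can be verified numerically for a familiar algebra. The following sketch (using NumPy; the matrices and scalars are arbitrary choices) tests the identities in the algebra of real 2×2 matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = (rng.standard_normal((2, 2)) for _ in range(3))
a, b = 2.0, -3.0

# Right distributivity: (x + y) z = x z + y z
assert np.allclose((x + y) @ z, x @ z + y @ z)
# Left distributivity: z (x + y) = z x + z y
assert np.allclose(z @ (x + y), z @ x + z @ y)
# Compatibility with scalars: (a x)(b y) = (a b)(x y)
assert np.allclose((a * x) @ (b * y), (a * b) * (x @ y))
```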
== Basic concepts ==
=== Algebra homomorphisms ===
Given K-algebras A and B, a homomorphism of K-algebras or K-algebra homomorphism is a K-linear map f: A → B such that f(xy) = f(x) f(y) for all x, y in A. If A and B are unital, then a homomorphism satisfying f(1A) = 1B is said to be a unital homomorphism. The space of all K-algebra homomorphisms between A and B is frequently written as
{\displaystyle \mathbf {Hom} _{K{\text{-alg}}}(A,B).}
A K-algebra isomorphism is a bijective K-algebra homomorphism.
=== Subalgebras and ideals ===
A subalgebra of an algebra over a field K is a linear subspace that has the property that the product of any two of its elements is again in the subspace. In other words, a subalgebra of an algebra is a non-empty subset of elements that is closed under addition, multiplication, and scalar multiplication. In symbols, we say that a subset L of a K-algebra A is a subalgebra if for every x, y in L and c in K, we have that x · y, x + y, and cx are all in L.
In the above example of the complex numbers viewed as a two-dimensional algebra over the real numbers, the one-dimensional real line is a subalgebra.
A left ideal of a K-algebra is a linear subspace that has the property that any element of the subspace multiplied on the left by any element of the algebra produces an element of the subspace. In symbols, we say that a subset L of a K-algebra A is a left ideal if for every x and y in L, z in A and c in K, we have the following three statements.
1. x + y is in L (L is closed under addition),
2. cx is in L (L is closed under scalar multiplication),
3. z · x is in L (L is closed under left multiplication by arbitrary elements).
If (3) were replaced with x · z is in L, then this would define a right ideal. A two-sided ideal is a subset that is both a left and a right ideal. The term ideal on its own is usually taken to mean a two-sided ideal. Of course when the algebra is commutative, then all of these notions of ideal are equivalent. Conditions (1) and (2) together are equivalent to L being a linear subspace of A. It follows from condition (3) that every left or right ideal is a subalgebra.
This definition is different from the definition of an ideal of a ring, in that here we require the condition (2). Of course if the algebra is unital, then condition (3) implies condition (2).
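The one-sidedness of condition (3) can be illustrated numerically (the specific matrices below are arbitrary choices): in the algebra of real 2×2 matrices, the matrices whose second column is zero form a left ideal but not a right ideal.

```python
import numpy as np

def in_L(m):
    # L = 2x2 real matrices whose second column is zero
    return np.allclose(m[:, 1], 0.0)

x = np.array([[1.0, 0.0], [2.0, 0.0]])   # element of L
y = np.array([[-3.0, 0.0], [0.5, 0.0]])  # element of L
z = np.array([[0.0, 1.0], [1.0, 0.0]])   # arbitrary algebra element
c = 4.0

assert in_L(x + y)      # condition (1): closed under addition
assert in_L(c * x)      # condition (2): closed under scalar multiplication
assert in_L(z @ x)      # condition (3): closed under left multiplication
assert not in_L(x @ z)  # but L is not closed under right multiplication
```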
=== Extension of scalars ===
If we have a field extension F/K, which is to say a bigger field F that contains K, then there is a natural way to construct an algebra over F from any algebra over K. It is the same construction one uses to make a vector space over a bigger field, namely the tensor product
{\displaystyle V_{F}:=V\otimes _{K}F}. So if A is an algebra over K, then {\displaystyle A_{F}} is an algebra over F.
== Kinds of algebras and examples ==
Algebras over fields come in many different types. These types are specified by insisting on some further axioms, such as commutativity or associativity of the multiplication operation, which are not required in the broad definition of an algebra. The theories corresponding to the different types of algebras are often very different.
=== Unital algebra ===
An algebra is unital or unitary if it has a unit or identity element I with Ix = x = xI for all x in the algebra.
=== Zero algebra ===
An algebra is called a zero algebra if uv = 0 for all u, v in the algebra, not to be confused with the algebra with one element. It is inherently non-unital (except in the case of only one element), associative and commutative.
A unital zero algebra is the direct sum {\displaystyle K\oplus V} of a field {\displaystyle K} and a {\displaystyle K}-vector space {\displaystyle V}, equipped with the only multiplication that is zero on the vector space (or module) and makes it a unital algebra.
More precisely, every element of the algebra may be uniquely written as {\displaystyle k+v} with {\displaystyle k\in K} and {\displaystyle v\in V}, and the product is the only bilinear operation such that {\displaystyle vw=0} for every {\displaystyle v} and {\displaystyle w} in {\displaystyle V}. So, if {\displaystyle k_{1},k_{2}\in K} and {\displaystyle v_{1},v_{2}\in V}, one has
{\displaystyle (k_{1}+v_{1})(k_{2}+v_{2})=k_{1}k_{2}+(k_{1}v_{2}+k_{2}v_{1}).}
A classical example of a unital zero algebra is the algebra of dual numbers, the unital zero R-algebra built from a one-dimensional real vector space.
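The multiplication rule above can be made concrete with a minimal sketch of the dual numbers in Python (the class name and float components are illustrative choices):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dual:
    """Dual number k + v*eps with eps**2 = 0 (the unital zero R-algebra R + R*eps)."""
    k: float  # field component
    v: float  # vector-space component

    def __add__(self, other):
        return Dual(self.k + other.k, self.v + other.v)

    def __mul__(self, other):
        # (k1 + v1 eps)(k2 + v2 eps) = k1 k2 + (k1 v2 + k2 v1) eps, since eps^2 = 0
        return Dual(self.k * other.k, self.k * other.v + other.k * self.v)

one = Dual(1.0, 0.0)
eps = Dual(0.0, 1.0)
assert eps * eps == Dual(0.0, 0.0)            # the zero multiplication on V
assert one * eps == eps and eps * one == eps  # the algebra is unital
```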
This definition extends verbatim to the definition of a unital zero algebra over a commutative ring, with the replacement of "field" and "vector space" with "commutative ring" and "module".
Unital zero algebras allow the unification of the theory of submodules of a given module and the theory of ideals of a unital algebra. Indeed, the submodules of a module {\displaystyle V} correspond exactly to the ideals of {\displaystyle K\oplus V} that are contained in {\displaystyle V}.
For example, the theory of Gröbner bases was introduced by Bruno Buchberger for ideals in a polynomial ring R = K[x1, ..., xn] over a field. The construction of the unital zero algebra over a free R-module allows this theory to be extended to a Gröbner basis theory for submodules of a free module. With this extension, a Gröbner basis of a submodule can be computed, without any modification, by any algorithm and any software for computing Gröbner bases of ideals.
Similarly, unital zero algebras make it possible to deduce the Lasker–Noether theorem for modules (over a commutative ring) straightforwardly from the original Lasker–Noether theorem for ideals.
=== Associative algebra ===
Examples of associative algebras include
the algebra of all n-by-n matrices over a field (or commutative ring) K. Here the multiplication is ordinary matrix multiplication.
group algebras, where a group serves as a basis of the vector space and algebra multiplication extends group multiplication.
the commutative algebra K[x] of all polynomials over K (see polynomial ring).
algebras of functions, such as the R-algebra of all real-valued continuous functions defined on the interval [0,1], or the C-algebra of all holomorphic functions defined on some fixed open set in the complex plane. These are also commutative.
Incidence algebras are built on certain partially ordered sets.
algebras of linear operators, for example on a Hilbert space. Here the algebra multiplication is given by the composition of operators. These algebras also carry a topology; many of them are defined on an underlying Banach space, which turns them into Banach algebras. If an involution is given as well, we obtain B*-algebras and C*-algebras. These are studied in functional analysis.
=== Non-associative algebra ===
A non-associative algebra (or distributive algebra) over a field K is a K-vector space A equipped with a K-bilinear map {\displaystyle A\times A\rightarrow A}. The usage of "non-associative" here is meant to convey that associativity is not assumed, not that it is prohibited; that is, it means "not necessarily associative".
Examples detailed in the main article include:
Euclidean space R3 with multiplication given by the vector cross product
Octonions
Lie algebras
Jordan algebras
Alternative algebras
Flexible algebras
Power-associative algebras
== Algebras and rings ==
The definition of an associative K-algebra with unit is also frequently given in an alternative way. In this case, an algebra over a field K is a ring A together with a ring homomorphism
{\displaystyle \eta \colon K\to Z(A),}
where Z(A) is the center of A. Since η is a ring homomorphism, either A is the zero ring or η is injective. This definition is equivalent to the one above, with scalar multiplication {\displaystyle K\times A\to A} given by {\displaystyle (k,a)\mapsto \eta (k)a.}
Given two such associative unital K-algebras A and B, a unital K-algebra homomorphism f: A → B is a ring homomorphism that commutes with the scalar multiplication defined by η, which one may write as {\displaystyle f(ka)=kf(a)} for all {\displaystyle k\in K} and {\displaystyle a\in A}. In other words, the following diagram commutes:
{\displaystyle {\begin{matrix}&&K&&\\&\eta _{A}\swarrow &\,&\eta _{B}\searrow &\\A&&{\begin{matrix}f\\\longrightarrow \end{matrix}}&&B\end{matrix}}}
== Structure coefficients ==
For algebras over a field, the bilinear multiplication from A × A to A is completely determined by the multiplication of basis elements of A.
Conversely, once a basis for A has been chosen, the products of basis elements can be set arbitrarily and then extended in a unique way to a bilinear operator on A; the resulting multiplication automatically satisfies the algebra laws.
Thus, given the field K, any finite-dimensional algebra can be specified up to isomorphism by giving its dimension (say n) and specifying the n^3 structure coefficients c_{i,j,k}, which are scalars.
These structure coefficients determine the multiplication in A via the following rule:
{\displaystyle \mathbf {e} _{i}\mathbf {e} _{j}=\sum _{k=1}^{n}c_{i,j,k}\mathbf {e} _{k}}
where e1,...,en form a basis of A.
Note however that several different sets of structure coefficients can give rise to isomorphic algebras.
In mathematical physics, the structure coefficients are generally written with upper and lower indices, so as to distinguish their transformation properties under coordinate transformations. Specifically, lower indices are covariant indices, and transform via pullbacks, while upper indices are contravariant, transforming under pushforwards. Thus, the structure coefficients are often written c_{i,j}^{k}, and their defining rule is written using the Einstein notation as
e_i e_j = c_{i,j}^{k} e_k.
Applied to vectors written in index notation, this becomes
(xy)^k = c_{i,j}^{k} x^i y^j.
If K is only a commutative ring and not a field, then the same process works if A is a free module over K. If it is not, then the multiplication is still completely determined by its action on a set that spans A; however, the structure constants cannot be specified arbitrarily in this case, and knowing only the structure constants does not specify the algebra up to isomorphism.
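As an illustration, the complex numbers viewed as a two-dimensional algebra over the reals (with basis e1 = 1, e2 = i, an illustrative choice) can be encoded by their structure coefficients. The sketch below (using NumPy) multiplies coordinate vectors via the rule e_i e_j = Σ_k c_{i,j,k} e_k:

```python
import numpy as np

# Structure coefficients c[i, j, k] for C as a 2-dimensional R-algebra,
# with basis e1 = 1, e2 = i:  e_i e_j = sum_k c[i, j, k] e_k
c = np.zeros((2, 2, 2))
c[0, 0] = [1, 0]   # 1 * 1 = 1
c[0, 1] = [0, 1]   # 1 * i = i
c[1, 0] = [0, 1]   # i * 1 = i
c[1, 1] = [-1, 0]  # i * i = -1

def mul(x, y):
    # (xy)_k = c_{i,j,k} x_i y_j, extended bilinearly from the basis products
    return np.einsum('i,j,ijk->k', x, y, c)

# (2 + 3i)(1 - i) = 5 + i
assert np.allclose(mul(np.array([2.0, 3.0]), np.array([1.0, -1.0])), [5.0, 1.0])
```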
== Classification of low-dimensional unital associative algebras over the complex numbers ==
Two-dimensional, three-dimensional and four-dimensional unital associative algebras over the field of complex numbers were completely classified up to isomorphism by Eduard Study.
There exist two such two-dimensional algebras. Each algebra consists of linear combinations (with complex coefficients) of two basis elements, 1 (the identity element) and a. According to the definition of an identity element,
{\displaystyle \textstyle 1\cdot 1=1\,,\quad 1\cdot a=a\,,\quad a\cdot 1=a\,.}
It remains to specify
{\displaystyle \textstyle aa=1}
for the first algebra,
{\displaystyle \textstyle aa=0}
for the second algebra.
There exist five such three-dimensional algebras. Each algebra consists of linear combinations of three basis elements, 1 (the identity element), a and b. Taking into account the definition of an identity element, it is sufficient to specify
{\displaystyle \textstyle aa=a\,,\quad bb=b\,,\quad ab=ba=0}
for the first algebra,
{\displaystyle \textstyle aa=a\,,\quad bb=0\,,\quad ab=ba=0}
for the second algebra,
{\displaystyle \textstyle aa=b\,,\quad bb=0\,,\quad ab=ba=0}
for the third algebra,
{\displaystyle \textstyle aa=1\,,\quad bb=0\,,\quad ab=-ba=b}
for the fourth algebra,
{\displaystyle \textstyle aa=0\,,\quad bb=0\,,\quad ab=ba=0}
for the fifth algebra.
The fourth of these algebras is non-commutative, and the others are commutative.
== Generalization: algebra over a ring ==
In some areas of mathematics, such as commutative algebra, it is common to consider the more general concept of an algebra over a ring, where a commutative ring R replaces the field K. The only part of the definition that changes is that A is assumed to be an R-module (instead of a K-vector space).
=== Associative algebras over rings ===
A ring A is always an associative algebra over its center, and over the integers. A classical example of an algebra over its center is the split-biquaternion algebra, which is isomorphic to {\displaystyle \mathbb {H} \times \mathbb {H} }, the direct product of two quaternion algebras. The center of that ring is {\displaystyle \mathbb {R} \times \mathbb {R} }, and hence it has the structure of an algebra over its center, which is not a field. Note that the split-biquaternion algebra is also naturally an 8-dimensional {\displaystyle \mathbb {R} }-algebra.
In commutative algebra, if A is a commutative ring, then any unital ring homomorphism {\displaystyle R\to A} defines an R-module structure on A, and this is what is known as the R-algebra structure. So a ring comes with a natural {\displaystyle \mathbb {Z} }-module structure, since one can take the unique homomorphism {\displaystyle \mathbb {Z} \to A}. On the other hand, not all rings can be given the structure of an algebra over a field (for example the integers). See Field with one element for a description of an attempt to give every ring a structure that behaves like an algebra over a field.
== See also ==
Algebra over an operad
Alternative algebra
Clifford algebra
Composition algebra
Differential algebra
Free algebra
Geometric algebra
Max-plus algebra
Mutation (algebra)
Operator algebra
Zariski's lemma
== Notes ==
== References ==
Hazewinkel, Michiel; Gubareni, Nadiya; Kirichenko, Vladimir V. (2004). Algebras, rings and modules. Vol. 1. Springer. ISBN 1-4020-2690-0.
In algebra, a septic equation is an equation of the form
{\displaystyle ax^{7}+bx^{6}+cx^{5}+dx^{4}+ex^{3}+fx^{2}+gx+h=0,}
where a ≠ 0.
A septic function is a function of the form
{\displaystyle f(x)=ax^{7}+bx^{6}+cx^{5}+dx^{4}+ex^{3}+fx^{2}+gx+h}
where a ≠ 0. In other words, it is a polynomial of degree seven. If a = 0, then f is a sextic function (b ≠ 0), quintic function (b = 0, c ≠ 0), etc.
The equation may be obtained from the function by setting f(x) = 0.
The coefficients a, b, c, d, e, f, g, h may be either integers, rational numbers, real numbers, complex numbers or, more generally, members of any field.
Because they have an odd degree, septic functions appear similar to quintic and cubic functions when graphed, except they may possess additional local maxima and local minima (up to three maxima and three minima). The derivative of a septic function is a sextic function.
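The degree bookkeeping in this paragraph is easy to check with NumPy's polynomial utilities (the specific septic below is an arbitrary example; coefficients are given in ascending order):

```python
from numpy.polynomial import Polynomial

# f(x) = 1 - 2x + x^7, an arbitrary septic
f = Polynomial([1, -2, 0, 0, 0, 0, 0, 1])
assert f.degree() == 7
assert f.deriv().degree() == 6  # the derivative of a septic is a sextic
```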
== Solvable septics ==
Some seventh degree equations can be solved by factorizing into radicals, but other septics cannot. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals which gave rise to the field of Galois theory. To give an example of an irreducible but solvable septic, one can generalize the solvable de Moivre quintic to get,
{\displaystyle x^{7}+7\alpha x^{5}+14\alpha ^{2}x^{3}+7\alpha ^{3}x+\beta =0,}
where the auxiliary equation is
{\displaystyle y^{2}+\beta y-\alpha ^{7}=0.}
This means that the septic is obtained by eliminating u and v between x = u + v, uv + α = 0 and u7 + v7 + β = 0.
It follows that the septic's seven roots are given by
{\displaystyle x_{k}=\omega _{k}{\sqrt[{7}]{y_{1}}}+\omega _{k}^{6}{\sqrt[{7}]{y_{2}}}}
where ω_k is any of the 7 seventh roots of unity. The Galois group of this septic is the maximal solvable group of order 42. This is easily generalized to any other degree k, not necessarily prime.
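The construction can be checked numerically. The sketch below (with the illustrative choice α = 1, β = 2) solves the auxiliary quadratic, takes the real seventh roots of y1 and y2 so that their product equals −α, and verifies that all seven combinations are roots of the septic:

```python
import numpy as np

# Verify the de Moivre-type septic  x^7 + 7a x^5 + 14 a^2 x^3 + 7 a^3 x + b = 0
# via the auxiliary quadratic  y^2 + b y - a^7 = 0  (sample values a = 1, b = 2)
a, b = 1.0, 2.0
y1 = (-b + np.sqrt(b * b + 4 * a**7)) / 2
y2 = (-b - np.sqrt(b * b + 4 * a**7)) / 2

u0 = y1 ** (1 / 7)         # real 7th root of y1 > 0
v0 = -((-y2) ** (1 / 7))   # real 7th root of y2 < 0; note u0 * v0 = -a

def septic(x):
    return x**7 + 7 * a * x**5 + 14 * a**2 * x**3 + 7 * a**3 * x + b

omega = np.exp(2j * np.pi / 7)
roots = [omega**k * u0 + omega**(-k) * v0 for k in range(7)]
assert all(abs(septic(x)) < 1e-9 for x in roots)
```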
Another solvable family is,
{\displaystyle x^{7}-2x^{6}+(\alpha +1)x^{5}+(\alpha -1)x^{4}-\alpha x^{3}-(\alpha +5)x^{2}-6x-4=0}
whose members appear in Klüners's database of number fields. Its discriminant is
{\displaystyle \Delta =-4^{4}\left(4\alpha ^{3}+99\alpha ^{2}-34\alpha +467\right)^{3}}
The Galois group of these septics is the dihedral group of order 14.
The general septic equation can be solved with the alternating or symmetric Galois groups A7 or S7. Such equations require hyperelliptic functions and associated theta functions of genus 3 for their solution. However, these equations were not studied specifically by the nineteenth-century mathematicians studying the solutions of algebraic equations, because the sextic equations' solutions were already at the limits of their computational abilities without computers.
Septics are the lowest order equations for which it is not obvious that their solutions may be obtained by composing continuous functions of two variables. Hilbert's 13th problem was the conjecture that this was not possible in general for seventh-degree equations. Vladimir Arnold solved this in 1957, demonstrating that it was always possible. However, Arnold himself considered the genuine Hilbert problem to be whether for septics their solutions may be obtained by superimposing algebraic functions of two variables. As of 2023, the problem is still open.
== Galois groups ==
There are seven Galois groups for septics:
Septic equations solvable by radicals have a Galois group which is either the cyclic group of order 7, or the dihedral group of order 14, or a metacyclic group of order 21 or 42.
The L(3, 2) Galois group (of order 168) is formed by the permutations of the 7 vertex labels which preserve the 7 "lines" in the Fano plane. Septic equations with this Galois group L(3, 2) require elliptic functions but not hyperelliptic functions for their solution.
Otherwise the Galois group of a septic is either the alternating group of order
{\displaystyle 7!/2=2520}
or the symmetric group of order
{\displaystyle 7!=5040.}
== Septic equation for the squared area of a cyclic pentagon or hexagon ==
The square of the area of a cyclic pentagon is a root of a septic equation whose coefficients are symmetric functions of the sides of the pentagon. The same is true of the square of the area of a cyclic hexagon.
== See also ==
Cubic function
Quartic function
Quintic function
Sextic equation
Labs septic
== References ==
Finding the roots of polynomials is a long-standing problem that has been extensively studied throughout history and has substantially influenced the development of mathematics. It involves determining either a numerical approximation or a closed-form expression of the roots of a univariate polynomial, i.e., determining approximate or closed-form solutions of
{\displaystyle x} in the equation
{\displaystyle a_{0}+a_{1}x+a_{2}x^{2}+\cdots +a_{n}x^{n}=0}
where the coefficients {\displaystyle a_{i}} are either real or complex numbers.
Efforts to understand and solve polynomial equations led to the development of important mathematical concepts, including irrational and complex numbers, as well as foundational structures in modern algebra such as fields, rings, and groups.
Despite being historically important, finding the roots of higher degree polynomials no longer plays a central role in mathematics and computational mathematics, with one major exception in computer algebra.
== Overview ==
=== Closed-form formulas ===
Closed-form formulas for polynomial roots exist only when the degree of the polynomial is less than 5. The quadratic formula has been known since antiquity, and the cubic and quartic formulas were discovered in full generality during the 16th century.
When the degree of the polynomial is at least 5, no closed-form expression for the roots in terms of the polynomial coefficients exists in general, if the formula uses only additions, subtractions, multiplications, divisions, and radicals (taking n-th roots). This is due to the celebrated Abel–Ruffini theorem. On the other hand, the fundamental theorem of algebra shows that every nonconstant polynomial has at least one root. Therefore, root-finding algorithms consist of finding numerical solutions in most cases.
=== Numerical algorithms ===
Root-finding algorithms can be broadly categorized according to the goal of the computation. Some methods aim to find a single root, while others are designed to find all complex roots at once. In certain cases, the objective may be to find roots within a specific region of the complex plane. It is often desirable, and even necessary, to select algorithms suited to the computational task, for reasons of efficiency and accuracy. See Root Finding Methods for a summary of the existing methods available in each case.
== History ==
=== Closed-form formulas ===
The root-finding problem of polynomials was first recognized by the Sumerians and then the Babylonians. Since then, the search for closed-form formulas for polynomial equations lasted for thousands of years.
==== The quadratics ====
The Babylonians and Egyptians were able to solve specific quadratic equations in the second millennium BCE, and their solutions essentially correspond to the quadratic formula.
However, it took two millennia of effort to state the quadratic formula in an explicit form similar to the modern formulation, provided by the Indian mathematician Brahmagupta in his book Brāhmasphuṭasiddhānta (625 CE). The full recognition of the quadratic formula requires the introduction of complex numbers, which took another millennium.
==== The cubics and the quartics ====
The first breakthrough in a closed-form formula of polynomials with degree higher than two took place in Italy. In the early 16th century, the Italian mathematician Scipione del Ferro found a closed-form formula for cubic equations of the form
{\displaystyle x^{3}+mx=n}, where {\displaystyle m,n} are nonnegative numbers. Later, Niccolò Tartaglia also discovered methods to solve such cubic equations, and Gerolamo Cardano summarized and published their work in his book Ars Magna in 1545.
Meanwhile, Cardano's student Lodovico Ferrari discovered the closed-form formula for quartic equations in 1540. Because his solution is based on the closed-form formula for cubic equations, it could not be published until the cubic formula itself was.
In Ars Magna, Cardano noticed that Tartaglia's method sometimes involves extracting the square root of a negative number. In fact, this can happen even when the roots themselves are real. Later, the Italian mathematician Rafael Bombelli investigated these mathematical objects further, giving explicit arithmetic rules for them in his book Algebra, published in 1569. These mathematical objects are now known as the complex numbers, which are foundational in mathematics, physics, and engineering.
==== Insolvability of the quintics ====
Since the discovery of the cubic and quartic formulas, solving quintic equations in closed form had been a major problem in algebra. The French lawyer and mathematician Viète, who first formulated the root formula for cubics in modern language and applied trigonometric methods to root-solving, believed that his methods generalized to a closed-form formula in radicals for polynomials of arbitrary degree. Descartes held the same opinion.
However, Lagrange noticed the flaws in these arguments in his 1771 paper Reflections on the Algebraic Theory of Equations, where he analyzed why the methods used to solve the cubics and quartics would not work for the quintics. His argument involves studying the permutations of the roots of polynomial equations. Nevertheless, Lagrange still believed that a closed-form formula in radicals for the quintics existed. Gauss seems to have been the first prominent mathematician to suspect the insolvability of the quintics, a view stated in his 1799 doctoral dissertation.
The first serious attempt at proving the insolvability of the quintic was given by the Italian mathematician Paolo Ruffini. He published six versions of his proof between 1799 and 1813, yet his proof was not widely accepted, as the writing was long and difficult to understand, and it turned out to have a gap.
The first rigorous and accepted proof of the insolvability of the quintic was famously given by Niels Henrik Abel in 1824, building on an analysis of permutations of the roots that was later formalized in Galois theory. In the paper, Abel proved that polynomials with degree greater than 4 do not have a closed-form root formula in radicals in general. This put an end to the search for closed-form formulas in radicals of the polynomial coefficients for the roots of polynomials.
==== General solution using combinatorics ====
In 2025, Norman Wildberger and Dean Rubine introduced a general solution for arbitrary degree, involving a formal power series. The equation
{\displaystyle 1-x+a_{2}x^{2}+a_{3}x^{3}+a_{4}x^{4}+\cdots =0}
has a solution
{\displaystyle x=\sum _{m_{2},m_{3},\ldots \geq 0}{\frac {(2m_{2}+3m_{3}+4m_{4}+\cdots )!}{(1+m_{2}+2m_{3}+\cdots )!\,m_{2}!\,m_{3}!\cdots }}a_{2}^{m_{2}}a_{3}^{m_{3}}\cdots }
This is a generalization of a solution for the quadratics using the Catalan numbers {\displaystyle C_{n}}, for which it reduces to {\displaystyle x=\sum _{n\geq 0}C_{n}t^{n}}. For the quintic, this is closely related to the Eisenstein series.
=== Numerical methods ===
Since finding a closed-form formula for higher degree polynomials is significantly harder than for quadratic equations, the earliest attempts to solve cubic equations were either geometrical or numerical. Also, for practical purposes, numerical solutions are necessary.
==== Iterative methods ====
The earliest iterative approximation methods of root-finding were developed to compute square roots. In Heron of Alexandria's book Metrica (1st-2nd century CE), approximate values of square roots were computed by iteratively improving an initial estimate. Jamshīd al-Kāshī presented a generalized version of the method to compute
{\displaystyle n}th roots. A similar method was also found in Henry Briggs's publication Trigonometria Britannica in 1633. Franciscus Vieta also developed an approximation method that is almost identical to Newton's method.
Newton further generalized the method to compute the roots of arbitrary polynomials in De analysi per aequationes numero terminorum infinitas (written in 1669, published in 1711), now known as Newton's method. In 1690, Joseph Raphson published a refinement of Newton's method, presenting it in a form that more closely aligned with the modern version used today.
In 1879, the English mathematician Arthur Cayley noticed the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values in his paper The Newton–Fourier imaginary problem. This opened the way to the study of the theory of iterations of rational functions.
==== Real-root isolation methods ====
A class of methods for finding the numerical values of real roots is based on real-root isolation. The first example of such a method was given by René Descartes in 1637. It counts the roots of a polynomial by examining sign changes in its coefficients. In 1807, the French mathematician François Budan de Boislaurent generalized Descartes's result into Budan's theorem, which counts the real roots in a half-open interval (a, b]. However, neither method is suitable as an effective algorithm.
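The sign-change count at the heart of Descartes's rule is simple to implement. The sketch below counts sign changes in a coefficient sequence and checks it against a cubic whose positive roots are known (the sample polynomial is an arbitrary choice):

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zero coefficients."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Descartes' rule: the number of positive real roots equals the number of sign
# changes in the coefficients, or is less than it by an even number.
# p(x) = x^3 - 3x + 1 (descending coefficients): signs + - +  =>  2 changes,
# and p indeed has exactly two positive real roots.
assert sign_changes([1, 0, -3, 1]) == 2
assert sign_changes([1, 1, 1]) == 0  # all-positive coefficients: no positive roots
```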
The first complete real-root isolation algorithm was given by Jacques Charles François Sturm in 1829, and is known as Sturm's theorem.
In 1836, Alexandre Joseph Hidulphe Vincent proposed a method for isolating real roots of polynomials using continued fractions, a result now known as Vincent's theorem. The work was largely forgotten until it was rediscovered over a century later by J. V. Uspensky, who included it in his 1948 textbook Theory of Equations. The theorem was subsequently brought to wider academic attention by the American mathematician Alkiviadis G. Akritas, who recognized its significance while studying Uspensky's account. The first implementation of a real-root isolation method on modern computers was given by G. E. Collins and Alkiviadis G. Akritas in 1976, where they proved an effective version of Vincent's theorem. Variants of the algorithm were subsequently studied.
=== Mechanical methods ===
Before electronic computers were invented, people used mechanical computers to automate polynomial root-solving. In 1758, the Hungarian scientist J. A. de Segner proposed a design for a root-solving machine in his paper; it operates by drawing the graph of the polynomial on a plane and finding the roots as the intersections of the graph with the x-axis. In 1770, the English mathematician Jack Rowning investigated the possibility of drawing the graphs of polynomials via local motions.
In 1845, the English mathematician Francis Bashforth proposed to use trigonometric methods to simplify the root-finding problem. Given a polynomial
{\displaystyle a_{0}+a_{1}x+\cdots +a_{n}x^{n}=0}
, substitute {\displaystyle x=\cos t}. Since {\displaystyle \cos ^{n}t} can be written as a linear combination of {\displaystyle \cos kt,\;k\in \mathbb {Z} } (see Chebyshev polynomials), the polynomial can be reformulated in the form
{\displaystyle b_{0}+b_{1}\cos t+b_{2}\cos 2t+\cdots +b_{n}\cos nt}
Such curves can be drawn by a harmonic analyzer (also known as a tide-predicting machine). The first harmonic analyzer was built by Lord Kelvin in 1872, while Bashforth had envisioned such a machine in his paper 27 years earlier.
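The conversion Bashforth relied on, rewriting a polynomial in cos t as a combination of cos kt, corresponds to a change to the Chebyshev basis, since cos kt = T_k(cos t). A sketch using NumPy (the sample polynomial and angle are arbitrary choices):

```python
import numpy as np
from numpy.polynomial import chebyshev

# p(x) = 1 + 2x + 3x^2 (power basis, ascending coefficients).  With x = cos t
# and cos kt = T_k(cos t), the Chebyshev coefficients b_k satisfy
# p(cos t) = b_0 + b_1 cos t + b_2 cos 2t + ...
b = chebyshev.poly2cheb([1.0, 2.0, 3.0])

t = 0.7  # arbitrary test angle
lhs = 1 + 2 * np.cos(t) + 3 * np.cos(t) ** 2
rhs = sum(bk * np.cos(k * t) for k, bk in enumerate(b))
assert np.isclose(lhs, rhs)
```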
The Spanish engineer and mathematician Leonardo Torres Quevedo built several machines for solving real and complex roots of polynomials between 1893 and 1900. His machine employs a logarithmic algorithm and has a mechanical component, called the endless spindle, that computes the value of {\displaystyle \log(a+b)} from {\displaystyle \log a,\log b} with high accuracy. This allowed him to achieve high accuracy in polynomial root-finding: the machine computes the roots of degree-8 polynomials with an accuracy of {\displaystyle 10^{-3}}.
== Common root-finding algorithms ==
=== Finding one root ===
The most widely used method for computing a root of any differentiable function {\displaystyle f} is Newton's method, in which an initial guess {\displaystyle x_{0}} is iteratively refined. At each iteration, the tangent line to {\displaystyle f} at {\displaystyle x_{n}} is used as a linear approximation to {\displaystyle f}, and its root is used as the succeeding guess {\displaystyle x_{n+1}}:
x
n
+
1
=
x
n
−
f
(
x
n
)
f
′
(
x
n
)
,
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}},}
In general, the value of
x
n
{\displaystyle x_{n}}
will converge to a root of
f
{\displaystyle f}
.
In particular, the method can be applied to compute a root of polynomial functions. In this case, the computations in Newton's method can be accelerated using Horner's method or evaluation with preprocessing for computing the polynomial and its derivative in each iteration.
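A minimal sketch of this combination (an illustration, not from the source; coefficients are ordered from highest degree to constant, and the example polynomial is an arbitrary choice):

```python
def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x) in one Horner pass.
    coeffs are ordered from highest degree to constant term."""
    p, dp = 0.0, 0.0
    for c in coeffs:
        dp = dp * x + p  # derivative accumulates before p is updated
        p = p * x + c
    return p, dp

def newton_root(coeffs, x0, tol=1e-12, max_iter=100):
    """Refine an initial guess x0 with Newton iterations."""
    x = x0
    for _ in range(max_iter):
        p, dp = horner_with_derivative(coeffs, x)
        if dp == 0:
            break
        step = p / dp
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: p(x) = x^3 - 2x - 5 has a real root near 2.0945514815
root = newton_root([1.0, 0.0, -2.0, -5.0], 2.0)
```

Each iteration costs one pass over the coefficients, which is the acceleration the paragraph above describes.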
Though the rate of convergence of Newton's method is generally quadratic, it may converge much more slowly or not converge at all. In particular, if the polynomial has no real root and $x_{0}$ is chosen to be a real number, then Newton's method cannot converge. However, if the polynomial has a real root that is larger than the largest real root of its derivative, then Newton's method converges quadratically to this largest root whenever $x_{0}$ is larger than it (there are easy ways to compute an upper bound on the roots; see Properties of polynomial roots). This is the starting point of Horner's method for computing the roots.
Closely related to Newton's method are Halley's method and Laguerre's method. Both use the polynomial and its first two derivatives in an iterative process that has cubic convergence. Combining two consecutive steps of these methods into a single step, one gets a rate of convergence of 9, at the cost of 6 polynomial evaluations (with Horner's rule). On the other hand, combining three steps of Newton's method gives a rate of convergence of 8 at the cost of the same number of polynomial evaluations. This gives a slight advantage to these methods (less clear for Laguerre's method, as a square root has to be computed at each step).
When applying these methods to polynomials with real coefficients and real starting points, Newton's and Halley's method stay inside the real number line. One has to choose complex starting points to find complex roots. In contrast, the Laguerre method with a square root in its evaluation will leave the real axis of its own accord.
=== Finding all complex roots ===
==== Methods using complex-number arithmetic ====
Both the Aberth method and the similar yet simpler Durand–Kerner method simultaneously find all of the roots using only simple complex number arithmetic. The Aberth method is presently the most efficient method. Accelerated algorithms for multi-point evaluation and interpolation similar to the fast Fourier transform can help speed them up for large degrees of the polynomial.
A free implementation of Aberth's method is available under the name of MPSolve. This is a reference implementation, which can routinely find the roots of polynomials of degree larger than 1,000, with more than 1,000 significant decimal digits.
Another method with this style is the Dandelin–Gräffe method (sometimes also ascribed to Lobachevsky), which uses polynomial transformations to repeatedly and implicitly square the roots. This greatly magnifies variances in the roots. Applying Viète's formulas, one obtains easy approximations for the modulus of the roots, and with some more effort, for the roots themselves.
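The simultaneous-iteration idea can be sketched with the Durand–Kerner method (an illustrative implementation, not from the source; the starting values and the example polynomial are arbitrary conventional choices):

```python
def durand_kerner(coeffs, iterations=100):
    """Find all complex roots of a monic polynomial simultaneously.
    coeffs: [1, c_{n-1}, ..., c_0], highest degree first (monic)."""
    n = len(coeffs) - 1
    # Conventional starting values: powers of a complex number that is
    # neither real nor a root of unity.
    z = [(0.4 + 0.9j) ** k for k in range(n)]

    def p(x):
        r = 0j
        for c in coeffs:
            r = r * x + c
        return r

    for _ in range(iterations):
        for i in range(n):
            # Weierstrass correction: p(z_i) / prod_{j != i} (z_i - z_j)
            q = p(z[i])
            for j in range(n):
                if j != i:
                    q /= (z[i] - z[j])
            z[i] -= q
    return z

# Example: x^3 - 3x^2 + 3x - 5 = (x - 1)^3 - 4 has real root 1 + 4^(1/3)
roots = durand_kerner([1, -3, 3, -5])
```

Only complex arithmetic is needed, and all approximations are refined together rather than one root at a time.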
==== Methods using linear algebra ====
Arguably, the most reliable method to find all roots of a polynomial is to find the eigenvalues of the companion matrix of its monic form, which coincide with the roots of the polynomial. There are plenty of algorithms for computing the eigenvalues of matrices. The standard method for finding all roots of a polynomial in MATLAB uses the Francis QR algorithm to compute the eigenvalues of the corresponding companion matrix of the polynomial.
In principle, one can use any eigenvalue algorithm to find the roots of the polynomial. However, for efficiency reasons one prefers methods that exploit the structure of the matrix, that is, that can be implemented in matrix-free form. Among these methods is the power method, whose application to the transpose of the companion matrix is the classical Bernoulli method for finding the root of greatest modulus. The inverse power method with shifts, which finds some smallest root first, is what drives the complex (cpoly) variant of the Jenkins–Traub algorithm and gives it its numerical stability. Additionally, it has fast convergence of order $1+\varphi \approx 2.6$ (where $\varphi$ is the golden ratio) even in the presence of clustered roots. This fast convergence comes with a cost of three polynomial evaluations per step, resulting in a residual of $O(|f(x)|^{2+3\varphi })$, which is a slower convergence than with three steps of Newton's method.
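The companion-matrix approach can be sketched in a few lines of NumPy (an illustration of the general technique, not MATLAB's implementation; the example polynomial is an arbitrary choice):

```python
import numpy as np

def roots_via_companion(coeffs):
    """Roots of a polynomial as eigenvalues of its companion matrix.
    coeffs: highest degree first, leading coefficient nonzero."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                        # make the polynomial monic
    n = len(c) - 1
    companion = np.zeros((n, n))
    companion[1:, :-1] = np.eye(n - 1)  # subdiagonal of ones
    companion[:, -1] = -c[:0:-1]        # last column: negated, reversed coefficients
    return np.linalg.eigvals(companion)

# Example: x^2 - 5x + 6 = (x - 2)(x - 3)
r = np.sort(roots_via_companion([1, -5, 6]).real)
```

The characteristic polynomial of the matrix built here is exactly the monic input polynomial, so its eigenvalues are the roots.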
==== Limitations of iterative methods for finding all roots ====
The oldest method of finding all roots is to start by finding a single root. When a root r has been found, it can be removed from the polynomial by dividing out the binomial x – r. The resulting polynomial contains the remaining roots, which can be found by iterating on this process. This idea, despite being common in theoretical derivations, does not work well in numerical computations because of numerical instability: Wilkinson's polynomial shows that a very small modification of one coefficient may change dramatically not only the value of the roots, but also their nature (real or complex). Also, even with a good approximation, when one evaluates a polynomial at an approximate root, one may get a result that is far from zero. For example, if a polynomial of degree 20 (the degree of Wilkinson's polynomial) has a root close to 10, the derivative of the polynomial at the root may be of the order of $10^{20}$; this implies that an error of $10^{-10}$ on the value of the root may produce a value of the polynomial at the approximate root that is of the order of $10^{10}$.
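Wilkinson's coefficient-sensitivity experiment is easy to reproduce numerically (a sketch using NumPy, not part of the source; the perturbation size 2^-23 is Wilkinson's classic choice):

```python
import numpy as np

# Wilkinson's polynomial w(x) = (x - 1)(x - 2)...(x - 20),
# expanded into monomial coefficients (highest degree first).
coeffs = np.poly(np.arange(1, 21))

# Wilkinson's classic experiment: perturb only the x^19 coefficient
# (whose value is -210) by 2**-23, a relative change of about 6e-10.
perturbed = coeffs.copy()
perturbed[1] -= 2.0 ** -23

roots = np.roots(perturbed)
max_imag = np.max(np.abs(roots.imag))
```

Although all twenty roots of the original polynomial are real, this tiny perturbation produces complex-conjugate pairs with imaginary parts of order 1, matching Wilkinson's analysis.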
=== Finding all real roots ===
Finding the real roots of a polynomial with real coefficients is a problem that has received much attention since the beginning of the 19th century, and it is still an active domain of research.
Methods for finding all complex roots can provide the real roots. However, because of the numerical instability of polynomials, deciding whether a root with a small imaginary part is real or not may require arbitrary-precision arithmetic. Moreover, as the number of real roots is, on average, proportional to the logarithm of the degree, it is a waste of computer resources to compute the non-real roots when one is interested only in the real roots.
The standard way of computing real roots is to compute first disjoint intervals, called isolating intervals, such that each one contains exactly one real root, and together they contain all the roots. This computation is called real-root isolation. Having an isolating interval, one may use fast numerical methods, such as Newton's method for improving the precision of the result.
The oldest complete algorithm for real-root isolation results from Sturm's theorem. However, it appears to be much less efficient than the methods based on Descartes' rule of signs and its extensions, Budan's and Vincent's theorems. These methods divide into two main classes, one using continued fractions and the other using bisection. Both methods have been dramatically improved since the beginning of the 21st century. With these improvements they reach a computational complexity that is similar to that of the best algorithms for computing all the roots (even when all roots are real).
These algorithms have been implemented and are available in Mathematica (continued fraction method) and Maple (bisection method), as well as in other main computer algebra systems (SageMath, PARI/GP). Both implementations can routinely find the real roots of polynomials of degree higher than 1,000.
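The root counting that underlies Sturm's theorem can be sketched with exact rational arithmetic (a minimal illustration of counting, not a full isolation algorithm; the example polynomial is an arbitrary choice):

```python
from fractions import Fraction

def polyrem(a, b):
    """Remainder of polynomial division a mod b (coefficients highest-first)."""
    a = a[:]
    while len(a) >= len(b):
        f = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= f * b[i]
        a.pop(0)  # leading coefficient is now exactly zero
    while a and a[0] == 0:
        a.pop(0)
    return a

def sturm_chain(p):
    """Sturm sequence p, p', -rem(p, p'), ..."""
    n = len(p) - 1
    chain = [p, [p[i] * (n - i) for i in range(n)]]
    while len(chain[-1]) > 1:
        r = polyrem(chain[-2], chain[-1])
        if not r:
            break  # p had a multiple root; the chain stops early
        chain.append([-c for c in r])
    return chain

def sign_changes(chain, x):
    signs = []
    for poly in chain:
        v = Fraction(0)
        for c in poly:
            v = v * x + c
        if v != 0:
            signs.append(v > 0)
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def count_real_roots(p, lo, hi):
    """Number of distinct real roots of p in the interval (lo, hi]."""
    chain = sturm_chain([Fraction(c) for c in p])
    return sign_changes(chain, Fraction(lo)) - sign_changes(chain, Fraction(hi))

# x^3 - x has the three real roots -1, 0, 1, all inside (-2, 2]
n_roots = count_real_roots([1, 0, -1, 0], -2, 2)
```

Repeatedly bisecting an interval and recounting yields the isolating intervals described above.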
=== Finding roots in a restricted domain ===
Several fast tests exist that tell if a segment of the real line or a region of the complex plane contains no roots. By bounding the modulus of the roots and recursively subdividing the initial region indicated by these bounds, one can isolate small regions that may contain roots and then apply other methods to locate them exactly.
All these methods involve finding the coefficients of shifted and scaled versions of the polynomial. For large degrees, FFT-based accelerated methods become viable.
The Lehmer–Schur algorithm uses the Schur–Cohn test for circles; a variant, Wilf's global bisection algorithm uses a winding number computation for rectangular regions in the complex plane.
The splitting circle method uses FFT-based polynomial transformations to find large-degree factors corresponding to clusters of roots. The precision of the factorization is maximized using a Newton-type iteration. This method is useful for finding the roots of polynomials of high degree to arbitrary precision; it has almost optimal complexity in this setting.
=== Finding complex roots in pairs ===
If the given polynomial only has real coefficients, one may wish to avoid computations with complex numbers. To that effect, one has to find quadratic factors for pairs of conjugate complex roots. The application of the multidimensional Newton's method to this task results in Bairstow's method.
The real variant of Jenkins–Traub algorithm is an improvement of this method.
=== Polynomials with rational coefficients ===
For polynomials whose coefficients are exactly given as integers or rational numbers, there is an efficient method to factorize them into factors that have only simple roots and whose coefficients are also given in precise terms. This method, called square-free factorization, is based on the multiple roots of a polynomial being the roots of the greatest common divisor of the polynomial and its derivative.
The square-free factorization of a polynomial p is a factorization $p=p_{1}p_{2}^{2}\cdots p_{k}^{k}$ where each $p_{i}$ is either 1 or a polynomial without multiple roots, and two different $p_{i}$ do not have any common root.
An efficient method to compute this factorization is Yun's algorithm.
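The gcd idea behind this factorization can be sketched with exact rational arithmetic (a minimal illustration computing only the square-free part p / gcd(p, p'), not the full Yun decomposition; the example polynomial is an arbitrary choice):

```python
from fractions import Fraction

def polydivmod(a, b):
    """Quotient and remainder of a / b (coefficients highest-first)."""
    a = [Fraction(c) for c in a]
    q = []
    while len(a) >= len(b):
        f = a[0] / b[0]
        q.append(f)
        for i in range(len(b)):
            a[i] -= f * b[i]
        a.pop(0)
    while a and a[0] == 0:
        a.pop(0)
    return q, a

def polygcd(a, b):
    """Monic greatest common divisor via the Euclidean algorithm."""
    while b:
        _, r = polydivmod(a, b)
        a, b = b, r
    return [c / a[0] for c in a]

def derivative(p):
    n = len(p) - 1
    return [p[i] * (n - i) for i in range(n)]

def square_free_part(p):
    """p divided by gcd(p, p'): same roots as p, all simple."""
    p = [Fraction(c) for c in p]
    g = polygcd(p, derivative(p))
    q, _ = polydivmod(p, g)  # remainder is empty since g divides p exactly
    return q

# p = (x - 1)^2 (x + 2) = x^3 - 3x + 2 has the double root 1
sf = square_free_part([1, 0, -3, 2])  # expect x^2 + x - 2 = (x - 1)(x + 2)
```

Because the coefficients stay rational, the computation is exact, which is the point of working over the rationals here.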
== See also ==
Rational root theorem
== References == | Wikipedia/Polynomial_root-finding_algorithms |
In mathematics, a generalized hypergeometric series is a power series in which the ratio of successive coefficients indexed by n is a rational function of n. The series, if convergent, defines a generalized hypergeometric function, which may then be defined over a wider domain of the argument by analytic continuation. The generalized hypergeometric series is sometimes just called the hypergeometric series, though this term also sometimes just refers to the Gaussian hypergeometric series. Generalized hypergeometric functions include the (Gaussian) hypergeometric function and the confluent hypergeometric function as special cases, which in turn have many particular special functions as special cases, such as elementary functions, Bessel functions, and the classical orthogonal polynomials.
== Notation ==
A hypergeometric series is formally defined as a power series
$\beta _{0}+\beta _{1}z+\beta _{2}z^{2}+\dots =\sum _{n\geqslant 0}\beta _{n}z^{n}$
in which the ratio of successive coefficients is a rational function of n. That is,
${\frac {\beta _{n+1}}{\beta _{n}}}={\frac {A(n)}{B(n)}}$
where A(n) and B(n) are polynomials in n.
For example, in the case of the series for the exponential function,
$1+{\frac {z}{1!}}+{\frac {z^{2}}{2!}}+{\frac {z^{3}}{3!}}+\cdots ,$
we have:
$\beta _{n}={\frac {1}{n!}},\qquad {\frac {\beta _{n+1}}{\beta _{n}}}={\frac {1}{n+1}}.$
So this satisfies the definition with A(n) = 1 and B(n) = n + 1.
It is customary to factor out the leading term, so β0 is assumed to be 1. The polynomials can be factored into linear factors of the form (aj + n) and (bk + n) respectively, where the aj and bk are complex numbers.
For historical reasons, it is assumed that (1 + n) is a factor of B. If this is not already the case then both A and B can be multiplied by this factor; the factor cancels so the terms are unchanged and there is no loss of generality.
The ratio between consecutive coefficients now has the form
${\frac {c(a_{1}+n)\cdots (a_{p}+n)}{d(b_{1}+n)\cdots (b_{q}+n)(1+n)}},$
where c and d are the leading coefficients of A and B. The series then has the form
$1+{\frac {a_{1}\cdots a_{p}}{b_{1}\cdots b_{q}\cdot 1}}{\frac {cz}{d}}+{\frac {a_{1}\cdots a_{p}}{b_{1}\cdots b_{q}\cdot 1}}{\frac {(a_{1}+1)\cdots (a_{p}+1)}{(b_{1}+1)\cdots (b_{q}+1)\cdot 2}}\left({\frac {cz}{d}}\right)^{2}+\cdots ,$
or, by scaling z by the appropriate factor and rearranging,
$1+{\frac {a_{1}\cdots a_{p}}{b_{1}\cdots b_{q}}}{\frac {z}{1!}}+{\frac {a_{1}(a_{1}+1)\cdots a_{p}(a_{p}+1)}{b_{1}(b_{1}+1)\cdots b_{q}(b_{q}+1)}}{\frac {z^{2}}{2!}}+\cdots .$
This has the form of an exponential generating function. This series is usually denoted by
${}_{p}F_{q}(a_{1},\ldots ,a_{p};b_{1},\ldots ,b_{q};z)$ or ${}_{p}F_{q}\left[{\begin{matrix}a_{1}&a_{2}&\cdots &a_{p}\\b_{1}&b_{2}&\cdots &b_{q}\end{matrix}};z\right].$
Using the rising factorial or Pochhammer symbol
$(a)_{0}=1,\qquad (a)_{n}=a(a+1)(a+2)\cdots (a+n-1),\quad n\geq 1$
this can be written
${}_{p}F_{q}(a_{1},\ldots ,a_{p};b_{1},\ldots ,b_{q};z)=\sum _{n=0}^{\infty }{\frac {(a_{1})_{n}\cdots (a_{p})_{n}}{(b_{1})_{n}\cdots (b_{q})_{n}}}\,{\frac {z^{n}}{n!}}.$
(Note that this use of the Pochhammer symbol is not standard; however it is the standard usage in this context.)
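The defining term-ratio recurrence translates directly into a truncated-series evaluator (a minimal numerical sketch, not from the source; it is checked against 0F0(;;z) = e^z and the classical evaluation 2F1(1,1;2;z) = -ln(1-z)/z):

```python
from math import exp, log

def hyp_pfq(a_params, b_params, z, terms=200):
    """Partial sum of the generalized hypergeometric series pFq."""
    total = 1.0
    term = 1.0  # the n = 0 term
    for n in range(terms):
        # ratio of successive coefficients: prod(a + n) / (prod(b + n) (n + 1))
        ratio = 1.0
        for a in a_params:
            ratio *= a + n
        for b in b_params:
            ratio /= b + n
        term *= ratio * z / (n + 1)
        total += term
    return total

# 0F0(;;z) = e^z, and 2F1(1,1;2;z) = -ln(1-z)/z
v_exp = hyp_pfq([], [], 1.0)
v_log = hyp_pfq([1, 1], [2], 0.5)
```

The ratio step is exactly the rational function of n from the definition, so no Pochhammer symbols need to be computed explicitly.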
== Terminology ==
When all the terms of the series are defined and it has a non-zero radius of convergence, then the series defines an analytic function. Such a function, and its analytic continuations, is called the hypergeometric function.
The case when the radius of convergence is 0 yields many interesting series in mathematics, for example the incomplete gamma function has the asymptotic expansion
$\Gamma (a,z)\sim z^{a-1}e^{-z}\left(1+{\frac {a-1}{z}}+{\frac {(a-1)(a-2)}{z^{2}}}+\cdots \right)$
which could be written $z^{a-1}e^{-z}\,{}_{2}F_{0}(1-a,1;;-z^{-1})$. However, the use of the term hypergeometric series is usually restricted to the case where the series defines an actual analytic function.
The ordinary hypergeometric series should not be confused with the basic hypergeometric series, which, despite its name, is a rather more complicated and recondite series. The "basic" series is the q-analog of the ordinary hypergeometric series. There are several such generalizations of the ordinary hypergeometric series, including the ones coming from zonal spherical functions on Riemannian symmetric spaces.
The series without the factor of n! in the denominator (summed over all integers n, including negative) is called the bilateral hypergeometric series.
== Convergence conditions ==
There are certain values of the aj and bk for which the numerator or the denominator of the coefficients is 0.
If any aj is a non-positive integer (0, −1, −2, etc.) then the series only has a finite number of terms and is, in fact, a polynomial of degree −aj.
If any bk is a non-positive integer (excepting the previous case with bk < aj) then the denominators become 0 and the series is undefined.
Excluding these cases, the ratio test can be applied to determine the radius of convergence.
If p < q + 1 then the ratio of coefficients tends to zero. This implies that the series converges for any finite value of z and thus defines an entire function of z. An example is the power series for the exponential function.
If p = q + 1 then the ratio of coefficients tends to one. This implies that the series converges for |z| < 1 and diverges for |z| > 1. Whether it converges for |z| = 1 is more difficult to determine. Analytic continuation can be employed for larger values of z.
If p > q + 1 then the ratio of coefficients grows without bound. This implies that, besides z = 0, the series diverges. This is then a divergent or asymptotic series, or it can be interpreted as a symbolic shorthand for a differential equation that the sum satisfies formally.
The question of convergence for p=q+1 when z is on the unit circle is more difficult. It can be shown that the series converges absolutely at z = 1 if
$\Re \left(\sum b_{k}-\sum a_{j}\right)>0.$
Further, if p = q + 1, $\sum _{i=1}^{p}a_{i}\geq \sum _{j=1}^{q}b_{j}$, and z is real, then the following convergence result holds (Quigley et al. 2013):
$\lim _{z\rightarrow 1}(1-z){\frac {d\log({}_{p}F_{q}(a_{1},\ldots ,a_{p};b_{1},\ldots ,b_{q};z^{p}))}{dz}}=\sum _{i=1}^{p}a_{i}-\sum _{j=1}^{q}b_{j}.$
== Basic properties ==
It is immediate from the definition that the order of the parameters aj, or the order of the parameters bk can be changed without changing the value of the function. Also, if any of the parameters aj is equal to any of the parameters bk, then the matching parameters can be "cancelled out", with certain exceptions when the parameters are non-positive integers. For example,
${}_{2}F_{1}(3,1;1;z)=\,{}_{2}F_{1}(1,3;1;z)=\,{}_{1}F_{0}(3;;z).$
This cancelling is a special case of a reduction formula that may be applied whenever a parameter on the top row differs from one on the bottom row by a non-negative integer.
${}_{A+1}F_{B+1}\left[{\begin{array}{c}a_{1},\ldots ,a_{A},c+n\\b_{1},\ldots ,b_{B},c\end{array}};z\right]=\sum _{j=0}^{n}{\binom {n}{j}}{\frac {z^{j}}{(c)_{j}}}{\frac {\prod _{i=1}^{A}(a_{i})_{j}}{\prod _{i=1}^{B}(b_{i})_{j}}}\,{}_{A}F_{B}\left[{\begin{array}{c}a_{1}+j,\ldots ,a_{A}+j\\b_{1}+j,\ldots ,b_{B}+j\end{array}};z\right]$
=== Euler's integral transform ===
The following basic identity is very useful as it relates the higher-order hypergeometric functions in terms of integrals over the lower order ones
${}_{A+1}F_{B+1}\left[{\begin{array}{c}a_{1},\ldots ,a_{A},c\\b_{1},\ldots ,b_{B},d\end{array}};z\right]={\frac {\Gamma (d)}{\Gamma (c)\Gamma (d-c)}}\int _{0}^{1}t^{c-1}(1-t)^{d-c-1}\ {}_{A}F_{B}\left[{\begin{array}{c}a_{1},\ldots ,a_{A}\\b_{1},\ldots ,b_{B}\end{array}};tz\right]dt$
=== Differentiation ===
The generalized hypergeometric function satisfies
$\left(z{\frac {\rm {d}}{{\rm {d}}z}}+a_{j}\right){}_{p}F_{q}\left[{\begin{array}{c}a_{1},\dots ,a_{j},\dots ,a_{p}\\b_{1},\dots ,b_{q}\end{array}};z\right]=a_{j}\;{}_{p}F_{q}\left[{\begin{array}{c}a_{1},\dots ,a_{j}+1,\dots ,a_{p}\\b_{1},\dots ,b_{q}\end{array}};z\right]$
and
$\left(z{\frac {\rm {d}}{{\rm {d}}z}}+b_{k}-1\right){}_{p}F_{q}\left[{\begin{array}{c}a_{1},\dots ,a_{p}\\b_{1},\dots ,b_{k},\dots ,b_{q}\end{array}};z\right]=(b_{k}-1)\;{}_{p}F_{q}\left[{\begin{array}{c}a_{1},\dots ,a_{p}\\b_{1},\dots ,b_{k}-1,\dots ,b_{q}\end{array}};z\right]\quad {\text{for }}b_{k}\neq 1$
Additionally,
${\frac {\rm {d}}{{\rm {d}}z}}\;{}_{p}F_{q}\left[{\begin{array}{c}a_{1},\dots ,a_{p}\\b_{1},\dots ,b_{q}\end{array}};z\right]={\frac {\prod _{i=1}^{p}a_{i}}{\prod _{j=1}^{q}b_{j}}}\;{}_{p}F_{q}\left[{\begin{array}{c}a_{1}+1,\dots ,a_{p}+1\\b_{1}+1,\dots ,b_{q}+1\end{array}};z\right]$
Combining these gives a differential equation satisfied by w = pFq:
$z\prod _{n=1}^{p}\left(z{\frac {\rm {d}}{{\rm {d}}z}}+a_{n}\right)w=z{\frac {\rm {d}}{{\rm {d}}z}}\prod _{n=1}^{q}\left(z{\frac {\rm {d}}{{\rm {d}}z}}+b_{n}-1\right)w.$
== Contiguous function and related identities ==
Take the following operator:
$\vartheta =z{\frac {\rm {d}}{{\rm {d}}z}}.$
From the differentiation formulas given above, the linear space spanned by
${}_{p}F_{q}(a_{1},\dots ,a_{p};b_{1},\dots ,b_{q};z),\quad \vartheta \;{}_{p}F_{q}(a_{1},\dots ,a_{p};b_{1},\dots ,b_{q};z)$
contains each of
${}_{p}F_{q}(a_{1},\dots ,a_{j}+1,\dots ,a_{p};b_{1},\dots ,b_{q};z),$
${}_{p}F_{q}(a_{1},\dots ,a_{p};b_{1},\dots ,b_{k}-1,\dots ,b_{q};z),$
$z\;{}_{p}F_{q}(a_{1}+1,\dots ,a_{p}+1;b_{1}+1,\dots ,b_{q}+1;z),$
${}_{p}F_{q}(a_{1},\dots ,a_{p};b_{1},\dots ,b_{q};z).$
Since the space has dimension 2, any three of these p+q+2 functions are linearly dependent:
$(a_{i}-b_{j}+1)\,{}_{p}F_{q}(\dots a_{i}\dots ;\dots ,b_{j}\dots ;z)=a_{i}\,{}_{p}F_{q}(\dots a_{i}+1\dots ;\dots ,b_{j}\dots ;z)-(b_{j}-1)\,{}_{p}F_{q}(\dots a_{i}\dots ;\dots ,b_{j}-1\dots ;z),$
$(a_{i}-a_{j})\,{}_{p}F_{q}(\dots a_{i}\dots a_{j}\dots ;\dots ;z)=a_{i}\,{}_{p}F_{q}(\dots a_{i}+1\dots a_{j}\dots ;\dots ;z)-a_{j}\,{}_{p}F_{q}(\dots a_{i}\dots a_{j}+1\dots ;\dots ;z),$
$b_{j}\,{}_{p}F_{q}(\dots a_{i}\dots ;\dots b_{j}\dots ;z)=a_{i}\,{}_{p}F_{q}(\dots a_{i}+1\dots ;\dots b_{j}+1\dots ;z)+(b_{j}-a_{i})\,{}_{p}F_{q}(\dots a_{i}\dots ;\dots b_{j}+1\dots ;z),$
$(a_{i}-1)\,{}_{p}F_{q}(\dots a_{i}\dots a_{j};\dots ;z)=(a_{i}-a_{j}-1)\,{}_{p}F_{q}(\dots a_{i}-1\dots a_{j};\dots ;z)+a_{j}\,{}_{p}F_{q}(\dots a_{i}-1\dots a_{j}+1;\dots ;z).$
These dependencies can be written out to generate a large number of identities involving
${}_{p}F_{q}$.
For example, in the simplest non-trivial case,
${}_{0}F_{1}(;a;z)=(1)\;{}_{0}F_{1}(;a;z),$
${}_{0}F_{1}(;a-1;z)=\left({\frac {\vartheta }{a-1}}+1\right)\;{}_{0}F_{1}(;a;z),$
$z\;{}_{0}F_{1}(;a+1;z)=(a\vartheta )\;{}_{0}F_{1}(;a;z).$
So
${}_{0}F_{1}(;a-1;z)-\;{}_{0}F_{1}(;a;z)={\frac {z}{a(a-1)}}\;{}_{0}F_{1}(;a+1;z).$
This, and other important examples,
${}_{1}F_{1}(a+1;b;z)-\,{}_{1}F_{1}(a;b;z)={\frac {z}{b}}\;{}_{1}F_{1}(a+1;b+1;z),$
${}_{1}F_{1}(a;b-1;z)-\,{}_{1}F_{1}(a;b;z)={\frac {az}{b(b-1)}}\;{}_{1}F_{1}(a+1;b+1;z),$
${}_{1}F_{1}(a;b-1;z)-\,{}_{1}F_{1}(a+1;b;z)={\frac {(a-b+1)z}{b(b-1)}}\;{}_{1}F_{1}(a+1;b+1;z),$
${}_{2}F_{1}(a+1,b;c;z)-\,{}_{2}F_{1}(a,b;c;z)={\frac {bz}{c}}\;{}_{2}F_{1}(a+1,b+1;c+1;z),$
${}_{2}F_{1}(a+1,b;c;z)-\,{}_{2}F_{1}(a,b+1;c;z)={\frac {(b-a)z}{c}}\;{}_{2}F_{1}(a+1,b+1;c+1;z),$
${}_{2}F_{1}(a,b;c-1;z)-\,{}_{2}F_{1}(a+1,b;c;z)={\frac {(a-c+1)bz}{c(c-1)}}\;{}_{2}F_{1}(a+1,b+1;c+1;z),$
can be used to generate continued fraction expressions known as Gauss's continued fraction.
Similarly, by applying the differentiation formulas twice, there are
${\binom {p+q+3}{2}}$
such functions contained in
$\{1,\vartheta ,\vartheta ^{2}\}\;{}_{p}F_{q}(a_{1},\dots ,a_{p};b_{1},\dots ,b_{q};z),$
which has dimension three so any four are linearly dependent. This generates more identities and the process can be continued. The identities thus generated can be combined with each other to produce new ones in a different way.
A function obtained by adding ±1 to exactly one of the parameters aj, bk in
${}_{p}F_{q}(a_{1},\dots ,a_{p};b_{1},\dots ,b_{q};z)$
is called contiguous to
${}_{p}F_{q}(a_{1},\dots ,a_{p};b_{1},\dots ,b_{q};z).$
Using the technique outlined above, an identity relating ${}_{0}F_{1}(;a;z)$ and its two contiguous functions can be given; six identities relating ${}_{1}F_{1}(a;b;z)$ and any two of its four contiguous functions, and fifteen identities relating ${}_{2}F_{1}(a,b;c;z)$ and any two of its six contiguous functions, have been found. The first one was derived in the previous paragraph. The last fifteen were given by Gauss (1813).
== Identities ==
A number of other hypergeometric function identities were discovered in the nineteenth and twentieth centuries. A 20th century contribution to the methodology of proving these identities is the Egorychev method.
=== Saalschütz's theorem ===
Saalschütz's theorem (Saalschütz 1890) is
${}_{3}F_{2}(a,b,-n;c,1+a+b-c-n;1)={\frac {(c-a)_{n}(c-b)_{n}}{(c)_{n}(c-a-b)_{n}}}.$
For extension of this theorem, see a research paper by Rakha & Rathie. According to (Andrews, Askey & Roy 1999, p. 69), it was in fact first discovered by Pfaff in 1797.
=== Dixon's identity ===
Dixon's identity, first proved by Dixon (1902), gives the sum of a well-poised 3F2 at 1:
${}_{3}F_{2}(a,b,c;1+a-b,1+a-c;1)={\frac {\Gamma (1+{\frac {a}{2}})\Gamma (1+{\frac {a}{2}}-b-c)\Gamma (1+a-b)\Gamma (1+a-c)}{\Gamma (1+a)\Gamma (1+a-b-c)\Gamma (1+{\frac {a}{2}}-b)\Gamma (1+{\frac {a}{2}}-c)}}.$
For generalization of Dixon's identity, see a paper by Lavoie, et al.
=== Dougall's formula ===
Dougall's formula (Dougall 1907) gives the sum of a very well-poised series that is terminating and 2-balanced.
${}_{7}F_{6}\left({\begin{matrix}a&1+{\frac {a}{2}}&b&c&d&e&-m\\&{\frac {a}{2}}&1+a-b&1+a-c&1+a-d&1+a-e&1+a+m\end{matrix}};1\right)={\frac {(1+a)_{m}(1+a-b-c)_{m}(1+a-c-d)_{m}(1+a-b-d)_{m}}{(1+a-b)_{m}(1+a-c)_{m}(1+a-d)_{m}(1+a-b-c-d)_{m}}}.$
Terminating means that m is a non-negative integer and 2-balanced means that
$1+2a=b+c+d+e-m.$
Many of the other formulas for special values of hypergeometric functions can be derived from this as special or limiting cases. It is also called the Dougall-Ramanujan identity. It is a special case of Jackson's identity, and it gives Dixon's identity and Saalschütz's theorem as special cases.
=== Generalization of Kummer's transformations and identities for 2F2 ===
Identity 1.
$e^{-x}\;{}_{2}F_{2}(a,1+d;c,d;x)={}_{2}F_{2}(c-a-1,f+1;c,f;-x)$
where $f={\frac {d(a-c+1)}{a-d}}$;
Identity 2.
$e^{-{\frac {x}{2}}}\,{}_{2}F_{2}\left(a,1+b;2a+1,b;x\right)={}_{0}F_{1}\left(;a+{\tfrac {1}{2}};{\tfrac {x^{2}}{16}}\right)-{\frac {x\left(1-{\tfrac {2a}{b}}\right)}{2(2a+1)}}\;{}_{0}F_{1}\left(;a+{\tfrac {3}{2}};{\tfrac {x^{2}}{16}}\right),$
which links Bessel functions to 2F2; this reduces to Kummer's second formula for b = 2a:
Identity 3.
$e^{-{\frac {x}{2}}}\,{}_{1}F_{1}(a;2a;x)={}_{0}F_{1}\left(;a+{\tfrac {1}{2}};{\tfrac {x^{2}}{16}}\right).$
Identity 4.
${}_{2}F_{2}(a,b;c,d;x)=\sum _{i=0}{\frac {{b-d \choose i}{a+i-1 \choose i}}{{c+i-1 \choose i}{d+i-1 \choose i}}}\;{}_{1}F_{1}(a+i;c+i;x){\frac {x^{i}}{i!}}=e^{x}\sum _{i=0}{\frac {{b-d \choose i}{a+i-1 \choose i}}{{c+i-1 \choose i}{d+i-1 \choose i}}}\;{}_{1}F_{1}(c-a;c+i;-x){\frac {x^{i}}{i!}},$
which is a finite sum if b-d is a non-negative integer.
=== Kummer's relation ===
Kummer's relation is
{\displaystyle {}_{2}F_{1}\left(2a,2b;a+b+{\tfrac {1}{2}};x\right)={}_{2}F_{1}\left(a,b;a+b+{\tfrac {1}{2}};4x(1-x)\right).}
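Kummer's relation can be checked numerically for x small enough that 4x(1 − x) stays inside the unit disc. The parameters a = 0.3, b = 0.4, x = 0.2 below are arbitrary test choices.

```python
# Numerical check of Kummer's relation (arbitrary test parameters; requires |4x(1-x)| < 1).
def pFq(a_params, b_params, z, terms=200):
    """Truncated series sum_n [prod (a)_n / prod (b)_n] z^n / n!."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        num = 1.0
        for a in a_params:
            num *= a + n
        den = float(n + 1)
        for b in b_params:
            den *= b + n
        term *= num * z / den
    return total

a, b, x = 0.3, 0.4, 0.2
c = a + b + 0.5
lhs = pFq([2 * a, 2 * b], [c], x)              # 2F1(2a, 2b; a+b+1/2; x)
rhs = pFq([a, b], [c], 4 * x * (1 - x))        # 2F1(a, b; a+b+1/2; 4x(1-x))
assert abs(lhs - rhs) < 1e-10
```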
=== Clausen's formula ===
Clausen's formula
{\displaystyle {}_{3}F_{2}(2c-2s-1,2s,c-{\tfrac {1}{2}};2c-1,c;x)=\,{}_{2}F_{1}(c-s-{\tfrac {1}{2}},s;c;x)^{2}}
was used by de Branges to prove the Bieberbach conjecture.
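Clausen's formula equates a 3F2 to the square of a 2F1, which makes a direct numerical check straightforward. The values c = 1.3, s = 0.4, x = 0.3 below are arbitrary test choices.

```python
# Numerical check of Clausen's formula (arbitrary test parameters; a sketch only).
def pFq(a_params, b_params, z, terms=200):
    """Truncated series sum_n [prod (a)_n / prod (b)_n] z^n / n!."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        num = 1.0
        for a in a_params:
            num *= a + n
        den = float(n + 1)
        for b in b_params:
            den *= b + n
        term *= num * z / den
    return total

c, s, x = 1.3, 0.4, 0.3
lhs = pFq([2 * c - 2 * s - 1, 2 * s, c - 0.5], [2 * c - 1, c], x)  # 3F2 side
rhs = pFq([c - s - 0.5, s], [c], x) ** 2                           # 2F1 squared
assert abs(lhs - rhs) < 1e-10
```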
== Special cases ==
Many of the special functions in mathematics are special cases of the confluent hypergeometric function or the hypergeometric function; see the corresponding articles for examples.
=== The series 0F0 ===
As noted earlier, {\displaystyle {}_{0}F_{0}(;;z)=e^{z}}. The differential equation for this function is
{\displaystyle {\frac {d}{dz}}w=w},
which has solutions {\displaystyle w=ke^{z}} where k is a constant.
=== The series 0F1 ===
The functions of the form {\displaystyle {}_{0}F_{1}(;a;z)} are called confluent hypergeometric limit functions and are closely related to Bessel functions.
The relationship is:
{\displaystyle J_{\alpha }(x)={\frac {({\tfrac {x}{2}})^{\alpha }}{\Gamma (\alpha +1)}}{}_{0}F_{1}\left(;\alpha +1;-{\tfrac {1}{4}}x^{2}\right).}
{\displaystyle I_{\alpha }(x)={\frac {({\tfrac {x}{2}})^{\alpha }}{\Gamma (\alpha +1)}}{}_{0}F_{1}\left(;\alpha +1;{\tfrac {1}{4}}x^{2}\right).}
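The Bessel relation can be tested against a case with a known closed form: J_{1/2}(x) = √(2/(πx)) sin x. The sketch below evaluates the 0F1 expression for α = 1/2 at an arbitrary point x = 1.2.

```python
# Check J_{1/2}(x) computed via 0F1 against its closed form sqrt(2/(pi x)) sin x.
from math import gamma, pi, sin, sqrt

def pFq(a_params, b_params, z, terms=100):
    """Truncated series sum_n [prod (a)_n / prod (b)_n] z^n / n!."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        num = 1.0
        for a in a_params:
            num *= a + n
        den = float(n + 1)
        for b in b_params:
            den *= b + n
        term *= num * z / den
    return total

alpha, x = 0.5, 1.2
bessel = (x / 2) ** alpha / gamma(alpha + 1) * pFq([], [alpha + 1], -x * x / 4)
closed_form = sqrt(2 / (pi * x)) * sin(x)
assert abs(bessel - closed_form) < 1e-12
```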
The differential equation for this function is
{\displaystyle w=\left(z{\frac {d}{dz}}+a\right){\frac {dw}{dz}}}
or
{\displaystyle z{\frac {d^{2}w}{dz^{2}}}+a{\frac {dw}{dz}}-w=0.}
When a is not a positive integer, the substitution
{\displaystyle w=z^{1-a}u,}
gives a linearly independent solution
{\displaystyle z^{1-a}\;{}_{0}F_{1}(;2-a;z),}
so the general solution is
{\displaystyle k\;{}_{0}F_{1}(;a;z)+lz^{1-a}\;{}_{0}F_{1}(;2-a;z)}
where k, l are constants. (If a is a positive integer, the independent solution is given by the appropriate Bessel function of the second kind.)
A special case is:
{\displaystyle {}_{0}F_{1}\left(;{\frac {1}{2}};-{\frac {z^{2}}{4}}\right)=\cos z}
=== The series 1F0 ===
An important case is:
{\displaystyle {}_{1}F_{0}(a;;z)=(1-z)^{-a}.}
The differential equation for this function is
{\displaystyle {\frac {d}{dz}}w=\left(z{\frac {d}{dz}}+a\right)w,}
or
{\displaystyle (1-z){\frac {dw}{dz}}=aw,}
which has solutions {\displaystyle w=k(1-z)^{-a}} where k is a constant.
{\displaystyle {}_{1}F_{0}(1;;z)=\sum _{n\geqslant 0}z^{n}=(1-z)^{-1}}
is the geometric series with ratio z and coefficient 1.
{\displaystyle z~{}_{1}F_{0}(2;;z)=\sum _{n\geqslant 0}nz^{n}=z(1-z)^{-2}}
is also useful.
=== The series 1F1 ===
The functions of the form
1
F
1
(
a
;
b
;
z
)
{\displaystyle {}_{1}F_{1}(a;b;z)}
are called confluent hypergeometric functions of the first kind, also written
M
(
a
;
b
;
z
)
{\displaystyle M(a;b;z)}
. The incomplete gamma function
γ
(
a
,
z
)
{\displaystyle \gamma (a,z)}
is a special case.
The differential equation for this function is
{\displaystyle \left(z{\frac {d}{dz}}+a\right)w=\left(z{\frac {d}{dz}}+b\right){\frac {dw}{dz}}}
or
{\displaystyle z{\frac {d^{2}w}{dz^{2}}}+(b-z){\frac {dw}{dz}}-aw=0.}
When b is not a positive integer, the substitution
{\displaystyle w=z^{1-b}u,}
gives a linearly independent solution
{\displaystyle z^{1-b}\;{}_{1}F_{1}(1+a-b;2-b;z),}
so the general solution is
{\displaystyle k\;{}_{1}F_{1}(a;b;z)+lz^{1-b}\;{}_{1}F_{1}(1+a-b;2-b;z)}
where k, l are constants.
When a is a non-positive integer, −n, {\displaystyle {}_{1}F_{1}(-n;b;z)} is a polynomial. Up to constant factors, these are the Laguerre polynomials. This implies Hermite polynomials can be expressed in terms of 1F1 as well.
=== The series 1F2 ===
Relations to other functions are known for certain parameter combinations only.
The function
{\displaystyle x\;{}_{1}F_{2}\left({\frac {1}{2}};{\frac {3}{2}},{\frac {3}{2}};-{\frac {x^{2}}{4}}\right)}
is the antiderivative of the cardinal sine. With modified values of {\displaystyle a_{1}} and {\displaystyle b_{1}}, one obtains the antiderivative of {\displaystyle \sin(x^{\beta })/x^{\alpha }}.
The Lommel function is
{\displaystyle s_{\mu ,\nu }(z)={\frac {z^{\mu +1}}{(\mu -\nu +1)(\mu +\nu +1)}}{}_{1}F_{2}(1;{\frac {\mu }{2}}-{\frac {\nu }{2}}+{\frac {3}{2}},{\frac {\mu }{2}}+{\frac {\nu }{2}}+{\frac {3}{2}};-{\frac {z^{2}}{4}})}.
=== The series 2F0 ===
The confluent hypergeometric function of the second kind can be written as:
{\displaystyle U(a,b,z)=z^{-a}\;{}_{2}F_{0}\left(a,a-b+1;;-{\frac {1}{z}}\right).}
=== The series 2F1 ===
Historically, the most important are the functions of the form {\displaystyle {}_{2}F_{1}(a,b;c;z)}. These are sometimes called Gauss's hypergeometric functions, classical standard hypergeometric or often simply hypergeometric functions. The term generalized hypergeometric function is used for the functions pFq if there is a risk of confusion. This function was first studied in detail by Carl Friedrich Gauss, who explored the conditions for its convergence.
The differential equation for this function is
{\displaystyle \left(z{\frac {d}{dz}}+a\right)\left(z{\frac {d}{dz}}+b\right)w=\left(z{\frac {d}{dz}}+c\right){\frac {dw}{dz}}}
or
{\displaystyle z(1-z){\frac {d^{2}w}{dz^{2}}}+\left[c-(a+b+1)z\right]{\frac {dw}{dz}}-ab\,w=0.}
It is known as the hypergeometric differential equation. When c is not a positive integer, the substitution
{\displaystyle w=z^{1-c}u}
gives a linearly independent solution
{\displaystyle z^{1-c}\;{}_{2}F_{1}(1+a-c,1+b-c;2-c;z),}
so the general solution for |z| < 1 is
{\displaystyle k\;{}_{2}F_{1}(a,b;c;z)+lz^{1-c}\;{}_{2}F_{1}(1+a-c,1+b-c;2-c;z)}
where k, l are constants. Different solutions can be derived for other values of z. In fact there are 24 solutions, known as the Kummer solutions, derivable using various identities, valid in different regions of the complex plane.
When a is a non-positive integer, −n, {\displaystyle {}_{2}F_{1}(-n,b;c;z)} is a polynomial. Up to constant factors and scaling, these are the Jacobi polynomials. Several other classes of orthogonal polynomials, up to constant factors, are special cases of Jacobi polynomials, so these can be expressed using 2F1 as well. This includes Legendre polynomials and Chebyshev polynomials.
A wide range of integrals of elementary functions can be expressed using the hypergeometric function, e.g.:
{\displaystyle \int _{0}^{x}{\sqrt {1+y^{\alpha }}}\,\mathrm {d} y={\frac {x}{2+\alpha }}\left\{\alpha \;{}_{2}F_{1}\left({\tfrac {1}{\alpha }},{\tfrac {1}{2}};1+{\tfrac {1}{\alpha }};-x^{\alpha }\right)+2{\sqrt {x^{\alpha }+1}}\right\},\qquad \alpha \neq 0.}
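This integral formula can be checked against straightforward numerical quadrature. The sketch below uses Simpson's rule with the arbitrary test choices α = 2 and x = 0.5 (for α = 2 the 2F1 side reduces to an inverse hyperbolic sine, so agreement is expected).

```python
# Compare the hypergeometric formula for ∫_0^x sqrt(1 + y^alpha) dy with Simpson quadrature.
from math import sqrt

def pFq(a_params, b_params, z, terms=200):
    """Truncated series sum_n [prod (a)_n / prod (b)_n] z^n / n!."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        num = 1.0
        for a in a_params:
            num *= a + n
        den = float(n + 1)
        for b in b_params:
            den *= b + n
        term *= num * z / den
    return total

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

alpha, x = 2.0, 0.5
numeric = simpson(lambda y: sqrt(1 + y ** alpha), 0.0, x)
formula = x / (2 + alpha) * (alpha * pFq([1 / alpha, 0.5], [1 + 1 / alpha], -x ** alpha)
                             + 2 * sqrt(x ** alpha + 1))
assert abs(numeric - formula) < 1e-9
```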
=== The series 3F0 ===
The Mott polynomials can be written as:
{\displaystyle s_{n}(x)=(-x/2)^{n}{}_{3}F_{0}(-n,{\frac {1-n}{2}},1-{\frac {n}{2}};;-{\frac {4}{x^{2}}}).}
=== The series 3F2 ===
The function
{\displaystyle \operatorname {Li} _{2}(x)=\sum _{n>0}\,{x^{n}}{n^{-2}}=x\;{}_{3}F_{2}(1,1,1;2,2;x)}
is the dilogarithm.
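The dilogarithm representation is easy to confirm by comparing the defining series Σ xⁿ/n² with x·3F2(1,1,1;2,2;x); the test point x = 0.5 below is an arbitrary choice inside the unit disc.

```python
# Check Li_2(x) = x * 3F2(1,1,1; 2,2; x) against the defining series (|x| < 1).
def pFq(a_params, b_params, z, terms=200):
    """Truncated series sum_n [prod (a)_n / prod (b)_n] z^n / n!."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        num = 1.0
        for a in a_params:
            num *= a + n
        den = float(n + 1)
        for b in b_params:
            den *= b + n
        term *= num * z / den
    return total

x = 0.5
direct = sum(x ** n / n ** 2 for n in range(1, 200))   # defining series of Li_2
via_3f2 = x * pFq([1, 1, 1], [2, 2], x)
assert abs(direct - via_3f2) < 1e-12
```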
The function
{\displaystyle Q_{n}(x;a,b,N)={}_{3}F_{2}(-n,-x,n+a+b+1;a+1,-N+1;1)}
is a Hahn polynomial.
=== The series 4F3 ===
The function
{\displaystyle p_{n}(t^{2})=(a+b)_{n}(a+c)_{n}(a+d)_{n}\;{}_{4}F_{3}\left(-n,a+b+c+d+n-1,a-t,a+t;a+b,a+c,a+d;1\right)}
is a Wilson polynomial.
All roots of a quintic equation can be expressed in terms of radicals and the Bring radical, which is the real solution to {\displaystyle x^{5}+x+a=0}. The Bring radical can be written as:
{\displaystyle \operatorname {BR} (a)=-a\;{}_{4}F_{3}\left({\frac {1}{5}},{\frac {2}{5}},{\frac {3}{5}},{\frac {4}{5}};{\frac {1}{2}},{\frac {3}{4}},{\frac {5}{4}};-{\frac {3125a^{4}}{256}}\right).}
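As a sanity check, the 4F3 series for the Bring radical can be summed numerically for a small value of a; with the argument of the series taken as −3125a⁴/256, the resulting value satisfies x⁵ + x + a = 0 to within truncation error (a sketch; sign conventions for the Bring radical vary between sources, and a = 0.1 is an arbitrary test value inside the region of convergence).

```python
# Verify numerically that the 4F3 series value solves x^5 + x + a = 0 (small a).
def pFq(a_params, b_params, z, terms=100):
    """Truncated series sum_n [prod (a)_n / prod (b)_n] z^n / n!."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        num = 1.0
        for a in a_params:
            num *= a + n
        den = float(n + 1)
        for b in b_params:
            den *= b + n
        term *= num * z / den
    return total

a = 0.1
z = -3125 * a ** 4 / 256          # argument taken negative (sign conventions vary)
br = -a * pFq([0.2, 0.4, 0.6, 0.8], [0.5, 0.75, 1.25], z)
assert abs(br ** 5 + br + a) < 1e-10   # BR(a) is a root of x^5 + x + a
```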
=== The series q+1Fq ===
The functions
{\displaystyle \operatorname {Li} _{q}(z)=z\;{}_{q+1}F_{q}\left(1,1,\ldots ,1;2,2,\ldots ,2;z\right)}
{\displaystyle \operatorname {Li} _{-p}(z)=z\;{}_{p}F_{p-1}\left(2,2,\ldots ,2;1,1,\ldots ,1;z\right)}
for {\displaystyle q\in \mathbb {N} _{0}} and {\displaystyle p\in \mathbb {N} } are the polylogarithms.
For each integer n ≥ 2, the roots of the polynomial x^n − x + t can be expressed as a sum of at most n − 1 hypergeometric functions of type n+1Fn, which can always be reduced by eliminating at least one pair of a and b parameters.
== Generalizations ==
The generalized hypergeometric function is linked to the Meijer G-function and the MacRobert E-function. Hypergeometric series were generalised to several variables, for example by Paul Emile Appell and Joseph Kampé de Fériet; but a comparable general theory took long to emerge. Many identities were found, some quite remarkable. A generalization, the q-series analogues, called the basic hypergeometric series, were given by Eduard Heine in the late nineteenth century. Here, the ratios considered of successive terms, instead of a rational function of n, are a rational function of qn. Another generalization, the elliptic hypergeometric series, are those series where the ratio of terms is an elliptic function (a doubly periodic meromorphic function) of n.
During the twentieth century this was a fruitful area of combinatorial mathematics, with numerous connections to other fields. There are a number of new definitions of general hypergeometric functions, by Aomoto, Israel Gelfand and others; and applications for example to the combinatorics of arranging a number of hyperplanes in complex N-space (see arrangement of hyperplanes).
Special hypergeometric functions occur as zonal spherical functions on Riemannian symmetric spaces and semi-simple Lie groups. Their importance and role can be understood through the following example: the hypergeometric series 2F1 has the Legendre polynomials as a special case, and when considered in the form of spherical harmonics, these polynomials reflect, in a certain sense, the symmetry properties of the two-sphere or, equivalently, the rotations given by the Lie group SO(3). In tensor product decompositions of concrete representations of this group Clebsch–Gordan coefficients are met, which can be written as 3F2 hypergeometric series.
Bilateral hypergeometric series are a generalization of hypergeometric functions where one sums over all integers, not just the positive ones.
Fox–Wright functions are a generalization of generalized hypergeometric functions where the Pochhammer symbols in the series expression are generalised to gamma functions of linear expressions in the index n.
== See also ==
Appell series
Humbert series
Kampé de Fériet function
Lauricella hypergeometric series
== Notes ==
== References ==
Askey, R. A.; Daalhuis, Adri B. Olde (2010), "Generalized hypergeometric function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
Andrews, George E.; Askey, Richard & Roy, Ranjan (1999). Special functions. Encyclopedia of Mathematics and its Applications. Vol. 71. Cambridge University Press. ISBN 978-0-521-78988-2. MR 1688958.
Bailey, W.N. (1935). Generalized Hypergeometric Series. Cambridge Tracts in Mathematics and Mathematical Physics. Vol. 32. London: Cambridge University Press. Zbl 0011.02303.
Dixon, A.C. (1902). "Summation of a certain series". Proc. London Math. Soc. 35 (1): 284–291. doi:10.1112/plms/s1-35.1.284. JFM 34.0490.02.
Dougall, J. (1907). "On Vandermonde's theorem and some more general expansions". Proc. Edinburgh Math. Soc. 25: 114–132. doi:10.1017/S0013091500033642 (inactive 20 December 2024).{{cite journal}}: CS1 maint: DOI inactive as of December 2024 (link)
Erdélyi, Arthur; Magnus, Wilhelm; Oberhettinger, Fritz; Tricomi, Francesco G. (1955). Higher transcendental functions. Vol. III. McGraw-Hill Book Company, Inc., New York-Toronto-London. MR 0066496.
Gasper, George; Rahman, Mizan (2004). Basic Hypergeometric Series. Encyclopedia of Mathematics and Its Applications. Vol. 96 (2nd ed.). Cambridge, UK: Cambridge University Press. ISBN 978-0-521-83357-8. MR 2128719. Zbl 1129.33005. (the first edition has ISBN 0-521-35049-2)
Gauss, Carl Friedrich (1813). "Disquisitiones generales circa seriam infinitam {\displaystyle 1+{\tfrac {\alpha \beta }{1\cdot \gamma }}~x+{\tfrac {\alpha (\alpha +1)\beta (\beta +1)}{1\cdot 2\cdot \gamma (\gamma +1)}}~x~x+{\mbox{etc.}}}". Commentationes Societatis Regiae Scientarum Gottingensis Recentiores (in Latin). 2. Göttingen. (a reprint of this paper can be found in Carl Friedrich Gauss, Werke, p. 125) (a translation is available on Wikisource)
Grinshpan, A. Z. (2013), "Generalized hypergeometric functions: product identities and weighted norm inequalities", The Ramanujan Journal, 31 (1–2): 53–66, doi:10.1007/s11139-013-9487-x, S2CID 121054930
Heckman, Gerrit & Schlichtkrull, Henrik (1994). Harmonic Analysis and Special Functions on Symmetric Spaces. San Diego: Academic Press. ISBN 978-0-12-336170-7. (part 1 treats hypergeometric functions on Lie groups)
Lavoie, J.L.; Grondin, F.; Rathie, A.K.; Arora, K. (1994). "Generalizations of Dixon's theorem on the sum of a 3F2". Math. Comp. 62 (205): 267–276. doi:10.2307/2153407. JSTOR 2153407.
Miller, A. R.; Paris, R. B. (2011). "Euler-type transformations for the generalized hypergeometric function r+2Fr+1". Z. Angew. Math. Phys. 62 (1): 31–45. Bibcode:2011ZaMP...62...31M. doi:10.1007/s00033-010-0085-0. S2CID 30484300.
Quigley, J.; Wilson, K.J.; Walls, L.; Bedford, T. (2013). "A Bayes linear Bayes Method for Estimation of Correlated Event Rates" (PDF). Risk Analysis. 33 (12): 2209–2224. Bibcode:2013RiskA..33.2209Q. doi:10.1111/risa.12035. PMID 23551053. S2CID 24476762.
Rathie, Arjun K.; Pogány, Tibor K. (2008). "New summation formula for 3F2(1/2) and a Kummer-type II transformation of 2F2(x)". Mathematical Communications. 13: 63–66. MR 2422088. Zbl 1146.33002.
Rakha, M.A.; Rathie, Arjun K. (2011). "Extensions of Euler's type- II transformation and Saalschutz's theorem". Bull. Korean Math. Soc. 48 (1): 151–156. doi:10.4134/bkms.2011.48.1.151.
Saalschütz, L. (1890). "Eine Summationsformel". Zeitschrift für Mathematik und Physik (in German). 35: 186–188. JFM 22.0262.03.
Slater, Lucy Joan (1966). Generalized Hypergeometric Functions. Cambridge, UK: Cambridge University Press. ISBN 978-0-521-06483-5. MR 0201688. Zbl 0135.28101. (there is a 2008 paperback with ISBN 978-0-521-09061-2)
Yoshida, Masaaki (1997). Hypergeometric Functions, My Love: Modular Interpretations of Configuration Spaces. Braunschweig/Wiesbaden: Friedr. Vieweg & Sohn. ISBN 978-3-528-06925-4. MR 1453580.
== External links ==
The book "A = B", which is freely downloadable from the internet.
Weisstein, Eric W. "Generalized Hypergeometric Function". MathWorld.
Weisstein, Eric W. "Hypergeometric Function". MathWorld.
Weisstein, Eric W. "Confluent Hypergeometric Function of the First Kind". MathWorld.
Weisstein, Eric W. "Confluent Hypergeometric Limit Function". MathWorld. | Wikipedia/Generalized_hypergeometric_function |
In mathematics, Felix Klein's j-invariant or j function is a modular function of weight zero for the special linear group {\displaystyle \operatorname {SL} (2,\mathbb {Z} )} defined on the upper half-plane of complex numbers. It is the unique such function that is holomorphic away from a simple pole at the cusp such that
{\displaystyle j{\big (}e^{2\pi i/3}{\big )}=0,\quad j(i)=1728=12^{3}.}
Rational functions of {\displaystyle j} are modular, and in fact give all modular functions of weight 0. Classically, the {\displaystyle j}-invariant was studied as a parameterization of elliptic curves over {\displaystyle \mathbb {C} }, but it also has surprising connections to the symmetries of the Monster group (this connection is referred to as monstrous moonshine).
== Definition ==
The j-invariant can be defined as a function on the upper half-plane {\displaystyle {\mathcal {H}}=\{\tau \in \mathbb {C} \mid \operatorname {Im} (\tau )>0\}}, by
{\displaystyle j(\tau )=1728{\frac {g_{2}(\tau )^{3}}{\Delta (\tau )}}=1728{\frac {g_{2}(\tau )^{3}}{g_{2}(\tau )^{3}-27g_{3}(\tau )^{2}}}=1728{\frac {g_{2}(\tau )^{3}}{(2\pi )^{12}\,\eta ^{24}(\tau )}}}
with the third definition implying {\displaystyle j(\tau )} can be expressed as a cube, since 1728 {\displaystyle {}=12^{3}}. The function cannot be continued analytically beyond the upper half-plane due to the natural boundary at the real line.
The given functions are the modular discriminant
{\displaystyle \Delta (\tau )=g_{2}(\tau )^{3}-27g_{3}(\tau )^{2}=(2\pi )^{12}\,\eta ^{24}(\tau )},
the Dedekind eta function {\displaystyle \eta (\tau )}, and the modular invariants
{\displaystyle g_{2}(\tau )=60G_{4}(\tau )=60\sum _{(m,n)\neq (0,0)}\left(m+n\tau \right)^{-4}}
{\displaystyle g_{3}(\tau )=140G_{6}(\tau )=140\sum _{(m,n)\neq (0,0)}\left(m+n\tau \right)^{-6}}
where {\displaystyle G_{4}(\tau )}, {\displaystyle G_{6}(\tau )} are Fourier series,
{\displaystyle {\begin{aligned}G_{4}(\tau )&={\frac {\pi ^{4}}{45}}\,E_{4}(\tau )\\[4pt]G_{6}(\tau )&={\frac {2\pi ^{6}}{945}}\,E_{6}(\tau )\end{aligned}}}
and {\displaystyle E_{4}(\tau )}, {\displaystyle E_{6}(\tau )} are Eisenstein series,
{\displaystyle {\begin{aligned}E_{4}(\tau )&=1+240\sum _{n=1}^{\infty }{\frac {n^{3}q^{n}}{1-q^{n}}}\\[4pt]E_{6}(\tau )&=1-504\sum _{n=1}^{\infty }{\frac {n^{5}q^{n}}{1-q^{n}}}\end{aligned}}}
and {\displaystyle q=e^{2\pi i\tau }} (the square of the nome). The j-invariant can then be directly expressed in terms of the Eisenstein series as,
{\displaystyle j(\tau )=1728{\frac {E_{4}(\tau )^{3}}{E_{4}(\tau )^{3}-E_{6}(\tau )^{2}}}}
with no numerical factor other than 1728. This implies a third way to define the modular discriminant,
{\displaystyle \Delta (\tau )=(2\pi )^{12}\,{\frac {E_{4}(\tau )^{3}-E_{6}(\tau )^{2}}{1728}}}
For example, using the definitions above and {\displaystyle \tau =2i}, the Dedekind eta function {\displaystyle \eta (2i)} has the exact value,
{\displaystyle \eta (2i)={\frac {\Gamma \left({\frac {1}{4}}\right)}{2^{11/8}\pi ^{3/4}}}}
implying the transcendental numbers,
{\displaystyle g_{2}(2i)={\frac {11\,\Gamma \left({\frac {1}{4}}\right)^{8}}{2^{8}\pi ^{2}}},\qquad g_{3}(2i)={\frac {7\,\Gamma \left({\frac {1}{4}}\right)^{12}}{2^{12}\pi ^{3}}}}
but yielding the algebraic number (in fact, an integer),
{\displaystyle j(2i)=1728{\frac {g_{2}(2i)^{3}}{g_{2}(2i)^{3}-27g_{3}(2i)^{2}}}=66^{3}.}
In general, this can be motivated by viewing each τ as representing an isomorphism class of elliptic curves. Every elliptic curve E over C is a complex torus, and thus can be identified with a rank 2 lattice; that is, a two-dimensional lattice of C. This lattice can be rotated and scaled (operations that preserve the isomorphism class), so that it is generated by 1 and τ ∈ H. This lattice corresponds to the elliptic curve
{\displaystyle y^{2}=4x^{3}-g_{2}(\tau )x-g_{3}(\tau )}
(see Weierstrass elliptic functions).
Note that j is defined everywhere in H as the modular discriminant is non-zero. This is due to the corresponding cubic polynomial having distinct roots.
== The fundamental region ==
It can be shown that Δ is a modular form of weight twelve, and g2 one of weight four, so that its third power is also of weight twelve. Thus their quotient, and therefore j, is a modular function of weight zero, in particular a holomorphic function H → C invariant under the action of SL(2, Z). Quotienting out by its centre { ±I } yields the modular group, which we may identify with the projective special linear group PSL(2, Z).
By a suitable choice of transformation belonging to this group,
{\displaystyle \tau \mapsto {\frac {a\tau +b}{c\tau +d}},\qquad ad-bc=1,}
we may reduce τ to a value giving the same value for j, and lying in the fundamental region for j, which consists of values for τ satisfying the conditions
{\displaystyle {\begin{aligned}|\tau |&\geq 1\\[5pt]-{\tfrac {1}{2}}&<{\mathfrak {R}}(\tau )\leq {\tfrac {1}{2}}\\[5pt]-{\tfrac {1}{2}}&<{\mathfrak {R}}(\tau )<0\Rightarrow |\tau |>1\end{aligned}}}
The function j(τ) when restricted to this region still takes on every value in the complex numbers C exactly once. In other words, for every c in C, there is a unique τ in the fundamental region such that c = j(τ). Thus, j has the property of mapping the fundamental region to the entire complex plane.
Additionally, two values τ, τ′ ∈ H produce the same elliptic curve if and only if τ = T(τ′) for some T ∈ PSL(2, Z). This means j provides a bijection from the set of elliptic curves over C to the complex plane.
As a Riemann surface, the fundamental region has genus 0, and every (level one) modular function is a rational function in j; and, conversely, every rational function in j is a modular function. In other words, the field of modular functions is C(j).
== Class field theory and j ==
The j-invariant has many remarkable properties:
If τ is any point of the upper half plane whose corresponding elliptic curve has complex multiplication (that is, if τ is any element of an imaginary quadratic field with positive imaginary part, so that j is defined), then j(τ) is an algebraic integer. These special values are called singular moduli.
The field extension Q[j(τ), τ]/Q(τ) is abelian, that is, it has an abelian Galois group.
Let Λ be the lattice in C generated by {1, τ}. It is easy to see that all of the elements of Q(τ) which fix Λ under multiplication form a ring with units, called an order. The other lattices with generators {1, τ′}, associated in like manner to the same order define the algebraic conjugates j(τ′) of j(τ) over Q(τ). Ordered by inclusion, the unique maximal order in Q(τ) is the ring of algebraic integers of Q(τ), and values of τ having it as its associated order lead to unramified extensions of Q(τ).
These classical results are the starting point for the theory of complex multiplication.
== Transcendence properties ==
In 1937 Theodor Schneider proved the aforementioned result that if τ is a quadratic irrational number in the upper half plane then j(τ) is an algebraic integer. In addition he proved that if τ is an algebraic number but not imaginary quadratic then j(τ) is transcendental.
The j function has numerous other transcendental properties. Kurt Mahler conjectured a particular transcendence result that is often referred to as Mahler's conjecture, though it was proved as a corollary of results by Yu. V. Nesterenko and Patrice Phillipon in the 1990s. Mahler's conjecture (now proven) is that, if τ is in the upper half plane, then e2πiτ and j(τ) are never both simultaneously algebraic. Stronger results are now known, for example if e2πiτ is algebraic then the following three numbers are algebraically independent, and thus at least two of them transcendental:
{\displaystyle j(\tau ),{\frac {j^{\prime }(\tau )}{\pi }},{\frac {j^{\prime \prime }(\tau )}{\pi ^{2}}}}
== The q-expansion and moonshine ==
Several remarkable properties of j have to do with its q-expansion (Fourier series expansion), written as a Laurent series in terms of q = e2πiτ, which begins:
{\displaystyle j(\tau )=q^{-1}+744+196884q+21493760q^{2}+864299970q^{3}+20245856256q^{4}+\cdots }
Note that j has a simple pole at the cusp, so its q-expansion has no terms below q−1.
All the Fourier coefficients are integers, which results in several almost integers, notably Ramanujan's constant:
{\displaystyle e^{\pi {\sqrt {163}}}\approx 640320^{3}+744}.
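The "almost integer" quality of Ramanujan's constant is invisible at ordinary floating-point precision, but the standard-library decimal module suffices to exhibit it. The sketch below hardcodes π to 50 digits and checks that e^{π√163} differs from 640320³ + 744 by less than 10⁻¹².

```python
# High-precision check that e^{pi*sqrt(163)} is an "almost integer" (Ramanujan's constant).
from decimal import Decimal, getcontext

getcontext().prec = 60
PI = Decimal("3.14159265358979323846264338327950288419716939937510")  # pi to 50 places
ramanujan = (PI * Decimal(163).sqrt()).exp()       # e^{pi*sqrt(163)}
near_integer = Decimal(640320) ** 3 + 744
gap = abs(ramanujan - near_integer)
assert Decimal(0) < gap < Decimal("1e-12")         # close to, but not exactly, an integer
```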
The asymptotic formula for the coefficient of qn is given by
{\displaystyle {\frac {e^{4\pi {\sqrt {n}}}}{{\sqrt {2}}\,n^{3/4}}}},
as can be proved by the Hardy–Littlewood circle method.
=== Moonshine ===
More remarkably, the Fourier coefficients for the positive exponents of q are the dimensions of the graded part of an infinite-dimensional graded algebra representation of the monster group called the moonshine module – specifically, the coefficient of qn is the dimension of grade-n part of the moonshine module, the first example being the Griess algebra, which has dimension 196,884, corresponding to the term 196884q. This startling observation, first made by John McKay, was the starting point for moonshine theory.
The study of the Moonshine conjecture led John Horton Conway and Simon P. Norton to look at the genus-zero modular functions. If they are normalized to have the form
{\displaystyle q^{-1}+{O}(q)}
then John G. Thompson showed that there are only a finite number of such functions (of some finite level), and Chris J. Cummins later showed that there are exactly 6486 of them, 616 of which have integral coefficients.
== Alternate expressions ==
We have
{\displaystyle j(\tau )={\frac {256\left(1-x\right)^{3}}{x^{2}}}}
where x = λ(1 − λ) and λ is the modular lambda function
{\displaystyle \lambda (\tau )={\frac {\theta _{2}^{4}(e^{\pi i\tau })}{\theta _{3}^{4}(e^{\pi i\tau })}}=k^{2}(\tau )}
a ratio of Jacobi theta functions θm, and is the square of the elliptic modulus k(τ). The value of j is unchanged when λ is replaced by any of the six values of the cross-ratio:
{\displaystyle \left\lbrace {\lambda ,{\frac {1}{1-\lambda }},{\frac {\lambda -1}{\lambda }},{\frac {1}{\lambda }},{\frac {\lambda }{\lambda -1}},1-\lambda }\right\rbrace }
The branch points of j are at {0, 1, ∞}, so that j is a Belyi function.
== Expressions in terms of theta functions ==
Define the nome q = eπiτ and the Jacobi theta function,
{\displaystyle \vartheta (0;\tau )=\vartheta _{00}(0;\tau )=1+2\sum _{n=1}^{\infty }\left(e^{\pi i\tau }\right)^{n^{2}}=\sum _{n=-\infty }^{\infty }q^{n^{2}}}
from which one can derive the auxiliary theta functions, defined here. Let,
{\displaystyle {\begin{aligned}a&=\theta _{2}(q)=\vartheta _{10}(0;\tau )\\b&=\theta _{3}(q)=\vartheta _{00}(0;\tau )\\c&=\theta _{4}(q)=\vartheta _{01}(0;\tau )\end{aligned}}}
where ϑij and θn are alternative notations, and a4 − b4 + c4 = 0. Then we have, for the modular invariants g2, g3,
{\displaystyle {\begin{aligned}g_{2}(\tau )&={\tfrac {2}{3}}\pi ^{4}\left(a^{8}+b^{8}+c^{8}\right)\\g_{3}(\tau )&={\tfrac {4}{27}}\pi ^{6}{\sqrt {\frac {\left(a^{8}+b^{8}+c^{8}\right)^{3}-54\left(abc\right)^{8}}{2}}}\\\end{aligned}}}
and modular discriminant,
{\displaystyle \Delta =g_{2}^{3}-27g_{3}^{2}=(2\pi )^{12}\left({\tfrac {1}{2}}abc\right)^{8}=(2\pi )^{12}\eta (\tau )^{24}}
with Dedekind eta function η(τ). The j(τ) can then be rapidly computed,
{\displaystyle j(\tau )=1728{\frac {g_{2}^{3}}{g_{2}^{3}-27g_{3}^{2}}}=32{\frac {\left(a^{8}+b^{8}+c^{8}\right)^{3}}{\left(abc\right)^{8}}}}
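The theta-constant expression indeed allows rapid computation of j: the q-series for the theta null values converge extremely fast. The sketch below evaluates them for purely imaginary τ and recovers the known special values j(i) = 1728 and j(2i) = 66³.

```python
# j(tau) from theta constants via j = 32 (a^8 + b^8 + c^8)^3 / (abc)^8,
# for purely imaginary tau = i * tau_im (truncated q-series; a sketch).
from math import exp, pi

def theta_consts(tau_im, terms=20):
    """Theta null values theta_2, theta_3, theta_4 at q = e^{pi*i*tau}."""
    q = exp(-pi * tau_im)
    t2 = 2 * sum(q ** ((n + 0.5) ** 2) for n in range(terms))
    t3 = 1 + 2 * sum(q ** (n * n) for n in range(1, terms))
    t4 = 1 + 2 * sum((-1) ** n * q ** (n * n) for n in range(1, terms))
    return t2, t3, t4

def j_invariant(tau_im):
    a, b, c = theta_consts(tau_im)
    return 32 * (a ** 8 + b ** 8 + c ** 8) ** 3 / (a * b * c) ** 8

assert abs(j_invariant(1.0) - 1728) < 1e-6      # j(i) = 1728
assert abs(j_invariant(2.0) - 66 ** 3) < 1e-6   # j(2i) = 66^3 = 287496
```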
== Algebraic definition ==
So far we have been considering j as a function of a complex variable. However, as an invariant for isomorphism classes of elliptic curves, it can be defined purely algebraically. Let
{\displaystyle y^{2}+a_{1}xy+a_{3}y=x^{3}+a_{2}x^{2}+a_{4}x+a_{6}}
be a plane elliptic curve over any field. Then we may perform successive transformations to get the above equation into the standard form y2 = 4x3 − g2x − g3 (note that this transformation can only be made when the characteristic of the field is not equal to 2 or 3). The resulting coefficients are:
{\displaystyle {\begin{aligned}b_{2}&=a_{1}^{2}+4a_{2},\quad &b_{4}&=a_{1}a_{3}+2a_{4},\\b_{6}&=a_{3}^{2}+4a_{6},\quad &b_{8}&=a_{1}^{2}a_{6}-a_{1}a_{3}a_{4}+a_{2}a_{3}^{2}+4a_{2}a_{6}-a_{4}^{2},\\c_{4}&=b_{2}^{2}-24b_{4},\quad &c_{6}&=-b_{2}^{3}+36b_{2}b_{4}-216b_{6},\end{aligned}}}
where g2 = c4 and g3 = c6. We also have the discriminant
{\displaystyle \Delta =-b_{2}^{2}b_{8}+9b_{2}b_{4}b_{6}-8b_{4}^{3}-27b_{6}^{2}.}
The j-invariant for the elliptic curve may now be defined as
{\displaystyle j={\frac {c_{4}^{3}}{\Delta }}}
In the case that the field over which the curve is defined has characteristic different from 2 or 3, this is equal to
{\displaystyle j=1728{\frac {c_{4}^{3}}{c_{4}^{3}-c_{6}^{2}}}.}
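As a sanity check, the formulas for the b_i, c_4, the discriminant, and j above can be evaluated directly. The sketch below (exact rational arithmetic; the function name is illustrative) computes j for the curve y^2 = x^3 − 25x:

```python
from fractions import Fraction

def j_invariant(a1, a2, a3, a4, a6):
    """j of y^2 + a1*x*y + a3*y = x^3 + a2*x^2 + a4*x + a6, via the formulas above."""
    b2 = a1**2 + 4*a2
    b4 = a1*a3 + 2*a4
    b6 = a3**2 + 4*a6
    b8 = a1**2*a6 - a1*a3*a4 + a2*a3**2 + 4*a2*a6 - a4**2
    c4 = b2**2 - 24*b4
    delta = -b2**2*b8 + 9*b2*b4*b6 - 8*b4**3 - 27*b6**2
    return Fraction(c4**3, delta)

print(j_invariant(0, 0, 0, -25, 0))   # 1728 (the curve y^2 = x^3 - 25x)
```

Here b2 = 0, b4 = −50, b8 = −625, so c4 = 1200, Δ = 1000000, and j = 1200³/10⁶ = 1728.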
== Inverse function ==
The inverse function of the j-invariant can be expressed in terms of the hypergeometric function 2F1 (see also the article Picard–Fuchs equation). Explicitly, given a number N, to solve the equation j(τ) = N for τ can be done in at least four ways.
Method 1: Solving the sextic in λ,
{\displaystyle j(\tau )={\frac {256{\bigl (}1-\lambda (1-\lambda ){\bigr )}^{3}}{{\bigl (}\lambda (1-\lambda ){\bigr )}^{2}}}={\frac {256\left(1-x\right)^{3}}{x^{2}}}}
where x = λ(1 − λ), and λ is the modular lambda function, so the sextic can be solved as a cubic in x. Then,
{\displaystyle \tau =i\ {\frac {{}_{2}F_{1}\left({\tfrac {1}{2}},{\tfrac {1}{2}},1;1-\lambda \right)}{{}_{2}F_{1}\left({\tfrac {1}{2}},{\tfrac {1}{2}},1;\lambda \right)}}=i{\frac {\operatorname {M} (1,{\sqrt {1-\lambda }})}{\operatorname {M} (1,{\sqrt {\lambda }})}}}
for any of the six values of λ, where M is the arithmetic–geometric mean.
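Method 1 can be checked at the known point λ = 1/2, which corresponds to τ = i (where j(i) = 1728). The sketch below is illustrative, in floating point, and implements M by its defining iteration:

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean M(a, b) by its defining iteration."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

lam = 0.5                                    # lambda(i) = 1/2
x = lam * (1 - lam)
j = 256 * (1 - x)**3 / x**2                  # the sextic, read as a cubic in x
tau = 1j * agm(1, math.sqrt(1 - lam)) / agm(1, math.sqrt(lam))
print(j, tau)                                # j = 1728.0; tau is numerically i
```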
Method 2: Solving the quartic in γ,
{\displaystyle j(\tau )={\frac {27\left(1+8\gamma \right)^{3}}{\gamma \left(1-\gamma \right)^{3}}}}
then for any of the four roots,
{\displaystyle \tau ={\frac {i}{\sqrt {3}}}{\frac {{}_{2}F_{1}\left({\tfrac {1}{3}},{\tfrac {2}{3}},1;1-\gamma \right)}{{}_{2}F_{1}\left({\tfrac {1}{3}},{\tfrac {2}{3}},1;\gamma \right)}}}
Method 3: Solving the cubic in β,
{\displaystyle j(\tau )={\frac {64\left(1+3\beta \right)^{3}}{\beta \left(1-\beta \right)^{2}}}}
then for any of the three roots,
{\displaystyle \tau ={\frac {i}{\sqrt {2}}}{\frac {{}_{2}F_{1}\left({\tfrac {1}{4}},{\tfrac {3}{4}},1;1-\beta \right)}{{}_{2}F_{1}\left({\tfrac {1}{4}},{\tfrac {3}{4}},1;\beta \right)}}}
Method 4: Solving the quadratic in α,
{\displaystyle j(\tau )={\frac {1728}{4\alpha (1-\alpha )}}}
then,
{\displaystyle \tau =i\ {\frac {{}_{2}F_{1}\left({\tfrac {1}{6}},{\tfrac {5}{6}},1;1-\alpha \right)}{{}_{2}F_{1}\left({\tfrac {1}{6}},{\tfrac {5}{6}},1;\alpha \right)}}}
One root gives τ, and the other gives −1/τ, but since j(τ) = j(−1/τ), it makes no difference which α is chosen. The latter three methods can be found in Ramanujan's theory of elliptic functions to alternative bases.
The inversion is applied in high-precision calculations of elliptic function periods even as their ratios become unbounded. A related result is the expressibility via quadratic radicals of the values of j at the points of the imaginary axis whose magnitudes are powers of 2 (thus permitting compass and straightedge constructions). The latter result is hardly evident since the modular equation for j of order 2 is cubic.
== Pi formulas ==
The Chudnovsky brothers found in 1987,
{\displaystyle {\frac {1}{\pi }}={\frac {12}{640320^{3/2}}}\sum _{k=0}^{\infty }{\frac {(6k)!(163\cdot 3344418k+13591409)}{(3k)!\left(k!\right)^{3}\left(-640320\right)^{3k}}}}
a proof of which uses the fact that
{\displaystyle j\left({\frac {1+{\sqrt {-163}}}{2}}\right)=-640320^{3}.}
For similar formulas, see the Ramanujan–Sato series.
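The Chudnovsky series converges extremely fast, adding roughly 14 correct digits per term. A sketch using only the standard library (the precision and term count are illustrative):

```python
from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 50
# Partial sum of the Chudnovsky series; 163*3344418 = 545140134 is the
# linear coefficient appearing in the formula above.
S = sum(Decimal(factorial(6*k) * (163*3344418*k + 13591409)) /
        (factorial(3*k) * factorial(k)**3 * (-640320)**(3*k))
        for k in range(3))
pi = Decimal(640320**3).sqrt() / (12 * S)    # invert 1/pi = 12*S / 640320^(3/2)
print(str(pi)[:20])                          # 3.141592653589793238
```

Three terms already give π to more than 40 digits, far beyond double precision.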
== Failure to classify elliptic curves over other fields ==
The j-invariant is only sensitive to isomorphism classes of elliptic curves over the complex numbers or, more generally, over an algebraically closed field. Over other fields there exist examples of elliptic curves whose j-invariant is the same but which are non-isomorphic. For example, let
E1 and E2 be the elliptic curves associated to the polynomials
{\displaystyle {\begin{aligned}E_{1}:&{\text{ }}y^{2}=x^{3}-25x\\E_{2}:&{\text{ }}y^{2}=x^{3}-4x,\end{aligned}}}
both having j-invariant 1728. Then, the rational points of E2 can be computed as:
{\displaystyle E_{2}(\mathbb {Q} )=\{\infty ,(2,0),(-2,0),(0,0)\}}
since
{\displaystyle x^{3}-4x=x(x^{2}-4)=x(x-2)(x+2).}
There are no rational solutions with y = a ≠ 0. This can be shown using Cardano's formula: in that case the roots of x^3 − 4x − a^2 are all irrational.
On the other hand, on the set of points {n(−4, 6) : n ∈ Z} the equation for E1 becomes 36n^2 = −64n^3 + 100n. Dividing by 4n to eliminate the (0, 0) solution, the quadratic formula gives the rational solutions:
{\displaystyle n={\frac {-9\pm {\sqrt {81-4\cdot 16\cdot (-25)}}}{2\cdot 16}}={\frac {-9\pm 41}{32}}.}
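Both roots indeed satisfy 36n² = −64n³ + 100n, which a quick exact check confirms:

```python
from fractions import Fraction

roots = [Fraction(-9 + 41, 32), Fraction(-9 - 41, 32)]   # 1 and -25/16
for n in roots:
    # Each root satisfies the equation 36n^2 = -64n^3 + 100n exactly.
    assert 36*n**2 == -64*n**3 + 100*n
print(roots)   # [Fraction(1, 1), Fraction(-25, 16)]
```

In particular n = 1 gives the point (−4, 6) on E1: 36 = 64 − 100 + 100 − 64 + 36 reduces to 36 = −64 + 100.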
If these curves are considered over Q(√10), there is an isomorphism E1(Q(√10)) ≅ E2(Q(√10)) sending (x, y) ↦ (μ²x, μ³y), where μ = √10/2.
== References ==
=== Notes ===
=== Other ===
Apostol, Tom M. (1976), Modular functions and Dirichlet Series in Number Theory, Graduate Texts in Mathematics, vol. 41, New York: Springer-Verlag, MR 0422157. Provides a very readable introduction and various interesting identities.
Apostol, Tom M. (1990), Modular functions and Dirichlet Series in Number Theory, Graduate Texts in Mathematics, vol. 41 (2nd ed.), doi:10.1007/978-1-4612-0999-7, ISBN 978-0-387-97127-8, MR 1027834
Berndt, Bruce C.; Chan, Heng Huat (1999), "Ramanujan and the modular j-invariant", Canadian Mathematical Bulletin, 42 (4): 427–440, doi:10.4153/CMB-1999-050-1, MR 1727340. Provides a variety of interesting algebraic identities, including the inverse as a hypergeometric series.
Cox, David A. (1989), Primes of the Form x^2 + ny^2: Fermat, Class Field Theory, and Complex Multiplication, New York: Wiley-Interscience Publication, John Wiley & Sons Inc., MR 1028322 Introduces the j-invariant and discusses the related class field theory.
Conway, John Horton; Norton, Simon (1979), "Monstrous moonshine", Bulletin of the London Mathematical Society, 11 (3): 308–339, doi:10.1112/blms/11.3.308, MR 0554399. Includes a list of the 175 genus-zero modular functions.
Rankin, Robert A. (1977), Modular forms and functions, Cambridge: Cambridge University Press, ISBN 978-0-521-21212-0, MR 0498390. Provides a short review in the context of modular forms.
Schneider, Theodor (1937), "Arithmetische Untersuchungen elliptischer Integrale", Math. Annalen, 113: 1–13, doi:10.1007/BF01571618, MR 1513075, S2CID 121073687. | Wikipedia/Elliptic_modular_function |
In mathematics, a rate is the quotient of two quantities, often represented as a fraction. If the divisor (or fraction denominator) in the rate is equal to one expressed as a single unit, and if it is assumed that this quantity can be changed systematically (i.e., is an independent variable), then the dividend (the fraction numerator) of the rate expresses the corresponding rate of change in the other (dependent) variable. In some cases, it may be regarded as a change in one value caused by a change in another value. For example, acceleration is a change in velocity with respect to time.
Temporal rate is a common type of rate ("per unit of time"), such as speed, heart rate, and flux.
In fact, often rate is a synonym of rhythm or frequency, a count per second (i.e., hertz); e.g., radio frequencies or sample rates.
In describing the units of a rate, the word "per" is used to separate the units of the two measurements used to calculate the rate; for example, a heart rate is expressed as "beats per minute".
Rates that have a non-time divisor or denominator include exchange rates, literacy rates, and electric field (in volts per meter).
A rate defined using two numbers of the same units will result in a dimensionless quantity, also known as a ratio, or simply as a rate (such as tax rates) or counts (such as literacy rate). Dimensionless rates can be expressed as a percentage (for example, the global literacy rate in 1998 was 80%), fraction, or multiple.
== Properties and examples ==
Rates and ratios often vary with time, location, particular element (or subset) of a set of objects, etc. Thus they are often mathematical functions.
A rate (or ratio) may often be thought of as an output-input ratio or benefit-cost ratio, considered in the broad sense. For example, miles per hour in transportation is the output (or benefit) in terms of miles of travel, which one gets from spending an hour (a cost in time) of traveling (at this velocity).
A set of sequential indices may be used to enumerate elements (or subsets) of a set of ratios under study. For example, in finance, one could define an index i by assigning consecutive integers to companies, to political subdivisions (such as states), to different investments, etc. The reason for using indices is so that a set of ratios (i = 0, ..., N) can be used in an equation to calculate a function of the rates, such as an average of a set of ratios, for example the average velocity found from the set of velocities v_i mentioned above. Finding averages may involve using weighted averages and possibly using the harmonic mean.
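Averaging speeds over equal distances is one case where the harmonic mean, not the arithmetic mean, gives the correct average rate. A small sketch with illustrative numbers:

```python
from statistics import harmonic_mean

d = 60.0                        # km per leg
speeds = [30.0, 60.0]           # km/h on each leg
total_time = sum(d / v for v in speeds)     # 2 h + 1 h = 3 h
avg_speed = 2 * d / total_time              # 120 km / 3 h = 40 km/h
# The arithmetic mean (45 km/h) would overstate the true average speed.
print(avg_speed, harmonic_mean(speeds))     # both approximately 40.0
```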
A ratio r=a/b has both a numerator "a" and a denominator "b". The value of a and b may be a real number or integer. The inverse of a ratio r is 1/r = b/a. A rate may be equivalently expressed as an inverse of its value if the ratio of its units is also inverse. For example, 5 miles (mi) per kilowatt-hour (kWh) corresponds to 1/5 kWh/mi (or 200 Wh/mi).
Rates are relevant to many aspects of everyday life. For example:
How fast are you driving? The speed of the car (often expressed in miles per hour) is a rate. What interest does your savings account pay you? The amount of interest paid per year is a rate.
== Rate of change ==
Consider the case where the numerator f of a rate is a function f(a), where a happens to be the denominator of the rate δf/δa. A rate of change of f with respect to a (where a is incremented by h) can be formally defined in two ways:
{\displaystyle {\begin{aligned}{\mbox{Average rate of change}}&={\frac {f(x+h)-f(x)}{h}}\\{\mbox{Instantaneous rate of change}}&=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}\end{aligned}}}
where f(x) is the function evaluated at x, and the average is taken over the interval from x to x + h. An instantaneous rate of change is equivalent to a derivative.
For example, the average speed of a car can be calculated using the total distance traveled between two points, divided by the travel time. In contrast, the instantaneous velocity can be determined by viewing a speedometer.
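The two definitions can be sketched numerically with f(x) = x² at x = 2 (illustrative values): the average rate over [2, 2 + h] tends to the instantaneous rate, the derivative 2x = 4, as h shrinks.

```python
def f(x):
    return x * x

for h in (1.0, 0.1, 0.001):
    avg = (f(2 + h) - f(2)) / h   # average rate of change over [2, 2 + h]
    print(h, avg)                 # approaches 4 (the derivative at x = 2) as h -> 0
```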
== Temporal rates ==
In chemistry and physics:
Speed, the rate of change of position, or the change of position per unit of time
Acceleration, the rate of change in speed, or the change in speed per unit of time
Power, the rate of doing work, or the amount of energy transferred per unit time
Frequency, the number of occurrences of a repeating event per unit of time
Angular frequency and rotation speed, the number of turns per unit of time
Reaction rate, the speed at which chemical reactions occur
Volumetric flow rate, the volume of fluid which passes through a given surface per unit of time; e.g., cubic meters per second
=== Counts-per-time rates ===
Radioactive decay, the amount of radioactive material in which one nucleus decays per second, measured in becquerels
In computing:
Bit rate, the number of bits that are conveyed or processed by a computer per unit of time
Symbol rate, the number of symbol changes (signaling events) made to the transmission medium per second
Sampling rate, the number of samples (signal measurements) per second
Miscellaneous definitions:
Rate of reinforcement, number of reinforcements per unit of time, usually per minute
Heart rate, usually measured in beats per minute
== Economics/finance rates/ratios ==
Exchange rate, how much one currency is worth in terms of the other
Inflation rate, the ratio of the change in the general price level during a year to the starting price level
Interest rate, the price a borrower pays for the use of the money they do not own (ratio of payment to amount borrowed)
Price–earnings ratio, market price per share of stock divided by annual earnings per share
Rate of return, the ratio of money gained or lost on an investment relative to the amount of money invested
Tax rate, the tax amount divided by the taxable income
Unemployment rate, the ratio of the number of people who are unemployed to the number in the labor force
Wage rate, the amount paid for working a given amount of time (or doing a standard amount of accomplished work) (ratio of payment to time)
== Other rates ==
Birth rate, and mortality rate, the number of births or deaths scaled to the size of that population, per unit of time
Literacy rate, the proportion of the population over age fifteen that can read and write
Sex ratio or gender ratio, the ratio of males to females in a population
== See also ==
Derivative
Gradient
Hertz
Slope
== References == | Wikipedia/Rate_(mathematics) |
In mathematics, a conformal map is a function that locally preserves angles, but not necessarily lengths.
More formally, let U and V be open subsets of R^n. A function f : U → V is called conformal (or angle-preserving) at a point u0 ∈ U if it preserves angles between directed curves through u0, as well as preserving orientation. Conformal maps preserve both angles and the shapes of infinitesimally small figures, but not necessarily their size or curvature.
The conformal property may be described in terms of the Jacobian derivative matrix of a coordinate transformation. The transformation is conformal whenever the Jacobian at each point is a positive scalar times a rotation matrix (orthogonal with determinant one). Some authors define conformality to include orientation-reversing mappings whose Jacobians can be written as any scalar times any orthogonal matrix.
For mappings in two dimensions, the (orientation-preserving) conformal mappings are precisely the locally invertible complex analytic functions. In three and higher dimensions, Liouville's theorem sharply limits the conformal mappings to a few types.
The notion of conformality generalizes in a natural way to maps between Riemannian or semi-Riemannian manifolds.
== In two dimensions ==
If U is an open subset of the complex plane C, then a function f : U → C is conformal if and only if it is holomorphic and its derivative is everywhere non-zero on U. If f is antiholomorphic (conjugate to a holomorphic function), it preserves angles but reverses their orientation.
In the literature, there is another definition of conformal: a mapping f which is one-to-one and holomorphic on an open set in the plane. The open mapping theorem forces the inverse function (defined on the image of f) to be holomorphic. Thus, under this definition, a map is conformal if and only if it is biholomorphic. The two definitions for conformal maps are not equivalent: being one-to-one and holomorphic implies having a non-zero derivative. In fact, we have the following relation, the inverse function theorem:
{\displaystyle (f^{-1}(z_{0}))'={\frac {1}{f'(z_{0})}}}
where z0 ∈ C. However, the exponential function is a holomorphic function with a nonzero derivative, but is not one-to-one since it is periodic.
The Riemann mapping theorem, one of the profound results of complex analysis, states that any non-empty open simply connected proper subset of C admits a bijective conformal map to the open unit disk in C. Informally, this means that any blob can be transformed into a perfect circle by some conformal map.
=== Global conformal maps on the Riemann sphere ===
A map of the Riemann sphere onto itself is conformal if and only if it is a Möbius transformation.
The complex conjugate of a Möbius transformation preserves angles but reverses the orientation; circle inversions are an example.
=== Conformality with respect to three types of angles ===
In plane geometry there are three types of angles that may be preserved in a conformal map. Each is hosted by its own real algebra, ordinary complex numbers, split-complex numbers, and dual numbers. The conformal maps are described by linear fractional transformations in each case.
== In three or more dimensions ==
=== Riemannian geometry ===
In Riemannian geometry, two Riemannian metrics g and h on a smooth manifold M are called conformally equivalent if g = uh for some positive function u on M. The function u is called the conformal factor.
A diffeomorphism between two Riemannian manifolds is called a conformal map if the pulled back metric is conformally equivalent to the original one. For example, stereographic projection of a sphere onto the plane augmented with a point at infinity is a conformal map.
One can also define a conformal structure on a smooth manifold, as a class of conformally equivalent Riemannian metrics.
=== Euclidean space ===
A classical theorem of Joseph Liouville shows that there are far fewer conformal maps in higher dimensions than in two dimensions. Any conformal map from an open subset of Euclidean space into the same Euclidean space of dimension three or greater can be composed from three types of transformations: a homothety, an isometry, and a special conformal transformation. For linear transformations, a conformal map may only be composed of homothety and isometry, and is called a conformal linear transformation.
== Applications ==
Applications of conformal mapping exist in aerospace engineering, in biomedical sciences (including brain mapping and genetic mapping), in applied math (for geodesics and in geometry), in earth sciences (including geophysics, geography, and cartography), in engineering, and in electronics.
=== Cartography ===
In cartography, several named map projections, including the Mercator projection and the stereographic projection are conformal. The preservation of compass directions makes them useful in marine navigation.
=== Physics and engineering ===
Conformal mappings are invaluable for solving problems in engineering and physics that can be expressed in terms of functions of a complex variable yet exhibit inconvenient geometries. By choosing an appropriate mapping, the analyst can transform the inconvenient geometry into a much more convenient one. For example, one may wish to calculate the electric field, E(z), arising from a point charge located near the corner of two conducting planes separated by a certain angle (where z is the complex coordinate of a point in 2-space). This problem per se is quite clumsy to solve in closed form. However, by employing a very simple conformal mapping, the inconvenient angle is mapped to one of precisely π radians, meaning that the corner of two planes is transformed to a straight line. In this new domain, the problem (that of calculating the electric field impressed by a point charge located near a conducting wall) is quite easy to solve. The solution is obtained in this domain, E(w), and then mapped back to the original domain by noting that w was obtained as a function (viz., the composition of E and w) of z, whence E(w) can be viewed as E(w(z)), which is a function of z, the original coordinate basis. Note that this application does not contradict the fact that conformal mappings preserve angles: they do so only for points in the interior of their domain, not at the boundary. Another example is the application of conformal mapping to the boundary value problem of liquid sloshing in tanks.
If a function is harmonic (that is, it satisfies Laplace's equation ∇²f = 0) over a plane domain (which is two-dimensional), and is transformed via a conformal map to another plane domain, the transformation is also harmonic. For this reason, any function which is defined by a potential can be transformed by a conformal map and still remain governed by a potential. Examples in physics of equations defined by a potential include the electromagnetic field, the gravitational field, and, in fluid dynamics, potential flow, which is an approximation to fluid flow assuming constant density, zero viscosity, and irrotational flow. One example of a fluid dynamic application of a conformal map is the Joukowsky transform that can be used to examine the field of flow around a Joukowsky airfoil.
Conformal maps are also valuable in solving nonlinear partial differential equations in some specific geometries. Such analytic solutions provide a useful check on the accuracy of numerical simulations of the governing equation. For example, in the case of very viscous free-surface flow around a semi-infinite wall, the domain can be mapped to a half-plane in which the solution is one-dimensional and straightforward to calculate.
For discrete systems, Noury and Yang presented a way to convert the root locus of a discrete system into a continuous root locus through a well-known conformal mapping in geometry (namely, inversion mapping).
=== Maxwell's equations ===
Maxwell's equations are preserved by Lorentz transformations which form a group including circular and hyperbolic rotations. The latter are sometimes called Lorentz boosts to distinguish them from circular rotations. All these transformations are conformal since hyperbolic rotations preserve hyperbolic angle, (called rapidity) and the other rotations preserve circular angle. The introduction of translations in the Poincaré group again preserves angles.
A larger group of conformal maps for relating solutions of Maxwell's equations was identified by Ebenezer Cunningham (1908) and Harry Bateman (1910). Their training at Cambridge University had given them facility with the method of image charges and associated methods of images for spheres and inversion. As recounted by Andrew Warwick (2003) Masters of Theory:
Each four-dimensional solution could be inverted in a four-dimensional hyper-sphere of pseudo-radius K in order to produce a new solution.
Warwick highlights this "new theorem of relativity" as a Cambridge response to Einstein, and as founded on exercises using the method of inversion, such as those found in James Hopwood Jeans's textbook Mathematical Theory of Electricity and Magnetism.
=== General relativity ===
In general relativity, conformal maps are the simplest and thus most common type of causal transformations. Physically, these describe different universes in which all the same events and interactions are still (causally) possible, but a new additional force is necessary to effect this (that is, replication of all the same trajectories would necessitate departures from geodesic motion because the metric tensor is different). It is often used to try to make models amenable to extension beyond curvature singularities, for example to permit description of the universe even before the Big Bang.
== See also ==
Biholomorphic map
Carathéodory's theorem – A conformal map extends continuously to the boundary
Penrose diagram
Schwarz–Christoffel mapping – a conformal transformation of the upper half-plane onto the interior of a simple polygon
Special linear group – transformations that preserve volume (as opposed to angles) and orientation
== References ==
== Further reading ==
Ahlfors, Lars V. (1973), Conformal invariants: topics in geometric function theory, New York: McGraw–Hill Book Co., MR 0357743
Constantin Carathéodory (1932) Conformal Representation, Cambridge Tracts in Mathematics and Physics
Chanson, H. (2009), Applied Hydrodynamics: An Introduction to Ideal and Real Fluid Flows, CRC Press, Taylor & Francis Group, Leiden, The Netherlands, 478 pages, ISBN 978-0-415-49271-3
Churchill, Ruel V. (1974), Complex Variables and Applications, New York: McGraw–Hill Book Co., ISBN 978-0-07-010855-4
E.P. Dolzhenko (2001) [1994], "Conformal mapping", Encyclopedia of Mathematics, EMS Press
Rudin, Walter (1987), Real and complex analysis (3rd ed.), New York: McGraw–Hill Book Co., ISBN 978-0-07-054234-1, MR 0924157
Weisstein, Eric W. "Conformal Mapping". MathWorld.
== External links ==
Interactive visualizations of many conformal maps
Conformal Maps by Michael Trott, Wolfram Demonstrations Project.
Conformal Mapping images of current flow in different geometries without and with magnetic field by Gerhard Brunthaler.
Conformal Transformation: from Circle to Square.
Online Conformal Map Grapher.
Joukowski Transform Interactive WebApp | Wikipedia/Conformal_transformation |
In geometry, a coordinate system is a system that uses one or more numbers, or coordinates, to uniquely determine and standardize the position of the points or other geometric elements on a manifold such as Euclidean space. The order of the coordinates is significant, and they are sometimes identified by their position in an ordered tuple and sometimes by a letter, as in "the x-coordinate". The coordinates are taken to be real numbers in elementary mathematics, but may be complex numbers or elements of a more abstract system such as a commutative ring. The use of a coordinate system allows problems in geometry to be translated into problems about numbers and vice versa; this is the basis of analytic geometry.
== Common coordinate systems ==
=== Number line ===
The simplest example of a coordinate system is the identification of points on a line with real numbers using the number line. In this system, an arbitrary point O (the origin) is chosen on a given line. The coordinate of a point P is defined as the signed distance from O to P, where the signed distance is the distance taken as positive or negative depending on which side of the line P lies. Each point is given a unique coordinate and each real number is the coordinate of a unique point.
=== Cartesian coordinate system ===
The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the lines. In three dimensions, three mutually orthogonal planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space.
Depending on the direction and order of the coordinate axes, the three-dimensional system may be a right-handed or a left-handed system.
=== Polar coordinate system ===
Another common coordinate system for the plane is the polar coordinate system. A point is chosen as the pole and a ray from this point is taken as the polar axis. For a given angle θ, there is a single line through the pole whose angle with the polar axis is θ (measured counterclockwise from the axis to the line). Then there is a unique point on this line whose signed distance from the origin is r for given number r. For a given pair of coordinates (r, θ) there is a single point, but any point is represented by many pairs of coordinates. For example, (r, θ), (r, θ+2π) and (−r, θ+π) are all polar coordinates for the same point. The pole is represented by (0, θ) for any value of θ.
=== Cylindrical and spherical coordinate systems ===
There are two common methods for extending the polar coordinate system to three dimensions. In the cylindrical coordinate system, a z-coordinate with the same meaning as in Cartesian coordinates is added to the r and θ polar coordinates giving a triple (r, θ, z). Spherical coordinates take this a step further by converting the pair of cylindrical coordinates (r, z) to polar coordinates (ρ, φ) giving a triple (ρ, θ, φ).
=== Homogeneous coordinate system ===
A point in the plane may be represented in homogeneous coordinates by a triple (x, y, z) where x/z and y/z are the Cartesian coordinates of the point. This introduces an "extra" coordinate since only two are needed to specify a point on the plane, but this system is useful in that it represents any point on the projective plane without the use of infinity. In general, a homogeneous coordinate system is one where only the ratios of the coordinates are significant and not the actual values.
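A minimal sketch of this scale invariance (the function name is illustrative): dividing by z recovers the Cartesian point, and any nonzero rescaling of the triple names the same point.

```python
def dehomogenize(x, y, z):
    """Cartesian coordinates of the projective point (x : y : z), z != 0."""
    if z == 0:
        raise ValueError("point at infinity")
    return (x / z, y / z)

# (2 : 3 : 1) and (4 : 6 : 2) are the same projective point.
assert dehomogenize(2, 3, 1) == dehomogenize(4, 6, 2) == (2.0, 3.0)
```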
=== Other commonly used systems ===
Some other common coordinate systems are the following:
Curvilinear coordinates are a broad generalization of coordinate systems in which the system is based on the intersection of curves.
Orthogonal coordinates: coordinate surfaces meet at right angles
Skew coordinates: coordinate surfaces are not orthogonal
The log-polar coordinate system represents a point in the plane by the logarithm of the distance from the origin and an angle measured from a reference line intersecting the origin.
Plücker coordinates are a way of representing lines in 3D Euclidean space using a six-tuple of numbers as homogeneous coordinates.
Generalized coordinates are used in the Lagrangian treatment of mechanics.
Canonical coordinates are used in the Hamiltonian treatment of mechanics.
Barycentric coordinate system as used for ternary plots and more generally in the analysis of triangles.
Trilinear coordinates are used in the context of triangles.
There are ways of describing curves without coordinates, using intrinsic equations that use invariant quantities such as curvature and arc length. These include:
The Whewell equation relates arc length and the tangential angle.
The Cesàro equation relates arc length and curvature.
== Coordinates of geometric objects ==
Coordinates systems are often used to specify the position of a point, but they may also be used to specify the position of more complex figures such as lines, planes, circles or spheres. For example, Plücker coordinates are used to determine the position of a line in space. When there is a need, the type of figure being described is used to distinguish the type of coordinate system, for example the term line coordinates is used for any coordinate system that specifies the position of a line.
It may occur that systems of coordinates for two different sets of geometric figures are equivalent in terms of their analysis. An example of this is the systems of homogeneous coordinates for points and lines in the projective plane. The two systems in a case like this are said to be dualistic. Dualistic systems have the property that results from one system can be carried over to the other since these results are only different interpretations of the same analytical result; this is known as the principle of duality.
== Transformations ==
There are often many different possible coordinate systems for describing geometrical figures. The relationship between different systems is described by coordinate transformations, which give formulas for the coordinates in one system in terms of the coordinates in another system. For example, in the plane, if Cartesian coordinates (x, y) and polar coordinates (r, θ) have the same origin, and the polar axis is the positive x axis, then the coordinate transformation from polar to Cartesian coordinates is given by x = r cosθ and y = r sinθ.
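The polar-to-Cartesian transformation above can be checked with a short numerical sketch (plain Python; the function name is illustrative, not from the source):

```python
import math

def polar_to_cartesian(r, theta):
    """Convert polar coordinates (r, theta) to Cartesian (x, y)."""
    return r * math.cos(theta), r * math.sin(theta)

# A point at distance 2 from the origin, 60 degrees above the polar axis:
x, y = polar_to_cartesian(2.0, math.pi / 3)
# x = 2*cos(60°) = 1.0, y = 2*sin(60°) = sqrt(3)
```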
With every bijection from the space to itself two coordinate transformations can be associated:
Such that the new coordinates of the image of each point are the same as the old coordinates of the original point (the formulas for the mapping are the inverse of those for the coordinate transformation)
Such that the old coordinates of the image of each point are the same as the new coordinates of the original point (the formulas for the mapping are the same as those for the coordinate transformation)
For example, in 1D, if the mapping is a translation of 3 to the right, the first moves the origin from 0 to 3, so that the coordinate of each point becomes 3 less, while the second moves the origin from 0 to −3, so that the coordinate of each point becomes 3 more.
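The 1D example can be sketched in a few lines of Python (the helper names are illustrative):

```python
# The mapping: translate every point 3 units to the right.
def translate(p):
    return p + 3

# Convention 1: the new coordinates of the image of each point equal the old
# coordinates of the original point. The origin effectively moves from 0 to 3,
# so each point's coordinate becomes 3 less (the inverse of the mapping).
def coords_convention_1(p):
    return p - 3

# Convention 2: the old coordinates of the image equal the new coordinates of
# the original point. The origin moves from 0 to -3, so each coordinate
# becomes 3 more (the same formula as the mapping).
def coords_convention_2(p):
    return p + 3

# The point at 5 is read as 2 under the first convention and 8 under the second.
```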
== Coordinate lines/curves ==
Given a coordinate system, if one of the coordinates of a point varies while the other coordinates are held constant, then the resulting curve is called a coordinate curve. If a coordinate curve is a straight line, it is called a coordinate line. A coordinate system for which some coordinate curves are not lines is called a curvilinear coordinate system.
Orthogonal coordinates are a special but extremely common case of curvilinear coordinates.
A coordinate line with all other constant coordinates equal to zero is called a coordinate axis, an oriented line used for assigning coordinates.
In a Cartesian coordinate system, all coordinate curves are lines, and, therefore, there are as many coordinate axes as coordinates. Moreover, the coordinate axes are pairwise orthogonal.
A polar coordinate system is a curvilinear system where coordinate curves are lines or circles. However, one of the coordinate curves is reduced to a single point, the origin, which is often viewed as a circle of radius zero. Similarly, spherical and cylindrical coordinate systems have coordinate curves that are lines, circles or circles of radius zero.
Many curves can occur as coordinate curves. For example, the coordinate curves of parabolic coordinates are parabolas.
== Coordinate planes/surfaces ==
In three-dimensional space, if one coordinate is held constant and the other two are allowed to vary, then the resulting surface is called a coordinate surface. For example, the coordinate surfaces obtained by holding ρ constant in the spherical coordinate system are the spheres with center at the origin. In three-dimensional space the intersection of two coordinate surfaces is a coordinate curve. In the Cartesian coordinate system we may speak of coordinate planes.
Similarly, coordinate hypersurfaces are the (n − 1)-dimensional spaces resulting from fixing a single coordinate of an n-dimensional coordinate system.
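The claim that holding ρ constant in spherical coordinates yields a sphere can be verified numerically (a hedged Python sketch; the conversion function and the physics convention used for the angles are assumptions, not from the source):

```python
import math

def spherical_to_cartesian(rho, theta, phi):
    """Physics convention: theta is the polar angle, phi the azimuth."""
    x = rho * math.sin(theta) * math.cos(phi)
    y = rho * math.sin(theta) * math.sin(phi)
    z = rho * math.cos(theta)
    return x, y, z

# Fix rho = 2 and vary the angles: every resulting point satisfies
# x^2 + y^2 + z^2 = rho^2, i.e. it lies on the sphere of radius 2.
rho = 2.0
for theta in (0.1, 1.0, 2.5):
    for phi in (0.0, 1.3, 3.0):
        x, y, z = spherical_to_cartesian(rho, theta, phi)
        assert abs(x * x + y * y + z * z - rho * rho) < 1e-12
```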
== Coordinate maps ==
The concept of a coordinate map, or coordinate chart is central to the theory of manifolds. A coordinate map is essentially a coordinate system for a subset of a given space with the property that each point has exactly one set of coordinates. More precisely, a coordinate map is a homeomorphism from an open subset of a space X to an open subset of Rn. It is often not possible to provide one consistent coordinate system for an entire space. In this case, a collection of coordinate maps are put together to form an atlas covering the space. A space equipped with such an atlas is called a manifold and additional structure can be defined on a manifold if the structure is consistent where the coordinate maps overlap. For example, a differentiable manifold is a manifold where the change of coordinates from one coordinate map to another is always a differentiable function.
== Orientation-based coordinates ==
In geometry and kinematics, coordinate systems are used to describe the (linear) position of points and the angular position of axes, planes, and rigid bodies. In the latter case, the orientation of a second (typically referred to as "local") coordinate system, fixed to the node, is defined based on the first (typically referred to as "global" or "world" coordinate system). For instance, the orientation of a rigid body can be represented by an orientation matrix, which includes, in its three columns, the Cartesian coordinates of three points. These points are used to define the orientation of the axes of the local system; they are the tips of three unit vectors aligned with those axes.
== Geographic systems ==
The Earth as a whole is one of the most common geometric spaces requiring the precise measurement of location, and thus coordinate systems. Starting with the Greeks of the Hellenistic period, a variety of coordinate systems have been developed based on the types above, including:
Geographic coordinate system, the spherical coordinates of latitude and longitude
Projected coordinate systems, including thousands of Cartesian coordinate systems, each based on a map projection to create a planar surface of the world or a region.
Geocentric coordinate system, a three-dimensional Cartesian coordinate system that models the Earth as an object and is most commonly used for modeling the orbits of satellites, including those of the Global Positioning System and other satellite navigation systems.
== See also ==
=== Relativistic coordinate systems ===
== References ==
=== Citations ===
=== Sources ===
== External links ==
Hexagonal Coordinate Systems | Wikipedia/Coordinate_transformation |
In mathematics, transformation geometry (or transformational geometry) is the name of a mathematical and pedagogic take on the study of geometry by focusing on groups of geometric transformations, and properties that are invariant under them. It is opposed to the classical synthetic geometry approach of Euclidean geometry, that focuses on proving theorems.
For example, within transformation geometry, the properties of an isosceles triangle are deduced from the fact that it is mapped to itself by a reflection about a certain line. This contrasts with the classical proofs by the criteria for congruence of triangles.
The first systematic effort to use transformations as the foundation of geometry was made by Felix Klein in the 19th century, under the name Erlangen programme. For nearly a century this approach remained confined to mathematics research circles. In the 20th century efforts were made to exploit it for mathematical education. Andrei Kolmogorov included this approach (together with set theory) as part of a proposal for geometry teaching reform in Russia. These efforts culminated in the 1960s with the general reform of mathematics teaching known as the New Math movement.
== Use in mathematics teaching ==
An exploration of transformation geometry often begins with a study of reflection symmetry as found in daily life. The first real transformation is reflection in a line or reflection against an axis. The composition of two reflections results in a rotation when the lines intersect, or a translation when they are parallel. Thus through transformations students learn about Euclidean plane isometry. For instance, consider reflection in a vertical line and a line inclined at 45° to the horizontal. One can observe that one composition yields a counter-clockwise quarter-turn (90°) while the reverse composition yields a clockwise quarter-turn. Such results show that transformation geometry includes non-commutative processes.
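The non-commutativity described above can be demonstrated concretely with 2×2 reflection matrices (an illustrative Python sketch):

```python
def mat_mul(a, b):
    """2x2 matrix product a @ b."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Reflection in a vertical line (the y-axis) and in the 45-degree line y = x.
refl_vertical = [[-1, 0], [0, 1]]
refl_45 = [[0, 1], [1, 0]]

# Vertical reflection first, then the 45-degree reflection:
first_order = mat_mul(refl_45, refl_vertical)   # [[0, 1], [-1, 0]], a clockwise quarter-turn
# The reverse composition:
second_order = mat_mul(refl_vertical, refl_45)  # [[0, -1], [1, 0]], a counter-clockwise quarter-turn

assert first_order != second_order  # composing reflections is not commutative
```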
An entertaining application of reflection in a line occurs in a proof of the one-seventh area triangle found in any triangle.
Another transformation introduced to young students is the dilation. However, the reflection in a circle transformation seems inappropriate for lower grades. Thus inversive geometry, a larger study than grade school transformation geometry, is usually reserved for college students.
Experiments with concrete symmetry groups make way for abstract group theory. Other concrete activities use computations with complex numbers, hypercomplex numbers, or matrices to express transformation geometry.
Such transformation geometry lessons present an alternate view that contrasts with classical synthetic geometry. When students then encounter analytic geometry, the ideas of coordinate rotations and reflections follow easily. All these concepts prepare for linear algebra where the reflection concept is expanded.
Educators have shown some interest and described projects and experiences with transformation geometry for children from kindergarten to high school. For very young children, in order to avoid introducing new terminology and to make links with students' everyday experience with concrete objects, it has sometimes been recommended to use familiar words such as "flips" for line reflections, "slides" for translations, and "turns" for rotations, although these are not precise mathematical language. In some proposals, students start by working with concrete objects before they perform the abstract transformations via their definitions as mappings of each point of the figure.
In an attempt to restructure the courses of geometry in Russia, Kolmogorov suggested presenting it under the point of view of transformations, so the geometry courses were structured based on set theory. This led to the appearance of the term "congruent" in schools, for figures that were before called "equal": since a figure was seen as a set of points, it could only be equal to itself, and two triangles that could be overlapped by isometries were said to be congruent.
One author expressed the importance of group theory to transformation geometry as follows:
I have gone to some trouble to develop from first principles all the group theory that I need, with the intention that my book can serve as a first introduction to transformation groups, and the notions of abstract group theory if you have never seen these.
== See also ==
Chirality (mathematics)
Geometric transformation
Euler's rotation theorem
Motion (geometry)
Transformation matrix
== References ==
== Further reading ==
Heinrich Guggenheimer (1967) Plane Geometry and Its Groups, Holden-Day.
Roger Evans Howe & William Barker (2007) Continuous Symmetry: From Euclid to Klein, American Mathematical Society, ISBN 978-0-8218-3900-3 .
Robin Hartshorne (2011) Review of Continuous Symmetry, American Mathematical Monthly 118:565–8.
Roger Lyndon (1985) Groups and Geometry, #101 London Mathematical Society Lecture Note Series, Cambridge University Press ISBN 0-521-31694-4 .
P.S. Modenov and A.S. Parkhomenko (1965) Geometric Transformations, translated by Michael B.P. Slater, Academic Press.
George E. Martin (1982) Transformation Geometry: An Introduction to Symmetry, Springer Verlag.
Isaak Yaglom (1962) Geometric Transformations, Random House (translated from the Russian).
Max Jeger (1966) Transformation Geometry (translated from the German).
Transformations teaching notes from Gatsby Charitable Foundation
Nathalie Sinclair (2008) The History of the Geometry Curriculum in the United States, pp. 63–66.
Zalman P. Usiskin and Arthur F. Coxford. A Transformation Approach to Tenth Grade Geometry, The Mathematics Teacher, Vol. 65, No. 1 (January 1972), pp. 21-30.
Zalman P. Usiskin. The Effects of Teaching Euclidean Geometry via Transformations on Student Achievement and Attitudes in Tenth-Grade Geometry, Journal for Research in Mathematics Education, Vol. 3, No. 4 (Nov., 1972), pp. 249-259.
A. N. Kolmogorov. Геометрические преобразования в школьном курсе геометрии, Математика в школе, 1965, Nº 2, pp. 24–29. (Geometric transformations in a school geometry course) (in Russian)
Alton Thorpe Olson (1970). High School Plane Geometry Through Transformations: An Exploratory Study, Vol. I. University of Wisconsin--Madison.
Alton Thorpe Olson (1970). High School Plane Geometry Through Transformations: An Exploratory Study, Vol II. University of Wisconsin--Madison. | Wikipedia/Transformation_geometry |
Geometric transformations can be distinguished into two types: active or alibi transformations which change the physical position of a set of points relative to a fixed frame of reference or coordinate system (alibi meaning "being somewhere else at the same time"); and passive or alias transformations which leave points fixed but change the frame of reference or coordinate system relative to which they are described (alias meaning "going under a different name"). By transformation, mathematicians usually refer to active transformations, while physicists and engineers could mean either.
For instance, active transformations are useful to describe successive positions of a rigid body. On the other hand, passive transformations may be useful in human motion analysis to observe the motion of the tibia relative to the femur, that is, its motion relative to a (local) coordinate system which moves together with the femur, rather than a (global) coordinate system which is fixed to the floor.
In three-dimensional Euclidean space, any proper rigid transformation, whether active or passive, can be represented as a screw displacement, the composition of a translation along an axis and a rotation about that axis.
The terms active transformation and passive transformation were first introduced in 1957 by Valentine Bargmann for describing Lorentz transformations in special relativity.
== Example ==
As an example, let the vector {\displaystyle \mathbf {v} =(v_{1},v_{2})\in \mathbb {R} ^{2}} be a vector in the plane. A rotation of the vector through an angle θ in the counterclockwise direction is given by the rotation matrix:
{\displaystyle R={\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{pmatrix}},}
which can be viewed either as an active transformation or a passive transformation (in which case the above matrix is inverted), as described below.
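The two readings of the rotation matrix R can be sketched numerically (illustrative Python; names are not from the source):

```python
import math

def rotation(theta):
    """Counter-clockwise rotation matrix for angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(m, v):
    """Apply a 2x2 matrix m to a vector v."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

theta = math.pi / 2
v = [1.0, 0.0]

# Active reading: the vector itself is rotated 90 degrees counter-clockwise.
active = apply(rotation(theta), v)    # approximately [0, 1]

# Passive reading: the axes rotate instead, so the matrix is inverted
# (for a rotation, the inverse is the rotation by -theta).
passive = apply(rotation(-theta), v)  # approximately [0, -1]
```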
== Spatial transformations in the Euclidean space R3 ==
In general a spatial transformation {\displaystyle T\colon \mathbb {R} ^{3}\to \mathbb {R} ^{3}} may consist of a translation and a linear transformation. In the following, the translation will be omitted, and the linear transformation will be represented by a 3×3 matrix {\displaystyle T}.
=== Active transformation ===
As an active transformation, {\displaystyle T} transforms the initial vector {\displaystyle \mathbf {v} =(v_{x},v_{y},v_{z})} into a new vector {\displaystyle \mathbf {v} '=(v'_{x},v'_{y},v'_{z})=T\mathbf {v} =T(v_{x},v_{y},v_{z})}.
If one views {\displaystyle \{\mathbf {e} '_{x}=T(1,0,0),\ \mathbf {e} '_{y}=T(0,1,0),\ \mathbf {e} '_{z}=T(0,0,1)\}} as a new basis, then the coordinates of the new vector {\displaystyle \mathbf {v} '=v_{x}\mathbf {e} '_{x}+v_{y}\mathbf {e} '_{y}+v_{z}\mathbf {e} '_{z}} in the new basis are the same as those of {\displaystyle \mathbf {v} =v_{x}\mathbf {e} _{x}+v_{y}\mathbf {e} _{y}+v_{z}\mathbf {e} _{z}} in the original basis. Note that active transformations make sense even as linear transformations into a different vector space. It makes sense to write the new vector in the unprimed basis (as above) only when the transformation is from the space into itself.
=== Passive transformation ===
On the other hand, when one views {\displaystyle T} as a passive transformation, the initial vector {\displaystyle \mathbf {v} =(v_{x},v_{y},v_{z})} is left unchanged, while the coordinate system and its basis vectors are transformed in the opposite direction, that is, with the inverse transformation {\displaystyle T^{-1}}. This gives a new coordinate system XYZ with basis vectors:
{\displaystyle \mathbf {e} _{X}=T^{-1}(1,0,0),\ \mathbf {e} _{Y}=T^{-1}(0,1,0),\ \mathbf {e} _{Z}=T^{-1}(0,0,1)}
The new coordinates {\displaystyle (v_{X},v_{Y},v_{Z})} of {\displaystyle \mathbf {v} } with respect to the new coordinate system XYZ are given by:
{\displaystyle \mathbf {v} =(v_{x},v_{y},v_{z})=v_{X}\mathbf {e} _{X}+v_{Y}\mathbf {e} _{Y}+v_{Z}\mathbf {e} _{Z}=T^{-1}(v_{X},v_{Y},v_{Z}).}
From this equation one sees that the new coordinates are given by
{\displaystyle (v_{X},v_{Y},v_{Z})=T(v_{x},v_{y},v_{z}).}
As a passive transformation, {\displaystyle T} transforms the old coordinates into the new ones.
Note the equivalence between the two kinds of transformations: the coordinates of the new point in the active transformation and the new coordinates of the point in the passive transformation are the same, namely
{\displaystyle (v_{X},v_{Y},v_{Z})=(v'_{x},v'_{y},v'_{z}).}
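The passive relation v = v_X e_X + v_Y e_Y + v_Z e_Z, with (v_X, v_Y, v_Z) = Tv and e_X = T⁻¹(1,0,0), etc., can be checked numerically. A minimal sketch, assuming T is a rotation about the z-axis (so that T⁻¹ is simply the rotation by the opposite angle):

```python
import math

def mat_vec(m, v):
    """Apply a 3x3 matrix m to a vector v."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# T: rotation by 30 degrees about the z-axis.
c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
T = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
T_inv = [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

v = [1.0, 2.0, 3.0]

# Passive view: the new coordinates of the fixed vector are T v ...
v_new = mat_vec(T, v)
# ... with respect to the new basis e_X = T^{-1}(1,0,0), etc.
e = [mat_vec(T_inv, [1.0, 0.0, 0.0]),
     mat_vec(T_inv, [0.0, 1.0, 0.0]),
     mat_vec(T_inv, [0.0, 0.0, 1.0])]

# Reconstructing v from the new coordinates and new basis recovers v exactly.
recon = [sum(v_new[i] * e[i][k] for i in range(3)) for k in range(3)]
assert all(abs(recon[k] - v[k]) < 1e-12 for k in range(3))
```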
== In abstract vector spaces ==
The distinction between active and passive transformations can be seen mathematically by considering abstract vector spaces.
Fix a finite-dimensional vector space {\displaystyle V} over a field {\displaystyle K} (thought of as {\displaystyle \mathbb {R} } or {\displaystyle \mathbb {C} }), and a basis {\displaystyle {\mathcal {B}}=\{e_{i}\}_{1\leq i\leq n}} of {\displaystyle V}. This basis provides an isomorphism {\displaystyle C:K^{n}\rightarrow V} via the component map {\textstyle (v_{i})_{1\leq i\leq n}=(v_{1},\cdots ,v_{n})\mapsto \sum _{i}v_{i}e_{i}}.
An active transformation is then an endomorphism on {\displaystyle V}, that is, a linear map from {\displaystyle V} to itself. Taking such a transformation {\displaystyle \tau \in {\text{End}}(V)}, a vector {\displaystyle v\in V} transforms as {\displaystyle v\mapsto \tau v}. The components of {\displaystyle \tau } with respect to the basis {\displaystyle {\mathcal {B}}} are defined via the equation {\textstyle \tau e_{i}=\sum _{j}\tau _{ji}e_{j}}. Then, the components of {\displaystyle v} transform as {\displaystyle v_{i}\mapsto \tau _{ij}v_{j}}.
A passive transformation is instead an endomorphism on {\displaystyle K^{n}}. This is applied to the components: {\displaystyle v_{i}\mapsto T_{ij}v_{j}=:v'_{i}}. Provided that {\displaystyle T} is invertible, the new basis {\displaystyle {\mathcal {B}}'=\{e'_{i}\}} is determined by requiring that {\displaystyle v_{i}e_{i}=v'_{i}e'_{i}}, from which the expression {\displaystyle e'_{i}=(T^{-1})_{ji}e_{j}} can be derived.
Although the spaces {\displaystyle {\text{End}}(V)} and {\displaystyle {\text{End}}(K^{n})} are isomorphic, they are not canonically isomorphic. Nevertheless, a choice of basis {\displaystyle {\mathcal {B}}} allows construction of an isomorphism.
=== As left- and right-actions ===
Often one restricts to the case where the maps are invertible, so that active transformations form the general linear group {\displaystyle {\text{GL}}(V)} of transformations while passive transformations form the group {\displaystyle {\text{GL}}(n,K)}.
The transformations can then be understood as acting on the space of bases for {\displaystyle V}. An active transformation {\displaystyle \tau \in {\text{GL}}(V)} sends the basis {\displaystyle \{e_{i}\}\mapsto \{\tau e_{i}\}}. Meanwhile, a passive transformation {\displaystyle T\in {\text{GL}}(n,K)} sends the basis {\textstyle \{e_{i}\}\mapsto \left\{\sum _{j}(T^{-1})_{ji}e_{j}\right\}}.
The inverse in the passive transformation ensures the components transform identically under {\displaystyle \tau } and {\displaystyle T}. This gives a sharp distinction between active and passive transformations: active transformations act from the left on bases, while passive transformations act from the right, due to the inverse.
This observation is made more natural by viewing a basis {\displaystyle {\mathcal {B}}} as a choice of isomorphism {\displaystyle \Phi _{\mathcal {B}}:K^{n}\rightarrow V}. The space of bases is equivalently the space of such isomorphisms, denoted {\displaystyle {\text{Iso}}(K^{n},V)}. Active transformations, identified with {\displaystyle {\text{GL}}(V)}, act on {\displaystyle {\text{Iso}}(K^{n},V)} from the left by composition: if {\displaystyle \tau } represents an active transformation, then {\displaystyle \Phi _{\mathcal {B'}}=\tau \circ \Phi _{\mathcal {B}}}. Conversely, passive transformations, identified with {\displaystyle {\text{GL}}(n,K)}, act on {\displaystyle {\text{Iso}}(K^{n},V)} from the right by pre-composition: if {\displaystyle T} represents a passive transformation, then {\displaystyle \Phi _{\mathcal {B''}}=\Phi _{\mathcal {B}}\circ T}.
This turns the space of bases into a left {\displaystyle {\text{GL}}(V)}-torsor and a right {\displaystyle {\text{GL}}(n,K)}-torsor.
From a physical perspective, active transformations can be characterized as transformations of physical space, while passive transformations are characterized as redundancies in the description of physical space. This plays an important role in mathematical gauge theory, where gauge transformations are described mathematically by transition maps which act from the right on fibers.
== See also ==
Change of basis
Covariance and contravariance of vectors
Rotation of axes
Translation of axes
== References ==
Dirk Struik (1953) Lectures on Analytic and Projective Geometry, page 84, Addison-Wesley.
== External links ==
UI ambiguity | Wikipedia/Active_and_passive_transformation |
In mathematics, a transformation, transform, or self-map is a function f, usually with some geometrical underpinning, that maps a set X to itself, i.e. f: X → X.
Examples include linear transformations of vector spaces and geometric transformations, which include projective transformations, affine transformations, and specific affine transformations, such as rotations, reflections and translations.
== Partial transformations ==
While it is common to use the term transformation for any function of a set into itself (especially in terms like "transformation semigroup" and similar), there exists an alternative form of terminological convention in which the term "transformation" is reserved only for bijections. When such a narrow notion of transformation is generalized to partial functions, then a partial transformation is a function f: A → B, where both A and B are subsets of some set X.
== Algebraic structures ==
The set of all transformations on a given base set, together with function composition, forms a regular semigroup.
== Combinatorics ==
For a finite set of cardinality n, there are n^n transformations and (n + 1)^n partial transformations.
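These counts can be verified by brute-force enumeration (an illustrative Python sketch; the function names are assumptions):

```python
from itertools import product

def count_transformations(n):
    """Count all maps from an n-element set to itself: expect n ** n."""
    return sum(1 for _ in product(range(n), repeat=n))

def count_partial_transformations(n):
    """Each element maps to one of n targets or is undefined: expect (n + 1) ** n."""
    targets = list(range(n)) + [None]  # None marks "undefined here"
    return sum(1 for _ in product(targets, repeat=n))

assert count_transformations(3) == 3 ** 3           # 27 transformations
assert count_partial_transformations(3) == 4 ** 3   # 64 partial transformations
```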
== See also ==
Coordinate transformation
Data transformation (statistics)
Geometric transformation
Infinitesimal transformation
Linear transformation
List of transforms
Rigid transformation
Transformation geometry
Transformation semigroup
Transformation group
Transformation matrix
== References ==
== External links ==
Media related to Transformation (function) at Wikimedia Commons | Wikipedia/Transformation_(mathematics) |
In physics, the Lorentz transformations are a six-parameter family of linear transformations from a coordinate frame in spacetime to another frame that moves at a constant velocity relative to the former. The respective inverse transformation is then parameterized by the negative of this velocity. The transformations are named after the Dutch physicist Hendrik Lorentz.
The most common form of the transformation, parametrized by the real constant {\displaystyle v,} representing a velocity confined to the x-direction, is expressed as
{\displaystyle {\begin{aligned}t'&=\gamma \left(t-{\frac {vx}{c^{2}}}\right)\\x'&=\gamma \left(x-vt\right)\\y'&=y\\z'&=z\end{aligned}}}
where (t, x, y, z) and (t′, x′, y′, z′) are the coordinates of an event in two frames with the spatial origins coinciding at t = t′ = 0, where the primed frame is seen from the unprimed frame as moving with speed v along the x-axis, where c is the speed of light, and
{\displaystyle \gamma ={\frac {1}{\sqrt {1-v^{2}/c^{2}}}}}
is the Lorentz factor. When the speed v is much smaller than c, the Lorentz factor is negligibly different from 1, but as v approaches c, {\displaystyle \gamma } grows without bound. The value of v must be smaller than c for the transformation to make sense.
Expressing the speed as a fraction of the speed of light, {\textstyle \beta =v/c,} an equivalent form of the transformation is
{\displaystyle {\begin{aligned}ct'&=\gamma \left(ct-\beta x\right)\\x'&=\gamma \left(x-\beta ct\right)\\y'&=y\\z'&=z.\end{aligned}}}
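The boost formulas above can be checked numerically: the spacetime interval (ct)² − x² − y² − z² must come out the same in both frames. A minimal Python sketch (function and variable names are illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def boost_x(t, x, y, z, v):
    """Lorentz boost along the x-axis with velocity v (|v| < c)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    t_p = gamma * (t - v * x / C ** 2)
    x_p = gamma * (x - v * t)
    return t_p, x_p, y, z

# An event, boosted to a frame moving at 0.6c (gamma = 1.25):
t, x, y, z = 1.0, 1.0e8, 2.0, 3.0
t_p, x_p, y_p, z_p = boost_x(t, x, y, z, 0.6 * C)

# The spacetime interval is preserved by the boost.
s2 = (C * t) ** 2 - x ** 2 - y ** 2 - z ** 2
s2_p = (C * t_p) ** 2 - x_p ** 2 - y_p ** 2 - z_p ** 2
assert abs(s2 - s2_p) / abs(s2) < 1e-9
```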
Frames of reference can be divided into two groups: inertial (relative motion with constant velocity) and non-inertial (accelerating, moving in curved paths, rotational motion with constant angular velocity, etc.). The term "Lorentz transformations" only refers to transformations between inertial frames, usually in the context of special relativity.
In each reference frame, an observer can use a local coordinate system (usually Cartesian coordinates in this context) to measure lengths, and a clock to measure time intervals. An event is something that happens at a point in space at an instant of time, or more formally a point in spacetime. The transformations connect the space and time coordinates of an event as measured by an observer in each frame.
They supersede the Galilean transformation of Newtonian physics, which assumes an absolute space and time (see Galilean relativity). The Galilean transformation is a good approximation only at relative speeds much less than the speed of light. Lorentz transformations have a number of unintuitive features that do not appear in Galilean transformations. For example, they reflect the fact that observers moving at different velocities may measure different distances, elapsed times, and even different orderings of events, but always such that the speed of light is the same in all inertial reference frames. The invariance of light speed is one of the postulates of special relativity.
Historically, the transformations were the result of attempts by Lorentz and others to explain how the speed of light was observed to be independent of the reference frame, and to understand the symmetries of the laws of electromagnetism. The transformations later became a cornerstone for special relativity.
The Lorentz transformation is a linear transformation. It may include a rotation of space; a rotation-free Lorentz transformation is called a Lorentz boost. In Minkowski space—the mathematical model of spacetime in special relativity—the Lorentz transformations preserve the spacetime interval between any two events. They describe only the transformations in which the spacetime event at the origin is left fixed. They can be considered as a hyperbolic rotation of Minkowski space. The more general set of transformations that also includes translations is known as the Poincaré group.
== History ==
Many physicists—including Woldemar Voigt, George FitzGerald, Joseph Larmor, and Hendrik Lorentz himself—had been discussing the physics implied by these equations since 1887. Early in 1889, Oliver Heaviside had shown from Maxwell's equations that the electric field surrounding a spherical distribution of charge should cease to have spherical symmetry once the charge is in motion relative to the luminiferous aether. FitzGerald then conjectured that Heaviside's distortion result might be applied to a theory of intermolecular forces. Some months later, FitzGerald published the conjecture that bodies in motion are being contracted, in order to explain the baffling outcome of the 1887 aether-wind experiment of Michelson and Morley. In 1892, Lorentz independently presented the same idea in a more detailed manner, which was subsequently called FitzGerald–Lorentz contraction hypothesis. Their explanation was widely known before 1905.
Lorentz (1892–1904) and Larmor (1897–1900), who believed the luminiferous aether hypothesis, also looked for the transformation under which Maxwell's equations are invariant when transformed from the aether to a moving frame. They extended the FitzGerald–Lorentz contraction hypothesis and found out that the time coordinate has to be modified as well ("local time"). Henri Poincaré gave a physical interpretation to local time (to first order in v/c, the relative velocity of the two reference frames normalized to the speed of light) as the consequence of clock synchronization, under the assumption that the speed of light is constant in moving frames. Larmor is credited to have been the first to understand the crucial time dilation property inherent in his equations.
In 1905, Poincaré was the first to recognize that the transformation has the properties of a mathematical group, and he named it after Lorentz.
Later in the same year Albert Einstein published what is now called special relativity, by deriving the Lorentz transformation under the assumptions of the principle of relativity and the constancy of the speed of light in any inertial reference frame, and by abandoning the mechanistic aether as unnecessary.
== Derivation of the group of Lorentz transformations ==
An event is something that happens at a certain point in spacetime, or more generally, the point in spacetime itself. In any inertial frame an event is specified by a time coordinate ct and a set of Cartesian coordinates x, y, z to specify position in space in that frame. Subscripts label individual events.
From Einstein's second postulate of relativity (invariance of c) it follows that:
in all inertial frames for events connected by light signals. The quantity on the left is called the spacetime interval between events a1 = (t1, x1, y1, z1) and a2 = (t2, x2, y2, z2). The interval between any two events, not necessarily separated by light signals, is in fact invariant, i.e., independent of the state of relative motion of observers in different inertial frames, as is shown using homogeneity and isotropy of space. The transformation sought after thus must possess the property that:
where (t, x, y, z) are the spacetime coordinates used to define events in one frame, and (t′, x′, y′, z′) are the coordinates in another frame. First one observes that (D2) is satisfied if an arbitrary 4-tuple b of numbers are added to events a1 and a2. Such transformations are called spacetime translations and are not dealt with further here. Then one observes that a linear solution preserving the origin of the simpler problem solves the general problem too:
(a solution satisfying the first formula automatically satisfies the second one as well; see polarization identity). Finding the solution to the simpler problem is just a matter of look-up in the theory of classical groups that preserve bilinear forms of various signature. First equation in (D3) can be written more compactly as:
where (·, ·) refers to the bilinear form of signature (1, 3) on R4 exposed by the right-hand side formula in (D3). The alternative notation defined on the right is referred to as the relativistic dot product. Spacetime, mathematically viewed as R4 endowed with this bilinear form, is known as Minkowski space M. The Lorentz transformation is thus an element of the group O(1, 3) or, for those who prefer the other metric signature, O(3, 1); either is called the Lorentz group. One has:
which is precisely preservation of the bilinear form (D3) which implies (by linearity of Λ and bilinearity of the form) that (D2) is satisfied. The elements of the Lorentz group are rotations and boosts and mixes thereof. If the spacetime translations are included, then one obtains the inhomogeneous Lorentz group or the Poincaré group.
== Generalities ==
The relations between the primed and unprimed spacetime coordinates are the Lorentz transformations; each coordinate in one frame is a linear function of all the coordinates in the other frame, and the inverse functions are the inverse transformation. Depending on how the frames move relative to each other, and how they are oriented in space relative to each other, other parameters that describe direction, speed, and orientation enter the transformation equations.
Transformations describing relative motion with constant (uniform) velocity and without rotation of the space coordinate axes are called Lorentz boosts or simply boosts, and the relative velocity between the frames is the parameter of the transformation. The other basic type of Lorentz transformation is a rotation of the spatial coordinates only. Like boosts, rotations are inertial transformations since there is no relative motion; the frames are simply tilted (and not continuously rotating), and in this case the quantities defining the rotation are the parameters of the transformation (e.g., axis–angle representation, or Euler angles, etc.). A combination of a rotation and a boost is a homogeneous transformation, which transforms the origin back to the origin.
The full Lorentz group O(3, 1) also contains special transformations that are neither rotations nor boosts, but rather reflections in a plane through the origin. Two of these can be singled out; spatial inversion in which the spatial coordinates of all events are reversed in sign and temporal inversion in which the time coordinate for each event gets its sign reversed.
Boosts should not be conflated with mere displacements in spacetime; in this case, the coordinate systems are simply shifted and there is no relative motion. However, these also count as symmetries forced by special relativity since they leave the spacetime interval invariant. A combination of a rotation with a boost, followed by a shift in spacetime, is an inhomogeneous Lorentz transformation, an element of the Poincaré group, which is also called the inhomogeneous Lorentz group.
== Physical formulation of Lorentz boosts ==
=== Coordinate transformation ===
A "stationary" observer in frame F defines events with coordinates t, x, y, z. Another frame F′ moves with velocity v relative to F, and an observer in this "moving" frame F′ defines events using the coordinates t′, x′, y′, z′.
The coordinate axes in each frame are parallel (the x and x′ axes are parallel, the y and y′ axes are parallel, and the z and z′ axes are parallel), remain mutually perpendicular, and relative motion is along the coincident xx′ axes. At t = t′ = 0, the origins of both coordinate systems are the same, (x, y, z) = (x′, y′, z′) = (0, 0, 0). In other words, the times and positions are coincident at this event. If all these hold, then the coordinate systems are said to be in standard configuration, or synchronized.
If an observer in F records an event t, x, y, z, then an observer in F′ records the same event with coordinates
where v is the relative velocity between frames in the x-direction, c is the speed of light, and
{\displaystyle \gamma ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}}
(lowercase gamma) is the Lorentz factor.
Here, v is the parameter of the transformation, for a given boost it is a constant number, but can take a continuous range of values. In the setup used here, positive relative velocity v > 0 is motion along the positive directions of the xx′ axes, zero relative velocity v = 0 is no relative motion, while negative relative velocity v < 0 is relative motion along the negative directions of the xx′ axes. The magnitude of relative velocity v cannot equal or exceed c, so only subluminal speeds −c < v < c are allowed. The corresponding range of γ is 1 ≤ γ < ∞.
The transformations are not defined if v is outside these limits. At the speed of light (v = c), γ is infinite, and faster than light (v > c), γ is a complex number, each of which makes the transformations unphysical. The space and time coordinates are measurable quantities and numerically must be real numbers.
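As a concrete illustration, the standard-configuration boost can be coded directly from these equations (a minimal sketch, not part of the article; the function name and sample numbers are illustrative):

```python
import math

def boost_x(t, x, y, z, v, c=299_792_458.0):
    """Coordinates (t', x', y', z') in F' of the event (t, x, y, z) in F,
    for a boost with relative velocity v along the coincident xx' axes."""
    if abs(v) >= c:
        raise ValueError("only subluminal speeds -c < v < c are allowed")
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma * (t - v * x / c**2),   # t' = gamma (t - v x / c^2)
            gamma * (x - v * t),          # x' = gamma (x - v t)
            y, z)                         # transverse coordinates unchanged

# The spacetime interval (ct)^2 - x^2 - y^2 - z^2 is the same in both frames:
c = 299_792_458.0
t, x, y, z = 1.0, 1.0e8, 2.0e8, 3.0e8
tp, xp, yp, zp = boost_x(t, x, y, z, v=0.6 * c)
s2  = (c * t) ** 2 - x**2 - y**2 - z**2
s2p = (c * tp) ** 2 - xp**2 - yp**2 - zp**2
```

The interval check mirrors the invariance property used to derive the transformation in the first place.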
As an active transformation, an observer in F′ notices the coordinates of the event to be "boosted" in the negative directions of the xx′ axes, because of the −v in the transformations. This has the equivalent effect of the coordinate system F′ boosted in the positive directions of the xx′ axes, while the event does not change and is simply represented in another coordinate system, a passive transformation.
The inverse relations (t, x, y, z in terms of t′, x′, y′, z′) can be found by algebraically solving the original set of equations. A more efficient way is to use physical principles. Here F′ is the "stationary" frame while F is the "moving" frame. According to the principle of relativity, there is no privileged frame of reference, so the transformations from F′ to F must take exactly the same form as the transformations from F to F′. The only difference is F moves with velocity −v relative to F′ (i.e., the relative velocity has the same magnitude but is oppositely directed). Thus if an observer in F′ notes an event t′, x′, y′, z′, then an observer in F notes the same event with coordinates
and the value of γ remains unchanged. This "trick" of simply reversing the direction of relative velocity while preserving its magnitude, and exchanging primed and unprimed variables, always applies to finding the inverse transformation of every boost in any direction.
Sometimes it is more convenient to use β = v/c (lowercase beta) instead of v, so that
{\displaystyle {\begin{aligned}ct'&=\gamma \left(ct-\beta x\right)\,,\\x'&=\gamma \left(x-\beta ct\right)\,,\\\end{aligned}}}
which shows much more clearly the symmetry in the transformation. From the allowed ranges of v and the definition of β, it follows −1 < β < 1. The use of β and γ is standard throughout the literature. In the case of three spatial dimensions [ct, x, y, z], where the boost {\displaystyle \beta } is in the x direction, the eigenstates of the transformation are [1, 1, 0, 0] with eigenvalue {\displaystyle {\sqrt {(1-\beta )/(1+\beta )}}}, [1, −1, 0, 0] with eigenvalue {\displaystyle {\sqrt {(1+\beta )/(1-\beta )}}}, and [0, 0, 1, 0] and [0, 0, 0, 1], the latter two with eigenvalue 1.
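The eigenvalue claim can be checked numerically; the sketch below (illustrative, with β = 0.6) applies the 2×2 boost acting on (ct, x) to the light-like vectors [1, 1] and [1, −1]:

```python
import math

beta = 0.6
gamma = 1.0 / math.sqrt(1.0 - beta**2)

# Boost restricted to the (ct, x) plane; y and z are untouched by an x-boost.
B = [[gamma, -gamma * beta],
     [-gamma * beta, gamma]]

def apply(M, u):
    return [sum(M[i][j] * u[j] for j in range(2)) for i in range(2)]

# B scales [1, 1] by sqrt((1-beta)/(1+beta)) and [1, -1] by its reciprocal:
lam_plus  = apply(B, [1.0,  1.0])[0]
lam_minus = apply(B, [1.0, -1.0])[0]
```

For β = 0.6 the two eigenvalues work out to 1/2 and 2, consistent with γ(1 − β) and γ(1 + β).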
When the boost velocity {\displaystyle {\boldsymbol {v}}} is in an arbitrary vector direction with the boost vector {\displaystyle {\boldsymbol {\beta }}={\boldsymbol {v}}/c}, then the transformation from an unprimed spacetime coordinate system to a primed coordinate system is given by
{\displaystyle {\begin{bmatrix}ct'{\vphantom {-\gamma \beta _{x}}}\\x'{\vphantom {1+{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}^{2}}}\\y'{\vphantom {{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}\beta _{y}}}\\z'{\vphantom {{\frac {\gamma ^{2}}{1+\gamma }}\beta _{y}\beta _{z}}}\end{bmatrix}}={\begin{bmatrix}\gamma &-\gamma \beta _{x}&-\gamma \beta _{y}&-\gamma \beta _{z}\\-\gamma \beta _{x}&1+{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}^{2}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}\beta _{y}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}\beta _{z}\\-\gamma \beta _{y}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}\beta _{y}&1+{\frac {\gamma ^{2}}{1+\gamma }}\beta _{y}^{2}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{y}\beta _{z}\\-\gamma \beta _{z}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}\beta _{z}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{y}\beta _{z}&1+{\frac {\gamma ^{2}}{1+\gamma }}\beta _{z}^{2}\\\end{bmatrix}}{\begin{bmatrix}ct{\vphantom {-\gamma \beta _{x}}}\\x{\vphantom {1+{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}^{2}}}\\y{\vphantom {{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}\beta _{y}}}\\z{\vphantom {{\frac {\gamma ^{2}}{1+\gamma }}\beta _{y}\beta _{z}}}\end{bmatrix}},}
where the Lorentz factor is {\displaystyle \gamma =1/{\sqrt {1-{\boldsymbol {\beta }}^{2}}}}. The determinant of the transformation matrix is +1 and its trace is {\displaystyle 2(1+\gamma )}. The inverse of the transformation is given by reversing the sign of {\displaystyle {\boldsymbol {\beta }}}. The quantity {\displaystyle c^{2}t^{2}-x^{2}-y^{2}-z^{2}} is invariant under the transformation: namely {\displaystyle c^{2}t'^{2}-x'^{2}-y'^{2}-z'^{2}=c^{2}t^{2}-x^{2}-y^{2}-z^{2}}.
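The stated properties of this matrix can be verified numerically; the sketch below (using NumPy, with an illustrative β = (0.3, 0.4, 0)) builds the matrix from its block form and checks the determinant and trace:

```python
import numpy as np

def boost_matrix(beta_vec):
    """4x4 boost matrix for boost vector beta = v/c in an arbitrary direction,
    built from the block entries gamma^2/(1 + gamma) * beta_i * beta_j."""
    b = np.asarray(beta_vec, dtype=float)
    b2 = b @ b
    gamma = 1.0 / np.sqrt(1.0 - b2)
    L = np.empty((4, 4))
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * b
    L[1:, 1:] = np.eye(3) + (gamma**2 / (1.0 + gamma)) * np.outer(b, b)
    return L

L = boost_matrix([0.3, 0.4, 0.0])
gamma = 1.0 / np.sqrt(1.0 - 0.25)        # |beta|^2 = 0.09 + 0.16 = 0.25
det, tr = np.linalg.det(L), np.trace(L)  # expected: det = +1, tr = 2(1 + gamma)
```

Note that γ²/(1 + γ) equals (γ − 1)/β², so this agrees with the alternative form of the spatial block used later in the article.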
The Lorentz transformations can also be derived in a way that resembles circular rotations in 3-dimensional space using the hyperbolic functions. For the boost in the x direction, the results are
where ζ (lowercase zeta) is a parameter called rapidity (many other symbols are used, including θ, ϕ, φ, η, ψ, ξ). Given the strong resemblance to rotations of spatial coordinates in 3-dimensional space in the Cartesian xy, yz, and zx planes, a Lorentz boost can be thought of as a hyperbolic rotation of spacetime coordinates in the xt, yt, and zt Cartesian-time planes of 4-dimensional Minkowski space. The parameter ζ is the hyperbolic angle of rotation, analogous to the ordinary angle for circular rotations. This transformation can be illustrated with a Minkowski diagram.
The hyperbolic functions arise from the difference between the squares of the time and spatial coordinates in the spacetime interval, rather than a sum. The geometric significance of the hyperbolic functions can be visualized by taking x = 0 or ct = 0 in the transformations. Squaring and subtracting the results, one can derive hyperbolic curves of constant coordinate values but varying ζ, which parametrizes the curves according to the identity
{\displaystyle \cosh ^{2}\zeta -\sinh ^{2}\zeta =1\,.}
Conversely the ct and x axes can be constructed for varying coordinates but constant ζ. The definition
{\displaystyle \tanh \zeta ={\frac {\sinh \zeta }{\cosh \zeta }}\,,}
provides the link between a constant value of rapidity and the slope of the ct axis in spacetime. A consequence of these two hyperbolic formulae is an identity that matches the Lorentz factor,
{\displaystyle \cosh \zeta ={\frac {1}{\sqrt {1-\tanh ^{2}\zeta }}}\,.}
Comparing the Lorentz transformations in terms of the relative velocity and rapidity, or using the above formulae, the connections between β, γ, and ζ are
{\displaystyle {\begin{aligned}\beta &=\tanh \zeta \,,\\\gamma &=\cosh \zeta \,,\\\beta \gamma &=\sinh \zeta \,.\end{aligned}}}
Taking the inverse hyperbolic tangent gives the rapidity
{\displaystyle \zeta =\tanh ^{-1}\beta \,.}
Since −1 < β < 1, it follows −∞ < ζ < ∞. From the relation between ζ and β, positive rapidity ζ > 0 is motion along the positive directions of the xx′ axes, zero rapidity ζ = 0 is no relative motion, while negative rapidity ζ < 0 is relative motion along the negative directions of the xx′ axes.
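These relations make rapidity a natural additive parameter: for collinear boosts, rapidities simply add. A minimal sketch (numbers illustrative) of that fact and its equivalence to relativistic velocity addition (v1 + v2)/(1 + v1 v2/c²):

```python
import math

beta1 = beta2 = 0.5
zeta1, zeta2 = math.atanh(beta1), math.atanh(beta2)

# gamma = cosh(zeta), matching the relations above:
gamma1 = 1.0 / math.sqrt(1.0 - beta1**2)

# Composing two collinear boosts adds their rapidities; converting back with
# tanh reproduces the relativistic velocity-addition formula:
beta_composite = math.tanh(zeta1 + zeta2)
beta_addition = (beta1 + beta2) / (1.0 + beta1 * beta2)
```

Two successive boosts at β = 0.5 compose to β = 0.8, not 1.0, so the subluminal range is preserved.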
The inverse transformations are obtained by exchanging primed and unprimed quantities to switch the coordinate frames, and negating rapidity ζ → −ζ since this is equivalent to negating the relative velocity. Therefore,
The inverse transformations can be similarly visualized by considering the cases when x′ = 0 and ct′ = 0.
So far the Lorentz transformations have been applied to one event. If there are two events, there is a spatial separation and time interval between them. It follows from the linearity of the Lorentz transformations that two values of space and time coordinates can be chosen, the Lorentz transformations can be applied to each, then subtracted to get the Lorentz transformations of the differences;
{\displaystyle {\begin{aligned}\Delta t'&=\gamma \left(\Delta t-{\frac {v\,\Delta x}{c^{2}}}\right)\,,\\\Delta x'&=\gamma \left(\Delta x-v\,\Delta t\right)\,,\end{aligned}}}
with inverse relations
{\displaystyle {\begin{aligned}\Delta t&=\gamma \left(\Delta t'+{\frac {v\,\Delta x'}{c^{2}}}\right)\,,\\\Delta x&=\gamma \left(\Delta x'+v\,\Delta t'\right)\,.\end{aligned}}}
where Δ (uppercase delta) indicates a difference of quantities; e.g., Δx = x2 − x1 for two values of x coordinates, and so on.
These transformations on differences rather than spatial points or instants of time are useful for a number of reasons:
in calculations and experiments, it is lengths between two points or time intervals that are measured or of interest (e.g., the length of a moving vehicle, or time duration it takes to travel from one place to another),
the transformations of velocity can be readily derived by making the difference infinitesimally small and dividing the equations, and the process repeated for the transformation of acceleration,
if the coordinate systems are never coincident (i.e., not in standard configuration), and if both observers can agree on an event t0, x0, y0, z0 in F and t0′, x0′, y0′, z0′ in F′, then they can use that event as the origin, and the spacetime coordinate differences are the differences between their coordinates and this origin, e.g., Δx = x − x0, Δx′ = x′ − x0′, etc.
=== Physical implications ===
A critical requirement of the Lorentz transformations is the invariance of the speed of light, a fact used in their derivation, and contained in the transformations themselves. If in F the equation for a pulse of light along the x direction is x = ct, then in F′ the Lorentz transformations give x′ = ct′, and vice versa, for any −c < v < c.
For relative speeds much less than the speed of light, the Lorentz transformations reduce to the Galilean transformation:
{\displaystyle {\begin{aligned}t'&\approx t\\x'&\approx x-vt\end{aligned}}}
in accordance with the correspondence principle. It is sometimes said that nonrelativistic physics is a physics of "instantaneous action at a distance".
Three counterintuitive, but correct, predictions of the transformations are:
Relativity of simultaneity
Suppose two events occur along the x axis simultaneously (Δt = 0) in F, but separated by a nonzero displacement Δx. Then in F′, we find that
{\displaystyle \Delta t'=\gamma {\frac {-v\,\Delta x}{c^{2}}}}
, so the events are no longer simultaneous according to a moving observer.
Time dilation
Suppose there is a clock at rest in F. If a time interval is measured at the same point in that frame, so that Δx = 0, then the transformations give this interval in F′ by Δt′ = γΔt. Conversely, suppose there is a clock at rest in F′. If an interval is measured at the same point in that frame, so that Δx′ = 0, then the transformations give this interval in F by Δt = γΔt′. Either way, each observer measures the time interval between ticks of a moving clock to be longer by a factor γ than the time interval between ticks of his own clock.
Length contraction
Suppose there is a rod at rest in F aligned along the x axis, with length Δx. In F′, the rod moves with velocity −v, so its length must be measured by taking two simultaneous (Δt′ = 0) measurements at opposite ends. Under these conditions, the inverse Lorentz transform shows that Δx = γΔx′. In F the two measurements are no longer simultaneous, but this does not matter because the rod is at rest in F. So each observer measures the distance between the end points of a moving rod to be shorter by a factor 1/γ than that of an identical rod at rest in his own frame. Length contraction affects any geometric quantity related to lengths, so from the perspective of a moving observer, areas and volumes will also appear to shrink along the direction of motion.
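Both time dilation and length contraction follow in a few lines from the difference transformations above; a minimal sketch in units where c = 1 (values illustrative):

```python
import math

v = 0.8                                   # in units where c = 1
gamma = 1.0 / math.sqrt(1.0 - v**2)       # = 5/3 here

# Time dilation: a clock at rest in F, so dx = 0 between ticks
dt = 1.0
dt_prime = gamma * (dt - v * 0.0)         # moving observer measures gamma*dt

# Length contraction: rod at rest in F with proper length dx; simultaneous
# (dt' = 0) end measurements in F' give dx = gamma*dx', i.e. dx' = dx/gamma
dx = 1.0
dx_prime = dx / gamma                     # shorter by a factor 1/gamma
```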
=== Vector transformations ===
The use of vectors allows positions and velocities to be expressed in arbitrary directions compactly. A single boost in any direction depends on the full relative velocity vector v with a magnitude |v| = v that cannot equal or exceed c, so that 0 ≤ v < c.
Only time and the coordinates parallel to the direction of relative motion change, while those coordinates perpendicular do not. With this in mind, split the spatial position vector r as measured in F, and r′ as measured in F′, each into components perpendicular (⊥) and parallel ( ‖ ) to v,
{\displaystyle \mathbf {r} =\mathbf {r} _{\perp }+\mathbf {r} _{\|}\,,\quad \mathbf {r} '=\mathbf {r} _{\perp }'+\mathbf {r} _{\|}'\,,}
then the transformations are
{\displaystyle {\begin{aligned}t'&=\gamma \left(t-{\frac {\mathbf {r} _{\parallel }\cdot \mathbf {v} }{c^{2}}}\right)\\\mathbf {r} _{\|}'&=\gamma (\mathbf {r} _{\|}-\mathbf {v} t)\\\mathbf {r} _{\perp }'&=\mathbf {r} _{\perp }\end{aligned}}}
where · is the dot product. The Lorentz factor γ retains its definition for a boost in any direction, since it depends only on the magnitude of the relative velocity. The definition β = v/c with magnitude 0 ≤ β < 1 is also used by some authors.
Introducing a unit vector n = v/v = β/β in the direction of relative motion, the relative velocity is v = vn with magnitude v and direction n, and vector projection and rejection give respectively
{\displaystyle \mathbf {r} _{\parallel }=(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} \,,\quad \mathbf {r} _{\perp }=\mathbf {r} -(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} }
Accumulating the results gives the full transformations,
The projection and rejection also applies to r′. For the inverse transformations, exchange r and r′ to switch observed coordinates, and negate the relative velocity v → −v (or simply the unit vector n → −n since the magnitude v is always positive) to obtain
The unit vector has the advantage of simplifying equations for a single boost, allows either v or β to be reinstated when convenient, and the rapidity parametrization is immediately obtained by replacing β and βγ. It is not convenient for multiple boosts.
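The projection/rejection split translates directly into code; below is a minimal sketch (values illustrative, units with c = 1) that reduces to the standard-configuration boost when v points along x:

```python
import math

def boost(t, r, v, c=1.0):
    """Boost an event (t, r) by velocity vector v, splitting r into components
    parallel and perpendicular to v as in the formulas above."""
    speed = math.sqrt(sum(vi * vi for vi in v))
    gamma = 1.0 / math.sqrt(1.0 - (speed / c) ** 2)
    n = [vi / speed for vi in v]                    # unit vector along motion
    r_dot_n = sum(ri * ni for ri, ni in zip(r, n))
    r_par = [r_dot_n * ni for ni in n]              # projection onto n
    r_perp = [ri - pi for ri, pi in zip(r, r_par)]  # rejection: unchanged
    t_p = gamma * (t - speed * r_dot_n / c**2)      # r_par . v = speed*(r.n)
    r_p = [gamma * (pi - vi * t) + qi
           for pi, vi, qi in zip(r_par, v, r_perp)]
    return t_p, r_p

# For v along x this reduces to the standard-configuration boost:
t_p, r_p = boost(1.0, [1.0, 2.0, 3.0], [0.6, 0.0, 0.0])
```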
The vectorial relation between relative velocity and rapidity is
{\displaystyle {\boldsymbol {\beta }}=\beta \mathbf {n} =\mathbf {n} \tanh \zeta \,,}
and the "rapidity vector" can be defined as
{\displaystyle {\boldsymbol {\zeta }}=\zeta \mathbf {n} =\mathbf {n} \tanh ^{-1}\beta \,,}
each of which serves as a useful abbreviation in some contexts. The magnitude of ζ is the absolute value of the rapidity scalar confined to 0 ≤ ζ < ∞, which agrees with the range 0 ≤ β < 1.
=== Transformation of velocities ===
Defining the coordinate velocities and Lorentz factor by
{\displaystyle \mathbf {u} ={\frac {d\mathbf {r} }{dt}}\,,\quad \mathbf {u} '={\frac {d\mathbf {r} '}{dt'}}\,,\quad \gamma _{\mathbf {v} }={\frac {1}{\sqrt {1-{\dfrac {\mathbf {v} \cdot \mathbf {v} }{c^{2}}}}}}}
taking the differentials in the coordinates and time of the vector transformations, then dividing equations, leads to
{\displaystyle \mathbf {u} '={\frac {1}{1-{\frac {\mathbf {v} \cdot \mathbf {u} }{c^{2}}}}}\left[{\frac {\mathbf {u} }{\gamma _{\mathbf {v} }}}-\mathbf {v} +{\frac {1}{c^{2}}}{\frac {\gamma _{\mathbf {v} }}{\gamma _{\mathbf {v} }+1}}\left(\mathbf {u} \cdot \mathbf {v} \right)\mathbf {v} \right]}
The velocities u and u′ are the velocity of some massive object. They can also be for a third inertial frame (say F′′), in which case they must be constant. Denote either entity by X. Then X moves with velocity u relative to F, or equivalently with velocity u′ relative to F′, in turn F′ moves with velocity v relative to F. The inverse transformations can be obtained in a similar way, or as with position coordinates exchange u and u′, and change v to −v.
The transformation of velocity is useful in stellar aberration, the Fizeau experiment, and the relativistic Doppler effect.
The Lorentz transformations of acceleration can be similarly obtained by taking differentials in the velocity vectors, and dividing these by the time differential.
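The velocity transformation can be written out in components as a short function (a sketch, in units where c = 1 unless overridden; the sample velocities are illustrative):

```python
import math

def velocity_transform(u, v, c=1.0):
    """u' in F' for an object moving with velocity u in F, where F' moves
    with velocity v relative to F, following the formula above."""
    v2 = sum(vi * vi for vi in v)
    u_dot_v = sum(ui * vi for ui, vi in zip(u, v))
    gamma_v = 1.0 / math.sqrt(1.0 - v2 / c**2)
    factor = 1.0 / (1.0 - u_dot_v / c**2)
    return [factor * (ui / gamma_v - vi
                      + gamma_v * u_dot_v * vi / (c**2 * (gamma_v + 1.0)))
            for ui, vi in zip(u, v)]

# An object at rest in F' (u = v) has u' = 0, and speeds never reach c:
u_rest = velocity_transform([0.5, 0.0, 0.0], [0.5, 0.0, 0.0])
u_fast = velocity_transform([0.0, 0.9, 0.0], [0.9, 0.0, 0.0])
speed_fast = math.sqrt(sum(x * x for x in u_fast))
```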
=== Transformation of other quantities ===
In general, given four quantities A and Z = (Zx, Zy, Zz) and their Lorentz-boosted counterparts A′ and Z′ = (Z′x, Z′y, Z′z), a relation of the form
{\displaystyle A^{2}-\mathbf {Z} \cdot \mathbf {Z} ={A'}^{2}-\mathbf {Z} '\cdot \mathbf {Z} '}
implies the quantities transform under Lorentz transformations similar to the transformation of spacetime coordinates;
{\displaystyle {\begin{aligned}A'&=\gamma \left(A-{\frac {v\mathbf {n} \cdot \mathbf {Z} }{c}}\right)\,,\\\mathbf {Z} '&=\mathbf {Z} +(\gamma -1)(\mathbf {Z} \cdot \mathbf {n} )\mathbf {n} -{\frac {\gamma Av\mathbf {n} }{c}}\,.\end{aligned}}}
The decomposition of Z (and Z′) into components perpendicular and parallel to v is exactly the same as for the position vector, as is the process of obtaining the inverse transformations (exchange (A, Z) and (A′, Z′) to switch observed quantities, and reverse the direction of relative motion by the substitution n ↦ −n).
The quantities (A, Z) collectively make up a four-vector, where A is the "timelike component", and Z the "spacelike component". Examples of A and Z are the following:
For a given object (e.g., particle, fluid, field, material), if A or Z correspond to properties specific to the object like its charge density, mass density, spin, etc., its properties can be fixed in the rest frame of that object. Then the Lorentz transformations give the corresponding properties in a frame moving relative to the object with constant velocity. This breaks some notions taken for granted in non-relativistic physics. For example, the energy E of an object is a scalar in non-relativistic mechanics, but not in relativistic mechanics because energy changes under Lorentz transformations; its value is different for various inertial frames. In the rest frame of an object, it has a rest energy and zero momentum. In a boosted frame its energy is different and it appears to have a momentum. Similarly, in non-relativistic quantum mechanics the spin of a particle is a constant vector, but in relativistic quantum mechanics spin s depends on relative motion. In the rest frame of the particle, the spin pseudovector can be fixed to be its ordinary non-relativistic spin with a zero timelike quantity st, however a boosted observer will perceive a nonzero timelike component and an altered spin.
Not all quantities are invariant in the form as shown above, for example orbital angular momentum L does not have a timelike quantity, and neither does the electric field E nor the magnetic field B. The definition of angular momentum is L = r × p, and in a boosted frame the altered angular momentum is L′ = r′ × p′. Applying this definition using the transformations of coordinates and momentum leads to the transformation of angular momentum. It turns out L transforms with another vector quantity N = (E/c2)r − tp related to boosts, see relativistic angular momentum for details. For the case of the E and B fields, the transformations cannot be obtained so directly using vector algebra. The Lorentz force is the definition of these fields, and in F it is F = q(E + v × B) while in F′ it is F′ = q(E′ + v′ × B′). A method of deriving the EM field transformations in an efficient way which also illustrates the unity of the electromagnetic field uses tensor algebra, given below.
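As a concrete instance of the (A, Z) pattern, consider the energy–momentum four-vector (E/c, p): boosting a particle of mass m at rest yields E = γmc² and |p| = γmv, with A² − Z·Z unchanged. A sketch (c = 1, boost along x, illustrative mass):

```python
import math

def four_vector_boost(A, Z, v, n, c=1.0):
    """Boost the four-vector (A, Z) for speed v along the unit vector n,
    using the general formulas for A' and Z' above."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    n_dot_Z = sum(ni * Zi for ni, Zi in zip(n, Z))
    A_p = gamma * (A - v * n_dot_Z / c)
    Z_p = [Zi + (gamma - 1.0) * n_dot_Z * ni - gamma * A * v * ni / c
           for Zi, ni in zip(Z, n)]
    return A_p, Z_p

m = 1.0   # particle at rest in F: (E/c, p) = (m*c, 0) with c = 1
E_p, p_p = four_vector_boost(m, [0.0, 0.0, 0.0], v=0.6, n=[1.0, 0.0, 0.0])
# Invariance: A'^2 - Z'.Z' still equals (m*c)^2
inv = E_p**2 - sum(pi**2 for pi in p_p)
```

The boosted frame sees momentum in the −x direction, as expected for a particle at rest in F observed from a frame moving in +x.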
== Mathematical formulation ==
Throughout, italic non-bold capital letters are 4 × 4 matrices, while non-italic bold letters are 3 × 3 matrices.
=== Homogeneous Lorentz group ===
Writing the coordinates in column vectors and the Minkowski metric η as a square matrix
{\displaystyle X'={\begin{bmatrix}c\,t'\\x'\\y'\\z'\end{bmatrix}}\,,\quad \eta ={\begin{bmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}}\,,\quad X={\begin{bmatrix}c\,t\\x\\y\\z\end{bmatrix}}}
the spacetime interval takes the form (superscript T denotes transpose)
{\displaystyle X\cdot X=X^{\mathrm {T} }\eta X={X'}^{\mathrm {T} }\eta {X'}}
and is invariant under a Lorentz transformation
{\displaystyle X'=\Lambda X}
where Λ is a square matrix which can depend on parameters.
The set of all Lorentz transformations {\displaystyle \Lambda } in this article is denoted {\displaystyle {\mathcal {L}}}. This set together with matrix multiplication forms a group, in this context known as the Lorentz group. Also, the above expression X·X is a quadratic form of signature (3,1) on spacetime, and the group of transformations which leaves this quadratic form invariant is the indefinite orthogonal group O(3,1), a Lie group. In other words, the Lorentz group is O(3,1). As presented in this article, any Lie groups mentioned are matrix Lie groups. In this context the operation of composition amounts to matrix multiplication.
From the invariance of the spacetime interval it follows
{\displaystyle \eta =\Lambda ^{\mathrm {T} }\eta \Lambda }
and this matrix equation contains the general conditions on the Lorentz transformation to ensure invariance of the spacetime interval. Taking the determinant of the equation using the product rule gives immediately
{\displaystyle \left[\det(\Lambda )\right]^{2}=1\quad \Rightarrow \quad \det(\Lambda )=\pm 1}
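Both conditions are easy to check for a concrete Λ; the sketch below (NumPy, illustrative β = 0.6 along x) verifies η = ΛᵀηΛ and det Λ = +1:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])    # Minkowski metric as used above

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([[gamma, -gamma * beta, 0.0, 0.0],
                [-gamma * beta, gamma, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])

# The defining condition and the determinant result:
preserves_metric = np.allclose(Lam.T @ eta @ Lam, eta)
det = np.linalg.det(Lam)
```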
Writing the Minkowski metric as a block matrix, and the Lorentz transformation in the most general form,
{\displaystyle \eta ={\begin{bmatrix}-1&0\\0&\mathbf {I} \end{bmatrix}}\,,\quad \Lambda ={\begin{bmatrix}\Gamma &-\mathbf {a} ^{\mathrm {T} }\\-\mathbf {b} &\mathbf {M} \end{bmatrix}}\,,}
carrying out the block matrix multiplications obtains general conditions on Γ, a, b, M to ensure relativistic invariance. Not much information can be directly extracted from all the conditions, however one of the results
{\displaystyle \Gamma ^{2}=1+\mathbf {b} ^{\mathrm {T} }\mathbf {b} }
is useful; bTb ≥ 0 always so it follows that
{\displaystyle \Gamma ^{2}\geq 1\quad \Rightarrow \quad \Gamma \leq -1\,,\quad \Gamma \geq 1}
The negative inequality may be unexpected, because Γ multiplies the time coordinate and this has an effect on time symmetry. If the positive inequality holds, then Γ is the Lorentz factor.
The determinant and inequality provide four ways to classify Lorentz Transformations (herein LTs for brevity). Any particular LT has only one determinant sign and only one inequality. There are four sets which include every possible pair given by the intersections ("n"-shaped symbol meaning "and") of these classifying sets.
where "+" and "−" indicate the determinant sign, while "↑" for ≥ and "↓" for ≤ denote the inequalities.
The full Lorentz group splits into the union ("u"-shaped symbol meaning "or") of four disjoint sets
{\displaystyle {\mathcal {L}}={\mathcal {L}}_{+}^{\uparrow }\cup {\mathcal {L}}_{-}^{\uparrow }\cup {\mathcal {L}}_{+}^{\downarrow }\cup {\mathcal {L}}_{-}^{\downarrow }}
A subgroup of a group must be closed under the same operation of the group (here matrix multiplication). In other words, for two Lorentz transformations Λ and L from a particular subgroup, the composite Lorentz transformations ΛL and LΛ must be in the same subgroup as Λ and L. This is not always the case: the composition of two antichronous Lorentz transformations is orthochronous, and the composition of two improper Lorentz transformations is proper. In other words, while the sets {\displaystyle {\mathcal {L}}_{+}^{\uparrow }}, {\displaystyle {\mathcal {L}}_{+}}, {\displaystyle {\mathcal {L}}^{\uparrow }}, and {\displaystyle {\mathcal {L}}_{0}={\mathcal {L}}_{+}^{\uparrow }\cup {\mathcal {L}}_{-}^{\downarrow }} all form subgroups, the sets containing improper and/or antichronous transformations without enough proper orthochronous transformations (e.g. {\displaystyle {\mathcal {L}}_{+}^{\downarrow }}, {\displaystyle {\mathcal {L}}_{-}^{\downarrow }}, {\displaystyle {\mathcal {L}}_{-}^{\uparrow }}) do not form subgroups.
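The failure of closure is visible already for the discrete transformations; a minimal sketch with time reversal T and space inversion P as 4×4 matrices:

```python
# Time reversal and space inversion as 4x4 matrices (lists of rows):
T = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
P = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# T is antichronous (T[0][0] < 0), yet T composed with itself is the identity,
# which is orthochronous; likewise P is improper (det = -1), yet P.P is proper.
TT = matmul(T, T)
PP = matmul(P, P)
```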
=== Proper transformations ===
If a Lorentz covariant 4-vector is measured in one inertial frame with result {\displaystyle X}, and the same measurement made in another inertial frame (with the same orientation and origin) gives result {\displaystyle X'}, the two results will be related by {\displaystyle X'=B(\mathbf {v} )X} where the boost matrix {\displaystyle B(\mathbf {v} )} represents the rotation-free Lorentz transformation between the unprimed and primed frames and {\displaystyle \mathbf {v} } is the velocity of the primed frame as seen from the unprimed frame. The matrix is given by
{\displaystyle B(\mathbf {v} )={\begin{bmatrix}\gamma &-\gamma v_{x}/c&-\gamma v_{y}/c&-\gamma v_{z}/c\\-\gamma v_{x}/c&1+(\gamma -1){\dfrac {v_{x}^{2}}{v^{2}}}&(\gamma -1){\dfrac {v_{x}v_{y}}{v^{2}}}&(\gamma -1){\dfrac {v_{x}v_{z}}{v^{2}}}\\-\gamma v_{y}/c&(\gamma -1){\dfrac {v_{y}v_{x}}{v^{2}}}&1+(\gamma -1){\dfrac {v_{y}^{2}}{v^{2}}}&(\gamma -1){\dfrac {v_{y}v_{z}}{v^{2}}}\\-\gamma v_{z}/c&(\gamma -1){\dfrac {v_{z}v_{x}}{v^{2}}}&(\gamma -1){\dfrac {v_{z}v_{y}}{v^{2}}}&1+(\gamma -1){\dfrac {v_{z}^{2}}{v^{2}}}\end{bmatrix}}={\begin{bmatrix}\gamma &-\gamma {\vec {\beta }}^{T}\\-\gamma {\vec {\beta }}&I+(\gamma -1){\dfrac {{\vec {\beta }}{\vec {\beta }}^{T}}{\beta ^{2}}}\end{bmatrix}},}
where {\textstyle v={\sqrt {v_{x}^{2}+v_{y}^{2}+v_{z}^{2}}}} is the magnitude of the velocity and {\textstyle \gamma ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}} is the Lorentz factor. This formula represents a passive transformation, as it describes how the coordinates of the measured quantity change from the unprimed frame to the primed frame. The active transformation is given by {\displaystyle B(-\mathbf {v} )}.
If a frame F′ is boosted with velocity u relative to frame F, and another frame F′′ is boosted with velocity v relative to F′, the separate boosts are
{\displaystyle X''=B(\mathbf {v} )X'\,,\quad X'=B(\mathbf {u} )X}
and the composition of the two boosts connects the coordinates in F′′ and F,
{\displaystyle X''=B(\mathbf {v} )B(\mathbf {u} )X\,.}
Successive transformations act on the left. If u and v are collinear (parallel or antiparallel along the same line of relative motion), the boost matrices commute: B(v)B(u) = B(u)B(v). This composite transformation happens to be another boost, B(w), where w is collinear with u and v.
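The collinear case can be checked numerically. The sketch below (illustrative helper functions, c = 1) composes two boosts along x in both orders and confirms that the product is a single boost whose speed is the relativistic velocity-addition result w = (u + v)/(1 + uv/c²):

```python
import math

def boost_x(v, c=1.0):
    # boost along x for coordinates (ct, x, y, z); illustrative helper
    g = 1.0 / math.sqrt(1.0 - (v/c)**2)
    return [[g, -g*v/c, 0.0, 0.0],
            [-g*v/c, g, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(4) for j in range(4))

u, v = 0.5, 0.3
# collinear boosts commute: B(v)B(u) = B(u)B(v)
same = close(matmul(boost_x(v), boost_x(u)), matmul(boost_x(u), boost_x(v)))
# and the composition is itself a boost with the velocity-addition speed
w = (u + v) / (1.0 + u*v)
isboost = close(matmul(boost_x(v), boost_x(u)), boost_x(w))
print(same, isboost)
```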
If u and v are not collinear but in different directions, the situation is considerably more complicated. Lorentz boosts along different directions do not commute: B(v)B(u) and B(u)B(v) are not equal. Although each of these compositions is not a single boost, each composition is still a Lorentz transformation as it preserves the spacetime interval. It turns out the composition of any two Lorentz boosts is equivalent to a boost followed or preceded by a rotation on the spatial coordinates, in the form of R(ρ)B(w) or B(w′)R(ρ′). The w and w′ are composite velocities, while ρ and ρ′ are rotation parameters (e.g. axis-angle variables, Euler angles, etc.). The rotation in block matrix form is simply
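The non-commutativity is easy to exhibit numerically. This sketch composes perpendicular boosts in both orders, confirms the two products differ, and confirms that each product is nevertheless a Lorentz transformation in the sense ΛᵀηΛ = η (illustrative code, c = 1):

```python
import math

def boost(v, axis):
    # boost with speed v along coordinate axis 1, 2 or 3 of (ct, x, y, z)
    g = 1.0 / math.sqrt(1.0 - v*v)
    B = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    B[0][0] = B[axis][axis] = g
    B[0][axis] = B[axis][0] = -g*v
    return B

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def transpose(A):
    return [list(r) for r in zip(*A)]

eta = [[1,0,0,0],[0,-1,0,0],[0,0,-1,0],[0,0,0,-1]]   # Minkowski metric

Bu = boost(0.5, 1)   # boost along x
Bv = boost(0.5, 2)   # boost along y
P, Q = matmul(Bv, Bu), matmul(Bu, Bv)

# the two orderings give different matrices
noncommuting = any(abs(P[i][j] - Q[i][j]) > 1e-12 for i in range(4) for j in range(4))

def is_lorentz(L):
    # L is a Lorentz transformation iff L^T eta L = eta (interval preserved)
    M = matmul(transpose(L), matmul(eta, L))
    return all(abs(M[i][j] - eta[i][j]) < 1e-12 for i in range(4) for j in range(4))

print(noncommuting, is_lorentz(P), is_lorentz(Q))
```

Neither product is symmetric, which is consistent with each being a boost combined with a Wigner rotation rather than a pure boost.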
{\displaystyle \quad R({\boldsymbol {\rho }})={\begin{bmatrix}1&0\\0&\mathbf {R} ({\boldsymbol {\rho }})\end{bmatrix}}\,,}
where R(ρ) is a 3 × 3 rotation matrix, which rotates any 3-dimensional vector in one sense (active transformation), or equivalently the coordinate frame in the opposite sense (passive transformation). It is not simple to connect w and ρ (or w′ and ρ′) to the original boost parameters u and v. In a composition of boosts, the R matrix is named the Wigner rotation, and gives rise to the Thomas precession. These articles give the explicit formulae for the composite transformation matrices, including expressions for w, ρ, w′, ρ′.
In this article the axis-angle representation is used for ρ. The rotation is about an axis in the direction of a unit vector e, through angle θ (positive anticlockwise, negative clockwise, according to the right-hand rule). The "axis-angle vector"
{\displaystyle {\boldsymbol {\theta }}=\theta \mathbf {e} }
will serve as a useful abbreviation.
Spatial rotations alone are also Lorentz transformations since they leave the spacetime interval invariant. Like boosts, successive rotations about different axes do not commute. Unlike boosts, the composition of any two rotations is equivalent to a single rotation. Some other similarities and differences between the boost and rotation matrices include:
inverses: B(v)−1 = B(−v) (relative motion in the opposite direction), and R(θ)−1 = R(−θ) (rotation in the opposite sense about the same axis)
identity transformation for no relative motion/rotation: B(0) = R(0) = I
unit determinant: det(B) = det(R) = +1. This property makes them proper transformations.
matrix symmetry: B is symmetric (equals transpose), while R is nonsymmetric but orthogonal (transpose equals inverse, RT = R−1).
The most general proper Lorentz transformation Λ(v, θ) includes a boost and rotation together, and is a nonsymmetric matrix. As special cases, Λ(0, θ) = R(θ) and Λ(v, 0) = B(v). An explicit form of the general Lorentz transformation is cumbersome to write down and will not be given here. Nevertheless, closed form expressions for the transformation matrices will be given below using group theoretical arguments. It will be easier to use the rapidity parametrization for boosts, in which case one writes Λ(ζ, θ) and B(ζ).
==== The Lie group SO+(3,1) ====
The set of transformations
{\displaystyle \{B({\boldsymbol {\zeta }}),R({\boldsymbol {\theta }}),\Lambda ({\boldsymbol {\zeta }},{\boldsymbol {\theta }})\}}
with matrix multiplication as the operation of composition forms a group, called the "restricted Lorentz group", and is the special indefinite orthogonal group SO+(3,1). (The plus sign indicates that it preserves the orientation of the temporal dimension).
For simplicity, look at the infinitesimal Lorentz boost in the x direction (examining a boost in any other direction, or rotation about any axis, follows an identical procedure). The infinitesimal boost is a small boost away from the identity, obtained by the Taylor expansion of the boost matrix to first order about ζ = 0,
{\displaystyle B_{x}=I+\zeta \left.{\frac {\partial B_{x}}{\partial \zeta }}\right|_{\zeta =0}+\cdots }
where the higher order terms not shown are negligible because ζ is small, and Bx is simply the boost matrix in the x direction. The derivative of the matrix is the matrix of derivatives (of the entries, with respect to the same variable), and it is understood the derivatives are found first then evaluated at ζ = 0,
{\displaystyle \left.{\frac {\partial B_{x}}{\partial \zeta }}\right|_{\zeta =0}=-K_{x}\,.}
For now, Kx is defined by this result (its significance will be explained shortly). In the limit of an infinite number of infinitely small steps, the finite boost transformation in the form of a matrix exponential is obtained
{\displaystyle B_{x}=\lim _{N\to \infty }\left(I-{\frac {\zeta }{N}}K_{x}\right)^{N}=e^{-\zeta K_{x}}}
where the limit definition of the exponential has been used (see also characterizations of the exponential function). More generally
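The limit definition can be checked numerically for a boost along x. In the sketch below (illustrative, pure Python), (I − (ζ/N)Kx)^N for large N is computed by repeated squaring and compared with the closed form of e^(−ζKx), whose nontrivial entries are cosh ζ and −sinh ζ:

```python
import math

Kx = [[0,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]]   # boost generator along x

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def matpow(A, n):
    # A**n by repeated squaring
    R = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    while n:
        if n & 1:
            R = matmul(R, A)
        A = matmul(A, A)
        n >>= 1
    return R

zeta, N = 0.7, 2**20
step = [[(1.0 if i == j else 0.0) - (zeta/N)*Kx[i][j] for j in range(4)] for i in range(4)]
approx = matpow(step, N)                 # (I - zeta/N * Kx)^N

# closed form of exp(-zeta*Kx): a boost along x parametrized by rapidity
ch, sh = math.cosh(zeta), math.sinh(zeta)
exact = [[ch, -sh, 0, 0], [-sh, ch, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
err = max(abs(approx[i][j] - exact[i][j]) for i in range(4) for j in range(4))
print(err < 1e-4)
```

The discrepancy shrinks like ζ²/(2N), so for N around one million it is already below 10⁻⁶.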
{\displaystyle B({\boldsymbol {\zeta }})=e^{-{\boldsymbol {\zeta }}\cdot \mathbf {K} }\,,\quad R({\boldsymbol {\theta }})=e^{{\boldsymbol {\theta }}\cdot \mathbf {J} }\,.}
The axis-angle vector θ and rapidity vector ζ are altogether six continuous variables which make up the group parameters (in this particular representation), and the generators of the group are K = (Kx, Ky, Kz) and J = (Jx, Jy, Jz), each vectors of matrices with the explicit forms
{\displaystyle {\begin{alignedat}{3}K_{x}&={\begin{bmatrix}0&1&0&0\\1&0&0&0\\0&0&0&0\\0&0&0&0\\\end{bmatrix}}\,,\quad &K_{y}&={\begin{bmatrix}0&0&1&0\\0&0&0&0\\1&0&0&0\\0&0&0&0\end{bmatrix}}\,,\quad &K_{z}&={\begin{bmatrix}0&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&0\end{bmatrix}}\\[10mu]J_{x}&={\begin{bmatrix}0&0&0&0\\0&0&0&0\\0&0&0&-1\\0&0&1&0\\\end{bmatrix}}\,,\quad &J_{y}&={\begin{bmatrix}0&0&0&0\\0&0&0&1\\0&0&0&0\\0&-1&0&0\end{bmatrix}}\,,\quad &J_{z}&={\begin{bmatrix}0&0&0&0\\0&0&-1&0\\0&1&0&0\\0&0&0&0\end{bmatrix}}\end{alignedat}}}
These are all defined in an analogous way to Kx above, although the minus signs in the boost generators are conventional. Physically, the generators of the Lorentz group correspond to important symmetries in spacetime: J are the rotation generators which correspond to angular momentum, and K are the boost generators which correspond to the motion of the system in spacetime. The derivative of any smooth curve C(t) with C(0) = I in the group depending on some group parameter t with respect to that group parameter, evaluated at t = 0, serves as a definition of a corresponding group generator G, and this reflects an infinitesimal transformation away from the identity. The smooth curve can always be taken as an exponential as the exponential will always map G smoothly back into the group via t → exp(tG) for all t; this curve will yield G again when differentiated at t = 0.
Expanding the exponentials in their Taylor series obtains
{\displaystyle B({\boldsymbol {\zeta }})=I-\sinh \zeta (\mathbf {n} \cdot \mathbf {K} )+(\cosh \zeta -1)(\mathbf {n} \cdot \mathbf {K} )^{2}}
{\displaystyle R({\boldsymbol {\theta }})=I+\sin \theta (\mathbf {e} \cdot \mathbf {J} )+(1-\cos \theta )(\mathbf {e} \cdot \mathbf {J} )^{2}\,.}
which compactly reproduce the boost and rotation matrices as given in the previous section.
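As a check, the boost expansion can be evaluated for a general unit direction n. The sketch below builds n·K from the generator matrices, applies the formula above, and verifies that the resulting matrix has B[0][0] = cosh ζ = γ and preserves the metric η = diag(1, −1, −1, −1) (illustrative code, not from any library):

```python
import math

Kx = [[0,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]]
Ky = [[0,0,1,0],[0,0,0,0],[1,0,0,0],[0,0,0,0]]
Kz = [[0,0,0,1],[0,0,0,0],[0,0,0,0],[1,0,0,0]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

n = (1/3, 2/3, 2/3)                      # unit vector, |n| = 1
nK = [[n[0]*Kx[i][j] + n[1]*Ky[i][j] + n[2]*Kz[i][j] for j in range(4)] for i in range(4)]
nK2 = matmul(nK, nK)

zeta = 0.6
ch, sh = math.cosh(zeta), math.sinh(zeta)
# B = I - sinh(zeta) (n.K) + (cosh(zeta) - 1) (n.K)^2
B = [[(1.0 if i == j else 0.0) - sh*nK[i][j] + (ch - 1.0)*nK2[i][j]
      for j in range(4)] for i in range(4)]

eta = [[1,0,0,0],[0,-1,0,0],[0,0,-1,0],[0,0,0,-1]]
M = matmul([list(r) for r in zip(*B)], matmul(eta, B))   # B^T eta B
ok = all(abs(M[i][j] - eta[i][j]) < 1e-9 for i in range(4) for j in range(4))
print(abs(B[0][0] - ch) < 1e-9, ok)
```

The identity works because (n·K)³ = n·K for a unit vector n, so the exponential series collapses to the three terms shown.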
It has been stated that the general proper Lorentz transformation is a product of a boost and rotation. At the infinitesimal level the product
{\displaystyle {\begin{aligned}\Lambda &=(I-{\boldsymbol {\zeta }}\cdot \mathbf {K} +\cdots )(I+{\boldsymbol {\theta }}\cdot \mathbf {J} +\cdots )\\&=(I+{\boldsymbol {\theta }}\cdot \mathbf {J} +\cdots )(I-{\boldsymbol {\zeta }}\cdot \mathbf {K} +\cdots )\\&=I-{\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} +\cdots \end{aligned}}}
is commutative because only linear terms are required (products like (θ·J)(ζ·K) and (ζ·K)(θ·J) count as higher order terms and are negligible). Taking the limit as before leads to the finite transformation in the form of an exponential
{\displaystyle \Lambda ({\boldsymbol {\zeta }},{\boldsymbol {\theta }})=e^{-{\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} }.}
The converse is also true, but the decomposition of a finite general Lorentz transformation into such factors is nontrivial. In particular,
{\displaystyle e^{-{\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} }\neq e^{-{\boldsymbol {\zeta }}\cdot \mathbf {K} }e^{{\boldsymbol {\theta }}\cdot \mathbf {J} },}
because the generators do not commute. For a description of how to find the factors of a general Lorentz transformation in terms of a boost and a rotation in principle (this usually does not yield an intelligible expression in terms of generators J and K), see Wigner rotation. If, on the other hand, the decomposition is given in terms of the generators, and one wants to find the product in terms of the generators, then the Baker–Campbell–Hausdorff formula applies.
==== The Lie algebra so(3,1) ====
Lorentz generators can be added together, or multiplied by real numbers, to obtain more Lorentz generators. In other words, the set of all Lorentz generators
{\displaystyle V=\{{\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} \}}
together with the operations of ordinary matrix addition and multiplication of a matrix by a number, forms a vector space over the real numbers. The generators Jx, Jy, Jz, Kx, Ky, Kz form a basis set of V, and the components of the axis-angle and rapidity vectors, θx, θy, θz, ζx, ζy, ζz, are the coordinates of a Lorentz generator with respect to this basis.
Three of the commutation relations of the Lorentz generators are
{\displaystyle [J_{x},J_{y}]=J_{z}\,,\quad [K_{x},K_{y}]=-J_{z}\,,\quad [J_{x},K_{y}]=K_{z}\,,}
where the bracket [A, B] = AB − BA is known as the commutator, and the other relations can be found by taking cyclic permutations of x, y, z components (i.e. change x to y, y to z, and z to x, repeat).
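These relations can be verified directly with the explicit generator matrices given earlier; since the entries are integers, the check below is exact:

```python
Kx = [[0,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]]
Ky = [[0,0,1,0],[0,0,0,0],[1,0,0,0],[0,0,0,0]]
Kz = [[0,0,0,1],[0,0,0,0],[0,0,0,0],[1,0,0,0]]
Jx = [[0,0,0,0],[0,0,0,0],[0,0,0,-1],[0,0,1,0]]
Jy = [[0,0,0,0],[0,0,0,1],[0,0,0,0],[0,-1,0,0]]
Jz = [[0,0,0,0],[0,0,-1,0],[0,1,0,0],[0,0,0,0]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def comm(A, B):
    # the commutator [A, B] = AB - BA
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(4)] for i in range(4)]

def eq(A, B):
    return all(A[i][j] == B[i][j] for i in range(4) for j in range(4))

neg = lambda M: [[-x for x in row] for row in M]

# [Jx, Jy] = Jz, [Kx, Ky] = -Jz, [Jx, Ky] = Kz
print(eq(comm(Jx, Jy), Jz), eq(comm(Kx, Ky), neg(Jz)), eq(comm(Jx, Ky), Kz))
```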
These commutation relations, and the vector space of generators, fulfill the definition of the Lie algebra
{\displaystyle {\mathfrak {so}}(3,1)}
. In summary, a Lie algebra is defined as a vector space V over a field of numbers, and with a binary operation [ , ] (called a Lie bracket in this context) on the elements of the vector space, satisfying the axioms of bilinearity, alternativity, and the Jacobi identity. Here the operation [ , ] is the commutator which satisfies all of these axioms, the vector space is the set of Lorentz generators V as given previously, and the field is the set of real numbers.
Linking terminology used in mathematics and physics: A group generator is any element of the Lie algebra. A group parameter is a component of a coordinate vector representing an arbitrary element of the Lie algebra with respect to some basis. A basis, then, is a set of generators being a basis of the Lie algebra in the usual vector space sense.
The exponential map from the Lie algebra to the Lie group,
{\displaystyle \exp \,:\,{\mathfrak {so}}(3,1)\to \mathrm {SO} (3,1),}
provides a one-to-one correspondence between small enough neighborhoods of the origin of the Lie algebra and neighborhoods of the identity element of the Lie group. In the case of the Lorentz group, the exponential map is just the matrix exponential. Globally, the exponential map is not one-to-one, but in the case of the Lorentz group, it is surjective (onto). Hence any group element in the connected component of the identity can be expressed as an exponential of an element of the Lie algebra.
=== Improper transformations ===
Lorentz transformations also include parity inversion
{\displaystyle P={\begin{bmatrix}1&0\\0&-\mathbf {I} \end{bmatrix}}}
which negates all the spatial coordinates only, and time reversal
{\displaystyle T={\begin{bmatrix}-1&0\\0&\mathbf {I} \end{bmatrix}}}
which negates the time coordinate only, because these transformations leave the spacetime interval invariant. Here I is the 3 × 3 identity matrix. These are both symmetric, they are their own inverses (see involution (mathematics)), and each have determinant −1. This latter property makes them improper transformations.
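Both properties are immediate to verify numerically. The sketch below checks that P and T are involutions (their own inverses) and, since both are diagonal, computes each determinant as the product of the diagonal entries:

```python
P = [[1,0,0,0],[0,-1,0,0],[0,0,-1,0],[0,0,0,-1]]   # parity inversion
T = [[-1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]     # time reversal

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]

def det_diag(M):
    # both matrices are diagonal, so the determinant is the product of the diagonal
    d = 1
    for i in range(4):
        d *= M[i][i]
    return d

# involutions: P*P = T*T = I, and det = -1 makes both improper
print(matmul(P, P) == I4, matmul(T, T) == I4, det_diag(P), det_diag(T))
```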
If Λ is a proper orthochronous Lorentz transformation, then TΛ is improper antichronous, PΛ is improper orthochronous, and TPΛ = PTΛ is proper antichronous.
=== Inhomogeneous Lorentz group ===
Two other spacetime symmetries have not been accounted for. In order for the spacetime interval to be invariant, it can be shown that it is necessary and sufficient for the coordinate transformation to be of the form
{\displaystyle X'=\Lambda X+C}
where C is a constant column containing translations in time and space. If C ≠ 0, this is an inhomogeneous Lorentz transformation or Poincaré transformation. If C = 0, this is a homogeneous Lorentz transformation. Poincaré transformations are not dealt with further in this article.
== Tensor formulation ==
=== Contravariant vectors ===
Writing the general matrix transformation of coordinates as the matrix equation
{\displaystyle {\begin{bmatrix}{x'}^{0}\\{x'}^{1}\\{x'}^{2}\\{x'}^{3}\end{bmatrix}}={\begin{bmatrix}{\Lambda ^{0}}_{0}&{\Lambda ^{0}}_{1}&{\Lambda ^{0}}_{2}&{\Lambda ^{0}}_{3}\\{\Lambda ^{1}}_{0}&{\Lambda ^{1}}_{1}&{\Lambda ^{1}}_{2}&{\Lambda ^{1}}_{3}\\{\Lambda ^{2}}_{0}&{\Lambda ^{2}}_{1}&{\Lambda ^{2}}_{2}&{\Lambda ^{2}}_{3}\\{\Lambda ^{3}}_{0}&{\Lambda ^{3}}_{1}&{\Lambda ^{3}}_{2}&{\Lambda ^{3}}_{3}\end{bmatrix}}{\begin{bmatrix}x^{0}\\x^{1}\\x^{2}\\x^{3}\end{bmatrix}}}
allows the transformation of other physical quantities that cannot be expressed as four-vectors; e.g., tensors or spinors of any order in 4-dimensional spacetime, to be defined. In the corresponding tensor index notation, the above matrix expression is
{\displaystyle {x'}^{\nu }={\Lambda ^{\nu }}_{\mu }x^{\mu },}
where lower and upper indices label covariant and contravariant components respectively, and the summation convention is applied. It is a standard convention to use Greek indices that take the value 0 for time components, and 1, 2, 3 for space components, while Latin indices simply take the values 1, 2, 3, for spatial components (the opposite for Landau and Lifshitz). Note that the first index (reading left to right) corresponds in the matrix notation to a row index. The second index corresponds to the column index.
The transformation matrix is universal for all four-vectors, not just 4-dimensional spacetime coordinates. If A is any four-vector, then in tensor index notation
{\displaystyle {A'}^{\nu }={\Lambda ^{\nu }}_{\mu }A^{\mu }\,.}
Alternatively, one writes
{\displaystyle A^{\nu '}={\Lambda ^{\nu '}}_{\mu }A^{\mu }\,.}
in which the primed indices denote the indices of A in the primed frame. For a general n-component object one may write
{\displaystyle {X'}^{\alpha }={\Pi (\Lambda )^{\alpha }}_{\beta }X^{\beta }\,,}
where Π is the appropriate representation of the Lorentz group, an n × n matrix for every Λ. In this case, the indices should not be thought of as spacetime indices (sometimes called Lorentz indices), and they run from 1 to n. E.g., if X is a bispinor, then the indices are called Dirac indices.
=== Covariant vectors ===
There are also vector quantities with covariant indices. They are generally obtained from their corresponding objects with contravariant indices by the operation of lowering an index; e.g.,
{\displaystyle x_{\nu }=\eta _{\mu \nu }x^{\mu },}
where η is the metric tensor. (The linked article also provides more information about what the operation of raising and lowering indices really is mathematically.) The inverse of this transformation is given by
{\displaystyle x^{\mu }=\eta ^{\mu \nu }x_{\nu },}
where, when viewed as matrices, η^{μν} is the inverse of η_{μν}. As it happens, η^{μν} = η_{μν}. This is referred to as raising an index. To transform a covariant vector Aμ, first raise its index, then transform it according to the same rule as for contravariant 4-vectors, then finally lower the index;
{\displaystyle {A'}_{\nu }=\eta _{\rho \nu }{\Lambda ^{\rho }}_{\sigma }\eta ^{\mu \sigma }A_{\mu }.}
But
{\displaystyle \eta _{\rho \nu }{\Lambda ^{\rho }}_{\sigma }\eta ^{\mu \sigma }={\left(\Lambda ^{-1}\right)^{\mu }}_{\nu },}
That is, it is the (μ, ν)-component of the inverse Lorentz transformation. One defines (as a matter of notation),
{\displaystyle {\Lambda _{\nu }}^{\mu }\equiv {\left(\Lambda ^{-1}\right)^{\mu }}_{\nu },}
and may in this notation write
{\displaystyle {A'}_{\nu }={\Lambda _{\nu }}^{\mu }A_{\mu }.}
Now for a subtlety. The implied summation on the right hand side of
{\displaystyle {A'}_{\nu }={\Lambda _{\nu }}^{\mu }A_{\mu }={\left(\Lambda ^{-1}\right)^{\mu }}_{\nu }A_{\mu }}
is running over a row index of the matrix representing Λ−1. Thus, in terms of matrices, this transformation should be thought of as the inverse transpose of Λ acting on the column vector Aμ. That is, in pure matrix notation,
{\displaystyle A'=\left(\Lambda ^{-1}\right)^{\mathrm {T} }A.}
This means exactly that covariant vectors (thought of as column matrices) transform according to the dual representation of the standard representation of the Lorentz group. This notion generalizes to general representations, simply replace Λ with Π(Λ).
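The practical content is that the contraction A_μ x^μ is frame independent. The sketch below (boost along x, c = 1, illustrative values) transforms a contravariant vector with Λ and a covariant one with (Λ⁻¹)ᵀ, using the fact that B(v)⁻¹ = B(−v), and checks that the contraction is unchanged:

```python
import math

def boost_x(v):
    # boost along x for (ct, x, y, z); the standard (contravariant) representation
    g = 1.0 / math.sqrt(1.0 - v*v)
    return [[g, -g*v, 0.0, 0.0],
            [-g*v, g, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def matvec(M, x):
    return [sum(M[i][k]*x[k] for k in range(4)) for i in range(4)]

def transpose(M):
    return [list(r) for r in zip(*M)]

v = 0.6
L = boost_x(v)
Linv_T = transpose(boost_x(-v))     # (Lambda^{-1})^T, since B(v)^{-1} = B(-v)

x = [1.0, 0.2, -0.7, 0.4]           # contravariant components x^mu
A = [0.3, 1.1, 0.5, -0.2]           # covariant components A_mu

xp = matvec(L, x)                   # x'^nu = Lambda^nu_mu x^mu
Ap = matvec(Linv_T, A)              # A' = (Lambda^{-1})^T A

s  = sum(A[i]*x[i] for i in range(4))
sp = sum(Ap[i]*xp[i] for i in range(4))
print(abs(s - sp) < 1e-12)          # the contraction A_mu x^mu is invariant
```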
=== Tensors ===
If A and B are linear operators on vector spaces U and V, then a linear operator A ⊗ B may be defined on the tensor product of U and V, denoted U ⊗ V, according to (A ⊗ B)(u ⊗ v) = Au ⊗ Bv.
From this it is immediately clear that if u and v are four-vectors in V, then u ⊗ v ∈ T2V ≡ V ⊗ V transforms as
The second step uses the bilinearity of the tensor product and the last step defines a 2-tensor on component form, or rather, it just renames the tensor u ⊗ v.
These observations generalize in an obvious way to more factors, and using the fact that a general tensor on a vector space V can be written as a sum of a coefficient (component!) times tensor products of basis vectors and basis covectors, one arrives at the transformation law for any tensor quantity T. It is given by
where Λ_{χ′}^{ψ} is defined above. This form can generally be reduced to the form for general n-component objects given above with a single matrix (Π(Λ)) operating on column vectors. This latter form is sometimes preferred; e.g., for the electromagnetic field tensor.
==== Transformation of the electromagnetic field ====
Lorentz transformations can also be used to illustrate that the magnetic field B and electric field E are simply different aspects of the same force — the electromagnetic force, as a consequence of relative motion between electric charges and observers. The fact that the electromagnetic field shows relativistic effects becomes clear by carrying out a simple thought experiment.
An observer measures a charge at rest in frame F. The observer will detect a static electric field. As the charge is stationary in this frame, there is no electric current, so the observer does not observe any magnetic field.
The other observer in frame F′ moves at velocity v relative to F and the charge. This observer sees a different electric field because the charge moves at velocity −v in their rest frame. The motion of the charge corresponds to an electric current, and thus the observer in frame F′ also sees a magnetic field.
The electric and magnetic fields transform differently from space and time, but exactly the same way as relativistic angular momentum and the boost vector.
The electromagnetic field strength tensor is given by
{\displaystyle F^{\mu \nu }={\begin{bmatrix}0&-{\frac {1}{c}}E_{x}&-{\frac {1}{c}}E_{y}&-{\frac {1}{c}}E_{z}\\{\frac {1}{c}}E_{x}&0&-B_{z}&B_{y}\\{\frac {1}{c}}E_{y}&B_{z}&0&-B_{x}\\{\frac {1}{c}}E_{z}&-B_{y}&B_{x}&0\end{bmatrix}}{\text{(SI units, signature }}(+,-,-,-){\text{)}}.}
in SI units. In relativity, the Gaussian system of units is often preferred over SI units, even in texts whose main choice of units is SI units, because in it the electric field E and the magnetic induction B have the same units making the appearance of the electromagnetic field tensor more natural. Consider a Lorentz boost in the x-direction. It is given by
{\displaystyle {\Lambda ^{\mu }}_{\nu }={\begin{bmatrix}\gamma &-\gamma \beta &0&0\\-\gamma \beta &\gamma &0&0\\0&0&1&0\\0&0&0&1\\\end{bmatrix}},\qquad F^{\mu \nu }={\begin{bmatrix}0&E_{x}&E_{y}&E_{z}\\-E_{x}&0&B_{z}&-B_{y}\\-E_{y}&-B_{z}&0&B_{x}\\-E_{z}&B_{y}&-B_{x}&0\end{bmatrix}}{\text{(Gaussian units, signature }}(-,+,+,+){\text{)}},}
where the field tensor is displayed side by side for easiest possible reference in the manipulations below.
The general transformation law (T3) becomes
{\displaystyle F^{\mu '\nu '}={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu }.}
For the magnetic field one obtains
{\displaystyle {\begin{aligned}B_{x'}&=F^{2'3'}={\Lambda ^{2}}_{\mu }{\Lambda ^{3}}_{\nu }F^{\mu \nu }={\Lambda ^{2}}_{2}{\Lambda ^{3}}_{3}F^{23}=1\times 1\times B_{x}\\&=B_{x},\\B_{y'}&=F^{3'1'}={\Lambda ^{3}}_{\mu }{\Lambda ^{1}}_{\nu }F^{\mu \nu }={\Lambda ^{3}}_{3}{\Lambda ^{1}}_{\nu }F^{3\nu }={\Lambda ^{3}}_{3}{\Lambda ^{1}}_{0}F^{30}+{\Lambda ^{3}}_{3}{\Lambda ^{1}}_{1}F^{31}\\&=1\times (-\beta \gamma )(-E_{z})+1\times \gamma B_{y}=\gamma B_{y}+\beta \gamma E_{z}\\&=\gamma \left(\mathbf {B} -{\boldsymbol {\beta }}\times \mathbf {E} \right)_{y}\\B_{z'}&=F^{1'2'}={\Lambda ^{1}}_{\mu }{\Lambda ^{2}}_{\nu }F^{\mu \nu }={\Lambda ^{1}}_{\mu }{\Lambda ^{2}}_{2}F^{\mu 2}={\Lambda ^{1}}_{0}{\Lambda ^{2}}_{2}F^{02}+{\Lambda ^{1}}_{1}{\Lambda ^{2}}_{2}F^{12}\\&=(-\gamma \beta )\times 1\times E_{y}+\gamma \times 1\times B_{z}=\gamma B_{z}-\beta \gamma E_{y}\\&=\gamma \left(\mathbf {B} -{\boldsymbol {\beta }}\times \mathbf {E} \right)_{z}\end{aligned}}}
For the electric field results
{\displaystyle {\begin{aligned}E_{x'}&=F^{0'1'}={\Lambda ^{0}}_{\mu }{\Lambda ^{1}}_{\nu }F^{\mu \nu }={\Lambda ^{0}}_{1}{\Lambda ^{1}}_{0}F^{10}+{\Lambda ^{0}}_{0}{\Lambda ^{1}}_{1}F^{01}\\&=(-\gamma \beta )(-\gamma \beta )(-E_{x})+\gamma \gamma E_{x}=-\gamma ^{2}\beta ^{2}(E_{x})+\gamma ^{2}E_{x}=E_{x}(1-\beta ^{2})\gamma ^{2}\\&=E_{x},\\E_{y'}&=F^{0'2'}={\Lambda ^{0}}_{\mu }{\Lambda ^{2}}_{\nu }F^{\mu \nu }={\Lambda ^{0}}_{\mu }{\Lambda ^{2}}_{2}F^{\mu 2}={\Lambda ^{0}}_{0}{\Lambda ^{2}}_{2}F^{02}+{\Lambda ^{0}}_{1}{\Lambda ^{2}}_{2}F^{12}\\&=\gamma \times 1\times E_{y}+(-\beta \gamma )\times 1\times B_{z}=\gamma E_{y}-\beta \gamma B_{z}\\&=\gamma \left(\mathbf {E} +{\boldsymbol {\beta }}\times \mathbf {B} \right)_{y}\\E_{z'}&=F^{0'3'}={\Lambda ^{0}}_{\mu }{\Lambda ^{3}}_{\nu }F^{\mu \nu }={\Lambda ^{0}}_{\mu }{\Lambda ^{3}}_{3}F^{\mu 3}={\Lambda ^{0}}_{0}{\Lambda ^{3}}_{3}F^{03}+{\Lambda ^{0}}_{1}{\Lambda ^{3}}_{3}F^{13}\\&=\gamma \times 1\times E_{z}-\beta \gamma \times 1\times (-B_{y})=\gamma E_{z}+\beta \gamma B_{y}\\&=\gamma \left(\mathbf {E} +{\boldsymbol {\beta }}\times \mathbf {B} \right)_{z}.\end{aligned}}}
Here, β = (β, 0, 0) is used. These results can be summarized by
{\displaystyle {\begin{aligned}\mathbf {E} _{\parallel '}&=\mathbf {E} _{\parallel }\\\mathbf {B} _{\parallel '}&=\mathbf {B} _{\parallel }\\\mathbf {E} _{\bot '}&=\gamma \left(\mathbf {E} _{\bot }+{\boldsymbol {\beta }}\times \mathbf {B} _{\bot }\right)=\gamma \left(\mathbf {E} +{\boldsymbol {\beta }}\times \mathbf {B} \right)_{\bot },\\\mathbf {B} _{\bot '}&=\gamma \left(\mathbf {B} _{\bot }-{\boldsymbol {\beta }}\times \mathbf {E} _{\bot }\right)=\gamma \left(\mathbf {B} -{\boldsymbol {\beta }}\times \mathbf {E} \right)_{\bot },\end{aligned}}}
and are independent of the metric signature. For SI units, substitute E → E⁄c. Misner, Thorne & Wheeler (1973) refer to this last form as the 3 + 1 view as opposed to the geometric view represented by the tensor expression
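A quick numerical check of these component formulas (boost along x, c = 1, illustrative field values) is that they preserve the two field invariants E·B and E² − B²:

```python
import math

def transform(E, B, beta):
    # boost along x with speed beta; components follow the summary above
    g = 1.0 / math.sqrt(1.0 - beta*beta)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    Ep = (Ex, g*(Ey - beta*Bz), g*(Ez + beta*By))   # E parallel unchanged, E perp mixed with B
    Bp = (Bx, g*(By + beta*Ez), g*(Bz - beta*Ey))   # B parallel unchanged, B perp mixed with E
    return Ep, Bp

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

E = (0.2, 0.5, -0.3)
B = (0.1, 0.4, 0.6)
Ep, Bp = transform(E, B, 0.6)

# the Lorentz invariants E.B and E^2 - B^2 are the same in both frames
print(abs(dot(Ep, Bp) - dot(E, B)) < 1e-12,
      abs((dot(Ep, Ep) - dot(Bp, Bp)) - (dot(E, E) - dot(B, B))) < 1e-12)
```

These two scalars are (up to factors) the invariants F_{μν}F^{μν} and F_{μν}(⋆F)^{μν} of the field tensor, which is why any correct transformation of E and B must leave them fixed.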
{\displaystyle F^{\mu '\nu '}={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu },}
and make a strong point of the ease with which results that are difficult to achieve using the 3 + 1 view can be obtained and understood. Only objects that have well defined Lorentz transformation properties (in fact under any smooth coordinate transformation) are geometric objects. In the geometric view, the electromagnetic field is a six-dimensional geometric object in spacetime as opposed to two interdependent, but separate, 3-vector fields in space and time. The fields E (alone) and B (alone) do not have well defined Lorentz transformation properties. The mathematical underpinnings are equations (T1) and (T2) that immediately yield (T3). One should note that the primed and unprimed tensors refer to the same event in spacetime. Thus the complete equation with spacetime dependence is
{\displaystyle F^{\mu '\nu '}\left(x'\right)={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu }\left(\Lambda ^{-1}x'\right)={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu }(x).}
Length contraction has an effect on charge density ρ and current density J, and time dilation has an effect on the rate of flow of charge (current), so charge and current distributions must transform in a related way under a boost. It turns out they transform exactly like the space-time and energy-momentum four-vectors,
{\displaystyle {\begin{aligned}\mathbf {j} '&=\mathbf {j} -\gamma \rho v\mathbf {n} +\left(\gamma -1\right)(\mathbf {j} \cdot \mathbf {n} )\mathbf {n} \\\rho '&=\gamma \left(\rho -\mathbf {j} \cdot {\frac {v\mathbf {n} }{c^{2}}}\right),\end{aligned}}}
or, in the simpler geometric view,
{\displaystyle j^{\mu '}={\Lambda ^{\mu '}}_{\mu }j^{\mu }.}
Charge density transforms as the time component of a four-vector. It is a rotational scalar. The current density is a 3-vector.
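The equivalence of the component formulas with the four-vector rule can be checked numerically. In the sketch below (c = 1, hypothetical values), j^μ = (cρ, j) is transformed with the general boost matrix B(vn) given earlier in the article and compared with the quoted component formulas:

```python
import math

c = 1.0
n = (1/3, 2/3, 2/3)         # unit vector along the boost direction
v = 0.5                     # boost speed
g = 1.0 / math.sqrt(1.0 - (v/c)**2)

rho = 0.8
j = (0.3, -0.2, 0.5)
jn = sum(ji*ni for ji, ni in zip(j, n))   # j . n

# component formulas quoted above
jp = tuple(j[i] - g*rho*v*n[i] + (g - 1.0)*jn*n[i] for i in range(3))
rhop = g*(rho - jn*v/c**2)

# same result from the four-vector rule with j^mu = (c rho, j) and the boost B(vn)
J = [c*rho, j[0], j[1], j[2]]
L = [[g, -g*v*n[0]/c, -g*v*n[1]/c, -g*v*n[2]/c],
     [-g*v*n[0]/c, 1+(g-1)*n[0]*n[0], (g-1)*n[0]*n[1], (g-1)*n[0]*n[2]],
     [-g*v*n[1]/c, (g-1)*n[1]*n[0], 1+(g-1)*n[1]*n[1], (g-1)*n[1]*n[2]],
     [-g*v*n[2]/c, (g-1)*n[2]*n[0], (g-1)*n[2]*n[1], 1+(g-1)*n[2]*n[2]]]
Jp = [sum(L[i][k]*J[k] for k in range(4)) for i in range(4)]

ok = abs(Jp[0]/c - rhop) < 1e-12 and all(abs(Jp[i+1] - jp[i]) < 1e-12 for i in range(3))
print(ok)
```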
The Maxwell equations are invariant under Lorentz transformations.
=== Spinors ===
Equation (T1) holds unmodified for any representation of the Lorentz group, including the bispinor representation. In (T2) one simply replaces all occurrences of Λ by the bispinor representation Π(Λ),
The above equation could, for instance, be the transformation of a state in Fock space describing two free electrons.
==== Transformation of general fields ====
A general noninteracting multi-particle state (Fock space state) in quantum field theory transforms according to the rule
where W(Λ, p) is Wigner's little group and D(j) is the (2j + 1)-dimensional representation of SO(3).
== See also ==
== Footnotes ==
== Notes ==
== References ==
=== Websites ===
O'Connor, John J.; Robertson, Edmund F. (1996), A History of Special Relativity
Brown, Harvey R. (2003), Michelson, FitzGerald and Lorentz: the Origins of Relativity Revisited
=== Papers ===
=== Books ===
== Further reading ==
Ernst, A.; Hsu, J.-P. (2001), "First proposal of the universal speed of light by Voigt 1887" (PDF), Chinese Journal of Physics, 39 (3): 211–230, Bibcode:2001ChJPh..39..211E, archived from the original (PDF) on 2011-07-16
Thornton, Stephen T.; Marion, Jerry B. (2004), Classical dynamics of particles and systems (5th ed.), Belmont, [CA.]: Brooks/Cole, pp. 546–579, ISBN 978-0-534-40896-1
Voigt, Woldemar (1887), "Über das Doppler'sche princip", Nachrichten von der Königlicher Gesellschaft den Wissenschaft zu Göttingen, 2: 41–51
== External links ==
Derivation of the Lorentz transformations. This web page contains a more detailed derivation of the Lorentz transformation with special emphasis on group properties.
The Paradox of Special Relativity. This webpage poses a problem, the solution of which is the Lorentz transformation, which is presented graphically in its next page.
Relativity Archived 2011-08-29 at the Wayback Machine – a chapter from an online textbook
Warp Special Relativity Simulator. A computer program demonstrating the Lorentz transformations on everyday objects.
Animation clip on YouTube visualizing the Lorentz transformation.
MinutePhysics video on YouTube explaining and visualizing the Lorentz transformation with a mechanical Minkowski diagram
Interactive graph on Desmos (graphing) showing Lorentz transformations with a virtual Minkowski diagram
Interactive graph on Desmos showing Lorentz transformations with points and hyperbolas
Lorentz Frames Animated from John de Pillis. Online Flash animations of Galilean and Lorentz frames, various paradoxes, EM wave phenomena, etc.
Computer-aided design (CAD) is the use of computers (or workstations) to aid in the creation, modification, analysis, or optimization of a design. This software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and to create a database for manufacturing. Designs made through CAD software help protect products and inventions when used in patent applications. CAD output is often in the form of electronic files for print, machining, or other manufacturing operations. The terms computer-aided drafting (CAD) and computer-aided design and drafting (CADD) are also used.
Its use in designing electronic systems is known as electronic design automation (EDA). In mechanical design it is known as mechanical design automation (MDA), which includes the process of creating a technical drawing with the use of computer software.
CAD software for mechanical design uses either vector-based graphics to depict the objects of traditional drafting, or may also produce raster graphics showing the overall appearance of designed objects. However, it involves more than just shapes. As in the manual drafting of technical and engineering drawings, the output of CAD must convey information, such as materials, processes, dimensions, and tolerances, according to application-specific conventions.
CAD may be used to design curves and figures in two-dimensional (2D) space; or curves, surfaces, and solids in three-dimensional (3D) space.
CAD is an important industrial art extensively used in many applications, including automotive, shipbuilding, and aerospace industries, industrial and architectural design (building information modeling), prosthetics, and many more. CAD is also widely used to produce computer animation for special effects in movies, advertising and technical manuals, often called DCC digital content creation. The modern ubiquity and power of computers means that even perfume bottles and shampoo dispensers are designed using techniques unheard of by engineers of the 1960s. Because of its enormous economic importance, CAD has been a major driving force for research in computational geometry, computer graphics (both hardware and software), and discrete differential geometry.
The design of geometric models for object shapes, in particular, is occasionally called computer-aided geometric design (CAGD).
== Overview ==
Computer-aided design is one of the many tools used by engineers and designers and is used in many ways depending on the profession of the user and the type of software in question.
CAD is one part of the whole digital product development (DPD) activity within the product lifecycle management (PLM) processes, and as such is used together with other tools, which are either integrated modules or stand-alone products, such as:
Computer-aided engineering (CAE) and finite element analysis (FEA, FEM)
Computer-aided manufacturing (CAM) including instructions to computer numerical control (CNC) machines
Photorealistic rendering and motion simulation
Document management and revision control using product data management (PDM)
CAD is also used for the accurate creation of photo simulations that are often required in the preparation of environmental impact reports, in which computer-aided designs of intended buildings are superimposed into photographs of existing environments to represent what that locale will be like if the proposed facilities are built. Potential blockage of view corridors and shadow studies are also frequently analyzed through the use of CAD.
== Types ==
There are several different types of CAD, each requiring the operator to think differently about how to use them and design their virtual components in a different manner. Virtually all CAD tools rely on constraint concepts that are used to define geometric or non-geometric elements of a model.
=== 2D CAD ===
There are many producers of the lower-end 2D sketching systems, including a number of free and open-source programs. These provide an approach to the drawing process where scale and placement on the drawing sheet can easily be adjusted in the final draft as required, unlike in hand drafting.
=== 3D CAD ===
3D wireframe is an extension of 2D drafting into a three-dimensional space. Each line has to be manually inserted into the drawing. The final product has no mass properties associated with it and cannot have features directly added to it, such as holes. The operator approaches these in a similar fashion to the 2D systems, although many 3D systems allow using the wireframe model to make the final engineering drawing views.
3D "dumb" solids are created in a way analogous to manipulations of real-world objects. Basic three-dimensional geometric forms (e.g., prisms, cylinders, spheres, or rectangles) have solid volumes added or subtracted from them as if assembling or cutting real-world objects. Two-dimensional projected views can easily be generated from the models. Basic 3D solids do not usually include tools to easily allow the motion of the components, set their limits to their motion, or identify interference between components.
There are several types of 3D solid modeling:
Parametric modeling allows the operator to use what is referred to as "design intent". The objects and features are created to be modifiable: any future modification can be made by changing how the original part was created. If a feature was intended to be located from the center of the part, the operator should locate it from the center of the model. The feature could be located using any geometric object already available in the part, but such arbitrary placement would defeat the design intent. If the operator designs the part as it functions, the parametric modeler is able to make changes to the part while maintaining geometric and functional relationships.
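The idea of design intent can be sketched in code. The following is a hypothetical, minimal Python illustration (the `Plate` and `CenteredHole` classes are invented for this sketch and are not part of any real CAD API): because the hole stores a reference to the plate and derives its position from the plate's center, editing the driving dimension automatically repositions the hole.

```python
# Minimal sketch of parametric "design intent" (hypothetical classes,
# not any real CAD API): a hole is located relative to the plate's
# center, so resizing the plate automatically repositions the hole.

class Plate:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    @property
    def center(self):
        # Dependent geometry is derived, never stored, so it stays
        # consistent when the driving dimensions change.
        return (self.width / 2, self.height / 2)

class CenteredHole:
    def __init__(self, plate, diameter):
        self.plate = plate          # parametric reference, not a copy
        self.diameter = diameter

    @property
    def position(self):
        return self.plate.center   # re-evaluated on every access

plate = Plate(width=100, height=60)
hole = CenteredHole(plate, diameter=8)
print(hole.position)   # (50.0, 30.0)

plate.width = 140      # edit the driving dimension...
print(hole.position)   # (70.0, 30.0) -- the hole follows the center
```

Had the hole instead stored a fixed coordinate ("random placement"), the edit would have left it stranded; storing the relationship is what preserves the intent.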
Direct or explicit modeling provides the ability to edit geometry without a history tree. With direct modeling, once a sketch is used to create geometry, the sketch is incorporated into the new geometry, and the designer simply modifies the geometry afterward without needing the original sketch. As with parametric modeling, direct modeling has the ability to include relationships between selected geometry (e.g., tangency, concentricity).
Assembly modelling is a process which incorporates the results of the previous single-part modelling into a final product containing several parts. Assemblies can be hierarchical, depending on the specific CAD software vendor, and highly complex models can be achieved (e.g. in building engineering by using computer-aided architectural design software).
==== Freeform CAD ====
Top-end CAD systems offer the capability to incorporate more organic, aesthetic and ergonomic features into the designs. Freeform surface modeling is often combined with solids to allow the designer to create products that fit the human form and visual requirements as well as how they interface with the machine.
== Technology ==
Originally, software for CAD systems was developed with computer languages such as Fortran and ALGOL, but with the advancement of object-oriented programming methods this has radically changed. Typical modern parametric feature-based modelers and freeform surface systems are built around a number of key C modules with their own APIs. A CAD system can be seen as built up from the interaction of a graphical user interface (GUI) with NURBS geometry or boundary representation (B-rep) data via a geometric modeling kernel. A geometry constraint engine may also be employed to manage the associative relationships between geometry, such as wireframe geometry in a sketch or components in an assembly.
Unexpected capabilities of these associative relationships have led to a new form of prototyping called digital prototyping. In contrast to physical prototypes, which entail manufacturing time, digital prototypes can be evaluated and revised entirely within the design environment. That said, CAD models can also be generated by a computer after a physical prototype has been scanned using an industrial CT scanning machine. Depending on the nature of the business, digital or physical prototypes can be initially chosen according to specific needs.
Today, CAD systems exist for all the major platforms (Windows, Linux, UNIX and Mac OS X); some packages support multiple platforms.
Currently, no special hardware is required for most CAD software. However, some CAD systems can do graphically and computationally intensive tasks, so a modern graphics card, high speed (and possibly multiple) CPUs and large amounts of RAM may be recommended.
The human-machine interface is generally via a computer mouse but can also be via a pen and digitizing graphics tablet. Manipulation of the view of the model on the screen is also sometimes done with the use of a Spacemouse/SpaceBall. Some systems also support stereoscopic glasses for viewing the 3D model. Technologies that in the past were limited to larger installations or specialist applications have become available to a wide group of users. These include the CAVE or HMDs and interactive devices like motion-sensing technology.
== Software ==
Starting with the IBM Drafting System in the mid-1960s, computer-aided design systems began to provide more capabilities than just the ability to reproduce manual drafting electronically, and the cost-benefit for companies to switch to CAD became apparent. The software automated many tasks that are taken for granted in computer systems today, such as automated generation of bills of materials, auto layout in integrated circuits, interference checking, and many others. Eventually, CAD provided the designer with the ability to perform engineering calculations. During this transition, calculations were still performed either by hand or by those individuals who could run computer programs. CAD was a revolutionary change in the engineering industry, where the previously separate roles of draftsman, designer, and engineer began to merge. CAD is an example of the pervasive effect computers were beginning to have on industry.
Current computer-aided design software packages range from 2D vector-based drafting systems to 3D solid and surface modelers. Modern CAD packages can also frequently allow rotations in three dimensions, allowing viewing of a designed object from any desired angle, even from the inside looking out. Some CAD software is capable of dynamic mathematical modeling.
CAD technology is used in the design of tools and machinery and in the drafting and design of all types of buildings, from small residential types (houses) to the largest commercial and industrial structures (hospitals and factories).
CAD is mainly used for detailed design of 3D models or 2D drawings of physical components, but it is also used throughout the engineering process from conceptual design and layout of products, through strength and dynamic analysis of assemblies to definition of manufacturing methods of components. It can also be used to design objects such as jewelry, furniture, appliances, etc. Furthermore, many CAD applications now offer advanced rendering and animation capabilities so engineers can better visualize their product designs. 4D BIM is a type of virtual construction engineering simulation incorporating time or schedule-related information for project management.
CAD has become an especially important technology within the scope of computer-aided technologies, with benefits such as lower product development costs and a greatly shortened design cycle. CAD enables designers to lay out and develop work on screen, print it out and save it for future editing, saving time on their drawings.
=== License management software ===
In the 2000s, some CAD system software vendors shipped their distributions with dedicated license manager software that controlled how often or how many users could utilize the CAD system. It could run either on a local machine (by loading from a local storage device) or a local network fileserver and, in the latter case, was usually tied to a specific IP address.
== List of software packages ==
CAD software enables engineers and architects to design, inspect and manage engineering projects within an integrated graphical user interface (GUI) on a personal computer system. Most applications support solid modeling with boundary representation (B-rep) and NURBS geometry, and enable models to be published in a variety of formats.
Based on market statistics, commercial software from Autodesk, Dassault Systèmes, Siemens PLM Software, and PTC dominates the CAD industry. The following is a list of major CAD applications, grouped by usage statistics.
=== Commercial software ===
ABViewer
AC3D
Alibre Design
ArchiCAD (Graphisoft)
AutoCAD (Autodesk)
AutoTURN
AxSTREAM
BricsCAD
CATIA (Dassault Systèmes)
Cobalt
CorelCAD
EAGLE
Fusion 360 (Autodesk)
IntelliCAD
Inventor (Autodesk)
IRONCAD
KeyCreator (Kubotek)
Landscape Express
MEDUSA4
MicroStation (Bentley Systems)
Modelur (AgiliCity)
Onshape (PTC)
NX (Siemens Digital Industries Software)
PTC Creo (successor to Pro/ENGINEER) (PTC)
PunchCAD
Remo 3D
Revit (Autodesk)
Rhinoceros 3D
SketchUp
Solid Edge (Siemens Digital Industries Software)
SOLIDWORKS (Dassault Systèmes)
SpaceClaim
T-FLEX CAD
TranslateCAD
TurboCAD
Vectorworks (Nemetschek)
=== Open-source software ===
Blender
BRL-CAD
FreeCAD
LibreCAD
LeoCAD
OpenSCAD
QCAD
Salome (software)
SolveSpace
=== Freeware ===
BricsCAD Shape
Tinkercad (successor to Autodesk 123D)
=== CAD kernels ===
ACIS (Spatial Corp, owned by Dassault Systèmes)
C3D Toolkit (C3D Labs)
Open CASCADE (open source)
Parasolid (Siemens Digital Industries Software)
ShapeManager (Autodesk)
== See also ==
== References ==
== External links ==
MIT 1982 CAD lab
Learning materials related to Computer-aided design at Wikiversity
Learning materials related to Computer-aided Geometric Design at Wikiversity | Wikipedia/Computer-aided_design |
In algebra, a quartic function is a function of the form
{\displaystyle f(x)=ax^{4}+bx^{3}+cx^{2}+dx+e,}
where a is nonzero, which is defined by a polynomial of degree four, called a quartic polynomial.
A quartic equation, or equation of the fourth degree, is an equation that equates a quartic polynomial to zero, of the form
{\displaystyle ax^{4}+bx^{3}+cx^{2}+dx+e=0,}
where a ≠ 0.
The derivative of a quartic function is a cubic function.
Sometimes the term biquadratic is used instead of quartic, but, usually, biquadratic function refers to a quadratic function of a square (or, equivalently, to the function defined by a quartic polynomial without terms of odd degree), having the form
{\displaystyle f(x)=ax^{4}+cx^{2}+e.}
Since a quartic function is defined by a polynomial of even degree, it has the same infinite limit when the argument goes to positive or negative infinity. If a is positive, then the function increases to positive infinity at both ends; and thus the function has a global minimum. Likewise, if a is negative, it decreases to negative infinity and has a global maximum. In both cases it may or may not have another local maximum and another local minimum.
The degree four (quartic case) is the highest degree such that every polynomial equation can be solved by radicals, according to the Abel–Ruffini theorem.
== History ==
Lodovico Ferrari is credited with the discovery of the solution to the quartic in 1540, but since this solution, like all algebraic solutions of the quartic, requires the solution of a cubic to be found, it could not be published immediately. The solution of the quartic was published together with that of the cubic by Ferrari's mentor Gerolamo Cardano in the book Ars Magna.
The proof that four is the highest degree of a general polynomial for which such solutions can be found was first given in the Abel–Ruffini theorem in 1824, proving that all attempts at solving the higher order polynomials would be futile. The notes left by Évariste Galois prior to dying in a duel in 1832 later led to an elegant complete theory of the roots of polynomials, of which this theorem was one result.
== Applications ==
Each coordinate of the intersection points of two conic sections is a solution of a quartic equation. The same is true for the intersection of a line and a torus. It follows that quartic equations often arise in computational geometry and all related fields such as computer graphics, computer-aided design, computer-aided manufacturing and optics. Here are examples of other geometric problems whose solution involves solving a quartic equation.
In computer-aided manufacturing, the torus is a shape that is commonly associated with the endmill cutter. To calculate its location relative to a triangulated surface, the position of a horizontal torus on the z-axis must be found where it is tangent to a fixed line, and this requires the solution of a general quartic equation to be calculated.
A quartic equation arises also in the process of solving the crossed ladders problem, in which the lengths of two crossed ladders, each based against one wall and leaning against another, are given along with the height at which they cross, and the distance between the walls is to be found.
In optics, Alhazen's problem is "Given a light source and a spherical mirror, find the point on the mirror where the light will be reflected to the eye of an observer." This leads to a quartic equation.
Finding the distance of closest approach of two ellipses involves solving a quartic equation.
The eigenvalues of a 4×4 matrix are the roots of a quartic polynomial which is the characteristic polynomial of the matrix.
The characteristic equation of a fourth-order linear difference equation or differential equation is a quartic equation. An example arises in the Timoshenko-Rayleigh theory of beam bending.
Intersections between spheres, cylinders, or other quadrics can be found using quartic equations.
== Inflection points and golden ratio ==
Letting F and G be the distinct inflection points of the graph of a quartic function, and letting H be the intersection of the inflection secant line FG and the quartic, nearer to G than to F, then G divides FH into the golden section:
{\displaystyle {\frac {FG}{GH}}={\frac {1+{\sqrt {5}}}{2}}=\varphi \;({\text{the golden ratio}}).}
Moreover, the area of the region between the secant line and the quartic below the secant line equals the area of the region between the secant line and the quartic above the secant line. One of those regions is disjointed into sub-regions of equal area.
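The golden-section property can be checked numerically on a concrete example. The quartic f(x) = x⁴ − 6x² below is my own choice (not from the source): its second derivative 12x² − 12 vanishes at x = ±1, giving inflection points F = (−1, −5) and G = (1, −5), and the secant line y = −5 meets the quartic again at x = ±√5, with H at x = √5.

```python
import math

# Numeric check of the golden-section property for f(x) = x^4 - 6x^2.
# Inflection points: F = (-1, -5), G = (1, -5); the secant FG is the
# line y = -5, which meets the quartic again where x^4 - 6x^2 + 5 = 0,
# i.e. at x = ±sqrt(5).  H, the crossing nearer to G, is at x = sqrt(5).

def f(x):
    return x**4 - 6 * x**2

xF, xG, xH = -1.0, 1.0, math.sqrt(5)
for x in (xF, xG, xH):
    assert abs(f(x) - (-5)) < 1e-12   # all three lie on the secant y = -5

FG = xG - xF              # = 2
GH = xH - xG              # = sqrt(5) - 1
phi = (1 + math.sqrt(5)) / 2
print(FG / GH)            # ≈ 1.618..., the golden ratio
assert abs(FG / GH - phi) < 1e-12
```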
== Solution ==
=== Nature of the roots ===
Given the general quartic equation
{\displaystyle ax^{4}+bx^{3}+cx^{2}+dx+e=0}
with real coefficients and a ≠ 0 the nature of its roots is mainly determined by the sign of its discriminant
{\displaystyle {\begin{aligned}\Delta ={}&256a^{3}e^{3}-192a^{2}bde^{2}-128a^{2}c^{2}e^{2}+144a^{2}cd^{2}e-27a^{2}d^{4}\\&+144ab^{2}ce^{2}-6ab^{2}d^{2}e-80abc^{2}de+18abcd^{3}+16ac^{4}e\\&-4ac^{3}d^{2}-27b^{4}e^{2}+18b^{3}cde-4b^{3}d^{3}-4b^{2}c^{3}e+b^{2}c^{2}d^{2}\end{aligned}}}
This may be refined by considering the signs of four other polynomials:
{\displaystyle P=8ac-3b^{2}}
such that P/8a² is the second degree coefficient of the associated depressed quartic (see below);
{\displaystyle R=b^{3}+8da^{2}-4abc,}
such that R/8a³ is the first degree coefficient of the associated depressed quartic;
{\displaystyle \Delta _{0}=c^{2}-3bd+12ae,}
which is 0 if the quartic has a triple root; and
{\displaystyle D=64a^{3}e-16a^{2}c^{2}+16ab^{2}c-16a^{2}bd-3b^{4}}
which is 0 if the quartic has two double roots.
The possible cases for the nature of the roots are as follows:
If ∆ < 0 then the equation has two distinct real roots and two complex conjugate non-real roots.
If ∆ > 0 then either the equation's four roots are all real or none is.
If P < 0 and D < 0 then all four roots are real and distinct.
If P > 0 or D > 0 then there are two pairs of non-real complex conjugate roots.
If ∆ = 0 then (and only then) the polynomial has a multiple root. Here are the different cases that can occur:
If P < 0 and D < 0 and ∆0 ≠ 0, there are a real double root and two real simple roots.
If D > 0 or (P > 0 and (D ≠ 0 or R ≠ 0)), there are a real double root and two complex conjugate roots.
If ∆0 = 0 and D ≠ 0, there are a triple root and a simple root, all real.
If D = 0, then:
If P < 0, there are two real double roots.
If P > 0 and R = 0, there are two complex conjugate double roots.
If ∆0 = 0, all four roots are equal to −b/4a
There are some cases that do not seem to be covered, but in fact they cannot occur. For example, ∆0 > 0, P = 0 and D ≤ 0 is not a possible case. In fact, if ∆0 > 0 and P = 0 then D > 0, since
{\displaystyle 16a^{2}\Delta _{0}=3D+P^{2};}
so this combination is not possible.
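The case analysis above is mechanical enough to turn into a small classifier. The sketch below (the function name classify_quartic is my own, and only the distinct-root cases are spelled out; Δ = 0 is just reported, not subdivided) computes Δ via the identity Δ₁² − 4Δ₀³ = −27Δ along with P and D:

```python
def classify_quartic(a, b, c, d, e):
    """Classify the roots of a x^4 + b x^3 + c x^2 + d x + e (real a != 0)
    from the signs of the discriminant and the auxiliary polynomials
    P and D, following the case analysis above (distinct-root cases)."""
    D0 = c**2 - 3*b*d + 12*a*e
    D1 = 2*c**3 - 9*b*c*d + 27*b**2*e + 27*a*d**2 - 72*a*c*e
    disc = (D1**2 - 4*D0**3) / -27          # Δ, via Δ1² − 4Δ0³ = −27Δ
    P = 8*a*c - 3*b**2
    D = 64*a**3*e - 16*a**2*c**2 + 16*a*b**2*c - 16*a**2*b*d - 3*b**4
    if disc < 0:
        return "two real roots, two complex conjugate roots"
    if disc > 0:
        if P < 0 and D < 0:
            return "four distinct real roots"
        return "two pairs of complex conjugate roots"
    return "multiple root (Δ = 0)"

# (x-1)(x-2)(x-3)(x-4) = x^4 - 10x^3 + 35x^2 - 50x + 24: real, distinct
print(classify_quartic(1, -10, 35, -50, 24))   # four distinct real roots
# x^4 + 1 has no real roots
print(classify_quartic(1, 0, 0, 0, 1))         # two pairs of complex conjugate roots
# x^4 - 1 has roots ±1, ±i
print(classify_quartic(1, 0, 0, 0, -1))        # two real roots, two complex conjugate roots
```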
=== General formula for roots ===
The four roots x1, x2, x3, and x4 for the general quartic equation
{\displaystyle ax^{4}+bx^{3}+cx^{2}+dx+e=0}
with a ≠ 0 are given in the following formula, which is deduced from the one in the section on Ferrari's method by back changing the variables (see § Converting to a depressed quartic) and using the formulas for the quadratic and cubic equations.
{\displaystyle {\begin{aligned}x_{1,2}\ &=-{\frac {b}{4a}}-S\pm {\frac {1}{2}}{\sqrt {-4S^{2}-2p+{\frac {q}{S}}}}\\x_{3,4}\ &=-{\frac {b}{4a}}+S\pm {\frac {1}{2}}{\sqrt {-4S^{2}-2p-{\frac {q}{S}}}}\end{aligned}}}
where p and q are the coefficients of the second and of the first degree respectively in the associated depressed quartic
{\displaystyle {\begin{aligned}p&={\frac {8ac-3b^{2}}{8a^{2}}}\\q&={\frac {b^{3}-4abc+8a^{2}d}{8a^{3}}}\end{aligned}}}
and where
{\displaystyle {\begin{aligned}S&={\frac {1}{2}}{\sqrt {-{\frac {2}{3}}\ p+{\frac {1}{3a}}\left(Q+{\frac {\Delta _{0}}{Q}}\right)}}\\Q&={\sqrt[{3}]{\frac {\Delta _{1}+{\sqrt {\Delta _{1}^{2}-4\Delta _{0}^{3}}}}{2}}}\end{aligned}}}
(if S = 0 or Q = 0, see § Special cases of the formula, below)
with
{\displaystyle {\begin{aligned}\Delta _{0}&=c^{2}-3bd+12ae\\\Delta _{1}&=2c^{3}-9bcd+27b^{2}e+27ad^{2}-72ace\end{aligned}}}
and
{\displaystyle \Delta _{1}^{2}-4\Delta _{0}^{3}=-27\Delta \ ,}
where Δ is the aforementioned discriminant. For the cube root expression for Q, any of the three cube roots in the complex plane can be used, although if one of them is real that is the natural and simplest one to choose. The mathematical expressions of these last four terms are very similar to those of their cubic counterparts.
==== Special cases of the formula ====
If Δ > 0, the value of Q is a non-real complex number. In this case, either all roots are non-real or they are all real. In the latter case, the value of S is also real, despite being expressed in terms of Q; this is casus irreducibilis of the cubic function extended to the present context of the quartic. One may prefer to express it in a purely real way, by using trigonometric functions, as follows:
{\displaystyle S={\frac {1}{2}}{\sqrt {-{\frac {2}{3}}\ p+{\frac {2}{3a}}{\sqrt {\Delta _{0}}}\cos {\frac {\varphi }{3}}}}}
where
{\displaystyle \varphi =\arccos \left({\frac {\Delta _{1}}{2{\sqrt {\Delta _{0}^{3}}}}}\right).}
If Δ ≠ 0 and Δ0 = 0, the sign of √(Δ1² − 4Δ0³) = √(Δ1²) has to be chosen to have Q ≠ 0; that is, one should define √(Δ1²) as Δ1, maintaining the sign of Δ1.
If S = 0, then one must change the choice of the cube root in Q in order to have S ≠ 0. This is always possible except if the quartic may be factored into (x + b/4a)⁴. The result is then correct, but misleading because it hides the fact that no cube root is needed in this case. In fact this case may occur only if the numerator of q is zero, in which case the associated depressed quartic is biquadratic; it may thus be solved by the method described below.
If Δ = 0 and Δ0 = 0 (and thus also Δ1 = 0), at least three roots are equal to each other, and the roots are rational functions of the coefficients. The triple root x0 is a common root of the quartic and its second derivative 2(6ax² + 3bx + c); it is thus also the unique root of the remainder of the Euclidean division of the quartic by its second derivative, which is a linear polynomial. The simple root x1 can be deduced from x1 + 3x0 = −b/a.
If Δ = 0 and Δ0 ≠ 0, the above expression for the roots is correct but misleading, hiding the fact that the polynomial is reducible and no cube root is needed to represent the roots.
=== Simpler cases ===
==== Reducible quartics ====
Consider the general quartic
{\displaystyle Q(x)=a_{4}x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}.}
It is reducible if Q(x) = R(x)×S(x), where R(x) and S(x) are non-constant polynomials with rational coefficients (or more generally with coefficients in the same field as the coefficients of Q(x)). Such a factorization will take one of two forms:
{\displaystyle Q(x)=(x-x_{1})(b_{3}x^{3}+b_{2}x^{2}+b_{1}x+b_{0})}
or
{\displaystyle Q(x)=(c_{2}x^{2}+c_{1}x+c_{0})(d_{2}x^{2}+d_{1}x+d_{0}).}
In either case, the roots of Q(x) are the roots of the factors, which may be computed using the formulas for the roots of a quadratic function or cubic function.
Detecting the existence of such factorizations can be done using the resolvent cubic of Q(x). It turns out that:
if we are working over R (that is, if coefficients are restricted to be real numbers) (or, more generally, over some real closed field) then there is always such a factorization;
if we are working over Q (that is, if coefficients are restricted to be rational numbers) then there is an algorithm to determine whether or not Q(x) is reducible and, if it is, how to express it as a product of polynomials of smaller degree.
In fact, several methods of solving quartic equations (Ferrari's method, Descartes' method, and, to a lesser extent, Euler's method) are based upon finding such factorizations.
==== Biquadratic equation ====
If a3 = a1 = 0 then the function
{\displaystyle Q(x)=a_{4}x^{4}+a_{2}x^{2}+a_{0}}
is called a biquadratic function; equating it to zero defines a biquadratic equation, which is easy to solve as follows
Let the auxiliary variable z = x².
Then Q(x) becomes a quadratic q in z: q(z) = a4z² + a2z + a0. Let z+ and z− be the roots of q(z). Then the roots of the quartic Q(x) are
{\displaystyle {\begin{aligned}x_{1}&=+{\sqrt {z_{+}}},\\x_{2}&=-{\sqrt {z_{+}}},\\x_{3}&=+{\sqrt {z_{-}}},\\x_{4}&=-{\sqrt {z_{-}}}.\end{aligned}}}
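The substitution takes only a few lines to implement. The following Python sketch (solve_biquadratic is an illustrative name, not a library function) uses complex arithmetic so that non-real roots come out as well:

```python
import cmath

def solve_biquadratic(a4, a2, a0):
    """Roots of a4*x^4 + a2*x^2 + a0 = 0 via the substitution z = x^2:
    solve the quadratic in z, then take ± square roots of each z root.
    Complex arithmetic, so non-real roots are returned too."""
    disc = cmath.sqrt(a2**2 - 4 * a4 * a0)
    z_plus = (-a2 + disc) / (2 * a4)
    z_minus = (-a2 - disc) / (2 * a4)
    return [cmath.sqrt(z_plus), -cmath.sqrt(z_plus),
            cmath.sqrt(z_minus), -cmath.sqrt(z_minus)]

# x^4 - 5x^2 + 4 = (x^2 - 1)(x^2 - 4): roots ±1, ±2
roots = solve_biquadratic(1, -5, 4)
print(sorted(r.real for r in roots))   # [-2.0, -1.0, 1.0, 2.0]
```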
==== Quasi-palindromic equation ====
The polynomial
{\displaystyle P(x)=a_{0}x^{4}+a_{1}x^{3}+a_{2}x^{2}+a_{1}mx+a_{0}m^{2}}
is almost palindromic, as P(mx) = (x⁴/m²)P(m/x) (it is palindromic if m = 1). The change of variables z = x + m/x in P(x)/x² = 0 produces the quadratic equation a0z² + a1z + a2 − 2ma0 = 0. Since x² − xz + m = 0, the quartic equation P(x) = 0 may be solved by applying the quadratic formula twice.
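The two quadratic steps can be sketched directly (solve_quasi_palindromic is an illustrative name; complex arithmetic is used for generality):

```python
import cmath

def solve_quasi_palindromic(a0, a1, a2, m):
    """Roots of a0*x^4 + a1*x^3 + a2*x^2 + a1*m*x + a0*m^2 = 0 via the
    substitution z = x + m/x: first solve a0*z^2 + a1*z + (a2 - 2*m*a0) = 0,
    then x^2 - z*x + m = 0 for each root z."""
    roots = []
    dz = cmath.sqrt(a1**2 - 4 * a0 * (a2 - 2 * m * a0))
    for z in ((-a1 + dz) / (2 * a0), (-a1 - dz) / (2 * a0)):
        dx = cmath.sqrt(z**2 - 4 * m)
        roots += [(z + dx) / 2, (z - dx) / 2]
    return roots

# (x^2 - 3x + 2)(x^2 + 2) = x^4 - 3x^3 + 4x^2 - 6x + 4 has the
# quasi-palindromic form with a0 = 1, a1 = -3, a2 = 4, m = 2;
# its roots are 1, 2 and ±i√2.
roots = solve_quasi_palindromic(1, -3, 4, 2)
print(sorted(r.real for r in roots if abs(r.imag) < 1e-9))   # [1.0, 2.0]
```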
=== Solution methods ===
==== Converting to a depressed quartic ====
For solving purposes, it is generally better to convert the quartic into a depressed quartic by the following simple change of variable. All formulas are simpler and some methods work only in this case. The roots of the original quartic are easily recovered from that of the depressed quartic by the reverse change of variable.
Let
{\displaystyle a_{4}x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}=0}
be the general quartic equation we want to solve.
Dividing by a4 provides the equivalent equation x⁴ + bx³ + cx² + dx + e = 0, with b = a3/a4, c = a2/a4, d = a1/a4, and e = a0/a4.
Substituting y − b/4 for x gives, after regrouping the terms, the equation y⁴ + py² + qy + r = 0,
where
{\displaystyle {\begin{aligned}p&={\frac {8c-3b^{2}}{8}}={\frac {8a_{2}a_{4}-3{a_{3}}^{2}}{8{a_{4}}^{2}}}\\q&={\frac {b^{3}-4bc+8d}{8}}={\frac {{a_{3}}^{3}-4a_{2}a_{3}a_{4}+8a_{1}{a_{4}}^{2}}{8{a_{4}}^{3}}}\\r&={\frac {-3b^{4}+256e-64bd+16b^{2}c}{256}}={\frac {-3{a_{3}}^{4}+256a_{0}{a_{4}}^{3}-64a_{1}a_{3}{a_{4}}^{2}+16a_{2}{a_{3}}^{2}a_{4}}{256{a_{4}}^{4}}}.\end{aligned}}}
If y0 is a root of this depressed quartic, then y0 − b/4 (that is y0 − a3/4a4) is a root of the original quartic and every root of the original quartic can be obtained by this process.
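The conversion is easy to check numerically. The sketch below (depress_quartic is an illustrative name) applies the formulas for p, q, and r above:

```python
def depress_quartic(a4, a3, a2, a1, a0):
    """Return (p, q, r) for the depressed quartic y^4 + p y^2 + q y + r
    obtained from a4 x^4 + ... + a0 via the substitution x = y - a3/(4 a4),
    using the formulas above."""
    b, c, d, e = a3 / a4, a2 / a4, a1 / a4, a0 / a4
    p = (8 * c - 3 * b**2) / 8
    q = (b**3 - 4 * b * c + 8 * d) / 8
    r = (-3 * b**4 + 256 * e - 64 * b * d + 16 * b**2 * c) / 256
    return p, q, r

# x^4 + 4x^3 + 2x^2 - 4x - 3 = (x-1)(x+1)^2(x+3) has roots 1, -1, -1, -3;
# shifting by b/4 = 1 should give roots 2, 0, 0, -2, i.e. the depressed
# quartic y^4 - 4y^2, so (p, q, r) = (-4, 0, 0).
print(depress_quartic(1, 4, 2, -4, -3))   # (-4.0, 0.0, 0.0)
```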
==== Ferrari's solution ====
As explained in the preceding section, we may start with the depressed quartic equation
{\displaystyle y^{4}+py^{2}+qy+r=0.}
This depressed quartic can be solved by means of a method discovered by Lodovico Ferrari. The depressed equation may be rewritten (this is easily verified by expanding the square and regrouping all terms in the left-hand side) as
{\displaystyle \left(y^{2}+{\frac {p}{2}}\right)^{2}=-qy-r+{\frac {p^{2}}{4}}.}
Then, we introduce a variable m into the factor on the left-hand side by adding 2y²m + pm + m² to both sides. After regrouping the coefficients of the power of y on the right-hand side, this gives the equation
{\displaystyle \left(y^{2}+{\frac {p}{2}}+m\right)^{2}=2my^{2}-qy+m^{2}+mp+{\frac {p^{2}}{4}}-r,\qquad (1)}
which is equivalent to the original equation, whichever value is given to m.
As the value of m may be arbitrarily chosen, we will choose it in order to complete the square on the right-hand side. This implies that the discriminant in y of this quadratic equation is zero, that is m is a root of the equation
{\displaystyle (-q)^{2}-4(2m)\left(m^{2}+pm+{\frac {p^{2}}{4}}-r\right)=0,}
which may be rewritten as
{\displaystyle 8m^{3}+8pm^{2}+(2p^{2}-8r)m-q^{2}=0.}
This is the resolvent cubic of the quartic equation. The value of m may thus be obtained from Cardano's formula. When m is a root of this equation, the right-hand side of equation (1) is the square
{\displaystyle \left({\sqrt {2m}}y-{\frac {q}{2{\sqrt {2m}}}}\right)^{2}.}
However, this induces a division by zero if m = 0. This implies q = 0, and thus that the depressed equation is bi-quadratic, and may be solved by an easier method (see above). This was not a problem at the time of Ferrari, when one solved only explicitly given equations with numeric coefficients. For a general formula that is always true, one thus needs to choose a root of the cubic equation such that m ≠ 0. This is always possible except for the depressed equation y4 = 0.
Now, if m is a root of the cubic equation such that m ≠ 0, equation (1) becomes
{\displaystyle \left(y^{2}+{\frac {p}{2}}+m\right)^{2}=\left(y{\sqrt {2m}}-{\frac {q}{2{\sqrt {2m}}}}\right)^{2}.}
This equation is of the form M2 = N2, which can be rearranged as M2 − N2 = 0 or (M + N)(M − N) = 0. Therefore, equation (1) may be rewritten as
{\displaystyle \left(y^{2}+{\frac {p}{2}}+m+{\sqrt {2m}}y-{\frac {q}{2{\sqrt {2m}}}}\right)\left(y^{2}+{\frac {p}{2}}+m-{\sqrt {2m}}y+{\frac {q}{2{\sqrt {2m}}}}\right)=0.}
This equation is easily solved by applying to each factor the quadratic formula. Solving them we may write the four roots as
{\displaystyle y={\pm _{1}{\sqrt {2m}}\pm _{2}{\sqrt {-\left(2p+2m\pm _{1}{{\sqrt {2}}q \over {\sqrt {m}}}\right)}} \over 2},}
where ±1 and ±2 denote either + or −. As the two occurrences of ±1 must denote the same sign, this leaves four possibilities, one for each root.
Therefore, the solutions of the original quartic equation are
{\displaystyle x=-{a_{3} \over 4a_{4}}+{\pm _{1}{\sqrt {2m}}\pm _{2}{\sqrt {-\left(2p+2m\pm _{1}{{\sqrt {2}}q \over {\sqrt {m}}}\right)}} \over 2}.}
A comparison with the general formula above shows that √2m = 2S.
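Ferrari's procedure can be sketched numerically. The following Python sketch (not part of the source; the function name and tolerance are my own) solves the resolvent cubic 8m³ + 8pm² + (2p² − 8r)m − q² = 0 with `numpy.roots`, picks a nonzero root m, and assembles the four roots of the depressed quartic from the closed-form expression above:

```python
import numpy as np

def ferrari_depressed(p, q, r):
    """Roots of y^4 + p*y^2 + q*y + r = 0 by Ferrari's method.

    Solves the resolvent cubic 8m^3 + 8p*m^2 + (2p^2 - 8r)m - q^2 = 0,
    picks a nonzero root m, and plugs it into the closed-form roots."""
    ms = np.roots([8, 8 * p, 2 * p**2 - 8 * r, -q**2])
    m = complex(next(z for z in ms if abs(z) > 1e-12))  # need m != 0
    roots = []
    for s1 in (1, -1):
        # inner square root: -(2p + 2m +/- sqrt(2)*q/sqrt(m))
        inner = np.sqrt(-(2 * p + 2 * m + s1 * np.sqrt(2 + 0j) * q / np.sqrt(m)))
        for s2 in (1, -1):
            roots.append((s1 * np.sqrt(2 * m) + s2 * inner) / 2)
    return roots
```

For example, y⁴ − 7y² + 6y = 0 has roots 0, 1, 2, −3, and any of the three nonzero roots of its resolvent cubic recovers them, as the theory predicts.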
==== Descartes' solution ====
Descartes introduced in 1637 the method of finding the roots of a quartic polynomial by factoring it into two quadratic ones. Let
{\displaystyle {\begin{aligned}x^{4}+bx^{3}+cx^{2}+dx+e&=(x^{2}+sx+t)(x^{2}+ux+v)\\&=x^{4}+(s+u)x^{3}+(t+v+su)x^{2}+(sv+tu)x+tv\end{aligned}}}
By equating coefficients, this results in the following system of equations:
{\displaystyle \left\{{\begin{array}{l}b=s+u\\c=t+v+su\\d=sv+tu\\e=tv\end{array}}\right.}
This can be simplified by starting again with the depressed quartic y4 + py2 + qy + r, which can be obtained by substituting y − b/4 for x. Since the coefficient of y3 is 0, we get s = −u, and:
{\displaystyle \left\{{\begin{array}{l}p+u^{2}=t+v\\q=u(t-v)\\r=tv\end{array}}\right.}
One can now eliminate both t and v by doing the following:
{\displaystyle {\begin{aligned}u^{2}(p+u^{2})^{2}-q^{2}&=u^{2}(t+v)^{2}-u^{2}(t-v)^{2}\\&=u^{2}[(t+v+(t-v))(t+v-(t-v))]\\&=u^{2}(2t)(2v)\\&=4u^{2}tv\\&=4u^{2}r\end{aligned}}}
If we set U = u2, then solving this equation becomes finding the roots of the resolvent cubic
{\displaystyle U^{3}+2pU^{2}+(p^{2}-4r)U-q^{2}=0,\qquad (2)}
which is done elsewhere. This resolvent cubic is equivalent to the resolvent cubic given above (equation (1a)), as can be seen by substituting U = 2m.
If u is a square root of a non-zero root of this resolvent (such a non-zero root exists except for the quartic x4, which is trivially factored),
{\displaystyle \left\{{\begin{array}{l}s=-u\\2t=p+u^{2}+q/u\\2v=p+u^{2}-q/u\end{array}}\right.}
The symmetries in this solution are as follows. There are three roots of the cubic, corresponding to the three ways that a quartic can be factored into two quadratics, and choosing positive or negative values of u for the square root of U merely exchanges the two quadratics with one another.
The above solution shows that a quartic polynomial with rational coefficients and a zero coefficient on the cubic term is factorable into quadratics with rational coefficients if and only if either the resolvent cubic (2) has a non-zero root which is the square of a rational, or p2 − 4r is the square of a rational and q = 0; this can readily be checked using the rational root test.
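Descartes' factorization can likewise be sketched in Python (an illustrative sketch, not from the source: the function name is mine, and the resolvent cubic U³ + 2pU² + (p² − 4r)U − q² = 0 is the one derived above with U = u²):

```python
import numpy as np

def descartes_factor(p, q, r):
    """Factor y^4 + p*y^2 + q*y + r as (y^2 - u*y + t)(y^2 + u*y + v).

    U = u^2 is taken as any nonzero root of the resolvent cubic
    U^3 + 2p*U^2 + (p^2 - 4r)U - q^2 = 0; then s = -u, and t, v follow
    from 2t = p + u^2 + q/u and 2v = p + u^2 - q/u."""
    U = next(z for z in np.roots([1, 2 * p, p**2 - 4 * r, -q**2]) if abs(z) > 1e-12)
    u = np.sqrt(complex(U))
    t = (p + u**2 + q / u) / 2
    v = (p + u**2 - q / u) / 2
    return u, t, v
```

Choosing −u instead of u merely exchanges t and v, i.e. exchanges the two quadratic factors, matching the symmetry remark above.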
==== Euler's solution ====
A variant of the previous method is due to Euler. Unlike the previous methods, both of which use some root of the resolvent cubic, Euler's method uses all of them. Consider a depressed quartic x4 + px2 + qx + r. Observe that, if
x4 + px2 + qx + r = (x2 + sx + t)(x2 − sx + v),
r1 and r2 are the roots of x2 + sx + t,
r3 and r4 are the roots of x2 − sx + v,
then
the roots of x4 + px2 + qx + r are r1, r2, r3, and r4,
r1 + r2 = −s,
r3 + r4 = s.
Therefore, (r1 + r2)(r3 + r4) = −s2. In other words, −(r1 + r2)(r3 + r4) is one of the roots of the resolvent cubic (2) and this suggests that the roots of that cubic are equal to −(r1 + r2)(r3 + r4), −(r1 + r3)(r2 + r4), and −(r1 + r4)(r2 + r3). This is indeed true and it follows from Vieta's formulas. It also follows from Vieta's formulas, together with the fact that we are working with a depressed quartic, that r1 + r2 + r3 + r4 = 0. (Of course, this also follows from the fact that r1 + r2 + r3 + r4 = −s + s.) Therefore, if α, β, and γ are the roots of the resolvent cubic, then the numbers r1, r2, r3, and r4 are such that
{\displaystyle \left\{{\begin{array}{l}r_{1}+r_{2}+r_{3}+r_{4}=0\\(r_{1}+r_{2})(r_{3}+r_{4})=-\alpha \\(r_{1}+r_{3})(r_{2}+r_{4})=-\beta \\(r_{1}+r_{4})(r_{2}+r_{3})=-\gamma {\text{.}}\end{array}}\right.}
It is a consequence of the first two equations that r1 + r2 is a square root of α and that r3 + r4 is the other square root of α. For the same reason,
r1 + r3 is a square root of β,
r2 + r4 is the other square root of β,
r1 + r4 is a square root of γ,
r2 + r3 is the other square root of γ.
Therefore, the numbers r1, r2, r3, and r4 are such that
{\displaystyle \left\{{\begin{array}{l}r_{1}+r_{2}+r_{3}+r_{4}=0\\r_{1}+r_{2}={\sqrt {\alpha }}\\r_{1}+r_{3}={\sqrt {\beta }}\\r_{1}+r_{4}={\sqrt {\gamma }}{\text{;}}\end{array}}\right.}
the sign of the square roots will be dealt with below. The only solution of this system is:
{\displaystyle \left\{{\begin{array}{l}r_{1}={\frac {{\sqrt {\alpha }}+{\sqrt {\beta }}+{\sqrt {\gamma }}}{2}}\\[2mm]r_{2}={\frac {{\sqrt {\alpha }}-{\sqrt {\beta }}-{\sqrt {\gamma }}}{2}}\\[2mm]r_{3}={\frac {-{\sqrt {\alpha }}+{\sqrt {\beta }}-{\sqrt {\gamma }}}{2}}\\[2mm]r_{4}={\frac {-{\sqrt {\alpha }}-{\sqrt {\beta }}+{\sqrt {\gamma }}}{2}}{\text{.}}\end{array}}\right.}
Since, in general, there are two choices for each square root, it might look as if this provides 8 (= 23) choices for the set {r1, r2, r3, r4}, but, in fact, it provides no more than 2 such choices, because the consequence of replacing one of the square roots by the symmetric one is that the set {r1, r2, r3, r4} becomes the set {−r1, −r2, −r3, −r4}.
In order to determine the right sign of the square roots, one simply chooses some square root for each of the numbers α, β, and γ and uses them to compute the numbers r1, r2, r3, and r4 from the previous equalities. Then, one computes the number √α√β√γ. Since α, β, and γ are the roots of (2), it is a consequence of Vieta's formulas that their product is equal to q2 and therefore that √α√β√γ = ±q. But a straightforward computation shows that
√α√β√γ = r1r2r3 + r1r2r4 + r1r3r4 + r2r3r4.
If this number is −q, then the choice of the square roots was a good one (again, by Vieta's formulas); otherwise, the roots of the polynomial will be −r1, −r2, −r3, and −r4, which are the numbers obtained if one of the square roots is replaced by the symmetric one (or, what amounts to the same thing, if each of the three square roots is replaced by the symmetric one).
This argument suggests another way of choosing the square roots:
pick any square root √α of α and any square root √β of β;
define √γ as
{\displaystyle -{\frac {q}{{\sqrt {\alpha }}{\sqrt {\beta }}}}}.
Of course, this will make no sense if α or β is equal to 0, but 0 is a root of (2) only when q = 0, that is, only when we are dealing with a biquadratic equation, in which case there is a much simpler approach.
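The sign rule just described translates directly into a numerical sketch (again an unofficial Python illustration; the function name is mine, and the resolvent cubic is written as U³ + 2pU² + (p² − 4r)U − q² = 0, whose roots multiply to q²):

```python
import numpy as np

def euler_quartic(p, q, r):
    """Roots of y^4 + p*y^2 + q*y + r = 0 (q != 0) by Euler's method.

    alpha, beta, gamma are the roots of the resolvent cubic
    U^3 + 2p*U^2 + (p^2 - 4r)U - q^2 = 0.  Any square roots of alpha and
    beta may be picked; the third square root is then forced by
    sqrt(alpha)*sqrt(beta)*sqrt(gamma) = -q."""
    alpha, beta, gamma = np.roots([1, 2 * p, p**2 - 4 * r, -q**2])
    sa, sb = np.sqrt(complex(alpha)), np.sqrt(complex(beta))
    sg = -q / (sa * sb)  # squares to gamma, with the sign fixed correctly
    return [(sa + sb + sg) / 2, (sa - sb - sg) / 2,
            (-sa + sb - sg) / 2, (-sa - sb + sg) / 2]
```

Permuting α, β, γ or flipping the chosen square roots only permutes or negates-and-restores the same four roots, as the symmetry discussion above explains.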
==== Solving by Lagrange resolvent ====
The symmetric group S4 on four elements has the Klein four-group as a normal subgroup. This suggests using a resolvent cubic whose roots may be variously described as a discrete Fourier transform or a Hadamard matrix transform of the roots; see Lagrange resolvents for the general method. Denote by xi, for i from 0 to 3, the four roots of x4 + bx3 + cx2 + dx + e. If we set
{\displaystyle {\begin{aligned}s_{0}&={\tfrac {1}{2}}(x_{0}+x_{1}+x_{2}+x_{3}),\\[4pt]s_{1}&={\tfrac {1}{2}}(x_{0}-x_{1}+x_{2}-x_{3}),\\[4pt]s_{2}&={\tfrac {1}{2}}(x_{0}+x_{1}-x_{2}-x_{3}),\\[4pt]s_{3}&={\tfrac {1}{2}}(x_{0}-x_{1}-x_{2}+x_{3}),\end{aligned}}}
then since the transformation is an involution we may express the roots in terms of the four si in exactly the same way. Since we know the value s0 = −b/2, we only need the values for s1, s2 and s3. These are the roots of the polynomial
{\displaystyle (s^{2}-{s_{1}}^{2})(s^{2}-{s_{2}}^{2})(s^{2}-{s_{3}}^{2}).}
Substituting the si by their values in terms of the xi, this polynomial may be expanded into a polynomial in s whose coefficients are symmetric polynomials in the xi. By the fundamental theorem of symmetric polynomials, these coefficients may be expressed as polynomials in the coefficients of the monic quartic. If, for simplification, we suppose that the quartic is depressed, that is b = 0, this results in the polynomial
{\displaystyle s^{6}+2cs^{4}+(c^{2}-4e)s^{2}-d^{2}.\qquad (3)}
This polynomial is of degree six, but only of degree three in s2, and so the corresponding equation is solvable by the method described in the article about cubic function. By substituting the roots in the expression of the xi in terms of the si, we obtain expressions for the roots. In fact we obtain, apparently, several expressions, depending on the numbering of the roots of the cubic polynomial and of the signs given to their square roots. All these different expressions may be deduced from one of them by simply changing the numbering of the xi.
These expressions are unnecessarily complicated, involving the cubic roots of unity, which can be avoided as follows. If s is any non-zero root of (3), and if we set
{\displaystyle {\begin{aligned}F_{1}(x)&=x^{2}+sx+{\frac {c}{2}}+{\frac {s^{2}}{2}}-{\frac {d}{2s}}\\F_{2}(x)&=x^{2}-sx+{\frac {c}{2}}+{\frac {s^{2}}{2}}+{\frac {d}{2s}}\end{aligned}}}
then
{\displaystyle F_{1}(x)\times F_{2}(x)=x^{4}+cx^{2}+dx+e.}
We therefore can solve the quartic by solving for s and then solving for the roots of the two factors using the quadratic formula.
This gives exactly the same formula for the roots as the one provided by Descartes' method.
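The factorization F₁ × F₂ can be checked numerically. In this Python sketch (not from the source; function name mine) s is obtained by solving the cubic in S = s², and the two quadratic factors are returned as coefficient lists:

```python
import numpy as np

def lagrange_factors(c, d, e):
    """Quadratic factors F1, F2 of the depressed quartic x^4 + c*x^2 + d*x + e.

    s is any nonzero root of s^6 + 2c*s^4 + (c^2 - 4e)*s^2 - d^2 = 0,
    found here by solving the cubic in S = s^2 and taking a square root."""
    S = next(z for z in np.roots([1, 2 * c, c**2 - 4 * e, -d**2]) if abs(z) > 1e-12)
    s = np.sqrt(complex(S))
    F1 = [1, s, c / 2 + s**2 / 2 - d / (2 * s)]   # x^2 + s*x + c/2 + s^2/2 - d/(2s)
    F2 = [1, -s, c / 2 + s**2 / 2 + d / (2 * s)]  # x^2 - s*x + c/2 + s^2/2 + d/(2s)
    return F1, F2
```

Multiplying the two factors with `numpy.polymul` recovers the original quartic's coefficients, which is exactly the identity F₁(x) × F₂(x) = x⁴ + cx² + dx + e stated above.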
==== Solving with algebraic geometry ====
There is an alternative solution using algebraic geometry. In brief, one interprets the roots as the intersection of two quadratic curves, then finds the three reducible quadratic curves (pairs of lines) that pass through these points (this corresponds to the resolvent cubic, the pairs of lines being the Lagrange resolvents), and then uses these linear equations to solve the quadratic.
The four roots of the depressed quartic x4 + px2 + qx + r = 0 may also be expressed as the x coordinates of the intersections of the two quadratic equations y2 + py + qx + r = 0 and y − x2 = 0, i.e., using the substitution y = x2. That two quadratics intersect in four points is an instance of Bézout's theorem. Explicitly, the four points are Pi ≔ (xi, xi2) for the four roots xi of the quartic.
These four points are not collinear because they lie on the irreducible quadratic y = x2 and thus there is a 1-parameter family of quadratics (a pencil of curves) passing through these points. Writing the projectivization of the two quadratics as quadratic forms in three variables:
{\displaystyle {\begin{aligned}F_{1}(X,Y,Z)&:=Y^{2}+pYZ+qXZ+rZ^{2},\\F_{2}(X,Y,Z)&:=YZ-X^{2}\end{aligned}}}
the pencil is given by the forms λF1 + μF2 for any point [λ, μ] in the projective line — in other words, where λ and μ are not both zero, and multiplying a quadratic form by a constant does not change its quadratic curve of zeros.
This pencil contains three reducible quadratics, each corresponding to a pair of lines, each passing through two of the four points, which can be done
{\displaystyle \textstyle {\binom {4}{2}}} = 6 different ways. Denote these Q1 = L12 + L34, Q2 = L13 + L24, and Q3 = L14 + L23. Given any two of these, their intersection has exactly the four points.
The reducible quadratics, in turn, may be determined by expressing the quadratic form λF1 + μF2 as a 3×3 matrix: reducible quadratics correspond to this matrix being singular, which is equivalent to its determinant being zero, and the determinant is a homogeneous degree three polynomial in λ and μ and corresponds to the resolvent cubic.
== See also ==
Linear function – Linear map or polynomial function of degree one
Quadratic function – Polynomial function of degree two
Cubic function – Polynomial function of degree 3
Quintic function – Polynomial function of degree 5
== Notes ==
^α For the purposes of this article, e is used as a variable as opposed to its conventional use as Euler's number (except when otherwise specified).
== References ==
== Further reading ==
Carpenter, W. (1966). "On the solution of the real quartic". Mathematics Magazine. 39 (1): 28–30. doi:10.2307/2688990. JSTOR 2688990.
Yacoub, M.D.; Fraidenraich, G. (July 2012). "A solution to the quartic equation". Mathematical Gazette. 96: 271–275. doi:10.1017/s002555720000454x. S2CID 124512391.
== External links ==
Quartic formula as four single equations at PlanetMath.
Ferrari's achievement
In mathematics, a rational function is any function that can be defined by a rational fraction, which is an algebraic fraction such that both the numerator and the denominator are polynomials. The coefficients of the polynomials need not be rational numbers; they may be taken in any field K. In this case, one speaks of a rational function and a rational fraction over K. The values of the variables may be taken in any field L containing K. Then the domain of the function is the set of the values of the variables for which the denominator is not zero, and the codomain is L.
The set of rational functions over a field K is a field, the field of fractions of the ring of the polynomial functions over K.
== Definitions ==
A function {\displaystyle f} is called a rational function if it can be written in the form
{\displaystyle f(x)={\frac {P(x)}{Q(x)}}}
where {\displaystyle P} and {\displaystyle Q} are polynomial functions of {\displaystyle x} and {\displaystyle Q} is not the zero function. The domain of {\displaystyle f} is the set of all values of {\displaystyle x} for which the denominator {\displaystyle Q(x)} is not zero.
However, if {\displaystyle \textstyle P} and {\displaystyle \textstyle Q} have a non-constant polynomial greatest common divisor {\displaystyle \textstyle R}, then setting {\displaystyle \textstyle P=P_{1}R} and {\displaystyle \textstyle Q=Q_{1}R} produces a rational function
{\displaystyle f_{1}(x)={\frac {P_{1}(x)}{Q_{1}(x)}},}
which may have a larger domain than {\displaystyle f}, and is equal to {\displaystyle f} on the domain of {\displaystyle f}.
It is a common usage to identify {\displaystyle f} and {\displaystyle f_{1}}, that is, to extend "by continuity" the domain of {\displaystyle f} to that of {\displaystyle f_{1}}.
Indeed, one can define a rational fraction as an equivalence class of fractions of polynomials, where two fractions {\displaystyle \textstyle {\frac {A(x)}{B(x)}}} and {\displaystyle \textstyle {\frac {C(x)}{D(x)}}} are considered equivalent if {\displaystyle A(x)D(x)=B(x)C(x)}. In this case {\displaystyle \textstyle {\frac {P(x)}{Q(x)}}} is equivalent to {\displaystyle \textstyle {\frac {P_{1}(x)}{Q_{1}(x)}}.}
A proper rational function is a rational function in which the degree of {\displaystyle P(x)} is less than the degree of {\displaystyle Q(x)} and both are real polynomials, named by analogy to a proper fraction in {\displaystyle \mathbb {Q} .}
=== Complex rational functions ===
In complex analysis, a rational function
{\displaystyle f(z)={\frac {P(z)}{Q(z)}}}
is the ratio of two polynomials with complex coefficients, where Q is not the zero polynomial and P and Q have no common factor (this avoids f taking the indeterminate value 0/0).
The domain of f is the set of complex numbers such that
{\displaystyle Q(z)\neq 0}.
Every rational function can be naturally extended to a function whose domain and range are the whole Riemann sphere (complex projective line).
A complex rational function with degree one is a Möbius transformation.
Rational functions are representative examples of meromorphic functions.
Iteration of rational functions on the Riemann sphere (i.e. a rational mapping) creates discrete dynamical systems.
=== Degree ===
There are several non-equivalent definitions of the degree of a rational function.
Most commonly, the degree of a rational function is the maximum of the degrees of its constituent polynomials P and Q, when the fraction is reduced to lowest terms. If the degree of f is d, then the equation
{\displaystyle f(z)=w}
has d distinct solutions in z except for certain values of w, called critical values, where two or more solutions coincide or where some solution is rejected at infinity (that is, when the degree of the equation decreases after having cleared the denominator).
The degree of the graph of a rational function is not the degree as defined above: it is the maximum of the degree of the numerator and one plus the degree of the denominator.
In some contexts, such as in asymptotic analysis, the degree of a rational function is the difference between the degrees of the numerator and the denominator.: §13.6.1 : Chapter IV
In network synthesis and network analysis, a rational function of degree two (that is, the ratio of two polynomials of degree at most two) is often called a biquadratic function.
== Examples ==
The rational function
{\displaystyle f(x)={\frac {x^{3}-2x}{2(x^{2}-5)}}}
is not defined at {\displaystyle x^{2}=5\Leftrightarrow x=\pm {\sqrt {5}}.} It is asymptotic to {\displaystyle {\tfrac {x}{2}}} as {\displaystyle x\to \infty .}
The rational function
{\displaystyle f(x)={\frac {x^{2}+2}{x^{2}+1}}}
is defined for all real numbers, but not for all complex numbers, since if x were a square root of
{\displaystyle -1}
(i.e. the imaginary unit or its negative), then formal evaluation would lead to division by zero:
{\displaystyle f(i)={\frac {i^{2}+2}{i^{2}+1}}={\frac {-1+2}{-1+1}}={\frac {1}{0}},}
which is undefined.
A constant function such as f(x) = π is a rational function since constants are polynomials. The function itself is rational, even though the value of f(x) is irrational for all x.
Every polynomial function
{\displaystyle f(x)=P(x)} is a rational function with {\displaystyle Q(x)=1.} A function that cannot be written in this form, such as {\displaystyle f(x)=\sin(x),}
is not a rational function. However, the adjective "irrational" is not generally used for functions.
Every Laurent polynomial can be written as a rational function while the converse is not necessarily true, i.e., the ring of Laurent polynomials is a subring of the rational functions.
The rational function
{\displaystyle f(x)={\tfrac {x}{x}}}
is equal to 1 for all x except 0, where there is a removable singularity. The sum, product, or quotient (excepting division by the zero polynomial) of two rational functions is itself a rational function. However, the process of reduction to standard form may inadvertently result in the removal of such singularities unless care is taken. Using the definition of rational functions as equivalence classes gets around this, since x/x is equivalent to 1/1.
== Taylor series ==
The coefficients of a Taylor series of any rational function satisfy a linear recurrence relation, which can be found by equating the rational function to a Taylor series with indeterminate coefficients, and collecting like terms after clearing the denominator.
For example,
{\displaystyle {\frac {1}{x^{2}-x+2}}=\sum _{k=0}^{\infty }a_{k}x^{k}.}
Multiplying through by the denominator and distributing,
{\displaystyle 1=(x^{2}-x+2)\sum _{k=0}^{\infty }a_{k}x^{k}}
{\displaystyle 1=\sum _{k=0}^{\infty }a_{k}x^{k+2}-\sum _{k=0}^{\infty }a_{k}x^{k+1}+2\sum _{k=0}^{\infty }a_{k}x^{k}.}
After adjusting the indices of the sums to get the same powers of x, we get
{\displaystyle 1=\sum _{k=2}^{\infty }a_{k-2}x^{k}-\sum _{k=1}^{\infty }a_{k-1}x^{k}+2\sum _{k=0}^{\infty }a_{k}x^{k}.}
Combining like terms gives
{\displaystyle 1=2a_{0}+(2a_{1}-a_{0})x+\sum _{k=2}^{\infty }(a_{k-2}-a_{k-1}+2a_{k})x^{k}.}
Since this holds true for all x in the radius of convergence of the original Taylor series, we can compute as follows. Since the constant term on the left must equal the constant term on the right it follows that
{\displaystyle a_{0}={\frac {1}{2}}.}
Then, since there are no powers of x on the left, all of the coefficients on the right must be zero, from which it follows that
{\displaystyle a_{1}={\frac {1}{4}}}
{\displaystyle a_{k}={\frac {1}{2}}(a_{k-1}-a_{k-2})\quad {\text{for}}\ k\geq 2.}
Conversely, any sequence that satisfies a linear recurrence determines a rational function when used as the coefficients of a Taylor series. This is useful in solving such recurrences, since by using partial fraction decomposition we can write any proper rational function as a sum of factors of the form 1 / (ax + b) and expand these as geometric series, giving an explicit formula for the Taylor coefficients; this is the method of generating functions.
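The worked example above can be run directly. This Python sketch (an illustration, not part of the source) generates the coefficients of 1/(x² − x + 2) from the recurrence and verifies them by convolving back with the denominator:

```python
def taylor_coeffs(n):
    """First n Taylor coefficients of 1/(x^2 - x + 2) at x = 0, from the
    recurrence a_0 = 1/2, a_1 = 1/4, a_k = (a_{k-1} - a_{k-2}) / 2."""
    a = [0.5, 0.25]
    while len(a) < n:
        a.append((a[-1] - a[-2]) / 2)
    return a[:n]
```

Multiplying the truncated series by x² − x + 2 should reproduce the constant 1: the coefficient of x^k in the product is 2a_k − a_{k−1} + a_{k−2}, which vanishes for k ≥ 1 by construction.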
== Abstract algebra ==
In abstract algebra the concept of a polynomial is extended to include formal expressions in which the coefficients of the polynomial can be taken from any field. In this setting, given a field F and some indeterminate X, a rational expression (also known as a rational fraction or, in algebraic geometry, a rational function) is any element of the field of fractions of the polynomial ring F[X]. Any rational expression can be written as the quotient of two polynomials P/Q with Q ≠ 0, although this representation isn't unique. P/Q is equivalent to R/S, for polynomials P, Q, R, and S, when PS = QR. However, since F[X] is a unique factorization domain, there is a unique representation for any rational expression P/Q with P and Q polynomials of lowest degree and Q chosen to be monic. This is similar to how a fraction of integers can always be written uniquely in lowest terms by canceling out common factors.
The field of rational expressions is denoted F(X). This field is said to be generated (as a field) over F by (a transcendental element) X, because F(X) does not contain any proper subfield containing both F and the element X.
=== Notion of a rational function on an algebraic variety ===
Like polynomials, rational expressions can also be generalized to n indeterminates X1,..., Xn, by taking the field of fractions of F[X1,..., Xn], which is denoted by F(X1,..., Xn).
An extended version of the abstract idea of rational function is used in algebraic geometry. There the function field of an algebraic variety V is formed as the field of fractions of the coordinate ring of V (more accurately said, of a Zariski-dense affine open set in V). Its elements f are considered as regular functions in the sense of algebraic geometry on non-empty open sets U, and also may be seen as morphisms to the projective line.
== Applications ==
Rational functions are used in numerical analysis for interpolation and approximation of functions, for example the Padé approximants introduced by Henri Padé. Approximations in terms of rational functions are well suited for computer algebra systems and other numerical software. Like polynomials, they can be evaluated straightforwardly, and at the same time they express more diverse behavior than polynomials.
Rational functions are used to approximate or model more complex equations in science and engineering including fields and forces in physics, spectroscopy in analytical chemistry, enzyme kinetics in biochemistry, electronic circuitry, aerodynamics, medicine concentrations in vivo, wave functions for atoms and molecules, optics and photography to improve image resolution, and acoustics and sound.
In signal processing, the Laplace transform (for continuous systems) or the z-transform (for discrete-time systems) of the impulse response of commonly-used linear time-invariant systems (filters) with infinite impulse response are rational functions over complex numbers.
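As a small illustration of that last point (my own hypothetical example, not from the source): a first-order IIR filter y[k] = 0.5·y[k−1] + x[k] has the rational transfer function H(z) = 1/(1 − 0.5·z⁻¹), and its impulse response is exactly the sequence of Taylor coefficients of that rational function, 0.5^k:

```python
def iir_impulse_response(n):
    """Impulse response of the (hypothetical) first-order IIR filter
    y[k] = 0.5*y[k-1] + x[k], whose z-transform is the rational
    transfer function H(z) = 1 / (1 - 0.5*z^-1)."""
    y, state = [], 0.0
    for k in range(n):
        x = 1.0 if k == 0 else 0.0  # unit impulse input
        state = 0.5 * state + x
        y.append(state)
    return y
```

The infinite (geometric) impulse response corresponds to the pole of H at z = 0.5; a polynomial transfer function would instead give a finite impulse response.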
== See also ==
Partial fraction decomposition
Partial fractions in integration
Function field of an algebraic variety
Algebraic fractions – a generalization of rational functions that allows taking integer roots
== References ==
== Further reading ==
"Rational function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. (2007), "Section 3.4. Rational Function Interpolation and Extrapolation", Numerical Recipes: The Art of Scientific Computing (3rd ed.), Cambridge University Press, ISBN 978-0-521-88068-8
== External links ==
Dynamic visualization of rational functions with JSXGraph
In geometry, a degenerate conic is a conic (a second-degree plane curve, defined by a polynomial equation of degree two) that fails to be an irreducible curve. This means that the defining equation is factorable over the complex numbers (or more generally over an algebraically closed field) as the product of two linear polynomials.
Using the alternative definition of the conic as the intersection in three-dimensional space of a plane and a double cone, a conic is degenerate if the plane goes through the vertex of the cones.
In the real plane, a degenerate conic can be two lines that may or may not be parallel, a single line (either two coinciding lines or the union of a line and the line at infinity), a single point (in fact, two complex conjugate lines), or the null set (twice the line at infinity or two parallel complex conjugate lines).
All these degenerate conics may occur in pencils of conics. That is, if two real non-degenerated conics are defined by quadratic polynomial equations f = 0 and g = 0, the conics of equations af + bg = 0 form a pencil, which contains one or three degenerate conics. For any degenerate conic in the real plane, one may choose f and g so that the given degenerate conic belongs to the pencil they determine.
== Examples ==
The conic section with equation
{\displaystyle x^{2}-y^{2}=0}
is degenerate as its equation can be written as {\displaystyle (x-y)(x+y)=0}, and corresponds to two intersecting lines forming an "X". This degenerate conic occurs as the limit case {\displaystyle a=1,b=0} in the pencil of hyperbolas of equations {\displaystyle a(x^{2}-y^{2})-b=0.}
The limiting case
{\displaystyle a=0,b=1}
is an example of a degenerate conic consisting of twice the line at infinity.
Similarly, the conic section with equation
{\displaystyle x^{2}+y^{2}=0}, which has only one real point, is degenerate, as {\displaystyle x^{2}+y^{2}} is factorable as {\displaystyle (x+iy)(x-iy)} over the complex numbers. The conic consists thus of two complex conjugate lines that intersect in the unique real point, {\displaystyle (0,0)}, of the conic.
The pencil of ellipses of equations
{\displaystyle ax^{2}+b(y^{2}-1)=0} degenerates, for {\displaystyle a=0,b=1}, into two parallel lines and, for {\displaystyle a=1,b=0}, into a double line.
The pencil of circles of equations
{\displaystyle a(x^{2}+y^{2}-1)-bx=0} degenerates for {\displaystyle a=0} into two lines, the line at infinity and the line of equation {\displaystyle x=0}.
== Classification ==
Over the complex projective plane there are only two types of degenerate conics – two different lines, which necessarily intersect in one point, or one double line. Any degenerate conic may be transformed by a projective transformation into any other degenerate conic of the same type.
Over the real affine plane the situation is more complicated. A degenerate real conic may be:
Two intersecting lines, such as
{\displaystyle x^{2}-y^{2}=0\Leftrightarrow (x+y)(x-y)=0}
Two parallel lines, such as
{\displaystyle x^{2}-1=0\Leftrightarrow (x+1)(x-1)=0}
A double line (multiplicity 2), such as
{\displaystyle x^{2}=0}
Two intersecting complex conjugate lines (only one real point), such as
{\displaystyle x^{2}+y^{2}=0\Leftrightarrow (x+iy)(x-iy)=0}
Two parallel complex conjugate lines (no real point), such as
{\displaystyle x^{2}+1=0\Leftrightarrow (x+i)(x-i)=0}
A single line and the line at infinity
Twice the line at infinity (no real point in the affine plane)
For any two degenerate conics of the same class, there are affine transformations mapping the first conic to the second one.
== Discriminant ==
Non-degenerate real conics can be classified as ellipses, parabolas, or hyperbolas by the discriminant of the non-homogeneous form
{\displaystyle Ax^{2}+2Bxy+Cy^{2}+2Dx+2Ey+F}
, which is the determinant of the matrix
{\displaystyle M={\begin{bmatrix}A&B\\B&C\\\end{bmatrix}},}
the matrix of the quadratic form in
{\displaystyle (x,y)}. This determinant is positive, zero, or negative as the conic is, respectively, an ellipse, a parabola, or a hyperbola.
Analogously, a conic can be classified as non-degenerate or degenerate according to the discriminant of the homogeneous quadratic form in
{\displaystyle (x,y,z)}.: p.16 Here the affine form is homogenized to
{\displaystyle Ax^{2}+2Bxy+Cy^{2}+2Dxz+2Eyz+Fz^{2};}
the discriminant of this form is the determinant of the matrix
{\displaystyle Q={\begin{bmatrix}A&B&D\\B&C&E\\D&E&F\\\end{bmatrix}}.}
The conic is degenerate if and only if the determinant of this matrix equals zero. In this case, we have the following possibilities:
Two intersecting lines (a hyperbola degenerated to its two asymptotes) if and only if
{\displaystyle \det M<0} (see first diagram).
Two parallel straight lines (a degenerate parabola) if and only if det M = 0. These lines are distinct and real if D^2 + E^2 > (A + C)F (see second diagram), coincident if D^2 + E^2 = (A + C)F, and non-existent in the real plane if D^2 + E^2 < (A + C)F.
A single point (a degenerate ellipse) if and only if det M > 0.
A single line (and the line at infinity) if and only if A = B = C = 0, and D and E are not both zero. This case always occurs as a degenerate conic in a pencil of circles. However, in other contexts it is not considered as a degenerate conic, as its equation is not of degree 2.
The case of coincident lines occurs if and only if the rank of the 3×3 matrix Q is 1; in all other degenerate cases its rank is 2.
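The sign tests above can be collected into a small classifier. `classify_conic` below is a hypothetical helper written for illustration, using a tolerance `eps` for floating-point comparisons:

```python
# Classify the conic Ax^2 + 2Bxy + Cy^2 + 2Dx + 2Ey + F = 0 using the
# determinants of M = [A B; B C] and Q = [A B D; B C E; D E F].

def classify_conic(A, B, C, D, E, F, eps=1e-12):
    det_M = A * C - B * B
    det_Q = (A * (C * F - E * E)
             - B * (B * F - E * D)
             + D * (B * E - C * D))
    if abs(det_Q) > eps:                 # non-degenerate conic
        if det_M > eps:
            return "ellipse"
        elif det_M < -eps:
            return "hyperbola"
        else:
            return "parabola"
    else:                                # degenerate conic
        if det_M < -eps:
            return "two intersecting lines"
        elif det_M > eps:
            return "single point"
        else:
            return "parallel/coincident lines (or lower degree)"

# x^2 + y^2 = 1 is an ellipse (a circle); x^2 - y^2 = 0 is a line pair.
```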
== Relation to intersection of a plane and a cone ==
Conics, also known as conic sections to emphasize their three-dimensional geometry, arise as the intersection of a plane with a cone. Degeneracy occurs when the plane contains the apex of the cone or when the cone degenerates to a cylinder and the plane is parallel to the axis of the cylinder. See Conic section#Degenerate cases for details.
== Applications ==
Degenerate conics, as with degenerate algebraic varieties generally, arise as limits of non-degenerate conics, and are important in compactification of moduli spaces of curves.
For example, the pencil of curves (1-dimensional linear system of conics) defined by x^2 + ay^2 = 1 is non-degenerate for a ≠ 0 but is degenerate for a = 0; concretely, it is an ellipse for a > 0, two parallel lines for a = 0, and a hyperbola for a < 0 – throughout, one axis has length 2 and the other has length 1/√|a|, which is infinity for a = 0.
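A short numerical sketch of this degeneration (plain Python): the second semi-axis 1/√|a| grows without bound as a approaches 0:

```python
import math

# For the pencil x^2 + a*y^2 = 1 (a > 0), one semi-axis is 1 while the
# other has length 1/sqrt(|a|); as a -> 0 that axis diverges and the
# ellipse degenerates into the parallel lines x = -1, x = 1.

def second_semi_axis(a):
    if a == 0:
        return math.inf          # degenerate: two parallel lines
    return 1 / math.sqrt(abs(a))

lengths = [second_semi_axis(a) for a in (1.0, 0.25, 0.01, 0.0)]
```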
Such families arise naturally – given four points in general linear position (no three on a line), there is a pencil of conics through them (five points determine a conic, four points leave one parameter free), of which three are degenerate, each consisting of a pair of lines, corresponding to the (4 choose 2, 2) = 3 ways of choosing 2 pairs of points from 4 points (counting via the multinomial coefficient).
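The count of three degenerate members can be reproduced by brute force: enumerate the ways to split four labelled points into two unordered pairs (illustrative sketch):

```python
from itertools import combinations

# Each split of 4 points into two unordered pairs gives one degenerate
# conic (a pair of lines) in the pencil through the points.

def pair_partitions(points):
    partitions = set()
    for pair in combinations(points, 2):
        rest = tuple(p for p in points if p not in pair)
        # frozenset makes the two pairs unordered, deduplicating splits
        partitions.add(frozenset([pair, rest]))
    return partitions

count = len(pair_partitions(("P1", "P2", "P3", "P4")))
```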
For example, given the four points (±1, ±1), the pencil of conics through them can be parameterized as (1 + a)x^2 + (1 − a)y^2 = 2, yielding the following pencil; in all cases the center is at the origin:
a > 1: hyperbolae opening left and right;
a = 1: the parallel vertical lines x = −1, x = 1;
0 < a < 1: ellipses with a vertical major axis;
a = 0: a circle (with radius √2);
−1 < a < 0: ellipses with a horizontal major axis;
a = −1: the parallel horizontal lines y = −1, y = 1;
a < −1: hyperbolae opening up and down;
a = ∞: the diagonal lines y = x, y = −x (dividing by a and taking the limit as a → ∞ yields x^2 − y^2 = 0).
This then loops around to a > 1, since pencils are a projective line.
Note that this parametrization has a symmetry, where inverting the sign of a reverses x and y. In the terminology of (Levy 1964), this is a Type I linear system of conics, and is animated in the linked video.
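The case analysis above can be sketched as a small classifier on the signs of the coefficients 1 + a and 1 − a (a hypothetical helper for illustration, covering finite real a):

```python
# Classify members of the pencil (1+a)x^2 + (1-a)y^2 = 2 by coefficient
# signs, reproducing the case list above for finite real a.

def pencil_member(a):
    p, q = 1 + a, 1 - a          # coefficients of x^2 and y^2
    if p == 0 or q == 0:
        return "parallel line pair"
    if p * q < 0:
        return "hyperbola"
    return "circle" if p == q else "ellipse"

cases = {a: pencil_member(a) for a in (2, 1, 0.5, 0, -0.5, -1, -2)}
```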
A striking application of such a family is in (Faucette 1996) which gives a geometric solution to a quartic equation by considering the pencil of conics through the four roots of the quartic, and identifying the three degenerate conics with the three roots of the resolvent cubic.
Pappus's hexagon theorem is the special case of Pascal's theorem, when a conic degenerates to two lines.
== Degeneration ==
In the complex projective plane, all conics are equivalent, and can degenerate to either two different lines or one double line.
In the real affine plane:
Hyperbolas can degenerate to two intersecting lines (the asymptotes), as in x^2 − y^2 = a^2, or to two parallel lines: x^2 − a^2 y^2 = 1, or to the double line x^2 − a^2 y^2 = a^2, as a goes to 0.
Parabolas can degenerate to two parallel lines: x^2 − ay − 1 = 0, or the double line x^2 − ay = 0, as a goes to 0; but, because parabolae have a double point at infinity, cannot degenerate to two intersecting lines.
Ellipses can degenerate to two parallel lines: x^2 + a^2 y^2 − 1 = 0, or the double line x^2 + a^2 y^2 − a^2 = 0, as a goes to 0; but, because they have conjugate complex points at infinity which become a double point on degeneration, cannot degenerate to two intersecting lines.
Degenerate conics can degenerate further to more special degenerate conics, as indicated by the dimensions of the spaces and points at infinity.
Two intersecting lines can degenerate to two parallel lines, by rotating until parallel, as in x^2 − ay^2 − 1 = 0, or to a double line by rotating into each other about a point, as in x^2 − ay^2 = 0, in each case as a goes to 0.
Two parallel lines can degenerate to a double line by moving into each other, as in x^2 − a^2 = 0 as a goes to 0, but cannot degenerate to non-parallel lines.
A double line cannot degenerate to the other types.
Another type of degeneration occurs for an ellipse when the sum of the distances to the foci is mandated to equal the interfocal distance; thus it has semi-minor axis equal to zero and has eccentricity equal to one. The result is a line segment (degenerate because the ellipse is not differentiable at the endpoints) with its foci at the endpoints. As an orbit, this is a radial elliptic trajectory.
== Points to define ==
A general conic is defined by five points: given five points in general position, there is a unique conic passing through them. If three of these points lie on a line, then the conic is reducible, and may or may not be unique. If no four points are collinear, then five points define a unique conic (degenerate if three points are collinear, with the other two points determining the unique other line). If four points are collinear, however, then there is not a unique conic passing through them: one line of the degenerate conic passes through the four points, and the remaining line passes through the fifth point, but its direction is undefined, leaving 1 parameter free. If all five points are collinear, then the remaining line is free, which leaves 2 parameters free.
Given four points in general linear position (no three collinear; in particular, no two coincident), there are exactly three pairs of lines (degenerate conics) passing through them, which will in general be intersecting, unless the points form a trapezoid (one pair is parallel) or a parallelogram (two pairs are parallel).
Given three points, if they are non-collinear, there are three pairs of parallel lines passing through them – choose two to define one line, and the third for the parallel line to pass through, by the parallel postulate.
Given two distinct points, there is a unique double line through them.
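The statement that five points in general position determine a unique conic can be illustrated by solving the corresponding linear system. The sketch below fixes the constant term of a x^2 + b xy + c y^2 + d x + e y + f = 0 at f = 1 (a convenient normalization, valid whenever the conic misses the origin) and recovers the unit circle from five of its points:

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting (pure Python).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def conic_through(points):
    # With f fixed at 1, solve the 5x5 system for (a, b, c, d, e).
    A = [[x * x, x * y, y * y, x, y] for x, y in points]
    return solve(A, [-1.0] * 5)

# Five points on the unit circle x^2 + y^2 - 1 = 0; with f = 1 the
# expected coefficients are a = c = -1 and b = d = e = 0.
s = math.sqrt(2) / 2
coeffs = conic_through([(1, 0), (0, 1), (-1, 0), (0, -1), (s, s)])
```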
== Notes ==
== References ==
In mathematics and abstract algebra, the two-element Boolean algebra is the Boolean algebra whose underlying set (or universe or carrier) B is the Boolean domain. The elements of the Boolean domain are 1 and 0 by convention, so that B = {0, 1}. Paul Halmos's name for this algebra "2" has some following in the literature, and will be employed here.
== Definition ==
B is a partially ordered set and the elements of B are also its bounds.
An operation of arity n is a mapping from Bn to B. Boolean algebra consists of two binary operations and unary complementation. The binary operations have been named and notated in various ways. Here they are called 'sum' and 'product', and notated by infix '+' and '∙', respectively. Sum and product commute and associate, as in the usual algebra of real numbers. As for the order of operations, brackets are decisive if present. Otherwise '∙' precedes '+'. Hence A ∙ B + C is parsed as (A ∙ B) + C and not as A ∙ (B + C). Complementation is denoted by writing an overbar over its argument. The numerical analog of the complement of X is 1 − X. In the language of universal algebra, a Boolean algebra is a
⟨B, +, ∙, \overline{..}, 1, 0⟩ algebra of type ⟨2, 2, 1, 0, 0⟩.
Either one-to-one correspondence between {0,1} and {True,False} yields classical bivalent logic in equational form, with complementation read as NOT. If 1 is read as True, '+' is read as OR, and '∙' as AND, and vice versa if 1 is read as False. These two operations define a commutative semiring, known as the Boolean semiring.
== Some basic identities ==
2 can be seen as grounded in the following trivial "Boolean" arithmetic:
1 + 1 = 1 + 0 = 0 + 1 = 1
0 + 0 = 0
0 ∙ 0 = 0 ∙ 1 = 1 ∙ 0 = 0
1 ∙ 1 = 1
\overline{1} = 0
\overline{0} = 1
Note that:
'+' and '∙' work exactly as in numerical arithmetic, except that 1+1=1. '+' and '∙' are derived by analogy from numerical arithmetic; simply set any nonzero number to 1.
Swapping 0 and 1, and '+' and '∙' preserves truth; this is the essence of the duality pervading all Boolean algebras.
This Boolean arithmetic suffices to verify any equation of 2, including the axioms, by examining every possible assignment of 0s and 1s to each variable (see decision procedure).
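Such an exhaustive decision procedure takes only a few lines of Python; `holds` below is an illustrative helper that checks an identity under every assignment of 0s and 1s:

```python
from itertools import product

# Brute-force decision procedure for 2: an identity holds iff it holds
# under every 0/1 assignment. Sum is max, product is min, complement is 1-x.

def holds(identity, nvars):
    return all(identity(*bits) for bits in product((0, 1), repeat=nvars))

bor  = lambda a, b: max(a, b)      # Boolean sum '+'
band = lambda a, b: min(a, b)      # Boolean product
bnot = lambda a: 1 - a             # complement

assert holds(lambda A: bor(A, A) == A, 1)        # A + A = A
assert holds(lambda A: bor(A, 1) == 1, 1)        # A + 1 = 1
assert holds(lambda A: bnot(bnot(A)) == A, 1)    # double complement
assert holds(lambda A, B, C:                     # distributivity
             band(A, bor(B, C)) == bor(band(A, B), band(A, C)), 3)
```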
The following equations may now be verified:
A + A = A
A ∙ A = A
A + 0 = A
A + 1 = 1
A ∙ 0 = 0
\overline{\overline{A}} = A
Each of '+' and '∙' distributes over the other:
A ∙ (B + C) = A ∙ B + A ∙ C;
A + (B ∙ C) = (A + B) ∙ (A + C).
That '∙' distributes over '+' agrees with elementary algebra, but not '+' over '∙'. For this and other reasons, a sum of products (leading to a NAND synthesis) is more commonly employed than a product of sums (leading to a NOR synthesis).
Each of '+' and '∙' can be defined in terms of the other and complementation:
A ∙ B = \overline{\overline{A} + \overline{B}}
A + B = \overline{\overline{A} ∙ \overline{B}}.
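A quick exhaustive check of these two definability identities (illustrative sketch over all 0/1 assignments):

```python
# Verify that '∙' is definable from '+' and complementation, and vice
# versa, exactly as in the two identities above.

def bnot(a): return 1 - a
def band(a, b): return min(a, b)
def bor(a, b): return max(a, b)

ok = all(
    band(a, b) == bnot(bor(bnot(a), bnot(b))) and
    bor(a, b) == bnot(band(bnot(a), bnot(b)))
    for a in (0, 1) for b in (0, 1)
)
```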
We only need one binary operation, and concatenation suffices to denote it. Hence concatenation and overbar suffice to notate 2. This notation is also that of Quine's Boolean term schemata. Letting (X) denote the complement of X and "()" denote either 0 or 1 yields the syntax of the primary algebra of G. Spencer-Brown's Laws of Form.
A basis for 2 is a set of equations, called axioms, from which all of the above equations (and more) can be derived. There are many known bases for all Boolean algebras and hence for 2. An elegant basis notated using only concatenation and overbar is:
ABC = BCA (Concatenation commutes, associates)
\overline{A}A = 1 (2 is a complemented lattice, with an upper bound of 1)
A0 = A (0 is the lower bound).
A\overline{AB} = A\overline{B} (2 is a distributive lattice)
Where concatenation = OR, 1 = true, and 0 = false, or concatenation = AND, 1 = false, and 0 = true. (overbar is negation in both cases.)
If 0=1, (1)–(3) are the axioms for an abelian group.
(1) only serves to prove that concatenation commutes and associates. First assume that (1) associates from either the left or the right, then prove commutativity. Then prove association from the other direction. Associativity is simply association from the left and right combined.
This basis makes for an easy approach to proof, called "calculation" in Laws of Form, that proceeds by simplifying expressions to 0 or 1, by invoking axioms (2)–(4), and the elementary identities
AA = A, \overline{\overline{A}} = A, 1 + A = 1, and the distributive law.
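The four axioms of this basis can themselves be verified by exhaustive 0/1 assignment, here reading concatenation as OR (one of the two readings given above):

```python
from itertools import product

# Check axioms (1)-(4) in 2, reading concatenation as OR, overbar as NOT,
# 1 as true and 0 as false.

def NOT(a): return 1 - a
def J(*xs): return max(xs)        # concatenation, here Boolean OR

ax1 = all(J(a, b, c) == J(b, c, a)
          for a, b, c in product((0, 1), repeat=3))       # ABC = BCA
ax2 = all(J(NOT(a), a) == 1 for a in (0, 1))              # A̅A = 1
ax3 = all(J(a, 0) == a for a in (0, 1))                   # A0 = A
ax4 = all(J(a, NOT(J(a, b))) == J(a, NOT(b))              # A(AB)̅ = A(B)̅
          for a, b in product((0, 1), repeat=2))
```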
== Metatheory ==
De Morgan's theorem states that if one does the following, in the given order, to any Boolean function:
Complement every variable;
Swap '+' and '∙' operators (taking care to add brackets to ensure the order of operations remains the same);
Complement the result,
the result is logically equivalent to what you started with. Repeated application of De Morgan's theorem to parts of a function can be used to drive all complements down to the individual variables.
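A concrete check of the three-step recipe on the function f(A, B, C) = A + (B ∙ C) (illustrative sketch):

```python
from itertools import product

# Step 1 complements each variable, step 2 swaps '+' and '∙', step 3
# complements the whole expression; the result must equal f everywhere.

def f(A, B, C):
    return max(A, min(B, C))                 # A + (B ∙ C)

def transformed(A, B, C):
    nA, nB, nC = 1 - A, 1 - B, 1 - C         # step 1: complement variables
    swapped = min(nA, max(nB, nC))           # step 2: '+' <-> '∙'
    return 1 - swapped                       # step 3: complement the result

same = all(f(*bits) == transformed(*bits)
           for bits in product((0, 1), repeat=3))
```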
A powerful and nontrivial metatheorem states that any identity of 2 holds for all Boolean algebras. Conversely, an identity that holds for an arbitrary nontrivial Boolean algebra also holds in 2. Hence all identities of Boolean algebra are captured by 2. This theorem is useful because any equation in 2 can be verified by a decision procedure. Logicians refer to this fact as "2 is decidable". All known decision procedures require a number of steps that is an exponential function of the number of variables N appearing in the equation to be verified. Whether there exists a decision procedure whose steps are a polynomial function of N falls under the P = NP conjecture.
The above metatheorem does not hold if we consider the validity of more general first-order logic formulas instead of only atomic positive equalities. As an example consider the formula (x = 0) ∨ (x = 1). This formula is always true in a two-element Boolean algebra. In a four-element Boolean algebra whose domain is the powerset of {0, 1}, this formula corresponds to the statement (x = ∅) ∨ (x = {0, 1}) and is false when x is {1}. The decidability for the first-order theory of many classes of Boolean algebras can still be shown, using quantifier elimination or small model property (with the domain size computed as a function of the formula and generally larger than 2).
== See also ==
Boolean algebra
Bounded set
Lattice (order)
Order theory
== References ==
== Further reading ==
Many elementary texts on Boolean algebra were published in the early years of the computer era. Perhaps the best of the lot, and one still in print, is:
Mendelson, Elliot, 1970. Schaum's Outline of Boolean Algebra. McGraw–Hill.
The following items reveal how the two-element Boolean algebra is mathematically nontrivial.
Stanford Encyclopedia of Philosophy: "The Mathematics of Boolean Algebra," by J. Donald Monk.
Burris, Stanley N., and Sankappanavar, H. P., 1981. A Course in Universal Algebra. Springer-Verlag. ISBN 3-540-90578-2.
In mathematics, a Cantor algebra, named after Georg Cantor, is one of two closely related Boolean algebras, one countable and one complete.
The countable Cantor algebra is the Boolean algebra of all clopen subsets of the Cantor set. This is the free Boolean algebra on a countable number of generators. Up to isomorphism, this is the only nontrivial Boolean algebra that is both countable and atomless.
The complete Cantor algebra is the complete Boolean algebra of Borel subsets of the reals modulo meager sets (Balcar & Jech 2006). It is isomorphic to the completion of the countable Cantor algebra. (The complete Cantor algebra is sometimes called the Cohen algebra, though "Cohen algebra" usually refers to a different type of Boolean algebra.) The complete Cantor algebra was studied by von Neumann in 1935 (later published as (von Neumann 1998)), who showed that it is not isomorphic to the random algebra of Borel subsets modulo measure zero sets.
== References ==
Balcar, Bohuslav; Jech, Thomas (2006), "Weak distributivity, a problem of von Neumann and the mystery of measurability", Bulletin of Symbolic Logic, 12 (2): 241–266, doi:10.2178/bsl/1146620061, MR 2223923
von Neumann, John (1998) [1960], Continuous geometry, Princeton Landmarks in Mathematics, Princeton University Press, ISBN 978-0-691-05893-1, MR 0120174
A mixed-signal integrated circuit is any integrated circuit that has both analog circuits and digital circuits on a single semiconductor die. Their usage has grown dramatically with the increased use of cell phones, telecommunications, portable electronics, and automobiles with electronics and digital sensors.
== Overview ==
Integrated circuits (ICs) are generally classified as digital (e.g. a microprocessor) or analog (e.g. an operational amplifier). Mixed-signal ICs contain both digital and analog circuitry on the same chip, and sometimes embedded software. Mixed-signal ICs process both analog and digital signals together. For example, an analog-to-digital converter (ADC) is a typical mixed-signal circuit.
Mixed-signal ICs are often used to convert analog signals to digital signals so that digital devices can process them. For example, mixed-signal ICs are essential components for FM tuners in digital products such as media players, which have digital amplifiers. Any analog signal can be digitized using a very basic ADC, and the smallest and most energy efficient of these are mixed-signal ICs.
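The conversion step can be illustrated with a toy model of an ideal n-bit ADC; `quantize` is a hypothetical helper written for this sketch, and real converters add sample-and-hold circuitry, noise, and nonlinearity:

```python
# Minimal model of the ADC step in a mixed-signal chip: map an analog
# voltage in [0, vref) to an n-bit digital code. Illustrative only.

def quantize(v, vref=3.3, bits=8):
    levels = 2 ** bits
    code = int(v / vref * levels)
    return max(0, min(levels - 1, code))   # clamp to the valid code range

codes = [quantize(v) for v in (0.0, 1.65, 3.29)]
```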
Mixed-signal ICs are more difficult to design and manufacture than analog-only or digital-only integrated circuits. For example, an efficient mixed-signal IC may have its digital and analog components share a common power supply. However, analog and digital components have very different power needs and consumption characteristics, which makes this a non-trivial goal in chip design.
Mixed-signal functionality involves both traditional active elements (like transistors) and well-performing passive elements (like coils, capacitors, and resistors) on the same chip. This requires additional modelling understanding and options from manufacturing technologies. High voltage transistors might be needed in the power management functions on a chip with digital functionality, possibly with a low-power CMOS processor system. Some advanced mixed-signal technologies may enable combining analog sensor elements (like pressure sensors or imaging diodes) on the same chip with an ADC.
Typically, mixed-signal ICs do not necessarily need the fastest digital performance. Instead, they need more mature models of active and passive elements for more accurate simulations and verification, such as for testability planning and reliability estimations. Therefore, mixed-signal circuits are typically realized with larger line widths than the highest speed and densest digital logic, and the implementation technologies can be two to four generations behind the latest digital-only implementation technologies. Additionally, mixed signal processing may need passive elements like resistors, capacitors, and coils, which may require specialized metal, dielectric layers, or similar adaptations of standard fabrication processes. Because of these specific requirements, mixed-signal ICs and digital ICs can have different manufacturers (known as foundries).
== Applications ==
There are numerous applications of mixed-signal integrated circuits, such as in mobile phones, modern radio and telecommunication systems, sensor systems with on-chip standardized digital interfaces (including I2C, UART, SPI, or CAN), voice-related signal processing, aerospace and space electronics, the Internet of things (IoT), unmanned aerial vehicles (UAVs), and automotive and other electrical vehicles. Mixed-signal circuits or systems are typically cost-effective solutions, such as for building modern consumer electronics and in industrial, medical, measurement, and space applications.
Examples of mixed-signal integrated circuits include data converters using delta-sigma modulation, analog-to-digital converters and digital-to-analog converters using error detection and correction, and digital radio chips. Digitally controlled sound chips are also mixed-signal circuits. With the advent of cellular and network technology, this category now includes cellular telephone, software radio, and LAN and WAN router integrated circuits.
== Design and development ==
Typically, mixed-signal chips perform some whole function or sub-function in a larger assembly, such as the radio subsystem of a cell phone, or the read data path and laser SLED control logic of a DVD player. Mixed-signal ICs often contain an entire system-on-a-chip. They may also contain on-chip memory blocks (like OTP), which complicates the manufacturing compared to analog ICs. A mixed-signal IC minimizes off-chip interconnects between digital and analog functionality in the system—typically reducing size and weight due to minimized packaging and a smaller module substrate—and therefore increases the reliability of the system.
Because of the use of both digital signal processing and analog circuitry, mixed-signal ICs are usually designed for a very specific purpose. Their design requires a high level of expertise and careful use of computer aided design (CAD) tools. There also exists specific design tools (like mixed-signal simulators) or description languages (like VHDL-AMS). Automated testing of the finished chips can also be challenging. Teradyne, Keysight, and Advantest are the major suppliers of the test equipment for mixed-signal chips.
There are several particular challenges of mixed-signal circuit manufacturing:
CMOS technology is usually optimal for digital performance, while bipolar junction transistors are usually optimal for analog performance. However, until the last decade, it was difficult to combine these cost-effectively or to design both in a single technology without serious performance compromises. The advent of technologies like high performance CMOS, BiCMOS, CMOS SOI, and SiGe have removed many of these former compromises.
Testing functional operation of mixed-signal ICs remains complex, expensive, and often is a "one-off" implementation task (meaning a lot of work is necessary for a product with a single, specific use).
Systematic design methods of analog and mixed-signal circuits are far more primitive than digital circuits. In general, analog circuit design cannot be automated to nearly the extent that digital circuit design can. Combining the two technologies multiplies this complication.
Fast-changing digital signals send noise to sensitive analog inputs. One path for this noise is substrate coupling. A variety of techniques are used to attempt to block or cancel this noise coupling, such as fully differential amplifiers, P+ guard-rings, differential topology, on-chip decoupling, and triple-well isolation.
=== Variations ===
Mixed-signal devices are available as standard parts, but sometimes custom-designed application-specific integrated circuits (ASICs) are necessary. ASICs are designed for new applications, when new standards emerge, or when new energy source(s) are implemented in the system. Due to their specialization, ASICs are usually only developed when production volumes are estimated to be high. The availability of ready-and-tested analog- and mixed-signal IP blocks from foundries or dedicated design houses has lowered the gap to realize mixed-signal ASICs.
There also exist mixed-signal field-programmable gate arrays (FPGAs) and microcontrollers. In these, the same chip that handles digital logic may contain mixed-signal structures like analog-to-digital and digital-to-analog converter(s), operational amplifiers, or wireless connectivity blocks. These mixed-signal FPGAs and microcontrollers are bridging the gap between standard mixed-signal devices, full-custom ASICs, and embedded software; they offer a solution during product development or when product volume is too low to justify an ASIC. However, they can have performance limitations, such as the resolution of the analog-to-digital converters, the speed of digital-to-analog conversion, or a limited number of inputs and outputs. Nevertheless, they can speed up the system architecture design, prototyping, and even production (at small and medium scales). Their usage also can be supported with development boards, development community, and possibly software support.
== History ==
=== MOS switched-capacitor circuits ===
The MOSFET was invented at Bell Labs between 1955 and 1960, after Frosch and Derick discovered and used surface passivation by silicon dioxide to create the first planar transistors, the first in which drain and source were adjacent at the same surface. Robert Noyce's and Jack Kilby's invention of the silicon integrated circuit was enabled by the planar process developed by Jean Hoerni, which in turn was inspired by the surface passivation method developed at Bell Labs by Carl Frosch and Lincoln Derick in 1955 and 1957.
MOS technology eventually became practical for telephony applications with the MOS mixed-signal integrated circuit, which combines analog and digital signal processing on a single chip, developed by former Bell engineer David A. Hodges with Paul R. Gray at UC Berkeley in the early 1970s. In 1974, Hodges and Gray worked with R.E. Suarez to develop MOS switched capacitor (SC) circuit technology, which they used to develop a digital-to-analog converter (DAC) chip, using MOS capacitors and MOSFET switches for data conversion. MOS analog-to-digital converter (ADC) and DAC chips were commercialized by 1974.
MOS SC circuits led to the development of pulse-code modulation (PCM) codec-filter chips in the late 1970s. The silicon-gate CMOS (complementary MOS) PCM codec-filter chip, developed by Hodges and W.C. Black in 1980, has since been the industry standard for digital telephony. By the 1990s, telecommunication networks such as the public switched telephone network (PSTN) had been largely digitized with very-large-scale integration (VLSI) CMOS PCM codec-filters, widely used in electronic switching systems for telephone exchanges, private branch exchanges (PBX), and key telephone systems (KTS); user-end modems; data transmission applications such as digital loop carriers, pair gain multiplexers, telephone loop extenders, integrated services digital network (ISDN) terminals, digital cordless telephones, and digital cell phones; and applications such as speech recognition equipment, voice data storage, voice mail, and digital tapeless answering machines. The bandwidth of digital telecommunication networks has been rapidly increasing at an exponential rate, as observed by Edholm's law, largely driven by the rapid scaling and miniaturization of MOS technology.
=== RF CMOS circuits ===
While working at Bell Labs in the early 1980s, Pakistani engineer Asad Abidi worked on the development of sub-micron MOSFET (metal–oxide–semiconductor field-effect transistor) VLSI (very large-scale integration) technology at the Advanced LSI Development Lab, along with Marty Lepselter, George E. Smith, and Harry Bol. As one of the few circuit designers at the lab, Abidi demonstrated the potential of sub-micron NMOS integrated circuit technology in high-speed communication circuits, and developed the first MOS amplifiers for Gb/s data rates in optical fiber receivers. Abidi's work was initially met with skepticism from proponents of gallium arsenide and bipolar junction transistors, the dominant technologies for high-speed circuits at the time. In 1985, he joined UCLA, where he pioneered RF CMOS technology in the late 1980s. His work changed the way in which radio-frequency (RF) circuits would be designed, away from discrete bipolar transistors and towards CMOS integrated circuits.
Abidi was researching analog CMOS circuits for signal processing and communications during the late 1980s to early 1990s. In the mid-1990s, the RF CMOS technology that he pioneered was widely adopted in wireless networking, as mobile phones began entering widespread use. As of 2008, the radio transceivers in all wireless networking devices and modern mobile phones are mass-produced as RF CMOS devices.
The baseband processors and radio transceivers in all modern wireless networking devices and mobile phones are mass-produced using RF CMOS devices. RF CMOS circuits are widely used to transmit and receive wireless signals in a variety of applications, such as satellite technology (such as GPS), Bluetooth, Wi-Fi, near-field communication (NFC), mobile networks (such as 3G, 4G, and 5G), terrestrial broadcast, and automotive radar applications, among other uses. RF CMOS technology is crucial to modern wireless communications, including wireless networks and mobile communication devices.
== Commercial examples ==
Examples of mixed-signal design houses and resources:
AnSem
CoreHW
EnSilica
ICsense
Presto Engineering
Sondrel
System to ASIC
Triad Semiconductor
Examples of mixed signal FPGAs and microcontrollers:
Analog Devices CM4xx Mixed-Signal Control Processors
Fusion FPGA (from Microsemi, now part of Microchip Technology)
Cypress PSoC – "programmable system on chip", a product from Infineon Technologies (former Cypress Semiconductor)
Texas Instruments' MSP430
Xilinx mixed signal FPGA
Examples of mixed signal foundries:
GlobalFoundries
New Japan Radio
Tower Semiconductor Ltd
X-Fab
== See also ==
Analog front-end
RFIC
List of sound chips
Yamaha FM synthesis sound chips
POKEY
MOS Technology SID
== Notes ==
== References ==
== Further reading ==
Saraju Mohanty (2015). Nanoelectronic Mixed-Signal System Design. McGraw-Hill. ISBN 978-0071825719.
R. Jacob Baker (2009). CMOS Mixed-Signal Circuit Design, Second Edition. http://CMOSedu.com/
Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google for neural network machine learning, using Google's own TensorFlow software. Google began using TPUs internally in 2015, and in 2018 made them available for third-party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale.
== Comparison to CPUs and GPUs ==
Compared to a graphics processing unit, TPUs are designed for a high volume of low precision computation (e.g. as little as 8-bit precision) with more input/output operations per joule, without hardware for rasterisation/texture mapping. The TPU ASICs are mounted in a heatsink assembly, which can fit in a hard drive slot within a data center rack, according to Norman Jouppi.
Different types of processors are suited for different types of machine learning models. TPUs are well suited for CNNs, while GPUs have benefits for some fully-connected neural networks, and CPUs can have advantages for RNNs.
== History ==
According to Jonathan Ross, one of the original TPU engineers, and later the founder of Groq, three separate groups at Google were developing AI accelerators, with the TPU being the design that was ultimately selected. He was not aware of systolic arrays at the time and upon learning the term thought "Oh, that's called a systolic array? It just seemed to make sense."
The tensor processing unit was announced in May 2016 at Google I/O, when the company said that the TPU had already been used inside their data centers for over a year. Google's 2017 paper describing its creation cites previous systolic matrix multipliers of similar architecture built in the 1990s. The chip has been specifically designed for Google's TensorFlow framework, a symbolic math library which is used for machine learning applications such as neural networks. However, as of 2017 Google still used CPUs and GPUs for other types of machine learning. Other AI accelerator designs are appearing from other vendors also and are aimed at embedded and robotics markets.
Google's TPUs are proprietary. Some models are commercially available, and on February 12, 2018, The New York Times reported that Google "would allow other companies to buy access to those chips through its cloud-computing service." Google has said that TPUs were used in the AlphaGo versus Lee Sedol series of human-versus-machine Go games, as well as in the AlphaZero system, which produced chess, shogi, and Go playing programs from the game rules alone and went on to beat the leading programs in those games. Google has also used TPUs for Google Street View text processing and was able to find all the text in the Street View database in less than five days. In Google Photos, an individual TPU can process over 100 million photos a day. TPUs are also used in RankBrain, which Google uses to provide search results.
Google provides third parties access to TPUs through its Cloud TPU service as part of the Google Cloud Platform and through its notebook-based services Kaggle and Colaboratory.
== Products ==
=== First generation TPU ===
The first-generation TPU is an 8-bit matrix multiplication engine, driven with CISC instructions by the host processor across a PCIe 3.0 bus. It is manufactured on a 28 nm process with a die size ≤ 331 mm². The clock speed is 700 MHz and it has a thermal design power of 28–40 W. It has 28 MiB of on-chip memory and 4 MiB of 32-bit accumulators taking the results of a 256×256 systolic array of 8-bit multipliers. Within the TPU package is 8 GiB of dual-channel 2133 MHz DDR3 SDRAM offering 34 GB/s of bandwidth. Instructions transfer data to or from the host, perform matrix multiplications or convolutions, and apply activation functions.
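The arithmetic described above can be illustrated with a short sketch. This is not Google's implementation; it only mirrors the numeric scheme of a weight-stationary systolic array (8-bit operands feeding 32-bit accumulators), with a plain loop nest standing in for the pipelined dataflow:

```python
# Minimal sketch (not Google's design): an N x N weight-stationary
# systolic array. 8-bit activations stream past cells that each hold
# one 8-bit weight; products accumulate in 32-bit registers, mirroring
# the TPU v1's 256x256 int8 multipliers feeding 32-bit accumulators.

N = 4  # array dimension (the first-generation TPU uses 256)

def systolic_matmul(a, w):
    """Multiply int8 matrices a (M x N) and w (N x N) with 32-bit accumulation."""
    m = len(a)
    acc = [[0] * N for _ in range(m)]  # the 32-bit accumulators
    # Conceptually, cell (k, j) holds weight w[k][j]; activations flow
    # down dimension k while partial sums flow across dimension j.
    for i in range(m):
        for k in range(N):
            for j in range(N):
                acc[i][j] += a[i][k] * w[k][j]
    return acc

a = [[1, -2, 3, 4]]
w = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
print(systolic_matmul(a, w))  # identity weights return the input row
```

Accumulating in 32 bits is what keeps the sums exact: each int8 product is at most 128 × 128, and even 256 of them summed per output element stays far below the 32-bit limit.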
=== Second generation TPU ===
The second-generation TPU was announced in May 2017. Google stated that the first-generation TPU design was limited by memory bandwidth; using 16 GB of High Bandwidth Memory in the second-generation design increased bandwidth to 600 GB/s and performance to 45 teraFLOPS. The TPUs are arranged into four-chip modules with a performance of 180 teraFLOPS, and 64 of these modules are assembled into 256-chip pods with 11.5 petaFLOPS of performance. Notably, while the first-generation TPUs were limited to integers, the second-generation TPUs can also calculate in floating point, introducing the bfloat16 format invented by Google Brain. This makes the second-generation TPUs useful for both training and inference of machine learning models. Google has stated these second-generation TPUs will be available on the Google Compute Engine for use in TensorFlow applications.
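The bfloat16 format mentioned above keeps float32's 8-bit exponent and truncates the significand to 7 bits, so a bfloat16 value is simply the high half of a float32 bit pattern. A minimal sketch of the conversion (round-toward-zero for simplicity; real hardware typically rounds to nearest):

```python
import struct

def float32_to_bfloat16_bits(x):
    """Truncate a float32 to bfloat16: keep the sign, all 8 exponent bits,
    and the top 7 mantissa bits (round-toward-zero for simplicity)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 16  # bfloat16 is the high 16 bits of the float32

def bfloat16_bits_to_float32(b):
    """Widen bfloat16 back to float32 by zero-filling the low 16 bits."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

# bfloat16 keeps float32's full exponent range but only ~2-3 decimal
# digits of precision -- a trade-off that suits neural network training.
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159)))  # prints 3.140625
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(1e30)))     # still representable
```

Because the conversion is a plain truncation, casting between float32 and bfloat16 needs no exponent rebiasing, which keeps the hardware simple.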
=== Third generation TPU ===
The third-generation TPU was announced on May 8, 2018. Google announced that the processors themselves were twice as powerful as the second-generation TPUs and would be deployed in pods with four times as many chips as the preceding generation. This results in an 8-fold increase in performance per pod (with up to 1,024 chips per pod) compared to the second-generation TPU deployment.
=== Fourth generation TPU ===
On May 18, 2021, Google CEO Sundar Pichai spoke about TPU v4 Tensor Processing Units during his keynote at the Google I/O virtual conference. TPU v4 improved performance by more than 2× over TPU v3 chips. Pichai said, "A single v4 pod contains 4,096 v4 chips, and each pod has 10x the interconnect bandwidth per chip at scale, compared to any other networking technology." An April 2023 paper by Google claims TPU v4 is 5–87% faster than an Nvidia A100 at machine learning benchmarks.
There is also an "inference" version, called v4i, that does not require liquid cooling.
=== Fifth generation TPU ===
In 2021, Google revealed that the physical layout of TPU v5 was being designed with the assistance of a novel application of deep reinforcement learning. Google claims TPU v5 is nearly twice as fast as TPU v4; based on that and on the relative performance of TPU v4 over the A100, some speculated that TPU v5 is as fast as or faster than an H100.
Similar to the v4i being a lighter-weight version of the v4, the fifth generation has a "cost-efficient" version called v5e. In December 2023, Google announced TPU v5p which is claimed to be competitive with the H100.
=== Sixth generation TPU ===
In May 2024, at the Google I/O conference, Google announced TPU v6, codenamed Trillium, which became available in preview in October 2024. Google claimed a 4.7-fold performance increase relative to TPU v5e, via larger matrix multiplication units and an increased clock speed. High Bandwidth Memory (HBM) capacity and bandwidth have also doubled. A pod can contain up to 256 Trillium units.
=== Seventh generation TPU ===
In April 2025, at the Google Cloud Next conference, Google unveiled TPU v7. This new chip, called Ironwood, will come in two configurations: a 256-chip cluster and a 9,216-chip cluster. Ironwood will have a peak computational performance of 4,614 TFLOP/s.
=== Edge TPU ===
In July 2018, Google announced the Edge TPU. The Edge TPU is Google's purpose-built ASIC chip designed to run machine learning (ML) models for edge computing, meaning it is much smaller and consumes far less power compared to the TPUs hosted in Google datacenters (also known as Cloud TPUs). In January 2019, Google made the Edge TPU available to developers with a line of products under the Coral brand. The Edge TPU is capable of 4 trillion operations per second with 2 W of electrical power.
The product offerings include a single-board computer (SBC), a system on module (SoM), a USB accessory, a mini PCI-e card, and an M.2 card. The SBC Coral Dev Board and Coral SoM both run Mendel Linux OS – a derivative of Debian. The USB, PCI-e, and M.2 products function as add-ons to existing computer systems, and support Debian-based Linux systems on x86-64 and ARM64 hosts (including Raspberry Pi).
The machine learning runtime used to execute models on the Edge TPU is based on TensorFlow Lite. The Edge TPU is only capable of accelerating forward-pass operations, which means it is primarily useful for inference (although it is possible to perform lightweight transfer learning on the Edge TPU). The Edge TPU also supports only 8-bit math, meaning that for a network to be compatible with the Edge TPU, it needs either to be trained using TensorFlow's quantization-aware training technique or, since late 2019, converted using post-training quantization.
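Post-training quantization, as described above, maps a floating-point tensor onto 8-bit integers with a scale and zero point. A minimal sketch of the general affine-quantization technique (not the Edge TPU compiler's actual internals; function names are illustrative):

```python
def quantize_tensor(values, num_bits=8):
    """Affine (asymmetric) quantization: map a float range onto uint8
    using a scale and zero point. Illustrative sketch of the general
    technique used by post-training quantization."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)   # the range must include zero
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the quantized integers."""
    return [(qi - zero_point) * scale for qi in q]

vals = [-1.0, 0.0, 0.5, 2.0]
q, s, z = quantize_tensor(vals)
print(q)                      # integers in [0, 255]
print(dequantize(q, s, z))    # approximately the original floats
```

Forcing the range to include zero means the float value 0.0 maps exactly to the integer zero point, which matters for operations such as zero padding.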
On November 12, 2019, Asus announced a pair of single-board computers (SBCs) featuring the Edge TPU. The Asus Tinker Edge T and Tinker Edge R boards are designed for IoT and edge AI. The SBCs officially support Android and Debian operating systems. Asus has also demonstrated a mini PC called the Asus PN60T featuring the Edge TPU.
On January 2, 2020, Google announced the Coral Accelerator Module and Coral Dev Board Mini, to be demonstrated at CES 2020 later the same month. The Coral Accelerator Module is a multi-chip module featuring the Edge TPU, with PCIe and USB interfaces for easier integration. The Coral Dev Board Mini is a smaller SBC featuring the Coral Accelerator Module and a MediaTek 8167s SoC.
=== Pixel Neural Core ===
On October 15, 2019, Google announced the Pixel 4 smartphone, which contains an Edge TPU called the Pixel Neural Core. Google describes it as "customized to meet the requirements of key camera features in Pixel 4", using a neural network search that sacrifices some accuracy in favor of minimizing latency and power use.
=== Google Tensor ===
Google followed the Pixel Neural Core by integrating an Edge TPU into a custom system-on-chip named Google Tensor, which was released in 2021 with the Pixel 6 line of smartphones. The Google Tensor SoC demonstrated "extremely large performance advantages over the competition" in machine learning-focused benchmarks; although instantaneous power consumption also was relatively high, the improved performance meant less energy was consumed due to shorter periods requiring peak performance.
== Lawsuit ==
In 2019, Singular Computing, founded in 2009 by Joseph Bates, a visiting professor at MIT, filed suit against Google alleging patent infringement in TPU chips. By 2020, Google had successfully lowered the number of claims the court would consider to just two: claim 53 of US 8407273 filed in 2012 and claim 7 of US 9218156 filed in 2013, both of which claim a dynamic range of 10⁻⁶ to 10⁶ for floating point numbers, which the standard float16 cannot represent (without resorting to subnormal numbers) as it has only five bits for the exponent. In a 2023 court filing, Singular Computing specifically called out Google's use of bfloat16, as that exceeds the dynamic range of float16. Singular claimed that non-standard floating point formats were non-obvious in 2009, but Google countered that the VFLOAT format, with a configurable number of exponent bits, existed as prior art in 2002. By January 2024, subsequent lawsuits by Singular had brought the number of patents being litigated up to eight. Towards the end of the trial later that month, Google agreed to a settlement with undisclosed terms.
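The dynamic-range claim can be checked directly from the formats' exponent widths: float16's five exponent bits give normal numbers between roughly 6.1 × 10⁻⁵ and 65,504, while bfloat16's eight exponent bits give float32's full range. A short sketch:

```python
# IEEE binary16 (float16): 5 exponent bits, so normal numbers span
# 2**-14 (~6.1e-5) up to (2 - 2**-10) * 2**15 (= 65504). The patents'
# claimed dynamic range of 1e-6 to 1e6 falls outside both ends,
# while bfloat16 (8 exponent bits, float32's range) covers it easily.

f16_min_normal = 2.0 ** -14                 # smallest normal float16
f16_max = (2 - 2.0 ** -10) * 2.0 ** 15      # largest float16 = 65504.0

bf16_min_normal = 2.0 ** -126               # same exponent range as float32
bf16_max = (2 - 2.0 ** -7) * 2.0 ** 127     # ~3.39e38

print(1e-6 < f16_min_normal, 1e6 > f16_max)       # float16 misses both ends
print(bf16_min_normal < 1e-6 and 1e6 < bf16_max)  # bfloat16 covers the range
```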
== See also ==
Cognitive computer
AI accelerator
Structure tensor, a mathematical foundation for TPUs
Tensor Core, a similar architecture by Nvidia
TrueNorth, a similar device simulating spiking neurons instead of low-precision tensors
Vision processing unit, a similar device specialised for vision processing
== References ==
== External links ==
Cloud Tensor Processing Units (TPUs) (Documentation from Google Cloud)
Photo of Google's TPU chip and board
Photo of Google's TPU v2 board
Photo of Google's TPU v3 board
Photo of Google's TPU v2 pod
Gottfried Wilhelm Leibniz (or Leibnitz; 1 July 1646 [O.S. 21 June] – 14 November 1716) was a German polymath active as a mathematician, philosopher, scientist and diplomat who is credited, alongside Sir Isaac Newton, with the creation of calculus in addition to many other branches of mathematics, such as binary arithmetic and statistics. Leibniz has been called the "last universal genius" due to his vast expertise across fields, which became a rarity after his lifetime with the coming of the Industrial Revolution and the spread of specialized labor. He is a prominent figure in both the history of philosophy and the history of mathematics. He wrote works on philosophy, theology, ethics, politics, law, history, philology, games, music, and other studies. Leibniz also made major contributions to physics and technology, and anticipated notions that surfaced much later in probability theory, biology, medicine, geology, psychology, linguistics and computer science.
Leibniz contributed to the field of library science, developing a cataloguing system (at the Herzog August Library in Wolfenbüttel, Germany) that came to serve as a model for many of Europe's largest libraries. His contributions to a wide range of subjects were scattered in various learned journals, in tens of thousands of letters and in unpublished manuscripts. He wrote in several languages, primarily in Latin, French and German.
As a philosopher, he was a leading representative of 17th-century rationalism and idealism. As a mathematician, his major achievement was the development of differential and integral calculus, independently of Newton's contemporaneous developments. Leibniz's notation has been favored as the conventional and more exact expression of calculus. In addition to his work on calculus, he is credited with devising the modern binary number system, which is the basis of modern communications and digital computing; however, Thomas Harriot had devised the same system decades before. He envisioned the field of combinatorial topology as early as 1679, and helped initiate the field of fractional calculus.
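To illustrate, Leibniz's notation writes the derivative as a ratio of differentials and the integral with an elongated S (for summa), forms still in conventional use, whereas Newton wrote derivatives with overdots:

```latex
% Leibniz's notation for first and second derivatives and the integral
\frac{dy}{dx}, \qquad \frac{d^{2}y}{dx^{2}}, \qquad \int y \, dx
% Newton's equivalent "fluxion" notation for the first derivative
\dot{y}
```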
In the 20th century, Leibniz's notions of the law of continuity and the transcendental law of homogeneity found a consistent mathematical formulation by means of non-standard analysis. He was also a pioneer in the field of mechanical calculators. While working on adding automatic multiplication and division to Pascal's calculator, he was the first to describe a pinwheel calculator in 1685 and invented the Leibniz wheel, later used in the arithmometer, the first mass-produced mechanical calculator.
In philosophy and theology, Leibniz is most noted for his optimism, i.e. his conclusion that our world is, in a qualified sense, the best possible world that God could have created, a view sometimes lampooned by other thinkers, such as Voltaire in his satirical novella Candide. Leibniz, along with René Descartes and Baruch Spinoza, was one of the three influential early modern rationalists. His philosophy also assimilates elements of the scholastic tradition, notably the assumption that some substantive knowledge of reality can be achieved by reasoning from first principles or prior definitions. The work of Leibniz anticipated modern logic and still influences contemporary analytic philosophy, such as its adopted use of the term "possible world" to define modal notions.
== Biography ==
=== Early life ===
Gottfried Leibniz was born on 1 July [O.S. 21 June] 1646, in Leipzig, Saxony, to Friedrich Leibniz (1597–1652) and Catharina Schmuck (1621–1664).
He was baptized two days later at St. Nicholas Church, Leipzig; his godfather was the Lutheran theologian Martin Geier. His father died when he was six years old, and Leibniz was raised by his mother.
Leibniz's father had been a Professor of Moral Philosophy at the University of Leipzig, where he also served as dean of philosophy. The boy inherited his father's personal library. He was given free access to it from the age of seven, shortly after his father's death. While Leibniz's schoolwork was largely confined to the study of a small canon of authorities, his father's library enabled him to study a wide variety of advanced philosophical and theological works—ones that he would not have otherwise been able to read until his college years. Access to his father's library, largely written in Latin, also led to his proficiency in the Latin language, which he achieved by the age of 12. At the age of 13 he composed 300 hexameters of Latin verse in a single morning for a special event at school.
In April 1661 he enrolled in his father's former university at age 14. There he was guided, among others, by Jakob Thomasius, previously a student of Friedrich. Leibniz completed his bachelor's degree in Philosophy in December 1662. He defended his Disputatio Metaphysica de Principio Individui (Metaphysical Disputation on the Principle of Individuation), which addressed the principle of individuation, on 9 June 1663 [O.S. 30 May], presenting an early version of monadic substance theory. Leibniz earned his master's degree in Philosophy on 7 February 1664. In December 1664 he published and defended a dissertation, Specimen Quaestionum Philosophicarum ex Jure collectarum (An Essay of Collected Philosophical Problems of Right), arguing for both a theoretical and a pedagogical relationship between philosophy and law. After one year of legal studies, he was awarded his bachelor's degree in Law on 28 September 1665. His dissertation was titled De conditionibus (On Conditions).
In early 1666, at age 19, Leibniz wrote his first book, De Arte Combinatoria (On the Combinatorial Art), the first part of which was also his habilitation thesis in Philosophy, which he defended in March 1666. De Arte Combinatoria was inspired by Ramon Llull's Ars Magna and contained a proof of the existence of God, cast in geometrical form, and based on the argument from motion.
His next goal was to earn his license and Doctorate in Law, which normally required three years of study. In 1666, the University of Leipzig turned down Leibniz's doctoral application and refused to grant him a Doctorate in Law, most likely due to his relative youth. Leibniz subsequently left Leipzig.
Leibniz then enrolled in the University of Altdorf and quickly submitted a thesis, which he had probably been working on earlier in Leipzig. The title of his thesis was Disputatio Inauguralis de Casibus Perplexis in Jure (Inaugural Disputation on Ambiguous Legal Cases). Leibniz earned his license to practice law and his Doctorate in Law in November 1666. He next declined the offer of an academic appointment at Altdorf, saying that "my thoughts were turned in an entirely different direction".
As an adult, Leibniz often introduced himself as "Gottfried von Leibniz". Many posthumously published editions of his writings presented his name on the title page as "Freiherr G. W. von Leibniz." However, no document has ever been found from any contemporary government that stated his appointment to any form of nobility.
=== 1666–1676 ===
Leibniz's first position was as a salaried secretary to an alchemical society in Nuremberg. He knew fairly little about the subject at that time but presented himself as deeply learned. He soon met Johann Christian von Boyneburg (1622–1672), the dismissed chief minister of the Elector of Mainz, Johann Philipp von Schönborn. Von Boyneburg hired Leibniz as an assistant, and shortly thereafter reconciled with the Elector and introduced Leibniz to him. Leibniz then dedicated an essay on law to the Elector in the hope of obtaining employment. The stratagem worked; the Elector asked Leibniz to assist with the redrafting of the legal code for the Electorate. In 1669, Leibniz was appointed assessor in the Court of Appeal. Although von Boyneburg died late in 1672, Leibniz remained under the employment of his widow until she dismissed him in 1674.
Von Boyneburg did much to promote Leibniz's reputation, and the latter's memoranda and letters began to attract favorable notice. After Leibniz's service to the Elector there soon followed a diplomatic role. He published an essay, under the pseudonym of a fictitious Polish nobleman, arguing (unsuccessfully) for the German candidate for the Polish crown. The main force in European geopolitics during Leibniz's adult life was the ambition of Louis XIV of France, backed by French military and economic might. Meanwhile, the Thirty Years' War had left German-speaking Europe exhausted, fragmented, and economically backward. Leibniz proposed to protect German-speaking Europe by distracting Louis as follows: France would be invited to take Egypt as a stepping stone towards an eventual conquest of the Dutch East Indies. In return, France would agree to leave Germany and the Netherlands undisturbed. This plan obtained the Elector's cautious support. In 1672, the French government invited Leibniz to Paris for discussion, but the plan was soon overtaken by the outbreak of the Franco-Dutch War and became irrelevant. Napoleon's failed invasion of Egypt in 1798 can be seen as an unwitting, late implementation of Leibniz's plan, after the Eastern hemisphere colonial supremacy in Europe had already passed from the Dutch to the British.
Thus Leibniz went to Paris in 1672. Soon after arriving, he met Dutch physicist and mathematician Christiaan Huygens and realised that his own knowledge of mathematics and physics was patchy. With Huygens as his mentor, he began a program of self-study that soon pushed him to making major contributions to both subjects, including discovering his version of the differential and integral calculus. He met Nicolas Malebranche and Antoine Arnauld, the leading French philosophers of the day, and studied the writings of Descartes and Pascal, unpublished as well as published. He befriended a German mathematician, Ehrenfried Walther von Tschirnhaus; they corresponded for the rest of their lives.
When it became clear that France would not implement its part of Leibniz's Egyptian plan, the Elector sent his nephew, escorted by Leibniz, on a related mission to the English government in London, early in 1673. There Leibniz made the acquaintance of Henry Oldenburg and John Collins. He met with the Royal Society, where he demonstrated a calculating machine that he had designed and had been building since 1670. The machine could execute all four basic operations (adding, subtracting, multiplying, and dividing), and the society quickly made him an external member.
The mission ended abruptly when news of the Elector's death (12 February 1673) reached them. Leibniz promptly returned to Paris and not, as had been planned, to Mainz. The sudden deaths of his two patrons in the same winter meant that Leibniz had to find a new basis for his career.
In this regard, a 1669 invitation from Duke John Frederick of Brunswick to visit Hanover proved to have been fateful. Leibniz had declined the invitation, but had begun corresponding with the duke in 1671. In 1673, the duke offered Leibniz the post of counsellor. Leibniz very reluctantly accepted the position two years later, only after it became clear that no employment was forthcoming in Paris, whose intellectual stimulation he relished, or with the Habsburg imperial court.
In 1675 he tried to get admitted to the French Academy of Sciences as a foreign honorary member, but it was considered that there were already enough foreigners there and so no invitation came. He left Paris in October 1676.
=== House of Hanover, 1676–1716 ===
Leibniz managed to delay his arrival in Hanover until the end of 1676 after making one more short journey to London, where Newton later claimed Leibniz had seen his unpublished work on calculus in advance. This visit was alleged to be evidence supporting the accusation, made decades later, that he had stolen calculus from Newton. On the journey from London to Hanover, Leibniz stopped in The Hague, where he met van Leeuwenhoek, the discoverer of microorganisms. He also spent several days in intense discussion with Spinoza, who had just completed, but had not published, his masterwork, the Ethics. Spinoza died very shortly after Leibniz's visit.
In 1677, he was promoted, at his request, to Privy Counselor of Justice, a post he held for the rest of his life. Leibniz served three consecutive rulers of the House of Brunswick as historian, political adviser, and most consequentially, as librarian of the ducal library. He thenceforth employed his pen on all the various political, historical, and theological matters involving the House of Brunswick; the resulting documents form a valuable part of the historical record for the period.
Leibniz began promoting a project to use windmills to improve the mining operations in the Harz Mountains. This project did little to improve mining operations and was shut down by Duke Ernst August in 1685.
Among the few people in north Germany to accept Leibniz were the Electress Sophia of Hanover (1630–1714), her daughter Sophia Charlotte of Hanover (1668–1705), the Queen of Prussia and his avowed disciple, and Caroline of Ansbach, the consort of her grandson, the future George II. To each of these women he was correspondent, adviser, and friend. In turn, they all approved of Leibniz more than did their spouses and the future king George I of Great Britain.
The population of Hanover was only about 10,000, and its provinciality eventually grated on Leibniz. Nevertheless, to be a major courtier to the House of Brunswick was quite an honor, especially in light of the meteoric rise in the prestige of that House during Leibniz's association with it. In 1692, the Duke of Brunswick became a hereditary Elector of the Holy Roman Empire. The British Act of Settlement 1701 designated the Electress Sophia and her descent as the royal family of England, once both King William III and his sister-in-law and successor, Queen Anne, were dead. Leibniz played a role in the initiatives and negotiations leading up to that Act, but not always an effective one. For example, something he published anonymously in England, thinking to promote the Brunswick cause, was formally censured by the British Parliament.
The Brunswicks tolerated the enormous effort Leibniz devoted to intellectual pursuits unrelated to his duties as a courtier, pursuits such as perfecting calculus, writing about other mathematics, logic, physics, and philosophy, and keeping up a vast correspondence. He began working on calculus in 1674; the earliest evidence of its use in his surviving notebooks is 1675. By 1677 he had a coherent system in hand, but did not publish it until 1684. Leibniz's most important mathematical papers were published between 1682 and 1692, usually in a journal which he and Otto Mencke founded in 1682, the Acta Eruditorum. That journal played a key role in advancing his mathematical and scientific reputation, which in turn enhanced his eminence in diplomacy, history, theology, and philosophy.
The Elector Ernest Augustus commissioned Leibniz to write a history of the House of Brunswick, going back to the time of Charlemagne or earlier, hoping that the resulting book would advance his dynastic ambitions. From 1687 to 1690, Leibniz traveled extensively in Germany, Austria, and Italy, seeking and finding archival materials bearing on this project. Decades went by but no history appeared; the next Elector became quite annoyed at Leibniz's apparent dilatoriness. Leibniz never finished the project, in part because of his huge output on many other fronts, but also because he insisted on writing a meticulously researched and erudite book based on archival sources, when his patrons would have been quite happy with a short popular book, one perhaps little more than a genealogy with commentary, to be completed in three years or less. They never knew that he had in fact carried out a fair part of his assigned task: when the material Leibniz had written and collected for his history of the House of Brunswick was finally published in the 19th century, it filled three volumes.
Leibniz was appointed Librarian of the Herzog August Library in Wolfenbüttel, Lower Saxony, in 1691.
In 1708, John Keill, writing in the journal of the Royal Society and with Newton's presumed blessing, accused Leibniz of having plagiarised Newton's calculus. Thus began the calculus priority dispute which darkened the remainder of Leibniz's life. A formal investigation by the Royal Society (in which Newton was an unacknowledged participant), undertaken in response to Leibniz's demand for a retraction, upheld Keill's charge. Historians of mathematics writing since 1900 or so have tended to acquit Leibniz, pointing to important differences between Leibniz's and Newton's versions of calculus.
In 1712, Leibniz began a two-year residence in Vienna, where he was appointed Imperial Court Councillor to the Habsburgs. On the death of Queen Anne in 1714, Elector George Louis became King George I of Great Britain, under the terms of the 1701 Act of Settlement. Even though Leibniz had done much to bring about this happy event, it was not to be his hour of glory. Despite the intercession of the Princess of Wales, Caroline of Ansbach, George I forbade Leibniz to join him in London until he completed at least one volume of the history of the Brunswick family his father had commissioned nearly 30 years earlier. Moreover, for George I to include Leibniz in his London court would have been deemed insulting to Newton, who was seen as having won the calculus priority dispute and whose standing in British official circles could not have been higher. Finally, his dear friend and defender, the Dowager Electress Sophia, died in 1714. In 1716, while traveling in northern Europe, the Russian Tsar Peter the Great stopped in Bad Pyrmont and met Leibniz, who took interest in Russian matters since 1708 and was appointed advisor in 1711.
=== Death ===
Leibniz died in Hanover in 1716. At the time, he was so out of favor that neither George I (who happened to be near Hanover at that time) nor any fellow courtier other than his personal secretary attended the funeral. Even though Leibniz was a life member of the Royal Society and the Berlin Academy of Sciences, neither organization saw fit to honor his death. His grave went unmarked for more than 50 years. He was, however, eulogized by Fontenelle, before the French Academy of Sciences in Paris, which had admitted him as a foreign member in 1700. The eulogy was composed at the behest of the Duchess of Orleans, a niece of the Electress Sophia.
=== Personal life ===
Leibniz never married. He proposed to an unknown woman at age 50, but changed his mind when she took too long to decide. He complained on occasion about money, but the fair sum he left to his sole heir, his sister's stepson, proved that the Brunswicks had paid him fairly well. In his diplomatic endeavors, he at times verged on the unscrupulous, as was often the case with professional diplomats of his day. On several occasions, Leibniz backdated and altered personal manuscripts, actions which put him in a bad light during the calculus controversy.
He was charming, well-mannered, and not without humor and imagination. He had many friends and admirers all over Europe. He was identified as a Protestant and a philosophical theist. Leibniz remained committed to Trinitarian Christianity throughout his life.
== Philosophy ==
Leibniz's philosophical thinking appears fragmented because his philosophical writings consist mainly of a multitude of short pieces: journal articles, manuscripts published long after his death, and letters to correspondents. He wrote two book-length philosophical treatises, of which only the Théodicée of 1710 was published in his lifetime.
Leibniz dated his beginning as a philosopher to his Discourse on Metaphysics, which he composed in 1686 as a commentary on a running dispute between Nicolas Malebranche and Antoine Arnauld. This led to an extensive correspondence with Arnauld; it and the Discourse were not published until the 19th century. In 1695, Leibniz made his public entrée into European philosophy with a journal article titled "New System of the Nature and Communication of Substances". Between 1695 and 1705, he composed his New Essays on Human Understanding, a lengthy commentary on John Locke's 1690 An Essay Concerning Human Understanding, but upon learning of Locke's 1704 death, lost the desire to publish it, so that the New Essays were not published until 1765. The Monadologie, composed in 1714 and published posthumously, consists of 90 aphorisms.
Leibniz also wrote a short paper, "Primae veritates" ("First Truths"), first published by Louis Couturat in 1903 (pp. 518–523) summarizing his views on metaphysics. The paper is undated; that he wrote it while in Vienna in 1689 was determined only in 1999, when the ongoing critical edition finally published Leibniz's philosophical writings for the period 1677–1690. Couturat's reading of this paper influenced much 20th-century thinking about Leibniz, especially among analytic philosophers. After a meticulous study (informed by the 1999 additions to the critical edition) of all of Leibniz's philosophical writings up to 1688, Mercer (2001) disagreed with Couturat's reading.
Leibniz met Baruch Spinoza in 1676, read some of his unpublished writings, and had since been influenced by some of Spinoza's ideas. While Leibniz befriended him and admired Spinoza's powerful intellect, he was also dismayed by Spinoza's conclusions, especially when these were inconsistent with Christian orthodoxy.
Unlike Descartes and Spinoza, Leibniz had a university education in philosophy. He was influenced by his Leipzig professor Jakob Thomasius, who also supervised his BA thesis in philosophy. Leibniz also read Francisco Suárez, a Spanish Jesuit respected even in Lutheran universities. Leibniz was deeply interested in the new methods and conclusions of Descartes, Huygens, Newton, and Boyle, but the established philosophical ideas in which he was educated influenced his view of their work.
=== Principles ===
Leibniz variously invoked one or another of seven fundamental philosophical Principles:
Identity/contradiction. If a proposition is true, then its negation is false and vice versa.
Identity of indiscernibles. Two distinct things cannot have all their properties in common. If every predicate possessed by x is also possessed by y and vice versa, then entities x and y are identical; to suppose two things indiscernible is to suppose the same thing under two names. The "identity of indiscernibles" is frequently invoked in modern logic and philosophy. It has attracted the most controversy and criticism, especially from corpuscular philosophy and quantum mechanics. The converse of this is often called Leibniz's law, or the indiscernibility of identicals, which is mostly uncontroversial.
Sufficient reason. "There must be a sufficient reason for anything to exist, for any event to occur, for any truth to obtain."
Pre-established harmony. "[T]he appropriate nature of each substance brings it about that what happens to one corresponds to what happens to all the others, without, however, their acting upon one another directly." (Discourse on Metaphysics, XIV) A dropped glass shatters because it "knows" it has hit the ground, and not because the impact with the ground "compels" the glass to split.
Law of continuity. Natura non facit saltus (literally, "Nature does not make jumps").
Optimism. "God assuredly always chooses the best."
Plenitude. Leibniz believed that the best of all possible worlds would actualize every genuine possibility, and argued in Théodicée that this best of all possible worlds will contain all possibilities, with our finite experience of eternity giving no reason to dispute nature's perfection.
Leibniz would on occasion give a rational defense of a specific principle, but more often took them for granted.
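In modern second-order notation (a standard later formalization, not Leibniz's own symbolism), the identity of indiscernibles and its converse, Leibniz's law, read:

```latex
% Identity of indiscernibles: agreement on all predicates entails identity
\forall x\,\forall y\,\bigl[\,\forall F\,(Fx \leftrightarrow Fy)\rightarrow x=y\,\bigr]

% Indiscernibility of identicals (Leibniz's law): identity entails agreement
\forall x\,\forall y\,\bigl[\,x=y\rightarrow \forall F\,(Fx \leftrightarrow Fy)\,\bigr]
```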
=== Monads ===
Leibniz's best known contribution to metaphysics is his theory of monads, as exposited in Monadologie. He proposed that the universe is made of an infinite number of simple substances known as monads. Monads can also be compared to the corpuscles of the mechanical philosophy of René Descartes and others. These simple substances or monads are the "ultimate units of existence in nature". Monads have no parts but still exist by the qualities that they have. These qualities are continuously changing over time, and each monad is unique. They are not affected by time and are subject only to creation and annihilation. Monads are centers of force; substance is force, while space, matter, and motion are merely phenomenal. He argued, against Newton, that space, time, and motion are completely relative: "As for my own opinion, I have said more than once, that I hold space to be something merely relative, as time is, that I hold it to be an order of coexistences, as time is an order of successions." Einstein, who called himself a "Leibnizian", wrote in the introduction to Max Jammer's book Concepts of Space that Leibnizianism was superior to Newtonianism, and that Leibniz's ideas would have prevailed over Newton's had it not been for the poor technological tools of the time; Joseph Agassi argues that Leibniz paved the way for Einstein's theory of relativity.
Leibniz's proof of God can be summarized in the Théodicée. Reason is governed by the principle of contradiction and the principle of sufficient reason. Applying the principle of sufficient reason, Leibniz concluded that the first reason of all things is God. All that we see and experience is subject to change, and the fact that this world is contingent can be explained by the possibility of the world being arranged differently in space and time. The contingent world must have some necessary reason for its existence. Leibniz uses a geometry book as an example to explain his reasoning: even if the book had been copied from an infinite chain of earlier copies, the chain itself would not explain the content of the book, so some reason outside the series is required. Leibniz concluded that there must be the "monas monadum" or God.
The ontological essence of a monad is its irreducible simplicity. Unlike atoms, monads possess no material or spatial character. They also differ from atoms by their complete mutual independence, so that interactions among monads are only apparent. Instead, by virtue of the principle of pre-established harmony, each monad follows a pre-programmed set of "instructions" peculiar to itself, so that a monad "knows" what to do at each moment. By virtue of these intrinsic instructions, each monad is like a little mirror of the universe. Monads need not be "small"; e.g., each human being constitutes a monad, in which case free will is problematic.
Monads are purported to eliminate two problems:
interaction between mind and matter arising in the system of Descartes;
lack of individuation inherent to the system of Spinoza, which represents individual creatures as merely accidental.
=== Theodicy and optimism ===
The Theodicy tries to justify the apparent imperfections of the world by claiming that it is optimal among all possible worlds. It must be the best possible and most balanced world, because it was created by an all-powerful and all-knowing God, who would not choose to create an imperfect world if a better world were known to him or possible. In effect, any apparent flaw that can be identified in this world must exist in every possible world, because otherwise God would have chosen to create the world that excluded that flaw.
Leibniz asserted that the truths of theology (religion) and philosophy cannot contradict each other, since reason and faith are both "gifts of God" so that their conflict would imply God contending against himself. The Theodicy is Leibniz's attempt to reconcile his personal philosophical system with his interpretation of the tenets of Christianity. This project was motivated in part by Leibniz's belief, shared by many philosophers and theologians during the Enlightenment, in the rational and enlightened nature of the Christian religion. It was also shaped by Leibniz's belief in the perfectibility of human nature (if humanity relied on correct philosophy and religion as a guide), and by his belief that metaphysical necessity must have a rational or logical foundation, even if this metaphysical causality seemed inexplicable in terms of physical necessity (the natural laws identified by science).
In the view of Leibniz, because reason and faith must be entirely reconciled, any tenet of faith which could not be defended by reason must be rejected. Leibniz then approached one of the central criticisms of Christian theism: if God is all good, all wise, and all powerful, then how did evil come into the world? The answer (according to Leibniz) is that, while God is indeed unlimited in wisdom and power, his human creations, as creations, are limited both in their wisdom and in their will (power to act). This predisposes humans to false beliefs, wrong decisions, and ineffective actions in the exercise of their free will. God does not arbitrarily inflict pain and suffering on humans; rather he permits both moral evil (sin) and physical evil (pain and suffering) as the necessary consequences of metaphysical evil (imperfection), as a means by which humans can identify and correct their erroneous decisions, and as a contrast to true good.
Further, although human actions flow from prior causes that ultimately arise in God and therefore are known to God as metaphysical certainties, an individual's free will is exercised within natural laws, where choices are merely contingently necessary and to be decided in the event by a "wonderful spontaneity" that provides individuals with an escape from rigorous predestination.
=== Discourse on Metaphysics ===
For Leibniz, "God is an absolutely perfect being". He describes this perfection later in section VI as the simplest form of something with the most substantial outcome (VI). Along these lines, he declares that every type of perfection "pertains to him (God) in the highest degree" (I). Even though his types of perfections are not specifically drawn out, Leibniz highlights the one thing that, to him, identifies imperfections and proves that God is perfect: "that one acts imperfectly if he acts with less perfection than he is capable of", and since God is a perfect being, he cannot act imperfectly (III). Because God cannot act imperfectly, the decisions he makes pertaining to the world must be perfect. Leibniz also comforts readers, stating that because God has done everything to the most perfect degree, those who love him cannot be injured. However, to love God is a subject of difficulty as Leibniz believes that we are "not disposed to wish for that which God desires" because we have the ability to alter our disposition (IV). In accordance with this, many act as rebels, but Leibniz says that the only way we can truly love God is by being content "with all that comes to us according to his will" (IV).
Because God is "an absolutely perfect being" (I), Leibniz argues that God would be acting imperfectly if he acted with any less perfection than he is capable of (III). His syllogism then ends with the statement that God has made the world perfectly in all ways. This also affects how we should view God and his will. Leibniz states that, regarding God's will, we have to understand that God "is the best of all masters" and he will know when his good succeeds, so we, therefore, must act in conformity to his good will—or as much of it as we understand (IV). In our view of God, Leibniz declares that we cannot admire the work solely because of the maker, lest we mar the glory and love God in doing so. Instead, we must admire the maker for the work he has done (II). Effectively, Leibniz states that if we say the earth is good because of the will of God, and not good according to some standards of goodness, then we cannot praise God for what he has done, since contrary actions would be equally praiseworthy by this definition (II). Leibniz then asserts that the principles of goodness and of geometry cannot stem simply from the will of God, but must follow from his understanding.
Leibniz wrote: "Why is there something rather than nothing? The sufficient reason ... is found in a substance which ... is a necessary being bearing the reason for its existence within itself." Martin Heidegger called this question "the fundamental question of metaphysics".
=== Symbolic thought and rational resolution of disputes ===
Leibniz believed that much of human reasoning could be reduced to calculations of a sort, and that such calculations could resolve many differences of opinion:
The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate, without further ado, to see who is right.
Leibniz's calculus ratiocinator, which resembles symbolic logic, can be viewed as a way of making such calculations feasible. Leibniz wrote memoranda that can now be read as groping attempts to get symbolic logic—and thus his calculus—off the ground. These writings remained unpublished until the appearance of a selection edited by Carl Immanuel Gerhardt (1859). Louis Couturat published a selection in 1901; by this time the main developments of modern logic had been created by Charles Sanders Peirce and by Gottlob Frege.
Leibniz thought symbols were important for human understanding. He attached so much importance to the development of good notations that he attributed all his discoveries in mathematics to this. His notation for calculus is an example of his skill in this regard. Leibniz's passion for symbols and notation, as well as his belief that these are essential to a well-running logic and mathematics, made him a precursor of semiotics.
But Leibniz took his speculations much further. Defining a character as any written sign, he then defined a "real" character as one that represents an idea directly and not simply as the word embodying the idea. Some real characters, such as the notation of logic, serve only to facilitate reasoning. Many characters well known in his day, including Egyptian hieroglyphics, Chinese characters, and the symbols of astronomy and chemistry, he deemed not real. Instead, he proposed the creation of a characteristica universalis or "universal characteristic", built on an alphabet of human thought in which each fundamental concept would be represented by a unique "real" character:
It is obvious that if we could find characters or signs suited for expressing all our thoughts as clearly and as exactly as arithmetic expresses numbers or geometry expresses lines, we could do in all matters insofar as they are subject to reasoning all that we can do in arithmetic and geometry. For all investigations which depend on reasoning would be carried out by transposing these characters and by a species of calculus.
Complex thoughts would be represented by combining characters for simpler thoughts. Leibniz saw that the uniqueness of prime factorization suggests a central role for prime numbers in the universal characteristic, a striking anticipation of Gödel numbering. Granted, there is no intuitive or mnemonic way to number any set of elementary concepts using the prime numbers.
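The prime-number encoding suggested above can be sketched directly; the concept names and prime assignments below are illustrative choices for the example, not Leibniz's own.

```python
# Encode a "complex thought" as a product of primes, one prime per
# elementary concept; unique factorization guarantees decodability.
PRIMES = [2, 3, 5, 7, 11, 13]
CONCEPTS = ["animal", "rational", "mortal", "winged", "aquatic", "social"]

def encode(concepts):
    """Multiply the primes assigned to each elementary concept."""
    n = 1
    for c in concepts:
        n *= PRIMES[CONCEPTS.index(c)]
    return n

def decode(n):
    """Recover the elementary concepts by trial division."""
    return [c for p, c in zip(PRIMES, CONCEPTS) if n % p == 0]

human = encode(["animal", "rational"])  # 2 * 3 = 6
print(human, decode(human))
```

As the text notes, the scheme works but gives no mnemonic link between a prime and the concept it stands for.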
Because Leibniz was a mathematical novice when he first wrote about the characteristic, at first he did not conceive it as an algebra but rather as a universal language or script. Only in 1676 did he conceive of a kind of "algebra of thought", modeled on and including conventional algebra and its notation. The resulting characteristic included a logical calculus, some combinatorics, algebra, his analysis situs (geometry of situation), a universal concept language, and more. What Leibniz actually intended by his characteristica universalis and calculus ratiocinator, and the extent to which modern formal logic does justice to the calculus ratiocinator, may never be established. Leibniz's idea of reasoning through a universal language of symbols and calculations remarkably foreshadows great 20th-century developments in formal systems, such as Turing completeness, where computation was used to define equivalent universal languages (see Turing degree).
=== Formal logic ===
Leibniz has been noted as one of the most important logicians between the times of Aristotle and Gottlob Frege. Leibniz enunciated the principal properties of what we now call conjunction, disjunction, negation, identity, set inclusion, and the empty set. The principles of Leibniz's logic and, arguably, of his whole philosophy, reduce to two:
All our ideas are compounded from a very small number of simple ideas, which form the alphabet of human thought.
Complex ideas proceed from these simple ideas by a uniform and symmetrical combination, analogous to arithmetical multiplication.
The formal logic that emerged early in the 20th century also requires, at minimum, unary negation and quantified variables ranging over some universe of discourse.
Leibniz published nothing on formal logic in his lifetime; most of what he wrote on the subject consists of working drafts. In his History of Western Philosophy, Bertrand Russell went so far as to claim that Leibniz had developed logic in his unpublished writings to a level which was reached only 200 years later.
Russell's principal work on Leibniz found that many of Leibniz's most startling philosophical ideas and claims (e.g., that each of the fundamental monads mirrors the whole universe) follow logically from Leibniz's conscious choice to reject relations between things as unreal. He regarded such relations as (real) qualities of things (Leibniz admitted unary predicates only): For him, "Mary is the mother of John" describes separate qualities of Mary and of John. This view contrasts with the relational logic of De Morgan, Peirce, Schröder and Russell himself, now standard in predicate logic. Notably, Leibniz also declared space and time to be inherently relational.
Leibniz's 1690 discovery of his algebra of concepts (deductively equivalent to Boolean algebra) and the associated metaphysics are of interest in present-day computational metaphysics.
== Mathematics ==
Although the mathematical notion of function was implicit in trigonometric and logarithmic tables, which existed in his day, Leibniz was the first, in 1692 and 1694, to employ it explicitly, to denote any of several geometric concepts derived from a curve, such as abscissa, ordinate, tangent, chord, and the perpendicular (see History of the function concept). In the 18th century, "function" lost these geometrical associations. Leibniz was also one of the pioneers in actuarial science, calculating the purchase price of life annuities and the liquidation of a state's debt.
Leibniz's research into formal logic, also relevant to mathematics, is discussed in the preceding section. The best overview of Leibniz's writings on calculus may be found in Bos (1974).
Leibniz, who invented one of the earliest mechanical calculators, said of calculation: "For it is unworthy of excellent men to lose hours like slaves in the labor of calculation which could safely be relegated to anyone else if machines were used."
=== Linear systems ===
Leibniz arranged the coefficients of a system of linear equations into an array, now called a matrix, in order to find a solution to the system if it existed; this method was later called Gaussian elimination. Leibniz also laid down the foundations of the theory of determinants, although the Japanese mathematician Seki Takakazu discovered determinants independently. His works show how to calculate a determinant by cofactor expansion, a method now named the Leibniz formula. Computing the determinant of an n × n matrix this way is impractical for large n, since it requires on the order of n! products, one for each n-permutation. He also solved systems of linear equations using determinants, a method now called Cramer's rule: Leibniz found it in 1684, while Gabriel Cramer published his own findings in 1750. Although Gaussian elimination requires only O(n³) arithmetic operations, linear algebra textbooks still teach cofactor expansion before LU factorization.
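As a sketch, cofactor expansion can be written as a short recursion; this illustrates the method and its factorial growth, not Leibniz's own presentation:

```python
def det(m):
    """Determinant by recursive cofactor expansion along the first row.

    Runs in roughly O(n!) time, which is why Gaussian elimination
    (O(n^3)) replaced it for large systems.
    """
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, with alternating sign.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                   # -2
print(det([[2, 0, 1], [1, 3, 2], [1, 1, 1]]))  # 0 (singular matrix)
```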
=== Geometry ===
The Leibniz formula for π states that

1 − 1/3 + 1/5 − 1/7 + ⋯ = π/4.
Leibniz wrote that circles "can most simply be expressed by this series, that is, the aggregate of fractions alternately added and subtracted". However, the series converges very slowly: roughly 10,000,000 terms are needed to obtain π/4 correct to 8 decimal places. Leibniz also tried to define a straight line in the course of attempting to prove the parallel postulate. While most mathematicians defined a straight line as the shortest line between two points, Leibniz believed that this was merely a property of a straight line rather than a definition.
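The slow convergence is easy to verify with a few partial sums (a quick illustrative check):

```python
import math

def leibniz_pi_over_4(terms):
    """Partial sum of 1 - 1/3 + 1/5 - 1/7 + ... with the given number of terms."""
    return sum((-1) ** k / (2 * k + 1) for k in range(terms))

# The error after n terms is bounded by the first omitted term, 1/(2n+1),
# so ~10^7 terms are needed for 8 correct decimal places.
for n in (10, 1000, 100000):
    approx = leibniz_pi_over_4(n)
    print(n, approx, abs(approx - math.pi / 4))
```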
=== Calculus ===
Leibniz is credited, along with Isaac Newton, with the invention of calculus (differential and integral calculus). According to Leibniz's notebooks, a critical breakthrough occurred on 11 November 1675, when he employed integral calculus for the first time to find the area under the graph of a function y = f(x). He introduced several notations used to this day, for instance the integral sign ∫, as in ∫ f(x) dx, representing an elongated S, from the Latin word summa, and the d used for differentials, as in dy/dx, from the Latin word differentia. Leibniz did not publish anything about his calculus until 1684. Leibniz expressed the inverse relation of integration and differentiation, later called the fundamental theorem of calculus, by means of a figure in his 1693 paper Supplementum geometriae dimensoriae.... However, James Gregory is credited for the theorem's discovery in geometric form, Isaac Barrow proved a more generalized geometric version, and Newton developed supporting theory. The concept became more transparent as developed through Leibniz's formalism and new notation. The product rule of differential calculus is still called "Leibniz's law". In addition, the theorem that tells how and when to differentiate under the integral sign is called the Leibniz integral rule.
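The product rule just named can be spot-checked numerically with a symmetric difference quotient; the functions and step size here are arbitrary illustrative choices:

```python
import math

def numderiv(f, x, h=1e-6):
    """Symmetric difference quotient approximating f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.3

# Leibniz's law: d/dx [f(x) g(x)] = f'(x) g(x) + f(x) g'(x),
# here with f = sin and g = exp.
lhs = numderiv(lambda t: math.sin(t) * math.exp(t), x)
rhs = math.cos(x) * math.exp(x) + math.sin(x) * math.exp(x)
print(abs(lhs - rhs))  # agrees to within numerical error
```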
Leibniz exploited infinitesimals in developing calculus, manipulating them in ways suggesting that they had paradoxical algebraic properties. George Berkeley, in a tract called The Analyst and also in De Motu, criticized these. A recent study argues that Leibnizian calculus was free of contradictions, and was better grounded than Berkeley's empiricist criticisms.
Leibniz introduced fractional calculus in a letter written to Guillaume de l'Hôpital in 1695. At the same time, Leibniz wrote to Johann Bernoulli about derivatives of "general order". In the correspondence between Leibniz and John Wallis in 1697, Wallis's infinite product for π/2 is discussed. Leibniz suggested using differential calculus to achieve this result. Leibniz further used the notation d^(1/2)y to denote the derivative of order 1/2.
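A concrete modern reading of the half-derivative uses Euler's extension of the power rule, d^α/dx^α x^k = Γ(k+1)/Γ(k−α+1) · x^(k−α), a later formalization rather than Leibniz's own; applying the order-1/2 derivative twice to x recovers the ordinary derivative:

```python
import math

def frac_deriv_power(k, alpha, x):
    """Riemann-Liouville fractional derivative of x**k at x > 0:
    Gamma(k+1)/Gamma(k-alpha+1) * x**(k-alpha)."""
    return math.gamma(k + 1) / math.gamma(k - alpha + 1) * x ** (k - alpha)

x = 2.0

# Half-derivative of x equals 2*sqrt(x/pi).
print(frac_deriv_power(1, 0.5, x))
print(2 * math.sqrt(x / math.pi))

# d^(1/2) x = (2/sqrt(pi)) * x**(1/2); taking the half-derivative of
# that power again yields the constant 1, i.e. d/dx x.
coeff = math.gamma(2) / math.gamma(1.5)       # 2/sqrt(pi)
print(coeff * frac_deriv_power(0.5, 0.5, x))  # 1.0
```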
From 1711 until his death, Leibniz was engaged in a dispute with John Keill, Newton and others, over whether Leibniz had invented calculus independently of Newton.
The use of infinitesimals in mathematics was frowned upon by followers of Karl Weierstrass, but survived in science and engineering, and even in rigorous mathematics, via the fundamental computational device known as the differential. Beginning in 1960, Abraham Robinson worked out a rigorous foundation for Leibniz's infinitesimals, using model theory, in the context of a field of hyperreal numbers. The resulting non-standard analysis can be seen as a belated vindication of Leibniz's mathematical reasoning. Robinson's transfer principle is a mathematical implementation of Leibniz's heuristic law of continuity, while the standard part function implements the Leibnizian transcendental law of homogeneity.
=== Topology ===
Leibniz was the first to use the term analysis situs, later used in the 19th century to refer to what is now known as topology. There are two takes on this situation. On the one hand, Mates, citing a 1954 paper in German by Jacob Freudenthal, argues:
Although for Leibniz the situs of a sequence of points is completely determined by the distance between them and is altered if those distances are altered, his admirer Euler, in the famous 1736 paper solving the Königsberg Bridge Problem and its generalizations, used the term geometria situs in such a sense that the situs remains unchanged under topological deformations. He mistakenly credits Leibniz with originating this concept. ... [It] is sometimes not realized that Leibniz used the term in an entirely different sense and hence can hardly be considered the founder of that part of mathematics.
But Hideaki Hirano argues differently, quoting Mandelbrot:
To sample Leibniz' scientific works is a sobering experience. Next to calculus, and to other thoughts that have been carried out to completion, the number and variety of premonitory thrusts is overwhelming. We saw examples in "packing", ... My Leibniz mania is further reinforced by finding that for one moment its hero attached importance to geometric scaling. In Euclidis Prota ..., which is an attempt to tighten Euclid's axioms, he states ...: "I have diverse definitions for the straight line. The straight line is a curve, any part of which is similar to the whole, and it alone has this property, not only among curves but among sets." This claim can be proved today.
Thus the fractal geometry promoted by Mandelbrot drew on Leibniz's notions of self-similarity and the principle of continuity: Natura non facit saltus. We also see that when Leibniz wrote, in a metaphysical vein, that "the straight line is a curve, any part of which is similar to the whole", he was anticipating topology by more than two centuries. As for "packing", Leibniz told his friend and correspondent Des Bosses to imagine a circle, then to inscribe within it three congruent circles with maximum radius; the latter smaller circles could be filled with three even smaller circles by the same procedure. This process can be continued infinitely, from which arises a good idea of self-similarity. Leibniz's improvement of Euclid's axiom contains the same concept.
He envisioned the field of combinatorial topology as early as 1679, in his work titled Characteristica Geometrica, as he "tried to formulate basic geometric properties of figures, to use special symbols to represent them, and to combine these properties under operations so as to produce new ones."
== Science and engineering ==
Leibniz's writings are currently discussed, not only for their anticipations and possible discoveries not yet recognized, but as ways of advancing present knowledge. Much of his writing on physics is included in Gerhardt's Mathematical Writings.
=== Physics ===
Leibniz contributed a fair amount to the statics and dynamics emerging around him, often disagreeing with Descartes and Newton. He devised a new theory of motion (dynamics) based on kinetic energy and potential energy, which posited space as relative, whereas Newton was thoroughly convinced that space was absolute. An important example of Leibniz's mature physical thinking is his Specimen Dynamicum of 1695.
Until the discovery of subatomic particles and the quantum mechanics governing them, many of Leibniz's speculative ideas about aspects of nature not reducible to statics and dynamics made little sense. For instance, he anticipated Albert Einstein by arguing, against Newton, that space, time and motion are relative, not absolute: "As for my own opinion, I have said more than once, that I hold space to be something merely relative, as time is, that I hold it to be an order of coexistences, as time is an order of successions."
Leibniz held a relational notion of space and time, against Newton's substantivalist views. According to Newton's substantivalism, space and time are entities in their own right, existing independently of things. Leibniz's relationalism, in contrast, describes space and time as systems of relations that exist between objects. The rise of general relativity and subsequent work in the history of physics has put Leibniz's stance in a more favorable light.
One of Leibniz's projects was to recast Newton's theory as a vortex theory. However, his project went beyond vortex theory, since at its heart there was an attempt to explain one of the most difficult problems in physics, that of the origin of the cohesion of matter.
The principle of sufficient reason has been invoked in recent cosmology, and his identity of indiscernibles in quantum mechanics, a field some even credit him with having anticipated in some sense. In addition to his theories about the nature of reality, Leibniz's contributions to the development of calculus have also had a major impact on physics.
==== The vis viva ====
Leibniz's vis viva (Latin for "living force") is mv², twice the modern kinetic energy. He realized that the total energy would be conserved in certain mechanical systems, so he considered it an innate motive characteristic of matter. Here too his thinking gave rise to another regrettable nationalistic dispute. His vis viva was seen as rivaling the conservation of momentum championed by Newton in England and by Descartes and Voltaire in France; hence academics in those countries tended to neglect Leibniz's idea. Leibniz knew of the validity of conservation of momentum. In reality, both energy and momentum are conserved (in closed systems), so both approaches are valid.
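The closing claim, that both mv and mv² are conserved, can be checked with the standard one-dimensional elastic-collision formulas (an illustrative sketch in modern terms, with arbitrary example values):

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D elastic collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0
u1, u2 = elastic_collision_1d(m1, v1, m2, v2)

momentum_before = m1 * v1 + m2 * v2        # Descartes/Newton: m*v
momentum_after = m1 * u1 + m2 * u2
vis_viva_before = m1 * v1**2 + m2 * v2**2  # Leibniz: m*v^2
vis_viva_after = m1 * u1**2 + m2 * u2**2

# Both quantities match before and after (up to floating-point rounding).
print(momentum_before, momentum_after)
print(vis_viva_before, vis_viva_after)
```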
=== Other natural science ===
By proposing that the Earth has a molten core, he anticipated modern geology. In embryology, he was a preformationist, but also proposed that organisms are the outcome of a combination of an infinite number of possible microstructures and of their powers. In the life sciences and paleontology, he revealed an amazing transformist intuition, fueled by his study of comparative anatomy and fossils. One of his principal works on this subject, Protogaea, unpublished in his lifetime, has recently been published in English for the first time. He worked out a primal organismic theory. In medicine, he exhorted the physicians of his time—with some results—to ground their theories in detailed comparative observations and verified experiments, and to distinguish firmly scientific and metaphysical points of view.
=== Psychology ===
Psychology was a central interest of Leibniz. He appears to be an "underappreciated pioneer of psychology". He wrote on topics which are now regarded as fields of psychology: attention and consciousness, memory, learning (association), motivation (the act of "striving"), emergent individuality, the general dynamics of development (evolutionary psychology). His discussions in the New Essays and Monadology often rely on everyday observations such as the behaviour of a dog or the noise of the sea, and he develops intuitive analogies (the synchronous running of clocks or the balance spring of a clock). He also devised postulates and principles that apply to psychology: the continuum of the unnoticed petites perceptions to the distinct, self-aware apperception, and psychophysical parallelism from the point of view of causality and of purpose: "Souls act according to the laws of final causes, through aspirations, ends and means. Bodies act according to the laws of efficient causes, i.e. the laws of motion. And these two realms, that of efficient causes and that of final causes, harmonize with one another." This idea refers to the mind-body problem, stating that the mind and brain do not act upon each other, but act alongside each other separately but in harmony. Leibniz, however, did not use the term psychologia.
Leibniz's epistemological position—against John Locke and English empiricism (sensualism)—was made clear: "Nihil est in intellectu quod non fuerit in sensu, nisi intellectu ipse." – "Nothing is in the intellect that was not first in the senses, except the intellect itself." Principles that are not present in sensory impressions can be recognised in human perception and consciousness: logical inferences, categories of thought, the principle of causality and the principle of purpose (teleology).
Leibniz found his most important interpreter in Wilhelm Wundt, founder of psychology as a discipline. Wundt used the "… nisi intellectu ipse" quotation in 1862 on the title page of his Beiträge zur Theorie der Sinneswahrnehmung (Contributions on the Theory of Sensory Perception) and published a detailed and ambitious monograph on Leibniz. Wundt shaped the term apperception, introduced by Leibniz, into an experimentally based apperception psychology that included neuropsychological modelling – an excellent example of how a concept created by a great philosopher could stimulate a psychological research program. One principle in the thinking of Leibniz played a fundamental role: "the principle of equality of separate but corresponding viewpoints." Wundt characterized this style of thought (perspectivism) in a way that also applied to himself—viewpoints that "supplement one another, while also being able to appear as opposites that only resolve themselves when considered more deeply."
Much of Leibniz's work went on to have a great impact on the field of psychology. Leibniz thought that there are many petites perceptions, or small perceptions of which we perceive but of which we are unaware. He believed that by the principle that phenomena found in nature were continuous by default, it was likely that the transition between conscious and unconscious states had intermediary steps. For this to be true, there must also be a portion of the mind of which we are unaware at any given time. His theory regarding consciousness in relation to the principle of continuity can be seen as an early theory regarding the stages of sleep. In this way, Leibniz's theory of perception can be viewed as one of many theories leading up to the idea of the unconscious. Leibniz was a direct influence on Ernst Platner, who is credited with originally coining the term Unbewußtseyn (unconscious). Additionally, the idea of subliminal stimuli can be traced back to his theory of small perceptions. Leibniz's ideas regarding music and tonal perception went on to influence the laboratory studies of Wilhelm Wundt.
=== Social science ===
In public health, he advocated establishing a medical administrative authority, with powers over epidemiology and veterinary medicine. He worked to set up a coherent medical training program, oriented towards public health and preventive measures. In economic policy, he proposed tax reforms and a national insurance program, and discussed the balance of trade. He even proposed something akin to what much later emerged as game theory. In sociology he laid the ground for communication theory.
=== Technology ===
In 1906, Garland published a volume of Leibniz's writings bearing on his many practical inventions and engineering work. To date, few of these writings have been translated into English. Nevertheless, it is well understood that Leibniz was a serious inventor, engineer, and applied scientist, with great respect for practical life. Following the motto theoria cum praxi, he urged that theory be combined with practical application, and thus has been claimed as the father of applied science. He designed wind-driven propellers and water pumps, mining machines to extract ore, hydraulic presses, lamps, submarines, clocks, etc. With Denis Papin, he created a steam engine. He even proposed a method for desalinating water. From 1680 to 1685, he struggled to overcome the chronic flooding that afflicted the ducal silver mines in the Harz Mountains, but did not succeed.
==== Computation ====
Leibniz may have been the first computer scientist and information theorist. Early in life, he documented the binary numeral system (base 2), then revisited that system throughout his career. While examining other cultures to compare his metaphysical views, Leibniz encountered the I Ching, an ancient Chinese book of divination. He interpreted its diagrams of yin and yang as corresponding to zero and one (see the Sinology section below). Leibniz was familiar with the work of Juan Caramuel y Lobkowitz and Thomas Harriot, who had independently developed the binary system: Caramuel worked extensively on logarithms, including logarithms with base 2, while Harriot's manuscripts contained a table of binary numbers and their notation, demonstrating that any number could be written in base 2. Regardless, Leibniz simplified the binary system and articulated logical properties such as conjunction, disjunction, negation, identity, inclusion, and the empty set. He anticipated Lagrangian interpolation and algorithmic information theory. His calculus ratiocinator anticipated aspects of the universal Turing machine. In 1961, Norbert Wiener suggested that Leibniz should be considered the patron saint of cybernetics, writing: "Indeed, the general idea of a computing machine is nothing but a mechanization of Leibniz's Calculus Ratiocinator."
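The base-2 representation and the logical connectives described above can be sketched in modern terms. The following Python fragment is an illustration only; the function names are our own choices, not anything from Leibniz's notation.

```python
def to_binary(n: int) -> str:
    """Write a non-negative integer in base 2 by repeated division by two."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder gives the next binary digit
        n //= 2
    return "".join(reversed(digits))

# Conjunction, disjunction, and negation on binary digits (0 or 1),
# among the logical properties Leibniz articulated.
def conj(a: int, b: int) -> int:
    return a & b      # true only when both are true

def disj(a: int, b: int) -> int:
    return a | b      # true when at least one is true

def neg(a: int) -> int:
    return 1 - a      # logical not

print(to_binary(19))                    # 10011
print(conj(1, 0), disj(1, 0), neg(1))   # 0 1 0
```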
In 1671, Leibniz began to invent a machine that could execute all four arithmetic operations, gradually improving it over a number of years. This "stepped reckoner" attracted fair attention and was the basis of his election to the Royal Society in 1673. A number of such machines were made during his years in Hanover by a craftsman working under his supervision. They were not an unambiguous success because they did not fully mechanize the carry operation. Couturat reported finding an unpublished note by Leibniz, dated 1674, describing a machine capable of performing some algebraic operations. Leibniz also devised a (now reproduced) cipher machine, recovered by Nicholas Rescher in 2010. In 1693, Leibniz described a design of a machine which could, in theory, integrate differential equations, which he called "integraph".
Leibniz was groping towards hardware and software concepts worked out much later by Charles Babbage and Ada Lovelace. In 1679, while mulling over his binary arithmetic, Leibniz imagined a machine in which binary numbers were represented by marbles, governed by a rudimentary sort of punched cards. Modern electronic digital computers replace Leibniz's marbles moving by gravity with shift registers, voltage gradients, and pulses of electrons, but otherwise they run roughly as Leibniz envisioned in 1679.
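The carry operation that Leibniz's mechanical designs struggled with can be sketched as modern ripple-carry binary addition. This Python fragment is a loose illustration of the process, not a description of his actual mechanism.

```python
def add_binary(a_bits, b_bits):
    """Add two equal-length bit lists (lowest-order bit first)."""
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry
        result.append(total % 2)   # the bit that stays in this column
        carry = total // 2         # the carry passed to the next column
    if carry:
        result.append(carry)       # a final carry extends the number
    return result

# 3 (11 in binary) + 1 (1 in binary) = 4 (100); bits listed lowest-order first
print(add_binary([1, 1], [1, 0]))  # [0, 0, 1]
```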
=== Librarian ===
Later in Leibniz's career (after the death of von Boyneburg), Leibniz moved to Paris and accepted a position as a librarian in the Hanoverian court of Johann Friedrich, Duke of Brunswick-Lüneburg. Leibniz's predecessor, Tobias Fleischer, had already created a cataloging system for the Duke's library, but it was a clumsy attempt. At this library, Leibniz focused more on advancing the library than on cataloging. For instance, within a month of taking the new position, he developed a comprehensive plan to expand the library. He was one of the first to consider developing a core collection for a library and felt "that a library for display and ostentation is a luxury and indeed superfluous, but a well-stocked and organized library is important and useful for all areas of human endeavor and is to be regarded on the same level as schools and churches". Leibniz, however, lacked the funds to develop the library in this manner. After working at this library, by the end of 1690 Leibniz was appointed privy-councilor and librarian of the Bibliotheca Augusta at Wolfenbüttel. It was an extensive library with at least 25,946 printed volumes. At this library, Leibniz sought to improve the catalog. He was not allowed to make complete changes to the existing closed catalog, but was allowed to improve upon it, so he started on that task immediately. He created an alphabetical author catalog and devised other cataloging methods that were not implemented. While serving as librarian of the ducal libraries in Hanover and Wolfenbüttel, Leibniz effectively became one of the founders of library science. He seemingly paid a good deal of attention to the classification of subject matter, favoring a well-balanced library covering a wide range of subjects and interests. Leibniz, for example, proposed the following classification system in the Otivm Hanoveranvm Sive Miscellanea (1737):
Theology
Jurisprudence
Medicine
Intellectual Philosophy
Philosophy of the Imagination or Mathematics
Philosophy of Sensible Things or Physics
Philology or Language
Civil History
Literary History and Libraries
General and Miscellaneous
He also designed a book indexing system in ignorance of the only other such system then extant, that of the Bodleian Library at Oxford University. He also called on publishers to distribute abstracts of all new titles they produced each year, in a standard form that would facilitate indexing. He hoped that this abstracting project would eventually include everything printed from his day back to Gutenberg. Neither proposal met with success at the time, but something like them became standard practice among English language publishers during the 20th century, under the aegis of the Library of Congress and the British Library.
He called for the creation of an empirical database as a way to further all sciences. His characteristica universalis, calculus ratiocinator, and a "community of minds"—intended, among other things, to bring political and religious unity to Europe—can be seen as distant unwitting anticipations of artificial languages (e.g., Esperanto and its rivals), symbolic logic, even the World Wide Web.
=== Advocate of scientific societies ===
Leibniz emphasized that research was a collaborative endeavor. Hence he warmly advocated the formation of national scientific societies along the lines of the British Royal Society and the French Académie Royale des Sciences. More specifically, in his correspondence and travels he urged the creation of such societies in Dresden, Saint Petersburg, Vienna, and Berlin. Only one such project came to fruition; in 1700, the Berlin Academy of Sciences was created. Leibniz drew up its first statutes, and served as its first President for the remainder of his life. That Academy evolved into the German Academy of Sciences, the publisher of the ongoing critical edition of his works.
== Law and morality ==
Leibniz's writings on law, ethics, and politics were long overlooked by English-speaking scholars, but this has changed of late.
While Leibniz was no apologist for absolute monarchy like Hobbes, or for tyranny in any form, neither did he echo the political and constitutional views of his contemporary John Locke, views invoked in support of liberalism, in 18th-century America and later elsewhere. The following excerpt from a 1695 letter to Baron J. C. Boyneburg's son Philipp is very revealing of Leibniz's political sentiments:
As for ... the great question of the power of sovereigns and the obedience their peoples owe them, I usually say that it would be good for princes to be persuaded that their people have the right to resist them, and for the people, on the other hand, to be persuaded to obey them passively. I am, however, quite of the opinion of Grotius, that one ought to obey as a rule, the evil of revolution being greater beyond comparison than the evils causing it. Yet I recognize that a prince can go to such excess, and place the well-being of the state in such danger, that the obligation to endure ceases. This is most rare, however, and the theologian who authorizes violence under this pretext should take care against excess; excess being infinitely more dangerous than deficiency.
In 1677, Leibniz called for a European confederation, governed by a council or senate, whose members would represent entire nations and would be free to vote their consciences; this is sometimes considered an anticipation of the European Union. He believed that Europe would adopt a uniform religion. He reiterated these proposals in 1715.
At the same time, he came to propose an interreligious and multicultural project to create a universal system of justice, which required of him a broad interdisciplinary perspective. To formulate it, he combined linguistics (especially sinology), moral and legal philosophy, management, economics, and politics.
=== Law ===
Leibniz trained as a legal academic, but under the tutelage of Cartesian-sympathiser Erhard Weigel we already see an attempt to solve legal problems by rationalist mathematical methods (Weigel's influence being most explicit in the Specimen Quaestionum Philosophicarum ex Jure collectarum (An Essay of Collected Philosophical Problems of Right)). For example, the Disputatio Inauguralis de Casibus Perplexis in Jure (Inaugural Disputation on Ambiguous Legal Cases) uses early combinatorics to solve some legal disputes, while the 1666 De Arte Combinatoria (On the Art of Combination) includes simple legal problems by way of illustration.
The use of combinatorial methods to solve legal and moral problems seems, via Athanasius Kircher and Daniel Schwenter to be of Llullist inspiration: Ramón Llull attempted to solve ecumenical disputes through recourse to a combinatorial mode of reasoning he regarded as universal (a mathesis universalis).
In the late 1660s the enlightened Prince-Bishop of Mainz Johann Philipp von Schönborn announced a review of the legal system and made available a position to support his current law commissioner. Leibniz left Franconia and made for Mainz before even winning the role. On reaching Frankfurt am Main Leibniz penned The New Method of Teaching and Learning the Law, by way of application. The text proposed a reform of legal education and is characteristically syncretic, integrating aspects of Thomism, Hobbesianism, Cartesianism and traditional jurisprudence. Leibniz's argument that the function of legal teaching was not to impress rules as one might train a dog, but to aid the student in discovering their own public reason, evidently impressed von Schönborn as he secured the job.
Leibniz's next major attempt to find a universal rational core to law, and so to found a legal "science of right", came while Leibniz worked in Mainz from 1667 to 1672. Starting initially from Hobbes' mechanistic doctrine of power, Leibniz reverted to logico-combinatorial methods in an attempt to define justice. As Leibniz's so-called Elementa Juris Naturalis advanced, he built in modal notions of right (possibility) and obligation (necessity), in which we see perhaps the earliest elaboration of his possible worlds doctrine within a deontic frame. While the Elementa ultimately remained unpublished, Leibniz continued to work on his drafts and promote their ideas to correspondents up until his death.
=== Ecumenism ===
Leibniz devoted considerable intellectual and diplomatic effort to what would now be called an ecumenical endeavor, seeking to reconcile the Roman Catholic and Lutheran churches. In this respect, he followed the example of his early patrons, Baron von Boyneburg and the Duke John Frederick—both cradle Lutherans who converted to Catholicism as adults—who did what they could to encourage the reunion of the two faiths, and who warmly welcomed such endeavors by others. (The House of Brunswick remained Lutheran, because the Duke's children did not follow their father.) These efforts included corresponding with French bishop Jacques-Bénigne Bossuet, and involved Leibniz in some theological controversy. He evidently thought that the thoroughgoing application of reason would suffice to heal the breach caused by the Reformation.
== Philology ==
Leibniz the philologist was an avid student of languages, eagerly latching on to any information about vocabulary and grammar that came his way. In 1710, he applied ideas of gradualism and uniformitarianism to linguistics in a short essay. He refuted the belief, widely held by Christian scholars of the time, that Hebrew was the primeval language of the human race. At the same time, he rejected the idea of unrelated language groups and considered them all to have a common source. He also refuted the argument, advanced by Swedish scholars in his day, that a form of proto-Swedish was the ancestor of the Germanic languages. He puzzled over the origins of the Slavic languages and was fascinated by classical Chinese. Leibniz was also an expert in the Sanskrit language.
He published the editio princeps (first printed edition) of the late medieval Chronicon Holtzatiae, a Latin chronicle of the County of Holstein.
== Sinology ==
Leibniz was perhaps the first major European intellectual to take a close interest in Chinese civilization, which he knew by corresponding with, and reading other works by, European Christian missionaries posted in China. He apparently read Confucius Sinarum Philosophus in the first year of its publication. He came to the conclusion that Europeans could learn much from the Confucian ethical tradition. He mulled over the possibility that the Chinese characters were an unwitting form of his universal characteristic. He noted how the I Ching hexagrams correspond to the binary numbers from 000000 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired. Leibniz communicated his ideas of the binary system representing Christianity to the Emperor of China, hoping it would convert him. Leibniz was one of the western philosophers of the time who attempted to accommodate Confucian ideas to prevailing European beliefs.
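The hexagram correspondence Leibniz noted can be made concrete: reading a hexagram's six lines as binary digits (broken/yin as 0, solid/yang as 1) maps the 64 hexagrams onto the numbers 0 through 63. The Python sketch below treats the bottom line as the lowest-order digit; this ordering is one conventional choice, not necessarily Leibniz's own.

```python
def hexagram_to_number(lines):
    """lines: six entries, each 'yin' or 'yang', bottom line first."""
    value = 0
    for i, line in enumerate(lines):
        bit = 1 if line == "yang" else 0  # solid line counts as 1
        value += bit << i                  # each line is one binary place
    return value

# All six lines broken -> 000000 -> 0
print(hexagram_to_number(["yin"] * 6))    # 0
# All six lines solid -> 111111 -> 63
print(hexagram_to_number(["yang"] * 6))   # 63
```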
Leibniz's attraction to Chinese philosophy originates from his perception that Chinese philosophy was similar to his own. The historian E.R. Hughes suggests that Leibniz's ideas of "simple substance" and "pre-established harmony" were directly influenced by Confucianism, pointing to the fact that they were conceived during the period when he was reading Confucius Sinarum Philosophus.
== Polymath ==
While making his grand tour of European archives to research the Brunswick family history that he never completed, Leibniz stopped in Vienna between May 1688 and February 1689, where he did much legal and diplomatic work for the Brunswicks. He visited mines, talked with mine engineers, and tried to negotiate export contracts for lead from the ducal mines in the Harz mountains. His proposal that the streets of Vienna be lit with lamps burning rapeseed oil was implemented. During a formal audience with the Austrian Emperor and in subsequent memoranda, he advocated reorganizing the Austrian economy, reforming the coinage of much of central Europe, negotiating a Concordat between the Habsburgs and the Vatican, and creating an imperial research library, official archive, and public insurance fund. He wrote and published an important paper on mechanics.
== Posthumous reputation ==
When Leibniz died, his reputation was in decline. He was remembered for only one book, the Théodicée, whose supposed central argument Voltaire lampooned in his popular book Candide, which concludes with the character Candide saying, "Non liquet" (it is not clear), a term that was applied during the Roman Republic to a legal verdict of "not proven". Voltaire's depiction of Leibniz's ideas was so influential that many believed it to be an accurate description. Thus Voltaire and his Candide bear some of the blame for the lingering failure to appreciate and understand Leibniz's ideas. Leibniz had an ardent disciple, Christian Wolff, whose dogmatic and facile outlook did Leibniz's reputation much harm. Leibniz also influenced David Hume, who read his Théodicée and used some of his ideas. In any event, philosophical fashion was moving away from the rationalism and system building of the 17th century, of which Leibniz had been such an ardent proponent. His work on law, diplomacy, and history was seen as of ephemeral interest. The vastness and richness of his correspondence went unrecognized.
Leibniz's reputation began to recover with the 1765 publication of the Nouveaux Essais. In 1768, Louis Dutens edited the first multi-volume edition of Leibniz's writings, followed in the 19th century by a number of editions, including those edited by Erdmann, Foucher de Careil, Gerhardt, Gerland, Klopp, and Mollat. Publication of Leibniz's correspondence with notables such as Antoine Arnauld, Samuel Clarke, Sophia of Hanover, and her daughter Sophia Charlotte of Hanover, began.
In 1900, Bertrand Russell published a critical study of Leibniz's metaphysics. Shortly thereafter, Louis Couturat published an important study of Leibniz, and edited a volume of Leibniz's heretofore unpublished writings, mainly on logic. They made Leibniz somewhat respectable among 20th-century analytical and linguistic philosophers in the English-speaking world (Leibniz had already been of great influence to many Germans such as Bernhard Riemann). For example, Leibniz's phrase salva veritate, meaning interchangeability without loss of or compromising the truth, recurs in Willard Quine's writings. Nevertheless, the secondary literature on Leibniz did not really blossom until after World War II. This is especially true of English-speaking countries; in Gregory Brown's bibliography fewer than 30 of the English-language entries were published before 1946. American Leibniz studies owe much to Leroy Loemker (1900–1985) through his translations and his interpretive essays in LeClerc (1973). Leibniz's philosophy was also highly regarded by Gilles Deleuze, who in 1988 published The Fold: Leibniz and the Baroque.
Nicholas Jolley has surmised that Leibniz's reputation as a philosopher is now perhaps higher than at any time since he was alive. Analytic and contemporary philosophy continue to invoke his notions of identity, individuation, and possible worlds. Work in the history of 17th- and 18th-century ideas has revealed more clearly the 17th-century "Intellectual Revolution" that preceded the better-known Industrial and commercial revolutions of the 18th and 19th centuries.
In Germany, various important institutions were named after Leibniz. In Hanover in particular, he is the namesake for some of the most important institutions in the town:
Leibniz University Hannover
Leibniz-Akademie, Institution for academic and non-academic training and further education in the business sector
Gottfried Wilhelm Leibniz Bibliothek – Niedersächsische Landesbibliothek, one of the largest regional and academic libraries in Germany and, alongside the Oldenburg State Library and the Herzog August Library in Wolfenbüttel, one of the three state libraries in Lower Saxony
Gottfried-Wilhelm-Leibniz-Gesellschaft, Society for the cultivation and dissemination of Leibniz's teachings
Outside of Hanover:
Leibniz Association, Berlin
Leibniz-Sozietät der Wissenschaften zu Berlin, Association of scientists founded in Berlin in 1993 with the legal form of a registered association; It continues the activities of the Academy of Sciences of the GDR with personnel continuity
Leibniz Kolleg of Tübingen University, central propaedeutic institution of the university, which aims to enable high school graduates to make a well-founded study decision through a ten-month, comprehensive general course of study and at the same time to introduce them to academic work
Leibniz Supercomputing Centre, Munich
more than 20 schools all over Germany
Awards:
Leibniz-Ring-Hannover, Honor given since 1997 by the Hannover Press Club to personalities or institutions "who have drawn attention to themselves through an outstanding performance or have made a special mark through their life's work."
Leibniz-Medaille of the Berlin-Brandenburg Academy of Sciences and Humanities, established in 1906 and awarded previously by the Prussian Academy of Sciences and later the German Academy of Sciences at Berlin
Gottfried-Wilhelm-Leibniz-Medaille of the Leibniz-Sozietät
Leibniz-Medaille der Akademie der Wissenschaften und der Literatur Mainz
In 1985, the German government created the Leibniz Prize, offering an annual award of 1.55 million euros for experimental results and 770,000 euros for theoretical ones. It was the world's largest prize for scientific achievement prior to the Fundamental Physics Prize.
The collection of manuscript papers of Leibniz at the Gottfried Wilhelm Leibniz Bibliothek – Niedersächsische Landesbibliothek was inscribed on UNESCO's Memory of the World Register in 2007.
=== Cultural references ===
Leibniz still receives popular attention. The Google Doodle for 1 July 2018 celebrated Leibniz's 372nd birthday. Using a quill, his hand is shown writing "Google" in binary ASCII code.
One of the earliest popular but indirect expositions of Leibniz was Voltaire's satire Candide, published in 1759. Leibniz was lampooned as Professor Pangloss, described as "the greatest philosopher of the Holy Roman Empire".
Leibniz also appears as one of the main historical figures in Neal Stephenson's series of novels The Baroque Cycle. Stephenson credits readings and discussions concerning Leibniz for inspiring him to write the series.
Leibniz also stars in Adam Ehrlich Sachs's novel The Organs of Sense.
The German biscuit Choco Leibniz is named after Leibniz, a famous resident of Hanover where the manufacturer Bahlsen is based.
== Writings and publication ==
Leibniz mainly wrote in three languages: scholastic Latin, French and German. During his lifetime, he published many pamphlets and scholarly articles, but only two "philosophical" books, the Combinatorial Art and the Théodicée. (He published numerous pamphlets, often anonymous, on behalf of the House of Brunswick-Lüneburg, most notably the "De jure suprematum", a major consideration of the nature of sovereignty.) One substantial book appeared posthumously, his Nouveaux essais sur l'entendement humain, which Leibniz had withheld from publication after the death of John Locke. Only in 1895, when Bodemann completed his catalogue of Leibniz's manuscripts and correspondence, did the enormous extent of Leibniz's Nachlass become clear: about 15,000 letters to more than 1,000 recipients plus more than 40,000 other items. Moreover, quite a few of these letters are of essay length. Much of his vast correspondence, especially the letters dated after 1700, remains unpublished, and much of what is published has appeared only in recent decades. The more than 67,000 records of the Leibniz Edition's Catalogue cover almost all of his known writings and the letters from him and to him. The amount, variety, and disorder of Leibniz's writings are a predictable result of a situation he described in a letter as follows:
I cannot tell you how extraordinarily distracted and spread out I am. I am trying to find various things in the archives; I look at old papers and hunt up unpublished documents. From these I hope to shed some light on the history of the [House of] Brunswick. I receive and answer a huge number of letters. At the same time, I have so many mathematical results, philosophical thoughts, and other literary innovations that should not be allowed to vanish that I often do not know where to begin.
The extant parts of the critical edition of Leibniz's writings are organized as follows:
Series 1. Political, Historical, and General Correspondence. 25 vols., 1666–1706.
Series 2. Philosophical Correspondence. 3 vols., 1663–1700.
Series 3. Mathematical, Scientific, and Technical Correspondence. 8 vols., 1672–1698.
Series 4. Political Writings. 9 vols., 1667–1702.
Series 5. Historical and Linguistic Writings. In preparation.
Series 6. Philosophical Writings. 7 vols., 1663–1690, and Nouveaux essais sur l'entendement humain.
Series 7. Mathematical Writings. 6 vols., 1672–1676.
Series 8. Scientific, Medical, and Technical Writings. 1 vol., 1668–1676.
The systematic cataloguing of all of Leibniz's Nachlass began in 1901. It was hampered by two world wars and then by decades of German division into two states, separating scholars and scattering portions of his literary estates. The ambitious project has had to deal with writings in seven languages, contained in some 200,000 written and printed pages. In 1985 it was reorganized and included in a joint program of German federal and state (Länder) academies. Since then the branches in Potsdam, Münster, Hanover and Berlin have jointly published 57 volumes of the critical edition, with an average of 870 pages, and prepared index and concordance works.
=== Selected works ===
The year given is usually that in which the work was completed, not of its eventual publication.
1666 (publ. 1690). De Arte Combinatoria (On the Art of Combination); partially translated in Loemker §1 and Parkinson (1966)
1667. Nova Methodus Discendae Docendaeque Iurisprudentiae (A New Method for Learning and Teaching Jurisprudence)
1667. "Dialogus de connexione inter res et verba"
1671. Hypothesis Physica Nova (New Physical Hypothesis); Loemker §8.I (part)
1673. Confessio philosophi (A Philosopher's Creed); an English translation is available online.
Oct. 1684. "Meditationes de cognitione, veritate et ideis" ("Meditations on Knowledge, Truth, and Ideas")
Nov. 1684. "Nova methodus pro maximis et minimis" ("New method for maximums and minimums"); translated in Struik, D. J., 1969. A Source Book in Mathematics, 1200–1800. Harvard University Press: 271–81.
1686. Discours de métaphysique; Martin and Brown (1988), Ariew and Garber 35, Loemker §35, Wiener III.3, Woolhouse and Francks 1
1686. Generales inquisitiones de analysi notionum et veritatum (General Inquiries About the Analysis of Concepts and of Truths)
1694. "De primae philosophiae Emendatione, et de Notione Substantiae" ("On the Correction of First Philosophy and the Notion of Substance")
1695. Système nouveau de la nature et de la communication des substances (New System of Nature)
1700. Accessiones historicae
1703. "Explication de l'Arithmétique Binaire" ("Explanation of Binary Arithmetic"); Carl Immanuel Gerhardt, Mathematical Writings VII.223. An English translation by Lloyd Strickland is available online.
1704 (publ. 1765). Nouveaux essais sur l'entendement humain. Translated in: Remnant, Peter, and Bennett, Jonathan, trans., 1996. New Essays on Human Understanding Langley translation 1896. Cambridge University Press. Wiener III.6 (part)
1707–1710. Scriptores rerum Brunsvicensium (3 Vols.)
1710. Théodicée; Farrer, A. M., and Huggard, E. M., trans., 1985 (1952). Wiener III.11 (part). An English translation is available online at Project Gutenberg.
1714. "Principes de la nature et de la Grâce fondés en raison"
1714. Monadologie; translated by Nicholas Rescher, 1991. The Monadology: An Edition for Students. University of Pittsburgh Press. Ariew and Garber 213, Loemker §67, Wiener III.13, Woolhouse and Francks 19. An English translation by Robert Latta is available online.
==== Posthumous works ====
1717. Collectanea Etymologica, edited by Leibniz's secretary, Johann Georg von Eckhart
1749. Protogaea
1750. Origines Guelficae
=== Collections ===
Six important collections of English translations are Wiener (1951), Parkinson (1966), Loemker (1969), Ariew and Garber (1989), Woolhouse and Francks (1998), and Strickland (2006). The ongoing critical edition of all of Leibniz's writings is Sämtliche Schriften und Briefe.
== See also ==
General Leibniz rule
Leibniz Association
Leibniz operator
List of German inventors and discoverers
List of pioneers in computer science
List of things named after Gottfried Leibniz
Mathesis universalis
Scientific Revolution
Leibniz University Hannover
Bartholomew Des Bosses
Joachim Bouvet
Outline of Gottfried Wilhelm Leibniz
Gottfried Wilhelm Leibniz bibliography
== Notes ==
== References ==
=== Citations ===
=== Sources ===
==== Bibliographies ====
Bodemann, Eduard, Die Leibniz-Handschriften der Königlichen öffentlichen Bibliothek zu Hannover, 1895, (anastatic reprint: Hildesheim, Georg Olms, 1966).
Bodemann, Eduard, Der Briefwechsel des Gottfried Wilhelm Leibniz in der Königlichen öffentlichen Bibliothek zu Hannover, 1889, (anastatic reprint: Hildesheim, Georg Olms, 1966).
Ravier, Émile, Bibliographie des œuvres de Leibniz, Paris: Alcan, 1937 (anastatic reprint Hildesheim: Georg Olms, 1966).
Heinekamp, Albert and Mertens, Marlen. Leibniz-Bibliographie. Die Literatur über Leibniz bis 1980, Frankfurt: Vittorio Klostermann, 1984.
Heinekamp, Albert and Mertens, Marlen. Leibniz-Bibliographie. Die Literatur über Leibniz. Band II: 1981–1990, Frankfurt: Vittorio Klostermann, 1996.
An updated bibliography of more than 25,000 titles is available at Leibniz Bibliographie.
==== Primary literature (chronologically) ====
Wiener, Philip, (ed.), 1951. Leibniz: Selections. Scribner.
Schrecker, Paul & Schrecker, Anne Martin, (eds.), 1965. Monadology and other Philosophical Essays. Prentice-Hall.
Parkinson, G. H. R. (ed.), 1966. Logical Papers. Clarendon Press.
Mason, H. T. & Parkinson, G. H. R. (eds.), 1967. The Leibniz-Arnauld Correspondence. Manchester University Press.
Loemker, Leroy, (ed.), 1969 [1956]. Leibniz: Philosophical Papers and Letters. Reidel.
Morris, Mary & Parkinson, G. H. R. (eds.), 1973. Philosophical Writings. Everyman's University Library.
Riley, Patrick, (ed.), 1988. Leibniz: Political Writings. Cambridge University Press.
Martin, R. N. D. & Brown, Stuart (eds.), 1988. Discourse on Metaphysics and Related Writings. Manchester University Press.
Ariew, Roger and Garber, Daniel. (eds.), 1989. Leibniz: Philosophical Essays. Hackett.
Rescher, Nicholas (ed.), 1991. G. W. Leibniz's Monadology. An Edition for Students, University of Pittsburgh Press.
Rescher, Nicholas, On Leibniz, (Pittsburgh: University of Pittsburgh Press, 2013).
Parkinson, G. H. R. (ed.) 1992. De Summa Rerum. Metaphysical Papers, 1675–1676. Yale University Press.
Cook, Daniel, & Rosemont, Henry Jr., (eds.), 1994. Leibniz: Writings on China. Open Court.
Farrer, Austin (ed.), 1995. Theodicy, Open Court.
Remnant, Peter, & Bennett, Jonathan, (eds.), 1996 (1981). Leibniz: New Essays on Human Understanding. Cambridge University Press.
Woolhouse, R. S., and Francks, R., (eds.), 1997. Leibniz's 'New System' and Associated Contemporary Texts. Oxford University Press.
Woolhouse, R. S., and Francks, R., (eds.), 1998. Leibniz: Philosophical Texts. Oxford University Press.
Ariew, Roger, (ed.), 2000. G. W. Leibniz and Samuel Clarke: Correspondence. Hackett.
Richard T. W. Arthur, (ed.), 2001. The Labyrinth of the Continuum: Writings on the Continuum Problem, 1672–1686. Yale University Press.
Richard T. W. Arthur, 2014. Leibniz. John Wiley & Sons.
Robert C. Sleigh Jr., (ed.), 2005. Confessio Philosophi: Papers Concerning the Problem of Evil, 1671–1678. Yale University Press.
Dascal, Marcelo (ed.), 2006. G. W. Leibniz. The Art of Controversies, Springer.
Strickland, Lloyd, 2006 (ed.). The Shorter Leibniz Texts: A Collection of New Translations. Continuum.
Look, Brandon and Rutherford, Donald (eds.), 2007. The Leibniz-Des Bosses Correspondence, Yale University Press.
Cohen, Claudine and Wakefield, Andre, (eds.), 2008. Protogaea. University of Chicago Press.
Murray, Michael, (ed.) 2011. Dissertation on Predestination and Grace, Yale University Press.
Strickland, Lloyd (ed.), 2011. Leibniz and the two Sophies. The Philosophical Correspondence, Toronto.
Lodge, Paul (ed.), 2013. The Leibniz-De Volder Correspondence: With Selections from the Correspondence Between Leibniz and Johann Bernoulli, Yale University Press.
Artosi, Alberto, Pieri, Bernardo, Sartor, Giovanni (eds.), 2014. Leibniz: Logico-Philosophical Puzzles in the Law, Springer.
De Iuliis, Carmelo Massimo, (ed.), 2017. Leibniz: The New Method of Learning and Teaching Jurisprudence, Talbot, Clark NJ.
==== Secondary literature up to 1950 ====
Du Bois-Reymond, Emil, 1912. Leibnizsche Gedanken in der neueren Naturwissenschaft, Berlin: Dummler, 1871 (reprinted in Reden, Leipzig: Veit, vol. 1).
Couturat, Louis, 1901. La Logique de Leibniz. Paris: Felix Alcan.
Heidegger, Martin, 1983. The Metaphysical Foundations of Logic. Indiana University Press (lecture course, 1928).
Lovejoy, Arthur O., 1957 (1936). "Plenitude and Sufficient Reason in Leibniz and Spinoza" in his The Great Chain of Being. Harvard University Press: 144–182. Reprinted in Frankfurt, H. G., (ed.), 1972. Leibniz: A Collection of Critical Essays. Anchor Books 1972.
Mackie, John Milton; Guhrauer, Gottschalk Eduard, 1845. Life of Godfrey William von Leibnitz. Gould, Kendall and Lincoln.
Russell, Bertrand, 1900, A Critical Exposition of the Philosophy of Leibniz, Cambridge: The University Press.
Smith, David Eugene (1929). A Source Book in Mathematics. New York and London: McGraw-Hill Book Company, Inc.
Trendelenburg, F. A., 1857, "Über Leibnizens Entwurf einer allgemeinen Charakteristik," Philosophische Abhandlungen der Königlichen Akademie der Wissenschaften zu Berlin. Aus dem Jahr 1856, Berlin: Commission Dümmler, pp. 36–69.
Adolphus William Ward (1911). Leibniz as a Politician: The Adamson Lecture, 1910 (1st ed.). Manchester: University Press. Wikidata Q19095295. (lecture)
==== Secondary literature post-1950 ====
Adams, Robert Merrihew. 1994. Leibniz: Determinist, Theist, Idealist. New York: Oxford, Oxford University Press.
Aiton, Eric J., 1985. Leibniz: A Biography. Hilger (UK).
Antognazza, Maria Rosa, 2008. Leibniz: An Intellectual Biography. Cambridge University Press.
Antognazza, Maria Rosa, 2016. Leibniz: A Very Short Introduction. Oxford University Press.
Antognazza, Maria Rosa, ed., 2018. Oxford Handbook of Leibniz. Oxford University Press.
Barrow, John D.; Tipler, Frank J. (1986). The Anthropic Cosmological Principle (1st ed.). Oxford University Press. ISBN 978-0-19-282147-8. LCCN 87028148.
Borowski, Audrey, 2024. Leibniz in His World: The Making of a Savant. Princeton University Press.
Bos, H. J. M. (1974). "Differentials, higher-order differentials and the derivative in the Leibnizian calculus". Archive for History of Exact Sciences. 14: 1–90. doi:10.1007/bf00327456. S2CID 120779114.
Brown, Stuart (ed.), 1999. The Young Leibniz and His Philosophy (1646–76), Dordrecht, Kluwer.
Connelly, Stephen, 2021. Leibniz: A Contribution to the Archaeology of Power, Edinburgh University Press ISBN 9781474418065.
Davis, Martin, 2000. The Universal Computer: The Road from Leibniz to Turing. WW Norton.
Deleuze, Gilles, 1993. The Fold: Leibniz and the Baroque. University of Minnesota Press.
Fahrenberg, Jochen, 2017. PsyDok ZPID The influence of Gottfried Wilhelm Leibniz on the Psychology, Philosophy, and Ethics of Wilhelm Wundt.
Fahrenberg, Jochen, 2020. Wilhelm Wundt (1832–1920). Introduction, Quotations, Reception, Commentaries, Attempts at Reconstruction. Pabst Science Publishers, Lengerich 2020, ISBN 978-3-95853-574-9.
Finster, Reinhard & van den Heuvel, Gerd 2000. Gottfried Wilhelm Leibniz. Mit Selbstzeugnissen und Bilddokumenten. 4. Auflage. Rowohlt, Reinbek bei Hamburg (Rowohlts Monographien, 50481), ISBN 3-499-50481-2.
Grattan-Guinness, Ivor, 1997. The Norton History of the Mathematical Sciences. W W Norton.
Hall, A. R., 1980. Philosophers at War: The Quarrel between Newton and Leibniz. Cambridge University Press.
Hamza, Gabor, 2005. "Le développement du droit privé européen". ELTE Eotvos Kiado Budapest.
Hoeflich, M. H. (1986). "Law & Geometry: Legal Science from Leibniz to Langdell". American Journal of Legal History. 30 (2): 95–121. doi:10.2307/845705. JSTOR 845705.
Hostler, John, 1975. Leibniz's Moral Philosophy. UK: Duckworth.
Ishiguro, Hidé 1990. Leibniz's Philosophy of Logic and Language. Cambridge University Press.
Jolley, Nicholas, (ed.), 1995. The Cambridge Companion to Leibniz. Cambridge University Press.
Kaldis, Byron, 2011. "Leibniz' Argument for Innate Ideas", in Bruce, Michael and Barbone, Steven, eds., Just the Arguments: 100 of the Most Important Arguments in Western Philosophy. Wiley-Blackwell.
Karabell, Zachary (2003). Parting the desert: the creation of the Suez Canal. Alfred A. Knopf. ISBN 978-0-375-40883-0.
Kempe, Michael, 2024. The Best of All Possible Worlds: A Life of Leibniz in Seven Pivotal Days. W. W. Norton.
Kromer, Ralf, and Yannick Chin-Drian. New Essays on Leibniz Reception: In Science and Philosophy of Science 1800-2000. 1st ed. 2012. Heidelberg: Birkhauser, 2012.
LeClerc, Ivor (ed.), 1973. The Philosophy of Leibniz and the Modern World. Vanderbilt University Press.
Luchte, James (2006). "Mathesis and Analysis: Finitude and the Infinite in the Monadology of Leibniz". Heythrop Journal. 47 (4): 519–543. doi:10.1111/j.1468-2265.2006.00296.x.
Mates, Benson, 1986. The Philosophy of Leibniz: Metaphysics and Language. Oxford University Press.
Mercer, Christia, 2001. Leibniz's Metaphysics: Its Origins and Development. Cambridge University Press.
Perkins, Franklin, 2004. Leibniz and China: A Commerce of Light. Cambridge University Press.
Riley, Patrick, 1996. Leibniz's Universal Jurisprudence: Justice as the Charity of the Wise. Harvard University Press.
Rutherford, Donald, 1998. Leibniz and the Rational Order of Nature. Cambridge University Press.
Schulte-Albert, H. G. (1971). Gottfried Wilhelm Leibniz and Library Classification. The Journal of Library History (1966–1972), (2). 133–152.
Sepioł, Zbigniew (2003). "Legal and political thought of Gottfried Wilhelm Leibniz". Studia Iuridica (in Polish). 41: 227–250 – via CEEOL.
Smith, Justin E. H., 2011. Divine Machines. Leibniz and the Sciences of Life, Princeton University Press.
Wilson, Catherine, 1989. Leibniz's Metaphysics: A Historical and Comparative Study. Princeton University Press.
Zalta, E. N. (2000). "A (Leibnizian) Theory of Concepts" (PDF). Philosophiegeschichte und Logische Analyse / Logical Analysis and History of Philosophy. 3: 137–183. doi:10.30965/26664275-00301008.
== External links ==
Works by Gottfried Wilhelm Leibniz at Project Gutenberg
Works by or about Gottfried Wilhelm Leibniz at the Internet Archive
Works by Gottfried Wilhelm Leibniz at LibriVox (public domain audiobooks)
Peckhaus, Volker. "Leibniz's Influence on 19th Century Logic". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
Burnham, Douglas. "Gottfried Leibniz: Metaphysics". Internet Encyclopedia of Philosophy.
Carlin, Laurence. "Gottfried Leibniz: Causation". Internet Encyclopedia of Philosophy.
Horn, Joshua. "Leibniz: Modal Metaphysics". Internet Encyclopedia of Philosophy.
Jorati, Julia. "Leibniz: Philosophy of Mind". Internet Encyclopedia of Philosophy.
Lenzen, Wolfgang. "Leibniz: Logic". Internet Encyclopedia of Philosophy.
O'Connor, John J.; Robertson, Edmund F. "Gottfried Wilhelm Leibniz". MacTutor History of Mathematics Archive. University of St Andrews.
Gottfried Wilhelm Leibniz at the Mathematics Genealogy Project
Translations by Jonathan Bennett, of the New Essays, the exchanges with Bayle, Arnauld and Clarke, and about 15 shorter works.
Gottfried Wilhelm Leibniz: Texts and Translations, compiled by Donald Rutherford, UCSD
Leibnitiana, links and resources edited by Gregory Brown, University of Houston
Philosophical Works of Leibniz translated by G.M. Duncan (1890)
The Best of All Possible Worlds: Nicholas Rescher Talks About Gottfried Wilhelm von Leibniz's "Versatility and Creativity"
"Protogæa" Archived 1 August 2020 at the Wayback Machine (1693, Latin, in Acta eruditorum) – Linda Hall Library
Protogaea Archived 1 August 2020 at the Wayback Machine (1749, German) – full digital facsimile from Linda Hall Library
Leibniz's (1768, 6-volume) Opera omnia – digital facsimile
Leibniz's arithmetical machine, 1710, online and analyzed on BibNum Archived 24 July 2017 at the Wayback Machine [click 'à télécharger' for English analysis]
Leibniz's binary numeral system, 'De progressione dyadica', 1679, online and analyzed on BibNum Archived 24 July 2017 at the Wayback Machine [click 'à télécharger' for English analysis] | Wikipedia/Algebra_of_concepts |
In mathematics, the algebra of sets, not to be confused with the mathematical structure of an algebra of sets, defines the properties and laws of sets, the set-theoretic operations of union, intersection, and complementation, and the relations of set equality and set inclusion. It also provides systematic procedures for evaluating expressions and performing calculations involving these operations and relations.
Any set of sets closed under the set-theoretic operations forms a Boolean algebra with the join operator being union, the meet operator being intersection, the complement operator being set complement, the bottom being ∅, and the top being the universe set under consideration.
== Fundamentals ==
The algebra of sets is the set-theoretic analogue of the algebra of numbers. Just as arithmetic addition and multiplication are associative and commutative, so are set union and intersection; just as the arithmetic relation "less than or equal" is reflexive, antisymmetric and transitive, so is the set relation of "subset".
It is the algebra of the set-theoretic operations of union, intersection and complementation, and the relations of equality and inclusion. For a basic introduction to sets see the article on sets, for a fuller account see naive set theory, and for a full rigorous axiomatic treatment see axiomatic set theory.
== Fundamental properties of set algebra ==
The binary operations of set union (∪) and intersection (∩) satisfy many identities. Several of these identities or "laws" have well-established names.
Commutative property:
A ∪ B = B ∪ A
A ∩ B = B ∩ A
Associative property:
(A ∪ B) ∪ C = A ∪ (B ∪ C)
(A ∩ B) ∩ C = A ∩ (B ∩ C)
Distributive property:
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
The union and intersection of sets may be seen as analogous to the addition and multiplication of numbers. Like addition and multiplication, the operations of union and intersection are commutative and associative, and intersection distributes over union. However, unlike addition and multiplication, union also distributes over intersection.
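These laws can be checked directly with Python's built-in set type, where | is union and & is intersection (the three sets below are arbitrary examples):

```python
# Arbitrary example sets; any finite sets behave the same way.
A = {1, 2, 3}
B = {2, 3, 4}
C = {3, 4, 5}

# Commutative laws
assert A | B == B | A
assert A & B == B & A

# Associative laws
assert (A | B) | C == A | (B | C)
assert (A & B) & C == A & (B & C)

# Distributive laws: each of union and intersection
# distributes over the other.
assert A | (B & C) == (A | B) & (A | C)
assert A & (B | C) == (A & B) | (A & C)
```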
Two additional pairs of properties involve the special sets called the empty set ∅ and the universe set U, together with the complement operator (A∁ denotes the complement of A; this can also be written as A′, read as "A prime"). The empty set has no members, and the universe set has all possible members (in a particular context).
Identity:
A ∪ ∅ = A
A ∩ U = A
Complement:
A ∪ A∁ = U
A ∩ A∁ = ∅
The identity expressions (together with the commutative expressions) say that, just like 0 and 1 for addition and multiplication, ∅ and U are the identity elements for union and intersection, respectively.
Unlike addition and multiplication, union and intersection do not have inverse elements. However, the complement laws give the fundamental properties of the somewhat inverse-like unary operation of set complementation.
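A minimal sketch of the identity and complement laws using Python sets; Python has no complement operation, so the complement is modelled as set difference from an explicitly chosen universe U:

```python
U = set(range(10))      # the universe set for this example
A = {0, 2, 4, 6, 8}
Ac = U - A              # the complement of A relative to U

# Identity laws
assert A | set() == A
assert A & U == A

# Complement laws
assert A | Ac == U
assert A & Ac == set()
```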
The preceding five pairs of formulae—the commutative, associative, distributive, identity and complement formulae—encompass all of set algebra, in the sense that every valid proposition in the algebra of sets can be derived from them.
Note that if the complement formulae are weakened to the rule (A∁)∁ = A, then this is exactly the algebra of propositional linear logic.
== Principle of duality ==
Each of the identities stated above is one of a pair of identities such that each can be transformed into the other by interchanging ∪ and ∩, while also interchanging ∅ and U.
These are examples of an extremely important and powerful property of set algebra, namely, the principle of duality for sets, which asserts that for any true statement about sets, the dual statement obtained by interchanging unions and intersections, interchanging U and ∅, and reversing inclusions is also true. A statement is said to be self-dual if it is equal to its own dual.
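For example, the identity law for union and its dual, obtained by swapping ∪ with ∩ and ∅ with U, can both be checked on the same sets:

```python
U = set(range(8))   # the universe set for this example
A = {1, 3, 5}

statement = (A | set()) == A   # A ∪ ∅ = A (identity law for union)
dual = (A & U) == A            # A ∩ U = A (its dual)
assert statement and dual
```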
== Some additional laws for unions and intersections ==
The following proposition states six more important laws of set algebra, involving unions and intersections.
PROPOSITION 3: For any subsets A and B of a universe set U, the following identities hold:
idempotent laws:
A ∪ A = A
A ∩ A = A
domination laws:
A ∪ U = U
A ∩ ∅ = ∅
absorption laws:
A ∪ (A ∩ B) = A
A ∩ (A ∪ B) = A
As noted above, each of the laws stated in proposition 3 can be derived from the five fundamental pairs of laws stated above. As an illustration, a proof is given below for the idempotent law for union.
Proof:
A ∪ A = (A ∪ A) ∩ U (by the identity law for intersection)
= (A ∪ A) ∩ (A ∪ A∁) (by the complement law for union)
= A ∪ (A ∩ A∁) (by the distributive law of union over intersection)
= A ∪ ∅ (by the complement law for intersection)
= A (by the identity law for union)
The following proof illustrates that the dual of the above proof is the proof of the dual of the idempotent law for union, namely the idempotent law for intersection.
Proof:
A ∩ A = (A ∩ A) ∪ ∅ (by the identity law for union)
= (A ∩ A) ∪ (A ∩ A∁) (by the complement law for intersection)
= A ∩ (A ∪ A∁) (by the distributive law of intersection over union)
= A ∩ U (by the complement law for union)
= A (by the identity law for intersection)
Intersection can be expressed in terms of set difference:
A ∩ B = A ∖ (A ∖ B)
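In Python, where - is set difference, this identity reads:

```python
A = {1, 2, 3, 4}
B = {3, 4, 5}

# A - B keeps the elements of A not in B; removing those
# from A leaves exactly the elements common to A and B.
assert A & B == A - (A - B)
```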
== Some additional laws for complements ==
The following proposition states five more important laws of set algebra, involving complements.
PROPOSITION 4: Let A and B be subsets of a universe U, then:
De Morgan's laws:
(A ∪ B)∁ = A∁ ∩ B∁
(A ∩ B)∁ = A∁ ∪ B∁
double complement or involution law:
(A∁)∁ = A
complement laws for the universe set and the empty set:
∅∁ = U
U∁ = ∅
Notice that the double complement law is self-dual.
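De Morgan's laws, the involution law, and the complement laws for the universe and the empty set can all be verified in Python with the complement modelled as difference from an explicit universe:

```python
U = set(range(10))       # the universe set for this example
A = {1, 2, 3}
B = {3, 4, 5}

def comp(S):
    """Complement relative to the universe U."""
    return U - S

# De Morgan's laws
assert comp(A | B) == comp(A) & comp(B)
assert comp(A & B) == comp(A) | comp(B)

# Double complement (involution) law
assert comp(comp(A)) == A

# Complement laws for the universe set and the empty set
assert comp(set()) == U
assert comp(U) == set()
```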
The next proposition, which is also self-dual, says that the complement of a set is the only set that satisfies the complement laws. In other words, complementation is characterized by the complement laws.
PROPOSITION 5: Let A and B be subsets of a universe U, then:
uniqueness of complements:
If A ∪ B = U and A ∩ B = ∅, then B = A∁
== Algebra of inclusion ==
The following proposition says that inclusion, that is, the binary relation of one set being a subset of another, is a partial order.
PROPOSITION 6: If A, B and C are sets then the following hold:
reflexivity:
A ⊆ A
antisymmetry:
A ⊆ B and B ⊆ A if and only if A = B
transitivity:
If A ⊆ B and B ⊆ C, then A ⊆ C
The following proposition says that, for any set S, the power set of S, ordered by inclusion, is a bounded lattice; together with the distributive and complement laws above, this shows that it is a Boolean algebra.
PROPOSITION 7: If A, B and C are subsets of a set S then the following hold:
existence of a least element and a greatest element:
∅ ⊆ A ⊆ S
existence of joins:
A ⊆ A ∪ B
If A ⊆ C and B ⊆ C, then A ∪ B ⊆ C
existence of meets:
A ∩ B ⊆ A
If C ⊆ A and C ⊆ B, then C ⊆ A ∩ B
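For a small set S, Proposition 7 can be checked exhaustively by enumerating the power set; the sketch below verifies the bounds and that unions and intersections behave as least upper bounds and greatest lower bounds:

```python
from itertools import combinations

S = {0, 1, 2}
# The power set of S, as frozensets so they can sit in a list.
subsets = [frozenset(c) for r in range(len(S) + 1)
           for c in combinations(S, r)]

for A in subsets:
    assert set() <= A <= S         # least and greatest elements
    for B in subsets:
        assert A <= A | B          # A ∪ B is an upper bound of A
        assert A & B <= A          # A ∩ B is a lower bound of A
        for C in subsets:
            if A <= C and B <= C:
                assert A | B <= C  # ...and the least upper bound
            if C <= A and C <= B:
                assert C <= A & B  # ...and the greatest lower bound
```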
The following proposition says that the statement A ⊆ B is equivalent to various other statements involving unions, intersections and complements.
PROPOSITION 8: For any two sets A and B, the following are equivalent:
A ⊆ B
A ∩ B = A
A ∪ B = B
A ∖ B = ∅
B∁ ⊆ A∁
The above proposition shows that the relation of set inclusion can be characterized by either of the operations of set union or set intersection, which means that the notion of set inclusion is axiomatically superfluous.
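The equivalence can be checked exhaustively over every pair of subsets of a small universe; for each pair, the five conditions are either all true or all false:

```python
from itertools import combinations

U = {0, 1, 2, 3}
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(U, r)]

for A in subsets:
    for B in subsets:
        conditions = [
            A <= B,                 # A ⊆ B
            A & B == A,             # A ∩ B = A
            A | B == B,             # A ∪ B = B
            A - B == frozenset(),   # A ∖ B = ∅
            U - B <= U - A,         # B∁ ⊆ A∁
        ]
        assert all(conditions) or not any(conditions)
```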
== Algebra of relative complements ==
The following proposition lists several identities concerning relative complements and set-theoretic differences.
PROPOSITION 9: For any universe U and subsets A, B and C of U, the following identities hold:
C ∖ (A ∩ B) = (C ∖ A) ∪ (C ∖ B)
C ∖ (A ∪ B) = (C ∖ A) ∩ (C ∖ B)
C ∖ (B ∖ A) = (A ∩ C) ∪ (C ∖ B)
(B ∖ A) ∩ C = (B ∩ C) ∖ (A ∩ C) = (B ∩ C) ∖ A = B ∩ (C ∖ A)
(B ∖ A) ∪ C = (B ∪ C) ∖ (A ∖ C)
(B ∖ A) ∖ C = B ∖ (A ∪ C)
A ∖ A = ∅
∅ ∖ A = ∅
A ∖ ∅ = A
B ∖ A = A∁ ∩ B
(B ∖ A)∁ = A ∪ B∁
U ∖ A = A∁
A ∖ U = ∅
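A few of these identities, checked on small example sets (any subsets of U would do):

```python
U = set(range(8))        # the universe set for this example
A = {1, 2, 3}
B = {2, 3, 4, 5}
C = {3, 5, 6}

def comp(S):
    """Complement relative to the universe U."""
    return U - S

assert C - (A & B) == (C - A) | (C - B)
assert C - (A | B) == (C - A) & (C - B)
assert C - (B - A) == (A & C) | (C - B)
assert (B - A) & C == (B & C) - A == B & (C - A)
assert B - A == comp(A) & B
assert comp(B - A) == A | comp(B)
assert U - A == comp(A)
assert A - U == set()
```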
== See also ==
σ-algebra is an algebra of sets, completed to include countably infinite operations.
Axiomatic set theory
Image (mathematics) § Properties
Field of sets
List of set identities and relations
Naive set theory
Set (mathematics)
Topological space — a subset of ℘(X), the power set of X, closed with respect to arbitrary union, finite intersection and containing ∅ and X.
== References ==
Stoll, Robert R.; Set Theory and Logic, Mineola, N.Y.: Dover Publications (1979) ISBN 0-486-63829-4. "The Algebra of Sets", pp. 16–23.
Courant, Richard, Herbert Robbins, Ian Stewart, What is mathematics?: An Elementary Approach to Ideas and Methods, Oxford University Press US, 1996. ISBN 978-0-19-510519-3. "SUPPLEMENT TO CHAPTER II THE ALGEBRA OF SETS".
== External links ==
Operations on Sets at ProvenMath | Wikipedia/Algebra_of_sets |
Boolean differential calculus (BDC) (German: Boolescher Differentialkalkül (BDK)) is a subject field of Boolean algebra discussing changes of Boolean variables and Boolean functions.
Boolean differential calculus concepts are analogous to those of classical differential calculus, notably studying the changes in functions and variables with respect to another/others.
The Boolean differential calculus allows various aspects of dynamical systems theory such as
automata theory on finite automata
Petri net theory
supervisory control theory (SCT)
to be discussed in a unified and closed form, with their individual advantages combined.
== History and applications ==
Originally inspired by the design and testing of switching circuits and the utilization of error-correcting codes in electrical engineering, the roots for the development of what later would evolve into the Boolean differential calculus were initiated by works of Irving S. Reed, David E. Muller, David A. Huffman, Sheldon B. Akers Jr. and A. D. Talantsev (A. D. Talancev, А. Д. Таланцев) between 1954 and 1959, and of Frederick F. Sellers Jr., Mu-Yue Hsiao and Leroy W. Bearnson in 1968.
Since then, significant advances have been accomplished in both the theory and the application of the BDC in switching circuit design and logic synthesis.
Works of André Thayse, Marc Davio and Jean-Pierre Deschamps in the 1970s formed the basics of BDC on which Dieter Bochmann, Christian Posthoff and Bernd Steinbach further developed BDC into a self-contained mathematical theory later on.
A complementary theory of Boolean integral calculus (German: Boolescher Integralkalkül) has been developed as well.
BDC has also found uses in discrete event dynamic systems (DEDS) in digital network communication protocols.
Meanwhile, BDC has seen extensions to multi-valued variables and functions as well as to lattices of Boolean functions.
== Overview ==
Boolean differential operators play a significant role in BDC. They allow the application of differentials as known from classical analysis to be extended to logical functions.
The differential dx_i of a Boolean variable x_i models the relation:
dx_i = 0, if there is no change of x_i
dx_i = 1, if there is a change of x_i
There are no constraints with regard to the nature, causes, or consequences of a change.
The differentials dx_i are binary. They can be used just like common binary variables.
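The central operator built on these differentials is the Boolean derivative of a function f with respect to x_i, defined as f with x_i = 0 XOR f with x_i = 1; it equals 1 exactly for those assignments of the remaining variables where a change of x_i changes the value of f. A minimal sketch (the function name and the truth-table representation are illustrative, not taken from any BDC library):

```python
from itertools import product

def boolean_derivative(f, i, n):
    """Truth table of df/dx_i over the remaining n-1 variables.

    The derivative is 1 exactly where toggling x_i changes f:
    f(..., x_i=0, ...) XOR f(..., x_i=1, ...).
    """
    table = {}
    for rest in product((0, 1), repeat=n - 1):
        x0 = rest[:i] + (0,) + rest[i:]  # assignment with x_i = 0
        x1 = rest[:i] + (1,) + rest[i:]  # assignment with x_i = 1
        table[rest] = f(*x0) ^ f(*x1)
    return table

# Example: for f(x0, x1) = x0 AND x1, toggling x0 changes the
# output only when x1 = 1, so df/dx0 is the function x1.
f = lambda x0, x1: x0 & x1
assert boolean_derivative(f, 0, 2) == {(0,): 0, (1,): 1}
```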
== See also ==
Boolean algebra
Boole's expansion theorem
Ramadge–Wonham framework
== References ==
== Further reading ==
Davio, Marc; Piret, Philippe M. (July 1969). "Les dérivées Booléennes et leur application au diagnostic" [Boolean derivatives and their application and diagnosis]. Philips Revue (in French). 12 (3). Brussels, Belgium: Philips Research Laboratory, Manufacture Belge de Lampes et de Materiel Electronique (MBLE Research Laboratory): 63–76. (14 pages)
Rudeanu, Sergiu (September 1974). Boolean Functions and Equations. North-Holland Publishing Company/American Elsevier Publishing Company. ISBN 0-44410520-4. ISBN 0-72042082-2. (462 pages)
Bochmann, Dieter [in German] (1977). "Boolean differential calculus (a survey)". Engineering Cybernetics. 15 (5). Institute of Electrical and Electronics Engineers (IEEE): 67–75. ISSN 0013-788X. (9 pages) Translation of: Bochmann, Dieter [in German] (1977). "[Boolean differential calculus (survey)]". Известия Академии наук СССР – Техническая кибернетика (Izvestii︠a︡ Akademii Nauk SSSR – Tekhnicheskai︠a︡ kibernetika) [Proceedings of the Academy of Sciences of the USSR – Engineering Cybernetics] (in Russian) (5): 125–133. (9 pages)
Kühnrich, Martin (1986). "Differentialoperatoren über Booleschen Algebren" [Differential operators on Boolean algebras]. Zeitschrift für mathematische Logik und Grundlagen der Mathematik (in German). 32 (17–18). Berlin, Germany (East): 271–288. doi:10.1002/malq.19860321703. #18. (18 pages)
Dresig, Frank (1992). Gruppierung – Theorie und Anwendung in der Logiksynthese [Grouping – Theory and application in logic synthesis]. Fortschritt-Berichte VDI, Ser. 9 (in German). Vol. 145. Düsseldorf, Germany: VDI-Verlag. ISBN 3-18-144509-6. DNB-IDN 940164671. (NB. Also: Chemnitz, Technische Universität, Dissertation.) (147 pages)
Scheuring, Rainer; Wehlan, Herbert "Hans" (1993). "Control of Discrete Event Systems by Means of the Boolean Differential Calculus". In Balemi, Silvano; Kozák, Petr; Smedinga, Rein (eds.). Discrete Event Systems: Modeling and Control. Progress in Systems and Control Theory (PSCT). Vol. 13. Basel, Switzerland: Birkhäuser Verlag. pp. 79–93. doi:10.1007/978-3-0348-9120-2_7. ISBN 978-3-0348-9916-1. (15 pages)
Posthoff, Christian; Steinbach, Bernd [in German] (2004-02-04). Logic Functions and Equations – Binary Models for Computer Science (1st ed.). Dordrecht, Netherlands: Springer Science + Business Media B.V. doi:10.1007/978-1-4020-2938-7. ISBN 1-4020-2937-3. OCLC 254106952. ISBN 978-1-4020-2937-0. (392 pages)
Steinbach, Bernd [in German]; Posthoff, Christian (2009-02-12). Logic Functions and Equations – Examples and Exercises (1st ed.). Dordrecht, Netherlands: Springer Science + Business Media B.V. doi:10.1007/978-1-4020-9595-5. ISBN 978-1-4020-9594-8. LCCN 2008941076. (xxii+232 pages) [1] (NB. Per DNB-IDN 1010457748 this hardcover edition has been rereleased as softcover edition in 2010.)
Steinbach, Bernd [in German]; Posthoff, Christian (2010-06-01). "Boolean Differential Calculus – Theory and Applications". Journal of Computational and Theoretical Nanoscience. 7 (6). American Scientific Publishers: 933–981. doi:10.1166/jctn.2010.1441. ISSN 1546-1955. (49 pages)
Steinbach, Bernd [in German]; Posthoff, Christian (2010-01-15) [2009]. "Chapter 3: Boolean Differential Calculus". In Sasao, Tsutomu; Butler, Jon T. (eds.). Progress in Applications of Boolean Functions. Synthesis Lectures on Digital Circuits and Systems (1st ed.). San Rafael, CA, USA: Morgan & Claypool Publishers. pp. 55–78, 121–126. doi:10.2200/S00243ED1V01Y200912DCS026. ISBN 978-1-60845-181-4. S2CID 37053010. Lecture #26. (24 of 153 pages)
== External links ==
Wehlan, Herbert "Hans" (2010-12-06). "Boolean differential calculus". In Hazewinkel, Michiel (ed.). Boolean differential calculus - Encyclopedia of Mathematics. Encyclopedia of Mathematics. Springer Science+Business Media. ISBN 978-1-4020-0609-8. Archived from the original on 2017-10-16. Retrieved 2017-10-16.
Institut für Informatik (IfI) (2017). "XBOOLE". TU Bergakademie Freiberg. Archived from the original on 2017-10-31. Retrieved 2017-10-31. with "XBOOLE Monitor". 2008-07-23. Archived from the original on 2017-10-31. Retrieved 2017-10-31. | Wikipedia/Boolean_differential_calculus |
Logic optimization is the process of finding an equivalent representation of a specified logic circuit under one or more specified constraints. This process is part of logic synthesis applied in digital electronics and integrated circuit design.
Generally, the circuit is constrained to a minimum chip area meeting a predefined response delay. The goal of logic optimization of a given circuit is to obtain the smallest logic circuit that evaluates to the same values as the original one. Usually, the smaller circuit with the same function is cheaper, takes less space, consumes less power, has shorter latency, and minimizes risks of unexpected cross-talk, hazards of delayed signal processing, and other issues present at the nano-scale level of metallic structures on an integrated circuit.
In terms of Boolean algebra, the optimization of a complex Boolean expression is a process of finding a simpler one, which would upon evaluation ultimately produce the same results as the original one.
== Motivation ==
The problem with having a complicated circuit (i.e. one with many elements, such as logic gates) is that each element takes up physical space and costs time and money to produce. Circuit minimization may be one form of logic optimization used to reduce the area of complex logic in integrated circuits.
With the advent of logic synthesis, one of the biggest challenges faced by the electronic design automation (EDA) industry was to find the simplest circuit representation of the given design description. While two-level logic optimization had long existed in the form of the Quine–McCluskey algorithm, later followed by the Espresso heuristic logic minimizer, rapidly improving chip densities and the wide adoption of hardware description languages for circuit description formalized the logic optimization domain as it exists today, including Logic Friday (graphical interface), Minilog, and ESPRESSO-IISOJS (many-valued logic).
== Methods ==
The methods of logic circuit simplifications are equally applicable to Boolean expression minimization.
=== Classification ===
Today, logic optimization is divided into various categories:
Based on circuit representation
Two-level logic optimization
Multi-level logic optimization
Based on circuit characteristics
Sequential logic optimization
Combinational logic optimization
Based on type of execution
Graphical optimization methods
Tabular optimization methods
Algebraic optimization methods
=== Graphical methods ===
Graphical methods represent the required logical function by a diagram representing the logic variables and value of the function. By manipulating or inspecting a diagram, much tedious calculation may be eliminated.
Graphical minimization methods for two-level logic include:
Euler diagram (aka Eulerian circle) (1768) by Leonhard P. Euler (1707–1783)
Venn diagram (1880) by John Venn (1834–1923)
Karnaugh map (1953) by Maurice Karnaugh
=== Boolean expression minimization ===
The same methods of Boolean expression minimization (simplification) listed below may be applied to the circuit optimization.
For the case when the Boolean function is specified by a circuit (that is, we want to find an equivalent circuit of minimum size possible), the unbounded circuit minimization problem was long-conjectured to be
Σ₂ᴾ-complete in time complexity, a result finally proved in 2008, but there are effective heuristics such as Karnaugh maps and the Quine–McCluskey algorithm that facilitate the process.
Boolean function minimizing methods include:
Quine–McCluskey algorithm
Petrick's method
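The combining step of the Quine–McCluskey algorithm can be sketched in a few lines of Python (an illustrative sketch; the function name and the (value, mask) encoding are choices of this example, and the final covering step, e.g. via Petrick's method, is omitted):

```python
from itertools import combinations

def prime_implicants(minterms):
    """Quine-McCluskey combining step: repeatedly merge implicants that
    differ in exactly one cared-for bit; survivors that never merge are prime.

    Implicants are (value, mask) pairs; bits set in `mask` are don't-cares.
    """
    terms = {(m, 0) for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for (v1, m1), (v2, m2) in combinations(sorted(terms), 2):
            diff = v1 ^ v2
            # same don't-care positions, values differing in exactly one bit
            if m1 == m2 and bin(diff).count("1") == 1:
                merged.add((v1 & ~diff, m1 | diff))
                used.update({(v1, m1), (v2, m2)})
        primes |= terms - used   # implicants that never merged are prime
        terms = merged
    return primes

# f(a, b, c) with minterms 0,1,2,5,6,7: a classic cyclic function
print(sorted(prime_implicants({0, 1, 2, 5, 6, 7})))
```

This example function has six prime implicants and no essential ones (every minterm is covered by exactly two primes), which is precisely the situation where a covering step such as Petrick's method is still needed to pick a minimum cover.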
=== Optimal multi-level methods ===
Methods that find optimal circuit representations of Boolean functions are often referred to as exact synthesis in the literature. Due to the computational complexity, exact synthesis is tractable only for small Boolean functions. Recent approaches map the optimization problem to a Boolean satisfiability problem. This allows finding optimal circuit representations using a SAT solver.
=== Heuristic methods ===
A heuristic method uses established rules that solve a practical, useful subset of the much larger possible set of problems. The heuristic method may not produce the theoretically optimal solution, but if useful, will provide most of the optimization desired with a minimum of effort. An example of a computer system that uses heuristic methods for logic optimization is the Espresso heuristic logic minimizer.
=== Two-level versus multi-level representations ===
While a two-level representation of a circuit strictly refers to the flattened view of the circuit in terms of SOPs (sum-of-products) — which is more applicable to a PLA implementation of the design — a multi-level representation is a more generic view of the circuit in terms of arbitrarily connected SOPs, POSs (product-of-sums), factored forms, etc. Logic optimization algorithms generally work either on the structural (SOPs, factored form) or the functional representation (binary decision diagrams, algebraic decision diagrams) of the circuit. In sum-of-products (SOP) form, AND gates form the smallest unit and are stitched together using ORs, whereas in product-of-sums (POS) form it is the opposite. POS form requires parentheses to group the OR terms together under AND gates, because OR has lower precedence than AND. Both SOP and POS forms translate nicely into circuit logic.
If we have two functions F1 and F2:
$F_1 = AB + AC + AD,$
$F_2 = A'B + A'C + A'E.$
The above two-level representation requires six product terms and 24 transistors in a CMOS implementation.
A functionally equivalent representation in multilevel can be:
P = B + C.
F1 = AP + AD.
F2 = A'P + A'E.
While the number of levels here is 3, the total number of product terms and literals is reduced because the term B + C is shared.
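The saving from sharing can be checked by counting literals in each form (a small illustrative script; the term lists simply transcribe the equations above):

```python
# Product terms of the flat two-level form of F1 and F2 (A' is one literal)
two_level = ["AB", "AC", "AD", "A'B", "A'C", "A'E"]

# Multi-level form: the term P = B + C is built once and shared
multi_level = ["B", "C",      # P = B + C
               "AP", "AD",    # F1 = AP + AD
               "A'P", "A'E"]  # F2 = A'P + A'E

def count(terms):
    """Count literals; a primed variable such as A' is a single literal."""
    return sum(len(t.replace("'", "")) for t in terms)

# 12 literals flat (roughly the text's 24 transistors at ~2 per literal
# input in static CMOS) versus 10 literals with the shared term
print(count(two_level), count(multi_level))
```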
Similarly, we distinguish between combinational circuits and sequential circuits. Combinational circuits produce their outputs based only on the current inputs. They can be represented by Boolean relations. Some examples are priority encoders, binary decoders, multiplexers, demultiplexers.
Sequential circuits produce their output based on both current and past inputs, depending on a clock signal to distinguish the previous inputs from the current inputs. They can be represented by finite state machines. Some examples are flip-flops and counters.
== Example ==
While there are many ways to minimize a circuit, this is an example that minimizes (or simplifies) a Boolean function. The Boolean function carried out by the circuit is directly related to the algebraic expression from which the function is implemented.
Consider the circuit used to represent $(A \wedge \bar{B}) \vee (\bar{A} \wedge B)$. It is evident that two negations, two conjunctions, and a disjunction are used in this statement. This means that to build the circuit one would need two inverters, two AND gates, and an OR gate.
The circuit can be simplified (minimized) by applying laws of Boolean algebra or using intuition. Since the example states that $A$ is true when $B$ is false and the other way around, one can conclude that this simply means $A \neq B$. In terms of logical gates, inequality simply means an XOR gate (exclusive or). Therefore,
$(A \wedge \bar{B}) \vee (\bar{A} \wedge B) \iff A \neq B$. Then the two circuits shown below are equivalent, as can be checked using a truth table:
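The truth-table check mentioned above takes only a few lines (a minimal sketch in Python):

```python
from itertools import product

f = lambda a, b: (a and not b) or (not a and b)   # original circuit
g = lambda a, b: a != b                           # XOR simplification

# Exhaustive truth-table check of the equivalence over both inputs
assert all(f(a, b) == g(a, b) for a, b in product([False, True], repeat=2))
```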
== See also ==
Binary decision diagram (BDD)
Don't care condition
Prime implicant
Circuit complexity — on estimation of the circuit complexity
Function composition
Function decomposition
Gate underutilization
Logic redundancy
Harvard minimizing chart
== Notes ==
== References ==
== Further reading ==
Lind, Larry Frederick; Nelson, John Christopher Cunliffe (1977). Analysis and Design of Sequential Digital Systems. Macmillan Press. ISBN 0-333-19266-4. (146 pages)
De Micheli, Giovanni (1994). Synthesis and Optimization of Digital Circuits. McGraw-Hill. ISBN 0-07-016333-2. (NB. Chapters 7–9 cover combinatorial two-level, combinatorial multi-level, and respectively sequential circuit optimization.)
Hachtel, Gary D.; Somenzi, Fabio (2006) [1996]. Logic Synthesis and Verification Algorithms. Springer Science & Business Media. ISBN 978-0-387-31005-3.
Kohavi, Zvi; Jha, Niraj K. (2009). "4–6". Switching and Finite Automata Theory (3rd ed.). Cambridge University Press. ISBN 978-0-521-85748-2.
Rutenbar, Rob A. Multi-level minimization, Part I: Models & Methods (PDF) (lecture slides). Carnegie Mellon University (CMU). Lecture 7. Archived (PDF) from the original on 2018-01-15. Retrieved 2018-01-15; Rutenbar, Rob A. Multi-level minimization, Part II: Cube/Cokernel Extract (PDF) (lecture slides). Carnegie Mellon University (CMU). Lecture 8. Archived (PDF) from the original on 2018-01-15. Retrieved 2018-01-15. | Wikipedia/Circuit_minimization_for_Boolean_functions |
In electronic design, wire routing, commonly called simply routing, is a step in the design of printed circuit boards (PCBs) and integrated circuits (ICs). It builds on a preceding step, called placement, which determines the location of each active element of an IC or component on a PCB. After placement, the routing step adds wires needed to properly connect the placed components while obeying all design rules for the IC. Together, the placement and routing steps of IC design are known as place and route.
The task of all routers is the same. They are given some pre-existing polygons consisting of pins (also called terminals) on cells, and optionally some pre-existing wiring called preroutes. Each of these polygons are associated with a net, usually by name or number. The primary task of the router is to create geometries such that all terminals assigned to the same net are connected, no terminals assigned to different nets are connected, and all design rules are obeyed. A router can fail by not connecting terminals that should be connected (an open), by mistakenly connecting two terminals that should not be connected (a short), or by creating a design rule violation. In addition, to correctly connect the nets, routers may also be expected to make sure the design meets timing, has no crosstalk problems, meets any metal density requirements, does not suffer from antenna effects, and so on. This long list of often conflicting objectives is what makes routing extremely difficult.
Almost every problem associated with routing is known to be intractable. The simplest routing problem, called the Steiner tree problem (finding the shortest route for one net in one layer with no obstacles and no design rules), is known to be NP-complete, both when all angles are allowed and when routing is restricted to only horizontal and vertical wires. Variants of channel routing have also been shown to be NP-complete, as have forms of routing that reduce crosstalk, the number of vias, and so on.
Routers therefore seldom attempt to find an optimum result. Instead, almost all routing is based on heuristics which try to find a solution that is good enough.
Design rules sometimes vary considerably from layer to layer. For example, the allowed width and spacing on the lower layers may be four or more times smaller than the allowed widths and spacings on the upper layers. This introduces many additional complications not faced by routers for other applications such as printed circuit board or multi-chip module design. Particular difficulties ensue if the rules are not simple multiples of each other, and when vias must traverse between layers with different rules.
== Types of routers ==
The earliest types of EDA routers were "manual routers"—the drafter clicked a mouse on the endpoint of each line segment of each net.
Modern PCB design software typically provides "interactive routers"—the drafter selects a pad and clicks a few places to give the EDA tool an idea of where to go, and the EDA tool tries to place wires as close to that path as possible without violating design rule checking (DRC). Some more advanced interactive routers have "push and shove" (aka "shove-aside" or "automoving") features: the EDA tool pushes other nets out of the way, if possible, in order to place a new wire where the drafter wants it and still avoid violating DRC.
Modern PCB design software also typically provides "autorouters" that route all remaining unrouted connections without human intervention.
The main types of autorouters are:
Maze router
Lee router
Hadlock router
Flood router
Line-probe router
Mikami–Tabuchi router
Hightower router
Pattern router
Channel router
Switchbox router
River router
Spine and stitch router
Gridless router
Area router
Graph theory-based router
Bloodhound router (CADSTAR by Racal-Redac / Zuken)
Specctra (aka Allegro PCB Router) (gridless since version 10)
Topological router
FreeStyle Router (aka SpeedWay, a DOS-based autorouter for P-CAD)
TopoR (a Windows-based autorouter, also used in Eremex's Delta Design)
Toporouter (Anthony Blake's open-source router in PCB of the gEDA suite)
TopRouter (the topological pre-router in CadSoft/Autodesk's EAGLE 7.0 and higher)
SimplifyPCB (a topological router with a focus on bundle routing with hand-routing results)
== How routers work ==
Many routers execute the following overall algorithm:
First, determine an approximate course for each net, often by routing on a coarse grid. This step is called global routing, and may optionally include layer assignment. Global routing limits the size and complexity of the following detailed routing steps, which can be done grid square by grid square.
For detailed routing, the most common technique is rip-up and reroute aka rip-up and retry:
Select a sequence in which the nets are to be routed.
Route each net in sequence.
If not all nets can be successfully routed, apply any of a variety of "cleanup" methods, in which selected routings are removed, the order of the remaining nets to be routed is changed, and the remaining routings are attempted again.
This process repeats until all nets are routed or the program (or user) gives up.
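The per-net routing in step 2 is classically done with a maze search such as the Lee router from the list above: a breadth-first wave expansion from the source until the target is reached, followed by a backtrace. A minimal grid-based sketch (the grid encoding and function name are choices of this example; real routers add costs, layers, and design rules):

```python
from collections import deque

def lee_route(grid, src, dst):
    """Lee maze routing: BFS wave expansion, then backtrace.

    `grid` is a 2-D list of booleans; True cells are blocked. Returns a
    shortest rectilinear path from src to dst as (row, col) cells, or
    None if no route exists (an "open").
    """
    rows, cols = len(grid), len(grid[0])
    prev = {src: None}
    frontier = deque([src])
    while frontier:
        cell = frontier.popleft()
        if cell == dst:
            path = []
            while cell is not None:      # backtrace phase
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc] and (nr, nc) not in prev):
                prev[(nr, nc)] = cell    # remember the wave's direction
                frontier.append((nr, nc))
    return None

blocked = [[False, False, False],
           [False, True,  False],
           [False, False, False]]
print(lee_route(blocked, (0, 0), (2, 2)))   # routes around the blocked centre
```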
An alternative approach is to treat shorts, design rule violations, obstructions, etc. on a similar footing as excess wire length—that is, as finite costs to be reduced (at first) rather than as absolutes to be avoided. This multi-pass "iterative-improvement" routing method is described by the following algorithm:
For each of several iterative passes:
Prescribe or adjust the weight parameters of an "objective function" (having a weight parameter value for each unit of excess wire length, and for each type of violation). E.g., for the first pass, excess wire length may typically be given a high cost, while design violations such as shorts, adjacency, etc. are given a low cost. In later passes, the relative ordering of costs is changed so that violations are high-cost, or may be prohibited absolutely.
Select (or randomly choose) a sequence in which nets are to be routed during this pass.
"Rip up" (if previously routed) and reroute each net in turn, so as to minimize the value of the objective function for that net. (Some of the routings will in general have shorts or other design violations.)
Proceed to the next iterative pass until routing is complete and correct, is not further improved, or some other termination criterion is satisfied.
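The pass-dependent weighting described above can be sketched as a simple weighted objective (all weight and statistic values below are purely illustrative):

```python
def routing_cost(route_stats, weights):
    """Weighted objective: excess wirelength plus penalties per violation."""
    return sum(weights[k] * route_stats.get(k, 0) for k in weights)

# Early passes: wirelength dominates, violations are cheap
early_pass = {"excess_wirelength": 10.0, "short": 1.0, "adjacency": 0.5}
# Later passes: violations dominate, wirelength is cheap
late_pass  = {"excess_wirelength": 1.0, "short": 100.0, "adjacency": 50.0}

stats = {"excess_wirelength": 4, "short": 2}
print(routing_cost(stats, early_pass))   # 42.0
print(routing_cost(stats, late_pass))    # 204.0
```

The same imperfect routing is cheap early on but expensive later, which is what steers the iterative passes toward a violation-free result.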
Most routers assign wiring layers to carry predominantly "x" or "y" directional wiring, though there have been routers which avoid or reduce the need for such assignment. There are advantages and disadvantages to each approach. Restricted directions make power supply design and the control of inter-layer crosstalk easier, but allowing arbitrary routes can reduce the need for vias and decrease the number of required wiring layers.
== See also ==
Electronic design automation
Design flow (EDA)
Integrated circuit design
Place and route
Auto polarity (differential pairs)
Auto crossover (Ethernet)
== References ==
== Further reading ==
Scheffer, Louis K.; Lavagno, Luciano; Martin, Grant (2006). "Chapter 8: Routing". Electronic Design Automation For Integrated Circuits Handbook. Vol. II. Boca Raton, FL, USA: CRC Press / Taylor & Francis. ISBN 978-0-8493-3096-4.
== External links ==
http://www.eecs.northwestern.edu/~haizhou/357/lec6.pdf
http://www.facweb.iitkgp.ernet.in/~isg/CAD/SLIDES/10-grid-routing.pdf | Wikipedia/Routing_(electronic_design_automation) |
Placement is an essential step in electronic design automation — the portion of the physical design flow that assigns exact locations to the various circuit components within the chip's core area. An inferior placement will not only degrade the chip's performance but may also make it non-manufacturable by producing excessive wirelength that exceeds the available routing resources. Consequently, a placer must perform the assignment while optimizing a number of objectives to ensure that the circuit meets its performance demands. Together, the placement and routing steps of IC design are known as place and route.
A placer takes a given synthesized circuit netlist together with a technology library and produces a valid placement layout. The layout is optimized according to the aforementioned objectives and ready for cell resizing and buffering — a step essential for timing and signal integrity satisfaction. Clock tree synthesis and Routing follow, completing the physical design process. In many cases, parts of, or the entire, physical design flow are iterated a number of times until design closure is achieved.
== Application specifics ==
In the case of application-specific integrated circuits, or ASICs, the chip's core layout area comprises a number of fixed height rows, with either some or no space between them. Each row consists of a number of sites which can be occupied by the circuit components. A free site is a site that is not occupied by any component. Circuit components are either standard cells, macro blocks, or I/O pads. Standard cells have a fixed height equal to a row's height, but have variable widths. The width of a cell is an integral number of sites.
On the other hand, blocks are typically larger than cells and have variable heights that can stretch over multiple rows. Some blocks can have preassigned locations — say, from a previous floorplanning process — which limit the placer's task to assigning locations for just the cells. In this case, the blocks are typically referred to as fixed blocks. Alternatively, some or all of the blocks may not have preassigned locations. In this case, they have to be placed with the cells in what is commonly referred to as mixed-mode placement.
In addition to ASICs, placement retains its prime importance in gate array structures such as field-programmable gate arrays (FPGAs). Here, prefabricated transistors are typically arranged in rows (or "arrays") that are separated by routing channels. Placement maps the circuit's subcircuits into programmable FPGA logic blocks in a manner that guarantees the completion of the subsequent routing stage.
== Objectives and constraints ==
Placement is formulated as constrained optimization. In particular, the clock cycle of a chip is determined by the delay of its longest path, usually referred to as the critical path. Given a performance specification, a placer must ensure that no path exists with delay exceeding the maximum specified delay.
Other key constraints include
avoiding overlaps between circuit components (the instances in the netlist)
placing circuit components into predetermined "sites"
There are usually multiple optimization objectives, including:
Total wire length: the sum of the lengths of all the wires in the design
Routing congestion: local congestion is the difference between the lengths of wires in a region and the length of routing tracks available in that region; local values can be aggregated in several ways, such as adding up top 10% greatest values.
Power: dynamic switching power depends on wirelengths, which in turn depend on component locations.
Additionally, it is desirable to finish the placement process quickly.
Total wirelength is typically the primary objective of most existing placers and serves as a precursor to other optimizations because, e.g., power and delay tend to grow with wire length. Total wire length determines the routing demand and whether it can be satisfied by the routing supply defined by available routing tracks. However, making wires very short sometimes leads to local routing demand exceeding local routing supply. Such situations often require routing detours, which increase wire lengths and signal delays. Therefore, after preliminary optimization of total wirelength, it is also important to handle routing congestion.
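In practice, placers usually estimate total wirelength with the half-perimeter wirelength (HPWL) of each net's bounding box. A minimal sketch (the pin coordinates are illustrative):

```python
def hpwl(pins):
    """Half-perimeter wirelength of one net: bounding-box width + height."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def total_wirelength(nets):
    """Sum the HPWL estimate over every net in the design."""
    return sum(hpwl(pins) for pins in nets)

# Two toy nets given as lists of (x, y) pin locations
nets = [[(0, 0), (3, 1)],            # HPWL = 3 + 1 = 4
        [(1, 1), (1, 4), (2, 2)]]    # HPWL = 1 + 3 = 4
print(total_wirelength(nets))        # 8
```

HPWL is exact for two-pin nets and a lower bound for larger ones, which is why it serves well as a fast precursor objective before congestion and timing are considered.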
Power minimization typically notes wires with greater switching activity factors and assigns greater priority to making them shorter. When many "hot" components are placed nearby, a hot spot may arise and lead to harmful temperature gradients. In such cases, components can be spread out.
== Basic techniques ==
Placement is divided into global placement and detailed placement. Global placement introduces dramatic changes by distributing all the instances to appropriate locations on a global scale, with minor overlaps allowed. Detailed placement shifts each instance to a nearby legal location with very moderate layout change. Placement and overall design quality depend most on global placement performance.
Early techniques for placement of integrated circuits can be categorized as combinatorial optimization. For IC designs with thousands or tens of thousands of components, simulated annealing methodologies such as TimberWolf exhibited the best results. When IC designs grew to millions of components, placement leveraged hypergraph partitioning using nested-partitioning frameworks such as Capo. Combinatorial methods directly prevent component overlaps but struggle with interconnect optimization at large scale. They are typically stochastic and can produce very different results for the same input when launched multiple times.
Analytical methods for global placement model interconnect length by a continuous function and minimize this function directly subject to component density constraints. These methods run faster and scale better than combinatorial methods, but do not prevent component overlaps and must be postprocessed by combinatorial methods for detailed placement.
Quadratic placement is an early analytical method that models interconnect length by a quadratic function and uses high-performance quadratic optimization techniques. When it was developed, it demonstrated competitive quality of results and also stability, unlike combinatorial methods. GORDIAN formulates the wirelength cost as a quadratic function while still spreading cells apart through recursive partitioning. The algorithm models placement density as a linear term into the quadratic cost function and solves the placement problem by pure quadratic programming. A common enhancement is weighting each net by the inverse of its length on the previous iteration. Provided the process converges, this minimizes an objective linear in the wirelength. The majority of modern quadratic placers (KraftWerk, FastPlace, SimPL) follow this framework, each with different heuristics on how to determine the linear density force.
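The core idea of quadratic placement can be seen in one dimension with a single movable cell: minimizing a weighted sum of squared distances puts the cell at the weighted mean of its connection points. A toy sketch (names and numbers are illustrative):

```python
def quadratic_optimum(connections):
    """1-D quadratic placement of one movable cell.

    `connections` maps fixed-pin position -> net weight. Setting the
    derivative of sum_i w_i * (x - p_i)**2 to zero gives
    x = sum(w_i * p_i) / sum(w_i): the weighted mean of the pins.
    """
    total_w = sum(connections.values())
    return sum(w * p for p, w in connections.items()) / total_w

# A cell wired to pads at x=0 (weight 1) and x=10 (weight 3)
print(quadratic_optimum({0: 1, 10: 3}))   # 7.5, pulled toward the heavier net
```

In a full quadratic placer the same idea yields a sparse linear system over all movable cells, solved with high-performance numerical methods; the density forces mentioned above are what keep the cells from collapsing onto each other.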
Nonlinear placement models wirelength by exponential (nonlinear) functions and density by local piecewise-quadratic functions, in order to achieve better accuracy and thus improved quality. Follow-up academic work includes APlace and NTUplace.
ePlace is a state-of-the-art global placement algorithm. It spreads instances apart by simulating an electrostatic field, which minimizes quality overhead and thus achieves good performance.
In 2021, Google Brain reported good results from the use of AI techniques (in particular reinforcement learning) for the placement problem. However, this result is quite controversial, as the paper does not contain head-to-head comparisons to existing placers, and is difficult to replicate due to proprietary content. At least one initially favorable commentary has been retracted upon further review.
== See also ==
Electronic design automation
Design flow (EDA)
Integrated circuit design
Floorplan (microelectronics)
Place and route
== References ==
== Further reading/External links ==
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD)
ACM Transactions on Design Automation of Electronic Systems (TODAES)
IEEE Transactions on Very Large Scale Integration Systems (TVLSI) | Wikipedia/Placement_(electronic_design_automation) |
In computer graphics and digital photography, a raster graphic, raster image, or simply raster is a two-dimensional image or picture represented as a rectangular matrix or grid of pixels, viewable via a computer display, paper, or other display medium. A raster image is technically characterized by the width and height of the image in pixels and by the number of bits per pixel. Raster images are stored in image files with varying dissemination, production, generation, and acquisition formats.
The printing and prepress industries know raster graphics as contones (from "continuous tones"). In contrast, line art is usually implemented as vector graphics in digital systems.
Many raster manipulations map directly onto the mathematical formalisms of linear algebra, where mathematical objects of matrix structure are of central concern.
== Etymology ==
The word "raster" has its origins in the Latin rastrum (a rake), which is derived from radere (to scrape). Its use in computer graphics originates from the raster scan of cathode-ray tube (CRT) video monitors, which draw the image line by line by magnetically or electrostatically steering a focused electron beam. By association, it can also refer to a rectangular grid of pixels. The word rastrum is now used to refer to a device for drawing musical staff lines.
== Data model ==
The fundamental strategy underlying the raster data model is the tessellation of a plane, into a two-dimensional array of squares, each called a cell or pixel (from "picture element"). In digital photography, the plane is the visual field as projected onto the image sensor; in computer art, the plane is a virtual canvas; in geographic information systems, the plane is a projection of the Earth's surface. The size of each square pixel, known as the resolution or support, is constant across the grid.
Raster or gridded data may be the result of a gridding procedure.
A single numeric value is then stored for each pixel. For most images, this value is a visible color, but other measurements are possible, even numeric codes for qualitative categories. Each raster grid has a specified pixel format, the data type for each number. Common pixel formats are binary, gray-scale, palettized, and full-color, where color depth determines the fidelity of the colors represented, and color space determines the range of color coverage (which is often less than the full range of human color vision). Most modern color raster formats represent color using 24 bits (over 16 million distinct colors), with 8 bits (values 0–255) for each color channel (red, green, and blue). The digital sensors used for remote sensing and astronomy are often able to detect and store wavelengths beyond the visible spectrum; the large CCD bitmapped sensor at the Vera C. Rubin Observatory captures 3.2 gigapixels in a single image (6.4 GB raw), over six color channels which exceed the spectral range of human color vision.
== Uses ==
=== Image storage ===
Most computer images are stored in raster graphics formats or compressed variations, including GIF, JPEG, and PNG, which are popular on the World Wide Web. A raster data structure is based on a (usually rectangular, square-based) tessellation of the 2D plane into cells, each containing a single value. To store the data in a file, the two-dimensional array must be serialized. The most common way to do this is a row-major format, in which the cells along the first (usually top) row are listed left to right, followed immediately by those of the second row, and so on.
In the example at right, the cells of tessellation A are overlaid on the point pattern B resulting in an array C of quadrant counts representing the number of points in each cell. For purposes of visualization a lookup table has been used to color each of the cells in an image D. Here are the numbers as a serial row-major array:
1 3 0 0 1 12 8 0 1 4 3 3 0 2 0 2 1 7 4 1 5 4 2 2 0 3 1 2 2 2 2 3 0 5 1 9 3 3 3 4 5 0 8 0 2 4 3 2 8 4 3 2 2 7 2 3 2 10 1 5 2 1 3 7
To reconstruct the two-dimensional grid, the file must include a header section at the beginning that contains at least the number of columns, and the pixel datatype (especially the number of bits or bytes per value) so the reader knows where each value ends to start reading the next one. Headers may also include the number of rows, georeferencing parameters for geographic data, or other metadata tags, such as those specified in the Exif standard.
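Row-major serialization and its inverse can be sketched in a few lines (a generic sketch; real formats add headers, padding, and compression):

```python
def serialize(grid):
    """Flatten a 2-D grid into a row-major 1-D list: row by row, top down."""
    return [value for row in grid for value in row]

def deserialize(flat, n_cols):
    """Rebuild the grid; the file's header must supply the column count."""
    return [flat[i:i + n_cols] for i in range(0, len(flat), n_cols)]

grid = [[1, 3, 0],
        [0, 1, 12]]
flat = serialize(grid)
print(flat)                            # [1, 3, 0, 0, 1, 12]
assert deserialize(flat, 3) == grid    # round-trips given the column count
```

Note that without the column count from the header, the flat list is ambiguous: the same six values could also be a 3-row, 2-column grid.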
==== Compression ====
High-resolution raster grids contain a large number of pixels, and thus consume a large amount of memory. This has led to multiple approaches to compressing the data volume into smaller files. The most common strategy is to look for patterns or trends in the pixel values, then store a parameterized form of the pattern instead of the original data. Common raster compression algorithms include run-length encoding (RLE), JPEG, LZ (the basis for PNG and ZIP), Lempel–Ziv–Welch (LZW) (the basis for GIF), and others.
For example, run-length encoding looks for repeated values in the array and replaces them with the value and the number of times it appears. Thus, the raster above would be represented as:
This technique is very efficient when there are large areas of identical values, such as a line drawing, but in a photograph where pixels are usually slightly different from their neighbors, the RLE file would be up to twice the size of the original.
Some compression algorithms, such as RLE and LZW, are lossless, where the original pixel values can be perfectly regenerated from the compressed data. Other algorithms, such as JPEG, are lossy, because the parameterized patterns are only an approximation of the original pixel values, so the latter can only be estimated from the compressed data.
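A losslessness check for run-length encoding fits in a few lines (a minimal sketch using Python's itertools.groupby):

```python
from itertools import groupby

def rle_encode(values):
    """Run-length encoding: each run of equal values becomes (value, count)."""
    return [(v, len(list(run))) for v, run in groupby(values)]

def rle_decode(pairs):
    """Expand each (value, count) pair back into a run of values."""
    return [v for v, n in pairs for _ in range(n)]

row = [0, 0, 0, 0, 5, 5, 1]
encoded = rle_encode(row)
print(encoded)                       # [(0, 4), (5, 2), (1, 1)]
assert rle_decode(encoded) == row    # lossless: round-trips exactly
```

On data with no repeats, every value becomes a (value, 1) pair, which is why RLE can roughly double the size of a photographic image.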
==== Raster–vector conversion ====
Vector images (line work) can be rasterized (converted into pixels), and raster images vectorized (raster images converted into vector graphics), by software. In both cases some information is lost, although certain vectorization operations can recreate salient information, as in the case of optical character recognition.
=== Displays ===
Early mechanical televisions developed in the 1920s employed rasterization principles. Electronic television based on cathode-ray tube displays are raster scanned with horizontal rasters painted left to right, and the raster lines painted top to bottom.
Modern flat-panel displays such as LED monitors still use a raster approach. Each on-screen pixel directly corresponds to a small number of bits in memory. The screen is refreshed simply by scanning through pixels and coloring them according to each set of bits. The refresh procedure, being speed critical, is often implemented by dedicated circuitry, often as a part of a graphics processing unit.
Using this approach, the computer contains an area of memory that holds all the data that are to be displayed. The central processor writes data into this region of memory and the video controller collects them from there. The bits of data stored in this block of memory are related to the eventual pattern of pixels that will be used to construct an image on the display.
An early scanned display with raster computer graphics was invented in the late 1960s by A. Michael Noll at Bell Labs, but its patent application filed February 5, 1970, was abandoned at the Supreme Court in 1977 over the issue of the patentability of computer software.
=== Printing ===
During the 1970s and 1980s, pen plotters, using Vector graphics, were common for creating precise drawings, especially on large format paper. However, since then almost all printers create the printed image as a raster grid, including both laser and inkjet printers. When the source information is vector, rendering specifications and software such as PostScript are used to create the raster image.
=== Three-dimensional rasters ===
Three-dimensional voxel raster graphics are employed in video games and are also used in medical imaging such as MRI scanners.
=== Geographic information systems ===
Geographic phenomena are commonly represented in a raster format in GIS. The raster grid is georeferenced, so that each pixel (commonly called a cell in GIS because the "picture" part of "pixel" is not relevant) represents a square region of geographic space. The value of each cell then represents some measurable (qualitative or quantitative) property of that region, typically conceptualized as a field. Examples of fields commonly represented in rasters include: temperature, population density, soil moisture, land cover, surface elevation, etc. Two sampling models are used to derive cell values from the field: in a lattice, the value is measured at the center point of each cell; in a grid, the value is a summary (usually a mean or mode) of the value over the entire cell.
== Resolution ==
Raster graphics are resolution dependent, meaning they cannot scale up to an arbitrary resolution without loss of apparent quality. This property contrasts with the capabilities of vector graphics, which easily scale up to the quality of the device rendering them. Raster graphics deal more practically than vector graphics with photographs and photo-realistic images, while vector graphics often serve better for typesetting or for graphic design. Modern computer monitors typically display about 72 to 130 pixels per inch (PPI), and some modern consumer printers can resolve 2400 dots per inch (DPI) or more; determining the most appropriate image resolution for a given printer resolution can pose difficulties, since printed output may have a greater level of detail than a viewer can discern on a monitor. Typically, a resolution of 150 to 300 PPI works well for 4-color process (CMYK) printing.
However, for printing technologies that perform color mixing through dithering (halftone) rather than through overprinting (virtually all home/office inkjet and laser printers), printer DPI and image PPI have a very different meaning, and this can be misleading. Because, through the dithering process, the printer builds a single image pixel out of several printer dots to increase color depth, the printer's DPI setting must be set far higher than the desired PPI to ensure sufficient color depth without sacrificing image resolution. Thus, for instance, printing an image at 250 PPI may actually require a printer setting of 1200 DPI.
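The arithmetic behind the 1200 DPI / 250 PPI example can be made explicit (a worked version of the figures quoted in the text):

```python
# Printer dots available per image pixel at the settings quoted above
dpi, ppi = 1200, 250
dots_per_pixel_axis = dpi / ppi              # 4.8 printer dots per pixel edge
dots_per_pixel_area = dots_per_pixel_axis ** 2
print(round(dots_per_pixel_area, 1))         # ~23 dots to dither one pixel's color
```

Each image pixel thus gets a small cluster of printer dots, and it is the on/off pattern within that cluster, not any single dot, that reproduces the pixel's color depth.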
== Raster-based image editors ==
Raster-based image editors, such as PaintShop Pro, Corel Painter, Adobe Photoshop, Paint.NET, Microsoft Paint, Krita, and GIMP, revolve around editing pixels, unlike vector-based image editors, such as Xfig, CorelDRAW, Adobe Illustrator, or Inkscape, which revolve around editing lines and shapes (vectors). When an image is rendered in a raster-based image editor, the image is composed of millions of pixels. At its core, a raster image editor works by manipulating each individual pixel. Most pixel-based image editors work using the RGB color model, but some also allow the use of other color models such as the CMYK color model.
== See also ==
Comparison of raster graphics editors
Dither
Halftone
Pixel-art scaling algorithms
Raster graphics editor
Raster graphics file formats
Raster image processor
Raster scan
Rasterisation
Text semigraphics
Texture atlas
Vector graphics – a contrasting graphics method
== References == | Wikipedia/Raster_graphics |
An application-specific integrated circuit (ASIC) is an integrated circuit (IC) chip customized for a particular use, rather than intended for general-purpose use, such as a chip designed to run in a digital voice recorder or a high-efficiency video codec. Application-specific standard product chips are intermediate between ASICs and industry standard integrated circuits like the 7400 series or the 4000 series. ASIC chips are typically fabricated using metal–oxide–semiconductor (MOS) technology.
As feature sizes have shrunk and chip design tools improved over the years, the maximum complexity (and hence functionality) possible in an ASIC has grown from 5,000 logic gates to over 100 million. Modern ASICs often include entire microprocessors, memory blocks including ROM, RAM, EEPROM, flash memory and other large building blocks. Such an ASIC is often termed a SoC (system-on-chip). Designers of digital ASICs often use a hardware description language (HDL), such as Verilog or VHDL, to describe the functionality of ASICs.
Field-programmable gate arrays (FPGAs) are the modern-day technological successor to breadboards; unlike ASICs, they are not made for one specific application. Programmable logic blocks and programmable interconnects allow the same FPGA to be used in many different applications. For smaller designs or lower production volumes, FPGAs may be more cost-effective than an ASIC design, even in production. The non-recurring engineering (NRE) cost of an ASIC can run into the millions of dollars. Therefore, device manufacturers typically prefer FPGAs for prototyping and devices with low production volume and ASICs for very large production volumes where NRE costs can be amortized across many devices.
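The FPGA-versus-ASIC trade-off described above reduces to a simple break-even calculation; the dollar figures in the example below are hypothetical:

```python
def break_even_volume(asic_nre, asic_unit_cost, fpga_unit_cost):
    """Return the production volume above which an ASIC (high NRE,
    low unit cost) becomes cheaper than an FPGA (negligible NRE,
    higher unit cost). Cost figures are illustrative only."""
    if fpga_unit_cost <= asic_unit_cost:
        raise ValueError("FPGA must cost more per unit for a crossover")
    return asic_nre / (fpga_unit_cost - asic_unit_cost)

# e.g. $2M NRE, $5/unit ASIC vs $45/unit FPGA -> 50,000 units
volume = break_even_volume(2_000_000, 5, 45)
```

Below that volume the FPGA's lack of NRE wins; above it, the ASIC's lower unit cost amortizes the NRE.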
== History ==
Early ASICs used gate array technology. By 1967, Ferranti and Interdesign were manufacturing early bipolar gate arrays. In 1967, Fairchild Semiconductor introduced the Micromatrix family of bipolar diode–transistor logic (DTL) and transistor–transistor logic (TTL) arrays.
Complementary metal–oxide–semiconductor (CMOS) technology opened the door to the broad commercialization of gate arrays. The first CMOS gate arrays were developed by Robert Lipp, in 1974 for International Microcircuits, Inc. (IMI).
Metal–oxide–semiconductor (MOS) standard-cell technology was introduced by Fairchild and Motorola, under the trade names Micromosaic and Polycell, in the 1970s. This technology was later successfully commercialized by VLSI Technology (founded 1979) and LSI Logic (1981).
A successful commercial application of gate array circuitry was found in the low-end 8-bit ZX81 and ZX Spectrum personal computers, introduced in 1981 and 1982. These were used by Sinclair Research (UK) essentially as a low-cost I/O solution aimed at handling the computer's graphics.
Customization occurred by varying a metal interconnect mask. Gate arrays had complexities of up to a few thousand gates; this is now called mid-scale integration. Later versions became more generalized, with different base dies customized by both metal and polysilicon layers. Some base dies also include random-access memory (RAM) elements.
== Standard-cell designs ==
In the mid-1980s, a designer would choose an ASIC manufacturer and implement their design using the design tools available from the manufacturer. While third-party design tools were available, there was not an effective link from the third-party design tools to the layout and actual semiconductor process performance characteristics of the various ASIC manufacturers. Most designers used factory-specific tools to complete the implementation of their designs. A solution to this problem, which also yielded a much higher density device, was the implementation of standard cells. Every ASIC manufacturer could create functional blocks with known electrical characteristics, such as propagation delay, capacitance and inductance, that could also be represented in third-party tools. Standard-cell design is the utilization of these functional blocks to achieve very high gate density and good electrical performance. Standard-cell design is intermediate between § Gate-array and semi-custom design and § Full-custom design in terms of its non-recurring engineering and recurring component costs as well as performance and speed of development (including time to market).
By the late 1990s, logic synthesis tools became available. Such tools could compile HDL descriptions into a gate-level netlist. Standard-cell integrated circuits (ICs) are designed in the following conceptual stages referred to as electronics design flow, although these stages overlap significantly in practice:
Requirements engineering: A team of design engineers starts with a non-formal understanding of the required functions for a new ASIC, usually derived from requirements analysis.
Register-transfer level (RTL) design: The design team constructs a description of an ASIC to achieve these goals using a hardware description language. This process is similar to writing a computer program in a high-level language.
Functional verification: Suitability for purpose is verified by functional verification. This may include such techniques as logic simulation through test benches, formal verification, emulation, or creating and evaluating an equivalent pure software model, as in Simics. Each verification technique has advantages and disadvantages, and most often several methods are used together for ASIC verification. Unlike most FPGAs, ASICs cannot be reprogrammed once fabricated and therefore ASIC designs that are not completely correct are much more costly, increasing the need for full test coverage.
Logic synthesis: Logic synthesis transforms the RTL design into a large collection of lower-level constructs called standard cells. These constructs are taken from a standard-cell library consisting of pre-characterized collections of logic gates performing specific functions. The standard cells are typically specific to the planned manufacturer of the ASIC. The resulting collection of standard cells and the needed electrical connections between them is called a gate-level netlist.
Placement: The gate-level netlist is next processed by a placement tool which places the standard cells onto a region of an integrated circuit die representing the final ASIC. The placement tool attempts to find an optimized placement of the standard cells, subject to a variety of specified constraints.
Routing: An electronics routing tool takes the physical placement of the standard cells and uses the netlist to create the electrical connections between them. Since the search space is large, this process will produce a "sufficient" rather than "globally optimal" solution. The output is a file which can be used to create a set of photomasks enabling a semiconductor fabrication facility, commonly called a "fab" or "foundry", to manufacture physical integrated circuits. Placement and routing are closely interrelated and are collectively called place and route in electronics design.
Sign-off: Given the final layout, circuit extraction computes the parasitic resistances and capacitances. In the case of a digital circuit, this will then be further mapped into delay information from which the circuit performance can be estimated, usually by static timing analysis. This, together with other final tests such as design rule checking and power analysis (collectively called signoff), is intended to ensure that the device will function correctly over all extremes of the process, voltage and temperature. When this testing is complete the photomask information is released for chip fabrication.
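The later stages of this flow can be sketched with a toy model: a synthesized gate-level netlist, and the kind of worst-case path search that static timing analysis performs at sign-off (illustrative only; real tools model parasitics, slews, and timing constraints):

```python
def longest_path_delay(netlist, primary_inputs):
    """Toy static-timing estimate over a gate-level netlist. Each entry
    is (output_net, input_nets, gate_delay); nets must appear in
    topological order, as a synthesis tool would emit them."""
    arrival = {net: 0.0 for net in primary_inputs}
    for out, ins, delay in netlist:
        # A gate's output settles only after its slowest input arrives.
        arrival[out] = max(arrival[i] for i in ins) + delay
    return max(arrival.values())

# a AND b -> n1 (delay 1.0); n1 OR c -> y (delay 1.5): critical path 2.5
netlist = [("n1", ["a", "b"], 1.0),
           ("y",  ["n1", "c"], 1.5)]
print(longest_path_delay(netlist, ["a", "b", "c"]))  # 2.5
```

The worst-case arrival time found this way is input-independent, which is the defining property of static timing analysis.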
These steps, implemented with a level of skill common in the industry, almost always produce a final device that correctly implements the original design, unless flaws are later introduced by the physical fabrication process.
The design steps, also called the design flow, are also common to standard product design. The significant difference is that standard-cell design uses the manufacturer's cell libraries that have been used in potentially hundreds of other design implementations and therefore are of much lower risk than a full custom design. Standard cells produce a design density that is cost-effective, and they can also integrate IP cores and static random-access memory (SRAM) effectively, unlike gate arrays.
== Gate-array and semi-custom design ==
Gate array design is a manufacturing method in which diffused layers, each consisting of transistors and other active devices, are predefined and electronics wafers containing such devices are "held in stock" or unconnected prior to the metallization stage of the fabrication process. The physical design process defines the interconnections of these layers for the final device. For most ASIC manufacturers, this consists of between two and nine metal layers with each layer running perpendicular to the one below it. Non-recurring engineering costs are much lower than full custom designs, as photolithographic masks are required only for the metal layers. Production cycles are much shorter, as metallization is a comparatively quick process; thereby accelerating time to market.
Gate-array ASICs are always a compromise between rapid design and performance as mapping a given design onto what a manufacturer held as a stock wafer never gives 100% circuit utilization. Often difficulties in routing the interconnect require migration onto a larger array device with a consequent increase in the piece part price. These difficulties are often a result of the layout EDA software used to develop the interconnect.
Pure, logic-only gate-array design is rarely implemented by circuit designers today, having been almost entirely replaced by field-programmable devices. The most prominent of such devices are field-programmable gate arrays (FPGAs) which can be programmed by the user and thus offer minimal tooling charges, non-recurring engineering, only marginally increased piece part cost, and comparable performance.
Today, gate arrays are evolving into structured ASICs that consist of a large IP core like a CPU, digital signal processor units, peripherals, standard interfaces, integrated memories, SRAM, and a block of reconfigurable, uncommitted logic. This shift is largely because ASIC devices are capable of integrating large blocks of system functionality, and systems on a chip (SoCs) require glue logic, communications subsystems (such as networks on chip), peripherals, and other components rather than only functional units and basic interconnection.
As the terms are frequently used in the field, "gate array" and "semi-custom" are synonymous when referring to ASICs. Process engineers more commonly use the term "semi-custom", while "gate-array" is more commonly used by logic (or gate-level) designers.
== Full-custom design ==
By contrast, full-custom ASIC design defines all the photolithographic layers of the device. Full-custom design is used for both ASIC design and for standard product design.
The benefits of full-custom design include reduced area (and therefore recurring component cost), performance improvements, and also the ability to integrate analog components and other pre-designed—and thus fully verified—components, such as microprocessor cores, that form a system on a chip.
The disadvantages of full-custom design can include increased manufacturing and design time, increased non-recurring engineering costs, more complexity in the computer-aided design (CAD) and electronic design automation systems, and a much higher skill requirement on the part of the design team.
For digital-only designs, however, "standard-cell" cell libraries, together with modern CAD systems, can offer considerable performance/cost benefits with low risk. Automated layout tools are quick and easy to use and also offer the possibility to "hand-tweak" or manually optimize any performance-limiting aspect of the design.
In full-custom design, the device is built from basic logic gates, circuits, or layout drawn specially for that design.
== Structured design ==
Structured ASIC design (also referred to as "platform ASIC design") is a relatively new trend in the semiconductor industry, resulting in some variation in its definition. However, the basic premise of a structured ASIC is that both manufacturing cycle time and design cycle time are reduced compared to cell-based ASIC, by virtue of there being pre-defined metal layers (thus reducing manufacturing time) and pre-characterization of what is on the silicon (thus reducing design cycle time).
A definition from Foundations of Embedded Systems states: In a "structured ASIC" design, the logic mask-layers of a device are predefined by the ASIC vendor (or in some cases by a third party). Design differentiation and customization is achieved by creating custom metal layers that create custom connections between predefined lower-layer logic elements. "Structured ASIC" technology is seen as bridging the gap between field-programmable gate arrays and "standard-cell" ASIC designs. Because only a small number of chip layers must be custom-produced, "structured ASIC" designs have much smaller non-recurring expenditures (NRE) than "standard-cell" or "full-custom" chips, which require that a full mask set be produced for every design.
This is effectively the same definition as a gate array. What distinguishes a structured ASIC from a gate array is that in a gate array, the predefined metal layers serve to make manufacturing turnaround faster. In a structured ASIC, the use of predefined metallization is primarily to reduce cost of the mask sets as well as making the design cycle time significantly shorter.
For example, in a cell-based or gate-array design the user must often design power, clock, and test structures themselves. By contrast, these are predefined in most structured ASICs and therefore can save time and expense for the designer compared to gate-array based designs. Likewise, the design tools used for structured ASIC can be substantially lower cost and easier (faster) to use than cell-based tools, because they do not have to perform all the functions that cell-based tools do. In some cases, the structured ASIC vendor requires customized tools for their device (e.g., custom physical synthesis) be used, also allowing for the design to be brought into manufacturing more quickly.
== Cell libraries, IP-based design, hard and soft macros ==
Cell libraries of logical primitives are usually provided by the device manufacturer as part of the service. Although they will incur no additional cost, their release will be covered by the terms of a non-disclosure agreement (NDA) and they will be regarded as intellectual property by the manufacturer. Usually, their physical design will be pre-defined so they could be termed "hard macros".
What most engineers understand as "intellectual property" are IP cores, designs purchased from a third-party as sub-components of a larger ASIC. They may be provided in the form of a hardware description language (often termed a "soft macro"), or as a fully routed design that could be printed directly onto an ASIC's mask (often termed a "hard macro"). Many organizations now sell such pre-designed cores – CPUs, Ethernet, USB or telephone interfaces – and larger organizations may have an entire department or division to produce cores for the rest of the organization. The company ARM only sells IP cores, making it a fabless manufacturer.
Indeed, the wide range of functions now available in structured ASIC design is a result of the phenomenal improvement in electronics in the late 1990s and early 2000s; as a core takes a lot of time and investment to create, its re-use and further development cuts product cycle times dramatically and creates better products. Additionally, open-source hardware organizations such as OpenCores are collecting free IP cores, paralleling the open-source software movement in hardware design.
Soft macros are often process-independent (i.e. they can be fabricated on a wide range of manufacturing processes and different manufacturers). Hard macros are process-limited and usually further design effort must be invested to migrate (port) to a different process or manufacturer.
== Multi-project wafers ==
Some manufacturers and IC design houses offer multi-project wafer service (MPW) as a method of obtaining low cost prototypes. Often called shuttles, these MPWs, containing several designs, run at regular, scheduled intervals on a "cut and go" basis, usually with limited liability on the part of the manufacturer. The contract involves delivery of bare dies or the assembly and packaging of a handful of devices. The service usually involves the supply of a physical design database (i.e. masking information or pattern generation (PG) tape). The manufacturer is often referred to as a "silicon foundry" due to the low involvement it has in the process.
== Application-specific standard product ==
An application-specific standard product or ASSP is an integrated circuit that implements a specific function that appeals to a wide market. As opposed to ASICs that combine a collection of functions and are designed by or for one customer, ASSPs are available as off-the-shelf components. ASSPs are used in all industries, from automotive to communications.
For example, two ICs that might or might not be considered ASICs are a controller chip for a PC and a chip for a modem. Both of these examples are specific to an application (which is typical of an ASIC) but are sold to many different system vendors (which is typical of standard parts). ASICs such as these are sometimes called application-specific standard products (ASSPs).
Examples of ASSPs are encoding/decoding chips, Ethernet network interface controller chips, and flash memory controller chips.
== See also ==
== References ==
== Sources ==
Anthony Cataldo (26 March 2002). "Xilinx looks to ease path to custom FPGAs". EE Times. CMP Media, LLC. Archived from the original on 29 September 2007. Retrieved 14 December 2006.
"Xilinx intros next-gen EasyPath FPGAs priced below structured ASICs". EDP Weekly's IT Monitor. Millin Publishing, Inc. 18 October 2004.
== External links ==
Media related to Application-specific integrated circuits at Wikimedia Commons | Wikipedia/Application-specific_integrated_circuit |
Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD), is a category of software tools for designing electronic systems such as integrated circuits and printed circuit boards. The tools work together in a design flow that chip designers use to design and analyze entire semiconductor chips. Since a modern semiconductor chip can have billions of components, EDA tools are essential for their design; this article in particular describes EDA specifically with respect to integrated circuits (ICs).
== History ==
=== Early days ===
The earliest electronic design automation is attributed to IBM with the documentation of its 700 series computers in the 1950s.
Prior to the development of EDA, integrated circuits were designed by hand and manually laid out. Some advanced shops used geometric software to generate tapes for a Gerber photoplotter, responsible for generating a monochromatic exposure image, but even those copied digital recordings of mechanically drawn components. The process was fundamentally graphic, with the translation from electronics to graphics done manually; the best-known company from this era was Calma, whose GDSII format is still in use today. By the mid-1970s, developers started to automate circuit design in addition to drafting and the first placement and routing tools were developed; as this occurred, the proceedings of the Design Automation Conference catalogued the large majority of the developments of the time.
The next era began following the publication of "Introduction to VLSI Systems" by Carver Mead and Lynn Conway in 1980; the book is considered the standard textbook for chip design. The result was an increase in the complexity of the chips that could be designed, with improved access to design verification tools that used logic simulation. The chips were easier to lay out and more likely to function correctly, since their designs could be simulated more thoroughly prior to construction. Although the languages and tools have evolved, this general approach of specifying the desired behavior in a textual programming language and letting the tools derive the detailed physical design remains the basis of digital IC design today.
The earliest EDA tools were produced academically. One of the most famous was the "Berkeley VLSI Tools Tarball", a set of UNIX utilities used to design early VLSI systems. Widely used were the Espresso heuristic logic minimizer, responsible for circuit complexity reductions and Magic, a computer-aided design platform. Another crucial development was the formation of MOSIS, a consortium of universities and fabricators that developed an inexpensive way to train student chip designers by producing real integrated circuits. The basic concept was to use reliable, low-cost, relatively low-technology IC processes and pack a large number of projects per wafer, with several copies of chips from each project remaining preserved. Cooperating fabricators either donated the processed wafers or sold them at cost, as they saw the program as helpful to their own long-term growth.
=== Commercial birth ===
1981 marked the beginning of EDA as an industry. For many years, the larger electronic companies, such as Hewlett-Packard, Tektronix and Intel, had pursued EDA internally, with managers and developers beginning to spin out of these companies to concentrate on EDA as a business. Daisy Systems, Mentor Graphics and Valid Logic Systems were all founded around this time and collectively referred to as DMV. In 1981, the U.S. Department of Defense additionally began funding of VHDL as a hardware description language. Within a few years, there were many companies specializing in EDA, each with a slightly different emphasis.
The first trade show for EDA was held at the Design Automation Conference in 1984 and in 1986, Verilog, another popular high-level design language, was first introduced as a hardware description language by Gateway Design Automation. Simulators quickly followed these introductions, permitting direct simulation of chip designs and executable specifications. Within several years, back-ends were developed to perform logic synthesis.
=== Modern day ===
Current digital flows are extremely modular, with front ends producing standardized design descriptions that compile into invocations of units similar to cells without regard to their individual technology. Cells implement logic or other electronic functions using a particular integrated circuit technology. Fabricators generally provide libraries of components for their production processes, with simulation models that fit standard simulation tools.
Most analog circuits are still designed in a manual fashion, requiring specialist knowledge that is unique to analog design (such as matching concepts). Hence, analog EDA tools are far less modular, since many more functions are required, they interact more strongly and the components are, in general, less ideal.
EDA for electronics has rapidly increased in importance with the continuous scaling of semiconductor technology. Some users are foundry operators, who operate the semiconductor fabrication facilities ("fabs"), while others work at design-service companies that use EDA software to evaluate an incoming design for manufacturing readiness. EDA tools are also used to program design functionality into field-programmable gate arrays (FPGAs), customisable integrated circuit designs.
== Software focuses ==
=== Design ===
The design flow is characterised by several primary components; these include:
High-level synthesis (additionally known as behavioral synthesis or algorithmic synthesis) – The high-level design description (e.g. in C/C++) is converted into RTL, the register-transfer level, which represents circuitry as interactions between registers.
Logic synthesis – The translation of RTL design description (e.g. written in Verilog or VHDL) into a discrete netlist or representation of logic gates.
Schematic capture – For standard-cell digital, analog, and RF designs, using tools such as Capture CIS in OrCAD by Cadence and ISIS in Proteus.
Layout – Usually schematic-driven layout, using tools such as Layout in OrCAD by Cadence and ARES in Proteus.
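The logic synthesis step above can be sketched as flattening a nested Boolean expression into a netlist of hypothetical 2-input standard cells (a toy illustration; real synthesis also optimizes the logic and maps it to a characterized cell library):

```python
def synthesize(expr):
    """Naively 'synthesize' a nested Boolean expression tree, e.g.
    ("or", ("and", "a", "b"), "c"), into a flat gate-level netlist
    of 2-input cells: a toy version of what logic synthesis emits."""
    netlist = []
    def walk(node):
        if isinstance(node, str):        # a primary input net
            return node
        op, lhs, rhs = node
        l, r = walk(lhs), walk(rhs)      # synthesize operands first
        out = f"n{len(netlist) + 1}"     # fresh internal net name
        netlist.append((op.upper() + "2", out, [l, r]))
        return out
    return walk(expr), netlist

out, netlist = synthesize(("or", ("and", "a", "b"), "c"))
# netlist: [("AND2", "n1", ["a", "b"]), ("OR2", "n2", ["n1", "c"])]
```

Each tuple names a cell type, its output net, and its input nets, which is the essential content of a gate-level netlist.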
=== Simulation ===
Transistor simulation – low-level transistor-simulation of a schematic/layout's behavior, accurate at device-level.
Logic simulation – digital-simulation of an RTL or gate-netlist's digital (Boolean 0/1) behavior, accurate at Boolean-level.
Behavioral simulation – high-level simulation of a design's architectural operation, accurate at cycle-level or interface-level.
Hardware emulation – Use of special purpose hardware to emulate the logic of a proposed design. Can sometimes be plugged into a system in place of a yet-to-be-built chip; this is called in-circuit emulation.
Technology CAD – Simulation and analysis of the underlying process technology; electrical properties of devices are derived directly from device physics.
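Logic simulation at the gate-netlist level can be sketched as straightforward value propagation through a topologically ordered netlist (a minimal model; production simulators are event-driven and also handle timing, unknown states, and more):

```python
# Boolean behavior of a few hypothetical 2-input standard cells.
GATES = {"AND2": lambda a, b: a & b,
         "OR2":  lambda a, b: a | b,
         "XOR2": lambda a, b: a ^ b}

def simulate(netlist, inputs):
    """Toy gate-level logic simulation: propagate Boolean 0/1 values
    through a topologically ordered netlist of (cell, output, inputs)."""
    nets = dict(inputs)
    for cell, out, ins in netlist:
        nets[out] = GATES[cell](*(nets[i] for i in ins))
    return nets

netlist = [("AND2", "n1", ["a", "b"]),
           ("OR2",  "y",  ["n1", "c"])]
nets = simulate(netlist, {"a": 1, "b": 1, "c": 0})
print(nets["y"])  # 1
```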
=== Analysis and verification ===
Functional verification: ensures logic design matches specifications and executes tasks correctly. Includes dynamic functional verification via simulation, emulation, and prototypes.
RTL linting: checking adherence to coding rules such as syntax, semantics, and style.
Clock domain crossing verification (CDC check): similar to linting, but these checks/tools specialize in detecting and reporting potential issues like data loss, meta-stability due to use of multiple clock domains in the design.
Formal verification, also model checking: attempts to prove, by mathematical methods, that the system has certain desired properties, and that some undesired effects (such as deadlock) cannot occur.
Equivalence checking: algorithmic comparison between a chip's RTL-description and synthesized gate-netlist, to ensure functional equivalence at the logical level.
Static timing analysis: analysis of the timing of a circuit in an input-independent manner, hence finding a worst case over all possible inputs.
Layout extraction: starting with a proposed layout, compute the (approximate) electrical characteristics of every wire and device. Often used in conjunction with static timing analysis above to estimate the performance of the completed chip.
Electromagnetic field solvers, or just field solvers, solve Maxwell's equations directly for cases of interest in IC and PCB design. They are known for being slower but more accurate than the layout extraction above.
Physical verification, PV: checking if a design is physically manufacturable, and that the resulting chips will not have any function-preventing physical defects, and will meet original specifications.
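Equivalence checking, listed above, can be illustrated in miniature by exhaustively comparing an RTL-level expression against its gate-level counterpart over every input vector. Real equivalence checkers use SAT or BDD techniques, since enumeration scales only to a handful of inputs:

```python
from itertools import product

def equivalent(f, g, n_inputs):
    """Toy combinational equivalence check: compare two Boolean
    functions on every possible input vector. Exhaustive, so only
    feasible for small n_inputs; shown for illustration."""
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=n_inputs))

rtl   = lambda a, b, c: (a and b) or c   # RTL-level intent
gates = lambda a, b, c: (a & b) | c      # synthesized netlist, as logic
print(equivalent(rtl, gates, 3))  # True
```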
=== Manufacturing preparation ===
Mask data preparation or MDP - The generation of actual lithography photomasks, utilised to physically manufacture the chip.
Chip finishing which includes custom designations and structures to improve manufacturability of the layout. Examples of the latter are a seal ring and filler structures.
Producing a reticle layout with test patterns and alignment marks.
Layout-to-mask preparation that enhances layout data with graphics operations, such as resolution enhancement techniques (RET) – methods for increasing the quality of the final photomask. This also includes optical proximity correction (OPC) or inverse lithography technology (ILT) – the up-front compensation for diffraction and interference effects occurring later when chip is manufactured using this mask.
Mask generation – The generation of flat mask image from hierarchical design.
Automatic test pattern generation or ATPG – The generation of pattern data systematically to exercise as many logic-gates and other components as possible.
Built-in self-test or BIST – The installation of self-contained test-controllers to automatically test a logic or memory structure in the design
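The idea behind ATPG can be sketched as a search for an input pattern that distinguishes the good circuit from a copy with an injected stuck-at fault (brute force here; real ATPG uses path-sensitization algorithms such as the D-algorithm):

```python
from itertools import product

def find_test_pattern(circuit, faulty, n_inputs):
    """Toy automatic test pattern generation: search input vectors for
    one whose output differs between the good circuit and a version
    with an injected stuck-at fault. Exhaustive, for illustration."""
    for vec in product((0, 1), repeat=n_inputs):
        if circuit(*vec) != faulty(*vec):
            return vec          # this pattern detects the fault
    return None                 # the fault is undetectable

good       = lambda a, b, c: (a & b) | c
stuck_at_0 = lambda a, b, c: (a & 0) | c   # input b stuck at logic 0
print(find_test_pattern(good, stuck_at_0, 3))  # (1, 1, 0)
```

The returned pattern would be applied by automatic test equipment to exercise that fault site on fabricated parts.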
=== Functional safety ===
Functional safety analysis, systematic computation of failure in time (FIT) rates and diagnostic coverage metrics for designs in order to meet the compliance requirements for the desired safety integrity levels.
Functional safety synthesis, add reliability enhancements to structured elements (modules, RAMs, ROMs, register files, FIFOs) to improve fault detection / fault tolerance. This includes (not limited to) addition of error detection and / or correction codes (Hamming), redundant logic for fault detection and fault tolerance (duplicate / triplicate) and protocol checks (interface parity, address alignment, beat count)
Functional safety verification, running of a fault campaign, including insertion of faults into the design and verification that the safety mechanism reacts in an appropriate manner for the faults that are deemed covered.
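The diagnostic coverage metric used in these analyses is, at its core, the detected fraction of the total failure rate; the FIT numbers in the example below are hypothetical:

```python
def diagnostic_coverage(fit_detected, fit_total):
    """Diagnostic coverage as used in functional safety analysis: the
    fraction of the failure rate (in FIT, failures per 1e9 device-hours)
    that the design's safety mechanisms can detect. Input figures here
    are illustrative, not taken from any real analysis."""
    return fit_detected / fit_total

# e.g. safety mechanisms cover 990 of 1000 FIT -> 99% coverage
dc = diagnostic_coverage(990, 1000)
print(f"{dc:.0%}")  # 99%
```

Higher safety integrity levels demand correspondingly higher coverage, which drives the synthesis step above to add detection logic such as error-correcting codes and redundant modules.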
== Companies ==
=== Current ===
Market capitalization and company name as of March 2023:
$57.87 billion – Synopsys
$56.68 billion – Cadence Design Systems
$24.98 billion – Ansys
AU$4.88 billion – Altium
¥77.25 billion – Zuken
=== Defunct ===
Market capitalization and company name as of December 2011:
$2.33 billion – Mentor Graphics; Siemens acquired Mentor in 2017 and renamed as Siemens EDA in 2021
$507 million – Magma Design Automation; Synopsys acquired Magma in February 2012
NT$6.44 billion – SpringSoft; Synopsys acquired SpringSoft in August 2012
=== Acquisitions ===
Many EDA companies acquire small companies with software or other technology that can be adapted to their core business. Most of the market leaders are amalgamations of many smaller companies and this trend is helped by the tendency of software companies to design tools as accessories that fit naturally into a larger vendor's suite of programs on digital circuitry; many new tools incorporate analog design and mixed systems. This is happening due to a trend to place entire electronic systems on a single chip.
== Technical conferences ==
Design Automation Conference
International Conference on Computer-Aided Design
Design Automation and Test in Europe
Asia and South Pacific Design Automation Conference
Symposia on VLSI Technology and Circuits
== See also ==
Computer-aided design (CAD)
Circuit design
EDA database
Foundations and Trends in Electronic Design Automation
Signoff (electronic design automation)
Comparison of EDA software
Platform-based design
Silicon compiler
== References ==
Notes | Wikipedia/Electronics_design |
Transaction-level modeling (TLM) is an approach to modelling complex digital systems by using electronic design automation software.: 1955 A TLM language (TLML) is a hardware description language, usually written in C++ and based on the SystemC library. TLMLs are used for modelling where details of communication among modules are separated from the details of the implementation of functional units or of the communication architecture. TLM is used for modelling systems that involve complex data communication mechanisms.: 1955
Components such as buses or FIFOs are modeled as channels, and are presented to modules using SystemC interface classes. Transaction requests take place by calling interface functions of these channel models, which encapsulate low-level details of the information exchange. At the transaction level, the emphasis is more on the functionality of the data transfers – what data are transferred to and from what locations – and less on their actual implementation, that is, on the actual protocol used for data transfer. This approach makes it easier for the system-level designer to experiment, for example, with different bus architectures (all supporting a common abstract interface) without having to recode models that interact with any of the buses, provided these models interact with the bus through the common interface.
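The channel/interface separation can be sketched outside SystemC as well. The following Python stand-in (class and method names invented for illustration) shows modules issuing transactions through an abstract put/get interface, so a different channel implementation can be swapped in without recoding the modules:

```python
from collections import deque

class FifoChannel:
    """A channel model exposing a transaction-level interface: modules
    call put()/get() and never see how the transfer is implemented.
    (A Python stand-in for a SystemC sc_fifo-style channel.)"""
    def __init__(self):
        self._q = deque()
    def put(self, transaction):
        self._q.append(transaction)
    def get(self):
        return self._q.popleft()

class BusChannel(FifoChannel):
    """A different communication architecture behind the same abstract
    interface; producer/consumer modules work unchanged with it."""
    def put(self, transaction):
        # e.g. model bus arbitration or latency here, then forward.
        super().put(transaction)

def producer(channel):
    for word in (0x10, 0x20):
        channel.put(word)       # a transaction request: one function call

def consumer(channel):
    return [channel.get() for _ in range(2)]

for ch in (FifoChannel(), BusChannel()):
    producer(ch)
    received = consumer(ch)
    assert received == [0x10, 0x20]   # modules unchanged across channels
```

Swapping the bus architecture touches only the channel class, which is the experimentation benefit described above.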
The application of transaction-level modeling is not specific to the SystemC language, however, and it can be used with other languages. The concept of TLM first appeared in the system-level language and modeling domain.
Transaction-level models are used for high-level synthesis of register-transfer level (RTL) models, which support lower-level modelling and implementation of system components. RTL is usually represented by hardware description language source code (e.g. VHDL, SystemC, Verilog).
== History ==
In 2000, Thorsten Grötker, R&D Manager at Synopsys, was preparing a presentation on the communication mechanism in what was to become the SystemC 2.0 standard, and referred to it as "transaction-based modeling". Gilles Baillieu, then a corporate application engineer at Synopsys, insisted that the new term had to contain "level", as in "register-transfer level" or "behavioral level". The fact that TLM denotes a modeling technique rather than a single level of abstraction did not change his mind: the name had to contain "level" in order to stick. So it became "TLM".
The Open SystemC Initiative was formed to standardize and proliferate the use of the SystemC language. That organization is sponsored by major EDA vendors and customers sharing a common interest in facilitating tool development and IP interoperability. The organization developed the OSCI simulator for open use and distribution.
Since those early days, SystemC has been adopted as the language of choice for high-level synthesis, connecting the design-modeling and virtual-prototype application domains with functional verification and an automated path to gate-level implementation. This offers project teams the ability to produce one model for multiple purposes. At the 2010 DVCon event, OSCI produced a specification of the first synthesizable subset of SystemC for industry standardization.
== See also ==
Discrete event simulation (DES)
Event loop
Event-driven programming
Message passing
Reactor pattern vs. Proactor pattern
Transaction processing
Asynchronous circuit
Assembly modelling, for CADs
== References ==
== External links ==
SystemC.org - SystemC home page.
In computer engineering, logic synthesis is a process by which an abstract specification of desired circuit behavior, typically at register transfer level (RTL), is turned into a design implementation in terms of logic gates, typically by a computer program called a synthesis tool. Common examples of this process include synthesis of designs specified in hardware description languages, including VHDL and Verilog. Some synthesis tools generate bitstreams for programmable logic devices such as PALs or FPGAs, while others target the creation of ASICs. Logic synthesis is one step in the electronic design automation flow for circuit design; other steps include place and route and verification and validation.
== History ==
The roots of logic synthesis can be traced to the treatment of logic by George Boole (1815–1864), in what is now termed Boolean algebra. In 1938, Claude Shannon showed that the two-valued Boolean algebra can describe the operation of switching circuits. In the early days, logic design involved manipulating truth table representations such as Karnaugh maps. The Karnaugh map-based minimization of logic is guided by a set of rules on how entries in the maps can be combined. A human designer can typically only work with Karnaugh maps containing up to four to six variables.
The first step toward automation of logic minimization was the introduction of the Quine–McCluskey algorithm that could be implemented on a computer. This exact minimization technique presented the notion of prime implicants and minimum cost covers that would become the cornerstone of two-level minimization. Nowadays, the much more efficient Espresso heuristic logic minimizer has become the standard tool for this operation. Another area of early research was in state minimization and encoding of finite-state machines (FSMs), a task that was the bane of designers. The applications for logic synthesis lay primarily in digital computer design. Hence, IBM and Bell Labs played a pivotal role in the early automation of logic synthesis. The evolution from discrete logic components to programmable logic arrays (PLAs) hastened the need for efficient two-level minimization, since minimizing terms in a two-level representation reduces the area in a PLA.
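The prime-implicant generation step of the Quine–McCluskey algorithm can be sketched in a few lines of Python (an illustrative implementation, not any particular tool's code): implicants that differ in exactly one bit are repeatedly merged, and whatever is never merged is a prime implicant.

```python
from itertools import combinations

def to_string(value, mask, nbits):
    # render an implicant, with '-' marking don't-care bit positions
    return ''.join('-' if mask >> i & 1 else str(value >> i & 1)
                   for i in reversed(range(nbits)))

def prime_implicants(minterms, nbits):
    """Merge implicants (value, mask) that differ in exactly one bit
    until no merge applies; the unmerged survivors are prime."""
    current = {(m, 0) for m in minterms}
    primes = set()
    while current:
        merged, used = set(), set()
        for a, b in combinations(sorted(current), 2):
            (va, ma), (vb, mb) = a, b
            diff = va ^ vb
            # same don't-care mask and exactly one differing bit
            if ma == mb and diff != 0 and diff & (diff - 1) == 0:
                merged.add((va & ~diff, ma | diff))
                used.update((a, b))
        primes |= current - used
        current = merged
    return {to_string(v, m, nbits) for v, m in primes}
```

For f(a,b,c) with minterms {0, 1, 2, 5, 6, 7}, a classic cyclic-cover example, the routine finds six prime implicants, from which a minimum cost cover would then be selected in the second stage of the algorithm.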
Two-level logic circuits are of limited importance in a very-large-scale integration (VLSI) design; most designs use multiple levels of logic. Almost any circuit representation in RTL or Behavioural Description is a multi-level representation. An early system that was used to design multilevel circuits was LSS from IBM. It used local transformations to simplify logic. Work on LSS and the Yorktown Silicon Compiler spurred rapid research progress in logic synthesis in the 1980s. Several universities contributed by making their research available to the public, most notably SIS from University of California, Berkeley, RASP from University of California, Los Angeles and BOLD from University of Colorado, Boulder. Within a decade, the technology migrated to commercial logic synthesis products offered by electronic design automation companies.
== Commercial tools ==
The leading developers and providers of logic synthesis software packages are Synopsys, Cadence, and Siemens. Their synthesis tools are Synopsys Design Compiler, Cadence Genus, and Siemens Precision RTL.
== Logic elements ==
Logic design is a step in the standard design cycle in which the functional design of an electronic circuit is converted into the representation which captures logic operations, arithmetic operations, control flow, etc. A common output of this step is RTL description. Logic design is commonly followed by the circuit design step. In modern electronic design automation parts of the logical design may be automated using high-level synthesis tools based on the behavioral description of the circuit.
Logic operations usually consist of Boolean AND, OR, XOR and NAND operations, and are the most basic forms of operations in an electronic circuit. Arithmetic operations are usually implemented with the use of logic operators.
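As a minimal illustration of arithmetic built from logic operators, the following Python sketch (illustrative only) composes a ripple-carry adder from the AND/OR/XOR equations of a full adder.

```python
def full_adder(a, b, cin):
    # sum and carry-out expressed purely with XOR, AND, and OR
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add(x, y, width=8):
    """Ripple-carry addition of two integers, one bit per stage;
    the result wraps modulo 2**width like a fixed-width datapath."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder(x >> i & 1, y >> i & 1, carry)
        result |= s << i
    return result
```

Each stage's carry feeds the next, which is exactly why a hardware ripple-carry adder's delay grows linearly with its bit width.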
== High-level or behavioral ==
With a goal of increasing designer productivity, research efforts on the synthesis of circuits specified at the behavioral level have led to the emergence of commercial solutions in 2004, which are used for complex ASIC and FPGA design. These tools automatically synthesize circuits specified using high-level languages, like ANSI C/C++ or SystemC, to a register transfer level (RTL) specification, which can be used as input to a gate-level logic synthesis flow. Using high-level synthesis, also known as ESL synthesis, the allocation of work to clock cycles and across structural components, such as floating-point ALUs, is done by the compiler using an optimisation procedure, whereas with RTL logic synthesis (even from behavioural Verilog or VHDL, where a thread of execution can make multiple reads and writes to a variable within a clock cycle) those allocation decisions have already been made.
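The allocation of operations to clock cycles mentioned above can be illustrated with a toy as-soon-as-possible (ASAP) scheduler in Python; real high-level synthesis tools perform far more sophisticated, resource-constrained scheduling, and all names below are made up for illustration.

```python
def asap_schedule(deps):
    """Toy ASAP scheduler: each operation is placed one cycle after the
    latest of its predecessors (cycle 1 if it has none).
    deps maps an operation name to the operations it depends on."""
    cycle = {}
    def visit(op):
        if op not in cycle:
            cycle[op] = 1 + max((visit(p) for p in deps[op]), default=0)
        return cycle[op]
    for op in deps:
        visit(op)
    return cycle

# y = (a*b) + (c*d): the two multiplies are independent and can share
# cycle 1; the add must wait for both results
sched = asap_schedule({'mul1': [], 'mul2': [], 'add': ['mul1', 'mul2']})
```

This is the kind of decision that RTL-level input has already fixed, but that a behavioral-level compiler makes automatically during an optimisation procedure.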
== Multi-level logic minimization ==
Typical practical implementations of a logic function utilize a multi-level network of logic elements. Starting from an RTL description of a design, the synthesis tool constructs a corresponding multilevel Boolean network.
Next, this network is optimized using several technology-independent techniques before technology-dependent optimizations are performed. The typical cost function during technology-independent optimizations is total literal count of the factored representation of the logic function (which correlates quite well with circuit area).
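The literal-count cost function can be shown with a trivial Python sketch (illustrative only): factoring a shared variable out of a sum-of-products expression reduces the number of literals, which is the quantity a technology-independent optimizer tries to minimize.

```python
def literal_count(expr):
    # count variable occurrences; operators, spaces, and parentheses
    # do not count as literals
    return sum(ch.isalpha() for ch in expr)

sop      = "a&b | a&c | a&d"   # two-level (sum-of-products) form
factored = "a & (b | c | d)"   # factored multi-level form
```

The factored form implements the same function with fewer literals (4 vs. 6), and literal count correlates quite well with the area of the resulting circuit.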
Finally, technology-dependent optimization transforms the technology-independent circuit into a network of gates in a given technology. The simple cost estimates are replaced by more concrete, implementation-driven estimates during and after technology mapping. Mapping is constrained by factors such as the available gates (logic functions) in the technology library, the drive sizes for each gate, and the delay, power, and area characteristics of each gate.
== See also ==
Silicon compiler
Binary decision diagram
Functional verification
Boolean differential calculus
Synthesis of Integral Design by DEC, a 1980s tool used to design the VAX 9000 mainframe CPUs and other ICs
== References ==
Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer, ISBN 0-8493-3096-3 A survey of the field of Electronic design automation. The above summary was derived, with permission, from Volume 2, Chapter 2, Logic Synthesis by Sunil Khatri and Narendra Shenoy.
== Further reading ==
Burgun, Luc; Greiner, Alain; Prado Lopes Eudes (October 1994). "A Consistent Approach in Logic Synthesis for FPGA Architectures". Proceedings of the International Conference on ASIC. Pekin: 104–107.
Jiang, Jie-Hong "Roland"; Devadas, Srinivas (2009). "Chapter 6: Logic synthesis in a nutshell". In Wang, Laung-Terng; Chang, Yao-Wen; Cheng, Kwang-Ting (eds.). Electronic design automation: synthesis, verification, and test. Morgan Kaufmann. ISBN 978-0-12-374364-0.
Hachtel, Gary D.; Somenzi, Fabio (2006) [1996]. Logic Synthesis and Verification Algorithms. Springer Science & Business Media. ISBN 0-7923-9746-0.
Hassoun, Soha; Sasao, Tsutomu, eds. (2002). Logic synthesis and verification. Kluwer. ISBN 978-0-7923-7606-4.
Perkowski, Marek A.; Grygiel, Stanislaw (1995-11-20). "6. Historical Overview of the Research on Decomposition". A Survey of Literature on Function Decomposition (PDF). Version IV. Functional Decomposition Group, Department of Electrical Engineering, Portland University, Portland, Oregon, USA. CiteSeerX 10.1.1.64.1129. Archived (PDF) from the original on 2021-03-28. Retrieved 2021-03-28. (188 pages)
Stanković, Radomir S. [in German]; Sasao, Tsutomu; Astola, Jaakko Tapio [in Finnish] (August 2001). "Publications in the First Twenty Years of Switching Theory and Logic Design" (PDF). Tampere International Center for Signal Processing (TICSP) Series. Tampere University of Technology / TTKK, Monistamo, Finland. ISSN 1456-2774. S2CID 62319288. #14. Archived (PDF) from the original on 2017-08-09. Retrieved 2021-03-28. (4+60 pages)
== External links ==
Media related to Logic design at Wikimedia Commons
The Philosophical Magazine is one of the oldest scientific journals published in English. It was established by Alexander Tilloch in 1798; in 1822 Richard Taylor became joint editor and it has been published continuously by Taylor & Francis ever since.
== Early history ==
The name of the journal dates from a period when "natural philosophy" embraced all aspects of science. The very first paper published in the journal carried the title "Account of Mr Cartwright's Patent Steam Engine". Other articles in the first volume include "Methods of discovering whether Wine has been adulterated with any Metals prejudicial to Health" and "Description of the Apparatus used by Lavoisier to produce Water from its component Parts, Oxygen and Hydrogen".
== 19th century ==
Early in the nineteenth century, classic papers by Humphry Davy, Michael Faraday and James Prescott Joule appeared in the journal and in the 1860s James Clerk Maxwell contributed several long articles, culminating in a paper containing the deduction that light is an electromagnetic wave or, as he put it himself, "We can scarcely avoid the inference that light consists in transverse undulations of the same medium which is the cause of electric and magnetic phenomena". The famous experimental paper of Albert A. Michelson and Edward Morley was published in 1887 and this was followed ten years later by J. J. Thomson with the article "Cathode Rays" – essentially the discovery of the electron.
In 1814, the Philosophical Magazine merged with the Journal of Natural Philosophy, Chemistry, and the Arts, otherwise known as Nicholson's Journal (published by William Nicholson), to form The Philosophical Magazine and Journal. Further mergers in 1827 with the Annals of Philosophy, and in 1840 with The London and Edinburgh Philosophical Magazine and Journal of Science (named the Edinburgh Journal of Science until 1832) led to the retitling of the journal as The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. In 1949, the title reverted to The Philosophical Magazine.
== 20th century ==
In the early part of the 20th century, Ernest Rutherford was a frequent contributor. He once told a friend to "watch out for the next issue of Philosophical Magazine; it is highly radioactive!" Aside from his work on understanding radioactivity, Rutherford proposed the experiments of Hans Geiger and Ernest Marsden that verified his nuclear model of the atom and led to Niels Bohr's famous paper on planetary electrons, which was published in the journal in 1913. Another classic contribution from Rutherford was entitled "Collision of α Particles with Light Atoms. IV. An Anomalous Effect in Nitrogen" – an article describing no less than the discovery of the proton, which he named a year later.
In 1978 the journal was divided into two independent parts, Philosophical Magazine A and Philosophical Magazine B. Part A published papers on structure, defects and mechanical properties while Part B focussed on statistical mechanics, electronic, optical and magnetic properties.
== Recent developments ==
Since the middle of the 20th century, the journal has focused on condensed matter physics and published significant papers on dislocations, mechanical properties of solids, amorphous semiconductors and glass. As the subject area evolved and it became more difficult to classify research into distinct areas, it was no longer considered necessary to publish the journal in two parts, so in 2003 Parts A and B were re-merged. In its current form, 36 issues of the Philosophical Magazine are published each year, supplemented by 12 issues of Philosophical Magazine Letters.
== Editors ==
Previous editors of the Philosophical Magazine have been John Tyndall, J.J. Thomson, Sir Nevill Mott, and William Lawrence Bragg. The journal is currently edited by Edward A. Davis.
== Philosophical Magazine Letters ==
In 1987, the sister journal Philosophical Magazine Letters was established with the aim of rapidly publishing short communications on all aspects of condensed matter physics. It is edited by Edward A. Davis and Peter Riseborough. This monthly journal had a 2022 impact factor of 1.2.
== Series ==
Over its 200-year history, Philosophical Magazine has occasionally restarted its volume numbers at 1, designating a new "series" each time. The journal's series are as follows:
Philosophical Magazine, Series 1 (1798–1826), volumes 1 through 68
Philosophical Magazine, Series 2 (1827–1832), volumes 1 through 11
Philosophical Magazine, Series 3 (1832–1850), volumes 1 through 37
Philosophical Magazine, Series 4 (1851–1875), volumes 1 through 50
Philosophical Magazine, Series 5 (1876–1900), volumes 1 through 50
Philosophical Magazine, Series 6 (1901–1925), volumes 1 through 50
Philosophical Magazine, Series 7 (1926–1955), volumes 1 through 46
Philosophical Magazine, Series 8 (1955–present), volumes 1 through 95 (through December 2015)
If the renumbering had not occurred, the 2015 volume (series 8, volume 95) would have been volume 407.
== References ==
== External links ==
Philosophical Magazine website at Taylor & Francis
Digitised volumes at Biodiversity Heritage Library (with links to Preceding and Succeeding series)
Digitised volumes of "The London, Edinburgh and Dublin philosophical magazine" (3.Ser. 17.1840 - 37.1850; 4.Ser. 1.1851- 50.1875; 5.Ser. 1.1876-50.1900) at the Jena University Library
Philosophical Magazine on Internet Archive.
Philosophical Magazine Letters print: ISSN 0950-0839
Philosophical Magazine Letters online: ISSN 1362-3036
A three-dimensional integrated circuit (3D IC) is a MOS (metal-oxide semiconductor) integrated circuit (IC) manufactured by stacking as many as 16 or more ICs and interconnecting them vertically using, for instance, through-silicon vias (TSVs) or Cu-Cu connections, so that they behave as a single device to achieve performance improvements at reduced power and smaller footprint than conventional two-dimensional processes. The 3D IC is one of several 3D integration schemes that exploit the z-direction to achieve electrical performance benefits in microelectronics and nanoelectronics.
3D integrated circuits can be classified by their level of interconnect hierarchy at the global (package), intermediate (bond pad) and local (transistor) level. In general, 3D integration is a broad term that includes such technologies as 3D wafer-level packaging (3DWLP); 2.5D and 3D interposer-based integration; 3D stacked ICs (3D-SICs); 3D heterogeneous integration; and 3D systems integration; as well as true monolithic 3D ICs.
International organizations such as the Jisso Technology Roadmap Committee (JIC) and the International Technology Roadmap for Semiconductors (ITRS) have worked to classify the various 3D integration technologies to further the establishment of standards and roadmaps of 3D integration. As of the 2010s, 3D ICs are widely used for NAND flash memory and in mobile devices.
== Types ==
=== 3D ICs vs. 3D packaging ===
3D packaging refers to 3D integration schemes that rely on traditional interconnection methods such as wire bonding and flip chip to achieve vertical stacking. 3D packaging can be divided into 3D system in package (3D SiP) and 3D wafer level package (3D WLP). 3D SiPs that have been in mainstream manufacturing for some time and have a well-established infrastructure include stacked memory dies interconnected with wire bonds and package on package (PoP) configurations interconnected with wire bonds or flip chip technology. PoP is used for vertically integrating disparate technologies. 3D WLP uses wafer level processes such as redistribution layers (RDLs) and wafer bumping processes to form interconnects.
2.5D interposer is a 3D WLP that interconnects dies side-by-side on a silicon, glass, or organic interposer using through silicon vias (TSVs) and an RDL. In all types of 3D packaging, chips in the package communicate using off-chip signaling, much as if they were mounted in separate packages on a normal printed circuit board. The interposer may be made of silicon, and is under the dies it connects together. A design can be split into several dies, and then mounted on the interposer with micro bumps.
3D ICs can be divided into 3D Stacked ICs (3D SIC), which refers to advanced packaging techniques stacking IC chips using TSV interconnects, and monolithic 3D ICs, which use fab processes to realize 3D interconnects at the local levels of the on-chip wiring hierarchy as set forth by the ITRS; this results in direct vertical interconnects between device layers. The first examples of a monolithic approach are seen in Samsung's 3D V-NAND devices.
As of the 2010s, 3D IC packages are widely used for NAND flash memory in mobile devices.
=== 3D SiCs ===
The digital electronics market requires higher-density semiconductor memory chips to cater to recently released CPU components, and the multiple-die stacking technique has been suggested as a solution to this problem. JEDEC disclosed that upcoming DRAM technology would include "3D SIC" die stacking at its Server Memory Forum, November 1–2, 2011, in Santa Clara, CA. In August 2014, Samsung Electronics started producing 64 GB SDRAM modules for servers based on emerging DDR4 (double data rate 4) memory using 3D TSV package technology. Newer proposed standards for 3D stacked DRAM include Wide I/O, Wide I/O 2, Hybrid Memory Cube, and High Bandwidth Memory.
=== Monolithic 3D ICs ===
True monolithic 3D ICs are built in layers on a single semiconductor wafer, which is then diced into 3D ICs. There is only one substrate, hence no need for aligning, thinning, bonding, or through-silicon vias. In general, monolithic 3D ICs are still a developing technology and are considered by most to be several years away from production.
Process temperature limitations can be addressed by partitioning the transistor fabrication into two phases: a high-temperature phase performed before layer transfer, followed by a layer transfer using ion-cut, a technique that has been used to produce silicon-on-insulator (SOI) wafers for the past two decades. Multiple thin (tens to hundreds of nanometers) layers of virtually defect-free silicon can be created using low-temperature (<400 °C) bond and cleave techniques and placed on top of active transistor circuitry, followed by permanent finalization of the transistors using etch and deposition processes. This monolithic 3D IC technology has been researched at Stanford University under a DARPA-sponsored grant.
CEA-Leti also developed monolithic 3D IC approaches, called sequential 3D IC. In 2014, the French research institute introduced its CoolCube™, a low-temperature process flow that provides a true path to 3DVLSI.
At Stanford University, researchers designed monolithic 3D ICs using carbon nanotube (CNT) structures instead of silicon, using a wafer-scale, low-temperature CNT transfer process that can be performed at 120 °C.
== Manufacturing technologies for 3D SiCs ==
There are several methods for 3D IC design, including recrystallization and wafer bonding methods. There are two major types of wafer bonding, Cu-Cu connections (copper-to-copper connections between stacked ICs, used in TSVs) and through-silicon via (TSV). 3D ICs with TSVs may use solder microbumps, small solder balls as an interface between two individual dies in a 3D IC. As of 2014, a number of memory products such as High Bandwidth Memory (HBM) and the Hybrid Memory Cube have been launched that implement 3D IC stacking with TSVs. There are a number of key stacking approaches being implemented and explored. These include die-to-die, die-to-wafer, and wafer-to-wafer.
Die-to-Die
Electronic components are built on multiple die, which are then aligned and bonded. Thinning and TSV creation may be done before or after bonding. One advantage of die-to-die is that each component die can be tested first, so that one bad die does not ruin an entire stack. Moreover, each die in the 3D IC can be binned beforehand, so that they can be mixed and matched to optimize power consumption and performance (e.g. matching multiple dice from the low power process corner for a mobile application).
Die-to-Wafer
Electronic components are built on two semiconductor wafers. One wafer is diced; the singulated dice are aligned and bonded onto die sites of the second wafer. As in the wafer-on-wafer method, thinning and TSV creation are performed either before or after bonding. Additional die may be added to the stacks before dicing.
Wafer-to-Wafer
Electronic components are built on two or more semiconductor wafers, which are then aligned, bonded, and diced into 3D ICs. Each wafer may be thinned before or after bonding. Vertical connections are either built into the wafers before bonding or else created in the stack after bonding. These "through-silicon vias" (TSVs) pass through the silicon substrate(s) between active layers and/or between an active layer and an external bond pad. Wafer-to-wafer bonding can reduce yields, since if any one of the N chips in a 3D IC is defective, the entire 3D IC will be defective. Moreover, the wafers must be the same size, but many exotic materials (e.g. III-Vs) are manufactured on much smaller wafers than CMOS logic or DRAM (typically 300 mm), complicating heterogeneous integration.
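The yield trade-off between these stacking approaches can be illustrated with a simple Poisson defect model, Y = exp(−A·D0); the area and defect-density numbers below are purely illustrative assumptions, not data for any real process.

```python
import math

def die_yield(area_cm2, d0_per_cm2):
    """Simple Poisson defect model: Y = exp(-A * D0)."""
    return math.exp(-area_cm2 * d0_per_cm2)

d0 = 0.5                          # assumed defects per cm^2
monolithic = die_yield(4.0, d0)   # one large 4 cm^2 die
layer      = die_yield(1.0, d0)   # one 1 cm^2 die of a 4-die stack

# wafer-to-wafer: dies cannot be tested before bonding, so all four
# stacked dies must happen to be good
blind_stack = layer ** 4

# die-to-die / die-to-wafer: each die is tested first ("known good
# die"), so stacks are assembled only from dies that already work
```

Under this model a blindly bonded four-die stack has the same yield as the equivalent monolithic die (the total area is the same), while pre-testing each small die before stacking is what actually captures the yield benefit of partitioning.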
== Benefits ==
While traditional CMOS scaling improves signal propagation speed, scaling from current manufacturing and chip-design technologies is becoming more difficult and costly, partly because of power-density constraints and partly because interconnects do not become faster while transistors do. 3D ICs address the scaling challenge by stacking 2D dies and connecting them in the 3rd dimension. This promises to speed up communication between layered chips, compared to planar layout. 3D ICs promise many significant benefits, including:
Footprint
More functionality fits into a small space. Smaller form factors are of great importance in embedded devices such as mobile phones and in IoT systems, for which 3D non-volatile memory stacks (e.g. 3D NAND chips) have been developed.
Moore's Law Extension
The increased number of transistors packed into the same footprint is seen by some researchers as an extension of Moore's law: it allows Moore's law to continue, even without its traditional companion of Dennard scaling, toward a new generation of chips with increased computing capacity for the same footprint.
Cost
Partitioning a large chip into multiple smaller dies with 3D stacking can improve the yield and reduce the fabrication cost if individual dies are tested separately.
Heterogeneous Integration
Circuit layers can be built with different processes, or even on different types of wafers. This means that components can be optimized to a much greater degree than if they were built together on a single wafer. Moreover, components with incompatible manufacturing could be combined in a single 3D IC.
Shorter Interconnect
The average wire length is reduced. Common figures reported by researchers are on the order of 10–15%, but this reduction mostly applies to longer interconnect, which may affect circuit delay by a greater amount. Given that 3D wires have much higher capacitance than conventional in-die wires, circuit delay may or may not improve.
Power
Keeping a signal on-chip can reduce its power consumption by 10–100 times. Shorter wires also reduce power consumption by producing less parasitic capacitance. Reducing the power budget leads to less heat generation, extended battery life, and lower cost of operation.
Design
The vertical dimension adds a higher order of connectivity and offers new design possibilities.
Circuit Security
3D integration can achieve security through obscurity: the stacked structure complicates attempts to reverse engineer the circuitry. Sensitive circuits may also be divided among the layers in such a way as to obscure the function of each layer. Moreover, 3D integration allows the integration of dedicated, system-monitor-like features in separate layers. The objective is to implement a kind of hardware firewall for any commodity components/chips to be monitored at runtime, seeking to protect the whole electronic system against run-time attacks as well as malicious hardware modifications.
Bandwidth
3D integration allows large numbers of vertical vias between the layers. This allows construction of wide bandwidth buses between functional blocks in different layers. A typical example would be a processor+memory 3D stack, with the cache memory stacked on top of the processor. This arrangement allows a bus much wider than the typical 128 or 256 bits between the cache and processor. Wide buses in turn alleviate the memory wall problem.
Modularity
3D integration enables modular integration of a wide range of custom stacks by standardizing the layer interfaces for numerous stacking options. As a result, custom stack designs can be manufactured from modular building blocks (e.g. a custom number of DRAM or eDRAM layers, custom accelerator layers, or customizable non-volatile memory layers can be integrated to meet different design requirements). This provides design and cost advantages to semiconductor firms.
Other potential advantages include better integration of neuromorphic chips in computing systems. Despite being low-power alternatives to general-purpose CPUs and GPUs, neuromorphic chips use a fundamentally different "spike-based" computation, which is not directly compatible with legacy digital computation. 3D integration provides key opportunities for this integration.
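The bandwidth benefit of wide vertical buses can be shown with back-of-the-envelope arithmetic (the bus widths and clock rate below are illustrative assumptions, not figures from any specific product).

```python
def bandwidth_gbytes_per_s(bus_bits, clock_mhz):
    # bytes per transfer * transfers per second, expressed in GB/s
    return bus_bits / 8 * clock_mhz * 1e6 / 1e9

narrow = bandwidth_gbytes_per_s(128, 200)    # conventional cache bus
wide   = bandwidth_gbytes_per_s(1024, 200)   # TSV-enabled stacked bus
```

At the same clock rate, widening the processor-to-cache bus from 128 to 1024 bits multiplies raw bandwidth eightfold, which is the mechanism by which stacked memory alleviates the memory wall.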
== Challenges ==
Because this technology is new, it carries new challenges, including:
Cost
While cost is a benefit when compared with scaling, it has also been identified as a challenge to the commercialization of 3D ICs in mainstream consumer applications. However, work is being done to address this. Although 3D technology is new and fairly complex, the cost of the manufacturing process is surprisingly straightforward when broken down into the activities that build up the entire process. By analyzing the combination of activities that lay at the base, cost drivers can be identified. Once the cost drivers are identified, it becomes a less complicated endeavor to determine where the majority of cost comes from and, more importantly, where cost has the potential to be reduced.
Yield
Each extra manufacturing step adds a risk of defects. For 3D ICs to be commercially viable, defects must be repaired or tolerated, or defect density must be improved.
Heat
Heat building up within the stack must be dissipated. This is an inevitable issue as electrical proximity correlates with thermal proximity. Specific thermal hotspots must be more carefully managed.
Design Complexity
Taking full advantage of 3D integration requires sophisticated design techniques and new CAD tools.
TSV-introduced Overhead
TSVs are large compared to gates and impact floorplans. At the 45 nm technology node, the area footprint of a 10 μm × 10 μm TSV is comparable to that of about 50 gates. Furthermore, manufacturability demands landing pads and keep-out zones, which further increase TSV area footprint. Depending on the technology choices, TSVs block some subset of layout resources. Via-first TSVs are manufactured before metallization, and thus occupy the device layer and result in placement obstacles. Via-last TSVs are manufactured after metallization and pass through the chip; thus, they occupy both the device and metal layers, resulting in placement and routing obstacles. While the usage of TSVs is generally expected to reduce wirelength, this depends on the number of TSVs and their characteristics. Also, the granularity of inter-die partitioning impacts wirelength: it typically decreases for moderate (blocks with 20–100 modules) and coarse (block-level partitioning) granularities, but increases for fine (gate-level partitioning) granularities.
Testing
To achieve high overall yield and reduce costs, separate testing of independent dies is essential. However, tight integration between adjacent active layers in 3D ICs entails a significant amount of interconnect between different sections of the same circuit module that were partitioned to different dies. Aside from the massive overhead introduced by required TSVs, sections of such a module, e.g., a multiplier, cannot be independently tested by conventional techniques. This particularly applies to timing-critical paths laid out in 3D.
Lack of Standards
There are few standards for TSV-based 3D IC design, manufacturing, and packaging, although this issue is being addressed. In addition, there are many integration options being explored such as via-last, via-first, via-middle; interposers or direct bonding; etc.
Heterogeneous Integration Supply Chain
In heterogeneously integrated systems, a delay from any one of the parts suppliers delays the delivery of the whole product, and so delays the revenue of each of the 3D IC part suppliers.
Lack of Clearly Defined Ownership
It is unclear who should own the 3D IC integration and packaging/assembly. It could be assembly houses like ASE or the product OEMs.
Thermomechanical Stress and Reliability
3D stacks have more complex material compositions and thermomechanical profiles than 2D designs. The stacking of multiple thinned silicon layers, multiple wiring (BEOL) layers, insulators, through-silicon vias, and micro-C4s results in complex thermomechanical forces and stress patterns being exerted on the 3D stacks. As a result, local heating in one part of the stack (e.g. on thinned device layers) may result in reliability challenges. This requires design-time analysis and reliability-aware design processes.
== Design styles ==
Depending on partitioning granularity, different design styles can be distinguished. Gate-level integration faces multiple challenges and currently appears less practical than block-level integration.
Gate-level Integration
This style partitions standard cells between multiple dies. It promises wirelength reduction and great flexibility. However, wirelength reduction may be undermined unless modules of a certain minimal size are preserved. On the other hand, its adverse effects include the massive number of TSVs required for interconnects. This design style requires 3D place-and-route tools, which are not yet available. Also, partitioning a design block across multiple dies implies that it cannot be fully tested before die stacking. After die stacking (post-bond testing), a single failed die can render several good dies unusable, undermining yield. This style also amplifies the impact of process variation, especially inter-die variation. In fact, a 3D layout may yield more poorly than the same circuit laid out in 2D, contrary to the original promise of 3D IC integration. Furthermore, this design style requires redesigning existing intellectual property (IP), since current IP blocks and EDA tools make no provision for 3D integration.
Block-level Integration
This style assigns entire design blocks to separate dies. Design blocks subsume most of the netlist connectivity and are linked by a small number of global interconnects. Therefore, block-level integration promises to reduce TSV overhead. Sophisticated 3D systems combining heterogeneous dies require distinct manufacturing processes at different technology nodes for fast and low-power random logic, several memory types, analog and RF circuits, etc. Block-level integration, which allows separate and optimized manufacturing processes, thus appears crucial for 3D integration. Furthermore, this style might facilitate the transition from current 2D design towards 3D IC design. Basically, 3D-aware tools are only needed for partitioning and thermal analysis. Separate dies will be designed using (adapted) 2D tools and 2D blocks. This is motivated by the broad availability of reliable IP blocks. It is more convenient to use available 2D IP blocks and to place the mandatory TSVs in the unoccupied space between blocks instead of redesigning IP blocks and embedding TSVs. Design-for-testability structures are a key component of IP blocks and can therefore be used to facilitate testing for 3D ICs. Also, critical paths can be mostly embedded within 2D blocks, which limits the impact of TSV and inter-die variation on manufacturing yield. Finally, modern chip design often requires last-minute engineering changes. Restricting the impact of such changes to single dies is essential to limit cost.
== History ==
Several years after the MOS integrated circuit (MOS IC) chip was first proposed by Mohamed Atalla at Bell Labs in 1960, the concept of a three-dimensional MOS integrated circuit was proposed by Texas Instruments researchers Robert W. Haisty, Rowland E. Johnson and Edward W. Mehal in 1964. In 1969, the concept of a three-dimensional MOS integrated circuit memory chip was proposed by NEC researchers Katsuhiro Onoda, Ryo Igarashi, Toshio Wada, Sho Nakanuma and Toru Tsujide.
Arm has made a high-density 3D logic test chip, and Intel, with its Foveros 3D chip packaging, is planning to ship CPUs using it. IBM demonstrated a fluid that could be used for both power delivery and cooling in 3D ICs.
=== Demonstrations (1983–2012) ===
==== Japan (1983–2005) ====
3D ICs were first successfully demonstrated in 1980s Japan, where research and development (R&D) on 3D ICs was initiated in 1981 with the "Three Dimensional Circuit Element R&D Project" by the Research and Development Association for Future (New) Electron Devices. There were initially two forms of 3D IC design being investigated, recrystallization and wafer bonding, with the earliest successful demonstrations using recrystallization. In October 1983, a Fujitsu research team including S. Kawamura, Nobuo Sasaki and T. Iwai successfully fabricated a three-dimensional complementary metal–oxide–semiconductor (CMOS) integrated circuit, using laser beam recrystallization. It consisted of a structure in which one type of transistor is fabricated directly above a transistor of the opposite type, with separate gates and an insulator in between. A double-layer of silicon nitride and phosphosilicate glass (PSG) film was used as an intermediate insulating layer between the top and bottom devices. This provided the basis for realizing a multi-layered 3D device composed of vertically stacked transistors, with separate gates and an insulating layer in between. In December 1983, the same Fujitsu research team fabricated a 3D integrated circuit with a silicon-on-insulator (SOI) CMOS structure. The following year, they fabricated a 3D gate array with vertically stacked dual SOI/CMOS structure using beam recrystallization.
In 1986, Mitsubishi Electric researchers Yoichi Akasaka and Tadashi Nishimura laid out the basic concepts and proposed technologies for 3D ICs. The following year, a Mitsubishi research team including Nishimura, Akasaka and Osaka University graduate Yasuo Inoue fabricated an image signal processor (ISP) on a 3D IC, with an array of photosensors, CMOS A-to-D converters, arithmetic logic units (ALU) and shift registers arranged in a three-layer structure. In 1989, an NEC research team led by Yoshihiro Hayashi fabricated a 3D IC with a four-layer structure using laser beam crystallisation. In 1990, a Matsushita research team including K. Yamazaki, Y. Itoh and A. Wada fabricated a parallel image signal processor on a four-layer 3D IC, with SOI (silicon-on-insulator) layers formed by laser recrystallization, and the four layers consisting of an optical sensor, level detector, memory and ALU.
The most common form of 3D IC design is wafer bonding. Wafer bonding was initially called "cumulatively bonded IC" (CUBIC), which began development in 1981 with the "Three Dimensional Circuit Element R&D Project" in Japan and was completed in 1990 by Yoshihiro Hayashi's NEC research team, who demonstrated a method where several thin-film devices are bonded cumulatively, which would allow a large number of device layers. They proposed fabrication of separate devices in separate wafers, reduction in the thickness of the wafers, providing front and back leads, and connecting the thinned die to each other. They used CUBIC technology to fabricate and test a two active layer device in a top-to-bottom fashion, having a bulk-Si NMOS FET lower layer and a thinned NMOS FET upper layer, and proposed CUBIC technology that could fabricate 3D ICs with more than three active layers.
The first 3D IC stacked chips fabricated with a through-silicon via (TSV) process were invented in 1980s Japan. Hitachi filed a Japanese patent in 1983, followed by Fujitsu in 1984. In 1986, a Japanese patent filed by Fujitsu described a stacked chip structure using TSVs. In 1989, Mitsumasa Koyanagi of Tohoku University pioneered the technique of wafer-to-wafer bonding with TSVs, which he used to fabricate a 3D LSI chip that year. In 1999, the Association of Super-Advanced Electronics Technologies (ASET) in Japan began funding the development of 3D IC chips using TSV technology, called the "R&D on High Density Electronic System Integration Technology" project. The term "through-silicon via" (TSV) was coined by Tru-Si Technologies researchers Sergey Savastiouk, O. Siniaguine, and E. Korczynski, who proposed a TSV method for a 3D wafer-level packaging (WLP) solution in 2000.
The Koyanagi Group at Tohoku University, led by Mitsumasa Koyanagi, used TSV technology to fabricate a three-layer memory chip in 2000, a three-layer artificial retina chip in 2001, a three-layer microprocessor in 2002, and a ten-layer memory chip in 2005. The same year, a Stanford University research team consisting of Kaustav Banerjee, Shukri J. Souri, Pawan Kapur and Krishna C. Saraswat presented a novel 3D chip design that exploits the vertical dimension to alleviate the interconnect related problems and facilitates heterogeneous integration of technologies to realize a system-on-a-chip (SoC) design.
In 2001, a Toshiba research team including T. Imoto, M. Matsui and C. Takubo developed a "System Block Module" wafer bonding process for manufacturing 3D IC packages.
==== Europe (1988–2005) ====
Fraunhofer and Siemens began research on 3D IC integration in 1987. In 1988, they fabricated 3D CMOS IC devices based on re-crystallization of poly-silicon. In 1997, the inter-chip via (ICV) method was developed by a Fraunhofer–Siemens research team including Peter Ramm, Manfred Engelhardt, Werner Pamler, Christof Landesberger and Armin Klumpp. It was the first industrial 3D IC process, based on Siemens CMOS fab wafers. A variation of that TSV process was later called TSV-SLID (solid liquid inter-diffusion) technology. It was an approach to 3D IC design based on low-temperature wafer bonding and vertical integration of IC devices using inter-chip vias, which they patented.
Ramm went on to develop industry-academic consortia for production of relevant 3D integration technologies. In the German funded cooperative VIC project between Siemens and Fraunhofer, they demonstrated a complete industrial 3D IC stacking process (1993–1996). With his Siemens and Fraunhofer colleagues, Ramm published results showing the details of key processes such as 3D metallization [T. Grassl, P. Ramm, M. Engelhardt, Z. Gabric, O. Spindler, First International Dielectrics for VLSI/ULSI Interconnection Metallization Conference – DUMIC, Santa Clara, CA, 20–22 Feb, 1995] and at ECTC 1995 they presented early investigations on stacked memory in processors.
In the early 2000s, a team of Fraunhofer and Infineon Munich researchers investigated 3D TSV technologies with particular focus on die-to-substrate stacking within the German/Austrian EUREKA project VSI and initiated the European Integrating Projects e-CUBES, as a first European 3D technology platform, and e-BRAINS with a.o., Infineon, Siemens, EPFL, IMEC and Tyndall, where heterogeneous 3D integrated system demonstrators were fabricated and evaluated. A particular focus of the e-BRAINS project was the development of novel low-temperature processes for highly reliable 3D integrated sensor systems.
==== United States (1999–2012) ====
Copper-to-copper wafer bonding, also called Cu-Cu connections or Cu-Cu wafer bonding, was developed at MIT by a research team consisting of Andy Fan, Adnan-ur Rahman and Rafael Reif in 1999. Reif and Fan further investigated Cu-Cu wafer bonding with other MIT researchers including Kuan-Neng Chen, Shamik Das, Chuan Seng Tan and Nisha Checka during 2001–2002. In 2003, DARPA and the Microelectronics Center of North Carolina (MCNC) began funding R&D on 3D IC technology.
In 2004, Tezzaron Semiconductor built working 3D devices from six different designs. The chips were built in two layers with "via-first" tungsten TSVs for vertical interconnection. Two wafers were stacked face-to-face and bonded with a copper process. The top wafer was thinned and the two-wafer stack was then diced into chips. The first chip tested was a simple memory register, but the most notable of the set was an 8051 processor/memory stack that exhibited much higher speed and lower power consumption than an analogous 2D assembly.
In 2004, Intel presented a 3D version of the Pentium 4 CPU. The chip was manufactured with two dies using face-to-face stacking, which allowed a dense via structure. Backside TSVs were used for I/O and power supply. For the 3D floorplan, designers manually arranged functional blocks in each die, aiming for power reduction and performance improvement. Splitting large and high-power blocks and careful rearrangement made it possible to limit thermal hotspots. The 3D design provided a 15% performance improvement (due to eliminated pipeline stages) and 15% power saving (due to eliminated repeaters and reduced wiring) compared to the 2D Pentium 4.
The Teraflops Research Chip introduced in 2007 by Intel is an experimental 80-core design with stacked memory. Due to the high demand for memory bandwidth, a traditional I/O approach would consume 10 to 25 W. To improve upon that, Intel designers implemented a TSV-based memory bus. Each core is connected to one memory tile in the SRAM die with a link that provides 12 GB/s bandwidth, resulting in a total bandwidth of 1 TB/s while consuming only 2.2 W.
An academic implementation of a 3D processor was presented in 2008 at the University of Rochester by Professor Eby Friedman and his students. The chip runs at 1.4 GHz and was designed for optimized vertical processing between the stacked chips, which gives the 3D processor abilities that a traditional single-layer chip could not reach. One challenge in manufacturing the three-dimensional chip was making all of the layers work in harmony, without obstacles that would interfere with a piece of information traveling from one layer to another.
In ISSCC 2012, two 3D-IC-based multi-core designs using GlobalFoundries' 130 nm process and Tezzaron's FaStack technology were presented and demonstrated:
3D-MAPS, a 64 custom core implementation with two-logic-die stack, was demonstrated by researchers from the School of Electrical and Computer Engineering at Georgia Institute of Technology.
Centip3De, near-threshold design based on ARM Cortex-M3 cores, was from the Department of Electrical Engineering and Computer Science at University of Michigan.
Though released much later, IBM's Research and Semiconductor Research and Development groups designed and manufactured a number of 3D processor stacks successfully starting in 2007–2008. These stacks (dubbed Escher internally) demonstrated successful implementation of eDRAM, logic and processor stacks, as well as key experiments in power, thermal, noise and reliability characterization of 3D chips. [6]
=== Commercial 3D ICs (2004–present) ===
The earliest known commercial use of a 3D IC chip was in Sony's PlayStation Portable (PSP) handheld game console, released in 2004. The PSP hardware includes eDRAM (embedded DRAM) memory manufactured by Toshiba in a 3D system-in-package chip with two dies stacked vertically. Toshiba called it "semi-embedded DRAM" at the time, before later calling it a stacked "chip-on-chip" (CoC) solution.
In April 2007, Toshiba commercialized an eight-layer 3D IC, the 16 GB THGAM embedded NAND flash memory chip, which was manufactured with eight stacked 2 GB NAND flash chips. In September 2007, Hynix introduced 24-layer 3D IC technology, with a 16 GB flash memory chip that was manufactured with 24 stacked NAND flash chips using a wafer bonding process. Toshiba also used an eight-layer 3D IC for their 32 GB THGBM flash chip in 2008. In 2010, Toshiba used a 16-layer 3D IC for their 128 GB THGBM2 flash chip, which was manufactured with 16 stacked 8 GB chips. In the 2010s, 3D ICs came into widespread commercial use in the form of multi-chip package and package on package solutions for NAND flash memory in mobile devices.
Elpida Memory developed the first 8 GB DRAM chip (stacked with four DDR3 SDRAM dies) in September 2009, and released it in June 2011. TSMC announced plans for 3D IC production with TSV technology in January 2010. In 2011, SK Hynix introduced 16 GB DDR3 SDRAM (40 nm class) using TSV technology, Samsung Electronics introduced 3D-stacked 32 GB DDR3 (30 nm class) based on TSV in September, and then Samsung and Micron Technology announced TSV-based Hybrid Memory Cube (HMC) technology in October.
High Bandwidth Memory (HBM), developed by Samsung, AMD, and SK Hynix, uses stacked chips and TSVs. The first HBM memory chip was manufactured by SK Hynix in 2013. In January 2016, Samsung Electronics announced early mass production of HBM2, at up to 8 GB per stack.
In 2017, Samsung Electronics combined 3D IC stacking with its 3D V-NAND technology (based on charge trap flash technology), manufacturing its 512 GB KLUFG8R1EM flash memory chip with eight stacked 64-layer V-NAND chips. In 2019, Samsung produced a 1 TB flash chip with 16 stacked V-NAND dies. As of 2018, Intel was considering the use of 3D ICs to improve performance. As of 2022, Micron makes 232-layer NAND memory chips, having made 96-layer chips in April 2019; Toshiba made 96-layer devices in 2018.
In 2022, AMD introduced Zen 4 processors, some of which include stacked 3D V-Cache.
== See also ==
2.5D integrated circuit
Advanced packaging (semiconductors)
Charge trap flash (CTF)
FinFET (3D transistor)
MOSFET
Multigate device (MuGFET)
V-NAND (3D NAND)
== Notes ==
== References ==
== Further reading ==
Philip Garrou, Christopher Bower, Peter Ramm: Handbook of 3D Integration, Technology and Applications of 3D Integrated Circuits Vol. 1 and Vol. 2, Wiley-VCH, Weinheim 2008, ISBN 978-3-527-32034-9.
Yuan Xie, Jason Cong, Sachin Sapatnekar: Three-Dimensional Integrated Circuit Design: EDA, Design and Microarchitectures, Springer, December 2009, ISBN 1-4419-0783-1, ISBN 978-1-4419-0783-7.
Philip Garrou, Mitsumasa Koyanagi, Peter Ramm: Handbook of 3D Integration, 3D Process Technology Vol. 3, Wiley-VCH, Weinheim 2014, ISBN 978-3-527-33466-7.
Paul D. Franzon, Erik Jan Marinissen, Muhannad S. Bakir, Philip Garrou, Mitsumasa Koyanagi, Peter Ramm: Handbook of 3D Integration: "Design, Test, and Thermal Management of 3D Integrated Circuits", Vol. 4, Wiley-VCH, Weinheim 2019, ISBN 978-3-527-33855-9.
== External links ==
Euronymous (2007-05-02). "3D Integration: A Revolution in Design". Real World Technologies. Retrieved 2014-05-15.
Semiconductors (2006). "Mapping progress in 3D IC integration". Solid State Technology. Archived from the original on January 31, 2013. Retrieved 2014-05-15.
Peter Ramm; et al. (2010-09-16). "3D Integration technology: Status and application development". 2010 Proceedings of ESSCIRC. IEEE. pp. 9–16. doi:10.1109/ESSCIRC.2010.5619857. hdl:11250/2463188. ISBN 978-1-4244-6664-1. S2CID 1239311.
Mingjie Lin; Abbas El Gamal; Yi-chang Lu & Simon Wong (2006-02-22). "Performance benefits of monolithically stacked 3D-FPGA". Proceedings of the 2006 ACM/SIGDA 14th international symposium on Field programmable gate arrays. Vol. 26. p. 113. doi:10.1145/1117201.1117219. ISBN 1595932925. S2CID 7818893.
"Joint Project for Mechanical Qualification of Next Generation High Density Package-on-Package (PoP) with Through Mold Via Technology". Retrieved 2014-05-15.
"Advancements in Stacked Chip Scale Packaging (S-CSP), Provides System-in-a-Package Functionality for Wireless and Handheld Applications". Retrieved 2014-05-15.
Smith, Lee (July 6, 2010). "Achieving the 3rd Generation From 3D Packaging to 3D IC Architectures". Future Fab International. Amkor Technology. Retrieved 2014-05-15.
"Factors Affecting Electromigration and Current Carrying Capacity of Flip Chip and 3D IC Interconnects". Retrieved 2014-05-15.
"Evaluation for UV Laser Dicing Process and its Reliability for Various Designs of Stack Chip Scale Package". Retrieved 2014-05-15.
"High Density PoP (Package-on-Package) and Package Stacking Development". Retrieved 2014-05-15.
"3D Interconnect Technology Coming to Light". EDN. 2004. Archived from the original on 2008-12-03. Retrieved 2008-01-22.
"Three-dimensional SoCs perform for future". EE Design. 2003. Retrieved 2014-05-15.
"MagnaChip, Tezzaron form partnership for 3D chips". EE Times. 2004. Archived from the original on 2013-01-21.
"Matrix preps 64-Mbyte write-once memory". EE Times. 2001. Archived from the original on 2008-05-15. Retrieved 2014-05-15.
"Samsung starts mass producing first 3D vertical NAND flash, August 2013". Electroiq.com. 2013-08-06. Archived from the original on 2013-08-18. Retrieved 2014-05-15.
"CEA Leti placed monolithic 3D as the next generation technology as alternative to dimension scaling, August 2013". Electroiq.com. Archived from the original on 2013-08-19. Retrieved 2014-05-15.
"3D integration: A status report". 2009. Archived from the original on 2013-01-22. Retrieved 2011-01-21.
Deepak C. Sekar & Zvi Or-Bach. "Monolithic 3D-ICs with Single Crystal Silicon Layers" (PDF). Retrieved 2014-05-15.
"Global 3D Chips/3D IC Market to Reach US$5.2 Billion by 2015". PRWeb. 2010. Archived from the original on September 1, 2010. Retrieved 2014-05-15.
"Samsung Develops 30nm-class 32GB Green DDR3 for Next-generation Servers, Using TSV Package Technology". Samsung.com. 2011. Retrieved 2014-05-15.
"How Might 3-D ICs Come Together?". Semiconductor International. 2008. Archived from the original on 2010-03-04. Retrieved 2009-06-11.
"Three-Dimensional ICs Solve the Interconnect Paradox". Semiconductor International. 2005. Archived from the original on 2008-02-12. Retrieved 2008-01-22.
"Ziptronix, Raytheon Prove 3-D Integration of 0.5 µm CMOS Device". Semiconductor International. 2007. Archived from the original on 2007-11-06. Retrieved 2008-01-22.
Peter Ramm; Armin Klumpp; Josef Weber; Maaike Taklo (2010). "3D System-on-Chip Technologies for More than Moore Systems". Journal of Microsystem Technologies. 16 (7): 1051–1055. Bibcode:2010MiTec..16.1051R. doi:10.1007/s00542-009-0976-1. S2CID 55824967.
Philip Garrou, James Lu & Peter Ramm (2012). "Chapter 15". Three-Dimensional Integration. Wiley-VCH. Retrieved 2014-05-15.
In universal algebra and in model theory, a structure consists of a set along with a collection of finitary operations and relations that are defined on it.
Universal algebra studies structures that generalize the algebraic structures such as groups, rings, fields and vector spaces. The term universal algebra is used for structures of first-order theories with no relation symbols. Model theory has a different scope that encompasses more arbitrary first-order theories, including foundational structures such as models of set theory.
From the model-theoretic point of view, structures are the objects used to define the semantics of first-order logic, cf. also Tarski's theory of truth or Tarskian semantics.
For a given theory in model theory, a structure is called a model if it satisfies the defining axioms of that theory, although it is sometimes disambiguated as a semantic model when one discusses the notion in the more general setting of mathematical models. Logicians sometimes refer to structures as "interpretations", whereas the term "interpretation" generally has a different (although related) meaning in model theory; see interpretation (model theory).
In database theory, structures with no functions are studied as models for relational databases, in the form of relational models.
== History ==
In the context of mathematical logic, the term "model" was first applied in 1940 by the philosopher Willard Van Orman Quine, in a reference to mathematician Richard Dedekind (1831–1916), a pioneer in the development of set theory. Since the 19th century, one main method for proving the consistency of a set of axioms has been to provide a model for it.
== Definition ==
Formally, a structure can be defined as a triple $\mathcal{A} = (A, \sigma, I)$ consisting of a domain $A$, a signature $\sigma$, and an interpretation function $I$ that indicates how the signature is to be interpreted on the domain. To indicate that a structure has a particular signature $\sigma$, one can refer to it as a $\sigma$-structure.
=== Domain ===
The domain of a structure is an arbitrary set; it is also called the underlying set of the structure, its carrier (especially in universal algebra), its universe (especially in model theory, cf. universe), or its domain of discourse. In classical first-order logic, the definition of a structure prohibits the empty domain.
Sometimes the notation $\operatorname{dom}(\mathcal{A})$ or $|\mathcal{A}|$ is used for the domain of $\mathcal{A}$, but often no notational distinction is made between a structure and its domain (that is, the same symbol $\mathcal{A}$ refers both to the structure and its domain).
=== Signature ===
The signature $\sigma = (S, \operatorname{ar})$ of a structure consists of:
a set $S$ of function symbols and relation symbols, along with
a function $\operatorname{ar}: S \to \mathbb{N}_0$ that ascribes to each symbol $s$ a natural number $n = \operatorname{ar}(s)$.
The natural number $n = \operatorname{ar}(s)$ of a symbol $s$ is called the arity of $s$ because it is the arity of the interpretation of $s$.
Since the signatures that arise in algebra often contain only function symbols, a signature with no relation symbols is called an algebraic signature. A structure with such a signature is also called an algebra; this should not be confused with the notion of an algebra over a field.
=== Interpretation function ===
The interpretation function $I$ of $\mathcal{A}$ assigns functions and relations to the symbols of the signature. To each function symbol $f$ of arity $n$ is assigned an $n$-ary function $f^{\mathcal{A}} = I(f)$ on the domain. Each relation symbol $R$ of arity $n$ is assigned an $n$-ary relation $R^{\mathcal{A}} = I(R) \subseteq A^{\operatorname{ar}(R)}$ on the domain. A nullary ($0$-ary) function symbol $c$ is called a constant symbol, because its interpretation $I(c)$ can be identified with a constant element of the domain.
When a structure (and hence an interpretation function) is given by context, no notational distinction is made between a symbol $s$ and its interpretation $I(s)$. For example, if $f$ is a binary function symbol of $\mathcal{A}$, one simply writes $f: \mathcal{A}^2 \to \mathcal{A}$ rather than $f^{\mathcal{A}}: |\mathcal{A}|^2 \to |\mathcal{A}|$.
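The triple $(A, \sigma, I)$ can be made concrete in a few lines of code. This is an illustrative sketch, not a library API: the names `Structure`, `arity` and `interp` are invented here, and the example interprets the field-like signature on the finite set $\mathbb{Z}/5\mathbb{Z}$ so everything is checkable.

```python
# A minimal model of a first-order structure A = (A, sigma, I):
# a domain, a signature mapping each symbol to its arity, and an
# interpretation mapping each symbol to a function (or constant).

from dataclasses import dataclass
from typing import Dict, FrozenSet

@dataclass(frozen=True)
class Structure:
    domain: FrozenSet              # the underlying set A
    arity: Dict[str, int]          # the signature: symbol -> arity
    interp: Dict[str, object]      # I: symbol -> function or constant

    def __post_init__(self):
        # Every signature symbol must receive an interpretation.
        assert set(self.arity) == set(self.interp)

# Example: the field signature {+, *, -, 0, 1} interpreted on Z/5Z.
Z5 = frozenset(range(5))
A = Structure(
    domain=Z5,
    arity={"+": 2, "*": 2, "-": 1, "0": 0, "1": 0},
    interp={
        "+": lambda x, y: (x + y) % 5,
        "*": lambda x, y: (x * y) % 5,
        "-": lambda x: (-x) % 5,
        "0": 0,                    # nullary symbols act as constants
        "1": 1,
    },
)
print(A.interp["+"](3, 4))  # 2
```

Note that nothing here enforces any field axioms: as the article stresses, a structure merely interprets the symbols, so any interpretation with the right arities qualifies.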
=== Examples ===
The standard signature $\sigma_f$ for fields consists of two binary function symbols $+$ and $\times$, where additional symbols can be derived, such as a unary function symbol $-$ (uniquely determined by $+$) and the two constant symbols $0$ and $1$ (uniquely determined by $+$ and $\times$ respectively).
Thus a structure (algebra) for this signature consists of a set of elements $A$ together with two binary functions, that can be enhanced with a unary function, and two distinguished elements; but there is no requirement that it satisfy any of the field axioms. The rational numbers $\mathbb{Q}$, the real numbers $\mathbb{R}$ and the complex numbers $\mathbb{C}$, like any other field, can be regarded as $\sigma$-structures in an obvious way:
$$\mathcal{Q} = (\mathbb{Q}, \sigma_f, I_{\mathcal{Q}}), \qquad \mathcal{R} = (\mathbb{R}, \sigma_f, I_{\mathcal{R}}), \qquad \mathcal{C} = (\mathbb{C}, \sigma_f, I_{\mathcal{C}})$$
In all three cases we have the standard signature given by $\sigma_f = (S_f, \operatorname{ar}_f)$ with $S_f = \{+, \times, -, 0, 1\}$ and
$$\operatorname{ar}_f(+) = 2, \quad \operatorname{ar}_f(\times) = 2, \quad \operatorname{ar}_f(-) = 1, \quad \operatorname{ar}_f(0) = 0, \quad \operatorname{ar}_f(1) = 0.$$
The interpretation function $I_{\mathcal{Q}}$ is:
$I_{\mathcal{Q}}(+): \mathbb{Q} \times \mathbb{Q} \to \mathbb{Q}$ is addition of rational numbers,
$I_{\mathcal{Q}}(\times): \mathbb{Q} \times \mathbb{Q} \to \mathbb{Q}$ is multiplication of rational numbers,
$I_{\mathcal{Q}}(-): \mathbb{Q} \to \mathbb{Q}$ is the function that takes each rational number $x$ to $-x$, and
$I_{\mathcal{Q}}(0) \in \mathbb{Q}$ is the number $0$, and $I_{\mathcal{Q}}(1) \in \mathbb{Q}$ is the number $1$;
and $I_{\mathcal{R}}$ and $I_{\mathcal{C}}$ are similarly defined.
But the ring $\mathbb{Z}$ of integers, which is not a field, is also a $\sigma_f$-structure in the same way. In fact, there is no requirement that any of the field axioms hold in a $\sigma_f$-structure.
A signature for ordered fields needs an additional binary relation such as $<$ or $\leq$, and therefore structures for such a signature are not algebras, even though they are of course algebraic structures in the usual, loose sense of the word.
The ordinary signature for set theory includes a single binary relation $\in$. A structure for this signature consists of a set of elements and an interpretation of the $\in$ relation as a binary relation on these elements.
== Induced substructures and closed subsets ==
$\mathcal{A}$ is called an (induced) substructure of $\mathcal{B}$ if
$\mathcal{A}$ and $\mathcal{B}$ have the same signature $\sigma(\mathcal{A}) = \sigma(\mathcal{B})$;
the domain of $\mathcal{A}$ is contained in the domain of $\mathcal{B}$: $|\mathcal{A}| \subseteq |\mathcal{B}|$; and
the interpretations of all function and relation symbols agree on $|\mathcal{A}|$.
The usual notation for this relation is $\mathcal{A} \subseteq \mathcal{B}$.
A subset $B \subseteq |\mathcal{A}|$ of the domain of a structure $\mathcal{A}$ is called closed if it is closed under the functions of $\mathcal{A}$, that is, if the following condition is satisfied: for every natural number $n$, every $n$-ary function symbol $f$ (in the signature of $\mathcal{A}$) and all elements $b_1, b_2, \dots, b_n \in B$, the result of applying $f$ to the $n$-tuple $b_1 b_2 \dots b_n$ is again an element of $B$: $f(b_1, b_2, \dots, b_n) \in B$.
For every subset B ⊆ |𝒜| there is a smallest closed subset of |𝒜| that contains B. It is called the closed subset generated by B, or the hull of B, and denoted by ⟨B⟩ or ⟨B⟩_𝒜. The operator ⟨ ⟩ is a finitary closure operator on the set of subsets of |𝒜|.
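For a finite structure the hull is effectively computable: start from B and repeatedly apply every function to tuples of elements already collected until a fixpoint is reached. The following Python sketch illustrates this; the example structure (arithmetic modulo 8 with addition and a zero constant) and the dictionary encoding of the signature are my own illustrative choices, not taken from the text.

```python
from itertools import product

def hull(b, functions):
    """Smallest subset containing b and closed under the given functions.

    `functions` maps each function symbol to a pair (arity, callable).
    Iterates to a fixpoint, which always terminates on a finite domain.
    """
    closed = set(b)
    changed = True
    while changed:
        changed = False
        for _, (arity, f) in functions.items():
            # product(...) snapshots `closed`, so mutating it below is safe
            for args in product(closed, repeat=arity):
                value = f(*args)
                if value not in closed:
                    closed.add(value)
                    changed = True
    return closed

# Example: the additive structure on Z/8Z; the hull of {2} is the even residues.
funcs = {"+": (2, lambda x, y: (x + y) % 8), "0": (0, lambda: 0)}
print(sorted(hull({2}, funcs)))  # [0, 2, 4, 6]
```

Note that a 0-ary function symbol (a constant) contributes its value even when B is empty, which is exactly why the substructure generated by the empty set can be nontrivial, as in the integers example below.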
If 𝒜 = (A, σ, I) and B ⊆ A is a closed subset, then (B, σ, I′) is an induced substructure of 𝒜, where I′ assigns to every symbol of σ the restriction to B of its interpretation in 𝒜.
Conversely, the domain of an induced substructure is a closed subset.
The closed subsets (or induced substructures) of a structure form a lattice. The meet of two subsets is their intersection. The join of two subsets is the closed subset generated by their union. Universal algebra studies the lattice of substructures of a structure in detail.
=== Examples ===
Let σ = {+, ×, −, 0, 1} be again the standard signature for fields. When regarded as σ-structures in the natural way, the rational numbers form a substructure of the real numbers, and the real numbers form a substructure of the complex numbers. The rational numbers are the smallest substructure of the real (or complex) numbers that also satisfies the field axioms.
The set of integers gives an even smaller substructure of the real numbers which is not a field. Indeed, the integers are the substructure of the real numbers generated by the empty set, using this signature. The notion in abstract algebra that corresponds to a substructure of a field, in this signature, is that of a subring, rather than that of a subfield.
The most obvious way to define a graph is as a structure with a signature σ consisting of a single binary relation symbol E. The vertices of the graph form the domain of the structure, and for two vertices a and b, (a, b) ∈ E means that a and b are connected by an edge. In this encoding, the notion of induced substructure is more restrictive than the notion of subgraph. For example, let G be a graph consisting of two vertices connected by an edge, and let H be the graph consisting of the same vertices but no edges. H is a subgraph of G, but not an induced substructure. The notion in graph theory that corresponds to induced substructures is that of induced subgraphs.
== Homomorphisms and embeddings ==
=== Homomorphisms ===
Given two structures 𝒜 and ℬ of the same signature σ, a (σ-)homomorphism from 𝒜 to ℬ is a map h : |𝒜| → |ℬ| that preserves the functions and relations. More precisely:
For every n-ary function symbol f of σ and any elements a₁, a₂, …, aₙ ∈ |𝒜|, the following equation holds: h(f(a₁, a₂, …, aₙ)) = f(h(a₁), h(a₂), …, h(aₙ)).
For every n-ary relation symbol R of σ and any elements a₁, a₂, …, aₙ ∈ |𝒜|, the following implication holds: (a₁, a₂, …, aₙ) ∈ R^𝒜 ⟹ (h(a₁), h(a₂), …, h(aₙ)) ∈ R^ℬ, where R^𝒜 and R^ℬ are the interpretations of the relation symbol R in the structures 𝒜 and ℬ respectively.
A homomorphism h from 𝒜 to ℬ is typically denoted as h : 𝒜 → ℬ, although technically the function h is between the domains |𝒜| and |ℬ| of the two structures.
For every signature σ there is a concrete category σ-Hom which has σ-structures as objects and σ-homomorphisms as morphisms.
A homomorphism h : 𝒜 → ℬ is sometimes called strong if:
For every n-ary relation symbol R of σ and any elements b₁, b₂, …, bₙ ∈ |ℬ| such that (b₁, b₂, …, bₙ) ∈ R^ℬ, there are a₁, a₂, …, aₙ ∈ |𝒜| such that (a₁, a₂, …, aₙ) ∈ R^𝒜 and b₁ = h(a₁), b₂ = h(a₂), …, bₙ = h(aₙ).
The strong homomorphisms give rise to a subcategory of the category σ-Hom that was defined above.
=== Embeddings ===
A (σ-)homomorphism h : 𝒜 → ℬ is called a (σ-)embedding if it is one-to-one and for every n-ary relation symbol R of σ and any elements a₁, a₂, …, aₙ, the following equivalence holds: (a₁, a₂, …, aₙ) ∈ R^𝒜 ⟺ (h(a₁), h(a₂), …, h(aₙ)) ∈ R^ℬ (where as before R^𝒜 and R^ℬ denote the interpretations of the relation symbol R in 𝒜 and ℬ respectively).
Thus an embedding is the same thing as a strong homomorphism which is one-to-one.
The category σ-Emb of σ-structures and σ-embeddings is a concrete subcategory of σ-Hom.
Induced substructures correspond to subobjects in σ-Emb. If σ has only function symbols, σ-Emb is the subcategory of monomorphisms of σ-Hom. In this case induced substructures also correspond to subobjects in σ-Hom.
=== Example ===
As seen above, in the standard encoding of graphs as structures the induced substructures are precisely the induced subgraphs. However, a homomorphism between graphs is the same thing as a homomorphism between the two structures coding the graph. In the example of the previous section, even though the subgraph H of G is not induced, the identity map id: H → G is a homomorphism. This map is in fact a monomorphism in the category σ-Hom, and therefore H is a subobject of G which is not an induced substructure.
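The graph example can be checked mechanically. In the sketch below (the encoding of an undirected edge as a symmetric set of ordered pairs, and all names, are my own illustrative choices), the identity map from H to G preserves the empty edge relation vacuously, so it is a homomorphism; but it is not an embedding, because an embedding must also reflect the relation.

```python
def is_homomorphism(h, rel_a, rel_b):
    """h preserves the relation: every related pair maps to a related pair."""
    return all((h[a], h[b]) in rel_b for (a, b) in rel_a)

def is_embedding(h, rel_a, rel_b):
    """One-to-one, and both preserves and reflects the relation."""
    injective = len(set(h.values())) == len(h)
    reflects = all((a, b) in rel_a
                   for a in h for b in h
                   if (h[a], h[b]) in rel_b)
    return injective and is_homomorphism(h, rel_a, rel_b) and reflects

# G: two vertices joined by an undirected edge; H: same vertices, no edges.
edges_g = {(1, 2), (2, 1)}
edges_h = set()
identity = {1: 1, 2: 2}

print(is_homomorphism(identity, edges_h, edges_g))  # True  (vacuously)
print(is_embedding(identity, edges_h, edges_g))     # False (does not reflect E)
```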
=== Homomorphism problem ===
The following problem is known as the homomorphism problem:
Given two finite structures 𝒜 and ℬ of a finite relational signature, find a homomorphism h : 𝒜 → ℬ or show that no such homomorphism exists.
Every constraint satisfaction problem (CSP) has a translation into the homomorphism problem. Therefore, the complexity of CSP can be studied using the methods of finite model theory.
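For finite relational structures the homomorphism problem can always be solved by exhaustive search over all maps between the domains, which also makes its combinatorial difficulty visible: there are |B|^|A| candidate maps. The Python sketch below is only an illustration; the restriction to binary relations and the encoding of structures as sets of pairs are my own simplifying assumptions.

```python
from itertools import product

def find_homomorphism(dom_a, dom_b, rels):
    """Brute-force search for a homomorphism between finite structures.

    `rels` is a list of pairs (R_A, R_B) of binary relations, one pair per
    relation symbol.  Returns a dict mapping dom_a into dom_b, or None.
    """
    dom_a = list(dom_a)
    for images in product(dom_b, repeat=len(dom_a)):
        h = dict(zip(dom_a, images))
        if all((h[x], h[y]) in r_b
               for r_a, r_b in rels
               for (x, y) in r_a):
            return h
    return None

# A triangle maps homomorphically onto itself (e.g. via the identity) ...
triangle = {(0, 1), (1, 2), (2, 0), (1, 0), (2, 1), (0, 2)}
print(find_homomorphism(range(3), range(3), [(triangle, triangle)]) is not None)  # True
# ... but has no homomorphism into a single edge: an odd cycle is not 2-colorable.
edge = {(0, 1), (1, 0)}
print(find_homomorphism(range(3), range(2), [(triangle, edge)]))  # None
```

The second call is exactly a graph-coloring constraint satisfaction instance rephrased as a homomorphism question, matching the CSP translation mentioned above.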
Another application is in database theory, where a relational model of a database is essentially the same thing as a relational structure. It turns out that a conjunctive query on a database can be described by another structure in the same signature as the database model. A homomorphism from the relational model to the structure representing the query is the same thing as a solution to the query. This shows that the conjunctive query problem is also equivalent to the homomorphism problem.
== Structures and first-order logic ==
Structures are sometimes referred to as "first-order structures". This is misleading, as nothing in their definition ties them to any specific logic, and in fact they are suitable as semantic objects both for very restricted fragments of first-order logic such as that used in universal algebra, and for second-order logic. In connection with first-order logic and model theory, structures are often called models, even when the question "models of what?" has no obvious answer.
=== Satisfaction relation ===
Each first-order structure ℳ = (M, σ, I) has a satisfaction relation ℳ ⊨ φ defined for all formulas φ in the language consisting of the language of ℳ together with a constant symbol for each element of M, which is interpreted as that element. This relation is defined inductively using Tarski's T-schema.
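For a finite structure the inductive definition can be executed directly: a tiny evaluator recurses on the shape of the formula exactly as the T-schema prescribes. The encoding of formulas as nested tuples and the choice of connectives below are my own illustrative assumptions, not a standard API.

```python
def satisfies(domain, interp, phi, env=None):
    """Decide M ⊨ phi by recursion on the formula, following the T-schema.

    Formulas are nested tuples: ("rel", R, x, y), ("not", p), ("and", p, q),
    ("exists", var, p).  `interp` maps relation symbols to sets of tuples;
    `env` assigns domain elements to free variables.
    """
    env = env or {}
    op = phi[0]
    if op == "rel":
        _, r, *args = phi
        return tuple(env[a] for a in args) in interp[r]
    if op == "not":
        return not satisfies(domain, interp, phi[1], env)
    if op == "and":
        return satisfies(domain, interp, phi[1], env) and \
               satisfies(domain, interp, phi[2], env)
    if op == "exists":
        _, var, body = phi
        return any(satisfies(domain, interp, body, {**env, var: d})
                   for d in domain)
    raise ValueError(op)

# M = ({0,1,2}, <):  M ⊨ ∃x ∃y (x < y), but not ∃x (x < x).
lt = {(a, b) for a in range(3) for b in range(3) if a < b}
print(satisfies(range(3), {"<": lt},
                ("exists", "x", ("exists", "y", ("rel", "<", "x", "y")))))  # True
print(satisfies(range(3), {"<": lt},
                ("exists", "x", ("rel", "<", "x", "x"))))                   # False
```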
A structure ℳ is said to be a model of a theory T if the language of ℳ is the same as the language of T and every sentence in T is satisfied by ℳ.
Thus, for example, a "ring" is a structure for the language of rings that satisfies each of the ring axioms, and a model of ZFC set theory is a structure in the language of set theory that satisfies each of the ZFC axioms.
=== Definable relations ===
An n-ary relation R on the universe (i.e. domain) M of the structure ℳ is said to be definable (or explicitly definable, cf. Beth definability; or ∅-definable, or definable with parameters from ∅, cf. below) if there is a formula φ(x₁, …, xₙ) such that R = {(a₁, …, aₙ) ∈ Mⁿ : ℳ ⊨ φ(a₁, …, aₙ)}.
In other words, R is definable if and only if there is a formula φ such that (a₁, …, aₙ) ∈ R ⇔ ℳ ⊨ φ(a₁, …, aₙ) holds.
An important special case is the definability of specific elements. An element m of M is definable in ℳ if and only if there is a formula φ(x) such that ℳ ⊨ ∀x (x = m ↔ φ(x)).
==== Definability with parameters ====
A relation R is said to be definable with parameters (or |ℳ|-definable) if there is a formula φ with parameters from ℳ such that R is definable using φ.
Every element of a structure is definable using the element itself as a parameter.
Some authors use definable to mean definable without parameters, while other authors mean definable with parameters. Broadly speaking, the convention that definable means definable without parameters is more common amongst set theorists, while the opposite convention is more common amongst model theorists.
==== Implicit definability ====
Recall from above that an n-ary relation R on the universe M of ℳ is explicitly definable if there is a formula φ(x₁, …, xₙ) such that R = {(a₁, …, aₙ) ∈ Mⁿ : ℳ ⊨ φ(a₁, …, aₙ)}.
Here the formula φ used to define a relation R must be over the signature of ℳ, and so φ may not mention R itself, since R is not in the signature of ℳ.
If there is a formula φ in the extended language containing the language of ℳ and a new symbol R, and the relation R is the only relation on ℳ such that ℳ ⊨ φ, then R is said to be implicitly definable over ℳ.
By Beth's theorem, every implicitly definable relation is explicitly definable.
== Many-sorted structures ==
Structures as defined above are sometimes called one-sorted structures to distinguish them from the more general many-sorted structures. A many-sorted structure can have an arbitrary number of domains. The sorts are part of the signature, and they play the role of names for the different domains. Many-sorted signatures also prescribe which sorts the functions and relations of a many-sorted structure are defined on. Therefore, the arities of function symbols or relation symbols must be more complicated objects such as tuples of sorts rather than natural numbers.
Vector spaces, for example, can be regarded as two-sorted structures in the following way. The two-sorted signature of vector spaces consists of two sorts V (for vectors) and S (for scalars), together with function symbols for the vector-space operations: vector addition and the vector zero on sort V, the field operations and constants on sort S, and scalar multiplication taking a scalar and a vector to a vector.
If V is a vector space over a field F, the corresponding two-sorted structure 𝒱 consists of the vector domain |𝒱|_V = V, the scalar domain |𝒱|_S = F, and the obvious functions, such as the vector zero 0_V^𝒱 = 0 ∈ |𝒱|_V, the scalar zero 0_S^𝒱 = 0 ∈ |𝒱|_S, or scalar multiplication ×^𝒱 : |𝒱|_S × |𝒱|_V → |𝒱|_V.
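The sort discipline can be mirrored directly in a typed host language: each sort gets its own domain, and each function symbol is only applicable to arguments of the sorts its signature prescribes. The Python sketch below is a toy of my own devising (a fragment of a vector space over GF(3), with scalar multiplication taken mod 3); it is not a full formalization of the axioms.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TwoSortedVS:
    """A toy two-sorted structure: sort V (vectors) and sort S (scalars)."""
    vectors: frozenset  # domain of sort V
    scalars: frozenset  # domain of sort S

    def zero_v(self):
        # interpretation of the constant 0_V, of sort V
        return (0, 0)

    def zero_s(self):
        # interpretation of the constant 0_S, of sort S
        return 0

    def smul(self, c, v):
        # The signature fixes the sorts of the arguments: smul : S × V → V.
        assert c in self.scalars and v in self.vectors, "sort error"
        return ((c * v[0]) % 3, (c * v[1]) % 3)

# Vectors over GF(3) as coordinate pairs; scalars are GF(3) itself.
S = frozenset({0, 1, 2})
V = frozenset((a, b) for a in S for b in S)
vs = TwoSortedVS(V, S)
print(vs.smul(2, (1, 1)))  # (2, 2)
```

The `assert` in `smul` plays the role of the sort constraint: applying a function symbol to arguments of the wrong sort is simply not well-formed.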
Many-sorted structures are often used as a convenient tool even when they could be avoided with a little effort. But they are rarely defined in a rigorous way, because it is straightforward and tedious (hence unrewarding) to carry out the generalization explicitly.
In most mathematical endeavours, not much attention is paid to the sorts. A many-sorted logic however naturally leads to a type theory. As Bart Jacobs puts it: "A logic is always a logic over a type theory." This emphasis in turn leads to categorical logic because a logic over a type theory categorically corresponds to one ("total") category, capturing the logic, being fibred over another ("base") category, capturing the type theory.
== Other generalizations ==
=== Partial algebras ===
Both universal algebra and model theory study classes of (structures or) algebras that are defined by a signature and a set of axioms. In the case of model theory these axioms have the form of first-order sentences. The formalism of universal algebra is much more restrictive; essentially it only allows first-order sentences that have the form of universally quantified equations between terms, e.g.
∀x ∀y (x + y = y + x). One consequence is that the choice of a signature is more significant in universal algebra than it is in model theory. For example, the class of groups, in the signature consisting of the binary function symbol × and the constant symbol 1, is an elementary class, but it is not a variety. Universal algebra solves this problem by adding a unary function symbol ⁻¹.
In the case of fields this strategy works only for addition. For multiplication it fails because 0 does not have a multiplicative inverse. An ad hoc attempt to deal with this would be to define 0⁻¹ = 0. (This attempt fails, essentially because with this definition 0 × 0⁻¹ = 1 is not true.) Therefore, one is naturally led to allow partial functions, i.e., functions that are defined only on a subset of their domain. However, there are several possible ways to generalize notions such as substructure, homomorphism and identity.
=== Structures for typed languages ===
In type theory, there are many sorts of variables, each of which has a type. Types are inductively defined; given two types δ and σ there is also a type σ → δ that represents functions from objects of type σ to objects of type δ. A structure for a typed language (in the ordinary first-order semantics) must include a separate set of objects of each type, and for a function type the structure must have complete information about the function represented by each object of that type.
=== Higher-order languages ===
There is more than one possible semantics for higher-order logic, as discussed in the article on second-order logic. When using full higher-order semantics, a structure need only have a universe for objects of type 0, and the T-schema is extended so that a quantifier over a higher-order type is satisfied by the model if and only if it is disquotationally true. When using first-order semantics, an additional sort is added for each higher-order type, as in the case of a many sorted first order language.
=== Structures that are proper classes ===
In the study of set theory and category theory, it is sometimes useful to consider structures in which the domain of discourse is a proper class instead of a set. These structures are sometimes called class models to distinguish them from the "set models" discussed above. When the domain is a proper class, each function and relation symbol may also be represented by a proper class.
In Bertrand Russell's Principia Mathematica, structures were also allowed to have a proper class as their domain.
== See also ==
Mathematical structure
== Notes ==
== References ==
Burris, Stanley N.; Sankappanavar, H. P. (1981), A Course in Universal Algebra, Berlin, New York: Springer-Verlag
Chang, Chen Chung; Keisler, H. Jerome (1989) [1973], Model Theory, Elsevier, ISBN 978-0-7204-0692-4
Diestel, Reinhard (2005) [1997], Graph Theory, Graduate Texts in Mathematics, vol. 173 (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-26183-4
Ebbinghaus, Heinz-Dieter; Flum, Jörg; Thomas, Wolfgang (1994), Mathematical Logic (2nd ed.), New York: Springer, ISBN 978-0-387-94258-2
Hinman, P. (2005), Fundamentals of Mathematical Logic, A K Peters, ISBN 978-1-56881-262-5
Hodges, Wilfrid (1993), Model theory, Cambridge: Cambridge University Press, ISBN 978-0-521-30442-9
Hodges, Wilfrid (1997), A shorter model theory, Cambridge: Cambridge University Press, ISBN 978-0-521-58713-6
Marker, David (2002), Model Theory: An Introduction, Berlin, New York: Springer-Verlag, ISBN 978-0-387-98760-6
Poizat, Bruno (2000), A Course in Model Theory: An Introduction to Contemporary Mathematical Logic, Berlin, New York: Springer-Verlag, ISBN 978-0-387-98655-5
Rautenberg, Wolfgang (2010), A Concise Introduction to Mathematical Logic (3rd ed.), New York: Springer Science+Business Media, doi:10.1007/978-1-4419-1221-3, ISBN 978-1-4419-1220-6
Rothmaler, Philipp (2000), Introduction to Model Theory, London: CRC Press, ISBN 978-90-5699-313-9
== External links ==
Semantics section in Classical Logic (an entry of the Stanford Encyclopedia of Philosophy)
Computer-aided design (CAD) is the use of computers (or workstations) to aid in the creation, modification, analysis, or optimization of a design.: 3 This software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and to create a database for manufacturing.: 4 Designs made through CAD software help protect products and inventions when used in patent applications. CAD output is often in the form of electronic files for print, machining, or other manufacturing operations. The terms computer-aided drafting (CAD) and computer-aided design and drafting (CADD) are also used.
Its use in designing electronic systems is known as electronic design automation (EDA). In mechanical design it is known as mechanical design automation (MDA), which includes the process of creating a technical drawing with the use of computer software.
CAD software for mechanical design uses either vector-based graphics to depict the objects of traditional drafting, or may also produce raster graphics showing the overall appearance of designed objects. However, it involves more than just shapes. As in the manual drafting of technical and engineering drawings, the output of CAD must convey information, such as materials, processes, dimensions, and tolerances, according to application-specific conventions.
CAD may be used to design curves and figures in two-dimensional (2D) space; or curves, surfaces, and solids in three-dimensional (3D) space.: 71, 106
CAD is an important industrial art extensively used in many applications, including automotive, shipbuilding, and aerospace industries, industrial and architectural design (building information modeling), prosthetics, and many more. CAD is also widely used to produce computer animation for special effects in movies, advertising and technical manuals, often called DCC (digital content creation). The modern ubiquity and power of computers means that even perfume bottles and shampoo dispensers are designed using techniques unheard of by engineers of the 1960s. Because of its enormous economic importance, CAD has been a major driving force for research in computational geometry, computer graphics (both hardware and software), and discrete differential geometry.
The design of geometric models for object shapes, in particular, is occasionally called computer-aided geometric design (CAGD).
== Overview ==
Computer-aided design is one of the many tools used by engineers and designers and is used in many ways depending on the profession of the user and the type of software in question.
CAD is one part of the whole digital product development (DPD) activity within the product lifecycle management (PLM) processes, and as such is used together with other tools, which are either integrated modules or stand-alone products, such as:
Computer-aided engineering (CAE) and finite element analysis (FEA, FEM)
Computer-aided manufacturing (CAM) including instructions to computer numerical control (CNC) machines
Photorealistic rendering and motion simulation
Document management and revision control using product data management (PDM)
CAD is also used for the accurate creation of photo simulations that are often required in the preparation of environmental impact reports, in which computer-aided designs of intended buildings are superimposed into photographs of existing environments to represent what that locale will be like were the proposed facilities allowed to be built. Potential blockage of view corridors and shadow studies are also frequently analyzed through the use of CAD.
== Types ==
There are several different types of CAD, each requiring the operator to think differently about how to use them and design their virtual components in a different manner. Virtually all CAD tools rely on constraint concepts that are used to define geometric or non-geometric elements of a model.
=== 2D CAD ===
There are many producers of the lower-end 2D sketching systems, including a number of free and open-source programs. These provide an approach to the drawing process where scale and placement on the drawing sheet can easily be adjusted in the final draft as required, unlike in hand drafting.
=== 3D CAD ===
3D wireframe is an extension of 2D drafting into a three-dimensional space. Each line has to be manually inserted into the drawing. The final product has no mass properties associated with it and cannot have features directly added to it, such as holes. The operator approaches these in a similar fashion to the 2D systems, although many 3D systems allow using the wireframe model to make the final engineering drawing views.
3D "dumb" solids are created in a way analogous to manipulations of real-world objects. Basic three-dimensional geometric forms (e.g., prisms, cylinders, spheres, or rectangles) have solid volumes added or subtracted from them as if assembling or cutting real-world objects. Two-dimensional projected views can easily be generated from the models. Basic 3D solids do not usually include tools to easily allow the motion of the components, set their limits to their motion, or identify interference between components.
There are several types of 3D solid modeling:
Parametric modeling allows the operator to use what is referred to as "design intent". The objects and features are created modifiable. Any future modifications can be made by changing how the original part was created. If a feature was intended to be located from the center of the part, the operator should locate it from the center of the model. The feature could be located using any geometric object already available in the part, but this random placement would defeat the design intent. If the operator designs the part as it functions, the parametric modeler is able to make changes to the part while maintaining geometric and functional relationships.
Direct or explicit modeling provides the ability to edit geometry without a history tree. With direct modeling, once a sketch is used to create geometry, the sketch is incorporated into the new geometry and the designer just modifies the geometry afterward without needing the original sketch. As with parametric modeling, direct modeling has the ability to include relationships between selected geometry (e.g., tangency, concentricity).
Assembly modelling is a process which incorporates results of the previous single-part modelling into a final product containing several parts. Assemblies can be hierarchical, depending on the specific CAD software vendor, and highly complex models can be achieved (e.g. in building engineering by using computer-aided architectural design software): 539
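The "design intent" idea behind parametric modeling can be illustrated in miniature: if a feature's position is stored as a formula referring to the part's center rather than as a fixed coordinate, changing a driving dimension moves the feature automatically. The class and names below are a hypothetical toy, not a real CAD API.

```python
class Plate:
    """A toy parametric part: the hole position is a formula, not a number."""

    def __init__(self, width, height):
        self.width = width    # driving parameter
        self.height = height  # driving parameter

    @property
    def hole_center(self):
        # Design intent: the hole stays at the center of the plate,
        # however the plate's dimensions change.
        return (self.width / 2, self.height / 2)

plate = Plate(width=100, height=40)
print(plate.hole_center)   # (50.0, 20.0)
plate.width = 160          # change a driving parameter ...
print(plate.hole_center)   # (80.0, 20.0) ... and the dependent feature follows
```

Locating the hole by a fixed coordinate such as (50, 20) would still build the same initial geometry, but the intent would be lost: widening the plate would leave the hole off-center.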
==== Freeform CAD ====
Top-end CAD systems offer the capability to incorporate more organic, aesthetic and ergonomic features into designs. Freeform surface modeling is often combined with solids to allow the designer to create products that fit the human form and visual requirements and that interface well with the machine.
== Technology ==
Originally, software for CAD systems was developed with computer languages such as Fortran and ALGOL, but with the advancement of object-oriented programming methods this has radically changed. Typical modern parametric feature-based modelers and freeform surface systems are built around a number of key C modules with their own APIs. A CAD system can be seen as built up from the interaction of a graphical user interface (GUI) with NURBS geometry or boundary representation (B-rep) data via a geometric modeling kernel. A geometry constraint engine may also be employed to manage the associative relationships between geometry, such as wireframe geometry in a sketch or components in an assembly.
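As a small taste of the curve mathematics such kernels evaluate, here is de Casteljau's algorithm for a Bézier curve, a special (non-rational) case of the NURBS curves mentioned above. The control points are made up for illustration; real kernels use far more elaborate, numerically hardened implementations.

```python
def de_casteljau(points, t):
    """Evaluate a Bézier curve at parameter t by repeated linear interpolation.

    Each pass replaces n points with n-1 points, each the lerp of a
    neighboring pair; the last remaining point lies on the curve.
    """
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bézier with control points (0,0), (1,2), (2,0): at t=0.5 the
# curve passes through (1.0, 1.0).
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))  # (1.0, 1.0)
```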
Unexpected capabilities of these associative relationships have led to a new form of prototyping called digital prototyping, which, in contrast to physical prototypes, avoids the manufacturing time that a physical prototype adds to the design process. That said, CAD models can also be generated by a computer after the physical prototype has been scanned using an industrial CT scanning machine. Depending on the nature of the business, digital or physical prototypes can be initially chosen according to specific needs.
Today, CAD systems exist for all the major platforms (Windows, Linux, UNIX and Mac OS X); some packages support multiple platforms.
Currently, no special hardware is required for most CAD software. However, some CAD systems can do graphically and computationally intensive tasks, so a modern graphics card, high speed (and possibly multiple) CPUs and large amounts of RAM may be recommended.
The human-machine interface is generally via a computer mouse but can also be via a pen and digitizing graphics tablet. Manipulation of the view of the model on the screen is also sometimes done with the use of a Spacemouse/SpaceBall. Some systems also support stereoscopic glasses for viewing the 3D model. Technologies that in the past were limited to larger installations or specialist applications have become available to a wide group of users. These include the CAVE or HMDs and interactive devices like motion-sensing technology.
== Software ==
Starting with the IBM Drafting System in the mid-1960s, computer-aided design systems began to provide more capabilities than just the ability to reproduce manual drafting with electronic drafting, and the cost-benefit for companies to switch to CAD became apparent. The software automated many tasks that are taken for granted from computer systems today, such as automated generation of bills of materials, auto layout in integrated circuits, interference checking, and many others. Eventually, CAD provided the designer with the ability to perform engineering calculations. During this transition, calculations were still performed either by hand or by those individuals who could run computer programs. CAD was a revolutionary change in the engineering industry, where draftsman, designer, and engineer roles that had previously been separate began to merge. CAD is an example of the pervasive effect computers were beginning to have on the industry.
Current computer-aided design software packages range from 2D vector-based drafting systems to 3D solid and surface modelers. Modern CAD packages can also frequently allow rotations in three dimensions, allowing viewing of a designed object from any desired angle, even from the inside looking out. Some CAD software is capable of dynamic mathematical modeling.
CAD technology is used in the design of tools and machinery and in the drafting and design of all types of buildings, from small residential types (houses) to the largest commercial and industrial structures (hospitals and factories).
CAD is mainly used for detailed design of 3D models or 2D drawings of physical components, but it is also used throughout the engineering process from conceptual design and layout of products, through strength and dynamic analysis of assemblies to definition of manufacturing methods of components. It can also be used to design objects such as jewelry, furniture, appliances, etc. Furthermore, many CAD applications now offer advanced rendering and animation capabilities so engineers can better visualize their product designs. 4D BIM is a type of virtual construction engineering simulation incorporating time or schedule-related information for project management.
CAD has become an especially important technology within the scope of computer-aided technologies, with benefits such as lower product development costs and a greatly shortened design cycle. CAD enables designers to lay out and develop work on screen, print it out and save it for future editing, saving time on their drawings.
=== License management software ===
In the 2000s, some CAD software vendors shipped their distributions with dedicated license manager software that controlled how often or by how many users the CAD system could be used.: 166 It could run either on a local machine (loading from a local storage device) or on a local network fileserver, and in the latter case it was usually tied to a specific IP address.: 166
== List of software packages ==
CAD software enables engineers and architects to design, inspect and manage engineering projects within an integrated graphical user interface (GUI) on a personal computer system. Most applications support solid modeling with boundary representation (B-Rep) and NURBS geometry, and enable the same to be published in a variety of formats.
Based on market statistics, commercial software from Autodesk, Dassault Systèmes, Siemens PLM Software, and PTC dominates the CAD industry. The following is a list of major CAD applications, grouped by usage statistics.
=== Commercial software ===
ABViewer
AC3D
Alibre Design
ArchiCAD (Graphisoft)
AutoCAD (Autodesk)
AutoTURN
AxSTREAM
BricsCAD
CATIA (Dassault Systèmes)
Cobalt
CorelCAD
EAGLE
Fusion 360 (Autodesk)
IntelliCAD
Inventor (Autodesk)
IRONCAD
KeyCreator (Kubotek)
Landscape Express
MEDUSA4
MicroStation (Bentley Systems)
Modelur (AgiliCity)
Onshape (PTC)
NX (Siemens Digital Industries Software)
PTC Creo (successor to Pro/ENGINEER) (PTC)
PunchCAD
Remo 3D
Revit (Autodesk)
Rhinoceros 3D
SketchUp
Solid Edge (Siemens Digital Industries Software)
SOLIDWORKS (Dassault Systèmes)
SpaceClaim
T-FLEX CAD
TranslateCAD
TurboCAD
Vectorworks (Nemetschek)
=== Open-source software ===
Blender
BRL-CAD
FreeCAD
LibreCAD
LeoCAD
OpenSCAD
QCAD
Salome (software)
SolveSpace
=== Freeware ===
BricsCAD Shape
Tinkercad (successor to Autodesk 123D)
=== CAD kernels ===
ACIS (Spatial Corp, owned by Dassault Systèmes)
C3D Toolkit by C3D Labs
Open CASCADE Open Source
Parasolid (Siemens Digital Industries Software)
ShapeManager (Autodesk)
== See also ==
== References ==
== External links ==
MIT 1982 CAD lab
Learning materials related to Computer-aided design at Wikiversity
Learning materials related to Computer-aided Geometric Design at Wikiversity
A hybrid integrated circuit (HIC), hybrid microcircuit, hybrid circuit or simply hybrid is a miniaturized electronic circuit constructed of individual devices, such as semiconductor devices (e.g. transistors, diodes or monolithic ICs) and passive components (e.g. resistors, inductors, transformers, and capacitors), bonded to a substrate or printed circuit board (PCB). A PCB having components on a Printed wiring board (PWB) is not considered a true hybrid circuit according to the definition of MIL-PRF-38534.
== Overview ==
"Integrated circuit", as the term is currently used, usually refers to a monolithic IC which differs notably from a HIC in that a HIC is fabricated by inter-connecting a number of components on a substrate whereas an IC's (monolithic) components are fabricated in a series of steps entirely on a single wafer which is then diced into chips. Some hybrid circuits may contain monolithic ICs, particularly Multi-chip module (MCM) hybrid circuits.
Hybrid circuits could be encapsulated in epoxy, as shown in the photo, or in military and space applications, a lid was soldered onto the package. A hybrid circuit serves as a component on a PCB in the same way as a monolithic integrated circuit; the difference between the two types of devices is in how they are constructed and manufactured. The advantage of hybrid circuits is that components which cannot be included in a monolithic IC can be used, e.g., capacitors of large value, wound components, crystals, inductors. In military and space applications, numerous integrated circuits, transistors and diodes, in their die form, would be placed on either a ceramic or beryllium substrate. Either gold or aluminum wire would be bonded from the pads of the IC, transistor, or diode to the substrate.
Thick film technology is often used as the interconnecting medium for hybrid integrated circuits. The use of screen printed thick film interconnect provides advantages of versatility over thin film although feature sizes may be larger and deposited resistors wider in tolerance. Multi-layer thick film is a technique for further improvements in integration using a screen printed insulating dielectric to ensure connections between layers are made only where required. One key advantage for the circuit designer is complete freedom in the choice of resistor value in thick film technology. Planar resistors are also screen printed and included in the thick film interconnect design. The composition and dimensions of resistors can be selected to provide the desired values. The final resistor value is determined by design and can be adjusted by laser trimming. Once the hybrid circuit is fully populated with components, fine tuning prior to final test may be achieved by active laser trimming.
Thin film technology was also employed in the 1960s. Ultra Electronics manufactured circuits using a silica glass substrate. A film of tantalum was deposited by sputtering, followed by a layer of gold by evaporation. The gold layer was first etched following the application of a photoresist to form solder-compatible connection pads. Resistive networks were formed, also by a photoresist and etching process. These were trimmed to a high precision by selective anodization of the film. Capacitors and semiconductors were in the form of LIDs (Leadless Inverted Devices) soldered to the surface by selectively heating the substrate from the underside. Completed circuits were potted in a diallyl phthalate resin. Several customized passive networks were made using these techniques, as were some amplifiers and other specialized circuits. It is believed that some passive networks were used in the engine control units manufactured by Ultra Electronics for Concorde.
Some modern hybrid circuit technologies, such as LTCC-substrate hybrids, allow for embedding of components within the layers of a multi-layer substrate in addition to components placed on the surface of the substrate. This technology produces a circuit that is, to some degree, three-dimensional.
Hybrid ICs are especially suitable for analog signals. They were used in some early digital computers but were replaced therein by monolithic ICs which offered higher performance.
== Other electronic hybrids ==
In the early days of telephones, separate modules containing transformers and resistors were called hybrids or hybrid coils; they have been replaced by semiconductor integrated circuits.
In the early days of transistors, the term hybrid circuit was used to describe circuits with both transistors and vacuum tubes; e.g., an audio amplifier with transistors used for voltage amplification followed by a vacuum tube power output stage, as suitable power transistors were not available. This usage, and the devices, are obsolete; however, amplifiers that use a tube preamplifier stage coupled with a solid-state output stage are still in production, and are called hybrid amplifiers in reference to this.
== See also ==
Chip on board, aka black blobs
System in a package
Multi-chip module (MCM)
Monolithic microwave integrated circuit (MMIC)
Solid Logic Technology (SLT)
MIL-PRF-38534
Printed circuit board (PCB)
Printed Electronic Circuit - Ancestor of the Hybrid IC
== References ==
== External links ==
Media related to Hybrid integrated circuits at Wikimedia Commons
Switching circuit theory is the mathematical study of the properties of networks of idealized switches. Such networks may be strictly combinational logic, in which their output state is only a function of the present state of their inputs; or may also contain sequential elements, where the present state depends on the present state and past states; in that sense, sequential circuits are said to include "memory" of past states. An important class of sequential circuits are state machines. Switching circuit theory is applicable to the design of telephone systems, computers, and similar systems. Switching circuit theory provided the mathematical foundations and tools for digital system design in almost all areas of modern technology.
In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. During 1880–1881 he showed that NOR gates alone (or alternatively NAND gates alone) can be used to reproduce the functions of all the other logic gates, but this work remained unpublished until 1933. The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called Sheffer stroke; the logical NOR is sometimes called Peirce's arrow. Consequently, these gates are sometimes called universal logic gates.
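Peirce's universality result is easy to verify mechanically. A minimal Python sketch (illustrative, not drawn from any source cited here) builds NOT, AND, and OR from NAND alone:

```python
# Sketch: deriving NOT, AND, and OR from NAND alone, illustrating
# why NAND (like NOR) is called a universal logic gate.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):          # NOT x  =  x NAND x
    return nand(a, a)

def and_(a, b):       # AND   =  NOT of NAND
    return not_(nand(a, b))

def or_(a, b):        # OR by De Morgan:  a OR b  =  (NOT a) NAND (NOT b)
    return nand(not_(a), not_(b))

# Verify the derived gates against the full truth tables.
for a in (0, 1):
    assert not_(a) == 1 - a
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
```

The same construction works starting from NOR, with the roles of AND and OR exchanged.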
In 1898, Martin Boda described a switching theory for signalling block systems.
Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification, in 1907, of the Fleming valve could be used as a logic gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, received part of the 1954 Nobel Prize in Physics for the first modern electronic AND gate in 1924. Konrad Zuse designed and built electromechanical logic gates for his computer Z1 (from 1935 to 1938).
The theory was independently established through the works of NEC engineer Akira Nakashima in Japan, Claude Shannon in the United States, and Victor Shestakov in the Soviet Union. The three published a series of papers showing that the two-valued Boolean algebra can describe the operation of switching circuits. However, Shannon's work has largely overshadowed the other two, and despite some scholars arguing the similarities of Nakashima's work to Shannon's, their approaches and theoretical frameworks were markedly different. It is also implausible that Shestakov's work influenced the other two, due to the language barriers and the relative obscurity of his work abroad. Furthermore, Shannon and Shestakov defended their theses in the same year, 1938, and Shestakov did not publish until 1941.
Ideal switches are considered as having only two exclusive states, for example, open or closed. In some analysis, the state of a switch can be considered to have no influence on the output of the system and is designated as a "don't care" state. In complex networks it is necessary to also account for the finite switching time of physical switches; where two or more different paths in a network may affect the output, these delays may result in a "logic hazard" or "race condition" where the output state changes due to the different propagation times through the network.
== See also ==
Circuit switching
Message switching
Packet switching
Fast packet switching
Network switching subsystem
5ESS Switching System
Number One Electronic Switching System
Boolean circuit
C-element
Circuit complexity
Circuit minimization
Karnaugh map
Logic design
Logic gate
Logic in computer science
Nonblocking minimal spanning switch
Programmable logic controller – computer software mimics relay circuits for industrial applications
Quine–McCluskey algorithm
Relay – an early kind of logic device
Switching lemma
Unate function
== References ==
== Further reading ==
Keister, William; Ritchie, Alistair E.; Washburn, Seth H. (1951). The Design of Switching Circuits. The Bell Telephone Laboratories Series (1 ed.). D. Van Nostrand Company, Inc. p. 147. Archived from the original on 2020-05-09. Retrieved 2020-05-09. [8] (2+xx+556+2 pages)
Caldwell, Samuel Hawks (1958-12-01) [February 1958]. Written at Watertown, Massachusetts, USA. Switching Circuits and Logical Design. 5th printing September 1963 (1st ed.). New York, USA: John Wiley & Sons Inc. ISBN 0-47112969-0. LCCN 58-7896. (xviii+686 pages)
Perkowski, Marek A.; Grygiel, Stanislaw (1995-11-20). "6. Historical Overview of the Research on Decomposition". A Survey of Literature on Function Decomposition (PDF). Version IV. Functional Decomposition Group, Department of Electrical Engineering, Portland University, Portland, Oregon, USA. CiteSeerX 10.1.1.64.1129. Archived (PDF) from the original on 2021-03-28. Retrieved 2021-03-28. (188 pages)
Stanković, Radomir S.; Sasao, Tsutomu; Astola, Jaakko Tapio (August 2001). "Publications in the First Twenty Years of Switching Theory and Logic Design" (PDF). Tampere International Center for Signal Processing (TICSP) Series. Tampere University of Technology / TTKK, Monistamo, Finland. ISSN 1456-2774. S2CID 62319288. #14. Archived from the original (PDF) on 2017-08-09. Retrieved 2021-03-28. (4+60 pages)
Stanković, Radomir S.; Astola, Jaakko Tapio (2011). Written at Niš, Serbia & Tampere, Finland. From Boolean Logic to Switching Circuits and Automata: Towards Modern Information Technology. Studies in Computational Intelligence. Vol. 335 (1 ed.). Berlin & Heidelberg, Germany: Springer-Verlag. doi:10.1007/978-3-642-11682-7. ISBN 978-3-642-11681-0. ISSN 1860-949X. LCCN 2011921126. Retrieved 2022-10-25. (xviii+212 pages)
In set theory, inner model theory is the study of certain models of ZFC or some fragment or strengthening thereof. Ordinarily these models are transitive subsets or subclasses of the von Neumann universe V, or sometimes of a generic extension of V. Inner model theory studies the relationships of these models to determinacy, large cardinals, and descriptive set theory. Despite the name, it is considered more a branch of set theory than of model theory.
== Examples ==
The class of all sets is an inner model containing all other inner models.
The first non-trivial example of an inner model was the constructible universe L developed by Kurt Gödel. Every model M of ZF has an inner model LM satisfying the axiom of constructibility, and this will be the smallest inner model of M containing all the ordinals of M. Regardless of the properties of the original model, LM will satisfy the generalized continuum hypothesis and combinatorial axioms such as the diamond principle ◊.
HOD, the class of sets that are hereditarily ordinal definable, forms an inner model, which satisfies ZFC.
The sets that are hereditarily definable over a countable sequence of ordinals form an inner model, used in Solovay's theorem.
L(R), the smallest inner model containing all real numbers and all ordinals.
L[U], the class constructed relative to a normal, non-principal, κ-complete ultrafilter U over an ordinal κ (see zero dagger).
== Consistency results ==
One important use of inner models is the proof of consistency results. If it can be shown that every model of an axiom A has an inner model satisfying axiom B, then if A is consistent, B must also be consistent. This analysis is most useful when A is an axiom independent of ZFC, for example a large cardinal axiom; it is one of the tools used to rank axioms by consistency strength.
== References ==
Jech, Thomas (2003), Set Theory, Springer Monographs in Mathematics, Berlin, New York: Springer-Verlag
Kanamori, Akihiro (2003), The Higher Infinite : Large Cardinals in Set Theory from Their Beginnings (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-00384-7
== See also ==
Core model
Inner model
In mathematics, fuzzy sets (also known as uncertain sets) are sets whose elements have degrees of membership. Fuzzy sets were introduced independently by Lotfi A. Zadeh and Dieter Klaua in 1965 as an extension of the classical notion of set.
At the same time, Salii (1965) defined a more general kind of structure called an "L-relation", which he studied in an abstract algebraic context;
fuzzy relations are special cases of L-relations when L is the unit interval [0, 1].
They are now used throughout fuzzy mathematics, having applications in areas such as linguistics (De Cock, Bodenhofer & Kerre 2000), decision-making (Kuzmin 1982), and clustering (Bezdek 1978).
In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition—an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1]. Fuzzy sets generalize classical sets, since the indicator functions (aka characteristic functions) of classical sets are special cases of the membership functions of fuzzy sets, if the latter only take values 0 or 1. In fuzzy set theory, classical bivalent sets are usually called crisp sets. Fuzzy set theory can be used in a wide range of domains in which information is incomplete or imprecise, such as bioinformatics.
== Definition ==
A fuzzy set is a pair (U, m) where U is a set (often required to be non-empty) and m : U → [0, 1] a membership function.
The reference set U (sometimes denoted by Ω or X) is called the universe of discourse, and for each x ∈ U, the value m(x) is called the grade of membership of x in (U, m). The function m = μ_A is called the membership function of the fuzzy set A = (U, m).
For a finite set U = {x₁, ..., xₙ}, the fuzzy set (U, m) is often denoted by {m(x₁)/x₁, ..., m(xₙ)/xₙ}.
Let x ∈ U. Then x is called
not included in the fuzzy set (U, m) if m(x) = 0 (no member),
fully included if m(x) = 1 (full member),
partially included if 0 < m(x) < 1 (fuzzy member).
The (crisp) set of all fuzzy sets on a universe U is denoted with SF(U) (or sometimes just F(U)).
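Over a finite universe, a fuzzy set amounts to a table of membership grades. A minimal Python sketch (the universe of heights and the grades are invented purely for illustration):

```python
# A finite fuzzy set represented as a dict mapping each element of the
# universe to its grade of membership in [0, 1].  The set "tall" over a
# universe of heights (cm) uses illustrative, made-up grades.
tall = {150: 0.0, 165: 0.2, 175: 0.6, 185: 0.9, 200: 1.0}

def membership(fuzzy_set, x):
    """Grade of membership m(x); elements outside the dict get grade 0."""
    return fuzzy_set.get(x, 0.0)

assert membership(tall, 200) == 1.0    # fully included (full member)
assert membership(tall, 150) == 0.0    # not included (no member)
assert 0 < membership(tall, 175) < 1   # partially included (fuzzy member)
```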
=== Crisp sets related to a fuzzy set ===
For any fuzzy set A = (U, m) and α ∈ [0, 1] the following crisp sets are defined:
A^≥α = A_α = {x ∈ U ∣ m(x) ≥ α} is called its α-cut (aka α-level set),
A^>α = A′_α = {x ∈ U ∣ m(x) > α} is called its strong α-cut (aka strong α-level set),
S(A) = Supp(A) = A^>0 = {x ∈ U ∣ m(x) > 0} is called its support,
C(A) = Core(A) = A^=1 = {x ∈ U ∣ m(x) = 1} is called its core (or sometimes kernel Kern(A)).
Note that some authors understand "kernel" in a different way; see below.
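For a finite fuzzy set these crisp sets are straightforward to compute. A short Python sketch using a dict-of-grades representation (the example values are illustrative):

```python
def alpha_cut(fs, alpha):
    """α-cut: crisp set {x | m(x) >= alpha}."""
    return {x for x, m in fs.items() if m >= alpha}

def strong_alpha_cut(fs, alpha):
    """Strong α-cut: crisp set {x | m(x) > alpha}."""
    return {x for x, m in fs.items() if m > alpha}

def support(fs):
    """Support = strong 0-cut: elements with any positive grade."""
    return strong_alpha_cut(fs, 0.0)

def core(fs):
    """Core: elements with full membership."""
    return {x for x, m in fs.items() if m == 1.0}

A = {'a': 0.0, 'b': 0.3, 'c': 0.7, 'd': 1.0}
assert alpha_cut(A, 0.7) == {'c', 'd'}
assert strong_alpha_cut(A, 0.7) == {'d'}
assert support(A) == {'b', 'c', 'd'}
assert core(A) == {'d'}
```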
=== Other definitions ===
A fuzzy set A = (U, m) is empty (A = ∅) iff (if and only if) ∀x ∈ U : μ_A(x) = m(x) = 0.
Two fuzzy sets A and B are equal (A = B) iff ∀x ∈ U : μ_A(x) = μ_B(x).
A fuzzy set A is included in a fuzzy set B (A ⊆ B) iff ∀x ∈ U : μ_A(x) ≤ μ_B(x).
For any fuzzy set A, any element x ∈ U that satisfies μ_A(x) = 0.5 is called a crossover point.
Given a fuzzy set A, any α ∈ [0, 1] for which A^=α = {x ∈ U ∣ μ_A(x) = α} is not empty is called a level of A.
The level set of A is the set of all levels α ∈ [0, 1] representing distinct cuts. It is the image of μ_A:
Λ_A = {α ∈ [0, 1] : A^=α ≠ ∅} = {α ∈ [0, 1] : ∃x ∈ U (μ_A(x) = α)} = μ_A(U)
For a fuzzy set A, its height is given by
Hgt(A) = sup{μ_A(x) ∣ x ∈ U} = sup(μ_A(U))
where sup denotes the supremum, which exists because μ_A(U) is non-empty and bounded above by 1. If U is finite, we can simply replace the supremum by the maximum.
A fuzzy set A is said to be normalized iff Hgt(A) = 1.
In the finite case, where the supremum is a maximum, this means that at least one element of the fuzzy set has full membership. A non-empty fuzzy set A may be normalized with result Ã by dividing the membership function of the fuzzy set by its height:
∀x ∈ U : μ_Ã(x) = μ_A(x) / Hgt(A)
Besides similarities this differs from the usual normalization in that the normalizing constant is not a sum.
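For a finite fuzzy set, the height is a maximum and normalization is a pointwise division by it. A brief Python sketch with invented grades:

```python
def height(fs):
    """Hgt(A): supremum of the grades; a plain max for a finite set."""
    return max(fs.values())

def normalize(fs):
    """Divide every grade by the height; assumes a non-empty fuzzy set."""
    h = height(fs)
    return {x: m / h for x, m in fs.items()}

A = {'x1': 0.2, 'x2': 0.4, 'x3': 0.8}
assert height(A) == 0.8          # A is not normalized
N = normalize(A)
assert height(N) == 1.0          # the result is normalized
assert N['x1'] == 0.25           # 0.2 / 0.8
```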
For fuzzy sets A of real numbers (U ⊆ ℝ) with bounded support, the width is defined as
Width(A) = sup(Supp(A)) − inf(Supp(A))
In the case when Supp(A) is a finite set, or more generally a closed set, the width is just
Width(A) = max(Supp(A)) − min(Supp(A))
In the n-dimensional case (U ⊆ ℝⁿ) the above can be replaced by the n-dimensional volume of Supp(A).
In general, this can be defined given any measure on U, for instance by integration (e.g. Lebesgue integration) of Supp(A).
A real fuzzy set A (U ⊆ ℝ) is said to be convex (in the fuzzy sense, not to be confused with a crisp convex set) iff
∀x, y ∈ U, ∀λ ∈ [0, 1] : μ_A(λx + (1 − λ)y) ≥ min(μ_A(x), μ_A(y)).
Without loss of generality, we may take x ≤ y, which gives the equivalent formulation
∀z ∈ [x, y] : μ_A(z) ≥ min(μ_A(x), μ_A(y)).
This definition can be extended to one for a general topological space U: we say the fuzzy set A is convex when, for any subset Z of U, the condition
∀z ∈ Z : μ_A(z) ≥ inf(μ_A(∂Z))
holds, where ∂Z denotes the boundary of Z and f(X) = {f(x) ∣ x ∈ X} denotes the image of a set X (here ∂Z) under a function f (here μ_A).
=== Fuzzy set operations ===
Although the complement of a fuzzy set has a single most common definition, the other main operations, union and intersection, do have some ambiguity.
For a given fuzzy set A, its complement ¬A (sometimes denoted as A^c or cA) is defined by the following membership function:
∀x ∈ U : μ_¬A(x) = 1 − μ_A(x).
Let t be a t-norm, and s the corresponding s-norm (aka t-conorm). Given a pair of fuzzy sets A, B, their intersection A ∩ B is defined by:
∀x ∈ U : μ_A∩B(x) = t(μ_A(x), μ_B(x)),
and their union A ∪ B is defined by:
∀x ∈ U : μ_A∪B(x) = s(μ_A(x), μ_B(x)).
By the definition of the t-norm, we see that the union and intersection are commutative, monotonic, associative, and have both a null and an identity element. For the intersection, these are ∅ and U, respectively, while for the union, these are reversed. However, the union of a fuzzy set and its complement may not result in the full universe U, and the intersection of them may not give the empty set ∅. Since the intersection and union are associative, it is natural to define the intersection and union of a finite family of fuzzy sets recursively. It is noteworthy that the generally accepted standard operators for the union and intersection of fuzzy sets are the max and min operators:
∀x ∈ U : μ_A∪B(x) = max(μ_A(x), μ_B(x)) and μ_A∩B(x) = min(μ_A(x), μ_B(x)).
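The standard max/min operators and the standard complement can be sketched pointwise. A minimal Python illustration over an invented finite universe:

```python
def complement(fs, universe):
    """Standard complement: grade 1 - m(x) for every x."""
    return {x: 1.0 - fs.get(x, 0.0) for x in universe}

def intersection(a, b, universe):
    """Standard (min t-norm) intersection."""
    return {x: min(a.get(x, 0.0), b.get(x, 0.0)) for x in universe}

def union(a, b, universe):
    """Standard (max s-norm) union."""
    return {x: max(a.get(x, 0.0), b.get(x, 0.0)) for x in universe}

U = {'p', 'q'}
A = {'p': 0.7, 'q': 0.2}
B = {'p': 0.4, 'q': 0.9}
assert union(A, B, U) == {'p': 0.7, 'q': 0.9}
assert intersection(A, B, U) == {'p': 0.4, 'q': 0.2}
# Unlike in crisp set theory, A ∪ ¬A need not cover the universe:
assert union(A, complement(A, U), U)['p'] == 0.7   # max(0.7, 0.3), not 1
```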
If the standard negator n(α) = 1 − α, α ∈ [0, 1] is replaced by another strong negator, the fuzzy set difference (defined below) may be generalized by
∀x ∈ U : μ_¬A(x) = n(μ_A(x)).
The triple of fuzzy intersection, union and complement form a De Morgan Triplet. That is, De Morgan's laws extend to this triple.
Examples for fuzzy intersection/union pairs with standard negator can be derived from samples provided in the article about t-norms.
The fuzzy intersection is not idempotent in general, because the standard t-norm min is the only one which has this property. Indeed, if the arithmetic multiplication is used as the t-norm, the resulting fuzzy intersection operation is not idempotent. That is, iteratively taking the intersection of a fuzzy set with itself is not trivial. It instead defines the m-th power of a fuzzy set, which can be canonically generalized for non-integer exponents in the following way:
For any fuzzy set A and ν ∈ ℝ⁺ the ν-th power of A is defined by the membership function:
∀x ∈ U : μ_(A^ν)(x) = μ_A(x)^ν.
The case of exponent two is special enough to be given a name.
For any fuzzy set A the concentration CON(A) = A² is defined by
∀x ∈ U : μ_CON(A)(x) = μ_(A²)(x) = μ_A(x)².
Taking 0⁰ = 1, we have A⁰ = U and A¹ = A.
Given fuzzy sets A, B, the fuzzy set difference A ∖ B, also denoted A − B, may be defined straightforwardly via the membership function:
∀x ∈ U : μ_A∖B(x) = t(μ_A(x), n(μ_B(x))),
which means A ∖ B = A ∩ ¬B, e.g.:
∀x ∈ U : μ_A∖B(x) = min(μ_A(x), 1 − μ_B(x)).
Another proposal for a set difference could be:
∀x ∈ U : μ_A−B(x) = μ_A(x) − t(μ_A(x), μ_B(x)).
Proposals for symmetric fuzzy set differences have been made by Dubois and Prade (1980), either by taking the absolute value, giving
∀x ∈ U : μ_A△B(x) = |μ_A(x) − μ_B(x)|,
or by using a combination of just max, min, and standard negation, giving
∀x ∈ U : μ_A△B(x) = max(min(μ_A(x), 1 − μ_B(x)), min(μ_B(x), 1 − μ_A(x))).
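Both Dubois–Prade forms can be computed pointwise; note that they generally disagree. A short illustrative Python sketch with invented values:

```python
def sym_diff_abs(a, b, universe):
    """Absolute-value form of the symmetric fuzzy difference."""
    return {x: abs(a.get(x, 0.0) - b.get(x, 0.0)) for x in universe}

def sym_diff_maxmin(a, b, universe):
    """max/min/standard-negation form of the symmetric difference."""
    return {x: max(min(a.get(x, 0.0), 1 - b.get(x, 0.0)),
                   min(b.get(x, 0.0), 1 - a.get(x, 0.0)))
            for x in universe}

U = {'p'}
A, B = {'p': 0.8}, {'p': 0.3}
# |0.8 - 0.3| = 0.5, but max(min(0.8, 0.7), min(0.3, 0.2)) = 0.7:
assert round(sym_diff_abs(A, B, U)['p'], 10) == 0.5
assert round(sym_diff_maxmin(A, B, U)['p'], 10) == 0.7
```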
Axioms for definition of generalized symmetric differences analogous to those for t-norms, t-conorms, and negators have been proposed by Vemur et al. (2014) with predecessors by Alsina et al. (2005) and Bedregal et al. (2009).
In contrast to crisp sets, averaging operations can also be defined for fuzzy sets.
=== Disjoint fuzzy sets ===
In contrast to the general ambiguity of intersection and union operations, there is clearness for disjoint fuzzy sets:
Two fuzzy sets A, B are disjoint iff
∀x ∈ U : μ_A(x) = 0 ∨ μ_B(x) = 0,
which is equivalent to
∄x ∈ U : μ_A(x) > 0 ∧ μ_B(x) > 0,
and also equivalent to
∀x ∈ U : min(μ_A(x), μ_B(x)) = 0.
We keep in mind that min/max is a t/s-norm pair, and any other will work here as well.
Fuzzy sets are disjoint if and only if their supports are disjoint according to the standard definition for crisp sets.
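The min-based criterion is simple to state in code. An illustrative Python sketch over a small invented universe:

```python
def disjoint(a, b, universe):
    """Disjoint iff min(μA(x), μB(x)) = 0 for every x in the universe."""
    return all(min(a.get(x, 0.0), b.get(x, 0.0)) == 0.0 for x in universe)

U = {'p', 'q', 'r'}
A = {'p': 0.6}
B = {'q': 0.4, 'r': 1.0}
C = {'p': 0.1}
assert disjoint(A, B, U)       # supports {p} and {q, r} do not meet
assert not disjoint(A, C, U)   # both give p a positive grade
```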
For disjoint fuzzy sets A, B any intersection will give ∅, and any union will give the same result, which is denoted as
A ∪̇ B = A ∪ B
with its membership function given by
∀x ∈ U : μ_A∪̇B(x) = μ_A(x) + μ_B(x)
Note that only one of both summands is greater than zero.
For disjoint fuzzy sets A, B the following holds true:
Supp(A ∪̇ B) = Supp(A) ∪ Supp(B)
This can be generalized to finite families of fuzzy sets as follows:
Given a family A = (Aᵢ)_{i∈I} of fuzzy sets with index set I (e.g. I = {1, 2, 3, ..., n}), this family is (pairwise) disjoint iff
for all x ∈ U there exists at most one i ∈ I such that μ_Aᵢ(x) > 0.
A family of fuzzy sets A = (Aᵢ)_{i∈I} is disjoint iff the family of underlying supports Supp ∘ A = (Supp(Aᵢ))_{i∈I} is disjoint in the standard sense for families of crisp sets.
Independent of the t/s-norm pair, intersection of a disjoint family of fuzzy sets will give ∅ again, while the union has no ambiguity:
∪̇_{i∈I} Aᵢ = ∪_{i∈I} Aᵢ
with its membership function given by
∀x ∈ U : μ_(∪̇ Aᵢ)(x) = Σ_{i∈I} μ_Aᵢ(x)
Again only one of the summands is greater than zero.
For disjoint families of fuzzy sets {\displaystyle A=(A_{i})_{i\in I}} the following holds true:
{\displaystyle \operatorname {Supp} \left({\dot {\bigcup \limits _{i\in I}}}\,A_{i}\right)=\bigcup \limits _{i\in I}\operatorname {Supp} (A_{i})}
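The disjoint-union rule above can be sketched for a finite universe. The representation (dicts mapping elements to membership degrees) and all names are illustrative, not from the article:

```python
# Sketch (not from the article): fuzzy sets over a finite universe as
# dicts mapping elements to membership degrees in [0, 1].

def support(fs):
    """Elements with strictly positive membership."""
    return {x for x, mu in fs.items() if mu > 0}

def disjoint(fs_a, fs_b):
    """Two fuzzy sets are disjoint iff their supports do not overlap."""
    return not (support(fs_a) & support(fs_b))

def disjoint_union(fs_a, fs_b, universe):
    """For disjoint fuzzy sets the union membership is the sum of the
    memberships, since at most one summand is nonzero at each point."""
    assert disjoint(fs_a, fs_b)
    return {x: fs_a.get(x, 0.0) + fs_b.get(x, 0.0) for x in universe}

U = ["a", "b", "c", "d"]
A = {"a": 0.5, "b": 1.0}
B = {"c": 0.3}
AB = disjoint_union(A, B, U)
# Supp(A ∪̇ B) = Supp(A) ∪ Supp(B), as stated above
assert support(AB) == support(A) | support(B)
```

Summing memberships is only a valid union here because disjointness guarantees at most one nonzero summand per element; for overlapping sets one would use an s-norm such as max instead.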
=== Scalar cardinality ===
For a fuzzy set {\displaystyle A} with finite support {\displaystyle \operatorname {Supp} (A)} (i.e. a "finite fuzzy set"), its cardinality (aka scalar cardinality or sigma-count) is given by
{\displaystyle \operatorname {Card} (A)=\operatorname {sc} (A)=|A|=\sum _{x\in U}\mu _{A}(x).}
In the case that U itself is a finite set, the relative cardinality is given by
{\displaystyle \operatorname {RelCard} (A)=\|A\|=\operatorname {sc} (A)/|U|=|A|/|U|.}
This can be generalized for the divisor to be a non-empty fuzzy set: For fuzzy sets {\displaystyle A,G} with G ≠ ∅, we can define the relative cardinality by
{\displaystyle \operatorname {RelCard} (A,G)=\operatorname {sc} (A|G)=\operatorname {sc} (A\cap {G})/\operatorname {sc} (G),}
which looks very similar to the expression for conditional probability. Note that {\displaystyle \operatorname {sc} (G)>0} here.
The result may depend on the specific intersection (t-norm) chosen. For {\displaystyle G=U} the result is unambiguous and resembles the prior definition.
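The cardinality definitions above can be sketched for a finite universe. Names are illustrative; the fuzzy-divisor variant assumes the minimum t-norm for the intersection, so other t-norms may give a different result:

```python
# Sketch: scalar cardinality (sigma-count) and relative cardinality for
# finite fuzzy sets represented as dicts of membership degrees.

def sc(fs):
    """Scalar cardinality: the sum of all membership degrees."""
    return sum(fs.values())

def rel_card(fs, universe):
    """Relative cardinality for a finite crisp universe U."""
    return sc(fs) / len(universe)

def rel_card_fuzzy(fs_a, fs_g):
    """Relative cardinality with a non-empty fuzzy divisor G, using the
    minimum t-norm for A ∩ G (an assumption; the text notes the result
    may depend on the t-norm chosen)."""
    inter = {x: min(fs_a.get(x, 0.0), mu) for x, mu in fs_g.items()}
    return sc(inter) / sc(fs_g)

U = ["a", "b", "c", "d"]
A = {"a": 0.5, "b": 1.0}
G = {"a": 1.0, "b": 0.5}
print(sc(A))           # 1.5
print(rel_card(A, U))  # 1.5 / 4 = 0.375
print(rel_card_fuzzy(A, G))  # (0.5 + 0.5) / 1.5 ≈ 0.667
```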
=== Distance and similarity ===
For any fuzzy set {\displaystyle A} the membership function {\displaystyle \mu _{A}:U\to [0,1]} can be regarded as a family {\displaystyle \mu _{A}=(\mu _{A}(x))_{x\in U}\in [0,1]^{U}}. The latter is a metric space with several known metrics {\displaystyle d}. A metric can be derived from a norm (vector norm) {\displaystyle \|\,\|} via
{\displaystyle d(\alpha ,\beta )=\|\alpha -\beta \|.}
For instance, if {\displaystyle U} is finite, i.e. {\displaystyle U=\{x_{1},x_{2},...x_{n}\}}, such a metric may be defined by
{\displaystyle d(\alpha ,\beta ):=\max\{|\alpha (x_{i})-\beta (x_{i})|:i=1,...,n\}}
where {\displaystyle \alpha } and {\displaystyle \beta } are sequences of real numbers between 0 and 1.
For infinite {\displaystyle U}, the maximum can be replaced by a supremum.
Because fuzzy sets are unambiguously defined by their membership function, this metric can be used to measure distances between fuzzy sets on the same universe:
{\displaystyle d(A,B):=d(\mu _{A},\mu _{B}),}
which becomes in the above sample:
{\displaystyle d(A,B)=\max\{|\mu _{A}(x_{i})-\mu _{B}(x_{i})|:i=1,...,n\}.}
Again, for infinite {\displaystyle U} the maximum must be replaced by a supremum. Other distances (like the canonical 2-norm) may diverge if infinite fuzzy sets are too different, e.g., {\displaystyle \varnothing } and {\displaystyle U}.
Similarity measures (here denoted by {\displaystyle S}) may then be derived from the distance, e.g. after a proposal by Koczy:
{\displaystyle S=1/(1+d(A,B))} if {\displaystyle d(A,B)} is finite, {\displaystyle 0} else,
or after Williams and Steele:
{\displaystyle S=\exp(-\alpha {d(A,B)})} if {\displaystyle d(A,B)} is finite, {\displaystyle 0} else,
where {\displaystyle \alpha >0} is a steepness parameter and {\displaystyle \exp(x)=e^{x}}.
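The maximum metric and the two quoted similarity measures can be sketched as follows (finite universe; names are illustrative):

```python
import math

# Sketch: the maximum (Chebyshev) distance between fuzzy sets on the
# same finite universe, and the two similarity measures quoted above.

def distance(fs_a, fs_b, universe):
    """d(A, B) = max over x_i of |mu_A(x_i) - mu_B(x_i)|."""
    return max(abs(fs_a.get(x, 0.0) - fs_b.get(x, 0.0)) for x in universe)

def similarity_koczy(d):
    """Koczy: S = 1 / (1 + d(A, B)) for finite d."""
    return 1.0 / (1.0 + d)

def similarity_williams_steele(d, alpha=1.0):
    """Williams and Steele: S = exp(-alpha * d(A, B)), alpha > 0."""
    return math.exp(-alpha * d)

U = ["a", "b"]
A = {"a": 0.2, "b": 0.9}
B = {"a": 0.5, "b": 0.8}
d = distance(A, B, U)       # max(|0.2-0.5|, |0.9-0.8|) ≈ 0.3
print(similarity_koczy(d))  # ≈ 0.769
```

Both measures map distance 0 to similarity 1 and decrease monotonically as the sets grow further apart.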
=== L-fuzzy sets ===
Sometimes, more general variants of the notion of fuzzy set are used, with membership functions taking values in a (fixed or variable) algebra or structure {\displaystyle L} of a given kind; usually it is required that {\displaystyle L} be at least a poset or lattice. These are usually called L-fuzzy sets, to distinguish them from those valued over the unit interval. The usual membership functions with values in [0, 1] are then called [0, 1]-valued membership functions. These kinds of generalizations were first considered in 1967 by Joseph Goguen, who was a student of Zadeh. A classical corollary may be indicating truth and membership values by {f, t} instead of {0, 1}.
An extension of fuzzy sets has been provided by Atanassov. An intuitionistic fuzzy set (IFS) {\displaystyle A} is characterized by two functions:
1. {\displaystyle \mu _{A}(x)} – degree of membership of x
2. {\displaystyle \nu _{A}(x)} – degree of non-membership of x
where the functions {\displaystyle \mu _{A},\nu _{A}:U\to [0,1]} satisfy
{\displaystyle \forall x\in U:\mu _{A}(x)+\nu _{A}(x)\leq 1.}
This resembles a situation in which some person denoted by {\displaystyle x} votes for a proposal {\displaystyle A} ({\displaystyle \mu _{A}(x)=1,\nu _{A}(x)=0}), against it ({\displaystyle \mu _{A}(x)=0,\nu _{A}(x)=1}), or abstains from voting ({\displaystyle \mu _{A}(x)=\nu _{A}(x)=0}).
After all, we have a percentage of approvals, a percentage of denials, and a percentage of abstentions.
For this situation, special "intuitive fuzzy" negators, t- and s-norms can be defined. With
{\displaystyle D^{*}=\{(\alpha ,\beta )\in [0,1]^{2}:\alpha +\beta \leq 1\}}
and by combining both functions to {\displaystyle (\mu _{A},\nu _{A}):U\to D^{*}} this situation resembles a special kind of L-fuzzy sets.
Once more, this has been expanded by defining picture fuzzy sets (PFS) as follows: A PFS A is characterized by three functions mapping U to [0, 1]: {\displaystyle \mu _{A},\eta _{A},\nu _{A}}, the "degree of positive membership", "degree of neutral membership", and "degree of negative membership" respectively, with the additional condition
{\displaystyle \forall x\in U:\mu _{A}(x)+\eta _{A}(x)+\nu _{A}(x)\leq 1.}
This expands the voting sample above by an additional possibility of "refusal of voting".
With
{\displaystyle D^{*}=\{(\alpha ,\beta ,\gamma )\in [0,1]^{3}:\alpha +\beta +\gamma \leq 1\}}
and special "picture fuzzy" negators, t- and s-norms, this resembles just another type of L-fuzzy sets.
=== Pythagorean fuzzy sets ===
One extension of IFS is what is known as Pythagorean fuzzy sets. Such sets satisfy the constraint {\displaystyle \mu _{A}(x)^{2}+\nu _{A}(x)^{2}\leq 1}, which is reminiscent of the Pythagorean theorem. Pythagorean fuzzy sets apply to real-life situations in which the previous condition {\displaystyle \mu _{A}(x)+\nu _{A}(x)\leq 1} is not valid; the less restrictive condition {\displaystyle \mu _{A}(x)^{2}+\nu _{A}(x)^{2}\leq 1} may thus be suitable in more domains.
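The difference between the intuitionistic and Pythagorean constraints can be illustrated with a small admissibility check (a sketch; names are illustrative):

```python
# Sketch: checking whether a (membership, non-membership) pair is
# admissible as an intuitionistic or as a Pythagorean fuzzy pair.

def is_ifs_pair(mu, nu):
    """Atanassov's condition: mu + nu <= 1."""
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu + nu <= 1

def is_pythagorean_pair(mu, nu):
    """Yager's weaker condition: mu^2 + nu^2 <= 1."""
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu**2 + nu**2 <= 1

# (0.8, 0.5) violates mu + nu <= 1 but satisfies mu^2 + nu^2 <= 1,
# so it is a Pythagorean pair without being an intuitionistic one.
print(is_ifs_pair(0.8, 0.5))          # False
print(is_pythagorean_pair(0.8, 0.5))  # True
```

Since squaring values in [0, 1] never increases them, every intuitionistic pair is also a Pythagorean pair, but not conversely.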
== Fuzzy logic ==
As an extension of the case of multi-valued logic, valuations ({\displaystyle \mu :{\mathit {V}}_{o}\to {\mathit {W}}}) of propositional variables ({\displaystyle {\mathit {V}}_{o}}) into a set of membership degrees ({\displaystyle {\mathit {W}}}) can be thought of as membership functions mapping predicates into fuzzy sets (or more formally, into an ordered set of fuzzy pairs, called a fuzzy relation). With these valuations, many-valued logic can be extended to allow for fuzzy premises from which graded conclusions may be drawn.
This extension is sometimes called "fuzzy logic in the narrow sense" as opposed to "fuzzy logic in the wider sense," which originated in the engineering fields of automated control and knowledge engineering, and which encompasses many topics involving fuzzy sets and "approximated reasoning."
Industrial applications of fuzzy sets in the context of "fuzzy logic in the wider sense" can be found at fuzzy logic.
== Fuzzy number ==
A fuzzy number is a fuzzy set that satisfies all the following conditions:
A is normalised;
A is a convex set;
The membership function {\displaystyle \mu _{A}(x)} achieves the value 1 at least once;
The membership function {\displaystyle \mu _{A}(x)} is at least segmentally continuous.
If these conditions are not satisfied, then A is not a fuzzy number. The core of this fuzzy number is a singleton; its location is:
{\displaystyle \,C(A)=x^{*}:\mu _{A}(x^{*})=1}
Fuzzy numbers can be likened to the funfair game "guess your weight," where someone guesses the contestant's weight, with closer guesses being more correct, and where the guesser "wins" if he or she guesses near enough to the contestant's weight, with the actual weight being completely correct (mapping to 1 by the membership function).
The kernel {\displaystyle K(A)=\operatorname {Kern} (A)} of a fuzzy interval {\displaystyle A} is defined as the 'inner' part, without the 'outbound' parts where the membership value is constant ad infinitum: the kernel is the smallest subset of {\displaystyle \mathbb {R} } outside of which {\displaystyle \mu _{A}(x)} is constant.
However, there are other concepts of fuzzy numbers and intervals as some authors do not insist on convexity.
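As a concrete illustration, a triangular fuzzy number satisfies all the conditions above. This sketch (illustrative names; shape parameters a ≤ m ≤ b are assumptions) builds its membership function:

```python
# Sketch: a triangular fuzzy number (a, m, b) — normalised, convex,
# piecewise continuous, with singleton core {m}.

def triangular(a, m, b):
    """Return the membership function of the triangular fuzzy number
    that rises linearly from a to the peak m and falls back to b."""
    def mu(x):
        if a < x <= m:
            return (x - a) / (m - a)   # rising edge
        if m < x < b:
            return (b - x) / (b - m)   # falling edge
        return 1.0 if x == m else 0.0  # outside the support
    return mu

about_5 = triangular(3.0, 5.0, 7.0)  # "approximately 5"
print(about_5(5.0))  # 1.0 — the core C(A) = 5
print(about_5(4.0))  # 0.5
print(about_5(9.0))  # 0.0
```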
== Fuzzy categories ==
The use of set membership as a key component of category theory can be generalized to fuzzy sets. This approach, which began in 1968 shortly after the introduction of fuzzy set theory, led to the development of Goguen categories in the 21st century. In these categories, rather than using two-valued set membership, more general intervals are used, and may be lattices as in L-fuzzy sets.
There are numerous mathematical extensions similar to or more general than fuzzy sets. Since fuzzy sets were introduced in 1965 by Zadeh, many new mathematical constructions and theories treating imprecision, inaccuracy, vagueness, uncertainty and vulnerability have been developed. Some of these constructions and theories are extensions of fuzzy set theory, while others attempt to mathematically model inaccuracy/vagueness and uncertainty in a different way. The diversity of such constructions and corresponding theories includes:
Fuzzy Sets (Zadeh, 1965)
interval sets (Moore, 1966),
L-fuzzy sets (Goguen, 1967),
flou sets (Gentilhomme, 1968),
type-2 fuzzy sets and type-n fuzzy sets (Zadeh, 1975),
interval-valued fuzzy sets (Grattan-Guinness, 1975; Jahn, 1975; Sambuc, 1975; Zadeh, 1975),
level fuzzy sets (Radecki, 1977)
rough sets (Pawlak, 1982),
intuitionistic fuzzy sets (Atanassov, 1983),
fuzzy multisets (Yager, 1986),
intuitionistic L-fuzzy sets (Atanassov, 1986),
rough multisets (Grzymala-Busse, 1987),
fuzzy rough sets (Nakamura, 1988),
real-valued fuzzy sets (Blizard, 1989),
vague sets (Wen-Lung Gau and Buehrer, 1993),
α-level sets (Yao, 1997),
shadowed sets (Pedrycz, 1998),
neutrosophic sets (NSs) (Smarandache, 1998),
bipolar fuzzy sets (Wen-Ran Zhang, 1998),
genuine sets (Demirci, 1999),
soft sets (Molodtsov, 1999),
complex fuzzy set (2002),
intuitionistic fuzzy rough sets (Cornelis, De Cock and Kerre, 2003)
L-fuzzy rough sets (Radzikowska and Kerre, 2004),
multi-fuzzy sets (Sabu Sebastian, 2009),
generalized rough fuzzy sets (Feng, 2010)
rough intuitionistic fuzzy sets (Thomas and Nair, 2011),
soft rough fuzzy sets (Meng, Zhang and Qin, 2011)
soft fuzzy rough sets (Meng, Zhang and Qin, 2011)
soft multisets (Alkhazaleh, Salleh and Hassan, 2011)
fuzzy soft multisets (Alkhazaleh and Salleh, 2012)
Pythagorean fuzzy set (Yager, 2013),
picture fuzzy set (Cuong, 2013),
spherical fuzzy set (Mahmood, 2018).
== Fuzzy relation equation ==
The fuzzy relation equation is an equation of the form A · R = B, where A and B are fuzzy sets, R is a fuzzy relation, and A · R stands for the composition of A with R.
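The text does not fix a particular composition; assuming the common max–min composition, A · R can be sketched over finite sets as:

```python
# Sketch (assumption): the max–min composition of a fuzzy set A over X
# with a fuzzy relation R over X × Y, yielding a fuzzy set B over Y.

def compose(fs_a, rel, xs, ys):
    """B(y) = max over x of min(A(x), R(x, y))."""
    return {y: max(min(fs_a.get(x, 0.0), rel.get((x, y), 0.0)) for x in xs)
            for y in ys}

X, Y = ["x1", "x2"], ["y1", "y2"]
A = {"x1": 0.7, "x2": 0.4}
R = {("x1", "y1"): 1.0, ("x1", "y2"): 0.2,
     ("x2", "y1"): 0.0, ("x2", "y2"): 0.9}
B = compose(A, R, X, Y)
print(B)  # {'y1': 0.7, 'y2': 0.4}
```

Solving the fuzzy relation equation means going the other way: given A and B, finding a relation R for which this composition reproduces B.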
== Entropy ==
A measure d of fuzziness for fuzzy sets of universe {\displaystyle U} should fulfill the following conditions for all {\displaystyle x\in U}:
{\displaystyle d(A)=0} if {\displaystyle A} is a crisp set: {\displaystyle \mu _{A}(x)\in \{0,\,1\}}
{\displaystyle d(A)} has a unique maximum if {\displaystyle \forall x\in U:\mu _{A}(x)=0.5}
{\displaystyle \forall x\in U:(\mu _{A}(x)\leq \mu _{B}(x)\leq 0.5)\lor (\mu _{A}(x)\geq \mu _{B}(x)\geq 0.5)}
{\displaystyle \Rightarrow d(A)\leq d(B),}
which means that B is "crisper" than A.
{\displaystyle d(\neg {A})=d(A)}
In this case {\displaystyle d(A)} is called the entropy of the fuzzy set A.
For finite {\displaystyle U=\{x_{1},x_{2},...x_{n}\}} the entropy of a fuzzy set {\displaystyle A} is given by
{\displaystyle d(A)=H(A)+H(\neg {A}),}
{\displaystyle H(A)=-k\sum _{i=1}^{n}\mu _{A}(x_{i})\ln \mu _{A}(x_{i})}
or just
{\displaystyle d(A)=k\sum _{i=1}^{n}S(\mu _{A}(x_{i}))}
where {\displaystyle S(x)=H_{e}(x)} is Shannon's function (natural entropy function)
{\displaystyle S(\alpha )=-\alpha \ln \alpha -(1-\alpha )\ln(1-\alpha ),\ \alpha \in [0,1]}
and {\displaystyle k} is a constant depending on the measure unit and the logarithm base used (here we have used the natural base e).
The physical interpretation of k is the Boltzmann constant kB.
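With k = 1, the finite-universe entropy d(A) = H(A) + H(¬A) can be computed by summing Shannon's function over the membership degrees (a sketch with illustrative names):

```python
import math

# Sketch: entropy of a finite fuzzy set via Shannon's function, k = 1.

def shannon(alpha):
    """S(a) = -a ln a - (1-a) ln(1-a), with S(0) = S(1) = 0 by the
    usual convention 0 ln 0 = 0."""
    if alpha in (0.0, 1.0):
        return 0.0
    return -alpha * math.log(alpha) - (1 - alpha) * math.log(1 - alpha)

def entropy(fs, universe, k=1.0):
    """d(A) = H(A) + H(¬A) = k * sum of S over the membership degrees."""
    return k * sum(shannon(fs.get(x, 0.0)) for x in universe)

U = ["a", "b", "c"]
crisp = {"a": 1.0, "b": 0.0, "c": 1.0}
fuzzy = {"a": 0.5, "b": 0.5, "c": 0.5}
print(entropy(crisp, U))  # 0.0 — crisp sets have zero fuzziness
print(entropy(fuzzy, U))  # 3 ln 2 ≈ 2.079 — the maximally fuzzy case
```

The two printed values illustrate the first two axioms above: entropy vanishes exactly on crisp sets and peaks when every membership degree is 0.5.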
Let {\displaystyle A} be a fuzzy set with a continuous membership function (fuzzy variable). Then
{\displaystyle H(A)=-k\int _{-\infty }^{\infty }\operatorname {Cr} \lbrace A\geq t\rbrace \ln \operatorname {Cr} \lbrace A\geq t\rbrace \,dt}
and its entropy is
{\displaystyle d(A)=k\int _{-\infty }^{\infty }S(\operatorname {Cr} \lbrace A\geq t\rbrace )\,dt.}
== Extensions ==
There are many mathematical constructions similar to or more general than fuzzy sets. Since fuzzy sets were introduced in 1965, many new mathematical constructions and theories treating imprecision, inexactness, ambiguity, and uncertainty have been developed. Some of these constructions and theories are extensions of fuzzy set theory, while others try to mathematically model imprecision and uncertainty in a different way.
== See also ==
== References == | Wikipedia/Fuzzy_set_theory |
Evolutionary dynamics is the study of the mathematical principles according to which biological organisms as well as cultural ideas evolve. This is mostly achieved through the mathematical discipline of population genetics, along with evolutionary game theory. Most population genetics considers changes in the frequencies of alleles at a small number of gene loci. When infinitesimal effects at a large number of gene loci are considered, one derives quantitative genetics. Traditional population genetic models deal with alleles and genotypes, and are frequently stochastic. In evolutionary game theory, developed first by John Maynard Smith, evolutionary biology concepts may take a deterministic mathematical form, with selection acting directly on inherited phenotypes. These same models can be applied to studying the evolution of human preferences and ideologies. Many variants on these models have been developed, incorporating weak selection, population structure, stochasticity, etc. These models have relevance also to the generation and maintenance of tissues in mammals, since an understanding of tissue cell kinetics, architecture, and development from adult stem cells has important implications for aging and cancer.
== References ==
== External links ==
Evolutionary Game Dynamics from Clay Mathematics Institute | Wikipedia/Evolutionary_dynamics |
Set Theory: An Introduction to Independence Proofs is a textbook and reference work in set theory by Kenneth Kunen. It starts from basic notions, including the ZFC axioms, and quickly develops combinatorial notions such as trees, Suslin's problem, the diamond principle, and Martin's axiom. It develops some basic model theory (rather specifically aimed at models of set theory) and the theory of Gödel's constructible universe, L. The book then proceeds to describe the method of forcing.
Kunen completely rewrote the book for the 2011 edition (under the title Set Theory), including more model theory.
== References ==
Baumgartner, James E. (June 1986). "Set Theory. An Introduction to Independence Proofs by Kenneth Kunen". The Journal of Symbolic Logic. 51 (2): 462–464. doi:10.2307/2274070. JSTOR 2274070.
Henson, C. Ward (1984). "Set theory, An introduction to independence proofs by Kenneth Kunen". Bull. Amer. Math. Soc. 10: 129–131. doi:10.1090/S0273-0979-1984-15214-5.
Kunen, Kenneth (1980). Set Theory: An Introduction to Independence Proofs. North-Holland. ISBN 0-444-85401-0. Zbl 0443.03021.
Kunen, Kenneth (2011). Set theory. Studies in Logic. Vol. 34. London: College Publications. ISBN 978-1-84890-050-9. MR 2905394. Zbl 1262.03001. | Wikipedia/Set_Theory:_An_Introduction_to_Independence_Proofs |
In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y. The set X is called the domain of the function and the set Y is called the codomain of the function.
Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly increased the possible applications of the concept.
A function is often denoted by a letter such as f, g or h. The value of a function f at an element x of its domain (that is, the element of the codomain that is associated with x) is denoted by f(x); for example, the value of f at x = 4 is denoted by f(4). Commonly, a specific function is defined by means of an expression depending on x, such as
{\displaystyle f(x)=x^{2}+1;}
in this case, some computation, called function evaluation, may be needed for deducing the value of the function at a particular value; for example, if
{\displaystyle f(x)=x^{2}+1,}
then
{\displaystyle f(4)=4^{2}+1=17.}
Given its domain and its codomain, a function is uniquely represented by the set of all pairs (x, f (x)), called the graph of the function, a popular means of illustrating the function. When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane.
Functions are widely used in science, engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics.
The concept of a function has evolved significantly over centuries, from its informal origins in ancient mathematics to its formalization in the 19th century. See History of the function concept for details.
== Definition ==
A function f from a set X to a set Y is an assignment of one element of Y to each element of X. The set X is called the domain of the function and the set Y is called the codomain of the function.
If the element y in Y is assigned to x in X by the function f, one says that f maps x to y, and this is commonly written
{\displaystyle y=f(x).}
In this notation, x is the argument or variable of the function.
A specific element x of X is a value of the variable, and the corresponding element of Y is the value of the function at x, or the image of x under the function. The image of a function, sometimes called its range, is the set of the images of all elements in the domain.
A function f, its domain X, and its codomain Y are often specified by the notation
{\displaystyle f:X\to Y.}
One may write {\displaystyle x\mapsto y} instead of {\displaystyle y=f(x)}, where the symbol {\displaystyle \mapsto } (read 'maps to') is used to specify where a particular element x in the domain is mapped to by f. This allows the definition of a function without naming. For example, the square function is the function
{\displaystyle x\mapsto x^{2}.}
The domain and codomain are not always explicitly given when a function is defined. In particular, it is common that one might only know, without some (possibly difficult) computation, that the domain of a specific function is contained in a larger set. For example, if {\displaystyle f:\mathbb {R} \to \mathbb {R} } is a real function, the determination of the domain of the function {\displaystyle x\mapsto 1/f(x)} requires knowing the zeros of f. This is one of the reasons for which, in mathematical analysis, "a function from X to Y " may refer to a function having a proper subset of X as a domain. For example, a "function from the reals to the reals" may refer to a real-valued function of a real variable whose domain is a proper subset of the real numbers, typically a subset that contains a non-empty open interval. Such a function is then called a partial function.
A function f on a set S means a function from the domain S, without specifying a codomain. However, some authors use it as shorthand for saying that the function is f : S → S.
=== Formal definition ===
The above definition of a function is essentially that of the founders of calculus, Leibniz, Newton and Euler. However, it cannot be formalized, since there is no mathematical definition of an "assignment". It is only at the end of the 19th century that the first formal definition of a function could be provided, in terms of set theory. This set-theoretic definition is based on the fact that a function establishes a relation between the elements of the domain and some (possibly all) elements of the codomain. Mathematically, a binary relation between two sets X and Y is a subset of the set of all ordered pairs
{\displaystyle (x,y)} such that {\displaystyle x\in X} and {\displaystyle y\in Y.} The set of all these pairs is called the Cartesian product of X and Y and denoted {\displaystyle X\times Y.}
Thus, the above definition may be formalized as follows.
A function with domain X and codomain Y is a binary relation R between X and Y that satisfies the two following conditions:
For every {\displaystyle x} in {\displaystyle X} there exists {\displaystyle y} in {\displaystyle Y} such that {\displaystyle (x,y)\in R.}
If {\displaystyle (x,y)\in R} and {\displaystyle (x,z)\in R,} then {\displaystyle y=z.}
This definition may be rewritten more formally, without referring explicitly to the concept of a relation, but using more notation (including set-builder notation):
A function is formed by three sets, the domain {\displaystyle X,} the codomain {\displaystyle Y,} and the graph {\displaystyle R} that satisfy the three following conditions.
{\displaystyle R\subseteq \{(x,y)\mid x\in X,y\in Y\}}
{\displaystyle \forall x\in X,\exists y\in Y,\left(x,y\right)\in R}
{\displaystyle (x,y)\in R\land (x,z)\in R\implies y=z}
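For finite sets, the totality and single-valuedness conditions can be checked mechanically (a sketch; names are illustrative):

```python
# Sketch: checking the two conditions that make a binary relation
# R ⊆ X × Y the graph of a function: every x has some y (totality)
# and no x has two distinct y's (single-valuedness).

def is_function(rel, domain):
    """rel is a set of (x, y) pairs; domain is the set X."""
    firsts = [x for x, _ in rel]
    total = set(firsts) == set(domain)               # every x has some y
    single_valued = len(firsts) == len(set(firsts))  # at most one y per x
    return total and single_valued

X = {1, 2, 3}
print(is_function({(1, "a"), (2, "b"), (3, "a")}, X))  # True
print(is_function({(1, "a"), (1, "b"), (3, "a")}, X))  # False
```

The second relation fails both ways: 2 has no image, and 1 has two distinct images.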
=== Partial functions ===
Partial functions are defined similarly to ordinary functions, with the "total" condition removed. That is, a partial function from X to Y is a binary relation R between X and Y such that, for every
{\displaystyle x\in X,} there is at most one y in Y such that {\displaystyle (x,y)\in R.}
Using functional notation, this means that, given {\displaystyle x\in X,} either {\displaystyle f(x)} is in Y, or it is undefined.
The set of the elements of X such that {\displaystyle f(x)}
is defined and belongs to Y is called the domain of definition of the function. A partial function from X to Y is thus an ordinary function that has as its domain a subset of X called the domain of definition of the function. If the domain of definition equals X, one often says that the partial function is a total function.
In several areas of mathematics, the term "function" refers to partial functions rather than to ordinary (total) functions. This is typically the case when functions may be specified in a way that makes difficult or even impossible to determine their domain.
In calculus, a real-valued function of a real variable or real function is a partial function from the set {\displaystyle \mathbb {R} } of the real numbers to itself. Given a real function {\displaystyle f:x\mapsto f(x)} its multiplicative inverse {\displaystyle x\mapsto 1/f(x)} is also a real function. The determination of the domain of definition of a multiplicative inverse of a (partial) function amounts to computing the zeros of the function, the values where the function is defined but not its multiplicative inverse.
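The multiplicative-inverse example can be sketched as a partial function, with a hypothetical f whose zeros are excluded from the domain of definition (None marks "undefined"):

```python
# Sketch: x ↦ 1/f(x) as a partial function — its domain of definition
# is exactly the set where f(x) != 0. f is illustrative.

def f(x):
    return x * x - 1.0  # zeros at x = ±1

def inverse_of_f(x):
    """Defined exactly where f(x) != 0; None at the zeros of f."""
    y = f(x)
    return None if y == 0 else 1.0 / y

print(inverse_of_f(2.0))  # 1/3 ≈ 0.333...
print(inverse_of_f(1.0))  # None — 1.0 lies outside the domain of definition
```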
Similarly, a function of a complex variable is generally a partial function whose domain of definition is a subset of the complex numbers {\displaystyle \mathbb {C} }. The difficulty of determining the domain of definition of a complex function is illustrated by the multiplicative inverse of the Riemann zeta function: the determination of the domain of definition of the function {\displaystyle z\mapsto 1/\zeta (z)} is more or less equivalent to the proof or disproof of one of the major open problems in mathematics, the Riemann hypothesis.
In computability theory, a general recursive function is a partial function from the integers to the integers whose values can be computed by an algorithm (roughly speaking). The domain of definition of such a function is the set of inputs for which the algorithm does not run forever. A fundamental theorem of computability theory is that there cannot exist an algorithm that takes an arbitrary general recursive function as input and tests whether 0 belongs to its domain of definition (see Halting problem).
=== Multivariate functions ===
A multivariate function, multivariable function, or function of several variables is a function that depends on several arguments. Such functions are commonly encountered. For example, the position of a car on a road is a function of the time travelled and its average speed.
Formally, a function of n variables is a function whose domain is a set of n-tuples. For example, multiplication of integers is a function of two variables, or bivariate function, whose domain is the set of all ordered pairs (2-tuples) of integers, and whose codomain is the set of integers. The same is true for every binary operation. The graph of a bivariate surface over a two-dimensional real domain may be interpreted as defining a parametric surface, as used in, e.g., bivariate interpolation.
Commonly, an n-tuple is denoted enclosed between parentheses, such as in {\displaystyle (1,2,\ldots ,n).} When using functional notation, one usually omits the parentheses surrounding tuples, writing {\displaystyle f(x_{1},\ldots ,x_{n})} instead of {\displaystyle f((x_{1},\ldots ,x_{n})).}
Given n sets {\displaystyle X_{1},\ldots ,X_{n},} the set of all n-tuples {\displaystyle (x_{1},\ldots ,x_{n})} such that {\displaystyle x_{1}\in X_{1},\ldots ,x_{n}\in X_{n}} is called the Cartesian product of {\displaystyle X_{1},\ldots ,X_{n},} and denoted {\displaystyle X_{1}\times \cdots \times X_{n}.}
Therefore, a multivariate function is a function that has a Cartesian product or a proper subset of a Cartesian product as a domain:
{\displaystyle f:U\to Y,}
where the domain U has the form
{\displaystyle U\subseteq X_{1}\times \cdots \times X_{n}.}
If all the {\displaystyle X_{i}} are equal to the set {\displaystyle \mathbb {R} } of the real numbers or to the set {\displaystyle \mathbb {C} } of the complex numbers, one talks respectively of a function of several real variables or of a function of several complex variables.
== Notation ==
There are various standard ways for denoting functions. The most commonly used notation is functional notation, which is the first notation described below.
=== Functional notation ===
The functional notation requires that a name is given to the function, which, in the case of an unspecified function, is often the letter f. Then, the application of the function to an argument is denoted by its name followed by its argument (or, in the case of a multivariate function, its arguments) enclosed between parentheses, such as in
{\displaystyle f(x),\quad \sin(3),\quad {\text{or}}\quad f(x^{2}+1).}
The argument between the parentheses may be a variable, often x, that represents an arbitrary element of the domain of the function, a specific element of the domain (3 in the above example), or an expression that can be evaluated to an element of the domain ({\displaystyle x^{2}+1} in the above example). The use of an unspecified variable between parentheses is useful for defining a function explicitly such as in "let {\displaystyle f(x)=\sin(x^{2}+1)}".
When the symbol denoting the function consists of several characters and no ambiguity may arise, the parentheses of functional notation might be omitted. For example, it is common to write sin x instead of sin(x).
Functional notation was first used by Leonhard Euler in 1734. Some widely used functions are represented by a symbol consisting of several letters (usually two or three, generally an abbreviation of their name). In this case, a roman type is customarily used instead, such as "sin" for the sine function, in contrast to italic font for single-letter symbols.
The functional notation is often used colloquially for referring to a function and simultaneously naming its argument, such as in "let {\displaystyle f(x)} be a function". This is an abuse of notation that is useful for a simpler formulation.
=== Arrow notation ===
Arrow notation defines the rule of a function inline, without requiring a name to be given to the function. It uses the ↦ arrow symbol, pronounced "maps to". For example, {\displaystyle x\mapsto x+1} is the function which takes a real number as input and outputs that number plus 1. Again, a domain and codomain of {\displaystyle \mathbb {R} } is implied.
The domain and codomain can also be explicitly stated, for example:
{\displaystyle {\begin{aligned}\operatorname {sqr} \colon \mathbb {Z} &\to \mathbb {Z} \\x&\mapsto x^{2}.\end{aligned}}}
This defines a function sqr from the integers to the integers that returns the square of its input.
As a common application of the arrow notation, suppose f : X × X → Y; (x, t) ↦ f(x, t) is a function in two variables, and we want to refer to a partially applied function X → Y produced by fixing the second argument to the value t0 without introducing a new function name. The map in question could be denoted x ↦ f(x, t0) using the arrow notation. The expression x ↦ f(x, t0) (read: "the map taking x to f of x comma t nought") represents this new function with just one argument, whereas the expression f(x0, t0) refers to the value of the function f at the point (x0, t0).
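Partial application as described here has a direct programming analogue. The following Python sketch uses functools.partial with an illustrative two-argument function f (the concrete definition of f is hypothetical):

```python
from functools import partial

# An illustrative two-argument function standing in for f(x, t).
def f(x, t):
    return x + 10 * t

# Fixing the second argument to t0 = 2 gives the one-argument map x ↦ f(x, t0).
t0 = 2
g = partial(f, t=t0)

print(g(5))  # → 25, since f(5, 2) = 5 + 10·2
```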
=== Index notation ===
Index notation may be used instead of functional notation. That is, instead of writing f(x), one writes fx. This is typically the case for functions whose domain is the set of the natural numbers. Such a function is called a sequence, and, in this case, the element fn is called the nth element of the sequence.
The index notation can also be used for distinguishing some variables called parameters from the "true variables". In fact, parameters are specific variables that are considered as being fixed during the study of a problem. For example, the map x ↦ f(x, t) (see above) would be denoted ft using index notation, if we define the collection of maps ft by the formula ft(x) = f(x, t) for all x, t ∈ X.
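The family of maps ft defined by ft(x) = f(x, t) can be mimicked with a closure that fixes the parameter t; the concrete f below is only an illustration:

```python
# f is an illustrative placeholder for a function of a variable x and a parameter t.
def f(x, t):
    return x * t

def f_with_parameter(t):
    # Returns the map x ↦ f(x, t) with the parameter t fixed.
    def f_t(x):
        return f(x, t)
    return f_t

f_3 = f_with_parameter(3)
print(f_3(4))  # → 12
```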
=== Dot notation ===
In the notation x ↦ f(x), the symbol x does not represent any value; it is simply a placeholder, meaning that, if x is replaced by any value on the left of the arrow, it should be replaced by the same value on the right of the arrow. Therefore, x may be replaced by any symbol, often an interpunct " ⋅ ". This may be useful for distinguishing the function f(⋅) from its value f(x) at x.
For example, a(⋅)² may stand for the function x ↦ ax², and ∫_a^(⋅) f(u) du may stand for a function defined by an integral with variable upper bound: x ↦ ∫_a^x f(u) du.
=== Specialized notations ===
There are other, specialized notations for functions in sub-disciplines of mathematics. For example, in linear algebra and functional analysis, linear forms and the vectors they act upon are denoted using a dual pair to show the underlying duality. This is similar to the use of bra–ket notation in quantum mechanics. In logic and the theory of computation, the function notation of lambda calculus is used to explicitly express the basic notions of function abstraction and application. In category theory and homological algebra, networks of functions are described in terms of how they and their compositions commute with each other using commutative diagrams that extend and generalize the arrow notation for functions described above.
=== Functions of more than one variable ===
In some cases the argument of a function may be an ordered pair of elements taken from some set or sets. For example, a function f can be defined as mapping any pair of real numbers (x, y) to the sum of their squares, x² + y². Such a function is commonly written as f(x, y) = x² + y² and referred to as "a function of two variables". Likewise one can have a function of three or more variables, with notations such as f(w, x, y), f(w, x, y, z).
== Other terms ==
A function may also be called a map or a mapping, but some authors make a distinction between the term "map" and "function". For example, the term "map" is often reserved for a "function" with some sort of special structure (e.g. maps of manifolds). In particular map may be used in place of homomorphism for the sake of succinctness (e.g., linear map or map from G to H instead of group homomorphism from G to H). Some authors reserve the word mapping for the case where the structure of the codomain belongs explicitly to the definition of the function.
Some authors, such as Serge Lang, use "function" only to refer to maps for which the codomain is a subset of the real or complex numbers, and use the term mapping for more general functions.
In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems. See also Poincaré map.
Whichever definition of map is used, related terms like domain, codomain, injective, continuous have the same meaning as for a function.
== Specifying a function ==
Given a function f, by definition, to each element x of the domain of f there is a unique element associated to it, the value f(x) of f at x. There are several ways to specify or describe how x is related to f(x), both explicitly and implicitly. Sometimes, a theorem or an axiom asserts the existence of a function having some properties, without describing it more precisely. Often, the specification or description is referred to as the definition of the function f.
=== By listing function values ===
On a finite set a function may be defined by listing the elements of the codomain that are associated to the elements of the domain. For example, if A = {1, 2, 3}, then one can define a function f : A → ℝ by f(1) = 2, f(2) = 3, f(3) = 4.
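A function given by listing its values maps naturally onto a dictionary; this sketch mirrors the example f(1) = 2, f(2) = 3, f(3) = 4:

```python
# The finite function of the example, specified by listing its values.
f = {1: 2, 2: 3, 3: 4}

print(f[2])       # → 3
print(sorted(f))  # the domain: [1, 2, 3]
```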
=== By a formula ===
Functions are often defined by an expression that describes a combination of arithmetic operations and previously defined functions; such a formula allows computing the value of the function from the value of any element of the domain.
For example, in the above example, f can be defined by the formula f(n) = n + 1, for n ∈ {1, 2, 3}.
When a function is defined this way, the determination of its domain is sometimes difficult. If the formula that defines the function contains divisions, the values of the variable for which a denominator is zero must be excluded from the domain; thus, for a complicated function, the determination of the domain passes through the computation of the zeros of auxiliary functions. Similarly, if square roots occur in the definition of a function from ℝ to ℝ, the domain is included in the set of the values of the variable for which the arguments of the square roots are nonnegative.
For example, f(x) = √(1 + x²) defines a function f : ℝ → ℝ whose domain is ℝ, because 1 + x² is always positive if x is a real number. On the other hand, f(x) = √(1 − x²) defines a function from the reals to the reals whose domain is reduced to the interval [−1, 1]. (In old texts, such a domain was called the domain of definition of the function.)
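The domain restriction imposed by a square root shows up concretely when the formula is evaluated; in this Python sketch, f(x) = √(1 − x²) works on [−1, 1] and raises an error outside it:

```python
import math

def f(x):
    # Defined only where 1 - x^2 >= 0, i.e. on the interval [-1, 1].
    return math.sqrt(1 - x ** 2)

print(f(0.0))  # → 1.0
try:
    f(2.0)     # outside the domain
except ValueError as e:
    print("outside the domain:", e)
```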
Functions can be classified by the nature of formulas that define them:
A quadratic function is a function that may be written f(x) = ax² + bx + c, where a, b, c are constants.
More generally, a polynomial function is a function that can be defined by a formula involving only additions, subtractions, multiplications, and exponentiation to nonnegative integer powers. For example, f(x) = x³ − 3x − 1 and f(x) = (x − 1)(x³ + 1) + 2x² − 1 are polynomial functions of x.
A rational function is the same, with divisions also allowed, such as f(x) = (x − 1)/(x + 1), and f(x) = 1/(x + 1) + 3/x − 2/(x − 1).
An algebraic function is the same, with nth roots and roots of polynomials also allowed.
An elementary function is the same, with logarithms and exponential functions allowed.
=== Inverse and implicit functions ===
A function f : X → Y, with domain X and codomain Y, is bijective if for every y in Y, there is one and only one element x in X such that y = f(x). In this case, the inverse function of f is the function f⁻¹ : Y → X that maps y ∈ Y to the element x ∈ X such that y = f(x). For example, the natural logarithm is a bijective function from the positive real numbers to the real numbers. It thus has an inverse, called the exponential function, that maps the real numbers onto the positive numbers.
If a function f : X → Y is not bijective, it may occur that one can select subsets E ⊆ X and F ⊆ Y such that the restriction of f to E is a bijection from E to F, and thus has an inverse. The inverse trigonometric functions are defined this way. For example, the cosine function induces, by restriction, a bijection from the interval [0, π] onto the interval [−1, 1], and its inverse function, called arccosine, maps [−1, 1] onto [0, π]. The other inverse trigonometric functions are defined similarly.
More generally, given a binary relation R between two sets X and Y, let E be a subset of X such that, for every x ∈ E, there is some y ∈ Y such that x R y. If one has a criterion allowing selecting such a y for every x ∈ E, this defines a function f : E → Y, called an implicit function, because it is implicitly defined by the relation R.
For example, the equation of the unit circle x² + y² = 1 defines a relation on real numbers. If −1 < x < 1 there are two possible values of y, one positive and one negative. For x = ±1, these two values both become equal to 0. Otherwise, there is no possible value of y. This means that the equation defines two implicit functions with domain [−1, 1] and respective codomains [0, +∞) and (−∞, 0].
In this example, the equation can be solved in y, giving y = ±√(1 − x²), but, in more complicated examples, this is impossible. For example, the relation y⁵ + y + x = 0 defines y as an implicit function of x, called the Bring radical, which has ℝ as domain and range. The Bring radical cannot be expressed in terms of the four arithmetic operations and nth roots.
The implicit function theorem provides mild differentiability conditions for existence and uniqueness of an implicit function in the neighborhood of a point.
=== Using differential calculus ===
Many functions can be defined as the antiderivative of another function. This is the case of the natural logarithm, which is the antiderivative of 1/x that is 0 for x = 1. Another common example is the error function.
More generally, many functions, including most special functions, can be defined as solutions of differential equations. The simplest example is probably the exponential function, which can be defined as the unique function that is equal to its derivative and takes the value 1 for x = 0.
Power series can be used to define functions on the domain in which they converge. For example, the exponential function is given by e^x = ∑_{n=0}^{∞} x^n/n!. However, as the coefficients of a series are quite arbitrary, a function that is the sum of a convergent series is generally defined otherwise, and the sequence of the coefficients is the result of some computation based on another definition. Then, the power series can be used to enlarge the domain of the function. Typically, if a function for a real variable is the sum of its Taylor series in some interval, this power series allows immediately enlarging the domain to a subset of the complex numbers, the disc of convergence of the series. Then analytic continuation allows enlarging the domain further to include almost the whole complex plane. This process is the method that is generally used for defining the logarithm, the exponential and the trigonometric functions of a complex number.
=== By recurrence ===
Functions whose domain are the nonnegative integers, known as sequences, are sometimes defined by recurrence relations.
The factorial function on the nonnegative integers (n ↦ n!) is a basic example, as it can be defined by the recurrence relation n! = n(n − 1)! for n > 0, and the initial condition 0! = 1.
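The recurrence n! = n(n − 1)! with the initial condition 0! = 1 translates directly into a recursive definition:

```python
def factorial(n: int) -> int:
    # Initial condition 0! = 1; recurrence n! = n * (n - 1)! for n > 0.
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # → 120
```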
== Representing a function ==
A graph is commonly used to give an intuitive picture of a function. As an example of how a graph helps to understand a function, it is easy to see from its graph whether a function is increasing or decreasing. Some functions may also be represented by bar charts.
=== Graphs and plots ===
Given a function f : X → Y, its graph is, formally, the set G = {(x, f(x)) ∣ x ∈ X}.
In the frequent case where X and Y are subsets of the real numbers (or may be identified with such subsets, e.g. intervals), an element (x, y) ∈ G may be identified with a point having coordinates x, y in a 2-dimensional coordinate system, e.g. the Cartesian plane. Parts of this may create a plot that represents (parts of) the function. The use of plots is so ubiquitous that they too are called the graph of the function. Graphic representations of functions are also possible in other coordinate systems. For example, the graph of the square function x ↦ x², consisting of all points with coordinates (x, x²) for x ∈ ℝ, yields, when depicted in Cartesian coordinates, the well-known parabola. If the same quadratic function x ↦ x², with the same formal graph, consisting of pairs of numbers, is plotted instead in polar coordinates (r, θ) = (x, x²), the plot obtained is Fermat's spiral.
=== Tables ===
A function can be represented as a table of values. If the domain of a function is finite, then the function can be completely specified in this way. For example, the multiplication function f : {1, …, 5}² → ℝ defined as f(x, y) = xy can be represented by the familiar multiplication table.
On the other hand, if a function's domain is continuous, a table can give the values of the function at specific values of the domain. If an intermediate value is needed, interpolation can be used to estimate the value of the function. For example, a portion of a table for the sine function might be given as follows, with values rounded to 6 decimal places:
Before the advent of handheld calculators and personal computers, such tables were often compiled and published for functions such as logarithms and trigonometric functions.
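Interpolation between tabulated values can be sketched as follows; the three-entry sine table here is a hypothetical excerpt with values rounded to 6 decimal places:

```python
import math

# Hypothetical excerpt of a sine table, values rounded to 6 decimal places.
table = {0.0: 0.000000, 0.1: 0.099833, 0.2: 0.198669}

def interp(x, x0, x1):
    # Linear interpolation of the tabulated values between x0 and x1.
    y0, y1 = table[x0], table[x1]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

estimate = interp(0.15, 0.1, 0.2)
print(abs(estimate - math.sin(0.15)) < 0.001)  # → True
```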
=== Bar chart ===
A bar chart can represent a function whose domain is a finite set, the natural numbers, or the integers. In this case, an element x of the domain is represented by an interval of the x-axis, and the corresponding value of the function, f(x), is represented by a rectangle whose base is the interval corresponding to x and whose height is f(x) (possibly negative, in which case the bar extends below the x-axis).
== General properties ==
This section describes general properties of functions that are independent of specific properties of the domain and the codomain.
=== Standard functions ===
There are a number of standard functions that occur frequently:
For every set X, there is a unique function, called the empty function, or empty map, from the empty set to X. The graph of an empty function is the empty set. The existence of empty functions is needed both for the coherency of the theory and for avoiding exceptions concerning the empty set in many statements. Under the usual set-theoretic definition of a function as an ordered triplet (or equivalent ones), there is exactly one empty function for each set; thus the empty function ∅ → X is not equal to ∅ → Y if and only if X ≠ Y, although their graphs are both the empty set.
For every set X and every singleton set {s}, there is a unique function from X to {s}, which maps every element of X to s. This is a surjection (see below) unless X is the empty set.
Given a function f : X → Y, the canonical surjection of f onto its image f(X) = {f(x) ∣ x ∈ X} is the function from X to f(X) that maps x to f(x).
For every subset A of a set X, the inclusion map of A into X is the injective (see below) function that maps every element of A to itself.
The identity function on a set X, often denoted by idX, is the inclusion of X into itself.
=== Function composition ===
Given two functions f : X → Y and g : Y → Z such that the domain of g is the codomain of f, their composition is the function g ∘ f : X → Z defined by (g ∘ f)(x) = g(f(x)). That is, the value of g ∘ f is obtained by first applying f to x to obtain y = f(x) and then applying g to the result y to obtain g(y) = g(f(x)). In this notation, the function that is applied first is always written on the right.
The composition g ∘ f is an operation on functions that is defined only if the codomain of the first function is the domain of the second one. Even when both g ∘ f and f ∘ g satisfy these conditions, the composition is not necessarily commutative, that is, the functions g ∘ f and f ∘ g need not be equal, but may deliver different values for the same argument. For example, let f(x) = x² and g(x) = x + 1; then g(f(x)) = x² + 1 and f(g(x)) = (x + 1)² agree just for x = 0.
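The non-commutativity of composition is easy to reproduce; this sketch uses the same f(x) = x² and g(x) = x + 1 as the text:

```python
def compose(g, f):
    # (g ∘ f)(x) = g(f(x)): f is applied first.
    return lambda x: g(f(x))

f = lambda x: x ** 2
g = lambda x: x + 1

print(compose(g, f)(3))  # → 10, since g(f(3)) = 9 + 1
print(compose(f, g)(3))  # → 16, since f(g(3)) = 4²
```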
Function composition is associative in the sense that, if one of (h ∘ g) ∘ f and h ∘ (g ∘ f) is defined, then the other is also defined, and they are equal, that is, (h ∘ g) ∘ f = h ∘ (g ∘ f). Therefore, it is usual to just write h ∘ g ∘ f.
The identity functions idX and idY are respectively a right identity and a left identity for functions from X to Y. That is, if f is a function with domain X and codomain Y, one has f ∘ idX = idY ∘ f = f.
=== Image and preimage ===
Let f : X → Y. The image under f of an element x of the domain X is f(x). If A is any subset of X, then the image of A under f, denoted f(A), is the subset of the codomain Y consisting of all images of elements of A, that is, f(A) = {f(x) ∣ x ∈ A}.
The image of f is the image of the whole domain, that is, f(X). It is also called the range of f, although the term range may also refer to the codomain.
On the other hand, the inverse image or preimage under f of an element y of the codomain Y is the set of all elements of the domain X whose images under f equal y. In symbols, the preimage of y is denoted by f⁻¹(y) and is given by the equation f⁻¹(y) = {x ∈ X ∣ f(x) = y}.
Likewise, the preimage of a subset B of the codomain Y is the set of the preimages of the elements of B, that is, it is the subset of the domain X consisting of all elements of X whose images belong to B. It is denoted by f⁻¹(B) and is given by the equation f⁻¹(B) = {x ∈ X ∣ f(x) ∈ B}.
For example, the preimage of {4, 9} under the square function is the set {−3, −2, 2, 3}.
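On a finite domain a preimage can be computed by brute force; this sketch recovers the example f⁻¹({4, 9}) = {−3, −2, 2, 3} by searching a candidate range (the range bounds are arbitrary):

```python
def preimage(f, domain, B):
    # All elements of the domain whose image under f lies in B.
    return {x for x in domain if f(x) in B}

square = lambda x: x * x
print(sorted(preimage(square, range(-10, 11), {4, 9})))  # → [-3, -2, 2, 3]
```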
By definition of a function, the image of an element x of the domain is always a single element of the codomain. However, the preimage f⁻¹(y) of an element y of the codomain may be empty or contain any number of elements. For example, if f is the function from the integers to themselves that maps every integer to 0, then f⁻¹(0) = ℤ.
If f : X → Y is a function, A and B are subsets of X, and C and D are subsets of Y, then one has the following properties:
A ⊆ B ⟹ f(A) ⊆ f(B)
C ⊆ D ⟹ f⁻¹(C) ⊆ f⁻¹(D)
A ⊆ f⁻¹(f(A))
C ⊇ f(f⁻¹(C))
f(f⁻¹(f(A))) = f(A)
f⁻¹(f(f⁻¹(C))) = f⁻¹(C)
The preimage by f of an element y of the codomain is sometimes called, in some contexts, the fiber of y under f.
If a function f has an inverse (see below), this inverse is denoted f⁻¹. In this case f⁻¹(C) may denote either the image by f⁻¹ or the preimage by f of C. This is not a problem, as these sets are equal. The notation f(A) and f⁻¹(C) may be ambiguous in the case of sets that contain some subsets as elements, such as {x, {x}}. In this case, some care may be needed, for example, by using square brackets f[A], f⁻¹[C] for images and preimages of subsets and ordinary parentheses for images and preimages of elements.
=== Injective, surjective and bijective functions ===
Let f : X → Y be a function.
The function f is injective (or one-to-one, or is an injection) if f(a) ≠ f(b) for every two different elements a and b of X. Equivalently, f is injective if and only if, for every y ∈ Y, the preimage f⁻¹(y) contains at most one element. An empty function is always injective. If X is not the empty set, then f is injective if and only if there exists a function g : Y → X such that g ∘ f = idX, that is, if f has a left inverse. Proof: If f is injective, for defining g, one chooses an element x0 in X (which exists as X is supposed to be nonempty), and one defines g by g(y) = x if y = f(x) and g(y) = x0 if y ∉ f(X). Conversely, if g ∘ f = idX and y = f(x), then x = g(y), and thus f⁻¹(y) = {x}.
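The left-inverse construction in the proof can be sketched for a finite domain: build g from a lookup table of f, sending points outside the image to a fixed element x0 of the domain:

```python
def left_inverse(f, domain):
    # g(y) = x if y = f(x), and g(y) = x0 otherwise, as in the proof.
    domain = list(domain)
    x0 = domain[0]                     # any element of the nonempty domain
    table = {f(x): x for x in domain}  # well defined because f is injective
    return lambda y: table.get(y, x0)

f = lambda x: 2 * x                    # injective
g = left_inverse(f, [1, 2, 3])
print([g(f(x)) for x in [1, 2, 3]])    # → [1, 2, 3], i.e. g ∘ f = id
print(g(7))                            # 7 is outside the image, so → 1 (= x0)
```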
The function f is surjective (or onto, or is a surjection) if its range f(X) equals its codomain Y, that is, if, for each element y of the codomain, there exists some element x of the domain such that f(x) = y (in other words, the preimage f⁻¹(y) of every y ∈ Y is nonempty). If, as usual in modern mathematics, the axiom of choice is assumed, then f is surjective if and only if there exists a function g : Y → X such that f ∘ g = idY, that is, if f has a right inverse. The axiom of choice is needed, because, if f is surjective, one defines g by g(y) = x, where x is an arbitrarily chosen element of f⁻¹(y).
The function f is bijective (or is a bijection or a one-to-one correspondence) if it is both injective and surjective. That is, f is bijective if, for every y ∈ Y, the preimage f⁻¹(y) contains exactly one element. The function f is bijective if and only if it admits an inverse function, that is, a function g : Y → X such that g ∘ f = idX and f ∘ g = idY. (Contrarily to the case of surjections, this does not require the axiom of choice; the proof is straightforward.)
Every function f : X → Y may be factorized as the composition i ∘ s of a surjection followed by an injection, where s is the canonical surjection of X onto f(X) and i is the canonical injection of f(X) into Y. This is the canonical factorization of f.
"One-to-one" and "onto" are terms that were more common in the older English language literature; "injective", "surjective", and "bijective" were originally coined as French words in the second quarter of the 20th century by the Bourbaki group and imported into English. As a word of caution, "a one-to-one function" is one that is injective, while a "one-to-one correspondence" refers to a bijective function. Also, the statement "f maps X onto Y" differs from "f maps X into Y", in that the former implies that f is surjective, while the latter makes no assertion about the nature of f. In a complicated reasoning, the one-letter difference can easily be missed. Due to the confusing nature of this older terminology, these terms have declined in popularity relative to the Bourbakian terms, which also have the advantage of being more symmetrical.
=== Restriction and extension ===
If f : X → Y is a function and S is a subset of X, then the restriction of f to S, denoted f|S, is the function from S to Y defined by f|S(x) = f(x) for all x in S. Restrictions can be used to define partial inverse functions: if there is a subset S of the domain of a function f such that f|S is injective, then the canonical surjection of f|S onto its image f|S(S) = f(S) is a bijection, and thus has an inverse function from f(S) to S. One application is the definition of inverse trigonometric functions. For example, the cosine function is injective when restricted to the interval [0, π]. The image of this restriction is the interval [−1, 1], and thus the restriction has an inverse function from [−1, 1] to [0, π], which is called arccosine and is denoted arccos.
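The arccosine example can be checked numerically: restricted to [0, π], cos is injective, and math.acos recovers the original input:

```python
import math

x = 1.0                # a point in [0, π]
y = math.cos(x)        # y lies in [-1, 1]
recovered = math.acos(y)

print(abs(recovered - x) < 1e-9)  # → True: arccos inverts cos on [0, π]
```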
Function restriction may also be used for "gluing" functions together. Let X = ⋃i∈I Ui be the decomposition of X as a union of subsets, and suppose that a function fi : Ui → Y is defined on each Ui such that for each pair i, j of indices, the restrictions of fi and fj to Ui ∩ Uj are equal. Then this defines a unique function f : X → Y such that f|Ui = fi for all i. This is the way that functions on manifolds are defined.
An extension of a function f is a function g such that f is a restriction of g. A typical use of this concept is the process of analytic continuation, that allows extending functions whose domain is a small part of the complex plane to functions whose domain is almost the whole complex plane.
Here is another classical example of a function extension that is encountered when studying homographies of the real line. A homography is a function h(x) = (ax + b)/(cx + d) such that ad − bc ≠ 0. Its domain is the set of all real numbers different from −d/c, and its image is the set of all real numbers different from a/c. If one extends the real line to the projectively extended real line by including ∞, one may extend h to a bijection from the extended real line to itself by setting h(∞) = a/c and h(−d/c) = ∞.
== In calculus ==
The idea of function, starting in the 17th century, was fundamental to the new infinitesimal calculus. At that time, only real-valued functions of a real variable were considered, and all functions were assumed to be smooth. But the definition was soon extended to functions of several variables and to functions of a complex variable. In the second half of the 19th century, the mathematically rigorous definition of a function was introduced, and functions with arbitrary domains and codomains were defined.
Functions are now used throughout all areas of mathematics. In introductory calculus, when the word function is used without qualification, it means a real-valued function of a single real variable. The more general definition of a function is usually introduced to second or third year college students with STEM majors, and in their senior year they are introduced to calculus in a larger, more rigorous setting in courses such as real analysis and complex analysis.
=== Real function ===
A real function is a real-valued function of a real variable, that is, a function whose codomain is the field of real numbers and whose domain is a set of real numbers that contains an interval. In this section, these functions are simply called functions.
The functions that are most commonly considered in mathematics and its applications have some regularity, that is, they are continuous, differentiable, and even analytic. This regularity ensures that these functions can be visualized by their graphs. In this section, all functions are differentiable in some interval.
Functions enjoy pointwise operations, that is, if f and g are functions, their sum, difference and product are functions defined by
(f + g)(x) = f(x) + g(x)
(f − g)(x) = f(x) − g(x)
(f ⋅ g)(x) = f(x) ⋅ g(x).
The domains of the resulting functions are the intersection of the domains of f and g. The quotient of two functions is defined similarly by (f/g)(x) = f(x)/g(x), but the domain of the resulting function is obtained by removing the zeros of g from the intersection of the domains of f and g.
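The pointwise operations can be sketched generically; here op is any binary operation on values, and the concrete f and g are illustrative:

```python
import operator

def pointwise(op, f, g):
    # (f op g)(x) = op(f(x), g(x)), e.g. (f + g)(x) = f(x) + g(x).
    return lambda x: op(f(x), g(x))

f = lambda x: x + 1
g = lambda x: 2 * x

print(pointwise(operator.add, f, g)(3))  # → 10, since f(3) + g(3) = 4 + 6
print(pointwise(operator.mul, f, g)(3))  # → 24
```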
The polynomial functions are defined by polynomials, and their domain is the whole set of real numbers. They include constant functions, linear functions and quadratic functions. Rational functions are quotients of two polynomial functions, and their domain is the real numbers with a finite number of them removed to avoid division by zero. The simplest rational function is the function
{\displaystyle x\mapsto {\frac {1}{x}},}
whose graph is a hyperbola, and whose domain is the whole real line except for 0.
The derivative of a real differentiable function is a real function. An antiderivative of a continuous real function is a real function that has the original function as a derivative. For example, the function
{\textstyle x\mapsto {\frac {1}{x}}}
is continuous, and even differentiable, on the positive real numbers. Thus one antiderivative, which takes the value zero for x = 1, is a differentiable function called the natural logarithm.
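This relationship between a function and its antiderivative can be illustrated numerically. The Python sketch below (using the trapezoidal rule; the name `antiderivative` is ours) approximates the antiderivative of x ↦ 1/x that vanishes at 1, i.e. the natural logarithm:

```python
import math

def antiderivative(f, a, n=100000):
    """Numerically approximate the antiderivative of f that vanishes at a,
    using the trapezoidal rule with n subintervals."""
    def F(x):
        h = (x - a) / n
        total = 0.5 * (f(a) + f(x))
        for k in range(1, n):
            total += f(a + k * h)
        return total * h
    return F

# The antiderivative of t -> 1/t taking the value 0 at t = 1
# approximates the natural logarithm on the positive reals.
log_approx = antiderivative(lambda t: 1.0 / t, a=1.0)
```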
A real function f is monotonic in an interval if the sign of
{\displaystyle {\frac {f(x)-f(y)}{x-y}}}
does not depend on the choice of x and y in the interval. If the function is differentiable in the interval, it is monotonic if the sign of the derivative is constant in the interval. If a real function f is monotonic in an interval I, it has an inverse function, which is a real function with domain f(I) and image I. This is how inverse trigonometric functions are defined in terms of trigonometric functions, restricted to intervals where the latter are monotonic. Another example: the natural logarithm is monotonic on the positive real numbers, and its image is the whole real line; therefore it has an inverse function that is a bijection between the real numbers and the positive real numbers. This inverse is the exponential function.
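Inverting a monotonic function can also be illustrated computationally: for a function that is continuous and increasing on an interval, bisection recovers the inverse. A Python sketch (the helper name `inverse` is ours):

```python
import math

def inverse(f, lo, hi, tol=1e-12):
    """Invert a function that is continuous and monotonically increasing
    on [lo, hi], by bisection: f_inv(y) is the x with f(x) = y."""
    def f_inv(y):
        a, b = lo, hi
        while b - a > tol:
            mid = (a + b) / 2
            if f(mid) < y:
                a = mid
            else:
                b = mid
        return (a + b) / 2
    return f_inv

# The inverse of exp on [0, 10] approximates the natural logarithm
# on [1, e^10].
exp_inv = inverse(math.exp, 0.0, 10.0)
```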
Many other real functions are defined either by the implicit function theorem (the inverse function is a particular instance) or as solutions of differential equations. For example, the sine and the cosine functions are the solutions of the linear differential equation
{\displaystyle y''+y=0}
such that
{\displaystyle \sin 0=0,\quad \cos 0=1,\quad {\frac {\partial \sin x}{\partial x}}(0)=1,\quad {\frac {\partial \cos x}{\partial x}}(0)=0.}
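That sine and cosine are determined by this differential equation and these initial conditions can be checked numerically. The sketch below integrates y'' + y = 0 with y(0) = 0, y'(0) = 1 by the classical Runge–Kutta method (a standard technique; the function name is ours) and reproduces sin and cos:

```python
def solve_sin(x, steps=10000):
    """Integrate y'' + y = 0 with y(0) = 0, y'(0) = 1 by the classical
    Runge-Kutta method; the exact solution is (sin(x), cos(x))."""
    h = x / steps
    y, v = 0.0, 1.0            # y approximates sin, v = y' approximates cos
    for _ in range(steps):
        # first-order system: (y, v)' = (v, -y)
        k1y, k1v = v, -y
        k2y, k2v = v + h / 2 * k1v, -(y + h / 2 * k1y)
        k3y, k3v = v + h / 2 * k2v, -(y + h / 2 * k2y)
        k4y, k4v = v + h * k3v, -(y + h * k3y)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return y, v
```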
=== Vector-valued function ===
When the elements of the codomain of a function are vectors, the function is said to be a vector-valued function. These functions are particularly useful in applications, for example modeling physical properties. For example, the function that associates to each point of a fluid its velocity vector is a vector-valued function.
Some vector-valued functions are defined on a subset of {\displaystyle \mathbb {R} ^{n}} or other spaces that share geometric or topological properties of {\displaystyle \mathbb {R} ^{n}}, such as manifolds. These vector-valued functions are given the name vector fields.
== Function space ==
In mathematical analysis, and more specifically in functional analysis, a function space is a set of scalar-valued or vector-valued functions, which share a specific property and form a topological vector space. For example, the real smooth functions with a compact support (that is, they are zero outside some compact set) form a function space that is at the basis of the theory of distributions.
Function spaces play a fundamental role in advanced mathematical analysis, by allowing the use of their algebraic and topological properties for studying properties of functions. For example, all theorems of existence and uniqueness of solutions of ordinary or partial differential equations result from the study of function spaces.
== Multi-valued functions ==
Several methods for specifying functions of real or complex variables start from a local definition of the function at a point or on a neighbourhood of a point, and then extend by continuity the function to a much larger domain. Frequently, for a starting point
{\displaystyle x_{0},}
there are several possible starting values for the function.
For example, in defining the square root as the inverse function of the square function, for any positive real number
{\displaystyle x_{0},}
there are two choices for the value of the square root, one of which is positive and denoted
{\displaystyle {\sqrt {x_{0}}},}
and another which is negative and denoted
{\displaystyle -{\sqrt {x_{0}}}.}
These choices define two continuous functions, both having the nonnegative real numbers as a domain, and having either the nonnegative or the nonpositive real numbers as images. When looking at the graphs of these functions, one can see that, together, they form a single smooth curve. It is therefore often useful to consider these two square root functions as a single function that has two values for positive x, one value for 0 and no value for negative x.
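The two branches can be modelled as two ordinary (single-valued) functions with the same domain, as in this Python sketch (function names are ours):

```python
import math

def sqrt_pos(x):
    """The positive (principal) branch: domain [0, inf), nonnegative image."""
    if x < 0:
        raise ValueError("negative x is outside the domain")
    return math.sqrt(x)

def sqrt_neg(x):
    """The negative branch: same domain, nonpositive image."""
    return -sqrt_pos(x)

# Together the two branches yield every solution of t**2 == x:
# two values for positive x, one value for x == 0, none for negative x.
```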
In the preceding example, one choice, the positive square root, is more natural than the other. This is not the case in general. For example, let us consider the implicit function that maps y to a root x of
{\displaystyle x^{3}-3x-y=0}
(see the figure on the right). For y = 0 one may choose either
{\displaystyle 0,{\sqrt {3}},{\text{ or }}-{\sqrt {3}}}
for x. By the implicit function theorem, each choice defines a function; for the first one, the (maximal) domain is the interval [−2, 2] and the image is [−1, 1]; for the second one, the domain is [−2, ∞) and the image is [1, ∞); for the last one, the domain is (−∞, 2] and the image is (−∞, −1]. As the three graphs together form a smooth curve, and there is no reason for preferring one choice, these three functions are often considered as a single multi-valued function of y that has three values for −2 < y < 2, and only one value for y ≤ −2 and y ≥ 2.
Usefulness of the concept of multi-valued functions is clearer when considering complex functions, typically analytic functions. The domain to which a complex function may be extended by analytic continuation generally consists of almost the whole complex plane. However, when extending the domain through two different paths, one often gets different values. For example, when extending the domain of the square root function, along a path of complex numbers with positive imaginary parts, one gets i for the square root of −1; while, when extending through complex numbers with negative imaginary parts, one gets −i. There are generally two ways of solving the problem. One may define a function that is not continuous along some curve, called a branch cut. Such a function is called the principal value of the function. The other way is to consider that one has a multi-valued function, which is analytic everywhere except for isolated singularities, but whose value may "jump" if one follows a closed loop around a singularity. This jump is called the monodromy.
== In the foundations of mathematics ==
The definition of a function that is given in this article requires the concept of set, since the domain and the codomain of a function must be a set. This is not a problem in usual mathematics, as it is generally not difficult to consider only functions whose domain and codomain are sets, which are well defined, even if the domain is not explicitly defined. However, it is sometimes useful to consider more general functions.
For example, the singleton set may be considered as a function
{\displaystyle x\mapsto \{x\}.}
Its domain would include all sets, and therefore would not be a set. In usual mathematics, one avoids this kind of problem by specifying a domain, which means that one has many singleton functions. However, when establishing foundations of mathematics, one may have to use functions whose domain, codomain or both are not specified, and some authors, often logicians, give precise definitions for these weakly specified functions.
These generalized functions may be critical in the development of a formalization of the foundations of mathematics. For example, von Neumann–Bernays–Gödel set theory is an extension of set theory in which the collection of all sets is a class. This theory includes the replacement axiom, which may be stated as: if X is a set and F is a function, then F[X] is a set.
In alternative formulations of the foundations of mathematics using type theory rather than set theory, functions are taken as primitive notions rather than defined from other kinds of object. They are the inhabitants of function types, and may be constructed using expressions in the lambda calculus.
== In computer science ==
In computer programming, a function is, in general, a subroutine that implements the abstract concept of function: a program unit that produces an output for each input. Functional programming is the programming paradigm of building programs using only subroutines that behave like mathematical functions, meaning that they have no side effects and depend only on their arguments: they are referentially transparent. For example, if_then_else is a function that takes three (nullary) functions as arguments and, depending on the value of the first argument (true or false), returns the value of either the second or the third. An important advantage of functional programming is that it makes program proofs easier, being based on a well-founded theory, the lambda calculus (see below). However, side effects are generally necessary for practical programs, such as those that perform input/output. There is a class of purely functional languages, such as Haskell, which encapsulate the possibility of side effects in the type of a function; others, such as the ML family, simply allow side effects.
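Such an if_then_else can be sketched in Python by passing all three arguments as nullary functions (thunks), so that only the chosen branch is ever evaluated (an illustration, not a standard library function):

```python
def if_then_else(cond, then_branch, else_branch):
    """A pure conditional: all three arguments are nullary functions
    (thunks), so neither branch is evaluated until it is chosen."""
    return then_branch() if cond() else else_branch()

result = if_then_else(lambda: 2 + 2 == 4,
                      lambda: "yes",
                      lambda: "no")
```

Because the branches are thunks, an expression like a division by zero in the unchosen branch is never executed.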
In many programming languages, every subroutine is called a function, even when there is no output but only side effects, and when the functionality consists simply of modifying some data in the computer memory.
Outside the context of programming languages, "function" has the usual mathematical meaning in computer science. In this area, a property of major interest is the computability of a function. For giving a precise meaning to this concept, and to the related concept of algorithm, several models of computation have been introduced, the oldest being general recursive functions, the lambda calculus, and Turing machines. The fundamental theorem of computability theory is that these three models of computation define the same set of computable functions, and that all the other models of computation that have ever been proposed define the same set of computable functions or a smaller one. The Church–Turing thesis is the claim that every philosophically acceptable definition of a computable function also defines the same functions.
General recursive functions are partial functions from integers to integers that can be defined from
constant functions,
successor, and
projection functions
via the operators
composition,
primitive recursion, and
minimization.
Although defined only for functions from integers to integers, they can model any computable function as a consequence of the following properties:
a computation is the manipulation of finite sequences of symbols (digits of numbers, formulas, etc.),
every sequence of symbols may be coded as a sequence of bits,
a bit sequence can be interpreted as the binary representation of an integer.
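The basic operators are easy to illustrate. The Python sketch below (helper names are ours) implements primitive recursion and uses it to build addition and multiplication from zero, successor, and projections:

```python
def zero(*args):
    """A constant function returning 0."""
    return 0

def succ(n):
    """The successor function."""
    return n + 1

def proj(i):
    """Projection function: return the i-th argument."""
    return lambda *args: args[i]

def prim_rec(base, step):
    """Primitive recursion: h(0, *xs) = base(*xs) and
    h(n + 1, *xs) = step(n, h(n, *xs), *xs)."""
    def h(n, *xs):
        acc = base(*xs)
        for k in range(n):
            acc = step(k, acc, *xs)
        return acc
    return h

# add(0, y) = y and add(n + 1, y) = succ(add(n, y))
add = prim_rec(proj(0), lambda k, acc, y: succ(acc))
# mul(0, y) = 0 and mul(n + 1, y) = add(y, mul(n, y))
mul = prim_rec(zero, lambda k, acc, y: add(y, acc))
```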
Lambda calculus is a theory that defines computable functions without using set theory, and is the theoretical background of functional programming. It consists of terms that are either variables, function definitions (𝜆-terms), or applications of functions to terms. Terms are manipulated by interpreting the axioms of the theory (α-equivalence, β-reduction, and η-conversion) as rewriting rules, which can be used for computation.
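Although the lambda calculus itself is untyped, its style can be mimicked in any language with first-class functions. The following Python sketch uses the standard Church encodings, where a numeral n is the function that applies its argument n times (helper names are ours):

```python
# Church booleans: a boolean selects one of its two arguments.
TRUE = lambda t: lambda f: t
FALSE = lambda t: lambda f: f

def church(n):
    """The Church numeral n: applies f to x exactly n times."""
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

# Successor and addition as pure lambda terms.
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications."""
    return n(lambda k: k + 1)(0)
```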
In its original form, lambda calculus does not include the concepts of domain and codomain of a function. Roughly speaking, they have been introduced in the theory under the name of type in typed lambda calculus. Most kinds of typed lambda calculi can define fewer functions than untyped lambda calculus.
== See also ==
=== Subpages ===
=== Generalizations ===
=== Related topics ===
== Notes ==
== References ==
== Sources ==
== Further reading ==
== External links ==
The Wolfram Functions – website giving formulae and visualizations of many mathematical functions
NIST Digital Library of Mathematical Functions | Wikipedia/Mathematical_function |
In mathematics, a structural set theory is an approach to set theory that emphasizes the aspect of sets as abstract structures. It is in contrast to a more traditional ZFC set-theory, which emphasizes membership. A prime example is Lawvere's Elementary Theory of the Category of Sets, which identifies sets in terms of relations to each other through functions. Another example is SEAR (Sets, Elements, And Relations).
The adjective "structural" comes from the structuralism in the philosophy of mathematics.
== References ==
Shulman, Michael (1 April 2019). "Comparing material and structural set theories". Annals of Pure and Applied Logic. 170 (4): 465–504. arXiv:1808.05204. doi:10.1016/j.apal.2018.11.002. ISSN 0168-0072.
François G. Dorais, Back to Cantor?, a blog post
== Further reading ==
structural set theory in nLab | Wikipedia/Structural_set_theory |
In mathematics, a topos (plural topoi or toposes) is a category that behaves like the category of sheaves of sets on a topological space (or more generally, on a site). Topoi behave much like the category of sets and possess a notion of localization. The Grothendieck topoi find applications in algebraic geometry, and more general elementary topoi are used in logic.
The mathematical field that studies topoi is called topos theory.
== Grothendieck topos (topos in geometry) ==
Since the introduction of sheaves into mathematics in the 1940s, a major theme has been to study a space by studying sheaves on a space. This idea was expounded by Alexander Grothendieck by introducing the notion of a "topos". The main utility of this notion is in the abundance of situations in mathematics where topological heuristics are very effective, but an honest topological space is lacking; it is sometimes possible to find a topos formalizing the heuristic. An important example of this programmatic idea is the étale topos of a scheme. Another illustration of the capability of Grothendieck topoi to incarnate the “essence” of different mathematical situations is given by their use as "bridges" for connecting theories which, albeit written in possibly very different languages, share a common mathematical content.
=== Equivalent definitions ===
A Grothendieck topos is a category
{\displaystyle C}
which satisfies any one of the following three properties. (A theorem of Jean Giraud states that the properties below are all equivalent.)
There is a small category {\displaystyle D} and an inclusion {\displaystyle C\hookrightarrow \operatorname {Presh} (D)} that admits a finite-limit-preserving left adjoint.
{\displaystyle C} is the category of sheaves on a Grothendieck site.
{\displaystyle C} satisfies Giraud's axioms, below.
Here {\displaystyle \operatorname {Presh} (D)} denotes the category of contravariant functors from {\displaystyle D} to the category of sets; such a contravariant functor is frequently called a presheaf.
==== Giraud's axioms ====
Giraud's axioms for a category {\displaystyle C} are:
{\displaystyle C} has a small set of generators, and admits all small colimits. Furthermore, fiber products distribute over coproducts; that is, given a set {\displaystyle I}, an {\displaystyle I}-indexed coproduct mapping to {\displaystyle A}, and a morphism {\displaystyle A'\to A}, the pullback is an {\displaystyle I}-indexed coproduct of the pullbacks:
{\displaystyle \left(\coprod _{i\in I}B_{i}\right)\times _{A}A'\cong \coprod _{i\in I}(B_{i}\times _{A}A').}
Sums in {\displaystyle C} are disjoint. In other words, the fiber product of {\displaystyle X} and {\displaystyle Y} over their sum is the initial object in {\displaystyle C}.
All equivalence relations in {\displaystyle C} are effective.
The last axiom needs the most explanation. If X is an object of C, an "equivalence relation" R on X is a map R → X × X in C such that for any object Y in C, the induced map Hom(Y, R) → Hom(Y, X) × Hom(Y, X) gives an ordinary equivalence relation on the set Hom(Y, X). Since C has colimits we may form the coequalizer of the two maps R → X; call this X/R. The equivalence relation is "effective" if the canonical map
{\displaystyle R\to X\times _{X/R}X\,\!}
is an isomorphism.
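In the category of sets every equivalence relation is effective: the coequalizer of the two projections R → X is simply the quotient set X/R. A Python sketch for finite sets (the given pairs are taken to generate the equivalence relation; the helper name is ours):

```python
def coequalizer_classes(X, pairs):
    """In Set, the coequalizer of the two projections R -> X is the
    quotient X/R.  The pairs generate the equivalence relation; the
    classes are computed by union-find."""
    parent = {x: x for x in X}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for a, b in pairs:
        parent[find(a)] = find(b)
    classes = {}
    for x in X:
        classes.setdefault(find(x), set()).add(x)
    return [frozenset(c) for c in classes.values()]
```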
=== Examples ===
Giraud's theorem already gives "sheaves on sites" as a complete list of examples. Note, however, that nonequivalent sites often give rise to equivalent topoi. As indicated in the introduction, sheaves on ordinary topological spaces motivate many of the basic definitions and results of topos theory.
==== Category of sets and G-sets ====
The category of sets is an important special case: it plays the role of a point in topos theory. Indeed, a set may be thought of as a sheaf on a point since functors on the singleton category with a single object and only the identity morphism are just specific sets in the category of sets.
Similarly, there is a topos {\displaystyle BG} for any group {\displaystyle G} which is equivalent to the category of {\displaystyle G}-sets. We construct this as the category of presheaves on the category with one object, but now the set of morphisms is given by the group {\displaystyle G}. Since any functor must give a {\displaystyle G}-action on the target, this gives the category of {\displaystyle G}-sets. Similarly, for a groupoid {\displaystyle {\mathcal {G}}} the category of presheaves on {\displaystyle {\mathcal {G}}} gives a collection of sets indexed by the set of objects in {\displaystyle {\mathcal {G}}}, and the automorphisms of an object in {\displaystyle {\mathcal {G}}} have an action on the target of the functor.
==== Topoi from ringed spaces ====
More exotic examples, and the raison d'être of topos theory, come from algebraic geometry. The basic example of a topos comes from the Zariski topos of a scheme. For each scheme {\displaystyle X} there is a site {\displaystyle {\text{Open}}(X)} (of objects given by open subsets and morphisms given by inclusions) whose category of presheaves forms the Zariski topos {\displaystyle (X)_{Zar}}. But once distinguished classes of morphisms are considered, there are multiple generalizations of this, which lead to non-trivial mathematics. Moreover, topoi give the foundations for studying schemes purely as functors on the category of algebras.
To a scheme and even a stack one may associate an étale topos, an fppf topos, or a Nisnevich topos. Another important example of a topos is from the crystalline site. In the case of the étale topos, these form the foundational objects of study in anabelian geometry, which studies objects in algebraic geometry that are determined entirely by the structure of their étale fundamental group.
==== Pathologies ====
Topos theory is, in some sense, a generalization of classical point-set topology. One should therefore expect to see old and new instances of pathological behavior. For instance, there is an example due to Pierre Deligne of a nontrivial topos that has no points (see below for the definition of points of a topos).
=== Geometric morphisms ===
If {\displaystyle X} and {\displaystyle Y} are topoi, a geometric morphism {\displaystyle u:X\to Y}
is a pair of adjoint functors (u^∗, u_∗) (where u^∗ : Y → X is left adjoint to u_∗ : X → Y) such that u^∗ preserves finite limits. Note that u^∗ automatically preserves colimits by virtue of having a right adjoint.
By Freyd's adjoint functor theorem, to give a geometric morphism X → Y is to give a functor u^∗ : Y → X that preserves finite limits and all small colimits. Thus geometric morphisms between topoi may be seen as analogues of maps of locales.
If {\displaystyle X} and {\displaystyle Y} are topological spaces and {\displaystyle u} is a continuous map between them, then the pullback and pushforward operations on sheaves yield a geometric morphism between the associated topoi for the sites {\displaystyle {\text{Open}}(X),{\text{Open}}(Y)}.
==== Points of topoi ====
A point of a topos {\displaystyle X} is defined as a geometric morphism from the topos of sets to {\displaystyle X}.
If X is an ordinary space and x is a point of X, then the functor that takes a sheaf F to its stalk Fx has a right adjoint (the "skyscraper sheaf" functor), so an ordinary point of X also determines a topos-theoretic point. These may be constructed as the pullback-pushforward along the continuous map x: 1 → X.
For the étale topos {\displaystyle (X)_{et}} of a space {\displaystyle X}, a point is a bit more refined an object. Given a point {\displaystyle x:{\text{Spec}}(\kappa (x))\to X} of the underlying scheme {\displaystyle X}, a point {\displaystyle x'} of the topos {\displaystyle (X)_{et}} is then given by a separable field extension {\displaystyle k} of {\displaystyle \kappa (x)} such that the associated map {\displaystyle x':{\text{Spec}}(k)\to X} factors through the original point {\displaystyle x}. Then, the factorization map {\displaystyle {\text{Spec}}(k)\to {\text{Spec}}(\kappa (x))} is an étale morphism of schemes.
More precisely, those are the global points. They are not adequate in themselves for displaying the space-like aspect of a topos, because a non-trivial topos may fail to have any. Generalized points are geometric morphisms from a topos Y (the stage of definition) to X. There are enough of these to display the space-like aspect. For example, if X is the classifying topos S[T] for a geometric theory T, then the universal property says that its points are the models of T (in any stage of definition Y).
==== Essential geometric morphisms ====
A geometric morphism (u^∗, u_∗) is essential if u^∗ has a further left adjoint u_!, or equivalently (by the adjoint functor theorem) if u^∗ preserves not only finite but all small limits.
=== Ringed topoi ===
A ringed topos is a pair (X,R), where X is a topos and R is a commutative ring object in X. Most of the constructions of ringed spaces go through for ringed topoi. The category of R-module objects in X is an abelian category with enough injectives. A more useful abelian category is the subcategory of quasi-coherent R-modules: these are R-modules that admit a presentation.
Another important class of ringed topoi, besides ringed spaces, are the étale topoi of Deligne–Mumford stacks.
=== Homotopy theory of topoi ===
Michael Artin and Barry Mazur associated to the site underlying a topos a pro-simplicial set (up to homotopy). (It's better to consider it in Ho(pro-SS); see Edwards) Using this inverse system of simplicial sets one may sometimes associate to a homotopy invariant in classical topology an inverse system of invariants in topos theory. The study of the pro-simplicial set associated to the étale topos of a scheme is called étale homotopy theory. In good cases (if the scheme is Noetherian and geometrically unibranch), this pro-simplicial set is pro-finite.
== Elementary topoi (topoi in logic) ==
=== Introduction ===
Since the early 20th century, the predominant axiomatic foundation of mathematics has been set theory, in which all mathematical objects are ultimately represented by sets (including functions, which map between sets). More recent work in category theory allows this foundation to be generalized using topoi; each topos completely defines its own mathematical framework. The category of sets forms a familiar topos, and working within this topos is equivalent to using traditional set-theoretic mathematics. But one could instead choose to work with many alternative topoi. A standard formulation of the axiom of choice makes sense in any topos, and there are topoi in which it is invalid. Constructivists will be interested to work in a topos without the law of excluded middle. If symmetry under a particular group G is of importance, one can use the topos consisting of all G-sets.
It is also possible to encode an algebraic theory, such as the theory of groups, as a topos, in the form of a classifying topos. The individual models of the theory, i.e. the groups in our example, then correspond to functors from the encoding topos to the category of sets that respect the topos structure.
=== Formal definition ===
When used for foundational work a topos will be defined axiomatically; set theory is then treated as a special case of topos theory. Building from category theory, there are multiple equivalent definitions of a topos. The following has the virtue of being concise:
A topos is a category that has the following two properties:
All limits taken over finite index categories exist.
Every object has a power object. This plays the role of the powerset in set theory.
Formally, a power object of an object {\displaystyle X} is a pair {\displaystyle (PX,\ni _{X})} with {\displaystyle {\ni _{X}}\subseteq PX\times X}, which classifies relations, in the following sense.
First note that for every object {\displaystyle I}, a morphism {\displaystyle r\colon I\to PX} ("a family of subsets") induces a subobject {\displaystyle \{(i,x)~|~x\in r(i)\}\subseteq I\times X}. Formally, this is defined by pulling back {\displaystyle \ni _{X}} along {\displaystyle r\times X:I\times X\to PX\times X}. The universal property of a power object is that every relation arises in this way, giving a bijective correspondence between relations {\displaystyle R\subseteq I\times X} and morphisms {\displaystyle r\colon I\to PX}.
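In the topos of finite sets this universal property is concrete: PX is the power set of X, and a relation R ⊆ I × X corresponds to the map r : I → PX sending i to {x | (i, x) ∈ R}. A Python sketch of the bijection (helper names are ours):

```python
from itertools import combinations

def power_object(X):
    """In the topos of finite sets, PX is the set of all subsets of X."""
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r)]

def relation_to_map(R, I):
    """Turn a relation R subset of I x X into the morphism r : I -> PX,
    with r(i) = {x | (i, x) in R}."""
    return {i: frozenset(x for j, x in R if j == i) for i in I}

def map_to_relation(r):
    """The inverse direction: recover R subset of I x X from r : I -> PX."""
    return {(i, x) for i, s in r.items() for x in s}
```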
From finite limits and power objects one can derive that
All colimits taken over finite index categories exist.
The category has a subobject classifier.
The category is Cartesian closed.
In some applications, the role of the subobject classifier is pivotal, whereas power objects are not. Thus some definitions reverse the roles of what is defined and what is derived.
=== Logical functors ===
A logical functor is a functor between topoi that preserves finite limits and power objects. Logical functors preserve the structures that topoi have. In particular, they preserve finite colimits, subobject classifiers, and exponential objects.
=== Explanation ===
A topos as defined above can be understood as a Cartesian closed category for which the notion of subobject of an object has an elementary or first-order definition. This notion, as a natural categorical abstraction of the notions of subset of a set, subgroup of a group, and more generally subalgebra of any algebraic structure, predates the notion of topos. It is definable in any category, not just topoi, in second-order language, i.e. in terms of classes of morphisms instead of individual morphisms, as follows. Given two monics m, n from respectively Y and Z to X, we say that m ≤ n when there exists a morphism p: Y → Z for which np = m, inducing a preorder on monics to X. When m ≤ n and n ≤ m we say that m and n are equivalent. The subobjects of X are the resulting equivalence classes of the monics to it.
In a topos "subobject" becomes, at least implicitly, a first-order notion, as follows.
As noted above, a topos is a category C having all finite limits and hence in particular the empty limit or final object 1. It is then natural to treat morphisms of the form x: 1 → X as elements x ∈ X. Morphisms f: X → Y thus correspond to functions mapping each element x ∈ X to the element fx ∈ Y, with application realized by composition.
One might then think to define a subobject of X as an equivalence class of monics m: X′ → X having the same image { mx | x ∈ X′ }. The catch is that two or more morphisms may correspond to the same function, that is, we cannot assume that C is concrete in the sense that the functor C(1,-): C → Set is faithful. For example the category Grph of graphs and their associated homomorphisms is a topos whose final object 1 is the graph with one vertex and one edge (a self-loop), but is not concrete because the elements 1 → G of a graph G correspond only to the self-loops and not the other edges, nor the vertices without self-loops. Whereas the second-order definition makes G and the subgraph of all self-loops of G (with their vertices) distinct subobjects of G (unless every edge is, and every vertex has, a self-loop), this image-based one does not. This can be addressed for the graph example and related examples via the Yoneda Lemma as described in the Further examples section below, but this then ceases to be first-order. Topoi provide a more abstract, general, and first-order solution.
As noted above, a topos C has a subobject classifier Ω, namely an object of C with an element t ∈ Ω, the generic subobject of C, having the property that every monic m: X′ → X arises as a pullback of the generic subobject along a unique morphism f: X → Ω, as per Figure 1. Now the pullback of a monic is a monic, and all elements including t are monics since there is only one morphism to 1 from any given object, whence the pullback of t along f: X → Ω is a monic. The monics to X are therefore in bijection with the pullbacks of t along morphisms from X to Ω. The latter morphisms partition the monics into equivalence classes each determined by a morphism f: X → Ω, the characteristic morphism of that class, which we take to be the subobject of X characterized or named by f.
All this applies to any topos, whether or not concrete. In the concrete case, namely C(1,-) faithful, for example the category of sets, the situation reduces to the familiar behavior of functions. Here the monics m: X′ → X are exactly the injections (one-one functions) from X′ to X, and those with a given image { mx | x ∈ X′ } constitute the subobject of X corresponding to the morphism f: X → Ω for which f−1(t) is that image. The monics of a subobject will in general have many domains, all of which however will be in bijection with each other.
To summarize, this first-order notion of subobject classifier implicitly defines for a topos the same equivalence relation on monics to X as had previously been defined explicitly by the second-order notion of subobject for any category. The notion of equivalence relation on a class of morphisms is itself intrinsically second-order, which the definition of topos neatly sidesteps by explicitly defining only the notion of subobject classifier Ω, leaving the notion of subobject of X as an implicit consequence characterized (and hence namable) by its associated morphism f: X → Ω.
=== Further examples and non-examples ===
Every Grothendieck topos is an elementary topos, but the converse is not true (since every Grothendieck topos is cocomplete, which is not required from an elementary topos).
The categories of finite sets, of finite G-sets (actions of a group G on a finite set), and of finite graphs are elementary topoi that are not Grothendieck topoi.
If C is a small category, then the functor category SetC (consisting of all covariant functors from C to sets, with natural transformations as morphisms) is a topos. For instance, the category Grph of graphs of the kind permitting multiple directed edges between two vertices is a topos. Such a graph consists of two sets, an edge set and a vertex set, and two functions s,t between those sets, assigning to every edge e its source s(e) and target t(e). Grph is thus equivalent to the functor category SetC, where C is the category with two objects E and V and two morphisms s,t: E → V giving respectively the source and target of each edge.
The Yoneda lemma asserts that Cop embeds in SetC as a full subcategory. In the graph example the embedding represents Cop as the subcategory of SetC whose two objects are V' as the one-vertex no-edge graph and E' as the two-vertex one-edge graph (both as functors), and whose two nonidentity morphisms are the two graph homomorphisms from V' to E' (both as natural transformations). The natural transformations from V' to an arbitrary graph (functor) G constitute the vertices of G while those from E' to G constitute its edges. Although SetC, which we can identify with Grph, is not made concrete by either V' or E' alone, the functor U: Grph → Set2 sending object G to the pair of sets (Grph(V' ,G), Grph(E' ,G)) and morphism h: G → H to the pair of functions (Grph(V' ,h), Grph(E' ,h)) is faithful. That is, a morphism of graphs can be understood as a pair of functions, one mapping the vertices and the other the edges, with application still realized as composition but now with multiple sorts of generalized elements. This shows that the traditional concept of a concrete category as one whose objects have an underlying set can be generalized to cater for a wider range of topoi by allowing an object to have multiple underlying sets, that is, to be multisorted.
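The description of Grph as a functor category can be made concrete for finite graphs: a graph is two sets with source and target functions, and a morphism is a pair of functions commuting with them. A Python sketch (the class and helper names are ours):

```python
class Graph:
    """An object of Grph viewed as a functor: an edge set E, a vertex set V,
    and source/target maps s, t : E -> V (here given as dicts)."""
    def __init__(self, V, E, s, t):
        self.V, self.E, self.s, self.t = set(V), set(E), dict(s), dict(t)

def is_homomorphism(h_V, h_E, G, H):
    """A graph morphism is a pair of maps (on vertices and on edges)
    that commutes with source and target."""
    return all(h_V[G.s[e]] == H.s[h_E[e]] and
               h_V[G.t[e]] == H.t[h_E[e]] for e in G.E)

# The one-vertex, one-self-loop graph: the final object 1 of Grph.
ONE = Graph({'v'}, {'e'}, {'e': 'v'}, {'e': 'v'})
```

For instance, there is no morphism from ONE to a graph whose single edge is not a self-loop, illustrating that elements 1 → G pick out only self-loops.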
The category of pointed sets with point-preserving functions is not a topos, since it doesn't have power objects: if PX were the power object of the pointed set X, and 1 denotes the pointed singleton, then there is only one point-preserving function r : 1 → PX, but the relations in 1 × X are as numerous as the pointed subsets of X. The category of abelian groups is also not a topos, for a similar reason: every group homomorphism must map 0 to 0.
== See also ==
History of topos theory
Homotopy hypothesis
Intuitionistic type theory
∞-topos
Quasitopos
Geometric logic
Generalized space
== Notes ==
== References ==
Some gentle papers
Edwards, D.A.; Hastings, H.M. (Summer 1980). "Čech Theory: its Past, Present, and Future" (PDF). Rocky Mountain Journal of Mathematics. 10 (3): 429–468. doi:10.1216/RMJ-1980-10-3-429. JSTOR 44236540.
Baez, John. "Topos theory in a nutshell". A gentle introduction.
Steven Vickers: "Toposes pour les nuls" and "Toposes pour les vraiment nuls." Elementary and even more elementary introductions to toposes as generalized spaces.
Illusie, Luc (2004). "What is...A Topos?" (PDF). Notices of the AMS. 51 (9): 160–1.
The following texts are easy-paced introductions to toposes and the basics of category theory. They should be suitable for those knowing little mathematical logic and set theory, even non-mathematicians.
Lawvere, F. William; Schanuel, Stephen H. (1997). Conceptual Mathematics: A First Introduction to Categories. Cambridge University Press. ISBN 978-0-521-47817-5. An "introduction to categories for computer scientists, logicians, physicists, linguists, etc." (cited from cover text).
Lawvere, F. William; Rosebrugh, Robert (2003). Sets for Mathematics. Cambridge University Press. ISBN 978-0-521-01060-3. Introduces the foundations of mathematics from a categorical perspective.
Grothendieck foundational work on topoi:
Grothendieck, A.; Verdier, J.L. (1972). Théorie des Topos et Cohomologie Etale des Schémas. Lecture notes in mathematics. Vol. 269. Springer. doi:10.1007/BFb0081551. ISBN 978-3-540-37549-4. Tome 2 270 doi:10.1007/BFb0061319 ISBN 978-3-540-37987-4
The following monographs include an introduction to some or all of topos theory, but do not cater primarily to beginning students. Listed in (perceived) order of increasing difficulty.
McLarty, Colin (1992). Elementary Categories, Elementary Toposes. Clarendon Press. ISBN 978-0-19-158949-2. A nice introduction to the basics of category theory, topos theory, and topos logic. Assumes very few prerequisites.
Goldblatt, Robert (2013) [1984]. Topoi: The Categorial Analysis of Logic. Courier Corporation. ISBN 978-0-486-31796-0. A good start. Available online at Robert Goldblatt's homepage.
Bell, John L. (2001). "The Development of Categorical Logic". In Gabbay, D.M.; Guenthner, Franz (eds.). Handbook of Philosophical Logic. Vol. 12 (2nd ed.). Springer. pp. 279–. ISBN 978-1-4020-3091-8. Version available online at John Bell's homepage.
MacLane, Saunders; Moerdijk, Ieke (2012) [1994]. Sheaves in Geometry and Logic: A First Introduction to Topos Theory. Springer. ISBN 978-1-4612-0927-0. More complete, and more difficult to read.
Barr, Michael; Wells, Charles (2013) [1985]. Toposes, Triples and Theories. Springer. ISBN 978-1-4899-0023-4. (Online version). More concise than Sheaves in Geometry and Logic, but hard on beginners.
Reference works for experts, less suitable for first introduction
Edwards, D.A.; Hastings, H.M. (1976). Čech and Steenrod homotopy theories with applications to geometric topology. Lecture Notes in Maths. Vol. 542. Springer-Verlag. doi:10.1007/BFb0081083. ISBN 978-3-540-38103-7.
Borceux, Francis (1994). Handbook of Categorical Algebra: Volume 3, Sheaf Theory. Encyclopedia of Mathematics and its Applications. Vol. 52. Cambridge University Press. ISBN 978-0-521-44180-3. The third part of "Borceux' remarkable magnum opus", as Johnstone has labelled it. Still suitable as an introduction, though beginners may find it hard to recognize the most relevant results among the huge amount of material given.
Johnstone, Peter T. (2014) [1977]. Topos Theory. Courier. ISBN 978-0-486-49336-7. For a long time the standard compendium on topos theory. However, even Johnstone describes this work as "far too hard to read, and not for the faint-hearted."
Johnstone, Peter T. (2002). Sketches of an Elephant: A Topos Theory Compendium. Vol. 2. Clarendon Press. ISBN 978-0-19-851598-2. As of early 2010, two of the scheduled three volumes of this overwhelming compendium were available.
Caramello, Olivia (2017). Theories, Sites, Toposes: Relating and studying mathematical theories through topos-theoretic 'bridges'. Vol. 1. Oxford University Press. doi:10.1093/oso/9780198758914.001.0001. ISBN 9780198758914.
Books that target special applications of topos theory
Pedicchio, Maria Cristina; Tholen, Walter; Rota, G.C., eds. (2004). Categorical Foundations: Special Topics in Order, Topology, Algebra, and Sheaf Theory. Encyclopedia of Mathematics and its Applications. Vol. 97. Cambridge University Press. ISBN 978-0-521-83414-8. Includes many interesting special applications. | Wikipedia/Topos_theory |
In mathematics, a continuous function is a function such that a small variation of the argument induces a small variation of the value of the function. This implies there are no abrupt changes in value, known as discontinuities. More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity.
Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology.
A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity.
As an example, the function H(t) denoting the height of a growing flower at time t would be considered continuous. In contrast, the function M(t) denoting the amount of money in a bank account at time t would be considered discontinuous since it "jumps" at each point in time when money is deposited or withdrawn.
== History ==
A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of y = f(x) as follows: an infinitely small increment α of the independent variable x always produces an infinitely small change f(x + α) − f(x) of the dependent variable y (see e.g. Cours d'Analyse, p. 34). Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s. Like Bolzano, Karl Weierstrass denied continuity of a function at a point c unless it was defined at and on both sides of c, but Édouard Goursat allowed the function to be defined only at and on one side of c, and Camille Jordan allowed it even if the function was defined only at c. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854.
== Real functions ==
=== Definition ===
A real function that is a function from real numbers to real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. A more mathematically rigorous definition is given below.
Continuity of real functions is usually defined in terms of limits. A function f with variable x is continuous at the real number c if the limit of f(x), as x tends to c, is equal to f(c).
There are several different definitions of the (global) continuity of a function, which depend on the nature of its domain.
A function is continuous on an open interval if the interval is contained in the function's domain and the function is continuous at every point of the interval. A function that is continuous on the interval (−∞, +∞) (the whole real line) is often called simply a continuous function; one also says that such a function is continuous everywhere. For example, all polynomial functions are continuous everywhere.
A function is continuous on a semi-open or a closed interval if the interval is contained in the domain of the function, the function is continuous at every interior point of the interval, and the value of the function at each endpoint that belongs to the interval is the limit of the values of the function when the variable tends to the endpoint from the interior of the interval. For example, the function f(x) = √x is continuous on its whole domain, which is the closed interval [0, +∞).
Many commonly encountered functions are partial functions that have a domain formed by all real numbers, except some isolated points. Examples include the reciprocal function x ↦ 1/x and the tangent function x ↦ tan x. When they are continuous on their domain, one says, in some contexts, that they are continuous, although they are not continuous everywhere. In other contexts, mainly when one is interested in their behavior near the exceptional points, one says they are discontinuous.
A partial function is discontinuous at a point if the point belongs to the topological closure of its domain, and either the point does not belong to the domain of the function or the function is not continuous at the point. For example, the functions x ↦ 1/x and x ↦ sin(1/x) are discontinuous at 0, and remain discontinuous whichever value is chosen for defining them at 0. A point where a function is discontinuous is called a discontinuity.
Using mathematical notation, several ways exist to define continuous functions in the three senses mentioned above.
Let f : D → ℝ be a function whose domain D is contained in the set ℝ of real numbers.
Some (but not all) possibilities for D are:
D is the whole real line; that is, D = ℝ
D is a closed interval of the form D = [a, b] = {x ∈ ℝ ∣ a ≤ x ≤ b}, where a and b are real numbers
D is an open interval of the form D = (a, b) = {x ∈ ℝ ∣ a < x < b}, where a and b are real numbers
In the case of an open interval, a and b do not belong to D, and the values f(a) and f(b) are not defined; if they are defined, they do not matter for continuity on D.
==== Definition in terms of limits of functions ====
The function f is continuous at some point c of its domain if the limit of f(x), as x approaches c through the domain of f, exists and is equal to f(c). In mathematical notation, this is written as

lim_{x→c} f(x) = f(c).

In detail this means three conditions: first, f has to be defined at c (guaranteed by the requirement that c is in the domain of f). Second, the limit of that equation has to exist. Third, the value of this limit must equal f(c). (Here, we have assumed that the domain of f does not have any isolated points.)
==== Definition in terms of neighborhoods ====
A neighborhood of a point c is a set that contains, at least, all points within some fixed distance of c. Intuitively, a function is continuous at a point c if the range of f over the neighborhood of c shrinks to a single point f(c) as the width of the neighborhood around c shrinks to zero. More precisely, a function f is continuous at a point c of its domain if, for any neighborhood N₁(f(c)), there is a neighborhood N₂(c) in its domain such that f(x) ∈ N₁(f(c)) whenever x ∈ N₂(c).
As neighborhoods are defined in any topological space, this definition of a continuous function applies not only for real functions but also when the domain and the codomain are topological spaces and is thus the most general definition. It follows that a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous.
==== Definition in terms of limits of sequences ====
One can instead require that for any sequence (xₙ)ₙ∈ℕ of points in the domain which converges to c, the corresponding sequence (f(xₙ))ₙ∈ℕ converges to f(c). In mathematical notation,

∀(xₙ)ₙ∈ℕ ⊂ D : lim_{n→∞} xₙ = c ⇒ lim_{n→∞} f(xₙ) = f(c).
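The sequential criterion above can be probed numerically. The following sketch (illustrative evidence only, not a proof, with f chosen arbitrarily) feeds a sequence xₙ → c into a continuous function and watches f(xₙ) approach f(c):

```python
def f(x):
    return x * x  # a function continuous everywhere

c = 2.0
# Any sequence x_n -> c must give f(x_n) -> f(c); try x_n = c + 1/n.
xs = [c + 1.0 / n for n in range(1, 10001)]
errors = [abs(f(x) - f(c)) for x in xs]

assert errors[-1] < 1e-3                                   # f(x_n) closes in on f(c)
assert all(e2 <= e1 for e1, e2 in zip(errors, errors[1:]))  # and does so monotonically here
```

For a discontinuous function, some sequence converging to the point would yield values that fail to converge to the function value there.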
==== Weierstrass and Jordan definitions (epsilon–delta) of continuous functions ====
Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function f : D → ℝ as above and an element x₀ of the domain D, f is said to be continuous at the point x₀ when the following holds: For any positive real number ε > 0, however small, there exists some positive real number δ > 0 such that for all x in the domain of f with x₀ − δ < x < x₀ + δ, the value of f(x) satisfies

f(x₀) − ε < f(x) < f(x₀) + ε.
Alternatively written, continuity of f : D → ℝ at x₀ ∈ D means that for every ε > 0, there exists a δ > 0 such that for all x ∈ D:

|x − x₀| < δ implies |f(x) − f(x₀)| < ε.
More intuitively, we can say that if we want to get all the f(x) values to stay in some small neighborhood around f(x₀), we need to choose a small enough neighborhood for the x values around x₀. If we can do that no matter how small the f(x₀) neighborhood is, then f is continuous at x₀.
In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology.
Weierstrass had required that the interval x₀ − δ < x < x₀ + δ be entirely within the domain D, but Jordan removed that restriction.
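The ε–δ game can be made concrete with a brute-force search (a numerical sketch only: sampling a grid gives evidence for a working δ, not a proof; the function and candidate list are arbitrary choices):

```python
def delta_for(f, x0, eps, candidates=(1.0, 0.5, 0.1, 0.01, 0.001)):
    """Return the first candidate delta that keeps |f(x) - f(x0)| < eps
    on a sample grid of points with |x - x0| < delta.
    Numeric evidence only, not a proof of continuity."""
    for delta in candidates:
        xs = [x0 + delta * (k / 1000.0) for k in range(-999, 1000)]
        if all(abs(f(x) - f(x0)) < eps for x in xs):
            return delta
    return None

# For f(x) = 3x + 1 at x0 = 2: |f(x) - f(2)| = 3|x - 2|, so delta = eps/3 suffices.
f = lambda x: 3 * x + 1
assert delta_for(f, 2.0, 0.3) == 0.1  # first candidate with 3*delta <= 0.3
```

For the linear function here one can solve for δ exactly; for general continuous functions the definition only guarantees that some δ exists for each ε.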
==== Definition in terms of control of the remainder ====
In proofs and numerical analysis, we often need to know how fast limits are converging, or in other words, control of the remainder. We can formalize this to a definition of continuity.
A function C : [0, ∞) → [0, ∞] is called a control function if
C is non-decreasing, and
inf_{δ>0} C(δ) = 0.
A function f : D → ℝ is C-continuous at x₀ if there exists a neighbourhood N(x₀) such that

|f(x) − f(x₀)| ≤ C(|x − x₀|) for all x ∈ D ∩ N(x₀).
A function is continuous at x₀ if it is C-continuous for some control function C.
This approach leads naturally to refining the notion of continuity by restricting the set of admissible control functions. For a given set of control functions 𝒞, a function is 𝒞-continuous if it is C-continuous for some C ∈ 𝒞.
For example, the Lipschitz continuous functions, the Hölder continuous functions of exponent α, and the uniformly continuous functions below are defined by the sets of control functions

𝒞_Lipschitz = {C : C(δ) = K|δ|, K > 0}
𝒞_Hölder-α = {C : C(δ) = K|δ|^α, K > 0}
𝒞_uniform cont. = {C : C(0) = 0}
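The control-function inequality is easy to check numerically on sample points. This sketch (evidence on a grid, not a proof; the sample functions are arbitrary choices) verifies a Lipschitz control for sin and a Hölder-1/2 control for the square root:

```python
import math

def is_C_continuous(f, x0, C, xs):
    """Check |f(x) - f(x0)| <= C(|x - x0|) on sample points xs (evidence only)."""
    return all(abs(f(x) - f(x0)) <= C(abs(x - x0)) for x in xs)

# sin is Lipschitz with K = 1, so C(d) = d is a valid control function at any x0.
xs = [k / 100.0 for k in range(-200, 201)]
assert is_C_continuous(math.sin, 0.0, lambda d: d, xs)

# sqrt is Hölder continuous of exponent 1/2 at 0: C(d) = d**0.5 works.
xs_nonneg = [k / 100.0 for k in range(0, 201)]
assert is_C_continuous(math.sqrt, 0.0, lambda d: d ** 0.5, xs_nonneg)
```

Note that sqrt is not Lipschitz at 0: no K makes K·d dominate √d for all small d, which is exactly why the Hölder class is strictly larger.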
==== Definition using oscillation ====
Continuity can also be defined in terms of oscillation: a function f is continuous at a point x₀ if and only if its oscillation at that point is zero; in symbols, ω_f(x₀) = 0.
A benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point.
This definition is helpful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than ε (hence a G_δ set) – and gives a rapid proof of one direction of the Lebesgue integrability condition.
The oscillation is equivalent to the ε–δ definition by a simple re-arrangement and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given ε₀ there is no δ that satisfies the ε–δ definition, then the oscillation is at least ε₀, and conversely if for every ε there is a desired δ, the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space.
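Oscillation can be estimated numerically as the spread of function values over a shrinking interval (a sampling sketch, illustrative only; a jump discontinuity keeps the spread bounded away from zero):

```python
def oscillation(f, x0, radius, n=2001):
    """Estimate the oscillation of f at x0: sup f - inf f over a sample
    of the interval (x0 - radius, x0 + radius). Sampling evidence only."""
    half = n // 2
    xs = [x0 + radius * (k / half) for k in range(-half, half + 1)]
    vals = [f(x) for x in xs]
    return max(vals) - min(vals)

sign = lambda x: (x > 0) - (x < 0)
# The sign function jumps by 2 at 0, so its oscillation there stays 2
# however small the radius.
assert oscillation(sign, 0.0, 1e-6) == 2
# A continuous function's oscillation shrinks with the radius.
assert oscillation(lambda x: x * x, 1.0, 1e-3) < 1e-2
```

The nonzero limit of the spread for sign matches the statement that a function is continuous at a point exactly when its oscillation there is zero.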
==== Definition using the hyperreals ====
Cauchy defined the continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see Cours d'analyse, page 34). Non-standard analysis is a way of making this mathematically rigorous. The real line is augmented by adding infinite and infinitesimal numbers to form the hyperreal numbers. In nonstandard analysis, continuity can be defined as follows.
(see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity.
=== Rules for continuity ===
Proving the continuity of a function by a direct application of the definition is generally not an easy task. Fortunately, in practice, most functions are built from simpler functions, and their continuity can be deduced immediately from the way they are defined, by applying the following rules:
Every constant function is continuous.
The identity function f(x) = x is continuous.
Addition and multiplication: If the functions f and g are continuous on their respective domains D_f and D_g, then their sum f + g and their product f ⋅ g are continuous on the intersection D_f ∩ D_g, where f + g and f ⋅ g are defined by (f + g)(x) = f(x) + g(x) and (f ⋅ g)(x) = f(x) ⋅ g(x).
Reciprocal: If the function f is continuous on the domain D_f, then its reciprocal 1/f, defined by (1/f)(x) = 1/f(x), is continuous on the domain D_f ∖ f⁻¹(0), that is, the domain D_f from which the points x such that f(x) = 0 are removed.
Function composition: If the functions f and g are continuous on their respective domains D_f and D_g, then the composition g ∘ f, defined by (g ∘ f)(x) = g(f(x)), is continuous on D_f ∩ f⁻¹(D_g), that is, the part of D_f that is mapped by f inside D_g.
The sine and cosine functions (sin x and cos x) are continuous everywhere.
The exponential function eˣ is continuous everywhere.
The natural logarithm ln x is continuous on the domain formed by all positive real numbers {x ∣ x > 0}.
These rules imply that every polynomial function is continuous everywhere and that a rational function is continuous everywhere where it is defined, if the numerator and the denominator have no common zeros. More generally, the quotient of two continuous functions is continuous outside the zeros of the denominator.
An example of a function for which the above rules are not sufficient is the sinc function, which is defined by sinc(0) = 1 and sinc(x) = (sin x)/x for x ≠ 0. The above rules show immediately that the function is continuous for x ≠ 0, but, for proving the continuity at 0, one has to prove

lim_{x→0} (sin x)/x = 1.

As this is true, one gets that the sinc function is continuous on all real numbers.
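The removable singularity can be filled in code exactly as in the definition, and the limit checked numerically (sampling evidence for the limit, not a proof):

```python
import math

def sinc(x):
    # Piecewise definition from the text: the value at 0 is the limit of sin(x)/x.
    return 1.0 if x == 0 else math.sin(x) / x

# Numeric evidence that sin(x)/x -> 1 as x -> 0: for small x,
# |sin x / x - 1| <= x**2 / 6, which is far below x itself.
for h in (1e-1, 1e-3, 1e-6):
    assert abs(sinc(h) - sinc(0)) < h
```

The Taylor bound |sin x / x − 1| ≤ x²/6 is what a rigorous proof of continuity at 0 would use in place of the sampled values.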
=== Examples of discontinuous functions ===
An example of a discontinuous function is the Heaviside step function H, defined by

H(x) = 1 if x ≥ 0, and H(x) = 0 if x < 0.

Pick for instance ε = 1/2. Then there is no δ-neighborhood around x = 0, i.e. no open interval (−δ, δ) with δ > 0, that will force all the H(x) values to be within the ε-neighborhood of H(0), i.e. within (1/2, 3/2). Intuitively, we can think of this type of discontinuity as a sudden jump in function values.
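The failure of the ε–δ condition for the Heaviside function can be exhibited directly: every candidate δ-neighborhood of 0 contains a witness point where the function is far from H(0). A minimal sketch:

```python
H = lambda x: 1 if x >= 0 else 0

# With eps = 1/2, no delta works: every interval (-delta, delta) contains
# negative points, where H(x) = 0, which is not within 1/2 of H(0) = 1.
for delta in (1.0, 0.1, 1e-9):
    x = -delta / 2                      # a witness inside the delta-neighborhood
    assert abs(H(x) - H(0)) >= 0.5      # the eps = 1/2 condition fails
```

One fixed ε with a witness for every δ is exactly the negation of the ε–δ definition of continuity at 0.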
Similarly, the signum or sign function

sgn(x) = 1 if x > 0, 0 if x = 0, and −1 if x < 0

is discontinuous at x = 0 but continuous everywhere else. Yet another example: the function

f(x) = sin(x⁻²) if x ≠ 0, and f(x) = 0 if x = 0

is continuous everywhere apart from x = 0.
Besides plausible continuities and discontinuities like those above, there are also functions with behavior often called pathological. For example, Thomae's function

f(x) = 1 if x = 0; 1/q if x = p/q (in lowest terms) is a rational number; 0 if x is irrational

is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet's function, the indicator function for the set of rational numbers,

D(x) = 0 if x is irrational (x ∈ ℝ ∖ ℚ); 1 if x is rational (x ∈ ℚ)

is nowhere continuous.
=== Properties ===
==== A useful lemma ====
Let f(x) be a function that is continuous at a point x₀, and let y₀ be a value such that f(x₀) ≠ y₀. Then f(x) ≠ y₀ throughout some neighbourhood of x₀.
Proof: By the definition of continuity, take ε = |y₀ − f(x₀)|/2 > 0. Then there exists δ > 0 such that

|f(x) − f(x₀)| < |y₀ − f(x₀)|/2 whenever |x − x₀| < δ.
Suppose there is a point in the neighbourhood |x − x₀| < δ for which f(x) = y₀; then we have the contradiction

|f(x₀) − y₀| < |f(x₀) − y₀|/2.
==== Intermediate value theorem ====
The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states:
If the real-valued function f is continuous on the closed interval [a, b], and k is some number between f(a) and f(b), then there is some number c ∈ [a, b] such that f(c) = k.
For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child's height must have been 1.25 m.
As a consequence, if f is continuous on [a, b] and f(a) and f(b) differ in sign, then, at some point c ∈ [a, b], f(c) must equal zero.
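This sign-change consequence is the basis of the bisection method for root finding. A minimal sketch (one standard implementation among several possible variants):

```python
def bisect(f, a, b, tol=1e-12):
    """Find a root of a continuous f with f(a) and f(b) of opposite sign.
    The intermediate value theorem guarantees a root exists in [a, b];
    halving the interval that keeps the sign change homes in on it."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must differ in sign"
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:   # sign change in [a, m]: keep the left half
            b = m
        else:                # sign change in [m, b]: keep the right half
            a, fa = m, f(m)
    return (a + b) / 2

# x^2 - 2 changes sign on [0, 2], so the method converges to sqrt(2).
root = bisect(lambda x: x * x - 2, 0.0, 2.0)
assert abs(root - 2 ** 0.5) < 1e-9
```

Continuity is essential here: for a discontinuous function such as the sign function, a sign change across an interval does not imply a root.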
==== Extreme value theorem ====
The extreme value theorem states that if a function f is defined on a closed interval [a, b] (or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. there exists c ∈ [a, b] with f(c) ≥ f(x) for all x ∈ [a, b]. The same is true of the minimum of f. These statements are not, in general, true if the function is defined on an open interval (a, b) (or any set that is not both closed and bounded), as, for example, the continuous function f(x) = 1/x, defined on the open interval (0, 1), does not attain a maximum, being unbounded above.
==== Relation to differentiability and integrability ====
Every differentiable function f : (a, b) → ℝ is continuous, as can be shown. The converse does not hold: for example, the absolute value function

f(x) = |x| = x if x ≥ 0, and −x if x < 0

is everywhere continuous. However, it is not differentiable at x = 0 (but is so everywhere else). Weierstrass's function is also everywhere continuous but nowhere differentiable.
The derivative f′(x) of a differentiable function f(x) need not be continuous. If f′(x) is continuous, f(x) is said to be continuously differentiable. The set of such functions is denoted C¹((a, b)). More generally, the set of functions f : Ω → ℝ (from an open interval (or open subset of ℝ) Ω to the reals) such that f is n times differentiable and such that the n-th derivative of f is continuous is denoted Cⁿ(Ω). See differentiability class. In the field of computer graphics, properties related (but not identical) to C⁰, C¹, C² are sometimes called G⁰ (continuity of position), G¹ (continuity of tangency), and G² (continuity of curvature); see Smoothness of curves and surfaces.
Every continuous function f : [a, b] → ℝ is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable but discontinuous) sign function shows.
==== Pointwise and uniform limits ====
Given a sequence f₁, f₂, … : I → ℝ of functions such that the limit f(x) := lim_{n→∞} fₙ(x) exists for all x ∈ D, the resulting function f(x) is referred to as the pointwise limit of the sequence of functions (fₙ)ₙ∈ℕ. The pointwise limit function need not be continuous, even if all functions fₙ are continuous, as the animation at the right shows. However, f is continuous if all functions fₙ are continuous and the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, square root function, and trigonometric functions are continuous.
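The textbook example of a pointwise limit that loses continuity is fₙ(x) = xⁿ on [0, 1]. The sketch below (numerical illustration only) shows the pointwise convergence and why it is not uniform: the sup-distance to the limit never drops below 1/2:

```python
# f_n(x) = x**n on [0, 1] converges pointwise to a discontinuous limit:
# 0 for x < 1 and 1 at x = 1.
f = lambda n, x: x ** n
limit = lambda x: 1.0 if x == 1 else 0.0

# Pointwise convergence: at each fixed x, f(n, x) approaches limit(x).
for x in (0.0, 0.5, 0.9, 1.0):
    assert abs(f(200, x) - limit(x)) < 1e-3

# Not uniform: at x_n = (1/2)**(1/n) the gap to the limit is about 1/2
# for every n, so the sup-distance does not go to 0.
for n in (10, 100, 1000):
    xn = 0.5 ** (1.0 / n)
    assert abs(f(n, xn) - limit(xn) - 0.5) < 1e-9
```

Since each fₙ is continuous but the limit is not, the uniform convergence theorem implies the convergence cannot be uniform, which is exactly what the witness points xₙ show.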
=== Directional continuity ===
Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right and left continuous functions) and semi-continuity. Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right. Formally, f is said to be right-continuous at the point c if the following holds: for any number ε > 0, however small, there exists some number δ > 0 such that for all x in the domain with c < x < c + δ, the value of f(x) will satisfy |f(x) − f(c)| < ε. This is the same condition as for continuous functions, except that it is required to hold for x strictly larger than c only. Requiring it instead for all x with c − δ < x < c yields the notion of left-continuous functions. A function is continuous if and only if it is both right-continuous and left-continuous.
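A quick sketch (an illustrative example, not from the source): the unit step function is right-continuous at 0 but not left-continuous.

```python
# step(x) = 1 for x >= 0 and 0 otherwise: approaching 0 from the right
# gives the value step(0) = 1, while approaching from the left gives 0.
def step(x):
    return 1.0 if x >= 0 else 0.0

from_right = step(1e-12)    # value just to the right of 0
from_left = step(-1e-12)    # value just to the left of 0

right_continuous = from_right == step(0.0)   # True
left_continuous = from_left == step(0.0)     # False
```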
=== Semicontinuity ===
A function f is lower semi-continuous if, roughly, any jumps that might occur only go down, but not up. That is, for any ε > 0, there exists some number δ > 0 such that for all x in the domain with |x − c| < δ, the value of f(x) satisfies f(x) ≥ f(c) − ε. The reverse condition is upper semi-continuity.
== Continuous functions between metric spaces ==
The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set X equipped with a function (called metric) d_X that can be thought of as a measurement of the distance of any two elements in X. Formally, the metric is a function d_X : X × X → ℝ that satisfies a number of requirements, notably the triangle inequality. Given two metric spaces (X, d_X) and (Y, d_Y) and a function f : X → Y, then f is continuous at the point c ∈ X (with respect to the given metrics) if for any positive real number ε > 0 there exists a positive real number δ > 0 such that all x ∈ X satisfying d_X(x, c) < δ will also satisfy d_Y(f(x), f(c)) < ε.
As in the case of real functions above, this is equivalent to the condition that for every sequence (x_n) in X with limit lim x_n = c, we have lim f(x_n) = f(c). The latter condition can be weakened as follows: f is continuous at the point c if and only if for every convergent sequence (x_n) in X with limit c, the sequence (f(x_n)) is a Cauchy sequence, and c is in the domain of f.
The set of points at which a function between metric spaces is continuous is a G_δ set – this follows from the ε–δ definition of continuity.
This notion of continuity is applied, for example, in functional analysis. A key statement in this area says that a linear operator T : V → W between normed vector spaces V and W (which are vector spaces equipped with a compatible norm, denoted ‖x‖) is continuous if and only if it is bounded, that is, there is a constant K such that ‖T(x)‖ ≤ K‖x‖ for all x ∈ V.
=== Uniform, Hölder and Lipschitz continuity ===
The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way δ depends on ε and c in the definition above. Intuitively, a function f as above is uniformly continuous if the δ does not depend on the point c. More precisely, it is required that for every real number ε > 0 there exists δ > 0 such that for every c, b ∈ X with d_X(b, c) < δ, we have that d_Y(f(b), f(c)) < ε.
Thus, any uniformly continuous function is continuous. The converse does not generally hold but holds when the domain space X is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces.
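A small sketch (an example chosen for illustration, not from the source): f(x) = 1/x is continuous on (0, 1] but not uniformly continuous, since the points 1/n and 1/(n+1) get arbitrarily close while their images stay a fixed distance apart.

```python
# For f(x) = 1/x the inputs b = 1/n and c = 1/(n+1) satisfy |b - c| → 0,
# yet |f(b) - f(c)| = 1 for every n, so no single δ works for ε = 1.
def f(x):
    return 1.0 / x

pairs = [(1 / n, 1 / (n + 1)) for n in range(1, 6)]
input_gaps = [abs(b - c) for b, c in pairs]          # shrink toward 0
output_gaps = [abs(f(b) - f(c)) for b, c in pairs]   # all equal to 1
```

Note that (0, 1] is not compact, which is exactly why the converse statement above does not apply.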
A function is Hölder continuous with exponent α (a real number) if there is a constant K such that for all b, c ∈ X, the inequality d_Y(f(b), f(c)) ≤ K · (d_X(b, c))^α holds. Any Hölder continuous function is uniformly continuous. The particular case α = 1 is referred to as Lipschitz continuity. That is, a function is Lipschitz continuous if there is a constant K such that the inequality d_Y(f(b), f(c)) ≤ K · d_X(b, c) holds for any b, c ∈ X.
The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations.
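As a hedged numeric sketch (the grid and the choice of sin are assumptions for the example): one can estimate the smallest Lipschitz constant of sin on a grid; the true constant is 1 because |cos| ≤ 1.

```python
import math

# Estimate sup |f(b) - f(c)| / |b - c| over a finite grid; by the mean
# value theorem this never exceeds the true Lipschitz constant K = 1.
def lipschitz_estimate(f, points):
    return max(
        abs(f(b) - f(c)) / abs(b - c)
        for b in points
        for c in points
        if b != c
    )

grid = [k / 100 for k in range(-314, 315)]   # points covering roughly [-π, π]
K_est = lipschitz_estimate(math.sin, grid)   # slightly below 1
```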
== Continuous functions between topological spaces ==
Another, more abstract, notion of continuity is the continuity of functions between topological spaces in which there generally is no formal notion of distance, as there is in the case of metric spaces. A topological space is a set X together with a topology on X, which is a set of subsets of X satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about the neighborhoods of a given point. The elements of a topology are called open subsets of X (with respect to the topology).
A function f : X → Y between two topological spaces X and Y is continuous if for every open set V ⊆ Y, the inverse image f⁻¹(V) = {x ∈ X | f(x) ∈ V} is an open subset of X. That is, f is a function between the sets X and Y (not on the elements of the topology T_X), but the continuity of f depends on the topologies used on X and Y.
This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in Y are closed in X.
An extreme example: if a set X is given the discrete topology (in which every subset is open), all functions f : X → T to any topological space T are continuous. On the other hand, if X is equipped with the indiscrete topology (in which the only open subsets are the empty set and X) and the space T is at least T₀, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous.
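On finite sets the open-set definition can be checked by brute force; a minimal sketch (the two-point spaces are assumptions for the example):

```python
# Continuity between finite topological spaces: f is continuous iff the
# preimage of every open set of Y is open in X.
def preimage(f, V):
    return frozenset(x for x in f if f[x] in V)

def is_continuous(f, tau_X, tau_Y):
    return all(preimage(f, V) in tau_X for V in tau_Y)

discrete = {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})}
indiscrete = {frozenset(), frozenset({1, 2})}

ident = {1: 1, 2: 2}  # the identity map on {1, 2}, encoded as a dict
# Every map out of a discrete space is continuous:
out_of_discrete = is_continuous(ident, discrete, indiscrete)
# ... but the identity from the indiscrete to the discrete topology is not:
into_discrete = is_continuous(ident, indiscrete, discrete)
```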
=== Continuity at a point ===
The translation in the language of neighborhoods of the (ε, δ)-definition of continuity leads to the following definition of the continuity at a point:
This definition is equivalent to the same statement with neighborhoods restricted to open neighborhoods and can be restated in several ways by using preimages rather than images.
Also, as every set that contains a neighborhood is also a neighborhood, and f⁻¹(V) is the largest subset U of X such that f(U) ⊆ V, this definition may be simplified into:
As an open set is a set that is a neighborhood of all its points, a function f : X → Y is continuous at every point of X if and only if it is a continuous function.
If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above ε–δ definition of continuity in the context of metric spaces. In general topological spaces, there is no notion of nearness or distance. If, however, the target space is a Hausdorff space, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous.
Given x ∈ X, a map f : X → Y is continuous at x if and only if whenever 𝓑 is a filter on X that converges to x in X, which is expressed by writing 𝓑 → x, then necessarily f(𝓑) → f(x) in Y. If 𝒩(x) denotes the neighborhood filter at x, then f : X → Y is continuous at x if and only if f(𝒩(x)) → f(x) in Y. Moreover, this happens if and only if the prefilter f(𝒩(x)) is a filter base for the neighborhood filter of f(x) in Y.
=== Alternative definitions ===
Several equivalent definitions for a topological structure exist; thus, several equivalent ways exist to define a continuous function.
==== Sequences and nets ====
In several contexts, the topology of a space is conveniently specified in terms of limit points. This is often accomplished by specifying when a point is the limit of a sequence. Still, for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is (Heine-)continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition.
In detail, a function f : X → Y is sequentially continuous if whenever a sequence (x_n) in X converges to a limit x, the sequence (f(x_n)) converges to f(x). Thus, sequentially continuous functions "preserve sequential limits." Every continuous function is sequentially continuous. If X is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if X is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve the limits of nets, and this property characterizes continuous functions.
For instance, consider the case of real-valued functions of one real variable:
==== Closure operator and interior operator definitions ====
In terms of the interior and closure operators, we have the following equivalences,
If we declare that a point x is close to a subset A ⊆ X if x ∈ cl_X A, then this terminology allows for a plain English description of continuity: f is continuous if and only if for every subset A ⊆ X, f maps points that are close to A to points that are close to f(A). Similarly, f is continuous at a fixed given point x ∈ X if and only if whenever x is close to a subset A ⊆ X, then f(x) is close to f(A).
Instead of specifying topological spaces by their open subsets, any topology on X can alternatively be determined by a closure operator or by an interior operator. Specifically, the map that sends a subset A of a topological space X to its topological closure cl_X A satisfies the Kuratowski closure axioms. Conversely, for any closure operator A ↦ cl A there exists a unique topology τ on X (specifically, τ := {X ∖ cl A : A ⊆ X}) such that for every subset A ⊆ X, cl A is equal to the topological closure cl_(X,τ) A of A in (X, τ).
If the sets X and Y are each associated with closure operators (both denoted by cl), then a map f : X → Y is continuous if and only if f(cl A) ⊆ cl(f(A)) for every subset A ⊆ X.
Similarly, the map that sends a subset A of X to its topological interior int_X A defines an interior operator. Conversely, any interior operator A ↦ int A induces a unique topology τ on X (specifically, τ := {int A : A ⊆ X}) such that for every A ⊆ X, int A is equal to the topological interior int_(X,τ) A of A in (X, τ).
If the sets X and Y are each associated with interior operators (both denoted by int), then a map f : X → Y is continuous if and only if f⁻¹(int B) ⊆ int(f⁻¹(B)) for every subset B ⊆ Y.
==== Filters and prefilters ====
Continuity can also be characterized in terms of filters. A function f : X → Y is continuous if and only if whenever a filter 𝓑 on X converges in X to a point x ∈ X, then the prefilter f(𝓑) converges in Y to f(x). This characterization remains true if the word "filter" is replaced by "prefilter."
=== Properties ===
If f : X → Y and g : Y → Z are continuous, then so is the composition g ∘ f : X → Z.
If f : X → Y is continuous and
X is compact, then f(X) is compact.
X is connected, then f(X) is connected.
X is path-connected, then f(X) is path-connected.
X is Lindelöf, then f(X) is Lindelöf.
X is separable, then f(X) is separable.
The possible topologies on a fixed set X are partially ordered: a topology τ₁ is said to be coarser than another topology τ₂ (notation: τ₁ ⊆ τ₂) if every open subset with respect to τ₁ is also open with respect to τ₂. Then, the identity map id_X : (X, τ₂) → (X, τ₁) is continuous if and only if τ₁ ⊆ τ₂ (see also comparison of topologies). More generally, a continuous function (X, τ_X) → (Y, τ_Y) stays continuous if the topology τ_Y is replaced by a coarser topology and/or τ_X is replaced by a finer topology.
=== Homeomorphisms ===
Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. If an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function f⁻¹ need not be continuous. A bijective continuous function with a continuous inverse function is called a homeomorphism.
If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism.
=== Defining topologies via continuous functions ===
Given a function f : X → S, where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which f⁻¹(A) is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus, the final topology is the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f.
Dually, for a function f from a set S to a topological space X, the initial topology on S is defined by designating as an open set every subset A of S such that A = f⁻¹(U) for some open subset U of X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus, the initial topology is the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X.
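On finite sets the initial topology can be computed directly; a minimal sketch (the specific spaces and map are assumptions for the example):

```python
# The initial topology on S induced by f : S → X has as open sets exactly
# the preimages f^{-1}(U) of open sets U of X; it is the coarsest topology
# on S making f continuous.
def initial_topology(f, tau_X):
    return {frozenset(s for s in f if f[s] in U) for U in tau_X}

tau_X = {frozenset(), frozenset({'a'}), frozenset({'a', 'b'})}  # topology on X = {a, b}
f = {1: 'a', 2: 'a', 3: 'b'}                                    # f : S → X with S = {1, 2, 3}

tau_S = initial_topology(f, tau_X)
# tau_S consists of the three pulled-back sets: ∅, {1, 2}, and {1, 2, 3}
```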
A topology on a set S is uniquely determined by the class of all continuous functions S → X into all topological spaces X. Dually, a similar idea can be applied to maps X → S.
== Related notions ==
If f : S → Y is a continuous function from some subset S of a topological space X, then a continuous extension of f to X is any continuous function F : X → Y such that F(s) = f(s) for every s ∈ S, a condition that is often written as f = F|_S. In words, it is any continuous function F : X → Y that restricts to f on S.
This notion is used, for example, in the Tietze extension theorem and the Hahn–Banach theorem. If f : S → Y is not continuous, then it could not possibly have a continuous extension. If Y is a Hausdorff space and S is a dense subset of X, then a continuous extension of f : S → Y to X, if one exists, will be unique. The Blumberg theorem states that if f : ℝ → ℝ is an arbitrary function, then there exists a dense subset D of ℝ such that the restriction f|_D : D → ℝ is continuous; in other words, every function ℝ → ℝ can be restricted to some dense subset on which it is continuous.
Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function f : X → Y between particular types of partially ordered sets X and Y is continuous if for each directed subset A of X, we have sup f(A) = f(sup A). Here sup is the supremum with respect to the orderings in X and Y, respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology.
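A small finite sketch (an assumed example, not from the source): on the chain {0, …, 9}, where every non-empty subset is directed and sup A = max A, any monotone map satisfies the condition sup f(A) = f(sup A).

```python
# On a finite chain, sup A = max A; a monotone f then preserves suprema
# of directed subsets, i.e. max f(A) = f(max A).
def f(x):
    return min(x + 2, 9)   # a monotone (order-preserving) map on {0,...,9}

chains = [set(range(k + 1)) for k in range(10)]   # directed subsets {0,...,k}
order_continuous = all(max(f(a) for a in A) == f(max(A)) for A in chains)
```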
In category theory, a functor F : 𝒞 → 𝒟 between two categories is called continuous if it commutes with small limits. That is to say, lim←_{i∈I} F(C_i) ≅ F(lim←_{i∈I} C_i) for any small (that is, indexed by a set I, as opposed to a class) diagram of objects in 𝒞.
A continuity space is a generalization of metric spaces and posets that uses the concept of quantales and can be used to unify the notions of metric spaces and domains.
In measure theory, a function f : E → ℝᵏ defined on a Lebesgue measurable set E ⊆ ℝⁿ is called approximately continuous at a point x₀ ∈ E if the approximate limit of f at x₀ exists and equals f(x₀). This generalizes the notion of continuity by replacing the ordinary limit with the approximate limit. A fundamental result known as the Stepanov-Denjoy theorem states that a function is measurable if and only if it is approximately continuous almost everywhere.
== See also ==
Direction-preserving function - an analog of a continuous function in discrete spaces.
== References ==
== Bibliography ==
Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485.
"Continuous function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Effective descriptive set theory is the branch of descriptive set theory dealing with sets of reals having lightface definitions; that is, definitions that do not require an arbitrary real parameter (Moschovakis 1980). Thus effective descriptive set theory combines descriptive set theory with recursion theory.
== Constructions ==
=== Effective Polish space ===
An effective Polish space is a complete separable metric space that has a computable presentation. Such spaces are studied in both effective descriptive set theory and in constructive analysis. In particular, standard examples of Polish spaces such as the real line, the Cantor set and the Baire space are all effective Polish spaces.
=== Arithmetical hierarchy ===
The arithmetical hierarchy, arithmetic hierarchy or Kleene–Mostowski hierarchy classifies certain sets based on the complexity of formulas that define them. Any set that receives a classification is called "arithmetical".
More formally, the arithmetical hierarchy assigns classifications to the formulas in the language of first-order arithmetic. The classifications are denoted Σ⁰ₙ and Π⁰ₙ for natural numbers n (including 0). The Greek letters here are lightface symbols, which indicates that the formulas do not contain set parameters.
If a formula φ is logically equivalent to a formula with only bounded quantifiers, then φ is assigned the classifications Σ⁰₀ and Π⁰₀.
The classifications Σ⁰ₙ and Π⁰ₙ are defined inductively for every natural number n using the following rules:
If φ is logically equivalent to a formula of the form ∃n₁∃n₂⋯∃nₖ ψ, where ψ is Π⁰ₙ, then φ is assigned the classification Σ⁰ₙ₊₁.
If φ is logically equivalent to a formula of the form ∀n₁∀n₂⋯∀nₖ ψ, where ψ is Σ⁰ₙ, then φ is assigned the classification Π⁰ₙ₊₁.
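As a small worked example (not from the source): classifying the formula that defines the set of even numbers shows how the inductive rules apply.

```latex
% "n is even" is expressed by one unbounded existential quantifier over a
% quantifier-free matrix:
%     \varphi(n) \;\equiv\; \exists m\,(n = m + m)
% The matrix n = m + m contains no quantifiers, so it is \Sigma_{0}^{0}
% (and \Pi_{0}^{0}); applying the first rule once places \varphi in
% \Sigma_{1}^{0}.
\exists m\,(n = m + m) \;\in\; \Sigma _{1}^{0}
```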
== References ==
In mathematics, a filter on a set X is a family 𝓑 of subsets such that:
X ∈ 𝓑 and ∅ ∉ 𝓑
if A ∈ 𝓑 and B ∈ 𝓑, then A ∩ B ∈ 𝓑
if A ⊂ B ⊂ X and A ∈ 𝓑, then B ∈ 𝓑
A filter on a set may be thought of as representing a "collection of large subsets", one intuitive example being the neighborhood filter. Filters appear in order theory, model theory, and set theory, but can also be found in topology, from which they originate. The dual notion of a filter is an ideal.
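On a finite ground set the three axioms can be checked mechanically; a brute-force sketch (the example families are assumptions for illustration):

```python
from itertools import combinations

def powerset(X):
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

# Check the three filter axioms for a family B of subsets of a finite set X.
def is_filter(B, X):
    B = {frozenset(s) for s in B}
    if frozenset(X) not in B or frozenset() in B:
        return False                                  # X ∈ B and ∅ ∉ B
    if any(a & b not in B for a in B for b in B):
        return False                                  # closed under intersection
    return all(s in B for b in B                      # upward closed in X
               for s in powerset(X) if b <= s)

X = {1, 2, 3}
principal_at_1 = [{1}, {1, 2}, {1, 3}, {1, 2, 3}]     # all supersets of {1}
ok = is_filter(principal_at_1, X)                     # True
bad = is_filter([{1}, {2}, {1, 2, 3}], X)             # False: {1} ∩ {2} = ∅
```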
Filters were introduced by Henri Cartan in 1937 and as described in the article dedicated to filters in topology, they were subsequently used by Nicolas Bourbaki in their book Topologie Générale as an alternative to the related notion of a net developed in 1922 by E. H. Moore and Herman L. Smith. Order filters are generalizations of filters from sets to arbitrary partially ordered sets. Specifically, a filter on a set is just a proper order filter in the special case where the partially ordered set consists of the power set ordered by set inclusion.
== Preliminaries, notation, and basic notions ==
In this article, upper case Roman letters like S and X denote sets (but not families unless indicated otherwise), and ℘(X) will denote the power set of X. A subset of a power set is called a family of sets (or simply, a family); it is said to be over X if it is a subset of ℘(X). Families of sets will be denoted by upper case calligraphy letters such as 𝓑, 𝓒, and 𝓕. Whenever these assumptions are needed, then it should be assumed that X is non-empty and that 𝓑, 𝓕, etc. are families of sets over X.
The terms "prefilter" and "filter base" are synonyms and will be used interchangeably.
Warning about competing definitions and notation
There are unfortunately several terms in the theory of filters that are defined differently by different authors.
These include some of the most important terms such as "filter".
While different definitions of the same term usually have significant overlap, due to the very technical nature of filters (and point–set topology), these differences in definitions nevertheless often have important consequences.
When reading mathematical literature, it is recommended that readers check how the terminology related to filters is defined by the author.
For this reason, this article will clearly state all definitions as they are used.
Unfortunately, not all notation related to filters is well established and some notation varies greatly across the literature (for example, the notation for the set of all prefilters on a set) so in such cases this article uses whatever notation is most self describing or easily remembered.
The theory of filters and prefilters is well developed and has a plethora of definitions and notations, many of which are now unceremoniously listed to prevent this article from becoming prolix and to allow for the easy look up of notation and definitions.
Their important properties are described later.
Set operations
The upward closure or isotonization in X of a family of sets 𝓑 ⊆ ℘(X) is
𝓑↑X := {S ⊆ X : B ⊆ S for some B ∈ 𝓑}
and similarly the downward closure of 𝓑 is
𝓑↓ := {S ⊆ B : B ∈ 𝓑} = ⋃_{B∈𝓑} ℘(B).
Throughout, f is a map and S is a set.
Nets and their tails
A directed set is a set I together with a preorder, which will be denoted by ≤ (unless explicitly indicated otherwise), that makes (I, ≤) into an (upward) directed set; this means that for all i, j ∈ I, there exists some k ∈ I such that i ≤ k and j ≤ k. For any indices i and j, the notation j ≥ i is defined to mean i ≤ j, while i < j is defined to mean that i ≤ j holds but it is not true that j ≤ i (if ≤ is antisymmetric then this is equivalent to i ≤ j and i ≠ j).
A net in X is a map from a non-empty directed set into X. The notation x• = (x_i)_{i∈I} will be used to denote a net with domain I.
Warning about using strict comparison
If $x_\bullet = \left(x_i\right)_{i \in I}$ is a net and $i \in I$ then it is possible for the set $x_{>i} = \left\{x_j : j > i \text{ and } j \in I\right\},$ which is called the tail of $x_\bullet$ after $i,$ to be empty (for example, this happens if $i$ is an upper bound of the directed set $I$). In this case, the family $\left\{x_{>i} : i \in I\right\}$ would contain the empty set, which would prevent it from being a prefilter (defined later). This is the (important) reason for defining $\operatorname{Tails}\left(x_\bullet\right)$ as $\left\{x_{\geq i} : i \in I\right\}$ rather than $\left\{x_{>i} : i \in I\right\}$ or even $\left\{x_{>i} : i \in I\right\} \cup \left\{x_{\geq i} : i \in I\right\},$ and it is for this reason that, in general, when dealing with the prefilter of tails of a net, the strict inequality $<$ may not be used interchangeably with the inequality $\leq.$
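The warning above can be checked concretely on a finite directed set with a greatest element. The following sketch (the directed set, net values, and helper names are illustrative, not from the article) shows that the strict tail at an upper bound is empty while the non-strict tail is not:

```python
# Illustrative sketch: a net on the directed set I = {1, 2, 3} (usual order),
# where 3 is an upper bound of I.
I = [1, 2, 3]
x = {1: "a", 2: "b", 3: "c"}  # a net x_i valued in X = {"a", "b", "c"}

def tail_strict(i):
    """x_{>i} = {x_j : j > i and j in I}."""
    return {x[j] for j in I if j > i}

def tail_geq(i):
    """x_{>=i} = {x_j : j >= i and j in I}."""
    return {x[j] for j in I if j >= i}

print(tail_strict(3))  # set() -- empty, so {x_{>i} : i in I} contains the empty set
print(tail_geq(3))     # {'c'} -- Tails(x) uses >= precisely to avoid empty tails
```

This is why $\operatorname{Tails}\left(x_\bullet\right)$ is defined with $\geq$: the family of non-strict tails never contains the empty set.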
== Filters and prefilters ==
The following is a list of properties that a family $\mathcal{B}$ of sets may possess, and they form the defining properties of filters, prefilters, and filter subbases. Whenever it is necessary, it should be assumed that $\mathcal{B} \subseteq \wp(X).$
Many of the properties of $\mathcal{B}$ defined above and below, such as "proper" and "directed downward," do not depend on $X,$ so mentioning the set $X$ is optional when using such terms. Definitions involving being "upward closed in $X,$" such as that of "filter on $X,$" do depend on $X,$ so the set $X$ should be mentioned if it is not clear from context.
$$\operatorname{Filters}(X) \quad = \quad \operatorname{DualIdeals}(X) \,\setminus\, \{\wp(X)\} \quad \subseteq \quad \operatorname{Prefilters}(X) \quad \subseteq \quad \operatorname{FilterSubbases}(X).$$
There are no prefilters on $X = \varnothing$ (nor are there any nets valued in $\varnothing$), which is why this article, like most authors, will automatically assume without comment that $X \neq \varnothing$ whenever this assumption is needed.
=== Basic examples ===
Named examples
The singleton set $\mathcal{B} = \{X\}$ is called the indiscrete or trivial filter on $X.$ It is the unique minimal filter on $X$ because it is a subset of every filter on $X$; however, it need not be a subset of every prefilter on $X.$
The dual ideal $\wp(X)$ is also called the degenerate filter on $X$ (despite not actually being a filter). It is the only dual ideal on $X$ that is not a filter on $X.$
If $(X, \tau)$ is a topological space and $x \in X,$ then the neighborhood filter $\mathcal{N}(x)$ at $x$ is a filter on $X.$ By definition, a family $\mathcal{B} \subseteq \wp(X)$ is called a neighborhood basis (resp. a neighborhood subbase) at $x$ for $(X, \tau)$ if and only if $\mathcal{B}$ is a prefilter (resp. $\mathcal{B}$ is a filter subbase) and the filter on $X$ that $\mathcal{B}$ generates is equal to the neighborhood filter $\mathcal{N}(x).$ The subfamily $\tau(x) \subseteq \mathcal{N}(x)$ of open neighborhoods is a filter base for $\mathcal{N}(x).$ Both prefilters $\mathcal{N}(x)$ and $\tau(x)$ also form bases for topologies on $X,$ with the topology generated by $\tau(x)$ being coarser than $\tau.$ This example immediately generalizes from neighborhoods of points to neighborhoods of non–empty subsets $S \subseteq X.$
$\mathcal{B}$ is an elementary prefilter if $\mathcal{B} = \operatorname{Tails}\left(x_\bullet\right)$ for some sequence $x_\bullet = \left(x_i\right)_{i=1}^{\infty}$ in $X.$ $\mathcal{B}$ is an elementary filter or a sequential filter on $X$ if $\mathcal{B}$ is a filter on $X$ generated by some elementary prefilter. The filter of tails generated by a sequence that is not eventually constant is necessarily not an ultrafilter. Every principal filter on a countable set is sequential, as is every cofinite filter on a countably infinite set. The intersection of finitely many sequential filters is again sequential.
The set $\mathcal{F}$ of all cofinite subsets of $X$ (meaning those sets whose complement in $X$ is finite) is proper if and only if $\mathcal{F}$ is infinite (or equivalently, $X$ is infinite), in which case $\mathcal{F}$ is a filter on $X$ known as the Fréchet filter or the cofinite filter on $X.$ If $X$ is finite then $\mathcal{F}$ is equal to the dual ideal $\wp(X),$ which is not a filter. If $X$ is infinite then the family $\{X \setminus \{x\} : x \in X\}$ of complements of singleton sets is a filter subbase that generates the Fréchet filter on $X.$ As with any family of sets over $X$ that contains $\{X \setminus \{x\} : x \in X\},$ the kernel of the Fréchet filter on $X$ is the empty set: $\ker \mathcal{F} = \varnothing.$
The intersection of all elements in any non–empty family $\mathbb{F} \subseteq \operatorname{Filters}(X)$ is itself a filter on $X$ called the infimum or greatest lower bound of $\mathbb{F}$ in $\operatorname{Filters}(X),$ which is why it may be denoted by $\bigwedge_{\mathcal{F} \in \mathbb{F}} \mathcal{F}.$ Said differently, $\ker \mathbb{F} = \bigcap_{\mathcal{F} \in \mathbb{F}} \mathcal{F} \in \operatorname{Filters}(X).$ Because every filter on $X$ has $\{X\}$ as a subset, this intersection is never empty. By definition, the infimum is the finest/largest (relative to $\subseteq$ and $\leq$) filter contained as a subset of each member of $\mathbb{F}.$
If $\mathcal{B}$ and $\mathcal{F}$ are filters then their infimum in $\operatorname{Filters}(X)$ is the filter $\mathcal{B} \,(\cup)\, \mathcal{F}.$ If $\mathcal{B}$ and $\mathcal{F}$ are prefilters then $\mathcal{B} \,(\cup)\, \mathcal{F}$ is a prefilter that is coarser (with respect to $\leq$) than both $\mathcal{B}$ and $\mathcal{F}$ (that is, $\mathcal{B} \,(\cup)\, \mathcal{F} \leq \mathcal{B}$ and $\mathcal{B} \,(\cup)\, \mathcal{F} \leq \mathcal{F}$); indeed, it is one of the finest such prefilters, meaning that if $\mathcal{S}$ is a prefilter such that $\mathcal{S} \leq \mathcal{B}$ and $\mathcal{S} \leq \mathcal{F}$ then necessarily $\mathcal{S} \leq \mathcal{B} \,(\cup)\, \mathcal{F}.$ More generally, if $\mathcal{B}$ and $\mathcal{F}$ are non–empty families and if $\mathbb{S} := \{\mathcal{S} \subseteq \wp(X) : \mathcal{S} \leq \mathcal{B} \text{ and } \mathcal{S} \leq \mathcal{F}\}$ then $\mathcal{B} \,(\cup)\, \mathcal{F} \in \mathbb{S}$ and $\mathcal{B} \,(\cup)\, \mathcal{F}$ is a greatest element (with respect to $\leq$) of $\mathbb{S}.$
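On finite examples the paired union $\mathcal{B} \,(\cup)\, \mathcal{F} = \{B \cup F : B \in \mathcal{B}, F \in \mathcal{F}\}$ can be computed directly. The sketch below (helper names are illustrative, not from the article) verifies, for two principal filters on a three-point set, that their paired union coincides with the set intersection of the two families, i.e. their infimum:

```python
from itertools import chain, combinations

X = frozenset({1, 2, 3})

def powerset(X):
    s = list(X)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def principal_filter(kernel):
    """All supersets of `kernel` within X: the principal filter it generates."""
    return {S for S in powerset(X) if frozenset(kernel) <= S}

def paired_union(B, F):
    """B (u) F = {B ∪ F : B in B, F in F}."""
    return {b | f for b in B for f in F}

B = principal_filter({1})  # all sets containing 1
F = principal_filter({2})  # all sets containing 2

# For filters, the paired union is their infimum, which is also the
# plain set intersection of the two families:
assert paired_union(B, F) == B & F
# here it is the principal filter generated by {1} ∪ {2} = {1, 2}
assert paired_union(B, F) == principal_filter({1, 2})
```

The design mirrors the text: for filters, upward closedness makes $\mathcal{B} \,(\cup)\, \mathcal{F}$ and $\mathcal{B} \cap \mathcal{F}$ coincide, while for mere prefilters only the paired union is guaranteed to be a prefilter.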
Let $\varnothing \neq \mathbb{F} \subseteq \operatorname{DualIdeals}(X)$ and let $\cup \mathbb{F} = \bigcup_{\mathcal{F} \in \mathbb{F}} \mathcal{F}.$ The supremum or least upper bound of $\mathbb{F}$ in $\operatorname{DualIdeals}(X),$ denoted by $\bigvee_{\mathcal{F} \in \mathbb{F}} \mathcal{F},$ is the smallest (relative to $\subseteq$) dual ideal on $X$ containing every element of $\mathbb{F}$ as a subset; that is, it is the smallest (relative to $\subseteq$) dual ideal on $X$ containing $\cup \mathbb{F}$ as a subset. This dual ideal is $\bigvee_{\mathcal{F} \in \mathbb{F}} \mathcal{F} = \pi\left(\cup \mathbb{F}\right)^{\uparrow X},$ where $\pi\left(\cup \mathbb{F}\right) := \left\{F_1 \cap \cdots \cap F_n : n \in \mathbb{N} \text{ and every } F_i \text{ belongs to some } \mathcal{F} \in \mathbb{F}\right\}$ is the π–system generated by $\cup \mathbb{F}.$
As with any non–empty family of sets, $\cup \mathbb{F}$ is contained in some filter on $X$ if and only if it is a filter subbase, or equivalently, if and only if $\bigvee_{\mathcal{F} \in \mathbb{F}} \mathcal{F} = \pi\left(\cup \mathbb{F}\right)^{\uparrow X}$ is a filter on $X,$ in which case this family is the smallest (relative to $\subseteq$) filter on $X$ containing every element of $\mathbb{F}$ as a subset, and necessarily $\mathbb{F} \subseteq \operatorname{Filters}(X).$
Let $\varnothing \neq \mathbb{F} \subseteq \operatorname{Filters}(X)$ and let $\cup \mathbb{F} = \bigcup_{\mathcal{F} \in \mathbb{F}} \mathcal{F}.$ The supremum or least upper bound of $\mathbb{F}$ in $\operatorname{Filters}(X),$ denoted by $\bigvee_{\mathcal{F} \in \mathbb{F}} \mathcal{F}$ if it exists, is by definition the smallest (relative to $\subseteq$) filter on $X$ containing every element of $\mathbb{F}$ as a subset. If it exists then necessarily $\bigvee_{\mathcal{F} \in \mathbb{F}} \mathcal{F} = \pi\left(\cup \mathbb{F}\right)^{\uparrow X}$ (as defined above), and $\bigvee_{\mathcal{F} \in \mathbb{F}} \mathcal{F}$ will also be equal to the intersection of all filters on $X$ containing $\cup \mathbb{F}.$ This supremum of $\mathbb{F}$ in $\operatorname{Filters}(X)$ exists if and only if the dual ideal $\pi\left(\cup \mathbb{F}\right)^{\uparrow X}$ is a filter on $X.$
The least upper bound of a family of filters $\mathbb{F}$ may fail to be a filter. Indeed, if $X$ contains at least 2 distinct elements then there exist filters $\mathcal{B}$ and $\mathcal{C}$ on $X$ for which there does not exist a filter $\mathcal{F}$ on $X$ that contains both $\mathcal{B}$ and $\mathcal{C}.$ If $\cup \mathbb{F}$ is not a filter subbase then the supremum of $\mathbb{F}$ in $\operatorname{Filters}(X)$ does not exist, and the same is true of its supremum in $\operatorname{Prefilters}(X),$ but their supremum in the set of all dual ideals on $X$ will exist (it being the degenerate filter $\wp(X)$).
If $\mathcal{B}$ and $\mathcal{F}$ are prefilters (resp. filters on $X$) then $\mathcal{B} \,(\cap)\, \mathcal{F}$ is a prefilter (resp. a filter) if and only if it is non–degenerate (or said differently, if and only if $\mathcal{B}$ and $\mathcal{F}$ mesh), in which case it is one of the coarsest prefilters (resp. the coarsest filter) on $X$ (with respect to $\leq$) that is finer (with respect to $\leq$) than both $\mathcal{B}$ and $\mathcal{F};$ this means that if $\mathcal{S}$ is any prefilter (resp. any filter) such that $\mathcal{B} \leq \mathcal{S}$ and $\mathcal{F} \leq \mathcal{S}$ then necessarily $\mathcal{B} \,(\cap)\, \mathcal{F} \leq \mathcal{S},$ in which case it is denoted by $\mathcal{B} \vee \mathcal{F}.$
Let $I$ and $X$ be non–empty sets and for every $i \in I$ let $\mathcal{D}_i$ be a dual ideal on $X.$ If $\mathcal{I}$ is any dual ideal on $I$ then $\bigcup_{\Xi \in \mathcal{I}} \bigcap_{i \in \Xi} \mathcal{D}_i$ is a dual ideal on $X$ called Kowalsky's dual ideal or Kowalsky's filter.
The club filter of a regular uncountable cardinal $\kappa$ is the filter of all sets containing a club subset of $\kappa.$ It is a $\kappa$-complete filter closed under diagonal intersection.
Other examples
Let $X = \{p, 1, 2, 3\}$ and let $\mathcal{B} = \{\{p\}, \{p, 1, 2\}, \{p, 1, 3\}\},$ which makes $\mathcal{B}$ a prefilter and a filter subbase that is not closed under finite intersections. Because $\mathcal{B}$ is a prefilter, the smallest prefilter containing $\mathcal{B}$ is $\mathcal{B}.$ The π–system generated by $\mathcal{B}$ is $\{\{p, 1\}\} \cup \mathcal{B}.$ In particular, the smallest prefilter containing the filter subbase $\mathcal{B}$ is not equal to the set of all finite intersections of sets in $\mathcal{B}.$ The filter on $X$ generated by $\mathcal{B}$ is $\mathcal{B}^{\uparrow X} = \{S \subseteq X : p \in S\} = \{\{p\} \cup T : T \subseteq \{1, 2, 3\}\}.$ All three of $\mathcal{B},$ the π–system $\mathcal{B}$ generates, and $\mathcal{B}^{\uparrow X}$ are examples of fixed, principal, ultra prefilters that are principal at the point $p;$ $\mathcal{B}^{\uparrow X}$ is also an ultrafilter on $X.$
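The claims in this example are small enough to verify exhaustively. A sketch (helper names illustrative, not from the article):

```python
from itertools import chain, combinations

X = frozenset({"p", 1, 2, 3})
B = {frozenset({"p"}), frozenset({"p", 1, 2}), frozenset({"p", 1, 3})}

def powerset(X):
    s = list(X)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def pi_system(family):
    """Close `family` under finite intersections (iterated pairwise)."""
    closed = set(family)
    while True:
        new = {a & b for a in closed for b in closed} - closed
        if not new:
            return closed
        closed |= new

# B is not closed under finite intersections: its π-system adds {p, 1}
assert pi_system(B) == B | {frozenset({"p", 1})}

# the generated filter B^{up X} (all supersets of members of B) is exactly
# the family of all subsets of X that contain p
filt = {S for S in powerset(X) if any(b <= S for b in B)}
assert filt == {S for S in powerset(X) if "p" in S}
```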
Let $(X, \tau)$ be a topological space, $\mathcal{B} \subseteq \wp(X),$ and define $\overline{\mathcal{B}} := \left\{\operatorname{cl}_X B : B \in \mathcal{B}\right\},$ where $\mathcal{B}$ is necessarily finer than $\overline{\mathcal{B}}.$ If $\mathcal{B}$ is non–empty (resp. non–degenerate, a filter subbase, a prefilter, closed under finite unions) then the same is true of $\overline{\mathcal{B}}.$ If $\mathcal{B}$ is a filter on $X$ then $\overline{\mathcal{B}}$ is a prefilter but not necessarily a filter on $X,$ although $\left(\overline{\mathcal{B}}\right)^{\uparrow X}$ is a filter on $X$ equivalent to $\overline{\mathcal{B}}.$
The set $\mathcal{B}$ of all dense open subsets of a (non–empty) topological space $X$ is a proper π–system and so also a prefilter. If the space is a Baire space, then the set of all countable intersections of dense open subsets is a π–system and a prefilter that is finer than $\mathcal{B}.$
If $X = \mathbb{R}^n$ (with $1 \leq n \in \mathbb{N}$) then the set $\mathcal{B}_{\operatorname{LebFinite}}$ of all $B \in \mathcal{B}$ such that $B$ has finite Lebesgue measure is a proper π–system and a free prefilter that is also a proper subset of $\mathcal{B}.$ The prefilters $\mathcal{B}_{\operatorname{LebFinite}}$ and $\mathcal{B}$ are equivalent and so generate the same filter on $X.$ The prefilter $\mathcal{B}_{\operatorname{LebFinite}}$ is properly contained in, and not equivalent to, the prefilter consisting of all dense subsets of $\mathbb{R}.$ Since $X$ is a Baire space, every countable intersection of sets in $\mathcal{B}_{\operatorname{LebFinite}}$ is dense in $X$ (and also comeagre and non–meager), so the set of all countable intersections of elements of $\mathcal{B}_{\operatorname{LebFinite}}$ is a prefilter and a π–system; it is also finer than, and not equivalent to, $\mathcal{B}_{\operatorname{LebFinite}}.$
A filter subbase with no $\subseteq$–smallest prefilter containing it: In general, if a filter subbase $\mathcal{S}$ is not a π–system then an intersection $S_1 \cap \cdots \cap S_n$ of $n$ sets from $\mathcal{S}$ will usually require a description involving $n$ variables that cannot be reduced down to only two (consider, for instance, $\pi(\mathcal{S})$ when $\mathcal{S} = \{(-\infty, r) \cup (r, \infty) : r \in \mathbb{R}\}$). This example illustrates an atypical class of filter subbases $\mathcal{S}_R$ where all sets in both $\mathcal{S}_R$ and its generated π–system can be described as sets of the form $B_{r,s},$ so that in particular, no more than two variables (specifically, $r$ and $s$) are needed to describe the generated π–system.
For all $r, s \in \mathbb{R},$ let $B_{r,s} = (r, 0) \cup (s, \infty),$ where $B_{r,s} = B_{\min(r,s),s}$ always holds, so no generality is lost by adding the assumption $r \leq s.$ For all real $r \leq s$ and $u \leq v,$ if $s$ or $v$ is non-negative then $B_{-r,s} \cap B_{-u,v} = B_{-\min(r,u),\max(s,v)}.$ For every set $R$ of positive reals, let
$$\mathcal{S}_R := \left\{B_{-r,r} : r \in R\right\} = \{(-r, 0) \cup (r, \infty) : r \in R\} \quad \text{ and } \quad \mathcal{B}_R := \left\{B_{-r,s} : r \leq s \text{ with } r, s \in R\right\} = \{(-r, 0) \cup (s, \infty) : r \leq s \text{ in } R\}.$$
Let $X = \mathbb{R}$ and suppose $\varnothing \neq R \subseteq (0, \infty)$ is not a singleton set. Then $\mathcal{S}_R$ is a filter subbase but not a prefilter, and $\mathcal{B}_R = \pi\left(\mathcal{S}_R\right)$ is the π–system it generates, so that $\mathcal{B}_R^{\uparrow X}$ is the unique smallest filter on $X = \mathbb{R}$ containing $\mathcal{S}_R.$ However, $\mathcal{S}_R^{\uparrow X}$ is not a filter on $X$ (nor is it a prefilter, because it is not directed downward, although it is a filter subbase), and $\mathcal{S}_R^{\uparrow X}$ is a proper subset of the filter $\mathcal{B}_R^{\uparrow X}.$
If $R, S \subseteq (0, \infty)$ are non–empty intervals then the filter subbases $\mathcal{S}_R$ and $\mathcal{S}_S$ generate the same filter on $X$ if and only if $R = S.$ If $\mathcal{C}$ is a prefilter satisfying $\mathcal{S}_{(0,\infty)} \subseteq \mathcal{C} \subseteq \mathcal{B}_{(0,\infty)}$ then for any $C \in \mathcal{C} \setminus \mathcal{S}_{(0,\infty)},$ the family $\mathcal{C} \setminus \{C\}$ is also a prefilter satisfying $\mathcal{S}_{(0,\infty)} \subseteq \mathcal{C} \setminus \{C\} \subseteq \mathcal{B}_{(0,\infty)}.$ This shows that there cannot exist a minimal/least (with respect to $\subseteq$) prefilter that both contains $\mathcal{S}_{(0,\infty)}$ and is a subset of the π–system generated by $\mathcal{S}_{(0,\infty)}.$ This remains true even if the requirement that the prefilter be a subset of $\mathcal{B}_{(0,\infty)} = \pi\left(\mathcal{S}_{(0,\infty)}\right)$ is removed; that is, (in sharp contrast to filters) there does not exist a minimal/least (with respect to $\subseteq$) prefilter containing the filter subbase $\mathcal{S}_{(0,\infty)}.$
=== Ultrafilters ===
There are many other characterizations of "ultrafilter" and "ultra prefilter," which are listed in the article on ultrafilters. Important properties of ultrafilters are also described in that article.
$$\begin{alignedat}{2}\operatorname{Ultrafilters}(X) \;&=\; \operatorname{Filters}(X) \,\cap\, \operatorname{UltraPrefilters}(X)\\&\subseteq\; \operatorname{UltraPrefilters}(X) = \operatorname{UltraFilterSubbases}(X)\\&\subseteq\; \operatorname{Prefilters}(X)\end{alignedat}$$
Any non–degenerate family that has a singleton set as an element is ultra, in which case it will then be an ultra prefilter if and only if it also has the finite intersection property.
The trivial filter $\{X\}$ on $X$ is ultra if and only if $X$ is a singleton set.
The ultrafilter lemma
The following important theorem is due to Alfred Tarski (1930).
A consequence of the ultrafilter lemma is that every filter is equal to the intersection of all ultrafilters containing it.
Assuming the axioms of Zermelo–Fraenkel set theory (ZF), the ultrafilter lemma follows from the axiom of choice (in particular from Zorn's lemma) but is strictly weaker than it. The ultrafilter lemma implies the axiom of choice for finite sets. If only dealing with Hausdorff spaces, then most basic results (as encountered in introductory courses) in topology (such as Tychonoff's theorem for compact Hausdorff spaces and the Alexander subbase theorem) and in functional analysis (such as the Hahn–Banach theorem) can be proven using only the ultrafilter lemma; the full strength of the axiom of choice might not be needed.
=== Kernels ===
The kernel is useful in classifying properties of prefilters and other families of sets.
If $\mathcal{B} \subseteq \wp(X)$ then for any point $x,$ $x \notin \ker \mathcal{B}$ if and only if $X \setminus \{x\} \in \mathcal{B}^{\uparrow X}.$
Properties of kernels
If $\mathcal{B} \subseteq \wp(X)$ then $\ker\left(\mathcal{B}^{\uparrow X}\right) = \ker \mathcal{B},$ and this set is also equal to the kernel of the π–system that is generated by $\mathcal{B}.$ In particular, if $\mathcal{B}$ is a filter subbase then the kernels of all of the following sets are equal: (1) $\mathcal{B},$ (2) the π–system generated by $\mathcal{B},$ and (3) the filter generated by $\mathcal{B}.$
If $f$ is a map then $f(\ker \mathcal{B}) \subseteq \ker f(\mathcal{B})$ and $f^{-1}(\ker \mathcal{B}) = \ker f^{-1}(\mathcal{B}).$
If $\mathcal{B} \leq \mathcal{C}$ then $\ker \mathcal{C} \subseteq \ker \mathcal{B},$ while if $\mathcal{B}$ and $\mathcal{C}$ are equivalent then $\ker \mathcal{B} = \ker \mathcal{C}.$ Equivalent families have equal kernels. Two principal families are equivalent if and only if their kernels are equal; that is, if $\mathcal{B}$ and $\mathcal{C}$ are principal then they are equivalent if and only if $\ker \mathcal{B} = \ker \mathcal{C}.$
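For finite families these kernel identities can be checked directly. A minimal sketch (helper names illustrative, not from the article) verifies that upward closure does not change the kernel:

```python
from itertools import chain, combinations
from functools import reduce

X = frozenset({1, 2, 3, 4})

def powerset(X):
    s = list(X)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def kernel(family):
    """ker B: the intersection of all sets in the family."""
    return reduce(lambda a, b: a & b, family)

def upward_closure(family, X):
    """B^{up X}: all subsets of X containing some member of the family."""
    return {S for S in powerset(X) if any(b <= S for b in family)}

B = {frozenset({1, 2}), frozenset({1, 3})}
assert kernel(B) == frozenset({1})
# ker(B^{up X}) = ker(B): enlarging upward adds only supersets,
# which cannot shrink the intersection below the original kernel
assert kernel(upward_closure(B, X)) == kernel(B)
```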
==== Classifying families by their kernels ====
If $\mathcal{B}$ is a principal filter on $X$ then $\varnothing \neq \ker \mathcal{B} \in \mathcal{B}$ and $\mathcal{B} = \{\ker \mathcal{B}\}^{\uparrow X} = \{S \cup \ker \mathcal{B} : S \subseteq X \setminus \ker \mathcal{B}\} = \wp(X \setminus \ker \mathcal{B}) \,(\cup)\, \{\ker \mathcal{B}\},$ where $\{\ker \mathcal{B}\}$ is also the smallest prefilter that generates $\mathcal{B}.$
Family of examples: For any non–empty $C \subseteq \mathbb{R},$ the family $\mathcal{B}_C = \{\mathbb{R} \setminus (r + C) : r \in \mathbb{R}\}$ is free, but it is a filter subbase if and only if no finite union of the form $\left(r_1 + C\right) \cup \cdots \cup \left(r_n + C\right)$ covers $\mathbb{R},$ in which case the filter that it generates will also be free. In particular, $\mathcal{B}_C$ is a filter subbase if $C$ is countable (for example, $C = \mathbb{Q}, \mathbb{Z},$ the primes), a meager set in $\mathbb{R},$ a set of finite measure, or a bounded subset of $\mathbb{R}.$ If $C$ is a singleton set then $\mathcal{B}_C$ is a subbase for the Fréchet filter on $\mathbb{R}.$
For every filter $\mathcal{F}$ on $X$ there exists a unique pair of dual ideals $\mathcal{F}^*$ and $\mathcal{F}^\bullet$ on $X$ such that $\mathcal{F}^*$ is free, $\mathcal{F}^\bullet$ is principal, $\mathcal{F}^* \wedge \mathcal{F}^\bullet = \mathcal{F},$ and $\mathcal{F}^*$ and $\mathcal{F}^\bullet$ do not mesh (that is, $\mathcal{F}^* \vee \mathcal{F}^\bullet = \wp(X)$). The dual ideal $\mathcal{F}^*$ is called the free part of $\mathcal{F}$ while $\mathcal{F}^\bullet$ is called the principal part, where at least one of these dual ideals is a filter. If $\mathcal{F}$ is principal then $\mathcal{F}^\bullet := \mathcal{F}$ and $\mathcal{F}^* := \wp(X);$ otherwise, $\mathcal{F}^\bullet := \{\ker \mathcal{F}\}^{\uparrow X}$ and $\mathcal{F}^* := \mathcal{F} \vee \{X \setminus (\ker \mathcal{F})\}^{\uparrow X}$ is a free (non–degenerate) filter.
Finite prefilters and finite sets
If a filter subbase $\mathcal{B}$ is finite then it is fixed (that is, not free); this is because $\ker \mathcal{B} = \bigcap_{B \in \mathcal{B}} B$ is a finite intersection and the filter subbase $\mathcal{B}$ has the finite intersection property.
A finite prefilter is necessarily principal, although it does not have to be closed under finite intersections.
If $X$ is finite then all of the conclusions above hold for any $\mathcal{B} \subseteq \wp(X).$ In particular, on a finite set $X,$ there are no free filter subbases (and so no free prefilters), all prefilters are principal, and all filters on $X$ are principal filters generated by their (non–empty) kernels.
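This can be verified exhaustively on a small set: enumerating every family of subsets of a three-element set and keeping those that are proper filters yields exactly the principal filters, one per non-empty kernel. A sketch (helper names illustrative, not from the article):

```python
from itertools import chain, combinations

X = frozenset({1, 2, 3})
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(list(X), r) for r in range(len(X) + 1))]

def is_filter(F):
    """Proper filter on X: non-empty, omits the empty set, upward closed,
    and closed under binary intersection."""
    if not F or frozenset() in F:
        return False
    upward = all(S in F for A in F for S in subsets if A <= S)
    meets = all(A & B in F for A in F for B in F)
    return upward and meets

def principal_filter(K):
    return frozenset(S for S in subsets if K <= S)

# enumerate every non-empty family of subsets of X and keep the filters
families = [frozenset(c) for c in
            chain.from_iterable(combinations(subsets, r)
                                for r in range(1, len(subsets) + 1))]
filters = [F for F in families if is_filter(F)]

# every filter on the finite set X is principal, generated by its kernel
for F in filters:
    assert F == principal_filter(frozenset.intersection(*F))

# the filters correspond exactly to the non-empty subsets of X: 2^3 - 1 = 7
assert len(filters) == 7
```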
The trivial filter $\{X\}$ is always a finite filter on $X,$ and if $X$ is infinite then it is the only finite filter, because a non–trivial finite filter on a set $X$ is possible if and only if $X$ is finite.
However, on any infinite set there are non–trivial filter subbases and prefilters that are finite (although they cannot be filters).
If $X$ is a singleton set then the trivial filter $\{X\}$ is the only proper subset of $\wp(X)$ and moreover, this set $\{X\}$ is a principal ultra prefilter, and any superset $\mathcal{F} \supseteq \mathcal{B}$ (where $\mathcal{F} \subseteq \wp(Y)$ and $X \subseteq Y$) with the finite intersection property will also be a principal ultra prefilter (even if $Y$ is infinite).
==== Characterizing fixed ultra prefilters ====
If a family of sets $\mathcal{B}$ is fixed (that is, $\ker \mathcal{B} \neq \varnothing$) then $\mathcal{B}$ is ultra if and only if some element of $\mathcal{B}$ is a singleton set, in which case $\mathcal{B}$ will necessarily be a prefilter. Every principal prefilter is fixed, so a principal prefilter $\mathcal{B}$ is ultra if and only if $\ker \mathcal{B}$ is a singleton set.
Every filter on $X$ that is principal at a single point is an ultrafilter, and if in addition $X$ is finite, then there are no ultrafilters on $X$ other than these.
The next theorem shows that every ultrafilter falls into one of two categories: either it is free or else it is a principal filter generated by a single point.
=== Finer/coarser, subordination, and meshing ===
The preorder $\leq$ that is defined below is of fundamental importance for the use of prefilters (and filters) in topology. For instance, this preorder is used to define the prefilter equivalent of "subsequence", where "$\mathcal{F} \geq \mathcal{C}$" can be interpreted as "$\mathcal{F}$ is a subsequence of $\mathcal{C}$" (so "subordinate to" is the prefilter equivalent of "subsequence of"). It is also used to define prefilter convergence in a topological space.
The definition of
B
{\displaystyle {\mathcal {B}}}
meshes with
C
,
{\displaystyle {\mathcal {C}},}
which is closely related to the preorder
≤
,
{\displaystyle \,\leq ,}
is used in Topology to define cluster points.
Two families of sets
B
and
C
{\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}}
mesh and are compatible, indicated by writing
B
#
C
,
{\displaystyle {\mathcal {B}}\#{\mathcal {C}},}
if
B
∩
C
≠
∅
for all
B
∈
B
and
C
∈
C
.
{\displaystyle B\cap C\neq \varnothing {\text{ for all }}B\in {\mathcal {B}}{\text{ and }}C\in {\mathcal {C}}.}
If
B
and
C
{\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}}
do not mesh then they are dissociated. If
S
⊆
X
and
B
⊆
℘
(
X
)
{\displaystyle S\subseteq X{\text{ and }}{\mathcal {B}}\subseteq \wp (X)}
then
B
and
S
{\displaystyle {\mathcal {B}}{\text{ and }}S}
are said to mesh if
B
and
{
S
}
{\displaystyle {\mathcal {B}}{\text{ and }}\{S\}}
mesh, or equivalently, if the trace of
B
on
S
,
{\displaystyle {\mathcal {B}}{\text{ on }}S,}
which is the family
B
|
S
=
{
B
∩
S
:
B
∈
B
}
,
{\displaystyle {\mathcal {B}}{\big \vert }_{S}=\{B\cap S~:~B\in {\mathcal {B}}\},}
does not contain the empty set, where the trace is also called the restriction of
B
to
S
.
{\displaystyle {\mathcal {B}}{\text{ to }}S.}
Example: If $x_{i_\bullet} = \left(x_{i_n}\right)_{n=1}^{\infty}$ is a subsequence of $x_\bullet = \left(x_i\right)_{i=1}^{\infty}$ then $\operatorname{Tails}\left(x_{i_\bullet}\right)$ is subordinate to $\operatorname{Tails}\left(x_\bullet\right)$; in symbols: $\operatorname{Tails}\left(x_{i_\bullet}\right) \vdash \operatorname{Tails}\left(x_\bullet\right)$ and also $\operatorname{Tails}\left(x_\bullet\right) \leq \operatorname{Tails}\left(x_{i_\bullet}\right)$. Stated in plain English, the prefilter of tails of a subsequence is always subordinate to that of the original sequence.
To see this, let $C := x_{\geq i} \in \operatorname{Tails}\left(x_\bullet\right)$ be arbitrary (or equivalently, let $i \in \mathbb{N}$ be arbitrary); it remains to show that this set contains some $F := x_{i_{\geq n}} \in \operatorname{Tails}\left(x_{i_\bullet}\right)$. For the set $x_{\geq i} = \left\{x_i, x_{i+1}, \ldots\right\}$ to contain $x_{i_{\geq n}} = \left\{x_{i_n}, x_{i_{n+1}}, \ldots\right\}$, it is sufficient to have $i \leq i_n$. Since $i_1 < i_2 < \cdots$ are strictly increasing integers, there exists $n \in \mathbb{N}$ such that $i_n \geq i$, and so $x_{\geq i} \supseteq x_{i_{\geq n}}$ holds, as desired.
Consequently, $\operatorname{TailsFilter}\left(x_\bullet\right) \subseteq \operatorname{TailsFilter}\left(x_{i_\bullet}\right)$. The left hand side will be a strict/proper subset of the right hand side if (for instance) every point of $x_\bullet$ is unique (that is, when $x_\bullet : \mathbb{N} \to X$ is injective) and $x_{i_\bullet}$ is the even-indexed subsequence $\left(x_2, x_4, x_6, \ldots\right)$, because under these conditions, every tail $x_{i_{\geq n}} = \left\{x_{2n}, x_{2n+2}, x_{2n+4}, \ldots\right\}$ (for every $n \in \mathbb{N}$) of the subsequence will belong to the right hand side filter but not to the left hand side filter.
For another example, if $\mathcal{B}$ is any family then $\varnothing \leq \mathcal{B} \leq \mathcal{B} \leq \{\varnothing\}$ always holds and furthermore, $\{\varnothing\} \leq \mathcal{B}$ if and only if $\varnothing \in \mathcal{B}$.
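For finite families, the subordination relation $\mathcal{C} \leq \mathcal{F}$ (every set in $\mathcal{C}$ contains some set of $\mathcal{F}$) is straightforward to compute. The following sketch (with an illustrative helper name of our choosing) verifies the three claims just made:

```python
def is_coarser(C, F):
    """C <= F: every set in C contains some set of F (F is finer)."""
    return all(any(f <= c for f in F) for c in C)

B = [frozenset({1, 2}), frozenset({2, 3})]
empty_family = []
degenerate = [frozenset()]           # the family {∅}

assert is_coarser(empty_family, B)   # ∅ ≤ B holds vacuously
assert is_coarser(B, degenerate)     # B ≤ {∅}, since ∅ ⊆ B for every B
assert not is_coarser(degenerate, B) # {∅} ≤ B fails because ∅ ∉ B
```

Adding `frozenset()` to `B` would make the last assertion flip, matching the stated equivalence $\{\varnothing\} \leq \mathcal{B}$ iff $\varnothing \in \mathcal{B}$.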
Assume that $\mathcal{C}$ and $\mathcal{F}$ are families of sets that satisfy $\mathcal{B} \leq \mathcal{F}$ and $\mathcal{C} \leq \mathcal{F}$. Then $\ker \mathcal{F} \subseteq \ker \mathcal{C}$, and $\mathcal{C} \neq \varnothing$ implies $\mathcal{F} \neq \varnothing$, and also $\varnothing \in \mathcal{C}$ implies $\varnothing \in \mathcal{F}$. If in addition to $\mathcal{C} \leq \mathcal{F}$, $\mathcal{F}$ is a filter subbase and $\mathcal{C} \neq \varnothing$, then $\mathcal{C}$ is a filter subbase and also $\mathcal{C}$ and $\mathcal{F}$ mesh.
More generally, if both $\varnothing \neq \mathcal{B} \leq \mathcal{F}$ and $\varnothing \neq \mathcal{C} \leq \mathcal{F}$ and if the intersection of any two elements of $\mathcal{F}$ is non-empty, then $\mathcal{B}$ and $\mathcal{C}$ mesh.
Every filter subbase is coarser than both the π–system that it generates and the filter that it generates.
If $\mathcal{C}$ and $\mathcal{F}$ are families such that $\mathcal{C} \leq \mathcal{F}$, the family $\mathcal{C}$ is ultra, and $\varnothing \notin \mathcal{F}$, then $\mathcal{F}$ is necessarily ultra. It follows that any family that is equivalent to an ultra family will necessarily be ultra. In particular, if $\mathcal{C}$ is a prefilter then either both $\mathcal{C}$ and the filter $\mathcal{C}^{\uparrow X}$ it generates are ultra or neither one is ultra.
If a filter subbase is ultra then it is necessarily a prefilter, in which case the filter that it generates will also be ultra. A filter subbase $\mathcal{B}$ that is not a prefilter cannot be ultra; but it is nevertheless still possible for the prefilter and filter generated by $\mathcal{B}$ to be ultra. If $S \subseteq X$ and $\mathcal{B} \subseteq \wp(X)$ is upward closed in $X$ then $S \notin \mathcal{B}$ if and only if $(X \setminus S) \# \mathcal{B}$.
Relational properties of subordination
The relation $\leq$ is reflexive and transitive, which makes it into a preorder on $\wp(\wp(X))$. The relation $\leq$ on $\operatorname{Filters}(X)$ is antisymmetric, but if $X$ has more than one point then it is not symmetric.
Symmetry: For any $\mathcal{B} \subseteq \wp(X)$, $\mathcal{B} \leq \{X\}$ if and only if $\{X\} = \mathcal{B}$. So the set $X$ has more than one point if and only if the relation $\leq$ on $\operatorname{Filters}(X)$ is not symmetric.
Antisymmetry:
If
B
⊆
C
then
B
≤
C
{\displaystyle {\mathcal {B}}\subseteq {\mathcal {C}}{\text{ then }}{\mathcal {B}}\leq {\mathcal {C}}}
but while the converse does not hold in general, it does hold if
C
{\displaystyle {\mathcal {C}}}
is upward closed (such as if
C
{\displaystyle {\mathcal {C}}}
is a filter).
Two filters are equivalent if and only if they are equal, which makes the restriction of
≤
{\displaystyle \,\leq \,}
to
Filters
(
X
)
{\displaystyle \operatorname {Filters} (X)}
antisymmetric.
But in general,
≤
{\displaystyle \,\leq \,}
is not antisymmetric on
Prefilters
(
X
)
{\displaystyle \operatorname {Prefilters} (X)}
nor on
℘
(
℘
(
X
)
)
{\displaystyle \wp (\wp (X))}
; that is,
C
≤
B
and
B
≤
C
{\displaystyle {\mathcal {C}}\leq {\mathcal {B}}{\text{ and }}{\mathcal {B}}\leq {\mathcal {C}}}
does not necessarily imply
B
=
C
{\displaystyle {\mathcal {B}}={\mathcal {C}}}
; not even if both
C
and
B
{\displaystyle {\mathcal {C}}{\text{ and }}{\mathcal {B}}}
are prefilters. For instance, if
B
{\displaystyle {\mathcal {B}}}
is a prefilter but not a filter then
B
≤
B
↑
X
and
B
↑
X
≤
B
but
B
≠
B
↑
X
.
{\displaystyle {\mathcal {B}}\leq {\mathcal {B}}^{\uparrow X}{\text{ and }}{\mathcal {B}}^{\uparrow X}\leq {\mathcal {B}}{\text{ but }}{\mathcal {B}}\neq {\mathcal {B}}^{\uparrow X}.}
==== Equivalent families of sets ====
The preorder $\leq$ induces its canonical equivalence relation on $\wp(\wp(X))$, where for all $\mathcal{B}, \mathcal{C} \in \wp(\wp(X))$, $\mathcal{B}$ is equivalent to $\mathcal{C}$ if any of the following equivalent conditions hold:
$\mathcal{C} \leq \mathcal{B}$ and $\mathcal{B} \leq \mathcal{C}$.
The upward closures of $\mathcal{C}$ and $\mathcal{B}$ are equal.
Two upward closed (in $X$) subsets of $\wp(X)$ are equivalent if and only if they are equal.
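On a finite set, equivalence can be tested directly by comparing upward closures. A minimal sketch (helper name ours, not from the text):

```python
from itertools import chain, combinations

def upward_closure(B, X):
    """All subsets of X that contain some member of B."""
    subsets = chain.from_iterable(
        combinations(sorted(X), r) for r in range(len(X) + 1))
    return {frozenset(s) for s in subsets if any(b <= frozenset(s) for b in B)}

X = {1, 2, 3}
B = [frozenset({1})]                        # a prefilter
C = [frozenset({1}), frozenset({1, 2})]     # an equivalent prefilter

assert upward_closure(B, X) == upward_closure(C, X)  # same upward closure
assert set(B) != set(C)                              # yet the families differ
```

This illustrates the point above: equivalent families need not be equal, but their upward closures (the filters they generate) coincide.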
If $\mathcal{B} \subseteq \wp(X)$ then necessarily $\varnothing \leq \mathcal{B} \leq \wp(X)$ and $\mathcal{B}$ is equivalent to $\mathcal{B}^{\uparrow X}$. Every equivalence class other than $\{\varnothing\}$ contains a unique representative (that is, element of the equivalence class) that is upward closed in $X$.
Properties preserved between equivalent families
Let $\mathcal{B}, \mathcal{C} \in \wp(\wp(X))$ be arbitrary and let $\mathcal{F}$ be any family of sets. If $\mathcal{B}$ and $\mathcal{C}$ are equivalent (which implies that $\ker \mathcal{B} = \ker \mathcal{C}$) then for each of the statements/properties listed below, either it is true of both $\mathcal{B}$ and $\mathcal{C}$ or else it is false of both:
Not empty
Proper (that is, $\varnothing$ is not an element); moreover, any two degenerate families are necessarily equivalent.
Filter subbase
Prefilter, in which case $\mathcal{B}$ and $\mathcal{C}$ generate the same filter on $X$ (that is, their upward closures in $X$ are equal).
Free
Principal
Ultra
Is equal to the trivial filter $\{X\}$; in words, this means that the only subset of $\wp(X)$ that is equivalent to the trivial filter is the trivial filter. In general, this conclusion of equality does not extend to non-trivial filters (one exception is when both families are filters).
Meshes with $\mathcal{F}$
Is finer than $\mathcal{F}$
Is coarser than $\mathcal{F}$
Is equivalent to $\mathcal{F}$
Missing from the above list is the word "filter" because this property is not preserved by equivalence.
However, if $\mathcal{B}$ and $\mathcal{C}$ are filters on $X$, then they are equivalent if and only if they are equal; this characterization does not extend to prefilters.
Equivalence of prefilters and filter subbases
If $\mathcal{B}$ is a prefilter on $X$ then the following families are always equivalent to each other: $\mathcal{B}$; the π–system generated by $\mathcal{B}$; the filter on $X$ generated by $\mathcal{B}$; and moreover, these three families all generate the same filter on $X$ (that is, the upward closures in $X$ of these families are equal).
In particular, every prefilter is equivalent to the filter that it generates.
By transitivity, two prefilters are equivalent if and only if they generate the same filter.
Every prefilter is equivalent to exactly one filter on $X$, which is the filter that it generates (that is, the prefilter's upward closure).
Said differently, every equivalence class of prefilters contains exactly one representative that is a filter.
In this way, filters can be considered as just being distinguished elements of these equivalence classes of prefilters.
A filter subbase that is not also a prefilter cannot be equivalent to the prefilter (or filter) that it generates.
In contrast, every prefilter is equivalent to the filter that it generates.
This is why prefilters can, by and large, be used interchangeably with the filters that they generate while filter subbases cannot.
Every filter is both a π–system and a ring of sets.
Examples of determining equivalence/non–equivalence
Examples: Let $X = \mathbb{R}$ and let $E$ be the set $\mathbb{Z}$ of integers (or the set $\mathbb{N}$). Define the sets
$\mathcal{B} = \{[e, \infty) : e \in E\}$ and $\mathcal{C}_{\operatorname{open}} = \{(-\infty, e) \cup (1 + e, \infty) : e \in E\}$ and $\mathcal{C}_{\operatorname{closed}} = \{(-\infty, e] \cup [1 + e, \infty) : e \in E\}.$
All three sets are filter subbases but none are filters on $X$, and only $\mathcal{B}$ is a prefilter (in fact, $\mathcal{B}$ is even free and closed under finite intersections). The set $\mathcal{C}_{\operatorname{closed}}$ is fixed while $\mathcal{C}_{\operatorname{open}}$ is free (unless $E = \mathbb{N}$). They satisfy $\mathcal{C}_{\operatorname{closed}} \leq \mathcal{C}_{\operatorname{open}} \leq \mathcal{B}$, but no two of these families are equivalent; moreover, no two of the filters generated by these three filter subbases are equivalent/equal. This conclusion can be reached by showing that the π–systems that they generate are not equivalent. Unlike with $\mathcal{C}_{\operatorname{open}}$, every set in the π–system generated by $\mathcal{C}_{\operatorname{closed}}$ contains $\mathbb{Z}$ as a subset, which is what prevents their generated π–systems (and hence their generated filters) from being equivalent. If $E$ were instead $\mathbb{Q}$ or $\mathbb{R}$ then all three families would be free, and although the sets $\mathcal{C}_{\operatorname{closed}}$ and $\mathcal{C}_{\operatorname{open}}$ would remain non-equivalent to each other, their generated π–systems would be equivalent and consequently they would generate the same filter on $X$; however, this common filter would still be strictly coarser than the filter generated by $\mathcal{B}$.
== Set theoretic properties and constructions ==
=== Trace and meshing ===
If $\mathcal{B}$ is a prefilter (resp. filter) on $X$ and $S \subseteq X$ then the trace of $\mathcal{B}$ on $S$, which is the family $\mathcal{B}\big\vert_S := \mathcal{B} \,(\cap)\, \{S\}$, is a prefilter (resp. a filter) if and only if $\mathcal{B}$ and $S$ mesh (that is, $\varnothing \notin \mathcal{B} \,(\cap)\, \{S\}$), in which case the trace of $\mathcal{B}$ on $S$ is said to be induced by $S$.
If $\mathcal{B}$ is ultra and $\mathcal{B}$ and $S$ mesh then the trace $\mathcal{B}\big\vert_S$ is ultra. If $\mathcal{B}$ is an ultrafilter on $X$ then the trace of $\mathcal{B}$ on $S$ is a filter on $S$ if and only if $S \in \mathcal{B}$.
For example, suppose that $\mathcal{B}$ is a filter on $X$ and $S \subseteq X$ is such that $S \neq X$ and $X \setminus S \notin \mathcal{B}$. Then $\mathcal{B}$ and $S$ mesh and $\mathcal{B} \cup \{S\}$ generates a filter on $X$ that is strictly finer than $\mathcal{B}$.
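The trace and the meshing condition are easy to compute for finite families. A small sketch (helper names ours, for illustration):

```python
def trace(B, S):
    """Trace of the family B on the set S: {b ∩ S : b ∈ B}."""
    return [b & S for b in B]

def meshes_with(B, S):
    """B and S mesh iff the trace does not contain the empty set."""
    return all(trace(B, S))

B = [frozenset({1, 2}), frozenset({2, 3})]   # a prefilter on {1, 2, 3}
assert meshes_with(B, frozenset({2}))        # every member meets {2}
assert not meshes_with(B, frozenset({1}))    # {2, 3} ∩ {1} = ∅
```

When the mesh condition holds, `trace(B, S)` is itself a prefilter on `S`, matching the statement above.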
When prefilters mesh
Given non-empty families $\mathcal{B}$ and $\mathcal{C}$, the family $\mathcal{B} \,(\cap)\, \mathcal{C} := \{B \cap C : B \in \mathcal{B} \text{ and } C \in \mathcal{C}\}$ satisfies $\mathcal{C} \leq \mathcal{B} \,(\cap)\, \mathcal{C}$ and $\mathcal{B} \leq \mathcal{B} \,(\cap)\, \mathcal{C}$. If $\mathcal{B} \,(\cap)\, \mathcal{C}$ is proper (resp. a prefilter, a filter subbase) then this is also true of both $\mathcal{B}$ and $\mathcal{C}$. In order to make any meaningful deductions about $\mathcal{B} \,(\cap)\, \mathcal{C}$ from $\mathcal{B}$ and $\mathcal{C}$, the family $\mathcal{B} \,(\cap)\, \mathcal{C}$ needs to be proper (that is, $\varnothing \notin \mathcal{B} \,(\cap)\, \mathcal{C}$), which is the motivation for the definition of "mesh". In this case, $\mathcal{B} \,(\cap)\, \mathcal{C}$ is a prefilter (resp. filter subbase) if and only if this is true of both $\mathcal{B}$ and $\mathcal{C}$. Said differently, if $\mathcal{B}$ and $\mathcal{C}$ are prefilters then they mesh if and only if $\mathcal{B} \,(\cap)\, \mathcal{C}$ is a prefilter.
Generalizing gives a well known characterization of "mesh" entirely in terms of subordination (that is, $\leq$): two prefilters (resp. filter subbases) $\mathcal{B}$ and $\mathcal{C}$ mesh if and only if there exists a prefilter (resp. filter subbase) $\mathcal{F}$ such that $\mathcal{C} \leq \mathcal{F}$ and $\mathcal{B} \leq \mathcal{F}$.
If the least upper bound of two filters $\mathcal{B}$ and $\mathcal{C}$ exists in $\operatorname{Filters}(X)$ then this least upper bound is equal to $\mathcal{B} \,(\cap)\, \mathcal{C}$.
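For finite families, $\mathcal{B} \,(\cap)\, \mathcal{C}$ and the mesh condition can be computed directly. A minimal sketch (function names are illustrative, not from the text):

```python
def pairwise_meet(B, C):
    """The family B (∩) C = {b ∩ c : b ∈ B, c ∈ C}."""
    return {b & c for b in B for c in C}

def mesh(B, C):
    """B # C holds iff every pairwise intersection is non-empty,
    i.e. the family B (∩) C is proper."""
    return all(pairwise_meet(B, C))

B = [frozenset({1, 2}), frozenset({2, 3})]
C = [frozenset({2, 4})]
assert mesh(B, C)                     # B (∩) C = {{2}} is proper
assert not mesh(B, [frozenset({4})])  # {1, 2} ∩ {4} = ∅
```

When both families are prefilters and the mesh test passes, `pairwise_meet(B, C)` is again a prefilter, as the characterization above states.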
=== Images and preimages under functions ===
Throughout, $f : X \to Y$ and $g : Y \to Z$ will be maps between non-empty sets.
Images of prefilters
Let $\mathcal{B} \subseteq \wp(Y)$. Many of the properties that $\mathcal{B}$ may have are preserved under images of maps; notable exceptions include being upward closed, being closed under finite intersections, and being a filter, which are not necessarily preserved.
Explicitly, if one of the following properties is true of $\mathcal{B}$ on $Y$, then it will necessarily also be true of $g(\mathcal{B})$ on $g(Y)$ (although possibly not on the codomain $Z$ unless $g$ is surjective):
Filter properties: ultra, ultrafilter, filter, prefilter, filter subbase, dual ideal, upward closed, proper/non-degenerate.
Ideal properties: ideal, closed under finite unions, downward closed, directed upward.
Moreover, if $\mathcal{B} \subseteq \wp(Y)$ is a prefilter then so are both $g(\mathcal{B})$ and $g^{-1}(g(\mathcal{B}))$.
The image under a map $f : X \to Y$ of an ultra set $\mathcal{B} \subseteq \wp(X)$ is again ultra, and if $\mathcal{B}$ is an ultra prefilter then so is $f(\mathcal{B})$.
If $\mathcal{B}$ is a filter then $g(\mathcal{B})$ is a filter on the range $g(Y)$, but it is a filter on the codomain $Z$ if and only if $g$ is surjective. Otherwise it is just a prefilter on $Z$ and its upward closure must be taken in $Z$ to obtain a filter.
The upward closure of $g(\mathcal{B})$ in $Z$ is
$g(\mathcal{B})^{\uparrow Z} = \left\{S \subseteq Z : B \subseteq g^{-1}(S) \text{ for some } B \in \mathcal{B}\right\}$
where if $\mathcal{B}$ is upward closed in $Y$ (that is, a filter) then this simplifies to:
$g(\mathcal{B})^{\uparrow Z} = \left\{S \subseteq Z : g^{-1}(S) \in \mathcal{B}\right\}.$
If $X \subseteq Y$ then taking $g$ to be the inclusion map $X \to Y$ shows that any prefilter (resp. ultra prefilter, filter subbase) on $X$ is also a prefilter (resp. ultra prefilter, filter subbase) on $Y$.
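The claim that the prefilter property is preserved under images can be checked concretely on finite families. A sketch under our own helper names (the directedness test below is one standard way to phrase "prefilter" for finite families):

```python
def image(g, B):
    """Image family g(B): apply g elementwise to each member of B."""
    return [frozenset(g(x) for x in b) for b in B]

def is_prefilter(B):
    """Non-empty, proper, and downward directed by inclusion."""
    if not B or any(not b for b in B):
        return False
    return all(any(d <= b1 & b2 for d in B) for b1 in B for b2 in B)

g = lambda n: n % 2                       # parity map
B = [frozenset({1, 3}), frozenset({3})]   # a prefilter of sets of odd numbers
assert is_prefilter(B)
assert is_prefilter(image(g, B))          # the image is again a prefilter
```

By contrast, properties such as being upward closed are destroyed in general, since the image lives on the (possibly smaller) range of `g`.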
Preimages of prefilters
Let $\mathcal{B} \subseteq \wp(Y)$. Under the assumption that $f : X \to Y$ is surjective: $f^{-1}(\mathcal{B})$ is a prefilter (resp. filter subbase, π–system, closed under finite unions, proper) if and only if this is true of $\mathcal{B}$.
However, if $\mathcal{B}$ is an ultrafilter on $Y$ then even if $f$ is surjective (which would make $f^{-1}(\mathcal{B})$ a prefilter), it is nevertheless still possible for the prefilter $f^{-1}(\mathcal{B})$ to be neither ultra nor a filter on $X$ (see this footnote for an example).
If $f : X \to Y$ is not surjective then denote the trace of $\mathcal{B}$ on $f(X)$ by $\mathcal{B}\big\vert_{f(X)}$, where in this particular case the trace satisfies $\mathcal{B}\big\vert_{f(X)} = f\left(f^{-1}(\mathcal{B})\right)$ and consequently also $f^{-1}(\mathcal{B}) = f^{-1}\left(\mathcal{B}\big\vert_{f(X)}\right)$.
This last equality and the fact that the trace $\mathcal{B}\big\vert_{f(X)}$ is a family of sets over $f(X)$ mean that to draw conclusions about $f^{-1}(\mathcal{B})$, the trace $\mathcal{B}\big\vert_{f(X)}$ can be used in place of $\mathcal{B}$ and the surjection $f : X \to f(X)$ can be used in place of $f : X \to Y$. For example: $f^{-1}(\mathcal{B})$ is a prefilter (resp. filter subbase, π–system, proper) if and only if this is true of $\mathcal{B}\big\vert_{f(X)}$. In this way, the case where $f$ is not (necessarily) surjective can be reduced to the case of a surjective function (the case described at the start of this subsection).
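The identity $\mathcal{B}\big\vert_{f(X)} = f\left(f^{-1}(\mathcal{B})\right)$ can be verified mechanically on a finite example. A sketch (names and the particular map are ours, chosen so that $f$ is not surjective):

```python
def preimage(f, domain, B):
    """Preimage family f^{-1}(B) = {f^{-1}(b) : b ∈ B}."""
    return [frozenset(x for x in domain if f(x) in b) for b in B]

def image(f, B):
    """Image family f(B): apply f elementwise to each member of B."""
    return [frozenset(f(x) for x in b) for b in B]

X = {0, 1, 2}
f = lambda n: n + 10                 # not surjective onto Y ⊇ {10, 11, 12, 99}
B = [frozenset({10, 11, 99})]        # a family on Y
fX = frozenset(f(x) for x in X)      # the range f(X) = {10, 11, 12}

# The trace of B on f(X) equals f(f^{-1}(B)):
trace_on_fX = [b & fX for b in B]
assert trace_on_fX == image(f, preimage(f, X, B))
```

This is exactly the reduction described above: conclusions about `preimage(f, X, B)` can be drawn from the trace on the range instead of from `B` itself.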
Even if $\mathcal{B}$ is an ultrafilter on $Y$, if $f$ is not surjective then it is nevertheless possible that $\varnothing \in \mathcal{B}\big\vert_{f(X)}$, which would make $f^{-1}(\mathcal{B})$ degenerate as well. The next characterization shows that degeneracy is the only obstacle. If $\mathcal{B}$ is a prefilter then the following are equivalent:
$f^{-1}(\mathcal{B})$ is a prefilter;
$\mathcal{B}\big\vert_{f(X)}$ is a prefilter;
$\varnothing \notin \mathcal{B}\big\vert_{f(X)}$;
$\mathcal{B}$ meshes with $f(X)$;
and moreover, if $f^{-1}(\mathcal{B})$ is a prefilter then so is $f\left(f^{-1}(\mathcal{B})\right)$.
If $S \subseteq Y$ and $\operatorname{In} : S \to Y$ denotes the inclusion map then the trace of $\mathcal{B}$ on $S$ is equal to $\operatorname{In}^{-1}(\mathcal{B})$. This observation allows the results in this subsection to be applied to investigating the trace on a set.
Bijections, injections, and surjections
All properties involving filters are preserved under bijections. This means that if $\mathcal{B} \subseteq \wp(Y)$ and $g : Y \to Z$ is a bijection, then $\mathcal{B}$ is a prefilter (resp. ultra, ultra prefilter, filter on $Y$, ultrafilter on $Y$, filter subbase, π–system, ideal on $Y$, etc.) if and only if the same is true of $g(\mathcal{B})$ on $Z$.
A map $g : Y \to Z$ is injective if and only if for all prefilters $\mathcal{B}$ on $Y$, $\mathcal{B}$ is equivalent to $g^{-1}(g(\mathcal{B}))$.
The image of an ultra family of sets under an injection is again ultra.
The map $f : X \to Y$ is a surjection if and only if whenever $\mathcal{B}$ is a prefilter on $Y$ then the same is true of $f^{-1}(\mathcal{B})$ on $X$ (this result does not require the ultrafilter lemma).
==== Subordination is preserved by images and preimages ====
The relation $\leq$ is preserved under both images and preimages of families of sets. This means that for any families $\mathcal{C}$ and $\mathcal{F}$,
$\mathcal{C} \leq \mathcal{F}$ implies $g(\mathcal{C}) \leq g(\mathcal{F})$ and $f^{-1}(\mathcal{C}) \leq f^{-1}(\mathcal{F}).$
Moreover, the following relations always hold for any family of sets $\mathcal{C}$:
$\mathcal{C} \leq f\left(f^{-1}(\mathcal{C})\right)$
where equality will hold if $f$ is surjective. Furthermore,
$f^{-1}(\mathcal{C}) = f^{-1}\left(f\left(f^{-1}(\mathcal{C})\right)\right)$ and $g(\mathcal{C}) = g\left(g^{-1}(g(\mathcal{C}))\right).$
If $\mathcal{B} \subseteq \wp(X)$ and $\mathcal{C} \subseteq \wp(Y)$ then
$f(\mathcal{B}) \leq \mathcal{C}$ if and only if $\mathcal{B} \leq f^{-1}(\mathcal{C})$
and $g^{-1}(g(\mathcal{C})) \leq \mathcal{C}$, where equality will hold if $g$ is injective.
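The relation $\mathcal{C} \leq f\left(f^{-1}(\mathcal{C})\right)$ can be observed on a small non-injective example. A sketch (helper names ours):

```python
def is_coarser(C, F):
    """C ≤ F: every set in C contains some set of F."""
    return all(any(f_ <= c for f_ in F) for c in C)

def image(f, B):
    return [frozenset(f(x) for x in b) for b in B]

def preimage(f, domain, B):
    return [frozenset(x for x in domain if f(x) in b) for b in B]

X = {0, 1, 2, 3}
f = lambda n: n % 2            # surjective onto {0, 1} but not injective
C = [frozenset({0, 1, 5})]     # a family on the codomain

# C ≤ f(f^{-1}(C)) always holds; here f(f^{-1}(C)) = {{0, 1}} ⊆-refines C
assert is_coarser(C, image(f, preimage(f, X, C)))
```

Note that the point `5` is lost on the way through the preimage, which is why only the inequality, not equality, holds in general.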
=== Products of prefilters ===
Suppose $X_\bullet = \left(X_i\right)_{i \in I}$ is a family of one or more non-empty sets, whose product will be denoted by $\prod X_\bullet := \prod_{i \in I} X_i$, and for every index $i \in I$, let $\Pr_{X_i} : \prod X_\bullet \to X_i$ denote the canonical projection. Let $\mathcal{B}_\bullet := \left(\mathcal{B}_i\right)_{i \in I}$ be non-empty families, also indexed by $I$, such that $\mathcal{B}_i \subseteq \wp\left(X_i\right)$ for each $i \in I$.
The product of the families $\mathcal{B}_\bullet$ is defined identically to how the basic open subsets of the product topology are defined (had all of these $\mathcal{B}_i$ been topologies). That is, the notation $\prod \mathcal{B}_\bullet = \prod_{i \in I} \mathcal{B}_i$ denotes the family of all cylinder subsets $\prod_{i \in I} S_i \subseteq \prod X_\bullet$ such that $S_i = X_i$ for all but finitely many $i \in I$ and where $S_i \in \mathcal{B}_i$ for any of these finitely many exceptions (that is, for any $i$ such that $S_i \neq X_i$, necessarily $S_i \in \mathcal{B}_i$).
When every $\mathcal{B}_{i}$ is a filter subbase then the family $\bigcup_{i \in I} \Pr_{X_{i}}^{-1}\left(\mathcal{B}_{i}\right)$ is a filter subbase for the filter on $\prod X_{\bullet}$ generated by $\mathcal{B}_{\bullet}.$ If $\prod \mathcal{B}_{\bullet}$ is a filter subbase then the filter on $\prod X_{\bullet}$ that it generates is called the filter generated by $\mathcal{B}_{\bullet}.$ If every $\mathcal{B}_{i}$ is a prefilter on $X_{i}$ then $\prod \mathcal{B}_{\bullet}$ will be a prefilter on $\prod X_{\bullet}$ and moreover, this prefilter is equal to the coarsest prefilter $\mathcal{F}$ on $\prod X_{\bullet}$ such that $\Pr_{X_{i}}(\mathcal{F}) = \mathcal{B}_{i}$ for every $i \in I.$ However, $\prod \mathcal{B}_{\bullet}$ may fail to be a filter on $\prod X_{\bullet}$ even if every $\mathcal{B}_{i}$ is a filter on $X_{i}.$
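As a concrete finite illustration (a Python sketch, not part of the article), the product of two principal filters is closed under intersections but fails to be upward closed, so it is a prefilter that is not a filter:

```python
from itertools import combinations, product

def powerset(ground):
    pts = sorted(ground)
    return [frozenset(c) for r in range(len(pts) + 1) for c in combinations(pts, r)]

def is_filter(family, ground):
    """Proper filter on `ground`: nonempty, without the empty set, closed
    under binary intersections, and upward closed within `ground`."""
    if not family or frozenset() in family:
        return False
    for A in family:
        if any(A & B not in family for B in family):
            return False
        if any(A <= S and S not in family for S in powerset(ground)):
            return False
    return True

# Principal filters generated by {1} on X1 and by "a" on X2.
X1, X2 = {1, 2}, {"a", "b"}
F1 = {s for s in powerset(X1) if 1 in s}
F2 = {s for s in powerset(X2) if "a" in s}

# The product prefilter: all "rectangles" A x B with A in F1 and B in F2.
prod_prefilter = {frozenset(product(A, B)) for A in F1 for B in F2}
ground = frozenset(product(X1, X2))

assert is_filter(F1, X1) and is_filter(F2, X2)
# Not upward closed: the non-rectangle {(1,"a"), (2,"b")} contains the
# rectangle {1} x {"a"} yet is missing from the product family.
assert frozenset({(1, "a"), (2, "b")}) not in prod_prefilter
assert not is_filter(prod_prefilter, ground)
```

Here the finite index set makes the "all but finitely many" condition vacuous, so cylinders reduce to plain rectangles.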
=== Set subtraction and some examples ===
Set subtracting away a subset of the kernel

If $\mathcal{B}$ is a prefilter on $X,$ $S \subseteq \ker \mathcal{B},$ and $S \notin \mathcal{B}$ then $\{B \setminus S : B \in \mathcal{B}\}$ is a prefilter, where this latter set is a filter if and only if $\mathcal{B}$ is a filter and $S = \varnothing.$ In particular, if $\mathcal{B}$ is a neighborhood basis at a point $x$ in a topological space $X$ having at least 2 points, then $\{B \setminus \{x\} : B \in \mathcal{B}\}$ is a prefilter on $X.$ This construction is used to define $\lim_{\stackrel{x \to x_{0}}{x \neq x_{0}}} f(x) \to y$ in terms of prefilter convergence.
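The kernel-subtraction claim can be checked on a small example; the following Python sketch (illustrative, not from the article) verifies that subtracting a nonempty $S \subseteq \ker \mathcal{B}$ with $S \notin \mathcal{B}$ from a filter yields a prefilter that is no longer a filter:

```python
from itertools import combinations

def subsets(X):
    pts = sorted(X)
    return [frozenset(c) for r in range(len(pts) + 1) for c in combinations(pts, r)]

def is_prefilter(family):
    """Nonempty family of nonempty sets that is downward directed."""
    if not family or frozenset() in family:
        return False
    return all(any(C <= A & B for C in family) for A in family for B in family)

def is_filter(family, X):
    """A prefilter that is also upward closed within X is a filter."""
    return is_prefilter(family) and all(
        T in family for A in family for T in subsets(X) if A <= T)

X = {1, 2, 3}
B = {frozenset({1, 2, 3}), frozenset({1, 2})}   # the principal filter of {1, 2}
kernel = frozenset.intersection(*B)              # ker B = {1, 2}
S = frozenset({1})                                # S ⊆ ker B and S ∉ B

B_minus_S = {A - S for A in B}                    # {{2, 3}, {2}}

assert is_filter(B, X) and S <= kernel and S not in B
assert is_prefilter(B_minus_S)       # still a prefilter ...
assert not is_filter(B_minus_S, X)   # ... but no longer a filter, since S ≠ ∅
```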
Using duality between ideals and dual ideals

There is a dual relation $\mathcal{B} \vartriangleleft \mathcal{C}$ or $\mathcal{C} \vartriangleright \mathcal{B},$ which is defined to mean that every $B \in \mathcal{B}$ is contained in some $C \in \mathcal{C}.$ Explicitly, this means that for every $B \in \mathcal{B},$ there is some $C \in \mathcal{C}$ such that $B \subseteq C.$ This relation is dual to $\leq$ in the sense that $\mathcal{B} \vartriangleleft \mathcal{C}$ if and only if $(X \setminus \mathcal{B}) \leq (X \setminus \mathcal{C}).$ The relation $\mathcal{B} \vartriangleleft \mathcal{C}$ is closely related to the downward closure of a family in a manner similar to how $\leq$ is related to the upward closure of a family.
For an example that uses this duality, suppose $f : X \to Y$ is a map and $\Xi \subseteq \wp(Y).$ Define
$$\Xi_{f} := \{I \subseteq X : f(I) \in \Xi\},$$
which contains the empty set if and only if $\Xi$ does. It is possible for $\Xi$ to be an ultrafilter and for $\Xi_{f}$ to be empty or not closed under finite intersections (see footnote for example). Although $\Xi_{f}$ does not preserve properties of filters very well, if $\Xi$ is downward closed (resp. closed under finite unions, an ideal) then this will also be true for $\Xi_{f}.$ Using the duality between ideals and dual ideals allows for a construction of the following filter. Suppose $\mathcal{B}$ is a filter on $Y$ and let $\Xi := Y \setminus \mathcal{B}$ be its dual in $Y.$ If $X \notin \Xi_{f}$ then $\Xi_{f}$'s dual $X \setminus \Xi_{f}$ will be a filter.
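The behavior of $\Xi_{f}$ can be probed on tiny sets; this Python sketch (an illustration under assumed toy data, not from the article) exhibits an ultrafilter whose pullback family is empty, and a downward-closed family whose pullback stays downward closed:

```python
from itertools import combinations

def subsets(S):
    pts = sorted(S)
    return [frozenset(c) for r in range(len(pts) + 1) for c in combinations(pts, r)]

def xi_f(Xi, X, f):
    """Xi_f := {I ⊆ X : f(I) ∈ Xi}, the family pulled back along f."""
    return {I for I in subsets(X) if frozenset(map(f, I)) in Xi}

# An ultrafilter Xi on Y = {0, 1} (the principal ultrafilter at 0) whose
# pullback family is empty: no image of a subset of X = {5} lands in Xi.
Xi_ultra = {frozenset({0}), frozenset({0, 1})}
assert xi_f(Xi_ultra, X={5}, f=lambda x: 1) == set()

# Downward-closedness IS preserved: Xi below is downward closed, and so is Xi_f.
Xi_down = {frozenset(), frozenset({0})}
pullback = xi_f(Xi_down, X={1, 2}, f=lambda x: 0 if x == 1 else 1)
assert pullback == {frozenset(), frozenset({1})}
```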
Other examples

Example: The set $\mathcal{B}$ of all dense open subsets of a topological space is a proper π–system and a prefilter. If the space is a Baire space, then the set of all countable intersections of dense open subsets is a π–system and a prefilter that is finer than $\mathcal{B}.$

Example: The family $\mathcal{B}_{\operatorname{Open}}$ of all dense open sets of $X = \mathbb{R}$ having finite Lebesgue measure is a proper π–system and a free prefilter. The prefilter $\mathcal{B}_{\operatorname{Open}}$ is properly contained in, and not equivalent to, the prefilter consisting of all dense open subsets of $\mathbb{R}.$ Since $X$ is a Baire space, every countable intersection of sets in $\mathcal{B}_{\operatorname{Open}}$ is dense in $X$ (and also comeagre and non–meager), so the set of all countable intersections of elements of $\mathcal{B}_{\operatorname{Open}}$ is a prefilter and π–system; it is also finer than, and not equivalent to, $\mathcal{B}_{\operatorname{Open}}.$
== Filters and nets ==
This section describes the relationships between prefilters and nets in detail because these details are important for applying filters to topology — in particular, when switching from using nets to using filters and vice versa — and because they make it easier to understand later why subnets (with their most commonly used definitions) are not generally equivalent to "sub–prefilters".
=== Nets to prefilters ===
A net $x_{\bullet} = \left(x_{i}\right)_{i \in I}$ in $X$ is canonically associated with its prefilter of tails $\operatorname{Tails}\left(x_{\bullet}\right).$ If $f : X \to Y$ is a map and $x_{\bullet}$ is a net in $X$ then $\operatorname{Tails}\left(f\left(x_{\bullet}\right)\right) = f\left(\operatorname{Tails}\left(x_{\bullet}\right)\right).$
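The identity $\operatorname{Tails}(f(x_{\bullet})) = f(\operatorname{Tails}(x_{\bullet}))$ can be checked directly for a finite net; the following Python sketch (illustrative only, using a finite index set in place of a general directed set) does so:

```python
def tails(seq):
    """Prefilter of tails of a net indexed by 0..len(seq)-1; a finite
    totally ordered index set stands in for a general directed set."""
    return {frozenset(seq[i:]) for i in range(len(seq))}

x = [3, 1, 4, 1, 5, 9]          # a net in X = the integers
f = lambda n: n % 2             # a map X -> Y = {0, 1}

# Taking tails commutes with applying f pointwise:
lhs = tails([f(n) for n in x])                       # Tails(f(x_•))
rhs = {frozenset(f(n) for n in T) for T in tails(x)} # f(Tails(x_•))
assert lhs == rhs
```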
=== Prefilters to nets ===
A pointed set is a pair $(S, s)$ consisting of a non–empty set $S$ and an element $s \in S.$ For any family $\mathcal{B},$ let
$$\operatorname{PointedSets}(\mathcal{B}) := \left\{(B, b) : B \in \mathcal{B} \text{ and } b \in B\right\}.$$
Define a canonical preorder $\leq$ on pointed sets by declaring
$$(R, r) \leq (S, s) \quad \text{if and only if} \quad R \supseteq S.$$
If $s_{0}, s_{1} \in S$ then $\left(S, s_{0}\right) \leq \left(S, s_{1}\right)$ and $\left(S, s_{1}\right) \leq \left(S, s_{0}\right)$ even if $s_{0} \neq s_{1},$ so this preorder is not antisymmetric and, given any family of sets $\mathcal{B},$ $(\operatorname{PointedSets}(\mathcal{B}), \leq)$ is partially ordered if and only if $\mathcal{B} \neq \varnothing$ consists entirely of singleton sets.
If $\{x\} \in \mathcal{B}$ then $(\{x\}, x)$ is a maximal element of $\operatorname{PointedSets}(\mathcal{B})$; moreover, all maximal elements are of this form. If $\left(B, b_{0}\right) \in \operatorname{PointedSets}(\mathcal{B})$ then $\left(B, b_{0}\right)$ is a greatest element if and only if $B = \ker \mathcal{B},$ in which case $\{(B, b) : b \in B\}$ is the set of all greatest elements. However, a greatest element $(B, b)$ is a maximal element if and only if $B = \{b\} = \ker \mathcal{B},$ so there is at most one element that is both maximal and greatest.
There is a canonical map $\operatorname{Point}_{\mathcal{B}} : \operatorname{PointedSets}(\mathcal{B}) \to X$ defined by $(B, b) \mapsto b.$ Although $(\operatorname{PointedSets}(\mathcal{B}), \leq)$ is not, in general, a partially ordered set, it is a directed set if (and only if) $\mathcal{B}$ is a prefilter. So the most immediate choice for the definition of "the net in $X$ induced by a prefilter $\mathcal{B}$" is the assignment $(B, b) \mapsto b$ from $\operatorname{PointedSets}(\mathcal{B})$ into $X.$
If $\mathcal{B}$ is a prefilter on $X$ then $\operatorname{Net}_{\mathcal{B}}$ is a net in $X$ and the prefilter associated with $\operatorname{Net}_{\mathcal{B}}$ is $\mathcal{B}$; that is:
$$\operatorname{Tails}\left(\operatorname{Net}_{\mathcal{B}}\right) = \mathcal{B}.$$
This would not necessarily be true had $\operatorname{Net}_{\mathcal{B}}$ been defined on a proper subset of $\operatorname{PointedSets}(\mathcal{B}).$
For example, suppose $X$ has at least two distinct elements, $\mathcal{B} := \{X\}$ is the indiscrete filter, and $x \in X$ is arbitrary. Had $\operatorname{Net}_{\mathcal{B}}$ instead been defined on the singleton set $D := \{(X, x)\},$ where the restriction of $\operatorname{Net}_{\mathcal{B}}$ to $D$ will temporarily be denoted by $\operatorname{Net}_{D} : D \to X,$ then the prefilter of tails associated with $\operatorname{Net}_{D} : D \to X$ would be the principal prefilter $\{\,\{x\}\,\}$ rather than the original filter $\mathcal{B} = \{X\}$; this means that the equality $\operatorname{Tails}\left(\operatorname{Net}_{D}\right) = \mathcal{B}$ is false, so unlike $\operatorname{Net}_{\mathcal{B}},$ the prefilter $\mathcal{B}$ cannot be recovered from $\operatorname{Net}_{D}.$ Worse still, while $\mathcal{B}$ is the unique minimal filter on $X,$ the prefilter $\operatorname{Tails}\left(\operatorname{Net}_{D}\right) = \{\{x\}\}$ instead generates a maximal filter (that is, an ultrafilter) on $X.$
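On a finite prefilter, the identity $\operatorname{Tails}(\operatorname{Net}_{\mathcal{B}}) = \mathcal{B}$ can be checked directly; this Python sketch (an illustration on a toy prefilter, not part of the article) encodes $\operatorname{PointedSets}(\mathcal{B})$ with its canonical preorder:

```python
# A prefilter on X = {1, 2, 3} (here, a chain under reverse inclusion).
B = {frozenset({1, 2, 3}), frozenset({1, 2}), frozenset({1})}

pointed = {(S, s) for S in B for s in S}   # PointedSets(B)
le = lambda p, q: q[0] <= p[0]              # (R, r) ≤ (S, s)  iff  R ⊇ S
net = lambda p: p[1]                        # Net_B: (B, b) ↦ b

# PointedSets(B) is directed: any two points have a common upper bound.
assert all(any(le(p, r) and le(q, r) for r in pointed)
           for p in pointed for q in pointed)

# Tails(Net_B) = B: the tail of the net starting at (B0, b0) is exactly B0.
tails_of_net = {frozenset(net(q) for q in pointed if le(p, q)) for p in pointed}
assert tails_of_net == B
```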
However, if $x_{\bullet} = \left(x_{i}\right)_{i \in I}$ is a net in $X$ then it is not in general true that $\operatorname{Net}_{\operatorname{Tails}\left(x_{\bullet}\right)}$ is equal to $x_{\bullet}$ because, for example, the domain of $x_{\bullet}$ may be of a completely different cardinality than that of $\operatorname{Net}_{\operatorname{Tails}\left(x_{\bullet}\right)}$ (since unlike the domain of $\operatorname{Net}_{\operatorname{Tails}\left(x_{\bullet}\right)},$ the domain of an arbitrary net in $X$ could have any cardinality).
Ultranets and ultra prefilters

A net $x_{\bullet}$ in $X$ is called an ultranet or universal net in $X$ if for every subset $S \subseteq X,$ $x_{\bullet}$ is eventually in $S$ or it is eventually in $X \setminus S$; this happens if and only if $\operatorname{Tails}\left(x_{\bullet}\right)$ is an ultra prefilter. A prefilter $\mathcal{B}$ on $X$ is an ultra prefilter if and only if $\operatorname{Net}_{\mathcal{B}}$ is an ultranet in $X.$
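The "ultra" condition is decidable on finite ground sets; a minimal Python sketch (illustrative, using the characterization that a family is ultra when every subset or its complement contains a member):

```python
from itertools import combinations

def subsets(S):
    pts = sorted(S)
    return [frozenset(c) for r in range(len(pts) + 1) for c in combinations(pts, r)]

def is_ultra(B, X):
    """Ultra on X: for every S ⊆ X, some member of B is contained
    in S or in its complement X \\ S."""
    X = frozenset(X)
    return all(any(A <= S or A <= X - S for A in B) for S in subsets(X))

X = {1, 2, 3}
assert is_ultra({frozenset({1})}, X)           # the principal ultra prefilter at 1
assert not is_ultra({frozenset({1, 2})}, X)    # S = {1} separates {1, 2}
```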
==== Partially ordered net ====
The domain of the canonical net $\operatorname{Net}_{\mathcal{B}}$ is in general not partially ordered. However, in 1955 Bruns and Schmidt discovered a construction that allows for the canonical net to have a domain that is both partially ordered and directed; this was independently rediscovered by Albert Wilansky in 1970.
It begins with the construction of a strict partial order (meaning a transitive and irreflexive relation) $<$ on a subset of $\mathcal{B} \times \mathbb{N} \times X$ that is similar to the lexicographical order on $\mathcal{B} \times \mathbb{N}$ of the strict partial orders $(\mathcal{B}, \supsetneq)$ and $(\mathbb{N}, <).$ For any $i = (B, m, b)$ and $j = (C, n, c)$ in $\mathcal{B} \times \mathbb{N} \times X,$ declare that $i < j$ if and only if $B \supseteq C$ and either: (1) $B \neq C$ or else (2) $B = C$ and $m < n$; or equivalently, if and only if (1) $B \supseteq C,$ and (2) if $B = C$ then $m < n.$
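That this relation is a strict partial order can be sanity-checked by brute force on a small sample; a Python sketch (illustrative only, over a two-element chain of sets and a few indices):

```python
from itertools import product

def lt(i, j):
    """The strict order on B × N × X: i < j iff B ⊇ C and, if B = C, m < n."""
    (B, m, _), (C, n, _) = i, j
    return C <= B and (B != C or m < n)

# Sample triples over the chain B1 ⊋ B2, indices 0..2, and values 1..2.
B1, B2 = frozenset({1, 2}), frozenset({1})
pts = list(product([B1, B2], [0, 1, 2], [1, 2]))

assert all(not lt(i, i) for i in pts)                          # irreflexive
assert all(lt(i, k) for i in pts for j in pts for k in pts     # transitive
           if lt(i, j) and lt(j, k))
```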
The non–strict partial order associated with $<,$ denoted by $\leq,$ is defined by declaring that $i \leq j$ if and only if $i < j$ or $i = j.$ Unwinding these definitions gives the following characterization: $i \leq j$ if and only if (1) $B \supseteq C,$ (2) if $B = C$ then $m \leq n,$ and also (3) if $B = C$ and $m = n$ then $b = c$; which shows that $\leq$ is just the lexicographical order on $\mathcal{B} \times \mathbb{N} \times X$ induced by $(\mathcal{B}, \supseteq),$ $(\mathbb{N}, \leq),$ and $(X, =),$ where $X$ is partially ordered by equality $=.$
Both $<$ and $\leq$ are serial and neither possesses a greatest element or a maximal element; this remains true if they are each restricted to the subset of $\mathcal{B} \times \mathbb{N} \times X$ defined by
$$\operatorname{Poset}_{\mathcal{B}} := \{(B, m, b) \in \mathcal{B} \times \mathbb{N} \times X : b \in B\},$$
where it will henceforth be assumed that they are.
Denote the assignment $i = (B, m, b) \mapsto b$ from this subset by:
$$\operatorname{PosetNet}_{\mathcal{B}} : \operatorname{Poset}_{\mathcal{B}} \to X, \qquad (B, m, b) \mapsto b.$$
If $i_{0} = \left(B_{0}, m_{0}, b_{0}\right) \in \operatorname{Poset}_{\mathcal{B}}$ then, just as with $\operatorname{Net}_{\mathcal{B}}$ before, the tail of $\operatorname{PosetNet}_{\mathcal{B}}$ starting at $i_{0}$ is equal to $B_{0}.$ If $\mathcal{B}$ is a prefilter on $X$ then $\operatorname{PosetNet}_{\mathcal{B}}$ is a net in $X$ whose domain $\operatorname{Poset}_{\mathcal{B}}$ is a partially ordered set and moreover, $\operatorname{Tails}\left(\operatorname{PosetNet}_{\mathcal{B}}\right) = \mathcal{B}.$ Because the tails of $\operatorname{PosetNet}_{\mathcal{B}}$ and $\operatorname{Net}_{\mathcal{B}}$ are identical (since both are equal to the prefilter $\mathcal{B}$), there is typically nothing lost by assuming that the domain of the net associated with a prefilter is both directed and partially ordered. If the set $\mathbb{N}$ is replaced with the positive rational numbers then the strict partial order $<$ will also be a dense order.
=== Subordinate filters and subnets ===
The notion of "$\mathcal{B}$ is subordinate to $\mathcal{C}$" (written $\mathcal{B} \vdash \mathcal{C}$) is for filters and prefilters what "$x_{n_{\bullet}} = \left(x_{n_{i}}\right)_{i=1}^{\infty}$ is a subsequence of $x_{\bullet} = \left(x_{i}\right)_{i=1}^{\infty}$" is for sequences.
For example, if $\operatorname{Tails}\left(x_{\bullet}\right) = \left\{x_{\geq i} : i \in \mathbb{N}\right\}$ denotes the set of tails of $x_{\bullet}$ and if $\operatorname{Tails}\left(x_{n_{\bullet}}\right) = \left\{x_{n_{\geq i}} : i \in \mathbb{N}\right\}$ denotes the set of tails of the subsequence $x_{n_{\bullet}}$ (where $x_{n_{\geq i}} := \left\{x_{n_{j}} : j \geq i,\ j \in \mathbb{N}\right\}$) then $\operatorname{Tails}\left(x_{n_{\bullet}}\right) \vdash \operatorname{Tails}\left(x_{\bullet}\right)$ (that is, $\operatorname{Tails}\left(x_{\bullet}\right) \leq \operatorname{Tails}\left(x_{n_{\bullet}}\right)$) is true, but $\operatorname{Tails}\left(x_{\bullet}\right) \vdash \operatorname{Tails}\left(x_{n_{\bullet}}\right)$ is in general false.
==== Non–equivalence of subnets and subordinate filters ====
A subset $R \subseteq I$ of a preordered space $(I, \leq)$ is frequent or cofinal in $I$ if for every $i \in I$ there exists some $r \in R$ such that $i \leq r.$ If $R \subseteq I$ contains a tail of $I$ then $R$ is said to be eventual or eventually in $I$; explicitly, this means that there exists some $i \in I$ such that $I_{\geq i} \subseteq R$ (that is, $j \in R$ for all $j \in I$ satisfying $i \leq j$). An eventual set is necessarily not empty. A subset is eventual if and only if its complement is not frequent (which is termed infrequent).
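These definitions, and the eventual/frequent complement duality, can be checked exhaustively on a small preordered set; a Python sketch (illustrative, using a hypothetical four-element preorder made of two incomparable chains):

```python
from itertools import combinations

# ge[i] = {j : j ≥ i}, the tail I_{≥i}, for two incomparable chains
# a1 ≤ a2 and b1 ≤ b2.
ge = {"a1": {"a1", "a2"}, "a2": {"a2"},
      "b1": {"b1", "b2"}, "b2": {"b2"}}
I = set(ge)

def is_cofinal(R):
    return all(R & ge[i] for i in I)    # some r ≥ i lies in R, for every i

def is_eventual(R):
    return any(ge[i] <= R for i in I)   # R contains a whole tail

assert is_cofinal({"a2", "b2"}) and is_eventual({"a2", "b2"})
assert is_eventual({"a2"}) and not is_cofinal({"a2"})

# Duality: R is eventual iff its complement is not frequent (cofinal).
for r in range(len(I) + 1):
    for R in map(set, combinations(sorted(I), r)):
        assert is_eventual(R) == (not is_cofinal(I - R))
```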
A map $h : A \to I$ between two preordered sets is order–preserving if whenever $a, b \in A$ satisfy $a \leq b,$ then $h(a) \leq h(b).$
Subnets in the sense of Willard and subnets in the sense of Kelley are the most commonly used definitions of "subnet."
The first definition of a subnet was introduced by John L. Kelley in 1955.
Stephen Willard introduced his own variant of Kelley's definition of subnet in 1970.
AA–subnets were introduced independently by Smiley (1957), Aarnes and Andenaes (1972), and Murdeshwar (1983); AA–subnets were studied in great detail by Aarnes and Andenaes but they are not often used.
Kelley did not require the map $h$ to be order preserving, while the definition of an AA–subnet does away entirely with any map between the two nets' domains and instead focuses entirely on the nets' common codomain $X.$
Every Willard–subnet is a Kelley–subnet and both are AA–subnets.
In particular, if $y_{\bullet} = \left(y_{a}\right)_{a \in A}$ is a Willard–subnet or a Kelley–subnet of $x_{\bullet} = \left(x_{i}\right)_{i \in I}$ then $\operatorname{Tails}\left(x_{\bullet}\right) \leq \operatorname{Tails}\left(y_{\bullet}\right).$
Example: Let $I = \mathbb{N}$ and let $x_{\bullet}$ be a constant sequence, say $x_{\bullet} = (0)_{i \in \mathbb{N}}.$ Let $s_{1} = 0$ and $A = \{1\},$ so that $s_{\bullet} = \left(s_{a}\right)_{a \in A} = \left(s_{1}\right)$ is a net on $A.$ Then $s_{\bullet}$ is an AA–subnet of $x_{\bullet}$ because $\operatorname{Tails}\left(x_{\bullet}\right) = \{\{0\}\} = \operatorname{Tails}\left(s_{\bullet}\right).$ But $s_{\bullet}$ is not a Willard–subnet of $x_{\bullet}$ because there does not exist any map $h : A \to I$ whose image is a cofinal subset of $I = \mathbb{N}.$ Nor is $s_{\bullet}$ a Kelley–subnet of $x_{\bullet}$ because if $h : A \to I$ is any map then $E := I \setminus \{h(1)\}$ is a cofinal subset of $I = \mathbb{N}$ but $h^{-1}(E) = \varnothing$ is not eventually in $A.$
AA–subnets have a defining characterization that immediately shows that they are fully interchangeable with sub(ordinate)filters.
Explicitly, what is meant is that the following statement is true for AA–subnets:
If $\mathcal{B}$ and $\mathcal{F}$ are prefilters then $\mathcal{B} \leq \mathcal{F}$ if and only if $\operatorname{Net}_{\mathcal{F}}$ is an AA–subnet of $\operatorname{Net}_{\mathcal{B}}.$
If "AA–subnet" is replaced by "Willard–subnet" or "Kelley–subnet" then the above statement becomes false. In particular, the problem is that the following statement is in general false:
False statement: If $\mathcal{B}$ and $\mathcal{F}$ are prefilters such that $\mathcal{B} \leq \mathcal{F}$ then $\operatorname{Net}_{\mathcal{F}}$ is a Kelley–subnet of $\operatorname{Net}_{\mathcal{B}}.$
Since every Willard–subnet is a Kelley–subnet, this statement remains false if the word "Kelley–subnet" is replaced with "Willard–subnet".
Counterexample: For all $n \in \mathbb{N},$ let $B_{n} = \{1\} \cup \mathbb{N}_{\geq n}.$ Let $\mathcal{B} = \{B_{n} : n \in \mathbb{N}\},$ which is a proper π–system, and let $\mathcal{F} = \{\{1\}\} \cup \mathcal{B},$ where both families are prefilters on the natural numbers $X := \mathbb{N} = \{1, 2, \ldots\}.$ Because $\mathcal{B} \leq \mathcal{F},$ $\mathcal{F}$ is to $\mathcal{B}$ as a subsequence is to a sequence.
So ideally, $S = \operatorname{Net}_{\mathcal{F}}$ should be a subnet of $B = \operatorname{Net}_{\mathcal{B}}.$ Let $I := \operatorname{PointedSets}(\mathcal{B})$ be the domain of $\operatorname{Net}_{\mathcal{B}},$ so $I$ contains a cofinal subset that is order isomorphic to $\mathbb{N}$ and consequently contains neither a maximal nor greatest element.
Let $A := \operatorname{PointedSets}(\mathcal{F}) = \{M\} \cup I,$ where $M := (\{1\}, 1)$ is both a maximal and greatest element of $A.$ The directed set $A$ also contains a subset that is order isomorphic to $\mathbb{N}$ (because it contains $I,$ which contains such a subset), but no such subset can be cofinal in $A$ because of the maximal element $M.$
Consequently, any order–preserving map $h : A \to I$ must be eventually constant (with value $h(M)$), where $h(M)$ is then a greatest element of the range $h(A).$ Because of this, there can be no order–preserving map $h : A \to I$ that satisfies the conditions required for $\operatorname{Net}_{\mathcal{F}}$ to be a Willard–subnet of $\operatorname{Net}_{\mathcal{B}}$ (because the range of such a map $h$ cannot be cofinal in $I$).
Suppose for the sake of contradiction that there exists a map $h : A \to I$ such that $h^{-1}\left(I_{\geq i}\right)$ is eventually in $A$ for all $i \in I.$ Because $h(M) \in I,$ there exist $n, n_{0} \in \mathbb{N}$ such that $h(M) = \left(B_{n}, n_{0}\right)$ with $n_{0} \in B_{n}.$ For every $i \in I,$ because $h^{-1}\left(I_{\geq i}\right)$ is eventually in $A,$ it is necessary that $h(M) \in I_{\geq i}.$ In particular, if $i := \left(B_{n+2}, n+2\right)$ then $h(M) \geq i = \left(B_{n+2}, n+2\right),$ which by definition is equivalent to $B_{n} \subseteq B_{n+2},$ which is false. Consequently, $\operatorname{Net}_{\mathcal{F}}$ is not a Kelley–subnet of $\operatorname{Net}_{\mathcal{B}}.$
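The concrete contradiction at the end of the counterexample ($B_{n} \subseteq B_{n+2}$ is false) is easy to verify numerically; a Python sketch (illustrative, with a finite window standing in for $\mathbb{N}$):

```python
N = range(1, 30)                     # a finite window of ℕ = {1, 2, ...}

def B(n):
    """B_n = {1} ∪ ℕ_{≥n}, truncated to the window above."""
    return frozenset({1}) | frozenset(k for k in N if k >= n)

# B_n ⊆ B_{n+2} fails for every n ≥ 2: the witness n+1 lies in B_n
# but not in B_{n+2}.
for n in range(2, 20):
    assert not B(n) <= B(n + 2)
    assert n + 1 in B(n) and n + 1 not in B(n + 2)

# Meanwhile B ≤ F does hold, since {1} ∈ F is contained in every B_n.
assert all(frozenset({1}) <= B(n) for n in range(1, 20))
```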
If "subnet" is defined to mean Willard–subnet or Kelley–subnet, then nets and filters are not completely interchangeable, because there exist filter–sub(ordinate)filter relationships that cannot be expressed in terms of a net–subnet relationship between the two induced nets. In particular, the problem is that Kelley–subnets and Willard–subnets are not fully interchangeable with subordinate filters. If the notion of "subnet" is not used, or if "subnet" is defined to mean AA–subnet, then this ceases to be a problem and it becomes correct to say that nets and filters are interchangeable. Although AA–subnets do not have the problem that Willard– and Kelley–subnets have, they are not widely used or known about.
== See also ==
Characterizations of the category of topological spaces
Convergence space – Generalization of the notion of convergence that is found in general topology
Filter (mathematics) – In mathematics, a special subset of a partially ordered set
Filter quantifier
Filters in topology – Use of filters to describe and characterize all basic topological notions and results.
Filtration (mathematics) – Indexed set in mathematics
Filtration (probability theory) – Model of information available at a given point of a random process
Filtration (abstract algebra)
Fréchet filter
Generic filter – in set theory, given a collection of dense open subsets of a poset, a filter that meets all sets in that collection
Ideal (set theory) – Non-empty family of sets that is closed under finite unions and subsets
Stone–Čech compactification#Construction using ultrafilters – Concept in topology
The fundamental theorem of ultraproducts – Mathematical construction
Ultrafilter – Maximal proper filter
Ultrafilter (set theory) – Maximal proper filter
== Notes ==
Proofs
== Citations ==
== References ==
Adams, Colin; Franzosa, Robert (2009). Introduction to Topology: Pure and Applied. New Delhi: Pearson Education. ISBN 978-81-317-2692-1. OCLC 789880519.
Arkhangel'skii, Alexander Vladimirovich; Ponomarev, V.I. (1984). Fundamentals of General Topology: Problems and Exercises. Mathematics and Its Applications. Vol. 13. Dordrecht Boston: D. Reidel. ISBN 978-90-277-1355-1. OCLC 9944489.
Berberian, Sterling K. (1974). Lectures in Functional Analysis and Operator Theory. Graduate Texts in Mathematics. Vol. 15. New York: Springer. ISBN 978-0-387-90081-0. OCLC 878109401.
Bourbaki, Nicolas (1989) [1966]. General Topology: Chapters 1–4 [Topologie Générale]. Éléments de mathématique. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64241-1. OCLC 18588129.
Bourbaki, Nicolas (1989) [1967]. General Topology 2: Chapters 5–10 [Topologie Générale]. Éléments de mathématique. Vol. 4. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64563-4. OCLC 246032063.
Bourbaki, Nicolas (1987) [1981]. Topological Vector Spaces: Chapters 1–5. Éléments de mathématique. Translated by Eggleston, H.G.; Madan, S. Berlin New York: Springer-Verlag. ISBN 3-540-13627-4. OCLC 17499190.
Burris, Stanley; Sankappanavar, Hanamantagouda P. (2012). A Course in Universal Algebra (PDF). Springer-Verlag. ISBN 978-0-9880552-0-9. Archived from the original on 1 April 2022.
Cartan, Henri (1937a). "Théorie des filtres". Comptes rendus hebdomadaires des séances de l'Académie des sciences. 205: 595–598.
Cartan, Henri (1937b). "Filtres et ultrafiltres". Comptes rendus hebdomadaires des séances de l'Académie des sciences. 205: 777–779.
In mathematical logic, the theory of infinite sets was first developed by Georg Cantor. Although this work has become a thoroughly standard fixture of classical set theory, it has been criticized in several areas by mathematicians and philosophers.
Cantor's theorem implies that there are sets having cardinality greater than the infinite cardinality of the set of natural numbers. Cantor's argument for this theorem is presented with one small change. This argument can be improved by using a definition he gave later. The resulting argument uses only five axioms of set theory.
Cantor's set theory was controversial at the start, but later became largely accepted. Most modern mathematics textbooks implicitly use Cantor's views on mathematical infinity. For example, a line is generally presented as the infinite set of its points, and it is commonly taught that there are more real numbers than rational numbers (see cardinality of the continuum).
== Cantor's argument ==
Cantor's first proof that infinite sets can have different cardinalities was published in 1874. This proof demonstrates that the set of natural numbers and the set of real numbers have different cardinalities. It uses the theorem that a bounded increasing sequence of real numbers has a limit, which can be proved by using Cantor's or Richard Dedekind's construction of the irrational numbers. Because Leopold Kronecker did not accept these constructions, Cantor was motivated to develop a new proof.
In 1891, he published "a much simpler proof ... which does not depend on considering the irrational numbers." His new proof uses his diagonal argument to prove that there exists an infinite set with a larger number of elements (or greater cardinality) than the set of natural numbers N = {1, 2, 3, ...}. This larger set consists of the elements (x1, x2, x3, ...), where each xn is either m or w. Each of these elements corresponds to a subset of N—namely, the element (x1, x2, x3, ...) corresponds to {n ∈ N: xn = w}. So Cantor's argument implies that the set of all subsets of N has greater cardinality than N. The set of all subsets of N is denoted by P(N), the power set of N.
Cantor generalized his argument to an arbitrary set A and the set consisting of all functions from A to {0, 1}. Each of these functions corresponds to a subset of A, so his generalized argument implies the theorem: The power set P(A) has greater cardinality than A. This is known as Cantor's theorem.
The argument below is a modern version of Cantor's argument that uses power sets (for his original argument, see Cantor's diagonal argument). By presenting a modern argument, it is possible to see which assumptions of axiomatic set theory are used. The first part of the argument proves that N and P(N) have different cardinalities:
There exists at least one infinite set. This assumption (not formally specified by Cantor) is captured in formal set theory by the axiom of infinity. This axiom implies that N, the set of all natural numbers, exists.
P(N), the set of all subsets of N, exists. In formal set theory, this is implied by the power set axiom, which says that for every set there is a set of all of its subsets.
The concept of "having the same number" or "having the same cardinality" can be captured by the idea of one-to-one correspondence. This (purely definitional) assumption is sometimes known as Hume's principle. As Frege said, "If a waiter wishes to be certain of laying exactly as many knives on a table as plates, he has no need to count either of them; all he has to do is to lay immediately to the right of every plate a knife, taking care that every knife on the table lies immediately to the right of a plate. Plates and knives are thus correlated one to one." Sets in such a correlation are called equinumerous, and the correlation is called a one-to-one correspondence.
A set cannot be put into one-to-one correspondence with its power set. This implies that N and P(N) have different cardinalities. It depends on very few assumptions of set theory, and, as John P. Mayberry puts it, is a "simple and beautiful argument" that is "pregnant with consequences". Here is the argument:
Let A be a set and P(A) be its power set. The following theorem will be proved: If f is a function from A to P(A), then it is not onto. This theorem implies that there is no one-to-one correspondence between A and P(A), since such a correspondence must be onto. Proof of theorem: Define the diagonal subset D = {x ∈ A : x ∉ f(x)}. Since D ∈ P(A), proving that for all x ∈ A, D ≠ f(x) will imply that f is not onto. Let x ∈ A. Then x ∈ D ⇔ x ∉ f(x), which implies x ∉ D ⇔ x ∈ f(x). So if x ∈ D, then x ∉ f(x); and if x ∉ D, then x ∈ f(x). Since one of these sets contains x and the other does not, D ≠ f(x). Therefore, D is not in the image of f, so f is not onto.
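For a finite set the theorem can be checked by brute force: no function from A to P(A) is onto, and the diagonal subset always witnesses the failure. A minimal Python sketch, illustrative only (the helper names `powerset` and `diagonal` are ours, not Cantor's):

```python
from itertools import combinations, product

def powerset(a):
    """All subsets of the set a, as frozensets."""
    items = sorted(a)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def diagonal(a, f):
    """Cantor's diagonal subset D = {x in a : x not in f(x)}."""
    return frozenset(x for x in a if x not in f(x))

a = {0, 1, 2}
subsets = powerset(a)                  # 2^3 = 8 subsets

# Enumerate every function f : a -> P(a) as a tuple of images.
for images in product(subsets, repeat=len(a)):
    f = dict(zip(sorted(a), images))
    d = diagonal(a, lambda x: f[x])
    assert d not in images             # D is never in the range of f
print("checked", len(subsets) ** len(a), "functions")  # 512 functions
```

The exhaustive check passes for any small finite set, which is exactly the content of the theorem: the diagonal subset differs from f(x) at x itself.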
Next Cantor shows that A is equinumerous with a subset of P(A). From this and the fact that P(A) and A have different cardinalities, he concludes that P(A) has greater cardinality than A. This conclusion uses his 1878 definition: If A and B have different cardinalities, then either B is equinumerous with a subset of A (in this case, B has less cardinality than A) or A is equinumerous with a subset of B (in this case, B has greater cardinality than A). This definition leaves out the case where each of A and B is equinumerous with a subset of the other—that is, A is equinumerous with a subset of B and B is equinumerous with a subset of A. Because Cantor implicitly assumed that cardinalities are linearly ordered, this case cannot occur. After using his 1878 definition, Cantor stated that in an 1883 article he proved that cardinalities are well-ordered, which implies they are linearly ordered. This proof used his well-ordering principle "every set can be well-ordered", which he called a "law of thought". The well-ordering principle is equivalent to the axiom of choice.
Around 1895, Cantor began to regard the well-ordering principle as a theorem and attempted to prove it. In 1895, Cantor also gave a new definition of "greater than" that correctly defines this concept without the aid of his well-ordering principle. By using Cantor's new definition, the modern argument that P(N) has greater cardinality than N can be completed using weaker assumptions than his original argument:
The concept of "having greater cardinality" can be captured by Cantor's 1895 definition: B has greater cardinality than A if (1) A is equinumerous with a subset of B, and (2) B is not equinumerous with a subset of A. Clause (1) says B is at least as large as A, which is consistent with our definition of "having the same cardinality". Clause (2) implies that the case where A and B are equinumerous with a subset of the other set is false. Since clause (2) says that A is not at least as large as B, the two clauses together say that B is larger (has greater cardinality) than A.
The power set P(A) has greater cardinality than A, which implies that P(N) has greater cardinality than N. Here is the proof:
Define the subset P1 = {y ∈ P(A) : ∃x ∈ A (y = {x})}. Define f(x) = {x}, which maps A onto P1. Since f(x1) = f(x2) implies x1 = x2, f is a one-to-one correspondence from A to P1. Therefore, A is equinumerous with a subset of P(A).
Using proof by contradiction, assume that A1, a subset of A, is equinumerous with P(A). Then there is a one-to-one correspondence g from A1 to P(A). Define h from A to P(A): if x ∈ A1, then h(x) = g(x); if x ∈ A ∖ A1, then h(x) = { }. Since g maps A1 onto P(A), h maps A onto P(A), contradicting the theorem above stating that a function from A to P(A) is not onto. Therefore, P(A) is not equinumerous with a subset of A.
Besides the axioms of infinity and power set, the axioms of separation, extensionality, and pairing were used in the modern argument. For example, the axiom of separation was used to define the diagonal subset D, the axiom of extensionality was used to prove D ≠ f(x), and the axiom of pairing was used in the definition of the subset P1.
== Reception of the argument ==
Initially, Cantor's theory was controversial among mathematicians and (later) philosophers. Logician Wilfrid Hodges (1998) has commented on the energy devoted to refuting this "harmless little argument" (i.e. Cantor's diagonal argument), asking, "what had it done to anyone to make them angry with it?" Mathematician Solomon Feferman has referred to Cantor's theories as "simply not relevant to everyday mathematics."
Before Cantor, the notion of infinity was often taken as a useful abstraction which helped mathematicians reason about the finite world; for example the use of infinite limit cases in calculus. The infinite was deemed to have at most a potential existence, rather than an actual existence. "Actual infinity does not exist. What we call infinite is only the endless possibility of creating new objects no matter how many exist already". Carl Friedrich Gauss's views on the subject can be paraphrased as: "Infinity is nothing more than a figure of speech which helps us talk about limits. The notion of a completed infinity doesn't belong in mathematics." In other words, the only access we have to the infinite is through the notion of limits, and hence, we must not treat infinite sets as if they have an existence exactly comparable to the existence of finite sets.
Cantor's ideas ultimately were largely accepted, strongly supported by David Hilbert, amongst others. Hilbert predicted: "No one will drive us from the paradise which Cantor created for us." To which Wittgenstein replied "if one person can see it as a paradise of mathematicians, why should not another see it as a joke?" The rejection of Cantor's infinitary ideas influenced the development of schools of mathematics such as constructivism and intuitionism.
Wittgenstein did not object to mathematical formalism wholesale, but had a finitist view on what Cantor's proof meant. The philosopher maintained that belief in infinities arises from confusing the intensional nature of mathematical laws with the extensional nature of sets, sequences, symbols etc. In his view a series of symbols is finite. In Wittgenstein's words: "...A curve is not composed of points, it is a law that points obey, or again, a law according to which points can be constructed."
He also described the diagonal argument as "hocus pocus" and not proving what it purports to do.
== Objection to the axiom of infinity ==
A common objection to Cantor's theory of infinite number involves the axiom of infinity (which is, indeed, an axiom and not a logical truth). Mayberry has noted that "the set-theoretical axioms that sustain modern mathematics are self-evident in differing degrees. One of them—indeed, the most important of them, namely Cantor's Axiom, the so-called Axiom of Infinity—has scarcely any claim to self-evidence at all".
Another objection is that the use of infinite sets is not adequately justified by analogy to finite sets. Hermann Weyl wrote:
... classical logic was abstracted from the mathematics of finite sets and their subsets …. Forgetful of this limited origin, one afterwards mistook that logic for something above and prior to all mathematics, and finally applied it, without justification, to the mathematics of infinite sets. This is the Fall and original sin of [Cantor's] set theory.
The difficulty with finitism is to develop foundations of mathematics using finitist assumptions that incorporate what everyone reasonably regards as mathematics (for example, real analysis).
== See also ==
Preintuitionism
== Notes ==
== References ==
Bishop, Errett; Bridges, Douglas S. (1985), Constructive Analysis, Grundlehren Der Mathematischen Wissenschaften, Springer, ISBN 978-0-387-15066-6
Cantor, Georg (1878), "Ein Beitrag zur Mannigfaltigkeitslehre", Journal für die Reine und Angewandte Mathematik, 84: 242–248
Cantor, Georg (1891), "Ueber eine elementare Frage der Mannigfaltigkeitslehre" (PDF), Jahresbericht der Deutschen Mathematiker-Vereinigung, 1: 75–78
Cantor, Georg (1895), "Beiträge zur Begründung der transfiniten Mengenlehre (1)", Mathematische Annalen, 46 (4): 481–512, doi:10.1007/bf02124929, S2CID 177801164, archived from the original on April 23, 2014
Cantor, Georg; Philip Jourdain (trans.) (1954) [1915], Contributions to the Founding of the Theory of Transfinite Numbers, Dover, ISBN 978-0-486-60045-1
Dauben, Joseph (1979), Georg Cantor: His Mathematics and Philosophy of the Infinite, Harvard University Press, ISBN 0-674-34871-0
Dunham, William (1991), Journey through Genius: The Great Theorems of Mathematics, Penguin Books, ISBN 978-0140147391
Ewald, William B., ed. (1996), From Immanuel Kant to David Hilbert: A Source Book in the Foundations of Mathematics, Volume 2, Oxford University Press, ISBN 0-19-850536-1
Frege, Gottlob; J.L. Austin (trans.) (1884), The Foundations of Arithmetic (2nd ed.), Northwestern University Press, ISBN 978-0-8101-0605-5
Hallett, Michael (1984), Cantorian Set Theory and Limitation of Size, Clarendon Press, ISBN 0-19-853179-6
Hilbert, David (1926), "Über das Unendliche", Mathematische Annalen, vol. 95, pp. 161–190, doi:10.1007/BF01206605, JFM 51.0044.02, S2CID 121888793
"Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können."
Translated in Van Heijenoort, Jean, On the infinite, Harvard University Press
Kline, Morris (1982), Mathematics: The Loss of Certainty, Oxford, ISBN 0-19-503085-0
Mayberry, J.P. (2000), The Foundations of Mathematics in the Theory of Sets, Encyclopedia of Mathematics and its Applications, vol. 82, Cambridge University Press
Moore, Gregory H. (1982), Zermelo's Axiom of Choice: Its Origins, Development & Influence, Springer, ISBN 978-1-4613-9480-8
Poincaré, Henri (1908), The Future of Mathematics (PDF), Revue generale des Sciences pures et appliquees, vol. 23, archived from the original (PDF) on 2003-06-29 (address to the Fourth International Congress of Mathematicians)
Sainsbury, R.M. (1979), Russell, London
Weyl, Hermann (1946), "Mathematics and logic: A brief survey serving as a preface to a review of The Philosophy of Bertrand Russell", American Mathematical Monthly, vol. 53, pp. 2–13, doi:10.2307/2306078, JSTOR 2306078
Wittgenstein, Ludwig; A. J. P. Kenny (trans.) (1974), Philosophical Grammar, Oxford
Wittgenstein; R. Hargreaves (trans.); R. White (trans.) (1964), Philosophical Remarks, Oxford
Wittgenstein (2001), Remarks on the Foundations of Mathematics (3rd ed.), Oxford
== External links ==
Doron Zeilberger's 68th Opinion
Philosopher Hartley Slater's argument against the idea of "number" that underpins Cantor's set theory
Wolfgang Mueckenheim: Transfinity - A Source Book
Hodges "An editor recalls some hopeless papers"
Cantor's first set theory article contains Georg Cantor's first theorems of transfinite set theory, which studies infinite sets and their properties. One of these theorems is his "revolutionary discovery" that the set of all real numbers is uncountably, rather than countably, infinite. This theorem is proved using Cantor's first uncountability proof, which differs from the more familiar proof using his diagonal argument. The title of the article, "On a Property of the Collection of All Real Algebraic Numbers" ("Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen"), refers to its first theorem: the set of real algebraic numbers is countable. Cantor's article was published in 1874. In 1879, he modified his uncountability proof by using the topological notion of a set being dense in an interval.
Cantor's article also contains a proof of the existence of transcendental numbers. Both constructive and non-constructive proofs have been presented as "Cantor's proof." The popularity of presenting a non-constructive proof has led to a misconception that Cantor's arguments are non-constructive. Since the proof that Cantor published either constructs transcendental numbers or does not, an analysis of his article can determine whether or not this proof is constructive. Cantor's correspondence with Richard Dedekind shows the development of his ideas and reveals that he had a choice between two proofs: a non-constructive proof that uses the uncountability of the real numbers and a constructive proof that does not use uncountability.
Historians of mathematics have examined Cantor's article and the circumstances in which it was written. For example, they have discovered that Cantor was advised to leave out his uncountability theorem in the article he submitted — he added it during proofreading. They have traced this and other facts about the article to the influence of Karl Weierstrass and Leopold Kronecker. Historians have also studied Dedekind's contributions to the article, including his contributions to the theorem on the countability of the real algebraic numbers. In addition, they have recognized the role played by the uncountability theorem and the concept of countability in the development of set theory, measure theory, and the Lebesgue integral.
== The article ==
Cantor's article is short, less than four and a half pages. It begins with a discussion of the real algebraic numbers and a statement of his first theorem: The set of real algebraic numbers can be put into one-to-one correspondence with the set of positive integers. Cantor restates this theorem in terms more familiar to mathematicians of his time: "The set of real algebraic numbers can be written as an infinite sequence in which each number appears only once."
Cantor's second theorem works with a closed interval [a, b], which is the set of real numbers ≥ a and ≤ b. The theorem states: Given any sequence of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in the given sequence. Hence, there are infinitely many such numbers.
Cantor observes that combining his two theorems yields a new proof of Liouville's theorem that every interval [a, b] contains infinitely many transcendental numbers.
Cantor then remarks that his second theorem is:
the reason why collections of real numbers forming a so-called continuum (such as, all real numbers which are ≥ 0 and ≤ 1) cannot correspond one-to-one with the collection (ν) [the collection of all positive integers]; thus I have found the clear difference between a so-called continuum and a collection like the totality of real algebraic numbers.
This remark contains Cantor's uncountability theorem, which only states that an interval [a, b] cannot be put into one-to-one correspondence with the set of positive integers. It does not state that this interval is an infinite set of larger cardinality than the set of positive integers. Cardinality is defined in Cantor's next article, which was published in 1878.
Cantor only states his uncountability theorem. He does not use it in any proofs.
== The proofs ==
=== First theorem ===
To prove that the set of real algebraic numbers is countable, define the height of a polynomial of degree n with integer coefficients as: n − 1 + |a0| + |a1| + ... + |an|, where a0, a1, ..., an are the coefficients of the polynomial. Order the polynomials by their height, and order the real roots of polynomials of the same height by numeric order. Since there are only a finite number of roots of polynomials of a given height, these orderings put the real algebraic numbers into a sequence. Cantor went a step further and produced a sequence in which each real algebraic number appears just once. He did this by only using polynomials that are irreducible over the integers. The following table contains the beginning of Cantor's enumeration.
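The ordering can be sketched in Python for the simplest case, degree-1 polynomials a1x + a0, whose real roots are exactly the rational numbers. This is a sketch under our own simplifications, not Cantor's full enumeration (which covers all degrees): requiring a1 ≥ 1 and gcd(|a0|, a1) = 1 plays the role of irreducibility, so each root appears exactly once.

```python
from math import gcd

def linear_algebraics(max_height):
    """Roots of degree-1 integer polynomials a1*x + a0, listed by
    Cantor's height n - 1 + |a0| + |a1| (= |a0| + |a1| for n = 1),
    and within each height by numeric order of the roots."""
    out = []
    for h in range(1, max_height + 1):
        roots = []
        for a1 in range(1, h + 1):
            a0_abs = h - a1
            for a0 in ({0} if a0_abs == 0 else {a0_abs, -a0_abs}):
                if gcd(abs(a0), a1) == 1:   # no common factor: root in lowest terms
                    roots.append(-a0 / a1)  # the root of a1*x + a0
        roots.sort()
        out.extend(roots)
    return out

# height 1: x -> 0;  height 2: x+1, x-1 -> -1, 1;
# height 3: x+2, 2x+1, 2x-1, x-2 -> -2, -1/2, 1/2, 2
print(linear_algebraics(3))   # [0.0, -1.0, 1.0, -2.0, -0.5, 0.5, 2.0]
```

Only finitely many roots occur at each height, so concatenating the heights yields a sequence in which every rational number appears exactly once, as in Cantor's argument.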
=== Second theorem ===
Only the first part of Cantor's second theorem needs to be proved. It states: Given any sequence of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in the given sequence.
To find a number in [a, b] that is not contained in the given sequence, construct two sequences of real numbers as follows: Find the first two numbers of the given sequence that are in the open interval (a, b). Denote the smaller of these two numbers by a1 and the larger by b1. Similarly, find the first two numbers of the given sequence that are in (a1, b1). Denote the smaller by a2 and the larger by b2. Continuing this procedure generates a sequence of intervals (a1, b1), (a2, b2), (a3, b3), ... such that each interval in the sequence contains all succeeding intervals — that is, it generates a sequence of nested intervals. This implies that the sequence a1, a2, a3, ... is increasing and the sequence b1, b2, b3, ... is decreasing.
The number of intervals generated is either finite or infinite. If finite, let (aL, bL) be the last interval. If infinite, take the limits a∞ = limn → ∞ an and b∞ = limn → ∞ bn. Since an < bn for all n, either a∞ = b∞ or a∞ < b∞. Thus, there are three cases to consider:
Case 1: There is a last interval (aL, bL). Since at most one xn can be in this interval, every y in this interval except xn (if it exists) is not in the given sequence.
Case 2: a∞ = b∞. Then a∞ is not in the sequence since for all n : a∞ is in the interval (an, bn) but xn does not belong to (an, bn). In symbols: a∞ ∈ (an, bn) but xn ∉ (an, bn).
Case 3: a∞ < b∞. Then every y in [a∞, b∞] is not contained in the given sequence since for all n : y belongs to (an, bn) but xn does not.
The proof is complete since, in all cases, at least one real number in [a, b] has been found that is not contained in the given sequence.
Cantor's proofs are constructive and have been used to write a computer program that generates the digits of a transcendental number. This program applies Cantor's construction to a sequence containing all the real algebraic numbers between 0 and 1. The article that discusses this program gives some of its output, which shows how the construction generates a transcendental.
=== Example of Cantor's construction ===
An example illustrates how Cantor's construction works. Consider the sequence: 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, 3/5, 4/5, ... This sequence is obtained by ordering the rational numbers in (0, 1) by increasing denominators, ordering those with the same denominator by increasing numerators, and omitting reducible fractions. The table below shows the first five steps of the construction. The table's first column contains the intervals (an, bn). The second column lists the terms visited during the search for the first two terms in (an, bn). These two terms are in red.
Since the sequence contains all the rational numbers in (0, 1), the construction generates an irrational number, which turns out to be √2 − 1.
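The construction can be carried out in exact arithmetic. A hedged Python sketch (the helper names are ours) that applies the procedure from the proof of the second theorem to the sequence of rationals above:

```python
from fractions import Fraction
from math import gcd

def rationals_01(n):
    """First n terms of 1/2, 1/3, 2/3, 1/4, 3/4, ...: the rationals in
    (0, 1) in lowest terms, ordered by denominator, then numerator."""
    out, q = [], 2
    while len(out) < n:
        out.extend(Fraction(p, q) for p in range(1, q) if gcd(p, q) == 1)
        q += 1
    return out[:n]

def cantor_intervals(seq, a, b, steps):
    """Cantor's construction: repeatedly take the first two terms of
    seq inside the current open interval (a, b) as the endpoints of
    the next, nested interval."""
    intervals, i = [], 0
    for _ in range(steps):
        found = []
        while len(found) < 2 and i < len(seq):
            if a < seq[i] < b:
                found.append(seq[i])
            i += 1
        if len(found) < 2:          # Case 1: a last interval exists
            break
        a, b = min(found), max(found)
        intervals.append((a, b))
    return intervals

ivs = cantor_intervals(rationals_01(1000), Fraction(0), Fraction(1), 4)
# first intervals: (1/3, 1/2), then (2/5, 3/7), ...
```

Because every rational in (0, 1) eventually appears in the sequence, no last interval occurs, and the nested endpoints close in on the irrational √2 − 1 ≈ 0.41421, which is therefore not in the sequence.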
== Cantor's 1879 uncountability proof ==
=== Everywhere dense ===
In 1879, Cantor published a new uncountability proof that modifies his 1874 proof. He first defines the topological notion of a point set P being "everywhere dense in an interval":
If P lies partially or completely in the interval [α, β], then the remarkable case can happen that every interval [γ, δ] contained in [α, β], no matter how small, contains points of P. In such a case, we will say that P is everywhere dense in the interval [α, β].
In this discussion of Cantor's proof: a, b, c, d are used instead of α, β, γ, δ. Also, Cantor only uses his interval notation if the first endpoint is less than the second. For this discussion, this means that (a, b) implies a < b.
Since the discussion of Cantor's 1874 proof was simplified by using open intervals rather than closed intervals, the same simplification is used here. This requires an equivalent definition of everywhere dense: A set P is everywhere dense in the interval [a, b] if and only if every open subinterval (c, d) of [a, b] contains at least one point of P.
Cantor did not specify how many points of P an open subinterval (c, d) must contain. He did not need to specify this because the assumption that every open subinterval contains at least one point of P implies that every open subinterval contains infinitely many points of P.
=== Cantor's 1879 proof ===
Cantor modified his 1874 proof with a new proof of its second theorem: Given any sequence P of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in P. Cantor's new proof has only two cases. First, it handles the case of P not being dense in the interval, then it deals with the more difficult case of P being dense in the interval. This division into cases not only indicates which sequences are more difficult to handle, but it also reveals the important role denseness plays in the proof.
In the first case, P is not dense in [a, b]. By definition, P is dense in [a, b] if and only if for all subintervals (c, d) of [a, b], there is an x ∈ P such that x ∈ (c, d). Taking the negation of each side of the "if and only if" produces: P is not dense in [a, b] if and only if there exists a subinterval (c, d) of [a, b] such that for all x ∈ P : x ∉ (c, d). Therefore, every number in (c, d) is not contained in the sequence P. This case handles case 1 and case 3 of Cantor's 1874 proof.
In the second case, which handles case 2 of Cantor's 1874 proof, P is dense in [a, b]. The denseness of sequence P is used to recursively define a sequence of nested intervals that excludes all the numbers in P and whose intersection contains a single real number in [a, b]. The sequence of intervals starts with (a, b). Given an interval in the sequence, the next interval is obtained by finding the two numbers with the least indices that belong to P and to the current interval. These two numbers are the endpoints of the next open interval. Since an open interval excludes its endpoints, every nested interval eliminates two numbers from the front of sequence P, which implies that the intersection of the nested intervals excludes all the numbers in P. Details of this proof and a proof that this intersection contains a single real number in [a, b] are given below.
== The development of Cantor's ideas ==
The development leading to Cantor's 1874 article appears in the correspondence between Cantor and Richard Dedekind. On November 29, 1873, Cantor asked Dedekind whether the collection of positive integers and the collection of positive real numbers "can be corresponded so that each individual of one collection corresponds to one and only one individual of the other?" Cantor added that collections having such a correspondence include the collection of positive rational numbers, and collections of the form (an1, n2, ..., nν) where n1, n2, ..., nν, and ν are positive integers.
Dedekind replied that he was unable to answer Cantor's question, and said that it "did not deserve too much effort because it has no particular practical interest". Dedekind also sent Cantor a proof that the set of algebraic numbers is countable.
On December 2, Cantor responded that his question does have interest: "It would be nice if it could be answered; for example, provided that it could be answered no, one would have a new proof of Liouville's theorem that there are transcendental numbers."
On December 7, Cantor sent Dedekind a proof by contradiction that the set of real numbers is uncountable. Cantor starts by assuming that the real numbers in [0, 1] can be written as a sequence. Then, he applies a construction to this sequence to produce a number in [0, 1] that is not in the sequence, thus contradicting his assumption. Together, the letters of December 2 and 7 provide a non-constructive proof of the existence of transcendental numbers. Also, the proof in Cantor's December 7 letter shows some of the reasoning that led to his discovery that the real numbers form an uncountable set.
Dedekind received Cantor's proof on December 8. On that same day, Dedekind simplified the proof and mailed his proof to Cantor. Cantor used Dedekind's proof in his article. The letter containing Cantor's December 7 proof was not published until 1937.
On December 9, Cantor announced the theorem that allowed him to construct transcendental numbers as well as prove the uncountability of the set of real numbers:
I show directly that if I start with a sequence
(1) ω_1, ω_2, ..., ω_n, ...
I can determine, in every given interval [α, β], a number η that is not included in (1).
This is the second theorem in Cantor's article. It comes from realizing that his construction can be applied to any sequence, not just to sequences that supposedly enumerate the real numbers. So Cantor had a choice between two proofs that demonstrate the existence of transcendental numbers: one proof is constructive, but the other is not. These two proofs can be compared by starting with a sequence consisting of all the real algebraic numbers.
The constructive proof applies Cantor's construction to this sequence and the interval [a, b] to produce a transcendental number in this interval.
The non-constructive proof uses two proofs by contradiction:
The proof by contradiction used to prove the uncountability theorem (see Proof of Cantor's uncountability theorem).
The proof by contradiction used to prove the existence of transcendental numbers from the countability of the real algebraic numbers and the uncountability of real numbers. Cantor's December 2nd letter mentions this existence proof but does not contain it. Here is a proof: Assume that there are no transcendental numbers in [a, b]. Then all the numbers in [a, b] are algebraic. This implies that they form a subsequence of the sequence of all real algebraic numbers, which contradicts Cantor's uncountability theorem. Thus, the assumption that there are no transcendental numbers in [a, b] is false. Therefore, there is a transcendental number in [a, b].
Cantor chose to publish the constructive proof, which not only produces a transcendental number but is also shorter and avoids two proofs by contradiction. The non-constructive proof from Cantor's correspondence is simpler than the one above because it works with all the real numbers rather than the interval [a, b]. This eliminates the subsequence step and all occurrences of [a, b] in the second proof by contradiction.
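The constructive proof can be sketched as a short program. The sketch below enumerates the rationals in (0, 1) as a stand-in for the enumeration of the real algebraic numbers (an effective enumeration of algebraic numbers exists but is longer to code); the function names are illustrative, not Cantor's.

```python
from fractions import Fraction
from itertools import count

def rationals_in_unit_interval():
    """Enumerate the rationals in (0, 1) without repeats -- a stand-in
    for Cantor's enumeration of the real algebraic numbers."""
    seen = set()
    for q in count(2):              # denominators 2, 3, 4, ...
        for p in range(1, q):       # numerators below the denominator
            r = Fraction(p, q)
            if r not in seen:
                seen.add(r)
                yield r

def cantor_interval(seq, a, b, rounds):
    """Run `rounds` steps of Cantor's nested-interval construction:
    repeatedly find the next two sequence terms strictly inside (a, b)
    and shrink (a, b) to the interval they bound.  Every term consumed
    lies outside the final open interval (skipped terms are outside the
    current interval, selected terms become excluded endpoints), so any
    point inside it is a number missing from that prefix of the sequence."""
    it = iter(seq)
    used = []
    for _ in range(rounds):
        inside = []
        while len(inside) < 2:
            x = next(it)
            used.append(x)
            if a < x < b:
                inside.append(x)
        a, b = min(inside), max(inside)
    return a, b, used

a, b, used = cantor_interval(rationals_in_unit_interval(),
                             Fraction(0), Fraction(1), 5)
mid = (a + b) / 2   # a definite number not among the terms consumed
```

Each extra round excludes further terms of the enumeration, matching the remark above that the construction determines a number to any required degree of accuracy.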
== A misconception about Cantor's work ==
Akihiro Kanamori, who specializes in set theory, stated that "Accounts of Cantor's work have mostly reversed the order for deducing the existence of transcendental numbers, establishing first the uncountability of the reals and only then drawing the existence conclusion from the countability of the algebraic numbers. In textbooks the inversion may be inevitable, but this has promoted the misconception that Cantor's arguments are non-constructive."
Cantor's published proof and the reverse-order proof both use the theorem: Given a sequence of reals, a real can be found that is not in the sequence. By applying this theorem to the sequence of real algebraic numbers, Cantor produced a transcendental number. He then proved that the reals are uncountable: Assume that there is a sequence containing all the reals. Applying the theorem to this sequence produces a real not in the sequence, contradicting the assumption that the sequence contains all the reals. Hence, the reals are uncountable. The reverse-order proof starts by first proving the reals are uncountable. It then proves that transcendental numbers exist: If there were no transcendental numbers, all the reals would be algebraic and hence countable, which contradicts what was just proved. This contradiction proves that transcendental numbers exist without constructing any.
The correspondence containing Cantor's non-constructive reasoning was published in 1937. By then, other mathematicians had rediscovered his non-constructive, reverse-order proof. As early as 1921, this proof was called "Cantor's proof" and criticized for not producing any transcendental numbers. In that year, Oskar Perron gave the reverse-order proof and then stated: "... Cantor's proof for the existence of transcendental numbers has, along with its simplicity and elegance, the great disadvantage that it is only an existence proof; it does not enable us to actually specify even a single transcendental number."
As early as 1930, some mathematicians attempted to correct this misconception of Cantor's work. In that year, the set theorist Abraham Fraenkel stated that Cantor's method is "... a method that incidentally, contrary to a widespread interpretation, is fundamentally constructive and not merely existential." In 1972, Irving Kaplansky wrote: "It is often said that Cantor's proof is not 'constructive,' and so does not yield a tangible transcendental number. This remark is not justified. If we set up a definite listing of all algebraic numbers ... and then apply the diagonal procedure ..., we get a perfectly definite transcendental number (it could be computed to any number of decimal places)." Cantor's proof is not only constructive, it is also simpler than Perron's proof, which requires the detour of first proving that the set of all reals is uncountable.
Cantor's diagonal argument has often replaced his 1874 construction in expositions of his proof. The diagonal argument is constructive and produces a more efficient computer program than his 1874 construction. Using it, a computer program has been written that computes the digits of a transcendental number in polynomial time. The program that uses Cantor's 1874 construction requires at least sub-exponential time.
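How the diagonal argument yields a perfectly definite number can be sketched in a few lines; the helper names and the choice of digits 4 and 5 are illustrative conventions, not part of Cantor's argument.

```python
from fractions import Fraction

def decimal_digit(x):
    """Return a function giving the k-th decimal digit (0-based) of a
    rational x in [0, 1)."""
    return lambda k: (10 ** (k + 1) * x.numerator // x.denominator) % 10

def diagonal_digits(digit_funcs, n_digits):
    """First n_digits decimal digits of a real in [0, 1] that differs
    from the k-th listed real in its k-th digit.  Only the digits 4 and
    5 are used, so the result cannot coincide with a listed number via
    a dual representation such as 0.0999... = 0.1000..."""
    out = []
    for k in range(n_digits):
        d = digit_funcs[k](k) if k < len(digit_funcs) else 0
        out.append(4 if d != 4 else 5)
    return out

# Three rationals stand in for an enumerated list of reals.
reals = [Fraction(1, 3), Fraction(1, 7), Fraction(2, 9)]
digits = diagonal_digits([decimal_digit(x) for x in reals], 6)
```

Each digit of the diagonal number requires examining only one digit of one listed number, which is why this route is computationally cheaper than the 1874 interval construction.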
The presentation of the non-constructive proof without mentioning Cantor's constructive proof appears in some books that were quite successful as measured by the length of time new editions or reprints appeared—for example: Oskar Perron's Irrationalzahlen (1921; 1960, 4th edition), Eric Temple Bell's Men of Mathematics (1937; still being reprinted), Godfrey Hardy and E. M. Wright's An Introduction to the Theory of Numbers (1938; 2008 6th edition), Garrett Birkhoff and Saunders Mac Lane's A Survey of Modern Algebra (1941; 1997 5th edition), and Michael Spivak's Calculus (1967; 2008 4th edition). Since 2014, at least two books have appeared stating that Cantor's proof is constructive, and at least four have appeared stating that his proof does not construct any (or a single) transcendental.
Asserting that Cantor gave a non-constructive argument without mentioning the constructive proof he published can lead to erroneous statements about the history of mathematics. In A Survey of Modern Algebra, Birkhoff and Mac Lane state: "Cantor's argument for this result [Not every real number is algebraic] was at first rejected by many mathematicians, since it did not exhibit any specific transcendental number." The proof that Cantor published produces transcendental numbers, and there appears to be no evidence that his argument was rejected. Even Leopold Kronecker, who had strict views on what is acceptable in mathematics and who could have delayed publication of Cantor's article, did not delay it. In fact, applying Cantor's construction to the sequence of real algebraic numbers produces a limiting process that Kronecker accepted—namely, it determines a number to any required degree of accuracy.
== The influence of Weierstrass and Kronecker on Cantor's article ==
Historians of mathematics have discovered the following facts about Cantor's article "On a Property of the Collection of All Real Algebraic Numbers":
Cantor's uncountability theorem was left out of the article he submitted. He added it during proofreading.
The article's title refers to the set of real algebraic numbers. The main topic in Cantor's correspondence was the set of real numbers.
The proof of Cantor's second theorem came from Dedekind. However, it omits Dedekind's explanation of why the limits a_∞ and b_∞ exist.
Cantor restricted his first theorem to the set of real algebraic numbers. The proof he was using demonstrates the countability of the set of all algebraic numbers.
To explain these facts, historians have pointed to the influence of Cantor's former professors, Karl Weierstrass and Leopold Kronecker. Cantor discussed his results with Weierstrass on December 23, 1873. Weierstrass was first amazed by the concept of countability, but then found the countability of the set of real algebraic numbers useful. Cantor did not want to publish yet, but Weierstrass felt that he must publish at least his results concerning the algebraic numbers.
From his correspondence, it appears that Cantor only discussed his article with Weierstrass. However, Cantor told Dedekind: "The restriction which I have imposed on the published version of my investigations is caused in part by local circumstances ..." Cantor biographer Joseph Dauben believes that "local circumstances" refers to Kronecker who, as a member of the editorial board of Crelle's Journal, had delayed publication of an 1870 article by Eduard Heine, one of Cantor's colleagues. Cantor would submit his article to Crelle's Journal.
Weierstrass advised Cantor to leave his uncountability theorem out of the article he submitted, but Weierstrass also told Cantor that he could add it as a marginal note during proofreading, which he did. It appears in a remark at the end of the article's introduction. The opinions of Kronecker and Weierstrass both played a role here. Kronecker did not accept infinite sets, and it seems that Weierstrass did not accept that two infinite sets could be so different, with one being countable and the other not. Weierstrass changed his opinion later. Without the uncountability theorem, the article needed a title that did not refer to this theorem. Cantor chose "Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen" ("On a Property of the Collection of All Real Algebraic Numbers"), which refers to the countability of the set of real algebraic numbers, the result that Weierstrass found useful.
Kronecker's influence appears in the proof of Cantor's second theorem. Cantor used Dedekind's version of the proof except he left out why the limits a_∞ = lim_{n→∞} a_n and b_∞ = lim_{n→∞} b_n exist. Dedekind had used his "principle of continuity" to prove they exist. This principle (which is equivalent to the least upper bound property of the real numbers) comes from Dedekind's construction of the real numbers, a construction Kronecker did not accept.
Cantor restricted his first theorem to the set of real algebraic numbers even though Dedekind had sent him a proof that handled all algebraic numbers. Cantor did this for expository reasons and because of "local circumstances". This restriction simplifies the article because the second theorem works with real sequences. Hence, the construction in the second theorem can be applied directly to the enumeration of the real algebraic numbers to produce "an effective procedure for the calculation of transcendental numbers". This procedure would be acceptable to Weierstrass.
== Dedekind's contributions to Cantor's article ==
Since 1856, Dedekind had developed theories involving infinitely many infinite sets—for example: ideals, which he used in algebraic number theory, and Dedekind cuts, which he used to construct the real numbers. This work enabled him to understand and contribute to Cantor's work.
Dedekind's first contribution concerns the theorem that the set of real algebraic numbers is countable. Cantor is usually given credit for this theorem, but the mathematical historian José Ferreirós calls it "Dedekind's theorem." Their correspondence reveals what each mathematician contributed to the theorem.
In his letter introducing the concept of countability, Cantor stated without proof that the set of positive rational numbers is countable, as are sets of the form (a_{n_1, n_2, ..., n_ν}) where n_1, n_2, ..., n_ν, and ν are positive integers. Cantor's second result uses an indexed family of numbers: a set of the form (a_{n_1, n_2, ..., n_ν}) is the range of a function from the ν indices to the set of real numbers. His second result implies his first: let ν = 2 and a_{n_1, n_2} = n_1/n_2. The function can be quite general—for example, a_{n_1, n_2, n_3, n_4, n_5} = (n_1/n_2)^{1/n_3} + tan(n_4/n_5).
Dedekind replied with a proof of the theorem that the set of all algebraic numbers is countable. In his reply to Dedekind, Cantor did not claim to have proved Dedekind's result. He did indicate how he proved his theorem about indexed families of numbers: "Your proof that (n) [the set of positive integers] can be correlated one-to-one with the field of all algebraic numbers is approximately the same as the way I prove my contention in the last letter. I take n_1² + n_2² + ··· + n_ν² = 𝔑 and order the elements accordingly." However, Cantor's ordering is weaker than Dedekind's and cannot be extended to n-tuples of integers that include zeros.
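Cantor's ordering by the sum of squares can be illustrated for small tuples; the cutoff parameter below is only an artifact of listing finitely many tuples, and the lexicographic tie-break is one arbitrary choice.

```python
from itertools import product

def tuples_by_square_norm(nu, max_norm):
    """All nu-tuples of positive integers with n_1^2 + ... + n_nu^2
    <= max_norm, ordered by that sum of squares (ties broken
    lexicographically).  Since every entry is >= 1, each value of the
    norm is shared by only finitely many tuples, so listing the norm
    classes in increasing order eventually reaches every tuple."""
    bound = int(max_norm ** 0.5)
    tups = [t for t in product(range(1, bound + 1), repeat=nu)
            if sum(x * x for x in t) <= max_norm]
    return sorted(tups, key=lambda t: (sum(x * x for x in t), t))

pairs = tuples_by_square_norm(2, 10)
# pairs with norm 2, 5, 5, 8, 10, 10 in order
```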
Dedekind's second contribution is his proof of Cantor's second theorem. Dedekind sent this proof in reply to Cantor's letter that contained the uncountability theorem, which Cantor proved using infinitely many sequences. Cantor next wrote that he had found a simpler proof that did not use infinitely many sequences. So Cantor had a choice of proofs and chose to publish Dedekind's.
Cantor thanked Dedekind privately for his help: "... your comments (which I value highly) and your manner of putting some of the points were of great assistance to me." However, he did not mention Dedekind's help in his article. In previous articles, he had acknowledged help received from Kronecker, Weierstrass, Heine, and Hermann Schwarz. Cantor's failure to mention Dedekind's contributions damaged his relationship with Dedekind. Dedekind stopped replying to his letters and did not resume the correspondence until October 1876.
== The legacy of Cantor's article ==
Cantor's article introduced the uncountability theorem and the concept of countability. Both would lead to significant developments in mathematics. The uncountability theorem demonstrated that one-to-one correspondences can be used to analyze infinite sets. In 1878, Cantor used them to define and compare cardinalities. He also constructed one-to-one correspondences to prove that the n-dimensional spaces Rn (where R is the set of real numbers) and the set of irrational numbers have the same cardinality as R.
In 1883, Cantor extended the positive integers with his infinite ordinals. This extension was necessary for his work on the Cantor–Bendixson theorem. Cantor discovered other uses for the ordinals—for example, he used sets of ordinals to produce an infinity of sets having different infinite cardinalities. His work on infinite sets together with Dedekind's set-theoretical work created set theory.
The concept of countability led to countable operations and objects that are used in various areas of mathematics. For example, in 1878, Cantor introduced countable unions of sets. In the 1890s, Émile Borel used countable unions in his theory of measure, and René Baire used countable ordinals to define his classes of functions. Building on the work of Borel and Baire, Henri Lebesgue created his theories of measure and integration, which were published from 1899 to 1901.
Countable models are used in set theory. In 1922, Thoralf Skolem proved that if conventional axioms of set theory are consistent, then they have a countable model. Since this model is countable, its set of real numbers is countable. This consequence is called Skolem's paradox, and Skolem explained why it does not contradict Cantor's uncountability theorem: although there is a one-to-one correspondence between this set and the set of positive integers, no such one-to-one correspondence is a member of the model. Thus the model considers its set of real numbers to be uncountable, or more precisely, the first-order sentence that says the set of real numbers is uncountable is true within the model. In 1963, Paul Cohen used countable models to prove his independence theorems.
== See also ==
Cantor's theorem
== Notes ==
=== Note on Cantor's 1879 proof ===
== References ==
== Bibliography ==
Arkhangel'skii, A. V.; Fedorchuk, V. V. (1990), "The basic concepts and constructions of general topology", in Arkhangel'skii, A. V.; Pontryagin, L. S. (eds.), General Topology I, New York, Berlin: Springer-Verlag, pp. 1–90, ISBN 978-0-387-18178-3.
Audin, Michèle (2011), Remembering Sofya Kovalevskaya, London: Springer, ISBN 978-0-85729-928-4.
Bell, Eric Temple (1937), Men of Mathematics, New York: Simon & Schuster. Reprinted, 1984, ISBN 978-0-671-62818-5.
Birkhoff, Garrett; Mac Lane, Saunders (1941), A Survey of Modern Algebra, New York: Macmillan. Reprinted, Taylor & Francis, 1997, ISBN 978-1-56881-068-3.
Burton, David M. (1995), Burton's History of Mathematics (3rd ed.), Dubuque, Iowa: William C. Brown, ISBN 978-0-697-16089-8.
Cantor, Georg (1874), "Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen", Journal für die Reine und Angewandte Mathematik (in German), 1874 (77): 258–262, doi:10.1515/crll.1874.77.258, S2CID 199545885.
Cantor, Georg (1878), "Ein Beitrag zur Mannigfaltigkeitslehre", Journal für die Reine und Angewandte Mathematik (in German), 1878 (84): 242–258, doi:10.1515/crll.1878.84.242.
Cantor, Georg (1879), "Ueber unendliche, lineare Punktmannichfaltigkeiten. 1.", Mathematische Annalen (in German), 15: 1–7, doi:10.1007/bf01444101, S2CID 179177510.
Chowdhary, K. R. (2015), Fundamentals of Discrete Mathematical Structures (3rd ed.), Delhi, India: PHI Learning, ISBN 978-81-203-5074-8.
Cohen, Paul J. (1963), "The Independence of the Continuum Hypothesis", Proceedings of the National Academy of Sciences of the United States of America, 50 (6): 1143–1148, Bibcode:1963PNAS...50.1143C, doi:10.1073/pnas.50.6.1143, PMC 221287, PMID 16578557.
Dasgupta, Abhijit (2014), Set Theory: With an Introduction to Real Point Sets, New York: Springer, ISBN 978-1-4614-8853-8.
Dauben, Joseph (1979), Georg Cantor: His Mathematics and Philosophy of the Infinite, Cambridge, Mass.: Harvard University Press, ISBN 978-0-674-34871-4.
Dauben, Joseph (1993), "Georg Cantor and the Battle for Transfinite Set Theory" (PDF), 9th ACMS Conference Proceedings.
Edwards, Harold M. (1989), "Kronecker's Views on the Foundations of Mathematics", in Rowe, David E.; McCleary, John (eds.), The History of Modern Mathematics, Volume 1, New York: Academic Press, pp. 67–77, ISBN 978-0-12-599662-4.
Ewald, William B., ed. (1996), From Immanuel Kant to David Hilbert: A Source Book in the Foundations of Mathematics, Volume 2, New York: Oxford University Press, ISBN 978-0-19-850536-5.
Ferreirós, José (1993), "On the relations between Georg Cantor and Richard Dedekind", Historia Mathematica, 20 (4): 343–363, doi:10.1006/hmat.1993.1030.
Ferreirós, José (2007), Labyrinth of Thought: A History of Set Theory and Its Role in Mathematical Thought (2nd revised ed.), Basel: Birkhäuser, ISBN 978-3-7643-8349-7.
Fraenkel, Abraham (1930), "Georg Cantor", Jahresbericht der Deutschen Mathematiker-Vereinigung (in German), 39: 189–266.
Grattan-Guinness, Ivor (1971), "The Correspondence between Georg Cantor and Philip Jourdain", Jahresbericht der Deutschen Mathematiker-Vereinigung, 73: 111–130.
Gray, Robert (1994), "Georg Cantor and Transcendental Numbers" (PDF), American Mathematical Monthly, 101 (9): 819–832, doi:10.2307/2975129, JSTOR 2975129, MR 1300488, Zbl 0827.01004, archived from the original (PDF) on 2022-01-21, retrieved 2016-02-13.
Hardy, Godfrey; Wright, E. M. (1938), An Introduction to the Theory of Numbers, Oxford: Clarendon Press.
Havil, Julian (2012), The Irrationals, Princeton, Oxford: Princeton University Press, ISBN 978-0-691-16353-6.
Hawkins, Thomas (1970), Lebesgue's Theory of Integration, Madison, Wisconsin: University of Wisconsin Press, ISBN 978-0-299-05550-9.
Jarvis, Frazer (2014), Algebraic Number Theory, New York: Springer, ISBN 978-3-319-07544-0.
Kanamori, Akihiro (2012), "Set Theory from Cantor to Cohen" (PDF), in Gabbay, Dov M.; Kanamori, Akihiro; Woods, John H. (eds.), Sets and Extensions in the Twentieth Century, Amsterdam, Boston: Cambridge University Press, pp. 1–71, ISBN 978-0-444-51621-3.
Kaplansky, Irving (1972), Set Theory and Metric Spaces, Boston: Allyn and Bacon, ISBN 978-0-8284-0298-9.
Kelley, John L. (1991), General Topology, New York: Springer, ISBN 978-3-540-90125-9.
LeVeque, William J. (1956), Topics in Number Theory, vol. I, Reading, Massachusetts: Addison-Wesley. (Reprinted by Dover Publications, 2002, ISBN 978-0-486-42539-9.)
Noether, Emmy; Cavaillès, Jean, eds. (1937), Briefwechsel Cantor-Dedekind (in German), Paris: Hermann.
Perron, Oskar (1921), Irrationalzahlen (in German), Leipzig, Berlin: W. de Gruyter, OCLC 4636376.
Sheppard, Barnaby (2014), The Logic of Infinity, Cambridge: Cambridge University Press, ISBN 978-1-107-67866-8.
Spivak, Michael (1967), Calculus, London: W. A. Benjamin, ISBN 978-0914098911.
Stewart, Ian (2015), Galois Theory (4th ed.), Boca Raton, Florida: CRC Press, ISBN 978-1-4822-4582-0.
Stewart, Ian; Tall, David (2015), The Foundations of Mathematics (2nd ed.), New York: Oxford University Press, ISBN 978-0-19-870644-1.
Weisstein, Eric W., ed. (2003), "Continued Fraction", CRC Concise Encyclopedia of Mathematics, Boca Raton, Florida: Chapman & Hall/CRC, ISBN 978-1-58488-347-0. | Wikipedia/Cantor's_first_set_theory_article |
In computability theory, hyperarithmetic theory is a generalization of Turing computability. It has close connections with definability in second-order arithmetic and with weak systems of set theory such as Kripke–Platek set theory. It is an important tool in effective descriptive set theory.
The central focus of hyperarithmetic theory is the sets of natural numbers known as hyperarithmetic sets. There are three equivalent ways of defining this class of sets; the study of the relationships between these different definitions is one motivation for the study of hyperarithmetical theory.
== Hyperarithmetical sets and definability ==
The first definition of the hyperarithmetic sets uses the analytical hierarchy.
A set of natural numbers is classified at level Σ^1_1 of this hierarchy if it is definable by a formula of second-order arithmetic with only existential set quantifiers and no other set quantifiers. A set is classified at level Π^1_1 of the analytical hierarchy if it is definable by a formula of second-order arithmetic with only universal set quantifiers and no other set quantifiers. A set is Δ^1_1 if it is both Σ^1_1 and Π^1_1. The hyperarithmetical sets are exactly the Δ^1_1 sets.
== Hyperarithmetical sets and iterated Turing jumps: the hyperarithmetical hierarchy ==
The definition of hyperarithmetical sets as Δ^1_1 does not directly depend on computability results. A second, equivalent, definition shows that the hyperarithmetical sets can be defined using infinitely iterated Turing jumps. This second definition also shows that the hyperarithmetical sets can be classified into a hierarchy extending the arithmetical hierarchy; the hyperarithmetical sets are exactly the sets that are assigned a rank in this hierarchy.
Each level of the hyperarithmetical hierarchy is indexed by a countable ordinal, but not all countable ordinals correspond to a level of the hierarchy. The ordinals used by the hierarchy are those with an ordinal notation, which is a concrete, effective description of the ordinal.
An ordinal notation is an effective description of a countable ordinal by a natural number. A system of ordinal notations is required in order to define the hyperarithmetic hierarchy. The fundamental property an ordinal notation must have is that it describes the ordinal in terms of smaller ordinals in an effective way. The following inductive definition is typical; it uses a pairing function ⟨·, ·⟩.
The number 0 is a notation for the ordinal 0.
If n is a notation for an ordinal λ, then ⟨1, n⟩ is a notation for λ + 1.
Suppose that δ is a limit ordinal. A notation for δ is a number of the form ⟨2, e⟩, where e is the index of a total computable function φ_e such that for each n, φ_e(n) is a notation for an ordinal λ_n less than δ and δ is the sup of the set {λ_n | n ∈ ℕ}.
This may also be defined by taking effective joins at all levels instead of only notations for limit ordinals.
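The first two clauses of the inductive definition can be made concrete with any effective pairing; the sketch below uses the Cantor pairing function (one choice among many, not mandated by the definition) and handles only finite ordinals, since the limit clause ⟨2, e⟩ requires an indexing of the total computable functions.

```python
def pair(a, b):
    """Cantor pairing function -- one concrete choice for the
    pairing <.,.> in the inductive definition (an assumption;
    any effective pairing works)."""
    return (a + b) * (a + b + 1) // 2 + b

def notation_for_finite(k):
    """Notation for the finite ordinal k: 0 codes the ordinal 0,
    and <1, n> codes the successor of the ordinal coded by n."""
    n = 0
    for _ in range(k):
        n = pair(1, n)
    return n

def decode_finite(n):
    """Recover the finite ordinal from such a notation by
    repeatedly inverting the pairing."""
    k = 0
    while n != 0:
        # find w with w(w+1)/2 <= n < (w+1)(w+2)/2, then split
        w = 0
        while (w + 1) * (w + 2) // 2 <= n:
            w += 1
        b = n - w * (w + 1) // 2
        a = w - b
        assert a == 1, "not a successor notation"
        n, k = b, k + 1
    return k
```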
There are only countably many ordinal notations, since each notation is a natural number; thus there is a countable ordinal that is the supremum of all ordinals that have a notation. This ordinal is known as the Church–Kleene ordinal and is denoted ω_1^CK. Note that this ordinal is still countable, the symbol being only an analogy with the first uncountable ordinal, ω_1. The set of all natural numbers that are ordinal notations is denoted 𝒪 and called Kleene's 𝒪.
Ordinal notations are used to define iterated Turing jumps. The sets of natural numbers used to define the hierarchy are 0^(δ) for each δ < ω_1^CK.
0^(δ) is sometimes also denoted H(δ), or H_e for a notation e for δ. Suppose that δ has notation e. These sets were first defined by Davis (1950) and Mostowski (1951). The set 0^(δ) is defined using e as follows.
If δ = 0 then 0^(δ) = 0 is the empty set.
If δ = λ + 1 then 0^(δ) is the Turing jump of 0^(λ). The sets 0^(1) and 0^(2) are 0′ and 0′′, respectively.
If δ is a limit ordinal, let ⟨λ_n | n ∈ ℕ⟩ be the sequence of ordinals less than δ given by the notation e. The set 0^(δ) is given by the rule 0^(δ) = {⟨n, i⟩ | i ∈ 0^(λ_n)}. This is the effective join of the sets 0^(λ_n).
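The effective join at a limit stage can be sketched with finite sets standing in for the jumps 0^(λ_n); the pairing chosen here is one concrete option, and the finite `universe` parameter exists only so the sketch terminates.

```python
def pair(n, i):
    """Cantor pairing <n, i> (one concrete effective pairing; an
    assumption of this sketch)."""
    return (n + i) * (n + i + 1) // 2 + i

def effective_join(sets):
    """Effective join of a list of sets A_0, A_1, ...: the set
    { <n, i> : i in A_n }.  Each A_n is uniformly recoverable from
    the join, which is how 0^(delta) packages the sets 0^(lambda_n)
    at a limit ordinal."""
    return {pair(n, i) for n, a in enumerate(sets) for i in a}

def section(joined, n, universe):
    """Recover A_n from the join by querying membership of <n, i>."""
    return {i for i in universe if pair(n, i) in joined}

A = [{0, 2}, {1}, set()]
J = effective_join(A)
```

Because the pairing is injective and computable, membership in any single A_n reduces to membership in the join, so the join is Turing above each component.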
Although the construction of 0^(δ) depends on having a fixed notation for δ, and each infinite ordinal has many notations, a theorem of Clifford Spector shows that the Turing degree of 0^(δ) depends only on δ, not on the particular notation used, and thus 0^(δ) is well defined up to Turing degree.
The hyperarithmetical hierarchy is defined from these iterated Turing jumps. A set X of natural numbers is classified at level δ of the hyperarithmetical hierarchy, for δ < ω_1^CK, if X is Turing reducible to 0^(δ). There will always be a least such δ if there is any; it is this least δ that measures the level of uncomputability of X.
== Hyperarithmetical sets and constructibility ==
Let L_α denote the α-th level of the constructible hierarchy, and let n : 𝒪 → ω_1^CK be the map from a member of Kleene's 𝒪 to the ordinal it represents. A subset of ℕ is hyperarithmetical if and only if it is a member of L_{ω_1^CK}. A subset of ℕ is definable by a Π^1_1 formula if and only if its image under n is Σ_1-definable on L_{ω_1^CK}, where Σ_1 is from the Lévy hierarchy of formulae.
== Hyperarithmetical sets and recursion in higher types ==
A third characterization of the hyperarithmetical sets, due to Kleene, uses higher-type computable functionals. The type-2 functional ²E : ℕ^ℕ → ℕ is defined by the following rules:
²E(f) = 1 if there is an i such that f(i) > 0,
²E(f) = 0 if there is no i such that f(i) > 0.
Using a precise definition of computability relative to a type-2 functional, Kleene showed that a set of natural numbers is hyperarithmetical if and only if it is computable relative to ²E.
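²E itself is not a computable functional: deciding "f(i) > 0 for some i" requires surveying all of ℕ. A sketch can therefore only evaluate it under an extra promise, for example that f vanishes beyond an explicitly given finite list; this restriction is ours, not part of Kleene's definition.

```python
def E2_with_finite_promise(f_values):
    """Evaluate 2E(f) for an f presented as a finite list of values,
    together with the promise that f(i) = 0 for all i beyond the list.
    Under that promise the unbounded search in the definition of 2E
    collapses to a finite one."""
    return 1 if any(v > 0 for v in f_values) else 0
```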
== Example: the truth set of arithmetic ==
Every arithmetical set is hyperarithmetical, but there are many other hyperarithmetical sets. One example of a hyperarithmetical, nonarithmetical set is the set T of Gödel numbers of formulas of Peano arithmetic that are true in the standard natural numbers ℕ. The set T is Turing equivalent to the set 0^(ω), and so is not high in the hyperarithmetical hierarchy, although it is not arithmetically definable by Tarski's indefinability theorem.
== Fundamental results ==
The fundamental results of hyperarithmetic theory show that the three definitions above define the same collection of sets of natural numbers. These equivalences are due to Kleene.
Completeness results are also fundamental to the theory. A set of natural numbers is Π^1_1 complete if it is at level Π^1_1 of the analytical hierarchy and every Π^1_1 set of natural numbers is many-one reducible to it. The definition of a Π^1_1 complete subset of Baire space (ℕ^ℕ) is similar. Several sets associated with hyperarithmetic theory are Π^1_1 complete:
Kleene's 𝒪, the set of natural numbers that are notations for ordinal numbers
The set of natural numbers e such that the computable function φ_e(x, y) computes the characteristic function of a well ordering of the natural numbers. These are the indices of recursive ordinals.
The set of elements of Baire space that are the characteristic functions of a well ordering of the natural numbers (using an effective isomorphism ℕ^ℕ ≅ ℕ^(ℕ×ℕ)).
Results known as Σ^1_1 bounding follow from these completeness results. For any Σ^1_1 set S of ordinal notations, there is an α < ω_1^CK such that every element of S is a notation for an ordinal less than α. For any Σ^1_1 subset T of Baire space consisting only of characteristic functions of well orderings, there is an α < ω_1^CK such that each ordinal represented in T is less than α.
== Relativized hyperarithmeticity and hyperdegrees ==
The definition of 𝒪 can be relativized to a set X of natural numbers: in the definition of an ordinal notation, the clause for limit ordinals is changed so that the computable enumeration of a sequence of ordinal notations is allowed to use X as an oracle. The set of numbers that are ordinal notations relative to X is denoted 𝒪^X. The supremum of ordinals represented in 𝒪^X is denoted ω_1^X; this is a countable ordinal no smaller than ω_1^CK.
The definition of 0^(δ) can also be relativized to an arbitrary set X of natural numbers. The only change in the definition is that X^(0) is defined to be X rather than the empty set, so that X^(1) = X′ is the Turing jump of X, and so on. Rather than terminating at ω_1^CK, the hierarchy relative to X runs through all ordinals less than ω_1^X.
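Schematically, and assuming the same transfinite recursion the article uses for 0^(δ), the relativized hierarchy can be sketched as:

```latex
X^{(0)} = X, \qquad
X^{(\delta+1)} = \bigl(X^{(\delta)}\bigr)', \qquad
X^{(\lambda)} = \text{an effective join of } \{\,X^{(\delta)} : \delta < \lambda\,\}
\ \text{ for limit } \lambda \text{ given by a notation in } \mathcal{O}^{X}.
```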
The relativized hyperarithmetical hierarchy is used to define hyperarithmetical reducibility. Given sets X and Y, we say X ≤_HYP Y if and only if there is a δ < ω_1^Y such that X is Turing reducible to Y^(δ). If X ≤_HYP Y and Y ≤_HYP X, then the notation X ≡_HYP Y is used to indicate that X and Y are hyperarithmetically equivalent. This is a coarser equivalence relation than Turing equivalence; for example, every set of natural numbers is hyperarithmetically equivalent to its Turing jump but not Turing equivalent to its Turing jump. The equivalence classes of hyperarithmetical equivalence are known as hyperdegrees.
The function that takes a set X to 𝒪^X is known as the hyperjump, by analogy with the Turing jump. Many properties of the hyperjump and hyperdegrees have been established. In particular, it is known that Post's problem for hyperdegrees has a positive answer: for every set X of natural numbers there is a set Y of natural numbers such that X <_HYP Y <_HYP 𝒪^X.
== Generalizations ==
Hyperarithmetical theory is generalized by α-recursion theory, which is the study of definable subsets of admissible ordinals. Hyperarithmetical theory is the special case in which α is ω_1^CK.
== Relation to other hierarchies ==
== References ==
H. Rogers, Jr., 1967. The Theory of Recursive Functions and Effective Computability, second edition 1987, MIT Press. ISBN 0-262-68052-1 (paperback), ISBN 0-07-053522-1
G. Sacks, 1990. Higher Recursion Theory, Springer-Verlag. ISBN 3-540-19305-7
S. Simpson, 1999. Subsystems of Second Order Arithmetic, Springer-Verlag.
C. J. Ash, J. F. Knight, 2000. Computable Structures and the Hyperarithmetical Hierarchy, Elsevier. ISBN 0-444-50072-3
== Citations ==
== External links ==
Descriptive set theory. Notes by David Marker, University of Illinois at Chicago. 2002.
Mathematical Logic II. Notes by Dag Normann, The University of Oslo. 2005.
Antonio Montalbán: University of California, Berkeley and YouTube content creator
In mathematical logic, an alternative set theory is any of the alternative mathematical approaches to the concept of set and any alternative to the de facto standard set theory described in axiomatic set theory by the axioms of Zermelo–Fraenkel set theory.
== Alternative set theories ==
Alternative set theories include:
Vopěnka's alternative set theory
Von Neumann–Bernays–Gödel set theory
Morse–Kelley set theory
Tarski–Grothendieck set theory
Ackermann set theory
Type theory
New Foundations
Positive set theory
Internal set theory
Pocket set theory
Naive set theory
S (set theory)
Double extension set theory
Kripke–Platek set theory
Kripke–Platek set theory with urelements
Scott–Potter set theory
Constructive set theory
Zermelo set theory
General set theory
Mac Lane set theory
== See also ==
Non-well-founded set theory
List of first-order theories § Set theories
== Notes ==
Cantor's first set theory article contains Georg Cantor's first theorems of transfinite set theory, which studies infinite sets and their properties. One of these theorems is his "revolutionary discovery" that the set of all real numbers is uncountably, rather than countably, infinite. This theorem is proved using Cantor's first uncountability proof, which differs from the more familiar proof using his diagonal argument. The title of the article, "On a Property of the Collection of All Real Algebraic Numbers" ("Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen"), refers to its first theorem: the set of real algebraic numbers is countable. Cantor's article was published in 1874. In 1879, he modified his uncountability proof by using the topological notion of a set being dense in an interval.
Cantor's article also contains a proof of the existence of transcendental numbers. Both constructive and non-constructive proofs have been presented as "Cantor's proof." The popularity of presenting a non-constructive proof has led to a misconception that Cantor's arguments are non-constructive. Since the proof that Cantor published either constructs transcendental numbers or does not, an analysis of his article can determine whether or not this proof is constructive. Cantor's correspondence with Richard Dedekind shows the development of his ideas and reveals that he had a choice between two proofs: a non-constructive proof that uses the uncountability of the real numbers and a constructive proof that does not use uncountability.
Historians of mathematics have examined Cantor's article and the circumstances in which it was written. For example, they have discovered that Cantor was advised to leave out his uncountability theorem in the article he submitted — he added it during proofreading. They have traced this and other facts about the article to the influence of Karl Weierstrass and Leopold Kronecker. Historians have also studied Dedekind's contributions to the article, including his contributions to the theorem on the countability of the real algebraic numbers. In addition, they have recognized the role played by the uncountability theorem and the concept of countability in the development of set theory, measure theory, and the Lebesgue integral.
== The article ==
Cantor's article is short, less than four and a half pages. It begins with a discussion of the real algebraic numbers and a statement of his first theorem: The set of real algebraic numbers can be put into one-to-one correspondence with the set of positive integers. Cantor restates this theorem in terms more familiar to mathematicians of his time: "The set of real algebraic numbers can be written as an infinite sequence in which each number appears only once."
Cantor's second theorem works with a closed interval [a, b], which is the set of real numbers ≥ a and ≤ b. The theorem states: Given any sequence of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in the given sequence. Hence, there are infinitely many such numbers.
Cantor observes that combining his two theorems yields a new proof of Liouville's theorem that every interval [a, b] contains infinitely many transcendental numbers.
Cantor then remarks that his second theorem is:
the reason why collections of real numbers forming a so-called continuum (such as, all real numbers which are ≥ 0 and ≤ 1) cannot correspond one-to-one with the collection (ν) [the collection of all positive integers]; thus I have found the clear difference between a so-called continuum and a collection like the totality of real algebraic numbers.
This remark contains Cantor's uncountability theorem, which only states that an interval [a, b] cannot be put into one-to-one correspondence with the set of positive integers. It does not state that this interval is an infinite set of larger cardinality than the set of positive integers. Cardinality is defined in Cantor's next article, which was published in 1878.
Cantor only states his uncountability theorem. He does not use it in any proofs.
== The proofs ==
=== First theorem ===
To prove that the set of real algebraic numbers is countable, define the height of a polynomial of degree n with integer coefficients as: n − 1 + |a0| + |a1| + ... + |an|, where a0, a1, ..., an are the coefficients of the polynomial. Order the polynomials by their height, and order the real roots of polynomials of the same height by numeric order. Since there are only a finite number of roots of polynomials of a given height, these orderings put the real algebraic numbers into a sequence. Cantor went a step further and produced a sequence in which each real algebraic number appears just once. He did this by only using polynomials that are irreducible over the integers. The following table contains the beginning of Cantor's enumeration.
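The height ordering can be sketched in a few lines of code (an illustrative sketch; the function name polynomials_of_height is hypothetical, and unlike Cantor it does not restrict to irreducible polynomials, so some algebraic numbers would appear more than once):

```python
from itertools import product

def polynomials_of_height(h):
    """List the coefficient tuples (a0, ..., an), with an != 0, of the
    integer polynomials whose Cantor height n - 1 + |a0| + ... + |an|
    equals h.  Each height admits only finitely many polynomials, and
    hence only finitely many real roots."""
    polys = []
    for n in range(1, h + 1):        # degree n contributes n - 1 to the height
        s = h - (n - 1)              # what remains for |a0| + ... + |an|
        coeff_range = range(-s, s + 1)
        for coeffs in product(coeff_range, repeat=n + 1):
            if coeffs[-1] != 0 and sum(abs(c) for c in coeffs) == s:
                polys.append(coeffs)
    return polys

# Height 1 admits only x and -x, whose single root 0 begins the enumeration.
```

Since each height yields a finite batch of roots, concatenating the batches in height order arranges all real algebraic numbers into a single sequence, which is the content of the first theorem.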
=== Second theorem ===
Only the first part of Cantor's second theorem needs to be proved. It states: Given any sequence of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in the given sequence.
To find a number in [a, b] that is not contained in the given sequence, construct two sequences of real numbers as follows: Find the first two numbers of the given sequence that are in the open interval (a, b). Denote the smaller of these two numbers by a1 and the larger by b1. Similarly, find the first two numbers of the given sequence that are in (a1, b1). Denote the smaller by a2 and the larger by b2. Continuing this procedure generates a sequence of intervals (a1, b1), (a2, b2), (a3, b3), ... such that each interval in the sequence contains all succeeding intervals — that is, it generates a sequence of nested intervals. This implies that the sequence a1, a2, a3, ... is increasing and the sequence b1, b2, b3, ... is decreasing.
The number of intervals generated is either finite or infinite. If finite, let (aL, bL) be the last interval. If infinite, take the limits a∞ = limn → ∞ an and b∞ = limn → ∞ bn. Since an < bn for all n, either a∞ = b∞ or a∞ < b∞. Thus, there are three cases to consider:
Case 1: There is a last interval (aL, bL). Since at most one xn can be in this interval, every y in this interval except xn (if it exists) is not in the given sequence.
Case 2: a∞ = b∞. Then a∞ is not in the sequence since for all n : a∞ is in the interval (an, bn) but xn does not belong to (an, bn). In symbols: a∞ ∈ (an, bn) but xn ∉ (an, bn).
Case 3: a∞ < b∞. Then every y in [a∞, b∞] is not contained in the given sequence since for all n : y belongs to (an, bn) but xn does not.
The proof is complete since, in all cases, at least one real number in [a, b] has been found that is not contained in the given sequence.
Cantor's proofs are constructive and have been used to write a computer program that generates the digits of a transcendental number. This program applies Cantor's construction to a sequence containing all the real algebraic numbers between 0 and 1. The article that discusses this program gives some of its output, which shows how the construction generates a transcendental.
=== Example of Cantor's construction ===
An example illustrates how Cantor's construction works. Consider the sequence: 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, 3/5, 4/5, ... This sequence is obtained by ordering the rational numbers in (0, 1) by increasing denominators, ordering those with the same denominator by increasing numerators, and omitting reducible fractions. The table below shows the first five steps of the construction. The table's first column contains the intervals (an, bn). The second column lists the terms visited during the search for the first two terms in (an, bn). These two terms are in red.
Since the sequence contains all the rational numbers in (0, 1), the construction generates an irrational number, which turns out to be √2 − 1.
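This example can be checked mechanically. The sketch below (function names hypothetical) applies the construction from the second theorem to the sequence above and watches the nested intervals close in on √2 − 1 ≈ 0.41421:

```python
from fractions import Fraction
from math import gcd

def rationals_in_unit_interval():
    """Yield the rationals in (0, 1) ordered by increasing denominator,
    then increasing numerator, omitting reducible fractions."""
    q = 2
    while True:
        for p in range(1, q):
            if gcd(p, q) == 1:
                yield Fraction(p, q)
        q += 1

def cantor_interval(steps):
    """Run `steps` rounds of Cantor's construction starting from (0, 1):
    each round scans the sequence from its start for the first two terms
    strictly inside the current interval and takes them as new endpoints."""
    a, b = Fraction(0), Fraction(1)
    for _ in range(steps):
        found = []
        for x in rationals_in_unit_interval():
            if a < x < b:
                found.append(x)
                if len(found) == 2:
                    break
        a, b = min(found), max(found)
    return a, b

# Two rounds give (2/5, 3/7); further rounds squeeze toward sqrt(2) - 1.
```

Exact Fraction arithmetic keeps the endpoints as the precise rationals the construction picks, matching the table in the text round for round.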
== Cantor's 1879 uncountability proof ==
=== Everywhere dense ===
In 1879, Cantor published a new uncountability proof that modifies his 1874 proof. He first defines the topological notion of a point set P being "everywhere dense in an interval":
If P lies partially or completely in the interval [α, β], then the remarkable case can happen that every interval [γ, δ] contained in [α, β], no matter how small, contains points of P. In such a case, we will say that P is everywhere dense in the interval [α, β].
In this discussion of Cantor's proof: a, b, c, d are used instead of α, β, γ, δ. Also, Cantor only uses his interval notation if the first endpoint is less than the second. For this discussion, this means that (a, b) implies a < b.
Since the discussion of Cantor's 1874 proof was simplified by using open intervals rather than closed intervals, the same simplification is used here. This requires an equivalent definition of everywhere dense: A set P is everywhere dense in the interval [a, b] if and only if every open subinterval (c, d) of [a, b] contains at least one point of P.
Cantor did not specify how many points of P an open subinterval (c, d) must contain. He did not need to specify this because the assumption that every open subinterval contains at least one point of P implies that every open subinterval contains infinitely many points of P.
=== Cantor's 1879 proof ===
Cantor modified his 1874 proof with a new proof of its second theorem: Given any sequence P of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in P. Cantor's new proof has only two cases. First, it handles the case of P not being dense in the interval, then it deals with the more difficult case of P being dense in the interval. This division into cases not only indicates which sequences are more difficult to handle, but it also reveals the important role denseness plays in the proof.
In the first case, P is not dense in [a, b]. By definition, P is dense in [a, b] if and only if for all subintervals (c, d) of [a, b], there is an x ∈ P such that x ∈ (c, d). Taking the negation of each side of the "if and only if" produces: P is not dense in [a, b] if and only if there exists a subinterval (c, d) of [a, b] such that for all x ∈ P : x ∉ (c, d). Therefore, every number in (c, d) is not contained in the sequence P. This case handles case 1 and case 3 of Cantor's 1874 proof.
In the second case, which handles case 2 of Cantor's 1874 proof, P is dense in [a, b]. The denseness of sequence P is used to recursively define a sequence of nested intervals that excludes all the numbers in P and whose intersection contains a single real number in [a, b]. The sequence of intervals starts with (a, b). Given an interval in the sequence, the next interval is obtained by finding the two numbers with the least indices that belong to P and to the current interval. These two numbers are the endpoints of the next open interval. Since an open interval excludes its endpoints, every nested interval eliminates two numbers from the front of sequence P, which implies that the intersection of the nested intervals excludes all the numbers in P. Details of this proof and a proof that this intersection contains a single real number in [a, b] are given below.
== The development of Cantor's ideas ==
The development leading to Cantor's 1874 article appears in the correspondence between Cantor and Richard Dedekind. On November 29, 1873, Cantor asked Dedekind whether the collection of positive integers and the collection of positive real numbers "can be corresponded so that each individual of one collection corresponds to one and only one individual of the other?" Cantor added that collections having such a correspondence include the collection of positive rational numbers, and collections of the form (a_{n1, n2, ..., nν}) where n1, n2, ..., nν, and ν are positive integers.
Dedekind replied that he was unable to answer Cantor's question, and said that it "did not deserve too much effort because it has no particular practical interest". Dedekind also sent Cantor a proof that the set of algebraic numbers is countable.
On December 2, Cantor responded that his question does have interest: "It would be nice if it could be answered; for example, provided that it could be answered no, one would have a new proof of Liouville's theorem that there are transcendental numbers."
On December 7, Cantor sent Dedekind a proof by contradiction that the set of real numbers is uncountable. Cantor starts by assuming that the real numbers in [0, 1] can be written as a sequence. Then, he applies a construction to this sequence to produce a number in [0, 1] that is not in the sequence, thus contradicting his assumption. Together, the letters of December 2 and 7 provide a non-constructive proof of the existence of transcendental numbers. Also, the proof in Cantor's December 7 letter shows some of the reasoning that led to his discovery that the real numbers form an uncountable set.
Dedekind received Cantor's proof on December 8. On that same day, Dedekind simplified the proof and mailed his proof to Cantor. Cantor used Dedekind's proof in his article. The letter containing Cantor's December 7 proof was not published until 1937.
On December 9, Cantor announced the theorem that allowed him to construct transcendental numbers as well as prove the uncountability of the set of real numbers:
I show directly that if I start with a sequence
(1) ω1, ω2, ... , ωn, ...
I can determine, in every given interval [α, β], a number η that is not included in (1).
This is the second theorem in Cantor's article. It comes from realizing that his construction can be applied to any sequence, not just to sequences that supposedly enumerate the real numbers. So Cantor had a choice between two proofs that demonstrate the existence of transcendental numbers: one proof is constructive, but the other is not. These two proofs can be compared by starting with a sequence consisting of all the real algebraic numbers.
The constructive proof applies Cantor's construction to this sequence and the interval [a, b] to produce a transcendental number in this interval.
The non-constructive proof uses two proofs by contradiction:
The proof by contradiction used to prove the uncountability theorem (see Proof of Cantor's uncountability theorem).
The proof by contradiction used to prove the existence of transcendental numbers from the countability of the real algebraic numbers and the uncountability of real numbers. Cantor's December 2nd letter mentions this existence proof but does not contain it. Here is a proof: Assume that there are no transcendental numbers in [a, b]. Then all the numbers in [a, b] are algebraic. This implies that they form a subsequence of the sequence of all real algebraic numbers, which contradicts Cantor's uncountability theorem. Thus, the assumption that there are no transcendental numbers in [a, b] is false. Therefore, there is a transcendental number in [a, b].
Cantor chose to publish the constructive proof, which not only produces a transcendental number but is also shorter and avoids two proofs by contradiction. The non-constructive proof from Cantor's correspondence is simpler than the one above because it works with all the real numbers rather than the interval [a, b]. This eliminates the subsequence step and all occurrences of [a, b] in the second proof by contradiction.
== A misconception about Cantor's work ==
Akihiro Kanamori, who specializes in set theory, stated that "Accounts of Cantor's work have mostly reversed the order for deducing the existence of transcendental numbers, establishing first the uncountability of the reals and only then drawing the existence conclusion from the countability of the algebraic numbers. In textbooks the inversion may be inevitable, but this has promoted the misconception that Cantor's arguments are non-constructive."
Cantor's published proof and the reverse-order proof both use the theorem: Given a sequence of reals, a real can be found that is not in the sequence. By applying this theorem to the sequence of real algebraic numbers, Cantor produced a transcendental number. He then proved that the reals are uncountable: Assume that there is a sequence containing all the reals. Applying the theorem to this sequence produces a real not in the sequence, contradicting the assumption that the sequence contains all the reals. Hence, the reals are uncountable. The reverse-order proof starts by first proving the reals are uncountable. It then proves that transcendental numbers exist: If there were no transcendental numbers, all the reals would be algebraic and hence countable, which contradicts what was just proved. This contradiction proves that transcendental numbers exist without constructing any.
The correspondence containing Cantor's non-constructive reasoning was published in 1937. By then, other mathematicians had rediscovered his non-constructive, reverse-order proof. As early as 1921, this proof was called "Cantor's proof" and criticized for not producing any transcendental numbers. In that year, Oskar Perron gave the reverse-order proof and then stated: "... Cantor's proof for the existence of transcendental numbers has, along with its simplicity and elegance, the great disadvantage that it is only an existence proof; it does not enable us to actually specify even a single transcendental number."
As early as 1930, mathematicians attempted to correct this misconception of Cantor's work. In that year, the set theorist Abraham Fraenkel stated that Cantor's method is "... a method that incidentally, contrary to a widespread interpretation, is fundamentally constructive and not merely existential." In 1972, Irving Kaplansky wrote: "It is often said that Cantor's proof is not 'constructive,' and so does not yield a tangible transcendental number. This remark is not justified. If we set up a definite listing of all algebraic numbers ... and then apply the diagonal procedure ..., we get a perfectly definite transcendental number (it could be computed to any number of decimal places)." Cantor's proof is not only constructive, it is also simpler than Perron's proof, which requires the detour of first proving that the set of all reals is uncountable.
Cantor's diagonal argument has often replaced his 1874 construction in expositions of his proof. The diagonal argument is constructive and produces a more efficient computer program than his 1874 construction. Using it, a computer program has been written that computes the digits of a transcendental number in polynomial time. The program that uses Cantor's 1874 construction requires at least sub-exponential time.
The presentation of the non-constructive proof without mentioning Cantor's constructive proof appears in some books that were quite successful as measured by the length of time new editions or reprints appeared—for example: Oskar Perron's Irrationalzahlen (1921; 1960, 4th edition), Eric Temple Bell's Men of Mathematics (1937; still being reprinted), Godfrey Hardy and E. M. Wright's An Introduction to the Theory of Numbers (1938; 2008 6th edition), Garrett Birkhoff and Saunders Mac Lane's A Survey of Modern Algebra (1941; 1997 5th edition), and Michael Spivak's Calculus (1967; 2008 4th edition). Since 2014, at least two books have appeared stating that Cantor's proof is constructive, and at least four have appeared stating that his proof does not construct any (or a single) transcendental.
Asserting that Cantor gave a non-constructive argument without mentioning the constructive proof he published can lead to erroneous statements about the history of mathematics. In A Survey of Modern Algebra, Birkhoff and Mac Lane state: "Cantor's argument for this result [Not every real number is algebraic] was at first rejected by many mathematicians, since it did not exhibit any specific transcendental number." The proof that Cantor published produces transcendental numbers, and there appears to be no evidence that his argument was rejected. Even Leopold Kronecker, who had strict views on what is acceptable in mathematics and who could have delayed publication of Cantor's article, did not delay it. In fact, applying Cantor's construction to the sequence of real algebraic numbers produces a limiting process that Kronecker accepted—namely, it determines a number to any required degree of accuracy.
== The influence of Weierstrass and Kronecker on Cantor's article ==
Historians of mathematics have discovered the following facts about Cantor's article "On a Property of the Collection of All Real Algebraic Numbers":
Cantor's uncountability theorem was left out of the article he submitted. He added it during proofreading.
The article's title refers to the set of real algebraic numbers. The main topic in Cantor's correspondence was the set of real numbers.
The proof of Cantor's second theorem came from Dedekind. However, it omits Dedekind's explanation of why the limits a∞ and b∞ exist.
Cantor restricted his first theorem to the set of real algebraic numbers. The proof he was using demonstrates the countability of the set of all algebraic numbers.
To explain these facts, historians have pointed to the influence of Cantor's former professors, Karl Weierstrass and Leopold Kronecker. Cantor discussed his results with Weierstrass on December 23, 1873. Weierstrass was first amazed by the concept of countability, but then found the countability of the set of real algebraic numbers useful. Cantor did not want to publish yet, but Weierstrass felt that he must publish at least his results concerning the algebraic numbers.
From his correspondence, it appears that Cantor only discussed his article with Weierstrass. However, Cantor told Dedekind: "The restriction which I have imposed on the published version of my investigations is caused in part by local circumstances ..." Cantor biographer Joseph Dauben believes that "local circumstances" refers to Kronecker who, as a member of the editorial board of Crelle's Journal, had delayed publication of an 1870 article by Eduard Heine, one of Cantor's colleagues. Cantor would submit his article to Crelle's Journal.
Weierstrass advised Cantor to leave his uncountability theorem out of the article he submitted, but Weierstrass also told Cantor that he could add it as a marginal note during proofreading, which he did. It appears in a remark at the end of the article's introduction. The opinions of Kronecker and Weierstrass both played a role here. Kronecker did not accept infinite sets, and it seems that Weierstrass did not accept that two infinite sets could be so different, with one being countable and the other not. Weierstrass changed his opinion later. Without the uncountability theorem, the article needed a title that did not refer to this theorem. Cantor chose "Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen" ("On a Property of the Collection of All Real Algebraic Numbers"), which refers to the countability of the set of real algebraic numbers, the result that Weierstrass found useful.
Kronecker's influence appears in the proof of Cantor's second theorem. Cantor used Dedekind's version of the proof except that he left out why the limits a∞ = limn → ∞ an and b∞ = limn → ∞ bn exist. Dedekind had used his "principle of continuity" to prove they exist. This principle (which is equivalent to the least upper bound property of the real numbers) comes from Dedekind's construction of the real numbers, a construction Kronecker did not accept.
Cantor restricted his first theorem to the set of real algebraic numbers even though Dedekind had sent him a proof that handled all algebraic numbers. Cantor did this for expository reasons and because of "local circumstances". This restriction simplifies the article because the second theorem works with real sequences. Hence, the construction in the second theorem can be applied directly to the enumeration of the real algebraic numbers to produce "an effective procedure for the calculation of transcendental numbers". This procedure would be acceptable to Weierstrass.
== Dedekind's contributions to Cantor's article ==
Since 1856, Dedekind had developed theories involving infinitely many infinite sets—for example: ideals, which he used in algebraic number theory, and Dedekind cuts, which he used to construct the real numbers. This work enabled him to understand and contribute to Cantor's work.
Dedekind's first contribution concerns the theorem that the set of real algebraic numbers is countable. Cantor is usually given credit for this theorem, but the mathematical historian José Ferreirós calls it "Dedekind's theorem." Their correspondence reveals what each mathematician contributed to the theorem.
In his letter introducing the concept of countability, Cantor stated without proof that the set of positive rational numbers is countable, as are sets of the form (a_{n1, n2, ..., nν}) where n1, n2, ..., nν, and ν are positive integers. Cantor's second result uses an indexed family of numbers: a set of the form (a_{n1, n2, ..., nν}) is the range of a function from the ν indices to the set of real numbers. His second result implies his first: let ν = 2 and a_{n1, n2} = n1/n2. The function can be quite general—for example, a_{n1, n2, n3, n4, n5} = (n1/n2)^{1/n3} + tan(n4/n5).
Dedekind replied with a proof of the theorem that the set of all algebraic numbers is countable. In his reply to Dedekind, Cantor did not claim to have proved Dedekind's result. He did indicate how he proved his theorem about indexed families of numbers: "Your proof that (n) [the set of positive integers] can be correlated one-to-one with the field of all algebraic numbers is approximately the same as the way I prove my contention in the last letter. I take n1² + n2² + ··· + nν² = 𝔑 and order the elements accordingly." However, Cantor's ordering is weaker than Dedekind's and cannot be extended to n-tuples of integers that include zeros.
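Cantor's stated ordering can be illustrated with a short sketch (the function name shells is hypothetical; the point it demonstrates is only that each value of the "norm" n1² + n2² is attained by finitely many tuples, shown here for ν = 2):

```python
def shells(max_norm):
    """List pairs (n1, n2) of positive integers with n1^2 + n2^2 <= max_norm,
    ordered by the norm n1^2 + n2^2 that Cantor used; ties within a shell
    are broken lexicographically for definiteness.  Each norm value is hit
    by only finitely many pairs, so the shells enumerate all pairs."""
    pairs = [(n1, n2)
             for n1 in range(1, max_norm + 1)
             for n2 in range(1, max_norm + 1)
             if n1 * n1 + n2 * n2 <= max_norm]
    return sorted(pairs, key=lambda p: (p[0] ** 2 + p[1] ** 2, p))

# With a_{n1, n2} = n1/n2 this yields an enumeration (with repetitions)
# of the positive rational numbers.
```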
Dedekind's second contribution is his proof of Cantor's second theorem. Dedekind sent this proof in reply to Cantor's letter that contained the uncountability theorem, which Cantor proved using infinitely many sequences. Cantor next wrote that he had found a simpler proof that did not use infinitely many sequences. So Cantor had a choice of proofs and chose to publish Dedekind's.
Cantor thanked Dedekind privately for his help: "... your comments (which I value highly) and your manner of putting some of the points were of great assistance to me." However, he did not mention Dedekind's help in his article. In previous articles, he had acknowledged help received from Kronecker, Weierstrass, Heine, and Hermann Schwarz. Cantor's failure to mention Dedekind's contributions damaged his relationship with Dedekind. Dedekind stopped replying to his letters and did not resume the correspondence until October 1876.
== The legacy of Cantor's article ==
Cantor's article introduced the uncountability theorem and the concept of countability. Both would lead to significant developments in mathematics. The uncountability theorem demonstrated that one-to-one correspondences can be used to analyze infinite sets. In 1878, Cantor used them to define and compare cardinalities. He also constructed one-to-one correspondences to prove that the n-dimensional spaces Rn (where R is the set of real numbers) and the set of irrational numbers have the same cardinality as R.
In 1883, Cantor extended the positive integers with his infinite ordinals. This extension was necessary for his work on the Cantor–Bendixson theorem. Cantor discovered other uses for the ordinals—for example, he used sets of ordinals to produce an infinity of sets having different infinite cardinalities. His work on infinite sets together with Dedekind's set-theoretical work created set theory.
The concept of countability led to countable operations and objects that are used in various areas of mathematics. For example, in 1878, Cantor introduced countable unions of sets. In the 1890s, Émile Borel used countable unions in his theory of measure, and René Baire used countable ordinals to define his classes of functions. Building on the work of Borel and Baire, Henri Lebesgue created his theories of measure and integration, which were published from 1899 to 1901.
Countable models are used in set theory. In 1922, Thoralf Skolem proved that if conventional axioms of set theory are consistent, then they have a countable model. Since this model is countable, its set of real numbers is countable. This consequence is called Skolem's paradox, and Skolem explained why it does not contradict Cantor's uncountability theorem: although there is a one-to-one correspondence between this set and the set of positive integers, no such one-to-one correspondence is a member of the model. Thus the model considers its set of real numbers to be uncountable, or more precisely, the first-order sentence that says the set of real numbers is uncountable is true within the model. In 1963, Paul Cohen used countable models to prove his independence theorems.
== See also ==
Cantor's theorem
== Notes ==
=== Note on Cantor's 1879 proof ===
== References ==
== Bibliography ==
Arkhangel'skii, A. V.; Fedorchuk, V. V. (1990), "The basic concepts and constructions of general topology", in Arkhangel'skii, A. V.; Pontryagin, L. S. (eds.), General Topology I, New York, Berlin: Springer-Verlag, pp. 1–90, ISBN 978-0-387-18178-3.
Audin, Michèle (2011), Remembering Sofya Kovalevskaya, London: Springer, ISBN 978-0-85729-928-4.
Bell, Eric Temple (1937), Men of Mathematics, New York: Simon & Schuster. Reprinted, 1984, ISBN 978-0-671-62818-5.
Birkhoff, Garrett; Mac Lane, Saunders (1941), A Survey of Modern Algebra, New York: Macmillan. Reprinted, Taylor & Francis, 1997, ISBN 978-1-56881-068-3.
Burton, David M. (1995), Burton's History of Mathematics (3rd ed.), Dubuque, Iowa: William C. Brown, ISBN 978-0-697-16089-8.
Cantor, Georg (1874), "Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen", Journal für die Reine und Angewandte Mathematik (in German), 1874 (77): 258–262, doi:10.1515/crll.1874.77.258, S2CID 199545885.
Cantor, Georg (1878), "Ein Beitrag zur Mannigfaltigkeitslehre", Journal für die Reine und Angewandte Mathematik (in German), 1878 (84): 242–258, doi:10.1515/crll.1878.84.242.
Cantor, Georg (1879), "Ueber unendliche, lineare Punktmannichfaltigkeiten. 1.", Mathematische Annalen (in German), 15: 1–7, doi:10.1007/bf01444101, S2CID 179177510.
Chowdhary, K. R. (2015), Fundamentals of Discrete Mathematical Structures (3rd ed.), Delhi, India: PHI Learning, ISBN 978-81-203-5074-8.
Cohen, Paul J. (1963), "The Independence of the Continuum Hypothesis", Proceedings of the National Academy of Sciences of the United States of America, 50 (6): 1143–1148, Bibcode:1963PNAS...50.1143C, doi:10.1073/pnas.50.6.1143, PMC 221287, PMID 16578557.
Dasgupta, Abhijit (2014), Set Theory: With an Introduction to Real Point Sets, New York: Springer, ISBN 978-1-4614-8853-8.
Dauben, Joseph (1979), Georg Cantor: His Mathematics and Philosophy of the Infinite, Cambridge, Mass.: Harvard University Press, ISBN 978-0-674-34871-4.
Dauben, Joseph (1993), "Georg Cantor and the Battle for Transfinite Set Theory" (PDF), 9th ACMS Conference Proceedings.
Edwards, Harold M. (1989), "Kronecker's Views on the Foundations of Mathematics", in Rowe, David E.; McCleary, John (eds.), The History of Modern Mathematics, Volume 1, New York: Academic Press, pp. 67–77, ISBN 978-0-12-599662-4.
Ewald, William B., ed. (1996), From Immanuel Kant to David Hilbert: A Source Book in the Foundations of Mathematics, Volume 2, New York: Oxford University Press, ISBN 978-0-19-850536-5.
Ferreirós, José (1993), "On the relations between Georg Cantor and Richard Dedekind", Historia Mathematica, 20 (4): 343–363, doi:10.1006/hmat.1993.1030.
Ferreirós, José (2007), Labyrinth of Thought: A History of Set Theory and Its Role in Mathematical Thought (2nd revised ed.), Basel: Birkhäuser, ISBN 978-3-7643-8349-7.
Fraenkel, Abraham (1930), "Georg Cantor", Jahresbericht der Deutschen Mathematiker-Vereinigung (in German), 39: 189–266.
Grattan-Guinness, Ivor (1971), "The Correspondence between Georg Cantor and Philip Jourdain", Jahresbericht der Deutschen Mathematiker-Vereinigung, 73: 111–130.
Gray, Robert (1994), "Georg Cantor and Transcendental Numbers" (PDF), American Mathematical Monthly, 101 (9): 819–832, doi:10.2307/2975129, JSTOR 2975129, MR 1300488, Zbl 0827.01004, archived from the original (PDF) on 2022-01-21, retrieved 2016-02-13.
Hardy, Godfrey; Wright, E. M. (1938), An Introduction to the Theory of Numbers, Oxford: Clarendon Press.
Havil, Julian (2012), The Irrationals, Princeton, Oxford: Princeton University Press, ISBN 978-0-691-16353-6.
Hawkins, Thomas (1970), Lebesgue's Theory of Integration, Madison, Wisconsin: University of Wisconsin Press, ISBN 978-0-299-05550-9.
Jarvis, Frazer (2014), Algebraic Number Theory, New York: Springer, ISBN 978-3-319-07544-0.
Kanamori, Akihiro (2012), "Set Theory from Cantor to Cohen" (PDF), in Gabbay, Dov M.; Kanamori, Akihiro; Woods, John H. (eds.), Sets and Extensions in the Twentieth Century, Amsterdam, Boston: Cambridge University Press, pp. 1–71, ISBN 978-0-444-51621-3.
Kaplansky, Irving (1972), Set Theory and Metric Spaces, Boston: Allyn and Bacon, ISBN 978-0-8284-0298-9.
Kelley, John L. (1991), General Topology, New York: Springer, ISBN 978-3-540-90125-9.
LeVeque, William J. (1956), Topics in Number Theory, vol. I, Reading, Massachusetts: Addison-Wesley. (Reprinted by Dover Publications, 2002, ISBN 978-0-486-42539-9.)
Noether, Emmy; Cavaillès, Jean, eds. (1937), Briefwechsel Cantor-Dedekind (in German), Paris: Hermann.
Perron, Oskar (1921), Irrationalzahlen (in German), Leipzig, Berlin: W. de Gruyter, OCLC 4636376.
Sheppard, Barnaby (2014), The Logic of Infinity, Cambridge: Cambridge University Press, ISBN 978-1-107-67866-8.
Spivak, Michael (1967), Calculus, London: W. A. Benjamin, ISBN 978-0914098911.
Stewart, Ian (2015), Galois Theory (4th ed.), Boca Raton, Florida: CRC Press, ISBN 978-1-4822-4582-0.
Stewart, Ian; Tall, David (2015), The Foundations of Mathematics (2nd ed.), New York: Oxford University Press, ISBN 978-0-19-870644-1.
Weisstein, Eric W., ed. (2003), "Continued Fraction", CRC Concise Encyclopedia of Mathematics, Boca Raton, Florida: Chapman & Hall/CRC, ISBN 978-1-58488-347-0. | Wikipedia/On_a_Property_of_the_Collection_of_All_Real_Algebraic_Numbers |
The relational model (RM) is an approach to managing data using a structure and language consistent with first-order predicate logic, first described in 1969 by English computer scientist Edgar F. Codd, where all data are represented in terms of tuples, grouped into relations. A database organized in terms of the relational model is a relational database.
The purpose of the relational model is to provide a declarative method for specifying data and queries: users directly state what information the database contains and what information they want from it, and let the database management system software take care of describing data structures for storing the data and retrieval procedures for answering queries.
Most relational databases use the SQL data definition and query language; these systems implement what can be regarded as an engineering approximation to the relational model. A table in a SQL database schema corresponds to a predicate variable; the contents of a table to a relation; key constraints, other constraints, and SQL queries correspond to predicates. However, SQL databases deviate from the relational model in many details, and Codd fiercely argued against deviations that compromise the original principles.
== History ==
The relational model was developed by Edgar F. Codd as a general model of data, and subsequently promoted by Chris Date and Hugh Darwen among others. In their 1995 The Third Manifesto, Date and Darwen try to demonstrate how the relational model can accommodate certain "desired" object-oriented features.
=== Extensions ===
Some years after publication of his 1970 model, Codd proposed a three-valued logic (True, False, Missing/NULL) version of it to deal with missing information, and in his The Relational Model for Database Management Version 2 (1990) he went a step further with a four-valued logic (True, False, Missing but Applicable, Missing but Inapplicable) version.
== Conceptualization ==
=== Basic concepts ===
A relation consists of a heading and a body. The heading defines a set of attributes, each with a name and data type (sometimes called a domain). The number of attributes in this set is the relation's degree or arity. The body is a set of tuples. A tuple is a collection of n values, where n is the relation's degree, and each value in the tuple corresponds to a unique attribute. The number of tuples in this set is the relation's cardinality.: 17–22
Relations are represented by relational variables or relvars, which can be reassigned.: 22–24 A database is a collection of relvars.: 112–113
In this model, databases follow the Information Principle: At any given time, all information in the database is represented solely by values within tuples, corresponding to attributes, in relations identified by relvars.: 111
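The basic concepts above can be sketched in a few lines of Python. This is a minimal illustration, not a real DBMS: a tuple is modeled as a frozenset of (attribute, value) pairs and a relation as a (heading, body) pair; attribute types are omitted for brevity, and all names and data are invented.

```python
def relation(heading, body):
    """A relation value: a heading (set of attribute names) and a body (set of tuples)."""
    heading = frozenset(heading)
    body = frozenset(body)
    for t in body:
        # every tuple must supply a value for exactly the heading's attributes
        assert {a for a, _ in t} == heading
    return heading, body

heading = {"ID", "Name"}
body = {
    frozenset({("ID", 1), ("Name", "Alice")}),
    frozenset({("ID", 2), ("Name", "Bob")}),
    frozenset({("ID", 2), ("Name", "Bob")}),   # a duplicate: sets absorb it
}
H, B = relation(heading, body)

degree = len(H)        # number of attributes: 2
cardinality = len(B)   # number of tuples: 2 -- the duplicate collapses away
```

Because both heading and body are sets, attributes are unordered and duplicate tuples cannot exist, matching the definitions above.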
=== Constraints ===
A database may define arbitrary boolean expressions as constraints. If all constraints evaluate as true, the database is consistent; otherwise, it is inconsistent. If a change to a database's relvars would leave the database in an inconsistent state, that change is illegal and must not succeed.: 91
In general, constraints are expressed using relational comparison operators, of which just one, "is subset of" (⊆), is theoretically sufficient.
Two special cases of constraints are expressed as keys and foreign keys:
==== Keys ====
A candidate key, or simply a key, is the smallest subset of attributes guaranteed to uniquely differentiate each tuple in a relation. Since each tuple in a relation must be unique, every relation necessarily has a key, which may be its complete set of attributes. A relation may have multiple keys, as there may be multiple ways to uniquely differentiate each tuple.: 31–33
An attribute may be unique across tuples without being a key. For example, a relation describing a company's employees may have two attributes: ID and Name. Even if no employees currently share a name, if it is possible to eventually hire a new employee with the same name as a current employee, the attribute subset {Name} is not a key. Conversely, if the subset {ID} is a key, this means not only that no employees currently share an ID, but that no employees will ever share an ID.: 31–33
==== Foreign keys ====
A foreign key is a subset of attributes A in a relation R1 that corresponds with a key of another relation R2, with the property that the projection of R1 on A is a subset of the projection of R2 on A. In other words, if a tuple in R1 contains values for a foreign key, there must be a corresponding tuple in R2 containing the same values for the corresponding key.: 34
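The subset-of-projections condition can be checked directly. A sketch with rows modeled as Python dicts; all relvar contents below are invented:

```python
def project(body, attrs):
    """Project a set of rows (dicts) onto a subset of attributes."""
    return {frozenset((a, t[a]) for a in attrs) for t in body}

def foreign_key_holds(r1, r2, attrs):
    """The projection of R1 on A must be a subset of the projection of R2 on A."""
    return project(r1, attrs) <= project(r2, attrs)

customers = [{"Customer ID": 123, "Name": "Alice"},
             {"Customer ID": 456, "Name": "Bob"}]
orders = [{"Order ID": 1, "Customer ID": 123}]

assert foreign_key_holds(orders, customers, {"Customer ID"})

# A row referencing a non-existent customer violates the constraint:
orders.append({"Order ID": 2, "Customer ID": 999})
assert not foreign_key_holds(orders, customers, {"Customer ID"})
```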
=== Relational operations ===
Users (or programs) request data from a relational database by sending it a query. In response to a query, the database returns a result set.
Often, data from multiple tables are combined into one, by doing a join. Conceptually, this is done by taking all possible combinations of rows (the Cartesian product), and then filtering out everything except the answer.
There are a number of relational operations in addition to join. These include project (eliminating some of the columns), restrict (eliminating some of the rows), union (combining two tables with similar structures), difference (listing the rows in one table that are not found in the other), intersect (listing the rows found in both tables), and product (mentioned above, which combines each row of one table with each row of the other). Depending on which other sources are consulted, there are a number of other operators, many of which can be defined in terms of those listed above. These include semi-join, outer operators such as outer join and outer union, and various forms of division. There are also operators to rename columns, summarizing or aggregating operators, and, if relation values are permitted as attributes (relation-valued attributes), operators such as group and ungroup.
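The join-as-filtered-product description above can be sketched in a few lines of Python. Rows are plain dicts and all data is invented; real systems use far more efficient join algorithms than literally materializing the Cartesian product.

```python
def restrict(body, pred):
    """Keep only the rows satisfying a predicate."""
    return [t for t in body if pred(t)]

def project(body, attrs):
    """Keep only the named columns, eliminating duplicate rows."""
    out = []
    for t in body:
        p = {a: t[a] for a in attrs}
        if p not in out:
            out.append(p)
    return out

def natural_join(r, s):
    """All combinations of rows, keeping those that agree on shared attributes."""
    result = []
    for t1 in r:
        for t2 in s:
            shared = set(t1) & set(t2)
            if all(t1[a] == t2[a] for a in shared):
                result.append({**t1, **t2})
    return result

customers = [{"Customer ID": 123, "Name": "Alice"},
             {"Customer ID": 456, "Name": "Bob"}]
orders = [{"Order ID": 1, "Customer ID": 123},
          {"Order ID": 2, "Customer ID": 123}]

joined = natural_join(customers, orders)   # both orders pair with Alice only
names = project(restrict(joined, lambda t: t["Order ID"] == 2), {"Name"})
```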
The flexibility of relational databases allows programmers to write queries that were not anticipated by the database designers. As a result, relational databases can be used by multiple applications in ways the original designers did not foresee, which is especially important for databases that might be used for a long time (perhaps several decades). This has made the idea and implementation of relational databases very popular with businesses.
=== Database normalization ===
Relations are classified based upon the types of anomalies to which they're vulnerable. A database that is in the first normal form is vulnerable to all types of anomalies, while a database that is in the domain/key normal form has no modification anomalies. Normal forms are hierarchical in nature. That is, the lowest level is the first normal form, and the database cannot meet the requirements for higher level normal forms without first having met all the requirements of the lesser normal forms.
== Logical interpretation ==
The relational model is a formal system. A relation's attributes define a set of logical propositions. Each proposition can be expressed as a tuple. The body of a relation is a subset of these tuples, representing which propositions are true. Constraints represent additional propositions which must also be true. Relational algebra is a set of logical rules that can validly infer conclusions from these propositions.: 95–101
The definition of a tuple allows for a unique empty tuple with no values, corresponding to the empty set of attributes. If a relation has a degree of 0 (i.e. its heading contains no attributes), it may have either a cardinality of 0 (a body containing no tuples) or a cardinality of 1 (a body containing the single empty tuple). These relations represent Boolean truth values. The relation with degree 0 and cardinality 0 is False, while the relation with degree 0 and cardinality 1 is True.: 221–223
=== Example ===
If a relation of Employees contains the attributes {Name, ID}, then the tuple {Alice, 1} represents the proposition: "There exists an employee named Alice with ID 1". This proposition may be true or false. If this tuple exists in the relation's body, the proposition is true (there is such an employee). If this tuple is not in the relation's body, the proposition is false (there is no such employee).: 96–97
Furthermore, if {ID} is a key, then a relation containing the tuples {Alice, 1} and {Bob, 1} would represent the following contradiction:
There exists an employee with the name Alice and the ID 1.
There exists an employee with the name Bob and the ID 1.
There do not exist multiple employees with the same ID.
Under the principle of explosion, this contradiction would allow the system to prove that any arbitrary proposition is true. The database must enforce the key constraint to prevent this.: 104
== Examples ==
=== Database ===
An idealized, very simple example of a description of some relvars (relation variables) and their attributes:
Customer (Customer ID, Name)
Order (Order ID, Customer ID, Invoice ID, Date)
Invoice (Invoice ID, Customer ID, Order ID, Status)
In this design we have three relvars: Customer, Order, and Invoice. In each, the relvar's own ID attribute (Customer ID, Order ID, and Invoice ID, respectively) is a candidate key; the ID attributes referring to the other relvars are foreign keys.
Usually one candidate key is chosen to be called the primary key and used in preference over the other candidate keys, which are then called alternate keys.
A candidate key is a unique identifier enforcing that no tuple will be duplicated; duplication would make the relation into something else, namely a bag, by violating the basic definition of a set. Both foreign keys and superkeys (which include candidate keys) can be composite, that is, composed of several attributes. Below is a tabular depiction of a relation of our example Customer relvar; a relation can be thought of as a value that can be attributed to a relvar.
=== Customer relation ===
If we attempted to insert a new customer with the ID 123, this would violate the design of the relvar since Customer ID is a primary key and we already have a customer 123. The DBMS must reject a transaction such as this that would render the database inconsistent by a violation of an integrity constraint. However, it is possible to insert another customer named Alice, as long as this new customer has a unique ID, since the Name field is not part of the primary key.
Foreign keys are integrity constraints enforcing that the value of the attribute set is drawn from a candidate key in another relation. For example, in the Order relation the attribute Customer ID is a foreign key. A join is the operation that draws on information from several relations at once. By joining relvars from the example above we could query the database for all of the Customers, Orders, and Invoices. If we only wanted the tuples for a specific customer, we would specify this using a restriction condition. If we wanted to retrieve all of the Orders for Customer 123, we could query the database to return every row in the Order table with Customer ID 123.
There is a flaw in our database design above. The Invoice relvar contains an Order ID attribute. So, each tuple in the Invoice relvar will have one Order ID, which implies that there is precisely one Order for each Invoice. But in reality an invoice can be created against many orders, or indeed for no particular order. Additionally the Order relvar contains an Invoice ID attribute, implying that each Order has a corresponding Invoice. But again this is not always true in the real world. An order is sometimes paid through several invoices, and sometimes paid without an invoice. In other words, there can be many Invoices per Order and many Orders per Invoice. This is a many-to-many relationship between Order and Invoice (also called a non-specific relationship). To represent this relationship in the database a new relvar should be introduced whose role is to specify the correspondence between Orders and Invoices:
OrderInvoice (Order ID, Invoice ID)
Now, the Order relvar has a one-to-many relationship to the OrderInvoice table, as does the Invoice relvar. If we want to retrieve every Invoice for a particular Order, we can query for all orders where Order ID in the Order relation equals the Order ID in OrderInvoice, and where Invoice ID in OrderInvoice equals the Invoice ID in Invoice.
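The two-step lookup described above can be sketched with Python dicts; all IDs and statuses below are invented:

```python
orders = [{"Order ID": 10, "Customer ID": 123},
          {"Order ID": 11, "Customer ID": 123}]
order_invoice = [{"Order ID": 10, "Invoice ID": 500},
                 {"Order ID": 10, "Invoice ID": 501},
                 {"Order ID": 11, "Invoice ID": 501}]
invoices = [{"Invoice ID": 500, "Status": "paid"},
            {"Invoice ID": 501, "Status": "open"}]

def invoices_for_order(order_id):
    """Follow Order -> OrderInvoice -> Invoice, as in the query above."""
    ids = {oi["Invoice ID"] for oi in order_invoice
           if oi["Order ID"] == order_id}
    return [inv for inv in invoices if inv["Invoice ID"] in ids]

# Order 10 was settled by two invoices; invoice 501 covers two orders,
# which the junction relvar represents without duplicating either side.
assert [i["Invoice ID"] for i in invoices_for_order(10)] == [500, 501]
assert [i["Invoice ID"] for i in invoices_for_order(11)] == [501]
```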
== Application to relational databases ==
A data type in a relational database might be the set of integers, the set of character strings, the set of dates, etc. The relational model does not dictate what types are to be supported.
Attributes are commonly represented as columns, tuples as rows, and relations as tables. A table is specified as a list of column definitions, each of which specifies a unique column name and the type of the values that are permitted for that column. An attribute value is the entry in a specific column and row.
A database relvar (relation variable) is commonly known as a base table. The heading of its assigned value at any time is as specified in the table declaration and its body is that most recently assigned to it by an update operator (typically, INSERT, UPDATE, or DELETE). The heading and body of the table resulting from evaluating a query are determined by the definitions of the operators used in that query.
=== SQL and the relational model ===
SQL, initially pushed as the standard language for relational databases, deviates from the relational model in several places. The current ISO SQL standard does not mention the relational model or use relational terms or concepts.
According to the relational model, a Relation's attributes and tuples are mathematical sets, meaning they are unordered and unique. In a SQL table, neither rows nor columns are proper sets. A table may contain both duplicate rows and duplicate columns, and a table's columns are explicitly ordered. SQL uses a Null value to indicate missing data, which has no analog in the relational model. Because a row can represent unknown information, SQL does not adhere to the relational model's Information Principle.: 153–155, 162
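Two of these deviations, duplicate rows and NULLs, can be observed directly with SQLite via Python's standard-library sqlite3 module (the table and data here are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE employee (id INTEGER, name TEXT)")  # no key declared
cur.execute("INSERT INTO employee VALUES (1, 'Alice')")
cur.execute("INSERT INTO employee VALUES (1, 'Alice')")  # duplicate row accepted
cur.execute("INSERT INTO employee VALUES (2, NULL)")     # missing name

rows = cur.execute("SELECT * FROM employee").fetchall()
assert len(rows) == 3                   # a bag, not a set: the duplicate persists
assert rows.count((1, 'Alice')) == 2
assert (2, None) in rows                # NULL surfaces as Python None
```

In the relational model proper, the second insert would be a no-op (the body is a set) and the NULL would have no representation at all.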
== Set-theoretic formulation ==
Basic notions in the relational model are relation names and attribute names. We will represent these as strings such as "Person" and "name" and we will usually use the variables r, s, t, … and a, b, c to range over them. Another basic notion is the set of atomic values that contains values such as numbers and strings.
Our first definition concerns the notion of tuple, which formalizes the notion of row or record in a table:
Tuple
A tuple is a partial function from attribute names to atomic values.
Header
A header is a finite set of attribute names.
Projection
The projection of a tuple t on a finite set of attributes A is t[A] = { (a, v) : (a, v) ∈ t, a ∈ A }.
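Since a tuple is a partial function from attribute names to values, it maps naturally onto a Python dict, and projection becomes a one-line filter (the sample tuple is invented):

```python
def projection(t, A):
    """t[A] = {(a, v) : (a, v) in t, a in A}, with the tuple t as a dict."""
    return {a: v for a, v in t.items() if a in A}

t = {"name": "Ada", "age": 36, "city": "London"}
assert projection(t, {"name", "age"}) == {"name": "Ada", "age": 36}
assert projection(t, {"missing"}) == {}   # attributes not in t are ignored
```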
The next definition defines relation, which formalizes the contents of a table as it is defined in the relational model.
Relation
A relation is a tuple (H, B) with H, the header, and B, the body, a set of tuples that all have the domain H.
Such a relation closely corresponds to what is usually called the extension of a predicate in first-order logic except that here we identify the places in the predicate with attribute names. Usually in the relational model a database schema is said to consist of a set of relation names, the headers that are associated with these names and the constraints that should hold for every instance of the database schema.
Relation universe
A relation universe U over a header H is a non-empty set of relations with header H.
Relation schema
A relation schema (H, C) consists of a header H and a predicate C(R) that is defined for all relations R with header H. A relation satisfies a relation schema (H, C) if it has header H and satisfies C.
=== Key constraints and functional dependencies ===
One of the simplest and most important types of relation constraints is the key constraint. It tells us that in every instance of a certain relational schema the tuples can be identified by their values for certain attributes.
Superkey
A superkey is a set of column headers for which the values of those columns concatenated are unique across all rows. Formally:
A superkey is written as a finite set of attribute names.
A superkey K holds in a relation (H, B) if K ⊆ H and there exist no two distinct tuples t1, t2 ∈ B such that t1[K] = t2[K].
A superkey holds in a relation universe U if it holds in all relations in U.
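The formal condition translates directly into a uniqueness check. A sketch with the body as a list of distinct rows (dicts); heading and data invented:

```python
def tuple_projection(t, K):
    return frozenset((a, t[a]) for a in K)

def is_superkey(K, heading, body):
    """K is a subset of H, and no two distinct tuples agree on all of K."""
    if not set(K) <= set(heading):
        return False
    projections = {tuple_projection(t, K) for t in body}
    return len(projections) == len(body)   # all projections distinct

H = {"ID", "Name"}
B = [{"ID": 1, "Name": "Alice"},
     {"ID": 2, "Name": "Alice"}]

assert is_superkey({"ID"}, H, B)          # IDs are unique
assert not is_superkey({"Name"}, H, B)    # two tuples share a Name
assert is_superkey({"ID", "Name"}, H, B)  # the full heading is always a superkey
```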
Theorem: A superkey K holds in a relation universe U over H if and only if K ⊆ H and K → H holds in U.
Candidate key
A candidate key is a superkey that cannot be further subdivided to form another superkey.
A superkey K holds as a candidate key for a relation universe U if it holds as a superkey for U and there is no proper subset of K that also holds as a superkey for U.
Functional dependency
Functional dependency is the property that a value in a tuple may be derived from another value in that tuple.
A functional dependency (FD for short) is written as X → Y for X, Y finite sets of attribute names.
A functional dependency X → Y holds in a relation (H, B) if X, Y ⊆ H and, for all tuples t1, t2 ∈ B, t1[X] = t2[X] ⇒ t1[Y] = t2[Y].
A functional dependency X → Y holds in a relation universe U if it holds in all relations in U.
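This definition can be verified by brute force over all pairs of tuples. A sketch with rows as dicts (data invented):

```python
from itertools import combinations

def fd_holds(X, Y, body):
    """X -> Y holds iff any two tuples agreeing on X also agree on Y."""
    proj = lambda t, A: tuple(sorted((a, t[a]) for a in A))
    return all(proj(t1, Y) == proj(t2, Y)
               for t1, t2 in combinations(body, 2)
               if proj(t1, X) == proj(t2, X))

B = [{"ID": 1, "Name": "Alice"},
     {"ID": 2, "Name": "Alice"}]

assert fd_holds({"ID"}, {"Name"}, B)      # ID functionally determines Name
assert not fd_holds({"Name"}, {"ID"}, B)  # Name does not determine ID
```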
Trivial functional dependency
A functional dependency is trivial under a header H if it holds in all relation universes over H.
Theorem: An FD X → Y is trivial under a header H if and only if Y ⊆ X ⊆ H.
Closure
Armstrong's axioms: The closure of a set of FDs S under a header H, written as S⁺, is the smallest superset of S such that:
Y ⊆ X ⊆ H ⇒ X → Y ∈ S⁺ (reflexivity)
X → Y ∈ S⁺ ∧ Y → Z ∈ S⁺ ⇒ X → Z ∈ S⁺ (transitivity)
X → Y ∈ S⁺ ∧ Z ⊆ H ⇒ (X ∪ Z) → (Y ∪ Z) ∈ S⁺ (augmentation)
Theorem: Armstrong's axioms are sound and complete; given a header H and a set S of FDs that only contain subsets of H, X → Y ∈ S⁺ if and only if X → Y holds in all relation universes over H in which all FDs in S hold.
Completion
The completion of a finite set of attributes X under a finite set of FDs S, written as X⁺, is the smallest superset of X such that:
Y → Z ∈ S ∧ Y ⊆ X⁺ ⇒ Z ⊆ X⁺
The completion of an attribute set can be used to compute if a certain dependency is in the closure of a set of FDs.
Theorem: Given a set S of FDs, X → Y ∈ S⁺ if and only if Y ⊆ X⁺.
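The completion is computable as a simple fixed-point iteration. A sketch, with each FD given as a (left-hand side, right-hand side) pair of frozensets and an invented example:

```python
def completion(X, S):
    """Compute X+ under a set S of FDs: repeatedly add Z whenever
    Y -> Z is in S and Y is already contained in the closure."""
    closure = set(X)
    changed = True
    while changed:
        changed = False
        for Y, Z in S:
            if Y <= closure and not Z <= closure:
                closure |= Z
                changed = True
    return closure

S = [(frozenset({"A"}), frozenset({"B"})),
     (frozenset({"B"}), frozenset({"C"}))]

assert completion({"A"}, S) == {"A", "B", "C"}
# By the theorem, A -> C is in the closure of S because C lies in {"A"}+.
```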
Irreducible cover
An irreducible cover of a set S of FDs is a set T of FDs such that:
S⁺ = T⁺
there exists no U ⊂ T such that S⁺ = U⁺
X → Y ∈ T ⇒ Y is a singleton set
X → Y ∈ T ∧ Z ⊂ X ⇒ Z → Y ∉ S⁺
=== Algorithm to derive candidate keys from functional dependencies ===
algorithm derive candidate keys from functional dependencies is
    input: a set S of FDs that contain only subsets of a header H
    output: the set C of superkeys that hold as candidate keys in
            all relation universes over H in which all FDs in S hold
    C := ∅          // found candidate keys
    Q := { H }      // superkeys that contain candidate keys
    while Q <> ∅ do
        let K be some element from Q
        Q := Q – { K }
        minimal := true
        for each X->Y in S do
            K' := (K – Y) ∪ X   // derive new superkey
            if K' ⊂ K then
                minimal := false
                Q := Q ∪ { K' }
            end if
        end for
        if minimal and there is not a subset of K in C then
            remove all supersets of K from C
            C := C ∪ { K }
        end if
    end while
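The pseudocode above transcribes almost line for line into Python. Attribute collections are Python sets, a list stands in for the work set Q, and the example FDs are invented:

```python
def candidate_keys(H, fds):
    """Derive candidate keys from FDs, following the pseudocode above.

    H is the header (a set of attribute names); fds is a list of
    (X, Y) pairs of attribute sets.
    """
    C = []               # found candidate keys
    Q = [set(H)]         # superkeys that contain candidate keys
    while Q:
        K = Q.pop()
        minimal = True
        for X, Y in fds:
            K2 = (K - set(Y)) | set(X)   # derive new superkey
            if K2 < K:                   # proper subset: K was not minimal
                minimal = False
                Q.append(K2)
        if minimal and not any(c <= K for c in C):
            C = [c for c in C if not c > K]  # remove all supersets of K
            C.append(K)
    return C

# With A -> B and B -> C over header {A, B, C}, the only candidate key is {A}:
keys = candidate_keys({"A", "B", "C"}, [({"A"}, {"B"}), ({"B"}, {"C"})])
assert keys == [{"A"}]
```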
== Alternatives ==
Other models include the hierarchical model and network model. Some systems using these older architectures are still in use today in data centers with high data volume needs, or where existing systems are so complex and abstract that it would be cost-prohibitive to migrate to systems employing the relational model. Also of note are newer object-oriented databases and Datalog.
Datalog is a database definition language, which combines a relational view of data, as in the relational model, with a logical view, as in logic programming. Whereas relational databases use a relational calculus or relational algebra, with relational operations such as union, intersection, set difference, and Cartesian product to specify queries, Datalog uses logical connectives, such as if, or, and, and not, to define relations as part of the database itself.
In contrast with the relational model, which cannot express recursive queries without introducing a least-fixed-point operator, recursive relations can be defined in Datalog, without introducing any new logical connectives or operators.
== See also ==
== Notes ==
== References ==
== Further reading ==
Date, Christopher J.; Darwen, Hugh (2000). Foundation for future database systems: the third manifesto; a detailed study of the impact of type theory on the relational model of data, including a comprehensive model of type inheritance (2 ed.). Reading, MA: Addison-Wesley. ISBN 978-0-201-70928-5.
——— (2007). An Introduction to Database Systems (8 ed.). Boston: Pearson Education. ISBN 978-0-321-19784-9.
== External links ==
Childs (1968), Feasibility of a set-theoretic data structure: a general structure based on a reconstituted definition of relation (research), Handle, hdl:2027.42/4164 cited in Codd's 1970 paper.
Darwen, Hugh, The Third Manifesto (TTM).
"Relational Model", C2.
Binary relations and tuples compared with respect to the semantic web (World Wide Web log), Sun. | Wikipedia/Relational_model |
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations.
Originally called infinitesimal calculus or "the calculus of infinitesimals", it has two major branches, differential calculus and integral calculus. The former concerns instantaneous rates of change, and the slopes of curves, while the latter concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus. They make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. It is the "mathematical backbone" for dealing with problems where variables change with time or another reference variable.
Infinitesimal calculus was formulated separately in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Later work, including codifying the idea of limits, put these developments on a more solid conceptual footing. The concepts and techniques found in calculus have diverse applications in science, engineering, and other branches of mathematics.
== Etymology ==
In mathematics education, calculus is an abbreviation of both infinitesimal calculus and integral calculus, which denotes courses of elementary mathematical analysis.
In Latin, the word calculus means “small pebble” (the diminutive of calx, meaning "stone"), a meaning which still persists in medicine. Because such pebbles were used for counting out distances, tallying votes, and doing abacus arithmetic, the word came to be the Latin word for calculation. In this sense, it was used in English at least as early as 1672, several years before the publications of Leibniz and Newton, who wrote their mathematical texts in Latin.
In addition to differential calculus and integral calculus, the term is also used for naming specific methods of computation or theories that imply some sort of computation. Examples of this usage include propositional calculus, Ricci calculus, calculus of variations, lambda calculus, sequent calculus, and process calculus. Furthermore, the term "calculus" has variously been applied in ethics and philosophy, for such systems as Bentham's felicific calculus, and the ethical calculus.
== History ==
Modern calculus was developed in 17th-century Europe by Isaac Newton and Gottfried Wilhelm Leibniz (independently of each other, first publishing around the same time) but elements of it first appeared in ancient Egypt and later Greece, then in China and the Middle East, and still later again in medieval Europe and India.
=== Ancient precursors ===
==== Egypt ====
Calculations of volume and area, one goal of integral calculus, can be found in the Egyptian Moscow papyrus (c. 1820 BC), but the formulae are simple instructions, with no indication as to how they were obtained.
==== Greece ====
Laying the foundations for integral calculus and foreshadowing the concept of the limit, ancient Greek mathematician Eudoxus of Cnidus (c. 390–337 BC) developed the method of exhaustion to prove the formulas for cone and pyramid volumes.
During the Hellenistic period, this method was further developed by Archimedes (c. 287 – c. 212 BC), who combined it with a concept of the indivisibles—a precursor to infinitesimals—allowing him to solve several problems now treated by integral calculus. In The Method of Mechanical Theorems he describes, for example, calculating the center of gravity of a solid hemisphere, the center of gravity of a frustum of a circular paraboloid, and the area of a region bounded by a parabola and one of its secant lines.
==== China ====
The method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD to find the area of a circle. In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be called Cavalieri's principle to find the volume of a sphere.
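The doubling procedure behind the method of exhaustion can be sketched in a few lines of Python (an illustrative reconstruction, not the historical algorithm): starting from an inscribed hexagon and repeatedly doubling the number of sides, the half-perimeter approaches the circle's area for radius 1, i.e. π, using only square roots.

```python
import math

def exhaust_pi(doublings):
    """Liu Hui-style polygon doubling in a unit circle.

    Starts from a hexagon (6 sides of length 1) and applies the
    side-length recurrence s_2n = sqrt(2 - sqrt(4 - s_n^2)), which
    uses only square roots, so pi is not assumed anywhere.
    """
    n, s = 6, 1.0
    for _ in range(doublings):
        s = math.sqrt(2 - math.sqrt(4 - s * s))  # side of the 2n-gon
        n *= 2
    return n * s / 2  # half-perimeter approximates pi

# Each doubling tightens the approximation.
for d in range(5):
    print(6 * 2**d, "sides:", exhaust_pi(d))
```

With 96 sides (four doublings), the value already agrees with π to two decimal places, roughly the precision Liu Hui reported.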
=== Medieval ===
==== Middle East ====
In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 AD) derived a formula for the sum of fourth powers. He determined the equations to calculate the area enclosed by the curve represented by
{\displaystyle y=x^{k}}
(which translates to the integral
{\displaystyle \int x^{k}\,dx}
in contemporary notation), for any given non-negative integer value of
{\displaystyle k}. He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid.
==== India ====
Bhāskara II (c. 1114–1185) was acquainted with some ideas of differential calculus and suggested that the "differential coefficient" vanishes at an extremum value of the function. In his astronomical work, he gave a procedure that looked like a precursor to infinitesimal methods. Namely, if
{\displaystyle x\approx y} then {\displaystyle \sin(y)-\sin(x)\approx (y-x)\cos(y).}
This can be interpreted as the discovery that cosine is the derivative of sine. In the 14th century, Indian mathematicians gave a non-rigorous method, resembling differentiation, applicable to some trigonometric functions. Madhava of Sangamagrama and the Kerala School of Astronomy and Mathematics stated components of calculus. They studied series equivalent to the Maclaurin expansions of
{\displaystyle \sin(x)}, {\displaystyle \cos(x)}, and {\displaystyle \arctan(x)}
more than two hundred years before their introduction in Europe. According to Victor J. Katz they were not able to "combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the great problem-solving tool we have today".
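The arctangent series studied by the Kerala school can be illustrated with its partial sums (written here in the modern Madhava–Gregory form; the function name is chosen for this sketch) and compared against the modern library value:

```python
import math

def arctan_series(x, terms):
    """Partial sum of arctan(x) = x - x^3/3 + x^5/5 - ... for |x| <= 1."""
    return sum((-1)**k * x**(2 * k + 1) / (2 * k + 1) for k in range(terms))

# Successive partial sums close in on the library arctangent.
print(arctan_series(0.5, 10), math.atan(0.5))
```

Because the series alternates, the error after truncation is smaller than the first omitted term, so convergence is rapid for |x| well below 1.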
=== Modern ===
Johannes Kepler's work Stereometria Doliorum (1615) formed the basis of integral calculus. Kepler developed a method to calculate the area of an ellipse by adding up the lengths of many radii drawn from a focus of the ellipse.
A significant work was a treatise, inspired by Kepler's methods, written by Bonaventura Cavalieri, who argued that volumes and areas should be computed as the sums of the volumes and areas of infinitesimally thin cross-sections. The ideas were similar to Archimedes' in The Method, but this treatise is believed to have been lost in the 13th century and was only rediscovered in the early 20th century, and so would have been unknown to Cavalieri. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.
The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory, the latter two proving predecessors to the second fundamental theorem of calculus around 1670.
The product rule and chain rule, the notions of higher derivatives and Taylor series, and of analytic functions were used by Isaac Newton in an idiosyncratic notation which he applied to solve problems of mathematical physics. In his works, Newton rephrased his ideas to suit the mathematical idiom of the time, replacing calculations with infinitesimals by equivalent geometrical arguments which were considered beyond reproach. He used the methods of calculus to solve the problem of planetary motion, the shape of the surface of a rotating fluid, the oblateness of the earth, the motion of a weight sliding on a cycloid, and many other problems discussed in his Principia Mathematica (1687). In other work, he developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, and at this time infinitesimal methods were still considered disreputable.
These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who was originally accused of plagiarism by Newton. He is now regarded as an independent inventor of and contributor to calculus. His contribution was to provide a clear set of rules for working with infinitesimal quantities, allowing the computation of second and higher derivatives, and providing the product rule and chain rule, in their differential and integral forms. Unlike Newton, Leibniz put painstaking effort into his choices of notation.
Today, Leibniz and Newton are usually both given credit for independently inventing and developing calculus. Newton was the first to apply calculus to general physics. Leibniz developed much of the notation used in calculus today.: 51–52 The basic insights that both Newton and Leibniz provided were the laws of differentiation and integration, emphasizing that differentiation and integration are inverse processes, second and higher derivatives, and the notion of an approximating polynomial series.
When Newton and Leibniz first published their results, there was great controversy over which mathematician (and therefore which country) deserved credit. Newton derived his results first (later to be published in his Method of Fluxions), but Leibniz published his "Nova Methodus pro Maximis et Minimis" first. Newton claimed Leibniz stole ideas from his unpublished notes, which Newton had shared with a few members of the Royal Society. This controversy divided English-speaking mathematicians from continental European mathematicians for many years, to the detriment of English mathematics. A careful examination of the papers of Leibniz and Newton shows that they arrived at their results independently, with Leibniz starting first with integration and Newton with differentiation. It is Leibniz, however, who gave the new discipline its name. Newton called his calculus "the science of fluxions", a term that endured in English schools into the 19th century.: 100 The first complete treatise on calculus to be written in English and use the Leibniz notation was not published until 1815.
Since the time of Leibniz and Newton, many mathematicians have contributed to the continuing development of calculus. One of the first and most complete works on both infinitesimal and integral calculus was written in 1748 by Maria Gaetana Agnesi.
=== Foundations ===
In calculus, foundations refers to the rigorous development of the subject from axioms and definitions. In early calculus, the use of infinitesimal quantities was thought unrigorous and was fiercely criticized by several authors, most notably Michel Rolle and Bishop Berkeley. Berkeley famously described infinitesimals as the ghosts of departed quantities in his book The Analyst in 1734. Working out a rigorous foundation for calculus occupied mathematicians for much of the century following Newton and Leibniz, and is still to some extent an active area of research today.
Several mathematicians, including Maclaurin, tried to prove the soundness of using infinitesimals, but it would not be until 150 years later when, due to the work of Cauchy and Weierstrass, a way was finally found to avoid mere "notions" of infinitely small quantities. The foundations of differential and integral calculus had been laid. In Cauchy's Cours d'Analyse, we find a broad range of foundational approaches, including a definition of continuity in terms of infinitesimals, and a (somewhat imprecise) prototype of an (ε, δ)-definition of limit in the definition of differentiation. In his work, Weierstrass formalized the concept of limit and eliminated infinitesimals (although his definition can validate nilsquare infinitesimals). Following the work of Weierstrass, it eventually became common to base calculus on limits instead of infinitesimal quantities, though the subject is still occasionally called "infinitesimal calculus". Bernhard Riemann used these ideas to give a precise definition of the integral. It was also during this period that the ideas of calculus were generalized to the complex plane with the development of complex analysis.
In modern mathematics, the foundations of calculus are included in the field of real analysis, which contains full definitions and proofs of the theorems of calculus. The reach of calculus has also been greatly extended. Henri Lebesgue invented measure theory, based on earlier developments by Émile Borel, and used it to define integrals of all but the most pathological functions. Laurent Schwartz introduced distributions, which can be used to take the derivative of any function whatsoever.
Limits are not the only rigorous approach to the foundation of calculus. Another way is to use Abraham Robinson's non-standard analysis. Robinson's approach, developed in the 1960s, uses technical machinery from mathematical logic to augment the real number system with infinitesimal and infinite numbers, as in the original Newton-Leibniz conception. The resulting numbers are called hyperreal numbers, and they can be used to give a Leibniz-like development of the usual rules of calculus. There is also smooth infinitesimal analysis, which differs from non-standard analysis in that it mandates neglecting higher-power infinitesimals during derivations. Based on the ideas of F. W. Lawvere and employing the methods of category theory, smooth infinitesimal analysis views all functions as being continuous and incapable of being expressed in terms of discrete entities. One aspect of this formulation is that the law of excluded middle does not hold. The law of excluded middle is also rejected in constructive mathematics, a branch of mathematics that insists that proofs of the existence of a number, function, or other mathematical object should give a construction of the object. Reformulations of calculus in a constructive framework are generally part of the subject of constructive analysis.
=== Significance ===
While many of the ideas of calculus had been developed earlier in Greece, China, India, Iraq, Persia, and Japan, the use of calculus began in Europe, during the 17th century, when Newton and Leibniz built on the work of earlier mathematicians to introduce its basic principles. The Hungarian polymath John von Neumann wrote of this work,
The calculus was the first achievement of modern mathematics and it is difficult to overestimate its importance. I think it defines more unequivocally than anything else the inception of modern mathematics, and the system of mathematical analysis, which is its logical development, still constitutes the greatest technical advance in exact thinking.
Applications of differential calculus include computations involving velocity and acceleration, the slope of a curve, and optimization.: 341–453 Applications of integral calculus include computations involving area, volume, arc length, center of mass, work, and pressure.: 685–700 More advanced applications include power series and Fourier series.
Calculus is also used to gain a more precise understanding of the nature of space, time, and motion. For centuries, mathematicians and philosophers wrestled with paradoxes involving division by zero or sums of infinitely many numbers. These questions arise in the study of motion and area. The ancient Greek philosopher Zeno of Elea gave several famous examples of such paradoxes. Calculus provides tools, especially the limit and the infinite series, that resolve the paradoxes.
== Principles ==
=== Limits and infinitesimals ===
Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like real numbers but which are, in some sense, "infinitely small". For example, an infinitesimal number could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, ... and thus less than any positive real number. From this point of view, calculus is a collection of techniques for manipulating infinitesimals. The symbols
{\displaystyle dx} and {\displaystyle dy} were taken to be infinitesimal, and the derivative {\displaystyle dy/dx} was their ratio.
The infinitesimal approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. In the late 19th century, infinitesimals were replaced within academia by the epsilon, delta approach to limits. Limits describe the behavior of a function at a certain input in terms of its values at nearby inputs. They capture small-scale behavior using the intrinsic structure of the real number system (as a metric space with the least-upper-bound property). In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by sequences of smaller and smaller numbers, and the infinitely small behavior of a function is found by taking the limiting behavior for these sequences. Limits were thought to provide a more rigorous foundation for calculus, and for this reason, they became the standard approach during the 20th century. However, the infinitesimal concept was revived in the 20th century with the introduction of non-standard analysis and smooth infinitesimal analysis, which provided solid foundations for the manipulation of infinitesimals.
=== Differential calculus ===
Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called differentiation. Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the derivative function or just the derivative of the original function. In formal terms, the derivative is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be the doubling function.: 32
In more explicit terms, the "doubling function" may be denoted by g(x) = 2x and the "squaring function" by f(x) = x^2. The "derivative" now takes the function f(x), defined by the expression "x^2", as an input, that is, all the information, such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on, and uses this information to output another function, which turns out to be g(x) = 2x.
In Lagrange's notation, the symbol for a derivative is an apostrophe-like mark called a prime. Thus, the derivative of a function called f is denoted by f′, pronounced "f prime" or "f dash". For instance, if f(x) = x^2 is the squaring function, then f′(x) = 2x is its derivative (the doubling function g from above).
If the input of the function represents time, then the derivative represents change with respect to time. For example, if f is a function that takes time as input and gives the position of a ball at that time as output, then the derivative of f is how the position is changing in time, that is, it is the velocity of the ball.: 18–20
If a function is linear (that is if the graph of the function is a straight line), then the function can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the y-intercept, and:
{\displaystyle m={\frac {\text{rise}}{\text{run}}}={\frac {{\text{change in }}y}{{\text{change in }}x}}={\frac {\Delta y}{\Delta x}}.}
This gives an exact value for the slope of a straight line.: 6 If the graph of the function is not a straight line, however, then the change in y divided by the change in x varies. Derivatives give an exact meaning to the notion of change in output with respect to change in input. To be concrete, let f be a function, and fix a point a in the domain of f. (a, f(a)) is a point on the graph of the function. If h is a number close to zero, then a + h is a number close to a. Therefore, (a + h, f(a + h)) is close to (a, f(a)). The slope between these two points is
{\displaystyle m={\frac {f(a+h)-f(a)}{(a+h)-a}}={\frac {f(a+h)-f(a)}{h}}.}
This expression is called a difference quotient. A line through two points on a curve is called a secant line, so m is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). The second line is only an approximation to the behavior of the function at the point a because it does not account for what happens between a and a + h. It is not possible to discover the behavior at a by setting h to zero because this would require dividing by zero, which is undefined. The derivative is defined by taking the limit as h tends to zero, meaning that it considers the behavior of f for all small values of h and extracts a consistent value for the case when h equals zero:
{\displaystyle \lim _{h\to 0}{f(a+h)-f(a) \over {h}}.}
Geometrically, the derivative is the slope of the tangent line to the graph of f at a. The tangent line is a limit of secant lines just as the derivative is a limit of difference quotients. For this reason, the derivative is sometimes called the slope of the function f.: 61–63
Here is a particular example, the derivative of the squaring function at the input 3. Let f(x) = x^2 be the squaring function.
{\displaystyle {\begin{aligned}f'(3)&=\lim _{h\to 0}{(3+h)^{2}-3^{2} \over {h}}\\&=\lim _{h\to 0}{9+6h+h^{2}-9 \over {h}}\\&=\lim _{h\to 0}{6h+h^{2} \over {h}}\\&=\lim _{h\to 0}(6+h)\\&=6\end{aligned}}}
The slope of the tangent line to the squaring function at the point (3, 9) is 6, that is to say, it is going up six times as fast as it is going to the right. The limit process just described can be performed for any point in the domain of the squaring function. This defines the derivative function of the squaring function or just the derivative of the squaring function for short. A computation similar to the one above shows that the derivative of the squaring function is the doubling function.: 63
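The limit process just described can also be seen numerically: as h shrinks, the secant slopes at a = 3 settle toward the tangent slope 6. A minimal Python sketch (the names here are illustrative):

```python
def difference_quotient(f, a, h):
    """Slope of the secant line between (a, f(a)) and (a + h, f(a + h))."""
    return (f(a + h) - f(a)) / h

def square(x):
    return x * x

# The secant slope is exactly 6 + h for the squaring function at a = 3,
# so shrinking h approaches the derivative value 6.
for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, difference_quotient(square, 3, h))
```

Setting h to zero directly would divide by zero; the table of shrinking h values is the numerical counterpart of taking the limit.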
=== Leibniz notation ===
A common notation, introduced by Leibniz, for the derivative in the example above is
{\displaystyle {\begin{aligned}y&=x^{2}\\{\frac {dy}{dx}}&=2x.\end{aligned}}}
In an approach based on limits, the symbol dy/dx is to be interpreted not as the quotient of two numbers but as a shorthand for the limit computed above.: 74 Leibniz, however, did intend it to represent the quotient of two infinitesimally small numbers, dy being the infinitesimally small change in y caused by an infinitesimally small change dx applied to x. We can also think of d/dx as a differentiation operator, which takes a function as an input and gives another function, the derivative, as the output. For example:
{\displaystyle {\frac {d}{dx}}(x^{2})=2x.}
In this usage, the dx in the denominator is read as "with respect to x".: 79 Another example of correct notation could be:
{\displaystyle {\begin{aligned}g(t)&=t^{2}+2t+4\\{d \over dt}g(t)&=2t+2\end{aligned}}}
Even when calculus is developed using limits rather than infinitesimals, it is common to manipulate symbols like dx and dy as if they were real numbers; although it is possible to avoid such manipulations, they are sometimes notationally convenient in expressing operations such as the total derivative.
=== Integral calculus ===
Integral calculus is the study of the definitions, properties, and applications of two related concepts, the indefinite integral and the definite integral. The process of finding the value of an integral is called integration.: 508 The indefinite integral, also known as the antiderivative, is the inverse operation to the derivative.: 163–165 F is an indefinite integral of f when f is a derivative of F. (This use of lower- and upper-case letters for a function and its indefinite integral is common in calculus.) The definite integral inputs a function and outputs a number, which gives the algebraic sum of areas between the graph of the input and the x-axis. The technical definition of the definite integral involves the limit of a sum of areas of rectangles, called a Riemann sum.: 282
A motivating example is the distance traveled in a given time.: 153 If the speed is constant, only multiplication is needed:
{\displaystyle \mathrm {Distance} =\mathrm {Speed} \cdot \mathrm {Time} }
But if the speed changes, a more powerful method of finding the distance is necessary. One such method is to approximate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the approximate distance traveled in each interval. The basic idea is that if only a short time elapses, then the speed will stay more or less the same. However, a Riemann sum only gives an approximation of the distance traveled. We must take the limit of all such Riemann sums to find the exact distance traveled.
When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, traveling a steady 50 mph for 3 hours results in a total distance of 150 miles. Plotting the velocity as a function of time yields a rectangle with a height equal to the velocity and a width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve.: 535 This connection between the area under a curve and the distance traveled can be extended to any irregularly shaped region exhibiting a fluctuating velocity over a given period. If f(x) represents speed as it varies over time, the distance traveled between the times represented by a and b is the area of the region between f(x) and the x-axis, between x = a and x = b.
To approximate that area, an intuitive method would be to divide up the distance between a and b into several equal segments, the length of each segment represented by the symbol Δx. For each small segment, we can choose one value of the function f(x). Call that value h. Then the area of the rectangle with base Δx and height h gives the distance (time Δx multiplied by speed h) traveled in that segment. Associated with each segment is the average value of the function above it, f(x) = h. The sum of all such rectangles gives an approximation of the area between the axis and the curve, which is an approximation of the total distance traveled. A smaller value for Δx will give more rectangles and in most cases a better approximation, but for an exact answer, we need to take a limit as Δx approaches zero.: 512–522
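The rectangle construction above can be sketched directly. This Python fragment (illustrative; it uses the left endpoint of each segment for the height h) shows the approximation improving as Δx shrinks:

```python
def riemann_sum(f, a, b, n):
    """Left Riemann sum: n rectangles of width dx, height f at the left edge."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

def speed(t):
    return t * t  # a speed that varies with time

# More, narrower rectangles approach the exact area, which is 1/3
# for f(t) = t^2 on [0, 1].
for n in [10, 100, 1000]:
    print(n, riemann_sum(speed, 0, 1, n))
```

No finite n gives the exact distance; the limit as Δx approaches zero, i.e. as n grows without bound, defines the definite integral.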
The symbol of integration is
{\displaystyle \int }, an elongated S chosen to suggest summation.: 529 The definite integral is written as:
{\displaystyle \int _{a}^{b}f(x)\,dx}
and is read "the integral from a to b of f-of-x with respect to x." The Leibniz notation dx is intended to suggest dividing the area under the curve into an infinite number of rectangles so that their width Δx becomes the infinitesimally small dx.: 44
The indefinite integral, or antiderivative, is written:
{\displaystyle \int f(x)\,dx.}
Functions differing by only a constant have the same derivative, and it can be shown that the antiderivative of a given function is a family of functions differing only by a constant.: 326 Since the derivative of the function y = x^2 + C, where C is any constant, is y′ = 2x, the antiderivative of the latter is given by:
{\displaystyle \int 2x\,dx=x^{2}+C.}
The unspecified constant C present in the indefinite integral or antiderivative is known as the constant of integration.: 135
=== Fundamental theorem ===
The fundamental theorem of calculus states that differentiation and integration are inverse operations.: 290 More precisely, it relates the values of antiderivatives to definite integrals. Because it is usually easier to compute an antiderivative than to apply the definition of a definite integral, the fundamental theorem of calculus provides a practical way of computing definite integrals. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.
The fundamental theorem of calculus states: If a function f is continuous on the interval [a, b] and if F is a function whose derivative is f on the interval (a, b), then
{\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a).}
Furthermore, for every x in the interval (a, b),
{\displaystyle {\frac {d}{dx}}\int _{a}^{x}f(t)\,dt=f(x).}
This realization, made by both Newton and Leibniz, was key to the proliferation of analytic results after their work became known. (The extent to which Newton and Leibniz were influenced by immediate predecessors, and particularly what Leibniz may have learned from the work of Isaac Barrow, is difficult to determine because of the priority dispute between them.) The fundamental theorem provides an algebraic method of computing many definite integrals—without performing limit processes—by finding formulae for antiderivatives. It is also a prototype solution of a differential equation. Differential equations relate an unknown function to its derivatives and are ubiquitous in the sciences.: 351–352
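The first part of the theorem can be checked numerically for a particular pair of functions. In this sketch, f = cos has antiderivative F = sin, and a midpoint Riemann sum (an illustrative stand-in for the definite integral) is compared with F(b) − F(a):

```python
import math

def midpoint_sum(f, a, b, n):
    """Midpoint Riemann sum: n rectangles with heights at segment midpoints."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# The theorem predicts: integral of cos from 0 to pi/2
#   = sin(pi/2) - sin(0) = 1.
approx = midpoint_sum(math.cos, 0, math.pi / 2, 1000)
print(approx, math.sin(math.pi / 2) - math.sin(0))
```

The agreement illustrates why antiderivatives make definite integrals computable by algebra alone: the limit process on the left is replaced by two evaluations of F on the right.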
== Applications ==
Calculus is used in every branch of the physical sciences,: 1 actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled and an optimal solution is desired. It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other. Calculus can be used in conjunction with other mathematical disciplines. For example, it can be used with linear algebra to find the "best fit" linear approximation for a set of points in a domain. Or, it can be used in probability theory to determine the expectation value of a continuous random variable given a probability density function.: 37 In analytic geometry, the study of graphs of functions, calculus is used to find high points and low points (maxima and minima), slope, concavity and inflection points. Calculus is also used to find approximate solutions to equations; in practice, it is the standard way to solve differential equations and do root finding in most applications. Examples are methods such as Newton's method, fixed point iteration, and linear approximation. For instance, spacecraft use a variation of the Euler method to approximate curved courses within zero-gravity environments.
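Newton's method, mentioned above, is a compact example of calculus-based root finding: the derivative is used to follow tangent lines toward a root. A minimal sketch (function names are illustrative), applied to f(x) = x^2 - 2, whose positive root is the square root of 2:

```python
def newton(f, fprime, x, steps):
    """Iterate x := x - f(x)/f'(x), following tangent lines toward a root."""
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# Solve x^2 - 2 = 0 starting from x = 1; convergence is quadratic,
# roughly doubling the number of correct digits each step.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0, 6)
print(root)
```

Six iterations from a starting guess of 1 already give the root to full double precision, which is why the method is a standard tool for solving equations in practice.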
Physics makes particular use of calculus; all concepts in classical mechanics and electromagnetism are related through calculus. The mass of an object of known density, the moment of inertia of objects, and the potential energies due to gravitational and electromagnetic forces can all be found by the use of calculus. An example of the use of calculus in mechanics is Newton's second law of motion, which states that the derivative of an object's momentum with respect to time equals the net force upon it. Alternatively, Newton's second law can be expressed by saying that the net force equals the object's mass times its acceleration, which is the time derivative of velocity and thus the second time derivative of spatial position. Starting from knowing how an object is accelerating, we use calculus to derive its path.
Maxwell's theory of electromagnetism and Einstein's theory of general relativity are also expressed in the language of differential calculus.: 52–55 Chemistry also uses calculus in determining reaction rates: 599 and in studying radioactive decay.: 814 In biology, population dynamics starts with reproduction and death rates to model population changes.: 631
Green's theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C, is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property.
In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel to maximize flow. Calculus can be applied to understand how quickly a drug is eliminated from a body or how quickly a cancerous tumor grows.
In economics, calculus allows for the determination of maximal profit by providing a way to easily calculate both marginal cost and marginal revenue.: 387
== See also ==
Glossary of calculus
List of calculus topics
List of derivatives and integrals in alternative calculi
List of differentiation identities
Publications in calculus
Table of integrals
== References ==
== Further reading ==
== External links ==
"Calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Calculus". MathWorld.
Topics on Calculus at PlanetMath.
Calculus Made Easy (1914) by Silvanus P. Thompson Full text in PDF
Calculus on In Our Time at the BBC
Calculus.org: The Calculus page at University of California, Davis – contains resources and links to other sites
Earliest Known Uses of Some of the Words of Mathematics: Calculus & Analysis
The Role of Calculus in College Mathematics Archived 26 July 2021 at the Wayback Machine from ERICDigests.org
OpenCourseWare Calculus from the Massachusetts Institute of Technology
Infinitesimal Calculus – an article on its historical development, in Encyclopedia of Mathematics, ed. Michiel Hazewinkel.
Daniel Kleitman, MIT. "Calculus for Beginners and Artists".
Calculus training materials at imomath.com
(in English and Arabic) The Excursion of Calculus, 1772
In probability theory and statistics, the moment-generating function of a real-valued random variable is an alternative specification of its probability distribution. Thus, it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the moment-generating functions of distributions defined by the weighted sums of random variables. However, not all random variables have moment-generating functions.
As its name implies, the moment-generating function can be used to compute a distribution’s moments: the n-th moment about 0 is the n-th derivative of the moment-generating function, evaluated at 0.
In addition to univariate real-valued distributions, moment-generating functions can also be defined for vector- or matrix-valued random variables, and can even be extended to more general cases.
The moment-generating function of a real-valued distribution does not always exist, unlike the characteristic function. There are relations between the behavior of the moment-generating function of a distribution and properties of the distribution, such as the existence of moments.
== Definition ==
Let $X$ be a random variable with CDF $F_X$. The moment generating function (mgf) of $X$ (or $F_X$), denoted by $M_X(t)$, is

$M_X(t) = \operatorname{E}\left[e^{tX}\right]$

provided this expectation exists for $t$ in some open neighborhood of 0. That is, there is an $h > 0$ such that for all $t$ in $-h < t < h$, $\operatorname{E}\left[e^{tX}\right]$ exists. If the expectation does not exist in an open neighborhood of 0, we say that the moment generating function does not exist.
In other words, the moment-generating function of X is the expectation of the random variable $e^{tX}$. More generally, when $\mathbf{X} = (X_1, \ldots, X_n)^{\mathrm{T}}$ is an $n$-dimensional random vector and $\mathbf{t}$ is a fixed vector, one uses $\mathbf{t} \cdot \mathbf{X} = \mathbf{t}^{\mathrm{T}} \mathbf{X}$ instead of $tX$:

$M_{\mathbf{X}}(\mathbf{t}) := \operatorname{E}\left[e^{\mathbf{t}^{\mathrm{T}} \mathbf{X}}\right].$

$M_X(0)$ always exists and is equal to 1. However, a key problem with moment-generating functions is that moments and the moment-generating function may not exist, as the integrals need not converge absolutely. By contrast, the characteristic function or Fourier transform always exists (because it is the integral of a bounded function on a space of finite measure), and for some purposes may be used instead.
The moment-generating function is so named because it can be used to find the moments of the distribution. The series expansion of $e^{tX}$ is

$e^{tX} = 1 + tX + \frac{t^2 X^2}{2!} + \frac{t^3 X^3}{3!} + \cdots + \frac{t^n X^n}{n!} + \cdots.$

Hence,

$M_X(t) = \operatorname{E}[e^{tX}] = 1 + t\operatorname{E}[X] + \frac{t^2 \operatorname{E}[X^2]}{2!} + \frac{t^3 \operatorname{E}[X^3]}{3!} + \cdots + \frac{t^n \operatorname{E}[X^n]}{n!} + \cdots = 1 + t m_1 + \frac{t^2 m_2}{2!} + \frac{t^3 m_3}{3!} + \cdots + \frac{t^n m_n}{n!} + \cdots,$

where $m_n$ is the $n$-th moment. Differentiating $M_X(t)$ $i$ times with respect to $t$ and setting $t = 0$, we obtain the $i$-th moment about the origin, $m_i$; see § Calculations of moments below.
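The derivative-at-zero relation is easy to check numerically. A minimal sketch using the closed-form MGF of an Exponential(λ) variable, $M(t) = \lambda/(\lambda - t)$; the rate λ = 2 is an illustrative choice (raw moments are $1/\lambda$ and $2/\lambda^2$):

```python
# MGF of an Exponential(lam) random variable: M(t) = lam / (lam - t), t < lam.
def mgf_exp(t, lam=2.0):
    return lam / (lam - t)

# Approximate the first two derivatives at t = 0 with central differences;
# these should match the raw moments E[X] = 1/lam and E[X^2] = 2/lam^2.
h = 1e-4
m1 = (mgf_exp(h) - mgf_exp(-h)) / (2 * h)
m2 = (mgf_exp(h) - 2 * mgf_exp(0.0) + mgf_exp(-h)) / h ** 2

print(round(m1, 4))  # 0.5  (= 1/lam)
print(round(m2, 4))  # 0.5  (= 2/lam^2)
```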
If $X$ is a continuous random variable, the following relation between its moment-generating function $M_X(t)$ and the two-sided Laplace transform of its probability density function $f_X(x)$ holds:

$M_X(t) = \mathcal{L}\{f_X\}(-t),$

since the PDF's two-sided Laplace transform is given as

$\mathcal{L}\{f_X\}(s) = \int_{-\infty}^{\infty} e^{-sx} f_X(x)\,dx,$

and the moment-generating function's definition expands (by the law of the unconscious statistician) to

$M_X(t) = \operatorname{E}\left[e^{tX}\right] = \int_{-\infty}^{\infty} e^{tx} f_X(x)\,dx.$

This is consistent with the characteristic function of $X$ being a Wick rotation of $M_X(t)$ when the moment generating function exists, as the characteristic function of a continuous random variable $X$ is the Fourier transform of its probability density function $f_X(x)$, and in general when a function $f(x)$ is of exponential order, the Fourier transform of $f$ is a Wick rotation of its two-sided Laplace transform in the region of convergence. See the relation of the Fourier and Laplace transforms for further information.
== Examples ==
Here are some examples of the moment-generating function and the characteristic function for comparison. It can be seen that the characteristic function is a Wick rotation of the moment-generating function $M_X(t)$ when the latter exists.
== Calculation ==
Since the moment-generating function is the expectation of a function of the random variable, it can be written as follows.

For a discrete probability mass function,

$M_X(t) = \sum_{i=0}^{\infty} e^{t x_i}\,p_i.$

For a continuous probability density function,

$M_X(t) = \int_{-\infty}^{\infty} e^{tx} f(x)\,dx.$

In the general case,

$M_X(t) = \int_{-\infty}^{\infty} e^{tx}\,dF(x),$

using the Riemann–Stieltjes integral, and where $F$ is the cumulative distribution function. This is simply the Laplace–Stieltjes transform of $F$, but with the sign of the argument reversed.
Note that for the case where $X$ has a continuous probability density function $f(x)$, $M_X(-t)$ is the two-sided Laplace transform of $f(x)$:

$M_X(t) = \int_{-\infty}^{\infty} e^{tx} f(x)\,dx = \int_{-\infty}^{\infty} \left(1 + tx + \frac{t^2 x^2}{2!} + \cdots + \frac{t^n x^n}{n!} + \cdots\right) f(x)\,dx = 1 + t m_1 + \frac{t^2 m_2}{2!} + \cdots + \frac{t^n m_n}{n!} + \cdots,$

where $m_n$ is the $n$-th moment.
=== Linear transformations of random variables ===
If random variable $X$ has moment generating function $M_X(t)$, then $\alpha X + \beta$ has moment generating function $M_{\alpha X + \beta}(t) = e^{\beta t} M_X(\alpha t)$:

$M_{\alpha X + \beta}(t) = \operatorname{E}\left[e^{(\alpha X + \beta)t}\right] = e^{\beta t} \operatorname{E}\left[e^{\alpha X t}\right] = e^{\beta t} M_X(\alpha t).$
=== Linear combination of independent random variables ===
If $S_n = \sum_{i=1}^{n} a_i X_i$, where the $X_i$ are independent random variables and the $a_i$ are constants, then the probability density function for $S_n$ is the convolution of the probability density functions of each of the $X_i$, and the moment-generating function for $S_n$ is given by

$M_{S_n}(t) = M_{X_1}(a_1 t)\,M_{X_2}(a_2 t)\cdots M_{X_n}(a_n t).$
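The product rule for independent summands can be sanity-checked with normal variables, whose MGF has the standard closed form $e^{\mu t + \sigma^2 t^2/2}$; the particular coefficients below are arbitrary:

```python
import math

# MGF of a Normal(mu, sigma^2) random variable (standard closed form).
def mgf_normal(t, mu, sigma2):
    return math.exp(mu * t + 0.5 * sigma2 * t * t)

# For S = 3*X + 2*Y with X ~ N(1, 4) and Y ~ N(-1, 1) independent,
# M_S(t) = M_X(3t) * M_Y(2t), which is the MGF of a normal with
# mean 3*1 + 2*(-1) = 1 and variance 9*4 + 4*1 = 40.
t = 0.3
lhs = mgf_normal(3 * t, 1, 4) * mgf_normal(2 * t, -1, 1)
rhs = mgf_normal(t, 1, 40)
print(abs(lhs - rhs) < 1e-9)  # True
```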
=== Vector-valued random variables ===
For vector-valued random variables $\mathbf{X}$ with real components, the moment-generating function is given by

$M_{\mathbf{X}}(\mathbf{t}) = \operatorname{E}\left[e^{\langle \mathbf{t}, \mathbf{X}\rangle}\right]$

where $\mathbf{t}$ is a vector and $\langle \cdot, \cdot \rangle$ is the dot product.
== Important properties ==
Moment generating functions are positive and log-convex, with M(0) = 1.
An important property of the moment-generating function is that it uniquely determines the distribution. In other words, if $X$ and $Y$ are two random variables and, for all values of $t$,

$M_X(t) = M_Y(t),$

then $F_X(x) = F_Y(x)$ for all values of $x$ (or equivalently, $X$ and $Y$ have the same distribution). This statement is not equivalent to the statement "if two distributions have the same moments, then they are identical at all points." This is because in some cases the moments exist and yet the moment-generating function does not, because the limit

$\lim_{n \to \infty} \sum_{i=0}^{n} \frac{t^i m_i}{i!}$

may not exist. The log-normal distribution is an example of when this occurs.
=== Calculations of moments ===
The moment-generating function is so called because if it exists on an open interval around t = 0, then it is the exponential generating function of the moments of the probability distribution:
$m_n = \operatorname{E}\left[X^n\right] = M_X^{(n)}(0) = \left.\frac{d^n M_X}{dt^n}\right|_{t=0}.$
That is, with n being a nonnegative integer, the n-th moment about 0 is the n-th derivative of the moment generating function, evaluated at t = 0.
== Other properties ==
Jensen's inequality provides a simple lower bound on the moment-generating function:
$M_X(t) \geq e^{\mu t},$

where $\mu$ is the mean of X.
The moment-generating function can be used in conjunction with Markov's inequality to bound the upper tail of a real random variable X. This statement is also called the Chernoff bound. Since
$x \mapsto e^{xt}$ is monotonically increasing for $t > 0$, we have

$\Pr(X \geq a) = \Pr(e^{tX} \geq e^{ta}) \leq e^{-at} \operatorname{E}\left[e^{tX}\right] = e^{-at} M_X(t)$

for any $t > 0$ and any $a$, provided $M_X(t)$ exists. For example, when $X$ is a standard normal random variable and $a > 0$, we can choose $t = a$ and recall that $M_X(t) = e^{t^2/2}$. This gives $\Pr(X \geq a) \leq e^{-a^2/2}$, which is within a factor of $1 + a$ of the exact value.
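A quick Monte Carlo check of this Chernoff bound for the standard normal case; the sample size, seed, and threshold $a = 2$ are arbitrary choices:

```python
import math
import random

random.seed(0)
a = 2.0
n = 200_000
# Empirical estimate of Pr(X >= a) for a standard normal X.
tail = sum(random.gauss(0.0, 1.0) >= a for _ in range(n)) / n
# Chernoff bound with the choice t = a: Pr(X >= a) <= exp(-a^2/2).
bound = math.exp(-a * a / 2)
print(tail <= bound)  # True (the tail is near 0.023, the bound near 0.135)
```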
Various lemmas, such as Hoeffding's lemma or Bennett's inequality, provide bounds on the moment-generating function in the case of a zero-mean, bounded random variable.
When $X$ is non-negative, the moment generating function gives a simple, useful bound on the moments:

$\operatorname{E}[X^m] \leq \left(\frac{m}{te}\right)^m M_X(t),$

for any $X$, $m \geq 0$ and $t > 0$.

This follows from the inequality $1 + x \leq e^x$, into which we can substitute $x' = tx/m - 1$, giving $tx/m \leq e^{tx/m - 1}$ for any $x, t, m \in \mathbb{R}$. Now, if $t > 0$ and $x, m \geq 0$, this can be rearranged to $x^m \leq (m/(te))^m e^{tx}$. Taking the expectation on both sides gives the bound on $\operatorname{E}[X^m]$ in terms of $\operatorname{E}[e^{tX}]$.
As an example, consider $X \sim \chi_k^2$, chi-squared with $k$ degrees of freedom. Then from the examples $M_X(t) = (1 - 2t)^{-k/2}$. Picking $t = m/(2m + k)$ and substituting into the bound:

$\operatorname{E}[X^m] \leq (1 + 2m/k)^{k/2} e^{-m} (k + 2m)^m.$
We know that in this case the correct bound is $\operatorname{E}[X^m] \leq 2^m \Gamma(m + k/2)/\Gamma(k/2)$. To compare the bounds, we can consider the asymptotics for large $k$. Here the moment-generating function bound is $k^m(1 + m^2/k + O(1/k^2))$, where the real bound is $k^m(1 + (m^2 - m)/k + O(1/k^2))$. The moment-generating function bound is thus very strong in this case.
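The chi-squared comparison above can be reproduced directly; a small sketch with illustrative values $k = 10$, $m = 3$:

```python
import math

k, m = 10, 3  # degrees of freedom and moment order (illustrative values)

# Moment bound obtained from the MGF M_X(t) = (1 - 2t)^{-k/2}
# with the choice t = m/(2m + k) used in the text.
mgf_bound = (1 + 2 * m / k) ** (k / 2) * math.exp(-m) * (k + 2 * m) ** m
# Exact moment of a chi-squared variable: 2^m Gamma(m + k/2) / Gamma(k/2).
exact = 2 ** m * math.gamma(m + k / 2) / math.gamma(k / 2)

print(exact <= mgf_bound)           # True: the MGF bound is valid
print(round(mgf_bound / exact, 2))  # the overshoot factor is modest
```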
== Relation to other functions ==
Related to the moment-generating function are a number of other transforms that are common in probability theory:
Characteristic function
The characteristic function $\varphi_X(t)$ is related to the moment-generating function via $\varphi_X(t) = M_{iX}(t) = M_X(it)$: the characteristic function is the moment-generating function of $iX$, or the moment generating function of $X$ evaluated on the imaginary axis. This function can also be viewed as the Fourier transform of the probability density function, which can therefore be deduced from it by inverse Fourier transform.
Cumulant-generating function
The cumulant-generating function is defined as the logarithm of the moment-generating function; some instead define the cumulant-generating function as the logarithm of the characteristic function, while others call this latter the second cumulant-generating function.
Probability-generating function
The probability-generating function is defined as

$G(z) = \operatorname{E}\left[z^X\right].$

This immediately implies that

$G(e^t) = \operatorname{E}\left[e^{tX}\right] = M_X(t).$
== See also ==
Characteristic function (probability theory)
Factorial moment generating function
Rate function
Hamburger moment problem
== References ==
=== Citations ===
=== Sources ===
In probability and statistics, the quantile function is a function $Q : [0,1] \mapsto \mathbb{R}$ which maps a probability $x \in [0,1]$ of a random variable $v$ to the value $y$ of the variable such that $P(v \leq y) = x$ according to its probability distribution. In other words, the function returns the value of the variable below which the specified cumulative probability is contained. For example, if the distribution is a standard normal distribution then $Q(0.5)$ will return 0, as half of the probability mass lies below 0.
The quantile function is also called the percentile function (after the percentile), percent-point function, inverse cumulative distribution function (after the cumulative distribution function or c.d.f.) or inverse distribution function.
== Definition ==
=== Strictly increasing distribution function ===
With reference to a continuous and strictly increasing cumulative distribution function (c.d.f.) $F_X \colon \mathbb{R} \to [0,1]$ of a random variable X, the quantile function $Q \colon [0,1] \to \mathbb{R}$ maps its input p to a threshold value x so that the probability of X being less than or equal to x is p. In terms of the distribution function F, the quantile function Q returns the value x such that

$F_X(x) := \Pr(X \leq x) = p,$

which can be written as the inverse of the c.d.f.:

$Q(p) = F_X^{-1}(p).$
=== General distribution function ===
In the general case of distribution functions that are not strictly monotonic and therefore do not permit an inverse c.d.f., the quantile is a (potentially) set-valued functional of a distribution function F, given by the interval

$Q(p) = \big[\sup\{x \colon F(x) < p\},\; \sup\{x \colon F(x) \leq p\}\big].$

It is often standard to choose the lowest value, which can equivalently be written as (using right-continuity of F)

$Q(p) = \inf\{x \in \mathbb{R} : p \leq F(x)\}.$

Here we capture the fact that the quantile function returns the minimum value of x from amongst all those values whose c.d.f. value exceeds p, which is equivalent to the previous probability statement in the special case that the distribution is continuous.
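The infimum definition translates directly into code for a discrete distribution; a minimal sketch, taking the lowest value (the three-point distribution is just an example):

```python
# Generalized quantile, taking the lowest value: Q(p) = inf{x : p <= F(x)}.
# Example discrete distribution: P(X = x) = 1/3 for x in {1, 2, 3}.
values = [1, 2, 3]
probs = [1 / 3, 1 / 3, 1 / 3]

def quantile(p):
    cdf = 0.0
    for x, w in zip(values, probs):
        cdf += w
        if p <= cdf + 1e-12:  # small tolerance for float round-off
            return x
    return values[-1]

print(quantile(1 / 3))  # 1: F(1) = 1/3 already reaches p
print(quantile(0.5))    # 2
```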
The quantile is the unique function satisfying the Galois inequalities

$Q(p) \leq x$ if and only if $p \leq F(x).$

If the function F is continuous and strictly monotonically increasing, then the inequalities can be replaced by equalities, and we have

$Q = F^{-1}.$

In general, even though the distribution function F may fail to possess a left or right inverse, the quantile function Q behaves as an "almost sure left inverse" for the distribution function, in the sense that

$Q\bigl(F(X)\bigr) = X \quad \text{almost surely.}$
== Simple example ==
For example, the cumulative distribution function of Exponential(λ) (i.e. intensity λ and expected value (mean) 1/λ) is

$F(x;\lambda) = \begin{cases} 1 - e^{-\lambda x} & x \geq 0, \\ 0 & x < 0. \end{cases}$

The quantile function for Exponential(λ) is derived by finding the value of Q for which $1 - e^{-\lambda Q} = p$:

$Q(p;\lambda) = \frac{-\ln(1-p)}{\lambda},$

for 0 ≤ p < 1. The quartiles are therefore:

first quartile (p = 1/4): $-\ln(3/4)/\lambda$
median (p = 2/4): $-\ln(1/2)/\lambda$
third quartile (p = 3/4): $-\ln(1/4)/\lambda$
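These expressions evaluate directly, and the quantile function also drives inverse transform sampling (feeding uniform draws through Q reproduces the distribution); a short sketch with an illustrative rate λ = 2:

```python
import math
import random

# Quantile function of Exponential(lam): Q(p) = -ln(1 - p) / lam.
def exp_quantile(p, lam):
    return -math.log(1.0 - p) / lam

lam = 2.0
print(round(exp_quantile(0.5, lam), 4))  # 0.3466, the median ln(2)/lam

# Inverse transform sampling: the sample mean approaches 1/lam = 0.5.
random.seed(1)
sample = [exp_quantile(random.random(), lam) for _ in range(100_000)]
print(round(sum(sample) / len(sample), 1))  # 0.5
```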
== Applications ==
Quantile functions are used in both statistical applications and Monte Carlo methods.
The quantile function is one way of prescribing a probability distribution, and it is an alternative to the probability density function (pdf) or probability mass function, the cumulative distribution function (cdf) and the characteristic function. The quantile function, Q, of a probability distribution is the inverse of its cumulative distribution function F. The derivative of the quantile function, namely the quantile density function, is yet another way of prescribing a probability distribution. It is the reciprocal of the pdf composed with the quantile function.
Consider a statistical application where a user needs to know key percentage points of a given distribution. For example, they require the median and 25% and 75% quartiles as in the example above or 5%, 95%, 2.5%, 97.5% levels for other applications such as assessing the statistical significance of an observation whose distribution is known; see the quantile entry. Before the popularization of computers, it was not uncommon for books to have appendices with statistical tables sampling the quantile function. Statistical applications of quantile functions are discussed extensively by Gilchrist.
Monte-Carlo simulations employ quantile functions to produce non-uniform random or pseudorandom numbers for use in diverse types of simulation calculations. A sample from a given distribution may be obtained in principle by applying its quantile function to a sample from a uniform distribution. The demands of simulation methods, for example in modern computational finance, are focusing increasing attention on methods based on quantile functions, as they work well with multivariate techniques based on either copula or quasi-Monte-Carlo methods and Monte Carlo methods in finance.
== Calculation ==
The evaluation of quantile functions often involves numerical methods; the exponential distribution above is one of the few cases in which a closed-form expression can be found (others include the uniform, the Weibull, the Tukey lambda (which includes the logistic) and the log-logistic). When the cdf itself has a closed-form expression, one can always use a numerical root-finding algorithm such as the bisection method to invert the cdf. Other methods rely on an approximation of the inverse via interpolation techniques. Further algorithms to evaluate quantile functions are given in the Numerical Recipes series of books. Algorithms for common distributions are built into many statistical software packages. General methods to numerically compute the quantile functions for general classes of distributions can be found in the following libraries:
C library UNU.RAN
R library Runuran
Python subpackage sampling in scipy.stats
Quantile functions may also be characterized as solutions of non-linear ordinary and partial differential equations. The ordinary differential equations for the cases of the normal, Student, beta and gamma distributions have been given and solved.
=== Normal distribution ===
The normal distribution is perhaps the most important case. Because the normal distribution is a location-scale family, its quantile function for arbitrary parameters can be derived from a simple transformation of the quantile function of the standard normal distribution, known as the probit function. Unfortunately, this function has no closed-form representation using basic algebraic functions; as a result, approximate representations are usually used. Thorough composite rational and polynomial approximations have been given by Wichura and Acklam. Non-composite rational approximations have been developed by Shaw.
==== Ordinary differential equation for the normal quantile ====
A non-linear ordinary differential equation for the normal quantile, w(p), may be given. It is
$\frac{d^2 w}{dp^2} = w \left(\frac{dw}{dp}\right)^2$

with the centre (initial) conditions

$w(1/2) = 0,$
$w'(1/2) = \sqrt{2\pi}.$
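As a check, the ODE can be integrated numerically from the centre conditions. The sketch below uses a standard fourth-order Runge–Kutta scheme (the step count is arbitrary) and recovers the standard normal quantile at p = Φ(1) ≈ 0.8413, which should be 1:

```python
import math

# Integrate w'' = w (w')^2 as the first-order system (w, v), v = w',
# from p = 1/2 with w = 0, v = sqrt(2*pi), up to p = Phi(1).
def rhs(w, v):
    return v, w * v * v

p_end = 0.8413447460685429  # Phi(1) for the standard normal
w, v = 0.0, math.sqrt(2 * math.pi)
n = 2000
h = (p_end - 0.5) / n
for _ in range(n):
    k1 = rhs(w, v)
    k2 = rhs(w + h / 2 * k1[0], v + h / 2 * k1[1])
    k3 = rhs(w + h / 2 * k2[0], v + h / 2 * k2[1])
    k4 = rhs(w + h * k3[0], v + h * k3[1])
    w += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])

print(round(w, 3))  # ≈ 1.0, the standard normal quantile at Phi(1)
```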
This equation may be solved by several methods, including the classical power series approach. From this solutions of arbitrarily high accuracy may be developed (see Steinbrecher and Shaw, 2008).
=== Student's t-distribution ===
This has historically been one of the more intractable cases, as the presence of a parameter, ν, the degrees of freedom, makes the use of rational and other approximations awkward. Simple formulas exist when ν = 1, 2, or 4, and the problem may be reduced to the solution of a polynomial when ν is even. In other cases the quantile functions may be developed as power series. The simple cases are as follows:
ν = 1 (Cauchy distribution): $Q(p) = \tan(\pi(p - 1/2))$

ν = 2: $Q(p) = 2(p - 1/2)\sqrt{\frac{2}{\alpha}}$

ν = 4: $Q(p) = \operatorname{sign}(p - 1/2)\,2\,\sqrt{q - 1}$

where

$q = \frac{\cos\left(\frac{1}{3}\arccos\left(\sqrt{\alpha}\right)\right)}{\sqrt{\alpha}}$

and

$\alpha = 4p(1 - p).$
In the above the "sign" function is +1 for positive arguments, −1 for negative arguments and zero at zero. It should not be confused with the trigonometric sine function.
== Quantile mixtures ==
Analogously to the mixtures of densities, distributions can be defined as quantile mixtures
$Q(p) = \sum_{i=1}^{m} a_i Q_i(p),$

where $Q_i(p)$, $i = 1, \ldots, m$ are quantile functions and $a_i$, $i = 1, \ldots, m$ are the model parameters. The parameters $a_i$ must be selected so that $Q(p)$ is a quantile function.
Two four-parametric quantile mixtures, the normal-polynomial quantile mixture and the Cauchy-polynomial quantile mixture, are presented by Karvanen.
== Non-linear differential equations for quantile functions ==
The non-linear ordinary differential equation given for normal distribution is a special case of that available for any quantile function whose second derivative exists. In general the equation for a quantile, Q(p), may be given. It is
$\frac{d^2 Q}{dp^2} = H(Q) \left(\frac{dQ}{dp}\right)^2$

augmented by suitable boundary conditions, where

$H(x) = -\frac{f'(x)}{f(x)} = -\frac{d}{dx} \ln f(x)$

and f(x) is the probability density function. The forms of this equation, and its classical analysis by series and asymptotic solutions, for the cases of the normal, Student, gamma and beta distributions have been elucidated by Steinbrecher and Shaw (2008). Such solutions provide accurate benchmarks, and in the case of the Student, suitable series for live Monte Carlo use.
== See also ==
Inverse transform sampling
Percentage point
Probability integral transform
Quantile
Rank–size distribution
== References ==
== Further reading ==
Abernathy, Roger W. and Smith, Robert P. (1993). "Applying series expansion to the inverse beta distribution to find percentiles of the F-distribution", ACM Trans. Math. Softw., 9 (4), 478–480. doi:10.1145/168173.168387
Refinement of the Normal Quantile
New Methods for Managing "Student's" T Distribution
ACM Algorithm 396: Student's t-Quantiles
In probability theory, the probability generating function of a discrete random variable is a power series representation (the generating function) of the probability mass function of the random variable. Probability generating functions are often employed for their succinct description of the sequence of probabilities Pr(X = i) in the probability mass function for a random variable X, and to make available the well-developed theory of power series with non-negative coefficients.
== Definition ==
=== Univariate case ===
If X is a discrete random variable taking values x in the non-negative integers {0,1, ...}, then the probability generating function of X is defined as
$G(z) = \operatorname{E}(z^X) = \sum_{x=0}^{\infty} p(x) z^x,$

where $p$ is the probability mass function of $X$. Note that the subscripted notations $G_X$ and $p_X$ are often used to emphasize that these pertain to a particular random variable $X$ and to its distribution. The power series converges absolutely at least for all complex numbers $z$ with $|z| < 1$; the radius of convergence is often larger.
=== Multivariate case ===
If X = (X1,...,Xd) is a discrete random variable taking values (x1, ..., xd) in the d-dimensional non-negative integer lattice {0,1, ...}d, then the probability generating function of X is defined as
$G(z) = G(z_1, \ldots, z_d) = \operatorname{E}\bigl(z_1^{X_1} \cdots z_d^{X_d}\bigr) = \sum_{x_1, \ldots, x_d = 0}^{\infty} p(x_1, \ldots, x_d)\, z_1^{x_1} \cdots z_d^{x_d},$

where p is the probability mass function of X. The power series converges absolutely at least for all complex vectors $z = (z_1, \ldots, z_d) \in \mathbb{C}^d$ with $\max\{|z_1|, \ldots, |z_d|\} \leq 1.$
== Properties ==
=== Power series ===
Probability generating functions obey all the rules of power series with non-negative coefficients. In particular, $G(1^-) = 1$, where $G(1^-) = \lim_{x \to 1, x < 1} G(x)$, x approaching 1 from below, since the probabilities must sum to one. So the radius of convergence of any probability generating function must be at least 1, by Abel's theorem for power series with non-negative coefficients.
=== Probabilities and expectations ===
The following properties allow the derivation of various basic quantities related to $X$:

The probability mass function of $X$ is recovered by taking derivatives of $G$:

$p(k) = \Pr(X = k) = \frac{G^{(k)}(0)}{k!}.$
It follows from Property 1 that if random variables $X$ and $Y$ have probability-generating functions that are equal, $G_X = G_Y$, then $p_X = p_Y$. That is, if $X$ and $Y$ have identical probability-generating functions, then they have identical distributions.
The normalization of the probability mass function can be expressed in terms of the generating function by
E
[
1
]
=
G
(
1
−
)
=
∑
i
=
0
∞
p
(
i
)
=
1.
{\displaystyle \operatorname {E} [1]=G(1^{-})=\sum _{i=0}^{\infty }p(i)=1.}
The expectation of
X
{\displaystyle X}
is given by
E
[
X
]
=
G
′
(
1
−
)
.
{\displaystyle \operatorname {E} [X]=G'(1^{-}).}
More generally, the k-th factorial moment E[X(X − 1)⋯(X − k + 1)] of X is given by

E[X!/(X − k)!] = G^(k)(1^−),  k ≥ 0.
So the variance of X is given by

Var(X) = G″(1^−) + G′(1^−) − [G′(1^−)]².
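A quick numeric sketch of the mean and variance formulas, using the Poisson PGF G(z) = e^{λ(z−1)} (the rate λ = 3 and the finite-difference step are illustrative choices, and the derivatives are approximated just below z = 1):

```python
import math

lam = 3.0
G = lambda z: math.exp(lam * (z - 1.0))  # Poisson(lam) PGF

h = 1e-5
z = 1.0 - 10 * h  # evaluate just below 1, where the series converges

G1 = (G(z + h) - G(z - h)) / (2 * h)          # approximates G'(1^-)
G2 = (G(z + h) - 2 * G(z) + G(z - h)) / h**2  # approximates G''(1^-)

mean = G1                # E[X] = G'(1^-)
var = G2 + G1 - G1**2    # Var(X) = G''(1^-) + G'(1^-) - [G'(1^-)]^2

assert abs(mean - lam) < 1e-2  # Poisson mean is lam
assert abs(var - lam) < 1e-2   # Poisson variance is lam
```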
Finally, the k-th raw moment of X is given by

E[X^k] = (z ∂/∂z)^k G(z) |_{z=1^−}.
The probability generating function is related to the moment-generating function by

G_X(e^t) = M_X(t),

where X is a random variable, G_X(t) is the probability generating function of X and M_X(t) is the moment-generating function of X.
=== Functions of independent random variables ===
Probability generating functions are particularly useful for dealing with functions of independent random variables. For example:
If X_i, i = 1, 2, ⋯, N is a sequence of independent (and not necessarily identically distributed) random variables that take on natural-number values, and

S_N = ∑_{i=1}^N a_i X_i,

where the a_i are constant natural numbers, then the probability generating function is given by

G_{S_N}(z) = E(z^{S_N}) = E(z^{∑_{i=1}^N a_i X_i}) = G_{X_1}(z^{a_1}) G_{X_2}(z^{a_2}) ⋯ G_{X_N}(z^{a_N}).
In particular, if X and Y are independent random variables:

G_{X+Y}(z) = G_X(z) · G_Y(z)

and

G_{X−Y}(z) = G_X(z) · G_Y(1/z).
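Since G_{X+Y} = G_X · G_Y, adding independent variables corresponds to multiplying their PGFs, i.e. convolving their pmfs. A small sketch with two fair dice (a hypothetical example):

```python
def pgf_multiply(a, b):
    # Multiplying PGF coefficient lists = convolving the two pmfs.
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

die = [0.0] + [1 / 6] * 6     # pmf of one fair die on {1,...,6}
two = pgf_multiply(die, die)  # coefficients of G_{X+Y} = G_X * G_Y

assert abs(two[7] - 6 / 36) < 1e-12   # Pr(X + Y = 7)
assert abs(sum(two) - 1.0) < 1e-12    # total probability
```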
In the above, the number N of independent random variables in the sequence is fixed. Assume N is a discrete random variable taking values on the non-negative integers, which is independent of the X_i, and consider the probability generating function G_N. If the X_i are not only independent but also identically distributed with common probability generating function G_X = G_{X_i}, then

G_{S_N}(z) = G_N(G_X(z)).
This can be seen, using the law of total expectation, as follows:

G_{S_N}(z) = E(z^{S_N}) = E(z^{∑_{i=1}^N X_i}) = E( E(z^{∑_{i=1}^N X_i} ∣ N) ) = E( (G_X(z))^N ) = G_N(G_X(z)).
This last fact is useful in the study of Galton–Watson processes and compound Poisson processes.
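The identity G_{S_N}(z) = G_N(G_X(z)) can be verified in a special case: a Poisson(λ) number of independent Bernoulli(p) summands yields a Poisson(λp) variable ("Poisson thinning"). The parameter values below are illustrative:

```python
import math

lam, p = 2.0, 0.3
G_N = lambda z: math.exp(lam * (z - 1.0))  # PGF of N ~ Poisson(lam)
G_X = lambda z: (1 - p) + p * z            # PGF of each X_i ~ Bernoulli(p)

# Compound sum S_N of N i.i.d. Bernoulli terms: G_{S_N}(z) = G_N(G_X(z)).
G_S = lambda z: G_N(G_X(z))

# Known closed form: S_N ~ Poisson(lam * p), whose PGF is exp(lam*p*(z-1)).
for z in (0.0, 0.25, 0.5, 0.9):
    assert abs(G_S(z) - math.exp(lam * p * (z - 1.0))) < 1e-12
```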
When the X_i are not assumed to be identically distributed (but still independent and independent of N), we have

G_{S_N}(z) = ∑_{n≥1} f_n ∏_{i=1}^n G_{X_i}(z),

where f_n = Pr(N = n).
For identically distributed X_i, this simplifies to the identity stated before, but the general case is sometimes useful to obtain a decomposition of S_N by means of generating functions.
== Examples ==
The probability generating function of an almost surely constant random variable, i.e. one with Pr(X = c) = 1 and Pr(X ≠ c) = 0, is

G(z) = z^c.
The probability generating function of a binomial random variable, the number of successes in n trials, with probability p of success in each trial, is

G(z) = [(1 − p) + pz]^n.
Note: it is the n-fold product of the probability generating function of a Bernoulli random variable with parameter p. So the probability generating function of a fair coin is

G(z) = 1/2 + z/2.
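The claim that the binomial PGF is the n-fold product of Bernoulli PGFs can be checked by expanding the product and comparing coefficients with the binomial pmf (the values of n and p below are arbitrary choices):

```python
import math

# Expand G(z) = ((1 - p) + p z)^n by repeated polynomial multiplication and
# compare the coefficient of z^k with the binomial pmf C(n, k) p^k (1-p)^(n-k).
n, p = 5, 0.3

poly = [1.0]                       # coefficients of the running product
for _ in range(n):
    nxt = [0.0] * (len(poly) + 1)
    for i, c in enumerate(poly):
        nxt[i]     += c * (1 - p)  # constant term of the Bernoulli PGF
        nxt[i + 1] += c * p        # z term of the Bernoulli PGF
    poly = nxt

for k in range(n + 1):
    assert abs(poly[k] - math.comb(n, k) * p**k * (1 - p)**(n - k)) < 1e-12
```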
The probability generating function of a negative binomial random variable on {0, 1, 2, ⋯}, the number of failures until the r-th success with probability of success p in each trial, is

G(z) = (p / (1 − (1 − p)z))^r,

which converges for |z| < 1/(1 − p). Note that this is the r-fold product of the probability generating function of a geometric random variable with parameter 1 − p on {0, 1, 2, ⋯}.
The probability generating function of a Poisson random variable with rate parameter λ is

G(z) = e^{λ(z−1)}.
== Related concepts ==
The probability generating function is an example of a generating function of a sequence: see also formal power series. It is equivalent to, and sometimes called, the z-transform of the probability mass function.
Other generating functions of random variables include the moment-generating function, the characteristic function and the cumulant generating function. The probability generating function is also equivalent to the factorial moment generating function, which as E[z^X] can also be considered for continuous and other random variables.
Conversion of units is the conversion of the unit of measurement in which a quantity is expressed, typically through a multiplicative conversion factor that changes the unit without changing the quantity. This is also often loosely taken to include replacement of a quantity with a corresponding quantity that describes the same physical property.
Unit conversion is often easier within a metric system such as the SI than in others, due to the system's coherence and its metric prefixes that act as power-of-10 multipliers.
== Overview ==
The definition and choice of units in which to express a quantity may depend on the specific situation and the intended purpose. This may be governed by regulation, contract, technical specifications or other published standards. Engineering judgment may include such factors as:
the precision and accuracy of measurement and the associated uncertainty of measurement
the statistical confidence interval or tolerance interval of the initial measurement
the number of significant figures of the measurement
the intended use of the measurement, including the engineering tolerances
historical definitions of the units and their derivatives used in old measurements; e.g., international foot vs. US survey foot.
For some purposes, conversions from one system of units to another need to be exact, without increasing or decreasing the precision of the expressed quantity. An adaptive conversion may not produce an exactly equivalent expression. Nominal values are sometimes allowed and used.
== Factor–label method ==
The factor–label method, also known as the unit–factor method or the unity bracket method, is a widely used technique for unit conversions that uses the rules of algebra.
The factor–label method is the sequential application of conversion factors expressed as fractions and arranged so that any dimensional unit appearing in both the numerator and denominator of any of the fractions can be cancelled out until only the desired set of dimensional units is obtained. For example, 10 miles per hour can be converted to metres per second by using a sequence of conversion factors as shown below:
(10 mi / 1 h) × (1609.344 m / 1 mi) × (1 h / 3600 s) = 4.4704 m/s.
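The same chain can be sketched in code, with each multiplication mirroring one conversion fraction from the text:

```python
# 10 mi/h -> m/s, one multiplication per conversion fraction.
value = 10.0              # 10 mi / 1 h
value *= 1609.344 / 1.0   # x (1609.344 m / 1 mi): cancels miles
value *= 1.0 / 3600.0     # x (1 h / 3600 s): cancels hours
assert abs(value - 4.4704) < 1e-9
```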
Each conversion factor is chosen based on the relationship between one of the original units and one of the desired units (or some intermediary unit), before being rearranged to create a factor that cancels out the original unit. For example, as "mile" is the numerator in the original fraction and 1 mi = 1609.344 m, "mile" will need to be the denominator in the conversion factor. Dividing both sides of the equation by 1 mile yields (1 mi / 1 mi) = (1609.344 m / 1 mi), which when simplified results in the dimensionless 1 = 1609.344 m / 1 mi. Because of the identity property of multiplication, multiplying any quantity (physical or not) by the dimensionless 1 does not change that quantity. Once this and the conversion factor for seconds per hour have been multiplied by the original fraction to cancel out the units mile and hour, 10 miles per hour converts to 4.4704 metres per second.
As a more complex example, the concentration of nitrogen oxides (NOx) in the flue gas from an industrial furnace can be converted to a mass flow rate expressed in grams per hour (g/h) of NOx by using the following information as shown below:
NOx concentration
= 10 parts per million by volume = 10 ppmv = 10 volumes/10^6 volumes
NOx molar mass
= 46 kg/kmol = 46 g/mol
Flow rate of flue gas
= 20 cubic metres per minute = 20 m3/min
The flue gas exits the furnace at 0 °C temperature and 101.325 kPa absolute pressure.
The molar volume of a gas at 0 °C temperature and 101.325 kPa is 22.414 m3/kmol.
(1000 g NOx / 1 kg NOx) × (46 kg NOx / 1 kmol NOx) × (1 kmol NOx / 22.414 m³ NOx) × (10 m³ NOx / 10^6 m³ gas) × (20 m³ gas / 1 minute) × (60 minute / 1 hour) = 24.63 g NOx / hour
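The same product of factors, in code (all values taken from the worked example above):

```python
# Values from the worked example: 10 ppmv NOx, 46 kg/kmol, 22.414 m^3/kmol,
# 20 m^3/min of flue gas.
g_per_kg    = 1000.0
kg_per_kmol = 46.0
kmol_per_m3 = 1.0 / 22.414
ppmv        = 10.0 / 1e6   # m^3 NOx per m^3 gas
m3_per_min  = 20.0
min_per_h   = 60.0

g_per_h = g_per_kg * kg_per_kmol * kmol_per_m3 * ppmv * m3_per_min * min_per_h
assert round(g_per_h, 2) == 24.63
```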
After cancelling any dimensional units that appear both in the numerators and the denominators of the fractions in the above equation, the NOx concentration of 10 ppmv converts to mass flow rate of 24.63 grams per hour.
=== Checking equations that involve dimensions ===
The factor–label method can also be used on any mathematical equation to check whether or not the dimensional units on the left hand side of the equation are the same as the dimensional units on the right hand side of the equation. Having the same units on both sides of an equation does not ensure that the equation is correct, but having different units on the two sides (when expressed in terms of base units) of an equation implies that the equation is wrong.
For example, check the universal gas law equation of PV = nRT, when:
the pressure P is in pascals (Pa)
the volume V is in cubic metres (m3)
the amount of substance n is in moles (mol)
the universal gas constant R is 8.3145 Pa⋅m3/(mol⋅K)
the temperature T is in kelvins (K)
Pa⋅m³ = (mol / 1) × (Pa⋅m³ / (mol⋅K)) × (K / 1)
As can be seen, when the dimensional units appearing in the numerator and denominator of the equation's right hand side are cancelled out, both sides of the equation have the same dimensional units. Dimensional analysis can be used as a tool to construct equations that relate non-associated physico-chemical properties. The equations may reveal undiscovered or overlooked properties of matter, in the form of left-over dimensions – dimensional adjusters – that can then be assigned physical significance. Such 'mathematical manipulation' is neither without precedent nor without considerable scientific significance: the Planck constant, a fundamental physical constant, was 'discovered' as a purely mathematical abstraction built on the Rayleigh–Jeans law for preventing the ultraviolet catastrophe, and was assigned its quantum physical significance only in tandem with, or after, this mathematical dimensional adjustment.
=== Limitations ===
The factor–label method can convert only unit quantities for which the units are in a linear relationship intersecting at 0 (ratio scale in Stevens's typology). Most conversions fit this paradigm. An example for which it cannot be used is the conversion between the Celsius scale and the Kelvin scale (or the Fahrenheit scale). Between degrees Celsius and kelvins, there is a constant difference rather than a constant ratio, while between degrees Celsius and degrees Fahrenheit there is neither a constant difference nor a constant ratio. There is, however, an affine transform (x ↦ ax + b, rather than a linear transform x ↦ ax) between them.
For example, the freezing point of water is 0 °C and 32 °F, and a 5 °C change is the same as a 9 °F change. Thus, to convert from units of Fahrenheit to units of Celsius, one subtracts 32 °F (the offset from the point of reference), divides by 9 °F and multiplies by 5 °C (scales by the ratio of units), and adds 0 °C (the offset from the point of reference). Reversing this yields the formula for obtaining a quantity in units of Celsius from units of Fahrenheit; one could have started with the equivalence between 100 °C and 212 °F, which yields the same formula.
Hence, to convert the numerical quantity value of a temperature T[F] in degrees Fahrenheit to a numerical quantity value T[C] in degrees Celsius, this formula may be used:
T[C] = (T[F] − 32) × 5/9.
To convert T[C] in degrees Celsius to T[F] in degrees Fahrenheit, this formula may be used:
T[F] = (T[C] × 9/5) + 32.
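The two temperature formulas translate directly into a pair of inverse functions:

```python
def f_to_c(t_f):
    # T[C] = (T[F] - 32) * 5/9
    return (t_f - 32.0) * 5.0 / 9.0

def c_to_f(t_c):
    # T[F] = T[C] * 9/5 + 32
    return t_c * 9.0 / 5.0 + 32.0

assert f_to_c(32.0) == 0.0      # freezing point of water
assert c_to_f(100.0) == 212.0   # boiling point of water
assert abs(f_to_c(c_to_f(37.0)) - 37.0) < 1e-12  # round trip
```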
=== Example ===
Starting with:

Z = n_i × [Z]_i
replace the original unit [Z]_i with its meaning in terms of the desired unit [Z]_j, e.g. if [Z]_i = c_{ij} × [Z]_j, then:
Z = n_i × (c_{ij} × [Z]_j) = (n_i × c_{ij}) × [Z]_j
Now n_i and c_{ij} are both numerical values, so just calculate their product.
Or, which is just mathematically the same thing, multiply Z by unity, the product is still Z:
Z = n_i × [Z]_i × (c_{ij} × [Z]_j / [Z]_i)
For example, you have an expression for a physical value Z involving the unit feet per second ([Z]_i) and you want it in terms of the unit miles per hour ([Z]_j):
Or as an example using the metric system, you have a value of fuel economy in the unit litres per 100 kilometres and you want it in terms of the unit microlitres per metre:
9 L / 100 km = (9 L / 100 km) × (1000000 μL / 1 L) × (1 km / 1000 m) = (9 × 1000000)/(100 × 1000) μL/m = 90 μL/m
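In code, the fuel-economy conversion is a single product of the two factors:

```python
# 9 L/100 km -> uL/m, multiplying by (1,000,000 uL / 1 L) and (1 km / 1000 m).
litres_per_100km = 9.0
ul_per_m = litres_per_100km * 1_000_000 / (100 * 1000)
assert ul_per_m == 90.0
```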
== Calculation involving non-SI Units ==
In the cases where non-SI units are used, the numerical calculation of a formula can be done by first working out the conversion factor, and then plugging in the numerical values of the given/known quantities.
For example, in the study of Bose–Einstein condensate, atomic mass m is usually given in daltons, instead of kilograms, and chemical potential μ is often given in the Boltzmann constant times nanokelvin. The condensate's healing length is given by:
ξ = ℏ / √(2mμ).
For a 23Na condensate with chemical potential of (the Boltzmann constant times) 128 nK, the calculation of healing length (in micrometres) can be done in two steps:
=== Calculate the factor ===
Assume that m = 1 Da and μ = k_B · 1 nK; this gives

ξ = ℏ / √(2mμ) = 15.574 μm,

which is our factor.
which is our factor.
=== Calculate the numbers ===
Now, make use of the fact that ξ ∝ 1/√(mμ). With m = 23 Da and μ = 128 k_B·nK,

ξ = (15.574 / √(23 · 128)) μm = 0.287 μm.
.
This method is especially useful for programming and/or making a worksheet, where input quantities take multiple different values. For example, with the factor calculated above, it is very easy to see that the healing length of 174Yb with chemical potential 20.3 nK is

ξ = (15.574 / √(174 · 20.3)) μm = 0.262 μm.
== Software tools ==
There are many conversion tools. They are found in the function libraries of applications such as spreadsheets and databases, in calculators, and in macro packages and plugins for many other applications such as mathematical, scientific and technical applications.
There are many standalone applications that offer conversions among thousands of various units. For example, the free software movement offers a command line utility GNU units for GNU and Windows. The Unified Code for Units of Measure is also a popular option.
In Riemannian geometry and pseudo-Riemannian geometry, the Gauss–Codazzi equations (also called the Gauss–Codazzi–Weingarten-Mainardi equations or Gauss–Peterson–Codazzi formulas) are fundamental formulas that link together the induced metric and second fundamental form of a submanifold of (or immersion into) a Riemannian or pseudo-Riemannian manifold.
The equations were originally discovered in the context of surfaces in three-dimensional Euclidean space. In this context, the first equation, often called the Gauss equation (after its discoverer Carl Friedrich Gauss), says that the Gauss curvature of the surface, at any given point, is dictated by the derivatives of the Gauss map at that point, as encoded by the second fundamental form. The second equation, called the Codazzi equation or Codazzi-Mainardi equation, states that the covariant derivative of the second fundamental form is fully symmetric. It is named for Gaspare Mainardi (1856) and Delfino Codazzi (1868–1869), who independently derived the result, although it was discovered earlier by Karl Mikhailovich Peterson.
== Formal statement ==
Let i : M ⊂ P be an n-dimensional embedded submanifold of a Riemannian manifold P of dimension n + p. There is a natural inclusion of the tangent bundle of M into that of P by the pushforward, and the cokernel is the normal bundle of M:

0 → T_x M → T_x P|_M → T_x^⊥ M → 0.
The metric splits this short exact sequence, and so

TP|_M = TM ⊕ T^⊥M.
Relative to this splitting, the Levi-Civita connection ∇′ of P decomposes into tangential and normal components. For each X ∈ TM and vector field Y on M,

∇′_X Y = ⊤(∇′_X Y) + ⊥(∇′_X Y).
Let

∇_X Y = ⊤(∇′_X Y),  α(X, Y) = ⊥(∇′_X Y).
The Gauss formula now asserts that ∇_X is the Levi-Civita connection for M, and α is a symmetric vector-valued form with values in the normal bundle. It is often referred to as the second fundamental form.
An immediate corollary is the Gauss equation for the curvature tensor. For X, Y, Z, W ∈ TM,

⟨R′(X, Y)Z, W⟩ = ⟨R(X, Y)Z, W⟩ + ⟨α(X, Z), α(Y, W)⟩ − ⟨α(Y, Z), α(X, W)⟩,
where R′ is the Riemann curvature tensor of P and R is that of M.
The Weingarten equation is an analog of the Gauss formula for a connection in the normal bundle. Let X ∈ TM and ξ a normal vector field. Then decompose the ambient covariant derivative of ξ along X into tangential and normal components:

∇′_X ξ = ⊤(∇′_X ξ) + ⊥(∇′_X ξ) = −A_ξ(X) + D_X(ξ).
Then:

Weingarten's equation: ⟨A_ξ X, Y⟩ = ⟨α(X, Y), ξ⟩

D_X is a metric connection in the normal bundle.
There are thus a pair of connections: ∇, defined on the tangent bundle of M; and D, defined on the normal bundle of M. These combine to form a connection on any tensor product of copies of TM and T⊥M. In particular, they define the covariant derivative of α:

(∇̃_X α)(Y, Z) = D_X(α(Y, Z)) − α(∇_X Y, Z) − α(Y, ∇_X Z).
The Codazzi–Mainardi equation is

⊥(R′(X, Y)Z) = (∇̃_X α)(Y, Z) − (∇̃_Y α)(X, Z).
Since every immersion is, in particular, a local embedding, the above formulas also hold for immersions.
== Gauss–Codazzi equations in classical differential geometry ==
=== Statement of classical equations ===
In classical differential geometry of surfaces, the Codazzi–Mainardi equations are expressed via the second fundamental form (L, M, N):

L_v − M_u = L Γ^1_{12} + M(Γ^2_{12} − Γ^1_{11}) − N Γ^2_{11}

M_v − N_u = L Γ^1_{22} + M(Γ^2_{22} − Γ^1_{12}) − N Γ^2_{12}
The Gauss formula, depending on how one chooses to define the Gaussian curvature, may be a tautology. It can be stated as

K = (LN − M²)/(eg − f²),

where (e, f, g) are the components of the first fundamental form.
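As a hypothetical numerical check (not part of the classical derivation), one can approximate e, f, g and L, M, N for a sphere of radius R by finite differences and confirm that K = (LN − M²)/(eg − f²) equals 1/R²; the parametrization, base point, and step size below are illustrative choices:

```python
import math

R = 2.0  # sphere radius (illustrative)

def r(u, v):
    # Standard parametrization of a sphere of radius R.
    return (R * math.cos(u) * math.cos(v),
            R * math.sin(u) * math.cos(v),
            R * math.sin(v))

def add(a, b):   return tuple(x + y for x, y in zip(a, b))
def sub(a, b):   return tuple(x - y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b):   return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

u0, v0, h = 0.3, 0.4, 1e-4

# First and second partial derivatives of r by central differences.
ru  = scale(sub(r(u0 + h, v0), r(u0 - h, v0)), 1 / (2 * h))
rv  = scale(sub(r(u0, v0 + h), r(u0, v0 - h)), 1 / (2 * h))
ruu = scale(add(sub(r(u0 + h, v0), scale(r(u0, v0), 2.0)), r(u0 - h, v0)), 1 / h**2)
rvv = scale(add(sub(r(u0, v0 + h), scale(r(u0, v0), 2.0)), r(u0, v0 - h)), 1 / h**2)
ruv = scale(sub(sub(r(u0 + h, v0 + h), r(u0 + h, v0 - h)),
                sub(r(u0 - h, v0 + h), r(u0 - h, v0 - h))), 1 / (4 * h * h))

# Unit normal n = ru x rv / |ru x rv|.
nv = cross(ru, rv)
n  = scale(nv, 1 / math.sqrt(dot(nv, nv)))

# First (e, f, g) and second (L, M, N) fundamental forms at (u0, v0).
e, f, g = dot(ru, ru), dot(ru, rv), dot(rv, rv)
L, M, N = dot(ruu, n), dot(ruv, n), dot(rvv, n)

K = (L * N - M * M) / (e * g - f * f)
assert abs(K - 1 / R**2) < 1e-5  # Gaussian curvature of a sphere is 1/R^2
```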
=== Derivation of classical equations ===
Consider a parametric surface in Euclidean 3-space,

r(u, v) = (x(u, v), y(u, v), z(u, v)),

where the three component functions depend smoothly on ordered pairs (u, v) in some open domain U in the uv-plane. Assume that this surface is regular, meaning that the vectors r_u and r_v are linearly independent. Complete this to a basis {r_u, r_v, n} by selecting a unit vector n normal to the surface. It is possible to express the second partial derivatives of r (vectors of R³) with the Christoffel symbols and the elements of the second fundamental form. We choose the first two components of the basis because they are intrinsic to the surface, and we intend to prove the intrinsic nature of the Gaussian curvature. The last term in the basis is extrinsic.
r_uu = Γ^1_{11} r_u + Γ^2_{11} r_v + L n

r_uv = Γ^1_{12} r_u + Γ^2_{12} r_v + M n

r_vv = Γ^1_{22} r_u + Γ^2_{22} r_v + N n
Clairaut's theorem states that partial derivatives commute:

(r_uu)_v = (r_uv)_u
If we differentiate r_uu with respect to v and r_uv with respect to u, we get:

(Γ^1_{11})_v r_u + Γ^1_{11} r_uv + (Γ^2_{11})_v r_v + Γ^2_{11} r_vv + L_v n + L n_v
= (Γ^1_{12})_u r_u + Γ^1_{12} r_uu + (Γ^2_{12})_u r_v + Γ^2_{12} r_uv + M_u n + M n_u
Now substitute the above expressions for the second derivatives and equate the coefficients of n:

M Γ^1_{11} + N Γ^2_{11} + L_v = L Γ^1_{12} + M Γ^2_{12} + M_u
Rearranging this equation gives the first Codazzi–Mainardi equation.
The second equation may be derived similarly.
== Mean curvature ==
Let M be a smooth m-dimensional manifold immersed in the (m + k)-dimensional smooth manifold P. Let e_1, e_2, …, e_k be a local orthonormal frame of vector fields normal to M. Then we can write

α(X, Y) = ∑_{j=1}^k α_j(X, Y) e_j.
If, now, E_1, E_2, …, E_m is a local orthonormal frame (of tangent vector fields) on the same open subset of M, then we can define the mean curvatures of the immersion by

H_j = ∑_{i=1}^m α_j(E_i, E_i).
In particular, if M is a hypersurface of P, i.e. k = 1, then there is only one mean curvature to speak of. The immersion is called minimal if all the H_j are identically zero.
Observe that the mean curvature is a trace, or average, of the second fundamental form, for any given component. Sometimes mean curvature is defined by multiplying the sum on the right-hand side by 1/m.
We can now write the Gauss–Codazzi equations as

⟨R′(X, Y)Z, W⟩ = ⟨R(X, Y)Z, W⟩ + ∑_{j=1}^k ( α_j(X, Z) α_j(Y, W) − α_j(Y, Z) α_j(X, W) ).
Contracting the Y, Z components gives us

Ric′(X, W) = Ric(X, W) + ∑_{j=1}^k ⟨R′(X, e_j)e_j, W⟩ + ∑_{j=1}^k ( ∑_{i=1}^m α_j(X, E_i) α_j(E_i, W) − H_j α_j(X, W) ).
When M is a hypersurface, this simplifies to

Ric′(X, W) = Ric(X, W) + ⟨R′(X, n)n, W⟩ + ∑_{i=1}^m h(X, E_i) h(E_i, W) − H h(X, W),

where n = e_1, h = α_1 and H = H_1. In that case, one more contraction yields
R′ = R + 2 Ric′(n, n) + ‖h‖² − H²,

where R′ and R are the scalar curvatures of P and M respectively, and

‖h‖² = ∑_{i,j=1}^m h(E_i, E_j)².
If k > 1, the scalar curvature equation might be more complicated.
We can already use these equations to draw some conclusions. For example, any minimal immersion into the round sphere x_1² + x_2² + ⋯ + x_{m+k+1}² = 1 must be of the form

Δx_j + λx_j = 0,

where j runs from 1 to m + k + 1,

Δ = ∑_{i=1}^m ∇_{E_i} ∇_{E_i}

is the Laplacian on M, and λ > 0 is a positive constant.
== See also ==
Darboux frame
== Notes ==
== References ==
Historical references
Bonnet, Ossian (1867), "Memoire sur la theorie des surfaces applicables sur une surface donnee", Journal de l'École Polytechnique, 25: 31–151
Codazzi, Delfino (1868–1869), "Sulle coordinate curvilinee d'una superficie dello spazio", Ann. Mat. Pura Appl., 2: 101–19, doi:10.1007/BF02419605, S2CID 177803350
Gauss, Carl Friedrich (1828), "Disquisitiones Generales circa Superficies Curvas" [General Discussions about Curved Surfaces], Comm. Soc. Gott. (in Latin), 6
Ivanov, A.B. (2001) [1994], "Peterson–Codazzi equations", Encyclopedia of Mathematics, EMS Press
Kline, Morris (1972), Mathematical Thought from Ancient to Modern Times, Oxford University Press, ISBN 0-19-506137-3
Mainardi, Gaspare (1856), "Su la teoria generale delle superficie", Giornale Dell' Istituto Lombardo, 9: 385–404
Peterson, Karl Mikhailovich (1853), Über die Biegung der Flächen, Doctoral thesis, Dorpat University.
Textbooks
do Carmo, Manfredo P. Differential geometry of curves & surfaces. Revised & updated second edition. Dover Publications, Inc., Mineola, NY, 2016. xvi+510 pp. ISBN 978-0-486-80699-0, 0-486-80699-5
do Carmo, Manfredo Perdigão. Riemannian geometry. Translated from the second Portuguese edition by Francis Flaherty. Mathematics: Theory & Applications. Birkhäuser Boston, Inc., Boston, MA, 1992. xiv+300 pp. ISBN 0-8176-3490-8
Kobayashi, Shoshichi; Nomizu, Katsumi. Foundations of differential geometry. Vol. II. Interscience Tracts in Pure and Applied Mathematics, No. 15 Vol. II Interscience Publishers John Wiley & Sons, Inc., New York-London-Sydney 1969 xv+470 pp.
O'Neill, Barrett. Semi-Riemannian geometry. With applications to relativity. Pure and Applied Mathematics, 103. Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York, 1983. xiii+468 pp. ISBN 0-12-526740-1
Toponogov, Victor Andreevich (2006). Differential geometry of curves and surfaces: A concise guide. Boston: Birkhäuser. ISBN 978-0-8176-4384-3.
Articles
Takahashi, Tsunero (1966), "Minimal immersions of Riemannian manifolds", Journal of the Mathematical Society of Japan, 18 (4), doi:10.2969/jmsj/01840380, S2CID 122849496
Simons, James. Minimal varieties in riemannian manifolds. Ann. of Math. (2) 88 (1968), 62–105.
== External links ==
Peterson–Mainardi–Codazzi Equations – from Wolfram MathWorld
Peterson–Codazzi Equations | Wikipedia/Gauss-Codazzi_equations |
In Riemannian geometry and pseudo-Riemannian geometry, the Gauss–Codazzi equations (also called the Gauss–Codazzi–Weingarten–Mainardi equations or Gauss–Peterson–Codazzi formulas) are fundamental formulas that link together the induced metric and second fundamental form of a submanifold of (or immersion into) a Riemannian or pseudo-Riemannian manifold.
The equations were originally discovered in the context of surfaces in three-dimensional Euclidean space. In this context, the first equation, often called the Gauss equation (after its discoverer Carl Friedrich Gauss), says that the Gauss curvature of the surface, at any given point, is dictated by the derivatives of the Gauss map at that point, as encoded by the second fundamental form. The second equation, called the Codazzi equation or Codazzi-Mainardi equation, states that the covariant derivative of the second fundamental form is fully symmetric. It is named for Gaspare Mainardi (1856) and Delfino Codazzi (1868–1869), who independently derived the result, although it was discovered earlier by Karl Mikhailovich Peterson.
== Formal statement ==
Let i : M ⊂ P be an n-dimensional embedded submanifold of a Riemannian manifold P of dimension n + p. There is a natural inclusion of the tangent bundle of M into that of P by the pushforward, and the cokernel is the normal bundle of M:
{\displaystyle 0\rightarrow T_{x}M\rightarrow T_{x}P|_{M}\rightarrow T_{x}^{\perp }M\rightarrow 0.}
The metric splits this short exact sequence, and so
{\displaystyle TP|_{M}=TM\oplus T^{\perp }M.}
Relative to this splitting, the Levi-Civita connection ∇′ of P decomposes into tangential and normal components. For each X ∈ TM and vector field Y on M,
{\displaystyle \nabla '_{X}Y=\top \left(\nabla '_{X}Y\right)+\bot \left(\nabla '_{X}Y\right).}
Let
{\displaystyle \nabla _{X}Y=\top \left(\nabla '_{X}Y\right),\quad \alpha (X,Y)=\bot \left(\nabla '_{X}Y\right).}
The Gauss formula now asserts that ∇_X is the Levi-Civita connection for M, and α is a symmetric vector-valued form with values in the normal bundle. It is often referred to as the second fundamental form.
An immediate corollary is the Gauss equation for the curvature tensor. For X, Y, Z, W ∈ TM,
{\displaystyle \langle R'(X,Y)Z,W\rangle =\langle R(X,Y)Z,W\rangle +\langle \alpha (X,Z),\alpha (Y,W)\rangle -\langle \alpha (Y,Z),\alpha (X,W)\rangle }
where R′ is the Riemann curvature tensor of P and R is that of M.
The Weingarten equation is an analog of the Gauss formula for a connection in the normal bundle. Let X ∈ TM and ξ a normal vector field. Then decompose the ambient covariant derivative of ξ along X into tangential and normal components:
{\displaystyle \nabla '_{X}\xi =\top \left(\nabla '_{X}\xi \right)+\bot \left(\nabla '_{X}\xi \right)=-A_{\xi }(X)+D_{X}(\xi ).}
Then Weingarten's equation states that
{\displaystyle \langle A_{\xi }X,Y\rangle =\langle \alpha (X,Y),\xi \rangle }
and D_X is a metric connection in the normal bundle.
There are thus a pair of connections: ∇, defined on the tangent bundle of M; and D, defined on the normal bundle of M. These combine to form a connection on any tensor product of copies of TM and T⊥M. In particular, they define the covariant derivative of α:
{\displaystyle \left({\tilde {\nabla }}_{X}\alpha \right)(Y,Z)=D_{X}\left(\alpha (Y,Z)\right)-\alpha \left(\nabla _{X}Y,Z\right)-\alpha \left(Y,\nabla _{X}Z\right).}
The Codazzi–Mainardi equation is
{\displaystyle \bot \left(R'(X,Y)Z\right)=\left({\tilde {\nabla }}_{X}\alpha \right)(Y,Z)-\left({\tilde {\nabla }}_{Y}\alpha \right)(X,Z).}
Since every immersion is, in particular, a local embedding, the above formulas also hold for immersions.
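As a concrete sanity check of the Gauss equation (an illustration of ours, not from the article), take M to be the unit sphere in P = ℝ³, so that R′ = 0 and, with the outward unit normal n, the second fundamental form is α(X, Y) = −⟨X, Y⟩n. The Gauss equation then determines the intrinsic curvature of the sphere, ⟨R(X, Y)Z, W⟩ = ⟨Y, Z⟩⟨X, W⟩ − ⟨X, Z⟩⟨Y, W⟩, i.e. constant sectional curvature 1. A minimal sketch in plain Python (the helper names are hypothetical):

```python
# Numeric check of the Gauss equation for the unit sphere S^2 in flat R^3.
# At the north pole, tangent vectors live in the xy-plane and
# alpha(X, Y) = -<X, Y> n, so <alpha(X,Z), alpha(Y,W)> = <X,Z><Y,W>.
# With R' = 0 the Gauss equation fixes the intrinsic curvature tensor of S^2.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def R_sphere(X, Y, Z, W):
    """<R(X,Y)Z, W> on S^2, as forced by the Gauss equation with R' = 0."""
    return dot(Y, Z) * dot(X, W) - dot(X, Z) * dot(Y, W)

X, Y = (1.0, 0.0), (0.0, 1.0)   # orthonormal tangent basis at the north pole
K = R_sphere(X, Y, Y, X)        # sectional curvature of the plane span{X, Y}
assert K == 1.0                 # the round unit sphere has curvature 1
print("sectional curvature of S^2:", K)
```

Flipping the sign of the normal flips α but leaves the quadratic terms, and hence the intrinsic curvature, unchanged.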
== Gauss–Codazzi equations in classical differential geometry ==
=== Statement of classical equations ===
In classical differential geometry of surfaces, the Codazzi–Mainardi equations are expressed via the second fundamental form (L, M, N):
{\displaystyle L_{v}-M_{u}=L{\Gamma ^{1}}_{12}+M\left({\Gamma ^{2}}_{12}-{\Gamma ^{1}}_{11}\right)-N{\Gamma ^{2}}_{11}}
{\displaystyle M_{v}-N_{u}=L{\Gamma ^{1}}_{22}+M\left({\Gamma ^{2}}_{22}-{\Gamma ^{1}}_{12}\right)-N{\Gamma ^{2}}_{12}}
The Gauss formula, depending on how one chooses to define the Gaussian curvature, may be a tautology. It can be stated as
{\displaystyle K={\frac {LN-M^{2}}{eg-f^{2}}},}
where (e, f, g) are the components of the first fundamental form.
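This formula can be exercised symbolically. The sketch below (assuming SymPy is available; the radius-a parametrization is an illustrative choice of ours) computes both fundamental forms for a sphere of radius a and spot-checks that K = 1/a²:

```python
import sympy as sp

u, v, a = sp.symbols('u v a', positive=True)
# sphere of radius a, parametrized by colatitude u and longitude v
r = sp.Matrix([a*sp.sin(u)*sp.cos(v), a*sp.sin(u)*sp.sin(v), a*sp.cos(u)])

ru, rv = r.diff(u), r.diff(v)
nn = ru.cross(rv)
n = nn / sp.sqrt(nn.dot(nn))                 # unit normal

# first fundamental form (e, f, g) and second fundamental form (L, M, N)
e, f, g = ru.dot(ru), ru.dot(rv), rv.dot(rv)
L, M, N = r.diff(u, 2).dot(n), r.diff(u, v).dot(n), r.diff(v, 2).dot(n)

K = (L*N - M**2) / (e*g - f**2)              # Gaussian curvature

# spot-check: K = 1/a^2 everywhere (here a = 2, so K = 0.25)
Kval = K.subs({u: sp.Rational(7, 10), v: sp.Rational(3, 10), a: 2})
assert abs(float(Kval) - 0.25) < 1e-9
```

Note that K is independent of the choice of orientation for n, since flipping n flips L, M and N together.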
=== Derivation of classical equations ===
Consider a parametric surface in Euclidean 3-space,
{\displaystyle \mathbf {r} (u,v)=(x(u,v),y(u,v),z(u,v))}
where the three component functions depend smoothly on ordered pairs (u,v) in some open domain U in the uv-plane. Assume that this surface is regular, meaning that the vectors r_u and r_v are linearly independent. Complete this to a basis {r_u, r_v, n} by selecting a unit vector n normal to the surface. Then the second partial derivatives of r (vectors of ℝ³) can be expressed in terms of the Christoffel symbols and the elements of the second fundamental form. The first two basis vectors are intrinsic to the surface, which is what allows the intrinsic nature of the Gaussian curvature to be established; the last one, n, is extrinsic.
{\displaystyle \mathbf {r} _{uu}={\Gamma ^{1}}_{11}\mathbf {r} _{u}+{\Gamma ^{2}}_{11}\mathbf {r} _{v}+L\mathbf {n} }
{\displaystyle \mathbf {r} _{uv}={\Gamma ^{1}}_{12}\mathbf {r} _{u}+{\Gamma ^{2}}_{12}\mathbf {r} _{v}+M\mathbf {n} }
{\displaystyle \mathbf {r} _{vv}={\Gamma ^{1}}_{22}\mathbf {r} _{u}+{\Gamma ^{2}}_{22}\mathbf {r} _{v}+N\mathbf {n} }
Clairaut's theorem states that partial derivatives commute:
{\displaystyle \left(\mathbf {r} _{uu}\right)_{v}=\left(\mathbf {r} _{uv}\right)_{u}}
If we differentiate r_uu with respect to v and r_uv with respect to u, we get:
{\displaystyle \left({\Gamma ^{1}}_{11}\right)_{v}\mathbf {r} _{u}+{\Gamma ^{1}}_{11}\mathbf {r} _{uv}+\left({\Gamma ^{2}}_{11}\right)_{v}\mathbf {r} _{v}+{\Gamma ^{2}}_{11}\mathbf {r} _{vv}+L_{v}\mathbf {n} +L\mathbf {n} _{v}=\left({\Gamma ^{1}}_{12}\right)_{u}\mathbf {r} _{u}+{\Gamma ^{1}}_{12}\mathbf {r} _{uu}+\left({\Gamma ^{2}}_{12}\right)_{u}\mathbf {r} _{v}+{\Gamma ^{2}}_{12}\mathbf {r} _{uv}+M_{u}\mathbf {n} +M\mathbf {n} _{u}}
Now substitute the above expressions for the second derivatives and equate the coefficients of n:
{\displaystyle M{\Gamma ^{1}}_{11}+N{\Gamma ^{2}}_{11}+L_{v}=L{\Gamma ^{1}}_{12}+M{\Gamma ^{2}}_{12}+M_{u}}
Rearranging this equation gives the first Codazzi–Mainardi equation.
The second equation may be derived similarly.
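The derivation above can be verified symbolically for any concrete patch. The sketch below (assuming SymPy; the particular Monge patch h is an arbitrary illustrative choice of ours) computes L, M, N and the Christoffel symbols of the induced metric, then spot-checks the first Codazzi–Mainardi equation at a sample point:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
# illustrative Monge patch z = h(u, v); this particular h is an arbitrary choice
h = u**3 + u*v + sp.sin(v)
r = sp.Matrix([u, v, h])

ru, rv = r.diff(u), r.diff(v)
nn = ru.cross(rv)
n = nn / sp.sqrt(nn.dot(nn))                       # unit normal

# second fundamental form (L, M, N)
L, M, N = r.diff(u, 2).dot(n), r.diff(u, v).dot(n), r.diff(v, 2).dot(n)

# first fundamental form and its Christoffel symbols
g = sp.Matrix([[ru.dot(ru), ru.dot(rv)], [rv.dot(ru), rv.dot(rv)]])
ginv = g.inv()
X = (u, v)

def Gamma(k, i, j):
    """Christoffel symbol Gamma^k_ij of the induced metric (0-based indices)."""
    return sum(sp.Rational(1, 2) * ginv[k, l] *
               (g[j, l].diff(X[i]) + g[i, l].diff(X[j]) - g[i, j].diff(X[l]))
               for l in range(2))

# first Codazzi-Mainardi equation:
# L_v - M_u = L*Gamma^1_12 + M*(Gamma^2_12 - Gamma^1_11) - N*Gamma^2_11
lhs = L.diff(v) - M.diff(u)
rhs = L*Gamma(0, 0, 1) + M*(Gamma(1, 0, 1) - Gamma(0, 0, 0)) - N*Gamma(1, 0, 0)

# numeric spot-check at a sample point (full symbolic simplification also works,
# but is slower)
pt = {u: sp.Rational(1, 3), v: sp.Rational(1, 5)}
assert abs(float((lhs - rhs).subs(pt))) < 1e-9
```

The identity holds for either orientation of the normal, since lhs and rhs both change sign when n is flipped.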
== Mean curvature ==
Let M be a smooth m-dimensional manifold immersed in the (m + k)-dimensional smooth manifold P. Let e_1, e_2, …, e_k be a local orthonormal frame of vector fields normal to M. Then we can write
{\displaystyle \alpha (X,Y)=\sum _{j=1}^{k}\alpha _{j}(X,Y)e_{j}.}
If, now, E_1, E_2, …, E_m is a local orthonormal frame (of tangent vector fields) on the same open subset of M, then we can define the mean curvatures of the immersion by
{\displaystyle H_{j}=\sum _{i=1}^{m}\alpha _{j}(E_{i},E_{i}).}
In particular, if M is a hypersurface of P, i.e. k = 1, then there is only one mean curvature to speak of. The immersion is called minimal if all the H_j are identically zero.
Observe that the mean curvature is a trace, or average, of the second fundamental form, for any given component. Sometimes mean curvature is defined by multiplying the sum on the right-hand side by 1/m.
We can now write the Gauss–Codazzi equations as
{\displaystyle \langle R'(X,Y)Z,W\rangle =\langle R(X,Y)Z,W\rangle +\sum _{j=1}^{k}\left(\alpha _{j}(X,Z)\alpha _{j}(Y,W)-\alpha _{j}(Y,Z)\alpha _{j}(X,W)\right).}
Contracting the Y, Z components gives us
{\displaystyle \operatorname {Ric} '(X,W)=\operatorname {Ric} (X,W)+\sum _{j=1}^{k}\langle R'(X,e_{j})e_{j},W\rangle +\sum _{j=1}^{k}\left(\sum _{i=1}^{m}\alpha _{j}(X,E_{i})\alpha _{j}(E_{i},W)-H_{j}\alpha _{j}(X,W)\right).}
When M is a hypersurface, this simplifies to
{\displaystyle \operatorname {Ric} '(X,W)=\operatorname {Ric} (X,W)+\langle R'(X,n)n,W\rangle +\sum _{i=1}^{m}h(X,E_{i})h(E_{i},W)-Hh(X,W)}
where n = e_1, h = α_1 and H = H_1. In that case, one more contraction yields
{\displaystyle R'=R+2\operatorname {Ric} '(n,n)+\|h\|^{2}-H^{2}}
where R′ and R are the scalar curvatures of P and M respectively, and
{\displaystyle \|h\|^{2}=\sum _{i,j=1}^{m}h(E_{i},E_{j})^{2}.}
If k > 1, the scalar curvature equation might be more complicated.
We can already use these equations to draw some conclusions. For example, any minimal immersion into the round sphere x_1^2 + x_2^2 + ⋯ + x_{m+k+1}^2 = 1 must be of the form
{\displaystyle \Delta x_{j}+\lambda x_{j}=0}
where j runs from 1 to m + k + 1,
{\displaystyle \Delta =\sum _{i=1}^{m}\nabla _{E_{i}}\nabla _{E_{i}}}
is the Laplacian on M, and λ > 0 is a positive constant.
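The simplest instance is the identity immersion of the round unit sphere S² into itself, which is trivially minimal (totally geodesic): each ambient coordinate function restricted to S² satisfies Δx_j + 2x_j = 0, with λ = m = 2. A SymPy sketch (the coordinates and helper name are illustrative choices of ours):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')

def laplace_beltrami_S2(f):
    """Laplace-Beltrami operator of the round metric dtheta^2 + sin(theta)^2 dphi^2."""
    return (sp.diff(sp.sin(theta) * sp.diff(f, theta), theta) / sp.sin(theta)
            + sp.diff(f, phi, 2) / sp.sin(theta)**2)

# restrictions of the ambient coordinate functions x_1, x_2, x_3 to S^2
coords = [sp.sin(theta)*sp.cos(phi), sp.sin(theta)*sp.sin(phi), sp.cos(theta)]

lam = 2  # lambda = m for the unit sphere S^m; here m = 2
for xj in coords:
    assert sp.simplify(laplace_beltrami_S2(xj) + lam * xj) == 0
print("each coordinate satisfies Delta x_j + 2 x_j = 0")
```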
== See also ==
Darboux frame
== Notes ==
== References ==
Historical references
Bonnet, Ossian (1867), "Mémoire sur la théorie des surfaces applicables sur une surface donnée", Journal de l'École Polytechnique, 25: 31–151
Codazzi, Delfino (1868–1869), "Sulle coordinate curvilinee d'una superficie dello spazio", Ann. Mat. Pura Appl., 2: 101–19, doi:10.1007/BF02419605, S2CID 177803350
Gauss, Carl Friedrich (1828), "Disquisitiones Generales circa Superficies Curvas" [General Discussions about Curved Surfaces], Comm. Soc. Gott. (in Latin), 6
Ivanov, A.B. (2001) [1994], "Peterson–Codazzi equations", Encyclopedia of Mathematics, EMS Press
Kline, Morris (1972), Mathematical Thought from Ancient to Modern Times, Oxford University Press, ISBN 0-19-506137-3
Mainardi, Gaspare (1856), "Su la teoria generale delle superficie", Giornale Dell' Istituto Lombardo, 9: 385–404
Peterson, Karl Mikhailovich (1853), Über die Biegung der Flächen, Doctoral thesis, Dorpat University.
Textbooks
do Carmo, Manfredo P. Differential geometry of curves & surfaces. Revised & updated second edition. Dover Publications, Inc., Mineola, NY, 2016. xvi+510 pp. ISBN 978-0-486-80699-0, 0-486-80699-5
do Carmo, Manfredo Perdigão. Riemannian geometry. Translated from the second Portuguese edition by Francis Flaherty. Mathematics: Theory & Applications. Birkhäuser Boston, Inc., Boston, MA, 1992. xiv+300 pp. ISBN 0-8176-3490-8
Kobayashi, Shoshichi; Nomizu, Katsumi. Foundations of differential geometry. Vol. II. Interscience Tracts in Pure and Applied Mathematics, No. 15 Vol. II Interscience Publishers John Wiley & Sons, Inc., New York-London-Sydney 1969 xv+470 pp.
O'Neill, Barrett. Semi-Riemannian geometry. With applications to relativity. Pure and Applied Mathematics, 103. Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York, 1983. xiii+468 pp. ISBN 0-12-526740-1
Toponogov, Victor Andreevich (2006). Differential geometry of curves and surfaces: A concise guide. Boston: Birkhäuser. ISBN 978-0-8176-4384-3.
Articles
Takahashi, Tsunero (1966), "Minimal immersions of Riemannian manifolds", Journal of the Mathematical Society of Japan, 18 (4), doi:10.2969/jmsj/01840380, S2CID 122849496
Simons, James. Minimal varieties in riemannian manifolds. Ann. of Math. (2) 88 (1968), 62–105.
== External links ==
Peterson–Mainardi–Codazzi Equations – from Wolfram MathWorld
Peterson–Codazzi Equations | Wikipedia/Gauss–Codazzi_equations |
For Liouville's equation in dynamical systems, see Liouville's theorem (Hamiltonian).
For Liouville's equation in quantum mechanics, see Von Neumann equation.
For Liouville's equation in Euclidean space, see Liouville–Bratu–Gelfand equation.
In differential geometry, Liouville's equation, named after Joseph Liouville, is the nonlinear partial differential equation satisfied by the conformal factor f of a metric f2(dx2 + dy2) on a surface of constant Gaussian curvature K:
{\displaystyle \Delta _{0}\log f=-Kf^{2},}
where ∆0 is the flat Laplace operator
{\displaystyle \Delta _{0}={\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}=4{\frac {\partial }{\partial z}}{\frac {\partial }{\partial {\bar {z}}}}.}
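As a standard example (ours, not taken from the article), the Poincaré disk factor f = 2/(1 − x² − y²) with constant curvature K = −1 satisfies this equation, which can be verified symbolically with SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
K = -1
f = 2 / (1 - x**2 - y**2)          # conformal factor of the Poincare disk

# Delta_0 log f, the flat Laplacian of log f
lhs = sp.diff(sp.log(f), x, 2) + sp.diff(sp.log(f), y, 2)
assert sp.simplify(lhs + K * f**2) == 0
print("Delta_0 log f = -K f^2 holds for the Poincare disk")
```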
Liouville's equation appears in the study of isothermal coordinates in differential geometry: the independent variables x,y are the coordinates, while f can be described as the conformal factor with respect to the flat metric. Occasionally it is the square f2 that is referred to as the conformal factor, instead of f itself.
Liouville's equation was also taken as an example by David Hilbert in the formulation of his nineteenth problem.
== Other common forms of Liouville's equation ==
By using the change of variables log f ↦ u, another commonly found form of Liouville's equation is obtained:
{\displaystyle \Delta _{0}u=-Ke^{2u}.}
Two other forms of the equation, commonly found in the literature, are obtained by using the slight variant 2 log f ↦ u of the previous change of variables and Wirtinger calculus:
{\displaystyle \Delta _{0}u=-2Ke^{u}\quad \Longleftrightarrow \quad {\frac {\partial ^{2}u}{{\partial z}{\partial {\bar {z}}}}}=-{\frac {K}{2}}e^{u}.}
It is exactly in the first of these two forms that Liouville's equation was cited by David Hilbert in the formulation of his nineteenth problem.
=== A formulation using the Laplace–Beltrami operator ===
In a more invariant fashion, the equation can be written in terms of the intrinsic Laplace–Beltrami operator
{\displaystyle \Delta _{\mathrm {LB} }={\frac {1}{f^{2}}}\Delta _{0}}
as follows:
{\displaystyle \Delta _{\mathrm {LB} }\log f=-K.}
== Properties ==
=== Relation to Gauss–Codazzi equations ===
Liouville's equation is equivalent to the Gauss–Codazzi equations for minimal immersions into the 3-space, when the metric is written in isothermal coordinates z such that the Hopf differential is dz².
=== General solution of the equation ===
In a simply connected domain Ω, the general solution of Liouville's equation can be found by using Wirtinger calculus. Its form is given by
{\displaystyle u(z,{\bar {z}})=\ln \left(4{\frac {\left|{\mathrm {d} f(z)}/{\mathrm {d} z}\right|^{2}}{(1+K\left|f(z)\right|^{2})^{2}}}\right)}
where f(z) is any meromorphic function such that
df/dz(z) ≠ 0 for every z ∈ Ω;
f(z) has at most simple poles in Ω.
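As a sanity check (our example), choosing K = 1 and the meromorphic function f(z) = z gives u = ln(4/(1 + |z|²)²), the conformal factor of the round sphere in stereographic coordinates, and it indeed solves Δ0u = −2Ke^u. A SymPy sketch in real coordinates z = x + iy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
K = 1
# f(z) = z, so |df/dz| = 1 and |f|^2 = x^2 + y^2
u = sp.log(4 / (1 + x**2 + y**2)**2)

# flat Laplacian of u
lap_u = sp.diff(u, x, 2) + sp.diff(u, y, 2)
assert sp.simplify(lap_u + 2 * K * sp.exp(u)) == 0
print("u solves Delta_0 u = -2 K e^u for f(z) = z, K = 1")
```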
== Application ==
Liouville's equation can be used to prove the following classification results for surfaces:
Theorem. A surface in the Euclidean 3-space with metric dl² = g(z, z̄) dz dz̄, and with constant scalar curvature K, is locally isometric to:
the sphere if K > 0;
the Euclidean plane if K = 0;
the Lobachevskian plane if K < 0.
== See also ==
Liouville field theory, a two-dimensional conformal field theory whose classical equation of motion is a generalization of Liouville's equation
== Notes ==
=== Citations ===
== Works cited == | Wikipedia/Liouville's_equation |