Object–role modeling (ORM) is used to model the semantics of a universe of discourse. ORM is often used for data modeling and software engineering. An object–role model uses graphical symbols based on first-order predicate logic and set theory to enable the modeler to create an unambiguous definition of an arbitrary universe of discourse. Being attribute-free, the predicates of an ORM model lend themselves to the analysis and design of graph database models, just as ORM was originally conceived to benefit relational database design. The term "object–role model" was coined in the 1970s, and ORM-based tools have been used for more than 30 years, principally for data modeling. More recently, ORM has been used to model business rules, XML schemas, data warehouses, requirements engineering and web forms.

== History ==

The roots of ORM can be traced to research into semantic modeling for information systems in Europe during the 1970s. There were many pioneers, and this short summary does not by any means mention them all. An early contribution came in 1973, when Michael Senko wrote about "data structuring" in the IBM Systems Journal. In 1974, Jean-Raymond Abrial contributed an article about "Data Semantics". In June 1975, Eckhard Falkenberg's doctoral thesis was published, and in 1976 one of Falkenberg's papers mentions the term "object–role model". G.M. Nijssen made fundamental contributions by introducing the "circle-box" notation for object types and roles, and by formulating the first version of the conceptual schema design procedure. Robert Meersman extended the approach by adding subtyping and introducing the first truly conceptual query language.

Object–role modeling also evolved from the Natural language Information Analysis Method (NIAM), a methodology initially developed in the mid-1970s by the academic researcher G.M. Nijssen in the Netherlands and his research team at the Control Data Corporation Research Laboratory in Belgium, and later at the University of Queensland, Australia, in the 1980s. The acronym NIAM originally stood for "Nijssen's Information Analysis Methodology", and was later generalised to "Natural language Information Analysis Methodology" and Binary Relationship Modeling, since G.M. Nijssen was only one of many people involved in the development of the method.

In 1989, Terry Halpin completed his PhD thesis on ORM, providing the first full formalization of the approach and incorporating several extensions. Also in 1989, Halpin and Nijssen co-authored the book "Conceptual Schema and Relational Database Design" and several joint papers.

A graphical NIAM design tool which included the ability to generate database-creation scripts for Oracle, DB2 and DBQ was developed in the early 1990s in Paris. It was originally named Genesys and was marketed successfully in France and later Canada. It could also handle ER diagram design. It was ported to SCO Unix, SunOS, DEC 3151 and Windows 3.0 platforms, and was later migrated to succeeding Microsoft operating systems, utilising XVT for cross-operating-system graphical portability. The tool was renamed OORIANE and is currently being used for large data warehouse and SOA projects.

Also evolving from NIAM is "Fully Communication Oriented Information Modeling" (FCO-IM, 1992). It distinguishes itself from traditional ORM in that it takes a strict communication-oriented perspective: rather than attempting to model the domain and its essential concepts, it models the communication in this domain (universe of discourse). Another important difference is that it does this at the instance level, deriving the type level and the object/fact level during analysis.
Another recent development is the use of ORM in combination with standardised relation types, with associated roles and a standard machine-readable dictionary and taxonomy of concepts, as provided in the Gellish English dictionary. Standardisation of relation types (fact types), roles and concepts increases the possibilities for model integration and model reuse.

== Concepts ==

=== Facts ===

Object–role models are based on elementary facts, expressed in diagrams that can be verbalised into natural language. A fact is a proposition such as "John Smith was hired on 5 January 1995" or "Mary Jones was hired on 3 March 2010". With ORM, propositions such as these are abstracted into "fact types", for example "Person was hired on Date", and the individual propositions are regarded as sample data. The difference between a "fact" and an "elementary fact" is that an elementary fact cannot be simplified without loss of meaning. This "fact-based" approach facilitates modeling, transforming, and querying information from any domain.

=== Attribute-free ===

ORM is attribute-free: unlike models in the entity–relationship (ER) and Unified Modeling Language (UML) methods, ORM treats all elementary facts as relationships, and so treats decisions for grouping facts into structures (e.g. attribute-based entity types, classes, relation schemes, XML schemas) as implementation concerns irrelevant to semantics. By avoiding attributes, ORM improves semantic stability and enables verbalization into natural language.

=== Fact-based modeling ===

Fact-based modeling includes procedures for mapping facts to attribute-based structures, such as those of ER or UML. Fact-based textual representations are based on formal subsets of native languages. ORM proponents argue that ORM models are easier for people without a technical education to understand: for example, that object–role models are easier to understand than declarative languages such as the Object Constraint Language (OCL) and other graphical languages such as UML class models. Fact-based graphical notations are more expressive than those of ER and UML. An object–role model can be automatically mapped to relational and deductive databases (such as Datalog).

=== ORM 2 graphical notation ===

ORM 2 is the latest generation of object–role modeling. The main objectives for the ORM 2 graphical notation are:
- More compact display of ORM models without compromising clarity
- Improved internationalization (e.g. avoiding English-language symbols)
- Simplified drawing rules to facilitate creation of a graphical editor
- Extended use of views for selectively displaying/suppressing detail
- Support for new features (e.g. role path delineation, closure aspects, modalities)

=== Design procedure ===

System development typically involves several stages, such as: feasibility study; requirements analysis; conceptual design of data and operations; logical design; external design; prototyping; internal design and implementation; testing and validation; and maintenance. ORM's conceptual schema design procedure (CSDP) focuses on the analysis and design of data. The seven steps of the conceptual schema design procedure are:
1. Transform familiar information examples into elementary facts, and apply quality checks
2. Draw the fact types, and apply a population check
3. Check for entity types that should be combined, and note any arithmetic derivations
4. Add uniqueness constraints, and check the arity of fact types
5. Add mandatory role constraints, and check for logical derivations
6. Add value, set-comparison and subtyping constraints
7. Add other constraints and perform final checks
== See also ==
- Concept map
- Conceptual schema
- Enhanced entity–relationship model (EER)
- Information flow diagram
- Ontology double articulation
- Ontology engineering
- Relational algebra
- Three-schema approach

== Further reading ==
- Halpin, Terry (1989), Conceptual Schema and Relational Database Design, Sydney: Prentice Hall, ISBN 978-0-13-167263-5
- Rossi, Matti; Siau, Keng (April 2001), Information Modeling in the New Millennium, IGI Global, ISBN 978-1-878289-77-3
- Halpin, Terry; Evans, Ken; Hallock, Pat; Maclean, Bill (September 2003), Database Modeling with Microsoft Visio for Enterprise Architects, Morgan Kaufmann, ISBN 978-1-55860-919-8
- Halpin, Terry; Morgan, Tony (March 2008), Information Modeling and Relational Databases: From Conceptual Analysis to Logical Design (2nd ed.), Morgan Kaufmann, ISBN 978-0-12-373568-3
An entity–attribute–value model (EAV) is a data model optimized for the space-efficient storage of sparse (or ad-hoc) property or data values, intended for situations where runtime usage patterns are arbitrary, subject to user variation, or otherwise unforeseeable using a fixed design. The use case targets applications which offer a large or rich system of defined property types, which are in turn appropriate to a wide set of entities, but where typically only a small, specific selection of these are instantiated (or persisted) for a given entity. This type of data model therefore relates to the mathematical notion of a sparse matrix. EAV is also known as the object–attribute–value model, vertical database model, and open schema.

== Data structure ==

This data representation is analogous to space-efficient methods of storing a sparse matrix, where only non-empty values are stored. In an EAV data model, each attribute–value pair is a fact describing an entity, and a row in an EAV table stores a single fact. EAV tables are often described as "long and skinny": "long" refers to the number of rows, "skinny" to the few columns. Data is recorded as three columns:
- The entity: the item being described.
- The attribute or parameter: typically implemented as a foreign key into a table of attribute definitions. The attribute definitions table might contain the following columns: an attribute ID, attribute name, description, data type, and columns assisting input validation, e.g., maximum string length and regular expression, set of permissible values, etc.
- The value of the attribute.

=== Example ===

Consider how one would try to represent a general-purpose clinical record in a relational database. Clearly creating a table (or a set of tables) with thousands of columns is not feasible, because the vast majority of columns would be null. To complicate things, in a longitudinal medical record that follows the patient over time, there may be multiple values of the same parameter: the height and weight of a child, for example, change as the child grows. Finally, the universe of clinical findings keeps growing: for example, diseases emerge and new lab tests are devised; this would require constant addition of columns and constant revision of the user interface. The term "attribute volatility" is sometimes used to describe the problems or situations that arise when the list of available attributes or their definitions needs to evolve over time.

The following shows a selection of rows of an EAV table for clinical findings from a visit to a doctor for a fever on the morning of 1998-05-01. The entries shown within angle brackets are references to entries in other tables, shown here as text rather than as encoded foreign key values for ease of understanding. In this example, the values are all literal values, but they could also be pre-defined value lists; the latter are particularly useful when the possible values are known to be limited (i.e., enumerable).
- The entity. For clinical findings, the entity is the patient event: a foreign key into a table that contains at a minimum a patient ID and one or more time-stamps (e.g., the start and end of the examination date/time) that record when the event being described happened.
- The attribute or parameter: a foreign key into a table of attribute definitions (in this example, definitions of clinical findings). At the very least, the attribute definitions table would contain the following columns: an attribute ID, attribute name, description, data type, units of measurement, and columns assisting input validation, e.g., maximum string length and regular expression, maximum and minimum permissible values, set of permissible values, etc.
- The value of the attribute. This would depend on the data type; how values are stored is discussed shortly.
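As a minimal sketch of the structure just described, the three-column EAV table can be built and queried with ordinary SQL; the example below uses Python's sqlite3, and all table names, attribute names and values are illustrative rather than taken from any real clinical system:

```python
import sqlite3

# Minimal EAV table: each row stores one fact about an entity.
# In a real system, entity and attribute would be foreign keys into
# a patient-event table and an attribute-definitions table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE eav (entity TEXT, attribute TEXT, value TEXT)")

facts = [
    ("<patient XYZ, 1998-05-01 09:30>", "presenting complaint", "fever"),
    ("<patient XYZ, 1998-05-01 09:30>", "temperature", "38.9 C"),
    ("<patient XYZ, 1998-05-01 09:30>", "pulse rate", "98/min"),
]
conn.executemany("INSERT INTO eav VALUES (?, ?, ?)", facts)

# Only the facts actually recorded for this visit occupy storage; the
# thousands of clinical attributes that do not apply take no space at all.
rows = conn.execute(
    "SELECT attribute, value FROM eav WHERE entity = ?",
    ("<patient XYZ, 1998-05-01 09:30>",),
).fetchall()
print(rows)
```

Note that the "skinny" shape is what makes the storage sparse: a visit with three findings produces three rows, not one row with thousands of mostly-null columns.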
The example below illustrates symptom findings that might be seen in a patient with pneumonia. The EAV data described above is comparable to the contents of a supermarket sales receipt (which would be reflected in a Sales Line Items table in a database). The receipt lists only details of the items actually purchased, instead of listing every product in the shop that the customer might have purchased but didn't. Like the clinical findings for a given patient, the sales receipt is a compact representation of inherently sparse data.
- The "entity" is the sale/transaction ID: a foreign key into a sales transactions table. This is used to tag each line item internally, though on the receipt the information about the sale appears at the top (shop location, sale date/time) and at the bottom (total value of sale).
- The "attribute" is a foreign key into a products table, from where one looks up description, unit price, discounts and promotions, etc. (Products are just as volatile as clinical findings, possibly even more so: new products are introduced every month, while others are taken off the market if consumer acceptance is poor. No competent database designer would hard-code individual products such as Doritos or Diet Coke as columns in a table.)
- The "values" are the quantity purchased and total line item price.

Row modeling, where facts about something (in this case, a sales transaction) are recorded as multiple rows rather than multiple columns, is a standard data modeling technique. The differences between row modeling and EAV (which may be considered a generalization of row modeling) are:
- A row-modeled table is homogeneous in the facts that it describes: a Line Items table describes only products sold. By contrast, an EAV table contains almost any type of fact.
- The data type of the value column(s) in a row-modeled table is pre-determined by the nature of the facts it records. By contrast, in an EAV table, the conceptual data type of a value in a particular row depends on the attribute in that row. It follows that in production systems, allowing direct data entry into an EAV table would be a recipe for disaster, because the database engine itself would not be able to perform robust input validation. We shall see later how it is possible to build generic frameworks that perform most of the tasks of input validation, without endless coding on an attribute-by-attribute basis.

In a clinical data repository, row modeling also finds numerous uses; the laboratory test subschema is typically modeled this way, because lab test results are typically numeric, or can be encoded numerically. The circumstances where one would need to go beyond standard row modeling to EAV are listed below:
- The data type of individual attributes varies (as seen with clinical findings).
- The categories of data are numerous, growing or fluctuating, but the number of instances (records/rows) within each category is very small. Here, with conventional modeling, the database's entity–relationship diagram might have hundreds of tables: the tables that contain thousands/millions of rows/instances are emphasized visually to the same extent as those with very few rows. The latter are candidates for conversion to an EAV representation. This situation arises in ontology-modeling environments, where categories ("classes") must often be created on the fly, and some classes are often eliminated in subsequent cycles of prototyping.
- Certain ("hybrid") classes have some attributes that are non-sparse (present in all or most instances), while other attributes are highly variable and sparse. The latter are suitable for EAV modeling.
For example, descriptions of products made by a conglomerate corporation depend on the product category: the attributes necessary to describe a brand of light bulb are quite different from those required to describe a medical imaging device, but both have common attributes such as packaging unit and per-item cost.

=== Description of concepts ===

==== The entity ====

In clinical data, the entity is typically a clinical event, as described above. In more general-purpose settings, the entity is a foreign key into an "objects" table that records common information about every "object" (thing) in the database: at a minimum, a preferred name and brief description, as well as the category/class of entity to which it belongs. Every record (object) in this table is assigned a machine-generated object ID. The "objects table" approach was pioneered by Tom Slezak and colleagues at Lawrence Livermore Laboratories for the Chromosome 19 database, and is now standard in most large bioinformatics databases. The use of an objects table does not mandate the concurrent use of an EAV design: conventional tables can be used to store the category-specific details of each object. The major benefit of a central objects table is that, by having a supporting table of object synonyms and keywords, one can provide a standard Google-like search mechanism across the entire system, where the user can find information about any object of interest without having to first specify the category that it belongs to. (This is important in bioscience systems where a keyword like "acetylcholine" could refer either to the molecule itself, which is a neurotransmitter, or to the biological receptor to which it binds.)

==== The attribute ====

In the EAV table itself, this is just an attribute ID, a foreign key into an attribute definitions table, as stated above. However, there are usually multiple metadata tables that contain attribute-related information, and these are discussed shortly.
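A minimal sketch of an attribute-definitions table of the kind described above may help; the column set mirrors the validation helpers mentioned in the text, but all names and the sample row are illustrative assumptions, not any particular system's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Illustrative attribute-definitions (metadata) table. The attribute
# column of the EAV table would be a foreign key into attribute_id.
conn.execute("""
    CREATE TABLE attribute_definitions (
        attribute_id INTEGER PRIMARY KEY,
        name         TEXT NOT NULL,
        description  TEXT,
        data_type    TEXT NOT NULL,   -- e.g. 'real', 'string', 'date'
        units        TEXT,            -- units of measurement, if numeric
        max_length   INTEGER,         -- input-validation helpers
        regex        TEXT,
        min_value    REAL,
        max_value    REAL
    )""")
conn.execute(
    "INSERT INTO attribute_definitions "
    "(attribute_id, name, description, data_type, units, min_value, max_value) "
    "VALUES (1, 'temperature', 'Body temperature', 'real', 'Celsius', 25, 45)")

row = conn.execute(
    "SELECT name, data_type, units FROM attribute_definitions").fetchone()
print(row)
```

A generic data-entry framework would consult rows like this one at runtime to decide how to validate each incoming value, rather than relying on per-column constraints in the data tables.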
==== The value ====

Coercing all values into strings, as in the EAV data example above, results in a simple but non-scalable structure: constant data type inter-conversions are required if one wants to do anything with the values, and an index on the value column of an EAV table is essentially useless. Also, it is not convenient to store large binary data, such as images, in Base64-encoded form in the same table as small integers or strings. Therefore, larger systems use separate EAV tables for each data type (including binary large objects, "BLOBs"), with the metadata for a given attribute identifying the EAV table in which its data will be stored. This approach is actually quite efficient, because the modest amount of attribute metadata for a given class or form that a user chooses to work with can be cached readily in memory. However, it requires moving data from one table to another if an attribute's data type is changed.

== History ==

EAV, as a general-purpose means of knowledge representation, originated with the concept of "association lists" (attribute–value pairs). Commonly used today, these were first introduced in the language Lisp. Attribute–value pairs are widely used for diverse applications, such as configuration files (using a simple syntax like attribute = value). An example of non-database use of EAV is in UIMA (Unstructured Information Management Architecture), a standard now managed by the Apache Foundation and employed in areas such as natural language processing. Software that analyzes text typically marks up ("annotates") a segment: the example provided in the UIMA tutorial is a program that performs named-entity recognition (NER) on a document, annotating the text segment "President Bush" with the annotation–attribute–value triple (Person, Full_Name, "George W. Bush"). Such annotations may be stored in a database table.
While EAV does not have a direct connection to AV-pairs, Stead and Hammond appear to be the first to have conceived of their use for persistent storage of arbitrarily complex data. The first medical record systems to employ EAV were the Regenstrief electronic medical record (the effort led by Clement McDonald), William Stead and Ed Hammond's TMR (The Medical Record) system, and the HELP Clinical Data Repository (CDR) created by Homer Warner's group at LDS Hospital, Salt Lake City, Utah. (The Regenstrief system actually used a Patient–Attribute–Timestamp–Value design: the use of the timestamp supported retrieval of values for a given patient/attribute in chronological order.) All these systems, developed in the 1970s, were released before commercial systems based on E.F. Codd's relational database model were available, though HELP was much later ported to a relational architecture and commercialized by the 3M corporation. (Note that while Codd's landmark paper was published in 1970, its heavily mathematical tone had the unfortunate effect of diminishing its accessibility among non-computer-science types, and consequently delayed the model's acceptance in IT and software-vendor circles. The value of the subsequent contribution of Christopher J. Date, Codd's colleague at IBM, in translating these ideas into accessible language, accompanied by simple examples that illustrated their power, cannot be overstated.)

A group at the Columbia-Presbyterian Medical Center was the first to use a relational database engine as the foundation of an EAV system. The open-source TrialDB clinical study data management system of Nadkarni et al. was the first to use multiple EAV tables, one for each DBMS data type. The EAV/CR framework, designed primarily by Luis Marenco and Prakash Nadkarni, overlaid the principles of object orientation onto EAV; it built on Tom Slezak's object table approach (described earlier in the "Entity" section).
SenseLab, a publicly accessible neuroscience database, is built with the EAV/CR framework.

== Use in databases ==

The term "EAV database" refers to a database design where a significant proportion of the data is modeled as EAV. However, even in a database described as "EAV-based", some tables in the system are traditional relational tables. As noted above, EAV modeling makes sense for categories of data, such as clinical findings, where attributes are numerous and sparse. Where these conditions do not hold, standard relational modeling (i.e., one column per attribute) is preferable; using EAV does not mean abandoning common sense or the principles of good relational design. In clinical record systems, the subschemas dealing with patient demographics and billing are typically modeled conventionally. (While most vendor database schemas are proprietary, VistA, the system used throughout the United States Department of Veterans Affairs (VA) medical system, known as the Veterans Health Administration (VHA), is open source and its schema is readily inspectable, though it uses a MUMPS database engine rather than a relational database.)

As discussed shortly, an EAV database is essentially unmaintainable without numerous supporting tables that contain supporting metadata. The metadata tables, which typically outnumber the EAV tables by a factor of at least three, are themselves standard relational tables. An example of a metadata table is the Attribute Definitions table mentioned above.

== EAV/CR: representing substructure with classes and relationships ==

In a simple EAV design, the values of an attribute are simple or primitive data types as far as the database engine is concerned.
However, in EAV systems used for the representation of highly diverse data, it is possible that a given object (class instance) may have substructure: that is, some of its attributes may represent other kinds of objects, which in turn may have substructure, to an arbitrary level of complexity. A car, for example, has an engine, a transmission, etc., and the engine has components such as cylinders. (The permissible substructure for a given class is defined within the system's attribute metadata, as discussed later. Thus, for example, the attribute "random-access-memory" could apply to the class "computer" but not to the class "engine".)

To represent substructure, one incorporates a special EAV table where the value column contains references to other entities in the system (i.e., foreign key values into the objects table). To get all the information on a given object requires a recursive traversal of the metadata, followed by a recursive traversal of the data that stops when every attribute retrieved is simple (atomic). Recursive traversal is necessary whether details of an individual class are represented in conventional or EAV form; such traversal is performed in standard object–relational systems, for example. In practice, the number of levels of recursion tends to be relatively modest for most classes, so the performance penalties due to recursion are modest, especially with indexing of object IDs.

EAV/CR (EAV with Classes and Relationships) refers to a framework that supports complex substructure. Its name is somewhat of a misnomer: while it was an offshoot of work on EAV systems, in practice many or even most of the classes in such a system may be represented in standard relational form, based on whether the attributes are sparse or dense. EAV/CR is really characterized by its very detailed metadata, which is rich enough to support the automatic generation of browsing interfaces to individual classes without having to write class-by-class user-interface code.
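The recursive retrieval described above can be sketched as follows. In-memory dictionaries stand in for the objects table and the reference-valued EAV table, and all entity and attribute names are illustrative; a real system would issue the equivalent lookups as SQL queries:

```python
# Illustrative stand-in for an EAV table whose value column may hold
# either an atomic value or a reference to another entity (substructure).
eav = {
    "car1":    [("colour", "red"),
                ("engine", ("ref", "engine1")),
                ("transmission", ("ref", "trans1"))],
    "engine1": [("cylinders", 4)],
    "trans1":  [("type", "manual")],
}

def fetch(entity_id):
    """Recursively assemble an object, stopping when every value is atomic."""
    result = {}
    for attribute, value in eav[entity_id]:
        if isinstance(value, tuple) and value[0] == "ref":
            result[attribute] = fetch(value[1])  # descend into substructure
        else:
            result[attribute] = value
    return result

car = fetch("car1")
print(car)
```

The recursion terminates because every branch eventually reaches attributes with atomic values; the depth is bounded by the class's permissible substructure as recorded in the metadata.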
The basis of such browser interfaces is that it is possible to generate a batch of dynamic SQL queries that is independent of the class of the object, by first consulting its metadata and using the metadata to generate a sequence of queries against the data tables; some of these queries may be arbitrarily recursive. This approach works well for object-at-a-time queries, as in Web-based browsing interfaces where clicking on the name of an object brings up all details of the object in a separate page: the metadata associated with that object's class also facilitates the presentation of the object's details, because it includes the captions of individual attributes, the order in which they are to be presented, and how they are to be grouped.

One approach to EAV/CR is to allow columns to hold JSON structures, which thus provide the needed class structure. For example, PostgreSQL, as of version 9.4, offers binary JSON column (JSONB) support, allowing JSON attributes to be queried, indexed and joined.

== Metadata ==

In the words of Prof. Daniel Masys (formerly Chair of Vanderbilt University's Medical Informatics Department), the challenges of working with EAV stem from the fact that in an EAV database the "physical schema" (the way data are stored) is radically different from the "logical schema": the way users, and many software applications such as statistics packages, regard it, i.e., as conventional rows and columns for individual classes. (Because an EAV table conceptually mixes apples, oranges, grapefruit and chop suey, if you want to do any analysis of the data using standard off-the-shelf software, in most cases you have to convert subsets of it into columnar form. The process of doing this, called pivoting, is important enough to be discussed separately.)
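The pivoting step just mentioned, converting EAV rows back into the columnar form that analysis tools expect, can be sketched in a few lines of plain Python; the entity and attribute names here are illustrative, and a production system would do this with SQL (e.g. conditional aggregation) rather than in application code:

```python
# Pivot EAV triples into one row per entity, one column per attribute.
eav_rows = [
    ("patient1", "temperature", 38.9),
    ("patient1", "pulse", 98),
    ("patient2", "temperature", 36.8),
]

def pivot(rows, attributes):
    """Return {entity: {attribute: value}}; missing (sparse) cells become None."""
    table = {}
    for entity, attribute, value in rows:
        table.setdefault(entity, dict.fromkeys(attributes))[attribute] = value
    return table

columnar = pivot(eav_rows, ["temperature", "pulse"])
print(columnar["patient2"])
```

Note how the sparseness that EAV stores implicitly reappears explicitly after pivoting: patient2 has no pulse recorded, so that cell comes back as null (None), exactly the mostly-empty columns the EAV design avoided storing.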
Metadata helps perform the sleight of hand that lets users interact with the system in terms of the logical schema rather than the physical: the software continually consults the metadata for various operations such as data presentation, interactive validation, bulk data extraction and ad hoc query. The metadata can actually be used to customize the behavior of the system. EAV systems trade off simplicity in the physical and logical structure of the data for complexity in their metadata, which, among other things, plays the role that database constraints and referential integrity do in standard database designs. Such a tradeoff is generally worthwhile, because in the typical mixed schema of production systems, the data in conventional relational tables can also benefit from functionality such as automatic interface generation.

The structure of the metadata is complex enough that it comprises its own subschema within the database: various foreign keys in the data tables refer to tables within this subschema. This subschema is standard-relational, with features such as constraints and referential integrity used to the hilt. The correctness of the metadata contents, in terms of the intended system behavior, is critical, and ensuring correctness means that, when creating an EAV system, considerable design effort must go into building user interfaces for metadata editing that can be used by people on the team who know the problem domain (e.g., clinical medicine) but are not necessarily programmers. (Historically, one of the main reasons why the pre-relational TMR system failed to be adopted at sites other than its home institution was that all metadata was stored in a single file with a non-intuitive structure. Customizing system behavior by altering the contents of this file, without causing the system to break, was such a delicate task that the system's authors only trusted themselves to do it.)
Where an EAV system is implemented through RDF, the RDF Schema language may conveniently be used to express such metadata. This schema information may then be used by the EAV database engine to dynamically reorganize its internal table structure for best efficiency.

Some final caveats regarding metadata:
- Because the business logic is in the metadata rather than explicit in the database schema (i.e., one level removed, compared with traditionally designed systems), it is less apparent to one who is unfamiliar with the system. Metadata-browsing and metadata-reporting tools are therefore important in ensuring the maintainability of an EAV system. In the common scenario where metadata is implemented as a relational sub-schema, these tools are nothing more than applications built using off-the-shelf reporting or querying tools that operate on the metadata tables.
- It is easy for an insufficiently knowledgeable user to corrupt (i.e., introduce inconsistencies and errors in) metadata. Therefore, access to metadata must be restricted, and an audit trail of accesses and changes put into place, to deal with situations where multiple individuals have metadata access. Using an RDBMS for metadata will simplify the process of maintaining consistency during metadata creation and editing, by leveraging RDBMS features such as support for transactions. Also, if the metadata is part of the same database as the data itself, this ensures that it will be backed up at least as frequently as the data itself, so that it can be recovered to a point in time.
- The quality of the annotation and documentation within the metadata (i.e., the narrative/explanatory text in the descriptive columns of the metadata sub-schema) must be much higher, in order to facilitate understanding by the various members of the development team.
Ensuring metadata quality (and keeping it current as the system evolves) takes very high priority in the long-term management and maintenance of any design that uses an EAV component. Poorly-documented or out-of-date metadata can compromise the system's long-term viability. === Information captured in metadata === ==== Attribute metadata ==== Validation metadata include data type, range of permissible values or membership in a set of values, regular expression match, default value, and whether the value is permitted to be null. In EAV systems representing classes with substructure, the validation metadata will also record what class, if any, a given attribute belongs to. Presentation metadata: how the attribute is to be displayed to the user (e.g., as a text box or image of specified dimensions, a pull-down list or a set of radio buttons). When a compound object is composed of multiple attributes, as in the EAV/CR design, there is additional metadata on the order in which the attributes should be presented, and how these attributes should optionally be grouped (under descriptive headings). For attributes which happen to be laboratory parameters, ranges of normal values, which may vary by age, sex, physiological state and assay method, are recorded. Grouping metadata: Attributes are typically presented as part of a higher-order group, e.g., a specialty-specific form. Grouping metadata includes information such as the order in which attributes are presented. Certain presentation metadata, such as fonts/colors and the number of attributes displayed per row, apply to the group as a whole. ==== Advanced validation metadata ==== Dependency metadata: in many user interfaces, entry of specific values into certain fields/attributes is required to either disable/hide certain other fields or enable/show other fields. 
(For example, if a user chooses the response "No" to a Boolean question "Does the patient have diabetes?", then subsequent questions about the duration of diabetes, medications for diabetes, etc. must be disabled.) Effecting this in a generic framework involves storing dependencies between the controlling attributes and the controlled attributes. Computations and complex validation: As in a spreadsheet, the value of certain attributes can be computed, and displayed, based on values entered into fields that are presented earlier in sequence. (For example, body surface area is a function of height and weight.) Similarly, there may be "constraints" that must be true for the data to be valid: for example, in a differential white cell count, the sum of the counts of the individual white cell types must always equal 100, because the individual counts represent percentages. Computed formulas and complex validation are generally effected by storing expressions in the metadata that are macro-substituted with the values that the user enters and can be evaluated. In Web browsers, both JavaScript and VBScript have an Eval() function that can be leveraged for this purpose. Validation, presentation and grouping metadata make possible the creation of code frameworks that support automatic user interface generation for both data browsing and interactive editing. In a production system that is delivered over the Web, the task of validation of EAV data is essentially moved from the back-end/database tier (which is powerless with respect to this task) to the middle/Web-server tier. While back-end validation is always ideal, because it is impossible to subvert by attempting direct data entry into a table, middle-tier validation through a generic framework is quite workable, though a significant amount of software design effort must go into building the framework first.
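The metadata-driven validation and computation described above can be sketched in a few lines. This is a minimal illustration, not any production framework's API: the metadata dictionary, attribute names and the Du Bois body-surface-area formula are all assumptions chosen for the example, and a safety-restricted eval() plays the role that Eval() plays in browser-side code.

```python
# Minimal sketch of metadata-driven validation and computed fields in an EAV
# framework. All names and the stored formula are illustrative assumptions.

ATTRIBUTE_METADATA = {
    "heart_rate": {"type": int, "min": 20, "max": 300, "nullable": False},
    # Computed attribute: expression stored in metadata, macro-substituted
    # with user-entered values at evaluation time (Du Bois BSA formula).
    "bsa": {"type": float, "formula": "0.007184 * height**0.725 * weight**0.425"},
}

def validate(attr, value):
    """Check a value against the validation metadata for one attribute."""
    meta = ATTRIBUTE_METADATA[attr]
    if value is None:
        return meta.get("nullable", True)
    if not isinstance(value, meta["type"]):
        return False
    if "min" in meta and value < meta["min"]:
        return False
    if "max" in meta and value > meta["max"]:
        return False
    return True

def compute(attr, fields):
    """Evaluate a computed attribute by substituting entered field values
    into the expression stored in metadata (analogous to browser Eval())."""
    expr = ATTRIBUTE_METADATA[attr]["formula"]
    return eval(expr, {"__builtins__": {}}, dict(fields))
```

A real framework would add the presentation and grouping metadata discussed earlier; the point here is only that the checks live in data, not in hand-written per-attribute code.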
The availability of open-source frameworks that can be studied and modified for individual needs can go a long way in avoiding wheel reinvention. == Usage scenarios == (The first part of this section is a précis of the Dinu/Nadkarni reference article, to which the reader is directed for more details.) EAV modeling, under the alternative terms "generic data modeling" or "open schema", has long been a standard tool for advanced data modelers. Like any advanced technique, it can be double-edged, and should be used judiciously. Also, the employment of EAV does not preclude the employment of traditional relational database modeling approaches within the same database schema. In EMRs that rely on an RDBMS, such as Cerner, which use an EAV approach for their clinical-data subschema, the vast majority of tables in the schema are in fact traditionally modeled, with attributes represented as individual columns rather than as rows. The modeling of the metadata subschema of an EAV system, in fact, is a very good fit for traditional modeling, because of the inter-relationships between the various components of the metadata. In the TrialDB system, for example, the metadata tables in the schema outnumber the data tables by about ten to one. Because the correctness and consistency of metadata is critical to the correct operation of an EAV system, the system designer wants to take full advantage of all of the features that RDBMSs provide, such as referential integrity and programmable constraints, rather than having to reinvent the RDBMS-engine wheel. Consequently, the numerous metadata tables that support EAV designs are typically in third normal form. Commercial electronic health record systems (EHRs) use row modeling for classes of data such as diagnoses, surgical procedures performed, and laboratory test results, which are segregated into separate tables.
In each table, the "entity" is a composite of the patient ID and the date/time the diagnosis was made (or the surgery or lab test performed); the attribute is a foreign key into a specially designated lookup table that contains a controlled vocabulary - e.g., ICD-10 for diagnoses, Current Procedural Terminology for surgical procedures - along with a set of value attributes. (E.g., for laboratory-test results, one may record the value measured, whether it is in the normal, low or high range, the ID of the person responsible for performing the test, the date/time the test was performed, and so on.) As stated earlier, this is not a full-fledged EAV approach because the domain of attributes for a given table is restricted, just as the domain of product IDs in a supermarket's Sales table would be restricted to the domain of Products in a Products table. However, to capture data on parameters that are not always defined in standard vocabularies, EHRs also provide a "pure" EAV mechanism, where specially designated power-users can define new attributes, their data type, maximum and minimum permissible values (or permissible set of values/codes), and then allow others to capture data based on these attributes. In the Epic (TM) EHR, this mechanism is termed "Flowsheets", and is commonly used to capture inpatient nursing observation data. === Modeling sparse attributes === The typical case for using the EAV model is for highly sparse, heterogeneous attributes, such as clinical parameters in electronic medical records (EMRs), as stated above. Even here, however, it is accurate to state that the EAV modeling principle is applied to a sub-schema of the database rather than to all of its contents. (Patient demographics, for example, are most naturally modeled in a traditional, one-column-per-attribute relational structure.) Consequently, the arguments about EAV vs.
"relational" design reflect incomplete understanding of the problem: An EAV design should be employed only for that sub-schema of a database where sparse attributes need to be modeled: even here, they need to be supported by third-normal-form metadata tables. There are relatively few database-design problems where sparse attributes are encountered: this is why the circumstances where EAV design is applicable are relatively rare. Even where they are encountered, a set of EAV tables is not the only way to address sparse data: an XML-based solution (discussed below) is applicable when the maximum number of attributes per entity is relatively modest, and the total volume of sparse data is also similarly modest. An example of this situation is the problem of capturing variable attributes for different product types. Sparse attributes may also occur in e-commerce situations where an organization is purchasing or selling a vast and highly diverse set of commodities, with the details of individual categories of commodities being highly variable. === Modeling numerous classes with very few instances per class: highly dynamic schemas === Another application of EAV is in modeling classes and attributes that, while not sparse, are dynamic, but where the number of data rows per class will be relatively modest – a couple of hundred rows at most, but typically a few dozen – and the system developer is also required to provide a Web-based end-user interface within a very short turnaround time. "Dynamic" means that new classes and attributes need to be continually defined and altered to represent an evolving data model. This scenario can occur in rapidly evolving scientific fields as well as in ontology development, especially during the prototyping and iterative refinement phases.
While the creation of new tables and columns to represent a new category of data is not especially labor-intensive, the programming of Web-based interfaces that support browsing or basic editing with type- and range-based validation is. In such a case, a more maintainable long-term solution is to create a framework where the class and attribute definitions are stored in metadata, and the software generates a basic user interface from this metadata dynamically. The EAV/CR framework, mentioned earlier, was created to address this very situation. Note that an EAV data model is not essential here, but the system designer may consider it an acceptable alternative to creating, say, sixty or more tables containing a total of not more than two thousand rows. Here, because the number of rows per class is so few, efficiency considerations are less important; with the standard indexing by class ID/attribute ID, DBMS optimizers can easily cache the data for a small class in memory when running a query involving that class or attribute. In the dynamic-attribute scenario, it is worth noting that Resource Description Framework (RDF) is being employed as the underpinning of Semantic-Web-related ontology work. RDF, intended to be a general method of representing information, is a form of EAV: an RDF triple comprises an object, a property, and a value. At the end of Jon Bentley's book "Writing Efficient Programs", the author warns that making code more efficient generally also makes it harder to understand and maintain, and so one does not rush in and tweak code unless one has first determined that there is a performance problem, and measures such as code profiling have pinpointed the exact location of the bottleneck. Once you have done so, you modify only the specific code that needs to run faster. 
Similar considerations apply to EAV modeling: you apply it only to the sub-system where traditional relational modeling is known a priori to be unwieldy (as in the clinical data domain), or is discovered, during system evolution, to pose significant maintenance challenges. Database guru (and currently a vice-president of Core Technologies at Oracle Corporation) Tom Kyte, for example, correctly points out drawbacks of employing EAV in traditional business scenarios, and makes the point that mere "flexibility" is not a sufficient criterion for employing EAV. (However, he makes the sweeping claim that EAV should be avoided in all circumstances, even though Oracle's Health Sciences division itself employs EAV to model clinical-data attributes in its commercial systems ClinTrial and Oracle Clinical.) === Working with EAV data === The Achilles heel of EAV is the difficulty of working with large volumes of EAV data. It is often necessary to transiently or permanently inter-convert between columnar and row- or EAV-modeled representations of the same data; this can be both error-prone, if done manually, and CPU-intensive. Generic frameworks that utilize attribute and attribute-grouping metadata address the former but not the latter limitation; their use is more or less mandated in the case of mixed schemas that contain a mixture of conventional-relational and EAV data, where the error quotient can be very significant. The conversion operation is called pivoting. Pivoting is required not only for EAV data but also for any form of row-modeled data. (For example, implementations of the Apriori algorithm for Association Analysis, widely used to process supermarket sales data to identify other products that purchasers of a given product are also likely to buy, pivot row-modeled data as a first step.) Many database engines have proprietary SQL extensions to facilitate pivoting, and packages such as Microsoft Excel also support it.
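In portable SQL, the same pivoting effect can be obtained without proprietary extensions by using conditional aggregation. The sketch below is illustrative only: the table and attribute names are invented, and SQLite stands in for a production engine.

```python
import sqlite3

# Pivot EAV rows into a columnar result using plain conditional aggregation,
# a portable stand-in for proprietary PIVOT extensions. Names are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE eav (entity_id INT, attr TEXT, value TEXT)")
con.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "height", "170"), (1, "weight", "70"),
    (2, "height", "182"),          # entity 2 has no 'weight' row (sparse)
])
pivoted = con.execute("""
    SELECT entity_id,
           MAX(CASE WHEN attr = 'height' THEN value END) AS height,
           MAX(CASE WHEN attr = 'weight' THEN value END) AS weight
    FROM eav GROUP BY entity_id ORDER BY entity_id
""").fetchall()
# pivoted == [(1, '170', '70'), (2, '182', None)]: missing attributes
# surface as NULLs, exactly the sparseness that EAV avoids storing.
```

Note that the query must name every attribute column explicitly, which is why generic frameworks generate such statements from attribute metadata rather than writing them by hand.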
The circumstances where pivoting is necessary are considered below. Browsing of modest amounts of data for an individual entity, optionally followed by data editing based on inter-attribute dependencies. This operation is facilitated by caching the modest amounts of the requisite supporting metadata. Some programs, such as TrialDB, access the metadata to generate semi-static Web pages that contain embedded programming code as well as data structures holding metadata. Bulk extraction transforms large (but predictable) amounts of data (e.g., a clinical study’s complete data) into a set of relational tables. While CPU-intensive, this task is infrequent and does not need to be done in real time; i.e., the user can wait for a batched process to complete. The importance of bulk extraction cannot be overestimated, especially when the data is to be processed or analyzed with standard third-party tools that are completely unaware of EAV structure. Here, it is not advisable to try to reinvent entire sets of wheels through a generic framework, and it is best just to bulk-extract EAV data into relational tables and then work with it using standard tools. Ad hoc query interfaces to row- or EAV-modeled data, when queried from the perspective of individual attributes (e.g., "retrieve all patients with the presence of liver disease, with signs of liver failure and no history of alcohol abuse"), must typically show the results of the query with individual attributes as separate columns. For most EAV database scenarios ad hoc query performance must be tolerable, but sub-second responses are not necessary, since the queries tend to be exploratory in nature. ==== Relational division ==== However, the structure of the EAV data model is a perfect candidate for relational division (see relational algebra). With a good indexing strategy it is possible to get response times of less than a few hundred milliseconds on a billion-row EAV table.
Microsoft SQL Server MVP Peter Larsson has proved this on a laptop and made the solution generally available. ==== Optimizing pivoting performance ==== One possible optimization is the use of a separate "warehouse" or queryable schema whose contents are refreshed in batch mode from the production (transaction) schema. See data warehousing. The tables in the warehouse are heavily indexed and optimized using denormalization, which combines multiple tables into one to minimize the performance penalty due to table joins. Certain EAV data in a warehouse may be converted into standard tables using "materialized views" (see data warehouse), but this is generally a last resort that must be used carefully, because the number of views of this kind tends to grow non-linearly with the number of attributes in a system. In-memory data structures: One can use hash tables and two-dimensional arrays in memory in conjunction with attribute-grouping metadata to pivot data, one group at a time. This data is written to disk as a flat delimited file, with the internal names for each attribute in the first row: this format can be readily bulk-imported into a relational table. This "in-memory" technique significantly outperforms alternative approaches by keeping the queries on EAV tables as simple as possible and minimizing the number of I/O operations. Each statement retrieves a large amount of data, and the hash tables help carry out the pivoting operation, which involves placing a value for a given attribute instance into the appropriate row and column. Random access memory (RAM) is sufficiently abundant and affordable in modern hardware that the complete data set for a single attribute group in even large data sets will usually fit completely into memory, though the algorithm can be made smarter by working on slices of the data if this turns out not to be the case.
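The in-memory technique just described can be sketched as follows: one pass over the retrieved EAV rows fills a hash table keyed by entity, and the result is then written as a delimited flat file with attribute names in the first row, ready for bulk import. The data values and the attribute list (which a real system would take from attribute-grouping metadata) are invented for illustration.

```python
import csv, io

# Sketch of in-memory pivoting: hash table keyed by entity, then a flat
# delimited file with attribute names in the first row. Names are illustrative.
eav_rows = [(1, "height", "170"), (1, "weight", "70"), (2, "height", "182")]
attributes = ["height", "weight"]      # would come from grouping metadata

table = {}                             # entity_id -> {attribute: value}
for entity, attr, value in eav_rows:
    table.setdefault(entity, {})[attr] = value

buf = io.StringIO()
writer = csv.writer(buf, delimiter="|")
writer.writerow(["entity_id"] + attributes)
for entity in sorted(table):
    # Missing (sparse) attributes become empty fields in the flat file.
    writer.writerow([entity] + [table[entity].get(a, "") for a in attributes])

flat_file = buf.getvalue()
```

In production the same loop would work one attribute group at a time, so that each group's data fits in memory, and the resulting files would be bulk-loaded into relational tables.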
Obviously, no matter what approach you take, querying EAV will not be as fast as querying standard column-modeled relational data for certain types of query, in much the same way that access to elements in sparse matrices is not as fast as access to elements in non-sparse matrices if the latter fit entirely into main memory. (Sparse matrices, represented using structures such as linked lists, require list traversal to access an element at a given X-Y position, while access to elements in matrices represented as 2-D arrays can be performed using fast CPU register operations.) If, however, you have chosen the EAV approach correctly for the problem you are trying to solve, this is the price that you pay; in this respect, EAV modeling is an example of a space (and schema maintenance) versus CPU-time tradeoff. == Alternatives == === EAV vs. the Universal Data Model === Originally postulated by Maier, Ullman and Vardi, the "Universal Data Model" (UDM) seeks to simplify the querying of a complex relational schema by naive users, by creating the illusion that everything is stored in a single giant "universal table". It does this by utilizing inter-table relationships, so that the user does not need to be concerned about what table contains what attribute. C.J. Date, however, pointed out that in circumstances where a table is multiply related to another (as in genealogy databases, where an individual's father and mother are also individuals, or in some business databases where all addresses are stored centrally, and an organization can have different office addresses and shipping addresses), there is insufficient metadata within the database schema to specify unambiguous joins.
When UDM has been commercialized, as in SAP BusinessObjects, this limitation is worked around through the creation of "Universes", which are relational views with predefined joins between sets of tables: the "Universe" developer disambiguates ambiguous joins by including the multiply-related table in a view multiple times using different aliases. Apart from the way in which data is explicitly modeled (UDM simply uses relational views to intercede between the user and the database schema), EAV differs from Universal Data Models in that it also applies to transactional systems, not only query-oriented (read-only) systems as in UDM. Also, when used as the basis for clinical-data query systems, EAV implementations do not necessarily shield the user from having to specify the class of an object of interest. In the EAV-based i2b2 clinical data mart, for example, when the user searches for a term, the user has the option of specifying the category of data of interest. For example, the phrase "lithium" can refer either to the medication (which is used to treat bipolar disorder), or a laboratory assay for lithium level in the patient's blood. (The blood level of lithium must be monitored carefully: too much of the drug causes severe side effects, while too little is ineffective.) === XML and JSON === An Open Schema implementation can use an XML column in a table to capture the variable/sparse information. Similar ideas can be applied to databases that support JSON-valued columns: sparse, hierarchical data can be represented as JSON. If the database has JSON support, such as PostgreSQL and (partially) SQL Server 2016 and later, then attributes can be queried, indexed and joined. This can offer performance improvements of over 1000x over naive EAV implementations, but does not necessarily make the overall database application more robust.
Note that there are two ways in which XML or JSON data can be stored: one way is to store it as a plain string, opaque to the database server; the other way is to use a database server that can "see into" the structure. There are obviously some severe drawbacks to storing opaque strings: these cannot be queried directly, one cannot form an index based on their contents, and it is impossible to perform joins based on the content. Building an application that has to manage data gets extremely complicated when using EAV models, because of the extent of infrastructure that has to be developed in terms of metadata tables and application-framework code. Using XML solves the problem of server-based data validation (which must be done by middle-tier and browser-based code in EAV-based frameworks), but has the following drawbacks: It is programmer-intensive. XML schemas are notoriously tricky to write by hand; a recommended approach is to create them by defining relational tables, generating XML-schema code, and then dropping these tables. This is problematic in many production operations involving dynamic schemas, where new attributes are required to be defined by power-users who understand a specific application domain (e.g. inventory management or biomedicine) but are not necessarily programmers. By contrast, in production systems that use EAV, such users define new attributes (and the data type and validation checks associated with each) through a GUI application. Because the validation-associated metadata is required to be stored in multiple relational tables in a normalized design, a GUI application that ties these tables together and enforces the appropriate metadata-consistency checks is the only practical way to allow entry of attribute information, even for advanced developers, and even if the end result uses XML or JSON instead of separate relational tables.
The server-based diagnostics produced by an XML/JSON solution when insertion of incorrect data is attempted (e.g., range-check or regular-expression pattern violations) are cryptic to the end user: to convey the error accurately, one would, at the least, need to associate a detailed and user-friendly error diagnostic with each attribute. The solution does not address the user-interface-generation problem. All of the above drawbacks are remediable by creating a layer of metadata and application code, but in creating this, the original "advantage" of not having to create a framework has vanished. The fact is that modeling sparse data attributes robustly is a hard database-application-design problem no matter which storage approach is used. Sarka's work, however, proves the viability of using an XML field instead of type-specific relational EAV tables for the data-storage layer, and in situations where the number of attributes per entity is modest (e.g., variable product attributes for different product types) the XML-based solution is more compact than an EAV-table-based one. (XML itself may be regarded as a means of attribute–value data representation, though it is based on structured text rather than on relational tables.) === Tree structures and relational databases === There exist several other approaches for the representation of tree-structured data, be it XML, JSON or other formats, such as the nested set model, in a relational database. On the other hand, database vendors have begun to include JSON and XML support in their data structures and query features, as in IBM Db2, where XML data is stored as XML separate from the tables, using XPath queries as part of SQL statements, or in PostgreSQL, with a JSON data type that can be indexed and queried. These developments complement, improve upon or substitute for the EAV model approach. The uses of JSON and XML are not necessarily the same as the use of an EAV model, though they can overlap.
XML is preferable to EAV for arbitrarily hierarchical data that is relatively modest in volume for a single entity: it is not intended to scale up to the multi-gigabyte level with respect to data-manipulation performance. XML is not concerned per se with the sparse-attribute problem, and when the data model underlying the information to be represented can be decomposed straightforwardly into a relational structure, XML is better suited as a means of data interchange than as a primary storage mechanism. EAV, as stated earlier, is specifically (and only) applicable to the sparse-attribute scenario. When such a scenario holds, the use of datatype-specific attribute–value tables that can be indexed by entity, by attribute, and by value and manipulated through simple SQL statements is vastly more scalable than the use of an XML tree structure. The Google App Engine, mentioned above, uses strongly-typed-value tables for a good reason. === Graph databases === An alternative approach to managing the various problems encountered with EAV-structured data is to employ a graph database. These represent entities as the nodes of a graph or hypergraph, and attributes as links or edges of that graph. The issue of table joins is addressed by providing graph-specific query languages, such as Apache TinkerPop, or the OpenCog atomspace pattern matcher. Another alternative is to use a SPARQL store. == Considerations for server software == === PostgreSQL: JSONB columns === PostgreSQL version 9.4 includes support for JSON binary columns (JSONB), which can be queried, indexed and joined. This allows performance improvements by factors of a thousand or more over traditional EAV table designs. A DB schema based on JSONB always has fewer tables: one may nest attribute–value pairs in JSONB-type fields of the Entity table. That makes the DB schema easy to comprehend and SQL queries concise. The programming code to manipulate the database objects on the abstraction layer turns out to be much shorter.
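The JSON-column alternative to EAV tables can be sketched briefly. The example below is illustrative only: table and attribute names are invented, and SQLite's JSON1 functions stand in for PostgreSQL's JSONB operators (such as ->>) since the idea is the same, namely that sparse attributes nest in a single JSON field of the entity table and are queried with JSON functions instead of being joined in from a separate EAV table.

```python
import sqlite3

# Sketch of the JSON-column approach: sparse attributes live in one JSON
# field per entity row. SQLite's json_extract() is used here as a portable
# stand-in for PostgreSQL JSONB operators. All names are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entity (id INTEGER PRIMARY KEY, attrs TEXT)")
con.executemany("INSERT INTO entity VALUES (?, ?)", [
    (1, '{"color": "red", "voltage": 12}'),
    (2, '{"color": "blue"}'),          # no 'voltage': sparse attribute
])
red = con.execute(
    "SELECT id FROM entity WHERE json_extract(attrs, '$.color') = 'red'"
).fetchall()
```

In PostgreSQL the equivalent predicate would be written against a JSONB column and could be backed by a GIN index, which is where the large performance gains over row-per-attribute EAV tables come from.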
=== SQL Server 2008 and later: sparse columns === Microsoft SQL Server 2008 offers a (proprietary) alternative to EAV. Columns with an atomic data type (e.g., numeric, varchar or datetime columns) can be designated as sparse simply by including the word SPARSE in the column definition of the CREATE TABLE statement. Sparse columns optimize the storage of NULL values (which now take up no space at all) and are useful when the majority of records in a table will have NULL values for that column. Indexes on sparse columns are also optimized: only those rows with values are indexed. In addition, the contents of all sparse columns in a particular row of a table can be collectively aggregated into a single XML column (a column set), whose contents are of the form <column-name>column contents</column-name>, with one such element for each sparse column that contains data. In fact, if a column set is defined for a table as part of a CREATE TABLE statement, all sparse columns subsequently defined are typically added to it. This has the interesting consequence that the SQL statement SELECT * from <tablename> will not return the individual sparse columns, but concatenate all of them into a single XML column whose name is that of the column set (which therefore acts as a virtual, computed column). Sparse columns are convenient for business applications such as product information, where the applicable attributes can be highly variable depending on the product type, but where the total number of variable attributes per product type is relatively modest. ==== Limitations of sparse attributes ==== However, this approach to modelling sparse attributes has several limitations: rival DBMSs have, notably, chosen not to borrow this idea for their own engines. Limitations include: The maximum number of sparse columns in a table is 10,000, which may fall short for some implementations, such as for storing clinical data, where the possible number of attributes is one order of magnitude larger.
Therefore, this is not a solution for modelling all possible clinical attributes for a patient. Addition of new attributes – one of the primary reasons an EAV model might be sought – still requires a DBA. Further, the problem of building a user interface to sparse-attribute data is not addressed: only the storage mechanism is streamlined. Applications can be written to dynamically add and remove sparse columns from a table at run time: in contrast, for tables without sparse columns, an attempt to perform such an action in a multi-user scenario where other users/processes are still using the table would be prevented. However, while this capability offers power and flexibility, it invites abuse, and should be used judiciously and infrequently. It can result in significant performance penalties, in part because any compiled query plans that use this table are automatically invalidated. Dynamic column addition or removal is an operation that should be audited, because column removal can cause data loss: allowing an application to modify a table without maintaining some kind of a trail, including a justification for the action, is not good software practice. SQL constraints (e.g., range checks, regular expression checks) cannot be applied to sparse columns. The only check that is applied is for correct data type. Constraints would have to be implemented in metadata tables and middle-tier code, as is done in production EAV systems. (This consideration applies to business applications as well.) SQL Server has limitations on row size if attempting to change the storage format of a column: the total contents of all atomic-datatype columns, sparse and non-sparse, in a row that contain data cannot exceed 8016 bytes if that table contains a sparse column for the data to be automatically copied over. Sparse columns that happen to contain data have a storage overhead of 4 bytes per column in addition to storage for the data type itself (e.g., 4 bytes for datetime columns).
This impacts the amount of sparse-column data that you can associate with a given row. This size restriction is relaxed for the varchar data type, which means that, if one hits row-size limits in a production system, one has to work around it by designating sparse columns as varchar even though they may have a different intrinsic data type. Unfortunately, this approach now subverts server-side data-type checking. == Cloud computing offerings == Many cloud computing vendors offer data stores based on the EAV model, where an arbitrary number of attributes can be associated with a given entity. Roger Jennings provides an in-depth comparison of these. In Amazon's offering, SimpleDB, the data type is limited to strings, and data that is intrinsically non-string must be coerced to string (e.g., numbers must be padded with leading zeros) if you wish to perform operations such as sorting. Microsoft's offering, Windows Azure Table Storage, offers a limited set of data types: byte[], bool, DateTime, double, Guid, int, long and string [1]. The Google App Engine [2] offers the greatest variety of data types: in addition to dividing numeric data into int, long, or float, it also defines custom data types such as phone number, E-mail address, geocode and hyperlink. Google, but not Amazon or Microsoft, lets you define metadata that would prevent invalid attributes from being associated with a particular class of entity, by letting you create a metadata model. Google lets you operate on the data using a subset of SQL; Microsoft offer a URL-based querying syntax that is abstracted via a LINQ provider; Amazon offer a more limited syntax. Of concern, built-in support for combining different entities through joins is currently (April '10) non-existent with all three engines. Such operations have to be performed by application code. 
This may not be a concern if the application servers are co-located with the data servers at the vendor's data center, but a lot of network traffic would be generated if the two were geographically separated. An EAV approach is justified only when the attributes that are being modeled are numerous and sparse: if the data being captured does not meet this requirement, the cloud vendors' default EAV approach is often a mismatch for applications that require a true back-end database (as opposed to merely a means of persistent data storage). Retrofitting the vast majority of existing database applications, which use a traditional data-modeling approach, to an EAV-type cloud architecture, would require major surgery. Microsoft discovered, for example, that its database-application-developer base was largely reluctant to invest such effort. In 2010 therefore, Microsoft launched a premium offering, SQL Server Azure, a cloud-accessible, fully-fledged relational engine which allows porting of existing database applications with only modest changes. As of the early 2020s, the service allows standard-tier physical database sizes of up to 8TB, with "hyperscale" and "business-critical" offerings also available. == See also == Attribute–value system – Knowledge representation framework Linked data – Structured data and method for its publication Resource Description Framework – Formal language for describing data models (RDF) Semantic triple – Data modeling construct Semantic Web – Extension of the Web to facilitate data exchange Slowly changing dimension – Structure in data warehousing Triplestore – Database for storage and retrieval of triples == References ==
Wikipedia/Entity-attribute-value_model
The principle of orthogonal design (abbreviated POOD) was developed by database researchers David McGoveran and Christopher J. Date in the early 1990s, and was first published as "A New Database Design Principle" in the July 1994 issue of Database Programming and Design, then reprinted several times. It is the second of the two principles of database design, which seek to prevent databases from being too complicated or redundant, the first principle being the principle of full normalization (POFN). Simply put, it says that no two relations in a relational database should be defined in such a way that they can represent the same facts. As with database normalization, POOD serves to eliminate uncontrolled storage redundancy and expressive ambiguity, and is especially useful for applying updates to virtual relations (e.g., views). Although simple in concept, POOD is frequently misunderstood, and the formal expression of POOD continues to be refined. The principle is a restatement of the requirement that a database be a minimum cover set of the relational algebra. The relational algebra allows data duplication in the relations that are the elements of the algebra. One of the efficiency requirements of a database is that there be no data duplication. This requirement is met by the minimum cover set of the relational algebra. == Sources == Database Debunkings: The Principle of Orthogonal Design, Part I, by D. McGoveran and C. J. Date [1] Database Debunkings: The Principle of Orthogonal Design, Part II, by D. McGoveran and C. J. Date [2]
In mathematics and mathematical logic, Boolean algebra is a branch of algebra. It differs from elementary algebra in two ways. First, the values of the variables are the truth values true and false, usually denoted by 1 and 0, whereas in elementary algebra the values of the variables are numbers. Second, Boolean algebra uses logical operators such as conjunction (and) denoted as ∧, disjunction (or) denoted as ∨, and negation (not) denoted as ¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction, and division. Boolean algebra is therefore a formal way of describing logical operations in the same way that elementary algebra describes numerical operations. Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847), and set forth more fully in his An Investigation of the Laws of Thought (1854). According to Huntington, the term Boolean algebra was first suggested by Henry M. Sheffer in 1913, although Charles Sanders Peirce gave the title "A Boolian [sic] Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880. Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics. == History == A precursor of Boolean algebra was Gottfried Wilhelm Leibniz's algebra of concepts. The usage of binary in relation to the I Ching was central to Leibniz's characteristica universalis. It eventually created the foundations of algebra of concepts. Leibniz's algebra of concepts is deductively equivalent to the Boolean algebra of sets. Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as connected to the origins of both fields. 
In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington and others, until it reached the modern conception of an (abstract) mathematical structure. For example, the empirical observation that one can manipulate expressions in the algebra of sets, by translating them into expressions in Boole's algebra, is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets. In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting, and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. In modern circuit engineering settings, there is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used interchangeably. Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for very-large-scale integration (VLSI) circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification. Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra. Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first-order logic. 
Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics. The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity. == Values == Whereas expressions denote mainly numbers in elementary algebra, in Boolean algebra, they denote the truth values false and true. These values are represented with the bits, 0 and 1. They do not behave like the integers 0 and 1, for which 1 + 1 = 2, but may be identified with the elements of the two-element field GF(2), that is, integer arithmetic modulo 2, for which 1 + 1 = 0. Addition and multiplication then play the Boolean roles of XOR (exclusive-or) and AND (conjunction), respectively, with disjunction x ∨ y (inclusive-or) definable as x + y − xy and negation ¬x as 1 − x. In GF(2), − may be replaced by +, since they denote the same operation; however, this way of writing Boolean operations allows applying the usual arithmetic operations of integers (this may be useful when using a programming language in which GF(2) is not implemented). Boolean algebra also deals with functions which have their values in the set {0,1}. A sequence of bits is a commonly used example of such a function. Another common example is the totality of subsets of a set E: to a subset F of E, one can define the indicator function that takes the value 1 on F, and 0 outside F. 
The most general example is the set of elements of a Boolean algebra, with all of the foregoing being instances thereof. As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables. == Operations == === Basic operations === While elementary algebra has four operations (addition, subtraction, multiplication, and division), Boolean algebra has only three basic operations: conjunction, disjunction, and negation, expressed with the corresponding binary operators AND (∧) and OR (∨) and the unary operator NOT (¬), collectively referred to as Boolean operators. Variables in Boolean algebra that take the logical values 0 and 1 are called Boolean variables; they are used to store either true or false values. The basic operations on Boolean variables x and y are defined as follows: x ∧ y = 1 if x = y = 1, and 0 otherwise; x ∨ y = 0 if x = y = 0, and 1 otherwise; ¬x = 1 if x = 0, and 0 if x = 1. Equivalently, the values of x ∧ y, x ∨ y, and ¬x can be tabulated with truth tables. When used in expressions, the operators are applied according to precedence rules; as with elementary algebra, expressions in parentheses are evaluated first. 
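As a quick illustration, the three basic operations can be enumerated exhaustively. The following sketch (not part of the original article) prints the truth-table rows just described, relying on the fact that for the integers 0 and 1, Python's `&`, `|`, and `1 - x` coincide with conjunction, disjunction, and negation:

```python
from itertools import product

# Enumerate the truth tables of the three basic Boolean operations
# over the bits 0 and 1.
for x, y in product((0, 1), repeat=2):
    print(f"x={x} y={y}  x AND y={x & y}  x OR y={x | y}")
for x in (0, 1):
    print(f"x={x}  NOT x={1 - x}")
```

Running it shows that AND is 1 only on the row x = y = 1, OR is 0 only on the row x = y = 0, and NOT exchanges 0 and 1.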
If the truth values 0 and 1 are interpreted as integers, these operations may be expressed with the ordinary operations of arithmetic (where x + y uses addition and xy uses multiplication), or by the minimum/maximum functions:

x ∧ y = xy = min(x, y)
x ∨ y = x + y − xy = x + y(1 − x) = max(x, y)
¬x = 1 − x

One might consider that only negation and one of the two other operations are basic, because of the following identities that allow one to define conjunction in terms of negation and disjunction, and vice versa (De Morgan's laws):

x ∧ y = ¬(¬x ∨ ¬y)
x ∨ y = ¬(¬x ∧ ¬y)

=== Secondary operations === Operations composed from the basic operations include, among others, the material conditional, exclusive or, and logical equivalence described below. These definitions give rise to truth tables giving the values of these operations for all four possible combinations of inputs. Material conditional: The first operation, x → y, or Cxy, is called material implication. If x is true, then the result of the expression x → y is taken to be that of y (e.g., if x is true and y is false, then x → y is also false). But if x is false, then the value of y can be ignored; however, the operation must return some Boolean value and there are only two choices. So by definition, x → y is true when x is false (relevance logic rejects this definition, by viewing an implication with a false premise as something other than either true or false). Exclusive OR (XOR): The second operation, x ⊕ y, or Jxy, is called exclusive or (often abbreviated as XOR) to distinguish it from disjunction as the inclusive kind. It excludes the possibility of both x and y being true: if both are true, then the result is false. Defined in terms of arithmetic, it is addition mod 2, where 1 + 1 = 0. 
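Since a Boolean identity is an equation that holds for every assignment of 0 and 1 to its variables, the identities above can be confirmed by exhaustive enumeration. A minimal sketch (the helper name `is_law` is illustrative, not from the text):

```python
from itertools import product

NOT = lambda x: 1 - x

def is_law(lhs, rhs, nvars):
    """Check that two Boolean terms agree on every 0/1 assignment."""
    return all(lhs(*bits) == rhs(*bits)
               for bits in product((0, 1), repeat=nvars))

# arithmetic and min/max forms of the basic operations
assert is_law(lambda x, y: x & y, lambda x, y: min(x, y), 2)
assert is_law(lambda x, y: x & y, lambda x, y: x * y, 2)
assert is_law(lambda x, y: x | y, lambda x, y: x + y - x * y, 2)
assert is_law(lambda x, y: x | y, lambda x, y: max(x, y), 2)

# De Morgan's laws: each of AND/OR is definable from the other plus NOT
assert is_law(lambda x, y: x & y, lambda x, y: NOT(NOT(x) | NOT(y)), 2)
assert is_law(lambda x, y: x | y, lambda x, y: NOT(NOT(x) & NOT(y)), 2)

# material implication is true whenever x is false; XOR is addition mod 2
assert is_law(lambda x, y: NOT(x) | y, lambda x, y: 1 if x == 0 else y, 2)
assert is_law(lambda x, y: x ^ y, lambda x, y: (x + y) % 2, 2)
print("all identities verified")
```

The same brute-force pattern works for any finite set of variables, since a term in n variables only needs to be checked on 2^n assignments.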
Logical equivalence: The third operation, the complement of exclusive or, is equivalence or Boolean equality: x ≡ y, or Exy, is true just when x and y have the same value. Hence x ⊕ y as its complement can be understood as x ≠ y, being true just when x and y are different. Thus, the counterpart of x ⊕ y in arithmetic mod 2 is x + y, and the counterpart of equivalence is x + y + 1. == Laws == A law of Boolean algebra is an identity such as x ∨ (y ∨ z) = (x ∨ y) ∨ z between two Boolean terms, where a Boolean term is defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions are unnecessary for the purposes to which the laws are put. Such purposes include the definition of a Boolean algebra as any model of the Boolean laws, and as a means for deriving new laws from old as in the derivation of x ∨ (y ∧ z) = x ∨ (z ∧ y) from y ∧ z = z ∧ y (as treated in § Axiomatizing Boolean algebra). === Monotone laws === Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication. In particular the following laws are common to both kinds of algebra: The following laws hold in Boolean algebra, but not in ordinary algebra: Taking x = 2 in the third law above shows that it is not an ordinary algebra law, since 2 × 2 = 4. The remaining five laws can be falsified in ordinary algebra by taking all variables to be 1. For example, in absorption law 1, the left hand side would be 1(1 + 1) = 2, while the right hand side would be 1 (and so on). All of the laws treated thus far have been for conjunction and disjunction. These operations have the property that changing either argument either leaves the output unchanged, or the output changes in the same way as the input. Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. 
Operations with this property are said to be monotone. Thus the axioms thus far have all been for monotonic Boolean logic. Nonmonotonicity enters via complement ¬ as follows. === Nonmonotone laws === The complement operation is defined by the following two laws:

Complementation 1: x ∧ ¬x = 0
Complementation 2: x ∨ ¬x = 1

All properties of negation including the laws below follow from the above two laws alone. In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, hence in both algebras it satisfies the double negation law (also called involution law):

Double negation: ¬(¬x) = x

But whereas ordinary algebra satisfies the two laws

(−x)(−y) = xy
(−x) + (−y) = −(x + y)

Boolean algebra satisfies De Morgan's laws:

De Morgan 1: ¬x ∧ ¬y = ¬(x ∨ y)
De Morgan 2: ¬x ∨ ¬y = ¬(x ∧ y)

=== Completeness === The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The laws complementation 1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one possible complete set of laws or axiomatization of Boolean algebra. Every law of Boolean algebra follows logically from these axioms. Furthermore, Boolean algebras can then be defined as the models of these axioms as treated in § Boolean algebras. Writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms, nor can it rule out any model of them. 
In contrast, in a list of some but not all of the same laws, there could have been Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws that were not Boolean algebras. This axiomatization is by no means the only one, nor even necessarily the most natural, given that no attention was paid to whether some of the axioms followed from others; the list simply stopped once enough laws had been noticed (treated further in § Axiomatizing Boolean algebra). Or the intermediate notion of axiom can be sidestepped altogether by defining a Boolean law directly as any tautology, understood as an equation that holds for all values of its variables over 0 and 1. All these definitions of Boolean algebra can be shown to be equivalent. === Duality principle === Principle: If {X, R} is a partially ordered set, then {X, R(inverse)} is also a partially ordered set. There is nothing special about the choice of symbols for the values of Boolean algebra. 0 and 1 could be renamed to α and β, and as long as it was done consistently throughout, it would still be Boolean algebra, albeit with some obvious cosmetic differences. But suppose 0 and 1 were renamed 1 and 0 respectively. Then it would still be Boolean algebra, and moreover operating on the same values. However, it would not be identical to our original Boolean algebra because now ∨ behaves the way ∧ used to do and vice versa. So there are still some cosmetic differences to show that the notation has been changed, despite the fact that 0s and 1s are still being used. But if in addition to interchanging the names of the values, the names of the two binary operations are also interchanged, now there is no trace of what was done. The end product is completely indistinguishable from what was started with. The columns for x ∧ y and x ∨ y in the truth tables have changed places, but that switch is immaterial. 
When values and operations can be paired up in a way that leaves everything important unchanged when all pairs are switched simultaneously, the members of each pair are called dual to each other. Thus 0 and 1 are dual, and ∧ and ∨ are dual. The duality principle, also called De Morgan duality, asserts that Boolean algebra is unchanged when all dual pairs are interchanged. One change that did not need to be made as part of this interchange was to complement: complement is a self-dual operation. The identity or do-nothing operation x (copy the input to the output) is also self-dual. A more complicated example of a self-dual operation is (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x). There is no self-dual binary operation that depends on both its arguments. A composition of self-dual operations is a self-dual operation. For example, if f(x, y, z) = (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x), then f(f(x, y, z), x, t) is a self-dual operation of four arguments x, y, z, t. The principle of duality can be explained from a group theory perspective by the fact that there are exactly four functions that are one-to-one mappings (automorphisms) of the set of Boolean polynomials back to itself: the identity function, the complement function, the dual function and the contradual function (complemented dual). These four functions form a group under function composition, isomorphic to the Klein four-group, acting on the set of Boolean polynomials. Walter Gottschalk remarked that consequently a more appropriate name for the phenomenon would be the principle (or square) of quaternality. == Diagrammatic representations == === Venn diagrams === A Venn diagram can be used as a representation of a Boolean operation using shaded overlapping regions. There is one region for each variable, all circular in the examples here. The interior and exterior of region x corresponds respectively to the values 1 (true) and 0 (false) for variable x. 
The shading indicates the value of the operation for each combination of regions, with dark denoting 1 and light 0 (some authors use the opposite convention). The three Venn diagrams in the figure below represent respectively conjunction x ∧ y, disjunction x ∨ y, and complement ¬x. For conjunction, the region inside both circles is shaded to indicate that x ∧ y is 1 when both variables are 1. The other regions are left unshaded to indicate that x ∧ y is 0 for the other three combinations. The second diagram represents disjunction x ∨ y by shading those regions that lie inside either or both circles. The third diagram represents complement ¬x by shading the region not inside the circle. While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white box and a dark box, neither one containing a circle. However, we could put a circle for x in those boxes, in which case each would denote a function of one argument, x, which returns the same value independently of x, called a constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the difference is that a constant takes no arguments, called a zeroary or nullary operation, while a constant function takes one argument, which it ignores, and is a unary operation. Venn diagrams are helpful in visualizing laws. The commutativity laws for ∧ and ∨ can be seen from the symmetry of the diagrams: a binary operation that was not commutative would not have a symmetric diagram because interchanging x and y would have the effect of reflecting the diagram horizontally and any failure of commutativity would then appear as a failure of symmetry. Idempotence of ∧ and ∨ can be visualized by sliding the two circles together and noting that the shaded area then becomes the whole circle, for both ∧ and ∨. 
To see the first absorption law, x ∧ (x ∨ y) = x, start with the diagram in the middle for x ∨ y and note that the portion of the shaded area in common with the x circle is the whole of the x circle. For the second absorption law, x ∨ (x ∧ y) = x, start with the left diagram for x∧y and note that shading the whole of the x circle results in just the x circle being shaded, since the previous shading was inside the x circle. The double negation law can be seen by complementing the shading in the third diagram for ¬x, which shades the x circle. To visualize the first De Morgan's law, (¬x) ∧ (¬y) = ¬(x ∨ y), start with the middle diagram for x ∨ y and complement its shading so that only the region outside both circles is shaded, which is what the right hand side of the law describes. The result is the same as if we shaded that region which is both outside the x circle and outside the y circle, i.e. the conjunction of their exteriors, which is what the left hand side of the law describes. The second De Morgan's law, (¬x) ∨ (¬y) = ¬(x ∧ y), works the same way with the two diagrams interchanged. The first complement law, x ∧ ¬x = 0, says that the interior and exterior of the x circle have no overlap. The second complement law, x ∨ ¬x = 1, says that everything is either inside or outside the x circle. === Digital logic gates === Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting of logic gates connected to form a circuit diagram. Each gate implements a Boolean operation, and is depicted schematically by a shape indicating the operation. The shapes associated with the gates for conjunction (AND-gates), disjunction (OR-gates), and complement (inverters) are as follows: The lines on the left of each gate represent input wires or ports. The value of the input is represented by a voltage on the lead. 
For so-called "active-high" logic, 0 is represented by a voltage close to zero or "ground," while 1 is represented by a voltage close to the supply voltage; active-low reverses this. The line on the right of each gate represents the output port, which normally follows the same voltage conventions as the input ports. Complement is implemented with an inverter gate. The triangle denotes the operation that simply copies the input to the output; the small circle on the output denotes the actual inversion complementing the input. The convention of putting such a circle on any port means that the signal passing through this port is complemented on the way through, whether it is an input or output port. The duality principle, or De Morgan's laws, can be understood as asserting that complementing all three ports of an AND gate converts it to an OR gate and vice versa, as shown in Figure 4 below. Complementing both ports of an inverter however leaves the operation unchanged. More generally, one may complement any of the eight subsets of the three ports of either an AND or OR gate. The resulting sixteen possibilities give rise to only eight Boolean operations, namely those with an odd number of 1s in their truth table. There are eight such because the "odd-bit-out" can be either 0 or 1 and can go in any of four positions in the truth table. There being sixteen binary Boolean operations, this must leave eight operations with an even number of 1s in their truth tables. Two of these are the constants 0 and 1 (as binary operations that ignore both their inputs); four are the operations that depend nontrivially on exactly one of their two inputs, namely x, y, ¬x, and ¬y; and the remaining two are x ⊕ y (XOR) and its complement x ≡ y. == Boolean algebras == The term "algebra" denotes both a subject, namely the subject of algebra, and an object, namely an algebraic structure. 
Whereas the foregoing has addressed the subject of Boolean algebra, this section deals with mathematical objects called Boolean algebras, defined in full generality as any model of the Boolean laws. We begin with a special case of the notion definable without reference to the laws, namely concrete Boolean algebras, and then give the formal definition of the general notion. === Concrete Boolean algebras === A concrete Boolean algebra or field of sets is any nonempty set of subsets of a given set X closed under the set operations of union, intersection, and complement relative to X. (Historically X itself was required to be nonempty as well to exclude the degenerate or one-element Boolean algebra, which is the one exception to the rule that all Boolean algebras satisfy the same equations since the degenerate algebra satisfies every equation. However, this exclusion conflicts with the preferred purely equational definition of "Boolean algebra", there being no way to rule out the one-element algebra using only equations— 0 ≠ 1 does not count, being a negated equation. Hence modern authors allow the degenerate Boolean algebra and let X be empty.) Example 1. The power set 2^X of X, consisting of all subsets of X. Here X may be any set: empty, finite, infinite, or even uncountable. Example 2. The empty set and X. This two-element algebra shows that a concrete Boolean algebra can be finite even when it consists of subsets of an infinite set. It can be seen that every field of subsets of X must contain the empty set and X. Hence no smaller example is possible, other than the degenerate algebra obtained by taking X to be empty so as to make the empty set and X coincide. Example 3. The set of finite and cofinite sets of integers, where a cofinite set is one omitting only finitely many integers. This is clearly closed under complement, and is closed under union because the union of a cofinite set with any set is cofinite, while the union of two finite sets is finite. 
Intersection behaves like union with "finite" and "cofinite" interchanged. This example is countably infinite because there are only countably many finite sets of integers. Example 4. For a less trivial example of the point made by example 2, consider a Venn diagram formed by n closed curves partitioning the diagram into 2^n regions, and let X be the (infinite) set of all points in the plane not on any curve but somewhere within the diagram. The interior of each region is thus an infinite subset of X, and every point in X is in exactly one region. Then the set of all 2^(2^n) possible unions of regions (including the empty set obtained as the union of the empty set of regions and X obtained as the union of all 2^n regions) is closed under union, intersection, and complement relative to X and therefore forms a concrete Boolean algebra. Again, there are finitely many subsets of an infinite set forming a concrete Boolean algebra, with example 2 arising as the case n = 0 of no curves. === Subsets as bit vectors === A subset Y of X can be identified with an indexed family of bits with index set X, with the bit indexed by x ∈ X being 1 or 0 according to whether or not x ∈ Y. (This is the so-called characteristic function notion of a subset.) For example, a 32-bit computer word consists of 32 bits indexed by the set {0,1,2,...,31}, with 0 and 31 indexing the low and high order bits respectively. For a smaller example, if X = {a, b, c}, where a, b, c are viewed as bit positions in that order from left to right, the eight subsets {}, {c}, {b}, {b,c}, {a}, {a,c}, {a,b}, and {a,b,c} of X can be identified with the respective bit vectors 000, 001, 010, 011, 100, 101, 110, and 111. 
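The identification of subsets with bit vectors can be sketched directly. The enumeration below (illustrative code, not from the article) reproduces the eight subsets of X = {a, b, c} and their bit vectors in the order given above:

```python
# Map each subset of X = {a, b, c} to a bit vector: bit i is 1 exactly
# when the i-th element (reading a, b, c from left to right) belongs
# to the subset.
X = ["a", "b", "c"]

def to_bits(subset):
    return "".join("1" if e in subset else "0" for e in X)

subsets = [set(), {"c"}, {"b"}, {"b", "c"}, {"a"}, {"a", "c"},
           {"a", "b"}, {"a", "b", "c"}]
vectors = [to_bits(s) for s in subsets]
print(vectors)  # ['000', '001', '010', '011', '100', '101', '110', '111']
```

Read as binary numbers, the vectors simply count from 0 to 7, which is why the subsets are listed in that particular order.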
Bit vectors indexed by the set of natural numbers are infinite sequences of bits, while those indexed by the reals in the unit interval [0,1] are packed too densely to be able to write conventionally but nonetheless form well-defined indexed families (imagine coloring every point of the interval [0,1] either black or white independently; the black points then form an arbitrary subset of [0,1]). From this bit vector viewpoint, a concrete Boolean algebra can be defined equivalently as a nonempty set of bit vectors all of the same length (more generally, indexed by the same set) and closed under the bit vector operations of bitwise ∧, ∨, and ¬, as in 1010∧0110 = 0010, 1010∨0110 = 1110, and ¬1010 = 0101, the bit vector realizations of intersection, union, and complement respectively. === Prototypical Boolean algebra === The set {0,1} and its Boolean operations as treated above can be understood as the special case of bit vectors of length one, which by the identification of bit vectors with subsets can also be understood as the two subsets of a one-element set. This is called the prototypical Boolean algebra, justified by the following observation. The laws satisfied by all nondegenerate concrete Boolean algebras coincide with those satisfied by the prototypical Boolean algebra. This observation is proved as follows. Certainly any law satisfied by all concrete Boolean algebras is satisfied by the prototypical one since it is concrete. Conversely any law that fails for some concrete Boolean algebra must have failed at a particular bit position, in which case that position by itself furnishes a one-bit counterexample to that law. Nondegeneracy ensures the existence of at least one bit position because there is only one empty bit vector. The final goal of the next section can be understood as eliminating "concrete" from the above observation. That goal is reached via the stronger observation that, up to isomorphism, all Boolean algebras are concrete. 
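The bitwise realizations quoted above (1010∧0110 = 0010, and so on) can be checked with a short helper that applies an operation position by position; strings are used here (an illustrative choice) so that leading zeros survive:

```python
# Apply a Boolean operation bitwise to two bit vectors written as strings.
def bitwise(op, a, b):
    return "".join(str(op(int(x), int(y))) for x, y in zip(a, b))

def complement(a):
    return "".join(str(1 - int(x)) for x in a)

assert bitwise(lambda x, y: x & y, "1010", "0110") == "0010"  # intersection
assert bitwise(lambda x, y: x | y, "1010", "0110") == "1110"  # union
assert complement("1010") == "0101"                           # complement
print("bit vector examples confirmed")
```

Under the subset reading, these three lines are exactly intersection, union, and complement relative to a four-element index set.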
=== Boolean algebras: the definition === The Boolean algebras so far have all been concrete, consisting of bit vectors or equivalently of subsets of some set. Such a Boolean algebra consists of a set and operations on that set which can be shown to satisfy the laws of Boolean algebra. Instead of showing that the Boolean laws are satisfied, we can instead postulate a set X, two binary operations on X, and one unary operation, and require that those operations satisfy the laws of Boolean algebra. The elements of X need not be bit vectors or subsets but can be anything at all. This leads to the more general abstract definition. A Boolean algebra is any set with binary operations ∧ and ∨ and a unary operation ¬ thereon satisfying the Boolean laws. For the purposes of this definition it is irrelevant how the operations came to satisfy the laws, whether by fiat or proof. All concrete Boolean algebras satisfy the laws (by proof rather than fiat), whence every concrete Boolean algebra is a Boolean algebra according to our definitions. This axiomatic definition of a Boolean algebra as a set and certain operations satisfying certain laws or axioms by fiat is entirely analogous to the abstract definitions of group, ring, field etc. characteristic of modern or abstract algebra. Given any complete axiomatization of Boolean algebra, such as the axioms for a complemented distributive lattice, a sufficient condition for an algebraic structure of this kind to satisfy all the Boolean laws is that it satisfy just those axioms. The following is therefore an equivalent definition. A Boolean algebra is a complemented distributive lattice. The section on axiomatization lists other axiomatizations, any of which can be made the basis of an equivalent definition. === Representable Boolean algebras === Although every concrete Boolean algebra is a Boolean algebra, not every Boolean algebra need be concrete. 
Let n be a square-free positive integer, one not divisible by the square of an integer, for example 30 but not 12. The operations of greatest common divisor, least common multiple, and division into n (that is, ¬x = n/x), can be shown to satisfy all the Boolean laws when their arguments range over the positive divisors of n. Hence those divisors form a Boolean algebra. These divisors are not subsets of a set, making the divisors of n a Boolean algebra that is not concrete according to our definitions. However, if each divisor of n is represented by the set of its prime factors, this nonconcrete Boolean algebra is isomorphic to the concrete Boolean algebra consisting of all sets of prime factors of n, with union corresponding to least common multiple, intersection to greatest common divisor, and complement to division into n. So this example, while not technically concrete, is at least "morally" concrete via this representation, called an isomorphism. This example is an instance of the following notion. A Boolean algebra is called representable when it is isomorphic to a concrete Boolean algebra. The next question is answered positively as follows. Every Boolean algebra is representable. That is, up to isomorphism, abstract and concrete Boolean algebras are the same thing. This result depends on the Boolean prime ideal theorem, a choice principle slightly weaker than the axiom of choice. This strong relationship implies a weaker result strengthening the observation in the previous subsection to the following easy consequence of representability. The laws satisfied by all Boolean algebras coincide with those satisfied by the prototypical Boolean algebra. It is weaker in the sense that it does not of itself imply representability. Boolean algebras are special here, for example a relation algebra is a Boolean algebra with additional structure but it is not the case that every relation algebra is representable in the sense appropriate to relation algebras. 
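The divisor example above can be spot-checked by brute force. The sketch below (function names are illustrative) takes n = 30 and verifies the complement laws and one De Morgan law over its positive divisors, with ∧ as greatest common divisor, ∨ as least common multiple, and ¬x as n/x:

```python
from math import gcd

# Divisors of a square-free n form a Boolean algebra:
# meet = gcd, join = lcm, complement of x = n // x,
# bottom element = 1, top element = n.
n = 30
divisors = [d for d in range(1, n + 1) if n % d == 0]

lcm = lambda x, y: x * y // gcd(x, y)
neg = lambda x: n // x

for x in divisors:
    assert gcd(x, neg(x)) == 1      # x ∧ ¬x = 0 (bottom is the divisor 1)
    assert lcm(x, neg(x)) == n      # x ∨ ¬x = 1 (top is n itself)
    for y in divisors:
        # De Morgan: ¬(x ∨ y) = ¬x ∧ ¬y
        assert neg(lcm(x, y)) == gcd(neg(x), neg(y))
print("divisors of", n, "satisfy the checked laws")
```

Trying a non-square-free n such as 12 makes the complement laws fail (e.g., gcd(2, 12/2) = 2 ≠ 1), which is why square-freeness is required.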
== Axiomatizing Boolean algebra == The above definition of an abstract Boolean algebra as a set together with operations satisfying "the" Boolean laws raises the question of what those laws are. A simplistic answer is "all Boolean laws", which can be defined as all equations that hold for the Boolean algebra of 0 and 1. However, since there are infinitely many such laws, this is not a satisfactory answer in practice, leading to the question of whether it suffices to require only finitely many laws to hold. In the case of Boolean algebras, the answer is "yes": the finitely many equations listed above are sufficient. Thus, Boolean algebra is said to be finitely axiomatizable or finitely based. Moreover, the number of equations needed can be further reduced. To begin with, some of the above laws are implied by some of the others. A sufficient subset of the above laws consists of the pairs of associativity, commutativity, and absorption laws, distributivity of ∧ over ∨ (or the other distributivity law—one suffices), and the two complement laws. In fact, this is the traditional axiomatization of Boolean algebra as a complemented distributive lattice. By introducing additional laws not listed above, it becomes possible to shorten the list of needed equations yet further; for instance, with the vertical bar representing the Sheffer stroke operation, the single axiom ((a ∣ b) ∣ c) ∣ (a ∣ ((a ∣ c) ∣ a)) = c is sufficient to completely axiomatize Boolean algebra. It is also possible to find longer single axioms using more conventional operations; see Minimal axioms for Boolean algebra. == Propositional logic == Propositional logic is a logical system that is intimately connected to Boolean algebra.
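That this single equation holds in the two-element Boolean algebra can be confirmed by exhaustive enumeration, as in the Python check below (its sufficiency as a complete axiom system is a deeper fact that a truth-table check alone does not establish):

```python
def nand(a, b):
    return 1 - (a & b)   # the Sheffer stroke over {0, 1}

# ((a | b) | c) | (a | ((a | c) | a)) = c, with "|" read as the stroke:
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            lhs = nand(nand(nand(a, b), c),
                       nand(a, nand(nand(a, c), a)))
            assert lhs == c
```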
Many syntactic concepts of Boolean algebra carry over to propositional logic with only minor changes in notation and terminology, while the semantics of propositional logic are defined via Boolean algebras in a way that the tautologies (theorems) of propositional logic correspond to equational theorems of Boolean algebra. Syntactically, every Boolean term corresponds to a propositional formula of propositional logic. In this translation between Boolean algebra and propositional logic, Boolean variables x, y, ... become propositional variables (or atoms) P, Q, ... Boolean terms such as x ∨ y become propositional formulas P ∨ Q; 0 becomes false or ⊥, and 1 becomes true or T. It is convenient when referring to generic propositions to use Greek letters Φ, Ψ, ... as metavariables (variables outside the language of propositional calculus, used when talking about propositional calculus) to denote propositions. The semantics of propositional logic rely on truth assignments. The essential idea of a truth assignment is that the propositional variables are mapped to elements of a fixed Boolean algebra, and then the truth value of a propositional formula using these letters is the element of the Boolean algebra that is obtained by computing the value of the Boolean term corresponding to the formula. In classical semantics, only the two-element Boolean algebra is used, while in Boolean-valued semantics arbitrary Boolean algebras are considered. A tautology is a propositional formula that is assigned truth value 1 by every truth assignment of its propositional variables to an arbitrary Boolean algebra (or, equivalently, every truth assignment to the two element Boolean algebra). These semantics permit a translation between tautologies of propositional logic and equational theorems of Boolean algebra. Every tautology Φ of propositional logic can be expressed as the Boolean equation Φ = 1, which will be a theorem of Boolean algebra. 
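Classical (two-element) truth-assignment semantics can be sketched as a brute-force tautology checker. In the Python sketch below, is_tautology and the imp helper are illustrative names, not a standard API:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Check a formula over every truth assignment to the
    two-element Boolean algebra {False, True}."""
    return all(formula(*vals)
               for vals in product((False, True), repeat=num_vars))

imp = lambda x, y: (not x) or y   # material implication: x -> y is (not x) or y

assert is_tautology(lambda p: imp(p, p), 1)                     # P -> P
assert is_tautology(lambda p, q: imp(imp(imp(p, q), p), p), 2)  # Peirce's law
assert not is_tautology(lambda p, q: p or q, 2)                 # fails at P = Q = False
```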
Conversely, every theorem Φ = Ψ of Boolean algebra corresponds to the tautologies (Φ ∨ ¬Ψ) ∧ (¬Φ ∨ Ψ) and (Φ ∧ Ψ) ∨ (¬Φ ∧ ¬Ψ). If → is in the language, these last tautologies can also be written as (Φ → Ψ) ∧ (Ψ → Φ), or as two separate theorems Φ → Ψ and Ψ → Φ; if ≡ is available, then the single tautology Φ ≡ Ψ can be used. === Applications === One motivating application of propositional calculus is the analysis of propositions and deductive arguments in natural language. Whereas the proposition "if x = 3, then x + 1 = 4" depends on the meanings of such symbols as + and 1, the proposition "if x = 3, then x = 3" does not; it is true merely by virtue of its structure, and remains true whether "x = 3" is replaced by "x = 4" or "the moon is made of green cheese." The generic or abstract form of this tautology is "if P, then P," or in the language of Boolean algebra, P → P. Replacing P by x = 3 or any other proposition is called instantiation of P by that proposition. The result of instantiating P in an abstract proposition is called an instance of the proposition. Thus, x = 3 → x = 3 is a tautology by virtue of being an instance of the abstract tautology P → P. All occurrences of the instantiated variable must be instantiated with the same proposition, to avoid such nonsense as P → x = 3 or x = 3 → x = 4. Propositional calculus restricts attention to abstract propositions, those built up from propositional variables using Boolean operations. Instantiation is still possible within propositional calculus, but only by instantiating propositional variables by abstract propositions, such as instantiating Q by Q → P in P → (Q → P) to yield the instance P → ((Q → P) → P). (The availability of instantiation as part of the machinery of propositional calculus avoids the need for metavariables within the language of propositional calculus, since ordinary propositional variables can be considered within the language to denote arbitrary propositions. 
The metavariables themselves are outside the reach of instantiation, not being part of the language of propositional calculus but rather part of the same language for talking about it that this sentence is written in, where there is a need to be able to distinguish propositional variables and their instantiations as being distinct syntactic entities.) === Deductive systems for propositional logic === An axiomatization of propositional calculus is a set of tautologies called axioms and one or more inference rules for producing new tautologies from old. A proof in an axiom system A is a finite nonempty sequence of propositions each of which is either an instance of an axiom of A or follows by some rule of A from propositions appearing earlier in the proof (thereby disallowing circular reasoning). The last proposition is the theorem proved by the proof. Every nonempty initial segment of a proof is itself a proof, whence every proposition in a proof is itself a theorem. An axiomatization is sound when every theorem is a tautology, and complete when every tautology is a theorem. ==== Sequent calculus ==== Propositional calculus is commonly organized as a Hilbert system, whose operations are just those of Boolean algebra and whose theorems are Boolean tautologies, those Boolean terms equal to the Boolean constant 1. Another form is sequent calculus, which has two sorts, propositions as in ordinary propositional calculus, and pairs of lists of propositions called sequents, such as A ∨ B, A ∧ C, ... ⊢ A, B → C, .... The two halves of a sequent are called the antecedent and the succedent respectively. The customary metavariable denoting an antecedent or part thereof is Γ, and for a succedent Δ; thus Γ, A ⊢ Δ would denote a sequent whose succedent is a list Δ and whose antecedent is a list Γ with an additional proposition A appended after it. 
The antecedent is interpreted as the conjunction of its propositions, the succedent as the disjunction of its propositions, and the sequent itself as the entailment of the succedent by the antecedent. Entailment differs from implication in that whereas the latter is a binary operation that returns a value in a Boolean algebra, the former is a binary relation which either holds or does not hold. In this sense, entailment is an external form of implication, meaning external to the Boolean algebra, thinking of the reader of the sequent as also being external and interpreting and comparing antecedents and succedents in some Boolean algebra. The natural interpretation of ⊢ is as ≤ in the partial order of the Boolean algebra defined by x ≤ y just when x ∨ y = y. This ability to mix external implication ⊢ and internal implication → in the one logic is among the essential differences between sequent calculus and propositional calculus. == Applications == Boolean algebra as the calculus of two values is fundamental to computer circuits, computer programming, and mathematical logic, and is also used in other areas of mathematics such as set theory and statistics. === Computers === In the early 20th century, several electrical engineers intuitively recognized that Boolean algebra was analogous to the behavior of certain types of electrical circuits. Claude Shannon formally proved such behavior was logically equivalent to Boolean algebra in his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits. Today, all modern general-purpose computers perform their functions using two-value Boolean logic; that is, their electrical circuits are a physical manifestation of two-value Boolean logic. They achieve this in various ways: as voltages on wires in high-speed circuits and capacitive storage devices, as orientations of a magnetic domain in ferromagnetic storage devices, as holes in punched cards or paper tape, and so on. 
(Some early computers used decimal circuits or mechanisms instead of two-valued logic circuits.) Of course, it is possible to code more than two symbols in any given medium. For example, one might use respectively 0, 1, 2, and 3 volts to code a four-symbol alphabet on a wire, or holes of different sizes in a punched card. In practice, the tight constraints of high speed, small size, and low power combine to make noise a major factor. This makes it hard to distinguish between symbols when there are several possible symbols that could occur at a single site. Rather than attempting to distinguish between four voltages on one wire, digital designers have settled on two voltages per wire, high and low. Computers use two-value Boolean circuits for the above reasons. The most common computer architectures use ordered sequences of 32 or 64 Boolean values, called bits, e.g. 01101000110101100101010101001011. When programming in machine code, assembly language, and certain other programming languages, programmers work with the low-level digital structure of the data registers. These registers operate on voltages, where zero volts represents Boolean 0, and a reference voltage (often +5 V, +3.3 V, or +1.8 V) represents Boolean 1. Such languages support both numeric operations and logical operations. In this context, "numeric" means that the computer treats sequences of bits as binary numbers (base two numbers) and executes arithmetic operations like add, subtract, multiply, or divide. "Logical" refers to the Boolean logical operations of disjunction, conjunction, and negation between two sequences of bits, in which each bit in one sequence is simply compared to its counterpart in the other sequence. Programmers therefore have the option of working in and applying the rules of either numeric algebra or Boolean algebra as needed. A core differentiating feature between these families of operations is the existence of the carry operation in the first but not the second.
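The contrast between the two families of operations can be illustrated in Python, whose integers support both bitwise and arithmetic operators:

```python
a, b = 0b0110, 0b0011            # 6 and 3 as 4-bit patterns

# Logical (bitwise) operations combine each bit with its counterpart,
# position by position, with no interaction between neighbouring bits:
assert a & b == 0b0010           # conjunction
assert a | b == 0b0111           # disjunction
assert a ^ b == 0b0101           # exclusive or

# Numeric addition of the same patterns propagates a carry between
# positions, which no bitwise operation does:
assert a + b == 0b1001           # 6 + 3 = 9

# Indeed, addition decomposes into XOR (sum bits) plus a shifted AND (carries):
assert a + b == (a ^ b) + ((a & b) << 1)
```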
=== Two-valued logic === Other areas where two values are a good choice are the law and mathematics. In everyday relaxed conversation, nuanced or complex answers such as "maybe" or "only on the weekend" are acceptable. In more focused situations such as a court of law or theorem-based mathematics, however, it is deemed advantageous to frame questions so as to admit a simple yes-or-no answer—is the defendant guilty or not guilty, is the proposition true or false—and to disallow any other answer. However limiting this might prove in practice for the respondent, the principle of the simple yes–no question has become a central feature of both judicial and mathematical logic, making two-valued logic deserving of organization and study in its own right. A central concept of set theory is membership. An organization may permit multiple degrees of membership, such as novice, associate, and full. With sets, however, an element is either in or out. The candidates for membership in a set work just like the wires in a digital computer: each candidate is either a member or a nonmember, just as each wire is either high or low. Algebra being a fundamental tool in any area amenable to mathematical treatment, these considerations combine to make the algebra of two values of fundamental importance to computer hardware, mathematical logic, and set theory. Two-valued logic can be extended to multi-valued logic, notably by replacing the Boolean domain {0, 1} with the unit interval [0,1], in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law. Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic.
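A minimal Python sketch of these multi-valued operations (the f_-prefixed names are illustrative, not a standard API):

```python
def f_not(x): return 1.0 - x             # negation: 1 - x
def f_and(x, y): return x * y            # conjunction: multiplication
def f_or(x, y):                          # disjunction via De Morgan's law,
    return f_not(f_and(f_not(x), f_not(y)))   # which works out to x + y - x*y

# Intermediate truth degrees combine smoothly:
assert abs(f_or(0.8, 0.5) - 0.9) < 1e-12

# The classical two-valued tables are recovered at the endpoints 0 and 1:
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert f_and(a, b) == (1.0 if (a, b) == (1.0, 1.0) else 0.0)
        assert f_or(a, b) == (0.0 if (a, b) == (0.0, 0.0) else 1.0)
```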
In these interpretations, a value is interpreted as the "degree" of truth – to what extent a proposition is true, or the probability that the proposition is true. === Boolean operations === The original application for Boolean operations was mathematical logic, where it combines the truth values, true or false, of individual formulas. ==== Natural language ==== Natural languages such as English have words for several Boolean operations, in particular conjunction (and), disjunction (or), negation (not), and implication (implies). But not is synonymous with and not. When used to combine situational assertions such as "the block is on the table" and "cats drink milk", which naïvely are either true or false, these words often agree in meaning with their logical counterparts. However, with descriptions of behavior such as "Jim walked through the door", one starts to notice differences such as failure of commutativity, for example, the conjunction of "Jim opened the door" with "Jim walked through the door" in that order is not equivalent to their conjunction in the other order, since and usually means and then in such cases. Questions can be similar: the order "Is the sky blue, and why is the sky blue?" makes more sense than the reverse order. Conjunctive commands about behavior are like behavioral assertions, as in get dressed and go to school. Disjunctive commands such as love me or leave me or fish or cut bait tend to be asymmetric via the implication that one alternative is less preferable. Conjoined nouns such as tea and milk generally describe aggregation as with set union while tea or milk is a choice. However, context can reverse these senses, as in your choices are coffee and tea which usually means the same as your choices are coffee or tea (alternatives). Double negation, as in "I don't not like milk", rarely means literally "I do like milk" but rather conveys some sort of hedging, as though to imply that there is a third possibility.
"Not not P" can be loosely interpreted as "surely P", and although P necessarily implies "not not P," the converse is suspect in English, much as with intuitionistic logic. In view of the highly idiosyncratic usage of conjunctions in natural languages, Boolean algebra cannot be considered a reliable framework for interpreting them. ==== Digital logic ==== Boolean operations are used in digital logic to combine the bits carried on individual wires, thereby interpreting them over {0,1}. When a vector of n identical binary gates are used to combine two bit vectors each of n bits, the individual bit operations can be understood collectively as a single operation on values from a Boolean algebra with 2n elements. ==== Naive set theory ==== Naive set theory interprets Boolean operations as acting on subsets of a given set X. As we saw earlier this behavior exactly parallels the coordinate-wise combinations of bit vectors, with the union of two sets corresponding to the disjunction of two bit vectors and so on. ==== Video cards ==== The 256-element free Boolean algebra on three generators is deployed in computer displays based on raster graphics, which use bit blit to manipulate whole regions consisting of pixels, relying on Boolean operations to specify how the source region should be combined with the destination, typically with the help of a third region called the mask. Modern video cards offer all 223 = 256 ternary operations for this purpose, with the choice of operation being a one-byte (8-bit) parameter. The constants SRC = 0xaa or 0b10101010, DST = 0xcc or 0b11001100, and MSK = 0xf0 or 0b11110000 allow Boolean operations such as (SRC^DST)&MSK (meaning XOR the source and destination and then AND the result with the mask) to be written directly as a constant denoting a byte calculated at compile time, 0x80 in the (SRC^DST)&MSK example, 0x88 if just SRC^DST, etc. 
At run time the video card interprets the byte as the raster operation indicated by the original expression in a uniform way that requires remarkably little hardware and which takes time completely independent of the complexity of the expression. ==== Modeling and CAD ==== Solid modeling systems for computer-aided design offer a variety of methods for building objects from other objects, combination by Boolean operations being one of them. In this method the space in which objects exist is understood as a set S of voxels (the three-dimensional analogue of pixels in two-dimensional graphics) and shapes are defined as subsets of S, allowing objects to be combined as sets via union, intersection, etc. One obvious use is in building a complex shape from simple shapes simply as the union of the latter. Another use is in sculpting understood as removal of material: any grinding, milling, routing, or drilling operation that can be performed with physical machinery on physical materials can be simulated on the computer with the Boolean operation x ∧ ¬y or x − y, which in set theory is set difference, remove the elements of y from those of x. Thus given two shapes one to be machined and the other the material to be removed, the result of machining the former to remove the latter is described simply as their set difference. ==== Boolean searches ==== Search engine queries also employ Boolean logic. For this application, each web page on the Internet may be considered to be an "element" of a "set." The following examples use a syntax supported by Google. Double quotes are used to combine whitespace-separated words into a single search term.
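The sculpting-by-set-difference idea can be sketched with Python sets standing in for voxel sets (the coordinates and sizes here are arbitrary toy values):

```python
# A 3x3x3 block of material, as a set of voxel coordinates:
material = {(x, y, z) for x in range(3) for y in range(3) for z in range(3)}

# A vertical "drill hole" through the middle of the block:
drill = {(1, 1, z) for z in range(3)}

# Machining is the Boolean operation x AND NOT y, i.e. set difference:
machined = material - drill

assert len(machined) == 27 - 3          # three voxels removed
assert drill.isdisjoint(machined)       # none of the drilled voxels remain
```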
Whitespace is used to specify logical AND, as it is the default operator for joining search terms: "Search term 1" "Search term 2" The OR keyword is used for logical OR: "Search term 1" OR "Search term 2" A prefixed minus sign is used for logical NOT: "Search term 1" −"Search term 2" == See also == == Notes == == References == == Further reading == Mano, Morris; Ciletti, Michael D. (2013). Digital Design. Pearson. ISBN 978-0-13-277420-8. Whitesitt, J. Eldon (1995). Boolean algebra and its applications. Courier Dover Publications. ISBN 978-0-486-68483-3. Dwinger, Philip (1971). Introduction to Boolean algebras. Würzburg, Germany: Physica Verlag. Sikorski, Roman (1969). Boolean Algebras (3 ed.). Berlin, Germany: Springer-Verlag. ISBN 978-0-387-04469-9. Bocheński, Józef Maria (1959). A Précis of Mathematical Logic. Translated from the French and German editions by Otto Bird. Dordrecht, South Holland: D. Reidel. === Historical perspective === Boole, George (1848). "The Calculus of Logic". Cambridge and Dublin Mathematical Journal. III: 183–198. Hailperin, Theodore (1986). Boole's logic and probability: a critical exposition from the standpoint of contemporary algebra, logic, and probability theory (2 ed.). Elsevier. ISBN 978-0-444-87952-3. Gabbay, Dov M.; Woods, John, eds. (2004). The rise of modern logic: from Leibniz to Frege. Handbook of the History of Logic. Vol. 3. Elsevier. ISBN 978-0-444-51611-4. Several relevant chapters by Hailperin, Valencia, and Grattan-Guinness. Badesa, Calixto (2004). "Chapter 1. Algebra of Classes and Propositional Calculus". The birth of model theory: Löwenheim's theorem in the frame of the theory of relatives. Princeton University Press. ISBN 978-0-691-05853-5. Stanković, Radomir S.; Astola, Jaakko Tapio (2011). From Boolean Logic to Switching Circuits and Automata: Towards Modern Information Technology. Studies in Computational Intelligence. Vol. 335 (1 ed.).
Berlin & Heidelberg, Germany: Springer-Verlag. pp. xviii + 212. doi:10.1007/978-3-642-11682-7. ISBN 978-3-642-11681-0. ISSN 1860-949X. LCCN 2011921126. "The Algebra of Logic Tradition" entry by Burris, Stanley in the Stanford Encyclopedia of Philosophy, 21 February 2012 == External links ==
Wikipedia/Boolean_Algebra
In computer programming, a declaration is a language construct specifying identifier properties: it declares a word's (identifier's) meaning. Declarations are most commonly used for functions, variables, constants, and classes, but can also be used for other entities such as enumerations and type definitions. Beyond the name (the identifier itself) and the kind of entity (function, variable, etc.), declarations typically specify the data type (for variables and constants), or the type signature (for functions); types may also include dimensions, such as for arrays. A declaration is used to announce the existence of the entity to the compiler; this is important in those strongly typed languages that require functions, variables, and constants, and their types to be specified with a declaration before use, and is used in forward declaration. The term "declaration" is frequently contrasted with the term "definition", but meaning and usage varies significantly between languages; see below. Declarations are particularly prominent in languages in the ALGOL tradition, including the BCPL family, most prominently C and C++, and also Pascal. Java uses the term "declaration", though Java does not require separate declarations and definitions. == Declaration vs. definition == One basic dichotomy is whether or not a declaration contains a definition: for example, whether a variable or constant declaration specifies its value, or only its type; and similarly whether a declaration of a function specifies the body (implementation) of the function, or only its type signature. Not all languages make this distinction: in many languages, declarations always include a definition, and may be referred to as either "declarations" or "definitions", depending on the language. 
However, these concepts are distinguished in languages that require declaration before use (for which forward declarations are used), and in languages where interface and implementation are separated: the interface contains declarations, the implementation contains definitions. In informal usage, a "declaration" refers only to a pure declaration (types only, no value or body), while a "definition" refers to a declaration that includes a value or body. However, in formal usage (in language specifications), "declaration" includes both of these senses, with finer distinctions by language: in C and C++, a declaration of a function that does not include a body is called a function prototype, while a declaration of a function that does include a body is called a "function definition". In Java, declarations occur in two forms. For public methods they can be presented in interfaces as method signatures, which consist of the method names, input types and output type. A similar notation can be used in the declaration of abstract methods, which do not contain a body. The enclosing class cannot be instantiated; rather, a new derived class that provides the definition of the method would need to be created in order to create an instance of the class. Starting with Java 8, the lambda expression was included in the language, which could be viewed as a function declaration. == Declarations and definitions == In the C-family of programming languages, declarations are often collected into header files, which are included in other source files that reference and use these declarations, but don't have access to the definition. The information in the header file provides the interface between code that uses the declaration and that which defines it, a form of information hiding. A declaration is often used in order to access functions or variables defined in different source files, or in a library.
A mismatch between the definition type and the declaration type generates a compiler error. For variables, definitions assign values to an area of memory that was reserved during the declaration phase. For functions, definitions supply the function body. While a variable or function may be declared many times, it is typically defined once (in C++, this is known as the One Definition Rule or ODR). Dynamic languages such as JavaScript or Python generally allow functions to be redefined, that is, re-bound; a function is a variable much like any other, with a name and a value (the definition). Here are some examples of declarations that are not definitions, in C: Here are some examples of declarations that are definitions, again in C: == Undefined variables == In some programming languages, an implicit declaration is provided the first time such a variable is encountered at compile time. In other languages, such a usage is considered to be an error, which may result in a diagnostic message. Some languages have started out with the implicit declaration behavior, but as they matured they provided an option to disable it (e.g. Perl's "use strict" or Visual Basic's "Option Explicit"). == See also == Scope (computer science) == Notes == == References == == External links == Declare vs Define in C and C++, Alex Allain 8.2. Declarations, Definitions and Accessibility, The C Book, GBdirect Declarations and Definitions (C++), MSDN "Declarations tell the compiler that a program element or name exists. Definitions specify what code or data the name describes."
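The C examples alluded to above might look like the following sketch (the identifiers f, bar, and point are illustrative):

```c
#include <assert.h>

/* Declarations that are NOT definitions: they announce name and type only. */
extern int bar;                /* variable declared; storage defined elsewhere */
int f(int x);                  /* function prototype: type signature, no body  */
struct point;                  /* forward (incomplete) declaration of a type   */

/* Declarations that are ALSO definitions: */
int bar = 42;                  /* storage reserved and initial value assigned  */
int f(int x) { return x + 1; } /* function definition: body supplied           */
struct point { int x, y; };    /* type definition: layout fully specified      */
```

Note that the pure declarations and their matching definitions may legally coexist in one translation unit, as here; in practice the declarations would live in a header and the definitions in a source file.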
Wikipedia/Declaration_(computer_science)
Mathematical Biosciences is a monthly peer-reviewed scientific journal publishing work that provides new concepts or new understanding of biological systems using mathematical models, or methodological articles likely to find application to multiple biological systems. Papers are expected to present a major research finding of broad significance for the biosciences, or mathematical biology. Mathematical Biosciences welcomes original research articles, letters, reviews and perspectives. The journal was established in 1967 and is published by Elsevier. The editor-in-chief is the mathematical biologist Abba Gumel. His predecessor was the mathematical and theoretical biologist Santiago Schnell from the University of Notre Dame. Under Schnell's leadership, the journal raised its impact factor from 1.680 (in 2018) to 4.300 (in 2022). == Abstracting and indexing == The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2021 impact factor of 3.935. == Bellman Prize == The Mathematical Biosciences "Bellman Prize" is a biennial award to a research team or single investigator whose Mathematical Biosciences article has made an outstanding contribution to their research field over the last five years. The deadline for submitting nominations for the Bellman Prize is April 1 of the year for which the prize is awarded. Nominations are accepted for any Mathematical Biosciences original research paper published four and five years before the nomination year cycle. The prize committee does not consider self-nominations, but anyone else can submit a nomination. The prize was established in 1985 and is named for Richard Bellman, the first editor-in-chief. == References == == External links == Official website
Wikipedia/Mathematical_Biosciences
In computer science, the lexicographically minimal string rotation or lexicographically least circular substring is the problem of finding the rotation of a string possessing the lowest lexicographical order of all such rotations. For example, the lexicographically minimal rotation of "bbaaccaadd" would be "aaccaaddbb". It is possible for a string to have multiple lexicographically minimal rotations, but for most applications this does not matter as the rotations must be equivalent. Finding the lexicographically minimal rotation is useful as a way of normalizing strings. If the strings represent potentially isomorphic structures such as graphs, normalizing in this way allows for simple equality checking. A common implementation trick when dealing with circular strings is to concatenate the string to itself instead of having to perform modular arithmetic on the string indices. == Algorithms == === The Naive Algorithm === The naive algorithm for finding the lexicographically minimal rotation of a string is to iterate through successive rotations while keeping track of the lexicographically smallest rotation encountered so far. If the string is of length n, this algorithm runs in O(n^2) time in the worst case. === Booth's Algorithm === An efficient algorithm was proposed by Booth (1980). The algorithm uses a modified preprocessing function from the Knuth–Morris–Pratt string search algorithm. The failure function for the string is computed as normal, but the string is rotated during the computation so some indices must be computed more than once as they wrap around. Once all indices of the failure function have been successfully computed without the string rotating again, the minimal lexicographical rotation is known to be found and its starting index is returned. The correctness of the algorithm is somewhat difficult to understand, but it is easy to implement.
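A direct Python transcription of Booth's algorithm, using the string-doubling trick mentioned above; it returns the starting index of the minimal rotation:

```python
def least_rotation(s: str) -> int:
    """Booth's algorithm: index of the lexicographically minimal rotation."""
    s += s                        # concatenation trick: avoid modular indexing
    f = [-1] * len(s)             # failure function of the rotated string
    k = 0                         # start of the least rotation found so far
    for j in range(1, len(s)):
        sj = s[j]
        i = f[j - k - 1]
        while i != -1 and sj != s[k + i + 1]:
            if sj < s[k + i + 1]:
                k = j - i - 1     # a smaller rotation begins here
            i = f[i]
        if sj != s[k + i + 1]:    # at this point i == -1
            if sj < s[k]:         # note k + i + 1 == k
                k = j
            f[j - k] = -1
        else:
            f[j - k] = i + 1
    return k

s = "bbaaccaadd"
k = least_rotation(s)
assert s[k:] + s[:k] == "aaccaaddbb"   # the example from the article
```

Removing the lines that modify k leaves the ordinary Knuth–Morris–Pratt preprocessing function, as noted below.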
Of interest is that removing all lines of code which modify the value of k results in the original Knuth–Morris–Pratt preprocessing function, as k (representing the rotation) will remain zero. Booth's algorithm runs in O(n) time, where n is the length of the string. The algorithm performs at most 3n comparisons in the worst case, and requires auxiliary memory of length n to hold the failure function table. === Shiloach's Fast Canonization Algorithm === Shiloach (1981) proposed an algorithm improving on Booth's result in terms of performance. It was observed that if there are q equivalent lexicographically minimal rotations of a string of length n, then the string must consist of q equal substrings of length d = n/q. The algorithm requires only n + d/2 comparisons and constant space in the worst case. The algorithm is divided into two phases. The first phase is a quick sieve which rules out indices that are obviously not starting locations for the lexicographically minimal rotation. The second phase then finds the lexicographically minimal rotation start index from the indices which remain. === Duval's Lyndon Factorization Algorithm === Duval (1983) proposed an efficient algorithm involving the factorization of the string into its component Lyndon words, which runs in linear time with a constant memory requirement. == Variants == Shiloach (1979) proposed an algorithm to efficiently compare two circular strings for equality without a normalization requirement. An additional application which arises from the algorithm is the fast generation of certain chemical structures without repetitions. == See also == Lyndon word Knuth–Morris–Pratt algorithm == References ==
Wikipedia/Lexicographically_minimal_string_rotation
Algorithm X is an algorithm for solving the exact cover problem. It is a straightforward recursive, nondeterministic, depth-first, backtracking algorithm used by Donald Knuth to demonstrate an efficient implementation called DLX, which uses the dancing links technique. == Algorithm == The exact cover problem is represented in Algorithm X by an incidence matrix A consisting of 0s and 1s. The goal is to select a subset of the rows such that the digit 1 appears in each column exactly once. Algorithm X works as follows:

1. If the matrix A has no columns, the current partial solution is a valid solution; terminate successfully.
2. Otherwise choose a column c (deterministically).
3. Choose a row r such that A[r, c] = 1 (nondeterministically).
4. Include row r in the partial solution.
5. For each column j such that A[r, j] = 1:
       for each row i such that A[i, j] = 1, delete row i from matrix A;
       delete column j from matrix A.
6. Repeat this algorithm recursively on the reduced matrix A.

The nondeterministic choice of r means that the algorithm recurses over independent subalgorithms; each subalgorithm inherits the current matrix A, but reduces it with respect to a different row r. If column c is entirely zero, there are no subalgorithms and the process terminates unsuccessfully. The subalgorithms form a search tree in a natural way, with the original problem at the root and with level k containing each subalgorithm that corresponds to k chosen rows. Backtracking is the process of traversing the tree in preorder, depth first. Any systematic rule for choosing column c in this procedure will find all solutions, but some rules work much better than others. To reduce the number of iterations, Knuth suggests that the column-choosing algorithm select a column with the smallest number of 1s in it. 
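The steps above can be sketched compactly by representing the matrix as a mapping from row names to the sets of columns they cover. The following Python generator is an illustration of Algorithm X itself, not of Knuth's DLX implementation:

```python
def algorithm_x(rows, columns, partial=None):
    """Enumerate all exact covers.

    rows:    dict mapping a row name to the set of columns it covers
    columns: set of columns that still must be covered
    """
    if partial is None:
        partial = []
    if not columns:                         # step 1: no columns left
        yield list(partial)
        return
    # step 2, with Knuth's heuristic: column covered by the fewest rows
    c = min(columns, key=lambda col: sum(1 for s in rows.values() if col in s))
    for name, r in rows.items():            # step 3: each row with a 1 in c
        if c not in r:
            continue
        partial.append(name)                # step 4
        # step 5: drop every row intersecting r, and all of r's columns
        reduced = {n: s for n, s in rows.items() if not (s & r)}
        yield from algorithm_x(reduced, columns - r, partial)  # step 6
        partial.pop()                       # backtrack

sets = {"A": {1, 4, 7}, "B": {1, 4}, "C": {4, 5, 7},
        "D": {3, 5, 6}, "E": {2, 3, 6, 7}, "F": {2, 7}}
solutions = list(algorithm_x(sets, set(range(1, 8))))
# the only exact cover of this instance is {B, D, F}
```

If the chosen column c is covered by no remaining row, the loop body never runs and the generator yields nothing, which is exactly the unsuccessful termination described above.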
== Example == For example, consider the exact cover problem specified by the universe U = {1, 2, 3, 4, 5, 6, 7} and the collection of sets S = {A, B, C, D, E, F}, where: A = {1, 4, 7}; B = {1, 4}; C = {4, 5, 7}; D = {3, 5, 6}; E = {2, 3, 6, 7}; and F = {2, 7}. This problem is represented by the matrix: Algorithm X with Knuth's suggested heuristic for selecting columns solves this problem as follows:

Level 0
Step 1—The matrix is not empty, so the algorithm proceeds.
Step 2—The lowest number of 1s in any column is two. Column 1 is the first column with two 1s and thus is selected (deterministically):
Step 3—Rows A and B each have a 1 in column 1 and thus are selected (nondeterministically).
The algorithm moves to the first branch at level 1…

Level 1: Select Row A
Step 4—Row A is included in the partial solution.
Step 5—Row A has a 1 in columns 1, 4, and 7: Column 1 has a 1 in rows A and B; column 4 has a 1 in rows A, B, and C; and column 7 has a 1 in rows A, C, E, and F. Thus, rows A, B, C, E, and F are to be removed and columns 1, 4 and 7 are to be removed: Row D remains and columns 2, 3, 5, and 6 remain:
Step 1—The matrix is not empty, so the algorithm proceeds.
Step 2—The lowest number of 1s in any column is zero and column 2 is the first column with zero 1s: Thus this branch of the algorithm terminates unsuccessfully.
The algorithm moves to the next branch at level 1…

Level 1: Select Row B
Step 4—Row B is included in the partial solution. Row B has a 1 in columns 1 and 4: Column 1 has a 1 in rows A and B; and column 4 has a 1 in rows A, B, and C. Thus, rows A, B, and C are to be removed and columns 1 and 4 are to be removed: Rows D, E, and F remain and columns 2, 3, 5, 6, and 7 remain:
Step 1—The matrix is not empty, so the algorithm proceeds.
Step 2—The lowest number of 1s in any column is one. Column 5 is the first column with one 1 and thus is selected (deterministically):
Step 3—Row D has a 1 in column 5 and thus is selected (nondeterministically). 
The algorithm moves to the first branch at level 2…

Level 2: Select Row D
Step 4—Row D is included in the partial solution.
Step 5—Row D has a 1 in columns 3, 5, and 6: Column 3 has a 1 in rows D and E; column 5 has a 1 in row D; and column 6 has a 1 in rows D and E. Thus, rows D and E are to be removed and columns 3, 5, and 6 are to be removed: Row F remains and columns 2 and 7 remain:
Step 1—The matrix is not empty, so the algorithm proceeds.
Step 2—The lowest number of 1s in any column is one. Column 2 is the first column with one 1 and thus is selected (deterministically):
Row F has a 1 in column 2 and thus is selected (nondeterministically).
The algorithm moves to the first branch at level 3…

Level 3: Select Row F
Step 4—Row F is included in the partial solution. Row F has a 1 in columns 2 and 7: Column 2 has a 1 in row F; and column 7 has a 1 in row F. Thus, row F is to be removed and columns 2 and 7 are to be removed: No rows and no columns remain:
Step 1—The matrix is empty, thus this branch of the algorithm terminates successfully.

As rows B, D, and F have been selected (step 4), the final solution in this branch is: In other words, the subcollection {B, D, F} is an exact cover, since every element is contained in exactly one of the sets B = {1, 4}, D = {3, 5, 6}, or F = {2, 7}. There are no more selected rows at level 3, thus the algorithm moves to the next branch at level 2… There are no more selected rows at level 2, thus the algorithm moves to the next branch at level 1… There are no more selected rows at level 1, thus the algorithm moves to the next branch at level 0… There are no branches at level 0, thus the algorithm terminates. In summary, the algorithm determines there is only one exact cover: S* = {B, D, F}. == Implementations == Knuth's main purpose in describing Algorithm X was to demonstrate the utility of dancing links. 
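The pointer trick that gives dancing links its utility can be sketched on its own: a node removed from a doubly linked list keeps its own left/right pointers, so the removal can be undone in constant time. A minimal Python illustration follows (one dimension only; DLX links each 1 both horizontally and vertically):

```python
class Node:
    """Circular doubly linked list node."""
    def __init__(self, value=None):
        self.value = value
        self.left = self.right = self

def insert_right(anchor, node):
    # splice node in immediately to the right of anchor
    node.right = anchor.right
    node.left = anchor
    anchor.right.left = node
    anchor.right = node

def remove(node):
    # unlink node; crucially, node's OWN pointers are left intact
    node.left.right = node.right
    node.right.left = node.left

def restore(node):
    # O(1) undo, using the pointers node still holds
    node.left.right = node
    node.right.left = node

def values(header):
    out, cur = [], header.right
    while cur is not header:
        out.append(cur.value)
        cur = cur.right
    return out

# build a circular list: header <-> a <-> b <-> c
header = Node("header")
a, b, c = Node("a"), Node("b"), Node("c")
for n in (c, b, a):          # insert in reverse so traversal order is a, b, c
    insert_right(header, n)

remove(b)
print(values(header))        # ['a', 'c']
restore(b)
print(values(header))        # ['a', 'b', 'c']
```

Backtracking in DLX is exactly this: every removal performed while exploring a branch is restored, in reverse order, when the branch is abandoned.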
Knuth showed that Algorithm X can be implemented efficiently on a computer using dancing links in a process Knuth calls "DLX". DLX uses the matrix representation of the exact cover problem, implemented as doubly linked lists of the 1s of the matrix: each 1 element has a link to the next 1 above, below, to the left, and to the right of itself. (Technically, because the lists are circular, this forms a torus). Because exact cover problems tend to be sparse, this representation is usually much more efficient in both size and processing time required. DLX then uses dancing links to quickly select permutations of rows as possible solutions and to efficiently backtrack (undo) mistaken guesses. == See also == Exact cover Dancing Links == References == Knuth, Donald E. (2000), "Dancing links", in Davies, Jim; Roscoe, Bill; Woodcock, Jim (eds.), Millennial Perspectives in Computer Science: Proceedings of the 1999 Oxford-Microsoft Symposium in Honour of Sir Tony Hoare, Palgrave, pp. 187–214, arXiv:cs/0011047, Bibcode:2000cs.......11047K, ISBN 978-0-333-92230-9. == External links == Knuth's paper - PDF file (also arXiv:cs/0011047 ) Knuth's Paper describing the Dancing Links optimization - Gzip'd postscript file.
Wikipedia/Knuth's_Algorithm_X
The TPK algorithm is a simple program introduced by Donald Knuth and Luis Trabb Pardo to illustrate the evolution of computer programming languages. In their 1977 work "The Early Development of Programming Languages", Trabb Pardo and Knuth introduced a small program that involved arrays, indexing, mathematical functions, subroutines, I/O, conditionals and iteration. They then wrote implementations of the algorithm in several early programming languages to show how such concepts were expressed. To explain the name "TPK", the authors referred to Grimm's law (which concerns the consonants 't', 'p', and 'k'), the sounds in the word "typical", and their own initials (Trabb Pardo and Knuth). In a talk based on the paper, Knuth said: You can only appreciate how deep the subject is by seeing how good people struggled with it and how the ideas emerged one at a time. In order to study this—Luis I think was the main instigator of this idea—we take one program—one algorithm—and we write it in every language. And that way from one example we can quickly psych out the flavor of that particular language. We call this the TPK program, and well, the fact that it has the initials of Trabb Pardo and Knuth is just a funny coincidence. == The algorithm == Knuth describes it as follows: We introduced a simple procedure called the "TPK algorithm," and gave the flavor of each language by expressing TPK in each particular style. […] The TPK algorithm inputs eleven numbers a_0, a_1, …, a_10; then it outputs a sequence of eleven pairs (10, b_10), (9, b_9), …, (0, b_0), where

    b_i = f(a_i)  if f(a_i) ≤ 400;
    b_i = 999     if f(a_i) > 400;    with  f(x) = √|x| + 5x³.

This simple task is obviously not much of a challenge, in any decent computer language. In pseudocode:

    ask for 11 numbers to be read into a sequence S
    reverse sequence S
    for each item in sequence S
        call a function to do an operation
        if result overflows
            alert user
        else
            print result

The algorithm reads eleven numbers from an input device, stores them in an array, and then processes them in reverse order, applying a user-defined function to each value and reporting either the value of the function or a message to the effect that the value has exceeded some threshold. == Implementations == === Implementations in the original paper === In the original paper, which covered "roughly the first decade" of the development of high-level programming languages (from 1945 up to 1957), they gave the following example implementation "in a dialect of ALGOL 60", noting that ALGOL 60 was a later development than the languages actually discussed in the paper: As many of the early high-level languages could not handle the TPK algorithm exactly, they allow the following modifications: If the language supports only integer variables, then assume that all inputs and outputs are integer-valued, and that sqrt(x) means the largest integer not exceeding √x. If the language does not support alphabetic output, then instead of the string 'TOO LARGE', output the number 999. If the language does not allow any input and output, then assume that the 11 input values a_0, a_1, …, a_10 have been supplied by an external process somehow, and the task is to compute the 22 output values 10, f(10), 9, f(9), …, 0, f(0) (with 999 replacing too-large values of f(i)). 
If the language does not allow programmers to define their own functions, then replace f(a[i]) with the equivalent expression √|a[i]| + 5a[i]³. With these modifications when necessary, the authors implement this algorithm in Konrad Zuse's Plankalkül, in Goldstine and von Neumann's flow diagrams, in Haskell Curry's proposed notation, in Short Code of John Mauchly and others, in the Intermediate Program Language of Arthur Burks, in the notation of Heinz Rutishauser, in the language and compiler by Corrado Böhm in 1951–52, in Autocode of Alick Glennie, in the A-2 system of Grace Hopper, in the Laning and Zierler system, in the earliest proposed Fortran (1954) of John Backus, in the Autocode for Mark 1 by Tony Brooker, in ПП-2 of Andrey Ershov, in BACAIC of Mandalay Grems and R. E. Porter, in Kompiler 2 of A. Kenton Elsworth and others, in ADES of E. K. Blum, the Internal Translator of Alan Perlis, in Fortran of John Backus, in ARITH-MATIC and MATH-MATIC from Grace Hopper's lab, in the system of Bauer and Samelson, and (in addenda in 2003 and 2009) PACT I and TRANSCODE. They then describe what kind of arithmetic was available, and provide a subjective rating of these languages on parameters of "implementation", "readability", "control structures", "data structures", "machine independence" and "impact", besides mentioning what each was the first to do. === Implementations in more recent languages === ==== C implementation ==== This shows a C implementation equivalent to the above ALGOL 60. ==== Python implementation ==== This shows a Python implementation. ==== Rust implementation ==== This shows a Rust implementation. == References == == External links == Implementations in many languages at Rosetta Code Implementations in several languages
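For comparison with the early languages surveyed above, the TPK specification fits in a few lines of modern Python. This sketch follows the definition of the algorithm given earlier; it is not one of the implementations referenced above:

```python
from math import sqrt

def f(x):
    return sqrt(abs(x)) + 5 * x ** 3

def tpk(a):
    """a: sequence of eleven numbers a[0]..a[10]; returns the
    eleven (i, b_i) pairs in reverse index order."""
    results = []
    for i in range(10, -1, -1):              # process in reverse order
        y = f(a[i])
        results.append((i, 999 if y > 400 else y))
    return results

# e.g. tpk([1] * 11) reports (i, 6.0) for i = 10 down to 0,
# while any input with f(a[i]) > 400, such as a[i] = 5, reports 999
```

The 999 sentinel corresponds to the 'TOO LARGE' message in the original specification, per the modification rules listed above.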
Wikipedia/Trabb_Pardo–Knuth_algorithm
In object-oriented programming, inheritance is the mechanism of basing an object or class upon another object (prototype-based inheritance) or class (class-based inheritance), retaining similar implementation. Inheritance can also be defined as deriving new classes (subclasses) from existing ones (superclasses or base classes) and forming them into a hierarchy of classes. In most class-based object-oriented languages like C++, an object created through inheritance, a "child object", acquires all the properties and behaviors of the "parent object", with the exception of: constructors, destructors, overloaded operators and friend functions of the base class. Inheritance allows programmers to create classes that are built upon existing classes, to specify a new implementation while maintaining the same behaviors (realizing an interface), to reuse code and to independently extend original software via public classes and interfaces. The relationships of objects or classes through inheritance give rise to a directed acyclic graph. An inherited class is called a subclass of its parent class or superclass. The term inheritance is loosely used for both class-based and prototype-based programming, but in narrow use the term is reserved for class-based programming (one class inherits from another), with the corresponding technique in prototype-based programming being instead called delegation (one object delegates to another). Class-modifying inheritance patterns can be pre-defined according to simple network interface parameters such that inter-language compatibility is preserved. Inheritance should not be confused with subtyping. In some languages inheritance and subtyping agree, whereas in others they differ; in general, subtyping establishes an is-a relationship, whereas inheritance only reuses implementation and establishes a syntactic relationship, not necessarily a semantic relationship (inheritance does not ensure behavioral subtyping). 
To distinguish these concepts, subtyping is sometimes referred to as interface inheritance (without acknowledging that the specialization of type variables also induces a subtyping relation), whereas inheritance as defined here is known as implementation inheritance or code inheritance. Still, inheritance is a commonly used mechanism for establishing subtype relationships. Inheritance is contrasted with object composition, where one object contains another object (or objects of one class contain objects of another class); see composition over inheritance. In contrast to subtyping’s is-a relationship, composition implements a has-a relationship. Mathematically speaking, inheritance in any system of classes induces a strict partial order on the set of classes in that system. == History == In 1966, Tony Hoare presented some remarks on records, and in particular, the idea of record subclasses, record types with common properties but discriminated by a variant tag and having fields private to the variant. Influenced by this, in 1967 Ole-Johan Dahl and Kristen Nygaard presented a design that allowed specifying objects that belonged to different classes but had common properties. The common properties were collected in a superclass, and each superclass could itself potentially have a superclass. The values of a subclass were thus compound objects, consisting of some number of prefix parts belonging to various superclasses, plus a main part belonging to the subclass. These parts were all concatenated together. The attributes of a compound object would be accessible by dot notation. This idea was first adopted in the Simula 67 programming language. The idea then spread to Smalltalk, C++, Java, Python, and many other languages. == Types == There are various types of inheritance, based on paradigm and specific language. Single inheritance where subclasses inherit the features of one superclass. A class acquires the properties of another class. 
Multiple inheritance where one class can have more than one superclass and inherit features from all parent classes. "Multiple inheritance ... was widely supposed to be very difficult to implement efficiently. For example, in a summary of C++ in his book on Objective C, Brad Cox actually claimed that adding multiple inheritance to C++ was impossible. Thus, multiple inheritance seemed more of a challenge. Since I had considered multiple inheritance as early as 1982 and found a simple and efficient implementation technique in 1984, I couldn't resist the challenge. I suspect this to be the only case in which fashion affected the sequence of events." Multilevel inheritance where a subclass is inherited from another subclass. It is not uncommon that a class is derived from another derived class, as shown in the figure "Multilevel inheritance". The class A serves as a base class for the derived class B, which in turn serves as a base class for the derived class C. The class B is known as an intermediate base class because it provides a link for the inheritance between A and C. The chain ABC is known as the inheritance path. A derived class with multilevel inheritance is declared by naming as its base class another class that is itself derived; this process can be extended to any number of levels. Hierarchical inheritance This is where one class serves as a superclass (base class) for more than one subclass. For example, a parent class, A, can have two subclasses B and C. Both B and C's parent class is A, but B and C are two separate subclasses. Hybrid inheritance Hybrid inheritance is when a mix of two or more of the above types of inheritance occurs. An example of this is when a class A has a subclass B which has two subclasses, C and D. This is a mixture of both multilevel inheritance and hierarchical inheritance. 
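The multilevel and hierarchical patterns described above can be illustrated in a few lines of Python; the class names follow the A, B, C, D used in the text:

```python
class A:                      # base class
    def origin(self):
        return "defined in A"

class B(A):                   # intermediate base class linking A and C
    pass

class C(B):                   # multilevel: the inheritance path is A -> B -> C
    pass

class D(A):                   # hierarchical: A also serves as base class of D
    pass

# C inherits origin() through the chain, without redefining it
print(C().origin())           # defined in A
```

The same shape with C and D both deriving from an intermediate class B would be the hybrid case described last.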
== Subclasses and superclasses == Subclasses, derived classes, heir classes, or child classes are modular derivative classes that inherit one or more language entities from one or more other classes (called superclasses, base classes, or parent classes). The semantics of class inheritance vary from language to language, but commonly the subclass automatically inherits the instance variables and member functions of its superclasses. In C++, the general form of defining a derived class places a colon after the name of the subclass, followed by an optional visibility specifier and the name of the superclass; the colon indicates that the subclass inherits from the superclass. The visibility, if present, may be either private or public; the default visibility is private. Visibility specifies whether the features of the base class are privately derived or publicly derived. Some languages also support the inheritance of other constructs. For example, in Eiffel, contracts that define the specification of a class are also inherited by heirs. The superclass establishes a common interface and foundational functionality, which specialized subclasses can inherit, modify, and supplement. The software inherited by a subclass is considered reused in the subclass. A reference to an instance of a class may actually be referring to one of its subclasses. The actual class of the object being referenced is impossible to predict at compile-time. A uniform interface is used to invoke the member functions of objects of a number of different classes. Subclasses may replace superclass functions with entirely new functions that must share the same method signature. 
Such non-subclassable classes restrict reusability, particularly when developers only have access to precompiled binaries and not source code. A non-subclassable class has no subclasses, so it can be easily deduced at compile time that references or pointers to objects of that class are actually referencing instances of that class and not instances of subclasses (they do not exist) or instances of superclasses (upcasting a reference type violates the type system). Because the exact type of the object being referenced is known before execution, early binding (also called static dispatch) can be used instead of late binding (also called dynamic dispatch), which requires one or more virtual method table lookups depending on whether multiple inheritance or only single inheritance is supported in the programming language that is being used. === Non-overridable methods === Just as classes may be non-subclassable, method declarations may contain method modifiers that prevent the method from being overridden (i.e. replaced with a new function with the same name and type signature in a subclass). A private method is un-overridable simply because it is not accessible by classes other than the class it is a member function of (this is not true for C++, though). A final method in Java, a sealed method in C# or a frozen feature in Eiffel cannot be overridden. === Virtual methods === If a superclass method is a virtual method, then invocations of the superclass method will be dynamically dispatched. Some languages require that methods be specifically declared as virtual (e.g. C++), and in others, all methods are virtual (e.g. Java). An invocation of a non-virtual method will always be statically dispatched (i.e. the address of the function call is determined at compile-time). Static dispatch is faster than dynamic dispatch and allows optimizations such as inline expansion. 
== Visibility of inherited members == The following table shows which variables and functions get inherited dependent on the visibility given when deriving the class, using the terminology established by C++. == Applications == Inheritance is used to co-relate two or more classes to each other. === Overriding === Many object-oriented programming languages permit a class or object to replace the implementation of an aspect—typically a behavior—that it has inherited. This process is called overriding. Overriding introduces a complication: which version of the behavior does an instance of the inherited class use—the one that is part of its own class, or the one from the parent (base) class? The answer varies between programming languages, and some languages provide the ability to indicate that a particular behavior is not to be overridden and should behave as defined by the base class. For instance, in C#, the base method or property can only be overridden in a subclass if it is marked with the virtual, abstract, or override modifier, while in programming languages such as Java, different methods can be called to override other methods. An alternative to overriding is hiding the inherited code. === Code reuse === Implementation inheritance is the mechanism whereby a subclass re-uses code in a base class. By default the subclass retains all of the operations of the base class, but the subclass may override some or all operations, replacing the base-class implementation with its own. In the following Python example, subclasses SquareSumComputer and CubeSumComputer override the transform() method of the base class SumComputer. The base class comprises operations to compute the sum of the squares between two integers. The subclass re-uses all of the functionality of the base class with the exception of the operation that transforms a number into its square, replacing it with an operation that transforms a number into its square and cube respectively. 
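The SumComputer example described above can be reconstructed along the following lines; this is a plausible sketch of the code the text refers to, with the half-open interval [a, b) chosen as an assumption:

```python
class SumComputer:
    """Sums transform(x) over the integers x in [a, b)."""
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def transform(self, x):
        raise NotImplementedError   # subclasses supply the operation

    def inputs(self):
        return range(self.a, self.b)

    def compute(self):
        return sum(self.transform(x) for x in self.inputs())

class SquareSumComputer(SumComputer):
    def transform(self, x):         # overrides the base operation
        return x * x

class CubeSumComputer(SumComputer):
    def transform(self, x):
        return x * x * x

print(SquareSumComputer(1, 4).compute())   # 14  (1 + 4 + 9)
print(CubeSumComputer(1, 4).compute())     # 36  (1 + 8 + 27)
```

Everything except transform() is inherited unchanged, which is exactly the code reuse the passage describes: the iteration and summation machinery lives once, in the base class.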
The subclasses therefore compute the sum of the squares/cubes between two integers. In most quarters, class inheritance for the sole purpose of code reuse has fallen out of favor. The primary concern is that implementation inheritance does not provide any assurance of polymorphic substitutability—an instance of the reusing class cannot necessarily be substituted for an instance of the inherited class. An alternative technique, explicit delegation, requires more programming effort, but avoids the substitutability issue. In C++ private inheritance can be used as a form of implementation inheritance without substitutability. Whereas public inheritance represents an "is-a" relationship and delegation represents a "has-a" relationship, private (and protected) inheritance can be thought of as an "is implemented in terms of" relationship. Another frequent use of inheritance is to guarantee that classes maintain a certain common interface; that is, they implement the same methods. The parent class can be a combination of implemented operations and operations that are to be implemented in the child classes. Often, there is no interface change between the supertype and subtype: the child implements the behavior described instead of its parent class. == Inheritance vs subtyping == Inheritance is similar to but distinct from subtyping. Subtyping enables a given type to be substituted for another type or abstraction and is said to establish an is-a relationship between the subtype and some existing abstraction, either implicitly or explicitly, depending on language support. The relationship can be expressed explicitly via inheritance in languages that support inheritance as a subtyping mechanism. 
For example, a C++ declaration of a class B publicly inheriting from a class A establishes an explicit inheritance relationship in which B is both a subclass and a subtype of A, and a B can be used as an A wherever an A is specified (via a reference, a pointer or the object itself). In programming languages that do not support inheritance as a subtyping mechanism, the relationship between a base class and a derived class is only a relationship between implementations (a mechanism for code reuse), as compared to a relationship between types. Inheritance, even in programming languages that support inheritance as a subtyping mechanism, does not necessarily entail behavioral subtyping. It is entirely possible to derive a class whose object will behave incorrectly when used in a context where the parent class is expected; see the Liskov substitution principle. (Compare connotation/denotation.) In some OOP languages, the notions of code reuse and subtyping coincide because the only way to declare a subtype is to define a new class that inherits the implementation of another. === Design constraints === Using inheritance extensively in designing a program imposes certain constraints. For example, consider a class Person that contains a person's name, date of birth, address and phone number. We can define a subclass of Person called Student that contains the person's grade point average and classes taken, and another subclass of Person called Employee that contains the person's job-title, employer, and salary. In defining this inheritance hierarchy we have already defined certain restrictions, not all of which are desirable: Singleness Using single inheritance, a subclass can inherit from only one superclass. Continuing the example given above, a Person object can be either a Student or an Employee, but not both. Using multiple inheritance partially solves this problem, as one can then define a StudentEmployee class that inherits from both Student and Employee. 
However, in most implementations, it can still inherit from each superclass only once, and thus, does not support cases in which a student has two jobs or attends two institutions. The inheritance model available in Eiffel makes this possible through support for repeated inheritance. Static The inheritance hierarchy of an object is fixed at instantiation when the object's type is selected and does not change with time. For example, the inheritance graph does not allow a Student object to become an Employee object while retaining the state of its Person superclass. (This kind of behavior, however, can be achieved with the decorator pattern.) Some have criticized inheritance, contending that it locks developers into their original design standards. Visibility Whenever client code has access to an object, it generally has access to all the object's superclass data. Even if the superclass has not been declared public, the client can still cast the object to its superclass type. For example, there is no way to give a function a pointer to a Student's grade point average and transcript without also giving that function access to all of the personal data stored in the student's Person superclass. Many modern languages, including C++ and Java, provide a "protected" access modifier that allows subclasses to access the data, without allowing any code outside the chain of inheritance to access it. The composite reuse principle is an alternative to inheritance. This technique supports polymorphism and code reuse by separating behaviors from the primary class hierarchy and including specific behavior classes as required in any business domain class. This approach avoids the static nature of a class hierarchy by allowing behavior modifications at run time and allows one class to implement behaviors buffet-style, instead of being restricted to the behaviors of its ancestor classes. 
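The composite reuse approach described above can be sketched in Python; every name here (Order and the logger classes) is a hypothetical illustration, not an API from any library:

```python
class ListLogger:
    """One interchangeable behavior: record messages in memory."""
    def __init__(self):
        self.messages = []
    def log(self, msg):
        self.messages.append(msg)

class ConsoleLogger:
    """Another behavior with the same interface."""
    def log(self, msg):
        print(f"[log] {msg}")

class Order:
    # The logging behavior is composed in rather than inherited,
    # so it can be chosen per instance and swapped at run time.
    def __init__(self, logger):
        self.logger = logger
    def place(self):
        self.logger.log("order placed")

order = Order(ListLogger())
order.place()
order.logger = ConsoleLogger()   # change behavior after instantiation
order.place()
```

Because Order depends only on the log() interface, new behaviors can be added without touching any class hierarchy, which is the run-time flexibility the passage contrasts with static inheritance.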
== Issues and alternatives == Implementation inheritance has been controversial among programmers and theoreticians of object-oriented programming since at least the 1990s. Among the critics are the authors of Design Patterns, who advocate instead for interface inheritance, and favor composition over inheritance. For example, the decorator pattern (as mentioned above) has been proposed to overcome the static nature of inheritance between classes. As a more fundamental solution to the same problem, role-oriented programming introduces a distinct relationship, played-by, combining properties of inheritance and composition into a new concept. According to Allen Holub, the main problem with implementation inheritance is that it introduces unnecessary coupling in the form of the "fragile base class problem": modifications to the base class implementation can cause inadvertent behavioral changes in subclasses. Using interfaces avoids this problem because no implementation is shared, only the API. Another way of stating this is that "inheritance breaks encapsulation". The problem surfaces clearly in open object-oriented systems such as frameworks, where client code is expected to inherit from system-supplied classes and then be substituted for the system's classes in its algorithms. Reportedly, Java inventor James Gosling has spoken against implementation inheritance, stating that he would not include it if he were to redesign Java. Language designs that decouple inheritance from subtyping (interface inheritance) appeared as early as 1990; a modern example of this is the Go programming language. Complex inheritance, or inheritance used within an insufficiently mature design, may lead to the yo-yo problem. When inheritance was used as a primary approach to structure programs in the late 1990s, developers tended to break code into more layers of inheritance as the system functionality grew. 
If a development team combined multiple layers of inheritance with the single responsibility principle, this resulted in many very thin layers of code, with many layers consisting of only 1 or 2 lines of actual code. Too many layers make debugging a significant challenge, as it becomes hard to determine which layer needs to be debugged. Another issue with inheritance is that subclasses must be defined in code, which means that program users cannot add new subclasses at runtime. Other design patterns (such as Entity–component–system) allow program users to define variations of an entity at runtime. == See also ==

Archetype pattern – Software design pattern
Circle–ellipse problem
Defeasible reasoning – Reasoning that is rationally compelling, though not deductively valid
Interface (computing) – Shared boundary between elements of a computing system
Method overriding – Language feature in object-oriented programming
Mixin – Class in object-oriented programming languages
Polymorphism (computer science) – Using one interface or symbol with regards to multiple different types
Protocol – Abstraction of a class
Role-oriented programming – Programming paradigm based on conceptual understanding of objects
Trait (computer programming) – Set of methods that extend the functionality of a class
Virtual inheritance – Technique in the C++ language

== Notes == == References == == Further reading == Meyer, Bertrand (1997). "24. Using Inheritance Well" (PDF). Object-Oriented Software Construction (2nd ed.). Prentice Hall. pp. 809–870. ISBN 978-0136291558. Samokhin, Vadim (2017). "Implementation Inheritance Is Evil". HackerNoon. Medium.
Wikipedia/Superclass_(computer_science)
In computer science, a union is a value that may have any of several representations or formats within the same area of memory. Some programming languages support a union type for such values. A union type specifies the permitted types that may be stored in its instances, e.g., float and integer. In contrast with a record, which could be defined to contain both a float and an integer, a union holds only one of them at a time. A union can be pictured as a chunk of memory that is used to store variables of different data types. Once a new value is assigned to a field, the existing data is overwritten with the new data. The memory area storing the value has no intrinsic type (other than just bytes or words of memory), but the value can be treated as one of several abstract data types, having the type of the value that was last written to the memory area. In type theory, a tagged union corresponds to a sum type; this corresponds to the disjoint union in mathematics. Depending on the language and type, a union value may be used in some operations, such as assignment and comparison for equality, without knowing its specific type. Other operations may require that knowledge, either from some external information or through the use of a tagged union. == Untagged unions == Because of the limitations of their use, untagged unions are generally only provided in untyped languages or in a type-unsafe way (as in C). They have the advantage over simple tagged unions of not requiring space to store a data type tag. The name "union" stems from the type's formal definition. If a type is considered as the set of all values that that type can take on, a union type is simply the mathematical union of its constituent types, since it can take on any value any of its fields can. 
Also, because a mathematical union discards duplicates, if more than one field of the union can take on a single common value, it is impossible to tell from the value alone which field was last written. However, one useful programming function of unions is to map smaller data elements to larger ones for easier manipulation. A data structure consisting, for example, of 4 bytes and a 32-bit integer can form a union with an unsigned 64-bit integer, and thus be more readily accessed for purposes such as comparison. == Unions in various programming languages == === ALGOL 68 === ALGOL 68 has tagged unions, and uses a case clause to distinguish and extract the constituent type at runtime. A union containing another union is treated as the set of all its constituent possibilities, and if the context requires it, a union is automatically coerced into the wider union. A union can explicitly contain no value, which can be distinguished at runtime. An example is:

mode node = union (real, int, string, void);
node n := "abc";
case n in
  (real r): print(("real:", r)),
  (int i): print(("int:", i)),
  (string s): print(("string:", s)),
  (void): print(("void:", "EMPTY")),
  out print(("?:", n))
esac

The syntax of the C/C++ union type and the notion of casts were derived from ALGOL 68, though in an untagged form. === C/C++ === In C and C++, untagged unions are expressed nearly exactly like structures (structs), except that each data member is located at the same memory address. The data members, as in structures, need not be primitive values, and in fact may be structures or even other unions. C++ (since C++11) also allows for a data member to be any type that has a full-fledged constructor/destructor and/or copy constructor, or a non-trivial copy assignment operator. For example, it is possible to have the standard C++ string as a member of a union. 
The primary use of a union is allowing access to a common location by different data types, for example hardware input/output access, bitfield and word sharing, or type punning. Unions can also provide low-level polymorphism. However, there is no checking of types, so it is up to the programmer to be sure that the proper fields are accessed in different contexts. The relevant field of a union variable is typically determined by the state of other variables, possibly in an enclosing struct. One common C programming idiom uses unions to perform what C++ calls a reinterpret_cast, by assigning to one field of a union and reading from another, as is done in code which depends on the raw representation of the values. A practical example is the method of computing square roots using the IEEE representation. This is not, however, a safe use of unions in general. The C standard states: "Structure and union specifiers have the same form. [ . . . ] The size of a union is sufficient to contain the largest of its members. The value of at most one of the members can be stored in a union object at any time. A pointer to a union object, suitably converted, points to each of its members (or if a member is a bit-field, then to the unit in which it resides), and vice versa." ==== Anonymous union ==== In C++, C11, and as a non-standard extension in many compilers, unions can also be anonymous. Their data members do not need to be referenced through a union name, but are instead accessed directly. They have some restrictions as opposed to traditional unions: in C11, they must be a member of another structure or union, and in C++, they cannot have methods or access specifiers. Simply omitting the class-name portion of the syntax does not make a union an anonymous union. For a union to qualify as an anonymous union, the declaration must not declare an object. Anonymous unions are also useful in C struct definitions to provide a sense of namespacing. 
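The write-one-field, read-another idiom described above can be sketched in Python with the standard ctypes module, whose Union class overlays its fields in memory the way a C union does (a sketch for illustration; the FloatBits name is made up):

```python
import ctypes

class FloatBits(ctypes.Union):
    # Both fields occupy the same four bytes, as in a C union.
    _fields_ = [("f", ctypes.c_float),
                ("u", ctypes.c_uint32)]

fb = FloatBits()
fb.f = 1.0                       # write the float field...
print(hex(fb.u))                 # ...and read the raw IEEE 754 bits: 0x3f800000
print(ctypes.sizeof(FloatBits))  # 4: the size of the largest member
```

Reading fb.u after writing fb.f is exactly the type punning the text describes; as in C, nothing checks that the field read matches the field last written.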
==== Transparent union ==== In compilers such as GCC, Clang, and IBM XL C for AIX, a transparent_union attribute is available for union types. Types contained in the union can be converted transparently to the union type itself in a function call, provided that all types have the same size. It is mainly intended for functions with multiple parameter interfaces, a use necessitated by early Unix extensions and later re-standardisation. === COBOL === In COBOL, union data items are defined in two ways. The first uses the RENAMES (66 level) keyword, which effectively maps a second alphanumeric data item on top of the same memory location as a preceding data item. In the example code below, data item PERSON-REC is defined as a group containing another group and a numeric data item. PERSON-DATA is defined as an alphanumeric data item that renames PERSON-REC, treating the data bytes contained within it as character data. The second way to define a union type is by using the REDEFINES keyword. In the example code below, data item VERS-NUM is defined as a 2-byte binary integer containing a version number. A second data item VERS-BYTES is defined as a two-character alphanumeric variable. Since the second item is redefined over the first item, the two items share the same address in memory, and therefore share the same underlying data bytes. The first item interprets the two data bytes as a binary value, while the second item interprets the bytes as character values. === Pascal === In Pascal, there are two ways to create unions. One is the standard way through a variant record. The second is a nonstandard means of declaring a variable as absolute, meaning it is placed at the same memory location as another variable or at an absolute address. While all Pascal compilers support variant records, only some support absolute variables. For the purposes of this example, the following are all integer types: a byte consists of 8 bits, a word is 16 bits, and an integer is 32 bits. 
The following example shows the non-standard absolute form: In the first example, each of the elements of the array B maps to one of the specific bytes of the variable A. In the second example, the variable C is assigned to the exact machine address 0. In the following example, a record has variants, some of which share the same location as others: === PL/I === In PL/I the original term for a union was cell, which is still accepted as a synonym for union by several compilers. The union declaration is similar to the structure definition, where elements at the same level within the union declaration occupy the same storage. Elements of the union can be any data type, including structures and arrays.: pp. 192–193  Here vers_num and vers_bytes occupy the same storage locations. An alternative to a union declaration is the DEFINED attribute, which allows alternative declarations of storage; however, the data types of the base and defined variables must match.: pp. 289–293  === Rust === Rust implements both tagged and untagged unions. In Rust, tagged unions are implemented using the enum keyword. Unlike enumerated types in most other languages, enum variants in Rust can contain additional data in the form of a tuple or struct, making them tagged unions rather than simple enumerated types. Rust also supports untagged unions using the union keyword. The memory layout of unions in Rust is undefined by default, but a union with the #[repr(C)] attribute will be laid out in memory exactly like the equivalent union in C. Reading the fields of a union can only be done within an unsafe function or block, as the compiler cannot guarantee that the data in the union will be valid for the type of the field; if this is not the case, it will result in undefined behavior. 
== Syntax and example == === C/C++ === In C and C++, the syntax is: A structure can also be a member of a union, as the following example shows: This example defines a variable uvar as a union (tagged as name1), which contains two members, a structure (tagged as name2) named svar (which in turn contains three members), and an integer variable named d. Unions may occur within structures and arrays, and vice versa: The number ival is referred to as symtab[i].u.ival and the first character of string sval by either of *symtab[i].u.sval or symtab[i].u.sval[0]. === PHP === Union types were introduced in PHP 8.0. The values are implicitly "tagged" with a type by the language, and may be retrieved by "gettype()". === Python === Support for type hints was introduced in Python 3.5, including union types via typing.Union. New syntax for union types, written int | str, was introduced in Python 3.10. === TypeScript === Union types are supported in TypeScript. The values are implicitly "tagged" with a type by the language, and may be retrieved using a typeof call for primitive values and an instanceof comparison for complex data types. Types with overlapping usage (e.g. a slice method exists on both strings and arrays, the plus operator works on both strings and numbers) don't need additional narrowing to use these features. === Rust === Tagged unions in Rust use the enum keyword, and can contain tuple and struct variants: Untagged unions in Rust use the union keyword: Reading from the fields of an untagged union results in undefined behavior if the data in the union is not valid as the type of the field, and thus requires an unsafe block: == References == Kernighan, Brian W.; Ritchie, Dennis M. (1978). The C Programming Language (1st ed.). Prentice Hall. p. 138. ISBN 978-0131101630. Retrieved Jan 23, 2018. == External links == boost::variant, a type-safe alternative to C++ unions MSDN: Classes, Structures & Unions, for examples and syntax differences between union and structure Difference between struct and union in C++
Wikipedia/Union_(computer_science)
In object-oriented programming, inheritance is the mechanism of basing an object or class upon another object (prototype-based inheritance) or class (class-based inheritance), retaining similar implementation. It is also defined as deriving new classes (subclasses) from existing ones (superclasses or base classes) and then forming them into a hierarchy of classes. In most class-based object-oriented languages like C++, an object created through inheritance, a "child object", acquires all the properties and behaviors of the "parent object", with the exception of: constructors, destructors, overloaded operators and friend functions of the base class. Inheritance allows programmers to create classes that are built upon existing classes, to specify a new implementation while maintaining the same behaviors (realizing an interface), to reuse code and to independently extend original software via public classes and interfaces. The relationships of objects or classes through inheritance give rise to a directed acyclic graph. An inherited class is called a subclass of its parent class or superclass. The term inheritance is loosely used for both class-based and prototype-based programming, but in narrow use the term is reserved for class-based programming (one class inherits from another), with the corresponding technique in prototype-based programming being instead called delegation (one object delegates to another). Class-modifying inheritance patterns can be pre-defined according to simple network interface parameters such that inter-language compatibility is preserved. Inheritance should not be confused with subtyping. In some languages inheritance and subtyping agree, whereas in others they differ; in general, subtyping establishes an is-a relationship, whereas inheritance only reuses implementation and establishes a syntactic relationship, not necessarily a semantic relationship (inheritance does not ensure behavioral subtyping). 
To distinguish these concepts, subtyping is sometimes referred to as interface inheritance (without acknowledging that the specialization of type variables also induces a subtyping relation), whereas inheritance as defined here is known as implementation inheritance or code inheritance. Still, inheritance is a commonly used mechanism for establishing subtype relationships. Inheritance is contrasted with object composition, where one object contains another object (or objects of one class contain objects of another class); see composition over inheritance. In contrast to subtyping’s is-a relationship, composition implements a has-a relationship. Mathematically speaking, inheritance in any system of classes induces a strict partial order on the set of classes in that system. == History == In 1966, Tony Hoare presented some remarks on records, and in particular, the idea of record subclasses, record types with common properties but discriminated by a variant tag and having fields private to the variant. Influenced by this, in 1967 Ole-Johan Dahl and Kristen Nygaard presented a design that allowed specifying objects that belonged to different classes but had common properties. The common properties were collected in a superclass, and each superclass could itself potentially have a superclass. The values of a subclass were thus compound objects, consisting of some number of prefix parts belonging to various superclasses, plus a main part belonging to the subclass. These parts were all concatenated together. The attributes of a compound object would be accessible by dot notation. This idea was first adopted in the Simula 67 programming language. The idea then spread to Smalltalk, C++, Java, Python, and many other languages. == Types == There are various types of inheritance, based on paradigm and specific language. Single inheritance where subclasses inherit the features of one superclass. A class acquires the properties of another class. 
Multiple inheritance where one class can have more than one superclass and inherit features from all parent classes. "Multiple inheritance ... was widely supposed to be very difficult to implement efficiently. For example, in a summary of C++ in his book on Objective C, Brad Cox actually claimed that adding multiple inheritance to C++ was impossible. Thus, multiple inheritance seemed more of a challenge. Since I had considered multiple inheritance as early as 1982 and found a simple and efficient implementation technique in 1984, I couldn't resist the challenge. I suspect this to be the only case in which fashion affected the sequence of events." Multilevel inheritance where a subclass is inherited from another subclass. It is not uncommon for a class to be derived from another derived class, as shown in the figure "Multilevel inheritance". The class A serves as a base class for the derived class B, which in turn serves as a base class for the derived class C. The class B is known as an intermediate base class because it provides a link for the inheritance between A and C. The chain ABC is known as the inheritance path. A derived class with multilevel inheritance is declared as follows: This process can be extended to any number of levels. Hierarchical inheritance This is where one class serves as a superclass (base class) for more than one subclass. For example, a parent class, A, can have two subclasses B and C. Both B and C's parent class is A, but B and C are two separate subclasses. Hybrid inheritance Hybrid inheritance is when a mix of two or more of the above types of inheritance occurs. An example of this is when a class A has a subclass B which has two subclasses, C and D. This is a mixture of both multilevel inheritance and hierarchical inheritance. 
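The multilevel chain just described, with A as the base of B and B in turn the base of C, can be sketched as follows (the who method is an illustrative addition):

```python
class A:          # base class
    def who(self):
        return "defined in A"

class B(A):       # intermediate base class: the link between A and C
    pass

class C(B):       # end of the inheritance path A -> B -> C
    pass

c = C()
print(c.who())           # inherited through two levels: defined in A
print(isinstance(c, A))  # True: a C is-an A via the chain
```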
== Subclasses and superclasses == Subclasses, derived classes, heir classes, or child classes are modular derivative classes that inherit one or more language entities from one or more other classes (called superclasses, base classes, or parent classes). The semantics of class inheritance vary from language to language, but commonly the subclass automatically inherits the instance variables and member functions of its superclasses. The general form of defining a derived class is: The colon indicates that the subclass inherits from the superclass. The visibility is optional and, if present, may be either private or public. The default visibility is private. Visibility specifies whether the features of the base class are privately derived or publicly derived. Some languages also support the inheritance of other constructs. For example, in Eiffel, contracts that define the specification of a class are also inherited by heirs. The superclass establishes a common interface and foundational functionality, which specialized subclasses can inherit, modify, and supplement. The software inherited by a subclass is considered reused in the subclass. A reference to an instance of a class may actually be referring to one of its subclasses. The actual class of the object being referenced is impossible to predict at compile-time. A uniform interface is used to invoke the member functions of objects of a number of different classes. Subclasses may replace superclass functions with entirely new functions that must share the same method signature. === Non-subclassable classes === In some languages a class may be declared as non-subclassable by adding certain class modifiers to the class declaration. Examples include the final keyword in Java and C++11 onwards or the sealed keyword in C#. Such modifiers are added to the class declaration before the class keyword and the class identifier declaration. 
Such non-subclassable classes restrict reusability, particularly when developers only have access to precompiled binaries and not source code. A non-subclassable class has no subclasses, so it can be easily deduced at compile time that references or pointers to objects of that class are actually referencing instances of that class and not instances of subclasses (they do not exist) or instances of superclasses (upcasting a reference type violates the type system). Because the exact type of the object being referenced is known before execution, early binding (also called static dispatch) can be used instead of late binding (also called dynamic dispatch), which requires one or more virtual method table lookups depending on whether multiple inheritance or only single inheritance is supported in the programming language that is being used. === Non-overridable methods === Just as classes may be non-subclassable, method declarations may contain method modifiers that prevent the method from being overridden (i.e. replaced with a new function with the same name and type signature in a subclass). A private method is un-overridable simply because it is not accessible by classes other than the class it is a member function of (this is not true for C++, though). A final method in Java, a sealed method in C# or a frozen feature in Eiffel cannot be overridden. === Virtual methods === If a superclass method is a virtual method, then invocations of the superclass method will be dynamically dispatched. Some languages require that methods be specifically declared as virtual (e.g. C++), and in others, all methods are virtual (e.g. Java). An invocation of a non-virtual method will always be statically dispatched (i.e. the address of the function call is determined at compile-time). Static dispatch is faster than dynamic dispatch and allows optimizations such as inline expansion. 
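Dynamic dispatch can be illustrated in Python, where, as in Java, every method is effectively virtual (the Shape and Square names are made up for this sketch):

```python
class Shape:
    def area(self):
        return 0.0

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):     # overrides Shape.area
        return float(self.side * self.side)

def total_area(shapes):
    # Each s.area() call is resolved at run time against the object's
    # actual class, not against the static type of the reference.
    return sum(s.area() for s in shapes)

print(total_area([Shape(), Square(3)]))  # 9.0
```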
== Visibility of inherited members == The following table shows which variables and functions get inherited depending on the visibility given when deriving the class, using the terminology established by C++. == Applications == Inheritance is used to relate two or more classes to each other. === Overriding === Many object-oriented programming languages permit a class or object to replace the implementation of an aspect—typically a behavior—that it has inherited. This process is called overriding. Overriding introduces a complication: which version of the behavior does an instance of the inherited class use—the one that is part of its own class, or the one from the parent (base) class? The answer varies between programming languages, and some languages provide the ability to indicate that a particular behavior is not to be overridden and should behave as defined by the base class. For instance, in C#, the base method or property can only be overridden in a subclass if it is marked with the virtual, abstract, or override modifier, while in programming languages such as Java, methods are overridable by default and the final modifier is used to forbid overriding. An alternative to overriding is hiding the inherited code. === Code reuse === Implementation inheritance is the mechanism whereby a subclass re-uses code in a base class. By default the subclass retains all of the operations of the base class, but the subclass may override some or all operations, replacing the base-class implementation with its own. In the following Python example, subclasses SquareSumComputer and CubeSumComputer override the transform() method of the base class SumComputer. The base class comprises operations to compute the sum of the squares between two integers. The subclasses re-use all of the functionality of the base class with the exception of the operation that transforms a number into its square, replacing it with operations that transform a number into its square and its cube, respectively. 
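The overriding example just described can be sketched as follows; the range convention and the method bodies are assumptions reconstructed from the description:

```python
class SumComputer:
    """Sums transform(x) over the integers x in [a, b)."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def transform(self, x):   # base operation: square the number
        return x * x

    def compute(self):
        return sum(self.transform(x) for x in range(self.a, self.b))

class SquareSumComputer(SumComputer):
    def transform(self, x):   # overrides with the same squaring operation
        return x * x

class CubeSumComputer(SumComputer):
    def transform(self, x):   # overrides: cube instead of square
        return x * x * x

print(SquareSumComputer(1, 4).compute())  # 1 + 4 + 9 = 14
print(CubeSumComputer(1, 4).compute())    # 1 + 8 + 27 = 36
```

Only transform() differs between the classes; compute() is reused unchanged from the base class, which is the code reuse the text describes.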
The subclasses therefore compute the sum of the squares/cubes between two integers. In most quarters, class inheritance for the sole purpose of code reuse has fallen out of favor. The primary concern is that implementation inheritance does not provide any assurance of polymorphic substitutability—an instance of the reusing class cannot necessarily be substituted for an instance of the inherited class. An alternative technique, explicit delegation, requires more programming effort, but avoids the substitutability issue. In C++ private inheritance can be used as a form of implementation inheritance without substitutability. Whereas public inheritance represents an "is-a" relationship and delegation represents a "has-a" relationship, private (and protected) inheritance can be thought of as an "is implemented in terms of" relationship. Another frequent use of inheritance is to guarantee that classes maintain a certain common interface; that is, they implement the same methods. The parent class can be a combination of implemented operations and operations that are to be implemented in the child classes. Often, there is no interface change between the supertype and subtype; the child class implements the behavior described instead of its parent class. == Inheritance vs subtyping == Inheritance is similar to but distinct from subtyping. Subtyping enables a given type to be substituted for another type or abstraction and is said to establish an is-a relationship between the subtype and some existing abstraction, either implicitly or explicitly, depending on language support. The relationship can be expressed explicitly via inheritance in languages that support inheritance as a subtyping mechanism. 
For example, the following C++ code establishes an explicit inheritance relationship between classes B and A, where B is both a subclass and a subtype of A: a B can be used wherever an A is expected (via a reference, a pointer or the object itself). In programming languages that do not support inheritance as a subtyping mechanism, the relationship between a base class and a derived class is only a relationship between implementations (a mechanism for code reuse), as compared to a relationship between types. Inheritance, even in programming languages that support inheritance as a subtyping mechanism, does not necessarily entail behavioral subtyping. It is entirely possible to derive a class whose object will behave incorrectly when used in a context where the parent class is expected; see the Liskov substitution principle. (Compare connotation/denotation.) In some OOP languages, the notions of code reuse and subtyping coincide because the only way to declare a subtype is to define a new class that inherits the implementation of another. === Design constraints === Using inheritance extensively in designing a program imposes certain constraints. For example, consider a class Person that contains a person's name, date of birth, address and phone number. We can define a subclass of Person called Student that contains the person's grade point average and classes taken, and another subclass of Person called Employee that contains the person's job-title, employer, and salary. In defining this inheritance hierarchy we have already defined certain restrictions, not all of which are desirable: Singleness Using single inheritance, a subclass can inherit from only one superclass. Continuing the example given above, a Person object can be either a Student or an Employee, but not both. Using multiple inheritance partially solves this problem, as one can then define a StudentEmployee class that inherits from both Student and Employee. 
However, in most implementations, it can still inherit from each superclass only once, and thus, does not support cases in which a student has two jobs or attends two institutions. The inheritance model available in Eiffel makes this possible through support for repeated inheritance. Static The inheritance hierarchy of an object is fixed at instantiation when the object's type is selected and does not change with time. For example, the inheritance graph does not allow a Student object to become an Employee object while retaining the state of its Person superclass. (This kind of behavior, however, can be achieved with the decorator pattern.) Some have criticized inheritance, contending that it locks developers into their original design standards. Visibility Whenever client code has access to an object, it generally has access to all the object's superclass data. Even if the superclass has not been declared public, the client can still cast the object to its superclass type. For example, there is no way to give a function a pointer to a Student's grade point average and transcript without also giving that function access to all of the personal data stored in the student's Person superclass. Many modern languages, including C++ and Java, provide a "protected" access modifier that allows subclasses to access the data, without allowing any code outside the chain of inheritance to access it. The composite reuse principle is an alternative to inheritance. This technique supports polymorphism and code reuse by separating behaviors from the primary class hierarchy and including specific behavior classes as required in any business domain class. This approach avoids the static nature of a class hierarchy by allowing behavior modifications at run time and allows one class to implement behaviors buffet-style, instead of being restricted to the behaviors of its ancestor classes. 
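The composite reuse principle just described can be sketched as follows: behavior objects are attached to a plain Person at run time rather than being fixed by its ancestry (all names are illustrative):

```python
class StudentRole:
    def describe(self):
        return "student"

class EmployeeRole:
    def describe(self):
        return "employee"

class Person:
    def __init__(self, name, roles=()):
        self.name = name
        self.roles = list(roles)   # behaviors chosen "buffet-style"

    def add_role(self, role):      # variations can be added at run time
        self.roles.append(role)

    def describe(self):
        return [r.describe() for r in self.roles]

p = Person("Avery", [StudentRole()])
p.add_role(EmployeeRole())  # a Person can now be both, unlike with single inheritance
print(p.describe())         # ['student', 'employee']
```

This sidesteps the Singleness and Static constraints above: roles can be combined freely and changed while the object lives, at the cost of losing the automatic is-a relationship that inheritance provides.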
== Issues and alternatives == Implementation inheritance has been controversial among programmers and theoreticians of object-oriented programming since at least the 1990s. Among the critics are the authors of Design Patterns, who advocate instead for interface inheritance, and favor composition over inheritance. For example, the decorator pattern (as mentioned above) has been proposed to overcome the static nature of inheritance between classes. As a more fundamental solution to the same problem, role-oriented programming introduces a distinct relationship, played-by, combining properties of inheritance and composition into a new concept. According to Allen Holub, the main problem with implementation inheritance is that it introduces unnecessary coupling in the form of the "fragile base class problem": modifications to the base class implementation can cause inadvertent behavioral changes in subclasses. Using interfaces avoids this problem because no implementation is shared, only the API. Another way of stating this is that "inheritance breaks encapsulation". The problem surfaces clearly in open object-oriented systems such as frameworks, where client code is expected to inherit from system-supplied classes and then be substituted for the system's classes in its algorithms. Reportedly, Java inventor James Gosling has spoken against implementation inheritance, stating that he would not include it if he were to redesign Java. Language designs that decouple inheritance from subtyping (interface inheritance) appeared as early as 1990; a modern example of this is the Go programming language. Complex inheritance, or inheritance used within an insufficiently mature design, may lead to the yo-yo problem. When inheritance was used as a primary approach to structure programs in the late 1990s, developers tended to break code into more layers of inheritance as the system functionality grew. 
If a development team combined multiple layers of inheritance with the single responsibility principle, this resulted in many very thin layers of code, with many layers consisting of only one or two lines of actual code. Too many layers make debugging a significant challenge, as it becomes hard to determine which layer needs to be debugged. Another issue with inheritance is that subclasses must be defined in code, which means that program users cannot add new subclasses at runtime. Other design patterns (such as entity–component–system) allow program users to define variations of an entity at runtime. == See also == Archetype pattern – Software design pattern Circle–ellipse problem Defeasible reasoning – Reasoning that is rationally compelling, though not deductively valid Interface (computing) – Shared boundary between elements of a computing system Method overriding – Language feature in object-oriented programming Mixin – Class in object-oriented programming languages Polymorphism (computer science) – Using one interface or symbol with regards to multiple different types Protocol – Abstraction of a class Role-oriented programming – Programming paradigm based on conceptual understanding of objects Trait (computer programming) – Set of methods that extend the functionality of a class Virtual inheritance – Technique in the C++ language == Notes == == References == == Further reading == Meyer, Bertrand (1997). "24. Using Inheritance Well" (PDF). Object-Oriented Software Construction (2nd ed.). Prentice Hall. pp. 809–870. ISBN 978-0136291558. Samokhin, Vadim (2017). "Implementation Inheritance Is Evil". HackerNoon. Medium.
Wikipedia/Inheritance_(computer_science)
In object-oriented programming, inheritance is the mechanism of basing an object or class upon another object (prototype-based inheritance) or class (class-based inheritance), retaining similar implementation. It can also be described as deriving new classes (subclasses) from existing ones (superclasses or base classes) and forming them into a hierarchy of classes. In most class-based object-oriented languages like C++, an object created through inheritance, a "child object", acquires all the properties and behaviors of the "parent object", with the exception of constructors, destructors, overloaded operators and friend functions of the base class. Inheritance allows programmers to create classes that are built upon existing classes, to specify a new implementation while maintaining the same behaviors (realizing an interface), to reuse code and to independently extend original software via public classes and interfaces. The relationships of objects or classes through inheritance give rise to a directed acyclic graph. An inherited class is called a subclass of its parent class or superclass. The term inheritance is loosely used for both class-based and prototype-based programming, but in narrow use it is reserved for class-based programming (one class inherits from another), with the corresponding technique in prototype-based programming instead called delegation (one object delegates to another). Inheritance should not be confused with subtyping. In some languages inheritance and subtyping agree, whereas in others they differ; in general, subtyping establishes an is-a relationship, whereas inheritance only reuses implementation and establishes a syntactic relationship, not necessarily a semantic relationship (inheritance does not ensure behavioral subtyping). 
To distinguish these concepts, subtyping is sometimes referred to as interface inheritance (without acknowledging that the specialization of type variables also induces a subtyping relation), whereas inheritance as defined here is known as implementation inheritance or code inheritance. Still, inheritance is a commonly used mechanism for establishing subtype relationships. Inheritance is contrasted with object composition, where one object contains another object (or objects of one class contain objects of another class); see composition over inheritance. In contrast to subtyping’s is-a relationship, composition implements a has-a relationship. Mathematically speaking, inheritance in any system of classes induces a strict partial order on the set of classes in that system. == History == In 1966, Tony Hoare presented some remarks on records, and in particular, the idea of record subclasses, record types with common properties but discriminated by a variant tag and having fields private to the variant. Influenced by this, in 1967 Ole-Johan Dahl and Kristen Nygaard presented a design that allowed specifying objects that belonged to different classes but had common properties. The common properties were collected in a superclass, and each superclass could itself potentially have a superclass. The values of a subclass were thus compound objects, consisting of some number of prefix parts belonging to various superclasses, plus a main part belonging to the subclass. These parts were all concatenated together. The attributes of a compound object would be accessible by dot notation. This idea was first adopted in the Simula 67 programming language. The idea then spread to Smalltalk, C++, Java, Python, and many other languages. == Types == There are various types of inheritance, based on paradigm and specific language. Single inheritance where subclasses inherit the features of one superclass. A class acquires the properties of another class. 
Multiple inheritance where one class can have more than one superclass and inherit features from all parent classes. "Multiple inheritance ... was widely supposed to be very difficult to implement efficiently. For example, in a summary of C++ in his book on Objective C, Brad Cox actually claimed that adding multiple inheritance to C++ was impossible. Thus, multiple inheritance seemed more of a challenge. Since I had considered multiple inheritance as early as 1982 and found a simple and efficient implementation technique in 1984, I couldn't resist the challenge. I suspect this to be the only case in which fashion affected the sequence of events." Multilevel inheritance where a subclass is derived from another subclass. It is not uncommon that a class is derived from another derived class as shown in the figure "Multilevel inheritance". The class A serves as a base class for the derived class B, which in turn serves as a base class for the derived class C. The class B is known as an intermediate base class because it provides a link for the inheritance between A and C. The chain ABC is known as the inheritance path. A derived class with multilevel inheritance is declared by naming only its immediate base class; this process can be extended to any number of levels. Hierarchical inheritance This is where one class serves as a superclass (base class) for more than one subclass. For example, a parent class, A, can have two subclasses B and C. Both B and C's parent class is A, but B and C are two separate subclasses. Hybrid inheritance Hybrid inheritance is when a mix of two or more of the above types of inheritance occurs. An example of this is when a class A has a subclass B which has two subclasses, C and D. This is a mixture of both multilevel inheritance and hierarchical inheritance. 
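The multilevel path described above, with A as the base class, B as the intermediate base class, and C at the end of the inheritance path, might be sketched in C++ as follows (the member names are illustrative, not from the original):

```cpp
#include <cassert>

class A {                 // base class
public:
    int level() const { return 1; }
};

class B : public A {      // intermediate base class: links A and C
public:
    int mid_level() const { return level() + 1; }
};

class C : public B {      // end of the inheritance path A -> B -> C
public:
    int top_level() const { return mid_level() + 1; }
};
```

A C object can call members inherited from both A and B, since each level of the path contributes its features.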
== Subclasses and superclasses == Subclasses, derived classes, heir classes, or child classes are modular derivative classes that inherit one or more language entities from one or more other classes (called superclasses, base classes, or parent classes). The semantics of class inheritance vary from language to language, but commonly the subclass automatically inherits the instance variables and member functions of its superclasses. In the general form of defining a derived class, the subclass name is followed by a colon and the superclass name; the colon indicates that the subclass inherits from the superclass. The visibility is optional and, if present, may be either private or public. The default visibility is private. Visibility specifies whether the features of the base class are privately derived or publicly derived. Some languages also support the inheritance of other constructs. For example, in Eiffel, contracts that define the specification of a class are also inherited by heirs. The superclass establishes a common interface and foundational functionality, which specialized subclasses can inherit, modify, and supplement. The software inherited by a subclass is considered reused in the subclass. A reference to an instance of a class may actually be referring to one of its subclasses. The actual class of the object being referenced is impossible to predict at compile-time. A uniform interface is used to invoke the member functions of objects of a number of different classes. Subclasses may replace superclass functions with entirely new functions that must share the same method signature. === Non-subclassable classes === In some languages a class may be declared as non-subclassable by adding certain class modifiers to the class declaration. Examples include the final keyword in Java and C++11 onwards or the sealed keyword in C#. In Java and C#, such modifiers precede the class keyword, while in C++ the final specifier follows the class name. 
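A brief C++ sketch of both the derivation syntax and the final modifier described above (the class names are hypothetical):

```cpp
#include <cassert>

class Base {
public:
    int value() const { return 42; }
};

// The colon introduces the superclass; "public" is the visibility.
class Derived final : public Base { };   // final (C++11): Derived cannot be subclassed

// class MoreDerived : public Derived { };  // would not compile: Derived is marked final
```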
Such non-subclassable classes restrict reusability, particularly when developers only have access to precompiled binaries and not source code. A non-subclassable class has no subclasses, so it can be easily deduced at compile time that references or pointers to objects of that class are actually referencing instances of that class and not instances of subclasses (they do not exist) or instances of superclasses (upcasting a reference type violates the type system). Because the exact type of the object being referenced is known before execution, early binding (also called static dispatch) can be used instead of late binding (also called dynamic dispatch), which requires one or more virtual method table lookups depending on whether multiple inheritance or only single inheritance is supported in the programming language being used. === Non-overridable methods === Just as classes may be non-subclassable, method declarations may contain method modifiers that prevent the method from being overridden (i.e. replaced with a new function with the same name and type signature in a subclass). A private method is non-overridable simply because it is not accessible by classes other than the class it is a member function of (this is not true for C++, though). A final method in Java, a sealed method in C# or a frozen feature in Eiffel cannot be overridden. === Virtual methods === If a superclass method is a virtual method, then invocations of the superclass method will be dynamically dispatched. Some languages require that a method be specifically declared as virtual (e.g. C++), and in others all non-final instance methods are virtual (e.g. Java). An invocation of a non-virtual method will always be statically dispatched (i.e. the address of the function call is determined at compile-time). Static dispatch is faster than dynamic dispatch and allows optimizations such as inline expansion. 
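The contrast between virtual (late-bound) and non-virtual (early-bound) member functions can be sketched in C++ as follows; the class and function names are illustrative:

```cpp
#include <cassert>

class Animal {
public:
    virtual ~Animal() = default;
    virtual const char* sound() const { return "..."; }  // virtual: dispatched at run time
    const char* kind() const { return "animal"; }        // non-virtual: resolved at compile time
};

class Dog final : public Animal {
public:
    const char* sound() const override { return "woof"; }
};

const char* make_sound(const Animal& a) {
    return a.sound();   // late binding: calls Dog::sound when a refers to a Dog
}
```

Through an Animal reference, sound() is dispatched to the dynamic type, while kind() is always Animal's version.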
== Visibility of inherited members == The following table shows which variables and functions are inherited, depending on the visibility given when deriving the class, using the terminology established by C++. == Applications == Inheritance is used to relate two or more classes to each other. === Overriding === Many object-oriented programming languages permit a class or object to replace the implementation of an aspect—typically a behavior—that it has inherited. This process is called overriding. Overriding introduces a complication: which version of the behavior does an instance of the inherited class use—the one that is part of its own class, or the one from the parent (base) class? The answer varies between programming languages, and some languages provide the ability to indicate that a particular behavior is not to be overridden and should behave as defined by the base class. For instance, in C#, a base method or property can only be overridden in a subclass if it is marked with the virtual or abstract modifier, and the overriding member must be marked override; in programming languages such as Java, by contrast, instance methods are overridable by default unless declared final. An alternative to overriding is hiding the inherited code. === Code reuse === Implementation inheritance is the mechanism whereby a subclass re-uses code in a base class. By default the subclass retains all of the operations of the base class, but the subclass may override some or all operations, replacing the base-class implementation with its own. Consider a Python example in which subclasses SquareSumComputer and CubeSumComputer override the transform() method of the base class SumComputer. The base class comprises operations to compute the sum of the squares between two integers. The subclasses re-use all of the functionality of the base class with the exception of the operation that transforms a number into its square, replacing it with operations that transform a number into its square and its cube, respectively. 
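A sketch of this example in Python follows; only the class names and the overridden transform() method come from the text, so the remaining method names and the half-open range convention are assumptions:

```python
class SumComputer:
    """Sums transform(i) for integers i in the half-open range [low, high)."""
    def __init__(self, low, high):
        self.low, self.high = low, high

    def transform(self, x):
        raise NotImplementedError("subclasses supply the transformation")

    def input_sum(self):
        return sum(self.transform(i) for i in range(self.low, self.high))


class SquareSumComputer(SumComputer):
    def transform(self, x):
        return x * x          # overrides the base transformation


class CubeSumComputer(SumComputer):
    def transform(self, x):
        return x * x * x
```

For example, SquareSumComputer(1, 4).input_sum() yields 1 + 4 + 9 = 14, and CubeSumComputer(1, 4).input_sum() yields 1 + 8 + 27 = 36.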
The subclasses therefore compute the sum of the squares/cubes between two integers. In most quarters, class inheritance for the sole purpose of code reuse has fallen out of favor. The primary concern is that implementation inheritance does not provide any assurance of polymorphic substitutability—an instance of the reusing class cannot necessarily be substituted for an instance of the inherited class. An alternative technique, explicit delegation, requires more programming effort, but avoids the substitutability issue. In C++ private inheritance can be used as a form of implementation inheritance without substitutability. Whereas public inheritance represents an "is-a" relationship and delegation represents a "has-a" relationship, private (and protected) inheritance can be thought of as an "is implemented in terms of" relationship. Another frequent use of inheritance is to guarantee that classes maintain a certain common interface; that is, they implement the same methods. The parent class can be a combination of implemented operations and operations that are to be implemented in the child classes. Often, there is no interface change between the supertype and subtype; the child implements the described behavior instead of its parent class. == Inheritance vs subtyping == Inheritance is similar to but distinct from subtyping. Subtyping enables a given type to be substituted for another type or abstraction and is said to establish an is-a relationship between the subtype and some existing abstraction, either implicitly or explicitly, depending on language support. The relationship can be expressed explicitly via inheritance in languages that support inheritance as a subtyping mechanism. 
For example, declaring a C++ class B with A as a public base class establishes an explicit inheritance relationship in which B is both a subclass and a subtype of A, and a B can be used as an A wherever an A is specified (via a reference, a pointer or the object itself). In programming languages that do not support inheritance as a subtyping mechanism, the relationship between a base class and a derived class is only a relationship between implementations (a mechanism for code reuse), as compared to a relationship between types. Inheritance, even in programming languages that support inheritance as a subtyping mechanism, does not necessarily entail behavioral subtyping. It is entirely possible to derive a class whose object will behave incorrectly when used in a context where the parent class is expected; see the Liskov substitution principle. (Compare connotation/denotation.) In some OOP languages, the notions of code reuse and subtyping coincide because the only way to declare a subtype is to define a new class that inherits the implementation of another. === Design constraints === Using inheritance extensively in designing a program imposes certain constraints. For example, consider a class Person that contains a person's name, date of birth, address and phone number. We can define a subclass of Person called Student that contains the person's grade point average and classes taken, and another subclass of Person called Employee that contains the person's job-title, employer, and salary. In defining this inheritance hierarchy we have already defined certain restrictions, not all of which are desirable: Singleness Using single inheritance, a subclass can inherit from only one superclass. Continuing the example given above, a Person object can be either a Student or an Employee, but not both. Using multiple inheritance partially solves this problem, as one can then define a StudentEmployee class that inherits from both Student and Employee. 
However, in most implementations, it can still inherit from each superclass only once, and thus, does not support cases in which a student has two jobs or attends two institutions. The inheritance model available in Eiffel makes this possible through support for repeated inheritance. Static The inheritance hierarchy of an object is fixed at instantiation when the object's type is selected and does not change with time. For example, the inheritance graph does not allow a Student object to become an Employee object while retaining the state of its Person superclass. (This kind of behavior, however, can be achieved with the decorator pattern.) Some have criticized inheritance, contending that it locks developers into their original design standards. Visibility Whenever client code has access to an object, it generally has access to all the object's superclass data. Even if the superclass has not been declared public, the client can still cast the object to its superclass type. For example, there is no way to give a function a pointer to a Student's grade point average and transcript without also giving that function access to all of the personal data stored in the student's Person superclass. Many modern languages, including C++ and Java, provide a "protected" access modifier that allows subclasses to access the data, without allowing any code outside the chain of inheritance to access it. The composite reuse principle is an alternative to inheritance. This technique supports polymorphism and code reuse by separating behaviors from the primary class hierarchy and including specific behavior classes as required in any business domain class. This approach avoids the static nature of a class hierarchy by allowing behavior modifications at run time and allows one class to implement behaviors buffet-style, instead of being restricted to the behaviors of its ancestor classes. 
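The composite reuse approach can be sketched in Python; the class and attribute names below are hypothetical, chosen to mirror the Person/Student/Employee example:

```python
class StudentRole:
    """Behavior class holding student-specific state."""
    def __init__(self, gpa, classes_taken):
        self.gpa = gpa
        self.classes_taken = classes_taken


class EmployeeRole:
    """Behavior class holding employment-specific state."""
    def __init__(self, job_title, salary):
        self.job_title = job_title
        self.salary = salary


class Person:
    def __init__(self, name):
        self.name = name
        self.roles = {}               # behavior classes composed in as needed

    def add_role(self, key, role):
        self.roles[key] = role        # roles can be added or replaced at run time
```

Because roles are held by composition rather than fixed in the class hierarchy, one Person can hold a StudentRole and several EmployeeRoles at once, and gain or lose them at run time, sidestepping both the singleness and the static constraints described above.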
== Issues and alternatives == Implementation inheritance has been controversial among programmers and theoreticians of object-oriented programming since at least the 1990s. Among the critics are the authors of Design Patterns, who advocate instead for interface inheritance, and favor composition over inheritance. For example, the decorator pattern (as mentioned above) has been proposed to overcome the static nature of inheritance between classes. As a more fundamental solution to the same problem, role-oriented programming introduces a distinct relationship, played-by, combining properties of inheritance and composition into a new concept. According to Allen Holub, the main problem with implementation inheritance is that it introduces unnecessary coupling in the form of the "fragile base class problem": modifications to the base class implementation can cause inadvertent behavioral changes in subclasses. Using interfaces avoids this problem because no implementation is shared, only the API. Another way of stating this is that "inheritance breaks encapsulation". The problem surfaces clearly in open object-oriented systems such as frameworks, where client code is expected to inherit from system-supplied classes and then be substituted for the system's classes in its algorithms. Reportedly, Java inventor James Gosling has spoken against implementation inheritance, stating that he would not include it if he were to redesign Java. Language designs that decouple inheritance from subtyping (interface inheritance) appeared as early as 1990; a modern example of this is the Go programming language. Complex inheritance, or inheritance used within an insufficiently mature design, may lead to the yo-yo problem. When inheritance was used as a primary approach to structure programs in the late 1990s, developers tended to break code into more layers of inheritance as the system functionality grew. 
If a development team combined multiple layers of inheritance with the single responsibility principle, this resulted in many very thin layers of code, with many layers consisting of only 1 or 2 lines of actual code. Too many layers make debugging a significant challenge, as it becomes hard to determine which layer needs to be debugged. Another issue with inheritance is that subclasses must be defined in code, which means that program users cannot add new subclasses at runtime. Other design patterns (such as Entity–component–system) allow program users to define variations of an entity at runtime. == See also == Archetype pattern – Software design pattern Circle–ellipse problem Defeasible reasoning – Reasoning that is rationally compelling, though not deductively valid Interface (computing) – Shared boundary between elements of a computing system Method overriding – Language feature in object-oriented programming Mixin – Class in object-oriented programming languages Polymorphism (computer science) – Using one interface or symbol with regards to multiple different types Protocol – Abstraction of a class Role-oriented programming – Programming paradigm based on conceptual understanding of objects Trait (computer programming) – Set of methods that extend the functionality of a class Virtual inheritance – Technique in the C++ language == Notes == == References == == Further reading == Meyer, Bertrand (1997). "24. Using Inheritance Well" (PDF). Object-Oriented Software Construction (2nd ed.). Prentice Hall. pp. 809–870. ISBN 978-0136291558. Samokhin, Vadim (2017). "Implementation Inheritance Is Evil". HackerNoon. Medium.
Wikipedia/Subclass_(computer_science)
In computer programming, a parameter, also known as a formal argument, is a variable that represents an argument (also called an actual argument or actual parameter) to a subroutine call. A function's signature defines its parameters. A call involves evaluating each argument expression of the call and associating the result with the corresponding parameter. For example, consider the subroutine def add(x, y): return x + y. Variables x and y are parameters. For the call add(2, 3), the expressions 2 and 3 are arguments. For the call add(a+1, b+2), the arguments are a+1 and b+2. Parameter passing is defined by a programming language. Evaluation strategy defines the semantics for how parameters can be declared and how arguments are passed to a subroutine. Generally, with call by value, a parameter acts like a new, local variable initialized to the value of the argument. If the argument is a variable, the subroutine cannot modify the argument state because the parameter is a copy. With call by reference, which requires the argument to be a variable, the parameter is an alias of the argument. == Example == Consider a program that defines a function named SalesTax with one parameter named price; both the parameter and the return value are typed double. For the call SalesTax(10.00), the argument 10.00 is evaluated to a double value and assigned to the parameter variable price. The function is executed and returns the value 0.5. == Parameters and arguments == The terms parameter and argument may have different meanings in different programming languages. Sometimes they are used interchangeably, and the context is used to distinguish the meaning. The term parameter (sometimes called formal parameter) is often used to refer to the variable as found in the function declaration, while argument (sometimes called actual parameter) refers to the actual input supplied at a function call statement. For example, if one defines a function as def f(x): ..., then x is the parameter, and if it is called by a = ...; f(a) then a is the argument. 
A parameter is an (unbound) variable, while the argument can be a literal or variable or more complex expression involving literals and variables. In case of call by value, what is passed to the function is the value of the argument – for example, f(2) and a = 2; f(a) are equivalent calls – while in call by reference, with a variable as argument, what is passed is a reference to that variable, even though the syntax for the function call could stay the same. The specification for pass-by-reference or pass-by-value would be made in the function declaration and/or definition. Parameters appear in procedure definitions; arguments appear in procedure calls. In the function definition f(x) = x*x the variable x is a parameter; in the function call f(2) the value 2 is the argument of the function. Loosely, a parameter is a type, and an argument is an instance. A parameter is an intrinsic property of the procedure, included in its definition. For example, in many languages, a procedure to add two supplied integers together and calculate the sum would need two parameters, one for each integer. In general, a procedure may be defined with any number of parameters, or no parameters at all. If a procedure has parameters, the part of its definition that specifies the parameters is called its parameter list. By contrast, the arguments are the expressions supplied to the procedure when it is called, usually one expression matching one of the parameters. Unlike the parameters, which form an unchanging part of the procedure's definition, the arguments may vary from call to call. Each time a procedure is called, the part of the procedure call that specifies the arguments is called the argument list. Although parameters are also commonly referred to as arguments, arguments are sometimes thought of as the actual values or references assigned to the parameter variables when the subroutine is called at run-time. 
When discussing code that is calling into a subroutine, any values or references passed into the subroutine are the arguments, and the place in the code where these values or references are given is the parameter list. When discussing the code inside the subroutine definition, the variables in the subroutine's parameter list are the parameters, while the values of the parameters at runtime are the arguments. For example, in C, when dealing with threads it is common to pass in an argument of type void* and cast it to an expected type. To better understand the difference, consider a function Sum written in C with two parameters, named addend1 and addend2. It adds the values passed into the parameters, and returns the result to the subroutine's caller (using a technique automatically supplied by the C compiler). In code that calls the Sum function, variables value1 and value2 are initialized with values; value1 and value2 are both arguments to the Sum function in this context. At runtime, the values assigned to these variables are passed to the function Sum as arguments. In the Sum function, the parameters addend1 and addend2 take on the argument values 40 and 2, respectively. The values of the arguments are added, and the result is returned to the caller, where it is assigned to the variable sum_value. Because of the difference between parameters and arguments, it is possible to supply inappropriate arguments to a procedure. The call may supply too many or too few arguments; one or more of the arguments may be a wrong type; or arguments may be supplied in the wrong order. Any of these situations causes a mismatch between the parameter and argument lists, and the procedure will often return an unintended answer or generate a runtime error. 
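The SalesTax and Sum examples described above might be written in C roughly as follows. The 5% tax rate is an assumption inferred from the stated result (SalesTax(10.00) returning 0.5), and thread_arg_demo is a hypothetical illustration of casting a void* argument:

```c
#include <assert.h>

/* One parameter named price; parameter and return value are typed double.
   The 0.05 rate is an assumption based on the stated result 10.00 -> 0.5. */
double SalesTax(double price) {
    return 0.05 * price;
}

/* Two parameters, addend1 and addend2; the sum is returned to the caller. */
int Sum(int addend1, int addend2) {
    return addend1 + addend2;
}

/* Thread-style callback: the void* argument is cast back to the expected type. */
int thread_arg_demo(void *arg) {
    int n = *(int *)arg;
    return n;
}

int call_sum(void) {
    int value1 = 40;                      /* value1 and value2 are arguments */
    int value2 = 2;
    int sum_value = Sum(value1, value2);  /* parameters take the values 40 and 2 */
    return sum_value;
}
```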
=== Alternative convention in Eiffel === Within the Eiffel software development method and language, the terms argument and parameter have distinct uses established by convention. The term argument is used exclusively in reference to a routine's inputs, and the term parameter is used exclusively in type parameterization for generic classes. A routine sum, for example, might take two arguments addend1 and addend2, which are called the routine's formal arguments. A call to sum specifies actual arguments, such as value1 and value2. Parameters are also thought of as either formal or actual. Formal generic parameters are used in the definition of generic classes. For example, the class HASH_TABLE is declared as a generic class which has two formal generic parameters, G representing data of interest and K representing the hash key for the data. When a class becomes a client to HASH_TABLE, the formal generic parameters are substituted with actual generic parameters in a generic derivation. An attribute my_dictionary intended to be used as a character string based dictionary would, as such, have both the data and key formal generic parameters substituted with actual generic parameters of type STRING. == Datatypes == In strongly typed programming languages, each parameter's type must be specified in the procedure declaration. Languages using type inference attempt to discover the types automatically from the function's body and usage. Dynamically typed programming languages defer type resolution until run-time. Weakly typed languages perform little to no type resolution, relying instead on the programmer for correctness. Some languages use a special keyword (e.g. void) to indicate that the subroutine has no parameters; in formal type theory, such functions take an empty parameter list (whose type is not void, but rather unit). 
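These Eiffel declarations might be sketched as follows. This is an illustration only: the fragments would live in separate classes in real Eiffel code, and the formal generic parameters of the actual library class HASH_TABLE are constrained somewhat differently:

```eiffel
sum (addend1, addend2: INTEGER): INTEGER
		-- Routine with formal arguments addend1 and addend2.
	do
		Result := addend1 + addend2
	end

	-- A call supplies actual arguments:
	-- total := sum (value1, value2)

class HASH_TABLE [G, K -> HASHABLE]
		-- Generic class with formal generic parameters G (data of interest)
		-- and K (hash key for the data).
	...
end

	-- Generic derivation: actual generic parameters replace G and K.
my_dictionary: HASH_TABLE [STRING, STRING]
```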
== Argument passing == The exact mechanism for assigning arguments to parameters, called argument passing, depends upon the evaluation strategy used for that parameter (typically call by value), which may be specified using keywords. === Default arguments === Some programming languages such as Ada, C++, Clojure, Common Lisp, Fortran 90, Python, Ruby, Tcl, and Windows PowerShell allow for a default argument to be explicitly or implicitly given in a subroutine's declaration. This allows the caller to omit that argument when calling the subroutine. If the default argument is explicitly given, then that value is used if it is not provided by the caller. If the default argument is implicit (sometimes by using a keyword such as Optional) then the language provides a well-known value (such as null, Empty, zero, an empty string, etc.) if a value is not provided by the caller. Default arguments can be seen as a special case of the variable-length argument list. === Variable-length parameter lists === Some languages allow subroutines to be defined to accept a variable number of arguments. For such languages, the subroutines must iterate through the list of arguments. === Named parameters === Some programming languages—such as Ada and Windows PowerShell—allow subroutines to have named parameters. This allows the calling code to be more self-documenting. It also provides more flexibility to the caller, often allowing the order of the arguments to be changed, or for arguments to be omitted as needed. === Multiple parameters in functional languages === In lambda calculus, each function has exactly one parameter. What is thought of as functions with multiple parameters is usually represented in lambda calculus as a function which takes the first argument, and returns a function which takes the rest of the arguments; this is a transformation known as currying. 
Some programming languages, like ML and Haskell, follow this scheme. In these languages, every function has exactly one parameter, and what may look like the definition of a function of multiple parameters, is actually syntactic sugar for the definition of a function that returns a function, etc. Function application is left-associative in these languages as well as in lambda calculus, so what looks like an application of a function to multiple arguments is correctly evaluated as the function applied to the first argument, then the resulting function applied to the second argument, etc. == Output parameters == An output parameter, also known as an out parameter or return parameter, is a parameter used for output, rather than the more usual use for input. Using call by reference parameters, or call by value parameters where the value is a reference, as output parameters is an idiom in some languages, notably C and C++, while other languages have built-in support for output parameters. Languages with built-in support for output parameters include Ada (see Ada subprograms), Fortran (since Fortran 90; see Fortran "intent"), various procedural extensions to SQL, such as PL/SQL (see PL/SQL functions) and Transact-SQL, C# and the .NET Framework, Swift, and the scripting language TScript (see TScript function declarations). More precisely, one may distinguish three types of parameters or parameter modes: input parameters, output parameters, and input/output parameters; these are often denoted in, out, and in out or inout. An input argument (the argument to an input parameter) must be a value, such as an initialized variable or literal, and must not be redefined or assigned to; an output argument must be an assignable variable, but it need not be initialized, any existing value is not accessible, and must be assigned a value; and an input/output argument must be an initialized, assignable variable, and can optionally be assigned a value. 
The exact requirements and enforcement vary between languages – for example, in Ada 83 output parameters can only be assigned to, not read, even after assignment (this was removed in Ada 95 to remove the need for an auxiliary accumulator variable). These are analogous to the notion of a value in an expression being an r-value (has a value), an l-value (can be assigned), or an r-value/l-value (has a value and can be assigned), respectively, though these terms have specialized meanings in C. In some cases only input and input/output are distinguished, with output being considered a specific use of input/output, and in other cases only input and output (but not input/output) are supported. The default mode varies between languages: in Fortran 90 input/output is default, while in C# and SQL extensions input is default, and in TScript each parameter is explicitly specified as input or output. Syntactically, parameter mode is generally indicated with a keyword in the function declaration, such as void f(out int x) in C#. Conventionally output parameters are often put at the end of the parameter list to clearly distinguish them, though this is not always followed. TScript uses a different approach, where in the function declaration input parameters are listed, then output parameters, separated by a colon (:) and there is no return type to the function itself, as in this function, which computes the size of a text fragment: Parameter modes are a form of denotational semantics, stating the programmer's intent and allowing compilers to catch errors and apply optimizations – they do not necessarily imply operational semantics (how the parameter passing actually occurs). Notably, while input parameters can be implemented by call by value, and output and input/output parameters by call by reference – and this is a straightforward way to implement these modes in languages without built-in support – this is not always how they are implemented. 
This distinction is discussed in detail in the Ada '83 Rationale, which emphasizes that the parameter mode is abstracted from which parameter passing mechanism (by reference or by copy) is actually implemented. For instance, while in C# input parameters (default, no keyword) are passed by value, and output and input/output parameters (out and ref) are passed by reference, in PL/SQL input parameters (IN) are passed by reference, and output and input/output parameters (OUT and IN OUT) are by default passed by value and the result copied back, but can be passed by reference by using the NOCOPY compiler hint. A syntactically similar construction to output parameters is to assign the return value to a variable with the same name as the function. This is found in Pascal and Fortran 66 and Fortran 77, as in this Pascal example: This is semantically different in that when called, the function is simply evaluated – it is not passed a variable from the calling scope to store the output in. === Use === The primary use of output parameters is to return multiple values from a function, while the use of input/output parameters is to modify state using parameter passing (rather than by shared environment, as in global variables). An important use of returning multiple values is to solve the semipredicate problem of returning both a value and an error status – see Semipredicate problem: Multivalued return. For example, to return two variables from a function in C, one may write: where x is an input parameter and width and height are output parameters. A common use case in C and related languages is for exception handling, where a function places the return value in an output variable, and returns a Boolean corresponding to whether the function succeeded or not. An archetypal example is the TryParse method in .NET, especially C#, which parses a string into an integer, returning true on success and false on failure. 
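The C listing for returning width and height through output parameters is not reproduced above. Python has no true output parameters, but as a hedged sketch the idiom can be approximated by passing a mutable container that the callee writes into (the function and field names here are hypothetical):

```python
# Emulating C-style output parameters with a mutable container.
# The caller supplies a dict; the callee writes its "out" values into it,
# roughly analogous to passing &width, &height in C.
def get_dimensions(x, out):
    out["width"] = x * 2     # hypothetical computation for the sketch
    out["height"] = x * 3
    return True              # status result, as in the TryParse idiom

result = {}
ok = get_dimensions(5, result)
assert ok and result["width"] == 10 and result["height"] == 15
```

In idiomatic Python one would simply return a tuple instead, as the Alternatives section below discusses.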
This has the following signature: and may be used as follows: Similar considerations apply to returning a value of one of several possible types, where the return value can specify the type and then value is stored in one of several output variables. === Drawbacks === Output parameters are often discouraged in modern programming, essentially as being awkward, confusing, and too low-level – commonplace return values are considerably easier to understand and work with. Notably, output parameters involve functions with side effects (modifying the output parameter) and are semantically similar to references, which are more confusing than pure functions and values, and the distinction between output parameters and input/output parameters can be subtle. Further, since in common programming styles most parameters are simply input parameters, output parameters and input/output parameters are unusual and hence susceptible to misunderstanding. Output and input/output parameters prevent function composition, since the output is stored in variables, rather than in the value of an expression. Thus one must initially declare a variable, and then each step of a chain of functions must be a separate statement. For example, in C++ the following function composition: when written with output and input/output parameters instead becomes (for F it is an output parameter, for G an input/output parameter): In the special case of a function with a single output or input/output parameter and no return value, function composition is possible if the output or input/output parameter (or in C/C++, its address) is also returned by the function, in which case the above becomes: === Alternatives === There are various alternatives to the use cases of output parameters. For returning multiple values from a function, an alternative is to return a tuple. 
Syntactically this is clearer if automatic sequence unpacking and parallel assignment can be used, as in Go or Python, such as: For returning a value of one of several types, a tagged union can be used instead; the most common cases are nullable types (option types), where the return value can be null to indicate failure. For exception handling, one can return a nullable type, or raise an exception. For example, in Python one might have either: or, more idiomatically: The micro-optimization of not requiring a local variable and copying the return when using output variables can also be applied to conventional functions and return values by sufficiently sophisticated compilers. The usual alternative to output parameters in C and related languages is to return a single data structure containing all return values. For example, given a structure encapsulating width and height, one can write: In object-oriented languages, instead of using input/output parameters, one can often use call by sharing, passing a reference to an object and then mutating the object, though not changing which object the variable refers to. == See also == Command-line argument Evaluation strategy Operator overloading Free variables and bound variables == Notes == == References ==
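The Python listings referenced above (tuple unpacking, and the nullable-return versus exception variants) were stripped in extraction; a minimal sketch of the same alternatives, with `divide` and `parse_int` as illustrative names:

```python
# Alternative 1: return multiple values as a tuple, unpacked by the caller.
def divide(a, b):
    return a // b, a % b          # quotient and remainder as one tuple

q, r = divide(17, 5)              # parallel assignment unpacks the tuple
assert (q, r) == (3, 2)

# Alternative 2: a nullable (None) return to signal failure,
# built on the exception the language raises natively.
def parse_int(s):
    try:
        return int(s)
    except ValueError:
        return None               # None plays the role of the failure flag

assert parse_int("42") == 42
assert parse_int("forty-two") is None
```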
Wikipedia/Parameter_(computer_science)
In mathematics and computer science, a higher-order function (HOF) is a function that does at least one of the following: takes one or more functions as arguments (i.e. a procedural parameter, which is a parameter of a procedure that is itself a procedure), or returns a function as its result. All other functions are first-order functions. In mathematics higher-order functions are also termed operators or functionals. The differential operator in calculus is a common example, since it maps a function to its derivative, also a function. Higher-order functions should not be confused with other uses of the word "functor" throughout mathematics; see Functor (disambiguation). In the untyped lambda calculus, all functions are higher-order; in a typed lambda calculus, from which most functional programming languages are derived, higher-order functions that take one function as argument are values with types of the form (τ₁ → τ₂) → τ₃. == General examples == The map function, found in many functional programming languages, is one example of a higher-order function. It takes as arguments a function f and a collection of elements, and returns a new collection with f applied to each element of the collection. Sorting functions, which take a comparison function as a parameter, allow the programmer to separate the sorting algorithm from the comparisons of the items being sorted; the C standard function qsort is an example of this. Other examples include filter, fold, scan, apply, function composition, integration, callbacks, and tree traversal. Montague grammar, a semantic theory of natural language, also uses higher-order functions. == Support in programming languages == === Direct support === The examples are not intended to compare and contrast programming languages, but to serve as examples of higher-order function syntax. In the following examples, the higher-order function twice takes a function, and applies the function to some value twice. 
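The general examples above can be illustrated with a short Python sketch; `increment` is a hypothetical helper introduced here:

```python
# map as a higher-order function: it takes a function and a collection,
# and returns a new collection with the function applied to each element.
def increment(n):
    return n + 1

assert list(map(increment, [1, 2, 3])) == [2, 3, 4]

# A sorting function taking a function parameter, separating the
# sorting algorithm from the item comparison (cf. C's qsort).
words = ["pear", "fig", "banana"]
assert sorted(words, key=len) == ["fig", "pear", "banana"]
```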
If twice has to be applied several times for the same f it preferably should return a function rather than a value. This is in line with the "don't repeat yourself" principle. ==== APL ==== Or in a tacit manner: ==== C++ ==== Using std::function in C++11: Or, with generic lambdas provided by C++14: ==== C# ==== Using just delegates: Or equivalently, with static methods: ==== Clojure ==== ==== ColdFusion Markup Language (CFML) ==== ==== Common Lisp ==== ==== D ==== ==== Dart ==== ==== Elixir ==== In Elixir, you can mix module definitions and anonymous functions Alternatively, we can also compose using pure anonymous functions. ==== Erlang ==== In this Erlang example, the higher-order function or_else/2 takes a list of functions (Fs) and argument (X). It evaluates the function F with the argument X as argument. If the function F returns false then the next function in Fs will be evaluated. If the function F returns {false, Y} then the next function in Fs with argument Y will be evaluated. If the function F returns R the higher-order function or_else/2 will return R. Note that X, Y, and R can be functions. The example returns false. ==== F# ==== ==== Go ==== Notice a function literal can be defined either with an identifier (twice) or anonymously (assigned to variable plusThree). ==== Groovy ==== ==== Haskell ==== ==== J ==== Explicitly, or tacitly, ==== Java (1.8+) ==== Using just functional interfaces: Or equivalently, with static methods: ==== JavaScript ==== With arrow functions: Or with classical syntax: ==== Julia ==== ==== Kotlin ==== ==== Lua ==== ==== MATLAB ==== ==== OCaml ==== ==== PHP ==== or with all functions in variables: Note that arrow functions implicitly capture any variables that come from the parent scope, whereas anonymous functions require the use keyword to do the same. 
==== Perl ==== or with all functions in variables: ==== Python ==== Python decorator syntax is often used to replace a function with the result of passing that function through a higher-order function. E.g., the function g could be implemented equivalently: ==== R ==== ==== Raku ==== In Raku, all code objects are closures and therefore can reference inner "lexical" variables from an outer scope because the lexical variable is "closed" inside the function. Raku also supports "pointy block" syntax for lambda expressions, which can be assigned to a variable or invoked anonymously. ==== Ruby ==== ==== Rust ==== ==== Scala ==== ==== Scheme ==== ==== Swift ==== ==== Tcl ==== Tcl uses the apply command (since 8.6) to apply an anonymous function. ==== XACML ==== The XACML standard defines higher-order functions for applying a function to multiple values of attribute bags; the standard enumerates the available higher-order functions. ==== XQuery ==== === Alternatives === ==== Function pointers ==== Function pointers in languages such as C, C++, Fortran, and Pascal allow programmers to pass around references to functions. The following C code computes an approximation of the integral of an arbitrary function: The qsort function from the C standard library uses a function pointer to emulate the behavior of a higher-order function. ==== Macros ==== Macros can also be used to achieve some of the effects of higher-order functions. However, macros cannot easily avoid the problem of variable capture; they may also result in large amounts of duplicated code, which can be more difficult for a compiler to optimize. Macros are generally not strongly typed, although they may produce strongly typed code. 
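The recurring `twice` example, whose per-language listings were stripped in extraction, can be sketched in Python, together with the decorator form the Python section mentions (`plus_three` and `g` are illustrative names):

```python
# The higher-order function `twice`: takes a function, applies it twice.
# Returning a function (rather than a value) matches the DRY advice above.
def twice(f):
    return lambda x: f(f(x))

plus_three = lambda x: x + 3
assert twice(plus_three)(7) == 13     # (7 + 3) + 3

# Decorator syntax replaces a function with the result of passing it
# through a higher-order function: g becomes twice(g).
@twice
def g(x):
    return x + 3

assert g(7) == 13
```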
==== Dynamic code evaluation ==== In other imperative programming languages, it is possible to achieve some of the same algorithmic results as are obtained via higher-order functions by dynamically executing code (sometimes called Eval or Execute operations) in the scope of evaluation. There can be significant drawbacks to this approach: The argument code to be executed is usually not statically typed; these languages generally rely on dynamic typing to determine the well-formedness and safety of the code to be executed. The argument is usually provided as a string, the value of which may not be known until run-time. This string must either be compiled during program execution (using just-in-time compilation) or evaluated by interpretation, causing some added overhead at run-time, and usually generating less efficient code. ==== Objects ==== In object-oriented programming languages that do not support higher-order functions, objects can be an effective substitute. An object's methods act in essence like functions, and a method may accept objects as parameters and produce objects as return values. Objects often carry added run-time overhead compared to pure functions, however, and added boilerplate code for defining and instantiating an object and its method(s). Languages that permit stack-based (versus heap-based) objects or structs can provide more flexibility with this method. An example of using a simple stack based record in Free Pascal with a function that returns a function: The function a() takes a Txy record as input and returns the integer value of the sum of the record's x and y fields (3 + 7). ==== Defunctionalization ==== Defunctionalization can be used to implement higher-order functions in languages that lack first-class functions: In this case, different types are used to trigger different functions via function overloading. The overloaded function in this example has the signature auto apply. 
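The C++ defunctionalization listing referenced above is not reproduced here; as an illustration under assumed names (`make_adder`, `make_scaler`, `apply` are all hypothetical), the technique replaces first-class functions with tagged data plus a single dispatching `apply`:

```python
# Defunctionalization sketch: functions are represented as data (tag, arg)
# rather than closures, and a single apply() interprets the tags.
def make_adder(n):
    return ("ADD", n)            # data standing in for lambda x: x + n

def make_scaler(n):
    return ("MUL", n)            # data standing in for lambda x: x * n

def apply(fn_repr, x):
    tag, n = fn_repr
    if tag == "ADD":
        return x + n
    if tag == "MUL":
        return x * n
    raise ValueError(f"unknown tag: {tag}")

assert apply(make_adder(3), 4) == 7
assert apply(make_scaler(3), 4) == 12
```

Where the C++ version dispatches via function overloading on distinct types, this sketch dispatches on an explicit tag, which is the same idea expressed dynamically.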
== See also == First-class function Combinatory logic Function-level programming Functional programming Kappa calculus - a formalism for functions which excludes higher-order functions Strategy pattern Higher order messages == References ==
Wikipedia/Higher_order_function
The Boyer–Moore majority vote algorithm is an algorithm for finding the majority of a sequence of elements using linear time and a constant number of words of memory. It is named after Robert S. Boyer and J Strother Moore, who published it in 1981, and is a prototypical example of a streaming algorithm. In its simplest form, the algorithm finds a majority element, if there is one: that is, an element that occurs repeatedly for more than half of the elements of the input. A version of the algorithm that makes a second pass through the data can be used to verify that the element found in the first pass really is a majority. If a second pass is not performed and there is no majority, the algorithm will not detect that no majority exists. In the case that no strict majority exists, the returned element can be arbitrary; it is not guaranteed to be the element that occurs most often (the mode of the sequence). It is not possible for a streaming algorithm to find the most frequent element in less than linear space, for sequences whose number of repetitions can be small. == Description == The algorithm maintains in its local variables a sequence element and a counter, with the counter initially zero. It then processes the elements of the sequence, one at a time. When processing an element x, if the counter is zero, the algorithm stores x as its remembered sequence element and sets the counter to one. Otherwise, it compares x to the stored element and either increments the counter (if they are equal) or decrements the counter (otherwise). At the end of this process, if the sequence has a majority, it will be the element stored by the algorithm. 
This can be expressed in pseudocode as the following steps: Initialize an element m and a counter c with c = 0 For each element x of the input sequence: If c = 0, then assign m = x and c = 1 else if m = x, then assign c = c + 1 else assign c = c − 1 Return m Even when the input sequence has no majority, the algorithm will report one of the sequence elements as its result. However, it is possible to perform a second pass over the same input sequence in order to count the number of times the reported element occurs and determine whether it is actually a majority. This second pass is needed, as it is not possible for a sublinear-space algorithm to determine whether there exists a majority element in a single pass through the input. == Analysis == The amount of memory that the algorithm needs is the space for one element and one counter. In the random access model of computing usually used for the analysis of algorithms, each of these values can be stored in a machine word and the total space needed is O(1). If an array index is needed to keep track of the algorithm's position in the input sequence, it doesn't change the overall constant space bound. The algorithm's bit complexity (the space it would need, for instance, on a Turing machine) is higher, the sum of the binary logarithms of the input length and the size of the universe from which the elements are drawn. Both the random access model and bit complexity analyses only count the working storage of the algorithm, and not the storage for the input sequence itself. Similarly, on a random access machine, the algorithm takes time O(n) (linear time) on an input sequence of n items, because it performs only a constant number of operations per input item. The algorithm can also be implemented on a Turing machine in time linear in the input length (n times the number of bits per input item). 
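The pseudocode above translates directly into Python; this sketch (the function name is illustrative) also includes the optional second pass that verifies the candidate, returning None when no strict majority exists:

```python
# Boyer–Moore majority vote: one element and one counter, O(1) extra space.
def boyer_moore_majority(seq):
    m, c = None, 0
    for x in seq:                # first pass, per the pseudocode above
        if c == 0:
            m, c = x, 1
        elif m == x:
            c += 1
        else:
            c -= 1
    # Optional second pass: confirm m really occurs more than n/2 times.
    if seq and sum(1 for x in seq if x == m) * 2 > len(seq):
        return m
    return None                  # no strict majority

assert boyer_moore_majority([1, 2, 1, 1, 3, 1]) == 1
assert boyer_moore_majority([1, 2, 3]) is None
```

Without the second pass, the first loop would still report some element (here 3 for the sequence [1, 2, 3]) even though no majority exists, which is exactly the caveat stated above.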
== Correctness == After processing n input elements, the input sequence can be partitioned into (n − c) / 2 pairs of unequal elements, plus c copies of m left over. This is a proof by induction: the invariant is trivially true when n = c = 0, and it is maintained every time an element x is added: If x = m, add x to the set of c copies of m (and increment c). If x ≠ m and c > 0, then remove one of the c copies of m from the left-over set and pair it with x (and decrement c). If c = 0, then set m ← x and add x to the (previously empty) set of copies of m (and set c to 1). In all cases, the loop invariant is maintained. After the entire sequence has been processed, it follows that no element x ≠ m can have a majority, because x can equal at most one element of each unequal pair and none of the remaining c copies of m. Thus, if there is a majority element, it can only be m. == See also == Element distinctness problem, the problem of testing whether a collection of elements has any repeated elements Majority function, the majority of a collection of Boolean values Majority problem (cellular automaton), the problem of finding a majority element in the cellular automaton computational model Misra–Gries heavy hitters algorithm and Misra–Gries summary, a natural generalization of the Boyer–Moore majority vote algorithm that stores more than one item and more than one count == References ==
Wikipedia/Boyer–Moore_majority_vote_algorithm
In computer programming, the scope of a name binding (an association of a name to an entity, such as a variable) is the part of a program where the name binding is valid; that is, where the name can be used to refer to the entity. In other parts of the program, the name may refer to a different entity (it may have a different binding), or to nothing at all (it may be unbound). Scope helps prevent name collisions by allowing the same name to refer to different objects – as long as the names have separate scopes. The scope of a name binding is also known as the visibility of an entity, particularly in older or more technical literature—this is in relation to the referenced entity, not the referencing name. The term "scope" is also used to refer to the set of all name bindings that are valid within a part of a program or at a given point in a program, which is more correctly referred to as context or environment. Strictly speaking and in practice for most programming languages, "part of a program" refers to a portion of source code (area of text), and is known as lexical scope. In some languages, however, "part of a program" refers to a portion of run time (period during execution), and is known as dynamic scope. Both of these terms are somewhat misleading—they misuse technical terms, as discussed in the definition—but the distinction itself is accurate and precise, and these are the standard respective terms. Lexical scope is the main focus of this article, with dynamic scope understood by contrast with lexical scope. In most cases, name resolution based on lexical scope is relatively straightforward to use and to implement, as in use one can read backwards in the source code to determine to which entity a name refers, and in implementation one can maintain a list of names and contexts when compiling or interpreting a program. 
Difficulties arise in name masking, forward declarations, and hoisting, while considerably subtler ones arise with non-local variables, particularly in closures. == Definition == The strict definition of the (lexical) "scope" of a name (identifier) is unambiguous: lexical scope is "the portion of source code in which a binding of a name with an entity applies". This is virtually unchanged from its 1960 definition in the specification of ALGOL 60. Representative language specifications follow:

ALGOL 60 (1960): "The following kinds of quantities are distinguished: simple variables, arrays, labels, switches, and procedures. The scope of a quantity is the set of statements and expressions in which the declaration of the identifier associated with that quantity is valid."

C (2007): "An identifier can denote an object; a function; a tag or a member of a structure, union, or enumeration; a typedef name; a label name; a macro name; or a macro parameter. The same identifier can denote different entities at different points in the program. [...] For each different entity that an identifier designates, the identifier is visible (i.e., can be used) only within a region of program text called its scope."

Go (2013): "A declaration binds a non-blank identifier to a constant, type, variable, function, label, or package. [...] The scope of a declared identifier is the extent of source text in which the identifier denotes the specified constant, type, variable, function, label, or package."

Most commonly "scope" refers to when a given name can refer to a given variable—when a declaration has effect—but can also apply to other entities, such as functions, types, classes, labels, constants, and enumerations. === Lexical scope vs. dynamic scope === A fundamental distinction in scope is what "part of a program" means. 
In languages with lexical scope (also called static scope), name resolution depends on the location in the source code and the lexical context (also called static context), which is defined by where the named variable or function is defined. In contrast, in languages with dynamic scope, the name resolution depends upon the program state when the name is encountered which is determined by the execution context (also called runtime context, calling context or dynamic context). In practice, with lexical scope a name is resolved by searching the local lexical context, then if that fails, by searching the outer lexical context, and so on; whereas with dynamic scope, a name is resolved by searching the local execution context, then if that fails, by searching the outer execution context, and so on, progressing up the call stack. Most modern languages use lexical scope for variables and functions, though dynamic scope is used in some languages, notably some dialects of Lisp, some "scripting" languages, and some template languages. Perl 5 offers both lexical and dynamic scope. Even in lexically scoped languages, scope for closures can be confusing to the uninitiated, as these depend on the lexical context where the closure is defined, not where it is called. Lexical resolution can be determined at compile time, and is also known as early binding, while dynamic resolution can in general only be determined at run time, and thus is known as late binding. === Related concepts === In object-oriented programming, dynamic dispatch selects an object method at runtime, though whether the actual name binding is done at compile time or run time depends on the language. De facto dynamic scope is common in macro languages, which do not directly do name resolution, but instead expand in place. Some programming frameworks like AngularJS use the term "scope" to mean something entirely different than how it is used in this article. 
In those frameworks, the scope is just an object of the programming language that they use (JavaScript in case of AngularJS) that is used in certain ways by the framework to emulate dynamic scope in a language that uses lexical scope for its variables. Those AngularJS scopes can themselves be in context or not in context (using the usual meaning of the term) in any given part of the program, following the usual rules of variable scope of the language like any other object, and using their own inheritance and transclusion rules. In the context of AngularJS, sometimes the term "$scope" (with a dollar sign) is used to avoid confusion, but using the dollar sign in variable names is often discouraged by the style guides. == Use == Scope is an important component of name resolution, which is in turn fundamental to language semantics. Name resolution (including scope) varies between programming languages, and within a programming language, varies by type of entity; the rules for scope are called scope rules (or scoping rules). Together with namespaces, scope rules are crucial in modular programming, so a change in one part of the program does not break an unrelated part. == Overview == When discussing scope, there are three basic concepts: scope, extent, and context. "Scope" and "context" in particular are frequently confused: scope is a property of a name binding, while context is a property of a part of a program, that is either a portion of source code (lexical context or static context) or a portion of run time (execution context, runtime context, calling context or dynamic context). Execution context consists of lexical context (at the current execution point) plus additional runtime state such as the call stack. 
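The lexical resolution of closures described above — names resolved where the closure is defined, not where it is called — can be illustrated in Python (all names in this sketch are hypothetical):

```python
# Lexical scope: a closure resolves names against its defining context.
x = "global"

def make_reader():
    x = "enclosing"              # the closure's lexical context
    def read():
        return x                 # resolved where read() is defined
    return read

def caller():
    x = "caller-local"           # would be found under dynamic scope
    return make_reader()()

assert caller() == "enclosing"   # lexical, not dynamic, resolution
```

Under dynamic scope the call inside `caller` would instead find "caller-local" by searching up the call stack, which is exactly the contrast drawn in the text.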
Strictly speaking, during execution a program enters and exits various name bindings' scopes, and at a point in execution name bindings are "in context" or "not in context", hence name bindings "come into context" or "go out of context" as the program execution enters or exits the scope. However, in practice usage is much looser. Scope is a source-code level concept, and a property of name bindings, particularly variable or function name bindings—names in the source code are references to entities in the program—and is part of the behavior of a compiler or interpreter of a language. As such, issues of scope are similar to pointers, which are a type of reference used in programs more generally. Using the value of a variable when the name is in context but the variable is uninitialized is analogous to dereferencing (accessing the value of) a wild pointer, as it is undefined. However, as variables are not destroyed until they go out of context, the analog of a dangling pointer does not exist. For entities such as variables, scope is a subset of lifetime (also known as extent)—a name can only refer to a variable that exists (possibly with undefined value), but variables that exist are not necessarily visible: a variable may exist but be inaccessible (the value is stored but not referred to within a given context), or accessible but not via the given name, in which case it is not in context (the program is "out of the scope of the name"). In other cases "lifetime" is irrelevant—a label (named position in the source code) has lifetime identical with the program (for statically compiled languages), but may be in context or not at a given point in the program, and likewise for static variables—a static global variable is in context for the entire program, while a static local variable is only in context within a function or other local context, but both have lifetime of the entire run of the program. 
Determining which entity a name refers to is known as name resolution or name binding (particularly in object-oriented programming), and varies between languages. Given a name, the language (properly, the compiler or interpreter) checks all entities that are in context for matches; in case of ambiguity (two entities with the same name, such as a global and local variable with the same name), the name resolution rules are used to distinguish them. Most frequently, name resolution relies on an "inner-to-outer context" rule, such as the Python LEGB (Local, Enclosing, Global, Built-in) rule: names implicitly resolve to the narrowest relevant context. In some cases name resolution can be explicitly specified, such as by the global and nonlocal keywords in Python; in other cases the default rules cannot be overridden. When two identical names are in context at the same time, referring to different entities, one says that name masking is occurring, where the higher-priority name (usually innermost) is "masking" the lower-priority name. At the level of variables, this is known as variable shadowing. Due to the potential for logic errors from masking, some languages disallow or discourage masking, raising an error or warning at compile time or run time. Various programming languages have various different scope rules for different kinds of declarations and names. Such scope rules have a large effect on language semantics and, consequently, on the behavior and correctness of programs. In languages like C++, accessing an unbound variable does not have well-defined semantics and may result in undefined behavior, similar to referring to a dangling pointer; and declarations or names used outside their scope will generate syntax errors. Scopes are frequently tied to other language constructs and determined implicitly, but many languages also offer constructs specifically for controlling scope. 
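The LEGB rule, name masking, and the explicit `global` and `nonlocal` overrides mentioned above can be shown in a short sketch (function names are illustrative):

```python
# Name masking and explicit resolution with global / nonlocal.
count = 0                        # global binding

def outer():
    count = 10                   # masks (shadows) the global name
    def inner():
        nonlocal count           # override the LEGB default: bind to
        count += 1               # outer's local, not a new local
    inner()
    return count

assert outer() == 11             # the masked global is untouched

def bump():
    global count                 # rebind the global rather than
    count += 1                   # creating a new local

bump()
assert count == 1
```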
== Levels of scope == Scope can vary from as little as a single expression to as much as the entire program, with many possible gradations in between. The simplest scope rule is global scope—all entities are visible throughout the entire program. The most basic modular scope rule is two-level scope, with a global scope anywhere in the program, and local scope within a function. More sophisticated modular programming allows a separate module scope, where names are visible within the module (private to the module) but not visible outside it. Within a function, some languages, such as C, allow block scope to restrict scope to a subset of a function; others, notably functional languages, allow expression scope, to restrict scope to a single expression. Other scopes include file scope (notably in C) which behaves similarly to module scope, and block scope outside of functions (notably in Perl). A subtle issue is exactly when a scope begins and ends. In some languages, such as C, a name's scope begins at the name declaration, and thus different names declared within a given block can have different scopes. This requires declaring functions before use, though not necessarily defining them, and requires forward declaration in some cases, notably for mutual recursion. In other languages, such as Python, a name's scope begins at the start of the relevant block where the name is declared (such as the start of a function), regardless of where it is defined, so all names within a given block have the same scope. In JavaScript, the scope of a name declared with let or const begins at the name declaration, and the scope of a name declared with var begins at the start of the function where the name is declared, which is known as variable hoisting. 
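Python's whole-block scoping described above — a name's scope begins at the start of the relevant block, regardless of where it is assigned — can be demonstrated directly (names in this sketch are hypothetical):

```python
# An assignment anywhere in a function makes the name local to the
# *entire* function block, so reading it before the assignment fails.
x = "global"

def f():
    try:
        return x        # UnboundLocalError: x is local throughout f,
    except UnboundLocalError:  # because of the assignment below
        x = "local"
        return x

assert f() == "local"
assert x == "global"    # the global binding was never touched
```

This is the runtime-error behavior the following text contrasts with JavaScript's `var` hoisting, where the name would instead be bound to the value undefined.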
The behavior of names in context that have undefined value differs between languages: in Python, use of undefined names yields a runtime error, while in JavaScript undefined names declared with var are usable throughout the function because they are implicitly bound to the value undefined. === Expression scope === The scope of a name binding is an expression, which is known as expression scope. Expression scope is available in many languages, especially functional languages, which offer a feature called let expressions, allowing a declaration's scope to be a single expression. This is convenient if, for example, an intermediate value is needed for a computation. For example, in Standard ML, if f() returns 12, then let val x = f() in x * x end is an expression that evaluates to 144, using a temporary variable named x to avoid calling f() twice. Some languages with block scope approximate this functionality by offering syntax for a block to be embedded into an expression; for example, the aforementioned Standard ML expression could be written in Perl as do { my $x = f(); $x * $x }, or in GNU C as ({ int x = f(); x * x; }). In Python, auxiliary variables in generator expressions and list comprehensions (in Python 3) have expression scope. In C, variable names in a function prototype have expression scope, known in this context as function prototype scope. As the variable names in the prototype are not referred to (they may be different in the actual definition)—they are just dummies—these are often omitted, though they may be used for generating documentation, for instance. === Block scope === The scope of a name binding is a block, which is known as block scope. Block scope is available in many, but not all, block-structured programming languages. This began with ALGOL 60, where "[e]very declaration ... is valid only for that block.", and today is particularly associated with languages in the Pascal and C families and traditions.
Most often this block is contained within a function, thus restricting the scope to a part of a function, but in some cases, such as Perl, the block may not be within a function. A representative example of the use of block scope is a C loop in which two variables are scoped to the loop: the loop variable n, which is initialized once and incremented on each iteration of the loop, and the auxiliary variable n_squared, which is initialized at each iteration. The purpose is to avoid adding variables to the function scope that are only relevant to a particular block—for example, this prevents errors where the generic loop variable i has accidentally already been set to another value. In this example the expression n * n would generally not be assigned to an auxiliary variable, and the body of the loop would simply be written ret += n * n, but in more complicated examples auxiliary variables are useful. Blocks are primarily used for control flow, such as with if, while, and for loops, and in these cases block scope means the scope of a variable depends on the structure of a function's flow of execution. However, languages with block scope typically also allow the use of "naked" blocks, whose sole purpose is to allow fine-grained control of variable scope. For example, an auxiliary variable may be defined in a block, then used (say, added to a variable with function scope) and discarded when the block ends, or a while loop might be enclosed in a block that initializes variables used inside the loop that should only be initialized once. A subtlety of several programming languages, such as Algol 68 and C (where this has been standardized since C99), is that block-scope variables can be declared not only within the body of the block, but also within the control statement, if any.
This is analogous to function parameters, which are declared in the function declaration (before the block of the function body starts), and in scope for the whole function body. This is primarily used in for loops, which have an initialization statement separate from the loop condition, unlike while loops, and is a common idiom. Block scope can be used for shadowing. In this example, inside the block the auxiliary variable could also have been called n, shadowing the parameter name, but this is considered poor style due to the potential for errors. Furthermore, some descendants of C, such as Java and C#, despite having support for block scope (in that a local variable can be made to go out of context before the end of a function), do not allow one local variable to hide another. In such languages, the attempted declaration of the second n would result in a syntax error, and one of the n variables would have to be renamed. If a block is used to set the value of a variable, block scope requires that the variable be declared outside of the block. This complicates the use of conditional statements with single assignment. For example, in Python, which does not use block scope, one may simply assign to a variable a inside the branches of an if statement, and a is accessible after the if statement. In Perl, which has block scope, this instead requires declaring the variable (with my) prior to the block. Often this is instead rewritten using multiple assignment, initializing the variable to a default value before the conditional statement and reassigning it in one branch; in Python this is not necessary but is a common idiom, while in Perl it avoids a separate bare declaration. In case of a single variable assignment, an alternative is to use the ternary operator to avoid a block, but this is not in general possible for multiple variable assignments, and is difficult to read for complex logic.
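The two Python patterns just described can be sketched as follows (the condition and names are illustrative):

```python
def parity(n):
    # Python has no block scope: a name assigned inside an if/else
    # statement remains visible after the statement ends.
    if n % 2 == 0:
        result = "even"
    else:
        result = "odd"
    return result           # result is still in scope here

def parity_default(n):
    # The multiple-assignment idiom: initialize to a default value,
    # then conditionally reassign. Not required in Python, but it is
    # the shape that block-scoped languages such as Perl push toward.
    result = "odd"
    if n % 2 == 0:
        result = "even"
    return result

assert parity(4) == "even"
assert parity_default(7) == "odd"
```

In a block-scoped language the first form would fail, since result would go out of scope at the end of each branch; the second form survives translation directly.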
This is a more significant issue in C, notably for string assignment, as string initialization can automatically allocate memory, while string assignment to an already initialized variable requires allocating memory, a string copy, and checking that these are successful. Some languages allow the concept of block scope to be applied, to varying extents, outside of a function. For example, in Perl, a named subroutine and a my variable can be enclosed together in a bare block; $counter is then a variable name with block scope (due to the use of the my keyword), while increment_counter is a function name with global scope. Each call to increment_counter will increase the value of $counter by one, and return the new value. Code outside of this block can call increment_counter, but cannot otherwise obtain or alter the value of $counter. This idiom allows one to define closures in Perl. === Function scope === When the scope of variables declared within a function does not extend beyond that function, this is known as function scope. Function scope is available in most programming languages which offer a way to create a local variable in a function or subroutine: a variable whose scope ends (that goes out of context) when the function returns. In most cases the lifetime of the variable is the duration of the function call—it is an automatic variable, created when the function starts (or the variable is declared), destroyed when the function returns—while the scope of the variable is within the function, though the meaning of "within" depends on whether scope is lexical or dynamic. However, some languages, such as C, also provide for static local variables, where the lifetime of the variable is the entire lifetime of the program, but the variable is only in context when inside the function.
In the case of static local variables, the variable is created when the program initializes, and destroyed only when the program terminates, as with a static global variable, but is only in context within a function, like an automatic local variable. Importantly, in lexical scope a variable with function scope has scope only within the lexical context of the function: it goes out of context when another function is called within the function, and comes back into context when the function returns—called functions have no access to the local variables of calling functions, and local variables are only in context within the body of the function in which they are declared. By contrast, in dynamic scope, the scope extends to the execution context of the function: local variables stay in context when another function is called, only going out of context when the defining function ends, and thus local variables are in context of the function in which they are defined and all called functions. In languages with lexical scope and nested functions, local variables are in context for nested functions, since these are within the same lexical context, but not for other functions that are not lexically nested. A local variable of an enclosing function is known as a non-local variable for the nested function. Function scope is also applicable to anonymous functions. For example, consider two Python functions, square and sum_of_squares: square computes the square of a number; sum_of_squares computes the sum of all squares up to a number. (For example, square(4) is 4² = 16, and sum_of_squares(4) is 0² + 1² + 2² + 3² + 4² = 30.) Each of these functions has a variable named n that represents the argument to the function.
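The original snippet is not preserved in this text; a reconstruction matching the description (using a while loop and the variables total and i named in the discussion) is:

```python
def square(n):
    # this n is local to square
    return n * n

def sum_of_squares(n):
    # this n is a different, unrelated variable, local to sum_of_squares
    total = 0
    i = 0
    while i <= n:
        total += square(i)   # calling square does not disturb this n
        i += 1
    return total

assert square(4) == 16
assert sum_of_squares(4) == 30   # 0 + 1 + 4 + 9 + 16
```

Exact details such as the loop construct are conjectural, but the scoping behavior discussed next does not depend on them.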
These two n variables are completely separate and unrelated, despite having the same name, because they are lexically scoped local variables with function scope: each one's scope is its own, lexically separate function, and thus they don't overlap. Therefore, sum_of_squares can call square without its own n being altered. Similarly, sum_of_squares has variables named total and i; these variables, because of their limited scope, will not interfere with any variables named total or i that might belong to any other function. In other words, there is no risk of a name collision between these names and any unrelated names, even if they are identical. No name masking is occurring: only one variable named n is in context at any given time, as the scopes do not overlap. By contrast, were a similar fragment to be written in a language with dynamic scope, the n in the calling function would remain in context in the called function—the scopes would overlap—and would be masked ("shadowed") by the new n in the called function. Function scope is significantly more complicated if functions are first-class objects and can be created locally to a function and then returned. In this case any variables in the nested function that are not local to it (unbound variables in the function definition, that resolve to variables in an enclosing context) create a closure, as not only the function itself, but also its context (of variables) must be returned, and then potentially called in a different context. This requires significantly more support from the compiler, and can complicate program analysis. === File scope === The scope of a name binding is a file, which is known as file scope. File scope is largely particular to C (and C++), where scope of variables and functions declared at the top level of a file (not within any function) is for the entire file—or rather for C, from the declaration until the end of the source file, or more precisely translation unit (internal linkage).
This can be seen as a form of module scope, where modules are identified with files, and in more modern languages is replaced by an explicit module scope. Due to the presence of include statements, which add variables and functions to the internal context and may themselves call further include statements, it can be difficult to determine what is in context in the body of a file. In the C code snippet above, the function name sum_of_squares has global scope (in C, extern linkage). Adding static to the function signature would result in file scope (internal linkage). === Module scope === The scope of a name binding is a module, which is known as module scope. Module scope is available in modular programming languages where modules (which may span various files) are the basic unit of a complex program, as they allow information hiding and exposing a limited interface. Module scope was pioneered in the Modula family of languages, and Python (which was influenced by Modula) is a representative contemporary example. In some object-oriented programming languages that lack direct support for modules, such as C++ before C++20, a similar structure is instead provided by the class hierarchy, where classes are the basic unit of the program, and a class can have private methods. This is properly understood in the context of dynamic dispatch rather than name resolution and scope, though they often play analogous roles. In some cases both these facilities are available, such as in Python, which has both modules and classes, and code organization (as a module-level function or a conventionally private method) is a choice of the programmer. === Global scope === The scope of a name binding is an entire program, which is known as global scope. 
Variable names with global scope—called global variables—are frequently considered bad practice, at least in some languages, due to the possibility of name collisions and unintentional masking, together with poor modularity, and function scope or block scope are considered preferable. However, global scope is typically used (depending on the language) for various other sorts of names, such as names of functions, names of classes and names of other data types. In these cases mechanisms such as namespaces are used to avoid collisions. == Lexical scope vs. dynamic scope == The use of local variables — of variable names with limited scope, that only exist within a specific function — helps avoid the risk of a name collision between two identically named variables. However, what does it mean to be "within" a function? There are two very different approaches to answering this question. In lexical scope (or lexical scoping; also called static scope or static scoping), if a variable name's scope is a certain function, then its scope is the program text of the function definition: within that text, the variable name exists, and is bound to the variable's value, but outside that text, the variable name does not exist. By contrast, in dynamic scope (or dynamic scoping), if a variable name's scope is a certain function, then its scope is the time-period during which the function is executing: while the function is running, the variable name exists, and is bound to its value, but after the function returns, the variable name does not exist. This means that if function f invokes a separately defined function g, then under lexical scope, function g does not have access to f's local variables (assuming the text of g is not inside the text of f), while under dynamic scope, function g does have access to f's local variables (since g is invoked during the invocation of f). Consider, for example, the program described below.
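Reassembled from the lines quoted in the walkthrough that follows, the program reads:

```shell
x=1
function g() { echo $x ; x=2 ; }
function f() { local x=3 ; g ; }
f
echo $x
```

As the walkthrough explains, what this prints depends on whether the shell resolves x in g lexically or dynamically.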
The first line, x=1, creates a global variable x and initializes it to 1. The second line, function g() { echo $x ; x=2 ; }, defines a function g that prints out ("echoes") the current value of x, and then sets x to 2 (overwriting the previous value). The third line, function f() { local x=3 ; g ; }, defines a function f that creates a local variable x (hiding the identically named global variable) and initializes it to 3, and then calls g. The fourth line, f, calls f. The fifth line, echo $x, prints out the current value of x. So, what exactly does this program print? It depends on the scope rules. If the language of this program is one that uses lexical scope, then g prints and modifies the global variable x (because g is defined outside f), so the program prints 1 and then 2. By contrast, if this language uses dynamic scope, then g prints and modifies f's local variable x (because g is called from within f), so the program prints 3 and then 1. (As it happens, the language of the program is Bash, which uses dynamic scope; so the program prints 3 and then 1. If the same code were run with ksh93, which uses lexical scope, the results would be different.) == Lexical scope == With lexical scope, a name always refers to its lexical context. This is a property of the program text and is made independent of the runtime call stack by the language implementation. Because this matching only requires analysis of the static program text, this type of scope is also called static scope. Lexical scope is standard in all ALGOL-based languages such as Pascal, Modula-2 and Ada as well as in modern functional languages such as ML and Haskell. It is also used in the C language and its syntactic and semantic relatives, although with different kinds of limitations. Static scope allows the programmer to reason about object references such as parameters, variables, constants, types, functions, etc., as simple name substitutions.
This makes it much easier to make modular code and reason about it, since the local naming structure can be understood in isolation. In contrast, dynamic scope forces the programmer to anticipate all possible execution contexts in which the module's code may be invoked. For example, Pascal is lexically scoped. Consider a Pascal program fragment in which the main program declares a variable I and a char variable K; procedure A (declared in the main program) declares a real variable K and a variable L, and contains a nested procedure B, which in turn declares a variable M. The variable I is visible at all points, because it is never hidden by another variable of the same name. The char variable K is visible only in the main program because it is hidden by the real variable K, which is visible only in procedures A and B. Variable L is also visible only in procedures A and B but it does not hide any other variable. Variable M is only visible in procedure B and therefore not accessible either from procedure A or the main program. Also, procedure B is visible only in procedure A and can therefore not be called from the main program. There could have been another procedure named B declared in the program outside of procedure B. The place in the program where "B" is mentioned then determines which of the two procedures named B it represents, analogous with the scope of variables. Correct implementation of lexical scope in languages with first-class nested functions is not trivial, as it requires each function value to carry with it a record of the values of the variables that it depends on (the pair of the function and this context is called a closure). Depending on implementation and computer architecture, variable lookup may become slightly inefficient when very deeply lexically nested functions are used, although there are well-known techniques to mitigate this. Also, for nested functions that only refer to their own arguments and (immediately) local variables, all relative locations can be known at compile time. No overhead at all is therefore incurred when using that type of nested function.
The same applies to particular parts of a program where nested functions are not used, and, naturally, to programs written in a language where nested functions are not available (such as in the C language). === History === Lexical scope was first used in the early 1960s for the imperative language ALGOL 60 and has been picked up in most other imperative languages since then. Languages like Pascal and C have always had lexical scope, since they are both influenced by the ideas that went into ALGOL 60 and ALGOL 68 (although C did not include lexically nested functions). Perl is a language with dynamic scope that added static scope afterwards. The original Lisp interpreter (1960) used dynamic scope. Deep binding, which approximates static (lexical) scope, was introduced around 1962 in LISP 1.5 (via the Funarg device developed by Steve Russell, working under John McCarthy). All early interpreter-based Lisps used dynamic scope. In 1982, Guy L. Steele Jr. and the Common LISP Group published An overview of Common LISP, a short review of the history and the divergent implementations of Lisp up to that moment, and a survey of the features that a Common Lisp implementation should have. On page 102, we read: Most LISP implementations are internally inconsistent in that by default the interpreter and compiler may assign different semantics to correct programs; this stems primarily from the fact that the interpreter assumes all variables to be dynamically scoped, while the compiler assumes all variables to be local unless forced to assume otherwise. This has been done for the sake of convenience and efficiency, but can lead to very subtle bugs. The definition of Common LISP avoids such anomalies by explicitly requiring the interpreter and compiler to impose identical semantics on correct programs. Implementations of Common LISP were thus required to have lexical scope.
Again, from An overview of Common LISP: In addition, Common LISP offers the following facilities (most of which are borrowed from MacLisp, InterLisp or Lisp Machines Lisp): (...) Fully lexically scoped variables. The so-called "FUNARG problem" is completely solved, in both the downward and upward cases. In the same year that An overview of Common LISP was published (1982), initial designs (also by Guy L. Steele Jr.) of a compiled, lexically scoped Lisp called Scheme had been published, and compiler implementations were being attempted. At that time, lexical scope in Lisp was commonly feared to be inefficient to implement. In A History of T, Olin Shivers writes: All serious Lisps in production use at that time were dynamically scoped. No one who hadn't carefully read the Rabbit thesis (written by Guy Lewis Steele Jr. in 1978) believed lexical scope would fly; even the few people who had read it were taking a bit of a leap of faith that this was going to work in serious production use. The term "lexical scope" dates at least to 1967, while the term "lexical scoping" dates at least to 1970, where it was used in Project MAC to describe the scope rules of the Lisp dialect MDL (then known as "Muddle"). == Dynamic scope == With dynamic scope, a name refers to execution context. In technical terms, this means that each name has a global stack of bindings. Introducing a local variable with name x pushes a binding onto the global x stack (which may have been empty), which is popped off when the control flow leaves the scope. Evaluating x in any context always yields the top binding. Note that this cannot be done at compile-time because the binding stack only exists at run-time, which is why this type of scope is called dynamic scope. Dynamic scope is uncommon in modern languages. Generally, certain blocks are defined to create bindings whose lifetime is the execution time of the block; this adds some features of static scope to the dynamic scope process.
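The per-name binding stacks just described can be simulated in Python (an illustrative model only, not how any particular Lisp is implemented):

```python
bindings = {}  # each name maps to a stack of values

def bind(name, value):
    # entering a scope that declares `name` pushes a new binding
    bindings.setdefault(name, []).append(value)

def unbind(name):
    # leaving that scope pops the binding off again
    bindings[name].pop()

def lookup(name):
    # evaluating a name always yields its top (most recent) binding
    return bindings[name][-1]

bind("x", 1)                 # a global x = 1
bind("x", 3)                 # some function declares a local x = 3
assert lookup("x") == 3      # code called from that function sees 3
unbind("x")                  # the function returns
assert lookup("x") == 1      # the global binding is visible again
```

This mirrors the Bash example earlier: while f's local x is on top of the stack, every function f calls sees it; when f returns, the global binding reappears.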
However, since a section of code can be called from many different locations and situations, it can be difficult to determine at the outset what bindings will apply when a variable is used (or if one exists at all). This can be beneficial; application of the principle of least knowledge suggests that code avoid depending on the reasons for (or circumstances of) a variable's value, but simply use the value according to the variable's definition. This narrow interpretation of shared data can provide a very flexible system for adapting the behavior of a function to the current state (or policy) of the system. However, this benefit relies on careful documentation of all variables used this way as well as on careful avoidance of assumptions about a variable's behavior, and does not provide any mechanism to detect interference between different parts of a program. Some languages, like Perl and Common Lisp, allow the programmer to choose static or dynamic scope when defining or redefining a variable. Examples of languages that use dynamic scope include Logo, Emacs Lisp, LaTeX and the shell languages bash, dash, and PowerShell. Dynamic scope is fairly easy to implement. To find a name's value, the program could traverse the runtime stack, checking each activation record (each function's stack frame) for a value for the name. In practice, this is made more efficient via the use of an association list, which is a stack of name/value pairs. Pairs are pushed onto this stack whenever declarations are made, and popped whenever variables go out of context. Shallow binding is an alternative strategy that is considerably faster, making use of a central reference table, which associates each name with its own stack of meanings. This avoids a linear search during run-time to find a particular name, but care should be taken to properly maintain this table.
Note that both of these strategies assume a last-in-first-out (LIFO) ordering to bindings for any one variable; in practice all bindings are so ordered. An even simpler implementation is the representation of dynamic variables with simple global variables. The local binding is performed by saving the original value in an anonymous location on the stack that is invisible to the program. When that binding scope terminates, the original value is restored from this location. In fact, dynamic scope originated in this manner. Early implementations of Lisp used this obvious strategy for implementing local variables, and the practice survives in some dialects which are still in use, such as GNU Emacs Lisp. Lexical scope was introduced into Lisp later. This is equivalent to the above shallow binding scheme, except that the central reference table is simply the global variable binding context, in which the current meaning of the variable is its global value. Maintaining global variables isn't complex. For instance, a symbol object can have a dedicated slot for its global value. Dynamic scope provides an excellent abstraction for thread-local storage, but if it is used that way it cannot be based on saving and restoring a global variable. A possible implementation strategy is for each variable to have a thread-local key. When the variable is accessed, the thread-local key is used to access the thread-local memory location (by code generated by the compiler, which knows which variables are dynamic and which are lexical). If the thread-local key does not exist for the calling thread, then the global location is used. When a variable is locally bound, the prior value is stored in a hidden location on the stack. The thread-local storage is created under the variable's key, and the new value is stored there. Further nested overrides of the variable within that thread simply save and restore this thread-local location. 
When the initial, outermost override's context terminates, the thread-local key is deleted, exposing the global version of the variable once again to that thread. With referential transparency the dynamic scope is restricted to the argument stack of the current function only, and coincides with the lexical scope. === Macro expansion === In modern languages, macro expansion in a preprocessor is a key example of de facto dynamic scope. The macro language itself only transforms the source code, without resolving names, but since the expansion is done in place, when the names in the expanded text are then resolved (notably free variables), they are resolved based on where they are expanded (loosely "called"), as if dynamic scope were occurring. The C preprocessor, used for macro expansion, has de facto dynamic scope, as it does not do name resolution by itself and it is independent of where the macro is defined. For example, a macro such as #define ADD_A(x) x + a will expand to add a to the passed variable, with this name only later resolved by the compiler based on where the macro ADD_A is "called" (properly, expanded). Properly, the C preprocessor only does lexical analysis, expanding the macro during the tokenization stage, but not parsing into a syntax tree or doing name resolution. If such a macro is expanded inside a function that declares a local variable a, the name a in the expansion is resolved (after expansion) to that local variable at the expansion site. == Qualified names == As we have seen, one of the key reasons for scope is that it helps prevent name collisions, by allowing identical names to refer to distinct things, with the restriction that the names must have separate scopes. Sometimes this restriction is inconvenient; when many different things need to be accessible throughout a program, they generally all need names with global scope, so different techniques are required to avoid name collisions. To address this, many languages offer mechanisms for organizing global names.
The details of these mechanisms, and the terms used, depend on the language; but the general idea is that a group of names can itself be given a name — a prefix — and, when necessary, an entity can be referred to by a qualified name consisting of the name plus the prefix. Normally such names will have, in a sense, two sets of scopes: a scope (usually the global scope) in which the qualified name is visible, and one or more narrower scopes in which the unqualified name (without the prefix) is visible as well. And normally these groups can themselves be organized into groups; that is, they can be nested. Although many languages support this concept, the details vary greatly. Some languages have mechanisms, such as namespaces in C++ and C#, that serve almost exclusively to enable global names to be organized into groups. Other languages have mechanisms, such as packages in Ada and structures in Standard ML, that combine this with the additional purpose of allowing some names to be visible only to other members of their group. And object-oriented languages often allow classes or singleton objects to fulfill this purpose (whether or not they also have a mechanism for which this is the primary purpose). Furthermore, languages often meld these approaches; for example, Perl's packages are largely similar to C++'s namespaces, but optionally double as classes for object-oriented programming; and Java organizes its variables and functions into classes, but then organizes those classes into Ada-like packages. == By language == Scope rules for representative languages follow. === C === In C, scope is traditionally known as linkage or visibility, particularly for variables. C is a lexically scoped language with global scope (known as external linkage), a form of module scope or file scope (known as internal linkage), and local scope (within a function); within a function scopes can further be nested via block scope. However, standard C does not support nested functions. 
The lifetime and visibility of a variable are determined by its storage class. There are three types of lifetimes in C: static (program execution), automatic (block execution, allocated on the stack), and manual (allocated on the heap). Only static and automatic are supported for variables and handled by the compiler, while manually allocated memory must be tracked manually across different variables. There are three levels of visibility in C: external linkage (global), internal linkage (roughly file), and block scope (which includes functions); block scopes can be nested, and different levels of internal linkage are possible by use of includes. Internal linkage in C is visibility at the translation unit level, namely a source file after being processed by the C preprocessor, notably including all relevant includes. C programs are compiled as separate object files, which are then linked into an executable or library via a linker. Thus name resolution is split across the compiler, which resolves names within a translation unit (more loosely, "compilation unit", but this is properly a different concept), and the linker, which resolves names across translation units; see linkage for further discussion. In C, variables with block scope enter context when they are declared (not at the top of the block), go out of context if any (non-nested) function is called within the block, come back into context when the function returns, and go out of context at the end of the block. In the case of automatic local variables, they are also allocated on declaration and deallocated at the end of the block, while for static local variables, they are allocated at program initialization and deallocated at program termination. The following program demonstrates a variable with block scope coming into context partway through the block, then exiting context (and in fact being deallocated) when the block ends. The program outputs: m m b m. There are other levels of scope in C.
Variable names used in a function prototype have function prototype visibility, and exit context at the end of the function prototype. Since the names are not used, this is not useful for compilation, but may be useful for documentation. Label names for the goto statement have function scope. === C++ === In C++, every variable must be declared with its type specifier before it is first used. A variable can have either global or local scope. A global variable is a variable declared in the main body of the source code, outside all functions, while a local variable is one declared within the body of a function or a block. Modern versions allow nested lexical scope. === Swift === Swift has scope rules similar to C++'s, but contains different access modifiers. === Go === Go is lexically scoped using blocks. === Java === Java is lexically scoped. A Java class has several kinds of variables: Local variables are defined inside a method, or a particular block. These variables are local to where they were defined and lower levels. For example, a loop inside a method can use that method's local variables, but not the other way around. The loop's variables (local to that loop) are destroyed as soon as the loop ends. Member variables, also called fields, are variables declared within the class, outside of any method. By default, these variables are available for all methods within that class and also for all classes in the package. Parameters are variables in method declarations. In general, a set of brackets defines a particular scope, but variables at top level within a class can differ in their behavior depending on the modifier keywords used in their definition. The following table shows the access to members permitted by each modifier.

Modifier        Class  Package  Subclass  World
public          yes    yes      yes       yes
protected       yes    yes      yes       no
(no modifier)   yes    yes      no        no
private         yes    no       no        no
=== JavaScript === JavaScript has simple scope rules, but variable initialization and name resolution rules can cause problems, and the widespread use of closures for callbacks means the lexical context of a function when defined (which is used for name resolution) can be very different from the lexical context when it is called (which is irrelevant for name resolution). JavaScript objects have name resolution for properties, but this is a separate topic. JavaScript has lexical scope nested at the function level, with the global context being the outermost context. This scope is used for both variables and for functions (meaning function declarations, as opposed to variables of function type). Block scope with the let and const keywords is standard since ECMAScript 6. Block scope can be produced by wrapping the entire block in a function and then executing it; this is known as the immediately-invoked function expression (IIFE) pattern. While JavaScript scope is simple—lexical, function-level—the associated initialization and name resolution rules are a cause of confusion. Firstly, assignment to a name not in scope defaults to creating a new global variable, not a local one. Secondly, to create a new local variable one must use the var keyword; the variable is then created at the top of the function, with value undefined and the variable is assigned its value when the assignment expression is reached: A variable with an Initialiser is assigned the value of its AssignmentExpression when the VariableStatement is executed, not when the variable is created. This is known as variable hoisting—the declaration, but not the initialization, is hoisted to the top of the function. Thirdly, accessing variables before initialization yields undefined, rather than a syntax error. Fourthly, for function declarations, the declaration and the initialization are both hoisted to the top of the function, unlike for variable initialization. 
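The hoisting rules above can be sketched as follows (a minimal illustration using console.log; hoistDemo is an illustrative name):

```javascript
function hoistDemo() {
  var seen = [];
  // The declaration of x below is hoisted to the top of the function,
  // so this read yields undefined rather than throwing a ReferenceError.
  seen.push(x);
  var x = 2;     // the initialization happens only here
  seen.push(x);
  return seen;
}

console.log(hoistDemo());  // [ undefined, 2 ]
```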
For example, the following code produces a dialog with output undefined, as the local variable declaration is hoisted, shadowing the global variable, but the initialization is not, so the variable is undefined when used: Further, as functions are first-class objects in JavaScript and are frequently assigned as callbacks or returned from functions, when a function is executed, the name resolution depends on where it was originally defined (the lexical context of the definition), not the lexical context or execution context where it is called. The nested scopes of a particular function (from most global to most local) in JavaScript, particularly of a closure, used as a callback, are sometimes referred to as the scope chain, by analogy with the prototype chain of an object. Closures can be produced in JavaScript by using nested functions, as functions are first-class objects. Returning a nested function from an enclosing function includes the local variables of the enclosing function as the (non-local) lexical context of the returned function, yielding a closure. For example: Closures are frequently used in JavaScript, due to being used for callbacks. Indeed, any hooking of a function in the local context as a callback or returning it from a function creates a closure if there are any unbound variables in the function body (with the context of the closure based on the nested scopes of the current lexical context, or "scope chain"); this may be accidental. When creating a callback based on parameters, the parameters must be stored in a closure, otherwise it will accidentally create a closure that refers to the variables in the enclosing context, which may change. Name resolution of properties of JavaScript objects is based on inheritance in the prototype tree—a path to the root in the tree is called a prototype chain—and is separate from name resolution of variables and functions. === Lisp === Lisp dialects have various rules for scope. 
The original Lisp used dynamic scope; it was Scheme, inspired by ALGOL, that introduced static (lexical) scope to the Lisp family. Maclisp used dynamic scope by default in the interpreter and lexical scope by default in compiled code, though compiled code could access dynamic bindings by use of SPECIAL declarations for particular variables. However, Maclisp treated lexical binding more as an optimization than one would expect in modern languages, and it did not come with the closure feature one might expect of lexical scope in modern Lisps. A separate operation, *FUNCTION, was available to somewhat clumsily work around some of that issue. Common Lisp adopted lexical scope from Scheme, as did Clojure. ISLISP has lexical scope for ordinary variables. It also has dynamic variables, but they are in all cases explicitly marked; they must be defined by a defdynamic special form, bound by a dynamic-let special form, and accessed by an explicit dynamic special form. Some other dialects of Lisp, like Emacs Lisp, still use dynamic scope by default. Emacs Lisp now has lexical scope available on a per-buffer basis. === Python === For variables, Python has function scope, module scope, and global scope. Names enter context at the start of a scope (function, module, or global scope), and exit context when a non-nested function is called or the scope ends. If a name is used prior to variable initialization, this raises a runtime exception. If a variable is simply accessed (not assigned to), name resolution follows the LEGB (Local, Enclosing, Global, Built-in) rule which resolves names to the narrowest relevant context. However, if a variable is assigned to, it defaults to declaring a variable whose scope starts at the start of the level (function, module, or global), not at the assignment. 
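The LEGB rule just described can be sketched as follows (illustrative names):

```python
x = "global"

def outer():
    x = "enclosing"

    def inner():
        # inner has no local x, so LEGB resolves x to the narrowest
        # relevant context: outer's enclosing scope.
        return x

    return inner()

print(outer())  # enclosing
print(x)        # global -- the module-level x is untouched
```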
Both these rules can be overridden with a global or nonlocal (in Python 3) declaration prior to use, which allows accessing global variables even if there is a masking nonlocal variable, and assigning to global or nonlocal variables. As a simple example, a function resolves a variable to the global scope: Note that x is defined before f is called, so no error is raised, even though it is defined after its reference in the definition of f. Lexically this is a forward reference, which is allowed in Python. Here assignment creates a new local variable, which does not change the value of the global variable: Assignment to a variable within a function causes it to be declared local to the function, hence its scope is the entire function, and thus using it prior to this assignment raises an error. This differs from C, where the scope of a local variable starts at its declaration. This code raises an error: The default name resolution rules can be overridden with the global or nonlocal (in Python 3) keywords. In the code below, the global x declaration in g means that x resolves to the global variable. It thus can be accessed (as it has already been defined), and assignment assigns to the global variable, rather than declaring a new local variable. Note that no global declaration is needed in f—since it does not assign to the variable, it defaults to resolving to the global variable. global can also be used for nested functions. 
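The assignment rule and the global override can be sketched as follows (illustrative names):

```python
x = 1

def fails():
    # Because x is assigned later in this function, x is local to the
    # entire function body, so this read occurs before the local is bound.
    y = x  # raises UnboundLocalError
    x = 2
    return y

def with_global():
    global x  # x now resolves to the module-level variable
    x = 2
    return x

try:
    fails()
except UnboundLocalError:
    print("fails() raised UnboundLocalError")

print(with_global())  # 2
print(x)              # 2 -- the module-level x was reassigned
```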
In addition to allowing assignment to a global variable, as in an unnested function, this can also be used to access the global variable in the presence of a nonlocal variable: For nested functions, there is also the nonlocal declaration, for assigning to a nonlocal variable, similar to using global in an unnested function: === R === R is a lexically scoped language, unlike other implementations of S, where the values of free variables are determined by a set of global variables; in R they are determined by the context in which the function was created. The scope contexts may be accessed using a variety of features (such as parent.frame()) which can simulate the experience of dynamic scope should the programmer desire. There is no block scope: Functions have access to the scope they were created in: Variables created or modified within a function stay there: Variables created or modified within a function stay there unless assignment to enclosing scope is explicitly requested: Although R has lexical scope by default, function scopes can be changed: == Notes == == References ==
Wikipedia/Scope_(computer_science)
In computer science, a lock or mutex (from mutual exclusion) is a synchronization primitive that prevents state from being modified or accessed by multiple threads of execution at once. Locks enforce mutual exclusion concurrency control policies, and many implementation methods exist for different applications. == Types == Generally, locks are advisory locks, where each thread cooperates by acquiring the lock before accessing the corresponding data. Some systems also implement mandatory locks, where attempting unauthorized access to a locked resource will force an exception in the entity attempting to make the access. The simplest type of lock is a binary semaphore. It provides exclusive access to the locked data. Other schemes also provide shared access for reading data. Other widely implemented access modes are exclusive, intend-to-exclude and intend-to-upgrade. Another way to classify locks is by what happens when the lock strategy prevents the progress of a thread. Most locking designs block the execution of the thread requesting the lock until it is allowed to access the locked resource. With a spinlock, the thread simply waits ("spins") until the lock becomes available. This is efficient if threads are likely to be blocked only for a short time, because it avoids the overhead of operating system process rescheduling. It is inefficient if the lock is held for a long time, or if the progress of the thread that is holding the lock depends on preemption of the locked thread. Locks typically require hardware support for efficient implementation. This support usually takes the form of one or more atomic instructions such as "test-and-set", "fetch-and-add" or "compare-and-swap". These instructions allow a single process to test if the lock is free, and if free, acquire the lock in a single atomic operation. 
Uniprocessor architectures have the option of using uninterruptible sequences of instructions—using special instructions or instruction prefixes to disable interrupts temporarily—but this technique does not work for multiprocessor shared-memory machines. Proper support for locks in a multiprocessor environment can require quite complex hardware or software support, with substantial synchronization issues. The reason an atomic operation is required is because of concurrency, where more than one task executes the same logic. For example, consider the following C code: The above example does not guarantee that the task has the lock, since more than one task can be testing the lock at the same time. Since both tasks will detect that the lock is free, both tasks will attempt to set the lock, not knowing that the other task is also setting the lock. Dekker's and Peterson's algorithms are possible substitutes if atomic locking operations are not available. Careless use of locks can result in deadlock or livelock. A number of strategies can be used to avoid or recover from deadlocks or livelocks, both at design-time and at run-time. (The most common strategy is to standardize the lock acquisition sequences so that combinations of inter-dependent locks are always acquired in a specifically defined "cascade" order.) Some languages support locks syntactically. An example in C# follows: C# introduced System.Threading.Lock in C# 13 on .NET 9. The code lock(this) can lead to problems if the instance can be accessed publicly. Similar to Java, C# can also synchronize entire methods, by using the MethodImplOptions.Synchronized attribute. == Granularity == Before being introduced to lock granularity, one needs to understand three concepts about locks: lock overhead: the extra resources for using locks, like the memory space allocated for locks, the CPU time to initialize and destroy locks, and the time for acquiring or releasing locks. 
The more locks a program uses, the more overhead associated with the usage; lock contention: this occurs whenever one process or thread attempts to acquire a lock held by another process or thread. The more fine-grained the available locks, the less likely one process/thread will request a lock held by the other. (For example, locking a row rather than the entire table, or locking a cell rather than the entire row); deadlock: the situation when each of at least two tasks is waiting for a lock that the other task holds. Unless something is done, the two tasks will wait forever. There is a tradeoff between decreasing lock overhead and decreasing lock contention when choosing the number of locks in synchronization. An important property of a lock is its granularity. The granularity is a measure of the amount of data the lock is protecting. In general, choosing a coarse granularity (a small number of locks, each protecting a large segment of data) results in less lock overhead when a single process is accessing the protected data, but worse performance when multiple processes are running concurrently. This is because of increased lock contention. The more coarse the lock, the higher the likelihood that the lock will stop an unrelated process from proceeding. Conversely, using a fine granularity (a larger number of locks, each protecting a fairly small amount of data) increases the overhead of the locks themselves but reduces lock contention. Granular locking where each process must hold multiple locks from a common set of locks can create subtle lock dependencies. This subtlety can increase the chance that a programmer will unknowingly introduce a deadlock. In a database management system, for example, a lock could protect, in order of decreasing granularity, part of a field, a field, a record, a data page, or an entire table. 
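The granularity tradeoff can be sketched in Python (illustrative classes: one coarse lock for a whole table versus one fine-grained lock per row; not a real database API):

```python
import threading

class CoarseTable:
    """One lock protects every row: low lock overhead, high contention."""
    def __init__(self, rows):
        self.rows = dict(rows)
        self.lock = threading.Lock()

    def update(self, key, value):
        with self.lock:        # blocks concurrent writers of *any* row
            self.rows[key] = value

class FineTable:
    """One lock per row: more lock overhead, but writers of different
    rows do not contend with each other."""
    def __init__(self, rows):
        self.rows = dict(rows)
        self.locks = {key: threading.Lock() for key in self.rows}

    def update(self, key, value):
        with self.locks[key]:  # blocks only writers of this row
            self.rows[key] = value

table = FineTable({"a": 1, "b": 2})
table.update("a", 10)
print(table.rows["a"])  # 10
```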
Coarse granularity, such as using table locks, tends to give the best performance for a single user, whereas fine granularity, such as record locks, tends to give the best performance for multiple users. == Database locks == Database locks can be used as a means of ensuring transaction synchronicity; that is, when making transaction processing concurrent (interleaving transactions), using two-phase locking ensures that the concurrent execution of the transaction turns out equivalent to some serial ordering of the transaction. However, deadlocks become an unfortunate side-effect of locking in databases. Deadlocks are either prevented by pre-determining the locking order between transactions or are detected using waits-for graphs. An alternative to locking for database synchronicity while avoiding deadlocks involves the use of totally ordered global timestamps. There are mechanisms employed to manage the actions of multiple concurrent users on a database—the purpose is to prevent lost updates and dirty reads. The two types of locking are pessimistic locking and optimistic locking: Pessimistic locking: a user who reads a record with the intention of updating it places an exclusive lock on the record to prevent other users from manipulating it. This means no one else can manipulate that record until the user releases the lock. The downside is that users can be locked out for a very long time, thereby slowing the overall system response and causing frustration. Where to use pessimistic locking: this is mainly used in environments where data contention (the degree to which users request data from the database system at any one time) is heavy; where the cost of protecting data through locks is less than the cost of rolling back transactions, if concurrency conflicts occur. Pessimistic concurrency is best implemented when lock times will be short, as in programmatic processing of records. 
Pessimistic concurrency requires a persistent connection to the database and is not a scalable option when users are interacting with data, because records might be locked for relatively large periods of time. It is not appropriate for use in Web application development. Optimistic locking: this allows multiple concurrent users access to the database whilst the system keeps a copy of the initial read made by each user. When a user wants to update a record, the application determines whether another user has changed the record since it was last read. The application does this by comparing the initial read held in memory to the database record to verify any changes made to the record. Any discrepancies between the initial read and the database record violate concurrency rules and hence cause the system to disregard any update request. An error message is generated and the user is asked to start the update process again. It improves database performance by reducing the amount of locking required, thereby reducing the load on the database server. It works efficiently with tables that require limited updates since no users are locked out. However, some updates may fail. The downside is that, under high volumes of update requests from multiple concurrent users, updates may fail repeatedly, which can be frustrating for users. Where to use optimistic locking: this is appropriate in environments where there is low contention for data, or where read-only access to data is required. Optimistic concurrency is used extensively in .NET to address the needs of mobile and disconnected applications, where locking data rows for prolonged periods of time would be infeasible. Also, maintaining record locks requires a persistent connection to the database server, which is not possible in disconnected applications. == Lock compatibility table == Several variations and refinements of these major lock types exist, with respective variations of blocking behavior. 
If a first lock blocks another lock, the two locks are called incompatible; otherwise the locks are compatible. Often, the blocking interactions between lock types are presented in the technical literature by a lock compatibility table. The following is an example with the common, major lock types:

             read lock   write lock
read lock    ✔           X
write lock   X           X

✔ indicates compatibility; X indicates incompatibility, i.e., a case when a lock of the first type (in left column) on an object blocks a lock of the second type (in top row) from being acquired on the same object (by another transaction). An object typically has a queue of waiting requested (by transactions) operations with respective locks. The first blocked lock for operation in the queue is acquired as soon as the existing blocking lock is removed from the object, and then its respective operation is executed. If a lock for operation in the queue is not blocked by any existing lock (existence of multiple compatible locks on a same object is possible concurrently), it is acquired immediately. Comment: In some publications, the table entries are simply marked "compatible" or "incompatible", or respectively "yes" or "no". == Disadvantages == Lock-based resource protection and thread/process synchronization have many disadvantages: Contention: some threads/processes have to wait until a lock (or a whole set of locks) is released. If one of the threads holding a lock dies, stalls, blocks, or enters an infinite loop, other threads waiting for the lock may wait indefinitely until the computer is power cycled. Overhead: the use of locks adds overhead for each access to a resource, even when the chances for collision are very rare. (However, any chance for such collisions is a race condition.) Debugging: bugs associated with locks are time dependent and can be very subtle and extremely hard to replicate, such as deadlocks. 
Instability: the optimal balance between lock overhead and lock contention can be unique to the problem domain (application) and sensitive to design, implementation, and even low-level system architectural changes. These balances may change over the life cycle of an application and may entail tremendous changes to update (re-balance). Composability: locks are only composable (e.g., managing multiple concurrent locks in order to atomically delete item X from table A and insert X into table B) with relatively elaborate (overhead) software support and perfect adherence by applications programming to rigorous conventions. Priority inversion: a low-priority thread/process holding a common lock can prevent high-priority threads/processes from proceeding. Priority inheritance can be used to reduce priority-inversion duration. The priority ceiling protocol can be used on uniprocessor systems to minimize the worst-case priority-inversion duration, as well as prevent deadlock. Convoying: all other threads have to wait if a thread holding a lock is descheduled due to a time-slice interrupt or page fault. Some concurrency control strategies avoid some or all of these problems. For example, a funnel or serializing tokens can avoid the biggest problem: deadlocks. Alternatives to locking include non-blocking synchronization methods, like lock-free programming techniques and transactional memory. However, such alternative methods often require that the actual lock mechanisms be implemented at a more fundamental level of the operating software. Therefore, they may only relieve the application level from the details of implementing locks, with the problems listed above still needing to be dealt with beneath the application. 
In most cases, proper locking depends on the CPU providing a method of atomic instruction stream synchronization (for example, the addition or deletion of an item into a pipeline requires that all contemporaneous operations needing to add or delete other items in the pipe be suspended during the manipulation of the memory content required to add or delete the specific item). Therefore, an application can often be more robust when it recognizes the burdens it places upon an operating system and is capable of graciously recognizing the reporting of impossible demands. === Lack of composability === One of lock-based programming's biggest problems is that "locks don't compose": it is hard to combine small, correct lock-based modules into equally correct larger programs without modifying the modules or at least knowing about their internals. Simon Peyton Jones (an advocate of software transactional memory) gives the following example of a banking application: design a class Account that allows multiple concurrent clients to deposit or withdraw money to an account, and give an algorithm to transfer money from one account to another. The lock-based solution to the first part of the problem is:

class Account:
    member balance: Integer
    member mutex: Lock

    method deposit(n: Integer)
        mutex.lock()
        balance ← balance + n
        mutex.unlock()

    method withdraw(n: Integer)
        deposit(−n)

The second part of the problem is much more complicated. A transfer routine that is correct for sequential programs would be

function transfer(from: Account, to: Account, amount: Integer)
    from.withdraw(amount)
    to.deposit(amount)

In a concurrent program, this algorithm is incorrect because when one thread is halfway through transfer, another might observe a state where amount has been withdrawn from the first account, but not yet deposited into the other account: money has gone missing from the system. 
This problem can only be fixed completely by putting locks on both accounts prior to changing either one, but then the locks have to be placed according to some arbitrary, global ordering to prevent deadlock:

function transfer(from: Account, to: Account, amount: Integer)
    if from < to    // arbitrary ordering on the locks
        from.lock()
        to.lock()
    else
        to.lock()
        from.lock()
    from.withdraw(amount)
    to.deposit(amount)
    from.unlock()
    to.unlock()

This solution gets more complicated when more locks are involved, and the transfer function needs to know about all of the locks, so they cannot be hidden. == Language support == Programming languages vary in their support for synchronization: Ada provides protected objects that have visible protected subprograms or entries as well as rendezvous. The ISO/IEC C standard provides a standard mutual exclusion (locks) application programming interface (API) since C11. The current ISO/IEC C++ standard supports threading facilities since C++11. The OpenMP standard is supported by some compilers, and allows critical sections to be specified using pragmas. The POSIX pthread API provides lock support. Visual C++ provides the synchronize attribute of methods to be synchronized, but this is specific to COM objects in the Windows architecture and Visual C++ compiler. C and C++ can easily access any native operating system locking features. C# provides the lock keyword on a thread to ensure its exclusive access to a resource. Visual Basic (.NET) provides a SyncLock keyword like C#'s lock keyword. Java provides the keyword synchronized to lock code blocks, methods or objects and libraries featuring concurrency-safe data structures. Objective-C provides the keyword @synchronized to put locks on blocks of code and also provides the classes NSLock, NSRecursiveLock, and NSConditionLock along with the NSLocking protocol for locking as well. PHP provides file-based locking as well as a Mutex class in the pthreads extension. 
Python provides a low-level mutex mechanism with a Lock class from the threading module. The ISO/IEC Fortran standard (ISO/IEC 1539-1:2010) provides the lock_type derived type in the intrinsic module iso_fortran_env and the lock/unlock statements since Fortran 2008. Ruby provides a low-level mutex object and no keyword. Rust provides the Mutex<T> struct. x86 assembly language provides the LOCK prefix on certain operations to guarantee their atomicity. Haskell implements locking via a mutable data structure called an MVar, which can either be empty or contain a value, typically a reference to a resource. A thread that wants to use the resource "takes" the value of the MVar, leaving it empty, and puts it back when it is finished. Attempting to take a resource from an empty MVar results in the thread blocking until the resource is available. As an alternative to locking, an implementation of software transactional memory also exists. Go provides a low-level Mutex object in the standard library's sync package. It can be used for locking code blocks, methods or objects. == Mutexes vs. semaphores == == See also == Critical section Double-checked locking File locking Lock-free and wait-free algorithms Monitor (synchronization) Mutual exclusion Read/write lock pattern == References == == External links == Tutorial on Locks and Critical Sections
Wikipedia/Lock_(computer_science)
In computer programming, the scope of a name binding (an association of a name to an entity, such as a variable) is the part of a program where the name binding is valid; that is, where the name can be used to refer to the entity. In other parts of the program, the name may refer to a different entity (it may have a different binding), or to nothing at all (it may be unbound). Scope helps prevent name collisions by allowing the same name to refer to different objects – as long as the names have separate scopes. The scope of a name binding is also known as the visibility of an entity, particularly in older or more technical literature—this is in relation to the referenced entity, not the referencing name. The term "scope" is also used to refer to the set of all name bindings that are valid within a part of a program or at a given point in a program, which is more correctly referred to as context or environment. Strictly speaking and in practice for most programming languages, "part of a program" refers to a portion of source code (area of text), and is known as lexical scope. In some languages, however, "part of a program" refers to a portion of run time (period during execution), and is known as dynamic scope. Both of these terms are somewhat misleading—they misuse technical terms, as discussed in the definition—but the distinction itself is accurate and precise, and these are the standard respective terms. Lexical scope is the main focus of this article, with dynamic scope understood by contrast with lexical scope. In most cases, name resolution based on lexical scope is relatively straightforward to use and to implement, as in use one can read backwards in the source code to determine to which entity a name refers, and in implementation one can maintain a list of names and contexts when compiling or interpreting a program. 
Difficulties arise in name masking, forward declarations, and hoisting, while considerably subtler ones arise with non-local variables, particularly in closures. == Definition == The strict definition of the (lexical) "scope" of a name (identifier) is unambiguous: lexical scope is "the portion of source code in which a binding of a name with an entity applies". This is virtually unchanged from its 1960 definition in the specification of ALGOL 60. Representative language specifications follow: ALGOL 60 (1960) The following kinds of quantities are distinguished: simple variables, arrays, labels, switches, and procedures. The scope of a quantity is the set of statements and expressions in which the declaration of the identifier associated with that quantity is valid. C (2007) An identifier can denote an object; a function; a tag or a member of a structure, union, or enumeration; a typedef name; a label name; a macro name; or a macro parameter. The same identifier can denote different entities at different points in the program. [...] For each different entity that an identifier designates, the identifier is visible (i.e., can be used) only within a region of program text called its scope. Go (2013) A declaration binds a non-blank identifier to a constant, type, variable, function, label, or package. [...] The scope of a declared identifier is the extent of source text in which the identifier denotes the specified constant, type, variable, function, label, or package. Most commonly "scope" refers to when a given name can refer to a given variable—when a declaration has effect—but can also apply to other entities, such as functions, types, classes, labels, constants, and enumerations. === Lexical scope vs. dynamic scope === A fundamental distinction in scope is what "part of a program" means. 
In languages with lexical scope (also called static scope), name resolution depends on the location in the source code and the lexical context (also called static context), which is defined by where the named variable or function is defined. In contrast, in languages with dynamic scope, name resolution depends upon the program state when the name is encountered, which is determined by the execution context (also called runtime context, calling context or dynamic context). In practice, with lexical scope a name is resolved by searching the local lexical context, then if that fails, by searching the outer lexical context, and so on; whereas with dynamic scope, a name is resolved by searching the local execution context, then if that fails, by searching the outer execution context, and so on, progressing up the call stack. Most modern languages use lexical scope for variables and functions, though dynamic scope is used in some languages, notably some dialects of Lisp, some "scripting" languages, and some template languages. Perl 5 offers both lexical and dynamic scope. Even in lexically scoped languages, scope for closures can be confusing to the uninitiated, as these depend on the lexical context where the closure is defined, not where it is called. Lexical resolution can be determined at compile time, and is also known as early binding, while dynamic resolution can in general only be determined at run time, and thus is known as late binding. === Related concepts === In object-oriented programming, dynamic dispatch selects an object method at runtime, though whether the actual name binding is done at compile time or run time depends on the language. De facto dynamic scope is common in macro languages, which do not directly do name resolution, but instead expand in place. Some programming frameworks like AngularJS use the term "scope" to mean something entirely different than how it is used in this article.
In those frameworks, the scope is just an object of the programming language that they use (JavaScript in case of AngularJS) that is used in certain ways by the framework to emulate dynamic scope in a language that uses lexical scope for its variables. Those AngularJS scopes can themselves be in context or not in context (using the usual meaning of the term) in any given part of the program, following the usual rules of variable scope of the language like any other object, and using their own inheritance and transclusion rules. In the context of AngularJS, sometimes the term "$scope" (with a dollar sign) is used to avoid confusion, but using the dollar sign in variable names is often discouraged by the style guides. == Use == Scope is an important component of name resolution, which is in turn fundamental to language semantics. Name resolution (including scope) varies between programming languages, and within a programming language, varies by type of entity; the rules for scope are called scope rules (or scoping rules). Together with namespaces, scope rules are crucial in modular programming, so a change in one part of the program does not break an unrelated part. == Overview == When discussing scope, there are three basic concepts: scope, extent, and context. "Scope" and "context" in particular are frequently confused: scope is a property of a name binding, while context is a property of a part of a program, that is either a portion of source code (lexical context or static context) or a portion of run time (execution context, runtime context, calling context or dynamic context). Execution context consists of lexical context (at the current execution point) plus additional runtime state such as the call stack. 
Strictly speaking, during execution a program enters and exits various name bindings' scopes, and at a point in execution name bindings are "in context" or "not in context", hence name bindings "come into context" or "go out of context" as the program execution enters or exits the scope. However, in practice usage is much looser. Scope is a source-code level concept, and a property of name bindings, particularly variable or function name bindings—names in the source code are references to entities in the program—and is part of the behavior of a compiler or interpreter of a language. As such, issues of scope are similar to pointers, which are a type of reference used in programs more generally. Using the value of a variable when the name is in context but the variable is uninitialized is analogous to dereferencing (accessing the value of) a wild pointer, as it is undefined. However, as variables are not destroyed until they go out of context, the analog of a dangling pointer does not exist. For entities such as variables, scope is a subset of lifetime (also known as extent)—a name can only refer to a variable that exists (possibly with undefined value), but variables that exist are not necessarily visible: a variable may exist but be inaccessible (the value is stored but not referred to within a given context), or accessible but not via the given name, in which case it is not in context (the program is "out of the scope of the name"). In other cases "lifetime" is irrelevant—a label (named position in the source code) has lifetime identical with the program (for statically compiled languages), but may be in context or not at a given point in the program, and likewise for static variables—a static global variable is in context for the entire program, while a static local variable is only in context within a function or other local context, but both have lifetime of the entire run of the program. 
Determining which entity a name refers to is known as name resolution or name binding (particularly in object-oriented programming), and varies between languages. Given a name, the language (properly, the compiler or interpreter) checks all entities that are in context for matches; in case of ambiguity (two entities with the same name, such as a global and local variable with the same name), the name resolution rules are used to distinguish them. Most frequently, name resolution relies on an "inner-to-outer context" rule, such as the Python LEGB (Local, Enclosing, Global, Built-in) rule: names implicitly resolve to the narrowest relevant context. In some cases name resolution can be explicitly specified, such as by the global and nonlocal keywords in Python; in other cases the default rules cannot be overridden. When two identical names are in context at the same time, referring to different entities, one says that name masking is occurring, where the higher-priority name (usually innermost) is "masking" the lower-priority name. At the level of variables, this is known as variable shadowing. Due to the potential for logic errors from masking, some languages disallow or discourage masking, raising an error or warning at compile time or run time. Various programming languages have various different scope rules for different kinds of declarations and names. Such scope rules have a large effect on language semantics and, consequently, on the behavior and correctness of programs. In languages like C++, accessing an unbound variable does not have well-defined semantics and may result in undefined behavior, similar to referring to a dangling pointer; and declarations or names used outside their scope will generate syntax errors. Scopes are frequently tied to other language constructs and determined implicitly, but many languages also offer constructs specifically for controlling scope. 
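As an illustrative sketch of the inner-to-outer rule, the Python LEGB order and the global keyword can be demonstrated as follows; the names x, outer, inner, and bump are hypothetical, invented for this example:

```python
# A sketch of Python's LEGB (Local, Enclosing, Global, Built-in) rule.
x = "global"          # global context

def outer():
    x = "enclosing"   # enclosing context, masks the global x
    def inner():
        x = "local"   # local context, masks the enclosing x
        return x
    return inner(), x

print(outer())        # ('local', 'enclosing'): the innermost binding wins
print(x)              # 'global': the outer bindings never touched it

def bump():
    global x          # explicitly resolve x to the global context
    x = "changed"

bump()
print(x)              # 'changed'
```

Each assignment creates a binding in its own context, so the three names x refer to three different variables; only the global declaration overrides the default inner-to-outer resolution.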
== Levels of scope == Scope can vary from as little as a single expression to as much as the entire program, with many possible gradations in between. The simplest scope rule is global scope—all entities are visible throughout the entire program. The most basic modular scope rule is two-level scope, with a global scope anywhere in the program, and local scope within a function. More sophisticated modular programming allows a separate module scope, where names are visible within the module (private to the module) but not visible outside it. Within a function, some languages, such as C, allow block scope to restrict scope to a subset of a function; others, notably functional languages, allow expression scope, to restrict scope to a single expression. Other scopes include file scope (notably in C) which behaves similarly to module scope, and block scope outside of functions (notably in Perl). A subtle issue is exactly when a scope begins and ends. In some languages, such as C, a name's scope begins at the name declaration, and thus different names declared within a given block can have different scopes. This requires declaring functions before use, though not necessarily defining them, and requires forward declaration in some cases, notably for mutual recursion. In other languages, such as Python, a name's scope begins at the start of the relevant block where the name is declared (such as the start of a function), regardless of where it is defined, so all names within a given block have the same scope. In JavaScript, the scope of a name declared with let or const begins at the name declaration, and the scope of a name declared with var begins at the start of the function where the name is declared, which is known as variable hoisting. 
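The contrast just described for Python can be seen directly: in the sketch below (the function name f is hypothetical), a global x exists, yet reading x inside f fails before the local assignment, because the local name's scope covers the whole function body:

```python
x = 1  # a global x exists

def f():
    try:
        return x      # fails: x is local throughout f because of the
                      # assignment below, and it is not yet bound here
    except UnboundLocalError:
        unbound = True
    x = 2             # this assignment makes x local to all of f
    return unbound

print(f())  # True: the name was in scope but had no value yet
```

Under a declaration-point rule like C's, the early read would instead have resolved to the outer x.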
Behavior of names in context that have undefined value differs: in Python use of undefined names yields a runtime error, while in JavaScript undefined names declared with var are usable throughout the function because they are implicitly bound to the value undefined. === Expression scope === The scope of a name binding is an expression, which is known as expression scope. Expression scope is available in many languages, especially functional languages which offer a feature called let expressions allowing a declaration's scope to be a single expression. This is convenient if, for example, an intermediate value is needed for a computation. For example, in Standard ML, if f() returns 12, then let val x = f() in x * x end is an expression that evaluates to 144, using a temporary variable named x to avoid calling f() twice. Some languages with block scope approximate this functionality by offering syntax for a block to be embedded into an expression; for example, the aforementioned Standard ML expression could be written in Perl as do { my $x = f(); $x * $x }, or in GNU C as ({ int x = f(); x * x; }). In Python, auxiliary variables in generator expressions and list comprehensions (in Python 3) have expression scope. In C, variable names in a function prototype have expression scope, known in this context as function prototype scope. As the variable names in the prototype are not referred to (they may be different in the actual definition)—they are just dummies—these are often omitted, though they may be used for generating documentation, for instance. === Block scope === The scope of a name binding is a block, which is known as block scope. Block scope is available in many, but not all, block-structured programming languages. This began with ALGOL 60, where "[e]very declaration ... is valid only for that block.", and today is particularly associated with languages in the Pascal and C families and traditions.
Most often this block is contained within a function, thus restricting the scope to a part of a function, but in some cases, such as Perl, the block may not be within a function. A representative example of the use of block scope is C code in which two variables are scoped to a loop: the loop variable n, which is initialized once and incremented on each iteration of the loop, and the auxiliary variable n_squared, which is initialized at each iteration. The purpose is to avoid adding variables to the function scope that are only relevant to a particular block—for example, this prevents errors where the generic loop variable i has accidentally already been set to another value. In such a loop the expression n * n would generally not be assigned to an auxiliary variable, and the body of the loop would simply be written ret += n * n, but in more complicated examples auxiliary variables are useful. Blocks are primarily used for control flow, such as with if, while, and for loops, and in these cases block scope means the scope of a variable depends on the structure of a function's flow of execution. However, languages with block scope typically also allow the use of "naked" blocks, whose sole purpose is to allow fine-grained control of variable scope. For example, an auxiliary variable may be defined in a block, then used (say, added to a variable with function scope) and discarded when the block ends, or a while loop might be enclosed in a block that initializes variables used inside the loop that should only be initialized once. A subtlety of several programming languages, such as Algol 68 and C (standardized in C since C99), is that block-scope variables can be declared not only within the body of the block, but also within the control statement, if any.
This is analogous to function parameters, which are declared in the function declaration (before the block of the function body starts), and in scope for the whole function body. This is primarily used in for loops, which have an initialization statement separate from the loop condition, unlike while loops, and is a common idiom. Block scope can be used for shadowing: an auxiliary variable inside a block may be given the same name as a variable in an enclosing scope, such as a loop variable or parameter n, shadowing it, but this is considered poor style due to the potential for errors. Furthermore, some descendants of C, such as Java and C#, despite having support for block scope (in that a local variable can be made to go out of context before the end of a function), do not allow one local variable to hide another. In such languages, the attempted declaration of the second n would result in a syntax error, and one of the n variables would have to be renamed. If a block is used to set the value of a variable, block scope requires that the variable be declared outside of the block. This complicates the use of conditional statements with single assignment. For example, in Python, which does not use block scope, one may initialize a variable a inside an if statement, and a remains accessible after the if statement ends. In Perl, which has block scope, this instead requires declaring the variable prior to the block. Often this is instead rewritten using multiple assignment, initializing the variable to a default value before the conditional; in Python this is not necessary, while in Perl it avoids the separate declaration. In the case of a single variable assignment, an alternative is to use the ternary operator to avoid a block, but this is not in general possible for multiple variable assignments, and is difficult to read for complex logic.
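The Python form referred to above, in which a name bound inside an if statement remains usable afterwards, can be sketched as follows; the function describe and its test values are illustrative:

```python
def describe(n):
    # Python has no block scope: the name a, bound inside the
    # if/else, is still accessible after the statement ends.
    if n >= 0:
        a = "non-negative"
    else:
        a = "negative"
    return a  # a is in (function) scope here

print(describe(5))   # non-negative
print(describe(-3))  # negative
```

In a block-scoped language such as Perl, a would instead have to be declared before the conditional to survive past it.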
This is a more significant issue in C, notably for string assignment, as string initialization can automatically allocate memory, while string assignment to an already initialized variable requires allocating memory, a string copy, and checking that these are successful. Some languages allow the concept of block scope to be applied, to varying extents, outside of a function. For example, in Perl, a variable $counter declared with the my keyword inside a bare block has block scope, while a function increment_counter defined in the same block has global scope. Each call to increment_counter will increase the value of $counter by one, and return the new value. Code outside of this block can call increment_counter, but cannot otherwise obtain or alter the value of $counter. This idiom allows one to define closures in Perl. === Function scope === When the scope of variables declared within a function does not extend beyond that function, this is known as function scope. Function scope is available in most programming languages which offer a way to create a local variable in a function or subroutine: a variable whose scope ends (that goes out of context) when the function returns. In most cases the lifetime of the variable is the duration of the function call—it is an automatic variable, created when the function starts (or the variable is declared), destroyed when the function returns—while the scope of the variable is within the function, though the meaning of "within" depends on whether scope is lexical or dynamic. However, some languages, such as C, also provide for static local variables, where the lifetime of the variable is the entire lifetime of the program, but the variable is only in context when inside the function.
In the case of static local variables, the variable is created when the program initializes, and destroyed only when the program terminates, as with a static global variable, but is only in context within a function, like an automatic local variable. Importantly, in lexical scope a variable with function scope has scope only within the lexical context of the function: it goes out of context when another function is called within the function, and comes back into context when the function returns—called functions have no access to the local variables of calling functions, and local variables are only in context within the body of the function in which they are declared. By contrast, in dynamic scope, the scope extends to the execution context of the function: local variables stay in context when another function is called, only going out of context when the defining function ends, and thus local variables are in context of the function in which they are defined and all called functions. In languages with lexical scope and nested functions, local variables are in context for nested functions, since these are within the same lexical context, but not for other functions that are not lexically nested. A local variable of an enclosing function is known as a non-local variable for the nested function. Function scope is also applicable to anonymous functions. For example, consider a snippet of Python code in which two functions are defined: square and sum_of_squares. square computes the square of a number; sum_of_squares computes the sum of all squares up to a number. (For example, square(4) is 4² = 16, and sum_of_squares(4) is 0² + 1² + 2² + 3² + 4² = 30.) Each of these functions has a variable named n that represents the argument to the function.
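The snippet itself is not reproduced in this text; a minimal reconstruction consistent with the discussion (the particular loop structure is an assumption) might read:

```python
def square(n):
    # This n is local to square.
    return n * n

def sum_of_squares(n):
    # This n, total and i are local to sum_of_squares.
    total = 0
    i = 0
    while i <= n:
        total += square(i)  # calling square does not disturb this n
        i += 1
    return total

print(square(4))          # 16
print(sum_of_squares(4))  # 30
```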
These two n variables are completely separate and unrelated, despite having the same name, because they are lexically scoped local variables with function scope: each one's scope is its own, lexically separate function and thus, they don't overlap. Therefore, sum_of_squares can call square without its own n being altered. Similarly, sum_of_squares has variables named total and i; these variables, because of their limited scope, will not interfere with any variables named total or i that might belong to any other function. In other words, there is no risk of a name collision between these names and any unrelated names, even if they are identical. No name masking is occurring: only one variable named n is in context at any given time, as the scopes do not overlap. By contrast, were a similar fragment to be written in a language with dynamic scope, the n in the calling function would remain in context in the called function—the scopes would overlap—and would be masked ("shadowed") by the new n in the called function. Function scope is significantly more complicated if functions are first-class objects and can be created locally to a function and then returned. In this case any variables in the nested function that are not local to it (unbound variables in the function definition, that resolve to variables in an enclosing context) create a closure, as not only the function itself, but also its context (of variables) must be returned, and then potentially called in a different context. This requires significantly more support from the compiler, and can complicate program analysis. === File scope === The scope of a name binding is a file, which is known as file scope. File scope is largely particular to C (and C++), where scope of variables and functions declared at the top level of a file (not within any function) is for the entire file—or rather for C, from the declaration until the end of the source file, or more precisely translation unit (internal linking). 
This can be seen as a form of module scope, where modules are identified with files, and in more modern languages is replaced by an explicit module scope. Due to the presence of include statements, which add variables and functions to the internal context and may themselves call further include statements, it can be difficult to determine what is in context in the body of a file. In C, a function name such as sum_of_squares defined at the top level of a file has global scope by default (in C, extern linkage); adding static to the function signature would result in file scope (internal linkage). === Module scope === The scope of a name binding is a module, which is known as module scope. Module scope is available in modular programming languages where modules (which may span various files) are the basic unit of a complex program, as they allow information hiding and exposing a limited interface. Module scope was pioneered in the Modula family of languages, and Python (which was influenced by Modula) is a representative contemporary example. In some object-oriented programming languages that lack direct support for modules, such as C++ before C++20, a similar structure is instead provided by the class hierarchy, where classes are the basic unit of the program, and a class can have private methods. This is properly understood in the context of dynamic dispatch rather than name resolution and scope, though they often play analogous roles. In some cases both these facilities are available, such as in Python, which has both modules and classes, and code organization (as a module-level function or a conventionally private method) is a choice of the programmer. === Global scope === The scope of a name binding is an entire program, which is known as global scope.
Variable names with global scope—called global variables—are frequently considered bad practice, at least in some languages, due to the possibility of name collisions and unintentional masking, together with poor modularity, and function scope or block scope are considered preferable. However, global scope is typically used (depending on the language) for various other sorts of names, such as names of functions, names of classes and names of other data types. In these cases mechanisms such as namespaces are used to avoid collisions. == Lexical scope vs. dynamic scope == The use of local variables — of variable names with limited scope, that only exist within a specific function — helps avoid the risk of a name collision between two identically named variables. However, there are two very different approaches to the question of what it means to be "within" a function. In lexical scope (or lexical scoping; also called static scope or static scoping), if a variable name's scope is a certain function, then its scope is the program text of the function definition: within that text, the variable name exists, and is bound to the variable's value, but outside that text, the variable name does not exist. By contrast, in dynamic scope (or dynamic scoping), if a variable name's scope is a certain function, then its scope is the time-period during which the function is executing: while the function is running, the variable name exists, and is bound to its value, but after the function returns, the variable name does not exist. This means that if function f invokes a separately defined function g, then under lexical scope, function g does not have access to f's local variables (assuming the text of g is not inside the text of f), while under dynamic scope, function g does have access to f's local variables (since g is invoked during the invocation of f). Consider, for example, the following program, examined line by line.
The first line, x=1, creates a global variable x and initializes it to 1. The second line, function g() { echo $x ; x=2 ; }, defines a function g that prints out ("echoes") the current value of x, and then sets x to 2 (overwriting the previous value). The third line, function f() { local x=3 ; g ; } defines a function f that creates a local variable x (hiding the identically named global variable) and initializes it to 3, and then calls g. The fourth line, f, calls f. The fifth line, echo $x, prints out the current value of x. So, what exactly does this program print? It depends on the scope rules. If the language of this program is one that uses lexical scope, then g prints and modifies the global variable x (because g is defined outside f), so the program prints 1 and then 2. By contrast, if this language uses dynamic scope, then g prints and modifies f's local variable x (because g is called from within f), so the program prints 3 and then 1. (As it happens, the language of the program is Bash, which uses dynamic scope; so the program prints 3 and then 1. If the same code was run with ksh93 which uses lexical scope, the results would be different.) == Lexical scope == With lexical scope, a name always refers to its lexical context. This is a property of the program text and is made independent of the runtime call stack by the language implementation. Because this matching only requires analysis of the static program text, this type of scope is also called static scope. Lexical scope is standard in all ALGOL-based languages such as Pascal, Modula-2 and Ada as well as in modern functional languages such as ML and Haskell. It is also used in the C language and its syntactic and semantic relatives, although with different kinds of limitations. Static scope allows the programmer to reason about object references such as parameters, variables, constants, types, functions, etc., as simple name substitutions. 
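Python is lexically scoped, so an analogous program illustrates the lexical outcome described above. In this sketch the names x, f and g mirror the Bash example, and the global declaration is simply Python's way of letting g assign to the global x:

```python
x = 1

def g():
    global x   # needed in Python only because g assigns to x
    print(x)   # prints 1: g's text is outside f, so under lexical
               # scope it never sees f's local x
    x = 2

def f():
    x = 3      # local to f; invisible to g under lexical scope
    g()

f()
print(x)       # prints 2: g modified the global x
```

Under dynamic scope the same call structure would instead print 3 and then 1, because g would resolve x through the call stack to f's local variable.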
This makes it much easier to make modular code and reason about it, since the local naming structure can be understood in isolation. In contrast, dynamic scope forces the programmer to anticipate all possible execution contexts in which the module's code may be invoked. For example, Pascal is lexically scoped. Consider a Pascal program fragment in which procedure B is nested within procedure A. The variable I is visible at all points, because it is never hidden by another variable of the same name. The char variable K is visible only in the main program because it is hidden by the real variable K, which is visible only in procedures A and B. Variable L is also visible only in procedures A and B but it does not hide any other variable. Variable M is only visible in procedure B and therefore not accessible either from procedure A or the main program. Also, procedure B is visible only in procedure A and can therefore not be called from the main program. There could have been another procedure named B declared in the program outside of procedure B. The place in the program where "B" is mentioned then determines which of the two procedures named B it represents, analogous with the scope of variables. Correct implementation of lexical scope in languages with first-class nested functions is not trivial, as it requires each function value to carry with it a record of the values of the variables that it depends on (the pair of the function and this context is called a closure). Depending on implementation and computer architecture, variable lookup may become slightly inefficient when very deeply lexically nested functions are used, although there are well-known techniques to mitigate this. Also, for nested functions that only refer to their own arguments and (immediately) local variables, all relative locations can be known at compile time. No overhead at all is therefore incurred when using that type of nested function.
The same applies to particular parts of a program where nested functions are not used, and, naturally, to programs written in a language where nested functions are not available (such as in the C language). === History === Lexical scope was first used in the early 1960s for the imperative language ALGOL 60 and has been picked up in most other imperative languages since then. Languages like Pascal and C have always had lexical scope, since they are both influenced by the ideas that went into ALGOL 60 and ALGOL 68 (although C did not include lexically nested functions). Perl is a language with dynamic scope that added static scope afterwards. The original Lisp interpreter (1960) used dynamic scope. Deep binding, which approximates static (lexical) scope, was introduced around 1962 in LISP 1.5 (via the Funarg device developed by Steve Russell, working under John McCarthy). All early Lisps used dynamic scope, when based on interpreters. In 1982, Guy L. Steele Jr. and the Common LISP Group published An overview of Common LISP, a short review of the history and the divergent implementations of Lisp up to that moment and a review of the features that a Common Lisp implementation should have. On page 102, we read: Most LISP implementations are internally inconsistent in that by default the interpreter and compiler may assign different semantics to correct programs; this stems primarily from the fact that the interpreter assumes all variables to be dynamically scoped, while the compiler assumes all variables to be local unless forced to assume otherwise. This has been done for the sake of convenience and efficiency, but can lead to very subtle bugs. The definition of Common LISP avoids such anomalies by explicitly requiring the interpreter and compiler to impose identical semantics on correct programs. Implementations of Common LISP were thus required to have lexical scope. 
Again, from An overview of Common LISP: In addition, Common LISP offers the following facilities (most of which are borrowed from MacLisp, InterLisp or Lisp Machines Lisp): (...) Fully lexically scoped variables. The so-called "FUNARG problem" is completely solved, in both the downward and upward cases. By the same year in which An overview of Common LISP was published (1982), initial designs (also by Guy L. Steele Jr.) of a compiled, lexically scoped Lisp, called Scheme had been published and compiler implementations were being attempted. At that time, lexical scope in Lisp was commonly feared to be inefficient to implement. In A History of T, Olin Shivers writes: All serious Lisps in production use at that time were dynamically scoped. No one who hadn't carefully read the Rabbit thesis (written by Guy Lewis Steele Jr. in 1978) believed lexical scope would fly; even the few people who had read it were taking a bit of a leap of faith that this was going to work in serious production use. The term "lexical scope" dates at least to 1967, while the term "lexical scoping" dates at least to 1970, where it was used in Project MAC to describe the scope rules of the Lisp dialect MDL (then known as "Muddle"). == Dynamic scope == With dynamic scope, a name refers to execution context. In technical terms, this means that each name has a global stack of bindings. Introducing a local variable with name x pushes a binding onto the global x stack (which may have been empty), which is popped off when the control flow leaves the scope. Evaluating x in any context always yields the top binding. Note that this cannot be done at compile-time because the binding stack only exists at run-time, which is why this type of scope is called dynamic scope. Dynamic scope is uncommon in modern languages. Generally, certain blocks are defined to create bindings whose lifetime is the execution time of the block; this adds some features of static scope to the dynamic scope process. 
However, since a section of code can be called from many different locations and situations, it can be difficult to determine at the outset what bindings will apply when a variable is used (or if one exists at all). This can be beneficial; application of the principle of least knowledge suggests that code avoid depending on the reasons for (or circumstances of) a variable's value, but simply use the value according to the variable's definition. This narrow interpretation of shared data can provide a very flexible system for adapting the behavior of a function to the current state (or policy) of the system. However, this benefit relies on careful documentation of all variables used this way as well as on careful avoidance of assumptions about a variable's behavior, and does not provide any mechanism to detect interference between different parts of a program. Some languages, like Perl and Common Lisp, allow the programmer to choose static or dynamic scope when defining or redefining a variable. Examples of languages that use dynamic scope include Logo, Emacs Lisp, LaTeX and the shell languages bash, dash, and PowerShell. Dynamic scope is fairly easy to implement. To find a name's value, the program could traverse the runtime stack, checking each activation record (each function's stack frame) for a value for the name. In practice, this is made more efficient via the use of an association list, which is a stack of name/value pairs. Pairs are pushed onto this stack whenever declarations are made, and popped whenever variables go out of context. Shallow binding is an alternative strategy that is considerably faster, making use of a central reference table, which associates each name with its own stack of meanings. This avoids a linear search during run-time to find a particular name, but care should be taken to properly maintain this table.
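As a sketch of the central reference table (shallow binding) strategy, dynamic scope can be emulated in a lexically scoped language with a dictionary mapping each name to its own stack of values; all names here (bindings, dynamic_bind, and so on) are invented for illustration:

```python
# Central reference table: each name maps to its own stack of bindings.
bindings = {}

def dynamic_bind(name, value):
    """Enter a scope that declares `name`: push a new binding."""
    bindings.setdefault(name, []).append(value)

def dynamic_unbind(name):
    """Leave that scope: pop the binding, restoring the outer one."""
    bindings[name].pop()

def dynamic_lookup(name):
    """Evaluating a name always yields the topmost (newest) binding."""
    return bindings[name][-1]

dynamic_bind("x", 1)      # a "global" binding for x

def g():
    # Under dynamic scope, x resolves to whatever binding is newest
    # at the moment g runs, regardless of where g is defined.
    return dynamic_lookup("x")

def f():
    dynamic_bind("x", 3)  # f's "local" x masks the global binding
    seen = g()
    dynamic_unbind("x")   # leaving f restores the global binding
    return seen

print(f())                  # 3: g saw f's binding
print(dynamic_lookup("x"))  # 1: the global binding is back
```

The push/pop discipline enforces exactly the last-in-first-out ordering of bindings that these strategies assume.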
Note that both of these strategies assume a last-in-first-out (LIFO) ordering to bindings for any one variable; in practice all bindings are so ordered. An even simpler implementation is the representation of dynamic variables with simple global variables. The local binding is performed by saving the original value in an anonymous location on the stack that is invisible to the program. When that binding scope terminates, the original value is restored from this location. In fact, dynamic scope originated in this manner. Early implementations of Lisp used this obvious strategy for implementing local variables, and the practice survives in some dialects which are still in use, such as GNU Emacs Lisp. Lexical scope was introduced into Lisp later. This is equivalent to the above shallow binding scheme, except that the central reference table is simply the global variable binding context, in which the current meaning of the variable is its global value. Maintaining global variables isn't complex. For instance, a symbol object can have a dedicated slot for its global value. Dynamic scope provides an excellent abstraction for thread-local storage, but if it is used that way it cannot be based on saving and restoring a global variable. A possible implementation strategy is for each variable to have a thread-local key. When the variable is accessed, the thread-local key is used to access the thread-local memory location (by code generated by the compiler, which knows which variables are dynamic and which are lexical). If the thread-local key does not exist for the calling thread, then the global location is used. When a variable is locally bound, the prior value is stored in a hidden location on the stack. The thread-local storage is created under the variable's key, and the new value is stored there. Further nested overrides of the variable within that thread simply save and restore this thread-local location. 
When the initial, outermost override's context terminates, the thread-local key is deleted, exposing the global version of the variable once again to that thread. With referential transparency the dynamic scope is restricted to the argument stack of the current function only, and coincides with the lexical scope. === Macro expansion === In modern languages, macro expansion in a preprocessor is a key example of de facto dynamic scope. The macro language itself only transforms the source code, without resolving names, but since the expansion is done in place, when the names in the expanded text are then resolved (notably free variables), they are resolved based on where they are expanded (loosely "called"), as if dynamic scope were occurring. The C preprocessor, used for macro expansion, has de facto dynamic scope, as it does not do name resolution by itself and it is independent of where the macro is defined. For example, the macro: will expand to add a to the passed variable, with this name only later resolved by the compiler based on where the macro ADD_A is "called" (properly, expanded). Properly, the C preprocessor only does lexical analysis, expanding the macro during the tokenization stage, but not parsing into a syntax tree or doing name resolution. For example, in the following code, the name a in the macro is resolved (after expansion) to the local variable at the expansion site: == Qualified names == As we have seen, one of the key reasons for scope is that it helps prevent name collisions, by allowing identical names to refer to distinct things, with the restriction that the names must have separate scopes. Sometimes this restriction is inconvenient; when many different things need to be accessible throughout a program, they generally all need names with global scope, so different techniques are required to avoid name collisions. To address this, many languages offer mechanisms for organizing global names. 
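Returning to the C preprocessor example above: a plausible reconstruction of the ADD_A macro follows (the two wrapper functions are invented for illustration):

```c
/* The macro adds the name a to its argument. The preprocessor does not
   resolve a; the compiler resolves it separately at each expansion site,
   giving de facto dynamic scope. */
#define ADD_A(x) ((x) + a)

int with_a_100(int n)
{
    int a = 100;        /* this local a is picked up by the expansion below */
    return ADD_A(n);    /* expands to ((n) + a), i.e. n + 100 here */
}

int with_a_5(int n)
{
    int a = 5;          /* a different a at a different expansion site */
    return ADD_A(n);    /* the same macro text now resolves to this a */
}
```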
The details of these mechanisms, and the terms used, depend on the language; but the general idea is that a group of names can itself be given a name — a prefix — and, when necessary, an entity can be referred to by a qualified name consisting of the name plus the prefix. Normally such names will have, in a sense, two sets of scopes: a scope (usually the global scope) in which the qualified name is visible, and one or more narrower scopes in which the unqualified name (without the prefix) is visible as well. And normally these groups can themselves be organized into groups; that is, they can be nested. Although many languages support this concept, the details vary greatly. Some languages have mechanisms, such as namespaces in C++ and C#, that serve almost exclusively to enable global names to be organized into groups. Other languages have mechanisms, such as packages in Ada and structures in Standard ML, that combine this with the additional purpose of allowing some names to be visible only to other members of their group. And object-oriented languages often allow classes or singleton objects to fulfill this purpose (whether or not they also have a mechanism for which this is the primary purpose). Furthermore, languages often meld these approaches; for example, Perl's packages are largely similar to C++'s namespaces, but optionally double as classes for object-oriented programming; and Java organizes its variables and functions into classes, but then organizes those classes into Ada-like packages. == By language == Scope rules for representative languages follow. === C === In C, scope is traditionally known as linkage or visibility, particularly for variables. C is a lexically scoped language with global scope (known as external linkage), a form of module scope or file scope (known as internal linkage), and local scope (within a function); within a function scopes can further be nested via block scope. However, standard C does not support nested functions. 
The lifetime and visibility of a variable are determined by its storage class. There are three types of lifetimes in C: static (program execution), automatic (block execution, allocated on the stack), and manual (allocated on the heap). Only static and automatic are supported for variables and handled by the compiler, while manually allocated memory must be tracked manually across different variables. There are three levels of visibility in C: external linkage (global), internal linkage (roughly file), and block scope (which includes functions); block scopes can be nested, and different levels of internal linkage are possible through the use of includes. Internal linkage in C is visibility at the translation unit level, namely a source file after being processed by the C preprocessor, notably including all relevant includes. C programs are compiled as separate object files, which are then linked into an executable or library via a linker. Thus name resolution is split across the compiler, which resolves names within a translation unit (more loosely, "compilation unit", but this is properly a different concept), and the linker, which resolves names across translation units; see linkage for further discussion. In C, variables with block scope enter context when they are declared (not at the top of the block), go out of context if any (non-nested) function is called within the block, come back into context when the function returns, and go out of context at the end of the block. In the case of automatic local variables, they are also allocated on declaration and deallocated at the end of the block, while for static local variables, they are allocated at program initialization and deallocated at program termination. The following program demonstrates a variable with block scope coming into context partway through the block, then exiting context (and in fact being deallocated) when the block ends. The program outputs: m m b m. There are other levels of scope in C. 
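A plausible reconstruction of the program referred to above; the recording function stands in for the original's printf calls, but the scoping it demonstrates, and the m m b m sequence, are the same:

```c
/* Records the value of the visible x at four points, mirroring the
   "m m b m" output described in the text. */
void record_scopes(char out[4])
{
    char x = 'm';       /* outer x: in scope from here to the function's end */
    out[0] = x;         /* m */
    {
        out[1] = x;     /* still the outer x: m */
        char x = 'b';   /* inner x shadows the outer one from its declaration */
        out[2] = x;     /* b */
    }                   /* inner x goes out of context (and is deallocated) */
    out[3] = x;         /* the outer x is visible again: m */
}
```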
Variable names used in a function prototype have function prototype visibility, and exit context at the end of the function prototype. Since the name is not used, this is not useful for compilation, but may be useful for documentation. Label names for goto statements have function scope. === C++ === Every variable used in a program must have been declared with its type specifier at an earlier point in the code, for example at the beginning of the body of the function main, where variables such as a, b, and result might be declared as type int. A variable can be either of global or local scope. A global variable is a variable declared in the main body of the source code, outside all functions, while a local variable is one declared within the body of a function or a block. Modern versions allow nested lexical scope. === Swift === Swift has scope rules similar to those of C++, but contains different access modifiers. === Go === Go is lexically scoped using blocks. === Java === Java is lexically scoped. A Java class has several kinds of variables: Local variables are defined inside a method, or a particular block. These variables are local to where they were defined and lower levels. For example, a loop inside a method can use that method's local variables, but not the other way around. The loop's variables (local to that loop) are destroyed as soon as the loop ends. Member variables, also called fields, are variables declared within the class, outside of any method. By default, these variables are available for all methods within that class and also for all classes in the package. Parameters are variables in method declarations. In general, a set of curly braces defines a particular scope, but variables at top level within a class can differ in their behavior depending on the modifier keywords used in their definition. The following table shows the access to members permitted by each modifier. 
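A minimal sketch of the kinds of variables described above (class and member names are invented for illustration):

```java
public class ScopeDemo {
    private int member = 10;              // member variable (field): visible to
                                          // every method of the class

    public int sum(int limit) {           // limit: a parameter, local to sum
        int total = 0;                    // local variable: method scope
        for (int i = 0; i < limit; i++) { // i: local to the loop block only
            total += i + member;          // inner blocks see enclosing locals
                                          // and fields
        }
        return total;                     // i is no longer in scope here
    }
}
```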
=== JavaScript === JavaScript has simple scope rules, but variable initialization and name resolution rules can cause problems, and the widespread use of closures for callbacks means the lexical context of a function when defined (which is used for name resolution) can be very different from the lexical context when it is called (which is irrelevant for name resolution). JavaScript objects have name resolution for properties, but this is a separate topic. JavaScript has lexical scope nested at the function level, with the global context being the outermost context. This scope is used for both variables and for functions (meaning function declarations, as opposed to variables of function type). Block scope with the let and const keywords is standard since ECMAScript 6. Block scope can be produced by wrapping the entire block in a function and then executing it; this is known as the immediately-invoked function expression (IIFE) pattern. While JavaScript scope is simple—lexical, function-level—the associated initialization and name resolution rules are a cause of confusion. Firstly, assignment to a name not in scope defaults to creating a new global variable, not a local one. Secondly, to create a new local variable one must use the var keyword; the variable is then created at the top of the function, with value undefined and the variable is assigned its value when the assignment expression is reached: A variable with an Initialiser is assigned the value of its AssignmentExpression when the VariableStatement is executed, not when the variable is created. This is known as variable hoisting—the declaration, but not the initialization, is hoisted to the top of the function. Thirdly, accessing variables before initialization yields undefined, rather than a syntax error. Fourthly, for function declarations, the declaration and the initialization are both hoisted to the top of the function, unlike for variable initialization. 
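The hoisting behaviour just described, and the closures discussed below, can be sketched as follows (function names are illustrative). The first function reads undefined because only the declaration of its local a, not the initialization, is hoisted:

```javascript
var a = 1;                      // global variable

function readBeforeInit() {
  var seen = a;                 // undefined: the local declaration below is
  var a = 2;                    // hoisted to the top of readBeforeInit and
  return [seen, a];             // shadows the global; its value is set here
}

function makeCounter() {
  var count = 0;                // captured as the non-local context of the
  return function () {          // returned nested function: a closure
    count += 1;
    return count;
  };
}
```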
For example, reading a local variable before its declaration statement yields undefined rather than the value of a like-named global variable: the local declaration is hoisted, shadowing the global variable, but the initialization is not, so the variable is undefined when used. Further, as functions are first-class objects in JavaScript and are frequently assigned as callbacks or returned from functions, when a function is executed, the name resolution depends on where it was originally defined (the lexical context of the definition), not the lexical context or execution context where it is called. The nested scopes of a particular function (from most global to most local) in JavaScript, particularly of a closure, used as a callback, are sometimes referred to as the scope chain, by analogy with the prototype chain of an object. Closures can be produced in JavaScript by using nested functions, as functions are first-class objects. Returning a nested function from an enclosing function includes the local variables of the enclosing function as the (non-local) lexical context of the returned function, yielding a closure. Closures are frequently used in JavaScript, due to being used for callbacks. Indeed, any hooking of a function in the local context as a callback or returning it from a function creates a closure if there are any unbound variables in the function body (with the context of the closure based on the nested scopes of the current lexical context, or "scope chain"); this may be accidental. When creating a callback based on parameters, the parameters must be stored in a closure, otherwise it will accidentally create a closure that refers to the variables in the enclosing context, which may change. Name resolution of properties of JavaScript objects is based on inheritance in the prototype tree—a path to the root in the tree is called a prototype chain—and is separate from name resolution of variables and functions. === Lisp === Lisp dialects have various rules for scope. 
The original Lisp used dynamic scope; it was Scheme, inspired by ALGOL, that introduced static (lexical) scope to the Lisp family. Maclisp used dynamic scope by default in the interpreter and lexical scope by default in compiled code, though compiled code could access dynamic bindings by use of SPECIAL declarations for particular variables. However, Maclisp treated lexical binding more as an optimization than one would expect in modern languages, and it did not come with the closure feature one might expect of lexical scope in modern Lisps. A separate operation, *FUNCTION, was available to somewhat clumsily work around some of that issue. Common Lisp adopted lexical scope from Scheme, as did Clojure. ISLISP has lexical scope for ordinary variables. It also has dynamic variables, but they are in all cases explicitly marked; they must be defined by a defdynamic special form, bound by a dynamic-let special form, and accessed by an explicit dynamic special form. Some other dialects of Lisp, like Emacs Lisp, still use dynamic scope by default. Emacs Lisp now has lexical scope available on a per-buffer basis. === Python === For variables, Python has function scope, module scope, and global scope. Names enter context at the start of a scope (function, module, or global scope), and exit context when a non-nested function is called or the scope ends. If a name is used prior to variable initialization, this raises a runtime exception. If a variable is simply accessed (not assigned to), name resolution follows the LEGB (Local, Enclosing, Global, Built-in) rule which resolves names to the narrowest relevant context. However, if a variable is assigned to, it defaults to declaring a variable whose scope starts at the start of the level (function, module, or global), not at the assignment. 
Both these rules can be overridden with a global or nonlocal (in Python 3) declaration prior to use, which allows accessing global variables even if there is a masking nonlocal variable, and assigning to global or nonlocal variables. As a simple example, a function may resolve a variable to the global scope. Note that if x is defined at module level before f is called, no error is raised, even though it is defined after its reference in the definition of f. Lexically this is a forward reference, which is allowed in Python. Assignment within a function, by contrast, creates a new local variable and does not change the value of the global variable. Assignment to a variable within a function causes it to be declared local to the function, hence its scope is the entire function, and thus using it prior to this assignment raises an error. This differs from C, where the scope of the local variable starts at its declaration. The default name resolution rules can be overridden with the global or nonlocal (in Python 3) keywords. A global x declaration in a function means that x resolves to the global variable: it can be accessed (if it has already been defined), and assignment assigns to the global variable, rather than declaring a new local variable. No global declaration is needed in a function that only reads the variable, since it defaults to resolving to the global variable. global can also be used for nested functions. 
In addition to allowing assignment to a global variable, as in an unnested function, this can also be used to access the global variable in the presence of a nonlocal variable. For nested functions, there is also the nonlocal declaration, for assigning to a nonlocal variable, similar to using global in an unnested function. === R === R is a lexically scoped language, unlike other implementations of S where the values of free variables are determined by a set of global variables, while in R they are determined by the context in which the function was created. The scope contexts may be accessed using a variety of features (such as parent.frame()) which can simulate the experience of dynamic scope should the programmer desire. There is no block scope. Functions have access to the scope they were created in. Variables created or modified within a function stay there, unless assignment to the enclosing scope is explicitly requested. Although R has lexical scope by default, function scopes can be changed. == Notes == == References ==
Wikipedia/Function_scope
In object-oriented programming, as used in languages such as C++ and Object Pascal, a virtual function or virtual method is an inheritable and overridable function or method that is dispatched dynamically. Virtual functions are an important part of (runtime) polymorphism in object-oriented programming (OOP). They allow for the execution of target functions that were not precisely identified at compile time. Most programming languages, such as JavaScript, PHP and Python, treat all methods as virtual by default and do not provide a modifier to change this behavior. However, some languages provide modifiers to prevent methods from being overridden by derived classes (such as the final and private keywords in Java and PHP). == Purpose == The concept of the virtual function solves the following problem: In object-oriented programming, when a derived class inherits from a base class, an object of the derived class may be referred to via a pointer or reference of the base class type instead of the derived class type. If there are base class methods overridden by the derived class, the method actually called by such a reference or pointer can be bound (linked) either "early" (by the compiler), according to the declared type of the pointer or reference, or "late" (i.e., by the runtime system of the language), according to the actual type of the object referred to. Virtual functions are resolved "late". If the function in question is "virtual" in the base class, the most-derived class's implementation of the function is called according to the actual type of the object referred to, regardless of the declared type of the pointer or reference. If it is not "virtual", the method is resolved "early" and selected according to the declared type of the pointer or reference. Virtual functions allow a program to call methods that don't necessarily even exist at the moment the code is compiled. 
In C++, virtual methods are declared by prepending the virtual keyword to the function's declaration in the base class. This modifier is inherited by all implementations of that method in derived classes, meaning that they can continue to override each other and be late-bound. Even when a method owned by the base class calls the virtual method, the derived method is called instead. Overloading occurs when two or more methods in one class have the same method name but different parameters. Overriding means having two methods with the same method name and parameters, one in a base class and one in a class derived from it. Overloading is also referred to as function matching, and overriding as dynamic function mapping. == Example == === C++ === For example, a base class Animal could have a virtual function Eat. Subclass Llama would implement Eat differently than subclass Wolf, but one can invoke Eat on any class instance referred to as Animal, and get the Eat behavior of the specific subclass. This allows a programmer to process a list of objects of class Animal, telling each in turn to eat (by calling Eat), without needing to know what kind of animal may be in the list, how each animal eats, or what the complete set of possible animal types might be. In C, the mechanism behind virtual functions can be emulated by hand, typically by storing function pointers in a structure. == Abstract classes and pure virtual functions == A pure virtual function or pure virtual method is a virtual function that is required to be implemented by a derived class if the derived class is not abstract. Classes containing pure virtual methods are termed "abstract" and they cannot be instantiated directly. A subclass of an abstract class can only be instantiated directly if all inherited pure virtual methods have been implemented by that class or a parent class. Pure virtual methods typically have a declaration (signature) and no definition (implementation). 
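The Animal example can be sketched in C++ as follows; the class and method names follow the text, while the strings returned by Eat are invented for illustration:

```cpp
#include <string>

class Animal {
public:
    virtual ~Animal() = default;       // virtual destructor, so deletion
                                       // through Animal* is well-defined
    virtual std::string Eat() const { return "animal eats"; }
};

class Llama : public Animal {
public:
    std::string Eat() const override { return "llama grazes"; }
};

class Wolf : public Animal {
public:
    std::string Eat() const override { return "wolf hunts"; }
};

// The caller only knows about Animal; the override is chosen at run time
// from the actual type of the object, not from the declared reference type.
std::string feed(const Animal& animal) { return animal.Eat(); }
```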
As an example, an abstract base class MathSymbol may provide a pure virtual function doOperation(), and derived classes Plus and Minus implement doOperation() to provide concrete implementations. Implementing doOperation() would not make sense in the MathSymbol class, as MathSymbol is an abstract concept whose behaviour is defined solely for each given kind (subclass) of MathSymbol. Similarly, a given subclass of MathSymbol would not be complete without an implementation of doOperation(). Although pure virtual methods typically have no implementation in the class that declares them, pure virtual methods in some languages (e.g. C++ and Python) are permitted to contain an implementation in their declaring class, providing fallback or default behaviour that a derived class can delegate to, if appropriate. Pure virtual functions can also be used where the method declarations are being used to define an interface - similar to what the interface keyword in Java explicitly specifies. In such a use, derived classes will supply all implementations. In such a design pattern, the abstract class which serves as an interface will contain only pure virtual functions, but no data members or ordinary methods. In C++, using such purely abstract classes as interfaces works because C++ supports multiple inheritance. However, because many OOP languages do not support multiple inheritance, they often provide a separate interface mechanism. An example is the Java programming language. == Behavior during construction and destruction == Languages differ in their behavior while the constructor or destructor of an object is running. For this reason, calling virtual functions in constructors is generally discouraged. In C++, the "base" function is called. 
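A sketch of the MathSymbol example above in C++; the signature of doOperation is assumed for illustration:

```cpp
class MathSymbol {
public:
    virtual ~MathSymbol() = default;
    // Pure virtual: declared here, defined only by concrete subclasses.
    virtual double doOperation(double lhs, double rhs) const = 0;
};

class Plus : public MathSymbol {
public:
    double doOperation(double lhs, double rhs) const override { return lhs + rhs; }
};

class Minus : public MathSymbol {
public:
    double doOperation(double lhs, double rhs) const override { return lhs - rhs; }
};
// MathSymbol itself is abstract and cannot be instantiated;
// Plus and Minus can, because they implement every pure virtual method.
```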
Specifically, the most derived function that is not more derived than the current constructor or destructor's class is called (C++ standard, §15.7.3). If that function is a pure virtual function, then undefined behavior occurs (§13.4.6). This is true even if the class contains an implementation for that pure virtual function, since a call to a pure virtual function must be explicitly qualified. A conforming C++ implementation is not required (and generally not able) to detect indirect calls to pure virtual functions at compile time or link time. Some runtime systems will issue a pure virtual function call error when encountering a call to a pure virtual function at run time. In Java and C#, the derived implementation is called, but some fields are not yet initialized by the derived constructor (although they are initialized to their default zero values). Some design patterns, such as the Abstract Factory Pattern, actively promote this usage in languages supporting this ability. == Virtual destructors == Object-oriented languages typically manage memory allocation and de-allocation automatically when objects are created and destroyed. However, some object-oriented languages allow a custom destructor method to be implemented, if desired. If the language in question uses automatic memory management, the custom destructor (generally called a finalizer in this context) that is called is certain to be the appropriate one for the object in question. For example, if an object of type Wolf that inherits Animal is created, and both have custom destructors, the one called will be the one declared in Wolf. In manual memory management contexts, the situation can be more complex, particularly in relation to static dispatch. If an object of type Wolf is created but pointed to by an Animal pointer, and it is this Animal pointer type that is deleted, the destructor called may actually be the one defined for Animal and not the one for Wolf, unless the destructor is virtual. 
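The Wolf/Animal destructor scenario can be sketched in C++ (redefining the two classes in isolation; the flag is illustrative plumbing to observe which destructor ran):

```cpp
struct Animal {
    virtual ~Animal() = default;   // virtual: deleting through an Animal*
                                   // runs the most derived destructor first
};

struct Wolf : Animal {
    bool* destroyed;               // set to true when Wolf's destructor runs
    explicit Wolf(bool* flag) : destroyed(flag) {}
    ~Wolf() override { *destroyed = true; }
};
```

Had ~Animal not been declared virtual, deleting a Wolf through an Animal pointer would be undefined behavior in C++, and in practice Wolf's destructor would typically be skipped.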
This is particularly the case with C++, where the behavior is a common source of programming errors if destructors are not virtual. == See also == Abstract method Inheritance (object-oriented programming) Superclass (computer science) Virtual inheritance Virtual class Interface (object oriented programming) Component object model Virtual method table == References ==
Wikipedia/Virtual_functions
Action at a distance is an anti-pattern in computer science in which behavior in one part of a program varies wildly based on difficult or impossible to identify operations in another part of the program. The way to avoid the problems associated with action at a distance is a proper design, which avoids global variables and alters data only in a controlled and local manner, or usage of a pure functional programming style with referential transparency. The term is based on the concept of action at a distance in physics, which may refer to a process that allows objects to interact without a mediator particle such as the gluon. In particular, Albert Einstein referred to quantum nonlocality as "spooky action at a distance". Software bugs due to action at a distance may arise because a program component is doing something at the wrong time, or affecting something it should not. It is very difficult, however, to track down which component is responsible. Side effects from innocent actions can put the program in an unknown state, so local data is not necessarily local. The solution in this particular scenario is to define which components should be interacting with which others. A proper design that accurately defines the interface between parts of a program, and that avoids shared states, can largely eliminate problems caused by action at a distance. == Example == The Perl programming language provides an especially serious example of action at a distance (note that the $[ variable was deprecated in later versions of Perl). Array indices normally begin at 0 because the value of $[ is normally 0; if you set $[ to 1, then arrays start at 1, which makes Fortran and Lua programmers happy, and so examples of this appeared in the perl(3) man page. And of course you could set $[ to 17 to have arrays start at some random number such as 17 or 4 instead of at 0 or 1. This was a great way to sabotage module authors. Fortunately, sanity prevailed. 
These features are now recognized to have been mistakes. The perl5-porters mailing list now has a catchphrase for such features: they're called "action at a distance". The principle is that a declaration in one part of the program shouldn't drastically and invisibly alter the behavior of some other part of the program. == Action at a distance across objects == Proper object-oriented programming involves design principles that avoid action at a distance. The Law of Demeter states that an object should only interact with other objects near itself. Should action in a distant part of the system be required then it should be implemented by propagating a message. Proper design severely limits occurrences of action at a distance, contributing to maintainable programs. Pressure to create an object orgy results from poor interface design, perhaps taking the form of a God object, not implementing true objects, or failing to heed the Law of Demeter. One of the advantages of functional programming is that action at a distance is de-emphasised, sometimes to the point of being impossible to express at all in the source language. Being aware of the danger of allowing action at a distance into a design, and being able to recognize the presence of action at a distance, is useful in developing programs that are correct, reliable and maintainable. Given that the majority of the expense of a program may be in the maintenance phase, and that action at a distance makes maintenance difficult, expensive and error prone, it is worth effort during design to avoid. == See also == COMEFROM Loose coupling State pattern == References ==
Wikipedia/Action_at_distance_(computer_science)
Service choreography in business computing is a form of service composition in which the interaction protocol between several partner services is defined from a global perspective. The idea underlying the notion of service choreography can be summarised as follows: "Dancers dance following a global scenario without a single point of control" That is, at run-time each participant in a service choreography executes its part according to the behavior of the other participants. A choreography's role specifies the expected messaging behavior of the participants that will play it in terms of the sequencing and timing of the messages that they can consume and produce. Choreography describes the sequence and conditions in which the data is exchanged between two or more participants in order to meet some useful purpose. == Service choreography and service orchestration == Service choreography is better understood through the comparison with another paradigm of service composition: service orchestration. On one hand, in service choreographies the logic of the message-based interactions among the participants is specified from a global perspective. In service orchestration, on the other hand, the logic is specified from the local point of view of one controlling participant, called the orchestrator. In the service orchestration language BPEL, for example, the specification of the service orchestration (e.g. the BPEL process file) is a workflow that can be deployed on the service infrastructure (for example a BPEL execution engine like Apache ODE). The deployment of the service orchestration specification transforms a workflow into a composite service. In a sense, service choreography and orchestrations are two sides of the same coin. On one hand, the roles of a service choreography can be extracted as service orchestrations through a process called projection. Through projection it is possible to realize skeletons, i.e. 
incomplete service orchestrations that can be used as baselines to realize the web services that participate in the service choreography. On the other hand, already existing service orchestrations may be composed in service choreographies. == Enactment of service choreographies == Service choreographies are not executed: they are enacted. A service choreography is enacted when its participants execute their roles. That is, unlike service orchestration, service choreographies are not run by some engine on the service infrastructure, but they "happen" when their roles are executed. This is because the logic of the service choreography is specified from a global point of view, and thus it is not realized by one single service like in service orchestration. The key question which much of the research into choreography seeks to answer is this: Suppose a global choreography is constructed that describes the possible interactions between the participants in a collaboration. What conditions does the choreography need to obey if it is to be guaranteed that the collaboration succeeds? Here, succeeds means that the emergent behaviour that results when the collaboration is enacted, with each participant acting independently according to its own skeleton, exactly follows the choreography from which the skeletons were originally projected. When this is the case, the choreography is said to be realizable. In general, determining realizability of a choreography is a non-trivial question, particularly where the collaboration uses asynchronous messaging and it is possible for different participants to send messages simultaneously. 
== Service choreography languages == Among the specifications concerning Web services, the following have focused on defining languages to model service choreographies:
Web Service Choreography Description Language (WS-CDL) is an XML-based specification from the W3C for modelling choreographies using constructs inspired by the Pi calculus
Web Service Choreography Interface (WSCI) is an XML-based specification that was put forward to the W3C by Intalio, Sun Microsystems, BEA Systems and SAP AG, and that served as input to the Web Service Choreography Description Language (WS-CDL)
Moreover, the OMG specification BPMN version 2.0 includes diagrams to model service choreographies. Academic proposals for service choreography languages include Let's Dance, BPEL4Chor and Chor. In addition, a number of service choreography formalisms have been proposed based on: Petri Nets (for example Interaction Petri Nets and Open Workflow Nets), finite-state machines, guarded automata, timed automata, the Pi calculus and other process calculi. === Web service choreography === Web service choreography (WS-Choreography) is a specification by the W3C defining an XML-based business process modeling language that describes collaboration protocols of cooperating Web Service participants, in which services act as peers, and interactions may be long-lived and stateful. (Orchestration is another term with a very similar, but still different meaning.) The main standardization effort, the W3C Web Services Choreography Working Group, was closed on 10 July 2009, leaving WS-CDL as a Candidate Recommendation. "Many presentations at the W3C Workshop on Web services of 11–12 April 2001 pointed to the need for a common interface and composition language to help address choreography.
The Web Services Architecture Requirements Working Draft created by the Web Services Architecture Working Group also lists the idea of Web service choreography capabilities as a Critical Success Factor, in support of several different top-level goals for the nascent Web services architecture"[1]. The problem of choreography was of great interest to the industry during that time; efforts such as WSCL (Web Service Conversation Language) and WSCI (Web Service Choreography Interface) were submitted to W3C and were published as Technical Notes. Moreover, complementary efforts were launched: BPML (now BPMN); BPSS by ebXML [2]; WSFL by IBM; XLANG by Microsoft; and BPEL4WS by IBM, Microsoft and BEA. "In June 2002, Intalio, Sun, BEA and SAP released a joint specification named Web Services Choreography Interface (WSCI). This specification was also submitted to W3C as a note in August 2002. W3C has since formed a new Working Group called Web Services Choreography Working Group within the Web services Activity. The WSCI specification is one of the primary inputs into the Web Services Choreography Working Group which published a Candidate Recommendation on WS-CDL version 1.0 on November 9th, 2005"[3]. "XLang, WSFL and WSCI are no longer being supported by any standard organization or companies. BPEL replaced XLANG and WSFL; WSCI was superseded by WS-CDL"[4]. Business Process Modeling Notation version 2.0 introduced diagrams for specifying service choreographies. The academic field has put forward other service choreography languages, for example Let's Dance, BPEL4Chor and MAP. == Paradigms of service choreographies == Service choreographies specify message-based interactions among participants from a global perspective.
In the same way as programming languages can be grouped into programming paradigms, service choreography languages can be grouped into styles:
Interaction modelling: the logic of the choreography is specified as a workflow in which the activities represent the message exchanges between the participants (for example Web Service Choreography Description Language (WS-CDL) and Let's Dance)
Interconnected interfaces modelling: the logic of the choreography is split across its participants through the roles they play (i.e. their expected messaging behavior). The roles are connected using message flows, channels, or equivalent constructs (this is, for example, the case of BPEL4Chor)
== Research projects on choreographies == There are several active research projects on the topic of service choreography.
CHOReVOLUTION: Automated Synthesis of Dynamic and Secured Choreographies for the Future Internet
CRC: Choreographies for Reliable and efficient Communication software
SwarmESB - a light, open-source ESB or message hub for node.js
PrivateSKY - experimental development in public-private partnership for local cloud platforms with advanced data protection features
== See also == Choreographic programming - A programming paradigm where programs are choreographies.
BPEL - Business Process Execution Language, OASIS standard
Executable choreography
Service composability principle
Web Service Choreography Description Language - A language for describing choreographies developed in the scope of the W3C
== References == == External links ==
Web Service Choreography Description Language - W3C specification for WS-Choreography
Web Service Choreography Description Language: Primer
Web Service Choreography Interface (WSCI) 1.0 - specification by Intalio, Sun, BEA and SAP; input into WS-Choreography
Large-scale Choreographies for the Future Internet - European Commission FP7 Research Project
Web services choreography in practice - Motivation and description of WSCI
Service Choreographies - Site promoting the concept of service choreography as a basis for service-oriented systems design. The site also describes a language for modeling choreographies on top of WSCI, namely Let's Dance.
Web Services Choreography Description Language Version 1.0
W3C Web Services Choreography Working Group
Formal Modelling of Web Services
A Theoretical Basis of Communication-Centred Concurrent Programming
Towards the Theoretical Foundation of Choreography
Exploring Into the Essence of Choreography
Wikipedia/Web_Services_Choreography_Description_Language
In business analysis, the Decision Model and Notation (DMN) is a standard published by the Object Management Group. It is a standard approach for describing and modeling repeatable decisions within organizations, to ensure that decision models are interchangeable across organizations. The DMN standard provides the industry with a modeling notation for decisions that will support decision management and business rules. The notation is designed to be readable by business and IT users alike. This enables various groups to effectively collaborate in defining a decision model: the business people who manage and monitor the decisions; the business analysts or functional analysts who document the initial decision requirements and specify the detailed decision models and decision logic; and the technical developers responsible for the automation of systems that make the decisions. The DMN standard can be effectively used standalone but it is also complementary to the BPMN and CMMN standards. BPMN defines a special kind of activity, the Business Rule Task, which "provides a mechanism for the process to provide input to a business rule engine and to get the output of calculations that the business rule engine might provide" that can be used to show where in a BPMN process a decision defined using DMN should be used. DMN has been made a standard for Business Analysis according to BABOK v3. == Elements of the standard == The standard includes the following main elements: Decision Requirements Diagrams, which show how the elements of decision-making are linked into a dependency network; decision tables, which represent how each decision in such a network can be made; business context for decisions, such as the roles of organizations or the impact on performance metrics; and the Friendly Enough Expression Language (FEEL), which can be used to evaluate expressions in a decision table and other logic formats.
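To make the decision-table element concrete, here is a minimal Python sketch of evaluating a table under a unique hit policy (exactly one rule is expected to match). The table contents are invented for illustration, and real DMN tables use FEEL expressions rather than Python predicates:

```python
# A decision table as a list of (conditions, output) rules, where each
# condition is a named input paired with a predicate on its value.
# This is a simplified stand-in for DMN's decision tables, not FEEL.

def evaluate(table, inputs):
    """Return the output of the first rule whose conditions all match."""
    for conditions, output in table:
        if all(test(inputs[name]) for name, test in conditions.items()):
            return output
    return None  # no rule matched

# Hypothetical discount table: rules are mutually exclusive (unique hit).
discount_table = [
    ({"order_total": lambda v: v >= 1000},        0.10),
    ({"order_total": lambda v: 500 <= v < 1000},  0.05),
    ({"order_total": lambda v: v < 500},          0.0),
]

print(evaluate(discount_table, {"order_total": 750}))  # 0.05
```

In real DMN, the hit policy (Unique, First, Any, Collect, and so on) is declared in the table header and governs what happens when several rules match.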
== Use cases == The standard identifies three main use cases for DMN: defining manual decision-making; specifying the requirements for automated decision-making; and representing a complete, executable model of decision-making. == Benefits == Using the DMN standard will improve business analysis and business process management:
other popular requirements-management techniques, such as BPMN and UML, do not handle decision-making
it supports the growth of projects using business rule management systems (BRMS), which allow faster changes
it facilitates better communication between business, IT and analytic roles in a company
it provides an effective requirements modeling approach for Predictive Analytics projects and fulfills the need for "business understanding" in methodologies for advanced analytics such as CRISP-DM
it provides a standard notation for decision tables, the most common style of business rules in a BRMS
== Relationship to BPMN == DMN has been designed to work with BPMN. Business process models can be simplified by moving process logic into decision services. DMN is a separate domain within the OMG that provides an explicit way to connect to processes in BPMN. Decisions in DMN can be explicitly linked to processes and tasks that use the decisions. This integration of DMN and BPMN has been studied extensively. DMN expects that the logic of a decision will be deployed as a stateless, side-effect-free Decision Service. Such a service can be invoked from a business process, and the data in the process can be mapped to the inputs and outputs of the decision service. == DMN BPMN example == As mentioned, BPMN is a related OMG standard for process modeling. DMN complements BPMN, providing a separation of concerns between the decision and the process. The example here describes a BPMN process and DMN DRD (Decision Requirements Diagram) for onboarding a bank customer. Several decisions are modeled, and these decisions will direct the process's response.
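The decision-service pattern described above - stateless, side-effect-free decision logic invoked from a process, with process data mapped to the service's inputs and outputs - can be sketched as follows. The function names and the eligibility rule are hypothetical, not part of the DMN standard:

```python
# Sketch of a stateless, side-effect-free decision service: the same
# inputs always yield the same output, and nothing outside is modified.
# Names and the rule itself are invented for illustration.

def loan_eligibility_decision(age, annual_income):
    """Hypothetical decision service."""
    return {"eligible": age >= 18 and annual_income >= 20000}

def process_application(case):
    # Process step: map process data to the decision service's inputs...
    decision = loan_eligibility_decision(
        age=case["applicant_age"],
        annual_income=case["applicant_income"],
    )
    # ...and map the decision output back into the process data.
    case["status"] = "approved" if decision["eligible"] else "declined"
    return case

print(process_application({"applicant_age": 30, "applicant_income": 45000}))
```

Keeping the decision stateless is what lets the same service be reused from several processes (or tested in isolation) without worrying about hidden state.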
=== New bank account process === In the BPMN process model shown in the figure, a customer makes a request to open a new bank account. The account application provides the account representative with all the information needed to create an account and provide the requested services. This includes the name, address and various forms of identification. In the next steps of the workflow, the 'Know Your Customer' (KYC) services are called. In the 'KYC' services, the name and address are validated, followed by a check against the international criminal database (Interpol) and the database of 'Politically exposed persons' (PEP). A PEP is a person who is either entrusted with a prominent political position or a close relative of such a person. Deposits from persons on the PEP list are potentially linked to corruption. This is shown as two services on the process model. Anti-money-laundering (AML) regulations require these checks before the customer account is certified. The results of these services, plus the forms of identification, are sent to the Certify New Account decision. This is shown as a 'rule' activity, verify account, on the process diagram. If the new customer passes certification, then the account is classified into onboarding for Business Retail, Retail, Wealth Management and High Value Business. Otherwise the customer application is declined. The Classify New Customer decision classifies the customer. If the verify-account process returns a result of 'Manual', then the PEP or the Interpol check returned a close match. The account representative must visually inspect the name and the application to determine whether the match is valid, and accept or decline the application. === Certify new account decision === An account is certified for opening if the individual's address is verified, and if valid identification is provided, and if the applicant is not on a list of criminals or politically exposed persons.
These are shown as sub-decisions below the 'certify new account' decision. The account verification service provides a 100% match of the applicant's address. For identification to be valid, the customer must provide a driver's license, passport or government-issued ID. The checks against PEP and Interpol are 'fuzzy' matches and return matching score values. Scores above 85 are considered a 'match' and scores between 65 and 85 require a 'manual' screening process. People who match either of these lists are rejected by the account application process. If there is a partial match, with a score between 65 and 85, against the Interpol or PEP list, then the certification is set to manual and an account representative performs a manual verification of the applicant's data. These rules are reflected in the figure below, which presents the decision table for whether to pass the provided name for the list checks. === Client category === The client's on-boarding process is driven by the category they fall in. The category is decided by: the type of client (business or private), the size of the funds on deposit, and the estimated net worth. This decision is shown below: There are six business rules that determine the client's category and these are shown in the decision table here: === Summary example === In this example, the outcome of the 'Verify Account' decision directed the responses of the new account process. The same is true for the 'Classify Customer' decision. By adding or changing the business rules in the tables, one can easily change the criteria for these decisions and control the process differently. Modeling is a critical aspect of improving an existing process or business challenge. Modeling is generally done by a team of business analysts, IT personnel, and modeling experts. The expressive modeling capabilities of BPMN allow business analysts to understand the functions of the activities of the process.
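The screening thresholds described in the certification decision (scores above 85 are treated as a match, scores between 65 and 85 require manual review) can be sketched in Python. The function name and return values are illustrative only, not part of the DMN model itself:

```python
# Sketch of the watch-list screening logic, with thresholds taken from
# the text: score > 85 means a confirmed match (application declined),
# 65-85 means a close match (manual review), lower scores pass.

def screening_result(pep_score, interpol_score):
    worst = max(pep_score, interpol_score)
    if worst > 85:
        return "Declined"   # confirmed match on PEP or Interpol list
    if worst >= 65:
        return "Manual"     # close match: account rep inspects by hand
    return "Certified"      # no match on either watch list

print(screening_result(pep_score=40, interpol_score=70))  # Manual
```

Taking the worse of the two scores reflects the rule that a close match against either list is enough to block automatic certification.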
Now with the addition of DMN, business analysts can construct an understandable model of complex decisions. Combining BPMN and DMN yields a very powerful combination of models that work synergistically to simplify processes. == Relationship to decision mining and process mining == Automated discovery techniques that infer decision models from process execution data have been proposed as well. Here, a DMN decision model is derived from a data-enriched event log, along with the process that uses the decisions. In doing so, decision mining complements process mining with traditional data mining approaches. == cDMN extension == Constraint Decision Model and Notation (cDMN) is a formal notation for expressing knowledge in a tabular, intuitive format. It extends DMN with constraint reasoning and related concepts while aiming to retain the user-friendliness of the original. cDMN is also meant to express other problems besides business modeling, such as complex component design. It extends DMN in four ways: constraint modelling (see Constraint programming); more expressive data representation, such as typed predicates and functions (similar to First-order logic); data tables, in which each entry represents a different problem instance; and quantification. Due to these additions, cDMN models can express more complex problems. Furthermore, they can also express some DMN models in more compact, less convoluted ways. Unlike DMN, cDMN is not deterministic, in the sense that a set of input values could have multiple different solutions. Indeed, where a DMN model always defines a single solution, a cDMN model defines a solution space. Usage of cDMN models can also be integrated in Business Process Model and Notation process models, just like DMN. === Example === As an example, consider the well-known map coloring or Graph coloring problem. Here, we wish to color a map in such a way that no bordering countries share the same color.
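The coloring constraint can be sketched in plain Python with a small backtracking search. cDMN itself would express the constraint declaratively in a table and hand it to a solver; the countries, borders and colors here are invented for illustration:

```python
# Constraint: for each pair of bordering countries, colors must differ.
# A tiny backtracking search over invented data, standing in for the
# constraint solving that a cDMN toolchain would perform.

countries = ["be", "nl", "de", "fr"]
borders = {("be", "nl"), ("be", "de"), ("be", "fr"), ("nl", "de"), ("de", "fr")}
colors = ["red", "green", "blue"]

def ok(assignment, country, color):
    """Check that 'color' does not clash with any assigned neighbor."""
    for a, b in borders:
        if a == country and assignment.get(b) == color:
            return False
        if b == country and assignment.get(a) == color:
            return False
    return True

def solve(assignment=None, remaining=None):
    assignment = assignment or {}
    remaining = countries if remaining is None else remaining
    if not remaining:
        return assignment                      # all countries colored
    country, rest = remaining[0], remaining[1:]
    for color in colors:
        if ok(assignment, country, color):
            result = solve({**assignment, country: color}, rest)
            if result:
                return result                  # found a valid extension
    return None                                # dead end: backtrack

solution = solve()
print(solution)
```

Like a cDMN model, this defines a solution space rather than one fixed answer: the search just returns the first coloring that satisfies every border constraint.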
The constraint table shown in the figure (as denoted by its E* hit policy in the top-left corner) expresses this logic. It is read as "For each country c1 and country c2, it holds that if they are different countries which border, then the color of c1 is not the color of c2." Here, the first two columns introduce two quantified variables, both of type country, which serve as universal quantifiers. In the third column, the 2-ary predicate borders is used to express when two countries have a shared border. Finally, the last column uses the 1-ary function color of, which maps each country to a color. == References == == External links == DMN specifications published by Object Management Group DMN Technology Capability Kit: Test platform for evaluating DMN standard conformance of DMN software products cDMN on readthedocs.io
Wikipedia/Decision_Model_and_Notation
HIPO model (hierarchical input process output model) is a systems analysis design aid and documentation technique from the 1970s, used for representing the modules of a system as a hierarchy and for documenting each module. == History == HIPO was used, for example, to develop requirements, construct the design, and support implementation of an expert system demonstrating automated rendezvous. Verification was then conducted systematically because of the method of design and implementation. The overall design of the system is documented using HIPO charts or structure charts. The structure chart is similar in appearance to an organizational chart, but has been modified to show additional detail. Structure charts can be used to display several types of information, but are used most commonly to diagram either data structures or code structures. == See also == IPO model SIPOC == References ==
Wikipedia/HIPO_model
Business process re-engineering (BPR) is a business management strategy originally pioneered in the early 1990s, focusing on the analysis and design of workflows and business processes within an organization. BPR aims to help organizations fundamentally rethink how they do their work in order to improve customer service, cut operational costs, and become world-class competitors. BPR seeks to help companies radically restructure their organizations by focusing on the ground-up design of their business processes. According to early BPR proponent Thomas H. Davenport (1990), a business process is a set of logically related tasks performed to achieve a defined business outcome. Re-engineering emphasized a holistic focus on business objectives and how processes related to them, encouraging full-scale recreation of processes, rather than iterative optimization of sub-processes. BPR is influenced by technological innovations as industry players replace old methods of business operations with cost-saving innovative technologies such as automation that can radically transform business operations. Business process re-engineering is also known as business process redesign, business transformation, or business process change management. == Overview == Business process re-engineering (BPR) is a comprehensive approach to redesigning and optimizing organizational processes to improve efficiency, effectiveness, and adaptability. This approach involves analyzing and restructuring key business aspects—such as workflow, communication, and decision-making—to achieve significant performance improvements, including increased productivity, cost reduction, and enhanced customer satisfaction. BPR is a powerful tool that can be applied to various industries and organizations of all sizes, and it can be achieved through various methodologies and techniques, such as process mapping, process simulation, and process automation. Organizations re-engineer two key areas of their businesses.
First, they use modern technology to enhance data dissemination and decision-making processes. Then, they alter functional organizations to form functional teams. Re-engineering starts with a high-level assessment of the organization's mission, strategic goals, and customer needs. Basic questions are asked, such as "Does our mission need to be redefined? Are our strategic goals aligned with our mission? Who are our customers?" An organization may find that it is operating on questionable assumptions, particularly in terms of the wants and needs of its customers. Only after the organization rethinks what it should be doing, does it go on to decide how to best do it. Within the framework of this basic assessment of mission and goals, re-engineering focuses on the organization's business processes—the steps and procedures that govern how resources are used to create products and services that meet the needs of particular customers or markets. As a structured ordering of work steps across time and place, a business process can be decomposed into specific activities, measured, modeled, and improved. It can also be completely redesigned or eliminated altogether. Re-engineering identifies, analyzes, and re-designs an organization's core business processes with the aim of achieving improvements in critical performance measures, such as cost, quality, service, and speed. Re-engineering recognizes that an organization's business processes are usually fragmented into sub-processes and tasks that are carried out by several specialized functional areas within the organization. Often, no one is responsible for the overall performance of the entire process. Re-engineering maintains that optimizing the performance of sub-processes can result in some benefits but cannot yield improvements if the process itself is fundamentally inefficient and outmoded. 
For that reason, re-engineering focuses on re-designing the process as a whole in order to achieve the greatest possible benefits to the organization and its customers. This drive for realizing improvements by fundamentally re-thinking how the organization's work should be done distinguishes re-engineering from process improvement efforts that focus on functional or incremental improvement. == History == BPR began as a private sector technique to help organizations rethink how they do their work in order to improve customer service, cut operational costs, and become world-class competitors. A key stimulus for re-engineering has been the continuing development and deployment of information systems and networks. Organizations are becoming bolder in using this technology to support business processes, rather than refining current ways of doing work. === Reengineering Work: Don't Automate, Obliterate, 1990 === In 1990, Michael Hammer, a former professor of computer science at the Massachusetts Institute of Technology (MIT), published the article "Reengineering Work: Don't Automate, Obliterate" in the Harvard Business Review, in which he claimed that the major challenge for managers is to obliterate forms of work that do not add value, rather than using technology to automate it. This statement implicitly accused managers of having focused on the wrong issues, namely that technology in general, and more specifically information technology, has been used primarily for automating existing processes rather than using it as an enabler for making non-value-adding work obsolete. Hammer's claim was simple: Most of the work being done does not add any value for customers, and this work should be removed, not accelerated through automation. Instead, companies should reconsider their inability to satisfy customer needs, and their insufficient cost structure.
Even well-established management thinkers, such as Peter Drucker and Tom Peters, were accepting and advocating BPR as a new tool for (re-)achieving success in a dynamic world. During the following years, a fast-growing number of publications, books as well as journal articles, were dedicated to BPR, and many consulting firms embarked on this trend and developed BPR methods. However, the critics were quick to claim that BPR was a way to dehumanize the workplace, increase managerial control, and to justify downsizing, i.e. major reductions of the work force, and a rebirth of Taylorism under a different label. Despite this critique, re-engineering was adopted at an accelerating pace and by 1993, as many as 60% of the Fortune 500 companies claimed to either have initiated re-engineering efforts, or to have plans to do so. This trend was fueled by the fast adoption of BPR by the consulting industry, but also by the study Made in America, conducted by MIT, that showed how companies in many US industries had lagged behind their foreign counterparts in terms of competitiveness, time-to-market and productivity. === Development after 1995 === With the publication of critiques in 1995 and 1996 by some of the early BPR proponents, coupled with abuses and misuses of the concept by others, the re-engineering fervor in the U.S. began to wane. Since then, considering business processes as a starting point for business analysis and redesign has become a widely accepted approach and is a standard part of the change methodology portfolio, but is typically performed in a less radical way than originally proposed. More recently, the concept of Business Process Management (BPM) has gained major attention in the corporate world and can be considered a successor to the BPR wave of the 1990s, as it is equally driven by a striving for process efficiency supported by information technology.
Mirroring the critique brought forward against BPR, BPM is now accused of focusing on technology and disregarding the people aspects of change. == Topics == The most notable definitions of reengineering are: "... the fundamental rethinking and radical redesign of business processes to achieve ... improvements in critical contemporary measures of performance, such as cost, quality, service, and speed." "encompasses the envisioning of new work strategies, the actual process design activity, and the implementation of the change in all its complex technological, human, and organizational dimensions." BPR is different from other approaches to organization development (OD), especially the continuous improvement or TQM movement, by virtue of its aim for fundamental and radical change rather than iterative improvement. In order to achieve the major improvements BPR seeks, the change of structural organizational variables, and other ways of managing and performing work, is often considered insufficient. To reap the achievable benefits fully, the use of information technology (IT) is conceived as a major contributing factor. While IT traditionally has been used for supporting the existing business functions, i.e. it was used for increasing organizational efficiency, it now plays a role as an enabler of new organizational forms, and patterns of collaboration within and between organizations. BPR derives its existence from different disciplines, and four major areas can be identified as being subjected to change in BPR – organization, technology, strategy, and people – where a process view is used as common framework for considering these dimensions. Business strategy is the primary driver of BPR initiatives and the other dimensions are governed by strategy's encompassing role.
The organization dimension reflects the structural elements of the company, such as hierarchical levels, the composition of organizational units, and the distribution of work between them. Technology is concerned with the use of computer systems and other forms of communication technology in the business. In BPR, information technology is generally considered to act as an enabler of new forms of organizing and collaborating, rather than supporting existing business functions. The people / human resources dimension deals with aspects such as education, training, motivation and reward systems. The concept of business processes – interrelated activities aiming at creating a value-added output to a customer – is the basic underlying idea of BPR. These processes are characterized by a number of attributes: process ownership, customer focus, value adding, and cross-functionality. === The role of information technology === Information technology (IT) has historically played an important role in the reengineering concept. It is regarded by some as a major enabler for new forms of working and collaborating within an organization and across organizational borders. BPR literature identified several so-called disruptive technologies that were supposed to challenge traditional wisdom about how work should be performed.
Shared databases, making information available at many places
Expert systems, allowing generalists to perform specialist tasks
Telecommunication networks, allowing organizations to be centralized and decentralized at the same time
Decision-support tools, allowing decision-making to be a part of everybody's job
Wireless data communication and portable computers, allowing field personnel to work independently of the office
Interactive videodisk, to get in immediate contact with potential buyers
Automatic identification and tracking, allowing things to tell where they are, instead of having to be found
High-performance computing, allowing on-the-fly planning and revisioning
In the mid-1990s especially, workflow management systems were considered a significant contributor to improved process efficiency. Also, ERP (enterprise resource planning) vendors, such as SAP, JD Edwards, Oracle, and PeopleSoft, positioned their solutions as vehicles for business process redesign and improvement. === Research and methodology === Although the labels and steps differ slightly, the early methodologies that were rooted in IT-centric BPR solutions share many of the same basic principles and elements. The following outline is one such model, based on the Process Reengineering Life Cycle (PRLC) approach developed by Guha. (Figure: simplified schematic outline of using a business process approach, exemplified for pharmaceutical R&D: structural organization with functional units; introduction of New Product Development as a cross-functional process; re-structuring and streamlining activities, with removal of non-value-adding tasks.) Benefiting from lessons learned from the early adopters, some BPR practitioners advocated a change in emphasis to a customer-centric, as opposed to an IT-centric, methodology. One such methodology, which also incorporated a Risk and Impact Assessment to account for the effect that BPR can have on jobs and operations, was described by Lon Roberts (1994).
Roberts also stressed the use of change management tools to proactively address resistance to change, a factor linked to the demise of many reengineering initiatives that looked good on the drawing board. Some items to use on a process analysis checklist are: reduce handoffs, centralize data, reduce delays, free resources faster, combine similar activities. Also within the management consulting industry, a significant number of methodological approaches have been developed. === Framework === An easy-to-follow, seven-step INSPIRE framework was developed by Bhudeb Chakravarti, which can be followed by any process analyst to perform BPR. The seven steps of the framework are: Initiate a new process reengineering project and prepare a business case for the same; Negotiate with senior management to get approval to start the process reengineering project; Select the key processes that need to be reengineered; Plan the process reengineering activities; Investigate the processes to analyze the problem areas; Redesign the selected processes to improve performance; and Ensure the successful implementation of redesigned processes through proper monitoring and evaluation. == Factors for success and failure == Factors that are important to BPR success include: BPR team composition. Business needs analysis. Adequate IT infrastructure. Effective change management. Ongoing continuous improvement. The aspects of a BPM effort that are modified include organizational structures, management systems, employee responsibilities, and performance measurements, incentive systems, skills development, and the use of IT. BPR can potentially affect every aspect of how business is conducted today. Wholesale changes can cause results ranging from enviable success to complete failure. If successful, a BPM initiative can result in improved quality, customer service, and competitiveness, as well as reductions in cost or cycle time.
However, 50–70% of reengineering projects either fail or do not achieve significant benefit. There are many reasons for sub-optimal business processes, including:
One department may be optimized at the expense of another
Lack of time to focus on improving business processes
Lack of recognition of the extent of the problem
Lack of training
People involved use the best tool they have at their disposal, which is usually Excel, to fix problems
Inadequate infrastructure
Overly bureaucratic processes
Lack of motivation
Many unsuccessful BPR attempts may have been due to the confusion surrounding BPR and how it should be performed. Organizations were well aware that changes needed to be made but did not know which areas to change or how to change them. As a result, process reengineering is a management concept that has been formed by trial and error or, in other words, practical experience. As more and more businesses reengineer their processes, knowledge of what caused the successes or failures is becoming apparent. To reap lasting benefits, companies must be willing to examine how strategy and reengineering complement each other: by learning to quantify strategy in terms of cost, milestones, and timetables; by accepting ownership of the strategy throughout the organization; by assessing the organization's current capabilities and processes realistically; and by linking strategy to the budgeting process. Otherwise, BPR is only a short-term efficiency exercise.
=== Organization-wide commitment ===
Major changes to business processes have a direct effect on processes, technology, job roles, and workplace culture. Significant changes to even one of those areas require resources, money, and leadership. Changing them simultaneously is an extraordinary task. Like any large and complex undertaking, implementing reengineering requires the talents and energies of a broad spectrum of experts.
Since BPR can involve multiple areas within the organization, it is important to get support from all affected departments. Through the involvement of selected department members, the organization can gain valuable input before a process is implemented; a step that promotes both the cooperation and the vital acceptance of the reengineered process by all segments of the organization. Getting enterprise-wide commitment involves the following: top management sponsorship, bottom-up buy-in from process users, a dedicated BPR team, and budget allocation for the total solution with measures to demonstrate value. Before any BPR project can be implemented successfully, there must be a commitment to the project by the management of the organization, and strong leadership must be provided. Reengineering efforts can by no means be exercised without a company-wide commitment to the goals; above all, top management commitment is imperative for success. Top management must recognize the need for change, develop a complete understanding of what BPR is, and plan how to achieve it. Leadership has to be effective, strong, visible, and creative in thinking and understanding in order to provide a clear vision. Convincing every affected group within the organization of the need for BPR is a key step in successfully implementing a process. By informing all affected groups at every stage, and emphasizing the positive end results of the reengineering process, it is possible to minimize resistance to change and increase the odds for success. The ultimate success of BPR depends on the strong, consistent, and continuous involvement of all departmental levels within the organization.
=== Team composition ===
Once an organization-wide commitment has been secured from all departments involved in the reengineering effort, and at different levels, the critical step of selecting a BPR team must be taken.
This team will form the nucleus of the BPR effort, make key decisions and recommendations, and help communicate the details and benefits of the BPR program to the entire organization. The determinants of an effective BPR team may be summarized as follows: competency of the members of the team, their motivation, their credibility within the organization and their creativity, team empowerment, training of members in process mapping and brainstorming techniques, effective team leadership, proper organization of the team, complementary skills among team members, adequate size, interchangeable accountability, clarity of work approach, and specificity of goals. The most effective BPR teams include active representatives from the following work groups: top management, the business area responsible for the process being addressed, technology groups, finance, and members of all ultimate process users' groups. Team members who are selected from each work group within the organization will affect the outcome of the reengineered process according to their desired requirements. The BPR team should be mixed in depth and knowledge. For example, it may include:
Members who do not know the process at all
Members who know the process inside-out
Customers, if possible
Members representing affected departments
One or two of the best, brightest, most passionate, and most committed technology experts
Members from outside of the organization
Moreover, Covert (1997) recommends that, in order to be effective, a BPR team must be kept under ten players. If the organization fails to keep the team at a manageable size, the entire process will be much more difficult to execute efficiently and effectively. The efforts of the team must be focused on identifying breakthrough opportunities and designing new work steps or processes that will create quantum gains and competitive advantage.
=== Business needs analysis ===
Another important factor in the success of any BPR effort is performing a thorough business needs analysis. Too often, BPR teams jump directly into the technology without first assessing the current processes of the organization and determining what exactly needs reengineering. In this analysis phase, a series of sessions should be held with process owners and stakeholders regarding the need and strategy for BPR. These sessions build a consensus as to the vision of the ideal business process. They help identify essential goals for BPR within each department and then collectively define objectives for how the project will affect each work group or department on an individual basis and the business organization as a whole. The idea of these sessions is to conceptualize the ideal business process for the organization and build a business process model. Those items that seem unnecessary or unrealistic may be eliminated or modified later on in the diagnosing stage of the BPR project. It is important to acknowledge and evaluate all ideas in order to make all participants feel that they are a part of this important and crucial process. The results of these meetings will help formulate the basic plan for the project. This plan includes the following: identifying specific problem areas, solidifying particular goals, and defining business objectives. The business needs analysis contributes tremendously to the reengineering effort by helping the BPR team to prioritize and determine where it should focus its improvement efforts. The business needs analysis also helps in relating the BPR project goals back to key business objectives and the overall strategic direction for the organization. This linkage should show the thread from the top to the bottom of the organization, so each person can easily connect the overall business direction with the reengineering effort.
This alignment must be demonstrated from the perspective of financial performance, customer service, associate value, and the vision for the organization. Developing a business vision and process objectives relies, on the one hand, on a clear understanding of organizational strengths, weaknesses, and market structure and, on the other, on awareness and knowledge about innovative activities undertaken by competitors and other organizations. BPR projects that are not in alignment with the organization's strategic direction can be counterproductive. There is always a possibility that an organization may make significant investments in an area that is not a core competency for the company and later outsource this capability. Such reengineering initiatives are wasteful and steal resources from other strategic projects. Moreover, without strategic alignment, the organization's key stakeholders and sponsors may find themselves unable to provide the level of support the organization needs in terms of resources, especially if there are other projects that are more critical to the future of the business and more aligned with the strategic direction.
=== Adequate IT infrastructure ===
Researchers consider adequate IT infrastructure reassessment and composition a vital factor in successful BPR implementation. Hammer (1990) prescribes the use of IT to challenge the assumptions inherent in the work process that have existed since long before the advent of modern computer and communications technology. Factors related to IT infrastructure have been increasingly considered by many researchers and practitioners as a vital component of successful BPR efforts.
Effective alignment of IT infrastructure and BPR strategy, building an effective IT infrastructure, adequate IT infrastructure investment decisions, adequate measurement of IT infrastructure effectiveness, proper information systems (IS) integration, effective reengineering of legacy IS, increased IT function competency, and effective use of software tools are the most important factors contributing to the success of BPR projects. These are vital factors that contribute to building an effective IT infrastructure for business processes. BPR must be accompanied by strategic planning which addresses leveraging IT as a competitive tool. An IT infrastructure is made up of physical assets, intellectual assets, shared services, and their linkages. The way in which the IT infrastructure components are composed and their linkages determine the extent to which information resources can be delivered. An effective IT infrastructure composition process follows a top-down approach, beginning with business strategy and IS strategy and passing through designs of data, systems, and computer architecture. Linkages between the IT infrastructure components, as well as descriptions of their contexts of interaction, are important for ensuring integrity and consistency among the IT infrastructure components. Furthermore, IT standards have a major role in reconciling various infrastructure components to provide shared IT services that are of a certain degree of effectiveness to support business process applications, as well as to guide the process of acquiring, managing, and utilizing IT assets. The IT infrastructure shared services and the human IT infrastructure components, in terms of their responsibilities and their needed expertise, are both vital to the process of the IT infrastructure composition. IT strategic alignment is approached through the process of integration between business and IT strategies, as well as between IT and organizational infrastructures.
Most analysts view BPR and IT as irrevocably linked. Walmart, for example, would not have been able to reengineer the processes used to procure and distribute mass-market retail goods without IT. In another well-known example, Ford was able to decrease its headcount in the procurement department by 75 percent by using IT in conjunction with BPR. The IT infrastructure and BPR are interdependent in the sense that deciding the information requirements for the new business processes determines the IT infrastructure constituents, and a recognition of IT capabilities provides alternatives for BPR. Building a responsive IT infrastructure is highly dependent on an appropriate determination of business process information needs. This, in turn, is determined by the types of activities embedded in a business process, and their sequencing and reliance on other organizational processes.
=== Effective change management ===
Al-Mashari and Zairi (2000) suggest that BPR involves changes in people's behavior and culture, processes, and technology. As a result, there are many factors that prevent the effective implementation of BPR and hence restrict innovation and continuous improvement. Change management, which involves all human- and social-related changes and cultural adjustment techniques needed by management to facilitate the insertion of newly designed processes and structures into working practice and to deal effectively with resistance, is considered by many researchers to be a crucial component of any BPR effort. One of the most overlooked obstacles to successful BPR project implementation is resistance from those whom implementers believe will benefit the most. Most projects underestimate the cultural effect of major process and structural change and, as a result, do not achieve the full potential of their change effort. Many people fail to understand that change is not an event, but rather a management technique.
Change management is the discipline of managing change as a process, with due consideration that employees are people, not programmable machines. Change is implicitly driven by motivation, which is fueled by the recognition of the need for change. An important step towards any successful reengineering effort is to convey an understanding of the necessity for change. It is a well-known fact that organizations do not change unless people change; the better the change is managed, the less painful the transition is. Organizational culture is a determining factor in successful BPR implementation. Organizational culture influences the organization's ability to adapt to change. Culture in an organization is a self-reinforcing set of beliefs, attitudes, and behavior. Culture is one of the most resistant elements of organizational behavior and is extremely difficult to change. BPR must consider current culture in order to change these beliefs, attitudes, and behaviors effectively. Messages conveyed from management in an organization continually reinforce current culture. Management reward systems, stories of company origin and early successes of founders, physical symbols, and company icons constantly reinforce the message of the current culture. Implementing BPR successfully is dependent on how thoroughly management conveys the new cultural messages to the organization. These messages provide people in the organization with a guideline to predict the outcome of acceptable behavior patterns. People should be the focus of any successful business change. BPR is not a recipe for successful business transformation if it focuses on only computer technology and process redesign.
In fact, many BPR projects have failed because they did not recognize the importance of the human element in implementing BPR. Understanding the people in organizations, the current company culture, motivation, leadership, and past performance is essential to recognize, understand, and integrate into the vision and implementation of BPR. If the human element is given equal or greater emphasis in BPR, the odds of successful business transformation increase substantially.
=== Ongoing continuous improvement ===
Many organizational change theorists hold a common view of organizations adjusting gradually and incrementally and responding locally to individual crises as they arise. Common elements are: BPR is a successive and ongoing process and should be regarded as an improvement strategy that enables an organization to make the move from a traditional functional orientation to one that aligns with strategic business processes. Continuous improvement is defined as the propensity of the organization to pursue incremental and innovative improvements in its processes, products, and services. The incremental change is governed by the knowledge gained from each previous change cycle. It is essential that the automation infrastructure of the BPR activity provides for performance measurements in order to support continuous improvements. It will need to efficiently capture appropriate data and allow access to appropriate individuals. To ensure that the process generates the desired benefits, it must be tested before it is deployed to the end users. If it does not perform satisfactorily, more time should be taken to modify the process until it does. A fundamental concept for quality practitioners is the use of feedback loops at every step of the process and an environment that encourages constant evaluation of results and individual efforts to improve.
At the end user's level, there must be a proactive feedback mechanism that provides for and facilitates resolution of problems and issues. This will also contribute to the continuous risk assessment and evaluation that are needed throughout the implementation process to deal with any risks at their initial state and to ensure the success of the reengineering efforts. Anticipating and planning for risk handling are important for dealing effectively with any risk when it first occurs, and as early as possible in the BPR process. It is interesting that many of the successful applications of reengineering described by its proponents are in organizations practicing continuous improvement programs. Hammer and Champy (1993) use IBM Credit Corporation, as well as Ford and Kodak, as examples of companies that carried out BPR successfully because they had long-running continuous improvement programs. In conclusion, successful BPR can potentially create substantial improvements in the way organizations do business and can actually produce fundamental improvements for business operations. However, in order to achieve that, there are some key success factors that must be taken into consideration when performing BPR. BPR success factors are a collection of lessons learned from reengineering projects, and from these lessons common themes have emerged. In addition, the ultimate success of BPR depends on the people who do it and on how well they can be committed and motivated to be creative and to apply their detailed knowledge to the reengineering initiative. Organizations planning to undertake BPR must take into consideration the success factors of BPR in order to ensure that their reengineering-related change efforts are comprehensive, well implemented, and have a minimum chance of failure.
== Critique ==
Many companies used reengineering as a pretext for downsizing, though this was not the intent of reengineering's proponents; consequently, reengineering earned a reputation for being synonymous with downsizing and layoffs. In many circumstances, reengineering has not always lived up to its expectations. Some prominent reasons include:
Reengineering assumes that the factor that limits an organization's performance is the ineffectiveness of its processes (which may or may not be true) and offers no means of validating that assumption.
Reengineering assumes the need to start the process of performance improvement with a "clean slate," i.e. totally disregarding the status quo.
According to Eliyahu M. Goldratt (and his Theory of Constraints), reengineering does not provide an effective way to focus improvement efforts on the organization's constraint.
Others have claimed that reengineering was a recycled buzzword for commonly held ideas. Abrahamson (1996) argued that fashionable management terms tend to follow a lifecycle, which for reengineering peaked between 1993 and 1996 (Ponzi and Koenig 2002). They argue that reengineering was in fact nothing new (for example, when Henry Ford implemented the assembly line in 1908, he was in fact reengineering, radically changing the way of thinking in an organization). The most frequent critique against BPR concerns the strict focus on efficiency and technology and the disregard of people in the organization that is subjected to a reengineering initiative. Very often, the label BPR was used for major workforce reductions. Thomas Davenport, an early BPR proponent, stated that: "When I wrote about "business process redesign" in 1990, I explicitly said that using it for cost reduction alone was not a sensible goal. And consultants Michael Hammer and James Champy, the two names most closely associated with reengineering, have insisted all along that layoffs shouldn't be the point.
But the fact is, once out of the bottle, the reengineering genie quickly turned ugly." Hammer similarly admitted that: "I wasn't smart enough about that. I was reflecting my engineering background and was insufficiently appreciative of the human dimension. I've learned that's critical."
== See also ==
Business process management (BPM) – Business management discipline
Business process outsourcing (BPO) – Form of outsourcing
Business Process Modeling Notation (BPMN) – Graphical representation for specifying business processes
Kaizen – Concept about continuous improvement
Learning agenda – Set of questions about what needs to be learned before planning
== References ==
This article incorporates public domain material from Business Process Re-engineering Assessment Guide, May 1997 (PDF). United States General Accounting Office.
== Further reading ==
Abrahamson, E. (1996). "Management fashion", Academy of Management Review, 21, 254–285.
Champy, J. (1995). Reengineering Management, Harper Business Books, New York.
Davenport, Thomas & Short, J. (1990), "The New Industrial Engineering: Information Technology and Business Process Redesign", Sloan Management Review, Summer 1990, pp. 11–27.
Davenport, Thomas (1993), Process Innovation: Reengineering Work Through Information Technology, Harvard Business School Press, Boston.
Davenport, Thomas (1995), "Reengineering – The Fad That Forgot People", Fast Company, November 1995.
Drucker, Peter (1972), "Work and Tools", in: W. Kranzberg and W.H. Davenport (eds), Technology and Culture, New York.
Dubois, H. F. W. (2002). "Harmonization of the European vaccination policy and the role TQM and reengineering could play", Quality Management in Health Care, 10(2): pp. 47–57.
Greenbaum, Joan (1995), Windows on the Workplace, Cornerstone.
Guha, S.; Kettinger, W.J. & Teng, T.C. (1993), "Business Process Reengineering: Building a Comprehensive Methodology", Information Systems Management, Summer 1993.
Hammer, M. (1990). "Reengineering Work: Don't Automate, Obliterate", Harvard Business Review, July/August, pp. 104–112.
Hammer, M. and Champy, J. A. (1993), Reengineering the Corporation: A Manifesto for Business Revolution, Harper Business Books, New York. ISBN 0-06-662112-7.
Hammer, M. and Stanton, S. (1995). The Reengineering Revolution, Harper Collins, London.
Hansen, Gregory (1993), Automating Business Process Reengineering, Prentice Hall.
Hussein, Bassam (2008), PRISM: Process Re-engineering Integrated Spiral Model, VDM Verlag.
Industry Week (1994), "De-engineering the corporation", Industry Week, 18 April 1994.
Johansson, Henry J. et al. (1993), Business Process Reengineering: BreakPoint Strategies for Market Dominance, John Wiley & Sons.
Leavitt, H.J. (1965), "Applied Organizational Change in Industry: Structural, Technological and Humanistic Approaches", in: James March (ed.), Handbook of Organizations, Rand McNally, Chicago.
Loyd, Tom (1994), "Giants with Feet of Clay", Financial Times, 5 Dec 1994, p. 8.
Malhotra, Yogesh (1998), "Business Process Redesign: An Overview", IEEE Engineering Management Review, vol. 26, no. 3, Fall 1998.
Ponzi, L.; Koenig, M. (2002). "Knowledge management: another management fad?", Information Research, 8(1).
"Reengineering Reviewed" (1994). The Economist, 2 July 1994, p. 66.
Roberts, Lon (1994), Process Reengineering: The Key to Achieving Breakthrough Success, Quality Press, Milwaukee.
Rummler, Geary A. and Brache, Alan P., Improving Performance: How to Manage the White Space in the Organization Chart, ISBN 0-7879-0090-7.
Taylor, Frederick (1911), The Principles of Scientific Management, Harper & Row, New York.
Thompson, James D. (1969), Organizations in Action, McGraw-Hill, New York.
White, JB (1996), Wall Street Journal, New York, N.Y.: 26 Nov 1996, pg. A.1.
== External links ==
BPR: Decision engineering in a strained industrial and business environment
Wikipedia/Business_process_redesign
A control-flow diagram (CFD) is a diagram to describe the control flow of a business process, process, or review. Control-flow diagrams were developed in the 1950s and are widely used in multiple engineering disciplines. They are one of the classic business process modeling methodologies, along with flow charts, DRAKON charts, data flow diagrams, functional flow block diagrams, Gantt charts, PERT diagrams, and IDEF.
== Overview ==
A control-flow diagram can consist of a subdivision to show sequential steps, with if-then-else conditions, repetition, and/or case conditions. Suitably annotated geometrical figures are used to represent operations, data, or equipment, and arrows are used to indicate the sequential flow from one to another. There are several types of control-flow diagrams, for example:
Change-control-flow diagram, used in project management
Configuration-decision control-flow diagram, used in configuration management
Process-control-flow diagram, used in process management
Quality-control-flow diagram, used in quality control
In software and systems development, control-flow diagrams can be used in control-flow analysis, data-flow analysis, algorithm analysis, and simulation. Control and data are most applicable for real-time and data-driven systems. These flow analyses transform logic and data requirements text into graphic flows which are easier to analyze than the text. PERT, state transition, and transaction diagrams are examples of control-flow diagrams.
== Types of control-flow diagrams ==
=== Process-control-flow diagram ===
A flow diagram can be developed for the process [control system] for each critical activity. Process control is normally a closed cycle in which a sensor provides information to a control application. The application determines whether the sensor information is within the predetermined (or calculated) data parameters and constraints. The result of this comparison is fed back to control the critical component.
This [feedback] may control the component electronically or may indicate the need for a manual action. This closed-cycle process has many checks and balances to ensure that it stays safe. It may be fully computer controlled and automated, or it may be a hybrid in which only the sensor is automated and the action requires manual intervention. Further, some process control systems may use prior generations of hardware and software, while others are state of the art. === Performance-seeking control-flow diagram === The figure presents an example of a performance-seeking control-flow diagram of the algorithm. The control law consists of estimation, modeling, and optimization processes. In the Kalman filter estimator, the inputs, outputs, and residuals were recorded. At the compact propulsion-system-modeling stage, all the estimated inlet and engine parameters were recorded. In addition to temperatures, pressures, and control positions, such estimated parameters as stall margins, thrust, and drag components were recorded. In the optimization phase, the operating-condition constraints, optimal solution, and linear-programming health-status condition codes were recorded. Finally, the actual commands that were sent to the engine through the DEEC were recorded. == See also == Data-flow diagram Data and information visualization Control-flow graph DRAKON Flow process chart == References == This article incorporates public domain material from the National Institute of Standards and Technology
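The closed cycle described above (a sensor reading is compared against predetermined parameters, and the result of the comparison drives the critical component, either electronically or via a manual action) can be sketched in a few lines of Python. This is an illustrative sketch only; the names (`Limits`, `control_step`, `run_loop`) and the three-way decision are assumptions for the example, not part of any standard or real control system.

```python
from dataclasses import dataclass

@dataclass
class Limits:
    """Predetermined (or calculated) data parameters and constraints."""
    low: float
    high: float

def control_step(reading: float, limits: Limits) -> str:
    """Compare one sensor reading against its constraints and decide the action.

    The returned command is the feedback sent to the critical component; in a
    hybrid system it might instead be displayed to an operator for manual action.
    """
    if reading < limits.low:
        return "increase"   # out of range on the low side
    if reading > limits.high:
        return "decrease"   # out of range on the high side
    return "hold"           # within the predetermined parameters

def run_loop(readings, limits: Limits):
    """One pass through the closed cycle for a sequence of sensor readings."""
    return [control_step(r, limits) for r in readings]
```

For example, `run_loop([18.0, 25.5, 21.0], Limits(low=20.0, high=24.0))` yields one command per reading, mirroring the sensor-compare-actuate cycle of a process-control-flow diagram.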
Wikipedia/Control_flow_diagram
A document management system (DMS) is usually a computerized system used to store, share, track and manage files or documents. Some systems include history tracking, where a log of the various versions created and modified by different users is recorded. The term has some overlap with the concept of content management systems. It is often viewed as a component of enterprise content management (ECM) systems and related to digital asset management, document imaging, workflow systems, and records management systems.
== History ==
While many electronic document management systems store documents in their native file format (Microsoft Word or Excel, PDF), some web-based document management systems are beginning to store content in the form of HTML. These HTML-based document management systems can act as publishing systems or policy management systems. Content is captured either by using browser-based editors or through the importing and conversion of non-HTML content. Storing documents as HTML enables a simpler full-text search workflow, as most search engines deal with HTML natively. A DMS without an HTML storage format must extract the text from the proprietary format, making the full-text search workflow slightly more complicated. Search capabilities including boolean queries, cluster analysis, and stemming have become critical components of DMS as users have grown used to internet searching and spend less time organizing their content.
== Components ==
Document management systems commonly provide storage, versioning, metadata, and security, as well as indexing and retrieval capabilities.
== Standardization ==
Many industry associations publish their own lists of particular document control standards that are used in their particular field. Following is a list of some of the relevant ISO documents. Divisions ICS 01.140.10 and 01.140.20.
The ISO has also published a series of standards regarding technical documentation, covered by division 01.110.
ISO 2709 Information and documentation – Format for information exchange
ISO 15836 Information and documentation – The Dublin Core metadata element set
ISO 15489 Information and documentation – Records management
ISO 21127 Information and documentation – A reference ontology for the interchange of cultural heritage information
ISO 23950 Information and documentation – Information retrieval (Z39.50) – Application service definition and protocol specification
ISO 10244 Document management – Business process baselining and analysis
ISO 32000 Document management – Portable document format
ISO/IEC 27001 Information security, cybersecurity and privacy protection – Information security management systems
== Document control ==
Government regulations typically require that companies working in certain industries control their documents. A document controller is responsible for strictly controlling these documents. These industries include accounting (for example: the 8th EU Directive, the Sarbanes–Oxley Act), food safety (for example, the Food Safety Modernization Act in the US), ISO (mentioned above), medical device manufacturing (FDA), manufacture of blood, human cells, and tissue products (FDA), healthcare (JCAHO), and information technology (ITIL). Some industries work under stricter document control requirements due to the type of information they retain for privacy, warranty, or other highly regulated purposes. Examples include protected health information (PHI) as required by HIPAA or construction project documents required for warranty periods. An information systems strategy plan (ISSP) can shape organisational information systems over medium- to long-term periods. Documents stored in a document management system—such as procedures, work instructions, and policy statements—provide evidence of documents under control.
Failing to comply can cause fines, the loss of business, or damage to a business's reputation. Document control includes: reviewing and approving documents prior to release; ensuring changes and revisions are clearly identified; ensuring that relevant versions of applicable documents are available at their "points of use"; ensuring that documents remain legible and identifiable; ensuring that external documents (such as customer-supplied documents or supplier manuals) are identified and controlled; and preventing "unintended" use of obsolete documents. These document control requirements form part of an organisation's compliance costs alongside related functions such as a data protection officer and internal audit. == Integrated DM == Integrated document management comprises the technologies, tools, and methods used to capture, manage, store, preserve, deliver and dispose of 'documents' across an enterprise. In this context 'documents' are any of a myriad of information assets, including images, office documents, graphics, and drawings, as well as newer electronic objects such as web pages, email, instant messages, and video. == Document management software == Paper documents have long been used to store information. However, paper can be costly and, if used excessively, wasteful. Document management software is not simply a storage tool: it lets users manage access to, track, and edit the information stored. Document management software is an electronic cabinet that can be used to organize all paper and digital files. The software helps businesses combine paper and digital files into a single hub: paper documents are scanned and digital formats are imported. One of the most important benefits of digital document management is a "fail-safe" environment for safeguarding all documents and data. 
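The capabilities just described (a single hub of files, version history, and simple full-text retrieval) can be sketched with a minimal in-memory store; the class and method names below are illustrative and not drawn from any particular DMS product.

```python
# Minimal in-memory document store illustrating versioning, metadata,
# and retrieval. Names are illustrative; real DMS products differ.

class DocumentStore:
    def __init__(self):
        self._docs = {}  # doc_id -> list of (content, metadata) versions

    def add_version(self, doc_id, content, **metadata):
        """Store a new version; earlier versions remain retrievable."""
        self._docs.setdefault(doc_id, []).append((content, metadata))

    def latest(self, doc_id):
        """Return the most recent version of a document."""
        return self._docs[doc_id][-1]

    def history(self, doc_id):
        """Return all versions, oldest first (history tracking)."""
        return list(self._docs[doc_id])

    def search(self, term):
        """Naive full-text search over the latest version of each document."""
        return [doc_id for doc_id, versions in self._docs.items()
                if term.lower() in versions[-1][0].lower()]

store = DocumentStore()
store.add_version("policy-1", "Initial safety policy", author="alice")
store.add_version("policy-1", "Revised safety policy", author="bob")
print(store.latest("policy-1")[0])    # Revised safety policy
print(len(store.history("policy-1")))  # 2
print(store.search("safety"))          # ['policy-1']
```

A real system would add access control, audit logging, and persistent storage on top of this skeleton.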
In the heavy construction industry specifically, document management software allows team members to securely view and upload documents for projects they are assigned to from anywhere and at any time to help streamline day-to-day operations. == See also == == References == == External links ==
Wikipedia/Document_control
The ARIS concept (Architecture of Integrated Information Systems) by August-Wilhelm Scheer aims to ensure that an enterprise information system can completely meet its requirements. This framework is based on a division of the model into description views and levels, which allows a description of the individual elements through specially designed methods, without having to include the entire model. The methodology serves as a systems development life cycle for mapping and optimizing business processes. These processes are mapped for each description view, starting from the business management question down to the implementation at the data processing level. == ARIS house (description views) == ARIS relies mainly on its own five-view architecture (ARIS house). These five views are the function, organization, data, and product/service views of a process, plus the process view itself, which integrates the other four. The classification breaks down the complexity of the model into five facets and thus makes business process modeling simpler. Each view of the ARIS concept represents the model of a business process under a specific aspect: Function view: The activities, groupings and hierarchical relationships that exist between them are described in the function view, for example in a function tree. Since functions support goals and are controlled by them, goals are also assigned to the function view. Organization view: This provides an overview of the organizational structure of a company, including human resources, machines, hardware and their relationships, see also Organizational chart. Data view: This includes all events (that generate data) and environmental data, such as correspondence, documents, etc., i.e. all company-relevant information objects, see also Entity Relationship Model. Product/Service view: Provides an overview of the entire product/service portfolio (incl. 
services, products, financial) Process view: The process view connects all other views into a time-logical schedule, for example in an event-driven process chain or BPMN == Description levels == Each description view of the ARIS house is divided into three description levels: Concept: structured representation of the business processes by means of description models that are understandable for the business side (depending on the view, e.g. ERM, EPC, organization chart, function tree); Data processing concept (IT concept): implementation of the concept in IT-related description models (depending on the view, e.g. relations, structure charts, topologies); Implementation: IT-technical realization of the described process parts (depending on the view, e.g. by creating program code, database systems, use of protocols). == Dissemination and related work == The ARIS concept forms the basis of various software products, including the ARIS Toolset from Software AG, which has been the owner of the ARIS trademarks since IDS Scheer AG was acquired. At the end of 2004, part of the concept was reflected in the graphical process integration of SAP Exchange Infrastructure. Although ARIS is a well-known approach for the description of information system architectures, especially in German-speaking countries, it is not as well known on a larger scale. Within the Management Frameworks group it is one of over fifty existing frameworks for information management on the market. The architecture of interoperable information systems (AIOS) was also published in 2010 at the Institut für Wirtschaftsinformatik (Institute for Information Systems) in Saarbrücken, which was founded by Scheer. While ARIS describes company-internal information systems and business processes, AIOS describes how cross-company business processes can be realized by adapting and loosely coupling information systems. 
With the "Model-to-Execute" approach, business processes can be modelled in ARIS and automatically transferred to webMethods BPM for technical execution. == Applications == As one of the Enterprise Modeling methods, ARIS provides four different aspects of applications: The ARIS concept: is the architecture for describing business processes. provides modelling methods, the meta structures of which are comprised in information models. is the foundation for the ARIS Toolset software system for the support of modelling. The ARIS house of Business Engineering (HOBE) represents a concept for comprehensive computer-aided Business Process Management. == Examples == == See also == ARIS Express, free modeling tool by Software AG Architecture of Interoperable Information Systems DRAKON == References == == Further reading == Ulrich Frank (2002) "Multi-Perspective Enterprise Modeling (MEMO) Conceptual Framework and Modeling Languages" Universität Koblenz-Landau; Rheinau 1, D-56075 Koblenz, Germany Thomas R. Gulledge and Rainer A. Sommer (1999) "Process Coupling in Business Process Engineering" George Mason University, USA. Knowledge and Process Management Volume 6 Number 3 pp 158–165 Henk Jonkers, Marc Lankhorst, et al. (2004) "Concepts for Modeling Enterprise Architectures" Telematica Instituut, the Netherlands; University of Nijmegen, Nijmegen, the Netherlands; Leiden Institute for Advanced Computer Science, Leiden, the Netherlands; CWI, Amsterdam, the Netherlands August-Wilhelm Scheer, Markus Nüttgens "ARIS Architecture and Reference Models for Business Process Management" Institut für Wirtschaftsinformatik, Universität des Saarlandes, Im Stadtwald Geb. 14.1, D-66123 Saarbrücken August-Wilhelm Scheer (1996) "ARIS-Toolset:Von Forschungs-Prototypen zum Produkt" Informatik-Spektrum 19: 71–78 (1996) © Springer-Verlag 1996 August-Wilhelm Scheer: Architektur integrierter Informationssysteme. Springer, Berlin 1992, ISBN 3-540-55401-7. 
August-Wilhelm Scheer, Wolfram Jost: ARIS in der Praxis. Springer, Berlin 2002, ISBN 3-540-43029-6. Jörg Krüger, Christian Uhlig: Praxis der Geschäftsprozessmodellierung - ARIS erfolgreich anwenden. VDE-Verlag, Berlin 2009, ISBN 978-3-8007-3122-0. Dirk Matthes: . 2011. Auflage. Springer Science+Business Media, 2011, ISBN 978-3-642-12954-4. Thomas Allweyer: Geschäftsprozessmanagement. W3L, Bochum 2005, ISBN 3-937137-11-4. Rob Davis, Eric Brabaender: ARIS Design Platform: Getting Started with BPM. Springer, London 2007, ISBN 1-84628-612-3. Rob Davis: ARIS Design Platform: Advanced Process Modelling and Administration. Springer, London 2008, ISBN 978-1-84800-110-7. Peter Stahlknecht, Ulrich Hasenkamp: Einführung in die Wirtschaftsinformatik. 11. Auflage. Springer, Berlin 2005, ISBN 3-540-01183-8. == External links == Software AG product page ARIS Community official ARIS community by Software AG From Event-driven modeling to Process monitoring. Presentation by Helge Hess, IDS Scheer, 2006.
Wikipedia/Architecture_of_Integrated_Information_Systems
Distributed computing is a field of computer science that studies distributed systems, defined as computer systems whose inter-communicating components are located on different networked computers. The components of a distributed system communicate and coordinate their actions by passing messages to one another in order to achieve a common goal. Three significant challenges of distributed systems are: maintaining concurrency of components, overcoming the lack of a global clock, and managing the independent failure of components. When a component of one system fails, the entire system does not fail. Examples of distributed systems vary from SOA-based systems to microservices to massively multiplayer online games to peer-to-peer applications. Distributed systems cost significantly more than monolithic architectures, primarily due to the increased need for additional hardware, servers, gateways, firewalls, new subnets, proxies, and so on. Also, distributed systems are prone to the fallacies of distributed computing. On the other hand, a well-designed distributed system is more scalable, more durable, more changeable and more fine-tuned than a monolithic application deployed on a single machine. According to Marc Brooker, "a system is scalable in the range where marginal cost of additional workload is nearly constant." Serverless technologies fit this definition, but the total cost of ownership, not just the infrastructure cost, must be considered. A computer program that runs within a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many different types of implementations for the message passing mechanism, including pure HTTP, RPC-like connectors and message queues. Distributed computing also refers to the use of distributed systems to solve computational problems. 
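As a rough illustration of the message-queue style of message passing mentioned above, the following sketch divides a problem into tasks that workers solve and answer by message; in-process queues and threads stand in for a networked broker and remote machines, and all names are illustrative.

```python
# Sketch of message passing via queues: a coordinator divides a problem
# into tasks, worker threads solve them and send results back.
# In-process queues stand in for a networked message broker.
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        n = tasks.get()
        if n is None:          # sentinel: no more tasks for this worker
            break
        results.put(n * n)     # "solve" the task and reply by message

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()

for n in range(10):            # divide the problem into tasks
    tasks.put(n)
for _ in threads:              # one shutdown sentinel per worker
    tasks.put(None)
for t in threads:
    t.join()

total = sum(results.get() for _ in range(10))
print(total)  # 285 = 0^2 + 1^2 + ... + 9^2
```

The same shape carries over to real systems where the queues are backed by HTTP endpoints, RPC stubs, or a message broker.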
In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other via message passing. == Introduction == The word distributed in terms such as "distributed system", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area. The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing. While there is no single definition of a distributed system, the following defining properties are commonly used: There are several autonomous computational entities (computers or nodes), each of which has its own local memory. The entities communicate with each other by message passing. A distributed system may have a common goal, such as solving a large computational problem; the user then perceives the collection of autonomous processors as a unit. Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users. Other typical properties of distributed systems include the following: The system has to tolerate failures in individual computers. The structure of the system (network topology, network latency, number of computers) is not known in advance, the system may consist of different kinds of computers and network links, and the system may change during the execution of a distributed program. Each computer has only a limited, incomplete view of the system. Each computer may know only one part of the input. == Patterns == Here are common architectural patterns used for distributed computing: Saga interaction pattern Microservices Event-driven architecture == Events vs. 
Messages == In distributed systems, events represent a fact or state change (e.g., OrderPlaced) and are typically broadcast asynchronously to multiple consumers, promoting loose coupling and scalability. While events generally don’t expect an immediate response, acknowledgment mechanisms are often implemented at the infrastructure level (e.g., Kafka commit offsets, SNS delivery statuses) rather than being an inherent part of the event pattern itself. In contrast, messages serve a broader role, encompassing commands (e.g., ProcessPayment), events (e.g., PaymentProcessed), and documents (e.g., DataPayload). Both events and messages can support various delivery guarantees, including at-least-once, at-most-once, and exactly-once, depending on the technology stack and implementation. However, exactly-once delivery is often achieved through idempotency mechanisms rather than true, infrastructure-level exactly-once semantics. Delivery patterns for both events and messages include publish/subscribe (one-to-many) and point-to-point (one-to-one). While request/reply is technically possible, it is more commonly associated with messaging patterns rather than pure event-driven systems. Events excel at state propagation and decoupled notifications, while messages are better suited for command execution, workflow orchestration, and explicit coordination. Modern architectures commonly combine both approaches, leveraging events for distributed state change notifications and messages for targeted command execution and structured workflows based on specific timing, ordering, and delivery requirements. == Parallel and distributed computing == Distributed systems are groups of networked computers which share a common goal for their work. The terms "concurrent computing", "parallel computing", and "distributed computing" have much overlap, and no clear distinction exists between them. 
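The idempotency mechanism mentioned in the events-versus-messages discussion above, which turns at-least-once delivery into effectively exactly-once processing, can be sketched as follows; the event shape and identifiers are hypothetical.

```python
# Idempotent consumer sketch: at-least-once delivery may hand us the
# same event twice, so we record processed event IDs and skip duplicates,
# giving effectively exactly-once *processing* on top of redelivery.
processed_ids = set()
balance = 0

def handle(event):
    """Apply a PaymentProcessed-style event at most once."""
    global balance
    if event["id"] in processed_ids:
        return False               # duplicate delivery: ignore
    processed_ids.add(event["id"])
    balance += event["amount"]
    return True

# The broker redelivers event "e1" (at-least-once semantics):
deliveries = [
    {"id": "e1", "amount": 100},
    {"id": "e1", "amount": 100},   # duplicate delivery
    {"id": "e2", "amount": 50},
]
for e in deliveries:
    handle(e)
print(balance)  # 150, not 250
```

In production the set of processed IDs would live in durable storage and be updated atomically with the state change, otherwise a crash between the two steps reintroduces duplicates.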
The same system may be characterized both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel. Parallel computing may be seen as a particularly tightly coupled form of distributed computing, and distributed computing may be seen as a loosely coupled form of parallel computing. Nevertheless, it is possible to roughly classify concurrent systems as "parallel" or "distributed" using the following criteria: In parallel computing, all processors may have access to a shared memory to exchange information between processors. In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors. The figure on the right illustrates the difference between distributed and parallel systems. Figure (a) is a schematic view of a typical distributed system; the system is represented as a network topology in which each node is a computer and each line connecting the nodes is a communication link. Figure (b) shows the same distributed system in more detail: each computer has its own local memory, and information can be exchanged only by passing messages from one node to another by using the available communication links. Figure (c) shows a parallel system in which each processor has a direct access to a shared memory. The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm that do not quite match the above definitions of parallel and distributed systems (see below for more detailed discussion). Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms while the coordination of a large-scale distributed system uses distributed algorithms. == History == The use of concurrent processes which communicate through message-passing has its roots in operating system architectures studied in the 1960s. 
The first widespread distributed systems were local-area networks such as Ethernet, which was invented in the 1970s. ARPANET, one of the predecessors of the Internet, was introduced in the late 1960s, and ARPANET e-mail was invented in the early 1970s. E-mail became the most successful application of ARPANET, and it is probably the earliest example of a large-scale distributed application. In addition to ARPANET (and its successor, the global Internet), other early worldwide computer networks included Usenet and FidoNet from the 1980s, both of which were used to support distributed discussion systems. The study of distributed computing became its own branch of computer science in the late 1970s and early 1980s. The first conference in the field, Symposium on Principles of Distributed Computing (PODC), dates back to 1982, and its counterpart International Symposium on Distributed Computing (DISC) was first held in Ottawa in 1985 as the International Workshop on Distributed Algorithms on Graphs. == Architectures == Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system. Whether these CPUs share resources or not determines a first distinction between three types of architecture: Shared memory Shared disk Shared nothing. Distributed programming typically falls into one of several basic architectures: client–server, three-tier, n-tier, or peer-to-peer; or categories: loose coupling, or tight coupling. Client–server: architectures where smart clients contact the server for data then format and display it to the users. Input at the client is committed back to the server when it represents a permanent change. 
Three-tier: architectures that move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are three-tier. n-tier: architectures that refer typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers. Peer-to-peer: architectures where there are no special machines that provide a service or manage the network resources. Instead all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and as servers. Examples of this architecture include BitTorrent and the bitcoin network. Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes. Through various message passing protocols, processes may communicate directly with one another, typically in a main/sub relationship. Alternatively, a "database-centric" architecture can enable distributed computing to be done without any form of direct inter-process communication, by utilizing a shared database. Database-centric architecture in particular provides relational processing analytics in a schematic architecture allowing for live environment relay. This enables distributed computing functions both within and beyond the parameters of a networked database. === Cell-Based Architecture === Cell-based architecture is a distributed computing approach in which computational resources are organized into self-contained units called cells. Each cell operates independently, processing requests while maintaining scalability, fault isolation, and availability. A cell typically consists of multiple services or application components and functions as an autonomous unit. Some implementations replicate entire sets of services across multiple cells, while others partition workloads between cells. 
In replicated models, requests may be rerouted to an operational cell if another experiences a failure. This design is intended to enhance system resilience by reducing the impact of localized failures. Some implementations employ circuit breakers within and between cells. Within a cell, circuit breakers may be used to prevent cascading failures among services, while inter-cell circuit breakers can isolate failing cells and redirect traffic to those that remain operational. Cell-based architecture has been adopted in some large-scale distributed systems, particularly in cloud-native and high-availability environments, where fault isolation and redundancy are key design considerations. Its implementation varies depending on system requirements, infrastructure constraints, and operational objectives. == Applications == Reasons for using distributed systems and distributed computing may include: The very nature of an application may require the use of a communication network that connects several computers: for example, data produced in one physical location and required in another location. There are many cases in which the use of a single computer would be possible in principle, but the use of a distributed system is beneficial for practical reasons. For example: It can allow for much larger storage and memory, faster compute, and higher bandwidth than a single machine. It can provide more reliability than a non-distributed system, as there is no single point of failure. Moreover, a distributed system may be easier to expand and manage than a monolithic uniprocessor system. It may be more cost-efficient to obtain the desired level of performance by using a cluster of several low-end computers, in comparison with a single high-end computer. 
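The circuit breakers described for cell-based architectures can be sketched as a small state machine; the threshold and names here are illustrative, and a production breaker would typically also add a timed half-open state to probe for recovery.

```python
# Minimal circuit breaker sketch: after `threshold` consecutive failures
# the breaker "opens" and short-circuits further calls, isolating a
# failing cell or service so traffic can be redirected elsewhere.
class CircuitBreaker:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.open = False

    def call(self, fn, *args):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True   # stop sending traffic to the failing cell
            raise
        self.failures = 0          # a success resets the failure count
        return result

breaker = CircuitBreaker(threshold=2)

def flaky():
    raise IOError("cell unavailable")

for _ in range(2):                 # two consecutive failures trip the breaker
    try:
        breaker.call(flaky)
    except IOError:
        pass
print(breaker.open)  # True: further calls fail fast without touching the cell
```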
== Examples == Examples of distributed systems and applications of distributed computing include the following: telecommunications networks: telephone networks and cellular networks, computer networks such as the Internet, wireless sensor networks, routing algorithms; network applications: World Wide Web and peer-to-peer networks, massively multiplayer online games and virtual reality communities, distributed databases and distributed database management systems, network file systems, distributed cache such as burst buffers, distributed information processing systems such as banking systems and airline reservation systems; real-time process control: aircraft control systems, industrial control systems; parallel computation: scientific computing, including cluster computing, grid computing, cloud computing, and various volunteer computing projects, distributed rendering in computer graphics. == Reactive distributed systems == According to the Reactive Manifesto, reactive distributed systems are responsive, resilient, elastic and message-driven. Reactive systems are consequently more flexible, loosely coupled and scalable. To make a system reactive, designers are advised to implement the Reactive Principles, a set of principles and patterns that help make cloud-native as well as edge-native applications more reactive. == Theoretical foundations == === Models === Many tasks that we would like to automate by using a computer are of question–answer type: we would like to ask a question and the computer should produce an answer. In theoretical computer science, such tasks are called computational problems. Formally, a computational problem consists of instances together with a solution for each instance. Instances are questions that we can ask, and solutions are desired answers to these questions. 
Theoretical computer science seeks to understand which computational problems can be solved by using a computer (computability theory) and how efficiently (computational complexity theory). Traditionally, it is said that a problem can be solved by using a computer if we can design an algorithm that produces a correct solution for any given instance. Such an algorithm can be implemented as a computer program that runs on a general-purpose computer: the program reads a problem instance from input, performs some computation, and produces the solution as output. Formalisms such as random-access machines or universal Turing machines can be used as abstract models of a sequential general-purpose computer executing such an algorithm. The field of concurrent and distributed computing studies similar questions in the case of either multiple computers, or a computer that executes a network of interacting processes: which computational problems can be solved in such a network and how efficiently? However, it is not at all obvious what is meant by "solving a problem" in the case of a concurrent or distributed system: for example, what is the task of the algorithm designer, and what is the concurrent or distributed equivalent of a sequential general-purpose computer? The discussion below focuses on the case of multiple computers, although many of the issues are the same for concurrent processes running on a single computer. Three viewpoints are commonly used: Parallel algorithms in shared-memory model All processors have access to a shared memory. The algorithm designer chooses the program executed by each processor. One theoretical model is the parallel random-access machines (PRAM) that are used. However, the classical PRAM model assumes synchronous access to the shared memory. 
Shared-memory programs can be extended to distributed systems if the underlying operating system encapsulates the communication between nodes and virtually unifies the memory across all individual systems. A model that is closer to the behavior of real-world multiprocessor machines and takes into account the use of machine instructions, such as Compare-and-swap (CAS), is that of asynchronous shared memory. There is a wide body of work on this model, a summary of which can be found in the literature. Parallel algorithms in message-passing model The algorithm designer chooses the structure of the network, as well as the program executed by each computer. Models such as Boolean circuits and sorting networks are used. A Boolean circuit can be seen as a computer network: each gate is a computer that runs an extremely simple computer program. Similarly, a sorting network can be seen as a computer network: each comparator is a computer. Distributed algorithms in message-passing model The algorithm designer only chooses the computer program. All computers run the same program. The system must work correctly regardless of the structure of the network. A commonly used model is a graph with one finite-state machine per node. In the case of distributed algorithms, computational problems are typically related to graphs. Often the graph that describes the structure of the computer network is the problem instance. This is illustrated in the following example. === An example === Consider the computational problem of finding a coloring of a given graph G. Different fields might take the following approaches: Centralized algorithms The graph G is encoded as a string, and the string is given as input to a computer. The computer program finds a coloring of the graph, encodes the coloring as a string, and outputs the result. Parallel algorithms Again, the graph G is encoded as a string. However, multiple computers can access the same string in parallel. 
Each computer might focus on one part of the graph and produce a coloring for that part. The main focus is on high-performance computation that exploits the processing power of multiple computers in parallel. Distributed algorithms The graph G is the structure of the computer network. There is one computer for each node of G and one communication link for each edge of G. Initially, each computer only knows about its immediate neighbors in the graph G; the computers must exchange messages with each other to discover more about the structure of G. Each computer must produce its own color as output. The main focus is on coordinating the operation of an arbitrary distributed system. While the field of parallel algorithms has a different focus than the field of distributed algorithms, there is much interaction between the two fields. For example, the Cole–Vishkin algorithm for graph coloring was originally presented as a parallel algorithm, but the same technique can also be used directly as a distributed algorithm. Moreover, a parallel algorithm can be implemented either in a parallel system (using shared memory) or in a distributed system (using message passing). The traditional boundary between parallel and distributed algorithms (choose a suitable network vs. run in any given network) does not lie in the same place as the boundary between parallel and distributed systems (shared memory vs. message passing). === Complexity measures === In parallel algorithms, yet another resource in addition to time and space is the number of computers. Indeed, often there is a trade-off between the running time and the number of computers: the problem can be solved faster if there are more computers running in parallel (see speedup). If a decision problem can be solved in polylogarithmic time by using a polynomial number of processors, then the problem is said to be in the class NC. 
The class NC can be defined equally well by using the PRAM formalism or Boolean circuits: PRAM machines can simulate Boolean circuits efficiently and vice versa. In the analysis of distributed algorithms, more attention is usually paid to communication operations than to computational steps. Perhaps the simplest model of distributed computing is a synchronous system where all nodes operate in a lockstep fashion. This model is commonly known as the LOCAL model. During each communication round, all nodes in parallel (1) receive the latest messages from their neighbors, (2) perform arbitrary local computation, and (3) send new messages to their neighbors. In such systems, a central complexity measure is the number of synchronous communication rounds required to complete the task. This complexity measure is closely related to the diameter of the network. Let D be the diameter of the network. On the one hand, any computable problem can be solved trivially in a synchronous distributed system in approximately 2D communication rounds: simply gather all information in one location (D rounds), solve the problem, and inform each node about the solution (D rounds). On the other hand, if the running time of the algorithm is much smaller than D communication rounds, then the nodes in the network must produce their output without having the possibility to obtain information about distant parts of the network. In other words, the nodes must make globally consistent decisions based on information that is available in their local D-neighborhood. Many distributed algorithms are known with a running time much smaller than D rounds, and understanding which problems can be solved by such algorithms is one of the central research questions of the field. Typically an algorithm which solves a problem in polylogarithmic time in the network size is considered efficient in this model. Another commonly used measure is the total number of bits transmitted in the network (cf. 
communication complexity). The features of this concept are typically captured with the CONGEST(B) model, which is defined similarly to the LOCAL model, but where single messages can only contain B bits. === Other problems === Traditional computational problems take the perspective that the user asks a question, a computer (or a distributed system) processes the question, then produces an answer and stops. However, there are also problems where the system is required not to stop, including the dining philosophers problem and other similar mutual exclusion problems. In these problems, the distributed system is supposed to continuously coordinate the use of shared resources so that no conflicts or deadlocks occur. There are also fundamental challenges that are unique to distributed computing, for example those related to fault-tolerance. Examples of related problems include consensus problems, Byzantine fault tolerance, and self-stabilisation. Much research is also focused on understanding the asynchronous nature of distributed systems: Synchronizers can be used to run synchronous algorithms in asynchronous systems. Logical clocks provide a causal happened-before ordering of events. Clock synchronization algorithms provide globally consistent physical time stamps. Note that in distributed systems, latency is often measured at a high percentile, such as the 99th percentile, because the median and the average can hide tail behavior and be misleading. === Election === Coordinator election (or leader election) is the process of designating a single process as the organizer of some task distributed among several computers (nodes). Before the task is begun, all network nodes are either unaware which node will serve as the "coordinator" (or leader) of the task, or unable to communicate with the current coordinator. After a coordinator election algorithm has been run, however, each node throughout the network recognizes a particular, unique node as the task coordinator. 
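Coordinator election with the simplest symmetry-breaking rule, electing the node with the highest unique identifier, can be illustrated by a synchronous simulation of a unidirectional ring; this flooding-max sketch is for illustration only and is not message-optimal (compare the Chang–Roberts algorithm).

```python
# Synchronous simulation of leader election on a unidirectional ring:
# in each round every node forwards the largest identifier it has seen;
# after n rounds all n nodes agree that the highest ID is the coordinator.
def elect_leader(ids):
    n = len(ids)
    seen = list(ids)                     # node i's current best candidate
    for _ in range(n):                   # n synchronous communication rounds
        # each node receives the message sent by its left neighbor
        incoming = [seen[(i - 1) % n] for i in range(n)]
        seen = [max(seen[i], incoming[i]) for i in range(n)]
    return seen                          # every entry is the elected leader

print(elect_leader([3, 7, 2, 9, 4]))  # [9, 9, 9, 9, 9]
```

The number of rounds is proportional to the ring size, matching the intuition that information must travel around the ring before all nodes can agree.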
The network nodes communicate among themselves in order to decide which of them will get into the "coordinator" state. For that, they need some method to break the symmetry among them. For example, if each node has a unique and comparable identity, then the nodes can compare their identities and decide that the node with the highest identity is the coordinator. The definition of this problem is often attributed to LeLann, who formalized it as a method to create a new token in a token ring network in which the token has been lost. Coordinator election algorithms are designed to be economical in terms of total bytes transmitted and time. The algorithm suggested by Gallager, Humblet, and Spira for general undirected graphs has had a strong impact on the design of distributed algorithms in general, and won the Dijkstra Prize for an influential paper in distributed computing. Many other algorithms have been suggested for different kinds of network graphs, such as undirected rings, unidirectional rings, complete graphs, grids, directed Euler graphs, and others. A general method that decouples the issue of the graph family from the design of the coordinator election algorithm was suggested by Korach, Kutten, and Moran. To perform coordination, distributed systems employ the concept of coordinators. The coordinator election problem is to choose a process from among a group of processes on different processors in a distributed system to act as the central coordinator. Several central coordinator election algorithms exist.

=== Properties of distributed systems ===
So far the focus has been on designing a distributed system that solves a given problem. A complementary research problem is studying the properties of a given distributed system. The halting problem is an analogous example from the field of centralised computation: we are given a computer program and the task is to decide whether it halts or runs forever.
The halting problem is undecidable in the general case, and naturally understanding the behaviour of a computer network is at least as hard as understanding the behaviour of one computer. However, there are many interesting special cases that are decidable. In particular, it is possible to reason about the behaviour of a network of finite-state machines. One example is telling whether a given network of interacting (asynchronous and non-deterministic) finite-state machines can reach a deadlock. This problem is PSPACE-complete, i.e., it is decidable, but it is not likely that there is an efficient (centralised, parallel or distributed) algorithm that solves the problem in the case of large networks.
Wikipedia/Distributed_systems
Integrated business planning (IBP) is a business management process that aims to align strategic, operational, and financial planning into a single, integrated process.

== Objective ==
Integrated business planning (IBP) is used by organizations to integrate various functions and align strategic, operational, and financial planning. Key aspects often included in IBP frameworks are:

1. Functional Integration: IBP aims to integrate departments within an organization and align them functionally. It seeks to create a unified planning process that connects functions such as research and development (R&D), manufacturing, supply chain management, marketing, and sales, incorporating inputs from each to form a common business plan.
2. Harmonization of Planning Cycles: IBP involves synchronizing planning activities across multiple timelines, aligning monthly, quarterly, and annual planning cycles. It uses a framework designed to address discrepancies arising from separate starting points and data sets to create a unified schedule.
3. Integrating across multiple Planning Horizons: IBP facilitates collaboration between sales and marketing teams to capture demand and create a consensus plan for the short and medium term.
4. Medium and Long-Term Financial Planning: IBP aims to align demand forecasts with pricing data and inputs from marketing teams to develop financial plans and predict financial outcomes.
5. Long-Term Strategic Planning: IBP may integrate New Product Introduction phases by incorporating insights from product development portfolios.
6. Capacity Expansion Planning: IBP may include capacity expansion planning by aligning long-term plans with new product development, cost improvement projects, and management of existing product portfolios. The long-term strategic plan serves as an input. These demand elements are matched against mapped capacity data in the system to conduct gap analysis.
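The gap analysis mentioned in the last point can be illustrated with a small sketch. All periods, figures, and names below are hypothetical; a real IBP system would run such a comparison against planning data rather than literal dictionaries.

```python
def capacity_gaps(demand, capacity):
    """Return period -> capacity minus demand; negative values are shortfalls."""
    return {period: capacity.get(period, 0) - units
            for period, units in demand.items()}

# Hypothetical long-term demand matched against mapped capacity per period.
demand   = {"2026-Q1": 120, "2026-Q2": 150, "2026-Q3": 180}
capacity = {"2026-Q1": 140, "2026-Q2": 140, "2026-Q3": 140}

shortfalls = {p: g for p, g in capacity_gaps(demand, capacity).items() if g < 0}
print(shortfalls)  # {'2026-Q2': -10, '2026-Q3': -40}
```

Periods with a shortfall are the ones that would feed capacity expansion planning, while surplus periods may indicate room for additional demand.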
IBP seeks to balance different objectives to achieve an overall result, potentially using prescriptive analytics. These tools are often used to mathematically optimize parts of a plan, such as inventory investment. Some advanced IBP processes may attempt to mathematically optimize multiple aspects of a plan.

== History ==
The history of integrated business planning (IBP) is linked to the development of sales and operations planning (S&OP), which emerged in the 1980s. S&OP sought to balance demand and manufacturing resources. IBP evolved from S&OP, aiming to address perceived limitations by combining financial planning, strategic planning, sales, and operations planning into a unified process. Over time, IBP has evolved, sometimes combining elements of Enterprise Performance Management (EPM) and S&OP. Some IBP platforms leverage predictive analytics and machine-learning technology.

== Criticism ==
Some sources argue that IBP is not distinct from S&OP. Patrick Bower has described IBP as a marketing hoax, claiming it is a name developed to create confusion and sell consulting and system services. Others assert that IBP is not a marketing hoax but a component of an Enterprise Performance Management (EPM) system. Another criticism is that IBP lacks a widely accepted academic definition and is perceived by some as having a bias towards a supply chain perspective. The absence of a standard academic definition allows for varying interpretations, which can lead to confusion among practitioners. A 2015 S&OP survey found that 32% of participants felt there was no difference between S&OP and IBP, 20% did not know, and 71% felt there was a need for more industry standards around S&OP. The lack of formal governance and a unified industry definition for IBP has been noted.
In the absence of widely accepted standards, there has been an attempt to create an open-source definition for IBP: "A holistic planning philosophy, where all organizational functions participate in providing executives periodically with valid, reliable information, in order to decide how to align the enterprise around executing the plans to achieve budget, strategic intent and the envisioned future."

== Academic literature ==
According to The Journal of Business Forecasting, integrating Sales and Operations Planning (S&OP) and Collaborative Planning, Forecasting and Replenishment (CPFR) can provide information for decision-making and may influence success factors and performance outcomes. The journal Information & Management discusses the integration of information systems planning (ISP) with business planning (BP) for strategic information systems planning. The study examines four ways these plans can be integrated and how this integration relates to organizational success, based on a survey of business planners and IS executives. The results suggest that increased integration may correlate with improved IS contribution to organizational success and fewer ISP problems. Pal Singh Toor and Dhir discuss integrated business planning, forecasting, and process management. The paper highlights the role of business intelligence and integrated planning processes, using case studies to illustrate potential benefits.

== See also ==
Business process modeling
Business reference model
Business intelligence
Business Relationship Management
Wikipedia/Integrated_business_planning
Harbarian process modeling (HPM) is a method for obtaining internal process information from an organization and then documenting that information in a visually effective, simple manner. The HPM method involves two levels:

Process diagrams: high-level overviews of specific processes or workflows.
Systems diagrams: maps of how the processes are correlated, as well as of the various inputs, outputs, goals, feedback loops, and external factors.

== HPM method purpose ==
The primary purpose of the HPM method is first to elicit process information from all relevant stakeholders and subsequently to document the existing processes completed within an organization. This method addresses the problem of workplace inefficiency, which can largely be attributed to the majority of processes being undocumented and completed informally. The formal documentation of processes serves to replace ambiguity and uncertainty with clarity and transparency about the work being completed, both for process stakeholders and for upper management. The development of formal documentation also provides the opportunity to reassess process efficacy: stakeholders are given the chance to offer their innate insight into process strengths, weaknesses, and redundancies.

== HPM output ==
The final output of the HPM method is the formalized master documentation of an organization's or branch's workflows and processes. This collection is divided into specific process series, each for a specific group or team. Each process series is divided into the team's major workflows, which are individually documented as HPM process diagrams. Each process series also includes an HPM systems diagram, which shows the relationships and connections between the various processes, inputs, outputs, feedback loops, the external environment, and system goals.

=== HPM process diagram ===
HPM process diagrams provide a high-level overview of a specific workflow or process completed by a business unit.
These diagrams are not meant to provide detailed instructions on procedures or codes, but instead address all major steps, decisions, and evaluations that are included in a process. Once finalized, these documents can be used as a reference by anyone in the organization. For example:

Process owners can use the diagrams to train new employees.
Other groups can reference the diagrams for enhanced understanding and communication.
Upper management can reference the diagrams for increased process transparency and decision-making.

HPM process diagrams can be customized to fit the specific needs of an organization; however, they typically include:

Process title
Process phases
Timeline (if applicable)
Sequential process steps
Legend/key

=== HPM system diagram ===
HPM system diagrams provide a holistic view of a set of process diagrams. The system diagram focuses on the connections and relationships between the various processes. These diagrams also address the system as a collection of:

Inputs
Transformations
Outputs
Goals
Feedback
External factors

== HPM implementation ==
The HPM method implementation is completed in five main phases: initial elicitation and collaboration, preliminary documentation, follow-up elicitation and collaboration, final documentation, and project package submission. Meetings with stakeholders from organizational teams are conducted to identify major processes, document each process in detail, and develop implementable solutions. Information is elicited from stakeholders and then formally documented in process flowchart diagrams and systems thinking diagrams for use within the organization.

=== Initial elicitation and collaboration ===
The first phase of the HPM method involves scheduling and meeting with each major team that makes up an organization or branch.
Meetings are then conducted in the form of an interview and follow a detailed protocol to establish the meeting purpose, convey the expected benefits, and elicit information about the respective team's processes. Meetings begin with an explanation of the purpose, as well as a list of expected benefits to each team. Clarification should then be given to all questions posed by stakeholders to ensure buy-in from all members of the respective team. Next, each team should provide a high-level overview of all of the major processes they complete on a regular basis. Each of these processes can then be discussed in detail: the chronological order of tasks for each process is elicited, and inputs, outputs, operations, decision points, and evaluations are identified.

=== Preliminary documentation ===
The second phase, preliminary documentation, begins after all process information has been elicited from all organizational teams. Each process is then organized and formatted into an HPM process diagram. Processes are documented with a title, an overview of process phases, a timeline (if applicable), and specific steps in sequential order.

=== Follow-up elicitation and collaboration ===
After all preliminary HPM process diagrams are drafted, follow-up meetings with each of the teams are conducted. These meetings open with a review of the respective team's HPM process diagrams for accuracy. This review also serves as a means to prime stakeholders for the three stages of brainstorming: (1) prepare the group, (2) present the problem, and (3) guide the discussion.

==== Prepare the group ====
Teams are primed for brainstorming through the review of their HPM process diagrams. This step reminds stakeholders of the content being discussed and allows them to think about each process in detail, reviewing what works well and what may be improved.
Additionally, the time between the initial interview and the follow-up meeting should have provided each stakeholder with the opportunity to think independently about the processes.

==== Present the problem ====
Once prepared for brainstorming, teams are tasked with problem identification. While the act of formally documenting processes inherently addresses existing problems with process efficiency and ambiguity, brainstorming is meant to focus on further solving these problems. This involves a brief independent reflection by each stakeholder on their existing processes' efficacy, strengths, and areas that could be or need to be improved.

==== Guide the discussion ====
To facilitate the brainstorming session, teams are guided through the four stages of appreciative inquiry (AI): (1) discovery, (2) dream, (3) design, and (4) destiny. Each stage consists of a discussion guided by specific AI-based questions crafted to elicit ideas and solutions grounded in positivity.

===== Discovery =====
The first stage, discovery, appraises stakeholders and existing workflows, identifying what already works well and "appreciating and valuing the best of 'what is'". Stakeholders are asked AI-based questions designed to elicit the best of their respective team. For example, stakeholders could identify personal strengths of specific stakeholders, strong points within existing processes, and environmental factors that enabled the team to operate at its best.

===== Dream =====
The second stage, dream, asks teams to envision a future based on the positives discovered in the first stage of AI. Questions posed to teams allow them to explore optimistic possibilities of what could be accomplished, while intentionally overlooking deficits and struggles that existed in the past. For example, stakeholders could envision what their team would be able to accomplish when operating at its best, or what factors would enable the team to operate with an elevated sense of purpose.
===== Design =====
The third stage, design, focuses on teams articulating how they could turn what was identified in the dream stage into a reality. Proponents of AI indicate that "once strategic focus or dream is articulated attention turns to the creation of the ideal organization" and the "actual design of the system" (p. 10). Questions should focus on action planning and identifying where specific improvements could be made within existing processes to make the envisioned futures tangible. Where the dream stage asks stakeholders to overlook deficits and struggles, the design stage asks stakeholders to develop new solutions that fix or bypass existing issues by using the team's strengths.

===== Destiny =====
The fourth stage, destiny, concludes the AI process by having teams develop a plan to sustain what was identified in the first three stages. Utilizing the positive momentum built throughout the brainstorming session, stakeholders are likely to agree to perform specific actions. Cognitive dissonance theory postulates that by making a public commitment of behavioral intent, stakeholders will feel a strong need to maintain consistency between their words and their actions. For this reason, questions focus on eliciting self-identified commitments from stakeholders. For example, stakeholders can be asked to identify a small action they could each take immediately to help make their envisioned future become a reality; these answers serve as public commitments to the rest of the team.

=== Final documentation ===
At this point, all relevant information has been elicited from the organizational teams and is ready to be documented. First, the HPM process diagrams should be updated to reflect the feedback and insights from stakeholders. Second, the collective HPM process diagrams of each team are reviewed and analyzed.
Systems thinking is then applied to identify a "deeper understanding of the linkages, relationships, interactions and behaviours among the elements that characterize the entire system".

== Business psychology concepts ==
The HPM method utilizes four core concepts derived from business psychology: (a) flowcharts, (b) brainstorming, (c) appreciative inquiry (AI), and (d) systems thinking.

=== Flowcharts ===
Flowcharts are "easy-to-understand diagrams that show how the steps of a process fit together". They provide a visual reference for stakeholders so that steps can clearly be followed in chronological order. Flowcharts are "used commonly with non-technical audiences and are good for gaining both alignment with what the process is and context for a solution". This tool was incorporated into the HPM method for its numerous applications: (a) defining a process, (b) standardizing a process, (c) communicating a process, (d) identifying bottlenecks or waste in a process, (e) solving a problem, and (f) improving a process. Flowcharts provide a useful and straightforward visual reference for all members of an organization. Utilizing flowcharts offers increased process transparency and decreased ambiguity, often resulting in an increase in overall workplace efficiency.

=== Brainstorming ===
Brainstorming is a tool that can be used with groups to generate ideas that draw on the experience and strengths of all stakeholders. It was incorporated into the HPM method for its potential to provide teams with the opportunity to "open up possibilities and break down incorrect assumptions about the problem's limits." Additionally, studies have shown that groups that engage in brainstorming "can be cognitively stimulated as a result of exposure to the ideas of others". This implies a synergistic relationship between stakeholders' individual strengths and the ideas generated throughout a brainstorming session.
=== Appreciative inquiry and the 4-D cycle ===
Appreciative inquiry (AI) is based on recognizing a "positive core" by appreciating the qualities and strengths of the people who make up an organization. Its proponents assert that "human systems grow in the direction of what they persistently ask questions about and this propensity is strongest and most sustainable when the means and ends of inquiry are positively correlated" (pp. 3–4). This implies that asking positive and optimistic questions will likely guide a group or organization towards a positive, optimistic future. AI involves four key stages, known as the 4-D cycle: (1) discovery, (2) dream, (3) design, and (4) destiny. Each stage engages stakeholders in appreciating their organization, constructing a holistic appreciation of the people they work with, and creating a "positive core" that allows the organization to change and grow. AI was incorporated into the HPM method for its promotion of positive perspectives among stakeholders. The creators of AI assert that it is a positive philosophy underlying the approach rather than solely a problem-solving technique. AI-based questions can be used to elicit constructive ideas and solutions from stakeholders throughout the elicitation portion of the project.

=== Systems thinking ===
Systems thinking is a theory that provides stakeholders with an "understanding [of] how the people, processes, and technology within an organization interact allow[ing] business analysts to understand the enterprise from a holistic point of view". While traditional forms of analysis look at specific parts of a system, systems thinking looks at the "big picture," focusing on the interactions between parts, including dependencies and synergistic relationships.
While there are many approaches and models of systems thinking, one open-system model from systems theory analyzes a system by its (a) inputs, (b) throughputs or transformations, (c) outputs, (d) feedback, and (e) environment. This model has been adapted for analyzing each of the organizational teams as a system through its (a) inputs, (b) transformations, (c) outputs, (d) feedback loops, (e) goals, and (f) environment.
Wikipedia/Harbarian_process_modeling
Generalised Enterprise Reference Architecture and Methodology (GERAM) is a generalised enterprise architecture framework for enterprise integration and business process engineering. It identifies the set of components recommended for use in enterprise engineering. The framework was developed in the 1990s by a joint task force of the International Federation of Automatic Control (IFAC) and the International Federation for Information Processing (IFIP) on enterprise architectures for enterprise integration. The development started with an evaluation of the then-existing frameworks for enterprise integration, which was developed into an overall definition of a so-called "generalised architecture".

== Overview ==
One of the premises of GERAM is that enterprise modelling is the major issue in enterprise engineering and integration. The framework contains several building blocks in which the methodologies and the corresponding languages are implemented, such as:

Enterprise modelling tools (GEMT) to support the enterprise integration process
Ontological theories (OT)
Generic enterprise models (GEMs)
Generic modules (GMs)

The building blocks are designed to support the modelling process by providing means for more efficient modelling. The resulting enterprise model (EM) represents all or part of the enterprise operation. These models allow the simulation of operational alternatives and thereby their evaluation. GERAM provides a generic description of all the elements recommended in enterprise engineering and integration.
Generalised Enterprise Reference Architecture and Methodology (GERAM) is an enterprise-reference architecture that models the whole life history of an enterprise-integration project, from its initial concept in the eyes of the entrepreneurs who initially developed it, through its definition, functional design or specification, detailed design, physical implementation or construction, and finally operation to obsolescence. The architecture aims to be a relatively simple framework upon which all the functions and activities involved in the aforementioned phases of the life of the enterprise-integration project can be mapped. It will also permit the tools used by investigators or practitioners at each phase to be indicated. The architecture applies to projects, products, and processes, as well as to enterprises.

== History ==
Generalised Enterprise Reference Architecture and Methodology (GERAM) was developed in the 1990s by the IFAC/IFIP Task Force on Architectures for Enterprise Integration, whose members included Peter Bernus, James G. Nell and others. The Task Force was established in 1990 and has studied enterprise-reference architectures ever since. It established the requirements to be satisfied by candidate enterprise-reference architectures and their associated methodologies to fulfill the needs of industry for such aids to enterprise integration. The result has been called GERAM, for "Generalized Enterprise-Reference Architecture and Methodology", by the Task Force. The Task Force has shown that such an architecture is feasible and that several architectures presently available in the literature already fulfill, or can potentially fulfill, such requirements. The development of enterprise-reference architectures evolved from the development of design methodologies for advanced manufacturing systems in the 1980s, such as CIMOSA, the Open System Architecture for CIM.
The GERAM framework was first published by Peter Bernus and Laszlo Nemes in 1994.

== Topics ==
=== Components ===
The eight main components, as shown in Figure 1, are:

Generic Enterprise Reference Architecture (GERA): defines the enterprise-related generic concepts recommended for use in enterprise integration projects. These concepts include the enterprise-system life cycle; business process modeling; modeling languages for different users of the architecture (business users, system designers, IT modeling specialists, others); and integrated model representation in different model views.
Generic Enterprise Engineering Methodologies (GEEM): describe the generic processes of enterprise integration. These methodologies may be described in terms of process models with detailed instructions for each step of the integration process.
Generic Enterprise Modeling Languages (GEML): define the generic constructs (building blocks) for enterprise modeling, adapted to the different needs of the people creating and using enterprise models.
Generic Enterprise Modeling Tools (GEMT): define the generic implementation of enterprise-integration methodologies and modeling languages, and other support for the creation and use of enterprise models.
Enterprise Models (EM): represent the enterprise operation. These models are expressed using generic modeling-language constructs.
Ontological Theories (OT): formalise the most generic aspects of enterprise-related concepts in terms of essential properties and axioms.
Generic Enterprise Models (GEMs): identify reference models (partial models) which capture concepts common to many enterprises. GEMs are used in enterprise modeling to increase modeling-process efficiency.
Generic Modules (GMs): identify generally applicable products to be employed in enterprise integration (e.g. tools, integrating infrastructures, and others).
=== Generic Enterprise Reference Architecture ===
Generic Enterprise Reference Architecture (GERA) defines the enterprise-related generic concepts recommended for use in enterprise integration projects. These concepts include the life cycle; enterprise entity types; enterprise modelling with business process modelling; integrated model representation in different model views; and modelling languages for different users of the enterprise architecture (business users, system designers, IT modelling specialists, among others).

==== Life-Cycle Concept ====
The life-cycle concept provides for the identification of the life-cycle phases of any enterprise entity, from the entity's conception to its final end. Figure 2: GERA Life-Cycle Concept shows the GERA life-cycle phases of enterprise entities. A total of nine life-cycle phases have been defined. The identification phase allows the identification of the enterprise business, or any part of it, in terms of its relation to both its internal and external environment. This includes the definition of general commitments for the integration or engineering activities to be carried out in relevant projects. The concept phase provides for the presentation of the management visions, missions, values, operational concepts (build/buy, etc.), policies, and others. The requirements phase allows the description of operational processes and the collection of all their functional, behavioural, informational and capability requirements. The design phase is the specification of the operational system with all its components satisfying the above requirements. Process and resource alternatives may be specified, which provide operational alternatives to be used during operation. The implementation phase describes the real operational system, which may deviate from the designed system due to enterprise preferences or the availability of components.
The build phase supports the system manifestation: the physical implementation of resources, testing and validation of the designed processes, and the subsequent release for operation. The operation phase employs the released operational processes and the provided resources to support the life-cycle phases of the enterprise products. The system change/re-engineering phase allows the operational processes to be modified or re-engineered according to newly identified needs or capabilities provided by new technologies. The end-of-life phase supports the recycling or disposal of the operational system at the end of its use in the enterprise operation; this phase has to provide concepts for the recycling and/or disposal of all or part of the system.

==== Enterprise Entity Type Concept ====
This concept identifies the entity types to be used in enterprise engineering and enterprise integration. Adopting a recursive view of integration, altogether five entity types with their associated life cycles can be identified. The recursiveness of the first four entity types can be demonstrated by identifying the role of the different entities, their products and the relations between them. Figure 3: GERA Enterprise Entity Concept shows these entity types and their relations. The Strategic Enterprise Management Entity (type 1) defines the necessity and the starting of any enterprise engineering effort. The Enterprise Engineering/Integration Entity (type 2) provides the means to carry out the mandate of the type 1 entity; it employs methodologies (type 5 entity) to define, design, implement and build the operation of the enterprise entity (type 3 entity). The Enterprise Entity (type 3) is the result of the operation of the type 2 entity; it uses methodologies (type 5 entity) and the operational system provided by the type 2 entity to define, design, implement and build the products (services) of the enterprise (type 4 entity).
The Product Entity (type 4) is the result of the operation of the type 3 entity; it represents all products (services) of the enterprise. The Methodology Entity (type 5) represents the methodology to be employed in any enterprise entity type. Figure 3 represents the chain of enterprise entity developments. The type 1 entity will always start the creation of any lower-level entity by identifying the goal, scope and objectives for that entity. Development and implementation of a new enterprise entity (or new business unit) will then be done by a type 2 entity, whereas a type 3 entity will be responsible for developing and manufacturing a new product (type 4 entity). With the possible exception of the type 1 entity, all enterprise entities will have an associated entity life cycle. However, it is always the operational phase of the entity life cycle in which the lower entity is defined, created, developed and built. The operation itself is supported by an associated methodology for enterprise engineering, enterprise operation, product development and production support. Figure 3 also shows the life cycle of the methodology (type 5 entity) and the process model developed during the early life-cycle phases of the methodology. However, there must be a clear distinction between the life cycle of the methodology, with its different phases, and its process model; the latter is used to support the operational phase of a particular enterprise entity. The operational relations of the different entity types are also shown in Figure 4: GERA Enterprise Entity Concept (Type 3), which demonstrates the contributions of the different entities to the type 3 entity's life-cycle phases. The manufacturing entity itself (the type 3 entity) produces the enterprise product in the course of its operation phase.

==== Enterprise Modelling concept ====
The Enterprise Modelling concept provides process models of enterprise operations.
Process-oriented modelling allows the operation of enterprise entities and entity types to be represented in all its aspects: functional, behaviour, information, resources and organisation. The resulting models can be used for decision support by evaluating operational alternatives, or for model-driven operation control and monitoring. To hide the complexity of the resulting model, it will be presented to the user in different sub-sets (views). This view concept is shown in Figure 5: GERA Generic Reference Architecture Concept. It is applicable during all phases of the life cycle. Note that the views are generated from the underlying integrated model, on which any model manipulation is carried out. That means any change made in one particular view will be reflected in all relevant aspects of the model. The GERA life cycle model has defined four different views: function, information, decision/organisation and resource/structure. Other views may be defined if needed and supported by the modelling tool. In addition, the life cycle model of GERA provides for two different categories of modelling: operation control and customer-service related. ==== Modelling Language concept ==== Modelling languages increase the efficiency of enterprise modelling. In addition, they allow a common representation of the enterprise operation. Modelling languages have to accommodate different users of enterprise models; for example, business users, system designers, and IT-modelling specialists. Modelling languages have to support the modelling of all entity types across all phases of their respective life cycles. In addition, modelling languages have to provide generic constructs as well as macro constructs (GEMs) built from generic ones. The latter will further enhance modelling productivity. Figure 5 shows the reference architecture for those enterprise entity life cycle phases which require generic constructs. The partial level shows the place of the GEMs in the reference architecture.
The particular level indicates the life cycle phases of the enterprise entity itself. === Generic Enterprise Engineering Methodologies === Generic enterprise engineering methodologies (GEEM) describe the process of enterprise integration and, according to the GERAM framework (Figure 1), will result in a model of the enterprise operation. The methodologies will guide the user in the engineering task of enterprise modelling and integration. Different methodologies may exist which guide the user through the different tasks required in the integration process. Enterprise engineering methodologies should be oriented toward the life-cycle concept identified in GERA and should support the different life cycle phases shown in Figure 2. The enterprise integration process itself is usually directed towards the operation of an enterprise entity type 3 (see above) and carried out as an enterprise engineering task by an enterprise entity type 2 (Figures 2 and 4). The integration task may start at any relevant engineering phase of the entity life cycle (indicated in Figure 6: Enterprise Engineering and the Life-Cycle Concept) and may employ any of those phases. Therefore, the processes relating to the different phases of enterprise engineering should be independent of each other, to support different sequences of engineering tasks. Enterprise engineering methodologies may be described in terms of process models with detailed instructions for each step of the integration process. This not only represents the methodology clearly for its understanding, but also provides for identification of the information to be used and produced, the resources needed, and the relevant responsibilities to be assigned for the integration process. Process representation of methodologies should employ the relevant modelling language discussed below.
=== Generic Enterprise-Modelling Language === Generic enterprise modelling languages (GEML) define generic constructs (building blocks) for enterprise modelling. Generic constructs which represent the different elements of the operation improve both modelling efficiency and model understanding. These constructs have to be adapted to the different needs of the people creating and using enterprise models. Therefore, different languages may exist which accommodate different users (e.g. business users, system designers, IT modelling specialists, others). Modelling the enterprise operation means describing its processes and the necessary information, resources and organisational aspects. Therefore, modelling languages have to provide constructs capable of capturing the semantics of enterprise operations. This is especially important if enterprise models are to support the enterprise operation itself. Model-based decision support and model-driven operation control and monitoring require modelling constructs which support the end users and which represent the operational processes according to the users' perception. === Generic Enterprise-Modelling Tool === Generic enterprise modelling tools (GEMT) define the generic implementation of the enterprise integration methodologies and modelling languages, and other support for the creation and use of enterprise models. Modelling tools should provide user guidance for both the modelling process itself and for the operational use of the models.
Therefore, enterprise modelling tool designs have to encompass not only the modelling methodology, but should also provide model enactment capability for simulation of operational processes. The latter should include analysis and evaluation capabilities for the simulation results. === Enterprise Models === Enterprise models (EMs) represent the enterprise operation, mostly in the form of business processes. However, in certain cases other representations may be suitable as well. Business processes will be represented using the generic modelling-language constructs defined above for the relevant engineering methodology. Enterprise operations are usually rather complex and therefore difficult to understand if all relevant aspects of the operation are represented in a common model. To reduce the model complexity for the user, different views should be provided which allow users to see only the aspects of concern. === Ontological Theories === Ontological theories (OT) formalise the most generic aspects of enterprise-related concepts in terms of essential properties and axioms. Ontological theories may be considered 'meta-models' since they consider facts and rules about the facts and rules of the enterprise and its models. === Generic Enterprise Models === Generic enterprise models (GEMs) identify reference models (partial models) which capture concepts common to many enterprises. GEMs will be used in enterprise modelling to increase modelling process efficiency. === Generic Modules === Generic modules (GMs) identify generally applicable products to be employed in enterprise integration (e.g. tools, integrating infrastructures, others). == See also == Computer Integrated Manufacturing CIMOSA Functional Software Architecture ISO 19439 == References == This article incorporates public domain material from the National Institute of Standards and Technology == Further reading == F.B. Vernadat (1996).
"Enterprise Modeling and Integration: Principles and Applications", Chapman & Hall, London. ISBN 0-412-60550-3 T.J. Williams and Hong Li, A Specification and Statement of Requirements for GERAM (The Generalised Enterprise Reference Architecture and Methodology) with all Requirements illustrated by Examples from the Purdue Enterprise Reference Architecture and Methodology PERA, REPORT NUMBER 159 Purdue Laboratory for Applied Industrial Control November 1995, Version 1.1 D. Shorter, Editor, "An evaluation of CIM modelling constructs - Evaluation report of constructs for views according to ENV 40 003", In: Computers in Industry - Vol. 24, Nrs 2-3 T.J. Williams, et al., "Architectures for integrating manufacturing activities and enterprises", In: Computers in Industry - Vol. 24, Nrs 2-3 ENV 40 003 Computer Integrated Manufacturing - Systems Architecture - Framework for Enterprise Modelling CEN/CENELEC, 1990 ENV 12 204 Advanced Manufacturing Technology - Systems Architecture - Constructs for Enterprise Modelling CEN TC 310/WG1, 1995 Charles J. Petrie, Jr (1992). Enterprise Integration Modelling; ICEIMT Conference Proceedings, The MIT Press. ISBN 0-262-66080-6 == External links == GERAM: Generalised Enterprise Reference Architecture and Methodology Version 1.6.3. by Peter Bernus, March 1999.
Corporate governance refers to the mechanisms, processes, practices, and relations by which corporations are controlled and operated by their boards of directors, managers, shareholders, and stakeholders. == Definitions == "Corporate governance" may be defined, described or delineated in diverse ways, depending on the writer's purpose. Writers focused on a disciplinary interest or context (such as accounting, finance, corporate law, or management) often adopt narrow definitions that appear purpose-specific. Writers concerned with regulatory policy in relation to corporate governance practices often use broader structural descriptions. A broad (meta) definition that encompasses many adopted definitions is "Corporate governance describes the processes, structures, and mechanisms that influence the control and direction of corporations." This meta definition accommodates both the narrow definitions used in specific contexts and the broader descriptions that are often presented as authoritative. The latter include the structural definition from the Cadbury Report, which identifies corporate governance as "the system by which companies are directed and controlled" (Cadbury 1992, p. 15); and the relational-structural view adopted by the Organisation for Economic Co-operation and Development (OECD): "Corporate governance involves a set of relationships between a company's management, board, shareholders and stakeholders. Corporate governance also provides the structure and systems through which the company is directed, and its objectives are set, and the means of attaining those objectives and monitoring performance are determined" (OECD 2023, p. 6).
Examples of narrower definitions in particular contexts include: "a system of law and sound approaches by which corporations are directed and controlled focusing on the internal and external corporate structures with the intention of monitoring the actions of management and directors and thereby, mitigating agency risks which may stem from the misdeeds of corporate officers." "the set of conditions that shapes the ex post bargaining over the quasi-rents generated by a firm." The firm itself is modelled as a governance structure acting through the mechanisms of contract. Here corporate governance may include its relation to corporate finance. == Principles == Contemporary discussions of corporate governance tend to refer to principles raised in three documents released since 1990: the Cadbury Report (UK, 1992), the Principles of Corporate Governance (OECD, 1999, 2004, 2015 and 2023), and the Sarbanes–Oxley Act (US, 2002). The Cadbury and Organisation for Economic Co-operation and Development (OECD) reports present general principles around which businesses are expected to operate to assure proper governance. The Sarbanes–Oxley Act, informally referred to as Sarbox or Sox, is an attempt by the federal government in the United States to legislate several of the principles recommended in the Cadbury and OECD reports. Rights and equitable treatment of shareholders: Organizations should respect the rights of shareholders and help shareholders to exercise those rights. They can help shareholders exercise their rights by openly and effectively communicating information and by encouraging shareholders to participate in general meetings. Interests of other stakeholders: Organizations should recognize that they have legal, contractual, social, and market-driven obligations to non-shareholder stakeholders, including employees, investors, creditors, suppliers, local communities, customers, and policymakers.
Role and responsibilities of the board: The board needs sufficient relevant skills and understanding to review and challenge management performance. It also needs adequate size and appropriate levels of independence and commitment. Integrity and ethical behavior: Integrity should be a fundamental requirement in choosing corporate officers and board members. Organizations should develop a code of conduct for their directors and executives that promotes ethical and responsible decision making. Disclosure and transparency: Organizations should clarify and make publicly known the roles and responsibilities of board and management to provide stakeholders with a level of accountability. They should also implement procedures to independently verify and safeguard the integrity of the company's financial reporting. Disclosure of material matters concerning the organization should be timely and balanced to ensure that all investors have access to clear, factual information. === Principal–agent conflict === Some concerns regarding governance follow from the potential for conflicts of interest that are a consequence of the non-alignment of preferences between shareholders and upper management (principal–agent problems) and among shareholders themselves (principal–principal problems), although other stakeholder relations are also affected and coordinated through corporate governance. In large firms where there is a separation of ownership and management, the principal–agent problem can arise between upper management (the "agent") and the shareholder(s) (the "principals"). The shareholders and upper management may have different interests.
The shareholders typically desire returns on their investments through profits and dividends, while upper management may also be influenced by other motives, such as management remuneration or wealth interests, working conditions and perquisites, or relationships with other parties within (e.g., management-worker relations) or outside the corporation, to the extent that these are not necessary for profits. Those pertaining to self-interest are usually emphasized in relation to principal–agent problems. The effectiveness of corporate governance practices from a shareholder perspective might be judged by how well those practices align and coordinate the interests of the upper management with those of the shareholders. However, corporations sometimes undertake initiatives, such as climate activism and voluntary emission reduction, that seem to contradict the idea that rational self-interest drives shareholders' governance goals. An example of a possible conflict between shareholders and upper management materializes through stock repurchases (treasury stock). Executives may have an incentive to divert cash surpluses to buying treasury stock to support or increase the share price. However, that reduces the financial resources available to maintain or enhance profitable operations. As a result, executives can sacrifice long-term profits for short-term personal gain. Shareholders may have different perspectives in this regard, depending on their own time preferences, but it can also be viewed as a conflict with broader corporate interests (including preferences of other stakeholders and the long-term health of the corporation). === Principal–principal conflict (the multiple principal problem) === The principal–agent problem can be intensified when upper management acts on behalf of multiple shareholders—which is often the case in large firms (see Multiple principal problem).
Specifically, when upper management acts on behalf of multiple shareholders, the multiple shareholders face a collective action problem in corporate governance, as individual shareholders may lobby upper management or otherwise have incentives to act in their individual interests rather than in the collective interest of all shareholders. As a result, there may be free-riding in the steering and monitoring of upper management, or conversely, high costs may arise from duplicate steering and monitoring of upper management. Conflict may break out between principals, and all of this leads to increased autonomy for upper management. Ways of mitigating or preventing these conflicts of interest include the processes, customs, policies, laws, and institutions which affect the way a company is controlled—and this is the challenge of corporate governance. To solve the problem of governing upper management under multiple shareholders, corporate governance scholars have found that the straightforward solution of appointing one or more shareholders for governance is likely to lead to problems because of the information asymmetry it creates. Shareholders' meetings are necessary to arrange governance under multiple shareholders, and it has been proposed that this solves the problem of multiple principals due to the median voter theorem: shareholders' meetings cause power to be devolved to an actor that approximately holds the median interest of all shareholders, thus causing governance to best represent the aggregated interest of all shareholders. === Other themes === An important theme of governance is the nature and extent of corporate accountability. A related discussion at the macro level focuses on the effect of a corporate governance system on economic efficiency, with a strong emphasis on shareholders' welfare. This has resulted in a literature focused on economic analysis.
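The median voter argument invoked above for shareholders' meetings can be illustrated with a small sketch (the numbers and variable names are hypothetical, not drawn from any source cited here): with single-peaked preferences over a one-dimensional issue, such as each shareholder's preferred dividend payout ratio, the proposal at the median shareholder's ideal point defeats any rival proposal in a pairwise majority vote.

```python
# Illustrative sketch of the median voter theorem, assuming single-peaked
# preferences on one dimension. All figures are hypothetical.
from statistics import median

def pairwise_winner(ideals, a, b):
    """Return whichever of a or b a majority of voters prefers.

    Each voter prefers the option closer to their own ideal point.
    """
    votes_a = sum(1 for p in ideals if abs(p - a) < abs(p - b))
    votes_b = len(ideals) - votes_a  # ties counted toward b here
    return a if votes_a > votes_b else b

# Hypothetical preferred payout ratios (%) for seven shareholders
ideals = [10, 20, 25, 30, 45, 60, 80]
m = median(ideals)  # the median ideal point: 30

# The median position wins every pairwise majority vote against a rival
for rival in [10, 25, 45, 80]:
    assert pairwise_winner(ideals, m, rival) == m
print(f"median position {m} wins every pairwise vote")
```

This is the sense in which a shareholders' meeting devolves power toward the median interest: no alternative proposal can assemble a majority against it.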
A comparative assessment of corporate governance principles and practices across countries was published by Aguilera and Jackson in 2011. == Models == Models of corporate governance differ according to the variety of capitalism in which they are embedded. The Anglo-American "model" tends to emphasize the interests of shareholders. The coordinated or multistakeholder model associated with Continental Europe and Japan also recognizes the interests of workers, managers, suppliers, customers, and the community. A related distinction is between market-oriented and network-oriented models of corporate governance. === Continental Europe (two-tier board system) === Some continental European countries, including Germany, Austria, and the Netherlands, require a two-tiered board of directors as a means of improving corporate governance. In the two-tiered board, the executive board, made up of company executives, generally runs day-to-day operations, while the supervisory board, made up entirely of non-executive directors who represent shareholders and employees, hires and fires the members of the executive board, determines their compensation, and reviews major business decisions. Germany, in particular, is known for its practice of co-determination, founded on the German Codetermination Act of 1976, in which workers are granted seats on the board as stakeholders, separate from the seats accruing to shareholder equity. === United States, United Kingdom === The so-called "Anglo-American model" of corporate governance emphasizes the interests of shareholders. It relies on a single-tiered board of directors that is normally dominated by non-executive directors elected by shareholders. Because of this, it is also known as "the unitary system". Within this system, many boards include some executives from the company (who are ex officio members of the board).
Non-executive directors are expected to outnumber executive directors and hold key posts, including the audit and compensation committees. In the United Kingdom, the CEO generally does not also serve as chairman of the board, whereas in the US having the dual role has been the norm, despite major misgivings regarding the effect on corporate governance. The number of US firms combining both roles is declining, however. In the United States, corporations are directly governed by state laws, while the exchange (offering and trading) of securities in corporations (including shares) is governed by federal legislation. Many US states have adopted the Model Business Corporation Act, but the dominant state law for publicly traded corporations is the Delaware General Corporation Law, as Delaware continues to be the place of incorporation for the majority of publicly traded corporations. Individual rules for corporations are based upon the corporate charter and, less authoritatively, the corporate bylaws. Shareholders cannot initiate changes in the corporate charter, although they can initiate changes to the corporate bylaws. It is sometimes colloquially stated that in the US and the UK "the shareholders own the company." This is, however, a misconception, as argued by Eccles and Youmans (2015) and Kay (2015). The American system has long been based on a belief in the potential of shareholder democracy to efficiently allocate capital. === Japan === The Japanese model of corporate governance has traditionally held a broad view that firms should account for the interests of a range of stakeholders. For instance, managers do not have a fiduciary responsibility to shareholders. This framework is rooted in the belief that a balance among stakeholder interests can lead to a superior allocation of resources for society.
The Japanese model includes several key principles: Securing the rights and equal treatment of shareholders Appropriate cooperation with stakeholders (other than shareholders) Ensuring appropriate information disclosure and transparency Responsibility of the board Dialogue with shareholders === Founder centrism === An article published by the Australian Institute of Company Directors called "Do Boards Need to become more Entrepreneurial?" considered the need for founder-centric behaviour at board level to appropriately manage disruption. == Regulation == Corporations are created as legal persons by the laws and regulations of a particular jurisdiction. These may vary in many respects between countries, but a corporation's legal person status is fundamental to all jurisdictions and is conferred by statute. This allows the entity to hold property in its own right without reference to any real person. It also results in the perpetual existence that characterizes the modern corporation. The statutory granting of corporate existence may arise from general-purpose legislation (which is the general case) or from a statute creating a specific corporation. Today, the formation of business corporations in most jurisdictions requires government legislation that facilitates incorporation. This legislation is often in the form of a Companies Act or Corporations Act, or similar. Country-specific regulatory devices are summarized below. It is generally perceived that regulatory attention on the corporate governance practices of publicly listed corporations, particularly in relation to transparency and accountability, increased in many jurisdictions following the high-profile corporate scandals of 2001–2002, many of which involved accounting fraud, and then again after the 2008 financial crisis. For example, in the U.S., these included scandals surrounding Enron and MCI Inc. (formerly WorldCom). Their demise led to the enactment of the Sarbanes–Oxley Act in 2002, a U.S.
federal law intended to improve corporate governance in the United States. Comparable failures in Australia (HIH, One.Tel) are linked to the eventual passage of the CLERP 9 reforms there (2004), which similarly aimed to improve corporate governance. Similar corporate failures in other countries stimulated increased regulatory interest (e.g., Parmalat in Italy). In addition to legislation that facilitates incorporation, many jurisdictions have major regulatory devices that impact on corporate governance. These include statutory laws concerned with the functioning of stock or securities markets (also see Security (finance)), consumer and competition (antitrust) laws, labour or employment laws, and environmental protection laws, which may also entail disclosure requirements. In addition to the statutory laws of the relevant jurisdiction, corporations are subject to common law in some countries. In most jurisdictions, corporations also have some form of corporate constitution that provides individual rules that govern the corporation and authorize or constrain its decision-makers. This constitution is identified by a variety of terms; in English-speaking jurisdictions, it is sometimes known as the corporate charter or articles of association (which may also be accompanied by a memorandum of association). == Country-specific regulation == === Australia === ==== Primary legislation ==== Incorporation in Australia originated under state legislation but has been under federal legislation since 2001. Also see Australian corporate law. Other significant legislation includes: === Canada === ==== Primary legislation ==== Incorporation in Canada can be done under either federal or provincial legislation. See Canadian corporate law. === The Netherlands === ==== Primary legislation ==== Dutch corporate law is embedded in the ondernemingsrecht and, specifically for limited liability companies, in the vennootschapsrecht.
=== Corporate Governance Code 2016–2022 === In addition, the Netherlands adopted a Corporate Governance Code in 2016, which has been updated twice since. In the latest version (2022), the Executive Board of the company is held responsible for the continuity of the company and its sustainable long-term value creation. The Executive Board considers the impact of corporate actions on people and planet and takes the effects on corporate stakeholders into account. In the Dutch two-tier system, the Supervisory Board monitors and supervises the Executive Board in this respect. === Poland === ==== Primary legislation ==== Polish corporate law is regulated in the Code of Commercial Companies. The code regulates most aspects of corporate governance, including rules of incorporation and liquidation, and it defines the rights, obligations and rules of operation of corporate bodies (Management Board, Supervisory Board, Shareholders' Meeting). === UK === ==== Primary legislation ==== The UK has a single jurisdiction for incorporation. Also see United Kingdom company law. Other significant legislation includes: ==== Bribery Act 2010 ==== The UK passed the Bribery Act in 2010. This law made it illegal to bribe either government officials or private citizens or to make facilitating payments (i.e., payments to a government official to perform their routine duties more quickly). It also required corporations to establish controls to prevent bribery. === US === ==== Primary legislation ==== Incorporation in the US is under state-level legislation, but there are important federal acts; in particular, see the Securities Act of 1933, the Securities Exchange Act of 1934, and the Uniform Securities Act. ===== Sarbanes–Oxley Act ===== The Sarbanes–Oxley Act of 2002 (SOX) was enacted in the wake of a series of high-profile corporate scandals, which cost investors billions of dollars. It established a series of requirements that affect corporate governance in the US and influenced similar laws in many other countries.
SOX contained many other elements, but provided for several changes that are important to corporate governance practices: The Public Company Accounting Oversight Board (PCAOB) was established to regulate the auditing profession, which had been self-regulated prior to the law. Auditors are responsible for reviewing the financial statements of corporations and issuing an opinion as to their reliability. The chief executive officer (CEO) and chief financial officer (CFO) must attest to the financial statements. Prior to the law, CEOs had claimed in court they hadn't reviewed the information as part of their defense. Board audit committees must have members that are independent and must disclose whether or not at least one is a financial expert, or the reasons why no such expert is on the audit committee. External audit firms cannot provide certain types of consulting services and must rotate their lead partner every 5 years. Further, an audit firm cannot audit a company if those in specified senior management roles worked for the auditor in the past year. Prior to the law, there was a real or perceived conflict of interest between providing an independent opinion on the accuracy and reliability of financial statements and providing lucrative consulting services to the same client. ==== Foreign Corrupt Practices Act ==== The U.S. passed the Foreign Corrupt Practices Act (FCPA) in 1977, with subsequent modifications. This law made it illegal to bribe government officials and required corporations to maintain adequate accounting controls. It is enforced by the U.S. Department of Justice and the Securities and Exchange Commission (SEC). Substantial civil and criminal penalties have been levied on corporations and executives convicted of bribery.
== Codes and guidelines == Corporate governance principles and codes have been developed in different countries and issued by stock exchanges, corporations, institutional investors, or associations (institutes) of directors and managers, with the support of governments and international organizations. As a rule, compliance with these governance recommendations is not mandated by law, although codes linked to stock exchange listing requirements may have a coercive effect. === Organisation for Economic Co-operation and Development principles === One of the most influential sets of guidelines on corporate governance is the G20/OECD Principles of Corporate Governance, first published as the OECD Principles in 1999, revised in 2004, again in 2015 when endorsed by the G20, and again in 2023. The Principles are often referenced by countries developing local codes or guidelines. Building on the work of the OECD, other international organizations, private sector associations and more than 20 national corporate governance codes formed the basis for the United Nations Intergovernmental Working Group of Experts on International Standards of Accounting and Reporting (ISAR) to produce its Guidance on Good Practices in Corporate Governance Disclosure. This internationally agreed benchmark consists of more than fifty distinct disclosure items across five broad categories: Auditing Board and management structure and process Corporate responsibility and compliance in organization Financial transparency and information disclosure Ownership structure and exercise of control rights The OECD Guidelines on Corporate Governance of State-Owned Enterprises complement the G20/OECD Principles of Corporate Governance, providing guidance tailored to the corporate governance challenges of state-owned enterprises.
For example, the NYSE Listed Company Manual requires, among many other elements: Independent directors: "Listed companies must have a majority of independent directors ... Effective boards of directors exercise independent judgment in carrying out their responsibilities. Requiring a majority of independent directors will increase the quality of board oversight and lessen the possibility of damaging conflicts of interest." (Section 303A.01) An independent director is not part of management and has no "material financial relationship" with the company. Board meetings that exclude management: "To empower non-management directors to serve as a more effective check on management, the non-management directors of each listed company must meet at regularly scheduled executive sessions without management." (Section 303A.03) Boards organize their members into committees with specific responsibilities per defined charters. "Listed companies must have a nominating/corporate governance committee composed entirely of independent directors." This committee is responsible for nominating new members for the board of directors. Compensation and Audit Committees are also specified, with the latter subject to a variety of listing standards as well as outside regulations. === Other guidelines === The investor-led organisation International Corporate Governance Network (ICGN) was set up by individuals centred around the ten largest pension funds in the world in 1995. The aim is to promote global corporate governance standards. The network is led by investors that manage US$77 trillion, and members are located in fifty different countries. ICGN has developed a suite of global guidelines ranging from shareholder rights to business ethics. The World Business Council for Sustainable Development (WBCSD) has done work on corporate governance, particularly on accounting and reporting. 
In 2009, the International Finance Corporation and the UN Global Compact released a report, "Corporate Governance: the Foundation for Corporate Citizenship and Sustainable Business", linking the environmental, social and governance responsibilities of a company to its financial performance and long-term sustainability. Most codes are largely voluntary. An issue raised in the U.S. since the 2005 Disney decision is the degree to which companies manage their governance responsibilities; in other words, do they merely try to meet the legal threshold, or do they create governance guidelines that rise to the level of best practice? For example, the guidelines issued by associations of directors, corporate managers and individual companies tend to be wholly voluntary, but such documents may have a wider effect by prompting other companies to adopt similar practices. In 2021, the first international standard, ISO 37000, was published as guidance for good governance. The guidance places emphasis on purpose, which is at the heart of all organizations: a meaningful reason to exist. Values inform both the purpose and the way the purpose is achieved. == History == === United States === Robert E. Wright argued in Corporation Nation (2014) that the governance of early U.S. corporations, of which over 20,000 existed by the Civil War of 1861–1865, was superior to that of corporations in the late 19th and early 20th centuries because early corporations governed themselves like "republics", replete with numerous "checks and balances" against fraud and against usurpation of power by managers or by large shareholders. (The term "robber baron" became particularly associated with US corporate figures in the Gilded Age—the late 19th century.) In the immediate aftermath of the Wall Street crash of 1929, legal scholars such as Adolf Augustus Berle, Edwin Dodd, and Gardiner C. Means pondered the changing role of the modern corporation in society.
From the Chicago school of economics, Ronald Coase introduced the notion of transaction costs into the understanding of why firms are founded and how they continue to behave. US economic expansion through the emergence of multinational corporations after World War II (1939–1945) saw the establishment of the managerial class. Several Harvard Business School management professors studied and wrote about the new class: Myles Mace (entrepreneurship), Alfred D. Chandler, Jr. (business history), Jay Lorsch (organizational behavior) and Elizabeth MacIver (organizational behavior). According to Lorsch and MacIver "many large corporations have dominant control over business affairs without sufficient accountability or monitoring by their board of directors". In the 1980s, Eugene Fama and Michael Jensen established the principal–agent problem as a way of understanding corporate governance: the firm is seen as a series of contracts. In the period from 1977 to 1997, corporate directors' duties in the U.S. expanded beyond their traditional legal responsibility of duty of loyalty to the corporation and to its shareholders. In the first half of the 1990s, the issue of corporate governance in the U.S. received considerable press attention due to a spate of CEO dismissals (for example, at IBM, Kodak, and Honeywell) by their boards. The California Public Employees' Retirement System (CalPERS) led a wave of institutional shareholder activism (something only very rarely seen before), as a way of ensuring that corporate value would not be destroyed by the traditionally cozy relationships between the CEO and the board of directors (for example, by the unrestrained issuance of stock options, not infrequently back-dated).
In the early 2000s, the massive bankruptcies (and criminal malfeasance) of Enron and WorldCom, as well as lesser corporate scandals (such as those involving Adelphia Communications, AOL, Arthur Andersen, Global Crossing, and Tyco) led to increased political interest in corporate governance. This was reflected in the passage of the Sarbanes–Oxley Act of 2002. Other triggers for continued interest in the corporate governance of organizations included the 2008 financial crisis and the level of CEO pay. Some corporations have tried to burnish their ethical image by creating whistle-blower protections, such as anonymity. This varies significantly by jurisdiction, company and sector. === East Asia === The 1997 Asian financial crisis severely affected the economies of Thailand, Indonesia, South Korea, Malaysia, and the Philippines through the exit of foreign capital after property assets collapsed. The lack of corporate governance mechanisms in these countries highlighted the weaknesses of the institutions in their economies. In the 1990s, China established the Shanghai and Shenzhen Stock Exchanges and the China Securities Regulatory Commission (CSRC) to improve corporate governance. Despite these efforts, state ownership concentration and governance issues such as board independence and insider trading persisted. === Saudi Arabia === In November 2006 the Capital Market Authority (Saudi Arabia) (CMA) issued a corporate governance code in the Arabic language. The Kingdom of Saudi Arabia has made considerable progress with respect to the implementation of viable and culturally appropriate governance mechanisms (Al-Hussain & Johnson, 2009). Al-Hussain and Johnson (2009) found a strong relationship between the efficiency of corporate governance structure and Saudi bank performance when using return on assets as a performance measure, with one exception: government and local ownership groups were not significant.
However, using rate of return as a performance measure revealed a weak positive relationship between the efficiency of corporate governance structure and bank performance. == Stakeholders == Key parties involved in corporate governance include stakeholders such as the board of directors, management and shareholders. External stakeholders such as creditors, auditors, customers, suppliers, government agencies, and the community at large also exert influence. The agency view of the corporation posits that the shareholder forgoes decision rights (control) and entrusts the manager to act in the shareholders' best (joint) interests. Partly as a result of this separation between investors and managers, corporate governance mechanisms include a system of controls intended to help align managers' incentives with those of shareholders. Agency concerns (risk) are necessarily lower for a controlling shareholder. In private for-profit corporations, shareholders elect the board of directors to represent their interests. In the case of nonprofits, stakeholders may have some role in recommending or selecting board members, but typically the board itself decides who will serve on the board as a 'self-perpetuating' board. The degree of leadership that the board has over the organization varies; in practice at large organizations, the executive management, principally the CEO, drives major initiatives with the oversight and approval of the board. === Responsibilities of the board of directors === Former Chairman of the Board of General Motors John G. Smale wrote in 1995: "The board is responsible for the successful perpetuation of the corporation. That responsibility cannot be relegated to management." A board of directors is expected to play a key role in corporate governance.
The board has responsibility for: CEO selection and succession; providing feedback to management on the organization's strategy; compensating senior executives; monitoring financial health, performance and risk; and ensuring accountability of the organization to its investors and authorities. Boards typically have several committees (e.g., Compensation, Nominating and Audit) to perform their work. The OECD Principles of Corporate Governance (2023) describe the responsibilities of the board; some of these are summarized below: Board members should act on a fully informed basis, in good faith, with due diligence and care, and in the best interest of the company and the shareholders, taking into account the interests of stakeholders. Where board decisions may affect different shareholder groups differently, the board should treat all shareholders fairly. The board should apply high ethical standards. The board should fulfil certain key functions, including: Reviewing and guiding corporate strategy, major plans of action, annual budgets and business plans; setting performance objectives; monitoring implementation and corporate performance; and overseeing major capital expenditures, acquisitions and divestitures. Reviewing and assessing risk management policies and procedures. Monitoring the effectiveness of the company's governance practices and making changes as needed. Selecting, overseeing and monitoring the performance of key executives, and, when necessary, replacing them and overseeing succession planning. Aligning key executive and board remuneration with the longer term interests of the company and its shareholders. Ensuring a formal and transparent board nomination and election process. Monitoring and managing potential conflicts of interest of management, board members and shareholders, including misuse of corporate assets and abuse in related party transactions.
Ensuring the integrity of the corporation's accounting and reporting systems for disclosure, including the independent external audit, and that appropriate control systems are in place, in compliance with the law and relevant standards. Overseeing the process of disclosure and communications. The board should be able to exercise objective independent judgement on corporate affairs. In order to fulfil their responsibilities, board members should have access to accurate, relevant and timely information. When employee representation on the board is mandated, mechanisms should be developed to facilitate access to information and training for employee representatives, so that this representation is exercised effectively and best contributes to the enhancement of board skills, information and independence. === Stakeholder interests === All parties to corporate governance, not just shareholders, have an interest, whether direct or indirect, in the financial performance of the corporation. Directors, workers and management receive salaries, benefits and reputation, while investors expect to receive financial returns. For lenders, the return is specified interest payments, while returns to equity investors arise from dividend distributions or capital gains on their stock. Customers are concerned with the certainty of the provision of goods and services of an appropriate quality; suppliers are concerned with compensation for their goods or services, and possible continued trading relationships. These parties provide value to the corporation in the form of financial, physical, human and other forms of capital. Many parties may also be concerned with corporate social performance. A key factor in a party's decision to participate in or engage with a corporation is their confidence that the corporation will deliver the party's expected outcomes.
When categories of parties (stakeholders) do not have sufficient confidence that a corporation is being controlled and directed in a manner consistent with their desired outcomes, they are less likely to engage with the corporation. When this becomes an endemic system feature, the loss of confidence and participation in markets may affect many other stakeholders and increase the likelihood of political action. There is substantial interest in how external systems and institutions, including markets, influence corporate governance. === "Absentee landlords" vs. capital stewards === In 2016 the director of the World Pensions Council (WPC) said that "institutional asset owners now seem more eager to take to task [the] negligent CEOs" of the companies whose shares they own. This development is part of a broader trend towards more fully exercised asset ownership—notably on the part of the boards of directors ('trustees') of large UK, Dutch, Scandinavian and Canadian pension investors: No longer 'absentee landlords', [pension fund] trustees have started to exercise more forcefully their governance prerogatives across the boardrooms of Britain, Benelux and America: coming together through the establishment of engaged pressure groups […] to 'shift the [whole economic] system towards sustainable investment'. This could eventually put more pressure on the CEOs of publicly listed companies, as "more than ever before, many [North American,] UK and European Union pension trustees speak enthusiastically about flexing their fiduciary muscles for the UN's Sustainable Development Goals", and other ESG-centric investment practices. ==== United Kingdom ==== In Britain, "The widespread social disenchantment that followed the [2008–2012] great recession had an impact" on all stakeholders, including pension fund board members and investment managers.
Many of the UK's largest pension funds are thus already active stewards of their assets, engaging with corporate boards and speaking up when they think it is necessary. === Control and ownership structures === Control and ownership structure refers to the types and composition of shareholders in a corporation. In some countries, such as most of Continental Europe, ownership is not necessarily equivalent to control, due to the existence of, for example, dual-class shares, ownership pyramids, voting coalitions, proxy votes and clauses in the articles of association that confer additional voting rights to long-term shareholders. Ownership is typically defined as the ownership of cash flow rights, whereas control refers to ownership of control or voting rights. Researchers often "measure" control and ownership structures by using some observable measures of control and ownership concentration or the extent of inside control and ownership. Some features or types of control and ownership structure involving corporate groups include pyramids, cross-shareholdings, rings, and webs. German "concerns" (Konzern) are legally recognized corporate groups with complex structures. Japanese keiretsu (系列) and South Korean chaebol (which tend to be family-controlled) are corporate groups which consist of complex interlocking business relationships and shareholdings. Cross-shareholding is an essential feature of keiretsu and chaebol groups. Corporate engagement with shareholders and other stakeholders can differ substantially across different control and ownership structures. ==== Difference in firm size ==== In smaller companies, founder-owners often play a pivotal role in shaping corporate value systems that influence companies for years to come. In larger companies that separate ownership and control, managers and boards come to play an influential role.
This is in part due to the distinction between employees and shareholders in large firms, where labour forms part of the corporate organization to which it belongs whereas shareholders, creditors and investors act outside of the organization of interest. ==== Family control ==== Family interests dominate ownership and control structures of some corporations, and it has been suggested that the oversight of family-controlled corporations is superior to that of corporations "controlled" by institutional investors (or with such diverse share ownership that they are controlled by management). A 2003 Business Week study said: "Forget the celebrity CEO. Look beyond Six Sigma and the latest technology fad. One of the biggest strategic advantages a company can have, it turns out, is blood lines." A 2007 study by Credit Suisse found that European companies in which "the founding family or manager retains a stake of more than 10 per cent of the company's capital enjoyed a superior performance over their respective sectoral peers", reported the Financial Times. Since 1996, this superior performance amounted to 8% per year. ==== Diffuse shareholders ==== The significance of institutional investors varies substantially across countries. In developed Anglo-American countries (Australia, Canada, New Zealand, U.K., U.S.), institutional investors dominate the market for stocks in larger corporations. While the majority of the shares in the Japanese market are held by financial companies and industrial corporations, these are not institutional investors if their holdings are largely within the same corporate group. The largest funds of invested money and the largest investment management firms are designed to maximize the benefits of diversified investment by investing in a very large number of different corporations with sufficient liquidity. The idea is that this strategy will largely eliminate individual firm financial or other risk.
A consequence of this approach is that these investors have relatively little interest in the governance of a particular corporation. It is often assumed that institutional investors, if they decide that pressing for changes would likely be costly because of "golden handshakes" or the effort required, will simply sell out of their investment. === Proxy access === Particularly in the United States, proxy access allows shareholders to nominate candidates who appear on the proxy statement, as opposed to restricting that power to the nominating committee. The SEC had attempted a proxy access rule for decades, and the United States Dodd–Frank Wall Street Reform and Consumer Protection Act specifically allowed the SEC to rule on this issue; however, the rule was struck down in court. Beginning in 2015, proxy access rules began to spread, driven by initiatives from major institutional investors, and as of 2018, 71% of S&P 500 companies had a proxy access rule. == Mechanisms and controls == Corporate governance mechanisms and controls are designed to reduce the inefficiencies that arise from moral hazard and adverse selection. There are both internal monitoring systems and external monitoring systems. Internal monitoring can be done, for example, by one (or a few) large shareholder(s) in the case of privately held companies or a firm belonging to a business group. Furthermore, the various board mechanisms provide for internal monitoring. External monitoring of managers' behavior occurs when an independent third party (e.g. the external auditor) attests to the accuracy of information provided by management to investors. Stock analysts and debt holders may also conduct such external monitoring. An ideal monitoring and control system should regulate both motivation and ability, while providing incentive alignment toward corporate goals and objectives.
Care should be taken that incentives are not so strong that some individuals are tempted to cross lines of ethical behavior, for example by manipulating revenue and profit figures to drive the share price of the company up. === Internal corporate governance controls === Internal corporate governance controls monitor activities and then take corrective actions to accomplish organisational goals. Examples include: Monitoring by the board of directors: The board of directors, with its legal authority to hire, fire and compensate top management, safeguards invested capital. Regular board meetings allow potential problems to be identified, discussed and avoided. Whilst non-executive directors are thought to be more independent, they may not always result in more effective corporate governance and may not increase performance. Different board structures are optimal for different firms. Moreover, the ability of the board to monitor the firm's executives is a function of its access to information. Executive directors possess superior knowledge of the decision-making process and can therefore evaluate top management on the basis of the quality of its decisions that lead to financial performance outcomes, ex ante. It could be argued, therefore, that executive directors look beyond the financial criteria. Internal control procedures and internal auditors: Internal control procedures are policies implemented by an entity's board of directors, audit committee, management, and other personnel to provide reasonable assurance of the entity achieving its objectives related to reliable financial reporting, operating efficiency, and compliance with laws and regulations. Internal auditors are personnel within an organization who test the design and implementation of the entity's internal control procedures and the reliability of its financial reporting. Balance of power: The simplest balance of power, and a very common one, is to require that the president be a different person from the treasurer.
This application of separation of power is further developed in companies where separate divisions check and balance each other's actions. One group may propose company-wide administrative changes, another group reviews and can veto the changes, and a third group checks that the interests of people (customers, shareholders, employees) outside the three groups are being met. Remuneration: Performance-based remuneration is designed to relate some proportion of salary to individual performance. It may be in the form of cash or non-cash payments such as shares and share options, superannuation or other benefits. Such incentive schemes, however, are reactive in the sense that they provide no mechanism for preventing mistakes or opportunistic behavior, and can elicit myopic behavior. Monitoring by large shareholders and/or monitoring by banks and other large creditors: Given their large investment in the firm, these stakeholders have the incentives, combined with the right degree of control and power, to monitor the management. In publicly traded U.S. corporations, boards of directors are largely chosen by the president/CEO, and the president/CEO often takes the chair of the board position for him/herself (which makes it much more difficult for the institutional owners to "fire" him/her). The practice of the CEO also being the chair of the board is fairly common in large American corporations. While this practice is common in the U.S., it is relatively rare elsewhere. In the U.K., successive codes of best practice have recommended against duality. === External corporate governance controls === External corporate governance encompasses the controls that external stakeholders exercise over the organization.
Examples include: competition; debt covenants; demand for and assessment of performance information (especially financial statements); government regulations; the managerial labour market; media pressure; takeovers; proxy firms; and mergers and acquisitions. === Financial reporting and the independent auditor === The board of directors has primary responsibility for the corporation's internal and external financial reporting functions. The chief executive officer and chief financial officer are crucial participants, and boards usually have a high degree of reliance on them for the integrity and supply of accounting information. They oversee the internal accounting systems, and are dependent on the corporation's accountants and internal auditors. Current accounting rules under International Accounting Standards and U.S. GAAP allow managers some choice in determining the methods of measurement and criteria for recognition of various financial reporting elements. The potential exercise of this choice to improve apparent performance increases the information risk for users. Financial reporting fraud, including non-disclosure and deliberate falsification of values, also contributes to users' information risk. To reduce this risk and to enhance the perceived integrity of financial reports, corporation financial reports must be audited by an independent external auditor who issues a report that accompanies the financial statements. One area of concern is whether the auditing firm acts as both the independent auditor and management consultant to the firm they are auditing. This may result in a conflict of interest which places the integrity of financial reports in doubt due to client pressure to appease management. The power of the corporate client to initiate and terminate management consulting services and, more fundamentally, to select and dismiss accounting firms contradicts the concept of an independent auditor.
Changes enacted in the United States in the form of the Sarbanes–Oxley Act (following numerous corporate scandals, culminating with the Enron scandal) prohibit accounting firms from providing both auditing and management consulting services. Similar provisions are in place under clause 49 of the Standard Listing Agreement in India. == Systems perspective == A basic comprehension of corporate positioning on the market can be found by looking at which market area or areas a corporation acts in, and which stages of the respective value chain for that market area or areas it encompasses. A corporation may from time to time decide to alter or change its market positioning – through M&A activity, for example – however it may lose some or all of its market efficiency in the process, because commercial operations depend to a large extent on the corporation's ability to account for a specific positioning on the market. === Systemic problems === Demand for information: In order to influence the directors, the shareholders must combine with others to form a voting group which can pose a real threat of carrying resolutions or appointing directors at a general meeting. Monitoring costs: A barrier to shareholders using good information is the cost of processing it, especially to a small shareholder. The traditional answer to this problem is the efficient-market hypothesis (EMH), which asserts that financial markets are efficient and suggests that the small shareholder will free ride on the judgments of larger professional investors. Supply of accounting information: Financial accounts form a crucial link in enabling providers of finance to monitor directors. Imperfections in the financial reporting process will cause imperfections in the effectiveness of corporate governance. This should, ideally, be corrected by the working of the external auditing process.
== Issues == === Sustainability === Well-designed corporate governance policies also support the sustainability and resilience of corporations and in turn, may contribute to the sustainability and resilience of the broader economy. Investors have increasingly expanded their focus on companies' financial performance to include the financial risks and opportunities posed by broader economic, environmental and societal challenges, and companies' resilience to and management of those risks. In some jurisdictions, policy makers also focus on how companies' operations may contribute to addressing such challenges. A sound framework for corporate governance with respect to sustainability matters can help companies recognise and respond to the interests of shareholders and different stakeholders, as well as contribute to their own long-term success. Such a framework should include the disclosure of material sustainability-related information that is reliable, consistent and comparable, including related to climate change. In some cases, jurisdictions may interpret concepts of sustainability-related disclosure and materiality in terms of applicable standards articulating information that a reasonable shareholder needs in order to make investment or voting decisions. === Executive pay === Increasing attention and regulation (as under the Swiss referendum "against corporate rip-offs" of 2013) have been brought to executive pay levels since the 2008 financial crisis. Research on the relationship between firm performance and executive compensation does not identify consistent and significant relationships between executives' remuneration and firm performance. Not all firms experience the same levels of agency conflict, and external and internal monitoring devices may be more effective for some than for others.
Some researchers have found that the largest CEO performance incentives came from ownership of the firm's shares, while other researchers found that the relationship between share ownership and firm performance was dependent on the level of ownership. The results suggest that increases in ownership above 20% cause management to become more entrenched, and less interested in the welfare of their shareholders. Some argue that firm performance is positively associated with share option plans and that these plans direct managers' energies and extend their decision horizons toward the long-term, rather than the short-term, performance of the company. However, that point of view came under substantial criticism in the wake of various securities scandals, including mutual fund timing episodes and, in particular, the backdating of option grants as documented by University of Iowa academic Erik Lie and reported by James Bandler and Charles Forelle of the Wall Street Journal. Even before the negative influence on public opinion caused by the 2006 backdating scandal, use of options faced various criticisms. A particularly forceful and long-running argument concerned the interaction of executive options with corporate stock repurchase programs. Numerous authorities (including U.S. Federal Reserve Board economist Weisbenner) determined options may be employed in concert with stock buybacks in a manner contrary to shareholder interests. These authors argued that, in part, corporate stock buybacks for U.S. Standard & Poor's 500 companies surged to a $500 billion annual rate in late 2006 because of the effect of options. A combination of accounting changes and governance issues led options to become a less popular means of remuneration as 2006 progressed, and various alternative implementations of buybacks surfaced to challenge the dominance of "open market" cash buybacks as the preferred means of implementing a share repurchase plan.
=== Separation of Chief Executive Officer and Chairman of the Board roles === Shareholders elect a board of directors, who in turn hire a chief executive officer (CEO) to lead management. The primary responsibility of the board relates to the selection and retention of the CEO. However, in many U.S. corporations the CEO and chairman of the board roles are held by the same person. This creates an inherent conflict of interest between management and the board. Critics of combined roles argue that the two roles should be separated to avoid the conflict of interest and to more easily enable a poorly performing CEO to be replaced. Warren Buffett wrote in 2014: "In my service on the boards of nineteen public companies, however, I've seen how hard it is to replace a mediocre CEO if that person is also Chairman. (The deed usually gets done, but almost always very late.)" Advocates of combined roles argue that empirical studies do not indicate that separation of the roles improves stock market performance and that it should be up to shareholders to determine what corporate governance model is appropriate for the firm. In 2004, 73.4% of U.S. companies had combined roles; this fell to 57.2% by May 2012. Many U.S. companies with combined roles have appointed a "Lead Director" to improve independence of the board from management. German and UK companies have generally split the roles in nearly 100% of listed companies. Empirical evidence does not indicate one model is superior to the other in terms of performance. However, one study indicated that poorly performing firms tend to remove separate CEOs more frequently than when the CEO/Chair roles are combined. === Shareholder apathy === Certain groups of shareholders may become disinterested in the corporate governance process, potentially creating a power vacuum in corporate power. Insiders, other shareholders, and stakeholders may take advantage of these situations to exercise greater influence and extract rents from the corporation.
Shareholder apathy may result from the increasing popularity of passive investing, diversification, and investment vehicles such as mutual funds and ETFs. == See also == == References == == Further reading == Arcot, Sridhar; Bruno, Valentina; Faure-Grimaud, Antoine (June 2010). "Corporate governance in the UK: Is the comply or explain approach working?" (PDF). International Review of Law and Economics. 30 (2): 193–201. doi:10.1016/j.irle.2010.03.002. S2CID 53448414. Becht, Marco; Bolton, Patrick; Röell, Ailsa (1 October 2002). "Corporate Governance and Control". SSRN 343461. Bowen, William, 1998 and 2004, The Board Book: An Insider's Guide for Directors and Trustees, New York and London, W.W. Norton & Company, ISBN 978-0-393-06645-6 Brickley, James A., William S. Klug and Jerold L. Zimmerman, Managerial Economics & Organizational Architecture, ISBN Cadbury, Sir Adrian, "The Code of Best Practice", Report of the Committee on the Financial Aspects of Corporate Governance, Gee and Co Ltd, 1992. Available online from [4] Cadbury, Sir Adrian, "Corporate Governance: Brussels", Instituut voor Bestuurders, Brussels, 1996. Claessens, Stijn; Djankov, Simeon; Lang, Larry H.P (January 2000). "The separation of ownership and control in East Asian Corporations". Journal of Financial Economics. 58 (1–2): 81–112. doi:10.1016/S0304-405X(00)00067-2. Clarke, Thomas (ed.) (2004) Theories of Corporate Governance: The Philosophical Foundations of Corporate Governance, London and New York: Routledge, ISBN 0-415-32308-8 Clarke, Thomas (ed.) (2004) Critical Perspectives on Business and Management (5 Volume Series on Corporate Governance – Genesis, Anglo-American, European, Asian and Contemporary Corporate Governance) London and New York: Routledge, ISBN 0-415-32910-8 Clarke, Thomas (2007) International Corporate Governance London and New York: Routledge, ISBN 0-415-32309-6 Clarke, Thomas & Chanlat, Jean-Francois (eds.) 
(2009) European Corporate Governance London and New York: Routledge, ISBN 978-0-415-40533-1 Clarke, Thomas & dela Rama, Marie (eds.) (2006) Corporate Governance and Globalization (3 Volume Series) London and Thousand Oaks, CA: SAGE, ISBN 978-1-4129-2899-1 Clarke, Thomas & dela Rama, Marie (eds.) (2008) Fundamentals of Corporate Governance (4 Volume Series) London and Thousand Oaks, CA: SAGE, ISBN 978-1-4129-3589-0 Colley, J., Doyle, J., Logan, G., Stettinius, W., What is Corporate Governance? (McGraw-Hill, December 2004) ISBN Crawford, C. J. (2007). Compliance & conviction: the evolution of enlightened corporate governance. Santa Clara, Calif: XCEO. ISBN 978-0-9769019-1-4 Denis, Diane K.; McConnell, John J. (March 2003). "International Corporate Governance". The Journal of Financial and Quantitative Analysis. 38 (1): 1–36. CiteSeerX 10.1.1.470.957. doi:10.2307/4126762. JSTOR 4126762. S2CID 232330567. Dignam, Alan and Lowry, John (2020) Company Law, Oxford University Press ISBN 978-0-19-928936-3 Douma, Sytse and Hein Schreuder (2013), Economic Approaches to Organizations, 5th edition. London: Pearson [5] Archived 2015-05-15 at the Wayback Machine ISBN 9780273735298 Uploaded at: https://www.academia.edu/93596216/Economic_approaches_to_corporate_governance_PDF_ Easterbrook, Frank H. and Daniel R. Fischel, The Economic Structure of Corporate Law, ISBN Easterbrook, Frank H. and Daniel R. Fischel, International Journal of Governance, ISBN Eccles, R.G. & T. Youmans (2015), Materiality in Corporate Governance: The Statement of Significant Audiences and Materiality, Boston: Harvard Business School, working paper 16-023, http://hbswk.hbs.edu/item/materiality-in-corporate-governance-the-statement-of-significant-audiences-and-materiality Erturk, Ismail; Froud, Julie; Johal, Sukhdev; Williams, Karel (August 2004). "Corporate governance and disappointment". Review of International Political Economy. 11 (4): 677–713. doi:10.1080/0969229042000279766. S2CID 153865396. 
Garrett, Allison, "Themes and Variations: The Convergence of Corporate Governance Practices in Major World Markets", 32 Denv. J. Int'l L. & Pol'y. Goergen, Marc, International Corporate Governance, (Prentice Hall 2012) ISBN 978-0-273-75125-0 Holton, Glyn A. (November 2006). "Investor Suffrage Movement". Financial Analysts Journal. 62 (6): 15–20. doi:10.2469/faj.v62.n6.4349. S2CID 153833506. Hovey, Martin; Naughton, Tony (June 2007). "A survey of enterprise reforms in China: The way forward" (PDF). Economic Systems. 31 (2): 138–156. doi:10.1016/j.ecosys.2006.09.001. Kay, John (2015), 'Shareholders think they own the company – they are wrong', The Financial Times, 10 November 2015 Abu Masdoor, Khalid (2011). "Ethical Theories of Corporate Governance". International Journal of Governance. 1 (2): 484–492. La Porta, Rafael; Lopez-De-Silanes, Florencio; Shleifer, Andrei (April 1999). "Corporate Ownership Around the World". The Journal of Finance. 54 (2): 471–517. doi:10.1111/0022-1082.00115. Low, Albert, 2008. "Conflict and Creativity at Work: Human Roots of Corporate Life", Sussex Academic Press. ISBN 978-1-84519-272-3 Monks, Robert A.G. and Minow, Nell, Corporate Governance (Blackwell 2004) ISBN Monks, Robert A.G. and Minow, Nell, Power and Accountability (HarperBusiness 1991), full text available online Moebert, Jochen and Tydecks, Patrick (2007). Power and Ownership Structures among German Companies. A Network Analysis of Financial Linkages [6] Murray, Alan Revolt in the Boardroom (HarperBusiness 2007) (ISBN 0-06-088247-6) OECD (1999, 2004, 2015) Principles of Corporate Governance Paris: OECD Sapovadia, Vrajlal K. (1 January 2007). "Critical Analysis of Accounting Standards Vis-À-Vis Corporate Governance Practice in India". SSRN 712461. Ulrich Seibert (1999), Control and Transparency in Business (KonTraG) Corporate Governance Reform in Germany. European Business Law Review 70 Shleifer, Andrei; Vishny, Robert W. (June 1997). 
"A Survey of Corporate Governance". The Journal of Finance. 52 (2): 737–783. CiteSeerX 10.1.1.489.2497. doi:10.1111/j.1540-6261.1997.tb04820.x. S2CID 54538527. Skau, H.O (1992), A Study in Corporate Governance: Strategic and Tactic Regulation (200 p) Sun, William (2009), How to Govern Corporations So They Serve the Public Good: A Theory of Corporate Governance Emergence, New York: Edwin Mellen, ISBN 978-0-7734-3863-7. Touffut, Jean-Philippe (ed.) (2009), Does Company Ownership Matter?, Cheltenham, UK, and Northampton, MA, US: Edward Elgar. Contributors: Jean-Louis Beffa, Margaret Blair, Wendy Carlin, Christophe Clerc, Simon Deakin, Jean-Paul Fitoussi, Donatella Gatti, Gregory Jackson, Xavier Ragot, Antoine Rebérioux, Lorenzo Sacconi and Robert M. Solow. Tricker, Bob and The Economist Newspaper Ltd (2003, 2009), Essentials for Board Directors: An A–Z Guide, Second Edition, New York, Bloomberg Press, ISBN 978-1-57660-354-3. Zelenyuk, Valentin; Zheka, Vitaliy (April 2006). "Corporate Governance and Firm's Efficiency: The Case of a Transitional Country, Ukraine". Journal of Productivity Analysis. 25 (1–2): 143–157. doi:10.1007/s11123-006-7136-8. S2CID 5712854. Shahwan, Y., & Mohammad, N. R. (2016). Descriptive Evidence of Corporate Governance & OECD Principles for Compliance with Jordanian Companies. Journal Studia Universitatis Babeş-Bolyai Negotia. == External links == Media related to Corporate governance at Wikimedia Commons Quotations related to Corporate governance at Wikiquote OECD Corporate Governance Portal WorldBank/IFC Corporate Governance Portal World Bank Corporate Governance Reports
Wikipedia/Corporate_governance
A model is an informative representation of an object, person, or system. The term originally denoted the plans of a building in late 16th-century English, and derived via French and Italian ultimately from Latin modulus, 'a measure'. Models can be divided into physical models (e.g. a ship model or a fashion model) and abstract models (e.g. a set of mathematical equations describing the workings of the atmosphere for the purpose of weather forecasting). Abstract or conceptual models are central to philosophy of science. In scholarly research and applied science, a model should not be confused with a theory: while a model seeks only to represent reality with the purpose of better understanding or predicting the world, a theory is more ambitious in that it claims to be an explanation of reality. == Types of model == === Model in specific contexts === As a noun, model has specific meanings in certain fields, derived from its original meaning of "structural design or layout": Model (art), a person posing for an artist, e.g. a 15th-century criminal representing the biblical Judas in Leonardo da Vinci's painting The Last Supper Model (person), a person who serves as a template for others to copy, as in a role model, often in the context of advertising commercial products; e.g. the first fashion model, Marie Vernet Worth in 1853, wife of designer Charles Frederick Worth. Model (product), a particular design of a product as displayed in a catalogue or show room (e.g. Ford Model T, an early car model) Model (organism) a non-human species that is studied to understand biological phenomena in other organisms, e.g. 
a guinea pig starved of vitamin C to study scurvy, an experiment that would be immoral to conduct on a person Model (mimicry), a species that is mimicked by another species Model (logic), a structure (a set of items, such as natural numbers 1, 2, 3,..., along with mathematical operations such as addition and multiplication, and relations, such as the order relation <) that satisfies a given system of axioms (basic truisms), i.e. that satisfies the statements of a given theory Model (CGI), a mathematical representation of any surface of an object in three dimensions via specialized software Model (MVC), the information-representing internal component of a software, as distinct from its user interface === Physical model === A physical model (most commonly referred to simply as a model but in this context distinguished from a conceptual model) is a smaller or larger physical representation of an object, person or system. The object being modelled may be small (e.g., an atom) or large (e.g., the Solar System) or life-size (e.g., a fashion model displaying clothes for similarly-built potential customers). The geometry of the model and the object it represents are often similar in the sense that one is a rescaling of the other. However, in many cases the similarity is only approximate or even intentionally distorted. Sometimes the distortion is systematic, e.g., a fixed scale horizontally and a larger fixed scale vertically when modelling topography to enhance a region's mountains. An architectural model permits visualization of internal relationships within the structure or external relationships of the structure to the environment. Another use is as a toy. Instrumented physical models are an effective way of investigating fluid flows for engineering design. Physical models are often coupled with computational fluid dynamics models to optimize the design of equipment and processes. 
This includes external flow such as around buildings, vehicles, people, or hydraulic structures. Wind tunnel and water tunnel testing is often used for these design efforts. Instrumented physical models can also examine internal flows, for the design of ductwork systems, pollution control equipment, food processing machines, and mixing vessels. Transparent flow models are used in this case to observe the detailed flow phenomenon. These models are scaled in terms of both geometry and important forces, for example, using Froude number or Reynolds number scaling (see Similitude). In the pre-computer era, the UK economy was modelled with the hydraulic model MONIAC, to predict for example the effect of tax rises on employment. === Conceptual model === A conceptual model is a theoretical representation of a system, e.g. a set of mathematical equations attempting to describe the workings of the atmosphere for the purpose of weather forecasting. It consists of concepts used to help understand or simulate a subject the model represents. Abstract or conceptual models are central to philosophy of science, as almost every scientific theory effectively embeds some kind of model of the physical or human sphere. In some sense, a physical model "is always the reification of some conceptual model; the conceptual model is conceived ahead as the blueprint of the physical one", which is then constructed as conceived. Thus, the term refers to models that are formed after a conceptualization or generalization process. === Examples === Conceptual model (computer science), an agreed representation of entities and their relationships, to assist in developing software Economic model, a theoretical construct representing economic processes Language model, a probabilistic model of a natural language, used for speech recognition, language generation, and information retrieval Large language models are artificial neural networks used for generative artificial intelligence (AI), e.g. 
ChatGPT Mathematical model, a description of a system using mathematical concepts and language Statistical model, a mathematical model that usually specifies the relationship between one or more random variables and other non-random variables Model (CGI), a mathematical representation of any surface of an object in three dimensions via specialized software Medical model, a proposed "set of procedures in which all doctors are trained" Mental model, in psychology, an internal representation of external reality Model (logic), a set along with a collection of finitary operations, and relations that are defined on it, satisfying a given collection of axioms Model (MVC), information-representing component of a software, distinct from the user interface (the "view"), both linked by the "controller" component, in the context of the model–view–controller software design Model act, a law drafted centrally to be disseminated and proposed for enactment in multiple independent legislatures Standard model (disambiguation) == Properties of models, according to general model theory == According to Herbert Stachowiak, a model is characterized by at least three properties: 1. Mapping A model always is a model of something—it is an image or representation of some natural or artificial, existing or imagined original, where this original itself could be a model. 2. Reduction In general, a model will not include all attributes that describe the original but only those that appear relevant to the model's creator or user. 3. Pragmatism A model does not relate unambiguously to its original. It is intended to work as a replacement for the original a) for certain subjects (for whom?) b) within a certain time range (when?) c) restricted to certain conceptual or physical actions (what for?). 
For example, a street map is a model of the actual streets in a city (mapping), showing the course of the streets while leaving out, say, traffic signs and road markings (reduction), made for pedestrians and vehicle drivers for the purpose of finding one's way in the city (pragmatism). Additional properties have been proposed, like extension and distortion as well as validity. The American philosopher Michael Weisberg differentiates between concrete and mathematical models and proposes computer simulations (computational models) as their own class of models. == Uses of models == According to Bruce Edmonds, there are at least 5 general uses for models: Prediction: reliably anticipating unknown data, including data within the domain of the training data (interpolation), and outside the domain (extrapolation) Explanation: establishing plausible chains of causality by proposing mechanisms that can explain patterns seen in data Theoretical exposition: discovering or proposing new hypotheses, or refuting existing hypotheses about the behaviour of the system being modelled Description: representing important aspects of the system being modelled Illustration: communicating an idea or explanation == See also == == References == == External links == Media related to Physical models at Wikimedia Commons
Wikipedia/Modeling
Cognition enhanced Natural language Information Analysis Method (CogNIAM) is a conceptual fact-based modelling method that aims to integrate the different dimensions of knowledge: data, rules, processes and semantics. To represent these dimensions, the world standards SBVR, BPMN and DMN from the Object Management Group (OMG) are used. CogNIAM, a successor of NIAM, is based on the work of knowledge scientist Sjir Nijssen. CogNIAM structures knowledge, gathered from people, documentation and software, by classifying it. For this purpose CogNIAM uses the so-called ‘Knowledge Triangle’. The outcome of CogNIAM is independent of the person applying it. The resulting model allows the knowledge to be expressed in diagrammatic form as well as in controlled natural language. == The different dimensions of knowledge == CogNIAM recognises 4 different dimensions of knowledge: Data: What are the facts? Process: How are facts generated/deleted/altered? Semantics: What do the facts mean? Rules: What conditions apply to the facts? These dimensions influence each other heavily: rules restrict data, semantics describe the concepts and terms used in processes, and so on. The aim of CogNIAM is therefore to integrate these different dimensions. == Structuring knowledge == As mentioned earlier, CogNIAM classifies knowledge using the knowledge triangle. Knowledge that can be mapped to the knowledge triangle is structurally relevant and can be verbalised. Knowledge that cannot be verbalised, for example the ‘Mona Lisa’, is not included. The knowledge must also be structurally relevant; motivation (the ‘why?’), for example, is important information but adds no structural value to the model. The remaining knowledge can be mapped to the knowledge triangle. The knowledge triangle consists of three levels. Level 1 – The level of facts: The majority of knowledge consists of concrete facts. Facts describe possible current, past or future states. 
In CogNIAM a fact is defined as "a proposition taken to be true by a relevant community". An example of a level 1 fact is: "The capital of Italy is Rome." Level 2 – The domain-specific level: In this level the rules that govern the facts of level 1 are specified. For the example above, a rule governing the level 1 facts could be "a country has exactly one capital". This is a rule that ensures no untrue states or disallowed transitions between different states can occur at level 1. Besides rules, level 2 contains six more knowledge categories, which are discussed in the next section. Level 3 – The generic level: This level is not associated with any specific domain; it says nothing about capitals or countries. As level 2 governs the facts on level 1, the generic level governs the knowledge categories of level 2. It consists of the same knowledge categories, but here they are applied to the content of level 2. In other words, level 3 contains the rules that determine the rules. The generic level can also be seen as a domain-specific level with the domain being ‘domain-specific knowledge’. As a result, level 3 also governs itself. == Knowledge categories == Levels 2 and 3 of the knowledge triangle consist of seven knowledge categories: Concept definitions describe the meaning of every term or group of terms at the fact level. A large part of the semantics dimension can be found here. Fact types provide the functionality to define which kinds of facts are considered to be within the scope of the domain of interest. Communication patterns: Fact communication patterns act as a communication mechanism to be used as a template to communicate facts using terms the subject matter expert is familiar with. Rule communication patterns act as a communication mechanism for the rules (see below) of the conceptual schema. 
Rules, distinguishing between: Integrity or validation rules, also known as constraints, restrict the set of facts and the transitions between the permitted sets of facts to those that are considered useful. In terms of data quality, integrity rules are used to guarantee the quality of the facts. Derivation rules are used to derive or calculate new information (facts) based on existing information. Exchange rules transfer facts into the administration of that domain or remove facts from the administration. In other words, they specify how facts are added and/or removed from the fact base so that the system stays in sync with the communication about the outside world. Event rules specify when to update the set of ground facts by a derivation rule or exchange rule in the context of a process description. Process descriptions specify the fact consuming and/or fact generating activities (the exchange and/or derivation rules) to be performed by the different actors for that process, as well as the event rules invoking the execution of those exchange and derivation rules in an ordered manner. Actors, identifying the involved participants and their responsibilities in the processes (in terms of the exchange and derivation rules they need to execute). Services, identifying the realisations of the process descriptions in terms of information products to be delivered or consulted == References ==
Wikipedia/Cognition_enhanced_Natural_language_Information_Analysis_Method
Control is a function of management that helps identify errors and take corrective actions. This is done to minimize deviation from standards and ensure that the stated goals of the organization are achieved effectively. According to modern concepts, control is a proactive action; earlier concepts of control were only used when errors were detected. Control in management includes setting standards, measuring actual performance, and taking corrective action in decision making. == Definition == In 1916, Henri Fayol formulated one of the first definitions of control as it pertains to management: Control of an undertaking consists of seeing that everything is being carried out in accordance with the plan which has been adopted, the orders which have been given, and the principles which have been laid down. Its objective is to point out mistakes so that they may be rectified and prevented from recurring. According to E. F. L. Brech: Control is checking current performance against pre-determined standards contained in the plans, with a view to ensuring adequate progress and satisfactory performance. According to Harold Koontz: Controlling is the measurement and correction of performance to make sure that enterprise objectives and the plans devised to attain them are accomplished. According to Stafford Beer: Management is the profession of control. Robert J. Mockler presented a more comprehensive definition of managerial control: Management control can be defined as a systematic effort by business management to compare performance to predetermined standards, plans, or objectives to determine whether performance is in line with these standards and presumably to take any remedial action required to see that human and other corporate resources are being used in the most effective and efficient way possible in achieving corporate objectives. 
Control can also be defined as "that function of the system that adjusts operations as needed to achieve the plan, or to maintain variations from system objectives within allowable limits." The control subsystem functions in close harmony with the operating system. The degree to which they interact depends on the nature of the operating system and its objectives. Stability concerns a system's ability to maintain a pattern of output without wide fluctuations. The rapidity of response pertains to the speed with which a system can correct variations and return to the expected output. A political election can illustrate the concept of control and the importance of feedback. Each party organizes a campaign to get its candidate selected and outlines a plan to inform the public about both the candidate's credentials and the party's platform. As the election nears, opinion polls furnish feedback about the effectiveness of the campaign and about each candidate's chances of winning. Depending on the nature of this feedback, certain adjustments in strategy and/or tactics can be made in an attempt to achieve the desired result. From these definitions, it can be stated that there is a close link between planning and controlling. Planning is a process by which an organization's objectives and the methods to achieve the objectives are established, and controlling is a process that measures and directs the actual performance against the planned goals of the organization. Thus, goals and objectives are often referred to as the Siamese twins of management. The managerial function of controlling is thus the measurement and correction of performance to make sure that enterprise objectives and the plans devised to attain them are accomplished. The absence of a right to control the actions or working practices of a person engaged at work is generally an indication that the working relationship with that person is covered by a contract for services and is not a form of employment. 
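The compare-and-correct cycle in these definitions can be condensed into a few lines of code. This is a toy sketch, not part of any management standard; the function name, the weekly sales figures, and the tolerance below are invented for illustration:

```python
def control_cycle(standard, actual, tolerance):
    """Compare measured performance against a pre-determined standard and
    report the deviation and whether corrective action is called for."""
    deviation = actual - standard
    needs_correction = abs(deviation) > tolerance
    return deviation, needs_correction

# Planned weekly output of 100 units, 82 actually sold,
# deviations of up to 10 units considered acceptable:
deviation, act = control_cycle(standard=100, actual=82, tolerance=10)
print(deviation, act)  # -18 True -> corrective action is needed
```

The tolerance reflects the point made later in the article that some deviation from the plan is usual and expected; only variations beyond the acceptable range trigger correction.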
== Characteristics == Control is a continuous process that is closely linked with planning. It is a tool for achieving organizational activities and an end-to-end process: control compares actual performance with planned performance, points out errors in the execution process, minimizes cost, achieves the standard, and saves time. Control helps management monitor performance, compares performance against standards, and is action-oriented. == Elements == The four basic elements in a control system are: the characteristic or condition to be controlled, the sensor, the comparator, and the activator. They occur in the same sequence and maintain consistent relationships with each other in every system. The first element is the characteristic or condition of the operating system to be measured. Specific characteristics are selected because a correlation exists between them and the system's performance. A characteristic can be the output of the system during any stage of processing (e.g. the heat energy produced by a furnace), or it may be a condition that is the result of the system (e.g. the temperature in the room which has changed because of the heat generated by the furnace). In an elementary school system, the hours a teacher works or the gain in knowledge demonstrated by the students on a national examination are examples of characteristics that may be selected for measurement, or control. The second element of control, the sensor, is a means for measuring the characteristic. For example, in a home heating system, this device would be the thermostat, and in a quality-control system, this measurement might be performed by a visual inspection of the product. The third element of control, the comparator, determines the need for correction by comparing what is occurring with what has been planned. 
Some deviation from the plan is usual and expected, but when variations are beyond those considered acceptable, corrective action is required. It involves a sort of preventative action that indicates that good control is being achieved. The fourth element of control, the activator, is the corrective action taken to return the system to its expected output. The actual person, device, or method used to direct corrective inputs into the operating system may take a variety of forms. It may be a hydraulic controller positioned by a solenoid or electric motor in response to an electronic error signal, an employee directed to rework the parts that failed to pass quality inspection, or a school principal who decides to buy additional books to provide for an increased number of students. As long as a plan is performed within allowable limits, corrective action is not necessary; however, this seldom occurs in practice. Information is the medium of control, because the flow of sensory data and later the flow of corrective information allow a characteristic or condition of the system to be controlled. === Controlled characteristic or condition === The primary requirement of a control system is that it maintains the level and kind of output necessary to achieve the system's objectives. It is usually impractical to control every feature and condition associated with the system's output. Therefore, the choice of the controlled item (and appropriate information about it) is extremely important. There should be a direct correlation between the controlled item and the system's operation. In other words, control of the selected characteristic should have a direct relationship to the goal or objective of the system. === Sensor === After the characteristic is sensed, or measured, information pertinent to control is fed back. 
Exactly what information needs to be transmitted and also the language that will best facilitate the communication process and reduce the possibility of distortion in transmission must be carefully considered. Information that is to be compared with the standard, or plan, should be expressed in the same terms or language as in the original plan to facilitate decision making. Using machine methods (computers) may require extensive translation of the information. Since optimal languages for computation and for human review are not always the same, the relative ease of translation may be a significant factor in selecting the units of measurement or the language unit in the sensing element. In many instances, the measurement may be sampled rather than providing a complete and continuous feedback of information about the operation. A sampling procedure suggests measuring some segment or portion of the operation that will represent the total. === Comparison with standard === In a social system, the norms of acceptable behavior become the standard against which so-called deviant behavior may be judged. Regulations and laws provide a more formal collection of information for society. Social norms change, but very slowly. In contrast, the standards outlined by a formal law can be changed from one day to the next through revision, discontinuation, or replacement by another. Information about deviant behavior becomes the basis for controlling social activity. Output information is compared with the standard or norm and significant deviations are noted. In an industrial example, frequency distribution (a tabulation of the number of times a given characteristic occurs within the sample of products being checked) may be used to show the average quality, the spread, and the comparison of output with a standard. If there is a significant and uncorrectable difference between output and plan, the system is "out of control." 
This means that the objectives of the system are not feasible in relation to the capabilities of the present design. Either the objectives must be reevaluated or the system redesigned to add new capacity or capability. For example, drug trafficking has been increasing in some cities at an alarming rate. The citizens must decide whether to revise the police system so as to regain control, or whether to modify the law to reflect a different norm of acceptable behavior. === Implementor === The activator unit responds to the information received from the comparator and initiates corrective action. If the system is a machine-to-machine system, the corrective inputs (decision rules) are designed into the network. When the control relates to a man-to-machine or man-to-man system, however, the individual(s) in charge must evaluate (1) the accuracy of the feedback information, (2) the significance of the variation, and (3) what corrective inputs will restore the system to a reasonable degree of stability. Once the decision has been made to direct new inputs into the system, the actual process may be relatively easy. A small amount of energy can change the operation of jet airplanes, automatic steel mills, and hydroelectric power plants. The pilot presses a button, and the landing gear of the airplane goes up or down; the operator of a steel mill pushes a lever, and a ribbon of white-hot steel races through the plant; a worker at a control board directs the flow of electrical energy throughout a regional network of stations and substations. It takes but a small amount of control energy to release or stop large quantities of input. The comparator may be located far from the operating system, although at least some of the elements must be in close proximity to operations. For example, the measurement (the sensory element) is usually at the point of operations. 
The measurement information can be transmitted to a distant point for comparison with the standard (comparator), and when deviations occur, the correcting input can be released from the distant point. However, the input (activator) will be located at the operating system. This ability to control from afar means that aircraft can be flown by remote control, dangerous manufacturing processes can be operated from a safe distance, and national organizations can be directed from centralized headquarters.
== Process ==
Step 1. Establishment of standards. Standards are the criteria against which actual performance will be measured. Standards are set in both quantitative and qualitative terms.
Step 2. Measurement of actual performance. Performance is measured in an objective and reliable manner. It should be checked in the same units in which the standards are set.
Step 3. Comparison of actual performance with standards. This step involves comparing the actual performance with the standards laid down in order to find the deviations. For example, the performance of a salesman in terms of units sold in a week can easily be measured against the standard output for the week.
Step 4. Analysis of the causes of deviations. Managers must determine why standards were not met. This step also involves determining whether more control is necessary or whether the standard should be changed.
Step 5. Taking corrective action. After the reasons for deviations have been determined, managers can develop solutions for issues with meeting the standards and make changes to processes or behaviors.
== Classifications ==
Control may be grouped according to three general classifications:
the nature of the information flow designed into the system (open- or closed-loop control)
the kind of components included in the design (man or machine control systems)
the relationship of control to the decision process (organizational or operational control).
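As a hypothetical illustration, the five-step process above can be condensed into a small control loop. The standard, sales figures, and one-line correction rule below are all invented for the sketch:

```python
# Hypothetical sketch of the five-step control process.
# Step 1 (establishing the standard) is the `standard` argument;
# the remaining steps run inside the function body.

def control_cycle(standard, measure, correct, tolerance=0.0):
    """Run one pass of the control process and return the deviation."""
    actual = measure()                 # Step 2: measure actual performance
    deviation = actual - standard      # Step 3: compare with the standard
    if abs(deviation) > tolerance:     # Step 4: analyze whether action is needed
        correct(deviation)             # Step 5: take corrective action
    return deviation

# Example: a salesman's weekly output against a standard of 100 units.
sales = {"units": 90}

def measure():
    return sales["units"]

def correct(deviation):
    # Illustrative correction only: make up the shortfall.
    sales["units"] -= deviation

deviation = control_cycle(standard=100, measure=measure, correct=correct)
print(deviation)       # -10: ten units below standard
print(sales["units"])  # 100 after the corrective action
```

In a real organization the `correct` step is, of course, a managerial decision rather than a one-line adjustment; the sketch only shows how the five steps chain together.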
=== Open- and closed-loop control === A street-lighting system controlled by a timing device is an example of an open-loop system. At a certain time each evening, a mechanical device closes the circuit and energy flows through the electric lines to light the lamps. Note, however, that the timing mechanism is an independent unit and is not measuring the objective function of the lighting system. If the lights should be needed on a dark, stormy day, the timing device would not recognize this need and therefore would not activate energy inputs. Corrective properties may sometimes be built into the controller (for example, to modify the time the lights are turned on as the days grow shorter or longer), but this would not close the loop. In another instance, the sensing, comparison, or adjustment may be made through action taken by an individual who is not part of the system. For example, the lights may be turned on by someone who happens to pass by and recognizes the need for additional light. If control is exercised as a result of the operation rather than because of outside or predetermined arrangements, it is a closed-loop system. A home thermostat is an example of a control device in a closed-loop system. When the room temperature drops below the desired point, the control mechanism closes the circuit to start the furnace and the temperature rises. The furnace is deactivated as the temperature reaches the preselected level. The significant difference between this type of system and an open-loop system is that the control device is an element of the system it serves and measures the performance of the system. In other words, all four control elements are integral to the specific system. An essential part of a closed-loop system is feedback; that is, the output of the system is measured continually through the item controlled, and the input is modified to reduce any difference or error toward zero.
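The thermostat loop can be simulated in a few lines. The setpoint, starting temperature, and heating and cooling rates below are invented values, not real thermal constants:

```python
# Minimal on/off (closed-loop) thermostat simulation.
# All numbers are illustrative assumptions for the sketch.

setpoint = 20.0      # desired room temperature: the standard
temperature = 16.0   # measured room temperature: the controlled output

for minute in range(60):
    # Sensing + comparison: the control device measures the system it serves.
    furnace_on = temperature < setpoint
    # Correction via feedback: heat input is adjusted by the system's own output.
    temperature += 0.5 if furnace_on else -0.1

print(round(temperature, 1))  # 20.2: the temperature hovers near the setpoint
```

Deleting the comparison and running the furnace on a fixed schedule instead would turn this into an open-loop system like the street-light timer: the input would no longer depend on the measured output.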
Many of the patterns of information flow in organizations are found to have the nature of closed loops, which use feedback. The reason for such a condition is apparent when one recognizes that any system, if it is to achieve a predetermined goal, must have available to it at all times an indication of its degree of attainment. In general, every goal-seeking system employs feedback. === Human and machine control === The elements of control are easy to identify in machine systems. For example, the characteristic to be controlled might be some variable like speed or temperature, and the sensing device could be a speedometer or a thermometer. An expectation of precision exists because the characteristic is quantifiable and the standard and the normal variation to be expected can be described in exact terms. In automatic machine systems, inputs of information are used in a process of continual adjustment to achieve output specifications. When even a small variation from the standard occurs, the correction process begins. The automatic system is highly structured, designed to accept certain kinds of input and produce specific output, and programmed to regulate the transformation of inputs within a narrow range of variation. For an illustration of mechanical control: as the load on a steam engine increases and the engine starts to slow down, the regulator reacts by opening a valve that releases additional inputs of steam energy. This new input returns the engine to the desired number of revolutions per minute. This type of mechanical control is crude in comparison to the more sophisticated electronic control systems in everyday use. Consider the complex missile-guidance systems that measure the actual course according to predetermined mathematical calculations and make almost instantaneous corrections to direct the missile to its target. 
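The regulator in the steam-engine example applies a proportional correction: new input in proportion to the deviation from the desired speed. A sketch, in which the target speed, gain, and load figures are all invented:

```python
# Proportional speed control in the spirit of the steam-engine regulator.
# The target speed, gain, and load figures are made-up for this sketch.

target_rpm = 1000.0
rpm = 1000.0
load = 0.0
history = []

for step in range(100):
    if step == 10:
        load = 50.0              # the load on the engine increases
    error = target_rpm - rpm     # comparator: deviation from the standard
    valve = 0.8 * error          # regulator opens the valve in proportion
    rpm += valve - 0.1 * load    # new steam input versus drag from the load
    history.append(rpm)

print(round(rpm, 2))  # settles near 993.75: back close to the target speed
```

Purely proportional correction stops pushing only while some error remains, so under sustained load the speed settles slightly below target; more sophisticated controllers add integral action to remove this residual offset.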
Machine systems can be complex because of the sophisticated technology, whereas control of people is complex because the elements of control are difficult to determine. In human control systems, the relationship between objectives and associated characteristics is often vague; the measurement of the characteristic may be extremely subjective; the expected standard is difficult to define; and the amount of new inputs required is impossible to quantify. To illustrate, let us refer once more to a formalized social system in which deviant behavior is controlled through a process of observed violation of the existing law (sensing), court hearings and trials (comparison with standard), incarceration when the accused is found guilty (correction), and release from custody after rehabilitation of the individual has occurred. The speed limit established for freeway driving is one standard of performance that is quantifiable, but even in this instance, the degree of permissible variation and the amount of the actual variation are often a subject of disagreement between the patrolman and the suspected violator. The complexity of society is reflected in many laws and regulations, which establish the general standards for economic, political, and social operations. A citizen may not know or understand the law and consequently would not know whether or not he was guilty of a violation. Most organized systems are some combination of man and machine; some elements of control may be performed by machine whereas others are accomplished by man. In addition, some standards may be precisely structured whereas others may be little more than general guidelines with wide variations expected in output. Man must act as the controller when measurement is subjective and judgment is required. Machines such as computers are incapable of making exceptions from the specified control criteria regardless of how much a particular case might warrant special consideration. 
A pilot acts in conjunction with computers and automatic pilots to fly large jets. In the event of unexpected weather changes, or possible collision with another plane, he must intercede and assume direct control. === Organizational and operational control === The concept of organizational control is implicit in the bureaucratic theory of Max Weber. Associated with this theory are such concepts as "span of control", "closeness of supervision", and "hierarchical authority". Weber's view tends to include all levels or types of organizational control as being the same. More recently, writers have tended to differentiate the control process between that which emphasizes the nature of the organizational or systems design and that which deals with daily operations. To illustrate the difference, we "evaluate" the performance of a system to see how effective and efficient the design proved to be or to discover why it failed. In contrast, we operate and "control" the system with respect to the daily inputs of material, information, and energy. In both instances, the elements of feedback are present, but organizational control tends to review and evaluate the nature and arrangement of components in the system, whereas operational control tends to adjust the daily inputs. The direction for organizational control comes from the goals and strategic plans of the organization. General plans are translated into specific performance measures such as share of the market, earnings, return on investment, and budgets. The process of organizational control is to review and evaluate the performance of the system against these established norms. Rewards for meeting or exceeding standards may range from special recognition to salary increases or promotions. On the other hand, a failure to meet expectations may signal the need to reorganize or redesign. 
In organizational control, the approach used in the program of review and evaluation depends on the reason for the evaluation — that is, is it because the system is not effective (accomplishing its objectives)? Is the system failing to achieve an expected standard of efficiency? Is the evaluation being conducted because of a breakdown or failure in operations? Is it merely a periodic audit-and-review process? When a system has failed or is in great difficulty, special diagnostic techniques may be required to isolate the trouble areas and to identify the causes of the difficulty. It is appropriate to investigate areas that have been troublesome before or areas where some measure of performance can be quickly identified. For example, if an organization's output backlog builds rapidly, it is logical to check first to see if the problem is due to such readily obtainable measures as increased demand or to a drop in available man hours. When a more detailed analysis is necessary, a systematic procedure should be followed. In contrast to organizational control, operational control serves to regulate the day-to-day output relative to schedules, specifications, and costs. Is the output of product or service the proper quality and is it available as scheduled? Are inventories of raw materials, goods-in-process, and finished products being purchased and produced in the desired quantities? Are the costs associated with the transformation process in line with cost estimates? Is the information needed in the transformation process available in the right form and at the right time? Is the energy resource being utilized efficiently? The most difficult task of management concerns monitoring the behavior of individuals, comparing performance to some standard, and providing rewards or punishment as indicated. Sometimes this control over people relates entirely to their output. 
For example, a manager might not be concerned with the behavior of a salesman as long as sales were as high as expected. In other instances, close supervision of the salesman might be appropriate if achieving customer satisfaction were one of the sales organization's main objectives. The larger the unit, the more likely that the control characteristic will be related to some output goal. It also follows that if it is difficult or impossible to identify the actual output of individuals, it is better to measure the performance of the entire group. This means that individuals' levels of motivation and the measurement of their performance become subjective judgments made by the supervisor. Controlling output also suggests the difficulty of controlling individuals' performance and relating this to the total system's objectives. == Problems == The perfect plan could be outlined if every possible variation of input could be anticipated and if the system would operate as predicted. This kind of planning is neither realistic, economical, nor feasible for most business systems. If it were feasible, planning requirements would be so complex that the system would be out of date before it could be operated. Therefore, we design control into systems. This requires more thought in the systems design but allows more flexibility of operations and makes it possible to operate a system using unpredictable components and undetermined input. Still, the design and effective operation of control are not without problems. The objective of the system is to perform some specified function. The objective of organizational control is to see that the specified function is achieved. The objective of operational control is to ensure that variations in daily output are maintained within prescribed limits. It is one thing to design a system that contains all of the elements of control, and quite another to make it operate true to the best objectives of design. 
Operating "in control" or "with plan" does not guarantee optimum performance. For example, the plan may not make the best use of the inputs of materials, energy, or information — in other words, the system may not be designed to operate efficiently. Some of the more typical problems relating to control include the difficulty of measurement, the problem of timing information flow, and the setting of proper standards. When objectives are not limited to quantitative output, the measurement of system effectiveness is difficult to make and subsequently perplexing to evaluate. Many of the characteristics pertaining to output do not lend themselves to quantitative measurement. This is true particularly when inputs of human energy cannot be related directly to output. The same situation applies to machines and other equipment associated with human involvement, when output is not in specific units. In evaluating man-machine or human-oriented systems, psychological and sociological factors obviously do not easily translate into quantifiable terms. For example, how does mental fatigue affect the quality or quantity of output? And, if it does, is mental fatigue a function of the lack of a challenging assignment or the fear of a potential injury? Subjective inputs may be transferred into numerical data, but there is always the danger of an incorrect appraisal and transfer, and the danger that the analyst may assume undue confidence in such data after they have been quantified. Let us suppose, for example, that the decisions made by an executive are rated from 1 to 10, 10 being the perfect decision. After determining the ranking for each decision, adding these, and dividing by the total number of decisions made, the average ranking would indicate a particular executive's score in his decision-making role. On the basis of this score, judgments — which could be quite erroneous — might be made about his decision-making effectiveness. 
One executive with a ranking of 6.75 might be considered more effective than another who had a ranking of 6.25, and yet the two managers may have made decisions under different circumstances and conditions. External factors over which neither executive had any control may have influenced the difference in "effectiveness". Quantifying human behavior, despite its extreme difficulty, subjectivity, and imprecision in relation to measuring physical characteristics, is the most prevalent and important measurement made in large systems. The behavior of individuals ultimately dictates the success or failure of every man-made system. === Information flow === Another problem of control relates to the improper timing of information introduced into the feedback channel. Improper timing can occur in both computerized and human control systems, either by mistakes in measurement or in judgment. The more rapid the system's response to an error signal, the more likely it is that the system could overadjust; yet the need for prompt action is important because any delay in providing corrective input could also be crucial. A system generating feedback inconsistent with current need will tend to fluctuate and will not adjust in the desired manner. The most serious problem in information flow arises when the delay in feedback is exactly one-half cycle, for then the corrective action is superimposed on a variation from norm which, at that moment, is in the same direction as that of the correction. This causes the system to overcorrect, and then if the reverse adjustment is made out of cycle, to correct too much in the other direction, and so on until the system fluctuates ("oscillates") out of control. This phenomenon is illustrated in Figure 1, "Oscillation and Feedback". If, at Point A, the trend below standard is recognized and new inputs are added, but not until Point B, the system will overreact and go beyond the allowable limits.
Again, if this is recognized at Point C, but inputs are not withdrawn until Point D, it will cause the system to drop below the lower limit of allowable variation. One solution to this problem rests in anticipation, which involves measuring not only the change but also the rate of change. The correction is outlined as a factor of the type and rate of the error. The difficulty also might be overcome by reducing the time lag between the measurement of the output and the adjustment to input. If a trend can be indicated, a time lead can be introduced to compensate for the time lag, bringing about consistency between the need for correction and the type and magnitude of the indicated action. It is usually more effective for an organization to maintain continuous measurement of its performance and to make small adjustments in operations constantly (this assumes a highly sensitive control system). Information feedback, consequently, should be timely and correct to be effective. That is, the information should provide an accurate indication of the status of the system. === Setting standards === Setting the proper standards or control limits is a problem in many systems. Parents are confronted with this dilemma in expressing what they expect of their children, and business managers face the same issue in establishing standards that will be acceptable to employees. Some theorists have proposed that workers be allowed to set their own standards, on the assumption that when people establish their own goals, they are more apt to accept and achieve them. Standards should be as precise as possible and communicated to all persons concerned. Moreover, communication alone is not sufficient; understanding is necessary. In human systems, standards tend to be poorly defined and the allowable range of deviation from standard is also indefinite. For example, how many hours each day should a professor be expected to be available for student consultation?
Or, what kind of behavior should be expected of students in the classroom? Discretion and personal judgment play a large part in such systems in determining whether corrective action should be taken. Perhaps the most difficult problem in human systems is the unresponsiveness of individuals to indicated correction. This may take the form of opposition to and subversion of control, or it may be related to the lack of defined responsibility or authority to take action. Leadership and positive motivation then become vital ingredients in achieving the proper response to input requirements. Most control problems relate to design; thus the solution to these problems must start at that point. Automatic control systems, provided that human intervention is possible to handle exceptions, offer the greatest promise. There is a danger, however, that we may measure characteristics that do not represent effective performance (as in the case of the speaker who requested that all of the people who could not hear what he was saying should raise their hands), or that improper information may be communicated.
=== Importance of control ===
Motivation for efficient employees
For complete discipline
Helpful in future planning
Aids efficiency
Decrease in risk
Helpful in coordination
== Limitations ==
1. Difficult to set up quantitative standards: Controlling loses much of its usefulness when standards and norms cannot be expressed quantitatively. Human behaviour, job satisfaction, and employee morale are factors that do not lend themselves to quantitative measurement, which makes measuring performance and comparing it with benchmarks difficult. It is not easy to set standards for such work or to define acceptable levels of competence and satisfaction; in these cases much depends on the judgment of the manager.
This is especially true of job satisfaction, employee behaviour, and morale. For example, the quality of employee behaviour cannot be measured directly; indicators such as absenteeism, the frequency of conflict, and turnover can be taken into account instead. If all of these measures are high, it can be inferred that employee behaviour in the organization is poor. It is clearly not possible to set criteria for every activity, and such proxy measures are not completely accurate.
2. Less control over external factors: No organization can control external factors such as government policy, technological change, the availability of information technology, or competition in the market. Management can control internal factors (such as staffing and infrastructure) but not external ones (such as political and social change or competitive pressure); planning must therefore anticipate these factors rather than attempt to regulate them.
3. Resistance by employees: Employees may resist control, particularly when it is imposed with little or no discussion. For example, employees may object when a GPS-based system tracks their location. They see it as a restriction on their freedom.
Employees feel that such controls restrict their freedom. For example, workers may complain, and may resist, when they are kept under surveillance with the help of CCTV (closed-circuit television) cameras. An employer can direct employees through rules and regulations but cannot compel their willing cooperation. The business environment is also constantly changing, and control systems must be revised to keep pace with that change, which employees may again resist.
4. Expensive to install: An effective control system is costly because it must operate at several levels of management and involves a great deal of money, time, and effort. Additional employees may be needed to measure performance and send reports to management. Small organizations cannot afford such systems; elaborate control is therefore practical mainly for large companies.
5. Overcontrolling can lead to employee turnover: Formal grievance procedures may address an employee's complaints, but an employee who becomes upset by overcontrolling may simply grow irritated and move to another company.
Managers often keep employees under close and repeated observation to monitor their behaviour, especially in the case of new staff and during organizational change. With too much control, employees feel their freedom is being violated. They do not want to work for an organization that does not let them work according to their preferences, and so they move to companies that do give them that freedom. Managing such a control system also takes much time and effort.
Wikipedia/Control_(management)
In the field of management, strategic management involves the formulation and implementation of the major goals and initiatives taken by an organization's managers on behalf of stakeholders, based on consideration of resources and an assessment of the internal and external environments in which the organization operates. Strategic management provides overall direction to an enterprise and involves specifying the organization's objectives, developing policies and plans to achieve those objectives, and then allocating resources to implement the plans. Academics and practicing managers have developed numerous models and frameworks to assist in strategic decision-making in the context of complex environments and competitive dynamics. Strategic management is not static in nature; the models can include a feedback loop to monitor execution and to inform the next round of planning. Michael Porter identifies three principles underlying strategy: creating a "unique and valuable [market] position"; making trade-offs by choosing "what not to do"; and creating "fit" by aligning company activities with one another to support the chosen strategy. Corporate strategy involves answering a key question from a portfolio perspective: "What business should we be in?" Business strategy involves answering the question: "How shall we compete in this business?" Alternatively, corporate strategy may be thought of as the strategic management of a corporation (a particular legal structure of a business), and business strategy as the strategic management of a business. Management theory and practice often make a distinction between strategic management and operational management, where operational management is concerned primarily with improving efficiency and controlling costs within the boundaries set by the organization's strategy. == Definitions == In 1988, Henry Mintzberg described the many different definitions and perspectives on strategy reflected in both academic research and in practice.
He examined the strategic process and concluded it was much more fluid and unpredictable than people had thought. Because of this, he could not point to one process that could be called strategic planning. Instead, Mintzberg concluded that there are five types of strategy: Strategy as plan – a directed course of action to achieve an intended set of goals; similar to the strategic planning concept; Strategy as pattern – a consistent pattern of past behavior, with a strategy realized over time rather than planned or intended. Where the realized pattern was different from the intent, he referred to the strategy as emergent; Strategy as position – locating brands, products, or companies within the market, based on the conceptual framework of consumers or other stakeholders; a strategy determined primarily by factors outside the firm; Strategy as ploy – a specific maneuver intended to outwit a competitor; and Strategy as perspective – executing strategy based on a "theory of the business" or natural extension of the mindset or ideological perspective of the organization. In 1998, Mintzberg developed these five types of management strategy into 10 "schools of thought" and grouped them into three categories. The first group is normative. It consists of the schools of informal design and conception, formal planning, and analytical positioning. The second group, consisting of six schools, is more concerned with how strategic management is actually done, rather than prescribing optimal plans or positions. The six schools are entrepreneurial, visionary, cognitive, learning/adaptive/emergent, negotiation, corporate culture and business environment. The third and final group consists of one school, the configuration or transformation school, a hybrid of the other schools organized into stages, organizational life cycles, or "episodes".
Michael Porter defined strategy in 1980 as the "...broad formula for how a business is going to compete, what its goals should be, and what policies will be needed to carry out those goals" and the "...combination of the ends (goals) for which the firm is striving and the means (policies) by which it is seeking to get there." He continued that: "The essence of formulating competitive strategy is relating a company to its environment." Some complexity theorists define strategy as the unfolding of the internal and external aspects of the organization that results in actions in a socio-economic context. Michael D. Watkins (2007) argued that strategic management operates as a critical bridge between an organization's mission, vision, and execution. He asserted that if the mission statement and goals answer the 'what' question, and if the vision statement answers the 'why' questions, then strategy provides answers to the 'how' question of business management. In other words, strategy encompasses the methods, frameworks, and decision-making processes that enable a company to translate its aspirations into concrete actions and competitive success. == Application == Strategy is defined as "the determination of the basic long-term goals of an enterprise, and the adoption of courses of action and the allocation of resources necessary for carrying out these goals". Strategies are established to set direction, focus effort, define or clarify the organization, and provide consistency or guidance in response to the environment. Strategic management involves the related concepts of strategic planning and strategic thinking. Strategic planning is analytical in nature and refers to formalized procedures to produce the data and analyses used as inputs for strategic thinking, which synthesizes the data resulting in the strategy. Strategic planning may also refer to control mechanisms used to implement the strategy once it is determined. 
In other words, strategic planning happens around the strategic thinking or strategy making activity. Strategic management is often described as involving two major processes: formulation and implementation of strategy. While described sequentially below, in practice the two processes are iterative and each provides input for the other. === Formulation === Formulation of strategy involves analyzing the environment in which the organization operates, then making a series of strategic decisions about how the organization will fulfill its mission. Formulation ends with a series of goals or objectives and measures for the organization to pursue. Environmental analysis includes the: Remote external environment, including the political, economic, social, technological, legal and environmental landscape (PESTLE); Industry environment, such as the competitive behavior of rival organizations, the bargaining power of buyers/customers and suppliers, threats from new entrants to the industry, and the ability of buyers to substitute products (Porter's 5 forces); and Internal environment, regarding the strengths and weaknesses of the organization's resources (i.e., its people, processes and IT systems). Strategic decisions are based on insight from the environmental assessment and are responses to strategic questions about how the organization will compete, such as: What is the organization's business? Who is the target customer for the organization's products and services? Where are the customers and how do they buy? What is considered "value" to the customer? Which businesses, products and services should be included or excluded from the portfolio of offerings? What is the geographic scope of the business? What differentiates the company from its competitors in the eyes of customers and other stakeholders? Which resources, skills and capabilities should be developed within the firm? What are the important opportunities and risks for the organization? 
How can the firm grow, through both its base business and new business? How can the firm generate more value for investors? The answers to these and many other strategic questions result in the organization's strategy and a series of specific short-term and long-term goals or objectives and related measures. === Implementation === The second major process of strategic management is implementation, which involves decisions regarding how the organization's resources (i.e., people, process and IT systems) will be aligned and mobilized towards the objectives. Implementation results in how the organization's resources are structured (such as by product or service or geography), leadership arrangements, communication, incentives, and monitoring mechanisms to track progress towards objectives, among others. Running the day-to-day operations of the business is often referred to as "operations management" or specific terms for key departments or functions, such as "logistics management" or "marketing management", which take over once strategic management decisions are implemented. == Historical development == === Origins === The strategic management discipline originated in the 1950s and 1960s. Among the numerous early contributors, the most influential were Peter Drucker, Philip Selznick, Alfred Chandler, Igor Ansoff, and Bruce Henderson. The discipline draws from earlier thinking and texts on 'strategy' dating back thousands of years. Prior to 1960, the term "strategy" was primarily used regarding war and politics, not business. Many companies built strategic planning functions to develop and execute the formulation and implementation processes during the 1960s. Peter Drucker was a prolific management theorist and author of dozens of management books, with a career spanning five decades. He addressed fundamental strategic questions in a 1954 book The Practice of Management writing: "... 
the first responsibility of top management is to ask the question 'what is our business?' and to make sure it is carefully studied and correctly answered." He wrote that the answer was determined by the customer. He recommended eight areas where objectives should be set, such as market standing, innovation, productivity, physical and financial resources, worker performance and attitude, profitability, manager performance and development, and public responsibility. In 1957, Philip Selznick initially used the term "distinctive competence" in referring to how the Navy was attempting to differentiate itself from the other services. He also formalized the idea of matching the organization's internal factors with external environmental circumstances. This core idea was developed further by Kenneth R. Andrews in 1963 into what we now call SWOT analysis, in which the strengths and weaknesses of the firm are assessed in light of the opportunities and threats in the business environment. Alfred Chandler recognized the importance of coordinating management activity under an all-encompassing strategy. Interactions between functions were typically handled by managers who relayed information back and forth between departments. Chandler stressed the importance of taking a long-term perspective when looking to the future. In his groundbreaking 1962 work Strategy and Structure, Chandler showed that a long-term coordinated strategy was necessary to give a company structure, direction and focus. He put it concisely: "structure follows strategy." Chandler wrote that: "Strategy is the determination of the basic long-term goals of an enterprise, and the adoption of courses of action and the allocation of resources necessary for carrying out these goals." Igor Ansoff built on Chandler's work by adding concepts and inventing a vocabulary.
He developed a grid that compared strategies for market penetration, product development, market development and horizontal and vertical integration and diversification. He felt that management could use the grid to systematically prepare for the future. In his 1965 classic Corporate Strategy, he developed gap analysis to clarify the gap between the current reality and the goals and to develop what he called "gap reducing actions". Ansoff wrote that strategic management had three parts: strategic planning; the skill of a firm in converting its plans into reality; and the skill of a firm in managing its own internal resistance to change. Bruce Henderson, founder of the Boston Consulting Group, wrote about the concept of the experience curve in 1968, following initial work begun in 1965. The experience curve refers to a hypothesis that unit production costs decline by 20–30% every time cumulative production doubles. This supported the argument for achieving higher market share and economies of scale. Porter wrote in 1980 that companies have to make choices about their scope and the type of competitive advantage they seek to achieve, whether lower cost or differentiation. The idea of strategy targeting particular industries and customers (i.e., competitive positions) with a differentiated offering was a departure from the experience-curve influenced strategy paradigm, which was focused on larger scale and lower cost. Porter revised the strategy paradigm again in 1985, writing that superior performance of the processes and activities performed by organizations as part of their value chain is the foundation of competitive advantage, thereby outlining a process view of strategy. === Change in focus from production to marketing === The direction of strategic research also paralleled a major paradigm shift in how companies competed, specifically a shift from the production focus to market focus. 
The prevailing concept in strategy up to the 1950s was to create a product of high technical quality. If you created a product that worked well and was durable, it was assumed you would have no difficulty profiting. This was called the production orientation. Henry Ford famously said of the Model T car: "Any customer can have a car painted any color that he wants, so long as it is black." Management theorist Peter F. Drucker wrote in 1954 that it was the customer who defined what business the organization was in. In 1960 Theodore Levitt argued that instead of producing products then trying to sell them to the customer, businesses should start with the customer, find out what they wanted, and then produce it for them. The fallacy of the production orientation was also referred to as marketing myopia, in Levitt's article of the same name. Over time, the customer became the driving force behind all strategic business decisions. This marketing concept, in the decades since its introduction, has been reformulated and repackaged under names including market orientation, customer orientation, customer intimacy, customer focus, customer-driven and market focus. === Nature of strategy === In 1985, Ellen Earle Chaffee summarized what she thought were the main elements of strategic management theory where consensus generally existed as of the 1970s, writing that strategic management: involves adapting the organization to its business environment; is fluid and complex. Change creates novel combinations of circumstances requiring unstructured non-repetitive responses; affects the entire organization by providing direction; involves both strategy formulation processes and also implementation of the content of the strategy; may be planned (intended) or unplanned (emergent); these may differ from each other and also from the realized strategy which results from them (Chaffee, p.
89) is done at several levels: overall corporate-level strategy, and individual business-level strategies; and involves both conceptual and analytical thought processes. Chaffee further wrote that research up to that point covered three models of strategy, which were not mutually exclusive: Linear strategy: a planned determination of goals, initiatives, and allocation of resources, along the lines of the Chandler definition above. This is most consistent with strategic planning approaches and may have a long planning horizon. The strategist "deals with" the environment but it is not the central concern. Adaptive strategy: in this model, the organization's goals and activities are primarily concerned with adaptation to its environment, analogous to a biological organism. The need for continuous adaptation reduces or eliminates the planning window. There is more focus on means (resource mobilization to address the environment) rather than ends (goals). Strategy is less centralized than in the linear model. Interpretive strategy: as a less developed model than the linear and adaptive models, dating from the 1970s, interpretive strategy is concerned with "orienting metaphors constructed for the purpose of conceptualizing and guiding individual attitudes or organizational participants". The aim of interpretive strategy is legitimacy or credibility in the mind of stakeholders. It places emphasis on symbols and language to influence the minds of customers, rather than the physical product of the organization. J. I. Moore identifies four related levels at which strategies can be devised: enterprise, corporate, business and functional levels. The functional level applies to specific functional areas within an organisation such as its finance department, HR team or IT section.
In 2004, George Stalk, a Boston Consulting Group writer, distinguished between two extremes of business strategy using baseball metaphors: Softball: relying on weak competitive tactics which appear to be "strategic" but in fact "do little more than keep the company in the game for the short term"; Hardball: engaging with tough competitive strategies, "relentlessly" aiming for success. == Concepts and frameworks == The progress of strategy since 1960 can be charted by a variety of frameworks and concepts introduced by management consultants and academics. These reflect an increased focus on cost, competition and customers. These "3 Cs" were illuminated by much more robust empirical analysis at ever-more granular levels of detail, as industries and organizations were disaggregated into business units, activities, processes, and individuals in a search for sources of competitive advantage. === SWOT analysis === By the 1960s, the capstone business policy course at the Harvard Business School included the concept of matching the distinctive competence of a company (its internal strengths and weaknesses) with its environment (external opportunities and threats) in the context of its objectives. This framework came to be known by the acronym SWOT and was "a major step forward in bringing explicitly competitive thinking to bear on questions of strategy". Kenneth R. Andrews helped popularize the framework via a 1963 conference and it remains commonly used in practice. === Experience curve === The experience curve was developed by the Boston Consulting Group in 1966. It reflects a hypothesis that total per unit costs decline systematically by as much as 15–25% every time cumulative production (i.e., "experience") doubles. It has been empirically confirmed by some firms at various points in their history. Costs decline due to a variety of factors, such as the learning curve, substitution of capital for labor (automation), and technological sophistication.
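The doubling rule stated above is equivalent to a power law: the cost of the n-th cumulative unit equals the first-unit cost times n raised to log2(1 - d), where d is the fractional decline per doubling. A minimal sketch of that arithmetic (the figures are purely illustrative):

```python
import math

def unit_cost(first_unit_cost, cumulative_units, decline_per_doubling=0.20):
    """Experience-curve cost: each doubling of cumulative production
    multiplies unit cost by (1 - decline_per_doubling)."""
    b = math.log2(1.0 - decline_per_doubling)  # learning exponent, about -0.32 at 20%
    return first_unit_cost * cumulative_units ** b

# At a 20% curve, a 100-cost first unit falls to 80 after one doubling
# of cumulative output and 64 after two:
for n in (1, 2, 4):
    print(n, round(unit_cost(100.0, n), 2))
```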
Author Walter Kiechel wrote that it reflected several insights, including: A company can always improve its cost structure; Competitors have varying cost positions based on their experience; Firms could achieve lower costs through higher market share, attaining a competitive advantage; and An increased focus on empirical analysis of costs and processes, a concept which author Kiechel refers to as "Greater Taylorism". Kiechel wrote in 2010: "The experience curve was, simply, the most important concept in launching the strategy revolution...with the experience curve, the strategy revolution began to insinuate an acute awareness of competition into the corporate consciousness." Prior to the 1960s, the word competition rarely appeared in the most prominent management literature; U.S. companies then faced considerably less competition and did not focus on performance relative to peers. Further, the experience curve provided a basis for the retail sale of business ideas, helping drive the management consulting industry. === Importance-performance matrix === Completion of an importance-performance matrix forms "a crucial stage in the formulation of operations strategy", and may be considered a "simple, yet useful, method for simultaneously considering both the importance and performance dimensions when evaluating or defining strategy". Notes on this subject from the Department of Engineering at the University of Cambridge suggest that a binary matrix may be used "but may be found too crude", and nine-point scales on both the importance and performance axes are recommended. An importance scale could be labelled from "the main thrust of competitiveness" to "never considered by customers and never likely to do so", and performance can be segmented into "better than", "the same as", and "worse than" the company's competitors. The highest urgency would then be directed to the most important areas where performance is poorer than competitors.
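The zoning described above can be sketched as a small classifier over the two nine-point scales, with 1 meaning most important or best relative performance and 9 meaning least important or worst; the cut-off values and zone names below are illustrative assumptions, not taken from the Cambridge notes:

```python
def priority_zone(importance, performance):
    """Map a competitive factor, rated on two nine-point scales, to an
    action zone (thresholds and zone names are illustrative).

    importance: 1 = "main thrust of competitiveness" ... 9 = "never
    considered by customers"; performance: 1 = far better than
    competitors ... 9 = far worse.
    """
    if importance <= 3 and performance >= 6:
        return "urgent action"   # vital to customers, yet worse than rivals
    if importance <= 3:
        return "maintain"        # vital and at least on par with rivals
    if performance >= 7:
        return "improve"         # lagging, but less critical to customers
    return "appropriate"         # performance adequate for its importance

print(priority_zone(2, 8))  # an important factor performed worse than rivals
```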
The technique is also used in relation to marketing, where the variable "importance" is related to buyers' perception of important attributes of a product: for attributes which might be considered important to buyers, both their perceived importance and their performance are assessed. === Corporate strategy and portfolio theory === The concept of the corporation as a portfolio of business units, with each plotted graphically based on its market share (a measure of its competitive position relative to its peers) and industry growth rate (a measure of industry attractiveness), was summarized in the growth–share matrix developed by the Boston Consulting Group around 1970. By 1979, one study estimated that 45% of the Fortune 500 companies were using some variation of the matrix in their strategic planning. This framework helped companies decide where to invest their resources (i.e., in their high market share, high growth businesses) and which businesses to divest (i.e., low market share, low growth businesses). The growth–share matrix was followed by the G.E. multifactoral model, developed by General Electric. Companies continued to diversify as conglomerates until the 1980s, when deregulation and a less restrictive antitrust environment led to the view that a portfolio of operating divisions in different industries was worth more as many independent companies than as a single firm, leading to the breakup of many conglomerates. While the popularity of portfolio theory has waxed and waned, the key dimensions considered (industry attractiveness and competitive position) remain central to strategy. In response to the evident problems of "over-diversification", C. K. Prahalad and Gary Hamel suggested that companies should build portfolios of businesses around shared technical or operating competencies, and should develop structures and processes to enhance their core competencies. Michael Porter also addressed the issue of the appropriate level of diversification.
In 1987, he argued that corporate strategy involves two questions: 1) What business should the corporation be in? and 2) How should the corporate office manage its business units? He mentioned four concepts of corporate strategy, each of which suggests a certain type of portfolio and a certain role for the corporate office; the latter three can be used together: Portfolio theory: A strategy based primarily on diversification through acquisition. The corporation shifts resources among the units and monitors the performance of each business unit and its leaders. Each unit generally runs autonomously, with limited interference from the corporate center provided goals are met. Restructuring: The corporate office acquires then actively intervenes in a business where it detects potential, often by replacing management and implementing a new business strategy. Transferring skills: Important managerial skills and organizational capability are essentially spread to multiple businesses. The skills must be necessary to competitive advantage. Sharing activities: Ability of the combined corporation to leverage centralized functions, such as sales, finance, etc., thereby reducing costs. Building on Porter's ideas, Michael Goold, Andrew Campbell and Marcus Alexander developed the concept of "parenting advantage" to be applied at the corporate level, as a parallel to the concept of "competitive advantage" applied at the business level. Parent companies, they argued, should aim to "add more value" to their portfolio of businesses than rivals. If they succeed, they have a parenting advantage. The right level of diversification depends, therefore, on the ability of the parent company to add value in comparison to others. Different parent companies with different skills should expect to have different portfolios.
(See Corporate-Level Strategy, 1995, and Strategy for the Corporate Level, 2014.) === Competitive advantage === In 1980, Porter defined the two types of competitive advantage an organization can achieve relative to its rivals: lower cost or differentiation. This advantage derives from attribute(s) that allow an organization to outperform its competition, such as superior market position, skills, or resources. In Porter's view, strategic management should be concerned with building and sustaining competitive advantage. === Industry structure and profitability === Porter developed a framework for analyzing the profitability of industries and how those profits are divided among the participants in 1980. In five forces analysis he identified the forces that shape the industry structure or environment. The framework involves the bargaining power of buyers and suppliers, the threat of new entrants, the availability of substitute products, and the competitive rivalry of firms in the industry. These forces affect the organization's ability to raise its prices as well as the costs of inputs (such as raw materials) for its processes. The five forces framework helps describe how a firm can use these forces to obtain a sustainable competitive advantage, either lower cost or differentiation. Companies can maximize their profitability by competing in industries with favorable structure. Competitors can take steps to grow the overall profitability of the industry, or to take profit away from other parts of the industry structure. Porter modified Chandler's dictum about structure following strategy by introducing a second level of structure: while organizational structure follows strategy, it in turn follows industry structure. === Generic competitive strategies === Porter wrote in 1980 that strategy targets either cost leadership, differentiation, or focus. These are known as Porter's three generic strategies and can be applied to any size or form of business.
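Read as a decision table, the generic strategies map the two choices Porter describes, competitive scope and type of advantage sought, to a strategy, with the focus strategy splitting into its two variants. A minimal sketch of that lookup:

```python
def generic_strategy(scope, advantage):
    """Porter's generic strategies as a lookup.

    scope: 'broad' (industry-wide) or 'narrow' (a particular segment);
    advantage: 'cost' or 'differentiation'.
    """
    table = {
        ("broad", "cost"): "cost leadership",
        ("broad", "differentiation"): "differentiation",
        ("narrow", "cost"): "cost focus",
        ("narrow", "differentiation"): "differentiation focus",
    }
    return table[(scope, advantage)]

print(generic_strategy("narrow", "differentiation"))  # differentiation focus
```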
Porter claimed that a company must choose only one of the three or risk that the business would waste precious resources. Porter's generic strategies detail the interaction between cost minimization strategies, product differentiation strategies, and market focus strategies. Porter described an industry as having multiple segments that can be targeted by a firm. The breadth of its targeting refers to the competitive scope of the business. Porter defined two types of competitive advantage: lower cost or differentiation relative to its rivals. Achieving competitive advantage results from a firm's ability to cope with the five forces better than its rivals. Porter wrote: "[A]chieving competitive advantage requires a firm to make a choice...about the type of competitive advantage it seeks to attain and the scope within which it will attain it." He also wrote: "The two basic types of competitive advantage [differentiation and lower cost] combined with the scope of activities for which a firm seeks to achieve them lead to three generic strategies for achieving above average performance in an industry: cost leadership, differentiation and focus. The focus strategy has two variants, cost focus and differentiation focus." The concept of choice was a different perspective on strategy, as the 1970s paradigm was the pursuit of market share (size and scale) influenced by the experience curve. Companies that pursued the highest market share position to achieve cost advantages fit under Porter's cost leadership generic strategy, but the concept of choice regarding differentiation and focus represented a new perspective. === Value chain === Porter's 1985 description of the value chain refers to the chain of activities (processes or collections of processes) that an organization performs in order to deliver a valuable product or service for the market.
These include functions such as inbound logistics, operations, outbound logistics, marketing and sales, and service, supported by systems and technology infrastructure. By aligning the various activities in its value chain with the organization's strategy in a coherent way, a firm can achieve a competitive advantage. Porter also wrote that strategy is an internally consistent configuration of activities that differentiates a firm from its rivals. A robust competitive position accumulates from many activities which should fit coherently together. Porter wrote in 1985: "Competitive advantage cannot be understood by looking at a firm as a whole. It stems from the many discrete activities a firm performs in designing, producing, marketing, delivering and supporting its product. Each of these activities can contribute to a firm's relative cost position and create a basis for differentiation...the value chain disaggregates a firm into its strategically relevant activities in order to understand the behavior of costs and the existing and potential sources of differentiation." === Interorganizational relationships === Interorganizational relationships allow independent organizations to gain access to resources or to enter new markets. Interorganizational relationships represent a critical lever of competitive advantage. The field of strategic management has paid much attention to the different forms of relationships between organizations, ranging from strategic alliances to buyer-supplier relationships, joint ventures, networks, R&D consortia, licensing, and franchising. On the one hand, scholars drawing on organizational economics (e.g., transaction costs theory) have argued that firms use interorganizational relationships when these are the most efficient form compared with other forms of organization, such as operating on their own or using the market.
On the other hand, scholars drawing on organizational theory (e.g., resource dependence theory) suggest that firms tend to partner with others when such relationships allow them to improve their status, power, reputation, or legitimacy. A key component of the strategic management of interorganizational relationships is the choice of governance mechanisms. While early research focused on the choice between equity and non-equity forms, recent scholarship studies the nature of the contractual and relational arrangements between organizations. Researchers have also noted, although to a lesser extent, the dark side of interorganizational relationships, such as conflict, disputes, opportunism and unethical behaviors. Relational or collaborative risk can be defined as the uncertainty about whether potentially significant and/or disappointing outcomes of collaborative activities will be realized. Companies can assess, monitor and manage collaborative risks. Empirical studies show that managers assess risks as lower when they trust their external partners, higher if they are satisfied with their own performance, and lower when their business environment is turbulent. === Core competence === Gary Hamel and C. K. Prahalad described the idea of core competency in 1990, the idea that each organization has some capability in which it excels and that the business should focus on opportunities in that area, letting others go or outsourcing them. Further, core competency is difficult to duplicate, as it involves the skills and coordination of people across a variety of functional areas or processes used to deliver value to customers. By outsourcing, companies expanded the concept of the value chain, with some elements within the entity and others without.
Core competency is part of a branch of strategy called the resource-based view of the firm, which postulates that if activities are strategic as indicated by the value chain, then the organization's capabilities and ability to learn or adapt are also strategic. === Theory of the business === According to Peter Drucker, the theory of the business comprises the assumptions on which a company is built and run, which fall into three parts: 1. Assumptions about the external environment (society, technology, customers, and competition). 2. Assumptions about the specific mission of the organization. 3. Assumptions about the core competencies essential to achieving the mission. A valid theory of the business meets four specifications: 1. The assumptions about environment, mission, and core competencies must fit reality. 2. The assumptions in all three areas must fit one another. 3. The theory of the business must be known and understood throughout the organization. 4. The theory of the business must be tested constantly. Companies run into difficulty when the assumptions of their theory no longer align with reality. Drucker cited the example of large department stores: they assumed that people who wanted to shop in large premises would do so, but many consumers turned instead to specialty retailers (which focus on one or two categories of products and own their own premises) because time, rather than price, had become the scarce resource in shopping. The theory of the business is thus both a set of assumptions and a discipline, requiring systematic diagnosis, monitoring and testing of the assumptions that make up the theory in order to remain competitive. == Strategic thinking == Strategic thinking involves the generation and application of unique business insights to opportunities intended to create competitive advantage for a firm or organization. It involves challenging the assumptions underlying the organization's strategy and value proposition. Mintzberg wrote in 1994 that it is more about synthesis (i.e., "connecting the dots") than analysis (i.e., "finding the dots").
It is about "capturing what the manager learns from all sources (both the soft insights from his or her personal experiences and the experiences of others throughout the organization and the hard data from market research and the like) and then synthesizing that learning into a vision of the direction that the business should pursue." Mintzberg argued that strategic thinking is the critical part of formulating strategy, more so than strategic planning exercises. General André Beaufre wrote in 1963 that strategic thinking "is a mental process, at once abstract and rational, which must be capable of synthesizing both psychological and material data. The strategist must have a great capacity for both analysis and synthesis; analysis is necessary to assemble the data on which he makes his diagnosis, synthesis in order to produce from these data the diagnosis itself--and the diagnosis in fact amounts to a choice between alternative courses of action." Will Mulcaster argued that while much research and creative thought has been devoted to generating alternative strategies, too little work has been done on what influences the quality of strategic decision making and the effectiveness with which strategies are implemented. For instance, in retrospect it can be seen that the 2008 financial crisis could have been avoided if the banks had paid more attention to the risks associated with their investments, but how should banks change the way they make decisions to improve the quality of their decisions in the future? Mulcaster's Managing Forces framework addresses this issue by identifying 11 forces that should be incorporated into the processes of decision making and strategic implementation. The 11 forces are: Time; Opposing forces; Politics; Perception; Holistic effects; Adding value; Incentives; Learning capabilities; Opportunity cost; Risk; and Style. Classic strategic thinking and vision have limitations in turbulent and uncertain environments.
These limitations relate to the classic definition's assumptions about shared, future-oriented goals and about who possesses the necessary cognitive capabilities. Strategy should not be seen solely as the vision of the top managerial hierarchy. The newer microfoundations framework suggests that people at different managerial levels need to work and interact dynamically to produce the knowledge strategy. == Strategic planning == Strategic planning is a means of administering the formulation and implementation of strategy. Strategic planning is analytical in nature and refers to formalized procedures to produce the data and analyses used as inputs for strategic thinking, which synthesizes the data resulting in the strategy. Strategic planning may also refer to control mechanisms used to implement the strategy once it is determined. In other words, strategic planning happens around the strategy formation process. === Environmental analysis === Porter wrote in 1980 that formulation of competitive strategy includes consideration of four key elements: Company strengths and weaknesses; Personal values of the key implementers (i.e., management and the board); Industry opportunities and threats; and Broader societal expectations. The first two elements relate to factors internal to the company (i.e., the internal environment), while the latter two relate to factors external to the company (i.e., the external environment). There are many analytical frameworks which attempt to organize the strategic planning process. Examples of frameworks that address the four elements described above include: External environment: PEST analysis or STEEP analysis is a framework used to examine the remote external environmental factors that can affect the organization, such as political, economic, social/demographic, and technological. Common variations include SLEPT, PESTLE, STEEPLE, and STEER analysis, each of which incorporates slightly different emphases.
Industry environment: The Porter Five Forces Analysis framework helps to determine the competitive rivalry and therefore attractiveness of a market. It is used to help determine the portfolio of offerings the organization will provide and in which markets. Relationship of internal and external environment: SWOT analysis is one of the most basic and widely used frameworks, which examines both internal elements of the organization—Strengths and Weaknesses—and external elements—Opportunities and Threats. It helps examine the organization's resources in the context of its environment. === Scenario planning === A number of strategists use scenario planning techniques to deal with change. The way Peter Schwartz put it in 1991 is that strategic outcomes cannot be known in advance so the sources of competitive advantage cannot be predetermined. The fast changing business environment is too uncertain for us to find sustainable value in formulas of excellence or competitive advantage. Instead, scenario planning is a technique in which multiple outcomes can be developed, their implications assessed, and their likelihood of occurrence evaluated. According to Pierre Wack, scenario planning is about insight, complexity, and subtlety, not about formal analysis and numbers. Some business planners are starting to use a complexity theory approach to strategy. Complexity can be thought of as chaos with a dash of order. Chaos theory deals with turbulent systems that rapidly become disordered. Complexity is not quite so unpredictable. It involves multiple agents interacting in such a way that a glimpse of structure may appear. === Measuring and controlling implementation === Once the strategy is determined, various goals and measures may be established to chart a course for the organization, measure performance and control implementation of the strategy.
Tools such as the balanced scorecard and strategy maps help crystallize the strategy, by relating key measures of success and performance to the strategy. These tools measure financial, marketing, production, organizational development, and innovation measures to achieve a 'balanced' perspective. Advances in information technology and data availability enable the gathering of more information about performance, allowing managers to take a much more analytical view of their business than before. Strategy may also be organized as a series of "initiatives" or "programs", each of which comprises one or more projects. Various monitoring and feedback mechanisms may also be established, such as regular meetings between divisional and corporate management to control implementation. === Evaluation === A key component of strategic management, which is often overlooked when planning, is evaluation. Evaluation may involve looking at what was done (implementation) and what happened as a result, or it may involve evaluating options to see what potential different options may open up, in order to decide on planned actions. There are many ways to evaluate whether or not strategic priorities and plans have been achieved; one such method is Robert Stake's Responsive Evaluation. Responsive evaluation provides a naturalistic and humanistic approach to program evaluation. In expanding beyond the goal-oriented or pre-ordinate evaluation design, responsive evaluation takes into consideration the program's background (history), conditions, and transactions among stakeholders. It is largely emergent; the design unfolds as contact is made with stakeholders. == Limitations == While strategies are established to set direction, focus effort, define or clarify the organization, and provide consistency or guidance in response to the environment, these very elements also mean that certain signals are excluded from consideration or de-emphasized.
Mintzberg wrote in 1987: "Strategy is a categorizing scheme by which incoming stimuli can be ordered and dispatched." Since a strategy orients the organization in a particular manner or direction, that direction may not effectively match the environment, initially (if a bad strategy) or over time as circumstances change. As such, Mintzberg continued, "Strategy [once established] is a force that resists change, not encourages it." Therefore, a critique of strategic management is that it can overly constrain managerial discretion in a dynamic environment. "How can individuals, organizations and societies cope as well as possible with ... issues too complex to be fully understood, given the fact that actions initiated on the basis of inadequate understanding may lead to significant regret?" Some theorists insist on an iterative approach, considering in turn objectives, implementation and resources. I.e., a "...repetitive learning cycle [rather than] a linear progression towards a clearly defined final destination." Strategies must be able to adjust during implementation because "humans rarely can proceed satisfactorily except by learning from experience; and modest probes, serially modified on the basis of feedback, usually are the best method for such learning." In 2000, Gary Hamel coined the term strategic convergence to explain the limited scope of the strategies being used by rivals in greatly differing circumstances. He lamented that successful strategies are imitated by firms that do not understand that for a strategy to work, it must account for the specifics of each situation. Woodhouse and Collingridge claim that the essence of being "strategic" lies in a capacity for "intelligent trial-and-error" rather than strict adherence to finely honed strategic plans. Strategy should be seen as laying out the general path rather than precise steps. Means are as likely to determine ends as ends are to determine means.
The objectives that an organization might wish to pursue are limited by the range of feasible approaches to implementation. (There will usually be only a small number of approaches that will not only be technically and administratively possible, but also satisfactory to the full range of organizational stakeholders.) In turn, the range of feasible implementation approaches is determined by the availability of resources. == Strategic themes == Various strategic approaches used across industries (themes) have arisen over the years. These include the shift from product-driven demand to customer- or marketing-driven demand (described above), the increased use of self-service approaches to lower cost, changes in the value chain or corporate structure due to globalization (e.g., off-shoring of production and assembly), and the internet. === Self-service === One theme in strategic competition has been the trend towards self-service, often enabled by technology, where the customer takes on a role previously performed by a worker to lower costs for the firm and perhaps prices. Examples include: Automated teller machines (ATMs) to obtain cash rather than via a bank teller; Self-service at the gas pump rather than with help from an attendant; Retail internet orders input by the customer rather than a retail clerk, such as online book sales; Mass-produced ready-to-assemble furniture transported by the customer; Self-checkout at the grocery store; and Online banking and bill payment. === Globalization and the virtual firm === One definition of globalization refers to the integration of economies due to technology and supply chain process innovation. Companies are no longer required to be vertically integrated (i.e., designing, producing, assembling, and selling their products). In other words, the value chain for a company's product may no longer be entirely within one firm; several entities comprising a virtual firm may exist to fulfill the customer requirement.
For example, some companies have chosen to outsource production to third parties, retaining only design and sales functions inside their organization. === Internet and information availability === The internet has dramatically empowered consumers and enabled buyers and sellers to come together with drastically reduced transaction and intermediary costs, creating much more robust marketplaces for the purchase and sale of goods and services. The Internet has enabled many Internet-based entrepreneurs to tap serendipity as a strategic advantage and thrive. Examples include online auction sites, internet dating services, and internet book sellers. In many industries, the internet has dramatically altered the competitive landscape. Services that used to be provided within one entity (e.g., a car dealership providing financing and pricing information) are now provided by third parties. Further, compared to traditional media like television, the internet has caused a major shift in viewing habits through on demand content which has led to an increasingly fragmented audience. Author Phillip Evans said in 2013 that networks are challenging traditional hierarchies. Value chains may also be breaking up ("deconstructing") where information aspects can be separated from functional activity. Data that is readily available for free or very low cost makes it harder for information-based, vertically integrated businesses to remain intact. Evans said: "The basic story here is that what used to be vertically integrated, oligopolistic competition among essentially similar kinds of competitors is evolving, by one means or another, from a vertical structure to a horizontal one. Why is that happening? It's happening because transaction costs are plummeting and because scale is polarizing. The plummeting of transaction costs weakens the glue that holds value chains together, and allows them to separate." 
He used Wikipedia as an example of a network that has challenged the traditional encyclopedia business model. Evans predicts the emergence of a new form of industrial organization called a "stack", analogous to a technology stack, in which competitors rely on a common platform of inputs (services or information), essentially layering the remaining competing parts of their value chains on top of this common platform. === Sustainability === In the past decade, sustainability—or the ability to successfully sustain a company in a context of rapidly changing environmental, social, health, and economic circumstances—has emerged as a crucial aspect of any strategy development. Research focusing on sustainability in commercial strategies has led to the emergence of the concept of "embedded sustainability" – defined by its authors Chris Laszlo and Nadya Zhexembayeva as "incorporation of environmental, health, and social value into the core business with no trade-off in price or quality—in other words, with no social or green premium." Their research showed that embedded sustainability offers at least seven distinct opportunities for business value and competitive advantage creation: a) better risk management, b) increased efficiency through reduced waste and resource use, c) better product differentiation, d) new market entrances, e) enhanced brand and reputation, f) greater opportunity to influence industry standards, and g) greater opportunity for radical innovation. Research further suggested that innovation driven by resource depletion can result in fundamental competitive advantages for a company's products and services, as well as the company strategy as a whole, when the right principles of innovation are applied. Asset managers who committed to integrating embedded sustainability factors in their capital allocation decisions created a stronger return on investment than managers that did not strategically integrate sustainability into their similar business model.
To achieve genuine sustainability and these associated benefits, corporations have historically relied on a variety of mechanisms that can be integrated into their management strategy. Timothy Galpin in his chapter of “Business Strategies for Sustainability: A Research Anthology” discusses four “Internal Strategic Management Components” to build sustainability. They are as follows: Mission: Defines the purpose and priorities of the organization, ultimately providing critical signals to organizational stakeholders regarding the aims of the firm. Values: Refers to the expectations of internal stakeholders, and communicates the organisation’s belief system to various external stakeholders. Goals: Provides a roadmap of the firm’s organisational activity and a basis for which to measure progress and performance. Capabilities and resources: The development of patterns of activity and investment decisions that facilitate sustainable business practices. To fully utilise these strategic management components, a firm’s mission, values, goals, resources, and capabilities need to function in alignment with one another. This develops consistency across management and employee behaviour. Research has indicated that this alignment leads to improved firm performance. Following the embedding of sustainability in a firm’s strategic management plan, to fully reap the benefits the agenda must be communicated effectively to internal and external stakeholders. Doing so satisfies stakeholder theory, whereby the firm maintains ‘trustful and mutually respectful relationships with the various stakeholders’. In the past, this has consisted of advertising and disclosing sustainability information and reports. Firms are able to promote their superior sustainability performance and ultimately possess higher market valuations in comparison to firms that do not provide sustainability reporting.
The amalgamation and alignment of these key internal strategic management components, in conjunction with thorough communication of the firm’s sustainability agenda, is required to achieve these associated benefits and is the reason many firms are pursuing such tactics more frequently. == Strategy as learning == === Learning organization === In 1990, Peter Senge, who had collaborated with Arie de Geus at Dutch Shell, popularized de Geus' notion of the "learning organization". The theory is that gathering and analyzing information is a necessary requirement for business success in the information age. To do this, Senge claimed that an organization would need to be structured such that: People can continuously expand their capacity to learn and be productive. New patterns of thinking are nurtured. Collective aspirations are encouraged. People are encouraged to see the "whole picture" together. Senge identified five disciplines of a learning organization. They are: Personal responsibility, self-reliance, and mastery – We accept that we are the masters of our own destiny. We make decisions and live with the consequences of them. When a problem needs to be fixed, or an opportunity exploited, we take the initiative to learn the required skills to get it done. Mental models – We need to explore our personal mental models to understand the subtle effect they have on our behaviour. Shared vision – The vision of where we want to be in the future is discussed and communicated to all. It provides guidance and energy for the journey ahead. Team learning – We learn together in teams. This involves a shift from "a spirit of advocacy to a spirit of enquiry". Systems thinking – We look at the whole rather than the parts. This is what Senge calls the "Fifth discipline". It is the glue that integrates the other four into a coherent strategy. For an alternative approach to the "learning organization", see Garratt, B. (1987). Geoffrey Moore (1991) and R. Frank and P. 
Cook also detected a shift in the nature of competition. Markets driven by technical standards or by "network effects" can give the dominant firm a near-monopoly. The same is true of networked industries in which interoperability requires compatibility between users. Examples include Internet Explorer's and Amazon's early dominance of their respective industries. IE's later decline shows that such dominance may be only temporary. Moore showed how firms could attain this enviable position by using E.M. Rogers' five-stage adoption process and focusing on one group of customers at a time, using each group as a base for reaching the next group. The most difficult step is making the transition between introduction and mass acceptance. (See Crossing the Chasm). If successful, a firm can create a bandwagon effect in which the momentum builds and its product becomes a de facto standard. === Integrated view to learning === Bolisani & Bratianu (2017) defined knowledge strategy as an integration of rational thinking and dynamic learning. Rational planning is a three-step process: the first step is to collect information, the second is to analyze it, and the third is to formulate goals and plans based on the analysis. Emergent planning contains the same three steps running in the opposite direction: it starts from practical experience, which is analyzed in the second step and then formulated into a strategy in the third. Bolisani and Bratianu combine these two approaches into an "integrated view". To start the planning process for knowledge and KM strategy creation, a company can prepare a preliminary plan on the basis of rational analysis of its internal and external environments. While creating rational and predictive plans, the company can similarly draw on practical, adapted knowledge, for example learning on the ground.
The idea behind the integrated view is to combine the general visions of knowledge strategy with both the current practical understanding and future ideas. This model moves the decision-making process in a more interactive and co-creative direction. == Strategy as adapting to change == In 1969, Peter Drucker coined the phrase Age of Discontinuity to describe the way change disrupts lives. In an age of continuity, attempts to predict the future by extrapolating from the past can be accurate. But according to Drucker, we are now in an age of discontinuity and extrapolating is ineffective. He identified four sources of discontinuity: new technologies, globalization, cultural pluralism and knowledge capital. In 1970, Alvin Toffler in Future Shock described a trend towards accelerating rates of change. He illustrated how social and technical phenomena had shorter lifespans with each generation, and he questioned society's ability to cope with the resulting turmoil and accompanying anxiety. In past eras, periods of change were always punctuated with times of stability. This allowed society to assimilate the change before the next change arrived. But these periods of stability had all but disappeared by the late 20th century. In 1980 in The Third Wave, Toffler characterized this shift to relentless change as the defining feature of the third phase of civilization (the first two phases being the agricultural and industrial waves). In 1978, Derek F. Abell (Abell, D. 1978) described "strategic windows" and stressed the importance of the timing (both entrance and exit) of any given strategy. This led some strategic planners to build planned obsolescence into their strategies. In 1983, Noel Tichy wrote that because we are all beings of habit we tend to repeat what we are comfortable with. He wrote that this is a trap that constrains our creativity, prevents us from exploring new ideas, and hampers our dealing with the full complexity of new issues.
He developed a systematic method of dealing with change that involved looking at any new issue from three angles: technical and production, political and resource allocation, and corporate culture. In 1989, Charles Handy identified two types of change. "Strategic drift" is a gradual change that occurs so subtly that it is not noticed until it is too late. By contrast, "transformational change" is sudden and radical. It is typically caused by discontinuities (or exogenous shocks) in the business environment. The point where a new trend is initiated is called a "strategic inflection point" by Andy Grove. Inflection points can be subtle or radical. In 1990, Richard Pascale wrote that relentless change requires that businesses continuously reinvent themselves. His famous maxim is "Nothing fails like success", by which he means that what was a strength yesterday becomes the root of weakness today. We tend to depend on what worked yesterday and refuse to let go of what worked so well for us in the past. Prevailing strategies become self-confirming. To avoid this trap, businesses must stimulate a spirit of inquiry and healthy debate. They must encourage a creative process of self-renewal based on constructive conflict. In 1996, Adrian Slywotzky showed how changes in the business environment are reflected in value migrations between industries, between companies, and within companies. He claimed that recognizing the patterns behind these value migrations is necessary if we wish to understand the world of chaotic change. In "Profit Patterns" (1999) he described businesses as being in a state of strategic anticipation as they try to spot emerging patterns. Slywotzky and his team identified 30 patterns that have transformed industry after industry. In 1997, Clayton Christensen (1997) took the position that great companies can fail precisely because they do everything right since the capabilities of the organization also define its disabilities.
Christensen's thesis is that outstanding companies lose their market leadership when confronted with disruptive technology. He called the approach to discovering the emerging markets for disruptive technologies agnostic marketing, i.e., marketing under the implicit assumption that no one – not the company, not the customers – can know how or in what quantities a disruptive product can or will be used without the experience of using it. In 1999, Constantinos Markides reexamined the nature of strategic planning. He described strategy formation and implementation as an ongoing, never-ending, integrated process requiring continuous reassessment and reformation. Strategic management is planned and emergent, dynamic and interactive. J. Moncrieff (1999) stressed strategy dynamics. He claimed that strategy is partially deliberate and partially unplanned. The unplanned element comes from emergent strategies that result from the emergence of opportunities and threats in the environment and from "strategies in action" (ad hoc actions across the organization). David Teece pioneered research on resource-based strategic management and the dynamic capabilities perspective, defined as "the ability to integrate, build, and reconfigure internal and external competencies to address rapidly changing environments". His 1997 paper (with Gary Pisano and Amy Shuen) "Dynamic Capabilities and Strategic Management" was the most cited paper in economics and business for the period from 1995 to 2005. In 2000, Gary Hamel discussed strategic decay, the notion that the value of every strategy, no matter how brilliant, decays over time. == Strategy as operational excellence == === Quality === A large group of theorists felt the area where western business was most lacking was product quality. W. Edwards Deming, Joseph M. Juran, Andrew Thomas Kearney, Philip Crosby and Armand V. 
Feigenbaum suggested quality improvement techniques such as total quality management (TQM), continuous improvement (kaizen), lean manufacturing, Six Sigma, and return on quality (ROQ). In contrast, James Heskett (1988), Earl Sasser (1995), William Davidow, Len Schlesinger, A. Parasuraman (1988), Len Berry, Jane Kingman-Brundage, Christopher Hart, and Christopher Lovelock (1994) felt that poor customer service was the problem. They gave us fishbone diagramming, service charting, Total Customer Service (TCS), the service profit chain, service gaps analysis, the service encounter, strategic service vision, service mapping, and service teams. Their underlying assumption was that there is no better source of competitive advantage than a continuous stream of delighted customers. Process management uses some of the techniques from product quality management and some of the techniques from customer service management. It looks at an activity as a sequential process. The objective is to find inefficiencies and make the process more effective. Although the procedures have a long history, dating back to Taylorism, the scope of their applicability has been greatly widened, leaving no aspect of the firm free from potential process improvements. Because of the broad applicability of process management techniques, they can be used as a basis for competitive advantage. Carl Sewell, Frederick F. Reichheld, Christian Grönroos, and Earl Sasser observed that businesses were spending more on customer acquisition than on retention. They showed how a competitive advantage could be found in ensuring that customers returned again and again. Reichheld broadened the concept to include loyalty from employees, suppliers, distributors and shareholders. They developed techniques for estimating customer lifetime value (CLV) for assessing long-term relationships.
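The customer-lifetime-value techniques mentioned above can be sketched as a simple discounted retention model (a minimal illustration; the function and all figures here are hypothetical assumptions, not taken from the authors cited):

```python
def customer_lifetime_value(margin, retention, discount, periods=30):
    """Estimate CLV as the discounted sum of expected per-period margins.

    Each period the customer contributes `margin` in profit, remains a
    customer with probability `retention`, and future profit is
    discounted at rate `discount`. All inputs are illustrative.
    """
    return sum(
        margin * retention ** t / (1 + discount) ** t
        for t in range(periods)
    )

# Hypothetical figures: $100 annual margin, 80% retention, 10% discount rate.
clv = customer_lifetime_value(margin=100, retention=0.8, discount=0.1)
```

Under these assumed figures the relationship is worth roughly $367, several times a single year's margin, which is the arithmetic behind the argument that retention can matter more than acquisition.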
The concepts begat attempts to recast selling and marketing into a long term endeavor that created a sustained relationship (called relationship selling, relationship marketing, and customer relationship management). Customer relationship management (CRM) software became integral to many firms. === Reengineering === Michael Hammer and James Champy felt that these resources needed to be restructured. In a process that they labeled reengineering, firms reorganized their assets around whole processes rather than tasks. In this way a team of people saw a project through, from inception to completion. This avoided functional silos where isolated departments seldom talked to each other. It also eliminated waste due to functional overlap and interdepartmental communications. In 1989, Richard Lester and the researchers at the MIT Industrial Performance Center identified seven best practices and concluded that firms must accelerate the shift away from the mass production of low cost standardized products. The seven areas of best practice were: Simultaneous continuous improvement in cost, quality, service, and product innovation; Breaking down organizational barriers between departments; Eliminating layers of management, creating flatter organizational hierarchies; Closer relationships with customers and suppliers; Intelligent use of new technology; Global focus; and Improving human resource skills. The search for best practices is also called benchmarking. This involves determining where you need to improve, finding an organization that is exceptional in this area, then studying the company and applying its best practices in your firm. == Other perspectives on strategy == === Strategy as problem solving === Professor Richard P. Rumelt described strategy as a type of problem solving in 2011. He wrote that good strategy has an underlying structure called a kernel.
The kernel has three parts: 1) A diagnosis that defines or explains the nature of the challenge; 2) A guiding policy for dealing with the challenge; and 3) Coherent actions designed to carry out the guiding policy. President Kennedy outlined these three elements of strategy in his Cuban Missile Crisis Address to the Nation of 22 October 1962: Diagnosis: "This Government, as promised, has maintained the closest surveillance of the Soviet military buildup on the island of Cuba. Within the past week, unmistakable evidence has established the fact that a series of offensive missile sites is now in preparation on that imprisoned island. The purpose of these bases can be none other than to provide a nuclear strike capability against the Western Hemisphere." Guiding Policy: "Our unswerving objective, therefore, must be to prevent the use of these missiles against this or any other country, and to secure their withdrawal or elimination from the Western Hemisphere." Action Plans: First among seven numbered steps was the following: "To halt this offensive buildup a strict quarantine on all offensive military equipment under shipment to Cuba is being initiated. All ships of any kind bound for Cuba from whatever nation or port will, if found to contain cargoes of offensive weapons, be turned back." Active strategic management required active information gathering and active problem solving. In the early days of Hewlett-Packard (HP), Dave Packard and Bill Hewlett devised an active management style that they called management by walking around (MBWA). Senior HP managers were seldom at their desks. They spent most of their days visiting employees, customers, and suppliers. This direct contact with key people provided them with a solid grounding from which viable strategies could be crafted. Management consultants Tom Peters and Robert H. Waterman had used the term in their 1982 book In Search of Excellence: Lessons From America's Best-Run Companies. 
Some Japanese managers employ a similar system, which originated at Honda, and is sometimes called the 3 G's (Genba, Genbutsu, and Genjitsu, which translate into "actual place", "actual thing", and "actual situation"). === Creative vs analytic approaches === In 2010, IBM released a study summarizing three conclusions drawn from interviews with 1,500 CEOs around the world: 1) complexity is escalating, 2) enterprises are not equipped to cope with this complexity, and 3) creativity is now the single most important leadership competency. IBM said that it is needed in all aspects of leadership, including strategic thinking and planning. Similarly, McKeown argued that over-reliance on any particular approach to strategy is dangerous and that multiple methods can be used to combine creativity and analytics to create an "approach to shaping the future" that is difficult to copy. === Non-strategic management === A 1938 treatise by Chester Barnard, based on his own experience as a business executive, described the process as informal, intuitive, non-routinized and involving primarily oral, two-way communications. Barnard says "The process is the sensing of the organization as a whole and the total situation relevant to it. It transcends the capacity of merely intellectual methods, and the techniques of discriminating the factors of the situation. The terms pertinent to it are "feeling", "judgement", "sense", "proportion", "balance", "appropriateness". It is a matter of art rather than science." In 1973, Mintzberg found that senior managers typically deal with unpredictable situations, so they strategize in ad hoc, flexible, dynamic, and implicit ways. He wrote, "The job breeds adaptive information-manipulators who prefer the live concrete situation. The manager works in an environment of stimulus-response, and he develops in his work a clear preference for live action."
In 1982, John Kotter studied the daily activities of 15 executives and concluded that they spent most of their time developing and working a network of relationships that provided general insights and specific details for strategic decisions. They tended to use "mental road maps" rather than systematic planning techniques. Daniel Isenberg's 1984 study of senior managers found that their decisions were highly intuitive. Executives often sensed what they were going to do before they could explain why. He claimed in 1986 that one of the reasons for this is the complexity of strategic decisions and the resultant information uncertainty. Zuboff claimed that information technology was widening the divide between senior managers (who typically make strategic decisions) and operational level managers (who typically make routine decisions). She alleged that prior to the widespread use of computer systems, managers, even at the most senior level, engaged in both strategic decisions and routine administration, but as computers facilitated (in her term, "deskilled") routine processes, these activities were moved further down the hierarchy, leaving senior management free for strategic decision making. In 1977, Abraham Zaleznik distinguished leaders from managers. He described leaders as visionaries who inspire, while managers care about process. He claimed that the rise of managers was the main cause of the decline of American business in the 1970s and 1980s. Lack of leadership is most damaging at the level of strategic management where it can paralyze an entire organization. According to Corner, Kinicki, and Keats, strategic decision making in organizations occurs at two levels: individual and aggregate. They developed a model of parallel strategic decision making. The model identifies two parallel processes that involve getting attention, encoding information, storage and retrieval of information, strategic choice, strategic outcome and feedback.
The individual and organizational processes interact at each stage. For instance, competition-oriented objectives are based on the knowledge of competing firms, such as their market share. === Strategy as marketing === The 1980s also saw the widespread acceptance of positioning theory. Although the theory originated with Jack Trout in 1969, it did not gain wide acceptance until Al Ries and Jack Trout wrote their classic book Positioning: The Battle For Your Mind (1979). The basic premise is that a strategy should not be judged by internal company factors but by the way customers see it relative to the competition. Crafting and implementing a strategy involves creating a position in the mind of the collective consumer. Several techniques enabled the practical use of positioning theory. Perceptual mapping, for example, creates visual displays of the relationships between positions. Multidimensional scaling, discriminant analysis, factor analysis and conjoint analysis are mathematical techniques used to determine the most relevant characteristics (called dimensions or factors) upon which positions should be based. Preference regression can be used to determine vectors of ideal positions and cluster analysis can identify clusters of positions. In 1992, Jay Barney saw strategy as assembling the optimum mix of resources, including human, technology and suppliers, and then configuring them in unique and sustainable ways. James Gilmore and Joseph Pine found competitive advantage in mass customization. Flexible manufacturing techniques allowed businesses to individualize products for each customer without losing economies of scale. This effectively turned the product into a service. They also realized that if a service is mass-customized by creating a "performance" for each individual client, that service would be transformed into an "experience".
Their book, The Experience Economy, along with the work of Bernd Schmitt, convinced many to see service provision as a form of theatre. This school of thought is sometimes referred to as customer experience management (CEM). === Information- and technology-driven strategy === Many industries with a high information component are being transformed. For example, Encarta demolished Encyclopædia Britannica (whose sales have plummeted 80% since their peak of $650 million in 1990) before it was, in turn, eclipsed by collaborative encyclopedias like Wikipedia. The music industry was similarly disrupted. The technology sector has provided some strategies directly. For example, from the software development industry, agile software development provides a model for shared development processes. Peter Drucker conceived of the "knowledge worker" in the 1950s. He described how fewer workers would do physical labor, and more would apply their minds. In 1984, John Naisbitt theorized that the future would be driven largely by information: companies that managed information well could obtain an advantage; however, the profitability of what he called "information float" (information that the company had and others desired) would disappear as inexpensive computers made information more accessible. Daniel Bell (1985) examined the sociological consequences of information technology, while Gloria Schuck and Shoshana Zuboff looked at psychological factors. Zuboff distinguished between "automating technologies" and "informating technologies". She studied the effect that both had on workers, managers and organizational structures. She largely confirmed Drucker's predictions about the importance of flexible decentralized structure, work teams, knowledge sharing and the knowledge worker's central role. Zuboff also detected a new basis for managerial authority, based on knowledge (also predicted by Drucker) which she called "participative management". 
=== Regulatory strategy === An organisation's regulatory strategy accounts for how the organisation will respond to its regulatory bodies and standards as a feature of its operating environment, for example for businesses in the financial services, health care or energy industries. Beardsley et al., for example, refer to companies that are fatalistic or confrontational in their approach to being regulated. They recommend instead that the regulatory aspects of the business environment be integrated into the wider aspects of strategic planning and a coordinated approach taken in dialogue with regulators. The term "regulatory strategy" is also used by regulators and legislators to define the aims and processes through which they will undertake their regulatory functions. === Maturity of planning process === McKinsey & Company developed a capability maturity model in the 1970s to describe the sophistication of planning processes, with strategic management ranked the highest. The four stages include: Financial planning, which is primarily about annual budgets and a functional focus, with limited regard for the environment; Forecast-based planning, which includes multi-year budgets and more robust capital allocation across business units; Externally oriented planning, where a thorough situation analysis and competitive assessment is performed; Strategic management, where widespread strategic thinking occurs and a well-defined strategic framework is used. === PIMS study === The long-term PIMS study, started in the 1960s and lasting for 19 years, attempted to understand the Profit Impact of Marketing Strategies (PIMS), particularly the effect of market share. The initial conclusion of the study was unambiguous: the greater a company's market share, the greater its rate of profit. Market share provides economies of scale. It also provides experience curve advantages. The combined effect is increased profits. 
The benefits of high market share naturally led to an interest in growth strategies. The relative advantages of horizontal integration, vertical integration, diversification, franchises, mergers and acquisitions, joint ventures and organic growth were discussed. Other research indicated that a low market share strategy could still be very profitable. Schumacher (1973), Woo and Cooper (1982), Levenson (1984), and later Traverso (2002) showed how smaller niche players obtained very high returns. == Other influences on business strategy == === Military strategy === In the 1980s business strategists realized that there was a vast knowledge base stretching back thousands of years that they had barely examined. They turned to military strategy for guidance. Military strategy books such as The Art of War by Sun Tzu, On War by von Clausewitz, and The Red Book by Mao Zedong became business classics. From Sun Tzu, they learned the tactical side of military strategy and specific tactical prescriptions. From von Clausewitz, they learned the dynamic and unpredictable nature of military action. From Mao, they learned the principles of guerrilla warfare. Important marketing warfare books include Business War Games by Barrie James, Marketing Warfare by Al Ries and Jack Trout and Leadership Secrets of Attila the Hun by Wess Roberts. The marketing warfare literature also examined leadership and motivation, intelligence gathering, types of marketing weapons, logistics and communications. By the twenty-first century, marketing warfare strategies had fallen out of favour, giving way to non-confrontational approaches. In 1989, Dudley Lynch and Paul L. Kordis published Strategy of the Dolphin: Scoring a Win in a Chaotic World. "The Strategy of the Dolphin" was developed to give guidance as to when to use aggressive strategies and when to use passive strategies. A variety of aggressive strategies were developed. In 1993, J. Moore used a similar metaphor. 
Instead of using military terms, he created an ecological theory of predators and prey (see ecological model of competition), a sort of Darwinian management strategy in which market interactions mimic long term ecological stability. Author Phillip Evans said in 2014 that "Henderson's central idea was what you might call the Napoleonic idea of concentrating mass against weakness, of overwhelming the enemy. What Henderson recognized was that, in the business world, there are many phenomena which are characterized by what economists would call increasing returns—scale, experience. The more you do of something, disproportionately the better you get. And therefore he found a logic for investing in such kinds of overwhelming mass in order to achieve competitive advantage. And that was the first introduction of essentially a military concept of strategy into the business world. ... It was on those two ideas, Henderson's idea of increasing returns to scale and experience, and Porter's idea of the value chain, encompassing heterogeneous elements, that the whole edifice of business strategy was subsequently erected." == Traits of successful companies == Like Peters and Waterman a decade earlier, James Collins and Jerry Porras spent years conducting empirical research on what makes great companies. Six years of research uncovered a key underlying principle behind the 19 successful companies that they studied: They all encourage and preserve a core ideology that nurtures the company. Even though strategy and tactics change daily, the companies, nevertheless, were able to maintain a core set of values. These core values encourage employees to build an organization that lasts. In Built To Last (1994) they claim that short term profit goals, cost cutting, and restructuring will not stimulate dedicated employees to build a great company that will endure. In 2000 Collins coined the term "built to flip" to describe the prevailing business attitudes in Silicon Valley. 
It describes a business culture where technological change inhibits a long term focus. He also popularized the concept of the BHAG (Big Hairy Audacious Goal). Arie de Geus (1997) undertook a similar study and obtained similar results. He identified four key traits of companies that had prospered for 50 years or more. They are:
Sensitivity to the business environment – the ability to learn and adjust
Cohesion and identity – the ability to build a community with personality, vision, and purpose
Tolerance and decentralization – the ability to build relationships
Conservative financing
A company with these key characteristics he called a living company because it is able to perpetuate itself. If a company emphasizes knowledge rather than finance, and sees itself as an ongoing community of human beings, it has the potential to become great and endure for decades. Such an organization is an organic entity capable of learning (he called it a "learning organization") and capable of creating its own processes, goals, and persona. Will Mulcaster suggests that firms engage in a dialogue that centres around these questions:
Will the proposed competitive advantage create Perceived Differential Value?
Will the proposed competitive advantage create something that is different from the competition?
Will the difference add value in the eyes of potential customers? – This question will entail a discussion of the combined effects of price, product features and consumer perceptions.
Will the product add value for the firm? – Answering this question will require an examination of cost effectiveness and the pricing strategy.
== See also == == References == === Further reading === Cameron, Bobby Thomas. (2014). Using responsive evaluation in Strategic Management. Strategic Leadership Review 4 (2), 22–27. David Besanko, David Dranove, Scott Schaefer, and Mark Shanley (2012) Economics of Strategy, John Wiley & Sons, ISBN 978-1118273630 Edwards, Janice et al. 
Mastering Strategic Management – 1st Canadian Edition. BC Open Textbooks, 2014. Kemp, Roger L. "Strategic Planning for Local Government: A Handbook for Officials and Citizens," McFarland and Co., Inc., Jefferson, NC, USA, and London, England, UK, 2008 (ISBN 978-0-7864-3873-0) Kvint, Vladimir (2009) The Global Emerging Market: Strategic Management and Economics Pankaj Ghemawat – Harvard Strategy Professor: Competition and Business Strategy in Historical Perspective, Social Science History Network, Spring 2002 == External links == Media related to Strategic management at Wikimedia Commons Institute for Strategy and Competitiveness at Harvard Business School – recent publications The Journal of Business Strategies – online library
Wikipedia/Business_strategy
An artifact-centric business process model represents an operational model of business processes in which the changes and evolution of business data, or business entities, are considered as the main driver of the processes. The artifact-centric approach, a kind of data-centric business process modeling, focuses on describing how business data is changed or updated by a particular action or task throughout the process. == Overview == In general, a process model describes the activities conducted in order to achieve business goals (i.e., it is activity-centric), together with informational structures and organizational resources. Workflows, as a typical process modeling approach, often emphasize the sequencing of activities (i.e., control flows), but ignore the informational perspective or treat it only within the context of single activities. Without a complete view of the informational context, business actors often focus on what should be done instead of what can be done, hindering operational innovations. Business process modeling is a foundation for the design and management of business processes. Two key aspects of business process modeling are a formal framework that integrates both control flow and data, and a set of tools to assist all aspects of a business process life cycle. A typical business process life cycle includes at least a design phase, concerned with the “correct” realization of business logic in a resource-constrained environment, and an operational phase, concerned with optimizing and improving execution (operations). Traditional business process models emphasize a procedural and/or graph-based paradigm (i.e., control flow). Thus, methodologies to design workflow in those models are typically process-centric. It has been argued that a data-centric perspective is more useful for designing business processes in the modern era. Intuitively, business artifacts (or simply artifacts) are data objects whose manipulations define the underlying processes in a business model. 
Recent engineering and development efforts have adopted the artifact approach for design and analysis of business models. An important distinction between artifact-centric models and traditional data flow (computational) models is that the notion of the life cycle of the data objects is prominent in the former, while not existing in the latter. == Research and history == Artifact-centric modeling is an area of growing interest. Nigam and Caswell introduced the concept of business artifacts and information-centric processing of artifact lifecycles. Kumaran et al. carried out further studies on artifact-centric business processes. Bhattacharya described a successful business engagement which applied business artifact techniques to industrialize discovery processes in pharmaceutical research. Liu et al. formulated nine commonly used patterns in information-centric business operation models and developed a computational model based on Petri nets. Bhattacharya et al. provided a formal model for artifact-centric business processes with complexity results concerning static analysis of the semantics of such processes. Kumaran et al. presented a formalized information-centric approach to discovering business entities from activity-centric process models and transforming such models into artifact-centric business process models; an algorithm was provided to achieve this transformation automatically. Other related approaches to artifact-centric modelling have also been proposed. Van der Aalst et al. provide a case-handling approach in which a process is driven by the presence of data objects instead of control flows. A case is similar to the business entity concept in many respects. Wang and Kumar proposed document-driven workflow systems, which are designed based on data dependencies without the need for explicit control flows. Müller et al. also introduced a framework for the data-driven modelling of large process structures, namely COREPRO. 
The approach reduces modelling efforts significantly and provides mechanisms for maintaining data-driven process structures. Another related thread of work is the use of state machines to model object lifecycles. Industries often define data objects and standardize their lifecycles as state machines to facilitate interoperability between industry partners and enforce legal regulations. Redding et al. and Küster et al. give techniques to generate business processes that are compliant with predefined object lifecycles. In addition, event-driven process modelling, for example Event-driven Process Chains (EPC), also describes object lifecycles glued together by events. More recent work has dealt closely with artifact-centric process models. Gerede and Su developed a specification language, ABSL, to specify artifact behaviours in artifact-centric process models. The authors showed decidability results for their language in different cases and provided key insights into how an artifact-centric view can affect the specification of desirable business properties. Gerede et al. identified important classes of properties on artifact-centric operational models, focusing on persistence, uniqueness and arrival properties. They proposed a formal model for artifact-centric operational models to enable a static analysis of these properties and showed that the formal model guarantees persistence and uniqueness. Fritz, Hull, and Su formulated the technical problem of goal-directed workflow construction in the context of declarative artifact-centric workflow, and developed results concerning the general setting, design-time analysis, and the synthesis of workflow schemas from goal specifications. The work is among the important initial steps towards eventual support for tools that enable substantial automation of workflow design, analysis, and modification. Deutsch et al. 
introduced the artifact system model, which formalizes a business process modelling paradigm that has recently attracted the attention of both the industrial and research communities. The problem of automatic verification of artifact systems, with the goal of increasing confidence in the correctness of such business processes, is also studied. Sira and Chengfei proposed a novel view framework for artifact-centric business processes. It consists of an artifact-centric process model, a process view model, a set of consistency rules, and a construction approach for building process views. The formal model of artifact-centric business processes and views, namely ACP, is defined and used to describe artifacts, services, business rules that control the processes, as well as views. They developed a bottom-up abstraction mechanism for process view construction to derive views from underlying process models according to view requirements. Consistency rules are also defined to preserve the consistency between a constructed view and its underlying process. This work can be considered one approach to the abstraction, i.e., generalization, of artifact-centric business processes. The framework has also been extended to address modelling and change validation of inter-organizational business processes. == See also == Business architecture Business Model Canvas Business plan Business process illustration Business process mapping Business Process Modeling Notation Capability Maturity Model Integration Extended Enterprise Modeling Language Generalised Enterprise Reference Architecture and Methodology Model-driven engineering == References == == External links ==
Wikipedia/Artifact-centric_business_process_model
Controlled natural languages (CNLs) are subsets of natural languages that are obtained by restricting the grammar and vocabulary in order to reduce or eliminate ambiguity and complexity. Traditionally, controlled languages fall into two major types: those that improve readability for human readers (e.g. non-native speakers), and those that enable reliable automatic semantic analysis of the language. Languages of the first type (often called "simplified" or "technical" languages), for example ASD Simplified Technical English, Caterpillar Technical English and IBM's Easy English, are used in industry to increase the quality of technical documentation, and possibly to simplify the semi-automatic translation of the documentation. These languages restrict the writer by general rules such as "Keep sentences short", "Avoid the use of pronouns", "Only use dictionary-approved words", and "Use only the active voice". Languages of the second type have a formal syntax and formal semantics, and can be mapped to an existing formal language, such as first-order logic. Thus, these languages can be used as knowledge representation languages, and writing in them is supported by fully automatic consistency and redundancy checks, query answering, etc. == Languages == Existing controlled natural languages include: == Encoding == IETF has reserved simple as a BCP 47 variant subtag for simplified versions of languages. == See also == == References == == External links == Controlled Natural Languages Archived 2021-03-08 at the Wayback Machine
Wikipedia/Controlled_natural_language
Trivial Graph Format (TGF) is a simple text-based adjacency list file format for describing graphs, widely used because of its simplicity. == Format == The format consists of a list of node definitions, which map node IDs to labels, followed by a list of edges, which specify node pairs and an optional edge label. Because of its lack of standardization, the format has many variations. For instance, some implementations of the format require the node IDs to be integers, while others allow more general alphanumeric identifiers. Each node definition is a single line of text starting with the node ID, separated by a space from its label. The node definitions are separated from the edge definitions by a line containing the "#" character. Each edge definition is another line of text, starting with the two IDs for the endpoints of the edge separated by a space. If the edge has a label, it appears on the same line after the endpoint IDs. The graph may be interpreted as a directed or undirected graph. For directed graphs, to specify the concept of bi-directionality in an edge, one may either specify two edges (forward and back) or differentiate the edge by means of a label. == Example == A simple graph with two nodes and one edge might look like:
1 First node
2 Second node
#
1 2 Edge between the two
== See also == yEd, a graph editor that can handle the TGF file format. == References == == External links == Using TGF in the yFiles Graph Drawing library Using TGF in Wolfram Mathematica
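Because the format is line-oriented, a parser fits in a few lines. The following is an illustrative sketch only (the function name parse_tgf and its return shape are my own choices, not part of any TGF specification); it treats node and edge labels as optional, as described above.

```python
def parse_tgf(text):
    """Parse a TGF document into (nodes, edges).

    nodes: dict mapping node ID -> label (label may be empty)
    edges: list of (source ID, target ID, label) tuples
    """
    nodes, edges = {}, []
    in_edges = False
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line == "#":          # separator between node and edge sections
            in_edges = True
            continue
        if not in_edges:
            # node line: "<id> <label>", label optional
            node_id, _, label = line.partition(" ")
            nodes[node_id] = label
        else:
            # edge line: "<src> <dst> <label>", label optional
            src, _, rest = line.partition(" ")
            dst, _, label = rest.partition(" ")
            edges.append((src, dst, label))
    return nodes, edges

sample = """1 First node
2 Second node
#
1 2 Edge between the two"""
nodes, edges = parse_tgf(sample)
```

Whether the edge list is read as directed or undirected is left to the caller, which mirrors the ambiguity of the format itself.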
Wikipedia/Trivial_Graph_Format
This is a glossary of graph theory. Graph theory is the study of graphs, systems of nodes or vertices connected in pairs by lines or edges. == Symbols == Square brackets [ ] G[S] is the induced subgraph of a graph G for vertex subset S. Prime symbol ' The prime symbol is often used to modify notation for graph invariants so that it applies to the line graph instead of the given graph. For instance, α(G) is the independence number of a graph; α′(G) is the matching number of the graph, which equals the independence number of its line graph. Similarly, χ(G) is the chromatic number of a graph; χ ′(G) is the chromatic index of the graph, which equals the chromatic number of its line graph. == A == absorbing An absorbing set A of a directed graph G is a set of vertices such that for any vertex v ∈ G ∖ A, there is an edge from v towards a vertex of A. achromatic The achromatic number of a graph is the maximum number of colors in a complete coloring. acyclic 1. A graph is acyclic if it has no cycles. An undirected acyclic graph is the same thing as a forest. An acyclic directed graph, which is a digraph without directed cycles, is often called a directed acyclic graph, especially in computer science. 2. An acyclic coloring of an undirected graph is a proper coloring in which every two color classes induce a forest. adjacency matrix The adjacency matrix of a graph is a matrix whose rows and columns are both indexed by vertices of the graph, with a one in the cell for row i and column j when vertices i and j are adjacent, and a zero otherwise. adjacent 1. The relation between two vertices that are both endpoints of the same edge. 2. The relation between two distinct edges that share an end vertex. α For a graph G, α(G) (using the Greek letter alpha) is its independence number (see independent), and α′(G) is its matching number (see matching). 
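The adjacency matrix entry above translates directly into code. This is a minimal sketch in pure Python with illustrative names (real projects would typically use a library such as NetworkX or NumPy instead):

```python
def adjacency_matrix(n, edge_list):
    """Build the adjacency matrix of an undirected graph.

    Vertices are 0..n-1; cell [i][j] is 1 exactly when vertices
    i and j are adjacent, and 0 otherwise.
    """
    matrix = [[0] * n for _ in range(n)]
    for i, j in edge_list:
        matrix[i][j] = 1
        matrix[j][i] = 1  # symmetric, since the graph is undirected
    return matrix

# A triangle on vertices 0, 1, 2 plus an isolated vertex 3:
m = adjacency_matrix(4, [(0, 1), (1, 2), (0, 2)])
```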
alternating In a graph with a matching, an alternating path is a path whose edges alternate between matched and unmatched edges. An alternating cycle is, similarly, a cycle whose edges alternate between matched and unmatched edges. An augmenting path is an alternating path that starts and ends at unsaturated vertices. A larger matching can be found as the symmetric difference of the matching and the augmenting path; a matching is maximum if and only if it has no augmenting path. antichain In a directed acyclic graph, a subset S of vertices that are pairwise incomparable, i.e., for any x and y in S, there is no directed path from x to y or from y to x. Inspired by the notion of antichains in partially ordered sets. anti-edge Synonym for non-edge, a pair of non-adjacent vertices. anti-triangle A three-vertex independent set, the complement of a triangle. apex 1. An apex graph is a graph in which one vertex can be removed, leaving a planar subgraph. The removed vertex is called the apex. A k-apex graph is a graph that can be made planar by the removal of k vertices. 2. Synonym for universal vertex, a vertex adjacent to all other vertices. arborescence Synonym for a rooted and directed tree; see tree. arc See edge. arrow An ordered pair of vertices, such as an edge in a directed graph. An arrow (x, y) has a tail x, a head y, and a direction from x to y; y is said to be the direct successor to x and x the direct predecessor to y. The arrow (y, x) is the inverted arrow of the arrow (x, y). articulation point A vertex in a connected graph whose removal would disconnect the graph. More generally, a vertex whose removal increases the number of components. -ary A k-ary tree is a rooted tree in which every internal vertex has no more than k children. A 1-ary tree is just a path. 
A 2-ary tree is also called a binary tree, although that term more properly refers to 2-ary trees in which the children of each node are distinguished as being left or right children (with at most one of each type). A k-ary tree is said to be complete if every internal vertex has exactly k children. augmenting A special type of alternating path; see alternating. automorphism A graph automorphism is a symmetry of a graph, an isomorphism from the graph to itself. == B == bag One of the sets of vertices in a tree decomposition. balanced A bipartite or multipartite graph is balanced if each two subsets of its vertex partition have sizes within one of each other. ball A ball (also known as a neighborhood ball or distance ball) is the set of all vertices that are at most distance r from a vertex. More formally, for a given vertex v and radius r, the ball B(v,r) consists of all vertices whose shortest path distance to v is less than or equal to r. bandwidth The bandwidth of a graph G is the minimum, over all orderings of vertices of G, of the length of the longest edge (the number of steps in the ordering between its two endpoints). It is also one less than the size of the maximum clique in a proper interval completion of G, chosen to minimize the clique size. biclique Synonym for complete bipartite graph or complete bipartite subgraph; see complete. biconnected Usually a synonym for 2-vertex-connected, but sometimes includes K2 though it is not 2-connected. See connected; for biconnected components, see component. binding number The smallest possible ratio of the number of neighbors of a proper subset of vertices to the size of the subset. bipartite A bipartite graph is a graph whose vertices can be divided into two disjoint sets such that the vertices in one set are not connected to each other, but may be connected to vertices in the other set. 
Put another way, a bipartite graph is a graph with no odd cycles; equivalently, it is a graph that may be properly colored with two colors. Bipartite graphs are often written G = (U,V,E) where U and V are the subsets of vertices of each color. However, unless the graph is connected, it may not have a unique 2-coloring. biregular A biregular graph is a bipartite graph in which there are only two different vertex degrees, one for each set of the vertex bipartition. block 1. A block of a graph G is a maximal subgraph which is either an isolated vertex, a bridge edge, or a 2-connected subgraph. If a block is 2-connected, every pair of vertices in it belong to a common cycle. Every edge of a graph belongs in exactly one block. 2. The block graph of a graph G is another graph whose vertices are the blocks of G, with an edge connecting two vertices when the corresponding blocks share an articulation point; that is, it is the intersection graph of the blocks of G. The block graph of any graph is a forest. 3. The block-cut (or block-cutpoint) graph of a graph G is a bipartite graph where one partite set consists of the cut-vertices of G, and the other has a vertex bi for each block Bi of G. When G is connected, its block-cutpoint graph is a tree. 4. A block graph (also called a clique tree if connected, and sometimes erroneously called a Husimi tree) is a graph all of whose blocks are complete graphs. A forest is a block graph; so in particular the block graph of any graph is a block graph, and every block graph may be constructed as the block graph of a graph. bond A minimal cut-set: a set of edges whose removal disconnects the graph, for which no proper subset has the same property. book 1. A book, book graph, or triangular book is a complete tripartite graph K1,1,n; a collection of n triangles joined at a shared edge. 2. 
Another type of graph, also called a book, or a quadrilateral book, is a collection of 4-cycles joined at a shared edge; the Cartesian product of a star with an edge. 3. A book embedding is an embedding of a graph onto a topological book, a space formed by joining a collection of half-planes along a shared line. Usually, the vertices of the embedding are required to be on the line, which is called the spine of the embedding, and the edges of the embedding are required to lie within a single half-plane, one of the pages of the book. boundary 1. In a graph embedding, a boundary walk is the subgraph containing all edges and vertices incident to a face. bramble A bramble is a collection of mutually touching connected subgraphs, where two subgraphs touch if they share a vertex or each includes one endpoint of an edge. The order of a bramble is the smallest size of a set of vertices that has a nonempty intersection with all of the subgraphs. The treewidth of a graph is the maximum order of any of its brambles. branch A path of degree-two vertices, ending at vertices whose degree is unequal to two. branch-decomposition A branch-decomposition of G is a hierarchical clustering of the edges of G, represented by an unrooted binary tree with its leaves labeled by the edges of G. The width of a branch-decomposition is the maximum, over edges e of this binary tree, of the number of shared vertices between the subgraphs determined by the edges of G in the two subtrees separated by e. The branchwidth of G is the minimum width of any branch-decomposition of G. branchwidth See branch-decomposition. bridge 1. A bridge, isthmus, or cut edge is an edge whose removal would disconnect the graph. A bridgeless graph is one that has no bridges; equivalently, a 2-edge-connected graph. 2. A bridge of a subgraph H is a maximal connected subgraph separated from the rest of the graph by H. 
That is, it is a maximal subgraph that is edge-disjoint from H and in which each two vertices and edges belong to a path that is internally disjoint from H. H may be a set of vertices. A chord is a one-edge bridge. In planarity testing, H is a cycle and a peripheral cycle is a cycle with at most one bridge; it must be a face boundary in any planar embedding of its graph. 3. A bridge of a cycle can also mean a path that connects two vertices of a cycle but is shorter than either of the paths in the cycle connecting the same two vertices. A bridged graph is a graph in which every cycle of four or more vertices has a bridge. bridgeless A bridgeless or isthmus-free graph is a graph that has no bridge edges (i.e., isthmi); that is, each connected component is a 2-edge-connected graph. butterfly 1. The butterfly graph has five vertices and six edges; it is formed by two triangles that share a vertex. 2. The butterfly network is a graph used as a network architecture in distributed computing, closely related to the cube-connected cycles. == C == C Cn is an n-vertex cycle graph; see cycle. cactus A cactus graph, cactus tree, cactus, or Husimi tree is a connected graph in which each edge belongs to at most one cycle. Its blocks are cycles or single edges. If, in addition, each vertex belongs to at most two blocks, then it is called a Christmas cactus. cage A cage is a regular graph with the smallest possible order for its girth. canonical canonization A canonical form of a graph is an invariant such that two graphs have equal invariants if and only if they are isomorphic. Canonical forms may also be called canonical invariants or complete invariants, and are sometimes defined only for the graphs within a particular family of graphs. Graph canonization is the process of computing a canonical form. card A graph formed from a given graph by deleting one vertex, especially in the context of the reconstruction conjecture. See also deck, the multiset of all cards of a graph. 
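The bridge entry above (sense 1: a cut edge, whose removal disconnects the graph) admits a classic linear-time test via depth-first-search low-link values. The sketch below is the standard Tarjan-style approach, assumes a simple undirected graph, and uses illustrative names not tied to any source cited here:

```python
def find_bridges(n, edges):
    """Return the bridges of an undirected graph on vertices 0..n-1.

    An edge (u, v), with v a DFS child of u, is a bridge iff no back
    edge from v's subtree reaches u or an ancestor of u, i.e.
    low[v] > disc[u].
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    disc = [-1] * n      # DFS discovery times (-1 = unvisited)
    low = [0] * n        # lowest discovery time reachable from subtree
    bridges = []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:           # skip the tree edge back to parent
                continue
            if disc[v] != -1:         # back edge to an ancestor
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:  # nothing below v climbs past u
                    bridges.append((u, v))

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return bridges

# Two triangles joined by the single edge (2, 3): that edge is the only bridge.
b = find_bridges(6, [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)])
```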
carving width Carving width is a notion of graph width analogous to branchwidth, but using hierarchical clusterings of vertices instead of hierarchical clusterings of edges. caterpillar A caterpillar tree or caterpillar is a tree in which the internal nodes induce a path. center The center of a graph is the set of vertices of minimum eccentricity. centroid A centroid of a tree is a vertex v such that if rooted at v, no other vertex has subtree size greater than half the size of the tree. chain 1. Synonym for walk. 2. When applying methods from algebraic topology to graphs, an element of a chain complex, namely a set of vertices or a set of edges. Cheeger constant See expansion. cherry A cherry is a path on three vertices. χ χ(G) (using the Greek letter chi) is the chromatic number of G and χ ′(G) is its chromatic index; see chromatic and coloring. child In a rooted tree, a child of a vertex v is a neighbor of v along an outgoing edge, one that is directed away from the root. chord chordal 1. A chord of a cycle is an edge that does not belong to the cycle, for which both endpoints belong to the cycle. 2. A chordal graph is a graph in which every cycle of four or more vertices has a chord, so the only induced cycles are triangles. 3. A strongly chordal graph is a chordal graph in which every cycle of length six or more has an odd chord. 4. A chordal bipartite graph is not chordal (unless it is a forest); it is a bipartite graph in which every cycle of six or more vertices has a chord, so the only induced cycles are 4-cycles. 5. A chord of a circle is a line segment connecting two points on the circle; the intersection graph of a collection of chords is called a circle graph. chromatic Having to do with coloring; see color. Chromatic graph theory is the theory of graph coloring. The chromatic number χ(G) is the minimum number of colors needed in a proper coloring of G. 
χ ′(G) is the chromatic index of G, the minimum number of colors needed in a proper edge coloring of G. choosable choosability A graph is k-choosable if it has a list coloring whenever each vertex has a list of k available colors. The choosability of the graph is the smallest k for which it is k-choosable. circle A circle graph is the intersection graph of chords of a circle. circuit A circuit may refer to a closed trail or an element of the cycle space (an Eulerian spanning subgraph). The circuit rank of a graph is the dimension of its cycle space. circumference The circumference of a graph is the length of its longest simple cycle. The graph is Hamiltonian if and only if its circumference equals its order. class 1. A class of graphs or family of graphs is a (usually infinite) collection of graphs, often defined as the graphs having some specific property. The word "class" is used rather than "set" because, unless special restrictions are made (such as restricting the vertices to be drawn from a particular set, and defining edges to be sets of two vertices) classes of graphs are usually not sets when formalized using set theory. 2. A color class of a colored graph is the set of vertices or edges having one particular color. 3. In the context of Vizing's theorem, on edge coloring simple graphs, a graph is said to be of class one if its chromatic index equals its maximum degree, and class two if its chromatic index equals one plus the maximum degree. According to Vizing's theorem, all simple graphs are either of class one or class two. claw A claw is a tree with one internal vertex and three leaves, or equivalently the complete bipartite graph K1,3. A claw-free graph is a graph that does not have an induced subgraph that is a claw. clique A clique is a set of mutually adjacent vertices (or the complete subgraph induced by that set).
Sometimes a clique is defined as a maximal set of mutually adjacent vertices (or maximal complete subgraph), one that is not part of any larger such set (or subgraph). A k-clique is a clique of order k. The clique number ω(G) of a graph G is the order of its largest clique. The clique graph of a graph G is the intersection graph of the maximal cliques in G. See also biclique, a complete bipartite subgraph. clique tree A synonym for a block graph. clique-width The clique-width of a graph G is the minimum number of distinct labels needed to construct G by operations that create a labeled vertex, form the disjoint union of two labeled graphs, add an edge connecting all pairs of vertices with given labels, or relabel all vertices with a given label. The graphs of clique-width at most 2 are exactly the cographs. closed 1. A closed neighborhood is one that includes its central vertex; see neighbourhood. 2. A closed walk is one that starts and ends at the same vertex; see walk. 3. A graph is transitively closed if it equals its own transitive closure; see transitive. 4. A graph property is closed under some operation on graphs if, whenever the argument or arguments to the operation have the property, then so does the result. For instance, hereditary properties are closed under induced subgraphs; monotone properties are closed under subgraphs; and minor-closed properties are closed under minors. closure 1. For the transitive closure of a directed graph, see transitive. 2. A closure of a directed graph is a set of vertices that have no outgoing edges to vertices outside the closure. For instance, a sink is a one-vertex closure. The closure problem is the problem of finding a closure of minimum or maximum weight. co- This prefix has various meanings usually involving complement graphs. 
For instance, a cograph is a graph produced by operations that include complementation; a cocoloring is a coloring in which each color class induces either an independent set (as in proper coloring) or a clique (as in a coloring of the complement). color coloring 1. A graph coloring is a labeling of the vertices of a graph by elements from a given set of colors, or equivalently a partition of the vertices into subsets, called "color classes", each of which is associated with one of the colors. 2. Some authors use "coloring", without qualification, to mean a proper coloring, one that assigns different colors to the endpoints of each edge. In graph coloring, the goal is to find a proper coloring that uses as few colors as possible; for instance, bipartite graphs are the graphs that have colorings with only two colors, and the four color theorem states that every planar graph can be colored with at most four colors. A graph is said to be k-colored if it has been (properly) colored with k colors, and k-colorable or k-chromatic if this is possible. 3. Many variations of coloring have been studied, including edge coloring (coloring edges so that no two edges with the same endpoint share a color), list coloring (proper coloring with each vertex restricted to a subset of the available colors), acyclic coloring (every 2-colored subgraph is acyclic), co-coloring (every color class induces an independent set or a clique), complete coloring (every two color classes share an edge), and total coloring (both edges and vertices are colored). 4. The coloring number of a graph is one plus the degeneracy. It is so called because applying a greedy coloring algorithm to a degeneracy ordering of the graph uses at most this many colors. comparability An undirected graph is a comparability graph if its vertices are the elements of a partially ordered set and two vertices are adjacent when they are comparable in the partial order.
Equivalently, a comparability graph is a graph that has a transitive orientation. Many other classes of graphs can be defined as the comparability graphs of special types of partial order. complement The complement graph G̅ of a simple graph G is another graph on the same vertex set as G, with an edge for each two vertices that are not adjacent in G. complete 1. A complete graph is one in which every two vertices are adjacent: all edges that could exist are present. A complete graph with n vertices is often denoted Kn. A complete bipartite graph is one in which every two vertices on opposite sides of the partition of vertices are adjacent. A complete bipartite graph with a vertices on one side of the partition and b vertices on the other side is often denoted Ka,b. The same terminology and notation have also been extended to complete multipartite graphs, graphs in which the vertices are divided into more than two subsets and every pair of vertices in different subsets are adjacent; if the numbers of vertices in the subsets are a, b, c, ... then this graph is denoted Ka, b, c, .... 2. A completion of a given graph is a supergraph that has some desired property. For instance, a chordal completion is a supergraph that is a chordal graph. 3. A complete matching is a synonym for a perfect matching; see matching. 4. A complete coloring is a proper coloring in which each pair of colors is used for the endpoints of at least one edge. Every coloring with a minimum number of colors is complete, but there may exist complete colorings with larger numbers of colors. The achromatic number of a graph is the maximum number of colors in a complete coloring. 5. A complete invariant of a graph is a synonym for a canonical form, an invariant that has different values for non-isomorphic graphs. component A connected component of a graph is a maximal connected subgraph.
The term is also used for maximal subgraphs or subsets of a graph's vertices that have some higher order of connectivity, including biconnected components, triconnected components, and strongly connected components. condensation The condensation of a directed graph G is a directed acyclic graph with one vertex for each strongly connected component of G, and an edge connecting pairs of components that contain the two endpoints of at least one edge in G. cone A graph that contains a universal vertex. connect Cause to be connected. connected A connected graph is one in which each pair of vertices forms the endpoints of a path. Higher forms of connectivity include strong connectivity in directed graphs (for each two vertices there are paths from one to the other in both directions), k-vertex-connected graphs (removing fewer than k vertices cannot disconnect the graph), and k-edge-connected graphs (removing fewer than k edges cannot disconnect the graph). connected component Synonym for component. contraction Edge contraction is an elementary operation that removes an edge from a graph while merging the two vertices that it previously joined. Vertex contraction (sometimes called vertex identification) is similar, but the two vertices are not necessarily connected by an edge. Path contraction occurs upon the set of edges in a path that contract to form a single edge between the endpoints of the path. The inverse of edge contraction is vertex splitting. converse The converse graph is a synonym for the transpose graph; see transpose. core 1. A k-core is the induced subgraph formed by removing all vertices of degree less than k, and all vertices whose degree becomes less than k after earlier removals. See degeneracy. 2. A core is a graph G such that every graph homomorphism from G to itself is an isomorphism. 3. The core of a graph G is a minimal graph H such that there exist homomorphisms from G to H and vice versa. H is unique up to isomorphism. 
It can be represented as an induced subgraph of G, and is a core in the sense that all of its self-homomorphisms are isomorphisms. 4. In the theory of graph matchings, the core of a graph is an aspect of its Dulmage–Mendelsohn decomposition, formed as the union of all maximum matchings. cotree 1. The complement of a spanning tree. 2. A rooted tree structure used to describe a cograph, in which each cograph vertex is a leaf of the tree, each internal node of the tree is labeled with 0 or 1, and two cograph vertices are adjacent if and only if their lowest common ancestor in the tree is labeled 1. cover A vertex cover is a set of vertices incident to every edge in a graph. An edge cover is a set of edges incident to every vertex in a graph. A set of subgraphs of a graph covers that graph if its union – taken vertex-wise and edge-wise – is equal to the graph. critical A critical graph for a given property is a graph that has the property but such that every subgraph formed by deleting a single vertex does not have the property. For instance, a factor-critical graph is one that has a perfect matching (a 1-factor) for every vertex deletion, but (because it has an odd number of vertices) has no perfect matching itself. Compare hypo-, used for graphs which do not have a property but for which every one-vertex deletion does. cube cubic 1. Cube graph, the eight-vertex graph of the vertices and edges of a cube. 2. Hypercube graph, a higher-dimensional generalization of the cube graph. 3. Folded cube graph, formed from a hypercube by adding a matching connecting opposite vertices. 4. Halved cube graph, the half-square of a hypercube graph. 5. Partial cube, a distance-preserving subgraph of a hypercube. 6. The cube of a graph G is the graph power G3. 7. Cubic graph, another name for a 3-regular graph, one in which each vertex has three incident edges. 8. Cube-connected cycles, a cubic graph formed by replacing each vertex of a hypercube by a cycle. 
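The cover entry above defines vertex covers by a simple incidence condition, which translates directly into code. A brute-force sketch for finding a smallest vertex cover (the problem is NP-hard, so this is usable only on tiny graphs; the edge-list representation and function names are illustrative assumptions):

```python
from itertools import combinations

def is_vertex_cover(edges, subset):
    """True if every edge has at least one endpoint in subset."""
    return all(u in subset or v in subset for u, v in edges)

def min_vertex_cover(vertices, edges):
    """Smallest vertex cover, by exhaustive search over subsets
    in order of increasing size (exponential time)."""
    for size in range(len(vertices) + 1):
        for subset in combinations(vertices, size):
            if is_vertex_cover(edges, set(subset)):
                return set(subset)

# A path a-b-c-d needs two vertices to touch all three edges.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
print(sorted(min_vertex_cover("abcd", edges)))  # ['a', 'c']
```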
cut cut-set A cut is a partition of the vertices of a graph into two subsets, or the set (also known as a cut-set) of edges that span such a partition, if that set is non-empty. An edge is said to span the partition if it has endpoints in both subsets. Thus, the removal of a cut-set from a connected graph disconnects it. cut point See articulation point. cut space The cut space of a graph is a GF(2)-vector space having the cut-sets of the graph as its elements and symmetric difference of sets as its vector addition operation. cycle 1. A cycle may be either a kind of graph or a kind of walk. As a walk it may either be a closed walk (also called a tour) or more usually a closed walk without repeated vertices and consequently edges (also called a simple cycle). In the latter case it is usually regarded as a graph, i.e., the choices of first vertex and direction are usually considered unimportant; that is, cyclic permutations and reversals of the walk produce the same cycle. Important special types of cycle include Hamiltonian cycles, induced cycles, peripheral cycles, and the shortest cycle, which defines the girth of a graph. A k-cycle is a cycle of length k; for instance a 2-cycle is a digon and a 3-cycle is a triangle. A cycle graph is a graph that is itself a simple cycle; a cycle graph with n vertices is commonly denoted Cn. 2. The cycle space is a vector space generated by the simple cycles in a graph, often over the field of 2 elements but also over other fields. == D == DAG Abbreviation for directed acyclic graph, a directed graph without any directed cycles. deck The multiset of graphs formed from a single graph G by deleting a single vertex in all possible ways, especially in the context of the reconstruction conjecture. An edge-deck is formed in the same way by deleting a single edge in all possible ways. The graphs in a deck are also called cards.
See also critical (graphs that have a property that is not held by any card) and hypo- (graphs that do not have a property that is held by all cards). decomposition See tree decomposition, path decomposition, or branch-decomposition. degenerate degeneracy A k-degenerate graph is an undirected graph in which every induced subgraph has minimum degree at most k. The degeneracy of a graph is the smallest k for which it is k-degenerate. A degeneracy ordering is an ordering of the vertices such that each vertex has minimum degree in the induced subgraph of it and all later vertices; in a degeneracy ordering of a k-degenerate graph, every vertex has at most k later neighbours. Degeneracy is also known as the k-core number, width, and linkage, and one plus the degeneracy is also called the coloring number or Szekeres–Wilf number. k-degenerate graphs have also been called k-inductive graphs. degree 1. The degree of a vertex in a graph is its number of incident edges. The degree of a graph G (or its maximum degree) is the maximum of the degrees of its vertices, often denoted Δ(G); the minimum degree of G is the minimum of its vertex degrees, often denoted δ(G). Degree is sometimes called valency; the degree of v in G may be denoted dG(v), d(v), or deg(v). The total degree is the sum of the degrees of all vertices; by the handshaking lemma it is an even number. The degree sequence is the collection of degrees of all vertices, in sorted order from largest to smallest. In a directed graph, one may distinguish the in-degree (number of incoming edges) and out-degree (number of outgoing edges). 2. The homomorphism degree of a graph is a synonym for its Hadwiger number, the order of the largest clique minor. Δ, δ Δ(G) (using the Greek letter delta) is the maximum degree of a vertex in G, and δ(G) is the minimum degree; see degree. density In a graph of n nodes, the density is the ratio of the number of edges of the graph to the number of edges in a complete graph on n nodes.
See dense graph. depth The depth of a node in a rooted tree is the number of edges in the path from the root to the node. For instance, the depth of the root is 0 and the depth of any one of its adjacent nodes is 1. It is the level of a node minus one. Note, however, that some authors instead use depth as a synonym for the level of a node. diameter The diameter of a connected graph is the maximum length of a shortest path. That is, it is the maximum of the distances between pairs of vertices in the graph. If the graph has weights on its edges, then its weighted diameter measures path length by the sum of the edge weights along a path, while the unweighted diameter measures path length by the number of edges. For disconnected graphs, definitions vary: the diameter may be defined as infinite, or as the largest diameter of a connected component, or it may be undefined. diamond The diamond graph is an undirected graph with four vertices and five edges. diconnected Strongly connected. (Not to be confused with disconnected) digon A digon is a simple cycle of length two in a directed graph or a multigraph. Digons cannot occur in simple undirected graphs as they require repeating the same edge twice, which violates the definition of simple. digraph Synonym for directed graph. dipath See directed path. direct predecessor The tail of a directed edge whose head is the given vertex. direct successor The head of a directed edge whose tail is the given vertex. directed A directed graph is one in which the edges have a distinguished direction, from one vertex to another. In a mixed graph, a directed edge is again one that has a distinguished direction; directed edges may also be called arcs or arrows. directed arc See arrow. directed edge See arrow. directed line See arrow. directed path A path in which all the edges have the same direction. If a directed path leads from vertex x to vertex y, x is a predecessor of y, y is a successor of x, and y is said to be reachable from x. 
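The diameter of an unweighted graph, as defined above, can be computed by running a breadth-first search from every vertex and taking the largest distance found. A minimal sketch, assuming a connected graph given as an adjacency dictionary (representation and names are illustrative assumptions):

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source, via BFS."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def diameter(adj):
    """Maximum shortest-path distance over all vertex pairs.
    Assumes the graph is connected (definitions vary otherwise)."""
    return max(max(bfs_distances(adj, u).values()) for u in adj)

# A 6-cycle: opposite vertices are three steps apart, so the diameter is 3.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(diameter(c6))  # 3
```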
direction 1. The asymmetric relation between two adjacent vertices in a graph, represented as an arrow. 2. The asymmetric relation between two vertices in a directed path. disconnect Cause to be disconnected. disconnected Not connected. disjoint 1. Two subgraphs are edge disjoint if they share no edges, and vertex disjoint if they share no vertices. 2. The disjoint union of two or more graphs is a graph whose vertex and edge sets are the disjoint unions of the corresponding sets. dissociation number A subset of vertices in a graph G is called a dissociation set if it induces a subgraph with maximum degree 1. distance The distance between any two vertices in a graph is the length of the shortest path having the two vertices as its endpoints. domatic A domatic partition of a graph is a partition of the vertices into dominating sets. The domatic number of the graph is the maximum number of dominating sets in such a partition. dominating A dominating set is a set of vertices that includes or is adjacent to every vertex in the graph; not to be confused with a vertex cover, a vertex set that is incident to all edges in the graph. Important special types of dominating sets include independent dominating sets (dominating sets that are also independent sets) and connected dominating sets (dominating sets that induce connected subgraphs). A single-vertex dominating set may also be called a universal vertex. The domination number of a graph is the number of vertices in the smallest dominating set. dual A dual graph of a plane graph G is a graph that has a vertex for each face of G. == E == E E(G) is the edge set of G; see edge set. ear An ear of a graph is a path whose endpoints may coincide but in which otherwise there are no repetitions of vertices or edges.
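The dominating entry above warns against confusing dominating sets with vertex covers: domination is a condition on vertices (each vertex is in the set or adjacent to it), not on edges. A brute-force sketch of the domination number (exponential time, tiny graphs only; the adjacency-dictionary representation and names are illustrative assumptions):

```python
from itertools import combinations

def is_dominating(adj, subset):
    """True if every vertex is in subset or adjacent to a member of it."""
    return all(u in subset or any(v in subset for v in adj[u]) for u in adj)

def domination_number(adj):
    """Size of a smallest dominating set, by exhaustive search."""
    verts = list(adj)
    for size in range(len(verts) + 1):
        if any(is_dominating(adj, set(s)) for s in combinations(verts, size)):
            return size

# In a star K_{1,3} the center is a universal vertex, so it alone dominates.
star = {"c": ["a", "b", "d"], "a": ["c"], "b": ["c"], "d": ["c"]}
print(domination_number(star))  # 1
```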
ear decomposition An ear decomposition is a partition of the edges of a graph into a sequence of ears, each of whose endpoints (after the first one) belong to a previous ear and each of whose interior points do not belong to any previous ear. An open ear is a simple path (an ear without repeated vertices), and an open ear decomposition is an ear decomposition in which each ear after the first is open; a graph has an open ear decomposition if and only if it is biconnected. An ear is odd if it has an odd number of edges, and an odd ear decomposition is an ear decomposition in which each ear is odd; a graph has an odd ear decomposition if and only if it is factor-critical. eccentricity The eccentricity of a vertex is the farthest distance from it to any other vertex. edge An edge is (together with vertices) one of the two basic units out of which graphs are constructed. Each edge has two (or in hypergraphs, more) vertices to which it is attached, called its endpoints. Edges may be directed or undirected; undirected edges are also called lines and directed edges are also called arcs or arrows. In an undirected simple graph, an edge may be represented as the set of its vertices, and in a directed simple graph it may be represented as an ordered pair of its vertices. An edge that connects vertices x and y is sometimes written xy. edge cut A set of edges whose removal disconnects the graph. A one-edge cut is called a bridge, isthmus, or cut edge. edge set The set of edges of a given graph G, sometimes denoted by E(G). edgeless graph The edgeless graph or totally disconnected graph on a given set of vertices is the graph that has no edges. It is sometimes called the empty graph, but this term can also refer to a graph with no vertices. 
embedding A graph embedding is a topological representation of a graph as a subset of a topological space with each vertex represented as a point, each edge represented as a curve having the endpoints of the edge as endpoints of the curve, and no other intersections between vertices or edges. A planar graph is a graph that has such an embedding onto the Euclidean plane, and a toroidal graph is a graph that has such an embedding onto a torus. The genus of a graph is the minimum possible genus of a two-dimensional manifold onto which it can be embedded. empty graph 1. An edgeless graph on a nonempty set of vertices. 2. The order-zero graph, a graph with no vertices and no edges. end An end of an infinite graph is an equivalence class of rays, where two rays are equivalent if there is a third ray that includes infinitely many vertices from both of them. endpoint One of the two vertices joined by a given edge, or one of the first or last vertex of a walk, trail or path. The first endpoint of a given directed edge is called the tail and the second endpoint is called the head. enumeration Graph enumeration is the problem of counting the graphs in a given class of graphs, as a function of their order. More generally, enumeration problems can refer either to problems of counting a certain class of combinatorial objects (such as cliques, independent sets, colorings, or spanning trees), or of algorithmically listing all such objects. Eulerian An Eulerian path is a walk that uses every edge of a graph exactly once. An Eulerian circuit (also called an Eulerian cycle or an Euler tour) is a closed walk that uses every edge exactly once. An Eulerian graph is a graph that has an Eulerian circuit. For an undirected graph, this means that the graph is connected and every vertex has even degree. For a directed graph, this means that the graph is strongly connected and every vertex has in-degree equal to the out-degree. 
In some cases, the connectivity requirement is loosened, and a graph meeting only the degree requirements is called Eulerian. even Divisible by two; for instance, an even cycle is a cycle whose length is even. expander An expander graph is a graph whose edge expansion, vertex expansion, or spectral expansion is bounded away from zero. expansion 1. The edge expansion, isoperimetric number, or Cheeger constant of a graph G is the minimum ratio, over subsets S of at most half of the vertices of G, of the number of edges leaving S to the number of vertices in S. 2. The vertex expansion, vertex isoperimetric number, or magnification of a graph G is the minimum ratio, over subsets S of at most half of the vertices of G, of the number of vertices outside but adjacent to S to the number of vertices in S. 3. The unique neighbor expansion of a graph G is the minimum ratio, over subsets S of at most half of the vertices of G, of the number of vertices outside S but adjacent to a unique vertex in S to the number of vertices in S. 4. The spectral expansion of a d-regular graph G is the spectral gap between the largest eigenvalue d of its adjacency matrix and the second-largest eigenvalue. 5. A family of graphs has bounded expansion if all its r-shallow minors have a ratio of edges to vertices bounded by a function of r, and polynomial expansion if the function of r is a polynomial. == F == face In a plane graph or graph embedding, a connected component of the subset of the plane or surface of the embedding that is disjoint from the graph. For an embedding in the plane, all but one face will be bounded; the one exceptional face that extends to infinity is called the outer (or infinite) face. factor A factor of a graph is a spanning subgraph: a subgraph that includes all of the vertices of the graph. The term is primarily used in the context of regular subgraphs: a k-factor is a factor that is k-regular. In particular, a 1-factor is the same thing as a perfect matching.
A factor-critical graph is a graph for which deleting any one vertex produces a graph with a 1-factor. factorization A graph factorization is a partition of the edges of the graph into factors; a k-factorization is a partition into k-factors. For instance a 1-factorization is an edge coloring with the additional property that each vertex is incident to an edge of each color. family A synonym for class. finite A graph is finite if it has a finite number of vertices and a finite number of edges. Many sources assume that all graphs are finite without explicitly saying so. A graph is locally finite if each vertex has a finite number of incident edges. An infinite graph is a graph that is not finite: it has infinitely many vertices, infinitely many edges, or both. first order The first order logic of graphs is a form of logic in which variables represent vertices of a graph, and there exists a binary predicate to test whether two vertices are adjacent. To be distinguished from second order logic, in which variables can also represent sets of vertices or edges. -flap For a set of vertices X, an X-flap is a connected component of the induced subgraph formed by deleting X. The flap terminology is commonly used in the context of havens, functions that map small sets of vertices to their flaps. See also the bridge of a cycle, which is either a flap of the cycle vertices or a chord of the cycle. forbidden A forbidden graph characterization is a characterization of a family of graphs as being the graphs that do not have certain other graphs as subgraphs, induced subgraphs, or minors. If H is one of the graphs that does not occur as a subgraph, induced subgraph, or minor, then H is said to be forbidden. forcing graph A forcing graph is a graph H such that evaluating the subgraph density of H in the graphs of a graph sequence G(n) is sufficient to test whether that sequence is quasi-random. 
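The factor-critical property above can be tested directly from its definition on small graphs: delete each vertex in turn and look for a perfect matching in what remains. A brute-force sketch (the matching search here is exponential, so this is for illustration only; the set-and-edge-list representation is an assumption):

```python
def has_perfect_matching(vertices, edges):
    """Brute-force perfect-matching test (exponential; tiny graphs only)."""
    if not vertices:
        return True
    if len(vertices) % 2:
        return False  # odd order: no perfect matching possible
    u = next(iter(vertices))
    for a, b in edges:
        if u in (a, b):
            # Match u along this edge and recurse on the rest.
            rest = vertices - {a, b}
            rest_edges = [e for e in edges if e[0] in rest and e[1] in rest]
            if has_perfect_matching(rest, rest_edges):
                return True
    return False

def is_factor_critical(vertices, edges):
    """True if deleting any single vertex leaves a graph with a 1-factor."""
    return all(
        has_perfect_matching(vertices - {x}, [e for e in edges if x not in e])
        for x in vertices
    )

# A triangle (odd cycle) is factor-critical: removing any vertex leaves
# a single edge, which is a perfect matching of the remaining two vertices.
tri_v = {1, 2, 3}
tri_e = [(1, 2), (2, 3), (1, 3)]
print(is_factor_critical(tri_v, tri_e))  # True
```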
forest A forest is an undirected graph without cycles (a disjoint union of unrooted trees), or a directed graph formed as a disjoint union of rooted trees. free edge An edge which is not in a matching. free vertex 1. A vertex not on a matched edge in a matching. 2. A vertex which has not been matched. Frucht 1. Robert Frucht 2. The Frucht graph, one of the two smallest cubic graphs with no nontrivial symmetries. 3. Frucht's theorem that every finite group is the group of symmetries of a finite graph. full Synonym for induced. functional graph A functional graph is a directed graph where every vertex has out-degree one. Equivalently, a functional graph is a maximal directed pseudoforest. == G == G A variable often used to denote a graph. genus The genus of a graph is the minimum genus of a surface onto which it can be embedded; see embedding. geodesic As a noun, a geodesic is a synonym for a shortest path. When used as an adjective, it means related to shortest paths or shortest path distances. giant In the theory of random graphs, a giant component is a connected component that contains a constant fraction of the vertices of the graph. In standard models of random graphs, there is typically at most one giant component. girth The girth of a graph is the length of its shortest cycle. graph The fundamental object of study in graph theory, a system of vertices connected in pairs by edges. Often subdivided into directed graphs or undirected graphs according to whether the edges have an orientation or not. Mixed graphs include both types of edges. greedy Produced by a greedy algorithm. For instance, a greedy coloring of a graph is a coloring produced by considering the vertices in some sequence and assigning each vertex the first available color. Grötzsch 1. Herbert Grötzsch 2. The Grötzsch graph, the smallest triangle-free graph requiring four colors in any proper coloring. 3.
Grötzsch's theorem that triangle-free planar graphs can always be colored with at most three colors. Grundy number 1. The Grundy number of a graph is the maximum number of colors produced by a greedy coloring, with a badly-chosen vertex ordering. == H == H A variable often used to denote a graph, especially when another graph has already been denoted by G. H-coloring An H-coloring of a graph G (where H is also a graph) is a homomorphism from G to H. H-free A graph is H-free if it does not have an induced subgraph isomorphic to H, that is, if H is a forbidden induced subgraph. The H-free graphs are the family of all graphs (or, often, all finite graphs) that are H-free. For instance the triangle-free graphs are the graphs that do not have a triangle graph as a subgraph. The property of being H-free is always hereditary. A graph is H-minor-free if it does not have a minor isomorphic to H. Hadwiger 1. Hugo Hadwiger 2. The Hadwiger number of a graph is the order of the largest complete minor of the graph. It is also called the contraction clique number or the homomorphism degree. 3. The Hadwiger conjecture is the conjecture that the Hadwiger number is never less than the chromatic number. Hamiltonian A Hamiltonian path or Hamiltonian cycle is a simple spanning path or simple spanning cycle: it covers all of the vertices in the graph exactly once. A graph is Hamiltonian if it contains a Hamiltonian cycle, and traceable if it contains a Hamiltonian path. haven A k-haven is a function that maps every set X of fewer than k vertices to one of its flaps, often satisfying additional consistency conditions. The order of a haven is the number k. Havens can be used to characterize the treewidth of finite graphs and the ends and Hadwiger numbers of infinite graphs. height 1. The height of a node in a rooted tree is the number of edges in a longest path, going away from the root (i.e. its nodes have strictly increasing depth), that starts at that node and ends at a leaf. 2.
The height of a rooted tree is the height of its root. That is, the height of a tree is the number of edges in a longest possible path, going away from the root, that starts at the root and ends at a leaf. 3. The height of a directed acyclic graph is the maximum length of a directed path in this graph. hereditary A hereditary property of graphs is a property that is closed under induced subgraphs: if G has a hereditary property, then so must every induced subgraph of G. Compare monotone (closed under all subgraphs) or minor-closed (closed under minors). hexagon A simple cycle consisting of exactly six edges and six vertices. hole A hole is an induced cycle of length four or more. An odd hole is a hole of odd length. An anti-hole is an induced subgraph of order four or more whose complement is a cycle; equivalently, it is a hole in the complement graph. This terminology is mainly used in the context of perfect graphs, which are characterized by the strong perfect graph theorem as being the graphs with no odd holes or odd anti-holes. The hole-free graphs are the same as the chordal graphs. homomorphic equivalence Two graphs are homomorphically equivalent if there exist two homomorphisms, one from each graph to the other graph. homomorphism 1. A graph homomorphism is a mapping from the vertex set of one graph to the vertex set of another graph that maps adjacent vertices to adjacent vertices. This type of mapping between graphs is the one that is most commonly used in category-theoretic approaches to graph theory. A proper graph coloring can equivalently be described as a homomorphism to a complete graph. 2. The homomorphism degree of a graph is a synonym for its Hadwiger number, the order of the largest clique minor. hyperarc A directed hyperedge having a source and target set. hyperedge An edge in a hypergraph, having any number of endpoints, in contrast to the requirement that edges of graphs have exactly two endpoints.
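The homomorphism definition above translates directly into a small verification routine. The following is an illustrative sketch only; the function name and the representation of graphs as edge lists and edge sets are assumptions made for the example:

```python
# Hypothetical sketch: check that a vertex mapping f is a graph
# homomorphism from G to H, i.e. that it maps adjacent vertices of G
# to adjacent vertices of H. Edges of H are stored as frozensets so
# that direction does not matter.
def is_homomorphism(f, g_edges, h_edges):
    return all(frozenset((f[u], f[v])) in h_edges
               for u, v in g_edges)

# A proper 2-coloring of the 4-cycle is exactly a homomorphism to K2.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
k2 = {frozenset(('red', 'blue'))}
coloring = {0: 'red', 1: 'blue', 2: 'red', 3: 'blue'}
print(is_homomorphism(coloring, c4, k2))  # True
```

This also illustrates the remark above that a proper coloring can be described as a homomorphism to a complete graph.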
hypercube A hypercube graph is a graph formed from the vertices and edges of a geometric hypercube. hypergraph A hypergraph is a generalization of a graph in which each edge (called a hyperedge in this context) may have more than two endpoints. hypo- This prefix, in combination with a graph property, indicates a graph that does not have the property but such that every subgraph formed by deleting a single vertex does have the property. For instance, a hypohamiltonian graph is one that does not have a Hamiltonian cycle, but for which every one-vertex deletion produces a Hamiltonian subgraph. Compare critical, used for graphs which have a property but for which every one-vertex deletion does not. == I == in-degree The number of incoming edges in a directed graph; see degree. incidence An incidence in a graph is a vertex-edge pair such that the vertex is an endpoint of the edge. incidence matrix The incidence matrix of a graph is a matrix whose rows are indexed by vertices of the graph, and whose columns are indexed by edges, with a one in the cell for row i and column j when vertex i and edge j are incident, and a zero otherwise. incident The relation between an edge and one of its endpoints. incomparability An incomparability graph is the complement of a comparability graph; see comparability. independent 1. An independent set is a set of vertices that induces an edgeless subgraph. It may also be called a stable set or a coclique. The independence number α(G) is the size of the maximum independent set. 2. In the graphic matroid of a graph, a subset of edges is independent if the corresponding subgraph is a tree or forest. In the bicircular matroid, a subset of edges is independent if the corresponding subgraph is a pseudoforest. indifference An indifference graph is another name for a proper interval graph or unit interval graph; see proper. 
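The incidence matrix defined above is easy to build from an edge list. A minimal sketch (the list-of-lists representation and function name are assumptions for illustration):

```python
# Sketch: build the incidence matrix of an undirected graph with n
# vertices from its edge list. The cell in row i and column j is 1
# exactly when vertex i is an endpoint of edge j, and 0 otherwise.
def incidence_matrix(n, edges):
    m = [[0] * len(edges) for _ in range(n)]
    for j, (u, v) in enumerate(edges):
        m[u][j] = 1
        m[v][j] = 1
    return m

# A triangle on vertices 0, 1, 2: every column has exactly two ones,
# since every edge has exactly two endpoints.
print(incidence_matrix(3, [(0, 1), (1, 2), (0, 2)]))
# [[1, 0, 1], [1, 1, 0], [0, 1, 1]]
```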
induced An induced subgraph or full subgraph of a graph is a subgraph formed from a subset of vertices and from all of the edges that have both endpoints in the subset. Special cases include induced paths and induced cycles, induced subgraphs that are paths or cycles. inductive Synonym for degenerate. infinite An infinite graph is one that is not finite; see finite. internal A vertex of a path or tree is internal if it is not a leaf; that is, if its degree is greater than one. Two paths are internally disjoint (some people call it independent) if they do not have any vertex in common, except the first and last ones. intersection 1. The intersection of two graphs is their largest common subgraph, the graph formed by the vertices and edges that belong to both graphs. 2. An intersection graph is a graph whose vertices correspond to sets or geometric objects, with an edge between two vertices exactly when the corresponding two sets or objects have a nonempty intersection. Several classes of graphs may be defined as the intersection graphs of certain types of objects, for instance chordal graphs (intersection graphs of subtrees of a tree), circle graphs (intersection graphs of chords of a circle), interval graphs (intersection graphs of intervals of a line), line graphs (intersection graphs of the edges of a graph), and clique graphs (intersection graphs of the maximal cliques of a graph). Every graph is an intersection graph for some family of sets, and this family is called an intersection representation of the graph. The intersection number of a graph G is the minimum total number of elements in any intersection representation of G. interval 1. An interval graph is an intersection graph of intervals of a line. 2. The interval [u, v] in a graph is the union of all shortest paths from u to v. 3. Interval thickness is a synonym for pathwidth. invariant A synonym of property. inverted arrow An arrow with an opposite direction compared to another arrow. 
The arrow (y, x) is the inverted arrow of the arrow (x, y). isolated An isolated vertex of a graph is a vertex whose degree is zero, that is, a vertex with no incident edges. isomorphic Two graphs are isomorphic if there is an isomorphism between them; see isomorphism. isomorphism A graph isomorphism is a one-to-one incidence preserving correspondence of the vertices and edges of one graph to the vertices and edges of another graph. Two graphs related in this way are said to be isomorphic. isoperimetric See expansion. isthmus Synonym for bridge, in the sense of an edge whose removal disconnects the graph. == J == join The join of two graphs is formed from their disjoint union by adding an edge from each vertex of one graph to each vertex of the other. Equivalently, it is the complement of the disjoint union of the complements. == K == K For the notation for complete graphs, complete bipartite graphs, and complete multipartite graphs, see complete. κ κ(G) (using the Greek letter kappa) can refer to the vertex connectivity of G or to the clique number of G. kernel A kernel of a directed graph is a set of vertices which is both stable and absorbing. knot An inescapable section of a directed graph. See knot (mathematics) and knot theory. == L == L L(G) is the line graph of G; see line. label 1. Information associated with a vertex or edge of a graph. A labeled graph is a graph whose vertices or edges have labels. The terms vertex-labeled or edge-labeled may be used to specify which objects of a graph have labels. Graph labeling refers to several different problems of assigning labels to graphs subject to certain constraints. See also graph coloring, in which the labels are interpreted as colors. 2. In the context of graph enumeration, the vertices of a graph are said to be labeled if they are all distinguishable from each other. 
For instance, this can be made to be true by fixing a one-to-one correspondence between the vertices and the integers from 1 to the order of the graph. When vertices are labeled, graphs that are isomorphic to each other (but with different vertex orderings) are counted as separate objects. In contrast, when the vertices are unlabeled, graphs that are isomorphic to each other are not counted separately. leaf 1. A leaf vertex or pendant vertex (especially in a tree) is a vertex whose degree is 1. A leaf edge or pendant edge is the edge connecting a leaf vertex to its single neighbour. 2. A leaf power of a tree is a graph whose vertices are the leaves of the tree and whose edges connect leaves whose distance in the tree is at most a given threshold. length In an unweighted graph, the length of a cycle, path, or walk is the number of edges it uses. In a weighted graph, it may instead be the sum of the weights of the edges that it uses. Length is used to define the shortest path, girth (shortest cycle length), and longest path between two vertices in a graph. level 1. This is the depth of a node plus 1, although some define it instead to be a synonym of depth. A node's level in a rooted tree is the number of nodes in the path from the root to the node. For instance, the root has level 1 and any one of its adjacent nodes has level 2. 2. The set of all nodes having the same level or depth. line A synonym for an undirected edge. The line graph L(G) of a graph G is a graph with a vertex for each edge of G and an edge for each pair of edges that share an endpoint in G. linkage A synonym for degeneracy. list 1. An adjacency list is a computer representation of graphs for use in graph algorithms. 2. List coloring is a variation of graph coloring in which each vertex has a list of available colors. local A local property of a graph is a property that is determined only by the neighbourhoods of the vertices in the graph.
For instance, a graph is locally finite if all of its neighborhoods are finite. loop A loop or self-loop is an edge both of whose endpoints are the same vertex. It forms a cycle of length 1. These are not allowed in simple graphs. == M == magnification Synonym for vertex expansion. matching A matching is a set of edges in which no two share any vertex. A vertex is matched or saturated if it is one of the endpoints of an edge in the matching. A perfect matching or complete matching is a matching that matches every vertex; it may also be called a 1-factor, and can only exist when the order is even. A near-perfect matching, in a graph with odd order, is one that saturates all but one vertex. A maximum matching is a matching that uses as many edges as possible; the matching number α′(G) of a graph G is the number of edges in a maximum matching. A maximal matching is a matching to which no additional edges can be added. maximal 1. A subgraph of a given graph G is maximal for a particular property if it has that property and no other subgraph of G that properly contains it has the same property. That is, it is a maximal element of the subgraphs with the property. For instance, a maximal clique is a complete subgraph that cannot be expanded to a larger complete subgraph. The word "maximal" should be distinguished from "maximum": a maximum subgraph is always maximal, but not necessarily vice versa. 2. A simple graph with a given property is maximal for that property if it is not possible to add any more edges to it (keeping the vertex set unchanged) while preserving both the simplicity of the graph and the property. Thus, for instance, a maximal planar graph is a planar graph such that adding any more edges to it would create a non-planar graph. maximum A subgraph of a given graph G is maximum for a particular property if it is the largest subgraph (by order or size) among all subgraphs with that property.
For instance, a maximum clique is any of the largest cliques in a given graph. median 1. A median of a triple of vertices, a vertex that belongs to shortest paths between all pairs of the triple, especially in median graphs and modular graphs. 2. A median graph is a graph in which every three vertices have a unique median. Meyniel 1. Henri Meyniel, French graph theorist. 2. A Meyniel graph is a graph in which every odd cycle of length five or more has at least two chords. minimal A subgraph of a given graph is minimal for a particular property if it has that property but no proper subgraph of it also has the same property. That is, it is a minimal element of the subgraphs with the property. minimum cut A cut whose cut-set has minimum total weight, possibly restricted to cuts that separate a designated pair of vertices; they are characterized by the max-flow min-cut theorem. minor A graph H is a minor of another graph G if H can be obtained by deleting edges or vertices from G and contracting edges in G. It is a shallow minor if it can be formed as a minor in such a way that the subgraphs of G that were contracted to form vertices of H all have small diameter. H is a topological minor of G if G has a subgraph that is a subdivision of H. A graph is H-minor-free if it does not have H as a minor. A family of graphs is minor-closed if it is closed under minors; the Robertson–Seymour theorem characterizes minor-closed families as having a finite set of forbidden minors. mixed A mixed graph is a graph that may include both directed and undirected edges. modular 1. Modular graph, a graph in which each triple of vertices has at least one median vertex that belongs to shortest paths between all pairs of the triple. 2. Modular decomposition, a decomposition of a graph into subgraphs within which all vertices connect to the rest of the graph in the same way. 3. Modularity of a graph clustering, the difference of the number of cross-cluster edges from its expected value.
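The maximal-versus-maximum distinction made under matching above can be shown concretely: a greedy pass over the edges always yields a maximal matching, but depending on the edge order it may be smaller than a maximum matching. An illustrative sketch (the function name and edge-list representation are assumptions):

```python
# Sketch: greedily build a matching by taking each edge whose two
# endpoints are both still unmatched. The result is maximal (no edge
# can be added) but not necessarily maximum (largest possible).
def greedy_maximal_matching(edges):
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# On the path 0-1-2-3, taking the middle edge first blocks both outer
# edges, giving a maximal matching of size 1, while the maximum
# matching {(0, 1), (2, 3)} has size 2.
print(greedy_maximal_matching([(1, 2), (0, 1), (2, 3)]))  # [(1, 2)]
print(greedy_maximal_matching([(0, 1), (1, 2), (2, 3)]))  # [(0, 1), (2, 3)]
```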
monotone A monotone property of graphs is a property that is closed under subgraphs: if G has a monotone property, then so must every subgraph of G. Compare hereditary (closed under induced subgraphs) or minor-closed (closed under minors). Moore graph A Moore graph is a regular graph for which the Moore bound is met exactly. The Moore bound is an inequality relating the degree, diameter, and order of a graph, proved by Edward F. Moore. Every Moore graph is a cage. multigraph A multigraph is a graph that allows multiple adjacencies (and, often, self-loops); a graph that is not required to be simple. multiple adjacency A multiple adjacency or multiple edge is a set of more than one edge that all have the same endpoints (in the same direction, in the case of directed graphs). A graph with multiple edges is often called a multigraph. multiplicity The multiplicity of an edge is the number of edges in a multiple adjacency. The multiplicity of a graph is the maximum multiplicity of any of its edges. == N == N 1. For the notation for open and closed neighborhoods, see neighbourhood. 2. A lower-case n is often used (especially in computer science) to denote the number of vertices in a given graph. neighbor neighbour A vertex that is adjacent to a given vertex. neighborhood neighbourhood The open neighbourhood (or neighborhood) of a vertex v is the subgraph induced by all vertices that are adjacent to v. The closed neighbourhood is defined in the same way but also includes v itself. The open neighborhood of v in G may be denoted NG(v) or N(v), and the closed neighborhood may be denoted NG[v] or N[v]. When the openness or closedness of a neighborhood is not specified, it is assumed to be open. network A graph in which attributes (e.g. names) are associated with the nodes and/or edges. node A synonym for vertex. non-edge A non-edge or anti-edge is a pair of vertices that are not adjacent; the edges of the complement graph. null graph See empty graph. == O == odd 1. 
An odd cycle is a cycle whose length is odd. The odd girth of a non-bipartite graph is the length of its shortest odd cycle. An odd hole is a special case of an odd cycle: one that is induced and has four or more vertices. 2. An odd vertex is a vertex whose degree is odd. By the handshaking lemma every finite undirected graph has an even number of odd vertices. 3. An odd ear is a simple path or simple cycle with an odd number of edges, used in odd ear decompositions of factor-critical graphs; see ear. 4. An odd chord is an edge connecting two vertices that are an odd distance apart in an even cycle. Odd chords are used to define strongly chordal graphs. 5. An odd graph is a special case of a Kneser graph, having one vertex for each (n − 1)-element subset of a (2n − 1)-element set, and an edge connecting two subsets when their corresponding sets are disjoint. open 1. See neighbourhood. 2. See walk. order 1. The order of a graph G is the number of its vertices, |V(G)|. The variable n is often used for this quantity. See also size, the number of edges. 2. A type of logic of graphs; see first order and second order. 3. An order or ordering of a graph is an arrangement of its vertices into a sequence, especially in the context of topological ordering (an order of a directed acyclic graph in which every edge goes from an earlier vertex to a later vertex in the order) and degeneracy ordering (an order in which each vertex has minimum degree in the induced subgraph of it and all later vertices). 4. For the order of a haven or bramble, see haven and bramble. orientation oriented 1. An orientation of an undirected graph is an assignment of directions to its edges, making it into a directed graph. An oriented graph is one that has been assigned an orientation. So, for instance, a polytree is an oriented tree; it differs from a directed tree (an arborescence) in that there is no requirement of consistency in the directions of its edges. 
Other special types of orientation include tournaments, orientations of complete graphs; strong orientations, orientations that are strongly connected; acyclic orientations, orientations that are acyclic; Eulerian orientations, orientations that are Eulerian; and transitive orientations, orientations that are transitively closed. 2. Oriented graph, used by some authors as a synonym for a directed graph. out-degree See degree. outer See face. outerplanar An outerplanar graph is a graph that can be embedded in the plane (without crossings) so that all vertices are on the outer face of the graph. == P == parent In a rooted tree, the parent of a vertex v is its neighbor on the path from v to the root; every vertex other than the root has exactly one parent. path Depending on the source, a path may be any walk, or a walk without repeated vertices and consequently without repeated edges (also called a simple path). Important special cases include induced paths and shortest paths. path decomposition A path decomposition of a graph G is a tree decomposition whose underlying tree is a path. Its width is defined in the same way as for tree decompositions, as one less than the size of the largest bag. The minimum width of any path decomposition of G is the pathwidth of G. pathwidth The pathwidth of a graph G is the minimum width of a path decomposition of G. It may also be defined in terms of the clique number of an interval completion of G. It is always between the bandwidth and the treewidth of G. It is also known as interval thickness, vertex separation number, or node searching number. pendant See leaf. perfect 1. A perfect graph is a graph in which, in every induced subgraph, the chromatic number equals the clique number. The perfect graph theorem and strong perfect graph theorem are two theorems about perfect graphs, the former proving that their complements are also perfect and the latter proving that they are exactly the graphs with no odd holes or odd anti-holes. 2.
A perfectly orderable graph is a graph whose vertices can be ordered in such a way that a greedy coloring algorithm with this ordering optimally colors every induced subgraph. The perfectly orderable graphs are a subclass of the perfect graphs. 3. A perfect matching is a matching that saturates every vertex; see matching. 4. A perfect 1-factorization is a partition of the edges of a graph into perfect matchings so that each two matchings form a Hamiltonian cycle. peripheral 1. A peripheral cycle or non-separating cycle is a cycle with at most one bridge. 2. A peripheral vertex is a vertex whose eccentricity is maximum. In a tree, this must be a leaf. Petersen 1. Julius Petersen (1839–1910), Danish graph theorist. 2. The Petersen graph, a 10-vertex 15-edge graph frequently used as a counterexample. 3. Petersen's theorem that every bridgeless cubic graph has a perfect matching. planar A planar graph is a graph that has an embedding onto the Euclidean plane. A plane graph is a planar graph for which a particular embedding has already been fixed. A k-planar graph is one that can be drawn in the plane with at most k crossings per edge. polytree A polytree is an oriented tree; equivalently, a directed acyclic graph whose underlying undirected graph is a tree. power 1. A graph power Gk of a graph G is another graph on the same vertex set; two vertices are adjacent in Gk when they are at distance at most k in G. A leaf power is a closely related concept, derived from a power of a tree by taking the subgraph induced by the tree's leaves. 2. Power graph analysis is a method for analyzing complex networks by identifying cliques, bicliques, and stars within the network. 3. Power laws in the degree distributions of scale-free networks are a phenomenon in which the number of vertices of a given degree is proportional to a power of the degree. predecessor A vertex coming before a given vertex in a directed path. prime 1. 
A prime graph is defined from a finite group, with a vertex for each prime number that divides the order of the group and an edge connecting two primes p and q when the group has an element of order pq. 2. In the theory of modular decomposition, a prime graph is a graph without any nontrivial modules. 3. In the theory of splits, cuts whose cut-set is a complete bipartite graph, a prime graph is a graph without any splits. Every quotient graph of a maximal decomposition by splits is a prime graph, a star, or a complete graph. 4. A prime graph for the Cartesian product of graphs is a connected graph that is not itself a product. Every connected graph can be uniquely factored into a Cartesian product of prime graphs. proper 1. A proper subgraph is a subgraph that removes at least one vertex or edge relative to the whole graph; for finite graphs, proper subgraphs are never isomorphic to the whole graph, but for infinite graphs they can be. 2. A proper coloring is an assignment of colors to the vertices of a graph (a coloring) that assigns different colors to the endpoints of each edge; see color. 3. A proper interval graph or proper circular arc graph is an intersection graph of a collection of intervals or circular arcs (respectively) such that no interval or arc contains another interval or arc. Proper interval graphs are also called unit interval graphs (because they can always be represented by unit intervals) or indifference graphs. property A graph property is something that can be true of some graphs and false of others, and that depends only on the graph structure and not on incidental information such as labels. Graph properties may equivalently be described in terms of classes of graphs (the graphs that have a given property). More generally, a graph property may also be a function of graphs that is again independent of incidental information, such as the size, order, or degree sequence of a graph; this more general definition of a property is also called an invariant of the graph.
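The definition of a proper coloring above amounts to a single condition that can be checked edge by edge. A minimal sketch (the dict-based coloring and function name are assumptions for the example):

```python
# Sketch: a coloring (mapping from vertex to color) is proper exactly
# when the two endpoints of every edge receive different colors.
def is_proper_coloring(coloring, edges):
    return all(coloring[u] != coloring[v] for u, v in edges)

triangle = [(0, 1), (1, 2), (0, 2)]
print(is_proper_coloring({0: 'a', 1: 'b', 2: 'c'}, triangle))  # True
print(is_proper_coloring({0: 'a', 1: 'b', 2: 'a'}, triangle))  # False
```

The second call fails because vertices 0 and 2 are adjacent but share a color, which also shows why the triangle needs three colors.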
pseudoforest A pseudoforest is an undirected graph in which each connected component has at most one cycle, or a directed graph in which each vertex has at most one outgoing edge. pseudograph A pseudograph is a graph or multigraph that allows self-loops. == Q == quasi-line graph A quasi-line graph or locally co-bipartite graph is a graph in which the open neighborhood of every vertex can be partitioned into two cliques. These graphs are always claw-free and they include as a special case the line graphs. They are used in the structure theory of claw-free graphs. quasi-random graph sequence A quasi-random graph sequence is a sequence of graphs that shares several properties with a sequence of random graphs generated according to the Erdős–Rényi random graph model. quiver A quiver is a directed multigraph, as used in category theory. The edges of a quiver are called arrows. == R == radius The radius of a graph is the minimum eccentricity of any vertex. Ramanujan A Ramanujan graph is a graph whose spectral expansion is as large as possible. That is, it is a d-regular graph, such that the second-largest eigenvalue of its adjacency matrix is at most 2√(d − 1). ray A ray, in an infinite graph, is an infinite simple path with exactly one endpoint. The ends of a graph are equivalence classes of rays. reachability The ability to get from one vertex to another within a graph. reachable A vertex y is said to be reachable from a vertex x if there exists a path from x to y. recognizable In the context of the reconstruction conjecture, a graph property is recognizable if its truth can be determined from the deck of the graph. Many graph properties are known to be recognizable. If the reconstruction conjecture is true, all graph properties are recognizable.
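Reachability as defined above is decidable by a breadth-first search from the starting vertex. A self-contained sketch (the adjacency-dictionary representation and function name are assumptions):

```python
from collections import deque

# Sketch: decide whether y is reachable from x in a directed graph,
# i.e. whether a directed path from x to y exists, by breadth-first
# search over the adjacency dictionary adj.
def reachable(adj, x, y):
    seen, queue = {x}, deque([x])
    while queue:
        u = queue.popleft()
        if u == y:
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

# A directed path 3 -> 0 -> 1 -> 2: reachability is not symmetric.
adj = {0: [1], 1: [2], 2: [], 3: [0]}
print(reachable(adj, 0, 2))  # True
print(reachable(adj, 2, 0))  # False
```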
reconstruction The reconstruction conjecture states that each undirected graph G is uniquely determined by its deck, a multiset of graphs formed by removing one vertex from G in all possible ways. In this context, reconstruction is the formation of a graph from its deck. rectangle A simple cycle consisting of exactly four edges and four vertices. regular A graph is d-regular when all of its vertices have degree d. A regular graph is a graph that is d-regular for some d. regular tournament A regular tournament is a tournament where in-degree equals out-degree for all vertices. reverse See transpose. root 1. A designated vertex in a graph, particularly in directed trees and rooted graphs. 2. The inverse operation to a graph power: a kth root of a graph G is another graph on the same vertex set such that two vertices are adjacent in G if and only if they have distance at most k in the root. == S == saturated See matching. searching number Node searching number is a synonym for pathwidth. second order The second order logic of graphs is a form of logic in which variables may represent vertices, edges, sets of vertices, and (sometimes) sets of edges. This logic includes predicates for testing whether a vertex and edge are incident, as well as whether a vertex or edge belongs to a set. To be distinguished from first order logic, in which variables can only represent vertices. self-loop Synonym for loop. separating vertex See articulation point. separation number Vertex separation number is a synonym for pathwidth. sibling In a rooted tree, a sibling of a vertex v is a vertex which has the same parent vertex as v. simple 1. A simple graph is a graph without loops and without multiple adjacencies. That is, each edge connects two distinct endpoints and no two edges have the same endpoints. A simple edge is an edge that is not part of a multiple adjacency. In many cases, graphs are assumed to be simple unless specified otherwise. 2. 
A simple path or a simple cycle is a path or cycle that has no repeated vertices and consequently no repeated edges. sink A sink, in a directed graph, is a vertex with no outgoing edges (out-degree equals 0). size The size of a graph G is the number of its edges, |E(G)|. The variable m is often used for this quantity. See also order, the number of vertices. small-world network A small-world network is a graph in which most nodes are not neighbors of one another, but most nodes can be reached from every other node by a small number of hops or steps. Specifically, a small-world network is defined to be a graph where the typical distance L between two randomly chosen nodes (the number of steps required) grows proportionally to the logarithm of the number of nodes N in the network. snark A snark is a simple, connected, bridgeless cubic graph with chromatic index equal to 4. source A source, in a directed graph, is a vertex with no incoming edges (in-degree equals 0). space In algebraic graph theory, several vector spaces over the binary field may be associated with a graph. Each has sets of edges or vertices for its vectors, and symmetric difference of sets as its vector sum operation. The edge space is the space of all sets of edges, and the vertex space is the space of all sets of vertices. The cut space is a subspace of the edge space that has the cut-sets of the graph as its elements. The cycle space has the Eulerian spanning subgraphs as its elements. spanner A spanner is a (usually sparse) graph whose shortest path distances approximate those in a dense graph or other metric space. Variations include geometric spanners, graphs whose vertices are points in a geometric space; tree spanners, spanning trees of a graph whose distances approximate the graph distances, and graph spanners, sparse subgraphs of a dense graph whose distances approximate the original graph's distances.
A greedy spanner is a graph spanner constructed by a greedy algorithm, generally one that considers all edges from shortest to longest and keeps the ones that are needed to preserve the distance approximation. spanning A subgraph is spanning when it includes all of the vertices of the given graph. Important cases include spanning trees, spanning subgraphs that are trees, and perfect matchings, spanning subgraphs that are matchings. A spanning subgraph may also be called a factor, especially (but not only) when it is regular. sparse A sparse graph is one that has few edges relative to its number of vertices. In some definitions the same property should also be true for all subgraphs of the given graph. spectral spectrum The spectrum of a graph is the collection of eigenvalues of its adjacency matrix. Spectral graph theory is the branch of graph theory that uses spectra to analyze graphs. See also spectral expansion. split 1. A split graph is a graph whose vertices can be partitioned into a clique and an independent set. A related class of graphs, the double split graphs, are used in the proof of the strong perfect graph theorem. 2. A split of an arbitrary graph is a partition of its vertices into two nonempty subsets, such that the edges spanning this cut form a complete bipartite subgraph. The splits of a graph can be represented by a tree structure called its split decomposition. A split is called a strong split when it is not crossed by any other split. A split is called nontrivial when both of its sides have more than one vertex. A graph is called prime when it has no nontrivial splits. 3. Vertex splitting (sometimes called vertex cleaving) is an elementary graph operation that splits a vertex into two, where these two new vertices are adjacent to the vertices that the original vertex was adjacent to. The inverse of vertex splitting is vertex contraction. square 1. The square of a graph G is the graph power G2; in the other direction, G is the square root of G2. 
The half-square of a bipartite graph is the subgraph of its square induced by one side of the bipartition. 2. A squaregraph is a planar graph that can be drawn so that all bounded faces are 4-cycles and all vertices of degree ≤ 3 belong to the outer face. 3. A square grid graph is a lattice graph defined from points in the plane with integer coordinates connected by unit-length edges. stable A stable set is a synonym for an independent set. star A star is a tree with one internal vertex; equivalently, it is a complete bipartite graph K1,n for some n ≥ 2. The special case of a star with three leaves is called a claw. strength The strength of a graph is the minimum ratio of the number of edges removed from the graph to components created, over all possible removals; it is analogous to toughness, based on vertex removals. strong 1. For strong connectivity and strongly connected components of directed graphs, see connected and component. A strong orientation is an orientation that is strongly connected; see orientation. 2. For the strong perfect graph theorem, see perfect. 3. A strongly regular graph is a regular graph in which every two adjacent vertices have the same number of shared neighbours and every two non-adjacent vertices have the same number of shared neighbours. 4. A strongly chordal graph is a chordal graph in which every even cycle of length six or more has an odd chord. 5. A strongly perfect graph is a graph in which every induced subgraph has an independent set meeting all maximal cliques. The Meyniel graphs are also called "very strongly perfect graphs" because in them, every vertex belongs to such an independent set. subforest A subgraph of a forest. subgraph A subgraph of a graph G is another graph formed from a subset of the vertices and edges of G. The vertex subset must include all endpoints of the edge subset, but may also include additional vertices. 
A spanning subgraph is one that includes all vertices of the graph; an induced subgraph is one that includes all the edges whose endpoints belong to the vertex subset. subtree A subtree is a connected subgraph of a tree. Sometimes, for rooted trees, subtrees are defined to be a special type of connected subgraph, formed by all vertices and edges reachable from a chosen vertex. successor A vertex coming after a given vertex in a directed path. superconcentrator A superconcentrator is a graph with two designated and equal-sized subsets of vertices I and O, such that for every two equal-sized subsets S of I and T of O there exists a family of disjoint paths connecting every vertex in S to a vertex in T. Some sources require in addition that a superconcentrator be a directed acyclic graph, with I as its sources and O as its sinks. supergraph A graph formed by adding vertices, edges, or both to a given graph. If H is a subgraph of G, then G is a supergraph of H. == T == theta 1. A theta graph is the union of three internally disjoint (simple) paths that have the same two distinct end vertices. 2. The theta graph of a collection of points in the Euclidean plane is constructed by building a system of cones surrounding each point and adding one edge per cone, to the point whose projection onto a central ray of the cone is smallest. 3. The Lovász number or Lovász theta function of a graph is a graph invariant related to the clique number and chromatic number that can be computed in polynomial time by semidefinite programming. Thomsen graph The Thomsen graph is a name for the complete bipartite graph K3,3. topological 1. A topological graph is a representation of the vertices and edges of a graph by points and curves in the plane (not necessarily avoiding crossings). 2. Topological graph theory is the study of graph embeddings. 3.
Topological sorting is the algorithmic problem of arranging a directed acyclic graph into a topological order, a vertex sequence such that each edge goes from an earlier vertex to a later vertex in the sequence. totally disconnected Synonym for edgeless. tour A closed trail, a walk that starts and ends at the same vertex and has no repeated edges. Euler tours are tours that use all of the graph edges; see Eulerian. tournament A tournament is an orientation of a complete graph; that is, it is a directed graph such that every two vertices are connected by exactly one directed edge (going in only one of the two directions between the two vertices). traceable A traceable graph is a graph that contains a Hamiltonian path. trail A walk without repeated edges. transitive Having to do with the transitive property. The transitive closure of a given directed graph is a graph on the same vertex set that has an edge from one vertex to another whenever the original graph has a path connecting the same two vertices. A transitive reduction of a graph is a minimal graph having the same transitive closure; directed acyclic graphs have a unique transitive reduction. A transitive orientation is an orientation of a graph that is its own transitive closure; it exists only for comparability graphs. transpose The transpose graph of a given directed graph is a graph on the same vertices, with each edge reversed in direction. It may also be called the converse or reverse of the graph. tree 1. A tree is an undirected graph that is both connected and acyclic, or a directed graph in which there exists a unique walk from one vertex (the root of the tree) to all remaining vertices. 2. A k-tree is a graph formed by gluing (k + 1)-cliques together on shared k-cliques. A tree in the ordinary sense is a 1-tree according to this definition. tree decomposition A tree decomposition of a graph G is a tree whose nodes are labeled with sets of vertices of G; these sets are called bags. 
For each vertex v, the bags that contain v must induce a subtree of the tree, and for each edge uv there must exist a bag that contains both u and v. The width of a tree decomposition is one less than the maximum number of vertices in any of its bags; the treewidth of G is the minimum width of any tree decomposition of G. treewidth The treewidth of a graph G is the minimum width of a tree decomposition of G. It can also be defined in terms of the clique number of a chordal completion of G, the order of a haven of G, or the order of a bramble of G. triangle A cycle of length three in a graph. A triangle-free graph is an undirected graph that does not have any triangle subgraphs. trivial A trivial graph is a graph with 0 or 1 vertices. A graph with 0 vertices is also called the null graph. Turán 1. Pál Turán 2. A Turán graph is a balanced complete multipartite graph. 3. Turán's theorem states that Turán graphs have the maximum number of edges among all clique-free graphs of a given order. 4. Turán's brick factory problem asks for the minimum number of crossings in a drawing of a complete bipartite graph. twin Two vertices u,v are true twins if they have the same closed neighborhood: NG[u] = NG[v] (this implies u and v are neighbors), and they are false twins if they have the same open neighborhood: NG(u) = NG(v) (this implies u and v are not neighbors). == U == unary vertex In a rooted tree, a unary vertex is a vertex which has exactly one child vertex. undirected An undirected graph is a graph in which the two endpoints of each edge are not distinguished from each other. See also directed and mixed. In a mixed graph, an undirected edge is again one in which the endpoints are not distinguished from each other. uniform A hypergraph is k-uniform when all its edges have k endpoints, and uniform when it is k-uniform for some k. For instance, ordinary graphs are the same as 2-uniform hypergraphs. universal 1.
A universal graph is a graph that contains as subgraphs all graphs in a given family of graphs, or all graphs of a given size or order within a given family of graphs. 2. A universal vertex (also called an apex or dominating vertex) is a vertex that is adjacent to every other vertex in the graph. For instance, wheel graphs and connected threshold graphs always have a universal vertex. 3. In the logic of graphs, a vertex that is universally quantified in a formula may be called a universal vertex for that formula. unweighted graph A graph whose vertices and edges have not been assigned weights; the opposite of a weighted graph. utility graph The utility graph is a name for the complete bipartite graph K3,3. == V == V See vertex set. valency Synonym for degree. vertex A vertex (plural vertices) is (together with edges) one of the two basic units out of which graphs are constructed. Vertices of graphs are often considered to be atomic objects, with no internal structure. vertex cut separating set A set of vertices whose removal disconnects the graph. A one-vertex cut is called an articulation point or cut vertex. vertex set The set of vertices of a given graph G, sometimes denoted by V(G). vertices See vertex. Vizing 1. Vadim G. Vizing 2. Vizing's theorem, which states that the chromatic index is at most one more than the maximum degree. 3. Vizing's conjecture on the domination number of Cartesian products of graphs. volume The sum of the degrees of a set of vertices. == W == W The letter W is used in notation for wheel graphs and windmill graphs. The notation is not standardized. Wagner 1. Klaus Wagner 2. The Wagner graph, an eight-vertex Möbius ladder. 3. Wagner's theorem characterizing planar graphs by their forbidden minors. 4. Wagner's theorem characterizing the K5-minor-free graphs. walk A walk is a finite or infinite sequence of edges which joins a sequence of vertices. Walks are also sometimes called chains.
A walk is open if its first and last vertices are distinct, and closed if they coincide. weakly connected A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected (undirected) graph. weight A numerical value, assigned as a label to a vertex or edge of a graph. The weight of a subgraph is the sum of the weights of the vertices or edges within that subgraph. weighted graph A graph whose vertices or edges have been assigned weights. A vertex-weighted graph has weights on its vertices and an edge-weighted graph has weights on its edges. well-colored A well-colored graph is a graph all of whose greedy colorings use the same number of colors. well-covered A well-covered graph is a graph all of whose maximal independent sets are the same size. wheel A wheel graph is a graph formed by adding a universal vertex to a simple cycle. width 1. A synonym for degeneracy. 2. For other graph invariants known as width, see bandwidth, branchwidth, clique-width, pathwidth, and treewidth. 3. The width of a tree decomposition or path decomposition is one less than the maximum size of one of its bags, and may be used to define treewidth and pathwidth. 4. The width of a directed acyclic graph is the maximum cardinality of an antichain. windmill A windmill graph is the union of a collection of cliques, all of the same order as each other, with one shared vertex belonging to all the cliques and all other vertices and edges distinct. == See also == List of graph theory topics Gallery of named graphs Graph algorithms Glossary of areas of mathematics == References ==
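Topological sorting, defined in the glossary above as arranging a directed acyclic graph so that every edge goes from an earlier vertex to a later one, can be sketched with Kahn's algorithm (an illustrative implementation; the function name and vertex labels are assumptions, not from the source):

```python
from collections import deque

def topological_sort(vertices, edges):
    """Kahn's algorithm: repeatedly output a vertex with no remaining
    incoming edges. Returns None if the graph contains a cycle."""
    indegree = {v: 0 for v in vertices}
    successors = {v: [] for v in vertices}
    for u, v in edges:
        successors[u].append(v)
        indegree[v] += 1
    queue = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in successors[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    # If some vertices were never freed, a cycle prevented a topological order.
    return order if len(order) == len(vertices) else None
```

Any output order satisfies the defining property: for each edge (u, v), u appears before v.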
Wikipedia/Subgraph_(graph_theory)
GraphML is an XML-based file format for graphs. The GraphML file format results from the joint effort of the graph drawing community to define a common format for exchanging graph structure data. It uses an XML-based syntax and supports the entire range of possible graph structure constellations including directed, undirected, mixed graphs, hypergraphs, and application-specific attributes. == Overview == A GraphML file consists of an XML file containing a graph element, within which is an unordered sequence of node and edge elements. Each node element should have a distinct id attribute, and each edge element has source and target attributes that identify the endpoints of an edge by having the same value as the id attributes of those endpoints. A simple undirected graph with two nodes and one edge between them is thus expressed as a graph element containing two node elements and a single edge element whose source and target attributes name the two nodes. Additional features of the GraphML language allow its users to specify whether edges are directed or undirected, and to associate additional data with vertices or edges. == See also == yEd, a widespread graph editor that uses GraphML as its native file format (though ports and hypergraphs are not supported, and support for nested graphs is limited). Gephi, graph visualization software that supports a limited subset of GraphML. DOT (graph description language) The Boost libraries can read and write the GraphML format. == References == == External links == Official website GraphML Primer Comparison between XML to SVG Transformation Mechanisms, showing conversions between GraphML and SVG
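The two-node example described in the overview can be sketched as follows; the GraphML string is a minimal illustration (element names follow the format described above, while the node ids n0/n1 are arbitrary), parsed here with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A minimal GraphML document: one undirected graph, two nodes, one edge.
GRAPHML = """<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns">
  <graph id="G" edgedefault="undirected">
    <node id="n0"/>
    <node id="n1"/>
    <edge source="n0" target="n1"/>
  </graph>
</graphml>"""

# ElementTree qualifies tag names with the XML namespace.
NS = "{http://graphml.graphdrawing.org/xmlns}"

root = ET.fromstring(GRAPHML)
graph = root.find(f"{NS}graph")
nodes = [n.get("id") for n in graph.findall(f"{NS}node")]
edges = [(e.get("source"), e.get("target")) for e in graph.findall(f"{NS}edge")]
print(nodes, edges)
```

The edge's source and target attributes match the id attributes of the two node elements, as the overview requires.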
Wikipedia/GraphML
In computer science, a set is an abstract data type that can store unique values, without any particular order. It is a computer implementation of the mathematical concept of a finite set. Unlike most other collection types, rather than retrieving a specific element from a set, one typically tests a value for membership in a set. Some set data structures are designed for static or frozen sets that do not change after they are constructed. Static sets allow only query operations on their elements — such as checking whether a given value is in the set, or enumerating the values in some arbitrary order. Other variants, called dynamic or mutable sets, also allow the insertion and deletion of elements from the set. A multiset is a special kind of set in which an element can appear multiple times in the set. == Type theory == In type theory, sets are generally identified with their indicator function (characteristic function): accordingly, a set of values of type A may be denoted by 2^A or 𝒫(A). (Subtypes and subsets may be modeled by refinement types, and quotient sets may be replaced by setoids.) The characteristic function F of a set S is defined as: F(x) = 1 if x ∈ S, and F(x) = 0 if x ∉ S. In theory, many other abstract data structures can be viewed as set structures with additional operations and/or additional axioms imposed on the standard operations. For example, an abstract heap can be viewed as a set structure with a min(S) operation that returns the element of smallest value. == Operations == === Core set-theoretical operations === One may define the operations of the algebra of sets: union(S,T): returns the union of sets S and T. intersection(S,T): returns the intersection of sets S and T. difference(S,T): returns the difference of sets S and T.
subset(S,T): a predicate that tests whether the set S is a subset of set T. === Static sets === Typical operations that may be provided by a static set structure S are: is_element_of(x,S): checks whether the value x is in the set S. is_empty(S): checks whether the set S is empty. size(S) or cardinality(S): returns the number of elements in S. iterate(S): returns a function that returns one more value of S at each call, in some arbitrary order. enumerate(S): returns a list containing the elements of S in some arbitrary order. build(x1,x2,…,xn): creates a set structure with values x1,x2,...,xn. create_from(collection): creates a new set structure containing all the elements of the given collection or all the elements returned by the given iterator. === Dynamic sets === Dynamic set structures typically add: create(): creates a new, initially empty set structure. create_with_capacity(n): creates a new set structure, initially empty but capable of holding up to n elements. add(S,x): adds the element x to S, if it is not present already. remove(S,x): removes the element x from S, if it is present. capacity(S): returns the maximum number of values that S can hold. Some set structures may allow only some of these operations. The cost of each operation will depend on the implementation, and possibly also on the particular values stored in the set, and the order in which they are inserted. === Additional operations === There are many other operations that can (in principle) be defined in terms of the above, such as: pop(S): returns an arbitrary element of S, deleting it from S. pick(S): returns an arbitrary element of S. Functionally, the mutator pop can be interpreted as the pair of selectors (pick, rest), where rest returns the set consisting of all elements except for the arbitrary element. These can also be interpreted in terms of iterate. map(F,S): returns the set of distinct values resulting from applying function F to each element of S.
filter(P,S): returns the subset containing all elements of S that satisfy a given predicate P. fold(A0,F,S): returns the value A|S| after applying Ai+1 := F(Ai, e) for each element e of S, for some binary operation F. F must be associative and commutative for this to be well-defined. clear(S): deletes all elements of S. equal(S1, S2): checks whether the two given sets are equal (i.e. contain all and only the same elements). hash(S): returns a hash value for the static set S such that if equal(S1, S2) then hash(S1) = hash(S2). Other operations can be defined for sets with elements of a special type: sum(S): returns the sum of all elements of S for some definition of "sum". For example, over integers or reals, it may be defined as fold(0, add, S). collapse(S): given a set of sets, return the union. For example, collapse({{1}, {2, 3}}) == {1, 2, 3}. May be considered a kind of sum. flatten(S): given a set consisting of sets and atomic elements (elements that are not sets), returns a set whose elements are the atomic elements of the original top-level set or elements of the sets it contains. In other words, remove a level of nesting – like collapse, but allow atoms. This can be done a single time, or recursively flattening to obtain a set of only atomic elements. For example, flatten({1, {2, 3}}) == {1, 2, 3}. nearest(S,x): returns the element of S that is closest in value to x (by some metric). min(S), max(S): returns the minimum/maximum element of S. == Implementations == Sets can be implemented using various data structures, which provide different time and space trade-offs for various operations. Some implementations are designed to improve the efficiency of very specialized operations, such as nearest or union. Implementations described as "general use" typically strive to optimize the element_of, add, and delete operations. A simple implementation is to use a list, ignoring the order of the elements and taking care to avoid repeated values.
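A sketch of such a list-backed set follows (an illustrative implementation, not from the source; the class and method names are assumptions):

```python
class ListSet:
    """Set backed by an unordered list, taking care to avoid duplicates."""

    def __init__(self):
        self._items = []

    def add(self, x):
        if x not in self._items:      # O(n) scan to keep values unique
            self._items.append(x)

    def remove(self, x):
        if x in self._items:          # O(n) scan before deletion
            self._items.remove(x)

    def __contains__(self, x):        # membership test is also O(n)
        return x in self._items

    def __len__(self):
        return len(self._items)
```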
This is simple but inefficient, as operations like set membership or element deletion are O(n), as they require scanning the entire list. Sets are often instead implemented using more efficient data structures, particularly various flavors of trees, tries, or hash tables. As sets can be interpreted as a kind of map (by the indicator function), sets are commonly implemented in the same way as (partial) maps (associative arrays) – in which case the value of each key–value pair has the unit type or is a sentinel value (such as 1) – namely, a self-balancing binary search tree for sorted sets (which has O(log n) for most operations), or a hash table for unsorted sets (which has O(1) average-case, but O(n) worst-case, for most operations). A sorted linear hash table may be used to provide deterministically ordered sets. Further, in languages that support maps but not sets, sets can be implemented in terms of maps. For example, a common programming idiom in Perl that converts an array to a hash whose values are the sentinel value 1, for use as a set, is: my %set = map { $_ => 1 } @array; Other popular methods include arrays. In particular, a subset of the integers 1..n can be implemented efficiently as an n-bit bit array, which also supports very efficient union and intersection operations. A Bloom filter implements a set probabilistically, using a very compact representation but risking a small chance of false positives on queries. The Boolean set operations can be implemented in terms of more elementary operations (pop, clear, and add), but specialized algorithms may yield lower asymptotic time bounds. If sets are implemented as sorted lists, for example, the naive algorithm for union(S,T) will take time proportional to the length m of S times the length n of T; whereas a variant of the list merging algorithm will do the job in time proportional to m+n.
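The merge-based union over sorted lists can be sketched as follows (an illustrative implementation; the function name is an assumption):

```python
def union_sorted(s, t):
    """Union of two sorted, duplicate-free lists in O(m + n) time,
    by merging rather than repeatedly testing membership."""
    result = []
    i = j = 0
    while i < len(s) and j < len(t):
        if s[i] < t[j]:
            result.append(s[i]); i += 1
        elif s[i] > t[j]:
            result.append(t[j]); j += 1
        else:                          # element present in both sets: keep one copy
            result.append(s[i]); i += 1; j += 1
    result.extend(s[i:])               # at most one of these tails is nonempty
    result.extend(t[j:])
    return result
```

Each input position is visited once, which is the m+n bound mentioned above, versus the m×n cost of testing every element of one list against the other.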
Moreover, there are specialized set data structures (such as the union-find data structure) that are optimized for one or more of these operations, at the expense of others. == Language support == One of the earliest languages to support sets was Pascal; many languages now include them, whether in the core language or in a standard library. In C++, the Standard Template Library (STL) provides the set template class, which is typically implemented using a binary search tree (e.g. red–black tree); SGI's STL also provides the hash_set template class, which implements a set using a hash table. C++11 has support for the unordered_set template class, which is implemented using a hash table. In sets, the elements themselves are the keys, in contrast to sequenced containers, where elements are accessed using their (relative or absolute) position. Set elements must have a strict weak ordering. The Rust standard library provides the generic HashSet and BTreeSet types. Java offers the Set interface to support sets (with the HashSet class implementing it using a hash table), and the SortedSet sub-interface to support sorted sets (with the TreeSet class implementing it using a binary search tree). Apple's Foundation framework (part of Cocoa) provides the Objective-C classes NSSet, NSMutableSet, NSCountedSet, NSOrderedSet, and NSMutableOrderedSet. The CoreFoundation APIs provide the CFSet and CFMutableSet types for use in C. Python has built-in set and frozenset types since 2.4, and since Python 3.0 and 2.7, supports non-empty set literals using a curly-bracket syntax, e.g.: {x, y, z}; empty sets must be created using set(), because Python uses {} to represent the empty dictionary. The .NET Framework provides the generic HashSet and SortedSet classes that implement the generic ISet interface. Smalltalk's class library includes Set and IdentitySet, using equality and identity for the inclusion test, respectively.
Many dialects provide variations for compressed storage (NumberSet, CharacterSet), for ordering (OrderedSet, SortedSet, etc.) or for weak references (WeakIdentitySet). Ruby's standard library includes a set module which contains Set and SortedSet classes that implement sets using hash tables, the latter allowing iteration in sorted order. OCaml's standard library contains a Set module, which implements a functional set data structure using binary search trees. The GHC implementation of Haskell provides a Data.Set module, which implements immutable sets using binary search trees. The Tcl Tcllib package provides a set module which implements a set data structure based upon TCL lists. The Swift standard library contains a Set type, since Swift 1.2. JavaScript introduced Set as a standard built-in object with the ECMAScript 2015 standard. Erlang's standard library has a sets module. Clojure has literal syntax for hashed sets, and also implements sorted sets. LabVIEW has native support for sets, from version 2019. Ada provides the Ada.Containers.Hashed_Sets and Ada.Containers.Ordered_Sets packages. As noted in the previous section, in languages which do not directly support sets but do support associative arrays, sets can be emulated using associative arrays, by using the elements as keys, and using a dummy value as the values, which are ignored. == Multiset == A generalization of the notion of a set is that of a multiset or bag, which is similar to a set but allows repeated ("equal") values (duplicates). This is used in two distinct senses: either equal values are considered identical, and are simply counted, or equal values are considered equivalent, and are stored as distinct items. For example, given a list of people (by name) and ages (in years), one could construct a multiset of ages, which simply counts the number of people of a given age. 
Alternatively, one can construct a multiset of people, where two people are considered equivalent if their ages are the same (but may be different people and have different names), in which case each pair (name, age) must be stored, and selecting on a given age gives all the people of a given age. Formally, it is possible for objects in computer science to be considered "equal" under some equivalence relation but still distinct under another relation. Some types of multiset implementations will store distinct equal objects as separate items in the data structure, while others will collapse them down to one version (the first one encountered) and keep a positive integer count of the multiplicity of the element. As with sets, multisets can naturally be implemented using hash tables or trees, which yield different performance characteristics. The set of all bags over type T is given by the expression bag T. If by multiset one considers equal items identical and simply counts them, then a multiset can be interpreted as a function from the input domain to the non-negative integers (natural numbers), generalizing the identification of a set with its indicator function. In some cases a multiset in this counting sense may be generalized to allow negative values, as in Python. C++'s Standard Template Library implements both sorted and unsorted multisets. It provides the multiset class for the sorted multiset, as a kind of associative container, which implements this multiset using a self-balancing binary search tree. It provides the unordered_multiset class for the unsorted multiset, as a kind of unordered associative container, which implements this multiset using a hash table. The unsorted multiset is standard as of C++11; previously SGI's STL provided the hash_multiset class, which was copied and eventually standardized.
For Java, third-party libraries provide multiset functionality: Apache Commons Collections provides the Bag and SortedBag interfaces, with implementing classes like HashBag and TreeBag. Google Guava provides the Multiset interface, with implementing classes like HashMultiset and TreeMultiset. Apple provides the NSCountedSet class as part of Cocoa, and the CFBag and CFMutableBag types as part of CoreFoundation. Python's standard library includes collections.Counter, which is similar to a multiset. Smalltalk includes the Bag class, which can be instantiated to use either identity or equality as predicate for inclusion test. Where a multiset data structure is not available, a workaround is to use a regular set, but override the equality predicate of its items to always return "not equal" on distinct objects (however, such will still not be able to store multiple occurrences of the same object) or use an associative array mapping the values to their integer multiplicities (this will not be able to distinguish between equal elements at all). Typical operations on bags: contains(B, x): checks whether the element x is present (at least once) in the bag B is_sub_bag(B1, B2): checks whether each element in the bag B1 occurs in B1 no more often than it occurs in the bag B2; sometimes denoted as B1 ⊑ B2. count(B, x): returns the number of times that the element x occurs in the bag B; sometimes denoted as B # x. scaled_by(B, n): given a natural number n, returns a bag which contains the same elements as the bag B, except that every element that occurs m times in B occurs n * m times in the resulting bag; sometimes denoted as n ⊗ B. union(B1, B2): returns a bag containing just those values that occur in either the bag B1 or the bag B2, except that the number of times a value x occurs in the resulting bag is equal to (B1 # x) + (B2 # x); sometimes denoted as B1 ⊎ B2. 
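In Python, the collections.Counter class mentioned above can model several of these bag operations directly; the following is an illustrative sketch (the variable names are assumptions):

```python
from collections import Counter

b1 = Counter(["a", "a", "b"])            # the bag {a, a, b}
b2 = Counter(["a", "b", "b", "c"])       # the bag {a, b, b, c}

count_a = b1["a"]                        # count(B1, a): multiplicity of a in B1
bag_union = b1 + b2                      # B1 ⊎ B2: multiplicities are added
sub_bag = all(b1[x] <= b2[x] for x in b1)  # is_sub_bag(B1, B2)
scaled = Counter({x: 3 * n for x, n in b1.items()})  # scaled_by(B1, 3)

print(count_a, bag_union["b"], sub_bag, scaled["a"])
```

Note that Counter's + operator implements the additive union (B1 # x) + (B2 # x) described above; b1 is not a sub-bag of b2 here because "a" occurs twice in b1 but only once in b2.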
=== Multisets in SQL === In relational databases, a table can be a (mathematical) set or a multiset, depending on the presence of uniqueness constraints on some columns (which turns it into a candidate key). SQL allows the selection of rows from a relational table: this operation will in general yield a multiset, unless the keyword DISTINCT is used to force the rows to be all different, or the selection includes the primary (or a candidate) key. In ANSI SQL the MULTISET keyword can be used to transform a subquery into a collection expression: a plain subquery is a general select that can be used as a subquery expression of another more general query, while wrapping the subquery in the MULTISET keyword transforms it into a collection expression that can be used in another query, or in an assignment to a column of appropriate collection type. == See also == Bloom filter Disjoint set Set (mathematics) == Notes == == References ==
Wikipedia/Set_(computer_science)
Graph Modeling Language (GML) is a hierarchical ASCII-based file format for describing graphs. It has also been named Graph Meta Language. == Example == A simple graph in GML format:

graph [
  comment "This is a sample graph"
  directed 1
  id 42
  label "Hello, I am a graph"
  node [
    id 1
    label "node 1"
    thisIsASampleAttribute 42
  ]
  node [
    id 2
    label "node 2"
    thisIsASampleAttribute 43
  ]
  node [
    id 3
    label "node 3"
    thisIsASampleAttribute 44
  ]
  edge [
    source 1
    target 2
    label "Edge from node 1 to node 2"
  ]
  edge [
    source 2
    target 3
    label "Edge from node 2 to node 3"
  ]
  edge [
    source 3
    target 1
    label "Edge from node 3 to node 1"
  ]
]

== Applications supporting GML == Cytoscape, an open source bioinformatics software platform for visualizing molecular interaction networks, loads and saves previously-constructed interaction networks in GML. igraph, an open source network analysis library with interfaces to multiple programming languages. Gephi, an open source graph visualization and manipulation software. Graph-tool, a free Python module for manipulation and statistical analysis of graphs. NetworkX, an open source Python library for studying complex graphs. Tulip (software) is free software in the domain of information visualisation capable of manipulating huge graphs (with more than 1,000,000 elements). yEd, a free Java-based graph editor, supports import from and export to GML. The Graphviz project includes two command-line tools (gml2gv and gv2gml) that can convert to and from the DOT file format. Wolfram Language, a general very high-level programming language, supports GML import and export. == See also == Graph Query Language (GQL) DGML == References == == External links == GML: A portable Graph File Format, Michael Himsolt - 2010/11/30 (archived version) Unravelling Graph-Exchange File Formats, by Matthew Roughan and Jonathan Tuke, 2015, https://arxiv.org/pdf/1503.02781.pdf
Wikipedia/Graph_Modelling_Language
In graph theory, a tree is an undirected graph in which any two vertices are connected by exactly one path, or equivalently a connected acyclic undirected graph. A forest is an undirected graph in which any two vertices are connected by at most one path, or equivalently an acyclic undirected graph, or equivalently a disjoint union of trees. A directed tree, oriented tree, polytree, or singly connected network is a directed acyclic graph (DAG) whose underlying undirected graph is a tree. A polyforest (or directed forest or oriented forest) is a directed acyclic graph whose underlying undirected graph is a forest. The various kinds of data structures referred to as trees in computer science have underlying graphs that are trees in graph theory, although such data structures are generally rooted trees. A rooted tree may be directed, called a directed rooted tree, either making all its edges point away from the root—in which case it is called an arborescence or out-tree—or making all its edges point towards the root—in which case it is called an anti-arborescence or in-tree. A rooted tree itself has been defined by some authors as a directed graph. A rooted forest is a disjoint union of rooted trees. A rooted forest may be directed, called a directed rooted forest, either making all its edges point away from the root in each rooted tree—in which case it is called a branching or out-forest—or making all its edges point towards the root in each rooted tree—in which case it is called an anti-branching or in-forest. The term tree was coined in 1857 by the British mathematician Arthur Cayley. == Definitions == === Tree === A tree is an undirected graph G that satisfies any of the following equivalent conditions: G is connected and acyclic (contains no cycles). G is acyclic, and a simple cycle is formed if any edge is added to G. G is connected, but would become disconnected if any single edge is removed from G. G is connected and the complete graph K3 is not a minor of G. 
Any two vertices in G can be connected by a unique simple path. If G has finitely many vertices, say n of them, then the above statements are also equivalent to any of the following conditions: G is connected and has n − 1 edges. G is connected, and every subgraph of G includes at least one vertex with zero or one incident edges. (That is, G is connected and 1-degenerate.) G has no simple cycles and has n − 1 edges. As elsewhere in graph theory, the order-zero graph (graph with no vertices) is generally not considered to be a tree: while it is vacuously connected as a graph (any two vertices can be connected by a path), it is not 0-connected (or even (−1)-connected) in algebraic topology, unlike non-empty trees, and violates the "one more vertex than edges" relation. It may, however, be considered as a forest consisting of zero trees. An internal vertex (or inner vertex) is a vertex of degree at least 2. Similarly, an external vertex (or outer vertex, terminal vertex or leaf) is a vertex of degree 1. A branch vertex in a tree is a vertex of degree at least 3. An irreducible tree (or series-reduced tree) is a tree in which there is no vertex of degree 2 (enumerated at sequence A000014 in the OEIS). === Forest === A forest is an undirected acyclic graph or equivalently a disjoint union of trees. Consequently, each connected component of a forest is a tree. As special cases, the order-zero graph (a forest consisting of zero trees), a single tree, and an edgeless graph, are examples of forests. Since V − E = 1 for every tree, we can easily count the number of trees within a forest by computing the difference between the total number of vertices and the total number of edges: V − E = number of trees in a forest. === Polytree === A polytree (or directed tree or oriented tree or singly connected network) is a directed acyclic graph (DAG) whose underlying undirected graph is a tree.
In other words, if we replace its directed edges with undirected edges, we obtain an undirected graph that is both connected and acyclic. Some authors restrict the phrase "directed tree" to the case where the edges are all directed towards a particular vertex, or all directed away from a particular vertex (see arborescence). === Polyforest === A polyforest (or directed forest or oriented forest) is a directed acyclic graph whose underlying undirected graph is a forest. In other words, if we replace its directed edges with undirected edges, we obtain an undirected graph that is acyclic. As with directed trees, some authors restrict the phrase "directed forest" to the case where the edges of each connected component are all directed towards a particular vertex, or all directed away from a particular vertex (see branching). === Rooted tree === A rooted tree is a tree in which one vertex has been designated the root. The edges of a rooted tree can be assigned a natural orientation, either away from or towards the root, in which case the structure becomes a directed rooted tree. When a directed rooted tree has an orientation away from the root, it is called an arborescence or out-tree; when it has an orientation towards the root, it is called an anti-arborescence or in-tree. The tree-order is the partial ordering on the vertices of a tree with u < v if and only if the unique path from the root to v passes through u. A rooted tree T that is a subgraph of some graph G is a normal tree if the ends of every T-path in G are comparable in this tree-order (Diestel 2005, p. 15). Rooted trees, often with an additional structure such as an ordering of the neighbors at each vertex, are a key data structure in computer science; see tree data structure. In a context where trees typically have a root, a tree without any designated root is called a free tree. A labeled tree is a tree in which each vertex is given a unique label. 
The vertices of a labeled tree on n vertices (for nonnegative integers n) are typically given the labels 1, 2, …, n. A recursive tree is a labeled rooted tree where the vertex labels respect the tree order (i.e., if u < v for two vertices u and v, then the label of u is smaller than the label of v). In a rooted tree, the parent of a vertex v is the vertex connected to v on the path to the root; every vertex has a unique parent, except the root, which has no parent. A child of a vertex v is a vertex of which v is the parent. An ascendant of a vertex v is any vertex that is either the parent of v or is (recursively) an ascendant of a parent of v. A descendant of a vertex v is any vertex that is either a child of v or is (recursively) a descendant of a child of v. A sibling to a vertex v is any other vertex in the tree that shares a parent with v. A leaf is a vertex with no children. An internal vertex is a vertex that is not a leaf. The height of a vertex in a rooted tree is the length of the longest downward path from that vertex to a leaf. The height of the tree is the height of the root. The depth of a vertex is the length of the path to its root (root path). The depth of a tree is the maximum depth of any vertex. Depth is commonly needed in the manipulation of the various self-balancing trees, AVL trees in particular. The root has depth zero, leaves have height zero, and a tree with only a single vertex (hence both a root and leaf) has depth and height zero. Conventionally, an empty tree (a tree with no vertices, if such are allowed) has depth and height −1. A k-ary tree (for nonnegative integers k) is a rooted tree in which each vertex has at most k children. 2-ary trees are often called binary trees, while 3-ary trees are sometimes called ternary trees. === Ordered tree === An ordered tree (alternatively, plane tree or positional tree) is a rooted tree in which an ordering is specified for the children of each vertex.
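The depth and height definitions above translate directly into code. A short sketch (assuming, as an illustration, that the rooted tree is given as a map from each vertex to its list of children):

```python
def depths_and_heights(children, root):
    """Return (depth, height) dictionaries for a rooted tree.

    `children` maps each vertex to the list of its children.
    Depth counts edges from the root; height is the longest downward path to a leaf.
    """
    depth, height = {root: 0}, {}
    order = [root]
    for v in order:                      # breadth-first pass fills depths top-down
        for c in children.get(v, []):
            depth[c] = depth[v] + 1
            order.append(c)
    for v in reversed(order):            # reverse order: children before parents
        kids = children.get(v, [])
        height[v] = 1 + max(height[c] for c in kids) if kids else 0
    return depth, height
```

On the tree with root 0, children {0: [1, 2], 1: [3]}, the root has height 2 and vertex 3 has depth 2, matching the conventions in the text (root depth zero, leaf height zero).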
This is called a "plane tree" because an ordering of the children is equivalent to an embedding of the tree in the plane, with the root at the top and the children of each vertex lower than that vertex. Given an embedding of a rooted tree in the plane, if one fixes a direction of children, say left to right, then an embedding gives an ordering of the children. Conversely, given an ordered tree, and conventionally drawing the root at the top, then the child vertices in an ordered tree can be drawn left-to-right, yielding an essentially unique planar embedding. == Properties == Every tree is a bipartite graph. A graph is bipartite if and only if it contains no cycles of odd length. Since a tree contains no cycles at all, it is bipartite. Every tree with only countably many vertices is a planar graph. Every connected graph G admits a spanning tree, which is a tree that contains every vertex of G and whose edges are edges of G. More specific types of spanning trees, existing in every connected finite graph, include depth-first search trees and breadth-first search trees. Generalizing the existence of depth-first-search trees, every connected graph with only countably many vertices has a Trémaux tree. However, some uncountable-order graphs do not have such a tree. Every finite tree with n vertices, with n > 1, has at least two terminal vertices (leaves). This minimal number of leaves is characteristic of path graphs; the maximal number, n − 1, is attained only by star graphs. The number of leaves is at least the maximum vertex degree. For any three vertices in a tree, the three paths between them have exactly one vertex in common. More generally, a vertex in a graph that belongs to three shortest paths among three vertices is called a median of these vertices. Because every three vertices in a tree have a unique median, every tree is a median graph. Every tree has a center consisting of one vertex or two adjacent vertices.
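The center just mentioned can be computed by repeatedly deleting all current leaves until one or two vertices remain; a sketch (again assuming vertices numbered 0..n−1):

```python
def tree_center(n, edges):
    """Center of a tree on vertices 0..n-1: peel off leaves layer by layer."""
    if n == 1:
        return [0]
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(range(n))
    leaves = [v for v in remaining if len(adj[v]) == 1]
    while len(remaining) > 2:
        remaining -= set(leaves)
        new_leaves = []
        for leaf in leaves:
            for nb in adj[leaf]:
                adj[nb].discard(leaf)      # remove the peeled leaf from its neighbor
                if nb in remaining and len(adj[nb]) == 1:
                    new_leaves.append(nb)
        leaves = new_leaves
    return sorted(remaining)
```

A path with an odd number of vertices has a one-vertex center (its midpoint); an even path has a two-vertex center, illustrating both cases of the theorem.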
The center is the middle vertex or middle two vertices in every longest path. Similarly, every n-vertex tree has a centroid consisting of one vertex or two adjacent vertices. In the first case removal of the vertex splits the tree into subtrees of fewer than n/2 vertices. In the second case, removal of the edge between the two centroidal vertices splits the tree into two subtrees of exactly n/2 vertices. The maximal cliques of a tree are precisely its edges, implying that the class of trees has few cliques. == Enumeration == === Labeled trees === Cayley's formula states that there are {\displaystyle n^{n-2}} trees on n labeled vertices. A classic proof uses Prüfer sequences, which naturally show a stronger result: the number of trees with vertices 1, 2, …, n of degrees d1, d2, …, dn respectively, is the multinomial coefficient {\displaystyle {n-2 \choose d_{1}-1,d_{2}-1,\ldots ,d_{n}-1}.} A more general problem is to count spanning trees in an undirected graph, which is addressed by the matrix tree theorem. (Cayley's formula is the special case of spanning trees in a complete graph.) The similar problem of counting all the subtrees regardless of size is #P-complete in the general case (Jerrum (1994)). === Unlabeled trees === Counting the number of unlabeled free trees is a harder problem. No closed formula for the number t(n) of trees with n vertices up to graph isomorphism is known. The first few values of t(n) are 1, 1, 1, 1, 2, 3, 6, 11, 23, 47, 106, 235, 551, 1301, 3159, … (sequence A000055 in the OEIS). Otter (1948) proved the asymptotic estimate {\displaystyle t(n)\sim C\alpha ^{n}n^{-5/2}\quad {\text{as }}n\to \infty ,} with C ≈ 0.534949606... and α ≈ 2.95576528565... (sequence A051491 in the OEIS). Here, the ~ symbol means that {\displaystyle \lim _{n\to \infty }{\frac {t(n)}{C\alpha ^{n}n^{-5/2}}}=1.}
This is a consequence of his asymptotic estimate for the number r(n) of unlabeled rooted trees with n vertices: {\displaystyle r(n)\sim D\alpha ^{n}n^{-3/2}\quad {\text{as }}n\to \infty ,} with D ≈ 0.43992401257... and the same α as above (cf. Knuth (1997), chap. 2.3.4.4 and Flajolet & Sedgewick (2009), chap. VII.5, p. 475). The first few values of r(n) are 1, 1, 2, 4, 9, 20, 48, 115, 286, 719, 1842, 4766, 12486, 32973, … (sequence A000081 in the OEIS). == Types of trees == A path graph (or linear graph) consists of n vertices arranged in a line, so that vertices i and i + 1 are connected by an edge for i = 1, …, n – 1. A starlike tree consists of a central vertex called root and several path graphs attached to it. More formally, a tree is starlike if it has exactly one vertex of degree greater than 2. A star tree is a tree which consists of a single internal vertex (and n – 1 leaves). In other words, a star tree of order n is a tree of order n with as many leaves as possible. A caterpillar tree is a tree in which all vertices are within distance 1 of a central path subgraph. A lobster tree is a tree in which all vertices are within distance 2 of a central path subgraph. A regular tree of degree d is the infinite tree with d edges at each vertex. These arise as the Cayley graphs of free groups, and in the theory of Tits buildings. In statistical mechanics they are known as Bethe lattices. == See also == Decision tree Hypertree Multitree Pseudoforest Tree structure (general) Tree (data structure) Unrooted binary tree == Notes == == References == Bender, Edward A.; Williamson, S. Gill (2010), Lists, Decisions and Graphs. With an Introduction to Probability Dasgupta, Sanjoy (1999), "Learning polytrees", Proc. 15th Conference on Uncertainty in Artificial Intelligence (UAI 1999), Stockholm, Sweden, July–August 1999 (PDF), pp. 134–141.
Deo, Narsingh (1974), Graph Theory with Applications to Engineering and Computer Science (PDF), Englewood, New Jersey: Prentice-Hall, ISBN 0-13-363473-6, archived (PDF) from the original on 2019-05-17 Harary, Frank; Prins, Geert (1959), "The number of homeomorphically irreducible trees, and other species", Acta Mathematica, 101 (1–2): 141–162, doi:10.1007/BF02559543, ISSN 0001-5962 Harary, Frank; Sumner, David (1980), "The dichromatic number of an oriented tree", Journal of Combinatorics, Information & System Sciences, 5 (3): 184–187, MR 0603363. Kim, Jin H.; Pearl, Judea (1983), "A computational model for causal and diagnostic reasoning in inference engines", Proc. 8th International Joint Conference on Artificial Intelligence (IJCAI 1983), Karlsruhe, Germany, August 1983 (PDF), pp. 190–193. Li, Gang (1996), "Generation of Rooted Trees and Free Trees", M.S. Thesis, Dept. of Computer Science, University of Victoria, BC, Canada (PDF), p. 9. Simion, Rodica (1991), "Trees with 1-factors and oriented trees", Discrete Mathematics, 88 (1): 93–104, doi:10.1016/0012-365X(91)90061-6, MR 1099270. == Further reading == Diestel, Reinhard (2005), Graph Theory (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-26183-4. Flajolet, Philippe; Sedgewick, Robert (2009), Analytic Combinatorics, Cambridge University Press, ISBN 978-0-521-89806-5 "Tree", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Knuth, Donald E. (November 14, 1997), The Art of Computer Programming Volume 1: Fundamental Algorithms (3rd ed.), Addison-Wesley Professional Jerrum, Mark (1994), "Counting trees in a graph is #P-complete", Information Processing Letters, 51 (3): 111–116, doi:10.1016/0020-0190(94)00085-9, ISSN 0020-0190. Otter, Richard (1948), "The Number of Trees", Annals of Mathematics, Second Series, 49 (3): 583–599, doi:10.2307/1969046, JSTOR 1969046.
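As a concrete complement to the Enumeration section, Cayley's formula can be spot-checked by decoding Prüfer sequences: each of the n^(n−2) sequences over {1, …, n} of length n − 2 decodes to a distinct labeled tree. A sketch of the standard decoder:

```python
import heapq
from itertools import product

def prufer_decode(seq, n):
    """Decode a Prüfer sequence (labels 1..n) into the edge list of a labeled tree."""
    degree = [1] * (n + 1)                 # degree[v] = 1 + multiplicity of v in seq
    for x in seq:
        degree[x] += 1
    leaves = [v for v in range(1, n + 1) if degree[v] == 1]
    heapq.heapify(leaves)                  # always join the smallest current leaf
    edges = []
    for x in seq:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, x))
        degree[x] -= 1
        if degree[x] == 1:
            heapq.heappush(leaves, x)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

def count_labeled_trees(n):
    """Count distinct trees obtained from all Prüfer sequences of length n - 2."""
    trees = {frozenset(frozenset(e) for e in prufer_decode(seq, n))
             for seq in product(range(1, n + 1), repeat=n - 2)}
    return len(trees)
```

For n = 4 and n = 5 this yields 4² = 16 and 5³ = 125 distinct labeled trees, as Cayley's formula predicts.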
Wikipedia/Root_(graph_theory)
Graph drawing is an area of mathematics and computer science combining methods from geometric graph theory and information visualization to derive two-dimensional depictions of graphs arising from applications such as social network analysis, cartography, linguistics, and bioinformatics. A drawing of a graph or network diagram is a pictorial representation of the vertices and edges of a graph. This drawing should not be confused with the graph itself: very different layouts can correspond to the same graph. In the abstract, all that matters is which pairs of vertices are connected by edges. In the concrete, however, the arrangement of these vertices and edges within a drawing affects its understandability, usability, fabrication cost, and aesthetics. The problem gets worse if the graph changes over time by adding and deleting edges (dynamic graph drawing) and the goal is to preserve the user's mental map. == Graphical conventions == Graphs are frequently drawn as node–link diagrams in which the vertices are represented as disks, boxes, or textual labels and the edges are represented as line segments, polylines, or curves in the Euclidean plane. Node–link diagrams can be traced back to the 14th-16th century works of Pseudo-Lull which were published under the name of Ramon Llull, a 13th century polymath. Pseudo-Lull drew diagrams of this type for complete graphs in order to analyze all pairwise combinations among sets of metaphysical concepts. In the case of directed graphs, arrowheads form a commonly used graphical convention to show their orientation; however, user studies have shown that other conventions such as tapering provide this information more effectively. Upward planar drawing uses the convention that every edge is oriented from a lower vertex to a higher vertex, making arrowheads unnecessary. 
Alternative conventions to node–link diagrams include adjacency representations such as circle packings, in which vertices are represented by disjoint regions in the plane and edges are represented by adjacencies between regions; intersection representations in which vertices are represented by non-disjoint geometric objects and edges are represented by their intersections; visibility representations in which vertices are represented by regions in the plane and edges are represented by regions that have an unobstructed line of sight to each other; confluent drawings, in which edges are represented as smooth curves within mathematical train tracks; fabrics, in which nodes are represented as horizontal lines and edges as vertical lines; and visualizations of the adjacency matrix of the graph. == Quality measures == Many different quality measures have been defined for graph drawings, in an attempt to find objective means of evaluating their aesthetics and usability. In addition to guiding the choice between different layout methods for the same graph, some layout methods attempt to directly optimize these measures. The crossing number of a drawing is the number of pairs of edges that cross each other. If the graph is planar, then it is often convenient to draw it without any edge intersections; that is, in this case, a graph drawing represents a graph embedding. However, nonplanar graphs frequently arise in applications, so graph drawing algorithms must generally allow for edge crossings. The area of a drawing is the size of its smallest bounding box, relative to the closest distance between any two vertices. Drawings with smaller area are generally preferable to those with larger area, because they allow the features of the drawing to be shown at greater size and therefore more legibly. The aspect ratio of the bounding box may also be important. 
Symmetry display is the problem of finding symmetry groups within a given graph, and finding a drawing that displays as much of the symmetry as possible. Some layout methods automatically lead to symmetric drawings; alternatively, some drawing methods start by finding symmetries in the input graph and using them to construct a drawing. It is important that edges have shapes that are as simple as possible, to make it easier for the eye to follow them. In polyline drawings, the complexity of an edge may be measured by its number of bends, and many methods aim to provide drawings with few total bends or few bends per edge. Similarly for spline curves the complexity of an edge may be measured by the number of control points on the edge. Several commonly used quality measures concern lengths of edges: it is generally desirable to minimize the total length of the edges as well as the maximum length of any edge. Additionally, it may be preferable for the lengths of edges to be uniform rather than highly varied. Angular resolution is a measure of the sharpest angles in a graph drawing. If a graph has vertices with high degree then it necessarily will have small angular resolution, but the angular resolution can be bounded below by a function of the degree. The slope number of a graph is the minimum number of distinct edge slopes needed in a drawing with straight line segment edges (allowing crossings). Cubic graphs have slope number at most four, but graphs of degree five may have unbounded slope number; it remains open whether the slope number of degree-4 graphs is bounded. == Layout methods == There are many different graph layout strategies: In force-based layout systems, the graph drawing software modifies an initial vertex placement by continuously moving the vertices according to a system of forces based on physical metaphors related to systems of springs or molecular mechanics. 
Typically, these systems combine attractive forces between adjacent vertices with repulsive forces between all pairs of vertices, in order to seek a layout in which edge lengths are small while vertices are well-separated. These systems may perform gradient descent based minimization of an energy function, or they may translate the forces directly into velocities or accelerations for the moving vertices. Spectral layout methods use as coordinates the eigenvectors of a matrix such as the Laplacian derived from the adjacency matrix of the graph. Orthogonal layout methods allow the edges of the graph to run horizontally or vertically, parallel to the coordinate axes of the layout. These methods were originally designed for VLSI and PCB layout problems but they have also been adapted for graph drawing. They typically involve a multiphase approach in which an input graph is planarized by replacing crossing points by vertices, a topological embedding of the planarized graph is found, edge orientations are chosen to minimize bends, vertices are placed consistently with these orientations, and finally a layout compaction stage reduces the area of the drawing. Tree layout algorithms display the graph in a rooted tree-like formation, suitable for trees. Often, in a technique called "balloon layout", the children of each node in the tree are drawn on a circle surrounding the node, with the radii of these circles diminishing at lower levels in the tree so that these circles do not overlap. Layered graph drawing methods (often called Sugiyama-style drawing) are best suited for directed acyclic graphs or graphs that are nearly acyclic, such as the graphs of dependencies between modules or functions in a software system.
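The force-based scheme described above (attraction along edges, all-pairs repulsion) can be sketched as a toy spring embedder. The constants, the displacement cap, and the force laws below are arbitrary illustrative choices, not taken from any particular system:

```python
import math
import random

def force_layout(n, edges, iters=200, k=1.0, step=0.02, seed=0):
    """Crude spring embedder: returns {vertex: (x, y)} positions for vertices 0..n-1."""
    rng = random.Random(seed)
    pos = {v: (rng.random(), rng.random()) for v in range(n)}
    for _ in range(iters):
        force = {v: [0.0, 0.0] for v in range(n)}
        for u in range(n):                               # repulsion between all pairs
            for v in range(u + 1, n):
                dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d ** 2
                force[u][0] += f * dx / d; force[u][1] += f * dy / d
                force[v][0] -= f * dx / d; force[v][1] -= f * dy / d
        for u, v in edges:                               # attraction along edges
            dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d / k
            force[u][0] += f * dx / d; force[u][1] += f * dy / d
            force[v][0] -= f * dx / d; force[v][1] -= f * dy / d
        for v in range(n):                               # move each vertex a capped step
            fx, fy = force[v]
            mag = math.hypot(fx, fy)
            if mag > 0:
                lim = min(step * mag, 0.1)
                pos[v] = (pos[v][0] + fx / mag * lim, pos[v][1] + fy / mag * lim)
    return pos
```

Production systems refine this in many ways (cooling schedules, spatial data structures for the repulsion pass, multilevel coarsening), but the attract/repel loop is the core idea.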
In these methods, the nodes of the graph are arranged into horizontal layers using methods such as the Coffman–Graham algorithm, in such a way that most edges go downwards from one layer to the next; after this step, the nodes within each layer are arranged in order to minimize crossings. Arc diagrams, a layout style dating back to the 1960s, place vertices on a line; edges may be drawn as semicircles above or below the line, or as smooth curves linked together from multiple semicircles. Circular layout methods place the vertices of the graph on a circle, choosing carefully the ordering of the vertices around the circle to reduce crossings and place adjacent vertices close to each other. Edges may be drawn either as chords of the circle or as arcs inside or outside of the circle. In some cases, multiple circles may be used. Dominance drawing places vertices in such a way that one vertex is upwards, rightwards, or both of another if and only if it is reachable from the other vertex. In this way, the layout style makes the reachability relation of the graph visually apparent. == Application-specific graph drawings == Graphs and graph drawings arising in other areas of application include Sociograms, drawings of a social network, as often offered by social network analysis software Hasse diagrams, a type of graph drawing specialized to partial orders Dessin d'enfants, a type of graph drawing used in algebraic geometry State diagrams, graphical representations of finite-state machines Computer network diagrams, depictions of the nodes and connections in a computer network Flowcharts and drakon-charts, drawings in which the nodes represent the steps of an algorithm and the edges represent control flow between steps. Project network, graphical depiction of the chronological order in which activities of a project are to be completed. 
Data-flow diagrams, drawings in which the nodes represent the components of an information system and the edges represent the movement of information from one component to another. Bioinformatics including phylogenetic trees, protein–protein interaction networks, and metabolic pathways. In addition, the placement and routing steps of electronic design automation (EDA) are similar in many ways to graph drawing, as is the problem of greedy embedding in distributed computing, and the graph drawing literature includes several results borrowed from the EDA literature. However, these problems also differ in several important ways: for instance, in EDA, area minimization and signal length are more important than aesthetics, and the routing problem in EDA may have more than two terminals per net while the analogous problem in graph drawing generally only involves pairs of vertices for each edge. == Software == Software, systems, and providers of systems for drawing graphs include: BioFabric open-source software for visualizing large networks by drawing nodes as horizontal lines. Cytoscape, open-source software for visualizing molecular interaction networks Gephi, open-source network analysis and visualization software graph-tool, a free/libre Python library for analysis of graphs Graphviz, an open-source graph drawing system from AT&T Corporation Linkurious, a commercial network analysis and visualization software for graph databases Mathematica, a general-purpose computation tool that includes 2D and 3D graph visualization and graph analysis tools. Microsoft Automatic Graph Layout, open-source .NET library (formerly called GLEE) for laying out graphs NetworkX is a Python library for studying graphs and networks. Tulip, an open-source data visualization tool yEd, a graph editor with graph layout functionality PGF/TikZ 3.0 with the graphdrawing package (requires LuaTeX). 
LaNet-vi, an open-source large network visualization software == See also == International Symposium on Graph Drawing List of Unified Modeling Language tools == References == === Footnotes === === General references === === Specialized subtopics === == Further reading == == External links == GraphX library for .NET Archived 2018-01-26 at the Wayback Machine: open-source WPF library for graph calculation and visualization. Supports many layout and edge routing algorithms. Graph drawing e-print archive: including information on papers from all Graph Drawing symposia.
Wikipedia/Graph_drawing_software
In mathematical logic and computer science, the calculus of constructions (CoC) is a type theory created by Thierry Coquand. It can serve as both a typed programming language and as a constructive foundation for mathematics. For this second reason, the CoC and its variants have been the basis for Coq and other proof assistants. Some of its variants include the calculus of inductive constructions (which adds inductive types), the calculus of (co)inductive constructions (which adds coinduction), and the predicative calculus of inductive constructions (which removes some impredicativity). == General traits == The CoC is a higher-order typed lambda calculus, initially developed by Thierry Coquand. It is well known for being at the top of Barendregt's lambda cube. It is possible within CoC to define functions from terms to terms, as well as from terms to types, from types to types, and from types to terms. The CoC is strongly normalizing, and hence consistent. == Usage == The CoC has been developed alongside the Coq proof assistant. As features were added to the theory (or possible liabilities removed), they became available in Coq. Variants of the CoC are used in other proof assistants, such as Matita and Lean. == The basics of the calculus of constructions == The calculus of constructions can be considered an extension of the Curry–Howard isomorphism. The Curry–Howard isomorphism associates a term in the simply typed lambda calculus with each natural-deduction proof in intuitionistic propositional logic. The calculus of constructions extends this isomorphism to proofs in the full intuitionistic predicate calculus, which includes proofs of quantified statements (which we will also call "propositions").
=== Terms === A term in the calculus of constructions is constructed using the following rules: {\displaystyle \mathbf {T} } is a term (also called type); {\displaystyle \mathbf {P} } is a term (also called prop, the type of all propositions); variables ({\displaystyle x,y,\ldots }) are terms; if {\displaystyle A} and {\displaystyle B} are terms, then so is {\displaystyle (AB)}; if {\displaystyle A} and {\displaystyle B} are terms and {\displaystyle x} is a variable, then {\displaystyle (\lambda x:A.B)} and {\displaystyle (\forall x:A.B)} are also terms. In other words, the term syntax, in Backus–Naur form, is: {\displaystyle e::=\mathbf {T} \mid \mathbf {P} \mid x\mid e\,e\mid \lambda x{\mathbin {:}}e.e\mid \forall x{\mathbin {:}}e.e} The calculus of constructions has five kinds of objects: proofs, which are terms whose types are propositions; propositions, which are also known as small types; predicates, which are functions that return propositions; large types, which are the types of predicates ({\displaystyle \mathbf {P} } is an example of a large type); {\displaystyle \mathbf {T} } itself, which is the type of large types. === β-equivalence === As with the untyped lambda calculus, the calculus of constructions uses a basic notion of equivalence of terms, known as β-equivalence. This captures the meaning of λ-abstraction: {\displaystyle (\lambda x:A.B)N=_{\beta }B[x:=N]} β-equivalence is a congruence relation for the calculus of constructions, in the sense that if {\displaystyle A=_{\beta }B} and {\displaystyle M=_{\beta }N}, then {\displaystyle AM=_{\beta }BN}.
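The β-rule above can be illustrated on a bare-bones term representation. This is a hypothetical toy, not Coq's implementation: it performs naive substitution and simply assumes all bound names are distinct, sidestepping variable capture:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:           # λ var : ty . body
    var: str
    ty: object
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def subst(term, name, value):
    """Return term[name := value], assuming bound names never clash (toy simplification)."""
    if isinstance(term, Var):
        return value if term.name == name else term
    if isinstance(term, Lam):
        if term.var == name:                 # the binder shadows `name`; stop here
            return term
        return Lam(term.var, subst(term.ty, name, value), subst(term.body, name, value))
    if isinstance(term, App):
        return App(subst(term.fn, name, value), subst(term.arg, name, value))
    return term

def beta_step(term):
    """One β-step at the root: (λx:A.B) N reduces to B[x := N]."""
    if isinstance(term, App) and isinstance(term.fn, Lam):
        return subst(term.fn.body, term.fn.var, term.arg)
    return term
```

Applying the identity function (λx:A.x) to a variable y reduces, in one step, to y.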
=== Judgments === The calculus of constructions allows proving typing judgments: {\displaystyle x_{1}:A_{1},x_{2}:A_{2},\ldots \vdash t:B}, which can be read as the implication: if variables {\displaystyle x_{1},x_{2},\ldots } have, respectively, types {\displaystyle A_{1},A_{2},\ldots }, then term {\displaystyle t} has type {\displaystyle B}. The valid judgments for the calculus of constructions are derivable from a set of inference rules. In the following, we use {\displaystyle \Gamma } to mean a sequence of type assignments {\displaystyle x_{1}:A_{1},x_{2}:A_{2},\ldots }; {\displaystyle A,B,C,D} to mean terms; and {\displaystyle K,L} to mean either {\displaystyle \mathbf {P} } or {\displaystyle \mathbf {T} }. We shall write {\displaystyle B[x:=N]} to mean the result of substituting the term {\displaystyle N} for the free variable {\displaystyle x} in the term {\displaystyle B}. An inference rule is written in the form {\displaystyle {\frac {\Gamma \vdash A:B}{\Gamma '\vdash C:D}}}, which means: if {\displaystyle \Gamma \vdash A:B} is a valid judgment, then so is {\displaystyle \Gamma '\vdash C:D}. === Inference rules for the calculus of constructions === 1. {\displaystyle {{} \over \Gamma \vdash \mathbf {P} :\mathbf {T} }} 2. {\displaystyle {{\Gamma \vdash A:K} \over {\Gamma ,x:A,\Gamma '\vdash x:A}}} 3. {\displaystyle {\Gamma \vdash A:K\qquad \Gamma ,x:A\vdash B:L \over {\Gamma \vdash (\forall x:A.B):L}}} 4. {\displaystyle {\Gamma \vdash A:K\qquad \Gamma ,x:A\vdash N:B \over {\Gamma \vdash (\lambda x:A.N):(\forall x:A.B)}}} 5. {\displaystyle {\Gamma \vdash M:(\forall x:A.B)\qquad \Gamma \vdash N:A \over {\Gamma \vdash MN:B[x:=N]}}}
6. {\displaystyle {\Gamma \vdash M:A\qquad A=_{\beta }B\qquad \Gamma \vdash B:K \over {\Gamma \vdash M:B}}} === Defining logical operators === The calculus of constructions has very few basic operators: the only logical operator for forming propositions is {\displaystyle \forall }. However, this one operator is sufficient to define all the other logical operators: {\displaystyle {\begin{array}{ccll}A\Rightarrow B&\equiv &\forall x:A.B&(x\notin B)\\A\wedge B&\equiv &\forall C:\mathbf {P} .(A\Rightarrow B\Rightarrow C)\Rightarrow C&\\A\vee B&\equiv &\forall C:\mathbf {P} .(A\Rightarrow C)\Rightarrow (B\Rightarrow C)\Rightarrow C&\\\neg A&\equiv &\forall C:\mathbf {P} .(A\Rightarrow C)&\\\exists x:A.B&\equiv &\forall C:\mathbf {P} .(\forall x:A.(B\Rightarrow C))\Rightarrow C&\end{array}}} === Defining data types === The basic data types used in computer science can be defined within the calculus of constructions: Booleans: {\displaystyle \forall A:\mathbf {P} .A\Rightarrow A\Rightarrow A} Naturals: {\displaystyle \forall A:\mathbf {P} .(A\Rightarrow A)\Rightarrow A\Rightarrow A} Product {\displaystyle A\times B}: {\displaystyle A\wedge B} Disjoint union {\displaystyle A+B}: {\displaystyle A\vee B} Note that Booleans and Naturals are defined in the same way as in Church encoding. However, additional problems arise from propositional extensionality and proof irrelevance. == See also == Pure type system Lambda cube System F Dependent type Intuitionistic type theory Homotopy type theory == References == == Sources ==
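The Boolean and natural-number types in the data-types section are the polymorphic Church encodings; erasing the types yields the familiar untyped versions, which can be demonstrated in Python (a sketch, with `to_int`/`to_bool` as ad hoc conversion helpers for inspection):

```python
# Booleans: ∀A. A → A → A  (a Boolean selects one of two alternatives)
true = lambda a: lambda b: a
false = lambda a: lambda b: b

# Naturals: ∀A. (A → A) → A → A  (a natural iterates a function n times)
zero = lambda f: lambda x: x

def succ(n):
    return lambda f: lambda x: f(n(f)(x))

def to_int(n):
    # Interpret the Church numeral by iterating "+1" starting from 0.
    return n(lambda k: k + 1)(0)

def to_bool(b):
    return b(True)(False)

two = succ(succ(zero))
```

Here `two` applies its function argument twice, so `to_int(two)` recovers the ordinary integer 2.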
Wikipedia/Calculus_of_Inductive_Constructions
In mathematics, an invariant is a property of a mathematical object (or a class of mathematical objects) which remains unchanged after operations or transformations of a certain type are applied to the objects. The particular class of objects and type of transformations are usually indicated by the context in which the term is used. For example, the area of a triangle is an invariant with respect to isometries of the Euclidean plane. The phrases "invariant under" and "invariant to" a transformation are both used. More generally, an invariant with respect to an equivalence relation is a property that is constant on each equivalence class. Invariants are used in diverse areas of mathematics such as geometry, topology, algebra and discrete mathematics. Some important classes of transformations are defined by an invariant they leave unchanged. For example, conformal maps are defined as transformations of the plane that preserve angles. The discovery of invariants is an important step in the process of classifying mathematical objects. == Examples == A simple example of invariance is expressed in our ability to count. For a finite set of objects of any kind, there is a number to which we always arrive, regardless of the order in which we count the objects in the set. The quantity—a cardinal number—is associated with the set, and is invariant under the process of counting. An identity is an equation that remains true for all values of its variables. There are also inequalities that remain true when the values of their variables change. The distance between two points on a number line is not changed by adding the same quantity to both numbers. On the other hand, multiplication does not have this same property, as distance is not invariant under multiplication. Angles and ratios of distances are invariant under scalings, rotations, translations and reflections. These transformations produce similar shapes, which is the basis of trigonometry. 
In contrast, angles and ratios are not invariant under non-uniform scaling (such as stretching). The sum of a triangle's interior angles (180°) is invariant under all the above operations. As another example, all circles are similar: they can be transformed into each other and the ratio of the circumference to the diameter is invariant (denoted by the Greek letter π (pi)). Some more complicated examples: The real part and the absolute value of a complex number are invariant under complex conjugation. The tricolorability of knots. The degree of a polynomial is invariant under a linear change of variables. The dimension and homology groups of a topological object are invariant under homeomorphism. The number of fixed points of a dynamical system is invariant under many mathematical operations. Euclidean distance is invariant under orthogonal transformations. Area is invariant under linear maps which have determinant ±1 (see Equiareal map § Linear transformations). Some invariants of projective transformations include collinearity of three or more points, concurrency of three or more lines, conic sections, and the cross-ratio. The determinant, trace, eigenvectors, and eigenvalues of a linear endomorphism are invariant under a change of basis. In other words, the spectrum of a matrix is invariant under a change of basis. The principal invariants of tensors do not change with rotation of the coordinate system (see Invariants of tensors). The singular values of a matrix are invariant under orthogonal transformations. Lebesgue measure is invariant under translations. The variance of a probability distribution is invariant under translations of the real line. Hence the variance of a random variable is unchanged after the addition of a constant. The fixed points of a transformation are the elements in the domain that are invariant under the transformation. They may, depending on the application, be called symmetric with respect to that transformation. 
For example, objects with translational symmetry are invariant under certain translations. The integral ∫ M K d μ {\textstyle \int _{M}K\,d\mu } of the Gaussian curvature K {\displaystyle K} of a two-dimensional Riemannian manifold ( M , g ) {\displaystyle (M,g)} is invariant under changes of the Riemannian metric g {\displaystyle g} . This is the Gauss–Bonnet theorem. === MU puzzle === The MU puzzle is a good example of a logical problem where determining an invariant is of use for an impossibility proof. The puzzle asks one to start with the word MI and transform it into the word MU, using in each step one of the following transformation rules: If a string ends with an I, a U may be appended (xI → xIU) The string after the M may be completely duplicated (Mx → Mxx) Any three consecutive I's (III) may be replaced with a single U (xIIIy → xUy) Any two consecutive U's may be removed (xUUy → xy) An example derivation (with superscripts indicating the applied rules) is MI →2 MII →2 MIIII →3 MUI →2 MUIUI →1 MUIUIU →2 MUIUIUUIUIU →4 MUIUIIUIU → ... In light of this, one might wonder whether it is possible to convert MI into MU, using only these four transformation rules. One could spend many hours applying these transformation rules to strings. However, it might be quicker to find a property that is invariant to all rules (that is, not changed by any of them), and that demonstrates that getting to MU is impossible. By looking at the puzzle from a logical standpoint, one might realize that the only way to get rid of any I's is to have three consecutive I's in the string. This makes the following invariant interesting to consider: The number of I's in the string is not a multiple of 3. This is an invariant to the problem, if for each of the transformation rules the following holds: if the invariant held before applying the rule, it will also hold after applying it. 
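The invariant argument can be checked mechanically. The following Python sketch (illustrative only; the rule encoding is ours) tracks just the number of I's, the quantity the invariant concerns, and verifies both that every rule preserves the invariant and that random derivations from MI never reach a multiple of three.

```python
import random

# Net effect of each MU-puzzle rule on the count of I's:
# rule 1 (xI -> xIU) and rule 4 (xUUy -> xy) leave it unchanged,
# rule 2 (Mx -> Mxx) doubles it, rule 3 (xIIIy -> xUy) subtracts 3.
RULES = {1: lambda n: n, 2: lambda n: 2 * n, 3: lambda n: n - 3, 4: lambda n: n}

def preserves_invariant(rule, n):
    """If n is not a multiple of 3, the rule's result is still not a multiple of 3."""
    m = RULES[rule](n)
    return n % 3 == 0 or m % 3 != 0

# Check each rule over a range of I-counts (rule 3 needs at least 3 I's).
for n in range(100):
    for rule in (1, 2, 4):
        assert preserves_invariant(rule, n)
    if n >= 3:
        assert preserves_invariant(3, n)

# Simulate random derivations starting from MI (one I): the I-count
# never becomes a multiple of 3, so MU (zero I's) is unreachable.
random.seed(0)
for _ in range(1000):
    n = 1
    for _ in range(50):
        rule = random.choice([1, 2, 3, 4])
        if rule == 3 and n < 3:
            continue  # rule not applicable to this string
        n = RULES[rule](n)
        assert n % 3 != 0
```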
Looking at the net effect of applying each rule on the number of I's, one can see this actually is the case for all rules: rules 1 and 4 leave the number of I's unchanged, rule 2 doubles it, and rule 3 reduces it by three. Doubling a number that is not a multiple of three never yields a multiple of three, and subtracting three does not change divisibility by three, so the invariant holds for each of the possible transformation rules. This means that whichever rule one picks, at whatever state, if the number of I's was not a multiple of three before applying the rule, then it will not be afterwards either. Given that there is a single I in the starting string MI, and one is not a multiple of three, one can then conclude that it is impossible to go from MI to MU (as the number of I's will never be a multiple of three). == Invariant set == A subset S of the domain U of a mapping T: U → U is an invariant set under the mapping when x ∈ S ⟹ T ( x ) ∈ S . {\displaystyle x\in S\implies T(x)\in S.} The elements of S are not necessarily fixed, even though the set S is fixed in the power set of U. (Some authors use the terminology setwise invariant, vs. pointwise invariant, to distinguish between these cases.) For example, a circle is an invariant subset of the plane under a rotation about the circle's center. Further, a conical surface is invariant as a set under a homothety of space. An invariant set of an operation T is also said to be stable under T. For example, the normal subgroups that are so important in group theory are those subgroups that are stable under the inner automorphisms of the ambient group. In linear algebra, if a linear transformation T has an eigenvector v, then the line through 0 and v is an invariant set under T, in which case the eigenvectors span an invariant subspace which is stable under T. When T is a screw displacement, the screw axis is an invariant line, though if the pitch is non-zero, T has no fixed points. In probability theory and ergodic theory, invariant sets are usually defined via the stronger property x ∈ S ⇔ T ( x ) ∈ S . 
{\displaystyle x\in S\Leftrightarrow T(x)\in S.} When the map T {\displaystyle T} is measurable, invariant sets form a sigma-algebra, the invariant sigma-algebra. == Formal statement == The notion of invariance is formalized in three different ways in mathematics: via group actions, presentations, and deformation. === Unchanged under group action === Firstly, if one has a group G acting on a mathematical object (or set of objects) X, then one may ask which points x are unchanged, "invariant" under the group action, or under an element g of the group. Frequently one will have a group acting on a set X, which leaves one to determine which objects in an associated set F(X) are invariant. For example, rotation in the plane about a point leaves the point about which it rotates invariant, while translation in the plane does not leave any points invariant, but does leave all lines parallel to the direction of translation invariant as lines. Formally, define the set of lines in the plane P as L(P); then a rigid motion of the plane takes lines to lines – the group of rigid motions acts on the set of lines – and one may ask which lines are unchanged by an action. More importantly, one may define a function on a set, such as "radius of a circle in the plane", and then ask if this function is invariant under a group action, such as rigid motions. Dual to the notion of invariants are coinvariants, also known as orbits, which formalizes the notion of congruence: objects which can be taken to each other by a group action. For example, under the group of rigid motions of the plane, the perimeter of a triangle is an invariant, while the set of triangles congruent to a given triangle is a coinvariant. 
These are connected as follows: invariants are constant on coinvariants (for example, congruent triangles have the same perimeter), while two objects which agree in the value of one invariant may or may not be congruent (for example, two triangles with the same perimeter need not be congruent). In classification problems, one might seek to find a complete set of invariants, such that if two objects have the same values for this set of invariants, then they are congruent. For example, triangles such that all three sides are equal are congruent under rigid motions, via SSS congruence, and thus the lengths of all three sides form a complete set of invariants for triangles. The three angle measures of a triangle are also invariant under rigid motions, but do not form a complete set as incongruent triangles can share the same angle measures. However, if one allows scaling in addition to rigid motions, then the AAA similarity criterion shows that this is a complete set of invariants. === Independent of presentation === Secondly, a function may be defined in terms of some presentation or decomposition of a mathematical object; for instance, the Euler characteristic of a cell complex is defined as the alternating sum of the number of cells in each dimension. One may forget the cell complex structure and look only at the underlying topological space (the manifold) – as different cell complexes give the same underlying manifold, one may ask if the function is independent of choice of presentation, in which case it is an intrinsically defined invariant. This is the case for the Euler characteristic, and a general method for defining and computing invariants is to define them for a given presentation, and then show that they are independent of the choice of presentation. Note that there is no notion of a group action in this sense. 
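To make the Euler characteristic example concrete, here is a small sketch computing χ = V − E + F for two different cell decompositions of the sphere; the decomposition data are the standard counts for a tetrahedron's and a cube's boundary.

```python
# Euler characteristic chi = V - E + F computed from two different
# cell decompositions (presentations) of the same surface, the sphere.
def euler_characteristic(vertices, edges, faces):
    return vertices - edges + faces

# Boundary of a tetrahedron: 4 vertices, 6 edges, 4 triangular faces.
tetrahedron = euler_characteristic(4, 6, 4)

# Boundary of a cube: 8 vertices, 12 edges, 6 square faces.
cube = euler_characteristic(8, 12, 6)

# Different presentations, same underlying space, same invariant value.
assert tetrahedron == cube == 2
```

The check that both decompositions give the same number is exactly the "independence of presentation" step described above.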
The most common examples are: The presentation of a manifold in terms of coordinate charts – invariants must be unchanged under change of coordinates. Various manifold decompositions, as discussed for Euler characteristic. Invariants of a presentation of a group. === Unchanged under perturbation === Thirdly, if one is studying an object which varies in a family, as is common in algebraic geometry and differential geometry, one may ask if the property is unchanged under perturbation (for example, if an object is constant on families or invariant under change of metric). == Invariants in computer science == In computer science, an invariant is a logical assertion that is always held to be true during a certain phase of execution of a computer program. For example, a loop invariant is a condition that is true at the beginning and the end of every iteration of a loop. Invariants are especially useful when reasoning about the correctness of a computer program. The theory of optimizing compilers, the methodology of design by contract, and formal methods for determining program correctness, all rely heavily on invariants. Programmers often use assertions in their code to make invariants explicit. Some object-oriented programming languages have a special syntax for specifying class invariants. === Automatic invariant detection in imperative programs === Abstract interpretation tools can compute simple invariants of given imperative computer programs. The kinds of properties that can be found depend on the abstract domains used. Typical example properties are single integer variable ranges like 0<=x<1024, relations between several variables like 0<=i-j<2*n-1, and modulus information like y%4==0. Academic research prototypes also consider simple properties of pointer structures. More sophisticated invariants generally have to be provided manually. 
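As an illustrative sketch, independent of any particular verification tool, assertions can make a loop invariant explicit; here the invariant, together with the loop's exit condition, yields the postcondition.

```python
# A loop invariant made explicit with assertions: computing a sum by
# iteration, with the invariant "total == sum of the first i elements"
# holding at the start and end of every iteration.
def running_sum(xs):
    total = 0
    i = 0
    while i < len(xs):
        assert total == sum(xs[:i])   # invariant at loop entry
        total += xs[i]
        i += 1
        assert total == sum(xs[:i])   # invariant re-established
    # On exit i == len(xs), so the invariant gives total == sum(xs).
    return total

assert running_sum([3, 1, 4, 1, 5]) == 14
```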
In particular, when verifying an imperative program using the Hoare calculus, a loop invariant has to be provided manually for each loop in the program, which is one of the reasons that this approach is generally impractical for most programs. In the context of the above MU puzzle example, there is currently no general automated tool that can detect that a derivation from MI to MU is impossible using only the rules 1–4. However, once the abstraction from the string to the number of its "I"s has been made by hand, leading, for example, to a small C program whose variable ICount tracks the number of I's, an abstract interpretation tool will be able to detect that ICount%3 cannot be 0, and hence the "while"-loop will never terminate. == See also == == Notes == == References == == External links == "Applet: Visual Invariants in Sorting Algorithms" Archived 2022-02-24 at the Wayback Machine by William Braynen in 1997
Wikipedia/Invariant_(computer_science)
The scientific method is an empirical method for acquiring knowledge that has been referred to while doing science since at least the 17th century. Historically, it was developed through the centuries from the ancient and medieval world. The scientific method involves careful observation coupled with rigorous skepticism, because cognitive assumptions can distort the interpretation of the observation. Scientific inquiry includes creating a testable hypothesis through inductive reasoning, testing it through experiments and statistical analysis, and adjusting or discarding the hypothesis based on the results. Although procedures vary across fields, the underlying process is often similar. In more detail: the scientific method involves making conjectures (hypothetical explanations), predicting the logical consequences of the hypothesis, then carrying out experiments or empirical observations based on those predictions. A hypothesis is a conjecture based on knowledge obtained while seeking answers to the question. Hypotheses can be very specific or broad but must be falsifiable, implying that it is possible to identify a possible outcome of an experiment or observation that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested. While the scientific method is often presented as a fixed sequence of steps, it actually represents a set of general principles. Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order. Numerous discoveries have not followed the textbook model of the scientific method; chance, for instance, has often played a role. == History == The history of the scientific method considers changes in the methodology of scientific inquiry, not the history of science itself. 
The development of rules for scientific reasoning has not been straightforward; the scientific method has been the subject of intense and recurring debate throughout the history of science, and eminent natural philosophers and scientists have argued for the primacy of various approaches to establishing scientific knowledge. Different early expressions of empiricism and the scientific method can be found throughout history, for instance with the ancient Stoics, Aristotle, Epicurus, Alhazen, Avicenna, Al-Biruni, Roger Bacon, and William of Ockham. In the Scientific Revolution of the 16th and 17th centuries, some of the most important developments were the furthering of empiricism by Francis Bacon and Robert Hooke, the rationalist approach described by René Descartes, and inductivism, brought to particular prominence by Isaac Newton and those who followed him. Experiments were advocated by Francis Bacon and performed by Giambattista della Porta, Johannes Kepler, and Galileo Galilei. There was particular development aided by theoretical works by the skeptic Francisco Sanches, by idealists as well as empiricists John Locke, George Berkeley, and David Hume. C. S. Peirce formulated the hypothetico-deductive model in the 20th century, and the model has undergone significant revision since. The term "scientific method" emerged in the 19th century, as a result of significant institutional development of science, and terminologies establishing clear boundaries between science and non-science, such as "scientist" and "pseudoscience". Throughout the 1830s and 1850s, when Baconianism was popular, naturalists like William Whewell, John Herschel, and John Stuart Mill engaged in debates over "induction" and "facts," and were focused on how to generate knowledge. In the late 19th and early 20th centuries, a debate over realism vs. antirealism was conducted as powerful scientific theories extended beyond the realm of the observable. 
=== Modern use and critical thought === The term "scientific method" came into popular use in the twentieth century; Dewey's 1910 book, How We Think, inspired popular guidelines. It appeared in dictionaries and science textbooks, although there was little consensus on its meaning. Although there was growth through the middle of the twentieth century, by the 1960s and 1970s numerous influential philosophers of science such as Thomas Kuhn and Paul Feyerabend had questioned the universality of the "scientific method," and largely replaced the notion of science as a homogeneous and universal method with that of it being a heterogeneous and local practice. In particular, Paul Feyerabend, in the 1975 first edition of his book Against Method, argued against there being any universal rules of science; Karl Popper, and Gauch 2003, disagreed with Feyerabend's claim. Later stances include physicist Lee Smolin's 2013 essay "There Is No Scientific Method", in which he espouses two ethical principles, and historian of science Daniel Thurs' chapter in the 2015 book Newton's Apple and Other Myths about Science, which concluded that the scientific method is a myth or, at best, an idealization. As myths are beliefs, they are subject to the narrative fallacy, as pointed out by Taleb. Philosophers Robert Nola and Howard Sankey, in their 2007 book Theories of Scientific Method, said that debates over the scientific method continue, and argued that Feyerabend, despite the title of Against Method, accepted certain rules of method and attempted to justify those rules with a meta methodology. Staddon (2017) argues it is a mistake to try following rules in the absence of an algorithmic scientific method; in that case, "science is best understood through examples". 
But algorithmic methods, such as disproof of existing theory by experiment, have been used since Alhacen (1027) and his Book of Optics, and Galileo (1638) and his Two New Sciences, and The Assayer, which still stand as scientific method. == Elements of inquiry == === Overview === The scientific method is the process by which science is carried out. As in other areas of inquiry, science (through the scientific method) can build on previous knowledge, and unify understanding of its studied topics over time. Historically, the development of the scientific method was critical to the Scientific Revolution. The overall process involves making conjectures (hypotheses), predicting their logical consequences, then carrying out experiments based on those predictions to determine whether the original conjecture was correct. However, there are difficulties in a formulaic statement of method. Though the scientific method is often presented as a fixed sequence of steps, these actions are more accurately general principles. Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always done in the same order. ==== Factors of scientific inquiry ==== There are different ways of outlining the basic method used for scientific inquiry. The scientific community and philosophers of science generally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic of experimental sciences than social sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below. The scientific method is an iterative, cyclical process through which information is continually revised. 
It is generally recognized to develop advances in knowledge through the following elements, in varying combinations or contributions: Characterizations (observations, definitions, and measurements of the subject of inquiry) Hypotheses (theoretical, hypothetical explanations of observations and measurements of the subject) Predictions (inductive and deductive reasoning from the hypothesis or theory) Experiments (tests of all of the above) Each element of the scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do but apply mostly to experimental sciences (e.g., physics, chemistry, biology, and psychology). The elements above are often taught in the educational system as "the scientific method". The scientific method is not a single recipe: it requires intelligence, imagination, and creativity. In this sense, it is not a mindless set of standards and procedures to follow but is rather an ongoing cycle, constantly developing more useful, accurate, and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically massive, the feather-light, and the extremely fast are removed from Einstein's theories – all phenomena Newton could not have observed – Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase confidence in Newton's work. 
An iterative, pragmatic scheme of the four points above is sometimes offered as a guideline for proceeding: Define a question Gather information and resources (observe) Form an explanatory hypothesis Test the hypothesis by performing an experiment and collecting data in a reproducible manner Analyze the data Interpret the data and draw conclusions that serve as a starting point for a new hypothesis Publish results Retest (frequently done by other scientists) The iterative cycle inherent in this step-by-step method goes from point 3 to 6 and back to 3 again. While this schema outlines a typical hypothesis/testing method, many philosophers, historians, and sociologists of science, including Paul Feyerabend, claim that such descriptions of scientific method have little relation to the ways that science is actually practiced. === Characterizations === The basic elements of the scientific method are illustrated by the following example (which occurred from 1944 to 1953) from the discovery of the structure of DNA (indented below). In 1950, it was known that genetic inheritance had a mathematical description, starting with the studies of Gregor Mendel, and that DNA contained genetic information (Oswald Avery's transforming principle). But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. Researchers in Bragg's laboratory at Cambridge University made X-ray diffraction pictures of various molecules, starting with crystals of salt, and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle. The scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.) 
For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; these observations often demand careful measurements and/or counting, which can take the form of expansive empirical research. A scientific question can refer to the explanation of a specific observation, as in "Why is the sky blue?" but can also be open-ended, as in "How can I design a drug to cure this particular disease?" This stage frequently involves finding and evaluating evidence from previous experiments, personal scientific observations or assertions, as well as the work of other scientists. If the answer is already known, a different question that builds on the evidence can be posed. When applying the scientific method to research, determining a good question can be very difficult and it will affect the outcome of the investigation. The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and science, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, particle accelerators, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement. I am not accustomed to saying anything with certainty after only one or two observations. 
==== Definition ==== The scientific definition of a term sometimes differs substantially from its natural language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure which can later be described in terms of conventional physical units when communicating the work. New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with, "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations. Francis Crick cautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood. In Crick's study of consciousness, he actually found it easier to study awareness in the visual system, rather than to study free will, for example. His cautionary example was the gene; the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA; it would have been counterproductive to spend much time on the definition of the gene, before them. === Hypothesis development === Linus Pauling proposed that DNA might be a triple helix. This hypothesis was also considered by Francis Crick and James D. Watson but discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong, and that Pauling would soon admit his difficulties with that structure. A hypothesis is a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena. 
Normally, hypotheses have the form of a mathematical model. Sometimes, but not always, they can also be formulated as existential statements, stating that some particular instance of the phenomenon being studied has some characteristic and causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic. Scientists are free to use whatever resources they have – their own creativity, ideas from other fields, inductive reasoning, Bayesian inference, and so on – to imagine possible explanations for a phenomenon under study. Albert Einstein once observed that "there is no logical bridge between phenomena and their theoretical principles." Charles Sanders Peirce, borrowing a page from Aristotle (Prior Analytics, 2.25), described the incipient stages of inquiry, instigated by the "irritation of doubt" to venture a plausible guess, as abductive reasoning. The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology. William Glen observes that the success of a hypothesis, or its service to science, lies not simply in its perceived "truth", or power to displace, subsume or reduce a predecessor idea, but perhaps more in its ability to stimulate the research that will illuminate ... bald suppositions and areas of vagueness. In general, scientists tend to look for theories that are "elegant" or "beautiful". Scientists often use these terms to refer to a theory that fits the known facts but is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses. 
To minimize the confirmation bias that results from entertaining a single hypothesis, strong inference emphasizes the need for entertaining multiple alternative hypotheses, and avoiding artifacts. === Predictions from the hypothesis === James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be 'x shaped'. This prediction followed from the work of Cochran, Crick and Vand (and independently by Stokes). The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces x-shaped patterns. In their first paper, Watson and Crick also noted that the double helix structure they proposed provided a simple mechanism for DNA replication, writing, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".Any useful hypothesis will enable predictions, by reasoning including deductive reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities. It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered while formulating the hypothesis. If the predictions are not accessible by observation or experience, the hypothesis is not yet testable and so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. For example, while a hypothesis on the existence of other intelligent species may be convincing with scientifically based speculation, no known experiment can test this hypothesis. 
Therefore, science itself can have little to say about the possibility. In the future, a new technique may allow for an experimental test and the speculation would then become part of accepted science. For example, Einstein's theory of general relativity makes several specific predictions about the observable structure of spacetime, such as that light bends in a gravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field. Arthur Eddington's observations made during a 1919 solar eclipse supported General Relativity rather than Newtonian gravitation. === Experiments === Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team from King's College London – Rosalind Franklin, Maurice Wilkins, and Raymond Gosling. Franklin immediately spotted the flaws which concerned the water content. Later Watson saw Franklin's photo 51, a detailed X-ray diffraction image, which showed an X-shape and was able to confirm the structure was helical. Once predictions are made, they can be sought by experiments. If the test results contradict the predictions, the hypotheses which entailed them are called into question and become less tenable. Sometimes the experiments are conducted incorrectly or are not very well designed when compared to a crucial experiment. If the experimental results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples, or observations, or populations, under differing conditions, to see what varies or what remains the same. We vary the conditions for the acts of measurement, to help isolate what has changed. Mill's canons can then help us figure out what the important factor is. 
Factor analysis is one technique for discovering the important factor in an effect. Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, a double-blind study or an archaeological excavation. Even taking a plane from New York to Paris is an experiment that tests the aerodynamical hypotheses used for constructing the plane. Institutions that fund and host such research thereby reduce the research function to a cost/benefit calculation, expressed as money and as the time and attention of the researchers to be expended, in exchange for a report to their constituents. Current large instruments, such as CERN's Large Hadron Collider (LHC), or LIGO, or the National Ignition Facility (NIF), or the International Space Station (ISS), or the James Webb Space Telescope (JWST), entail expected costs of billions of dollars, and timeframes extending over decades. These kinds of institutions affect public policy, on a national or even international basis, and the researchers would require shared access to such machines and their adjunct infrastructure. Scientists assume an attitude of openness and accountability on the part of those experimenting. Detailed record-keeping is essential, to aid in recording and reporting on the experimental results, and supports the effectiveness and integrity of the procedure. Records will also assist in reproducing the experimental results, likely by others. Traces of this approach can be seen in the work of Hipparchus (190–120 BCE), when determining a value for the precession of the Earth, while controlled experiments can be seen in the works of al-Battani (853–929 CE) and Alhazen (965–1039 CE).

=== Communication and iteration ===
Watson and Crick then produced their model, using this information along with the previously known information about DNA's composition, especially Chargaff's rules of base pairing.
After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts, Watson and Crick were able to infer the essential structure of DNA by concrete modeling of the physical shapes of the nucleotides which comprise it. They were guided by the bond lengths which had been deduced by Linus Pauling and by Rosalind Franklin's X-ray diffraction images. The scientific method is iterative. At any stage, it is possible to refine its accuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject. This manner of iteration can span decades and sometimes centuries. Published papers can be built upon. For example: By 1027, Alhazen, based on his measurements of the refraction of light, was able to deduce that outer space was less dense than air, that is: "the body of the heavens is rarer than the body of air". In 1079 Ibn Mu'adh's Treatise On Twilight was able to infer that Earth's atmosphere was 50 miles thick, based on atmospheric refraction of the sun's rays. This is why the scientific method is often represented as circular – new information leads to new characterisations, and the cycle of science continues. Measurements collected can be archived, passed onwards and used by others. Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. 
Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility.

=== Confirmation ===
Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision; Georg Wilhelm Richmann was killed by ball lightning (1753) when attempting to replicate the 1752 kite-flying experiment of Benjamin Franklin. If an experiment cannot be repeated to produce the same results, this implies that the original results might have been in error. As a result, it is common for a single experiment to be performed multiple times, especially when there are uncontrolled variables or other indications of experimental error. For significant or surprising results, other scientists may also attempt to replicate the results for themselves, especially if those results would be important to their own work. Replication has become a contentious issue in social and biomedical science where treatments are administered to groups of individuals. Typically an experimental group gets the treatment, such as a drug, and the control group gets a placebo. John Ioannidis in 2005 pointed out that the method being used has led to many findings that cannot be replicated. The process of peer review involves the evaluation of the experiment by experts, who typically give their opinions anonymously. Some journals request that the experimenter provide lists of possible peer reviewers, especially if the field is highly specialized. Peer review does not certify the correctness of the results, only that, in the opinion of the reviewer, the experiments themselves were sound (based on the description supplied by the experimenter).
If the work passes peer review, which occasionally may require new experiments requested by the reviewers, it will be published in a peer-reviewed scientific journal. The specific journal that publishes the results indicates the perceived quality of the work. Scientists typically are careful in recording their data, a requirement promoted by Ludwik Fleck (1896–1961) and others. Though not typically required, they might be requested to supply this data to other scientists who wish to replicate their original results (or parts of their original results), extending to the sharing of any experimental samples that may be difficult to obtain. To protect against bad science and fraudulent data, government research-granting agencies such as the National Science Foundation, and science journals, including Nature and Science, have a policy that researchers must archive their data and methods so that other researchers can test the data and methods and build on the research that has gone before. Scientific data archiving can be done at several national archives in the U.S. or the World Data Center.

== Foundational principles ==
=== Honesty, openness, and falsifiability ===
The core principles of science are the striving for accuracy and a creed of honesty; openness is a matter of degree, restricted by the general rigour of scepticism and by the demarcation of science from non-science. Smolin, in 2013, espoused ethical principles rather than giving any potentially limited definition of the rules of inquiry. His ideas stand in the context of the scale of data-driven and big science, which has seen increased importance of honesty and consequently reproducibility. His thought is that science is a community effort by those who have accreditation and are working within the community. He also warns against overzealous parsimony. Popper previously took ethical principles even further, going as far as to ascribe value to theories only if they were falsifiable.
Popper used the falsifiability criterion to demarcate a scientific theory from a theory like astrology: both "explain" observations, but the scientific theory takes the risk of making predictions that decide whether it is right or wrong: "Those among us who are unwilling to expose their ideas to the hazard of refutation do not take part in the game of science."

=== Theory's interactions with observation ===
Science has limits. Those limits are usually deemed to be answers to questions that aren't in science's domain, such as faith. Science has other limits as well, as it seeks to make true statements about reality. The nature of truth, and the discussion of how scientific statements relate to reality, is best left to the philosophy of science. More immediately topical limitations show themselves in the observation of reality. It is a natural limitation of scientific inquiry that there is no pure observation: theory is required to interpret empirical data, and observation is therefore influenced by the observer's conceptual framework. As science is an unfinished project, this does lead to difficulties; namely, false conclusions can be drawn because of limited information. An example here is the experiments of Kepler and Brahe, used by Hanson to illustrate the concept: despite observing the same sunrise, the two scientists came to different conclusions, their differing background theories leading to differing interpretations. Johannes Kepler used Tycho Brahe's method of observation, which was to project the image of the Sun on a piece of paper through a pinhole aperture, instead of looking directly at the Sun. He disagreed with Brahe's conclusion that total eclipses of the Sun were impossible because, contrary to Brahe, he knew that there were historical accounts of total eclipses. Instead, he deduced that the images taken would become more accurate, the larger the aperture—this fact is now fundamental for optical system design.
Another historic example here is the discovery of Neptune, credited as being found via mathematics because previous observers didn't know what they were looking at.

=== Empiricism, rationalism, and more pragmatic views ===
Scientific endeavour can be characterised as the pursuit of truths about the natural world or as the elimination of doubt about the same. The former is the direct construction of explanations from empirical data and logic, the latter the reduction of potential explanations. It was established above how the interpretation of empirical data is theory-laden, so neither approach is trivial. The ubiquitous element in the scientific method is empiricism, which holds that knowledge is created by a process involving observation; scientific theories generalize observations. This is in opposition to stringent forms of rationalism, which holds that knowledge is created by the human intellect; a position later clarified by Popper to be built on prior theory. The scientific method embodies the position that reason alone cannot solve a particular scientific problem; it unequivocally refutes claims that revelation, political or religious dogma, appeals to tradition, commonly held beliefs, common sense, or currently held theories pose the only possible means of demonstrating truth. In 1877, C. S. Peirce characterized inquiry in general not as the pursuit of truth per se but as the struggle to move away from irritating, inhibitory doubts born of surprises, disagreements, and the like, and to reach a secure belief, the belief being that on which one is prepared to act. His pragmatic views framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or "hyperbolic doubt", which he held to be fruitless. The "hyperbolic doubt" Peirce argues against is, of course, just another name for the Cartesian doubt associated with René Descartes.
It is a methodological route to certain knowledge by identifying what can't be doubted. A strong formulation of the scientific method is not always aligned with a form of empiricism in which the empirical data is put forward in the form of experience or other abstracted forms of knowledge; in current scientific practice, the use of scientific modelling and reliance on abstract typologies and theories is normally accepted. In 2010, Hawking suggested that physics' models of reality should simply be accepted where they prove to make useful predictions. He calls the concept model-dependent realism.

== Rationality ==
Rationality embodies the essence of sound reasoning, a cornerstone not only in philosophical discourse but also in the realms of science and practical decision-making. According to the traditional viewpoint, rationality serves a dual purpose: it governs beliefs, ensuring they align with logical principles, and it steers actions, directing them towards coherent and beneficial outcomes. This understanding underscores the pivotal role of reason in shaping our understanding of the world and in informing our choices and behaviours. The following section will first explore beliefs and biases, and then turn to the rational reasoning most associated with the sciences.

=== Beliefs and biases ===
Scientific methodology often directs that hypotheses be tested in controlled conditions wherever possible. This is frequently possible in certain areas, such as in the biological sciences, and more difficult in other areas, such as in astronomy. The practice of experimental control and reproducibility can have the effect of diminishing the potentially harmful effects of circumstance, and to a degree, personal bias.
For example, pre-existing beliefs can alter the interpretation of results, as in confirmation bias; this is a heuristic that leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree (in other words, people tend to observe what they expect to observe).

"[T]he action of thought is excited by the irritation of doubt, and ceases when belief is attained." – C. S. Peirce

A historical example is the belief that the legs of a galloping horse are splayed at the point when none of the horse's legs touch the ground, to the point of this image being included in paintings by its supporters. However, the first stop-action pictures of a horse's gallop by Eadweard Muybridge showed this to be false, and that the legs are instead gathered together. Another important human bias that plays a role is a preference for new, surprising statements (see Appeal to novelty), which can result in a search for evidence that the new is true. Poorly attested beliefs can be believed and acted upon via a less rigorous heuristic. Goldhaber and Nieto published in 2010 the observation that if theoretical structures with "many closely neighboring subjects are described by connecting theoretical concepts, then the theoretical structure acquires a robustness which makes it increasingly hard – though certainly never impossible – to overturn". When a narrative is constructed its elements become easier to believe. Fleck (1979), p. 27 notes "Words and ideas are originally phonetic and mental equivalences of the experiences coinciding with them. ... Such proto-ideas are at first always too broad and insufficiently specialized. ... Once a structurally complete and closed system of opinions consisting of many details and relations has been formed, it offers enduring resistance to anything that contradicts it".
Sometimes, these relations have their elements assumed a priori, or contain some other logical or methodological flaw in the process that ultimately produced them. Donald M. MacKay has analyzed these elements in terms of limits to the accuracy of measurement and has related them to instrumental elements in a category of measurement.

=== Deductive and inductive reasoning ===
The idea of there being two opposed justifications for truth has shown up throughout the history of scientific method as analysis versus synthesis, non-ampliative/ampliative, or even confirmation and verification. (And there are other kinds of reasoning.) One uses what is observed to build towards fundamental truths; the other derives from those fundamental truths more specific principles. Deductive reasoning is the building of knowledge based on what has been shown to be true before. It requires the assumption of fact established prior, and, given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. Inductive reasoning builds knowledge not from established truth, but from a body of observations. It requires stringent scepticism regarding observed phenomena, because cognitive assumptions can distort the interpretation of initial perceptions. An example of how inductive and deductive reasoning work can be found in the history of gravitational theory. It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic, and European astronomers, to fully record the motion of planet Earth. Kepler (and others) were then able to build their early theories by generalizing the collected data inductively, and Newton was able to unify prior theory and measurements into the consequences of his laws of motion in 1727. Another common example of inductive reasoning is the observation of a counterexample to current theory inducing the need for new ideas.
Le Verrier in 1859 pointed out problems with the perihelion of Mercury that showed Newton's theory to be at least incomplete. The observed difference of Mercury's precession between Newtonian theory and observation was one of the things that occurred to Einstein as a possible early test of his theory of relativity. His relativistic calculations matched observation much more closely than Newtonian theory did. Though today's Standard Model of physics suggests that we still do not understand at least some of the concepts surrounding Einstein's theory, it holds to this day and is being built on deductively. A theory being assumed as true and subsequently built on is a common example of deductive reasoning. Theory building on Einstein's achievement can simply state that 'we have shown that this case fulfils the conditions under which general/special relativity applies, therefore its conclusions apply also'. If it was properly shown that 'this case' fulfils the conditions, the conclusion follows. An extension of this is the assumption of a solution to an open problem. This weaker kind of deductive reasoning will get used in current research, when multiple scientists or even teams of researchers are all gradually solving specific cases in working towards proving a larger theory. This often sees hypotheses being revised again and again as new proof emerges. This way of presenting inductive and deductive reasoning shows part of why science is often presented as being a cycle of iteration. It is important to keep in mind that the cycle's foundations lie in reasoning, and not wholly in the following of procedure.

=== Certainty, probabilities, and statistical inference ===
Claims of scientific truth can be opposed in three ways: by falsifying them, by questioning their certainty, or by asserting the claim itself to be incoherent.
Incoherence, here, means internal errors in logic, like stating opposites to be true; falsification is what Popper would have called the honest work of conjecture and refutation; certainty, perhaps, is where difficulties in telling truths from non-truths arise most easily. Measurements in scientific work are usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to data collection limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken. In the case of measurement imprecision, there will simply be a 'probable deviation' expressing itself in a study's conclusions. Statistics are different. Inductive statistical generalisation will take sample data and extrapolate more general conclusions, which has to be justified and scrutinised. It can even be said that statistical models are only ever useful, but never a complete representation of circumstances. In statistical analysis, expected and unexpected bias is a large factor. Research questions, the collection of data, or the interpretation of results, all are subject to larger amounts of scrutiny than in comfortably logical environments. Statistical models go through a process of validation, for which one could even say that awareness of potential biases is more important than the hard logic; errors in logic are easier to find in peer review, after all. More generally, claims to rational knowledge, and especially statistics, have to be put into their appropriate context.
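A minimal sketch of the repeated-measurement approach described above (the measurement values are purely illustrative): the spread of the repeated measurements yields a standard error that qualifies the reported mean.

```python
import math

# Hypothetical repeated measurements of one quantity (illustrative values)
measurements = [9.8, 10.2, 10.0, 9.9, 10.1, 10.0, 9.7, 10.3]

n = len(measurements)
mean = sum(measurements) / n

# Sample standard deviation (Bessel's correction: divide by n - 1)
variance = sum((x - mean) ** 2 for x in measurements) / (n - 1)
std_dev = math.sqrt(variance)

# Standard error of the mean: the uncertainty attached to the mean itself
std_err = std_dev / math.sqrt(n)

print(f"mean = {mean:.2f} +/- {std_err:.2f}")
```

More repetitions shrink the standard error roughly as 1/√n, which is why repeated measurement is the standard route to a 'probable deviation' in a study's conclusions.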
Simple statements such as '9 out of 10 doctors recommend' are therefore of unknown quality because they do not justify their methodology. Lack of familiarity with statistical methodologies can result in erroneous conclusions. Going beyond that easy example, the interaction of multiple probabilities is where even professionals, medical practitioners for example, have shown a lack of proper understanding. Bayes' theorem is the mathematical principle setting out how standing probabilities are adjusted given new information. The boy or girl paradox is a common example. In knowledge representation, Bayesian estimation of mutual information between random variables is a way to measure dependence, independence, or interdependence of the information under scrutiny. Beyond the commonly associated survey methodology of field research, the concept together with probabilistic reasoning is used to advance fields of science where research objects have no definitive states of being, for example in statistical mechanics.

== Methods of inquiry ==
=== Hypothetico-deductive method ===
The hypothetico-deductive model, or hypothesis-testing method, or "traditional" scientific method is, as the name implies, based on the formation of hypotheses and their testing via deductive reasoning. A hypothesis stating implications, often called predictions, that are falsifiable via experiment is of central importance here, as not the hypothesis but its implications are what is tested. Basically, scientists will look at the hypothetical consequences a (potential) theory holds and prove or disprove those instead of the theory itself. If an experimental test of those hypothetical consequences shows them to be false, it follows logically that the part of the theory that implied them was false also. If they show as true however, it does not prove the theory definitively. The logic of this testing is what allows this method of inquiry to be reasoned deductively.
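The adjustment of standing probabilities by Bayes' theorem, mentioned above, can be sketched with a hypothetical screening test (all numbers are illustrative, not from any study): even a seemingly accurate test yields mostly false positives when the underlying condition is rare.

```python
prior = 0.01        # P(condition): assumed 1% base rate in the population
sensitivity = 0.95  # P(positive | condition), assumed
false_pos = 0.05    # P(positive | no condition), assumed

# Law of total probability: overall chance of a positive result
p_positive = sensitivity * prior + false_pos * (1 - prior)

# Bayes' theorem: P(condition | positive)
posterior = sensitivity * prior / p_positive

print(f"P(condition | positive) = {posterior:.3f}")
```

Despite the test's apparent accuracy, the posterior probability is only about 16%, the kind of base-rate effect that professionals have been shown to misjudge.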
The formulated hypothesis is assumed to be 'true', and from that 'true' statement implications are inferred. If the following tests show the implications to be false, it follows that the hypothesis was false also. If tests show the implications to be true, new insights will be gained. It is important to be aware that a positive test here will at best strongly imply but not definitively prove the tested hypothesis: from (A ⇒ B), affirming the consequent (inferring A from an observed B) is not valid; only the contrapositive (¬B ⇒ ¬A) is valid logic. Their positive outcomes however, as Hempel put it, provide "at least some support, some corroboration or confirmation for it". This is why Popper insisted that fielded hypotheses be falsifiable, as successful tests imply very little otherwise. As Gillies put it, "successful theories are those that survive elimination through falsification". Deductive reasoning in this mode of inquiry will sometimes be replaced by abductive reasoning—the search for the most plausible explanation via logical inference. This happens, for example, in biology, where general laws are few and valid deductions rely on solid presuppositions.

=== Inductive method ===
The inductivist approach to deriving scientific truth first rose to prominence with Francis Bacon and particularly with Isaac Newton and those who followed him. After the establishment of the HD method, however, it was often put aside as something of a "fishing expedition". It is still valid to some degree, but today's inductive method is often far removed from the historic approach—the scale of the data collected lending new effectiveness to the method. It is most associated with data-mining projects or large-scale observation projects. In both these cases, it is often not at all clear what the results of proposed experiments will be, and thus knowledge will arise after the collection of data through inductive reasoning.
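The logical asymmetry described above, that (A ⇒ B) licenses the contrapositive (¬B ⇒ ¬A) but not the converse inference from B back to A, can be checked mechanically by enumerating all truth assignments; a small sketch:

```python
from itertools import product

def implies(p, q):
    # Material implication: p => q is false only when p is true and q is false
    return (not p) or q

# Modus tollens: whenever (A => B) holds and B is false, A is false.
assert all(not a
           for a, b in product([True, False], repeat=2)
           if implies(a, b) and not b)

# Affirming the consequent is invalid: (A => B) and B do not force A.
# Counterexample: A false, B true.
a, b = False, True
assert implies(a, b) and b and not a
print("a passing test (B true) does not prove the hypothesis (A)")
```

This is exactly why a successful prediction merely corroborates a hypothesis, while a failed prediction refutes it outright.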
Where the traditional method of inquiry does both, the inductive approach usually formulates only a research question, not a hypothesis. Following the initial question instead, a suitable "high-throughput method" of data-collection is determined, the resulting data processed and 'cleaned up', and conclusions drawn after. "This shift in focus elevates the data to the supreme role of revealing novel insights by themselves". The advantage the inductive method has over methods formulating a hypothesis is that it is essentially free of "a researcher's preconceived notions" regarding their subject. On the other hand, inductive reasoning is always attached to a measure of certainty, as all inductively reasoned conclusions are. This measure of certainty can reach quite high degrees, though; for example, in the determination of large primes, which are used in encryption software.

=== Mathematical modelling ===
Mathematical modelling, or allochthonous reasoning, typically is the formulation of a hypothesis followed by building mathematical constructs that can be tested in place of conducting physical laboratory experiments. This approach has two main factors: simplification/abstraction and, secondly, a set of correspondence rules. The correspondence rules lay out how the constructed model will relate back to reality, that is, how truth is derived; and the simplifying steps taken in the abstraction of the given system are to reduce factors that do not bear relevance and thereby reduce unexpected errors. These steps can also help the researcher in understanding the important factors of the system, and how far parsimony can be taken until the system becomes more and more unchangeable and thereby stable. Parsimony and related principles are further explored below. Once this translation into mathematics is complete, the resulting model, in place of the corresponding system, can be analysed through purely mathematical and computational means.
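One widely used way to analyse such a model computationally is Monte-Carlo simulation, sampling random inputs and aggregating the results. A minimal sketch, estimating π from random points in the unit square (the sample size and seed are arbitrary choices):

```python
import random

random.seed(42)  # fixed seed for a reproducible run

# Sample points uniformly in the unit square and count those that
# fall inside the quarter circle of radius 1.
n = 100_000
inside = 0
for _ in range(n):
    x, y = random.random(), random.random()
    if x * x + y * y <= 1.0:
        inside += 1

# Correspondence rule: area(quarter circle) / area(unit square) = pi / 4
pi_estimate = 4 * inside / n
print(f"pi is approximately {pi_estimate}")
```

The correspondence rule here is the area ratio; the estimate converges on π as n grows, with an error shrinking roughly as 1/√n.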
The results of this analysis are of course also purely mathematical in nature and get translated back to the system as it exists in reality via the previously determined correspondence rules—iteration following review and interpretation of the findings. The way such models are reasoned will often be mathematically deductive—but they don't have to be. An example here is Monte-Carlo simulation: such simulations generate empirical data "arbitrarily", and, while they may not be able to reveal universal principles, they can nevertheless be useful.

== Scientific inquiry ==
Scientific inquiry generally aims to obtain knowledge in the form of testable explanations that scientists can use to predict the results of future experiments. This allows scientists to gain a better understanding of the topic under study, and later to use that understanding to intervene in its causal mechanisms (such as to cure disease). The better an explanation is at making predictions, the more useful it frequently can be, and the more likely it will continue to explain a body of evidence better than its alternatives. The most successful explanations – those that explain and make accurate predictions in a wide range of circumstances – are often called scientific theories. Most experimental results do not produce large changes in human understanding; improvements in theoretical scientific understanding typically result from a gradual process of development over time, sometimes across different domains of science. Scientific models vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community. In general, explanations become accepted over time as evidence accumulates on a given topic, and the explanation in question proves more powerful than its alternatives at explaining the evidence. Often subsequent researchers re-formulate the explanations over time, or combine explanations to produce new explanations.
=== Properties of scientific inquiry ===
Scientific knowledge is closely tied to empirical findings and can remain subject to falsification if new experimental observations are incompatible with what is found. That is, no theory can ever be considered final since new problematic evidence might be discovered. If such evidence is found, a new theory may be proposed, or (more commonly) it is found that modifications to the previous theory are sufficient to explain the new evidence. The strength of a theory relates to how long it has persisted without major alteration to its core principles. Theories can also become subsumed by other theories. For example, Newton's laws explained thousands of years of scientific observations of the planets almost perfectly. However, these laws were then determined to be special cases of a more general theory (relativity), which explained both the (previously unexplained) exceptions to Newton's laws and predicted and explained other observations such as the deflection of light by gravity. Thus, in certain cases independent, unconnected, scientific observations can be connected, unified by principles of increasing explanatory power. Since new theories might be more comprehensive than what preceded them, and thus be able to explain more than previous ones, successor theories might be able to meet a higher standard by explaining a larger body of observations than their predecessors. For example, the theory of evolution explains the diversity of life on Earth, how species adapt to their environments, and many other patterns observed in the natural world; its most recent major modification was unification with genetics to form the modern evolutionary synthesis. In subsequent modifications, it has also subsumed aspects of many other fields such as biochemistry and molecular biology.
== Heuristics ==
=== Confirmation theory ===
During the course of history, one theory has succeeded another, and some have suggested further work while others have seemed content just to explain the phenomena. The reasons why one theory has replaced another are not always obvious or simple. The philosophy of science includes the question: what criteria are satisfied by a 'good' theory? This question has a long history, and many scientists, as well as philosophers, have considered it. The objective is to be able to choose one theory as preferable to another without introducing cognitive bias. Though different thinkers emphasize different aspects, a good theory: is accurate (the trivial element); is consistent, both internally and with other relevant currently accepted theories; has explanatory power, meaning its consequences extend beyond the data it is required to explain; has unificatory power, organizing otherwise confused and isolated phenomena; and is fruitful for further research. In trying to look for such theories, scientists will, given a lack of guidance by empirical evidence, try to adhere to parsimony in causal explanations and look for invariant observations. Scientists will sometimes also list the very subjective criterion of "formal elegance", which can indicate multiple different things. The goal here is to make the choice between theories less arbitrary. Nonetheless, these criteria contain subjective elements, and should be considered heuristics rather than definitive rules. Also, criteria such as these do not necessarily decide between alternative theories. Quoting Bird: "[Such criteria] cannot determine scientific choice. First, which features of a theory satisfy these criteria may be disputable (e.g. does simplicity concern the ontological commitments of a theory or its mathematical form?). Secondly, these criteria are imprecise, and so there is room for disagreement about the degree to which they hold.
Thirdly, there can be disagreement about how they are to be weighted relative to one another, especially when they conflict." It also is debatable whether existing scientific theories satisfy all these criteria, which may represent goals not yet achieved. For example, explanatory power over all existing observations is satisfied by no one theory at the moment. ==== Parsimony ==== The desiderata of a "good" theory have been debated for centuries, going back perhaps even earlier than Occam's razor, which is often taken as an attribute of a good theory. Science tries to be simple. When gathered data supports multiple explanations, the simplest explanation for phenomena, or the simplest formulation of a theory, is recommended by the principle of parsimony. Scientists go as far as to call simple proofs of complex statements beautiful. As Newton's first rule of reasoning puts it: "We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances." The concept of parsimony should not be held to imply complete frugality in the pursuit of scientific truth. The general process starts at the opposite end, with a vast number of potential explanations and general disorder. An example can be seen in Paul Krugman's process; he makes it explicit that one should "dare to be silly". He writes that in his work on new theories of international trade he reviewed prior work with an open frame of mind and broadened his initial viewpoint even in unlikely directions. Once he had a sufficient body of ideas, he would try to simplify and thus find what worked among what did not. Specific to Krugman here was to "question the question". He recognised that prior work had applied erroneous models to already present evidence, commenting that "intelligent commentary was ignored". He thus touched on the need to bridge the common bias against other circles of thought.
==== Elegance ==== Occam's razor might fall under the heading of "simple elegance", but it is arguable that parsimony and elegance pull in different directions. Introducing additional elements could simplify theory formulation, whereas simplifying a theory's ontology might lead to increased syntactical complexity. Sometimes ad-hoc modifications of a failing idea may also be dismissed as lacking "formal elegance". This appeal to what may be called "aesthetics" is hard to characterise, but is essentially about a sort of familiarity. Argument based on "elegance" is contentious, however, and over-reliance on familiarity breeds stagnation. ==== Invariance ==== Principles of invariance have been a theme in scientific writing, and especially physics, since at least the early 20th century. The basic idea is that good structures to look for are those independent of perspective, an idea that featured earlier, for example, in Mill's methods of difference and agreement—methods that would be referred back to in the context of contrast and invariance. But, as tends to be the case, there is a difference between something being a basic consideration and something being given weight. Principles of invariance were only given weight in the wake of Einstein's theories of relativity, which reduced everything to relations and were thereby fundamentally unchangeable, unable to be varied. As David Deutsch put it in 2009: "the search for hard-to-vary explanations is the origin of all progress". An example can be found in one of Einstein's thought experiments: that of a lab suspended in empty space, an example of a useful invariant observation. He imagined the absence of gravity and an experimenter free-floating in the lab. If an entity now pulls the lab upwards, accelerating uniformly, the experimenter perceives the resulting force as gravity. The entity, however, feels the work needed to accelerate the lab continuously.
Through this experiment Einstein was able to equate gravitational and inertial mass – something unexplained by Newton's laws, and an early but "powerful argument for a generalised postulate of relativity". The feature, which suggests reality, is always some kind of invariance of a structure independent of the aspect, the projection. The discussion of invariance in physics often takes place in the more specific context of symmetry. The Einstein example above, in the parlance of Mill, would be an agreement between two values. In the context of invariance, it is a variable that remains unchanged through some kind of transformation or change in perspective. A discussion focused on symmetry would view the two perspectives as systems that share a relevant aspect and are therefore symmetrical. Related principles here are falsifiability and testability. The opposite of something being hard to vary are theories that resist falsification—a frustration that was expressed colourfully by Wolfgang Pauli as them being "not even wrong". The importance of scientific theories being falsifiable finds special emphasis in the philosophy of Karl Popper. The broader view here is testability, since it includes the former and allows for additional practical considerations. == Philosophy and discourse == Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and the ethic that is implicit in science. There are basic assumptions, derived from philosophy by at least one prominent scientist, that form the base of the scientific method – namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world. These assumptions from methodological naturalism form a basis on which science may be grounded.
Logical positivist, empiricist, falsificationist, and other theories have criticized these assumptions and given alternative accounts of the logic of science, but each has also itself been criticized. There are several kinds of modern philosophical conceptualizations and attempts at definitions of the method of science. One is attempted by the unificationists, who argue for the existence of a unified definition that is useful (or at least 'works' in every context of science). The pluralists argue that the sciences are too fractured for a universal definition of method to be useful. Still others argue that the very attempt at definition is already detrimental to the free flow of ideas. Additionally, there have been views on the social framework in which science is done, and the impact of science's social environment on research. Also, there is 'scientific method' as popularised by Dewey in How We Think (1910) and Karl Pearson in Grammar of Science (1892), as used in a fairly uncritical manner in education. === Pluralism === Scientific pluralism is a position within the philosophy of science that rejects various proposed unities of scientific method and subject matter. Scientific pluralists hold that science is not unified in one or more of the following ways: the metaphysics of its subject matter, the epistemology of scientific knowledge, or the research methods and models that should be used. Some pluralists believe that pluralism is necessary due to the nature of science. Others say that since scientific disciplines already vary in practice, there is no reason to believe this variation is wrong until a specific unification is empirically proven. Finally, some hold that pluralism should be allowed for normative reasons, even if unity were possible in theory. === Unificationism === Unificationism, in science, was a central tenet of logical positivism. Different logical positivists construed this doctrine in several different ways, e.g.
as a reductionist thesis, that the objects investigated by the special sciences reduce to the objects of a common, putatively more basic domain of science, usually thought to be physics; as the thesis that all theories and results of the various sciences can or ought to be expressed in a common language or "universal slang"; or as the thesis that all the special sciences share a common scientific method. Development of the idea has been troubled by accelerated advancement in technology that has opened up many new ways to look at the world. The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend. === Epistemological anarchism === Paul Feyerabend examined the history of science, and was led to deny that science is genuinely a methodological process. In his 1975 book Against Method he argued that no description of scientific method could possibly be broad enough to include all the approaches and methods used by scientists, and that there are no useful and exception-free methodological rules governing the progress of science. In essence, he said that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. He jokingly suggested that, if believers in the scientific method wish to express a single universally valid rule, it should be 'anything goes'. As has been argued before him, however, this is uneconomical; problem solvers and researchers are to be prudent with their resources during their inquiry. A more general inference against formalised method has been found through research involving interviews with scientists regarding their conception of method.
This research indicated that scientists frequently encounter difficulty in determining whether the available evidence supports their hypotheses. This reveals that there are no straightforward mappings between overarching methodological concepts and precise strategies to direct the conduct of research. === Education === In science education, the idea of a general and universal scientific method has been notably influential, and numerous studies (in the US) have shown that this framing of method often forms part of both students’ and teachers’ conception of science. This convention of traditional education has been argued against by scientists, as there is a consensus that education's sequential elements and unified view of the scientific method do not reflect how scientists actually work. Major organizations of scientists such as the American Association for the Advancement of Science (AAAS) consider the sciences to be a part of the liberal arts traditions of learning, and proper understanding of science includes understanding of philosophy and history, not just science in isolation. How the sciences make knowledge has been taught in the context of "the" scientific method (singular) since the early 20th century. Various systems of education, including but not limited to the US, have taught the method of science as a process or procedure, structured as a definitive series of steps: observation, hypothesis, prediction, experiment. This version of the method of science has been a long-established standard in primary and secondary education, as well as the biomedical sciences. It has long been held to be an inaccurate idealisation of how some scientific inquiries are structured.
This taught presentation of science has had to answer for several demerits: it pays no regard to the social context of science; it suggests a singular methodology of deriving knowledge; it overemphasises experimentation; it oversimplifies science, giving the impression that following a scientific process automatically leads to knowledge; it gives an illusion of determinism, as if questions necessarily lead to some kind of answer and answers are preceded by (specific) questions; and it holds that scientific theories arise from observed phenomena only. The scientific method no longer features in the 2013 standards for US education (NGSS), which replaced those of 1996 (NRC). These, too, have influenced international science education, and the standards measured for have since shifted from the singular hypothesis-testing method to a broader conception of scientific methods. These scientific methods, which are rooted in scientific practices rather than epistemology, are described as the three dimensions of scientific and engineering practices, crosscutting concepts (interdisciplinary ideas), and disciplinary core ideas. The scientific method, as a result of simplified and universal explanations, is often held to have reached a kind of mythological status: a tool for communication or, at best, an idealisation. Education's approach was heavily influenced by John Dewey's How We Think (1910). Van der Ploeg (2016) indicated that Dewey's views on education had long been used to further an idea of citizen education removed from "sound education", claiming that references to Dewey in such arguments were undue interpretations (of Dewey). === Sociology of knowledge === The sociology of knowledge is a concept in the discussion around scientific method, claiming the underlying method of science to be sociological. King explains that sociology distinguishes here between the system of ideas that govern the sciences through an inner logic, and the social system in which those ideas arise.
==== Thought collectives ==== A perhaps accessible entry point to this claim is Fleck's thought, echoed in Kuhn's concept of normal science. According to Fleck, scientists' work is based on a thought-style that cannot be rationally reconstructed. It gets instilled through the experience of learning, and science is then advanced based on a tradition of shared assumptions held by what he called thought collectives. Fleck also claims this phenomenon to be largely invisible to members of the group. Comparably, following Latour and Woolgar's field research in an academic scientific laboratory, Karin Knorr Cetina conducted a comparative study of two scientific fields (namely high energy physics and molecular biology), concluding that the epistemic practices and reasonings within the two scientific communities differ enough to introduce the concept of "epistemic cultures", in contradiction with the idea that a so-called "scientific method" is unique and a unifying concept. ==== Situated cognition and relativism ==== On Fleck's idea of thought collectives, sociologists built the concept of situated cognition: that the perspective of the researcher fundamentally affects their work; more radical views followed. Norwood Russell Hanson, alongside Thomas Kuhn and Paul Feyerabend, extensively explored the theory-laden nature of observation in science. Hanson introduced the concept in 1958, emphasizing that observation is influenced by the observer's conceptual framework. He used the concept of gestalt to show how preconceptions can affect both observation and description, and illustrated this with examples like the initial rejection of Golgi bodies as an artefact of staining technique, and the differing interpretations of the same sunrise by Tycho Brahe and Johannes Kepler. Intersubjectivity led to different conclusions. Kuhn and Feyerabend acknowledged Hanson's pioneering work, although Feyerabend's views on methodological pluralism were more radical.
Criticisms like those from Kuhn and Feyerabend prompted discussions leading to the development of the strong programme, a sociological approach that seeks to explain scientific knowledge without recourse to the truth or validity of scientific theories. It examines how scientific beliefs are shaped by social factors such as power, ideology, and interests. The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between postmodernist and realist perspectives. Postmodernists argue that scientific knowledge is merely a discourse, devoid of any claim to fundamental truth. In contrast, realists within the scientific community maintain that science uncovers real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate way of deriving truth. == Limits of method == === Role of chance in discovery === Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky. Scientists themselves in the 19th and 20th century acknowledged the role of fortunate luck or serendipity in discoveries. Louis Pasteur is credited with the famous saying that "Luck favours the prepared mind", but some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected. 
This is what Nassim Nicholas Taleb calls "Anti-fragility"; while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough – it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world. Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try to fix what they think is an error in their method. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious, and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise. === Relationship with statistics === When the scientific method employs statistics as a key part of its arsenal, there are mathematical and practical issues that can have a deleterious effect on the reliability of the output of scientific methods. This is described in a popular 2005 scientific paper "Why Most Published Research Findings Are False" by John Ioannidis, which is considered foundational to the field of metascience. Much research in metascience seeks to identify poor use of statistics and improve its use, an example being the misuse of p-values. The points raised are both statistical and economic. Statistically, research findings are less likely to be true when studies are small and when there is significant flexibility in study design, definitions, outcomes, and analytical approaches.
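The statistical point can be made concrete with Bayes' rule. The sketch below is a simplified textbook model, not Ioannidis's exact formulation: it computes the positive predictive value of a "significant" finding from the prior probability that a tested hypothesis is true, the study's power, and the false-positive rate α.

```python
def positive_predictive_value(prior: float, power: float, alpha: float) -> float:
    """Fraction of 'significant' findings that are actually true.

    prior: probability a tested hypothesis is true before the study
    power: P(significant result | hypothesis true)
    alpha: P(significant result | hypothesis false), the false-positive rate
    """
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A well-powered study in a field where 1 in 10 tested hypotheses is true:
print(round(positive_predictive_value(prior=0.10, power=0.80, alpha=0.05), 2))  # → 0.64

# Long-shot, low-powered exploratory research (1 in 100 hypotheses true):
print(round(positive_predictive_value(prior=0.01, power=0.20, alpha=0.05), 2))  # → 0.04
```

The second case illustrates the claim in the text: with low pre-study odds and small studies, most "positive" findings are false even when the test itself is applied correctly.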
Economically, the reliability of findings decreases in fields with greater financial interests, biases, and a high level of competition among research teams. As a result, most research findings are considered false across various designs and scientific fields, particularly in modern biomedical research, which often operates in areas with very low pre- and post-study probabilities of yielding true findings. Nevertheless, despite these challenges, most new discoveries will continue to arise from hypothesis-generating research that begins with low or very low pre-study odds. This suggests that expanding the frontiers of knowledge will depend on investigating areas outside the mainstream, where the chances of success may initially appear slim. === Science of complex systems === Science applied to complex systems can involve elements such as transdisciplinarity, systems theory, control theory, and scientific modelling. In general, the scientific method may be difficult to apply stringently to diverse, interconnected systems and large data sets. In particular, practices used within Big data, such as predictive analytics, may be considered to be at odds with the scientific method, as some of the data may have been stripped of the parameters which might be material in alternative hypotheses for an explanation; thus the stripped data would only serve to support the null hypothesis in the predictive analytics application. Fleck (1979), pp. 38–50 notes "a scientific discovery remains incomplete without considerations of the social practices that condition it". == Relationship with mathematics == Science is the process of gathering, comparing, and evaluating proposed models against observables. A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines try to distinguish what is known from what is unknown at each stage of discovery. 
Models, in both science and mathematics, need to be internally consistent and also ought to be falsifiable (capable of disproof). In mathematics, a statement need not yet be proved; at such a stage, that statement would be called a conjecture. Mathematical work and scientific work can inspire each other. For example, the technical concept of time arose in science, and timelessness was a hallmark of a mathematical topic. But today, the Poincaré conjecture has been proved using time as a mathematical concept in which objects can flow (see Ricci flow). Nevertheless, the connection between mathematics and reality (and so science to the extent it describes reality) remains obscure. Eugene Wigner's paper, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", is a very well-known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well-known mathematicians such as Gregory Chaitin, and others such as Lakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitations (including cultural ones), somewhat like the post-modernist view of science. George Pólya's work on problem solving, the construction of mathematical proofs, and heuristics shows that the mathematical method and the scientific method differ in detail, while nevertheless resembling each other in using iterative or recursive steps. In Pólya's view, understanding involves restating unfamiliar definitions in your own words, resorting to geometrical figures, and questioning what we know and do not know already; analysis, which Pólya takes from Pappus, involves free and heuristic construction of plausible arguments, working backward from the goal, and devising a plan for constructing the proof; synthesis is the strict Euclidean exposition of step-by-step details of the proof; review involves reconsidering and re-examining the result and the path taken to it.
Building on Pólya's work, Imre Lakatos argued that mathematicians actually use contradiction, criticism, and revision as principles for improving their work. In like manner to science, where truth is sought but certainty is not found, what Lakatos tried to establish in Proofs and Refutations was that no theorem of informal mathematics is final or perfect. This means that, in non-axiomatic mathematics, we should not think that a theorem is ultimately true, only that no counterexample has yet been found. Once a counterexample, i.e. an entity contradicting or not explained by the theorem, is found, we adjust the theorem, possibly extending the domain of its validity. This is the continuous way in which our knowledge accumulates, through the logic and process of proofs and refutations. (However, if axioms are given for a branch of mathematics, this creates a logical system — Wittgenstein 1921, Tractatus Logico-Philosophicus 5.13. Lakatos claimed that proofs from such a system were tautological, i.e. internally logically true, by rewriting forms, as shown by Poincaré, who demonstrated the technique of transforming tautologically true forms (viz. the Euler characteristic) into or out of forms from homology, or more abstractly, from homological algebra.) Lakatos proposed an account of mathematical knowledge based on Pólya's idea of heuristics. In Proofs and Refutations, Lakatos gave several basic rules for finding proofs and counterexamples to conjectures. He thought that mathematical 'thought experiments' are a valid way to discover mathematical conjectures and proofs. Gauss, when asked how he came about his theorems, once replied "durch planmässiges Tattonieren" (through systematic palpable experimentation).
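Lakatos's running example in Proofs and Refutations is Euler's polyhedron formula V − E + F = 2, which holds for simple, sphere-like polyhedra but fails for a polyhedron with a hole through it; the "theorem" is then adjusted by refining its domain. A quick check (the vertex/edge/face counts below are standard textbook values, not taken from the text):

```python
def euler_characteristic(vertices: int, edges: int, faces: int) -> int:
    """V - E + F, the quantity Euler's polyhedron formula claims equals 2."""
    return vertices - edges + faces

# Simple, sphere-like polyhedra satisfy the conjecture:
print(euler_characteristic(4, 6, 4))     # tetrahedron → 2
print(euler_characteristic(8, 12, 6))    # cube → 2

# A "picture frame" (toroidal) polyhedron is a Lakatosian counterexample:
print(euler_characteristic(16, 32, 16))  # → 0, not 2
```

The counterexample does not make the formula worthless; it forces the refinement "for all simply connected polyhedra", exactly the proofs-and-refutations cycle described above.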
== See also == Empirical limits in science – Idea that knowledge comes only/mainly from sensory experience Evidence-based practices – Pragmatic methodology Methodology – Study of research methods Metascience – Scientific study of science Outline of scientific method Quantitative research – All procedures for the numerical representation of empirical facts Research transparency Scientific law – Statement based on repeated empirical observations that describes some natural phenomenon Scientific technique – Systematic way of obtaining information Testability – Extent to which the truth or falsity of a hypothesis can be tested == Notes == === Notes: Problem-solving via scientific method === === Notes: Philosophical expressions of method === == References == == Sources == == Further reading == == External links == Andersen, Hanne; Hepburn, Brian. "Scientific Method". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. "Confirmation and Induction". Internet Encyclopedia of Philosophy. Scientific method at PhilPapers Scientific method at the Indiana Philosophy Ontology Project An Introduction to Science: Scientific Thinking and a scientific method Archived 2018-01-01 at the Wayback Machine by Steven D. Schafersman. Introduction to the scientific method at the University of Rochester The scientific method from a philosophical perspective Theory-ladenness by Paul Newall at The Galilean Library Lecture on Scientific Method by Greg Anderson (archived 28 April 2006) Using the scientific method for designing science fair projects Scientific Methods an online book by Richard D. Jarrard Richard Feynman on the Key to Science (one minute, three seconds), from the Cornell Lectures.
Lectures on the Scientific Method by Nick Josh Karean, Kevin Padian, Michael Shermer and Richard Dawkins (archived 21 January 2013). "How Do We Know What Is True?" (animated video; 2:52)
Wikipedia/Process_(science)
This article compares the syntax for defining and instantiating an algebraic data type (ADT), sometimes also referred to as a tagged union, in various programming languages. == Examples of algebraic data types == === ATS === In ATS, an ADT may be defined with: And instantiated as: Additionally, in ATS, dataviewtypes are the linear type version of ADTs, for the purpose of providing, in the setting of manual memory management, the convenience of pattern matching. An example program might look like: === Ceylon === In Ceylon, an ADT may be defined with: And instantiated as: === Clean === In Clean, an ADT may be defined with: And instantiated as: === Coq === In Coq, an ADT may be defined with: And instantiated as: === C++ === In C++, an ADT may be defined with: And instantiated as: === Dart === In Dart, an ADT may be defined with: And instantiated as: === Elm === In Elm, an ADT may be defined with: And instantiated as: === F# === In F#, an ADT may be defined with: And instantiated as: === F* === In F*, an ADT may be defined with: And instantiated as: === Free Pascal === In Free Pascal (in standard ISO Pascal mode), an ADT may be defined with variant records: And instantiated as: === Haskell === In Haskell, an ADT may be defined with: And instantiated as: === Haxe === In Haxe, an ADT may be defined with: And instantiated as: === Hope === In Hope, an ADT may be defined with: And instantiated as: === Idris === In Idris, an ADT may be defined with: And instantiated as: === Java === In Java, an ADT may be defined with: And instantiated as: === Julia === In Julia, an ADT may be defined with: And instantiated as: === Kotlin === In Kotlin, an ADT may be defined with: And instantiated as: === Limbo === In Limbo, an ADT may be defined with: And instantiated as: === Mercury === In Mercury, an ADT may be defined with: And instantiated as: === Miranda === In Miranda, an ADT may be defined with: And instantiated as: === Nemerle === In Nemerle, an ADT may be defined with: And
instantiated as: === Nim === In Nim, an ADT may be defined with: And instantiated as: === OCaml === In OCaml, an ADT may be defined with: And instantiated as: === Opa === In Opa, an ADT may be defined with: And instantiated as: === OpenCog === In OpenCog, an ADT may be defined with: === PureScript === In PureScript, an ADT may be defined with: And instantiated as: === Python === In Python, an ADT may be defined with: And instantiated as: === Racket === In Typed Racket, an ADT may be defined with: And instantiated as: === Reason === ==== Reason ==== In Reason, an ADT may be defined with: And instantiated as: ==== ReScript ==== In ReScript, an ADT may be defined with: And instantiated as: === Rust === In Rust, an ADT may be defined with: And instantiated as: === Scala === ==== Scala 2 ==== In Scala 2, an ADT may be defined with: And instantiated as: ==== Scala 3 ==== In Scala 3, an ADT may be defined with: And instantiated as: === Standard ML === In Standard ML, an ADT may be defined with: And instantiated as: === Swift === In Swift, an ADT may be defined with: And instantiated as: === TypeScript === In TypeScript, an ADT may be defined with: And instantiated as: === Visual Prolog === In Visual Prolog, an ADT may be defined with: And instantiated as: === Zig === In Zig, an ADT may be defined with: And instantiated as: == References ==
Wikipedia/Comparison_of_programming_languages_(algebraic_data_type)
In formal language theory and computer science, a substring is a contiguous sequence of characters within a string. For instance, "the best of" is a substring of "It was the best of times". In contrast, "Itwastimes" is a subsequence of "It was the best of times", but not a substring. Prefixes and suffixes are special cases of substrings. A prefix of a string S is a substring of S that occurs at the beginning of S; likewise, a suffix of a string S is a substring that occurs at the end of S. The substrings of the string "apple" would be: "a", "ap", "app", "appl", "apple", "p", "pp", "ppl", "pple", "pl", "ple", "l", "le", "e", "" (note the empty string at the end). == Substring == A string u is a substring (or factor) of a string t if there exist two strings p and s such that t = pus. In particular, the empty string is a substring of every string. Example: The string u = ana is equal to substrings (and subsequences) of t = banana at two different offsets. The first occurrence is obtained with p = b and s = na (b·ana·na), while the second occurrence is obtained with p = ban and s being the empty string (ban·ana). A substring of a string is a prefix of a suffix of the string, and equivalently a suffix of a prefix; for example, nan is a prefix of nana, which is in turn a suffix of banana. If u is a substring of t, it is also a subsequence, which is a more general concept. The occurrences of a given pattern in a given string can be found with a string searching algorithm. Finding the longest string which is equal to a substring of two or more strings is known as the longest common substring problem.
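The definitions above map directly onto string slicing. A brief sketch (the helper names are ours, not from the text) that enumerates distinct substrings and finds the offsets of a pattern:

```python
def substrings(s: str) -> set[str]:
    """All contiguous slices of s, including the empty string."""
    return {s[i:j] for i in range(len(s) + 1) for j in range(i, len(s) + 1)}

def occurrences(pattern: str, text: str) -> list[int]:
    """Offsets at which pattern occurs as a substring of text (naive scan)."""
    n = len(pattern)
    return [i for i in range(len(text) - n + 1) if text[i:i + n] == pattern]

# "apple" has 15 distinct substrings, matching the list in the text
# (16 slices in total, but "p" occurs twice):
print(len(substrings("apple")))      # → 15

# "ana" occurs in "banana" at the two offsets from the example:
print(occurrences("ana", "banana"))  # → [1, 3]
```

Production code would use a proper string-searching algorithm rather than this quadratic scan, but the sketch shows the "t = pus" decomposition concretely: each offset i corresponds to p = text[:i] and s = text[i + n:].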
In the mathematical literature, substrings are also called subwords (in America) or factors (in Europe). == Prefix == A string p is a prefix of a string t if there exists a string s such that t = ps. A proper prefix of a string is not equal to the string itself; some sources in addition restrict a proper prefix to be non-empty. A prefix can be seen as a special case of a substring. Example: The string ban is equal to a prefix (and substring and subsequence) of the string banana. The square subset symbol is sometimes used to indicate a prefix, so that p ⊑ t denotes that p is a prefix of t. This defines a binary relation on strings, called the prefix relation, which is a particular kind of prefix order. == Suffix == A string s is a suffix of a string t if there exists a string p such that t = ps. A proper suffix of a string is not equal to the string itself. A more restricted interpretation is that it is also not empty.[1] A suffix can be seen as a special case of a substring. Example: The string nana is equal to a suffix (and substring and subsequence) of the string banana. A suffix tree for a string is a trie data structure that represents all of its suffixes. Suffix trees have large numbers of applications in string algorithms. The suffix array is a simplified version of this data structure that lists the start positions of the suffixes in alphabetically sorted order; it has many of the same applications. == Border == A border is a suffix and prefix of the same string, e.g. "bab" is a border of "babab" (and also of "baboon eating a kebab"). == Superstring == A superstring of a finite set P of strings is a single string that contains every string in P as a substring.
For example, bcclabccefab {\displaystyle {\text{bcclabccefab}}} is a superstring of P = { abcc , efab , bccla } {\displaystyle P=\{{\text{abcc}},{\text{efab}},{\text{bccla}}\}} , and efabccla {\displaystyle {\text{efabccla}}} is a shorter one. Concatenating all members of P {\displaystyle P} , in arbitrary order, always yields a trivial superstring of P {\displaystyle P} . Finding superstrings whose length is as small as possible is a more interesting problem. A string that contains every possible permutation of a specified character set is called a superpermutation. == See also == Brace notation Substring index Suffix automaton == References ==
Wikipedia/Prefix_(computer_science)
Comma-separated values (CSV) is a text file format that uses commas to separate values, and newlines to separate records. A CSV file stores tabular data (numbers and text) in plain text, where each line of the file typically represents one data record. Each record consists of the same number of fields, and these are separated by commas in the CSV file. If the field delimiter itself may appear within a field, fields can be surrounded with quotation marks. The CSV file format is one type of delimiter-separated file format. Delimiters frequently used include the comma, tab, space, and semicolon. Delimiter-separated files are often given a ".csv" extension even when the field separator is not a comma. Many applications or libraries that consume or produce CSV files have options to specify an alternative delimiter. The lack of adherence to the CSV standard RFC 4180 necessitates the support for a variety of CSV formats in data input software. Despite this drawback, CSV remains widespread in data applications and is widely supported by a variety of software, including common spreadsheet applications such as Microsoft Excel. Benefits cited in favor of CSV include human readability and the simplicity of the format. == Applications == CSV is a common data exchange format that is widely supported by consumer, business, and scientific applications. Among its most common uses is moving tabular data between programs that natively operate on incompatible (often proprietary or undocumented) formats. For example, a user may need to transfer information from a database program that stores data in a proprietary format, to a spreadsheet that uses a completely different format. Most database programs can export data as CSV. Most spreadsheet programs can read CSV data, allowing CSV to be used as an intermediate format when transferring data from a database to a spreadsheet. Every major ecommerce platform provides support for exporting data as a CSV file. 
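As an illustration of such interchange, the following sketch uses Python's standard csv module to write records containing an embedded comma and read them back (the sample rows are borrowed from the example CSV shown in this article):

```python
import csv
import io

rows = [
    ["Year", "Make", "Model", "Description"],
    ["1997", "Ford", "E350", "ac, abs, moon"],  # embedded comma forces quoting
]

# write to an in-memory buffer instead of a file
buf = io.StringIO()
csv.writer(buf).writerows(rows)
text = buf.getvalue()
assert '"ac, abs, moon"' in text  # the comma-bearing field was quoted

# reading recovers the original fields, with the quotes stripped
parsed = list(csv.reader(io.StringIO(text)))
assert parsed == rows
```

The same round trip works with an on-disk file object in place of the StringIO buffer.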
CSV is also used for storing data. Common data science tools such as Pandas include the option to export data to CSV for long-term storage. Benefits of CSV for data storage include its simplicity, which makes parsing and creating CSV files easy to implement and fast compared to other data formats; its human readability, which makes editing or fixing data simpler; and its high compressibility, which leads to smaller data files. On the other hand, CSV does not support more complex data relations and makes no distinction between null and empty values, so in applications where these features are needed other formats are preferred. More than 200 local, regional, and national data portals, such as those of the UK government and the European Commission, use CSV files with standardized data catalogs. == Specification == RFC 4180 proposes a specification for the CSV format; however, actual practice often does not follow the RFC and the term "CSV" might refer to any file that: is plain text using a character encoding such as ASCII, various Unicode character encodings (e.g. UTF-8), EBCDIC, or Shift JIS; consists of records (typically one record per line), with the records divided into fields separated by a comma; and where every record has the same sequence of fields. Within these general constraints, many variations are in use. Therefore, without additional information (such as whether RFC 4180 is honored), a file claimed simply to be in "CSV" format is not fully specified. == History == Comma-separated values is a data format that predates personal computers by more than a decade: the IBM Fortran (level H extended) compiler under OS/360 supported CSV in 1972. List-directed ("free form") input/output was defined in FORTRAN 77, approved in 1978. List-directed input used commas or spaces for delimiters, so unquoted character strings could not contain commas or spaces. The term "comma-separated value" and the "CSV" abbreviation were in use by 1983. 
The manual for the Osborne Executive computer, which bundled the SuperCalc spreadsheet, documents the CSV quoting convention that allows strings to contain embedded commas, but the manual does not specify a convention for embedding quotation marks within quoted strings. Comma-separated value lists are easier to type (for example into punched cards) than fixed-column-aligned data, and they were less prone to producing incorrect results if a value was punched one column off from its intended location. Comma separated files are used for the interchange of database information between machines of two different architectures. The plain-text character of CSV files largely avoids incompatibilities such as byte-order and word size. The files are largely human-readable, so it is easier to deal with them in the absence of perfect documentation or communication. The main standardization initiative, transforming the "de facto fuzzy definition" into a more precise and de jure one, was in 2005, with RFC 4180, defining CSV as a MIME Content Type. Later, in 2013, some of RFC 4180's deficiencies were tackled by a W3C recommendation. In 2014 IETF published RFC 7111 describing the application of URI fragments to CSV documents. RFC 7111 specifies how row, column, and cell ranges can be selected from a CSV document using position indexes. In 2015 the W3C, in an attempt to enhance CSV with formal semantics, published the first drafts of recommendations for CSV metadata standards, which became recommendations in December of the same year. == General functionality == CSV formats are best used to represent sets or sequences of records in which each record has an identical list of fields. This corresponds to a single relation in a relational database, or to data (though not calculations) in a typical spreadsheet. The format dates back to the early days of business computing and is widely used to pass data between computers with different internal word sizes, data formatting needs, and so forth. 
For this reason, CSV files are common on all computer platforms. CSV is a delimited text file that uses a comma to separate values (many implementations of CSV import/export tools allow other separators to be used; for example, the use of a "Sep=^" row as the first row in the *.csv file will cause Excel to open the file expecting caret "^" to be the separator instead of comma ","). Simple CSV implementations may prohibit field values that contain a comma or other special characters such as newlines. More sophisticated CSV implementations permit them, often by requiring " (double quote) characters around values that contain reserved characters (such as commas, double quotes, or less commonly, newlines). Embedded double quote characters may then be represented by a pair of consecutive double quotes, or by prefixing a double quote with an escape character such as a backslash (for example in Sybase Central). CSV formats are not limited to a particular character set. They work just as well with Unicode character sets (such as UTF-8 or UTF-16) as with ASCII (although particular programs that support CSV may have their own limitations). CSV files normally will even survive naïve translation from one character set to another (unlike nearly all proprietary data formats). CSV does not, however, provide any way to indicate what character set is in use, so that must be communicated separately, or determined at the receiving end (if possible). Databases that include multiple relations cannot be exported as a single CSV file. Similarly, CSV cannot naturally represent hierarchical or object-oriented data. This is because every CSV record is expected to have the same structure. CSV is therefore rarely appropriate for documents created with HTML, XML, or other markup or word-processing technologies. Statistical databases in various fields often have a generally relation-like structure, but with some repeatable groups of fields. 
For example, health databases such as the Demographic and Health Survey typically repeat some questions for each child of a given parent (perhaps up to a fixed maximum number of children). Statistical analysis systems often include utilities that can "rotate" such data; for example, a "parent" record that includes information about five children can be split into five separate records, each containing (a) the information on one child, and (b) a copy of all the non-child-specific information. CSV can represent either the "vertical" or "horizontal" form of such data. In a relational database, similar issues are readily handled by creating a separate relation for each such group, and connecting "child" records to the related "parent" records using a foreign key (such as an ID number or name for the parent). In markup languages such as XML, such groups are typically enclosed within a parent element and repeated as necessary (for example, multiple <child> nodes within a single <parent> node). With CSV there is no widely accepted single-file solution. == Standardization == The name "CSV" indicates the use of the comma to separate data fields. Nevertheless, the term "CSV" is widely used to refer to a large family of formats that differ in many ways. Some implementations allow or require single or double quotation marks around some or all fields; and some reserve the first record as a header containing a list of field names. The character set being used is undefined: some applications require a Unicode byte order mark (BOM) to enforce Unicode interpretation (sometimes even a UTF-8 BOM). Files that use the tab character instead of comma can be more precisely referred to as "TSV" for tab-separated values. Other implementation differences include the handling of more commonplace field separators (such as space or semicolon) and newline characters inside text fields. 
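Most CSV libraries expose such dialect choices as parameters. For instance, Python's csv module lets the caller specify the field delimiter; the sketch below parses a semicolon-separated record of the kind used in locales where the comma is the decimal separator:

```python
import csv
import io

# a semicolon-separated file, as used where the decimal separator is a comma
data = "Year;Make;Model;Length\r\n1997;Ford;E350;2,35\r\n"

rows = list(csv.reader(io.StringIO(data), delimiter=";"))

# the decimal comma in "2,35" is preserved, since ";" is the field separator
assert rows[1] == ["1997", "Ford", "E350", "2,35"]
```

The same delimiter parameter is accepted by csv.writer, so a program can both consume and produce such dialects.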
One more subtlety is the interpretation of a blank line: it can equally be the result of writing a record of zero fields, or a record of one field of zero length; thus decoding it is ambiguous. === RFC 4180 and MIME standards === The 2005 technical standard RFC 4180 formalizes the CSV file format and defines the MIME type "text/csv" for the handling of text-based fields. However, the interpretation of the text of each field is still application-specific. Files that follow the RFC 4180 standard can simplify CSV exchange and should be widely portable. Among its requirements: MS-DOS-style lines that end with (CR/LF) characters (optional for the last line). An optional header record (there is no sure way to detect whether it is present, so care is required when importing). Each record should contain the same number of comma-separated fields. Any field may be quoted (with double quotes). Fields containing a line-break, double-quote or commas should be quoted. (If they are not, the file will likely be impossible to process correctly.) If double-quotes are used to enclose fields, then a double-quote in a field must be represented by two double-quote characters. The format can be processed by most programs that claim to read CSV files. The exceptions are (a) programs may not support line-breaks within quoted fields, (b) programs may confuse the optional header with data or interpret the first data line as an optional header, and (c) double-quotes in a field may not be parsed correctly automatically. === OKF frictionless tabular data package === In 2011 Open Knowledge Foundation (OKF) and various partners created a data protocols working group, which later evolved into the Frictionless Data initiative. One of the main formats they released was the Tabular Data Package. 
The Tabular Data Package was heavily based on CSV, using it as the main data transport format and adding basic type and schema metadata (CSV lacks any type information to distinguish the string "1" from the number 1). The Frictionless Data Initiative has also provided a standard CSV Dialect Description Format for describing different dialects of CSV, for example specifying the field separator or quoting rules. === W3C tabular data standard === In 2013 the W3C "CSV on the Web" working group began to specify technologies providing higher interoperability for web applications using CSV or similar formats. The working group completed its work in February 2016 and was officially closed in March 2016 with the release of a set of documents and W3C recommendations for modeling "Tabular Data", and enhancing CSV with metadata and semantics. While the well-formedness of CSV data can readily be checked, testing validity and canonical form is less well developed, relative to more precise data formats, such as XML and SQL, which offer richer types and rules-based validation. == Basic rules == Many informal documents exist that describe "CSV" formats. IETF RFC 4180 (summarized above) defines the format for the "text/csv" MIME type registered with the IANA. Rules typical of these and other "CSV" specifications and implementations are as follows: == Example == The above table of data may be represented in CSV format as follows: Year,Make,Model,Description,Price 1997,Ford,E350,"ac, abs, moon",3000.00 1999,Chevy,"Venture ""Extended Edition""","",4900.00 1999,Chevy,"Venture ""Extended Edition, Very Large""","",5000.00 1996,Jeep,Grand Cherokee,"MUST SELL! 
air, moon roof, loaded",4799.00 Example of a USA/UK CSV file (where the decimal separator is a period/full stop and the value separator is a comma): Year,Make,Model,Length 1997,Ford,E350,2.35 2000,Mercury,Cougar,2.38 Example of an analogous European CSV/DSV file (where the decimal separator is a comma and the value separator is a semicolon): Year;Make;Model;Length 1997;Ford;E350;2,35 2000;Mercury;Cougar;2,38 The latter format is not RFC 4180 compliant. Compliance could be achieved by the use of a comma instead of a semicolon as a separator and by quoting all numbers that have a decimal mark. == Application support == Some applications use CSV as a data interchange format to enhance its interoperability, exporting and importing CSV. Others use CSV as an internal format. As a data interchange format: the CSV file format is supported by almost all spreadsheets and database management systems, Spreadsheets including Apple Numbers, LibreOffice Calc, and Apache OpenOffice Calc. Microsoft Excel also supports a dialect of CSV with restrictions in comparison to other spreadsheet software (e.g., as of 2019 Excel still cannot export CSV files in the commonly used UTF-8 character encoding, and separator is not enforced to be the comma). LibreOffice Calc CSV importer is actually a more generic delimited text importer, supporting multiple separators at the same time as well as field trimming. Various Relational databases support saving query results to a CSV file. PostgreSQL provides the COPY command, which allows for both saving and loading data to and from a file. COPY (SELECT * FROM articles) TO '/home/wikipedia/file.csv' (FORMAT csv) saves the content of a table articles to a file called /home/wikipedia/file.csv. Many utility programs on Unix-style systems (such as cut, paste, join, sort, uniq, awk) can split files on a comma delimiter, and can therefore process simple CSV files. 
However, this method does not correctly handle commas or newlines within quoted strings, hence it is better to use tools like csvkit or Miller. As a (main or optional) internal representation: this use can be native or foreign, but it differs from use as an interchange format ("export/import only") because it is not necessary to create a copy in another format. Some spreadsheets, including LibreOffice Calc, offer this option without forcing the user to adopt another format. Some relational databases, when using standard SQL, offer a foreign-data wrapper (FDW); for example, PostgreSQL offers the CREATE FOREIGN TABLE and CREATE EXTENSION file_fdw commands to configure any variant of CSV. Databases like Apache Hive offer the option to express CSV or .csv.gz as an internal table format. The Emacs editor can operate on CSV files using csv-nav mode. The CSV format is supported by libraries available for many programming languages. Most provide some way to specify the field delimiter, decimal separator, character encoding, quoting conventions, date format, etc. === Software and row limits === Programs that work with CSV may have limits on the maximum number of rows CSV files can have. Below is a list of common software and its limitations: Microsoft Excel: 1,048,576 row limit; Microsoft PowerShell: no row or cell limit (memory limited); Apple Numbers: 1,000,000 row limit; Google Sheets: 10,000,000 cell limit (the product of columns and rows); OpenOffice and LibreOffice: 1,048,576 row limit; Sourcetable: no row limit (spreadsheet-database hybrid); text editors (such as WordPad, TextEdit, Vim, etc.): no row or cell limit; databases (COPY command and FDW): no row or cell limit. == See also == Tab-separated values Comparison of data-serialization formats Delimiter-separated values Delimiter collision Flat-file database Simple Data Format Substitute character, Null character, invisible comma U+2063 == References == == Further reading == "IBM DB2 Administration Guide - LOAD, IMPORT, and EXPORT File Formats". 
IBM. Archived from the original on 2016-12-13. Retrieved 2016-12-12. (Has file descriptions of delimited ASCII (.DEL) (including comma- and semicolon-separated) and non-delimited ASCII (.ASC) files for data transfer.)
Wikipedia/Comma-separated_values
In data hierarchy, a field (data field) is a variable in a record. A record, also known as a data structure, allows logically related data to be identified by a single name. Identifying related data as a single group is central to the construction of understandable computer programs. The individual fields in a record may be accessed by name, just like any variable in a computer program. Each field in a record has two components. One component is the field's datatype declaration. The other component is the field's identifier. == Memory fields == Fields may be stored in random access memory (RAM). The following Pascal record definition has three field identifiers: firstName, lastName, and age. The two name fields have a datatype of an array of character. The age field has a datatype of integer. In Pascal, the identifier component precedes a colon, and the datatype component follows the colon. Once a record is defined, variables of the record can be allocated. Once the memory of the record is allocated, a field can be accessed like a variable by using the dot notation. The term field has been replaced with the terms data member and attribute. The following Java class has three attributes: firstName, lastName, and age. == File fields == Fields may be stored in a random access file. A file may be written to or read from in an arbitrary order. To accomplish the arbitrary access, the operating system provides a method to quickly seek around the file. Once the disk head is positioned at the beginning of a record, each file field can be read into its corresponding memory field. File fields are the main storage structure in the Indexed Sequential Access Method (ISAM). In relational database theory, the term field has been replaced with the terms column and attribute. == See also == Class variable – Variable defined in a class whose objects all possess the same copy Mutator method – Computer science method == References ==
Wikipedia/Field_(computer_science)
Systems programming, or system programming, is the activity of programming computer system software. The primary distinguishing characteristic of systems programming when compared to application programming is that application programming aims to produce software which provides services to the user directly (e.g. word processor), whereas systems programming aims to produce software and software platforms which provide services to other software, are performance constrained, or both (e.g. operating systems, computational science applications, game engines, industrial automation, and software as a service applications). Systems programming requires a great degree of hardware awareness. Its goal is to achieve efficient use of available resources, either because the software itself is performance-critical or because even small efficiency improvements directly transform into significant savings of time or money. == Overview == The following attributes characterize systems programming: The programmer can make assumptions about the hardware and other properties of the system that the program runs on, and will often exploit those properties, for example by using an algorithm that is known to be efficient when used with specific hardware. Usually a low-level programming language or programming language dialect is used so that: Programs can operate in resource-constrained environments Programs can be efficient with little runtime overhead, possibly having either a small runtime library or none at all Programs may use direct and "raw" control over memory access and control flow The programmer may write parts of the program directly in assembly language Often systems programs cannot be run in a debugger. Running the program in a simulated environment can sometimes be used to reduce this problem. In systems programming, often limited programming facilities are available. The use of automatic garbage collection is not common and debugging is sometimes hard to do. 
The runtime library, if available at all, is usually far less powerful, and does less error checking. Because of those limitations, monitoring and logging are often used; operating systems may have extremely elaborate logging subsystems. Implementing certain parts in operating systems and networking requires systems programming, for example implementing paging (virtual memory) or a device driver for an operating system. == History == Originally systems programmers invariably wrote in assembly language. Experiments with hardware support in high level languages in the late 1960s led to such languages as PL/S, BLISS, BCPL, and extended ALGOL for Burroughs large systems. Forth also has applications as a systems language. In the 1970s, C became widespread, aided by the growth of Unix. More recently a subset of C++ called Embedded C++ has seen some use, for instance it is used in the I/O Kit drivers of macOS. Engineers working at Google created Go in 2007 to address developer productivity in large distributed systems, with developer-focused features such as concurrency, garbage collection, and faster program compilation than C and C++. In 2015, Rust, a general-purpose programming language often used in systems programming, was released. Rust was designed with memory safety in mind and to be as performant as C and C++. == Alternative meaning == For historical reasons, some organizations use the term systems programmer to describe a job function which would be more accurately termed systems administrator. This is particularly true in organizations whose computer resources have historically been dominated by mainframes, although the term is even used to describe job functions which do not involve mainframes. This usage arose because administration of IBM mainframes often involved the writing of custom assembler code (IBM's Basic Assembly Language (BAL)), which integrated with operating systems such as OS/MVS, DOS/VSE or VM/CMS. 
Indeed, some IBM software products had substantial code contributions from customer programming staff. This type of programming is progressively less common, and increasingly done in C rather than assembly, but the term systems programmer is still used as the de facto job title for staff administering IBM mainframes even in cases where they do not regularly engage in systems programming activities. == See also == Ousterhout's dichotomy System programming language Scripting language Interrupt handler Computer programming == References == == Further reading == Systems Programming by John J. Donovan
Wikipedia/Systems_programming
Graph matching is the problem of finding a similarity between graphs. Graphs are commonly used to encode structural information in many fields, including computer vision and pattern recognition, and graph matching is an important tool in these areas. In these areas it is commonly assumed that the comparison is between the data graph and the model graph. The case of exact graph matching is known as the graph isomorphism problem. The problem of exact matching of a graph to a part of another graph is called the subgraph isomorphism problem. Inexact graph matching refers to matching problems where exact matching is impossible, e.g., when the numbers of vertices in the two graphs are different. In this case it is required to find the best possible match. For example, in image recognition applications, image segmentation in image processing typically produces data graphs with many more vertices than the model graphs they are expected to match against. In the case of attributed graphs, even if the numbers of vertices and edges are the same, the matching may still be only inexact. Two categories of search methods are those based on identifying possible and impossible pairings of vertices between the two graphs, and those that formulate graph matching as an optimization problem. Graph edit distance is one of the similarity measures suggested for graph matching. This class of algorithms is called error-tolerant graph matching. == See also == String matching Pattern matching == References ==
Wikipedia/Graph_matching
Design & Engineering Methodology for Organizations (DEMO) is an enterprise modelling methodology for transaction modelling, and analysing and representing business processes. It has been developed since the 1980s by Jan Dietz and others, and is inspired by the language/action perspective. == Overview == DEMO is a methodology for designing, organizing and linking organizations. Its central concept is the "communicative action": communication is considered essential for the functioning of organizations. Agreements between employees, customers and suppliers are, after all, created through communication, as is the acceptance of the results supplied. The DEMO methodology is based on the following principles: The essence of an organization is that it consists of people with authority and responsibility to act and negotiate. The modeling of business processes and information systems is a rational activity, which leads to uniformity. Models should be understandable for all concerned. Information should 'fit' its users. The DEMO methodology provides a coherent understanding of communication, information, action and organization. The scope here shifts from "Information Systems Engineering" to "Business Systems Engineering", with a clear understanding of both the information and the central organizations. == History == The DEMO methodology is inspired by the language/action perspective, which was initially developed as a philosophy of language by J. L. Austin, John Searle and Jürgen Habermas and was built on the speech act theory. The language/action perspective was introduced in the field of computer science and information systems design by Fernando Flores and Terry Winograd in the 1980s. According to Dignum and Dietz (1997) this concept has "proven to be a new basic paradigm for Information Systems Design. 
In contrast to traditional views of data flow, the language/action perspective emphasizes what people do while communicating, how they create a common reality by means of language, and how communication brings about a coordination of their activities." DEMO was developed at the Delft University of Technology by Jan Dietz in the early 1990s, and originally stood for "Dynamic Essential Modelling of Organizations". It builds on the Language Action Perspective (LAP), which is derived from the work of, among others, John Austin, John Searle and Jürgen Habermas since the 1960s. It is linked to the "Natural language Information Analysis Method" (NIAM) developed by Sjir Nijssen, and object-role modeling (ORM) further developed by Terry Halpin. In the 1990s the name was changed to "Design & Engineering Methodology for Organizations". In the new millennium Jan Dietz further elaborated DEMO into "enterprise ontology", in which the graphic notation of object-role modeling is integrated. These concepts were also developed by Dietz and others into a framework for enterprise architecture, entitled Architecture Framework (XAF). In the new millennium the French company Sogeti developed a methodology based on DEMO, called Pronto. The further development of DEMO is supported by the international Enterprise Engineering Institute, based in Delft in The Netherlands. == DEMO, topics == === Pattern of a business transaction === In DEMO the basic pattern of a business transaction is composed of the following three phases: an actagenic phase, during which a client requests a fact from the supplier agent; the action execution, which will generate the required fact; and a factagenic phase, which leads the client to accept the results reported. Basic transactions can be composed to account for complex transactions. 
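The basic transaction pattern can be illustrated with a small state machine. The sketch below is an illustration only: the act names follow the request/promise/state/accept sequence described in this article, and revocations, declines and other exception paths of the full pattern are omitted.

```python
from enum import Enum, auto

class Phase(Enum):
    INITIAL = auto()
    REQUESTED = auto()   # actagenic phase: client requests a fact
    PROMISED = auto()    # supplier promises to produce it
    STATED = auto()      # execution done; supplier states the result
    ACCEPTED = auto()    # factagenic phase: client accepts; the result is a fact

# happy path of the basic transaction pattern (exceptions omitted)
_TRANSITIONS = {
    (Phase.INITIAL, "request"): Phase.REQUESTED,
    (Phase.REQUESTED, "promise"): Phase.PROMISED,
    (Phase.PROMISED, "state"): Phase.STATED,
    (Phase.STATED, "accept"): Phase.ACCEPTED,
}

def perform(phase, act):
    """Apply a coordination act, enforcing the pattern's ordering."""
    if (phase, act) not in _TRANSITIONS:
        raise ValueError(f"{act!r} is not allowed in phase {phase.name}")
    return _TRANSITIONS[(phase, act)]

phase = Phase.INITIAL
for act in ("request", "promise", "state", "accept"):
    phase = perform(phase, act)
assert phase is Phase.ACCEPTED
```

A business process, as a chain of transactions, would compose several such machines, with one transaction's execution step triggering the request of another.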
The DEMO methodology gives the analyst an understanding of the business processes of the organization, as well as the agents involved, but is less clear about pragmatic aspects of the transaction, such as the conversation structure and the intentions generated in each agent's mind. === Abstraction levels === DEMO assumes that an organization consists of three integrated layers: the B-organization, I-organization and D-organization. The B-organization, or business layer, is according to DEMO the essence of the organization, regardless of how it is implemented. Understanding the business layer is the right starting point in setting up an organization, including the software to support its business processes. This vision leads to a division into three perspectives or levels of abstraction: essential: the business system or B-system; informational: the information system or I-system; documental: the data system or D-system. Each level has its own category of systems that are "active" at that level: B-systems (business), I-systems (information) and D-systems (documents and data). The main focus in DEMO is on the essential level; the other two are therefore discussed in less detail. === The ontological model of an organization === The ontological model of an organisation in DEMO-3 consists of the integrated whole of four aspect models, each taking a specific view on the organisation: Construction Model (CM) Process Model (PM) Action Model (AM), and Fact Model (FM) There are two ways of representing these aspect models: graphically, in diagrams and tables, and textually, in DEMOSL. Construction Model The Construction Model (CM) of an organisation is the ontological model of its construction: the composition (the internal actor roles, i.e. the actor roles within the border of the organisation), the environment (i.e. 
the actor roles outside the border of the organisation that have interaction with internal actor roles), the interaction structure (i.e. the transaction kinds between the actor roles in the composition, and between these and the actor roles in the environment), and the interstriction structure (i.e. the information links from actor roles in the composition to internal transaction kinds and to external transaction kinds). The CM of an organisation is represented in an Organisation Construction Diagram (OCD), a Transaction Product Table (TPT), and a Bank Contents Table (BCT). Process Model The Process Model (PM) of an organisation is the ontological model of the state space and the transition space of its coordination world. Regarding the state space, the PM contains, for all internal and border transaction kinds, the process steps and the existence laws that apply, according to the complete transaction pattern. Regarding the transition space, the PM contains the coordination event kinds as well as the applicable occurrence laws, including the cardinalities of the occurrences. The occurrence laws within a transaction process are fully determined by the complete transaction pattern. Therefore, a PSD contains only the occurrence laws between transaction processes, expressed in links between process steps. There are two kinds: response links and waiting links. A PM is represented in a Process Structure Diagram (PSD), and a Transaction Pattern Diagram (TPD) for each transaction kind. In these diagrams it is indicated which ‘exceptions’ will be dealt with. Action Model The Action Model (AM) of an organisation consists of a set of action rules. There is an action rule for every agendum kind for every internal actor role. The agendum kinds are determined by the TPDs of the identified transaction kinds (see PM). An action rule consists of an event part (the event(s) to respond to), an assess part (the facts to be inspected), and a response part (the act(s) to be performed). 
An AM is represented in Action Rule Specifications (ARS) and Work Instruction Specifications (WIS). ==== Fact Model ==== The Fact Model (FM) of an organisation is the ontological model of the state space and the transition space of its production world. Regarding the state space, the FM contains all identified fact kinds (both declared and derived), and the existence laws. Three kinds of existence laws are specified graphically: reference laws, unicity laws, and dependency laws; the other ones are specified textually. Regarding the transition space, the FM contains the production event kinds (results of transactions) as well as the applicable occurrence laws. The transition space of the production world is completely determined by the transition space of its coordination world. Yet it may be illustrative to show the implied occurrence laws in an OFD. The FM is represented in an Object Fact Diagram (OFD), possibly complemented by Derived Fact Specifications and Existence Law Specifications. === Operation principle === Somebody starts a communication with a request that someone else create a desired result. The person responsible for the result can respond with a promise and, when the work has been done (the execution has taken place), can state that the desired result has been achieved. If this result is accepted by the person who asked for it, then this is a fact. This pattern of communication between two people is called a DEMO transaction. A chain of transactions is called a business process in DEMO. The result of a transaction can be specified in DEMO as a fact type, using object-role modeling (ORM). == Support tools == The Dutch company Essmod developed the "Essential Business Modeler" tool based on DEMO; it was acquired in 2008 by Mprise, after which it was renamed Xemod. DEMO is also supported in the open source world by the architecture tool Open Modeling.
There is also a free online modeling tool for DEMO, Model World, which keeps models in an online repository and supports multi-user work. The tool is platform-independent and runs in a web browser without downloading or installing software. == See also == Business process modeling Enterprise engineering Ontology (information science) Software engineering == References == == Further reading == Jan L.G. Dietz (1999). "DEMO: Towards a discipline of organisation engineering". European Journal of Operational Research, 1999. Jan L.G. Dietz (2006). Enterprise Ontology - Theory and Methodology. Springer-Verlag Berlin Heidelberg. Mulder, J.B.F. (2006). Rapid Enterprise Design. PhD Thesis, Delft University of Technology. VIAgroep Rijswijk. Op 't Land, M. (2008). Applying Architecture and Ontology to the Splitting and Allying of Enterprises. PhD Thesis, Delft University of Technology. Oren, E. (2003). Van DEMO naar workflow management [From DEMO to workflow management]. Delft University of Technology. == External links == Enterprise Engineering Institute website
Wikipedia/Design_&_Engineering_Methodology_for_Organizations
The GRAI method, short for Graphs with Results and Actions Inter-related, and the further developed GRAI Integrated Methodology (GIM) are enterprise modelling methods. The GRAI method was first proposed by Guy Doumeingts in his 1984 PhD thesis, entitled La Méthode GRAI, further developed at the GRAI/LAP (Laboratory of Automation and Productics) of the University of Bordeaux, and followed by GRAI/GIM by Doumeingts and others in 1992. The GRAI method can represent and analyze the operation of all or part of a production activity. The strength of the GRAI method lies in its ability to model the decision-making system of the company, i.e. the organizational processes that generate decisions. The GRAI methodology incorporates four types of views: the functional view, the physical view, the decisional view and the informational systems view. == References == == Further reading == Chen, D., and G. Doumeingts. "The GRAI-GIM reference model, architecture and methodology." Architectures for Enterprise Integration. Springer US, 1996. 102-126. Chen, David, Bruno Vallespir, and Guy Doumeingts. "GRAI integrated methodology and its mapping onto generic enterprise reference architecture and methodology." Computers in Industry 33.2 (1997): 387-394. Doumeingts, Guy, Bruno Vallespir, and David Chen. "GRAI grid decisional modelling." Handbook on Architectures of Information Systems. Springer Berlin Heidelberg, 1998. 313-337. Doumeingts, Guy. "How to decentralize decisions through GRAI model in production management." Computers in Industry 6.6 (1985): 501-514. Girard, Philippe, and Guy Doumeingts. "GRAI-Engineering: a method to model, design and run engineering design departments." International Journal of Computer Integrated Manufacturing 17.8 (2004): 716-732. == External links == The GRAI Method Part 1: global modelling by B. Vallespir, G. Doumeingts The GRAI method Part 2: detailed modeling and methodological issues by B. Vallespir, G.
Doumeingts
Wikipedia/GRAI_method
An information model in software engineering is a representation of concepts and the relationships, constraints, rules, and operations to specify data semantics for a chosen domain of discourse. Typically it specifies relations between kinds of things, but may also include relations with individual things. It can provide sharable, stable, and organized structure of information requirements or knowledge for the domain context. == Overview == The term information model in general is used for models of individual things, such as facilities, buildings, process plants, etc. In those cases, the concept is specialised to facility information model, building information model, plant information model, etc. Such an information model is an integration of a model of the facility with the data and documents about the facility. Within the field of software engineering and data modeling, an information model is usually an abstract, formal representation of entity types that may include their properties, relationships and the operations that can be performed on them. The entity types in the model may be kinds of real-world objects, such as devices in a network, or occurrences, or they may themselves be abstract, such as for the entities used in a billing system. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations. An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they are object models (e.g. using UML), entity relationship models or XML schemas. == Information modeling languages == In 1976, an entity-relationship (ER) graphic notation was introduced by Peter Chen. 
He stressed that it was a "semantic" modelling technique and independent of any database modelling techniques such as Hierarchical, CODASYL, Relational etc. Since then, languages for information models have continued to evolve. Some examples are the Integrated Definition Language 1 Extended (IDEF1X), the EXPRESS language and the Unified Modeling Language (UML). Research by contemporaries of Peter Chen such as J.-R. Abrial (1974) and G.M. Nijssen (1976) led to today's Fact Oriented Modeling (FOM) languages, which are based on linguistic propositions rather than on "entities". FOM tools can be used to generate an ER model, which means that the modeler can avoid the time-consuming and error-prone practice of manual normalization. The Object-Role Modeling language (ORM) and Fully Communication Oriented Information Modeling (FCO-IM) are both research results developed in the early 1990s, based upon earlier research. In the 1980s there were several approaches to extend Chen’s Entity Relationship Model. Also important in this decade is REMORA by Colette Rolland. The ICAM Definition (IDEF) Language was developed from the U.S. Air Force ICAM Program during the 1976 to 1982 timeframe. The objective of the ICAM Program, according to Lee (1999), was to increase manufacturing productivity through the systematic application of computer technology. IDEF includes three different modeling methods: IDEF0, IDEF1, and IDEF2 for producing a functional model, an information model, and a dynamic model respectively. IDEF1X is an extended version of IDEF1. The language is in the public domain. It is a graphical representation and is designed using the ER approach and relational theory. It is used to represent the “real world” in terms of entities, attributes, and relationships between entities. Normalization is enforced by KEY Structures and KEY Migration. The language identifies property groupings (Aggregation) to form complete entity definitions.
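The key-migration idea described above can be made concrete in relational terms: the primary key of a parent entity migrates into a child entity as a foreign key. The sketch below is only an illustration of that idea (the Department/Employee entities are hypothetical, not taken from the IDEF1X standard), using SQLite from the Python standard library:

```python
# Sketch of IDEF1X-style "key migration" using SQLite.
# The parent's primary key (dept_id) migrates into the child
# entity Employee as a foreign key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce the migrated key
conn.execute("CREATE TABLE department (dept_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE employee (
    emp_id  INTEGER PRIMARY KEY,
    name    TEXT,
    dept_id INTEGER NOT NULL REFERENCES department(dept_id)  -- migrated key
)""")
conn.execute("INSERT INTO department VALUES (1, 'Modeling')")
conn.execute("INSERT INTO employee VALUES (10, 'Ada', 1)")

# Joining on the migrated key recovers the relationship:
row = conn.execute("""SELECT e.name, d.name FROM employee e
                      JOIN department d ON e.dept_id = d.dept_id""").fetchone()
print(row)  # ('Ada', 'Modeling')
```

The `REFERENCES` clause plays the role of the migrated key: the child cannot exist without a valid parent, which is how normalization is enforced structurally rather than by convention.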
EXPRESS was created as ISO 10303-11 for formally specifying the information requirements of product data models. It is part of a suite of standards informally known as the STandard for the Exchange of Product model data (STEP). It was first introduced in the early 1990s. The language, according to Lee (1999), is a textual representation. In addition, a graphical subset of EXPRESS called EXPRESS-G is available. EXPRESS is based on programming languages and the O-O paradigm. A number of languages have contributed to EXPRESS, in particular Ada, Algol, C, C++, Euler, Modula-2, Pascal, PL/1, and SQL. EXPRESS consists of language elements that allow an unambiguous object definition and specification of constraints on the objects defined. It uses SCHEMA declarations to provide partitioning, and it supports specification of data properties, constraints, and operations. UML is a modeling language for specifying, visualizing, constructing, and documenting the artifacts, rather than processes, of software systems. It was conceived originally by Grady Booch, James Rumbaugh, and Ivar Jacobson. UML was approved by the Object Management Group (OMG) as a standard in 1997. The language, according to Lee (1999), is non-proprietary and is available to the public. It is a graphical representation. The language is based on the object-oriented paradigm. UML contains notations and rules and is designed to represent data requirements in terms of O-O diagrams. UML organizes a model in a number of views that present different aspects of a system. The contents of a view are described in diagrams that are graphs with model elements. A diagram contains model elements that represent common O-O concepts such as classes, objects, messages, and relationships among these concepts. IDEF1X, EXPRESS, and UML can all be used to create a conceptual model and, according to Lee (1999), each has its own characteristics.
Although some may lead to a natural usage (e.g., implementation), one is not necessarily better than another. In practice, it may require more than one language to develop all information models when an application is complex. In fact, the modeling practice is often more important than the language chosen. Information models can also be expressed in formalized natural languages, such as Gellish. Gellish, which has natural language variants Gellish Formal English, Gellish Formal Dutch (Gellish Formeel Nederlands), etc. is an information representation language or modeling language that is defined in the Gellish smart Dictionary-Taxonomy, which has the form of a Taxonomy/Ontology. A Gellish Database is not only suitable to store information models, but also knowledge models, requirements models and dictionaries, taxonomies and ontologies. Information models in Gellish English use Gellish Formal English expressions. For example, a geographic information model might consist of a number of Gellish Formal English expressions, such as: - the Eiffel tower <is located in> Paris - Paris <is classified as a> city whereas information requirements and knowledge can be expressed for example as follows: - tower <shall be located in a> geographical area - city <is a kind of> geographical area Such Gellish expressions use names of concepts (such as 'city') and relation types (such as ⟨is located in⟩ and ⟨is classified as a⟩) that should be selected from the Gellish Formal English Dictionary-Taxonomy (or of your own domain dictionary). The Gellish English Dictionary-Taxonomy enables the creation of semantically rich information models, because the dictionary contains definitions of more than 40000 concepts, including more than 600 standard relation types. Thus, an information model in Gellish consists of a collection of Gellish expressions that use those phrases and dictionary concepts to express facts or make statements, queries and answers. 
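The subject–relation–object shape of such Gellish expressions can be sketched in a few lines of code. The following is an illustrative representation only, reusing the example expressions above (Gellish defines its own exchange formats; the `facts` list and `query` helper here are hypothetical):

```python
# Sketch: Gellish-style expressions stored as
# (left object, relation type, right object) triples.
facts = [
    ("the Eiffel tower", "is located in", "Paris"),
    ("Paris", "is classified as a", "city"),
    ("city", "is a kind of", "geographical area"),
]

def query(relation, obj):
    """Return all left-hand objects related to `obj` by `relation`."""
    return [s for (s, r, o) in facts if r == relation and o == obj]

print(query("is located in", "Paris"))  # ['the Eiffel tower']
```

In a real Gellish database the relation types ("is located in", "is a kind of") would be drawn from the Dictionary-Taxonomy rather than being free-form strings, which is what makes the expressions semantically well-defined.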
== Standard sets of information models == The Distributed Management Task Force (DMTF) provides a standard set of information models for various enterprise domains under the general title of the Common Information Model (CIM). Specific information models are derived from CIM for particular management domains. As another example, the TeleManagement Forum (TMF) has defined an advanced model for the telecommunication domain, the Shared Information/Data model (SID). This includes views from the business, service and resource domains within the telecommunication industry. The TMF has established a set of principles that an OSS integration should adopt, along with a set of models that provide standardized approaches. The models interact with the information model (the SID) via a process model (the Business Process Framework, or eTOM) and a life cycle model. == See also == Building information modeling Concept map Conceptual model (computer science) System information modelling == Notes == == References == ISO/IEC TR9007 Conceptual Schema, 1986 Andries van Renssen, Gellish, A Generic Extensible Ontological Language (PhD, Delft University of Technology, 2005) This article incorporates public domain material from the National Institute of Standards and Technology == Further reading == Richard Veryard (1992). Information modelling: practical guidance. New York: Prentice Hall. Repa, Vaclav (2012). Information Modeling of Organizations. Bruckner Publishing. ISBN 978-80-904661-3-5. Berner, Stefan (2019). Information modelling, A method for improving understanding and accuracy in your collaboration. vdf Zurich. ISBN 978-3-7281-3943-6. == External links == RFC 3198 – Terminology for Policy-Based Management
Wikipedia/Information_Modelling
In software engineering, structured analysis (SA) and structured design (SD) are methods for analyzing business requirements and developing specifications for converting practices into computer programs, hardware configurations, and related manual procedures. Structured analysis and design techniques are fundamental tools of systems analysis. They developed from classical systems analysis of the 1960s and 1970s. == Objectives of structured analysis == Structured analysis became popular in the 1980s and is still in use today. Structured analysis consists of interpreting the system concept (or real world situations) into data and control terminology represented by data flow diagrams. The flow of data and control from bubble to data store to bubble can be difficult to track, and the number of bubbles can grow large. One approach is to first define the events from the outside world that require the system to react, then assign a bubble to each event. Bubbles that need to interact are then connected until the system is defined. Bubbles are usually grouped into higher level bubbles to decrease complexity. Data dictionaries are needed to describe the data and command flows, and a process specification is needed to capture the transaction/transformation information. SA and SD are displayed with structure charts, data flow diagrams and data model diagrams, of which there were many variations, including those developed by Tom DeMarco, Ken Orr, Larry Constantine, Vaughn Frick, Ed Yourdon, Steven Ward, Peter Chen, and others. These techniques were combined in various published system development methodologies, including structured systems analysis and design method, profitable information by design (PRIDE), Nastec structured analysis & design, SDM/70 and the Spectrum structured system development methodology.
== History == Structured analysis is part of a series of structured methods that represent a collection of analysis, design, and programming techniques that were developed in response to the problems facing the software world from the 1960s to the 1980s. In this timeframe most commercial programming was done in Cobol and Fortran, then C and BASIC. There was little guidance on "good" design and programming techniques, and there were no standard techniques for documenting requirements and designs. Systems were getting larger and more complex, and information system development became harder and harder to do. As a way to help manage large and complex software, the following structured methods emerged from the end of the 1960s: Structured programming in circa 1967 with Edsger Dijkstra - "Go To Statement Considered Harmful" Stepwise design in 1971 with Niklaus Wirth Nassi–Shneiderman diagram in 1972 Warnier/Orr diagram in 1974 - "Logical Construction of Programs" HIPO in 1974 - IBM hierarchy input-process-output (though this should really be output-input-process) Structured design around 1975 with Larry Constantine, Ed Yourdon and Wayne Stevens. Jackson structured programming in circa 1975 developed by Michael A. Jackson Structured analysis in circa 1978 with Tom DeMarco, Edward Yourdon, Gane & Sarson, McMenamin & Palmer. Structured analysis and design technique (SADT) developed by Douglas T. Ross Yourdon structured method developed by Edward Yourdon. Structured analysis and system specification published in 1978 by Tom DeMarco. Structured systems analysis and design method (SSADM), first presented in 1983, developed by the UK Office of Government Commerce. Essential Systems Analysis, proposed by Stephen M. McMenamin and John F. Palmer IDEF0 based on SADT, developed by Douglas T. Ross in 1985. Hatley-Pirbhai modeling, defined in "Strategies for Real-Time System Specification" by Derek J. Hatley and Imtiaz A. Pirbhai in 1988.
Modern Structured Analysis, developed by Edward Yourdon, after Essential System Analysis was published, and published in 1989. Information technology engineering in circa 1990 with Finkelstein and popularised by James Martin. According to Hay (1999) "information engineering was a logical extension of the structured techniques that were developed during the 1970s. Structured programming led to structured design, which in turn led to structured systems analysis. These techniques were characterized by their use of diagrams: structure charts for structured design, and data flow diagrams for structured analysis, both to aid in communication between users and developers, and to improve the analyst's and the designer's discipline. During the 1980s, tools began to appear which both automated the drawing of the diagrams, and kept track of the things drawn in a data dictionary". After the example of computer-aided design and computer-aided manufacturing (CAD/CAM), the use of these tools was named computer-aided software engineering (CASE). == Structured analysis topics == === Single abstraction mechanism === Structured analysis typically creates a hierarchy employing a single abstraction mechanism. The structured analysis method can employ IDEF (see figure), is process driven, and starts with a purpose and a viewpoint. This method identifies the overall function and iteratively divides functions into smaller functions, preserving inputs, outputs, controls, and mechanisms necessary to optimize processes. Also known as a functional decomposition approach, it focuses on cohesion within functions and coupling between functions leading to structured data. The functional decomposition of the structured method describes the process without delineating system behavior and dictates system structure in the form of required functions. The method identifies inputs and outputs as related to the activities. 
One reason for the popularity of structured analysis is its intuitive ability to communicate high-level processes and concepts, whether at the single-system or enterprise level. However, it is unclear how the functions in such a decomposition map onto the objects of commercially prevalent object-oriented development. In contrast to IDEF, UML is interface driven, with multiple abstraction mechanisms useful in describing service-oriented architectures (SOAs). === Approach === Structured analysis views a system from the perspective of the data flowing through it. The function of the system is described by processes that transform the data flows. Structured analysis takes advantage of information hiding through successive decomposition (or top-down) analysis. This allows attention to be focused on pertinent details and avoids confusion from looking at irrelevant details. As the level of detail increases, the breadth of information is reduced. The result of structured analysis is a set of related graphical diagrams, process descriptions, and data definitions. They describe the transformations that need to take place and the data required to meet a system's functional requirements. De Marco's approach consists of the following objects (see figure): Context diagram Data flow diagram Process specifications Data dictionary Hereby the data flow diagrams (DFDs) are directed graphs. The arcs represent data, and the nodes (circles or bubbles) represent processes that transform the data. A process can be further decomposed into a more detailed DFD which shows the subprocesses and data flows within it. The subprocesses can in turn be decomposed further with another set of DFDs until their functions can be easily understood. Functional primitives are processes which do not need to be decomposed further. Functional primitives are described by a process specification (or mini-spec). The process specification can consist of pseudo-code, flowcharts, or structured English.
The DFDs model the structure of the system as a network of interconnected processes composed of functional primitives. The data dictionary is a set of entries (definitions) of data flows, data elements, files, and databases. The data dictionary entries are partitioned in a top-down manner. They can be referenced in other data dictionary entries and in data flow diagrams. === Context diagram === Context diagrams are diagrams that represent the actors outside a system that could interact with that system. This diagram is the highest level view of a system, similar to a block diagram, showing a, possibly software-based, system as a whole and its inputs and outputs from/to external factors. This type of diagram according to Kossiakoff (2003) usually "pictures the system at the center, with no details of its interior structure, surrounded by all its interacting systems, environment and activities. The objective of a system context diagram is to focus attention on external factors and events that should be considered in developing a complete set of system requirements and constraints". System context diagrams are related to data flow diagrams, and show the interactions between a system and the other actors which the system is designed to face. System context diagrams can be helpful in understanding the context in which the system will operate. === Data dictionary === A data dictionary or database dictionary is a file that defines the basic organization of a database. A database dictionary contains a list of all files in the database, the number of records in each file, and the names and types of each data field. Most database management systems keep the data dictionary hidden from users to prevent them from accidentally destroying its contents. Data dictionaries do not contain any actual data from the database, only bookkeeping information for managing it.
Without a data dictionary, however, a database management system cannot access data from the database. Database users and application developers can benefit from an authoritative data dictionary document that catalogs the organization, contents, and conventions of one or more databases. This typically includes the names and descriptions of various tables and fields in each database, plus additional details, like the type and length of each data element. There is no universal standard as to the level of detail in such a document, but it is primarily a distillation of metadata about database structure, not the data itself. A data dictionary document also may include further information describing how data elements are encoded. One of the advantages of well-designed data dictionary documentation is that it helps to establish consistency throughout a complex database, or across a large collection of federated databases. === Data flow diagrams === A data flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. It differs from the system flowchart as it shows the flow of data through processes instead of computer hardware. Data flow diagrams were invented by Larry Constantine, developer of structured design, based on Martin and Estrin's "data flow graph" model of computation. It is common practice to draw a system context diagram first which shows the interaction between the system and outside entities. The DFD is designed to show how a system is divided into smaller portions and to highlight the flow of data between those parts. This context-level data flow diagram is then "exploded" to show more detail of the system being modeled. Data flow diagrams (DFDs) are one of the three essential perspectives of structured systems analysis and design method (SSADM). The sponsor of a project and the end users will need to be briefed and consulted throughout all stages of a system's evolution. 
With a data flow diagram, users are able to visualize how the system will operate, what the system will accomplish, and how the system will be implemented. The old system's data flow diagrams can be drawn up and compared with the new system's data flow diagrams to draw comparisons to implement a more efficient system. Data flow diagrams can be used to provide the end user with a physical idea of where the data they input ultimately has an effect upon the structure of the whole system from order to dispatch to recook. How any system is developed can be determined through a data flow diagram. === Structure chart === A structure chart (SC) is a chart that shows the breakdown of the configuration system to the lowest manageable levels. This chart is used in structured programming to arrange the program modules in a tree structure. Each module is represented by a box which contains the name of the modules. The tree structure visualizes the relationships between the modules. Structure charts are used in structured analysis to specify the high-level design, or architecture, of a computer program. As a design tool, they aid the programmer in dividing and conquering a large software problem, that is, recursively breaking a problem down into parts that are small enough to be understood by a human brain. The process is called top-down design, or functional decomposition. Programmers use a structure chart to build a program in a manner similar to how an architect uses a blueprint to build a house. In the design stage, the chart is drawn and used as a way for the client and the various software designers to communicate. During the actual building of the program (implementation), the chart is continually referred to as the master-plan. === Structured design === Structured design (SD) is concerned with the development of modules and the synthesis of these modules in a so-called "module hierarchy". 
In order to design an optimal module structure and interfaces, two principles are crucial: cohesion, which is "concerned with the grouping of functionally related processes into a particular module", and coupling, which relates to "the flow of information or parameters passed between modules. Optimal coupling reduces the interfaces of modules and the resulting complexity of the software". Structured design was developed by Larry Constantine in the late 1960s, then refined and published with collaborators in the 1970s; see Larry Constantine: structured design for details. Page-Jones (1980) has proposed his own approach, which consists of three main objects: structure charts, module specifications, and a data dictionary. The structure chart aims to show "the module hierarchy or calling sequence relationship of modules. There is a module specification for each module shown on the structure chart. The module specifications can be composed of pseudo-code or a program design language. The data dictionary is like that of structured analysis. At this stage in the software development lifecycle, after analysis and design have been performed, it is possible to automatically generate data type declarations", and procedure or subroutine templates.
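A structure chart of the kind described above is essentially a tree whose nodes are modules and whose edges are calling relationships. A minimal sketch (the module names are hypothetical, not from Page-Jones) that renders such a module hierarchy as an indented listing:

```python
# Sketch: a structure chart as a tree of modules,
# printed top-down with indentation showing the calling hierarchy.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    calls: list = field(default_factory=list)  # modules this one invokes

def show(module, depth=0):
    """Return the module hierarchy as indented lines, top-down."""
    lines = [("  " * depth) + module.name]
    for callee in module.calls:
        lines.extend(show(callee, depth + 1))
    return lines

chart = Module("process_order", [
    Module("validate_order", [Module("check_stock")]),
    Module("dispatch_order"),
])
print("\n".join(show(chart)))
```

The recursive `show` mirrors the top-down design process itself: each module is understood in terms of the smaller modules it calls, down to leaves small enough to implement directly.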
== Criticisms == Problems with data flow diagrams have included the following: Choosing bubbles appropriately Partitioning bubbles in a meaningful and mutually agreed upon manner The documentation size needed to understand the data flows Data flow diagrams are strongly functional in nature and thus subject to frequent change Though "data" flow is emphasized, "data" modeling is not, so there is little understanding of the subject matter of the system Customers have difficulty following how the concept is mapped into data flows and bubbles Designers must shift the DFD organization into an implementable format == See also == Event partitioning Flow-based programming HIPO Jackson structured programming Prosa Structured Analysis Tool Soft systems methodology == References == == Further reading == Stevens, W. P.; Myers, G. J.; Constantine, L. L. (June 1974). "Structured design". IBM Systems Journal. 13 (2): 115–139. doi:10.1147/sj.132.0115. Yourdon, Edward; Constantine, Larry L. (1979) [1975]. Structured Design: Fundamentals of a Discipline of Computer Program and Systems Design. Yourdon Press. ISBN 0-13-854471-9. Tom DeMarco (1978). Structured Analysis and System Specification. Yourdon. ISBN 0-91-707207-3 Page-Jones, M (1980), The Practical Guide to Structured Systems Design, New York: Yourdon Press Derek J. Hatley, Imtiaz A. Pirbhai (1988). Strategies for Real Time System Specification. John Wiley and Sons Ltd. ISBN 0-932633-04-8 Stephen J. Mellor and Paul T. Ward (1986). Structured Development for Real-Time Systems: Implementation Modeling Techniques: 003. Prentice Hall. ISBN 0-13-854803-X Edward Yourdon (1989). Modern Structured Analysis, Yourdon Press Computing Series, 1989, ISBN 0-13-598624-9 Keith Edwards (1993). Real-Time Structured Methods, System Analysis. Wiley. ISBN 0-471-93415-1 == External links == Structured Analysis Wiki Three views of structured analysis CRaG Systems, 2004.
Wikipedia/Structured_Design
The IS–LM model, or Hicks–Hansen model, is a two-dimensional macroeconomic model which is used as a pedagogical tool in macroeconomic teaching. The IS–LM model shows the relationship between interest rates and output in the short run in a closed economy. The intersection of the "investment–saving" (IS) and "liquidity preference–money supply" (LM) curves illustrates a "general equilibrium" where supposed simultaneous equilibria occur in both the goods and the money markets. The IS–LM model shows the importance of various demand shocks (including the effects of monetary policy and fiscal policy) on output and consequently offers an explanation of changes in national income in the short run when prices are fixed or sticky. Hence, the model can be used as a tool to suggest potential levels for appropriate stabilisation policies. It is also used as a building block for the demand side of the economy in more comprehensive models like the AD–AS model. The model was developed by John Hicks in 1937 and was later extended by Alvin Hansen as a mathematical representation of Keynesian macroeconomic theory. Between the 1940s and mid-1970s, it was the leading framework of macroeconomic analysis. Today, it is generally accepted as being imperfect and is largely absent from teaching at advanced economic levels and from macroeconomic research, but it is still an important pedagogical introductory tool in most undergraduate macroeconomics textbooks. As monetary policy since the 1980s and 1990s generally does not try to target money supply as assumed in the original IS–LM model, but instead targets interest rate levels directly, some modern versions of the model have changed the interpretation (and in some cases even the name) of the LM curve, presenting it instead simply as a horizontal line showing the central bank's choice of interest rate. This allows for a simpler dynamic adjustment and supposedly reflects the behaviour of actual contemporary central banks more closely. 
== History == The IS–LM model was introduced at a conference of the Econometric Society held in Oxford during September 1936. Roy Harrod, John R. Hicks, and James Meade all presented papers describing mathematical models attempting to summarize John Maynard Keynes' General Theory of Employment, Interest, and Money. Hicks, who had seen a draft of Harrod's paper, invented the IS–LM model (originally using the abbreviation "LL", not "LM"). He later presented it in "Mr. Keynes and the Classics: A Suggested Interpretation". Hicks and Alvin Hansen developed the model further in the 1930s and early 1940s, Hansen extending the earlier contribution. The model became a central tool of macroeconomic teaching for many decades. Between the 1940s and mid-1970s, it was the leading framework of macroeconomic analysis. It was particularly suited to illustrate the debate of the 1960s and 1970s between Keynesians and monetarists as to whether fiscal or monetary policy was most effective to stabilize the economy. Later, this issue faded from focus and came to play only a modest role in discussions of short-run fluctuations. The IS-LM model assumes a fixed price level and consequently cannot in itself be used to analyze inflation. This was of little importance in the 1950s and early 1960s when inflation was not an important issue, but became problematic with the rising inflation levels in the late 1960s and 1970s, which led to extensions of the model to also incorporate aggregate supply in some form, e.g. in the form of the AD–AS model, which can be regarded as an IS-LM model with an added supply side explaining rises in the price level. One of the basic assumptions of the IS-LM model is that the central bank targets the money supply.
However, a fundamental rethinking in central bank policy took place from the early 1990s when central banks generally changed strategies towards targeting inflation rather than money growth and using an interest rate rule to achieve their goal. As central banks started paying little attention to the money supply when deciding on their policy, this model feature became increasingly unrealistic and sometimes confusing to students. David Romer in 2000 suggested replacing the traditional IS-LM framework with an IS-MP model, replacing the positively sloped LM curve with a horizontal MP curve (where MP stands for "monetary policy"). He argued that it had several advantages compared to the traditional IS-LM model. John B. Taylor independently made a similar recommendation in the same year. After 2000, this has led to various modifications to the model in many textbooks, replacing the traditional LM curve and the story of the central bank influencing the interest rate level indirectly via controlling the supply of money in the money market with a more realistic one of the central bank determining the policy interest rate directly as an exogenous variable. Today, the IS-LM model is largely absent from macroeconomic research, but it is still a backbone conceptual introductory tool in many macroeconomics textbooks. == Formation == The point where the IS and LM schedules intersect represents a short-run equilibrium in the real and monetary sectors (though not necessarily in other sectors, such as labor markets): both the product market and the money market are in equilibrium. This equilibrium yields a unique combination of the interest rate and real GDP. === IS (investment–saving) curve === The IS curve shows the causation from interest rates to planned investment to national income and output. For the investment–saving curve, the independent variable is the interest rate and the dependent variable is the level of income.
The IS curve is drawn as downward-sloping with the interest rate r on the vertical axis and GDP (gross domestic product: Y) on the horizontal axis. The IS curve represents the locus where total spending (consumer spending + planned private investment + government purchases + net exports) equals total output (real income, Y, or GDP). The IS curve also represents the equilibria where total private investment equals total saving, with saving equal to consumer saving plus government saving (the budget surplus) plus foreign saving (the trade surplus). The level of real GDP (Y) is determined along this line for each interest rate. Every level of the real interest rate will generate a certain level of investment and spending: lower interest rates encourage higher investment and more spending. The multiplier effect of an increase in fixed investment resulting from a lower interest rate raises real GDP. This explains the downward slope of the IS curve. In summary, the IS curve shows the causation from interest rates to planned fixed investment to rising national income and output. The IS curve is defined by the equation Y = C(Y − T(Y)) + I(r) + G + NX(Y), where Y represents income, C(Y − T(Y)) represents consumer spending increasing as a function of disposable income (income, Y, minus taxes, T(Y), which themselves depend positively on income), I(r) represents business investment decreasing as a function of the real interest rate, G represents government spending, and NX(Y) represents net exports (exports minus imports) decreasing as a function of income (decreasing because imports are an increasing function of income). === LM (liquidity-money) curve === The LM curve shows the combinations of interest rates and levels of real income for which the money market is in equilibrium. It shows where money demand equals money supply.
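The IS relation above can be illustrated numerically. The sketch below assumes simple linear functional forms and hypothetical parameter values (none of which come from the text): C(Yd) = c0 + c1·Yd, T(Y) = t·Y, I(r) = i0 − i1·r, and NX(Y) = x0 − m·Y, so the equation can be solved for Y in closed form via the Keynesian multiplier.

```python
# IS curve sketch: solve Y = C(Y - T(Y)) + I(r) + G + NX(Y) for Y.
# Linear forms and all parameter values are illustrative assumptions:
#   C(Yd) = c0 + c1*Yd,  T(Y) = t*Y,  I(r) = i0 - i1*r,  NX(Y) = x0 - m*Y
def is_output(r, c0=200.0, c1=0.6, t=0.25, i0=300.0, i1=40.0, G=400.0,
              x0=100.0, m=0.1):
    """Output Y on the IS curve at real interest rate r."""
    autonomous = c0 + i0 - i1 * r + G + x0      # spending independent of Y
    multiplier = 1.0 / (1.0 - c1 * (1.0 - t) + m)
    return autonomous * multiplier

# Lower interest rates raise investment and hence output along the IS curve,
# which is exactly the downward slope described in the text:
assert is_output(2.0) > is_output(5.0)
```

The multiplier term 1/(1 − c1(1 − t) + m) captures the feedback from income to consumption (dampened by taxes) and the leakage through imports.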
For the LM curve, the independent variable is income and the dependent variable is the interest rate. In the money market equilibrium diagram, the liquidity preference function is the willingness to hold cash. The liquidity preference function is downward sloping (i.e. the willingness to hold cash increases as the interest rate decreases). Two basic elements determine the quantity of cash balances demanded: Transactions demand for money: this includes both (a) the willingness to hold cash for everyday transactions and (b) a precautionary measure (money demand in case of emergencies). Transactions demand is positively related to real GDP. As GDP is considered exogenous to the liquidity preference function, changes in GDP shift the curve. Speculative demand for money: this is the willingness to hold cash instead of securities as an asset for investment purposes. Speculative demand is inversely related to the interest rate. As the interest rate rises, the opportunity cost of holding money rather than investing in securities increases. So, as interest rates rise, speculative demand for money falls. Money supply is determined by central bank decisions and willingness of commercial banks to loan money. Money supply in effect is perfectly inelastic with respect to nominal interest rates. Thus the money supply function is represented as a vertical line – money supply is a constant, independent of the interest rate, GDP, and other factors. Mathematically, the LM curve is defined by the equation M/P = L(i, Y), where the supply of money is represented as the real amount M/P (as opposed to the nominal amount M), with P representing the price level, and L being the real demand for money, which is some function of the interest rate i and the level of real income Y. An increase in GDP shifts the liquidity preference function rightward and hence increases the interest rate. Thus the LM function is positively sloped.
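The money-market equilibrium M/P = L(i, Y) can likewise be sketched under an assumed linear money-demand function L(i, Y) = k·Y − h·i (a common textbook simplification; the parameter values here are illustrative, not from the text):

```python
# LM curve sketch: money market clears where M/P = L(i, Y).
# Linear money demand L(i, Y) = k*Y - h*i is an illustrative assumption.
def lm_rate(Y, M=1200.0, P=1.0, k=0.5, h=50.0):
    """Interest rate i that equates real money supply M/P with money demand at income Y."""
    return (k * Y - M / P) / h

# Higher income raises transactions demand for money, so the market-clearing
# interest rate rises: the LM curve is positively sloped, as the text states.
assert lm_rate(3000.0) > lm_rate(2500.0)
```

With these forms, k governs the transactions demand (income sensitivity) and h the speculative demand (interest sensitivity) discussed above.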
== Shifts == One hypothesis is that a government's deficit spending ("fiscal policy") has an effect similar to that of a lower saving rate or increased private fixed investment, increasing the amount of demand for goods at each individual interest rate. An increased deficit by the national government shifts the IS curve to the right. This raises the equilibrium interest rate (from i1 to i2) and national income (from Y1 to Y2), as shown in the graph above. The equilibrium level of national income in the IS–LM diagram is referred to as aggregate demand. Keynesians argue spending may actually "crowd in" (encourage) private fixed investment via the accelerator effect, which helps long-term growth. Further, if government deficits are spent on productive public investment (e.g., infrastructure or public health), that spending directly and eventually raises potential output, although not necessarily more (or less) than the lost private investment might have. The extent of any crowding out depends on the shape of the LM curve. A shift in the IS curve along a relatively flat LM curve can increase output substantially with little change in the interest rate. On the other hand, a rightward shift in the IS curve along a vertical LM curve will lead to higher interest rates, but no change in output (this case represents the "Treasury view"). Rightward shifts of the IS curve also result from exogenous increases in investment spending (i.e., for reasons other than interest rates or income), in consumer spending, and in export spending by people outside the economy being modelled, as well as from exogenous decreases in spending on imports. Thus these too raise both equilibrium income and the equilibrium interest rate. Of course, changes in these variables in the opposite direction shift the IS curve in the opposite direction. The IS–LM model also allows for the role of monetary policy.
If the money supply is increased, that shifts the LM curve downward or to the right, lowering interest rates and raising equilibrium national income. Further, exogenous decreases in liquidity preference, perhaps due to improved transactions technologies, lead to downward shifts of the LM curve and thus increases in income and decreases in interest rates. Changes in these variables in the opposite direction shift the LM curve in the opposite direction. == IS–LM model with interest targeting central bank == The fact that contemporary central banks normally do not target the money supply, as assumed by the original IS–LM model, but instead conduct their monetary policy by steering the interest rate directly, has led to increasing criticism of the traditional IS–LM setup since 2000 for being outdated and confusing to students. In some textbooks, the traditional LM curve derived from an explicit money market equilibrium story consequently has been replaced by an LM curve simply showing the interest rate level determined by the central bank. Notably this is the case in Olivier Blanchard's widely-used intermediate-level textbook "Macroeconomics" since its 7th edition in 2017. In this case, the LM curve becomes horizontal at the interest rate level chosen by the central bank, allowing a simpler kind of dynamics. Also, the interest rate level measured along the vertical axis may be interpreted as either the nominal or the real interest rate, in the latter case allowing inflation to enter the IS–LM model in a simple way. The output level is still determined by the intersection of the IS and LM curves. The LM curve may shift because of a change in monetary policy or possibly a change in inflation expectations, whereas the IS curve as in the traditional model may shift either because of a change in fiscal policy affecting government consumption or taxation, or because of shocks affecting private consumption or investment (or, in the open-economy version, net exports). 
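Both comparative-statics results described above can be checked in a self-contained numeric sketch. It assumes a linear IS schedule Y = m·(A0 + G − b·r) and a linear LM schedule r = (k·Y − M/P)/h, solved jointly in closed form; all parameter names and values are hypothetical illustrations, not taken from the text:

```python
# Joint IS-LM equilibrium under assumed linear schedules (illustrative values).
def equilibrium(G=400.0, M=800.0, A0=600.0, b=40.0, c1=0.6, t=0.25,
                mimp=0.1, k=0.8, h=50.0, P=1.0):
    """Solve IS: Y = m*(A0 + G - b*r) and LM: r = (k*Y - M/P)/h for (Y, r)."""
    m = 1.0 / (1.0 - c1 * (1.0 - t) + mimp)        # Keynesian multiplier
    Y = m * (A0 + G + b * M / (P * h)) / (1.0 + m * b * k / h)
    r = (k * Y - M / P) / h
    return Y, r

Y0, r0 = equilibrium()
Yg, rg = equilibrium(G=500.0)     # fiscal expansion: IS shifts right
Ym, rm = equilibrium(M=1000.0)    # monetary expansion: LM shifts right/down
assert Yg > Y0 and rg > r0        # higher output AND a higher interest rate
assert Ym > Y0 and rm < r0        # higher output with a LOWER interest rate
```

The two assertions restate the text's claims: an IS shift raises both equilibrium income and the interest rate, while an LM shift raises income but lowers the interest rate.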
Additionally, the model distinguishes between the policy interest rate determined by the central bank and the market interest rate which is decisive for firms' investment decisions, and which is equal to the policy interest rate plus a premium which may be interpreted as a risk premium or a measure of the market power or other factors influencing the business strategies of commercial banks. This premium allows shocks in the financial sector to be transmitted to the goods market and consequently to affect aggregate demand. Similar models, though under slightly different names, appear in the textbooks by Charles Jones and by Wendy Carlin and David Soskice and the CORE Econ project. In parallel, texts by Akira Weerapana and Stephen Williamson have outlined approaches where the LM curve is replaced with a real interest rate rule. == Incorporation into larger models == By itself, the traditional IS–LM model is used to study the short run when prices are fixed or sticky, and no inflation is taken into consideration. In addition, the model is often used as a sub-model of larger models which allow for a flexible price level. The addition of a supply relation enables the model to be used for both short- and medium-run analysis of the economy, or to use a different terminology: classical and Keynesian analysis. A main example of this is the Aggregate Demand-Aggregate Supply model – the AD–AS model. In the aggregate demand-aggregate supply model, each point on the aggregate demand curve is an outcome of the IS–LM model for aggregate demand Y based on a particular price level.
Starting from one point on the aggregate demand curve, at a particular price level and with the quantity of aggregate demand implied by the IS–LM model for that price level, consider a higher price level: in the IS–LM model the real money supply M/P will be lower and hence the LM curve will be shifted higher, leading to lower aggregate demand as measured by the horizontal location of the IS–LM intersection. Hence at the higher price level the level of aggregate demand is lower, so the aggregate demand curve is negatively sloped. In the 2018 textbook "Macroeconomics" by Daron Acemoglu, David Laibson and John A. List, the corresponding model combining a traditional IS-LM setup with a relation for a changing price level is named an IS-LM-FE model (FE standing for "full equilibrium"). === AD-AS-like models with inflation instead of price levels === In many modern textbooks, the traditional AD–AS diagram is replaced by a variation in which the variables are not output and the price level, but instead output and inflation (i.e., the change in the price level). In this case, the relation corresponding to the AS curve is normally derived from a Phillips curve relationship between inflation and the unemployment gap. As policymakers and economists are generally concerned about inflation levels and not actual price levels, this formulation is considered more appropriate. This variation is often referred to as a dynamic AD–AS model, but may also have other names. Olivier Blanchard in his textbook uses the term IS–LM–PC model (PC standing for Phillips curve). Others, among them Carlin and Soskice, refer to it as the "three-equation New Keynesian model", the three equations being an IS relation, often augmented with a term that allows for expectations influencing demand, a monetary policy (interest) rule and a short-run Phillips curve.
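The derivation of the AD curve described above can be sketched by sweeping the price level P through a linear IS-LM system and recording the equilibrium output at each P (same kind of illustrative linear schedules and hypothetical parameters as before; nothing here is from the text):

```python
# Tracing the AD curve: equilibrium output of a linear IS-LM system at each
# price level P. Functional forms and parameter values are illustrative.
def ad_output(P, G=500.0, M=800.0, A0=600.0, b=40.0, c1=0.6, t=0.25,
              mimp=0.1, k=0.8, h=50.0):
    """Equilibrium Y from IS: Y = m*(A0 + G - b*r) and LM: r = (k*Y - M/P)/h."""
    m = 1.0 / (1.0 - c1 * (1.0 - t) + mimp)
    return m * (A0 + G + b * M / (P * h)) / (1.0 + m * b * k / h)

# A higher P lowers real money supply M/P, shifts LM up, and yields lower
# equilibrium output: the AD curve is negatively sloped.
levels = [ad_output(P) for P in (0.8, 1.0, 1.2)]
assert levels[0] > levels[1] > levels[2]
```

Each (P, ad_output(P)) pair is one point on the AD curve, mirroring the text's statement that each point on the aggregate demand curve is an outcome of the IS-LM model at a particular price level.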
== Variations == === IS-LM-NAC model === In 2016, Roger Farmer and Konstantin Platonov presented a so-called IS-LM-NAC model (NAC standing for "no arbitrage condition", in this case between physical capital and financial assets), in which the long-run effect of monetary policy depends on the way in which people form beliefs. The model was an attempt to integrate the phenomenon of secular stagnation in the IS-LM model. Whereas in the IS-LM model, high unemployment would be a temporary phenomenon caused by sticky wages and prices, in the IS-LM-NAC model high unemployment may be a permanent situation caused by pessimistic beliefs, a particular instance of what Keynes called animal spirits. The model was part of a broader research agenda studying how beliefs may independently influence macroeconomic outcomes. == See also == == References == == Further reading == Barro, Robert J. (1984). "The Keynesian Theory of Business Fluctuations". Macroeconomics. New York: John Wiley. pp. 487–513. ISBN 978-0-471-87407-2. Blanchard, Olivier (2021). "Goods and Financial Markets: The IS-LM Model". Macroeconomics (Eighth, global ed.). Harlow, England: Pearson. pp. 107–126. ISBN 978-0-134-89789-9. Hicks, J. R. (1937). "Mr. Keynes and the 'Classics': A Suggested Interpretation". Econometrica. 5 (2): 147–159. doi:10.2307/1907242. JSTOR 1907242. Krugman, Paul (2011-10-09). "IS-LMentary". The New York Times. Retrieved 2020-10-01. Leijonhufvud, Axel (1983). "What is Wrong with IS/LM?". In Fitoussi, Jean-Paul (ed.). Modern Macroeconomic Theory. Oxford: Blackwell. pp. 49–90. ISBN 978-0-631-13158-8. Mankiw, Nicholas Gregory (2022). "Aggregate Demand I+II". Macroeconomics (Eleventh, international ed.). New York, NY: Worth Publishers, Macmillan Learning. pp. 283–334. ISBN 978-1-319-26390-4. Romer, David (2000). "Keynesian Macroeconomics without the LM Curve". Journal of Economic Perspectives. 14 (2): 149–170. doi:10.1257/jep.14.2.149. ISSN 0895-3309. Smith, Warren L. (1956).
"A Graphical Exposition of the Complete Keynesian System". Southern Economic Journal. 23 (2): 115–125. doi:10.2307/1053551. JSTOR 1053551. Vroey, Michel de; Hoover, Kevin D., eds. (2004). The IS-LM model: Its Rise, Fall, and Strange Persistence. Durham: Duke University Press. ISBN 978-0-8223-6631-7. Young, Warren; Zilberfarb, Ben-Zion, eds. (2000). IS-LM and Modern Macroeconomics. Recent Economic Thought. Vol. 73. Springer Science & Business Media. doi:10.1007/978-94-010-0644-6. ISBN 978-0-7923-7966-9. == External links == Krugman, Paul. There's something about macro – An explanation of the model and its role in understanding macroeconomics. Krugman, Paul. IS-LMentary – A basic explanation of the model and its uses. Wiens, Elmer G. IS–LM model – An online, interactive IS–LM model of the Canadian economy.
Wikipedia/IS/LM_model
Computer-integrated manufacturing (CIM) is the manufacturing approach of using computers to control the entire production process. This integration allows individual processes to exchange information with each other. Through the integration of computers, manufacturing can be faster and less error-prone. Typically CIM relies on closed-loop control processes based on real-time input from sensors. It is also known as flexible design and manufacturing. == Overview == Computer-integrated manufacturing is used in the automotive, aviation, space, and shipbuilding industries. The term "computer-integrated manufacturing" is both a method of manufacturing and the name of a computer-automated system in which individual engineering, production, marketing, and support functions of a manufacturing enterprise are organized. In a CIM system, functional areas such as design, analysis, planning, purchasing, cost accounting, inventory control, and distribution are linked through the computer with factory floor functions such as materials handling and management, providing direct control and monitoring of all the operations. CIM is an example of the implementation of information and communication technologies (ICTs) in manufacturing. CIM implies that there are at least two computers exchanging information, e.g. the controller of a robot arm and a micro-controller. CIM is most useful where a high level of ICT is used in the company or facility, such as CAD/CAM systems, and where process planning and its data are available. == History == The idea of "digital manufacturing" became prominent in the early 1970s, with the release of Dr. Joseph Harrington's book, Computer Integrated Manufacturing. However, it was not until 1984 that computer-integrated manufacturing began to be developed and promoted by machine tool manufacturers and the Computer and Automated Systems Association and Society of Manufacturing Engineers (CASA/SME).
"CIM is the integration of total manufacturing enterprise by using integrated systems and data communication coupled with new managerial philosophies that improve organizational and personnel efficiency." A literature survey showed that 37 different concepts of CIM had been published, most of them from Germany and the USA. A timeline of the 37 publications shows how the CIM concept developed over time; it is also remarkable how much the concepts differ between publications. == Topics == === Key challenges === There are three major challenges to development of a smoothly operating computer-integrated manufacturing system: Integration of components from different suppliers: when different machines, such as CNC machines, conveyors and robots, use different communications protocols (and, in the case of AGVs, even differing battery-charging times), integration problems may arise. Data integrity: the higher the degree of automation, the more critical is the integrity of the data used to control the machines. While the CIM system saves on labor of operating the machines, it requires extra human labor in ensuring that there are proper safeguards for the data signals that are used to control the machines. Process control: computers may be used to assist the human operators of the manufacturing facility, but there must always be a competent engineer on hand to handle circumstances which could not be foreseen by the designers of the control software. === Subsystems === A computer-integrated manufacturing system is not the same as a "lights-out factory", which would run completely independent of human intervention, although it is a big step in that direction. Part of the system involves flexible manufacturing, where the factory can be quickly modified to produce different products, or where the volume of products can be changed quickly with the aid of computers.
Some or all of the following subsystems may be found in a CIM operation: Computer-aided techniques: CAD (computer-aided design) CAE (computer-aided engineering) CAM (computer-aided manufacturing) CAPP (computer-aided process planning) CAQ (computer-aided quality assurance) PPC (production planning and control) ERP (enterprise resource planning) A business system integrated by a common database. Devices and equipment required: CNC, Computer numerical controlled machine tools DNC, Direct numerical control machine tools PLCs, Programmable logic controllers Robotics Computers Software Controllers Networks Interfacing Monitoring equipment Technologies: FMS, (flexible manufacturing system) ASRS, automated storage and retrieval system AGV, automated guided vehicle Robotics Automated conveyance systems Others: Lean manufacturing === CIMOSA === CIMOSA (Computer Integrated Manufacturing Open System Architecture) is a 1990s European proposal for an open systems architecture for CIM developed by the AMICE Consortium as a series of ESPRIT projects. The goal of CIMOSA was "to help companies to manage change and integrate their facilities and operations to face world wide competition. It provides a consistent architectural framework for both enterprise modeling and enterprise integration as required in CIM environments". CIMOSA provides a solution for business integration with four types of products: The CIMOSA Enterprise Modeling Framework, which provides a reference architecture for enterprise architecture CIMOSA IIS, a standard for physical and application integration. CIMOSA Systems Life Cycle, a life cycle model for CIM development and deployment. Inputs to standardization, basics for international standard development. CIMOSA, according to Vernadat (1996), coined the term business process and introduced the process-based approach for integrated enterprise modeling based on a cross-boundaries approach, as opposed to traditional function- or activity-based approaches.
With CIMOSA the concept of an "Open System Architecture" (OSA) for CIM was also introduced, designed to be vendor-independent and constructed from standardised CIM modules. Here the OSA is "described in terms of their function, information, resource, and organizational aspects. This should be designed with structured engineering methods and made operational in a modular and evolutionary architecture for operational use". == Areas == There are multiple areas of usage: In industrial and production engineering In mechanical engineering In electronic design automation (printed circuit board (PCB) and integrated circuit design data for manufacturing) == See also == Direct numerical control Enterprise integration Enterprise resource planning Flexible manufacturing system Integrated Computer-Aided Manufacturing Integrated manufacturing database Manufacturing process management Product lifecycle management == References == == Further reading == == External links == cam-occ, a linux CAM program using OpenCASCADE International Journal of Computer Integrated Manufacturing
Wikipedia/Computer_Integrated_Manufacturing
Integrated enterprise modeling (IEM) is an enterprise modeling method used for capturing and reengineering processes, both in manufacturing enterprises and in the public sector and among service providers. In integrated enterprise modeling, different aspects such as functions and data are described in one model. Furthermore, the method supports analyses of business processes independently of the existing organizational structure. Integrated enterprise modeling was developed at the Fraunhofer Institute for Production Systems and Design Technology (German: IPK) in Berlin, Germany. == Integrated enterprise modeling topics == === Base constructs === The integrated enterprise modeling (IEM) method uses an object-oriented approach and adapts it for enterprise description. The core of the method is an application-oriented division of all elements of an enterprise into the generic object classes "product", "resource" and "order". Product The object class "product" represents all objects whose production and sale are the aim of the enterprise under consideration, as well as all objects which flow into the end product. Raw materials, intermediate products, components and end products, as well as services and their describing data, are included. Order The object class "order" describes all types of commissioning in the enterprise. The objects of the class "order" represent the information that is relevant from the point of view of planning, control, and supervision of enterprise processes: what will be executed, when, on which objects, under whose responsibility, and with which resources. Resource The IEM class "resource" contains all key players required in the enterprise for the execution or support of activities. Among other things, these are employees, business partners, all kinds of documents, as well as information systems and operating supplies.
The classes "product", "order", and "resource" can be progressively detailed and specified. In this way both industry-typical and enterprise-specific product, order and resource subclasses can be represented. Structures (e.g. parts lists or organisation charts) can be represented as relational features of the classes with the help of is-part-of and consists-of relations between different subclasses. Action The activities which are necessary for the production of products and the provision of services can be described as follows: an activity is the purposeful change of objects. The goal orientation of the activities implies explicit or implicit planning and control. The execution of the activities is incumbent on the responsible key players. From these considerations, definitions can be derived for the following constructs: An action is an object-neutral description of activities: a verbal description of a work task, process, or procedure; A function describes the change of objects of a class from one defined state into another by means of an action; and An activity specifies, for the state transformation of objects of a class described by a function, the necessary resources and the controlling order, each represented by an object state description. === Views === All modeled data of the enterprise under consideration are recorded in the model core of an Integrated Enterprise Modeling (IEM) model in two main views: the "information model"; and the "business process model". All relevant objects of an enterprise, their properties and relations are shown in the "information model", as class trees of the object classes "product", "order" and "resource". The "business process model" represents enterprise processes and their relations to each other. Activities are shown in their interaction with the objects.
=== Process modeling === The structuring of enterprise processes in Integrated Enterprise Modeling (IEM) is achieved by their hierarchical subdivision with the help of decomposition. Decomposition means the reduction of a system into subsystems, each containing components which belong together logically. Process modeling thus partitions processes into threads, where every thread describes a self-contained task. The decomposition of individual processes can be continued until the threads are manageable, i.e. appropriately small. They should not become too fine-grained, however, because a high number of detailed processes increases the complexity of a business process model. A process modeler therefore has to find a balance between the complexity of the model and a suitably detailed description of the enterprise processes. A model depth of at most three to four decomposition levels (model levels) is generally recommended. On a model level, business process flows are represented with the aid of graphical combination elements. There are five basic types of combinations between activities: Sequential order: in a sequential order the activities are executed one after another. Parallel branching: a parallel branching means that all parallel branched activities have to be completed before the following activity can be started. It is not necessary that the parallel activities are executed at the same time; they can also be deferred. Case distinction: an either-or decision. The case distinction is a branching into alternative processes depending on defined conditions. Uniting: the end of a parallel or alternative execution, or the merging of process chains, is indicated by the uniting element. Loop: a return path (loop, cycle) is represented by means of case distinction and uniting.
The activities included in the loop are executed as long as the condition for continuation holds. === Modeling proceeding === The modeling procedure for the representation of business processes in IEM covers the following steps: system delimitation; modeling; model evaluation and use; and model change. The system delimitation is the basis of efficient modeling. Starting from a problem statement, the area of the real system to be represented is selected and interfaces to its environment are defined. In addition, the level of detail of the model is determined, i.e. the depth of the hierarchical decomposition in the "business process model" view. The delimited real system is then transferred, with the help of the IEM method, into an abstract model, namely the construction of the two main views "information model" and "business process model". The "information model" is built by specifying the object classes "product", "order" and "resource" with their class structures as well as descriptive and relational features. The "business process model" is formed by identifying and describing functions and activities and combining them into processes. As a general rule, the "information model" is constructed first, for which the modeler can fall back on available reference class structures. Reference classes which do not correspond to the real system, or which were not found to be relevant during system delimitation, are deleted; missing relevant classes are inserted. After the object base is fixed, the activities and functions are attached to the objects according to the "generic activity model" and combined, with the help of combination elements, into business processes. The result is a model which can be analysed and changed as required. It often happens that during the construction of the "business process model" new relevant object classes are identified, so that the class trees are completed incrementally.
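The five combination elements used in IEM process modeling (sequential order, parallel branching, case distinction, uniting, loop) can be sketched as a small recursive data structure for a decomposed process. All class, field, and function names below are illustrative assumptions, not part of IEM or MO²GO; uniting is implicit here as the point where a Parallel or Choice node ends:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:          # an elementary, self-contained task (a "thread" leaf)
    name: str

@dataclass
class Sequence:          # sequential order: steps execute one after another
    steps: list = field(default_factory=list)

@dataclass
class Parallel:          # parallel branching: all branches complete before continuing
    branches: list = field(default_factory=list)

@dataclass
class Choice:            # case distinction: exactly one alternative branch is taken
    branches: list = field(default_factory=list)

@dataclass
class Loop:              # return path: body repeats while a continuation condition holds
    condition: str
    body: object

def leaf_count(node):
    """Count elementary activities in a decomposed process tree."""
    if isinstance(node, Activity):
        return 1
    if isinstance(node, Loop):
        return leaf_count(node.body)
    children = node.steps if isinstance(node, Sequence) else node.branches
    return sum(leaf_count(c) for c in children)

# A hypothetical order-handling process, decomposed one level deep:
process = Sequence([
    Activity("receive order"),
    Parallel([Activity("check stock"), Activity("check credit")]),
    Choice([Activity("confirm order"), Activity("reject order")]),
])
assert leaf_count(process) == 5
```

A metric like leaf_count could support the balance the text describes between model complexity and level of detail, e.g. by flagging decompositions that have become too fine-grained.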
The construction of the two views is, therefore, an iterative process. Afterwards, weak points and improvement potentials can be identified in the course of the model evaluation. These can lead to model changes whose realization should eliminate the weak points and exploit the improvement potentials in the real system. === Modeling tool MO²GO === The software tool MO²GO (method for an object-oriented business process optimization) supports the modeling process based on integrated enterprise modeling (IEM). Different analyses of a given model are available, for example for the planning and implementation of information systems. The MO²GO system is easily extensible and enables a high-speed modeling approach. The currently used MO²GO system consists of the following components: MO²GO version 2.4: This component offers modeling functions for class structures and process chains as well as analysis mechanisms for IEM. MO²GO Macro editor version 2.1: The macro editor supports the creation of MO²GO macros for user-defined evaluation procedures. MO²GO Viewer version 1.07: The Java-based and licence-free MO²GO Viewer is an easy-to-use interface for navigating through MO²GO process chains. MO²GO XML converter version 1.0: Since IT implementation nowadays works mainly with UML diagrams, MO²GO provides a component that exports the model as an XML file which can be imported into UML tools. MO²GO Web publisher version 2.0: The Web Publisher is an analysis mechanism started directly from MO²GO 2.4. It evaluates the model contents and produces a process assistant as a text- and hyperlink-based representation. To adapt the process assistant flexibly to user requirements, the Web Publisher contains a configuration component. === MO²GO process assistant === The IEM business process models contain much information that can be used not only by system analysts but can also be helpful to employees in their daily work.
To provide this model information to the staff and to let the employees share in the results of the modeling, a special tool was developed at the Fraunhofer IPK. This is a web-based process assistant whose contents are generated automatically from the IEM business process model of the enterprise. The process assistant provides all users with the information of the business process model in an HTML-based form via the intranet of the enterprise. For its use, no special method or tool knowledge is required beyond basic EDP and Internet experience. The process assistant has been developed so that employees can find answers to questions quickly and precisely, e.g.: What are the processes in the enterprise? How are they structured? Who is involved in a certain process, and with which responsibility? Which documents and application systems are used? Or also: In which processes is a certain organisational unit involved? In which processes is a certain document or application system used? To turn the business process model into an informative process assistant, certain modeling rules must be followed. This means, for example, that the individual actions must be stored together with their descriptions, the responsibilities of the organisational units must be indicated explicitly, and the paths to the documents must be entered in the class tree. Meeting these conditions means additional time expenditure during modeling, but once they are met, all employees are able to "surf" through an informative enterprise documentation on the intranet with the help of the process assistant. They can choose between a graphical view and a text-based description according to their preferences and prior methodological knowledge. The graphical view is provided by the MO²GO Viewer, a viewer tool for MO²GO models.
The process assistant and the MO²GO Viewer are connected so that the graphical representation of the process under consideration can be accessed context-sensitively from the process assistant. Users can call up all templates, specifications and documents for the working sequence online, both from the process assistant and from the MO²GO Viewer. Therefore, the process assistant can be employed not only for tracing the modeling results but also in daily business, for training new employees as well as for executing process steps. To improve its usability in the daily routine, the process assistant can be flexibly adapted to the needs of the users. This customization can concern both the layout and the main content emphases of the process assistant. == Areas of application of the IEM == Knowledge is used in organisations as a resource to render services for customers. Services are rendered through actions which are described as processes or business processes. Analysing and improving the way knowledge is handled presupposes a common understanding of this context. An explicit description of the processes is therefore required, because they represent the context for the respective knowledge contents. Process modeling represents a powerful instrument for designing and implementing process-oriented knowledge management. The method of business process-oriented knowledge management (GPO KM) developed at the Fraunhofer IPK draws on the method of "integrated enterprise modeling" (IEM). It makes it possible to represent, describe, analyse and design organisational processes. The IEM features few object classes and is easy to understand and quick to apply. Furthermore, the object orientation of the IEM opens up the possibility of representing knowledge as an object class.
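The generic IEM object classes and the idea of representing knowledge as an object class can be sketched in a few lines of Python. The class and attribute names here are illustrative assumptions, not taken from the IEM specification or from MO²GO:

```python
# Hypothetical sketch of IEM's generic object classes (product, order,
# resource) and of knowledge modeled as a resource. All names are
# illustrative, not part of the actual method or tool.

class IEMObject:
    """Common base for the three generic IEM object classes."""
    def __init__(self, name):
        self.name = name

class Product(IEMObject): pass
class Order(IEMObject): pass
class Resource(IEMObject): pass

class Knowledge(Resource):
    """Knowledge represented as a resource, specified by domain and carrier."""
    def __init__(self, name, domain, carrier):
        super().__init__(name)
        self.domain = domain      # knowledge domain
        self.carrier = carrier    # person or system holding the knowledge

class Activity:
    """An activity in the business process model that uses resources."""
    def __init__(self, name, resources=None):
        self.name = name
        self.resources = resources or []

# A process step that uses a knowledge resource:
pricing_rules = Knowledge("pricing rules", domain="sales", carrier="sales team")
quote = Activity("prepare customer quote", resources=[pricing_rules])
```

Because `Knowledge` is just a subclass of `Resource`, the same combination elements and analyses that apply to ordinary resources apply to knowledge contents as well, which is the point made in the text.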
For the knowledge-oriented modeling of business processes according to the IEM method, the relevant knowledge contents have to be specified by knowledge domains and know-how carriers and represented as resources in the business process model. In further applications, IEM is used to create models across organisations (e.g. companies) to achieve a common understanding between the involved stakeholders and to derive services (create software and define the ASP). In this context the object-oriented basis of IEM has been used to create common semantics across the individual company models and to achieve compliant enterprise models (predefined classes – terminology, model templates, etc.). The reason is that the terminology used within a model has to be understandable independently of the modeling language; see also SDDEM. == See also == Business process modeling == References == == Further reading == Bernus, P.; Mertins, K.; Schmidt, G. (2006). Handbook on Architectures of Information Systems. Berlin: Springer (International Handbooks on Information Systems), Second Edition 2006, ISBN 3-540-64453-9 Mertins, K. (1994). Modellierungsmethoden für rechnerintegrierte Produktionsprozesse. Hanser Fachbuchverlag, Germany, ASIN 3446177469 Mertins, K.; Süssenguth, W.; Jochem, R. (1994). Modellierungsmethoden für rechnerintegrierte Produktionsprozesse. Carl Hanser Verlag, Germany, ISBN 3-446-17746-9 Mertins, K.; Jochem, J. (1997). Qualitätsorientierte Gestaltung von Geschäftsprozessen. Beuth-Verlag, Berlin (Germany) Mertins, K.; Jochem, R. (1998). MO²GO. Handbook on Architectures of Information Systems. Springer-Verlag, Berlin (Germany) Mertins, K.; Jaekel, F-W. (2006). MO²GO: User Oriented Enterprise Models for Organizational and IT Solutions. In: Bernus, P.; Mertins, K.; Schmidt, G.: Handbook on Architectures of Information Systems. Second Edition. Springer-Verlag Berlin. ISBN 3-540-25472-2 Spur, G.; Mertins, K.; Jochem, R.; Warnecke, H.J. (1993).
Integrierte Unternehmensmodellierung Beuth Verlag GmbH Germany, ISBN 3-410-12923-5 Schwermer, M. (1998): Modellierungsvorgehen zur Planung von Geschäftsprozessen (Dissertation) FhG/IPK Berlin (Germany), ISBN 3-8167-5163-6 == External links == Fraunhofer Institute for Production Systems and Design Technology Modeling tool MO²GO
Wikipedia/Integrated_Enterprise_Modeling
Enterprise data modelling or enterprise data modeling (EDM) is the practice of creating a graphical model of the data used by an enterprise or company. Typical outputs of this activity include an enterprise data model consisting of entity–relationship diagrams (ERDs), XML schemas (XSD), and an enterprise-wide data dictionary. == Overview == Producing such a model allows a business to get a 'helicopter' view of its enterprise. In EAI (enterprise application integration) an EDM allows data to be represented in a single idiom, enabling the use of a common syntax for the XML of services or operations and the physical data model for database schema creation. Data modeling tools for ERDs that also allow the user to create a data dictionary are usually used to aid in the development of an EDM. The implementation of an EDM is closely related to the issues of data governance and data stewardship within an organization. An Enterprise Data Model (EDM) represents a single integrated definition of data, unbiased by any system or application. It is independent of "how" the data is physically sourced, stored, processed or accessed. The model unites, formalizes and represents the things important to an organization, as well as the rules governing them. == References == Noreen Kendle (July 1, 2005). "The Enterprise Data Model". The Data Administration Newsletter. Andy Graham (2010). The Enterprise Data Model: A framework for enterprise data architecture. ISBN 978-0956582904.
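The "single idiom" idea from the overview can be illustrated with a small sketch: one logical entity definition drives both an XML-schema fragment (for services) and a physical table definition (for the database schema). The entity, the type mappings and the helper functions are invented for this example:

```python
# Illustrative sketch: one logical entity definition, two physical
# representations. The entity, type mappings and function names are
# assumptions made for the example, not part of any EDM tool.

customer = {
    "name": "Customer",
    "fields": [("customer_id", "int"), ("full_name", "string")],
}

XSD_TYPES = {"int": "xs:integer", "string": "xs:string"}
SQL_TYPES = {"int": "INTEGER", "string": "VARCHAR(255)"}

def to_xsd(entity):
    """Render the entity as an XML Schema element declaration."""
    lines = [f'<xs:element name="{entity["name"]}"><xs:complexType><xs:sequence>']
    for fname, ftype in entity["fields"]:
        lines.append(f'  <xs:element name="{fname}" type="{XSD_TYPES[ftype]}"/>')
    lines.append("</xs:sequence></xs:complexType></xs:element>")
    return "\n".join(lines)

def to_sql(entity):
    """Render the entity as a SQL table definition."""
    cols = ", ".join(f"{n} {SQL_TYPES[t]}" for n, t in entity["fields"])
    return f'CREATE TABLE {entity["name"]} ({cols});'

print(to_sql(customer))
# CREATE TABLE Customer (customer_id INTEGER, full_name VARCHAR(255));
```

Because both outputs are generated from the same definition, the XML used by services and the database schema cannot drift apart, which is the practical benefit the article attributes to an EDM in EAI settings.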
Wikipedia/Enterprise_Data_Modeling
Richard Veryard FRSA (born 1955) is a British computer scientist, author and business consultant, known for his work on service-oriented architecture and the service-based business. == Biography == Veryard attended Sevenoaks School from 1966 to 1972, where he attended classes by Gerd Sommerhoff. He received his MA in Mathematics and Philosophy from Merton College, Oxford, in 1976, and his MSc in Computing Science from Imperial College London in 1977. Later he also received an MBA from the Open University in 1992. Veryard started his career in industry working for Data Logic Limited, Middlesex, UK, where he first developed and taught public data analysis courses. After years of practical experience in this field, he wrote his first book about this topic in 1984. In 1987 he became an IT consultant with James Martin Associates (JMA), specializing in the practical problems of planning and implementing information systems. After the European operations of JMA were acquired by Texas Instruments, he became a Principal Consultant in the Software Business and a member of Group Technical Staff. At Texas Instruments he was one of the developers of IE\Q, a proprietary methodology for software quality management. Since 1997 he has been a freelance consultant under the flag of Veryard Projects Ltd. Since 2006 he has been a principal consultant at CBDi, a research forum for service-oriented architecture and engineering. Veryard has taught courses at City University, Brunel University and the Copenhagen Business School, and is a Fellow of the Royal Society of Arts in London. == Work == === Pragmatic data analysis, 1984 === In "Pragmatic data analysis" (1984) Veryard presented data analysis as a branch of systems analysis, which shared the same principles. His position on data modelling would appear to be implicit in the term data analysis. He presented two philosophical attitudes towards data modeling, which he called "semantic relativism and semantic absolutism.
According to the absolutist way of thinking, there is only one correct or ideal way of modeling anything: each object in the real world must be represented by a particular construct. Semantic relativism, on the other hand, believes that most things in the real world can be modeled in many different ways, using basic constructs". Veryard further examined the problem of the discovery of classes and objects. This may proceed from a number of different models that capture the requirements of the problem domain. Abbott (1983) proposed that each search starts from a textual description of the problem. Ward (1989) and Seidewitz and Stark (1986) suggested starting from the products of structured analysis, namely data flow diagrams. Veryard examined the same problem from the perspective of data modeling. Veryard made the point that the modeler has some choice in whether to use an entity, relationship or attribute to represent a given universe of discourse (UoD) concept. This justifies a common position that "data models of the same UoD may differ, but the differences are the result of shortcomings in the data modeling language. The argument is that data modeling is essentially descriptive, but that current data modeling languages allow some choice in how the description is documented." === Economics of Information Systems and Software, 1991 === In the 1991 book "The Economics of Information Systems and Software", edited by Veryard, experts from various areas, including business administration, project management, software engineering and economics, contribute their expertise concerning the economics of systems software, including the evaluation of benefits, types of information, and project costs and management. === Information Coordination, 1993 === In the 1993 book "Information Coordination: The Management of Information Models, Systems, and Organizations" Veryard gives a snapshot of the state of the art around these subjects.
"Maximizing the value of corporate data depends upon being able to manage information models both within and between businesses. A centralized information model is not appropriate for many organizations," Veryard explains. His book "takes the approach that multiple information models exist and the differences and links between them have to be managed. Coordination is currently an area of both intensive theoretical speculation and of practical research and development. Information Coordination explains practical guidelines for information management, both from on-going research and from recent field experience with CASE tools and methods". === Enterprise Modelling Methodology === In the 1990s Veryard worked on an Enterprise Computing Project and developed a version of Business Relationship Modelling specifically for Open Distributed Processing, under the name Enterprise Modelling Methodology/Open Distributed Processing (EMM/ODP). EMM/ODP proposed some new techniques and method extensions for enterprise modelling for distributed systems. === Component-based business === In 2001 Veryard introduced the concept of "component-based business", which relates to new business architectures in which "an enterprise is configured as a dynamic network of components providing business services to one another". In the new millennium there has been "a phenomenal growth in this kind of new autonomous business services, fuelled largely by the internet and e-business". The concept of "component-based business constitutes a radical challenge to traditional notions of strategy, planning, requirements, quality and change, and tries to help you improve how you think through the practical difficulties and opportunities of the component-based business". This applied to both hardware and software, and to business relationships. Veryard's subsequent work on organic planning for SOA has been referenced by a number of authors.
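Veryard's picture of an enterprise as "a dynamic network of components providing business services to one another" can be sketched as a minimal service registry, in which components publish services and consume them without knowing the provider. All names and interfaces here are illustrative assumptions, not part of Veryard's work:

```python
# Minimal sketch of a component-based business: autonomous components
# register the business services they provide and discover one another
# through a shared registry. All names are illustrative.

class Registry:
    """Shared directory mapping service names to providing components."""
    def __init__(self):
        self._providers = {}
    def register(self, service, component):
        self._providers[service] = component
    def request(self, service, *args):
        # The consumer addresses the service, not a specific provider.
        return self._providers[service].provide(service, *args)

class Component:
    """An autonomous business component offering a set of services."""
    def __init__(self, name, services):
        self.name = name
        self.services = services   # service name -> callable
    def provide(self, service, *args):
        return self.services[service](*args)

registry = Registry()
billing = Component("BillingCo", {"invoice": lambda amount: f"invoice for {amount}"})
registry.register("invoice", billing)

# Another component consumes the service without knowing the provider:
print(registry.request("invoice", 100))   # invoice for 100
```

The decoupling shown here is what makes the network "dynamic": a provider can be replaced by re-registering the service, without the consumers changing.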
=== Six Viewpoints of Business Architecture, 2013 === In "Six Viewpoints of Business Architecture" Veryard describes business architecture as "a practice (or collection of practices) associated with business performance, strategy and structure." On the main task of the business architect he writes: The business architect is expected to take responsibility for some set of stakeholder concerns, in collaboration with a number of related business and architectural roles, including • business strategy planning, business change management, business analysis, etc. • business operations, business excellence, etc. • enterprise architecture, solution architecture, data/process architecture, systems architecture, etc. Conventional accounts of business architecture are often framed within a particular agenda, especially an IT-driven agenda. Many enterprise architecture frameworks follow this agenda, and this affects how they describe business architecture and its relationship with other architectures (such as IT systems architecture). Indeed, business architecture is often seen as little more than a precursor to system architecture, an attempt to derive systems requirements. == Publications == Richard Veryard. Pragmatic data analysis. Oxford: Blackwell Scientific Publications, 1984. Richard Veryard (ed.). The Economics of information systems and software. Oxford: Butterworth-Heinemann, 1991. Richard Veryard. Information modelling: practical guidance. New York: Prentice Hall, 1992. Richard Veryard. Information coordination: the management of information models, systems, and organizations. New York: Prentice Hall, 1994. Richard Veryard. Component-based business: plug and play. London: Springer, 2001. Richard Veryard. Six Viewpoints of Business Architecture, 2013. Articles, papers, book chapters, etc., a selection: Richard Veryard (2000). Reasoning about systems and their properties.
In: Peter Henderson (ed.) Systems Engineering for Business Process Change, Springer-Verlag, 2002. Richard Veryard. "Business-Driven SOA," CBDI Journal, May–June 2004. == References == == External links == Richard Veryard Home page List of recent publications by Richard Veryard.
Wikipedia/Enterprise_Modelling_Methodology/Open_Distributed_Processing
In systems engineering, information systems and software engineering, the systems development life cycle (SDLC), also referred to as the application development life cycle, is a process for planning, creating, testing, and deploying an information system. The SDLC concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both. There are usually six stages in this cycle: requirement analysis, design, development and testing, implementation, documentation, and evaluation. == Overview == A systems development life cycle is composed of distinct work phases that are used by systems engineers and systems developers to deliver information systems. Like anything that is manufactured on an assembly line, an SDLC aims to produce high-quality systems that meet or exceed expectations, based on requirements, by delivering systems within scheduled time frames and cost estimates. Computer systems are complex and often link components with varying origins. Various SDLC methodologies have been created, such as waterfall, spiral, agile, rapid prototyping, incremental, and synchronize and stabilize. SDLC methodologies fit within a flexibility spectrum ranging from agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on lightweight processes that allow for rapid changes. Iterative methodologies, such as Rational Unified Process and dynamic systems development method, focus on stabilizing project scope and iteratively expanding or improving products. Sequential or big-design-up-front (BDUF) models, such as waterfall, focus on complete and correct planning to guide larger projects and limit risks to successful and predictable results. Anamorphic development is guided by project scope and adaptive iterations. In project management a project can include both a project life cycle (PLC) and an SDLC, during which somewhat different activities occur. 
According to Taylor (2004), "the project life cycle encompasses all the activities of the project, while the systems development life cycle focuses on realizing the product requirements". SDLC is not a methodology per se, but rather a description of the phases that a methodology should address. The list of phases is not definitive, but typically includes planning, analysis, design, build, test, implement, and maintenance/support. In the Scrum framework, for example, one could say a single user story goes through all the phases of the SDLC within a two-week sprint. By contrast, in the waterfall methodology every business requirement is translated into feature/functional descriptions, which are then all implemented, typically over a period of months or longer. == History == According to Elliott (2004), SDLC "originated in the 1960s, to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines". The structured systems analysis and design method (SSADM) was produced for the UK government Office of Government Commerce in the 1980s. Ever since, according to Elliott (2004), "the traditional life cycle approaches to systems development have been increasingly replaced with alternative approaches and frameworks, which attempted to overcome some of the inherent deficiencies of the traditional SDLC". == Models == SDLC provides a set of phases/steps/activities for system designers and developers to follow. Each phase builds on the results of the previous one. Not every project requires that the phases be sequential. For smaller, simpler projects, phases may be combined/overlap. === Waterfall === The oldest and best known is the waterfall model, which uses a linear sequence of steps. Waterfall has different varieties.
One variety is as follows: ==== Preliminary analysis ==== Conduct a preliminary analysis, consider alternative solutions, estimate costs and benefits, and submit a preliminary plan with recommendations. Conduct preliminary analysis: Identify the organization's objectives and define the nature and scope of the project. Ensure that the project fits with the objectives. Consider alternative solutions: Alternatives may come from interviewing employees, clients, suppliers, and consultants, as well as competitive analysis. Cost-benefit analysis: Analyze the costs and benefits of the project. ==== Systems analysis, requirements definition ==== Decompose project goals into defined functions and operations. This involves gathering and interpreting facts, diagnosing problems, and recommending changes. Analyze end-user information needs and resolve inconsistencies and incompleteness: Collect facts: Obtain end-user requirements by document review, client interviews, observation, and questionnaires. Scrutinize existing system(s): Identify pros and cons. Analyze the proposed system: Find solutions to issues and prepare specifications, incorporating appropriate user proposals. ==== Systems design ==== At this step, desired features and operations are detailed, including screen layouts, business rules, process diagrams, pseudocode, and other deliverables. ==== Development ==== Write the code. ==== Integration and testing ==== Assemble the modules in a testing environment. Check for errors, bugs, and interoperability. ==== Acceptance, installation, deployment ==== Put the system into production. This may involve training users, deploying hardware, and loading information from the prior system. ==== Maintenance ==== Monitor the system to assess its ongoing fitness. Make modest changes and fixes as needed to maintain the quality of the system. Continual monitoring and updates ensure the system remains effective and high-quality.
==== Evaluation ==== The system and the process are reviewed. Relevant questions include whether the newly implemented system meets requirements and achieves project goals, whether the system is usable, reliable/available, properly scaled and fault-tolerant. Process checks include review of timelines and expenses, as well as user acceptance. ==== Disposal ==== At end of life, plans are developed for discontinuing the system and transitioning to its replacement. Related information and infrastructure must be repurposed, archived, discarded, or destroyed, while appropriately protecting security. In the following diagram, these stages are divided into ten steps, from definition to creation and modification of IT work products: === Systems analysis and design === Systems analysis and design (SAD) can be considered a meta-development activity, which serves to set the stage and bound the problem. SAD can help balance competing high-level requirements. SAD interacts with distributed enterprise architecture, enterprise IT architecture, and business architecture, and relies heavily on concepts such as partitioning, interfaces, personae and roles, and deployment/operational modeling to arrive at a high-level system description. This high-level description is then broken down into the components and modules which can be analyzed, designed, and constructed separately and integrated to accomplish the business goal. SDLC and SAD are cornerstones of full life cycle product and system planning. === Object-oriented analysis and design === Object-oriented analysis and design (OOAD) is the process of analyzing a problem domain to develop a conceptual model that can then be used to guide development. During the analysis phase, a programmer develops written requirements and a formal vision document via interviews with stakeholders. The conceptual model that results from OOAD typically consists of use cases, and class and interaction diagrams.
It may also include a user interface mock-up. An output artifact does not need to be completely defined to serve as input of object-oriented design; analysis and design may occur in parallel. In practice the results of one activity can feed the other in an iterative process. Some typical input artifacts for OOAD: Conceptual model: A conceptual model is the result of object-oriented analysis. It captures concepts in the problem domain. The conceptual model is explicitly independent of implementation details. Use cases: A use case is a description of sequences of events that, taken together, complete a required task. Each use case provides scenarios that convey how the system should interact with actors (users). Actors may be end users or other systems. Use cases may be further elaborated using diagrams. Such diagrams identify the actor and the processes they perform. System sequence diagram: A system sequence diagram (SSD) is a picture that shows, for a particular use case, the events that actors generate and their order, including inter-system events. User interface document: A document that shows and describes the user interface. Data model: A data model describes how data elements relate to each other. The data model is created before the design phase. Object-oriented designs map directly from the data model. Relational designs are more involved. === System lifecycle === The system lifecycle is a view of a system or proposed system that addresses all phases of its existence, including system conception, design and development, production and/or construction, distribution, operation, maintenance and support, retirement, phase-out, and disposal. ==== Conceptual design ==== The conceptual design stage is the stage where an identified need is examined, requirements for potential solutions are defined, potential solutions are evaluated, and a system specification is developed.
The system specification represents the technical requirements that will provide overall guidance for system design. Because this document determines all future development, the stage cannot be completed until a conceptual design review has determined that the system specification properly addresses the motivating need. Key steps within the conceptual design stage include: Need identification Feasibility analysis System requirements analysis System specification Conceptual design review ==== Preliminary system design ==== During this stage of the system lifecycle, subsystems that perform the desired system functions are designed and specified in compliance with the system specification. Interfaces between subsystems are defined, as well as overall test and evaluation requirements. At the completion of this stage, a development specification is produced that is sufficient to perform detailed design and development. Key steps within the preliminary design stage include: Functional analysis Requirements allocation Detailed trade-off studies Synthesis of system options Preliminary design of engineering models Development specification Preliminary design review For example, as the system analyst of Viti Bank, you have been tasked to examine the current information system. Viti Bank is a fast-growing bank in Fiji. Customers in remote rural areas find it difficult to access the bank's services; it takes them days or even weeks to travel to a location where they can do so. With the vision of meeting the customers' needs, the bank has requested your services to examine the current system and to come up with solutions or recommendations on how it can be improved to meet those needs. ==== Detail design and development ==== This stage includes the development of detailed designs that bring initial design work into a completed form of specifications.
This work includes the specification of interfaces between the system and its intended environment, and a comprehensive evaluation of the system's logistical, maintenance and support requirements. The detail design and development stage is responsible for producing the product, process and material specifications and may result in substantial changes to the development specification. Key steps within the detail design and development stage include: Detailed design Detailed synthesis Development of engineering and prototype models Revision of development specification Product, process, and material specification Critical design review ==== Production and construction ==== During the production and/or construction stage the product is built or assembled in accordance with the requirements specified in the product, process and material specifications, and is deployed and tested within the operational target environment. System assessments are conducted in order to correct deficiencies and adapt the system for continued improvement. Key steps within the product construction stage include: Production and/or construction of system components Acceptance testing System distribution and operation Operational testing and evaluation System assessment ==== Utilization and support ==== Once fully deployed, the system is used for its intended operational role and maintained within its operational environment. Key steps within the utilization and support stage include: System operation in the user environment Change management System modifications for improvement System assessment ==== Phase-out and disposal ==== Effectiveness and efficiency of the system must be continuously evaluated to determine when the product has reached its maximum effective lifecycle. Considerations include: continued existence of operational need, matching between operational requirements and system performance, feasibility of system phase-out versus maintenance, and availability of alternative systems.
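The lifecycle stages and the reviews or assessments that close them, as described above, can be represented as a simple ordered structure. The data layout is illustrative; only the stage and review names come from the text:

```python
# Sketch of the system lifecycle as an ordered sequence of stages, each
# paired with the review/assessment named in the text. The dict-of-tuples
# layout is an illustration, not a standard artifact.

LIFECYCLE = [
    ("conceptual design",             "conceptual design review"),
    ("preliminary system design",     "preliminary design review"),
    ("detail design and development", "critical design review"),
    ("production and construction",   "system assessment"),
    ("utilization and support",       "system assessment"),
    ("phase-out and disposal",        None),  # end of life, no gate
]

def next_stage(current):
    """Return the stage that follows `current`, or None at end of life."""
    names = [stage for stage, _ in LIFECYCLE]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else None
```

Encoding the stages as data like this makes the gating explicit: a stage is only left once its closing review is passed, which is the point the conceptual design section makes about the system specification.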
== Phases == === System investigation === During this step, current priorities that would be affected and how they should be handled are considered. A feasibility study determines whether creating a new or improved system is appropriate. This helps to estimate costs, benefits, resource requirements, and specific user needs. The feasibility study should address operational, financial, technical, human factors, and legal/political concerns. === Analysis === The goal of analysis is to determine where the problem is. This step involves decomposing the system into pieces, analyzing project goals, breaking down what needs to be created, and engaging users to define requirements. === Design === In systems design, functions and operations are described in detail, including screen layouts, business rules, process diagrams, and other documentation. Modular design reduces complexity and allows the outputs to describe the system as a collection of subsystems. The design stage takes as its input the requirements already defined. For each requirement, a set of design elements is produced. Design documents typically include functional hierarchy diagrams, screen layouts, business rules, process diagrams, pseudo-code, and a complete data model with a data dictionary. These elements describe the system in sufficient detail that developers and engineers can develop and deliver the system with minimal additional input. === Testing === The code is tested at various levels in software testing. Unit, system, and user acceptance tests are typically performed. Many approaches to testing have been adopted. 
The following types of testing may be relevant: Path testing Data set testing Unit testing System testing Integration testing Black-box testing White-box testing Regression testing Automation testing User acceptance testing Software performance testing === Training and transition === Once a system has been stabilized through testing, SDLC ensures that proper training is prepared and performed before transitioning the system to support staff and end users. Training usually covers operational training for support staff as well as end-user training. After training, systems engineers and developers transition the system to its production environment. === Operations and maintenance === Maintenance includes changes, fixes, and enhancements. === Evaluation === The final phase of the SDLC is to measure the effectiveness of the system and evaluate potential enhancements. == Life cycle == === Management and control === SDLC phase objectives are described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives while executing projects. Control objectives are clear statements of the desired result or purpose and should be defined and monitored throughout a project. Control objectives can be grouped into major categories (domains), and relate to the SDLC phases as shown in the figure. To manage and control a substantial SDLC initiative, a work breakdown structure (WBS) captures and schedules the work. The WBS and all programmatic material should be kept in the "project description" section of the project notebook. The project manager chooses a WBS format that best describes the project. The diagram shows that coverage spans numerous phases of the SDLC but the associated MCD (Management Control Domains) shows mappings to SDLC phases. 
For example, Analysis and Design is primarily performed as part of the Acquisition and Implementation Domain, and System Build and Prototype is primarily performed as part of delivery and support. === Work breakdown structured organization === The upper section of the WBS provides an overview of the project scope and timeline. It should also summarize the major phases and milestones. The middle section is based on the SDLC phases. WBS elements consist of milestones and tasks to be completed, rather than activities to be undertaken, and each has a deadline. Each task has a measurable output (e.g., analysis document). A WBS task may rely on one or more activities (e.g. coding). Parts of the project needing support from contractors should have a statement of work (SOW). The development of an SOW does not occur during a specific phase of SDLC but is developed to include the work from the SDLC process that may be conducted by contractors. === Baselines === Baselines are established after four of the five phases of the SDLC, and are critical to the iterative nature of the model. Baselines become milestones. Functional baseline: established after the conceptual design phase. Allocated baseline: established after the preliminary design phase. Product baseline: established after the detail design and development phase. Updated product baseline: established after the production construction phase. == Alternative methodologies == Alternative software development methods to systems development life cycle are: Software prototyping Joint applications development (JAD) Rapid application development (RAD) Extreme programming (XP) Open-source development End-user development Object-oriented programming == Strengths and weaknesses == Fundamentally, SDLC trades flexibility for control by imposing structure. It is more commonly used for large-scale projects with many developers. 
== See also == Application lifecycle management Decision cycle IPO model Software development methodologies == References == == Further reading == Cummings, Haag (2006). Management Information Systems for the Information Age. Toronto, McGraw-Hill Ryerson Beynon-Davies P. (2009). Business Information Systems. Palgrave, Basingstoke. ISBN 978-0-230-20368-6 Computer World, 2002, Retrieved on June 22, 2006, from the World Wide Web: Management Information Systems, 2005, Retrieved on June 22, 2006, from the World Wide Web: == External links == The Agile System Development Lifecycle Pension Benefit Guaranty Corporation – Information Technology Solutions Lifecycle Methodology DoD Integrated Framework Chart IFC (front, back) FSA Life Cycle Framework HHS Enterprise Performance Life Cycle Framework The Open Systems Development Life Cycle System Development Life Cycle Evolution Modeling Zero Deviation Life Cycle Integrated Defense AT&L Life Cycle Management Chart, the U.S. DoD form of this concept.
Wikipedia/Systems_Development_Life_Cycle
Systems modeling or system modeling is the interdisciplinary study of the use of models to conceptualize and construct systems in business and IT development. A common type of systems modeling is function modeling, with specific techniques such as the Functional Flow Block Diagram and IDEF0. These models can be extended using functional decomposition, and can be linked to requirements models for further system partitioning. In contrast to functional modeling, another type of systems modeling is architectural modeling, which uses the systems architecture to conceptually model the structure, behavior, and other views of a system. The Business Process Modeling Notation (BPMN), a graphical representation for specifying business processes in a workflow, can also be considered to be a systems modeling language. == Overview == In business and IT development the term "systems modeling" has multiple meanings. It can relate to: the use of models to conceptualize and construct systems the interdisciplinary study of the use of these models the systems modeling, analysis, and design efforts the systems modeling and simulation, such as system dynamics any specific systems modeling language As a field of study, systems modeling has emerged with the development of system theory and systems sciences. As a type of modeling, systems modeling is based on systems thinking and the systems approach. In business and IT, systems modeling contrasts with other approaches such as: agent based modeling data modeling and mathematical modeling In "Methodology for Creating Business Knowledge" (1997), Arbnor and Bjerke considered the systems approach (systems modeling) to be one of the three basic methodological approaches for gaining business knowledge, beside the analytical approach and the actor's approach (agent based modeling). == History == The function model originates in the 1950s, after other types of management diagrams had already been developed in the first half of the 20th century. 
The first known Gantt chart was developed in 1896 by Karol Adamiecki, who called it a harmonogram. Because Adamiecki did not publish his chart until 1931 - and in any case his works were published in either Polish or Russian, languages not popular in the West - the chart now bears the name of Henry Gantt (1861–1919), who designed his chart around the years 1910-1915 and popularized it in the West. One of the first well-defined function models was the Functional Flow Block Diagram (FFBD) developed by the defense-related TRW Incorporated in the 1950s. In the 1960s it was used by NASA to visualize the time sequence of events in space systems and flight missions. It is further widely used in classical systems engineering to show the order of execution of system functions. One of the earliest pioneering works in information systems modeling was done by Young and Kent (1958), who argued: Since we may be called upon to evaluate different computers or to find alternative ways of organizing current systems it is necessary to have some means of precisely stating a data processing problem independently of mechanization. They aimed for a precise and abstract way of specifying the informational and time characteristics of a data processing problem, and wanted to create a notation that should enable the analyst to organize the problem around any piece of hardware. Their effort was focused not so much on independent systems analysis as on creating an abstract specification and invariant basis for designing different alternative implementations using different hardware components. A next step in IS modeling was taken by CODASYL, an IT industry consortium formed in 1959, which essentially aimed at the same thing as Young and Kent: the development of "a proper structure for machine independent problem definition language, at the system level of data processing". This led to the development of a specific IS information algebra. 
== Types of systems modeling == In business and IT development systems are modeled with different scopes and scales of complexity, such as: Functional modeling Systems architecture Business process modeling Enterprise modeling Furthermore, like systems thinking, systems modeling can be divided into: Systems analysis Hard systems modeling or operational research modeling Soft system modeling Process based system modeling There are also other specific types of systems modeling, such as, for example, complex systems modeling, dynamical systems modeling, and critical systems modeling. == Specific types of modeling languages == Framework-specific modeling language Systems Modeling Language == See also == Behavioral modeling Dynamic systems Human visual system model – a human visual system model used by image processing, video processing, and computer vision Open energy system models – energy system models adopting open science principles SEQUAL framework Software and Systems Modeling Solar System model – a model that illustrates the relative positions and motions of the planets and stars Statistical model Systems analysis Systems design Systems biology modeling Viable system model – a model of the organizational structure of any viable or autonomous system == References == == Further reading == Doo-Kwon Baik eds. (2005). Systems modeling and simulation: theory and applications : third Asian Simulation Conference, AsiaSim 2004, Jeju Island, Korea, October 4–6, 2004. Springer, 2005. ISBN 3-540-24477-8. Derek W. Bunn, Erik R. Larsen (1997). Systems modelling for energy policy. Wiley, 1997. ISBN 0-471-95794-1 Hartmut Ehrig et al. (eds.) (2005). Formal methods in software and systems modeling. Springer, 2005 ISBN 3-540-24936-2 D. J. Harris (1985). Mathematics for business, management, and economics: a systems modelling approach. E. Horwood, 1985. ISBN 0-85312-821-9 Jiming Liu, Xiaolong Jin, Kwok Ching Tsui (2005). 
Autonomy oriented computing: from problem solving to complex systems modeling. Springer, 2005. ISBN 1-4020-8121-9 Michael Pidd (2004). Systems Modelling: Theory and Practice. John Wiley & Sons, 2004. ISBN 0-470-86732-9 Václav Pinkava (1988). Introduction to Logic for Systems Modelling. Taylor & Francis, 1988. ISBN 0-85626-431-8
Wikipedia/Systems_modelling
Object-oriented modeling (OOM) is an approach to modeling an application that is used at the beginning of the software life cycle when using an object-oriented approach to software development. The software life cycle is typically divided into stages going from abstract descriptions of the problem to designs, then to code and testing, and finally to deployment. Modeling is done at the beginning of the process. The reasons to model a system before writing the code are: Communication. Users typically cannot understand programming languages or code. Model diagrams can be more understandable and can allow users to give developers feedback on the appropriate structure of the system. A key goal of the object-oriented approach is to decrease the "semantic gap" between the system and the real world by using terminology that is the same as the functions that users perform. Modeling is an essential tool to facilitate achieving this goal. Abstraction. A goal of most software methodologies is to first address "what" questions and then address "how" questions. That is, first determine the functionality the system is to provide without consideration of implementation constraints, and then consider how to take this abstract description and refine it into an implementable design and code given constraints such as technology and budget. Modeling enables this by allowing abstract descriptions of processes and objects that define their essential structure and behavior. Object-oriented modeling is typically done via use cases and abstract definitions of the most important objects. The most common language used to do object-oriented modeling is the Object Management Group's Unified Modeling Language (UML). == See also == Object-oriented analysis and design == References ==
Wikipedia/Object-Oriented_Modeling
In computer science, the Aho–Corasick algorithm is a string-searching algorithm invented by Alfred V. Aho and Margaret J. Corasick in 1975. It is a kind of dictionary-matching algorithm that locates elements of a finite set of strings (the "dictionary") within an input text. It matches all strings simultaneously. The complexity of the algorithm is linear in the length of the strings plus the length of the searched text plus the number of output matches. Because all matches are found, multiple matches will be returned for one string location if multiple strings from the dictionary match at that location (e.g. dictionary = a, aa, aaa, aaaa and input string is aaaa). Informally, the algorithm constructs a finite-state machine that resembles a trie with additional links between the various internal nodes. These extra internal links allow fast transitions between failed string matches (e.g. a search for cart in a trie that does not contain cart, but contains art, and thus would fail at the node prefixed by car), to other branches of the trie that share a common suffix (e.g., in the previous case, a branch for attribute might be the best lateral transition). This allows the automaton to transition between string matches without the need for backtracking. When the string dictionary is known in advance (e.g. a computer virus database), the construction of the automaton can be performed once off-line and the compiled automaton stored for later use. In this case, its run time is linear in the length of the input plus the number of matched entries. The Aho–Corasick string-matching algorithm formed the basis of the original Unix command fgrep. == History == Like many inventions at Bell Labs at the time, the Aho–Corasick algorithm was created serendipitously with a conversation between the two after a seminar by Aho. Corasick was an information scientist who got her PhD a year earlier at Lehigh University. 
There, she did her dissertation on securing proprietary data within open systems, through the lens of both the commercial, legal, and government structures and the technical tools that were emerging at the time. In a similar realm, at Bell Labs, she was building a tool for researchers to learn about current work being done under government contractors by searching government-provided tapes of publications. For this, she wrote a primitive keyword-by-keyword search program to find chosen keywords within the tapes. Such an algorithm scaled poorly with many keywords, and one of the bibliographers using her algorithm hit the $600 usage limit on the Bell Labs machines before their lengthy search even finished. She ended up attending a seminar on algorithm design by Aho, and afterwards they got to speaking about her work and this problem. Aho suggested improving the efficiency of the program using the approach of the now Aho–Corasick algorithm, and Corasick designed a new program based on those insights. This lowered the running cost of that bibliographer's search from over $600 to just $25, and Aho–Corasick was born. == Example == In this example, we will consider a dictionary consisting of the following words: {a, ab, bab, bc, bca, c, caa}. The graph below is the Aho–Corasick data structure constructed from the specified dictionary, with each row in the table representing a node in the trie, with the column path indicating the (unique) sequence of characters from the root to the node. The data structure has one node for every prefix of every string in the dictionary. So if (bca) is in the dictionary, then there will be nodes for (bca), (bc), (b), and (). If a node is in the dictionary then it is a blue node. Otherwise it is a grey node. There is a black directed "child" arc from each node to a node whose name is found by appending one character. So there is a black arc from (bc) to (bca). 
There is a blue directed "suffix" arc from each node to the node that is the longest possible strict suffix of it in the graph. For example, for node (caa), its strict suffixes are (aa) and (a) and (). The longest of these that exists in the graph is (a). So there is a blue arc from (caa) to (a). The blue arcs can be computed in linear time by performing a breadth-first search (a potential suffix node is always at a lower level) starting from the root. The target for the blue arc of a visited node can be found by following its parent's blue arc to its longest suffix node and searching for a child of the suffix node whose character matches that of the visited node. If the character does not exist as a child, we can find the next longest suffix (following the blue arc again) and then search for the character. We can do this until we either find the character (as a child of a node) or we reach the root (which will always be a suffix of every string). There is a green "dictionary suffix" arc from each node to the next node in the dictionary that can be reached by following blue arcs. For example, there is a green arc from (bca) to (a) because (a) is the first node in the dictionary (i.e. a blue node) that is reached when following the blue arcs to (ca) and then on to (a). The green arcs can be computed in linear time by repeatedly traversing blue arcs until a blue node is found, and memoizing this information. At each step, the current node is extended by finding its child, and if that doesn't exist, finding its suffix's child, and if that doesn't work, finding its suffix's suffix's child, and so on, finally ending at the root node if no match is found. When the algorithm reaches a node, it outputs all the dictionary entries that end at the current character position in the input text. This is done by printing every node reached by following the dictionary suffix links, starting from that node, and continuing until it reaches a node with no dictionary suffix link. 
In addition, the node itself is printed, if it is a dictionary entry. Execution on input string abccab yields the following steps: == Dynamic search list == The original Aho–Corasick algorithm assumes that the set of search strings is fixed. It does not directly apply to applications in which new search strings are added during application of the algorithm. An example is an interactive indexing program, in which the user goes through the text and highlights new words or phrases to index as they see them. Bertrand Meyer introduced an incremental version of the algorithm in which the search string set can be incrementally extended during the search, retaining the algorithmic complexity of the original. == See also == Commentz-Walter algorithm == References == == External links == Aho–Corasick in NIST's Dictionary of Algorithms and Data Structures (2019-07-15) Aho-Corasick Algorithm Visualizer
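Putting the pieces above together, the trie, the blue suffix ("fail") links computed by breadth-first search, and the green dictionary-suffix links, a minimal Python sketch could look as follows. The node layout and names are illustrative, not from the original paper, but the search reproduces the matches for the article's example dictionary and the input abccab:

```python
from collections import deque

def build_automaton(words):
    # Parallel arrays, one entry per trie node: child map, suffix ("blue")
    # link, dictionary-suffix ("green") link, and the word ending here.
    children, fail, dict_link, word_at = [{}], [0], [0], [None]
    for word in words:  # insert each word into the trie
        node = 0
        for ch in word:
            if ch not in children[node]:
                children[node][ch] = len(children)
                children.append({})
                fail.append(0)
                dict_link.append(0)
                word_at.append(None)
            node = children[node][ch]
        word_at[node] = word
    # Breadth-first search fills suffix links level by level, so a node's
    # (strictly shorter) suffix target is always computed before the node.
    queue = deque(children[0].values())
    while queue:
        node = queue.popleft()
        for ch, child in children[node].items():
            queue.append(child)
            f = fail[node]  # walk the parent's suffix chain
            while f and ch not in children[f]:
                f = fail[f]
            fail[child] = children[f].get(ch, 0)
            # Green link: nearest suffix node that is a whole dictionary word.
            dict_link[child] = (fail[child] if word_at[fail[child]]
                                else dict_link[fail[child]])
    return children, fail, dict_link, word_at

def search(text, automaton):
    children, fail, dict_link, word_at = automaton
    node, matches = 0, []
    for i, ch in enumerate(text):
        while node and ch not in children[node]:
            node = fail[node]  # fall back along blue suffix links
        node = children[node].get(ch, 0)
        # Emit the node itself (if it is a word) and all dictionary suffixes.
        n = node if word_at[node] else dict_link[node]
        while n:
            matches.append((i - len(word_at[n]) + 1, word_at[n]))
            n = dict_link[n]
    return matches

auto = build_automaton(["a", "ab", "bab", "bc", "bca", "c", "caa"])
print(search("abccab", auto))
# [(0, 'a'), (0, 'ab'), (1, 'bc'), (2, 'c'), (3, 'c'), (4, 'a'), (4, 'ab')]
```

Note how the fallback loop in search never backtracks over the input: each character advances the automaton once, possibly after following a few suffix links, which is what makes the run time linear in the text length plus the number of matches.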
Wikipedia/Aho–Corasick_string_matching_algorithm
In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system. In many cases, a thread is a component of a process. The multiple threads of a given process may be executed concurrently (via multithreading capabilities), sharing resources such as memory, while different processes do not share these resources. In particular, the threads of a process share its executable code and the values of its dynamically allocated variables and non-thread-local global variables at any given time. The implementation of threads and processes differs between operating systems. == History == Threads made an early appearance under the name of "tasks" in IBM's batch processing operating system, OS/360, in 1967. It provided users with three available configurations of the OS/360 control system, of which Multiprogramming with a Variable Number of Tasks (MVT) was one. Saltzer (1966) credits Victor A. Vyssotsky with the term "thread". The use of threads in software applications became more common in the early 2000s as CPUs began to utilize multiple cores. Applications wishing to take advantage of multiple cores for performance advantages were required to employ concurrency to utilize the multiple cores. == Related concepts == Scheduling can be done at the kernel level or user level, and multitasking can be done preemptively or cooperatively. This yields a variety of related concepts. === Processes === At the kernel level, a process contains one or more kernel threads, which share the process's resources, such as memory and file handles – a process is a unit of resources, while a thread is a unit of scheduling and execution. Kernel scheduling is typically uniformly done preemptively or, less commonly, cooperatively. At the user level a process such as a runtime system can itself schedule multiple threads of execution. 
If these do not share data, as in Erlang, they are usually analogously called processes, while if they share data they are usually called (user) threads, particularly if preemptively scheduled. Cooperatively scheduled user threads are known as fibers; different processes may schedule user threads differently. User threads may be executed by kernel threads in various ways (one-to-one, many-to-one, many-to-many). The term "light-weight process" variously refers to user threads or to kernel mechanisms for scheduling user threads onto kernel threads. A process is a "heavyweight" unit of kernel scheduling, as creating, destroying, and switching processes is relatively expensive. Processes own resources allocated by the operating system. Resources include memory (for both code and data), file handles, sockets, device handles, windows, and a process control block. Processes are isolated by process isolation, and do not share address spaces or file resources except through explicit methods such as inheriting file handles or shared memory segments, or mapping the same file in a shared way – see interprocess communication. Creating or destroying a process is relatively expensive, as resources must be acquired or released. Processes are typically preemptively multitasked, and process switching is relatively expensive, beyond the basic cost of context switching, due to issues such as cache flushing (in particular, process switching changes virtual memory addressing, causing invalidation and thus flushing of an untagged translation lookaside buffer (TLB), notably on x86). === Kernel threads === A kernel thread is a "lightweight" unit of kernel scheduling. At least one kernel thread exists within each process. If multiple kernel threads exist within a process, then they share the same memory and file resources. Kernel threads are preemptively multitasked if the operating system's process scheduler is preemptive. 
Kernel threads do not own resources except for a stack, a copy of the registers including the program counter, and thread-local storage (if any), and are thus relatively cheap to create and destroy. Thread switching is also relatively cheap: it requires a context switch (saving and restoring registers and stack pointer), but does not change virtual memory and is thus cache-friendly (leaving TLB valid). The kernel can assign one or more software threads to each core in a CPU (a core can be assigned multiple software threads depending on its support for hardware multithreading), and can swap out threads that get blocked. However, kernel threads take much longer than user threads to be swapped. === User threads === Threads are sometimes implemented in userspace libraries, thus called user threads. The kernel is unaware of them, so they are managed and scheduled in userspace. Some implementations base their user threads on top of several kernel threads, to benefit from multi-processor machines (M:N model). User threads as implemented by virtual machines are also called green threads. As user thread implementations are typically entirely in userspace, context switching between user threads within the same process is extremely efficient because it does not require any interaction with the kernel at all: a context switch can be performed by locally saving the CPU registers used by the currently executing user thread or fiber and then loading the registers required by the user thread or fiber to be executed. Since scheduling occurs in userspace, the scheduling policy can be more easily tailored to the requirements of the program's workload. However, the use of blocking system calls in user threads (as opposed to kernel threads) can be problematic. If a user thread or a fiber performs a system call that blocks, the other user threads and fibers in the process are unable to run until the system call returns. 
A typical example of this problem is when performing I/O: most programs are written to perform I/O synchronously. When an I/O operation is initiated, a system call is made, and does not return until the I/O operation has been completed. In the intervening period, the entire process is "blocked" by the kernel and cannot run, which starves other user threads and fibers in the same process from executing. A common solution to this problem (used, in particular, by many green threads implementations) is providing an I/O API that implements an interface that blocks the calling thread, rather than the entire process, by using non-blocking I/O internally, and scheduling another user thread or fiber while the I/O operation is in progress. Similar solutions can be provided for other blocking system calls. Alternatively, the program can be written to avoid the use of synchronous I/O or other blocking system calls (in particular, using non-blocking I/O, including lambda continuations and/or async/await primitives). === Fibers === Fibers are an even lighter unit of scheduling which are cooperatively scheduled: a running fiber must explicitly "yield" to allow another fiber to run, which makes their implementation much easier than kernel or user threads. A fiber can be scheduled to run in any thread in the same process. This permits applications to gain performance improvements by managing scheduling themselves, instead of relying on the kernel scheduler (which may not be tuned for the application). Some research implementations of the OpenMP parallel programming model implement their tasks through fibers. Closely related to fibers are coroutines, with the distinction being that coroutines are a language-level construct, while fibers are a system-level construct. 
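The solution sketched above, scheduling other work while an I/O wait is in progress instead of blocking the whole process, is what event loops such as Python's asyncio provide. In this illustrative sketch (the coroutine names and delays are arbitrary, and asyncio.sleep stands in for a real non-blocking I/O call), two simulated I/O waits overlap instead of running back to back:

```python
import asyncio
import time

async def fetch(name, delay):
    # "await" yields control to the event loop instead of blocking the
    # whole process, so other coroutines run during the simulated I/O wait.
    await asyncio.sleep(delay)  # stand-in for a non-blocking I/O call
    return f"{name} done"

async def main():
    start = time.monotonic()
    # Both waits are in flight at once; neither blocks the other, so the
    # total elapsed time is close to one delay, not the sum of both.
    results = await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(results)  # ['a done', 'b done']
```

Had fetch used a blocking call instead of await, the second coroutine could not have started until the first returned, which is precisely the starvation problem described for user threads above.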
=== Threads vs processes === Threads differ from traditional multitasking operating-system processes in several ways: processes are typically independent, while threads exist as subsets of a process processes carry considerably more state information than threads, whereas multiple threads within a process share process state as well as memory and other resources processes have separate address spaces, whereas threads share their address space processes interact only through system-provided inter-process communication mechanisms context switching between threads in the same process typically occurs faster than context switching between processes Systems such as Windows NT and OS/2 are said to have cheap threads and expensive processes; in other operating systems there is not so great a difference except in the cost of an address-space switch, which on some architectures (notably x86) results in a translation lookaside buffer (TLB) flush. Advantages and disadvantages of threads vs processes include: Lower resource consumption of threads: using threads, an application can operate using fewer resources than it would need when using multiple processes. Simplified sharing and communication of threads: unlike processes, which require a message passing or shared memory mechanism to perform inter-process communication (IPC), threads can communicate through data, code and files they already share. Thread crashes a process: due to threads sharing the same address space, an illegal operation performed by a thread can crash the entire process; therefore, one misbehaving thread can disrupt the processing of all the other threads in the application. == Scheduling == === Preemptive vs cooperative scheduling === Operating systems schedule threads either preemptively or cooperatively. Multi-user operating systems generally favor preemptive multithreading for its finer-grained control over execution time via context switching. 
However, preemptive scheduling may context-switch threads at moments unanticipated by programmers, thus causing lock convoy, priority inversion, or other side-effects. In contrast, cooperative multithreading relies on threads to relinquish control of execution, thus ensuring that threads run to completion. This can cause problems if a cooperatively multitasked thread blocks by waiting on a resource or if it starves other threads by not yielding control of execution during intensive computation. === Single- vs multi-processor systems === Until the early 2000s, most desktop computers had only one single-core CPU, with no support for hardware threads, although threads were still used on such computers because switching between threads was generally still quicker than full-process context switches. In 2002, Intel added support for simultaneous multithreading to the Pentium 4 processor, under the name hyper-threading; in 2005, they introduced the dual-core Pentium D processor and AMD introduced the dual-core Athlon 64 X2 processor. Systems with a single processor generally implement multithreading by time slicing: the central processing unit (CPU) switches between different software threads. This context switching usually occurs frequently enough that users perceive the threads or tasks as running in parallel (for popular server/desktop operating systems, maximum time slice of a thread, when other threads are waiting, is often limited to 100–200ms). On a multiprocessor or multi-core system, multiple threads can execute in parallel, with every processor or core executing a separate thread simultaneously; on a processor or core with hardware threads, separate software threads can also be executed concurrently by separate hardware threads. === Threading models === ==== 1:1 (kernel-level threading) ==== Threads created by the user in a 1:1 correspondence with schedulable entities in the kernel are the simplest possible threading implementation. 
OS/2 and Win32 used this approach from the start, while on Linux the GNU C Library implements this approach (via the NPTL or older LinuxThreads). This approach is also used by Solaris, NetBSD, FreeBSD, macOS, and iOS. ==== M:1 (user-level threading) ==== An M:1 model implies that all application-level threads map to one kernel-level scheduled entity; the kernel has no knowledge of the application threads. With this approach, context switching can be done very quickly and, in addition, it can be implemented even on simple kernels which do not support threading. One of the major drawbacks, however, is that it cannot benefit from the hardware acceleration on multithreaded processors or multi-processor computers: there is never more than one thread being scheduled at the same time. For example: If one of the threads needs to execute an I/O request, the whole process is blocked and the threading advantage cannot be used. GNU Portable Threads uses user-level threading, as does State Threads. ==== M:N (hybrid threading) ==== M:N maps some M number of application threads onto some N number of kernel entities, or "virtual processors." This is a compromise between kernel-level ("1:1") and user-level ("N:1") threading. In general, "M:N" threading systems are more complex to implement than either kernel or user threads, because changes to both kernel and user-space code are required. In the M:N implementation, the threading library is responsible for scheduling user threads on the available schedulable entities; this makes context switching of threads very fast, as it avoids system calls. However, this increases complexity and the likelihood of priority inversion, as well as suboptimal scheduling without extensive (and expensive) coordination between the userland scheduler and the kernel scheduler. 
==== Hybrid implementation examples ====
Scheduler activations used by older versions of the NetBSD native POSIX threads library implementation (an M:N model as opposed to a 1:1 kernel or userspace implementation model)
Light-weight processes used by older versions of the Solaris operating system
Marcel from the PM2 project
The OS for the Tera-Cray MTA-2
The Glasgow Haskell Compiler (GHC) for the language Haskell uses lightweight threads which are scheduled on operating system threads
==== History of threading models in Unix systems ====
SunOS 4.x implemented light-weight processes or LWPs.
NetBSD 2.x+ and DragonFly BSD implement LWPs as kernel threads (1:1 model).
SunOS 5.2 through SunOS 5.8, as well as NetBSD 2 to NetBSD 4, implemented a two-level model, multiplexing one or more user-level threads on each kernel thread (M:N model).
SunOS 5.9 and later, as well as NetBSD 5, eliminated user threads support, returning to a 1:1 model.
FreeBSD 5 implemented the M:N model. FreeBSD 6 supported both 1:1 and M:N; users could choose which one should be used with a given program using /etc/libmap.conf. Starting with FreeBSD 7, 1:1 became the default. FreeBSD 8 no longer supports the M:N model.
== Single-threaded vs multithreaded programs == In computer programming, single-threading is the processing of one instruction at a time. In the formal analysis of the variables' semantics and process state, the term single threading can be used differently to mean "backtracking within a single thread", which is common in the functional programming community. Multithreading is mainly found in multitasking operating systems. Multithreading is a widespread programming and execution model that allows multiple threads to exist within the context of one process. These threads share the process's resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution.
Multithreading can also be applied to one process to enable parallel execution on a multiprocessing system. Multithreading libraries tend to provide a function call to create a new thread, which takes a function as a parameter. A concurrent thread is then created which starts running the passed function and ends when the function returns. The thread libraries also offer data synchronization functions. === Threads and data synchronization === Threads in the same process share the same address space. This allows concurrently running code to couple tightly and conveniently exchange data without the overhead or complexity of an IPC. When shared between threads, however, even simple data structures become prone to race conditions if they require more than one CPU instruction to update: two threads may end up attempting to update the data structure at the same time and find it unexpectedly changing underfoot. Bugs caused by race conditions can be very difficult to reproduce and isolate. To prevent this, threading application programming interfaces (APIs) offer synchronization primitives such as mutexes to lock data structures against concurrent access. On uniprocessor systems, a thread running into a locked mutex must sleep and hence trigger a context switch. On multi-processor systems, the thread may instead poll the mutex in a spinlock. Both of these may sap performance and force processors in symmetric multiprocessing (SMP) systems to contend for the memory bus, especially if the granularity of the locking is too fine. Other synchronization APIs include condition variables, critical sections, semaphores, and monitors. === Thread pools === A popular programming pattern involving threads is that of thread pools where a set number of threads are created at startup that then wait for a task to be assigned. When a new task arrives, it wakes up, completes the task and goes back to waiting. 
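The thread-creation call that takes a function as a parameter, and the mutex-based synchronization just described, can both be sketched with Python's standard threading module (the shared counter is an illustrative example, not from the source):

```python
# Sketch of the API shape described above: a thread-creation call takes a
# function, and a mutex (Lock) guards shared data against race conditions.
import threading

counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:        # without the mutex, += would race (read-modify-write)
            counter += 1

# Each Thread starts running the passed function and ends when it returns.
threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()              # wait for each thread's function to return
# counter is now exactly 4 * 10_000 because every update held the lock
```

Removing the `with lock:` line turns this into exactly the race condition the text warns about: updates interleave and increments are lost nondeterministically.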
This avoids the relatively expensive thread creation and destruction functions for every task performed and takes thread management out of the application developer's hands, leaving it to a library or the operating system that is better suited to optimize thread management. === Multithreaded programs vs single-threaded programs pros and cons === Multithreaded applications have the following advantages vs single-threaded ones:
Responsiveness: multithreading can allow an application to remain responsive to input. In a one-thread program, if the main execution thread blocks on a long-running task, the entire application can appear to freeze. By moving such long-running tasks to a worker thread that runs concurrently with the main execution thread, it is possible for the application to remain responsive to user input while executing tasks in the background. On the other hand, in most cases multithreading is not the only way to keep a program responsive, with non-blocking I/O and/or Unix signals being available for obtaining similar results.
Parallelization: applications looking to use multicore or multi-CPU systems can use multithreading to split data and tasks into parallel subtasks and let the underlying architecture manage how the threads run, either concurrently on one core or in parallel on multiple cores. GPU computing environments like CUDA and OpenCL use the multithreading model where dozens to hundreds of threads run in parallel across data on a large number of cores. This, in turn, enables better system utilization and, provided that synchronization costs don't eat up the benefits, can provide faster program execution.
Multithreaded applications have the following drawbacks:
Synchronization complexity and related bugs: when using shared resources typical for threaded programs, the programmer must be careful to avoid race conditions and other non-intuitive behaviors.
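The thread-pool pattern described above maps directly onto Python's standard library; a minimal sketch (the squaring task is illustrative):

```python
# Thread-pool sketch: a fixed set of worker threads is created up front and
# reused for many small tasks, instead of creating a thread per task.
from concurrent.futures import ThreadPoolExecutor

def task(n):
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:   # workers created once
    results = list(pool.map(task, range(10)))     # tasks queued to the pool
```

The pool, not the application, decides which worker wakes up for each task; `pool.map` still returns results in submission order.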
In order for data to be correctly manipulated, threads will often need to rendezvous in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using mutexes) to prevent common data from being read or overwritten in one thread while being modified by another. Careless use of such primitives can lead to deadlocks, livelocks or races over resources. As Edward A. Lee has written: "Although threads seem to be a small step from sequential computation, in fact, they represent a huge step. They discard the most essential and appealing properties of sequential computation: understandability, predictability, and determinism. Threads, as a model of computation, are wildly non-deterministic, and the job of the programmer becomes one of pruning that nondeterminism."
Being untestable: in general, multithreaded programs are non-deterministic, and as a result, are untestable. In other words, a multithreaded program can easily have bugs which never manifest on a test system, manifesting only in production. This can be alleviated by restricting inter-thread communications to certain well-defined patterns (such as message-passing).
Synchronization costs: because a thread context switch on modern CPUs can cost up to 1 million CPU cycles, writing efficient multithreaded programs is difficult. In particular, special attention has to be paid to keep inter-thread synchronization from becoming too frequent.
== Programming language support == Many programming languages support threading in some capacity. IBM PL/I(F) included support for multithreading (called multitasking) as early as the late 1960s, and this was continued in the Optimizing Compiler and later versions. The IBM Enterprise PL/I compiler introduced a new model "thread" API. Neither version was part of the PL/I standard. Many implementations of C and C++ support threading, and provide access to the native threading APIs of the operating system.
A standardized interface for thread implementation is POSIX Threads (Pthreads), which is a set of C-function library calls. OS vendors are free to implement the interface as desired, but the application developer should be able to use the same interface across multiple platforms. Most Unix platforms, including Linux, support Pthreads. Microsoft Windows has its own set of thread functions in the process.h interface for multithreading, like beginthread. Some higher-level (and usually cross-platform) programming languages, such as Java, Python, and .NET Framework languages, expose threading to developers while abstracting the platform-specific differences in threading implementations in the runtime. Several other programming languages and language extensions also try to abstract the concept of concurrency and threading from the developer fully (Cilk, OpenMP, Message Passing Interface (MPI)). Some languages are designed for sequential parallelism instead (especially using GPUs), without requiring concurrency or threads (Ateji PX, CUDA). A few interpreted programming languages have implementations (e.g., Ruby MRI for Ruby, CPython for Python) which support threading and concurrency but not parallel execution of threads, due to a global interpreter lock (GIL). The GIL is a mutual exclusion lock held by the interpreter that prevents the interpreter from interpreting the application's code on two or more threads at once. This effectively limits the parallelism on multiple-core systems. It also limits performance for processor-bound threads (which require the processor), but doesn't affect I/O-bound or network-bound ones as much. Other implementations of interpreted programming languages, such as Tcl using the Thread extension, avoid the GIL limit by using an Apartment model where data and code must be explicitly "shared" between threads. In Tcl each thread has one or more interpreters.
In programming models such as CUDA designed for data parallel computation, an array of threads runs the same code in parallel, with each thread using only its ID to find its data in memory. In essence, the application must be designed so that each thread performs the same operation on different segments of memory so that they can operate in parallel and use the GPU architecture. Hardware description languages such as Verilog have a different threading model that supports extremely large numbers of threads (for modeling hardware). == See also == == References == == Further reading ==
Wikipedia/Thread_(computer_science)
In computer programming, an anonymous function (function literal, expression or block) is a function definition that is not bound to an identifier. Anonymous functions are often arguments being passed to higher-order functions or used for constructing the result of a higher-order function that needs to return a function. If the function is only used once, or a limited number of times, an anonymous function may be syntactically lighter than using a named function. Anonymous functions are ubiquitous in functional programming languages and other languages with first-class functions, where they fulfil the same role for the function type as literals do for other data types. Anonymous functions originate in the work of Alonzo Church in his invention of the lambda calculus, in which all functions are anonymous, in 1936, before electronic computers. In several programming languages, anonymous functions are introduced using the keyword lambda, and anonymous functions are often referred to as lambdas or lambda abstractions. Anonymous functions have been a feature of programming languages since Lisp in 1958, and a growing number of modern programming languages support anonymous functions. == Names == The names "lambda abstraction", "lambda function", and "lambda expression" refer to the notation of function abstraction in lambda calculus, where the usual function f(x) = M would be written (λx.M), and where M is an expression that uses x. Compare to the Python syntax of lambda x: M. The name "arrow function" refers to the mathematical "maps to" symbol, x ↦ M. Compare to the JavaScript syntax of x => M. == Uses == Anonymous functions can be used for containing functionality that need not be named and possibly for short-term use. Some notable examples include closures and currying. The use of anonymous functions is a matter of style. Using them is never the only way to solve a problem; each anonymous function could instead be defined as a named function and called by name. 
Anonymous functions often provide a briefer notation than defining named functions. In languages that do not permit the definition of named functions in local scopes, anonymous functions may provide encapsulation via localized scope; however, the code in the body of such an anonymous function may not be re-usable, or amenable to separate testing. Short/simple anonymous functions used in expressions may be easier to read and understand than separately defined named functions, though without a descriptive name they may be more difficult to understand. In some programming languages, anonymous functions are commonly implemented for very specific purposes such as binding events to callbacks or instantiating the function for particular values, which may be more efficient in a dynamic language, more readable, and less error-prone than calling a named function. The following examples are written in Python 3. === Sorting === When attempting to sort in a non-standard way, it may be easier to contain the sorting logic as an anonymous function instead of creating a named function. Most languages provide a generic sort function that implements a sort algorithm that will sort arbitrary objects. This function usually accepts an arbitrary function that determines how to compare whether two elements are equal or if one is greater or less than the other. Consider Python code that sorts a list of strings by the length of each string. The anonymous function in this example is a lambda expression: it accepts one argument, x, and returns the length of its argument, which is then used by the sort() method as the criterion for sorting. The basic syntax of a lambda function in Python is lambda parameters: expression. The expression returned by the lambda function can be assigned to a variable and used in the code at multiple places.
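The sorting example described above might look like the following (the list contents are illustrative, chosen so every string has a distinct length):

```python
# Sorting a list of strings by length with an anonymous key function.
words = ["chair", "at", "boat", "a"]
words.sort(key=lambda x: len(x))   # the lambda returns the sort criterion
# words is now ['a', 'at', 'boat', 'chair']

# The same lambda can be bound to a name and reused elsewhere:
length = lambda x: len(x)
shortest = min(words, key=length)
```

Here `lambda x: len(x)` follows the general form `lambda parameters: expression` given in the text.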
Another example would be sorting items in a list by the name of their class (in Python, everything has a class). For instance, 11.2 has class name "float", 10 has class name "int", and 'number' has class name "str"; the sorted order is "float", "int", then "str". === Closures === Closures are functions evaluated in an environment containing bound variables. An example is an anonymous function that binds the variable "threshold" and compares its input to that threshold; this can be used as a sort of generator of comparison functions. It would be impractical to create a named function for every possible comparison, and it may be too inconvenient to keep the threshold around for further use. Regardless of the reason why a closure is used, the anonymous function is the entity that contains the functionality that does the comparing. === Currying === Currying is the process of changing a function so that rather than taking multiple inputs, it takes a single input and returns a function which accepts the second input, and so forth. In this example, a function that performs division by any integer is transformed into one that performs division by a set integer: a function divisor generates functions with a specified divisor, and the functions half and third curry the divide function with a fixed divisor. While the use of anonymous functions is perhaps not common with currying, they can still be used. The divisor function also forms a closure by binding the variable d. === Higher-order functions === A higher-order function is a function that takes a function as an argument or returns one as a result. This is commonly used to customize the behavior of a generically defined function, often a looping construct or recursion scheme. Anonymous functions are a convenient way to specify such function arguments. The following examples are in Python 3. ==== Map ==== The map function performs a function call on each element of a list.
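The closure and currying examples referred to above can be sketched as follows; the function bodies are reconstructions consistent with the text's description, not the article's original listings:

```python
# Closure: the anonymous function "remembers" the bound variable threshold,
# acting as a generator of comparison functions.
def comparator(threshold):
    return lambda value: value > threshold   # threshold is captured

over_10 = comparator(10)

# Currying: fix the divisor d, turning two-input division into a
# one-argument function. The lambda forms a closure by binding d.
def divisor(d):
    return lambda x: x / d

half = divisor(2)
third = divisor(3)
```

Each call to `comparator` or `divisor` produces a fresh function carrying its own bound value, so no threshold or divisor needs to be kept around separately.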
The following example squares every element in an array with an anonymous function. The anonymous function accepts an argument and multiplies it by itself (squares it). The map-with-lambda form is discouraged by the creators of the language, who maintain that a list comprehension has the same meaning and is more aligned with the philosophy of the language. ==== Filter ==== The filter function returns all elements from a list that evaluate True when passed to a certain function. The anonymous function checks if the argument passed to it is even. As with map, a list comprehension is considered the more appropriate form. ==== Fold ==== A fold function runs over all elements in a structure (for lists usually left-to-right, a "left fold", called reduce in Python), accumulating a value as it goes. This can be used to combine all elements of a structure into one value. For example, folding multiplication over the list of the numbers 1 through 5 performs (((1 × 2) × 3) × 4) × 5 = 120. The anonymous function here is the multiplication of the two arguments. The result of a fold need not be one value. Instead, both map and filter can be created using fold. In map, the value that is accumulated is a new list, containing the results of applying a function to each element of the original list. In filter, the value that is accumulated is a new list containing only those elements that match the given condition.
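The map, filter, and fold examples discussed above, together with the comprehension forms preferred by the language's creators, can be sketched as:

```python
from functools import reduce

nums = [1, 2, 3, 4, 5]

squares = list(map(lambda x: x * x, nums))        # map with a lambda
squares2 = [x * x for x in nums]                  # preferred comprehension form

evens = list(filter(lambda x: x % 2 == 0, nums))  # filter with a lambda
evens2 = [x for x in nums if x % 2 == 0]

# Left fold: ((((1 * 2) * 3) * 4) * 5) = 120
product = reduce(lambda x, y: x * y, nums)
```

The `reduce` call reproduces the worked example in the text: multiplying 1 through 5 yields 120.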
For example, the ML languages are statically typed and fundamentally include anonymous functions, and Delphi, a dialect of Object Pascal, has been extended to support anonymous functions, as has C++ (by the C++11 standard). Second, the languages that treat functions as first-class functions (Dylan, Haskell, JavaScript, Lisp, ML, Perl, Python, Ruby, Scheme) generally have anonymous function support so that functions can be defined and passed around as easily as other data types. == Examples of anonymous functions == == See also ==
First-class function
Lambda calculus definition
== References == == External links ==
Anonymous Methods - When Should They Be Used? (blog about anonymous function in Delphi)
Compiling Lambda Expressions: Scala vs. Java 8
php anonymous functions
Lambda functions in various programming languages
Functions in Go
Wikipedia/Lambda_function_(computer_programming)
In computer science, an operation, function or expression is said to have a side effect if it has any observable effect other than its primary effect of reading the value of its arguments and returning a value to the invoker of the operation. Example side effects include modifying a non-local variable, a static local variable or a mutable argument passed by reference; raising errors or exceptions; performing I/O; or calling other functions with side effects. In the presence of side effects, a program's behaviour may depend on history; that is, the order of evaluation matters. Understanding and debugging a function with side effects requires knowledge about the context and its possible histories. Side effects play an important role in the design and analysis of programming languages. The degree to which side effects are used depends on the programming paradigm. For example, imperative programming is commonly used to produce side effects, to update a system's state. By contrast, declarative programming is commonly used to report on the state of a system, without side effects. Functional programming aims to minimize or eliminate side effects. The lack of side effects makes it easier to do formal verification of a program. The functional language Haskell eliminates side effects such as I/O and other stateful computations by replacing them with monadic actions. Functional languages such as Standard ML, Scheme and Scala do not restrict side effects, but it is customary for programmers to avoid them. Effect systems extend types to keep track of effects, permitting concise notation for functions with effects, while maintaining information about the extent and nature of side effects. In particular, functions without effects correspond to pure functions. Assembly language programmers must be aware of hidden side effects—instructions that modify parts of the processor state which are not mentioned in the instruction's mnemonic.
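To make the software-level distinction concrete, a minimal Python illustration (the function names and data are my own, not from the source):

```python
# A function with a side effect: it mutates state outside its own scope,
# so its observable behavior depends on history.
log = []

def append_and_sum(x):
    log.append(x)          # side effect: modifies a non-local variable
    return sum(log)        # the result depends on every previous call

# A pure function: the output depends only on the argument,
# with no observable effect beyond returning a value.
def square(x):
    return x * x

first = append_and_sum(2)   # first call
second = append_and_sum(2)  # same argument, different result
```

The impure function returns different values for identical arguments because the order and number of prior calls matter, exactly the history-dependence described above.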
A classic example of a hidden side effect is an arithmetic instruction that implicitly modifies condition codes (a hidden side effect) while it explicitly modifies a register (the intended effect). One potential drawback of an instruction set with hidden side effects is that, if many instructions have side effects on a single piece of state, like condition codes, then the logic required to update that state sequentially may become a performance bottleneck. The problem is particularly acute on some processors designed with pipelining (since 1990) or with out-of-order execution. Such a processor may require additional control circuitry to detect hidden side effects and stall the pipeline if the next instruction depends on the results of those effects. == Referential transparency == Absence of side effects is a necessary, but not sufficient, condition for referential transparency. Referential transparency means that an expression (such as a function call) can be replaced with its value. This requires that the expression is pure, that is to say the expression must be deterministic (always give the same value for the same input) and side-effect free. == Temporal side effects == Side effects caused by the time taken for an operation to execute are usually ignored when discussing side effects and referential transparency. There are some cases, such as with hardware timing or testing, where operations are inserted specifically for their temporal side effects e.g. sleep(5000) or for (int i = 0; i < 10000; ++i) {}. These instructions do not change state other than taking an amount of time to complete. == Idempotence == A subroutine with side effects is idempotent if multiple applications of the subroutine have the same effect on the system state as a single application, in other words if the function from the system state space to itself associated with the subroutine is idempotent in the mathematical sense. 
For instance, consider the following Python program: setx is idempotent because the second application of setx to 3 has the same effect on the system state as the first application: x was already set to 3 after the first application, and it is still set to 3 after the second application. A pure function is idempotent if it is idempotent in the mathematical sense. For instance, consider the following Python program: abs is idempotent because the second application of abs to the return value of the first application to -3 returns the same value as the first application to -3. == Example == One common demonstration of side effect behavior is that of the assignment operator in C. The assignment a = b is an expression that evaluates to the same value as the expression b, with the side effect of storing the R-value of b into the L-value of a. This allows multiple assignment, as in a = b = c. Because the operator right-associates, this is equivalent to a = (b = c). This presents a potential hangup for novice programmers, who may confuse the assignment operator = with the equality operator ==. == See also ==
Action at a distance (computer programming)
Don't-care term
Sequence point
Side-channel attack
Undefined behaviour
Unspecified behaviour
Frame problem
== References ==
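The two programs described above can be reconstructed as the following sketch, consistent with the text's description of setx and abs:

```python
# setx has a side effect (setting a global) but is idempotent:
# applying it twice leaves the system in the same state as applying it once.
x = 0

def setx(n):
    global x
    x = n

setx(3)
setx(3)     # the second application changes nothing: x is still 3

# abs is a pure function that is idempotent in the mathematical sense:
# abs(abs(v)) == abs(v) for every v.
same = abs(abs(-3)) == abs(-3)
```

Idempotence of the side-effecting kind concerns the system state; idempotence of the pure kind concerns composing the function with itself.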
Wikipedia/Side-effect_(computer_science)
In computing and computer programming, exception handling is the process of responding to the occurrence of exceptions – anomalous or exceptional conditions requiring special processing – during the execution of a program. In general, an exception breaks the normal flow of execution and executes a pre-registered exception handler; the details of how this is done depend on whether it is a hardware or software exception and how the software exception is implemented. Exceptions are defined by different layers of a computer system, and the typical layers are CPU-defined interrupts, operating system (OS)-defined signals, programming language-defined exceptions. Each layer requires different ways of exception handling although they may be interrelated, e.g. a CPU interrupt could be turned into an OS signal. Some exceptions, especially hardware ones, may be handled so gracefully that execution can resume where it was interrupted. == Definition == The definition of an exception is based on the observation that each procedure has a precondition, a set of circumstances for which it will terminate "normally". An exception handling mechanism allows the procedure to raise an exception if this precondition is violated, for example if the procedure has been called on an abnormal set of arguments. The exception handling mechanism then handles the exception. The precondition, and the definition of exception, is subjective. The set of "normal" circumstances is defined entirely by the programmer, e.g. the programmer may deem division by zero to be undefined, hence an exception, or devise some behavior such as returning zero or a special "ZERO DIVIDE" value (circumventing the need for exceptions). Common exceptions include an invalid argument (e.g. 
value is outside of the domain of a function), an unavailable resource (like a missing file, a network drive error, or out-of-memory errors), or that the routine has detected a normal condition that requires special handling, e.g., attention, end of file. Social pressure is a major influence on the scope of exceptions and use of exception-handling mechanisms, i.e. "examples of use, typically found in core libraries, and code examples in technical books, magazine articles, and online discussion forums, and in an organization's code standards". Exception handling solves the semipredicate problem, in that the mechanism distinguishes normal return values from erroneous ones. In languages without built-in exception handling such as C, routines would need to signal the error in some other way, such as the common return code and errno pattern. Taking a broad view, errors can be considered to be a proper subset of exceptions, and explicit error mechanisms such as errno can be considered (verbose) forms of exception handling. The term "exception" is preferred to "error" because it does not imply that anything is wrong: a condition viewed as an error by one procedure or programmer may not be viewed that way by another. The term "exception" may be misleading because its connotation of "anomaly" indicates that raising an exception is abnormal or unusual, when in fact raising the exception may be a normal and usual situation in the program. For example, suppose a lookup function for an associative array throws an exception if the key has no value associated. Depending on context, this "key absent" exception may occur much more often than a successful lookup. == History == The first hardware exception handling was found in the UNIVAC I from 1951. Arithmetic overflow executed two instructions at address 0 which could transfer control or fix up the result. Software exception handling developed in the 1960s and 1970s.
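The associative-array lookup mentioned above can be sketched in Python, where dictionary lookup raises KeyError for an absent key; the fallback-to-zero policy is an illustrative choice, not prescribed by the source:

```python
# Exception handling separates the normal return value from the exceptional
# "key absent" outcome, avoiding the semipredicate problem described above.
inventory = {"apples": 3}

def lookup(table, key):
    try:
        return table[key]      # normal flow: the key is present
    except KeyError:           # pre-registered handler for the exception
        return 0               # domain-specific recovery: treat absence as zero

present = lookup(inventory, "apples")
absent = lookup(inventory, "pears")   # would raise KeyError without the handler
```

Note that, as the text observes, the "key absent" path here is not an error at all; it may even be the common case.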
Exception handling was subsequently widely adopted by many programming languages from the 1980s onward. == Hardware exceptions == There is no clear consensus as to the exact meaning of an exception with respect to hardware. From the implementation point of view, it is handled identically to an interrupt: the processor halts execution of the current program, looks up the interrupt handler in the interrupt vector table for that exception or interrupt condition, saves state, and switches control. == IEEE 754 floating-point exceptions == Exception handling in the IEEE 754 floating-point standard refers in general to exceptional conditions and defines an exception as "an event that occurs when an operation on some particular operands has no outcome suitable for every reasonable application. That operation might signal one or more exceptions by invoking the default or, if explicitly requested, a language-defined alternate handling." By default, an IEEE 754 exception is resumable and is handled by substituting a predefined value for different exceptions, e.g. infinity for a divide by zero exception, and providing status flags for later checking of whether the exception occurred (see C99 programming language for a typical example of handling of IEEE 754 exceptions). An exception-handling style enabled by the use of status flags involves: first computing an expression using a fast, direct implementation; checking whether it failed by testing status flags; and then, if necessary, calling a slower, more numerically robust, implementation. The IEEE 754 standard uses the term "trapping" to refer to the calling of a user-supplied exception-handling routine on exceptional conditions, and is an optional feature of the standard. The standard recommends several usage scenarios for this, including the implementation of non-default pre-substitution of a value followed by resumption, to concisely handle removable singularities. 
The default IEEE 754 exception handling behaviour of resumption following pre-substitution of a default value avoids the risks inherent in changing flow of program control on numerical exceptions. For example, the 1996 Cluster spacecraft launch ended in a catastrophic explosion due in part to the Ada exception handling policy of aborting computation on arithmetic error. William Kahan claims the default IEEE 754 exception handling behavior would have prevented this. == In programming languages == == In user interfaces == Front-end web development frameworks, such as React and Vue, have introduced error handling mechanisms where errors propagate up the user interface (UI) component hierarchy, in a way that is analogous to how errors propagate up the call stack in executing code. Here the error boundary mechanism serves as an analogue to the typical try-catch mechanism. Thus a component can ensure that errors from its child components are caught and handled, and not propagated up to parent components. For example, in Vue, a component would catch errors by implementing the errorCaptured hook. When used like this in markup, the error produced by the child component is caught and handled by the parent component. == See also ==
Triple fault
Data validation
== References == == External links ==
A Crash Course on the Depths of Win32 Structured Exception Handling by Matt Pietrek - Microsoft Systems Journal (1997)
Article "C++ Exception Handling" by Christophe de Dinechin
Article "Exceptional practices" by Brian Goetz
Article "Object Oriented Exception Handling in Perl" by Arun Udaya Shankar
Article "Programming with Exceptions in C++" by Kyle Loudon
Article "Unchecked Exceptions - The Controversy"
Conference slides Floating-Point Exception-Handling policies (pdf p. 46) by William Kahan
Descriptions from Portland Pattern Repository
Does Java Need Checked Exceptions?
Wikipedia/Exception_(computer_science)
In computer science, divide and conquer is an algorithm design paradigm. A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem. The divide-and-conquer technique is the basis of efficient algorithms for many problems, such as sorting (e.g., quicksort, merge sort), multiplying large numbers (e.g., the Karatsuba algorithm), finding the closest pair of points, syntactic analysis (e.g., top-down parsers), and computing the discrete Fourier transform (FFT). Designing efficient divide-and-conquer algorithms can be difficult. As in mathematical induction, it is often necessary to generalize the problem to make it amenable to a recursive solution. The correctness of a divide-and-conquer algorithm is usually proved by mathematical induction, and its computational cost is often determined by solving recurrence relations. == Divide and conquer == The divide-and-conquer paradigm is often used to find an optimal solution of a problem. Its basic idea is to decompose a given problem into two or more similar, but simpler, subproblems, to solve them in turn, and to compose their solutions to solve the given problem. Problems of sufficient simplicity are solved directly. For example, to sort a given list of n natural numbers, split it into two lists of about n/2 numbers each, sort each of them in turn, and interleave both results appropriately to obtain the sorted version of the given list (see the picture). This approach is known as the merge sort algorithm. The name "divide and conquer" is sometimes applied to algorithms that reduce each problem to only one sub-problem, such as the binary search algorithm for finding a record in a sorted list (or its analogue in numerical computing, the bisection algorithm for root finding). 
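The merge-sort description above translates directly into a divide-and-conquer recursion; a sketch:

```python
def merge_sort(xs):
    # Base case: lists of length 0 or 1 are already sorted.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])    # divide: sort each half recursively
    right = merge_sort(xs[mid:])
    # Combine: interleave (merge) the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

result = merge_sort([38, 27, 43, 3, 9, 82, 10])
```

The recursion mirrors the paradigm exactly: two half-sized subproblems, each solved the same way, then combined by merging.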
These algorithms can be implemented more efficiently than general divide-and-conquer algorithms; in particular, if they use tail recursion, they can be converted into simple loops. Under this broad definition, however, every algorithm that uses recursion or loops could be regarded as a "divide-and-conquer algorithm". Therefore, some authors consider that the name "divide and conquer" should be used only when each problem may generate two or more subproblems. The name decrease and conquer has been proposed instead for the single-subproblem class. An important application of divide and conquer is in optimization, where if the search space is reduced ("pruned") by a constant factor at each step, the overall algorithm has the same asymptotic complexity as the pruning step, with the constant depending on the pruning factor (by summing the geometric series); this is known as prune and search. == Early historical examples == Early examples of these algorithms are primarily decrease and conquer – the original problem is successively broken down into single subproblems, and indeed can be solved iteratively. Binary search, a decrease-and-conquer algorithm where the subproblems are of roughly half the original size, has a long history. While a clear description of the algorithm on computers appeared in 1946 in an article by John Mauchly, the idea of using a sorted list of items to facilitate searching dates back at least as far as Babylonia in 200 BC. Another ancient decrease-and-conquer algorithm is the Euclidean algorithm to compute the greatest common divisor of two numbers by reducing the numbers to smaller and smaller equivalent subproblems, which dates to several centuries BC. 
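The Euclidean algorithm described above reduces the pair of numbers to a strictly smaller equivalent subproblem at every step; a minimal Python sketch:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b) with
    (b, a mod b), a smaller subproblem with the same answer."""
    while b != 0:
        a, b = b, a % b
    return a
```

Because the second number strictly decreases, the loop is guaranteed to terminate, and as a single-subproblem (decrease-and-conquer) method it runs naturally as a loop rather than a recursion.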
An early example of a divide-and-conquer algorithm with multiple subproblems is Gauss's 1805 description of what is now called the Cooley–Tukey fast Fourier transform (FFT) algorithm, although he did not analyze its operation count quantitatively, and FFTs did not become widespread until they were rediscovered over a century later. An early two-subproblem D&C algorithm that was specifically developed for computers and properly analyzed is the merge sort algorithm, invented by John von Neumann in 1945. Another notable example is the algorithm invented by Anatolii A. Karatsuba in 1960 that could multiply two n-digit numbers in O(n^(log₂ 3)) operations (in Big O notation). This algorithm disproved Andrey Kolmogorov's 1956 conjecture that Ω(n²) operations would be required for that task. As another example of a divide-and-conquer algorithm that did not originally involve computers, Donald Knuth gives the method a post office typically uses to route mail: letters are sorted into separate bags for different geographical areas, each of these bags is itself sorted into batches for smaller sub-regions, and so on until they are delivered. This is related to a radix sort, described for punch-card sorting machines as early as 1929. == Advantages == === Solving difficult problems === Divide and conquer is a powerful tool for solving conceptually difficult problems: all it requires is a way of breaking the problem into sub-problems, of solving the trivial cases, and of combining the sub-problem solutions to solve the original problem. Similarly, decrease and conquer only requires reducing the problem to a single smaller problem, such as the classic Tower of Hanoi puzzle, which reduces moving a tower of height n to moving a tower of height n − 1. === Algorithm efficiency === The divide-and-conquer paradigm often helps in the discovery of efficient algorithms.
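Karatsuba's method mentioned above achieves its O(n^(log₂ 3)) bound by trading four recursive multiplications for three; an illustrative Python sketch (the digit-based splitting and single-digit base case are chosen for clarity, not for speed):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers using three recursive
    multiplications instead of four."""
    if x < 10 or y < 10:                 # base case: a single digit
        return x * y
    n = max(len(str(x)), len(str(y))) // 2
    shift = 10 ** n
    a, b = divmod(x, shift)              # x = a*shift + b
    c, d = divmod(y, shift)              # y = c*shift + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a+b)(c+d) - ac - bd == ad + bc, saving one multiplication
    cross = karatsuba(a + b, c + d) - ac - bd
    return ac * shift * shift + cross * shift + bd
```

The recursion makes three half-size multiplications per level, which is what yields the n^(log₂ 3) ≈ n^1.585 operation count instead of n².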
It was the key, for example, to Karatsuba's fast multiplication method, the quicksort and mergesort algorithms, the Strassen algorithm for matrix multiplication, and fast Fourier transforms. In all these examples, the D&C approach led to an improvement in the asymptotic cost of the solution. For example, if (a) the base cases have constant-bounded size, the work of splitting the problem and combining the partial solutions is proportional to the problem's size n, and (b) there is a bounded number p of sub-problems of size about n/p at each stage, then the cost of the divide-and-conquer algorithm will be O(n log_p n). For other types of divide-and-conquer approaches, running times can also be generalized. For example, suppose that (a) the work of splitting the problem and combining the partial solutions takes cn time, where n is the input size and c is some constant; (b) when n < 2, the algorithm takes time upper-bounded by c; and (c) there are q subproblems, each of size about n/2. Then the running times are as follows: if the number of subproblems q > 2, then the divide-and-conquer algorithm's running time is bounded by O(n^(log₂ q)); if the number of subproblems is exactly one, then the running time is bounded by O(n). If, instead, the work of splitting the problem and combining the partial solutions takes cn² time, and there are 2 subproblems each of size n/2, then the running time of the divide-and-conquer algorithm is bounded by O(n²).
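The running-time cases above follow from unrolling the recurrence T(n) = q·T(n/2) + cn and summing the resulting geometric series; a sketch (the q = 2 case, not listed above, is included for completeness):

```latex
T(n) = q\,T\!\left(\tfrac{n}{2}\right) + cn
\;\Longrightarrow\;
T(n) \le cn \sum_{i=0}^{\log_2 n} \left(\tfrac{q}{2}\right)^{i}
\quad
\begin{cases}
q > 2: & \text{the last term dominates, so } T(n) = O\!\left(q^{\log_2 n}\right) = O\!\left(n^{\log_2 q}\right),\\[2pt]
q = 2: & \text{every level contributes } cn \text{, so } T(n) = O(n \log n),\\[2pt]
q = 1: & \text{the series converges to a constant, so } T(n) = O(n).
\end{cases}
```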
=== Parallelism === Divide-and-conquer algorithms are naturally adapted for execution in multi-processor machines, especially shared-memory systems where the communication of data between processors does not need to be planned in advance, because distinct sub-problems can be executed on different processors. === Memory access === Divide-and-conquer algorithms naturally tend to make efficient use of memory caches. The reason is that once a sub-problem is small enough, it and all its sub-problems can, in principle, be solved within the cache, without accessing the slower main memory. An algorithm designed to exploit the cache in this way is called cache-oblivious, because it does not contain the cache size as an explicit parameter. Moreover, D&C algorithms can be designed for important algorithms (e.g., sorting, FFTs, and matrix multiplication) to be optimal cache-oblivious algorithms – they use the cache in a provably optimal way, in an asymptotic sense, regardless of the cache size. In contrast, the traditional approach to exploiting the cache is blocking, as in loop nest optimization, where the problem is explicitly divided into chunks of the appropriate size – this can also use the cache optimally, but only when the algorithm is tuned for the specific cache sizes of a particular machine. The same advantage exists with regard to other hierarchical storage systems, such as NUMA or virtual memory, as well as for multiple levels of cache: once a sub-problem is small enough, it can be solved within a given level of the hierarchy, without accessing the higher (slower) levels. === Roundoff control === In computations with rounded arithmetic, e.g. with floating-point numbers, a divide-and-conquer algorithm may yield more accurate results than a superficially equivalent iterative method.
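The canonical instance of this roundoff advantage is pairwise summation, discussed next; an illustrative Python sketch:

```python
def pairwise_sum(data):
    """Recursively split the data in half and add the two partial sums.
    Uses the same number of additions as a simple loop, but the
    accumulated roundoff grows like O(log n) rather than O(n)."""
    if not data:
        return 0.0
    if len(data) == 1:
        return data[0]
    mid = len(data) // 2
    return pairwise_sum(data[:mid]) + pairwise_sum(data[mid:])
```

In practice the recursion is cut off at a block of a few hundred elements summed by a plain loop, which keeps the call overhead negligible while retaining most of the accuracy benefit.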
For example, one can add N numbers either by a simple loop that adds each datum to a single variable, or by a D&C algorithm called pairwise summation that breaks the data set into two halves, recursively computes the sum of each half, and then adds the two sums. While the second method performs the same number of additions as the first and pays the overhead of the recursive calls, it is usually more accurate. == Implementation issues == === Recursion === Divide-and-conquer algorithms are naturally implemented as recursive procedures. In that case, the partial sub-problems leading to the one currently being solved are automatically stored in the procedure call stack. A recursive function is a function that calls itself within its definition. === Explicit stack === Divide-and-conquer algorithms can also be implemented by a non-recursive program that stores the partial sub-problems in some explicit data structure, such as a stack, queue, or priority queue. This approach allows more freedom in the choice of the sub-problem that is to be solved next, a feature that is important in some applications – e.g. in breadth-first recursion and the branch-and-bound method for function optimization. This approach is also the standard solution in programming languages that do not provide support for recursive procedures. === Stack size === In recursive implementations of D&C algorithms, one must make sure that there is sufficient memory allocated for the recursion stack; otherwise, the execution may fail because of stack overflow. D&C algorithms that are time-efficient often have relatively small recursion depth. For example, the quicksort algorithm can be implemented so that it never requires more than log₂ n nested recursive calls to sort n items.
Stack overflow may be difficult to avoid when using recursive procedures since many compilers assume that the recursion stack is a contiguous area of memory, and some allocate a fixed amount of space for it. Compilers may also save more information in the recursion stack than is strictly necessary, such as return address, unchanging parameters, and the internal variables of the procedure. Thus, the risk of stack overflow can be reduced by minimizing the parameters and internal variables of the recursive procedure or by using an explicit stack structure. === Choosing the base cases === In any recursive algorithm, there is considerable freedom in the choice of the base cases, the small subproblems that are solved directly in order to terminate the recursion. Choosing the smallest or simplest possible base cases is more elegant and usually leads to simpler programs, because there are fewer cases to consider and they are easier to solve. For example, a Fast Fourier Transform algorithm could stop the recursion when the input is a single sample, and the quicksort list-sorting algorithm could stop when the input is the empty list; in both examples, there is only one base case to consider, and it requires no processing. On the other hand, efficiency often improves if the recursion is stopped at relatively large base cases, and these are solved non-recursively, resulting in a hybrid algorithm. This strategy avoids the overhead of recursive calls that do little or no work and may also allow the use of specialized non-recursive algorithms that, for those base cases, are more efficient than explicit recursion. A general procedure for a simple hybrid recursive algorithm is short-circuiting the base case, also known as arm's-length recursion. In this case, whether the next step will result in the base case is checked before the function call, avoiding an unnecessary function call. 
For example, in a tree, rather than recursing to a child node and then checking whether it is null, checking for null before recursing avoids half the function calls in some algorithms on binary trees. Since a D&C algorithm eventually reduces each problem or sub-problem instance to a large number of base instances, these often dominate the overall cost of the algorithm, especially when the splitting/joining overhead is low. Note that these considerations do not depend on whether recursion is implemented by the compiler or by an explicit stack. Thus, for example, many library implementations of quicksort will switch to a simple loop-based insertion sort (or similar) algorithm once the number of items to be sorted is sufficiently small. Note that, if the empty list were the only base case, sorting a list with n entries would entail at most n quicksort calls that would do nothing but return immediately. Increasing the base cases to lists of size 2 or less will eliminate most of those do-nothing calls, and more generally a base case larger than 2 is typically used to reduce the fraction of time spent in function-call overhead or stack manipulation. Alternatively, one can employ large base cases that still use a divide-and-conquer algorithm, but implement the algorithm for a predetermined set of fixed sizes where the algorithm can be completely unrolled into code that has no recursion, loops, or conditionals (related to the technique of partial evaluation). For example, this approach is used in some efficient FFT implementations, where the base cases are unrolled implementations of divide-and-conquer FFT algorithms for a set of fixed sizes. Source-code generation methods may be used to produce the large number of separate base cases desirable to implement this strategy efficiently.
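The library-quicksort strategy described above (recursing while the segment is large, then switching to a non-recursive insertion sort below a size threshold) can be sketched as follows; the threshold value is illustrative, and real libraries tune it per platform:

```python
INSERTION_THRESHOLD = 16  # illustrative cut-off; tuned in practice

def insertion_sort(items, lo, hi):
    """Sort items[lo:hi+1] in place; efficient for small runs."""
    for i in range(lo + 1, hi + 1):
        key, j = items[i], i - 1
        while j >= lo and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key

def hybrid_quicksort(items, lo=0, hi=None):
    """Quicksort with an enlarged base case: small segments are
    handled by insertion sort, avoiding do-nothing recursive calls."""
    if hi is None:
        hi = len(items) - 1
    if hi - lo + 1 <= INSERTION_THRESHOLD:
        insertion_sort(items, lo, hi)     # enlarged base case
        return
    pivot = items[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:                         # Hoare-style partition
        while items[i] < pivot:
            i += 1
        while items[j] > pivot:
            j -= 1
        if i <= j:
            items[i], items[j] = items[j], items[i]
            i += 1
            j -= 1
    hybrid_quicksort(items, lo, j)
    hybrid_quicksort(items, i, hi)
```

Every segment of 16 or fewer elements is finished by a single non-recursive pass, so the recursion never descends into the many tiny calls that would otherwise dominate the running time.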
The generalized version of this idea is known as recursion "unrolling" or "coarsening", and various techniques have been proposed for automating the procedure of enlarging the base case. === Dynamic programming for overlapping subproblems === For some problems, the branched recursion may end up evaluating the same sub-problem many times over. In such cases it may be worth identifying and saving the solutions to these overlapping subproblems, a technique which is commonly known as memoization. Followed to the limit, it leads to bottom-up divide-and-conquer algorithms such as dynamic programming. == See also == Akra–Bazzi method – Method in computer science Decomposable aggregation function – Type of function in database management "Divide and conquer" – Strategy in politics and sociology Fork–join model – Way of setting up and executing parallel computer programs Master theorem (analysis of algorithms) – Tool for analyzing divide-and-conquer algorithms Mathematical induction – Form of mathematical proof MapReduce – Parallel programming model Heuristic (computer science) – Type of algorithm, produces approximately correct solutions == References ==
Wikipedia/Divide_and_conquer_algorithm
In assembly language programming, the function prologue is a few lines of code at the beginning of a function, which prepare the stack and registers for use within the function. Similarly, the function epilogue appears at the end of the function, and restores the stack and registers to the state they were in before the function was called. The prologue and epilogue are not a part of the assembly language itself; they represent a convention used by assembly language programmers, and compilers of many higher-level languages. They are fairly rigid, having the same form in each function. Function prologues and epilogues also sometimes contain code for buffer overflow protection. == Prologue == A function prologue typically does the following actions if the architecture has a base pointer (also known as a frame pointer) and a stack pointer: Pushes the current base pointer onto the stack, so it can be restored later. Sets the base pointer to the value of the stack pointer (which points to the top of the stack), so that the base pointer will point to the top of the stack. Moves the stack pointer further by decreasing or increasing its value, depending on whether the stack grows down or up. On x86, the stack pointer is decreased to make room for the function's local variables. Several possible prologues can be written, resulting in slightly different stack configurations. These differences are acceptable, as long as the programmer or compiler uses the stack in the correct way inside the function. As an example, in a typical x86 assembly language function prologue as produced by GCC, an immediate value N is the number of bytes reserved on the stack for local use. The same result may be achieved by using the enter instruction. More complex prologues can be obtained using different values (other than 0) for the second operand of the enter instruction.
These prologues push several base/frame pointers to allow for nested functions, as required by languages such as Pascal. However, modern versions of these languages don't use these instructions because they limit the nesting depth in some cases. == Epilogue == A function epilogue reverses the actions of the function prologue and returns control to the calling function. It typically does the following actions (this procedure may differ from one architecture to another): Drops the stack pointer to the current base pointer, so the room reserved in the prologue for local variables is freed. Pops the base pointer off the stack, so it is restored to its value before the prologue. Returns to the calling function, by popping the previous frame's program counter off the stack and jumping to it. The given epilogue will reverse the effects of either of the above prologues (either the full one, or the one which uses enter). Under certain calling conventions it is the callee's responsibility to clean the arguments off the stack, so the epilogue can also include the step of moving the stack pointer down or up. For example, these three steps may be accomplished in 32-bit x86 assembly language by a mov, pop, and ret instruction sequence. Like the prologue, the x86 processor contains a built-in instruction which performs part of the epilogue: the leave instruction performs the mov and pop steps outlined above, so leave followed by ret is equivalent to the longer sequence. A function may contain multiple epilogues. Every function exit point must either jump to a common epilogue at the end, or contain its own epilogue. Therefore, programmers or compilers often use the combination of leave and ret to exit the function at any point. (For example, a C compiler would substitute a return statement with a leave/ret sequence.) == Further reading == de Boyne Pollard, Jonathan (2010). "The gen on function perilogues". Frequently Given Answers.
Wikipedia/Function_prologue
In some programming languages, function overloading or method overloading is the ability to create multiple functions of the same name with different implementations. Calls to an overloaded function will run a specific implementation of that function appropriate to the context of the call, allowing one function call to perform different tasks depending on context. == Basic definition == For example, doTask() and doTask(object o) are overloaded functions. To call the latter, an object must be passed as a parameter, whereas the former does not require a parameter, and is called with an empty parameter field. A common error would be to assign a default value to the object in the second function, which would result in an ambiguous call error, as the compiler wouldn't know which of the two methods to use. Another example is a Print(object o) function that executes different actions based on whether it's printing text or photos. The two different functions may be overloaded as Print(text_object T); Print(image_object P). If we write the overloaded print functions for all objects our program will "print", we never have to worry about the type of the object, and the correct function is called; the call is always Print(something). == Languages supporting overloading == Languages which support function overloading include, but are not necessarily limited to, the following: Ada, Apex, C++, C#, Clojure, D, Swift, Fortran, Kotlin, Java, Julia, PostgreSQL and PL/SQL, Scala, TypeScript, Visual Basic (.NET), Wolfram Language, Elixir, Nim, Crystal, Delphi, and Python. Languages that do not support function overloading include C, Rust and Zig. == Rules in function overloading == The same function name is used for more than one function definition in a particular module, class or namespace. The functions must have different type signatures, i.e. differ in the number or the types of their formal parameters (as in C++) or additionally in their return type (as in Ada).
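In a statically-typed language the compiler applies these rules at compile time; the same selection can be mimicked at run time. The following Python sketch is hypothetical and for illustration only (the overload decorator and the shape functions are not from the article); it dispatches on the number of arguments, mirroring overload resolution by parameter count:

```python
_registry = {}

def overload(func):
    """Register func under (name, parameter count); a run-time
    stand-in for the compile-time resolution a compiler performs."""
    key = (func.__name__, func.__code__.co_argcount)
    _registry[key] = func
    def dispatcher(*args):
        # pick the implementation whose arity matches the call
        return _registry[(func.__name__, len(args))](*args)
    return dispatcher

@overload
def volume(side):                      # cube
    return side ** 3

@overload
def volume(radius, height):            # cylinder
    return 3.14159 * radius ** 2 * height

@overload
def volume(length, breadth, height):   # cuboid
    return length * breadth * height
```

Each call to volume(...) runs a different implementation depending on how many arguments are supplied, just as an overloaded C++ function would, except that the choice is made at run time rather than compile time.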
Function overloading is usually associated with statically-typed programming languages that enforce type checking in function calls. An overloaded function is a set of different functions that are callable with the same name. For any particular call, the compiler determines which overloaded function to use and resolves this at compile time. This is true for programming languages such as Java. Function overloading differs from forms of polymorphism where the choice is made at runtime, e.g. through virtual functions, instead of statically. As an example of function overloading in C++, the volume of each of several geometric components can be calculated using one of three functions named "volume", with selection based on the differing number and type of actual parameters. == Constructor overloading == Constructors, used to create instances of an object, may also be overloaded in some object-oriented programming languages. Because in many languages the constructor's name is predetermined by the name of the class, it would seem that there can be only one constructor. Whenever multiple constructors are needed, they are to be implemented as overloaded functions. In C++, default constructors take no parameters, instantiating the object members with their appropriate default values, "which is normally zero for numeral fields and empty string for string fields". For example, a default constructor for a restaurant bill object written in C++ might set the tip to 15%. The drawback to this is that it takes two steps to change the value of the created Bill object: creating it, and then assigning its values within the main program. By overloading the constructor, one could instead pass the tip and total as parameters at creation. Such an overloaded constructor, taking two parameters, is placed in the class alongside the original constructor.
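Python does not overload constructors, but default arguments give an equivalent of the two Bill constructors just described; the class below is a hypothetical sketch, not the article's C++ listing:

```python
class Bill:
    """Mimics the two overloaded constructors: a default one that
    sets the tip to 15%, and one taking total and tip as parameters."""
    def __init__(self, total=0.0, tip=0.15):
        self.total = total
        self.tip = tip

    def amount_due(self):
        """Total plus tip."""
        return self.total * (1.0 + self.tip)
```

Bill() plays the role of the default constructor, while Bill(20.0, 0.20) sets both data members in one step at creation.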
Which one gets used depends on the number of parameters provided when the new Bill object is created (none, or two). A function that creates a new Bill object can thus pass two values into the constructor and set the data members in one step. This can be useful in increasing program efficiency and reducing code length. Another reason for constructor overloading can be to enforce mandatory data members. In this case the default constructor is declared private or protected (or, preferably, deleted since C++11) to make it inaccessible from outside. For the Bill above, total might be the only constructor parameter – since a Bill has no sensible default for total – whereas tip defaults to 0.15. == Complications == Two issues interact with and complicate function overloading: name masking (due to scope) and implicit type conversion. If a function is declared in one scope, and then another function with the same name is declared in an inner scope, there are two natural possible overloading behaviors: the inner declaration masks the outer declaration (regardless of signature), or both the inner declaration and the outer declaration are included in the overload, with the inner declaration masking the outer declaration only if the signature matches. The first approach is taken in C++: "in C++, there is no overloading across scopes." As a result, to obtain an overload set with functions declared in different scopes, one needs to explicitly import the functions from the outer scope into the inner scope, with the using keyword. Implicit type conversion complicates function overloading because if the types of parameters do not exactly match the signature of one of the overloaded functions, but can match after type conversion, resolution depends on which type conversion is chosen. These can combine in confusing ways: an inexact match declared in an inner scope can mask an exact match declared in an outer scope, for instance.
For example, consider a derived class with an overloaded function taking a double or an int that should also use a function taking an int from the base class; in C++, the base-class function must be imported with a using declaration. Failing to include the using results in an int parameter passed to F in the derived class being converted to a double and matching the function in the derived class, rather than in the base class; including the using results in an overload in the derived class, and thus matches the function in the base class. == Caveats == If a method is designed with an excessive number of overloads, it may be difficult for developers to discern which overload is being called simply by reading the code. This is particularly true if some of the overloaded parameters are of types that are inherited types of other possible parameters (for example "object"). An IDE can perform the overload resolution and display (or navigate to) the correct overload. Type-based overloading can also hamper code maintenance, where code updates can accidentally change which method overload is chosen by the compiler. == See also == Abstraction (computer science) Constructor (computer science) Default argument Dynamic dispatch Factory method pattern Method signature Method overriding Object-oriented programming Operator overloading == Citations == == References == Bloch, Joshua (2018). "Effective Java: Programming Language Guide" (third ed.). Addison-Wesley. ISBN 978-0134685991. == External links == Meyer, Bertrand (October 2001). "Overloading vs Object Technology" (PDF). Eiffel column. Journal of Object-Oriented Programming. 14 (4). 101 Communications LLC: 3–7. Retrieved 27 August 2020.
Wikipedia/Function_overloading
In computer software, in compiler theory, an intrinsic function, also called a built-in function or builtin function, is a function (subroutine) available for use in a given programming language whose implementation is handled specially by the compiler. Typically, it may substitute a sequence of automatically generated instructions for the original function call, similar to an inline function. Unlike an inline function, the compiler has an intimate knowledge of an intrinsic function and can thus better integrate and optimize it for a given situation. Compilers that implement intrinsic functions may enable them only when a program requests optimization, otherwise falling back to a default implementation provided by the language runtime system (environment). == Vectorization and parallelization == Intrinsic functions are often used to explicitly implement vectorization and parallelization in languages which do not address such constructs. Some application programming interfaces (API), for example, AltiVec and OpenMP, use intrinsic functions to declare, respectively, vectorizable and multiprocessing-aware operations during compiling. The compiler parses the intrinsic functions and converts them into vector math or multiprocessing object code appropriate for the target platform. Some intrinsics are used to provide additional constraints to the optimizer, such as values a variable cannot assume. == By programming language == === C and C++ === C and C++ compilers from Microsoft and Intel, as well as the GNU Compiler Collection (GCC), implement intrinsics that map directly to the x86 single instruction, multiple data (SIMD) instructions (MMX, Streaming SIMD Extensions (SSE), SSE2, SSE3, SSSE3, SSE4, AVX, AVX2, AVX512, FMA, ...). Intrinsics allow mapping to standard assembly instructions that are not normally accessible through C/C++, e.g., bit scan. Some C and C++ compilers provide non-portable platform-specific intrinsics.
Other intrinsics (such as GNU built-ins) are slightly more abstracted, approximating the abilities of several contemporary platforms, with portable fallback implementations on platforms with no appropriate instructions. It is common for C++ libraries, such as glm or Sony's vector maths libraries, to achieve portability via conditional compilation (based on platform-specific compiler flags), providing fully portable high-level primitives (e.g., a four-element floating-point vector type) mapped onto the appropriate low-level programming language implementations, while still benefiting from the C++ type system and inlining; hence the advantage over linking to hand-written assembly object files, using the C application binary interface (ABI). ==== Examples ==== The following are examples of signatures of intrinsic functions from Intel's set of intrinsic functions. === Java === The HotSpot Java virtual machine's (JVM) just-in-time compiler also has intrinsics for specific Java APIs. HotSpot intrinsics are standard Java APIs which may have one or more optimized implementations on some platforms. === PL/I === ANSI/ISO PL/I defines nearly 90 builtin functions. These are conventionally grouped as follows (pp. 337–338): string-handling builtin functions such as INDEX and LENGTH; arithmetic builtin functions such as ABS, CEIL and ROUND; mathematical builtin functions like SIN, COS, LOG and ERF; array-handling builtin functions, for example ANY, ALL and PROD; condition-handling builtin functions like ONCODE and ONFILE; storage control builtin functions, for example ADDR and POINTER; input–output builtins such as LINENO; and miscellaneous builtin functions like DATE and TIME. Individual compilers have added additional builtins specific to a machine architecture or operating system. A builtin function is identified by leaving its name undeclared and allowing it to default, or by declaring it BUILTIN. A user-supplied function of the same name can be substituted by declaring it as ENTRY.
== References == == External links == Intel Intrinsics Guide Using milicode routines, IBM AIX 6.1 documentation
Wikipedia/Intrinsic_function
Function or functionality may refer to: == Computing == Function key, a type of key on computer keyboards Function model, a structured representation of processes in a system Function object or functor or functionoid, a concept of object-oriented programming Function (computer programming), a callable sequence of instructions == Music == Function (music), a relationship of a chord to a tonal centre Function (musician) (born 1973), David Charles Sumner, American techno DJ and producer "Function" (song), a 2012 song by American rapper E-40 featuring YG, Iamsu! & Problem "Function", song by Dana Kletter from Boneyard Beach 1995 == Other uses == Function (biology), the effect of an activity or process Function (engineering), a specific action that a system can perform Function (language), a way of achieving an aim using language Function (mathematics), a relation that associates an input to a single output Function (sociology), an activity's role in society Functionality (chemistry), the presence of functional groups in a molecule Party or function, a social event Function Drinks, an American beverage company == See also == Function field (disambiguation) Function hall Functional (disambiguation) Functional group (disambiguation) Functionalism (disambiguation) Functor (disambiguation)
Wikipedia/Function_(disambiguation)
A design is the concept or proposal for an object, process, or system. The word design refers to something that is or has been intentionally created by a thinking agent, and is sometimes used to refer to the inherent nature of something – its design. The verb to design expresses the process of developing a design. In some cases, the direct construction of an object without an explicit prior plan may also be considered to be a design (such as in arts and crafts). A design is expected to have a purpose within a specific context, typically aiming to satisfy certain goals and constraints while taking into account aesthetic, functional and experiential considerations. Traditional examples of designs are architectural and engineering drawings, circuit diagrams, sewing patterns, and less tangible artefacts such as business process models. == Designing == People who produce designs are called designers. The term 'designer' usually refers to someone who works professionally in one of the various design areas. Within the professions, the word 'designer' is generally qualified by the area of practice (for example: a fashion designer, a product designer, a web designer, or an interior designer), but it can also designate other practitioners such as architects and engineers (see below: Types of designing). A designer's sequence of activities to produce a design is called a design process, with some employing designated processes such as design thinking and design methods. The process of creating a design can be brief (a quick sketch) or lengthy and complicated, involving considerable research, negotiation, reflection, modeling, interactive adjustment, and re-design. Designing is also a widespread activity outside of the professions of those formally recognized as designers. In his influential book The Sciences of the Artificial, the interdisciplinary scientist Herbert A. 
Simon proposed that, "Everyone designs who devises courses of action aimed at changing existing situations into preferred ones." According to the design researcher Nigel Cross, "Everyone can – and does – design," and "Design ability is something that everyone has, to some extent, because it is embedded in our brains as a natural cognitive function." == History of design == The study of design history is complicated by varying interpretations of what constitutes 'designing'. Many design historians, such as John Heskett, look to the Industrial Revolution and the development of mass production. Others subscribe to conceptions of design that include pre-industrial objects and artefacts, beginning their narratives of design in prehistoric times. Originally situated within art history, the historical development of the discipline of design history coalesced in the 1970s, as interested academics worked to recognize design as a separate and legitimate target for historical research. Early influential design historians include German-British art historian Nikolaus Pevsner and Swiss historian and architecture critic Sigfried Giedion. == Design education == In Western Europe, institutions for design education date back to the nineteenth century. The Norwegian National Academy of Craft and Art Industry was founded in 1818, followed by the United Kingdom's Government School of Design (1837), and Konstfack in Sweden (1844). The Rhode Island School of Design was founded in the United States in 1877. The German art and design school Bauhaus, founded in 1919, greatly influenced modern design education. Design education covers the teaching of theory, knowledge, and values in the design of products, services, and environments, with a focus on the development of both particular and general skills for designing. Traditionally, its primary orientation has been to prepare students for professional design practice, based on project work and studio, or atelier, teaching methods. 
There are also broader forms of higher education in design studies and design thinking. Design is also a part of general education, for example within the curriculum topic, Design and Technology. The development of design in general education in the 1970s created a need to identify fundamental aspects of 'designerly' ways of knowing, thinking, and acting, which resulted in establishing design as a distinct discipline of study. == Design process == Substantial disagreement exists concerning how designers in many fields, whether amateur or professional, alone or in teams, produce designs. Design researchers Dorst and Dijkhuis acknowledged that "there are many ways of describing design processes," and compared and contrasted two dominant but different views of the design process: as a rational problem-solving process and as a process of reflection-in-action. They suggested that these two paradigms "represent two fundamentally different ways of looking at the world – positivism and constructionism." The paradigms may reflect differing views of how designing should be done and how it actually is done, and both have a variety of names. The problem-solving view has been called "the rational model," "technical rationality" and "the reason-centric perspective." The alternative view has been called "reflection-in-action," "coevolution" and "the action-centric perspective." === Rational model === The rational model was independently developed by Herbert A. Simon, an American scientist, and two German engineering design theorists, Gerhard Pahl and Wolfgang Beitz. It posits that: Designers attempt to optimize a design candidate for known constraints and objectives. The design process is plan-driven. The design process is understood in terms of a discrete sequence of stages. The rational model is based on a rationalist philosophy and underlies the waterfall model, systems development life cycle, and much of the engineering design literature. 
According to the rationalist philosophy, design is informed by research and knowledge in a predictable and controlled manner. Typical stages consistent with the rational model include the following: Pre-production design Design brief – initial statement of intended outcome. Analysis – analysis of design goals. Research – investigating similar designs in the field or related topics. Specification – specifying requirements of a design for a product (product design specification) or service. Problem solving – conceptualizing and documenting designs. Presentation – presenting designs. Design during production. Development – continuation and improvement of a design. Product testing – in situ testing of a design. Post-production design feedback for future designs. Implementation – introducing the design into the environment. Evaluation and conclusion – summary of process and results, including constructive criticism and suggestions for future improvements. Redesign – any or all stages in the design process repeated (with corrections made) at any time before, during, or after production. Each stage has many associated best practices. ==== Criticism of the rational model ==== The rational model has been widely criticized on two primary grounds: Designers do not work this way – extensive empirical evidence has demonstrated that designers do not act as the rational model suggests. Unrealistic assumptions – goals are often unknown when a design project begins, and the requirements and constraints continue to change. === Action-centric model === The action-centric perspective is a label given to a collection of interrelated concepts, which are antithetical to the rational model. It posits that: Designers use creativity and emotion to generate design candidates. The design process is improvised. No universal sequence of stages is apparent – analysis, design, and implementation are contemporary and inextricably linked. 
The action-centric perspective is based on an empiricist philosophy and broadly consistent with the agile approach and amethodical development. Substantial empirical evidence supports the veracity of this perspective in describing the actions of real designers. Like the rational model, the action-centric model sees design as informed by research and knowledge. At least two views of design activity are consistent with the action-centric perspective. Both involve these three basic activities: In the reflection-in-action paradigm, designers alternate between "framing", "making moves", and "evaluating moves". "Framing" refers to conceptualizing the problem, i.e., defining goals and objectives. A "move" is a tentative design decision. The evaluation process may lead to further moves in the design. In the sensemaking–coevolution–implementation framework, designers alternate between its three titular activities. Sensemaking includes both framing and evaluating moves. Implementation is the process of constructing the design object. Coevolution is "the process where the design agent simultaneously refines its mental picture of the design object based on its mental picture of the context, and vice versa". The concept of the design cycle is understood as a circular time structure, which may start with the thinking of an idea, then expressing it by the use of visual or verbal means of communication (design tools), the sharing and perceiving of the expressed idea, and finally starting a new cycle with the critical rethinking of the perceived idea. Anderson points out that this concept emphasizes the importance of the means of expression, which at the same time are means of perception of any design ideas. 
=== Approaches to design === Some of these values and approaches include: Critical design uses designed artefacts as an embodied critique or commentary on existing values, morals, and practices in a culture. Critical design can make aspects of the future physically present to provoke a reaction. Ecological design is a design approach that prioritizes the consideration of the environmental impacts of a product or service, over its whole lifecycle. Ecodesign research focuses primarily on barriers to implementation, ecodesign tools and methods, and the intersection of ecodesign with other research disciplines. Participatory design (originally co-operative design, now often co-design) is the practice of collective creativity to design, attempting to actively involve all stakeholders (e.g. employees, partners, customers, citizens, end-users) in the design process to help ensure the result meets their needs and is usable. Recent research suggests that designers create more innovative concepts and ideas when working within a co-design environment with others than they do when creating ideas on their own. Scientific design refers to industrialised design based on scientific knowledge. Science can be used to study the effects and need for a potential or existing product in general and to design products that are based on scientific knowledge. For instance, a scientific design of face masks for COVID-19 mitigation may be based on investigations of filtration performance, mitigation performance, thermal comfort, biodegradability and flow resistance. Service design is a term that is used for designing or organizing the experience around a product and the service associated with a product's use. The purpose of service design methodologies is to establish the most effective practices for designing services, according to both the needs of users and the competencies and capabilities of service providers. 
Sociotechnical system design, a philosophy and tools for participative designing of work arrangements and supporting processes – for organizational purpose, quality, safety, economics, and customer requirements in core work processes, the quality of people's experience at work, and the needs of society. Transgenerational design, the practice of making products and environments compatible with those physical and sensory impairments associated with human aging and which limit major activities of daily living. User-centered design, which focuses on the needs, wants, and limitations of the end-user of the designed artefact. One aspect of user-centered design is ergonomics. == Relationship with the arts == The boundaries between art and design are blurry, largely due to a range of applications both for the term 'art' and the term 'design'. Applied arts can include industrial design, graphic design, fashion design, and the decorative arts which traditionally includes craft objects. In graphic arts (2D image making that ranges from photography to illustration), the distinction is often made between fine art and commercial art, based on the context within which the work is produced and how it is traded. == Types of designing == == See also == == References == == Further reading == Margolin, Victor. World History of Design. New York: Bloomsbury Academic, 2015. (2 vols) ISBN 9781472569288. Raizman, David Seth (12 November 2003). The History of Modern Design. Pearson. ISBN 978-0131830400.
Wikipedia/Design_process
In computer science, an instance is an occurrence of a software element that is based on a type definition. When created, an occurrence is said to have been instantiated, and both the creation process and the result of creation are called instantiation. == Examples == Class instance: an object-oriented programming (OOP) object created from a class. Each instance of a class shares a data layout but has its own memory allocation. Computer instance: an occurrence of a virtual machine, which typically includes storage and a virtual CPU. Polygonal model: in computer graphics, a model can be instantiated in order to be drawn several times in different locations in a scene, which can improve the performance of rendering since a portion of the work needed to display each instance is reused. Program instance: in a POSIX-oriented operating system, an executing process. It is instantiated for a program via system calls such as fork() and exec(). Each executing process is an instance of the program from which it has been instantiated. == References ==
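The class-instance relationship described above can be sketched in Python; the `Point` class here is an invented illustration, not something taken from the article:

```python
class Point:
    """A type definition: every instance shares this layout (x, y)."""

    def __init__(self, x, y):
        # Each instantiation allocates separate storage for the fields.
        self.x = x
        self.y = y

# Two instantiations of the same class: same structure, independent state.
a = Point(1, 2)
b = Point(1, 2)

print(type(a) is type(b))  # True: both are instances of Point
print(a is b)              # False: each instance has its own memory
a.x = 99
print(b.x)                 # 1: changing one instance leaves the other untouched
```

The same pattern underlies the other senses listed: each instance is created from a shared definition (a class, a VM image, a polygonal model, a program on disk) but carries its own state.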
Wikipedia/Instance_(computer_science)
In geography and particularly in geographic information science, a geographic feature or simply feature (also called an object or entity) is a representation of a phenomenon that exists at a location in the space and scale of relevance to geography; that is, at or near the surface of Earth. It is an item of geographic information, and may be represented in maps, geographic information systems, remote sensing imagery, statistics, and other forms of geographic discourse. Such representations of phenomena consist of descriptions of their inherent nature, their spatial form and location, and their characteristics or properties. == Terminology == The term "feature" is broad and inclusive, and includes both natural and human-constructed objects. The term covers things which exist physically (e.g. a building) as well as those that are conceptual or social creations (e.g. a neighbourhood). Formally, the term is generally restricted to things which endure over a period. A feature is also discrete, meaning that it has a clear identity and location distinct from other objects, and is defined as a whole, defined more or less precisely by the boundary of its geographical extent. This differentiates features from geographic processes and events, which are perdurants that only exist in time; and from geographic masses and fields, which are continuous in that they are not conceptualized as a distinct whole. In geographic information science, the terms feature, object, and entity are generally used as roughly synonymous. In the 1992 Spatial Data Transfer Standard (SDTS), one of the first public standard models of geographic information, an attempt was made to formally distinguish them: an entity as the real-world phenomenon, an object as a representation thereof (e.g. on paper or digital), and a feature as the combination of both entity and representation objects. Although this distinction is often cited in textbooks, it has not gained lasting or widespread usage. 
In the ISO 19101 Geographic Information Reference Model and Open Geospatial Consortium (OGC) Simple Features Specification, international standards that form the basis for most modern geospatial technologies, a feature is defined as "an abstraction of a real-world phenomenon", essentially the object in SDTS. == Types of features == === Natural features === A natural feature is an object on the planet that was not created by humans, but is a part of the natural world. ==== Ecosystems ==== There are two different terms to describe habitats: ecosystem and biome. An ecosystem is a community of organisms. In contrast, biomes occupy large areas of the globe and often encompass many different kinds of geographical features, including mountain ranges. Biotic diversity within an ecosystem is the variability among living organisms from all sources, including inter alia, terrestrial, marine and other aquatic ecosystems. Living organisms are continually engaged in a set of relationships with every other element constituting the environment in which they exist, and ecosystem describes any situation where there is relationship between organisms and their environment. Biomes represent large areas of ecologically similar communities of plants, animals, and soil organisms. Biomes are defined based on factors such as plant structures (such as trees, shrubs, and grasses), leaf types (such as broadleaf and needleleaf), plant spacing (forest, woodland, savanna), and climate. Unlike biogeographic realms, biomes are not defined by genetic, taxonomic, or historical similarities. Biomes are often identified with particular patterns of ecological succession and climax vegetation. ==== Water bodies ==== A body of water is any significant and reasonably long-lasting accumulation of water, usually covering the land. The term "body of water" most often refers to oceans, seas, and lakes, but it may also include smaller pools of water such as ponds, creeks or wetlands. 
Rivers, streams, canals, and other geographical features where water moves from one place to another are not always considered bodies of water, but they are included as geographical formations featuring water. Some of these are easily recognizable as distinct real-world entities (e.g. an isolated lake), while others are at least partially based on human conceptualizations. Examples of the latter are a branching stream network in which one of the branches has been arbitrarily designated as the continuation of the primary named stream; or a gulf or bay of a body of water (e.g. a lake or an ocean), which has no meaningful dividing line separating it from the rest of the lake or ocean. ==== Landforms ==== A landform comprises a geomorphological unit and is largely defined by its surface form and location in the landscape, as part of the terrain, and as such is typically an element of topography. Landforms are categorized by features such as elevation, slope, orientation, stratification, rock exposure, and soil type. They include berms, mounds, hills, cliffs, valleys, rivers, and numerous other elements. Oceans and continents are the highest-order landforms. === Artificial features === ==== Settlements ==== A settlement is a permanent or temporary community in which people live. Settlements range in components from a small number of dwellings grouped together to the largest of cities with surrounding urbanized areas. Other landscape features such as roads, enclosures, field systems, boundary banks and ditches, ponds, parks and woods, mills, manor houses, moats, and churches may be considered part of a settlement. ==== Administrative regions and other constructs ==== These include social constructions that are created to administer and organize the land, people, and other spatially-relevant resources. Examples are governmental units such as a state, cadastral land parcels, mining claims, zoning partitions of a city, and church parishes. 
There are also more informal social features, such as city neighbourhoods and other vernacular regions. These are purely conceptual entities established by edict or practice, although they may align with visible features (e.g. a river boundary), and may be subsequently manifested on the ground, such as by survey markers or fences. ==== Engineered constructs ==== Engineered geographic features include highways, bridges, airports, railroads, buildings, dams, and reservoirs, and are part of the anthroposphere because they are man-made geographic features. === Cartographic features === Cartographic features are types of abstract geographical features, which appear on maps but not on the planet itself, even though they are located on the planet. For example, grid lines, latitudes, longitudes, the Equator, the prime meridian, and many types of boundary, are shown on maps of Earth, but do not physically exist. They are theoretical lines used for reference, navigation, and measurement. == Features and Geographic Information == In GIS, maps, statistics, databases, and other information systems, a geographic feature is represented by a set of descriptors of its various characteristics. A common classification of those characteristics has emerged based on developments by Peuquet, Mennis, and others, including the following: Identity, the fact that a feature is unique and distinct from all other features. This does not have an inherent description, but humans have created many systems for attempting to express identity, such as names and identification numbers/codes. Existence, the fact that a feature exists in the world. At first, this may seem trivial, but complex situations are common, such as features that are proposed or planned, abstract concepts (e.g., the Equator), under construction, or that no longer exist. 
Kind (also known as class, type, or category), one or more groups to which a feature belongs, typically focused on those that are most fundamental to its existence. It thus completes the sentence "This is a _________." These are generally in the form of common nouns (tree, dog, building, county, etc.), which may be isolated or part of a taxonomic hierarchy. Relationships to other features. These may be inherent if they are crucial to the existence and identity of the feature, or incidental if they are not crucial, but "just happen to be." These may be of at least three types: Spatial relations, those that can be visualized and measured in space. For example, the fact that the Potomac River is adjacent to Maryland is an inherent spatial relation because the river is part of the definition of the boundary of Maryland, but the overlap relation between Maryland and the Delmarva Peninsula is incidental, as each would exist unproblematically without the other. Meronomic relations (also known as partonomy), in which a feature may exist as a part of a larger whole, or may exist as a collection of parts. For example, the relationship between Maryland and the United States is a meronomic relation; one is not just spatially within the boundaries of the other, but is a component part of the other that in part defines the existence of both. Genealogical relations (also known as parent-child), which tie a feature to others that existed previously and created it (or from which it was formed by another agent), and in turn to any features it has created. For example, if a county were created by the subdivision of two existing counties, they would be considered its parents. Location, a description of where the feature exists, often including the shape of its extent. 
While a feature has an inherent location, measuring it for the purpose of representation as data can be a complex process, such as requiring the invention of abstract spatial reference systems, and the necessary employment of cartographic generalization, including an expedient choice of dimension (e.g., a city could be represented as a region or as a point, depending on scale and need). Attributes, characteristics of a feature other than location, often expressed as text or numbers; for example, the population of a city. In geography, the levels of measurement developed by Stanley Smith Stevens (and further extended by others) constitute a common system for understanding and using attribute data. Time is fundamental to the representation of a feature, although it does not have independent temporal descriptions. Instead, expressions of time are attached to other characteristics, describing how they change (thus, they are analogous to adverbs in common discourse). Any of the above characteristics is mutable, with the possible exception of identity. For example, the lifespan of a feature could be considered as the temporal extent of its existence. The location of a city can change over time as annexations expand its extent. The resident population of a country changes frequently due to immigration, emigration, birth, and death. The descriptions of features (i.e., the measured values of each of the above characteristics) are typically collected in geographic databases, such as GIS datasets, based on a variety of data models and file formats, often based on the vector logical model. == See also == Geographical field Geographical location Human geography Landscape Physical geography Simple Features == References ==
Wikipedia/Geographic_feature
The Unified Modeling Language (UML) is a general-purpose visual modeling language that is intended to provide a standard way to visualize the design of a system. UML provides a standard notation for many types of diagrams which can be roughly divided into three main groups: behavior diagrams, interaction diagrams, and structure diagrams. The creation of UML was originally motivated by the desire to standardize the disparate notational systems and approaches to software design. It was developed at Rational Software in 1994–1995, with further development led by them through 1996. In 1997, UML was adopted as a standard by the Object Management Group (OMG) and has been managed by this organization ever since. In 2005, UML was also published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as the ISO/IEC 19501 standard. Since then the standard has been periodically revised to cover the latest revision of UML. In software engineering, most practitioners do not use UML, but instead produce informal hand drawn diagrams; these diagrams, however, often include elements from UML. == History == === Before UML 1.0 === UML has evolved since the second half of the 1990s and has its roots in the object-oriented programming methods developed in the late 1980s and early 1990s. It is originally based on the notations of the Booch method, the object-modeling technique (OMT), and object-oriented software engineering (OOSE), which it has integrated into a single language. Rational Software Corporation hired James Rumbaugh from General Electric in 1994 and after that, the company became the source for two of the most popular object-oriented modeling approaches of the day: Rumbaugh's object-modeling technique (OMT) and Grady Booch's method. 
They were soon assisted in their efforts by Ivar Jacobson, the creator of the object-oriented software engineering (OOSE) method, who joined them at Rational in 1995. === UML 1.x === Under the technical leadership of those three (Rumbaugh, Jacobson, and Booch), a consortium called the UML Partners was organized in 1996 to complete the Unified Modeling Language (UML) specification and propose it to the Object Management Group (OMG) for standardization. The partnership also contained additional interested parties (for example HP, DEC, IBM, and Microsoft). The UML Partners' UML 1.0 draft was proposed to the OMG in January 1997 by the consortium. During the same month, the UML Partners formed a group, designed to define the exact meaning of language constructs, chaired by Cris Kobryn and administered by Ed Eykholt, to finalize the specification and integrate it with other standardization efforts. The result of this work, UML 1.1, was submitted to the OMG in August 1997 and adopted by the OMG in November 1997. After the first release, a task force was formed to improve the language, which released several minor revisions, 1.3, 1.4, and 1.5. The standards it produced (as well as the original standard) have been noted as being ambiguous and inconsistent. ==== Cardinality notation ==== As with database Chen, Bachman, and ISO ER diagrams, class models are specified to use "look-across" cardinalities, even though several authors (Merise, Elmasri & Navathe, amongst others) prefer same-side or "look-here" for roles and both minimum and maximum cardinalities. Recent researchers (Feinerer and Dullea et al.) have shown that the "look-across" technique used by UML and ER diagrams is less effective and less coherent when applied to n-ary relationships of order strictly greater than 2. Feinerer says: "Problems arise if we operate under the look-across semantics as used for UML associations. 
Hartmann investigates this situation and shows how and why different transformations fail.", and: "As we will see on the next few pages, the look-across interpretation introduces several difficulties which prevent the extension of simple mechanisms from binary to n-ary associations." === UML 2 === The UML 2.0 major revision, developed with an enlarged consortium to improve the language further and reflect new experience with the usage of its features, replaced version 1.5 in 2005. Although UML 2.1 was never released as a formal specification, versions 2.1.1 and 2.1.2 appeared in 2007, followed by UML 2.2 in February 2009. UML 2.3 was formally released in May 2010. UML 2.4.1 was formally released in August 2011. UML 2.5 was released in October 2012 as an "In progress" version and was officially released in June 2015. The formal version 2.5.1 was adopted in December 2017. There are four parts to the UML 2.x specification: the Superstructure, which defines the notation and semantics for diagrams and their model elements; the Infrastructure, which defines the core metamodel on which the Superstructure is based; the Object Constraint Language (OCL), for defining rules for model elements; and the UML Diagram Interchange, which defines how UML 2 diagram layouts are exchanged. Until UML 2.4.1, the latest versions of these standards were: UML Superstructure version 2.4.1, UML Infrastructure version 2.4.1, OCL version 2.3.1, and UML Diagram Interchange version 1.0. Since version 2.5, the UML Specification has been simplified (without Superstructure and Infrastructure), and the latest versions of these standards are now UML Specification 2.5.1 and OCL version 2.4. It continues to be updated and improved by the revision task force, who resolve any issues with the language. 
== Design == UML offers a way to visualize a system's architectural blueprints in a diagram, including elements such as: any activities (jobs); individual components of the system; and how they can interact with other software components; how the system will run; how entities interact with others (components and interfaces); external user interface. Although originally intended for object-oriented design documentation, UML has been extended to a larger set of design documentation (as listed above), and has been found useful in many contexts. === Software development methods === UML is not a development method by itself; however, it was designed to be compatible with the leading object-oriented software development methods of its time, for example, OMT, the Booch method, Objectory, and especially RUP, with which it was originally intended to be used when work began at Rational Software. === Modeling === It is important to distinguish between the UML model and the set of diagrams of a system. A diagram is a partial graphic representation of a system's model. The set of diagrams need not completely cover the model and deleting a diagram does not change the model. The model may also contain documentation that drives the model elements and diagrams (such as written use cases). UML diagrams represent two different views of a system model: Static (or structural) view: emphasizes the static structure of the system using objects, attributes, operations and relationships. It includes class diagrams and composite structure diagrams. Dynamic (or behavioral) view: emphasizes the dynamic behavior of the system by showing collaborations among objects and changes to the internal states of objects. This view includes sequence diagrams, activity diagrams and state machine diagrams. UML models can be exchanged among UML tools by using the XML Metadata Interchange (XMI) format. In UML, one of the key tools for behavior modeling is the use-case model, which originated with OOSE. 
Use cases are a way of specifying required usages of a system. Typically, they are used to capture the requirements of a system, that is, what a system is supposed to do. == Diagrams == UML 2 has many types of diagrams, which are divided into two categories. Some types represent structural information, and the rest represent general types of behavior, including a few that represent different aspects of interactions. These diagrams can be categorized hierarchically as shown in the following class diagram: These diagrams may all contain comments or notes explaining usage, constraint, or intent. === Structure diagrams === Structure diagrams represent the static aspects of the system. They emphasize the things that must be present in the system being modeled. Since structure diagrams represent the structure, they are used extensively in documenting the software architecture of software systems. For example, the component diagram describes how a software system is split up into components and shows the dependencies among these components. === Behavior diagrams === Behavior diagrams represent the dynamic aspect of the system. They emphasize what must happen in the system being modeled. Since behavior diagrams illustrate the behavior of a system, they are used extensively to describe the functionality of software systems. As an example, the activity diagram describes the business and operational step-by-step activities of the components in a system. Visual representation:
Staff User → Complaints System: Submit Complaint
Complaints System → HR System: Forward Complaint
HR System → Department: Assign Complaint
Department → Complaints System: Update Resolution
Complaints System → Feedback System: Request Feedback
Feedback System → Staff User: Provide Feedback
Staff User → Feedback System: Submit Feedback
This description can be used to draw a sequence diagram using tools like Lucidchart, Draw.io, or any UML diagram software. 
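The message flow above can also be expressed as plain code, with each arrow of the sequence diagram becoming one recorded message between participants. This is only a sketch: the participant and message names come from the description in the text, while the trace-recording approach is an invented illustration, not part of UML itself.

```python
# Sketch: the complaint-handling sequence as a recorded message trace.
# Each call to send() corresponds to one arrow in the sequence diagram.
trace = []

def send(sender, receiver, message):
    trace.append(f"{sender} -> {receiver}: {message}")

def handle_complaint():
    send("Staff User", "Complaints System", "Submit Complaint")
    send("Complaints System", "HR System", "Forward Complaint")
    send("HR System", "Department", "Assign Complaint")
    send("Department", "Complaints System", "Update Resolution")
    send("Complaints System", "Feedback System", "Request Feedback")
    send("Feedback System", "Staff User", "Provide Feedback")
    send("Staff User", "Feedback System", "Submit Feedback")

handle_complaint()
print(len(trace))  # 7: one message per arrow
```

Reading the trace top to bottom reproduces the vertical time axis of the diagram, which is why such a description maps directly onto a sequence diagram.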
The diagram would have actors on the left side, with arrows indicating the sequence of actions and interactions between systems and actors as described. Sequence diagrams should be drawn for each use case to show how different objects interact with each other to achieve the functionality of the use case. == Artifacts == In UML, an artifact is the "specification of a physical piece of information that is used or produced by a software development process, or by deployment and operation of a system." "Examples of artifacts include model files, source files, scripts, and binary executable files, a table in a database system, a development deliverable, a word-processing document, or a mail message." Artifacts are the physical entities that are deployed on Nodes (i.e., Devices and Execution Environments). Other UML elements such as classes and components are first manifested into artifacts, and instances of these artifacts are then deployed. Artifacts can also be composed of other artifacts. == Metamodeling == The Object Management Group (OMG) has developed a metamodeling architecture to define the UML, called the Meta-Object Facility (MOF). MOF is designed as a four-layered architecture, as shown in the image at right. It provides a meta-meta model at the top, called the M3 layer. This M3-model is the language used by the Meta-Object Facility to build metamodels, called M2-models. The most prominent example of a Layer 2 Meta-Object Facility model is the UML metamodel, which describes the UML itself. These M2-models describe elements of the M1-layer, and thus M1-models. These would be, for example, models written in UML. The last layer is the M0-layer or data layer. It is used to describe runtime instances of the system. The meta-model can be extended using a mechanism called stereotyping. This has been criticized as being insufficient/untenable by Brian Henderson-Sellers and Cesar Gonzalez-Perez in "Uses and Abuses of the Stereotype Mechanism in UML 1.x and 2.0".
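The M3–M0 layering described above can be loosely illustrated with Python's own metaclass machinery, where `type` plays a role roughly analogous to the M3 meta-meta model, a metaclass to an M2 metamodel element, a class to an M1 model element, and an object to M0 runtime data. This is only a teaching analogy with invented class names, not how MOF or UML tools are actually implemented:

```python
# Loose analogy between MOF layers and Python's metaclass machinery:
# M3 (meta-meta model) ~ type       -- the thing metaclasses are instances of
# M2 (metamodel)       ~ UmlClass   -- a metaclass describing what "a class" is
# M1 (model)           ~ Customer   -- a model element, instance of UmlClass
# M0 (runtime data)    ~ alice      -- a concrete runtime instance

class UmlClass(type):
    """M2: a toy metamodel element describing model classes."""
    def __new__(mcls, name, bases, ns):
        ns.setdefault("stereotype", "class")   # every model class gets a stereotype
        return super().__new__(mcls, name, bases, ns)

class Customer(metaclass=UmlClass):            # M1: a model element
    def __init__(self, name):
        self.name = name

alice = Customer("Alice")                      # M0: a runtime instance

print(type(UmlClass) is type)      # M2 is described by M3
print(type(Customer) is UmlClass)  # M1 is described by M2
print(alice.name)                  # M0 data
```

Each layer is an instance of the layer above it, which is exactly the relationship the MOF stack formalizes.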
== Adoption == By 2013, UML had been marketed by OMG for many contexts, but was aimed primarily at software development, with limited success. It has been treated, at times, as a design silver bullet, which leads to problems. UML misuse includes overuse (designing every part of the system with it, which is unnecessary) and assuming that novices can design with it. It is considered a large language, with many constructs. Some people (including Jacobson) feel that UML's size hinders learning and therefore uptake. MS Visual Studio dropped support for UML in 2016 due to lack of usage. According to Google Trends, UML has been on a steady decline since 2004. == See also == Applications of UML BPMN (Business Process Model and Notation) C4 model Department of Defense Architecture Framework DOT (graph description language) List of Unified Modeling Language tools MODAF Model-based testing Model-driven engineering Object-oriented role analysis and modeling Process Specification Language Systems Modeling Language (SysML) == References == == Further reading == Ambler, Scott William (2004). The Object Primer: Agile Model Driven Development with UML 2. Cambridge University Press. ISBN 0-521-54018-6. Archived from the original on 31 January 2010. Retrieved 29 April 2006. Chonoles, Michael Jesse; James A. Schardt (2003). UML 2 for Dummies. Wiley Publishing. ISBN 0-7645-2614-6. Fowler, Martin (2004). UML Distilled: A Brief Guide to the Standard Object Modeling Language (3rd ed.). Addison-Wesley. ISBN 0-321-19368-7. Jacobson, Ivar; Grady Booch; James Rumbaugh (1998). The Unified Software Development Process. Addison Wesley Longman. ISBN 0-201-57169-2. Martin, Robert Cecil (2003). UML for Java Programmers. Prentice Hall. ISBN 0-13-142848-9. Noran, Ovidiu S. "Business Modelling: UML vs. IDEF" (PDF). Retrieved 14 November 2022. Horst Kargl. "Interactive UML Metamodel with additional Examples". Penker, Magnus; Hans-Erik Eriksson (2000). Business Modeling with UML. John Wiley & Sons.
ISBN 0-471-29551-5. Douglass, Bruce Powel. "Bruce Douglass: Real-Time Agile Systems and Software Development" (web). Retrieved 1 January 2019. Douglass, Bruce (2014). Real-Time UML Workshop 2nd Edition. Newnes. ISBN 978-0-471-29551-8. Douglass, Bruce (2004). Real-Time UML 3rd Edition. Newnes. ISBN 978-0321160768. Douglass, Bruce (2002). Real-Time Design Patterns. Addison-Wesley Professional. ISBN 978-0201699562. Douglass, Bruce (2009). Real-Time Agility. Addison-Wesley Professional. ISBN 978-0321545497. Douglass, Bruce (2010). Design Patterns for Embedded Systems in C. Newnes. ISBN 978-1856177078. == External links == Official website Current UML specification: Unified Modeling Language 2.5.1. OMG Document Number formal/2017-12-05. Object Management Group Standards Development Organization (OMG SDO). December 2017.
Wikipedia/Unified_modeling_language
A georelational data model is a geographic data model that represents geographic features as an interrelated set of spatial and attribute data. The georelational model was the dominant form of vector file format during the 1980s and 1990s, including the Esri coverage and Shapefile. == History == The second era in the history of GIS, starting in the mid-1970s, was characterized by the rise of the first general-purpose GIS software programs (rather than the bespoke systems created in the 1960s and early 1970s). Each of these programs also created its own data file structures, primarily focused on finding innovative ways to store the spatial or geometric aspect of the data in the most efficient and error-free way. One example of this was the POLYVRT software and data structure (1973) from the Harvard Laboratory for Computer Graphics and Spatial Analysis, which inspired the Arc/INFO Coverage format. In experimental GIS software such as ODYSSEY, attribute data was only handled in a rudimentary way. Meanwhile, the relational database was quickly becoming the most promising software for managing non-spatial data, and several nascent GIS software companies chose to adopt it into their systems, especially Esri. Although there were exceptions such as the object-oriented data models in Smallworld GIS (1989) and Intergraph's experimental TIGRIS, georelational data dominated the GIS industry until the rise of spatial databases in the late 1990s. Most georelational formats are now obsolete, although the Shapefile is still in common (if decreasing) use. == Georelational formats == In any vector data structure, the core unit is an object (either a geographic feature or a sample location for a field) that has a location in space (of 0, 1, 2, or 3 dimensions) and a set of attributes.
In the georelational model, these are stored as separate files: a geometry file that is usually custom-designed by a software developer for use in a particular program, and an attribute table that follows relational database principles; often, the latter is adopted directly from existing relational database management system software. Examples of commonly-used georelational data formats include: ARC/INFO Coverage (Esri 1981-2005) The name ARC/INFO literally reflected the georelational design of the software and the coverage format. The ARC model or Coverage was the topological vector data structure developed by ESRI, based on earlier structures developed at Harvard such as POLYVRT. INFO was a relational database developed by Henco Software, Inc. (originally for financial management) that was licensed by ESRI. In the Coverage structure, each point, line, or polygon had an identification number, which could be joined to the row in the INFO table with the same primary key, as in a relational table join. In an ARC/INFO workspace (a directory/folder), all of the INFO tables were stored in a separate directory from the directories for the ARC data for each coverage. To process attribute data, the user had to leave the ARC program and start the INFO program. During the 1990s, Esri added support for other commercial RDBMS software for the attribute data. MGE (Intergraph 1989-2000) During the 1980s, Intergraph was an industry leader in workstation CAD with its IGDS software, including Microstation (developed by Bentley Systems). When it developed MGE (Modular GIS Environment), its first flagship GIS product, it directly incorporated the Microstation software as its interactive environment, and the Microstation Design File (.dgn, a non-topological vector graphics file format) for storing graphics. The associated attribute table could be stored in any RDBMS supported on Intergraph UNIX workstations, Informix being one of the most common.
An ID attached to each object in the design file enabled a relational join to the rows in the attribute table. Shapefile (Esri 1992–present) As the GIS industry grew to incorporate more casual users, the inherent complexity of the coverage data structure became a concern. When Esri released ArcView GIS 2.0 in 1992, it introduced the new shapefile format for vector data. This was a much simpler data model, eliminating features such as topology, but was still a georelational design. A shape-"file" actually consisted of several files, including at the very least a .shp file to store the geometry, and a .dbf file for the attributes, the latter directly adopting the dBase format that was the dominant microcomputer database at the time (despite it being a proprietary trade secret, the .dbf format had been legally reverse-engineered by the xBase community and published). Rather than using a relational join to connect the two files, the shapefile merely uses file order: the first shape matches the first attribute row, and so on. == See also == GIS data model == References ==
Wikipedia/Georelational_data_model
The Harvard Laboratory for Computer Graphics and Spatial Analysis (1965 to 1991) pioneered early cartographic and architectural computer applications that led to integrated geographic information systems (GIS). Some of the Laboratory's influential programs included SYMAP, SYMVU, GRID, CALFORM, and POLYVRT. The Laboratory's Odyssey project created a geographic information system that served as a milestone in the development of integrated mapping systems. The Laboratory influenced numerous computer graphic, mapping and architectural systems such as Intergraph, Computervision, and Esri. == Founding == In 1963, during a training session held at Northwestern University, Chicago architect Howard T. Fisher encountered computer maps on urban planning and civil engineering produced by Edgar Horwood's group at the University of Washington. Fisher conceived a computer mapping software program, SYMAP (Synergistic Mapping), to produce conformant, proximal, and contour maps on a line printer. Fisher applied for a Ford Foundation grant to explore thematic mapping based on early SYMAP outputs, which was awarded in 1965. With Harvard providing facilities in Robinson Hall in Harvard Yard as part of the Graduate School of Design, the Ford Foundation provided $294,000 over three years to seed the Harvard Laboratory for Computer Graphics. Working with programmer Betty Benson, Fisher completed SYMAP for distribution in 1966. Also under Fisher's direction, the SYMVU and GRID programs were developed. A 1968 reorganisation, following Fisher reaching Harvard's mandatory retirement age, led to the renaming of the lab as the Harvard Laboratory for Computer Graphics and Spatial Analysis. From 1972, the Laboratory was based in the Graduate School's newly built Gund Hall. The Laboratory's original and continuing goals were: To design and develop computer software for the analysis and graphic display of spatial data.
To distribute the resulting software to governmental agencies, educational organizations and interested professionals. To conduct research concerning the definition and analysis of spatial structure and process. == Major research outputs == SYMAP's ability to print cheap, albeit low quality, maps using readily available technology led to rapid adoption in the late 1960s. SYMVU software, developed in 1969 to illustrate surface displays, was another popular product. The GRID, CALFORM, and POLYVRT products further explored the raster versus vector approach to mapping. The Laboratory gained a reputation for solid output, leading to several commercially successful projects and significant budgetary independence for a research institute. Some struggles with restructuring Geographic Base Files - Dual Independent Map Encoding (GBF-DIME files, an early vector and polygonal data structure) for the Census Bureau's Urban Atlas in 1975 inspired the Laboratory to develop an integrated suite of programs underpinned by a common user interface and common data manipulation software. In 1978 this suite became the Odyssey project. The Odyssey project's aim was to produce a vector GIS that provided spatial analysis of many different forms within a single system.
As of 1980, in addition to early Odyssey modules, the Laboratory sold the following programs for display and analysis of spatial data: ASPEX - 3d data perspectives; CALFORM - shaded vector maps; DOT.MAP - contour, shaded, and dot-distribution maps from gridded data; KWIC - key-word-in-context programs for indexing bibliographic references; POLYVRT - data conversion and analysis of polygonal data; SYMAP - line printer mapping producing conformant (areas), contour, trend surface, and proximal (also known as Voronoi diagram or Thiessen polygon) maps; GIMMS (Geographic Information Management and Mapping System) - a general purpose mapping system written by Tom Waugh at the University of Edinburgh; MDS(X) - a multidimensional pattern detection and scaling system under the direction of Tony Coxon at the University of Cardiff. The 1982 release of Odyssey included seven programs for geographical analysis: PROTEUS - editing, projections, generalization, aggregation, simple display; CYCLONE - topological checking of nodes, error correction; CYCLOPS - topological checking of polygons, production of graphic shape files; WHIRLPOOL - planar enforcement: overlay, error detection of input; CALYPSO - attribute manipulations (areal interpolation); POLYPS - planar choropleth maps; PRISM - raised 3d prism maps. Like most of the Laboratory's software, Odyssey was written in FORTRAN and operated on several platforms; the POLYPS and PRISM modules could draw maps on a variety of vector display devices. == Activities == The Laboratory distributed software, and later data, at cost, thus encouraging experimentation. The Laboratory conducted correspondence courses, hosted numerous conferences, and worked on environmental planning and architectural projects with the Harvard Graduate School of Design. From 1978 to 1983, the Laboratory hosted a popular annual Harvard Computer Graphics Week.
Geoffrey Dutton, a research associate at the Laboratory from 1969 through 1984, created the first holographic thematic map, "America Graph Fleeting", in 1978. This rotating strip of 3,000 holograms depicted an animated sequence of 3d maps showing US population growth from 1790 to 1970, generated by the Laboratory's ASPEX program. Dutton also contributed the program DOT.MAP to the Laboratory's family of distributed software (1977). In 1977 James Dougenik, Duane Niemeyer, and Nicholas Chrisman developed contiguous area cartograms. Bruce Donald, working at the Laboratory from 1978 to 1984, wrote BUILDER, a program for computer-aided architecture. BUILDER produced plan and shaded perspectives that popularised computer-aided design in architecture. Donald also wrote the CALYPSO module for the commercial Odyssey project and worked on the GLIB/LINGUIST table-driven language system in collaboration with Nick Chrisman and Jim Dougenik, which was based on automata theory and dynamic scoping. GLIB/LINGUIST provided an English-like user interface for Odyssey, BUILDER, and other HLCG software. The early period of the Laboratory saw staff numbers grow to approximately 40 in 1970, but shrink to half a dozen by 1972 as grants expired. The Odyssey project grew the Laboratory from about 12 people in 1977 to forty people by 1981. The Laboratory shrank significantly, back to approximately half a dozen people, from 1982 until its closure in June 1991.
Potential purchasers often redeveloped Odyssey functions rather than wait for licenses. Meanwhile, having carved out the potential commercial interests, from 1981 the Harvard Graduate School of Design sought less commercial work and an increased focus on research, though with reduced budgets. "But the timing of this burgeoning commercialism of the Lab's activities collided with the moment in history when Harvard's President Derek Bok set out to clarify the blurred lines between academic research and development on the one hand, and more clearly defined commercial activities on the other." Financial strain and the lack of commercial inspiration for projects led to the dispersal of many team members from 1981. Despite some further research during the late 1980s, the Laboratory closed in 1991. Odyssey became the template for subsequent GIS software, cited as an inspiration by numerous commercial efforts in mapping and architecture, such as M&S Computing (later Intergraph), Computervision, and Geodat. The Laboratory was an enormous influence on the commercial Environmental Systems Research Institute (Esri), founded in 1969 by Jack Dangermond, a landscape architect graduate of Harvard Graduate School of Design who had worked as a research assistant at the Laboratory during 1968 and 1969. Scott Morehouse, the development lead for the Odyssey project, worked at the lab from 1977 to 1981. When revenues from Odyssey did not meet expectations, his team's resources started to dwindle, and Morehouse left to join Dangermond at Esri to build a next-generation GIS platform that was to become ArcInfo. Morehouse's intimate knowledge of the Odyssey geoprocessing model and code base, combined with Dangermond's insights into how to put the 'IS' in 'GIS', evolved the Laboratory's GIS prototype processors into a system that could effectively and interactively manage, process, edit, and display vector geodata and its scalar attributes, addressing the evolving market need for more robust GIS capabilities.
== References ==
Wikipedia/Harvard_Laboratory_for_Computer_Graphics_and_Spatial_Analysis
The Canada Geographic Information System (CGIS) was an early geographic information system (GIS) developed for the Government of Canada beginning in the early 1960s. CGIS was used to store geospatial data for the Canada Land Inventory and assisted in the development of regulatory procedures for land-use management and resource monitoring in Canada. At that time, Canada was beginning to realize problems associated with its large land mass and attempting to discern the availability of natural resources. The federal government decided to launch a national program to assist in management and inventory of its resources. The simple automated computer processes designed to store and process large amounts of data enabled Canada to begin a national land-use management program and become a foremost promoter of geographic information systems (GIS). CGIS was designed to handle great amounts of collected data by managing, modeling, and analyzing this data very quickly and accurately. Because Canada presented such large geospatial datasets, it was necessary to be able to focus on certain regions or provinces in order to more effectively manage and maintain land use. CGIS enabled its users to effectively collect national data and, if necessary, break it down into provincial datasets. Early applications of CGIS benefited land-use management and environmental impact monitoring programs across Canada. == Development == In 1960, Roger Tomlinson was working at Spartan Air Services, an aerial survey company based in Ottawa, Ontario. The company was focused on producing large-scale photogrammetric and geophysical maps, mostly for the Government of Canada. In the early 1960s, Tomlinson and the company were asked to produce a map for site-location analysis in an east African nation. Tomlinson immediately recognized that the new automated computer technologies might be applicable and even necessary to complete such a detail-oriented task more effectively and efficiently than humans.
Eventually, Spartan met with IBM offices in Ottawa to begin developing a relationship to bridge the previous gap between geographic data and computer services. Tomlinson brought his geographic knowledge to the table as IBM brought computer programming and data management. The Government of Canada began working towards the development of a national program after a 1962 meeting between Tomlinson and Lee Pratt, head of the Canada Land Inventory (CLI). Pratt was charged with the creation of maps covering the entire region of Canada's commercially productive areas by showing agriculture, forestry, wildlife, and recreation, all with the same classification schemes. Not only was the development of such maps a formidable task, but Pratt understood that computer automation might assist in the analytical processes as well. Tomlinson was the first to produce a technical feasibility study on whether computer mapping programs would be a viable solution for land-use inventory and management programs, such as CLI. He is also given credit for coining the term "geographic information system" and is recognized as the "Modern Father of GIS." CGIS continued to be developed and operated as a stand-alone system by the Government of Canada until the late 1980s, at which point the widespread emergence of commercial GIS software slowly rendered it obsolete. In the early 1990s, a group of volunteers successfully extracted all of the data from the old computer tapes, and the data was made available on GeoGratis. == See also == Geographic Information System == References == == External links == GeoGratis Data for Decision, a 1968 short documentary about the project.
Wikipedia/Canadian_Geographic_Information_System
Vector graphics are a form of computer graphics in which visual images are created directly from geometric shapes defined on a Cartesian plane, such as points, lines, curves and polygons. The associated mechanisms may include vector display and printing hardware, vector data models and file formats, as well as the software based on these data models (especially graphic design software, computer-aided design, and geographic information systems). Vector graphics are an alternative to raster or bitmap graphics, with each having advantages and disadvantages in specific situations. While vector hardware has largely disappeared in favor of raster-based monitors and printers, vector data and software continue to be widely used, especially when a high degree of geometric precision is required, and when complex information can be decomposed into simple geometric primitives. Thus, it is the preferred model for domains such as engineering, architecture, surveying, 3D rendering, and typography, but is entirely inappropriate for applications such as photography and remote sensing, where raster is more effective and efficient. Some application domains, such as geographic information systems (GIS) and graphic design, use both vector and raster graphics at times, depending on purpose. Vector graphics are based on the mathematics of analytic or coordinate geometry, and are not related to other mathematical uses of the term vector. This can lead to some confusion in disciplines in which both meanings are used. == Data model == The logical data model of vector graphics is based on the mathematics of coordinate geometry, in which shapes are defined as a set of points in a two- or three-dimensional cartesian coordinate system, as p = (x, y) or p = (x, y, z). Because almost all shapes consist of an infinite number of points, the vector model defines a limited set of geometric primitives that can be specified using a finite sample of salient points called vertices.
For example, a square can be unambiguously defined by the locations of three of its four corners, from which the software can interpolate the connecting boundary lines and the interior space. Because it is a regular shape, a square could also be defined by the location of one corner, a size (width=height), and a rotation angle. The fundamental geometric primitives are: A single point. A line segment, defined by two end points, allowing for a simple linear interpolation of the intervening line. A polygonal chain or polyline, a connected set of line segments, defined by an ordered list of points. A polygon, representing a region of space, defined by its boundary, a polyline with coincident starting and ending vertices. A variety of more complex shapes may be supported: Parametric curves, in which polylines or polygons are augmented with parameters to define a non-linear interpolation between vertices, including circular arcs, cubic splines, Catmull–Rom splines, Bézier curves and bezigons. Standard parametric shapes in two or three dimensions, such as circles, ellipses, squares, superellipses, spheres, tetrahedrons, superellipsoids, etc. Irregular three-dimensional surfaces and solids, usually defined as a connected set of polygons (e.g., a polygon mesh) or as parametric surfaces (e.g., NURBS). Fractals, often defined as an iterated function system. In many vector datasets, each shape can be combined with a set of properties. The most common are visual characteristics, such as color, line weight, or dash pattern. In systems in which shapes represent real-world features, such as GIS and BIM, a variety of attributes of each represented feature can be stored, such as name, age, size, and so on. In some vector data, especially in GIS, information about topological relationships between objects may be represented in the data model, such as tracking the connections between road segments in a transport network.
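The fundamental primitives and their attached properties described above can be sketched as simple data structures. The field names below are illustrative, not taken from any particular file format:

```python
from dataclasses import dataclass, field

# A vertex is just a coordinate pair: p = (x, y)
Point = tuple[float, float]

@dataclass
class Polyline:
    # Ordered list of vertices; straight segments are interpolated between them.
    vertices: list[Point]

@dataclass
class Polygon:
    # Boundary is a polyline whose first and last vertices coincide.
    boundary: list[Point]
    # Properties: visual characteristics or GIS-style feature attributes.
    properties: dict = field(default_factory=dict)

square = Polygon(
    boundary=[(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)],
    properties={"fill": "#ff0000", "name": "unit square"},
)
print(len(square.boundary) - 1)   # four distinct vertices
```

Only the vertices are stored; the connecting lines and the interior are interpolated by the rendering software, which is what keeps the representation resolution-independent.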
If a dataset stored in one vector file format is converted to another file format that supports all the primitive objects used in that particular image, then the conversion can be lossless. == Vector display hardware == Vector-based devices, such as the vector CRT and the pen plotter, directly control a drawing mechanism to produce geometric shapes. Since vector display devices can define a line by dealing with just two points (that is, the coordinates of each end of the line), the device can reduce the total amount of data it must deal with by organizing the image in terms of pairs of points. Vector graphic displays were first used in 1958 by the US SAGE air defense system. Vector graphics systems were retired from the U.S. en route air traffic control in 1999. Vector graphics were also used on the TX-2 at the Massachusetts Institute of Technology Lincoln Laboratory by computer graphics pioneer Ivan Sutherland to run his program Sketchpad in 1963. Subsequent vector graphics systems, most of which iterated through dynamically modifiable stored lists of drawing instructions, include the IBM 2250, Imlac PDS-1, and DEC GT40. The Vectrex video game console used vector graphics, as did various arcade games like Asteroids, Space Wars, and Tempest, and many Cinematronics titles such as Rip Off and Tail Gunner, all using vector monitors. Storage scope displays, such as the Tektronix 4014, could display vector images but not modify them without first erasing the display. However, these were never as widely used as the raster-based scanning displays used for television, and had largely disappeared by the mid-1980s except for specialized applications. Plotters used in technical drawing still draw vectors directly to paper by moving a pen as directed through the two-dimensional space of the paper. However, as with monitors, these have largely been replaced by the wide-format printer that prints a raster image (which may be rendered from vector data).
== Software == Because this model is useful in a variety of application domains, many different software programs have been created for drawing, manipulating, and visualizing vector graphics. While these are all based on the same basic vector data model, they can interpret and structure shapes very differently, using very different file formats. Graphic design and illustration, using a vector graphics editor or graphic art software such as Adobe Illustrator. See Comparison of vector graphics editors for capabilities. Geographic information systems (GIS), which can represent a geographic feature by a combination of a vector shape and a set of attributes. GIS includes vector editing, mapping, and vector spatial analysis capabilities. Computer-aided design (CAD), used in engineering, architecture, and surveying. Building information modeling (BIM) models add attributes to each shape, similar to a GIS. 3D computer graphics software, including computer animation. == File formats == Vector graphics are commonly found today in the SVG, WMF, EPS, PDF, CDR or AI types of graphic file formats, and are intrinsically different from the more common raster graphics file formats such as JPEG, PNG, APNG, GIF, WebP, BMP and MPEG4. The World Wide Web Consortium (W3C) standard for vector graphics is Scalable Vector Graphics (SVG). The standard is complex and has been relatively slow to be established at least in part owing to commercial interests. Many web browsers now have some support for rendering SVG data but full implementations of the standard are still comparatively rare. In recent years, SVG has become a significant format that is completely independent of the resolution of the rendering device, typically a printer or display monitor. SVG files are essentially printable text that describes both straight and curved paths, as well as other attributes. 
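To illustrate the point that SVG files are essentially printable text, a minimal document can be assembled as a string and parsed as ordinary XML. The shapes, sizes, and colors below are arbitrary examples:

```python
import xml.etree.ElementTree as ET

# A minimal hand-written SVG document: resolution-independent shapes
# described entirely as text.
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
    '<circle cx="50" cy="50" r="40" fill="none" stroke="black"/>'
    '<path d="M 10 80 Q 50 10 90 80" fill="none" stroke="red"/>'
    '</svg>'
)

root = ET.fromstring(svg)   # SVG is well-formed XML, so any XML parser reads it
print(root.tag)             # the namespaced <svg> root element
print(len(root))            # two child shapes: a circle and a curved path
```

The `path` element's `d` attribute encodes both straight and curved segments (here a quadratic Bézier curve), which a renderer rasterizes at whatever resolution the output device requires.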
Wikipedia prefers SVG for images such as simple maps, line illustrations, coats of arms, and flags, which generally are not like photographs or other continuous-tone images. Rendering SVG requires conversion to a raster format at a resolution appropriate for the current task. SVG is also a format for animated graphics. There is also a version of SVG for mobile phones called SVGT (SVG Tiny version). These images can contain links and also exploit anti-aliasing. They can also be displayed as wallpaper. CAD software uses its own vector data formats, usually proprietary formats created by software vendors, such as Autodesk's DWG and public exchange formats such as DXF. Hundreds of distinct vector file formats have been created for GIS data over its history, including proprietary formats like the Esri file geodatabase, proprietary but public formats like the Shapefile and the original KML, open source formats like GeoJSON, and formats created by standards bodies like Simple Features and GML from the Open Geospatial Consortium. === Conversion === ==== To raster ==== Modern displays and printers are raster devices; vector formats have to be converted to a raster format (bitmaps – pixel arrays) before they can be rendered (displayed or printed). The size of the bitmap/raster-format file generated by the conversion will depend on the resolution required, but the size of the vector file generating the bitmap/raster file will always remain the same. Thus, it is easy to convert from a vector file to a range of bitmap/raster file formats but it is much more difficult to go in the opposite direction, especially if subsequent editing of the vector picture is required. It might be an advantage to save an image created from a vector source file as a bitmap/raster format, because different systems have different (and incompatible) vector formats, and some might not support vector graphics at all.
However, once a file is converted from the vector format, it is likely to be bigger, and it loses the advantage of scalability without loss of resolution. It will also no longer be possible to edit individual parts of the image as discrete objects. The file size of a vector graphic image depends on the number of graphic elements it contains; it is a list of descriptions. ==== From raster ==== Converting a raster image into vector form, a process known as image tracing or vectorization, is considerably harder: the shapes must be inferred from pixels, so the result usually requires manual clean-up. === Printing === Vector art is ideal for printing since the art is made from a series of mathematical curves; it will print very crisply even when resized. For instance, one can print a vector logo on a small sheet of copy paper, and then enlarge the same vector logo to billboard size and keep the same crisp quality. A low-resolution raster graphic would blur or pixelate excessively if it were enlarged from business card size to billboard size. (The precise resolution of a raster graphic necessary for high-quality results depends on the viewing distance; e.g., a billboard may still appear to be of high quality even at low resolution if the viewing distance is great enough.) If we regard typographic characters as images, then the same considerations that we have made for graphics apply even to the composition of written text for printing (typesetting). Older character sets were stored as bitmaps. Therefore, to achieve maximum print quality they had to be used at a given resolution only; these font formats are said to be non-scalable. High-quality typography is nowadays based on character drawings (fonts) which are typically stored as vector graphics, and as such are scalable to any size. Examples of these vector formats for characters are PostScript fonts and TrueType fonts. == Operation == Advantages of this style of drawing over raster graphics: Because vector graphics consist of coordinates with lines/curves between them, the size of the representation does not depend on the dimensions of the object.
This minimal amount of information translates to a much smaller file size compared to large raster images which are defined pixel by pixel. This said, a vector graphic with a small file size is often said to lack detail compared with a real-world photo. Correspondingly, one can infinitely zoom in on e.g., a circle arc, and it remains smooth. On the other hand, a polygon representing a curve will reveal that it is not truly curved. On zooming in, lines and curves need not get wider proportionally. Often the width is either not increased at all, or increased less than proportionally. On the other hand, irregular curves represented by simple geometric shapes may be made proportionally wider when zooming in, to keep them looking smooth and not like these geometric shapes. The parameters of objects are stored and can be later modified. This means that moving, scaling, rotating, filling, etc. does not degrade the quality of a drawing. Moreover, it is usual to specify the dimensions in device-independent units, which results in the best possible rasterization on raster devices. From a 3-D perspective, rendering shadows is also much more realistic with vector graphics, as shadows can be abstracted into the rays of light from which they are formed. This allows for photorealistic images and renderings. For example, consider a circle of radius r. The main pieces of information a program needs in order to draw this circle are: an indication that what is to be drawn is a circle; the radius r; the location of the center point of the circle; the stroke line style and color (possibly transparent); and the fill style and color (possibly transparent). Vector formats are not always appropriate in graphics work and also have numerous disadvantages.
For example, devices such as cameras and scanners produce essentially continuous-tone raster graphics that are impractical to convert into vectors, and so for this type of work, an image editor will operate on the pixels rather than on drawing objects defined by mathematical expressions. Comprehensive graphics tools will combine images from vector and raster sources, and may provide editing tools for both, since some parts of an image could come from a camera source, and others could have been drawn using vector tools. Some authors have criticized the term vector graphics as being confusing. In particular, vector graphics does not simply refer to graphics described by Euclidean vectors. Some authors have proposed to use object-oriented graphics instead. However, this term can also be confusing as it can be read as any kind of graphics implemented using object-oriented programming. == Vector operations == Vector graphics editors typically allow translation, rotation, mirroring, stretching, skewing, affine transformations, changing of z-order (loosely, what's in front of what) and combination of primitives into more complex objects. More sophisticated transformations include set operations on closed shapes (union, difference, intersection, etc.). In SVG, the composition operations are based on alpha composition. Vector graphics are ideal for simple or composite drawings that need to be device-independent, or do not need to achieve photo-realism. For example, the PostScript and PDF page description languages use a vector graphics model. == Vector image repositories == Many stock photo websites provide vectorized versions of hosted images, while specific repositories specialize in vector images given their growing popularity among graphic designers. == See also == == Notes == == References == == External links == Media related to Vector graphics at Wikimedia Commons
Wikipedia/Vector_graphic
Flight training is a course of study used when learning to pilot an aircraft. The overall purpose of primary and intermediate flight training is the acquisition and honing of basic airmanship skills. Flight training can be conducted under a structured accredited syllabus with a flight instructor at a flight school, or as private lessons with no syllabus with a flight instructor, as long as all experience requirements for the desired pilot certificate/license are met. Typically, flight training consists of a combination of two parts: flight lessons, given in the aircraft or in a certified flight training device; and ground school, primarily given as a classroom lecture or lesson by a flight instructor, in which aeronautical theory is learned in preparation for the student's written, oral, and flight pilot certification/licensing examinations. Although there are various types of aircraft, many of the principles of piloting them have common techniques, especially those aircraft which are heavier-than-air types. Flight schools commonly rent aircraft to students and licensed pilots at an hourly rate. Typically, the hourly rate is determined by the aircraft's Hobbs meter or tach timer; therefore, the student is only charged while the aircraft engine is running. Flight instructors can also be scheduled with or without an aircraft for pilot proficiency and recurrent training. The oldest flight training school still in existence is the Royal Air Force's (RAF's) Central Flying School, formed in May 1912 at Upavon, United Kingdom. The oldest civil flight school still active in the world is based in Germany at the Wasserkuppe. It was founded as "Mertens Fliegerschule" and is currently named "Fliegerschule Wasserkuppe". == Licences == The International Civil Aviation Organization sets global standards for pilot licensing that are implemented and enforced by a country's civil aviation authority.
Pilots must first meet their country's requirements to obtain a Student pilot certificate which is used for training towards a Private Pilot Licence (PPL). They can then progress to a Commercial Pilot Licence (CPL), and finally an Airline Transport Pilot Licence (ATPL). Some countries have a Light Aircraft Pilot Licence (LAPL), but this cannot be used internationally. Separate licences are required for different aircraft categories, for example helicopters and aeroplanes. == Ratings == A type rating, also known as an endorsement, is the process undertaken by a pilot to update their license to allow them to fly a different type of aircraft. A class rating covers multiple aircraft. An instrument rating allows a pilot to fly under instrument flight rules (IFR). A night rating allows a pilot to fly at night (that is, outside of Civil twilight). == See also == Bárány chair Bachelor of Aviation Ground Instructor Integrated pilot training Pilot licensing and certification Pilot certification in the United States Pilot licensing in Canada Pilot licensing in the United Kingdom == References == == External links == Learning to Fly: A Practical Manual for Beginners (1916) by Claude Grahame-White and Harry Harper Student Pilot Guide from the FAA Accelerated Flight Training from Flying Mag. Pilot Training Compass: Back to the Future from European Cockpit Association.
Wikipedia/Flight_training
In computer programming, a parameter, a.k.a. formal argument, is a variable that represents an argument, a.k.a. actual argument, a.k.a. actual parameter, to a subroutine call. A function's signature defines its parameters. A call involves evaluating each argument expression of the call and associating the result with the corresponding parameter. For example, consider the subroutine def add(x, y): return x + y. Variables x and y are parameters. For call add(2, 3), the expressions 2 and 3 are arguments. For call add(a+1, b+2), the arguments are a+1 and b+2. Parameter passing is defined by a programming language. Evaluation strategy defines the semantics for how parameters can be declared and how arguments are passed to a subroutine. Generally, with call by value, a parameter acts like a new, local variable initialized to the value of the argument. If the argument is a variable, the subroutine cannot modify the argument state because the parameter is a copy. With call by reference, which requires the argument to be a variable, the parameter is an alias of the argument. == Example == The following program defines a function named SalesTax with one parameter named price; both the parameter and the return value are typed double. For call SalesTax(10.00), the argument 10.00 is evaluated to a double value (10.0) and assigned to the parameter variable price. The function is executed and returns the value 0.5. == Parameters and arguments == The terms parameter and argument may have different meanings in different programming languages. Sometimes they are used interchangeably, and the context is used to distinguish the meaning. The term parameter (sometimes called formal parameter) is often used to refer to the variable as found in the function declaration, while argument (sometimes called actual parameter) refers to the actual input supplied at a function call statement. For example, if one defines a function as def f(x): ..., then x is the parameter, and if it is called by a = ...; f(a) then a is the argument.
A parameter is an (unbound) variable, while the argument can be a literal or variable or more complex expression involving literals and variables. In case of call by value, what is passed to the function is the value of the argument – for example, f(2) and a = 2; f(a) are equivalent calls – while in call by reference, with a variable as argument, what is passed is a reference to that variable - even though the syntax for the function call could stay the same. The specification for pass-by-reference or pass-by-value would be made in the function declaration and/or definition. Parameters appear in procedure definitions; arguments appear in procedure calls. In the function definition f(x) = x*x the variable x is a parameter; in the function call f(2) the value 2 is the argument of the function. Loosely, a parameter is a type, and an argument is an instance. A parameter is an intrinsic property of the procedure, included in its definition. For example, in many languages, a procedure to add two supplied integers together and calculate the sum would need two parameters, one for each integer. In general, a procedure may be defined with any number of parameters, or no parameters at all. If a procedure has parameters, the part of its definition that specifies the parameters is called its parameter list. By contrast, the arguments are the expressions supplied to the procedure when it is called, usually one expression matching one of the parameters. Unlike the parameters, which form an unchanging part of the procedure's definition, the arguments may vary from call to call. Each time a procedure is called, the part of the procedure call that specifies the arguments is called the argument list. Although parameters are also commonly referred to as arguments, arguments are sometimes thought of as the actual values or references assigned to the parameter variables when the subroutine is called at run-time. 
When discussing code that is calling into a subroutine, any values or references passed into the subroutine are the arguments, and the place in the code where these values or references are given is the parameter list. When discussing the code inside the subroutine definition, the variables in the subroutine's parameter list are the parameters, while the values of the parameters at runtime are the arguments. For example, in C, when dealing with threads it is common to pass in an argument of type void* and cast it to an expected type: To better understand the difference, consider the following function written in C: The function Sum has two parameters, named addend1 and addend2. It adds the values passed into the parameters, and returns the result to the subroutine's caller (using a technique automatically supplied by the C compiler). The code which calls the Sum function might look like this: The variables value1 and value2 are initialized with values. value1 and value2 are both arguments to the sum function in this context. At runtime, the values assigned to these variables are passed to the function Sum as arguments. In the Sum function, the parameters addend1 and addend2 are evaluated, yielding the arguments 40 and 2, respectively. The values of the arguments are added, and the result is returned to the caller, where it is assigned to the variable sum_value. Because of the difference between parameters and arguments, it is possible to supply inappropriate arguments to a procedure. The call may supply too many or too few arguments; one or more of the arguments may be a wrong type; or arguments may be supplied in the wrong order. Any of these situations causes a mismatch between the parameter and argument lists, and the procedure will often return an unintended answer or generate a runtime error. 
=== Alternative convention in Eiffel === Within the Eiffel software development method and language, the terms argument and parameter have distinct uses established by convention. The term argument is used exclusively in reference to a routine's inputs, and the term parameter is used exclusively in type parameterization for generic classes. Consider the following routine definition: The routine sum takes two arguments addend1 and addend2, which are called the routine's formal arguments. A call to sum specifies actual arguments, as shown below with value1 and value2. Parameters are also thought of as either formal or actual. Formal generic parameters are used in the definition of generic classes. In the example below, the class HASH_TABLE is declared as a generic class which has two formal generic parameters, G representing data of interest and K representing the hash key for the data: When a class becomes a client to HASH_TABLE, the formal generic parameters are substituted with actual generic parameters in a generic derivation. In the following attribute declaration, my_dictionary is to be used as a character string based dictionary. As such, both data and key formal generic parameters are substituted with actual generic parameters of type STRING. == Datatypes == In strongly typed programming languages, each parameter's type must be specified in the procedure declaration. Languages using type inference attempt to discover the types automatically from the function's body and usage. Dynamically typed programming languages defer type resolution until run-time. Weakly typed languages perform little to no type resolution, relying instead on the programmer for correctness. Some languages use a special keyword (e.g. void) to indicate that the subroutine has no parameters; in formal type theory, such functions take an empty parameter list (whose type is not void, but rather unit). 
== Argument passing == The exact mechanism for assigning arguments to parameters, called argument passing, depends upon the evaluation strategy used for that parameter (typically call by value), which may be specified using keywords. === Default arguments === Some programming languages such as Ada, C++, Clojure, Common Lisp, Fortran 90, Python, Ruby, Tcl, and Windows PowerShell allow for a default argument to be explicitly or implicitly given in a subroutine's declaration. This allows the caller to omit that argument when calling the subroutine. If the default argument is explicitly given, then that value is used if it is not provided by the caller. If the default argument is implicit (sometimes by using a keyword such as Optional) then the language provides a well-known value (such as null, Empty, zero, an empty string, etc.) if a value is not provided by the caller. PowerShell example: Default arguments can be seen as a special case of the variable-length argument list. === Variable-length parameter lists === Some languages allow subroutines to be defined to accept a variable number of arguments. For such languages, the subroutines must iterate through the list of arguments. PowerShell example: === Named parameters === Some programming languages—such as Ada and Windows PowerShell—allow subroutines to have named parameters. This allows the calling code to be more self-documenting. It also provides more flexibility to the caller, often allowing the order of the arguments to be changed, or for arguments to be omitted as needed. PowerShell example: === Multiple parameters in functional languages === In lambda calculus, each function has exactly one parameter. What is thought of as functions with multiple parameters is usually represented in lambda calculus as a function which takes the first argument, and returns a function which takes the rest of the arguments; this is a transformation known as currying. 
Some programming languages, like ML and Haskell, follow this scheme. In these languages, every function has exactly one parameter, and what may look like the definition of a function of multiple parameters, is actually syntactic sugar for the definition of a function that returns a function, etc. Function application is left-associative in these languages as well as in lambda calculus, so what looks like an application of a function to multiple arguments is correctly evaluated as the function applied to the first argument, then the resulting function applied to the second argument, etc. == Output parameters == An output parameter, also known as an out parameter or return parameter, is a parameter used for output, rather than the more usual use for input. Using call by reference parameters, or call by value parameters where the value is a reference, as output parameters is an idiom in some languages, notably C and C++, while other languages have built-in support for output parameters. Languages with built-in support for output parameters include Ada (see Ada subprograms), Fortran (since Fortran 90; see Fortran "intent"), various procedural extensions to SQL, such as PL/SQL (see PL/SQL functions) and Transact-SQL, C# and the .NET Framework, Swift, and the scripting language TScript (see TScript function declarations). More precisely, one may distinguish three types of parameters or parameter modes: input parameters, output parameters, and input/output parameters; these are often denoted in, out, and in out or inout. An input argument (the argument to an input parameter) must be a value, such as an initialized variable or literal, and must not be redefined or assigned to; an output argument must be an assignable variable, but it need not be initialized, any existing value is not accessible, and must be assigned a value; and an input/output argument must be an initialized, assignable variable, and can optionally be assigned a value. 
The exact requirements and enforcement vary between languages – for example, in Ada 83 output parameters can only be assigned to, not read, even after assignment (this was removed in Ada 95 to remove the need for an auxiliary accumulator variable). These are analogous to the notion of a value in an expression being an r-value (has a value), an l-value (can be assigned), or an r-value/l-value (has a value and can be assigned), respectively, though these terms have specialized meanings in C. In some cases only input and input/output are distinguished, with output being considered a specific use of input/output, and in other cases only input and output (but not input/output) are supported. The default mode varies between languages: in Fortran 90 input/output is default, while in C# and SQL extensions input is default, and in TScript each parameter is explicitly specified as input or output. Syntactically, parameter mode is generally indicated with a keyword in the function declaration, such as void f(out int x) in C#. Conventionally output parameters are often put at the end of the parameter list to clearly distinguish them, though this is not always followed. TScript uses a different approach, where in the function declaration input parameters are listed, then output parameters, separated by a colon (:) and there is no return type to the function itself, as in this function, which computes the size of a text fragment: Parameter modes are a form of denotational semantics, stating the programmer's intent and allowing compilers to catch errors and apply optimizations – they do not necessarily imply operational semantics (how the parameter passing actually occurs). Notably, while input parameters can be implemented by call by value, and output and input/output parameters by call by reference – and this is a straightforward way to implement these modes in languages without built-in support – this is not always how they are implemented. 
This distinction is discussed in detail in the Ada '83 Rationale, which emphasizes that the parameter mode is abstracted from which parameter passing mechanism (by reference or by copy) is actually implemented. For instance, while in C# input parameters (default, no keyword) are passed by value, and output and input/output parameters (out and ref) are passed by reference, in PL/SQL input parameters (IN) are passed by reference, and output and input/output parameters (OUT and IN OUT) are by default passed by value and the result copied back, but can be passed by reference by using the NOCOPY compiler hint. A syntactically similar construction to output parameters is to assign the return value to a variable with the same name as the function. This is found in Pascal and Fortran 66 and Fortran 77, as in this Pascal example: This is semantically different in that when called, the function is simply evaluated – it is not passed a variable from the calling scope to store the output in. === Use === The primary use of output parameters is to return multiple values from a function, while the use of input/output parameters is to modify state using parameter passing (rather than by shared environment, as in global variables). An important use of returning multiple values is to solve the semipredicate problem of returning both a value and an error status – see Semipredicate problem: Multivalued return. For example, to return two variables from a function in C, one may write: where x is an input parameter and width and height are output parameters. A common use case in C and related languages is for exception handling, where a function places the return value in an output variable, and returns a Boolean corresponding to whether the function succeeded or not. An archetypal example is the TryParse method in .NET, especially C#, which parses a string into an integer, returning true on success and false on failure. 
This has the following signature: and may be used as follows: Similar considerations apply to returning a value of one of several possible types, where the return value can specify the type and then value is stored in one of several output variables. === Drawbacks === Output parameters are often discouraged in modern programming, essentially as being awkward, confusing, and too low-level – commonplace return values are considerably easier to understand and work with. Notably, output parameters involve functions with side effects (modifying the output parameter) and are semantically similar to references, which are more confusing than pure functions and values, and the distinction between output parameters and input/output parameters can be subtle. Further, since in common programming styles most parameters are simply input parameters, output parameters and input/output parameters are unusual and hence susceptible to misunderstanding. Output and input/output parameters prevent function composition, since the output is stored in variables, rather than in the value of an expression. Thus one must initially declare a variable, and then each step of a chain of functions must be a separate statement. For example, in C++ the following function composition: when written with output and input/output parameters instead becomes (for F it is an output parameter, for G an input/output parameter): In the special case of a function with a single output or input/output parameter and no return value, function composition is possible if the output or input/output parameter (or in C/C++, its address) is also returned by the function, in which case the above becomes: === Alternatives === There are various alternatives to the use cases of output parameters. For returning multiple values from a function, an alternative is to return a tuple. 
Syntactically this is clearer if automatic sequence unpacking and parallel assignment can be used, as in Go or Python, such as: For returning a value of one of several types, a tagged union can be used instead; the most common cases are nullable types (option types), where the return value can be null to indicate failure. For exception handling, one can return a nullable type, or raise an exception. For example, in Python one might have either: or, more idiomatically: The micro-optimization of not requiring a local variable and copying the return when using output variables can also be applied to conventional functions and return values by sufficiently sophisticated compilers. The usual alternative to output parameters in C and related languages is to return a single data structure containing all return values. For example, given a structure encapsulating width and height, one can write: In object-oriented languages, instead of using input/output parameters, one can often use call by sharing, passing a reference to an object and then mutating the object, though not changing which object the variable refers to. == See also == Command-line argument Evaluation strategy Operator overloading Free variables and bound variables == Notes == == References ==
Wikipedia/Argument_(computer_science)
In computer science, type conversion, type casting, type coercion, and type juggling are different ways of changing an expression from one data type to another. An example would be the conversion of an integer value into a floating point value or its textual representation as a string, and vice versa. Type conversions can take advantage of certain features of type hierarchies or data representations. Two important aspects of a type conversion are whether it happens implicitly (automatically) or explicitly, and whether the underlying data representation is converted from one representation into another, or a given representation is merely reinterpreted as the representation of another data type. In general, both primitive and compound data types can be converted. Each programming language has its own rules on how types can be converted. Languages with strong typing typically do little implicit conversion and discourage the reinterpretation of representations, while languages with weak typing perform many implicit conversions between data types. Weakly typed languages often allow the programmer to force the compiler to arbitrarily interpret a data item as having different representations; this can be a non-obvious programming error, or a technical method to directly deal with underlying hardware. In most languages, the word coercion is used to denote an implicit conversion, either during compilation or during run time. For example, in an expression mixing integer and floating point numbers (like 5 + 0.1), the compiler will automatically convert integer representation into floating point representation so fractions are not lost. Explicit type conversions are either indicated by writing additional code (e.g. adding type identifiers or calling built-in routines) or by coding conversion routines for the compiler to use when it otherwise would halt with a type mismatch.
In most ALGOL-like languages, such as Pascal, Modula-2, Ada and Delphi, conversion and casting are distinctly different concepts. In these languages, conversion refers to either implicitly or explicitly changing a value from one data type storage format to another, e.g. a 16-bit integer to a 32-bit integer. The storage needs may change as a result of the conversion, including a possible loss of precision or truncation. The word cast, on the other hand, refers to explicitly changing the interpretation of the bit pattern representing a value from one type to another. For example, 32 contiguous bits may be treated as an array of 32 Booleans, a 4-byte string, an unsigned 32-bit integer or an IEEE single precision floating point value. Because the stored bits are never changed, the programmer must know low level details such as representation format, byte order, and alignment needs, to meaningfully cast. In the C family of languages and ALGOL 68, the word cast typically refers to an explicit type conversion (as opposed to an implicit conversion), causing some ambiguity about whether this is a re-interpretation of a bit-pattern or a real data representation conversion. More important is the multitude of ways and rules that apply to what data type (or class) is located by a pointer and how a pointer may be adjusted by the compiler in cases like object (class) inheritance. == Explicit casting in various languages == === Ada === Ada provides a generic library function Unchecked_Conversion. === C-like languages === ==== Implicit type conversion ==== Implicit type conversion, also known as coercion or type juggling, is an automatic type conversion by the compiler. Some programming languages allow compilers to provide coercion; others require it. In a mixed-type expression, data of one or more subtypes can be converted to a supertype as needed at runtime so that the program will run correctly. 
For example, the following is legal C language code: Although d, l, and i belong to different data types, they will be automatically converted to a common data type each time a comparison or assignment is executed. This behavior should be used with caution, as unintended consequences can arise. Data can be lost when converting representations from floating-point to integer, as the fractional components of the floating-point values will be truncated (rounded toward zero). Conversely, precision can be lost when converting representations from integer to floating-point, since a floating-point type may be unable to exactly represent all possible values of some integer type. For example, float might be an IEEE 754 single precision type, which cannot represent the integer 16777217 exactly, while a 32-bit integer type can. This can lead to unintuitive behavior, as demonstrated by the following code: On compilers that implement floats as IEEE single precision, and ints as at least 32 bits, this code will give this peculiar print-out: The integer is: 16777217 The float is: 16777216.000000 Their equality: 1 Note that 1 represents equality in the last line above. This odd behavior is caused by an implicit conversion of i_value to float when it is compared with f_value. The conversion causes loss of precision, which makes the values equal before the comparison. Important takeaways: float to int causes truncation, i.e., removal of the fractional part; double to float causes rounding to the nearest representable value; long to int causes dropping of excess higher-order bits. ===== Type promotion ===== One special case of implicit type conversion is type promotion, where an object is automatically converted into another data type representing a superset of the original type.
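The two C fragments referenced above were not preserved here; a hedged reconstruction (variable names d, l, i, i_value, and f_value follow the surrounding text) might look like this:

```c
#include <stdio.h>

/* Implicit conversions between double, long, and int: each
   comparison or assignment converts the narrower operand. */
double mixed(void) {
    double d = 2.5;
    long   l;
    int    i = 2;

    if (d > i)      /* i is converted to double for the comparison */
        d = i;      /* i is converted to double on assignment */
    l = i;          /* i is converted to long */
    if (d == l)     /* l is converted to double */
        d *= 2;
    return d;
}

/* Loss of precision: with IEEE 754 single-precision float,
   16777217 (2^24 + 1) rounds to 16777216.0f on conversion,
   so the comparison below yields equality. */
int int_float_equal(void) {
    int   i_value = 16777217;
    float f_value = 16777216.0f;
    return i_value == f_value; /* i_value implicitly converted to float */
}
```

Printing i_value, f_value, and the comparison result reproduces the peculiar output quoted above.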
Promotions are commonly used with types smaller than the native type of the target platform's arithmetic logic unit (ALU), before arithmetic and logical operations, to make such operations possible, or more efficient if the ALU can work with more than one type. C and C++ perform such promotion for objects of Boolean, character, wide character, enumeration, and short integer types, which are promoted to int, and for objects of type float, which are promoted to double. Unlike some other type conversions, promotions never lose precision or modify the value stored in the object. Java performs similar numeric promotions, converting byte, short, and char operands to int in arithmetic expressions. ==== Explicit type conversion ==== Explicit type conversion, also called type casting, is a type conversion which is explicitly defined within a program (instead of being done automatically according to the rules of the language for implicit type conversion). It is requested by the user in the program. There are several kinds of explicit conversion: checked (before the conversion is performed, a runtime check is done to see if the destination type can hold the source value; if not, an error condition is raised); unchecked (no check is performed; if the destination type cannot hold the source value, the result is undefined); and bit pattern (the raw bit representation of the source is copied verbatim and re-interpreted according to the destination type, which can also be achieved via aliasing). In object-oriented programming languages, objects can also be downcast: a reference of a base class is cast to one of its derived classes. === C# and C++ === In C#, type conversion can be made in a safe or unsafe (i.e., C-like) manner, the former called checked type cast. In C++ a similar effect can be achieved using C++-style cast syntax. === Eiffel === In Eiffel the notion of type conversion is integrated into the rules of the type system.
The Assignment Rule says that an assignment, such as x := y, is valid if and only if the type of its source expression, y in this case, is compatible with the type of its target entity, x in this case. In this rule, compatible with means that the type of the source expression either conforms to or converts to that of the target. Conformance of types is defined by the familiar rules for polymorphism in object-oriented programming. For example, in the assignment above, the type of y conforms to the type of x if the class upon which y is based is a descendant of that upon which x is based. ==== Definition of type conversion in Eiffel ==== The actions of type conversion in Eiffel, specifically converts to and converts from, are defined as: a type U based on a class CU converts to a type T based on a class CT (and T converts from U) if either CT has a conversion procedure using U as a conversion type, or CU has a conversion query listing T as a conversion type. ==== Example ==== Eiffel is a fully compliant language for Microsoft .NET Framework. Before development of .NET, Eiffel already had extensive class libraries. Using the .NET type libraries, particularly with commonly used types such as strings, poses a conversion problem. Existing Eiffel software uses the string classes (such as STRING_8) from the Eiffel libraries, but Eiffel software written for .NET must use the .NET string class (System.String) in many cases, for example when calling .NET methods which expect items of the .NET type to be passed as arguments. So, the conversion of these types back and forth needs to be as seamless as possible. In the code above, two strings are declared, one of each different type (SYSTEM_STRING is the Eiffel compliant alias for System.String). Because System.String does not conform to STRING_8, the assignment above is valid only if System.String converts to STRING_8. The Eiffel class STRING_8 has a conversion procedure make_from_cil for objects of type System.String.
Conversion procedures are always also designated as creation procedures (similar to constructors). The following is an excerpt from the STRING_8 class: The presence of the conversion procedure makes the assignment: semantically equivalent to: in which my_string is constructed as a new object of type STRING_8 with content equivalent to that of my_system_string. To handle an assignment with original source and target reversed: the class STRING_8 also contains a conversion query to_cil which will produce a System.String from an instance of STRING_8. The assignment: then, becomes equivalent to: In Eiffel, the setup for type conversion is included in the class code, but then appears to happen as automatically as explicit type conversion in client code. This includes not just assignments but other types of attachments as well, such as argument (parameter) substitution. === Rust === Rust provides no implicit type conversion (coercion) between primitive types. But, explicit type conversion (casting) can be performed using the as keyword. == Type assertion == A related concept in static type systems is called type assertion, which instructs the compiler to treat an expression as being of a certain type, disregarding its own inference. Type assertion may be safe (a runtime check is performed) or unsafe. A type assertion does not convert the value from one data type to another. === TypeScript === In TypeScript, a type assertion is done by using the as keyword: In the above example, document.getElementById is declared to return an HTMLElement, but you know that it always returns an HTMLCanvasElement, which is a subtype of HTMLElement, in this case. If that is not the case, subsequent code which relies on the behaviour of HTMLCanvasElement will not perform correctly, as in TypeScript there is no runtime checking for type assertions. In TypeScript, there is no general way to check if a value is of a certain type at runtime, as there is no runtime type support.
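The assertion pattern described above can be sketched without the DOM; the interface names below are illustrative, standing in for HTMLElement and HTMLCanvasElement:

```typescript
// Hypothetical base and derived types.
interface Shape { kind: string; }
interface Circle extends Shape { kind: "circle"; radius: number; }

function area(s: Shape): number {
  // The assertion tells the compiler to treat s as a Circle.
  // No runtime check is performed: if s is not actually a
  // Circle, c.radius is undefined and the result is NaN.
  const c = s as Circle;
  return Math.PI * c.radius * c.radius;
}
```

The `as` expression is erased during compilation; the narrowing exists only at compile time.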
However, it is possible to write a user-defined function with which the user tells the compiler whether a value is of a certain type or not. Such a function is called a type guard, and is declared with a return type of x is Type, where x is a parameter or this, in place of boolean. This allows unsafe type assertions to be contained in the checker function instead of littered around the codebase. === Go === In Go, a type assertion can be used to access a concrete type value from an interface value. It is a safe assertion in that it will panic (in the case of one return value), or return a zero value (if two return values are used), if the value is not of that concrete type. The type assertion i.(T) tells the system that i is of type T; if it is not, the single-value form panics. == Implicit casting using untagged unions == Many programming languages support union types which can hold a value of multiple types. Untagged unions are provided in some languages with loose type-checking, such as C and PL/I, but also in the original Pascal. These can be used to interpret the bit pattern of one type as a value of another type. == Security issues == In hacking, typecasting is the misuse of type conversion to temporarily change a variable's data type from how it was originally defined. This provides opportunities for hackers since, in type conversion, after a variable is "typecast" to become a different data type, the compiler will treat that hacked variable as the new data type for that specific operation. == See also == Downcasting Run-time type information § C++ – dynamic cast and Java cast Truth value Type punning == References == == External links == Casting in Ada Casting in C++ C++ Reference Guide Why I hate C++ Cast Operators, by Danny Kalev Casting in Java Implicit Conversions in C# Implicit Type Casting at Cppreference.com Static and Reinterpretation castings in C++ Upcasting and Downcasting in F#
Wikipedia/Cast_(computer_science)
A corporate blog is a blog that is published and used by an organization, corporation, etc. to reach its organizational goals. The advantage of blogs is that posts and comments are easy to reach and follow due to centralized hosting and generally structured conversation threads. Although there are many different types of corporate blogs, most can be categorized as either external or internal. == Types == === Internal blogs === An internal blog, generally accessed through the corporation's Intranet, is a weblog that any employee can view. Many blogs are also communal, allowing anyone to post to them. The informal nature of blogs may encourage: employee participation free discussion of issues collective intelligence direct communication between various layers of an organization a sense of community Internal blogs may be used in lieu of meetings and e-mail discussions, and can be especially useful when the people involved are in different locations, or have conflicting schedules. Blogs may also allow individuals who otherwise would not have been aware of or invited to participate in a discussion to contribute their expertise. === External blogs === An external blog is a publicly available weblog where company employees, teams, or spokespersons share their views. It is often used to announce new products and services (or the end of old products), to explain and clarify policies, or to react to public criticism on certain issues. It also allows a window to the company culture and is often treated more informally than traditional press releases, though a corporate blog often tries to accomplish similar goals as press releases do. In some corporate blogs, all posts go through a review before they are posted. Some corporate blogs, but not all, allow comments to be made to the posts. According to Hoffman Agency, corporate blogs should not be ‘about me’, but should be a platform to show thought leadership and communicate views on industry issues.
External corporate blogs, by their very nature, are biased, though they can also offer a more honest and direct view than traditional communication channels. Nevertheless, they remain public relations tools. Corporate blogs may be written primarily for consumers (business-to-consumer) or primarily for other businesses (B2B). Certain corporate blogs have a very high number of subscribers. The official Google Blog is currently in the Technorati top 50 listing among all blogs worldwide. The number of subscribers, blog comments, links to blog posts, and the number of times a post is shared in other social media are indicators of a blog's popularity, potential influence, and reach. While business blogs targeted to consumer readers may have a high number of subscribers, comments, and other measures of engagement; corporate blogs targeted to other businesses, especially those in niche industries, may have a very limited number of subscribers, comments, links, and sharing via social media. Accordingly, other metrics are often evaluated to determine the success and effectiveness of B2B blogs. Marketers might expect to have product evangelists or influencers among the audience of an external blog. Once they find them, they may treat them like VIPs, asking them for feedback on exclusive previews, product testing, marketing plans, customer services audits, etc. The business blog can provide additional value by adding a level of credibility that is often unobtainable from a standard corporate site. The informality and increased timeliness of information posted to the blog assists with increasing transparency and accessibility in the corporate image. Business blogs can interact with a target market on a more personal level while building link credibility that can ultimately be tied back to the corporate site. == References ==
Wikipedia/Corporate_blog
In computer programming, especially functional programming and type theory, an algebraic data type (ADT) is a kind of composite data type, i.e., a data type formed by combining other types. Two common classes of algebraic types are product types (i.e., tuples and records) and sum types (i.e., tagged or disjoint unions, coproduct types or variant types). The values of a product type typically contain several values, called fields. All values of that type have the same combination of field types. The set of all possible values of a product type is the set-theoretic product, i.e., the Cartesian product, of the sets of all possible values of its field types. The values of a sum type are typically grouped into several classes, called variants. A value of a variant type is usually created with a quasi-functional entity called a constructor. Each variant has its own constructor, which takes a specified number of arguments with specified types. The set of all possible values of a sum type is the set-theoretic sum, i.e., the disjoint union, of the sets of all possible values of its variants. Enumerated types are a special case of sum types in which the constructors take no arguments, as exactly one value is defined for each constructor. Values of algebraic types are analyzed with pattern matching, which identifies a value by its constructor or field names and extracts the data it contains. == History == Algebraic data types were introduced in Hope, a small functional programming language developed in the 1970s at the University of Edinburgh. == Examples == === Singly linked list === One of the most common examples of an algebraic data type is the singly linked list. A list type is a sum type with two variants, Nil for an empty list and Cons x xs for the combination of a new element x with a list xs to create a new list. In Haskell, such a list type can be declared in either of two equivalent ways; Cons is an abbreviation of "construct".
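A sketch of the two equivalent Haskell declarations for such a list type, in standard data syntax and in GADT-style syntax:

```haskell
-- Ordinary algebraic-data-type syntax:
data List a = Nil | Cons a (List a)

-- Equivalent GADT-style syntax (requires the GADTs extension):
-- data List a where
--   Nil  :: List a
--   Cons :: a -> List a -> List a
```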
Many languages have special syntax for lists defined in this way. For example, Haskell and ML use [] for Nil, : or :: for Cons, respectively, and square brackets for entire lists. So Cons 1 (Cons 2 (Cons 3 Nil)) would normally be written as 1:2:3:[] or [1,2,3] in Haskell, or as 1::2::3::[] or [1,2,3] in ML. === Binary tree === For a slightly more complex example, binary trees may be implemented in Haskell in either of two equivalent forms. Here, Empty represents an empty tree, Leaf represents a leaf node, and Node organizes the data into branches. In most languages that support algebraic data types, it is possible to define parametric types. Examples are given later in this article. Somewhat similar to a function, a data constructor is applied to arguments of an appropriate type, yielding an instance of the data type to which the type constructor belongs. For example, the data constructor Leaf is logically a function Int -> Tree, meaning that giving an integer as an argument to Leaf produces a value of the type Tree. As Node takes two arguments of the type Tree itself, the datatype is recursive. Operations on algebraic data types can be defined by using pattern matching to retrieve the arguments. For example, consider a function to find the depth of a Tree, written in Haskell. Thus, a Tree given to depth can be constructed using any of Empty, Leaf, or Node and must be matched for any of them respectively to deal with all cases. In case of Node, the pattern extracts the subtrees l and r for further processing. === Abstract syntax === Algebraic data types are highly suited to implementing abstract syntax. For example, the following algebraic data type describes a simple language representing numerical expressions. An element of such a data type would have a form such as Mult (Add (Number 4) (Minus (Number 0) (Number 1))) (Number 2). Writing an evaluation function for this language is a simple exercise; however, more complex transformations also become feasible.
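The expression datatype described above, together with a possible evaluation function, might be sketched as follows (the constructor names are taken from the example value in the text; the type and function names are ours):

```haskell
-- Abstract syntax for a small language of numerical expressions.
data Expression = Number Int
                | Add Expression Expression
                | Minus Expression Expression
                | Mult Expression Expression

-- A straightforward evaluator, defined by pattern matching.
evaluate :: Expression -> Int
evaluate (Number n)  = n
evaluate (Add a b)   = evaluate a + evaluate b
evaluate (Minus a b) = evaluate a - evaluate b
evaluate (Mult a b)  = evaluate a * evaluate b
```

Evaluating the example value Mult (Add (Number 4) (Minus (Number 0) (Number 1))) (Number 2) yields 6.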
For example, an optimization pass in a compiler might be written as a function taking an abstract expression as input and returning an optimized form. == Pattern matching == Algebraic data types are used to represent values that can be one of several types of things. Each type of thing is associated with an identifier called a constructor, which can be considered a tag for that kind of data. Each constructor can carry with it a different type of data. For example, considering the binary Tree example shown above, a constructor could carry no data (e.g., Empty), or one piece of data (e.g., Leaf has one Int value), or multiple pieces of data (e.g., Node has one Int value and two Tree values). To do something with a value of this Tree algebraic data type, it is deconstructed using a process called pattern matching. This involves matching the data with a series of patterns. The example function depth above pattern-matches its argument with three patterns. When the function is called, it finds the first pattern that matches its argument, performs any variable bindings that are found in the pattern, and evaluates the expression corresponding to the pattern. Each pattern above has a form that resembles the structure of some possible value of this datatype. The first pattern simply matches values of the constructor Empty. The second pattern matches values of the constructor Leaf. Patterns are recursive, so the data that is associated with that constructor is matched with the pattern "n". In this case, a lowercase identifier represents a pattern that matches any value, which then is bound to a variable of that name — in this case, a variable "n" is bound to the integer value stored in the data type — to be used in the expression to evaluate.
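For reference, the Tree declaration and depth function discussed above might read as follows; the argument order of Node (an Int followed by two subtrees) is inferred from the patterns quoted in this section:

```haskell
-- The binary tree datatype: Node carries an Int and two subtrees.
data Tree = Empty
          | Leaf Int
          | Node Int Tree Tree

-- depth pattern-matches on each of the three constructors.
depth :: Tree -> Int
depth Empty        = 0
depth (Leaf n)     = 1
depth (Node _ l r) = 1 + max (depth l) (depth r)
```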
The recursion in patterns in this example is trivial, but a possible more complex recursive pattern would be something like: Node i (Node j (Leaf 4) x) (Node k y (Node Empty z)) Recursive patterns several layers deep are used for example in balancing red–black trees, which involve cases that require looking at colors several layers deep. The example above is operationally equivalent to the following pseudocode: The advantages of algebraic data types can be highlighted by comparison of the above pseudocode with a pattern matching equivalent. Firstly, there is type safety. In the pseudocode example above, programmer diligence is required to not access field2 when the constructor is a Leaf. The type system would have difficulties assigning a static type in a safe way for traditional record data structures. However, in pattern matching such problems are not faced. The type of each extracted value is based on the types declared by the relevant constructor. The number of values that can be extracted is known based on the constructor. Secondly, in pattern matching, the compiler performs exhaustiveness checking to ensure all cases are handled. If one of the cases of the depth function above were missing, the compiler would issue a warning. Exhaustiveness checking may seem easy for simple patterns, but with many complex recursive patterns, the task soon becomes difficult for the average human (or compiler, if it must check arbitrary nested if-else constructs). Similarly, there may be patterns which never match (i.e., are already covered by prior patterns). The compiler can also check and issue warnings for these, as they may indicate an error in reasoning. Algebraic data type pattern matching should not be confused with regular expression string pattern matching. The purpose of both is similar (to extract parts from a piece of data matching certain constraints); however, the implementation is very different.
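The untagged-record pseudocode that the text compares against can be sketched in C; the field names (such as field2) are illustrative, echoing the wording above:

```c
/* A manually tagged record: the compiler cannot tell which fields
   are meaningful for which tag, so nothing prevents reading the
   subtree fields of a LEAF. */
enum Tag { EMPTY, LEAF, NODE };

struct Tree {
    enum Tag tag;
    int value;            /* meaningful for LEAF and NODE */
    struct Tree *field1;  /* left subtree, NODE only */
    struct Tree *field2;  /* right subtree, NODE only */
};

int tree_depth(const struct Tree *t) {
    switch (t->tag) {
    case EMPTY:
        return 0;
    case LEAF:
        return 1;
    case NODE: {
        int l = tree_depth(t->field1);
        int r = tree_depth(t->field2);
        return 1 + (l > r ? l : r);
    }
    }
    return 0; /* unreachable for well-formed tags */
}
```

Here only programmer discipline prevents dispatching on the wrong fields, whereas a pattern-matching compiler checks both field access and case exhaustiveness.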
Pattern matching on algebraic data types matches on the structural properties of an object rather than on the character sequence of strings. == Theory == A general algebraic data type is a possibly recursive sum type of product types. Each constructor tags a product type to separate it from others, or if there is only one constructor, the data type is a product type. Further, the parameter types of a constructor are the factors of the product type. A parameterless constructor corresponds to the empty product. If a datatype is recursive, the entire sum of products is wrapped in a recursive type, and each constructor also rolls the datatype into the recursive type. For example, the Haskell List datatype is represented in type theory as λα. μβ. 1 + α × β with constructors nil_α = roll (inl ⟨⟩) and cons_α x l = roll (inr ⟨x, l⟩). The Haskell List datatype can also be represented in type theory in a slightly different form, thus: μφ. λα. 1 + α × φ α. (Note how the μ and λ constructs are reversed relative to the original.) The original formation specified a type function whose body was a recursive type. The revised version specifies a recursive function on types. (The type variable φ is used to suggest a function rather than a base type like β, since φ is like a Greek f.) The function φ must also now be applied to its argument type α in the body of the type.
For the purposes of the List example, these two formulations are not significantly different; but the second form allows expressing so-called nested data types, i.e., those where the recursive type differs parametrically from the original. (For more information on nested data types, see the works of Richard Bird, Lambert Meertens, and Ross Paterson.) In set theory the equivalent of a sum type is a disjoint union, a set whose elements are pairs consisting of a tag (equivalent to a constructor) and an object of a type corresponding to the tag (equivalent to the constructor arguments). == Programming languages with algebraic data types == Many programming languages incorporate algebraic data types as a first-class notion. == See also == Disjoint union Generalized algebraic data type Initial algebra Quotient type Tagged union Type theory Visitor pattern == References ==
Wikipedia/Algebraic_datatype
The Data Reference Model (DRM) is one of the five reference models of the Federal Enterprise Architecture. == Overview == The DRM is a framework whose primary purpose is to enable information sharing and reuse across the United States federal government via the standard description and discovery of common data and the promotion of uniform data management practices. The DRM describes artifacts which can be generated from the data architectures of federal government agencies. The DRM provides a flexible and standards-based approach to accomplish its purpose. The scope of the DRM is broad, as it may be applied within a single agency, within a community of interest, or across communities of interest. == Data Reference Model topics == === DRM structure === The DRM provides a standard means by which data may be described, categorized, and shared. These are reflected within each of the DRM's three standardization areas: Data Description: Provides a means to uniformly describe data, thereby supporting its discovery and sharing. Data Context: Facilitates discovery of data through an approach to the categorization of data according to taxonomies. Additionally, it enables the definition of authoritative data assets within a community of interest. Data Sharing: Supports the access and exchange of data, where access consists of ad hoc requests (such as a query of a data asset) and exchange consists of fixed, re-occurring transactions between parties. This area is enabled by capabilities provided by both the Data Context and Data Description standardization areas. === DRM Version 2 === The Data Reference Model version 2, released in November 2005, is a 114-page document with detailed architectural diagrams and an extensive glossary of terms. The DRM also makes many references to ISO standards, specifically the ISO/IEC 11179 metadata registry standard.
=== DRM usage === Although the DRM is not technically a published interoperability standard such as web services, it is an excellent starting point for data architects within federal and state agencies. Any federal or state agencies that are involved with exchanging information with other agencies or that are involved in data warehousing efforts should use this document as a guide. == See also == Enterprise architecture framework Enterprise application integration Enterprise service bus Federal Enterprise Architecture ISO/IEC 11179 Metadata publishing Semantic spectrum Semantic web Synonym ring == External links == US Department of Defense Data Reference Model US Federal Enterprise Architecture Program Data Reference Model Version 2.0 This article incorporates text from this source, which is available under the CC BY 3.0 US license.
Wikipedia/Data_Reference_Model
Microsoft Dynamics 365 is an integrated suite of enterprise resource planning (ERP) and customer relationship management (CRM) applications offered by Microsoft. It combines various functions such as sales, customer service, field service, operations, finance, marketing, and project service automation into a single platform. Dynamics 365 integrates with other Microsoft products such as Office 365, Power BI, and Azure, allowing businesses to streamline their operations, improve customer engagement, and make data-driven decisions. The platform is highly customizable, enabling organizations to tailor it to their specific needs and industry requirements. Dynamics 365 is designed to help businesses unify their processes, gain insights into their operations, and foster better relationships with customers. It provides tools for managing sales leads, automating marketing campaigns, tracking customer interactions, managing finances, optimizing operations, and more. The platform is available on a subscription basis, with different modules and pricing options to suit the needs of various businesses. == Applications == Microsoft Dynamics is largely made up of products developed by companies that Microsoft acquired: Dynamics GP (formerly Great Plains), Dynamics NAV (formerly Navision; now forked into Dynamics 365 Business Central), Dynamics SL (formerly Solomon), and Dynamics AX (formerly Axapta; now forked into Dynamics 365 Finance and Operations). The various products are aimed at different market segments, ranging from small and medium-sized businesses (SMBs) to large organizations with multi-language, currency, and legal entity capability. In recent years Microsoft Dynamics ERP has focused its marketing and innovation efforts on SaaS suites.
Microsoft Dynamics 365 contains more than 15 applications: Dynamics 365 Sales – Sales Leaders, Sales Operations Dynamics 365 Customer data platform – Customer Insights Dynamics 365 Customer data platform – Customer Voice Dynamics 365 Customer Service – Customer Service Leaders, Customer Service Operations Dynamics 365 Field Service – Field Service Leaders, Field Service Operations Dynamics 365 Remote Assist Dynamics 365 Human Resources – Attract, Onboard, Core HR Dynamics 365 Finance & Operations – Finance Leaders, Operation Leaders Dynamics 365 Supply Chain Management – Streamline planning, production, stock, warehouse, and transportation. Dynamics 365 Intelligent Order Management Dynamics 365 Commerce Dynamics 365 Project Operations Dynamics 365 Marketing—Adobe Marketing Cloud, Dynamics 365 for Marketing Dynamics 365 Artificial Intelligence – AI for Sales, AI for Customer Service, AI for Market Insight Dynamics 365 Mixed Reality – Remote Assist, Layout, Guides Dynamics 365 Business Central – ERP for SMBs == Microsoft Dynamics 365 for Finance and Operations == Microsoft Dynamics 365 for Finance and Operations Enterprise Edition (formerly Microsoft Dynamics AX) – ERP and CRM software-as-a-service product meant for mid-sized and large enterprises. Integrating both Dynamics AX and Dynamics CRM features, consisting of the following modules: for Financials and Operations, for Sales Enterprise, for Marketing, for Customer Service, for Field Service, for Project Service Automation. It is designed to be easily connected with Office 365 and PowerBI. === Microsoft Dynamics AX === Microsoft Dynamics AX was one of Microsoft's Enterprise resource planning (ERP) software products. In 2018, its thick-client interface was removed and the web product was rebranded as Microsoft Dynamics 365 for Finance and Operations as a part of the Dynamics 365 suite. MDCC or Microsoft Development Center Copenhagen was once the primary development center for Dynamics AX.
Microsoft Dynamics AX contained 19 core modules: ==== Traditional core (since Axapta 2.5) ==== General ledger – ledger, sales tax, currency, and fixed assets features Bank management – receives and pays cash Customer relationship management (CRM) – business relations contact and maintenance (customers, vendors, and leads) Accounts receivable – order entry, shipping, and invoicing Accounts payable – purchase orders, goods received into inventory Inventory management – inventory management and valuation Master planning (resources) – purchase and production planning Production – bills of materials, manufacturing tracking Store, manage, and interpret data. ==== Extended core ==== The following modules are part of the core of AX 2009 (AX 5.0) and available on a per-license basis in AX 4.0: Shop floor control Cost accounting Balanced scorecards Service management Expense management Payroll management Environmental management ==== MorphX and X++ ==== X++ integrates SQL queries into standard Java-style code. ==== Presence on the internet ==== Information about Axapta prior to the Microsoft purchase was available on technet.navision.com, a proprietary web-based newsgroup, which grew to a considerable number of members and posts before the Microsoft purchase in 2002. After Microsoft incorporated Axapta into their Business Solution suite, they transferred the newsgroup's content to the Microsoft Business Solutions newsgroup. The oldest Axapta Technet post that can be found dates to August 2000. ==== Events ==== Extreme Conferences: extreme365 is a conference for the Dynamics 365 Partner Community which now includes Dynamics AX, featuring an Executive Forum. ==== Personalization and predictive analytics ==== At the National Retail Federation (NRF) Conference 2016 in New York, Microsoft unveiled its partnership with Infinite Analytics, a Cambridge-based predictive analytics and personalization company.
== Microsoft Dynamics 365 Business Central ==
Microsoft Dynamics 365 Business Central (formerly Microsoft Dynamics NAV) is an ERP and CRM software-as-a-service product meant for small and mid-sized businesses. Integrating both Dynamics NAV and Dynamics CRM features, it consists of the following modules: for Financials and Operations, for Sales Professionals, and for Marketing. It is easily connected with Office 365 and Power BI. Microsoft Dynamics 365 Customer Engagement (formerly Microsoft CRM) contains modules to interact with customers: Marketing, Sales, Customer Service, and Field Service. Customer Service is a module used to automate customer service processes, providing performance data reports and dashboards.
=== Online and on-premises deployment ===
The Dynamics 365 Business Central system comes in both an online hosted (SaaS) version and an on-premises version for manual deployment and administration. Some features, such as integration with other online Microsoft services, are available only in the online edition, not in the on-premises version.
=== Localization ===
As an international ERP system, Business Central is available with 24 official localizations to work with the local features and requirements of various countries. Local partners provide an additional 47 localizations. The system is compliant with various international standards to meet local requirements, such as GDPR, IAS/IFRS and SOX.
=== Editions and licensing ===
There are two editions of Business Central, Essentials and Premium. Essentials covers Finance, Sales, Marketing, Purchasing, Inventory, Warehousing, and Project Management. Premium includes all of the Essentials functionality plus Service Management and Manufacturing features. With the arrival of NAV 2013, Microsoft introduced a new licensing model that operated on a concurrent-user basis. Under this model, user licenses were of two types: a full user or a limited user.
The full user has access to the entire system, whereas the limited user has only read access to the system and limited write access. From the Business Central rebrand launch, the licensing model changed to a per-seat license model, with a 3x concurrent-seat multiplier added to any existing perpetual licences from previous Dynamics NAV versions. Customers with a Dynamics NAV Extended Pack license were moved to the Premium edition.
== Microsoft Dynamics 365 Sales ==
Microsoft Dynamics 365 Sales is a customer relationship management software package developed by Microsoft. The current version is Dynamics 365. The name and licensing changed with the update from Dynamics CRM 2016. Dynamics 365 Sales comes with softphone capabilities.
== History ==
Microsoft Dynamics was a line of business applications comprising enterprise resource planning (ERP) and customer relationship management (CRM) products. Microsoft marketed Dynamics applications through a network of reselling partners who provided specialized services. Microsoft Dynamics formed part of "Microsoft Business Solutions". Dynamics can be used with other Microsoft programs and services, such as SharePoint, Yammer, Office 365, Azure and Outlook. The Microsoft Dynamics focus industries are retail, services, manufacturing, financial services, and the public sector. Microsoft Dynamics offers services for small, medium, and large businesses.
=== Business Central ===
Business Central was first published as Dynamics NAV and Navision, which Microsoft acquired in 2002.
==== Navision ====
Navision originated at PC&C A/S (Personal Computing and Consulting), a company founded in Denmark in 1984. PC&C released its first accounting package, PCPlus, in 1985—a single-user application with basic accounting functionality. The first version of Navision, a client/server-based accounting application that allowed multiple users to access the system simultaneously, followed in 1987.
The success of the product prompted the company to rename itself Navision Software A/S in 1995. The Navision product sold primarily in Denmark until 1990. From Navision version 3, the product was distributed in other European countries, including Germany and the United Kingdom. In 1995 the first version of Navision based on Microsoft Windows 95 was released. In 2000, Navision Software A/S merged with fellow Danish firm Damgaard A/S (founded 1983) to form NavisionDamgaard A/S. In 2001 the company changed its name to "Navision A/S". On July 11, 2002, Microsoft bought Navision A/S to go with its previous acquisition of Great Plains Software. Navision became a new division at Microsoft, named Microsoft Business Solutions, which also handled Microsoft CRM. In 2003 Microsoft announced plans to develop an entirely new ERP system (Project Green), but it later decided to continue development of all four ERP systems (Dynamics AX, Dynamics NAV, Dynamics GP and Dynamics SL). Microsoft launched all four ERP systems with the same new role-based user interface, SQL-based reporting and analysis, SharePoint-based portal, Pocket PC-based mobile clients and integration with Microsoft Office.
==== Dynamics NAV ====
In September 2005, Microsoft re-branded the product and re-released it as Microsoft Dynamics NAV. In December 2008, Microsoft released Dynamics NAV 2009, which contains both the original "classic" client and a new .NET Framework-based three-tier GUI called the RoleTailored Client (RTC). In the first quarter of 2014, NAV reached 102,000 customers. In 2016, Microsoft announced the creation of Dynamics 365 — a rebranding of the suite of Dynamics ERP and CRM products as part of a new online-only offering. As a part of this suite, the successor to NAV was codenamed "Madeira".
==== Dynamics 365 Business Central ====
In September 2017 at the Directions conference, Microsoft announced the new codename "Tenerife" as the next generation of the Dynamics NAV product.
This replaced codename "Madeira". On April 2, 2018, Business Central was released publicly and plans for semi-annual releases were announced. Business Central introduced a new AL language for development, and the codebase was translated from the Dynamics NAV language (C/AL).
=== Dynamics SL, Dynamics GP, Dynamics C5 ===
Several variants of the Dynamics brand have migration paths to Business Central; most have not had a new release since 2018. The later releases of the SL, GP, and C5 products adopted the Dynamics NAV Role-Tailored Client UI, which helped pave the transition to the Business Central product.
==== History of Dynamics C5 ====
Dynamics C5 was developed in Denmark as the successor to the DOS-based Concorde C4. The developing company, Damgaard Data, merged with Navision in 2001 and was subsequently acquired by Microsoft in 2002, which rebranded the solution from Navision C5 to Microsoft Dynamics C5. The product currently has more than 70,000 installations in Denmark.
==== History of Dynamics SL ====
Based in Findlay, Ohio, Solomon's roots go back more than 35 years, to when co-founders Gary Harpst, Jack Ridge and Vernon Strong started TLB, Inc. in 1980. TLB, Inc. stands for The Lord's Business, "to remind the founders why the business was started: to conduct the business according to biblical principles." TLB was later renamed Solomon Software, and then Microsoft Dynamics SL.
==== History of Dynamics GP ====
The Dynamics GP product was originally developed by Great Plains Software, an independent company located in Fargo, North Dakota, run by Doug Burgum. Dynamics Release 1.0 was released in February 1993. It was one of the first accounting packages in the United States that were designed and written to be multi-user and to run under Windows as 32-bit software. In late 2000, Microsoft announced the purchase of Great Plains Software. The acquisition was completed in April 2001. Dynamics GP is written in a language called Dexterity.
Previous versions were compatible with Microsoft SQL Server, Pervasive PSQL and Btrieve, and earlier versions also used C-tree, although after the buyout all new versions switched entirely to Microsoft SQL Server databases. Dynamics GP will no longer be updated after September 2029, with security updates through April 2031.
=== Finance ===
Microsoft Dynamics 365 Finance is a Microsoft enterprise resource planning (ERP) system for medium to large organizations. The software, part of the Dynamics 365 product line, was first made generally available in November 2016, initially branded as Dynamics 365 for Operations. In July 2017, it was rebranded to Dynamics 365 for Finance and Operations. At the same time, Microsoft rebranded its business software suite for small businesses (Business Edition, Financials) to Finance and Operations, Business Edition; however, the two applications are based on completely different platforms. Its history includes:
1998 (March) – Axapta, a collaboration between IBM and the Danish firm Damgaard Data, released in the Danish and US markets. IBM returned all rights in the product to Damgaard Data shortly after the release of Version 1.5.
2000 – Damgaard Data merged with Navision Software A/S to form NavisionDamgaard, later named Navision A/S. Released Axapta 2.5.
2002 – Microsoft acquired Navision A/S. Released Axapta 3.0.
2006 – Released Microsoft Dynamics AX 4.0.
2008 – Released Microsoft Dynamics AX 2009.
2011 – Released Microsoft Dynamics AX 2012. It was made available and supported in more than 30 countries and 25 languages. Dynamics AX is used in over 20,000 organizations of all sizes, worldwide.
2016 – Released Microsoft Dynamics AX 7, later rebranded to Dynamics 365 for Operations. This update was a major revision with a completely new UI delivered through a browser-based HTML5 client, and was initially only available as a cloud-hosted application.
This version lasted only a few months, though, as Dynamics AX was rebranded Microsoft Dynamics 365 for Operations in October 2016, and once more as Dynamics 365 for Finance and Operations in July 2017.
2017 – Rebranded to Dynamics 365 for Finance and Operations, Enterprise Edition (not to be mistaken for Dynamics 365 for Finance and Operations, Business Edition, which is based on the former Microsoft Dynamics NAV).
2018 – Rebranded to Dynamics 365 for Finance and Operations.
2018 – The Human Resources module became Dynamics 365 for Talent, now Dynamics 365 Human Resources.
2020 – Rebranded and split into two products: Dynamics 365 Finance and Dynamics 365 Supply Chain Management.
2023 – Dynamics 365 Human Resources re-integrated.
=== Sales ===
Microsoft Dynamics 365 Sales has undergone several iterations over its history.
==== Microsoft CRM 1.2 ====
Microsoft CRM 1.2 was released on December 8, 2003, and was not widely adopted by industry. It was not possible to create custom entities, but a software development kit (SDK) was available that used SOAP and XML endpoints to interact with the system.
==== Microsoft Dynamics CRM 3.0 ====
The second version was rebranded as Microsoft Dynamics CRM 3.0 (version 2.0 was skipped entirely) to signify its inclusion within the Dynamics product family, and was released on December 5, 2005. Notable updates over version 1.2 are the ease of creating customizations to CRM, the switch from Crystal Reports to Microsoft SQL Reporting Services, and the ability to run on Windows Vista and Outlook 2007. Significant additions released later by Microsoft also allowed Dynamics CRM 3.0 to be accessed from various mobile devices and integrated with Siebel Systems. This was the first version that saw reasonable take-up by customers. Custom entities and 1:N relations between the system and custom entities could now be created.
==== Microsoft Dynamics CRM 4.0 ====
Dynamics CRM 4.0 (a.k.a.
Titan) was introduced in December 2007 (RTM build number 4.0.7333.3). It features multi-tenancy, improved reporting security, data importing, direct mail merging and support for newer technologies such as Windows Server 2008 and SQL Server 2008 (Update Rollup 4). Dynamics CRM 4.0 also implements CRM Online, a hosted solution offered directly by Microsoft. The multi-tenancy option also allows ISVs to offer hosted solutions to end customers. Dynamics CRM 4.0 was the first version of the product to see significant take-up in the market, and it passed the 1 million user mark in July 2009. Additional support for N:N relations was added, which eliminated the need for many 'in-between' entities. "Connections" were also introduced in favour of "Relations". The UI design was based on the Office 2007 look and feel, with the same blue shading and round "start" button.
==== Microsoft Dynamics CRM 2011 ====
Dynamics CRM 2011 was released to open beta in February 2010. It then went into the release candidate stage in December 2010. The product was then released in February 2011 (build number 5.0.9688.583). Browsers such as Internet Explorer, Chrome and Firefox are fully supported since Microsoft Dynamics CRM 2011 Update Rollup 12. Because of this browser compatibility, Rollup 12 was highly anticipated, but it also caused a lot of stress for customers that had used unsupported customizations: Rollup 12 broke those customizations, and clients had to rethink their changes. Microsoft offered additional wizards to pinpoint the problems.
==== Microsoft Dynamics CRM 2013 ====
Dynamics CRM 2013 was released to a closed beta group on July 28, 2013. Dynamics CRM 2013 Online went live for new signups in October 2013. It was released in November 2013 (build number 6.0.0000.0809).
==== Microsoft Dynamics CRM 2015 ====
On September 16, 2014, Microsoft announced that Microsoft Dynamics CRM 2015, as well as updates to its Microsoft Dynamics CRM Online and Microsoft Dynamics Marketing services, would be generally available in the fourth quarter of 2014. Microsoft also released a preview guide with details. On November 30, 2014, Microsoft announced the general availability of Microsoft Dynamics CRM 2015 and the 2015 Update of Microsoft Dynamics Marketing. On January 6, 2015, Microsoft announced the availability of a CRM Cloud service specifically for the US Government that is designed for FedRAMP compliance.
==== Microsoft Dynamics CRM 2016 ====
Microsoft Dynamics CRM 2016 was officially released on November 30, 2015. The versions for CRM 2016 were 8.0, 8.1 and 8.2; with version 8.2, the name Microsoft Dynamics CRM 2016 was changed to Dynamics 365. The release includes advancements in intelligence, mobility and service, with significant productivity enhancements. In June 2016, Business Card Reader for MS Dynamics, an application that sends scanned information from business cards into MS Dynamics CRM, was developed, followed by the Call Tracker application in 2017.
==== Microsoft Dynamics 365 Sales ====
Microsoft Dynamics 365 was officially released on November 1, 2016, as the successor to Dynamics CRM. The product combines Microsoft business products (CRM and ERP Dynamics AX). A softphone dialer can be added as an extension. The on-premises application, called Dynamics 365 Customer Engagement, contained the following applications: Dynamics 365 for Sales, Dynamics 365 for Customer Service, Dynamics 365 for Marketing, Dynamics 365 for Field Service, and Dynamics 365 for Project Service Automation. The Dynamics 365 for Finance and Operations offerings cover ERP needs such as bookkeeping, invoice and order handling, and manufacturing.
In Dynamics 365 version 9.0.0.1, many notable features such as virtual entities, auto-numbering attributes and multi-select option sets were introduced.
== Product updates ==
=== October 2018 update ===
The update released in October 2018 included new features for sales, marketing, customer service, and recruitment.
=== April 2019 update ===
This update was released on April 5, 2019. The features added with the update included a new user interface (UUI) for embedding canvas apps created in PowerApps, and it also brought back the tabs facility. The update also led to the removal of Xrm.Page.data.
=== February 2020 update ===
An update was announced on February 19, 2019. The update included additions to Customer Insights, Microsoft's customer data platform (CDP), such as new first- and third-party data connections. In addition, this update introduced new sales forecasting tools and the Dynamics 365 Sales Engagement Center. Dynamics 365 Project Operations was also introduced in this update.
=== October 2021 update (wave 1) ===
An update was announced on October 5, 2019. This update included a replacement of bank reconciliation reports. The payment reconciliation journal was improved to support preview posting, separate number series, and user-defined document numbers. Microsoft Dynamics 365 also added the Correct Dimensions action. With this update, Microsoft Dynamics 365 gained integration with the Microsoft Teams search box, Microsoft Word, and Microsoft Universal Print technology.
== Support and end of life ==
== Related products ==
Microsoft Dynamics includes a set of related products:
Microsoft Dynamics Management Reporter. Management Reporter is a financial reporting and analysis application. Its main feature is the creation of income statements, balance sheet statements, cash flow statements and other financial reports. Reports can be stored in a centralized Report Library along with external supporting files.
Security on reports and files may be controlled using Windows Authentication and SQL Server.
Microsoft Dynamics for Retail (formerly Microsoft Dynamics RMS, QuickSell 2000 and Dynamics POS)
Microsoft Dynamics for Marketing (formerly MDM and MarketingPilot 2012)
Microsoft Dynamics Social Listening (formerly Netbreeze 2013)
Power Automate, formerly Microsoft Flow (until 2019), a toolkit similar to IFTTT for implementing business workflow products
Power Automate Desktop, robotic process automation software for automating graphical user interfaces (acquired in May 2020)
Parature, customer engagement software in the customer support and service channels (acquired in January 2014)
Microsoft also sells Sure Step as an implementation methodology for Microsoft Dynamics for its re-sellers. In July 2018, Microsoft announced Dynamics 365 AI for sales applications.
== See also ==
Microsoft Azure
Microsoft Dataverse
Microsoft Office
Microsoft Power Platform
List of Microsoft software
== References ==
== Further reading ==
Bellu, Renato (2018). Microsoft Dynamics 365 For Dummies. For Dummies. ISBN 978-1119508861.
Houdeshell, Robert (2021). Microsoft Dynamics 365 Project Operations: Deliver profitable projects with effective project planning and productive operational workflows. Packt Publishing. ISBN 978-1801072076.
Newell, Eric (2021). Mastering Microsoft Dynamics 365 Implementations. Sybex. ISBN 978-1119789321.
Brummel, Marije; Studebaker, David; Studebaker, Chris (2019). Programming Microsoft Dynamics 365 Business Central: Build customized business applications with the latest tools in Dynamics 365 Business Central. Packt Publishing. ISBN 978-1789137798.
Demiliani, Stefano; Tacconi, Duilio (2019). Mastering Microsoft Dynamics 365 Business Central: Discover extension development best practices, build advanced ERP integrations, and use DevOps tools. Packt Publishing. ISBN 978-1789951257.
Yadav, JJ; Shukla, Sandeep; Mohta, Rahul; Kasat, Yogesh (2020).
Implementing Microsoft Dynamics 365 for Finance and Operations Apps: Learn best practices, architecture, tools, techniques, and more. Packt Publishing. ISBN 978-1789950847.
Luszczak, Andreas (2018). Using Microsoft Dynamics 365 for Finance and Operations: Learn and understand the functionality of Microsoft's enterprise solution. Springer Vieweg. ISBN 978-3658241063.
== External links ==
Official website
Microsoft Dynamics AX 2012 Launches Worldwide
Microsoft Dynamics 365 for Finance and Operations official webpage
Wikipedia/Microsoft_Dynamics_365
An agile application is the result of service-oriented architecture and agile development paradigms. An agile application is distinguished from average applications in that it is a loosely coupled set of services with a decoupled orchestration layer, it is easily modified to address changing business needs, and it is scalable by design. Using agile application development paradigms, a set of services can be built to address business-specific functional components. These services can be exposed using any one of the standard communication protocols, including web services. A well-designed agile application will standardize on a common communication protocol and a common data model. The services can then be orchestrated using a decoupled layer to implement business logic. There are many tools by different vendors (IBM, Intel, etc.) in the industry that can support the orchestration layer. The decoupled nature of an agile application permits it to accommodate fault tolerance and scalability. For example, scalability is addressed by focusing the attention of the QA team on the set of services that are causing the bottleneck, as opposed to trying to solve scalability for the entire system, which can be a much bigger problem. Similarly, fault tolerance can be achieved by deploying multiple instances of a service: if one service fails, another instance can pick up the load. For stateless services, this can lead to continuous availability. Following the agile development paradigm, each unit of the development cycle can be focused on a single service. Furthermore, many of these development cycles can run in parallel, leading to faster development completion. Agile is a means of responsiveness based on customization rather than stable production or standardization. == References == == Further reading == Nanocomputers and Swarm Intelligence by Jean-Baptiste Waldner, ISTE, ISBN 978-1-84704-002-2, 2007. 
Agile Web Development with Rails, 2nd Edition, by Dave Thomas; David Heinemeier; Leon Breedt, Rails, ISBN 0-9776166-3-0, 2007. == See also == Semantic Web Semantic Grid Ontology (computer science) Semantic Web Rule Language (SWRL)
Wikipedia/Agile_application
RailTopoModel is a systemic data model for describing topology-based railway infrastructure as needed by various applications. RailTopoModel was initially developed under the patronage of the International Union of Railways (UIC) and was released as International Railway Standard (IRS) 30100 in April 2016. It has been described as a common data model for the railway sector. RailTopoModel is currently continued by UIC as RailSystemModel, a re-branding resulting from the extension of its scope. On the other hand, RTM development (from RTM 1.2) went on as a fork initiated by the railML community and managed by the organisation railML.org. == Motivation == In the field of railway networks, many non-standard descriptions are needed for addressing specific needs: RINF to describe infrastructure; ETCS for train control and protection; INSPIRE for spatial information. Network operators and suppliers took particular initiatives to harmonize their network representations for gathering, providing, or using network-related data. The purpose of RailTopoModel is to define a general, standard model for railway infrastructure. == History == The development of the RailTopoModel is a result of the ERIM project (abbreviation for European Rail Infrastructure Modelling, previously referred to as European Rail Infrastructure Masterplan) within UIC that aimed at standardized data representation and exchange concerning railway networks. In 2013, starting from the assessment by a small group of railway infrastructure managers of the limitations of current exchange formats for ETCS, RINF, INSPIRE, and European projects based on network topology, the UIC ERIM feasibility study was launched. The objective of this working group was to qualify the business needs, analyze the existing solutions and experiences, and propose a project plan to build a universal "language" to improve railway data exchange, and to support the design of an infrastructure data exchange format based on topology. 
Based on this study a topology model, the ‘UIC RailTopoModel’, was developed. In April 2015, RTM V1.0 was released. The ‘UIC RailTopoModel’ was released as a UIC recommendation called International Railway Standard (IRS 30100) in spring 2016. Version 1.2, re-branded RailSystemModel 1.2, was released in 2021 and published online in January 2022. railML.org, a European open source initiative providing a standard for data exchange in railway networks since 2001, offered the first use case for RailTopoModel through a new version of its infrastructure schema, railML3. Under the leadership of railML.org, the RailTopoModel continued to be developed, leading to the publication of RailTopoModel 1.2 in 2018 and RailTopoModel 1.4 in 2022. == Structure == RailTopoModel is based on connexity graph theory and is defined in terms of UML. Its emphasis lies on:
Core elements — identification of all network components;
Referencing — defining standards for addressing locations, e.g. via geographical coordinates. The backbone of referencing in RailTopoModel is a linear referencing system;
Topology — expressing the relations between the elements;
Business — allowing objects and events to be projected onto the topology. These can be spots (e.g. a signal), linear entities (e.g. a tunnel) or areas (e.g. a train station);
Aggregation — allowing for standardized and reversible aggregation, e.g. to visualize the network at a broader scale. There are four predefined aggregation levels:
nano: very large scale, depicting e.g. the interior of a switch
micro: large scale, depicting e.g. switches and buffer stops connected by tracks
meso: intermediate scale, depicting e.g. operation points and the number of tracks connecting them
macro: small scale, depicting main operation points and the corridors between them
The model allows defining as many levels as is deemed useful, while ensuring consistency of data between levels. 
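The graph-based core — network elements, the relations connecting them, and reversible aggregation across levels — can be illustrated with a short sketch. The class and attribute names below are illustrative only, not the UML actually defined in IRS 30100:

```python
# Minimal sketch of a RailTopoModel-style topology (illustrative, not IRS 30100).
# The network is a connexity graph: NetElements are its edges, relations
# describe how elements connect, and coarser levels aggregate finer ones.

class NetElement:
    """A topological element, e.g. a track section at the micro level."""
    def __init__(self, element_id, length_m):
        self.id = element_id
        self.length_m = length_m  # business data projected onto the topology

class ElementRelation:
    """Connects two NetElements, e.g. at a joint, switch, or buffer stop."""
    def __init__(self, element_a, element_b):
        self.elements = (element_a, element_b)

def aggregate(elements):
    """A meso/macro view can aggregate micro-level elements, e.g. by summing
    their lengths, without losing the ability to drill back down."""
    return sum(e.length_m for e in elements)

# Two micro-level track sections connected end to end:
t1 = NetElement("T1", 350.0)
t2 = NetElement("T2", 420.0)
joint = ElementRelation(t1, t2)
```

At the meso level the two sections could then appear as a single element of aggregated length 770 m, while the micro-level detail remains recoverable.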
Ideally, standardisation should ensure that references and switches between aggregation levels are bijective and that different applications are able to exchange data. == Applications == Current applications are:
railML: the topology core of railML's schema version 3 is defined on the basis of RailTopoModel.
The Ariane model as the foundation of all SNCF Réseau IT projects: the definition of the Ariane model at SNCF Réseau employs the same concept: it combines a connexity graph (for the topology of the network) with an object approach to define a systemic model of the railway system. The principal benefit of this approach is to distinguish between the business objects of the system and the processes that manage them. Moreover, it allows for an evolvable and understandable model. This type of modeling is needed to build a virtual railway system to simulate all processes.
Eulynx, a European initiative in the area of signalling, uses RailSystemModel to provide its Data Preparation model with quantities, units, and general patterns for observations and measurements, as well as network topology, geographic positioning, and localisation of entities on the network.
== External links ==
https://www.railtopomodel.org/ — former project website
https://www.railtopomodel.org/en/download-rtm.html — download of former versions (RTM 1.2 and later a fork by railML.org)
https://rsm.uic.org/ – current RSM website
== References ==
Wikipedia/RailTopoModel
Janus clinical trial data repository is a clinical trial data repository (or data warehouse) standard sanctioned by the U.S. Food and Drug Administration (FDA). It was named for the Roman god Janus, who had two faces, one that could see into the past and one that could see into the future. The analogy is that the Janus data repository would enable the FDA and the pharmaceutical industry both to look retrospectively into past clinical trials and to look at one or more current clinical trials (or even future clinical trials, through better enablement of clinical trial design). The Janus data model is a relational database model and is based on the SDTM standard in terms of many of its basic concepts, such as the loading and storing of findings, events, interventions and inclusion data. However, Janus itself is a data warehouse independent of any single clinical trials submission standard. For example, Janus can also store pre-clinical (non-human) submission information, in the form of the SEND non-clinical standard. The goals of Janus are as follows:
Create an integrated data platform for most commercial tools for review, analysis and reporting.
Reduce the overall cost of existing information gathering and submissions development processes, as well as of review and analysis of information.
Provide a common data model, based on the SDTM standard, to represent four classes of clinical data submitted to regulatory agencies: tabulation datasets, patient profiles, listings, etc.
Provide central access to standardized data, and provide common data views across collaborative partners.
Support cross-trial analyses for data mining, help detect clinical trends and address clinical hypotheses, and perform more advanced, robust analysis. This will enable the ability to contrast and compare data from multiple clinical trials to help improve efficacy and safety. 
Facilitate a more efficient review process and the ability to locate and query data more easily through automated processes and data standards.
Provide a potentially broader data view for all clinical trials with proper security, de-identified patient data, and proper agreements in place to share data.
== External links ==
fda.gov
http://gforge.nci.nih.gov/docman/?group_id=142
http://gforge.nci.nih.gov/docman/index.php?group_id=180
https://web.archive.org/web/20060929233205/http://crix.nci.nih.gov/projects/janus/
Wikipedia/JANUS_clinical_trial_data_repository
A canonical model is a design pattern used to communicate between different data formats. Essentially: create a data model which is a superset of all the others ("canonical"), and create a "translator" module or layer to/from which all existing modules exchange data with other modules. The canonical model acts as a middleman. Each module now only needs to know how to communicate with the canonical model and doesn't need to know the implementation details of the other modules. == Details == A form of enterprise application integration, it is intended to reduce costs and standardize on agreed data definitions associated with integrating business systems. A canonical model is any model that is canonical in nature, meaning a model that is in the simplest form possible, based on a standard enterprise application integration (EAI) solution. Most organizations also adopt a set of standards for message structure and content (message payload). The desire for a consistent message payload results in the construction of an enterprise or business domain canonical model: a common view within a given context. Often the term canonical model is used interchangeably with integration strategy, and it often entails a move to a message-based integration methodology. The canonical data model is an enterprise design pattern which provides common data naming, definitions and values within a generalized data framework; its advantages are a reduced number of data translations and reduced maintenance effort. A typical migration from point-to-point interfacing to message-based integration begins with a decision on the middleware to be used to transport messages between endpoints. Often this decision results in the adoption of an enterprise service bus (ESB) or enterprise application integration (EAI) solution. 
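The translator layer described above can be sketched in a few lines (a minimal illustration with hypothetical field names, not any particular vendor's API). With a canonical model, n formats need only 2n translators — to and from the canonical form — instead of n(n-1) point-to-point ones:

```python
# Canonical-model pattern: every format converts only to/from one shared,
# canonical representation (here a plain dict with agreed field names).

def vendor_a_to_canonical(rec):
    # Vendor A's hypothetical format uses "cust_name" / "dob"
    return {"customer_name": rec["cust_name"], "birth_date": rec["dob"]}

def canonical_to_vendor_b(canon):
    # Vendor B's hypothetical format uses "name" / "birthDate"
    return {"name": canon["customer_name"], "birthDate": canon["birth_date"]}

def translate_a_to_b(rec):
    # No direct A-to-B translator exists: the canonical model is the middleman.
    return canonical_to_vendor_b(vendor_a_to_canonical(rec))
```

Adding a third format then requires only one new pair of translators to and from the canonical form, not a translator to every existing format.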
The desire for a consistent message payload results in the construction of an enterprise form of XML schema built from the common model objects, thus providing the desired consistency and re-usability while ensuring data integrity. == See also == Canonical schema pattern Common data model Enterprise information integration Enterprise integration Information architecture List of XML schemas Service-oriented architecture Web service XML schema == References == == Bibliography == "Enterprise Integration Patterns: Canonical Data Model". "Metadata Hub and Spokes (Canonical Data Domain)". == External links == Forrester Research, Canonical Model Management Forum Canonical Model, Canonical Schema, and Event Driven SOA Forrester Research, Canonical Information Modeling
Wikipedia/Canonical_model
In computer science, a data buffer (or just buffer) is a region of memory used to store data temporarily while it is being moved from one place to another. Typically, the data is stored in a buffer as it is retrieved from an input device (such as a microphone) or just before it is sent to an output device (such as speakers); however, a buffer may be used when data is moved between processes within a computer, comparable to buffers in telecommunication. Buffers can be implemented in a fixed memory location in hardware or by using a virtual data buffer in software that points at a location in the physical memory. In all cases, the data stored in a data buffer is stored on a physical storage medium. The majority of buffers are implemented in software, typically using RAM to store temporary data because of its much faster access time compared with hard disk drives. Buffers are typically used when there is a difference between the rate at which data is received and the rate at which it can be processed, or in the case that these rates are variable, for example in a printer spooler or in online video streaming. In a distributed computing environment, data buffers are often implemented in the form of burst buffers, which provide distributed buffering services. A buffer often adjusts timing by implementing a queue (or FIFO) algorithm in memory, simultaneously writing data into the queue at one rate and reading it at another rate. == Applications == Buffers are often used in conjunction with I/O to hardware, such as disk drives, sending or receiving data to or from a network, or playing sound on a speaker. A line for a roller coaster in an amusement park shares many similarities with a buffer. People who want to ride come in at an unknown and often variable pace, but the roller coaster can load people in bursts (as a coaster arrives and is loaded). The queue area acts as a buffer: a temporary space where those wishing to ride wait until the ride is available.
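The FIFO queue described above can be sketched as a fixed-size ring buffer, the usual in-memory implementation. This is a minimal single-threaded illustration with hypothetical names and sizes, not a production implementation (a real producer/consumer buffer would also need synchronization):

```c
/* Minimal sketch of a FIFO ring buffer: a producer and a consumer can run
   at different rates, with data waiting in the buffer in between. */
#include <assert.h>
#include <stddef.h>

#define BUF_CAP 8

struct ring {
    unsigned char data[BUF_CAP];
    size_t head;   /* next write position */
    size_t tail;   /* next read position  */
    size_t count;  /* items currently stored */
};

/* Returns 1 on success, 0 if the buffer is full (writer must wait or drop). */
static int ring_put(struct ring *r, unsigned char byte) {
    if (r->count == BUF_CAP) return 0;
    r->data[r->head] = byte;
    r->head = (r->head + 1) % BUF_CAP;  /* wrap around at the end */
    r->count++;
    return 1;
}

/* Returns 1 on success, 0 if the buffer is empty (reader must wait). */
static int ring_get(struct ring *r, unsigned char *out) {
    if (r->count == 0) return 0;
    *out = r->data[r->tail];
    r->tail = (r->tail + 1) % BUF_CAP;
    r->count--;
    return 1;
}
```

The head and tail indices advance independently, which is exactly how the buffer absorbs a difference between the write rate and the read rate until it fills up or drains.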
Buffers are usually used in a FIFO (first in, first out) method, outputting data in the order it arrived. Buffers can increase application performance by allowing synchronous operations such as file reads or writes to complete quickly instead of blocking while waiting for hardware interrupts to access a physical disk subsystem; instead, an operating system can immediately return a successful result from an API call, allowing an application to continue processing while the kernel completes the disk operation in the background. Further benefits can be achieved if the application is reading or writing small blocks of data that do not correspond to the block size of the disk subsystem, which allows a buffer to be used to aggregate many smaller read or write operations into block sizes that are more efficient for the disk subsystem, or in the case of a read, sometimes to completely avoid having to physically access a disk. == Telecommunication buffer == A buffer routine or storage medium used in telecommunications compensates for a difference in rate of flow of data or time of occurrence of events when data is transferred from one device to another. Buffers are used for many purposes, including: Interconnecting two digital circuits operating at different rates. Holding data for later use. Allowing timing corrections to be made on a data stream. Collecting binary data bits into groups that can then be operated on as a unit. Delaying the transit time of a signal in order to allow other operations to occur. == Examples == The BUFFERS command/statement in CONFIG.SYS of DOS. The buffer between a serial port (UART) and a modem. The COM port speed may be 38400 bit/s while the modem may have only a 14400 bit/s carrier. The integrated disk buffer on a hard disk drive, solid state drive or BD/DVD/CD drive. The integrated SRAM buffer on an Ethernet adapter. 
The Windows NT kernel also manages a portion of main memory as the buffer for slower devices such as sound cards and network interface controllers. The framebuffer on a video card. == History == An early mention of a print buffer is the "Outscriber" devised by image processing pioneer Russel A. Kirsch for the SEAC computer in 1952: One of the most important problems in the design of automatic digital computers is that of getting the calculated results out of the machine rapidly enough to avoid delaying the further progress of the calculations. In many of the problems to which a general-purpose computer is applied the amount of output data is relatively big — so big that serious inefficiency would result from forcing the computer to wait for these data to be typed on existing printing devices. This difficulty has been solved in the SEAC by providing magnetic recording devices as output units. These devices are able to receive information from the machine at rates up to 100 times as fast as an electric typewriter can be operated. Thus, better efficiency is achieved in recording the output data; transcription can be made later from the magnetic recording device to a printing device without tying up the main computer. == See also == Buffer overflow Buffer underrun Circular buffer Disk buffer Streaming media Frame buffer for use in graphical display Double buffering and Triple buffering for techniques mainly in graphics Depth buffer, Stencil buffer, for different parts of image information Variable length buffer Optical buffer MissingNo., the result of buffer data not being cleared properly in Pokémon Red and Blue UART buffer ENOBUFS, POSIX error caused by lack of memory in buffers Write buffer, a type of memory buffer Zero-copy 512k day == References ==
Wikipedia/Buffer_(computer_science)
String functions are used in computer programming languages to manipulate a string or query information about a string (some do both). Most programming languages that have a string datatype will have some string functions, although there may be other low-level ways within each language to handle strings directly. In object-oriented languages, string functions are often implemented as properties and methods of string objects. In functional and list-based languages a string is represented as a list (of character codes), so all list-manipulation procedures could be considered string functions. However, such languages may implement a subset of explicit string-specific functions as well. For functions that manipulate strings, modern object-oriented languages like C# and Java have immutable strings and return a copy (in newly allocated dynamic memory), while others, like C, manipulate the original string unless the programmer copies the data to a new string. See for example Concatenation below. The most basic example of a string function is the length(string) function, which returns the length of a string. For example, length("hello world") would return 11. Other languages may have string functions with similar or exactly the same syntax, parameters or outcomes. For example, in many languages the length function is usually represented as len(string). The list below aims to help limit this confusion. == Common string functions (multi language reference) == String functions common to many languages are listed below, including the different names used, to help programmers find the equivalent function in a language. Note that string concatenation and regular expressions are handled in separate pages. Statements in guillemets (« … ») are optional.
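For illustration, here are the length example from the text and a simple concatenation, expressed as C standard-library calls (the demo_* wrapper names are ours, not part of any standard):

```c
/* Two of the common string functions, using C's standard library. */
#include <assert.h>
#include <string.h>

/* Mirrors the length("hello world") example: counts characters
   up to (but not including) the terminating NUL. */
static size_t demo_length(const char *s) {
    return strlen(s);
}

/* Concatenation: copies a into out, then appends b.
   The caller must guarantee that out has enough capacity. */
static char *demo_concat(char *out, const char *a, const char *b) {
    strcpy(out, a);
    return strcat(out, b);
}
```

The same operations appear under different names elsewhere in the table: strlen corresponds to len/length, strstr to find/index/instr, and strcmp to the integer-result compare.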
=== CharAt === # Example in ALGOL 68 # "Hello, World"[2]; ¢ yields 'e' (ALGOL 68 indexes from 1) ¢ === Compare (integer result) === === Compare (relational operator-based, Boolean result) === === Concatenation === === Contains === ¢ Example in ALGOL 68 ¢ string in string("e", loc int, "Hello mate"); ¢ returns true ¢ string in string("z", loc int, "word"); ¢ returns false ¢ === Equality === Tests if two strings are equal. See also #Compare (integer result) and #Compare (relational operator-based, Boolean result). Note that doing equality checks via a generic Compare with integer result is not only confusing for the programmer but is often a significantly more expensive operation; this is especially true when using "C-strings". === Find === Examples Common Lisp C# Raku Scheme Visual Basic Smalltalk === Find character === ^a Given a set of characters, SCAN returns the position of the first character found, while VERIFY returns the position of the first character that does not belong to the set. === Format === === Inequality === Tests if two strings are not equal. See also #Equality. === index === see #Find === indexof === see #Find === instr === see #Find === instrrev === see #rfind === join === === lastindexof === see #rfind === left === === len === see #length === length === === locate === see #Find === Lowercase === === mid === see #substring === partition === === replace === === reverse === === rfind === === right === === rpartition === === slice === see #substring === split === === sprintf === see #Format === strip === see #trim === strcmp === see #Compare (integer result) === substring === === Uppercase === === trim === trim or strip is used to remove whitespace from the beginning, end, or both beginning and end, of a string. Other languages In languages without a built-in trim function, it is usually simple to create a custom function which accomplishes the same task.
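As the last sentence suggests, a custom trim is easy to write. One possible C sketch follows, combining a left skip (ltrim) with a right truncation (rtrim) to trim in place; the function name and strategy are illustrative, since C has no standard trim:

```c
/* Custom trim: removes leading and trailing whitespace in place.
   Returns s for convenient chaining. */
#include <assert.h>
#include <ctype.h>
#include <string.h>

static char *trim(char *s) {
    char *start = s;
    /* ltrim: advance past any leading whitespace */
    while (isspace((unsigned char)*start)) start++;
    /* rtrim: shrink the length past any trailing whitespace */
    size_t len = strlen(start);
    while (len > 0 && isspace((unsigned char)start[len - 1]))
        len--;
    memmove(s, start, len);  /* shift the kept characters left... */
    s[len] = '\0';           /* ...and re-terminate */
    return s;
}
```

Note the casts to unsigned char before isspace: passing a negative char value to the ctype functions is undefined behavior in C, a common pitfall in hand-rolled trims.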
==== APL ==== APL can use regular expressions directly: Alternatively, a functional approach combining Boolean masks that filter away leading and trailing spaces: Or reverse and remove leading spaces, twice: ==== AWK ==== In AWK, one can use regular expressions to trim: or: ==== C/C++ ==== There is no standard trim function in C or C++. Most of the available string libraries for C contain code which implements trimming, or functions that significantly ease an efficient implementation. The function has also often been called EatWhitespace in some non-standard C libraries. In C, programmers often combine an ltrim and an rtrim to implement trim: The open source C++ library Boost has several trim variants, including a standard one: With Boost's function named simply trim, the input sequence is modified in place and no result is returned. Another open source C++ library, Qt, has several trim variants, including a standard one: The Linux kernel also includes a strip function, strstrip(), since 2.6.18-rc1, which trims the string "in place". Since 2.6.33-rc1, the kernel uses strim() instead of strstrip() to avoid false warnings. ==== Haskell ==== A trim algorithm in Haskell: may be interpreted as follows: f drops the preceding whitespace and reverses the string. f is then applied again to its own output. Note that the type signature (the second line) is optional. ==== J ==== The trim algorithm in J is a functional description: That is: filter (#~) for non-space characters (' '&~:) between leading (+./\) and (*.) trailing (+./\.) spaces. ==== JavaScript ==== There is a built-in trim function in JavaScript 1.8.1 (Firefox 3.5 and later) and the ECMAScript 5 standard. In earlier versions it can be added to the String object's prototype as follows: ==== Perl ==== Perl 5 has no built-in trim function. However, the functionality is commonly achieved using regular expressions. Example: or: These examples modify the value of the original variable $string.
Also available for Perl is StripLTSpace in String::Strip from CPAN. There are, however, two functions that are commonly used to strip whitespace from the end of strings, chomp and chop: chop removes the last character from a string and returns it. chomp removes the trailing newline character(s) from a string if present. (What constitutes a newline is $INPUT_RECORD_SEPARATOR dependent). In Raku, Perl's sister language, strings have a trim method. Example: ==== Tcl ==== The Tcl string command has three relevant subcommands: trim, trimright and trimleft. For each of those commands, an additional argument may be specified: a string that represents a set of characters to remove—the default is whitespace (space, tab, newline, carriage return). Example of trimming vowels: ==== XSLT ==== XSLT includes the function normalize-space(string) which strips leading and trailing whitespace, in addition to replacing any whitespace sequence (including line breaks) with a single space. Example: XSLT 2.0 includes regular expressions, providing another mechanism to perform string trimming. Another XSLT technique for trimming is to utilize the XPath 2.0 substring() function. == References ==
Wikipedia/String_function
The Media Control Interface — MCI for short — is a high-level API developed by Microsoft and IBM for controlling multimedia peripherals connected to a Microsoft Windows or OS/2 computer, such as CD-ROM players and audio controllers. MCI makes it very simple to write a program which can play a wide variety of media files, and even to record sound, by just passing commands as strings. It uses device descriptions held in the Windows registry or in the [mci] section of the system.ini file. One advantage of this API is that MCI commands can be sent both from a programming language and from a scripting language (OpenScript, Lingo, and so on). Examples of such functions are mciSendCommand and mciSendString. The MCI interface was eventually phased out in favor of the DirectX APIs, first released in 1995. == MCI Devices == The Media Control Interface consists of 7 parts: cdaudio digitalvideo overlay sequencer vcr videodisc waveaudio Each of these so-called MCI devices (e.g. CD-ROM or VCD player) can play a certain type of files, e.g. AVIVideo plays .avi files, CDAudio plays CD-DA tracks among others. Other MCI devices have also been made available over time. == Playing media through the MCI interface == To play a type of media, it needs to be initialized correctly using MCI commands. These commands are subdivided into categories: System Commands Required Commands Basic Commands Extended Commands A full list of MCI commands can be found at Microsoft's MSDN Library. == See also == DirectShow == References == == External links == Microsoft MCI Reference - MSDN Library
Wikipedia/Multimedia_Control_Interface
An ideogram or ideograph (from Greek idéa 'idea' + gráphō 'to write') is a symbol that is used within a given writing system to represent an idea or concept in a given language. (Ideograms are contrasted with phonograms, which indicate sounds of speech and thus are independent of any particular language.) Some ideograms are more arbitrary than others: some are only meaningful assuming preexisting familiarity with some convention; others more directly resemble their signifieds. Ideograms that represent physical objects by visually illustrating them are called pictograms. Numerals and mathematical symbols are ideograms, for example ⟨1⟩ 'one', ⟨2⟩ 'two', ⟨+⟩ 'plus', and ⟨=⟩ 'equals'. The ampersand ⟨&⟩ is used in many languages to represent the word and, originally a stylized ligature of the Latin word et. Other typographical examples include ⟨§⟩ 'section', ⟨€⟩ 'euro', ⟨£⟩ 'pound sterling', and ⟨©⟩ 'copyright'. == Terminology == === Logograms === Ideograms are not to be equated with logograms, which represent specific morphemes in a language. In a broad sense, ideograms may form part of a writing system otherwise based on other principles, like the examples above in the phonetic English writing system—while also potentially representing the same idea across several languages, as they do not correspond to a specific spoken word. There may not always be a single way to read a given ideograph. While remaining logograms assigned to morphemes, specific Chinese characters like ⟨中⟩ 'middle' may be classified as ideographs in a narrower sense, given their origin and visual structure. === Pictograms and indicatives === Pictograms, depending on the definition, are ideograms that represent an idea either through a direct iconic resemblance to what is being referenced, or otherwise more broadly visually represent or illustrate it. In proto-writing systems, pictograms generally comprised most of the available symbols. 
Their use could also be extended via the rebus principle: for example, the pictorial Dongba symbols without Geba annotation cannot represent the Naxi language, but are used as a mnemonic for the recitation of oral literature. Some systems also use indicatives, which denote abstract concepts. Sometimes, the word ideogram is used to refer exclusively to indicatives, contrasting them with pictograms. The word ideogram has historically often been used to describe Egyptian hieroglyphs, Sumerian cuneiform, and Chinese characters. However, these symbols represent semantic elements of a language, and not the underlying ideas directly—their use generally requires knowledge of a specific spoken language. Modern scholars refer to these symbols instead as logograms, and generally avoid calling them ideograms. Most logograms include some representation of the pronunciation of the corresponding word in the language, often using the rebus principle. Later systems used selected symbols to represent the sounds of the language, such as the adaptation of the logogram for ʾālep 'ox' as the letter aleph representing the initial glottal stop. However, some logograms still visually depict the meaning of the morpheme they represent. Pictograms are shaped like the object that the word refers to, such as an icon of a bull denoting the Semitic word ʾālep 'ox'. Other logograms may visually represent meaning via more abstract techniques. Many Egyptian hieroglyphs and cuneiform graphs could be used either logographically or phonetically. For example, the Sumerian dingir ⟨𒀭⟩ could represent the word diĝir 'deity', the god An or the word an 'sky'. In Akkadian, the same graph ⟨𒀭⟩ could represent the stem il- 'deity', the word šamu 'sky', or the syllable an.
While Chinese characters generally function as logograms, three of the six classes in the traditional classification are ideographic (or semantographic) in origin, as they have no phonetic component: Pictograms (象形 xiàngxíng) are generally among the oldest characters, with forms dating to the 12th century BC. Generally, with the evolution of the script, the forms of pictographs became less directly representational, to the extent that their referents are no longer plausible to intuit. Examples include ⟨田⟩ 'field', and ⟨心⟩ 'heart'. Indicatives (指事字 zhǐshìzì) like ⟨上⟩ 'up' and ⟨下⟩ 'down', or numerals like ⟨三⟩ 'three'. Ideographic compounds (会意字 huìyìzì) have a meaning synthesized from several other characters, such as ⟨明⟩ 'bright', a compound of ⟨日⟩ 'Sun' and ⟨月⟩ 'Moon', or ⟨休⟩ 'rest', composed of ⟨人⟩ 'person' and ⟨木⟩ 'tree'. As the understanding of Old Chinese phonology developed during the second half of the 20th century, many researchers became convinced that the etymology of most characters originally thought to be ideographic compounds actually included some phonetic component. Examples of ideograms are the DOT pictograms, a collection of 50 symbols developed during the 1970s by the American Institute of Graphic Arts at the request of the United States Department of Transportation. Initially used to mark airports, the system gradually became more widespread.
These do not indicate anything about the quantities they represent visually or phonetically, only conventionally. == Types == === Mathematical notation === A mathematical symbol is a type of ideogram. == History == As true writing systems emerged from systems of pure ideograms, later societies with phonetic writing were often compelled by the intuitive connection between pictures, diagrams and logograms—though ultimately ignorant of the latter's necessary phonetic dimension. Greek speakers began regularly visiting Egypt during the 7th century BC. Ancient Greek writers generally mistook the Egyptian writing system to be purely ideographic. According to tradition, the Greeks had acquired the ability to write, among other things, from the Egyptians through Pythagoras (c. 570 – c. 495 BC), who had been directly taught their silent form of "symbolic teaching". Beginning with Plato (428–347 BC), the conception of hieroglyphs as ideograms was rooted in a broader philosophical conception of most language as an imperfect and obfuscatory image of reality. The views of Plato involved an ontologically separate world of forms, but those of his student Aristotle (384–322 BC) instead saw the forms as abstracts, identical in the mind of every person. For both, ideography was a more perfect representation of the forms possessed by the Egyptians. The Aristotelian framework would be the foundation for the conception of language in the Mediterranean world into the medieval era. According to the classical theory, because ideographs directly reflected the forms, they were the only "true language", and had the unique ability to communicate arcane wisdom to readers. The ability to read Egyptian hieroglyphs had been lost during late antiquity, in the context of the country's Hellenization and Christianization. However, the traditional notion that the latter trends compelled the abandonment of hieroglyphic writing has been rejected by recent scholarship. 
Europe only became fully acquainted with written Chinese near the end of the 16th century, and initially related the system to their existing framework of ideography as partially informed by Egyptian hieroglyphs. Ultimately, Jean-François Champollion's successful decipherment of hieroglyphs in 1823 stemmed from an understanding that they did represent spoken Egyptian language, as opposed to being purely ideographic. Champollion's insight in part stemmed from his familiarity with the work of French sinologist Jean-Pierre Abel-Rémusat regarding fanqie, which demonstrated that Chinese characters were often used to write sounds, and not just ideas. === Proposed universal languages === Inspired by these conceptions of ideography, several attempts have been made to design a universal written language—i.e., an ideography whose interpretations are accessible to all people with no regard to the languages they speak. An early proposal was made in 1668 by John Wilkins in An Essay Towards a Real Character, and a Philosophical Language. More recently, Blissymbols was devised by Charles K. Bliss in 1949, and currently includes over 2,000 graphs. == See also == Ideographic rune == References == === Citations === === Works cited === == Further reading == DeFrancis, John (1984). "The Ideographic Myth". The Chinese Language: Fact and Fantasy. University of Hawaiʻi Press. Retrieved 2024-02-29 – via pinyin.info.
Wikipedia/Ideographs