In computer security, an access-control list (ACL) is a list of permissions associated with a system resource (object or facility). An ACL specifies which users or system processes are granted access to resources, as well as what operations are allowed on given resources. Each entry in a typical ACL specifies a subject and an operation. For instance,
If a file object has an ACL that contains (Alice: read, write; Bob: read), this would give Alice permission to read and write the file and give Bob permission only to read it.
If the Resource Access Control Facility (RACF) profile CONSOLE CLASS(TSOAUTH) has an ACL that contains (ALICE:READ), this would give ALICE permission to use the TSO CONSOLE command.
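A minimal sketch of how such an entry-based check can be evaluated (the data structures and names are illustrative, not any particular system's implementation):

```python
# Illustrative ACL: each object maps subjects to the operations they may perform.
acl = {
    "report.txt": {"Alice": {"read", "write"}, "Bob": {"read"}},
}

def is_allowed(subject: str, operation: str, obj: str) -> bool:
    """Grant access only if an ACL entry for the subject lists the operation."""
    return operation in acl.get(obj, {}).get(subject, set())

assert is_allowed("Alice", "write", "report.txt")    # Alice: read, write
assert not is_allowed("Bob", "write", "report.txt")  # Bob: read only
```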
== Implementations ==
Many kinds of operating systems implement ACLs or have a historical implementation; the first implementation of ACLs was in the filesystem of Multics in 1965.
=== Filesystem ACLs ===
A filesystem ACL is a data structure (usually a table) containing entries that specify individual user or group rights to specific system objects such as programs, processes, or files. These entries are known as access-control entries (ACEs) in the Microsoft Windows NT, OpenVMS, and Unix-like operating systems such as Linux, macOS, and Solaris. Each accessible object contains an identifier to its ACL. The privileges or permissions determine specific access rights, such as whether a user can read from, write to, or execute an object. In some implementations, an ACE can control whether or not a user, or group of users, may alter the ACL on an object.
One of the first operating systems to provide filesystem ACLs was Multics. PRIMOS featured ACLs at least as early as 1984.
In the 1990s the ACL and role-based access control (RBAC) models were extensively tested and used to administer file permissions.
==== POSIX ACL ====
The POSIX 1003.1e/1003.2c working group made an effort to standardize ACLs, resulting in what is now known as "POSIX.1e ACL" or simply "POSIX ACL". The POSIX.1e/POSIX.2c drafts were withdrawn in 1997 after participants lost interest in funding the project and turned to more powerful alternatives such as NFSv4 ACL. As of December 2019, no live sources of the draft could be found on the Internet, but it can still be found in the Internet Archive.
Most Unix and Unix-like operating systems (e.g. Linux since kernel 2.5.46 in November 2002, FreeBSD, and Solaris) support POSIX.1e ACLs (not necessarily draft 17). ACLs are usually stored in the extended attributes of a file on these systems.
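On Linux, for instance, the stored ACL can be seen as a raw extended attribute; a minimal sketch (the attribute name is the standard one on Linux, while the file path is illustrative):

```python
import os

# POSIX.1e ACLs on Linux live in the "system.posix_acl_access" extended
# attribute, in a binary format; tools such as getfacl(1) decode it.
try:
    raw = os.getxattr("/tmp/example.txt", "system.posix_acl_access")
    print(f"ACL present: {len(raw)} bytes of binary ACL data")
except OSError:
    print("no ACL set, or the filesystem does not support POSIX ACLs")
```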
==== NFSv4 ACL ====
NFSv4 ACLs are much more powerful than POSIX draft ACLs. Unlike draft POSIX ACLs, NFSv4 ACLs are defined by a formally published standard, as part of the Network File System.
NFSv4 ACLs are supported by many Unix and Unix-like operating systems, including AIX, FreeBSD, Mac OS X beginning with version 10.4 ("Tiger"), and Solaris with the ZFS filesystem. There are two experimental implementations of NFSv4 ACLs for Linux: NFSv4 ACL support for the Ext3 filesystem and the more recent Richacls, which brings NFSv4 ACL support to the Ext4 filesystem. As with POSIX ACLs, NFSv4 ACLs are usually stored as extended attributes on Unix-like systems.
NFSv4 ACLs are organized nearly identically to the Windows NT ACLs used in NTFS. NFSv4.1 ACLs are a superset of both NT ACLs and POSIX draft ACLs. Samba supports saving the NT ACLs of SMB-shared files in many ways, one of which is as NFSv4-encoded ACLs.
=== Active Directory ACLs ===
Microsoft's Active Directory service implements an LDAP server that stores and disseminates configuration information about users and computers in a domain. Active Directory extends the LDAP specification by adding the same type of access-control list mechanism as Windows NT uses for the NTFS filesystem. Windows 2000 then extended the syntax for access-control entries such that they could not only grant or deny access to entire LDAP objects, but also to individual attributes within these objects.
=== Networking ACLs ===
On some types of proprietary computer hardware (in particular, routers and switches), an access-control list provides rules that are applied to port numbers or IP addresses that are available on a host or other layer 3 device, each with a list of hosts and/or networks permitted to use the service. Although it is additionally possible to configure access-control lists based on network domain names, this is a questionable idea because individual TCP, UDP, and ICMP headers do not contain domain names. Consequently, the device enforcing the access-control list must separately resolve names to numeric addresses. This presents an additional attack surface for an attacker who is seeking to compromise the security of the system which the access-control list is protecting. Both individual servers and routers can have network ACLs. Access-control lists can generally be configured to control both inbound and outbound traffic, and in this context they are similar to firewalls. Like firewalls, ACLs may be subject to security regulations and standards such as PCI DSS.
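A simplified sketch of how such a device can evaluate its rule list against an arriving packet (the rule format is illustrative; real router ACL syntax varies by vendor):

```python
from ipaddress import ip_address, ip_network

# Illustrative inbound rules, evaluated first-match-wins, with the
# implicit "deny all" that many routers append to the end of a list.
rules = [
    ("permit", ip_network("10.0.0.0/8"), 22),   # SSH from internal hosts only
    ("deny",   ip_network("0.0.0.0/0"),  22),   # SSH from anywhere else
    ("permit", ip_network("0.0.0.0/0"), 443),   # HTTPS from anywhere
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    for action, network, port in rules:
        if ip_address(src_ip) in network and dst_port == port:
            return action
    return "deny"  # implicit deny when no rule matches

assert filter_packet("10.1.2.3", 22) == "permit"
assert filter_packet("203.0.113.9", 22) == "deny"
```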
=== SQL implementations ===
ACL algorithms have been ported to SQL and to relational database systems. Many "modern" (2000s and 2010s) SQL-based systems, like enterprise resource planning and content management systems, have used ACL models in their administration modules.
== Comparing with RBAC ==
The main alternative to the ACL model is the role-based access-control (RBAC) model. A "minimal RBAC model", RBACm, can be compared with an ACL mechanism, ACLg, where only groups are permitted as entries in the ACL. Barkley (1997) showed that RBACm and ACLg are equivalent.
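The equivalence can be illustrated with a small sketch in which the same group memberships drive both mechanisms (the names and structures are illustrative):

```python
members = {"engineering": {"alice", "bob"}}          # group (= role) membership
acl_g  = {"design.doc": {"engineering": {"read"}}}   # ACLg: only groups as entries
rbac_m = {"engineering": {("design.doc", "read")}}   # RBACm: role -> permissions

def allowed_aclg(user, obj, op):
    return any(user in members[g] and op in ops for g, ops in acl_g[obj].items())

def allowed_rbacm(user, obj, op):
    return any(user in members[r] and (obj, op) in perms
               for r, perms in rbac_m.items())

# Both mechanisms grant exactly the same accesses.
assert allowed_aclg("alice", "design.doc", "read")
assert allowed_rbacm("alice", "design.doc", "read")
```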
In modern SQL implementations, ACLs also manage groups and inheritance in a hierarchy of groups. "Modern ACLs" can therefore express all that RBAC expresses, and they are notably powerful (compared to "old ACLs") in their ability to express access-control policy in terms of the way in which administrators view organizations.
For data interchange, and for "high-level comparisons", ACL data can be translated to XACML.
== See also ==
Access token manager
Cacls
Capability-based security
C-list
Confused deputy problem
DACL
Extended file attributes
File-system permissions
Privilege (computing)
Role-based access control (RBAC)
== Notes ==
== References ==
== Further reading ==
An exit control lock (also known as an exit control device, exit lock, or simply an exit control) prevents or deters unauthorized exit.
== Function ==
Many exit control locks incorporate magnetic locks. One type, called "delayed egress magnetic locks", will not allow the door to open immediately. The delay gives security personnel time to reach the door before it opens. The lock will also release if there is a fire alarm or power failure, but otherwise these locks hold the exit doors shut.
Exit control systems can include a "request to exit detector" such as a pushbutton that opens the exit, if exit requests are enabled.
In some facilities, entrances as well as exits require authentication such as swiping or otherwise reading a card with a card reader. If an intruder slips by the entrance controls of a building, they will not be able to exit undetected, and can be detained for questioning.
== Typical uses ==
Exit control locks are often used in retail establishments to deter shoplifting. They are also used in airports and other controlled areas, where people are held until they clear customs or quarantine stations. Exit control locks are also used in libraries, where there is one well-staffed entrance and exit, and a number of other exits that are intended for emergency use only.
=== Hospitals and nursing homes ===
Exit control devices are often used in hospitals, and can be interfaced to wireless sensors worn by newborn children, so that all exits will lock if a baby is stolen from one of the hospital rooms. For example, if a newborn baby is removed from a specialized section of the hospital without proper exit procedures, all exit control locks in the area switch to the locked state. Attempts to remove the transmitter from the baby's ankle also lock the exits. If the transmitter falls out, an alarm also sounds. The exits remain locked while the alarm is sounding, and unlock only after the alarm is cleared.
Similar devices are often used in Alzheimer's disease housing facilities.
=== Retail shops and storerooms ===
Often, retail stores will install emergency exits in a way that discourages their misuse for shoplifting. Usually, the door is locked with an emergency exit button next to it. Pushing the emergency exit button will unlock the door, and also trigger the fire alarm. This deters shoplifting because a person who unlocks the door in order to take an item out of the building when it is not an emergency may be reported to the police, with CCTV footage if available.
==== Benefits of locking door ====
Shoplifters will be deterred because using the exit would attract unwanted attention.
Reduces requirements for security guards and security technology (e.g. CCTV, electronic article surveillance gates).
==== Benefits of not locking door ====
Increased footfall: multiple exits encourage people to walk through the shop as a shortcut, exposing them to the shop's products.
Psychological evidence suggests that customers feel less relaxed and welcome when signs tell them they are not allowed to do something unless there is an emergency. They can also feel frustrated at having to look for another exit.
== References ==
In computer security, lattice-based access control (LBAC) is a complex access control model based on the interaction between any combination of objects (such as resources, computers, and applications) and subjects (such as individuals, groups or organizations).
In this type of label-based mandatory access control model, a lattice is used to define the levels of security that an object may have and that a subject may have access to. The subject is only allowed to access an object if the security level of the subject is greater than or equal to that of the object.
Mathematically, the security levels may also be expressed in terms of a lattice (a partially ordered set in which every pair of elements has a greatest lower bound, or meet, and a least upper bound, or join, of access rights). For example, if two subjects A and B need access to an object, the security level is defined as the meet of the levels of A and B. In another example, if two objects X and Y are combined, they form another object Z, which is assigned the security level formed by the join of the levels of X and Y.
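A minimal sketch of meet and join over a totally ordered set of levels, which is the simplest case of such a lattice (the level names are illustrative):

```python
# Under a total order, meet is the minimum and join is the maximum.
LEVELS = ["unclassified", "confidential", "secret", "top_secret"]
rank = {level: i for i, level in enumerate(LEVELS)}

def meet(a: str, b: str) -> str:  # greatest lower bound
    return min(a, b, key=rank.get)

def join(a: str, b: str) -> str:  # least upper bound
    return max(a, b, key=rank.get)

# Two subjects jointly accessing an object operate at the meet of their
# levels; an object combining two objects is labeled with their join.
assert meet("secret", "confidential") == "confidential"
assert join("secret", "confidential") == "secret"
```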
LBAC is also known as a label-based access control (or rule-based access control) restriction as opposed to role-based access control (RBAC).
Lattice based access control models were first formally defined by Denning (1976); see also Sandhu (1993).
== See also ==
== References ==
Denning, Dorothy E. (1976). "A lattice model of secure information flow" (PDF). Communications of the ACM. 19 (5): 236–243. doi:10.1145/360051.360056.
Sandhu, Ravi S. (1993). "Lattice-based access control models" (PDF). IEEE Computer. 26 (11): 9–19. doi:10.1109/2.241422.
In computer science, an access control matrix or access matrix is an abstract, formal security model of protection state in computer systems, that characterizes the rights of each subject with respect to every object in the system. It was first introduced by Butler W. Lampson in 1971.
An access matrix can be envisioned as a rectangular array of cells, with one row per subject and one column per object. The entry in a cell – that is, the entry for a particular subject-object pair – indicates the access mode that the subject is permitted to exercise on the object. Each column is equivalent to an access control list for the object; and each row is equivalent to an access profile for the subject.
== Definition ==
According to the model, the protection state of a computer system can be abstracted as a set of objects $O$, the set of entities that need to be protected (e.g. processes, files, memory pages), and a set of subjects $S$, which consists of all active entities (e.g. users, processes). Further, there exists a set of rights $R$ of the form $r(s, o)$, where $s \in S$, $o \in O$ and $r(s, o) \subseteq R$. A right thereby specifies the kind of access a subject is allowed to exercise on an object.
== Example ==
In this matrix example there exist two processes, two assets, a file, and a device. The first process is the owner of asset 1, has the ability to execute asset 2, read the file, and write some information to the device, while the second process is the owner of asset 2 and can read asset 1.
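The example can be written down directly as a sparse matrix, from which the column and row views described above fall out naturally; a minimal sketch (entity names as in the example):

```python
# Rows are subjects, columns are objects; each cell holds the rights the
# subject may exercise on the object (empty cells are simply omitted).
access_matrix = {
    "process_1": {"asset_1": {"own"}, "asset_2": {"execute"},
                  "file": {"read"},   "device": {"write"}},
    "process_2": {"asset_1": {"read"}, "asset_2": {"own"}},
}

def acl(obj):
    """A column of the matrix: the object's access control list."""
    return {s: r[obj] for s, r in access_matrix.items() if obj in r}

def capabilities(subject):
    """A row of the matrix: the subject's access profile (capability list)."""
    return access_matrix[subject]

assert acl("asset_1") == {"process_1": {"own"}, "process_2": {"read"}}
```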
== Utility ==
Because it does not define the granularity of protection mechanisms, the Access Control Matrix can be used as a model of the static access permissions in any type of access control system. It does not model the rules by which permissions can change in any particular system, and therefore only gives an incomplete description of the system's access control security policy.
An Access Control Matrix should be thought of only as an abstract model of permissions at a given point in time; a literal implementation of it as a two-dimensional array would have excessive memory requirements. Capability-based security and access control lists are categories of concrete access control mechanisms whose static permissions can be modeled using Access Control Matrices. Although these two mechanisms have sometimes been presented (for example in Butler Lampson's Protection paper) as simply row-based and column-based implementations of the Access Control Matrix, this view has been criticized as drawing a misleading equivalence between systems that does not take into account dynamic behaviour.
== See also ==
Access control list (ACL)
Capability-based security
Computer security model
Computer security policy
== References ==
In the security engineering subspecialty of computer science, a trusted system is one that is relied upon to a specified extent to enforce a specified security policy. This is equivalent to saying that a trusted system is one whose failure would break a security policy (if a policy exists that the system is trusted to enforce).
The word "trust" is critical, as it does not carry the meaning that might be expected in everyday usage. A trusted system is one that the user feels safe to use, and trusts to perform tasks without secretly executing harmful or unauthorized programs; trusted computing refers to whether programs can trust the platform to be unmodified from the expected, and whether or not those programs are innocent or malicious or whether they execute tasks that are undesired by the user.
A trusted system can also be seen as a level-based security system where protection is provided and handled according to different levels. This is commonly found in the military, where information is categorized as unclassified (U), confidential (C), secret (S), top secret (TS), and beyond. These also enforce the policies of no read-up and no write-down.
== Trusted systems in classified information ==
A subset of trusted systems ("Division B" and "Division A") implement mandatory access control (MAC) labels, and as such, it is often assumed that they can be used for processing classified information. However, this is generally untrue. There are four modes in which one can operate a multilevel secure system: multilevel, compartmented, dedicated, and system-high modes. The National Computer Security Center's "Yellow Book" specifies that B3 and A1 systems can only be used for processing a strict subset of security labels, and only when operated according to a particularly strict configuration.
Central to the concept of U.S. Department of Defense-style trusted systems is the notion of a "reference monitor", which is an entity that occupies the logical heart of the system and is responsible for all access control decisions. Ideally, the reference monitor is
tamper-proof
always invoked
small enough to be subject to independent testing, the completeness of which can be assured.
According to the U.S. National Security Agency's 1983 Trusted Computer System Evaluation Criteria (TCSEC), or "Orange Book", a set of "evaluation classes" were defined that described the features and assurances that the user could expect from a trusted system.
The dedication of significant system engineering toward minimizing the complexity (not size, as often cited) of the trusted computing base (TCB) is key to the provision of the highest levels of assurance (B3 and A1). The TCB is defined as that combination of hardware, software, and firmware that is responsible for enforcing the system's security policy. An inherent engineering conflict would appear to arise in higher-assurance systems: the smaller the TCB, the larger the set of hardware, software, and firmware that lies outside the TCB and is therefore untrusted. Although this may lead the more technically naive to sophists' arguments about the nature of trust, the argument confuses the issue of "correctness" with that of "trustworthiness".
TCSEC has a precisely defined hierarchy of six evaluation classes; the highest of these, A1, is featurally identical to B3—differing only in documentation standards. In contrast, the more recently introduced Common Criteria (CC), which derive from a blend of technically mature standards from various NATO countries, provide a tenuous spectrum of seven "evaluation classes" that intermix features and assurances in a non-hierarchical manner, and lack the precision and mathematical stricture of the TCSEC. In particular, the CC tolerate very loose identification of the "target of evaluation" (TOE) and support – even encourage – an inter-mixture of security requirements culled from a variety of predefined "protection profiles." While a case can be made that even the seemingly arbitrary components of the TCSEC contribute to a "chain of evidence" that a fielded system properly enforces its advertised security policy, not even the highest (E7) level of the CC can truly provide analogous consistency and stricture of evidentiary reasoning.
The mathematical notions of trusted systems for the protection of classified information derive from two independent but interrelated corpora of work. In 1974, David Bell and Leonard LaPadula of MITRE, under the technical guidance and financial sponsorship of Maj. Roger Schell, Ph.D., of the U.S. Army Electronic Systems Command (Fort Hanscom, MA), devised the Bell–LaPadula model, in which a trustworthy computer system is modeled in terms of objects (passive repositories or destinations for data such as files, disks, or printers) and subjects (active entities that cause information to flow among objects e.g. users, or system processes or threads operating on behalf of users). The entire operation of a computer system can indeed be regarded as a "history" (in the serializability-theoretic sense) of pieces of information flowing from object to object in response to subjects' requests for such flows. At the same time, Dorothy Denning at Purdue University was publishing her Ph.D. dissertation, which dealt with "lattice-based information flows" in computer systems. (A mathematical "lattice" is a partially ordered set, characterizable as a directed acyclic graph, in which the relationship between any two vertices either "dominates", "is dominated by," or neither.) She defined a generalized notion of "labels" that are attached to entities—corresponding more or less to the full security markings one encounters on classified military documents, e.g. TOP SECRET WNINTEL TK DUMBO. Bell and LaPadula integrated Denning's concept into their landmark MITRE technical report—entitled, Secure Computer System: Unified Exposition and Multics Interpretation. They stated that labels attached to objects represent the sensitivity of data contained within the object, while those attached to subjects represent the trustworthiness of the user executing the subject. (However, there can be a subtle semantic difference between the sensitivity of the data within the object and the sensitivity of the object itself.)
The concepts are unified with two properties, the "simple security property" (a subject can only read from an object that it dominates [is greater than is a close, albeit mathematically imprecise, interpretation]) and the "confinement property," or "*-property" (a subject can only write to an object that dominates it). (These properties are loosely referred to as "no read-up" and "no write-down," respectively.) Jointly enforced, these properties ensure that information cannot flow "downhill" to a repository where insufficiently trustworthy recipients may discover it. By extension, assuming that the labels assigned to subjects are truly representative of their trustworthiness, then the no read-up and no write-down rules rigidly enforced by the reference monitor are sufficient to constrain Trojan horses, one of the most general classes of attacks (sciz., the popularly reported worms and viruses are specializations of the Trojan horse concept).
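A minimal sketch of the two checks over totally ordered levels (real systems also compare compartment sets, which is what makes the ordering a lattice rather than a simple hierarchy):

```python
# Clearance/classification levels in increasing order of sensitivity.
rank = {"U": 0, "C": 1, "S": 2, "TS": 3}

def may_read(subject: str, obj: str) -> bool:
    """Simple security property: no read-up."""
    return rank[subject] >= rank[obj]

def may_write(subject: str, obj: str) -> bool:
    """*-property (confinement): no write-down."""
    return rank[subject] <= rank[obj]

# A SECRET subject may read CONFIDENTIAL data but not write it back down,
# so information cannot flow "downhill" to less trustworthy recipients.
assert may_read("S", "C") and not may_write("S", "C")
```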
The Bell–LaPadula model technically only enforces "confidentiality" or "secrecy" controls, i.e. it addresses the problem of the sensitivity of objects and the attendant trustworthiness of subjects not to inappropriately disclose it. The dual problem of "integrity" (i.e. the problem of accuracy, or even provenance, of objects) and the attendant trustworthiness of subjects not to inappropriately modify or destroy it is addressed by mathematically affine models, the most important of which is named for its creator, K. J. Biba. Other integrity models include the Clark–Wilson model and Shockley and Schell's program integrity model, "The SeaView Model".
An important feature of MACs is that they are entirely beyond the control of any user. The TCB automatically attaches labels to any subjects executed on behalf of users and to the files they access or modify. In contrast, an additional class of controls, termed discretionary access controls (DACs), are under the direct control of system users. Protection mechanisms such as permission bits (supported by UNIX since the late 1960s and – in a more flexible and powerful form – by Multics since earlier still) and access control lists (ACLs) are familiar examples of DACs.
The behavior of a trusted system is often characterized in terms of a mathematical model. This may be rigorous depending upon applicable operational and administrative constraints. These take the form of a finite-state machine (FSM) with state criteria, state transition constraints (a set of "operations" that correspond to state transitions), and a descriptive top-level specification, DTLS (entails a user-perceptible interface such as an API, a set of system calls in UNIX or system exits in mainframes). Each element of the aforementioned engenders one or more model operations.
== Trusted systems in trusted computing ==
The Trusted Computing Group creates specifications that are meant to address particular requirements of trusted systems, including attestation of configuration and safe storage of sensitive information.
== Trusted systems in policy analysis ==
In the context of national or homeland security, law enforcement, or social control policy, trusted systems provide conditional prediction about the behavior of people or objects prior to authorizing access to system resources. For example, trusted systems include the use of "security envelopes" in national security and counterterrorism applications, "trusted computing" initiatives in technical systems security, and credit or identity scoring systems in financial and anti-fraud applications. In general, they include any system in which
probabilistic threat or risk analysis is used to assess "trust" for decision-making before authorizing access or for allocating resources against likely threats (including their use in the design of systems constraints to control behavior within the system); or
deviation analysis or systems surveillance is used to ensure that behavior within systems complies with expected or authorized parameters.
The widespread adoption of these authorization-based security strategies (where the default state is DEFAULT=DENY) for counterterrorism, anti-fraud, and other purposes is helping accelerate the ongoing transformation of modern societies from a notional Beccarian model of criminal justice based on accountability for deviant actions after they occur to a Foucauldian model based on authorization, preemption, and general social compliance through ubiquitous preventative surveillance and control through system constraints.
In this emergent model, "security" is not geared towards policing but to risk management through surveillance, information exchange, auditing, communication, and classification. These developments have led to general concerns about individual privacy and civil liberty, and to a broader philosophical debate about appropriate social governance methodologies.
== Trusted systems in information theory ==
Trusted systems in the context of information theory are based on the following definition:
"Trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel"
In information theory, information has nothing to do with knowledge or meaning; it is simply that which is transferred from source to destination, using a communication channel. If, before transmission, the information is available at the destination, then the transfer is zero. Information received by a party is that which the party does not expect—as measured by the uncertainty of the party as to what the message will be.
Likewise, trust as defined by Gerck, has nothing to do with friendship, acquaintances, employee-employer relationships, loyalty, betrayal and other overly-variable concepts. Trust is not taken in the purely subjective sense either, nor as a feeling or something purely personal or psychological—trust is understood as something potentially communicable. Further, this definition of trust is abstract, allowing different instances and observers in a trusted system to communicate based on a common idea of trust (otherwise communication would be isolated in domains), where all necessarily different subjective and intersubjective realizations of trust in each subsystem (man and machines) may coexist.
Taken together in the model of information theory, "information is what you do not expect" and "trust is what you know". Linking both concepts, trust is seen as "qualified reliance on received information". In terms of trusted systems, an assertion of trust cannot be based on the record itself, but on information from other information channels. The deepening of these questions leads to complex conceptions of trust, which have been thoroughly studied in the context of business relationships. It also leads to conceptions of information where the "quality" of information integrates trust or trustworthiness in the structure of the information itself and of the information system(s) in which it is conceived—higher quality in terms of particular definitions of accuracy and precision means higher trustworthiness.
An example of the calculus of trust is "If I connect two trusted systems, are they more or less trusted when taken together?".
The IBM Federal Software Group has suggested that "trust points" provide the most useful definition of trust for application in an information technology environment, because it is related to other information theory concepts and provides a basis for measuring trust. In a network-centric enterprise services environment, such a notion of trust is considered to be requisite for achieving the desired collaborative, service-oriented architecture vision.
== See also ==
Accuracy and precision
Computer security
Data quality
Information quality
Trusted Computing
== References ==
== External links ==
Global Information Society Project – a joint research project
The object-capability model is a computer security model. A capability describes a transferable right to perform one (or more) operations on a given object. It can be obtained by the following combination:
An unforgeable reference (in the sense of object references or protected pointers) that can be sent in messages.
A message that specifies the operation to be performed.
The security model relies on not being able to forge references.
Objects can interact only by sending messages on references.
A reference can be obtained by:
Initial conditions: In the initial state of the computational world being described, object A may already have a reference to object B.
Parenthood: If A creates B, at that moment A obtains the only reference to the newly created B.
Endowment: If A creates B, B is born with that subset of A's references with which A chose to endow it.
Introduction: If A has references to both B and C, A can send to B a message containing a reference to C. B can retain that reference for subsequent use.
In the object-capability model, all computation is performed following the above rules.
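The rules can be illustrated in any memory-safe object language, where object references are unforgeable within the language; a minimal sketch (the classes are illustrative):

```python
class Logger:
    def log(self, msg: str) -> None:
        print(msg)

class Worker:
    def __init__(self, logger: Logger) -> None:
        # Endowment: Worker is born with a subset of its creator's references.
        self.logger = logger

    def run(self) -> None:
        self.logger.log("working")  # authority exercised via the reference

logger = Logger()        # parenthood: the creator holds the only reference
worker = Worker(logger)  # introduction: a reference passed in a message
worker.run()
# Worker can affect the world only through references it was given; with no
# reference to, say, the filesystem, it has no way to touch the filesystem.
```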
Advantages that motivate object-oriented programming, such as encapsulation or information hiding, modularity, and separation of concerns, correspond to security goals such as least privilege and privilege separation in capability-based programming.
The object-capability model was first proposed by Jack Dennis and Earl C. Van Horn in 1966.
== Loopholes in object-oriented programming languages ==
Some object-based programming languages (e.g. JavaScript, Java, and C#) provide ways to access resources other than according to the rules above, including the following:
Direct assignment to the instance variables of an object in Java and C#.
Direct reflective inspection of the meta-data of an object in Java and C#.
The pervasive ability to import primitive modules, e.g. java.io.File that enable external effects.
Such use of undeniable authority violates the conditions of the object-capability model. Caja and Joe-E are variants of JavaScript and Java, respectively, that impose restrictions to eliminate these loopholes.
== Advantages of object capabilities ==
Computer scientist E. Dean Tribble stated that in smart contracts, identity-based access control did not support well dynamically changing permissions, compared to the object-capability model. He analogized the ocap model with giving a valet the key to one's car, without handing over the right to car ownership.
The structural properties of object capability systems favor modularity in code design and ensure reliable encapsulation in code implementation.
These structural properties facilitate the analysis of some security properties of an object-capability program or operating system. Some of these – in particular, information flow properties – can be analyzed at the level of object references and connectivity, independent of any knowledge or analysis of the code that determines the behavior of the objects. As a consequence, these security properties can be established and maintained in the presence of new objects that contain unknown and possibly malicious code.
These structural properties stem from the two rules governing access to existing objects:
1) An object A can send a message to B only if object A holds a reference to B.
2) An object A can obtain a reference to C only if object A receives a message containing a reference to C.
As a consequence of these two rules, an object can obtain a reference to another object only through a preexisting chain of references. In short, "Only connectivity begets connectivity."
== Glossary of related terms ==
object-capability system
A computational system that implements principles described in this article.
object
An object has local state and behavior. An object in this sense is both a subject and an object in the sense used in the access control literature.
reference
An unforgeable communications channel (protected pointer, opaque address) that unambiguously designates a single object, and provides permission to send messages to that object.
message
What is sent on a reference. Depending on the system, messages may or may not themselves be first-class objects.
request
An operation in which a message is sent on a reference. When the message is received, the receiver will have access to any references included in the message.
attenuation
A common design pattern in object-capability systems: given one reference of an object, create another reference for a proxy object with certain security restrictions, such as only permitting read-only access or allowing revocation. The proxy object performs security checks on messages that it receives and passes on any that are allowed. Deep attenuation refers to the case where the same attenuation is applied transitively to any objects obtained via the original attenuated object, typically by use of a "membrane".
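A minimal sketch of attenuation as a revocable, read-only proxy (the classes are illustrative):

```python
class File:
    def __init__(self, contents): self._contents = contents
    def read(self): return self._contents
    def write(self, data): self._contents = data

class ReadOnlyRevocable:
    """Attenuating proxy: exposes only read(), and only until revoked."""
    def __init__(self, target):
        self._target = target

    def read(self):
        if self._target is None:
            raise PermissionError("capability revoked")
        return self._target.read()

    def revoke(self):
        self._target = None  # drops the only path to the underlying object

f = File("secret plans")
cap = ReadOnlyRevocable(f)  # hand out 'cap' instead of 'f'
assert cap.read() == "secret plans"
cap.revoke()   # cap.read() now raises; write() never existed on the proxy
```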
== Implementations ==
Almost all historical systems that have been described as "capability systems" can be modeled as object-capability systems. (Note, however, that some uses of the term "capability" are not consistent with the model, such as POSIX "capabilities".)
KeyKOS, EROS, Integrity (operating system), CapROS, Coyotos, seL4, OKL4 and Fiasco.OC are secure operating systems that implement the object-capability model.
== Languages that implement object capabilities ==
Act 1 (1981)
Eden (1985)
Emerald (1987)
Trusty Scheme (1992)
W7 (1995)
Joule (1996)
Original-E (1997)
Oz-E (2005)
Joe-E (2005)
CaPerl (2006)
Emily (2006)
Caja (2007–2021)
Monte (2008–present)
Pony (2014–present)
Wyvern (2012–present)
Newspeak (2007–present)
Hacklang (2021–present)
Rholang (2018–present)
== See also ==
Capability-based security
Capability-based addressing
Actor model
== References ==
Key control refers to various methods for making sure that certain keys are only used by authorized people. This is especially important for master key systems with many users. A system of key control includes strategies for keeping track of which keys are carried by which people, as well as strategies to prevent people from giving away copies of the keys to unauthorized users. The former may be as simple as assigning someone the job of keeping an up-to-date list on paper. A more complex system may require signatures and/or a monetary deposit.
== Levels ==
Preventing unauthorized copies typically falls into one of the following five levels.
Level 5 (lowest): ordinary unrestricted keys. This level relies on the honor system. Users are instructed not to make copies or loan keys and trusted to comply. This is common for private residences.
Level 4 (low): unrestricted keys marked "Do Not Duplicate". These keys can theoretically be copied anywhere, but many stores will refuse to copy them. This is a very low-level deterrent which ALOA calls "deceptive because it provides a false sense of security".
Level 3 (medium): restricted keys. These keys are not generally available at retail outlets and often can only be obtained through a single source. The supplier has their own rules in place to prevent unauthorized duplication.
Level 2 (high): patented keys. By definition, patented keys are restricted. They also have the added feature of being protected by patent law. Anyone who sells such a key without permission of the patent holder could face financial penalties.
Level 1 (highest): factory-only patented keys. These keys cannot be cut locally. In addition to the restrictions above, users must send an authorization request to the factory to have additional keys cut and strict records are kept of each key.
None of these levels can protect against a user who loans a key to someone else and then falsely claims that the key was lost. Additional methods of key control include mechanical or electronic means. Electronic key control systems use serialized key assignments housed in a centralized database to allow for better tracking of each key made.
== References ==
Identity-based security is a type of security that focuses on access to digital information or services based on the authenticated identity of an entity. It ensures that the users and services of these digital resources are entitled to what they receive. The most common form of identity-based security involves the login of an account with a username and password. However, recent technology has evolved into fingerprinting or facial recognition.
While most forms of identity-based security are secure and reliable, none of them are perfect and each contains its own flaws and issues.
== History ==
The earliest forms of identity-based security were introduced in the 1960s by computer scientist Fernando Corbató. During this time, Corbató invented computer passwords to prevent users from going through other people's files, a problem evident in his Compatible Time-Sharing System (C.T.S.S.), which allowed multiple users access to a computer concurrently. Fingerprinting, however, although not digital when first introduced, dates back even further, to the 2nd and 3rd century, with King Hammurabi sealing contracts through his fingerprints in ancient Babylon. Evidence of fingerprinting was also discovered in ancient China as a method of identification in official courts and documents. It was then introduced in the U.S. during the early 20th century through prison systems as a method of identification. Facial recognition, on the other hand, was developed in the 1960s, funded by American intelligence agencies and the military.
== Types of identity-based security ==
=== Account Login ===
The most common form of Identity-based security is password authentication involving the login of an online account. Most of the largest digital corporations rely on this form of security, such as Facebook, Google, and Amazon. Account logins are easy to register, difficult to compromise, and offer a simple solution to identity-based digital services.
=== Fingerprint ===
Fingerprint biometric authentication is another type of identity-based security. It is considered to be one of the most secure forms of identification due to its reliability and accessibility, in addition to being extremely hard to fake. Fingerprints are unique to every person and last a lifetime without significant change. Currently, fingerprint biometric authentication is most commonly used in police stations, security industries, and smartphones.
=== Facial Recognition ===
Facial recognition operates by first capturing an image of the face. Then, a computer algorithm determines the distinctiveness of the face, including but not limited to eye location, shape of the chin, or distance from the nose. The algorithm then converts this information into a record stored in a database, with each set of data having enough detail to distinguish one face from another.
== Controversies and issues ==
=== Account Login ===
A problem of this form of security is the tendency for consumers to forget their passwords. On average, an individual is registered to 25 online accounts requiring a password, and most individuals vary passwords for each account. According to a study by Mastercard and the University of Oxford, "about a third of online purchases are abandoned at checkout because consumers cannot remember their passwords." If the consumer does forget their password, they will usually have to request a password reset sent to their linked email account, further delaying the purchasing process. According to an article published by Phys Org, 18.75% of consumers abandon checkout due to password reset issues.
When individuals set a uniform password across all online platforms, the login process becomes much simpler and the password harder to forget. However, doing so introduces another issue: a security breach in one account leads to similar breaches in all remaining accounts, jeopardizing the user's online security. This trade-off makes a workable solution to remembering all passwords much harder to achieve.
=== Fingerprint ===
While fingerprinting is generally considered to be secure and reliable, the physical condition of one's finger during the scan can drastically affect its results. For example, physical injuries, differing displacement, and skin conditions can all lead to faulty and unreliable biometric information that may deny one's authorization.
Another issue with fingerprinting is known as the biometric sensor attack. In such an attack, a fake finger or a print of the finger is used in replacement to fool the sensors and grant authentication to unauthorized personnel.
=== Facial Recognition ===
Facial recognition relies on the face of an individual to identify and grant access to products, services, or information. However, it can be fraudulent due to limitations in technology (lighting, image resolution) as well as changes in facial structures over time.
There are two types of failure for facial recognition tests. The first is a false positive, where the database matches the image with a data set, but not the data set of the actual user's image. The other type of failure is a false negative, where the database fails to recognize the face of the correct user. Both types of failure involve trade-offs between accessibility and security, which makes the rate of each type of error significant. For instance, facial recognition on a smartphone should favor false negatives over false positives, since it is better for the owner to need several attempts to log in than for a stranger to be randomly granted access to the phone.
While in ideal conditions with perfect lighting, positioning, and camera placement, facial recognition technology can be as accurate as 99.97%. However, such conditions are extremely rare and therefore unrealistic. In a study conducted by the National Institute of Standards and Technology (NIST), video-recorded facial recognition accuracy ranged from 94.4% to 36% depending on camera placement as well as the nature of the setting.
Aside from the technical deficiencies of Facial Recognition, racial bias has also emerged as a controversial subject. A federal study in 2019 concluded that facial recognition systems falsely identified Black and Asian faces 10 to 100 times more often than White faces.
== See also ==
Digital Identity
Attribute-based access control
Federated identity
Identity-based conditional proxy re-encryption
Identity driven networking
Identity management system
Network security
Self-sovereign identity
== References ==
Bluetooth Low Energy (Bluetooth LE, colloquially BLE, formerly marketed as Bluetooth Smart) is a wireless personal area network technology designed and marketed by the Bluetooth Special Interest Group (Bluetooth SIG) aimed at novel applications in the healthcare, fitness, beacons, security, and home entertainment industries. Compared to Classic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range.
It is independent of classic Bluetooth and not compatible with it, but Bluetooth Basic Rate/Enhanced Data Rate (BR/EDR) and LE can coexist in a single device. The original specification was developed by Nokia in 2006 under the name Wibree, which was integrated into Bluetooth 4.0 in December 2009 as Bluetooth Low Energy.
Mobile operating systems including iOS, Android, Windows Phone and BlackBerry, as well as macOS, Linux, Windows 8, Windows 10 and Windows 11, natively support Bluetooth Low Energy.
== Compatibility ==
Bluetooth Low Energy is distinct from the previous (often called "classic") Bluetooth Basic Rate/Enhanced Data Rate (BR/EDR) protocol, but the two protocols can both be supported by one device: the Bluetooth 4.0 specification permits devices to implement either or both of the LE and BR/EDR systems.
Bluetooth Low Energy uses the same 2.4 GHz radio frequencies as classic Bluetooth, which allows dual-mode devices to share a single radio antenna, but uses a simpler modulation system.
== Branding ==
In 2011, the Bluetooth SIG announced the Bluetooth Smart logo so as to clarify compatibility between the new low energy devices and other Bluetooth devices.
Bluetooth Smart Ready indicates a dual-mode device compatible with both classic and low energy peripherals.
Bluetooth Smart indicates a low-energy–only device which requires either a Smart Ready or another Bluetooth Smart device in order to function.
With the May 2016 Bluetooth SIG branding information, the Bluetooth SIG began phasing out the Bluetooth Smart and Bluetooth Smart Ready logos and word marks and reverted to using the Bluetooth logo and word mark in a new blue colour.
== Target market ==
The Bluetooth SIG identifies a number of markets for low-energy technology, particularly in the smart home, health, sport, and fitness sectors. Cited advantages include:
low power requirements, operating for "months or years" on a button cell.
small size and low cost.
compatibility with a large installed base of mobile phones, tablets, and computers.
== History ==
In 2001, researchers at Nokia determined various scenarios that contemporary wireless technologies did not address. The company began developing a wireless technology adapted from the Bluetooth standard which would provide lower power usage and cost while minimizing its differences from Bluetooth technology. The results were published in 2004 using the name Bluetooth Low End Extension.
After further development with partners, in particular Logitech and within the European project MIMOSA, and actively promoted and supported by STMicroelectronics since its early stage, the technology was released to the public in October 2006 with the brand name Wibree. After negotiations with Bluetooth SIG members, an agreement was reached in June 2007 to include Wibree in a future Bluetooth specification as a Bluetooth ultra low power technology.
The technology was marketed as Bluetooth Smart and integration into version 4.0 of the Core Specification was completed in early 2010. The first smartphone to implement the 4.0 specification was the iPhone 4S, released in October 2011. A number of other manufacturers released Bluetooth Low Energy Ready devices in 2012.
The Bluetooth SIG officially unveiled Bluetooth 5 on 16 June 2016 during a media event in London. One change on the marketing side is that the point number was dropped, so it is now just called Bluetooth 5 (and not Bluetooth 5.0 or 5.0 LE like for Bluetooth 4.0). This decision was made to "simplify marketing, and communicate user benefits more effectively". On the technical side, Bluetooth 5 will quadruple the range by using increased transmit power or coded physical layer, double the speed by using optional half of the symbol time compared to Bluetooth 4.x, and provide an eight-fold increase in data broadcasting capacity by increasing the advertising data length of low energy Bluetooth transmissions compared to Bluetooth 4.x, which could be important for IoT applications where nodes are connected throughout a whole house. An 'advertising packet' in Bluetooth parlance is the information that is exchanged between two devices before pairing, i.e. when they are not connected. For example, advertising packets allow a device to display to the user the name of another Bluetooth device before pairing with it. Bluetooth 5 will increase the data length of this advertising packet. The length of this packet in Bluetooth 4.x was 31 bytes (for broadcast topology).
The Bluetooth SIG released Mesh Profile and Mesh Model specifications officially on 18 July 2017. Mesh specification enables using Bluetooth Low Energy for many-to-many device communications for home automation, sensor networks and other applications.
== Applications ==
Borrowing from the original Bluetooth specification, the Bluetooth SIG defines several profiles – specifications for how a device works in a particular application – for low energy devices. Manufacturers are expected to implement the appropriate specifications for their device in order to ensure compatibility. A device may contain implementations of multiple profiles.
The majority of current low energy application profiles are based on the Generic Attribute Profile (GATT), a general specification for sending and receiving short pieces of data, known as attributes, over a low energy link. The Bluetooth mesh profile is an exception to this rule, being based on the General Access Profile (GAP).
=== Mesh profiles ===
Bluetooth mesh profiles use Bluetooth Low Energy to communicate with other Bluetooth Low Energy devices in the network. Each device can pass the information forward to other Bluetooth Low Energy devices, creating a "mesh" effect; for example, an entire building of lights can be switched off from a single smartphone.
MESH (Mesh Profile) – for base mesh networking.
MMDL (Mesh models) – for application layer definitions. Term "model" is used in mesh specifications instead of "profile" to avoid ambiguities.
=== Health care profiles ===
There are many profiles for Bluetooth Low Energy devices in healthcare applications. The Continua Health Alliance consortium promotes these in cooperation with the Bluetooth SIG.
BLP (Blood Pressure Profile) – for blood pressure measurement.
HTP (Health Thermometer Profile) – for medical temperature measurement devices.
GLP (Glucose Profile) – for blood glucose monitors.
CGMP (Continuous Glucose Monitor Profile)
=== Sports and fitness profiles ===
Profiles for sporting and fitness accessories include:
BCS (Body Composition Service)
CSCP (Cycling Speed and Cadence Profile) – for sensors attached to a bicycle or exercise bike to measure cadence and wheel speed.
CPP (Cycling Power Profile)
HRP (Heart Rate Profile) – for devices which measure heart rate
LNP (Location and Navigation Profile)
RSCP (Running Speed and Cadence Profile)
WSP (Weight Scale Profile)
=== Internet connection ===
=== Generic sensors ===
ESP (Environmental Sensing Profile)
UDS (User Data Service)
=== HID connectivity ===
HOGP (HID over GATT Profile) allows Bluetooth LE-enabled wireless mice, keyboards, and other devices to offer long-lasting battery life.
=== Proximity sensing ===
"Electronic leash" applications are well suited to the long battery life possible for 'always-on' devices. Manufacturers of iBeacon devices implement the appropriate specifications for their device to make use of proximity sensing capabilities supported by Apple's iOS devices.
Relevant application profiles include:
FMP – the "find me" profile – allows one device to issue an alert on a second misplaced device.
PXP – the proximity profile – allows a proximity monitor to detect whether a proximity reporter is within a close range. Physical proximity can be estimated using the radio receiver's RSSI value, although this does not have absolute calibration of distances. Typically, an alarm may be sounded when the distance between the devices exceeds a set threshold.
=== Alerts and time profiles ===
The phone alert status profile and alert notification profile allow a client device to receive notifications such as incoming call alerts from another device.
The time profile allows current time and time zone information on a client device to be set from a server device, such as between a wristwatch and a mobile phone's network time.
=== Battery ===
The Battery Service exposes the Battery State and Battery Level of a single battery or set of batteries in a device.
=== Audio ===
Announced in January 2020, LE Audio allows the protocol to carry sound and adds features such as one set of headphones connecting to multiple audio sources, or multiple headphones connecting to one source, as well as support for hearing aids. It introduces LC3 as its default codec. Compared with standard Bluetooth audio, it offers longer battery life.
Specifications on the implementation of Basic Audio Profile and Coordinated Set Identification were released in 2021, and the Common Audio Profile and Service in March 2022.
=== Contact tracing and notification ===
In December 2020, the Bluetooth SIG released a draft specification for a wearable exposure notification service. This service allows exposure notification services on wearable devices to communicate with and be controlled by client devices such as smartphones.
== Implementation ==
=== Chip ===
Starting in late 2009, Bluetooth Low Energy integrated circuits were announced by a number of manufacturers. These ICs commonly use software radio so updates to the specification can be accommodated through a firmware upgrade.
=== Hardware ===
Current mobile devices are commonly released with hardware and software support for both classic Bluetooth and Bluetooth Low Energy.
=== Operating systems ===
iOS 5 and later
Windows Phone 8.1
Windows 8 and later (Windows 7 and earlier require drivers from the Bluetooth radio manufacturer supporting a BLE stack, as these versions have no built-in generic BLE drivers.)
Android 4.3 and later. Android 6 or later requires location permission to connect to BLE.
BlackBerry OS 10
Linux 3.4 and later through BlueZ 5.0
Unison OS 5.2
macOS 10.10
Zephyr OS
== Technical details ==
=== Radio interface ===
Bluetooth Low Energy technology operates in the same spectrum range (the 2.400–2.4835 GHz ISM band) as classic Bluetooth technology, but uses a different set of channels. Instead of the classic Bluetooth 79 1-MHz channels, Bluetooth Low Energy has 40 2-MHz channels. Within a channel, data is transmitted using Gaussian frequency shift modulation, similar to classic Bluetooth's Basic Rate scheme. The bit rate is 1 Mbit/s (with an option of 2 Mbit/s in Bluetooth 5), and the maximum transmit power is 10 mW (100 mW in Bluetooth 5). Further details are given in Volume 6 Part A (Physical Layer Specification) of the Bluetooth Core Specification V4.0.
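Because the 40 channels sit on a regular 2 MHz grid, their center frequencies are simple to compute; a short sketch (channel placement as given in the core specification's physical layer):

```python
# RF channel k (k = 0..39) has center frequency 2402 + 2*k MHz.
def center_mhz(rf_channel: int) -> int:
    return 2402 + 2 * rf_channel

# The three primary advertising channels (indices 37, 38, 39) map to RF
# channels 0, 12, and 39, positioned to avoid the busiest Wi-Fi channels.
ADVERTISING_RF = {37: 0, 38: 12, 39: 39}
for index, rf in ADVERTISING_RF.items():
    print(f"advertising channel {index}: {center_mhz(rf)} MHz")
# -> 2402 MHz, 2426 MHz, 2480 MHz
```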
Bluetooth Low Energy uses frequency hopping to counteract narrowband interference problems. Classic Bluetooth also uses frequency hopping but the details are different; as a result, while both FCC and ETSI classify Bluetooth technology as an FHSS scheme, Bluetooth Low Energy is classified as a system using digital modulation techniques or a direct-sequence spread spectrum.
More technical details may be obtained from official specification as published by the Bluetooth SIG. Note that power consumption is not part of the Bluetooth specification.
=== Advertising and discovery ===
BLE devices are detected through a procedure based on broadcasting advertising packets. This is done using 3 separate channels (frequencies), in order to reduce interference. The advertising device sends a packet on at least one of these three channels, with a repetition period called the advertising interval. For reducing the chance of multiple consecutive collisions, a random delay of up to 10 milliseconds is added to each advertising interval. The scanner listens to the channel for a duration called the scan window, which is periodically repeated every scan interval.
The discovery latency is therefore determined by a probabilistic process and depends on the three parameters (viz., the advertising interval, the scan interval and the scan window). The discovery scheme of BLE adopts a periodic-interval based technique, for which upper bounds on the discovery latency can be inferred for most parametrizations. While the discovery latencies of BLE can be approximated by models for purely periodic interval-based protocols, the random delay added to each advertising interval and the three-channel discovery can cause deviations from these predictions, or potentially lead to unbounded latencies for certain parametrizations.
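A rough Monte Carlo sketch of this probabilistic process for a single channel (the parameter values are illustrative, and the model deliberately ignores packet length and multi-channel scanning):

```python
import random

def discovery_latency(adv_interval=0.100, scan_interval=1.000,
                      scan_window=0.030, max_time=60.0) -> float:
    """Seconds until an advertising event lands inside a scan window."""
    t = 0.0
    while t < max_time:
        t += adv_interval + random.uniform(0.0, 0.010)  # advDelay of 0-10 ms
        if t % scan_interval < scan_window:             # scanner listening?
            return t
    return float("inf")

samples = [discovery_latency() for _ in range(10_000)]
print(f"mean discovery latency ~ {sum(samples) / len(samples):.3f} s")
```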
=== Security ===
Bluetooth Low Energy includes security features such as Encrypted Advertising Data (EAD), which allows some or all of the application data payload transmitted in advertising packets to be encrypted. A standard mechanism for sharing key material between a broadcasting device and the observers that are intended to receive this data is also defined, so that the data may be decrypted when received.
All transmitted Bluetooth LE PDUs include a Cyclic Redundancy Check (CRC) that is recalculated and checked by the receiving device for the possibility of the PDU having been changed in flight.
=== Software model ===
All Bluetooth Low Energy devices use the Generic Attribute Profile (GATT). The application programming interface offered by a Bluetooth Low Energy aware operating system will typically be based around GATT concepts. GATT has the following terminology:
Client
A device that initiates GATT commands and requests, and accepts responses, for example, a computer or smartphone.
Server
A device that receives GATT commands and requests, and returns responses, for example, a temperature sensor.
Characteristic
A data value transferred between client and server, for example, the current battery voltage.
Service
A collection of related characteristics, which operate together to perform a particular function. For instance, the Health Thermometer service includes characteristics for a temperature measurement value, and a time interval between measurements.
Descriptor
A descriptor provides additional information about a characteristic. For instance, a temperature value characteristic may have an indication of its units (e.g. Celsius), and the maximum and minimum values which the sensor can measure. Descriptors are optional – each characteristic can have any number of descriptors.
Some service and characteristic values are used for administrative purposes – for instance, the model name and serial number can be read as standard characteristics within the Generic Access service. Services may also include other services as sub-functions; the main functions of the device are so-called primary services, and the auxiliary functions they refer to are secondary services.
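The hierarchy described above might be modelled roughly as follows (an illustrative sketch; the class and field names are invented, not taken from the specification):

    from dataclasses import dataclass, field

    @dataclass
    class Descriptor:                 # additional information about a characteristic
        uuid: str
        value: bytes = b""

    @dataclass
    class Characteristic:             # a single data value, e.g. a temperature reading
        uuid: str
        value: bytes = b""
        descriptors: list[Descriptor] = field(default_factory=list)

    @dataclass
    class Service:                    # related characteristics grouped by function
        uuid: str
        primary: bool = True          # False for secondary (auxiliary) services
        characteristics: list[Characteristic] = field(default_factory=list)
        included: list["Service"] = field(default_factory=list)  # included sub-services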
==== Identifiers ====
Services, characteristics, and descriptors are collectively referred to as attributes and are identified by UUIDs. Any implementer may pick a random or pseudorandom UUID for proprietary uses, but the Bluetooth SIG has reserved a range of UUIDs (of the form xxxxxxxx-0000-1000-8000-00805F9B34FB) for standard attributes. For efficiency, these identifiers are represented as 16-bit or 32-bit values in the protocol, rather than the 128 bits required for a full UUID. For example, the Device Information service has the short code 0x180A, rather than 0000180A-0000-1000-... . The full list is kept in the Bluetooth Assigned Numbers document online.
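The mapping between a short code and its full 128-bit form is mechanical, as this small helper sketches (written for illustration):

    def expand_short_uuid(short: int) -> str:
        """Expand a 16- or 32-bit Bluetooth SIG short code into the full 128-bit UUID."""
        return f"{short:08x}-0000-1000-8000-00805f9b34fb"

    print(expand_short_uuid(0x180A))  # 0000180a-0000-1000-8000-00805f9b34fb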
==== GATT operations ====
The GATT protocol provides a number of commands for the client to discover information about the server. These include:
Discover UUIDs for all primary services
Find a service with a given UUID
Find secondary services for a given primary service
Discover all characteristics for a given service
Find characteristics matching a given UUID
Read all descriptors for a particular characteristic
Commands are also provided to read (data transfer from server to client) and write (from client to server) the values of characteristics:
A value may be read either by specifying the characteristic's UUID, or by a handle value (which is returned by the information discovery commands above).
Write operations always identify the characteristic by handle, but have a choice of whether or not a response from the server is required.
'Long read' and 'Long write' operations can be used when the length of the characteristic's data exceeds the MTU of the radio link.
Finally, GATT offers notifications and indications. The client may request a notification for a particular characteristic from the server. The server can then send the value to the client whenever it becomes available. For instance, a temperature sensor server may notify its client every time it takes a measurement. This avoids the need for the client to poll the server, which would require the server's radio circuitry to be constantly operational.
An indication is similar to a notification, except that it requires a response from the client, as confirmation that it has received the message.
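On a BLE-aware operating system these operations are typically exposed through a GATT-oriented API, as noted above. As a rough sketch, a characteristic read and a notification subscription might look like the following using the third-party Python library bleak; the device address is a placeholder, and the UUID shown is the standard Battery Level characteristic:

    import asyncio
    from bleak import BleakClient

    ADDRESS = "AA:BB:CC:DD:EE:FF"  # placeholder device address
    BATTERY_LEVEL = "00002a19-0000-1000-8000-00805f9b34fb"  # standard Battery Level UUID

    def on_notify(sender, data: bytearray):
        print(f"notification from {sender}: {data.hex()}")

    async def main():
        async with BleakClient(ADDRESS) as client:
            value = await client.read_gatt_char(BATTERY_LEVEL)   # read by UUID
            print("battery level:", value[0], "%")
            await client.start_notify(BATTERY_LEVEL, on_notify)  # subscribe to updates
            await asyncio.sleep(30)                              # receive notifications
            await client.stop_notify(BATTERY_LEVEL)

    asyncio.run(main())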
==== Battery impact ====
Bluetooth Low Energy is designed to enable devices to have very low power consumption. Several chipmakers including Cambridge Silicon Radio, Dialog Semiconductor, Nordic Semiconductor, STMicroelectronics, Cypress Semiconductor, Silicon Labs and Texas Instruments had introduced Bluetooth Low Energy optimized chipsets by 2014. Devices with peripheral and central roles have different power requirements. A study by beacon software company Aislelabs reported that peripherals such as proximity beacons usually function for 1–2 years powered by a 1,000 mAh coin cell battery. This is possible because of the power efficiency of the Bluetooth Low Energy protocol, which transmits only small packets, in contrast to Bluetooth Classic, which is also suitable for audio and high-bandwidth data.
In contrast, a continuous scan for the same beacons in the central role can consume 1,000 mAh in a few hours. Android and iOS devices also have very different battery impacts depending on the type of scan and the number of Bluetooth Low Energy devices in the vicinity. With newer chipsets and advances in software, by 2014 both Android and iOS phones had negligible power consumption in real-life Bluetooth Low Energy use.
=== 2M PHY ===
Bluetooth 5 introduced a new transmission mode with a doubled symbol rate. Bluetooth LE traditionally transmits 1 bit per symbol, so the data rate theoretically doubles as well. However, the new mode doubles the occupied bandwidth from about 1 MHz to about 2 MHz, which increases interference at the channel edges. The partitioning of the ISM frequency band has not changed: there are still 40 channels spaced 2 MHz apart. This is an essential difference from Bluetooth 2.0 EDR, which also doubled the data rate, but did so by employing π/4-DQPSK or 8-DPSK phase modulation on a 1 MHz channel, whereas Bluetooth 5 continues to use plain frequency-shift keying.
The traditional 1 Mbit/s transmission mode was named 1M PHY in Bluetooth 5, and the new mode at the doubled symbol rate was introduced as the 2M PHY. In Bluetooth Low Energy, every transmission starts on the 1M PHY, leaving it to the application to initiate a switch to the 2M PHY; in that case both sender and receiver switch to the 2M PHY for subsequent transmissions. This is designed to facilitate firmware updates, where the application can switch back to the traditional 1M PHY in case of errors. In practice the target device should be close to the programming station (within a few metres).
=== LE Coded ===
Bluetooth 5 also introduced two new modes with lower data rates. The symbol rate of the new "Coded PHY" is the same as that of the 1M PHY, but in mode S=2 two symbols are transmitted per data bit. Mode S=2 uses only a simple pattern mapping P=1, which passes each coded bit through unchanged. In mode S=8 there are eight symbols per data bit, with a pattern mapping P=4 producing contrasting symbol sequences: a 0 bit is encoded as binary 0011 and a 1 bit as binary 1100. In mode S=2 using P=1 the range approximately doubles, while in mode S=8 using P=4 it approximately quadruples.
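The P=4 pattern mapping can be illustrated with a small sketch (it shows only the pattern-mapping step, not the preceding convolutional FEC encoding):

    P4_PATTERNS = {0: "0011", 1: "1100"}  # P=4 mapping used in S=8 mode

    def pattern_map(bits, p):
        """Map a sequence of coded bits to channel symbols (P=1 or P=4)."""
        if p == 1:
            return "".join(str(b) for b in bits)          # S=2: bits pass through
        return "".join(P4_PATTERNS[b] for b in bits)      # S=8: four symbols per bit

    print(pattern_map([1, 0, 1], 4))  # -> 110000111100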
The "LE Coded" transmissions have not only changed the error correction scheme but it uses a fundamentally new packet format. Each "LE Coded" burst consists of three blocks. The switch block ("extended preamble") is transmitted on the LE 1M PHY but it only consists of 10 times a binary '00111100' pattern. These 80 bits are not FEC encoded as usual but they are sent directly to the radio channel. It is followed by a header block ("FEC Block 1") which is always transmitted in S=8 mode. The header block only contains the destination address ("Access Address" / 32 bit) and an encoding flag ("Coding Indicator" / 2 Bit). The Coding Indicator defines the Pattern Mapping used for the following payload block ("FEC Block 2") where S=2 is possible.
The new packet format of Bluetooth 5 allows transmitting from 2 up to 256 bytes of payload in a single burst, far more than the maximum of 31 bytes in Bluetooth 4. Along with reach measurements, this should allow for localisation functions. As a whole, the quadrupled range (at the same transmission power) is achieved at the expense of the data rate, which drops to one eighth, i.e. 125 kbit/s. The old packet format, as it continues to be used in the 1M PHY and 2M PHY modes, has been named "Uncoded" in Bluetooth 5. The intermediate "LE Coded" S=2 mode allows a 500 kbit/s payload data rate, which benefits both latency and power consumption, as the burst itself is shorter.
== See also ==
ANT
ANT+
Bluetooth Low Energy denial of service attacks
DASH7
Eddystone
IEEE 802.15 / IEEE 802.15.4-2006
Indoor positioning system (IPS)
LoRa
MyriaNed
Thread
Ultra-wideband (UWB)
UWB Forum
WiMedia Alliance
WirelessHD
Wireless USB
Zigbee
Z-Wave
NearLink SparkLink Low Energy (SLE)
== Notes ==
== References ==
== Further reading ==
"Specifications – Bluetooth Technology Website". bluetooth.org. "Bluetooth 4.0 Core Specification" – GATT is described in full in Volume 3, Part G
== External links ==
Bluetooth radio versions
Gomez, Carles; Oller, Joaquim; Paradells, Josep (29 August 2012). "Overview and evaluation of Bluetooth Low Energy: an emerging low-power wireless technology". Sensors. 12 (9). Basel: 11734–11753. Bibcode:2012Senso..1211734G. doi:10.3390/s120911734. ISSN 1424-8220. PMC 3478807.
In computer science, a mutator method is a method used to control changes to a variable. They are also widely known as setter methods. Often a setter is accompanied by a getter, which returns the value of the private member variable. They are also known collectively as accessors.
The mutator method is most often used in object-oriented programming, in keeping with the principle of encapsulation. According to this principle, member variables of a class are made private to hide and protect them from other code, and can only be modified by a public member function (the mutator method), which takes the desired new value as a parameter, optionally validates it, and modifies the private member variable. Mutator methods can be compared to assignment operator overloading but they typically appear at different levels of the object hierarchy.
Mutator methods may also be used in non-object-oriented environments. In this case, a reference to the variable to be modified is passed to the mutator, along with the new value. In this scenario, the compiler cannot restrict code from bypassing the mutator method and changing the variable directly. The responsibility falls to the developers to ensure the variable is only modified through the mutator method and not modified directly.
In programming languages that support them, properties offer a convenient alternative without giving up the utility of encapsulation.
In the examples below, a fully implemented mutator method can also validate the input data or take further action such as triggering an event.
== Implications ==
The alternative to defining mutator and accessor methods, or property blocks, is to give the instance variable some visibility other than private and access it directly from outside the objects. Much finer control of access rights can be defined using mutators and accessors. For example, a parameter may be made read-only simply by defining an accessor but not a mutator. The visibility of the two methods may be different; it is often useful for the accessor to be public while the mutator remains protected, package-private or internal.
The block where the mutator is defined provides an opportunity for validation or preprocessing of incoming data. If all external access is guaranteed to come through the mutator, then these steps cannot be bypassed. For example, if a date is represented by separate private year, month and day variables, then incoming dates can be split by the setDate mutator while for consistency the same private instance variables are accessed by setYear and setMonth. In all cases month values outside of 1–12 can be rejected by the same code.
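A sketch of such a guarded mutator, with hypothetical class and method names, might look like this in Python:

    class Appointment:
        """Sketch of a validating mutator; names are illustrative."""

        def set_date(self, year: int, month: int, day: int) -> None:
            if not 1 <= month <= 12:
                raise ValueError("month must be between 1 and 12")
            # set_year and set_month would funnel through the same private
            # variables, so the range check above cannot be bypassed.
            self._year, self._month, self._day = year, month, day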
Accessors conversely allow for synthesis of useful data representations from internal variables while keeping their structure encapsulated and hidden from outside modules. A monetary getAmount accessor may build a string from a numeric variable with the number of decimal places defined by a hidden currency parameter.
Modern programming languages often offer the ability to generate the boilerplate for mutators and accessors in a single line—as for example C#'s public string Name { get; set; } and Ruby's attr_accessor :name. In these cases, no code blocks are created for validation, preprocessing or synthesis. These simplified accessors still retain the advantage of encapsulation over simple public instance variables, but it is common that, as system designs progress and requirements change, the demands on the data become more sophisticated, and many automatic mutators and accessors are eventually replaced by separate blocks of code. The benefit of creating them automatically in the early days of the implementation is that the public interface of the class remains identical whether or not greater sophistication is added, requiring no extensive refactoring if it is.
Manipulation of parameters that have mutators and accessors from inside the class where they are defined often requires some additional thought. In the early days of an implementation, when there is little or no additional code in these blocks, it makes no difference if the private instance variable is accessed directly or not. As validation, cross-validation, data integrity checks, preprocessing or other sophistication is added, subtle bugs may appear where some internal access makes use of the newer code while in other places it is bypassed.
Accessor functions can be less efficient than directly fetching or storing data fields due to the extra steps involved; however, such functions are often inlined, which eliminates the overhead of a function call.
== Examples ==
=== Assembly ===
=== C ===
In file student.h:
In file student.c:
In file main.c:
In file Makefile:
=== C++ ===
In file Student.h:
In file Student.cpp:
=== C# ===
This example illustrates the C# idea of properties, which are a special type of class member. Unlike Java, no explicit methods are defined; a public 'property' contains the logic to handle the actions. Note use of the built-in (undeclared) variable value.
In later C# versions (.NET Framework 3.5 and above), this example may be abbreviated as follows, without declaring the private variable name.
Using the abbreviated syntax means that the underlying variable is no longer available from inside the class. As a result, the set portion of the property must be present for assignment. Access can be restricted with a set-specific access modifier.
=== Common Lisp ===
In Common Lisp Object System, slot specifications within class definitions may specify any of the :reader, :writer and :accessor options (even multiple times) to define reader methods, setter methods and accessor methods (a reader method and the respective setf method). Slots are always directly accessible through their names with the use of with-slots and slot-value, and the slot accessor options define specialized methods that use slot-value.
CLOS itself has no notion of properties, although the MetaObject Protocol extension specifies means to access a slot's reader and writer function names, including the ones generated with the :accessor option.
The following example shows a definition of a student class using these slot options and direct slot access:
=== D ===
D supports a getter and setter function syntax. In version 2 of the language getter and setter class/struct methods should have the @property attribute.
A Student instance can be used like this:
=== Delphi ===
This is a simple class in Delphi language which illustrates the concept of public property for accessing a private field.
=== Java ===
In this example of a simple class representing a student with only the name stored, one can see the variable name is private, i.e. only visible from the Student class, and the "setter" and "getter" are public, namely the "getName()" and "setName(name)" methods.
=== JavaScript ===
In this example constructor-function Student is used to create objects representing a student with only the name stored.
Or (using a deprecated way to define accessors in Web browsers):
Or (using prototypes for inheritance and ES6 accessor syntax):
Or (without using prototypes):
Or (using defineProperty):
=== ActionScript 3.0 ===
=== Objective-C ===
Using traditional Objective-C 1.0 syntax, with manual reference counting as the one working on GNUstep on Ubuntu 12.04:
Using newer Objective-C 2.0 syntax as used in Mac OS X 10.6, iOS 4 and Xcode 3.2, generating the same code as described above:
And starting with OS X 10.8 and iOS 6, while using Xcode 4.4 and up, syntax can be even simplified:
=== Perl ===
Or, using Class::Accessor
Or, using the Moose Object System:
=== PHP ===
PHP defines the "magic methods" __getand__set for properties of objects.
In this example of a simple class representing a student with only the name stored, one can see that the variable name is private, i.e. only visible from the Student class, and that the "setter" and "getter" are public, namely the getName() and setName('name') methods.
=== Python ===
This example uses a Python class with one variable, a getter, and a setter.
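A minimal sketch of such a class (reconstructed for illustration; the attribute and method names are assumptions) could be:

    class Student:
        def __init__(self, name: str):
            self._name = name      # "private" by convention

        @property
        def name(self) -> str:     # getter
            return self._name

        @name.setter
        def name(self, value: str) -> None:  # setter
            self._name = value

    s = Student("Ada")
    s.name = "Grace"   # invokes the setter
    print(s.name)      # invokes the getter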
=== Racket ===
In Racket, the object system is a way to organize code that comes in addition to modules and units. As in the rest of the language, the object system has first-class values and lexical scope is used to control access to objects and methods.
Struct definitions are an alternative way to define new types of values, with mutators being present when explicitly required:
=== Ruby ===
In Ruby, individual accessor and mutator methods may be defined, or the metaprogramming constructs attr_reader or attr_accessor may be used both to declare a private variable in a class and to provide either read-only or read-write public access to it respectively.
Defining individual accessor and mutator methods creates space for pre-processing or validation of the data
Read-only simple public access to implied @name variable
Read-write simple public access to implied @name variable
=== Rust ===
=== Smalltalk ===
=== Swift ===
=== Visual Basic .NET ===
This example illustrates the VB.NET idea of properties, which are used in classes. Similar to C#, there is an explicit use of the Get and Set methods.
In VB.NET 2010, Auto Implemented properties can be utilized to create a property without having to use the Get and Set syntax. Note that a hidden variable is created by the compiler, called _name, to correspond with the Property name. Using another variable within the class named _name would result in an error. Privileged access to the underlying variable is available from within the class.
== See also ==
Property (programming)
Indexer (programming)
Immutable object
== References ==
In computer security, mandatory access control (MAC) refers to a type of access control by which a secured environment (e.g., an operating system or a database) constrains the ability of a subject or initiator to access or perform some operation on an object or target. In the case of operating systems, the subject is a process or thread, while objects are files, directories, TCP/UDP ports, shared memory segments, or IO devices. Subjects and objects each have a set of security attributes. Whenever a subject attempts to access an object, the operating system kernel examines these security attributes, examines the authorization rules (aka policy) in place, and decides whether to grant access. A database management system, in its access control mechanism, can also apply mandatory access control; in this case, the objects are tables, views, procedures, etc.
In mandatory access control, the security policy is centrally controlled by a policy administrator and is guaranteed (in principle) to be enforced for all users. Users cannot override the policy and, for example, grant access to files that would otherwise be restricted. By contrast, discretionary access control (DAC), which also governs the ability of subjects to access objects, allows users the ability to make policy decisions or assign security attributes.
Historically and traditionally, MAC has been closely associated with multilevel security (MLS) and specialized military systems. In this context, MAC implies a high degree of rigor to satisfy the constraints of MLS systems. More recently, however, MAC has moved out of the MLS niche and has started to become more mainstream. The more recent MAC implementations, such as SELinux and AppArmor for Linux and Mandatory Integrity Control for Windows, allow administrators to focus on issues such as network attacks and malware without the rigor or constraints of MLS.
== History and background ==
Historically, MAC was strongly associated with multilevel security (MLS) as a means of protecting classified information of the United States. The Trusted Computer System Evaluation Criteria (TCSEC), the seminal work on the subject and often known as the Orange Book, provided the original definition of MAC as "a means of restricting access to objects based on the sensitivity (as represented by a label) of the information contained in the objects and the formal authorization (i.e., clearance) of subjects to access information of such sensitivity". Early implementations of MAC such as Honeywell's SCOMP, USAF's SACDIN, NSA's Blacker, and Boeing's MLS LAN focused on MLS to protect military-oriented security classification levels with robust enforcement.
The word "mandatory" in MAC has acquired a special meaning derived from its use with military systems. In this context, MAC implies an extremely high degree of robustness that assures that the control mechanisms can resist any type of subversion, thereby enabling them to enforce access controls that are mandated by the order of a government such as the Executive Order 12958. Enforcement is supposed to be more imperative than for commercial applications. This precludes enforcement by best-effort mechanisms. Only mechanisms that can provide absolute or near-absolute enforcement of the mandate are acceptable for MAC. This is a tall order and sometimes assumed unrealistic by those unfamiliar with high assurance strategies, and very difficult for those who are.
In some systems, users have the authority to decide whether to grant access to any other user. To allow that, all users have clearances for all data. This is not necessarily true of an MLS system. If individuals or processes exist that may be denied access to any of the data in the system environment, then the system must be trusted to enforce MAC. Since there can be various levels of data classification and user clearances, this implies a quantified scale for robustness. For example, more robustness is indicated for system environments containing classified "Top Secret" information and uncleared users than for one with "Secret" information and users cleared to at least "Confidential." To promote consistency and eliminate subjectivity in degrees of robustness, an extensive scientific analysis and risk assessment of the topic produced a landmark benchmark standardization quantifying security robustness capabilities of systems and mapping them to the degrees of trust warranted for various security environments. The result was documented in CSC-STD-004-85. Two relatively independent components of robustness were defined: Assurance level and functionality. Both were specified with a degree of precision that warranted significant confidence in certifications based on these criteria.
The Common Criteria standard is based on this science, and it was intended to preserve the assurance level as EAL levels and the functionality specifications as Protection Profiles. Of these two essential components of objective robustness benchmarks, only EAL levels were faithfully preserved. In one case, TCSEC level C2 (not a MAC-capable category) was fairly faithfully preserved in the Common Criteria, as the Controlled Access Protection Profile (CAPP). MLS Protection Profiles (such as MLSOSPP, similar to B2) are more general than B2. They are pursuant to MLS, but lack the detailed implementation requirements of their Orange Book predecessors, focusing more on objectives. This gives certifiers more subjective flexibility in deciding whether the evaluated product's technical features adequately achieve the objective, potentially eroding the consistency of evaluated products and making it easier to attain certification for less trustworthy products. For these reasons, the technical details of the Protection Profile are critical to determining the suitability of a product.
Such an architecture prevents an authenticated user or process at a specific classification or trust-level from accessing information, processes, or devices in a different level. This provides a containment mechanism of users and processes, both known and unknown. An unknown program might comprise an untrusted application where the system should monitor or control accesses to devices and files.
A few MAC implementations, such as Unisys' Blacker project, were certified robust enough to separate Top Secret from Unclassified late in the last millennium. Their underlying technology became obsolete and they were not refreshed. Today there are no current implementations certified by TCSEC to that level of robust implementation. However, some less robust products exist.
== In operating systems ==
=== Microsoft ===
Starting with Windows Vista and Server 2008, Microsoft has incorporated Mandatory Integrity Control (MIC) in the Windows operating system, which adds integrity levels (IL) to running processes. The goal is to restrict the access of less trustworthy processes to sensitive information. MIC defines five integrity levels: low, medium, high, system, and trusted installer. By default, processes start at medium IL; elevated processes receive high IL. Child processes inherit their parent's integrity level by default, although the parent process can launch them with a lower IL. For example, Internet Explorer 7 launches its subprocesses with low IL. Windows controls access to objects based on ILs. Named objects, including files, registry keys or other processes and threads, have an entry in their ACL indicating the minimum IL of the process that can use the object. MIC enforces that a process can write to or delete an object only when its IL is equal to or higher than the object's IL. Furthermore, to prevent access to sensitive data in memory, processes cannot open processes with a higher IL for read access.
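The core write rule can be summarised in a few lines of Python (an illustrative sketch of the comparison only, not of the Windows API):

    from enum import IntEnum

    class IL(IntEnum):            # Windows integrity levels, low to high
        LOW = 1
        MEDIUM = 2
        HIGH = 3
        SYSTEM = 4
        TRUSTED_INSTALLER = 5

    def may_write(process_il: IL, object_il: IL) -> bool:
        """MIC's no-write-up rule: writing requires IL >= the object's IL."""
        return process_il >= object_il

    print(may_write(IL.MEDIUM, IL.HIGH))  # False: a medium process cannot write a high object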
=== Apple ===
Apple Inc. has incorporated an implementation of the TrustedBSD MAC framework in its iOS and macOS operating systems. (The word "mac" in "macOS" is short for "Macintosh" and has nothing to do with the abbreviation of "mandatory access control.") The sandbox_init library function provides a limited high-level sandboxing interface.
=== Google ===
Version 5.0 and later of the Android operating system, developed by Google, use SELinux to enforce a MAC security model on top of its original UID-based DAC approach.
=== Linux family ===
Linux and many other Unix distributions have MAC for CPU (multi-ring), disk, and memory. While OS software may not manage privileges well, Linux became famous during the 1990s as being more secure and far more stable than non-Unix alternatives. The three main Linux Security Modules implementing MAC are SELinux, AppArmor, and TOMOYO Linux.
Security-Enhanced Linux (SELinux) was originally developed by the NSA and released to the open-source community in 2000. It is one of the first MAC implementations for Linux and also one of the most popular. It has been integrated into the mainline Linux kernel since version 2.6, and is enabled by default on Android 5.0+ and Red Hat/Fedora. SELinux provides powerful fine-grained control which makes it suitable for high-security environments, but many users find that its power and granularity come with a high degree of complexity and a steep learning curve.
TOMOYO Linux is a lightweight MAC implementation for Linux and embedded Linux, developed by NTT Data Corporation. It was merged into Linux kernel mainline version 2.6.30 in June 2009. Unlike the label-based approach used by SELinux, TOMOYO Linux performs pathname-based mandatory access control, separating security domains according to process invocation history, which describes the system's behaviour. Policies are described in terms of pathnames. A security domain is simply defined by a process call chain and represented by a string. There are four modes: disabled, learning, permissive and enforcing, and administrators can assign different modes to different domains. TOMOYO Linux introduced the "learning" mode, in which accesses occurring in the kernel are automatically analyzed and stored to generate MAC policy; this mode can serve as the first step of policy writing, making policies easy to customize later.
AppArmor is a MAC implementation that utilizes the Linux Security Modules (LSM) interface of Linux 2.6; it is incorporated into SUSE Linux and Ubuntu 7.10. LSM provides a kernel API that allows modules of kernel code to govern ACLs (DAC ACLs, access-control lists). AppArmor is not capable of restricting all programs and has been optionally available in the Linux kernel since version 2.6.36.
Amon Ott's RSBAC (Rule Set Based Access Control) provides a framework for Linux kernels that allows several different security policy / decision modules. One of the models implemented is the mandatory access control model. A general goal of the RSBAC design was to try to reach the (obsolete) Orange Book (TCSEC) B1 level. The model of mandatory access control used in RSBAC is mostly the same as in Unix System V/MLS, Version 1.2.1 (developed in 1989 by the National Computer Security Center of the USA with classification B1/TCSEC). RSBAC requires a set of patches to the stock kernel, which are maintained quite well by the project owner.
Smack (Simplified Mandatory Access Control Kernel) is a Linux kernel security module that protects data and process interaction from malicious manipulation using a set of custom mandatory access control rules, with simplicity as its main design goal. It has been officially merged since the Linux 2.6.25 release.
grsecurity is a patch for the Linux kernel providing a MAC implementation (precisely, it is an RBAC implementation). grsecurity is not implemented via the LSM API.
Astra Linux OS developed for Russian Army has its own mandatory access control.
=== Other OSes ===
FreeBSD supports Mandatory Access Control, implemented as part of the TrustedBSD project. It was introduced in FreeBSD 5.0. Since FreeBSD 7.2, MAC support is enabled by default. The framework is extensible; various MAC modules implement policies such as Biba and multilevel security.
Sun's Trusted Solaris uses a mandatory and system-enforced access control mechanism (MAC), where clearances and labels are used to enforce a security policy. Note, however, that the capability to manage labels does not imply the kernel strength to operate in multilevel security mode. Access to the labels and control mechanisms is not robustly protected from corruption in a protected domain maintained by a kernel. The applications a user runs are combined with the security label at which the user works in the session. Access to information, programs and devices is only weakly controlled.
== See also ==
=== Access control ===
=== Other topics ===
== Footnotes ==
== References ==
P. A. Loscocco, S. D. Smalley, P. A. Muckelbauer, R. C. Taylor, S. J. Turner, and J. F. Farrell. The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments. In Proceedings of the 21st National Information Systems Security Conference, pages 303–314, Oct. 1998.
P. A. Loscocco, S. D. Smalley, Meeting Critical Security Objectives with Security-Enhanced Linux Archived 2017-07-08 at the Wayback Machine Proceedings of the 2001 Ottawa Linux Symposium.
ISO/IEC DIS 10181-3, Information Technology, OSI Security Model, Security FrameWorks, Part 3: Access Control, 1993
Robert N. M. Watson. "A decade of OS access-control extensibility". Commun. ACM 56, 2 (February 2013), 52–63.
== External links ==
Weblog post on the how virtualization can be used to implement Mandatory Access Control.
Weblog post from a Microsoft employee detailing Mandatory Integrity Control and how it differs from MAC implementations.
GWV Formal Security Policy Model A Separation Kernel Formal Security Policy, David Greve, Matthew Wilding, and W. Mark Vanfleet.
In computer security, organization-based access control (OrBAC) is an access control model first presented in 2003. Current approaches to access control rest on three entities (subject, action, object); to control access, the policy specifies that some subject has permission to perform some action on some object.
OrBAC allows the policy designer to define a security policy independently of the implementation. The chosen method to fulfill this goal is the introduction of an abstract level.
Subjects are abstracted into roles. A role is a set of subjects to which the same security rule applies.
Similarly, an activity is a set of actions to which the same security rule applies.
And a view is a set of objects to which the same security rule applies.
Each security policy is defined for and by an organization. Thus, the specification of the security policy is completely parameterized by the organization so that it is possible to handle simultaneously several security policies associated with different organizations. The model is not restricted to permissions, but also includes the possibility to specify prohibitions and obligations. From the three abstract entities (roles, activities, views), abstract privileges are defined. And from these abstract privileges, concrete privileges are derived.
OrBAC is context-sensitive, so the policy can be expressed dynamically. Furthermore, OrBAC includes concepts of hierarchy (organization, role, activity, view, context) and separation constraints.
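A rough sketch of the derivation of concrete privileges from abstract ones might look as follows in Python. The classical OrBAC relations Empower, Consider and Use map subjects, actions and objects to roles, activities and views respectively; the organization, names and data here are invented for illustration, not a reference implementation:

    # Abstract policy: (organization, role, activity, view, context) tuples
    abstract_permissions = {
        ("hospital", "physician", "consult", "medical_record", "attending"),
    }

    # Concrete-to-abstract mappings within an organization
    empower  = {("hospital", "alice"): "physician"}            # subject -> role
    consider = {("hospital", "read"): "consult"}               # action  -> activity
    use      = {("hospital", "record_42"): "medical_record"}   # object  -> view

    def is_permitted(org, subject, action, obj, context):
        """Derive a concrete privilege from the abstract policy."""
        role     = empower.get((org, subject))
        activity = consider.get((org, action))
        view     = use.get((org, obj))
        return (org, role, activity, view, context) in abstract_permissions

    print(is_permitted("hospital", "alice", "read", "record_42", "attending"))  # True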
== See also ==
== References ==
== External links ==
OrBAC site
MotOrBAC site (OrBAC simulation and conflict detection tool)
=== Active Directory ACLs ===
Microsoft's Active Directory service implements an LDAP server that stores and disseminates configuration information about users and computers in a domain. Active Directory extends the LDAP specification by adding the same type of access-control list mechanism as Windows NT uses for the NTFS filesystem. Windows 2000 then extended the syntax for access-control entries such that they could not only grant or deny access to entire LDAP objects, but also to individual attributes within these objects.
=== Networking ACLs ===
On some types of proprietary computer hardware (in particular, routers and switches), an access-control list provides rules that are applied to port numbers or IP addresses available on a host or other layer 3 device, each with a list of hosts and/or networks permitted to use the service. Although it is additionally possible to configure access-control lists based on network domain names, this is a questionable idea because individual TCP, UDP, and ICMP headers do not contain domain names. Consequently, the device enforcing the access-control list must separately resolve names to numeric addresses. This presents an additional attack surface for an attacker seeking to compromise the security of the system that the access-control list is protecting. Both individual servers and routers can have network ACLs. Access-control lists can generally be configured to control both inbound and outbound traffic, and in this context they are similar to firewalls. Like firewalls, ACLs may be subject to security regulations and standards such as PCI DSS.
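The typical first-match semantics of such a rule list can be sketched as follows (a simplified Python illustration with invented addresses; real router ACL syntax varies by vendor):

    from ipaddress import ip_address, ip_network

    # Ordered rules: (action, source network, destination port); first match wins.
    RULES = [
        ("permit", ip_network("192.0.2.0/24"), 443),
        ("deny",   ip_network("0.0.0.0/0"),    443),
    ]

    def evaluate(src: str, dst_port: int) -> str:
        """Return the action of the first matching rule (implicit deny at the end)."""
        for action, net, port in RULES:
            if ip_address(src) in net and dst_port == port:
                return action
        return "deny"

    print(evaluate("192.0.2.17", 443))   # permit
    print(evaluate("203.0.113.5", 443))  # deny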
=== SQL implementations ===
ACL algorithms have been ported to SQL and to relational database systems. Many "modern" (2000s and 2010s) SQL-based systems, like enterprise resource planning and content management systems, have used ACL models in their administration modules.
== Comparing with RBAC ==
The main alternative to the ACL model is the role-based access-control (RBAC) model. A "minimal RBAC model", RBACm, can be compared with an ACL mechanism, ACLg, where only groups are permitted as entries in the ACL. Barkley (1997) showed that RBACm and ACLg are equivalent.
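The equivalence can be made concrete with a small sketch (illustrative names and data; groups play the part of RBAC roles):

    # ACLg: ACL entries may only name groups, never individual users.
    groups = {"engineers": {"alice", "bob"}}
    acl_g = {"design_doc": {("engineers", "read")}}

    # Minimal RBAC: the same policy as role assignments plus role permissions.
    user_roles = {"alice": {"engineers"}, "bob": {"engineers"}}
    role_perms = {"engineers": {("design_doc", "read")}}

    def allowed_aclg(user, obj, op):
        return any(user in groups[g] for g, o in acl_g.get(obj, set()) if o == op)

    def allowed_rbacm(user, obj, op):
        return any((obj, op) in role_perms.get(r, set()) for r in user_roles.get(user, set()))

    assert allowed_aclg("alice", "design_doc", "read") == allowed_rbacm("alice", "design_doc", "read")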
In modern SQL implementations, ACLs also manage groups and inheritance in a hierarchy of groups. So "modern ACLs" can express everything that RBAC expresses and are notably powerful (compared to "old ACLs") in their ability to express access-control policy in terms of the way in which administrators view organizations.
For data interchange, and for "high-level comparisons", ACL data can be translated to XACML.
== See also ==
Access token manager
Cacls
Capability-based security
C-list
Confused deputy problem
DACL
Extended file attributes
File-system permissions
Privilege (computing)
Role-based access control (RBAC)
== Notes ==
== References ==
== Further reading ==
Border control comprises measures taken by governments to monitor and regulate the movement of people, animals, and goods across land, air, and maritime borders. While border control is typically associated with international borders, it also encompasses controls imposed on internal borders within a single state.
Border control measures serve a variety of purposes, ranging from enforcing customs, sanitary and phytosanitary, or biosecurity regulations to restricting migration. While some borders (including most states' internal borders and international borders within the Schengen Area) are open and completely unguarded, others (including the vast majority of borders between countries as well as some internal borders) are subject to some degree of control and may be crossed legally only at designated checkpoints. Border controls in the 21st century are tightly intertwined with intricate systems of travel documents, visas, and increasingly complex policies that vary between countries.
It is estimated that the indirect economic costs of border controls, particularly migration restrictions, amount to many trillions of dollars, and that the size of the global economy could double if migration restrictions were lifted.
== History ==
In medieval Europe, the boundaries between rival countries and centres of power were largely symbolic or consisted of amorphous borderlands, 'marches', and 'debatable lands' of indeterminate or contested status and the real 'borders' consisted of the fortified walls that surrounded towns and cities, where the authorities could exclude undesirable or incompatible people at the gates, from vagrants, beggars and the 'wandering poor', to 'masterless women', lepers, Romani, or Jews.
The concept of border controls has its origins in antiquity. In Asia, the existence of border controls is evidenced in classical texts. The Arthashastra (c. 3rd century BCE) makes mention of passes issued at the rate of one masha per pass to enter and exit the country. Chapter 34 of the Second Book of the Arthashastra concerns the duties of the Mudrādhyakṣa (lit. 'Superintendent of Seals'), who had to issue sealed passes before a person could enter or leave the countryside. Passports resembling those issued today were an important part of the Chinese bureaucracy as early as the Han dynasty (202 BCE – 220 CE), if not in the Qin dynasty. They required such details as age, height, and bodily features. The passports (zhuan) determined a person's ability to move throughout imperial counties and through points of control. Even children needed passports, but those of one year or less who were in their mother's care may not have needed them.
=== Medieval period ===
In the Golden age of the Islamic Caliphate (medieval time in Europe), a form of passport was the bara'a, a receipt for taxes paid. Border controls were in place to ensure that only people who paid their zakah (for Muslims) or jizya (for dhimmis) taxes could travel freely between different regions of the Caliphate; thus, the bara'a receipt was a "basic passport".
In medieval Europe, passports have been issued since at least the reign of Henry V of England, as a means of helping his subjects prove who they were in foreign lands. The earliest reference to these documents is found in an act of Parliament, the Safe Conducts Act 1414 (2 Hen. 5. Stat. 1. c. 6). In 1540, granting travel documents in England became a role of the Privy Council of England, and it was around this time that the term "passport" came into use. In 1794, issuing British passports became the job of the Office of the Secretary of State. The 1548 Imperial Diet of Augsburg required the public to hold imperial documents for travel, at the risk of permanent exile. During World War I, European governments introduced border passport requirements for security reasons and to control the emigration of people with useful skills. These controls remained in place after the war, becoming a standard, though controversial, procedure. British tourists of the 1920s complained, especially about attached photographs and physical descriptions, which they considered led to a "nasty dehumanization".
Beginning in the mid-19th century, the Ottoman Empire established quarantine stations on many of its borders to control disease. For example, along the Greek-Turkish border, all travellers entering and exiting the Ottoman Empire would be quarantined for 9–15 days. The stations were often manned by armed guards. If plague appeared, the Ottoman army would be deployed to enforce border control and monitor disease.
=== Modern history ===
Among the earliest systematic attempts by modern nation states to implement border controls restricting the entry of particular groups were the policies adopted by Canada, Australia, and America to curtail the immigration of Asians into white settler states in the late 19th and early 20th centuries. The first anti-East Asian policy implemented in this era was the Chinese Exclusion Act of 1882 in America, followed by the Chinese Immigration Act of 1885 in Canada, which imposed what came to be called the Chinese head tax. These policies imposed unjust and unfair treatment on Chinese workers, most of whom were engaged in menial jobs. Similar policies were adopted in various British colonies in Australia over the latter half of the 19th century targeting Asian immigrants arriving as a result of the region's series of gold rushes, as well as Kanakas (Pacific Islanders brought into Australia as indentured labourers), who alongside the Asians were perceived by trade unionists and White blue-collar workers as a threat to the wages of White settlers. Following the establishment of the Commonwealth of Australia in 1901, these discriminatory border control measures quickly expanded into the White Australia Policy, while subsequent legislation in America (e.g. the Immigration Act of 1891, the Naturalisation Act of 1906, the Immigration Act of 1917, and the Immigration Act of 1924) resulted in an even stricter policy targeting immigrants from both Asia and parts of southern and eastern Europe.
Even following the adoption of measures such as the White Australia Policy and the Chinese Exclusion Act in English-speaking settler colonies, pervasive control of international borders remained a relatively rare phenomenon until the early 20th century, prior to which many states had open international borders either in practice or due to a lack of any legal restriction. John Maynard Keynes identified World War I in particular as the point when such controls became commonplace.
Decolonisation during the twentieth century saw the emergence of mass emigration from nations in the Global South, thus leading former colonial occupiers to introduce stricter border controls. In the United Kingdom this process took place in stages, with British nationality law eventually shifting from recognising all Commonwealth citizens as British subjects to today's complex British nationality law which distinguishes between British citizens, modern British Subjects, British Overseas Citizens, and overseas nationals, with each non-standard category created as a result of attempts to balance border control and the need to mitigate statelessness. This aspect of the rise of border control in the 20th century has proven controversial. The British Nationality Act 1981 has been criticised by experts, as well as by the Committee on the Elimination of Racial Discrimination of the United Nations, on the grounds that the different classes of British nationality it created are, in fact, closely related to the ethnic origins of their holders.
The creation of British Nationality (Overseas) status, for instance, (with fewer privileges than British citizen status) was met with criticism from many Hong Kong residents who felt that British citizenship would have been more appropriate in light of the "moral debt" owed to them by the UK. Some British politicians and magazines also criticised the creation of BN(O) status. In 2020, the British government under Boris Johnson announced a program under which BN(O)s would have leave to remain in the UK with rights to work and study for five years, after which they may apply for settled status. They would then be eligible for full citizenship after holding settled status for 12 months. This was implemented as the eponymously named "British National (Overseas) visa", a residence permit that BN(O)s and their dependent family members have been able to apply for since 31 January 2021. BN(O)s and their dependents who arrived in the UK before the new immigration route became available were granted "Leave Outside the Rules" at the discretion of the Border Force to remain in the country for up to six months as a temporary measure. In effect, this retroactively granted BN(O)s a path to right of abode in the United Kingdom. Despite the COVID-19 pandemic, about 7,000 people had entered the UK under this scheme between July 2020 and January 2021.
Ethnic tensions created during colonial occupation also resulted in discriminatory policies being adopted in newly independent African nations, such as Uganda under Idi Amin which banned Asians from Uganda, thus creating a mass exodus of the (largely Gujarati) Asian community of Uganda. Such ethnically driven border control policies took forms ranging from anti-Asian sentiment in East Africa to Apartheid policies in South Africa and Namibia (then known as Southwest Africa under South African rule) which created bantustans and pass laws to segregate and impose border controls against non-whites, and encouraged immigration of whites at the expense of Blacks as well as Indians and other Asians. Whilst border control in Europe and east of the Pacific have tightened over time, they have largely been liberalized in Africa, from Yoweri Museveni's reversal of Idi Amin's anti-Asian border controls to the fall of Apartheid (and thus racialized border controls) in South Africa.
With the development of border control policies over the course of the 20th century came the standardization of refugee travel documents under the Convention Relating to the Status of Refugees of 1951 and the 1954 Convention travel document for stateless people under the similar 1954 statelessness convention.
=== COVID-19 ===
The COVID-19 pandemic in 2020 produced a drastic tightening of border controls across the globe. Many countries and regions imposed quarantines, entry bans, or other restrictions for citizens of or recent travellers to the most affected areas. Other countries and regions imposed global restrictions that applied to all foreign countries and territories, or prevented their own citizens from travelling overseas. The imposition of border controls curtailed the spread of the virus, but because they were first implemented after community spread was established in multiple countries in different regions of the world, they produced only a modest reduction in the total number of people infected. These strict border controls caused economic harm to the tourism industry through lost income, and social harm to people who were unable to travel for family matters or other reasons. When the travel bans are lifted, many people are expected to resume traveling. However, some travel, especially business travel, may decrease in the long term as lower-cost alternatives, such as teleconferencing and virtual events, are preferred; a possible long-term impact has thus been a decline of business travel and international conferencing and the rise of their virtual, online equivalents. Concerns have been raised over the effectiveness of travel restrictions to contain the spread of COVID-19.
== Aspects ==
Contemporary border control policies are complex and address a variety of distinct phenomena depending on the circumstances and political priorities of the state(s) implementing them. Consequently, there are several aspects of border control which vary in nature and importance from region to region.
=== Air and maritime borders ===
In addition to land borders, countries also apply border control measures to airspace and waters under their jurisdiction. Such measures control access to air and maritime territory as well as extractible resources (e.g. fish, minerals, fossil fuels).
Under the United Nations Convention on the Law of the Sea (UNCLOS), states exercise varying degrees of control over different categories of territorial waters:
Internal waters: Waters landward of the baseline, over which the state has complete sovereignty: not even innocent passage is allowed without explicit permission from said state. Lakes and rivers are considered internal waters.
Territorial sea: A state's territorial sea is a belt of coastal waters extending at most 22 kilometres from the baseline of a coastal state. If this would overlap with another state's territorial sea, the border is taken as the median point between the states' baselines, unless the states in question agree otherwise. A state can also choose to claim a smaller territorial sea. The territorial sea is regarded as the sovereign territory of the state, although foreign ships (military and civilian) are allowed innocent passage through it, or transit passage for straits; this sovereignty also extends to the airspace over and seabed below. As a result of UNCLOS, states exercise a similar degree of control over its territorial sea as over land territory and may thus utilise coast guard and naval patrols to enforce border control measures provided they do not prevent innocent or transit passage.
Contiguous zone: A state's contiguous zone is a band of water extending farther from the outer edge of the territorial sea to up to 44 kilometres (27 miles) from the baseline, within which a state can implement limited border control measures for the purpose of preventing or punishing "infringement of its customs, fiscal, immigration or sanitary laws and regulations within its territory or territorial sea". This will typically be 22 kilometres (14 miles) wide, but could be more (if a state has chosen to claim a territorial sea of less than 22 kilometres), or less, if it would otherwise overlap another state's contiguous zone. However, unlike the territorial sea, there is no standard rule for resolving such conflicts and the states in question must negotiate their own compromise. America invoked a contiguous zone out to 44 kilometres from the baseline on 29 September 1999.
Exclusive economic zone: An exclusive economic zone extends from the baseline to a maximum of 370 kilometres (230 miles). A coastal nation has control of all economic resources within its exclusive economic zone, including fishing, mining, oil exploration, and any pollution of those resources. However, it cannot prohibit passage or loitering above, on, or under the surface of the sea that is in compliance with the laws and regulations adopted by the coastal State in accordance with the provisions of the UN Convention, within that portion of its exclusive economic zone beyond its territorial sea. The only authority a state has over its EEZ is therefore its ability to regulate the extraction or spoliation of resources contained therein and border control measures implemented to this effect focus on the suppression of unauthorised commercial activity.
Vessels not complying with a state's maritime policies may be subject to ship arrest and enforcement action by the state's authorities. Maritime border control measures are controversial in the context of international trade disputes, as was the case following France's detention of British fishermen in October 2021 in the aftermath of Brexit or when the Indonesian navy detained the crew of the Seven Seas Conqueress alleging that the vessel was unlawfully fishing within Indonesian territorial waters while the Singaporean government claimed the vessel was in Singaporean waters near Pedra Branca.
Similarly, international law accords each state control over the airspace above its land territory, internal waters, and territorial sea. Consequently, states have the authority to regulate flyover rights and tax foreign aircraft utilising their airspace. Additionally, the International Civil Aviation Organization designates states to administer international airspace, including airspace over waters not forming part of any state's territorial sea. Aircraft unlawfully entering a country's airspace may be grounded and their crews may be detained.
No country has sovereignty over international waters, including the associated airspace. All states have the freedom of fishing, navigation, overflight, laying cables and pipelines, as well as research. Oceans, seas, and waters outside national jurisdiction are also referred to as the high seas or, in Latin, mare liberum (meaning free sea). The 1958 Convention on the High Seas defined "high seas" to mean "all parts of the sea that are not included in the territorial sea or in the internal waters of a State" and where "no State may validly purport to subject any part of them to its sovereignty". Ships sailing the high seas are generally under the jurisdiction of their flag state (if there is one); however, when a ship is involved in certain criminal acts, such as piracy, any nation can exercise jurisdiction under the doctrine of universal jurisdiction regardless of maritime borders.
As part of their air and maritime border control policies, most countries restrict or regulate the ability of foreign airlines and vessels to transport of goods or passengers between sea ports and airports in their jurisdiction, known as cabotage. Restrictions on maritime cabotage apply exist in most countries with territorial and internal waters so as to protect the domestic shipping industry from foreign competition, preserve domestically owned shipping infrastructure for national security purposes, and ensure safety in congested territorial waters. For example, in America, the Jones Act provides for extremely strict restrictions on cabotage. Similarly, China does not permit foreign flagged vessels to conduct domestic transport or domestic transhipments without the prior approval of the Ministry of Transport. While Hong Kong and Macau maintain distinct internal cabotage regimes from the mainland, maritime cabotage between either territory and the mainland is considered domestic carriage and accordingly is off limits to foreign vessels. Similarly, maritime crossings across the Taiwan Strait requires special permits from both the People's Republic of China and the Republic of China and are usually off-limits to foreign vessels. In India, foreign vessels engaging in the coasting trade require a licence that is generally only issued when no local vessel is available. Similarly, a foreign vessel may only be issued a licence to engage in cabotage in Brazil if there are no Brazilian flagged vessels available for the intended transport.
As with maritime cabotage, most jurisdictions heavily restrict cabotage in passenger aviation, though rules regarding air cargo are typically more lax. Passenger cabotage is not usually granted under most open skies agreements. Air cabotage policies in the European Union are uniquely liberal insofar as carriers licensed by one member state are permitted to engage in cabotage in any EU member state, with few limitations. Chile has the most liberal air cabotage rules in the world, enacted in 1979, which allow foreign airlines to operate domestic flights, conditional upon reciprocal treatment for Chilean carriers in the foreign airline's country. Countries apply special provisions to the ability of foreign airlines to carry passengers between two domestic destinations through an offshore hub.
Many countries implement air defence identification zones (ADIZs) requiring aircraft approaching within a specified distance of its airspace to contact or seek prior authorization from its military or transport authorities. An ADIZ may extend beyond a country's territory to give the country more time to respond to possibly hostile aircraft. The concept of an ADIZ is not defined in any international treaty and is not regulated by any international body, but is nevertheless a well-established aerial border control measure. Usually such zones only cover undisputed territory, do not apply to foreign aircraft not intending to enter territorial airspace, and do not overlap.
=== Biosecurity ===
Biosecurity refers to measures aimed at preventing the introduction and/or spread of harmful organisms (e.g. viruses, bacteria, etc.) to animals and plants in order to mitigate the risk of transmission of infectious disease. In agriculture, these measures are aimed at protecting food crops and livestock from pests, invasive species, and other organisms not conducive to the welfare of the human population. The term includes biological threats to people, including those from pandemic diseases and bioterrorism. The definition has sometimes been broadened to embrace other concepts, and it is used for different purposes in different contexts. The most common category of biosecurity policies are quarantine measures adopted to counteract the spread of disease and, when applied as a component of border control, focus primarily on mitigating the entry of infected individuals, plants, or animals into a country. Other aspects of biosecurity related to border control include mandatory vaccination policies for inbound travellers and measures to curtail the risk posed by bioterrorism or invasive species. Quarantine measures are frequently implemented with regard to the mobility of animals, including both pets and livestock. Notably, in order to reduce the risk of introducing rabies from continental Europe, the United Kingdom used to require that dogs, and most other animals introduced to the country, spend six months in quarantine at an HM Customs and Excise pound. This policy was abolished in 2000 in favour of a scheme generally known as Pet Passports, where animals can avoid quarantine if they have documentation showing they are up to date on their appropriate vaccinations.
In the past, quarantine measures were implemented by European countries in order to curtail the Bubonic Plague and Cholera. In the British Isles, for example, the Quarantine Act 1710 (9 Ann. c. 2) established maritime quarantine policies in an era in which strict border control measures as a whole were yet to become mainstream. The first act was called for due to fears that the plague might be imported from Poland and the Baltic states. A second act of Parliamemt, the Quarantine Act 1721 (8 Geo. 1. c. 10), was due to the prevalence of the plague at Marseille and other places in Provence, France. It was renewed in 1733 after a new outbreak in continental Europe, and again in 1743, due to an epidemic in Messina. A rigorous quarantine clause was introduced into the Levant Act 1752 an act regulating trade with the Levant, and various arbitrary orders were issued during the next twenty years to meet the supposed danger of infection from the Baltic states. Although no plague cases ever came to England during that period, the restrictions on traffic became more stringent, and a very strict Quarantine and Customs Act 1788 (28 Geo. 3. c. 34) was passed, with provisions affecting cargoes in particular. The act was revised in 1801 and 1805, and in 1823–24 an elaborate inquiry was followed by an act making quarantine only at the discretion of the Privy Council, which recognised yellow fever or other highly infectious diseases as calling for quarantine, along with plague. The threat of cholera in 1831 was the last occasion in England of the use of quarantine restrictions. Cholera affected every country in Europe despite all efforts to keep it out. When cholera returned to England in 1849, 1853 and 1865–66, no attempt was made to seal the ports. In 1847 the privy council ordered all arrivals with a clean bill of health from the Black Sea and the Levant to be admitted, provided there had been no case of plague during the voyage and afterwards, the practice of quarantine was discontinued.
In modern maritime law, biosecurity measures for arriving vessels centre around 'pratique', a licence from border control officials permitting a ship to enter port on assurance from the captain that the vessel is free from contagious disease. The clearance granted is commonly referred to as 'free pratique'. A ship can signal a request for 'pratique' by flying a solid yellow square-shaped flag. This yellow flag is the Q flag in the set of international maritime signal flags. In the event that 'free pratique' is not granted, a vessel will be held in quarantine according to biosecurity rules prevailing at the port of entry until a border control officer inspects the vessel. During the COVID-19 pandemic, a controversy arose as to who granted pratique to the Ruby Princess. A related concept is the 'bill of health', a document issued by officials of a port of departure indicating to the officials of the port of arrival whether it is likely that the ship is carrying a contagious disease, either literally on board as fomites or via its crewmen or passengers. As defined in a consul's handbook from 1879:
A bill of health is a document issued by the consul or the public authorities of the port which a ship sails from, descriptive of the health of the port at the time of the vessel's clearance. A clean bill of health certifies that at the date of its issue no infectious disease was known to exist either in the port or its neighbourhood. A suspected or touched bill of health reports that rumours were in circulation that an infectious disease had appeared but that the rumour had not been confirmed by any known cases. A foul bill of health or the absence of a clean bill of health implies that the place the vessel cleared from was infected with a contagious disease. The two latter cases would render the vessel liable to quarantine.
Another category of biosecurity measures adopted by border control organisations is mandatory vaccination. As a result of the prevalence of Yellow Fever across much of the African continent, a significant portion of countries in the region require arriving passengers to present an International Certificate of Vaccination or Prophylaxis (Carte Jaune) certifying that they have received the Yellow Fever vaccine. A variety of other countries require travelers who have visited areas where Yellow Fever is endemic to present a certificate in order to clear border checkpoints as a means of preventing the spread of the disease. Prior to the emergence of COVID-19, Yellow Fever was the primary human disease subjected to de facto vaccine passport measures by border control authorities around the world. Similar measures are in place with regard to Polio and meningococcal meningitis in regions where those diseases are endemic and countries bordering those regions. Prior to the eradication of smallpox, similar Carte Jaune requirements were in force for that disease around the world.
As a result of the COVID-19 pandemic, biosecurity measures have become a highly visible aspect of border control across the globe. Most notably, quarantine and mandatory COVID-19 vaccination for international travelers. Together with a decreased willingness to travel, the implementation of biosecurity measures has had a negative economic and social impact on the travel industry. Slow travel increased in popularity during the pandemic, with tourists visiting fewer destinations during their trips.
Biosecurity measures such as restrictions on cross-border travel, the introduction of mandatory vaccination for international travellers, and the adoption of quarantine or mandatory testing measures have helped to contain the spread of COVID-19. While test-based border screening measures may prove effective under certain circumstances, they may fail to detect a significant quantity positive cases if only conducted upon arrival without follow-up. A minimum 10-day quarantine may be beneficial in preventing the spread of COVID-19 and may be more effective if combined with an additional control measure like border screening. A study in Science found that travel restrictions could delay the initial arrival of COVID-19 in a country, but that they produced only modest overall effects unless combined with domestic infection prevention and control measures to considerably reduce transmissions. (That is consistent with prior research on influenza and other communicable diseases.) Travel bans early in the pandemic were most effective for isolated locations, such as small island nations.
During the COVID-19 pandemic, many jurisdictions across the globe introduced biosecurity measures on internal borders. This ranged from quarantine measures imposed upon individuals crossing state lines within America to prohibitions on interstate travel in Australia.
=== Customs ===
Each country has its own laws and regulations for the import and export of goods into and out of a country, which its customs authority enforces. The import or export of some goods may be restricted or forbidden, in which case customs controls enforce such policies. Customs enforcement at borders can also entail collecting excise tax and preventing the smuggling of dangerous or illegal goods. A customs duty is a tariff or tax on the importation (usually) or exportation (unusually) of goods.
In many countries, border controls for arriving passengers at many international airports and some road crossings are separated into red and green channels in order to prioritise customs enforcement. Within the European Union's common customs area, airports may operate additional blue channels for passengers arriving from within that area. For such passengers, border control may focus specifically on prohibited items and other goods that are not covered by the common policy. Luggage tags for checked luggage traveling within the EU are green-edged so they may be identified. In most EU member states, travellers coming from other EU countries within the Schengen Area can use the green lane, although airports outside the Schengen Area or with frequent flights arriving from jurisdictions within Schengen but outside the European Union may use blue channels for convenience and efficiency.
A customs area is an area designated for storage of commercial goods that have not cleared border controls. Commercial goods not yet cleared through customs are often stored in a type of customs area known as a bonded warehouse, until processed or re-exported. Ports authorized to handle international cargo generally include recognised bonded warehouses. For the purpose of customs duties, goods within the customs area are treated as being outside the country. This allows easy transshipment to a third country without customs authorities being involved. For this reason, customs areas are usually carefully controlled and fenced to prevent smuggling. However, the area is still territorially part of the country, so the goods within the area are subject to other local laws (for example drug laws and biosecurity regulations), and thus may be searched, impounded or turned back. The term is also sometimes used to define an area (usually composed of several countries) which form a customs union, a customs territory, or to describe the area at airports and ports where travellers are checked through customs.
Sanitary and phytosanitary (SPS) measures are customs measures to protect humans, animals, and plants from diseases, pests, or contaminants. The Agreement on the Application of Sanitary and Phytosanitary Measures, concluded at the Uruguay Round of the Multilateral Trade Negotiations, establishes the types of SPS measures each jurisdiction is permitted to impose. Examples of SPS are tolerance limits for residues, restricted use of substances, labelling requirements related to food safety, hygienic requirements and quarantine requirements. In certain countries, sanitary and phytosanitary measures focuses extensively on curtailing and regulating the import of foreign agricultural products in order to protect domestic ecosystems. For example, Australian border controls restrict most (if not all) food products, certain wooden products and other similar items. Similar restrictions exist in Canada, America and New Zealand.
Border control in many countries in Asia and the Americas prioritizes enforcing customs laws pertaining to narcotics. For instance, India and Malaysia are focusing resources on eliminating drug smuggling from Myanmar and Thailand respectively. The issue stems largely from the high output of dangerous and illegal drugs in the Golden Triangle as well as in regions further west such as Afghanistan. A similar problem exists east of the Pacific, and has resulted in countries such as Mexico and the United States tightening border control in response to the northward flow of illegal substances from regions such as Colombia. The Mexican Drug War and similar cartel activity in neighboring areas has exacerbated the problem. In certain countries illegal importing, exporting, sale, or possession of drugs constitute capital offences and may result in a death sentence. A 2015 article by The Economist says that the laws of 32 countries provide for capital punishment for drug smuggling, but only in six countries – China, Iran, Saudi Arabia, Vietnam, Malaysia, and Singapore –are drug offenders known to be routinely executed. Additionally, Singapore, Malaysia, and Indonesia impose mandatory death sentences on individuals caught smuggling restricted substances across their borders. For example, Muhammad Ridzuan Ali was executed in Singapore on May 19, 2017 for drug trafficking. According to a 2011 article by the Lawyers Collective, an NGO in India, "32 countries impose capital punishment for offences involving narcotic drugs and psychotropic substances." South Korean law provides for capital punishment for drug offences, but South Korea has a de facto moratorium on capital punishment as there have been no executions since 1997, even though there are still people on death row and new death sentences continue to be handed down.
=== Border security ===
Border security refers to measures taken by one or more governments to enforce their border control policies. Such measures target a variety of issues, ranging from customs violations and trade in unlawful goods to the suppression of unauthorized migration or travel. The specific border security measures taken by a jurisdiction vary depending on the priorities of local authorities and are affected by social, economic, and geographical factors.
In India, which maintains free movement with Nepal and Bhutan, border security focuses primarily on the Bangladeshi, Pakistani, and Myanmar borders. India's primary focus with regard to the border with Bangladesh is to deter unlawful immigration and drug trafficking. On the Pakistani border, the Border Security Force aims to prevent the infiltration of Indian territory by terrorists from Pakistan and other countries in the west (Afghanistan, Iraq, Syria, etc.). In contrast, India's border with Myanmar is porous and the 2021 military coup in Myanmar saw an influx of refugees seeking asylum in border states including Mizoram. The refoulement of Rohingya refugees is a contentious aspect of India's border control policy vis à vis Myanmar.
Meanwhile, American border security policy is largely centered on the country's border with Mexico. Security along this border is composed of many distinct elements; including physical barriers, patrol routes, lighting, and border patrol personnel. In contrast, the border with Canada is primarily composed of joint border patrol and security camera programs, forming the longest undefended border in the world. In remote areas along the border with Canada, where staffed border crossings are not available, there are hidden sensors on roads, trails, railways, and wooded areas, which are located near crossing points.
Border security on the Schengen Area's external borders is especially restrictive. Members of the Schengen Agreement are required to apply strict checks on travellers entering and exiting the area. These checks are co-ordinated by the European Union's Frontex agency, and subject to common rules. The details of border controls, surveillance and the conditions under which permission to enter into the Schengen Area may be granted are exhaustively detailed in the Schengen Borders Code. All persons crossing external borders—inbound or outbound—are subject to a check by a border guard. The only exception is for regular cross-border commuters (both those with the right of free movement and third-country nationals) who are well known to the border guards: once an initial check has shown that there is no alert on record relating to them in the Schengen Information System or national databases, they can only be subject to occasional 'random' checks, rather than systematic checks every time they cross the border. Additionally, border security in Europe is increasingly being outsourced to private companies, with the border security market growing at a rate of 7% per year. In its Border Wars series, the Transnational Institute showed that the arms and security industry helps shape European border security policy through lobbying, regular interactions with EU's border institutions and its shaping of research policy. The institute criticises the border security industry for having a vested interest in increasing border militarisation in order to increase profits. Furthermore, the same companies are also often involved in the arms trade and thus profit twice: first from fuelling the conflicts, repression and human rights abuses that have led refugees to flee their homes and later from intercepting them along their migration routes.
==== Border walls ====
Border walls are a common aspect of border security measures across the world. Border walls generally seek to limit unauthorised travel across an international border and are frequently implemented as a populist response to refugees and economic migrants.
The India-Bangladesh barrier is a 3,406 kilometres (2,116 miles) long fence of barbed wire and concrete just under 3 metres (9 feet 10 inches) high currently under construction. Its stated aim is to limit unauthorised migration. The project has run into several delays and there is no clear completion date for the project yet. Similar to India's barrier with Bangladesh and the proposed wall between America and Mexico, Iran has constructed a wall on its frontier with Pakistan. The wall aims to reduce unauthorised border crossings and stem the flow of drugs, and is also a response to terrorist attacks, notably the one in the Iranian border town of Zahedan on February 17, 2007, which killed thirteen people, including nine Iranian Revolutionary Guard officials. Former president Donald Trump's proposal to build a new wall along the border formed a major feature of his 2016 presidential campaign and, over the course of his presidency, his administration spent approximately US$15 billion on the project, with US$5 billion appropriated from US Customs and Border Protection, US$6.3 billion appropriated from anti-narcotics initiative funded by congress, and US$3.6 billion appropriated from the American military. Members of both the Democratic and Republican parties who opposed Trump's border control policies regarded the border wall as unnecessary or undesirable, arguing that other measures would be more effective at reducing illegal immigration than building a wall, including tackling the economic issues that lead to immigration being a relevant issue altogether, border surveillance or an increase in the number of customs agents. Additionally, in August 2020, the United States constructed 3.8 km of short cable fencing along the border between Abbotsford, British Columbia, and Whatcom County, Washington.
Border walls have formed a major component of European border control policy following the European migrant crisis. The walls at Melilla and at Ceuta on Spain's border with Morocco are designed to curtail the ability of refugees and migrants to enter the European Union via the two Spanish cities on the Moroccan coast. Similar measures have been taken on the Schengen area's borders with Turkey in response to the refugee crisis in Syria. The creation of European Union's collective border security organisation, Frontex, is another aspect of the bloc's growing focus on border security. Within the Schengen Area, border security has become an especially prominent priority for the Hungarian government under right-wing strongman Viktor Orbán. Similarly, Saudi Arabia has begun construction of a border barrier or fence between its territory and Yemen to prevent the unauthorized movement of people and goods. The difference between the countries' economic situations means that many Yemenis head to Saudi Arabia to find work. Saudi Arabia does not have a barrier with its other neighbours in the Gulf Cooperation Council, whose economies are more similar. In 2006 Saudi Arabia proposed constructing a security fence along the entire length of its 900 kilometre long desert border with Iraq. As of July 2009 it was reported that Saudis will pay $3.5 billion for a security fence. The combined wall and ditch will be 965 kilometres (600 miles) long and include five layers of fencing, watch towers, night-vision cameras, and radar cameras and manned by 30,000 troops. Elsewhere in Europe, the Republic of Macedonia began erecting a fence on its border with Greece in November 2015.
In 2003, Botswana began building a 480 kilometres (300 miles) long electric fence along its border with Zimbabwe. The official reason for the fence is to stop the spread of foot-and-mouth disease among livestock. Zimbabweans argue that the height of the fence is clearly intended to keep out people. Botswana has responded that the fence is designed to keep out cattle, and to ensure that entrants have their shoes disinfected at legal border crossings. Botswana also argued that the government continues to encourage legal movement into the country. Zimbabwe was unconvinced, and the barrier remains a source of tension.
==== Border checkpoints ====
A Border checkpoint is a place where goods or individuals moving across borders are inspected for compliance with border control measures. Access-controlled borders often have a limited number of checkpoints where they can be crossed without legal sanctions. Arrangements or treaties may be formed to allow or mandate less restrained crossings (e.g. the Schengen Agreement). Land border checkpoints (land ports of entry) can be contrasted with the customs and immigration facilities at seaports, international airports, and other ports of entry.
Checkpoints generally serve two purposes:
To prevent entrance of individuals who are either undesirable (e.g. criminals or others who pose threats) or simply unauthorised to enter.
To prevent entrance of goods or contaminants that are illegal or subject to restriction, or to collect tariffs in accordance with customs or quarantine policies.
A border checkpoint at which travellers are permitted to enter a jurisdiction is known as a port of entry. International airports are usually ports of entry, as are road and rail crossings on a land border. Seaports can be used as ports of entry only if a dedicated customs presence is posted there. The choice of whether to become a port of entry is up to the civil authority controlling the port. An airport of entry is an airport that provides customs and immigration services for incoming flights. These services allow the airport to serve as an initial port of entry for foreign visitors arriving in a country. While the terms airport of entry and international airport are generally used interchangeably, not all international airports qualify as airports of entry since international airports without any immigration or customs facilities exist in the Schengen Area whose members have eliminated border controls with each other. Airports of entry are usually larger than domestic airports and often feature longer runways and facilities to accommodate the heavier aircraft commonly used for international and intercontinental travel. International airports often also host domestic flights, which often help feed both passengers and cargo into international ones (and vice versa). Buildings, operations and management have become increasingly sophisticated since the mid-20th century, when international airports began to provide infrastructure for international civilian flights. Detailed technical standards have been developed to ensure safety and common coding systems implemented to provide global consistency. The physical structures that serve millions of individual passengers and flights are among the most complex and interconnected in the world. By the second decade of the 21st century, there were over 1,200 international airports and almost two billion international passengers along with 50 million metric tons (49,000,000 long tons; 55,000,000 short tons) of cargo passing through them annually.
Border inspections are also meant to protect each country's agriculture from pests. National and international phytosanitary authorities maintain databases of border interceptions, occurrences, and establishments. Bebber et al., 2019 analyzes such records and finds that they underreport many important pest species, that island nations are more vulnerable because they have lower border-to-area ratios, and that pests are moving poleward to follow humans' crops as our crops follow global warming.
A 'Quilantan' or 'Wave Through' entry is a phenomenon at American border checkpoints authorising a form of non-standard but legal entry without any inspection of travel documents. It occurs when the border security personnel present at a border crossing choose to summarily admit some number of persons without performing a standard interview or document examination. If an individual can prove that they were waved through immigration in this manner, then they are considered to have entered with inspection despite not having answered any questions or received a passport entry stamp. This definition of legal entry only extends to foreigners who entered America at official border crossings and does not provide a path to legal residency for those who did not enter through a recognised crossing.
==== Border zones ====
Border zones are areas near borders that have special restrictions on movement. Governments may forbid unauthorised entry to or exit from border zones and restrict property ownership in the area. The zones function as buffer zones specifically monitored by border patrols in order to prevent unauthorised cross-border travel. Border zones enable authorities to detain and prosecute individuals suspected of being or aiding undocumented migrants, smugglers, or spies without necessarily having to prove that the individuals in question actually engaged in the suspected unauthorised activity since, as all unauthorised presence in the area is forbidden, the mere presence of an individual permits authorities to arrest them. Border zones between hostile states can be heavily militarised, with minefields, barbed wire, and watchtowers. Some border zones are designed to prevent illegal immigration or emigration and do not have many restrictions but may operate checkpoints to check immigration status. In most places, a border vista is usually included and/or required. In some nations, movement inside a border zone without a licence is an offence and will result in arrest. No probable cause is required as mere presence inside the zone is an offence, if it is intentional. Even with a license to enter, photography, making fires, and carrying of firearms and hunting are prohibited.
Examples of international border zones are the Border Security Zone of Russia and the Finnish border zone on the Finnish–Russian border. There are also intra-country zones such as the Cactus Curtain surrounding the Guantanamo Bay Naval Base in Cuba, the Korean Demilitarised Zone along the North Korea-South Korea demarcation line and the Frontier Closed Area in Hong Kong. Important historical examples are the Wire of Death set up by the German Empire to control the Belgium–Netherlands border and the Iron Curtain, a set of border zones maintained by the Soviet Union and its satellite states along their borders with Western states. One of the most militarised parts was the restricted zone of the inner German border. While initially and officially the zone was for border security, eventually it was engineered to prevent escape from the Soviet sphere into the West. Ultimately, the Eastern Bloc governments resorted to using lethal countermeasures against those trying to cross the border, such as mined fences and orders to shoot anyone trying to cross into the West. The restrictions on building and habitation made the area a "green corridor", today established as the European Green Belt.
In the area stretching inwards from its internal border with the mainland, Hong Kong maintains a Frontier Closed Area out of bounds to those without special authorisation. The area was established in the 1950s when Hong Kong was under British administration as a consequence of the Convention for the Extension of Hong Kong Territory prior to the Transfer of sovereignty over Hong Kong in 1997. The purposes of the area were to prevent illegal immigration and smuggling; smuggling had become prevalent as a consequence of the Korean War. Today, under the one country, two systems policy, the area continues to be used to curtail unauthorized migration to Hong Kong and the smuggling of goods in either direction.
As a result of the partition of the Korean peninsula by America and the Soviet Union after World War II, and exacerbated by the subsequent Korean War, there is a Demilitarised Zone (DMZ) spanning the de facto border between North and South Korea. The DMZ follows the effective boundaries as of the end of the Korean War in 1953. Similarly to the Frontier Closed Area in Hong Kong, this zone and the defence apparatus that exists on both sides of the border serve to curtail unauthorised passage between the two sides. In South Korea, there is an additional fenced-off area between the Civilian Control Line (CCL) and the start of the Demilitarized Zone. The CCL is a line that designates an additional buffer zone to the Demilitarized Zone within a distance of 5 to 20 kilometres (3.1 to 12.4 miles) from the Southern Limit Line of the Demilitarized Zone. Its purpose is to limit and control the entrance of civilians into the area in order to protect and maintain the security of military facilities and operations near the Demilitarized Zone. The commander of the 8th US Army ordered the creation of the CCL and it was activated and first became effective in February 1954. The buffer zone that falls south of the Southern Limit Line is called the Civilian Control Zone. Barbed wire fences and manned military guard posts mark the CCLe. South Korean soldiers typically accompany tourist buses and cars travelling north of the CCL as armed guards to monitor the civilians as well as to protect them from North Korean intruders. Most of the tourist and media photos of the "Demilitarised Zone fence" are actually photos of the CCL fence. The actual Demilitarised Zone fence on the Southern Limit Line is completely off-limits to everybody except soldiers and it is illegal to take pictures of the Demilitarized Zone fence.
Similarly, the whole estuary of the Han River in the Korean Peninsula is deemed a "Neutral Zone" and is officially off-limits to all civilian vessels. Only military vessels are allowed within this neutral zone. In recent years, Chinese fishing vessels have taken advantage of the tense situation in the Han River Estuary Neutral Zone and illegally fished in this area due to both North Korean and South Korean navies never patrolling this area due to the fear of naval battles breaking out. This has led to firefights and sinkings of boats between Chinese fishermen and South Korean Coast Guard. On January 30, 2019, North Korean and South Korean military officials signed a landmark agreement that would open the Han River Estuary to civilian vessels for the first time since the Armistice Agreement in 1953. The agreement was scheduled to take place in April 2019 but the failure of the 2019 Hanoi Summit indefinitely postponed these plans.
The Green Line separating Southern Cyprus and Northern Cyprus is a demilitarised border zone operated by the United Nations Peacekeeping Force in Cyprus operate and patrol within the buffer zone. The buffer zone was established in 1974 due to ethnic tensions between Greek and Turkish Cypriots. The green line is similar in nature to the 38th parallel separating the Republic of Korea and North Korea.
Some border zones, referred to as border vistas, are composed of legally mandated cleared space between two areas of foliage located at an international border intended to provide a clear demarcation line between two jurisdictions. Border vistas are most commonly found along undefended international boundary lines, where border security is not as much of a necessity and a built barrier is undesired, and are a treaty requirement for certain borders. An example of a border vista is a 6-metre (20-foot) cleared space around unguarded portions of the Canada–United States border. Similar clearings along the border line are provided for by many international treaties. For example, the 2006 border management treaty between Russia and China provides for a 15-metre (49-foot) cleared strip along the two nations' border.
In 2024, Egypt announced that they are building a buffer zone on the Egypt Gaza border.
=== Immigration law ===
Immigration law refers to the national statutes, regulations, and legal precedents governing immigration into and deportation from a country. Strictly speaking, it is distinct from other matters such as naturalisation and citizenship, although they are often conflated. Immigration laws vary around the world, as well as according to the social and political climate of the times, as acceptance of immigrants sways from the widely inclusive to the deeply nationalist and isolationist. Countries frequently maintain laws which regulate both the rights of entry and exit as well as internal rights, such as the duration of stay, freedom of movement, and the right to participate in commerce or government. National laws regarding the immigration of citizens of that country are regulated by international law. The United Nations' International Covenant on Civil and Political Rights mandates that all countries allow entry to their own citizens.
=== Immigration policies ===
==== Diaspora communities ====
Certain countries adopt immigration policies designed to be favourable towards members of diaspora communities with a connection to the country. For example, the Indian government confers Overseas Citizenship of India (OCI) status on foreign citizens of Indian origin to live and work indefinitely in India. OCI status was introduced in response to demands for dual citizenship by the Indian diaspora, particularly in countries with large populations of Indian origin. It was introduced by The Citizenship (Amendment) Act, 2005 in August 2005. Similar to OCI status, the UK Ancestry visa exempts members of the British diaspora from usual immigration controls. Poland issued the Karta Polaka to citizens of certain northeast European countries with Polish ancestry, but later expanded it to the worldwide Polonia.
Some nations recognise a right of return for people with ancestry in that country or a connection to a particular ethnic group. A notable example of this is the right of Sephardi Jews to acquire Spanish nationality by virtue of their community's Spanish origins. Similar exemptions to immigration controls exist for people of Armenian origin seeking to acquire Armenian citizenship. Ghana, similarly, grants an indefinite right to stay in Ghana to members of the African diaspora regardless of citizenship. Similarly, Israel maintains a policy permitting members of the Jewish diaspora to immigrate to Israel regardless of prior nationality.
South Korean immigration policy is relatively unique in that, as a consequence of its claim over the territory currently administered by North Korea, citizens of North Korea are regarded by the South as its own citizens by birth. As a result, North Korean refugees in China often attempt to travel to countries such as Thailand which, while not offering asylum to North Koreans, classifies them as unauthorized immigrants and deports them to South Korea instead of North Korea. At the same time, the policy has operated to prevent pro-North Korea Zainichi Koreans recognised by Japan as Chōsen-seki from entering South Korea without special permission from the South Korean authorities as, despite being regarded as citizens of the Republic of Korea and members of the Korean diaspora, they generally refuse to exercise that status.
==== Open borders ====
An open border is the deregulation and or lack of regulation on the movement of persons between nations and jurisdictions, this does not apply to trade or movement between privately owned land areas. Most nations have open borders for travel within their nation of travel, though more authoritarian states may limit the freedom of internal movement of its citizens, as for example in the former USSR. However, only a handful of nations have deregulated open borders with other nations, an example of this being European countries under the Schengen Agreement or the open Belarus-Russia border. Open borders used to be very common among all nations, however this became less common after the First World War, which led to the regulation of open borders, making them less common and no longer feasible for most industrialised nations.
Open borders are the norm for borders between subdivisions within the boundaries of sovereign states, though some countries do maintain internal border controls (for example between the People's Republic of China mainland and the special administrative regions of Hong Kong and Macau; or between the American mainland, the unincorporated territories other than Puerto Rico, and Hawaii. Open borders are also usual between member states of federations, though (very rarely) movement between member states may be controlled in exceptional circumstances. Federations, confederations and similar multi-national unions typically maintain external border controls through a collective border control system, though they sometimes have open borders with other non-member states through special international agreements – such as between Schengen Agreement countries as mentioned above.
Presently, open border agreements of various types are in force in several areas around the world, as outlined below:
Asia and Oceania:
Under the 1950 Indo-Nepal Treaty of Peace and Friendship, India and Nepal maintain a similar arrangement to the CTA and the Union State. Indians and Nepalis are not subject to any migration controls in each other's countries and there are few controls on land travel by citizens across the border.
India and Bhutan also have a similar programme in place The border between Jaigaon, in the Indian state of West Bengal, and the city of Phuentsholing is essentially open, and although there are internal checkpoints, Indians (as outlined under the Visa policy of Bhutan are allowed to proceed throughout Bhutan with a voter's ID or an identity slip from the Indian consulate in Phuentsholing. Similarly, Bhutanese passport holders enjoy free movement in India.
Thailand and Cambodia: Whilst not as liberal as the policies concerning the Indo-Nepalese and Indo-Bhutanese borders, Thailand and Cambodia have begun issuing combined visas to certain categories of tourists applying at specific Thai or Cambodian embassies and consulates in order to enable freer border crossings between the two countries. The policy is currently in force for nationals of America and several European (primarily EU, EEA, and GCC) and Oceanian countries as well as for Indian and Chinese nationals residing in Singapore.
Australia and New Zealand: Similar to the agreement between India and Nepal, Trans-Tasman Travel Arrangement between Australia and New Zealand is a free movement agreement citizens of each country to travel freely between them and allowing citizens and some permanent residents to reside, visit, work, study in the other country for an indefinite period, with some restrictions. The arrangement came into effect in 1973, and allows citizens of each country to reside and work in the other country, with some restrictions. Other details of the arrangement have varied over time. From 1 July 1981, all people entering Australia (including New Zealand citizens) have been required to carry a passport. Since 1 September 1994 Australia, has had a universal visa requirement, and to specifically cater for the continued free movement of New Zealanders to Australia, the Special Category Visa was introduced for New Zealanders.
Central America :The Central America-4 Border Control Agreement abolishes border controls for land travel between El Salvador, Honduras, Nicaragua, and Guatemala. However, this does not apply to air travel.
Europe and the Middle East
Union State of Russia and Belarus The Union State of Russia and Belarus is a supranational union of Russia and Belarus, which eliminates all border controls between the two nations. Before a visa agreement was signed in 2020, each country maintained its own visa policies, thus resulting in non-citizens of the two countries generally being barred from travelling directly between the two. However, since the visa agreement was signed, each side recognises the other's visas, which means that third-country citizens can enter both countries with a visa from either country.
Western Europe: The two most significant free travel areas in Western Europe are the Schengen Area, in which very little if any border control is generally visible, and the Common Travel Area (CTA), which partially eliminates such controls for nationals of the United Kingdom and Ireland. Between countries in the Schengen Area, and to an extent within the CTA on the British Isles, internal border control is often virtually unnoticeable, and often only performed by means of random car or train searches in the hinterland, while controls at borders with non-member states may be rather strict.
Gulf Cooperation Council: Members of the Gulf Cooperation Council, or GCC, allow each other's citizens freedom of movement in an arrangement similar to the CTA and to that between India and Nepal. Between 5 June 2017 and 5 January 2021, freedom of movement in Saudi Arabia, the UAE, and Bahrain was suspended for Qataris as a result of the Saudi-led blockade of the country.
==== Hostile environment policies ====
Certain jurisdictions gear their immigration policies toward creating a hostile environment for undocumented migrants in order to deter migration by creating an unwelcoming atmosphere for potential and existing immigrants. Notably, the British Home Office adopted a set of administrative and legislative measures designed to make staying in the United Kingdom as difficult as possible for people without leave to remain, in the hope that they may "voluntarily leave". The Home Office policy was first announced in 2012 under the Conservative-Liberal Democrat coalition. The policy was implemented pursuant to the 2010 Conservative Party Election Manifesto. The policy has been criticized for being unclear, has led to many incorrect threats of deportation and has been called "Byzantine" by the England and Wales Court of Appeal for its complexity.
Similarly, anti-immigration movements in America have advocated for policies aimed at creating a hostile environment for intended and existing immigrants at various points in history. Historical examples include the nativist Know Nothing movement of the mid-19th century, which advocated hostile policies against Catholic immigrants; the Workingman's Party, which promoted xenophobic attitudes toward Asians in California during the late-19th century, a sentiment that ultimately led to the Chinese Exclusion Act of 1882; the Immigration Restriction League, which advocated xenophobic policies against southern and eastern Europe during the late-19th and early 20th centuries, and the joint congressional Dillingham Commission. After World War I, these cumulatively resulted in the highly restrictive Emergency Quota Act of 1921 and the Immigration Act of 1924. Over the first two decades of the 21st century, the Republican Party adopted an increasingly nativist platform, advocating against sanctuary cities and in favour of building a wall with Mexico and reducing the number of immigrants permitted to settle in the country. Ultimately, the Trump administration furthered many of these policy goals, including the adoption of harsh policies such as the Remain in Mexico and family separation policies vis à vis refugees and migrants arriving from Central America via Mexico. Islamophobic policies such as the travel ban targeted primarily at Muslim-majority countries also feature prominently in attempts to create a hostile environment for immigrants perceived by populists as not belonging to the predominant WASP culture in the United States.
India's citizenship registration policy serves to create a hostile environment for the country's Muslim community in the regions in which it has been implemented. The Indian government is presently in the process of building several detention camps throughout India in order to detain people not listed on the register. On January 9, 2019, the Union government released a '2019 Model Detention Manual', which said that every city or district, having a major immigration check post, must have a detention centre. The guidelines suggest detention centres with 3 metres (9 feet 10 inches) high boundary walls covered with barbed wires.
=== International zones ===
An international zone is any area not fully subject to the border control policies of the state in which it is located. There are several types of international zones ranging from special economic zones and sterile zones at ports of entry exempt from customs rules to concessions over which administration is ceded to one or more foreign states. International zones may also maintain distinct visa policies from the rest of the surrounding state.
== Internal border controls ==
Internal border controls are measures implemented to control the flow of people or goods within a given country. Such measures take a variety of forms ranging from the imposition of border checkpoints to the issuance of internal travel documents and vary depending on the circumstances in which they are implemented. Circumstances resulting in internal border controls include increasing security around border areas (e.g. internal checkpoints in America or Bhutan near border regions), preserving the autonomy of autonomous or minority areas (e.g. border controls between Peninsular Malaysia, Sabah, and Sarawak; border controls between Hong Kong, Macau, and mainland China), preventing unrest between ethnic groups (e.g. Northern Ireland's peace walls, border controls in Tibet and Northeastern India), and disputes between rival governments (e.g. between the Republic of China and the People's Republic of China).
During the COVID-19 pandemic, temporary internal border controls were introduced in jurisdictions across the globe. For instance, travel between Australian states and territories was prohibited or restricted by state governments at various points of the pandemic either in conjunction with sporadic lockdowns or as a stand-alone response to COVID-19 outbreaks in neighbouring states. Internal border controls were also introduced at various stages of Malaysia's Movement Control Order, per which interstate travel was restricted depending on the severity of ongoing outbreaks. Similarly, internal controls were introduced by national authorities within the Schengen Area, though the European Union ultimately rejected the idea of suspending the Schengen Agreement per se.
=== Asia ===
Internal border controls exist in many parts of Asia. For example, travellers visiting minority regions in India and China often require special permits to enter. Internal air and rail travel within non-autonomous portions of India and mainland China also generally require travel documents to be checked by government officials as a form of the interior border checkpoint. For such travel within India, Indian citizens may utilise their Voter ID, National Identity Card, passport, or other proof of Indian citizenship whilst Nepali nationals may present any similar proof of Napali citizenship. Meanwhile, for such travel within mainland China, Chinese nationals from the mainland are required to use their national identity cards.
==== China ====
Within China, extensive border controls are maintained for those travelling between the mainland, special administrative regions of Hong Kong and Macau. Foreign nationals need to present their passports or other required types of travel documents when travelling between these jurisdictions. For Chinese nationals (including those with British National (Overseas) status), there are special documents for travel between these territories. Internal border controls in China have also resulted in the creation of special permits allowing Chinese citizens to immigrate to or reside in other immigration areas within the country.
China also maintains distinct, relaxed border control policies in the Special Economic Zones of Shenzhen, Zhuhai and Xiamen. Nationals of most countries can obtain a limited area visa upon arrival in these regions, which permit them to stay within these cities without proceeding further into other parts of mainland China. Visas for Shenzhen are valid for 5 days, and visas for Xiamen and Zhuhai are valid for 3 days. The duration of stay starts from the next day of arrival. The visa can only be obtained only upon arrival at Luohu Port, Huanggang Port Control Point, Fuyong Ferry Terminal or Shekou Passenger Terminal for Shenzhen; Gongbei Port of Entry, Hengqin Port or Jiuzhou Port for Zhuhai; and Xiamen Gaoqi International Airport for Xiamen.
Similarly, China permits nationals of non—visa-exempt ASEAN countries to visit Guilin without a visa for a maximum of 6 days if they travel with an approved tour group and enter China from Guilin Liangjiang International Airport. They may not visit other cities within Guangxi or other parts of mainland China.
Neither the People's Republic of China nor the Republic of China recognizes the passports issued by the other and neither considers travel between mainland China and areas controlled by the Republic of China
as formal international travel. There are arrangements exist for travel between territories controlled by the Republic of China and territories controlled by the People's Republic of China.
More generally, authorities in mainland China maintain a system of residency registration known as hukou (Chinese: 户口; lit. 'household individual'), by which government permission is needed to formally change one's place of residence. It is enforced with identity cards. This system of internal border control measures effectively limited internal migration before the 1980s but subsequent market reforms caused it to collapse as a means of migration control. An estimated 150 to 200 million people are part of the "blind flow" and have unofficially migrated, generally from poor, rural areas to wealthy, urban ones. However, unofficial residents are often denied official services such as education and medical care and are sometimes subject to both social and political discrimination. In essence, the denial of social services outside an individual's registered area of residence functions as an internal border control measure geared toward dissuading migration within the mainland.
==== Bhutan ====
Meanwhile, in Bhutan, accessible by road only through India, there are interior border checkpoints (primarily on the Lateral Road) and, additionally, certain areas require special permits to enter, whilst visitors not proceeding beyond the border city of Phuentsholing do not need permits to enter for the day (although such visitors are de facto subject to Indian visa policy since they must proceed through Jaigaon). Individuals who are not citizens of India, Bangladesh, or the Maldives must obtain both their visa and any regional permits required through a licensed tour operator prior to arriving in the country. Citizens of India, Bangladesh, and the Maldives may apply for regional permits for restricted areas online.
==== Malaysia ====
Another example is the Malaysian states of Sabah and Sarawak, which have maintained their own border controls since joining Malaysia in 1963. The internal border control is asymmetrical; while Sabah and Sarawak impose immigration control on Malaysian citizens from other states, there is no corresponding border control in Peninsular Malaysia, and Malaysians from Sabah and Sarawak have unrestricted right to live and work in the Peninsular. For social and business visits less than three months, Malaysian citizens may travel between the Peninsular, Sabah and Sarawak using the Malaysian identity card (MyKad) or Malaysian passport, while for longer stays in Sabah and Sarawak they are required to have an Internal Travel Document or a passport with the appropriate residential permit.
==== North Korea ====
The most restrictive internal border controls are in North Korea. Citizens are not allowed to travel outside their areas of residence without explicit authorisation, and access to the capital city of Pyongyang is heavily restricted. Similar restrictions are imposed on tourists, who are only allowed to leave Pyongyang on government-authorised tours to approved tourist sites.
=== Europe ===
An example from Europe is the implementation of border controls on travel between Svalbard, which maintains a policy of free migration as a result of the Svalbard Treaty, and the Schengen Area, which includes the rest of Norway. Other examples of effective internal border controls in Europe include the closed cities of certain CIS members, areas of Turkmenistan that require special permits to enter, restrictions on travel to the Gorno-Badakhshan Autonomous Region in Tajikistan, and (depending on whether Northern and Southern Cyprus are considered separate countries) the Cypriot border. Similarly, Iraq's Kurdistan region maintains a separate and more liberal visa and customs area from the rest of the country, even allowing visa-free entry for Israelis whilst the rest of the country bans them from entering. Denmark also maintains a complex system of subnational countries (the Faroe Islands and Greenland) which, unlike the Danish mainland, are outside the European Union and maintain autonomous customs policies. In addition to the numerous closed cities of Russia, parts of 19 subjects of the Russian Federation are closed to foreigners without special permits and are consequently subject to internal border controls.
Another complex border control situation in Europe pertains to the United Kingdom. Whilst the crown dependencies are within the Common Travel Area, neither Gibraltar nor the sovereign British military exclaves of Akrotiri and Dhekelia are. The former maintains its own border control policies, thus requiring physical border security at its border with the Schengen Area as well as border controls for travellers proceeding directly between Gibraltar and the British mainland. The latter maintains a relatively open border with Southern Cyprus, though not with Northern Cyprus; consequently, it is a de facto member of the Schengen Area, and travel to or from the British mainland requires border controls. On 31 December 2020, Spain and the United Kingdom reached an agreement in principle under which Gibraltar would join the Schengen Area, clearing the way for the European Union and the UK to start formal negotiations on the matter.
In the aftermath of Brexit, border controls for goods flowing between Great Britain and Northern Ireland were introduced in accordance with the Protocol on Ireland/Northern Ireland agreed to as part of the UK's withdrawal agreement with the EU. Due to the thirty-year internecine conflict in Northern Ireland, the UK-Ireland border has had a special status since that conflict was ended by the Belfast Agreement/Good Friday Agreement of 1998. As part of the Northern Ireland Peace Process, the border has been largely invisible, without any physical barrier or customs checks at its many crossing points; this arrangement was made possible by both countries' common membership of both the EU's Single Market and Customs Union and of their Common Travel Area. Upon the UK's withdrawal from the European Union, the border in Ireland became the only land border between the UK and the EU. EU single market and UK internal market provisions require certain customs checks and trade controls at their external borders. The Northern Ireland Protocol is intended to protect the EU single market while avoiding the imposition of a 'hard border' that might incite a recurrence of conflict and destabilize the relative peace that has held since the end of the Troubles. Under the Protocol, Northern Ireland is formally outside the EU single market, but EU free movement of goods rules and EU Customs Union rules still apply; this ensures there are no customs checks or controls between Northern Ireland and the rest of the island. In place of an Ireland/Northern Ireland land border, the protocol has created a de facto customs border down the Irish Sea, separating Northern Ireland from the island of Great Britain, to the disquiet of prominent Unionists.
To operate the terms of the protocol, the United Kingdom must provide border control posts at Northern Ireland's ports; actual provision of these facilities is the responsibility of Northern Ireland's Department of Agriculture, Environment and Rural Affairs (DAERA). Temporary buildings were put in place for 1 January 2021, but in February 2021 the responsible Northern Ireland minister, Gordon Lyons (DUP), ordered officials to stop work on new permanent facilities and to stop recruiting staff for them. In its half-yearly financial report on 26 August 2021, Irish Continental Group, which operates ferries between Great Britain and the Republic of Ireland, expressed concern at the lack of implementation of checks on goods arriving into Northern Ireland from Great Britain, as required under the protocol. The company said that the continued absence of these checks (on goods destined for the Republic of Ireland) was causing a distortion in the level playing field, since goods that arrive directly into Republic of Ireland ports from Great Britain are checked on arrival. The implementation of border controls between Great Britain and Northern Ireland was the primary catalyst for the 2021 Northern Ireland riots.
An unusual example of internal border controls pertains to customs enforcement within the Schengen Area. Even though borders are generally invisible, the existence of areas within the Schengen Area but outside the European Union Value Added Tax Area, as well as jurisdictions such as Andorra which are not officially part of the Schengen Area but cannot be accessed without passing through it, has resulted in sporadic internal border controls for customs purposes. Additionally, as per Schengen Area rules, hotels and other types of commercial accommodation must register all foreign citizens, including citizens of other Schengen states, by requiring them to complete a registration form in their own hand. The Schengen rules do not require any other procedures; thus, the Schengen states are free to regulate further details on the content of the registration forms and the identity documents which are to be produced, and may also require the registration of persons exempted from registration by Schengen laws. A Schengen state is also permitted to reinstate border controls with another Schengen country for a short period where there is a serious threat to that state's "public policy or internal security" or when the "control of an external border is no longer ensured due to exceptional circumstances". When such risks arise out of foreseeable events, the state in question must notify the European Commission in advance and consult with other Schengen states. Since the implementation of the Schengen Agreement, this provision has been invoked frequently by member states, especially in response to the European migrant crisis.
=== Middle East ===
The Israeli military maintains an intricate network of internal border controls within the Israeli-occupied West Bank, as well as external border controls between the West Bank and Israel, restricting the freedom of movement of Palestinians. This network is composed of permanent, temporary, and random manned checkpoints in the West Bank; the West Bank Barrier; and restrictions on the usage of roads by Palestinians. Spread throughout the areas of the State of Palestine under de facto Israeli control, internal border control measures are a key feature of Palestinian life and are among the most restrictive in the world. Additionally, the blockade of the Gaza Strip results in a de facto domestic customs and immigration border for Palestinians. In order to clear internal border controls, Palestinians are required to obtain a variety of permits from Israeli authorities depending on the purpose and area of their travel. The legality and impact of this network of internal border controls is controversial. B'Tselem, an Israeli non-governmental organisation that monitors human rights in Palestine, argues that they breach the rights guaranteed by the International Covenant on Economic, Social and Cultural Rights: in particular, the right to a livelihood, the right to an acceptable standard of living, the right to satisfactory nutrition, clothing, and housing, and the right to attain the best standard of physical and mental health. B'Tselem also argues that the restrictions on ill, wounded and pregnant Palestinians seeking acute medical care are in contravention of international law, which states that medical professionals and the sick must be granted open passage. While the Israeli Supreme Court has deemed the measures acceptable for security reasons, Haaretz's Amira Hass argues that this policy defies one of the principles of the Oslo Accords, which states that Gaza and the West Bank constitute a single geographic unit.
Much like relations between Jewish settlers in Israel and the native Palestinian population, strained intercommunal relations in Northern Ireland between Irish Catholics and the descendants of Protestant settlers from England and Scotland have resulted in de facto internal checkpoints. The peace lines are an internal border security measure separating predominantly republican and nationalist Catholic neighbourhoods from predominantly loyalist and unionist Protestant neighbourhoods. They have existed in some form or another since 1969 and have remained in place since the end of The Troubles with the Good Friday Agreement of 1998. The majority of peace walls are located in Belfast, but they also exist in Derry, Portadown, and Lurgan, with more than 32 kilometres (20 miles) of walls in Northern Ireland. The peace lines range in length from a few hundred metres to over 5 kilometres. They may be made of iron, brick, steel or a combination of the three and are up to 8 metres (26 feet) high. Some have gates in them (sometimes staffed by police) that allow passage during daylight but are closed at night.
=== North America ===
Multiple types of internal border controls exist in the United States. While the American territories of Guam and the Northern Mariana Islands follow the same visa policy as the mainland, they also jointly maintain their own visa waiver programme for certain nationalities. Since the two territories are outside the customs territory of the United States, there are customs inspections when travelling between them and the rest of the U.S. American Samoa has its own customs and immigration regulations, so travelling between it and other American jurisdictions involves both customs and immigration inspections. The U.S. Virgin Islands are a special case, falling within the American immigration zone and solely following American visa policy, but being a customs-free territory. As a result, there are no immigration checks between the Virgin Islands and the rest of the United States, but travellers arriving in Puerto Rico or the American mainland directly from the Virgin Islands are subject to border control for customs inspection. The United States also maintains interior checkpoints, similar to those maintained by Bhutan, along its borders with Mexico and Canada, subjecting people to border controls even after they have entered the country.
The Akwesasne nation, with territory in Ontario, Quebec, and New York, features several de facto internal border controls. As a result of protests by Akwesasne residents over their rights to cross the border unimpeded, as provided under the 1795 Jay Treaty, the Canada Border Services Agency closed its post on Cornwall Island, instead requiring travellers to proceed to the checkpoint in the city of Cornwall. As a consequence of the arrangement, residents of the island are required to clear border controls when proceeding north to the Ontarian mainland, as well as when proceeding south to Akwesasne territory in New York, thus constituting internal controls both from a Canadian perspective and from the perspective of the Akwesasne nation. Similarly, travelling between Canada and the Quebec portion of the Akwesasne nation requires driving through the state of New York, meaning that individuals must clear American controls when leaving Quebec proper and Canadian border controls when re-entering Quebec proper, though Canada does not impose border controls when entering the Quebec portion of the Akwesasne nation. Nevertheless, for residents who assert a Haudenosaunee national identity distinct from Canadian or American citizenship, the intricate network of Canadian and American border controls is seen as a foreign-imposed system of internal border controls, similar to the Israeli checkpoints in Palestinian territory.
The city of Hyder, Alaska has also been subject to internal border controls since America stopped regulating arrivals in Hyder from British Columbia. Since travellers exiting Hyder into Stewart, British Columbia are subject to Canadian border controls, it is theoretically possible for someone to accidentally enter Hyder from Canada without their travel documents and then face difficulties, since both America and Canada would subject them to border controls that require travel documents. At the same time, however, the northern road connecting Hyder to uninhabited mountain regions of British Columbia is equipped with neither American nor Canadian border controls, meaning that tourists from Canada proceeding northwards from Hyder must complete Canadian immigration formalities when they return to Stewart despite never having cleared American immigration.
=== Historical ===
Identification and freedom of internal movement have sometimes been instruments of oppression, for example in Canada's pass system, or Apartheid-era South Africa's Pass laws.
== Specific requirements ==
The degree of strictness of border controls varies across countries and borders. In some countries, controls may be targeted at the traveller's religion, ethnicity, nationality, or the other countries they have visited. Others may need to be certain the traveller has paid the appropriate fees for their visa and has onward travel planned out of the country. Yet others may concentrate on the contents of the traveller's baggage and imported goods, to ensure nothing is being carried that might bring a biosecurity risk into the country.
=== Biometrics ===
Several countries require all travellers, or all foreign travellers, to be fingerprinted on arrival, and refuse admission to (or even arrest) travellers who refuse to comply. In some countries, such as America, this may apply even to transit passengers proceeding to a third country. Many countries also require that a photo be taken of people entering the country. The United States, which does not fully implement exit control formalities at its land frontiers (although these have long been mandated by domestic legislation), intends to implement facial recognition for passengers departing from international airports to identify people who overstay their visas. Together with fingerprint and face recognition, iris scanning is one of three biometric identification technologies internationally standardised since 2006 by the International Civil Aviation Organization (ICAO) for use in e-passports; the United Arab Emirates conducts iris scanning on visitors who need to apply for a visa. The Department of Homeland Security has announced plans to greatly increase the biometric data it collects at American borders. In 2018, Singapore began trials of iris scanning at three land and maritime immigration checkpoints.
=== Immigration stamps ===
An immigration stamp is an inked impression in a passport or other travel document, typically made by a rubber stamp upon entering or exiting a territory. Depending on the jurisdiction, a stamp can serve different purposes. For example, in the United Kingdom, an immigration stamp in a passport includes the formal leave to enter granted to a person subject to entry control. In other countries, a stamp activates or acknowledges the continuing leave conferred in the passport bearer's entry clearance. Under the Schengen system, a foreign passport is stamped with a date stamp which does not indicate any duration of stay; the person is deemed to have permission to remain either for three months or for the period shown on their visa if specified otherwise. Member states of the European Union are not permitted to place a stamp in the passport of a person who is not subject to immigration control, as stamping would impose a control to which the person is not subject. Passport stamps may occasionally take the form of sticker stamps, such as entry stamps from Japan. Depending on nationality, a visitor may not receive a stamp at all (unless specifically requested), such as an EU or EFTA citizen travelling to an EU or EFTA country, Albania, or North Macedonia. Most countries issue exit stamps in addition to entry stamps. A few countries issue only entry stamps, including Canada, El Salvador, Ireland, Mexico, New Zealand, Singapore, the United Kingdom and the United States of America. Australia, Hong Kong, Israel, Macau and South Korea do not stamp passports upon either entry or exit; these countries or regions issue landing slips instead, with the exception of Australia, which does not issue any form of physical evidence of entry. Visas may also take the form of passport stamps.
Immigration authorities usually have different styles of stamps for entries and exits, to make it easier to identify the movements of people. Ink colour might be used to designate the mode of transportation (air, land or sea), as in Hong Kong prior to 1997, while stamp border styles served the same purpose in Macau. Other variations include changing the size of the stamp to indicate length of stay, as in Singapore.
In many cases, passengers on cruise ships do not receive passport stamps because the entire vessel has been cleared into port. It is often possible to get a souvenir stamp, although this requires finding the immigration office by the dock; officials are often used to such requests and will cooperate. Also, as noted below, some of the smallest European countries will give a stamp on request, either at their border or tourist office, charging at most a nominal fee.
=== Exit controls ===
Whilst most countries implement border controls both at entry and exit, some jurisdictions do not. For instance, the United States and Canada do not implement exit controls at land borders, instead collecting exit data on foreign nationals through airlines and through information sharing with neighbouring countries' entry border controls. These countries consequently do not issue exit stamps, even to travellers who require stamps on entry. Similarly, Australia, Singapore and South Korea have eliminated exit stamps even though they continue to implement brief border control checks upon exit for most foreign nationals. In countries where there is no formal control of travel documents by immigration officials upon departure, exit information may be recorded by immigration authorities using the information provided to them by transport operators.
No exit control:
United States of America
Canada
Mexico (by air, but entrance declaration coupon is collected)
Bahamas
Ireland
United Kingdom
Formal exit control without passport stamping:
Albania (Entry & Exit stamp may be issued upon request)
Australia (Exit stamp issued upon explicit request)
China (Exit stamp issued upon request when using e-Gate)
Costa Rica (only at Costa Rican airports; different entry and exit stamps are made at the border crossing with Panama)
El Salvador
Fiji
Hong Kong (no entry or exit stamps are issued, instead landing slips are issued upon arrival only)
Iran
Israel (no entry or exit stamps are issued at Ben Gurion Airport, instead landing slips are issued upon arrival and departure)
Japan (Exit stamp issued upon request & when not using e-Gate since July 2019)
Macau (no entry or exit stamps are issued, instead landing slips are issued upon arrival only)
New Zealand
South Korea (since 1 November 2016)
Panama (only at Panamanian airports; stamps are made at the border crossing with Costa Rica)
Republic of China (exit stamp issued upon request & when not using e-Gate)
Singapore (no exit stamps since 22 April 2019)
Saint Kitts and Nevis
Schengen Area countries (when the Entry/Exit System becomes operational, it is anticipated that the passports of third-country nationals will not be stamped when they enter and leave the Schengen Area)
=== Exit permits ===
Some countries maintain controversial exit visa systems in addition to regular border controls. For instance, Uzbekistan requires its own citizens to obtain exit visas prior to leaving for countries other than fellow CIS nations. Several countries in the Arabian peninsula require exit visas for foreign workers under the kafala system (meaning "sponsorship system"). Russia occasionally requires foreigners who overstay to obtain exit visas, since one cannot exit Russia without a valid visa; Czechia has a similar policy. Similarly, a foreign citizen granted a temporary residence permit in Russia needs an exit visa to take a trip abroad (valid for both exit and return), although not all foreign citizens are subject to that requirement; citizens of Germany, for example, do not require this exit visa. During the Cold War, countries in the Eastern Bloc maintained strict controls on citizens' ability to travel abroad. Citizens of the Soviet Union, East Germany, and other communist states were typically required to obtain permission prior to engaging in international travel. Unlike most of these states, citizens of Yugoslavia enjoyed significant freedom of international movement.
Certain Asian countries have policies that similarly require certain categories of citizens to seek official authorisation prior to travelling or emigrating. This is usually either a way to enforce national service obligations or a way to protect migrant workers from travelling to places where they may be abused by employers. Singapore, for instance, operates an Exit Permit scheme in order to enforce the national service obligations of its male citizens and permanent residents; these restrictions vary according to age and status. South Korea and Taiwan have similar policies. India, on the other hand, requires citizens who have not met certain educational requirements (and thus may be targeted by human traffickers or be coerced into modern slavery) to apply for approval prior to leaving the country and endorses their passports with "Emigration Check Required". Nepal similarly requires citizens emigrating to America on an H-1B visa to present an exit permit issued by the Ministry of Labour; this document is called a work permit and must be presented to immigration officials in order to leave the country. In a bid to increase protection for the large number of Indian, Bangladeshi, Chinese, and Nepali citizens smuggled through Indian airports to the Middle East as underpaid labourers, many Indian airline companies require travellers to obtain an 'OK to Board' confirmation sent from visa authorities in certain GCC countries directly to the airline, and will bar anyone who has not obtained this endorsement from clearing exit immigration.
Eritrea requires the vast majority of its citizens to apply for special authorization if they wish to leave, or even travel within, the country.
=== Travel documents ===
Border control policies typically require travellers to present valid travel documents in order to ascertain their identity, nationality or permanent residence status, and eligibility to enter a given jurisdiction. The most common form of travel document is the passport, a booklet-form identity document issued by national authorities or the governments of certain subnational territories containing an individual's personal information as well as space for the authorities of other jurisdictions to affix stamps, visas, or other permits authorising the bearer to enter, reside, or travel within their territory. Certain jurisdictions permit individuals to clear border controls using identity cards, which typically contain similar personal information.
=== Visas ===
A visa is a travel document issued to foreign nationals enabling them to clear border controls. It traditionally takes the form of an adhesive sticker or, occasionally, a stamp affixed to a page in an individual's passport or equivalent document. Visa policies serve different purposes depending on the priorities of each jurisdiction, ranging from ensuring that visitors do not pose a national security risk and have sufficient financial resources, to simply functioning as a tax on tourists, as is the case with countries like Mauritius and other leisure destinations which issue visas on arrival, electronic visas, or electronic travel authorisations (ETAs) to most or all visitors. Visas may include limits on the duration of the foreigner's stay, areas within the state they may enter, the dates they may enter, the number of permitted visits, or an individual's right to work in the state in question.
Many countries in Asia have liberalised their visa controls in recent years to encourage transnational business and tourism. For example, India, Myanmar, and Sri Lanka have introduced electronic visas to make border control less arduous for business travellers and tourists. Malaysia has introduced similar eVisa facilities, and has also introduced the eNTRI programme to expedite clearance for Indian citizens and Chinese citizens from the mainland. Thailand regularly issues visas on arrival to many non-exempt visitors at major ports of entry in order to encourage tourism. Indonesia, in recent years, has progressively liberalised its visa regime, no longer requiring visas or on-arrival visas from most nationals, while Singapore has signed visa waiver agreements with many countries in recent years and has introduced electronic visa facilities for Indians, Eastern Europeans, and mainland Chinese. This movement towards visa liberalisation in Asia is part of a regional trend toward social and economic globalisation that has been linked to heightened economic growth.
Certain countries, predominantly but not exclusively in western Europe and the Americas, issue working holiday visas that allow younger visitors to supplement their travel funds by taking minor jobs. These are especially common in members of the European Union and elsewhere in Europe. Saudi Arabia issues a special category of visa for people on religious pilgrimage, and similar policies are in force in other countries with significant religious sites. Certain jurisdictions impose special visa requirements on journalists; countries that require such visas include Cuba, China, North Korea, Saudi Arabia, America and Zimbabwe.
As a consequence of awkward border situations created by the fall of the Soviet Union, certain former members of the USSR and their neighbours maintain special visa exemption policies for travellers transiting across international boundaries between two points in a single country. For instance, Russia permits vehicles to transit across the Saatse Boot between the Estonian villages of Lutepää and Sesniki without any visa or border checkpoint provided that they do not stop. Similar provisions are made for the issuance of Facilitated Rail Transit Documents by Schengen Area members for travel between Kaliningrad Oblast and the Russian mainland, enabling Russian citizens to travel to and from the exclave without a passport or visa.
Many countries let individuals clear border controls using foreign visas. Notably, the Philippines permits nationals of India and China to use any of several foreign visas to clear border controls. In order to encourage tourism by transit passengers, South Korea permits passengers in transit who would otherwise require a South Korean visa to enter for up to thirty days using an Australian, Canadian, American, or Schengen visa. Uniquely, the British territory of Bermuda has ceased to issue its own visas and instead requires that travellers either clear immigration visa-free in one of the three countries (Canada, America, and the United Kingdom) to or from which it has direct flights, or hold a visa for one of them.
=== Electronic visas and electronic travel authorisations ===
Beginning in the 2000s, many countries introduced e-visas and electronic travel authorisations (ETAs) as an alternative to traditional visas. An ETA is a kind of pre-arrival registration, which may or may not be officially classified as a visa depending on the issuing jurisdiction, required for foreign travellers who are exempted from obtaining a full visa. Unlike a rejected application for a proper visa, which normally leaves the traveller with no recourse, a rejected ETA application can be followed by an application for a visa instead. In contrast, an e-visa is simply a visa that travellers can apply for and receive online without visiting the issuing state's consular mission or visa agency. The following jurisdictions require certain categories of international travellers to hold an ETA or e-visa in order to clear border controls upon arrival:
Australia:
Electronic Travel Authority (ETA)
eVisitor programme
East African Community: Since February 2014, Kenya, Rwanda and Uganda have issued a common East African Tourist Visa.
Hong Kong: Mainland Travel Permit for Taiwan Residents
India: India permits nationals of most jurisdictions to clear border controls using an e-visa.
Kenya: Since 1 January 2021, Kenya has issued only e-visas; physical visas are no longer available.
New Zealand: Electronic Travel Authority (NZeTA)
North America: Canadian Electronic Travel Authorization (eTA), US Electronic System for Travel Authorization (ESTA)
Pakistan: Pakistani ETA.
South Korea: Eligible visa-free visitors must obtain a Korea Electronic Travel Authorization (K-ETA).
Sri Lanka: Sri Lankan ETA
Qatar: ETA needed for up to 30 days.
United Kingdom: Electronic Visa Waiver (EVW). The Nationality and Borders Bill, before Parliament in spring 2022, includes a proposal to introduce an Electronic Travel Authorisation system for all non-UK and non-Irish citizens.
=== Nationality and travel history ===
Many nations implement border controls restricting the entry of people of certain nationalities or of those who have visited certain countries. For instance, Georgia refuses entry to holders of passports issued by the Republic of China. Similarly, since April 2017, nationals of Bangladesh, Pakistan, Sudan, Syria, Yemen, and Iran have been banned from entering the parts of eastern Libya under the control of the Tobruk government. The majority of Arab countries, as well as Iran and Malaysia, ban Israeli citizens; however, exceptional entry to Malaysia is possible with approval from the Ministry of Home Affairs. Certain countries may also restrict entry to those with Israeli stamps or visas in their passports. As a result of tension over the Artsakh dispute, Azerbaijan currently forbids entry to Armenian citizens as well as to individuals with proof of travel to Artsakh.
Between September 2017 and January 2021, the United States did not issue new visas to nationals of Iran, North Korea, Libya, Somalia, Syria, or Yemen pursuant to restrictions imposed by the Trump administration, which were repealed by the Biden administration on 20 January 2021. While in force, the restrictions were conditional and could be lifted if the countries affected met the security standards specified by the Trump administration, and dual citizens of these countries could still enter if they presented a passport from a non-designated country.
=== Prescreening ===
A significant number of countries maintain prescreening facilities for passengers departing from other jurisdictions to clear border controls prior to arrival and thereby skip checkpoints upon arrival. Aside from simplifying arrival formalities, this enables border control authorities to deny entry to potentially inadmissible travellers prior to their embarking and to reduce congestion at border checkpoints located at ports of arrival.
Hong Kong and mainland China: There are two border crossings between Hong Kong and the Chinese mainland at which border controls imposed by the two jurisdictions are co-located:
West Kowloon Railway Station (simplified Chinese: 香港西九龙站; traditional Chinese: 香港西九龍站): A component of the Guangzhou–Shenzhen–Hong Kong Express Rail Link (Chinese: 廣深港高速鐵路; pinyin: Guǎng–Shēn–Gǎng Gāosù Tiělù), West Kowloon Station contains a "Mainland Port Area (simplified Chinese: 站内地口岸区; traditional Chinese: 站內地口岸區)", essentially enabling passengers and goods to clear mainland Chinese immigration on Hong Kong soil.
Shenzhen Bay Port (simplified Chinese: 深圳湾口岸; traditional Chinese: 深圳灣口岸): The land border checkpoint at Shenzhen Bay Port in the mainland contains a Hong Kong Port Area (simplified Chinese: 港方口岸区; traditional Chinese: 港方口岸區) which enables passengers and goods to clear Hong Kong border controls in the mainland. The checkpoint is located in the Chinese mainland on land leased from the city of Shenzhen in Guangdong province. By enabling travellers to clear both Chinese and Hong Kong border controls in one place, it eliminates any need for a second checkpoint on the Hong Kong side of the Shenzhen Bay Bridge.
Singapore and Malaysia:
Woodlands Train Checkpoint (Malay: Pusat Pemeriksaan Kereta Api Woodlands, Chinese: 兀兰火车关卡, Tamil: ஊட்லண்ட்ஸ் இரயில் மசாதலைச்சாவடிப): For cross-border rail passengers, Singaporean exit and Malaysian entry preclearance border controls are co-located at the Woodlands Train Checkpoint in Singapore, whilst Malaysian exit controls are located separately at Johor Bahru Sentral railway station in Malaysia.
Johor Bahru – Singapore Rapid Transit System (Malay: Sistem Transit Aliran Johor Bahru–Singapura, Chinese: 新山-新加坡捷运系统, Tamil: ஜோகூர் பாரு – சிங்கப்பூர் விரைவான போக்குவரத்து அமைப்பு, RTS): The upcoming RTS connecting Singapore and Johor Bahru will feature border control preclearance both on the Singaporean side and on the Malaysian side. This will enable passengers arriving in Singapore from Malaysia, or vice versa, to proceed straight to their connecting transport, since the RTS will link to both the Singapore MRT system (Thomson–East Coast Line) and Johor Bahru Sentral. Unlike the preclearance systems adopted in America and Hong Kong, but similar to the United Kingdom's juxtaposed controls, this system will remove the need for arrival border controls on both sides of the border.
Malaysia and Thailand:
Padang Besar railway station (Thai: สถานีรถไฟปาดังเบซาร์, Malay: Stesen keretapi Padang Besar): The Padang Besar railway station in Padang Besar, Malaysia has co-located border control facilities for both Malaysia and Thailand, although the station is wholly located inside Malaysian territory (albeit just 200 metres south of the Malaysia–Thailand border). The facilities for each country operate from separate counters inside the railway station building at the platform level. Passengers entering Thailand clear Malaysian and Thai border formalities here in Malaysian territory before boarding their State Railway of Thailand trains, which then cross the actual borderline several minutes after departing the station. Passengers from Thailand entering Malaysia are also processed here, using the same counters, as there are no separate counters for processing entries and exits for either country.
United Kingdom and the Schengen Area: Border control for travel between the United Kingdom and the Schengen Area features significant prescreening under the juxtaposed controls programme for travel by both ferry and rail. This includes customs and immigration prescreening on both sides of the Channel Tunnel, and immigration-only prescreening for ferry passengers and on the Eurostar between the United Kingdom and stations located in Belgium, France, and the Netherlands. Eurostar and Eurotunnel passengers departing from the Schengen area go through both French, Dutch, or Belgian exit border controls and British entry border controls before departure, while passengers departing from the United Kingdom, including those departing for Belgium or the Netherlands, undergo French border controls on British soil. For travel by ferry, French entry border control for ferries between Dover and Calais or Dunkerque takes place at the Port of Dover, whilst French exit and British entry border control takes place at Calais and Dunkerque. For travel by rail, twelve juxtaposed border control checkpoints are currently in operation.
United States: The U.S. government operates border preclearance facilities at a number of ports and airports in foreign territory. They are staffed and operated by U.S. Customs and Border Protection officers. Travellers pass through U.S. Immigration and Customs, Public Health, and Agriculture inspections before boarding their aircraft, ship, or train. This process is intended to streamline border procedures, reduce congestion at ports of entry, and facilitate travel between the preclearance location and American airports unequipped to handle international travellers. These facilities are present at the majority of major Canadian airports, as well as selected airports in Bermuda, Aruba, the Bahamas, Abu Dhabi and Ireland. Facilities located in Canada accept NEXUS cards and United States Passport Cards (land/sea entry only) in lieu of passports. A preclearance facility is currently being planned at Dubai International Airport. Citizens of the Bahamas who enter the United States through either of the two preclearance facilities in that country enjoy an exemption from the general requirement to hold a visa, as long as they can sufficiently prove that they do not have a significant criminal record in either the Bahamas or the U.S. All Bahamians applying for admission at a port of entry other than the preclearance facilities located at the Nassau or Freeport international airports are required to be in possession of a valid visa. Preclearance facilities are also operated at Pacific Central Station, the Port of Vancouver, and the Port of Victoria in British Columbia, and there are plans to open one at Montreal Central Station in Quebec.
Informal prescreening: In some cases, countries can introduce controls that function as border controls but are not legally border controls and need not be performed by government agencies. Normally they are performed and organised by private companies, based on a law requiring them to check that passengers do not travel to a specific country they are not allowed to enter. Such controls can take effect in one country based on the law of another country without any formalised border control prescreening agreement in force. Even though they are not legally border controls, they function as such. The most prominent example is airlines, which check passports and visas before passengers are allowed to board the aircraft. For some passenger boats, such checks are also performed before boarding.
== Expedited border controls ==
Certain countries and trade blocs establish programmes to expedite border controls for high-frequency or low-risk travellers, subjecting them to lighter or automated checks or granting them access to priority border control facilities. In some countries, citizens or residents have access to automated facilities not available to foreigners. The following expedited border control programmes are currently in effect:
APEC Business Travel Card (ABTC): The APEC Business Travel Card, or ABTC, is an expedited border control programme for business travellers from APEC economies (excluding Canada and America). It provides visa exemptions and access to expedited border control facilities. ABTC holders are eligible for expedited border control at Canadian airports, but not for any visa exemptions. ABTCs are generally issued only to citizens of APEC member countries; however, Hong Kong issues them to permanent residents who are not Chinese citizens, a category primarily consisting of British, Indian, and Pakistani citizens. The use of ABTCs in China is restricted as a result of the One Country, Two Systems and One China policies. Chinese nationals from Hong Kong, Macau, and the Republic of China are required to use special internal travel documents to enter the mainland, and similar restrictions exist on the use of ABTCs by Chinese citizens of other regions entering areas administered by the Republic of China (see: Internal border controls).
Australia: SmartGates located at major Australian airports allow Australian ePassport holders and ePassport holders of a number of other countries to clear immigration controls more rapidly, and enhance travel security by performing passport control checks electronically. SmartGate uses a facial recognition system to verify the traveller's identity against the data stored in the chip in their biometric passport, as well as checking against immigration databases. Travellers require a biometric passport to use SmartGate, as it uses information from the passport (such as photograph, name and date of birth) and in the respective countries' databases (e.g. a banned travellers database) to decide whether to grant entry to or departure from Australia, or to generate a referral to a customs agent. These checks would otherwise require manual processing by a human, which is time-consuming, costly and potentially error-prone.
British Isles: ePassport gates in the British Isles are operated by the UK Border Force and the Irish Naturalisation and Immigration Service, and are located at immigration checkpoints in the arrival halls of some airports across the British Isles, offering an alternative to desks staffed by immigration officers. The gates use a facial recognition system to verify the user's identity by comparing the user's facial features to those recorded in the photograph stored in the chip in their biometric passport. British citizens, European Economic Area citizens and citizens of Australia, Canada, Japan, New Zealand, Singapore, South Korea, Taiwan and the United States, as well as Chinese citizens of Hong Kong who are enrolled in the Registered Traveller Service, can use ePassport gates at 14 ports of entry in the United Kingdom, provided that they hold valid biometric passports and are either aged 18 and over, or aged 12 and over and travelling with an adult. In Ireland, eGates are available at Dublin Airport for arrivals at Terminal 1 (Piers 1 and 2) and Terminal 2; in addition to Irish and British citizens, they are currently available to citizens of Switzerland and the European Economic Area aged 18 or over holding electronic passports, though there are proposals to extend the service to non-European citizens. Irish passport cards can be used at eGates in Dublin Airport.
Caribbean Community: CARIPASS is a voluntary travel card programme that will provide secure and simple border crossings for citizens and legal residents of participating Caribbean Community jurisdictions. The CARIPASS initiative is coordinated by the Implementation Agency for Crime and Security (CARICOM IMPACS), and seeks to provide standardised border control facilities within participating Caribbean communities. CARIPASS is accepted as a valid travel document within and between participating member states and will allow cardholders to access automated gate facilities at immigration checkpoints that will use biometric technology to verify the user.
China:
Mainland China: Residents of the PRC, both Chinese citizens and foreign residents (not tourists), can use the Chinese E-Channel after registration, which is done at the border before leaving the Mainland. Chinese citizens with Hong Kong or Macau permanent residence can use their Home Return Permit instead of their passport to enter and leave the Mainland.
Hong Kong & Macau: The Automated Passenger Clearance System (Chinese: 自助出入境檢查閘機), colloquially known as the e-Channel, is an automated border control facility available at airports in Hong Kong and Macau, and at land borders between the mainland and the Special Administrative Regions. It is open to residents of the appropriate regions and to selected foreign nationals. In Hong Kong, the e-Channel is also available to non-residents on departure, without registration, and to registered non-residents who qualify as "frequent travellers", including Chinese citizens from the Mainland, for both arrival and departure. Finally, Hong Kong's and Macau's e-Channel systems recognise each other's Permanent Resident ID cards, after registration at an automated kiosk at the ferry terminal.
Japan: Along with the introduction of J-BIS, an "Automated gate" (Japanese: 自動化ゲート) was set up at Terminals 1 and 2 at Narita Airport, as well as at Haneda Airport, Chubu Centrair Airport and Kansai Airport. With this system, rather than being processed by an examiner, a person entering or leaving the country can use a machine at the gate, making both entry and departure simpler, easier and more convenient. Japanese citizens with valid passports, and foreigners with both valid passports and re-entry permits (including refugees with valid travel certificates and re-entry permits), can use this system.
Mexico: Viajero Confiable is a Mexican trusted traveller programme which allows members to pass securely through customs and immigration controls in reduced time, using automated kiosks at participating airports. Viajero Confiable was introduced at three airports in 2014 and has since expanded to additional sites. As with the NEXUS, Global Entry, and TSA PreCheck programmes, Viajero Confiable members travelling via participating airports may use designated lanes which allow them to speedily and securely clear customs, because the Mexican government has already performed a background check on them and considers them trusted travellers. At the participating airports, members may use automated kiosks to scan their passport and fingerprints and complete an electronic immigration form. The programme is targeted at Mexican citizens, as well as U.S. or Canadian citizens who are members of the Global Entry or NEXUS programmes and are lawful permanent residents of Mexico.
New Zealand: In New Zealand, a SmartGate system exists at Auckland, Wellington, Christchurch and Queenstown airports, enabling holders of biometric passports issued by New Zealand, Australia, Canada, China, France, Germany, Ireland, the Netherlands, the United Kingdom, and America to clear border controls using automated facilities. The system can currently only be used by travellers 12 years of age or older; however, a trial is under way that may lower the age of eligibility for people with an eligible ePassport from 12 to 10 years of age. New Zealand eGates utilise biometric technology, comparing the picture in the traveller's ePassport with a picture taken at the gate in order to confirm the traveller's identity. To ensure that eGate can do this, travellers should look as similar to their ePassport photos as possible and remove glasses, scarves and hats that they were not wearing when their passport picture was taken; eGate can handle minor changes in a face, for example if a traveller's weight or hair has changed. Customs, Biosecurity and Immigration officials utilise information provided at eGates, including photos, to clear travellers and their items across New Zealand's border. Biometric information is kept for three months before destruction, but other information, including about movements across New Zealand's border, is kept indefinitely and handled in accordance with the Privacy Act 1993, or as the law authorises; this might include information being used by or shared with other law enforcement or border control authorities. Since 1 July 2019, visitors from the 60 visa waiver countries require a New Zealand Electronic Travel Authority (NZeTA). This is an online application, and a further toolkit and requirements for airlines and travel agents can be downloaded from the New Zealand Immigration website.
Singapore: The enhanced-Immigration Automated Clearance System (eIACS) is available at all checkpoints for Singapore citizens, permanent residents, foreign residents with long-term passes, APEC Business Travel Card holders, and other registered travellers. Foreign visitors whose fingerprints are registered on arrival at a manned counter may use the eIACS lanes for exit clearance, and nationals of certain countries may register to use the eIACS system on entry, provided they meet prescribed conditions. In addition, the Biometric Identification of Motorbikers (BIKES) System is available for eligible motorcyclists at the land border crossings with Malaysia.
South Korea: South Korea maintains a programme known as the Smart Entry Service, open for registration by South Koreans aged 7 or above and by registered foreigners aged 17 or above. Furthermore, visitors aged 17 or older may use the Smart Entry Service on exit at international airports, as long as they have provided their biometrics on arrival.
Taiwan: An automated entry system, eGate, exists in areas administered by the Republic of China, providing expedited border control for ROC nationals as well as certain classes of residents and frequent visitors. Users simply scan their travel documents at the gate and are passed through for facial recognition. As of 2019, there have been instances of non-registered foreign travellers being allowed to use the eGate system to depart, notably at Taipei Taoyuan Airport Terminal 1 (but not Terminal 2), using a passport scan and fingerprints.
Thailand: The automated passport control (APC) system, which uses a facial recognition system, has been available for Thai nationals since 2012 and more than 20 million have used it. Suvarnabhumi Airport opened 8 automated immigration lanes for foreigners, but only Singaporeans were allowed to use the system initially. Since then, Singaporeans and holders of the Hong Kong SAR passport have been allowed to use the system. Once processed, the foreign travellers can leave the automatic channel and present their passport to a Thai immigration officer to be stamped.
North America: A wide variety of expedited border control programmes operate in North America:
Global Entry: Global Entry is a programme for frequent travellers that enables them to utilise automated border control facilities and priority security screening. In addition to U.S. citizens and permanent residents, the programme is open to Indian, Singaporean, and South Korean citizens, among others. Global Entry members may use automated kiosks at participating airports to clear U.S. border controls more efficiently. Enrolled users must present their machine-readable passport or permanent residency card and submit their fingerprints to establish identity. Users then complete an electronic customs declaration and are issued a receipt instructing them to proceed either to baggage claim or to a normal inspection booth for an interview.
CANPASS: Canadian citizens and Permanent Residents can apply for CANPASS which, in its present form, provides expedited border controls for individuals entering Canada on corporate and private aircraft.
NEXUS and FAST: NEXUS is a joint Canadian-U.S. expedited border control programme for low-risk travellers holding Canadian or U.S. citizenship or permanent residence. Membership requires approval by Canadian and U.S. authorities and entitles members to dedicated RFID-enabled lanes when crossing the land border. A NEXUS card can also be utilised as a travel document between the two countries and entitles passengers to priority border control facilities in Canada and Global Entry facilities in the U.S. Free and Secure Trade (FAST) is a similar programme for commercial drivers and approved importers, reducing the amount of customs checks conducted at the border and expediting the border control process. When entering the U.S. by air, holders of NEXUS cards may use Global Entry kiosks to clear border controls at participating airports.
SENTRI: SENTRI is a programme similar to NEXUS for U.S. and Mexican citizens that additionally allows members to register their cars for expedited land border controls. Unlike NEXUS, SENTRI is administered solely by the American government and does not provide expedited controls when entering Mexico. When entering the United States by land from Canada, a SENTRI card can be used as a NEXUS card, but not the other way around. Individuals holding a NEXUS card may additionally register their cars for expedited land border controls under SENTRI. When entering the United States by air, holders of SENTRI cards may use Global Entry kiosks to clear border controls at participating airports.
=== Local border traffic ===
Local border traffic is the flow of travellers who reside within the area surrounding a controlled international or internal border. In many cases, such traffic is subject to special regulations designed to expedite it. Depending on the particular border in question, these measures may be restricted to local residents, implemented as a blanket regional visa waiver by one jurisdiction for nationals of the other, restricted to frequent cross-border travellers, or available to individuals lawfully present in one jurisdiction seeking to visit the other.
Schengen Area: Schengen states which share an external land border with a non-Schengen state are authorised by EU Regulation 1931/2006 to conclude bilateral agreements with neighbouring countries implementing a simplified local border traffic regime. Such agreements define a border area and provide for the issuance of local border traffic permits to residents of the border area that may be used to cross the EU external border within the border area.
=== Relaxed control in near-border areas ===
Bhutan: The relaxed border controls maintained by Bhutan for those not proceeding past Phuentsholing and certain other border cities enable travellers to enter without going through any document check whatsoever.
America: The Border Crossing Card issued by American authorities to Mexican nationals enables them to enter border areas without a passport. Both the United States and Bhutan maintain interior checkpoints to enforce compliance.
China: China maintains relaxed border controls for individuals lawfully in Hong Kong or Macau to visit the surrounding Pearl River Delta visa-free provided that certain conditions are met.
Belarus: The "Brest–Grodno" visa-free territory, established by a presidential decree signed in August 2019, has permitted local visa-free access for most visitors lawfully present in the neighbouring Schengen Area since 10 November 2019. Visitors are allowed to stay without a visa for 15 days. Entry is possible through designated checkpoints with Poland and Lithuania, Brest-Uschodni Railway Station, Grodno Railway Station, Brest Airport and Grodno Airport. Prior to travel, visitors must obtain authorisation from a local travel agency in Belarus.
== Border control organisations by country ==
Border control is generally the responsibility of specialised government organisations which oversee various aspects of their jurisdiction's border control policies, including customs, immigration policy, border guarding, and biosecurity measures. Official designations, divisions of responsibility, and command structures of these organisations vary considerably, and some countries split border control functions across multiple agencies.
Australia
Australian Border Force
Canada
Immigration, Refugees and Citizenship Canada
Canada Border Services Agency (previously Canada Customs and Revenue Agency)
Canadian Air Transport Security Authority
China
National Immigration Administration of Ministry of Public Security
People's Armed Police
General Administration of Customs
Immigration Department (Hong Kong)
Public Security Police Force of Macau
India
Border Security Force
The Assam Rifles
Indo-Tibetan Border Police
Indonesia
Directorate General of Immigration (Indonesia)
Ireland
Irish Naturalisation and Immigration Service
Garda National Immigration Bureau
Revenue Commissioners
Iran
The Immigration & Passport Police Office, a subdivision of Law Enforcement Force of Islamic Republic of Iran
Islamic Republic of Iran Border Guard Command ("NAJA Border Guard"), a subdivision of Law Enforcement Force of Islamic Republic of Iran
Malaysia
Immigration Department of Malaysia
North Korea
Border Security Command
Coastal Security Bureau
Pakistan
Pakistan Rangers
Frontier Corps
Gilgit-Baltistan Scouts
Pakistan Army
Pakistan Customs
Schengen Area
European Border and Coast Guard Agency (Frontex)
France
Direction centrale de la police aux frontières (a directorate of the French National Police)
Direction générale des douanes et droits indirects (DGDDI)
Finland
Finnish Border Guard
Finnish Customs
Germany
Federal Police
Bundeszollverwaltung
Italy
Polizia di Stato
Guardia di Finanza
Arma dei Carabinieri
Netherlands
Koninklijke Marechaussee (English: Royal Military Constabulary), a branch of the Dutch Armed Forces
Fiscal Information and Investigation Service
Spain
Cuerpo Nacional de Policia
Guardia Civil
Customs Surveillance Service
Switzerland
Federal Department of Justice and Police
Federal Office of Police
Federal Department of Finance
Swiss Border Guard
Sweden
Swedish Border Police
South Korea
Korean Immigration Service, Ministry of Justice
Korea Customs Service
Singapore
Immigration and Checkpoints Authority
Taiwan
National Immigration Agency
Customs Administration
United Kingdom
HM Revenue and Customs
UK Border Force
Immigration Enforcement
United States
Department of Homeland Security (DHS)
U.S. Customs and Border Protection (CBP), a division of the DHS
United States Border Patrol
Transportation Security Administration
U.S. Immigration and Customs Enforcement, or ICE
United States Citizenship and Immigration Services
== Controversies ==
Certain border control policies of various countries have been the subject of controversy and public debate.
Australia:
Offshore detention centres: Beginning in 2001, Australia implemented border control policies featuring the detention of asylum seekers and economic migrants who arrived unlawfully by boat on nearby Pacific islands. These policies are controversial, and in 2016 the Supreme Court of Papua New Guinea declared the detention centre on Manus Island unconstitutional. The adherence of these policies to international human rights law remains a matter of debate.
Travel restrictions on Australian citizens during the COVID-19 pandemic: During the COVID-19 pandemic, Australia adopted a policy of denying entry to its own citizens arriving from jurisdictions perceived to pose a high risk of COVID-19 transmission. Additionally, Australia adopted a broad policy of restricting entry to the country for all individuals located overseas, including Australian citizens, resulting in a large number of Australian citizens stranded abroad. Australia's policies with regard to its own citizens undermined the principle in international law that a state must permit entry to its own citizens, as enshrined in the International Covenant on Civil and Political Rights. At the same time, the Australian government prohibited the majority of Australian citizens from exiting the country, even if they ordinarily reside overseas.
Bhutan: Starting primarily in the 1990s, the Bhutanese government implemented strict restrictions on the country's ethnically Nepali Lhotshampa population and implemented internal border control policies to restrict immigration or return of ethnic Nepalis, creating a refugee crisis. This policy shift effectively ended previously liberal immigration policies with regards to Nepalis and counts among the most racialised border control policies in Asia.
China: China does not currently recognize North Korean defectors as refugees and subjects them to immediate deportation if caught. The China-DPRK border is fortified and both sides aim to deter refugees from crossing. This aspect of Chinese border control policy has been criticised by human rights organisations.
Cyprus: As a result of Northern Cyprus's sovereignty dispute with the Republic of Cyprus, the South (a member of the European Union) has imposed restrictions on the North's airports. Pressure from the European Union has resulted in all countries other than Turkey recognising the South's ability to impose a border shutdown on the North, negating the right to self-determination of the predominantly Turkish Northern Cypriot population and subjecting their airports to border controls imposed by the predominantly Greek South. As a result, Northern Cyprus is heavily dependent on Turkey for economic support and is unable to develop a functioning economy.
Israel: Border control at Israeli airports, both on entry and on exit, rates passengers' potential threat to security using factors including nationality, ethnicity, and race. Instances of discrimination against Arabs, people perceived to be Muslim, and Russian Jews, among others, have been reported in the media. Security at Tel Aviv's Ben Gurion Airport relies on a number of fundamentals, including a heavy focus on what Raphael Ron, former director of security at Ben Gurion, terms the "human factor", which he generalised as "the inescapable fact that terrorist attacks are carried out by people who can be found and stopped by an effective security methodology." As part of its focus on this so-called "human factor", Israeli security officers interrogate travellers, profiling those who appear to be Arab based on name or physical appearance. Although Israeli authorities argue that racial, ethnic, and religious profiling are effective security measures, according to Boaz Ganor, Israel has not undertaken any known empirical studies on the efficacy of racial profiling.
United States
Policies targeting Muslims: Since the implementation of added security measures in the aftermath of the September 11 attacks in 2001, reports of discrimination against people perceived to be Muslim by American border security officers have been prevalent in the media. The travel restrictions implemented during the Trump presidency, aimed primarily at Muslim-majority countries, have provoked controversy over whether such measures are a legitimate border security measure or unethically discriminatory.
Separation of families seeking asylum: In April 2018, as part of its "zero tolerance" policy, the American government ordered the separation of the children of refugees and asylum seekers from their parents. Following popular outrage and criticism from the medical and religious communities, the policy was put on hold by an executive order signed by Trump on June 20, 2018. Under the policy, federal authorities separated children from their parents, relatives, or other adults who accompanied them in crossing the border, whether apprehended during an illegal crossing or, in numerous reported cases, legally presenting themselves for asylum. The policy involved prosecuting all adults detained at the Mexican border, imprisoning parents, and handing minors to the Department of Health and Human Services (HHS). The federal government reported that the policy resulted in the separation of over 2,300 children from their parents. The Trump administration blamed Congress for the separations and labelled the policy "the Democrats' law", even though both chambers of Congress were controlled by Republicans at the time. Regardless, members of both parties criticised the policy, and detractors of the Trump administration emphasised that no written law required the government to implement such a policy. Attorney General Jeff Sessions, in defending the policy, quoted a passage from the Bible, although religious doctrine carries no weight in American law; other officials praised the policy as a deterrent to unlawful immigration. The costs of separating migrant children from their parents and keeping them in "tent cities" are higher than keeping them with their parents in detention centres: it costs $775 per person per night to house the children when they are separated, but $256 per person per night when they are held in permanent HHS facilities and $298 per person per night to keep the children with their parents in immigration detention centres. To handle the large number of immigration charges brought by the Trump administration, federal prosecutors had to divert resources from other criminal cases; the head of the Justice Department's major crimes unit in San Diego diverted staff from drug smuggling cases, and drug smuggling cases were increasingly pursued in state courts rather than federal courts as federal prosecutors were preoccupied with charges against illegal border crossings. The Kaiser Family Foundation said that costs associated with the policy may also divert resources from programmes within HHS. In July 2018, it was reported that HHS had diverted at least $40 million from its health programmes to care for and reunify migrant children, and that it was preparing to shift more than $200 million from other HHS accounts.
== Gallery ==
== See also ==
Asylum seeker
Border barrier
Airspace
Air sovereignty
Illegal entry
United States Border Patrol
Maritime boundary
Freedom of movement
== Notes ==
== References ==
== Further reading ==
Idrees Kahloon, "Border Control: The economics of immigration vs. the politics of immigration", The New Yorker, 12 June 2023, pp. 65–69. "The limits of immigration are not set by economics but by political psychology – by backlash unconcerned with net benefits." (p. 65.)
Susan Harbage Page & Inéz Valdez (17 April 2011). "Residues of Border Control", Southern Spaces
James, Paul (2014). "Faces of Globalization and the Borders of States: From Asylum Seekers to Citizens". Citizenship Studies. 18 (2): 208–23. doi:10.1080/13621025.2014.886440. S2CID 144816686.
Philippe Legrain (2007). Immigrants: Your Country Needs Them, Little Brown, ISBN 0-316-73248-6
Aristide Zolberg (2006). A Nation by Design: Immigration Policy in the Fashioning of America, Harvard University Press, ISBN 0-674-02218-1
Ruben Rumbaut & Walter Ewing (Spring 2007). "The Myth of Immigrant Criminality and the Paradox of Assimilation: Incarceration Rates among Native and Foreign-Born Men", The Immigration Policy Center.
Bryan Balin (2008). State Immigration Legislation and Immigrant Flows: An Analysis The Johns Hopkins University
Douglas S. Massey (September 2005). "Beyond the Border Buildup: Towards a New Approach to Mexico-U.S. Migration", Immigration Policy Center, the American Immigration Law Foundation
IPC Special Report (November 2005). "Economic Growth & Immigration: Bridging the Demographic Divide", Immigration Policy Center, the American Immigration Law Foundation
American Immigration Council (April 2014). "Immigrant Women in the United States: A Demographic Portrait"
Jill Esbenshade (Summer 2007). "Division and Dislocation: Regulating Immigration through Local Housing Ordinances". American Immigration Council
Jeffrey S. Passel & Roberto Suro (September 2005). "Rise, Peak and Decline: Trends in U.S. Immigration". Pew Hispanic Center
Jeffrey S. Passel (March 2005). "Estimates of the Size and Characteristics of the Undocumented Population". Pew Hispanic Center
Jeffrey S. Passel (March 2007). "Growing Share of Immigrants Choosing Naturalization". Pew Hispanic Center
This article incorporates public domain material from Report for Congress: Agriculture: A Glossary of Terms, Programs, and Laws, 2005 Edition (PDF). Congressional Research Service.
UNCTAD's Classification of Non-Tariff Measures (2012) report
== External links ==
ITC's Market Access Map, an online database of customs tariffs and market requirements.
Attribute-based access control (ABAC), also known as policy-based access control for IAM, defines an access control paradigm whereby a subject's authorization to perform a set of operations is determined by evaluating attributes associated with the subject, object, requested operations, and, in some cases, environment attributes.
ABAC is a method of implementing access control policies that is highly adaptable and can be customized using a wide range of attributes, making it suitable for use in distributed or rapidly changing environments. The only limitations on the policies that can be implemented with ABAC are the capabilities of the computational language and the availability of relevant attributes. ABAC policy rules are generated as Boolean functions of the subject's attributes, the object's attributes, and the environment attributes.
Unlike role-based access control (RBAC), which defines roles that carry a specific set of privileges and to which subjects are assigned, ABAC can express complex rule sets that evaluate many different attributes. By defining consistent subject and object attributes in security policies, ABAC eliminates the need for the explicit authorizations of individual subjects required in non-ABAC access methods, reducing the complexity of managing access lists and groups.
Attribute values can be set-valued or atomic-valued. Set-valued attributes contain more than one atomic value. Examples are role and project. Atomic-valued attributes contain only one atomic value. Examples are clearance and sensitivity. Attributes can be compared to static values or to one another, thus enabling relation-based access control.
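As a minimal illustration (a Python sketch with invented attribute names, not drawn from any standard), a rule can combine a set-valued attribute test with an atomic attribute-to-attribute comparison:

```python
# Hypothetical attribute stores; the names are illustrative, not from any standard.
subject = {"role": {"manager", "auditor"},   # set-valued attribute
           "clearance": 3}                   # atomic-valued attribute
resource = {"sensitivity": 2}                # atomic-valued attribute

def can_read(subject, resource):
    # Set-valued test: membership in the role attribute.
    is_auditor = "auditor" in subject["role"]
    # Attribute-to-attribute comparison enables relation-based access control.
    cleared = subject["clearance"] >= resource["sensitivity"]
    return is_auditor and cleared

print(can_read(subject, resource))  # True
```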
Although the concept itself has existed for many years, ABAC is considered a "next generation" authorization model because it provides dynamic, context-aware and risk-intelligent access control to resources. It allows access control policies that include specific attributes from many different information systems to be defined to resolve an authorization and achieve efficient regulatory compliance, giving enterprises flexibility in their implementations based on their existing infrastructures.
Attribute-based access control is sometimes referred to as policy-based access control (PBAC) or claims-based access control (CBAC), which is a Microsoft-specific term. The key standards that implement ABAC are XACML and ALFA (XACML).
== Dimensions of attribute-based access control ==
ABAC can be seen as:
Externalized authorization management
Dynamic authorization management
Policy-based access control
Fine-grained authorization
== Components ==
=== Architecture ===
ABAC comes with a recommended architecture which is as follows:
The PEP or Policy Enforcement Point: it is responsible for protecting the apps and data you want to apply ABAC to. The PEP inspects the request and generates an authorization request which it sends to the PDP.
The PDP or Policy Decision Point is the brain of the architecture. This is the piece which evaluates incoming requests against the policies it has been configured with. The PDP returns a Permit/Deny decision. The PDP may also use PIPs to retrieve missing metadata.
The PIP or Policy Information Point bridges the PDP to external sources of attributes e.g. LDAP or databases.
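The following toy Python sketch shows how these three components might interact; the class and attribute names are illustrative assumptions, not part of any standard API:

```python
class PIP:
    """Policy Information Point: bridges the PDP to external attribute sources."""
    def __init__(self, directory):
        self.directory = directory  # stands in for LDAP or a database
    def lookup(self, user, attribute):
        return self.directory.get(user, {}).get(attribute)

class PDP:
    """Policy Decision Point: evaluates requests against configured policies."""
    def __init__(self, policy, pip):
        self.policy, self.pip = policy, pip
    def decide(self, request):
        # Retrieve missing metadata through the PIP.
        request.setdefault("department",
                           self.pip.lookup(request["user"], "department"))
        return "Permit" if self.policy(request) else "Deny"

class PEP:
    """Policy Enforcement Point: protects the app and asks the PDP for a decision."""
    def __init__(self, pdp):
        self.pdp = pdp
    def access(self, request):
        if self.pdp.decide(request) != "Permit":
            raise PermissionError("access denied")
        return "resource contents"

pip = PIP({"alice": {"department": "finance"}})
pdp = PDP(lambda r: r["department"] == "finance", pip)
print(PEP(pdp).access({"user": "alice", "action": "read"}))  # resource contents
```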
=== Attributes ===
Attributes can be about anything and anyone. They tend to fall into 4 different categories:
Subject attributes: attributes that describe the user attempting the access e.g. age, clearance, department, role, job title
Action attributes: attributes that describe the action being attempted e.g. read, delete, view, approve
Object attributes: attributes that describe the object (or resource) being accessed e.g. the object type (medical record, bank account), the department, the classification or sensitivity, the location
Contextual (environment) attributes: attributes that deal with time, location or dynamic aspects of the access control scenario
=== Policies ===
Policies are statements that bring together attributes to express what is allowed and what is not allowed. Policies in ABAC can be granting or denying policies. Policies can also be local or global and can be written in a way that they override other policies. Examples include:
A user can view a document if the document is in the same department as the user
A user can edit a document if they are the owner and if the document is in draft mode
Deny access before 9 AM
With ABAC you can have an unlimited number of policies that cater to many different scenarios and technologies.
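As a hedged illustration, the three example policies above might be evaluated as granting and denying rules roughly as follows; the field names are invented, and a real deployment would express these in a policy language such as XACML or ALFA:

```python
from datetime import time

def policy_view(user, doc, env):
    # Granting policy: view if the document is in the same department as the user.
    return doc["department"] == user["department"]

def policy_edit(user, doc, env):
    # Granting policy: edit if the user owns the document and it is in draft mode.
    return doc["owner"] == user["id"] and doc["status"] == "draft"

def policy_hours(user, doc, env):
    # Denying policy: deny any access before 9 AM.
    return env["now"] < time(9, 0)

def authorize(action, user, doc, env):
    if policy_hours(user, doc, env):  # the denying policy overrides the grants
        return "Deny"
    grants = {"view": policy_view, "edit": policy_edit}
    return "Permit" if grants[action](user, doc, env) else "Deny"

env = {"now": time(10, 30)}
user = {"id": "alice", "department": "finance"}
doc = {"owner": "alice", "department": "finance", "status": "draft"}
print(authorize("edit", user, doc, env))  # Permit
```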
== Other models ==
Historically, access control models have included mandatory access control (MAC), discretionary access control (DAC), and more recently role-based access control (RBAC). These access control models are user-centric and do not take into account additional parameters such as resource information, the relationship between the user (the requesting entity) and the resource, and dynamic information, e.g. time of the day or user IP.
ABAC tries to address this by defining access control based on attributes which describe the requesting entity (the user), the targeted object or resource, the desired action (view, edit, delete), and environmental or contextual information. This is why access control is said to be attribute-based.
== Implementations ==
There are three main implementations of ABAC:
OASIS XACML
Abbreviated Language for Authorization (ALFA).
NIST's Next-generation Access Control (NGAC)
XACML, the eXtensible Access Control Markup Language, defines an architecture (shared with ALFA and NGAC), a policy language, and a request/response scheme. It does not handle attribute management (user attribute assignment, object attribute assignment, environment attribute assignment) which is left to traditional IAM tools, databases, and directories.
Companies, including every branch of the United States military, have started using ABAC. At a basic level, ABAC protects data with 'IF/THEN/AND' rules rather than assigning data to users. The US Department of Commerce has made this a mandatory practice and adoption is spreading throughout several governmental and military agencies.
== Applications ==
The concept of ABAC can be applied at any level of the technology stack and an enterprise infrastructure. For example, ABAC can be used at the firewall, server, application, database, and data layers. The use of attributes brings additional context to evaluate the legitimacy of any request for access and informs the decision to grant or deny access.
An important consideration when evaluating ABAC solutions is to understand its potential overhead on performance and its impact on the user experience. It is expected that the more granular the controls, the higher the overhead.
=== API and microservices security ===
ABAC can be used to apply attribute-based, fine-grained authorization to the API methods or functions. For instance, a banking API may expose an approveTransaction(transId) method. ABAC can be used to secure the call. With ABAC, a policy author can write the following:
Policy: managers can approve transactions up to their approval limit
Attributes used: role, action ID, object type, amount, approval limit.
The flow would be as follows:
The user, Alice, calls the API method approveTransaction(123)
The API receives the call and authenticates the user.
An interceptor in the API calls out to the authorization engine (typically called a Policy Decision Point or PDP) and asks: Can Alice approve transaction 123?
The PDP retrieves the ABAC policy and necessary attributes.
The PDP reaches a decision e.g. Permit or Deny and returns it to the API interceptor
If the decision is Permit, the underlying API business logic is called. Otherwise the API returns an error or access denied.
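A minimal Python sketch of this flow, with the transaction store, attribute names, and approval limit all invented for illustration:

```python
TRANSACTIONS = {123: {"type": "transaction", "amount": 8_000}}
USERS = {"alice": {"role": "manager", "approval_limit": 10_000}}

def pdp_decide(user, action, tx):
    # Policy: managers can approve transactions up to their approval limit.
    return (user["role"] == "manager"
            and action == "approve"
            and tx["type"] == "transaction"
            and tx["amount"] <= user["approval_limit"])

def approve_transaction(caller, trans_id):
    user = USERS[caller]                     # authentication of the caller (elided)
    tx = TRANSACTIONS[trans_id]
    if not pdp_decide(user, "approve", tx):  # the interceptor asks the PDP
        raise PermissionError("access denied")
    return f"transaction {trans_id} approved"  # business logic runs on Permit

print(approve_transaction("alice", 123))  # transaction 123 approved
```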
=== Application security ===
One of the key benefits to ABAC is that the authorization policies and attributes can be defined in a technology neutral way. This means policies defined for APIs or databases can be reused in the application space. Common applications that can benefit from ABAC are:
Content Management Systems
ERPs
Home-grown Applications
Web Applications
The same process and flow as the one described in the API section applies here too.
=== Database security ===
Security for databases has long been specific to the database vendors: Oracle VPD, IBM FGAC, and Microsoft RLS are all means to achieve fine-grained ABAC-like security.
An example would be:
Policy: managers can view transactions in their region
Reworked policy in a data-centric way: users with role = manager can do the action SELECT on table = TRANSACTIONS if user.region = transaction.region
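Enforced in a data-centric way, the reworked policy behaves like a row filter; a small sketch over an invented table:

```python
TRANSACTIONS = [
    {"id": 1, "region": "EMEA", "amount": 100},
    {"id": 2, "region": "APAC", "amount": 250},
]

def select_transactions(user):
    # users with role == manager may SELECT rows where user.region == transaction.region
    if user["role"] != "manager":
        return []
    return [row for row in TRANSACTIONS if row["region"] == user["region"]]

print(select_transactions({"role": "manager", "region": "EMEA"}))  # row 1 only
```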
=== Data security ===
Data security typically goes one step further than database security and applies control directly to the data element. This is often referred to as data-centric security. On traditional relational databases, ABAC policies can control access to data at the table, column, field, cell and sub-cell levels using logical controls with filtering conditions and masking based on attributes. Attributes can be data-, user-, session- or tool-based to deliver the greatest level of flexibility in dynamically granting or denying access to a specific data element. On big data and distributed file systems such as Hadoop, ABAC applied at the data layer controls access to folders, sub-folders, files, sub-files and other granular elements.
=== Big data security ===
Attribute-based access control can also be applied to Big Data systems like Hadoop. Policies similar to those used previously can be applied when retrieving data from data lakes.
=== File server security ===
As of Windows Server 2012, Microsoft has implemented an ABAC approach to controlling access to files and folders. This is achieved through dynamic access control (DAC) and the Security Descriptor Definition Language (SDDL). SDDL can be seen as an ABAC language, as it uses metadata of the user (claims) and of the file/folder to control access.
== See also ==
== References ==
== External links ==
ATTRIBUTE BASED ACCESS CONTROL (ABAC) - OVERVIEW
Unified Attribute Based Access Control Model (ABAC) covering DAC, MAC and RBAC
Attribute Based Access Control Models (ABAC) and Implementation in Cloud Infrastructure as a Service
In computer security, discretionary access control (DAC) is a type of access control defined by the Trusted Computer System Evaluation Criteria (TCSEC) as a means of restricting access to objects based on the identity of subjects and/or groups to which they belong. The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject (unless restrained by mandatory access control).
Discretionary access control is commonly discussed in contrast to mandatory access control (MAC). Occasionally, a system as a whole is said to have "discretionary" or "purely discretionary" access control when that system lacks mandatory access control. On the other hand, systems can implement both MAC and DAC simultaneously, where DAC refers to one category of access controls that subjects can transfer among each other, and MAC refers to a second category of access controls that imposes constraints upon the first.
== Implementation ==
The meaning of the term in practice is not as clear-cut as the definition given in the TCSEC standard, because the TCSEC definition of DAC does not impose any implementation. There are at least two implementations: with owner (as a widespread example) and with capabilities.
=== With owner ===
The term DAC is commonly used in contexts that assume that every object has an owner that controls the permissions to access the object, probably because many systems do implement DAC using the concept of an owner. But the TCSEC definition does not say anything about owners, so technically an access control system doesn't have to have a concept of ownership to meet the TCSEC definition of DAC.
Under this DAC implementation, users (owners) have the ability to make policy decisions and/or assign security attributes. A straightforward example is the Unix file mode, which represents read, write, and execute permissions in each of three bits for each of user, group, and others. (The mode is preceded by further bits that indicate additional characteristics.)
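For example, on a Unix-like system the owner can inspect and change these mode bits; a small sketch using Python's standard library:

```python
import os
import stat
import tempfile

# Create a scratch file to demonstrate on.
fd, path = tempfile.mkstemp()
os.close(fd)

# rw-r----- : owner may read/write, group may read, others get nothing.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)
print(stat.filemode(os.stat(path).st_mode))  # -rw-r----- on a Unix-like system

os.remove(path)
```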
=== With capabilities ===
As another example, capability systems are sometimes described as providing discretionary controls because they permit subjects to transfer their access to other subjects, even though capability-based security is fundamentally not about restricting access "based on the identity of subjects". In general, capability systems do not allow permissions to be passed "to any other subject"; the subject wanting to pass its permissions must first have access to the receiving subject, and subjects generally only have access to a strictly limited set of subjects consistent with the principle of least privilege.
== See also ==
== References ==
=== Citations ===
=== Sources ===
A mantrap, security mantrap portal, airlock, sally port or access control vestibule is a physical security access control system comprising a small space with two sets of interlocking doors, such that the first set of doors must close before the second set opens. Airlocks have a very similar design, allowing free ingress and egress while also restricting airflow.
In a manual mantrap, a guard locks and unlocks each door in sequence. An intercom and/or video camera are often used to allow the guard to control the trap from a remote location.
In an automatic mantrap, identification may be required for each door, sometimes even different measures for each door. For example, a key may open the first door, but a personal identification number entered on a number pad opens the second. Other methods of opening doors include proximity cards or biometric devices such as fingerprint readers or iris recognition scans. Time of flight sensors are used in high security environments. Newer stereovision detection systems are often employed.
Some security portal mantraps use dual authentication, employing two separate readers (security card plus biometrics, for example). This is very typical in the data center security entrance control environment.
Security mantrap portals typically offer options for all steel construction and BR (bullet/ballistics) or RC (burglar protection) construction including thick laminated curved glass.
Metal detectors are often built in to prevent the entrance of people carrying weapons. This use is particularly frequent in banks and jewelry shops.
Turnkey systems are sometimes provided by some suppliers due to the need for specially trained installers.
Fire codes require that automatic mantraps allow exit from the intermediate space while denying access to a secure space such as a data center or research laboratory. A manually-operated mantrap may allow a guard to lock both doors, trapping a suspect between the doors for questioning or detainment.
== See also ==
Mantrap (snare)
Sally port
Optical turnstile
== References ==
== External links ==
"Frequently asked questions". Kouba Systems. Archived from the original on July 21, 2012. Retrieved 12 July 2012.
"The AI Revolution and What it Means for Data Center Security Design" (McGovern, October 2024) | Wikipedia/Mantrap_(access_control) |
Graph-based access control (GBAC) is a declarative way to define access rights, task assignments, recipients and content in information systems. Access rights are granted to objects like files or documents, but also business objects such as an account. GBAC can also be used for the assignment of agents to tasks in workflow environments. Organizations are modeled as a specific kind of semantic graph comprising the organizational units, the roles and functions as well as the human and automatic agents (i.a. persons, machines). The main difference with other approaches such as role-based access control or attribute-based access control is that in GBAC access rights are defined using an organizational query language instead of total enumeration.
== History ==
The foundations of GBAC go back to a research project named CoCoSOrg (Configurable Cooperation System) at Bamberg University. In CoCoSOrg an organization is represented as a semantic graph, and a formal language is used to specify agents and their access rights in a workflow environment. Within the C-Org project at Hof University's Institute for Information Systems (iisys), the approach was extended by features such as separation of duty, access control in virtual organizations, and subject-oriented access control.
== Definition ==
Graph-based access control consists of two building blocks:
A semantic graph modeling an organization
A query language.
=== Organizational graph ===
The organizational graph is divided into a type and an instance level. On the instance level there are node types for organizational units, functional units and agents. The basic structure of an organization is defined using so-called "structural relations". These define the "is part of" relations between functional units and organizational units as well as the mapping of agents to functional units. Additionally there are specific relationship types like "deputyship" or "informed_by". These types can be extended by the modeler. All relationships can be context-sensitive through the usage of predicates.
On the type level, organizational structures are described in a more general manner. It consists of organizational unit types, functional unit types and the same relationship types as on the instance level. Type definitions can be used to create new instances or to reuse organizational knowledge in case of exceptions.
=== Query language ===
In GBAC a query language is used to define agents having certain characteristics or abilities. The following examples show the usage of the query language in the context of an access control matrix.
The first query means that all managers working for the company for more than six months can read the financial report, as well as the managers who are classified by the flag "ReadFinancialReport".
The daily financial report can only be written by the manager of the controlling department or clerks of the department that are enabled to do that (WriteFinancialReport==TRUE).
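The actual CoCoSOrg/C-Org query syntax is not reproduced here; the following Python sketch (with invented agents and attributes) only illustrates the idea of resolving the first query over an organizational graph to a set of agents:

```python
# Toy organizational data: agents mapped to functional units, with attributes.
AGENTS = {
    "bob":   {"function": "manager", "months_employed": 12, "ReadFinancialReport": False},
    "carol": {"function": "manager", "months_employed": 3,  "ReadFinancialReport": True},
    "dave":  {"function": "clerk",   "months_employed": 24, "ReadFinancialReport": False},
}

def resolve_read_financial_report():
    """Managers employed for more than six months, plus managers flagged ReadFinancialReport."""
    return {name for name, a in AGENTS.items()
            if a["function"] == "manager"
            and (a["months_employed"] > 6 or a["ReadFinancialReport"])}

# The resolved set of agents would be returned to the calling client system.
print(sorted(resolve_read_financial_report()))  # ['bob', 'carol']
```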
== Implementation ==
GBAC was first implemented in the CoCoS Environment within the organizational server CoCoSOrg.
In the C-Org-Project it was extended with more sophisticated features like separation of duty or access control in distributed environments.
There is also a cloud-based implementation on IBM's Bluemix platform.
In all implementations the server takes a query from a client system and resolves it to a set of agents. This set is sent back to the calling client as response. Clients can be file systems, database management systems, workflow management systems, physical security systems or even telephone servers.
== See also ==
== References ==
In computer systems security, role-based access control (RBAC) or role-based security is an approach to restricting system access to authorized users, and to implementing mandatory access control (MAC) or discretionary access control (DAC).
Role-based access control is a policy-neutral access control mechanism defined around roles and privileges. The components of RBAC such as role-permissions, user-role and role-role relationships make it simple to perform user assignments. A study by NIST has demonstrated that RBAC addresses many needs of commercial and government organizations. RBAC can be used to facilitate administration of security in large organizations with hundreds of users and thousands of permissions. Although RBAC is different from MAC and DAC access control frameworks, it can enforce these policies without any complication.
== Design ==
Within an organization, roles are created for various job functions. The permissions to perform certain operations are assigned to specific roles. Since users are not assigned permissions directly, but only acquire them through their role (or roles), management of individual user rights becomes a matter of simply assigning appropriate roles to the user's account; this simplifies common operations, such as adding a user, or changing a user's department.
Three primary rules are defined for RBAC:
Role assignment: A subject can exercise a permission only if the subject has selected or been assigned a role.
Role authorization: A subject's active role must be authorized for the subject. With rule 1 above, this rule ensures that users can take on only roles for which they are authorized.
Permission authorization: A subject can exercise a permission only if the permission is authorized for the subject's active role. With rules 1 and 2, this rule ensures that users can exercise only permissions for which they are authorized.
Additional constraints may be applied as well, and roles can be combined in a hierarchy where higher-level roles subsume permissions owned by sub-roles.
With the concepts of role hierarchy and constraints, one can control RBAC to create or simulate lattice-based access control (LBAC). Thus RBAC can be considered to be a superset of LBAC.
When defining an RBAC model, the following conventions are useful:
S = Subject = A person or automated agent
R = Role = Job function or title which defines an authority level
P = Permissions = An approval of a mode of access to a resource
SE = Session = A mapping involving S, R and/or P
SA = Subject Assignment
PA = Permission Assignment
RH = Partially ordered Role Hierarchy. RH can also be written: ≥ (The notation: x ≥ y means that x inherits the permissions of y.)
A subject can have multiple roles.
A role can have multiple subjects.
A role can have many permissions.
A permission can be assigned to many roles.
An operation can be assigned to many permissions.
A permission can be assigned to many operations.
A constraint places a restrictive rule on the potential inheritance of permissions from opposing roles. Thus it can be used to achieve appropriate separation of duties. For example, the same person should not be allowed to both create a login account and to authorize the account creation.
Thus, using set theory notation:
PA ⊆ P × R is a many-to-many permission-to-role assignment relation.
SA ⊆ S × R is a many-to-many subject-to-role assignment relation.
RH ⊆ R × R is the partially ordered role hierarchy.
A subject may have multiple simultaneous sessions with/in different roles.
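A minimal Python sketch of this model, using the SA, PA, and RH conventions above with illustrative data:

```python
# Subject-assignment, permission-assignment, and role-hierarchy relations.
SA = {("alice", "engineer"), ("bob", "manager")}
PA = {("read_code", "engineer"), ("approve_leave", "manager")}
RH = {("manager", "engineer")}  # manager >= engineer: inherits its permissions

def roles_of(subject):
    expanded = {r for s, r in SA if s == subject}
    # Expand through the hierarchy: x >= y means x inherits the permissions of y.
    changed = True
    while changed:
        changed = False
        for senior, junior in RH:
            if senior in expanded and junior not in expanded:
                expanded.add(junior)
                changed = True
    return expanded

def can(subject, permission):
    return any((permission, role) in PA for role in roles_of(subject))

print(can("bob", "read_code"))       # True: inherited via manager >= engineer
print(can("alice", "approve_leave")) # False
```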
=== Standardized levels ===
The NIST/ANSI/INCITS RBAC standard (2004) recognizes three levels of RBAC:
core RBAC
hierarchical RBAC, which adds support for inheritance between roles
constrained RBAC, which adds separation of duties
== Relation to other models ==
RBAC is a flexible access control technology whose flexibility allows it to implement DAC or MAC. DAC with groups (e.g., as implemented in POSIX file systems) can emulate RBAC. MAC can simulate RBAC if the role graph is restricted to a tree rather than a partially ordered set.
Prior to the development of RBAC, the Bell-LaPadula (BLP) model was synonymous with MAC and file system permissions were synonymous with DAC. These were considered to be the only known models for access control: if a model was not BLP, it was considered to be a DAC model, and vice versa. Research in the late 1990s demonstrated that RBAC falls in neither category. Unlike context-based access control (CBAC), RBAC does not look at the message context (such as a connection's source). RBAC has also been criticized for leading to role explosion, a problem in large enterprise systems which require access control of finer granularity than what RBAC can provide as roles are inherently assigned to operations and data types. In resemblance to CBAC, an Entity-Relationship Based Access Control (ERBAC, although the same acronym is also used for modified RBAC systems, such as Extended Role-Based Access Control) system is able to secure instances of data by considering their association to the executing subject.
=== Comparing to ACL ===
Access control lists (ACLs) are used in traditional discretionary access-control (DAC) systems to affect low-level data-objects. RBAC differs from ACL in assigning permissions to operations which change the direct-relations between several entities (see: ACLg below). For example, an ACL could be used for granting or denying write access to a particular system file, but it wouldn't dictate how that file could be changed. In an RBAC-based system, an operation might be to 'create a credit account' transaction in a financial application or to 'populate a blood sugar level test' record in a medical application. A Role is thus a sequence of operations within a larger activity. RBAC has been shown to be particularly well suited to separation of duties (SoD) requirements, which ensure that two or more people must be involved in authorizing critical operations. Necessary and sufficient conditions for safety of SoD in RBAC have been analyzed. An underlying principle of SoD is that no individual should be able to effect a breach of security through dual privilege. By extension, no person may hold a role that exercises audit, control or review authority over another, concurrently held role.
Then again, a "minimal RBAC Model", RBACm, can be compared with an ACL mechanism, ACLg, where only groups are permitted as entries in the ACL. Barkley (1997) showed that RBACm and ACLg are equivalent.
In modern SQL implementations, like ACL of the CakePHP framework, ACLs also manage groups and inheritance in a hierarchy of groups. Under this aspect, specific "modern ACL" implementations can be compared with specific "modern RBAC" implementations, better than "old (file system) implementations".
For data interchange, and for "high level comparisons", ACL data can be translated to XACML.
=== Attribute-based access control ===
Attribute-based access control or ABAC is a model which evolves from RBAC to consider additional attributes in addition to roles and groups. In ABAC, it is possible to use attributes of:
the user e.g. citizenship, clearance,
the resource e.g. classification, department, owner,
the action, and
the context e.g. time, location, IP.
ABAC is policy-based in the sense that it uses policies rather than static permissions to define what is allowed or what is not allowed.
=== Relationship-based access control ===
Relationship-based access control or ReBAC is a model which evolves from RBAC. In ReBAC, a subject's permission to access a resource is defined by the presence of relationships between those subjects and resources.
The advantage of this model is that allows for fine-grained permissions; for example, in a social network where users can share posts with other specific users.
== Use and availability ==
The use of RBAC to manage user privileges (computer permissions) within a single system or application is widely accepted as a best practice. A 2010 report prepared for NIST by the Research Triangle Institute analyzed the economic value of RBAC for enterprises, and estimated benefits per employee from reduced employee downtime, more efficient provisioning, and more efficient access control policy administration.
In an organization with a heterogeneous IT infrastructure and requirements that span dozens or hundreds of systems and applications, using RBAC to manage sufficient roles and assign adequate role memberships becomes extremely complex without hierarchical creation of roles and privilege assignments. Newer systems extend the older NIST RBAC model to address the limitations of RBAC for enterprise-wide deployments. The NIST model was adopted as a standard by INCITS as ANSI/INCITS 359-2004. A discussion of some of the design choices for the NIST model has also been published.
== Potential vulnerabilities ==
Role based access control interference is a relatively new issue in security applications, where multiple user accounts with dynamic access levels may lead to encryption key instability, allowing an outside user to exploit the weakness for unauthorized access. Key sharing applications within dynamic virtualized environments have shown some success in addressing this problem.
== See also ==
== References ==
== Further reading ==
David F. Ferraiolo; D. Richard Kuhn; Ramaswamy Chandramouli (2007). Role-based Access Control (2nd ed.). Artech House. ISBN 978-1-59693-113-8.
== External links ==
FAQ on RBAC models and standards
Role Based Access Controls at NIST
XACML core and hierarchical role based access control profile
Institute for Cyber Security at the University of Texas San Antonio
Practical experiences in implementing RBAC
In computer systems security, Relationship-based access control (ReBAC) defines an authorization paradigm where a subject's permission to access a resource is defined by the presence of relationships between those subjects and resources.
In general, authorization in ReBAC is performed by traversing the directed graph of relationships. The nodes and edges of this graph are very similar to triples in the Resource Description Framework (RDF) data format. ReBAC systems allow hierarchies of relationships, and some allow more complex definitions that include algebraic operators on relationships such as union, intersection, and difference.
ReBAC gained popularity with the rise of social-network web applications, where users need to control access to their personal information based on their relationship with the data receiver rather than the receiver's role. ReBAC also enables permissions to be defined collectively for teams and groups, eliminating the need to set permissions individually for every resource.
In contrast to role-based access control (RBAC), which defines roles that carry a specific set of privileges and to which subjects are assigned, ReBAC (like ABAC) allows defining more fine-grained permissions. For example, a ReBAC system may define resources of type document that grant the edit action to subjects holding the editor relation; if the system contains the relationship ('alice', 'editor', 'document:budget'), then the subject Alice can edit the specific resource document:budget. The downside of ReBAC is that, while it allows more fine-grained access, the application may need to perform more authorization checks.
ReBAC systems are deny-by-default, and allow building RBAC systems on top of them.
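A small Python sketch of a deny-by-default ReBAC check as a traversal over relationship triples; the tuple format follows the example above, while the membership relation and helper names are assumptions for illustration:

```python
# Relationship triples: (subject, relation, object).
TUPLES = {
    ("alice", "editor", "document:budget"),
    ("team:finance", "viewer", "document:budget"),
    ("bob", "member", "team:finance"),
}

def check(subject, relation, obj, depth=3):
    """Deny-by-default check that also follows group-membership edges."""
    if (subject, relation, obj) in TUPLES:
        return True
    if depth == 0:
        return False
    # If the subject is a member of a group, the group's relations apply to it.
    groups = {g for s, rel, g in TUPLES if s == subject and rel == "member"}
    return any(check(g, relation, obj, depth - 1) for g in groups)

print(check("alice", "editor", "document:budget"))  # True: direct tuple
print(check("bob", "viewer", "document:budget"))    # True: via team membership
print(check("bob", "editor", "document:budget"))    # False: deny by default
```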
== History ==
The term ReBAC was coined by Carrie E. Gates in 2006.
In 2019 Google published a paper presenting "Zanzibar: Google’s Consistent, Global Authorization System". The paper defines a system composed of a namespace configuration and relationship data expressed as triples.
Since the release of that paper, several companies have built commercial and open source offerings of ReBAC systems.
== See also ==
Role-based access control
Attribute-based access control
== References ==
An IP access controller is an electronic security device designed to identify users and control entry to or exit from protected areas using Internet Protocol-based technology. A typical IP access controller supports 2 or 4 basic access control readers. IP access controllers may have an internal web server that is configurable using a browser or using software installed on a host PC.
The main features that distinguish IP controllers from older generations of serial controllers are:
IP controllers connect directly to LAN or WAN
IP controllers have all the inputs and outputs necessary for controlling readers, monitoring door inputs, and controlling locks
IP controllers have an on-board network interface and do not require the use of a terminal server
== Advantages and disadvantages of IP controllers ==
Advantages:
An existing network infrastructure is fully utilized; there is no need to install new communication lines.
There are no limitations regarding the number of IP controllers in a system (the limit of 32 controllers per line is typical for systems using RS-485 communication interface).
Special knowledge of installation, termination, grounding and troubleshooting of RS-485 communication lines is not required.
Communication with IP controllers may be done at the full network speed, which is important if transferring a lot of data (databases with thousands of users, possibly including biometric records).
In case of an alarm, IP controllers may initiate a connection to the host PC. This ability is important in large systems because it reduces the network traffic caused by frequent polling.
Simplifies installation of systems consisting of multiple locations separated by large distances. A basic Internet link is sufficient to establish connections to remote locations.
Wide selection of standard network equipment is available to provide connectivity in different situations (fiber, wireless, VPN, dual path, PoE)
No special hardware is required for building fail-over systems: in case the primary host PC fails, the secondary host PC may start polling IP controllers.
Disadvantages:
The system becomes susceptible to network related problems, such as delays in case of heavy traffic and network equipment failures.
IP controllers and workstations may become accessible to hackers if the network of an organization is not well protected. This threat may be eliminated by physically separating the access control network from the network of the organization. Also most IP controllers utilize either Linux platform or proprietary operating systems, which makes them more difficult to hack. Industry standard data encryption is also used.
Maximum distance from a hub or a switch to the controller is 100 meters (330 feet).
Operation of the system is dependent on the host PC. In case the host PC fails, events from IP controllers are not retrieved and functions that require interaction between readers (i.e. anti-passback) stop working. Some controllers, however, have peer-to-peer communication option in order to reduce dependency on the host PC.
Unlike IP readers, most IP controllers do not support PoE. This, however, may change if the PoE technology is improved to deliver more power, or if low-power controllers and locks are introduced. Based on the current PoE standards, the power that can be carried by a single network cable is enough for one IP reader and an electric strike or a magnetic lock, but connecting an IP controller and two or more electric locks would require more power than is available via PoE.
== Standards ==
HID Global
== See also ==
Access control
== External links ==
HID OPIN Summary
== References ==
Admission control is a validation process in communication systems where a check is performed before a connection is established to see if current resources are sufficient for the proposed connection.
== Applications ==
For some applications, dedicated resources (such as a wavelength across an optical network) may be needed in which case admission control has to verify availability of such resources before a request can be admitted.
For more elastic applications, a total volume of resources may be needed prior to some deadline in order to satisfy a new request, in which case admission control needs to verify availability of resources at the time and perform scheduling to guarantee satisfaction of an admitted request.
== Admission control systems ==
Asynchronous Transfer Mode
Audio Video Bridging using Stream Reservation Protocol
Call admission control
IEEE 1394
Integrated services on IP networks
Public switched telephone network
== References ==
== External links ==
Papers about Admission Control in DiffServ systems on Google Scholar
Deadline-aware Admission Control for Large Inter-Datacenter Transfers
In graph theory, reachability refers to the ability to get from one vertex to another within a graph. A vertex s can reach a vertex t (and t is reachable from s) if there exists a sequence of adjacent vertices (i.e. a walk) which starts with s and ends with t.
In an undirected graph, reachability between all pairs of vertices can be determined by identifying the connected components of the graph. Any pair of vertices in such a graph can reach each other if and only if they belong to the same connected component; therefore, in such a graph, reachability is symmetric (s reaches t iff t reaches s). The connected components of an undirected graph can be identified in linear time. The remainder of this article focuses on the more difficult problem of determining pairwise reachability in a directed graph (which, incidentally, need not be symmetric).
== Definition ==
For a directed graph G = (V, E), with vertex set V and edge set E, the reachability relation of G is the transitive closure of E, which is to say the set of all ordered pairs (s, t) of vertices in V for which there exists a sequence of vertices v_0 = s, v_1, v_2, ..., v_k = t such that the edge (v_{i−1}, v_i) is in E for all 1 ≤ i ≤ k.
If G is acyclic, then its reachability relation is a partial order; any partial order may be defined in this way, for instance as the reachability relation of its transitive reduction. A noteworthy consequence of this is that since partial orders are anti-symmetric, if s can reach t, then we know that t cannot reach s. Intuitively, if we could travel from s to t and back to s, then G would contain a cycle, contradicting that it is acyclic.
If G is directed but not acyclic (i.e. it contains at least one cycle), then its reachability relation will correspond to a preorder instead of a partial order.
== Algorithms ==
Algorithms for determining reachability fall into two classes: those that require preprocessing and those that do not.
If you have only one (or a few) queries to make, it may be more efficient to forgo the use of more complex data structures and compute the reachability of the desired pair directly. This can be accomplished in linear time using algorithms such as breadth-first search or iterative deepening depth-first search.
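For instance, a single query can be answered with a breadth-first search; a short Python sketch:

```python
from collections import deque

def reachable(graph, s, t):
    """Breadth-first search: can s reach t in a directed graph (adjacency dict)?"""
    seen, queue = {s}, deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            return True
        for w in graph.get(v, ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

g = {"a": ["b"], "b": ["c"], "c": []}
print(reachable(g, "a", "c"))  # True
print(reachable(g, "c", "a"))  # False: reachability in a digraph is not symmetric
```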
If you will be making many queries, then a more sophisticated method may be used; the exact choice of method depends on the nature of the graph being analysed. In exchange for preprocessing time and some extra storage space, we can create a data structure which can then answer reachability queries on any pair of vertices in as little as O(1) time. Three different algorithms and data structures for three different, increasingly specialized situations are outlined below.
=== Floyd–Warshall Algorithm ===
The Floyd–Warshall algorithm can be used to compute the transitive closure of any directed graph, which gives rise to the reachability relation as in the definition, above.
The algorithm requires O(|V|³) time and O(|V|²) space in the worst case. This algorithm is not solely interested in reachability, as it also computes the shortest-path distance between all pairs of vertices. For graphs containing negative cycles, shortest paths may be undefined, but reachability between pairs can still be noted.
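When only reachability (and not distances) is needed, the Floyd–Warshall recurrence reduces to a Boolean transitive closure; a Python sketch:

```python
def transitive_closure(n, edges):
    """Boolean Floyd-Warshall: reach[s][t] is True iff s can reach t."""
    # Each vertex trivially reaches itself here (the empty walk).
    reach = [[s == t for t in range(n)] for s in range(n)]
    for s, t in edges:
        reach[s][t] = True
    for k in range(n):            # allow k as an intermediate vertex
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

r = transitive_closure(4, [(0, 1), (1, 2), (2, 3)])
print(r[0][3], r[3][0])  # True False
```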
=== Thorup's Algorithm ===
For planar digraphs, a much faster method is available, as described by Mikkel Thorup in 2004. This method can answer reachability queries on a planar graph in O(1) time after spending O(n log n) preprocessing time to create a data structure of O(n log n) size. This algorithm can also supply approximate shortest-path distances, as well as route information.
The overall approach is to associate with each vertex a relatively small set of so-called separator paths such that any path from a vertex v to any other vertex w must go through at least one of the separators associated with v or w. An outline of the reachability-related sections follows.
Given a graph G, the algorithm begins by organizing the vertices into layers starting from an arbitrary vertex v_0. The layers are built in alternating steps by first considering all vertices reachable from the previous step (starting with just v_0) and then all vertices which reach to the previous step, until all vertices have been assigned to a layer. By construction of the layers, every vertex appears in at most two layers, and every directed path, or dipath, in G is contained within two adjacent layers L_i and L_{i+1}. Let k be the last layer created, that is, the lowest value for k such that L_0 ∪ L_1 ∪ ⋯ ∪ L_k = V.
The graph is then re-expressed as a series of digraphs G_0, G_1, ..., G_{k−1} where each G_i = r_i ∪ L_i ∪ L_{i+1} and where r_i is the contraction of all previous levels L_0 ... L_{i−1} into a single vertex. Because every dipath appears in at most two consecutive layers, and because each G_i is formed by two consecutive layers, every dipath in G appears in its entirety in at least one G_i (and in no more than two consecutive such graphs).
For each G_i, three separators are identified which, when removed, break the graph into three components which each contain at most 1/2 the vertices of the original. As G_i is built from two layers of opposed dipaths, each separator may consist of up to 2 dipaths, for a total of up to 6 dipaths over all of the separators. Let S be this set of dipaths. The proof that such separators can always be found is related to the Planar Separator Theorem of Lipton and Tarjan, and these separators can be located in linear time.
For each Q ∈ S, the directed nature of Q provides for a natural indexing of its vertices from the start to the end of the path. For each vertex v in G_i, we locate the first vertex in Q reachable by v, and the last vertex in Q that reaches to v. That is, we are looking at how early into Q we can get from v, and how far we can stay in Q and still get back to v. This information is stored with each v. Then for any pair of vertices u and w, u can reach w via Q if u connects to Q earlier than w connects from Q.
Every vertex is labelled as above for each step of the recursion which builds G_0, ..., G_k. As this recursion has logarithmic depth, a total of O(log n) extra information is stored per vertex. From this point, a logarithmic-time query for reachability is as simple as looking over each pair of labels for a common, suitable Q. The original paper then works to tune the query time down to O(1).
In summarizing the analysis of this method, first consider that the layering approach partitions the vertices so that each vertex is considered only O(1) times. The separator phase of the algorithm breaks the graph into components which are at most 1/2 the size of the original graph, resulting in a logarithmic recursion depth. At each level of the recursion, only linear work is needed to identify the separators as well as the connections possible between vertices. The overall result is O(n log n) preprocessing time with only O(log n) additional information stored for each vertex.
=== Kameda's algorithm ===
An even faster method for preprocessing, due to T. Kameda in 1975, can be used if the graph is planar, acyclic, and exhibits the following additional properties: all 0-indegree and all 0-outdegree vertices appear on the same face (often assumed to be the outer face), and it is possible to partition the boundary of that face into two parts such that all 0-indegree vertices appear on one part and all 0-outdegree vertices appear on the other (i.e., the two types of vertices do not alternate).
If $G$ exhibits these properties, then we can preprocess the graph in only $O(n)$ time and store only $O(\log n)$ extra bits per vertex, answering reachability queries for any pair of vertices in $O(1)$ time with a simple comparison.
Preprocessing performs the following steps. We add a new vertex $s$ which has an edge to each 0-indegree vertex, and another new vertex $t$ with edges from each 0-outdegree vertex. Note that the properties of $G$ allow us to do so while maintaining planarity; that is, there will still be no edge crossings after these additions. For each vertex we store the list of adjacencies (out-edges) in order of the planarity of the graph (for example, clockwise with respect to the graph's embedding). We then initialize a counter $i=n+1$ and begin a depth-first traversal from $s$. During this traversal, the adjacency list of each vertex is visited from left to right as needed. As vertices are popped from the traversal's stack, they are labelled with the value $i$, and $i$ is then decremented. Note that $t$ is always labelled with the value $n+1$ and $s$ is always labelled with $0$. The depth-first traversal is then repeated, but this time the adjacency list of each vertex is visited from right to left.
When completed, $s$ and $t$, and their incident edges, are removed. Each remaining vertex stores a 2-dimensional label with values from $1$ to $n$.
Given two vertices $u$ and $v$, and their labels $L(u)=(a_{1},a_{2})$ and $L(v)=(b_{1},b_{2})$, we say that $L(u)<L(v)$ if and only if $a_{1}\leq b_{1}$, $a_{2}\leq b_{2}$, and at least one of the components $a_{1}$ or $a_{2}$ is strictly less than $b_{1}$ or $b_{2}$, respectively.
The main result of this method then states that $v$ is reachable from $u$ if and only if $L(u)<L(v)$, which is easily calculated in $O(1)$ time.
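In code, a query then reduces to a componentwise comparison of the two labels; a minimal sketch, assuming each label is simply a pair of integers as described above:

def reachable(label_u, label_v):
    # v is reachable from u iff L(u) < L(v): both components are <=
    # and at least one is strictly smaller.
    (a1, a2), (b1, b2) = label_u, label_v
    return a1 <= b1 and a2 <= b2 and (a1 < b1 or a2 < b2)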
== Related problems ==
A related problem is to solve reachability queries with some number $k$ of vertex failures. For example: "Can vertex $u$ still reach vertex $v$ even though vertices $s_{1},s_{2},\ldots ,s_{k}$ have failed and can no longer be used?" A similar problem may consider edge failures rather than vertex failures, or a mix of the two. The breadth-first search technique works just as well on such queries, but constructing an efficient oracle is more challenging.
Another problem related to reachability queries is quickly recalculating reachability relationships when some portion of the graph is changed. For example, this is a relevant concern for garbage collection, which needs to balance the reclamation of memory (so that it may be reallocated) against the performance concerns of the running application.
== See also ==
Gammoid
st-connectivity
== References ==
The chase is a simple fixed-point algorithm testing and enforcing implication of data dependencies in database systems. It plays important roles in database theory as well as in practice.
It is used, directly or indirectly, on an everyday basis by people who design databases, and it is used in commercial systems to reason about the consistency and correctness of a data design. New applications of the chase in meta-data management and data exchange are still being discovered.
The chase has its origins in two seminal papers of 1979, one by Alfred V. Aho, Catriel Beeri, and Jeffrey D. Ullman and the other by David Maier, Alberto O. Mendelzon, and Yehoshua Sagiv.
In its simplest application the chase is used for testing whether the projection of a relation schema constrained by some functional dependencies onto a given decomposition can be recovered by rejoining the projections. Let t be a tuple in $\pi _{S_{1}}(R)\bowtie \pi _{S_{2}}(R)\bowtie \ldots \bowtie \pi _{S_{k}}(R)$, where R is a relation and F is a set of functional dependencies (FDs). If tuples in R are represented as t1, ..., tk, the join of the projections of each ti should agree with t on $\pi _{S_{i}}(R)$ for i = 1, 2, ..., k. If ti is not in $\pi _{S_{i}}(R)$, the value is unknown.
The chase can be done by drawing a tableau (the same formalism used in tableau queries). Suppose R has attributes A, B, ... and the components of t are a, b, .... For ti, use the same letter as t in the components that are in Si, but subscript the letter with i if the component is not in Si. Then, ti will agree with t on the components in Si and will have a unique value in each of the others.
The chase process is confluent. Implementations of the chase algorithm exist, some of them open-source.
== Example ==
Let R(A, B, C, D) be a relation schema known to obey the set of functional dependencies F = {A→B, B→C, CD→A}. Suppose R is decomposed into three relation schemas S1 = {A, D}, S2 = {A, C} and S3 = {B, C, D}. Determining whether this decomposition is lossless can be done by performing a chase as shown below.
The initial tableau for this decomposition is:

      A    B    C    D
S1:   a    b1   c1   d
S2:   a    b2   c    d2
S3:   a3   b    c    d
The first row represents S1. The components for attributes A and D are unsubscripted and those for attributes B and C are subscripted with i = 1. The second and third rows are filled in the same manner with S2 and S3 respectively.
The goal for this test is to use the given F to prove that t = (a, b, c, d) is really in R. To do so, the tableau can be chased by applying the FDs in F to equate symbols in the tableau. A final tableau with a row that is the same as t implies that any tuple t in the join of the projections is actually a tuple of R.
To perform the chase test, first decompose all FDs in F so each FD has a single attribute on the right hand side of the "arrow". (In this example, F remains unchanged because all of its FDs already have a single attribute on the right hand side: F = {A→B, B→C, CD→A}.)
When equating two symbols, if one of them is unsubscripted, make the other be the same so that the final tableau can have a row that is exactly the same as t = (a, b, c, d). If both have their own subscript, change either to be the other. However, to avoid confusion, all of the occurrences should be changed.
First, apply A→B to the tableau.
The first row is (a, b1, c1, d) where a is unsubscripted and b1 is subscripted with 1. Comparing the first row with the second one, change b2 to b1. Since the third row has a3, b in the third row stays the same. The resulting tableau is:

      A    B    C    D
S1:   a    b1   c1   d
S2:   a    b1   c    d2
S3:   a3   b    c    d
Then consider B→C. Both the first and second rows have b1, and notice that the second row has an unsubscripted c. Therefore, the first row changes to (a, b1, c, d). The resulting tableau is:

      A    B    C    D
S1:   a    b1   c    d
S2:   a    b1   c    d2
S3:   a3   b    c    d
Now consider CD→A. The first row has an unsubscripted c and an unsubscripted d, the same as in the third row. This means that the A value for rows one and three must be the same as well. Hence, change a3 in the third row to a. The resulting tableau is:

      A    B    C    D
S1:   a    b1   c    d
S2:   a    b1   c    d2
S3:   a    b    c    d
At this point, notice that the third row is (a, b, c, d), which is the same as t. Therefore, this is the final tableau for the chase test with the given R and F. Hence, whenever R is projected onto S1, S2 and S3 and rejoined, the result is in R. In particular, the resulting tuple is the same as the tuple of R that is projected onto {B, C, D}.
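The whole test mechanizes easily. Below is a minimal Python sketch of the lossless-join chase (an illustration, not code from the original papers; the tableau representation and names are assumptions). It builds the initial tableau, repeatedly equates symbols forced by the FDs, and succeeds when some row becomes fully unsubscripted:

from itertools import product

def chase_lossless(attrs, decomposition, fds):
    # Initial tableau: an unsubscripted symbol if the attribute is in S_i,
    # otherwise a symbol subscripted with the row number.
    tableau = [
        {a: a.lower() if a in s else a.lower() + str(i + 1) for a in attrs}
        for i, s in enumerate(decomposition)
    ]
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            for r1, r2 in product(tableau, repeat=2):
                if r1 is not r2 and all(r1[a] == r2[a] for a in lhs) \
                        and r1[rhs] != r2[rhs]:
                    # Equate the two symbols, preferring an unsubscripted one.
                    old, new = sorted([r1[rhs], r2[rhs]], key=len, reverse=True)
                    for row in tableau:
                        for a in attrs:
                            if row[a] == old:
                                row[a] = new
                    changed = True
    goal = {a: a.lower() for a in attrs}
    return any(row == goal for row in tableau)

# The example above: R(A, B, C, D), F = {A->B, B->C, CD->A}.
print(chase_lossless(
    ["A", "B", "C", "D"],
    [{"A", "D"}, {"A", "C"}, {"B", "C", "D"}],
    [({"A"}, "B"), ({"B"}, "C"), ({"C", "D"}, "A")],
))  # prints True: the decomposition is lossless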
== References ==
Serge Abiteboul, Richard B. Hull, Victor Vianu: Foundations of Databases. Addison-Wesley, 1995.
A. V. Aho, C. Beeri, and J. D. Ullman: The Theory of Joins in Relational Databases. ACM Transactions on Database Systems 4(3): 297-314, 1979.
J. D. Ullman: Principles of Database and Knowledge-Base Systems, Volume I. Computer Science Press, New York, 1988.
J. D. Ullman, J. Widom: A First Course in Database Systems (3rd ed.). pp. 96–99. Pearson Prentice Hall, 2008.
Michael Benedikt, George Konstantinidis, Giansalvatore Mecca, Boris Motik, Paolo Papotti, Donatello Santoro, Efthymia Tsamoura: Benchmarking the Chase. In Proc. of PODS, 2017.
== Further reading ==
Sergio Greco; Francesca Spezzano; Cristian Molinaro (2012). Incomplete Data and Data Dependencies in Relational Databases. Morgan & Claypool Publishers. ISBN 978-1-60845-926-1.
The ACM Symposium on Principles of Database Systems (PODS) is an international research conference on database theory, and has been held yearly since 1982. It is sponsored by three Association for Computing Machinery SIGs, SIGAI, SIGACT, and SIGMOD. Since 1991, PODS has been held jointly with the ACM SIGMOD Conference, a research conference on systems aspects of data management.
PODS is regarded as one of the top conferences in the area of database theory and data algorithms. It holds the highest rating of A* in the CORE2021 ranking [1]. The conference typically accepts between 20 and 40 papers each year, with acceptance rates fluctuating between 25% and 35%.
== External links ==
Official website
dblp: Symposium on Principles of Database Systems (PODS)
Semi-structured data is a form of structured data that does not obey the tabular structure of data models associated with relational databases or other forms of data tables, but nonetheless contains tags or other markers to separate semantic elements and enforce hierarchies of records and fields within the data. Therefore, it is also known as a self-describing structure.
In semi-structured data, the entities belonging to the same class may have different attributes even though they are grouped together, and the attributes' order is not important.
Semi-structured data has become increasingly common since the advent of the Internet, where full-text documents and databases are no longer the only forms of data and different applications need a medium for exchanging information. In object-oriented databases, one often finds semi-structured data.
== Types ==
=== XML ===
XML, other markup languages, email, and EDI are all forms of semi-structured data. OEM (Object Exchange Model) was created prior to XML as a means of self-describing a data structure. XML has been popularized by web services that are developed utilizing SOAP principles.
Some types of data described here as "semi-structured", especially XML, suffer from the impression that they are incapable of structural rigor at the same functional level as Relational Tables and Rows. Indeed, the view of XML as inherently semi-structured (previously, it was referred to as "unstructured") has handicapped its use for a widening range of data-centric applications. Even documents, normally thought of as the epitome of semi-structure, can be designed with virtually the same rigor as database schema, enforced by the XML schema and processed by both commercial and custom software programs without reducing their usability by human readers.
In view of this fact, XML might be referred to as having "flexible structure" capable of human-centric flow and hierarchy as well as highly rigorous element structure and data typing.
The concept of XML as "human-readable", however, can only be taken so far. Some implementations/dialects of XML, such as the XML representation of the contents of a Microsoft Word document, as implemented in Office 2007 and later versions, utilize dozens or even hundreds of different kinds of tags that reflect a particular problem domain - in Word's case, formatting at the character and paragraph and document level, definitions of styles, inclusion of citations, etc. - which are nested within each other in complex ways. Understanding even a portion of such an XML document by reading it, let alone catching errors in its structure, is impossible without a very deep prior understanding of the specific XML implementation, along with assistance by software that understands the XML schema that has been employed. Such text is not "human-understandable" any more than a book written in Swahili (which uses the Latin alphabet) would be to an American or Western European who does not know a word of that language: the tags are symbols that are meaningless to a person unfamiliar with the domain.
=== JSON ===
JSON, or JavaScript Object Notation, is an open standard format that uses human-readable text to transmit data objects. JSON has been popularized by web services developed utilizing REST principles.
Databases such as MongoDB and Couchbase store data natively in JSON format, leveraging the pros of semi-structured data architecture.
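For example, two records of the same class in such a store may carry different attributes, with nesting taking the place of a relational table's fixed columns (a purely illustrative document):

[
  {"name": "Alice", "email": "alice@example.com"},
  {"name": "Bob",
   "phones": ["555-0100", "555-0199"],
   "address": {"city": "Boston", "zip": "02134"}}
]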
== Pros and cons ==
=== Advantages ===
Programmers persisting objects from their application to a database do not need to worry about object-relational impedance mismatch, but can often serialize objects via a light-weight library.
Support for nested or hierarchical data often simplifies data models representing complex relationships between entities.
Support for lists of objects simplifies data models by avoiding messy translations of lists into a relational data model.
=== Disadvantages ===
The traditional relational data model has a popular and ready-made query language, SQL.
Prone to "garbage in, garbage out"; by removing restraints from the data model, there is less forethought that is necessary to operate a data application.
== Semi-structured model ==
The semi-structured model is a database model where there is no separation between the data and the schema, and the amount of structure used depends on the purpose.
The advantages of this model are the following:
It can represent the information of some data sources that cannot be constrained by schema.
It provides a flexible format for data exchange between different types of databases.
It can be helpful to view structured data as semi-structured (for browsing purposes).
The schema can easily be changed.
The data transfer format may be portable.
The primary trade-off being made in using a semi-structured database model is that queries cannot be made as efficiently as in a more constrained structure, such as in the relational model. Typically the records in a semi-structured database are stored with unique IDs that are referenced with pointers to their location on disk. This makes navigational or path-based queries quite efficient, but for doing searches over many records (as is typical in SQL), it is not as efficient because it has to seek around the disk following pointers.
The Object Exchange Model (OEM) is one standard for expressing semi-structured data; XML is another.
== See also ==
Semi-structured model
NoSQL
Unstructured data
Structured data
== References ==
== External links ==
UPenn Database Group – semi-structured data and XML
Semi-Structured data analytics: Relational or Hadoop platform? by IBM
In computing, data transformation is the process of converting data from one format or structure into another format or structure. It is a fundamental aspect of most data integration and data management tasks such as data wrangling, data warehousing, data integration and application integration.
Data transformation can be simple or complex based on the required changes to the data between the source (initial) data and the target (final) data. Data transformation is typically performed via a mixture of manual and automated steps. Tools and technologies used for data transformation can vary widely based on the format, structure, complexity, and volume of the data being transformed.
A master data recast is another form of data transformation where the entire database of data values is transformed or recast without extracting the data from the database. All data in a well-designed database is directly or indirectly related to a limited set of master database tables by a network of foreign key constraints. Each foreign key constraint is dependent upon a unique database index from the parent database table. Therefore, when the proper master database table is recast with a different unique index, the directly and indirectly related data are also recast or restated. The directly and indirectly related data may also still be viewed in the original form since the original unique index still exists with the master data. Also, the database recast must be done in such a way as to not impact the applications architecture software.
When the data mapping is indirect via a mediating data model, the process is also called data mediation.
== Data transformation process ==
Data transformation can be divided into the following steps, each applicable as needed based on the complexity of the transformation required.
Data discovery
Data mapping
Code generation
Code execution
Data review
These steps are often the focus of developers or technical data analysts who may use multiple specialized tools to perform their tasks.
The steps can be described as follows:
Data discovery is the first step in the data transformation process. Typically the data is profiled using profiling tools or sometimes using manually written profiling scripts to better understand the structure and characteristics of the data and decide how it needs to be transformed.
Data mapping is the process of defining how individual fields are mapped, modified, joined, filtered, aggregated etc. to produce the final desired output. Developers or technical data analysts traditionally perform data mapping since they work in the specific technologies to define the transformation rules (e.g. visual ETL tools, transformation languages).
Code generation is the process of generating executable code (e.g. SQL, Python, R, or other executable instructions) that will transform the data based on the desired and defined data mapping rules. Typically, the data transformation technologies generate this code based on the definitions or metadata defined by the developers.
Code execution is the step whereby the generated code is executed against the data to create the desired output. The executed code may be tightly integrated into the transformation tool, or it may require separate steps by the developer to manually execute the generated code.
Data review is the final step in the process, which focuses on ensuring the output data meets the transformation requirements. It is typically the business user or final end-user of the data that performs this step. Any anomalies or errors found in the data are communicated back to the developer or data analyst as new requirements to be implemented in the transformation process.
== Types of data transformation ==
=== Batch data transformation ===
Traditionally, data transformation has been a bulk or batch process, whereby developers write code or implement transformation rules in a data integration tool, and then execute that code or those rules on large volumes of data. This process can follow the linear set of steps as described in the data transformation process above.
Batch data transformation is the cornerstone of virtually all data integration technologies such as data warehousing, data migration and application integration.
When data must be transformed and delivered with low latency, the term "microbatch" is often used. This refers to small batches of data (e.g. a small number of rows or a small set of data objects) that can be processed very quickly and delivered to the target system when needed.
=== Benefits of batch data transformation ===
Traditional data transformation processes have served companies well for decades. The various tools and technologies (data profiling, data visualization, data cleansing, data integration etc.) have matured and most (if not all) enterprises transform enormous volumes of data that feed internal and external applications, data warehouses and other data stores.
=== Limitations of traditional data transformation ===
This traditional process also has limitations that hamper its overall efficiency and effectiveness.
The people who need to use the data (e.g. business users) do not play a direct role in the data transformation process. Typically, users hand over the data transformation task to developers who have the necessary coding or technical skills to define the transformations and execute them on the data.
This process leaves the bulk of the work of defining the required transformations to the developer, who often does not have the same domain knowledge as the business user. The developer interprets the business user's requirements and implements the related code/logic. This has the potential of introducing errors into the process (through misinterpreted requirements), and also increases the time to arrive at a solution.
This problem has given rise to the need for agility and self-service in data integration (i.e. empowering the user of the data and enabling them to transform the data themselves interactively).
There are companies that provide self-service data transformation tools. They are aiming to efficiently analyze, map and transform large volumes of data without the technical knowledge and process complexity that currently exists. While these companies use traditional batch transformation, their tools enable more interactivity for users through visual platforms and easily repeated scripts.
Still, there might be some compatibility issues (e.g. new data sources like IoT may not work correctly with older tools) and compliance limitations due to the difference in data governance, preparation and audit practices.
=== Interactive data transformation ===
Interactive data transformation (IDT) is an emerging capability that allows business analysts and business users the ability to directly interact with large datasets through a visual interface, understand the characteristics of the data (via automated data profiling or visualization), and change or correct the data through simple interactions such as clicking or selecting certain elements of the data.
Although interactive data transformation follows the same data integration process steps as batch data integration, the key difference is that the steps are not necessarily followed in a linear fashion and typically don't require significant technical skills for completion.
There are a number of companies that provide interactive data transformation tools, including Trifacta, Alteryx and Paxata. They are aiming to efficiently analyze, map and transform large volumes of data while at the same time abstracting away some of the technical complexity and processes which take place under the hood.
Interactive data transformation solutions provide an integrated visual interface that combines the previously disparate steps of data analysis, data mapping, code generation/execution, and data inspection. That is, if changes are made at one step (for example, renaming a field), the software automatically updates the preceding or following steps accordingly. Interfaces for interactive data transformation incorporate visualizations that show the user patterns and anomalies in the data, so they can identify erroneous or outlying values.
Once they've finished transforming the data, the system can generate executable code/logic, which can be executed or applied to subsequent similar data sets.
By removing the developer from the process, interactive data transformation systems shorten the time needed to prepare and transform the data, eliminate costly errors in the interpretation of user requirements and empower business users and analysts to control their data and interact with it as needed.
== Transformational languages ==
There are numerous languages available for performing data transformation. Many transformation languages require a grammar to be provided. In many cases, the grammar is structured using something closely resembling Backus–Naur form (BNF). There are numerous languages available for such purposes varying in their accessibility (cost) and general usefulness. Examples of such languages include:
AWK - one of the oldest and most popular textual data transformation languages;
Perl - a high-level language with both procedural and object-oriented syntax capable of powerful operations on binary or text data;
Template languages - specialized to transform data into documents (see also template processor);
TXL - prototyping language-based descriptions, used for source code or data transformation.
XSLT - the standard XML data transformation language (supplemented by XQuery in many applications);
Additionally, companies such as Trifacta and Paxata have developed domain-specific transformational languages (DSL) for servicing and transforming datasets. The development of domain-specific languages has been linked to increased productivity and accessibility for non-technical users. Trifacta's "Wrangle" is an example of such a domain-specific language.
Another advantage of the recent domain-specific transformational languages trend is that a domain-specific transformational language can abstract the underlying execution of the logic defined in the domain-specific transformational language. They can also utilize that same logic in various processing engines, such as Spark, MapReduce, and Dataflow. In other words, with a domain-specific transformational language, the transformation language is not tied to the underlying engine.
Although transformational languages are typically best suited for transformation, something as simple as regular expressions can be used to achieve useful transformation. A text editor like vim, emacs or TextPad supports the use of regular expressions with arguments. This would allow all instances of a particular pattern to be replaced with another pattern using parts of the original pattern. For example:
foo ("some string", 42, gCommon);
bar (someObj, anotherObj);
foo ("another string", 24, gCommon);
bar (myObj, myOtherObj);
could both be transformed into a more compact form like:
foobar("some string", 42, someObj, anotherObj);
foobar("another string", 24, myObj, myOtherObj);
In other words, all instances of a function invocation of foo with three arguments, followed by a function invocation with two arguments would be replaced with a single function invocation using some or all of the original set of arguments.
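As a concrete sketch of that replacement (the pattern is illustrative and uses Python's re module rather than an editor):

import re

source = '''foo ("some string", 42, gCommon);
bar (someObj, anotherObj);
foo ("another string", 24, gCommon);
bar (myObj, myOtherObj);'''

# Capture the arguments of a foo(...) call followed by a bar(...) call,
# then splice them into a single foobar(...) invocation.
pattern = r'foo \("([^"]*)", (\d+), gCommon\);\nbar \((\w+), (\w+)\);'
print(re.sub(pattern, r'foobar("\1", \2, \3, \4);', source))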
== See also ==
Data cleansing
Data mapping
Data integration
Data preparation
Data wrangling
Extract, transform, load
Information integration
== References ==
== External links ==
File Formats, Transformation, and Migration, a related Wikiversity article
Optimistic concurrency control (OCC), also known as optimistic locking, is a non-locking concurrency control method applied to transactional systems such as relational database management systems and software transactional memory. OCC assumes that multiple transactions can frequently complete without interfering with each other. While running, transactions use data resources without acquiring locks on those resources. Before committing, each transaction verifies that no other transaction has modified the data it has read. If the check reveals conflicting modifications, the committing transaction rolls back and can be restarted. Optimistic concurrency control was first proposed in 1979 by H. T. Kung and John T. Robinson.
OCC is generally used in environments with low data contention. When conflicts are rare, transactions can complete without the expense of managing locks and without having transactions wait for other transactions' locks to clear, leading to higher throughput than other concurrency control methods. However, if contention for data resources is frequent, the cost of repeatedly restarting transactions hurts performance significantly, in which case other concurrency control methods may be better suited. On the other hand, locking-based ("pessimistic") methods can also deliver poor performance, because locking can drastically limit effective concurrency even when deadlocks are avoided.
== Phases of optimistic concurrency control ==
Optimistic concurrency control transactions involve these phases:
Begin: Record a timestamp marking the transaction's beginning.
Modify: Read database values, and tentatively write changes.
Validate: Check whether other transactions have modified data that this transaction has used (read or written). This includes transactions that completed after this transaction's start time, and optionally, transactions that are still active at validation time.
Commit/Rollback: If there is no conflict, make all changes take effect. If there is a conflict, resolve it, typically by aborting the transaction, although other resolution schemes are possible. Care must be taken to avoid a time-of-check to time-of-use bug, particularly if this phase and the previous one are not performed as a single atomic operation.
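A minimal sketch of the validate-and-commit step as a single atomic conditional update (the items table, its columns, and the use of sqlite3 are illustrative assumptions, not a prescribed implementation):

import sqlite3

def optimistic_update(conn: sqlite3.Connection, item_id: int,
                      new_value: str, read_version: int) -> bool:
    # The commit succeeds only if no other transaction bumped the version
    # since we read it; otherwise zero rows match and the caller retries.
    cur = conn.execute(
        "UPDATE items SET value = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_value, item_id, read_version),
    )
    conn.commit()
    return cur.rowcount == 1  # False indicates a conflict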
== Web usage ==
The stateless nature of HTTP makes locking infeasible for web user interfaces. It is common for a user to start editing a record, then leave without following a "cancel" or "logout" link. If locking is used, other users who attempt to edit the same record must wait until the first user's lock times out.
HTTP does provide a form of built-in OCC. The response to an initial GET request can include an ETag for subsequent PUT requests to use in the If-Match header. Any PUT requests with an out-of-date ETag in the If-Match header can then be rejected.
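For instance, using the requests library (the URL and payload are illustrative), the ETag round trip looks roughly like this:

import requests

resp = requests.get("https://api.example.com/items/42")
etag = resp.headers["ETag"]              # version token from the initial GET

update = requests.put(
    "https://api.example.com/items/42",
    json={"name": "new value"},
    headers={"If-Match": etag},          # apply only if the ETag still matches
)
if update.status_code == 412:            # 412 Precondition Failed: conflict
    pass  # re-fetch the resource and retry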
Some database management systems offer OCC natively, without requiring special application code. For others, the application can implement an OCC layer outside of the database, and avoid waiting or silently overwriting records. In such cases, the form may include a hidden field with the record's original content, a timestamp, a sequence number, or an opaque token. On submit, this is compared against the database. If it differs, the conflict resolution algorithm is invoked.
=== Examples ===
MediaWiki's edit pages use OCC.
Bugzilla uses OCC; edit conflicts are called "mid-air collisions".
The Ruby on Rails framework has an API for OCC.
The Grails framework uses OCC in its default conventions.
The GT.M database engine uses OCC for managing transactions (even single updates are treated as mini-transactions).
Microsoft's Entity Framework (including Code-First) has built-in support for OCC based on a binary timestamp value.
Most revision control systems support the "merge" model for concurrency, which is OCC.
Mimer SQL is a DBMS that only implements optimistic concurrency control.
Google App Engine data store uses OCC.
The Apache Solr search engine supports OCC via the _version_ field.
The Elasticsearch search engine updates its documents via OCC. Each version of a document is assigned a sequence number, and newer versions receive higher sequence numbers. As changes to a document arrive asynchronously, the software can use the sequence number to avoid overriding a newer version with an old one.
CouchDB implements OCC through document revisions.
The MonetDB column-oriented database management system's transaction management scheme is based on OCC.
Most implementations of software transactional memory use OCC.
Redis provides OCC through the WATCH command.
Firebird uses Multi-generational architecture as an implementation of OCC for data management.
DynamoDB uses conditional update as an implementation of OCC.
Kubernetes uses OCC when updating resources.
YugabyteDB is a cloud-native database that primarily uses OCC.
Firestore is a NoSQL database by Firebase that uses OCC in its transactions.
Apache Iceberg uses OCC to update tables and run maintenance operations on them.
== See also ==
Server Message Block#Opportunistic locking
== References ==
== External links ==
Kung, H. T.; John T. Robinson (June 1981). "On optimistic methods for concurrency control". ACM Transactions on Database Systems. 6 (2): 213–226. CiteSeerX 10.1.1.101.8988. doi:10.1145/319566.319567. S2CID 61600099.
Enterprise JavaBeans, 3.0, by Bill Burke, Richard Monson-Haefel, Chapter 16. Transactions, Section 16.3.5. Optimistic Locking, Publisher: O'Reilly, Pub Date: May 16, 2006, Print ISBN 0-596-00978-X.
Hollmann, Andreas (May 2009). "Multi-Isolation: Virtues and Limitations" (PDF). Multi-Isolation (what is between pessimistic and optimistic locking). Dresden: Happy-Guys Software GbR. p. 8. Retrieved 2013-05-16.
Multiversion concurrency control (MCC or MVCC) is a non-locking concurrency control method commonly used by database management systems to provide concurrent access to the database, and in programming languages to implement transactional memory.
== Description ==
Without concurrency control, if someone is reading from a database at the same time as someone else is writing to it, it is possible that the reader will see a half-written or inconsistent piece of data. For instance, when making a wire transfer between two bank accounts, if a reader reads the balance after the money has been withdrawn from the original account but before it has been deposited in the destination account, it would seem that money has disappeared from the bank. Isolation is the property that provides guarantees for concurrent accesses to data. Isolation is implemented by means of a concurrency control protocol. The simplest way is to make all readers wait until the writer is done, which is known as a read-write lock. Locks are known to create contention, especially between long read transactions and update transactions. MVCC aims at solving the problem by keeping multiple copies of each data item. In this way, each user connected to the database sees a snapshot of the database at a particular instant in time. Any changes made by a writer will not be seen by other users of the database until the changes have been completed (or, in database terms, until the transaction has been committed).
When an MVCC database needs to update a piece of data, it will not overwrite the original data item with new data, but instead creates a newer version of the data item. Thus there are multiple versions stored. The version that each transaction sees depends on the isolation level implemented. The most common isolation level implemented with MVCC is snapshot isolation. With snapshot isolation, a transaction observes a state of the data as of when the transaction started.
MVCC provides point-in-time consistent views. Read transactions under MVCC typically use a timestamp or transaction ID to determine what state of the DB to read, and read these versions of the data. Read and write transactions are thus isolated from each other without any need for locking. However, despite locks being unnecessary, they are used by some MVCC databases such as Oracle. Writes create a newer version, while concurrent reads access an older version.
MVCC introduces the challenge of how to remove versions that become obsolete and will never be read. In some cases, a process to periodically sweep through and delete the obsolete versions is implemented. This is often a stop-the-world process that traverses a whole table and rewrites it with the last version of each data item. PostgreSQL can use this approach with its VACUUM FREEZE process. Other databases split the storage blocks into two parts: the data part and an undo log. The data part always keeps the last committed version. The undo log enables the recreation of older versions of data. The main inherent limitation of this latter approach is that when there are update-intensive workloads, the undo log part runs out of space and then transactions are aborted as they cannot be given their snapshot. For a document-oriented database it also allows the system to optimize documents by writing entire documents onto contiguous sections of disk—when updated, the entire document can be re-written rather than bits and pieces cut out or maintained in a linked, non-contiguous database structure.
== Implementation ==
MVCC uses timestamps (TS), and incrementing transaction IDs, to achieve transactional consistency. MVCC ensures a transaction (T) never has to wait to Read a database object (P) by maintaining several versions of the object. Each version of object P has both a Read Timestamp (RTS) and a Write Timestamp (WTS) which lets a particular transaction Ti read the most recent version of the object which precedes the transaction's Read Timestamp RTS(Ti).
If transaction Ti wants to Write to object P, and there is also another transaction Tk happening to the same object, the Read Timestamp RTS(Ti) must precede the Read Timestamp RTS(Tk), i.e., RTS(Ti) < RTS(Tk), for the object Write Operation (WTS) to succeed. A Write cannot complete if there are other outstanding transactions with an earlier Read Timestamp (RTS) to the same object. Like standing in line at the store, you cannot complete your checkout transaction until those in front of you have completed theirs.
To restate: every object (P) has a Timestamp (TS). However, if transaction Ti wants to Write to an object, and the transaction has a Timestamp (TS) that is earlier than the object's current Read Timestamp, TS(Ti) < RTS(P), then the transaction is aborted and restarted. (This is because a later transaction already depends on the old value.) Otherwise, Ti creates a new version of object P and sets the read/write timestamp TS of the new version to the timestamp of the transaction, TS ← TS(Ti).
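A toy sketch of these read and write rules (the data structures and names are assumptions for illustration; production engines differ considerably, and the sketch assumes an initial version of the object exists):

class Abort(Exception):
    pass

class Version:
    def __init__(self, value, wts):
        self.value = value   # the data this version holds
        self.wts = wts       # write timestamp of the creating transaction
        self.rts = wts       # highest timestamp of any reader so far

def mvcc_read(versions, ts):
    # Read the newest version written at or before ts; reads never block.
    v = max((v for v in versions if v.wts <= ts), key=lambda v: v.wts)
    v.rts = max(v.rts, ts)
    return v.value

def mvcc_write(versions, ts, value):
    v = max((v for v in versions if v.wts <= ts), key=lambda v: v.wts)
    if ts < v.rts:
        # A later transaction already read the old value: abort and restart.
        raise Abort
    versions.append(Version(value, wts=ts))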
The drawback to this system is the cost of storing multiple versions of objects in the database. On the other hand, reads are never blocked, which can be important for workloads mostly involving reading values from the database. MVCC is particularly adept at implementing true snapshot isolation, something which other methods of concurrency control frequently do either incompletely or with high performance costs.
A structure to hold a record (row) for a database using MVCC could look like this in Rust.
Insert transaction identifier: 32 bits
The MVCC transaction identifier for insert.
Delete transaction identifier: 32 bits
The MVCC transaction identifier for delete.
Data length: 16 bits
The length of the data.
Data: Variable
The content stored in the record.
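The listing itself did not survive here; a minimal reconstruction from the field list above (the struct and field names are assumptions) might read:

struct Record {
    insert_txn_id: u32, // MVCC transaction identifier for insert (32 bits)
    delete_txn_id: u32, // MVCC transaction identifier for delete (32 bits)
    data_len: u16,      // length of the data (16 bits)
    data: Vec<u8>,      // the content stored in the record (variable)
}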
== Examples ==
=== Concurrent read–write ===
At Time = 1, the state of a database could be:

  Time   Object 1          Object 2
  0      "Foo" (by T0)     "Bar" (by T0)
  1      "Hello" (by T1)
T0 wrote Object 1="Foo" and Object 2="Bar". After that T1 wrote Object 1="Hello" leaving Object 2 at its original value. The new value of Object 1 will supersede the value at 0 for all transactions that start after T1 commits at which point version 0 of Object 1 can be garbage collected.
If a long-running transaction T2 starts a read operation of Object 2 and Object 1 after T1 committed, and there is a concurrent update transaction T3 which deletes Object 2 and adds Object 3="Foo-Bar", the database state will look like this at time 2:

  Time   Object 1          Object 2           Object 3
  0      "Foo" (by T0)     "Bar" (by T0)
  1      "Hello" (by T1)
  2                        (deleted by T3)    "Foo-Bar" (by T3)
As of time 2 there is a new version of Object 2 which is marked as deleted, and a new Object 3. Since T2 and T3 run concurrently, T2 sees the version of the database from before time 2, i.e. before T3 committed its writes; as such, T2 reads Object 2="Bar" and Object 1="Hello". This is how multiversion concurrency control allows snapshot-isolation reads without any locks.
== History ==
Multiversion concurrency control is described in some detail in the 1981 paper "Concurrency Control in Distributed Database Systems" by Phil Bernstein and Nathan Goodman, then employed by the Computer Corporation of America. Bernstein and Goodman's paper cites a 1978 dissertation by David P. Reed which quite clearly describes MVCC and claims it as an original work.
The first shipping, commercial database software product featuring MVCC was VAX Rdb/ELN, released in 1984, and created at Digital Equipment Corporation by Jim Starkey. Starkey went on to create the second commercially successful MVCC database - InterBase.
== See also ==
List of databases using MVCC
Read-copy-update
Timestamp-based concurrency control
Vector clock
Version control
== References ==
== Further reading ==
Gerhard Weikum, Gottfried Vossen, Transactional information systems: theory, algorithms, and the practice of concurrency control and recovery, Morgan Kaufmann, 2002, ISBN 1-55860-508-8
Application software is any computer program that is intended for end-user use – not operating, administering or programming the computer. An application (app, application program, software application) is any program that can be categorized as application software. Common types of applications include word processor, media player and accounting software.
The term application software refers to all applications collectively and can be used to differentiate from system and utility software.
Applications may be bundled with the computer and its system software or published separately. Applications may be proprietary or open-source.
The short term app (coined in 1981 or earlier) became popular with the 2008 introduction of the iOS App Store, to refer to applications for mobile devices such as smartphones and tablets. Later, with introduction of the Mac App Store (in 2010) and Windows Store (in 2011), the term was extended in popular use to include desktop applications.
== Terminology ==
The delineation between system software such as operating systems and application software is not exact and is occasionally the object of controversy. For example, one of the key questions in the United States v. Microsoft Corp. antitrust trial was whether Microsoft's Internet Explorer web browser was part of its Windows operating system or a separate piece of application software. As another example, the GNU/Linux naming controversy is, in part, due to disagreement about the relationship between the Linux kernel and the operating systems built over this kernel. In some types of embedded systems, the application software and the operating system software may be indistinguishable by the user, as in the case of software used to control a VCR, DVD player, or microwave oven. The above definitions may exclude some applications that may exist on some computers in large organizations. For an alternative definition of an app: see Application Portfolio Management.
When used as an adjective, application is not restricted to mean: of or on application software. For example, concepts such as application programming interface (API), application server, application virtualization, application lifecycle management and portable application apply to all computer programs alike, not just application software.
=== Killer app ===
Sometimes a new and popular application arises that only runs on one platform that results in increasing the desirability of that platform. This is called a killer application or killer app, coined in the late 1980s. For example, VisiCalc was the first modern spreadsheet software for the Apple II and helped sell the then-new personal computers into offices. For the BlackBerry, it was its email software.
=== Platform specific naming ===
Some applications are available for multiple platforms while others only work on one and are thus called, for example, a geography application for Microsoft Windows, or an Android application for education, or a Linux game.
== Classification ==
There are many different and alternative ways to classify application software.
From the legal point of view, application software is mainly classified with a black-box approach, about the rights of its end-users or subscribers (with eventual intermediate and tiered subscription levels).
Software applications are also classified with respect to the programming language in which the source code is written or executed, and concerning their purpose and outputs.
=== By property and use rights ===
Application software is usually distinguished into two main classes: closed source vs open source software applications, and free or proprietary software applications.
Proprietary software is placed under the exclusive copyright, and a software license grants limited usage rights. The open-closed principle states that software may be "open only for extension, but not for modification". Such applications can only get add-ons from third parties.
Free and open-source software (FOSS) may be run, distributed, sold, or extended for any purpose, and, being open, may be modified or reverse-engineered in the same way.
FOSS software applications released under a free license may be perpetual and also royalty-free. The owner, the holder, or a third-party enforcer of any right (copyright, trademark, patent, or ius in re aliena) may be entitled to add exceptions, limitations, time decays, or expiry dates to the license's terms of use.
Public-domain software is a type of FOSS that is royalty-free and can be run, distributed, modified, reversed, republished, or used in derivative works without any copyright attribution and therefore without risk of revocation. It can even be sold, but without transferring the public-domain property to other single subjects. Public-domain software can be released under an (un)licensing legal statement, which enforces those terms and conditions for an indefinite duration (for a lifetime, or forever).
=== By coding language ===
Since the development and near-universal adoption of the web, an important distinction that has emerged has been between web applications — written with HTML, JavaScript and other web-native technologies and typically requiring one to be online and running a web browser — and the more traditional native applications written in whatever languages are available for one's particular type of computer. There has been a contentious debate in the computing community regarding web applications replacing native applications for many purposes, especially on mobile devices such as smartphones and tablets. Web apps have indeed greatly increased in popularity for some uses, but the advantages of native applications make them unlikely to disappear soon, if ever. Furthermore, the two can be complementary, and even integrated.
=== By purpose and output ===
Application software can also be seen as being either horizontal or vertical. Horizontal applications are more popular and widespread, because they are general purpose, for example word processors or databases. Vertical applications are niche products, designed for a particular type of industry or business, or department within an organization. Integrated suites of software will try to handle every aspect possible of, for example, a manufacturing or banking operation, or of accounting or customer service.
There are many types of application software:
An application suite consists of multiple applications bundled together. They usually have related functions, features, and user interfaces, and may be able to interact with each other, e.g. open each other's files. Business applications often come in suites, e.g. Microsoft Office, LibreOffice and iWork, which bundle together a word processor, a spreadsheet, etc.; but suites exist for other purposes, e.g. graphics or music.
Enterprise software addresses the needs of an entire organization's processes and data flows, across several departments, often in a large distributed environment. Examples include enterprise resource planning systems, customer relationship management (CRM) systems, data replication engines, and supply chain management software. Departmental Software is a sub-type of enterprise software with a focus on smaller organizations or groups within a large organization. (Examples include travel expense management and IT Helpdesk.)
Enterprise infrastructure software provides common capabilities needed to support enterprise software systems. (Examples include databases, email servers, and systems for managing networks and security.)
Application platform as a service (aPaaS) is a cloud computing service that offers development and deployment environments for application services.
Information worker software lets users create and manage information, often for individual projects within a department, in contrast to enterprise management. Examples include time management, resource management, analytical, collaborative and documentation tools. Word processors, spreadsheets, email and blog clients, personal information systems, and individual media editors may aid in multiple information worker tasks.
Content access software is used primarily to access content without editing, but may include software that allows for content editing. Such software addresses the needs of individuals and groups to consume digital entertainment and published digital content. (Examples include media players, web browsers, and help browsers.)
Educational software is related to content access software, but has the content or features adapted for use by educators or students. For example, it may deliver evaluations (tests), track progress through material, or include collaborative capabilities.
Simulation software simulates physical or abstract systems for either research, training, or entertainment purposes.
Media development software generates print and electronic media for others to consume, most often in a commercial or educational setting. This includes graphic-art software, desktop publishing software, multimedia development software, HTML editors, digital-animation editors, digital audio and video composition, and many others.
Product engineering software is used in developing hardware and software products. This includes computer-aided design (CAD), computer-aided engineering (CAE), computer language editing and compiling tools, integrated development environments, and application programmer interfaces.
Entertainment Software can refer to video games, screen savers, programs to display motion pictures or play recorded music, and other forms of entertainment which can be experienced through the use of a computing device.
=== By platform ===
Applications can also be classified by computing platforms such as a desktop application for a particular operating system, delivery network such as in cloud computing and Web 2.0 applications, or delivery devices such as mobile apps for mobile devices.
The operating system itself can be considered application software when performing simple calculating, measuring, rendering, and word processing tasks not used to control hardware via a command-line interface or graphical user interface. This does not include application software bundled within operating systems such as a software calculator or text editor.
=== Information worker software ===
Accounting software
Data management
Contact manager
Spreadsheet
Database software
Documentation
Document automation
Word processor
Desktop publishing software
Diagramming software
Presentation software
Email
Blog software
Enterprise resource planning
Financial software
Banking software
Clearing systems
Financial accounting software
Financial software
Field service management
Workforce management software
Project management software
Calendaring software
Employee scheduling software
Workflow software
Reservation systems
=== Entertainment software ===
Screen savers
Video games
Arcade video games
Console games
Mobile games
Personal computer games
Software art
Demo
64K intro
=== Educational software ===
Classroom management
Reference software
Sales readiness software
Survey management
Encyclopedia software
=== Enterprise infrastructure software ===
Artificial Intelligence for IT Operations (AIOps)
Business workflow software
Database management system (DBMS)
Digital asset management (DAM) software
Document management software
Geographic information system (GIS)
=== Simulation software ===
Computer simulators
Scientific simulators
Social simulators
Battlefield simulators
Emergency simulators
Vehicle simulators
Flight simulators
Driving simulators
Simulation games
Vehicle simulation games
=== Media development software ===
3D computer graphics software
Animation software
Graphic art software
Raster graphics editor
Vector graphics editor
Image organizer
Video editing software
Audio editing software
Digital audio workstation
Music sequencer
Scorewriter
HTML editor
Game development tool
=== Product engineering software ===
Hardware engineering
Computer-aided engineering
Computer-aided design (CAD)
Computer-aided manufacturing (CAM)
Finite element analysis
=== Software engineering ===
Compiler software
Integrated development environment
Compiler
Linker
Debugger
Version control
Game development tool
License manager
== See also ==
Software development – Creation and maintenance of software
Mobile app – Software application designed to run on mobile devices
Web application – Application that uses a web browser as a client
Server application – Application that provides a central resource or service on a network
Super-app – Mobile application that provides multiple services including financial transactions
== References ==
== External links ==
Learning materials related to Application software at Wikiversity
Honeywell International Inc. is an American publicly traded, multinational conglomerate corporation headquartered in Charlotte, North Carolina. It primarily operates in four areas of business: aerospace, building automation, industrial automation, and energy and sustainability solutions (ESS). Honeywell also owns and operates Sandia National Laboratories under contract with the U.S. Department of Energy. Honeywell is a Fortune 500 company, ranked 115th in 2023. In 2024, the corporation had a global workforce of approximately 102,000 employees. As of 2023, the current chairman and chief executive officer is Vimal Kapur.
The corporation's name, Honeywell International Inc., is a product of the merger of Honeywell Inc. and AlliedSignal in 1999. The corporation headquarters were consolidated with AlliedSignal's headquarters in Morristown, New Jersey. The combined company chose the name "Honeywell" because of the considerable brand recognition. Honeywell was a component of the Dow Jones Industrial Average index from 1999 to 2008. Prior to 1999, its corporate predecessors were included dating back to 1925, including early entrants in the computing and thermostat industries.
In 2020, Honeywell rejoined the Dow Jones Industrial Average index. In 2021, it moved its stock listing from the New York Stock Exchange to the Nasdaq.
In 2025, Honeywell announced it would split into three companies: Honeywell Automation, Honeywell Aerospace, and Honeywell Advanced Materials.
== History ==
In 1885, the Swiss-born Albert Butz invented the damper-flapper, a thermostat used to control coal furnaces, bringing automated heating system regulation into homes. In 1886, he founded the Butz Thermo-Electric Regulator Company to market the invention. In 1888, after a falling out with his investors, Butz left the company and transferred the patents to the legal firm Paul, Sanford, and Merwin, who renamed the company the Consolidated Temperature Controlling Company.
As the years passed, CTCC struggled with debt, and the company underwent several name changes. After it was renamed the Electric Heat Regulator Company in 1893, W.R. Sweatt, a stockholder in the company, was sold "an extensive list of patents" and named secretary-treasurer. By 1900, Sweatt had bought out the remaining shares of the company from the other stockholders.
=== 1906 Honeywell Heating Specialty Company founded ===
In 1906, Mark Honeywell founded the Honeywell Heating Specialty Company in Wabash, Indiana, to manufacture and market his invention, the mercury seal generator.
=== 1922–1934 Mergers and acquisitions ===
As Honeywell's company grew, thanks in part to the acquisition of the Jewell Manufacturing Company in 1922 to better automate his heating system, it began to clash with the Electric Heat Regulator Company, by then renamed the Minneapolis Heat Regulator Company. In 1927, this led to the merger of both companies into the publicly held Minneapolis-Honeywell Regulator Company. Honeywell was named the company's first president, alongside W.R. Sweatt as its first chairman.
In 1929, combined assets were valued at over $3.5 million, with less than $1 million in liabilities just months before Black Monday.: 49 In 1931, Minneapolis-Honeywell began a period of expansion and acquisition when they purchased the Time-O-Stat Controls Company, giving the company access to a greater number of patents for their controls systems.
W.R. Sweatt and his son Harold provided 75 years of uninterrupted leadership for the company. W.R. Sweatt survived rough spots and turned an innovative idea – thermostatic heating control – into a thriving business.
=== 1934–1941 International growth ===
Harold took over in 1934, leading Honeywell through a period of growth and global expansion that set the stage for Honeywell to become a global technology leader. The merger into the Minneapolis-Honeywell Regulator Company proved to be a saving grace for the corporation.
1934 marked Minneapolis-Honeywell's first foray into the international market, when they acquired the Brown Instrument Company and inherited their relationship with the Yamatake Company of Tokyo, a Japan-based distributor.: 51 Later in 1934, Minneapolis-Honeywell started distributorships across Canada, as well as one in the Netherlands, their first European office. This expansion into international markets continued in 1936, with their first distributorship in London, as well as their first foreign assembly facility being established in Canada. By 1937, ten years after the merger, Minneapolis-Honeywell had over 3,000 employees, with $16 million in annual revenue.
=== World War II ===
With the outbreak of World War II, Minneapolis-Honeywell was approached by the US military for engineering and manufacturing projects. In 1941, Minneapolis-Honeywell developed a superior tank periscope, camera stabilizers, and the C-1 autopilot.
The C-1 revolutionized precision bombing and was ultimately used on the two B-29 bombers that dropped atomic bombs on Japan in 1945. The success of these projects led Minneapolis-Honeywell to open an Aero division in Chicago on October 5, 1942.: 73 This division was responsible for the development of the formation stick to control autopilots, more accurate fuel quantity indicators for aircraft, and the turbo supercharger.: 79
In 1950, Minneapolis-Honeywell's Aero division was contracted for the controls on the first US nuclear submarine, USS Nautilus.: 88 In 1951, the company acquired Intervox Company for their sonar, ultrasonic, and telemetry technologies. Honeywell also helped develop and manufacture the RUR-5 ASROC for the US Navy.
=== 1950–1970s ===
In 1953, in cooperation with the USAF Wright Air Development Center, Honeywell developed an automated control unit that could control an aircraft through various stages of a flight, from taxiing to takeoff to the point where the aircraft neared its destination and the pilot took over for landing. Called the Automatic Master Sequence Selector, the onboard control operated similarly to a player piano to relay instructions to the aircraft's autopilot at certain waypoints during the flight, significantly reducing the pilot's workload. Technologically, this effort had parallels to contemporary efforts in missile guidance and numerical control. Honeywell also developed the Wagtail missile with the USAF.
From the 1950s until the mid-1970s, Honeywell was the United States' importer of Japanese company Asahi Optical's Pentax cameras and photographic equipment.: 153 These products were labeled "Heiland Pentax" and "Honeywell Pentax" in the U.S. In 1953, Honeywell introduced their most famous product, the T-86 Round thermostat.: 110
In 1961, James H. Binger became Honeywell's president and in 1965 its chairman. Binger revamped the company sales approach, placing emphasis on profits rather than on volume. He stepped up the company's international expansion – it had six plants producing 12% of the company's revenue. He officially changed the company's corporate name from "Minneapolis-Honeywell Regulator Co." to "Honeywell", to better represent their colloquial name. Throughout the 1960s, Honeywell continued to acquire other businesses, including Security Burglar Alarm Company in 1969.: 130
In the 1970s, after a member of FREE, a student group on the University of Minnesota's Minneapolis campus, asked five major companies with local offices to explain their attitudes toward gay men and women, three responded quickly, insisting that they did not discriminate against gay people in their hiring policies. Only Honeywell objected to hiring gay people. Later in the 1970s, when faced with a denial of access to students, Honeywell "quietly [reversed] its hiring policy".
The beginning of the 1970s saw Honeywell focus on process controls, with Honeywell merging their computer operations with GE's information systems in 1970, and later acquiring GE's process control business.: 122 With the acquisition, Honeywell took over responsibility for GE's ongoing Multics operating system project. The design and features of Multics greatly influenced the Unix operating system. Multics influenced many of the features of Honeywell/GE's GECOS and GCOS8 General Comprehensive Operating System operating systems. Honeywell, Groupe Bull, and Control Data Corporation formed a joint venture in Magnetic Peripherals Inc. which became a major player in the hard disk drive market.: 124
Honeywell was the worldwide leader in 14-inch disk drive technology in the OEM marketplace in the 1970s and early 1980s, especially with its SMD (Storage Module Drive) and CMD (Cartridge Module Drive). In the second half of the 1970s, Honeywell started to look to international markets again, acquiring the French Compagnie Internationale pour l'Informatique in 1976.: 124 In 1984, Honeywell formed Honeywell High Tech Trading to lease its foreign marketing and distribution to other companies abroad, in order to establish a better position in those markets.: 147 Under Binger's stewardship from 1961 to 1978, the company expanded into such fields as defense, aerospace, and computing.
During and after the Vietnam era, Honeywell's defense division produced a number of products, including cluster bombs, missile guidance systems, napalm, and land mines. Minneapolis-Honeywell completed flight tests on an inertial guidance sub-system for the X-20 project at Eglin Air Force Base, Florida, utilizing an NF-101B Voodoo, by August 1963. The X-20 project was canceled in December 1963. The Honeywell Project, founded in 1968, organized protests against the company to persuade it to abandon weapons production.
In 1980, Honeywell bought Incoterm Corporation to compete in both the airline reservations system networks and bank teller markets.
==== Honeywell Information Systems ====
In April 1955, Minneapolis-Honeywell started a joint venture with Raytheon called Datamatic to enter the computer market and compete with IBM.: 118 In 1957, their first computer, the DATAmatic 1000, was sold and installed. In 1960, just five years after embarking on this venture with Raytheon, Minneapolis-Honeywell bought Raytheon's interest in Datamatic and turned it into the Electronic Data Processing division, later Honeywell Information Systems (HIS) of Minneapolis-Honeywell.: 118
Honeywell purchased minicomputer pioneer Computer Control Corporation (3C's) in 1966, renaming it as Honeywell's Computer Control Division. Through most of the 1960s, Honeywell was one of the "Snow White and the Seven Dwarfs" of computing. IBM was "Snow White", while the dwarfs were the seven significantly smaller computer companies: Burroughs, Control Data Corporation, General Electric, Honeywell, NCR, RCA, and UNIVAC. Later, when their number had been reduced to five, they were known as "The BUNCH", after their initials: Burroughs, UNIVAC, NCR, Control Data Corporation, and Honeywell.
In 1970, Honeywell acquired GE's computer business, rebadging General Electric's 600-series mainframes as Honeywell 6000 series computers supporting GCOS, Multics, and CP-6, and forming Honeywell Information Systems. In 1973, it shipped a high-speed non-impact printer called the Honeywell Page Printing System.
From 1974 to 1987, under the leadership of CEO Edson W. Spencer, the company began a shift away from computers and focused instead on aeronautics and industrial technology.
In 1975, it purchased Xerox Data Systems, whose Sigma computers had a small but loyal customer base. Some of Honeywell's systems were minicomputers, such as the Series 60 Model 6 and Model 62, and the Honeywell 200; the latter was an attempt to penetrate the IBM 1401 market. In 1987, HIS was merged into Honeywell Bull, a global joint venture with Compagnie des Machines Bull of France and NEC Corporation of Japan. In 1988, Honeywell Bull was consolidated into Groupe Bull, which in 1989 was renamed Bull, a Worldwide Information Systems Company. By 1991, Honeywell was no longer involved in the computer business.
=== 1985–1999 integrations ===
==== Aerospace and defense ====
1986 marked a new direction for Honeywell, beginning with the acquisition of the Sperry Aerospace Group from the Unisys Corporation. In 1990, Honeywell spun off their Defense and Marine Systems business into Alliant Techsystems, as well as their Test Instruments division and Signal Analysis Center to streamline the company's focus. Honeywell continues to supply aerospace products including electronic guidance systems, cockpit instrumentation, lighting, and primary propulsion and secondary power turbine engines. In 1996, Honeywell acquired Duracraft and began marketing its products in the home comfort sector.
Honeywell is in the consortium that runs the Pantex Plant that assembles all of the nuclear bombs in the United States arsenal. Honeywell Federal Manufacturing & Technologies, successor to the defense products of AlliedSignal, operates the Kansas City Plant which produces and assembles 85 percent of the non-nuclear components of the bombs.
==== Home and building controls ====
Honeywell began the SmartHouse project to combine heating, cooling, security, lighting, and appliances into one easily controlled system. It continued the trend in 1987 by releasing new security systems, and fire and radon detectors. In 1992, in another streamlining effort, Honeywell combined its Residential Controls, Commercial Systems, and Protection Services divisions into Home and Building Control, which then acquired the Enviracare air cleaner business.: 183 By 1995, Honeywell had condensed into three divisions: Space and Aviation Control, Home and Building Control, and Industrial Control.
==== Industrial control ====
In 1998, Honeywell dissolved its partnership with the Yamatake Company and consolidated its Process Control Products Division, Process Management System Division, and Micro Switch Division into a single Industrial Control Group, having acquired Measurex and Leeds & Northrup in 1997 to strengthen its portfolio.
=== 1999–2002 merger, takeovers ===
==== AlliedSignal and Pittway ====
On June 7, 1999, Honeywell was acquired by AlliedSignal, which elected to retain the Honeywell name for its brand recognition. The former Honeywell moved its headquarters of 114 years to AlliedSignal's in Morristown, New Jersey. It was observed that while "technically, the deal looks more like an acquisition than a merger...from a strategic standpoint, it is a merger of equals." AlliedSignal's 1998 revenue was reported at $15.1 billion to Honeywell's $8.4 billion, but together the companies shared huge business interests in aerospace, chemical products, automotive parts, and building controls.
When Honeywell closed its corporate headquarters in Minneapolis, Minnesota, over one thousand employees lost their jobs. A few moved to Morristown or other company locations, but the majority had to find new jobs or retire. Soon after the merger, the company's stock fell significantly, and it did not return to its pre-merger level until 2007.
In 2000, the new Honeywell acquired Pittway for $2.2 billion to gain a greater share of the fire-protection and security systems market, and merged it into their Home and Building Control division, taking on Pittway's $167 million in debt. Analyst David Jarrett commented that "while Honeywell offered a hefty premium, it's still getting Pittway for a bargain" at $45.50 per share, despite closing at $29 the week before. Pittway's Ademco products complemented Honeywell's existing unified controls systems.
==== General Electric Company ====
In October 2000, Honeywell, then valued at over $21 billion, accepted a takeover bid from then-CEO Jack Welch of General Electric. The U.S. Department of Justice cleared the merger, while "GE teams swooped down on Honeywell" and "GE executives took over budget planning and employee reviews." However, on July 3, 2001, the European Commission's competition commissioner, Mario Monti, blocked the move. This decision was taken on the grounds that, given GE's dominance of the large jet engine market (led by the General Electric CF34 turbofan engine) and its leasing services (GECAS), combined with Honeywell's portfolio of regional jet engines and avionics, the new company would be able to "bundle" products and stifle competition through the creation of a horizontal monopoly.
US regulators disagreed, finding that the merger would improve competition and reduce prices; United States Assistant Attorney General Charles James called the EU's decision "antithetical to the goals of antitrust law enforcement." This led to a drop in morale and general tumult throughout Honeywell. The then-CEO Michael Bonsignore was fired as Honeywell looked to turn their business around.
=== 2002–2014 acquisitions and further expansion ===
In January 2002, Knorr-Bremse, which had been operating in a joint venture with Honeywell International Inc., assumed full ownership of its ventures in Europe, Brazil, and the USA. Bendix Commercial Vehicle Systems became a subsidiary of Knorr-Bremse AG.
In February 2002, Honeywell's board appointed its next CEO and chairman, David M. Cote. Since 2002, Honeywell has made more than 80 acquisitions and 60 divestitures, increasing its labor force to 131,000 as a result. Honeywell's stock nearly tripled from $35.23 in April 2002 to $99.39 in January 2015.
Honeywell made a £1.2bn ($2.3bn) bid for Novar plc in December 2004. The acquisition was finalized in March 2005. In October 2005, Honeywell bought out Dow's 50% stake in UOP for $825 million, giving them complete control over the joint venture in petrochemical and refining technology. In May 2010, Honeywell outbid UK-based Cinven and acquired the French company Sperian Protection for $1.4 billion, which was then incorporated into its automation and controls safety unit.
=== 2015–present ===
In 2015, the headquarters were moved to Morris Plains, New Jersey. The headquarters in Morris Plains included a 475,000-square-foot building on 40 acres.
In December 2015, Honeywell acquired Elster for US$5.1 billion, entering the space of gas, electricity, and water meters with a specific focus on smart meters. Honeywell International Inc. then acquired the 30% stake in UOP Russell LLC it did not already own for roughly $240 million in January 2016.
In April 2016, Honeywell acquired Xtralis, a provider of aspirating smoke detection, perimeter security technologies, and video analytics software, for $480 million, from funds advised by Pacific Equity Partners and Blum Capital Partners. In May 2016, Honeywell International Inc. settled its patent dispute regarding Google subsidiary Nest Labs, whose thermostats Honeywell claimed infringed on several of its patents. Google parent Alphabet Inc. and Honeywell said they reached a "patent cross-license" agreement that "fully resolves" the long-standing dispute. Honeywell sued Nest Labs in 2012. In 2017, Honeywell opened a new software center in Atlanta, Georgia.
David Cote stepped down as CEO on April 1, 2017, and was succeeded by Darius Adamczyk, who had been promoted to president and chief operating officer (COO) in 2016. Cote served as executive chairman until April 2018. In October 2017, Honeywell announced plans to spin off its Homes, ADI Global Distribution, and Transportation Systems businesses into two separate, publicly traded companies by the end of 2018.
In 2018, Honeywell spun off both Honeywell Turbo Technologies, now Garrett Advancing Motion, and its consumer products business, Resideo. Both companies are publicly traded on the New York Stock Exchange. For the fiscal year 2019, Honeywell reported net income of US$6.230 billion, with an annual revenue of US$36.709 billion, a decrease of 19.11% over the previous fiscal cycle. Honeywell's market capitalization was valued at over US$113.25 billion in September 2020.
In July 2019, Honeywell moved employees into a temporary headquarters building in Charlotte, North Carolina, and in October 2019 it formally relocated its corporate headquarters to the city, ahead of the completion of its new building.
In 2020, Honeywell Forge launched as an analytics software platform for industrial and commercial applications spanning aircraft, buildings, industrial operations, workers, and cybersecurity. In collaboration with Carnegie Mellon University's National Robotics Engineering Center, Honeywell Robotics was created in Pittsburgh to focus on supply chain transformation. The Honeywell robotic unloader grabs packages in tractor-trailers and places them on conveyor belts for handlers to sort.
In May 2019, GoDirect Trade launched as an online marketplace for surplus aircraft parts such as engines, electronics, and APU parts. In March 2020, Honeywell announced a quantum computer based on trapped ions, with an expected quantum volume of at least 64, which Honeywell's CEO called the world's most powerful quantum computer. In November 2021, Honeywell announced the spinoff of its quantum division into a separate company named "Quantinuum".
In March 2023, Honeywell announced Vimal Kapur as its next CEO, effective June 1, 2023. In December 2023, Honeywell acquired Carrier Global's security business.
In February 2024, Honeywell filed a lawsuit against Lone Star Aerospace, Inc., alleging that their software products infringe on five patents.
On October 1, 2024, Honeywell partnered with Google to integrate its industrial data with generative AI, with the aim of streamlining autonomous operations for its customers.
On October 8, 2024, it was announced that the company's advanced materials division would be spun off into a new company.
On February 6, 2025, Honeywell announced that it would be split into three independent companies, separating its aerospace, automation, and previously announced advanced materials segments, after activist investor Elliott Investment Management, which favored the split, took a major stake in the company.
On May 22, 2025, the company announced it was acquiring Johnson Matthey's Catalyst Technologies arm for £1.8 billion.
==== COVID-19 pandemic ====
In response to the COVID-19 pandemic, Honeywell converted some of its manufacturing facilities in Rhode Island, Arizona, Michigan, and Germany to produce supplies of personal protective equipment for healthcare workers. In April 2020, Honeywell began production of N95 masks at the company's factories in Smithfield and Phoenix, aiming to produce 20 million masks a month. Honeywell's facilities in Muskegon and Germany were converted to produce hand sanitizer for government agencies.
Several state governments contracted Honeywell to produce N95 particulate-filtering face masks during the pandemic. The North Carolina Task Force for Emergency Repurposing of Manufacturing (TFERM) awarded Honeywell a contract for the monthly delivery of 100,000 N95 masks. In April 2020, Los Angeles Mayor Eric Garcetti announced a deal with Honeywell to produce 24 million N95 masks to distribute to healthcare workers and first responders.
In May 2020, United States President Donald Trump visited the Honeywell Aerospace Technologies facility in Phoenix, where he acknowledged the "incredibly patriotic and hard-working men and women of Honeywell" for making N95 masks and referred to the company's production as a "miraculous achievement".
In April 2021, Will.i.am and Honeywell collaborated on Xupermask, a mask made of silicon and athletic mesh fabric that has LED lights, 3-speed fans and noise-canceling headphones in the mask.
In November 2024, Honeywell announced its intention to sell its personal protective equipment business to Protective Industrial Products for almost $1.33 billion in cash. The sale of this PPE business is expected to close by the first half of 2025.
After the divestment of the PPE business, the company plans to retain its gas detection portfolio.
== Business groups ==
The company operates four business groups – Honeywell Aerospace Technologies, Building Automation, Safety and Productivity Solutions (SPS), and Performance Materials and Technologies (PMT). Business units within the company are as follows:
Honeywell Aerospace Technologies provides avionics, aircraft engines, flight management systems, and service solutions to manufacturers, airlines, airport operations, militaries, and space programs. It comprises Commercial Aviation, Defense & Space, and Business & General Aviation. In January 2014, Honeywell Aerospace Technologies launched its SmartPath Precision Landing System at Malaga-Costa del Sol Airport in Spain, which augments GPS signals to make them suitable for precision approach and landing, before broadcasting the data to approaching aircraft.
In July 2014, Honeywell's Transportation Systems merged with the Aerospace division due to similarities between the businesses. In April 2018, Honeywell announced plans to develop laser communication products for satellite communication in collaboration with Ball Aerospace, with volume production planned for the future. In June 2018, Honeywell spun off and rebranded its Transportation Systems as Garrett.
Building Automation and Honeywell Safety and Productivity Solutions were created when Automation and Control Solutions was split into two in July 2016. Building Automation comprises Honeywell Building Solutions, Environmental and Energy Solutions, and Honeywell Security and Fire. In December 2017, Honeywell announced that it had acquired SCAME, an Italy-based company, to add new fire and gas safety capabilities to its portfolio. Honeywell Safety and Productivity Solutions comprises Scanning & Mobility, Sensing and Internet of Things, and Industrial safety.
Honeywell Performance Materials and Technologies comprises six business units: Honeywell UOP, Honeywell Process Solutions, Fluorine Products, Electronic Materials, Resins & Chemicals, and Specialty Materials. Products include process technology for oil and gas processing, fuels, films and additives, special chemicals, electronic materials, and renewable transport fuels.
== Corporate governance ==
Honeywell's current chief executive officer is Vimal Kapur. As of June 2023, the members of the board are:
== Acquisitions since 2002 ==
Honeywell's acquisitions have consisted largely of businesses aligned with the company's existing technologies. The acquired companies are integrated into one of Honeywell's five business groups (Aerospace Technologies (AT), Building Automation (BA), Safety and Productivity Solutions (SPS), Energy and Sustainability Solutions (ESS), or Performance Materials and Technologies (PMT)) but retain their original brand name.
== Environmental issues ==
The United States Environmental Protection Agency states that no corporation has been linked to a greater number of Superfund toxic waste sites than Honeywell. In 2007, Honeywell ranked 44th in a list of US corporations most responsible for air pollution, releasing more than 4.25 million kg (9.4 million pounds) of toxins per year into the air. In 2001, Honeywell agreed to pay $150,000 in civil penalties and to perform $772,000 worth of reparations for environmental violations involving:
failure to prevent or repair leaks of hazardous organic pollutants into the air
failure to repair or report refrigeration equipment containing chlorofluorocarbons
inadequate reporting of benzene, ammonia, nitrogen oxide, dichlorodifluoromethane, sulfuric acid, sulfur dioxide, and caprolactam emissions
In 2003, a federal judge in Newark, New Jersey, ordered the company to perform an estimated $400 million environmental remediation of chromium waste, citing "a substantial risk of imminent damage to public health and safety and imminent and severe damage to the environment." In 2003, Honeywell paid $3.6 million to avoid a federal trial regarding its responsibility for trichloroethylene contamination in Lisle, Illinois. In 2004, the State of New York announced that it would require Honeywell to complete an estimated $448 million cleanup of more than 74,000 kg (165,000 lbs) of mercury and other toxic waste dumped into Onondaga Lake in Syracuse, New York, from a former Allied Chemical property.
Honeywell established three water treatment plants by November 2014. The cleanup removed 7 tons of mercury from the site. In November 2015, Audubon New York gave the Thomas W. Keesee Jr. Conservation Award to Honeywell for its cleanup efforts in "one of the most ambitious environmental reclamation projects in the United States." By December 2017, Honeywell had completed dredging the lake. Later in December, the Department of Justice filed a settlement requiring Honeywell to pay a separate $9.5 million in damages, as well as build 20 restoration projects on the shore to help repair the greater area surrounding the lake.
In 2005, the state of New Jersey sued Honeywell, Occidental Petroleum, and PPG to compel cleanup of more than 100 sites contaminated with chromium, a metal linked to lung cancer, ulcers, and dermatitis. In 2008, the state of Arizona made a settlement with Honeywell to pay a $5 million fine and contribute $1 million to a local air-quality cleanup project, after allegations of breaking water-quality and hazardous-waste laws on hundreds of occasions between 1974 and 2004.
In 2006, Honeywell announced that its decision to stop manufacturing mercury switches had resulted in reductions of more than 11,300 kg (24,900 lb) of mercury, 2,800 kg (6,200 lb) of lead, and 1,500 kg (3,300 lb) of chromic acid usage. The largest reduction represents 5% of mercury use in the United States. The EPA acknowledged Honeywell's leadership in reducing mercury use through a 2006 National Partnership for Environmental Priorities (NPEP) Achievement Award for discontinuing the manufacturing of mercury switches.
=== Carbon footprint ===
Honeywell reported total CO2e emissions (direct plus indirect) for the twelve months ending 31 December 2020 of 2,248 kt, a reduction of 89 kt (3.8%) year over year. Honeywell aims to reach net-zero emissions by 2035.
== Criticism ==
On March 10, 2013, The Wall Street Journal reported that Honeywell was one of sixty companies that shielded annual profits from U.S. taxes. In December 2011, the non-partisan organization Public Campaign criticized Honeywell International for spending $18.3 million on lobbying and not paying any taxes during 2008–2010, instead getting $34 million in tax rebates, despite making a profit of $4.9 billion, laying off 968 workers since 2008, and increasing executive pay by 15% to $54.2 million in 2010 for its top five executives.
Honeywell has also been criticized in the past for its manufacture of deadly and maiming weapons, such as cluster bombs.
=== Allegations of involvement in Gaza ===
In June 2024, investigative reports from various sources alleged that components manufactured by Honeywell were used in a missile that targeted a school in Gaza. Al Jazeera's investigation traced the part's serial numbers back to Honeywell, raising concerns about U.S. involvement in these military operations. The attack resulted in numerous civilian casualties, sparking international condemnation. Honeywell has not provided a detailed response to these claims.
== See also ==
List of Honeywell products and services
Top 100 US Federal Contractors
== Explanatory notes ==
== References ==
== External links ==
Official website
Business data for Honeywell:
Dimensional modeling (DM) is part of the Business Dimensional Lifecycle methodology developed by Ralph Kimball, which includes a set of methods, techniques and concepts for use in data warehouse design.: 1258–1260 The approach focuses on identifying the key business processes within a business and modelling and implementing these first before adding additional business processes, as a bottom-up approach.: 1258–1260 An alternative approach from Inmon advocates a top-down design of the model of all the enterprise data using tools such as entity-relationship modeling (ER).: 1258–1260
== Description ==
Dimensional modeling always uses the concepts of facts (measures) and dimensions (context). Facts are typically (but not always) numeric values that can be aggregated, and dimensions are groups of hierarchies and descriptors that define the facts. For example, sales amount is a fact; timestamp, product, register#, store#, etc. are elements of dimensions. Dimensional models are built by business process area, e.g. store sales, inventory, claims, etc. Because the different business process areas share some but not all dimensions, efficiency in design and operation, as well as consistency, is achieved using conformed dimensions, i.e. using one copy of the shared dimension across subject areas.
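For illustration, a minimal star-schema sketch of the store-sales example (all table and column names here are invented for the example, with SQLite standing in for any relational engine):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Dimensions: the context (hierarchies and descriptors).
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, full_date TEXT,
                          year INTEGER, month INTEGER, weekday TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, sku TEXT,
                          name TEXT, brand TEXT, category TEXT);
CREATE TABLE dim_store   (store_key INTEGER PRIMARY KEY, store_number TEXT,
                          city TEXT, region TEXT);

-- Fact: numeric, aggregatable measures keyed by the dimensions.
CREATE TABLE fact_sales (
    date_key     INTEGER REFERENCES dim_date (date_key),
    product_key  INTEGER REFERENCES dim_product (product_key),
    store_key    INTEGER REFERENCES dim_store (store_key),
    register_no  TEXT,
    sales_amount REAL,
    quantity     INTEGER
);
""")
```

An inventory fact table in the same warehouse could reuse dim_date and dim_product unchanged; that reuse is exactly what makes them conformed dimensions.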
Dimensional modeling does not necessarily involve a relational database. The same modeling approach, at the logical level, can be used for any physical form, such as multidimensional database or even flat files. It is oriented around understandability and performance.
== Design method ==
=== Designing the model ===
The dimensional model is built on a star-like schema or snowflake schema, with dimensions surrounding the fact table. To build the schema, the following design model is used:
Choose the business process
Declare the grain
Identify the dimensions
Identify the facts
Choose the business process
The process of dimensional modeling builds on a 4-step design method that helps to ensure the usability of the dimensional model and the use of the data warehouse. The basics in the design build on the actual business process which the data warehouse should cover. Therefore, the first step in the model is to describe the business process which the model builds on. This could for instance be a sales situation in a retail store. To describe the business process, one can choose to do this in plain text or use basic Business Process Model and Notation (BPMN) or other design guides like the Unified Modeling Language (UML).
Declare the grain
After describing the business process, the next step in the design is to declare the grain of the model. The grain of the model is the exact description of what the dimensional model should be focusing on. This could for instance be "an individual line item on a customer slip from a retail store". To clarify what the grain means, you should pick the central process and describe it with one sentence. Furthermore, the grain (sentence) is what you are going to build your dimensions and fact table from. You might find it necessary to go back to this step to alter the grain due to new information gained on what your model is supposed to be able to deliver.
Identify the dimensions
The third step in the design process is to define the dimensions of the model. The dimensions must be defined within the grain from the second step of the 4-step process. Dimensions are the foundation of the fact table, and are where the data for the fact table is collected. Typically dimensions are nouns like date, store, inventory etc. These dimensions are where all the data is stored. For example, the date dimension could contain data such as year, month and weekday.
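Because a date dimension is fully predictable, it is often generated rather than extracted from a source system. A small sketch (the attribute choice is illustrative only):

```python
from datetime import date, timedelta

def build_date_dimension(start: date, end: date):
    """Yield one row per calendar day with its descriptive attributes."""
    current = start
    while current <= end:
        yield {
            "date_key": int(current.strftime("%Y%m%d")),  # readable surrogate-style key
            "year": current.year,
            "month": current.month,
            "weekday": current.strftime("%A"),
        }
        current += timedelta(days=1)

rows = list(build_date_dimension(date(2024, 1, 1), date(2024, 1, 7)))
print(rows[0])  # {'date_key': 20240101, 'year': 2024, 'month': 1, 'weekday': 'Monday'}
```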
Identify the facts
After defining the dimensions, the next step in the process is to make the keys for the fact table and to identify the numeric facts that will populate each fact table row. This step is closely related to the business users of the system, since this is where they get access to data stored in the data warehouse. Therefore, most of the facts are numeric, additive figures, such as quantity or cost per unit.
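A sketch of this step at the grain declared earlier, one fact row per line item (all names are invented for the example); the measures are additive, so they can later be summed across any combination of dimensions:

```python
# One candidate fact row per order line (the declared grain); the *_key
# fields resolve to the dimensions, the rest are numeric measures.
line_items = [
    {"date_key": 20240101, "product_key": 7, "store_key": 3,
     "quantity": 2, "unit_cost": 4.50},
    {"date_key": 20240101, "product_key": 9, "store_key": 3,
     "quantity": 1, "unit_cost": 12.00},
]

# Derive a fully additive measure from the base measures.
fact_rows = [dict(li, extended_amount=li["quantity"] * li["unit_cost"])
             for li in line_items]

total_sales = sum(row["extended_amount"] for row in fact_rows)
print(total_sales)  # 21.0
```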
=== Dimension normalization ===
Dimensional normalization, or snowflaking, removes the redundant attributes that are kept in normal, flattened, de-normalized dimensions. Dimensions are instead decomposed into strictly joined sub-dimensions.
Snowflaking influences the data structure in a way that departs from a philosophy shared by many data warehouses: a single data (fact) table surrounded by multiple descriptive (dimension) tables.
Developers often don't normalize dimensions for several reasons:
Normalization makes the data structure more complex
Performance can be slower, due to the many joins between tables
The space savings are minimal
Bitmap indexes can't be used
Query performance. 3NF databases suffer from performance problems when aggregating or retrieving many dimensional values that may require analysis. If you are only going to do operational reports, you may be able to get by with 3NF, because your operational users will be looking for very fine-grained data.
There are some arguments on why normalization can be useful. It can be an advantage when part of a hierarchy is common to more than one dimension. For example, a geographic dimension may be reusable because both the customer and supplier dimensions use it.
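A minimal sketch of that case (hypothetical names): the geography hierarchy is normalized into its own sub-dimension, and both the customer and supplier dimensions reference it instead of repeating its attributes:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Shared, snowflaked geography sub-dimension.
CREATE TABLE dim_geography (
    geo_key INTEGER PRIMARY KEY,
    city    TEXT,
    region  TEXT,
    country TEXT
);
-- Both dimensions reuse the hierarchy instead of repeating it.
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    name         TEXT,
    geo_key      INTEGER REFERENCES dim_geography (geo_key)
);
CREATE TABLE dim_supplier (
    supplier_key INTEGER PRIMARY KEY,
    name         TEXT,
    geo_key      INTEGER REFERENCES dim_geography (geo_key)
);
""")
```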
== Benefits of dimensional modeling ==
Benefits of the dimensional model are the following:
Understandability. Compared to the normalized model, the dimensional model is easier to understand and more intuitive. In dimensional models, information is grouped into coherent business categories or dimensions, making it easier to read and interpret. Simplicity also allows software to navigate databases efficiently. In normalized models, data is divided into many discrete entities and even a simple business process might result in dozens of tables joined together in a complex way.
Query performance. Dimensional models are more denormalized and optimized for data querying, while normalized models seek to eliminate data redundancies and are optimized for transaction loading and updating. The predictable framework of a dimensional model allows the database to make strong assumptions about the data which may have a positive impact on performance. Each dimension is an equivalent entry point into the fact table, and this symmetrical structure allows effective handling of complex queries. Query optimization for star-joined databases is simple, predictable, and controllable.
Extensibility. Dimensional models are scalable and easily accommodate unexpected new data. Existing tables can be changed in place either by simply adding new data rows into the table or executing SQL alter table commands. No queries or applications that sit on top of the data warehouse need to be reprogrammed to accommodate changes. Old queries and applications continue to run without yielding different results. But in normalized models each modification should be considered carefully, because of the complex dependencies between database tables.
== Dimensional models, Hadoop, and big data ==
We still get the benefits of dimensional models on Hadoop and similar big data frameworks. However, some features of Hadoop require us to slightly adapt the standard approach to dimensional modelling.
The Hadoop File System is immutable: we can only add data, not update it. As a result we can only append records to dimension tables, and slowly changing dimensions become the default behavior on Hadoop. In order to get the latest, most up-to-date record in a dimension table we have three options. First, we can create a view that retrieves the latest record using windowing functions. Second, we can have a compaction service running in the background that recreates the latest state. Third, we can store our dimension tables in mutable storage, e.g. HBase, and federate queries across the two types of storage.
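The first option can be sketched as follows (hypothetical table and columns; SQLite 3.25+ stands in here for a SQL-on-Hadoop engine with window functions, such as Hive or Spark SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_customer (
    customer_id INTEGER,   -- business key
    name        TEXT,
    city        TEXT,
    load_ts     TEXT       -- append-only: every change adds a new row
);
INSERT INTO dim_customer VALUES
    (42, 'Acme Ltd', 'Oslo',   '2023-01-01'),
    (42, 'Acme Ltd', 'Bergen', '2024-06-01');

-- View exposing only the latest version of each customer.
CREATE VIEW dim_customer_current AS
SELECT customer_id, name, city, load_ts
FROM (
    SELECT *, ROW_NUMBER() OVER (
        PARTITION BY customer_id ORDER BY load_ts DESC) AS rn
    FROM dim_customer
)
WHERE rn = 1;
""")
print(con.execute("SELECT * FROM dim_customer_current").fetchall())
# [(42, 'Acme Ltd', 'Bergen', '2024-06-01')]
```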
The way data is distributed across HDFS makes it expensive to join data. In a distributed relational database (MPP) we can co-locate records with the same primary and foreign keys on the same node in a cluster. This makes it relatively cheap to join very large tables: no data needs to travel across the network to perform the join. This is very different on Hadoop and HDFS, where tables are split into big chunks and distributed across the nodes of the cluster. We don't have any control over how individual records and their keys are spread across the cluster. As a result, joins on Hadoop between two very large tables are quite expensive, as data has to travel across the network. We should avoid joins where possible. For a large fact and dimension table we can de-normalize the dimension table directly into the fact table. For two very large transaction tables we can nest the records of the child table inside the parent table and flatten out the data at run time.
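A toy sketch of the nesting strategy (plain Python dictionaries standing in for nested records in a columnar format such as Parquet): the child order lines live inside their parent order, so reading them back is a flatten step rather than a network join:

```python
# Child records nested inside the parent record, avoiding a distributed join.
orders = [
    {"order_id": 1, "customer": "Alice",
     "lines": [{"product": "pen", "qty": 2},
               {"product": "ink", "qty": 1}]},
]

def flatten(orders):
    """Flatten the nested order lines into one record per line at run time."""
    for order in orders:
        for line in order["lines"]:
            yield {"order_id": order["order_id"],
                   "customer": order["customer"], **line}

print(list(flatten(orders)))
```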
== Literature ==
Kimball, Ralph; Margy Ross (2013). The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling (3rd ed.). Wiley. ISBN 978-1-118-53080-1.
Ralph Kimball (1997). "A Dimensional Modeling Manifesto". DBMS and Internet Systems. 10 (9).
Margy Ross (Kimball Group) (2005). "Identifying Business Processes". Kimball Group, Design Tips (69). Archived from the original on 12 June 2013.
== References ==
Datavault or data vault modeling is a database modeling method that is designed to provide long-term historical storage of data coming in from multiple operational systems. It is also a method of looking at historical data that deals with issues such as auditing, tracing of data, loading speed and resilience to change as well as emphasizing the need to trace where all the data in the database came from. This means that every row in a data vault must be accompanied by record source and load date attributes, enabling an auditor to trace values back to the source. The concept was published in 2000 by Dan Linstedt.
Data vault modeling makes no distinction between good and bad data ("bad" meaning not conforming to business rules). This is summarized in the statement that a data vault stores "a single version of the facts" (also expressed by Dan Linstedt as "all the data, all of the time"), as opposed to the practice in other data warehouse methods of storing "a single version of the truth", where data that does not conform to the definitions is removed or "cleansed". A data vault enterprise data warehouse provides both: a single version of the facts and a single source of truth.
The modeling method is designed to be resilient to change in the business environment where the data being stored is coming from, by explicitly separating structural information from descriptive attributes. Data vault is designed to enable parallel loading as much as possible, so that very large implementations can scale out without the need for major redesign.
Unlike the star schema (dimensional modelling) and the classical relational model (3NF), data vault and anchor modeling are well-suited for capturing changes that occur when a source system is changed or added, but are considered advanced techniques which require experienced data architects. Both data vaults and anchor models are entity-based models, but anchor models have a more normalized approach.
== History and philosophy ==
In its early days, Dan Linstedt referred to the modeling technique which was to become data vault as common foundational warehouse architecture or common foundational modeling architecture. In data warehouse modeling there are two well-known competing options for modeling the layer where the data are stored. Either you model according to Ralph Kimball, with conformed dimensions and an enterprise data bus, or you model according to Bill Inmon with the database normalized. Both techniques have issues when dealing with changes in the systems feeding the data warehouse. For conformed dimensions you also have to cleanse data (to conform it) and this is undesirable in a number of cases since this inevitably will lose information. Data vault is designed to avoid or minimize the impact of those issues, by moving them to areas of the data warehouse that are outside the historical storage area (cleansing is done in the data marts) and by separating the structural items (business keys and the associations between the business keys) from the descriptive attributes.
Dan Linstedt, the creator of the method, describes the resulting database as follows:
"The Data Vault Model is a detail oriented, historical tracking and uniquely linked set of normalized tables that support one or more functional areas of business. It is a hybrid approach encompassing the best of breed between 3rd normal form (3NF) and star schema. The design is flexible, scalable, consistent and adaptable to the needs of the enterprise"
Data vault's philosophy is that all data is relevant data, even if it is not in line with established definitions and business rules. If data are not conforming to these definitions and rules then that is a problem for the business, not the data warehouse. The determination of data being "wrong" is an interpretation of the data that stems from a particular point of view that may not be valid for everyone, or at every point in time. Therefore the data vault must capture all data and only when reporting or extracting data from the data vault is the data being interpreted.
Another issue to which data vault is a response is that more and more there is a need for complete auditability and traceability of all the data in the data warehouse. Due to Sarbanes-Oxley requirements in the USA and similar measures in Europe this is a relevant topic for many business intelligence implementations, hence the focus of any data vault implementation is complete traceability and auditability of all information.
Data Vault 2.0 is the new specification. It is an open standard. The new specification consists of three pillars: the methodology (SEI/CMMI, Six Sigma, SDLC, etc.); the architecture (among others, an input layer, called the persistent staging area in Data Vault 2.0, a presentation layer, the data mart, and the handling of data quality services and master data services); and the model. Within the methodology, the implementation of best practices is defined. Data Vault 2.0 has a focus on including new components such as big data and NoSQL, and also focuses on the performance of the existing model. The old specification (documented here for the most part) is highly focused on data vault modeling. It is documented in the book Building a Scalable Data Warehouse with Data Vault 2.0.
It is necessary to evolve the specification to include the new components, along with the best practices in order to keep the EDW and BI systems current with the needs and desires of today's businesses.
=== History ===
Data vault modeling was originally conceived by Dan Linstedt in the 1990s and was released in 2000 as a public domain modeling method. In a series of five articles in The Data Administration Newsletter the basic rules of the Data Vault method are expanded and explained. These contain a general overview, an overview of the components, a discussion about end dates and joins, link tables, and an article on loading practices.
An alternative (and seldom used) name for the method is "Common Foundational Integration Modelling Architecture."
Data Vault 2.0 has arrived on the scene as of 2013 and brings to the table Big Data, NoSQL, unstructured, semi-structured seamless integration, along with methodology, architecture, and implementation best practices.
=== Alternative interpretations ===
According to Dan Linstedt, the Data Model is inspired by (or patterned off) a simplistic view of neurons, dendrites, and synapses – where neurons are associated with Hubs and Hub Satellites, Links are dendrites (vectors of information), and other Links are synapses (vectors in the opposite direction). By using a data mining set of algorithms, links can be scored with confidence and strength ratings. They can be created and dropped on the fly in accordance with learning about relationships that currently don't exist. The model can be automatically morphed, adapted, and adjusted as it is used and fed new structures.
Another view is that a data vault model provides an ontology of the Enterprise in the sense that it describes the terms in the domain of the enterprise (Hubs) and the relationships among them (Links), adding descriptive attributes (Satellites) where necessary.
Another way to think of a data vault model is as a graphical model. The data vault model actually provides a "graph based" model with hubs and relationships in a relational database world. In this manner, the developer can use SQL to get at graph-based relationships with sub-second responses.
== Basic notions ==
Data vault attempts to solve the problem of dealing with change in the environment by separating the business keys (that do not mutate as often, because they uniquely identify a business entity) and the associations between those business keys, from the descriptive attributes of those keys.
The business keys and their associations are structural attributes, forming the skeleton of the data model. The data vault method has as one of its main axioms that real business keys only change when the business changes and are therefore the most stable elements from which to derive the structure of a historical database. If you use these keys as the backbone of a data warehouse, you can organize the rest of the data around them. This means that choosing the correct keys for the hubs is of prime importance for the stability of your model. The keys are stored in tables with a few constraints on the structure. These key-tables are called hubs.
=== Hubs ===
Hubs contain a list of unique business keys with low propensity to change. Hubs also contain a surrogate key for each Hub item and metadata describing the origin of the business key. The descriptive attributes for the information on the Hub (such as the description for the key, possibly in multiple languages) are stored in structures called Satellite tables which will be discussed below.
The Hub contains at least the following fields:
a surrogate key, used to connect the other structures to this table.
a business key, the driver for this hub. The business key can consist of multiple fields.
the record source, which can be used to see what system loaded each business key first.
optionally, you can also have metadata fields with information about manual updates (user/time) and the extraction date.
A hub is not allowed to contain multiple business keys, except when two systems deliver the same business key but with collisions that have different meanings.
Hubs should normally have at least one satellite.
==== Hub example ====
This is an example for a hub-table containing cars, called "Car" (H_CAR). The driving key is vehicle identification number.
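The table itself is not reproduced here; a plausible sketch of its layout, following the field list above (the column names are assumptions, in SQLite syntax):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE H_CAR (
    H_CAR_SEQ  INTEGER PRIMARY KEY,  -- surrogate key
    VIN        TEXT NOT NULL UNIQUE, -- business key: vehicle identification number
    LOAD_DTS   TEXT NOT NULL,        -- extraction/load date
    RECORD_SRC TEXT NOT NULL         -- system that first loaded this key
)""")
```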
=== Links ===
Associations or transactions between business keys (relating for instance the hubs for customer and product with each other through the purchase transaction) are modeled using link tables. These tables are basically many-to-many join tables, with some metadata.
Links can link to other links, to deal with changes in granularity (for instance, adding a new key to a database table would change the grain of the database table). For instance, if you have an association between customer and address, you could add a reference to a link between the hubs for product and transport company. This could be a link called "Delivery". Referencing a link in another link is considered a bad practice, because it introduces dependencies between links that make parallel loading more difficult. Since a link to another link is the same as a new link with the hubs from the other link, in these cases creating the links without referencing other links is the preferred solution (see the section on loading practices for more information).
Links sometimes link hubs to information that is not by itself enough to construct a hub. This occurs when one of the business keys associated by the link is not a real business key. As an example, take an order form with "order number" as key, and order lines that are keyed with a semi-random number to make them unique. Let's say, "unique number". The latter key is not a real business key, so it is no hub. However, we do need to use it in order to guarantee the correct granularity for the link. In this case, we do not use a hub with surrogate key, but add the business key "unique number" itself to the link. This is done only when there is no possibility of ever using the business key for another link or as key for attributes in a satellite. This construct has been called a 'peg-legged link' by Dan Linstedt on his (now defunct) forum.
Links contain the surrogate keys for the hubs that are linked, their own surrogate key for the link and metadata describing the origin of the association. The descriptive attributes for the information on the association (such as the time, price or amount) are stored in structures called satellite tables which are discussed below.
==== Link example ====
This is an example for a link-table between two hubs for cars (H_CAR) and persons (H_PERSON). The link is called "Driver" (L_DRIVER).
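Again, the table is not reproduced here; a sketch of its likely layout (assumed names; the referenced hub tables are omitted for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE L_DRIVER (
    L_DRIVER_SEQ INTEGER PRIMARY KEY,  -- surrogate key of the link
    H_CAR_SEQ    INTEGER NOT NULL,     -- surrogate key from hub H_CAR
    H_PERSON_SEQ INTEGER NOT NULL,     -- surrogate key from hub H_PERSON
    LOAD_DTS     TEXT NOT NULL,        -- load date
    RECORD_SRC   TEXT NOT NULL,        -- origin of the association
    UNIQUE (H_CAR_SEQ, H_PERSON_SEQ)   -- one row per car/person pairing
)""")
```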
=== Satellites ===
The hubs and links form the structure of the model, but have no temporal attributes and hold no descriptive attributes. These are stored in separate tables called satellites. These consist of metadata linking them to their parent hub or link, metadata describing the origin of the association and attributes, as well as a timeline with start and end dates for the attribute. Where the hubs and links provide the structure of the model, the satellites provide the "meat" of the model, the context for the business processes that are captured in hubs and links. These attributes are stored both with regards to the details of the matter as well as the timeline and can range from quite complex (all of the fields describing a client's complete profile) to quite simple (a satellite on a link with only a valid-indicator and a timeline).
Usually the attributes are grouped in satellites by source system. However, descriptive attributes such as size, cost, speed, amount or color can change at different rates, so you can also split these attributes up in different satellites based on their rate of change.
All the tables contain metadata, minimally describing at least the source system and the date on which this entry became valid, giving a complete historical view of the data as it enters the data warehouse.
An effectivity satellite is a satellite built on a link, "and record[s] the time period when the corresponding link records start and end effectivity".
==== Satellite example ====
This is an example for a satellite on the drivers-link between the hubs for cars and persons, called "Driver insurance" (S_DRIVER_INSURANCE). This satellite contains attributes that are specific to the insurance of the relationship between the car and the person driving it, for instance an indicator whether this is the primary driver, the name of the insurance company for this car and person (could also be a separate hub) and a summary of the number of accidents involving this combination of vehicle and driver. Also included is a reference to a lookup- or reference table called R_RISK_CATEGORY containing the codes for the risk category in which this relationship is deemed to fall.
(*) at least one attribute is mandatory.
(**) sequence number becomes mandatory if it is needed to enforce uniqueness for multiple valid satellites on the same hub or link.
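A sketch of what such a satellite could look like (column names are assumptions; the (*) and (**) notes above refer to the attribute columns and to an optional sequence number):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE S_DRIVER_INSURANCE (
    L_DRIVER_SEQ    INTEGER NOT NULL,  -- parent link L_DRIVER
    LOAD_DTS        TEXT NOT NULL,     -- start of the validity timeline
    LOAD_END_DTS    TEXT,              -- end of validity (NULL while current)
    RECORD_SRC      TEXT NOT NULL,     -- origin of the attributes
    PRIMARY_DRIVER  INTEGER,           -- indicator attribute (*)
    INSURANCE_CO    TEXT,              -- could also be modeled as its own hub
    NR_OF_ACCIDENTS INTEGER,
    RISK_CTGY       TEXT,              -- code resolved via R_RISK_CATEGORY
    PRIMARY KEY (L_DRIVER_SEQ, LOAD_DTS)  -- a sequence number (**) would be
                                          -- added here if needed for uniqueness
)""")
```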
=== Reference tables ===
Reference tables are a normal part of a healthy data vault model. They are there to prevent redundant storage of simple reference data that is referenced a lot. More formally, Dan Linstedt defines reference data as follows:
Any information deemed necessary to resolve descriptions from codes, or to translate keys in to (sic) a consistent manner. Many of these fields are "descriptive" in nature and describe a specific state of the other more important information. As such, reference data lives in separate tables from the raw Data Vault tables.
Reference tables are referenced from Satellites, but never bound with physical foreign keys. There is no prescribed structure for reference tables: use what works best in your specific case, ranging from simple lookup tables to small data vaults or even stars. They can be historical or have no history, but it is recommended that you stick to the natural keys and not create surrogate keys in that case. Normally, data vaults have a lot of reference tables, just like any other Data Warehouse.
==== Reference example ====
This is an example of a reference table with risk categories for drivers of vehicles. It can be referenced from any satellite in the data vault. For now we reference it from satellite S_DRIVER_INSURANCE. The reference table is R_RISK_CATEGORY.
(*) at least one attribute is mandatory.
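A sketch of such a reference table (assumed layout), keyed by its natural code rather than a surrogate key:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE R_RISK_CATEGORY (
    RISK_CTGY   TEXT PRIMARY KEY,  -- natural key; no surrogate is created
    DESCRIPTION TEXT NOT NULL      -- (*) at least one attribute
);
INSERT INTO R_RISK_CATEGORY VALUES
    ('LOW',  'Low-risk driver'),
    ('HIGH', 'High-risk driver');
""")
```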
== Loading practices ==
The ETL for updating a data vault model is fairly straightforward (see Data Vault Series 5 – Loading Practices). First you have to load all the hubs, creating surrogate IDs for any new business keys. Having done that, you can resolve all business keys to surrogate IDs by querying the hubs. The second step is to resolve the links between hubs and create surrogate IDs for any new associations. At the same time, you can also create all satellites that are attached to hubs, since you can resolve each key to a surrogate ID. Once you have created all the new links with their surrogate keys, you can add the satellites to all the links.
Since the hubs are not joined to each other except through links, you can load all the hubs in parallel. Since links are not attached directly to each other, you can load all the links in parallel as well. Since satellites can be attached only to hubs and links, you can also load these in parallel.
The ETL is quite straightforward and lends itself to easy automation or templating. Problems occur only with links relating to other links, because resolving the business keys in the link only leads to another link that has to be resolved as well. Due to the equivalence of this situation with a link to multiple hubs, this difficulty can be avoided by remodeling such cases and this is in fact the recommended practice.
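The resulting load order can be sketched schematically (the table names echo the examples above, and load is a placeholder for the real ETL step for one table):

```python
from concurrent.futures import ThreadPoolExecutor

def load(table: str) -> str:
    # Placeholder for the actual ETL step that loads one table.
    return f"loaded {table}"

hubs            = ["H_CAR", "H_PERSON"]
links           = ["L_DRIVER"]
hub_satellites  = ["S_CAR_DETAILS", "S_PERSON_DETAILS"]
link_satellites = ["S_DRIVER_INSURANCE"]

with ThreadPoolExecutor() as pool:
    # Stage 1: hubs are independent of each other.
    list(pool.map(load, hubs))
    # Stage 2: links and hub satellites only need the hub surrogate keys.
    list(pool.map(load, links + hub_satellites))
    # Stage 3: link satellites need the link keys created in stage 2.
    list(pool.map(load, link_satellites))
```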
Data is never deleted from the data vault, unless you have a technical error while loading data.
== Data vault and dimensional modelling ==
The data vault modelled layer is normally used to store data. It is not optimised for query performance, nor is it easy to query with the well-known query tools such as Cognos, Oracle Business Intelligence Suite Enterprise Edition, SAP Business Objects, Pentaho et al. Since these end-user computing tools expect or prefer their data to be contained in a dimensional model, a conversion is usually necessary.
For this purpose, the hubs and related satellites on those hubs can be considered as dimensions and the links and related satellites on those links can be viewed as fact tables in a dimensional model. This enables you to quickly prototype a dimensional model out of a data vault model using views.
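Continuing the car/driver example, a rough prototype of such views (simplified, assumed tables): a hub joined to its satellite plays the role of a dimension, and a link joined to its satellite plays the role of a fact table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE H_CAR    (H_CAR_SEQ INTEGER PRIMARY KEY, VIN TEXT);
CREATE TABLE S_CAR    (H_CAR_SEQ INTEGER, LOAD_DTS TEXT, COLOR TEXT);
CREATE TABLE L_DRIVER (L_DRIVER_SEQ INTEGER PRIMARY KEY,
                       H_CAR_SEQ INTEGER, H_PERSON_SEQ INTEGER);
CREATE TABLE S_DRIVER_INSURANCE (L_DRIVER_SEQ INTEGER, LOAD_DTS TEXT,
                                 NR_OF_ACCIDENTS INTEGER);

-- Hub + satellite presented as a dimension.
CREATE VIEW dim_car AS
SELECT h.H_CAR_SEQ AS car_key, h.VIN, s.COLOR
FROM H_CAR h JOIN S_CAR s ON s.H_CAR_SEQ = h.H_CAR_SEQ;

-- Link + satellite presented as a fact table.
CREATE VIEW fact_driver AS
SELECT l.H_CAR_SEQ AS car_key, l.H_PERSON_SEQ AS person_key,
       s.NR_OF_ACCIDENTS
FROM L_DRIVER l
JOIN S_DRIVER_INSURANCE s ON s.L_DRIVER_SEQ = l.L_DRIVER_SEQ;
""")
```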
Note that while it is relatively straightforward to move data from a data vault model to a (cleansed) dimensional model, the reverse is not as easy, given the denormalized nature of the dimensional model's fact tables, fundamentally different to the third normal form of the data vault.
== Methodology ==
The data vault methodology is based on SEI/CMMI Level 5 best practices. It includes multiple components of CMMI Level 5, and combines them with best practices from Six Sigma, total quality management (TQM), and SDLC. Particularly, it is focused on Scott Ambler's agile methodology for build out and deployment. Data vault projects have a short, scope-controlled release cycle and should consist of a production release every 2 to 3 weeks.
Teams using the data vault methodology should readily adapt to the repeatable, consistent, and measurable projects that are expected at CMMI Level 5. Data that flow through the EDW data vault system will begin to follow the TQM life-cycle that has long been missing from BI (business intelligence) projects.
== Tools ==
Some examples of tools are:
DataVault4dbt
2150 Datavault Builder
Astera DW Builder
Wherescape
Vaultspeed
AutomateDV
== See also ==
Bill Inmon – American computer scientist
Data lake – Repository of data stored in a raw format
Data warehouse – Centralized storage of knowledge
The Kimball lifecycle – Methodology for developing data warehousesPages displaying short descriptions of redirect targets, developed by Ralph Kimball – American computer scientist
Staging area – Location where items are gathered before use
Agile Business Intelligence – Use of agile software development for business intelligence projectsPages displaying short descriptions of redirect targets
== References ==
=== Citations ===
=== Sources ===
== Literature ==
Patrick Cuba: The Data Vault Guru. A Pragmatic Guide on Building a Data Vault. Selbstverlag, ohne Ort 2020, ISBN 979-86-9130808-6.
John Giles: The Elephant in the Fridge. Guided Steps to Data Vault Success through Building Business-Centered Models. Technics, Basking Ridge 2019, ISBN 978-1-63462-489-3.
Kent Graziano: Better Data Modeling. An Introduction to Agile Data Engineering Using Data Vault 2.0. Data Warrior, Houston 2015.
Hans Hultgren: Modeling the Agile Data Warehouse with Data Vault. Brighton Hamilton, Denver u. a. 2012, ISBN 978-0-615-72308-2.
Dirk Lerner: Data Vault für agile Data-Warehouse-Architekturen. In: Stephan Trahasch, Michael Zimmer (Hrsg.): Agile Business Intelligence. Theorie und Praxis. dpunkt.verlag, Heidelberg 2016, ISBN 978-3-86490-312-0, S. 83–98.
Daniel Linstedt: Super Charge Your Data Warehouse. Invaluable Data Modeling Rules to Implement Your Data Vault. Linstedt, Saint Albans, Vermont 2011, ISBN 978-1-4637-7868-2.
Daniel Linstedt, Michael Olschimke: Building a Scalable Data Warehouse with Data Vault 2.0. Morgan Kaufmann, Waltham, Massachusetts 2016, ISBN 978-0-12-802510-9.
Dani Schnider, Claus Jordan u. a.: Data Warehouse Blueprints. Business Intelligence in der Praxis. Hanser, München 2016, ISBN 978-3-446-45075-2, S. 35–37, 161–173.
== External links ==
The homepage of Dan Linstedt, the inventor of Data Vault modeling | Wikipedia/Data_vault_modelling |
An entity is something that exists as itself. It does not need to be of material existence. In particular, abstractions and legal fictions are usually regarded as entities. In general, there is also no presumption that an entity is animate, or present. The verb tense of this form is to 'entitize' - meaning to convert into an entity; to perceive as tangible or alive.
The term is broad in scope and may refer to animals; natural features such as mountains; inanimate objects such as tables; numbers or sets as symbols written on a paper; human contrivances such as laws, corporations and academic disciplines; or supernatural beings such as gods and spirits.
The adjectival form is entitative.
== Etymology ==
The word entity is derived from the Latin entitas, which in turn derives from the Latin ens meaning "being" or "existing" (compare English essence). Entity may hence literally be taken to mean "thing which exists".
== In philosophy ==
Ontology is the study of concepts of existence, and of recognition of entities. The words ontic and entity are derived respectively from the ancient Greek and Latin present participles that mean "being".
In an ontic inquiry... one asks about the properties or the physical relations and structures peculiar to some entity – in the pen's case, for example, we might make the following ontic observations about it: it is black, full of blue ink, and sitting on top of my desk.
== In law, politics, economics, accounting ==
In law, a legal entity is an entity that is capable of bearing legal rights and obligations, such as a natural person or an artificial person (e.g. business entity or a corporate entity).
In politics, entity is used as term for territorial divisions of some countries (e.g. Bosnia and Herzegovina).
In economics, economic entity is one of the assumptions made in generally accepted accounting principles. Almost any type of organization or unit in society can be an economic entity.
In accounting, the entity concept is the concept that a business or an organization and its owners are treated as two separate parties.
== In medicine ==
In medicine, a disease entity is an illness due to a particular definite cause or to a specific pathological process. While a disease entity is not defined by a syndrome, it may or may not be manifest in one or more particular syndromes.
== In computer science ==
In computer science, an entity is an object that has an identity, which is independent of the changes of its attributes. It represents long-lived information relevant for the users and is usually stored in a database.
== See also ==
Digital identity
Elementary entity
Entity–relationship model
Entity–control–boundary
Entity realism, a form of scientific realism
Entitativity
Everything
Html entity
Non-physical entity
Object (philosophy)
Circular reference
== References ==
== External links ==
Media related to Entities at Wikimedia Commons
Quotations related to Entity at Wikiquote | Wikipedia/Entity_(computer_science) |
The nested set model is a technique for representing nested set collections (also known as trees or hierarchies) in relational databases.
It is based on Nested Intervals, that "are immune to hierarchy reorganization problem, and allow answering ancestor path hierarchical queries algorithmically — without accessing the stored hierarchy relation".
== Motivation ==
The standard relational algebra and relational calculus, and the SQL operations based on them, are unable to express directly all desirable operations on hierarchies. The nested set model is a solution to that problem.
An alternative solution is the expression of the hierarchy as a parent-child relation. Joe Celko called this the adjacency list model. If the hierarchy can have arbitrary depth, the adjacency list model does not allow the expression of operations such as comparing the contents of hierarchies of two elements, or determining whether an element is somewhere in the subhierarchy of another element. When the hierarchy is of fixed or bounded depth, the operations are possible, but expensive, due to the necessity of performing one relational join per level. This is often known as the bill of materials problem.
Hierarchies may be expressed easily by switching to a graph database. Alternatively, several resolutions exist for the relational model and are available as a workaround in some relational database management systems:
support for a dedicated hierarchy data type, such as in SQL's hierarchical query facility;
extending the relational language with hierarchy manipulations, such as in the nested relational algebra.
extending the relational language with transitive closure, such as SQL's CONNECT statement; this allows a parent-child relation to be used, but execution remains expensive;
the queries can be expressed in a language that supports iteration and is wrapped around the relational operations, such as PL/SQL, T-SQL or a general-purpose programming language
When these solutions are not available or not feasible, another approach must be taken.
== Technique ==
The nested set model is to number the nodes according to a tree traversal, which visits each node twice, assigning numbers in the order of visiting, and at both visits. This leaves two numbers for each node, which are stored as two attributes. Querying becomes inexpensive: hierarchy membership can be tested by comparing these numbers. Updating requires renumbering and is therefore expensive. Refinements that use rational numbers instead of integers can avoid renumbering, and so are faster to update, although much more complicated.
== Example ==
In a clothing store catalog, clothing may be categorized according to the hierarchy given on the left:
The "Clothing" category, with the highest position in the hierarchy, encompasses all subordinating categories. It is therefore given left and right domain values of 1 and 22, the latter value being the double of the total number of nodes being represented. The next hierarchical level contains "Men's" and "Women's", both containing levels within themselves that must be accounted for. Each level's data node is assigned left and right domain values according to the number of sublevels contained within, as shown in the table data.
== Performance ==
Queries using nested sets can be expected to be faster than queries using a stored procedure to traverse an adjacency list, and so are the faster option for databases which lack native recursive query constructs, such as MySQL 5.x. However, recursive SQL queries can be expected to perform comparably for 'find immediate descendants' queries, and much faster for other depth search queries, and so are the faster option for databases which provide them, such as PostgreSQL, Oracle, and Microsoft SQL Server. MySQL used to lack recursive query constructs but added such features in version 8.
== Drawbacks ==
The use case for a dynamic endless database tree hierarchy is rare. The Nested Set model is appropriate where the tree element and one or two attributes are the only data, but is a poor choice when more complex relational data exists for the elements in the tree. Given an arbitrary starting depth for a category of 'Vehicles' and a child of 'Cars' with a child of 'Mercedes', a foreign key table relationship must be established unless the tree table is natively non-normalized. Attributes of a newly created tree item may not share all attributes with a parent, child or even a sibling. If a foreign key table is established for a table of 'Plants' attributes, no integrity is given to the child attribute data of 'Trees' and its child 'Oak'. Therefore, in each case of an item inserted into the tree, a foreign key table of the item's attributes must be created for all but the most trivial of use cases.
If the tree isn't expected to change often, a properly normalized hierarchy of attribute tables can be created in the initial design of a system, leading to simpler, more portable SQL statements; specifically ones that don't require an arbitrary number of runtime, programmatically created or deleted tables for changes to the tree. For more complex systems, hierarchy can be developed through relational models rather than an implicit numeric tree structure. Depth of an item is simply another attribute rather than the basis for an entire DB architecture. As stated in SQL Antipatterns:
Nested Sets is a clever solution – maybe too clever. It also fails to support referential integrity. It’s best used when you need to query a tree more frequently than you need to modify the tree.
The model doesn't allow for multiple parent categories. For example, an 'Oak' could be a child of 'Tree-Type', but also 'Wood-Type'. An additional tagging or taxonomy has to be established to accommodate this, again leading to a design more complex than a straightforward fixed model.
Nested sets are very slow for inserts because it requires updating left and right domain values for all records in the table after the insert. This can cause a lot of database stress as many rows are rewritten and indexes rebuilt. However, if it is possible to store a forest of small trees in table instead of a single big tree, the overhead may be significantly reduced, since only one small tree must be updated.
The nested interval model does not suffer from this problem, but is more complex to implement, and is not as well known. It still suffers from the relational foreign-key table problem. The nested interval model stores the position of the nodes as rational numbers expressed as quotients (n/d). [1]
== Variations ==
Using the nested set model as described above has some performance limitations during certain tree traversal operations. For example, trying to find the immediate child nodes given a parent node requires pruning the subtree to a specific level as in the following SQL code example:
Or, equivalently:
The query will be more complicated when searching for children more than one level deep. To overcome this limitation and simplify tree traversal an additional column is added to the model to maintain the depth of a node within a tree.
In this model, finding the immediate children given a parent node can be accomplished with the following SQL code:
== See also ==
Adjacency list
Calkin–Wilf tree
Tree traversal
Tree (data structure)
== References ==
== External links ==
Troels' links to Hierarchical data in RDBMSs
Managing hierarchical data in relational databases
PHP PEAR Implementation for Nested Sets – by Daniel Khan
Transform any Adjacency List to Nested Sets using MySQL stored procedures
PHP Doctrine DBAL implementation for Nested Sets – by PreviousNext
R Nested Set – Nested Set example in R | Wikipedia/Nested_set_model |
The relational model (RM) is an approach to managing data using a structure and language consistent with first-order predicate logic, first described in 1969 by English computer scientist Edgar F. Codd, where all data are represented in terms of tuples, grouped into relations. A database organized in terms of the relational model is a relational database.
The purpose of the relational model is to provide a declarative method for specifying data and queries: users directly state what information the database contains and what information they want from it, and let the database management system software take care of describing data structures for storing the data and retrieval procedures for answering queries.
Most relational databases use the SQL data definition and query language; these systems implement what can be regarded as an engineering approximation to the relational model. A table in a SQL database schema corresponds to a predicate variable; the contents of a table to a relation; key constraints, other constraints, and SQL queries correspond to predicates. However, SQL databases deviate from the relational model in many details, and Codd fiercely argued against deviations that compromise the original principles.
== History ==
The relational model was developed by Edgar F. Codd as a general model of data, and subsequently promoted by Chris Date and Hugh Darwen among others. In their 1995 The Third Manifesto, Date and Darwen try to demonstrate how the relational model can accommodate certain "desired" object-oriented features.
=== Extensions ===
Some years after publication of his 1970 model, Codd proposed a three-valued logic (True, False, Missing/NULL) version of it to deal with missing information, and in his The Relational Model for Database Management Version 2 (1990) he went a step further with a four-valued logic (True, False, Missing but Applicable, Missing but Inapplicable) version.
== Conceptualization ==
=== Basic concepts ===
A relation consists of a heading and a body. The heading defines a set of attributes, each with a name and data type (sometimes called a domain). The number of attributes in this set is the relation's degree or arity. The body is a set of tuples. A tuple is a collection of n values, where n is the relation's degree, and each value in the tuple corresponds to a unique attribute. The number of tuples in this set is the relation's cardinality.: 17–22
Relations are represented by relational variables or relvars, which can be reassigned.: 22–24 A database is a collection of relvars.: 112–113
In this model, databases follow the Information Principle: At any given time, all information in the database is represented solely by values within tuples, corresponding to attributes, in relations identified by relvars.: 111
=== Constraints ===
A database may define arbitrary boolean expressions as constraints. If all constraints evaluate as true, the database is consistent; otherwise, it is inconsistent. If a change to a database's relvars would leave the database in an inconsistent state, that change is illegal and must not succeed.: 91
In general, constraints are expressed using relational comparison operators, of which just one, "is subset of" (⊆), is theoretically sufficient.
Two special cases of constraints are expressed as keys and foreign keys:
==== Keys ====
A candidate key, or simply a key, is the smallest subset of attributes guaranteed to uniquely differentiate each tuple in a relation. Since each tuple in a relation must be unique, every relation necessarily has a key, which may be its complete set of attributes. A relation may have multiple keys, as there may be multiple ways to uniquely differentiate each tuple.: 31–33
An attribute may be unique across tuples without being a key. For example, a relation describing a company's employees may have two attributes: ID and Name. Even if no employees currently share a name, if it is possible to eventually hire a new employee with the same name as a current employee, the attribute subset {Name} is not a key. Conversely, if the subset {ID} is a key, this means not only that no employees currently share an ID, but that no employees will ever share an ID.: 31–33
==== Foreign keys ====
A foreign key is a subset of attributes A in a relation R1 that corresponds with a key of another relation R2, with the property that the projection of R1 on A is a subset of the projection of R2 on A. In other words, if a tuple in R1 contains values for a foreign key, there must be a corresponding tuple in R2 containing the same values for the corresponding key.: 34
=== Relational operations ===
Users (or programs) request data from a relational database by sending it a query. In response to a query, the database returns a result set.
Often, data from multiple tables are combined into one, by doing a join. Conceptually, this is done by taking all possible combinations of rows (the Cartesian product), and then filtering out everything except the answer.
There are a number of relational operations in addition to join. These include project (the process of eliminating some of the columns), restrict (the process of eliminating some of the rows), union (a way of combining two tables with similar structures), difference (that lists the rows in one table that are not found in the other), intersect (that lists the rows found in both tables), and product (mentioned above, which combines each row of one table with each row of the other). Depending on which other sources you consult, there are a number of other operators – many of which can be defined in terms of those listed above. These include semi-join, outer operators such as outer join and outer union, and various forms of division. Then there are operators to rename columns, and summarizing or aggregating operators, and if you permit relation values as attributes (relation-valued attribute), then operators such as group and ungroup.
The flexibility of relational databases allows programmers to write queries that were not anticipated by the database designers. As a result, relational databases can be used by multiple applications in ways the original designers did not foresee, which is especially important for databases that might be used for a long time (perhaps several decades). This has made the idea and implementation of relational databases very popular with businesses.
=== Database normalization ===
Relations are classified based upon the types of anomalies to which they're vulnerable. A database that is in the first normal form is vulnerable to all types of anomalies, while a database that is in the domain/key normal form has no modification anomalies. Normal forms are hierarchical in nature. That is, the lowest level is the first normal form, and the database cannot meet the requirements for higher level normal forms without first having met all the requirements of the lesser normal forms.
== Logical interpretation ==
The relational model is a formal system. A relation's attributes define a set of logical propositions. Each proposition can be expressed as a tuple. The body of a relation is a subset of these tuples, representing which propositions are true. Constraints represent additional propositions which must also be true. Relational algebra is a set of logical rules that can validly infer conclusions from these propositions.: 95–101
The definition of a tuple allows for a unique empty tuple with no values, corresponding to the empty set of attributes. If a relation has a degree of 0 (i.e. its heading contains no attributes), it may have either a cardinality of 0 (a body containing no tuples) or a cardinality of 1 (a body containing the single empty tuple). These relations represent Boolean truth values. The relation with degree 0 and cardinality 0 is False, while the relation with degree 0 and cardinality 1 is True.: 221–223
=== Example ===
If a relation of Employees contains the attributes {Name, ID}, then the tuple {Alice, 1} represents the proposition: "There exists an employee named Alice with ID 1". This proposition may be true or false. If this tuple exists in the relation's body, the proposition is true (there is such an employee). If this tuple is not in the relation's body, the proposition is false (there is no such employee).: 96–97
Furthermore, if {ID} is a key, then a relation containing the tuples {Alice, 1} and {Bob, 1} would represent the following contradiction:
There exists an employee with the name Alice and the ID 1.
There exists an employee with the name Bob and the ID 1.
There do not exist multiple employees with the same ID.
Under the principle of explosion, this contradiction would allow the system to prove that any arbitrary proposition is true. The database must enforce the key constraint to prevent this.: 104
== Examples ==
=== Database ===
An idealized, very simple example of a description of some relvars (relation variables) and their attributes:
Customer (Customer ID, Name)
Order (Order ID, Customer ID, Invoice ID, Date)
Invoice (Invoice ID, Customer ID, Order ID, Status)
In this design we have three relvars: Customer, Order, and Invoice. The bold, underlined attributes are candidate keys. The non-bold, underlined attributes are foreign keys.
Usually one candidate key is chosen to be called the primary key and used in preference over the other candidate keys, which are then called alternate keys.
A candidate key is a unique identifier enforcing that no tuple will be duplicated; this would make the relation into something else, namely a bag, by violating the basic definition of a set. Both foreign keys and superkeys (that includes candidate keys) can be composite, that is, can be composed of several attributes. Below is a tabular depiction of a relation of our example Customer relvar; a relation can be thought of as a value that can be attributed to a relvar.
=== Customer relation ===
If we attempted to insert a new customer with the ID 123, this would violate the design of the relvar since Customer ID is a primary key and we already have a customer 123. The DBMS must reject a transaction such as this that would render the database inconsistent by a violation of an integrity constraint. However, it is possible to insert another customer named Alice, as long as this new customer has a unique ID, since the Name field is not part of the primary key.
Foreign keys are integrity constraints enforcing that the value of the attribute set is drawn from a candidate key in another relation. For example, in the Order relation the attribute Customer ID is a foreign key. A join is the operation that draws on information from several relations at once. By joining relvars from the example above we could query the database for all of the Customers, Orders, and Invoices. If we only wanted the tuples for a specific customer, we would specify this using a restriction condition. If we wanted to retrieve all of the Orders for Customer 123, we could query the database to return every row in the Order table with Customer ID 123 .
There is a flaw in our database design above. The Invoice relvar contains an Order ID attribute. So, each tuple in the Invoice relvar will have one Order ID, which implies that there is precisely one Order for each Invoice. But in reality an invoice can be created against many orders, or indeed for no particular order. Additionally the Order relvar contains an Invoice ID attribute, implying that each Order has a corresponding Invoice. But again this is not always true in the real world. An order is sometimes paid through several invoices, and sometimes paid without an invoice. In other words, there can be many Invoices per Order and many Orders per Invoice. This is a many-to-many relationship between Order and Invoice (also called a non-specific relationship). To represent this relationship in the database a new relvar should be introduced whose role is to specify the correspondence between Orders and Invoices:
OrderInvoice (Order ID, Invoice ID)
Now, the Order relvar has a one-to-many relationship to the OrderInvoice table, as does the Invoice relvar. If we want to retrieve every Invoice for a particular Order, we can query for all orders where Order ID in the Order relation equals the Order ID in OrderInvoice, and where Invoice ID in OrderInvoice equals the Invoice ID in Invoice.
== Application to relational databases ==
A data type in a relational database might be the set of integers, the set of character strings, the set of dates, etc. The relational model does not dictate what types are to be supported.
Attributes are commonly represented as columns, tuples as rows, and relations as tables. A table is specified as a list of column definitions, each of which specifies a unique column name and the type of the values that are permitted for that column. An attribute value is the entry in a specific column and row.
A database relvar (relation variable) is commonly known as a base table. The heading of its assigned value at any time is as specified in the table declaration and its body is that most recently assigned to it by an update operator (typically, INSERT, UPDATE, or DELETE). The heading and body of the table resulting from evaluating a query are determined by the definitions of the operators used in that query.
=== SQL and the relational model ===
SQL, initially pushed as the standard language for relational databases, deviates from the relational model in several places. The current ISO SQL standard doesn't mention the relational model or use relational terms or concepts.
According to the relational model, a Relation's attributes and tuples are mathematical sets, meaning they are unordered and unique. In a SQL table, neither rows nor columns are proper sets. A table may contain both duplicate rows and duplicate columns, and a table's columns are explicitly ordered. SQL uses a Null value to indicate missing data, which has no analog in the relational model. Because a row can represent unknown information, SQL does not adhere to the relational model's Information Principle.: 153–155, 162
== Set-theoretic formulation ==
Basic notions in the relational model are relation names and attribute names. We will represent these as strings such as "Person" and "name" and we will usually use the variables
r
,
s
,
t
,
…
{\displaystyle r,s,t,\ldots }
and
a
,
b
,
c
{\displaystyle a,b,c}
to range over them. Another basic notion is the set of atomic values that contains values such as numbers and strings.
Our first definition concerns the notion of tuple, which formalizes the notion of row or record in a table:
Tuple
A tuple is a partial function from attribute names to atomic values.
Header
A header is a finite set of attribute names.
Projection
The projection of a tuple
t
{\displaystyle t}
on a finite set of attributes
A
{\displaystyle A}
is
t
[
A
]
=
{
(
a
,
v
)
:
(
a
,
v
)
∈
t
,
a
∈
A
}
{\displaystyle t[A]=\{(a,v):(a,v)\in t,a\in A\}}
.
The next definition defines relation that formalizes the contents of a table as it is defined in the relational model.
Relation
A relation is a tuple
(
H
,
B
)
{\displaystyle (H,B)}
with
H
{\displaystyle H}
, the header, and
B
{\displaystyle B}
, the body, a set of tuples that all have the domain
H
{\displaystyle H}
.
Such a relation closely corresponds to what is usually called the extension of a predicate in first-order logic except that here we identify the places in the predicate with attribute names. Usually in the relational model a database schema is said to consist of a set of relation names, the headers that are associated with these names and the constraints that should hold for every instance of the database schema.
Relation universe
A relation universe
U
{\displaystyle U}
over a header
H
{\displaystyle H}
is a non-empty set of relations with header
H
{\displaystyle H}
.
Relation schema
A relation schema
(
H
,
C
)
{\displaystyle (H,C)}
consists of a header
H
{\displaystyle H}
and a predicate
C
(
R
)
{\displaystyle C(R)}
that is defined for all relations
R
{\displaystyle R}
with header
H
{\displaystyle H}
. A relation satisfies a relation schema
(
H
,
C
)
{\displaystyle (H,C)}
if it has header
H
{\displaystyle H}
and satisfies
C
{\displaystyle C}
.
=== Key constraints and functional dependencies ===
One of the simplest and most important types of relation constraints is the key constraint. It tells us that in every instance of a certain relational schema the tuples can be identified by their values for certain attributes.
Superkey
A superkey is a set of column headers for which the values of those columns concatenated are unique across all rows. Formally:
A superkey is written as a finite set of attribute names.
A superkey
K
{\displaystyle K}
holds in a relation
(
H
,
B
)
{\displaystyle (H,B)}
if:
K
⊆
H
{\displaystyle K\subseteq H}
and
there exist no two distinct tuples
t
1
,
t
2
∈
B
{\displaystyle t_{1},t_{2}\in B}
such that
t
1
[
K
]
=
t
2
[
K
]
{\displaystyle t_{1}[K]=t_{2}[K]}
.
A superkey holds in a relation universe
U
{\displaystyle U}
if it holds in all relations in
U
{\displaystyle U}
.
Theorem: A superkey
K
{\displaystyle K}
holds in a relation universe
U
{\displaystyle U}
over
H
{\displaystyle H}
if and only if
K
⊆
H
{\displaystyle K\subseteq H}
and
K
→
H
{\displaystyle K\rightarrow H}
holds in
U
{\displaystyle U}
.
Candidate key
A candidate key is a superkey that cannot be further subdivided to form another superkey.
A superkey
K
{\displaystyle K}
holds as a candidate key for a relation universe
U
{\displaystyle U}
if it holds as a superkey for
U
{\displaystyle U}
and there is no proper subset of
K
{\displaystyle K}
that also holds as a superkey for
U
{\displaystyle U}
.
Functional dependency
Functional dependency is the property that a value in a tuple may be derived from another value in that tuple.
A functional dependency (FD for short) is written as
X
→
Y
{\displaystyle X\rightarrow Y}
for
X
,
Y
{\displaystyle X,Y}
finite sets of attribute names.
A functional dependency
X
→
Y
{\displaystyle X\rightarrow Y}
holds in a relation
(
H
,
B
)
{\displaystyle (H,B)}
if:
X
,
Y
⊆
H
{\displaystyle X,Y\subseteq H}
and
∀
{\displaystyle \forall }
tuples
t
1
,
t
2
∈
B
{\displaystyle t_{1},t_{2}\in B}
,
t
1
[
X
]
=
t
2
[
X
]
⇒
t
1
[
Y
]
=
t
2
[
Y
]
{\displaystyle t_{1}[X]=t_{2}[X]~\Rightarrow ~t_{1}[Y]=t_{2}[Y]}
A functional dependency
X
→
Y
{\displaystyle X\rightarrow Y}
holds in a relation universe
U
{\displaystyle U}
if it holds in all relations in
U
{\displaystyle U}
.
Trivial functional dependency
A functional dependency is trivial under a header
H
{\displaystyle H}
if it holds in all relation universes over
H
{\displaystyle H}
.
Theorem: An FD
X
→
Y
{\displaystyle X\rightarrow Y}
is trivial under a header
H
{\displaystyle H}
if and only if
Y
⊆
X
⊆
H
{\displaystyle Y\subseteq X\subseteq H}
.
Closure
Armstrong's axioms: The closure of a set of FDs
S
{\displaystyle S}
under a header
H
{\displaystyle H}
, written as
S
+
{\displaystyle S^{+}}
, is the smallest superset of
S
{\displaystyle S}
such that:
Y
⊆
X
⊆
H
⇒
X
→
Y
∈
S
+
{\displaystyle Y\subseteq X\subseteq H~\Rightarrow ~X\rightarrow Y\in S^{+}}
(reflexivity)
X
→
Y
∈
S
+
∧
Y
→
Z
∈
S
+
⇒
X
→
Z
∈
S
+
{\displaystyle X\rightarrow Y\in S^{+}\land Y\rightarrow Z\in S^{+}~\Rightarrow ~X\rightarrow Z\in S^{+}}
(transitivity) and
X
→
Y
∈
S
+
∧
Z
⊆
H
⇒
(
X
∪
Z
)
→
(
Y
∪
Z
)
∈
S
+
{\displaystyle X\rightarrow Y\in S^{+}\land Z\subseteq H~\Rightarrow ~(X\cup Z)\rightarrow (Y\cup Z)\in S^{+}}
(augmentation)
Theorem: Armstrong's axioms are sound and complete; given a header
H
{\displaystyle H}
and a set
S
{\displaystyle S}
of FDs that only contain subsets of
H
{\displaystyle H}
,
X
→
Y
∈
S
+
{\displaystyle X\rightarrow Y\in S^{+}}
if and only if
X
→
Y
{\displaystyle X\rightarrow Y}
holds in all relation universes over
H
{\displaystyle H}
in which all FDs in
S
{\displaystyle S}
hold.
Completion
The completion of a finite set of attributes
X
{\displaystyle X}
under a finite set of FDs
S
{\displaystyle S}
, written as
X
+
{\displaystyle X^{+}}
, is the smallest superset of
X
{\displaystyle X}
such that:
Y
→
Z
∈
S
∧
Y
⊆
X
+
⇒
Z
⊆
X
+
{\displaystyle Y\rightarrow Z\in S\land Y\subseteq X^{+}~\Rightarrow ~Z\subseteq X^{+}}
The completion of an attribute set can be used to compute if a certain dependency is in the closure of a set of FDs.
Theorem: Given a set
S
{\displaystyle S}
of FDs,
X
→
Y
∈
S
+
{\displaystyle X\rightarrow Y\in S^{+}}
if and only if
Y
⊆
X
+
{\displaystyle Y\subseteq X^{+}}
.
Irreducible cover
An irreducible cover of a set
S
{\displaystyle S}
of FDs is a set
T
{\displaystyle T}
of FDs such that:
S
+
=
T
+
{\displaystyle S^{+}=T^{+}}
there exists no
U
⊂
T
{\displaystyle U\subset T}
such that
S
+
=
U
+
{\displaystyle S^{+}=U^{+}}
X
→
Y
∈
T
⇒
Y
{\displaystyle X\rightarrow Y\in T~\Rightarrow Y}
is a singleton set and
X
→
Y
∈
T
∧
Z
⊂
X
⇒
Z
→
Y
∉
S
+
{\displaystyle X\rightarrow Y\in T\land Z\subset X~\Rightarrow ~Z\rightarrow Y\notin S^{+}}
.
=== Algorithm to derive candidate keys from functional dependencies ===
algorithm derive candidate keys from functional dependencies is
input: a set S of FDs that contain only subsets of a header H
output: the set C of superkeys that hold as candidate keys in
all relation universes over H in which all FDs in S hold
C := ∅ // found candidate keys
Q := { H } // superkeys that contain candidate keys
while Q <> ∅ do
let K be some element from Q
Q := Q – { K }
minimal := true
for each X->Y in S do
K' := (K – Y) ∪ X // derive new superkey
if K' ⊂ K then
minimal := false
Q := Q ∪ { K' }
end if
end for
if minimal and there is not a subset of K in C then
remove all supersets of K from C
C := C ∪ { K }
end if
end while
== Alternatives ==
Other models include the hierarchical model and network model. Some systems using these older architectures are still in use today in data centers with high data volume needs, or where existing systems are so complex and abstract that it would be cost-prohibitive to migrate to systems employing the relational model. Also of note are newer object-oriented databases. and Datalog.
Datalog is a database definition language, which combines a relational view of data, as in the relational model, with a logical view, as in logic programming. Whereas relational databases use a relational calculus or relational algebra, with relational operations, such as union, intersection, set difference and cartesian product to specify queries, Datalog uses logical connectives, such as if, or, and and not to define relations as part of the database itself.
In contrast with the relational model, which cannot express recursive queries without introducing a least-fixed-point operator, recursive relations can be defined in Datalog, without introducing any new logical connectives or operators.
== See also ==
== Notes ==
== References ==
== Further reading ==
Date, Christopher J.; Darwen, Hugh (2000). Foundation for future database systems: the third manifesto; a detailed study of the impact of type theory on the relational model of data, including a comprehensive model of type inheritance (2 ed.). Reading, MA: Addison-Wesley. ISBN 978-0-201-70928-5.
——— (2007). An Introduction to Database Systems (8 ed.). Boston: Pearson Education. ISBN 978-0-321-19784-9.
== External links ==
Childs (1968), Feasibility of a set-theoretic data structure: a general structure based on a reconstituted definition of relation (research), Handle, hdl:2027.42/4164 cited in Codd's 1970 paper.
Darwen, Hugh, The Third Manifesto (TTM).
"Relational Model", C2.
Binary relations and tuples compared with respect to the semantic web (World Wide Web log), Sun. | Wikipedia/Relational_database_model |
NebulaGraph is a free software distributed graph database built for super large-scale graphs with milliseconds of latency. NebulaGraph adopts the Apache 2.0 license and also comes with a wide range of data visualization tools.
== History ==
NebulaGraph was developed in 2018 by Vesoft Inc. In May 2019, NebulaGraph made free software on GitHub and its alpha version was released same year.
In June 2020, NebulaGraph raised $8M in a series pre-A funding round led by Redpoint China Ventures and Matrix Partners China.
In June 2019, NebulaGraph 1.0 GA version was released while version 2.0 GA was released in March 2021. The latest version 3.0.2 of Nebula was released in March 2022.
In September 2023, NebulaGraph and LlamaIndex introduced Graph RAG for retrieval-augmented generation.
== See also ==
Graph database
== References ==
== External links ==
Official website | Wikipedia/NebulaGraph |
In natural language processing (NLP), a text graph is a graph representation of a text item (document, passage or sentence). It is typically created as a preprocessing step to support NLP tasks such as text condensation
term disambiguation
(topic-based) text summarization, relation extraction and textual entailment.
== Representation ==
The semantics of what a text graph's nodes and edges represent can vary widely. Nodes for example can simply connect to tokenized words, or to domain-specific terms, or to entities mentioned in the text. The edges, on the other hand, can be between these text-based tokens or they can also link to a knowledge base.
== TextGraphs Workshop series ==
The TextGraphs Workshop series is a series of regular academic workshops intended to encourage the synergy between the fields of natural language processing (NLP) and graph theory. The mix between the two started small, with graph theoretical framework providing efficient and elegant solutions for NLP applications that focused on single documents for part-of-speech tagging, word-sense disambiguation and semantic role labelling, got progressively larger with ontology learning and information extraction from large text collections.
The 11th edition of the workshop (TextGraphs-11) will be collocated with the Annual Meeting of Association for Computational Linguistics (ACL 2017) in Vancouver, BC, Canada.
== Areas of interest ==
Graph-based methods for providing reasoning and interpretation of deep learning methods
Graph-based methods for reasoning and interpreting deep processing by neural networks,
Explorations of the capabilities and limits of graph-based methods applied to neural networks in general
Investigation of which aspects of neural networks are not susceptible to graph-based methods.
Graph-based methods for Information Retrieval, Information Extraction, and Text Mining
Graph-based methods for word sense disambiguation,
Graph-based representations for ontology learning,
Graph-based strategies for semantic relations identification,
Encoding semantic distances in graphs,
Graph-based techniques for text summarization, simplification, and paraphrasing
Graph-based techniques for document navigation and visualization
Reranking with graphs
Applications of label propagation algorithms, etc.
New graph-based methods for NLP applications
Random walk methods in graphs
Spectral graph clustering
Semi-supervised graph-based methods
Methods and analyses for statistical networks
Small world graphs
Dynamic graph representations
Topological and pretopological analysis of graphs
Graph kernels, etc.
Graph-based methods for applications on social networks
Rumor proliferation
E-reputation
Multiple identity detection
Language dynamics studies
Surveillance systems, etc.
Graph-based methods for NLP and Semantic Web
Representation learning methods for knowledge graphs (i.e., knowledge graph embedding)
Using graphs-based methods to populate ontologies using textual data,
Inducing knowledge of ontologies into NLP applications using graphs,
Merging ontologies with graph-based methods using NLP techniques.
== See also ==
Bag-of-words model
Document classification
Document-term matrix
Hyperlinking
Graph database
Wiki
== References ==
== External links ==
Gabor Melli's page on text graphs Description of text graphs from a semantic processing perspective. | Wikipedia/Text_graph |
InfiniteGraph is a distributed graph database implemented in Java and C++ and is from a class of NOSQL ("Not Only SQL") database technologies that focus on graph data structures. Developers use InfiniteGraph to find useful and often hidden relationships in highly connected, complex big data sets. InfiniteGraph is cross-platform, scalable, cloud-enabled, and is designed to handle very high throughput.
InfiniteGraph can easily and efficiently perform queries that are difficult to perform, such as finding all paths or the shortest path between two items. InfiniteGraph is suited for applications and services that solve graph problems in operational environments. InfiniteGraphs "DO" query language enables both value-based queries and complex graph queries. InfiniteGraph goes beyond graph databases to also support complex object queries.
Adoption is seen in federal government, telecommunications, healthcare, cybersecurity, manufacturing, finance, and networking applications.
== History ==
InfiniteGraph is produced and supported by Objectivity, Inc., a company that develops database management technologies for large-scale, distributed data management and relationship analytics. The new InfiniteGraph was released in May 2021.
== Features ==
API/Protocols: Java, core C++, REST API
Graph Model: Labeled directed multigraph. An edge is a first-class entity with an identity independent of the vertices it connects..
Concurrency: Update locking on subgraphs, concurrent non-blocking ingest.
Consistency: Flexible (from ACID to relaxed).
Distribution: Lock server and 64-bit object IDs support dynamic addressing space (with each federation capable of managing up to 65,535 individual databases and 10^24 bytes (one quadrillion gigabytes, or a yottabyte) of physical addressing space).
Processing: Multi-threaded.
Query Methods: "DO" Query Language, Traverser and graph navigation API, predicate language qualification, path pattern matching.
Parallel query support
Visualization: InfiniteGraph "Studio."
Schema: Supports schema-full plus provides a mechanism for attaching side data.
Transactions: Fully ACID.
Source: Proprietary, with open-source extensions, integrated components, and third-party connectors.
Platforms: Windows and Linux
== References ==
== External links ==
Official website | Wikipedia/InfiniteGraph |
AllegroGraph is a closed source triplestore which is designed to store RDF triples, a standard format for Linked Data.
It also operates as a document store designed for storing, retrieving and managing document-oriented information, in JSON-LD format.
AllegroGraph is currently in use in commercial projects and a US Department of Defense project. It is also the storage component for the TwitLogic project that is bringing the Semantic Web to Twitter data.
== Implementation ==
AllegroGraph was developed to meet W3C standards for the Resource Description Framework, so it is properly considered an RDF Database. It is a reference implementation for the SPARQL protocol. SPARQL is a standard query language for linked data, serving the same purposes for RDF databases that SQL serves for relational databases.
Franz Inc. is the developer of AllegroGraph. It also develops Allegro Common Lisp, an implementation of Common Lisp, a dialect of Lisp (programming language). The functionality of AllegroGraph is made available through Java, Python, Common Lisp and other APIs.
The first version of AllegroGraph was made available at the end of 2004.
=== Languages ===
AllegroGraph has client interfaces for Java, Python, Ruby, Perl, C#, Clojure, and Common Lisp. The product is available for Windows, Linux, and Mac OS X platforms, supporting 32 or 64 bits.
For query languages, besides SPARQL, AllegroGraph also supports Prolog and JavaScript.
== References ==
== External links ==
Official website
Archived released
Practical Semantic Web and Linked Data Applications Archived 2016-01-22 at the Wayback Machine — a book by Mark Watson | Wikipedia/AllegroGraph |
TigerGraph is a private company headquartered in Redwood City, California. It provides graph database and graph analytics software.
== History ==
TigerGraph was founded in 2012 by programmer Dr. Yu Xu under the name GraphSQL.
In September 2017, the company came out of stealth mode under the name TigerGraph with $33 million in funding. It raised an additional $32 million in funding in September 2019 and another $105 million in a series C round in February 2021. Cumulative funding as of March 2021 is $170 million.
== Products ==
TigerGraph's hybrid transactional/analytical processing database and analytics software can scale to hundreds of terabytes of data with trillions of edges, and is used for data intensive applications such as fraud detection, customer data analysis (customer 360), IoT, artificial intelligence and machine learning. It is available using the cloud computing delivery model. The analytics uses C++ based software and a parallel processing engine to process algorithms and queries. It has its own graph query language that is similar to SQL.: 9–10 TigerGraph also provides a software development kit for creating graphs and visual representations.
As of Mar 2024, TigerGraph version is up to version 4.2.0
TigerGraph offers free Community Edition for developers, researchers, and educators. It can be obtained from https://dl.tigergraph.com/
== Query Language ==
GSQL is a SQL-like Turing complete query language designed by TigerGraph.
== See also ==
Graph Query Language
== References ==
== External links ==
Official website
TigerGraph Paper at SIGMOD Conference | Wikipedia/TigerGraph |
GraphQL is a data query and manipulation language that allows specifying what data is to be retrieved ("declarative data fetching") or modified. A GraphQL server can process a client query using data from separate sources and present the results in a unified graph. The language is not tied to any specific database or storage engine. There are several open-source runtime engines for GraphQL.
== History ==
Facebook started GraphQL development in 2012 and released a draft specification and reference implementation as open source in 2015. In 2018, GraphQL was moved to the newly established GraphQL Foundation, hosted by the non-profit Linux Foundation.
On February 9, 2018, the GraphQL Schema Definition Language became part of the specification.
Many popular public APIs adopted GraphQL as the default way to access them. These include public APIs of Facebook, GitHub, Yelp, Shopify, Google Directions API and many others.
== Design ==
GraphQL supports reading, writing (mutating), and subscribing to changes to data (realtime updates – commonly implemented using WebSockets). A GraphQL service is created by defining types with fields, then providing functions to resolve the data for each field. The types and fields make up what is known as the schema definition. The functions that retrieve and map the data are called resolvers.
After being validated against the schema, a GraphQL query is executed by the server. The server returns a result that mirrors the shape of the original query, typically as JSON.
=== Type system ===
With GraphQL, you model your business domain as a graph by defining a schema; within your schema, you define different types of nodes and how they connect/relate to one another.
The GraphQL type system describes what data can be queried from the API. The collection of those capabilities is referred to as the service’s schema and clients can use that schema to send queries to the API that return predictable results.
The root type of a GraphQL schema, Query by default, contains all of the fields that can be queried. Other types define the objects and fields that the GraphQL server can return. There are several base types, called scalars, to represent things like strings, numbers, and IDs.
Fields are defined as nullable by default, and a trailing exclamation mark can be used to make a field non-nullable (required). A field can be defined as a list by wrapping the field's type in square brackets (for example, authors: [String]).
=== Queries ===
A GraphQL query defines the exact shape of the data needed by the client.Once validated and executed by the GraphQL server, the data is returned in the same shape.
=== Mutations ===
A GraphQL mutation allows for data to be created, updated, or deleted. Mutations generally contain variables, which allow data to be passed into the server from the client. The mutation also defines the shape of the data that will be returned to the client after the operation is complete.
The variables are passed as an object with fields that match the variable names in the mutation.Once the operation is complete, the GraphQL server will return data matching the shape defined by the mutation.
=== Subscriptions ===
GraphQL also supports live updates sent from the server to client in an operation called a subscription. Again, the client defines the shape of the data that it needs whenever an update is made.When a mutation is made through the GraphQL server that updates the associated field, data is sent to all subscribed clients in the format setup through the subscription.
=== Versioning ===
While there’s nothing that prevents a GraphQL service from being versioned just like any other API, GraphQL takes a strong opinion on avoiding versioning by providing the tools for the continuous evolution of a GraphQL schema.
The @deprecated built-in directive is used within the type system definition language to indicate deprecated portions of a GraphQL service’s schema, such as deprecated fields on a type or deprecated enum values.
GraphQL only returns the data that’s explicitly requested, so new capabilities can be added via new types or new fields on existing types without creating a breaking change. This has led to a common practice of always avoiding breaking changes and serving a versionless API.
=== Comparison to other query languages ===
GraphQL does not provide a full-fledged graph query language such as SPARQL, or even in dialects of SQL that support transitive closure. For example, a GraphQL interface that reports the parents of an individual cannot return, in a single query, the set of all their ancestors.
== Testing ==
GraphQL APIs can be tested manually or with automated tools issuing GraphQL requests and verifying the correctness of the results. Automatic test generation is also possible. New requests may be produced through search-based techniques due to a typed schema and introspection capabilities.
Some of the software tools used for testing GraphQL implementations include Postman, GraphiQL, Apollo Studio, GraphQL Hive, GraphQL Editor, and Step CI.
== See also ==
Query by Example
OpenAPI Specification
Microservices
== References ==
== External links ==
Official website
GraphQL: The Documentary on YouTube | Wikipedia/GraphQL |
GQL (Graph Query Language) is a standardized query language for property graphs first described in ISO/IEC 39075, released in April 2024 by ISO/IEC.
== History ==
The GQL project is the culmination of converging initiatives dating back to 2016, particularly a private proposal from Neo4j to other database vendors in July 2016, and a proposal from Oracle technical staff within the ISO/IEC JTC 1 standards process later that year.
=== 2019 GQL project proposal ===
In September 2019 a proposal for a project to create a new standard graph query language (ISO/IEC 39075 Information Technology — Database Languages — GQL) was approved by a vote of national standards bodies which are members of ISO/IEC Joint Technical Committee 1(ISO/IEC JTC 1). JTC 1 is responsible for international Information Technology standards. GQL is intended to be a declarative database query language, like SQL.
The 2019 GQL project proposal states:
"Using graph as a fundamental representation for data modeling is an emerging approach in data management. In this approach, the data set is modeled as a graph, representing each data entity as a vertex (also called a node) of the graph and each relationship between two entities as an edge between corresponding vertices. The graph data model has been drawing attention for its unique advantages.
Firstly, the graph model can be a natural fit for data sets that have hierarchical, complex, or even arbitrary structures. Such structures can be easily encoded into the graph model as edges. This can be more convenient than the relational model, which requires the normalization of the data set into a set of tables with fixed row types.
Secondly, the graph model enables efficient execution of expensive queries or data analytic functions that need to observe multi-hop relationships among data entities, such as reachability queries, shortest or cheapest path queries, or centrality analysis. There are two graph models in current use: the Resource Description Framework (RDF) model and the Property Graph model. The RDF model has been standardized by W3C in a number of specifications. The Property Graph model, on the other hand, has a multitude of implementations in graph databases, graph algorithms, and graph processing facilities. However, a common, standardized query language for property graphs (like SQL for relational database systems) is missing. GQL is proposed to fill this void."
==== Official ISO standard ====
The GQL standard, ISO/IEC 39075:2024 Information technology – Database languages – GQL, was officially published by ISO on
12 April 2024.
=== GQL project organisation ===
The GQL project is led by Stefan Plantikow (who was the first lead engineer of Neo4j's Cypher for Apache Spark project) and Stephen Cannan (Technical Corrigenda editor of SQL). They are also the editors of the initial early working drafts of the GQL specification.
As originally motivated, the GQL project aims to complement the work of creating an implementable normative natural-language specification with supportive community efforts that enable contributions from those who are unable or uninterested in taking part in the formal process of defining a JTC 1 International Standard. In July 2019 the Linked Data Benchmark Council (LDBC) agreed to become the umbrella organization for the efforts of community technical working groups. The Existing Languages and the Property Graph Schema working groups formed in late 2018 and early 2019 respectively. A working group to define formal denotational semantics for GQL was proposed at the third GQL Community Update in October 2019.
=== ISO/IEC JTC 1/SC 32 WG3 ===
Seven national standards bodies (those of the United States, China, Korea, the Netherlands, the United Kingdom, Denmark and Sweden) have nominated national subject-matter experts to work on the project, which is conducted by Working Group 3 (Database Languages) of ISO/IEC JTC 1's Subcommittee 32 (Data Management and Interchange), usually abbreviated as ISO/IEC JTC 1/SC 32 WG3, or just WG3 for short. WG3 (and its direct predecessor committees within JTC 1) has been responsible for the SQL standard since 1987.
=== ISO stages ===
ISO stages by date
2019-09-10 : 10.99 New project approved
2019-09-10 : 20.00 New project registered in TC/SC work programme
2021-11-22 : 30.00 Committee draft (CD) registered
2021-11-23 : 30.20 CD study initiated
2022-02-25 : 30.60 Close of comment period
2022-08-29 : 30.92 CD referred back to Working Group
2022-08-29 : 30.00 Committee draft (CD) registered
2022-08-30 : 30.20 CD study initiated
2022-10-26 : 30.60 Close of comment period
2023-03-22 : 30.99 CD approved for registration as DIS
2023-03-24 : 40.00 DIS registered
2023-05-24 : 40.20 DIS ballot initiated: 12 weeks
2023-08-17 : 40.60 Close of voting
2023-11-28 : 40.99 Full report circulated: DIS approved for registration as FDIS
2023-12-11 : 50.00 Final text received or FDIS registered for formal approval
2024-01-26 : 50.20 Proof sent to secretariat or FDIS ballot initiated: 8 weeks
2024-03-23 : 50.60 Close of voting. Proof returned by secretariat
2024-03-23 : 60.00 International Standard under publication
2024-04-12 : 60.60 International Standard published
== GQL property graph data model ==
GQL is a query language specifically for property graphs. A property graph closely resembles a conceptual data model, as expressed in an entity–relationship model or in a UML class diagram (although it does not include n-ary relationships linking more than two entities). Entities are modelled as nodes, and relationships as edges, in a graph. Property graphs are multigraphs: there can be many edges between the same pair of nodes. GQL graphs can be mixed: they can contain directed edges, where one of the endpoint nodes of an edge is the tail (or source) and the other node is the head (or target or destination), but they can also contain undirected (bidirectional or reflexive) edges.
Nodes and edges, collectively known as elements, have attributes. Those attributes may be data values, or labels (tags). Values of properties cannot be elements of graphs, nor can they be whole graphs: these restrictions intentionally force a clean separation between the topology of a graph, and the attributes carrying data values in the context of a graph topology. The property graph data model therefore deliberately prevents nesting of graphs, or treating nodes in one graph as edges in another. Each property graph may have a set of labels and a set of properties that are associated with the graph as a whole.
Current graph database products and projects often support a limited version of the model described here. For example, Apache Tinkerpop forces each node and each edge to have a single label; Cypher allows nodes to have zero to many labels, but relationships only have a single label (called a reltype). Neo4j's database supports undocumented graph-wide properties, Tinkerpop has graph values which play the same role, and also supports "metaproperties" or properties on properties. Oracle's PGQL supports zero to many labels on nodes and on edges, whereas SQL/PGQ supports one to many labels for each kind of element. The NGSI-LD information model specified by ETSI is an attempt at formally specifying property graphs, with node and relationship (edge) types that may play the role of labels in previously mentioned models and support semantic referencing by inheriting classes defined in shared ontologies.
The GQL project will define a standard data model, which is likely to be the superset of these variants, and at least the first version of GQL is likely to permit vendors to decide on the cardinalities of labels in each implementation, as does SQL/PGQ, and to choose whether to support undirected relationships.
Additional aspects of the ERM or UML models (like generalization or subtyping, or entity or relationship cardinalities) may be captured by GQL schemas or types that describe possible instances of the general data model.
== Implementations ==
A first in-memory graph database that interprets GQL is available. Alongside the implementation, a formalization and the syntax of the supported subset of GQL have also been published.
== Extending existing graph query languages ==
The GQL project draws on multiple sources or inputs, notably existing industrial languages and a new section of the SQL standard. In preparatory discussions within WG3, surveys of the history and comparative content of some of these inputs were presented. GQL is a declarative language with its own distinct syntax, playing a similar role to SQL in the building of a database application. Other graph query languages, such as Apache TinkerPop's Gremlin and TigerGraph's GSQL, offer direct procedural features such as branching and looping, making it possible to traverse a graph iteratively to perform a class of graph algorithms, but GQL will not directly incorporate such features. However, GQL is envisaged as a specific case of a more general class of graph languages, which share a graph type system and a calling interface for procedures that process graphs.
=== SQL/PGQ Property Graph Query ===
Prior work by WG3 and SC32 mirror bodies, particularly in INCITS Data Management (formerly INCITS DM32), has helped to define a new planned Part 16 of the SQL Standard, which allows a read-only graph query to be called inside a SQL SELECT statement, matching a graph pattern using syntax which is very close to Cypher, PGQL and G-CORE, and returning a table of data values as the result. SQL/PGQ also contains DDL to allow SQL tables to be mapped to a graph view schema object with nodes and edges associated to sets of labels and set of data properties. The GQL project coordinates closely with the SQL/PGQ "project split" of (extension to) ISO 9075 SQL, and the technical working groups in the U.S. (INCITS DM32) and at the international level (SC32/WG3) have several expert contributors who work on both projects. The GQL project proposal mandates close alignment of SQL/PGQ and GQL, indicating that GQL will in general be a superset of SQL/PGQ.
More details about the pattern matching language can be found in the paper "Graph Pattern Matching in GQL and SQL/PGQ"
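As an illustrative sketch only (the graph name, labels and property names here are invented, not taken from the standard), such an embedded graph query might look like:

SELECT name, city
FROM GRAPH_TABLE (my_graph
       MATCH (p IS Person)-[IS LIVES_IN]->(c IS City)
       COLUMNS (p.name AS name, c.name AS city)
     )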
=== Cypher ===
Cypher is a language originally designed by Andrés Taylor and colleagues at Neo4j Inc., and first implemented by that company in 2011. Since 2015 it has been made available as an open source language description with grammar tooling, a JVM front-end that parses Cypher queries, and a Technology Compatibility Kit (TCK) of over 2000 test scenarios, using Cucumber for implementation language portability. The TCK reflects the language description and an enhancement for temporal datatypes and functions documented in a Cypher Improvement Proposal.
Cypher allows creation, reading, updating and deleting of graph elements, and is a language that can therefore be used for analytics engines and transactional databases.
==== Querying with visual path patterns ====
Cypher uses compact fixed- and variable-length patterns which combine visual representations of node and relationship (edge) topologies, with label existence and property value predicates. (These patterns are usually referred to as "ASCII art" patterns, and arose originally as a way of commenting programs which used a lower-level graph API.) By matching such a pattern against graph data elements, a query can extract references to nodes, relationships and paths of interest. Those references are emitted as a "binding table" where column names are bound to a multiset of graph elements. The name of a column becomes the name of a "binding variable", whose value is a specific graph element reference for each row of the table.
For example, a pattern MATCH (p:Person)-[:LIVES_IN]->(c:City) will generate a two-column output table. The first column named p will contain references to nodes with a label Person. The second column named c will contain references to nodes with a label City, denoting the city where the person lives.
The binding variables p and c can then be dereferenced to obtain access to property values associated with the elements referred to by a variable. The example query might be terminated with a RETURN, resulting in a complete query like this:
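// a sketch: the RETURN projection and property names are illustrative
MATCH (p:Person)-[:LIVES_IN]->(c:City)
RETURN p.firstName, p.lastName, c.name, c.country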
This would result in a final four-column table listing the names of the residents of the cities stored in the graph.
Pattern-based queries are able to express joins, by combining multiple patterns which use the same binding variable to express a natural join using the MATCH clause:
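// a sketch: the nationality part of the schema is assumed
MATCH (p:Person)-[:LIVES_IN]->(c:City)
MATCH (p)-[:NATIONAL_OF]->(n:Country {euMember: true})
RETURN p.name, c.name, n.name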
This query would return the residential location only of EU nationals.
An outer join can be expressed by MATCH ... OPTIONAL MATCH:
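// a sketch: rows with no EU nationality keep p and c, with n bound to null
MATCH (p:Person)-[:LIVES_IN]->(c:City)
OPTIONAL MATCH (p)-[:NATIONAL_OF]->(n:Country {euMember: true})
RETURN p.name, c.name, n.name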
This query would return the city of residence of each person in the graph with residential information, and, if an EU national, which country they come from.
Queries are therefore able to first project a sub-graph of the graph input into the query, and then extract the data values associated with that subgraph. Data values can also be processed by functions, including aggregation functions, leading to the projection of computed values which render the information held in the projected graph in various ways. Following the lead of G-CORE and Morpheus, GQL aims to project the sub-graphs defined by matching patterns (and graphs then computed over those sub-graphs) as new graphs to be returned by a query.
Patterns of this kind have become pervasive in property graph query languages, and are the basis for the advanced pattern sub-language being defined in SQL/PGQ, which is likely to become a subset of the GQL language. Cypher also uses patterns for insertion and modification clauses (CREATE and MERGE), and proposals have been made in the GQL project for collecting node and edge patterns to describe graph types.
==== Cypher 9 and Cypher 10 ====
The current version of Cypher (including the temporal extension) is referred to as Cypher 9. Prior to the GQL project it was planned to create a new version, Cypher 10 (see the Morpheus section below), that would incorporate features like schema and composable graph queries and views. The first designs for Cypher 10, including graph construction and projection, were implemented in the Cypher for Apache Spark project starting in 2016.
=== PGQL ===
PGQL is a language designed and implemented by Oracle Inc., but made available as an open source specification, along with JVM parsing software. PGQL combines familiar SQL SELECT syntax including SQL expressions and result ordering and aggregation with a pattern matching language very similar to that of Cypher. It allows the specification of the graph to be queried, and includes a facility for macros to capture "pattern views", or named sub-patterns. It does not support insertion or updating operations, having been designed primarily for an analytics environment, such as Oracle's PGX product. PGQL has also been implemented in Oracle Big Data Spatial and Graph, and in a research project, PGX.D/Async.
=== G-CORE ===
G-CORE is a research language designed by a group of academic and industrial researchers and language designers which draws on features of Cypher, PGQL and SPARQL. The project was conducted under the auspices of the Linked Data Benchmark Council (LDBC), starting with the formation of a Graph Query Language task force in late 2015, with the bulk of the work of paper writing occurring in 2017. G-CORE is a composable language which is closed over graphs: graph inputs are processed to create a graph output, using graph projections and graph set operations to construct the new graph. G-CORE queries are pure functions over graphs, having no side effects, which mean that the language does not define operations which mutate (update or delete) stored data. G-CORE introduces views (named queries). It also incorporates paths as elements in a graph ("paths as first class citizens"), which can be queried independently of projected paths (which are computed at query time over node and edge elements). G-CORE has been partially implemented in open-source research projects in the LDBC GitHub organization.
=== GSQL ===
GSQL is a language designed for TigerGraph Inc.'s proprietary graph database. Since October 2018 TigerGraph language designers have been promoting and working on the GQL project. GSQL is a Turing-complete language that incorporates procedural flow control and iteration, and a facility for gathering and modifying computed values associated with a program execution for the whole graph or for elements of a graph, called accumulators. These features are designed to enable iterative graph computations to be combined with data exploration and retrieval. GSQL graphs must be described by a schema of vertexes and edges, which constrains all insertions and updates. This schema therefore has the closed world property of an SQL schema, and this aspect of GSQL (also reflected in design proposals deriving from the Morpheus project) is proposed as an important optional feature of GQL.
Vertexes and edges are named schema objects which contain data but also define an imputed type, much as SQL tables are data containers with an associated implicit row type. GSQL graphs are then composed from these vertex and edge sets, and multiple named graphs can include the same vertex or edge set. GSQL has developed new features since its release in September 2017, most notably introducing variable-length edge pattern matching using a syntax related to that seen in Cypher, PGQL and SQL/PGQ, but also close in style to the fixed-length patterns offered by Microsoft SQL Server's graph extension.
GSQL also supports the concept of Multigraphs, which allow subsets of a graph to have role-based access control. Multigraphs are important for enterprise-scale graphs that need fine-grain access control for different users.
=== Morpheus: multiple graphs and composable graph queries in Apache Spark ===
The opencypher Morpheus project implements Cypher for Apache Spark users. Commencing in 2016, this project originally ran alongside three related efforts, in which Morpheus designers also took part: SQL/PGQ, G-CORE and design of Cypher extensions for querying and constructing multiple graphs. The Morpheus project acted as a testbed for extensions to Cypher (known as "Cypher 10") in the two areas of graph DDL and query language extensions.
Graph DDL features include
definition of property graph views over JDBC-connected SQL tables and Spark DataFrames
definition of graph schemas or types defined by assembling node type and edge type patterns, with subtyping
constraining the content of a graph by a closed or fixed schema
creating catalog entries for multiple named graphs in a hierarchically organized catalog
graph data sources to form a federated, heterogeneous catalog
creating catalog entries for named queries (views)
Graph query language extensions include
graph union
projection of graphs computed from the results of pattern matches on multiple input graphs
support for tables (Spark DataFrames) as inputs to queries ("driving tables")
views which accept named or projected graphs as parameters.
These features have been proposed as inputs to the standardization of property graph query languages in the GQL project.
== See also ==
Graph Modeling Language (GML)
GraphQL
Cypher (query language)
Graph database
Graph (abstract data type)
Graph traversal
Regular path query
== References ==
== External links ==
GQL Standard (Official website)
In the mathematical discipline of graph theory, a graph labeling is the assignment of labels, traditionally represented by integers, to edges and/or vertices of a graph.
Formally, given a graph G = (V, E), a vertex labeling is a function of V to a set of labels; a graph with such a function defined is called a vertex-labeled graph. Likewise, an edge labeling is a function of E to a set of labels. In this case, the graph is called an edge-labeled graph.
When the edge labels are members of an ordered set (e.g., the real numbers), it may be called a weighted graph.
When used without qualification, the term labeled graph generally refers to a vertex-labeled graph with all labels distinct. Such a graph may equivalently be labeled by the consecutive integers { 1, …, |V| } , where |V| is the number of vertices in the graph. For many applications, the edges or vertices are given labels that are meaningful in the associated domain. For example, the edges may be assigned weights representing the "cost" of traversing between the incident vertices.
In the above definition a graph is understood to be a finite undirected simple graph. However, the notion of labeling may be applied to all extensions and generalizations of graphs. For example, in automata theory and formal language theory it is convenient to consider labeled multigraphs, i.e., a pair of vertices may be connected by several labeled edges.
== History ==
Most graph labelings trace their origins to labelings presented by Alexander Rosa in his 1967 paper. Rosa identified three types of labelings, which he called α-, β-, and ρ-labelings. β-labelings were later renamed as "graceful" by Solomon Golomb, and the name has been popular since.
== Special cases ==
=== Graceful labeling ===
A graph is known as graceful if its vertices are labeled from 0 to |E|, the size of the graph, and if this vertex labeling induces an edge labeling from 1 to |E|. For any edge e, the label of e is the positive difference between the labels of the two vertices incident with e. In other words, if e is incident with vertices labeled i and j, then e will be labeled |i − j|. Thus, a graph G = (V, E) is graceful if and only if there exists an injection from V to {0, ..., |E|} that induces a bijection from E to {1, ..., |E|}.
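For example, the path with four vertices and three edges is graceful: labelling its vertices 0, 3, 1, 2 in order induces the edge labels |0 − 3| = 3, |3 − 1| = 2 and |1 − 2| = 1, exactly the integers from 1 to |E|.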
In his original paper, Rosa proved that all Eulerian graphs with size equivalent to 1 or 2 (mod 4) are not graceful. Whether or not certain families of graphs are graceful is an area of graph theory under extensive study. Arguably, the largest unproven conjecture in graph labeling is the Ringel–Kotzig conjecture, which hypothesizes that all trees are graceful. This has been proven for all paths, caterpillars, and many other infinite families of trees. Anton Kotzig himself has called the effort to prove the conjecture a "disease".
=== Edge-graceful labeling ===
An edge-graceful labeling on a simple graph without loops or multiple edges on p vertices and q edges is a labeling of the edges by distinct integers in {1, …, q} such that the labeling on the vertices induced by labeling a vertex with the sum of the incident edges taken modulo p assigns all values from 0 to p − 1 to the vertices. A graph G is said to be "edge-graceful" if it admits an edge-graceful labeling.
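For example, the triangle K3 (p = q = 3) is edge-graceful: labelling its edges 1, 2 and 3 induces the vertex sums 1 + 3 = 4, 1 + 2 = 3 and 2 + 3 = 5, which reduce modulo p = 3 to 1, 0 and 2, so every value from 0 to p − 1 is assigned exactly once.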
Edge-graceful labelings were first introduced by Sheng-Ping Lo in 1985.
A necessary condition for a graph to be edge-graceful is "Lo's condition":
{\displaystyle q(q+1)={\frac {p(p-1)}{2}}\mod p.}
=== Harmonious labeling ===
A "harmonious labeling" on a graph G is an injection from the vertices of G to the group of integers modulo k, where k is the number of edges of G, that induces a bijection between the edges of G and the numbers modulo k by taking the edge label for an edge (x, y) to be the sum of the labels of the two vertices x, y (mod k). A "harmonious graph" is one that has a harmonious labeling. Odd cycles are harmonious, as are Petersen graphs. It is conjectured that trees are all harmonious if one vertex label is allowed to be reused. The seven-page book graph K1,7 × K2 provides an example of a graph that is not harmonious.
=== Graph coloring ===
A graph coloring is a subclass of graph labelings. Vertex colorings assign different labels to adjacent vertices, while edge colorings assign different labels to adjacent edges.
=== Lucky labeling ===
A lucky labeling of a graph G is an assignment of positive integers to the vertices of G such that if S(v) denotes the sum of the labels on the neighbors of v, then S is a vertex coloring of G. The "lucky number" of G is the least k such that G has a lucky labeling with the integers {1, …, k}.
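For example, labelling every vertex of a three-vertex path with 1 gives neighbor sums 1, 2 and 1 along the path; adjacent vertices receive different sums, so this is a lucky labeling and the path's lucky number is 1.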
== References ==
The Resource Description Framework (RDF) is a method to describe and exchange graph data. It was originally designed as a data model for metadata by the World Wide Web Consortium (W3C). It provides a variety of syntax notations and formats, of which the most widely used is Turtle (Terse RDF Triple Language).
RDF is a directed graph composed of triple statements. An RDF graph statement is represented by: (1) a node for the subject, (2) an arc from subject to object, representing a predicate, and (3) a node for the object. Each of these parts can be identified by a Uniform Resource Identifier (URI). An object can also be a literal value. This simple, flexible data model has a lot of expressive power to represent complex situations, relationships, and other things of interest, while also being appropriately abstract.
RDF was adopted as a W3C recommendation in 1999. The RDF 1.0 specification was published in 2004, and the RDF 1.1 specification in 2014. SPARQL is a standard query language for RDF graphs. RDF Schema (RDFS), Web Ontology Language (OWL) and SHACL (Shapes Constraint Language) are ontology languages that are used to describe RDF data.
== Overview ==
The RDF data model is similar to classical conceptual modeling approaches (such as entity–relationship or class diagrams). It is based on the idea of making statements about resources (in particular web resources) in expressions of the form subject–predicate–object, known as triples. The subject denotes the resource; the predicate denotes traits or aspects of the resource, and expresses a relationship between the subject and the object.
For example, one way to represent the notion "The sky has the color blue" in RDF is as the triple: a subject denoting "the sky", a predicate denoting "has the color", and an object denoting "blue". Therefore, RDF uses subject instead of object (or entity) in contrast to the typical approach of an entity–attribute–value model in object-oriented design: entity (sky), attribute (color), and value (blue).
RDF is an abstract model with several serialization formats (essentially specialized file formats); the particular encoding of resources or triples can vary from format to format.
This mechanism for describing resources is a major component in the W3C's Semantic Web activity: an evolutionary stage of the World Wide Web in which automated software can store, exchange, and use machine-readable information distributed throughout the Web, in turn enabling users to deal with the information with greater efficiency and certainty. RDF's simple data model and ability to model disparate, abstract concepts has also led to its increasing use in knowledge management applications unrelated to Semantic Web activity.
A collection of RDF statements intrinsically represents a labeled, directed multigraph. This makes an RDF data model better suited to certain kinds of knowledge representation than other relational or ontological models.
As RDFS, OWL and SHACL demonstrate, one can build additional ontology languages upon RDF.
== History ==
The initial RDF design, intended to "build a vendor-neutral and operating system- independent system of metadata", derived from the W3C's Platform for Internet Content Selection (PICS), an early web content labelling system, but the project was also shaped by ideas from Dublin Core, and from the Meta Content Framework (MCF), which had been developed during 1995 to 1997 by Ramanathan V. Guha at Apple and Tim Bray at Netscape.
A first public draft of RDF appeared in October 1997, issued by a W3C working group that included representatives from IBM, Microsoft, Netscape, Nokia, Reuters, SoftQuad, and the University of Michigan.
In 1999, the W3C published the first recommended RDF specification, the Model and Syntax Specification ("RDF M&S"). This described RDF's data model and an XML serialization.
Two persistent misunderstandings about RDF developed at this time: firstly, due to the MCF influence and the RDF "Resource Description" initialism, the idea that RDF was specifically for use in representing metadata; secondly that RDF was an XML format rather than a data model, and only the RDF/XML serialisation being XML-based. RDF saw little take-up in this period, but there was significant work done in Bristol, around ILRT at Bristol University and HP Labs, and in Boston at MIT. RSS 1.0 and FOAF became exemplar applications for RDF in this period.
The recommendation of 1999 was replaced in 2004 by a set of six specifications: "The RDF Primer", "RDF Concepts and Abstract Syntax", "RDF/XML Syntax Specification (revised)", "RDF Semantics", "RDF Vocabulary Description Language 1.0", and "The RDF Test Cases".
This series was superseded in 2014 by the following six "RDF 1.1" documents: "RDF 1.1 Primer", "RDF 1.1 Concepts and Abstract Syntax", "RDF 1.1 XML Syntax", "RDF 1.1 Semantics", "RDF Schema 1.1", and "RDF 1.1 Test Cases".
== RDF topics ==
=== Vocabulary ===
The vocabulary defined by the RDF specification is as follows:
==== Classes ====
===== rdf =====
rdf:XMLLiteral
the class of XML literal values
rdf:Property
the class of properties
rdf:Statement
the class of RDF statements
rdf:Alt, rdf:Bag, rdf:Seq
containers of alternatives, unordered containers, and ordered containers (rdfs:Container is a super-class of the three)
rdf:List
the class of RDF Lists
rdf:nil
an instance of rdf:List representing the empty list
===== rdfs =====
rdfs:Resource
the class resource, everything
rdfs:Literal
the class of literal values, e.g. strings and integers
rdfs:Class
the class of classes
rdfs:Datatype
the class of RDF datatypes
rdfs:Container
the class of RDF containers
rdfs:ContainerMembershipProperty
the class of container membership properties, rdf:_1, rdf:_2, ..., all of which are sub-properties of rdfs:member
==== Properties ====
===== rdf =====
rdf:type
an instance of rdf:Property used to state that a resource is an instance of a class
rdf:first
the first item in the subject RDF list
rdf:rest
the rest of the subject RDF list after rdf:first
rdf:value
idiomatic property used for structured values
rdf:subject
the subject of the RDF statement
rdf:predicate
the predicate of the RDF statement
rdf:object
the object of the RDF statement
rdf:Statement, rdf:subject, rdf:predicate, rdf:object are used for reification (see below).
===== rdfs =====
rdfs:subClassOf
the subject is a subclass of a class
rdfs:subPropertyOf
the subject is a subproperty of a property
rdfs:domain
a domain of the subject property
rdfs:range
a range of the subject property
rdfs:label
a human-readable name for the subject
rdfs:comment
a description of the subject resource
rdfs:member
a member of the subject resource
rdfs:seeAlso
further information about the subject resource
rdfs:isDefinedBy
the definition of the subject resource
This vocabulary is used as a foundation for RDF Schema, where it is extended.
=== Serialization formats ===
Several common serialization formats are in use, including:
Turtle, a compact, human-friendly format.
TriG, an extension of Turtle to datasets.
N-Triples, a very simple, easy-to-parse, line-based format that is not as compact as Turtle.
N-Quads, a superset of N-Triples, for serializing multiple RDF graphs.
JSON-LD, a JSON-based serialization.
N3 or Notation3, a non-standard serialization that is very similar to Turtle, but has some additional features, such as the ability to define inference rules.
RDF/XML, an XML-based syntax that was the first standard format for serializing RDF.
RDF/JSON, an alternative syntax for expressing RDF triples using a simple JSON notation.
RDF/XML is sometimes misleadingly called simply RDF because it was introduced among the other W3C specifications defining RDF and it was historically the first W3C standard RDF serialization format. However, it is important to distinguish the RDF/XML format from the abstract RDF model itself. Although the RDF/XML format is still in use, other RDF serializations are now preferred by many RDF users, both because they are more human-friendly, and because some RDF graphs are not representable in RDF/XML due to restrictions on the syntax of XML QNames.
With a little effort, virtually any arbitrary XML may also be interpreted as RDF using GRDDL (pronounced 'griddle'), Gleaning Resource Descriptions from Dialects of Languages.
RDF triples may be stored in a type of database called a triplestore.
=== Resource identification ===
The subject of an RDF statement is either a uniform resource identifier (URI) or a blank node, both of which denote resources. Resources indicated by blank nodes are called anonymous resources. They are not directly identifiable from the RDF statement. The predicate is a URI which also indicates a resource, representing a relationship. The object is a URI, blank node or a Unicode string literal.
As of RDF 1.1 resources are identified by Internationalized Resource Identifiers (IRIs); IRI are a generalization of URI.
In Semantic Web applications, and in relatively popular applications of RDF like RSS and FOAF (Friend of a Friend), resources tend to be represented by URIs that intentionally denote, and can be used to access, actual data on the World Wide Web. But RDF, in general, is not limited to the description of Internet-based resources. In fact, the URI that names a resource does not have to be dereferenceable at all. For example, a URI that begins with "http:" and is used as the subject of an RDF statement does not necessarily have to represent a resource that is accessible via HTTP, nor does it need to represent a tangible, network-accessible resource—such a URI could represent absolutely anything. However, there is broad agreement that a bare URI (without a # symbol) which returns a 200-level (success) coded response when used in an HTTP GET request should be treated as denoting the internet resource that it succeeds in accessing.
Therefore, producers and consumers of RDF statements must agree on the semantics of resource identifiers. Such agreement is not inherent to RDF itself, although there are some controlled vocabularies in common use, such as Dublin Core Metadata, which is partially mapped to a URI space for use in RDF. The intent of publishing RDF-based ontologies on the Web is often to establish, or circumscribe, the intended meanings of the resource identifiers used to express data in RDF. For example, the URI:
http://www.w3.org/TR/2004/REC-owl-guide-20040210/wine#Merlot
is intended by its owners to refer to the class of all Merlot red wines by vintner (i.e., instances of the above URI each represent the class of all wine produced by a single vintner), a definition which is expressed by the OWL ontology—itself an RDF document—in which it occurs. Without careful analysis of the definition, one might erroneously conclude that an instance of the above URI was something physical, instead of a type of wine.
Note that this is not a 'bare' resource identifier, but is rather a URI reference, containing the '#' character and ending with a fragment identifier.
=== Statement reification and context ===
The body of knowledge modeled by a collection of statements may be subjected to reification, in which each statement (that is each triple subject-predicate-object altogether) is assigned a URI and treated as a resource about which additional statements can be made, as in "Jane says that John is the author of document X". Reification is sometimes important in order to deduce a level of confidence or degree of usefulness for each statement.
In a reified RDF database, each original statement, being a resource, itself, most likely has at least three additional statements made about it: one to assert that its subject is some resource, one to assert that its predicate is some resource, and one to assert that its object is some resource or literal. More statements about the original statement may also exist, depending on the application's needs.
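Sketched in Turtle (with a hypothetical ex: namespace), the reified form of the statement above might look like:

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ex: <http://example.org/> .

ex:statement1 a rdf:Statement ;
    rdf:subject ex:John ;
    rdf:predicate ex:authorOf ;
    rdf:object ex:documentX .

ex:Jane ex:says ex:statement1 .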
Borrowing from concepts available in logic (and as illustrated in graphical notations such as conceptual graphs and topic maps), some RDF model implementations acknowledge that it is sometimes useful to group statements according to different criteria, called situations, contexts, or scopes, as discussed in articles by RDF specification co-editor Graham Klyne. For example, a statement can be associated with a context, named by a URI, in order to assert an "is true in" relationship. As another example, it is sometimes convenient to group statements by their source, which can be identified by a URI, such as the URI of a particular RDF/XML document. Then, when updates are made to the source, corresponding statements can be changed in the model, as well.
Implementation of scopes does not necessarily require fully reified statements. Some implementations allow a single scope identifier to be associated with a statement that has not been assigned a URI, itself. Likewise named graphs in which a set of triples is named by a URI can represent context without the need to reify the triples.
=== Query and inference languages ===
The predominant query language for RDF graphs is SPARQL. SPARQL is an SQL-like language, and a recommendation of the W3C as of January 15, 2008.
The following is an example of a SPARQL query to show country capitals in Africa, using a fictional ontology:
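# the ex: prefix points at the fictional ontology mentioned above
PREFIX ex: <http://example.com/exampleOntology#>
SELECT ?capital ?country
WHERE {
  ?x ex:cityname ?capital ;
     ex:isCapitalOf ?y .
  ?y ex:countryname ?country ;
     ex:isInContinent ex:Africa .
}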
Other non-standard ways to query RDF graphs include:
RDQL, precursor to SPARQL, SQL-like
Versa, compact syntax (non–SQL-like), solely implemented in 4Suite (Python).
RQL, one of the first declarative languages for uniformly querying RDF schemas and resource descriptions, implemented in RDFSuite.
SeRQL, part of Sesame
XUL has a template element in which to declare rules for matching data in RDF. XUL uses RDF extensively for data binding.
SHACL Advanced Features specification (W3C Working Group Note), the most recent version of which is maintained by the SHACL Community Group, defines support for SHACL Rules, used for data transformations, inferences and mappings of RDF based on SHACL shapes.
=== Validation and description ===
The predominant language for describing and validating RDF graphs is SHACL (Shapes Constraint Language). The SHACL specification is divided into two parts: SHACL Core and SHACL-SPARQL. SHACL Core consists of a list of built-in constraints such as cardinality, range of values and many others. SHACL-SPARQL describes SPARQL-based constraints and an extension mechanism to declare new constraint components.
Other non-standard ways to describe and validate RDF graphs include:
SPARQL Inferencing Notation (SPIN) was based on SPARQL queries. It has been effectively deprecated in favor of SHACL.
ShEx (Shape Expressions) is a concise language for RDF validation and description.
== Examples ==
=== Example 1: Description of a person named Eric Miller ===
The following example is taken from the W3C website describing a resource with statements "there is a Person identified by http://www.w3.org/People/EM/contact#me, whose name is Eric Miller, whose email address is e.miller123(at)example (changed for security purposes), and whose title is Dr."
The resource "http://www.w3.org/People/EM/contact#me" is the subject.
The objects are:
"Eric Miller" (with a predicate "whose name is"),
mailto:e.miller123(at)example (with a predicate "whose email address is"), and
"Dr." (with a predicate "whose title is").
The subject is a URI.
The predicates also have URIs. For example, the URI for each predicate:
"whose name is" is http://www.w3.org/2000/10/swap/pim/contact#fullName,
"whose email address is" is http://www.w3.org/2000/10/swap/pim/contact#mailbox,
"whose title is" is http://www.w3.org/2000/10/swap/pim/contact#personalTitle.
In addition, the subject has a type (with URI http://www.w3.org/1999/02/22-rdf-syntax-ns#type), which is person (with URI http://www.w3.org/2000/10/swap/pim/contact#Person).
Therefore, the following "subject, predicate, object" RDF triples can be expressed:
http://www.w3.org/People/EM/contact#me, http://www.w3.org/2000/10/swap/pim/contact#fullName, "Eric Miller"
http://www.w3.org/People/EM/contact#me, http://www.w3.org/2000/10/swap/pim/contact#mailbox, mailto:e.miller123(at)example
http://www.w3.org/People/EM/contact#me, http://www.w3.org/2000/10/swap/pim/contact#personalTitle, "Dr."
http://www.w3.org/People/EM/contact#me, http://www.w3.org/1999/02/22-rdf-syntax-ns#type, http://www.w3.org/2000/10/swap/pim/contact#Person
In standard N-Triples format, this RDF can be written as:
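# each line is one triple, terminated by a full stop;
# the mailto address keeps the article's security-mangled form
<http://www.w3.org/People/EM/contact#me> <http://www.w3.org/2000/10/swap/pim/contact#fullName> "Eric Miller" .
<http://www.w3.org/People/EM/contact#me> <http://www.w3.org/2000/10/swap/pim/contact#mailbox> <mailto:e.miller123(at)example> .
<http://www.w3.org/People/EM/contact#me> <http://www.w3.org/2000/10/swap/pim/contact#personalTitle> "Dr." .
<http://www.w3.org/People/EM/contact#me> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2000/10/swap/pim/contact#Person> .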
Equivalently, it can be written in standard Turtle (syntax) format as:
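@prefix contact: <http://www.w3.org/2000/10/swap/pim/contact#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

<http://www.w3.org/People/EM/contact#me> contact:fullName "Eric Miller" .
<http://www.w3.org/People/EM/contact#me> contact:mailbox <mailto:e.miller123(at)example> .
<http://www.w3.org/People/EM/contact#me> contact:personalTitle "Dr." .
<http://www.w3.org/People/EM/contact#me> rdf:type contact:Person .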
Or more concisely, using a common shorthand syntax of Turtle as:
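@prefix contact: <http://www.w3.org/2000/10/swap/pim/contact#> .

<http://www.w3.org/People/EM/contact#me>
    a contact:Person ;
    contact:fullName "Eric Miller" ;
    contact:mailbox <mailto:e.miller123(at)example> ;
    contact:personalTitle "Dr." .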
Or, it can be written in RDF/XML format as:
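<?xml version="1.0" encoding="utf-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:contact="http://www.w3.org/2000/10/swap/pim/contact#">
  <contact:Person rdf:about="http://www.w3.org/People/EM/contact#me">
    <contact:fullName>Eric Miller</contact:fullName>
    <contact:mailbox rdf:resource="mailto:e.miller123(at)example"/>
    <contact:personalTitle>Dr.</contact:personalTitle>
  </contact:Person>
</rdf:RDF>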
=== Example 2: The postal abbreviation for New York ===
Certain concepts in RDF are taken from logic and linguistics, where subject-predicate and subject-predicate-object structures have meanings similar to, yet distinct from, the uses of those terms in RDF. This example demonstrates:
In the English language statement 'New York has the postal abbreviation NY' , 'New York' would be the subject, 'has the postal abbreviation' the predicate and 'NY' the object.
Encoded as an RDF triple, the subject and predicate would have to be resources named by URIs. The object could be a resource or literal element. For example, in the N-Triples form of RDF, the statement might look like:
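<urn:x-states:New%20York> <http://purl.org/dc/terms/alternative> "NY" .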
In this example, "urn:x-states:New%20York" is the URI for a resource that denotes the US state New York, "http://purl.org/dc/terms/alternative" is the URI for a predicate (whose human-readable definition can be found here), and "NY" is a literal string. Note that the URIs chosen here are not standard, and do not need to be, as long as their meaning is known to whatever is reading them.
=== Example 3: A Wikipedia article about Tony Benn ===
In a like manner, given that "https://en.wikipedia.org/wiki/Tony_Benn" identifies a particular resource (regardless of whether that URI could be traversed as a hyperlink, or whether the resource is actually the Wikipedia article about Tony Benn), to say that the title of this resource is "Tony Benn" and its publisher is "Wikipedia" would be two assertions that could be expressed as valid RDF statements. In the N-Triples form of RDF, these statements might look like the following:
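<https://en.wikipedia.org/wiki/Tony_Benn> <http://purl.org/dc/elements/1.1/title> "Tony Benn" .
<https://en.wikipedia.org/wiki/Tony_Benn> <http://purl.org/dc/elements/1.1/publisher> "Wikipedia" .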
To an English-speaking person, the same information could be represented simply as:
The title of this resource, which is published by Wikipedia, is 'Tony Benn'
However, RDF puts the information in a formal way that a machine can understand. The purpose of RDF is to provide an encoding and interpretation mechanism so that resources can be described in a way that particular software can understand it; in other words, so that software can access and use information that it otherwise could not use.
Both versions of the statements above are wordy because one requirement for an RDF resource (as a subject or a predicate) is that it be unique. The subject resource must be unique in an attempt to pinpoint the exact resource being described. The predicate needs to be unique in order to reduce the chance that the idea of Title or Publisher will be ambiguous to software working with the description. If the software recognizes http://purl.org/dc/elements/1.1/title (a specific definition for the concept of a title established by the Dublin Core Metadata Initiative), it will also know that this title is different from a land title or an honorary title or just the letters t-i-t-l-e put together.
The following example, written in Turtle, shows how such simple claims can be elaborated on, by combining multiple RDF vocabularies. Here, we note that the primary topic of the Wikipedia page is a "Person" whose name is "Tony Benn":
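@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<https://en.wikipedia.org/wiki/Tony_Benn>
    dc:publisher "Wikipedia" ;
    dc:title "Tony Benn" ;
    foaf:primaryTopic [
        a foaf:Person ;
        foaf:name "Tony Benn"
    ] .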
== Applications ==
DBpedia – Extracts facts from Wikipedia articles and publishes them as RDF data.
YAGO – Similar to DBpedia extracts facts from Wikipedia articles and publishes them as RDF data.
Wikidata – Collaboratively edited knowledge base hosted by the Wikimedia Foundation.
Creative Commons – Uses RDF to embed license information in web pages and mp3 files.
FOAF (Friend of a Friend) – designed to describe people, their interests and interconnections.
Haystack client – Semantic web browser from MIT CS & AI lab.
IDEAS Group – developing a formal 4D ontology for Enterprise Architecture using RDF as the encoding.
Microsoft shipped a product, Connected Services Framework, which provides RDF-based Profile Management capabilities.
MusicBrainz – Publishes information about Music Albums.
NEPOMUK, an open-source software specification for a Social Semantic desktop uses RDF as a storage format for collected metadata. NEPOMUK is mostly known because of its integration into the KDE SC 4 desktop environment.
Cochrane is a global publisher of clinical study meta-analyses in evidence based healthcare. They use an ontology driven data architecture to semantically annotate their published reviews with RDF based structured data.
RDF Site Summary – one of several "RSS" languages for publishing information about updates made to a web page; it is often used for disseminating news article summaries and sharing weblog content.
Simple Knowledge Organization System (SKOS) – a knowledge representation vocabulary intended to support vocabulary/thesaurus applications
SIOC (Semantically-Interlinked Online Communities) – designed to describe online communities and to create connections between Internet-based discussions from message boards, weblogs and mailing lists.
Smart-M3 – provides an infrastructure for using RDF and specifically uses the ontology agnostic nature of RDF to enable heterogeneous mashing-up of information
LV2 - a libre plugin format using Turtle to describe API/ABI capabilities and properties
Some uses of RDF include research into social networking. It can also help people in business better understand their relationships with members of industries that could be useful for product placement, and help scientists understand how people are connected to one another.
RDF is being used to gain a better understanding of road traffic patterns, because information about traffic patterns is held on different websites and RDF can integrate information from different sources on the web. Previously, the common methodology was keyword searching, which is problematic because it does not consider synonyms; this is why ontologies are useful in this situation. One remaining issue in studying traffic efficiently is that fully understanding traffic requires well-understood concepts of people, streets, and roads. Since these are human concepts, they call for the addition of fuzzy logic: values that are useful for describing roads, such as slipperiness, are not precise concepts and cannot be measured directly. This implies that the best solution would incorporate both fuzzy logic and an ontology.
== See also ==
Notations for RDF
TRiG
TRiX
RDF/XML
RDFa
JSON-LD
Notation3
Similar concepts
Entity–attribute–value model
Graph theory – an RDF model is a labeled, directed multi-graph.
Tag (metadata)
SciCrunch
Semantic network
Other (unsorted)
Semantic technology
Business Intelligence 2.0 (BI 2.0)
Data portability
EU Open Data Portal
RDF Schema
Folksonomy
LSID - Life Science Identifier
Swoogle
Universal Networking Language (UNL)
VoID
== References ==
=== Citations ===
=== Sources ===
== Further reading ==
W3C's RDF at W3C: specifications, guides, and resources
RDF Semantics: specification of semantics, and complete systems of inference rules for both RDF and RDFS
== External links ==
Sparksee (formerly known as DEX) is a high-performance and scalable graph database management system written in C++. From version 6.0, Sparksee has shifted its focus to embedded systems and mobile, becoming the first graph database specialized in mobile platforms, with versions for iOS and Android.
Its development started in 2006, and its first version became available in Q3 2008; the sixth version has been available since Q2 2021. A free community version for academic or evaluation purposes is available to download, limited to one million nodes with no limit on edges.
Sparksee is a product that originated from the research carried out at DAMA-UPC (the Data Management group at the Polytechnic University of Catalonia). In March 2010 a spin-off called Sparsity-Technologies was created at the UPC to commercialize, and provide services around, the technologies developed at DAMA-UPC.
DEX was renamed Sparksee with its fifth release in February 2014.
== Graph model ==
Sparksee is based on a graph database model characterized by three properties: data structures are graphs or any other structure similar to a graph; data manipulation and queries are based on graph-oriented operations; and there are data constraints to guarantee the integrity of the data and its relationships.
A Sparksee graph is a labeled property graph (referred to in the official documentation as a labeled attributed multigraph): labeled because nodes and edges belong to types; attributed because both nodes and edges may have attributes; and a multigraph because there may be multiple edges between the same pair of nodes, even if they are of the same edge type. It supports both directed and undirected edges.
One of its main characteristics is its high-performance storage and retrieval of large graphs (in the order of billions of nodes, edges and attributes), implemented with specialized structures.
== Technical details ==
Programming language: C++
API: Java, .NET, C++, Python, Objective-C
OS compatibility: Windows, Linux, macOS, iOS, Android
Persistency: Disk
Transactions: full ACID
Recovery Manager
Encryption
Open Cypher Query Language
== See also ==
Graph Database
NoSQL
== References ==
=== Also ===
D. Domínguez-Sal, P. Urbón-Bayes, A. Giménez-Vañó, S. Gómez-Villamor, N. Martínez-Bazán, J.L. Larriba-Pey. Survey of Graph Database Performance on the HPC Scalable Graph Analysis Benchmark. International Workshop on Graph Databases. July 2010.
== External links ==
Sparksee homepage at Sparsity-Technologies
An interest graph is a digital portrayal of an individual's specific interests. Its perceived utility and value stem from the premise that a person's interests form a significant component of their personal identity. They can be used as indicators of various aspects, such as a person's preferences regarding activities, purchases, destinations, as well as who they may choose to meet, follow, or support politically.
== Relationship of interest graph to social graph ==
Interest graphs and social graphs are closely related, but they are not synonymous. Where Facebook and other social networks are organized around an individual's friends or social graph, interest networks are organized around an individual's interests, which are represented as an interest graph.
Both graphs extend across the web, with social graphs serving as maps of a person's social media connections, and interest graphs as mappings of an individual's interests. In this way, the interests represented in an interest graph provide a means of further personalizing the web, by intersecting the interest graph with web content.
Interest graphs or interest networks can in some cases be derived from social graphs or social networks and may maintain their context within that social network. These are specifically social interest graphs or interest-based social graphs.
For an interest graph to be accurate and expressive, it must consider explicitly declared interests, for example "Likes" on Facebook or “Interests” in a LinkedIn profile, as well as implicit interest inferred from user activities such as clicks, comments, tagged photos and check-ins. Social networks are often a source for this data.
== Uses of interest graph ==
There are several personal and commercial uses for interest graphs. They can be applied in conjunction with social graphs as a way to meet or connect with people in a social network or community with shared or common interests, and who may not otherwise know each other.
Interest graphs can also be applied to marketing for purposes such as audience analytics and audience-based buying, for sentiment analysis, and for advertising as another form of behavioral profiling and targeting based on interests. Companies like Twitter, for example, use interest graphs to specifically target advertisements to their users based on their interests. Interest graphs may be applied to product development by using customer interests to help determine which new features or capabilities to provide in future versions of a product.
Interest graphs have many other uses as well, including simulation, research and other content discovery and filtering tasks, as input to recommendation engines for films, books, music, etc., and for learning and education.
== See also ==
Community of interest
Social web
Social graph
== References ==
JanusGraph is an open source, distributed graph database under The Linux Foundation. JanusGraph is available under the Apache License 2.0. The project is supported by IBM, Google, Hortonworks and Grakn Labs.
JanusGraph supports various storage backends (Apache Cassandra, Apache HBase, Google Cloud Bigtable, Oracle BerkeleyDB, ScyllaDB). The scalability of JanusGraph depends on the underlying technologies used with it; for example, using Apache Cassandra as the storage backend provides scaling to multiple datacenters out of the box.
JanusGraph supports global graph data analytics, reporting, and ETL through integration with big data platforms (Apache Spark, Apache Giraph, Apache Hadoop).
JanusGraph supports geo, numeric range, and full-text search via external index storages (ElasticSearch, Apache Solr, Apache Lucene).
JanusGraph has native integration with the Apache TinkerPop graph stack (Gremlin graph query language, Gremlin graph server, Gremlin applications).
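For example, a minimal Gremlin session against a throwaway in-memory JanusGraph instance might look like the following sketch (the data is illustrative):

// open an in-memory JanusGraph instance and obtain a traversal source
graph = JanusGraphFactory.open('inmemory')
g = graph.traversal()

// add two vertices and a 'knows' edge between them
alice = g.addV('person').property('name', 'Alice').next()
bob = g.addV('person').property('name', 'Bob').next()
g.addE('knows').from(alice).to(bob).iterate()

// query: whom does Alice know?  => Bob
g.V().has('person', 'name', 'Alice').out('knows').values('name')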
== History ==
JanusGraph is a fork of the TitanDB graph database, which had been under development since 2012.
Version 0.1.0 was released on Apr 20, 2017.
Version 0.1.1 was released on May 16, 2017.
Version 0.2.0 was released on Oct 12, 2017.
Version 0.2.1 was released on Jul 10, 2018.
Version 0.2.2 was released on Oct 9, 2018.
Version 0.2.3 was released on May 21, 2019.
Version 0.3.0 was released on Jul 31, 2018.
Version 0.3.1 was released on Oct 2, 2018.
Version 0.3.2 was released on Jun 16, 2019.
Version 0.3.3 was released on Jan 11, 2020.
Version 0.4.0 was released on Jul 1, 2019.
Version 0.4.1 was released on Jan 14, 2020.
Version 0.5.0 was released on Mar 10, 2020.
Version 0.5.1 was released on Mar 25, 2020.
Version 0.5.2 was released on May 3, 2020.
Version 0.5.3 was released on December 24, 2020.
Version 0.6.0 was released on September 3, 2021.
Version 0.6.1 was released on January 18, 2022.
Version 0.6.3 was released on February 18, 2023.
Version 1.0.0 was released on October 21, 2023.
== Licensing and contributions ==
JanusGraph is available under Apache Software License 2.0.
To contribute, an individual or an organisation must sign a Contributor License Agreement (CLA).
== Literature ==
Kelvin R. Lawrence. Practical Gremlin: An Apache TinkerPop Tutorial. Version 282-preview, February 2019, pp. 324–363.
== Publications ==
Gabriel Campero Durand, Jingy Ma, Marcus Pinnecke, Gunter Saake: Piecing together large puzzles, efficiently: Towards scalable loading into graph database systems, May 2018
Hima Karanam, Sumit Neelam, Udit Sharma, Sumit Bhatia, Srikanta Bedathur, L. Venkata Subramaniam, Maria Chang, Achille Fokoue-Nkoutche, Spyros Kotoulas, Bassem Makni, Mariano Rodriguez Muro, Ryan Musa, Michael Witbrock: Scalable Reasoning Infrastructure for Large Scale Knowledge Bases, October 2018
Gabriel Campero Durand, Anusha Janardhana, Marcus Pinnecke, Yusra Shakeel, Jacob Krüger, Thomas Leich, Gunter Saake: Exploring Large Scholarly Networks with Hermes
Gabriel Tanase, Toyotaro Suzumura, Jinho Lee, Chun-Fu (Richard) Chen, Jason Crawford, Hiroki Kanezashi: System G Distributed Graph Database
Bogdan Iancu, Tiberiu Marian Georgescu: Saving Large Semantic Data in Cloud: A Survey of the Main DBaaS Solutions
Jingyi Ma. An Evaluation of the Design Space for Scalable Data Loading into Graph Databases - February 2018, pp. 39–47.
== References ==
== External links ==
Official website
Official documentation
JanusGraph deployment / IBM, 11 April 2018
Developing a JanusGraph-backed Service on Google Cloud Platform / Google, 19 July 2018
Performance optimization of JanusGraph / Expero, 23 January 2018
Graph Computing with JanusGraph Archived 2018-10-07 at the Wayback Machine / IBM, 8 June 2018
Large Scale Graph Analytics with JanusGraph / Hortonworks, 13 June 2017
JanusGraph Concepts / IBM, 12 December 2017
Apache Atlas and JanusGraph – Graph-based Meta Data Management / IBM, 8 November 2018
Ontotext is a software company that produces software relating to data management. Its main products are GraphDB, an RDF database; and Ontotext Platform, a general data management platform based on knowledge graphs. It was founded in 2000 in Bulgaria, and now has offices internationally. Together with the BBC, Ontotext developed one of the early large-scale industrial semantic applications, Dynamic Semantic Publishing, starting in 2010.
Ontotext GraphDB, formerly OWLIM, is an RDF triplestore optimized for metadata and master data management, as well as graph analytics and data publishing. Since version 8.0 GraphDB integrates OpenRefine to allow for easy ingestion and reconciliation of tabular data. Ontotext Platform is a general-purpose data management tool centered around the idea of knowledge graphs.
== Ontotext GraphDB ==
Ontotext GraphDB (previously known as BigOWLIM) is a graph-based database capable of working with knowledge graphs produced by Ontotext, compliant with the RDF graph data model and the SPARQL query language. Some categorize it as a NoSQL database, meaning that it does not store data in tables as relational databases do. In 2014 Ontotext acquired the trademark "GraphDB" from Sones.
GraphDB is also an advanced repository for ontologies (specifications of entities, their properties, and their relationships). The underlying idea of the database is that of a semantic repository, storing semantic relationships between objects.
=== Architecture ===
GraphDB is used to store and manage semantic knowledge graph data. It is built on top of the RDF4J architecture for handling RDF data, implemented through the use of RDF4J's Storage and Inference Layer (SAIL). The architecture is made of three main components:
The Workbench is a web-based administration tool. The user interface is based on RDF4J Workbench Web Application.
The Engine consists of a query optimizer, reasoner, and a storage and plugin manager. The reasoner in GraphDB performs forward chaining, reasoning forward from the given facts, with the goal of total materialization, in which entailed statements are computed and stored in advance. The plugin manager supports user-defined indexes and can be configured dynamically during run-time.
=== Uses ===
Ontotext Graph DB has been used in genetics, healthcare, data forensics, cultural heritage studies, geography, infrastructure planning, civil engineering, digital historiography, and oceanography. Commercial clients include the BBC, the Financial Times, Springer Nature, the UK Parliament, and AstraZeneca.
== See also ==
Graph databases
Graph theory
RDF database
Glossary of graph theory
== References ==
== External links ==
Ontotext's Product Website
Github repository for Apache Licensed Workbench for GraphDB
Register Article from 15 Jan 2020 about Ontotext GraphDB
W3.org entry for GraphDB
Oracle Spatial and Graph, formerly Oracle Spatial, is a free option component of the Oracle Database. The spatial features in Oracle Spatial and Graph aid users in managing geographic and location data in a native type within an Oracle database, potentially supporting a wide range of applications, from automated mapping, facilities management, and geographic information systems (AM/FM/GIS) to wireless location services and location-enabled e-business. The graph features in Oracle Spatial and Graph include Oracle Network Data Model (NDM) graphs, used in traditional network applications by major transportation, telecommunications, utilities and energy organizations, and RDF semantic graphs, used in social networks and social interactions and in linking disparate data sets to address requirements from the research, health sciences, finance, media and intelligence communities.
== Components ==
The geospatial feature of Oracle Spatial and Graph provides a SQL schema and functions that facilitate the storage, retrieval, update, and query of collections of spatial features in an Oracle database. (The spatial component of a spatial feature consists of the geometric representation of its shape in some coordinate space — referred to as its "geometry".)
=== Geospatial data features ===
The Oracle Spatial geospatial data features consist of:
a schema - MDSYS (as in "multi-dimensional system") - that prescribes the storage, syntax, and semantics of supported geometric data types
a spatial indexing system
operators, functions, and procedures for performing area-of-interest queries, spatial join queries, and other spatial analysis operations
functions and procedures for utility and tuning operations
vector performance acceleration for substantially faster querying and more efficient use of CPU, memory, and partitioning
support for parametric curves (NURBS) for mathematically precise representation of free-form curves that can be reproduced exactly for 2D and 3D data
a topology data model for working with data about nodes, edges, and faces in a topology
a GeoRaster feature to store, index, query, analyze, and deliver GeoRaster data (raster image and gridded data and its associated metadata) with virtual mosaics, raster-algebra operations, image processing, Java API, and GDAL-Based ETL Wizard
3-dimensional data-types and operators including Triangulated Irregular Networks (TINs), Point Clouds and LiDAR data sets with Spatial R-tree indexing, SQL operators and analysis functions, and metadata for visualization
geocoding that converts location and address data into formal geographic coordinates from point addresses and address ranges, and supports reverse geocoding
a routing engine that creates fastest or shortest routes with driving distances, times, directions and turn-specific geometries based on commercial and publicly available street network data, and restrictions and conditions for advanced routing, such as truck-specific routing
Open Geospatial Consortium-compliant Web Services for geocoding, routing, mapping, business-directory, catalog, and geospatial feature transactions
Spatial Visualization components to render data on maps.
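As a concrete illustration of the MDSYS types and analysis functions listed above, the following minimal sketch (not drawn from Oracle documentation; the table, connection details, and coordinates are hypothetical, and spatial-index creation plus USER_SDO_GEOM_METADATA registration are omitted for brevity) stores two points and computes the distance between them via python-oracledb.
<syntaxhighlight lang="python">
# Hedged sketch: insert SDO_GEOMETRY points and measure their distance.
# Assumes a table created as:
#   CREATE TABLE poi (id NUMBER PRIMARY KEY, name VARCHAR2(64), geom SDO_GEOMETRY)
import oracledb

conn = oracledb.connect(user="scott", password="tiger", dsn="localhost/orclpdb")
cur = conn.cursor()

# Two 2D points (SDO_GTYPE 2001) in WGS 84 (SRID 4326).
cur.executemany("""
    INSERT INTO poi (id, name, geom)
    VALUES (:1, :2, SDO_GEOMETRY(2001, 4326,
                                 SDO_POINT_TYPE(:3, :4, NULL), NULL, NULL))
""", [
    (1, "Ferry Building", -122.3937, 37.7955),
    (2, "City Hall", -122.4194, 37.7793),
])

# Distance in kilometres via SDO_GEOM.SDO_DISTANCE (tolerance 0.005);
# unlike operators such as SDO_WITHIN_DISTANCE, this function does not
# require a spatial index.
cur.execute("""
    SELECT SDO_GEOM.SDO_DISTANCE(a.geom, b.geom, 0.005, 'unit=KM')
    FROM poi a, poi b
    WHERE a.id = 1 AND b.id = 2
""")
print(cur.fetchone()[0])
conn.commit()
</syntaxhighlight>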
=== Network Data Model ===
The Network Data Model feature is a property graph model used to model and analyze physical and logical networks in industries such as transportation, logistics, and utilities. Its features include the following (a generic path-analysis sketch follows this list):
Persistent management of the network connectivity in the database
A data model for representing capabilities or objects (modeled as nodes and links) in a network with a PL/SQL API for managing network data.
User-determined link and node properties, such as costs and restrictions, including temporal properties.
Association of real world objects with network elements to simplify application development and maintenance.
A Java API for in-memory network path analytics, including shortest path, nearest neighbors, within cost, and reachability, with partitioned loading of large networks into memory.
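The path analytics above (shortest path, within cost, reachability) are exposed through NDM's Java API. As a generic illustration only, and emphatically not the NDM API itself, the following sketch runs a plain Dijkstra shortest-path search over a made-up node/link cost model of the kind such analytics operate on.
<syntaxhighlight lang="python">
# Generic illustration: Dijkstra shortest path over a node/link cost model.
import heapq

links = {  # node -> [(neighbor, link cost), ...]; entirely made-up data
    1: [(2, 4.0), (3, 1.0)],
    2: [(4, 1.0)],
    3: [(2, 2.0), (4, 5.0)],
    4: [],
}

def shortest_path_cost(source, target):
    """Return the minimum total link cost from source to target."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, cost in links.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return float("inf")

print(shortest_path_cost(1, 4))  # -> 4.0 via 1 -> 3 -> 2 -> 4
</syntaxhighlight>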
=== RDF semantic ===
The RDF Semantic Graph feature supports the World Wide Web Consortium (W3C) RDF standards. It provides RDF data management, querying, and inferencing commonly used in applications ranging from semantic data integration to social network analysis and linked open data. Its features include the following (a query sketch follows this list):
An RDF triple store and ontology management with automatic partitioning and data compression.
Proven scalability to over 54 billion triples (LUBM 200K benchmark), up to the 8-petabyte storage limit of Oracle Database.
High performance bulk loading with Oracle Database parallel and direct path loading and loading through Jena.
SPARQL and SQL parallel querying and updating of RDF graphs with SPARQL 1.1, SPARQL endpoint web services, SPARQL/Update, Java APIs with open source Apache Jena & Sesame, SQL queries with embedded SPARQL graph patterns, SQL insert/update.
Ontology-assisted querying of table data using SQL operators to expand SQL relational queries with related terms for more comprehensive results.
Native inferencing with parallel, incremental and secure operation for scalable reasoning with RDFS, Web Ontology Language (OWL 2 RL/EL), Simple Knowledge Organization System (SKOS), user-defined rules, user-defined inference extensions, and an extensibility framework for plugging in special-purpose reasoners such as PelletDB and TrOWL.
GeoSPARQL support for storing and querying spatial data in RDF per the Open Geospatial Consortium (OGC) specification.
RDF views on relational data to apply semantic analysis with support for automatic (Direct Mapping) and custom (W3C R2RML language) mapping of relational data to RDF triples.
Triple-level security that meets the most stringent security requirements with Oracle Label Security.
Integration with open source Apache Jena and Sesame application development environments.
Integration with XML-based tools, such as Oracle Business Intelligence Enterprise Edition (OBIEE) for reporting and dashboards.
Integration with Network Data Model graph analytics for shortest path, nearest neighbors, within cost, and reachability.
Integration with Oracle Advanced Analytics features: Oracle Data Mining for exploiting predictive analytics and pattern discovery and Oracle R Enterprise for statistical computing and charting visualization of graph data.
Semantic indexing for text mining and entity analytics integrated with popular natural language processors.
Integration with leading commercial and open source tools for querying, visualization, and ontology management.
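The feature set above includes SQL queries with embedded SPARQL graph patterns. The following sketch, assuming a hypothetical model name and connection, illustrates the idea via the SEM_MATCH table function run through python-oracledb; the SEM_MATCH argument list is abbreviated and varies by release, so consult the RDF Semantic Graph Developer's Guide for the full signature.
<syntaxhighlight lang="python">
# Hedged sketch: a SPARQL graph pattern embedded in SQL via SEM_MATCH.
# Model name, connection details, and data are hypothetical; trailing
# SEM_MATCH parameters (passed as null here) are release-dependent.
import oracledb

conn = oracledb.connect(user="rdfuser", password="secret", dsn="localhost/orclpdb")
cur = conn.cursor()

cur.execute("""
    SELECT s, o
    FROM TABLE(SEM_MATCH(
        'SELECT ?s ?o WHERE { ?s <http://www.w3.org/2000/01/rdf-schema#label> ?o }',
        SEM_Models('family'),
        null, null, null))
""")
for s, o in cur:
    print(s, o)
</syntaxhighlight>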
== Availability ==
Oracle Spatial and Graph is an option for Oracle Database Enterprise Edition and historically required a separate license; it has been free of charge since December 5, 2019. It is also included in Oracle Database Cloud Service (High Performance Edition and Extreme Performance Edition). It is not included in Oracle Standard Edition or Oracle Standard Edition One; however, the latter two editions allow the use of a subset of spatial features (called Oracle Locator) at no extra cost. An appendix of the Oracle Spatial and Graph Developer's Guide specifies the functions allowed in Locator.
== History ==
The Oracle RDBMS first incorporated spatial-data capability with a modification to Oracle 4 made by scientists working with the Canadian Hydrographic Service (CHS). A joint development team of CHS and Oracle personnel subsequently redesigned the Oracle kernel, resulting in the "Spatial Data Option" or "SDO" for Oracle 7. (The SDO_ prefix continues in use within Oracle Spatial implementations.) The spatial indexing system for SDO involved an adaptation of Riemannian hypercube data structures, invoking a helical spiral through 3-dimensional space that accommodated features of arbitrary size. This also permitted highly efficient compression of the resulting data, suitable for the petabyte-size data repositories that CHS and other major corporate users required, while also improving search and retrieval times. The "helical hyperspatial code", or HHCode, as developed by CHS and implemented by Oracle Spatial, comprises a form of space-filling curve.
With Oracle 8, Oracle Corporation marketing dubbed the spatial extension simply "Oracle Spatial". The primary spatial indexing system no longer uses the HHCode, but a standard R-tree index.
Since July 2012, the option has been named Oracle Spatial and Graph to highlight the graph database capabilities in the product: Network Data Model graph, introduced with Oracle Database 10g Release 1, and RDF Semantic Graph, introduced with Oracle Database 10g Release 2.
== Further reading ==
Albert Godfrind, Richard Pitts, Hans Viehmann, Ravikanth Kothuri. Pro Oracle Spatial for Oracle Database 12c. Apress (2015) ISBN 978-1-4302-6313-5
Simon Greener, Siva Ravada. Applying and Extending Oracle Spatial. Packt Publishing (2013) ISBN 184968636X
Euro Beinat, Albert Godfrind & Ravikanth V. Kothuri. Pro Oracle Spatial for Oracle Database 11g. Apress (2007) ISBN 1-59059-899-7
Euro Beinat, Albert Godfrind & Ravikanth V. Kothuri. Pro Oracle Spatial. Apress (2004) ISBN 1-59059-383-9
== See also ==
OGR – The OGR Simple Feature Library is an open source interface to Oracle Spatial data
Oracle Multimedia
== References ==
Oracle Documentation Library, https://www.oracle.com/database/technologies/oraclecertificationenvironment-docs-library.html. See:
Spatial and Graph Developer's Guide
Spatial and Graph GeoRaster Developer's Guide
Spatial and Graph Topology Data Model and Network Data Model Graph Developer's Guide
Spatial and Graph Java API Reference (Javadoc)
Spatial and Graph RDF Semantic Graph Developer's Guide
== Notes ==
== External links ==
http://www.oracle.com/technetwork/database-options/spatialandgraph/overview/index.html – Oracle Corporation's official website
http://fdo.osgeo.org/fdooracle/index.html – Open Source FDO interface to Oracle Spatial data | Wikipedia/Oracle_Spatial_and_Graph |
InterSystems Caché (pronounced "kashay") is a commercial operational database management system from InterSystems, used to develop software applications for healthcare management, banking and financial services, government, and other sectors. Customer software can use the database with object and SQL code. Caché also allows developers to directly manipulate its underlying data structures: the hierarchical arrays known from M technology.
== Description ==
Internally, Caché stores data in multidimensional arrays capable of carrying hierarchically structured data. These are the same “global” data structures used by the MUMPS programming language, which influenced the design of Caché, and are similar to those used by MultiValue (also known as PICK) systems. In most applications, however, object and/or SQL access methods are used.
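As a loose structural analogy only (not Caché syntax, and not drawn from InterSystems documentation), the following sketch models the hierarchically keyed, sparse "global" arrays described above with nested Python dictionaries; in ObjectScript, the same shape would be written with a global such as ^Person(1,"name").
<syntaxhighlight lang="python">
# Loose analogy: MUMPS-style globals behave like persistent, hierarchically
# keyed sparse arrays; nested dicts play the same structural role here.
from collections import defaultdict

def tree():
    return defaultdict(tree)  # arbitrarily deep, created on demand

globals_db = tree()

# Analogous to: SET ^Person(1,"name")="Alice"
#               SET ^Person(1,"visits",2023)=4
globals_db["Person"][1]["name"] = "Alice"
globals_db["Person"][1]["visits"][2023] = 4

# Traverse the first subscript level, as an application might.
for person_id, record in globals_db["Person"].items():
    print(person_id, record["name"])
</syntaxhighlight>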
Caché ObjectScript, Caché Basic or SQL can be used to develop application business logic. External interfaces include native object binding for C++, Java, EJB, ActiveX, and .NET. Caché supports JDBC and ODBC for relational access. XML and web services are also supported.
Caché Server Pages (CSP) technology allows tag-based creation of web applications that generate dynamic web pages, typically using data from a Caché database. Caché also includes InterSystems Zen, an implementation of AJAX that enables component-based development of rich web applications.
== History ==
InterSystems was founded in 1979 to commercialize MUMPS hierarchical databases. It launched Caché in 1997 as its flagship product and at that time ceased further development of its original MUMPS product line.
== Market ==
InterSystems claims Caché is the world's fastest object database. However, high performance is achieved only for transactional operations that have a significantly hierarchical nature.
This database management system (DBMS) is used as part of hospital patient tracking, electronic medical record, and medicine management systems, in products developed by companies such as Epic Systems, as well as in the VistA system used by the U.S. Department of Veterans Affairs. Sungard includes Caché in its AddVantage asset management software sold to finance industry customers such as banks. Telecommunications vendors BT Group and Vodacom also use Caché.
The DB-Engines website ranked Caché as the most popular object-oriented DBMS every month from March 2013 to January 2023, when it was overtaken by InterSystems IRIS Data Platform.
== Platforms ==
Caché runs on Windows, Linux, Solaris, HP-UX, AIX, macOS and OpenVMS platforms.
== See also ==
GT.M, a related database system
== References == | Wikipedia/InterSystems_Caché |
InterSystems Corporation is a privately held vendor of software systems and technology for high-performance database management, rapid application development, integration, and healthcare information systems. The vendor's products include InterSystems IRIS Data Platform, Caché Database Management System, the InterSystems Ensemble integration platform, the HealthShare healthcare informatics platform and TrakCare healthcare information system, which is sold outside the United States.
InterSystems is based in Cambridge, Massachusetts. The company's revenue was $727 million in 2019.
== History ==
InterSystems was founded in 1978 by Phillip T. (Terry) Ragon, its current CEO. The firm was one of the vendors of M-technology (aka MUMPS) systems, with a product called ISM-11 (a DSM-11 clone) for the DEC PDP-11. Over the years, it acquired several other MUMPS implementations: DTM from Data Tree (1993), DSM from Digital (1995), and MSM from Micronetics (1998), making InterSystems the dominant M technology vendor.
The firm eventually started combining features from these products into one they called OpenM, then consolidated the technologies into a product, Caché, in 1997. At that time they stopped new development for all of their legacy M-based products (although the company still supports existing customers). They launched Ensemble, an integration platform, in 2003 and HealthShare, a scalable health informatics platform, in 2006. In 2007, InterSystems purchased TrakHealth, an Australian vendor of TrakCare, a modular healthcare information system based on InterSystems technology. In May 2011, the firm launched Globals as a free database based on the multi-dimensional array storage technology used in Caché. In September 2011, InterSystems purchased Siemens Health Services (SHS) France from its parent company, Siemens. In September 2017, InterSystems announced InterSystems IRIS Data Platform, which, the company said, combines database management capabilities together with interoperability and analytics, as well as technologies such as sharding for performance.
== Products ==
The company's products include the following:
InterSystems IRIS data platform, a hybrid multi-model database management system for real-time transactions and analytics that is available as a private or public fully managed cloud platform.
InterSystems IRIS for Health, a data platform that supports healthcare messaging protocols such as FHIR, HL7, and IHE.
HealthShare, a healthcare informatics platform that supports the creation of and secure access to unified care records.
TrakCare, a web-based healthcare information system, available outside the U.S.
InterSystems Caché, a multi-model database management system and application server: a MUMPS server with an SQL overlay and developer tools.
InterSystems Ensemble, a rapid integration and application development platform.
In 2020, InterSystems was named a Visionary in Gartner’s Magic Quadrant for cloud database management systems for its InterSystems IRIS technology.
== Customers ==
Epic Systems, a privately held health records vendor, is the company’s largest customer and has been using InterSystems technology for more than 40 years. Epic originally built its electronic medical records software on InterSystems Caché but used InterSystems IRIS data platform as the foundation of a new release of its software launched in 2020. As of 2022, Epic EMR software held the records of 78% of all U.S. patients and 3% of patients globally.
In July 2020, the U.S. Department of Veterans Affairs launched a HealthShare-based platform called InterSystems Veterans Data Integration and Federation Enterprise Platform (VDIF EP) for developing longitudinal patient records. VDIF EP enables care providers both within and outside the Veterans Health Administration to access veterans’ patient records. The VA has used VDIF EP for tracking COVID-19 infections among veterans and VA medical personnel and for managing resource deployment across 172 VA medical centers and more than 1,000 outpatient clinics.
Other major InterSystems customers include Credit Suisse, whose trading platform uses InterSystems Caché; the European Space Agency, which used InterSystems Caché for its Gaia mission to create a 3D map of the Milky Way; Mass General Brigham, which built its electronic health records system using InterSystems Caché and Ensemble; and the national health services of England, Scotland, and Wales, which use TrakCare for sharing patient health information and e-prescribing. 3M, BNY Mellon, Canon, Franklin Templeton, HSBC, MSC Mediterranean Shipping Company, Olympus, Ricoh, SPAR, and TD Ameritrade also use InterSystems software.
== Microsoft dispute ==
On August 14, 2008, the Boston Globe reported that InterSystems was filing a lawsuit against Microsoft Corporation, another tenant in its Cambridge, Mass., headquarters, seeking to prevent Microsoft from expanding in the building. InterSystems also filed a lawsuit against building owner Equity Office Partners, a subsidiary of the Blackstone Group, "contending that it conspired with Microsoft to lease space that InterSystems had rights to, and sought to drive up rents in the process".
In 2010, CEO Terry Ragon led a coalition in Cambridge called Save Our Skyline to protest a city zoning change that would have allowed more signs on top of commercial buildings, partly in response to Microsoft's desire to put a sign on top of their shared building.
Both disputes were eventually settled, and Microsoft and InterSystems agreed to both put low signs only in front of the building at street level.
== References ==
== External links ==
InterSystems website | Wikipedia/InterSystems |
GE Power (formerly known as GE Energy) was an American energy technology company owned by General Electric (GE). In April 2024, GE completed the spin-off of GE Power into a separate company, GE Vernova. Following this, General Electric ceased to exist as a conglomerate and pivoted to aviation, rebranding as GE Aerospace.
== Structure ==
As of July 2019, GE Power is divided into the following divisions:
GE Gas Power (formerly Alstom Power Turbomachines), based in Atlanta, Georgia.
Gas turbines
Heat recovery steam generators (HRSG)
GE Steam Power (formerly Alstom Power Systems), based in Baden, Switzerland.
Steam turbines
Electric generators
Boilers
Air Quality Control Systems (AQCS)
GE Power Conversion (formerly Converteam), based in Paris, France.
GE Energy Consulting
== History ==
=== GE Power Systems ===
GE Power Systems was a division of General Electric operating as supplier of power generation technology, energy services and energy management and also included oil and gas, distributed power and energy rental related solutions. The unit was based originally in Schenectady, NY and relocated to Atlanta, GA in 2000.
It acquired Enter Software in 2001.
It acquired Enron's wind business in 2002.
=== GE Energy (2005-2008) ===
GE Energy was a division of General Electric and was headquartered in Atlanta, Georgia, United States.
In 2008, a company-wide reorganization prompted by financial losses led to the unit's formation from companies within the GE Infrastructure division. Before this reorganization, GE had nine decades of history in industrial power production, including building a record-capacity three-phase generator for Niagara Falls in 1918 and installing generators at the Grand Coulee Dam in 1942.
On March 29, 2011, GE Energy announced plans to acquire a 90% stake in the French company Converteam for $3.2 billion.
In July 2012, John Krenicki announced that he would be stepping down as president of GE Energy, and the business would be broken into three new GE businesses consisting of the following divisions:
GE Energy Management
Digital Energy
Industrial Solutions
Environmental Services
Power Conversion (former Converteam assets)
Bethesda Counsel
GE Oil & Gas
Drilling Solutions: Land & Offshore
Offshore Solutions
Subsea Solutions
Enhanced Oil Recovery (EOR) Solutions
Unconventional Resources
Full Range LNG Solutions
Industrial Power Generation
Refinery & Petrochemicals
Gas Storage & Pipeline
GE Power & Water
Power Generation Products (previously known as Thermal Products)
Power Generation Services
Distributed Power
GE Hitachi Nuclear Energy
Renewable Energy (Wind Energy)
Water & Process Technology
=== 2013–2024 ===
After lengthy negotiations, on 2 November 2015, GE finalized the acquisition of Alstom's power generation and electricity transmission businesses, which were integrated into GE Power & Water. Later, the newly acquired Hydro and Wind businesses of Alstom, together with GE's own Wind Energy division, were spun off to create a new subsidiary called GE Renewable Energy.
In 2015, GE Power garnered press attention when a model 9FB gas turbine in Texas was shut down for two months due to a turbine blade failure. This model uses blade technology similar to that of GE's newest and most efficient model, the HA. After the failure, GE developed new protective coatings and heat treatment methods. Gas turbines represent a significant portion of GE Power's revenue, and also a significant portion of the power generation fleet of several utility companies in the United States. Chubu Electric of Japan and Électricité de France also had units that were affected. Initially, GE did not realize the 9FB turbine blade issue would affect the new HA units.
Forced by a wave of very negative financial results, the company went through a series of disinvestments and reorganization in 2017.
In May 2017, GE Oil & Gas was combined with Baker Hughes Incorporated to create Baker Hughes, a GE company (BHGE), a new tier-1 business inside the parent group.
In June 2017, GE Energy Connections merged again with GE Power & Water, to become the present GE Power. The new combined business unit is led by Scott Strazik.
Swiss-based ABB announced in September 2017, a $2.6 billion deal with GE Power to acquire the Industrial Solutions division.
In October 2017, GE Power sold its Water & Process Technology division to French-based utility company Suez for a total of $3.4 billion.
In June 2018, the private equity firm Advent International agreed to buy GE's distributed power unit for $3.25 billion.
In 2019, in a strategic realignment to cut costs and meet surging demand in the renewable power market, it was decided to merge the Grid Solutions portfolio into the Renewable Power business. That move shifted GE's assets in electrical transmission grids, battery storage, and solar inverters away from GE Power.
In June 2019, GE Steam Power started manufacturing half-speed steam turbines for the four Rosatom VVER-1200s being built at Akkuyu Nuclear Power Plant, Turkey's first nuclear power plant. This is part of a joint venture established in 2007, between General Electric and Rosatom subsidiary Atomenergomash, called AAEM Turbine Technology, to supply equipment for VVER nuclear power plants. The joint venture includes the manufacture of heat exchange equipment in Russia. GE has installed about half of all nuclear power plant steam turbines around the world.
In November 2022, Électricité de France (EDF) agreed the acquisition of GE Steam Power's nuclear activities, including the manufacture of non-nuclear equipment for new nuclear power plants including steam turbines and the maintenance and upgrade of existing nuclear power plants outside America.
In 2021, GE announced a plan to split into three new public companies: GE Vernova, GE HealthCare, and GE Aerospace. GE Power, along with GE Digital, GE Renewable Energy, and GE Energy Financial Services, would come together as GE Vernova.
On April 2, 2024, the divestiture of GE Power into GE Vernova was completed.
== Notes ==
== References ==
== Further reading ==
Sonal Patel (Jul 8, 2019). "A Brief History of GE Gas Turbines". Power Magazine. | Wikipedia/GE_Energy |
GemStone/S is an application framework and object database that was first available for the Smalltalk programming language. It is proprietary commercial software.
== Company history ==
GemStone Systems was founded on March 1, 1982, as Servio Logic, to build a database machine based on a set-theory model. Ian Huang, then technology adviser to the CEO of Sampoerna Holdings (Putera Sampoerna), instigated the founding by recruiting the following team:
Frank Bouton - President, who was a cofounder of Floating Point Systems Inc.
Dr. Michael Mulder - Vice President of Engineering, who was the Group Manager for Advanced Processor Design at Sperry Univac and Principal Architect for the Univac 1180 mainframe
Steve Ivy - Vice President of Operation, who was a senior manager at Tektronix
Leonard Yuen - Vice President, Business Development, who was the Development Manager for the IBM DB2 database
Dr. George Copeland - Chief Architect, who was the Senior Staff Engineer at the Advanced Development Group in Tektronix
Steve Redfield - Chief Engineer, who was the Chief Engineer for the Intel 80286 microprocessor
Alan Purdy - who was a Staff Engineer at Tektronix
Bob Bretl - who was a software engineering manager at Tektronix Signal Processing Systems
Allen Otis - who was also with Tektronix
John Telford - who was a software engineering manager from Electro Scientific Industries
Monty Williams
Servio Logic was renamed GemStone Systems, Inc. in June 1995. The firm developed its first hardware prototype in 1982, and shipped its first software product (GemStone 1.0) in 1986. The engineering group resides in Beaverton, Oregon. Three of the original cofounding engineers, Bob Bretl, Allen Otis, and Monty Williams (now retired), have been with the firm since its start.
GemStone's owners pioneered implementing distributed computing in business systems. Many information system features now associated with Java EE were implemented earlier in GemStone. GemStone and VisualWave were an early web application server platform. (VisualWave and VisualWorks are now owned by Cincom.) GemStone played an important sponsorship role in the Smalltalk Industry Council at the time when IBM was backing VisualAge Smalltalk. As of 2005, Instantiations acquired the world-wide rights to the IBM VisualAge Smalltalk product and has rebranded it as the VAST (VA Smalltalk) Platform.
After a major transition, GemStone for Smalltalk continued as GemStone/S and various C++ and Java products for scalable, multitier architecture distributed computing systems evolved into the GemStone/J product. This in turn gave rise to GemFire, an early example of a Data Fabric for complex event processing (CEP), event stream processing (ESP), data virtualization, and distributed caching.
On May 6, 2010, SpringSource, a division of VMware, announced it had entered into a definitive agreement to acquire GemStone.
On May 2, 2013, GemTalk Systems acquired the GemStone/S platform from Pivotal Software (the EMC and VMware spin-off).
GemFire remained with Pivotal's Big Data division. The product is available standalone but is also integrated into its Cloud Foundry PaaS as Pivotal Cloud Cache.
== Product ==
GemStone builds on the Smalltalk programming language. GemStone systems serve in mission-critical applications. GemStone frameworks still see some interest for web services and service-oriented architectures.
GemStone is an advanced Smalltalk platform for developing, deploying, and managing scalable, high-performance, multi-tier applications based on business objects.
A recent revival of interest in Smalltalk has occurred as a result of its use to generate JavaScript for e-commerce web pages and in web application frameworks such as Seaside. Systems based on object databases are not as common as those based on object-relational mapping (ORM) frameworks such as TopLink or Hibernate. In the application framework market, JBoss and BEA WebLogic are somewhat analogous to GemStone.
GemTalk Systems, the creator of GemStone, also has a series of products under the GemBuilder moniker, which provide an interface between Smalltalk or Java clients and GemStone databases. Versions of this product exist for VisualWorks Smalltalk, VA Smalltalk (VAST Platform), and Java environments.
== See also ==
SpringSource
== References ==
== External links ==
Official website
IBM
GemStone FAQ (v.1.0) | Wikipedia/GemTalk_Systems |
Web science is an emerging interdisciplinary field concerned with the study of large-scale socio-technical systems, particularly the World Wide Web. It considers the relationship between people and technology, the ways that society and technology co-constitute one another and the impact of this co-constitution on broader society. Web Science combines research from disciplines as diverse as sociology, computer science, economics, and mathematics.
An earlier definition was given by American computer scientist Ben Shneiderman: "Web Science" is the processing of information available on the web in terms similar to those applied to the natural environment.
The Web Science Institute describes Web Science as focusing "the analytical power of researchers from disciplines as diverse as mathematics, sociology, economics, psychology, law and computer science to understand and explain the Web. It is necessarily interdisciplinary – as much about social and organizational behaviour as about the underpinning technology." A central pillar of Web Science development is artificial intelligence (AI). Much of the artificial intelligence currently in development is human-centered, with goals of furthering professional development courses as well as influencing public policy. Artificial intelligence developers focus on the most impactful uses of the technology while also hoping to expedite human growth and development.
== Areas of activity ==
=== Emergent properties ===
Philip Tetlow, an IBM-based scientist influential in the emergence of web science as an independent discipline, argued for the concept of web life, which considers the Web not as a connected network of computers, as in common interpretations of the Internet, but rather as a sociotechnical machine capable of fusing individuals and organisations into larger coordinated groups. He argued that, unlike the technologies that came before it, the Web's phenomenal growth and complexity are starting to outstrip our capability to control it directly, making it impossible for us to grasp it in its completeness. Tetlow made use of Fritjof Capra's concept of the 'web of life' as a metaphor.
== Research groups ==
There are numerous academic research groups engaged in Web Science research, many of which are members of WSTNet, the Web Science Trust Network of research labs. Health Web Science emerged as a sub-discipline of Web Science that studies the Web's impact on human health outcomes and how the Web can be further used to improve them. These groups focus on the developmental possibilities Web Science provides in areas such as health care and social welfare. Discussion of web science has been widely adopted as a way for the internet to have a real-world impact in the field of medicine, currently coined Medicine 2.0. The World Wide Web acts as a medium for the spread and circulation of knowledge, and these research groups consider themselves responsible for maintaining verifiable and testable knowledge. Using their knowledge of the healthcare system as well as web science, researchers work on formatting and structuring their knowledge so that it is easily accessible on the internet. Because the Web evolves quickly, the information provided and its formatting must evolve with it; efficient formatting is needed for the successful dissemination of information.
== Related major conferences ==
Association for Computing Machinery (ACM), Hypertext Conference (HT) sponsored by SIGWEB
ACM SIGCHI Conference on Human Factors in Computing Systems (CHI)
International AAAI Conference on Weblogs and Social Media (ICWSM)
The Web Conference (WWW)
Association for Computing Machinery (ACM) Web Science Conference (WebSci)
== See also ==
Digital anthropology
Digital sociology
Health Web Science
Sociology of the Internet
Technology and society
Web Science Trust
== References ==
== External links ==
A Framework for Web Science
Talk on web science by W3C
MSc on Web Science at Institute WeST, University of Koblenz-Landau, Germany Archived 2021-12-09 at the Wayback Machine
MSc on Web Sciences divided into different branches of study at Johannes Kepler University Linz, Austria Archived 2018-01-17 at the Wayback Machine
What is Web Science? (Video clip) on YouTube
The Web Science Education Workshop
The Web Science Education Map
Master's Programme WebScience at Cologne University of Applied Sciences
The Web Science Institute at the University of Southampton | Wikipedia/Web_science |
Exasol is an analytics engine company headquartered in Germany. It supports a wide range of use cases, from standalone data warehouse deployments to analytics acceleration and AI/ML model enablement. Its technology is based on an in-memory, column-oriented, relational database management system.
Since 2008, Exasol has led the Transaction Processing Performance Council's TPC-H benchmark for analytical scenarios in all data volume-based categories: 100 GB, 300 GB, 1 TB, 3 TB, 10 TB, 30 TB, and 100 TB. Exasol holds the top position in absolute performance as well as price/performance.
== Products ==
Exasol is a parallelized relational database management system (RDBMS) that runs on a cluster of standard computer hardware servers. Following the SPMD model, identical code is executed simultaneously on each node. The data is stored in a column-oriented way, and proprietary in-memory compression methods are used. The company claims that tuning efforts are not necessary, since the database includes automatic self-optimization features such as automatic indexes, table statistics, and data distribution.
Exasol is designed to run in memory, although data is persistently stored on disk following the ACID rules. Exasol supports the SQL Standard 2003 via interfaces like ODBC, JDBC or ADO.NET. A software development kit (SDK) is provided for native integration. For online analytical processing (OLAP) applications, the Multidimensional Expressions (MDX) extension of SQL is supported via OLE DB for OLAP and XML for Analysis.
The license model is based on the RAM allocated to the database software (per GB of RAM) and is independent of the physical hardware. Customers gain maximal performance if their compressed active data fits into the licensed RAM, but the data can also be much larger.
Exasol has implemented a so-called cluster operating system (EXACluster OS). It is based on Linux and provides a runtime environment and storage layer for the RDBMS, employing a proprietary, cluster-based file system (ExaStorage). Cluster management features such as failover mechanisms and automatic cluster installation are provided.
In-database analytics is supported. Exasol integrates support to run Lua, Java, Python and GNU R scripts in parallel inside user defined functions (UDFs) within the DBMS' SQL pipeline.
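As a hedged illustration of the UDF framework described above, the sketch below creates and calls a scalar Python UDF through the community pyexasol driver. The DSN, credentials, and the language keyword (PYTHON versus PYTHON3, which varies by server version) are assumptions, and the script syntax is an approximation of Exasol's documented run(ctx) entry point rather than a verified example.
<syntaxhighlight lang="python">
import pyexasol  # community Python driver for Exasol

# Hypothetical connection details.
conn = pyexasol.connect(dsn="exasol-host:8563", user="sys", password="secret")

# UDF body follows Exasol's run(ctx) entry point; the language keyword
# (PYTHON vs PYTHON3) depends on the server version.
conn.execute("""
CREATE OR REPLACE PYTHON3 SCALAR SCRIPT double_it(x DOUBLE)
RETURNS DOUBLE AS
def run(ctx):
    return ctx.x * 2
""")

print(conn.execute("SELECT double_it(21)").fetchone())  # -> (42.0,)
</syntaxhighlight>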
== See also ==
Shared-nothing architecture
Column-oriented database
In-memory database
SQL:2008
(R)OLAP, i.e. MDX over ODBO or XMLA
Business analytics
Predictive analytics
== References ==
== External links ==
Official website
Technical Details
TPC-H benchmark results | Wikipedia/EXASolution |
Customer relationship management (CRM) is a strategic process that organizations use to manage, analyze, and improve their interactions with customers. By leveraging data-driven insights, CRM helps businesses optimize communication, enhance customer satisfaction, and drive sustainable growth.
CRM systems compile data from a range of different communication channels, including a company's website, telephone (often via a softphone), email, live chat, marketing materials and, more recently, social media. They allow businesses to learn more about their target audiences and how to better cater to their needs, thus retaining customers and driving sales growth. CRM may be used with past, present or potential customers. From the company's perspective, CRM refers to the concepts, procedures, and rules an organization follows when communicating with its consumers; this covers direct contact with customers, such as sales and service-related operations, forecasting, and the analysis of consumer patterns and behaviours.
The global customer relationship management market size is projected to grow from $101.41 billion in 2024 to $262.74 billion by 2032, at a compound annual growth rate (CAGR) of 12.6%.
== History ==
The concept of customer relationship management started in the early 1970s, when customer satisfaction was evaluated using annual surveys or by front-line staff asking customers directly. At that time, businesses had to rely on standalone mainframe systems to automate sales, but the extent of the technology allowed them only to categorize customers in spreadsheets and lists. One of the best-known precursors of modern-day CRM is the Farley File. Developed by Franklin Roosevelt's campaign manager, James Farley, the Farley File was a comprehensive set of records detailing political and personal facts about people FDR and Farley met or were supposed to meet. Using it, people that FDR met were impressed by his "recall" of facts about their family and what they were doing professionally and politically. In 1982, Kate and Robert D. Kestenbaum introduced the concept of database marketing, namely applying statistical methods to analyze and gather customer data. By 1986, Pat Sullivan and Mike Muhney had released a customer evaluation system called ACT! based on the principle of a digital Rolodex, which offered a contact management service for the first time.
The trend was followed by numerous companies and independent developers trying to maximize lead potential, including Tom Siebel of Siebel Systems, who designed the first CRM product, Siebel Customer Relationship Management, in 1993. In order to compete with these new and quickly growing stand-alone CRM solutions, established enterprise resource planning (ERP) software companies like Oracle, Zoho Corporation, SAP, Peoplesoft (an Oracle subsidiary as of 2005) and Navision started extending their sales, distribution and customer service capabilities with embedded CRM modules. This included embedding sales force automation or extended customer service (e.g. inquiry, activity management) as CRM features in their ERP.
Customer relationship management was popularized in 1997 due to the work of Siebel, Gartner, and IBM. Between 1997 and 2000, leading CRM products were enriched with shipping and marketing capabilities. Siebel introduced the first mobile CRM app called Siebel Sales Handheld in 1999. The idea of a stand-alone, cloud-hosted customer base was soon adopted by other leading providers at the time, including PeopleSoft (acquired by Oracle), Oracle, SAP and Salesforce.com.
The first open-source CRM system was developed by SugarCRM in 2004. During this period, CRM was rapidly migrating to the cloud, as a result of which it became accessible to sole entrepreneurs and small teams. This increase in accessibility generated a huge wave of price reduction. Around 2009, developers began considering the options to profit from social media's momentum and designed tools to help companies become accessible on all users' favourite networks. Many startups at the time benefited from this trend to provide exclusively social CRM solutions, including Base and Nutshell. The same year, Gartner organized and held the first Customer Relationship Management Summit, and summarized the features systems should offer to be classified as CRM solutions. In 2013 and 2014, most of the popular CRM products were linked to business intelligence systems and communication software to improve corporate communication and end-users' experience. The leading trend is to replace standardized CRM solutions with industry-specific ones, or to make them customizable enough to meet the needs of every business. In November 2016, Forrester released a report where it "identified the nine most significant CRM suites from eight prominent vendors".
== Types ==
=== Strategic ===
Strategic CRM concentrates upon the development of a customer-centric business culture.
The focus of a business on being customer-centric (in the design and implementation of its CRM strategy) will translate into improved customer lifetime value (CLV).
=== Operational ===
The primary goal of CRM systems is integration and automation of sales, marketing, and customer support. Therefore, these systems typically have a dashboard that gives an overall view of the three functions on a single customer view, a single page for each customer that a company may have. The dashboard may provide client information, past sales, previous marketing efforts, and more, summarizing all of the relationships between the customer and the firm. Operational CRM is made up of three main components: sales force automation, marketing automation, and service automation.
Sales force automation works with all stages in the sales cycle, from initially entering contact information to converting a prospective client into an actual client. It implements sales promotion analysis, automates the tracking of a client's account history for repeated sales or future sales and coordinates sales, marketing, call centers, and retail outlets. It prevents duplicate efforts between a salesperson and a customer and also automatically tracks all contacts and follow-ups between both parties.
Marketing automation focuses on easing the overall marketing process to make it more effective and efficient. CRM tools with marketing automation capabilities can automate repeated tasks, for example, sending out automated marketing emails at certain times to customers or posting marketing information on social media. The goal with marketing automation is to turn a sales lead into a full customer. CRM systems today also work on customer engagement through social media.
Service automation is the part of the CRM system that focuses on direct customer service technology. Through service automation, customers are supported through multiple channels such as phone, email, knowledge bases, ticketing portals, FAQs, and more.
=== Analytical ===
The role of analytical CRM systems is to analyze customer data collected through multiple sources and present it so that business managers can make more informed decisions. Analytical CRM systems use techniques such as data mining, correlation, and pattern recognition to analyze customer data. These analytics help improve customer service by finding small problems which can be solved, perhaps by marketing to different parts of a consumer audience differently. For example, through the analysis of a customer base's buying behavior, a company might see that this customer base has not been buying a lot of products recently. After reviewing their data, the company might think to market to this subset of consumers differently to best communicate how this company's products might benefit this group specifically.
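As a generic, vendor-neutral illustration of the kind of pattern analysis described above, the following sketch computes a small recency/frequency/monetary (RFM) summary with pandas and flags lapsed customers; the data and threshold are made up.
<syntaxhighlight lang="python">
# Generic illustration: RFM-style analysis of purchase data.
import pandas as pd

orders = pd.DataFrame({
    "customer": ["A", "A", "B", "C", "C", "C"],
    "date": pd.to_datetime(
        ["2024-01-05", "2024-03-01", "2023-11-20",
         "2024-02-10", "2024-02-28", "2024-03-15"]),
    "amount": [120.0, 80.0, 40.0, 200.0, 60.0, 90.0],
})

now = pd.Timestamp("2024-04-01")
rfm = orders.groupby("customer").agg(
    recency_days=("date", lambda d: (now - d.max()).days),
    frequency=("date", "count"),
    monetary=("amount", "sum"),
)

# Flag customers who have not bought recently, e.g. for a win-back campaign.
print(rfm[rfm["recency_days"] > 60])  # flags customer B only
</syntaxhighlight>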
=== Collaborative ===
The third primary aim of CRM systems is to incorporate external stakeholders such as suppliers, vendors, and distributors, and share customer information across groups/departments and organizations. For example, feedback can be collected from technical support calls, which could help provide direction for marketing products and services to that particular customer in the future.
=== Customer data platform ===
A customer data platform (CDP) is a computer system used by marketing departments that assembles data about individual people from various sources into one database, with which other software systems can interact. As of February 2017, about twenty companies were selling such systems and revenue for them was around US$300 million.
== Components ==
The main components of CRM are building and managing customer relationships through marketing, observing relationships as they mature through distinct phases, managing these relationships at each stage and recognizing that the distribution of the value of a relationship to the firm is not homogeneous. When building and managing customer relationships through marketing, firms might benefit from using a variety of tools to help organizational design, incentive schemes, customer structures, and more to optimize the reach of their marketing campaigns. Through the acknowledgment of the distinct phases of CRM, businesses will be able to benefit from seeing the interaction of multiple relationships as connected transactions. The final factor of CRM highlights the importance of CRM through accounting for the profitability of customer relationships. By studying the particular spending habits of customers, a firm may be able to dedicate different resources and amounts of attention to different types of consumers.
Relational Intelligence, which is the awareness of the variety of relationships a customer can have with a firm and the ability of the firm to reinforce or change those connections, is an important component of the main phases of CRM. Companies may be good at capturing demographic data, such as gender, age, income, and education, and connecting them with purchasing information to categorize customers into profitability tiers, but this is only a firm's industrial view of customer relationships. A lack of relational intelligence is a sign that firms still see customers as resources that can be used for up-sell or cross-sell opportunities, rather than people looking for interesting and personalized interactions.
CRM systems include:
Data warehouse technology, which is used to aggregate transaction information, to merge the information with CRM products, and to provide key performance indicators.
Opportunity management, which helps the company to manage unpredictable growth and demand and implement a good forecasting model to integrate sales history with sales projections.
CRM systems that track and measure marketing campaigns over multiple networks, tracking customer analysis by customer clicks and sales.
Some CRM software is available as a software as a service (SaaS), delivered via the internet and accessed via a web browser instead of being installed on a local computer. Businesses using the software do not purchase it but typically pay a recurring subscription fee to the software vendor.
For small businesses, a CRM system may consist of a contact management system that integrates emails, documents, jobs, faxes, and scheduling for individual accounts. CRM systems available for specific markets (legal, finance) frequently focus on event management and relationship tracking as opposed to financial return on investment (ROI).
CRM systems for eCommerce focus on marketing automation tasks such as cart rescue, re-engaging users with email, and personalization.
Customer-centric relationship management (CCRM) is a nascent sub-discipline that focuses on customer preferences instead of customer leverage. CCRM aims to add value by engaging customers in individual, interactive relationships.
Systems for non-profit and membership-based organizations help track constituents, fundraising, sponsors' demographics, membership levels, membership directories, volunteering and communication with individuals.
CRM not only indicates technology and strategy but also indicates an integrated approach that includes employees knowledge and organizational culture to embrace the CRM philosophy.
== Effect on customer satisfaction ==
Customer satisfaction has important implications for the economic performance of firms because it has the ability to increase customer loyalty and usage behavior and reduce customer complaints and the likelihood of customer defection. The implementation of a CRM approach is likely to affect customer satisfaction and customer knowledge for a variety of different reasons.
Firstly, firms can customize their offerings for each customer. By accumulating information across customer interactions and processing this information to discover hidden patterns, CRM applications help firms customize their offerings to suit the individual tastes of their customers. This customization enhances the perceived quality of products and services from a customer's viewpoint, and because the perceived quality is a determinant of customer satisfaction, it follows that CRM applications indirectly affect customer satisfaction. CRM applications also enable firms to provide timely, accurate processing of customer orders and requests and the ongoing management of customer accounts. For example, Piccoli and Applegate discuss how Wyndham uses IT tools to deliver a consistent service experience across its various properties to a customer. Both an improved ability to customize and reduced variability of the consumption experience enhance perceived quality, which in turn positively affects customer satisfaction. CRM applications also help firms manage customer relationships more effectively across the stages of relationship initiation, maintenance, and termination.
=== Customer benefits ===
With CRM systems, customers are served better in day-to-day processes. With more reliable information, their demand for self-service from companies will decrease. If there is less need to interact with the company for different problems, then the customer satisfaction level is expected to increase. These central benefits of CRM are connected hypothetically to the three kinds of equity, which are relationship, value, and brand, and ultimately to customer equity. Eight benefits were recognized to provide value drivers.
Enhanced ability to target profitable customers.
Integrated assistance across channels.
Enhanced sales force efficiency and effectiveness.
Improved pricing.
Customized products and services.
Improved customer service efficiency and effectiveness.
Individualized marketing messages are also called campaigns.
Connect customers and all channels on a single platform.
=== Examples ===
Research has found a 5% increase in customer retention boosts lifetime customer profits by 50% on average across multiple industries, as well as a boost of up to 90% within specific industries such as insurance. Companies that have mastered customer relationship strategies have the most successful CRM programs. For example, MBNA Europe has had a 75% annual profit growth since 1995. The firm heavily invests in screening potential cardholders. Once proper clients are identified, the firm retains 97% of its profitable customers. They implement CRM by marketing the right products to the right customers. The firm's customers' card usage is 52% above the industry norm, and the average expenditure is 30% more per transaction. Also 10% of their account holders ask for more information on cross-sale products.
Amazon has also seen success through its customer proposition. The firm implemented personal greetings, collaborative filtering, and more for the customer. Amazon also used CRM training for its employees and has seen up to 80% of customers make repeat purchases.
== Customer profile ==
A customer profile is a detailed description of a particular classification of customer, created to represent the typical users of a product or service. Customer profiling is a method of understanding customers in terms of demographics, behaviour and lifestyle. It is used to help make customer-focused decisions without confusing the scope of the project with personal opinion. Overall, profiling is gathering information that sums up consumption habits so far and projects them into the future so that customers can be grouped for marketing and advertising purposes.
Customer or consumer profiles are the essences of the data that is collected alongside core data (name, address, company) and processed through customer analytics methods, essentially a type of profiling.
The three basic methods of customer profiling are the psychographic approach, the consumer typology approach, and the consumer characteristics approach. These methods help a business design itself around who its customers are and make better customer-centered decisions.
== Improving CRM ==
Consultants hold that it is important for companies to establish strong CRM systems to improve their relational intelligence. According to this argument, a company must recognize that people have many different types of relationships with different brands. One research study analyzed relationships between consumers in China, Germany, Spain, and the United States, with over 200 brands in 11 industries including airlines, cars, and media. This information is valuable as it provides demographic, behavioral, and value-based customer segmentation. These types of relationships can be both positive and negative. Some customers view themselves as friends of the brands, while others as enemies, and some are mixed with a love-hate relationship with the brand. Some relationships are distant, intimate, or anything in between.
=== Data analysis ===
Managers must understand the different reasons for the types of relationships, and provide the customer with what they are looking for. Companies can collect this information by using surveys, interviews, and more, with current customers.
Companies must also improve the relational intelligence of their CRM systems. Companies store and receive huge amounts of data through emails, online chat sessions, phone calls, and more. Many companies do not properly make use of this great amount of data, however. All of these are signs of what types of relationships the customer wants with the firm, and therefore companies may consider investing more time and effort in building out their relational intelligence. Companies can use data mining technologies and web searches to understand relational signals. Social media such as social networking sites, blogs, and forums can also be used to collect and analyze information. Understanding the customer and capturing this data allows companies to convert customers' signals into information and knowledge that the firm can use to understand a potential customer's desired relations with a brand.
=== Employee training ===
Many firms have also implemented training programs to teach employees how to recognize and create strong customer-brand relationships. Other employees have also been trained in social psychology and the social sciences to help bolster customer relationships. Customer service representatives must be trained to value customer relationships and trained to understand existing customer profiles. Even the finance and legal departments should understand how to manage and build relationships with customers.
== In practice ==
=== Call centers ===
Contact centre CRM providers are popular for small and mid-market businesses. These systems codify the interactions between the company and customers by using analytics and key performance indicators to give the users information on where to focus their marketing and customer service. This allows agents to have access to a caller's history to provide personalized customer communication. The intention is to maximize average revenue per user, decrease churn rate and decrease idle and unproductive contact with the customers.
Growing in popularity is the idea of gamifying, or using game design elements and game principles in a non-game environment such as customer service environments. The gamification of customer service environments includes providing elements found in games like rewards and bonus points to customer service representatives as a method of feedback for a job well done.
Gamification tools can motivate agents by tapping into their desire for rewards, recognition, achievements, and competition.
=== Contact-center automation ===
Contact-center automation (CCA), the practice of having an integrated system that coordinates contacts between an organization and the public, is designed to reduce the repetitive and tedious parts of a contact center agent's job. Automation does this through pre-recorded audio messages that help customers solve their problems. For example, an automated contact center may be able to route a customer through a series of commands asking him or her to select a certain number to speak with a particular contact center agent who specializes in the field in which the customer has a question. Software tools can also integrate with the agent's desktop tools to handle customer questions and requests. This also saves time on behalf of the employees.
=== Social media ===
Social CRM involves the use of social media and technology to engage and learn from consumers. Because the public, especially young people, are increasingly using social networking sites, companies use these sites to draw attention to their products, services and brands, with the aim of building up customer relationships to increase demand. With the increase in the use of social media platforms, integrating CRM with the help of social media can potentially be a quicker and more cost-friendly process.
Some CRM systems integrate social media sites like Twitter, LinkedIn, and Facebook to track and communicate with customers. These customers also share their own opinions and experiences with a company's products and services, giving these firms more insight. Therefore, these firms can both share their own opinions and also track the opinions of their customers.
Enterprise feedback management software platforms combine internal survey data with trends identified through social media to allow businesses to make more accurate decisions on which products to supply.
=== Location-based services ===
CRM systems can also include technologies that create geographic marketing campaigns. The systems take in information based on a customer's physical location and sometimes integrates it with popular location-based GPS applications. It can be used for networking or contact management as well to help increase sales based on location.
=== Business-to-business transactions ===
Despite the general notion that CRM systems were created for customer-centric businesses, they can also be applied to B2B environments to streamline and improve customer management conditions. For the best level of CRM operation in a B2B environment, the software must be personalized and delivered at individual levels.
The main differences between business-to-consumer (B2C) and business-to-business CRM systems concern aspects like sizing of contact databases and length of relationships.
== Market trends ==
=== Social networking ===
At the Gartner CRM Summit 2010, challenges such as capturing data from social networking traffic on Twitter, Facebook, and other online social networking sites were discussed, and solutions were proposed to help bring in more clientele.
The era of the "social customer" refers to the use of social media by customers.
=== Mobile ===
Some CRM systems are equipped with mobile capabilities, making information accessible to remote sales staff.
=== Cloud computing and SaaS ===
Many CRM vendors offer subscription-based web tools (cloud computing) and SaaS. Salesforce.com was the first company to provide enterprise applications through a web browser, and has maintained its leadership position.
Traditional providers moved into the cloud-based market via acquisitions of smaller providers: Oracle purchased RightNow in October 2011, and Taleo and Eloqua in 2012; SAP acquired SuccessFactors in December 2011 and NetSuite acquired Verenia in 2022.
=== Sales and sales force automation ===
Sales forces also play an important role in CRM, as maximizing sales effectiveness and increasing sales productivity is a driving force behind the adoption of CRM software. Some of the top CRM trends identified in 2021 include focusing on customer service automation such as chatbots, hyper-personalization based on customer data and insights, and the use of unified CRM systems. CRM vendors support sales productivity with different products, such as tools that measure the effectiveness of ads that appear in 3D video games.
Pharmaceutical companies were some of the first investors in sales force automation (SFA), and some are on their third- or fourth-generation implementations. However, until recently, the deployments did not extend beyond SFA, limiting their scope and their interest to Gartner analysts.
=== Vendor relationship management ===
Another related development is vendor relationship management (VRM), which provides tools and services that allow customers to manage their individual relationships with vendors. VRM development has grown out of efforts by ProjectVRM at Harvard's Berkman Center for Internet & Society and Identity Commons' Internet Identity Workshops, as well as by a growing number of startups and established companies. VRM was the subject of a cover story in the May 2010 issue of CRM Magazine.
=== Customer success ===
Another trend worth noting is the rise of Customer Success as a discipline within companies. More and more companies are establishing Customer Success teams, separate from the traditional sales team, and tasking them with managing existing customer relationships. This trend fuels demand for additional capabilities that give a more holistic understanding of customer health, which is a limitation for many existing vendors in the space. As a result, a growing number of new entrants are entering the market, while existing vendors add capabilities in this area to their suites.
=== AI and predictive analytics ===
In 2017, artificial intelligence and predictive analytics were identified as the newest trends in CRM.
== Criticism ==
Companies face large challenges when trying to implement CRM systems. Consumer companies frequently manage their customer relationships haphazardly and unprofitably. They may not effectively or adequately use their connections with their customers, due to misunderstandings or misinterpretations of a CRM system's analysis. Clients may be treated like an exchange party rather than a unique individual, due to an occasional lack of a bridge between the CRM data and the CRM analysis output. Many studies show that customers are frequently frustrated by a company's inability to meet their relationship expectations, while on the other side companies do not always know how to translate the data they have gained from CRM software into a feasible action plan. In 2003, a Gartner report estimated that more than $2 billion had been spent on software that was not being used. According to CSO Insights, less than 40 percent of 1,275 participating companies had end-user adoption rates above 90 percent. Many corporations only use CRM systems on a partial or fragmented basis. In a 2007 survey from the UK, four-fifths of senior executives reported that their biggest challenge was getting their staff to use the systems they had installed, and 43 percent of respondents said they use less than half the functionality of their existing systems. However, market research regarding consumers' preferences may increase the adoption of CRM among developing countries' consumers.
Collection of customer data such as personally identifiable information must strictly comply with customer privacy laws, which often requires extra expenditure on legal support.
Part of the paradox with CRM stems from the challenge of determining exactly what CRM is and what it can do for a company. The CRM paradox, also referred to as the "dark side of CRM", may entail favoritism and differential treatment of some customers. This can happen because a business prioritizes customers who are more profitable, more relationship-oriented, or tend to have increased loyalty to the company. Although focusing on such customers is not in itself a bad thing, it can leave other customers feeling left out and alienated, potentially decreasing profits.
CRM technologies can easily become ineffective if there is no proper management and they are not implemented correctly. The data sets must also be connected, distributed, and organized properly so that users can access the information they need quickly and easily. Research studies also show that customers are increasingly becoming dissatisfied with contact center experiences due to lags and wait times. They also request and demand multiple channels of communication with a company, and these channels must transfer information seamlessly. Therefore, it is increasingly important for companies to deliver a cross-channel customer experience that is both consistent and reliable.
== See also ==
Business portal
== References ==
Model–view–controller (MVC) is a software architectural pattern commonly used for developing user interfaces that divides the related program logic into three interconnected elements. These elements are:
the model, the internal representations of information
the view, the interface that presents information to and accepts it from the user
the controller, the software linking the two.
Traditionally used for desktop graphical user interfaces (GUIs), this pattern became popular for designing web applications. Popular programming languages have MVC frameworks that facilitate the implementation of the pattern.
== History ==
One of the seminal insights in the early development of graphical user interfaces, MVC became one of the first approaches to describe and implement software constructs in terms of their responsibilities.
Trygve Reenskaug created MVC while working on Smalltalk-79 as a visiting scientist at the Xerox Palo Alto Research Center (PARC) in the late 1970s. He wanted a pattern that could be used to structure any program where users interact with a large, convoluted data set. His design initially had four parts: model, view, thing, and editor. After discussing it with the other Smalltalk developers, he and the rest of the group settled on model, view, and controller instead.
In their final design, a model represents some part of the program purely and intuitively. A view is a visual representation of a model, retrieving data from the model to display to the user and passing requests back and forth between the user and the model. A controller is an organizational part of the user interface that lays out and coordinates multiple views on the screen, and which receives user input and sends the appropriate messages to its underlying views. This design also includes an editor as a specialized kind of controller used to modify a particular view, and which is created through that view.
Smalltalk-80 supports a version of MVC that evolved from this one. It provides abstract view and controller classes as well as various concrete subclasses of each that represent different generic widgets. In this scheme, a view represents some way of displaying information to the user, and a controller represents some way for the user to interact with a view. A view is also coupled to a model object, but the structure of that object is left up to the application programmer. The Smalltalk-80 environment also includes an "MVC Inspector", a development tool for viewing the structure of a given model, view, and controller side-by-side.
In 1988, an article in the Journal of Object-Oriented Programming (JOOP) by two ex-PARC employees presented MVC as a general "programming paradigm and methodology" for Smalltalk-80 developers. However, their scheme differed from both Reenskaug et al.'s and that presented by the Smalltalk-80 reference books. They defined a view as covering any graphical concern, with a controller being a more abstract, generally invisible object that receives user input and interacts with one or many views and only one model.
The MVC pattern subsequently evolved, giving rise to variants such as hierarchical model–view–controller (HMVC), model–view–adapter (MVA), model–view–presenter (MVP), model–view–viewmodel (MVVM), and others that adapted MVC to different contexts.
The use of the MVC pattern in web applications grew after the introduction of NeXT's WebObjects in 1996, which was originally written in Objective-C (that borrowed heavily from Smalltalk) and helped enforce MVC principles. Later, the MVC pattern became popular with Java developers when WebObjects was ported to Java. Later frameworks for Java, such as Spring (released in October 2002), continued the strong bond between Java and MVC.
In 2003, Martin Fowler published Patterns of Enterprise Application Architecture, which presented MVC as a pattern where an "input controller" receives a request, sends the appropriate messages to a model object, takes a response from the model object, and passes the response to the appropriate view for display. This is close to the approach taken by the Ruby on Rails web application framework (August 2004), in which the client sends requests to the server via an in-browser view; these requests are handled by a controller on the server, and the controller communicates with the appropriate model objects. The Django framework (July 2005, for Python) put forward a similar "model-template-view" (MTV) take on the pattern, in which a view retrieves data from models and passes it to templates for display. Both Rails and Django debuted with a strong emphasis on rapid deployment, which increased MVC's popularity outside the traditional enterprise environment in which it had long been popular.
== Components ==
=== Model ===
The central component of the pattern. It is the application's dynamic data structure, independent of the user interface. It directly manages the data, logic and rules of the application. In Smalltalk-80, the design of a model type is left entirely to the programmer. With WebObjects, Rails, and Django, a model type typically represents a table in the application's database. The model is essential for keeping the data organized and consistent. It ensures that the application's data behaves according to the defined rules and logic.
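As a brief illustration of the table-per-model convention, a minimal Django model might look like the following sketch (the Client class and its fields are hypothetical, not taken from any particular application):

# A minimal Django model: one class maps to one database table,
# and each field maps to a column. Names here are illustrative.
from django.db import models

class Client(models.Model):
    name = models.CharField(max_length=100)    # VARCHAR(100) column
    email = models.EmailField(unique=True)     # unique, indexed column
    created = models.DateTimeField(auto_now_add=True)  # set on insert

    def __str__(self):
        return self.name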
=== View ===
Any representation of information such as a chart, diagram or table. Multiple views of the same information are possible, such as a bar chart for management and a tabular view for accountants.
In Smalltalk-80, a view is just a visual representation of a model, and does not handle user input. With WebObjects, a view represents a complete user interface element such as a menu or button, and does receive input from the user. In both Smalltalk-80 and WebObjects, however, views are meant to be general-purpose and composable.
With Rails and Django, the role of the view is played by HTML templates, so in their scheme a view specifies an in-browser user interface rather than representing a user interface widget directly. (Django opts to call this kind of object a "template" in light of this.) This approach puts relatively less emphasis on small, composable views; a typical Rails view has a one-to-one relationship with a controller action.
Smalltalk-80 views communicate with both a model and a controller, whereas with WebObjects, a view talks only to a controller, which then talks to a model. With Rails and Django, a view/template is used by a controller/view when preparing a response to the client.
=== Controller ===
Accepts input and converts it to commands for the model or view.
A Smalltalk-80 controller handles user input events, such as button presses or mouse movement. At any given time, each controller has one associated view and model, although one model object may hear from many different controllers. Only one controller, the "active" controller, receives user input at any given time; a global window manager object is responsible for setting the current active controller. If user input prompts a change in a model, the controller will signal the model to change, but the model is then responsible for telling its views to update.
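This update flow, in which the controller signals the model but the model itself notifies its views, can be sketched in a few lines of Python (the class names are illustrative; Smalltalk-80 realized the same idea through its own dependents/update mechanism):

# Minimal sketch of the Smalltalk-80-style update flow.
class Model:
    """Holds application data and notifies registered views of changes."""
    def __init__(self):
        self._views = []
        self.value = 0

    def attach(self, view):
        self._views.append(view)

    def set_value(self, value):
        self.value = value
        for view in self._views:   # the model, not the controller,
            view.update(self)      # tells its views to refresh

class View:
    def update(self, model):
        print(f"display: {model.value}")

class Controller:
    """Receives user input and signals the model to change."""
    def __init__(self, model):
        self.model = model

    def on_user_input(self, new_value):
        self.model.set_value(new_value)

model = Model()
model.attach(View())
Controller(model).on_user_input(42)   # prints "display: 42"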
In WebObjects, the views handle user input, and the controller mediates between the views and the models. There may be only one controller per application, or one controller per window. Much of the application-specific logic is found in the controller.
In Rails, requests arriving at the on-server application from the client are sent to a "router", which maps the request to a specific method of a specific controller. Within that method, the controller interacts with the request data and any relevant model objects and prepares a response using a view. Conventionally, each view has an associated controller; for example, if the application had a client view, it would typically have an associated Clients controller as well. However, developers are free to make other kinds of controllers if they wish.
Django calls the object playing this role a "view" instead of a controller. A Django view is a function that receives a web request and returns a web response. It may use templates to create the response.
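A minimal sketch of such Django views follows (the function names and template path are hypothetical; only the HttpResponse and render calls are standard Django):

# Two minimal Django "views" (the controller role in classic MVC terms).
from django.http import HttpResponse
from django.shortcuts import render

def greeting(request):
    # Plain response, no template.
    return HttpResponse("Hello, world")

def client_list(request):
    # Retrieve data (normally via a model query such as
    # Client.objects.all()) and pass it to a template for rendering.
    clients = ["Alice", "Bob"]  # stand-in for model data
    return render(request, "clients/list.html", {"clients": clients})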
== Interactions ==
In addition to dividing the application into a model, a view, and a controller component, the MVC design pattern defines the interactions between these three components:
The model is responsible for managing the data of the application. It receives user input from the controller.
The view renders a presentation of the model in a particular format.
The controller responds to the user input and performs interactions on the data model objects. The controller receives the input, optionally validates it and then passes the input to the model.
As with other software patterns, MVC expresses the "core of the solution" to a problem while allowing it to be adapted for each system. Particular MVC designs can vary significantly from the traditional description here.
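One minimal Python sketch of these interactions follows; the task-list domain and every name in it are illustrative, not drawn from any particular framework:

# Sketch of the interactions: the controller receives and validates
# input, the model manages the data, and the view renders it.
class TaskModel:
    """Manages the application's data."""
    def __init__(self):
        self.tasks = []

    def add(self, task):
        self.tasks.append(task)

class TaskView:
    """Renders a presentation of the model in a particular format."""
    def render(self, model):
        return "\n".join(f"- {t}" for t in model.tasks)

class TaskController:
    """Receives input, optionally validates it, and passes it to the model."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def submit(self, raw_input):
        task = raw_input.strip()
        if task:                     # the optional validation step
            self.model.add(task)
        return self.view.render(self.model)

controller = TaskController(TaskModel(), TaskView())
print(controller.submit("  write report "))   # prints "- write report"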
== Motivation ==
As Alan Kay wrote in 2003, the original motivation behind MVC was to allow the creation of a graphical interface for any object. This was outlined in detail in Richard Pawson's book Naked Objects.
Trygve Reenskaug, originator of MVC at PARC, has written that "MVC was conceived as a general solution to the problem of users controlling a large and complex data set."
In their 1991 guide Inside Smalltalk, Carleton University computer science professors Wilf LaLonde and John Pugh described the advantages of Smalltalk-80-style MVC as:
independence of presentation and data, e.g. multiple views on one model simultaneously,
composable presentation widgets, e.g. one view used as a subview of another,
switchable input modes, by swapping one controller for another at runtime, and
independence of input and output processing, via the separate responsibilities of controllers and views.
== Use in web applications ==
Although originally developed for desktop computing, MVC has been widely adopted as a design for World Wide Web applications in major programming languages. Several web frameworks have been created that enforce the pattern. These software frameworks vary in their interpretations, mainly in the way that the MVC responsibilities are divided between the client and server. Early MVC frameworks took a thin client approach that placed almost the entire model, view and controller logic on the server. In this approach, the client sends hyperlink requests or form submissions to the controller and then receives a complete and updated web page (or other document) from the view; the model exists entirely on the server. Later frameworks have allowed the MVC components to execute partly on the client, using Ajax to synchronize data.
== See also ==
== References ==
== Bibliography ==
Salesforce, Inc. is an American cloud-based software company headquartered in San Francisco, California. It provides applications focused on sales, customer service, marketing automation, e-commerce, analytics, artificial intelligence, and application development.
Founded by former Oracle executive Marc Benioff in March 1999, Salesforce grew quickly, making its initial public offering in 2004. As of September 2022, Salesforce is the 61st largest company in the world by market cap with a value of nearly US$153 billion. It became the world's largest enterprise applications firm in 2022. Salesforce ranked 491st on the 2023 edition of the Fortune 500, making $31.352 billion in revenue. Since 2020, Salesforce has also been a component of the Dow Jones Industrial Average.
== History ==
Salesforce was founded on March 8, 1999, by former Oracle executive Marc Benioff, together with Parker Harris, Dave Moellenhoff, and Frank Dominguez, as a software-as-a-service (SaaS) company. The first prototype of Salesforce was launched in November 1999.
Two of Salesforce's earliest investors were Larry Ellison, the co-founder and first CEO of Oracle, and Halsey Minor, the founder of CNET.
Salesforce was severely affected by the dot-com bubble bursting at the beginning of the new millennium, resulting in the company laying off 20% of its workforce. Despite its losses, Salesforce continued to grow during the early 2000s. Salesforce also gained notability during this period for its "the end of software" tagline and marketing campaign, and even hired actors to hold up signs with its slogan outside a Siebel Systems conference. Salesforce's revenue continued to increase from 2000 to 2003, skyrocketing from $5.4 million in fiscal year 2001 to over $100 million by December 2003.
In 2003, Salesforce held its first annual Dreamforce conference in San Francisco.
In June 2004, the company had its initial public offering on the New York Stock Exchange under the stock symbol CRM and raised US$110 million. In 2006, Salesforce launched Idea Exchange, a platform that allows customers to connect with company product managers.
In 2009, Salesforce passed $1 billion in annual revenue. Also, in 2009, the company launched Service Cloud, an application that helps companies manage service conversations about their products and services.
In 2014, the company released Trailhead, a free online learning platform. In October 2014, Salesforce announced the development of its Customer Success Platform. In September 2016, Salesforce announced the launch of Einstein, an artificial intelligence platform that supports several of Salesforce's cloud services. It reportedly paid $20 million for a 20-year license to be the exclusive business-oriented software company allowed to use Albert Einstein's likeness.
Salesforce launched the Sustainability Cloud (Net Zero Cloud as of 2022), which is used by companies to track progress towards achieving their net zero emissions goals.
In 2020, Salesforce joined the Dow Jones Industrial Average, replacing energy giant and Standard Oil descendant ExxonMobil. Salesforce's accession to the Dow Jones was concurrent with that of Amgen and Honeywell. Because the Dow Jones weights its components by share price, Salesforce was the largest technology component of the index at its accession.
Across 2020 and 2021, Salesforce saw some notable leadership changes; in February 2020, co-chief executive officer Keith Block stepped down from his position in the company. Marc Benioff remained as chairman and chief executive officer. In February 2021, Amy Weaver, previously the chief legal officer, became CFO. Former CFO Mark Hawkins announced that he would be retiring in October. In November 2021, Bret Taylor was named vice chair and co-CEO of the company.
In December 2020, it was announced that Salesforce would acquire Slack for $27.7 billion, its largest acquisition to date. The acquisition closed in July 2021. Journalists covering the acquisition emphasized the price Salesforce paid for Slack, which was a 54% premium compared to Slack's market value.
In April 2022, "Salesforce.com, Inc." changed its legal name to "Salesforce, Inc."
Acceleration Economy reported that Salesforce had surpassed SAP to become the world's largest enterprise software vendor in August 2022.
The next month, Salesforce announced a partnership with Meta Platforms. The deal called for Meta's consumer application WhatsApp to integrate Salesforce's Customer 360 platform to allow consumers to communicate with companies directly.
In November 2022, Salesforce announced it would terminate some employees from its sales team. That same month, Salesforce announced its co-CEO and vice chair, Bret Taylor, would be stepping down from his roles at the end of January 2023, with Benioff continuing to run the company and serve as board chair. Within the week, former Tableau CEO Mark Nelson and former Slack CEO Stewart Butterfield also announced their departures. When asked about the departures, Benioff stated, "people come and people go"; Salesforce's stock dropped to a 52-week low after Nelson's resignation.
In January 2023, the company announced a layoff of about 10% of its workforce, or approximately 8,000 positions. According to Benioff, the company hired too aggressively during the COVID-19 pandemic, and the increase in working from home led to the layoff. The company also reduced office space as part of the restructuring plan. The same month, activist investor Elliott Management announced that it would acquire a "big stake" in the company.
In January 2024, Salesforce announced it was laying off 700 employees (about 1%) of its global staff.
== Services ==
Salesforce offers several customer relationship management (CRM) services, including: Sales Cloud, Service Cloud, Marketing Cloud, and Commerce Cloud and Platform. Additional technologies include Slack.
Other services include app creation, data integration and visualization, and training.
Salesforce launched a suite of features called Salesforce Foundations in September 2024, bundling connected functionality across department-specific Sales Cloud and Service Cloud products.
=== Artificial intelligence ===
Launched at Dreamforce in 2016, Salesforce Einstein was the company's first artificial intelligence product, developed from a set of technologies underlying the Salesforce platform.
In March 2023, Salesforce announced ChatGPT integration in Slack was available to any organization, and the launch of Einstein GPT, a generative AI service.
In March 2024, Salesforce launched Einstein Copilot: Health Actions, a conversation assistant based on its earlier artificial intelligence platform Einstein. It helps with making appointments, referrals, and gathering patient information. In July 2024, Salesforce released an AI agent, the Einstein Service Agent, with the ability to perform customer service actions, such as enabling product returns or refunds.
In September 2024, the company deployed Agentforce (succeeding Salesforce Einstein), an agentic AI platform where users can create autonomous agents for customer service assistance, developing marketing campaigns, and coaching salespersons.
=== Salesforce Platform ===
Salesforce Platform (formerly known as Force.com) is a platform as a service (PaaS) that allows developers to add applications to the main Salesforce.com application. These applications are hosted on Salesforce.com infrastructure.
Force.com applications are built using Apex, a proprietary Java-like programming language, with HTML originally generated via the "Visualforce" framework. Since 2015, the "Lightning Components" framework has also been supported. The Apex compiler was designed by James Spagnola.
As of 2014, the Force.com platform had 1.5 million registered developers according to Salesforce.
=== AppExchange ===
Launched in 2005, the Salesforce AppExchange is an online app store that allows users to sell third-party applications and consulting services.
As of 2021, the exchange has over 5,000 apps listed.
=== Trailhead ===
Launched in 2014, Trailhead is a free online learning platform with courses focused on Salesforce technologies.
=== Discontinued ===
Desk.com was a SaaS help desk and customer support product acquired by Salesforce for $50 million in 2011, and consolidated with other services into Service Cloud Essentials in March 2018.
Do.com was a cloud-based task management system for small groups and businesses, introduced in 2011, and discontinued in 2014.
== Operations ==
Salesforce is headquartered in San Francisco in the Salesforce Tower. Salesforce has 110 offices, including ones in Hong Kong, Israel, London, Paris, Sydney and Tokyo.
Standard & Poor's added Salesforce to the S&P 500 Index in September 2008. In August 2020, S&P Dow Jones Indices announced that Salesforce would replace ExxonMobil in the Dow Jones Industrial Average.
=== Culture ===
According to Marc Benioff, Salesforce's corporate culture is based on the concept of Ohana.
In 2021, Cynthia Perry, a design research senior manager, resigned, alleging discrimination in the workplace and posting her resignation letter on LinkedIn.
On September 10, 2021, Benioff tweeted that the company is prepared to help any employee who wishes to move out of the state of Texas, following abortion legislation in Texas, announced on September 1, 2021.
=== Finances ===
For the fiscal year 2022, Salesforce reported revenue of US$26.49 billion, an increase of 25% year-over-year and 24% in constant currency. Salesforce ranked 126 on the 2022 Fortune 500 list of the largest United States companies by revenue.
=== IT infrastructure ===
In 2008, Salesforce migrated from Sun Fire E25K servers with SPARC processors running Solaris, to Dell servers with AMD processors, running Linux.
In 2012, Salesforce announced plans to build a data center in the UK to handle European citizens' personal data. The center opened in 2014.
In 2013, Salesforce and Oracle announced a nine-year partnership focusing on applications, platforms, and infrastructure.
In 2016, Salesforce announced that it will use Amazon Web Services hosting for countries with restrictive data residency requirements and where no Salesforce data centers are operating.
== Acquisitions ==
=== 2006–2015 ===
In 2006, Salesforce acquired Sendia, a mobile web service firm, for $15 million and Kieden, an online advertising company. In 2007, Koral, a content management service, was acquired. In 2008, Salesforce acquired Instranet for $31.5 million. In 2010, Salesforce acquired multiple companies, including Jigsaw, a cloud-based data service provider, for $142 million, Heroku, a Ruby application platform-as-a-service, for $212 million, and Activa Live Chat, a live chat software provider.
In 2011, Salesforce acquired Dimdim, a web conferencing platform, for $31 million, Radian6, a social media tracking company, for $340 million, and Rypple, a performance management software company. Rypple became known as Work.com in 2012. In 2012, Salesforce acquired Buddy Media, a social media marketer, for $689 million, and GoInstant, a browser collaboration startup, for $70 million.
In 2013, Salesforce acquired ExactTarget, an email marketer, for $2.5 billion. In 2014, Salesforce acquired RelateIQ, a data company, for $390 million. In 2015, Salesforce acquired multiple companies for undisclosed sums, including Toopher, a mobile authentication company, Tempo, an AI calendar app, and MinHash, an AI platform. The company also acquired SteelBrick, a software company, for $360 million.
=== 2016–present ===
In 2016, Salesforce acquired Demandware, a cloud-based provider of e-commerce services, for $2.8 billion and Quip, a word processing app, for $750 million. In 2017, the company acquired Sequence, a user experience design agency. In 2018, Salesforce acquired several companies, including MuleSoft, a cloud service company, for $6.5 billion, as well as Rebel, an email services provider, and Datorama, an AI marketing platform, for undisclosed amounts.
Salesforce completed its acquisition of analytics software company Tableau for $15.7 billion in 2019, and of Slack Technologies for $27.7 billion in 2021. Salesforce also made smaller acquisitions throughout 2019, 2020, and 2021, including ClickSoftware for $1.35 billion, consulting firm Acumen Solutions for $570 million, CRM firm Vlocity for $1.33 billion, privacy compliance startup Phennecs for $16.5 million, and robotic process automation firm Servicetrace for an undisclosed amount.
In May 2022, Salesforce announced the acquisition of Slack-bot maker Troops.ai, expected to close in 2023.
In September 2023, Salesforce acquired Airkit.ai, a creator of AI-powered customer service applications and experiences. In December 2023, Salesforce announced it would acquire Spiff, an automated commission management platform for an undisclosed amount.
In September 2024, Salesforce acquired data management firm Own for $1.9 billion, with Own's roughly 1,000 employees given a deadline of January 31, 2025, in their existing positions. Salesforce also acquired PredictSpring and Tenyx in 2024.
In May 2025, Salesforce announced plans to acquire data management platform Informatica for about $8 billion. It had initially been in talks to purchase the company the prior year, but the two parties were unable to agree on terms.
== Controversies ==
=== Phishing attack ===
In November 2007, a phishing attack compromised contact information on a number of Salesforce customers. Some customers then received phishing emails that appeared to be invoices from Salesforce. Salesforce stated that "a phisher tricked someone into disclosing a password, but this intrusion did not stem from a security flaw in [the salesforce.com] application or database."
=== MEATPISTOL presenters fired at DEF CON ===
In 2017, at DEF CON, two security engineers were fired after giving a presentation on an internal project called MEATPISTOL. The presenters were sent a message 30 minutes prior to the presentation telling them not to go on stage, but the message wasn't seen until after they finished. The MEATPISTOL tool was anticipated to be released as open-source at the time of the presentation, but Salesforce did not release the code to developers or the public during the conference. The terminated employees called on the company to open-source the software after being dismissed.
=== RAICES donation refusal ===
The not-for-profit organization Refugee and Immigrant Center for Education and Legal Services (RAICES) rejected a US$250,000 donation from Salesforce because the company has contracts with U.S. Customs and Border Protection.
=== 2018 taxes ===
In December 2019, the Institute on Taxation and Economic Policy found that Salesforce was one of 91 companies that "paid an effective federal tax rate of 0% or less" in 2018, as a result of the Tax Cuts and Jobs Act of 2017. Their findings were published in a report based on the 379 Fortune 500 companies that declared a profit in 2018.
=== Sex-trafficking lawsuit ===
In March 2019, Salesforce faced a lawsuit by 50 anonymous women claiming to be victims and survivors of sex trafficking, abuse, and rape, alleging the company profited from and helped build technology that facilitated sex trafficking on the now-defunct Backpage.com. In March 2021, a judge granted partial dismissal of the case, dismissing charges of negligence and conspiracy, but allowed the case to proceed regarding charges of sex trafficking. In March 2024, the case was dismissed without prejudice. In September 2024, the US Court of Appeals for the Ninth Circuit denied a request to reverse the dismissal.
=== Disability discrimination lawsuit in Japan ===
In July 2021, Salesforce Japan faced a discrimination lawsuit from a former employee, according to Japanese legal media. The firm declined to comment on the suit to the media. The ex-employee, who has Autism Spectrum Disorder and ADHD, claimed she was discriminated against because of her disability and terminated in the firm's Japan web marketing team.
The suit alleged that the anonymous woman, as an employee at Salesforce Japan from 2018 to 2020, faced hate speech, microaggressions, and rejection of reasonable accommodation from her manager. She alleged that her attempts to resolve the problem were met with pressure from HR and a job coach. The lawsuit is ongoing in the Tokyo District Court.
In Japan, the legal disability quota for private companies is 2.3%. Salesforce Japan, however, did not meet the quota and paid the corresponding levy every year from 2009 to 2021, except 2017. In 2020, the firm did not report the number of disabled employees to Japanese labor officials. Depending on the result of the lawsuit, the firm may face negative consequences for its disability hiring, such as an improvement plan under the disability employment act or public disclosure by labor officials.
=== Employee layoffs/Matthew McConaughey's salary ===
In January 2023, Salesforce reported that 8,000 employees had been laid off as a result of over-hiring during the COVID-19 lockdown and a global economic downturn. In March 2023, the Wall Street Journal reported that actor Matthew McConaughey was paid $10 million yearly for his role as a "creative advisor and TV pitchman". American musician will.i.am was also cited to be on the company's payroll due to his "strong understanding of technology".
=== Wrongful termination lawsuit ===
In September 2024, former Salesforce Senior Director Dina Zelikson filed a lawsuit against the company in San Francisco Superior Court, alleging wrongful termination, discrimination, and retaliation during a medical leave.
== Salesforce Ventures ==
In 2009, Salesforce began investing in startups. These investments became Salesforce Ventures, headed by John Somorjai. In September 2014, SFV set up the Salesforce1 Fund, aimed at start-ups creating applications primarily for mobile phones. In December 2018, Salesforce Ventures announced the launch of the Japan Trailblazer Fund, focused on Japanese startups.
In August 2018, Salesforce Ventures reported investments totaling over $1 billion in 275 companies, including CloudCraze (e-commerce), Figure Eight (artificial intelligence), Forter (online fraud prevention), and FinancialForce (automation software). In 2019, SFV's five largest investments—Domo (data-visualization software), SurveyMonkey (online survey software), Twilio (cloud-communication), Dropbox (cloud storage), and DocuSign (secure e-signature company)—accounted for nearly half of its portfolio. In 2021, Salesforce announced that its investments had resulted in a $2.17 billion annual gain. In June 2023, Salesforce increased the size of its Generative AI Fund for startups from $250 million to $500 million, and in September 2024 to $1 billion.
== Office locations ==
Salesforce Tower (San Francisco, US)
Salesforce Tower (Indianapolis, US)
Salesforce Tower (London, UK)
Salesforce Tower (Sydney, AU)
Salesforce Singapore
== Notes ==
== References ==
== External links ==
Business data for Salesforce, Inc.:
Microsoft Dynamics 365 is an integrated suite of enterprise resource planning (ERP) and customer relationship management (CRM) applications offered by Microsoft. It combines various functions, such as sales, customer service, field service, operations, finance, marketing, and project service automation, into a single platform.
Dynamics 365 integrates with other Microsoft products such as Office 365, Power BI, and Azure, allowing businesses to streamline their operations, improve customer engagement, and make data-driven decisions. The platform is highly customizable, enabling organizations to tailor it to their specific needs and industry requirements.
Dynamics 365 is designed to help businesses unify their processes, gain insights into their operations, and foster better relationships with customers. It provides tools for managing sales leads, automating marketing campaigns, tracking customer interactions, managing finances, optimizing operations, and more. The platform is available on a subscription basis, with different modules and pricing options to suit the needs of various businesses.
== Applications ==
Microsoft Dynamics is largely made up of products developed by companies that Microsoft acquired: Dynamics GP (formerly Great Plains), Dynamics NAV (formerly Navision; now forked into Dynamics 365 Business Central), Dynamics SL (formerly Solomon), and Dynamics AX (formerly Axapta; now forked into Dynamics 365 Finance and Operations). The various products are aimed at different market segments, ranging from small and medium-sized businesses (SMBs) to large organizations with multi-language, currency, and legal entity capability. In recent years Microsoft Dynamics ERP has focused its marketing and innovation efforts on SaaS suites.
Microsoft Dynamics 365 contains more than 15 applications:
Dynamics 365 Sales – Sales Leaders, Sales Operations
Dynamics 365 Customer Data Platform – Customer Insights
Dynamics 365 Customer Data Platform – Customer Voice
Dynamics 365 Customer Service – Customer Service Leaders, Customer Service Operations
Dynamics 365 Field Service – Field Service Leaders, Field Service Operations
Dynamics 365 Remote Assist
Dynamics 365 Human Resources – Attract, Onboard, Core HR
Dynamics 365 Finance & Operations – Finance Leaders, Operation Leaders
Dynamics 365 Supply Chain Management – Streamline planning, production, stock, warehouse, and transportation.
Dynamics 365 Intelligent Order Management
Dynamics 365 Commerce
Dynamics 365 Project Operations
Dynamics 365 Marketing – Adobe Marketing Cloud, Dynamics 365 for Marketing
Dynamics 365 Artificial Intelligence – AI for Sales, AI for Customer Service, AI for Market Insight
Dynamics 365 Mixed Reality – Remote Assist, Layout, Guides
Dynamics 365 Business Central – ERP for SMBs
== Microsoft Dynamics 365 for Finance and Operations ==
Microsoft Dynamics 365 for Finance and Operations Enterprise Edition (formerly Microsoft Dynamics AX) is an ERP and CRM software-as-a-service product meant for mid-sized and large enterprises. It integrates Dynamics AX and Dynamics CRM features and consists of the following modules: Financials and Operations, Sales Enterprise, Marketing, Customer Service, Field Service, and Project Service Automation. It is designed to be easily connected with Office 365 and Power BI.
=== Microsoft Dynamics AX ===
Microsoft Dynamics AX was one of Microsoft's enterprise resource planning (ERP) software products. In 2018, its thick-client interface was removed and the web product was rebranded as Microsoft Dynamics 365 for Finance and Operations as part of the Dynamics 365 suite. Microsoft Development Center Copenhagen (MDCC) was once the primary development center for Dynamics AX. Microsoft Dynamics AX contained 19 core modules:
==== Traditional core (since Axapta 2.5) ====
General ledger – ledger, sales tax, currency, and fixed assets features
Bank management – receives and pays cash
Customer relationship management (CRM) – business relations contact and maintenance (customers, vendors, and leads)
Accounts receivable – order entry, shipping, and invoicing
Accounts payable – purchase orders, goods received into inventory
Inventory management – inventory management and valuation
Master planning (resources) – purchase and production planning
Production – bills of materials, manufacturing tracking
Store, manage, and interpret data.
==== Extended core ====
The following modules are part of the core of AX 2009 (AX 5.0) and available on a per-license basis in AX 4.0:
Shop floor control
Cost accounting
Balanced scorecards
Service management
Expense management
Payroll management
Environmental management
==== MorphX and X++ ====
X++ integrates SQL queries into standard Java-style code; for example, an X++ while select statement iterates directly over database table rows without embedded SQL strings.
==== Presence on the internet ====
Information about Axapta prior to the Microsoft purchase was available on technet.navision.com, a proprietary web-based newsgroup, which grew to a considerable number of members and posts before the Microsoft purchase in 2002.
After Microsoft incorporated Axapta into their Business Solution suite, they transferred the newsgroup's content to the Microsoft Business Solutions newsgroup. The oldest Axapta Technet post that can be found dates to August 2000.
==== Events ====
Extreme Conferences: extreme365 is a conference for the Dynamics 365 Partner Community which now includes Dynamics AX, featuring an Executive Forum.
==== Personalization and predictive analytics ====
At the National Retail Federation (NRF) Conference 2016 in New York, Microsoft unveiled its partnership with Infinite Analytics, a Cambridge-based predictive analytics and personalization company.
== Microsoft Dynamics 365 Business Central ==
Microsoft Dynamics 365 Business Central (formerly Microsoft Dynamics NAV) is an ERP and CRM software-as-a-service product meant for small and mid-sized businesses. It integrates Dynamics NAV and Dynamics CRM features and consists of the following modules: Financials and Operations, Sales Professionals, and Marketing. It is easily connected with Office 365 and Power BI.
Microsoft Dynamics 365 Customer Engagement (formerly Microsoft CRM) contains modules to interact with customers: Marketing, Sales, Customer Service, and Field Service. The Customer Service module automates customer service processes, providing performance-data reports and dashboards.
=== Online and on-premises deployment ===
The Dynamics 365 Business Central system comes in both an online hosted (SaaS) version and an on-premises version for manual deployment and administration.
Some features, such as integration with other online Microsoft services, are available only in the online edition and not in the on-premises version.
=== Localization ===
As an international ERP system, Business Central is available with 24 official localizations to work with the local features and requirements of various countries. Local partners provide an additional 47 localizations.
The system complies with various international standards and regulations to meet local requirements, such as GDPR, IAS/IFRS, and SOX.
=== Editions and licensing ===
There are two editions of Business Central, Essentials and Premium. Essentials covers Finance, Sales, Marketing, Purchasing, Inventory, Warehousing, and Project Management. Premium includes all of Essentials functionality plus Service Management and Manufacturing features.
With the arrival of NAV 2013, Microsoft introduced a new licensing model that operated on a concurrent-user basis. Under this model, user licenses were of two types: full users and limited users. A full user has access to the entire system, whereas a limited user has read access to the system and only limited write access.
From the Business Central rebrand launch, the licensing model changed to a per-seat license model, with a 3x multiplier applied to any existing perpetual concurrent-seat licenses from previous Dynamics NAV versions (for example, a 10-concurrent-user license would convert to 30 named-user seats). Customers with a Dynamics NAV Extended Pack license were moved to the Premium edition.
== Microsoft Dynamics 365 Sales ==
Microsoft Dynamics 365 Sales is a customer relationship management software package developed by Microsoft. The current version is Dynamics 365; the name and licensing changed with the update from Dynamics CRM 2016. Dynamics 365 Sales comes with softphone capabilities.
== History ==
Microsoft Dynamics was a line of Business Applications, containing enterprise resource planning (ERP) and customer relationship management (CRM). Microsoft marketed Dynamics applications through a network of reselling partners who provided specialized services. Microsoft Dynamics formed part of "Microsoft Business Solutions". Dynamics can be used with other Microsoft programs and services, such as SharePoint, Yammer, Office 365, Azure and Outlook. The Microsoft Dynamics focus-industries are retail, services, manufacturing, financial services, and the public sector. Microsoft Dynamics offers services for small, medium, and large businesses.
=== Business Central ===
Business Central was first published as Dynamics NAV and Navision, which Microsoft acquired in 2002.
==== Navision ====
Navision originated at PC&C A/S (Personal Computing and Consulting), a company founded in Denmark in 1984. PC&C released its first accounting package, PCPlus, in 1985—a single-user application with basic accounting functionality. There followed in 1987 the first version of Navision, a client/server-based accounting application that allowed multiple users to access the system simultaneously. The success of the product prompted the company to rename itself to Navision Software A/S in 1995.
The Navision product sold primarily in Denmark until 1990. From Navision version 3 the product was distributed in other European countries, including Germany and the United Kingdom.
In 1995 the first version of Navision based on Microsoft Windows 95 was released.
In 2000, Navision Software A/S merged with fellow Danish firm Damgaard A/S (founded 1983) to form NavisionDamgaard A/S. In 2001 the company changed its name to "Navision A/S".
On July 11, 2002, Microsoft bought Navision A/S to go with its previous acquisition of Great Plains Software. Navision became a new division at Microsoft, named Microsoft Business Solutions, which also handled Microsoft CRM.
In 2003, Microsoft announced plans to develop an entirely new ERP system (Project Green), but later decided to continue development of all four ERP systems (Dynamics AX, Dynamics NAV, Dynamics GP, and Dynamics SL). Microsoft launched all four ERP systems with the same new role-based user interface, SQL-based reporting and analysis, SharePoint-based portal, Pocket PC-based mobile clients, and integration with Microsoft Office.
==== Dynamics NAV ====
In September 2005, Microsoft re-branded the product and re-released it as Microsoft Dynamics NAV.
In December 2008, Microsoft released Dynamics NAV 2009, which contains both the original "classic" client, as well as a new .NET Framework-based three-tier GUI called the RoleTailored Client (RTC).
In the first quarter of 2014, NAV reached 102,000 customers.
In 2016, Microsoft announced the creation of Dynamics 365 — a rebranding of the suite of Dynamics ERP and CRM products as a part of a new online-only offering. As a part of this suite, the successor to NAV was codenamed "Madeira".
==== Dynamics 365 Business Central ====
In September 2017 at the Directions conference, Microsoft announced the new codename "Tenerife" as the next generation of the Dynamics NAV product, replacing codename "Madeira".
On April 2, 2018, Business Central was released publicly and plans for semi-annual releases were announced.
Business Central introduced a new AL language for development, with the codebase translated from Dynamics NAV's C/AL.
=== Dynamics SL, Dynamics GP, Dynamics C5 ===
Several variants of the Dynamics brand have migration paths to Business Central, most of which have not had a new release since 2018. The later releases of the SL, GP, and C5 products adopted the Dynamics NAV Role-Tailored Client UI, which helped pave the way for the transition to the Business Central product.
==== History of Dynamics C5 ====
Dynamics C5 was developed in Denmark as the successor to the DOS-based Concorde C4. The developing company, Damgaard Data, merged with Navision in 2001; the combined company was subsequently acquired by Microsoft in 2002, which rebranded the solution from Navision C5 to Microsoft Dynamics C5.
The product currently has more than 70,000 installations in Denmark.
==== History of Dynamics SL ====
Solomon Software was based in Findlay, Ohio, and its roots go back more than 35 years, to 1980, when co-founders Gary Harpst, Jack Ridge, and Vernon Strong started TLB, Inc. TLB stands for The Lord's Business, "to remind the founders why the business was started: to conduct the business according to biblical principles." TLB was later renamed Solomon Software, and then Microsoft Dynamics SL.
==== History of Dynamics GP ====
The Dynamics GP product was originally developed by Great Plains Software, an independent company located in Fargo, North Dakota, run by Doug Burgum. Dynamics Release 1.0 was released in February 1993. It was one of the first accounting packages in the United States designed and written to be multi-user and to run under Windows as 32-bit software. In late 2000, Microsoft announced the purchase of Great Plains Software; the acquisition was completed in April 2001.
Dynamics GP is written in a language called Dexterity. Previous versions were compatible with Microsoft SQL Server, Pervasive PSQL, and Btrieve, and earlier versions also used C-tree, although after the buyout all new versions switched entirely to Microsoft SQL Server databases.
Dynamics GP will no longer be updated after September 2029, with security updates through April 2031.
=== Finance ===
Microsoft Dynamics 365 Finance is a Microsoft enterprise resource planning (ERP) system for medium to large organizations. The software, part of the Dynamics 365 product line, was first made generally available in November 2016, initially branded as Dynamics 365 for Operations. In July 2017, it was rebranded to Dynamics 365 for Finance and Operations. At the same time, Microsoft rebranded its business software suite for small businesses (Business Edition, Financials) to Finance and Operations, Business Edition; however, the two applications are based on completely different platforms. Its history includes:
1998 (March) – Axapta, a collaboration between IBM and Danish Damgaard Data, released in the Danish and US markets.
2000 – Damgaard Data merged with Navision Software A/S to form NavisionDamgaard, later named Navision A/S. Released Axapta 2.5. IBM returned all rights in the product to Damgaard Data shortly after the release of Version 1.5.
2002 – Microsoft acquires Navision A/S. Released Axapta 3.0.
2006 – Released Microsoft Dynamics AX 4.0.
2008 – Released Microsoft Dynamics AX 2009.
2011 – Released Microsoft Dynamics AX 2012. It was made available and supported in more than 30 countries and 25 languages. Dynamics AX is used in over 20,000 organizations of all sizes, worldwide.
2016 – Released Microsoft Dynamics AX 7. Later rebranded to Dynamics 365 for Operations. This update was a major revision with a completely new UI delivered through a browser-based HTML5 client, and initially only available as a cloud-hosted application. This version lasted only a few months, though, as Dynamics AX was rebranded Microsoft Dynamics 365 for Operations in October 2016, and once more as Dynamics 365 for Finance and Operations in July 2017.
2017 – Rebranded to Dynamics 365 for Finance and Operations, Enterprise Edition (not to be mistaken with Dynamics 365 for Finance and Operations Business Edition, which is based on former Microsoft Dynamics NAV).
2018 – Rebranded to Dynamics 365 for Finance and Operations
2018 – The Human Resources Module became Dynamics 365 for Talent, now Dynamics 365 Human Resources.
2020 – Rebranded and split into two products:
Dynamics 365 Finance
Dynamics 365 Supply Chain Management
2023 – Dynamics 365 Human Resources re-integrated
=== Sales ===
Microsoft Dynamics 365 Sales has undergone several iterations over its history.
==== Microsoft CRM 1.2 ====
Microsoft CRM 1.2 was released on December 8, 2003, but was not widely adopted by industry.
It was not possible to create custom entities, but a software development kit (SDK) with SOAP and XML endpoints was available to interact with the system.
==== Microsoft Dynamics CRM 3.0 ====
The second version was rebranded as Microsoft Dynamics CRM 3.0 (version 2.0 was skipped entirely) to signify its inclusion within the Dynamics product family, and was released on December 5, 2005.
Notable updates over version 1.2 are the ease of creating customizations to CRM, the switch from using Crystal Reports to Microsoft SQL Reporting Services, and the ability to run on Windows Vista and Outlook 2007.
Significant additions released later by Microsoft also allowed Dynamics CRM 3.0 to be accessed by various mobile devices and integration with Siebel Systems. This was the first version that saw reasonable take up by customers.
Custom entities and 1:N relations between system and custom entities could now be created.
==== Microsoft Dynamics CRM 4.0 ====
Dynamics CRM 4.0 (a.k.a. Titan) was introduced in December 2007 (RTM build number 4.0.7333.3). It features multi-tenancy, improved reporting security, data importing, direct mail merging, and support for newer technologies such as Windows Server 2008 and SQL Server 2008 (Update Rollup 4).
Dynamics CRM 4.0 also implements CRM Online, a hosted solution that is offered directly by Microsoft. The multi-tenancy option also allows ISVs to offer hosted solutions to end customers as well.
Dynamics CRM 4.0 is the first version of the product, which has seen significant take up in the market and passed the 1 million user mark in July 2009.
Additional support for N:N relations was added, which eliminated the need for many "in-between" entities. "Connections" were also introduced in favour of "Relations". The UI design was based on the Office 2007 look and feel, with the same blue shading and a round "start" button.
==== Microsoft Dynamics CRM 2011 ====
Dynamics CRM 2011 was released to open beta in February 2010. It then went into the release candidate stage in December 2010. The product was released in February 2011 (build number 5.0.9688.583).
Browsers such as Internet Explorer, Chrome, and Firefox are fully supported since Microsoft Dynamics CRM 2011 Update Rollup 12. Because of this browser compatibility, R12 was highly anticipated, but it also caused a lot of stress for customers that had used unsupported customizations: R12 broke those customizations, and clients had to rethink their changes. Microsoft offered additional wizards to pinpoint the problems.
==== Microsoft Dynamics CRM 2013 ====
Dynamics CRM 2013 was released to a closed beta group on July 28, 2013. Dynamics CRM 2013 Online went live for new signups in October 2013. It was released in November 2013 (build number 6.0.0000.0809).
==== Microsoft Dynamics CRM 2015 ====
On September 16, 2014, Microsoft announced that Microsoft Dynamics CRM 2015, as well as updates to its Microsoft Dynamics CRM Online and Microsoft Dynamics Marketing services, will be generally available in the fourth quarter of 2014. Microsoft also released a preview guide with details.
On November 30, 2014, Microsoft announced the general availability of Microsoft Dynamics CRM 2015 and the 2015 Update of Microsoft Dynamics Marketing.
On January 6, 2015, Microsoft announced the availability of a CRM Cloud service specifically for the US Government that is designed for FedRAMP compliance.
==== Microsoft Dynamics CRM 2016 ====
Microsoft Dynamics CRM 2016 was officially released on November 30, 2015. The versions for CRM 2016 were 8.0, 8.1, and 8.2; with version 8.2, the name was changed from Microsoft Dynamics CRM 2016 to Dynamics 365. The release included advancements in intelligence, mobility, and service, with significant productivity enhancements. In June 2016, a special application named Business Card Reader for MS Dynamics was developed to send scanned information from business cards into MS Dynamics CRM, followed by a Call Tracker application in 2017.
==== Microsoft Dynamics 365 Sales ====
Microsoft Dynamics 365 was officially released on November 1, 2016, as the successor to Dynamics CRM. The product combines Microsoft business products (CRM and ERP Dynamics AX). A softphone dialer can be added as an extension.
The on-premises application, called Dynamics 365 Customer Engagement, contained the following applications:
Dynamics 365 for Sales
Dynamics 365 for Customer Service
Dynamics 365 for Marketing
Dynamics 365 for Field Service
Dynamics 365 for Project Service Automation
The Dynamics 365 for Finance and Operations offerings cover ERP needs such as bookkeeping, invoice and order handling, and manufacturing.
Dynamics 365 version 9.0.0.1 introduced many notable features, such as virtual entities, auto-numbering attributes and multi-select option sets.
== Product updates ==
=== October 2018 update ===
The update released in October 2018 included new features for sales, marketing, customer service, and recruitment.
=== April 2019 update ===
This update was released on April 5, 2019. It added a unified user interface (UUI) that can embed canvas apps created in PowerApps, and it brought back the tabs facility. The update also removed the Xrm.Page.data object.
=== February 2020 update ===
An update was announced on February 19, 2020. It included additions to Customer Insights, Microsoft's customer data platform (CDP), such as new first- and third-party data connections. The update also brought new sales forecasting tools and the Dynamics 365 Sales Engagement Center, and it introduced Dynamics 365 Project Operations.
=== October 2021 update (wave 1) ===
An update was announced on October 5, 2021. It replaced the bank reconciliation reports, and the payment reconciliation journal was improved to support preview posting, separate number series, and user-defined document numbers. The update also added the Correct Dimensions action and integration with the Microsoft Teams search box, Microsoft Word, and Microsoft Universal Print.
== Related products ==
Microsoft Dynamics includes a set of related products:
Microsoft Dynamics Management Reporter. Management Reporter is a financial reporting and analysis application. Its main feature is creating income statements, balance sheets, cash flow statements and other financial reports. Reports can be stored in a centralized Report Library along with external supporting files. Security on reports and files may be controlled using Windows Authentication and SQL Server.
Microsoft Dynamics for Retail (formerly Microsoft Dynamics RMS, QuickSell 2000 and Dynamics POS)
Microsoft Dynamics for Marketing (formerly MDM and MarketingPilot 2012)
Microsoft Dynamics Social Listening (formerly Netbreeze 2013)
Power Automate, formerly Microsoft Flow (until 2019), a toolkit similar to IFTTT for implementing business workflow products.
Power Automate Desktop, robotic process automation software for automating graphical user interfaces (acquired in May 2020)
Parature customer engagement software in the customer support and service channels (acquired in January 2014)
Microsoft also sells Sure Step, an implementation methodology for Microsoft Dynamics, to its resellers.
In July 2018, Microsoft announced Dynamics 365 AI for sales applications.
== See also ==
Microsoft Azure
Microsoft Dataverse
Microsoft Office
Microsoft Power Platform
List of Microsoft software
== References ==
== Further reading ==
Bellu, Renato (2018). Microsoft Dynamics 365 For Dummies. For Dummies. ISBN 978-1119508861.
Houdeshell, Robert (2021). Microsoft Dynamics 365 Project Operations: Deliver profitable projects with effective project planning and productive operational workflows. Packt Publishing. ISBN 978-1801072076.
Newell, Eric (2021). Mastering Microsoft Dynamics 365 Implementations. Sybex. ISBN 978-1119789321.
Brummel, Marije; Studebaker, David; Studebaker, Chris (2019). Programming Microsoft Dynamics 365 Business Central: Build customized business applications with the latest tools in Dynamics 365 Business Central. Packt Publishing. ISBN 978-1789137798.
Demiliani, Stefano; Tacconi, Duilio (2019). Mastering Microsoft Dynamics 365 Business Central: Discover extension development best practices, build advanced ERP integrations, and use DevOps tools. Packt Publishing. ISBN 978-1789951257.
Yadav, JJ; Shukla, Sandeep; Mohta, Rahul; Kasat, Yogesh (2020). Implementing Microsoft Dynamics 365 for Finance and Operations Apps: Learn best practices, architecture, tools, techniques, and more. Packt Publishing. ISBN 978-1789950847.
Luszczak, Andreas (2018). Using Microsoft Dynamics 365 for Finance and Operations: Learn and understand the functionality of Microsoft's enterprise solution. Springer Vieweg. ISBN 978-3658241063.
== External links ==
Official website
Microsoft Dynamics AX 2012 Launches Worldwide
Microsoft Dynamics 365 for Finance and Operations official webpage
Tuple calculus is a calculus created and introduced by Edgar F. Codd as part of the relational model, in order to provide a declarative database-query language for data manipulation in this data model. It formed the inspiration for the database-query languages QUEL and SQL, of which the latter, although far less faithful to the original relational model and calculus, is now the de facto standard database-query language; a dialect of SQL is used by nearly every relational-database-management system. Michel Lacroix and Alain Pirotte later proposed domain calculus, which is closer to first-order logic; both of these calculi (as well as relational algebra) were shown to be equivalent in expressive power. Subsequently, query languages for the relational model were called relationally complete if they could express at least all of these queries.
== Definition ==
=== Relational database ===
Since the calculus is a query language for relational databases we first have to define a relational database. The basic relational building block is the domain (somewhat similar, but not equal to, a data type). A tuple is a finite sequence of attributes, which are ordered pairs of domains and values. A relation is a set of (compatible) tuples. Although these relational concepts are mathematically defined, those definitions map loosely to traditional database concepts. A table is an accepted visual representation of a relation; a tuple is similar to the concept of a row.
We first assume the existence of a set C of column names, examples of which are "name", "author", "address", etcetera. We define headers as finite subsets of C. A relational database schema is defined as a tuple S = (D, R, h) where D is the domain of atomic values (see relational model for more on the notions of domain and atomic value), R is a finite set of relation names, and
h : R → 2C
a function that associates a header with each relation name in R. (Note that this is a simplification from the full relational model where there is more than one domain and a header is not just a set of column names but also maps these column names to a domain.) Given a domain D we define a tuple over D as a partial function
t : C ⇸ D
that maps some column names to an atomic value in D. An example would be (name : "Harry", age : 25).
The set of all tuples over D is denoted as TD. The subset of C for which a tuple t is defined is called the domain of t (not to be confused with the domain in the schema) and denoted as dom(t).
Finally we define a relational database given a schema S = (D, R, h) as a function
db : R → 2TD
that maps the relation names in R to finite subsets of TD, such that for every relation name r in R and tuple t in db(r) it holds that
dom(t) = h(r).
The latter requirement simply says that all the tuples in a relation should contain the same column names, namely those defined for it in the schema.
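These definitions translate almost directly into executable form. The following is a minimal illustrative sketch in Python, not part of the calculus itself; the relation contents and all names are invented, and a tuple over D is modeled as a dictionary so that dom(t) is simply its set of keys:

# Illustrative sketch only: a tuple over D is a dict from column names to values.
D = {"Harry", "Sally", 25, 32}                     # domain of atomic values
R = {"Person"}                                     # relation names
h = {"Person": {"name", "age"}}                    # header assignment h : R -> 2^C

db = {                                             # database: R -> finite sets of tuples
    "Person": [{"name": "Harry", "age": 25},
               {"name": "Sally", "age": 32}],
}

def is_well_formed(db, h):
    """Check the requirement dom(t) = h(r) for every relation name r."""
    return all(set(t) == set(h[r]) for r in db for t in db[r])

assert is_well_formed(db, h)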
=== Atoms ===
For the construction of the formulas we will assume an infinite set V of tuple variables. The formulas are defined given a database schema S = (D, R, h) and a partial function type : V ⇸ 2C, called a type assignment, that assigns headers to some tuple variables. We then define the set of atomic formulas A[S,type] with the following rules:
if v and w in V, a in type(v) and b in type(w) then the formula v.a = w.b is in A[S,type],
if v in V, a in type(v) and k denotes a value in D then the formula v.a = k is in A[S,type], and
if v in V, r in R and type(v) = h(r) then the formula r(v) is in A[S,type].
Examples of atoms are:
(t.age = s.age) — t has an age attribute and s has an age attribute with the same value
(t.name = "Codd") — tuple t has a name attribute and its value is "Codd"
Book(t) — tuple t is present in relation Book.
The formal semantics of such atoms is defined given a database db over S and a tuple variable binding val : V → TD that maps tuple variables to tuples over the domain in S:
v.a = w.b is true if and only if val(v)(a) = val(w)(b)
v.a = k is true if and only if val(v)(a) = k
r(v) is true if and only if val(v) is in db(r)
=== Formulas ===
The atoms can be combined into formulas, as is usual in first-order logic, with the logical operators ∧ (and), ∨ (or) and ¬ (not), and we can use the existential quantifier (∃) and the universal quantifier (∀) to bind the variables. We define the set of formulas F[S,type] inductively with the following rules:
every atom in A[S,type] is also in F[S,type]
if f1 and f2 are in F[S,type] then the formula f1 ∧ f2 is also in F[S,type]
if f1 and f2 are in F[S,type] then the formula f1 ∨ f2 is also in F[S,type]
if f is in F[S,type] then the formula ¬ f is also in F[S,type]
if v in V, H a header and f a formula in F[S,type[v->H]] then the formula ∃ v : H ( f ) is also in F[S,type], where type[v->H] denotes the function that is equal to type except that it maps v to H,
if v in V, H a header and f a formula in F[S,type[v->H]] then the formula ∀ v : H ( f ) is also in F[S,type]
Examples of formulas:
t.name = "C. J. Date" ∨ t.name = "H. Darwen"
Book(t) ∨ Magazine(t)
∀ t : {author, title, subject} ( ¬ ( Book(t) ∧ t.author = "C. J. Date" ∧ ¬ ( t.subject = "relational model")))
Note that the last formula states that all books that are written by C. J. Date have as their subject the relational model. As usual we omit brackets if this causes no ambiguity about the semantics of the formula.
We will assume that the quantifiers quantify over the universe of all tuples over the domain in the schema. This leads to the following formal semantics for formulas, given a database db over S and a tuple variable binding val : V → TD (a small executable sketch follows the list below):
f1 ∧ f2 is true if and only if f1 is true and f2 is true,
f1 ∨ f2 is true if and only if f1 is true or f2 is true or both are true,
¬ f is true if and only if f is not true,
∃ v : H ( f ) is true if and only if there is a tuple t over D such that dom(t) = H and the formula f is true for val[v->t], and
∀ v : H ( f ) is true if and only if for all tuples t over D such that dom(t) = H the formula f is true for val[v->t].
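As an illustration only, this semantics can be turned into a naive recursive evaluator, assuming a finite domain so that the quantifiers can be enumerated. The tuple-based formula encoding and all names below are hypothetical choices, not standard notation:

from itertools import product

def tuples_over(D, H):
    """All tuples t over a finite domain D with dom(t) = H."""
    cols = sorted(H)
    for values in product(D, repeat=len(cols)):
        yield dict(zip(cols, values))

def holds(f, db, D, val):
    """Naive recursive semantics; val is the tuple variable binding V -> T_D."""
    op = f[0]
    if op == "eq_cols":                       # v.a = w.b
        _, v, a, w, b = f
        return val[v][a] == val[w][b]
    if op == "eq_const":                      # v.a = k
        _, v, a, k = f
        return val[v][a] == k
    if op == "rel":                           # r(v)
        _, r, v = f
        return val[v] in db[r]
    if op == "and":
        return holds(f[1], db, D, val) and holds(f[2], db, D, val)
    if op == "or":
        return holds(f[1], db, D, val) or holds(f[2], db, D, val)
    if op == "not":
        return not holds(f[1], db, D, val)
    if op == "exists":                        # ∃ v : H ( g )
        _, v, H, g = f
        return any(holds(g, db, D, {**val, v: t}) for t in tuples_over(D, H))
    if op == "forall":                        # ∀ v : H ( g )
        _, v, H, g = f
        return all(holds(g, db, D, {**val, v: t}) for t in tuples_over(D, H))
    raise ValueError("unknown formula constructor: %r" % op)

# Example: ∃ s : {name, age} ( Person(s) ∧ s.age = 25 )
D = {"Harry", 25}
db = {"Person": [{"name": "Harry", "age": 25}]}
f = ("exists", "s", {"name", "age"},
     ("and", ("rel", "Person", "s"), ("eq_const", "s", "age", 25)))
assert holds(f, db, D, {})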
=== Queries ===
Finally we define what a query expression looks like given a schema S = (D, R, h):
{ v : H | f(v) }
where v is a tuple variable, H a header and f(v) a formula in F[S,type] where type = { (v, H) } and with v as its only free variable. The result of such a query for a given database db over S is the set of all tuples t over D with dom(t) = H such that f is true for db and val = { (v, t) }.
Examples of query expressions are:
{ t : {name} | ∃ s : {name, wage} ( Employee(s) ∧ s.wage = 50.000 ∧ t.name = s.name ) }
{ t : {supplier, article} | ∃ s : {s#, sname} ( Supplier(s) ∧ s.sname = t.supplier ∧ ∃ p : {p#, pname} ( Product(p) ∧ p.pname = t.article ∧ ∃ a : {s#, p#} ( Supplies(a) ∧ s.s# = a.s# ∧ a.p# = p.p# ))) }
== Semantic and syntactic restriction ==
=== Domain-independent queries ===
Because the semantics of the quantifiers is such that they quantify over all the tuples over the domain in the schema, a query may return a different result for the same database if a different schema is presumed. For example, consider the two schemas S1 = ( D1, R, h ) and S2 = ( D2, R, h ) with domains D1 = { 1 }, D2 = { 1, 2 }, relation names R = { "r1" } and headers h = { ("r1", {"a"}) }. Both schemas have a common instance:
db = { ( "r1", { ("a", 1) } ) }
If we consider the following query expression
{ t : {a} | t.a = t.a }
then its result on db is either { (a : 1) } under S1 or { (a : 1), (a : 2) } under S2. It will also be clear that if we take the domain to be an infinite set, then the result of the query will also be infinite. To solve these problems we will restrict our attention to those queries that are domain independent, i.e., the queries that return the same result for a database under all of its schemas.
An interesting property of these queries is that if we assume that the tuple variables range over tuples over the so-called active domain of the database, which is the subset of the domain that occurs in at least one tuple in the database or in the query expression, then the semantics of the query expressions does not change. In fact, in many definitions of the tuple calculus this is how the semantics of the quantifiers is defined, which makes all queries by definition domain independent.
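For a finite database the active domain is easy to compute. The following Python sketch (names illustrative) gathers every value occurring in the database together with the constants of the query expression:

def active_domain(db, query_constants=()):
    """Values occurring in at least one tuple of the database,
    plus the constants mentioned in the query expression."""
    adom = set(query_constants)
    for tuples in db.values():
        for t in tuples:
            adom.update(t.values())
    return adom

db = {"r1": [{"a": 1}]}
print(active_domain(db))   # {1}: the same whether the schema's domain is {1} or {1, 2}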
=== Safe queries ===
In order to limit the query expressions such that they express only domain-independent queries a syntactical notion of safe query is usually introduced. To determine whether a query expression is safe we will derive two types of information from a query. The first is whether a variable-column pair t.a is bound to the column of a relation or a constant, and the second is whether two variable-column pairs are directly or indirectly equated (denoted t.a == s.b).
For deriving boundedness we introduce the following reasoning rules:
in " v.a = w.b " no variable-column pair is bound,
in " v.a = k " the variable-column pair v.a is bound,
in " r(v) " all pairs v.a are bound for a in type(v),
in " f1 ∧ f2 " all pairs are bound that are bound either in f1 or in f2,
in " f1 ∨ f2 " all pairs are bound that are bound both in f1 and in f2,
in " ¬ f " no pairs are bound,
in " ∃ v : H ( f ) " a pair w.a is bound if it is bound in f and w <> v, and
in " ∀ v : H ( f ) " a pair w.a is bound if it is bound in f and w <> v.
For deriving equatedness we introduce the following reasoning rules (next to the usual reasoning rules for equivalence relations: reflexivity, symmetry and transitivity):
in " v.a = w.b " it holds that v.a == w.b,
in " v.a = k " no pairs are equated,
in " r(v) " no pairs are equated,
in " f1 ∧ f2 " it holds that v.a == w.b if it holds either in f1 or in f2,
in " f1 ∨ f2 " it holds that v.a == w.b if it holds both in f1 and in f2,
in " ¬ f " no pairs are equated,
in " ∃ v : H ( f ) " it holds that w.a == x.b if it holds in f and w<>v and x<>v, and
in " ∀ v : H ( f ) " it holds that w.a == x.b if it holds in f and w<>v and x<>v.
We then say that a query expression { v : H | f(v) } is safe if
for every column name a in H we can derive that v.a is equated with a bound pair in f,
for every subexpression of f of the form " ∀ w : G ( g ) " we can derive that for every column name a in G we can derive that w.a is equated with a bound pair in g, and
for every subexpression of f of the form " ∃ w : G ( g ) " we can derive that for every column name a in G we can derive that w.a is equated with a bound pair in g.
The restriction to safe query expressions does not limit the expressiveness since all domain-independent queries that could be expressed can also be expressed by a safe query expression. This can be proven by showing that for a schema S = (D, R, h), a given set K of constants in the query expression, a tuple variable v and a header H we can construct a safe formula for every pair v.a with a in H that states that its value is in the active domain. For example, assume that K={1,2}, R={"r"} and h = { ("r", {"a", "b"}) } then the corresponding safe formula for v.b is:
v.b = 1 ∨ v.b = 2 ∨ ∃ w : {a, b} ( r(w) ∧ ( v.b = w.a ∨ v.b = w.b ) )
This formula, then, can be used to rewrite any unsafe query expression to an equivalent safe query expression by adding such a formula for every variable v and column name a in its type where it is used in the expression. Effectively this means that we let all variables range over the active domain, which, as was already explained, does not change the semantics if the expressed query is domain independent.
== Systems ==
DES – An educational tool for working with Tuple Relational Calculus and other formal languages
WinRDBI – An educational tool for working with Tuple Relational Calculus and other formal languages
== See also ==
Relational algebra
Relational calculus
Domain relational calculus (DRC)
== References ==
Codd, E. F. (June 1970). "A relational model of data for large shared data banks". Communications of the ACM. 13 (6): 377–387. doi:10.1145/362384.362685.
A computer network is a collection of communicating computers and other devices, such as printers and smartphones. In order to communicate, the computers and devices must be connected by wired media, such as copper cables or optical fibers, or by wireless communication. The devices may be connected in a variety of network topologies. In order to communicate over the network, computers use agreed-on rules, called communication protocols, over whatever medium is used.
The computer network can include personal computers, servers, networking hardware, or other specialized or general-purpose hosts. They are identified by network addresses and may have hostnames. Hostnames serve as memorable labels for the nodes and are rarely changed after initial assignment. Network addresses serve for locating and identifying the nodes by communication protocols such as the Internet Protocol.
Computer networks may be classified by many criteria, including the transmission medium used to carry signals, bandwidth, communications protocols to organize network traffic, the network size, the topology, traffic control mechanisms, and organizational intent.
Computer networks support many applications and services, such as access to the World Wide Web, digital video and audio, shared use of application and storage servers, printers and fax machines, and use of email and instant messaging applications.
== History ==
Computer networking may be considered a branch of computer science, computer engineering, and telecommunications, since it relies on the theoretical and practical application of the related disciplines. Computer networking was influenced by a wide array of technological developments and historical milestones.
In the late 1950s, a network of computers was built for the U.S. military Semi-Automatic Ground Environment (SAGE) radar system using the Bell 101 modem. It was the first commercial modem for computers, released by AT&T Corporation in 1958. The modem allowed digital data to be transmitted over regular unconditioned telephone lines at a speed of 110 bits per second (bit/s).
In 1959, Christopher Strachey filed a patent application for time-sharing in the United Kingdom and John McCarthy initiated the first project to implement time-sharing of user programs at MIT. Strachey passed the concept on to J. C. R. Licklider at the inaugural UNESCO Information Processing Conference in Paris that year. McCarthy was instrumental in the creation of three of the earliest time-sharing systems (the Compatible Time-Sharing System in 1961, the BBN Time-Sharing System in 1962, and the Dartmouth Time-Sharing System in 1963).
In 1959, Anatoly Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organization of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centers. Kitov's proposal was rejected, as later was the 1962 OGAS economy management network project.
In 1960, the commercial airline reservation system semi-automatic business research environment (SABRE) went online with two connected mainframes.
In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1965, Western Electric introduced the first widely used telephone switch that implemented computer control in the switching fabric.
Throughout the 1960s, Paul Baran and Donald Davies independently invented the concept of packet switching for data communication between computers over a network. Baran's work addressed adaptive routing of message blocks across a distributed network, but did not include routers with software switches, nor the idea that users, rather than the network itself, would provide the reliability. Davies' hierarchical network design included high-speed routers, communication protocols and the essence of the end-to-end principle. The NPL network, a local area network at the National Physical Laboratory (United Kingdom), pioneered the implementation of the concept in 1968-69 using 768 kbit/s links. Both Baran's and Davies' inventions were seminal contributions that influenced the development of computer networks.
In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah. Designed principally by Bob Kahn, the network's routing, flow control, software design and network control were developed by the IMP team working for Bolt Beranek & Newman. In the early 1970s, Leonard Kleinrock carried out mathematical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today.
In 1972, commercial services were first deployed on experimental public data networks in Europe.
In 1973, the French CYCLADES network, directed by Louis Pouzin was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself.
In 1973, Peter Kirstein put internetworking into practice at University College London (UCL), connecting the ARPANET to British academic networks, the first international heterogeneous computer network.
In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a local area networking system he created with David Boggs. It was inspired by the packet radio ALOHAnet, started by Norman Abramson and Franklin Kuo at the University of Hawaii in the late 1960s. Metcalfe and Boggs, with John Shoch and Edward Taft, also developed the PARC Universal Packet for internetworking.
In 1974, Vint Cerf and Bob Kahn published their seminal paper on internetworking, A Protocol for Packet Network Intercommunication. Later that year, Cerf, Yogen Dalal, and Carl Sunshine wrote the first Transmission Control Protocol (TCP) specification, RFC 675, coining the term Internet as a shorthand for internetworking.
In July 1976, Metcalfe and Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and in December 1977, together with Butler Lampson and Charles P. Thacker, they received U.S. patent 4063220A for their invention.
Public data networks in Europe, North America and Japan began using X.25 in the late 1970s and interconnected with X.75. This underlying infrastructure was used for expanding TCP/IP networks in the 1980s.
In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices.
In 1977, the first long-distance fiber network was deployed by GTE in Long Beach, California.
In 1979, Robert Metcalfe pursued making Ethernet an open standard.
In 1980, Ethernet was upgraded from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was developed by Ron Crane, Bob Garner, Roy Ogus, and Yogen Dalal.
In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added (as of 2018). The scaling of Ethernet has been a contributing factor to its continued use.
== Use ==
Computer networks enhance how users communicate with each other by using various electronic methods like email, instant messaging, online chat, voice and video calls, and video conferencing. Networks also enable the sharing of computing resources. For example, a user can print a document on a shared printer or use shared storage devices. Additionally, networks allow for the sharing of files and information, giving authorized users access to data stored on other computers. Distributed computing leverages resources from multiple computers across a network to perform tasks collaboratively.
== Network packet ==
Most modern computer networks use protocols based on packet-mode transmission. A network packet is a formatted unit of data carried by a packet-switched network.
Packets consist of two types of data: control information and user data (payload). The control information provides data the network needs to deliver the user data, for example, source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between.
With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link is not overused. Often the route a packet needs to take through a network is not immediately available. In that case, the packet is queued and waits until a link is free.
The physical link technologies of packet networks typically limit the size of packets to a certain maximum transmission unit (MTU). A longer message may be fragmented before it is transferred and once the packets arrive, they are reassembled to construct the original message.
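The following sketch illustrates fragmentation and reassembly in simplified form; it is not an implementation of IP fragmentation, and the (offset, chunk) representation is an illustrative choice:

def fragment(message: bytes, mtu: int):
    """Split a message into (offset, chunk) fragments of at most mtu bytes."""
    return [(i, message[i:i + mtu]) for i in range(0, len(message), mtu)]

def reassemble(fragments):
    """Rebuild the original message; fragments may arrive in any order."""
    return b"".join(chunk for _, chunk in sorted(fragments))

msg = b"A longer message that exceeds the maximum transmission unit"
frags = fragment(msg, mtu=16)
assert reassemble(reversed(frags)) == msg   # arrival order does not matter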
== Network topology ==
The physical or geographic locations of network nodes and links generally have relatively little effect on a network, but the topology of interconnections of a network can significantly affect its throughput and reliability. With many technologies, such as bus or star networks, a single failure can cause the network to fail entirely. In general, the more interconnections there are, the more robust the network is; but the more expensive it is to install. Therefore, most network diagrams are arranged by their network topology which is the map of logical interconnections of network hosts.
Common topologies are:
Bus network: all nodes are connected to a common medium and communicate along this medium. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2. This is still a common topology on the data link layer, although modern physical layer variants use point-to-point links instead, forming a star or a tree.
Star network: all nodes are connected to a special central node. This is the typical layout found in a small switched Ethernet LAN, where each client connects to a central network switch, and logically in a wireless LAN, where each wireless client associates with the central wireless access point.
Ring network: each node is connected to its left and right neighbor node, such that all nodes are connected and that each node can reach each other node by traversing nodes left- or rightwards. Token ring networks, and the Fiber Distributed Data Interface (FDDI), made use of such a topology.
Mesh network: each node is connected to an arbitrary number of neighbors in such a way that there is at least one traversal from any node to any other.
Fully connected network: each node is connected to every other node in the network.
Tree network: nodes are arranged hierarchically. This is the natural topology for a larger Ethernet network with multiple switches and without redundant meshing.
The physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring, but the physical topology is often a star, because all neighboring connections can be routed via a central physical location. Physical layout is not completely irrelevant, however, as common ducting and equipment locations can represent single points of failure due to issues like fires, power failures and flooding.
=== Overlay network ===
An overlay network is a virtual network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet.
Overlay networks have been used since the early days of networking, back when computers were connected via telephone lines using modems, even before data networks were developed.
The most striking example of an overlay network is the Internet itself: it was initially built as an overlay on the telephone network. Even today, each Internet node can communicate with virtually any other through an underlying mesh of sub-networks of wildly different topologies and technologies. Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network.
Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys.
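A minimal sketch of the key-to-node mapping such an overlay performs, here using consistent hashing on an identifier circle; the node names and the hash truncation are illustrative, and real DHTs such as Chord or Kademlia are considerably more elaborate:

import hashlib

def point(s: str) -> int:
    """Map a string onto a 2**32-point identifier circle."""
    return int.from_bytes(hashlib.sha1(s.encode()).digest()[:4], "big")

def responsible_node(key: str, nodes):
    """Consistent-hashing rule: the node whose point is the first at or
    after the key's point, wrapping around the circle if necessary."""
    ring = sorted((point(n), n) for n in nodes)
    k = point(key)
    for node_point, node in ring:
        if node_point >= k:
            return node
    return ring[0][1]                     # wrap around the circle

print(responsible_node("some-key", ["node-a", "node-b", "node-c"]))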
Overlay networks have also been proposed as a way to improve Internet routing, such as using quality-of-service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP multicast have not seen wide acceptance largely because they require modification of all routers in the network. On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination.
For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast, resilient routing and quality of service studies, among others.
== Network links ==
The transmission media (often referred to in the literature as the physical medium) used to link devices to form a computer network include electrical cable, optical fiber, and free space. In the OSI model, the software to handle the media is defined at layers 1 and 2 — the physical layer and the data link layer.
A widely adopted family of technologies that use copper and fiber media in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Wireless LAN standards use radio waves; others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.
=== Wired ===
The following classes of wired technologies are used in computer networking.
Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.
ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed local area network.
Twisted pair cabling is used for wired Ethernet and other standards. It typically consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 Mbit/s to 10 Gbit/s. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted-pair (STP). Each form comes in several category ratings, designed for use in various scenarios.
An optical fiber is a glass fiber. It carries pulses of light that represent data via lasers and optical amplifiers. Some advantages of optical fibers over metal wires are very low transmission loss and immunity to electrical interference. Using dense wave division multiplexing, optical fibers can simultaneously carry multiple streams of data on different wavelengths of light, which greatly increases the rate at which data can be sent, up to trillions of bits per second. Optic fibers can be used for long cable runs carrying very high data rates, and are used for undersea communications cables to interconnect continents. There are two basic types of fiber optics: single-mode optical fiber (SMF) and multi-mode optical fiber (MMF). Single-mode fiber has the advantage of being able to sustain a coherent signal for dozens or even a hundred kilometers. Multimode fiber is cheaper to terminate but is limited to a few hundred or even only a few dozen meters, depending on the data rate and cable grade.
=== Wireless ===
Network connections can be established wirelessly using radio or other electromagnetic means of communication.
Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 40 miles (64 km) apart.
Communications satellites – Satellites also communicate via microwave. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
Cellular networks use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area is served by a low-power transceiver.
Radio and spread spectrum technologies – Wireless LANs use a high-frequency radio technology similar to digital cellular. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.
Extending the Internet to interplanetary dimensions via radio waves and optical means, the Interplanetary Internet.
IP over Avian Carriers was a humorous April Fools' Day Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.
The last two cases have a large round-trip delay time, which gives slow two-way communication but does not prevent sending large amounts of information (they can have high throughput).
== Network nodes ==
Apart from any physical transmission media, networks are built from additional basic system building blocks, such as network interface controllers, repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any particular piece of equipment will frequently contain multiple building blocks and so may perform multiple functions.
=== Network interfaces ===
A network interface controller (NIC) is computer hardware that connects the computer to the network media and has the ability to process low-level network information. For example, the NIC may have a connector for plugging in a cable, or an aerial for wireless transmission and reception, and the associated circuitry.
In Ethernet networks, each NIC has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
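The split between the manufacturer prefix and the manufacturer-assigned part can be illustrated with a short sketch; the address shown is an arbitrary example:

def split_mac(mac: str):
    """Split a six-octet MAC address into its three-octet manufacturer
    prefix (OUI) and the three manufacturer-assigned octets."""
    octets = mac.lower().split(":")
    if len(octets) != 6:
        raise ValueError("an Ethernet MAC address has six octets")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, nic_specific = split_mac("00:1A:2B:3C:4D:5E")
print(oui)            # 00:1a:2b  (identifies the manufacturer)
print(nic_specific)   # 3c:4d:5e  (unique within that manufacturer's prefix)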
=== Repeaters and hubs ===
A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal is retransmitted at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted-pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.
Repeaters work on the physical layer of the OSI model but still require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters used in a network, e.g., the Ethernet 5-4-3 rule.
An Ethernet repeater with multiple ports is known as an Ethernet hub. In addition to reconditioning and distributing network signals, a repeater hub assists with collision detection and fault isolation for the network. Hubs and repeaters in LANs have been largely obsoleted by modern network switches.
=== Bridges and switches ===
Network bridges and network switches are distinct from a hub in that they only forward frames to the ports involved in the communication, whereas a hub forwards to all ports. A bridge has only two ports, but a switch can be thought of as a multi-port bridge. Switches normally have numerous ports, facilitating a star topology for devices, and for cascading additional switches.
Bridges and switches operate at the data link layer (layer 2) of the OSI model and bridge traffic between two or more network segments to form a single local network. Both are devices that forward frames of data between ports based on the destination MAC address in each frame.
They learn the association of physical ports to MAC addresses by examining the source addresses of received frames and only forward the frame when necessary. If an unknown destination MAC is targeted, the device broadcasts the request to all ports except the source, and discovers the location from the reply.
Bridges and switches divide the network's collision domain but maintain a single broadcast domain. Network segmentation through bridging and switching helps break down a large, congested network into an aggregation of smaller, more efficient networks.
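The learn-and-forward behaviour described above can be sketched as follows; this is a toy model, not a real switch implementation, and the frame representation is hypothetical:

def switch_forward(frame, ports, mac_table):
    """Learn the source address's port, then forward the frame either to
    the known port for the destination or flood all other ports."""
    mac_table[frame["src"]] = frame["in_port"]           # learn source location
    if frame["dst"] in mac_table:
        return [mac_table[frame["dst"]]]                 # forward only where needed
    return [p for p in ports if p != frame["in_port"]]   # unknown destination: flood

table = {}
ports = [1, 2, 3, 4]
print(switch_forward({"src": "aa", "dst": "bb", "in_port": 1}, ports, table))  # flood: [2, 3, 4]
print(switch_forward({"src": "bb", "dst": "aa", "in_port": 2}, ports, table))  # learned: [1]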
=== Routers ===
A router is an internetworking device that forwards packets between networks by processing the addressing or routing information included in the packet. The routing information is often processed in conjunction with the routing table. A router uses its routing table to determine where to forward packets and does not require broadcasting packets, which is inefficient for very big networks.
=== Modems ===
Modems (modulator-demodulators) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this, one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Early modems modulated audio signals sent over a standard voice telephone line. Modems are still commonly used for telephone lines, using digital subscriber line technology, and for cable television systems using DOCSIS technology.
=== Firewalls ===
A firewall is a network device or software for controlling network security and access rules. Firewalls are inserted in connections between secure internal networks and potentially insecure external networks such as the Internet. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.
== Communication protocols ==
A communication protocol is a set of rules for exchanging information over a network. Communication protocols have various characteristics. They may be connection-oriented or connectionless, they may use circuit mode or packet switching, and they may use hierarchical addressing or flat addressing.
In a protocol stack, often constructed per the OSI model, communications functions are divided up into protocol layers, where each layer leverages the services of the layer below it until the lowest layer controls the hardware that sends information across the media. The use of protocol layering is ubiquitous across the field of computer networking. An important example of a protocol stack is HTTP (the World Wide Web protocol) running over TCP over IP (the Internet protocols) over IEEE 802.11 (the Wi-Fi protocol). This stack is used between the wireless router and the home user's personal computer when the user is surfing the web.
There are many communication protocols, a few of which are described below.
=== Common protocols ===
==== Internet protocol suite ====
The Internet protocol suite, also called TCP/IP, is the foundation of all modern networking. It offers connectionless and connection-oriented services over an inherently unreliable network traversed by datagram transmission using Internet Protocol (IP). At its core, the protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with a much enlarged addressing capability. The Internet protocol suite is the defining set of protocols for the Internet.
==== IEEE 802 ====
IEEE 802 is a family of IEEE standards dealing with local area networks and metropolitan area networks. The complete IEEE 802 protocol suite provides a diverse set of networking capabilities. The protocols have a flat addressing scheme. They operate mostly at layers 1 and 2 of the OSI model.
For example, MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol. IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based network access control protocol, which forms the basis for the authentication mechanisms used in VLANs (but it is also found in WLANs) – it is what the home user sees when the user has to enter a "wireless access key".
===== Ethernet =====
Ethernet is a family of technologies used in wired LANs. It is described by a set of standards together called IEEE 802.3 published by the Institute of Electrical and Electronics Engineers.
===== Wireless LAN =====
Wireless LAN based on the IEEE 802.11 standards, also widely known as WLAN or WiFi, is probably the most well-known member of the IEEE 802 protocol family for home users today. IEEE 802.11 shares many properties with wired Ethernet.
==== SONET/SDH ====
Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support circuit-switched digital telephony. However, due to its protocol neutrality and transport-oriented features, SONET/SDH also was the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.
==== Asynchronous Transfer Mode ====
Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet protocol suite or Ethernet that use variable-sized packets or frames. ATM has similarities with both circuit and packet switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic, and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.
ATM still plays a role in the last mile, which is the connection between an Internet service provider and the home user.
==== Cellular standards ====
There are a number of different digital cellular standards, including: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN).
=== Routing ===
Routing is the process of selecting network paths to carry network traffic. Routing is performed for many kinds of networks, including circuit switching networks and packet switched networks.
In packet-switched networks, routing protocols direct packet forwarding through intermediate nodes. Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though because they lack specialized hardware, may offer limited performance. The routing process directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths.
Routing can be contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, the structured addressing used by routers outperforms unstructured addressing used by bridging. Structured IP addresses are used on the Internet. Unstructured MAC addresses are used for bridging on Ethernet and similar local area networks.
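Structured addressing is what makes a routing table compact: one entry stands for a whole prefix, and the most specific (longest) matching prefix wins. A small sketch using Python's standard ipaddress module; the table contents and gateway names are invented:

import ipaddress

def lookup(routing_table, destination):
    """Longest-prefix match: the most specific route containing the address wins."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routing_table if dest in net]
    if not matches:
        return None
    return max(matches, key=lambda entry: entry[0].prefixlen)[1]

table = [
    (ipaddress.ip_network("10.0.0.0/8"), "gateway-a"),
    (ipaddress.ip_network("10.1.0.0/16"), "gateway-b"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gateway"),
]
print(lookup(table, "10.1.2.3"))    # gateway-b: the /16 beats the /8
print(lookup(table, "192.0.2.1"))   # default-gateway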
== Geographic scale ==
Networks may be characterized by many properties or features, such as physical capacity, organizational purpose, user authorization, access rights, and others. Another distinct classification method is that of the physical extent or geographic scale.
=== Nanoscale network ===
A nanoscale network has key components implemented at the nanoscale, including message carriers, and leverages physical principles that differ from macroscale communication mechanisms. Nanoscale communication extends communication to very small sensors and actuators such as those found in biological systems and also tends to operate in environments that would be too harsh for other communication techniques.
=== Personal area network ===
A personal area network (PAN) is a computer network used for communication among computers and different information technological devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters. A wired PAN is usually constructed with USB and FireWire connections while technologies such as Bluetooth and infrared communication typically form a wireless PAN.
=== Local area network ===
A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Wired LANs are most commonly based on Ethernet technology. Other networking technologies such as ITU-T G.hn also provide a way to create a wired LAN using existing wiring, such as coaxial cables, telephone lines, and power lines.
A LAN can be connected to a wide area network (WAN) using a router. The defining characteristics of a LAN, in contrast to a WAN, include higher data transfer rates, limited geographic range, and lack of reliance on leased lines to provide connectivity. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to and in excess of 100 Gbit/s, standardized by IEEE in 2010.
=== Home area network ===
A home area network (HAN) is a residential LAN used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable Internet access or digital subscriber line (DSL) provider.
=== Storage area network ===
A storage area network (SAN) is a dedicated network that provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the storage appears as locally attached devices to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the local area network by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.
=== Campus area network ===
A campus area network (CAN) is made up of an interconnection of LANs within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, Cat5 cabling, etc.) are almost entirely owned by the campus tenant or owner (an enterprise, university, government, etc.).
For example, a university campus network is likely to link a variety of campus buildings to connect academic colleges or departments, the library, and student residence halls.
=== Backbone network ===
A backbone network is part of a computer network infrastructure that provides a path for the exchange of information between different LANs or subnetworks. A backbone can tie together diverse networks within the same building, across different buildings, or over a wide area. When designing a network backbone, network performance and network congestion are critical factors to take into account. Normally, the backbone network's capacity is greater than that of the individual networks connected to it.
For example, a large company might implement a backbone network to connect departments that are located around the world. The equipment that ties together the departmental networks constitutes the network backbone. Another example of a backbone network is the Internet backbone, which is a massive, global system of fiber-optic cable and optical networking that carries the bulk of data between wide area networks (WANs), metro, regional, national and transoceanic networks.
=== Metropolitan area network ===
A metropolitan area network (MAN) is a large computer network that interconnects users with computer resources in a geographic region of the size of a metropolitan area.
=== Wide area network ===
A wide area network (WAN) is a computer network that covers a large geographic area such as a city or country, or even spans intercontinental distances. A WAN uses a communications channel that combines many types of media such as telephone lines, cables, and airwaves. A WAN often makes use of transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI model: the physical layer, the data link layer, and the network layer.
=== Enterprise private network ===
An enterprise private network is a network that a single organization builds to interconnect its office locations (e.g., production sites, head offices, remote offices, shops) so they can share computer resources.
=== Virtual private network ===
A virtual private network (VPN) is an overlay network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.
A VPN may have best-effort performance or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider.
=== Global area network ===
A global area network (GAN) is a network used for supporting mobile users across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.
== Organizational scope ==
Networks are typically managed by the organizations that own them. Private enterprise networks may use a combination of intranets and extranets. They may also provide network access to the Internet, which has no single owner and permits virtually unlimited global connectivity.
=== Intranet ===
An intranet is a set of networks that are under the control of a single administrative entity. An intranet typically uses the Internet Protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits the use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information.
=== Extranet ===
An extranet is a network that is under the administrative control of a single organization but supports a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. These other entities are not necessarily trusted from a security standpoint. The network connection to an extranet is often, but not always, implemented via WAN technology.
=== Internet ===
An internetwork is the connection of multiple different types of computer networks to form a single computer network using higher-layer network protocols and connecting them together using routers.
The Internet is the largest example of an internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet protocol suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet utilizes copper communications and an optical networking backbone to enable the World Wide Web (WWW), the Internet of things, video transfer, and a broad range of information services.
Participants on the Internet use a diverse array of several hundred documented, and often standardized, protocols compatible with the Internet protocol suite and the IP addressing system administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.
=== Darknet ===
A darknet is an overlay network, typically running on the Internet, that is only accessible through specialized software. It is an anonymizing network where connections are made only between trusted peers — sometimes called friends (F2F) — using non-standard protocols and ports.
Darknets are distinct from other distributed peer-to-peer networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference.
== Network service ==
Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate.
The World Wide Web, email, printing, and network file sharing are examples of well-known network services. Network services such as the Domain Name System (DNS) give names for IP and MAC addresses (people remember names like nm.lan better than numbers like 210.121.67.18), while the Dynamic Host Configuration Protocol (DHCP) ensures that the equipment on the network has a valid IP address.
Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service.
== Network performance ==
=== Bandwidth ===
Bandwidth in bit/s may refer to consumed bandwidth, corresponding to achieved throughput or goodput, i.e., the average rate of successful data transfer through a communication path. The throughput is affected by processes such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap and bandwidth allocation (using, for example, bandwidth allocation protocol and dynamic bandwidth allocation).
=== Network delay ===
Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several components, the sum of which is the total delay:
Processing delay – time it takes a router to process the packet header
Queuing delay – time the packet spends in routing queues
Transmission delay – time it takes to push the packet's bits onto the link
Propagation delay – time for a signal to propagate through the media
A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from less than a microsecond to several hundred milliseconds.
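A back-of-the-envelope calculation makes the four components above concrete. All link parameters in the following Python sketch are illustrative assumptions rather than measurements:

```python
# Worked example of the delay components above; the link parameters
# (1 Gbit/s link, 100 km of fiber, etc.) are illustrative assumptions.

packet_bits      = 1500 * 8        # one 1500-byte packet
link_rate_bps    = 1e9             # 1 Gbit/s link
distance_m       = 100e3           # 100 km of fiber
signal_speed_mps = 2e8             # roughly 2/3 the speed of light in fiber

processing_delay   = 10e-6         # assumed router header processing time
queuing_delay      = 50e-6         # assumed time spent waiting in the queue
transmission_delay = packet_bits / link_rate_bps     # push the bits onto the link
propagation_delay  = distance_m / signal_speed_mps   # signal travel time

total = processing_delay + queuing_delay + transmission_delay + propagation_delay
print(f"total one-hop delay: {total * 1e6:.1f} microseconds")  # ~572 us
```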
=== Performance metrics ===
The parameters that affect performance typically include throughput, jitter, bit error rate and latency.
In circuit-switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads. Other types of performance measures can include the level of noise and echo.
In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique, and modem enhancements.
There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed.
=== Network congestion ===
Network congestion occurs when a link or node is subjected to a greater data load than it is rated for, resulting in a deterioration of its quality of service. When networks are congested and queues become too full, packets have to be discarded, and participants must rely on retransmission to maintain reliable communications. Typical effects of congestion include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either to only a small increase in the network throughput or to a potential reduction in network throughput.
Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.
Modern networks use congestion control, congestion avoidance and traffic control techniques where endpoints typically slow down or sometimes even stop transmission entirely when the network is congested to try to avoid congestive collapse. Specific techniques include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers.
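For illustration, binary exponential backoff of the kind used by classic Ethernet and 802.11's CSMA/CA can be sketched in a few lines of Python. The transmit callback, slot time, and attempt limit below are illustrative assumptions, not any standard's exact parameters:

```python
import random
import time

def send_with_backoff(transmit, max_attempts=8, slot_time=0.001):
    """Binary exponential backoff, in the spirit of classic Ethernet:
    after the n-th collision, wait a random number of slot times drawn
    from [0, 2**n - 1] before retrying. `transmit` is assumed to
    return True on success and False on collision."""
    for attempt in range(max_attempts):
        if transmit():
            return True
        # Back off for a random number of slots; the contention window
        # doubles with every consecutive collision.
        slots = random.randint(0, 2 ** (attempt + 1) - 1)
        time.sleep(slots * slot_time)
    return False  # give up after too many collisions
```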
Another method to avoid the negative effects of network congestion is implementing quality of service priority schemes allowing selected traffic to bypass congestion. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for critical services. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn home networking standard.
For the Internet, RFC 2914 addresses the subject of congestion control in detail.
=== Network resilience ===
Network resilience is "the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation."
== Security ==
Computer networks are also used by security hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network via a denial-of-service attack.
=== Network security ===
Network security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources. Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies, and individuals.
=== Network surveillance ===
Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency.
Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity.
Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent or investigate criminal activity. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens.
However, many civil rights and privacy groups—such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union—have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. Fears such as this have led to lawsuits such as Hepting v. AT&T. The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance".
=== End to end encryption ===
End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet service providers or application service providers, from reading or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity.
Examples of end-to-end encryption include HTTPS for web traffic, PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio.
Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee the protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox.
The end-to-end encryption paradigm does not directly address risks at the endpoints of the communication themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the endpoints and the times and quantities of messages that are sent.
=== SSL/TLS ===
The introduction and rapid growth of e-commerce on the World Wide Web in the mid-1990s made it obvious that some form of authentication and encryption was needed. Netscape took the first shot at a new standard. At the time, the dominant web browser was Netscape Navigator. Netscape created a standard called secure socket layer (SSL). SSL requires a server with a certificate. When a client requests access to an SSL-secured server, the server sends a copy of the certificate to the client. The SSL client checks this certificate (all web browsers come with an exhaustive list of root certificates preloaded), and if the certificate checks out, the server is authenticated and the client negotiates a symmetric-key cipher for use in the session. The session is now in a very secure encrypted tunnel between the SSL server and the SSL client. SSL was later standardized and succeeded by Transport Layer Security (TLS).
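As an illustration of the handshake described above, the following minimal sketch uses Python's standard ssl module; the hostname example.org is an arbitrary placeholder, and real code would add error handling. Certificate verification against the preloaded root store happens inside the handshake itself:

```python
import socket
import ssl

# Load the system's trusted root certificates; verification of the
# server's certificate chain happens during the handshake below.
context = ssl.create_default_context()

with socket.create_connection(("example.org", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.org") as tls:
        # Reaching this point means the certificate checked out and a
        # symmetric session key has been negotiated.
        print("negotiated protocol:", tls.version())   # e.g. 'TLSv1.3'
        print("cipher suite:", tls.cipher()[0])
```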
== Views of networks ==
Users and network administrators typically have different views of their networks. Users can share printers and some servers from a workgroup, which usually means they are in the same geographic location and are on the same LAN, whereas a network administrator is responsible for keeping that network up and running. A community of interest has less of a connection to a single local area and should be thought of as a set of arbitrarily located users who share a set of servers, and possibly also communicate via peer-to-peer technologies.
Network administrators can see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application-layer gateways) that interconnect via the transmission media. Logical networks, called subnets in the TCP/IP architecture, map onto one or more transmission media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using VLANs.
Users and administrators are aware, to varying extents, of a network's trust and scope characteristics. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees). Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers).
Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets, and aggregates of subnets, that share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).
Over the Internet, there can be business-to-business, business-to-consumer and consumer-to-consumer communications. When money or sensitive information is exchanged, the communications are apt to be protected by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users and administrators, using secure VPN technology.
== References ==
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22.
== Further reading ==
James F. Kurose and Keith W. Ross: Computer Networking: A Top-Down Approach Featuring the Internet, Pearson Education, 2005.
William Stallings: Computer Networking with Internet Protocols and Technology, Pearson Education, 2004.
Dimitri Bertsekas and Robert Gallager: Data Networks, Prentice Hall, 1992.
In information technology and computer science, especially in the fields of computer programming, operating systems, multiprocessors, and databases, concurrency control ensures that correct results for concurrent operations are generated, while getting those results as quickly as possible.
Computer systems, both software and hardware, consist of modules, or components. Each component is designed to operate correctly, i.e., to obey or to meet certain consistency rules. When components that operate concurrently interact by messaging or by sharing accessed data (in memory or storage), a certain component's consistency may be violated by another component. The general area of concurrency control provides rules, methods, design methodologies, and theories to maintain the consistency of components operating concurrently while interacting, and thus the consistency and correctness of the whole system. Introducing concurrency control into a system means applying operation constraints which typically result in some performance reduction. Operation consistency and correctness should be achieved with the best possible efficiency, without reducing performance below reasonable levels. Concurrency control can require significant additional complexity and overhead in a concurrent algorithm compared to the simpler sequential algorithm.
For example, a failure in concurrency control can result in data corruption from torn read or write operations.
== Concurrency control in databases ==
Comments:
This section is applicable to all transactional systems, i.e., to all systems that use database transactions (atomic transactions; e.g., transactional objects in Systems management and in networks of smartphones which typically implement private, dedicated database systems), not only general-purpose database management systems (DBMSs).
DBMSs also need to deal with concurrency control issues that are not typical of database transactions alone but rather of operating systems in general. These issues (e.g., see Concurrency control in operating systems below) are outside the scope of this section.
Concurrency control in database management systems (DBMS; e.g., Bernstein et al. 1987, Weikum and Vossen 2001), other transactional objects, and related distributed applications (e.g., Grid computing and Cloud computing) ensures that database transactions are performed concurrently without violating the data integrity of the respective databases. Thus, concurrency control is an essential element for correctness in any system where two or more database transactions, executed with time overlap, can access the same data, e.g., virtually in any general-purpose database system. Consequently, a vast body of related research has been accumulated since database systems emerged in the early 1970s. A well-established concurrency control theory for database systems is outlined in the references mentioned above: serializability theory, which makes it possible to effectively design and analyze concurrency control methods and mechanisms. An alternative theory for concurrency control of atomic transactions over abstract data types is presented in (Lynch et al. 1993), and is not utilized below. That theory is more refined and complex, with a wider scope, and has been less utilized in the database literature than the classical theory above. Each theory has its pros and cons, emphasis and insight. To some extent they are complementary, and their merging may be useful.
To ensure correctness, a DBMS usually guarantees that only serializable transaction schedules are generated, unless serializability is intentionally relaxed to increase performance, but only in cases where application correctness is not harmed. For maintaining correctness in cases of failed (aborted) transactions (which can always happen for many reasons), schedules also need to have the recoverability (from abort) property. A DBMS also guarantees that no effect of committed transactions is lost, and no effect of aborted (rolled back) transactions remains in the related database. Overall transaction characterization is usually summarized by the ACID rules below. As databases have become distributed, or needed to cooperate in distributed environments (e.g., Federated databases in the early 1990s, and Cloud computing currently), the effective distribution of concurrency control mechanisms has received special attention.
=== Database transaction and the ACID rules ===
The concept of a database transaction (or atomic transaction) has evolved in order to enable both a well understood database system behavior in a faulty environment where crashes can happen any time, and recovery from a crash to a well understood database state. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring a lock, etc.), an abstraction supported in database systems and in other systems as well. Each transaction has well defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands). Every database transaction obeys the following rules (by support in the database system; i.e., a database system is designed to guarantee them for the transactions it runs); a brief code sketch follows the list:
Atomicity - Either the effects of all or none of its operations remain ("all or nothing" semantics) when a transaction is completed (committed or aborted respectively). In other words, to the outside world a committed transaction appears (by its effects on the database) to be indivisible (atomic), and an aborted transaction does not affect the database at all. Either all the operations are done or none of them are.
Consistency - Every transaction must leave the database in a consistent (correct) state, i.e., maintain the predetermined integrity rules of the database (constraints upon and among the database's objects). A transaction must transform a database from one consistent state to another consistent state (however, it is the responsibility of the transaction's programmer to make sure that the transaction itself is correct, i.e., performs correctly what it intends to perform from the application's point of view, while the predefined integrity rules are enforced by the DBMS). Thus, since a database can normally be changed only by transactions, all the database's states are consistent.
Isolation - Transactions cannot interfere with each other (as an end result of their executions). Moreover, usually (depending on concurrency control method) the effects of an incomplete transaction are not even visible to another transaction. Providing isolation is the main goal of concurrency control.
Durability - Effects of successful (committed) transactions must persist through crashes (typically by recording the transaction's effects and its commit event in a non-volatile memory).
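To make the rules above concrete, here is a minimal sketch in Python using the standard library's sqlite3 module; the accounts table and the transfer() helper are illustrative assumptions, not part of any standard. Atomicity shows up as the commit/rollback pair: either both updates take effect or neither does.

```python
import sqlite3

# Set up a toy database; the schema is an illustrative assumption.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        # Consistency: enforce an integrity rule of this toy database.
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE name = ?", (src,)).fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")
        conn.commit()       # atomicity/durability: both effects persist together
    except Exception:
        conn.rollback()     # abort: neither effect remains in the database
        raise

transfer(conn, "alice", "bob", 30)   # commits both updates as one unit
```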
The concept of the atomic transaction has been extended over the years to what have become business transactions, which actually implement types of workflow and are not atomic. However, even such enhanced transactions typically utilize atomic transactions as components.
=== Why is concurrency control needed? ===
If transactions are executed serially, i.e., sequentially with no overlap in time, no transaction concurrency exists. However, if concurrent transactions with interleaving operations are allowed in an uncontrolled manner, some unexpected, undesirable results may occur, such as:
The lost update problem: A second transaction writes a second value of a data-item (datum) on top of a first value written by a first concurrent transaction, and the first value is lost to other transactions running concurrently which need, by their precedence, to read the first value. The transactions that have read the wrong value end with incorrect results (see the sketch after this list).
The dirty read problem: Transactions read a value written by a transaction that is later aborted. This value disappears from the database upon abort, and should not have been read by any transaction ("dirty read"). The reading transactions end with incorrect results.
The incorrect summary problem: While one transaction takes a summary over the values of all the instances of a repeated data-item, a second transaction updates some instances of that data-item. The resulting summary does not reflect a correct result for any (usually needed for correctness) precedence order between the two transactions (if one is executed before the other), but rather some random result, depending on the timing of the updates, and whether certain update results have been included in the summary or not.
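The lost update problem above can be reproduced in a few lines. The following Python sketch simulates two concurrent "transactions" that each read a shared balance and then write back their own result; the sleep merely forces the unlucky interleaving, and the figures are illustrative:

```python
import threading
import time

balance = 100

def deposit(amount):
    global balance
    read_value = balance           # read step: both threads see 100
    time.sleep(0.01)               # force the unlucky interleaving
    balance = read_value + amount  # write step: overwrites the other update

t1 = threading.Thread(target=deposit, args=(50,))
t2 = threading.Thread(target=deposit, args=(50,))
t1.start(); t2.start(); t1.join(); t2.join()
print(balance)  # typically prints 150, not 200: one deposit was lost
```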
Most high-performance transactional systems need to run transactions concurrently to meet their performance requirements. Thus, without concurrency control such systems can neither provide correct results nor maintain their databases consistently.
=== Concurrency control mechanisms ===
==== Categories ====
The main categories of concurrency control mechanisms are:
Optimistic - Allow transactions to proceed without blocking any of their (read, write) operations ("...and be optimistic about the rules being met..."), and only check for violations of the desired integrity rules (e.g., serializability and recoverability) at each transaction's commit. If violations are detected upon a transaction's commit, the transaction is aborted and restarted. This approach is very efficient when few transactions are aborted.
Pessimistic - Block an operation of a transaction, if it may cause violation of the rules (e.g., serializability and recoverability), until the possibility of violation disappears. Blocking operations typically involves some performance reduction.
Semi-optimistic - Responds pessimistically or optimistically depending on the type of violation and how quickly it can be detected.
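As a sketch of the optimistic category above, the following Python fragment validates a version number at commit time and restarts the transaction on conflict. The OptimisticStore class and its single-item scope are simplifying assumptions, not a description of any particular DBMS:

```python
import threading

class OptimisticStore:
    """Sketch of optimistic concurrency control on a single data item:
    transactions read freely and validate a version number at commit."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._commit_lock = threading.Lock()  # only commits are serialized

    def read(self):
        with self._commit_lock:               # consistent value/version pair
            return self.value, self.version

    def commit(self, new_value, read_version):
        with self._commit_lock:
            if self.version != read_version:
                return False                  # conflict detected: abort
            self.value = new_value
            self.version += 1
            return True                       # validation passed: commit

def transaction(store, delta):
    while True:                               # abort-and-restart loop
        value, version = store.read()         # proceed without blocking
        if store.commit(value + delta, version):
            return
```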
Different categories provide different performance, i.e., different average transaction completion rates (throughput), depending on the mix of transaction types, the level of computing parallelism, and other factors. If selection and knowledge about trade-offs are available, then the category and method should be chosen to provide the highest performance.
Mutual blocking between two or more transactions (where each one blocks another) results in a deadlock, where the transactions involved are stalled and cannot reach completion. Most non-optimistic mechanisms (with blocking) are prone to deadlocks, which are resolved by an intentional abort of a stalled transaction (which releases the other transactions in that deadlock), and its immediate restart and re-execution. The likelihood of a deadlock is typically low.
Blocking, deadlocks, and aborts all result in performance reduction, and hence the trade-offs between the categories.
==== Methods ====
Many methods for concurrency control exist. Most of them can be implemented within either main category above. The major methods, each of which has many variants and which in some cases may overlap or be combined, are:
Locking (e.g., Two-phase locking - 2PL) - Controlling access to data by locks assigned to the data. Access of a transaction to a data item (database object) locked by another transaction may be blocked (depending on lock type and access operation type) until lock release.
Serialization graph checking (also called Serializability, or Conflict, or Precedence graph checking) - Checking for cycles in the schedule's graph and breaking them by aborts.
Timestamp ordering (TO) - Assigning timestamps to transactions, and controlling or checking access to data by timestamp order.
Other major concurrency control types that are utilized in conjunction with the methods above include:
Multiversion concurrency control (MVCC) - Increasing concurrency and performance by generating a new version of a database object each time the object is written, and allowing transactions to read several of the most recent relevant versions (of each object), depending on the scheduling method.
Index concurrency control - Synchronizing access operations to indexes, rather than to user data. Specialized methods provide substantial performance gains.
Private workspace model (Deferred update) - Each transaction maintains a private workspace for its accessed data, and its changed data become visible outside the transaction only upon its commit (e.g., Weikum and Vossen 2001). This model provides a different concurrency control behavior with benefits in many cases.
The most common mechanism type in database systems since their early days in the 1970s has been Strong strict Two-phase locking (SS2PL; also called Rigorous scheduling or Rigorous 2PL) which is a special case (variant) of Two-phase locking (2PL). It is pessimistic. In spite of its long name (for historical reasons) the idea of the SS2PL mechanism is simple: "Release all locks applied by a transaction only after the transaction has ended." SS2PL (or Rigorousness) is also the name of the set of all schedules that can be generated by this mechanism, i.e., these SS2PL (or Rigorous) schedules have the SS2PL (or Rigorousness) property.
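A minimal sketch of the SS2PL rule (release all locks applied by a transaction only after the transaction has ended) might look as follows in Python. The lock table, exclusive-only locks, and absence of deadlock detection are all simplifying assumptions; a real lock manager is considerably more involved:

```python
import threading

class SS2PLTransaction:
    """Minimal sketch of strong strict two-phase locking: every lock a
    transaction acquires is held until the transaction ends. Only
    exclusive locks and no deadlock detection, so illustrative only."""
    def __init__(self, lock_table):
        self.lock_table = lock_table   # maps item name -> threading.Lock
        self.held = []

    def write(self, data, item, value):
        lock = self.lock_table[item]
        if lock not in self.held:
            lock.acquire()             # may block until the holder ends
            self.held.append(lock)
        data[item] = value

    def end(self):                     # called on commit or abort
        for lock in reversed(self.held):
            lock.release()             # locks are released only here
        self.held.clear()
```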
=== Major goals of concurrency control mechanisms ===
Concurrency control mechanisms firstly need to operate correctly, i.e., to maintain each transaction's integrity rules (as related to concurrency; application-specific integrity rules are out of scope here) while transactions are running concurrently, and thus the integrity of the entire transactional system. Correctness needs to be achieved with as good performance as possible. In addition, increasingly a need exists to operate effectively while transactions are distributed over processes, computers, and computer networks. Other subjects that may affect concurrency control are recovery and replication.
==== Correctness ====
===== Serializability =====
For correctness, a common major goal of most concurrency control mechanisms is generating schedules with the Serializability property. Without serializability undesirable phenomena may occur, e.g., money may disappear from accounts, or be generated from nowhere. Serializability of a schedule means equivalence (in the resulting database values) to some serial schedule with the same transactions (i.e., in which transactions are sequential with no overlap in time, and thus completely isolated from each other: No concurrent access by any two transactions to the same data is possible). Serializability is considered the highest level of isolation among database transactions, and the major correctness criterion for concurrent transactions. In some cases compromised, relaxed forms of serializability are allowed for better performance (e.g., the popular Snapshot isolation mechanism) or to meet availability requirements in highly distributed systems (see Eventual consistency), but only if application's correctness is not violated by the relaxation (e.g., no relaxation is allowed for money transactions, since by relaxation money can disappear, or appear from nowhere).
Almost all implemented concurrency control mechanisms achieve serializability by providing conflict serializability, a broad special case of serializability (i.e., it covers most serializable schedules and does not impose significant additional delay-causing constraints) which can be implemented efficiently.
===== Recoverability =====
See Recoverability in Serializability
Concurrency control typically also ensures the Recoverability property of schedules for maintaining correctness in cases of aborted transactions (which can always happen for many reasons). Recoverability (from abort) means that no committed transaction in a schedule has read data written by an aborted transaction. Such data disappear from the database (upon the abort) and are parts of an incorrect database state. Reading such data violates the consistency rule of ACID. Unlike serializability, recoverability cannot be compromised or relaxed in any case, since any relaxation results in quick database integrity violation upon aborts. The major methods listed above provide serializability mechanisms. None of them in its general form automatically provides recoverability, and special considerations and mechanism enhancements are needed to support recoverability. A commonly utilized special case of recoverability is Strictness, which allows efficient database recovery from failure (but excludes optimistic implementations).
==== Distribution ====
With the fast technological development of computing, the difference between local and distributed computing over low-latency networks or buses is blurring. Thus the quite effective utilization of local techniques in such distributed environments is common, e.g., in computer clusters and multi-core processors. However, the local techniques have their limitations and use multi-processes (or threads) supported by multi-processors (or multi-cores) to scale. This often turns transactions into distributed ones, if they themselves need to span multi-processes. In these cases most local concurrency control techniques do not scale well.
===== Recovery =====
All systems are prone to failures, and handling recovery from failure is a must. The properties of the generated schedules, which are dictated by the concurrency control mechanism, may affect the effectiveness and efficiency of recovery. For example, the Strictness property (mentioned in the section Recoverability above) is often desirable for an efficient recovery.
===== Replication =====
For high availability, database objects are often replicated. Updates of replicas of the same database object need to be kept synchronized. This may affect the way concurrency control is done (e.g., Gray et al. 1996).
== Concurrency control in operating systems ==
Multitasking operating systems, especially real-time operating systems, need to maintain the illusion that all tasks running on top of them are running at the same time, even though only one or a few tasks really are running at any given moment due to the limitations of the hardware the operating system is running on. Such multitasking is fairly simple when all tasks are independent from each other. However, when several tasks try to use the same resource, or when tasks try to share information, it can lead to confusion and inconsistency. The task of concurrent computing is to solve that problem. Some solutions involve "locks" similar to the locks used in databases, but they risk causing problems of their own, such as deadlock. Other solutions include non-blocking algorithms and read-copy-update.
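The lock-based solution mentioned above can be sketched in a few lines of Python; the shared counter is an arbitrary stand-in for a contended resource:

```python
import threading

counter = 0
counter_lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with counter_lock:   # mutual exclusion: only one task in the
            counter += 1     # critical section at any moment

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 400000 with the lock; unpredictable without it
```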
== See also ==
Linearizability – Property of some operation(s) in concurrent programming
Lock (computer science) – Synchronization mechanism for enforcing limits on access to a resource
Mutual exclusion – In computing, restricting data to be accessible by one thread at a time
Search engine indexing – Method for data management
Semaphore (programming) – Variable used in a concurrent system
Software transactional memory – Concurrency control mechanism in software
Transactional Synchronization Extensions – Extension to the x86 instruction set architecture that adds hardware transactional memory support
Database transaction schedule
Isolation (computer science)
Distributed concurrency control
== References ==
Andrew S. Tanenbaum, Albert S Woodhull (2006): Operating Systems Design and Implementation, 3rd Edition, Prentice Hall, ISBN 0-13-142938-8
Silberschatz, Avi; Galvin, Peter; Gagne, Greg (2008). Operating Systems Concepts, 8th edition. John Wiley & Sons. ISBN 978-0-470-12872-5.
Philip A. Bernstein, Vassos Hadzilacos, Nathan Goodman (1987): Concurrency Control and Recovery in Database Systems (free PDF download), Addison Wesley Publishing Company, 1987, ISBN 0-201-10715-5
Gerhard Weikum, Gottfried Vossen (2001): Transactional Information Systems, Elsevier, ISBN 1-55860-508-8
Nancy Lynch, Michael Merritt, William Weihl, Alan Fekete (1993): Atomic Transactions in Concurrent and Distributed Systems, Morgan Kaufmann (Elsevier), August 1993, ISBN 978-1-55860-104-8, ISBN 1-55860-104-X
Yoav Raz (1992): "The Principle of Commitment Ordering, or Guaranteeing Serializability in a Heterogeneous Environment of Multiple Autonomous Resource Managers Using Atomic Commitment." (PDF), Proceedings of the Eighteenth International Conference on Very Large Data Bases (VLDB), pp. 292-312, Vancouver, Canada, August 1992. (also DEC-TR 841, Digital Equipment Corporation, November 1990)
webMethods was an enterprise software company focused on application integration, business process integration and B2B partner integration.
Founded in 1996, the company sold systems for organizations to use web services to connect software applications over the Internet. In 2000, the company's stock rose over 500% on its first day of public trading.
In 2007 webMethods was acquired by Software AG for $546 million and was made a subsidiary. By 2010 the webMethods division accounted for almost half of the parent company's revenues.
Software AG retained the webMethods name, and uses it as a brand to identify a software suite encompassing process improvement, service-oriented architecture (SOA), IT modernization and business and partner integration.
In July 2024, IBM completed its purchase of webMethods, and related products.
== History ==
The company was founded in 1996 by married couple Phillip Merrick (who was chief executive) and Caren Merrick (who was vice president for marketing, using the name Caren DeWitt at the time) to use Web standards such as Hypertext Transfer Protocol (HTTP) and (later) XML to allow software applications to communicate with one another in real time. This type of technology would later be referred to as "web services". The company's first product, called the Web Automation Server, was released in August 1996; this was later superseded by the webMethods B2B Server, also known as the webMethods Integration Server, which was the company's first product to see significant commercial use.
Initially, the founders used their savings and credit cards to keep the company operating in their house in Fairfax, Virginia.
By 1999 the company had clients such as DHL Express, Dell, Dun & Bradstreet and Hewlett-Packard, and had completed several rounds of venture capital investment.
Mayfield Fund and FBR Technology Venture Partners (an arm of Friedman Billings Ramsey) were among investors.
In March 1999 the company entered into a partnership with SAP AG to create an SAP-focused integration product called the SAP Business Connector. The company's revenue went from around $500,000 in 1997 to $14 million in 1999 and $202 million in 2001.
In February 2000, webMethods had its initial public offering (IPO) on the NASDAQ exchange. Just before the offering, the share price rose from its planned $13 to $35, and in its first day of trading, closed over $212 per share. The company raised only $175 million, while being valued at almost $7 billion. Although the term "unicorn" was not yet used, one analyst said "The market is kind of foaming at the mouth on three-letter buzzwords, like B2B and XML".
The quick rise of its share price is given as an example of the excess of the dot-com bubble.
The IPO allowed webMethods to acquire Active Software for an estimated $1.3 billion in stock shares in August 2000.
Active Software, a public company based in Santa Clara, California and founded in 1994, had acquired Alier Inc., TransLink Software Inc. and Premier Software Technologies Inc in April 2000.
In January 2001, webMethods acquired IntelliFrame Corporation, which had been part of Computer Network Technology Corporation, for about $31 million.
While revenues grew, the company posted continuing operating losses due to the early 2000s recession following the bursting of the dot-com bubble through 2002.
Although its share price declined sharply from its peak, company executives, directors and investors still made large profits on their shares.
In October 2003, the company announced it had acquired three smaller companies in the integration market, for a combined estimated value of $32 million.
The Mind Electric developed a technology called Glue, and its founder Graham Glass became the webMethods chief technical officer.
The Dante Group developed software for business activity monitoring (BAM).
The former DataChannel assets from Netegrity were used in a portal.
Deloitte estimated webMethods was the fourth fastest growing technology company in North America in 2003, on the Deloitte Fast 500.
By October 2004, after revenues declined and losses rose, Phillip Merrick was replaced as CEO by David Mitchell.
In August 2006, webMethods acquired Cerebra, a privately held company that developed metadata management software.
In September 2006, webMethods acquired Infravio (which developed a software registry) for $38 million.
The company was an early developer and promoter of standards for web service technologies, having worked on XML-RPC, a precursor to SOAP, and developed Web Interface Definition Language, a precursor to the Web Services Description Language standard.
As part of a larger trend of consolidation, Software AG (based in Darmstadt, Germany) bid to acquire webMethods in April 2007 for an estimated $546 million in cash.
The offer price was more than 25% over the market price of its shares, and came one day after activist shareholders Augustus Oliver and Clifford Press disclosed a 6% stake and claimed the company was under-valued.
Although speculation persisted that a competitor might make a higher bid, the deal closed in June 2007.
The brand webMethods was retained, effectively making webMethods its flagship product line, immediately doubling Software AG revenues in North America.
webMethods version 8.0 was released in 2009, supplemented with other Software AG products such as CentraSite, Tamino, and EntireX. In 2010, the webMethods division of Software AG, known as business process excellence (BPE), recorded $668 million (€499 million) in revenues and was a major contributor to company net income.
In 2011, Caren Merrick ran as a Republican for the Virginia state senate, saying her history with webMethods made her a "jobs creator", but was defeated by Barbara Favola.
In 2023, IBM agreed to acquire webMethods, along with StreamSets, from Software AG. The acquisition was completed on 1 July 2024.
== Releases for the webMethods Integration Server ==
IBM webMethods Integration Server 11.1 - October 2024
== See also ==
Middleware (distributed applications)
== References ==
== External links ==
webMethods product page
webMethods Integration Free Trial - a sign-up page offering a free trial.
webMethods Technical Community - a forum, part of IBM's TechXchange, for asking questions and connecting with other webMethods professionals.
webMethods Integration - an iPaaS solution that runs in the cloud and does not require installation or maintenance by the customer; a free trial and a free tier are available.
Distributed computing is a field of computer science that studies distributed systems, defined as computer systems whose inter-communicating components are located on different networked computers.
The components of a distributed system communicate and coordinate their actions by passing messages to one another in order to achieve a common goal. Three significant challenges of distributed systems are: maintaining concurrency of components, overcoming the lack of a global clock, and managing the independent failure of components. When a component of one system fails, the entire system does not fail. Examples of distributed systems vary from SOA-based systems to microservices to massively multiplayer online games to peer-to-peer applications. Distributed systems cost significantly more than monolithic architectures, primarily due to increased needs for additional hardware, servers, gateways, firewalls, new subnets, proxies, and so on. Also, distributed systems are prone to the fallacies of distributed computing. On the other hand, a well-designed distributed system is more scalable, more durable, more changeable, and more fine-tuned than a monolithic application deployed on a single machine. According to Marc Brooker: "a system is scalable in the range where marginal cost of additional workload is nearly constant." Serverless technologies fit this definition, but the total cost of ownership, and not just the infrastructure cost, must be considered.
A computer program that runs within a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many different types of implementations for the message passing mechanism, including pure HTTP, RPC-like connectors and message queues.
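As a toy illustration of message passing between components that share no memory, the following Python sketch uses a message queue between operating-system processes; the doubling worker and the message format are arbitrary assumptions:

```python
import multiprocessing as mp

def worker(inbox, outbox):
    """A process that communicates purely by message passing:
    it shares no memory with the sender."""
    for msg in iter(inbox.get, None):   # None is the stop sentinel
        outbox.put(("result", msg["x"] * 2))

if __name__ == "__main__":
    inbox, outbox = mp.Queue(), mp.Queue()
    p = mp.Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put({"x": 21})      # send a message...
    print(outbox.get())       # ...and receive the reply: ('result', 42)
    inbox.put(None)           # tell the worker to shut down
    p.join()
```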
Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other via message passing.
== Introduction ==
The word distributed in terms such as "distributed system", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area. The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing.
While there is no single definition of a distributed system, the following defining properties are commonly used:
There are several autonomous computational entities (computers or nodes), each of which has its own local memory.
The entities communicate with each other by message passing.
A distributed system may have a common goal, such as solving a large computational problem; the user then perceives the collection of autonomous processors as a unit. Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users.
Other typical properties of distributed systems include the following:
The system has to tolerate failures in individual computers.
The structure of the system (network topology, network latency, number of computers) is not known in advance, the system may consist of different kinds of computers and network links, and the system may change during the execution of a distributed program.
Each computer has only a limited, incomplete view of the system. Each computer may know only one part of the input.
== Patterns ==
Here are common architectural patterns used for distributed computing:
Saga interaction pattern
Microservices
Event-driven architecture
== Events vs. Messages ==
In distributed systems, events represent a fact or state change (e.g., OrderPlaced) and are typically broadcast asynchronously to multiple consumers, promoting loose coupling and scalability. While events generally don’t expect an immediate response, acknowledgment mechanisms are often implemented at the infrastructure level (e.g., Kafka commit offsets, SNS delivery statuses) rather than being an inherent part of the event pattern itself.
In contrast, messages serve a broader role, encompassing commands (e.g., ProcessPayment), events (e.g., PaymentProcessed), and documents (e.g., DataPayload). Both events and messages can support various delivery guarantees, including at-least-once, at-most-once, and exactly-once, depending on the technology stack and implementation. However, exactly-once delivery is often achieved through idempotency mechanisms rather than true, infrastructure-level exactly-once semantics.
Delivery patterns for both events and messages include publish/subscribe (one-to-many) and point-to-point (one-to-one). While request/reply is technically possible, it is more commonly associated with messaging patterns rather than pure event-driven systems. Events excel at state propagation and decoupled notifications, while messages are better suited for command execution, workflow orchestration, and explicit coordination.
Modern architectures commonly combine both approaches, leveraging events for distributed state change notifications and messages for targeted command execution and structured workflows based on specific timing, ordering, and delivery requirements.
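The contrast between the two delivery patterns can be made concrete with a toy in-process broker. The Broker class below and its topic and queue names are illustrative assumptions, not any particular messaging product's API:

```python
from collections import defaultdict

class Broker:
    """Toy in-process broker contrasting the two delivery patterns:
    publish/subscribe fans an event out to every subscriber, while a
    point-to-point queue hands each message to exactly one consumer."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> handlers (pub/sub)
        self.queues = defaultdict(list)       # queue -> pending messages

    # publish/subscribe: one-to-many, fire-and-forget
    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)                    # every subscriber sees it

    # point-to-point: one-to-one
    def send(self, queue, message):
        self.queues[queue].append(message)

    def receive(self, queue):
        return self.queues[queue].pop(0) if self.queues[queue] else None

broker = Broker()
broker.subscribe("orders", lambda e: print("billing saw", e))
broker.subscribe("orders", lambda e: print("shipping saw", e))
broker.publish("orders", {"event": "OrderPlaced", "id": 7})   # both fire
broker.send("payments", {"command": "ProcessPayment", "id": 7})
print(broker.receive("payments"))  # exactly one consumer gets it
```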
== Parallel and distributed computing ==
Distributed systems are groups of networked computers which share a common goal for their work.
The terms "concurrent computing", "parallel computing", and "distributed computing" have much overlap, and no clear distinction exists between them. The same system may be characterized both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel. Parallel computing may be seen as a particularly tightly coupled form of distributed computing, and distributed computing may be seen as a loosely coupled form of parallel computing. Nevertheless, it is possible to roughly classify concurrent systems as "parallel" or "distributed" using the following criteria:
In parallel computing, all processors may have access to a shared memory to exchange information between processors.
In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors.
The figure on the right illustrates the difference between distributed and parallel systems. Figure (a) is a schematic view of a typical distributed system; the system is represented as a network topology in which each node is a computer and each line connecting the nodes is a communication link. Figure (b) shows the same distributed system in more detail: each computer has its own local memory, and information can be exchanged only by passing messages from one node to another by using the available communication links. Figure (c) shows a parallel system in which each processor has a direct access to a shared memory.
The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm that do not quite match the above definitions of parallel and distributed systems (see below for more detailed discussion). Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms while the coordination of a large-scale distributed system uses distributed algorithms.
== History ==
The use of concurrent processes which communicate through message-passing has its roots in operating system architectures studied in the 1960s. The first widespread distributed systems were local-area networks such as Ethernet, which was invented in the 1970s.
ARPANET, one of the predecessors of the Internet, was introduced in the late 1960s, and ARPANET e-mail was invented in the early 1970s. E-mail became the most successful application of ARPANET, and it is probably the earliest example of a large-scale distributed application. In addition to ARPANET (and its successor, the global Internet), other early worldwide computer networks included Usenet and FidoNet from the 1980s, both of which were used to support distributed discussion systems.
The study of distributed computing became its own branch of computer science in the late 1970s and early 1980s. The first conference in the field, Symposium on Principles of Distributed Computing (PODC), dates back to 1982, and its counterpart International Symposium on Distributed Computing (DISC) was first held in Ottawa in 1985 as the International Workshop on Distributed Algorithms on Graphs.
== Architectures ==
Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system.
Whether these CPUs share resources or not determines a first distinction between three types of architecture:
Shared memory
Shared disk
Shared nothing.
Distributed programming typically falls into one of several basic architectures: client–server, three-tier, n-tier, or peer-to-peer; architectures may also be categorized as loosely coupled or tightly coupled.
Client–server: architectures where smart clients contact the server for data then format and display it to the users. Input at the client is committed back to the server when it represents a permanent change.
Three-tier: architectures that move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are three-tier.
n-tier: architectures that refer typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers.
Peer-to-peer: architectures where there are no special machines that provide a service or manage the network resources. Instead all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and as servers. Examples of this architecture include BitTorrent and the bitcoin network.
Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes. Through various message passing protocols, processes may communicate directly with one another, typically in a main/sub relationship. Alternatively, a "database-centric" architecture can enable distributed computing to be done without any form of direct inter-process communication, by utilizing a shared database. Database-centric architecture in particular provides relational processing analytics in a schematic architecture allowing for live environment relay. This enables distributed computing functions both within and beyond the parameters of a networked database.
=== Cell-Based Architecture ===
Cell-based architecture is a distributed computing approach in which computational resources are organized into self-contained units called cells. Each cell operates independently, processing requests while maintaining scalability, fault isolation, and availability.
A cell typically consists of multiple services or application components and functions as an autonomous unit. Some implementations replicate entire sets of services across multiple cells, while others partition workloads between cells. In replicated models, requests may be rerouted to an operational cell if another experiences a failure. This design is intended to enhance system resilience by reducing the impact of localized failures.
Some implementations employ circuit breakers within and between cells. Within a cell, circuit breakers may be used to prevent cascading failures among services, while inter-cell circuit breakers can isolate failing cells and redirect traffic to those that remain operational.
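A circuit breaker of the kind described above can be sketched in a few lines of Python. The failure threshold and reset timeout are illustrative parameters, and production implementations typically track state per downstream cell and add a distinct half-open state:

```python
import time

class CircuitBreaker:
    """Sketch of a circuit breaker as used between cells: after
    `threshold` consecutive failures the circuit opens, and calls fail
    fast for `reset_after` seconds, giving the failing cell time to
    recover. Thresholds are illustrative assumptions."""
    def __init__(self, threshold=3, reset_after=30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None      # allow a single trial call through
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()   # open the circuit
            raise
        self.failures = 0              # success closes the circuit
        return result
```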
Cell-based architecture has been adopted in some large-scale distributed systems, particularly in cloud-native and high-availability environments, where fault isolation and redundancy are key design considerations. Its implementation varies depending on system requirements, infrastructure constraints, and operational objectives.
== Applications ==
Reasons for using distributed systems and distributed computing may include:
The very nature of an application may require the use of a communication network that connects several computers: for example, data produced in one physical location and required in another location.
There are many cases in which the use of a single computer would be possible in principle, but the use of a distributed system is beneficial for practical reasons. For example:
It can allow for much larger storage and memory, faster compute, and higher bandwidth than a single machine.
It can provide more reliability than a non-distributed system, as there is no single point of failure. Moreover, a distributed system may be easier to expand and manage than a monolithic uniprocessor system.
It may be more cost-efficient to obtain the desired level of performance by using a cluster of several low-end computers, in comparison with a single high-end computer.
== Examples ==
Examples of distributed systems and applications of distributed computing include the following:
telecommunications networks:
telephone networks and cellular networks,
computer networks such as the Internet,
wireless sensor networks,
routing algorithms;
network applications:
World Wide Web and peer-to-peer networks,
massively multiplayer online games and virtual reality communities,
distributed databases and distributed database management systems,
network file systems,
distributed cache such as burst buffers,
distributed information processing systems such as banking systems and airline reservation systems;
real-time process control:
aircraft control systems,
industrial control systems;
parallel computation:
scientific computing, including cluster computing, grid computing, cloud computing, and various volunteer computing projects,
distributed rendering in computer graphics.
== Reactive distributed systems ==
According to the Reactive Manifesto, reactive distributed systems are responsive, resilient, elastic, and message-driven. Reactive systems are therefore more flexible, loosely coupled, and scalable. To make a system reactive, designers are advised to implement the Reactive Principles, a set of principles and patterns that help make cloud-native as well as edge-native applications more reactive.
== Theoretical foundations ==
=== Models ===
Many tasks that we would like to automate by using a computer are of question–answer type: we would like to ask a question and the computer should produce an answer. In theoretical computer science, such tasks are called computational problems. Formally, a computational problem consists of instances together with a solution for each instance. Instances are questions that we can ask, and solutions are desired answers to these questions.
Theoretical computer science seeks to understand which computational problems can be solved by using a computer (computability theory) and how efficiently (computational complexity theory). Traditionally, it is said that a problem can be solved by using a computer if we can design an algorithm that produces a correct solution for any given instance. Such an algorithm can be implemented as a computer program that runs on a general-purpose computer: the program reads a problem instance from input, performs some computation, and produces the solution as output. Formalisms such as random-access machines or universal Turing machines can be used as abstract models of a sequential general-purpose computer executing such an algorithm.
The field of concurrent and distributed computing studies similar questions in the case of either multiple computers, or a computer that executes a network of interacting processes: which computational problems can be solved in such a network and how efficiently? However, it is not at all obvious what is meant by "solving a problem" in the case of a concurrent or distributed system: for example, what is the task of the algorithm designer, and what is the concurrent or distributed equivalent of a sequential general-purpose computer?
The discussion below focuses on the case of multiple computers, although many of the issues are the same for concurrent processes running on a single computer.
Three viewpoints are commonly used:
Parallel algorithms in shared-memory model
All processors have access to a shared memory. The algorithm designer chooses the program executed by each processor.
One commonly used theoretical model is the parallel random-access machine (PRAM). However, the classical PRAM model assumes synchronous access to the shared memory.
Shared-memory programs can be extended to distributed systems if the underlying operating system encapsulates the communication between nodes and virtually unifies the memory across all individual systems.
A model that is closer to the behavior of real-world multiprocessor machines and takes into account the use of machine instructions, such as Compare-and-swap (CAS), is that of asynchronous shared memory. There is a wide body of work on this model, a summary of which can be found in the literature.
Parallel algorithms in message-passing model
The algorithm designer chooses the structure of the network, as well as the program executed by each computer.
Models such as Boolean circuits and sorting networks are used. A Boolean circuit can be seen as a computer network: each gate is a computer that runs an extremely simple computer program. Similarly, a sorting network can be seen as a computer network: each comparator is a computer.
Distributed algorithms in message-passing model
The algorithm designer only chooses the computer program. All computers run the same program. The system must work correctly regardless of the structure of the network.
A commonly used model is a graph with one finite-state machine per node.
In the case of distributed algorithms, computational problems are typically related to graphs. Often the graph that describes the structure of the computer network is the problem instance. This is illustrated in the following example.
=== An example ===
Consider the computational problem of finding a coloring of a given graph G. Different fields might take the following approaches:
Centralized algorithms
The graph G is encoded as a string, and the string is given as input to a computer. The computer program finds a coloring of the graph, encodes the coloring as a string, and outputs the result.
Parallel algorithms
Again, the graph G is encoded as a string. However, multiple computers can access the same string in parallel. Each computer might focus on one part of the graph and produce a coloring for that part.
The main focus is on high-performance computation that exploits the processing power of multiple computers in parallel.
Distributed algorithms
The graph G is the structure of the computer network. There is one computer for each node of G and one communication link for each edge of G. Initially, each computer only knows about its immediate neighbors in the graph G; the computers must exchange messages with each other to discover more about the structure of G. Each computer must produce its own color as output.
The main focus is on coordinating the operation of an arbitrary distributed system.
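To make the distributed setting concrete, the following is a minimal, hypothetical simulation of a distributed greedy coloring in the synchronous message-passing model. Each node knows only its own identifier and those of its neighbors; the global loop merely simulates the synchronous rounds, and the example graph and the symmetry-breaking rule are illustrative assumptions, not a specific published algorithm.

```python
# Toy simulation of a distributed greedy coloring in the synchronous
# message-passing model. Each node sees only its own neighborhood.

def distributed_greedy_coloring(adjacency):
    color = {v: None for v in adjacency}
    while any(c is None for c in color.values()):
        snapshot = dict(color)                  # state at the round start
        for v in adjacency:                     # every node acts "in parallel"
            if snapshot[v] is not None:
                continue
            uncolored = [u for u in adjacency[v] if snapshot[u] is None]
            # Symmetry breaking: act only if v has the locally highest
            # identifier among its uncolored neighbors.
            if all(v > u for u in uncolored):
                used = {snapshot[u] for u in adjacency[v]
                        if snapshot[u] is not None}
                color[v] = min(c for c in range(len(adjacency) + 1)
                               if c not in used)
    return color

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(distributed_greedy_coloring(graph))       # a proper coloring of G
```

Adjacent nodes never act in the same round (two neighbors cannot both be local maxima), so the resulting coloring is proper; the uncolored node with the globally highest identifier always acts, so the simulation terminates.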
While the field of parallel algorithms has a different focus than the field of distributed algorithms, there is much interaction between the two fields. For example, the Cole–Vishkin algorithm for graph coloring was originally presented as a parallel algorithm, but the same technique can also be used directly as a distributed algorithm.
Moreover, a parallel algorithm can be implemented either in a parallel system (using shared memory) or in a distributed system (using message passing). The traditional boundary between parallel and distributed algorithms (choose a suitable network vs. run in any given network) does not lie in the same place as the boundary between parallel and distributed systems (shared memory vs. message passing).
=== Complexity measures ===
In parallel algorithms, yet another resource in addition to time and space is the number of computers. Indeed, often there is a trade-off between the running time and the number of computers: the problem can be solved faster if there are more computers running in parallel (see speedup). If a decision problem can be solved in polylogarithmic time by using a polynomial number of processors, then the problem is said to be in the class NC. The class NC can be defined equally well by using the PRAM formalism or Boolean circuits—PRAM machines can simulate Boolean circuits efficiently and vice versa.
In the analysis of distributed algorithms, more attention is usually paid to communication operations than to computational steps. Perhaps the simplest model of distributed computing is a synchronous system where all nodes operate in a lockstep fashion. This model is commonly known as the LOCAL model. During each communication round, all nodes in parallel (1) receive the latest messages from their neighbors, (2) perform arbitrary local computation, and (3) send new messages to their neighbors. In such systems, a central complexity measure is the number of synchronous communication rounds required to complete the task.
This complexity measure is closely related to the diameter of the network. Let D be the diameter of the network. On the one hand, any computable problem can be solved trivially in a synchronous distributed system in approximately 2D communication rounds: simply gather all information in one location (D rounds), solve the problem, and inform each node about the solution (D rounds).
On the other hand, if the running time of the algorithm is much smaller than D communication rounds, then the nodes in the network must produce their output without having the possibility to obtain information about distant parts of the network. In other words, the nodes must make globally consistent decisions based on information that is available in their local D-neighborhood. Many distributed algorithms are known with running times much smaller than D rounds, and understanding which problems can be solved by such algorithms is one of the central research questions of the field. Typically, an algorithm which solves a problem in polylogarithmic time in the network size is considered efficient in this model.
Another commonly used measure is the total number of bits transmitted in the network (cf. communication complexity). The features of this concept are typically captured with the CONGEST(B) model, which is defined similarly to the LOCAL model, but where single messages can only contain B bits.
=== Other problems ===
Traditional computational problems take the perspective that the user asks a question, a computer (or a distributed system) processes the question, then produces an answer and stops. However, there are also problems where the system is required not to stop, including the dining philosophers problem and other similar mutual exclusion problems. In these problems, the distributed system is supposed to continuously coordinate the use of shared resources so that no conflicts or deadlocks occur.
There are also fundamental challenges that are unique to distributed computing, for example those related to fault-tolerance. Examples of related problems include consensus problems, Byzantine fault tolerance, and self-stabilisation.
Much research is also focused on understanding the asynchronous nature of distributed systems:
Synchronizers can be used to run synchronous algorithms in asynchronous systems.
Logical clocks provide a causal happened-before ordering of events; a minimal sketch follows this list.
Clock synchronization algorithms provide globally consistent physical time stamps.
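As an illustration of the second point, here is a minimal sketch of a Lamport logical clock. The class name and the way messages carry timestamps are assumptions made for the example; the essential part is the merge rule, which takes the maximum of the local and received times and then ticks.

```python
# Minimal Lamport logical clock; message transport is left abstract.

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1                      # tick on every local event
        return self.time

    def send(self):
        self.time += 1                      # tick, then stamp the message
        return self.time

    def receive(self, msg_time):
        # Merge rule: a receive is ordered after the matching send.
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
stamp = a.send()                            # a.time == 1
print(b.receive(stamp))                     # 2: causally after the send
```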
In distributed systems, latency is often measured at a high percentile, such as the 99th percentile, because the median and the average can hide long tails.
=== Election ===
Coordinator election (or leader election) is the process of designating a single process as the organizer of some task distributed among several computers (nodes). Before the task is begun, all network nodes are either unaware which node will serve as the "coordinator" (or leader) of the task, or unable to communicate with the current coordinator. After a coordinator election algorithm has been run, however, each node throughout the network recognizes a particular, unique node as the task coordinator.
The network nodes communicate among themselves in order to decide which of them will get into the "coordinator" state. For that, they need some method in order to break the symmetry among them. For example, if each node has unique and comparable identities, then the nodes can compare their identities, and decide that the node with the highest identity is the coordinator.
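A classic instance of this idea is election on a unidirectional ring, in the style of the Chang and Roberts algorithm: each node circulates its identifier, forwards only identifiers larger than its own, and a node whose identifier survives a full lap becomes the coordinator. The following simulation is only a sketch under these assumptions; a real implementation would exchange actual messages between processes.

```python
# Simulated leader election on a unidirectional ring (Chang-Roberts
# style): the highest identifier survives a full lap and wins.

def ring_election(ids):
    n = len(ids)
    tokens = list(ids)              # round 0: every node sends its own ID
    while True:
        nxt = [None] * n
        for i, token in enumerate(tokens):
            if token is None:
                continue
            j = (i + 1) % n         # clockwise neighbor receives the token
            if token == ids[j]:
                return ids[j]       # own ID completed a lap: j is leader
            if token > ids[j]:
                nxt[j] = token      # forward larger IDs; drop smaller ones
        tokens = nxt

print(ring_election([3, 7, 2, 9, 4]))   # 9
```

In the worst case this circulates O(n^2) messages, which motivates the more economical algorithms discussed below.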
The definition of this problem is often attributed to LeLann, who formalized it as a method to create a new token in a token ring network in which the token has been lost.
Coordinator election algorithms are designed to be economical in terms of total bytes transmitted, and time. The algorithm suggested by Gallager, Humblet, and Spira for general undirected graphs has had a strong impact on the design of distributed algorithms in general, and won the Dijkstra Prize for an influential paper in distributed computing.
Many other algorithms were suggested for different kinds of network graphs, such as undirected rings, unidirectional rings, complete graphs, grids, directed Euler graphs, and others. A general method that decouples the issue of the graph family from the design of the coordinator election algorithm was suggested by Korach, Kutten, and Moran.
In order to perform coordination, distributed systems employ the concept of coordinators. The coordinator election problem is to choose a process from among a group of processes on different processors in a distributed system to act as the central coordinator. Several central coordinator election algorithms exist.
=== Properties of distributed systems ===
So far the focus has been on designing a distributed system that solves a given problem. A complementary research problem is studying the properties of a given distributed system.
The halting problem is an analogous example from the field of centralised computation: we are given a computer program and the task is to decide whether it halts or runs forever. The halting problem is undecidable in the general case, and naturally understanding the behaviour of a computer network is at least as hard as understanding the behaviour of one computer.
However, there are many interesting special cases that are decidable. In particular, it is possible to reason about the behaviour of a network of finite-state machines. One example is telling whether a given network of interacting (asynchronous and non-deterministic) finite-state machines can reach a deadlock. This problem is PSPACE-complete, i.e., it is decidable, but not likely that there is an efficient (centralised, parallel or distributed) algorithm that solves the problem in the case of large networks.
== See also ==
== Notes ==
== References ==
== Further reading ==
== External links ==
Media related to Distributed computing at Wikimedia Commons | Wikipedia/Distributed_application |
Content-Centric Networking (CCN) diverges from the IP-based, host-oriented Internet architecture by prioritizing content, making it directly addressable and routable. In CCN, endpoints communicate based on named data rather than IP addresses. This approach is a part of information-centric networking (ICN) architecture and involves the exchange of content request messages (termed "Interests") and content return messages (termed "Content Objects").
In this paradigm, connectivity may be intermittent; end-host and in-network storage can be used transparently, since bits in the network and on data storage devices have exactly the same value; mobility and multi-access are the norm; and anycast, multicast, and broadcast are natively supported. Data becomes independent of location, application, storage, and means of transportation, enabling in-network caching and replication. The expected benefits are improved efficiency, better scalability with respect to information and bandwidth demand, and better robustness in challenging communication scenarios. In information-centric networking the cache is a network-level solution with rapidly changing cache states, higher request arrival rates and smaller cache sizes; information-centric caching policies should therefore be fast and lightweight.
== History ==
The principles behind information-centric networks were first described in the original 17 rules of Ted Nelson's Project Xanadu in 1979. In 2002, Brent Baccala submitted an Internet-Draft differentiating between connection-oriented and data-oriented networking and suggested that the Internet web architecture was rapidly becoming more data-oriented. In 2006, the DONA project at UC Berkeley and ICSI proposed an information-centric network architecture, which improved TRIAD by incorporating security (authenticity) and persistence as first-class primitives in the architecture. On August 30, 2006, PARC Research Fellow Van Jacobson gave a talk titled "A new way to look at Networking" at Google. The CCN project was officially launched at PARC in 2007. In 2009, PARC announced the CCNx project (Content-Centric Network), publishing the interoperability specifications and an open-source implementation on the Project CCNx website on September 21, 2009. The original CCN design was described in a paper published at the International Conference on Emerging Networking EXperiments and Technologies (CoNEXT) in December 2009.
Annual CCNx Community meetings were held in 2011, 2012, 2013 and 2015.
The protocol specification for CCNx 1.0 has been made available for comment and discussion. Work on CCNx happens openly in the ICNRG IRTF research group.
== Specification ==
The CCNx specification was published as a series of IETF Internet-Drafts, including:
draft-irtf-icnrg-ccnxsemantics-01
draft-irtf-icnrg-ccnxmessages-01
draft-mosko-icnrg-ccnxurischeme-00
== Software ==
The CCNx software was available on GitHub.
== Motivation and benefits ==
The functional goal of the Internet Protocol as conceived and created in the 1970s was to enable two machines, one comprising resources and the other desiring access to those resources, to have a conversation with each other. The operating principle was to assign addresses to endpoints, thereby enabling these endpoints to locate and connect with one another.
Since those early days, there have been fundamental changes in the way the Internet is used — from the proliferation of social networking services to viewing and sharing digital content such as videos, photographs, documents, etc. Instead of providing basic connectivity, the Internet has become largely a distribution network with massive amounts of video and web page content flowing from content providers to viewers. Increasingly, today's internet users demand faster, more efficient, and more secure access to content without concern about where that content might be located.
Networks are also used in many environments where the traditional TCP/IP communication model doesn't fit. The Internet of Things (IoT) and sensor networks are environments where the source-destination communication model doesn't always provide the best solution.
CCN was designed to work in many environments from high-speed data centers to resource-constrained sensors. CCN aims to be:
Secure - The CCN communication model secures the data itself rather than the communication pipe between two specific end hosts. However, ubiquitous content caching and the absence of a secure pipe between end hosts make protecting content against unauthorized access a challenge that requires extra care and dedicated solutions.
Flexible - CCN uses names to communicate. Names can be location-independent and are much more adaptable than IP addresses. Network elements can make more advanced choices based on the named requests and data.
Scalable - CCN enables the network to scale by allowing caching, enabling native multicast traffic, providing native load balancing, and facilitating resource planning.
== Basic concepts ==
Content Object messages are named payloads that are network-sized chunks of data. Names are a hierarchical series of binary name segments that are assigned to Content Objects by content publishers. Signatures are cryptographic bindings between a name, a payload, and the Key ID of the publisher. This is used for provenance. Interest messages are requests for Content Objects that match the name along with some optional restrictions on that object.
The core protocol operates as follows: consumers request content by sending an Interest message with the name of the desired content. The network routes the Interest by name using longest-prefix match. The Interest leaves state behind in each router it traverses; this state is stored in the Pending Interest Table (PIT). When a match is found (when an Interest matches a Content Object), the content is sent back along the reverse path of the Interest, following the PIT state created by the Interest.
Because the content is self-identifiable (via the name and the security binding) any Content Object can be cached. Interest messages may be matched against caches along the way, not only at the publishers.
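The forwarding behavior just described can be sketched as follows. This is a deliberately simplified, single-node model: "faces" are assumed to be objects with send_interest and send_content methods, names are tuples of components, and all data structures are plain dictionaries; the real CCNx forwarder is considerably more involved.

```python
# Simplified CCN forwarder: Content Store (cache), Pending Interest
# Table (PIT) and Forwarding Information Base (FIB), as in the text.

class CcnNode:
    def __init__(self):
        self.content_store = {}    # name -> cached Content Object
        self.pit = {}              # name -> set of requesting faces
        self.fib = {}              # name prefix -> next-hop face

    def on_interest(self, name, in_face):
        if name in self.content_store:             # answered from cache
            in_face.send_content(name, self.content_store[name])
            return
        if name in self.pit:                       # aggregate duplicates
            self.pit[name].add(in_face)
            return
        self.pit[name] = {in_face}                 # leave state behind
        for i in range(len(name), 0, -1):          # longest-prefix match
            if name[:i] in self.fib:
                self.fib[name[:i]].send_interest(name)
                return

    def on_content(self, name, content):
        self.content_store[name] = content         # opportunistic caching
        for face in self.pit.pop(name, ()):        # reverse-path delivery
            face.send_content(name, content)
```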
== CCNx releases ==
=== CCNx 0.x ===
Interests match Content Objects based on name prefixes. For example, an Interest for /a/b would match a Content Object named /a/b/c/d or /a/b.
Interests include restrictions in the form of selectors. These help the network select which of the possible prefix matches are actual matches. For example, an Interest might exclude certain names, ask for a minimum or a maximum number of extra name segments, etc.
Content Objects have an implicit final name component that is equal to the hash of the Content Object. This may be used for matching to a name.
Packet encoding is done using CCNB (a proprietary format based on a type of binary XML).
The last version of this branch is 0.8.2. The software is available under a GPL license. Specifications and documentation are also available.
=== CCNx 1.x ===
CCNx 1.x differs from CCNx 0.x in the following ways:
Interests match Content Objects on exact names, not name prefixes. Therefore, an Interest for /a/b will only match a Content Object with the name /a/b.
Interests can restrict matches on the publisher KeyID or the object's ContentObjectHash.
A nested type–length–value (TLV) format is used to encode all messages on the wire. Each message is composed of a set of packet headers and a protocol message that includes the name, the content (or payload), and information used to cryptographically validate the message – all contained in nested TLVs.
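As a rough illustration of nested TLV encoding, the sketch below packs a name and a payload inside an outer message. The 2-byte type and length fields and the type codes are assumptions made for the example; the authoritative wire format is the one defined in the IETF drafts listed above.

```python
# Minimal nested type-length-value (TLV) encoder for illustration.

import struct

def tlv(t, value):
    # 2-byte big-endian type, 2-byte length, then the raw value bytes.
    return struct.pack("!HH", t, len(value)) + value

T_NAME, T_PAYLOAD, T_MESSAGE = 0, 1, 2      # invented type codes

message = tlv(T_MESSAGE, tlv(T_NAME, b"/a/b") + tlv(T_PAYLOAD, b"hello"))
print(message.hex())   # outer TLV header followed by the nested TLVs
```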
The specification of CCNx 1.0 is available at: http://blogs.parc.com/ccnx/specifications/
== Derivative works ==
Named data networking is an NSF-funded project based on the original CCNx 0.x code.
CCN-lite is a lightweight version of CCNx functionally interoperable with CCN 0.x.
== See also ==
Information-centric networking
Named data networking
Information-centric networking caching policies
== References == | Wikipedia/Content-centric_networking |
Enterprise application integration (EAI) is the use of software and computer systems' architectural principles to integrate a set of enterprise computer applications.
== Overview ==
Enterprise application integration is an integration framework composed of a collection of technologies and services which form a middleware or "middleware framework" to enable integration of systems and applications across an enterprise.
Many types of business software such as supply chain management applications, ERP systems, CRM applications for managing customers, business intelligence applications, payroll, and human resources systems typically cannot communicate with one another in order to share data or business rules. For this reason, such applications are sometimes referred to as islands of automation or information silos. This lack of communication leads to inefficiencies, wherein identical data are stored in multiple locations, or straightforward processes are unable to be automated.
Enterprise application integration is the process of linking such applications within a single organization together in order to simplify and automate business processes to the greatest extent possible, while at the same time avoiding having to make sweeping changes to the existing applications or data structures. Applications can be linked either at the back-end via APIs or (seldom) the front-end (GUI).
In the words of research firm Gartner: "[EAI is] the unrestricted sharing of data and business processes among any connected application or data sources in the enterprise."
The various systems that need to be linked together may reside on different operating systems, use different database solutions or computer languages, or different date and time formats, or could be legacy systems that are no longer supported by the vendor who originally created them. In some cases, such systems are dubbed "stovepipe systems" because they consist of components that have been jammed together in a way that makes it very hard to modify them in any way.
=== Improving connectivity ===
If integration is applied without following a structured EAI approach, point-to-point connections grow across an organization. Dependencies are added on an impromptu basis, resulting in a complex structure that is difficult to maintain. This is commonly referred to as spaghetti, an allusion to the programming equivalent of spaghetti code.
For example, the number of connections needed to have fully meshed point-to-point connections among n points is given by the binomial coefficient $\tbinom{n}{2} = \tfrac{n(n-1)}{2}$. Thus, for ten applications to be fully integrated point-to-point, $\tfrac{10 \times 9}{2} = 45$ point-to-point connections are needed, following a quadratic growth pattern.
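To make the quadratic growth concrete, a quick check with Python's standard library:

```python
from math import comb

for n in (2, 5, 10, 20):
    print(n, comb(n, 2))    # 1, 10, 45, 190 point-to-point connections
```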
However, the number of connections within an organization does not necessarily grow with the square of the number of points: the number of connections at any point is bounded by the number of other points, but in practice it can be significantly smaller. EAI can also increase coupling between systems and therefore increase management overhead and costs.
EAI is not just about sharing data between applications but also focuses on sharing both business data and business processes. A middleware analyst attending to EAI will often look at the system of systems.
=== Purposes ===
EAI can be used for different purposes:
Data integration: Ensures that information in multiple systems is kept consistent. This is also known as enterprise information integration (EII).
Vendor independence: Extracts business policies or rules from applications and implements them in the EAI system, so that even if one of the business applications is replaced with a different vendor's application, the business rules do not have to be re-implemented.
Common facade: An EAI system can front-end a cluster of applications, providing a single consistent access interface to these applications and shielding users from having to learn to use different software packages.
=== Patterns ===
This section describes common design patterns for implementing EAI, including integration, access and lifetime patterns. These are abstract patterns and can be implemented in many different ways. There are many other patterns commonly used in the industry, ranging from high-level abstract design patterns to highly specific implementation patterns.
==== Integration patterns ====
EAI systems implement two patterns:
Mediation (intra-communication)
Here, the EAI system acts as the go-between or broker between multiple applications. Whenever an interesting event occurs in an application (for instance, new information is created or a new transaction completed) an integration module in the EAI system is notified. The module then propagates the changes to other relevant applications.
Federation (inter-communication)
In this case, the EAI system acts as the overarching facade across multiple applications. All event calls from the 'outside world' to any of the applications are front-ended by the EAI system. The EAI system is configured to expose only the relevant information and interfaces of the underlying applications to the outside world, and performs all interactions with the underlying applications on behalf of the requester.
Both patterns are often used concurrently. The same EAI system could be keeping multiple applications in sync (mediation), while servicing requests from external users against these applications (federation).
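A minimal sketch of the mediation pattern, assuming an in-process publish/subscribe broker (the application callbacks and event names are invented for the example):

```python
# Mediation pattern in miniature: the broker relays events from one
# application to every other application that subscribed to them.

class Broker:
    def __init__(self):
        self.subscribers = {}               # event type -> callbacks

    def subscribe(self, event_type, callback):
        self.subscribers.setdefault(event_type, []).append(callback)

    def publish(self, event_type, payload):
        for callback in self.subscribers.get(event_type, []):
            callback(payload)

broker = Broker()
broker.subscribe("customer.created",
                 lambda c: print("billing saw", c["name"]))
broker.subscribe("customer.created",
                 lambda c: print("CRM saw", c["name"]))
broker.publish("customer.created", {"name": "Alice"})
```

A federation front end would instead sit in front of publish, validating requests from the outside world before invoking the underlying applications.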
==== Access patterns ====
EAI supports both asynchronous (fire and forget) and synchronous access patterns, the former being typical in the mediation case and the latter in the federation case.
==== Lifetime patterns ====
An integration operation could be short-lived (e.g., keeping data in sync across two applications could be completed within a second) or long-lived (e.g., one of the steps could involve the EAI system interacting with a human workflow application for approval of a loan that takes hours or days to complete).
=== Topologies ===
There are two major topologies: hub-and-spoke, and bus. Each has its own advantages and disadvantages. In the hub-and-spoke model, the EAI system is at the center (the hub), and interacts with the applications via the spokes. In the bus model, the EAI system is the bus (or is implemented as a resident module in an already existing message bus or message-oriented middleware).
Most large enterprises use zoned networks to create a layered defense against network-oriented threats. For example, an enterprise typically has a credit card processing (PCI-compliant) zone, a non-PCI zone, a data zone, a DMZ zone to proxy external user access, and an IWZ zone to proxy internal user access. Applications need to integrate across multiple zones, and the hub-and-spoke model works better in this case.
=== Technologies ===
Multiple technologies are used in implementing each of the components of the EAI system:
Bus/hub
This is usually implemented by enhancing standard middleware products (application server, message bus) or as a stand-alone program (i.e., one that does not use any middleware), acting as its own middleware.
Application connectivity
The bus/hub connects to applications through a set of adapters (also referred to as connectors). These are programs that know how to interact with an underlying business application. The adapter performs communication in both directions: it performs requests from the hub against the application, and it notifies the hub when an event of interest occurs in the application (a new record inserted, a transaction completed, etc.). Adapters can be specific to an application (e.g., built against the application vendor's client libraries) or specific to a class of applications (e.g., able to interact with any application through a standard communication protocol, such as SOAP, SMTP or Action Message Format (AMF)). The adapter could reside in the same process space as the bus/hub or execute in a remote location and interact with the hub/bus through industry-standard protocols such as message queues or web services, or use a proprietary protocol. In the Java world, standards such as JCA allow adapters to be created in a vendor-neutral manner.
Data format and transformation
To avoid every adapter having to convert data to/from every other application's formats, EAI systems usually stipulate an application-independent (or common) data format. The EAI system usually provides a data transformation service as well to help convert between application-specific and common formats. This is done in two steps: the adapter converts information from the application's format to the bus's common format, and then semantic transformations are applied to this (converting zip codes to city names, splitting or merging objects from one application into objects in the other applications, and so on); a sketch of these two steps follows this list.
Integration modules
An EAI system could be participating in multiple concurrent integration operations at any given time, each type of integration being processed by a different integration module. Integration modules subscribe to events of specific types and process notifications that they receive when these events occur. These modules could be implemented in different ways: on Java-based EAI systems, these could be web applications or EJBs or even POJOs that conform to the EAI system's specifications.
Support for transactions
When used for process integration, the EAI system also provides transactional consistency across applications by executing all integration operations across all applications in a single overarching distributed transaction (using two-phase commit protocols or compensating transactions).
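The two-step transformation mentioned under "Data format and transformation" above might look like the following sketch. The record layouts, the canonical format, and the zip-code lookup table are all invented for the example:

```python
# Step 1: syntactic mapping from an application-specific record into
# the bus's common (canonical) format.
def to_canonical(crm_record):
    return {"customer_id": crm_record["id"], "zip": crm_record["postcode"]}

ZIP_TO_CITY = {"20170": "Herndon"}          # stand-in for a real service

# Step 2: semantic transformation applied to the canonical record
# (here, enriching a zip code into a city name).
def enrich(canonical):
    canonical["city"] = ZIP_TO_CITY.get(canonical["zip"], "unknown")
    return canonical

print(enrich(to_canonical({"id": 42, "postcode": "20170"})))
```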
=== Communication architectures ===
Currently, there are many variations of thought on what constitutes the best infrastructure, component model, and standards structure for Enterprise Application Integration. There seems to be a consensus that four components are essential for a modern enterprise application integration architecture:
A centralized broker that handles security, access, and communication. This can be accomplished through integration servers (like the Schools Interoperability Framework (SIF) Zone Integration Servers) or through similar software like the enterprise service bus (ESB) model that acts as a services manager.
An independent data model based on a standard data structure, also known as a canonical data model. It appears that XML and the use of XML style sheets have become the de facto and in some cases de jure standard for this uniform business language.
A connector, or agent model where each vendor, application, or interface can build a single component that can speak natively to that application and communicate with the centralized broker.
A system model that defines the APIs, data flow and rules of engagement to the system such that components can be built to interface with it in a standardized way.
Although other approaches like connecting at the database or user-interface level have been explored, they have not been found to scale or adapt well. Individual applications can publish messages to the centralized broker and subscribe to receive certain messages from that broker. Each application only requires one connection to the broker. This central control approach can be extremely scalable and highly evolvable.
Enterprise Application Integration is related to middleware technologies such as message-oriented middleware (MOM), and data representation technologies such as XML or JSON. Other EAI technologies involve using web services as part of service-oriented architecture as a means of integration. Enterprise Application Integration tends to be data centric, and is expected to grow to include content integration and business processes.
=== Implementation pitfalls ===
In 2003 it was reported that 70% of all EAI projects fail. Most of these failures are not due to the software itself or technical difficulties, but due to management issues. Integration Consortium European Chairman Steve Craggs has outlined the seven main pitfalls encountered by companies using EAI systems and explains solutions to these problems.
Constant change: The very nature of EAI is dynamic and requires dynamic project managers to manage their implementation.
Shortage of EAI experts: EAI requires knowledge of many issues and technical aspects.
Competing standards: Within the EAI field, the paradox is that EAI standards themselves are not universal.
EAI is a tool paradigm: EAI is not a tool, but rather a system and should be implemented as such.
Building interfaces is an art: Engineering the solution is not sufficient. Solutions need to be negotiated with user departments to reach a common consensus on the final outcome. A lack of consensus on interface designs leads to excessive effort to map between various systems' data requirements.
Loss of detail: Information that seemed unimportant at an earlier stage may become crucial later.
Accountability: Since so many departments have many conflicting requirements, there should be clear accountability for the system's final structure.
Other potential problems may arise in these areas:
Lack of centralized co-ordination of EAI work.
Emerging Requirements: EAI implementations should be extensible and modular to allow for future changes.
Protectionism: The applications whose data is being integrated often belong to different departments that have technical, cultural, and political reasons for not wanting to share their data with other departments.
=== See also ===
Enterprise architecture framework
Strategies for Enterprise Application Integration
Business semantics management
Data integration
Enterprise information integration
Enterprise integration
Enterprise Integration Patterns
Enterprise service bus
Generalised Enterprise Reference Architecture and Methodology
Integration appliance
Integration competency center
Integration platform
System integration
==== Initiatives and organizations ====
Health Level 7
Open Knowledge Initiative
OSS through Java
Schools Interoperability Framework (SIF)
== References == | Wikipedia/Enterprise_application_integration
Network management is the process of administering and managing computer networks. Services provided by this discipline include fault analysis, performance management, provisioning of networks and maintaining quality of service. Network management software is used by network administrators to help perform these functions.
== Technologies ==
A small number of access methods exist to support network and network device management. Network management allows IT professionals to monitor network components within large networks. Access methods include SNMP, command-line interface (CLI), custom XML, CMIP, Windows Management Instrumentation (WMI), Transaction Language 1 (TL1), CORBA, NETCONF, RESTCONF and the Java Management Extensions (JMX).
Schemas include the Structure of Management Information (SMI), YANG, WBEM, the Common Information Model (CIM Schema), and MTOSI amongst others.
== Value ==
Effective network management can provide positive strategic impacts. For example, when developing an infrastructure, providing participants with a shared interactive space allows them to collaborate with each other, promoting overall benefits. At the same time, the value of network management to a strategic network also depends on the relationships between participants: active participation, interaction and collaboration make participants more trusting of each other and enhance cohesion.
== See also ==
== References ==
== External links ==
Network Monitoring and Management Tools
Software-Defined Network Management | Wikipedia/Network_management |
Objective Interface Systems, Inc. is a computer communications software and hardware company. The company's headquarters are in Herndon, Virginia, USA. OIS develops, manufactures, licenses, and supports software and hardware products that generally fit into one or more of the following markets:
Real-time communications middleware software and hardware
Embedded communications middleware software and hardware
High-performance communications middleware software and hardware
Secure communications software and hardware
A popular OIS product is the ORBexpress CORBA middleware software. ORBexpress is most popular in the real-time and embedded computer markets. OIS supports the software version of ORBexpress on more than 6,000 computing platforms (combinations of CPU families, operating systems, and language compiler versions). OIS also has FPGA versions of ORBexpress to allow hardware blocks on an FPGA to interoperate with software.
OIS engineers invented a form of communications security called the Partitioning Communication System (PCS). The PCS is a technical architecture that protects multiple information flows from influencing each other when communicated on a single network wire. The PCS is best implemented on a separation operating system, such as SELinux, or on a separation kernel.
OIS's communications products are most frequently found in the enterprise, telecom/datacom, mil/aero, medical, robotics, process control and transportation industries. Objective Interface is a privately held company and has developed software products since 1989 and hardware products since 2001.
The company is actively involved with various standards groups including:
Common Criteria
IEEE
Network Centric Operations Industry Consortium
Object Management Group (OMG)
The Open Group
Wireless Innovation Forum
== Corporate Headquarters ==
OIS headquarters is located at 220 Spring Street, Herndon, VA, 20170-6201.
== References ==
== External links ==
Objective Interface Systems – official website
Object Management Group (OMG)
The Open Group (archived 2011-02-26 at the Wayback Machine)
Wireless Innovation Forum | Wikipedia/Objective_Interface_Systems
A relational database (RDB) is a database based on the relational model of data, as proposed by E. F. Codd in 1970.
A relational database management system (RDBMS) is a type of database management system that stores data in a structured format using rows and columns.
Many relational database systems are equipped with the option of using SQL (Structured Query Language) for querying and updating the database.
== History ==
The concept of relational database was defined by E. F. Codd at IBM in 1970. Codd introduced the term relational in his research paper "A Relational Model of Data for Large Shared Data Banks". In this paper and later papers, he defined what he meant by relation. One well-known definition of what constitutes a relational database system is composed of Codd's 12 rules.
However, no commercial implementations of the relational model conform to all of Codd's rules, so the term has gradually come to describe a broader class of database systems, which at a minimum:
Present the data to the user as relations (a presentation in tabular form, i.e. as a collection of tables with each table consisting of a set of rows and columns);
Provide relational operators to manipulate the data in tabular form.
In 1974, IBM began developing System R, a research project to develop a prototype RDBMS.
The first system sold as an RDBMS was Multics Relational Data Store (June 1976). Oracle was released in 1979 by Relational Software, now Oracle Corporation. Ingres and IBM BS12 followed. Other examples of an RDBMS include IBM Db2, SAP Sybase ASE, and Informix. In 1984, the first RDBMS for Macintosh began being developed, code-named Silver Surfer, and was released in 1987 as 4th Dimension and known today as 4D.
The first systems that were relatively faithful implementations of the relational model were from:
University of Michigan – Micro DBMS (1969)
Massachusetts Institute of Technology (1971)
IBM UK Scientific Centre at Peterlee – IS1 (1970–72), and its successor, PRTV (1973–79).
The most common definition of an RDBMS is a product that presents a view of data as a collection of rows and columns, even if it is not based strictly upon relational theory. By this definition, RDBMS products typically implement some but not all of Codd's 12 rules.
A second school of thought argues that if a database does not implement all of Codd's rules (or the current understanding on the relational model, as expressed by Christopher J. Date, Hugh Darwen and others), it is not relational. This view, shared by many theorists and other strict adherents to Codd's principles, would disqualify most DBMSs as not relational. For clarification, they often refer to some RDBMSs as truly-relational database management systems (TRDBMS), naming others pseudo-relational database management systems (PRDBMS).
As of 2009, most commercial relational DBMSs employ SQL as their query language.
Alternative query languages have been proposed and implemented, notably the pre-1996 implementation of Ingres QUEL.
== Relational model ==
A relational model organizes data into one or more tables (or "relations") of columns and rows, with a unique key identifying each row. Rows are also called records or tuples. Columns are also called attributes. Generally, each table/relation represents one "entity type" (such as customer or product). The rows represent instances of that type of entity (such as "Lee" or "chair") and the columns represent values attributed to that instance (such as address or price).
For example, each row of a class table corresponds to a class, and a class corresponds to multiple students, so the relationship between the class table and the student table is "one to many".
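A minimal illustration of this one-to-many example, using Python's built-in sqlite3 module (the table and column names are invented for the sketch):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE class   (class_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE student (student_id INTEGER PRIMARY KEY,
                          name TEXT,
                          class_id INTEGER REFERENCES class(class_id));
""")
db.execute("INSERT INTO class VALUES (1, 'Databases 101')")
db.executemany("INSERT INTO student VALUES (?, ?, ?)",
               [(1, 'Lee', 1), (2, 'Kim', 1)])     # many students, one class
for row in db.execute("""SELECT s.name, c.name
                            FROM student s JOIN class c USING (class_id)"""):
    print(row)   # ('Lee', 'Databases 101'), ('Kim', 'Databases 101')
```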
== Keys ==
Each row in a table has its own unique key. Rows in a table can be linked to rows in other tables by adding a column for the unique key of the linked row (such columns are known as foreign keys). Codd showed that data relationships of arbitrary complexity can be represented by a simple set of concepts.
Part of this processing involves consistently being able to select or modify one and only one row in a table. Therefore, most physical implementations have a unique primary key (PK) for each row in a table. When a new row is written to the table, a new unique value for the primary key is generated; this is the key that the system uses primarily for accessing the table. System performance is optimized for PKs. Other, more natural keys may also be identified and defined as alternate keys (AK). Often several columns are needed to form an AK (this is one reason why a single integer column is usually made the PK). Both PKs and AKs have the ability to uniquely identify a row within a table. Additional technology may be applied to ensure a unique ID across the world, a globally unique identifier, when there are broader system requirements.
The primary keys within a database are used to define the relationships among the tables. When a PK migrates to another table, it becomes a foreign key (FK) in the other table. When each cell can contain only one value and the PK migrates into a regular entity table, this design pattern can represent either a one-to-one or one-to-many relationship. Most relational database designs resolve many-to-many relationships by creating an additional table that contains the PKs from both of the other entity tables – the relationship becomes an entity; the resolution table is then named appropriately and the two FKs are combined to form a PK. The migration of PKs to other tables is the second major reason why system-assigned integers are used normally as PKs; there is usually neither efficiency nor clarity in migrating a bunch of other types of columns.
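As a concrete sketch of this pattern, the following Python snippet uses the standard sqlite3 module; the student, course and enrollment tables and their columns are hypothetical names chosen only for illustration. The enrollment table resolves a many-to-many relationship: the two migrated PKs become FKs and are combined to form its PK.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE student (
    student_id INTEGER PRIMARY KEY,   -- system-assigned integer PK
    name       TEXT NOT NULL
);
CREATE TABLE course (
    course_id INTEGER PRIMARY KEY,
    title     TEXT NOT NULL
);
-- Resolution table: the relationship becomes an entity.
CREATE TABLE enrollment (
    student_id INTEGER REFERENCES student(student_id),  -- migrated PK, now an FK
    course_id  INTEGER REFERENCES course(course_id),    -- migrated PK, now an FK
    PRIMARY KEY (student_id, course_id)                 -- the two FKs form the PK
);
""")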
=== Relationships ===
Relationships are a logical connection between different tables (entities), established on the basis of interaction among these tables. These relationships can be modelled as an entity-relationship model.
== Transactions ==
In order for a database management system (DBMS) to operate efficiently and accurately, it must support ACID (atomicity, consistency, isolation, durability) transactions.
== Stored procedures ==
Part of the programming within a RDBMS is accomplished using stored procedures (SPs). Often procedures can be used to greatly reduce the amount of information transferred within and outside of a system. For increased security, the system design may grant access to only the stored procedures and not directly to the tables. Fundamental stored procedures contain the logic needed to insert new and update existing data. More complex procedures may be written to implement additional rules and logic related to processing or selecting the data.
== Terminology ==
The relational database was first defined in June 1970 by Edgar Codd, of IBM's San Jose Research Laboratory. Codd's view of what qualifies as an RDBMS is summarized in Codd's 12 rules. A relational database has become the predominant type of database. Other models besides the relational model include the hierarchical database model and the network model.
The table below summarizes some of the most important relational database terms and the corresponding SQL terms:

Relational term: relation or base relvar — SQL term: table
Relational term: derived relvar — SQL term: view, query result or result set
Relational term: tuple — SQL term: row or record
Relational term: attribute — SQL term: column or field
== Relations or tables ==
In a relational database, a relation is a set of tuples that have the same attributes. A tuple usually represents an object and information about that object. Objects are typically physical objects or concepts. A relation is usually described as a table, which is organized into rows and columns. All the data referenced by an attribute are in the same domain and conform to the same constraints.
The relational model specifies that the tuples of a relation have no specific order and that the tuples, in turn, impose no order on the attributes. Applications access data by specifying queries, which use operations such as select to identify tuples, project to identify attributes, and join to combine relations. Relations can be modified using the insert, delete, and update operators. New tuples can supply explicit values or be derived from a query. Similarly, queries identify tuples for updating or deleting.
Tuples by definition are unique. If the tuple contains a candidate or primary key then obviously it is unique; however, a primary key need not be defined for a row or record to be a tuple. The definition of a tuple requires that it be unique, but does not require a primary key to be defined. Because a tuple is unique, its attributes by definition constitute a superkey.
== Base and derived relations ==
All data are stored and accessed via relations. Relations that store data are called "base relations", and in implementations are called "tables". Other relations do not store data, but are computed by applying relational operations to other relations. These relations are sometimes called "derived relations". In implementations these are called "views" or "queries". Derived relations are convenient in that they act as a single relation, even though they may grab information from several relations. Also, derived relations can be used as an abstraction layer.
=== Domain ===
A domain describes the set of possible values for a given attribute, and can be considered a constraint on the value of the attribute. Mathematically, attaching a domain to an attribute means that any value for the attribute must be an element of the specified set. The character string "ABC", for instance, is not in the integer domain, but the integer value 123 is. Another example of domain describes the possible values for the field "CoinFace" as ("Heads","Tails"). So, the field "CoinFace" will not accept input values like (0,1) or (H,T).
== Constraints ==
Constraints are often used to make it possible to further restrict the domain of an attribute. For instance, a constraint can restrict a given integer attribute to values between 1 and 10. Constraints provide one method of implementing business rules in the database and support subsequent data use within the application layer. SQL implements constraint functionality in the form of check constraints.
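A minimal sketch of a check constraint, using Python's sqlite3 module; the table and column names are invented for the example. The first constraint restricts an integer attribute to values between 1 and 10, and the second restricts "CoinFace" to the domain ("Heads","Tails") discussed above.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE coin_toss (
        toss_no   INTEGER CHECK (toss_no BETWEEN 1 AND 10),
        coin_face TEXT    CHECK (coin_face IN ('Heads', 'Tails'))
    )
""")
con.execute("INSERT INTO coin_toss VALUES (1, 'Heads')")    # satisfies both constraints
try:
    con.execute("INSERT INTO coin_toss VALUES (11, 'H')")   # violates both constraints
except sqlite3.IntegrityError as err:
    print("rejected:", err)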
Constraints restrict the data that can be stored in relations. These are usually defined using expressions that result in a Boolean value, indicating whether or not the data satisfies the constraint. Constraints can apply to single attributes, to a tuple (restricting combinations of attributes) or to an entire relation.
Since every attribute has an associated domain, there are constraints (domain constraints). The two principal rules for the relational model are known as entity integrity and referential integrity.
=== Primary key ===
Every relation/table has a primary key; this is a consequence of a relation being a set. A primary key uniquely specifies a tuple within a table. While natural attributes (attributes used to describe the data being entered) are sometimes good primary keys, surrogate keys are often used instead. A surrogate key is an artificial attribute assigned to an object which uniquely identifies it (for instance, in a table of information about students at a school they might all be assigned a student ID in order to differentiate them). The surrogate key has no intrinsic (inherent) meaning, but rather is useful through its ability to uniquely identify a tuple.
Another common occurrence, especially in regard to N:M cardinality is the composite key. A composite key is a key made up of two or more attributes within a table that (together) uniquely identify a record.
=== Foreign key ===
Foreign key refers to a field in a relational table that matches the primary key column of another table. It relates the two keys. Foreign keys need not have unique values in the referencing relation. A foreign key can be used to cross-reference tables, and it effectively uses the values of attributes in the referenced relation to restrict the domain of one or more attributes in the referencing relation. The concept is described formally as: "For all tuples in the referencing relation projected over the referencing attributes, there must exist a tuple in the referenced relation projected over those same attributes such that the values in each of the referencing attributes match the corresponding values in the referenced attributes."
=== Stored procedures ===
A stored procedure is executable code that is associated with, and generally stored in, the database. Stored procedures usually collect and customize common operations, like inserting a tuple into a relation, gathering statistical information about usage patterns, or encapsulating complex business logic and calculations. Frequently they are used as an application programming interface (API) for security or simplicity. Implementations of stored procedures on SQL RDBMS's often allow developers to take advantage of procedural extensions (often vendor-specific) to the standard declarative SQL syntax.
Stored procedures are not part of the relational database model, but all commercial implementations include them.
=== Index ===
An index is one way of providing quicker access to data. Indices can be created on any combination of attributes on a relation. Queries that filter using those attributes can find matching tuples directly using the index (similar to Hash table lookup), without having to check each tuple in turn. This is analogous to using the index of a book to go directly to the page on which the information you are looking for is found, so that you do not have to read the entire book to find what you are looking for. Relational databases typically supply multiple indexing techniques, each of which is optimal for some combination of data distribution, relation size, and typical access pattern. Indices are usually implemented via B+ trees, R-trees, and bitmaps.
Indices are usually not considered part of the database, as they are considered an implementation detail, though indices are usually maintained by the same group that maintains the other parts of the database. The use of efficient indexes on both primary and foreign keys can dramatically improve query performance. This is because B-tree indexes result in query times proportional to log(n) where n is the number of rows in a table and hash indexes result in constant time queries (no size dependency as long as the relevant part of the index fits into memory).
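The effect of an index on the chosen query plan can be observed directly in SQLite through Python; this is a sketch, the account table is a hypothetical example, and SQLite's EXPLAIN QUERY PLAN output is informal and version-dependent.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, branch TEXT, balance REAL)")
query = "SELECT * FROM account WHERE branch = ?"

# Without a secondary index the optimizer can only scan the whole table.
print(con.execute("EXPLAIN QUERY PLAN " + query, ("Uptown",)).fetchall())

con.execute("CREATE INDEX idx_account_branch ON account(branch)")
# With the index in place, the plan becomes a search on idx_account_branch.
print(con.execute("EXPLAIN QUERY PLAN " + query, ("Uptown",)).fetchall())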
== Relational operations ==
Queries made against the relational database, and the derived relvars in the database, are expressed in a relational calculus or a relational algebra. In his original relational algebra, Codd introduced eight relational operators in two groups of four operators each. The first four operators were based on the traditional mathematical set operations:
The union operator (∪) combines the tuples of two relations and removes all duplicate tuples from the result. The relational union operator is equivalent to the SQL UNION operator.
The intersection operator (∩) produces the set of tuples that two relations share in common. Intersection is implemented in SQL in the form of the INTERSECT operator.
The set difference operator (-) acts on two relations and produces the set of tuples from the first relation that do not exist in the second relation. Difference is implemented in SQL in the form of the EXCEPT or MINUS operator.
The cartesian product (X) of two relations is a join that is not restricted by any criteria, resulting in every tuple of the first relation being matched with every tuple of the second relation. The cartesian product is implemented in SQL as the Cross join operator.
The remaining operators proposed by Codd involve special operations specific to relational databases:
The selection, or restriction, operation (σ) retrieves tuples from a relation, limiting the results to only those that meet a specific criterion, i.e. a subset in terms of set theory. The SQL equivalent of selection is the SELECT query statement with a WHERE clause.
The projection operation (π) extracts only the specified attributes from a tuple or set of tuples.
The join operation defined for relational databases is often referred to as a natural join (⋈). In this type of join, two relations are connected by their common attributes. MySQL's approximation of a natural join is the Inner join operator. In SQL, an INNER JOIN prevents a cartesian product from occurring when there are two tables in a query. For each table added to an SQL query, one additional INNER JOIN is added to prevent a cartesian product. Thus, for N tables in an SQL query, there must be N−1 INNER JOINs to prevent a cartesian product.
The relational division (÷) operation is a slightly more complex operation and essentially involves using the tuples of one relation (the dividend) to partition a second relation (the divisor). The relational division operator is effectively the opposite of the cartesian product operator (hence the name).
Other operators have been introduced or proposed since Codd's introduction of the original eight including relational comparison operators and extensions that offer support for nesting and hierarchical data, among others.
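The classical operators above are easy to sketch in Python by modeling a relation as a set of tuples with an agreed column order; this is an informal illustration, not how an RDBMS stores data.

# Two relations with the same header (name, number).
R = {("a", 1), ("b", 2)}
S = {("b", 2), ("c", 3)}

union        = R | S                          # ∪: all tuples, duplicates removed
intersection = R & S                          # ∩: tuples common to both
difference   = R - S                          # −: tuples of R not in S
product      = {r + s for r in R for s in S}  # ×: flattened pairs, |R|·|S| tuples

selection  = {t for t in R if t[1] > 1}       # σ: tuples meeting a criterion
projection = {(t[0],) for t in R}             # π: only the specified attributes

print(sorted(selection), sorted(projection))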
== Normalization ==
Normalization was first proposed by Codd as an integral part of the relational model. It encompasses a set of procedures designed to eliminate non-simple domains (non-atomic values) and the redundancy (duplication) of data, which in turn prevents data manipulation anomalies and loss of data integrity. The most common forms of normalization applied to databases are called the normal forms.
== RDBMS ==
Connolly and Begg define database management system (DBMS) as a "software system that enables users to define, create, maintain and control access to the database". RDBMS is an extension of that initialism that is sometimes used when the underlying database is relational.
An alternative definition for a relational database management system is a database management system (DBMS) based on the relational model. Most databases in widespread use today are based on this model.
RDBMSs have been a common option for the storage of information in databases used for financial records, manufacturing and logistical information, personnel data, and other applications since the 1980s. Relational databases have often replaced legacy hierarchical databases and network databases, because RDBMSs were easier to implement and administer. Nonetheless, relationally stored data faced continued, unsuccessful challenges from object database management systems in the 1980s and 1990s (which were introduced in an attempt to address the so-called object–relational impedance mismatch between relational databases and object-oriented application programs), as well as from XML database management systems in the 1990s. However, due to the spread of technologies such as horizontal scaling of computer clusters, NoSQL databases have recently become popular as an alternative to RDBMS databases.
== Distributed relational databases ==
Distributed Relational Database Architecture (DRDA) was designed by a workgroup within IBM in the period 1988 to 1994. DRDA enables network connected relational databases to cooperate to fulfill SQL requests.
The messages, protocols, and structural components of DRDA are defined by the Distributed Data Management Architecture.
== List of database engines ==
According to DB-Engines, in December 2024 the ten most popular database management systems were:
Oracle Database
MySQL
Microsoft SQL Server
PostgreSQL
Snowflake
IBM Db2
SQLite
Microsoft Access
Databricks
MariaDB
According to research company Gartner, in 2011, the five leading proprietary software relational database vendors by revenue were Oracle (48.8%), IBM (20.2%), Microsoft (17.0%), SAP including Sybase (4.6%), and Teradata (3.7%).
== See also ==
Comparison of relational database management systems
Database schema
Datalog
Data warehouse
List of relational database management systems
Object database (OODBMS)
Online analytical processing (OLAP) and ROLAP (Relational Online Analytical Processing)
Relational transducer
Snowflake schema
SQL
Star schema
== References ==
== Sources ==
Date, C. J. (1984). A Guide to DB2 (student ed.). Addison-Wesley. ISBN 0201113171. OCLC 256383726. OL 2838595M.
In mathematics, a finitary relation over a sequence of sets X1, ..., Xn is a subset of the Cartesian product X1 × ... × Xn; that is, it is a set of n-tuples (x1, ..., xn), each being a sequence of elements xi in the corresponding Xi. Typically, the relation describes a possible connection between the elements of an n-tuple. For example, the relation "x is divisible by y and z" consists of the set of 3-tuples whose elements, when substituted for x, y and z respectively, make the sentence true.
The non-negative integer n that gives the number of "places" in the relation is called the arity, adicity or degree of the relation. A relation with n "places" is variously called an n-ary relation, an n-adic relation or a relation of degree n. Relations with a finite number of places are called finitary relations (or simply relations if the context is clear). It is also possible to generalize the concept to infinitary relations with infinite sequences.
== Definitions ==
When two objects, qualities, classes, or attributes, viewed together by the mind, are seen under some connexion, that connexion is called a relation.
Definition
An n-ary relation R on sets X1, ..., Xn is given by a subset of the Cartesian product X1 × ... × Xn.
Since the definition is predicated on the underlying sets X1, ..., Xn, R may be more formally defined as the (n + 1)-tuple (X1, ..., Xn, G), where G, called the graph of R, is a subset of the Cartesian product X1 × ... × Xn.
As is often done in mathematics, the same symbol is used to refer to the mathematical object and its underlying set, so the statement (x1, ..., xn) ∈ R is often used to mean (x1, ..., xn) ∈ G. The statement is read "x1, ..., xn are R-related" and is denoted using prefix notation by Rx1⋯xn and using postfix notation by x1⋯xnR. In the case where R is a binary relation, those statements are also denoted using infix notation by x1Rx2.
The following considerations apply:
The set Xi is called the ith domain of R. In the case where R is a binary relation, X1 is also called simply the domain or set of departure of R, and X2 is also called the codomain or set of destination of R.
When the elements of Xi are relations, Xi is called a nonsimple domain of R.
The set of all xi ∈ Xi such that Rx1⋯xi−1xixi+1⋯xn for at least one choice of the remaining elements is called the ith domain of definition or active domain of R. In the case where R is a binary relation, its first domain of definition is also called simply the domain of definition or active domain of R, and its second domain of definition is also called the codomain of definition or active codomain of R.
When the ith domain of definition of R is equal to Xi, R is said to be total on its ith domain (or on Xi, when this is not ambiguous). In the case where R is a binary relation, when R is total on X1, it is also said to be left-total or serial, and when R is total on X2, it is also said to be right-total or surjective.
When ∀x ∀y ∈ Xi. ∀z ∈ Xj. xRijz ∧ yRijz ⇒ x = y, where i ∈ I, j ∈ J, Rij = πij R, and {I, J} is a partition of {1, ..., n}, R is said to be unique on {Xi}i∈I, and {Xi}i∈J is called a primary key of R. In the case where R is a binary relation, when R is unique on {X1}, it is also said to be left-unique or injective, and when R is unique on {X2}, it is also said to be univalent or right-unique.
When all Xi are the same set X, it is simpler to refer to R as an n-ary relation over X, called a homogeneous relation. Without this restriction, R is called a heterogeneous relation.
When any of Xi is empty, the defining Cartesian product is empty, and the only relation over such a sequence of domains is the empty relation R = ∅.
Let a Boolean domain B be a two-element set, say, B = {0, 1}, whose elements can be interpreted as logical values, typically 0 = false and 1 = true. The characteristic function of R, denoted by χR, is the Boolean-valued function χR: X1 × ... × Xn → B, defined by χR((x1, ..., xn)) = 1 if Rx1⋯xn and χR((x1, ..., xn)) = 0 otherwise.
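For instance, here is a short Python sketch of a characteristic function for an invented divisibility relation over small sets:

# R ⊆ X1 × X2, with (x, y) ∈ R meaning "x is divisible by y".
X1, X2 = range(1, 7), range(1, 7)
R = {(x, y) for x in X1 for y in X2 if x % y == 0}

def chi_R(x, y):
    # Characteristic function: 1 if (x, y) ∈ R, 0 otherwise.
    return 1 if (x, y) in R else 0

print(chi_R(6, 3), chi_R(6, 4))  # prints: 1 0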
In applied mathematics, computer science and statistics, it is common to refer to a Boolean-valued function as an n-ary predicate. From the more abstract viewpoint of formal logic and model theory, the relation R constitutes a logical model or a relational structure, that serves as one of many possible interpretations of some n-ary predicate symbol.
Because relations arise in many scientific disciplines, as well as in many branches of mathematics and logic, there is considerable variation in terminology. Aside from the set-theoretic extension of a relational concept or term, the term "relation" can also be used to refer to the corresponding logical entity, either the logical comprehension, which is the totality of intensions or abstract properties shared by all elements in the relation, or else the symbols denoting these elements and intensions. Further, some writers of the latter persuasion introduce terms with more concrete connotations (such as "relational structure" for the set-theoretic extension of a given relational concept).
== Specific values of n ==
=== Nullary ===
There are only two nullary (0-ary) relations: the empty nullary relation, which never holds, and the universal nullary relation, which always holds. This is because there is only one 0-tuple, the empty tuple (), and there are exactly two subsets of the (singleton) set of all 0-tuples. They are sometimes useful for constructing the base case of an induction argument.
=== Unary ===
Unary (1-ary) relations can be viewed as a collection of members (such as the collection of Nobel laureates) having some property (such as that of having been awarded the Nobel Prize).
Every nullary function is a unary relation.
=== Binary ===
Binary (2-ary) relations are the most commonly studied form of finitary relations. Homogeneous binary relations (where X1 = X2) include
Equality and inequality, denoted by signs such as = and < in statements such as "5 < 12", or
Divisibility, denoted by the sign | in statements such as "13 | 143".
Heterogeneous binary relations include
Set membership, denoted by the sign ∈ in statements such as "1 ∈ N".
=== Ternary ===
Ternary (3-ary) relations include, for example, the binary functions, which relate two inputs and the output. All three of the domains of a homogeneous ternary relation are the same set.
== Example ==
Consider the ternary relation R "x thinks that y likes z" over the set of people P = { Alice, Bob, Charles, Denise }, defined by:
R = { (Alice, Bob, Denise), (Charles, Alice, Bob), (Charles, Charles, Alice), (Denise, Denise, Denise) }.
R can be represented equivalently by the following table:

x | y | z
Alice | Bob | Denise
Charles | Alice | Bob
Charles | Charles | Alice
Denise | Denise | Denise
Here, each row represents a triple of R, that is, it makes a statement of the form "x thinks that y likes z". For instance, the first row states that "Alice thinks that Bob likes Denise". All rows are distinct. The ordering of rows is insignificant but the ordering of columns is significant.
The above table is also a simple example of a relational database, a field with theory rooted in relational algebra and applications in data management. Computer scientists, logicians, and mathematicians, however, tend to have different conceptions of what a general relation is, and what it consists of. For example, databases are designed to deal with empirical data, which is by definition finite, whereas in mathematics, relations with infinite arity (i.e., infinitary relations) are also considered.
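The relation is small enough to transcribe directly into Python as a set of 3-tuples and query; nothing beyond the tuples listed above is assumed.

R = {("Alice", "Bob", "Denise"),
     ("Charles", "Alice", "Bob"),
     ("Charles", "Charles", "Alice"),
     ("Denise", "Denise", "Denise")}

# Whom does Charles think likes someone?
print({y for (x, y, z) in R if x == "Charles"})  # {'Alice', 'Charles'}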
== History ==
The logician Augustus De Morgan, in work published around 1860, was the first to articulate the notion of relation in anything like its present sense. He also stated the first formal results in the theory of relations (on De Morgan and relations, see Merrill 1990).
Charles Peirce, Gottlob Frege, Georg Cantor, Richard Dedekind and others advanced the theory of relations. Many of their ideas, especially on relations called orders, were summarized in The Principles of Mathematics (1903) where Bertrand Russell made free use of these results.
In 1970, Edgar Codd proposed a relational model for databases, thus anticipating the development of database management systems.
== See also ==
== References ==
== Bibliography ==
In set theory, the complement of a set A, often denoted by A∁ (or A′), is the set of elements not in A.
When all elements in the universe, i.e. all elements under consideration, are considered to be members of a given set U, the absolute complement of A is the set of elements in U that are not in A.
The relative complement of A with respect to a set B, also termed the set difference of B and A, written B ∖ A, is the set of elements in B that are not in A.
== Absolute complement ==
=== Definition ===
If A is a set, then the absolute complement of A (or simply the complement of A) is the set of elements not in A (within a larger set that is implicitly defined). In other words, let U be a set that contains all the elements under study; if there is no need to mention U, either because it has been previously specified, or it is obvious and unique, then the absolute complement of A is the relative complement of A in U:
A∁ = U ∖ A = {x ∈ U : x ∉ A}.
The absolute complement of A is usually denoted by A∁. Other notations include A̅, A′, ∁U A, and ∁A.
=== Examples ===
Assume that the universe is the set of integers. If A is the set of odd numbers, then the complement of A is the set of even numbers. If B is the set of multiples of 3, then the complement of B is the set of numbers congruent to 1 or 2 modulo 3 (or, in simpler terms, the integers that are not multiples of 3).
Assume that the universe is the standard 52-card deck. If the set A is the suit of spades, then the complement of A is the union of the suits of clubs, diamonds, and hearts. If the set B is the union of the suits of clubs and diamonds, then the complement of B is the union of the suits of hearts and spades.
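Both integer examples can be checked mechanically with Python's built-in set type; as a sketch, the universe is truncated to the finite range 0..19 so that the absolute complement can be computed as a relative complement in U.

U = set(range(20))                 # a finite stand-in for the integers
A = {n for n in U if n % 2 == 1}   # the odd numbers
B = {n for n in U if n % 3 == 0}   # the multiples of 3

print(U - A)  # complement of A: the even numbers in U
print(U - B)  # complement of B: integers in U not divisible by 3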
When the universe is the universe of sets described in formalized set theory, the absolute complement of a set is generally not itself a set, but rather a proper class. For more info, see universal set.
=== Properties ===
Let A and B be two sets in a universe U. The following identities capture important properties of absolute complements:
De Morgan's laws:
(A ∪ B)∁ = A∁ ∩ B∁
(A ∩ B)∁ = A∁ ∪ B∁
Complement laws:
A ∪ A∁ = U
A ∩ A∁ = ∅
∅∁ = U
U∁ = ∅
If A ⊆ B, then B∁ ⊆ A∁
(this follows from the equivalence of a conditional with its contrapositive).
Involution or double complement law:
(A∁)∁ = A
Relationships between relative and absolute complements:
A ∖ B = A ∩ B∁
(A ∖ B)∁ = A∁ ∪ B = A∁ ∪ (B ∩ A)
Relationship with a set difference:
A∁ ∖ B∁ = B ∖ A
The first two complement laws above show that if A is a non-empty, proper subset of U, then {A, A∁} is a partition of U.
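These identities lend themselves to a quick mechanical spot-check; the following sketch verifies several of them for one arbitrary choice of U, A and B (a test on one example, not a proof).

U = set(range(10))
A, B = {1, 2, 3}, {3, 4, 5}
def c(s):
    return U - s  # absolute complement within U

assert c(A | B) == c(A) & c(B)              # De Morgan
assert c(A & B) == c(A) | c(B)              # De Morgan
assert A | c(A) == U and A & c(A) == set()  # complement laws
assert c(c(A)) == A                         # involution
assert A - B == A & c(B)                    # relative via absolute complement
print("all identities hold for this example")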
== Relative complement ==
=== Definition ===
If A and B are sets, then the relative complement of A in B, also termed the set difference of B and A, is the set of elements in B but not in A.
The relative complement of A in B is denoted B ∖ A according to the ISO 31-11 standard. It is sometimes written B − A, but this notation is ambiguous, as in some contexts (for example, Minkowski set operations in functional analysis) it can be interpreted as the set of all elements b − a, where b is taken from B and a from A.
Formally:

B ∖ A = {x ∈ B : x ∉ A}.
=== Examples ===
{1, 2, 3} ∖ {2, 3, 4} = {1}
{2, 3, 4} ∖ {1, 2, 3} = {4}
If ℝ is the set of real numbers and ℚ is the set of rational numbers, then ℝ ∖ ℚ is the set of irrational numbers.
=== Properties ===
Let A, B, and C be three sets in a universe U. The following identities capture notable properties of relative complements:
C ∖ (A ∩ B) = (C ∖ A) ∪ (C ∖ B)
C ∖ (A ∪ B) = (C ∖ A) ∩ (C ∖ B)
C ∖ (B ∖ A) = (C ∩ A) ∪ (C ∖ B), with the important special case C ∖ (C ∖ A) = C ∩ A demonstrating that intersection can be expressed using only the relative complement operation.
(B ∖ A) ∩ C = (B ∩ C) ∖ A = B ∩ (C ∖ A)
(B ∖ A) ∪ C = (B ∪ C) ∖ (A ∖ C)
A ∖ A = ∅
∅ ∖ A = ∅
A ∖ ∅ = A
A ∖ U = ∅
If A ⊂ B, then C ∖ A ⊃ C ∖ B.
A ⊇ B ∖ C is equivalent to C ⊇ B ∖ A.
== Complementary relation ==
A binary relation R is defined as a subset of a product of sets X × Y. The complementary relation R̄ is the set complement of R in X × Y. The complement of relation R can be written R̄ = (X × Y) ∖ R. Here, R is often viewed as a logical matrix with rows representing the elements of X, and columns elements of Y. The truth of aRb corresponds to 1 in row a, column b. Producing the complementary relation to R then corresponds to switching all 1s to 0s, and 0s to 1s for the logical matrix of the complement.
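A small Python sketch of this logical-matrix view, with invented sets X and Y: the relation becomes a 0/1 matrix, and complementation flips every entry.

X, Y = ["a", "b"], [1, 2, 3]
R = {("a", 1), ("b", 3)}

matrix     = [[1 if (x, y) in R else 0 for y in Y] for x in X]
complement = [[1 - bit for bit in row] for row in matrix]  # switch 1s and 0s

print(matrix)      # [[1, 0, 0], [0, 0, 1]]
print(complement)  # [[0, 1, 1], [1, 1, 0]]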
Together with composition of relations and converse relations, complementary relations and the algebra of sets are the elementary operations of the calculus of relations.
== LaTeX notation ==
In the LaTeX typesetting language, the command \setminus is usually used for rendering a set difference symbol, which is similar to a backslash symbol. When rendered, the \setminus command looks identical to \backslash, except that it has a little more space in front and behind the slash, akin to the LaTeX sequence \mathbin{\backslash}. A variant \smallsetminus is available in the amssymb package, but this symbol is not included separately in Unicode. The symbol ∁ (as opposed to C) is produced by \complement. (It corresponds to the Unicode symbol U+2201 ∁ COMPLEMENT.)
== See also ==
Algebra of sets – Identities and relationships involving sets
Intersection (set theory) – Set of elements common to all of some sets
List of set identities and relations – Equalities for combinations of sets
Naive set theory – Informal set theories
Symmetric difference – Elements in exactly one of two sets
Union (set theory) – Set of elements in any of some sets
== Notes ==
== References ==
Bourbaki, N. (1970). Théorie des ensembles (in French). Paris: Hermann. ISBN 978-3-540-34034-8.
Devlin, Keith J. (1979). Fundamentals of contemporary set theory. Universitext. Springer. ISBN 0-387-90441-7. Zbl 0407.04003.
Halmos, Paul R. (1960). Naive set theory. The University Series in Undergraduate Mathematics. van Nostrand Company. ISBN 9780442030643. Zbl 0087.04403.
== External links ==
Weisstein, Eric W. "Complement". MathWorld.
Weisstein, Eric W. "Complement Set". MathWorld. | Wikipedia/Difference_(set_theory) |
In database theory, relational algebra is a theory that uses algebraic structures for modeling data and defining queries on it with well founded semantics. The theory was introduced by Edgar F. Codd.
The main application of relational algebra is to provide a theoretical foundation for relational databases, particularly query languages for such databases, chief among which is SQL. Relational databases store tabular data represented as relations. Queries over relational databases often likewise return tabular data represented as relations.
The main purpose of relational algebra is to define operators that transform one or more input relations to an output relation. Given that these operators accept relations as input and produce relations as output, they can be combined and used to express complex queries that transform multiple input relations (whose data are stored in the database) into a single output relation (the query results).
Unary operators accept a single relation as input. Examples include operators to filter certain attributes (columns) or tuples (rows) from an input relation. Binary operators accept two relations as input and combine them into a single output relation. For example, taking all tuples found in either relation (union), removing tuples from the first relation found in the second relation (difference), extending the tuples of the first relation with tuples in the second relation matching certain conditions, and so forth.
== Introduction ==
Relational algebra received little attention outside of pure mathematics until the publication of E.F. Codd's relational model of data in 1970. Codd proposed such an algebra as a basis for database query languages.
Relational algebra operates on homogeneous sets of tuples S = {(sj1, sj2, ..., sjn) | j ∈ 1...m} where we commonly interpret m to be the number of rows of tuples in a table and n to be the number of columns. All entries in each column have the same type.
A relation also has a unique tuple called the header which gives each column a unique name or attribute inside the relation. Attributes are used in projections and selections.
== Set operators ==
The relational algebra uses set union, set difference, and Cartesian product from set theory, and adds additional constraints to these operators to create new ones.
For set union and set difference, the two relations involved must be union-compatible—that is, the two relations must have the same set of attributes. Because set intersection is defined in terms of set union and set difference, the two relations involved in set intersection must also be union-compatible.
For the Cartesian product to be defined, the two relations involved must have disjoint headers—that is, they must not have a common attribute name.
In addition, the Cartesian product is defined differently from the one in set theory in the sense that tuples are considered to be "shallow" for the purposes of the operation. That is, the Cartesian product of a set of n-tuples with a set of m-tuples yields a set of "flattened" (n + m)-tuples (whereas basic set theory would have prescribed a set of 2-tuples, each containing an n-tuple and an m-tuple). More formally, R × S is defined as follows:
R × S := {(r1, r2, ..., rn, s1, s2, ..., sm) | (r1, r2, ..., rn) ∈ R, (s1, s2, ..., sm) ∈ S}
The cardinality of the Cartesian product is the product of the cardinalities of its factors, that is, |R × S| = |R| × |S|.
== Projection ==
A projection (Π) is a unary operation written as Πa1, ..., an(R) where a1, ..., an is a set of attribute names. The result of such projection is defined as the set that is obtained when all tuples in R are restricted to the set {a1, ..., an}.
Note: when implemented in the SQL standard, the "default projection" returns a multiset instead of a set, and the Π projection to eliminate duplicate data is obtained by the addition of the DISTINCT keyword.
== Selection ==
A generalized selection (σ) is a unary operation written as σφ(R) where φ is a propositional formula that consists of atoms as allowed in the normal selection and the logical operators ∧ (and), ∨ (or) and ¬ (negation). This selection selects all those tuples in R for which φ holds.
To obtain a listing of all friends or business associates in an address book, the selection might be written as σisFriend = true ∨ isBusinessContact = true(addressBook). The result would be a relation containing every attribute of every unique record where isFriend is true or where isBusinessContact is true.
== Rename ==
A rename (ρ) is a unary operation written as ρa/b(R) where the result is identical to R except that the b attribute in all tuples is renamed to an a attribute. This is commonly used to rename the attribute of a relation for the purpose of a join.
To rename the "isFriend" attribute to "isBusinessContact" in a relation, ρisBusinessContact/isFriend(addressBook) might be used.
There is also the ρx(A1, ..., An)(R) notation, where R is renamed to x and the attributes a1, ..., an are renamed to A1, ..., An.
== Joins and join-like operators ==
=== Natural join ===
Natural join (⨝) is a binary operator that is written as (R ⨝ S) where R and S are relations. The result of the natural join is the set of all combinations of tuples in R and S that are equal on their common attribute names. For an example consider the tables Employee and Dept and their natural join:
Note that neither the employee named Mary nor the Production department appear in the result. Mary does not appear in the result because Mary's Department, "Human Resources", is not listed in the Dept relation and the Production department does not appear in the result because there are no tuples in the Employee relation that have "Production" as their DeptName attribute.
This can also be used to define composition of relations. For example, the composition of Employee and Dept is their join as shown above, projected on all but the common attribute DeptName. In category theory, the join is precisely the fiber product.
The natural join is arguably one of the most important operators since it is the relational counterpart of the logical AND operator. Note that if the same variable appears in each of two predicates that are connected by AND, then that variable stands for the same thing and both appearances must always be substituted by the same value (this is a consequence of the idempotence of the logical AND). In particular, natural join allows the combination of relations that are associated by a foreign key. For example, in the above example a foreign key probably holds from Employee.DeptName to Dept.DeptName and then the natural join of Employee and Dept combines all employees with their departments. This works because the foreign key holds between attributes with the same name. If this is not the case such as in the foreign key from Dept.Manager to Employee.Name then these columns must be renamed before taking the natural join. Such a join is sometimes also referred to as an equijoin.
More formally the semantics of the natural join are defined as follows:

R ⋈ S = { r ∪ s : r ∈ R ∧ s ∈ S ∧ Fun(r ∪ s) }

where Fun(t) is a predicate that is true for a relation t (in the mathematical sense) iff t is a function (that is, t does not map any attribute to multiple values). It is usually required that R and S must have at least one common attribute, but if this constraint is omitted, and R and S have no common attributes, then the natural join becomes exactly the Cartesian product.
The natural join can be simulated with Codd's primitives as follows. Assume that c1,...,cm are the attribute names common to R and S, r1,...,rn are the attribute names unique to R and s1,...,sk are the attribute names unique to S. Furthermore, assume that the attribute names x1,...,xm are neither in R nor in S. In a first step the common attribute names in S can be renamed: T := ρx1/c1(ρx2/c2(... ρxm/cm(S) ...)). Then we take the Cartesian product and select the tuples that are to be joined: P := σc1=x1(σc2=x2(... σcm=xm(R × T) ...)). Finally we take a projection to get rid of the renamed attributes: Πr1,...,rn,c1,...,cm,s1,...,sk(P).
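Representing tuples as Python dicts keyed by attribute name gives a compact sketch of the natural join's semantics, following the Fun(r ∪ s) definition directly rather than the rename-based simulation; the Employee and Dept rows are invented to echo the example above.

def natural_join(R, S):
    # All merged tuples r ∪ s where r and s agree on their common attributes.
    result = []
    for r in R:
        for s in S:
            common = r.keys() & s.keys()
            if all(r[a] == s[a] for a in common):  # r ∪ s is a function
                result.append({**r, **s})
    return result

Employee = [{"Name": "Harry", "DeptName": "Finance"},
            {"Name": "Sally", "DeptName": "Sales"}]
Dept = [{"DeptName": "Sales", "Manager": "Bob"}]
print(natural_join(Employee, Dept))  # only Sally's tuple finds a match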
=== θ-join and equijoin ===
Consider tables Car and Boat which list models of cars and boats and their respective prices. Suppose a customer wants to buy a car and a boat, but she does not want to spend more money for the boat than for the car. The θ-join (⋈θ) on the predicate CarPrice ≥ BoatPrice produces the flattened pairs of rows which satisfy the predicate. When using a condition where the attributes are equal, for example Price, then the condition may be specified as Price=Price
or alternatively (Price) itself.
In order to combine tuples from two relations where the combination condition is not simply the equality of shared attributes it is convenient to have a more general form of join operator, which is the θ-join (or theta-join). The θ-join is a binary operator that is written as
R ⋈a θ b S or R ⋈a θ v S, where a and b are attribute names, θ is a binary relational operator in the set {<, ≤, =, ≠, >, ≥}, v is a value constant, and R and S are relations. The result of this operation consists of all combinations of tuples in R and S that satisfy θ. The result of the θ-join is defined only if the headers of S and R are disjoint, that is, do not contain a common attribute.
The simulation of this operation in the fundamental operations is therefore as follows:
R ⋈θ S = σθ(R × S)
In case the operator θ is the equality operator (=) then this join is also called an equijoin.
Note, however, that a computer language that supports the natural join and selection operators does not need θ-join as well, as this can be achieved by selection from the result of a natural join (which degenerates to Cartesian product when there are no shared attributes).
In SQL implementations, joining on a predicate is usually called an inner join, and the on keyword allows one to specify the predicate used to filter the rows. It is important to note: forming the flattened Cartesian product then filtering the rows is conceptually correct, but an implementation would use more sophisticated data structures to speed up the join query.
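Under the same dict representation used for the natural join sketch, a θ-join is just the flattened Cartesian product filtered by an arbitrary predicate, mirroring R ⋈θ S = σθ(R × S); the Car and Boat rows are invented.

def theta_join(R, S, theta):
    # σθ(R × S): headers are assumed disjoint, so merging never collides.
    return [{**r, **s} for r in R for s in S if theta(r, s)]

Car = [{"CarModel": "CarA", "CarPrice": 20000},
       {"CarModel": "CarB", "CarPrice": 30000}]
Boat = [{"BoatModel": "Boat1", "BoatPrice": 10000},
        {"BoatModel": "Boat2", "BoatPrice": 40000}]

# CarPrice ≥ BoatPrice: spend no more on the boat than on the car.
print(theta_join(Car, Boat, lambda r, s: r["CarPrice"] >= s["BoatPrice"]))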
=== Semijoin ===
The left semijoin (⋉ and ⋊) is a joining similar to the natural join and written as R ⋉ S where R and S are relations. The result is the set of all tuples in R for which there is a tuple in S that is equal on their common attribute names. The difference from a natural join is that other columns of S do not appear. For example, consider the tables Employee and Dept and their semijoin:
More formally the semantics of the semijoin can be defined as follows:

R ⋉ S = { t : t ∈ R ∧ ∃s ∈ S(Fun(t ∪ s)) }

where Fun(r) is as in the definition of natural join.
The semijoin can be simulated using the natural join as follows. If a1, ..., an are the attribute names of R, then R ⋉ S = Πa1,...,an(R ⋈ S). Since we can simulate the natural join with the basic operators it follows that this also holds for the semijoin.
In Codd's 1970 paper, semijoin is called restriction.
=== Antijoin ===
The antijoin (▷), written as R ▷ S where R and S are relations, is similar to the semijoin, but the result of an antijoin is only those tuples in R for which there is no tuple in S that is equal on their common attribute names.
For an example consider the tables Employee and Dept and their
antijoin:
The antijoin is formally defined as follows:
R ▷ S = { t : t ∈ R ∧ ¬∃s ∈ S(Fun (t ∪ s))}
or
R ▷ S = { t : t ∈ R, there is no tuple s of S that satisfies Fun (t ∪ s)}
where Fun (t ∪ s) is as in the definition of natural join.
The antijoin can also be defined as the complement of the semijoin, as follows:

R ▷ S = R − (R ⋉ S)

Given this, the antijoin is sometimes called the anti-semijoin, and the antijoin operator is sometimes written as semijoin symbol with a bar above it, instead of ▷.
In the case where the relations have the same attributes (union-compatible), antijoin is the same as minus.
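Semijoin and antijoin can be sketched with the same agreement test used for the natural join: they differ only in whether a matching tuple must, or must not, exist in S. The rows are the invented Employee and Dept examples again.

def matches(r, s):
    # True when r and s agree on their common attribute names.
    return all(r[a] == s[a] for a in r.keys() & s.keys())

def semijoin(R, S):
    return [r for r in R if any(matches(r, s) for s in S)]

def antijoin(R, S):
    return [r for r in R if not any(matches(r, s) for s in S)]

Employee = [{"Name": "Harry", "DeptName": "Finance"},
            {"Name": "Sally", "DeptName": "Sales"}]
Dept = [{"DeptName": "Sales", "Manager": "Bob"}]
print(semijoin(Employee, Dept))  # Sally: a matching Dept tuple exists
print(antijoin(Employee, Dept))  # Harry: no matching Dept tuple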
=== Division ===
The division (÷) is a binary operation that is written as R ÷ S. Division is not implemented directly in SQL. The result consists of the restrictions of tuples in R to the attribute names unique to R, i.e., in the header of R but not in the header of S, for which it holds that all their combinations with tuples in S are present in R.
==== Example ====
If Completed lists which students have finished which tasks, and DBProject contains all the tasks of the Database project, then the result of the division Completed ÷ DBProject contains exactly the students who have completed both of the tasks in the Database project.
More formally the semantics of the division is defined as follows:

R ÷ S = { t[a1,...,an] : t ∈ R ∧ ∀s ∈ S ( (t[a1,...,an] ∪ s) ∈ R ) }

where {a1,...,an} is the set of attribute names unique to R and t[a1,...,an] is the restriction of t to this set. It is usually required that the attribute names in the header of S are a subset of those of R because otherwise the result of the operation will always be empty.
The simulation of the division with the basic operations is as follows. We assume that a1,...,an are the attribute names unique to R and b1,...,bm are the attribute names of S. In the first step we project R on its unique attribute names and construct all combinations with tuples in S:
T := πa1,...,an(R) × S
In the prior example, T would represent a table such that every Student (because Student is the unique key / attribute of the Completed table) is combined with every given Task. So Eugene, for instance, would have two rows, Eugene → Database1 and Eugene → Database2 in T.
EG: First, let's pretend that "Completed" has a third attribute called "grade". It's unwanted baggage here, so we must always project it away. In fact in this step we can drop "Task" from R as well; the multiplication puts it back on.
T := πStudent(R) × S // This gives us every possible desired combination, including those that don't actually exist in R, and excluding others (e.g. Fred | compiler1, which is not a desired combination)
In the next step we subtract R from the relation T:
U := T − R
In U we have the possible combinations that "could have" been in R, but weren't.
EG: Again with projections — T and R need to have identical attribute names/headers.
U := T − πStudent,Task(R) // This gives us a "what's missing" list.
So if we now take the projection on the attribute names unique to R
then we have the restrictions of the tuples in R for which not
all combinations with tuples in S were present in R:
V := πa1,...,an(U)
EG: Project U down to just the attribute(s) in question (Student)
V := πStudent(U)
So what remains to be done is take the projection of R on its
unique attribute names and subtract those in V:
W := πa1,...,an(R) − V
EG: W := πStudent(R) − V.
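The T, U, V, W steps translate almost line for line into Python; the Completed rows below are invented to match the walkthrough (Eugene has finished only Database1, while Fred and Sarah have finished both database tasks).

Completed = [{"Student": "Fred",   "Task": "Database1"},
             {"Student": "Fred",   "Task": "Database2"},
             {"Student": "Fred",   "Task": "Compiler1"},
             {"Student": "Eugene", "Task": "Database1"},
             {"Student": "Sarah",  "Task": "Database1"},
             {"Student": "Sarah",  "Task": "Database2"}]
DBProject = [{"Task": "Database1"}, {"Task": "Database2"}]

students = {r["Student"] for r in Completed}                 # πStudent(R)
T = {(st, s["Task"]) for st in students for s in DBProject}  # T := πStudent(R) × S
R = {(r["Student"], r["Task"]) for r in Completed}           # πStudent,Task(R)
U = T - R                                                    # combinations missing from R
V = {st for (st, task) in U}                                 # students missing some task
W = students - V                                             # W := πStudent(R) − V
print(W)  # {'Fred', 'Sarah'}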
== Common extensions ==
In practice the classical relational algebra described above is extended with various operations such as outer joins, aggregate functions and even transitive closure.
=== Outer joins ===
Whereas the result of a join (or inner join) consists of tuples formed by combining matching tuples in the two operands, an outer join contains those tuples and additionally some tuples formed by extending an unmatched tuple in one of the operands by "fill" values for each of the attributes of the other operand. Outer joins are not considered part of the classical relational algebra discussed so far.
The operators defined in this section assume the existence of a null value, ω, which we do not define, to be used for the fill values; in practice this corresponds to the NULL in SQL. In order to make subsequent selection operations on the resulting table meaningful, a semantic meaning needs to be assigned to nulls; in Codd's approach the propositional logic used by the selection is extended to a three-valued logic, although we elide those details in this article.
Three outer join operators are defined: left outer join, right outer join, and full outer join. (The word "outer" is sometimes omitted.)
==== Left outer join ====
The left outer join (⟕) is written as R ⟕ S where R and S are relations. The result of the left outer join is the set of all combinations of tuples in R and S that are equal on their common attribute names, in addition (loosely speaking) to tuples in R that have no matching tuples in S.
For an example consider the tables Employee and Dept and their left outer join:
In the resulting relation, tuples in S which have no common values in common attribute names with tuples in R take a null value, ω.
Since there are no tuples in Dept with a DeptName of Finance or Executive, ωs occur in the resulting relation where tuples in Employee have a DeptName of Finance or Executive.
Let r1, r2, ..., rn be the attributes of the relation R and let {(ω, ..., ω)} be the singleton
relation on the attributes that are unique to the relation S (those that are not attributes of R). Then the left outer join can be described in terms of the natural join (and hence using basic operators) as follows:
(R ⋈ S) ∪ ((R − πr1,r2,...,rn(R ⋈ S)) × {(ω, ..., ω)})
==== Right outer join ====
The right outer join (⟖) behaves almost identically to the left outer join, but the roles of the tables are switched.
The right outer join of relations R and S is written as R ⟖ S. The result of the right outer join is the set of all combinations of tuples in R and S that are equal on their common attribute names, in addition to tuples in S that have no matching tuples in R.
For example, consider the tables Employee and Dept and their right outer join:
In the resulting relation, tuples in R which have no common values in common attribute names with tuples in S take a null value, ω.
Since there are no tuples in Employee with a DeptName of Production, ωs occur in the Name and EmpId attributes of the resulting relation where tuples in Dept had DeptName of Production.
Let s1, s2, ..., sn be the attributes of the relation S and let {(ω, ..., ω)} be the singleton
relation on the attributes that are unique to the relation R (those that are not attributes of S). Then, as with the left outer join, the right outer join can be simulated using the natural join as follows:
(R ⋈ S) ∪ ({(ω, ..., ω)} × (S − πs1,s2,...,sn(R ⋈ S)))
==== Full outer join ====
The outer join (⟗) or full outer join in effect combines the results of the left and right outer joins.
The full outer join is written as R ⟗ S where R and S are relations. The result of the full outer join is the set of all combinations of tuples in R and S that are equal on their common attribute names, in addition to tuples in S that have no matching tuples in R and tuples in R that have no matching tuples in S in their common attribute names.
For an example consider the tables Employee and Dept and their full outer join:
In the resulting relation, tuples in R which have no common values in common attribute names with tuples in S take a null value, ω. Tuples in S which have no common values in common attribute names with tuples in R also take a null value, ω.
The full outer join can be simulated using the left and right outer joins (and hence the natural join and set union) as follows:
R ⟗ S = (R ⟕ S) ∪ (R ⟖ S)
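A sketch of the outer joins over the dict representation, with Python's None standing in for the fill value ω; the full outer join is then literally the union of the left and right results. The Employee and Dept rows are invented.

def left_outer_join(R, S):
    result = []
    for r in R:
        merged = [{**r, **s} for s in S
                  if all(r[a] == s[a] for a in r.keys() & s.keys())]
        if merged:
            result.extend(merged)
        else:  # pad the unmatched tuple with ω for the S-only attributes
            s_only = {a for s in S for a in s} - set(r)
            result.append({**r, **{a: None for a in s_only}})
    return result

def full_outer_join(R, S):
    left, right = left_outer_join(R, S), left_outer_join(S, R)
    return left + [t for t in right if t not in left]  # union, ignoring order

Employee = [{"Name": "Harry", "DeptName": "Finance"},
            {"Name": "Sally", "DeptName": "Sales"}]
Dept = [{"DeptName": "Sales", "Manager": "Bob"},
        {"DeptName": "Production", "Manager": "Charles"}]
print(full_outer_join(Employee, Dept))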
=== Operations for domain computations ===
There is nothing in relational algebra introduced so far that would allow computations on the data domains (other than evaluation of propositional expressions involving equality). For example, it is not possible using only the algebra introduced so far to write an expression that would multiply the numbers from two columns, e.g. a unit price with a quantity to obtain a total price. Practical query languages have such facilities, e.g. the SQL SELECT allows arithmetic operations to define new columns in the result SELECT unit_price * quantity AS total_price FROM t, and a similar facility is provided more explicitly by Tutorial D's EXTEND keyword. In database theory, this is called extended projection.
==== Aggregation ====
Furthermore, computing various functions on a column, like the summing up of its elements, is also not possible using the relational algebra introduced so far. There are five aggregate functions that are included with most relational database systems. These operations are Sum, Count, Average, Maximum and Minimum. In relational algebra the aggregation operation over a schema (A1, A2, ... An) is written as follows:
G1, G2, ..., Gm g f1(A1'), f2(A2'), ..., fk(Ak') (r)
where each Aj', 1 ≤ j ≤ k, is one of the original attributes Ai, 1 ≤ i ≤ n.
The attributes preceding the g are grouping attributes, which function like a "group by" clause in SQL. Then there are an arbitrary number of aggregation functions applied to individual attributes. The operation is applied to an arbitrary relation r. The grouping attributes are optional, and if they are not supplied, the aggregation functions are applied across the entire relation to which the operation is applied.
Let's assume that we have a table named Account with three columns, namely Account_Number, Branch_Name and Balance. We wish to find the maximum balance of each branch. This is accomplished by Branch_NameGMax(Balance)(Account). To find the highest balance of all accounts regardless of branch, we could simply write GMax(Balance)(Account).
Grouping is often written as Branch_NameɣMax(Balance)(Account) instead.
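A grouped aggregation corresponding to Branch_NameGMax(Balance)(Account) can be sketched in a few lines of Python; the Account rows are invented.

from collections import defaultdict

Account = [{"Account_Number": "A-101", "Branch_Name": "Uptown",   "Balance": 500},
           {"Account_Number": "A-102", "Branch_Name": "Uptown",   "Balance": 700},
           {"Account_Number": "A-215", "Branch_Name": "Downtown", "Balance": 900}]

groups = defaultdict(list)
for row in Account:                        # group by the Branch_Name attribute
    groups[row["Branch_Name"]].append(row["Balance"])

print({branch: max(b) for branch, b in groups.items()})  # {'Uptown': 700, 'Downtown': 900}
print(max(r["Balance"] for r in Account))                # GMax(Balance): no grouping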
=== Transitive closure ===
Although relational algebra seems powerful enough for most practical purposes, there are some simple and natural operators on relations that cannot be expressed by relational algebra. One of them is the transitive closure of a binary relation. Given a domain D, let binary relation R be a subset of D×D. The transitive closure R+ of R is the smallest subset of D×D that contains R and satisfies the following condition:
∀x ∀y ∀z ((x, y) ∈ R+ ∧ (y, z) ∈ R+ ⇒ (x, z) ∈ R+)
It can be proved that there is no relational algebra expression E(R) taking R as a variable argument that produces R+.
SQL, however, has officially supported such fixpoint queries since 1999, and it had vendor-specific extensions in this direction well before that.
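Outside the algebra, the transitive closure is straightforward to compute as a fixpoint iteration, which is essentially what such a recursive query does; a short sketch:

def transitive_closure(R):
    # Smallest superset of R satisfying (x,y) ∈ R+ ∧ (y,z) ∈ R+ ⇒ (x,z) ∈ R+.
    closure = set(R)
    while True:
        new = {(x, z) for (x, y) in closure for (y2, z) in closure if y == y2}
        if new <= closure:
            return closure
        closure |= new

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# adds (1, 3), (2, 4) and (1, 4)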
== Use of algebraic properties for query optimization ==
Relational database management systems often include a query optimizer which attempts to determine the most efficient way to execute a given query. Query optimizers enumerate possible query plans, estimate their cost, and pick the plan with the lowest estimated cost. If queries are represented by operators from relational algebra, the query optimizer can enumerate possible query plans by rewriting the initial query using the algebraic properties of these operators.
Queries can be represented as a tree, where
the internal nodes are operators,
leaves are relations,
subtrees are subexpressions.
The primary goal of the query optimizer is to transform expression trees into equivalent expression trees, where the average size of the relations yielded by subexpressions in the tree is smaller than it was before the optimization. The secondary goal is to try to form common subexpressions within a single query, or if there is more than one query being evaluated at the same time, in all of those queries. The rationale behind the second goal is that it is enough to compute common subexpressions once, and the results can be used in all queries that contain that subexpression.
Here are a set of rules that can be used in such transformations.
=== Selection ===
Rules about selection operators play the most important role in query optimization. Selection is an operator that very effectively decreases the number of rows in its operand, so if the selections in an expression tree are moved towards the leaves, the internal relations (yielded by subexpressions) will likely shrink.
==== Basic selection properties ====
Selection is idempotent (multiple applications of the same selection have no additional effect beyond the first one), and commutative (the order selections are applied in has no effect on the eventual result).
{\displaystyle \sigma _{A}(R)=\sigma _{A}\sigma _{A}(R)\,\!}
{\displaystyle \sigma _{A}\sigma _{B}(R)=\sigma _{B}\sigma _{A}(R)\,\!}
==== Breaking up selections with complex conditions ====
A selection whose condition is a conjunction of simpler conditions is equivalent to a sequence of selections with those same individual conditions, and a selection whose condition is a disjunction is equivalent to a union of selections. These identities can be used to merge selections so that fewer selections need to be evaluated, or to split them so that the component selections may be moved or optimized separately; a small concrete check follows the identities below.
{\displaystyle \sigma _{A\land B}(R)=\sigma _{A}(\sigma _{B}(R))=\sigma _{B}(\sigma _{A}(R))}
{\displaystyle \sigma _{A\lor B}(R)=\sigma _{A}(R)\cup \sigma _{B}(R)}
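These identities are easy to check on concrete data; an illustrative check in Python, with conditions as predicate functions and relations as sets of tuples (all values invented):
def select(pred, r):
    return {t for t in r if pred(t)}

r = {(1, "x"), (2, "y"), (3, "x"), (4, "z")}
a = lambda t: t[0] % 2 == 0          # condition A
b = lambda t: t[1] == "x"            # condition B

# Conjunction: a single selection equals a sequence of selections.
assert select(lambda t: a(t) and b(t), r) == select(a, select(b, r))
# Disjunction: a single selection equals a union of selections.
assert select(lambda t: a(t) or b(t), r) == select(a, r) | select(b, r)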
==== Selection and cross product ====
Cross product is the costliest operator to evaluate. If the input relations have N and M rows, the result will contain N × M rows. Therefore, it is important to decrease the size of both operands before applying the cross product operator.
This can be effectively done if the cross product is followed by a selection operator, e.g. {\displaystyle \sigma _{A}(R\times P)}. Considering the definition of join, this is the most likely case. If the cross product is not followed by a selection operator, we can try to push down a selection from higher levels of the expression tree using the other selection rules.
In the above case the condition A is broken up into conditions B, C and D using the split rules about complex selection conditions, so that {\displaystyle A=B\wedge C\wedge D} and B contains attributes only from R, C contains attributes only from P, and D contains the part of A that contains attributes from both R and P. Note that B, C, or D may be empty. Then the following holds:
{\displaystyle \sigma _{A}(R\times P)=\sigma _{B\wedge C\wedge D}(R\times P)=\sigma _{D}(\sigma _{B}(R)\times \sigma _{C}(P))}
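An illustrative check of this rule (the relations, attribute positions, and the conditions B, C, D are all invented for the example): pushing B and C below the cross product gives the same answer while the intermediate product shrinks from 9 rows to 4.
from itertools import product

R = {(1, 10), (2, 20), (3, 30)}            # attributes of R
P = {("a", 1), ("b", 2), ("c", 9)}         # attributes of P

B = lambda r: r[0] > 1                     # uses only R's attributes
C = lambda p: p[1] < 5                     # uses only P's attributes
D = lambda r, p: r[0] == p[1]              # uses attributes of both

def cross(r, p):
    return {x + y for x, y in product(r, p)}

naive  = {t for t in cross(R, P)
          if B(t[:2]) and C(t[2:]) and D(t[:2], t[2:])}
pushed = {t for t in cross({r for r in R if B(r)}, {p for p in P if C(p)})
          if D(t[:2], t[2:])}
assert naive == pushed                     # same result, smaller intermediate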
==== Selection and set operators ====
Selection is distributive over the set difference, intersection, and union operators. The following three rules are used to push selection below set operations in the expression tree. For the set difference and the intersection operators, it is possible to apply the selection operator to just one of the operands following the transformation. This can be beneficial where one of the operands is small, and the overhead of evaluating the selection operator outweighs the benefits of using a smaller relation as an operand.
{\displaystyle \sigma _{A}(R\setminus P)=\sigma _{A}(R)\setminus \sigma _{A}(P)=\sigma _{A}(R)\setminus P}
{\displaystyle \sigma _{A}(R\cup P)=\sigma _{A}(R)\cup \sigma _{A}(P)}
{\displaystyle \sigma _{A}(R\cap P)=\sigma _{A}(R)\cap \sigma _{A}(P)=\sigma _{A}(R)\cap P=R\cap \sigma _{A}(P)}
==== Selection and projection ====
Selection commutes with projection if and only if the fields referenced in the selection condition are a subset of the fields in the projection. Performing selection before projection may be useful if the operand is a cross product or join. In other cases, if the selection condition is relatively expensive to compute, moving selection outside the projection may reduce the number of tuples which must be tested (since projection may produce fewer tuples due to the elimination of duplicates resulting from omitted fields).
{\displaystyle \pi _{a_{1},\ldots ,a_{n}}(\sigma _{A}(R))=\sigma _{A}(\pi _{a_{1},\ldots ,a_{n}}(R)){\text{ where fields in }}A\subseteq \{a_{1},\ldots ,a_{n}\}}
=== Projection ===
==== Basic projection properties ====
Projection is idempotent, so that a series of (valid) projections is equivalent to the outermost projection.
{\displaystyle \pi _{a_{1},\ldots ,a_{n}}(\pi _{b_{1},\ldots ,b_{m}}(R))=\pi _{a_{1},\ldots ,a_{n}}(R){\text{ where }}\{a_{1},\ldots ,a_{n}\}\subseteq \{b_{1},\ldots ,b_{m}\}}
==== Projection and set operators ====
Projection is distributive over set union.
{\displaystyle \pi _{a_{1},\ldots ,a_{n}}(R\cup P)=\pi _{a_{1},\ldots ,a_{n}}(R)\cup \pi _{a_{1},\ldots ,a_{n}}(P).\,}
Projection does not distribute over intersection and set difference. Counterexamples are given by:
{\displaystyle \pi _{A}(\{\langle A=a,B=b\rangle \}\cap \{\langle A=a,B=b'\rangle \})=\emptyset }
{\displaystyle \pi _{A}(\{\langle A=a,B=b\rangle \})\cap \pi _{A}(\{\langle A=a,B=b'\rangle \})=\{\langle A=a\rangle \}}
and
{\displaystyle \pi _{A}(\{\langle A=a,B=b\rangle \}\setminus \{\langle A=a,B=b'\rangle \})=\{\langle A=a\rangle \}}
{\displaystyle \pi _{A}(\{\langle A=a,B=b\rangle \})\setminus \pi _{A}(\{\langle A=a,B=b'\rangle \})=\emptyset \,,}
where b is assumed to be distinct from b'.
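These counterexamples can be replayed with Python sets, where a tuple ("a",) stands for ⟨A=a⟩:
def project_a(r):
    # Keep only the A attribute of each (A, B) tuple.
    return {(a,) for (a, b) in r}

r1 = {("a", "b")}
r2 = {("a", "b'")}

assert project_a(r1 & r2) == set()                  # projection after intersection
assert project_a(r1) & project_a(r2) == {("a",)}    # intersection after projection
assert project_a(r1 - r2) == {("a",)}               # projection after difference
assert project_a(r1) - project_a(r2) == set()       # difference after projection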
=== Rename ===
==== Basic rename properties ====
Successive renames of a variable can be collapsed into a single rename. Rename operations which have no variables in common can be arbitrarily reordered with respect to one another, which can be exploited to make successive renames adjacent so that they can be collapsed.
{\displaystyle \rho _{a/b}(\rho _{b/c}(R))=\rho _{a/c}(R)\,\!}
{\displaystyle \rho _{a/b}(\rho _{c/d}(R))=\rho _{c/d}(\rho _{a/b}(R))\,\!}
==== Rename and set operators ====
Rename is distributive over set difference, union, and intersection.
{\displaystyle \rho _{a/b}(R\setminus P)=\rho _{a/b}(R)\setminus \rho _{a/b}(P)}
{\displaystyle \rho _{a/b}(R\cup P)=\rho _{a/b}(R)\cup \rho _{a/b}(P)}
{\displaystyle \rho _{a/b}(R\cap P)=\rho _{a/b}(R)\cap \rho _{a/b}(P)}
=== Product and union ===
Cartesian product is distributive over union.
{\displaystyle (A\times B)\cup (A\times C)=A\times (B\cup C)}
== Implementations ==
The first query language to be based on Codd's algebra was Alpha, developed by Dr. Codd himself. Subsequently, ISBL was created, and this pioneering work has been acclaimed by many authorities as having shown the way to make Codd's idea into a useful language. Business System 12 was a short-lived industry-strength relational DBMS that followed the ISBL example.
In 1998 Chris Date and Hugh Darwen proposed a language called Tutorial D intended for use in teaching relational database theory, and its query language also draws on ISBL's ideas. Rel is an implementation of Tutorial D. Bmg is an implementation of relational algebra in Ruby which closely follows the principles of Tutorial D and The Third Manifesto.
Even the query language of SQL is loosely based on a relational algebra, though the operands in SQL (tables) are not exactly relations and several useful theorems about the relational algebra do not hold in the SQL counterpart (arguably to the detriment of optimisers and/or users). The SQL table model is a bag (multiset), rather than a set. For example, the expression
{\displaystyle (R\cup S)\setminus T=(R\setminus T)\cup (S\setminus T)}
is a theorem for relational algebra on sets, but not for relational algebra on bags.
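The failure on bags can be illustrated with Python's collections.Counter standing in for SQL's multiset tables (Counter addition behaves like UNION ALL, and Counter subtraction like bag difference):
from collections import Counter

R, S, T = Counter({"t": 1}), Counter({"t": 1}), Counter({"t": 1})

print((R + S) - T)            # Counter({'t': 1}): one copy of t survives
print((R - T) + (S - T))      # Counter(): t is removed from both sides

# On sets, by contrast, the identity holds:
Rs, Ss, Ts = {"t"}, {"t"}, {"t"}
assert (Rs | Ss) - Ts == (Rs - Ts) | (Ss - Ts) == set()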
== See also ==
== Notes ==
== References ==
== Further reading ==
Imieliński, T.; Lipski, W. (1984). "The relational model of data and cylindric algebras". Journal of Computer and System Sciences. 28: 80–102. doi:10.1016/0022-0000(84)90077-1. (For relationship with cylindric algebras).
== External links ==
RAT Relational Algebra Translator Free software to convert relational algebra to SQL
Lecture Videos: Relational Algebra Processing - An introduction to how database systems process relational algebra
Lecture Notes: Relational Algebra – A quick tutorial to adapt SQL queries into relational algebra
Relational – A graphic implementation of the relational algebra
Query Optimization This paper is an introduction into the use of the relational algebra in optimizing queries, and includes numerous citations for more in-depth study.
Relational Algebra System for Oracle and Microsoft SQL Server
Pireal – An experimental educational tool for working with Relational Algebra
DES – An educational tool for working with Relational Algebra and other formal languages
RelaX - Relational Algebra Calculator (open-source software available as an online service without registration)
RA: A Relational Algebra Interpreter
Translating SQL to Relational Algebra
In relational algebra, a rename is a unary operation written as
{\displaystyle \rho _{a/b}(R)}
where:
R is a relation
a and b are attribute names
b is an attribute of R
The result is identical to R except that the b attribute in all tuples is renamed to a. For an example, consider the following invocation of ρ on an Employee relation and the result of that invocation:
Formally, the semantics of the rename operator is defined as follows:
{\displaystyle \rho _{a/b}(R)=\{\ t[a/b]:t\in R\ \},}
where {\displaystyle t[a/b]} is defined as the tuple t, with the b attribute renamed to a, so that:
{\displaystyle t[a/b]=\{\ (c,v)\ |\ (c,v)\in t,\ c\neq b\ \}\cup \{\ (a,\ t(b))\ \}.}
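With tuples represented as Python dictionaries from attribute names to values, this definition translates almost literally; the Employee rows below are invented for illustration (a list is used because dictionaries are not hashable as set members):
def rename(a, b, relation):
    # rho_{a/b}(R): rename attribute b to a in every tuple of R.
    def rename_tuple(t):
        # { (c, v) | (c, v) in t, c != b }  union  { (a, t(b)) }
        return {**{c: v for c, v in t.items() if c != b}, a: t[b]}
    return [rename_tuple(t) for t in relation]

employee = [{"Name": "Harry", "EmployeeId": 3415},
            {"Name": "Sally", "EmployeeId": 2241}]
print(rename("Id", "EmployeeId", employee))
# [{'Name': 'Harry', 'Id': 3415}, {'Name': 'Sally', 'Id': 2241}]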
== References ==
Distributed concurrency control is the concurrency control of a system distributed over a computer network (Bernstein et al. 1987, Weikum and Vossen 2001).
In database systems and transaction processing (transaction management), distributed concurrency control refers primarily to the concurrency control of a distributed database. It also refers to the concurrency control in a multidatabase (and other multi-transactional object) environment (e.g., federated database, grid computing, and cloud computing environments). A major goal for distributed concurrency control is distributed serializability (or global serializability for multidatabase systems). Distributed concurrency control poses special challenges beyond centralized concurrency control, primarily due to communication and computer latency. It often requires special techniques, such as a distributed lock manager operating over fast, low-latency computer networks such as switched fabric (e.g., InfiniBand).
The most common distributed concurrency control technique is strong strict two-phase locking (SS2PL, also named rigorousness), which is also a common centralized concurrency control technique. SS2PL provides both serializability and strictness. Strictness, a special case of recoverability, is utilized for effective recovery from failure. For large-scale distribution and complex transactions, the typically heavy performance penalty of distributed locking (due to delays and latency) can be avoided by using the atomic commitment protocol, which is needed in a distributed database for (distributed) transactions' atomicity anyway.
== See also ==
Global concurrency control
== References ==
Philip A. Bernstein, Vassos Hadzilacos, Nathan Goodman (1987): Concurrency Control and Recovery in Database Systems, Addison Wesley Publishing Company, 1987, ISBN 0-201-10715-5
Gerhard Weikum, Gottfried Vossen (2001): Transactional Information Systems, Elsevier, ISBN 1-55860-508-8
An operating system (OS) is system software that manages computer hardware and software resources, and provides common services for computer programs.
Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, peripherals, and other resources.
For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers.
As of September 2024, Android is the most popular operating system with a 46% market share, followed by Microsoft Windows at 26%, iOS and iPadOS at 18%, macOS at 5%, and Linux at 1%. Android, iOS, and iPadOS are mobile operating systems, while Windows, macOS, and Linux are desktop operating systems. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems (special-purpose operating systems), such as embedded and real-time systems, exist for many applications. Security-focused operating systems also exist. Some operating systems have low system requirements (e.g. light-weight Linux distribution). Others may have higher system requirements.
Some operating systems require installation or may come pre-installed with purchased computers (OEM-installation), whereas others may run directly from media (i.e. live CD) or flash memory (i.e. a LiveUSB from a USB stick).
== Definition and purpose ==
An operating system is difficult to define, but has been called "the layer of software that manages a computer's resources for its users and their applications". Operating systems include the software that is always running, called a kernel—but can include other software as well. The two other types of programs that can run on a computer are system programs—which are associated with the operating system, but may not be part of the kernel—and applications—all other software.
There are three main purposes that an operating system fulfills:
Operating systems allocate resources between different applications, deciding when they will receive central processing unit (CPU) time or space in memory. On modern personal computers, users often want to run several applications at once. In order to ensure that one program cannot monopolize the computer's limited hardware resources, the operating system gives each application a share of the resource, either in time (CPU) or space (memory). The operating system also must isolate applications from each other to protect them from errors and security vulnerabilities in another application's code, but enable communications between different applications.
Operating systems provide an interface that abstracts the details of accessing hardware details (such as physical memory) to make things easier for programmers. Virtualization also enables the operating system to mask limited hardware resources; for example, virtual memory can provide a program with the illusion of nearly unlimited memory that exceeds the computer's actual memory.
Operating systems provide common services, such as an interface for accessing network and disk devices. This enables an application to be run on different hardware without needing to be rewritten. Which services to include in an operating system varies greatly, and this functionality makes up the great majority of code for most operating systems.
== Types of operating systems ==
=== Multicomputer operating systems ===
With multiprocessors multiple CPUs share memory. A multicomputer or cluster computer has multiple CPUs, each of which has its own memory. Multicomputers were developed because large multiprocessors are difficult to engineer and prohibitively expensive; they are universal in cloud computing because of the size of the machine needed. The different CPUs often need to send and receive messages to each other; to ensure good performance, the operating systems for these machines need to minimize this copying of packets. Newer systems are often multiqueue—separating groups of users into separate queues—to reduce the need for packet copying and support more concurrent users. Another technique is remote direct memory access, which enables each CPU to access memory belonging to other CPUs. Multicomputer operating systems often support remote procedure calls where a CPU can call a procedure on another CPU, or distributed shared memory, in which the operating system uses virtualization to generate shared memory that does not physically exist.
=== Distributed systems ===
A distributed system is a group of distinct, networked computers—each of which might have their own operating system and file system. Unlike multicomputers, they may be dispersed anywhere in the world. Middleware, an additional software layer between the operating system and applications, is often used to improve consistency. Although it functions similarly to an operating system, it is not a true operating system.
=== Embedded ===
Embedded operating systems are designed to be used in embedded computer systems, whether they are internet-of-things objects or devices not connected to a network. Embedded systems include many household appliances. The distinguishing factor is that they do not load user-installed software. Consequently, they do not need protection between different applications, enabling simpler designs. Very small operating systems might run in less than 10 kilobytes, and the smallest are for smart cards. Examples include Embedded Linux, QNX, VxWorks, and the extra-small systems RIOT and TinyOS.
=== Real-time ===
A real-time operating system is an operating system that guarantees to process events or data by or at a specific moment in time. Hard real-time systems require exact timing and are common in manufacturing, avionics, military, and other similar uses. With soft real-time systems, the occasional missed event is acceptable; this category often includes audio or multimedia systems, as well as smartphones. In order for hard real-time systems to be sufficiently exact in their timing, they are often just a library with no protection between applications, such as eCos.
=== Hypervisor ===
A hypervisor is an operating system that runs a virtual machine. The virtual machine is unaware that it is an application and operates as if it had its own hardware. Virtual machines can be paused, saved, and resumed, making them useful for operating systems research, development, and debugging. They also enhance portability by enabling applications to be run on a computer even if they are not compatible with the base operating system.
=== Library ===
A library operating system (libOS) is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with a single application and configuration code to construct a unikernel: a specialized (only the absolutely necessary pieces of code are extracted from libraries and bound together), single-address-space machine image that can be deployed to cloud or embedded environments.
The operating system code and application code are not executed in separate protection domains (there is only a single application running, at least conceptually, so there is no need to prevent interference between applications), and OS services are accessed via simple library calls (potentially inlining them based on compiler thresholds), without the usual overhead of context switches, in a way similar to embedded and real-time OSes. Note that this overhead is not negligible: to the direct cost of mode switching one must add the indirect pollution of important processor structures (like CPU caches, the instruction pipeline, and so on), which affects both user-mode and kernel-mode performance.
== History ==
The first computers in the late 1940s and 1950s were directly programmed either with plugboards or with machine code entered on media such as punch cards, without programming languages or operating systems. After the introduction of the transistor in the mid-1950s, mainframes began to be built. These still needed professional operators who manually did what a modern operating system would do, such as scheduling programs to run, but mainframes still had rudimentary operating systems such as Fortran Monitor System (FMS) and IBSYS. In the 1960s, IBM introduced the first series of intercompatible computers (System/360). All of them ran the same operating system—OS/360—which consisted of millions of lines of assembly language that had thousands of bugs. The OS/360 also was the first popular operating system to support multiprogramming, such that the CPU could be put to use on one job while another was waiting on input/output (I/O). Holding multiple jobs in memory necessitated memory partitioning and safeguards against one job accessing the memory allocated to a different one.
Around the same time, teleprinters began to be used as terminals so multiple users could access the computer simultaneously. The operating system MULTICS was intended to allow hundreds of users to access a large computer. Despite its limited adoption, it can be considered the precursor to cloud computing. The UNIX operating system originated as a development of MULTICS for a single user. Because UNIX's source code was available, it became the basis of other, incompatible operating systems, of which the most successful were AT&T's System V and the University of California's Berkeley Software Distribution (BSD). To increase compatibility, the IEEE released the POSIX standard for operating system application programming interfaces (APIs), which is supported by most UNIX systems. MINIX was a stripped-down version of UNIX, developed in 1987 for educational uses, that inspired the commercially available, free software Linux. Since 2008, MINIX has been used in controllers of most Intel microchips, while Linux is widespread in data centers and Android smartphones.
=== Microcomputers ===
The invention of large scale integration enabled the production of personal computers (initially called microcomputers) from around 1980. For around five years, the CP/M (Control Program for Microcomputers) was the most popular operating system for microcomputers. Later, IBM bought the DOS (Disk Operating System) from Microsoft. After modifications requested by IBM, the resulting system was called MS-DOS (MicroSoft Disk Operating System) and was widely used on IBM microcomputers. Later versions increased their sophistication, in part by borrowing features from UNIX.
Apple's Macintosh was the first popular computer to use a graphical user interface (GUI). The GUI proved much more user friendly than the text-only command-line interface earlier operating systems had used. Following the success of Macintosh, MS-DOS was updated with a GUI overlay called Windows. Windows later was rewritten as a stand-alone operating system, borrowing so many features from another (VAX VMS) that a large legal settlement was paid. In the twenty-first century, Windows continues to be popular on personal computers but has less market share of servers. UNIX operating systems, especially Linux, are the most popular on enterprise systems and servers but are also used on mobile devices and many other computer systems.
On mobile devices, Symbian OS was dominant at first, being usurped by BlackBerry OS (introduced 2002) and iOS for iPhones (from 2007). Later on, the open-source Android operating system (introduced 2008), with a Linux kernel and a C library (Bionic) partially based on BSD code, became most popular.
== Components ==
The components of an operating system are designed to ensure that various parts of a computer function cohesively. With the de facto obsolescence of DOS, all user software must interact with the operating system to access hardware.
=== Kernel ===
The kernel is the part of the operating system that provides protection between different applications and users. This protection is key to improving reliability by keeping errors isolated to one program, as well as security by limiting the power of malicious software and protecting private data, and ensuring that one program cannot monopolize the computer's resources. Most operating systems have two modes of operation: in user mode, the hardware checks that the software is only executing legal instructions, whereas the kernel has unrestricted powers and is not subject to these checks. The kernel also manages memory for other processes and controls access to input/output devices.
==== Program execution ====
The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system. The operating system is also a set of services which simplify development and execution of application programs. Executing an application program typically involves the creation of a process by the operating system kernel, which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program, which then interacts with the user and with hardware devices. However, in some systems an application can request that the operating system execute another application within the same process, either as a subroutine or in a separate thread, e.g., the LINK and ATTACH facilities of OS/360 and successors.
==== Interrupts ====
An interrupt (also known as an abort, exception, fault, signal, or trap) provides an efficient way for most operating systems to react to the environment. Interrupts cause the central processing unit (CPU) to have a control flow change away from the currently running program to an interrupt handler, also known as an interrupt service routine (ISR). An interrupt service routine may cause the central processing unit (CPU) to have a context switch. The details of how a computer processes an interrupt vary from architecture to architecture, and the details of how interrupt service routines behave vary from operating system to operating system. However, several interrupt functions are common. The architecture and operating system must:
transfer control to an interrupt service routine.
save the state of the currently running process.
restore the state after the interrupt is serviced.
===== Software interrupt =====
A software interrupt is a message to a process that an event has occurred. This contrasts with a hardware interrupt — which is a message to the central processing unit (CPU) that an event has occurred. Software interrupts are similar to hardware interrupts — there is a change away from the currently running process. Similarly, both hardware and software interrupts execute an interrupt service routine.
Software interrupts may be normally occurring events. It is expected that a time slice will occur, so the kernel will have to perform a context switch. A computer program may set a timer to go off after a few seconds in case too much data causes an algorithm to take too long.
Software interrupts may be error conditions, such as a malformed machine instruction. However, the most common error conditions are division by zero and accessing an invalid memory address.
Users can send messages to the kernel to modify the behavior of a currently running process. For example, in the command-line environment, pressing the interrupt character (usually Control-C) might terminate the currently running process.
To generate software interrupts for x86 CPUs, the INT assembly language instruction is available. The syntax is INT X, where X is the offset number (in hexadecimal format) to the interrupt vector table.
===== Signal =====
To generate software interrupts in Unix-like operating systems, the kill(pid,signum) system call will send a signal to another process. pid is the process identifier of the receiving process. signum is the signal number (in mnemonic format) to be sent. (The abrasive name of kill was chosen because early implementations only terminated the process.)
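As a minimal illustration on a Unix-like system, Python's os and signal modules wrap these primitives; here a process installs a handler for SIGTERM and then signals itself:
import os
import signal

def handler(signum, frame):
    # Runs asynchronously when the signal is delivered.
    print("received signal", signum)

signal.signal(signal.SIGTERM, handler)    # install the handler
os.kill(os.getpid(), signal.SIGTERM)      # kill(pid, signum): signal ourselves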
In Unix-like operating systems, signals inform processes of the occurrence of asynchronous events. To communicate asynchronously, interrupts are required. One reason a process may need to communicate asynchronously with another arises in a variation of the classic reader/writer problem. The writer receives a pipe from the shell for its output to be sent to the reader's input stream. The command-line syntax is alpha | bravo. alpha will write to the pipe when its computation is ready and then sleep in the wait queue. bravo will then be moved to the ready queue and soon will read from its input stream. The kernel will generate software interrupts to coordinate the piping.
Signals may be classified into 7 categories. The categories are:
when a process finishes normally.
when a process has an error exception.
when a process runs out of a system resource.
when a process executes an illegal instruction.
when a process sets an alarm event.
when a process is aborted from the keyboard.
when a process has a tracing alert for debugging.
===== Hardware interrupt =====
Input/output (I/O) devices are slower than the CPU. Therefore, it would slow down the computer if the CPU had to wait for each I/O to finish. Instead, a computer may implement interrupts for I/O completion, avoiding the need for polling or busy waiting.
Some computers require an interrupt for each character or word, costing a significant amount of CPU time. Direct memory access (DMA) is an architecture feature to allow devices to bypass the CPU and access main memory directly. (Separate from the architecture, a device may perform direct memory access to and from main memory either directly or via a bus.)
==== Input/output ====
===== Interrupt-driven I/O =====
When a computer user types a key on the keyboard, typically the character appears immediately on the screen. Likewise, when a user moves a mouse, the cursor immediately moves across the screen. Each keystroke and mouse movement generates an interrupt; this technique is called interrupt-driven I/O, in which a process causes an interrupt for every character or word transmitted.
===== Direct memory access =====
Devices such as hard disk drives, solid-state drives, and magnetic tape drives can transfer data at a rate high enough that interrupting the CPU for every byte or word transferred, and having the CPU transfer the byte or word between the device and memory, would require too much CPU time. Data is, instead, transferred between the device and memory independently of the CPU by hardware such as a channel or a direct memory access controller; an interrupt is delivered only when all the data is transferred.
If a computer program executes a system call to perform a block I/O write operation, then the system call might execute the following instructions:
Set the contents of the CPU's registers (including the program counter) into the process control block.
Create an entry in the device-status table. The operating system maintains this table to keep track of which processes are waiting for which devices. One field in the table is the memory address of the process control block.
Place all the characters to be sent to the device into a memory buffer.
Set the memory address of the memory buffer to a predetermined device register.
Set the buffer size (an integer) to another predetermined register.
Execute the machine instruction to begin the writing.
Perform a context switch to the next process in the ready queue.
While the writing takes place, the operating system will context switch to other processes as normal. When the device finishes writing, the device will interrupt the currently running process by asserting an interrupt request. The device will also place an integer onto the data bus. Upon accepting the interrupt request, the operating system will:
Push the contents of the program counter (a register) followed by the status register onto the call stack.
Push the contents of the other registers onto the call stack. (Alternatively, the contents of the registers may be placed in a system table.)
Read the integer from the data bus. The integer is an offset to the interrupt vector table. The vector table's instructions will then:
Access the device-status table.
Extract the process control block.
Perform a context switch back to the writing process.
When the writing process's time slice expires, the operating system will:
Pop from the call stack the registers other than the status register and program counter.
Pop from the call stack the status register.
Pop from the call stack the address of the next instruction, and set it back into the program counter.
With the program counter now reset, the interrupted process will resume its time slice.
==== Memory management ====
Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by the programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time share, each program must have independent access to memory.
Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen anymore, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaved program to crash the system.
Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which does not exist in all computers.
In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt, which causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation (Seg-V for short), and since it is difficult to assign a meaningful result to such an operation, and it is usually a sign of a misbehaving program, the kernel generally resorts to terminating the offending program and reports the error.
Windows versions 3.1 through ME had some level of memory protection, but programs could easily circumvent the need to use it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
==== Virtual memory ====
The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.
If a program tries to access memory that is not currently accessible, but nonetheless has been allocated to it, the kernel is interrupted (see § Memory management). This kind of interrupt is typically a page fault.
When the kernel detects a page fault it generally adjusts the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has been allocated yet.
In modern operating systems, memory which is accessed less frequently can be temporarily stored on a disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.
Virtual memory provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there.
=== Concurrency ===
Concurrency refers to the operating system's ability to carry out multiple tasks simultaneously. Virtually all modern operating systems support concurrency.
Threads enable splitting a process' work into multiple parts that can run simultaneously. The number of threads is not limited by the number of processors available. If there are more threads than processors, the operating system kernel schedules, suspends, and resumes threads, controlling when each thread runs and how much CPU time it receives. During a context switch a running thread is suspended, its state is saved into the thread control block and stack, and the state of the new thread is loaded in. Historically, on many systems a thread could run until it relinquished control (cooperative multitasking). Because this model can allow a single thread to monopolize the processor, most operating systems now can interrupt a thread (preemptive multitasking).
Threads have their own thread ID, program counter (PC), a register set, and a stack, but share code, heap data, and other resources with other threads of the same process. Thus, there is less overhead to create a thread than a new process. On single-CPU systems, concurrency is switching between processes. Many computers have multiple CPUs. Parallelism with multiple threads running on different CPUs can speed up a program, depending on how much of it can be executed concurrently.
=== File system ===
Permanent storage devices used in twenty-first century computers, unlike volatile dynamic random-access memory (DRAM), are still accessible after a crash or power failure. Permanent (non-volatile) storage is much cheaper per byte, but takes several orders of magnitude longer to access, read, and write. The two main technologies are a hard drive consisting of magnetic disks, and flash memory (a solid-state drive that stores data in electrical circuits). The latter is more expensive but faster and more durable.
File systems are an abstraction used by the operating system to simplify access to permanent storage. They provide human-readable filenames and other metadata, increase performance via amortization of accesses, prevent multiple threads from accessing the same section of memory, and include checksums to identify corruption. File systems are composed of files (named collections of data, of an arbitrary size) and directories (also called folders) that list human-readable filenames and other directories. An absolute file path begins at the root directory and lists subdirectories divided by punctuation, while a relative path defines the location of a file from a directory.
System calls (which are sometimes wrapped by libraries) enable applications to create, delete, open, and close files, as well as link, read, and write to them. All these operations are carried out by the operating system on behalf of the application. The operating system's efforts to reduce latency include storing recently requested blocks of memory in a cache and prefetching data that the application has not asked for, but might need next. Device drivers are software specific to each input/output (I/O) device that enables the operating system to work without modification over different hardware.
Another component of file systems is a dictionary that maps a file's name and metadata to the data block where its contents are stored. Most file systems use directories to convert file names to file numbers. To find the block number, the operating system uses an index (often implemented as a tree). Separately, there is a free space map to track free blocks, commonly implemented as a bitmap. Although any free block can be used to store a new file, many operating systems try to group together files in the same directory to maximize performance, or periodically reorganize files to reduce fragmentation.
Maintaining data reliability in the face of a computer crash or hardware failure is another concern. File writing protocols are designed with atomic operations so as not to leave permanent storage in a partially written, inconsistent state in the event of a crash at any point during writing. Data corruption is addressed by redundant storage (for example, RAID—redundant array of inexpensive disks) and checksums to detect when data has been corrupted. With multiple layers of checksums and backups of a file, a system can recover from multiple hardware failures. Background processes are often used to detect and recover from data corruption.
=== Security ===
Security means protecting users from other users of the same computer, as well as from those seeking remote access to it over a network. Operating systems security rests on achieving the CIA triad: confidentiality (unauthorized users cannot access data), integrity (unauthorized users cannot modify data), and availability (ensuring that the system remains available to authorized users, even in the event of a denial of service attack). As with other computer systems, isolating security domains—in the case of operating systems, the kernel, processes, and virtual machines—is key to achieving security. Other ways to increase security include simplicity to minimize the attack surface, locking access to resources by default, checking all requests for authorization, principle of least authority (granting the minimum privilege essential for performing a task), privilege separation, and reducing shared data.
Some operating system designs are more secure than others. Those with no isolation between the kernel and applications are least secure, while those with a monolithic kernel like most general-purpose operating systems are still vulnerable if any part of the kernel is compromised. A more secure design features microkernels that separate the kernel's privileges into many separate security domains and reduce the consequences of a single kernel breach. Unikernels are another approach that improves security by minimizing the kernel and separating out other operating systems functionality by application.
Most operating systems are written in C or C++, which create potential vulnerabilities for exploitation. Despite attempts to protect against them, vulnerabilities are caused by buffer overflow attacks, which are enabled by the lack of bounds checking. Hardware vulnerabilities, some of them caused by CPU optimizations, can also be used to compromise the operating system. There are known instances of operating system programmers deliberately implanting vulnerabilities, such as back doors.
Operating systems security is hampered by their increasing complexity and the resulting inevitability of bugs. Because formal verification of operating systems may not be feasible, developers use operating system hardening to reduce vulnerabilities, e.g. address space layout randomization, control-flow integrity, access restrictions, and other techniques. There are no restrictions on who can contribute code to open source operating systems; such operating systems have transparent change histories and distributed governance structures. Open source developers strive to work collaboratively to find and eliminate security vulnerabilities, using code review and type checking to expunge malicious code. Andrew S. Tanenbaum advises releasing the source code of all operating systems, arguing that it prevents developers from placing trust in secrecy and thus relying on the unreliable practice of security by obscurity.
=== User interface ===
A user interface (UI) is essential to support human interaction with a computer. The two most common user interface types for any computer are
command-line interface, where computer commands are typed, line-by-line,
graphical user interface (GUI) using a visual environment, most commonly a combination of the window, icon, menu, and pointer elements, also known as WIMP.
For personal computers, including smartphones and tablet computers, and for workstations, user input is typically from a combination of keyboard, mouse, and trackpad or touchscreen, all of which are connected to the operating system with specialized software. Personal computer users who are not software developers or coders often prefer GUIs for both input and output; GUIs are supported by most personal computers. The software to support GUIs is more complex than a command line for input and plain text output. Plain text output is often preferred by programmers, and is easy to support.
== Operating system development as a hobby ==
A hobby operating system may be classified as one whose code has not been directly derived from an existing operating system, and has few users and active developers.
In some cases, hobby development is in support of a "homebrew" computing device, for example, a simple single-board computer powered by a 6502 microprocessor. Or, development may be for an architecture already in widespread use. Operating system development may come from entirely new concepts, or may commence by modeling an existing operating system. In either case, the hobbyist is their own developer, or may interact with a small and sometimes unstructured group of individuals who have like interests.
Examples of hobby operating systems include Syllable and TempleOS.
== Diversity of operating systems and portability ==
If an application is written for use on a specific operating system and is ported to another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, the meaning of arguments, etc.), requiring the application to be adapted, changed, or otherwise maintained.
This cost in supporting operating systems diversity can be avoided by instead writing applications against software platforms such as Java or Qt. These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries.
Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs.
== Popular operating systems ==
As of September 2024, Android (based on the Linux kernel) is the most popular operating system with a 46% market share, followed by Microsoft Windows at 26%, iOS and iPadOS at 18%, macOS at 5%, and Linux at 1%. Android, iOS, and iPadOS are mobile operating systems, while Windows, macOS, and Linux are desktop operating systems.
=== Linux ===
Linux is free software distributed under the GNU General Public License (GPL), which means that all of its derivatives are legally required to release their source code. Linux was designed by programmers for their own use, thus emphasizing simplicity and consistency, with a small number of basic elements that can be combined in nearly unlimited ways, and avoiding redundancy.
Its design is similar to other UNIX systems not using a microkernel. It is written in C and uses UNIX System V syntax, but also supports BSD syntax. Linux supports standard UNIX networking features, as well as the full suite of UNIX tools, while supporting multiple users and employing preemptive multitasking. Initially of a minimalist design, Linux is a flexible system that can work in under 16 MB of RAM, but still is used on large multiprocessor systems. Similar to other UNIX systems, Linux distributions are composed of a kernel, system libraries, and system utilities. Linux has a graphical user interface (GUI) with a desktop, folder and file icons, as well as the option to access the operating system via a command line.
Android is a partially open-source operating system closely based on Linux and has become the most widely used operating system by users, due to its popularity on smartphones and, to a lesser extent, embedded systems needing a GUI, such as "smart watches, automotive dashboards, airplane seatbacks, medical devices, and home appliances". Unlike Linux, much of Android is written in Java and uses object-oriented design.
=== Microsoft Windows ===
Windows is a proprietary operating system that is widely used on desktop computers, laptops, tablets, phones, workstations, enterprise servers, and Xbox consoles. The operating system was designed for "security, reliability, compatibility, high performance, extensibility, portability, and international support"—later on, energy efficiency and support for dynamic devices also became priorities.
Windows Executive works via kernel-mode objects for important data structures like processes, threads, and sections (memory objects, for example files). The operating system supports demand paging of virtual memory, which speeds up I/O for many applications. I/O device drivers use the Windows Driver Model. The NTFS file system has a master table and each file is represented as a record with metadata. The scheduling includes preemptive multitasking. Windows has many security features; especially important are the use of access-control lists and integrity levels. Every process has an authentication token and each object is given a security descriptor. Later releases have added even more security features.
== See also ==
== Notes ==
== References ==
== Further reading ==
== External links ==
Multics History and the history of operating systems
In computer science, a timestamp-based concurrency control algorithm is an optimistic concurrency control method. It is used in some databases to safely handle transactions using timestamps.
== Operation ==
=== Assumptions ===
Every timestamp value is unique and accurately represents an instant in time.
A higher-valued timestamp occurs later in time than a lower-valued timestamp.
=== Generating a timestamp ===
A number of different approaches can generate timestamps:
Using the value of the system's clock at the start of a transaction as the timestamp.
Using a thread-safe shared counter that is incremented at the start of a transaction as the timestamp.
A combination of the above two methods (sketched below).
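A minimal sketch of the combined approach (single-process, with illustrative bit widths): the clock supplies the high bits and a locked counter the low bits, so timestamps stay unique and roughly time-ordered even when the clock is coarse.
import itertools
import threading
import time

_counter = itertools.count()
_lock = threading.Lock()

def next_timestamp():
    # Clock time in the high bits, a monotonically increasing
    # counter in the low 20 bits.
    with _lock:
        seq = next(_counter)
    return (time.time_ns() << 20) | (seq & 0xFFFFF)

t1, t2 = next_timestamp(), next_timestamp()
assert t1 != t2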
=== Formal definition ===
Each transaction ({\displaystyle T_{i}}) is an ordered list of actions ({\displaystyle A_{ix}}). Before the transaction performs its first action ({\displaystyle A_{i1}}), it is marked with the current timestamp, or any other strictly totally ordered sequence: {\displaystyle TS(T_{i})=NOW()}. Every transaction is also given an initially empty set of transactions upon which it depends, {\displaystyle DEP(T_{i})=[]}, and an initially empty set of old objects which it updated, {\displaystyle OLD(T_{i})=[]}.
Each object ({\displaystyle O_{j}}) in the database is given two timestamp fields which are not used other than for concurrency control: {\displaystyle RT(O_{j})} is the timestamp of the last transaction that read the value of the object ({\displaystyle TS(T_{r})}, where {\displaystyle T_{r}} is the last transaction that read the value of the object); {\displaystyle WT(O_{j})} is the timestamp of the last transaction that updated the value of the object ({\displaystyle TS(T_{w})}, where {\displaystyle T_{w}} is the last transaction that updated the value of the object).
For all {\displaystyle T_{i}}:
For each action {\displaystyle A_{ix}}:
If {\displaystyle A_{ix}} wishes to read the value of {\displaystyle O_{j}}: if {\displaystyle WT(O_{j})>TS(T_{i})} then abort (a more recent thread has overwritten the value); otherwise update the set of dependencies {\displaystyle DEP(T_{i}).\mathrm {add} (WT(O_{j}))} and set {\displaystyle RT(O_{j})=\max(RT(O_{j}),TS(T_{i}))}.
If {\displaystyle A_{ix}} wishes to update the value of {\displaystyle O_{j}}: if {\displaystyle RT(O_{j})>TS(T_{i})} then abort (a more recent thread is already relying on the old value); if {\displaystyle WT(O_{j})>TS(T_{i})} then skip (the Thomas Write Rule); otherwise store the previous values, {\displaystyle OLD(T_{i}).\mathrm {add} (O_{j},WT(O_{j}))}, set {\displaystyle WT(O_{j})=TS(T_{i})}, and update the value of {\displaystyle O_{j}}.
While there is a transaction in {\displaystyle DEP(T_{i})} that has not ended: wait.
If there is a transaction in {\displaystyle DEP(T_{i})} that aborted, then abort; otherwise: commit.
To abort: for each {\displaystyle (\mathrm {old} O_{j},\mathrm {old} WT(O_{j}))} in {\displaystyle OLD(T_{i})}, if {\displaystyle WT(O_{j})} equals {\displaystyle TS(T_{i})} then restore {\displaystyle O_{j}=\mathrm {old} O_{j}} and {\displaystyle WT(O_{j})=\mathrm {old} WT(O_{j})}.
=== Informal definition ===
Whenever a transaction is initiated, it receives a timestamp, which indicates when the transaction was initiated. These timestamps ensure that transactions affect each object in the order of their respective timestamps. Thus, given two operations that affect the same object from different transactions, the operation of the transaction with the earlier timestamp must execute before the operation of the transaction with the later timestamp. However, if an operation arrives in the wrong order, the transaction it belongs to is aborted and must be restarted.
Every object in the database has a read timestamp, which is updated whenever the object's data is read, and a write timestamp, which is updated whenever the object's data is changed.
If a transaction wants to read an object,
but the transaction started before the object's write timestamp, something changed the object's data after the transaction started. In this case, the transaction is canceled and must be restarted.
and the transaction started after the object's write timestamp, it is safe to read the object. In this case, if the transaction's timestamp is after the object's read timestamp, the read timestamp is set to the transaction's timestamp.
If a transaction wants to write to an object,
but the transaction started before the object's read timestamp, something has already looked at the object, and we assume it took a copy of the object's data. Writing would make any such copied data invalid, so the transaction is aborted and must be restarted.
and the transaction started before the object's write timestamp, something has changed the object since our transaction started. In this case we apply the Thomas write rule and simply skip the write operation and continue as normal; the transaction does not have to be aborted or restarted.
otherwise, the transaction writes to the object, and the object's write timestamp is set to the transaction's timestamp. (These rules are sketched in code below.)
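A compact sketch of these three rules, assuming in-memory objects and leaving out the commit/abort bookkeeping (dependency tracking and value restoration) of the formal definition:
class Aborted(Exception):
    pass

class TimestampedObject:
    def __init__(self, value):
        self.value, self.rt, self.wt = value, 0, 0   # read/write timestamps

def read(txn_ts, obj):
    if obj.wt > txn_ts:       # written after this transaction started: restart
        raise Aborted()
    obj.rt = max(obj.rt, txn_ts)
    return obj.value

def write(txn_ts, obj, value):
    if obj.rt > txn_ts:       # a later transaction already read the old value
        raise Aborted()
    if obj.wt > txn_ts:       # Thomas write rule: silently skip the obsolete write
        return
    obj.value, obj.wt = value, txn_ts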
== Physically unrealizable ==
The behavior is physically unrealizable if the results of transactions could not have occurred if transactions were instantaneous. The following are the only two situations that result in physically unrealizable behavior:
Transaction T tries to read X but TS(T) < WT(X). Reason: It means that X has been written to by another transaction after T began.
Transaction T tries to write X but TS(T) < RT(X). Reason: It means that a later transaction read X before it was written by T.
== Recoverability ==
Note that timestamp ordering in its basic form does not produce recoverable histories. Consider for example the following history with transactions
{\displaystyle T_{1}} and {\displaystyle T_{2}}:
{\displaystyle W_{1}(x)\;R_{2}(x)\;W_{2}(y)\;C_{2}\;R_{1}(z)\;C_{1}}
This could be produced by a TO scheduler, but is not recoverable, as
{\displaystyle T_{2}} commits even though it has read from an uncommitted transaction. To make sure that it produces recoverable histories, a scheduler can keep a list of the other transactions each transaction has read from, and not let a transaction commit until this list contains only committed transactions. To avoid cascading aborts, the scheduler could tag data written by uncommitted transactions as dirty, and never let a read operation commence on such a data item until it is untagged. To get a strict history, the scheduler should not allow any operations on dirty items.
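One way to sketch this bookkeeping in Python, with invented names and plain sets standing in for the scheduler's internal tables:

committed, aborted, waiting = set(), set(), set()
read_from = {}  # txn -> set of txns whose uncommitted writes it has read

def try_commit(txn):
    deps = read_from.get(txn, set())
    if deps & aborted:
        aborted.add(txn)        # read from an aborted txn: must abort too
        return "aborted"
    if deps <= committed:
        committed.add(txn)      # every supplier committed: safe to commit
        return "committed"
    waiting.add(txn)            # block until all suppliers have ended
    return "waiting"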
== Implementation issues ==
=== Timestamp resolution ===
This is the minimum time elapsed between two adjacent timestamps. If the resolution of the timestamp is too large (coarse), the possibility of two or more timestamps being equal increases, enabling some transactions to commit out of correct order. For example, in a system that creates one hundred unique timestamps per second, two events that occur 2 milliseconds apart may be given the same timestamp even though they occurred at different times.
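A common workaround, sketched below under the assumption that timestamps only need to be unique and monotonic on a single node, is to pair the clock reading with a counter so that ties cannot occur:

import itertools
import time

_tiebreaker = itertools.count()

def unique_timestamp():
    # Tuples compare lexicographically: the coarse clock dominates,
    # and the counter breaks ties between events in the same tick.
    return (time.time(), next(_tiebreaker))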
=== Timestamp locking ===
Even though this technique is a non-locking one, inasmuch as the object is not locked from concurrent access for the duration of a transaction, the act of recording each timestamp against the object requires an extremely short-duration lock on the object or its proxy.
== See also ==
Multiversion concurrency control
Timestamping (computing)
Systems management is the enterprise-wide administration of distributed systems, including (and commonly in practice) computer systems. Systems management is strongly influenced by network management initiatives in telecommunications. Application performance management (APM) technologies are now a subset of systems management. Productivity can be maximized through event correlation, system automation and predictive analysis, all of which are now part of APM.
== Discussion ==
Centralized management has a time and effort trade-off that is related to the size of the company, the expertise of the IT staff, and the amount of technology being used:
For a small business startup with ten computers, automated centralized processes may take more time to learn how to use and implement than just doing the management work manually on each computer.
A very large business with thousands of similar employee computers may clearly be able to save time and money, by having IT staff learn to do systems management automation.
A small branch office of a large corporation may have access to a central IT staff, with the experience to set up automated management of the systems in the branch office, without need for local staff in the branch office to do the work.
Systems management may involve one or more of the following tasks:
Hardware inventories.
Server availability monitoring and metrics.
Software inventory and installation.
Anti-virus and anti-malware.
User's activities monitoring.
Capacity monitoring.
Security management.
Storage management.
Network capacity and utilization monitoring.
Identity Access Management.
Anti-manipulation management.
== Functions ==
Functional groups are provided according to the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Common management information protocol (X.700) standard. This framework is also known as Fault, Configuration, Accounting, Performance, Security (FCAPS).
Fault management
Troubleshooting, error logging and data recovery
Configuration management
Hardware and software inventory
As we begin the process of automating the management of our technology, what equipment and resources do we have already?
How can this inventorying information be gathered and updated automatically, without direct hands-on examination of each device, and without hand-documenting with a pen and notepad?
What do we need to upgrade or repair?
What can we consolidate to reduce complexity or reduce energy use?
What resources would be better reused somewhere else?
What commercial software are we using that is improperly licensed, and either needs to be removed or more licenses purchased?
Provisioning
What software will we need to use in the future?
What training will need to be provided to use the software effectively?
Software deployment
What steps are necessary to install it on perhaps hundreds or thousands of computers?
Package management
How do we maintain and update the software we are using, possibly through automated update mechanisms?
Accounting management
Billing and statistics gathering
Performance management
Software metering
Who is using the software and how often?
If the license says only so many copies may be in use at any one time but may be installed in many more places than licensed, then track usage of those licenses.
If the licensed user limit is reached, either prevent more people from using it, or allow overflow and notify accounting that more licenses need to be purchased.
Event and metric monitoring
How reliable are the computers and software?
What errors or software bugs are preventing staff from doing their job?
What trends are we seeing for hardware failure and life expectancy?
Security management
Identity management
Policy management
However, this standard should not be treated as comprehensive; there are obvious omissions. Some are recently emerging sectors, some are implied, and some are just not listed. The primary ones are:
Business Impact functions (also known as Business Systems Management)
Capacity management
Real-time Application Relationship Discovery (which supports Configuration Management)
Security Information and Event Management functions (SIEM)
Workload scheduling
Performance management functions can also be split into end-to-end performance measuring and infrastructure component measuring functions. Another recently emerging sector is operational intelligence (OI) which focuses on real-time monitoring of business events that relate to business processes, not unlike business activity monitoring (BAM).
== Standards ==
Distributed Management Task Force (DMTF)
Alert Standard Format (ASF)
Common Information Model (CIM)
Desktop and mobile Architecture for System Hardware (DASH)
Systems Management Architecture for Server Hardware (SMASH)
Java Management Extensions (JMX)
== Academic preparation ==
Schools that offer or have offered degrees in the field of systems management include the University of Southern California, the University of Denver, Capitol Technology University, and Florida Institute of Technology.
== See also ==
List of systems management systems
Application service management
Enterprise service management
Business activity monitoring
Business transaction management
Computer Measurement Group
Event correlation
Network management
Operational intelligence
System administration
Service governance
== References ==
== Bibliography ==
Hegering, Heinz-Gerd; Abeck, Sebastian; Neumair, Bernhard (1999). Integriertes Management vernetzter Systeme : Konzepte, Architekturen und deren betrieblicher Einsatz. Heidelberg: dpunkt-Verl. ISBN 3-932588-16-9.
== External links ==
Standards for Automated Resource Management
A federated database system (FDBS) is a type of meta-database management system (DBMS), which transparently maps multiple autonomous database systems into a single federated database. The constituent databases are interconnected via a computer network and may be geographically decentralized. Since the constituent database systems remain autonomous, a federated database system is a contrastable alternative to the (sometimes daunting) task of merging several disparate databases. A federated database, or virtual database, is a composite of all constituent databases in a federated database system. There is no actual data integration in the constituent disparate databases as a result of data federation.
Through data abstraction, federated database systems can provide a uniform user interface, enabling users and clients to store and retrieve data from multiple noncontiguous databases with a single query—even if the constituent databases are heterogeneous. To this end, a federated database system must be able to decompose the query into subqueries for submission to the relevant constituent DBMSs, after which the system must composite the result sets of the subqueries. Because various database management systems employ different query languages, federated database systems can apply wrappers to the subqueries to translate them into the appropriate query languages.
== Definition ==
McLeod and Heimbigner were among the first to define a federated database system in the mid-1980s.
An FDBS is one which "define[s] the architecture and interconnect[s] databases that minimize central authority yet support partial sharing and coordination among database systems". This description might not accurately reflect the McLeod/Heimbigner definition of a federated database; rather, it fits what McLeod/Heimbigner called a composite database. McLeod/Heimbigner's federated database is a collection of autonomous components that make their data available to other members of the federation through the publication of an export schema and access operations; there is no unified, central schema that encompasses the information available from the members of the federation.
Among other surveys, practitioners define a Federated Database as a collection of cooperating component systems which are autonomous and are possibly heterogeneous.
The three important components of an FDBS are autonomy, heterogeneity and distribution. Another dimension that has also been considered is the networking environment (e.g., many DBSs over a LAN or many DBSs over a WAN) together with the update-related functions of participating DBSs (e.g., no updates, nonatomic transitions, atomic updates).
== FDBS architecture ==
A DBMS can be classified as either centralized or distributed. A centralized system manages a single database, while a distributed one manages multiple databases. A component DBS in a DBMS may be centralized or distributed. A multiple DBS (MDBS) can be classified into two types, federated and nonfederated, depending on the autonomy of the component DBSs. A nonfederated database system is an integration of component DBMSs that are not autonomous.
A federated database system consists of component DBS that are autonomous yet participate in a federation to allow partial and controlled sharing of their data.
Federated architectures differ based on levels of integration with the component database systems and the extent of services offered by the federation. A FDBS can be categorized as loosely or tightly coupled systems.
Loosely coupled systems require component databases to construct their own federated schema. A user will typically access other component database systems by using a multidatabase language, but this removes any level of location transparency, forcing the user to have direct knowledge of the federated schema. A user imports the data they require from other component databases and integrates it with their own to form a federated schema.
A tightly coupled system consists of component systems that use independent processes to construct and publicize an integrated federated schema.
Multiple DBS of which FDBS are a specific type can be characterized along three dimensions: Distribution, Heterogeneity and Autonomy. Another characterization could be based on the dimension of networking, for example single databases or multiple databases in a LAN or WAN.
=== Distribution ===
Distribution of data in an FDBS is due to the existence of a multiple DBS before an FDBS is built. Data can be distributed among multiple databases, which could be stored in a single computer or in multiple computers. These computers could be geographically located in different places but interconnected by a network. The benefits of data distribution include increased availability and reliability as well as improved access times.
==== Heterogeneity ====
Heterogeneities in databases arise due to factors such as differences in structures, semantics of data, the constraints supported, or the query language. Differences in structure occur when two data models provide different primitives, such as object-oriented (OO) models that support specialization and inheritance and relational models that do not. Differences due to constraints occur when two models support two different constraints. For example, the set type in a CODASYL schema may be partially modeled as a referential integrity constraint in a relational schema; CODASYL supports insertion and retention semantics that are not captured by referential integrity alone. The query language supported by one DBMS can also contribute to heterogeneity between component DBMSs. For example, differences in query languages with the same data models, or different versions of query languages, could contribute to heterogeneity.
Semantic heterogeneities arise when there is a disagreement about meaning, interpretation or intended use of data. At the schema and data level, classification of possible heterogeneities include:
Naming conflicts e.g. databases using different names to represent the same concept.
Domain conflicts or data representation conflicts e.g. databases using different values to represent same concept.
Precision conflicts e.g. databases using same data values from domains of different cardinalities for same data.
Metadata conflicts e.g. same concepts are represented at schema level and instance level.
Data conflicts e.g. missing attributes
Schema conflicts e.g. table versus table conflict which includes naming conflicts, data conflicts etc.
In creating a federated schema, one has to resolve such heterogeneities before integrating the component DB schemas.
==== Schema matching, schema mapping ====
Dealing with incompatible data types or query syntax is not the only obstacle to a concrete implementation of an FDBS. In systems that are not planned top-down, a generic problem lies in matching semantically equivalent, but differently named parts from different schemas (=data models) (tables, attributes). A pairwise mapping between n attributes would result in
{\displaystyle n(n-1) \over 2}
mapping rules (given equivalence mappings) - a number that quickly gets too large for practical purposes. A common way out is to provide a global schema that comprises the relevant parts of all member schemas and provide mappings in the form of database views. Two principal approaches depend on the direction of the mapping:
Global as View (GaV): the global schema is defined in terms of the underlying schemas
Local as View (LaV): the local schemas are defined in terms of the global schema
Both are examples of data integration, called the schema matching problem.
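As a small illustration of the GaV approach, the following Python sketch (with invented schemas and attribute names) defines a global relation as a view over two local sources whose attributes are semantically equivalent but differently named:

# Two local sources with differently named, semantically equivalent attributes
source_a = [{"emp_id": 1, "full_name": "Alice"}]
source_b = [{"sid": 7, "name": "Bob"}]

def global_employees():
    """Global-as-View: the global relation employee(id, name) is
    defined as a mapping over the underlying local schemas."""
    for row in source_a:
        yield {"id": row["emp_id"], "name": row["full_name"]}
    for row in source_b:
        yield {"id": row["sid"], "name": row["name"]}

print(list(global_employees()))  # one uniform view over both sources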
=== Autonomy ===
Fundamental to the difference between an MDBS and an FDBS is the concept of autonomy. It is important to understand the aspects of autonomy for component databases and how they can be addressed when a component DBS participates in an FDBS.
There are four kinds of autonomies addressed:
Design autonomy, which refers to the ability of a component DBS to choose its own design with respect to the data, the query language, the conceptualization, and the functionality of the system implementation.
Heterogeneities in an FDBS are primarily due to design autonomy.
Communication autonomy refers to the ability of a component DBMS to decide whether, and how, to communicate with other DBMSs.
Execution autonomy allows a component DBMS to control the execution of operations requested locally and externally.
Association autonomy gives a component DBS the power to disassociate itself from the federation, which means the FDBS can operate independently of any single DBS.
The ANSI/X3/SPARC Study Group outlined a three-level data description architecture, the components of which are the conceptual schema, internal schema and external schema of databases. The three-level architecture is, however, inadequate for describing the architecture of an FDBS. It was therefore extended to support the three dimensions of the FDBS, namely distribution, autonomy and heterogeneity. The five-level schema architecture is explained below.
=== Concurrency control ===
The Heterogeneity and Autonomy requirements pose special challenges concerning concurrency control in an FDBS, which is crucial for the correct execution of its concurrent transactions (see also Global concurrency control). Achieving global serializability, the major correctness criterion, under these requirements has been characterized as very difficult and unsolved.
== Five level schema architecture for FDBSs ==
The five level schema architecture includes the following:
Local Schema is basically the conceptual model of a component database expressed in a native data model.
Component schema is the local schema translated into the common data model of the FDBS, so that heterogeneous local schemas can be compared and integrated.
Export Schema represents a subset of a component schema that is available to a particular federation. It may include access control information regarding its use by a specific federation user. The export schema helps in managing flow of control of data.
Federated Schema is an integration of multiple export schemas. It includes information on data distribution that is generated when integrating export schemas.
External schema is extracted from a federated schema, and is defined for the users/applications of a particular federation.
While accurately representing the state of the art in data integration, the five-level schema architecture above suffers from a major drawback, namely an IT-imposed look and feel. Modern data users demand control over how data is presented; their needs are somewhat in conflict with such bottom-up approaches to data integration.
== See also ==
Enterprise Information Integration (EII)
Data virtualization
Master data management (MDM)
Schema matching
Universal relation assumption
Linked data
SPARQL
== References ==
== External links ==
DB2 and Federated Databases
Issues of where to perform the join aka "pushdown" and other performance characteristics
Worked example federating Oracle, Informix, DB2, and Excel
Freitas, André, Edward Curry, João Gabriel Oliveira, and Sean O’Riain. 2012. “Querying Heterogeneous Datasets on the Linked Data Web: Challenges, Approaches, and Trends.” IEEE Internet Computing 16 (1): 24–33.
IBM Gaian Database: A dynamic Distributed Federated Database
Federated system and methods and mechanisms of implementing and using such a system
In computer security, a sandbox is a security mechanism for separating running programs, usually in an effort to keep system failures and/or software vulnerabilities from spreading. The sandbox metaphor derives from the concept of a child's sandbox—a play area where children can build, destroy, and experiment without causing any real-world damage. It is often used to run untested or untrusted programs or code, possibly from unverified or untrusted third parties, suppliers, users or websites, without risking harm to the host machine or operating system. A sandbox typically provides a tightly controlled set of resources for guest programs to run in, such as storage and memory scratch space. Network access, the ability to inspect the host system, and reading from input devices are usually disallowed or heavily restricted.
In the sense of providing a highly controlled environment, sandboxes may be seen as a specific example of virtualization. Sandboxing is frequently used to test unverified programs that may contain a virus or other malicious code without allowing the software to harm the host device.
== Implementations ==
A sandbox is implemented by executing the software in a restricted operating system environment, thus controlling the resources (e.g. file descriptors, memory, file system space, etc.) that a process may use.
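As a narrow illustration of restricting resources, the following Python sketch (POSIX-only, with a hypothetical ./untrusted_program) caps a child process's CPU time, memory, and open files before it runs. Real sandboxes layer namespace isolation and system-call filtering on top of such limits:

import resource
import subprocess

def limit_resources():
    # Runs in the child between fork() and exec().
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                     # 2 CPU-seconds
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))  # 256 MiB of memory
    resource.setrlimit(resource.RLIMIT_NOFILE, (16, 16))                # 16 open files

# The child is killed by the kernel if it exceeds the CPU limit;
# allocations beyond the memory limit simply fail.
result = subprocess.run(
    ["./untrusted_program"],
    preexec_fn=limit_resources,  # POSIX only
    capture_output=True,
    timeout=5,                   # wall-clock backstop
)
print(result.returncode)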
Examples of sandbox implementations include the following:
Linux application sandboxing, built on Seccomp, cgroups and Linux namespaces. Notably used by Systemd, Google Chrome, Firefox, Firejail.
Android was the first mainstream operating system to implement full application sandboxing, built by assigning each application its own Linux user ID.
Apple App Sandbox is required for apps distributed through Apple's Mac App Store and iOS/iPadOS App Store, and recommended for other signed apps.
Windows Vista and later editions include a "low" mode process running, known as "User Account Control" (UAC), which only allows writing in specific directories and registry keys. Windows 10 Pro, from version 1903, provides a feature known as Windows Sandbox.
Google Sandboxed API.
Virtual machines emulate a complete host computer, on which a conventional operating system may boot and run as on actual hardware. The guest operating system runs sandboxed in the sense that it does not function natively on the host and can only access host resources through the emulator.
A jail: network-access restrictions, and a restricted file system namespace. Jails are most commonly used in virtual hosting.
Rule-based execution gives users full control over what processes are started, spawned (by other applications), or allowed to inject code into other applications and have access to the net, by having the system assign access levels for users or programs according to a set of determined rules. It also can control file/registry security (what programs can read and write to the file system/registry). In such an environment, viruses and Trojans have fewer opportunities for infecting a computer. The SELinux and Apparmor security frameworks are two such implementations for Linux.
Security researchers rely heavily on sandboxing technologies to analyse malware behavior. By creating an environment that mimics or replicates the targeted desktops, researchers can evaluate how malware infects and compromises a target host. Numerous malware analysis services are based on the sandboxing technology.
Google Native Client is a sandbox for running compiled C and C++ code in the browser efficiently and securely, independent of the user's operating system.
Capability systems can be thought of as a fine-grained sandboxing mechanism, in which programs are given opaque tokens when spawned and have the ability to do specific things based on what tokens they hold. Capability-based implementations can work at various levels, from kernel to user-space. An example of capability-based user-level sandboxing involves HTML rendering in a Web browser.
Secure Computing Mode (seccomp): in strict mode, seccomp allows only the write(), read(), exit(), and sigreturn() system calls.
HTML5 has a "sandbox" attribute for use with iframes.
Java virtual machines include a sandbox to restrict the actions of untrusted code, such as a Java applet.
The .NET Common Language Runtime provides Code Access Security to enforce restrictions on untrusted code.
Software Fault Isolation (SFI), allows running untrusted native code by sandboxing all store, read and jump assembly instructions to isolated segments of memory.
Some of the use cases for sandboxes include the following:
Online judge systems to test programs in programming contests.
New-generation pastebins allowing users to execute pasted code snippets on the pastebin's server.
== See also ==
FreeBSD jail
Sandboxie
seccomp
Test bench
Tor (anonymity network)
== References ==
== External links ==
Security In-Depth for Linux Software: Preventing and Mitigating Security Bugs
Sandbox – The Chromium Projects
FreeBSD capsicum(4) man page – a lightweight OS capability and sandbox framework
OpenBSD pledge(2) man page – a way to restrict system operations
Sandbox testing importance Archived 2021-04-26 at the Wayback Machine – on the importance of sandboxing in mitigating zero-day flaws
Apriori is an algorithm for frequent item set mining and association rule learning over relational databases. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often in the database. The frequent item sets determined by Apriori can be used to determine association rules which highlight general trends in the database: this has applications in domains such as market basket analysis.
== Overview ==
The Apriori algorithm was proposed by Agrawal and Srikant in 1994. Apriori is designed to operate on databases containing transactions (for example, collections of items bought by customers, or details of a website frequentation or IP addresses). Other algorithms are designed for finding association rules in data having no transactions (Winepi and Minepi), or having no timestamps (DNA sequencing). Each transaction is seen as a set of items (an itemset). Given a threshold
{\displaystyle C}, the Apriori algorithm identifies the item sets which are subsets of at least {\displaystyle C} transactions in the database.
Apriori uses a "bottom up" approach, where frequent subsets are extended one item at a time (a step known as candidate generation), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found.
Apriori uses breadth-first search and a Hash tree structure to count candidate item sets efficiently. It generates candidate item sets of length
{\displaystyle k} from item sets of length {\displaystyle k-1}. Then it prunes the candidates which have an infrequent sub pattern. According to the downward closure lemma, the candidate set contains all frequent {\displaystyle k}-length item sets. After that, it scans the transaction database to determine frequent item sets among the candidates.
The pseudo code for the algorithm is given below for a transaction database
{\displaystyle T}, and a support threshold of {\displaystyle \varepsilon }. Usual set theoretic notation is employed, though note that {\displaystyle T} is a multiset. {\displaystyle C_{k}} is the candidate set for level {\displaystyle k}. At each step, the algorithm is assumed to generate the candidate sets from the large item sets of the preceding level, heeding the downward closure lemma. {\displaystyle \mathrm {count} [c]} accesses a field of the data structure that represents candidate set {\displaystyle c}, which is initially assumed to be zero. Many details are omitted below; usually the most important part of the implementation is the data structure used for storing the candidate sets and counting their frequencies.
Apriori(T, ε)
L1 ← {large singleton itemsets}
k ← 2
while Lk−1 is not empty
Ck ← Generate_candidates(Lk−1, k)
for transactions t in T
Dt ← {c in Ck : c ⊆ t}
for candidates c in Dt
count[c] ← count[c] + 1
Lk ← {c in Ck : count[c] ≥ ε}
k ← k + 1
return Union(Lk) over all k
Generate_candidates(L, k)
result ← empty_set()
for all p ∈ L, q ∈ L where p and q differ in exactly one element
c ← p ∪ q
if u ∈ L for all u ⊆ c where |u| = k-1
result.add(c)
return result
== Examples ==
=== Example 1 ===
Consider the following database, where each row is a transaction and each cell is an individual item of the transaction:
The association rules that can be determined from this database are the following:
100% of sets with α also contain β
50% of sets with α, β also have ε
50% of sets with α, β also have θ
We can also illustrate this through a variety of examples.
=== Example 2 ===
Assume that a large supermarket tracks sales data by stock-keeping unit (SKU) for each item: each item, such as "butter" or "bread", is identified by a numerical SKU. The supermarket has a database of transactions where each transaction is a set of SKUs that were bought together.
Let the database of transactions consist of following itemsets:
We will use Apriori to determine the frequent item sets of this database. To do this, we will say that an item set is frequent if it appears in at least 3 transactions of the database: the value 3 is the support threshold.
The first step of Apriori is to count up the number of occurrences, called the support, of each member item separately. By scanning the database for the first time, we obtain the following result
All the itemsets of size 1 have a support of at least 3, so they are all frequent.
The next step is to generate a list of all pairs of the frequent items.
For example, regarding the pair {1,2}: the first table of Example 2 shows items 1 and 2 appearing together in three of the itemsets; therefore, we say item {1,2} has support of three.
The pairs {1,2}, {2,3}, {2,4}, and {3,4} all meet or exceed the minimum support of 3, so they are frequent. The pairs {1,3} and {1,4} are not. Now, because {1,3} and {1,4} are not frequent, any larger set which contains {1,3} or {1,4} cannot be frequent. In this way, we can prune sets: we will now look for frequent triples in the database, but we can already exclude all the triples that contain one of these two pairs:
in the example, there are no frequent triplets. {2,3,4} is below the minimal threshold, and the other triplets were excluded because they were super sets of pairs that were already below the threshold.
We have thus determined the frequent sets of items in the database, and illustrated how some items were not counted because one of their subsets was already known to be below the threshold.
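The pseudo code above translates directly into a runnable sketch. The following Python function is a teaching illustration rather than an optimized implementation (no hash tree); the seven-transaction database at the bottom reproduces the supports walked through in this example, yielding the frequent single items and the frequent pairs {1,2}, {2,3}, {2,4}, and {3,4}, with no frequent triples:

from itertools import combinations

def apriori(transactions, min_support):
    """Return a dict mapping each frequent itemset to its support count."""
    transactions = [frozenset(t) for t in transactions]
    counts = {}
    for t in transactions:                      # L1: count single items
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        prev = list(frequent)
        candidates = set()
        for i in range(len(prev)):              # join step
            for j in range(i + 1, len(prev)):
                union = prev[i] | prev[j]
                # prune step: all (k-1)-subsets must themselves be frequent
                if len(union) == k and all(
                    frozenset(sub) in frequent
                    for sub in combinations(union, k - 1)
                ):
                    candidates.add(union)
        counts = {c: 0 for c in candidates}
        for t in transactions:                  # one scan per level
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        frequent = {s: c for s, c in counts.items() if c >= min_support}
        result.update(frequent)
        k += 1
    return result

db = [{1, 2, 3, 4}, {1, 2, 4}, {1, 2}, {2, 3, 4}, {2, 3}, {3, 4}, {2, 4}]
print(apriori(db, 3))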
== Limitations ==
Apriori, while historically significant, suffers from a number of inefficiencies or trade-offs, which have spawned other algorithms. Candidate generation produces large numbers of subsets (the algorithm attempts to load the candidate set with as many subsets as possible before each scan of the database). Bottom-up subset exploration (essentially a breadth-first traversal of the subset lattice) finds any maximal subset S only after all {\displaystyle 2^{|S|}-1} of its proper subsets.
The algorithm scans the database too many times, which reduces the overall performance. Because of this, the algorithm assumes that the database is permanently held in memory.
Also, both the time and space complexity of this algorithm are very high:
{\displaystyle O\left(2^{|D|}\right)}, thus exponential, where {\displaystyle |D|}
is the horizontal width (the total number of items) present in the database.
Later algorithms such as Max-Miner try to identify the maximal frequent item sets without enumerating their subsets, and perform "jumps" in the search space rather than a purely bottom-up approach.
== References ==
== External links ==
ARtool, GPL Java association rule mining application with GUI, offering implementations of multiple algorithms for discovery of frequent patterns and extraction of association rules (includes Apriori)
SPMF offers Java open-source implementations of Apriori and several variations such as AprioriClose, UApriori, AprioriInverse, AprioriRare, MSApriori, AprioriTID, and other more efficient algorithms such as FPGrowth and LCM.
Christian Borgelt provides C implementations for Apriori and many other frequent pattern mining algorithms (Eclat, FPGrowth, etc.). The code is distributed as free software under the MIT license.
The R package arules contains Apriori and Eclat and infrastructure for representing, manipulating and analyzing transaction data and patterns.
Efficient-Apriori is a Python package with an implementation of the algorithm as presented in the original paper.
Raft is a consensus algorithm designed as an alternative to the Paxos family of algorithms. It was meant to be more understandable than Paxos by means of separation of logic, but it is also formally proven safe and offers some additional features. Raft offers a generic way to distribute a state machine across a cluster of computing systems, ensuring that each node in the cluster agrees upon the same series of state transitions. It has a number of open-source reference implementations, with full-specification implementations in Go, C++, Java, and Scala. It is named after Reliable, Replicated, Redundant, And Fault-Tolerant.
Raft is not a Byzantine fault tolerant (BFT) algorithm; the nodes trust the elected leader.
== Basics ==
Raft achieves consensus via an elected leader. A server in a raft cluster is either a leader or a follower, and can be a candidate in the precise case of an election (leader unavailable). The leader is responsible for log replication to the followers. It regularly informs the followers of its existence by sending a heartbeat message. Each follower has a timeout (typically between 150 and 300 ms) in which it expects the heartbeat from the leader. The timeout is reset on receiving the heartbeat. If no heartbeat is received the follower changes its status to candidate and starts a leader election.
=== Approach of the consensus problem in Raft ===
Raft implements consensus by a leader approach. The cluster has one and only one elected leader which is fully responsible for managing log replication on the other servers of the cluster. It means that the leader can decide on new entries' placement and establishment of data flow between it and the other servers without consulting other servers. A leader leads until it fails or disconnects, in which case surviving servers elect a new leader.
The consensus problem is decomposed in Raft into two relatively independent subproblems listed down below.
==== Leader election ====
When the existing leader fails or when the algorithm initializes, a new leader needs to be elected.
In this case, a new term starts in the cluster. A term is an arbitrary period of time on the server for which a new leader needs to be elected. Each term starts with a leader election. If the election is completed successfully (i.e. a single leader is elected) the term keeps going with normal operations orchestrated by the new leader. If the election is a failure, a new term starts, with a new election.
A leader election is started by a candidate server. A server becomes a candidate if it receives no communication from the leader over a period called the election timeout, so it assumes there is no acting leader anymore. It starts the election by incrementing the term counter, voting for itself as new leader, and sending a message to all other servers requesting their vote. A server will vote only once per term, on a first-come-first-served basis. If a candidate receives a message from another server with a term number larger than the candidate's current term, then the candidate's election is defeated and the candidate changes into a follower and recognizes the leader as legitimate. If a candidate receives a majority of votes, then it becomes the new leader. If neither happens, e.g., because of a split vote, then a new term starts, and a new election begins.
Raft uses a randomized election timeout to ensure that split vote problems are resolved quickly. This should reduce the chance of a split vote because servers won't become candidates at the same time: a single server will time out, win the election, then become leader and send heartbeat messages to other servers before any of the followers can become candidates.
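A minimal sketch of this follower-side logic in Python, with invented class and method names, omitting the log up-to-date check described later under safety:

import random
import time

ELECTION_TIMEOUT = (0.150, 0.300)   # seconds, matching the typical 150-300 ms

class RaftFollower:
    def __init__(self):
        self.current_term = 0
        self.voted_for = None        # who we voted for in current_term, if anyone
        self.reset_election_timer()

    def reset_election_timer(self):
        # Randomized timeouts make simultaneous candidacies,
        # and thus split votes, unlikely.
        self.deadline = time.monotonic() + random.uniform(*ELECTION_TIMEOUT)

    def on_heartbeat(self, term):
        # A heartbeat from a legitimate leader resets the timer.
        if term >= self.current_term:
            self.current_term = term
            self.reset_election_timer()

    def on_request_vote(self, term, candidate_id):
        # Grant at most one vote per term, first come first served.
        if term < self.current_term:
            return False
        if term > self.current_term:
            self.current_term, self.voted_for = term, None
        if self.voted_for in (None, candidate_id):
            self.voted_for = candidate_id
            self.reset_election_timer()
            return True
        return False

    def should_start_election(self):
        return time.monotonic() >= self.deadline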
==== Log replication ====
The leader is responsible for the log replication. It accepts client requests. Each client request consists of a command to be executed by the replicated state machines in the cluster. After being appended to the leader's log as a new entry, each of the requests is forwarded to the followers as AppendEntries messages. In case of unavailability of the followers, the leader retries AppendEntries messages indefinitely, until the log entry is eventually stored by all of the followers.
Once the leader receives confirmation from half or more of its followers that the entry has been replicated, the leader applies the entry to its local state machine, and the request is considered committed. This event also commits all previous entries in the leader's log. Once a follower learns that a log entry is committed, it applies the entry to its local state machine. This ensures consistency of the logs between all the servers through the cluster, ensuring that the safety rule of Log Matching is respected.
In the case of a leader crash, the logs can be left inconsistent, with some logs from the old leader not being fully replicated through the cluster. The new leader will then handle inconsistency by forcing the followers to duplicate its own log. To do so, for each of its followers, the leader will compare its log with the log from the follower, find the last entry where they agree, then delete all the entries coming after this critical entry in the follower log and replace it with its own log entries. This mechanism will restore log consistency in a cluster subject to failures.
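The follower-side consistency check can be sketched as follows; the function name and the representation of the log as a list of (term, command) pairs are illustrative assumptions:

def append_entries(log, prev_index, prev_term, entries):
    """Return True if the leader's entries were appended.
    log: list of (term, command) pairs; prev_index is 1-based,
    with 0 meaning the entries start at the very beginning."""
    if prev_index > len(log):
        return False                  # gap: leader retries from earlier
    if prev_index > 0 and log[prev_index - 1][0] != prev_term:
        return False                  # term mismatch: leader backs up further
    del log[prev_index:]              # drop any conflicting suffix
    log.extend(entries)
    return True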
=== Safety ===
==== Safety rules in Raft ====
Raft guarantees each of these safety properties:
Election safety: at most one leader can be elected in a given term.
Leader append-only: a leader can only append new entries to its logs (it can neither overwrite nor delete entries).
Log matching: if two logs contain an entry with the same index and term, then the logs are identical in all entries up through the given index.
Leader completeness: if a log entry is committed in a given term then it will be present in the logs of the leaders since this term.
State machine safety: if a server has applied a particular log entry to its state machine, then no other server may apply a different command for the same log.
The first four rules are guaranteed by the details of the algorithm described in the previous section. The State Machine Safety is guaranteed by a restriction on the election process.
==== State machine safety ====
This rule is ensured by a simple restriction: a candidate can't win an election unless its log contains all committed entries. In order to be elected, a candidate has to contact a majority of the cluster, and given the rules for logs to be committed, it means that every committed entry is going to be present on at least one of the servers the candidates contact.
Raft determines which of two logs (carried by two distinct servers) is more up-to-date by comparing the index and term of the last entries in the logs. If the logs have last entries with different terms, then the log with the later term is more up-to-date. If the logs end with the same term, then whichever log is longer is more up-to-date.
In Raft, the request from a candidate to a voter includes information about the candidate's log. If its own log is more up-to-date than the candidate's log, the voter denies its vote to the candidate. This implementation ensures the State Machine Safety rule.
==== Follower crashes ====
If a follower crashes, AppendEntries and vote requests sent by other servers will fail. Such failures are handled by the servers trying indefinitely to reach the downed follower. If the follower restarts, the pending requests will complete. If the request has already been taken into account before the failure, the restarted follower will just ignore it.
==== Timing and availability ====
Timing is critical in Raft to elect and maintain a steady leader over time, in order to have a perfect availability of the cluster. Stability is ensured by respecting the timing requirement of the algorithm:
broadcastTime << electionTimeout << MTBF
broadcastTime is the average time it takes a server to send a request to every server in the cluster and receive responses. It is relative to the infrastructure used.
MTBF (Mean Time Between Failures) is the average time between failures for a server. It is also relative to the infrastructure.
electionTimeout is the same as described in the Leader Election section. It is something the programmer must choose.
Typical numbers for these values can be 0.5 ms to 20 ms for broadcastTime, which implies that the programmer sets the electionTimeout somewhere between 10 ms and 500 ms. It can take several weeks or months between single server failures, which means the values are sufficient for a stable cluster.
== Extensions ==
The dissertation “Consensus: Bridging Theory and Practice” by one of the co-authors of the original paper describes extensions to the original algorithm:
Pre-Vote: when a member rejoins the cluster, it can, depending on timing, trigger an election even though there is already a leader. To avoid this, pre-vote first checks in with the other members. Avoiding the unnecessary election improves the availability of the cluster, so this extension is usually present in production implementations.
Leadership transfer: a leader that is shutting down orderly can explicitly transfer the leadership to another member. This can be faster than waiting for a timeout. Also, a leader can step down when another member would be a better leader, for example when that member is on a faster machine.
== Production use of Raft ==
CockroachDB uses Raft in its replication layer.
Etcd uses Raft to manage a highly-available replicated log.
Hazelcast uses Raft to provide its CP Subsystem, a strongly consistent layer for distributed data structures.
MongoDB uses a variant of Raft in its replica sets.
Neo4j uses Raft to ensure consistency and safety.
RabbitMQ uses Raft to implement durable, replicated FIFO queues.
ScyllaDB uses Raft for metadata (schema and topology changes).
Splunk Enterprise uses Raft in a Search Head Cluster (SHC).
TiDB uses Raft with its storage engine, TiKV.
YugabyteDB uses Raft in DocDB replication.
ClickHouse uses Raft for its in-house implementation of a ZooKeeper-like service.
Redpanda uses the Raft consensus algorithm for data replication.
Apache Kafka Raft (KRaft) uses Raft for metadata management.
NATS Messaging uses the Raft consensus algorithm for JetStream cluster management and data replication.
Camunda uses the Raft consensus algorithm for data replication.
== References ==
== External links ==
Official website | Wikipedia/Raft_(algorithm) |
Paxos is a family of protocols for solving consensus in a network of unreliable or fallible processors.
Consensus is the process of agreeing on one result among a group of participants. This problem becomes difficult when the participants or their communications may experience failures.
Consensus protocols are the basis for the state machine replication approach to distributed computing, as suggested by Leslie Lamport and surveyed by Fred Schneider. State machine replication is a technique for converting an algorithm into a fault-tolerant, distributed implementation. Ad-hoc techniques may leave important cases of failures unresolved. The principled approach proposed by Lamport et al. ensures all cases are handled safely.
The Paxos protocol was first submitted in 1989 and named after a fictional legislative consensus system used on the Paxos island in Greece, where Lamport wrote that the parliament had to function "even though legislators continually wandered in and out of the parliamentary Chamber". It was later published as a journal article in 1998.
The Paxos family of protocols includes a spectrum of trade-offs between the number of processors, number of message delays before learning the agreed value, the activity level of individual participants, number of messages sent, and types of failures. Although no deterministic fault-tolerant consensus protocol can guarantee progress in an asynchronous network (a result proved in a paper by Fischer, Lynch and Paterson), Paxos guarantees safety (consistency), and the conditions that could prevent it from making progress are difficult to provoke.
Paxos is usually used where durability is required (for example, to replicate a file or a database), in which the amount of durable state could be large. The protocol attempts to make progress even during periods when some bounded number of replicas are unresponsive. There is also a mechanism to drop a permanently failed replica or to add a new replica.
== History ==
The topic predates the protocol. In 1988, Lynch, Dwork and Stockmeyer had demonstrated the solvability of consensus in a broad family of "partially synchronous" systems. Paxos has strong similarities to a protocol used for agreement in "viewstamped replication", first published by Oki and Liskov in 1988, in the context of distributed transactions. Notwithstanding this prior work, Paxos offered a particularly elegant formalism, and included one of the earliest proofs of safety for a fault-tolerant distributed consensus protocol.
Reconfigurable state machines have strong ties to prior work on reliable group multicast protocols that support dynamic group membership, for example Birman's work in 1985 and 1987 on the virtually synchronous gbcast protocol. However, gbcast is unusual in supporting durability and addressing partitioning failures.
Most reliable multicast protocols lack these properties, which are required for implementations of the state machine replication model.
This point is elaborated in a paper by Lamport, Malkhi and Zhou.
Paxos protocols are members of a theoretical class of solutions to a problem formalized as uniform agreement with crash failures.
Lower bounds for this problem have been proved by Keidar and Shraer. Derecho, a C++ software library for cloud-scale state machine replication, offers a Paxos protocol that has been integrated with self-managed virtually synchronous membership. This protocol matches the Keidar and Shraer optimality bounds, and maps efficiently to modern remote DMA (RDMA) datacenter hardware (but uses TCP if RDMA is not available).
== Assumptions ==
In order to simplify the presentation of Paxos, the following assumptions and definitions are made explicit. Techniques to broaden the applicability are known in the literature, and are not covered in this article.
=== Processors ===
Processors operate at arbitrary speed.
Processors may experience failures.
Processors with stable storage may re-join the protocol after failures (following a crash-recovery failure model).
Processors do not collude, lie, or otherwise attempt to subvert the protocol. (That is, Byzantine failures don't occur. See Byzantine Paxos for a solution that tolerates failures that arise from arbitrary/malicious behavior of the processes.)
=== Network ===
Processors can send messages to any other processor.
Messages are sent asynchronously and may take arbitrarily long to deliver.
Messages may be lost, reordered, or duplicated.
Messages are delivered without corruption. (That is, Byzantine failures don't occur. See Byzantine Paxos for a solution which tolerates corrupted messages that arise from arbitrary/malicious behavior of the messaging channels.)
=== Number of processors ===
In general, a consensus algorithm can make progress using
{\displaystyle n=2F+1} processors, despite the simultaneous failure of any {\displaystyle F}
processors: in other words, the number of non-faulty processes must be strictly greater than the number of faulty processes. However, using reconfiguration, a protocol may be employed which survives any number of total failures as long as no more than F fail simultaneously. For Paxos protocols, these reconfigurations can be handled as separate configurations.
== Safety and liveness properties ==
In order to guarantee safety (also called "consistency"), Paxos defines three properties and ensures the first two are always held, regardless of the pattern of failures:
Validity (or non-triviality)
Only proposed values can be chosen and learned.
Agreement (or consistency, or safety)
No two distinct learners can learn different values (or there can't be more than one decided value).
Termination (or liveness)
If value C has been proposed, then eventually learner L will learn some value (if sufficient processors remain non-faulty).
Note that Paxos is not guaranteed to terminate, and thus does not have the liveness property. This is supported by the Fischer-Lynch-Paterson impossibility result (FLP), which states that a consistency protocol can only have two of safety, liveness, and fault tolerance. As Paxos's point is to ensure fault tolerance and it guarantees safety, it cannot also guarantee liveness.
== Typical deployment ==
In most deployments of Paxos, each participating process acts in three roles: Proposer, Acceptor and Learner. This reduces the message complexity significantly, without sacrificing correctness:
In Paxos, clients send commands to a leader. During normal operation, the leader receives a client's command, assigns it a new command number
{\displaystyle i}, and then begins the {\displaystyle i}th instance of the consensus algorithm by sending messages to a set of acceptor processes.
By merging roles, the protocol "collapses" into an efficient client-master-replica style deployment, typical of the database community. The benefit of the Paxos protocols (including implementations with merged roles) is the guarantee of its safety properties.
A typical implementation's message flow is covered in the section Multi-Paxos.
== Basic Paxos ==
This protocol is the most basic of the Paxos family. Each "instance" (or "execution") of the basic Paxos protocol decides on a single output value. The protocol proceeds over several rounds. A successful round has 2 phases: phase 1 (which is divided into parts a and b) and phase 2 (which is divided into parts a and b). See below the description of the phases. Remember that we assume an asynchronous model, so e.g. a processor may be in one phase while another processor may be in another.
=== Phase 1 ===
==== Phase 1a: Prepare ====
A Proposer creates a message, which we call a Prepare. The message is identified with a unique number, n, which must be greater than any number previously used in a Prepare message by this Proposer. Note that n is not the value to be proposed; it is simply a unique identifier of this initial message by the Proposer. In fact, the Prepare message needn't contain the proposed value (often denoted by v).
The Proposer chooses at least a Quorum of Acceptors and sends the Prepare message containing n to them. A Proposer should not initiate Paxos if it cannot communicate with enough Acceptors to constitute a Quorum.
==== Phase 1b: Promise ====
The Acceptors wait for a Prepare message from any of the Proposers. When an Acceptor receives a Prepare message, the Acceptor must examine the identifier number, n, of that message. There are two cases:
If n is higher than every previous proposal number received by the Acceptor (from any Proposer), then the Acceptor must return a message (called a Promise) to the Proposer, indicating that the Acceptor will ignore all future proposals numbered less than or equal to n. The Promise must include the highest number among the Proposals that the Acceptor previously accepted, along with the corresponding accepted value.
If n is less than or equal to any previous proposal number received by the Acceptor, the Acceptor needn't respond and can ignore the proposal. However, for the sake of optimization, sending a denial, or negative acknowledgement (NAK), response would tell the Proposer that it can stop its attempt to create consensus with proposal n.
=== Phase 2 ===
==== Phase 2a: Accept ====
If a Proposer receives Promises from a Quorum of Acceptors, it needs to choose a value v for its proposal. If any Acceptors had previously accepted any proposal, then they will have sent their values to the Proposer, who now must set the value of its proposal, v, to the value associated with the highest proposal number reported by the Acceptors; call it z. If none of the Acceptors had accepted a proposal up to this point, then the Proposer may choose the value it originally wanted to propose, say x.
The Proposer sends an Accept message, (n, v), to a Quorum of Acceptors with the chosen value for its proposal, v, and the proposal number n (which is the same as the number contained in the Prepare message previously sent to the Acceptors). So, the Accept message is either (n, v=z) or, in case none of the Acceptors previously accepted a value, (n, v=x).
This Accept message should be interpreted as a "request", as in "Accept this proposal, please!".
==== Phase 2b: Accepted ====
If an Acceptor receives an Accept message, (n, v), from a Proposer, it must accept it if and only if it has not already promised (in Phase 1b of the Paxos protocol) to only consider proposals having an identifier greater than n.
If the Acceptor has not already promised (in Phase 1b) to only consider proposals having an identifier greater than n, it should register the value v (of the just received Accept message) as the accepted value (of the Protocol), and send an Accepted message to the Proposer and every Learner (which can typically be the Proposers themselves). Learners will learn the decided value only after receiving Accepted messages from a majority of acceptors, i.e. not after receiving just the first Accept message.
Else, it can ignore the Accept message or request.
Note that consensus is achieved when a majority of Acceptors accept the same identifier number (rather than the same value). Because each identifier is unique to a Proposer and only one value may be proposed per identifier, all Acceptors that accept the same identifier thereby accept the same value. These facts result in a few counter-intuitive scenarios that do not impact correctness: Acceptors can accept multiple values, a value may achieve a majority across Acceptors (with different identifiers) only to later be changed, and Acceptors may continue to accept proposals after an identifier has achieved a majority. However, the Paxos protocol guarantees that consensus is permanent and the chosen value is immutable.
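The acceptor's side of both phases fits in a short Python sketch. The class below is illustrative (the message formats and field names are invented, not from any particular implementation); a proposer that gathers a quorum of Promises must adopt the value reported with the highest accepted proposal number, falling back to its own value only if none was reported:

class Acceptor:
    """Single-instance Basic Paxos acceptor (illustrative sketch)."""
    def __init__(self):
        self.promised_n = -1     # highest proposal number promised so far
        self.accepted_n = -1     # number of the proposal accepted, if any
        self.accepted_v = None   # value of the proposal accepted, if any

    def on_prepare(self, n):
        # Phase 1b: promise to ignore proposals numbered <= n, and report
        # the highest-numbered proposal accepted so far, if any.
        if n > self.promised_n:
            self.promised_n = n
            return ("Promise", n, self.accepted_n, self.accepted_v)
        return ("Nack", n)       # optimization: tell the Proposer to give up

    def on_accept(self, n, v):
        # Phase 2b: accept unless a promise to a higher number was made.
        if n >= self.promised_n:
            self.promised_n = n
            self.accepted_n, self.accepted_v = n, v
            return ("Accepted", n, v)
        return ("Nack", n)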
=== When rounds fail ===
Rounds fail when multiple Proposers send conflicting Prepare messages, or when the Proposer does not receive a Quorum of responses (Promise or Accepted). In these cases, another round must be started with a higher proposal number.
=== Paxos can be used to select a leader ===
Notice that a Proposer in Paxos could propose "I am the leader" (or, for example, "Proposer X is the leader"). Because of the agreement and validity guarantees of Paxos, if accepted by a Quorum, then the Proposer is now known to be the leader to all other nodes. This satisfies the needs of leader election because there is a single node believing it is the leader and a single node known to be the leader at all times.
=== Graphic representation of the flow of messages in the basic Paxos ===
The following diagrams represent several cases/situations of the application of the Basic Paxos protocol. Some cases show how the Basic Paxos protocol copes with the failure of certain (redundant) components of the distributed system.
Note that the values returned in the Promise message are "null" the first time a proposal is made (since no Acceptor has accepted a value before in this round).
==== Basic Paxos without failures ====
In the diagram below, there is 1 Client, 1 Proposer, 3 Acceptors (i.e. the Quorum size is 3) and 2 Learners (represented by the 2 vertical lines). This diagram represents the case of a first round, which is successful (i.e. no process in the network fails).
Here, V is the last of (Va, Vb, Vc).
==== Error cases in basic Paxos ====
The simplest error cases are the failure of an Acceptor (when a Quorum of Acceptors remains alive) and failure of a redundant Learner. In these cases, the protocol requires no "recovery" (i.e. it still succeeds): no additional rounds or messages are required, as shown below (in the next two diagrams/cases).
==== Basic Paxos when an Acceptor fails ====
In the following diagram, one of the Acceptors in the Quorum fails, so the Quorum size becomes 2. In this case, the Basic Paxos protocol still succeeds.
Client Proposer Acceptor Learner
| | | | | | |
X-------->| | | | | | Request
| X--------->|->|->| | | Prepare(1)
| | | | ! | | !! FAIL !!
| |<---------X--X | | Promise(1,{Va, Vb, null})
| X--------->|->| | | Accept!(1,V)
| |<---------X--X--------->|->| Accepted(1,V)
|<---------------------------------X--X Response
| | | | | |
==== Basic Paxos when a redundant learner fails ====
In the following case, one of the (redundant) Learners fails, but the Basic Paxos protocol still succeeds.
Client Proposer Acceptor Learner
| | | | | | |
X-------->| | | | | | Request
| X--------->|->|->| | | Prepare(1)
| |<---------X--X--X | | Promise(1,{Va,Vb,Vc})
| X--------->|->|->| | | Accept!(1,V)
| |<---------X--X--X------>|->| Accepted(1,V)
| | | | | | ! !! FAIL !!
|<---------------------------------X Response
| | | | | |
==== Basic Paxos when a Proposer fails ====
In this case, a Proposer fails after proposing a value, but before the agreement is reached. Specifically, it fails in the middle of the Accept message, so only one Acceptor of the Quorum receives the value. Meanwhile, a new Leader (a Proposer) is elected (but this is not shown in detail). Note that there are 2 rounds in this case (rounds proceed vertically, from the top to the bottom).
Client Proposer Acceptor Learner
| | | | | | |
X----->| | | | | | Request
| X------------>|->|->| | | Prepare(1)
| |<------------X--X--X | | Promise(1,{Va, Vb, Vc})
| | | | | | |
| | | | | | | !! Leader fails during broadcast !!
| X------------>| | | | | Accept!(1,V)
| ! | | | | |
| | | | | | | !! NEW LEADER !!
| X--------->|->|->| | | Prepare(2)
| |<---------X--X--X | | Promise(2,{V, null, null})
| X--------->|->|->| | | Accept!(2,V)
| |<---------X--X--X------>|->| Accepted(2,V)
|<---------------------------------X--X Response
| | | | | | |
==== Basic Paxos when multiple Proposers conflict ====
The most complex case is when multiple Proposers believe themselves to be Leaders. For instance, the current leader may fail and later recover, but the other Proposers have already re-selected a new leader. The recovered leader has not learned this yet and attempts to begin one round in conflict with the current leader. In the diagram below, 4 unsuccessful rounds are shown, but there could be more (as suggested at the bottom of the diagram).
Client Proposer Acceptor Learner
| | | | | | |
X----->| | | | | | Request
| X------------>|->|->| | | Prepare(1)
| |<------------X--X--X | | Promise(1,{null,null,null})
| ! | | | | | !! LEADER FAILS
| | | | | | | !! NEW LEADER (knows last number was 1)
| X--------->|->|->| | | Prepare(2)
| |<---------X--X--X | | Promise(2,{null,null,null})
| | | | | | | | !! OLD LEADER recovers
| | | | | | | | !! OLD LEADER tries 2, denied
| X------------>|->|->| | | Prepare(2)
| |<------------X--X--X | | Nack(2)
| | | | | | | | !! OLD LEADER tries 3
| X------------>|->|->| | | Prepare(3)
| |<------------X--X--X | | Promise(3,{null,null,null})
| | | | | | | | !! NEW LEADER proposes, denied
| | X--------->|->|->| | | Accept!(2,Va)
| | |<---------X--X--X | | Nack(3)
| | | | | | | | !! NEW LEADER tries 4
| | X--------->|->|->| | | Prepare(4)
| | |<---------X--X--X | | Promise(4,{null,null,null})
| | | | | | | | !! OLD LEADER proposes, denied
| X------------>|->|->| | | Accept!(3,Vb)
| |<------------X--X--X | | Nack(4)
| | | | | | | | ... and so on ...
==== Basic Paxos where an Acceptor accepts Two Different Values ====
In the following case, one Proposer achieves acceptance of value V1 by one Acceptor before failing. A new Proposer prepares the Acceptors that never accepted V1, allowing it to propose V2. Then V2 is accepted by all Acceptors, including the one that initially accepted V1.
Proposer Acceptor Learner
| | | | | | |
X--------->|->|->| | | Prepare(1)
|<---------X--X--X | | Promise(1,{null,null,null})
x--------->| | | | | Accept!(1,V1)
| | X------------>|->| Accepted(1,V1)
! | | | | | | !! FAIL !!
| | | | | |
X--------->|->| | | Prepare(2)
|<---------X--X | | Promise(2,{null,null})
X------>|->|->| | | Accept!(2,V2)
|<------X--X--X------>|->| Accepted(2,V2)
| | | | | |
==== Basic Paxos where a multi-identifier majority is insufficient ====
In the following case, one Proposer achieves acceptance of value V1 by one Acceptor before failing. A new Proposer prepares the Acceptors that never accepted V1, allowing it to propose V2. This Proposer is able to get one Acceptor to accept V2 before failing. A new Proposer finds a majority that includes the Acceptor that has accepted V1, and must propose it. The Proposer manages to get two Acceptors to accept it before failing. At this point, three Acceptors have accepted V1, but not for the same identifier. Finally, a new Proposer prepares the majority that has not seen the largest accepted identifier. The value associated with the largest identifier in that majority is V2, so it must propose it. This Proposer then gets all Acceptors to accept V2, achieving consensus.
Proposer Acceptor Learner
| | | | | | | | | | |
X--------------->|->|->|->|->| | | Prepare(1)
|<---------------X--X--X--X--X | | Promise(1,{null,null,null,null,null})
x--------------->| | | | | | | Accept!(1,V1)
| | | | X------------------>|->| Accepted(1,V1)
! | | | | | | | | | | !! FAIL !!
| | | | | | | | | |
X--------------->|->|->|->| | | Prepare(2)
|<---------------X--X--X--X | | Promise(2,{null,null,null,null})
X--------------->| | | | | | Accept!(2,V2)
| | | | X--------------->|->| Accepted(2,V2)
! | | | | | | | | | !! FAIL !!
| | | | | | | | |
X--------->|---->|->|->| | | Prepare(3)
|<---------X-----X--X--X | | Promise(3,{V1,null,null,null})
X--------------->|->| | | | Accept!(3,V1)
| | | | X--X--------->|->| Accepted(3,V1)
! | | | | | | | | !! FAIL !!
| | | | | | | |
X------>|->|------->| | | Prepare(4)
|<------X--X--|--|--X | | Promise(4,{V1(1),V2(2),null})
X------>|->|->|->|->| | | Accept!(4,V2)
| X--X--X--X--X------>|->| Accepted(4,V2)
==== Basic Paxos where new Proposers cannot change an existing consensus ====
In the following case, one Proposer achieves acceptance of value V1 by two Acceptors before failing. A new Proposer may start another round, but it is now impossible for that Proposer to prepare a majority that doesn't include at least one Acceptor that has accepted V1. As such, even though the Proposer doesn't see the existing consensus, the Proposer's only option is to propose the value already agreed upon. New Proposers can continually increase the identifier to restart the process, but the consensus can never be changed.
Proposer Acceptor Learner
| | | | | | |
X--------->|->|->| | | Prepare(1)
|<---------X--X--X | | Promise(1,{null,null,null})
x--------->|->| | | | Accept!(1,V1)
| | X--X--------->|->| Accepted(1,V1)
! | | | | | | !! FAIL !!
| | | | | |
X--------->|->| | | Prepare(2)
|<---------X--X | | Promise(2,{V1,null})
X------>|->|->| | | Accept!(2,V1)
|<------X--X--X------>|->| Accepted(2,V1)
| | | | | |
== Multi-Paxos ==
A typical deployment of Paxos requires a continuous stream of agreed values acting as commands to a distributed state machine. If each command were the result of a single instance of the Basic Paxos protocol, a significant amount of overhead would result.
If the leader is relatively stable, phase 1 becomes unnecessary. Thus, it is possible to skip phase 1 for future instances of the protocol with the same leader.
To achieve this, an instance number I is included along with each value; it is incremented by the same Leader for each new instance of the protocol. Multi-Paxos reduces the failure-free message delay (proposal to learning) from 4 delays to 2 delays.
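As a minimal sketch (assuming hypothetical field and helper names), a stable Multi-Paxos leader might run Phase 1 once and then issue Accept! messages directly for each new instance:

interface LeaderState {
  n: number;            // round number established by a successful Phase 1
  nextInstance: number; // instance counter I, advanced for each new value
  phase1Done: boolean;  // remains true while this leader is stable
}

declare function runPhase1(leader: LeaderState): void; // Prepare/Promise exchange

function propose(leader: LeaderState, v: string): { n: number; i: number; v: string } {
  if (!leader.phase1Done) {
    runPhase1(leader);      // needed only once per stable leader
    leader.phase1Done = true;
  }
  const i = leader.nextInstance++;
  return { n: leader.n, i, v }; // send Accept!(N, I, v) directly: 2 message delays
}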
=== Graphic representation of the flow of messages in the Multi-Paxos ===
==== Multi-Paxos without failures ====
In the following diagram, only one instance (or "execution") of the basic Paxos protocol, with an initial Leader (a Proposer), is shown. Note that a Multi-Paxos consists of several instances of the basic Paxos protocol.
Client Proposer Acceptor Learner
| | | | | | | --- First Request ---
X-------->| | | | | | Request
| X--------->|->|->| | | Prepare(N)
| |<---------X--X--X | | Promise(N,I,{Va,Vb,Vc})
| X--------->|->|->| | | Accept!(N,I,V)
| |<---------X--X--X------>|->| Accepted(N,I,V)
|<---------------------------------X--X Response
| | | | | | |
where V = last of (Va, Vb, Vc).
==== Multi-Paxos when phase 1 can be skipped ====
In this case, subsequent instances of the basic Paxos protocol (represented by I+1) use the same leader, so phase 1 of these subsequent instances, which consists of the Prepare and Promise sub-phases, is skipped. Note that the Leader should be stable, i.e. it should not crash or change.
Client Proposer Acceptor Learner
| | | | | | | --- Following Requests ---
X-------->| | | | | | Request
| X--------->|->|->| | | Accept!(N,I+1,W)
| |<---------X--X--X------>|->| Accepted(N,I+1,W)
|<---------------------------------X--X Response
| | | | | | |
==== Multi-Paxos when roles are collapsed ====
A common deployment of Multi-Paxos consists of collapsing the roles of the Proposers, Acceptors and Learners into "Servers". So, in the end, there are only "Clients" and "Servers".
The following diagram represents the first "instance" of a basic Paxos protocol, when the roles of the Proposer, Acceptor and Learner are collapsed to a single role, called the "Server".
Client Servers
| | | | --- First Request ---
X-------->| | | Request
| X->|->| Prepare(N)
| |<-X--X Promise(N, I, {Va, Vb})
| X->|->| Accept!(N, I, Vn)
| X<>X<>X Accepted(N, I)
|<--------X | | Response
| | | |
==== Multi-Paxos when roles are collapsed and the leader is steady ====
In subsequent instances of the basic Paxos protocol with the same leader as in the previous instances, phase 1 can be skipped.
Client Servers
X-------->| | | Request
| X->|->| Accept!(N,I+1,W)
| X<>X<>X Accepted(N,I+1)
|<--------X | | Response
| | | |
== Optimisations ==
A number of optimisations can be performed to reduce the number of exchanged messages, to improve the performance of the protocol, etc. A few of these optimisations are reported below.
"We can save messages at the cost of an extra message delay by having a single distinguished learner that informs the other learners when it finds out that a value has been chosen. Acceptors then send Accepted messages only to the distinguished learner. In most applications, the roles of leader and distinguished learner are performed by the same processor.
"A leader can send its Prepare and Accept! messages just to a quorum of acceptors. As long as all acceptors in that quorum are working and can communicate with the leader and the learners, there is no need for acceptors not in the quorum to do anything.
"Acceptors do not care what value is chosen. They simply respond to Prepare and Accept! messages to ensure that, despite failures, only a single value can be chosen. However, if an acceptor does learn what value has been chosen, it can store the value in stable storage and erase any other information it has saved there. If the acceptor later receives a Prepare or Accept! message, instead of performing its Phase1b or Phase2b action, it can simply inform the leader of the chosen value.
"Instead of sending the value v, the leader can send a hash of v to some acceptors in its Accept! messages. A learner will learn that v is chosen if it receives Accepted messages for either v or its hash from a quorum of acceptors, and at least one of those messages contains v rather than its hash. However, a leader could receive Promise messages that tell it the hash of a value v that it must use in its Phase2a action without telling it the actual value of v. If that happens, the leader cannot execute its Phase2a action until it communicates with some process that knows v."
"A proposer can send its proposal only to the leader rather than to all coordinators. However, this requires that the result of the leader-selection algorithm be broadcast to the proposers, which might be expensive. So, it might be better to let the proposer send its proposal to all coordinators. (In that case, only the coordinators themselves need to know who the leader is.)
"Instead of each acceptor sending Accepted messages to each learner, acceptors can send their Accepted messages to the leader and the leader can inform the learners when a value has been chosen. However, this adds an extra message delay.
"Finally, observe that phase 1 is unnecessary for round 1 .. The leader of round 1 can begin the round by sending an Accept! message with any proposed value."
== Cheap Paxos ==
Cheap Paxos extends Basic Paxos to tolerate F failures with F+1 main processors and F auxiliary processors by dynamically reconfiguring after each failure.
This reduction in processor requirements comes at the expense of liveness; if too many main processors fail in a short time, the system must halt until the auxiliary processors can reconfigure the system. During stable periods, the auxiliary processors take no part in the protocol.
"With only two processors p and q, one processor cannot distinguish failure of the other processor from failure of the communication medium. A third processor is needed. However, that third processor does not have to participate in choosing the sequence of commands. It must take action only in case p or q fails, after which it does nothing while either p or q continues to operate the system by itself. The third processor can therefore be a small/slow/cheap one, or a processor primarily devoted to other tasks."
=== Message flow: Cheap Multi-Paxos ===
An example involving three main acceptors, one auxiliary acceptor and quorum size of three, showing failure of one main processor and subsequent reconfiguration:
{ Acceptors }
Proposer Main Aux Learner
| | | | | | -- Phase 2 --
X----------->|->|->| | | Accept!(N,I,V)
| | | ! | | --- FAIL! ---
|<-----------X--X--------------->| Accepted(N,I,V)
| | | | | -- Failure detected (only 2 accepted) --
X----------->|->|------->| | Accept!(N,I,V) (re-transmit, include Aux)
|<-----------X--X--------X------>| Accepted(N,I,V)
| | | | | -- Reconfigure : Quorum = 2 --
X----------->|->| | | Accept!(N,I+1,W) (Aux not participating)
|<-----------X--X--------------->| Accepted(N,I+1,W)
| | | | |
== Fast Paxos ==
Fast Paxos generalizes Basic Paxos to reduce end-to-end message delays. In Basic Paxos, the message delay from client request to learning is 3 message delays. Fast Paxos allows 2 message delays, but requires that (1) the system be composed of 3f + 1 acceptors to tolerate up to f faults (instead of the classic 2f + 1), and (2) the Client send its request to multiple destinations.
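The two acceptor-count requirements can each be stated as a one-line calculation; the sketch below simply restates the formulas from the text above.

// Acceptors needed to tolerate f faults.
const classicPaxosAcceptors = (f: number): number => 2 * f + 1;
const fastPaxosAcceptors = (f: number): number => 3 * f + 1;
// e.g. for f = 1: classic Paxos needs 3 acceptors, Fast Paxos needs 4.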
Intuitively, if the leader has no value to propose, then a client could send an Accept! message to the Acceptors directly. The Acceptors would respond as in Basic Paxos, sending Accepted messages to the leader and every Learner, achieving two message delays from Client to Learner.
If the leader detects a collision, it resolves the collision by sending Accept! messages for a new round which are Accepted as usual. This coordinated recovery technique requires four message delays from Client to Learner.
The final optimization occurs when the leader specifies a recovery technique in advance, allowing the Acceptors to perform the collision recovery themselves. Thus, uncoordinated collision recovery can occur in three message delays (and only two message delays if all Learners are also Acceptors).
=== Message flow: Fast Paxos, non-conflicting ===
Client Leader Acceptor Learner
| | | | | | | |
| X--------->|->|->|->| | | Any(N,I,Recovery)
| | | | | | | |
X------------------->|->|->|->| | | Accept!(N,I,W)
| |<---------X--X--X--X------>|->| Accepted(N,I,W)
|<------------------------------------X--X Response(W)
| | | | | | | |
=== Message flow: Fast Paxos, conflicting proposals ===
Conflicting proposals with coordinated recovery. Note: the protocol does not specify how to handle the dropped client request.
Client Leader Acceptor Learner
| | | | | | | | |
| | | | | | | | |
| | | | | | | | | !! Concurrent conflicting proposals
| | | | | | | | | !! received in different order
| | | | | | | | | !! by the Acceptors
| X--------------?|-?|-?|-?| | | Accept!(N,I,V)
X-----------------?|-?|-?|-?| | | Accept!(N,I,W)
| | | | | | | | |
| | | | | | | | | !! Acceptors disagree on value
| | |<-------X--X->|->|----->|->| Accepted(N,I,V)
| | |<-------|<-|<-X--X----->|->| Accepted(N,I,W)
| | | | | | | | |
| | | | | | | | | !! Detect collision & recover
| | X------->|->|->|->| | | Accept!(N+1,I,W)
| | |<-------X--X--X--X----->|->| Accepted(N+1,I,W)
|<---------------------------------X--X Response(W)
| | | | | | | | |
Conflicting proposals with uncoordinated recovery.
Client Leader Acceptor Learner
| | | | | | | | |
| | X------->|->|->|->| | | Any(N,I,Recovery)
| | | | | | | | |
| | | | | | | | | !! Concurrent conflicting proposals
| | | | | | | | | !! received in different order
| | | | | | | | | !! by the Acceptors
| X--------------?|-?|-?|-?| | | Accept!(N,I,V)
X-----------------?|-?|-?|-?| | | Accept!(N,I,W)
| | | | | | | | |
| | | | | | | | | !! Acceptors disagree on value
| | |<-------X--X->|->|----->|->| Accepted(N,I,V)
| | |<-------|<-|<-X--X----->|->| Accepted(N,I,W)
| | | | | | | | |
| | | | | | | | | !! Detect collision & recover
| | |<-------X--X--X--X----->|->| Accepted(N+1,I,W)
|<---------------------------------X--X Response(W)
| | | | | | | | |
=== Message flow: Fast Paxos with uncoordinated recovery, collapsed roles ===
(merged Acceptor/Learner roles)
Client Servers
| | | | | |
| | X->|->|->| Any(N,I,Recovery)
| | | | | |
| | | | | | !! Concurrent conflicting proposals
| | | | | | !! received in different order
| | | | | | !! by the Servers
| X--------?|-?|-?|-?| Accept!(N,I,V)
X-----------?|-?|-?|-?| Accept!(N,I,W)
| | | | | |
| | | | | | !! Servers disagree on value
| | X<>X->|->| Accepted(N,I,V)
| | |<-|<-X<>X Accepted(N,I,W)
| | | | | |
| | | | | | !! Detect collision & recover
| | X<>X<>X<>X Accepted(N+1,I,W)
|<-----------X--X--X--X Response(W)
| | | | | |
== Generalized Paxos ==
Generalized consensus explores the relationship between the operations of the replicated state machine and the consensus protocol that implements it. The main discovery involves optimizations of Paxos when conflicting proposals could be applied in any order, i.e., when the proposed operations are commutative operations for the state machine. In such cases, the conflicting operations can both be accepted, avoiding the delays required for resolving conflicts and re-proposing the rejected operations.
This concept is further generalized into ever-growing sequences of commutative operations, some of which are known to be stable (and thus may be executed). The protocol tracks these sequences, ensuring that all proposed operations of one sequence are stabilized before allowing any operation that does not commute with them to become stable.
=== Example ===
In order to illustrate Generalized Paxos, the example below shows a message flow between two concurrently executing clients and a replicated state machine implementing read/write operations over two distinct registers A and B.
Note that two operations are non-commutative when they act on the same register and at least one of them is a write.
A possible sequence of operations :
<1:Read(A), 2:Read(B), 3:Write(B), 4:Read(B), 5:Read(A), 6:Write(A)>
Since 5:Read(A) commutes with both 3:Write(B) and 4:Read(B), one possible permutation equivalent to the previous order is the following:
<1:Read(A), 2:Read(B), 5:Read(A), 3:Write(B), 4:Read(B), 6:Write(A)>
In practice, a commute occurs only when operations are proposed concurrently.
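The commutativity rule used in this example can be sketched as a small predicate; the TypeScript below is illustrative only, and its types are assumptions for the two-register setting described above.

type Op = { kind: "Read" | "Write"; register: "A" | "B" };

// Two register operations commute unless they touch the same register
// and at least one of them is a write.
function commutes(x: Op, y: Op): boolean {
  if (x.register !== y.register) return true;
  return x.kind === "Read" && y.kind === "Read";
}

const ok = commutes({ kind: "Read", register: "A" }, { kind: "Write", register: "B" });      // true
const conflict = commutes({ kind: "Write", register: "B" }, { kind: "Read", register: "B" }); // false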
=== Message flow: Generalized Paxos (example) ===
Responses are not shown. Note: message abbreviations differ from previous message flows due to specifics of the protocol.
Client Leader Acceptor Learner
| | | | | | | | !! New Leader Begins Round
| | X----->|->|->| | | Prepare(N)
| | |<-----X- X- X | | Promise(N,null)
| | X----->|->|->| | | Phase2Start(N,null)
| | | | | | | |
| | | | | | | | !! Concurrent commuting proposals
| X------- ?|-----?|-?|-?| | | Propose(ReadA)
X-----------?|-----?|-?|-?| | | Propose(ReadB)
| | X------X-------------->|->| Accepted(N,<ReadA,ReadB>)
| | |<--------X--X-------->|->| Accepted(N,<ReadB,ReadA>)
| | | | | | | |
| | | | | | | | !! No Conflict, both accepted
| | | | | | | | Stable = <ReadA, ReadB>
| | | | | | | |
| | | | | | | | !! Concurrent conflicting proposals
X-----------?|-----?|-?|-?| | | Propose(<WriteB,ReadA>)
| X--------?|-----?|-?|-?| | | Propose(ReadB)
| | | | | | | |
| | X------X-------------->|->| Accepted(N,<WriteB,ReadA> . <ReadB>)
| | |<--------X--X-------->|->| Accepted(N,<ReadB> . <WriteB,ReadA>)
| | | | | | | |
| | | | | | | | !! Conflict detected, leader chooses
| | | | | | | | commutative order:
| | | | | | | | V = <ReadA, WriteB, ReadB>
| | | | | | | |
| | X----->|->|->| | | Phase2Start(N+1,V)
| | |<-----X- X- X-------->|->| Accepted(N+1,V)
| | | | | | | | Stable = <ReadA, ReadB> .
| | | | | | | | <ReadA, WriteB, ReadB>
| | | | | | | |
| | | | | | | | !! More conflicting proposals
X-----------?|-----?|-?|-?| | | Propose(WriteA)
| X--------?|-----?|-?|-?| | | Propose(ReadA)
| | | | | | | |
| | X------X-------------->|->| Accepted(N+1,<WriteA> . <ReadA>)
| | |<--------X- X-------->|->| Accepted(N+1,<ReadA> . <WriteA>)
| | | | | | | |
| | | | | | | | !! Leader chooses order:
| | | | | | | | W = <WriteA, ReadA>
| | | | | | | |
| | X----->|->|->| | | Phase2Start(N+2,W)
| | |<-----X- X- X-------->|->| Accepted(N+2,W)
| | | | | | | | Stable = <ReadA, ReadB> .
| | | | | | | | <ReadA, WriteB, ReadB> .
| | | | | | | | <WriteA, ReadA>
| | | | | | | |
=== Performance ===
The above message flow shows that Generalized Paxos can leverage operation semantics to avoid collisions when the spontaneous ordering of the network fails. This allows the protocol to be faster in practice than Fast Paxos. However, when a collision occurs, Generalized Paxos needs two additional round trips to recover. This situation is illustrated with operations WriteB and ReadB in the above schema.
In the general case, such round trips are unavoidable and come from the fact that multiple commands can be accepted during a round. This makes the protocol more expensive than Paxos when conflicts are frequent. However, two refinements of Generalized Paxos can improve recovery time.
First, if the coordinator is part of every quorum of acceptors (round N is said to be centered), then to recover at round N+1 from a collision at round N, the coordinator skips phase 1 and proposes at phase 2 the sequence it accepted last during round N. This reduces the cost of recovery to a single round trip.
Second, if both rounds N and N+1 use a unique and identical centered quorum, when an acceptor detects a collision at round N, it spontaneously proposes at round N+1 a sequence suffixing both (i) the sequence accepted at round N by the coordinator and (ii) the greatest non-conflicting prefix it accepted at round N. For instance, if the coordinator and the acceptor accepted respectively at round N <WriteB, ReadB> and <ReadB, ReadA>, the acceptor will spontaneously accept <WriteB, ReadB, ReadA> at round N+1. With this variation, the cost of recovery is a single message delay, which is obviously optimal. Notice here that the use of a unique quorum at a round does not harm liveness. This comes from the fact that any process in this quorum is a read quorum for the prepare phase of the next rounds.
== Byzantine Paxos ==
Paxos may also be extended to support arbitrary failures of the participants, including lying, fabrication of messages, collusion with other participants, selective non-participation, etc. These types of failures are called Byzantine failures, after the Byzantine generals problem popularized by Lamport.
Byzantine Paxos introduced by Castro and Liskov adds an extra message (Verify) which acts to distribute knowledge and verify the actions of the other processors:
=== Message flow: Byzantine Multi-Paxos, steady state ===
Client Proposer Acceptor Learner
| | | | | | |
X-------->| | | | | | Request
| X--------->|->|->| | | Accept!(N,I,V)
| | X<>X<>X | | Verify(N,I,V) - BROADCAST
| |<---------X--X--X------>|->| Accepted(N,V)
|<---------------------------------X--X Response(V)
| | | | | | |
Fast Byzantine Paxos introduced by Martin and Alvisi removes this extra delay, since the client sends commands directly to the Acceptors.
Note that the Accepted message in Fast Byzantine Paxos is sent to all Acceptors and all Learners, while Fast Paxos sends Accepted messages only to Learners:
=== Message flow: Fast Byzantine Multi-Paxos, steady state ===
Client Acceptor Learner
| | | | | |
X----->|->|->| | | Accept!(N,I,V)
| X<>X<>X------>|->| Accepted(N,I,V) - BROADCAST
|<-------------------X--X Response(V)
| | | | | |
The failure scenario is the same for both protocols: each Learner waits to receive F+1 identical messages from different Acceptors. If this does not occur, the Acceptors themselves will also be aware of it (since they exchanged each other's messages in the broadcast round), and correct Acceptors will re-broadcast the agreed value (a sketch of the Learner's counting rule follows the diagram below):
=== Message flow: Fast Byzantine Multi-Paxos, failure ===
Client Acceptor Learner
| | | ! | | !! One Acceptor is faulty
X----->|->|->! | | Accept!(N,I,V)
| X<>X<>X------>|->| Accepted(N,I,{V,W}) - BROADCAST
| | | ! | | !! Learners receive 2 different commands
| | | ! | | !! Correct Acceptors notice error and choose
| X<>X<>X------>|->| Accepted(N,I,V) - BROADCAST
|<-------------------X--X Response(V)
| | | ! | |
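The Learner's counting rule described above is easy to sketch; the TypeScript below is a minimal illustration in which the bookkeeping of which Acceptor reported which value for a given round and instance is assumed to happen elsewhere.

// reports maps an Acceptor id to the value it announced for one (N, I).
// Return the value reported identically by at least F+1 distinct Acceptors,
// or null if no value has enough identical reports yet.
function chooseValue(reports: Map<string, string>, f: number): string | null {
  const counts = new Map<string, number>();
  for (const value of reports.values()) {
    counts.set(value, (counts.get(value) ?? 0) + 1);
  }
  for (const [value, count] of counts) {
    if (count >= f + 1) return value;
  }
  return null;
}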
== Adapting Paxos for RDMA networks ==
With the emergence of very high speed reliable datacenter networks that support remote direct memory access (RDMA), there has been substantial interest in optimizing Paxos to leverage hardware offloading, in which the network interface card and network routers provide reliability and network-layer congestion control, freeing the host CPU for other tasks. The Derecho C++ Paxos library is an open-source Paxos implementation that explores this option.
Derecho offers both a classic Paxos, with data durability across full shutdown/restart sequences, and vertical Paxos (atomic multicast), for in-memory replication and state-machine synchronization. The Paxos protocols employed by Derecho needed to be adapted to maximize asynchronous data streaming and remove other sources of delay on the leader's critical path. Doing so enables Derecho to sustain the full bidirectional RDMA data rate. In contrast, although traditional Paxos protocols can be migrated to an RDMA network by simply mapping the message send operations to native RDMA operations, doing so leaves round-trip delays on the critical path. In high-speed RDMA networks, even small delays can be large enough to prevent utilization of the full potential bandwidth.
== Production use of Paxos ==
Google uses the Paxos algorithm in their Chubby distributed lock service in order to keep replicas consistent in case of failure. Chubby is used by Bigtable which is now in production in Google Analytics and other products.
Google Spanner and Megastore use the Paxos algorithm internally.
The OpenReplica replication service uses Paxos to maintain replicas for an open access system that enables users to create fault-tolerant objects. It provides high performance through concurrent rounds and flexibility through dynamic membership changes.
IBM supposedly uses the Paxos algorithm in their IBM SAN Volume Controller product to implement a general purpose fault-tolerant virtual machine used to run the configuration and control components of the storage virtualization services offered by the cluster.
Microsoft uses Paxos in the Autopilot cluster management service from Bing, and in Windows Server Failover Clustering.
WANdisco have implemented Paxos within their DConE active-active replication technology.
XtreemFS uses a Paxos-based lease negotiation algorithm for fault-tolerant and consistent replication of file data and metadata.
Heroku uses Doozerd which implements Paxos for its consistent distributed data store.
Ceph uses Paxos as part of the monitor processes to agree which OSDs are up and in the cluster.
The MariaDB Xpand distributed SQL database uses Paxos for distributed transaction resolution.
Neo4j HA graph database implements Paxos, replacing Apache ZooKeeper from v1.9.
Apache Cassandra NoSQL database uses Paxos only for its lightweight transactions (LWT) feature.
ScyllaDB NoSQL database uses Paxos for lightweight transactions.
Amazon Elastic Container Services uses Paxos to maintain a consistent view of cluster state.
Amazon DynamoDB uses the Paxos algorithm for leader election and consensus.
== See also ==
Two generals problem
Chandra–Toueg consensus algorithm
State machine
Raft
== References == | Wikipedia/Paxos_(computer_science) |
A trilemma is a difficult choice from three options, each of which is (or appears) unacceptable or unfavourable. There are two logically equivalent ways in which to express a trilemma: it can be expressed as a choice among three unfavourable options, one of which must be chosen, or as a choice among three favourable options, only two of which are possible at the same time.
The term derives from the much older term dilemma, a choice between two or more difficult or unfavourable alternatives. The earliest recorded use of the term was by the British preacher Philip Henry in 1672, and later, apparently independently, by the preacher Isaac Watts in 1725.
== In religion ==
=== Epicurus' trilemma ===
One of the earliest uses of the trilemma formulation is that of the Greek philosopher Epicurus, rejecting the idea of an omnipotent and omnibenevolent God (as summarised by David Hume):
If God is unable to prevent evil, then he is not all-powerful.
If God is not willing to prevent evil, then he is not all-good.
If God is both willing and able to prevent evil, then why does evil exist?
Although traditionally ascribed to Epicurus and called Epicurus' trilemma, it has been suggested that it may actually be the work of an early skeptic writer, possibly Carneades.
In studies of philosophy, discussions, and debates related to this trilemma are often referred to as being about the problem of evil.
=== Apologetic trilemma ===
One well-known trilemma is sometimes used by Christian apologists as a proof of the divinity of Jesus, and is most commonly known in the version by C. S. Lewis. It proceeds from the premise that Jesus claimed to be God, and that therefore one of the following must be true:
Lunatic: Jesus was not God, but he mistakenly believed that he was.
Liar: Jesus was not God, and he knew it, but he said so anyway.
Lord: Jesus is God.
The trilemma, usually in Lewis' formulation, is often used in works of popular apologetics, although it is almost completely absent from discussions about the status of Jesus by professional theologians and biblical scholars.
== In law ==
=== The "cruel trilemma" ===
The "cruel trilemma" was an English ecclesiastical and judicial weapon developed in the first half of the 17th century, and used as a form of coercion and persecution. The format was a religious oath to tell the truth, imposed upon the accused prior to questioning. The accused, if guilty, would find themselves trapped between:
A breach of religious oath if they lied (taken extremely seriously in that era, a mortal sin), as well as perjury;
Self-incrimination if they told the truth; or
Contempt of court if they said nothing and were silent.
Outcry over this process led to the right not to incriminate oneself being established in common law, and was the direct precursor of the right to silence and non-self-incrimination in the Fifth Amendment to the United States Constitution.
== In philosophy ==
=== The Münchhausen trilemma ===
In the theory of knowledge, the Münchhausen trilemma is an argument against the possibility of proving any certain truth, even in the fields of logic and mathematics. Its name goes back to a logical proof by the German philosopher Hans Albert.
The proof runs as follows: all three possible attempts to obtain certain justification must fail:
All justifications in pursuit of certain knowledge must also justify the means of their justification, and in doing so they must justify anew the means of that justification. Therefore, there can be no end: we are faced with the hopeless situation of an infinite regress.
One can stop at self-evidence, common sense, fundamental principles, speaking ex cathedra, or any other evidence, but in doing so the intention of establishing certain justification is abandoned.
The third horn of the trilemma is the application of a circular argument.
=== The trilemma of censorship ===
In John Stuart Mill's On Liberty, as a part of his argument against the suppression of free speech, he describes the trilemma facing those attempting to justify such suppression (although he does not refer to it as a trilemma, Leo Parker-Rees (2009) identified it as such).
If free speech is suppressed, the opinion suppressed is either:
True – in which case society is robbed of the chance to exchange error for truth;
False – in which case confronting the opinion would produce a 'livelier impression' of the truth, allowing people to better justify the correct view;
Half-true – in which case it would contain a forgotten element of the truth, that is important to rediscover, with the eventual aim of a synthesis of the conflicting opinions that is the whole truth.
=== Buddhist Trilemma ===
The Buddhist philosopher Nagarjuna uses the trilemma in his Verses on the Middle Way, giving the example that:
a cause cannot follow its effect
a cause cannot be coincident with its effect
a cause cannot precede its effect
== In economics ==
=== "The Uneasy Triangle" ===
In 1952, the British magazine The Economist published a series of articles on an "Uneasy Triangle", which described "the three-cornered incompatibility between a stable price level, full employment, and ... free collective bargaining". The context was the difficulty maintaining external balance without sacrificing two sacrosanct political values: jobs for all and unrestricted labor rights. Inflation resulting from labor militancy in the context of full employment had put powerful downward pressure on the pound sterling. Runs on the pound then triggered a long series of economically and politically disruptive "stop-go" policies (deflation followed by reflation). John Maynard Keynes had anticipated the severe problem associated with reconciling full employment with stable prices without sacrificing democracy and the associational rights of labor. The same incompatibilities were also elaborated upon in Charles E. Lindblom's 1949 book, Unions and Capitalism.
=== The "impossible trinity" ===
In 1962 and 1963, a trilemma (or "impossible trinity") was introduced by the economists Robert Mundell and Marcus Fleming in articles discussing the problems with creating a stable international financial system. It refers to the trade-offs among the following three goals: a fixed exchange rate, national independence in monetary policy, and capital mobility. According to the Mundell–Fleming model of 1962 and 1963, a small, open economy cannot achieve all three of these policy goals at the same time: in pursuing any two of these goals, a nation must forgo the third.
=== Wage policy trilemmas ===
In 1989 Peter Swenson posited the existence of "wage policy trilemmas" encountered by trade unions trying to achieve three egalitarian goals simultaneously. One involved attempts to compress wages within a bargaining sector while compressing wages between sectors and maximizing access to employment in the sector. A variant of this "horizontal" trilemma was the "vertical" wage policy trilemma associated with trying simultaneously to compress wages, increase the wage share of value added at the expense of profits, and maximize employment. These trilemmas helped explain instability in unions' wage policies and their political strategies seemingly designed to resolve the incompatibilities.
=== The Pinker social trilemma ===
Steven Pinker proposed another social trilemma in his books How the Mind Works and The Blank Slate: that a society cannot be simultaneously "fair", "free", and "equal". If it is "fair", individuals who work harder will accumulate more wealth; if it is "free", parents will leave the bulk of their inheritance to their children; but then it will not be "equal", as people will begin life with different fortunes.
=== The political trilemma of the world economy ===
Economist Dani Rodrik argues in his book, The Globalization Paradox, that democracy, national sovereignty, and global economic integration are mutually incompatible. Democratic states pose obstacles to global integration (e.g. regulatory laws, taxes, and tariffs) to protect their own economies. Therefore, achieving complete economic integration would require removing democratic nation states. A government of some nation state could possibly pursue the goal of global integration at the expense of its own population, but that would require an authoritarian regime; otherwise, the government would likely be replaced in the next elections.
=== Holmström's theorem ===
In Moral Hazard in Teams, economist Bengt Holmström demonstrated a trilemma that arises from incentive systems. For any team of risk-neutral agents, no incentive system of revenue distribution can satisfy all three of the following conditions: Pareto efficiency, balanced budget, and Nash stability. This entails three optimized outcomes:
Martyrdom: the incentive system distributes all revenue, and no agent can improve their take by changing their strategy, but at least one agent is not receiving reward in proportion to their effort.
Instability: the incentive system distributes all revenue, and all agents are rewarded in proportion to their effort, but at least one agent could increase their take by changing strategies.
Insolvency: all agents are rewarded in proportion to their effort, and no shift in strategy would improve any agent's take, but not all revenue is distributed.
=== Arrow's impossibility theorem ===
In social choice theory, economist Kenneth Arrow proved that it is impossible to create a social welfare function that simultaneously satisfies three key criteria: Pareto efficiency, non-dictatorship and independence of irrelevant alternatives.
== In politics ==
=== The Brexit trilemma ===
Following the Brexit referendum, the first May government decided that not only should the United Kingdom leave the European Union but also that it should leave the European Union Customs Union and the European Single Market. This meant that a customs and regulatory border would arise between the UK and the EU. Whilst the sea border between Great Britain and continental Europe was expected to present manageable challenges, the UK/EU border in Ireland was recognised as having rather more intractable issues. These were summarised in what became known as the "Brexit trilemma", because of three competing objectives: no hard border on the island; no customs border in the Irish Sea; and no British participation in the European Single Market and the European Union Customs Union. It is not possible to have all three.
=== The Zionist trilemma ===
Zionists have often desired that Israel be democratic, have a Jewish identity, and encompass (at least) the land of Mandatory Palestine. However, these desires (or "desiderata") seemingly form an inconsistent triad, and thus a trilemma. Palestine has an Arab majority, so any democratic state encompassing all of Palestine would likely have a binational or Arab identity.
However, Israel could be:
Democratic and Jewish, but not in all of Palestine.
Democratic and in all of Palestine, but not Jewish.
Jewish and in all of Palestine, but not democratic.
This observation appears in "From Beirut to Jerusalem" (1989), by Thomas Friedman, who attributes it to the political scientist Aryeh Naor (historically, the 'trilemma' is inexact since early Zionist activists often (a) believed that Jews would migrate to Palestine in sufficiently large numbers; (b) proposed forms of bi-national governance; (c) preferred forms of communism over democracy).
=== The Žižek trilemma ===
The "Žižek trilemma" is a humorous formulation on the incompatibility of certain personal virtues under a constraining ideological framework. Often attributed to the philosopher Slavoj Žižek, it is actually quoted by him as the product of an anonymous source:
One cannot but recall here a witty formula of life under a hard Communist regime: Of the three features—personal honesty, sincere support of the regime and intelligence—it was possible to combine only two, never all three. If one were honest and supportive, one was not very bright; if one were bright and supportive, one was not honest; if one were honest and bright, one was not supportive.
== In business ==
=== The project-management trilemma ===
Arthur C. Clarke cited a management trilemma encountered when trying to achieve production quickly and cheaply while maintaining high quality. In the software industry, this means that one can pick any two of: fastest time to market, highest software quality (fewest defects), and lowest cost (headcount). This is the basis of the popular project management aphorism "Quick, Cheap, Good: Pick two," conceptualized as the project management triangle or "quality, cost, delivery".
=== The trilemma of an encyclopedia ===
The Stanford Encyclopedia of Philosophy is said to have overcome the trilemma that an encyclopedia cannot be authoritative, comprehensive and up-to-date all at the same time for any significant duration.
== In computing and technology ==
=== In data storage ===
RAID technology may offer two of three desirable qualities: (relative) inexpensiveness, speed, or reliability (RAID 0 is fast and cheap but unreliable; RAID 6 is extremely expensive and reliable, with adequate performance; and so on). A common phrase in data storage, the same as in project management, is "fast, cheap, good: choose two".
The same saying has been pastiched in silent computing as "fast, cheap, quiet: choose two".
In researching magnetic recording, used in hard drive storage, a trilemma arises from the competing requirements of readability, writeability and stability (known as the magnetic recording trilemma). Reliable data storage means that for very small bit sizes the magnetic medium must be made of a material with a very high coercivity (the ability to maintain its magnetic domains and withstand undesired external magnetic influences). But this coercivity must be overridden by the drive head when data is written, which requires an extremely strong magnetic field in a very tiny space; the size occupied by one bit of data eventually becomes so small that the strongest magnetic field that can be created in the available space is not strong enough to allow data writing. In effect, a point exists at which it becomes impractical or impossible to make a working disk drive because magnetic writing is no longer possible on such a small scale. Heat-assisted magnetic recording (HAMR) and microwave-assisted magnetic recording (MAMR) are technologies that aim to modify coercivity only during writing, to work around the trilemma.
=== In anonymous communication protocols ===
Anonymous communication protocols can offer at most two of three desirable properties: strong anonymity, low bandwidth overhead, and low latency overhead.
Some anonymous communication protocols offer anonymity at the cost of high bandwidth overhead, meaning that the number of messages exchanged between the protocol parties is very high. Some offer anonymity at the expense of latency overhead (there is a high delay between when a message is sent by the sender and when it is received by the receiver). There are protocols which aim to keep both the bandwidth and latency overheads low, but they can only provide a weak form of anonymity.
=== In clustering algorithms ===
Kleinberg demonstrated through an axiomatic approach to clustering that no clustering method can satisfy all three of the following fundamental properties at the same time (a formal statement is given after the list):
Scale Invariance: The clustering results remain the same when distances between data points are proportionally scaled.
Richness: The method can produce any possible partition of the data.
Consistency: Changes in distances that align with the clustering structure (e.g., making closer points even closer) do not alter the results.
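Stated more formally, as one standard rendering of Kleinberg's impossibility theorem (the notation here is introduced for illustration, not taken from the article):

\textbf{Theorem (Kleinberg).} Let $S$ be a set of $n \ge 2$ points. No clustering function $f$, mapping each distance function $d\colon S \times S \to \mathbb{R}_{\ge 0}$ to a partition of $S$, satisfies all of:
\begin{itemize}
  \item \emph{Scale invariance:} $f(\alpha \cdot d) = f(d)$ for every $\alpha > 0$;
  \item \emph{Richness:} every partition of $S$ equals $f(d)$ for some $d$;
  \item \emph{Consistency:} $f(d') = f(d)$ whenever $d'$ only shrinks distances within clusters of $f(d)$ and only stretches distances between clusters.
\end{itemize}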
=== Other (technology) ===
The CAP theorem, covering guarantees provided by distributed systems, and Zooko's triangle concerning naming of participants in network protocols, are both examples of other trilemmas in technology.
== See also ==
Ternary plot
Trichotomy (philosophy)
Inconsistent triad
Condorcet paradox
Tetralemma
== References ==
== External links ==
Chisholm, Hugh, ed. (1911). "Trilemma" . Encyclopædia Britannica (11th ed.). Cambridge University Press. | Wikipedia/Trilemma |
A single-page application (SPA) is a web application or website that interacts with the user by dynamically rewriting the current web page with new data from the web server, instead of the default method of loading entire new pages. The goal is faster transitions that make the website feel more like a native app.
In a SPA, a page refresh never occurs; instead, all necessary HTML, JavaScript, and CSS code is either retrieved by the browser with a single page load, or the appropriate resources are dynamically loaded and added to the page as necessary, usually in response to user actions.
== History ==
The origins of the term single-page application are unclear, though the concept was discussed at least as early as 2003 by technology evangelists from Netscape. Stuart Morris, a programming student at Cardiff University, Wales, wrote the self-contained website at slashdotslash.com with the same goals and functions in April 2002, and later the same year Lucas Birdeau, Kevin Hakman, Michael Peachey and Clifford Yeh described a single-page application implementation in US patent 8,136,109. Earlier forms were called rich web applications.
JavaScript can be used in a web browser to display the user interface (UI), run application logic, and communicate with a web server. Mature free libraries are available that support the building of a SPA, reducing the amount of JavaScript code developers have to write.
== Technical approaches ==
There are various techniques available that enable the browser to retain a single page even when the application requires server communication.
=== Document hashes ===
HTML authors can leverage element IDs to show or hide different sections of the HTML document. Then, using CSS, authors can use the :target pseudo-class selector to only show the section of the page which the browser navigated to.
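The :target technique is purely declarative, but the same hash-based idea can also be scripted. The TypeScript sketch below (the section ids and the default section name are assumptions) shows or hides sections whenever the document hash changes:

function showSectionFromHash(): void {
  const target = window.location.hash.slice(1) || "home"; // default id assumed
  for (const section of document.querySelectorAll<HTMLElement>("section[id]")) {
    section.hidden = section.id !== target; // show only the addressed section
  }
}
window.addEventListener("hashchange", showSectionFromHash);
showSectionFromHash(); // apply once on the initial page load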
=== JavaScript frameworks ===
Web browser JavaScript frameworks and libraries, such as Angular, Ember.js, ExtJS, Knockout.js, Meteor.js, React, Vue.js, and Svelte have adopted SPA principles. Aside from ExtJS, all of these are free.
AngularJS is a discontinued fully client-side framework. AngularJS's templating is based on bidirectional UI data binding. Data-binding is an automatic way of updating the view whenever the model changes, as well as updating the model whenever the view changes. The HTML template is compiled in the browser. The compilation step creates pure HTML, which the browser re-renders into the live view. The step is repeated for subsequent page views. In traditional server-side HTML programming, concepts such as controller and model interact within a server process to produce new HTML views. In the AngularJS framework, the controller and model states are maintained within the client browser. Therefore, new pages are capable of being generated without any interaction with a server.
Angular 2+ is a SPA framework developed by Google as the successor to AngularJS. It has a strong community of developers, is updated twice every year, and frequently gains new features and fixes.
Ember.js is a client-side JavaScript web application framework based on the model–view–controller (MVC) software architectural pattern. It allows developers to create scalable single-page applications by incorporating common idioms and best practices into a framework that provides a rich object model, declarative two-way data binding, computed properties, automatically updating templates powered by Handlebars.js, and a router for managing application state.
ExtJS is also a client-side framework that allows creating MVC applications. It has its own event system, window and layout management, state management (stores) and various UI components (grids, dialog windows, form elements, etc.). It has its own class system with either a dynamic or a static loader. An application built with ExtJS can either exist on its own (with state in the browser) or work with a server (e.g. with a REST API that is used to fill its internal stores). ExtJS has built-in support only for localStorage, so larger applications need a server to store state.
Knockout.js is a client side framework which uses templates based on the Model-View-ViewModel pattern.
Meteor.js is a full-stack (client-server) JavaScript framework designed exclusively for SPAs. It features simpler data binding than Angular, Ember or ReactJS, and uses the Distributed Data Protocol and a publish–subscribe pattern to automatically propagate data changes to clients in real-time without requiring the developer to write any synchronization code. Full stack reactivity ensures that all layers, from the database to the templates, update themselves automatically when necessary. Ecosystem packages such as Server Side Rendering address the problem of search engine optimization.
React is a JavaScript library for building user interfaces. It is maintained by Facebook, Instagram and a community of individual developers and corporations. React uses a syntax extension for JavaScript named JSX, which mixes JavaScript with an HTML-like syntax. Several companies use React with Redux, a JavaScript library that adds state management capabilities; together with several other libraries, this lets developers create complex applications.
Vue.js is a JavaScript framework for building user interfaces. Vue developers also provide Pinia for state management.
Svelte is a framework for building user interfaces that compiles Svelte code to JavaScript DOM (Document Object Model) manipulations, avoiding the need to bundle a framework to the client, and allowing for simpler application development syntax.
==== Capabilities and trade-offs in modern frameworks ====
JavaScript-based web application frameworks, such as React and Vue, provide extensive capabilities but come with associated trade-offs. These frameworks often extend or enhance features available through native web technologies, such as routing, component-based development, and state management. While native web standards, including Web Components, modern JavaScript APIs like Fetch and ES Modules, and browser capabilities like Shadow DOM, have advanced significantly, frameworks remain widely used for their ability to enhance developer productivity, offer structured patterns for large-scale applications, simplify handling edge cases, and provide tools for performance optimization.
Frameworks can introduce abstraction layers that may contribute to performance overhead, larger bundle sizes, and increased complexity. Modern frameworks, such as React 18 and Vue 3, address these challenges with features like concurrent rendering, tree-shaking, and selective hydration. While these advancements improve rendering efficiency and resource management, their benefits depend on the specific application and implementation context. Lightweight frameworks, such as Svelte and Preact, take different architectural approaches, with Svelte eliminating the virtual DOM entirely in favor of compiling components to efficient JavaScript code, and Preact offering a minimal, compatible alternative to React. Framework choice depends on an application’s requirements, including the team’s expertise, performance goals, and development priorities.
A newer category of web frameworks, including enhance.dev, Astro, and Fresh, leverages native web standards while minimizing abstractions and development tooling. These solutions emphasize progressive enhancement, server-side rendering, and optimizing performance. Astro renders static HTML by default while hydrating only interactive parts. Fresh focuses on server-side rendering with zero runtime overhead. Enhance.dev prioritizes progressive enhancement patterns using Web Components. While these tools reduce reliance on client-side JavaScript by shifting logic to build-time or server-side execution, they still use JavaScript where necessary for interactivity. This approach makes them particularly suitable for performance-critical and content-focused applications.
=== WebAssembly-based frameworks ===
The following frameworks utilize WebAssembly or can build single-page applications (SPAs) with WebAssembly as a core technology or support mechanism. These frameworks enable high-performance and interactive client-side development, extending the SPA paradigm across languages and ecosystems.
Avalonia is primarily a cross-platform desktop UI framework, but experimental support for WebAssembly allows it to be used for SPA development. It has an XAML-based UI design and native-style application features.
Blazor WebAssembly is a .NET-based framework that allows developers to build SPAs using C# and Razor syntax. It runs .NET code in the browser via WebAssembly, enabling a full-stack .NET development experience without relying on JavaScript.
Flutter on the Web extends Flutter’s cross-platform development capabilities to web-based SPAs. Using Dart and its Skia graphics engine, Flutter allows developers to create visually rich SPAs that run in the browser.
OpenSilver is an open-source reimplementation of Silverlight targeted toward SPAs developed with C# and XAML. It uses WebAssembly to run the .NET code in the browser, making it suited for highly interactive client-side applications.
Uno Platform is a cross-platform framework that supports SPA development through WebAssembly. It allows developers to use XAML and C# to build applications that run on the Web, mobile, and desktop platforms, with UI components rendered directly in the browser.
=== Ajax ===
As of 2006, the most prominent technique used was Ajax. Ajax involves using asynchronous requests to a server for XML or JSON data, such as with JavaScript's XMLHttpRequest, the more modern fetch() (since 2017), or the deprecated ActiveX Object. In contrast to the declarative approach of most SPA frameworks, with Ajax the website directly uses JavaScript or a JavaScript library such as jQuery to manipulate the DOM and edit HTML elements. Ajax has been further popularized by libraries like jQuery, which provide a simpler syntax and normalize Ajax behavior across different browsers, which historically had varying behavior.
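A minimal sketch of this pattern, using fetch() to retrieve JSON from a hypothetical endpoint and rewrite part of the current page in place:

async function loadArticle(id: string): Promise<void> {
  const response = await fetch(`/api/articles/${id}`); // endpoint is an assumption
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  const article: { title: string; body: string } = await response.json();
  // Edit HTML elements directly instead of loading a new page.
  document.querySelector("#title")!.textContent = article.title;
  document.querySelector("#body")!.textContent = article.body;
}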
=== WebSockets ===
WebSockets are a bidirectional real-time client-server communication technology that are part of the HTML specification. For real-time communication, their use is superior to Ajax in terms of performance and simplicity.
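A minimal sketch of bidirectional messaging over a WebSocket; the URL, subscription message, and payload shape are assumptions:

const socket = new WebSocket("wss://example.com/updates");
socket.addEventListener("open", () => {
  // Client-to-server direction: ask for a feed.
  socket.send(JSON.stringify({ type: "subscribe", channel: "news" }));
});
socket.addEventListener("message", (event: MessageEvent<string>) => {
  // Server-to-client direction: apply pushed updates to the page in place.
  const update = JSON.parse(event.data);
  console.log("server pushed:", update);
});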
=== Server-sent events ===
Server-sent events (SSEs) are a technique whereby servers can initiate data transmission to browser clients. Once an initial connection has been established, an event stream remains open until closed by the client. SSEs are sent over traditional HTTP and have a variety of features that WebSockets lack by design, such as automatic reconnection, event IDs, and the ability to send arbitrary events.
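A minimal client-side sketch; the endpoint and the custom event name are assumptions:

const stream = new EventSource("/api/events"); // one long-lived HTTP connection
stream.addEventListener("price-change", (event) => {
  const e = event as MessageEvent<string>; // named events arrive as MessageEvent
  console.log("event id:", e.lastEventId, "data:", e.data);
});
// EventSource reconnects automatically after a drop; stream.close() stops it.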
=== Browser plugins ===
Although this method is outdated, asynchronous calls to the server may also be achieved using browser plug-in technologies such as Silverlight, Flash, or Java applets.
=== Data transport (XML, JSON and Ajax) ===
Requests to the server typically result in either raw data (e.g., XML or JSON) or new HTML being returned. In the case where HTML is returned by the server, JavaScript on the client updates a partial area of the DOM (Document Object Model). When raw data is returned, often a client-side JavaScript XML/XSLT process (and, in the case of JSON, a template) is used to translate the raw data into HTML, which is then used to update a partial area of the DOM.
=== Server architecture ===
==== Thin server architecture ====
A SPA moves logic from the server to the client, with the role of the web server evolving into a pure data API or web service. This architectural shift has, in some circles, been coined "Thin Server Architecture" to highlight that complexity has been moved from the server to the client, with the argument that this ultimately reduces overall complexity of the system.
==== Thick stateful server architecture ====
The server keeps the necessary state in memory of the client state of the page. In this way, when any request hits the server (usually user actions), the server sends the appropriate HTML and/or JavaScript with the concrete changes to bring the client to the new desired state (usually adding/deleting/updating a part of the client DOM). At the same time, the state in server is updated. Most of the logic is executed on the server, and HTML is usually also rendered on the server. In some ways, the server simulates a web browser, receiving events and performing delta changes in server state which are automatically propagated to client.
This approach needs more server memory and server processing, but the advantage is a simplified development model because a) the application is usually fully coded in the server, and b) data and UI state in the server are shared in the same memory space with no need for custom client/server communication bridges.
==== Thick stateless server architecture ====
This is a variant of the stateful server approach. The client page sends data representing its current state to the server, usually through Ajax requests. Using this data, the server is able to reconstruct the client state of the part of the page which needs to be modified and can generate the necessary data or code (for instance, as JSON or JavaScript), which is returned to the client to bring it to a new state, usually modifying the page DOM tree according to the client action that motivated the request.
This approach requires that more data be sent to the server and may require more computational resources per request to partially or fully reconstruct the client page state in the server. At the same time, this approach is more easily scalable because there is no per-client page data kept in the server and, therefore, Ajax requests can be dispatched to different server nodes with no need for session data sharing or server affinity.
== Running locally ==
Some SPAs may be executed from a local file using the file URI scheme. This gives users the ability to download the SPA from a server and run it from a local storage device, without depending on server connectivity. If such a SPA wants to store and update data, it must use browser-based Web Storage. These applications benefit from advances available with HTML5.
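For illustration, a locally run SPA might persist its state with Web Storage as in this sketch (the storage key and state shape are hypothetical):

```typescript
interface AppState {
  notes: string[];
}

function loadState(): AppState {
  // localStorage survives page reloads and browser restarts.
  const raw = localStorage.getItem("app-state");
  return raw ? (JSON.parse(raw) as AppState) : { notes: [] };
}

function saveState(state: AppState): void {
  localStorage.setItem("app-state", JSON.stringify(state));
}

const state = loadState();
state.notes.push("created offline");
saveState(state); // no server round trip involved
```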
== Challenges with the SPA model ==
Because the SPA is an evolution away from the stateless page-redraw model that browsers were originally designed for, some new challenges have emerged. Possible solutions (of varying complexity, comprehensiveness, and author control) include:
client-side JavaScript libraries
server-side web frameworks that specialize in the SPA model
the evolution of browsers and the HTML specification, designed for the SPA model
=== Search-engine optimization ===
Because the crawlers of some popular Web search engines do not execute JavaScript, SEO (search engine optimization) has historically presented a problem for public-facing websites wishing to adopt the SPA model.
Between 2009 and 2015, Google Webmaster Central proposed and then recommended an "AJAX crawling scheme" using an initial exclamation mark in fragment identifiers for stateful AJAX pages (#!). Special behavior must be implemented by the SPA site to allow extraction of relevant metadata by the search engine's crawler. For search engines that do not support this URL hash scheme, the hashed URLs of the SPA remain invisible. These "hash-bang" URIs have been considered problematic by a number of writers including Jeni Tennison at the W3C because they make pages inaccessible to those who do not have JavaScript activated in their browser. They also break HTTP referer headers as browsers are not allowed to send the fragment identifier in the Referer header. In 2015, Google deprecated their hash-bang AJAX crawling proposal.
Alternatively, applications may render the first page load on the server and subsequent page updates on the client. This is traditionally difficult, because the rendering code might need to be written in a different language or framework on the server and in the client. Using logic-less templates, cross-compiling from one language to another, or using the same language on the server and the client may help to increase the amount of code that can be shared.
In 2018, Google introduced dynamic rendering as another option for sites wishing to offer crawlers a non-JavaScript-heavy version of a page for indexing purposes. Dynamic rendering switches between a version of a page that is rendered client-side and a pre-rendered version for specific user agents. This approach involves the web server detecting crawlers (via the user agent) and routing them to a renderer, which then serves them a simpler version of the HTML content. As of 2024, Google no longer recommends dynamic rendering, suggesting "server-side rendering, static rendering, or hydration" instead.
Because SEO compatibility is not trivial in SPAs, SPAs are commonly not used in contexts where search engine indexing is either a requirement or desirable. Use cases include applications that surface private data hidden behind an authentication system. In cases where these applications are consumer products, a classic "page redraw" model is often used for the application's landing page and marketing site, which provides enough metadata for the application to appear as a hit in a search engine query. Blogs, support forums, and other traditional page-redraw artifacts often sit around the SPA and can seed search engines with relevant terms.
As of 2021, for Google specifically, SEO compatibility for a plain SPA is straightforward and requires just a few simple conditions to be met.
One way to increase the amount of code that can be shared between servers and clients is to use a logic-less template language like Mustache or Handlebars. Such templates can be rendered from different host languages, such as Ruby on the server and JavaScript in the client. However, merely sharing templates typically requires duplication of business logic used to choose the correct templates and populate them with data. Rendering from templates may have negative performance effects when only updating a small portion of the page—such as the value of a text input within a large template. Replacing an entire template might also disturb a user's selection or cursor position, where updating only the changed value might not. To avoid these problems, applications can use UI data bindings or granular DOM manipulation to only update the appropriate parts of the page instead of re-rendering entire templates.
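For illustration, rendering a logic-less Mustache template from client-side code might look like the following sketch (using the mustache npm package; the template and data are hypothetical):

```typescript
import Mustache from "mustache"; // logic-less template engine

// The same template string could equally be rendered by a Mustache
// implementation in Ruby on the server; only the surrounding business
// logic would still need to be duplicated.
const template = "<li>{{name}} ({{email}})</li>";
const html = Mustache.render(template, {
  name: "Alice",
  email: "alice@example.com",
});
console.log(html); // "<li>Alice (alice@example.com)</li>"
```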
=== Browser history ===
With a SPA being, by definition, "a single page", the model breaks the browser's design for page history navigation using the "forward" or "back" buttons. This presents a usability impediment when a user presses the back button, expecting the previous screen state within the SPA, but instead, the application's single page unloads and the previous page in the browser's history is presented.
The traditional solution for SPAs has been to change the browser URL's hash fragment identifier in accord with the current screen state. This can be achieved with JavaScript, and causes URL history events to be built up within the browser. As long as the SPA is capable of resurrecting the same screen state from information contained within the URL hash, the expected back-button behavior is retained.
To further address this issue, the HTML specification has introduced pushState and replaceState, which provide programmatic access to the actual URL and browser history.
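For illustration, History API-based navigation might look like the following sketch (renderScreen() stands in for whatever function the application uses to redraw a screen):

```typescript
declare function renderScreen(screen: string): void; // hypothetical

function navigate(screen: string): void {
  renderScreen(screen);
  // Record the screen state so the URL and history reflect it.
  history.pushState({ screen }, "", `/${screen}`);
}

// On back/forward, resurrect the recorded screen state instead of
// letting the browser unload the single page.
window.addEventListener("popstate", (event: PopStateEvent) => {
  const state = event.state as { screen?: string } | null;
  renderScreen(state?.screen ?? "home");
});
```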
=== Analytics ===
Analytics tools such as Google Analytics rely heavily upon entire new pages loading in the browser, initiated by a new page load. SPAs do not work this way.
After the first page load, all subsequent page and content changes are handled internally by the application, which should simply call a function to update the analytics package. If such a function is not called, the browser never triggers a new page load, nothing gets added to the browser history, and the analytics package has no visibility into who is doing what on the site.
=== Security scanning ===
Similarly to the problems encountered with search engine crawlers, DAST tools may struggle with these JavaScript-rich applications. Problems can include the lack of hypertext links, memory usage concerns and resources loaded by the SPA typically being made available by an Application Programming Interface or API. Single-page applications are still subject to the same security risks as traditional web pages such as Cross-Site Scripting (XSS), but also a host of other unique vulnerabilities such as data exposure via API and client-side logic and client-side enforcement of server-side security. In order to effectively scan a single-page application, a DAST scanner must be able to navigate the client-side application in a reliable and repeatable manner to allow discovery of all areas of the application and interception of all requests that the application sends to remote servers (e.g. API requests).
=== Adding page loads to a SPA ===
It is possible to add page load events to a SPA using the HTML History API; this will help integrate analytics. The difficulty comes in managing this and ensuring that everything is being tracked accurately – this involves checking for missing reports and double entries.
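As a hedged sketch, one common pattern is to wrap history.pushState so that each client-side navigation also reports a virtual page view; the gtag() call follows Google Analytics 4 usage, but any provider's tracking function could be substituted:

```typescript
declare function gtag(...args: unknown[]): void; // provided by the GA snippet

const originalPushState = history.pushState.bind(history);
history.pushState = (data: unknown, unused: string, url?: string | URL | null) => {
  originalPushState(data, unused, url ?? undefined);
  // Report a virtual page view for this client-side navigation.
  gtag("event", "page_view", { page_path: String(url ?? location.pathname) });
};
```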
Some frameworks provide free analytics integrations covering most of the major analytics providers. Developers can integrate one into the application and verify that everything works correctly, without having to build everything from scratch.
=== Speeding up the page load ===
There are some ways of speeding up the initial load of a SPA, such as selective prerendering of the SPA landing/index page, caching, and various code-splitting techniques including lazy-loading modules when needed. But the SPA must still download the framework and at least some of the application code, and it will usually hit an API for data if the page is dynamic. This is a "pay me now, or pay me later" trade-off scenario. The question of performance and wait times remains a decision that the developer must make.
== Page lifecycle ==
A SPA is fully loaded in the initial page load and then page regions are replaced or updated with new page fragments loaded from the server on demand. To avoid excessive downloading of unused features, a SPA will often progressively download more features as they become required, either small fragments of the page, or complete screen modules.
In this way an analogy exists between "states" in a SPA and "pages" in a traditional website. Because "state navigation" in the same page is analogous to page navigation, in theory any page-based website could be converted to a single-page one by replacing, within the same page, only the changed parts.
The SPA approach on the web is similar to the single-document interface (SDI) presentation technique popular in native desktop applications.
== See also ==
Progressive web application (PWA)
Server-side scripting
== References ==
== External links ==
Migrating Multi-page Web Applications to Single-page Ajax Interfaces (Delft University of Technology)
The Single Page Interface Manifesto
Dynamic Rendering | Wikipedia/Single-page_application |
In computing, a solution stack or software stack is a set of software subsystems or components needed to create a complete platform such that no additional software is needed to support applications. Applications are said to "run on" or "run on top of" the resulting platform.
For example, to develop a web application, the architect defines the stack as the target operating system, web server, database, and programming language. Another version of a software stack is operating system, middleware, database, and applications. The components of a software stack are often developed by different developers independently from one another.
Some components/subsystems of an overall system are chosen together often enough that the particular set is referred to by a name representing the whole, rather than by naming the parts. Typically, the name is an acronym representing the individual components.
The term "solution stack" has, historically, occasionally included hardware components as part of a final product, mixing both the hardware and software in layers of support.
A full-stack developer is expected to be able to work in all the layers of the application (front-end and back-end). A full-stack developer can be defined as a developer or an engineer who works with both the front and back end development of a website, web application or desktop application. This means they can lead platform builds that involve databases, user-facing websites, and working with clients during the planning phase of projects.
== Examples ==
=== OS-level stacks ===
BCHS
OpenBSD (operating system)
C (programming language)
httpd (web server)
SQLite (database)
Ganeti
Xen or KVM (hypervisor)
Linux with LVM (mass-storage device management)
Distributed Replicated Block Device (storage replication)
Ganeti (virtual machine cluster management tool)
Ganeti Web Manager (web interface)
GLASS
GemStone (database and application server)
Linux (operating system)
Apache (web server)
Smalltalk (programming language)
Seaside (web framework)
LAMP
Linux (operating system)
Apache (web server)
MySQL or MariaDB (database management systems)
Perl, PHP, or Python (scripting languages)
LEAP
Linux (operating system)
Eucalyptus (free and open-source alternative to the Amazon Elastic Compute Cloud)
AppScale (cloud computing-framework and free and open-source alternative to Google App Engine)
Python (programming language)
LEMP/LNMP
Linux (operating system)
Nginx (web server)
MySQL or MariaDB (database management systems)
Perl, PHP, or Python (scripting languages)
LLMP
Linux (operating system)
Lighttpd (web server)
MySQL or MariaDB (database management systems)
Perl, PHP, or Python (scripting languages)
LYME and LYCE
Linux (operating system)
Yaws (web server, written in Erlang)
Mnesia or CouchDB (database, written in Erlang)
Erlang (functional programming language)
MAMP
Mac OS X (operating system)
Apache (web server)
MySQL or MariaDB (database)
PHP, Perl, or Python (programming languages)
LAPP
Linux (operating system)
Apache (web server)
PostgreSQL (database management systems)
Perl, PHP, or Python (scripting languages)
MLVN
MongoDB (database)
Linux (operating system)
Varnish (software) (frontend cache)
Node.js (JavaScript runtime)
WAMP
Windows (operating system)
Apache (web server)
MySQL or MariaDB (database)
PHP, Perl, or Python (programming language)
WIMP
Windows (operating system)
Internet Information Services (web server)
MySQL or MariaDB (database)
PHP, Perl, or Python (programming language)
WINS
Windows Server (operating system)
Internet Information Services (web server)
.NET (software framework)
SQL Server (database)
WISA
Windows Server (operating system)
Internet Information Services (web server)
SQL Server (database)
ASP.NET (web framework)
WISAV/WIPAV
Windows Server (operating system)
Internet Information Services (web server)
Microsoft SQL Server/PostgreSQL (database)
ASP.NET (backend web framework)
Vue.js (frontend web framework)
=== OS-agnostic web stacks ===
ELK
Elasticsearch (search engine)
Logstash (event and log management tool)
Kibana (data visualization)
GRANDstack
GraphQL (data query and manipulation language)
React (web application presentation)
Apollo (Data Graph Platform)
Neo4j (database management systems)
JAMstack
JavaScript (programming language)
APIs (Application programming interfaces)
Markup (content)
MARQS
Apache Mesos (node startup/shutdown)
Akka (toolkit) (actor implementation)
Riak (data store)
Apache Kafka (messaging)
Apache Spark (big data and MapReduce)
MEAN
MongoDB (database)
Express.js (application controller layer)
AngularJS/Angular (web application presentation)
Node.js (JavaScript runtime)
MERN
MongoDB (database)
Express.js (application controller layer)
React.js (web application presentation)
Node.js (JavaScript runtime)
MEVN
MongoDB (database)
Express.js (application controller layer)
Vue.js (web application presentation)
Node.js (JavaScript runtime)
NMP
Nginx (web server)
MySQL or MariaDB (database)
PHP (programming language)
OpenACS
NaviServer (web server)
OpenACS (web application framework)
PostgreSQL or Oracle Database (database)
Tcl (scripting language)
PERN
PostgreSQL (database)
Express.js (application controller layer)
React (JavaScript library) (web application presentation)
Node.js (JavaScript runtime)
PLONK
Prometheus (metrics and time-series)
Linkerd (service mesh)
OpenFaaS (management and auto-scaling of compute)
NATS (asynchronous message bus/queue)
Kubernetes (declarative, extensible, scale-out, self-healing clustering)
SMACK
Apache Spark (big data and MapReduce)
Apache Mesos (node startup/shutdown)
Akka (toolkit) (actor implementation)
Apache Cassandra (database)
Apache Kafka (messaging)
T-REx
TerminusDB (scalable graph database)
React (JavaScript web framework)
Express.js (framework for Node.js)
XAMPP
cross-platform (operating system)
Apache (web server)
MariaDB or MySQL (database)
PHP (programming language)
Perl (programming language)
XRX
XML database (database such as BaseX, eXist, MarkLogic Server)
XQuery (Query language)
REST (client interface)
XForms (client)
== See also ==
List of content management systems
Content management system
List of Apache–MySQL–PHP packages
Purple squirrel
Web framework
== References == | Wikipedia/Solution_stack |
A Rich Internet Application (also known as a rich web application, RIA or installable Internet application) is a web application that has many of the characteristics of desktop application software. The concept is closely related to a single-page application, and may allow the user interactive features such as drag and drop, background menu, WYSIWYG editing, etc. The concept was first introduced in 2002 by Macromedia to describe Macromedia Flash MX product (which later became Adobe Flash). Throughout the 2000s, the term was generalized to describe browser-based applications developed with other competing browser plugin technologies including Java applets, Microsoft Silverlight.
With the deprecation of browser plugin interfaces and transition to standard HTML5 technologies, Rich Internet Applications were replaced with JavaScript web applications, including single-page applications and progressive web applications.
== History ==
The terms "Rich Internet Application" and "rich client" were introduced in a white paper of March 2002 by Macromedia (now Adobe), though the concept had existed for a number of years earlier under names including: "Remote Scripting" by Microsoft in April 1999 and the "X Internet" by Forrester Research in October 2000.
In November 2011, there were a number of announcements that demonstrated a decline in demand for Rich Internet Application architectures based on browser plug-ins, in favor of HTML5 alternatives. Adobe announced that Flash would no longer be produced for mobile or TV (refocusing its efforts on Adobe AIR). Pundits questioned its continued relevance even on the desktop and described it as "the beginning of the end". Research In Motion (RIM) announced that it would continue to develop Flash for the PlayBook, a decision questioned by some commentators. Rumors stated that Microsoft would abandon Silverlight after the upcoming release of version 5; this later turned out to be the case. The combination of these announcements had some proclaiming it "the end of the line for browser plug-ins".
=== Rich mobile applications ===
A rich mobile application (RMA) is a mobile application that inherits numerous properties from web applications and features several explicit properties, such as context awareness and ubiquity. RMAs are "energy efficient, multi-tier, online mobile applications originated from the convergence of mobile cloud computing, future web, and imminent communication technologies envisioning to deliver rich user experience via high functionality, immersive interaction, and crisp response in a secure wireless environment while enabling context-awareness, offline usability, portability, and data ubiquity".
==== Origins of RMAs ====
After successful deployment of web applications to desktop computers and the increasing popularity of mobile devices, researchers brought these enhanced web application functionalities to the smartphone platform. NTT DoCoMo of Japan adopted Adobe Flash Lite in 2003 to enhance mobile applications' functionality. In 2008, Google brought Google Gears to Windows Mobile 5 and 6 devices to support platform-neutral mobile applications in offline mode. Google Gears for mobile devices is a mobile browser extension for developing web applications enriched by a separate, user-installable add-on. These applications can be executed inside the mobile device with a web browser regardless of the architecture, operating system and technology. In April 2008, Microsoft introduced Microsoft Silverlight mobile to develop engaging, interactive UIs for mobile devices. Silverlight is a .NET plug-in compatible with several mobile browsers that runs the Silverlight-enabled mobile apps. Android accommodated the Google Gear plug-in in the Google Chrome Lite browser to improve the interaction experience of Android end-users.
== Technologies ==
=== Adobe Flash ===
Adobe Flash manipulated vector and raster graphics to provide animation of text, drawings, and still images. It supported bidirectional streaming of audio and video, and it could capture user input via mouse, keyboard, microphone, and camera. Flash contained an object-oriented language called ActionScript and supported automation via the JavaScript Flash language (JSFL). Flash content could be displayed on various computer systems and devices, using Adobe Flash Player, which was available free of charge for common web browsers, some mobile phones and a few other electronic devices (using Flash Lite).
Apache Flex, formerly Adobe Flex, is a software development kit (SDK) for the development and deployment of cross-platform RIAs based on the Adobe Flash platform. Initially developed by Macromedia and then acquired by Adobe Systems, Flex was donated by Adobe to the Apache Software Foundation in 2011.
Adobe deprecated Flash in 2017, and the Adobe Flash Player was discontinued in most markets by early 2021.
=== Java applet ===
Java applets were used to create interactive visualizations and to present video, three-dimensional objects and other media. Java applets were appropriate for complex visualizations that required significant programming effort in a high level language or communications between applet and originating server.
=== JavaFX ===
JavaFX is a software platform for creating and delivering RIAs that can run across a wide variety of connected devices. The current release (JavaFX 12, March 11, 2019) enables building applications for desktop, browser, and mobile phones, and comes with 3D support. TV set-top boxes, gaming consoles, Blu-ray players and other platforms are planned. JavaFX runs as a plug-in Java applet or via Java Web Start.
=== Microsoft Silverlight ===
Silverlight was proposed by Microsoft as another proprietary alternative. The technology has not been widely accepted and, for instance, lacks support on many mobile devices. Some examples of application were video streaming for events including the 2008 Summer Olympics in Beijing, the 2010 Winter Olympics in Vancouver, and the 2008 conventions for both major political parties in the United States. Silverlight was also used by Netflix for its instant video streaming service. Silverlight is no longer under active development and is not supported in Microsoft Edge Legacy or newer.
=== Gears ===
Gears, formerly known as Google Gears, is a discontinued utility software providing offline storage and other additional features to web browsers, including Google Chrome. Gears was discontinued in favor of the standardized HTML5 methods. Gears was removed from Google Chrome 12.
=== Other techniques ===
RIAs could use XForms to enhance their functionality. XML and XSLT, along with some XHTML, CSS, and JavaScript, can also be used to generate richer client-side UI components such as data tables that can be re-sorted locally on the client without going back to the server. Mozilla and Internet Explorer browsers both support this.
== Security issues in older standards ==
RIAs present indexing challenges to Web search engines, but Adobe Flash content is now at least partially indexable.
Security can improve over that of application software (for example through use of sandboxes and automatic updates), but the extensions themselves remain subject to vulnerabilities and access is often much greater than that of native Web applications. For security purposes, most RIAs run their client portions within a special isolated area of the client desktop called a sandbox. The sandbox limits visibility and access to the file-system and to the operating system on the client to the application server on the other side of the connection. This approach allows the client system to handle local activities, reformatting and so forth, thereby lowering the amount and frequency of client-server traffic, especially versus client-server implementations built around so-called thin clients.
== See also ==
HTML5
List of rich web application frameworks
PIGUI
== References ==
== External links ==
Accessible rich Internet applications (WAI-ARIA) 1.0 – W3C Candidate Recommendation 18 January 2011 | Wikipedia/Rich_Internet_Application |
In computing, server application programming interface (SAPI) is the direct module interface to web servers such as the Apache HTTP Server, Microsoft IIS, and Oracle iPlanet Web Server.
In other words, SAPI is an application programming interface (API) provided by the web server to help developers extend the web server's capabilities.
Microsoft uses the term Internet Server Application Programming Interface (ISAPI), and the defunct Netscape web server used the term Netscape Server Application Programming Interface (NSAPI) for the same purpose.
As an example, PHP has a direct module interface called SAPI for different web servers; in the case of PHP 5 and Apache 2.0 on Windows, it is provided in the form of a DLL file called php5apache2.dll, which is a module that, among other functions, provides an interface between PHP and the web server, implemented in a form that the server understands. This form is what is known as a SAPI.
Different kinds of SAPIs exist for various web-server extensions. For example, in addition to those listed above, other SAPIs for the PHP language include the Common Gateway Interface (CGI) and command-line interface (CLI).
== See also ==
FastCGI (a variation of the CGI)
== References ==
== External links ==
Developing modules for the Apache HTTP Server 2.4 | Wikipedia/Server_application_programming_interface |
The Internet Server Application Programming Interface (ISAPI) is an n-tier API of Internet Information Services (IIS), Microsoft's collection of Windows-based web server services. The most prominent application of IIS and ISAPI is Microsoft's web server.
The ISAPI has also been implemented by Apache's mod_isapi module so that server-side web applications written for Microsoft's IIS can be used with Apache. Other third-party web servers like Zeus Web Server offer ISAPI interfaces, too.
Microsoft's web server application software is called Internet Information Services, which is made up of a number of "sub-applications" and is very configurable. ASP.NET is one such sub-application of IIS, allowing a programmer to write web applications in their choice of programming language (VB.NET, C#, F#) supported by the Microsoft .NET CLR. ISAPI is a much lower-level programming system, giving much better performance at the expense of simplicity.
== ISAPI applications ==
ISAPI consists of two components: Extensions and Filters. These are the only two types of applications that can be developed using ISAPI. Both Filters and Extensions must be compiled into DLL files which are then registered with IIS to be run on the web server.
ISAPI applications can be written in any language that allows the export of standard C functions, for instance C, C++, or Delphi. A couple of libraries are available that help to ease the development of ISAPI applications; in Delphi Pascal, for example, the IntraWeb components support web-application development. MFC includes classes for developing ISAPI applications. Additionally, the ATL Server technology includes a C++ library dedicated to developing ISAPI applications.
=== Extensions ===
ISAPI Extensions are true applications that run on IIS. They have access to all of the functionality provided by IIS. ISAPI extensions are implemented as DLLs that are loaded into a process that is controlled by IIS. Clients can access ISAPI extensions in the same way they access a static HTML page. Certain file extensions or a complete folder or site can be mapped to be handled by an ISAPI extension.
=== Filters ===
ISAPI filters are used to modify or enhance the functionality provided by IIS. They always run on an IIS server and filter every request until they find one they need to process. Filters can be programmed to examine and modify both incoming and outgoing streams of data. Internally programmed and externally configured priorities determine in which order filters are called.
Filters are implemented as DLLs and can be registered on an IIS server at the site level or at the global level (i.e., applying to all sites on an IIS server). Filters are initialised when the worker process is started and listen to all requests to the site on which they are installed.
Common tasks performed by ISAPI filters include:
Changing request data (URLs or headers) sent by the client
Controlling which physical file gets mapped to the URL
Controlling the user name and password used with anonymous or basic authentication
Modifying or analyzing a request after authentication is complete
Modifying a response going back to the client
Running custom processing on "access denied" responses
Running processing when a request is complete
Running processing when a connection with the client is closed
Performing special logging or traffic analysis
Performing custom authentication
Handling encryption and compression
=== Common ISAPI applications ===
This is a list of common ISAPI applications implemented as ISAPI extensions:
Active Server Pages (ASP), installed as standard
ActiveVFP, Active Visual FoxPro installed on IIS
ASP.NET, installed as standard on IIS 6.0 onwards
ColdFusion, later versions of ColdFusion are installable on IIS
Perl ISAPI (aka Perliis), available for free to install
PHP, available for free to install, though its ISAPI support is no longer maintained
== ISAPI development ==
ISAPI applications can be developed using any development tool that can generate a Windows DLL. Wizards for generating ISAPI framework applications have been available in Microsoft development tools since Visual C++ 4.0.
== See also ==
Internet Information Services
ATL Server
SAPI
C++
PHP
FastCGI
== References == | Wikipedia/Internet_Server_Application_Programming_Interface |
The Netscape Server Application Programming Interface (NSAPI) is an application programming interface for extending server software, typically web server software.
== History ==
NSAPI was initially developed by Rob McCool at Netscape for use in Netscape Enterprise Server. A variant of NSAPI can also be used with Netscape Directory Server.
Because there is no formal standard, applications that use NSAPI are not necessarily portable across server software. As of 2007, varying degrees of support for NSAPI are found in Sun Java System Web Server and Zeus Web Server.
== NSAPI plug-ins ==
Applications that use NSAPI are referred to as NSAPI plug-ins. Each plug-in implements one or more Server Application Functions (SAFs).
To use a SAF, an administrator must first configure the server to load the plug-in that implements that SAF. This is typically controlled by a configuration file named magnus.conf. Once the plug-in is loaded, the administrator can configure when the server should invoke the SAF and what parameters it should be passed. This is typically controlled by a configuration file named obj.conf.
== Comparison with related APIs and protocols ==
NSAPI can be compared to an earlier protocol named Common Gateway Interface (CGI). Like CGI, NSAPI provides a means of interfacing application software with a web server. Unlike CGI programs, NSAPI plug-ins run inside the server process. Because CGI programs run outside of the server process, CGI programs are generally slower than NSAPI plug-ins. However, running outside of the server process can improve server reliability by isolating potentially buggy applications from the server software and from each other.
In contrast to CGI programs, NSAPI SAFs can be configured to run at different stages of request processing. For example, while processing a single HTTP request, different NSAPI SAFs can be used to authenticate and authorize the remote user, map the requested URI to a local file system path, generate the web page, and log the request.
After Netscape introduced NSAPI, Microsoft developed ISAPI and the Apache Software Foundation developed Apache API (or ASAPI: Apache Server API). All three APIs have a number of similarities. For example: NSAPI, ISAPI and Apache API allow applications to run inside the server process. Further, all three allow applications to participate in the different stages of request processing. For example, Apache API hooks closely resemble those used in NSAPI.
== See also ==
NPAPI (Netscape Plugin Application Programming Interface)
== References ==
== External links ==
Oracle iPlanet Web Server 7.0.9 NSAPI Developer's Guide
Sun Java System Web Server 7.0 NSAPI Developer's Guide
Zeus Web Server Introduction to NSAPI (archived version) | Wikipedia/Netscape_Server_Application_Programming_Interface |
XAML Browser Applications (XBAP, pronounced "ex-bap") are Windows Presentation Foundation (.xbap) applications that were intended to run inside a web browser such as Firefox or Internet Explorer through the NPAPI interface. Because NPAPI has been phased out in recent years and support has lapsed, no current browsers support XBAP applications.
Hosted applications run in a partial trust sandbox environment and are not given full access to the computer's resources like opening a new network connection or saving a file to the computer disk and not all WPF functionality is available. The hosted environment is intended to protect the computer from malicious applications; however it can also run in full trust mode by the client changing the permission. Starting an XBAP from an HTML page was seamless (with no security or installation prompt). Although one perceived the application running in the browser, it actually ran in an out-of-process executable (PresentationHost.exe) managed by a virtual machine.
== XBAP limitations ==
XBAP applications have certain restrictions on what .NET features they can use. Since they run in partial trust, they are restricted to the same set of permissions granted to any InternetZone application. Nearly all standard WPF functionality (around 99%), however, is available to an XBAP application, so most WPF UI features can be used.
Starting in February 2009, XBAP applications no longer function when run from the Internet. Attempting to run the XBAP will cause the browser to present a generic error message. An option exists in Internet Explorer 9 that can be used to allow the applications to run, but this must be done with care, as it increases the potential attack surface; there have been security vulnerabilities in XBAP.
=== Permitted ===
2D drawing
3D
Animation
Audio
=== Not permitted ===
Access to OS drag-and-drop
Bitmap effects (these are deprecated in .NET 3.5 SP1)
Direct database communication (unless the application is fully trusted)
Interoperability with Windows controls or ActiveX controls
Most standard dialogs
Shader effects
Stand-alone Windows
== See also ==
ClickOnce
Extensible Application Markup Language (XAML)
Google Native Client (NaCl)
HTML Application (HTA)
Microsoft Silverlight
WebAssembly
Windows UI Library (WinUI or WinRT XAML)
XAP (file format)
Java Web Start
== References ==
== External links ==
Windows Presentation Foundation Security Sandbox | Wikipedia/XAML_Browser_Applications |
RoadRunner is an open-source application server, load balancer, and process manager written in Go that serves applications written in PHP 7 and later. It is used in rapid application development to speed up the performance of large web applications for users. It is often used in conjunction with frameworks like Symfony, Laravel, and others to enhance the performance and responsiveness of PHP web applications.
== History ==
Development on RoadRunner began in 2017, led by Anton Titov, and the project was released on GitHub in 2018 under an MIT license. Introducing the project, the developers wrote: "By the middle of 2018, we polished the approach, published it to GitHub under an MIT license, and called it RoadRunner", a name describing its speed and efficiency.
RoadRunner was created to handle the peak loads of a large-scale PHP application developed by Spiral Scout. The end application was experiencing anomalous load peaks in very short spurts of time, which did not allow classic load-balancing mechanisms to activate.
RoadRunner uses multi-threading to keep a PHP application in memory between requests, allowing it to eliminate boot-loading and code-loading overhead and reduce latency. Improved RPC communication between the PHP application and its server processes gives RoadRunner the ability to offload some of the heavy communication from PHP to Go.
== Application features ==
Production-ready PSR-7 compatible HTTP, HTTP2, FastCGI server
No external PHP dependencies (64bit version required)
Frontend agnostic (Queue, PSR-7, GRPC, etc.)
Background job processing (AMQP, Amazon SQS, Beanstalk and memory)
GRPC server and clients
Pub/Sub and Websockets broadcasting
Integrated metrics server (Prometheus)
Integrations with Symfony, Laravel, Slim, CakePHP, Zend Expressive, Spiral
== Licensing ==
RoadRunner is a free open-source software released under an MIT license. It can be downloaded and installed as a package from the project page or from GitHub.
== Versions ==
== References ==
== External links ==
Official website
PHP to Go IPC bridge
GRPC server
Message queue | Wikipedia/RoadRunner_(application_server) |
CrateDB is a distributed SQL database management system that integrates a fully searchable document-oriented data store. It is open-source, written in Java, based on a shared-nothing architecture, and designed for high scalability. CrateDB includes components from Trino, Lucene, Elasticsearch and Netty.
== History ==
The CrateDB project was started by Christian Lutz, Bernd Dorn, and Jodok Batlogg in Dornbirn, Austria, as an open-source, clustered database purpose-built for fast text search and analytics.
The company, now called Crate.io, raised its first round of financing in April 2014. In June that year, CrateDB won the judges' choice award at the GigaOm Structure Launchpad competition. In October, CrateDB won TechCrunch Disrupt Europe in London.
Crate.io closed a $4M funding round in March 2016. In December 2016, CrateDB 1.0 was released, having reached more than one million downloads.
CrateDB 2.0, the first Enterprise Edition of CrateDB, was released in May 2017 after a $2.5M round from Dawn Capital, Draper Esprit, Speedinvest, and Sunstone Capital. In June 2021 Crate.io announced another $10M funding round.
== References ==
== External links ==
Official website | Wikipedia/CrateDB |
The International Conference on Distributed Event-Based Systems is a conference in computer science.
== History ==
The DEBS event began as a series of five workshops run annually from 2002 to 2006. These DEBS workshops were co-located variously with International Conference on Distributed Computing Systems (IEEE ICDCS), ACM SIGMOD Conference/PODS and International Conference on Software Engineering (ACM ICSE).
The inaugural DEBS conference was held in 2007, in Toronto, Canada, and has been held annually since.
== Conference structure ==
DEBS events follow the structure of many computer science conferences, run a sequential-track program, and include tracks for:
Research papers
Industry submissions
Tutorials
Demonstrations and posters
and a doctoral workshop.
A recent, novel feature of the conference is the "Grand Challenges" track, which aims to provide datasets and exercises by which academic and industrial teams may compete to demonstrate the strengths of their solutions.
== Location history ==
2017: Barcelona, Spain
2016: Irvine, California, United States
2015: Oslo, Norway
2014: Mumbai, India
2013: Arlington, Texas, United States
2012: Berlin, Germany
2011: New York City, New York, United States
2010: Cambridge, United Kingdom
2009: Nashville, Tennessee, United States
2008: Rome, Italy
2007: Toronto, Ontario, Canada
== DEBS Workshops ==
2006: Lisbon, Portugal
2005: Columbus, Ohio, United States
2004: Edinburgh, Scotland
2003: San Diego, California, USA
2002: Vienna, Austria
== See also ==
List of computer science conferences
== References ==
== External links ==
http://debs.org/
DEBS 2017- June 19–23, 2017, Barcelona, Spain
DEBS 2016- June 20–24, 2016, Irvine, CA, USA
DEBS 2015- June 29-July 3, 2015, Oslo, Norway
DEBS 2014 - May 26–29, 2014, Mumbai, India
DEBS 2013 - June 29-July 3, 2013, Arlington, Texas, USA
DEBS 2012 - July 16–20, 2012, Freie Universitaet Berlin, Berlin, Germany
DEBS 2011 - July 11–14, 2011, New York, U.S.
DEBS 2010 - July 12–15, 2010, Cambridge, United Kingdom
DEBS 2009 - July 6–9, 2009, Vanderbilt University Campus, Nashville, TN, USA
DEBS 2008 Archived 2014-10-06 at the Wayback Machine - July 2–4, 2008, Rome, Italy
DEBS 2007 - June 20–22, Toronto, Canada | Wikipedia/Distributed_Event-Based_Systems |
SIGDA, Association for Computing Machinery's Special Interest Group on Design Automation, is a professional development organization for the electronic design automation (EDA) community. SIGDA is organized and operated exclusively for educational, scientific, and technical purposes in electronic design automation. SIGDA's bylaws were approved in 1969, following the charter of SIC (Special Interest Committee) in Design Automation in 1965.
The mission of SIGDA and its activities includes collecting and disseminating information in design automation through a newsletter and other publications; organizing sessions at conferences sponsored by ACM; sponsoring conferences, symposia, and workshops; organizing projects and working groups for education, research, and development; serving as a source of technical information for the Council and subunits of the ACM; and representing the opinions and expertise of the membership on matters of technical interest to SIGDA or ACM.
SIGDA sponsors or co-sponsors several conferences and symposia, while supporting numerous professional development programs addressing the needs of EDA students, researchers, and engineers. SIGDA was a pioneer in electronic publishing of conference and symposia proceedings, long before the wide availability of digital libraries made available by major professional organizations. SIGDA volunteers also pioneered several educational initiatives that have enabled many students and researchers to participate in major events in EDA and computer-aided design (CAD).
== SIGDA Chairs ==
Yiran Chen, Duke University (2021–present)
Xiaobo Sharon Hu, University of Notre Dame (2018–2021)
Vijaykrishnan Narayanan, The Pennsylvania State University (2015–2018)
Naehyuck Chang, KAIST (2012–2015)
Patrick Madden, SUNY (2009–2012)
Diana Marculescu, University of Texas at Austin (2006–2009)
== References ==
== External links ==
SIGDA webpage
SIGDA professional development programmes
SIGDA publications of proceedings
SIGDA newsletter
SIGDA events | Wikipedia/Special_Interest_Group_on_Design_Automation |
The Programming Language Design and Implementation (PLDI) conference is an annual computer science conference organized by the Association for Computing Machinery (ACM) which focuses on the study of algorithms, programming languages and compilers. It is sponsored by the SIGPLAN special interest group on programming languages.
In 2003, the conference was given an estimated impact factor of 2.89 by CiteSeer, placing it in the top 1% of computer science conferences.
== History ==
The precursor of PLDI was the Symposium on Compiler Optimization, held July 27–28, 1970 at the University of Illinois at Urbana-Champaign and chaired by Robert S. Northcote. That conference included papers by Frances E. Allen, John Cocke, Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. The first conference in the current PLDI series took place in 1979 under the name SIGPLAN Symposium on Compiler Construction in Denver, Colorado. The next compiler construction conference took place in 1982 in Boston, Massachusetts. The compiler construction conferences then alternated with SIGPLAN Conferences on Language Issues until 1988, when the conference was renamed to PLDI. From 1982 until 2001, the conference acronym was SIGPLAN 'xx. Starting in 2002, the initialism became PLDI 'xx, and in 2006 it became PLDI xxxx.
== Conference locations and organizers ==
PLDI 2025 - SIGPLAN Conference on Programming Language Design and Implementation: Seoul, South Korea
General Chair: Chung-Kil Hur
Program Chair: Zachary Tatlock
PLDI 2024 - SIGPLAN Conference on Programming Language Design and Implementation: Copenhagen, Denmark
General Chair: Milind Kulkarni
Program Chair: John Regehr
PLDI 2023 - SIGPLAN Conference on Programming Language Design and Implementation: Orlando, FL, United States
General Chair: Steve Blackburn
Program Chair: Nate Foster
PLDI 2022 - SIGPLAN Conference on Programming Language Design and Implementation: San Diego, CA, United States
General Chair: Ranjit Jhala
Program Chair: Isil Dillig
PLDI 2021 - SIGPLAN Conference on Programming Language Design and Implementation: Online due to COVID-19
General Chair: Stephen N. Freund
Program Chair: Eran Yahav
PLDI 2020 - SIGPLAN Conference on Programming Language Design and Implementation: London, United Kingdom (planned); moved online due to COVID-19
General Chair: Alastair F. Donaldson
Program Chair: Emina Torlak
proceedings
PLDI 2019 - SIGPLAN Conference on Programming Language Design and Implementation: Phoenix, AZ, United States
Conference Chair: Kathryn S. McKinley
Program Chair: Kathleen Fisher
PLDI 2018 - SIGPLAN Conference on Programming Language Design and Implementation: Philadelphia, PA, United States
Conference Chair: Jeffrey S. Foster
Program Chair: Dan Grossman
PLDI 2017 - SIGPLAN Conference on Programming Language Design and Implementation: Barcelona, Spain
Conference Chair: Albert Cohen
Program Chair: Martin Vechev
PLDI 2016 - SIGPLAN Conference on Programming Language Design and Implementation: Santa Barbara, CA, United States
Conference Chair: Chandra Krintz
Program Chair: Emery Berger
PLDI 2015 - SIGPLAN Conference on Programming Language Design and Implementation: Portland, OR, United States
Conference Chair: Dave Grove
Program Chair: Steve Blackburn
Part of the Federated Computing Research Conference 2015
PLDI 2014 - SIGPLAN Conference on Programming Language Design and Implementation: Edinburgh, Scotland, United Kingdom
Conference Chair: Michael O'Boyle
Program Chair: Keshav Pingali
PLDI 2013 - SIGPLAN Conference on Programming Language Design and Implementation: Seattle, WA, United States
Conference Chair: Hans-J. Boehm
Program Chair: Cormac Flanagan
PLDI 2012 - SIGPLAN Conference on Programming Language Design and Implementation: Beijing, China
Conference Chairs: Jan Vitek, Haibo Lin
Program Chair: Frank Tip
PLDI 2011 - SIGPLAN Conference on Programming Language Design and Implementation: San Jose, CA, United States
Conference Chair: Mary Hall
Program Chair: David Padua
Part of the Federated Computing Research Conference 2011
PLDI 2010 - SIGPLAN Conference on Programming Language Design and Implementation: Toronto, ON, Canada
Conference Chair: Ben Zorn
Program Chair: Alex Aiken
PLDI 2009 - SIGPLAN Conference on Programming Language Design and Implementation: Dublin, Ireland
Conference Chair: Michael Hind
Program Chair: Amer Diwan
PLDI 2008 - SIGPLAN Conference on Programming Language Design and Implementation: Tucson, Arizona, USA
Conference Chair: Rajiv Gupta
Program Chair: Saman Amarasinghe
PLDI 2007 - SIGPLAN Conference on Programming Language Design and Implementation: San Diego, California, USA
Conference Chair: Jeanne Ferrante
Program Chair: Kathryn S. McKinley
Part of the Federated Computing Research Conference 2007
PLDI 2006 - SIGPLAN Conference on Programming Language Design and Implementation: Ottawa, Ontario, Canada
Conference Chair: Michael Schwartzbach
Program Chair: Thomas Ball
PLDI '05 - SIGPLAN Conference on Programming Language Design and Implementation: Chicago, Illinois, USA
Conference Chair: Vivek Sarkar
Program Chair: Mary Hall
PLDI '04 - SIGPLAN Conference on Programming Language Design and Implementation: Washington, D.C., USA
Conference Chair: William Pugh
Program Chair: Craig Chambers
PLDI 03 - SIGPLAN Conference on Programming Language Design and Implementation: San Diego, California, USA
Conference Chair: Ron Cytron
Program Chair: Rajiv Gupta
Part of the Federated Computing Research Conference 2003
PLDI '02 - SIGPLAN Conference on Programming Language Design and Implementation: Berlin, Germany
Conference Chair: Jens Knoop
Program Chair: Laurie Hendren
SIGPLAN '01 Conference on Programming Language Design and Implementation (PLDI): Snowbird, Utah, USA
Conference Chair: Michael Burke
Program Chair: Mary Lou Soffa
SIGPLAN '00 Conference on Programming Language Design and Implementation (PLDI): Vancouver, British Columbia, Canada
Conference Chair: James Larus
Program Chair: Monica Lam
SIGPLAN '99 Conference on Programming Language Design and Implementation (PLDI): Atlanta, Georgia, USA
Conference Chair: Barbara G. Ryder
Program Chair: Benjamin G. Zorn
Part of the Federated Computing Research Conference 1999
SIGPLAN '98 Conference on Programming Language Design and Implementation (PLDI): Montreal, Quebec, Canada
Conference Chair: Jack W. Davidson
Program Chair: Keith D. Cooper
SIGPLAN '97 Conference on Programming Language Design and Implementation (PLDI): Las Vegas, Nevada, USA
Conference Chair: Marina Chen
Program Chair: Ron K. Cytron
SIGPLAN '96 Conference on Programming Language Design and Implementation (PLDI): Philadelphia, Pennsylvania, USA
Conference Chair: Charles N. Fischer
Program Chair: Michael Burke
Part of the Federated Computing Research Conference 1996
SIGPLAN '95 Conference on Programming Language Design and Implementation (PLDI): La Jolla, California, USA
Conference Chair: David W. Wall
Program Chair: David R. Hanson
SIGPLAN '94 Conference on Programming Language Design and Implementation (PLDI): Orlando, Florida, USA
Conference co-Chairs: Barbara Ryder and Mary Lou Soffa
Program Chair: Vivek Sarkar
SIGPLAN '93 Conference on Programming Language Design and Implementation: Albuquerque, New Mexico, USA
Conference Chair: Robert Cartwright
Program Chair: David W. Wall
SIGPLAN '92 Conference on Programming Language Design and Implementation: San Francisco, California
Conference Chair: Stuart I. Feldman
Program Chair: Christopher W. Fraser
SIGPLAN '91 Conference on Programming Language Design and Implementation: Toronto, Ontario, Canada
Conference Chair: Brent Hailpern
Program Chair: Barbara G. Ryder
SIGPLAN '90 Conference on Programming Language Design and Implementation: White Plains, New York, USA
Conference Chair: Mark Scott Johnson
Program Chair: Bernard Lang
SIGPLAN '89 Conference on Programming Language Design and Implementation: Portland, Oregon, USA
Conference Chair: Bruce Knobe
Program Chair: Charles N. Fischer
SIGPLAN '88 Conference on Programming Language Design and Implementation: Atlanta, Georgia, USA
Conference Chair: David S. Wise
Program Chair: Mayer D. Schwartz
SIGPLAN '87 Symposium on Interpreters and Interpretive Techniques: St. Paul, Minnesota, USA
Conference Chair: Mark Scott Johnson
Program Chair: Thomas Turba
SIGPLAN '86 Symposium on Compiler Construction: Palo Alto, California, USA
Conference Chair: John R. Sopka
Program Chair: Jeanne Ferrante
SIGPLAN '85 Symposium on Language Issues in Programming Environments: Seattle, Washington, USA
Conference Chair: Teri Payton
Program Chair: L. Peter Deutsch
SIGPLAN '84 Symposium on Compiler Construction: Montreal, Quebec, Canada
Conference Chair: Mary Van Deusen
Program Chair: Susan L. Graham
SIGPLAN '83 Symposium on Programming Language Issues in Software Systems: San Francisco, California, USA
Conference Chair: John R. White
Program Chair: Lawrence A. Rowe
SIGPLAN '82 Symposium on Compiler Construction: Boston, Massachusetts, USA
Conference Chair: John R. White
Program Chair: Frances E. Allen
SIGPLAN Symposium on Compiler Construction 1979: Denver, Colorado, USA
SIGPLAN Symposium on Compiler Optimization 1970: Urbana-Champaign, Illinois, USA
== References ==
== External links ==
Official website
bibliography for PLDI at DBLP | Wikipedia/Programming_Language_Design_and_Implementation |
A business object is an entity within a multi-tiered software application that works in conjunction with the data access and business logic layers to transport data.
Business objects separate state from behaviour because they are communicated across the tiers in a multi-tiered system, while the real work of the application is done in the business tier and does not move across the tiers.
== Function ==
Whereas a program may implement classes, which typically result in objects that manage or execute behaviours, a business object usually does nothing itself but holds a set of instance variables or properties (also known as attributes) and associations with other business objects, weaving a map of objects representing the business relationships.
A domain model where business objects do not have behaviour is called an anemic domain model.
== Examples ==
For example, a "Manager" would be a business object where its attributes can be "Name", "Second name", "Age", "Area", "Country" and it could hold a 1-n association with its employees (a collection of "Employee" instances).
Another example would be a concept like "Process" having "Identifier", "Name", "Start date", "End date" and "Kind" attributes and holding an association with the "Employee" (the responsible) that started it.
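For illustration, the two examples above might be sketched as plain state-holding classes (a minimal, hypothetical TypeScript model):

```typescript
// Plain state-holding classes: attributes and associations, no behaviour.
class Employee {
  constructor(
    public name: string,
    public secondName: string,
    public age: number,
  ) {}
}

class Manager extends Employee {
  area = "";
  country = "";
  employees: Employee[] = []; // the 1-n association to employees
}

class Process {
  constructor(
    public identifier: string,
    public name: string,
    public startDate: Date,
    public endDate: Date | null,
    public kind: string,
    public startedBy: Employee, // the responsible employee who started it
  ) {}
}
```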
== See also ==
Active record pattern, design pattern that stores object data in memory in relational databases, with functions to insert, update, and delete records
Business intelligence, a field within information technology that provides decision support and business-critical information based on data
Data access object, design pattern that provides an interface to a type of database or other persistent mechanism, and offers data operations to application calls without exposing database details
Data transfer object, design pattern where an object carries aggregated data between processes to reduce the number of calls
== References ==
Rockford Lhotka, Visual Basic 6.0 Business Objects, ISBN 1-86100-107-X
Rockford Lhotka, Expert C# Business Objects, ISBN 1-59059-344-8
Rockford Lhotka, Expert One-on-One Visual Basic .NET Business Objects, ISBN 1-59059-145-3
== External links ==
A definition of domain model by Martin Fowler
Anemic Domain Model by Martin Fowler | Wikipedia/Business_entity_(computer_science) |
The identity transform is a data transformation that copies the source data into the destination data without change.
The identity transformation is considered an essential process in creating a reusable transformation library. By creating a library of variations of the base identity transformation, a variety of data transformation filters can be easily maintained. These filters can be chained together in a format similar to UNIX shell pipes.
== Examples of recursive transforms ==
The "copy with recursion" permits, changing little portions of code, produce entire new and different output, filtering or updating the input. Understanding the "identity by recursion" we can understand the filters.
=== Using XSLT ===
The most frequently cited example of the identity transform (for XSLT version 1.0) is the "copy.xsl" transform as expressed in XSLT. This transformation uses the xsl:copy command to perform the identity transformation:
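```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Canonical identity template: copy each node and recurse. -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
```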
This template works by matching all attributes (@*) and other nodes (node()), copying each node matched, then applying the identity transformation to all attributes and child nodes of the context node. This recursively descends the element tree and outputs all structures in the same structure they were found in the original file, within the limitations of what information is considered significant in the XPath data model. Since node() matches text, processing instructions, root, and comments, as well as elements, all XML nodes are copied.
A more explicit version of the identity transform is:
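  <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- one formulation that names each node type rather than using the node() shorthand -->
    <xsl:template match="@*|*|comment()|processing-instruction()|text()">
      <xsl:copy>
        <xsl:apply-templates select="@*|*|comment()|processing-instruction()|text()"/>
      </xsl:copy>
    </xsl:template>
  </xsl:stylesheet>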
This version is equivalent to the first, but explicitly enumerates the types of XML nodes that it will copy. Both versions copy data that is unnecessary for most XML usage (e.g., comments).
==== XSLT 3.0 ====
XSLT 3.0 specifies an on-no-match attribute of the xsl:mode declaration that allows the identity transform to be declared rather than implemented as an explicit template rule. Specifically:
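  <!-- unmatched nodes are shallow-copied and their children processed -->
  <xsl:mode on-no-match="shallow-copy"/>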
is essentially equivalent to the earlier template rules. See the XSLT 3.0 standard's description of shallow-copy for details.
Finally, note that markup details, such as the use of CDATA sections or the order of attributes, are not necessarily preserved in the output, since this information is not part of the XPath data model. To show CDATA markup in the output, the XSLT stylesheet that contains the identity transform template (not the identity transform template itself) should make use of the xsl:output attribute called cdata-section-elements.
cdata-section-elements specifies a list of the names of elements whose text node children should be output using CDATA sections.
For example:
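  <!-- text children of <description> elements (an illustrative element name) are written as CDATA sections -->
  <xsl:output method="xml" cdata-section-elements="description"/>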
=== Using XQuery ===
XQuery can define recursive functions. The following example XQuery function copies the input directly to the output without modification.
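  (: recursively copy an element, its attributes, and its children :)
  declare function local:copy($element as element()) as element() {
     element {node-name($element)}
        {$element/@*,
         for $child in $element/node()
            return if ($child instance of element())
               then local:copy($child)
               else $child
        }
  };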
The same function can also be achieved using a typeswitch-style transform.
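A sketch of this form:

  (: copy nodes by dispatching on node kind; new cases can be added for special handling :)
  declare function local:copy($nodes as node()*) as node()* {
     for $node in $nodes
     return
        typeswitch($node)
           case element()
              return element {node-name($node)}
                 {$node/@*, local:copy($node/node())}
           default return $node
  };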
The typeswitch transform is sometimes preferable since it can easily be modified by simply adding a case statement for any element that needs special processing.
== Non-recursive transforms ==
Two simple, illustrative "copy all" transforms follow.
=== Using XSLT ===
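The XSLT version can avoid recursion entirely by using xsl:copy-of, which deep-copies the selected node in one step:

  <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- deep-copy the entire document without recursion -->
    <xsl:template match="/">
      <xsl:copy-of select="."/>
    </xsl:template>
  </xsl:stylesheet>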
=== Using XProc ===
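A minimal pipeline that invokes the built-in identity step:

  <p:pipeline xmlns:p="http://www.w3.org/ns/xproc" version="1.0">
    <!-- the identity step copies its input documents to its output unchanged -->
    <p:identity/>
  </p:pipeline>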
An important note about the XProc identity step is that it can take either a single document, as in this example, or a sequence of documents as its input.
== More complex examples ==
Generally the identity transform is used as a base on which one can make local modifications.
=== Remove named element transform ===
==== Using XSLT ====
The identity transformation can be modified to copy everything from an input tree to an output tree except a given node. For example, the following will copy everything from the input to the output except the social security number:
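  <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- identity template: copy everything by default -->
    <xsl:template match="@*|node()">
      <xsl:copy>
        <xsl:apply-templates select="@*|node()"/>
      </xsl:copy>
    </xsl:template>
    <!-- empty template: drop every ssn element (the element name is illustrative) -->
    <xsl:template match="ssn"/>
  </xsl:stylesheet>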
==== Using XQuery ====
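One way to express the filter in XQuery is to extend the recursive copy function with a name test (the function name and the $element-name parameter are illustrative):

  (: copy an element recursively, skipping any child whose name appears in $element-name :)
  declare function local:copy-filter-elements(
        $element as element(), $element-name as xs:string*) as element() {
     element {node-name($element)}
        {$element/@*,
         for $child in $element/node()[not(name() = $element-name)]
            return if ($child instance of element())
               then local:copy-filter-elements($child, $element-name)
               else $child
        }
  };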
To call this one would add:
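  (: drop every "ssn" element from input.xml; both names are illustrative :)
  local:copy-filter-elements(doc("input.xml")/*, ('ssn'))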
==== Using XProc ====
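In XProc, no custom identity logic is needed, because the standard step library already provides a deletion step; a minimal sketch (the match pattern "ssn" is illustrative):

  <p:pipeline xmlns:p="http://www.w3.org/ns/xproc" version="1.0">
    <!-- p:delete removes every node matched by its match pattern -->
    <p:delete match="ssn"/>
  </p:pipeline>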
== See also ==
Data mapping
XML pipeline
== Further reading ==
XSLT Cookbook, O'Reilly Media, Inc., December 1, 2002, by Sal Mangano, ISBN 0-596-00372-2
Priscilla Walmsley, XQuery, O'Reilly Media, Inc., Chapter 8 Functions – Recursive Functions – page 109
Virtual Storage Access Method (VSAM) is an IBM direct-access storage device (DASD) file storage access method, first used in the OS/VS1, OS/VS2 Release 1 (SVS) and Release 2 (MVS) operating systems, later used throughout the Multiple Virtual Storage (MVS) architecture and now in z/OS. Originally a record-oriented filesystem, VSAM comprises four data set organizations: key-sequenced (KSDS), relative record (RRDS), entry-sequenced (ESDS) and linear (LDS). The KSDS, RRDS and ESDS organizations contain records, while the LDS organization (added later to VSAM) contains a sequence of pages with no intrinsic record structure, for use as a memory-mapped file.
== Overview ==
An IBM Redbook named "VSAM PRIMER" (especially when used with the "Virtual Storage Access Method (VSAM) Options for Advanced Applications" manual) explains the concepts needed to make use of VSAM. IBM uses the term data set in official documentation as a synonym for file, and direct-access storage device (DASD) for devices with random access to data locations, such as disk drives, as opposed to devices such as tape drives that can only be read sequentially.
VSAM records can be of fixed or variable length. They are organised into fixed-size blocks called control intervals (CIs), and then into larger divisions called control areas (CAs). Control interval sizes are measured in bytes – for example 4 kilobytes – while control area sizes are measured in disk tracks or cylinders. Control intervals are the unit of transfer between disk and computer, so a read request will read one complete control interval. Control areas are the unit of allocation, so when a VSAM data set is defined, an integral number of control areas will be allocated.
The Access Method Services utility program IDCAMS is commonly used to manipulate ("delete and define") VSAM data sets. Custom programs can access VSAM data sets through Data Definition (DD) statements in Job Control Language (JCL), via dynamic allocation, or in online regions such as those of the Customer Information Control System (CICS).
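As an illustration, a KSDS could be defined with an IDCAMS job along the following lines (the data set names, key position and space figures are placeholder values):

  //DEFKSDS  JOB (ACCT),'DEFINE VSAM',CLASS=A,MSGCLASS=X
  //* Define a key-sequenced data set with a 10-byte key at offset 0
  //* and average/maximum record lengths of 80/200 bytes.
  //STEP1    EXEC PGM=IDCAMS
  //SYSPRINT DD SYSOUT=*
  //SYSIN    DD *
    DEFINE CLUSTER (NAME(EXAMPLE.PAYROLL.KSDS) -
           INDEXED -
           KEYS(10 0) -
           RECORDSIZE(80 200) -
           CYLINDERS(5 1)) -
      DATA (NAME(EXAMPLE.PAYROLL.KSDS.DATA)) -
      INDEX (NAME(EXAMPLE.PAYROLL.KSDS.INDEX))
  /*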
Both IMS/DB and Db2 are implemented on top of VSAM and use its underlying data structures.
== Files ==
The physical organization of VSAM data sets differs considerably from the organizations used by other access methods, as follows.
A VSAM file is defined as a cluster of VSAM components, e.g., for KSDS a DATA component and an INDEX component.
=== Control intervals and control areas ===
VSAM components consist of fixed length physical blocks grouped into fixed length control intervals (CI) and control areas (CA). The size of the CI and CA is determined by the Access Method Services (AMS), and the way in which they are used is normally not visible to the user. There will be a fixed number of control intervals in each control area.
A control interval normally contains multiple records. The records are stored within the control interval starting from the low address upwards. Control information is stored at the other end of the control interval, starting from the high address and moving downwards. The space between the records and the control information is free space. The control information comprises two types of entry: a control interval descriptor field (CIDF) which is always present, and record descriptor fields (RDF) which are present when there are records within the control interval and describe the length of the associated record. Free space within a CI is always contiguous.
When records are inserted into a control interval, they are placed in the correct order relative to other records. This may require records to be moved out of the way inside the control interval. Conversely, when a record is deleted, later records are moved down so that the free space remains contiguous. If there is not enough free space in a control interval for a record to be inserted, the control interval is split. Roughly half the records are stored in the original control interval while the remaining records are moved into a new control interval. The new control interval is taken from a pool of free control intervals within the same control area as the original control interval. If there is no remaining free control interval within that control area, the control area itself is split and the control intervals are distributed equally between the old and the new control areas.
Three types of record-oriented file organization can be used with VSAM (the contents of linear data sets have no record structure):
=== Sequential organization ===
An ESDS may have an index defined over it to enable access by key, using an alternate index. Records in an ESDS are stored in the order in which they are written and are retrieved by address; they are loaded irrespective of their contents, and their byte addresses cannot be changed.
=== Indexed organization ===
A KSDS has two parts: the index component and the data component. These may be stored on separate disk volumes.
While a basic KSDS only has one key (the primary key), alternate indices may be defined to permit the use of additional fields as secondary keys. An alternate index (AIX) is itself a KSDS.
The data structure used by a KSDS is nowadays known as a B+ tree.
=== Relative organization ===
An RRDS stores records in fixed-length slots addressed by relative record number. As with an ESDS, an alternate index may be defined to enable access by key.
=== Linear organization ===
An LDS is an unstructured VSAM dataset with a control interval size of a multiple of 4K. It is used by certain system services.
== Data access techniques ==
There are four access techniques for VSAM data:
Local Shared Resources (LSR), optimised for "random" or direct access. LSR access is easy to achieve from CICS.
Global Shared Resources (GSR)
Non-Shared Resources (NSR), which is optimised for sequential access. NSR access has historically been easier to use than LSR for batch programs.
Distributed File Management (DFM), an implementation of a Distributed Data Management Architecture server, enables programs on remote computers to create, manage, and access VSAM files.
== Sharing data ==
Sharing of VSAM data between CICS regions can be done by VSAM Record-Level Sharing (RLS). This adds record caching and, more importantly, record locking. Logging and commit processing remain the responsibility of CICS which means that sharing of VSAM data outside a CICS environment is severely restricted.
Sharing between CICS regions and batch jobs requires Transactional VSAM, DFSMStvs. This is an optional program that builds on VSAM RLS by adding logging and two-phase commit, using underlying z/OS system services. This permits generalised sharing of VSAM data.
== History ==
VSAM was introduced as a replacement for older access methods and was intended to add function, to be easier to use and to overcome problems of performance and device-dependence. VSAM was introduced in the 1970s when IBM announced virtual storage operating systems (DOS/VS, OS/VS1 and OS/VS2) for its new System/370 series, as successors of the DOS/360 and OS/360 operating systems running on its System/360 computer series. While backwards compatibility was maintained, the older access methods suffered from performance problems due to the address translation required for virtual storage.
The KSDS organization was designed to replace ISAM, the Indexed Sequential Access Method. Changes in disk technology had meant that searching for data in ISAM data sets had become very inefficient. It was also difficult to move ISAM data sets as there were embedded pointers to physical disk locations which became invalid if the data set was moved. IBM also provided a compatibility interface to allow programs coded to use ISAM to use a KSDS instead.
The RRDS organization was designed to replace BDAM, the Basic Direct Access Method. In some cases, BDAM data sets contained embedded pointers which prevented them from being moved. However, most BDAM data sets did not and the incentive to move from BDAM to VSAM RRDS was much less compelling than that to move from ISAM to VSAM KSDS.
Linear data sets were added later, followed by VSAM RLS and then Transactional VSAM.
== See also ==
Job Control Language (JCL)
IBM mainframe utility programs
ISAM
Geneva ERS
Record Management Services, a similar system developed by Digital Equipment Corporation
Enscribe, a similar system developed by Tandem Computers
Berkeley DB
== References ==
VSAM Demystified
DFSMStvs Overview and Planning Guide