In computing, a hardware random number generator (HRNG), true random number generator (TRNG), non-deterministic random bit generator (NRBG), or physical random number generator is a device that generates random numbers from a physical process capable of producing entropy, unlike a pseudorandom number generator (PRNG) that utilizes a deterministic algorithm and non-physical nondeterministic random bit generators that do not include hardware dedicated to generation of entropy.
Many natural phenomena generate low-level, statistically random "noise" signals, including thermal and shot noise, jitter and metastability of electronic circuits, Brownian motion, and atmospheric noise. Researchers have also used the photoelectric effect (involving a beam splitter), other quantum phenomena, and even nuclear decay, although for practical reasons the latter, like atmospheric noise, is not viable. While "classical" (non-quantum) phenomena are not truly random, an unpredictable physical system is usually acceptable as a source of randomness, so the qualifiers "true" and "physical" are used interchangeably.
A hardware random number generator is expected to output near-perfect random numbers ("full entropy"). A physical process usually does not have this property, and a practical TRNG typically includes a few blocks:
a noise source that implements the physical process producing the entropy. Usually this process is analog, so a digitizer is used to convert the output of the analog source into a binary representation;
a conditioner (randomness extractor) that improves the quality of the random bits;
health tests. TRNGs are mostly used in cryptographic algorithms that are completely broken if the random numbers have low entropy, so testing functionality is usually included.
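The blocks above can be sketched in a toy Python model, with a biased software source standing in for real hardware and the classic von Neumann corrector as the conditioner (an illustrative choice; production TRNGs use stronger extractors):

```python
import random

def noisy_bits(n, bias=0.6, seed=42):
    # Stand-in for a digitized physical noise source: biased but
    # independent bits (a real source would come from hardware).
    rng = random.Random(seed)
    return [1 if rng.random() < bias else 0 for _ in range(n)]

def von_neumann_extract(bits):
    # Classic conditioner: look at non-overlapping pairs, output the
    # first bit of 01 or 10, discard 00 and 11. This removes bias
    # exactly, provided the input bits are independent.
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

raw = noisy_bits(10_000)
conditioned = von_neumann_extract(raw)
# The conditioned stream is shorter, but its bias is gone.
```

Note the cost: roughly three quarters of the raw bits are discarded, which is one reason practical TRNGs pair a slow entropy source with a fast DRBG.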
Hardware random number generators generally produce only a limited number of random bits per second. To increase the available output rate, they are often used to generate the "seed" for a faster deterministic random bit generator (DRBG, i.e., a PRNG). The DRBG also helps with noise source "anonymization" (whitening out the noise source's identifying characteristics) and with entropy extraction. With a proper DRBG algorithm selected (a cryptographically secure pseudorandom number generator, CSPRNG), the combination can satisfy the requirements of the Federal Information Processing Standards and Common Criteria.
== Uses ==
Hardware random number generators can be used in any application that needs randomness. However, in many scientific applications the additional cost and complexity of a TRNG (compared with a pseudorandom number generator) provide no meaningful benefit. TRNGs have further drawbacks for data science and statistical applications: a series of numbers cannot be re-run unless it is stored, and the reliance on an analog physical entity can obscure a failure of the source. TRNGs are therefore primarily used in applications where their unpredictability, and the impossibility of re-running the sequence of numbers, are crucial to the success of the implementation: cryptography and gambling machines.
=== Cryptography ===
The major use for hardware random number generators is in the field of data encryption, for example to create random cryptographic keys and nonces needed to encrypt and sign data. In addition to randomness, there are at least two additional requirements imposed by the cryptographic applications:
forward secrecy guarantees that the knowledge of the past output and internal state of the device should not enable the attacker to predict future data;
backward secrecy protects the "opposite direction": knowledge of the output and internal state in the future should not divulge the preceding data.
A typical way to fulfill these requirements is to use a TRNG to seed a cryptographically secure pseudorandom number generator.
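A sketch of this pattern, with `os.urandom` standing in for the (slow) hardware entropy source and a deliberately simplified construction loosely modeled on NIST's HMAC_DRBG (illustrative only, not a certified implementation):

```python
import hmac, hashlib, os

class TinyHmacDrbg:
    """Simplified DRBG loosely modeled on NIST HMAC_DRBG (illustrative only)."""
    def __init__(self, seed: bytes):
        self.K = b"\x00" * 32
        self.V = b"\x01" * 32
        self._update(seed)

    def _update(self, data: bytes = b""):
        # Mix new material (or just reshuffle state) into the key K and value V.
        self.K = hmac.new(self.K, self.V + b"\x00" + data, hashlib.sha256).digest()
        self.V = hmac.new(self.K, self.V, hashlib.sha256).digest()
        if data:
            self.K = hmac.new(self.K, self.V + b"\x01" + data, hashlib.sha256).digest()
            self.V = hmac.new(self.K, self.V, hashlib.sha256).digest()

    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self.V = hmac.new(self.K, self.V, hashlib.sha256).digest()
            out += self.V
        self._update()  # forward secrecy: old state is not recoverable
        return out[:n]

# One slow hardware seed yields many fast output bytes.
drbg = TinyHmacDrbg(seed=os.urandom(32))
fast_bytes = drbg.generate(64)
```

The `_update` after each `generate` call is what provides forward secrecy: knowing the current state does not reveal previously emitted bytes.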
== History ==
Physical devices have been used to generate random numbers for thousands of years, primarily for gambling. Dice in particular have been known for more than 5,000 years (examples have been found at sites in modern Iraq and Iran), and flipping a coin (thus producing a random bit) dates at least to the times of ancient Rome.
The first documented use of a physical random number generator for scientific purposes was by Francis Galton (1890). He devised a way to sample a probability distribution using common gambling dice. In addition to the top face, Galton also recorded the face of the die closest to him, thus creating 6 × 4 = 24 outcomes (about 4.6 bits of randomness per throw).
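The entropy figure follows directly from the outcome count: 24 equally likely outcomes carry log2 24 bits per throw, versus log2 6 for the top face alone.

```python
import math

outcomes = 6 * 4            # top face (6 choices) times nearest side face (4 choices)
bits = math.log2(outcomes)  # about 4.58 bits per throw
top_face_only = math.log2(6)  # about 2.58 bits per throw
```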
Kendall and Babington-Smith (1938) used a fast-rotating 10-sector disk that was illuminated by periodic bursts of light. The sampling was done by a human who wrote the number under the light beam onto a pad. The device was utilized to produce a 100,000-digit random number table (at the time such tables were used for statistical experiments, much as PRNGs are today).
On 29 April 1947, the RAND Corporation began generating random digits with an "electronic roulette wheel", consisting of a random frequency pulse source of about 100,000 pulses per second gated once per second with a constant frequency pulse and fed into a five-bit binary counter. Douglas Aircraft built the equipment, implementing Cecil Hastings' suggestion (RAND P-113) for a noise source (most likely the well-known behavior of the 6D4 miniature gas thyratron tube when placed in a magnetic field). Twenty of the 32 possible counter values were mapped onto the 10 decimal digits and the other 12 counter values were discarded. The results of a long run from the RAND machine, filtered and tested, were converted into a table, which originally existed only as a deck of punched cards, but was later published in 1955 as a book, 50 rows of 50 digits on each page (A Million Random Digits with 100,000 Normal Deviates). The RAND table was a significant breakthrough in delivering random numbers because such a large and carefully prepared table had never before been available. It has been a useful source for simulations, modeling, and for deriving the arbitrary constants in cryptographic algorithms to demonstrate that the constants had not been selected maliciously ("nothing up my sleeve" numbers).
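Mapping 20 of the 32 counter values to digits and discarding the other 12 is a form of rejection sampling; each decimal digit then corresponds to exactly two counter values and is equally likely. A sketch (the exact value-to-digit assignment RAND used is not documented here, so the mapping below is illustrative):

```python
import random

def counter_to_digit(counter_value):
    # Illustrative mapping: counter values 0..19 yield decimal digits
    # (two counter values per digit), values 20..31 are discarded.
    if counter_value < 20:
        return counter_value % 10
    return None  # rejected, try again

rng = random.Random(1947)
digits = []
while len(digits) < 1000:
    d = counter_to_digit(rng.randrange(32))  # stand-in for the gated 5-bit counter
    if d is not None:
        digits.append(d)
```

Because the surviving 20 values are uniform and each digit claims exactly two of them, the output digits are uniform as well.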
Since the early 1950s, research into TRNGs has been highly active, with thousands of research works published and about 2000 patents granted by 2017.
== Physical phenomena with random properties ==
Multiple different TRNG designs were proposed over time with a large variety of noise sources and digitization techniques ("harvesting"). However, practical considerations (size, power, cost, performance, robustness) dictate the following desirable traits:
use of a commonly available inexpensive silicon process;
exclusive use of digital design techniques. This allows easier system-on-chip integration and enables the use of FPGAs;
compact and low-power design. This discourages use of analog components (e.g., amplifiers);
mathematical justification of the entropy collection mechanisms.
Stipčević & Koç in 2014 classified the physical phenomena used to implement TRNGs into four groups:
electrical noise;
free-running oscillators;
chaos;
quantum effects.
=== Electrical noise-based RNG ===
Noise-based RNGs generally follow the same outline: the output of a noise source is fed into a comparator. If the voltage is above a threshold, the comparator output is 1, otherwise 0. The random bit value is latched using a flip-flop. Sources of noise vary and include:
Johnson–Nyquist noise ("thermal noise");
Zener noise;
avalanche breakdown.
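A toy simulation of this outline, with Gaussian samples standing in for amplified thermal noise (a modeling assumption, not a hardware measurement):

```python
import random

def comparator_bit(noise_voltage, threshold=0.0):
    # Comparator stage: 1 if the instantaneous noise voltage exceeds the
    # threshold, else 0; a flip-flop would latch this value on a clock edge.
    return 1 if noise_voltage > threshold else 0

rng = random.Random(7)
# Zero-mean Gaussian samples model amplified Johnson-Nyquist noise.
bits = [comparator_bit(rng.gauss(0.0, 1.0)) for _ in range(1000)]
```

Note how sensitive the output bias is to the threshold: any offset between the threshold and the true noise mean skews the 0/1 ratio, which is why calibration is listed below as a drawback.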
The drawbacks of using noise sources for an RNG design are:
noise levels are hard to control, they vary with environmental changes and device-to-device;
calibration processes needed to ensure a guaranteed amount of entropy are time-consuming;
noise levels are typically low, thus the design requires power-hungry amplifiers. The sensitivity of amplifier inputs enables manipulation by an attacker;
circuitry located nearby generates a lot of non-random noise thus lowering the entropy;
a proof of randomness is near-impossible as multiple interacting physical processes are involved.
=== Chaos-based RNG ===
The idea of chaos-based noise stems from the use of a complex system that is hard to characterize by observing its behavior over time. For example, lasers can be put into a chaotic mode (undesirable in other applications) with chaotically fluctuating power, which is detected using a photodiode and sampled by a comparator. The design can be quite small, as all the photonics elements can be integrated on-chip. Stipčević & Koç characterize this technique as "most objectionable", mostly because chaotic behavior is usually governed by a differential equation and no new randomness is introduced, so there is a possibility of a chaos-based TRNG producing a limited subset of the possible output strings.
=== Free-running oscillators-based RNG ===
TRNGs based on a free-running oscillator (FRO) typically utilize one or more ring oscillators (ROs), the outputs of which are sampled using yet another clock. Since the inverters forming the RO can be thought of as amplifiers with a very large gain, an FRO's output exhibits fast random variations in phase and frequency. FRO-based TRNGs are very popular due to their use of standard digital logic, despite issues with randomness proofs and chip-to-chip variability.
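A toy model of the sampling mechanism: the RO's phase accumulates Gaussian jitter between sampling clock edges, and the sampler reads whether the oscillator is in the high or low half of its period (illustrative parameters; a real design must carefully quantify the accumulated jitter per sample):

```python
import random

def ro_trng_bits(n, period=1.0, jitter=0.05, sample_interval=137.3, seed=3):
    # Free-running ring oscillator model: phase advances by the sample
    # interval plus accumulated Gaussian jitter; the sampled bit is the
    # RO output level (high/low half of its period) at each clock edge.
    rng = random.Random(seed)
    phase = 0.0
    bits = []
    for _ in range(n):
        phase += sample_interval + rng.gauss(0.0, jitter * sample_interval ** 0.5)
        bits.append(1 if (phase % period) < period / 2 else 0)
    return bits
```

With these parameters the jitter accumulated per sample is larger than half a period, so the sampled phase is close to uniform; with too little jitter the bits would be strongly correlated.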
=== Quantum-based RNG ===
Quantum random number generation technology is well established with 8 commercial quantum random number generator (QRNG) products offered before 2017.
Herrero-Collantes & Garcia-Escartin list the following stochastic processes as "quantum":
nuclear decay was historically the earliest quantum method, used since the 1960s, owing its popularity to the availability of Geiger counters and calibrated radiation sources. Entropy harvesting was done using an event counter that was periodically sampled, or a time counter that was sampled at the time of an event. Similar designs were utilized in the 1950s to generate random noise in analog computers. The major drawbacks were radiation safety concerns, low bit rates, and non-uniform distribution;
shot noise, a quantum mechanical noise source found in electronic circuits, while technically a quantum effect, is hard to isolate from the thermal noise, so, with few exceptions, noise sources utilizing it are only partially quantum and are usually classified as "classical";
quantum optics:
branching path generators use a beamsplitter so that a photon from a single-photon source randomly takes one of two paths and is sensed by one of two single-photon detectors, thus generating a random bit;
time of arrival generators and photon counting generators use a weak photon source, with the entropy harvested similarly to the case of radioactive decay;
attenuated pulse generators are a generalization (simplifying the equipment) of the above methods that allows more than one photon in the system at a time;
vacuum fluctuations generators use a laser homodyne detection to probe the changes in the vacuum state;
laser phase noise generators use the phase noise on the output of a single spatial mode laser that is converted to amplitude using an unbalanced Mach-Zehnder interferometer. The noise is sampled by a photodetector;
amplified spontaneous emission generators use spontaneous light emission present in the optical amplifiers as a source of noise;
Raman scattering generators extract entropy from the interaction of photons with solid-state materials;
optical parametric oscillator generators use the spontaneous parametric down-conversion leading to binary phase state selection in a degenerate optical parametric oscillator;
To reduce costs and increase robustness of quantum random number generators, online services have been implemented.
Many quantum random number generator designs are inherently untestable and thus can be manipulated by adversaries. Mannalath et al. call these designs "trusted", in the sense that they can only operate in a fully controlled, trusted environment.
== Performance test ==
The failure of a TRNG can be quite complex and subtle, necessitating validation of not just the results (the output bit stream), but of the unpredictability of the entropy source. Hardware random number generators should be constantly monitored for proper operation to protect against the entropy source degradation due to natural causes and deliberate attacks. FIPS Pub 140-2 and NIST Special Publication 800-90B define tests which can be used for this.
The minimal set of real-time tests mandated by the certification bodies is not large; for example, NIST in SP 800-90B requires just two continuous health tests:
repetition count test checks that sequences of identical digits are not too long; for the (typical) case of a TRNG that digitizes one bit at a time, this means not having long strings of either 0s or 1s;
adaptive proportion test verifies that no value occurs too frequently in the data stream (low bias). For bit-oriented entropy sources, this means that the counts of 1s and 0s in the bit stream are approximately the same.
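Both tests are simple to implement. The sketch below uses illustrative cutoffs; in SP 800-90B the real cutoffs are derived from the claimed entropy per sample and a target false-positive rate:

```python
def repetition_count_test(bits, cutoff=34):
    # Fails if any value repeats `cutoff` or more times in a row
    # (the cutoff here is illustrative, not a certified value).
    run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        if run >= cutoff:
            return False
    return True

def adaptive_proportion_test(bits, window=512, cutoff=325):
    # Counts occurrences of the first sample within each window; too many
    # repeats of one value signals a loss of entropy (illustrative cutoff).
    for start in range(0, len(bits) - window + 1, window):
        w = bits[start:start + window]
        if w.count(w[0]) >= cutoff:
            return False
    return True

healthy = [i % 2 for i in range(2048)]  # alternating bits pass both tests
stuck = [1] * 2048                      # a stuck-at-1 source fails both
```

A stuck-at or heavily biased source trips these tests almost immediately, which is the point: they are cheap enough to run continuously on the raw noise-source output.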
=== Attacks ===
Just as with other components of a cryptography system, a cryptographic random number generator should be designed to resist certain attacks. Defending against these attacks is difficult without a hardware entropy source.
The physical processes in an HRNG introduce new attack surfaces. For example, a free-running-oscillator-based TRNG can be attacked using frequency injection.
=== Estimating entropy ===
There are mathematical techniques for estimating the entropy of a sequence of symbols. None are so reliable that their estimates can be fully relied upon; there are always assumptions which may be very difficult to confirm. These are useful for determining if there is enough entropy in a seed pool, for example, but they cannot, in general, distinguish between a true random source and a pseudorandom generator. This problem is avoided by the conservative use of hardware entropy sources.
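One such technique is the most-common-value estimate: min-entropy is bounded by the frequency of the most frequent symbol. The sketch below is a simplified version (SP 800-90B's variant adds a confidence-interval correction, and, as noted above, the estimate rests on an i.i.d. assumption that is hard to confirm):

```python
import math
from collections import Counter

def min_entropy_per_symbol(samples):
    # Simplified most-common-value estimate: min-entropy = -log2(p_max),
    # where p_max is the observed frequency of the most common symbol.
    # Assumes i.i.d. samples, which is itself difficult to verify.
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)
```

A perfectly uniform source over 256 byte values approaches 8 bits per symbol; any bias lowers the estimate. Crucially, a good CSPRNG also scores near the maximum, which is exactly why such estimators cannot distinguish a true random source from a pseudorandom one.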
== See also ==
AN/CYZ-9
Bell test experiments
/dev/random
ERNIE
Lavarand (a hardware random number generator based on movement of the floating material in lava lamps)
List of random number generators
Lottery machine
RDRAND
Trusted Platform Module
== References ==
== Sources ==
Turan, Meltem Sönmez; Barker, Elaine; Kelsey, John; McKay, Kerry A; Baish, Mary L; Boyle, Mike (2018). NIST SP800-90B: Recommendation for the entropy sources used for random bit generation (Report). Gaithersburg, MD: National Institute of Standards and Technology. doi:10.6028/nist.sp.800-90b.
Templ, M. (2016). Simulation for Data Science with R. Packt Publishing. ISBN 978-1-78588-587-7. Retrieved 2023-08-07.
Saarinen, Markku-Juhani O.; Newell, G. Richard; Marshall, Ben (2020-11-09). Building a Modern TRNG: An Entropy Source Interface for RISC-V (PDF). New York, NY, USA: ACM. doi:10.1145/3411504.3421212. Archived from the original on 2021-03-16. Retrieved 2023-09-09.{{cite conference}}: CS1 maint: bot: original URL status unknown (link)
Schindler, Werner (2009). "Random Number Generators for Cryptographic Applications". Cryptographic Engineering. Boston, MA: Springer US. pp. 5–23. doi:10.1007/978-0-387-71817-0_2. ISBN 978-0-387-71816-3.
Sunar, Berk (2009). "True Random Number Generators for Cryptography". Cryptographic Engineering. Boston, MA: Springer US. pp. 55–73. doi:10.1007/978-0-387-71817-0_4. ISBN 978-0-387-71816-3.
L'Ecuyer, Pierre (2017). History of uniform random number generation (PDF). 2017 Winter Simulation Conference (WSC). Las Vegas, NV, USA: IEEE. doi:10.1109/wsc.2017.8247790. ISBN 978-1-5386-3428-8. ISSN 1558-4305.
Stipčević, Mario; Koç, Çetin Kaya (2014). "True Random Number Generators". Open Problems in Mathematics and Computational Science (PDF). Cham: Springer International Publishing. pp. 275–315. doi:10.1007/978-3-319-10683-0_12. ISBN 978-3-319-10682-3.
Herrero-Collantes, Miguel; Garcia-Escartin, Juan Carlos (2017-02-22). "Quantum random number generators". Reviews of Modern Physics. 89 (1). American Physical Society (APS): 015004. arXiv:1604.03304. Bibcode:2017RvMP...89a5004H. doi:10.1103/revmodphys.89.015004. ISSN 0034-6861.
Quantum Random Number Generation: Theory and Practice. Quantum Science and Technology. Springer Cham. 2020. doi:10.1007/978-3-319-72596-3. ISBN 978-3-319-72596-3.
Mannalath, Vaisakh; Mishra, Sandeep; Pathak, Anirban (2023). "A Comprehensive Review of Quantum Random Number Generators: Concepts, Classification and the Origin of Randomness". Quantum Information Processing. 22 (12): 439. arXiv:2203.00261. Bibcode:2023QuIP...22..439M. doi:10.1007/s11128-023-04175-y.
=== General references ===
== External links ==
The Intel Random Number Generator (PDF), Intel, 22 April 1999.
ProtegoST SG100, ProtegoST, a hardware random number generator "based on quantum physics random number source from a zener diode".
A mask generation function (MGF) is a cryptographic primitive similar to a cryptographic hash function except that while a hash function's output has a fixed size, an MGF supports output of a variable length. In this respect, an MGF can be viewed as an extendable-output function (XOF): it can accept input of any length and process it to produce output of any length. Mask generation functions are completely deterministic: for any given input and desired output length, the output is always the same.
== Definition ==
A mask generation function takes an octet string of variable length and a desired output length as input, and outputs an octet string of the desired length. There may be restrictions on the length of the input and output octet strings, but such bounds are generally very large. Mask generation functions are deterministic; the octet string output is completely determined by the input octet string. The output of a mask generation function should be pseudorandom, that is, if the seed to the function is unknown, it should be infeasible to distinguish the output from a truly random string.
== Applications ==
Mask generation functions, as generalizations of hash functions, are useful wherever hash functions are. However, use of an MGF is desirable in cases where a fixed-size hash would be inadequate. Examples include generating padding, producing one-time pads or keystreams in symmetric-key encryption, and yielding outputs for pseudorandom number generators.
=== Padding schemes ===
Mask generation functions were first proposed as part of the specification for padding in the RSA-OAEP algorithm. The OAEP algorithm required a cryptographic hash function that could generate an output equal in size to a "data block" whose length was proportional to the size of an arbitrarily long input message.
=== Random number generators ===
NIST Special Publication 800-90A defines a class of cryptographically secure random number generators, one of which is the "Hash DRBG", which uses a hash function with a counter to produce a sequence of random bits of the requested size.
== Examples ==
Perhaps the most common and straightforward mechanism to build a MGF is to iteratively apply a hash function together with an incrementing counter value. The counter may be incremented indefinitely to yield new output blocks until a sufficient amount of output is collected. This is the approach used in MGF1.
=== MGF1 ===
MGF1 is a mask generation function defined in the Public Key Cryptography Standard #1 published by RSA Laboratories:
==== Options ====
Hash: hash function (hLen denotes the length in octets of the hash function output)
==== Input ====
Z: seed from which the mask is generated, an octet string
l: intended length in octets of the mask, at most 2^32 · hLen
==== Output ====
mask: the mask, an octet string of length l; or "mask too long"
==== Steps ====
If l > 2^32 · hLen, output "mask too long" and stop.
Let T be the empty octet string.
For counter from 0 to ⌈l / hLen⌉ − 1, let C = I2OSP(counter, 4) (the four-octet big-endian encoding of counter) and set T = T ‖ Hash(Z ‖ C).
Output the leading l octets of T as the octet string mask.
=== Example code ===
Below is Python code implementing MGF1:
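A minimal implementation following the PKCS #1 (RFC 8017) description, with a small `i2osp` helper for the four-octet counter encoding:

```python
import hashlib

def i2osp(x: int, length: int) -> bytes:
    # Integer-to-Octet-String primitive: big-endian encoding of x.
    return x.to_bytes(length, "big")

def mgf1(seed: bytes, mask_len: int, hash_func=hashlib.sha256) -> bytes:
    """MGF1 as described in PKCS #1 (RFC 8017), appendix B.2.1."""
    h_len = hash_func().digest_size
    if mask_len > (2 ** 32) * h_len:
        raise ValueError("mask too long")
    T = b""
    counter = 0
    while len(T) < mask_len:
        T += hash_func(seed + i2osp(counter, 4)).digest()
        counter += 1
    return T[:mask_len]
```

Because the counter always starts at zero, a shorter mask is a prefix of any longer mask generated from the same seed.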
== References ==
In cryptography, a key signature is the result of a third-party applying a cryptographic signature to a representation of a cryptographic key. This is usually done as a form of assurance or verification: If "Alice" has signed "Bob's" key, it can serve as an assurance to another party, say "Eve", that the key actually belongs to Bob, and that Alice has personally checked and attested to this.
The representation of the key that is signed is usually shorter than the key itself, because most public-key signature schemes can only encrypt or sign short lengths of data. The signed representation is typically some derivative of the public key, such as a fingerprint produced via a hash function.
== See also ==
Key (cryptography)
Public key certificate
A cryptographic key is a string of data that is used to lock or unlock cryptographic functions, including authentication, authorization and encryption. Cryptographic keys are grouped into cryptographic key types according to the functions they perform.
== Description ==
Consider a keyring that contains a variety of keys. These keys might be various shapes and sizes, but each generally serves a separate purpose. One key might be used to start an automobile, while another might be used to open a safe deposit box. The automobile key will not open the safe deposit box, and vice versa. This analogy provides some insight into how cryptographic key types work: keys are categorized with respect to how they are used and what properties they possess.
A cryptographic key is categorized according to how it will be used and what properties it has. For example, a key might have one of the following properties: Symmetric, Public or Private. Keys may also be grouped into pairs that have one private and one public key, which is referred to as an Asymmetric key pair.
=== Asymmetric versus symmetric keys ===
Asymmetric keys differ from symmetric keys in that the algorithms use separate keys for encryption and decryption, while a symmetric key's algorithm uses a single key for both processes. Because multiple keys are used with an asymmetric algorithm, the process takes longer than with a symmetric key algorithm. However, the benefit lies in the fact that an asymmetric algorithm is much more secure than a symmetric key algorithm.
With a symmetric key, the key needs to be transmitted to the receiver, and there is always the possibility that the key could be intercepted or tampered with. With an asymmetric key, a message and/or accompanying data can be encrypted using the public key, but only the receiver, who possesses the private key corresponding to that public key, has the ability to decode it. Thus, asymmetric keys are suited for transmitting confidential messages and data, and for cases where authentication is required for assurance that a message has not been tampered with. A public key can be sent back and forth between recipients, but a private key remains fixed in one location and is never transmitted, which keeps it safe from being intercepted.
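The public/private split can be illustrated with textbook RSA on tiny primes (insecure and for illustration only; real deployments use vetted libraries and much larger keys):

```python
# Textbook RSA with tiny primes: anyone may encrypt with the public
# pair (n, e); only the holder of the private exponent d can decrypt.
p, q = 61, 53
n = p * q                    # public modulus, 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (modular inverse of e)

message = 65
ciphertext = pow(message, e, n)    # encryption needs only the public key
recovered = pow(ciphertext, d, n)  # decryption needs the private key
```

Distributing `(n, e)` reveals nothing practical about `d` when the primes are large, which is exactly the property that lets the public key travel freely while the private key stays put.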
=== Long term versus single use ===
Cryptographic keys may also be designated for long-term (static, archived) use or for use in a single session (ephemeral). The latter generally applies to the use of an Ephemeral Key Agreement Key. Most other key types are designed to last for long crypto-periods, from about one to two years. When a shorter crypto-period is required, different key types may be used, such as Data Encryption keys, Symmetric Authentication keys, Private Key-Transport keys, Key-Wrapping keys, Authorization keys or RNG keys.
== Key types ==
This page shows the classification of key types from the point of view of key management. In a key management system, each key should be labeled with one such type and that key should never be used for a different purpose.
According to NIST SP 800-57 (Revision 4) the following types of keys exist:
Private signature key
Private signature keys are the private keys of asymmetric (public) key pairs that are used by public key algorithms to generate digital signatures with possible long-term implications. When properly handled, private signature keys can be used to provide authentication, integrity and non-repudiation.
Public signature verification key
A public signature verification key is the public key of an asymmetric key pair that is used by a public key algorithm to verify digital signatures, either to authenticate a user's identity, to determine the integrity of the data, for non-repudiation, or a combination thereof.
Symmetric authentication key
Symmetric authentication keys are used with symmetric key algorithms to provide assurance of the integrity and source of messages, communication sessions, or stored data.
Private authentication key
A private authentication key is the private key of an asymmetric key pair that is used with a public key algorithm to provide assurance as to the integrity of information, and the identity of the originating entity or the source of messages, communication sessions, or stored data.
Public authentication key
A public authentication key is the public key of an asymmetric key pair that is used with a public key algorithm to determine the integrity of information and to authenticate the identity of entities, or the source of messages, communication sessions, or stored data.
Symmetric data encryption key
These keys are used with symmetric key algorithms to apply confidentiality protection to information.
Symmetric key wrapping key
Symmetric key wrapping keys are used to encrypt other keys using symmetric key algorithms. Key wrapping keys are also known as key encrypting keys.
Symmetric and asymmetric random number generation keys
These are keys used to generate random numbers.
Symmetric master key
A symmetric master key is used to derive other symmetric keys (e.g., data encryption keys, key wrapping keys, or authentication keys) using symmetric cryptographic methods.
Private key transport key
Private key transport keys are the private keys of asymmetric key pairs that are used to decrypt keys that have been encrypted with the associated public key using a public key algorithm. Key transport keys are usually used to establish keys (e.g., key wrapping keys, data encryption keys or MAC keys) and, optionally, other keying material (e.g., initialization vectors).
Public key transport key
Public key transport keys are the public keys of asymmetric key pairs that are used to encrypt keys using a public key algorithm. These keys are used to establish keys (e.g., key wrapping keys, data encryption keys or MAC keys) and, optionally, other keying material (e.g., Initialization Vectors).
Symmetric key agreement key
These symmetric keys are used to establish keys (e.g., key wrapping keys, data encryption keys, or MAC keys) and, optionally, other keying material (e.g., Initialization Vectors) using a symmetric key agreement algorithm.
Private static key agreement key
Private static key agreement keys are the private keys of asymmetric key pairs that are used to establish keys (e.g., key wrapping keys, data encryption keys, or MAC keys) and, optionally, other keying material (e.g., Initialization Vectors).
Public static key agreement key
Public static key agreement keys are the public keys of asymmetric key pairs that are used to establish keys (e.g., key wrapping keys, data encryption keys, or MAC keys) and, optionally, other keying material (e.g., Initialization Vectors).
Private ephemeral key agreement key
Private ephemeral key agreement keys are the private keys of asymmetric key pairs that are used only once to establish one or more keys (e.g., key wrapping keys, data encryption keys, or MAC keys) and, optionally, other keying material (e.g., Initialization Vectors).
Public ephemeral key agreement key
Public ephemeral key agreement keys are the public keys of asymmetric key pairs that are used in a single key establishment transaction to establish one or more keys (e.g., key wrapping keys, data encryption keys, or MAC keys) and, optionally, other keying material (e.g., Initialization Vectors).
Symmetric authorization key
Symmetric authorization keys are used to provide privileges to an entity using a symmetric cryptographic method. The authorization key is known by the entity responsible for monitoring and granting access privileges for authorized entities and by the entity seeking access to resources.
Private authorization key
A private authorization key is the private key of an asymmetric key pair that is used to provide privileges to an entity.
Public authorization key
A public authorization key is the public key of an asymmetric key pair that is used to verify privileges for an entity that knows the associated private authorization key.
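The rule that each key be used for exactly one purpose is commonly enforced by deriving purpose-specific subkeys from a symmetric master key. A sketch of one common pattern, a simplified HMAC-based KDF in the spirit of NIST SP 800-108 (the labels and master key below are placeholders):

```python
import hmac, hashlib

def derive_subkey(master_key: bytes, label: bytes) -> bytes:
    # Derive a purpose-specific subkey from a master key via HMAC over a
    # distinct label, so no single key is used for two different purposes.
    return hmac.new(master_key, label, hashlib.sha256).digest()

master = b"\x00" * 32  # placeholder master key, for illustration only
enc_key = derive_subkey(master, b"data-encryption")
mac_key = derive_subkey(master, b"authentication")
```

Distinct labels yield independent-looking subkeys, and compromise of one derived key does not reveal the master key or its siblings.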
== References ==
== External links ==
Recommendation for Key Management — Part 1: general, NIST Special Publication 800-57
NIST Cryptographic Toolkit
In computer security, a key server is a computer that receives and then serves existing cryptographic keys to users or other programs. The users' programs can be running on the same network as the key server or on another networked computer.
The keys distributed by the key server are almost always provided as part of cryptographically protected public key certificates containing not only the key but also 'entity' information about the owner of the key. The certificate is usually in a standard format, such as the OpenPGP public key format, the X.509 certificate format, or the PKCS format. Furthermore, the key is almost always a public key for use with an asymmetric key encryption algorithm.
== History ==
Key servers play an important role in public key cryptography.
In public key cryptography an individual is able to generate a key pair, where one of the keys is kept private
while the other is distributed publicly. Knowledge of the public key does not compromise the security of public key cryptography. An
individual holding the public key of a key pair can use that key to carry out cryptographic operations that allow secret communications with strong authentication of the holder of the matching private key. The
need to have the public key of a key pair in order to start
communication or verify signatures is a bootstrapping problem. Locating keys
on the web or writing to the individual asking them to transmit their public
keys can be time-consuming and insecure. Key servers act as central repositories to
alleviate the need to individually transmit public keys and can act as the root of a chain of trust.
The first web-based PGP keyserver was written for a thesis by Marc Horowitz, while he was studying at MIT. Horowitz's keyserver was called the HKP Keyserver
after a web-based OpenPGP HTTP Keyserver Protocol (HKP), used to allow people to interact with the
keyserver. Users were able to upload, download, and search keys either through
HKP on TCP port 11371, or through web pages which ran CGI
scripts. Before the creation of the HKP Keyserver, keyservers relied on email
processing scripts for interaction.
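An HKP lookup is an ordinary HTTP GET against the `/pks/lookup` path (see the draft-shaw-openpgp-hkp specification). For illustration, a sketch that builds such a URL with the standard library (the hostname and key ID below are hypothetical):

```python
from urllib.parse import urlencode, urlunsplit

def hkp_lookup_url(host, key_id, port=11371):
    # Builds an HKP "get" request URL; key_id is a hex key ID or
    # fingerprint prefixed with 0x. options=mr asks for machine-readable
    # output. 11371 is the traditional HKP TCP port.
    query = urlencode({"op": "get", "options": "mr", "search": key_id})
    return urlunsplit(("http", f"{host}:{port}", "/pks/lookup", query, ""))

url = hkp_lookup_url("keys.example.org", "0xDEADBEEFDEADBEEF")
```

The same endpoint serves `op=index` for searching; HKPS simply carries the identical requests over TLS.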
=== Enterprise PGP ===
A separate key server, known as the PGP Certificate Server, was developed by PGP, Inc. and was used as the software (through version 2.5.x for the server) for the default key server in PGP through version 8.x (for the client software), keyserver.pgp.com. Network Associates was granted a patent co-authored by Jon Callas (United States Patent 6336186) on the key server concept.
To replace the aging Certificate Server, an LDAP-based key server, PGP Keyserver 7, was designed at Network Associates in part by Randy Harmon and Len Sassaman. With the release of PGP 6.0, LDAP became the preferred key server interface for Network Associates’ PGP versions. This LDAP and LDAPS key server (which also spoke HKP for backwards compatibility, though the protocol was, arguably correctly, referred to as “HTTP” or “HTTPS”) also formed the basis for the PGP Administration tools for private key servers in corporate settings, along with a schema for Netscape Directory Server.
PGP Keyserver 7 was later replaced in 2011 by the new PGP Corporation PGP Global Directory, which allows PGP keys to be published and downloaded using HTTPS or LDAP.
=== OpenPGP ===
The OpenPGP world largely used its own keyserver software, developed independently of the PGP Corporation suite. The main software used until the 2019 spamming attack was "SKS" (Synchronizing Key Server), written by Yaron Minsky. The public SKS pool (consisting of many interconnected SKS instances) provided access via HKPS (HKP with TLS) and HTTPS. It finally shut down in 2021 following a number of GDPR requests that it was unable to process effectively.
A number of newer pools using other software have been made available following the shutdown of the SKS pool; see § Keyserver examples.
== Public versus private keyservers ==
Many publicly accessible key servers, located around the world, are computers which store and provide OpenPGP keys over the Internet for users of that cryptosystem. In this instance, the computers can be, and mostly are, run by individuals as a pro bono service, facilitating the web of trust model PGP uses.
Several publicly accessible S/MIME key servers are available to publish or retrieve certificates used with the S/MIME cryptosystem.
There are also multiple proprietary public key infrastructure systems which maintain key servers for their users; those may be private or public, and only the participating users are likely to be aware of those keyservers at all.
== Problems with keyservers ==
=== Lack of retraction mechanism ===
The OpenPGP keyservers have suffered from a few problems since their development in the 1990s. Once a public key had been uploaded, it was purposefully made difficult to remove, as servers auto-synchronize with each other (this was done in order to resist government censorship). Some users stop using their public keys for various reasons, such as when they forget their passphrase, or when their private key is compromised or lost. In those cases it was hard to delete a public key from the server, and even if one were deleted, someone else could upload a fresh copy of the same public key. This leads to an accumulation of old fossil public keys that never go away, a form of "keyserver plaque".
The lack of a retraction mechanism also put the servers in breach of the European General Data Protection Regulation, which was cited as a reason for the closure of the SKS pool. Modern PGP keyservers allow deletion of keys. Because only the owner of a key's e-mail address can upload a key (see next section) in such servers, the key stays deleted unless the owner decides otherwise.
=== Lack of ownership check ===
The keyservers also had no way to check whether a key was legitimate (i.e., belonged to its true owner). As a consequence, anyone could upload a bogus public key bearing the name of a person who in fact did not own it, or, even worse, exploit this as a vulnerability, as in the certificate spamming attack.: §2.2
Modern keyservers, starting with the PGP Global Directory, use the e-mail address for confirmation: the keyserver sends an email confirmation request to the putative key owner, asking that person to confirm that the key in question is theirs. If they confirm it, the PGP Global Directory accepts the key. The confirmation can be renewed periodically, to prevent the accumulation of keyserver plaque. The result is a higher-quality collection of public keys, each of which has been vetted by email with the key's apparent owner. As a consequence, however, another problem arises: because the PGP Global Directory allows key account maintenance and verifies only by email, not cryptographically, anybody with access to the email account could, for example, delete a key and upload a bogus one.
The last Internet Engineering Task Force draft for HKP also defines a distributed key server network based on DNS SRV records: to find the key of someone@example.com, one can query example.com's designated key server.
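The SRV-based discovery above can be sketched as follows. The `_hkp._tcp` service label follows the naming pattern in the HKP draft; the standard library has no SRV resolver, so the resolved record here is supplied by the caller (a resolver library would normally provide it), and all hostnames are placeholders.

```python
def hkp_srv_name(email_domain):
    """DNS name whose SRV records advertise the domain's HKP key server."""
    return f"_hkp._tcp.{email_domain}"

def keyserver_base_url(srv_record):
    """Turn a resolved SRV record (priority, weight, port, target) into
    the base URL of that domain's key server."""
    _priority, _weight, port, target = srv_record
    return f"http://{target.rstrip('.')}:{port}"

# To find a key for someone@example.com, a client would resolve
# hkp_srv_name("example.com") and query the advertised server:
name = hkp_srv_name("example.com")                        # "_hkp._tcp.example.com"
base = keyserver_base_url((10, 5, 11371, "keys.example.com."))
```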
=== Leakage of personal relationships ===
For many individuals, the purpose of using cryptography is to obtain a higher level of privacy in personal interactions and relationships. It has been pointed out that allowing a public key to be uploaded to a key server when using decentralized web-of-trust-based cryptographic systems, like PGP, may reveal a good deal of information that an individual may wish to keep private. Since PGP relies on signatures on an individual's public key to determine the authenticity of that key, potential relationships can be revealed by analyzing the signers of a given key. In this way, models of entire social networks can be developed. (Mike Perry's 2013 criticism of the Web of Trust mentions the issue as having already been "discussed at length".)
A number of modern key servers remove third-party signatures from uploaded keys. Doing so removes all personal connections from the Web of Trust, thus preventing any such leakage. The main goal, however, was to minimize the storage space required, as "signature spamming" can easily add megabytes to a key.: §2.1
== Keyserver examples ==
These are some keyservers that are often used for looking up keys with gpg --recv-keys. They can be queried via https:// (HTTPS) or hkps:// (HKP over TLS).
keys.openpgp.org
keys.mailvelope.com/manage.html
pgp.mit.edu
keyring.debian.org (only contains keys from Debian project members)
keyserver.ubuntu.com (default in GnuPG since version 2.3.2)
pgp.surf.nl
== See also ==
Lightweight Directory Access Protocol
GnuPG
== References ==
== External links ==
The OpenPGP HTTP Keyserver Protocol (HKP) (December 2024)
OpenPGP Public Key Server on SourceForge - an OpenPGP key server software package distributed under a BSD-style license. It has largely been replaced by Hockeypuck.
Synchronizing Key Server (SKS) - an OpenPGP key server software package distributed under the GPL. It has largely been replaced by Hockeypuck.
Hockeypuck - a synchronising OpenPGP keyserver software package distributed under the AGPL.
Hagrid - a non-synchronising, verifying OpenPGP keyserver software package distributed under the AGPL.
PGP Global Directory hosted by the PGP Corporation.
The Chemical Biological Incident Response Force (CBIRF) is a Marine Corps unit responsible for countering the effects of a chemical, biological, radiological, nuclear, or high-yield explosive (CBRNE) incident, supporting counter-CBRN terrorism efforts, and conducting urban search and rescue during CBRN incidents. It was activated in April 1996 by General Charles C. Krulak, then Commandant of the Marine Corps. The unit is based at Naval Support Facility Indian Head in Indian Head, Maryland, and falls under the command of the United States Marine Corps Forces Command.
== Mission ==
When directed, a CBIRF unit will forward-deploy and/or respond to a credible threat of a chemical, biological, radiological, nuclear, or high-yield explosive (CBRNE) incident in order to assist local, state, or federal agencies and Unified Combatant Commanders in the conduct of consequence management operations.
CBIRF accomplishes this mission by providing capabilities for CBRN agent detection and identification, casualty search and extraction, technical rescue, personnel decontamination, and tactical emergency medical care and stabilization of contaminated victims. All CBIRF Marines and sailors are trained to perform both casualty extraction and decontamination. The lifesaving competencies required of personnel serving at CBIRF are taught during CBIRF Basic Operations Course (CBOC). By completing CBOC, all Marines and Sailors are able to support CBIRF's mission, and bolster the current response force.
== History ==
Since its inception CBIRF has trained many local agencies. It has also had a presence at the following:
1996 Summer Olympic Games
1997 Presidential Inaugural Ceremony
1997 Denver Summit of the Eight
1997-2020 Presidential State of the Union Addresses
1999 Pope Visit to St Louis, MO
1999 NATO Summit
2001 Presidential Inaugural Ceremony
2001 Cleaned Anthrax out of the Longworth House Office Bldg in Washington, DC
2001 Cleaned Anthrax out of the Hart Senate Bldg in Washington DC
2004 Ricin Incident Attack on Dirksen Senate Building
2004 Dedication of World War II Memorial
2004 Lying In State of President Ronald W. Reagan
2005 Presidential Inaugural Ceremony
2008 Republican National Convention in St. Paul, MN
2009 Presidential Inaugural Ceremony
2011 Operation Tomodachi
2012 NATO Summit
2013 Presidential Inaugural Ceremony
2016 State of the Union Address
2016 Republican and Democratic National Conventions
2017 Presidential Inaugural Ceremony
2018 State of the Union Address
2019 State of the Union Address
2020 State of the Union Address
2020 Republican and Democratic National Conventions
2021 Presidential Inaugural Ceremony
2021 Presidential Address to Congress
== Awards ==
== See also ==
2001 anthrax attacks – Bioterrorist attacks in the United States
2003 ricin letters – Attempted bioterrorist attacks
Fukushima Daiichi nuclear disaster – 2011 nuclear disaster in Japan
History of the United States Marine Corps
Organization of the United States Marine Corps
== References ==
This article incorporates public domain material from websites or documents of the United States Marine Corps.
== External links ==
CBIRF's official website
The Agricultural Research Service (ARS) is the principal in-house research agency of the United States Department of Agriculture (USDA). ARS is one of four agencies in USDA's Research, Education and Economics mission area. ARS is charged with extending the nation's scientific knowledge and solving agricultural problems through its four national program areas: nutrition, food safety and quality; animal production and protection; natural resources and sustainable agricultural systems; and crop production and protection. ARS research focuses on solving problems affecting Americans every day. The ARS Headquarters is located in the Jamie L. Whitten Building on Independence Avenue in Washington, D.C., and the headquarters staff is located at the George Washington Carver Center (GWCC) in Beltsville, Maryland. For 2018, its budget was $1.2 billion. For 2023, the budget grew to $1.9 billion.
== Mission ==
ARS conducts scientific research for the American public. Their main focus is on research to develop solutions to agricultural problems and provide information access and dissemination to:
ensure high quality, safe food and other agricultural products,
assess the nutritional needs of Americans,
sustain a competitive agricultural economy,
enhance the natural resource base and the environment, and
provide economic opportunities to rural citizens, communities, and society as a whole.
ARS research complements the work of state colleges and universities, agricultural experiment stations, other federal and state agencies, and the private sector. ARS research may often focus on regional issues that have national implications, and where there is a clear federal role. ARS also provides information on its research results to USDA action and regulatory agencies and to several other federal regulatory agencies, including the Food and Drug Administration and the United States Environmental Protection Agency.
ARS disseminates much of its research results through scientific journals, technical publications, Agricultural Research magazine, and other forums. Information is also distributed through ARS's National Agricultural Library (NAL). ARS has more than 150 librarians and other information specialists who work at two NAL locations—the Abraham Lincoln Building in Beltsville, Maryland; and the DC Reference Center in Washington, D.C. NAL provides reference and information services, document delivery, interlibrary loan and interlibrary borrowing services to a variety of audiences.
== History ==
Prior to the inception of ARS, agricultural research was first conducted under the umbrella of the Agricultural Department in the U.S. Patent Office in 1839. It was created to collect statistics, distribute seeds and compile and distribute pertinent information. In 1862 the USDA was created and agricultural research was moved to its department. That same year, the department issued its first research bulletin on the sugar content of grape varietals and their suitability for wine. Six years later the USDA would begin its first research on animal diseases, specifically hog cholera, which was causing devastating losses at the time. In the early 1900s the USDA began analyzing food composition and the first studies of nutrition and the effects of cooking and processing foods were conducted. Finally, in 1953 the Agricultural Research Service was created to be the USDA's primary scientific research agency.
== Research centers ==
ARS supports more than 2,000 scientists and post docs working on approximately 690 research projects within 15 National Programs at more than 90 research locations. The ARS is divided into 5 geographic areas: Midwest Area, Northeast Area, Pacific West Area, Plains Area, and Southeast Area. ARS has five major regional research centers:
Western Regional Research Center in Albany, California
Center for Agricultural Resources Research in Fort Collins, Colorado
Southern Regional Research Center in New Orleans, Louisiana
National Center for Agricultural Utilization Research in Peoria, Illinois
Eastern Regional Research Center in Wyndmoor, Pennsylvania
The research centers focus on innovation in agricultural practices, pest control, health, and nutrition among other things. Work at these facilities has given life to numerous products, processes, and technologies.
ARS' Henry A. Wallace Beltsville Agricultural Research Center (BARC) in Beltsville, Maryland, is the world's largest agricultural research complex. Other D.C. area locations include the United States National Agricultural Library and the United States National Arboretum.
ARS also has six major human nutrition research centers that focus on solving a wide spectrum of human nutrition questions by providing authoritative, peer-reviewed, science-based evidence. The centers are located in Arkansas, Maryland, Texas, North Dakota, Massachusetts (the Human Nutrition Research Center on Aging), and California. ARS scientists at these centers study the role of food and dietary components in human health from conception to advanced age.
The ARS also offers the Culture Collection, which is the largest public collection of microorganisms in the world, containing approximately 93,000 strains of bacteria and fungi. The ARS Culture Collection is housed at the National Center for Agricultural Utilization Research (NCAUR). There is also a collection of entomopathogenic fungi, called ARSEF, at the Robert W. Holley Center for Agriculture and Health in Ithaca, New York. The ARS operates the U.S. Horticultural Research Laboratory in Fort Pierce, Florida, and the U.S. National Poultry Research Center in Athens, Georgia. Other notable ARS facilities include Northern Great Basin Experimental Range in Oregon, and formerly the Plum Island Animal Disease Center off Long Island.
Several ARS research units focus on pests, diseases, and management practices of horticultural crops. The Cereal Disease Laboratory is located on the campus of the University of Minnesota in Saint Paul. It primarily hosts research into rusts and Fusaria of cereals. The San Joaquin Valley Agricultural Sciences Center in Parlier, California conducts research on specialty crops including grapes, citrus, almonds, alfalfa, peaches, pomegranates, and many others. Several research locations throughout the US also house germplasm collections of plant species important for agricultural and industrial uses as part of the National Genetic Resources Program.
== Research impacts ==
From the very beginning, the Department of Agriculture, and in turn the Agricultural Research Service, has been focused on improving not only the farming industry but also the quality of food and the health of Americans. In 1985, technology to produce lactose-free milk, yogurt, and ice cream was developed through the Agricultural Research Service. The grape breeding program, which began in 1923 and is currently located in Parlier, CA, developed seedless grapes and continues to release new grapevine varieties with improved traits. The ARS Citrus and Subtropical Products Laboratory in Winter Haven, Florida, actively works to improve the taste of orange juice concentrate.
The Agricultural Research Service had a Toxoplasma gondii (T. gondii) research program, which experimented on cats infected with the parasite, from 1982 until 2018. As of September 2018, the prevalence of the T. gondii parasite in the U.S. had been reduced by 50%, which the ARS claims is a result of its research. The USDA has since discontinued the use of cats in its research amid accusations of animal abuse and of a lack of meaningful contribution to the field in recent years (only three of thirteen papers published about T. gondii by the ARS were published after the year 2000).
More recently, the ARS has focused research on genetics and plant and animal DNA. Their research has developed pest-resistant corn, faster growing plants and fish, and a focus on plant and animal genome research and mapping. Outside of scientific research, the ARS has worked to release databases on food components in order to assist consumers with making informed decisions about food choices.
== See also ==
Agricultural Resource Management Survey
Germplasm Resources Information Network
Human Nutrition Research Center on Aging
National Agricultural Center and Hall of Fame
National Clonal Germplasm Repository
National Interagency Confederation for Biological Research
Title 7 of the Code of Federal Regulations
== References ==
== Sources ==
"Agricultural Research Service". Archived from the original on October 14, 2004. Retrieved 7 October 2005.
"Image gallery". Agricultural Research Service. Archived from the original on 6 October 2005. Retrieved 7 October 2005. – An online catalog from the Agricultural Research Service Information Staff.
"Western Regional Research Center". Archived from the original on 1 September 2009. Retrieved 1 September 2009.
"Southern Regional Research Center". Archived from the original on 26 August 2009. Retrieved 1 September 2009.
"National Center for Agricultural Utilization Research". Archived from the original on 31 August 2009. Retrieved 1 September 2009.
"Eastern Regional Research Center". Archived from the original on 3 September 2009. Retrieved 1 September 2009.
"ARS Human Nutrition Research". Archived from the original on 2 September 2009. Retrieved 1 September 2009.
"National Agricultural Library". Archived from the original on 16 November 2008. Retrieved 19 November 2008.
"White Coat Waste Project" (PDF). White Coat Waste Project. Retrieved 2024-01-22.
== External links ==
Official website
Agricultural Research Service in the Federal Register
The G7-led Global Partnership Against the Spread of Weapons and Materials of Mass Destruction (Global Partnership) is an international security initiative announced at the 2002 G8 summit in Kananaskis, Canada, in response to the September 11 attacks. It is the primary multilateral group that coordinates funding and in-kind support to help vulnerable countries around the world combat the spread of weapons and materials of mass destruction (WMDs).
The Global Partnership began as a 10-year, US$20 billion initiative aimed at addressing the threat of WMD proliferation to non-state actors and states of proliferation concern. The initial focus was on programming in Russia and other countries of the Former Soviet Union (FSU) to mitigate serious threats posed by Soviet-era WMD legacies. Specific priorities included destroying stockpiles of chemical weapons, dismantling decommissioned nuclear submarines, safeguarding and disposing of fissile material, and redirecting former weapons scientists. In recognition of the Global Partnership’s success and the increasingly global nature of WMD proliferation and terrorism challenges, at the 2008 G8 Summit in Toyako, Japan, leaders agreed to expand the geographic focus of the Global Partnership beyond Russia and the FSU, and to target WMD proliferation threats wherever they presented themselves. Additionally, at the 2011 G8 Summit in Deauville, France, G8 leaders extended the mandate of the Global Partnership beyond its original 10-year timeline (based on work undertaken by Canada during its 2010 G8 Presidency).
To date the Global Partnership community has delivered more than US$25 billion in tangible threat-reduction programming and continues to lead international efforts to mitigate all manner of CBRN threats around the world. As outlined in the Global Partnership’s annual Programming Annex, in 2020 a total of 245 Projects valued at US$669 million (or €555 million) were implemented by Members in dozens of countries in every region of the world. Many additional contributions were measured not by financial means, but by the leadership and diplomatic efforts of members in the areas of threat reduction or non-proliferation.
== Priorities and working groups ==
The Global Partnership delivers threat reduction programming in four priority areas: nuclear and radiological security, biological security, chemical security, and implementation of the United Nations Security Council Resolution (UNSCR) 1540.
Members of the Global Partnership coordinate and collaborate on an ongoing basis to develop and deliver projects and programs to mitigate all manner of threats posed by chemical, biological, radiological and nuclear (CBRN) weapons and related materials. Under the leadership of the rotational G7 Presidency, Global Partnership partners come together twice annually as the Global Partnership Working Group (GPWG) to review progress, assess the threat landscape and discuss where and how members can meaningfully engage to prevent terrorists and states of proliferation concern from acquiring and using weapons of mass destruction.
There are four sub-working groups subsumed under the GPWG. They aim to facilitate regular dialogue between experts on the Global Partnership's thematic priorities:
Biological Security Working Group (BSWG)
Chemical Security Working Group (CSWG)
CBRN Working Group (CBRNWG)
Nuclear & Radiological Security Working Group (NRWSG)
== Member countries and partners ==
Although originally launched as a G8 initiative and retaining a strong affiliation with the G7 (e.g. the Global Partnership Presidency rotates with that of the G7), the Global Partnership currently includes 30 active member countries and the European Union. Russia does not currently participate, consistent with its exclusion from the G7.
The Global Partnership coordinates and collaborates with a variety of international organizations, initiatives and non-governmental organizations (NGOs) with similarly aligned objectives. Some of these key partners include the International Atomic Energy Agency (IAEA), the International Criminal Police Organization (INTERPOL), the Organisation for the Prohibition of Chemical Weapons (OPCW), the United Nations Security Council Resolution 1540 Committee, the World Health Organization, and other United Nations agencies.
== See also ==
Arms control
Disarmament
Weapons of Mass Destruction
== References ==
== External links ==
Official website of the Global Partnership
Article on the Global Partnership by the Nuclear Threat Initiative
Official document outlining the "International Activities of Global Partnership Member Countries related to Article X of the Biological Weapons Convention", 4 December 2018
G8 and G7 statements on the Global Partnership
The Johns Hopkins Center for Health Security (abbreviated CHS) is an independent, nonprofit organization of the Johns Hopkins Bloomberg School of Public Health. The center works to protect people's health from epidemics and pandemics and ensures that communities are resilient to major challenges. The center is also concerned with biological weapons and the biosecurity implications of emerging biotechnology.
The Center for Health Security gives policy recommendations to the United States government, the World Health Organization and the UN Biological Weapons Convention.
== History ==
The Center for Health Security began as the Johns Hopkins Center for Civilian Biodefense Strategies (CCBS) in 1998 at the Johns Hopkins Bloomberg School of Public Health. D. A. Henderson served as the founding director. At that time, the center was the first and only academic center focused on biosecurity policy and practice.
At one point around 2003, CHS had become part of a new umbrella organization called the Institute for Global Health and Security at the Johns Hopkins Bloomberg School of Public Health.
In November 2003, some of the leaders left Johns Hopkins to join the University of Pittsburgh Medical Center (UPMC), and launched their own Center for Biosecurity of UPMC. This move apparently split the organization in two, and it is unclear what happened to the old organization.
On April 30, 2013, the UPMC Center changed its name from "Center for Biosecurity of UPMC" to "UPMC Center for Health Security". This name change reflected a broadening of the scope of CHS's work.
In January 2017, the JHU Center became part of the Johns Hopkins Bloomberg School of Public Health. Its domain name changed from upmchealthsecurity.org to centerforhealthsecurity.org.
== Funding ==
In 2002, the center received a $1 million grant from the US federal government.
Before 2017, CHS was heavily reliant on government funding.
In January 2017, the Open Philanthropy Project awarded a $16 million grant over three years to the Center for Health Security. Another grant of $19.5 million was awarded in September 2019.
== Publications ==
The Center for Health Security publishes three online newsletters:
Clinicians' Biosecurity News (formerly the Clinicians' Biosecurity Network Report), published twice each month
Health Security Headlines, a news digest published three times a week. It was previously called Biosecurity Briefing, then Biosecurity News in Brief starting in 2009, then Biosecurity News Today starting in 2010 or 2011, and finally Health Security Headlines starting in 2013. The digest was weekly until February 2009 and daily from 2009 until late 2021, when it changed to three times per week to accommodate the COVID-19 Update briefings published twice a week since January 2020
Preparedness Pulsepoints, published weekly
It maintains and provides editorial oversight for the peer-reviewed journal Health Security, which is part of the Mary Ann Liebert publishing group. The journal was launched in 2003 and was called Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science until 2015.
CHS published the blog The Bifurcated Needle until 2020.
The Open Philanthropy Project's grant writeup of CHS noted several publications:
Boddie, Crystal; Watson, Matthew; Ackerman, Gary; Gronvall, Gigi Kwik (August 21, 2015). "Assessing the bioweapons threat: Is there a foundation of agreement among experts about risk?" (PDF). Science. 349 (6250): 792–793. doi:10.1126/science.aab0713. ISSN 0036-8075. PMID 26293941. S2CID 206637099. Archived from the original (PDF) on September 18, 2017. Retrieved February 9, 2017.
Inglesby, Thomas V.; Relman, David A. (February 1, 2016). "How likely is it that biological agents will be used deliberately to cause widespread harm?". EMBO Reports. 17 (2): 127–130. doi:10.15252/embr.201541674. ISSN 1469-3178. PMC 5290809. PMID 26682799.
Gronvall, Gigi Kwik; Shearer, Matthew; Collins, Hannah; Inglesby, Thomas (July 14, 2016). "Improving Security through International Biosafety Norms" (PDF). UPMC Center for Health Security. Archived from the original (PDF) on November 9, 2016. Retrieved February 10, 2017.
The center has published in journals including JAMA and The Lancet. A full list of publications is available on the CHS website. As of February 2017, the list shows more than 400 publications.
== Major conferences and tabletop exercises ==
=== Operation Dark Winter ===
From June 22–23, 2001, CHS co-hosted Operation Dark Winter, a senior-level bioterrorism attack simulation involving a covert and widespread smallpox attack on the United States.
=== Atlantic Storm ===
On January 14, 2005, CHS helped to host Atlantic Storm, a table-top smallpox bioterrorism simulation.
=== Clade X ===
On May 15, 2018, the Center hosted Clade X, a day-long pandemic tabletop exercise that simulated a series of National Security Council–convened meetings of 10 US government leaders, played by individuals prominent in the fields of national security or epidemic response.
Drawing from actual events, Clade X identified important policy issues and preparedness challenges that could be solved with sufficient political will and attention. These issues were designed in a narrative to engage and educate the participants and the audience.
Clade X was livestreamed on Facebook and extensive materials from the exercise are available online.
=== Event 201 ===
On October 18, 2019, the CHS partnered with the World Economic Forum and the Bill and Melinda Gates Foundation to host the tabletop exercise Event 201 in New York City. According to the CHS, "the exercise illustrated areas where public/private partnerships will be necessary during the response to a severe pandemic in order to diminish large-scale economic and societal consequences".
Event 201 simulated the effects of a fictional coronavirus passing to humans via infected pig farms in Brazil with "no possibility of a vaccine being available in the first year". The simulation ended after 18 months and projected 65 million deaths from the coronavirus.
=== Southeast Asia Biosecurity Dialogue ===
A series of Track II multilateral dialogues cohosted by the Center in the Southeast Asia region ultimately helped to establish the Asia Centre for Health Security.
=== Other ===
== See also ==
== References ==
== External links ==
"Experts predicted a coronavirus pandemic years ago. Now it's playing out before our eyes" CNN
"SPARS Pandemic Scenario"
Official website
The Bifurcated Needle, the Center for Health Security's blog
Clinicians' Biosecurity News Archived December 30, 2021, at the Wayback Machine, a twice monthly newsletter published by the Center
Rad Resilient City; Rad Resilient City Preparedness Checklist Actions
Real-time outbreak and disease surveillance system (RODS) is a syndromic surveillance system developed by the University of Pittsburgh, Department of Biomedical Informatics. It is a "prototype developed at the University of Pittsburgh where real-time clinical data from emergency departments within a geographic region can be integrated to provide an instantaneous picture of symptom patterns and early detection of epidemic events."
RODS uses a combination of various monitoring tools.
The first tool is a moving average with a 120-day sliding phase-I-window.
The second tool is a nonstandard combination of CUSUM and EWMA, where an EWMA is used to predict next-day counts and a CUSUM monitors the residuals from these predictions.
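A minimal sketch of this predict-then-monitor idea follows. The smoothing constant `lam`, the CUSUM reference value `k`, and the threshold `h` are illustrative placeholders, not the parameters used in RODS:

```python
def ewma_cusum_alarms(counts, lam=0.4, k=2.0, h=6.0):
    """Forecast each day's count with an EWMA and run a one-sided
    CUSUM on the forecast residuals; return indices of alarm days.
    Parameter values are illustrative, not the RODS settings."""
    alarms = []
    forecast = counts[0]          # seed the EWMA with the first observation
    cusum = 0.0
    for day, observed in enumerate(counts[1:], start=1):
        residual = observed - forecast          # today's forecast error
        cusum = max(0.0, cusum + residual - k)  # one-sided CUSUM of residuals
        if cusum > h:
            alarms.append(day)
            cusum = 0.0                         # reset after signalling
        forecast = lam * observed + (1 - lam) * forecast  # update the EWMA
    return alarms
```

A sustained jump in counts accumulates quickly in the CUSUM even when no single day is extreme, which is the motivation for combining the two charts.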
The third monitoring tool in RODS is a recursive least squares (RLS) algorithm, which fits an autoregressive model to the counts and updates estimates continuously by minimizing prediction error. A Shewhart I-chart is then applied to the residuals, using a threshold of 4 standard deviations.
The fourth tool in RODS implements a wavelet approach, which decomposes the time series using Haar wavelets, and uses the lowest resolution to remove long-term trends from the raw series. The residuals are then monitored using an ordinary Shewhart I-chart with a threshold of 4 standard deviations.
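The trend-removal-then-Shewhart pattern shared by the third and fourth tools can be sketched as follows, using blockwise means over 2**levels points as the coarsest Haar approximation. The block size and the baseline-based estimation of the standard deviation are simplifying assumptions, not the RODS implementation:

```python
def haar_shewhart_alarms(series, levels=2, sigma_mult=4.0):
    """Subtract the coarse Haar approximation (blockwise means over
    2**levels points) as the long-term trend, estimate the residual
    spread from all but the most recent block, and flag residuals in
    the last block exceeding mean + sigma_mult * sigma (a Shewhart
    individuals chart). Assumes len(series) is a multiple of 2**levels."""
    step = 2 ** levels
    trend = []
    for start in range(0, len(series), step):
        block = series[start:start + step]
        trend.extend([sum(block) / len(block)] * len(block))  # coarse Haar approximation
    residuals = [x - t for x, t in zip(series, trend)]
    baseline = residuals[:-step]                              # historical residuals
    mu = sum(baseline) / len(baseline)
    sigma = (sum((r - mu) ** 2 for r in baseline) / len(baseline)) ** 0.5
    limit = mu + sigma_mult * sigma
    return [i for i in range(len(series) - step, len(series)) if residuals[i] > limit]
```

Removing the low-resolution approximation first means the control limit responds to short-term spikes rather than to slow seasonal drift.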
== References ==
The Laboratory Response Network (LRN) is a collaborative effort within the US federal government involving the Association of Public Health Laboratories and the Centers for Disease Control and Prevention (CDC). Most state public health laboratories participate as reference laboratories (formerly level B/C) of the LRN. These facilities support hundreds of sentinel (formerly level A) laboratories in local hospitals throughout the United States and can provide sophisticated confirmatory diagnosis and typing of biological agents that may be used in a bioterrorist attack or other bio-agent incident. The LRN was established in 1999.
== Levels ==
The LRN consists of a loose network of government labs at three levels:
=== Sentinel Laboratories ===
These laboratories, found in many hospitals and local public health facilities, have the ability to rule out specific bioterrorism threat agents, to handle specimens safely, and to forward specimens to higher-level labs within the network.
=== Reference Laboratories ===
These laboratories (more than 100), typically found at state health departments and at military, veterinary, agricultural, and water-testing facilities, can confirm the presence of the various biological threat agents. They can use BSL-3 practices and can often conduct nucleic acid amplification and molecular typing studies.
=== National Laboratories ===
These laboratories, including those at CDC and U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID), can use BSL-4 practices and serve as the final authority in the evaluation of potential bioterrorism specimens. They provide specialized reagents to lower level laboratories and have the ability to bank specimens, perform serotyping, and detect genetic recombinants and chimeras.
== References ==
== See also ==
BioWatch
Gas chromatography (GC) is a common type of chromatography used in analytical chemistry for separating and analyzing compounds that can be vaporized without decomposition. Typical uses of GC include testing the purity of a particular substance, or separating the different components of a mixture. In preparative chromatography, GC can be used to prepare pure compounds from a mixture.
Gas chromatography is also sometimes known as vapor-phase chromatography (VPC), or gas–liquid partition chromatography (GLPC). These alternative names, as well as their respective abbreviations, are frequently used in scientific literature.
Gas chromatography is the process of separating compounds in a mixture by injecting a gaseous or liquid sample into a mobile phase, typically called the carrier gas, and passing the gas through a stationary phase. The mobile phase is usually an inert gas or an unreactive gas such as helium, argon, nitrogen or hydrogen. The stationary phase can be solid or liquid, although most GC systems today use a polymeric liquid stationary phase. The stationary phase is contained inside of a separation column. Today, most GC columns are fused silica capillaries with an inner diameter of 100–320 micrometres (0.0039–0.0126 in) and a length of 5–60 metres (16–197 ft). The GC column is located inside an oven where the temperature of the gas can be controlled and the effluent coming off the column is monitored by a suitable detector.
== Operating principle ==
A gas chromatograph is made of a narrow tube, known as the column, through which the vaporized sample passes, carried along by a continuous flow of inert or nonreactive gas. Components of the sample pass through the column at different rates, depending on their chemical and physical properties and the resulting interactions with the column lining or filling, called the stationary phase. The column is typically enclosed within a temperature controlled oven. As the chemicals exit the end of the column, they are detected and identified electronically.
== History ==
=== Background ===
Chromatography dates to 1903 in the work of the Russian scientist Mikhail Semenovich Tswett, who separated plant pigments via liquid column chromatography.
=== Invention ===
The invention of gas chromatography is generally attributed to Anthony T. James and Archer J.P. Martin. Their gas chromatograph used partition chromatography as the separating principle, rather than adsorption chromatography. The popularity of gas chromatography quickly rose after the development of the flame ionization detector.
Martin and his colleague Richard Synge, with whom he shared the 1952 Nobel Prize in Chemistry, had noted in an earlier paper that chromatography might also be used to separate gases. Synge pursued other work while Martin continued his work with James.
=== Gas adsorption chromatography precursors ===
German physical chemist Erika Cremer in 1947 together with Austrian graduate student Fritz Prior developed what could be considered the first gas chromatograph that consisted of a carrier gas, a column packed with silica gel, and a thermal conductivity detector. They exhibited the chromatograph at ACHEMA in Frankfurt, but nobody was interested in it.
N.C. Turner with the Burrell Corporation introduced in 1943 a massive instrument that used a charcoal column and mercury vapors. Stig Claesson of Uppsala University published in 1946 his work on a charcoal column that also used mercury.
Gerhard Hesse, while a professor at the University of Marburg/Lahn, decided to test the prevailing opinion among German chemists that molecules could not be separated in a moving gas stream. He set up a simple glass column filled with starch and successfully separated bromine and iodine using nitrogen as the carrier gas. He then built a system that flowed an inert gas through a glass condenser packed with silica gel and collected the eluted fractions.
Courtenay S. G. Phillips of Oxford University investigated separation in a charcoal column using a thermal conductivity detector. He consulted with Claesson and decided to use displacement as his separating principle. After learning about the results of James and Martin, he switched to partition chromatography.
=== Column technology ===
Early gas chromatography used packed columns, typically 1–5 m long and 1–5 mm in diameter, filled with particles. The resolution of packed columns was improved by the invention of the capillary column, in which the stationary phase is coated on the inner wall of the capillary.
== Physical components ==
=== Autosamplers ===
The autosampler provides the means to introduce a sample automatically into the inlets. Manual insertion of the sample is possible but is no longer common. Automatic insertion provides better reproducibility and time-optimization. Different kinds of autosamplers exist. Autosamplers can be classified in relation to sample capacity (auto-injectors vs. autosamplers, where auto-injectors can handle only a small number of samples), to robotic technologies (XYZ robot vs. rotating robot – the most common), or to analysis:
Liquid
Static head-space by syringe technology
Dynamic head-space by transfer-line technology
Solid phase microextraction (SPME)
=== Inlets ===
The column inlet (or injector) provides the means to introduce a sample into a continuous flow of carrier gas. The inlet is a piece of hardware attached to the column head.
Common inlet types are:
S/SL (split/splitless) injector – a sample is introduced into a heated small chamber via a syringe through a septum – the heat facilitates volatilization of the sample and sample matrix. The carrier gas then either sweeps the entirety (splitless mode) or a portion (split mode) of the sample into the column. In split mode, a part of the sample/carrier gas mixture in the injection chamber is exhausted through the split vent. Split injection is preferred when working with samples with high analyte concentrations (>0.1%) whereas splitless injection is best suited for trace analysis with low amounts of analytes (<0.01%). In splitless mode the split valve opens after a pre-set amount of time to purge heavier elements that would otherwise contaminate the system. This pre-set (splitless) time should be optimized: a shorter time (e.g., 0.2 min) reduces tailing but also the response, while a longer time (e.g., 2 min) increases tailing but also the signal.
On-column inlet – the sample is here introduced directly into the column in its entirety without heat, or at a temperature below the boiling point of the solvent. The low temperature condenses the sample into a narrow zone. The column and inlet can then be heated, releasing the sample into the gas phase. This ensures the lowest possible temperature for chromatography and keeps samples from decomposing above their boiling point.
PTV injector – Temperature-programmed sample introduction was first described by Vogt in 1979. Originally Vogt developed the technique as a method for the introduction of large sample volumes (up to 250 μL) in capillary GC. Vogt introduced the sample into the liner at a controlled injection rate. The temperature of the liner was chosen slightly below the boiling point of the solvent. The low-boiling solvent was continuously evaporated and vented through the split line. Based on this technique, Poy developed the programmed temperature vaporising injector; PTV. By introducing the sample at a low initial liner temperature many of the disadvantages of the classic hot injection techniques could be circumvented.
Gas source inlet or gas switching valve – gaseous samples in collection bottles are connected to what is most commonly a six-port switching valve. The carrier gas flow is not interrupted while a sample can be expanded into a previously evacuated sample loop. Upon switching, the contents of the sample loop are inserted into the carrier gas stream.
P/T (purge-and-trap) system – An inert gas is bubbled through an aqueous sample causing insoluble volatile chemicals to be purged from the matrix. The volatiles are 'trapped' on an absorbent column (known as a trap or concentrator) at ambient temperature. The trap is then heated and the volatiles are directed into the carrier gas stream. Samples requiring preconcentration or purification can be introduced via such a system, usually hooked up to the S/SL port.
The choice of carrier gas (mobile phase) is important. Hydrogen has a range of flow rates that are comparable to helium in efficiency. However, helium may be more efficient and provide the best separation if flow rates are optimized. Helium is non-flammable and works with a greater number of detectors and older instruments. Therefore, helium is the most common carrier gas used. However, the price of helium has gone up considerably over recent years, causing an increasing number of chromatographers to switch to hydrogen gas. Historical use, rather than rational consideration, may contribute to the continued preferential use of helium.
=== Detectors ===
Commonly used detectors are the flame ionization detector (FID) and the thermal conductivity detector (TCD). While TCDs are beneficial in that they are non-destructive, their relatively poor sensitivity for most analytes inhibits widespread use. FIDs are sensitive primarily to hydrocarbons, and are more sensitive to them than TCD. FIDs cannot detect water or carbon dioxide, which makes them ideal for environmental organic analyte analysis. FID detection limits are typically two to three orders of magnitude lower than those of TCD.
The TCD relies on the thermal conductivity of matter passing around a thin wire of tungsten-rhenium with a current traveling through it. In this setup helium or nitrogen serves as the carrier gas because of its relatively high thermal conductivity, which keeps the filament cool and maintains uniform resistivity and electrical efficiency of the filament. When analyte molecules elute from the column mixed with carrier gas, the thermal conductivity decreases, the filament temperature and resistivity rise, and the resulting fluctuations in voltage produce the detector response. Detector sensitivity is proportional to filament current, while it is inversely proportional to the immediate environmental temperature of the detector as well as the flow rate of the carrier gas.
In a flame ionization detector (FID), electrodes are placed adjacent to a flame fueled by hydrogen/air near the exit of the column, and when carbon-containing compounds exit the column they are pyrolyzed by the flame. This detector works only for organic/hydrocarbon-containing compounds due to the ability of the carbons to form cations and electrons upon pyrolysis, which generates a current between the electrodes. The increase in current is translated and appears as a peak in a chromatogram. FIDs have low detection limits (a few picograms per second) but they are unable to generate ions from carbonyl-containing carbons. FID-compatible carrier gases include helium, hydrogen, nitrogen, and argon.
In FID, sometimes the stream is modified before entering the detector. A methanizer converts carbon monoxide and carbon dioxide into methane so that it can be detected. A different technology is the polyarc, by Activated Research Inc, that converts all compounds to methane.
Alkali flame detector (AFD) or alkali flame ionization detector (AFID) has high sensitivity to nitrogen and phosphorus, similar to NPD. However, the alkaline metal ions are supplied with the hydrogen gas, rather than a bead above the flame. For this reason AFD does not suffer the "fatigue" of the NPD, but provides a constant sensitivity over long period of time. In addition, when alkali ions are not added to the flame, AFD operates like a standard FID. A catalytic combustion detector (CCD) measures combustible hydrocarbons and hydrogen. Discharge ionization detector (DID) uses a high-voltage electric discharge to produce ions.
Flame photometric detector (FPD) uses a photomultiplier tube to detect spectral lines of the compounds as they are burned in a flame. Compounds eluting off the column are carried into a hydrogen fueled flame which excites specific elements in the molecules, and the excited elements (P,S, Halogens, Some Metals) emit light of specific characteristic wavelengths. The emitted light is filtered and detected by a photomultiplier tube. In particular, phosphorus emission is around 510–536 nm and sulfur emission is at 394 nm. With an atomic emission detector (AED), a sample eluting from a column enters a chamber which is energized by microwaves that induce a plasma. The plasma causes the analyte sample to decompose and certain elements generate an atomic emission spectra. The atomic emission spectra is diffracted by a diffraction grating and detected by a series of photomultiplier tubes or photo diodes.
Electron capture detector (ECD) uses a radioactive beta particle (electron) source to measure the degree of electron capture. ECDs are used for the detection of molecules containing electronegative/withdrawing elements and functional groups like halogens, carbonyls, nitriles, nitro groups, and organometallics. In this type of detector either nitrogen or 5% methane in argon is used as the mobile phase carrier gas. The carrier gas passes between two electrodes placed at the end of the column, and adjacent to the cathode (negative electrode) resides a radioactive foil such as 63Ni. The radioactive foil emits a beta particle (electron) which collides with and ionizes the carrier gas to generate more ions, resulting in a current. When analyte molecules with electronegative/withdrawing elements or functional groups pass through, they capture electrons, which results in a decrease in current, generating a detector response.
Nitrogen–phosphorus detector (NPD), a form of thermionic detector where nitrogen and phosphorus alter the work function on a specially coated bead and a resulting current is measured.
Dry electrolytic conductivity detector (DELCD) uses an air phase and high temperature (v. Coulsen) to measure chlorinated compounds.
Mass spectrometer (MS), also called GC-MS; highly effective and sensitive, even in a small quantity of sample. This detector can be used to identify the analytes in chromatograms by their mass spectrum. Some GC-MS are connected to an NMR spectrometer which acts as a backup detector. This combination is known as GC-MS-NMR. Some GC-MS-NMR are connected to an infrared spectrophotometer which acts as a backup detector. This combination is known as GC-MS-NMR-IR. It must, however, be stressed this is very rare as most analyses needed can be concluded via purely GC-MS.
Vacuum ultraviolet (VUV) represents the most recent development in gas chromatography detectors. Most chemical species absorb and have unique gas phase absorption cross sections in the approximately 120–240 nm VUV wavelength range monitored. Where absorption cross sections are known for analytes, the VUV detector is capable of absolute determination (without calibration) of the number of molecules present in the flow cell in the absence of chemical interferences.
Olfactometric detector, also called GC-O, uses a human assessor to analyse the odour activity of compounds. With an odour port or a sniffing port, the quality of the odour, the intensity of the odour and the duration of the odour activity of a compound can be assessed.
Other detectors include the Hall electrolytic conductivity detector (ElCD), helium ionization detector (HID), infrared detector (IRD), photo-ionization detector (PID), pulsed discharge ionization detector (PDD), and thermionic ionization detector (TID).
== Methods ==
The method is the collection of conditions in which the GC operates for a given analysis. Method development is the process of determining what conditions are adequate and/or ideal for the analysis required.
Conditions which can be varied to accommodate a required analysis include inlet temperature, detector temperature, column temperature and temperature program, carrier gas and carrier gas flow rates, the column's stationary phase, diameter and length, inlet type and flow rates, sample size and injection technique. Depending on the detector(s) (see below) installed on the GC, there may be a number of detector conditions that can also be varied. Some GCs also include valves which can change the route of sample and carrier flow. The timing of the opening and closing of these valves can be important to method development.
=== Carrier gas selection and flow rates ===
Typical carrier gases include helium, nitrogen, argon, and hydrogen. Which gas to use is usually determined by the detector being used, for example, a DID requires helium as the carrier gas. When analyzing gas samples the carrier is also selected based on the sample's matrix, for example, when analyzing a mixture in argon, an argon carrier is preferred because the argon in the sample does not show up on the chromatogram. Safety and availability can also influence carrier selection.
The purity of the carrier gas is also frequently determined by the detector, though the level of sensitivity needed can also play a significant role. Typically, purities of 99.995% or higher are used. The most common purity grades required by modern instruments for the majority of sensitivities are 5.0 grades, or 99.999% pure meaning that there is a total of 10 ppm of impurities in the carrier gas that could affect the results. The highest purity grades in common use are 6.0 grades, but the need for detection at very low levels in some forensic and environmental applications has driven the need for carrier gases at 7.0 grade purity and these are now commercially available. Trade names for typical purities include "Zero Grade", "Ultra-High Purity (UHP) Grade", "4.5 Grade" and "5.0 Grade".
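The grade notation works out as a simple count of nines. A small illustrative helper, assuming only the naming convention described above (e.g. grade "5.0" means 99.999 % purity):

```python
def grade_to_impurity_ppm(grade: str) -> float:
    """Convert a carrier-gas purity grade such as '5.0' (five nines,
    i.e. 99.999 % pure) or '4.5' (99.995 % pure) to the total
    impurity content in ppm. Assumes the N.M grade convention
    described in the text."""
    nines, last = grade.split(".")
    digits = "9" * int(nines) + last          # e.g. '5.0' -> '999990'
    purity_percent = float(digits[:2] + "." + digits[2:])  # -> 99.9990
    return (100.0 - purity_percent) * 1e4     # 1 % = 10,000 ppm
```

So a 5.0-grade gas carries about 10 ppm of impurities, matching the figure quoted above, while a 6.0-grade gas carries about 1 ppm.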
The carrier gas linear velocity affects the analysis in the same way that temperature does (see above). The higher the linear velocity, the faster the analysis, but the lower the separation between analytes. Selecting the linear velocity is therefore the same compromise between the level of separation and length of analysis as selecting the column temperature. The linear velocity is set via the carrier gas flow rate, taking the inner diameter of the column into account.
With GCs made before the 1990s, carrier flow rate was controlled indirectly by controlling the carrier inlet pressure, or "column head pressure". The actual flow rate was measured at the outlet of the column or the detector with an electronic flow meter, or a bubble flow meter, and could be an involved, time consuming, and frustrating process. It was not possible to vary the pressure setting during the run, and thus the flow was essentially constant during the analysis. The relation between flow rate and inlet pressure is calculated with Poiseuille's equation for compressible fluids.
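Written out for a circular column of inner diameter d and length L, carrying a gas of viscosity η between inlet pressure p_i and outlet pressure p_o, the compressible-flow form of the Hagen–Poiseuille equation referred to above is (in conventional symbols; this is the textbook relation, not any particular instrument's calibration):

```latex
% Volumetric flow rate, measured at the column outlet pressure p_o,
% for laminar flow of a compressible gas through an open tube:
F_o \;=\; \frac{\pi d^{4}}{256\,\eta L}\cdot\frac{p_{i}^{2}-p_{o}^{2}}{p_{o}}
```

The squared pressures, rather than a simple pressure difference, are what distinguish the compressible-gas form from the familiar liquid version of Poiseuille's equation.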
Many modern GCs, however, electronically measure the flow rate, and electronically control the carrier gas pressure to set the flow rate. Consequently, carrier pressures and flow rates can be adjusted during the run, creating pressure/flow programs similar to temperature programs.
=== Stationary compound selection ===
The polarity of the solute is crucial for the choice of stationary compound, which in an optimal case would have a similar polarity as the solute. Common stationary phases in open tubular columns are cyanopropylphenyl dimethyl polysiloxane, carbowax polyethyleneglycol, biscyanopropyl cyanopropylphenyl polysiloxane and diphenyl dimethyl polysiloxane. For packed columns more options are available.
=== Inlet types and flow rates ===
The choice of inlet type and injection technique depends on if the sample is in liquid, gas, adsorbed, or solid form, and on whether a solvent matrix is present that has to be vaporized. Dissolved samples can be introduced directly onto the column via a COC injector, if the conditions are well known; if a solvent matrix has to be vaporized and partially removed, a S/SL injector is used (most common injection technique); gaseous samples (e.g., air cylinders) are usually injected using a gas switching valve system; adsorbed samples (e.g., on adsorbent tubes) are introduced using either an external (on-line or off-line) desorption apparatus such as a purge-and-trap system, or are desorbed in the injector (SPME applications).
=== Sample size and injection technique ===
==== Sample injection ====
The real chromatographic analysis starts with the introduction of the sample onto the column. The development of capillary gas chromatography resulted in many practical problems with the injection technique. The technique of on-column injection, often used with packed columns, is usually not possible with capillary columns. In the injection system in the capillary gas chromatograph the amount injected should not overload the column, and the width of the injected plug should be small compared to the spreading due to the chromatographic process. Failure to comply with this latter requirement will reduce the separation capability of the column. As a general rule, the volume injected, Vinj, and the volume of the detector cell, Vdet, should be about 1/10 of the volume occupied by the portion of sample containing the molecules of interest (analytes) when they exit the column.
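As a rough worked example of that rule of thumb (the flow rate and peak width below are invented, illustrative numbers):

```python
def max_injection_volume(flow_ml_per_min, peak_width_s, fraction=0.1):
    """Volume occupied by an eluting peak = flow rate x peak width in time;
    the injected plug should be at most ~1/10 of it, per the rule of
    thumb in the text. Inputs are illustrative, not instrument specs."""
    peak_volume_ml = flow_ml_per_min * (peak_width_s / 60.0)  # mL occupied by the peak
    return fraction * peak_volume_ml

# With a 1 mL/min carrier flow and a peak 6 s wide, the peak occupies
# about 0.1 mL, suggesting an injected plug of roughly 0.01 mL or less.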
Some general requirements which a good injection technique should fulfill are that it should be possible to obtain the column's optimum separation efficiency, it should allow accurate and reproducible injections of small amounts of representative samples, it should induce no change in sample composition, it should not exhibit discrimination based on differences in boiling point, polarity, concentration or thermal/catalytic stability, and it should be applicable for trace analysis as well as for undiluted samples.
However, there are a number of problems inherent in the use of syringes for injection. Even the best syringes claim an accuracy of only 3%, and in unskilled hands, errors are much larger. The needle may cut small pieces of rubber from the septum as it injects sample through it. These can block the needle and prevent the syringe filling the next time it is used. It may not be obvious that this has happened. A fraction of the sample may get trapped in the rubber, to be released during subsequent injections. This can give rise to ghost peaks in the chromatogram. There may be selective loss of the more volatile components of the sample by evaporation from the tip of the needle.
=== Column selection ===
The choice of column depends on the sample and the analytes being measured. The main chemical attribute regarded when choosing a column is the polarity of the mixture, but functional groups can play a large part in column selection. The polarity of the sample must closely match the polarity of the column stationary phase to increase resolution and separation while reducing run time. The separation and run time also depend on the film thickness (of the stationary phase), the column diameter and the column length.
=== Column temperature and temperature program ===
The column(s) in a GC are contained in an oven, the temperature of which is precisely controlled electronically. (When discussing the "temperature of the column," an analyst is technically referring to the temperature of the column oven. The distinction, however, is not important and will not subsequently be made in this article.)
The rate at which a sample passes through the column is directly proportional to the temperature of the column. The higher the column temperature, the faster the sample moves through the column. However, the faster a sample moves through the column, the less it interacts with the stationary phase, and the less the analytes are separated.
In general, the column temperature is selected to compromise between the length of the analysis and the level of separation.
A method which holds the column at the same temperature for the entire analysis is called "isothermal". Most methods, however, increase the column temperature during the analysis; the initial temperature, the rate of temperature increase (the temperature "ramp"), and the final temperature together constitute the temperature program.
A temperature program allows analytes that elute early in the analysis to separate adequately, while shortening the time it takes for late-eluting analytes to pass through the column.
== Data reduction and analysis ==
=== Qualitative analysis ===
Generally, chromatographic data is presented as a graph of detector response (y-axis) against retention time (x-axis), which is called a chromatogram. This provides a spectrum of peaks for a sample representing the analytes present in a sample eluting from the column at different times. Retention time can be used to identify analytes if the method conditions are constant. Also, the pattern of peaks will be constant for a sample under constant conditions and can identify complex mixtures of analytes. However, in most modern applications, the GC is connected to a mass spectrometer or similar detector that is capable of identifying the analytes represented by the peaks.
=== Quantitative analysis ===
The area under a peak is proportional to the amount of analyte present in the chromatogram. By calculating the area of the peak using the mathematical function of integration, the concentration of an analyte in the original sample can be determined. Concentration can be calculated using a calibration curve created by finding the response for a series of concentrations of analyte, or by determining the relative response factor of an analyte. The relative response factor is the expected ratio of an analyte to an internal standard (or external standard) and is calculated by finding the response of a known amount of analyte and a constant amount of internal standard (a chemical added to the sample at a constant concentration, with a distinct retention time to the analyte).
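A minimal sketch of the two calculations described above: trapezoidal integration of a peak, and internal-standard quantitation via a relative response factor. The function names and the toy chromatogram are illustrative, not drawn from any GC software:

```python
def peak_area(times, signal, start, end):
    """Trapezoidal integration of the detector signal between two
    retention times (a crude stand-in for the integrator's job)."""
    area = 0.0
    for i in range(len(times) - 1):
        if start <= times[i] and times[i + 1] <= end:
            area += 0.5 * (signal[i] + signal[i + 1]) * (times[i + 1] - times[i])
    return area

def concentration_via_rrf(analyte_area, istd_area, istd_conc, rrf):
    """Internal-standard quantitation:
    C_analyte = (A_analyte / A_istd) * C_istd / RRF,
    where RRF is the relative response factor described in the text."""
    return (analyte_area / istd_area) * istd_conc / rrf
```

In practice the integration step is handled by the data system, but the proportionality between peak area and analyte amount is exactly what these two functions combine.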
In most modern GC-MS systems, computer software is used to draw and integrate peaks, and match MS spectra to library spectra.
== Applications ==
In general, substances that vaporize below 300 °C (and therefore are stable up to that temperature) can be measured quantitatively. The samples are also required to be salt-free; they should not contain ions. Very minute amounts of a substance can be measured, but it is often required that the sample must be measured in comparison to a sample containing the pure, suspected substance known as a reference standard.
Various temperature programs can be used to make the readings more meaningful; for example to differentiate between substances that behave similarly during the GC process.
Professionals working with GC analyze the content of a chemical product, for example in assuring the quality of products in the chemical industry; or measuring chemicals in soil, air or water, such as soil gases. GC is very accurate if used properly and can measure picomoles of a substance in a 1 ml liquid sample, or parts-per-billion concentrations in gaseous samples.
In practical courses at colleges, students sometimes get acquainted with the GC by studying the contents of lavender oil or measuring the ethylene that is secreted by Nicotiana benthamiana plants after artificially injuring their leaves. Such instruments analyse hydrocarbons (C2–C40+). In a typical experiment, a packed column is used to separate the light gases, which are then detected with a TCD. The hydrocarbons are separated using a capillary column and detected with a FID. A complication with light gas analyses that include H2 is that He, which is the most common and most sensitive inert carrier (sensitivity is proportional to molecular mass), has an almost identical thermal conductivity to hydrogen (it is the difference in thermal conductivity between two separate filaments in a Wheatstone bridge type arrangement that shows when a component has been eluted). For this reason, dual TCD instruments with a separate channel for hydrogen that uses nitrogen as a carrier are common. Argon is often used when analysing gas phase chemistry reactions such as F-T synthesis so that a single carrier gas can be used rather than two separate ones. The sensitivity is reduced, but this is a trade-off for simplicity in the gas supply.
Gas chromatography is used extensively in forensic science. Disciplines as diverse as solid drug dose (pre-consumption form) identification and quantification, arson investigation, paint chip analysis, and toxicology cases, employ GC to identify and quantify various biological specimens and crime-scene evidence.
== See also ==
Analytical chemistry
Chromatography
Gas chromatography–mass spectrometry
Gas chromatography-olfactometry
High-performance liquid chromatography
Inverse gas chromatography
Proton transfer reaction mass spectrometry
Secondary electrospray ionization
Selected ion flow tube mass spectrometry
Standard addition
Thin layer chromatography
Unresolved complex mixture
== References ==
== External links ==
Media related to Gas chromatography at Wikimedia Commons
Chromatographic Columns in the Chemistry LibreTexts Library
Blue Coat Systems, Inc., was a company that provided hardware, software, and services designed for cybersecurity and network management. In 2016 it was acquired by and folded into Symantec; in 2019, as part of Symantec's Enterprise Security business, it was sold to Broadcom.
The company was known as CacheFlow until 2002.
The company had "a broad security portfolio including hardware, software and services." The company was best known for web gateway appliances that scan internet traffic for security threats, authenticate users and manage encrypted traffic, as well as products to monitor and filter employee internet activity. It also produced consumer products, such as parental control software. The company's products were initially sold to internet service providers, but later products were intended for large companies.
== History ==
In March 1996, the company was founded as CacheFlow, Inc. in Redmond, Washington by Michael Malcolm, a computer scientist and professor at the University of Waterloo, Joe Pruskowski, and Doug Crow. The company initially raised $1 million in seed money from a dozen angel investors.
The company's goal was to develop appliances that would increase website speed by storing frequently accessed web data in the cache.
In October 1996, Benchmark Capital purchased 25% of the company for $2.8 million, equivalent to a price of 87.5 cents per share. By February 1997, the company had raised $5.1 million. In December 1997, U.S. Venture Partners acquired 17% of the company for $6 million.
In 1997, the company moved its headquarters to Silicon Valley.
In January 1998, the company released its first product, the CacheFlow 1000. It cached website objects that users were likely to use repeatedly, to increase load speed. In October 1999, the CacheOS operating system was released.
In April 1998, the company released the CacheFlow 100.
In mid-1998, during the dot-com bubble, the company made its first sales, earning just $809,000 over three months, and investors started pushing for an initial public offering (IPO).
In March 1999, Technology Crossover Ventures invested $8.7 million for 7% of the company, equivalent to a price of $4.575 per share.
In September 1999, the company released the CacheFlow 500, with a list price of $9,995. A competitive review of caching appliances in PC Magazine gave the CacheFlow 500 Editor's Choice. The editor said it had "excellent performance", a "plug-and-go" setup, and "good management tools". The review noted that its "most noteworthy features" were its DNS caching and object pipelining techniques, which allowed page data to be delivered in parallel, rather than sequential, streams.
In early November 1999, Marc Andreessen invested $3.1 million in shares, at a price of $11.04 per share.
On November 19, 1999, during the peak of the dot-com bubble, the company became a public company via an initial public offering, raising $132 million. Shares rose fivefold on its first day of trading. However, the company was not profitable and its product was unproven. At that time, the company had $3.6 million in revenue and $6.6 million in losses in the prior 3 months. The company had initially hoped to price its shares at $11-13 each and raise $50 million but due to strong demand, it priced its shares at $26 each. One month later, they traded as high as $182 each.
In 1999, board of directors member Andrew Rachleff of Benchmark Capital, an investor, brought in Brian NeSmith, who had just sold his company, Ipsilon Networks, to Nokia for $120 million, as chief executive officer.
In 2000, it introduced the CacheFlow Server Accelerator product family, which offloads content delivery tasks from web servers. Tests by Network World found the Server Accelerator 725 increased website load speed eight-fold. The CacheFlow Client Accelerator 600 product family was also introduced in 2000. It was the company's first product family for corporate networks, caching Web or multimedia content directly on the corporate network.
Revenues grew from $7 million in 1998 to $29 million in 1999 and $97 million in 2001. It already had 25% of the overall market for caching and 35% of the caching appliance market, but was still not profitable.
In March 2000, the company integrated its products with those of Akamai Technologies.
In May 2000, the company updated CacheFlow OS to cache multimedia content, including RealNetworks' RealSystem, Microsoft's Windows Media and Apple's Quicktime formats.
In June 2000, the company acquired Springbank Networks, an internet device company, for $180 million.
In December 2000, the company acquired Entera, a digital content streaming company, for $440 million.
In 2001, new features specifically for streaming media were introduced under the name "cIQ".
Later in 2001, the company was renamed Blue Coat Systems to focus on security appliances and simultaneously released the SG800. The appliance sat behind corporate firewalls to filter website traffic for viruses, worms and other harmful software. It had a custom operating system called Security Gateway and provided many of its security features through partners, like Symantec and Trend Micro.
The company lost 97% of its value from October 2000 to March 2001 as the dot-com bubble burst. The company continued to lose money. By 2002, several competing internet caching companies had abandoned the market, due to slow adoption of caching technology and most of CacheFlow's revenues came from its IT security products.
In February 2002, the company closed its Redmond, WA office and laid off about 60 workers there. Around a dozen Redmond workers were offered transfers to the Sunnyvale, CA office.
In April 2002, the company launched the fifth version of its operating system. The Security Gateway 600/6000 Series was the company's newest product family. It had a range of security features, such as authentication, internet use policies, virus scanning, content filtering, and bandwidth restrictions for streaming video applications.
In August 2002, the company started adding IT security features.
Also in August 2002, the company changed its name to Blue Coat Systems. The new name was intended to evoke the image of a police officer or guard. It then focused on internet security appliances. Its products were primarily used to control, monitor and secure internet use by employees. For example, a company could limit employee access to online gaming and video streaming, as well as scan attachments for viruses.
The shift in focus was followed at first by smaller losses and lower revenues, and eventually by company growth. Losses in 2002 after the rename were $247 million, about half of the prior year's losses.
By 2003, the company had 250 employees and $150 million in annual revenue.
In 2003, three new products were introduced for small and medium-sized businesses (SMBs) with 50, 100 or 250 users, in bundles with Websense and Secure Computing, although the Websense partnership was later at least partially dismantled and the two vendors were "slinging mud at each other" in the VAR community. This was followed by a second generation of the ProxySG product family, which added security features for instant messaging. A review in eWeek said the new ProxySG line was effective and easy to deploy, but that its ongoing maintenance fees were expensive.
In October 2003, the company acquired Ositis Software, maker of antivirus appliances, for $1.36 million.
In July 2004, the company acquired Cerberian, a URL filtering company, for $17.5 million.
In 2005, the company introduced an anti-spyware appliance called Spyware Interceptor and the following year it announced upcoming WAN optimization products. This was followed by SSL-VPN security appliances to secure remote connections.
In 2005, the company was profitable for the first time.
In 2006, the company introduced a free web tool, K9 Web Protection, that could monitor internet traffic, block certain websites, and identify phishing scams. In a November 2008 review, PC World gave it 4.25 out of 5 stars.
In March 2006, the company acquired Permeo Technologies, an end point security company, for $60 million.
In June 2006, the company acquired NetCache assets from NetApp, which were involved in proxy caching, for $30 million.
In June 2008, the company acquired Packeteer, engaged in WAN optimization, for $268 million.
In 2009, the company introduced a plugin for PacketShaper to throttle applications such as Spotify. A review of the PacketShaper 12000 in IT Pro gave it four out of five stars. The review said that "you won't find superior WAN traffic management anywhere else," but "the hardware platform could be more up to date considering the price."
In November 2009, the company went through a restructuring that included layoffs of 280 of its 1,500 employees and the closing of facilities in Latvia, New Jersey and the Netherlands.
In February 2010, the company acquired S7 Software Solutions, an IT research and development firm based in Bangalore, for $5.25 million.
In August 2010, Michael J. Borman was named president and CEO of the company.
In August 2011, CEO Michael Borman was fired for failing to meet performance goals and was replaced with Greg Clark.
In December 2011, Thoma Cressey Bravo acquired the company for $1.3 billion.
In March 2012, David Murphy was named chief operating officer and president of the company.
In December 2012, the company acquired Crossbeam Systems, maker of the X-Series of security appliances.
In May 2013, the company acquired Solera Networks, involved in big data security.
In December 2013, the company acquired Norman Shark, an anti-malware firm.
In 2014, Elastica's technology was incorporated into Blue Coat products, with its Audit subscription service being merged into the Blue Coat Appfeed in the ProxySG gateways.
In June 2015, the company acquired Perspecsys, involved in cloud security.
Also in March 2015, the company integrated technologies from its acquisitions of Norman Shark and Solera Networks to create a cloud-based product family called the Global Intelligence Network.
Also in March 2015, the company pressured security researcher Raphaël Rigo into canceling his talk at SyScan '15. Although Raphaël's talk did not contain any information about vulnerabilities on the ProxySG platform, the company still cited concerns that the talk was going to "provide information useful to the ongoing security assessments of ProxySG by Blue Coat." The canceling of the talk was met with harsh criticism by various prominent security researchers and professionals alike who generally welcome technical information about various security products that are widely used.
In March 2015, Bain Capital acquired the company from Thoma Bravo for $2.4 billion. Bain indicated that it hoped to launch another IPO and several months later, anonymous sources said the company was looking for investment banks for that purpose.
In August 2015, the company launched the Alliance Ecosystem of Endpoint Detection and Response (EDR) to share information about IT security threats across vendors and companies. Its first channel program was started that March and the Cloud Ready Partner Program was announced the following April.
In November 2015, the company acquired Elastica, involved in cloud security, for $280 million.
By 2015, the company had $200 million in annual profits and about $600 million in revenues, a 50% increase from 2011.
On June 2, 2016, the company once again filed plans for an initial public offering. However, on June 13, 2016, the company abandoned its IPO plans and announced that Symantec agreed to acquire the company for $4.65 billion. The acquisition was completed on August 1, 2016 and the company's products were integrated into those of Symantec.
Blue Coat CEO Greg Clark was named CEO of Symantec and Blue Coat COO Michael Fey was named president and COO of Symantec.
== Use by repressive regimes ==
Blue Coat devices are what is known as a "dual-use" technology, because they can be used both to defend corporate networks and by governments to censor and monitor the public's internet traffic. The appliances can see some types of encrypted traffic, block websites or record website traffic.
In October 2011, the U.S. government examined claims made by Telecomix, a hacktivist group, that Syria was using Blue Coat Systems products for internet censorship. Telecomix released 54GB of log data alleged to have been taken from 7 Blue Coat web gateway appliances that depict search terms, including "Israel" and "proxy", that were blocked in the country using the appliances. The company later acknowledged that its systems were being used in Syria, but asserted the equipment was sold to intermediaries in Dubai for use by an Iraqi governmental agency.
Despite the systems consistently sending "heartbeat" pings directly back to Blue Coat, the company claimed not to be monitoring the logs to identify from which country an appliance is communicating. Blue Coat announced that it would halt providing updates, support and other services for systems operating in Syria. In April 2013, the Bureau of Industry and Security announced a $2.8 million civil settlement with Computerlinks FZCO, a Dubai reseller, for violations of the Export Administration Regulations related to the transfer to Syria of Blue Coat products.
The company's devices were also used to block the public from viewing certain websites in Bahrain, Qatar, and the U.A.E., among others. By 2013, Citizen Lab had published three reports regarding the company's devices being found in countries known for using technology to violate human rights. It identified 61 countries using Blue Coat devices, including those known for censoring and surveilling their citizens' internet activity, such as China, Egypt, Russia, and Venezuela. However, "it remains unclear exactly how the technologies are being used, but experts say the tools could empower repressive governments to spy on opponents."
== See also ==
Kaleidescape
== References ==
In information science, profiling refers to the process of construction and application of user profiles generated by computerized data analysis.
This is the use of algorithms or other mathematical techniques that allow the discovery of patterns or correlations in large quantities of data, aggregated in databases. When these patterns or correlations are used to identify or represent people, they can be called profiles. Beyond a discussion of profiling technologies or population profiling, the notion of profiling in this sense concerns not just the construction of profiles, but also the application of group profiles to individuals, e.g., in the cases of credit scoring, price discrimination, or identification of security risks (Hildebrandt & Gutwirth 2008) (Elmer 2004).
Profiling is being used in fraud prevention, ambient intelligence, consumer analytics, and surveillance. Statistical methods of profiling include Knowledge Discovery in Databases (KDD).
== The profiling process ==
The technical process of profiling can be separated into several steps:
Preliminary grounding: The profiling process starts with a specification of the applicable problem domain and the identification of the goals of analysis.
Data collection: The target dataset or database for analysis is formed by selecting the relevant data in the light of existing domain knowledge and data understanding.
Data preparation: The data are preprocessed for removing noise and reducing complexity by eliminating attributes.
Data mining: The data are analysed with the algorithm or heuristics developed to suit the data, model and goals.
Interpretation: The mined patterns are evaluated on their relevance and validity by specialists and/or professionals in the application domain (e.g. excluding spurious correlations).
Application: The constructed profiles are applied, e.g. to categories of persons, to test and fine-tune the algorithms.
Institutional decision: The institution decides what actions or policies to apply to groups or individuals whose data match a relevant profile.
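The steps above can be sketched in miniature. The following is a deliberately simplified, hypothetical example (the dataset, the attribute names, and the trivial averaging used as the "mining" step are all invented for illustration, not drawn from any real profiling system): it constructs a group profile from collected records and then applies it to a new person whose data were not part of the construction.

```python
from collections import defaultdict

# Steps 2-3: a tiny, invented dataset after collection and preparation
# (attributes reduced to a postal code and an annual spend figure).
records = [
    {"postal_code": "1011", "annual_spend": 1200},
    {"postal_code": "1011", "annual_spend": 1800},
    {"postal_code": "2022", "annual_spend": 300},
    {"postal_code": "2022", "annual_spend": 500},
]

# Step 4 (data mining): derive a group profile per postal code --
# here simply the group's average spend.
by_group = defaultdict(list)
for record in records:
    by_group[record["postal_code"]].append(record["annual_spend"])
profiles = {code: sum(s) / len(s) for code, s in by_group.items()}

# Step 6 (application): a new person, whose data were not used in the
# construction, is matched against a group profile.
new_person = {"postal_code": "1011"}
predicted_spend = profiles[new_person["postal_code"]]
print(profiles)         # {'1011': 1500.0, '2022': 400.0}
print(predicted_spend)  # 1500.0
```

The feedback loop described below would correspond to re-running the averaging step as the predictions are compared against new data.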
Data collection, preparation and mining all belong to the phase in which the profile is under construction. However, profiling also refers to the application of profiles, meaning the usage of profiles for the identification or categorization of groups or individual persons. As can be seen in step six (application), the process is circular. There is a feedback loop between the construction and the application of profiles. The interpretation of profiles can lead to the reiterant – possibly real-time – fine-tuning of specific previous steps in the profiling process. The application of profiles to people whose data were not used to construct the profile is based on data matching, which provides new data that allows for further adjustments. The process of profiling is both dynamic and adaptive. A good illustration of the dynamic and adaptive nature of profiling is the Cross-Industry Standard Process for Data Mining (CRISP-DM).
== Types of profiling practices ==
In order to clarify the nature of profiling technologies, some crucial distinctions have to be made between different types of profiling practices, apart from the distinction between the construction and the application of profiles. The main distinctions are those between bottom-up and top-down profiling (or supervised and unsupervised learning), and between individual and group profiles.
=== Supervised and unsupervised learning ===
Profiles can be classified according to the way they have been generated (Fayyad, Piatetsky-Shapiro & Smyth 1996) (Zarsky 2002–3). On the one hand, profiles can be generated by testing a hypothesized correlation. This is called top-down profiling or supervised learning. It is similar to the methodology of traditional scientific research in that it starts with a hypothesis and consists of testing its validity. The result of this type of profiling is the verification or refutation of the hypothesis. One could also speak of deductive profiling. On the other hand, profiles can be generated by exploring a database, using the data mining process to detect patterns in the database that were not previously hypothesized. In a way, this is a matter of generating hypotheses: finding correlations one did not expect or even think of. Once the patterns have been mined, they will enter the loop – described above – and will be tested with the use of new data. This is called unsupervised learning.
Two things are important with regard to this distinction. First, unsupervised learning algorithms seem to allow the construction of a new type of knowledge, not based on hypotheses developed by a researcher and not based on causal or motivational relations, but exclusively on stochastic correlations. Second, unsupervised learning algorithms thus seem to allow for an inductive type of knowledge construction that does not require theoretical justification or causal explanation (Custers 2004).
Some authors claim that if the application of profiles based on computerized stochastic pattern recognition 'works', i.e. allows for reliable predictions of future behaviours, the theoretical or causal explanation of these patterns no longer matters (Anderson 2008). However, the idea that 'blind' algorithms provide reliable information does not imply that the information is neutral. In the process of collecting and aggregating data into a database (the first three steps of the process of profile construction), translations are made from real-life events to machine-readable data. These data are then prepared and cleansed to allow for initial computability. Potential bias will have to be located at these points, as well as in the choice of the algorithms that are developed. It is not possible to mine a database for all possible linear and non-linear correlations, meaning that the mathematical techniques developed to search for patterns will determine which patterns can be found. In the case of machine profiling, potential bias is not informed by common-sense prejudice or what psychologists call stereotyping, but by the computer techniques employed in the initial steps of the process. These techniques are mostly invisible to those to whom profiles are applied (because their data match the relevant group profiles).
=== Individual and group profiles ===
Profiles must also be classified according to the kind of subject they refer to. This subject can either be an individual or a group of people. When a profile is constructed with the data of a single person, this is called individual profiling (Jaquet-Chiffelle 2008). This kind of profiling is used to discover the particular characteristics of a certain individual, to enable unique identification or the provision of personalized services. However, personalized servicing is most often also based on group profiling, which allows categorisation of a person as a certain type of person, based on the fact that her profile matches with a profile that has been constructed on the basis of massive amounts of data about massive numbers of other people. A group profile can refer to the result of data mining in data sets that refer to an existing community that considers itself as such, like a religious group, a tennis club, a university, a political party etc. In that case it can describe previously unknown patterns of behaviour or other characteristics of such a group (community). A group profile can also refer to a category of people that do not form a community, but are found to share previously unknown patterns of behaviour or other characteristics (Custers 2004). In that case the group profile describes specific behaviours or other characteristics of a category of people, like for instance women with blue eyes and red hair, or adults with relatively short arms and legs. These categories may be found to correlate with health risks, earning capacity, mortality rates, credit risks, etc.
If an individual profile is applied to the individual that it was mined from, then that is direct individual profiling. If a group profile is applied to an individual whose data match the profile, then that is indirect individual profiling, because the profile was generated using data of other people. Similarly, if a group profile is applied to the group that it was mined from, then that is direct group profiling (Jaquet-Chiffelle 2008). However, in as far as the application of a group profile to a group implies the application of the group profile to individual members of the group, it makes sense to speak of indirect group profiling, especially if the group profile is non-distributive.
=== Distributive and non-distributive profiling ===
Group profiles can also be divided in terms of their distributive character (Vedder 1999). A group profile is distributive when its properties apply equally to all the members of its group: all bachelors are unmarried, or all persons with a specific gene have an 80% chance to contract a specific disease. A profile is non-distributive when the profile does not necessarily apply to all the members of the group: the group of persons with a specific postal code have an average earning capacity of XX, or the category of persons with blue eyes has an average chance of 37% to contract a specific disease. Note that in this case the chance of an individual to have a particular earning capacity or to contract the specific disease will depend on other factors, e.g. sex, age, background of parents, previous health, education. It should be obvious that, apart from tautological profiles like that of bachelors, most group profiles generated by means of computer techniques are non-distributive. This has far-reaching implications for the accuracy of indirect individual profiling based on data matching with non-distributive group profiles. Quite apart from the fact that the application of accurate profiles may be unfair or cause undue stigmatisation, most group profiles will not be accurate.
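The non-distributive case can be made concrete with a small, hypothetical calculation (the earnings figures are invented for illustration): the group's average is an accurate property of the group even though it applies to none of the group's members individually.

```python
# Invented earnings for three persons sharing one postal code.
earnings = [28_000, 31_000, 55_000]

# The non-distributive group profile: average earning capacity.
group_profile = sum(earnings) / len(earnings)
print(group_profile)  # 38000.0

# The profile is accurate for the group as a whole, yet no individual
# member actually earns the profiled amount.
print(any(e == group_profile for e in earnings))  # False
```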
== Applications ==
In the financial sector, institutions use profiling technologies for fraud prevention and credit scoring. Banks want to minimize the risks in giving credit to their customers. On the basis of extensive group profiling, customers are assigned a certain scoring value that indicates their creditworthiness. Financial institutions like banks and insurance companies also use group profiling to detect fraud or money laundering. Databases with transactions are searched with algorithms to find behaviors that deviate from the standard, indicating potentially suspicious transactions.
In the context of employment, profiles can be of use for tracking employees by monitoring their online behavior, for the detection of fraud by them, and for the deployment of human resources by pooling and ranking their skills. (Leopold & Meints 2008)
Profiling can also be used to support people at work, and also for learning, by intervening in the design of adaptive hypermedia systems personalizing the interaction. For instance, this can be useful for supporting the management of attention (Nabeth 2008).
In forensic science, the possibility exists of linking different databases of cases and suspects and mining these for common patterns. This could be used for solving existing cases or for the purpose of establishing risk profiles of potential suspects (Geradts & Sommer 2008) (Harcourt 2006).
=== Consumer profiling ===
Consumer profiling is a form of customer analytics, where customer data is used to make decisions on product promotion, the pricing of products, as well as personalized advertising. When the aim is to find the most profitable customer segment, consumer analytics draws on demographic data, data on consumer behavior, data on the products purchased, payment method, and surveys to establish consumer profiles. To establish predictive models on the basis of existing databases, the Knowledge Discovery in Databases (KDD) statistical method is used. KDD groups similar customer data to predict future consumer behavior. Other methods of predicting consumer behaviour are correlation and pattern recognition. Consumer profiles describe customers based on a set of attributes, and typically consumers are grouped according to income, living standard, age and location. Consumer profiles may also include behavioural attributes that assess a customer's motivation in the buyer decision process. Well-known examples of consumer profiles are Experian's Mosaic geodemographic classification of households, CACI's Acorn, and Acxiom's Personicx.
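As an illustrative sketch of the grouping idea, assuming invented customer data and invented segment boundaries (none of this reflects any real vendor's classification, such as Mosaic or Acorn), consumers can be bucketed by income and age bands and the most profitable segment identified by total purchases:

```python
from collections import defaultdict

# Hypothetical customers: (annual income, age, yearly purchases).
customers = [
    (30_000, 25, 400),
    (32_000, 29, 450),
    (80_000, 45, 2_000),
    (85_000, 50, 2_400),
]

def segment(income, age):
    """Assign a coarse demographic segment (boundaries invented)."""
    income_band = "high" if income >= 60_000 else "low"
    age_band = "40+" if age >= 40 else "under-40"
    return (income_band, age_band)

# Aggregate purchases per segment, then pick the most profitable one.
revenue_by_segment = defaultdict(int)
for income, age, purchases in customers:
    revenue_by_segment[segment(income, age)] += purchases

best = max(revenue_by_segment, key=revenue_by_segment.get)
print(best, revenue_by_segment[best])  # ('high', '40+') 4400
```

Real geodemographic classifications use far richer attributes, but the principle of assigning customers to shared bands and comparing segment-level outcomes is the same.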
=== Ambient intelligence ===
In a built environment with ambient intelligence everyday objects have built-in sensors and embedded systems that allow objects to recognise and respond to the presence and needs of individuals. Ambient intelligence relies on automated profiling and human–computer interaction designs. Sensors monitor an individual's action and behaviours, therefore generating, collecting, analysing, processing and storing personal data. Early examples of consumer electronics with ambient intelligence include mobile apps, augmented reality and location-based service.
== Risks and issues ==
Profiling technologies have raised a host of ethical, legal and other issues including privacy, equality, due process, security and liability. Numerous authors have warned against the affordances of a new technological infrastructure that could emerge on the basis of semi-autonomic profiling technologies (Lessig 2006) (Solove 2004) (Schwartz 2000).
Privacy is one of the principal issues raised. Profiling technologies make possible a far-reaching monitoring of an individual's behaviour and preferences. Profiles may reveal personal or private information about individuals that they might not even be aware of themselves (Hildebrandt & Gutwirth 2008).
Profiling technologies are by their very nature discriminatory tools. They allow unparalleled kinds of social sorting and segmentation which could have unfair effects. The people that are profiled may have to pay higher prices, they could miss out on important offers or opportunities, and they may run increased risks because catering to their needs is less profitable (Lyon 2003). In most cases they will not be aware of this, since profiling practices are mostly invisible and the profiles themselves are often protected by intellectual property or trade secret. This poses a threat to the equality of and solidarity of citizens. On a larger scale, it might cause the segmentation of society.
One of the problems underlying potential violations of privacy and non-discrimination is that the process of profiling is more often than not invisible to those being profiled. This creates difficulties in that it becomes hard, if not impossible, to contest the application of a particular group profile. This disturbs principles of due process: if a person has no access to the information on the basis of which they are withheld benefits or attributed certain risks, they cannot contest the way they are being treated (Steinbock 2005).
Profiles can be used against people when they end up in the hands of people who are not entitled to access or use the information. An important issue related to these breaches of security is identity theft.
When the application of profiles causes harm, it has to be determined who is to be held accountable: the software programmer, the profiling service provider, or the user of the profile? This issue of liability is especially complex where the application of profiles and the decisions based on them have themselves become automated, as in autonomic computing or ambient intelligence.
== See also ==
== References ==
Notes and other references
Corporate surveillance describes the practice of businesses monitoring and extracting information from their users, clients, or staff. This information may consist of online browsing history, email correspondence, phone calls, location data, and other private details. Acts of corporate surveillance frequently aim to boost performance, detect potential security problems, or adjust advertising strategies. These practices have been criticized for violating ethical standards and invading personal privacy. Critics and privacy activists have called for businesses to adopt rules and transparency surrounding their monitoring methods to ensure they are not misusing their position of authority or breaching regulatory standards.
Monitoring can feel intrusive and give the impression that the business does not promote ethical behavior among its personnel. Staff satisfaction, productivity, and staff turnover may all suffer as a result of the invasion of privacy.
== Monitoring methods ==
Employers may be authorized to gather information through keystroke logging and mouse tracking, which involves recording the keys individuals interact with and cursor position on computers. In cases where employment contracts permit it, they may also monitor webcam activity on company-provided computers. Employers may be able to view the emails sent from business accounts and may be able to see the websites visited when using a corporate internet connection. The screenshot capability is another tool that enables companies to see what remote workers are doing. This feature, which can be found in tracking software, takes screenshots throughout the day at predetermined or arbitrary intervals. Additionally, people who don't work in offices are observed. For instance, it has been claimed that Amazon has incorporated tracking technology to monitor warehouse staff and delivery drivers.
== Use of collected information ==
Information collected by corporations can be used for a variety of uses including marketing research, targeting advertising, fraud detection and prevention, ensuring policy adherence, preventing lawsuits, and safeguarding records and company assets.
== Privacy concerns ==
Concerns over corporate privacy have become more pressing due to companies' collection and manipulation of personal data. Since these practices have been recognized, there has been rising concern about both the security and the possible mishandling of the accumulated data.
Social media data collection and monitoring is among the areas of greatest concern regarding corporate surveillance. According to CareerBuilder, many employers check potential candidates' social media activity before hiring. Employers may justify this practice as due diligence, since an applicant's online presence can affect the company's reputation, and employers are often held legally responsible for their workers' digital actions.
These data can also be used for political ends. The Facebook-Cambridge Analytica data scandal of 2018 revealed that Cambridge Analytica's British branch had surreptitiously sold psychological data on American users to the Trump campaign. This information was supposed to be private, but protecting user information had reportedly not been a top priority for Facebook at the time.
== Laws and regulations ==
The National Labor Relations Act (NLRA) safeguards workplace democracy by giving workers in the private sector the basic freedom to demand better working conditions and to choose their representation without fear of retaliation.
The General Data Protection Regulation (GDPR) outlines the broad responsibilities of data controllers and the "processors" that handle personal data on their behalf. They must adopt security measures commensurate with the risk involved in the data processing operations they carry out.[1]
The Electronic Communications Privacy Act (ECPA), as amended, protects electronic, oral, and wire communications while they are being created, while they are in transit, and while they are stored on computers. Email, phone calls, and electronically stored data are covered by the Act.
== Sale of customer data ==
Data collected on individuals and groups can be sold to other corporations as business intelligence, so that they can use it for the aforementioned purposes. It can also be used for direct marketing, such as the targeted advertisements on Google and Yahoo that are tailored to the individual search-engine user by analyzing their search history and emails (if they use free webmail services).
For example, the world's most popular web search engine stores identifying information for each web search. Google stores an IP address and the search phrase used in a database for up to 2 years. Google also scans the content of emails of users of its Gmail webmail service, in order to create targeted advertising based on what people are talking about in their personal email correspondences. Google is, by far, the largest web advertising agency. Their revenue model is based on receiving payments from advertisers for each page-visit resulting from a visitor clicking on a Google AdWords ad, hosted either on a Google service or a third-party website. Millions of sites place Google's advertising banners and links on their websites, in order to share this profit from visitors who click on the ads. Each page containing Google advertisements adds, reads, and modifies cookies on each visitor's computer. These cookies track the user across all of these sites, and gather information about their web surfing habits, keeping track of which sites they visit, and what they do when they are on these sites. This information, along with the information from their email accounts, and search engine histories, is stored by Google to use for building a profile of the user to deliver better-targeted advertising.
== Surveillance of workers ==
In 1993, David Steingard and Dale Fitzgibbons argued that modern management, far from empowering workers, had features of neo-Taylorism, where teamwork perpetuated surveillance and control. They argued that employees had become their own "thought police" and the team gaze was the equivalent of Bentham's panopticon guard tower. A critical evaluation of the Hawthorne Plant experiments has in turn given rise to the notion of a Hawthorne effect, where workers increase their productivity in response to their awareness of being observed or because they are gratified for being chosen to participate in a project.
According to the American Management Association and the ePolicy Institute, who undertook a quantitative survey in 2007 about electronic monitoring and surveillance with approximately 300 US companies, "more than one fourth of employers have fired workers for misusing email and nearly one third have fired employees for misusing the Internet." Furthermore, about 30 percent of the companies had also fired employees for usage of "inappropriate or offensive language" and "viewing, downloading, or uploading inappropriate/offensive content." More than 40 percent of the companies monitor email traffic of their workers, and 66 percent of corporations monitor Internet connections. In addition, most companies use software to block websites such as sites with games, social networking, entertainment, shopping, and sports. The American Management Association and the ePolicy Institute also stress that companies track content that is being written about them, for example by monitoring blogs and social media, and scanning all files that are stored in a filesystem.
== Government use of corporate surveillance data ==
The United States government often gains access to corporate databases, either by producing a warrant for it, or by asking. The Department of Homeland Security has openly stated that it uses data collected from consumer credit and direct marketing agencies—such as Google—for augmenting the profiles of individuals that it is monitoring. The US government has gathered information from grocery store discount card programs, which track customers' shopping patterns and store them in databases, in order to look for terrorists by analyzing shoppers' buying patterns.
== Corporate surveillance of citizens ==
According to Dennis Broeders, "Big Brother is joined by big business". He argues that corporations are in any event interested in data on their potential customers, and that placing some forms of surveillance in the hands of companies results in companies owning video surveillance data for stores and public places. The commercial availability of surveillance systems has led to their rapid spread, making it almost impossible for citizens to maintain their anonymity.
When businesses can monitor their customers, those customers risk facing prejudice when applying for housing, loans, jobs, and other economic opportunities. If discrimination takes the form of different prices charged for the same goods, consumers may not even be aware that they are being treated differently, or that their information has been accessed and sold without their knowledge.
In February 2024, concerns over the collection of customer data without customers' knowledge were raised at the University of Waterloo after a post on Reddit and an article in the school newspaper mathNEWS reported that vending machines had displayed an error suggesting the machines were using facial recognition software. Following this error, the university requested that the machines' facial recognition software be disabled, and the machines were eventually removed. On February 29, the Ontario privacy commissioner announced that it would begin an investigation into the installation of the vending machines.
== See also ==
Big data
Computer surveillance in the workplace
Employee monitoring
Keystroke logging
Loyalty program
Mouse tracking
Shopping cart software
Surveillance capitalism
== References ==
The Digital Collection System Network (DCSNet) is the Federal Bureau of Investigation (FBI)'s point-and-click surveillance system that can perform instant wiretaps on almost any telecommunications device in the United States.
It allows access to cellphone, landline, and SMS communications anywhere in the US from a point-and-click interface. It runs on a fiber-optic backbone that is separate from the Internet. It is intended to increase agent productivity through workflow modeling, allowing intercepts to be routed for translation or analysis with only a few clicks. DCSNet's real-time intelligence data intercept capability can record, review, and play back intercepted material in real time.
The DCSNet systems operate on a virtual private network parallel to the public Internet, with services provided at least for some time by the Sprint Peerless IP network.
Much of the information available on this system has come from the results of Freedom of Information Act (FOIA) requests made by the Electronic Frontier Foundation (EFF).
== Components ==
It is composed of at least three classified software components that run on the Windows operating system—DCS3000, DCS5000, DCS6000.
=== DCS-1000 ===
The DCS-1000 is better known by its original name, Carnivore: a packet-sniffing system used to monitor a target's Internet traffic.
=== DCS-3000 ===
DCS-3000 and "Red Hook" were first mentioned publicly in a March 2006 report from the United States Department of Justice Office of the Inspector General (OIG) on the implementation of the Communications Assistance for Law Enforcement Act (CALEA). The report described Red Hook as "a system to collect voice and data calls and then process and display the intercepted information in the absence of a CALEA solution." and it described DCS-3000 "as an interim solution to intercept personal communications services delivered via emerging digital technologies used by wireless carriers in advance of any CALEA solutions being deployed."
Citing the OIG report, the Electronic Frontier Foundation (EFF) filed an FOIA request later that year in order to obtain more information about the two programs. When the FBI did not respond with more information, the EFF sued, and in May 2007 obtained a court order to release documents concerning the programs.
On August 29, 2007, Wired magazine published an article on these systems, citing the EFF documents. The DCS-3000 collects information associated with dialed and incoming numbers like traditional trap-and-trace and pen registers. The article named "Red Hook" as the client for DCS-3000. Wired reported that the DCS-3000 cost $320 per number targeted, and that the software is maintained by Booz Allen Hamilton.
=== DCS-5000 ===
The DCS-5000 is a system used by the FBI unit responsible for counter-intelligence to target suspected spies, alleged terrorists, and others with wiretaps.
=== DCS-6000 ===
The DCS-6000 (a.k.a. "Digital Storm") captures the content of phone calls and text messages for analysis. Once the data has been captured, it is indexed and prioritized using the Electronic Surveillance Data Management System (ELSUR).
== See also ==
Carnivore (FBI)
ECHELON
Investigative Data Warehouse
Mass surveillance
Signals intelligence (SIGINT)
== References ==
Mail Isolation Control and Tracking (MICT) is an imaging system employed by the United States Postal Service (USPS) that takes photographs of the exterior of every piece of mail that is processed in the United States. The Postmaster General has stated that the system is primarily used for mail sorting, though it also enables the USPS to retroactively track mail correspondence at the request of law enforcement. It was created in the aftermath of the 2001 anthrax attacks that killed five people, including two postal workers. The automated mail tracking program was created so that the Postal Service could more easily track hazardous substances and keep people safe, according to U.S. Postmaster General Patrick R. Donahoe.
The Federal Bureau of Investigation (FBI) revealed MICT on June 7, 2013, when discussing the Bureau's investigation of ricin-laced letters sent to U.S. President Barack Obama and New York City mayor Michael Bloomberg. The FBI stated in a criminal complaint that the program was used to narrow its investigation to Shannon Richardson. The Postmaster General, Patrick R. Donahoe, confirmed in an interview with the Associated Press the existence of this program on August 2, 2013.
In confirming the existence of MICT, Donahoe told the Associated Press that the USPS does not maintain a massive centralized database of the letter images. He said that the images are taken at more than 200 mail processing centers around the country, and that each scanning machine at the processing centers only keeps images of the letters it scans. He also stated the images are retained for a week to 30 days and then destroyed.
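The retention scheme Donahoe described, with each machine keeping its images for a bounded window of 7 to 30 days before destroying them, amounts to a simple age-based purge. The sketch below is purely illustrative; the directory layout, file naming, and 30-day cutoff are assumptions for the example, not details of the USPS system.

```python
import os
import time

RETENTION_DAYS = 30  # the upper bound of the retention window described above


def purge_expired(image_dir, retention_days=RETENTION_DAYS, now=None):
    """Delete image files older than the retention window; return their names.

    `image_dir` is a hypothetical per-machine directory of scanned images.
    """
    now = time.time() if now is None else now
    cutoff = now - retention_days * 86400  # seconds in a day
    removed = []
    for name in os.listdir(image_dir):
        path = os.path.join(image_dir, name)
        # Use the file's modification time as a proxy for its scan time.
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

Run periodically (e.g. from a daily scheduled job), this keeps each machine's store bounded to the retention window without any central database, matching the decentralized design Donahoe described.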
Computer security and information privacy expert Bruce Schneier compared MICT to the mass surveillance of the National Security Agency (NSA), revealed in June 2013 by Edward Snowden. Schneier said, "Basically, [the USPS is] doing the same thing as the [NSA] programs, collecting the information on the outside of your mail, the metadata, if you will, of names, addresses, return addresses and postmark locations, which gives the government a pretty good map of your contacts, even if they aren't reading the contents."
James J. Wedick, a former FBI agent, said of MICT, "It's a treasure trove of information. Looking at just the outside of letters and other mail, I can see who you bank with, who you communicate with—all kinds of useful information that gives investigators leads that they can then follow up on with a subpoena." He also said the program "can be easily abused because it's so easy to use, and you don't have to go through a judge to get the information. You just fill out a form."
The U.S. Postal Service almost never denies requests to track suspects’ mail on behalf of law-enforcement agencies.
== See also ==
Informed Delivery
Mail cover
Mass surveillance in the United States
MLOCR
Postal censorship
== References ==
Network monitoring is the use of a system that constantly monitors a computer network for slow or failing components and that notifies the network administrator (via email, SMS or other alarms) in case of outages or other trouble. Network monitoring is part of network management.
== Details ==
While an intrusion detection system monitors a network for threats from the outside, a network monitoring system monitors the network for problems caused by overloaded or crashed servers, network connections or other devices.
For example, to determine the status of a web server, monitoring software may periodically send an HTTP request to fetch a page. For email servers, a test message might be sent through SMTP and retrieved by IMAP or POP3.
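A minimal version of such an HTTP check can be sketched in Python using only the standard library. The URL and timeout here are illustrative placeholders, not drawn from any particular monitoring product.

```python
import time
import urllib.request
import urllib.error


def check_http(url, timeout=5.0):
    """Fetch a page and report (is_up, status_code, response_time_seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return True, resp.status, time.monotonic() - start
    except urllib.error.HTTPError as e:
        # The server answered, but with an error status (4xx/5xx).
        return False, e.code, time.monotonic() - start
    except (urllib.error.URLError, OSError):
        # Connection failure or timeout: no status code at all.
        return False, None, time.monotonic() - start
```

A monitoring service would call this on a schedule (say, once a minute) and record the three values as a time series; the same pattern applies to the SMTP/IMAP round-trip test, with the mail protocols substituted for HTTP.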
Commonly measured metrics are response time, availability and uptime, although both consistency and reliability metrics are starting to gain popularity. The widespread addition of WAN optimization devices is having an adverse effect on most network monitoring tools, especially when it comes to measuring accurate end-to-end delay because they limit round-trip delay time visibility.
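Availability in the sense above is typically reported as the fraction of successful probes over a window. A minimal computation, assuming each probe result has been stored as a boolean:

```python
def availability(results):
    """Availability as the percentage of successful checks in a window.

    `results` is a sequence of booleans, one per probe (True = up).
    Returns None when there is no data, since 0% would be misleading.
    """
    if not results:
        return None
    return 100.0 * sum(results) / len(results)
```

For example, three successes out of four probes yields 75.0; uptime is usually the same calculation applied over a longer window such as a month.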
Status request failures, such as a connection that cannot be established, a timeout, or a document or message that cannot be retrieved, usually produce an action from the monitoring system. These actions vary: an alarm may be sent (via SMS, email, etc.) to the resident sysadmin, automatic failover systems may be activated to remove the troubled server from duty until it can be repaired, and so on.
Monitoring the performance of a network uplink is also known as network traffic measurement.
== Network tomography ==
Network tomography is an important area of network measurement, which deals with monitoring the health of various links in a network using end-to-end probes sent by agents located at vantage points in the network/Internet.
== Route analytics ==
Route analytics is another important area of network measurement. It includes the methods, systems, algorithms and tools to monitor the routing posture of networks. Incorrect routing or routing issues cause undesirable performance degradation or downtime.
== Various types of protocols ==
Site monitoring services can check HTTP pages, HTTPS, SNMP, FTP, SMTP, POP3, IMAP, DNS, SSH, TELNET, SSL, TCP, ICMP, SIP, UDP, media streaming and a range of other ports, with check intervals ranging from every four hours to every minute. Typically, network monitoring services test a server anywhere between once per hour and once per minute.
For monitoring network performance, most tools use protocols like SNMP, NetFlow, Packet Sniffing, or WMI.
== Internet server monitoring ==
Monitoring an internet server means that the server owner always knows if one or all of their services go down. Server monitoring may be internal, i.e. the web server software checks its own status and notifies the owner if some services go down, or external, i.e. a web server monitoring company checks the status of the services with a certain frequency. Server monitoring can encompass a check of system metrics, such as CPU usage, memory usage, network performance and disk space. It can also include application monitoring, such as checking the processes of programs such as Apache HTTP Server, MySQL, Nginx, Postgres and others.
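An internal check of the system metrics mentioned above can be sketched with the standard library alone; the 90% disk threshold is an illustrative default, not a universal recommendation.

```python
import os
import shutil


def system_metrics(path="/"):
    """Collect the kinds of system metrics an internal server monitor checks."""
    usage = shutil.disk_usage(path)
    metrics = {
        "disk_total_bytes": usage.total,
        "disk_free_bytes": usage.free,
        "disk_used_pct": 100.0 * usage.used / usage.total,
    }
    # Load average (a rough CPU-pressure signal) is Unix-only.
    if hasattr(os, "getloadavg"):
        one, five, fifteen = os.getloadavg()
        metrics.update(load_1min=one, load_5min=five, load_15min=fifteen)
    return metrics


def over_threshold(metrics, disk_pct_limit=90.0):
    """Flag the condition that would typically trigger an alert or cleanup job."""
    return metrics["disk_used_pct"] >= disk_pct_limit
```

A fuller agent would add memory and per-process checks (e.g. confirming the Apache or Postgres process is running), typically via a third-party library, since the standard library does not expose those portably.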
External monitoring is more reliable, as it keeps on working when the server completely goes down. Good server monitoring tools also have performance benchmarking, alerting capabilities and the ability to link certain thresholds with automated server jobs, such as provisioning more memory or performing a backup.
=== Servers around the globe ===
Network monitoring services usually have several servers around the globe, for example in America, Europe, Asia, Australia and other locations. By having multiple servers in different geographic locations, a monitoring service can determine whether a web server is available across different networks worldwide. The more locations used, the more complete the picture of network availability.
=== Web server monitoring process ===
When monitoring a web server for potential problems, an external web monitoring service checks several parameters. First of all, it monitors for a proper HTTP return code. Per the HTTP specification (RFC 2616), a web server responds to each request with one of several defined status codes; analyzing these codes is the fastest way to determine the current status of the monitored web server. Third-party application performance monitoring tools provide additional web server monitoring, alerting and reporting capabilities.
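The status-code analysis above reduces, at its simplest, to bucketing the code into the standard classes; which bucket counts as "down" is then a policy decision of the monitoring service.

```python
def classify_status(code):
    """Map an HTTP status code to the coarse category a monitor reports."""
    if 200 <= code < 300:
        return "ok"
    if 300 <= code < 400:
        return "redirect"      # usually fine, but notable if unexpected
    if 400 <= code < 500:
        return "client_error"  # e.g. 404: the monitored URL may have moved
    if 500 <= code < 600:
        return "server_error"  # e.g. 500/503: the server itself is in trouble
    return "unknown"
```

A typical policy treats `server_error` (and connection failures, which produce no code at all) as outages, while an unexpected `redirect` or `client_error` is surfaced as a configuration warning.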
=== Notification ===
As the information brought by web server monitoring services is in most cases urgent and may be of crucial importance, various notification methods may be used: e-mail, landline and cell phones, messengers, SMS, fax, pagers, etc.
== See also ==
== Notes and references ==
== External links ==
List of Network Monitoring and Management Tools at Stanford University
The Omnibus Crime Control and Safe Streets Act of 1968 (Pub. L. 90–351, 82 Stat. 197, enacted June 19, 1968, codified at 34 U.S.C. § 10101 et seq.) was legislation passed by the Congress of the United States and signed into law by President Lyndon B. Johnson that established the Law Enforcement Assistance Administration (LEAA). Title III of the Act set rules for obtaining wiretap orders in the United States. The act was a major accomplishment of Johnson's war on crime.
== Grants ==
The LEAA, which was superseded by the Office of Justice Programs, provided federal grant funding for criminology and criminal justice research, much of which focused on social aspects of crime. Research grants were also provided to develop alternative sanctions for punishment of young offenders. Block grants were provided to the states, with $100 million in funding. Within that amount, $50 million was earmarked for assistance to local law enforcement agencies, which included funds to deal with riot control and organized crime.
== Handguns ==
The Omnibus Crime Bill also prohibited interstate trade in handguns and increased the minimum age to 21 for buying handguns. This legislation was soon followed by the Gun Control Act of 1968, which set forth additional gun control restrictions.
On May 10, 2023, Senior District Judge Robert E. Payne of the Eastern District of Virginia declared the minimum age for handgun purchases to be unconstitutional.
On December 1, 2023, District Judge Thomas Kleeh of the Northern District of West Virginia also declared the minimum age requirement unconstitutional.
== Wiretaps ==
The wiretapping section of the bill was passed in part as a response to the U.S. Supreme Court decisions Berger v. New York, 388 U.S. 41 (1967) and Katz v. United States, 389 U.S. 347 (1967), which both limited the power of the government to obtain information from citizens without their consent, based on the protections under the Fourth Amendment to the U.S. Constitution. In the Katz decision, the Court "extended the Fourth Amendment protection from unreasonable search and seizure to protect individuals with a 'reasonable expectation of privacy.'"
Section 2511(3) of the Crime Control Bill specifies that nothing in the act or the Federal Communications Act of 1934 shall limit the constitutional power of the President "to take such measures as he deems necessary":
"to protect the nation against actual or potential attack or other hostile acts of a foreign power, to obtain foreign intelligence information deemed essential to the security of the United States or to protect national security information against foreign intelligence activities"
"to protect the United States against the overthrow of the Government by force or other unlawful means, or against any other clear and present danger to the structure or existence of the Government"
The section also limits the use of intercepted communications as evidence to cases where the interception was reasonable, and prohibits their disclosure except for that purpose.
In 1975, the United States Senate Select Committee to Study Governmental Operations with Respect to Intelligence Activities (known as the "Church Committee") was established to investigate abuses by the Central Intelligence Agency (CIA), National Security Agency (NSA), Federal Bureau of Investigation (FBI), and the Internal Revenue Service (IRS). In 1975 and 1976, the Church Committee published 14 reports on various U.S. intelligence agencies' operations, and a report on the FBI's COINTELPRO program stated that "the Fourth Amendment did apply to searches and seizures of conversations and protected all conversations of an individual as to which he had a reasonable expectation of privacy...At no time, however, were the Justice Department's standards and procedures ever applied to NSA's electronic monitoring system and its 'watch listing' of American citizens. From the early 1960s until 1973, NSA compiled a list of individuals and organizations, including 1200 American citizens and domestic groups, whose communications were segregated from the mass of communications intercepted by the Agency, transcribed, and frequently disseminated to other agencies for intelligence purposes".
Academic Colin Agur argues that the act "disappoints" from the perspective of Brandeisian legal philosophy, in regards to individual privacy, because it assumes that law enforcement agencies have a right to electronic surveillance, instead of "giving unambiguous priority to individual privacy."
=== Employee privacy ===
The Act prohibits "employers from listening to the private telephone conversations of employees or disclosing the contents of these conversations." Employers can ban personal phone calls and can monitor calls for compliance provided they stop listening as soon as a personal conversation begins. Violations carry fines up to $10,000. The Electronic Communications Privacy Act of 1986 expanded these protections to electronic and cell phone communication. See also Employee monitoring and Workplace privacy.
== FBI expansion ==
The bill increased the FBI budget by 10% to fund police training at the FBI National Academy. Much of this training was for riot control, a popular political issue at the time.
== Miranda warning ==
In 1966, the U.S. Supreme Court decision in Miranda v. Arizona (384 U.S. 436) created the requirement that a citizen must be informed of their legal rights upon their arrest and before they are interrogated, which came to be known as Miranda warnings. Responding to various complaints that such warnings allowed too many criminals to go free, Congress included a provision in the Crime Control Act, codified at 18 U.S.C. § 3501 with a clear intent to reverse the effect of the court ruling, directing federal trial judges to admit statements of criminal defendants if they were made voluntarily, without regard to whether the defendants had received the Miranda warnings.
The stated criteria for voluntary statements depended on such things as:
(1) the time between arrest and arraignment;
(2) whether the defendant knew the crime for which he had been arrested;
(3) whether he had been told that he did not have to talk to the police and that any statement could be used against him;
(4) whether the defendant knew prior to questioning that he had the right to the assistance of counsel; and,
(5) whether he actually had the assistance of counsel during questioning.
It also provided that the "presence or absence of any of" the factors "need not be conclusive on the issue of voluntariness of the confession." (As a federal statute, it applied only to criminal proceedings either under federal laws, or in the District of Columbia.)
That provision was disallowed in 1968 by a federal appeals court decision that was not appealed, and it escaped Supreme Court review until 32 years after passage, in Dickerson v. United States (2000). A lower court of the Fourth Circuit had reasoned that Miranda was not a constitutional requirement, that Congress could therefore overrule it by legislation, and that the provision in the Omnibus Crime Control Act had supplanted the requirement that police give Miranda warnings. The Supreme Court overturned the Fourth Circuit decision, reaffirming the ruling of Miranda v. Arizona (1966) as the primary guideline for the admissibility of statements made during custodial interrogation, and stating that Congress does not have the legislative power to supersede Miranda v. Arizona.
== See also ==
Gun law in the United States
Warrantless wiretaps
== References ==
== External links ==
Omnibus Crime Control and Safe Streets Act of 1968 (PDF/details) as amended in the GPO Statute Compilations collection
As codified in 18 USC chapter 44 of the United States Code from LII
As codified in 18 USC chapter 44 of the United States Code from the US House of Representatives
List of privacy/tapping legislation at Government Computer News
Church report summary of the passing of this law
text of the Church Report which references this law.
Text of the Act from the FCC library
A Primer on the Federal Wiretap Act and Its Fourth Amendment Framework
The Directorate-General for External Security (French: Direction générale de la Sécurité extérieure, pronounced [diʁɛksjɔ̃ ʒeneʁal də la sekyʁite ɛksteʁjœʁ], DGSE) is France's foreign intelligence agency, equivalent to the British MI6 and the American CIA, established on 27 November 1943. The DGSE safeguards French national security through intelligence gathering and conducting paramilitary and counterintelligence operations abroad, as well as economic espionage. The service is currently headquartered in the 20th arrondissement of Paris, but construction has begun on a new headquarters at Fort Neuf de Vincennes, in Vincennes, on the eastern edge of Paris.
The DGSE operates under the direction of the French Ministry of Armed Forces and works alongside its domestic counterpart, the DGSI (General Directorate for Internal Security). As with most other intelligence agencies, details of its operations and organization are classified and not made public.
The DGSE follows a system which it refers to as LEDA. L stands for loyalty (loyauté), E stands for elevated standards (exigence), D stands for discretion (discrétion) and A stands for adaptability (adaptabilité). These characteristics are viewed as essential in managing intelligence work and in collaborating with agents, authorities and partners.
== History ==
=== Origins ===
The DGSE can trace its roots back to 27 November 1943, when a central external intelligence agency, known as the DGSS (Direction générale des services spéciaux), was founded by politician Jacques Soustelle. The name of the agency was changed on 26 October 1944, to DGER (Direction générale des études et recherches). As the organisation was characterised by numerous cases of nepotism, abuses and political feuds, Soustelle was removed from his position as Director.
Former Free French officer André Dewavrin, aka "Colonel Passy", was tasked with reforming the DGER; he fired more than 8,300 of the 10,000 full-time intelligence workers Soustelle had hired, and the agency was renamed SDECE (Service de documentation extérieure et de contre-espionnage) on 28 December 1945. The SDECE also brought under one head a variety of separate agencies – some, such as the well-known Deuxième Bureau, aka 2e Bureau, created by the military circa 1871–1873 in the wake of the birth of the French Third Republic. Another was the BCRA (Bureau central de renseignements et d'action), which operated during WWII from July 1940 to 27 November 1943, with André Dewavrin as its head.
On 2 April 1982, the new left wing government of François Mitterrand extensively reformed the SDECE and renamed it DGSE. The SDECE had remained independent until the mid-1960s, when it was discovered to have been involved in the kidnapping and presumed murder of Mehdi Ben Barka, a Moroccan revolutionary living in Paris. Following this scandal, it was announced that the agency was placed under the control of the French Ministry of Defence. In reality, foreign intelligence activities in France have always been supervised by the military since 1871, for political reasons mainly relating to anti-Bonapartism and the rise of Socialism. Exceptions related to telecommunications interception and cyphering and code-breaking, which were also conducted by the police in territorial France, and by the Ministry of Foreign Affairs abroad, and economic and financial intelligence, which were also carried out initially by the Ministry of Foreign Affairs and, from 1915 onwards, by the Ministry of Commerce until the aftermath of WWII, when the SDECE of the Ministry of Defence took over the specialty in partnership with the Ministry for the Economy and Finance.
In 1992, most of the defence responsibilities of the DGSE, no longer relevant to the post-Cold War context, were transferred to the Military Intelligence Directorate (DRM), a new military agency. Combining the skills and knowledge of five military groups, the DRM was created to close the intelligence gaps of the 1991 Gulf War.
=== Cold War-era rivalries ===
The SDECE and DGSE have been shaken by numerous scandals. In 1968, for example, Philippe Thyraud de Vosjoli, who had been an important officer in the French intelligence system for 20 years, asserted in published memoirs that the SDECE had been deeply penetrated by the Soviet KGB in the 1950s. He also indicated that there had been periods of intense rivalry between the French and U.S. intelligence systems. In the early 1990s a senior French intelligence officer created another major scandal by revealing that the DGSE had conducted economic intelligence operations against American businessmen in France.
A major scandal for the service in the late Cold War was the sinking of the Rainbow Warrior in 1985. The Rainbow Warrior was sunk by DGSE operatives, unintentionally killing one of the crew. They had set two time-separated explosive charges to encourage evacuation, but photographer Fernando Pereira stayed inside the boat to rescue his expensive cameras and drowned following the second explosion. (See below in this article for more details). The operation was ordered by the French President, François Mitterrand. New Zealand was outraged that its sovereignty was violated by an ally, as was the Netherlands since the killed Greenpeace activist was a Dutch citizen and the ship had Amsterdam as its port of origin.
=== Political controversies ===
The agency was conventionally run by French military personnel until 1999, when former diplomat Jean-Claude Cousseran was appointed its head. Cousseran had served as an ambassador to Turkey and Syria, as well as a strategist in the Ministry of Foreign Affairs. Cousseran reorganized the agency to improve the flow of information, following a series of reforms drafted by Bruno Joubert, the agency's director of strategy at that time.
This came during a period when the French government was a cohabitation between left and right parties. Cousseran, linked to the Socialist Party, was therefore obliged to appoint Jean-Pierre Pochon of the Gaullist RPR as head of the Intelligence Directorate. Conscious of the political nature of the appointment, and wanting to steer around Pochon, Cousseran placed one of his friends in a top job under Pochon: Alain Chouet, a specialist in terrorism, especially Algerian and Iranian networks, took over as chief of the Security Intelligence Service. He had been posted in Damascus at a time when Cousseran was France's ambassador to Syria. Chouet began writing reports to Cousseran that bypassed his immediate superior, Pochon.
Politics eventually took precedence over DGSE's intelligence function. Instead of informing the president's staff of reports directly concerning President Chirac, Cousseran informed only Socialist prime minister Lionel Jospin, who was going to run against Chirac in the 2002 presidential election. Pochon learned of the maneuvers only in March 2002 and informed Chirac's circle of the episode. He then had a furious argument with Cousseran and was informally told he wasn't wanted around the agency anymore. Pochon nonetheless remained Director of Intelligence, though he no longer turned up for work. He remained "ostracized" until the arrival of a new DGSE director, Pierre Brochand, in August 2002.
== Organization ==
=== Divisions ===
The DGSE includes the following services:
Directorate of Administration
Directorate of Strategy
Directorate of Intelligence
Political intelligence service
Security intelligence service
Technical Directorate (Responsible for electronic intelligence and devices)
Directorate of Operations
Action Division (Responsible for clandestine operations)
==== Technical Directorate (or COMINT Department) ====
In partnership with the Direction du renseignement militaire (DRM, Directorate of Military Intelligence), and with considerable support from the Army in particular and from the Air Force and the Navy to a lesser extent, the DGSE is responsible for electronic spying abroad. Historically, the Ministry of Defence has always taken a keen interest in telecommunications interception.
In the early 1880s, a partnership between the Post Office (then also in charge of all national telegraphic communications) and the Army gave birth to an important military telegraphy unit of more than 600 men, which settled in the Fort of Mont Valérien near Paris. In 1888, the military established the first service of telecommunications interception and deciphering in the Hôtel des Invalides, Paris, where it is still active today as an independent intelligence agency secretly created in 1959 under the name Groupement Interministériel de Contrôle (GIC, Inter-ministerial Control Group).
In 1910, the military unit at Mont Valérien expanded with the creation of a wireless telecommunication station, and three years later it was transformed into a regiment of about 1,000 men. Anecdotally, government domestic Internet tapping and its best specialists are still located in the same area today (in underground facilities in and around Taverny), though unofficially and not exclusively. At about the same time, the Army and the Navy created several "listening stations" in the Mediterranean region and began to intercept the coded wireless communications of the British and Spanish navies. It was the first joint use of wireless telegraphy and cryptanalysis in the search for intelligence of military interest.
In the 1970s, the SDECE considerably developed its technical capacities in code-breaking, notably with the acquisition of a supercomputer from Cray.
In the 1980s, the DGSE invested heavily in satellite telecommunications interception and created several satellite listening stations in France and overseas. The department of the agency responsible for telecommunications interception was discreetly named the Direction Technique (Technical Directorate).
But in the early 1990s the DGSE was alarmed by a steady and significant decrease in its foreign telecommunications interception and gathering, as submarine cables were supplanting satellites. At that time the DGSE was using Silicon Graphics computers for code-breaking while asking Groupe Bull to develop French-made supercomputers. Until then, the DGSE had sheltered its computers and carried out its code-breaking 100 feet beneath its Boulevard Mortier headquarters, for fear of foreign electronic spying and possible jamming. But this underground facility quickly became too small and impractical. That is why, from 1987 to 1990, major works were carried out beneath the Taverny Air Base to secretly build a large communication deciphering and computer analysis center, then called the Centre de Transmission et de Traitement de l'Information (CTTI, Transmission and Information Processing Center). The CTTI was the direct ancestor of the Pôle National de Cryptanalyse et de Décryptement (PNCD, National Branch of Cryptanalysis and Decryption), launched to fit a new policy of intelligence sharing between agencies called Mutualisation du Renseignement (Intelligence Pooling). Once the work was finished, the huge underground of the former Taverny Air Base, a few miles north of Paris, sheltered the largest Faraday cage in Europe, protecting against leaks of radio-electric waves (see Tempest (codename)) and possible EMP attacks (see Nuclear electromagnetic pulse), with supercomputers working 24/7 on processing submarine cable telecommunications intercepts and deciphering signals. The Taverny underground facility also has a sister base in Mutzig, likewise underground, which officially shelters the 44e Régiment de Transmissions (44e RT, 44th Signal Regiment).
Today more than ever, signal regiments of the French Army still carry out civilian telecommunications interception under the pretense of training and peacetime military exercises in electronic warfare. The DGSE otherwise enjoys the technical cooperation of the French companies Orange S.A. (which also provides cover activities for the staff of the DGSE's Technical Directorate) and Alcatel-Lucent, for its know-how in optical cable interception.
Allegedly, in 2007–2008 State Councilor Jean-Claude Mallet advised newly elected President Nicolas Sarkozy to invest urgently in submarine cable tapping and in computer capacities to automatically collect and decipher optical data; such efforts had been undertaken since the early 2000s. Mallet planned the installation of a new computer system to break codes. Officially, this enormous foreign intelligence program began in 2008 and was fully in place by 2013. Its cost is said to have amounted to 700 million euros, and it resulted in an initial hiring of about 600 new DGSE employees, all highly skilled specialists in related fields. Since then, the DGSE has been constantly expanding its staff of specialists in cryptanalysis and decryption, and of signal and computer engineers. By 2018, about 90% of world telecommunications traffic no longer passed through satellites but through submarine fiber-optic cables drawn between continents, and the Technical Directorate of the DGSE mainly targets intelligence of a financial and economic nature.
Remarkably, the DGSE and the DRM, with which it works closely, have established a partnership in telecommunications interception with their German counterpart, the BND (more particularly its Technische Aufklärung, or Technical Directorate), with significant support from the French Army with regard to infrastructure, means, and staff. Thanks to its close partnership with the DRM, the DGSE also enjoys the services of the large spy ship Dupuy de Lôme (A759), which entered service with the French Navy in April 2006. The DGSE and the DRM have also long had a special intelligence agreement with the United Arab Emirates, thanks to which these agencies share with the German BND a COMINT station located at Al Dhafra Air Base. The DGSE also enjoys a partnership in intelligence activities with the National Intelligence Agency (South Africa).
Today the French intelligence community is said to rank third in the world, behind the American National Security Agency and the British GCHQ, in worldwide telecommunications interception capacity.
==== Action Division ====
The Action Division (Division Action) is responsible for planning and performing clandestine operations. It also performs other security-related operations, such as testing the security of nuclear power plants (as revealed by Le Canard Enchaîné in 1990) and of military facilities such as the submarine base at Île Longue, Bretagne. The division's headquarters are located at the fort of Noisy-le-Sec. As the DGSE has a close partnership with the Army's Commandement des Opérations Spéciales (COS, Special Operations Command), the Action Division selects most of its men from regiments of this military organization, in particular the 1er Régiment de Parachutistes d'Infanterie de Marine (1er RPIMa, 1st Marine Infantry Parachute Regiment) and the 13e Régiment de Dragons Parachutistes (13e RDP, 13th Parachute Dragoon Regiment). More generally, a large number of DGSE executives, staff members under military status, and operatives were first enlisted in one of these two regiments, or, in the past, in the 11e Régiment Parachutiste de Choc (11e RPC, 11th Shock Parachute Regiment), colloquially called the "11e Choc," or the 1er Bataillon Parachutiste de Choc (1er BPC, 1st Shock Parachute Battalion), colloquially called the "1er Choc."
=== Installations ===
The DGSE headquarters, codenamed CAT (Centre Administratif des Tourelles), are located at 141 Boulevard Mortier in the 20th arrondissement in Paris, approximately 1 km northeast of the Père Lachaise Cemetery. The building is often referred to as La piscine ("the swimming pool") because of the nearby Piscine des Tourelles of the French Swimming Federation.
A project named "Fort 2000" was supposed to move the DGSE headquarters to the fort of Noisy-le-Sec, where the Action Division and the Service Technique d'Appui (STA, Technical and Support Service) were already stationed. However, the project was repeatedly disturbed and interrupted by lack of funds, which were not granted until the 1994 and 1995 defence budgets. The allotted budget was cut from 2 billion francs to one billion, and as local workers and inhabitants began opposing the project, it was eventually canceled in 1996. The DGSE instead received additional premises located opposite the Piscine des Tourelles, and a new policy called "Privatisation des Services" (Privatization of the Services) was adopted. Roughly speaking, this policy consists in the DGSE creating numerous private companies of varied sizes on French territory, each used as a cover for specialized intelligence cells and units. It sidesteps the need for heavy investment in large, highly secured facilities, as well as public and parliamentary scrutiny. The method is not entirely new, however: in 1945 the DGER, ancestor of the DGSE, owned 123 anonymous buildings, houses, and apartments in addition to the military barracks on Boulevard Mortier that already served as headquarters. This dispersion of premises began very early, in the era of the Deuxième Bureau, and more particularly from the 1910s on, when intelligence activities carried out under military responsibility underwent a strong and steady rise in France.
=== Size and importance ===
In 2007 the DGSE employed a total of 4,620 agents. In 1999 it was known to employ 2,700 civilians and 1,300 officers or non-commissioned officers.
It also benefits from an unknown number of voluntary correspondents (spies), French nationals in the large majority of instances, both in France and abroad, who do not appear on the government's list of civil servants. These were long referred to by the title "honorable correspondant" (honourable correspondent), or "HC," and for some years now as "contact." The DGSE calls a "source" any French or foreign national it recruits, whether as a "conscious" and willing spy or an "unconscious" or unwilling (i.e., manipulated) one. While a DGSE agent trained and sent to spy abroad is generally referred to as an "operative" in English-speaking countries, the DGSE internally calls such an agent an "agent volant" (flying agent), by analogy with a butterfly (and not a bird). The agency also colloquially calls a female operative an "hirondelle" (swallow), and it collectively and indistinctly calls its contacts, sources, and flying agents "capteurs" (sensors).
The DGSE is directly supervised by the Ministry of Armed Forces.
=== Budget ===
The DGSE's budget is entirely official (it is voted upon and accepted by the French parliament). It generally consists of about €500M, in addition to which are added special funds from the Prime Minister (often used in order to finance certain operations of the Action Division). How these special funds are spent has always been kept secret.
Some known yearly budgets include:
1991: FRF 0.9bn
1992: FRF 1bn
1997: FRF 1.36bn
1998: FRF 1.29bn
2007: EUR 450 million, plus 36 million in special funds.
2009: EUR 543.8 million, plus 48.9 million in special funds.
According to Claude Silberzahn, one of its former directors, the agency's budget is divided in the following manner:
25% for military intelligence
25% for economic intelligence
50% for diplomatic intelligence
=== Directors ===
Pierre Marion (17 June 1981 – 10 November 1982)
Adm. Pierre Lacoste (10 November 1982 – 19 September 1985)
Gen. René Imbot (20 September 1985 – 1 December 1987)
Gen. François Mermet (2 December 1987 – 23 March 1989)
Claude Silberzahn (23 March 1989 – 7 June 1993)
Jacques Dewatre (7 June 1993 – 19 December 1999)
Jean-Claude Cousseran (19 December 1999 – 24 July 2002)
Pierre Brochand (24 July 2002 – 10 October 2008)
Erard Corbin de Mangoux (10 October 2008 – 10 April 2013)
Bernard Bajolet (10 April 2013 – 27 April 2017)
Jean-Pierre Palasset (interim) (27 April 2017 – 26 June 2017)
Bernard Émié (26 June 2017 – 8 January 2024)
Nicolas Lerner (9 January 2024 – present)
== Logo ==
The organisation inaugurated its current logo on 18 July 2012. The bird of prey represents the sovereignty, operational capacities, international operational nature, and efficiency of the DGSE. France is depicted as a sanctuary in the logo, and the lines depict the networks utilized by the DGSE.
== Activities ==
=== Range ===
Various tasks and roles are generally assigned to the DGSE:
Intelligence gathering:
HUMINT, internally called "ROHUM" for Renseignement d'Origine Humaine (Intelligence of Human Origin), is carried out by a large network of agents, sub-agents, contacts, and sources. For reasons of secrecy, most are not directly and officially paid by the DGSE but by various public services and private companies, which cooperate through particular and unofficial agreements (not all of them being cover companies by vocation). Many sub-agents, contacts, and sources act out of patriotism or political and ideological motives, and not all are aware that they are helping an intelligence agency.
SIGINT (COMINT/SIGINT/ELINT), internally called "ROEM" for Renseignement d'Origine Électromagnétique (Intelligence of Electromagnetic Origin), is carried out from France and from a network of COMINT stations overseas, each internally called a Centre de Renseignement Électronique (CRE, Electronic Intelligence Center). Two other names are also used: smaller territorial or overseas COMINT/SIGINT stations are internally called Détachement Avancé de Transmission (Advanced Signal Detachment), while ELINT and SIGINT stations, located on French soil and overseas alike, are each called a Centre de Télémesure Militaire (CTM, Military Telemetry Center). Since the 1980s, the DGSE has focused much of its effort and spending on communications interception (COMINT) abroad, which today (2018) has a reach extending from the east coast of the United States to Japan, with a focus on the Arabian Peninsula between these two areas. Within the DGSE, these considerable and very expensive COMINT capacities fall under the official responsibility of its Direction Technique (DT, Technical Directorate). But as these capacities are rapidly growing and passively involve nearly all other French intelligence agencies (more than 20), in the context of a new policy called Mutualisation du Renseignement (intelligence pooling between agencies) officially decreed in 2016, the whole has been called the Pôle National de Cryptanalyse et de Décryptement (PNCD, National Branch of Cryptanalysis and Decryption) since the early 2000s at least. Earlier, and from 1987 to 1990 in particular, the PNCD was called the Centre de Transmission et de Traitement de l'Information (CTTI, Transmission and Information Processing Center), and its main center is secretly located beneath the Taverny Air Base in the Paris suburbs. The French press has nicknamed the French COMINT capacities and network "Frenchelon," borrowing from ECHELON, its U.S. equivalent.
Space imagery analysis: integrated in the "ROIM" general mission, standing for Renseignement d'Origine Image (Intelligence of Image Origin).
Special operations, such as missions behind enemy lines, exfiltrations (otherwise called extractions), coups d'état, palace revolutions and counter-revolutions (in African countries in particular since WWII), and sabotage and assassinations (on French soil as well as abroad), with the help of the regiments of the Special Operations Command (COS).
Counterintelligence on French soil is not officially acknowledged by the DGSE, as it officially falls within the general mission of the General Directorate for Internal Security (DGSI), along with counter-terrorism in particular. In reality, and for several reasons, the DGSE has long carried out counterintelligence missions on French soil as well, which it calls "contre-ingérence" (counter-interference), together with many "offensive counterintelligence" operations abroad. As a matter of fact, the former name of the DGSE, the SDECE, stands for Service de Documentation Extérieure et de Contre-Espionnage (External Documentation and Counter-Espionage Service). Counterintelligence activities in the DGSE are integrated into a more general mission internally called "mesures actives" (active measures), directly inspired in principle by Russian active measures. That is why offensive counterintelligence (or counter-interference) in the DGSE has multiple direct connections with the related fields of "counter-influence" and influence, on French soil as abroad (see Agent of influence), and by extension with agitprop operations (specialties generally grouped under psychological warfare in English-speaking countries).
=== Known operations ===
==== 1970s ====
Under the codename "Operation Caban", the SDECE staged a coup d'état against Emperor Jean-Bédel Bokassa in the Central African Empire in September 1979, and installed a pro-French government.
Between the early 1970s and the late 1980s, the SDECE/DGSE had effectively planted agents in major U.S. companies, such as Texas Instruments, IBM and Corning. Some of the economic intelligence thus acquired was shared with French corporations, such as the Compagnie des Machines Bull.
==== 1980s ====
Working with the DST in the early 1980s, the agency exploited the source "Farewell", revealing the most extensive technological spy network uncovered in Europe and the United States to date. This network had allowed the United States and European countries to gather significant amounts of information about important technical advances in the Soviet Union without the knowledge of the KGB. However, former DGSE employee Dominique Poirier contends, in a book he self-published in May 2018, that KGB Lt-Colonel Vladimir Vetrov, code-named "Farewell," could not possibly have revealed, alone, the names of 250 KGB officers acting undercover abroad, nor helped identify nearly 100 Soviet spies in various Western countries, if only by reason of the rule of "compartmentalization," or need-to-know.
The DGSE exploited a network called "Nicobar", which facilitated the sale of forty-three Mirage 2000 fighter jets by French defence companies to India for a total of more than US$2 billion, and the acquisition of information about the type of the armour used on Soviet T-72 tanks.
Operation Satanique, a mission aimed at preventing protests by Greenpeace against French nuclear testing in the Pacific through the sinking of the Rainbow Warrior in Auckland, New Zealand, on July 10, 1985. A French navy limpet mine exploded at 11:38 pm, when many of the crew were asleep, and blew a large hole in the ship's hull. A second limpet mine exploded on the propeller shaft as Fernando Pereira, the ship's photographer, returned to retrieve his camera equipment; trapped in his cabin, he drowned. New Zealand Police initiated one of their country's largest investigations and uncovered the plot after they captured two DGSE agents, who pleaded guilty to manslaughter and arson. French relations with New Zealand were sorely strained, as France threatened New Zealand with EEC sanctions in an attempt to secure the agents' release. Australia also attempted to arrest DGSE agents to extradite them. The incident is still widely remembered in New Zealand. The uncovering of the operation resulted in the firing of the head of the DGSE and the resignation of the French Defence Minister.
==== 1990s ====
During the Rwandan Civil War, the DGSE had an active role in passing on disinformation, which resurfaced in various forms in French newspapers. The general trend of this disinformation was to present the renewed fighting in 1993 as something completely new (although a regional conflict had been taking place since 1990) and as a straightforward foreign invasion, the rebel RPF being presented merely as Ugandans under a different guise. The disinformation played its role in preparing the ground for increased French involvement during the final stages of the war.
During 1989–97, DGSE helped many Chinese dissidents who participated in the Tiananmen Square protests of 1989 escape to western countries as a part of Operation Yellowbird.
During the Kosovo War, the DGSE played an active role in providing weapons training for the KLA. According to British wartime intercepts of Serbian military communication, DGSE officers took part in active fighting against Serbian forces. It was even revealed that several DGSE officers had been killed alongside KLA fighters in a Serbian ambush.
Reports in 2006 have credited DGSE operatives for infiltrating and exposing the inner workings of Afghan training camps during the 1990s. One of the spies employed by the agency later published a work under the pseudonym "Omar Nasiri", uncovering details of his life inside Al-Qaeda.
==== 2000s ====
A DGSE general heads the Alliance Base, a joint CTIC set up in Paris in cooperation with the CIA and other intelligence agencies. Alliance Base is known for having been involved in the arrest of Christian Ganczarski.
In 2003, the DGSE was held responsible for the outcome of Opération 14 juillet, a failed mission to rescue Íngrid Betancourt Pulecio from FARC rebels in Colombia.
In 2004, the DGSE was credited for liberating two French journalists, Georges Malbrunot and Christian Chesnot, who were held as hostages for 124 days in Iraq.
DGSE personnel were part of a team that arranged the release on June 12, 2005, of French journalist Florence Aubenas, held hostage for five months in Iraq.
DGSE was said to be involved in the arrest of the two presumed killers of four French tourists in Mauritania in January 2006.
In 2006, the French newspaper L'Est Républicain acquired an apparently leaked DGSE report to the French president Jacques Chirac claiming that Osama bin Laden had died in Pakistan on August 23, 2006, after contracting typhoid fever. The report had apparently been based on Saudi Arabian intelligence. These "death" allegations were thereafter denied by the French foreign minister Philippe Douste-Blazy and Saudi authorities, as well as CIA Bin Laden specialist Michael Scheuer.
In 2007–10, the DGSE undertook an extensive operation to track an estimated 120 Al-Qaeda terrorists in the FATA region of Pakistan.
In June 2009, DGSE uncovered evidence that two registered passengers on board Air France Flight 447, which crashed with the loss of 228 lives in the vicinity of Brazil, were linked to Islamic terrorist groups.
==== 2010s ====
In November 2010, three operatives from the DGSE's Service Operations (SO, formerly Service 7) botched an operation to burgle the room of China Eastern Airlines' boss Shaoyong Liu at the Crowne Plaza Hotel in Toulouse. The failure resulted in the suspension of all of SO's activities, and the very survival of the unit was called into question. SO operates only on French soil, where it mounts secret HUMINT operations such as searching hotel rooms and opening mail or diplomatic pouches.
In 2010–11, the DGSE trained agents of Bahrain's National Security Agency, the intelligence service trying to subdue the country's Shi'ite opposition protests. Bahrain's Special Security Force also benefits from a French advisor seconded from the Police Nationale, who trains it in modern anti-riot techniques.
In March 2011, the DGSE sent several members of the Service Action to support the Libyan rebels. However, most of the agents deployed were from the Direction des Opérations' Service Mission, a unit that gathers intelligence and makes contact with fighting factions in crisis zones.
In January 2013, Service Action members attempted to rescue one of the agency's agents held hostage. The rescue failed: the hostage was killed along with two DGSE operators.
In 2014, the DGSE, in a joint operation with the AIVD, successfully infiltrated and planted hidden cameras in a cyber operations center of the SVR in Russia.
The DGSE, with the NDS, ran a joint intelligence unit in Afghanistan known as "Shamshad" until the Taliban captured Kabul.
In 2017, DGSE concluded that Russia sought to influence France's 2017 presidential elections by generating social media support for the far-right candidate.
In 2017–19, the Action Division assassinated three major terrorist leaders of JNIM.
In 2018–19, the DGSE, in a joint operation with the CIA, DGSI, MI6, and FIS, tracked and identified 15 members of Unit 29155, who were using Chamonix as a "base camp" for covert operations around Europe.
In 2020, the DGSE, along with the CIA, supplied intelligence to the COS for its operation to kill Abdelmalek Droukdel.
=== DGSE officers or alleged officers ===
== In popular culture ==
The DGSE has been referenced in the following media:
James Bond 007: Nightfire (2002), in which James Bond works alongside DGSE operative Dominique Paradis.
The Bureau (2015–2020), Canal+ series about the lives of DGSE agents.
In the Marvel Cinematic Universe, the villain character of Georges Batroc (appearing in both Captain America: The Winter Soldier (2014) and The Falcon and the Winter Soldier (2021)) was an agent of the DGSE, Action Division, before being demobilized, although he is of Algerian descent, not French.
Secret Defense (2008)
Secret Agents (2004)
Godzilla (1998) film features Jean Reno as a DGSE agent in a major role.
== See also ==
General Directorate for Internal Security
List of intelligence agencies of France
Bob Denard, a French mercenary
== References ==
== External links ==
DGSE section of French Ministry of Defence website
DGSE section of French Ministry of Defence website (in French)
Direction General for External Security at the Wayback Machine (archive index)
Archive index at the Wayback Machine (in French)
DGSE on FAS.org
In computing, bandwidth is the maximum rate of data transfer across a given path. Bandwidth may be characterized as network bandwidth, data bandwidth, or digital bandwidth.
This definition of bandwidth is in contrast to the field of signal processing, wireless communications, modem data transmission, digital communications, and electronics, in which bandwidth is used to refer to the signal bandwidth measured in hertz, meaning the frequency range between lowest and highest attainable frequency while meeting a well-defined impairment level in signal power. The actual bit rate that can be achieved depends not only on the signal bandwidth but also on the noise on the channel.
== Network capacity ==
The term bandwidth sometimes defines the net bit rate (peak bit rate, information rate, or physical layer useful bit rate), channel capacity, or the maximum throughput of a logical or physical communication path in a digital communication system. For example, bandwidth tests measure the maximum throughput of a computer network. The maximum rate that can be sustained on a link is limited by the Shannon–Hartley channel capacity for these communication systems, which depends on the bandwidth in hertz and the noise on the channel.
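As a worked illustration of the Shannon–Hartley limit mentioned above, the sketch below computes C = B · log₂(1 + S/N); the 3 kHz bandwidth and 30 dB SNR are assumed example figures, not values from the text:

```python
import math

# Shannon–Hartley channel capacity: C = B * log2(1 + S/N),
# with B in hertz and S/N as a linear (not dB) power ratio.
def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    return bandwidth_hz * math.log2(1 + snr_linear)

# Assumed example: a 3 kHz voice channel with 30 dB SNR (linear ratio 1000).
capacity = shannon_capacity(3000, 1000)
print(f"{capacity:.0f} bit/s")  # ≈ 29902 bit/s
```

This is why a narrow analog telephone channel could still carry tens of kilobits per second with good modems: capacity grows with both bandwidth and signal-to-noise ratio.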
== Network consumption ==
The consumed bandwidth in bit/s, corresponds to achieved throughput or goodput, i.e., the average rate of successful data transfer through a communication path. The consumed bandwidth can be affected by technologies such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap, bandwidth allocation (for example bandwidth allocation protocol and dynamic bandwidth allocation), etc. A bit stream's bandwidth is proportional to the average consumed signal bandwidth in hertz (the average spectral bandwidth of the analog signal representing the bit stream) during a studied time interval.
Channel bandwidth may be confused with useful data throughput (or goodput). For example, a channel with x bit/s may not necessarily transmit data at x rate, since protocols, encryption, and other factors can add appreciable overhead. For instance, much internet traffic uses the transmission control protocol (TCP), which requires a three-way handshake for each transaction. Although in many modern implementations the protocol is efficient, it does add significant overhead compared to simpler protocols. Also, data packets may be lost, which further reduces the useful data throughput. In general, for any effective digital communication, a framing protocol is needed; overhead and effective throughput depends on implementation. Useful throughput is less than or equal to the actual channel capacity minus implementation overhead.
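To make the gap between channel bit rate and goodput concrete, the following sketch estimates protocol efficiency from per-packet header overhead; the 100 Mbit/s link rate is an assumed example, and the header sizes are the standard minimum TCP/IP/Ethernet values without options:

```python
# Rough goodput estimate for bulk TCP transfer over Ethernet,
# ignoring handshakes, retransmissions, and ACK traffic.
LINK_RATE_BPS = 100e6            # assumed example: 100 Mbit/s link
MSS = 1460                       # typical TCP maximum segment size (bytes)
TCP_HDR, IP_HDR = 20, 20         # minimum header sizes, no options (bytes)
ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap

frame_bytes = MSS + TCP_HDR + IP_HDR + ETH_OVERHEAD
efficiency = MSS / frame_bytes           # fraction of bits that are payload
goodput_bps = LINK_RATE_BPS * efficiency

print(f"protocol efficiency: {efficiency:.1%}")
print(f"approximate goodput: {goodput_bps / 1e6:.1f} Mbit/s")
```

Even in this best case, a few percent of the channel is consumed by framing alone; packet loss and handshaking reduce useful throughput further.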
== Maximum throughput ==
The asymptotic bandwidth (formally asymptotic throughput) for a network is the measure of maximum throughput for a greedy source, for example when the message size (the number of packets per second from a source) approaches the maximum amount.
Asymptotic bandwidths are usually estimated by sending a number of very large messages through the network and measuring the end-to-end throughput. As with other bandwidths, the asymptotic bandwidth is measured in multiples of bits per second. Since bandwidth spikes can skew the measurement, carriers often use the 95th percentile method. This method continuously measures bandwidth usage and then removes the top 5 percent.
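The 95th percentile method described above can be sketched in a few lines: sort the usage samples, discard the top 5%, and report the highest remaining value. The sample data here is made up purely for illustration:

```python
import math

def percentile_95(samples_bps):
    """Return the 95th-percentile sample: the highest value left
    after discarding the top 5% of measurements."""
    ordered = sorted(samples_bps)
    idx = math.ceil(0.95 * len(ordered)) - 1
    return ordered[idx]

# Assumed example: 20 five-minute usage samples (bit/s) with one large spike.
samples = [10e6] * 18 + [12e6, 95e6]
print(percentile_95(samples))  # 12000000.0 — the 95 Mbit/s spike is discarded
```

This is why a short burst does not raise a customer's billed rate: only sustained usage survives the 5% cut.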
== Multimedia ==
Digital bandwidth may also refer to multimedia bit rate or average bitrate after multimedia data compression (source coding), defined as the total amount of data divided by the playback time.
Due to the impractically high bandwidth requirements of uncompressed digital media, the required multimedia bandwidth can be significantly reduced with data compression. The most widely used data compression technique for media bandwidth reduction is the discrete cosine transform (DCT), which was first proposed by Nasir Ahmed in the early 1970s. DCT compression significantly reduces the amount of memory and bandwidth required for digital signals, capable of achieving a data compression ratio of up to 100:1 compared to uncompressed media.
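As a back-of-envelope illustration, the sketch below computes the bandwidth of an uncompressed video stream, the effect of an assumed 100:1 compression ratio, and the average bitrate of a stored clip. The resolution, frame rate, and file size are hypothetical figures, not values from any particular codec.

```python
# Average bitrate is total data divided by playback time. The figures
# below (1080p, 30 frames/s, 24 bits per pixel, 100:1 compression) are
# illustrative assumptions.

width, height = 1920, 1080
bits_per_pixel = 24
frames_per_second = 30

uncompressed_bps = width * height * bits_per_pixel * frames_per_second
compressed_bps = uncompressed_bps / 100   # assumed 100:1 ratio

print(f"uncompressed: {uncompressed_bps / 1e9:.2f} Gbit/s")  # ~1.49
print(f"compressed:   {compressed_bps / 1e6:.1f} Mbit/s")    # ~14.9

# Average bitrate of a stored clip: total bits / playback time.
file_bytes = 50 * 1024**2      # a hypothetical 50 MiB clip
duration_s = 120               # 2 minutes of playback
avg_bitrate = file_bytes * 8 / duration_s
print(f"average bitrate: {avg_bitrate / 1e6:.2f} Mbit/s")    # ~3.50
```

The two-orders-of-magnitude gap between the uncompressed and compressed rates is why compression, not raw channel capacity, made digital media delivery practical.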
== Web hosting ==
In Web hosting service, the term bandwidth is often used to describe the amount of data transferred to or from the website or server within a prescribed period of time, for example bandwidth consumption accumulated over a month measured in gigabytes per month. The more accurate phrase used for this meaning of a maximum amount of data transfer each month or given period is monthly data transfer.
A similar situation can occur for end-user Internet service providers as well, especially where network capacity is limited (for example in areas with underdeveloped internet connectivity and on wireless networks).
== Internet connections ==
== Edholm's law ==
Edholm's law, proposed by and named after Phil Edholm in 2004, holds that the bandwidth of telecommunication networks doubles every 18 months, a trend that has held since the 1970s. It is evident in the cases of the Internet, cellular (mobile) networks, wireless LANs and wireless personal area networks.
The MOSFET (metal–oxide–semiconductor field-effect transistor) is the most important factor enabling the rapid increase in bandwidth. The MOSFET (MOS transistor) was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959, and went on to become the basic building block of modern telecommunications technology. Continuous MOSFET scaling, along with various advances in MOS technology, has enabled both Moore's law (transistor counts in integrated circuit chips doubling every two years) and Edholm's law (communication bandwidth doubling every 18 months).
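Edholm's law is a statement of compound growth, which can be sketched directly; the starting bandwidth below is an arbitrary example.

```python
# Edholm's law as compound growth: a bandwidth b0 that doubles every
# 18 months grows to b0 * 2**(months / 18) after a given period.

def projected_bandwidth(b0, months, doubling_period=18):
    return b0 * 2 ** (months / doubling_period)

# Starting from 10 Mbit/s (an arbitrary example), the law predicts a
# 16x increase over 6 years: 72 months is four doubling periods.
print(projected_bandwidth(10.0, 72))   # 160.0
```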
== References ==
The Jindalee Operational Radar Network (JORN) is an over-the-horizon radar (OHR) network operated by the Royal Australian Air Force (RAAF) that can monitor air and sea movements across 37,000 square kilometres (14,000 sq mi). It has a normal operating range of 1,000–3,000 kilometres (620–1,860 mi). The network is used in the defence of Australia, and can also monitor maritime operations, wave heights, and wind directions.
JORN's main ground stations comprise a control centre, known as the JORN Coordination Centre (JCC), at RAAF Base Edinburgh in South Australia and three transmission stations: Radar 1 near Longreach, Queensland, Radar 2 near Laverton, Western Australia and Radar 3 near Alice Springs, Northern Territory.
== History ==
The roots of the JORN can be traced back to post World War II experiments in the United States and a series of Australian experiments at DSTO Edinburgh, South Australia beginning in the early 1950s.
In 1969, The Technical Cooperation Program membership and papers by John Strath prompted development of a core Over the Horizon radar project.
From July 1970 a study was undertaken; this resulted in a proposal for a program to be carried out, in three phases, to develop an over-the-horizon-radar system.
=== Geebung ===
Phase 1, Project Geebung, aimed to define operational requirements for an OHR and study applicable technologies and techniques. The project carried out a series of ionospheric soundings evaluating the suitability of the ionosphere for the operation of an OTHR.
=== Jindalee ===
Phase 2, Project Jindalee, aimed at proving the feasibility and costing of OHR. This second phase was carried out by the Radar Division, (later, the High Frequency Radar Division), of the Defence Science and Technology Organisation (DSTO). Project Jindalee came into being during the period 1972–1974 and was divided into three stages.
Stage 'A' commenced in April 1974. It involved the construction of a prototype radar receiver at Mount Everard (near Alice Springs), a transmitter at Harts Range (160 kilometres or 99 miles away), and a beacon in Derby. When completed in October 1976, the Stage A radar ran for two years, closing in December 1978. Stage A formally ended in February 1979, having achieved its mission of proving the feasibility of OTHR. The success of stage A resulted in the construction of a larger stage 'B' radar, drawing on the knowledge gained from stage A.
Stage 'B' commenced on 6 July 1978. The new radar was constructed next to the stage A radar. Developments during stage B included real time signal processing, custom built processors, larger antenna arrays, and higher power transmitters, which resulted in a more sensitive and capable radar.
The first data was received by stage B in April–May 1982, the first ship was detected in January 1983, and an aircraft was automatically tracked in February 1984.
Trials were carried out with the Royal Australian Air Force during April 1984, substantially fulfilling the mission of stage B, to demonstrate an OHR operating in Australia. Another two years of trials were carried out before the Jindalee project officially finished in December 1985.
Stage 'C' became the conversion of the stage B radar to an operational radar. This stage saw substantial upgrades to the stage B equipment followed by the establishment of No. 1 Radar Surveillance Unit RAAF (1RSU) and the handover of the radar to 1RSU. The aim was to provide the Australian Defence Force with operational experience of OHR.
=== JORN ===
==== Phase 3 ====
Phase 3 of the OTHR program was the design and construction of the JORN. The decision to build the JORN was announced in October 1986. Telstra, in association with GEC-Marconi, became the prime contractor and a fixed price contract for the construction of the JORN was signed on 11 June 1991. The JORN was to be completed by 13 June 1997.
==== Phase 3 Project problems ====
Telstra was responsible for software development and systems integration, areas in which it had no previous experience. GEC-Marconi was responsible for the HF Radar and related software aspects of the project, areas in which it had no previous experience. Other unsuccessful tenderers for the project included experienced Australian software development and systems integration company, BHP IT, and experienced Australian defence contractor AWA Defence Industries. Both of these companies are no longer in business.
By 1996, the project was experiencing technical difficulties and cost overruns. Telstra reported an A$609 million loss and announced that it could not guarantee a delivery date.
The failed Telstra contract prompted the project to enter a fourth phase.
==== Phase 4 ====
Phase 4 involved the completion of the JORN and its subsequent maintenance using a new contractor. In February 1997 Lockheed Martin and Tenix received a contract to deliver and manage the JORN. Subsequently, during June 1997 Lockheed and Tenix formed the company RLM Group to handle the joint venture. An operational radar system was delivered in April 2003, with maintenance contracted to continue until February 2007.
In August 2008, Lockheed Martin acquired Tenix Group's interest in RLM Holdings.
==== Phase 5 ====
As a consequence of the duration of its construction, the JORN delivered in 2003 was designed to a specification developed in the early 1990s. During this period the Alice Springs radar had evolved significantly under the guidance of the Defence Science & Technology Organisation. In February 2004 a fifth phase of the JORN project was approved.
Phase 5 aimed to upgrade the Laverton and Longreach radars to reflect over a decade of OTHR research and development. It was scheduled to run until approximately 2011, but was not completed until around 2013–2014 due to a skills shortage. All three stations are now similar, and use updated electronics.
==== Phase 6 ====
In March 2018 it was announced that BAE Systems Australia would undertake the $1.2 billion upgrade to Australia’s Jindalee Operational Radar Network, which was expected to take 10 years to complete.
== Project cost ==
The JORN project (JP2025) has had five phases and has cost approximately $1.8 billion. The ANAO audit report of June 1996 estimated an overall project cost for Phase 3 of $1.1 billion. Phase 5 costs have been estimated at $70 million. Phase 6 costs are expected to be $1.2 billion.
== Technology export ==
On 18 March 2025, Canadian prime minister Mark Carney announced the AU$6.5 billion purchase of JORN radar technology by Canada, for deployment over the Arctic.
== Network ==
JORN consists of:
three active radar stations: one near Longreach, Queensland (Radar 1), a second near Laverton, Western Australia (Radar 2), and a third near Alice Springs, Northern Territory (Radar 3);
a control centre at RAAF Base Edinburgh in South Australia (JCC);
seven transponders; and
twelve vertical ionosondes distributed around Australia and its territories.
DSTO previously used the radar station near Alice Springs, Northern Territory (known as Jindalee Facility Alice Springs) for research and development and also has its own network of vertical/oblique ionosondes for research purposes.
The Alice Springs radar was fully integrated into the JORN during Phase 5 to provide a third active radar station.
Each radar station consists of a transmitter site and a receiver site, separated by a large distance to prevent the transmitter from interfering with the receiver. The JORN transmitter and receiver sites are:
the Queensland transmitter at Longreach, with 90-degree coverage (23.658047°S 144.145432°E / -23.658047; 144.145432),
the Queensland receiver at Stonehenge, with 90-degree coverage (24.291095°S 143.195286°E / -24.291095; 143.195286),
the Western Australian transmitter at Leonora, with 180-degree coverage (28.317378°S 122.843456°E / -28.317378; 122.843456), and
the Western Australian receiver at Laverton, with 180-degree coverage (28.326747°S 122.005234°E / -28.326747; 122.005234).
the Alice Springs transmitter at Harts Range, with 90-degree coverage (22.967561°S 134.447937°E / -22.967561; 134.447937), and
the Alice Springs receiver at Mount Everard, with 90-degree coverage (23.521497°S 133.677521°E / -23.521497; 133.677521).
The Alice Springs radar was the original 'Jindalee Stage B' test bed on which the design of the other two stations was based. It continues to act as a research and development testbed in addition to its operational role.
The Mount Everard receiver site contains the remains of the first, smaller, 'Jindalee Stage A' receiver. It is visible in aerial photos, behind the stage B receiver (23.530074°S 133.68782°E / -23.530074; 133.68782). The stage A transmitter was rebuilt to become the stage B transmitter.
The high frequency radio transmitter arrays at Longreach and Laverton have 28 elements, each driven by a 20-kilowatt power amplifier, giving a total power of 560 kW. (Stage B transmitted 20 kW per amplifier.) The signal is bounced off the ionosphere, landing in the "illuminated" area of target interest. Most of the incident radiation is reflected forward in the original direction of travel, but a small proportion "backscatters" and returns along the reciprocal transmission path. These returns reflect off the ionosphere again before finally being received at the Longreach and Laverton stations. Signal attenuation over the whole path, from transmit antenna to target and back to receive antenna, is substantial, and the system's performance under these conditions marks it as leading-edge science.

The receiver stations use KEL Aerospace KFR35 series receivers. JORN uses radio frequencies between 5 and 30 MHz, far lower than most other civilian and military radars, which operate in the microwave band. Unlike most microwave radars, JORN uses neither pulsed transmission nor movable antennas: transmission is frequency-modulated continuous wave (FMCW), and the transmitted beam is aimed by the interaction between the "beam-steering" electronics and the antenna characteristics of the transmit systems.

Radar returns are distinguished in range by the offset between the instantaneous radiated signal frequency and the returning signal frequency. Returns are distinguished in azimuth by measuring the phase offsets of individual returns incident across the kilometres-plus length of the multi-element receiving antenna array. Intensive computational work is essential to JORN's operation, and refinement of the software suite offers the most cost-effective path for improvements.
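The range measurement described above can be sketched numerically. In an FMCW radar with a linear sweep of slope S hertz per second, an echo delayed by the round-trip time 2R/c returns offset from the instantaneous transmit frequency by a beat frequency f_b = S * 2R/c, so R = c * f_b / (2S). The sweep parameters below are illustrative only and are not JORN's actual waveform.

```python
# FMCW ranging sketch: beat frequency f_b = S * (2R/c), so R = c*f_b/(2S).
# Sweep bandwidth, sweep time, and beat frequency are illustrative values.

C = 299_792_458.0                        # speed of light, m/s

def range_from_beat(beat_hz, sweep_bw_hz, sweep_time_s):
    slope = sweep_bw_hz / sweep_time_s   # sweep slope S, in Hz/s
    return C * beat_hz / (2 * slope)     # one-way range, in metres

# A 10 kHz sweep over 0.1 s gives a slope of 1e5 Hz/s; a 1 kHz beat
# frequency then corresponds to a one-way range of roughly 1,500 km.
r = range_from_beat(beat_hz=1000.0, sweep_bw_hz=10e3, sweep_time_s=0.1)
print(f"{r / 1000:.0f} km")   # 1499 km
```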
The JORN ionosonde network is made up of vertical ionosondes, providing a real time map of the ionosphere. Each vertical incidence sounder (VIS) is a standardized Single-Receiver Digisonde Portable Sounder, built by University of Massachusetts Lowell for the JORN. A new ionospheric map is generated every 225 seconds. In a clockwise direction around Australia, the locations of the twelve (11 active and one test) JORN ionosondes are below.
JORN Ionosondes
The DSTO ionosonde network is not part of the JORN, but is used to further DSTO's research goals. DSTO uses Four-Receiver Digisonde Portable Sounders (DPS-4), also built by Lowell. During 2004 DSTO had ionosondes at the following locations.
DSTO Ionosondes
From west to east, the seven JORN transponders are located at
Christmas Island (OzGeoRFMap),
Broome, WA (OzGeoRFMap),
Kalumburu, WA (OzGeoRFMap),
Darwin, NT (OzGeoRFMap),
Nhulunbuy, NT (OzGeoRFMap),
Normanton, Qld (OzGeoRFMap), and
Horn island, Qld (OzGeoRFMap).
All of the above sites (and many more that likely form part of the network) can be found precisely on the RadioFrequency Map, which also lists the frequencies in use at each site.
== Operation and uses ==
The JORN network is operated by No. 1 Remote Sensor Unit (1RSU). Data from the JORN sites is fed to the JORN Coordination Centre at RAAF Base Edinburgh where it is passed on to other agencies and military units. Officially the system allows the Australian Defence Force to observe air and sea activity north of Australia to distances up to 4,000 kilometres (2,500 mi). This encompasses all of Java, New Guinea and the Solomon Islands, and may include Singapore.
However, in 1997, the prototype was able to detect missile launches by China over 5,500 kilometres (3,400 mi) away.
JORN is so sensitive it is able to track planes as small as a Cessna 172 taking off and landing in East Timor 2,600 kilometres (1,600 mi) away. Current research is anticipated to increase its sensitivity by a factor of ten beyond this level.
It is also reportedly able to detect stealth aircraft, as typically these are designed only to avoid detection by microwave radar. Project DUNDEE was a cooperative research project, with American missile defence research, into using JORN to detect missiles. The JORN was anticipated to play a role in future Missile Defense Agency initiatives, detecting and tracking missile launches in Asia.
As JORN is reliant on the interaction of signals with the ionosphere ('bouncing'), disturbances in the ionosphere adversely affect performance. The most significant factor influencing this is solar changes, which include sunrise, sunset and solar disturbances. The effectiveness of JORN is also reduced by extreme weather, including lightning and rough seas.
As JORN uses the Doppler principle to detect objects, it cannot detect objects moving at a tangent to the system, or objects moving at a similar speed to their surroundings.
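The dependence on radial velocity follows directly from the Doppler relation for a monostatic radar, f_d = 2 * v_radial * f0 / c; the speeds and carrier frequency below are illustrative.

```python
# Radar Doppler shift depends only on the radial (line-of-sight) velocity:
# f_d = 2 * v_radial * f0 / c. A target moving at a tangent to the radar
# has zero radial velocity and so produces no shift. Values are illustrative.

import math

C = 299_792_458.0   # speed of light, m/s

def doppler_shift(speed_ms, heading_deg, carrier_hz):
    """heading_deg = 0 means directly toward the radar; 90 is tangential."""
    v_radial = speed_ms * math.cos(math.radians(heading_deg))
    return 2 * v_radial * carrier_hz / C

f0 = 20e6   # a 20 MHz carrier, within JORN's 5-30 MHz band
print(doppler_shift(250.0, 0.0, f0))    # ~33.4 Hz: inbound aircraft
print(doppler_shift(250.0, 90.0, f0))   # ~0 Hz: tangential, no shift
```

The second case is why a target crossing the beam at a tangent, or drifting at the same radial speed as the surrounding sea clutter, is hard to separate from its background.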
== Engineering heritage award ==
JORN received an Engineering Heritage International Marker from Engineers Australia as part of its Engineering Heritage Recognition Program.
== See also ==
Cobra Mist
Duga radar, a similar Russian system
Imaging radar
== References ==
Application security (AppSec for short) includes all tasks that introduce a secure software development life cycle to development teams. Its final goal is to improve security practices and, through that, to find, fix, and preferably prevent security issues within applications. It encompasses the whole application life cycle, from requirements analysis and design through implementation, verification, and maintenance.
Web application security is a branch of information security that deals specifically with the security of websites, web applications, and web services. At a high level, web application security draws on the principles of application security but applies them specifically to the internet and web systems. Application security also covers mobile apps, including those for iOS and Android.
Web application security tools are specialized tools for working with HTTP traffic, e.g., web application firewalls.
== Approaches ==
Different approaches will find different subsets of the security vulnerabilities lurking in an application and are most effective at different times in the software lifecycle. They each represent different tradeoffs of time, effort, cost and vulnerabilities found.
Design review. Before code is written the application's architecture and design can be reviewed for security problems. A common technique in this phase is the creation of a threat model.
Whitebox security review, or code review. Here a security engineer develops a deep understanding of the application by manually reviewing its source code and looking for security flaws. Through this comprehension of the application, vulnerabilities unique to it can be found.
Blackbox security audit. Here the running application is tested for security vulnerabilities from the outside; no source code is required.
Automated tooling. Many security tools can be automated through inclusion in the development or testing environment. Examples are automated DAST/SAST tools that are integrated into code editors or CI/CD platforms.
Coordinated vulnerability platforms. These are hacker-powered application security solutions offered by many websites and software developers by which individuals can receive recognition and compensation for reporting bugs.
== Security threats ==
The Open Worldwide Application Security Project (OWASP) provides free and open resources. It is led by a non-profit called The OWASP Foundation. The OWASP Top 10 - 2017 was based on comprehensive data compiled from over 40 partner organizations, covering approximately 2.3 million vulnerabilities across more than 50,000 applications. According to the OWASP Top 10 - 2021, the ten most critical web application security risks are:
Broken access control
Cryptographic failures
Injection
Insecure design
Security misconfiguration
Vulnerable and outdated components
Identification and authentication failures
Software and data integrity failures
Security logging and monitoring failures
Server-side request forgery (SSRF)
== Security controls ==
The OWASP Top 10 Proactive Controls 2024 is a list of security techniques every software architect and developer should know and heed.
The current list contains:
Implement access control
Use cryptography the proper way
Validate all input & handle exceptions
Address security from the start
Secure by default configurations
Keep your components secure
Implement digital identity
Use browser security features
Implement security logging and monitoring
Stop server-side request forgery
== Tooling for security testing ==
Security testing techniques scour for vulnerabilities or security holes in applications. These vulnerabilities leave applications open to exploitation. Ideally, security testing is implemented throughout the entire software development life cycle (SDLC) so that vulnerabilities may be addressed in a timely and thorough manner.
There are many kinds of automated tools for identifying vulnerabilities in applications. Common tool categories used for identifying application vulnerabilities include:
Static application security testing (SAST) analyzes source code for security vulnerabilities during an application's development. Compared to DAST, SAST can be utilized even before the application is in an executable state. As SAST has access to the full source code it is a white-box approach. This can yield more detailed results but can result in many false positives that need to be manually verified.
Dynamic application security testing (DAST, often called vulnerability scanners) automatically detects vulnerabilities by crawling and analyzing websites. This method is highly scalable, easily integrated and quick. DAST tools are well suited for dealing with low-level attacks such as injection flaws but are not well suited to detect high-level flaws, e.g., logic or business logic flaws. Fuzzing tools are commonly used for input testing.
Interactive application security testing (IAST) assesses applications from within using software instrumentation. This combines the strengths of both SAST and DAST methods as well as providing access to code, HTTP traffic, library information, backend connections and configuration information. Some IAST products require the application to be attacked, while others can be used during normal quality assurance testing.
Runtime application self-protection augments existing applications to provide intrusion detection and prevention from within an application runtime.
Dependency scanners (also called software composition analysis) try to detect the usage of software components with known vulnerabilities. These tools can either work on-demand, e.g., during the source code build process, or periodically.
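As a toy illustration of the static-analysis category above, the sketch below scans source text for a few patterns associated with known weakness classes. Real SAST tools parse code and track data flow; this pattern list is purely illustrative and will produce false positives.

```python
# A deliberately naive sketch of what a SAST-style check does: scan
# source lines for patterns associated with known weakness classes.
# Real tools build syntax trees and follow data flow; this does not.

import re

RULES = [
    (re.compile(r"\beval\s*\("), "use of eval() - possible code injection"),
    (re.compile(r"\bpickle\.loads\s*\("), "deserializing untrusted data"),
    (re.compile(r"password\s*=\s*['\"]"), "hard-coded credential"),
]

def scan(source: str):
    """Return (line number, message) for every rule hit."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

sample = 'user = "bob"\npassword = "hunter2"\nresult = eval(expr)\n'
for lineno, message in scan(sample):
    print(f"line {lineno}: {message}")   # flags lines 2 and 3
```

Even this crude matcher shows the white-box trade-off described above: it sees everything in the source, but without understanding the code it cannot tell a real flaw from a harmless string, hence the manual verification of false positives.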
== Security standards and regulations ==
CERT Secure Coding standard
ISO/IEC 27034-1:2011 Information technology — Security techniques — Application security -- Part 1: Overview and concepts
ISO/IEC TR 24772:2013 Information technology — Programming languages — Guidance to avoiding vulnerabilities in programming languages through language selection and use
NIST Special Publication 800-53
OWASP ASVS: Web Application Security Verification Standard
== See also ==
Common Weakness Enumeration
Data security
Mobile security
OWASP
Microsoft Security Development Lifecycle
Usable security
== References == | Wikipedia/Web_application_security |
Public-key cryptography, or asymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of a public key and a corresponding private key. Key pairs are generated with cryptographic algorithms based on mathematical problems termed one-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security. There are many kinds of public-key cryptosystems, with different security goals, including digital signature, Diffie–Hellman key exchange, public-key key encapsulation, and public-key encryption.
Public key algorithms are fundamental security primitives in modern cryptosystems, including applications and protocols that offer assurance of the confidentiality and authenticity of electronic communications and data storage. They underpin numerous Internet standards, such as Transport Layer Security (TLS), SSH, S/MIME, and PGP. Compared to symmetric cryptography, public-key cryptography can be too slow for many purposes, so these protocols often combine symmetric cryptography with public-key cryptography in hybrid cryptosystems.
== Description ==
Before the mid-1970s, all cipher systems used symmetric key algorithms, in which the same cryptographic key is used with the underlying algorithm by both the sender and the recipient, who must both keep it secret. Of necessity, the key in every such system had to be exchanged between the communicating parties in some secure way prior to any use of the system – for instance, via a secure channel. This requirement is never trivial and very rapidly becomes unmanageable as the number of participants increases, or when secure channels are not available, or when, as is sensible cryptographic practice, keys are frequently changed. In particular, if messages are meant to be secure from other users, a separate key is required for each possible pair of users.
By contrast, in a public-key cryptosystem, the public keys can be disseminated widely and openly, and only the corresponding private keys need be kept secret.
The two best-known types of public key cryptography are digital signature and public-key encryption:
In a digital signature system, a sender can use a private key together with a message to create a signature. Anyone with the corresponding public key can verify whether the signature matches the message, but a forger who does not know the private key cannot find any message/signature pair that will pass verification with the public key. For example, a software publisher can create a signature key pair and include the public key in software installed on computers. Later, the publisher can distribute an update to the software signed using the private key, and any computer receiving an update can confirm it is genuine by verifying the signature using the public key. As long as the software publisher keeps the private key secret, even a forger who can distribute malicious updates to computers cannot convince the computers that any malicious updates are genuine.
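The sign-then-verify flow can be illustrated with textbook RSA and deliberately tiny primes. This sketch is insecure by construction (real systems use vetted libraries, large keys, hashing, and padding schemes) but it shows the asymmetry: only the private key can produce a signature that the public key accepts.

```python
# Toy textbook RSA with tiny primes (p=61, q=53), for illustration only.
# Do NOT use this for anything real.

n = 61 * 53            # public modulus, 3233
e = 17                 # public exponent
d = 2753               # private exponent: (e * d) % 3120 == 1

def sign(m: int) -> int:
    return pow(m, d, n)           # requires the private key

def verify(m: int, sig: int) -> bool:
    return pow(sig, e, n) == m    # anyone with (n, e) can check

digest = 1234                     # stand-in for a hashed update, < n
sig = sign(digest)
print(verify(digest, sig))        # True: genuine update
print(verify(4321, sig))          # False: signature fails on altered data
```

A real signature scheme signs a cryptographic hash of the message rather than the message itself, and uses a padding scheme such as RSA-PSS; the arithmetic relationship, however, is the one shown.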
In a public-key encryption system, anyone with a public key can encrypt a message, yielding a ciphertext, but only those who know the corresponding private key can decrypt the ciphertext to obtain the original message. For example, a journalist can publish the public key of an encryption key pair on a web site so that sources can send secret messages to the news organization in ciphertext. Only the journalist who knows the corresponding private key can decrypt the ciphertexts to obtain the sources' messages; an eavesdropper reading email on its way to the journalist cannot decrypt the ciphertexts. However, public-key encryption does not conceal metadata like what computer a source used to send a message, when they sent it, or how long it is. Public-key encryption on its own also does not tell the recipient anything about who sent a message; it just conceals the content of the message.
One important issue is confidence/proof that a particular public key is authentic, i.e. that it is correct and belongs to the person or entity claimed, and has not been tampered with or replaced by some (perhaps malicious) third party. There are several possible approaches, including:
A public key infrastructure (PKI), in which one or more third parties – known as certificate authorities – certify ownership of key pairs. TLS relies upon this. This implies that the PKI system (software, hardware, and management) is trust-able by all involved.
A "web of trust" decentralizes authentication by using individual endorsements of links between a user and the public key belonging to that user. PGP uses this approach, in addition to lookup in the domain name system (DNS). The DKIM system for digitally signing emails also uses this approach.
== Applications ==
The most obvious application of a public key encryption system is for encrypting communication to provide confidentiality – a message that a sender encrypts using the recipient's public key, which can be decrypted only by the recipient's paired private key.
Another application in public key cryptography is the digital signature. Digital signature schemes can be used for sender authentication.
Non-repudiation systems use digital signatures to ensure that one party cannot successfully dispute its authorship of a document or communication.
Further applications built on this foundation include: digital cash, password-authenticated key agreement, time-stamping services and non-repudiation protocols.
== Hybrid cryptosystems ==
Because asymmetric key algorithms are nearly always much more computationally intensive than symmetric ones, it is common to use a public/private asymmetric key-exchange algorithm to encrypt and exchange a symmetric key, which is then used by symmetric-key cryptography to transmit data using the now-shared symmetric key for a symmetric key encryption algorithm. PGP, SSH, and the SSL/TLS family of schemes use this procedure; they are thus called hybrid cryptosystems. The initial asymmetric cryptography-based key exchange to share a server-generated symmetric key from the server to the client has the advantage of not requiring that a symmetric key be pre-shared manually, such as on printed paper or discs transported by a courier, while providing the higher data throughput of symmetric key cryptography over asymmetric key cryptography for the remainder of the shared connection.
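A minimal sketch of the hybrid pattern: an asymmetric exchange (Diffie–Hellman here, with a toy-sized prime) establishes a shared secret, which is hashed into a symmetric key that protects the bulk data. The group parameters and the hash-counter keystream are illustrative stand-ins for standardized groups and real ciphers such as AES.

```python
# Hybrid cryptosystem sketch: toy Diffie-Hellman establishes a shared
# secret; a symmetric key derived from it encrypts the bulk data.
# The 32-bit prime and XOR keystream are for illustration only.

import hashlib
import secrets

P = 0xFFFFFFFB   # 2**32 - 5, prime, but far too small for real use
G = 5

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Derive a keystream by hashing key || counter, then XOR with data.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Each side picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(P - 2) + 1
b = secrets.randbelow(P - 2) + 1
shared_server = pow(pow(G, a, P), b, P)   # (g^a)^b mod p
shared_client = pow(pow(G, b, P), a, P)   # (g^b)^a mod p - same value

key = hashlib.sha256(str(shared_server).encode()).digest()
ct = keystream_xor(key, b"bulk data moves under the symmetric key")
pt = keystream_xor(key, ct)               # XOR is its own inverse
print(pt)
```

Only the slow asymmetric step runs once per connection; every subsequent byte is handled by the fast symmetric layer, which is exactly the division of labour the protocols above rely on.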
== Weaknesses ==
As with all security-related systems, there are various potential weaknesses in public-key cryptography. Aside from poor choice of an asymmetric key algorithm (there are few that are widely regarded as satisfactory) or too short a key length, the chief security risk is that the private key of a pair becomes known. All security of messages, authentication, etc., will then be lost.
Additionally, with the advent of quantum computing, many asymmetric key algorithms are considered vulnerable to attacks, and new quantum-resistant schemes are being developed to overcome the problem.
=== Algorithms ===
All public key schemes are in theory susceptible to a "brute-force key search attack". However, such an attack is impractical if the amount of computation needed to succeed – termed the "work factor" by Claude Shannon – is out of reach of all potential attackers. In many cases, the work factor can be increased by simply choosing a longer key. But other algorithms may inherently have much lower work factors, making resistance to a brute-force attack (e.g., from longer keys) irrelevant. Some special and specific algorithms have been developed to aid in attacking some public key encryption algorithms; both RSA and ElGamal encryption have known attacks that are much faster than the brute-force approach. None of these are sufficiently improved to be actually practical, however.
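The work-factor argument can be made concrete with simple arithmetic: an exhaustive search of a k-bit keyspace examines about 2^(k-1) keys on average, so each added bit doubles the cost. The attacker's guessing rate below is an assumed figure, and for public-key algorithms the mapping from key length to work factor is less direct, precisely because of the structured attacks mentioned above.

```python
# Brute-force work factor grows exponentially with key length. The
# attacker rate (one trillion guesses per second) is an assumed figure.

SECONDS_PER_YEAR = 31_557_600           # Julian year

def brute_force_years(key_bits, guesses_per_second=1e12):
    # On average half the keyspace is searched before the key is found.
    return (2 ** key_bits / 2) / guesses_per_second / SECONDS_PER_YEAR

for bits in (56, 128):
    print(f"{bits}-bit key: {brute_force_years(bits):.3g} years")
```

At this assumed rate a 56-bit key falls in well under a day, while a 128-bit key requires on the order of 10^18 years, which is why "choose a longer key" defeats brute force but says nothing about the faster structured attacks on RSA or ElGamal.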
Major weaknesses have been found for several formerly promising asymmetric key algorithms. The "knapsack packing" algorithm was found to be insecure after the development of a new attack. As with all cryptographic functions, public-key implementations may be vulnerable to side-channel attacks that exploit information leakage to simplify the search for a secret key. These are often independent of the algorithm being used. Research is underway to both discover, and to protect against, new attacks.
=== Alteration of public keys ===
Another potential security vulnerability in using asymmetric keys is the possibility of a "man-in-the-middle" attack, in which the communication of public keys is intercepted by a third party (the "man in the middle") and then modified to provide different public keys instead. Encrypted messages and responses must, in all instances, be intercepted, decrypted, and re-encrypted by the attacker using the correct public keys for the different communication segments so as to avoid suspicion.
A communication is said to be insecure where data is transmitted in a manner that allows for interception (also called "sniffing"). These terms refer to reading the sender's private data in its entirety. A communication is particularly unsafe when interceptions can not be prevented or monitored by the sender.
A man-in-the-middle attack can be difficult to implement due to the complexities of modern security protocols. However, the task becomes simpler when a sender is using insecure media such as public networks, the Internet, or wireless communication. In these cases an attacker can compromise the communications infrastructure rather than the data itself. A hypothetical malicious staff member at an Internet service provider (ISP) might find a man-in-the-middle attack relatively straightforward. Capturing the public key would only require searching for the key as it gets sent through the ISP's communications hardware; in properly implemented asymmetric key schemes, this is not a significant risk.
In some advanced man-in-the-middle attacks, one side of the communication will see the original data while the other will receive a malicious variant. Asymmetric man-in-the-middle attacks can prevent users from realizing their connection is compromised. This remains so even when one user's data is known to be compromised, because the data appears fine to the other user. This can lead to confusing disagreements between users such as "it must be on your end!" when neither user is at fault. Hence, man-in-the-middle attacks are only fully preventable when the communications infrastructure is physically controlled by one or both parties, such as via a wired route inside the sender's own building. In sum, public keys are easier to alter when the communications hardware used by a sender is controlled by an attacker.
=== Public key infrastructure ===
One approach to prevent such attacks involves the use of a public key infrastructure (PKI): a set of roles, policies, and procedures needed to create, manage, distribute, use, store and revoke digital certificates and manage public-key encryption. However, this approach has potential weaknesses.
For example, the certificate authority issuing the certificate must be trusted by all participating parties to have properly checked the identity of the key-holder, to have ensured the correctness of the public key when it issues a certificate, to be secure from computer piracy, and to have made arrangements with all participants to check all their certificates before protected communications can begin. Web browsers, for instance, are supplied with a long list of "self-signed identity certificates" from PKI providers – these are used to check the bona fides of the certificate authority and then, in a second step, the certificates of potential communicators. An attacker who could subvert one of those certificate authorities into issuing a certificate for a bogus public key could then mount a "man-in-the-middle" attack as easily as if the certificate scheme were not used at all. An attacker who penetrates an authority's servers and obtains its store of certificates and keys (public and private) would be able to spoof, masquerade, decrypt, and forge transactions without limit, assuming that they were able to place themselves in the communication stream.
Despite its theoretical and potential problems, public key infrastructure is widely used. Examples include TLS and its predecessor SSL, which are commonly used to provide security for web browser transactions (for example, most websites use TLS for HTTPS).
Aside from the resistance to attack of a particular key pair, the security of the certification hierarchy must be considered when deploying public key systems. A certificate authority – usually a purpose-built program running on a server computer – vouches for the identities assigned to specific private keys by producing a digital certificate. Public key digital certificates are typically valid for several years at a time, so the associated private keys must be held securely over that time. When a private key used for certificate creation higher in the PKI server hierarchy is compromised or accidentally disclosed, a "man-in-the-middle attack" is possible, making any subordinate certificate wholly insecure.
=== Unencrypted metadata ===
Most of the available public-key encryption software does not conceal metadata in the message header, which might include the identities of the sender and recipient, the sending date, the subject field, and the software they use. Rather, only the body of the message is concealed and can only be decrypted with the private key of the intended recipient. This means that a third party could construct quite a detailed model of participants in a communication network, along with the subjects being discussed, even if the message body itself is hidden.
However, there has been a recent demonstration of messaging with encrypted headers, which obscures the identities of the sender and recipient, and significantly reduces the available metadata to a third party. The concept is based around an open repository containing separately encrypted metadata blocks and encrypted messages. Only the intended recipient is able to decrypt the metadata block, and having done so they can identify and download their messages and decrypt them. Such a messaging system is at present in an experimental phase and not yet deployed. Scaling this method would reveal to the third party only the inbox server being used by the recipient and the timestamp of sending and receiving. The server could be shared by thousands of users, making social network modelling much more challenging.
== History ==
During the early history of cryptography, two parties would rely upon a key that they would exchange by means of a secure, but non-cryptographic, method such as a face-to-face meeting, or a trusted courier. This key, which both parties must then keep absolutely secret, could then be used to exchange encrypted messages. A number of significant practical difficulties arise with this approach to distributing keys.
=== Anticipation ===
In his 1874 book The Principles of Science, William Stanley Jevons wrote:
Can the reader say what two numbers multiplied together will produce the number 8616460799? I think it unlikely that anyone but myself will ever know.
Here he described the relationship of one-way functions to cryptography, and went on to discuss specifically the factorization problem used to create a trapdoor function. In July 1996, mathematician Solomon W. Golomb said: "Jevons anticipated a key feature of the RSA Algorithm for public key cryptography, although he certainly did not invent the concept of public key cryptography."
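Jevons's challenge number is trivial for a computer today; a few lines of trial division (a sketch only — nothing like the algorithms used against cryptographic-size moduli) recover its two factors:

```python
def trial_division_factor(n):
    """Return (p, q) where p is the smallest prime factor of n and q = n // p."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

# Jevons's 1874 challenge number from The Principles of Science
p, q = trial_division_factor(8616460799)
assert p * q == 8616460799
print(p, q)  # 89681 96079
```

What was genuinely hard in 1874 is a millisecond of computation now, which is exactly why practical trapdoor functions must use numbers far too large for exhaustive trial division.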
=== Classified discovery ===
In 1970, James H. Ellis, a British cryptographer at the UK Government Communications Headquarters (GCHQ), conceived of the possibility of "non-secret encryption" (now called public key cryptography), but could see no way to implement it.
In 1973, his colleague Clifford Cocks implemented what has become known as the RSA encryption algorithm, giving a practical method of "non-secret encryption", and in 1974 another GCHQ mathematician and cryptographer, Malcolm J. Williamson, developed what is now known as Diffie–Hellman key exchange.
The scheme was also passed to the US's National Security Agency. Both organisations had a military focus and only limited computing power was available in any case; the potential of public key cryptography remained unrealised by either organization:
I judged it most important for military use ... if you can share your key rapidly and electronically, you have a major advantage over your opponent. Only at the end of the evolution from Berners-Lee designing an open internet architecture for CERN, its adaptation and adoption for the Arpanet ... did public key cryptography realise its full potential.
—Ralph Benjamin
These discoveries were not publicly acknowledged for 27 years, until the research was declassified by the British government in 1997.
=== Public discovery ===
In 1976, an asymmetric key cryptosystem was published by Whitfield Diffie and Martin Hellman who, influenced by Ralph Merkle's work on public key distribution, disclosed a method of public key agreement. This method of key exchange, which uses exponentiation in a finite field, came to be known as Diffie–Hellman key exchange. This was the first published practical method for establishing a shared secret-key over an authenticated (but not confidential) communications channel without using a prior shared secret. Merkle's "public key-agreement technique" became known as Merkle's Puzzles, and was invented in 1974 and only published in 1978. This makes asymmetric encryption a rather new field in cryptography although cryptography itself dates back more than 2,000 years.
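The exchange itself is only a few lines of modular arithmetic. The sketch below uses deliberately tiny, insecure toy parameters (p = 23 and g = 5 are illustrative values; real deployments use groups of thousands of bits):

```python
p, g = 23, 5          # public parameters: a small prime and a generator (toy values)

a = 6                 # Alice's private exponent
b = 15                # Bob's private exponent

A = pow(g, a, p)      # Alice publishes A = g^a mod p
B = pow(g, b, p)      # Bob publishes B = g^b mod p

# Each party combines the other's public value with its own secret exponent
shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob   = pow(A, b, p)   # (g^a)^b mod p
assert shared_alice == shared_bob   # both obtain g^(ab) mod p, here 2
```

An eavesdropper sees p, g, A and B, but recovering the shared value requires solving a discrete logarithm, which is believed to be hard for properly sized groups.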
In 1977, a generalization of Cocks's scheme was independently invented by Ron Rivest, Adi Shamir and Leonard Adleman, all then at MIT. The latter authors published their work in 1978 in Martin Gardner's Scientific American column, and the algorithm came to be known as RSA, from their initials. RSA uses exponentiation modulo a product of two very large primes, to encrypt and decrypt, performing both public key encryption and public key digital signatures. Its security is connected to the extreme difficulty of factoring large integers, a problem for which there is no known efficient general technique. A description of the algorithm was published in the Mathematical Games column in the August 1977 issue of Scientific American.
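The arithmetic of RSA can likewise be sketched with toy primes (the values below are illustrative and completely insecure; real keys use primes hundreds of digits long together with a padding scheme such as OAEP):

```python
p, q = 61, 53                 # toy primes (far too small for real use)
n = p * q                     # public modulus, 3233
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent, the modular inverse (Python 3.8+)

m = 65                        # message encoded as an integer < n
c = pow(m, e, n)              # encryption: c = m^e mod n
assert pow(c, d, n) == m      # decryption: c^d mod n recovers m
```

Security rests on the fact that computing d from (n, e) appears to require factoring n, which is infeasible for well-chosen large primes.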
Since the 1970s, a large number and variety of encryption, digital signature, key agreement, and other techniques have been developed, including the Rabin cryptosystem, ElGamal encryption, DSA and ECC.
== Examples ==
Examples of well-regarded asymmetric key techniques for varied purposes include:
Diffie–Hellman key exchange protocol
DSS (Digital Signature Standard), which incorporates the Digital Signature Algorithm
ElGamal
Elliptic-curve cryptography
Elliptic Curve Digital Signature Algorithm (ECDSA)
Elliptic-curve Diffie–Hellman (ECDH)
Ed25519 and Ed448 (EdDSA)
X25519 and X448 (ECDH/EdDH)
Various password-authenticated key agreement techniques
Paillier cryptosystem
RSA encryption algorithm (PKCS#1)
Cramer–Shoup cryptosystem
YAK authenticated key agreement protocol
Examples of asymmetric key algorithms not yet widely adopted include:
NTRUEncrypt cryptosystem
Kyber
McEliece cryptosystem
Examples of notable – yet insecure – asymmetric key algorithms include:
Merkle–Hellman knapsack cryptosystem
Examples of protocols using asymmetric key algorithms include:
S/MIME
GPG, an implementation of OpenPGP, and an Internet Standard
EMV, EMV Certificate Authority
IPsec
PGP
ZRTP, a secure VoIP protocol
Transport Layer Security standardized by IETF and its predecessor Secure Socket Layer
SILC
SSH
Bitcoin
Off-the-Record Messaging
== See also ==
== Notes ==
== References ==
== External links ==
Oral history interview with Martin Hellman, Charles Babbage Institute, University of Minnesota. Leading cryptography scholar Martin Hellman discusses the circumstances and fundamental insights of his invention of public key cryptography with collaborators Whitfield Diffie and Ralph Merkle at Stanford University in the mid-1970s.
An account of how GCHQ kept their invention of PKE secret until 1997
In cryptography, differential equations of addition (DEA) are one of the most basic equations related to differential cryptanalysis that mix additions over two different groups (e.g. addition modulo 2^32 and addition over GF(2)) and where input and output differences are expressed as XORs.
== Examples ==
Differential equations of addition (DEA) are of the following form:
(x + y) ⊕ ((x ⊕ a) + (y ⊕ b)) = c
where x and y are n-bit unknown variables and a, b and c are known variables. The symbols + and ⊕ denote addition modulo 2^n and bitwise exclusive-or, respectively. The above equation is denoted by (a, b, c).
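The definition is straightforward to check numerically. A small sketch (toy 8-bit values, chosen arbitrarily) computes the output difference c for one pair (x, y) and verifies the equation:

```python
def dea_holds(x, y, a, b, c, n=8):
    """Check (x + y) XOR ((x XOR a) + (y XOR b)) == c, with + taken modulo 2^n."""
    mask = (1 << n) - 1
    lhs = ((x + y) & mask) ^ (((x ^ a) + (y ^ b)) & mask)
    return lhs == c

# Arbitrary toy values for the unknowns and input differences
x, y, a, b = 0x53, 0xA7, 0x0F, 0x31
mask = 0xFF
c = ((x + y) & mask) ^ (((x ^ a) + (y ^ b)) & mask)   # the output difference
assert dea_holds(x, y, a, b, c)
print(hex(c))   # 0x8 for these values
```

In an attack the roles are reversed: (a, b, c) is observed and the solver must characterize the pairs (x, y) consistent with it.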
Let a set S = {(a_i, b_i, c_i) | i < k} for integer i denote a system of k(n) DEA, where k(n) is a polynomial in n. It has been proved that the satisfiability of an arbitrary set of DEA is in the complexity class P, whereas a brute-force search requires exponential time.
In 2013, some properties of a special form of DEA were reported by Chengqing Li et al., where a = 0 and y is assumed known. Essentially, the special DEA can be represented as (x ∔ α) ⊕ (x ∔ β) = c. Based on the found properties, an algorithm for deriving x was proposed and analyzed.
== Applications ==
The solution to an arbitrary set of DEA (either in the batch or in the adaptive query model) is due to Souradyuti Paul and Bart Preneel. The solution techniques have been used to attack the stream cipher Helix.
== Further reading ==
Souradyuti Paul and Bart Preneel, Solving Systems of Differential Equations of Addition, ACISP 2005. Full version (PDF)
Souradyuti Paul and Bart Preneel, Near Optimal Algorithms for Solving Differential Equations of Addition With Batch Queries, Indocrypt 2005. Full version (PDF)
Helger Lipmaa, Johan Wallén, Philippe Dumas: On the Additive Differential Probability of Exclusive-Or. FSE 2004: 317-331.
== References ==
A cryptographic n-multilinear map is a kind of multilinear map, that is, a function e : G_1 × ⋯ × G_n → G_T such that for any integers a_1, …, a_n and elements g_i ∈ G_i, e(g_1^{a_1}, …, g_n^{a_n}) = e(g_1, …, g_n)^{a_1 ⋯ a_n}, and which in addition is efficiently computable and satisfies some security properties. It has several applications in cryptography, such as key exchange protocols, identity-based encryption, and broadcast encryption. There exist constructions of cryptographic 2-multilinear maps, known as bilinear maps; however, the problem of constructing such multilinear maps for n > 2 seems much more difficult, and the security of the proposed candidates is still unclear.
== Definition ==
=== For n = 2 ===
In this case, multilinear maps are mostly known as bilinear maps or pairings, and they are usually defined as follows: Let G_1 and G_2 be two additive cyclic groups of prime order q, and G_T another cyclic group of order q written multiplicatively. A pairing is a map e : G_1 × G_2 → G_T which satisfies the following properties:
Bilinearity
For all a, b ∈ F_q^* and all P ∈ G_1, Q ∈ G_2: e(aP, bQ) = e(P, Q)^{ab}.
Non-degeneracy
If g_1 and g_2 are generators of G_1 and G_2, respectively, then e(g_1, g_2) is a generator of G_T.
Computability
There exists an efficient algorithm to compute e.
In addition, for security purposes, the discrete logarithm problem is required to be hard in both G_1 and G_2.
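Bilinearity can be illustrated with a toy (and cryptographically worthless) construction: take G_1 = G_2 = Z_q with addition, let G_T be the order-q subgroup of Z_p^* for a prime p with q dividing p − 1, and define e(x, y) = g^{xy} mod p. The parameters below are illustrative assumptions only; in this toy map discrete logarithms are trivial to read off, so none of the security requirements hold:

```python
p, q, g = 23, 11, 2      # toy parameters: g = 2 has order q = 11 in Z_23^*

def e(x, y):
    """Toy 'pairing' e : Z_q x Z_q -> <g>, defined as e(x, y) = g^(x*y) mod p."""
    return pow(g, (x * y) % q, p)

a, b = 3, 7
P, Q = 4, 9              # elements of Z_q, written additively (aP means a*P mod q)

# Bilinearity: e(aP, bQ) = e(P, Q)^(ab)
assert e((a * P) % q, (b * Q) % q) == pow(e(P, Q), a * b, p)
```

The identity holds by construction because exponents multiply; real pairings achieve the same algebra on elliptic-curve groups where the discrete logarithm is believed to be hard.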
=== General case (for any n) ===
We say that a map e : G_1 × ⋯ × G_n → G_T is an n-multilinear map if it satisfies the following properties:
All G_i (for 1 ≤ i ≤ n) and G_T are groups of the same order;
if a_1, …, a_n ∈ Z and (g_1, …, g_n) ∈ G_1 × ⋯ × G_n, then e(g_1^{a_1}, …, g_n^{a_n}) = e(g_1, …, g_n)^{a_1 ⋯ a_n};
the map is non-degenerate in the sense that if g_1, …, g_n are generators of G_1, …, G_n, respectively, then e(g_1, …, g_n) is a generator of G_T;
there exists an efficient algorithm to compute e.
In addition, for security purposes, the discrete logarithm problem is required to be hard in G_1, …, G_n.
=== Candidates ===
All the candidate multilinear maps are actually slight generalizations of multilinear maps known as graded-encoding systems, since they allow the map e to be applied partially: instead of being applied to all n values at once, which would produce a value in the target set G_T, it is possible to apply e to only some of the values, which generates values in intermediate target sets. For example, for n = 3, it is possible to compute y = e(g_2, g_3) ∈ G_{T_2} and then e(g_1, y) ∈ G_T.
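A toy exponent-based sketch (an assumption for illustration only; real graded-encoding candidates are far more involved and insecure constructions like this one reveal all the exponents) shows this partial application for n = 3:

```python
p, q, g = 23, 11, 2              # toy parameters: g has order q = 11 modulo p = 23

def e3(x1, x2, x3):
    """Full toy 3-linear map: (x1, x2, x3) -> g^(x1*x2*x3) mod p."""
    return pow(g, (x1 * x2 * x3) % q, p)

def e_partial(x2, x3):
    """Partial application: encode the product x2*x3 at an intermediate level."""
    return (x2 * x3) % q

x1, x2, x3 = 3, 5, 7
y = e_partial(x2, x3)            # intermediate value, playing the role of G_{T_2}
assert pow(g, (x1 * y) % q, p) == e3(x1, x2, x3)   # finishing the map agrees
```

The point is structural: the intermediate value y can be combined later with the remaining argument, exactly the "graded" behaviour the candidate constructions provide.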
The three main candidates are GGH13, which is based on ideals of polynomial rings; CLT13, which is based on the approximate GCD problem and works over the integers, and hence is supposed to be easier to understand than the GGH13 multilinear map; and GGH15, which is based on graphs.
== References ==
Functional encryption (FE) is a generalization of public-key encryption in which possessing a secret key allows one to learn a function of what the ciphertext is encrypting.
== Formal definition ==
More precisely, a functional encryption scheme for a given functionality f consists of the following four algorithms:
(pk, msk) ← Setup(1^λ): creates a public key pk and a master secret key msk.
sk ← Keygen(msk, f): uses the master secret key to generate a new secret key sk for the function f.
c ← Enc(pk, x): uses the public key to encrypt a message x.
y ← Dec(sk, c): uses the secret key to calculate y = f(x), where x is the value that c encrypts.
The security of FE requires that any information an adversary learns from an encryption of x is revealed by f(x). Formally, this is defined by simulation.
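The four-algorithm interface can be mocked up without any real cryptography. In the sketch below the "encryption" is a fixed XOR and provides no security whatsoever, and the function key simply embeds the master secret — it only shows how Setup, Keygen, Enc and Dec fit together, with Dec returning f(x) rather than x:

```python
import os
import hashlib

def setup():
    """Mock Setup(1^lambda): returns (pk, msk)."""
    msk = os.urandom(16)                       # master secret key
    pk = hashlib.sha256(msk).hexdigest()       # stand-in public key (not real crypto)
    return pk, msk

def keygen(msk, f):
    """Mock Keygen(msk, f): a real scheme would NOT embed msk in sk."""
    return (msk, f)

def enc(pk, x: bytes):
    """Mock Enc(pk, x): fixed XOR 'keystream' -- zero security, interface only."""
    return bytes(b ^ 0x5A for b in x)

def dec(sk, c):
    """Mock Dec(sk, c): recovers x internally but releases only f(x)."""
    msk, f = sk
    x = bytes(b ^ 0x5A for b in c)
    return f(x)

pk, msk = setup()
sk_len = keygen(msk, len)          # key for the 'message length' functionality
c = enc(pk, b"hello world")
assert dec(sk_len, c) == 11        # the key holder learns f(x), not x itself
```

In a genuine FE scheme the security definition guarantees that sk_len reveals nothing about x beyond its length; this mock makes no such guarantee and is purely an interface illustration.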
== Applications ==
Functional encryption generalizes several existing primitives, including identity-based encryption (IBE) and attribute-based encryption (ABE). In the IBE case, define F(k, x) to be equal to x when k corresponds to an identity that is allowed to decrypt, and ⊥ otherwise. Similarly, in the ABE case, define F(k, x) = x when k encodes attributes with permission to decrypt, and ⊥ otherwise.
== History ==
Functional encryption was proposed by Amit Sahai and Brent Waters in 2005 and formalized by Dan Boneh, Amit Sahai and Brent Waters in 2010. Until recently, however, most instantiations of functional encryption supported only limited function classes such as Boolean formulae. In 2012, several researchers developed functional encryption schemes that support arbitrary functions.
== References ==
In mathematics, a set B of elements of a vector space V is called a basis (pl.: bases) if every element of V can be written in a unique way as a finite linear combination of elements of B. The coefficients of this linear combination are referred to as components or coordinates of the vector with respect to B. The elements of a basis are called basis vectors.
Equivalently, a set B is a basis if its elements are linearly independent and every element of V is a linear combination of elements of B. In other words, a basis is a linearly independent spanning set.
A vector space can have several bases; however all the bases have the same number of elements, called the dimension of the vector space.
This article deals mainly with finite-dimensional vector spaces. However, many of the principles are also valid for infinite-dimensional vector spaces.
Basis vectors find applications in the study of crystal structures and frames of reference.
== Definition ==
A basis B of a vector space V over a field F (such as the real numbers R or the complex numbers C) is a linearly independent subset of V that spans V. This means that a subset B of V is a basis if it satisfies the two following conditions:
linear independence
for every finite subset {v_1, …, v_m} of B, if c_1 v_1 + ⋯ + c_m v_m = 0 for some c_1, …, c_m in F, then c_1 = ⋯ = c_m = 0;
spanning property
for every vector v in V, one can choose a_1, …, a_n in F and v_1, …, v_n in B such that v = a_1 v_1 + ⋯ + a_n v_n.
The scalars a_i are called the coordinates of the vector v with respect to the basis B, and by the first property they are uniquely determined.
A vector space that has a finite basis is called finite-dimensional. In this case, the finite subset can be taken as B itself to check for linear independence in the above definition.
It is often convenient or even necessary to have an ordering on the basis vectors, for example, when discussing orientation, or when one considers the scalar coefficients of a vector with respect to a basis without referring explicitly to the basis elements. In this case, the ordering is necessary for associating each coefficient to the corresponding basis element. This ordering can be done by numbering the basis elements. In order to emphasize that an order has been chosen, one speaks of an ordered basis, which is therefore not simply an unstructured set, but a sequence, an indexed family, or similar; see § Ordered bases and coordinates below.
== Examples ==
The set R² of ordered pairs of real numbers is a vector space under the operations of component-wise addition (a, b) + (c, d) = (a + c, b + d) and scalar multiplication λ(a, b) = (λa, λb), where λ is any real number. A simple basis of this vector space consists of the two vectors e1 = (1, 0) and e2 = (0, 1). These vectors form a basis (called the standard basis) because any vector v = (a, b) of R² may be uniquely written as
v = a e1 + b e2.
Any other pair of linearly independent vectors of R², such as (1, 1) and (−1, 2), also forms a basis of R².
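Uniqueness of the coordinates can be checked directly: writing v = (a, b) in the basis (1, 1), (−1, 2) amounts to solving a 2×2 linear system, for example by Cramer's rule. A short sketch with exact rational arithmetic:

```python
from fractions import Fraction

def coords_in_basis(v, b1, b2):
    """Solve v = c1*b1 + c2*b2 exactly by Cramer's rule (2x2 case)."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    if det == 0:
        raise ValueError("b1, b2 are linearly dependent: not a basis")
    c1 = Fraction(v[0] * b2[1] - b2[0] * v[1], det)
    c2 = Fraction(b1[0] * v[1] - v[0] * b1[1], det)
    return c1, c2

# Coordinates of (3, 5) with respect to the basis (1, 1), (-1, 2)
c1, c2 = coords_in_basis((3, 5), (1, 1), (-1, 2))
assert (c1 * 1 + c2 * -1, c1 * 1 + c2 * 2) == (3, 5)   # reconstructs v uniquely
print(c1, c2)   # 11/3 2/3
```

The nonzero determinant is precisely the linear-independence condition; a zero determinant means the two vectors fail to form a basis.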
More generally, if F is a field, the set F^n of n-tuples of elements of F is a vector space for similarly defined addition and scalar multiplication. Let e_i = (0, …, 0, 1, 0, …, 0) be the n-tuple with all components equal to 0, except the ith, which is 1. Then e_1, …, e_n is a basis of F^n, which is called the standard basis of F^n.
A different flavor of example is given by polynomial rings. If F is a field, the collection F[X] of all polynomials in one indeterminate X with coefficients in F is an F-vector space. One basis for this space is the monomial basis B, consisting of all monomials:
B = {1, X, X², …}.
Any set of polynomials such that there is exactly one polynomial of each degree (such as the Bernstein basis polynomials or Chebyshev polynomials) is also a basis. (Such a set of polynomials is called a polynomial sequence.) But there are also many bases for F[X] that are not of this form.
== Properties ==
Many properties of finite bases result from the Steinitz exchange lemma, which states that, for any vector space V, given a finite spanning set S and a linearly independent set L of n elements of V, one may replace n well-chosen elements of S by the elements of L to get a spanning set containing L, having its other elements in S, and having the same number of elements as S.
Most properties resulting from the Steinitz exchange lemma remain true when there is no finite spanning set, but their proofs in the infinite case generally require the axiom of choice or a weaker form of it, such as the ultrafilter lemma.
If V is a vector space over a field F, then:
If L is a linearly independent subset of a spanning set S ⊆ V, then there is a basis B such that L ⊆ B ⊆ S.
V has a basis (this is the preceding property with L being the empty set, and S = V).
All bases of V have the same cardinality, which is called the dimension of V. This is the dimension theorem.
A generating set S is a basis of V if and only if it is minimal, that is, no proper subset of S is also a generating set of V.
A linearly independent set L is a basis if and only if it is maximal, that is, it is not a proper subset of any linearly independent set.
If V is a vector space of dimension n, then:
A subset of V with n elements is a basis if and only if it is linearly independent.
A subset of V with n elements is a basis if and only if it is a spanning set of V.
== Coordinates ==
Let V be a vector space of finite dimension n over a field F, and B = {b_1, …, b_n} be a basis of V. By definition of a basis, every v in V may be written, in a unique way, as
v = λ_1 b_1 + ⋯ + λ_n b_n,
where the coefficients λ_1, …, λ_n are scalars (that is, elements of F), which are called the coordinates of v over B. However, if one talks of the set of the coefficients, one loses the correspondence between coefficients and basis elements, and several vectors may have the same set of coefficients. For example, 3b_1 + 2b_2 and 2b_1 + 3b_2 have the same set of coefficients {2, 3}, and are different. It is therefore often convenient to work with an ordered basis; this is typically done by indexing the basis elements by the first natural numbers. Then, the coordinates of a vector form a sequence similarly indexed, and a vector is completely characterized by the sequence of coordinates. An ordered basis, especially when used in conjunction with an origin, is also called a coordinate frame or simply a frame (for example, a Cartesian frame or an affine frame).
Let, as usual, F^n be the set of the n-tuples of elements of F. This set is an F-vector space, with addition and scalar multiplication defined component-wise. The map
φ : (λ_1, …, λ_n) ↦ λ_1 b_1 + ⋯ + λ_n b_n
is a linear isomorphism from the vector space F^n onto V. In other words, F^n is the coordinate space of V, and the n-tuple φ⁻¹(v) is the coordinate vector of v.
The inverse image by φ of b_i is the n-tuple e_i all of whose components are 0, except the ith, which is 1. The e_i form an ordered basis of F^n, which is called its standard basis or canonical basis. The ordered basis B is the image by φ of the canonical basis of F^n.
It follows from what precedes that every ordered basis is the image by a linear isomorphism of the canonical basis of F^n, and that every linear isomorphism from F^n onto V may be defined as the isomorphism that maps the canonical basis of F^n onto a given ordered basis of V. In other words, it is equivalent to define an ordered basis of V, or a linear isomorphism from F^n onto V.
== Change of basis ==
Let V be a vector space of dimension n over a field F. Given two (ordered) bases B_old = (v_1, …, v_n) and B_new = (w_1, …, w_n) of V, it is often useful to express the coordinates of a vector x with respect to B_old in terms of the coordinates with respect to B_new. This can be done by the change-of-basis formula, which is described below. The subscripts "old" and "new" have been chosen because it is customary to refer to B_old and B_new as the old basis and the new basis, respectively. It is useful to describe the old coordinates in terms of the new ones because, in general, one has expressions involving the old coordinates, and obtaining equivalent expressions in terms of the new coordinates amounts to replacing the old coordinates by their expressions in terms of the new coordinates.
Typically, the new basis vectors are given by their coordinates over the old basis, that is,
w_j = Σ_{i=1}^n a_{i,j} v_i.
If (x_1, …, x_n) and (y_1, …, y_n) are the coordinates of a vector x over the old and the new basis respectively, the change-of-basis formula is
x_i = Σ_{j=1}^n a_{i,j} y_j,
for i = 1, …, n.
This formula may be concisely written in matrix notation. Let A be the matrix of the a_{i,j}, and let
X = (x_1, …, x_n)ᵀ and Y = (y_1, …, y_n)ᵀ
be the column vectors of the coordinates of x in the old and the new basis respectively; then the formula for changing coordinates is
X = AY.
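In code, with the new basis vectors' old-basis coordinates stored as the columns of A, the formula X = AY is a single matrix–vector product. A pure-Python sketch (the basis vectors are illustrative values):

```python
def mat_vec(A, y):
    """Compute x = A y for a square matrix A given as a list of rows."""
    return [sum(A[i][j] * y[j] for j in range(len(y))) for i in range(len(A))]

# Old basis: the standard basis of R^2. New basis vectors, in old coordinates:
# w1 = (1, 1) and w2 = (-1, 2), placed as the COLUMNS of A.
A = [[1, -1],
     [1,  2]]

y = [2, 3]                # coordinates of a vector x in the new basis
x = mat_vec(A, y)         # coordinates of the same vector in the old basis
assert x == [2*1 + 3*(-1), 2*1 + 3*2]    # i.e. [-1, 8], by the formula x_i = sum_j a_ij y_j
```

Note the direction of the map: A converts new coordinates to old ones; converting old to new requires the inverse of A.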
The formula can be proven by considering the decomposition of the vector x on the two bases: one has
x = Σ_{i=1}^n x_i v_i,
and
x = Σ_{j=1}^n y_j w_j = Σ_{j=1}^n y_j Σ_{i=1}^n a_{i,j} v_i = Σ_{i=1}^n (Σ_{j=1}^n a_{i,j} y_j) v_i.
The change-of-basis formula then results from the uniqueness of the decomposition of a vector over a basis, here B_old; that is,
x_i = Σ_{j=1}^n a_{i,j} y_j,
for i = 1, …, n.
== Related notions ==
=== Free module ===
If one replaces the field occurring in the definition of a vector space by a ring, one gets the definition of a module. For modules, linear independence and spanning sets are defined exactly as for vector spaces, although the term "generating set" is more commonly used than "spanning set".
Like for vector spaces, a basis of a module is a linearly independent subset that is also a generating set. A major difference with the theory of vector spaces is that not every module has a basis. A module that has a basis is called a free module. Free modules play a fundamental role in module theory, as they may be used for describing the structure of non-free modules through free resolutions.
A module over the integers is exactly the same thing as an abelian group. Thus a free module over the integers is also a free abelian group. Free abelian groups have specific properties that are not shared by modules over other rings. Specifically, every subgroup of a free abelian group is a free abelian group, and, if G is a subgroup of a finitely generated free abelian group H (that is an abelian group that has a finite basis), then there is a basis
{\displaystyle \mathbf {e} _{1},\ldots ,\mathbf {e} _{n}}
of H and an integer 0 ≤ k ≤ n such that
{\displaystyle a_{1}\mathbf {e} _{1},\ldots ,a_{k}\mathbf {e} _{k}}
is a basis of G, for some nonzero integers
{\displaystyle a_{1},\ldots ,a_{k}}
. For details, see Free abelian group § Subgroups.
=== Analysis ===
In the context of infinite-dimensional vector spaces over the real or complex numbers, the term Hamel basis (named after Georg Hamel) or algebraic basis can be used to refer to a basis as defined in this article. This is to make a distinction with other notions of "basis" that exist when infinite-dimensional vector spaces are endowed with extra structure. The most important alternatives are orthogonal bases on Hilbert spaces, Schauder bases, and Markushevich bases on normed linear spaces. In the case of the real numbers R viewed as a vector space over the field Q of rational numbers, Hamel bases are uncountable, and have specifically the cardinality of the continuum, which is the cardinal number
{\displaystyle 2^{\aleph _{0}}}, where {\displaystyle \aleph _{0}}
(aleph-nought) is the smallest infinite cardinal, the cardinal of the integers.
The common feature of the other notions is that they permit the taking of infinite linear combinations of the basis vectors in order to generate the space. This, of course, requires that infinite sums are meaningfully defined on these spaces, as is the case for topological vector spaces – a large class of vector spaces including e.g. Hilbert spaces, Banach spaces, or Fréchet spaces.
The preference of other types of bases for infinite-dimensional spaces is justified by the fact that the Hamel basis becomes "too big" in Banach spaces: If X is an infinite-dimensional normed vector space that is complete (i.e. X is a Banach space), then any Hamel basis of X is necessarily uncountable. This is a consequence of the Baire category theorem. The completeness as well as infinite dimension are crucial assumptions in the previous claim. Indeed, finite-dimensional spaces have by definition finite bases and there are infinite-dimensional (non-complete) normed spaces that have countable Hamel bases. Consider
{\displaystyle c_{00}}
, the space of the sequences
{\displaystyle x=(x_{n})}
of real numbers that have only finitely many non-zero elements, with the norm
{\textstyle \|x\|=\sup _{n}|x_{n}|}
. Its standard basis, consisting of the sequences having only one non-zero element, which is equal to 1, is a countable Hamel basis.
==== Example ====
In the study of Fourier series, one learns that the functions {1} ∪ { sin(nx), cos(nx) : n = 1, 2, 3, ... } are an "orthogonal basis" of the (real or complex) vector space of all (real or complex valued) functions on the interval [0, 2π] that are square-integrable on this interval, i.e., functions f satisfying
{\displaystyle \int _{0}^{2\pi }\left|f(x)\right|^{2}\,dx<\infty .}
The functions {1} ∪ { sin(nx), cos(nx) : n = 1, 2, 3, ... } are linearly independent, and every function f that is square-integrable on [0, 2π] is an "infinite linear combination" of them, in the sense that
{\displaystyle \lim _{n\to \infty }\int _{0}^{2\pi }{\biggl |}a_{0}+\sum _{k=1}^{n}\left(a_{k}\cos \left(kx\right)+b_{k}\sin \left(kx\right)\right)-f(x){\biggr |}^{2}dx=0}
for suitable (real or complex) coefficients ak, bk. But many square-integrable functions cannot be represented as finite linear combinations of these basis functions, which therefore do not comprise a Hamel basis. Every Hamel basis of this space is much bigger than this merely countably infinite set of functions. Hamel bases of spaces of this kind are typically not useful, whereas orthonormal bases of these spaces are essential in Fourier analysis.
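This mean-square convergence can be observed numerically. The sketch below is a hypothetical example using a square wave, whose nonzero Fourier coefficients b_k = 4/(πk) for odd k are standard; the sample count is an arbitrary choice. It checks that the integrated squared error of the partial sums shrinks as n grows.

```python
import math

# L2 convergence of partial Fourier sums for a square wave on [0, 2*pi]
# (illustrative example; coefficients b_k = 4/(pi*k) for odd k are standard).
def f(x):                       # square wave: +1 on [0, pi), -1 on [pi, 2*pi)
    return 1.0 if x % (2 * math.pi) < math.pi else -1.0

def partial_sum(x, n):          # S_n(x) using the known sine coefficients
    return sum(4 / (math.pi * k) * math.sin(k * x) for k in range(1, n + 1, 2))

def l2_error(n, samples=2000):  # midpoint-rule estimate of ∫|S_n - f|^2 dx
    h = 2 * math.pi / samples
    return sum((partial_sum((i + 0.5) * h, n) - f((i + 0.5) * h)) ** 2 * h
               for i in range(samples))

errs = [l2_error(n) for n in (1, 9, 81)]
assert errs[0] > errs[1] > errs[2]   # the L2 error shrinks as n grows
```

Although the pointwise error near the jumps never vanishes (the Gibbs phenomenon), the integral of the squared error still tends to zero, which is exactly the sense of convergence stated above.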
=== Geometry ===
The geometric notions of an affine space, projective space, convex set, and cone have related notions of basis. An affine basis for an n-dimensional affine space is
{\displaystyle n+1}
points in general linear position. A projective basis is
{\displaystyle n+2}
points in general position, in a projective space of dimension n. A convex basis of a polytope is the set of the vertices of its convex hull. A cone basis consists of one point per edge of a polygonal cone. See also Hilbert basis (linear programming).
=== Random basis ===
For a probability distribution in Rn with a probability density function, such as the equidistribution in an n-dimensional ball with respect to Lebesgue measure, it can be shown that n randomly and independently chosen vectors will form a basis with probability one. This is because n linearly dependent vectors x1, ..., xn in Rn must satisfy the equation det[x1 ⋯ xn] = 0 (zero determinant of the matrix with columns xi), and the set of zeros of a non-trivial polynomial has zero measure. This observation has led to techniques for approximating random bases.
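The zero-determinant criterion is easy to test numerically. The sketch below (pure Python, with a 3-dimensional case and an illustrative seed) checks that three randomly chosen vectors are linearly independent and hence form a basis of R^3.

```python
import random

# Random vectors form a basis with probability 1: verify via a nonzero
# determinant (3x3 case; the seed is an illustrative choice).
def det3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

rng = random.Random(7)
vectors = [[rng.random() for _ in range(3)] for _ in range(3)]
assert det3(vectors) != 0    # the three random vectors form a basis of R^3
```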
Linear dependence or exact orthogonality is difficult to check numerically, so the notion of ε-orthogonality is used instead. For spaces with an inner product, x is ε-orthogonal to y if
{\displaystyle \left|\left\langle x,y\right\rangle \right|/\left(\left\|x\right\|\left\|y\right\|\right)<\varepsilon }
(that is, the absolute value of the cosine of the angle between x and y is less than ε).
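In code, ε-orthogonality is a one-line check on the normalized inner product. The sketch below (dimension, seed, and ε are illustrative choices) shows two independent Gaussian vectors in high dimension being 0.1-orthogonal, as the concentration described below predicts.

```python
import math
import random

# epsilon-orthogonality of two independent random vectors in high dimension
# (dimension 2000, seed 0, and epsilon = 0.1 are illustrative choices).
def cos_angle(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

random.seed(0)
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]
# For i.i.d. Gaussian vectors the cosine concentrates around 0 with
# standard deviation roughly 1/sqrt(n), so this holds with high probability.
assert abs(cos_angle(x, y)) < 0.1   # x and y are 0.1-orthogonal
```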
In high dimensions, two independent random vectors are with high probability almost orthogonal, and the number of independent random vectors that are all pairwise almost orthogonal with a given high probability grows exponentially with the dimension. More precisely, consider the equidistribution in an n-dimensional ball. Choose N independent random vectors from the ball (they are independent and identically distributed). Let θ be a small positive number. Then the N random vectors are all pairwise ε-orthogonal with probability 1 − θ. This N grows exponentially with the dimension n, and
{\displaystyle N\gg n}
for sufficiently big n. This property of random bases is a manifestation of the so-called measure concentration phenomenon.
The figure (right) illustrates the distribution of lengths N of pairwise almost orthogonal chains of vectors that are independently randomly sampled from the n-dimensional cube [−1, 1]n, as a function of the dimension n. A point is first randomly selected in the cube. The second point is randomly chosen in the same cube. If the angle between the two vectors is within π/2 ± 0.037π/2, the second vector is retained. At the next step a new vector is generated in the same hypercube, and its angles with the previously generated vectors are evaluated. If these angles are within π/2 ± 0.037π/2, the vector is retained. The process is repeated until the chain of almost orthogonality breaks, and the number of such pairwise almost orthogonal vectors (the length of the chain) is recorded. For each n, 20 pairwise almost orthogonal chains were constructed numerically, and the distribution of their lengths is presented.
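The chain-growing experiment described above can be sketched as follows; this is a simplified reimplementation with illustrative dimensions, seed, and trial counts, not the original study's code.

```python
import math
import random

# Sketch of the almost-orthogonal-chain experiment: grow a chain of vectors
# sampled from [-1, 1]^n until a new vector fails the angle test
# (pi/2 +/- 0.037*pi/2); dimensions and trial counts are illustrative.
def norm(v):
    return math.sqrt(sum(a * a for a in v))

def chain_length(n, rng, tol=0.037):
    bound = math.sin(tol * math.pi / 2)   # |cos| bound for the angle window
    chain = []
    while True:
        v = [rng.uniform(-1, 1) for _ in range(n)]
        if all(abs(sum(a * b for a, b in zip(v, w))) / (norm(v) * norm(w)) < bound
               for w in chain):
            chain.append(v)
        else:
            return len(chain)

rng = random.Random(1)

def avg_length(n, trials=20):
    return sum(chain_length(n, rng) for _ in range(trials)) / trials

# Chains of pairwise almost orthogonal vectors are longer in high dimension.
assert avg_length(500) > avg_length(5)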
== Proof that every vector space has a basis ==
Let V be any vector space over some field F. Let X be the set of all linearly independent subsets of V.
The set X is nonempty since the empty set is an independent subset of V, and it is partially ordered by inclusion, which is denoted, as usual, by ⊆.
Let Y be a subset of X that is totally ordered by ⊆, and let LY be the union of all the elements of Y (which are themselves certain subsets of V).
Since (Y, ⊆) is totally ordered, every finite subset of LY is a subset of an element of Y, which is a linearly independent subset of V, and hence LY is linearly independent. Thus LY is an element of X. Therefore, LY is an upper bound for Y in (X, ⊆): it is an element of X, that contains every element of Y.
As X is nonempty, and every totally ordered subset of (X, ⊆) has an upper bound in X, Zorn's lemma asserts that X has a maximal element. In other words, there exists some element Lmax of X satisfying the condition that whenever Lmax ⊆ L for some element L of X, then L = Lmax.
It remains to prove that Lmax is a basis of V. Since Lmax belongs to X, we already know that Lmax is a linearly independent subset of V.
If there were some vector w of V that is not in the span of Lmax, then w would not be an element of Lmax either. Let Lw = Lmax ∪ {w}. This set is an element of X, that is, it is a linearly independent subset of V (because w is not in the span of Lmax, and Lmax is independent). As Lmax ⊆ Lw, and Lmax ≠ Lw (because Lw contains the vector w that is not contained in Lmax), this contradicts the maximality of Lmax. Thus this shows that Lmax spans V.
Hence Lmax is linearly independent and spans V. It is thus a basis of V, and this proves that every vector space has a basis.
This proof relies on Zorn's lemma, which is equivalent to the axiom of choice. Conversely, it has been proved that if every vector space has a basis, then the axiom of choice is true. Thus the two assertions are equivalent.
== See also ==
Basis of a matroid
Basis of a linear program
Coordinate system
Change of basis – Coordinate change in linear algebra
Frame of a vector space – Similar to the basis of a vector space, but not necessarily linearly independent
Spherical basis – Basis used to express spherical tensors
== Notes ==
== References ==
=== General references ===
Blass, Andreas (1984), "Existence of bases implies the axiom of choice" (PDF), Axiomatic set theory, Contemporary Mathematics volume 31, Providence, R.I.: American Mathematical Society, pp. 31–33, ISBN 978-0-8218-5026-8, MR 0763890
Brown, William A. (1991), Matrices and vector spaces, New York: M. Dekker, ISBN 978-0-8247-8419-5
Lang, Serge (1987), Linear algebra, Berlin, New York: Springer-Verlag, ISBN 978-0-387-96412-6
=== Historical references ===
Banach, Stefan (1922), "Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales (On operations in abstract sets and their application to integral equations)" (PDF), Fundamenta Mathematicae (in French), 3: 133–181, doi:10.4064/fm-3-1-133-181, ISSN 0016-2736
Bolzano, Bernard (1804), Betrachtungen über einige Gegenstände der Elementargeometrie (Considerations of some aspects of elementary geometry) (in German)
Bourbaki, Nicolas (1969), Éléments d'histoire des mathématiques (Elements of history of mathematics) (in French), Paris: Hermann
Dorier, Jean-Luc (1995), "A general outline of the genesis of vector space theory", Historia Mathematica, 22 (3): 227–261, doi:10.1006/hmat.1995.1024, MR 1347828
Fourier, Jean Baptiste Joseph (1822), Théorie analytique de la chaleur (in French), Chez Firmin Didot, père et fils
Grassmann, Hermann (1844), Die Lineale Ausdehnungslehre - Ein neuer Zweig der Mathematik (in German), reprint: Hermann Grassmann. Translated by Lloyd C. Kannenberg. (2000), Extension Theory, Kannenberg, L.C., Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2031-5
Hamel, Georg (1905), "Eine Basis aller Zahlen und die unstetigen Lösungen der Funktionalgleichung f(x+y)=f(x)+f(y)", Mathematische Annalen (in German), 60 (3), Leipzig: 459–462, doi:10.1007/BF01457624, S2CID 120063569
Hamilton, William Rowan (1853), Lectures on Quaternions, Royal Irish Academy
Möbius, August Ferdinand (1827), Der Barycentrische Calcul : ein neues Hülfsmittel zur analytischen Behandlung der Geometrie (Barycentric calculus: a new utility for an analytic treatment of geometry) (in German), archived from the original on 2009-04-12
Moore, Gregory H. (1995), "The axiomatization of linear algebra: 1875–1940", Historia Mathematica, 22 (3): 262–303, doi:10.1006/hmat.1995.1025
Peano, Giuseppe (1888), Calcolo Geometrico secondo l'Ausdehnungslehre di H. Grassmann preceduto dalle Operazioni della Logica Deduttiva (in Italian), Turin
== External links ==
Instructional videos from Khan Academy
Introduction to bases of subspaces
Proof that any subspace basis has same number of elements
"Linear combinations, span, and basis vectors". Essence of linear algebra. August 6, 2016. Archived from the original on 2021-11-17 – via YouTube.
"Basis", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Nettle is a cryptographic library designed to fit easily into a wide range of toolkits and applications. It began in 2001 as a collection of low-level cryptography functions from lsh. Since June 2009 (version 2.0), Nettle has been a GNU package.
== Features ==
Since version 3, Nettle provides:
the AES block cipher (a subset of Rijndael), with assembly optimizations for x86 and SPARC
the ARCFOUR (also known as RC4) stream cipher, with x86 and SPARC assembly
the ARCTWO (also known as RC2) block cipher
the BLOWFISH, CAMELLIA (with x86 and x86_64 assembly optimizations), CAST-128, DES, and 3DES block ciphers
the ChaCha stream cipher, with assembly for x86_64
the GOSTHASH94, MD2, MD4, and MD5 (with x86 assembly) digests
the PBKDF2 key derivation function
the POLY1305 (with assembly for x86_64) and UMAC message authentication codes
the RIPEMD160 digest
the Salsa20 stream cipher, with assembly for x86_64 and ARM
the SERPENT block cipher, with assembly for x86_64
the SHA-1 digest, with x86, x86_64, and ARM assembly
the SHA-2 (SHA-224, SHA-256, SHA-384, and SHA-512) digests
SHA-3 (a subset of the Keccak digest family)
the TWOFISH block cipher
the RSA, DSA, and ECDSA public-key algorithms
the Yarrow PRNG
Version 3.1 introduced support for Curve25519 and EdDSA operations. The public-key algorithms use GMP.
Nettle is used by GnuTLS.
== Licence and motivation ==
An API that fits one application well may not work well in a different context, resulting in a proliferation of cryptographic libraries designed for particular applications. Nettle is an attempt to avoid this problem by doing one thing (low-level cryptography) and providing a simple and general interface to it. In particular, Nettle does not do algorithm selection, memory allocation, or any I/O. Thus Nettle is intended to provide a core cryptography library upon which numerous application- and context-specific interfaces can be built. The code, test cases, benchmarks, documentation, etc. of these interfaces can then be shared without replicating Nettle's cryptographic code.
Nettle is primarily licensed under a dual-licence scheme comprising the GNU General Public License version 2 or later and the GNU Lesser General Public License version 3 or later. A few individual files are licensed under more permissive licences or are in the public domain. The copyright notices at the top of the library's source files precisely define the licence status of particular files.
The Nettle manual "is in the public domain" and may be used and reproduced freely.
== See also ==
Botan
Bouncy Castle
Cryptlib
Libgcrypt
Crypto++
Comparison of cryptography libraries
== References ==
A cryptographic hash function (CHF) is a hash algorithm (a map of an arbitrary binary string to a binary string with a fixed size of {\displaystyle n} bits) that has special properties desirable for a cryptographic application:
the probability of a particular {\displaystyle n}-bit output result (hash value) for a random input string ("message") is {\displaystyle 2^{-n}} (as for any good hash), so the hash value can be used as a representative of the message;
finding an input string that matches a given hash value (a pre-image) is infeasible, assuming all input strings are equally likely. The resistance to such search is quantified as security strength: a cryptographic hash with {\displaystyle n} bits of hash value is expected to have a preimage resistance strength of {\displaystyle n} bits, unless the space of possible input values is significantly smaller than {\displaystyle 2^{n}} (a practical example can be found in § Attacks on hashed passwords);
a second preimage resistance strength, with the same expectations, refers to a similar problem of finding a second message that matches the given hash value when one message is already known;
finding any pair of different messages that yield the same hash value (a collision) is also infeasible: a cryptographic hash is expected to have a collision resistance strength of {\displaystyle n/2} bits (lower due to the birthday paradox).
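For illustration, SHA-256 (one widely deployed CHF) produces a fixed 256-bit digest for inputs of any length; the sketch below uses Python's hashlib, with illustrative input strings.

```python
import hashlib

# SHA-256 as a concrete CHF: fixed 256-bit output regardless of input length
# (the input strings are illustrative).
h1 = hashlib.sha256(b"short message").hexdigest()
h2 = hashlib.sha256(b"x" * 1_000_000).hexdigest()
assert len(h1) == len(h2) == 64          # 64 hex characters = 256 bits
# A small change in the input yields an unrelated-looking digest.
assert hashlib.sha256(b"short messagf").hexdigest() != h1
```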
Cryptographic hash functions have many information-security applications, notably in digital signatures, message authentication codes (MACs), and other forms of authentication. They can also be used as ordinary hash functions, to index data in hash tables, for fingerprinting, to detect duplicate data or uniquely identify files, and as checksums to detect accidental data corruption. Indeed, in information-security contexts, cryptographic hash values are sometimes called (digital) fingerprints, checksums, (message) digests, or just hash values, even though all these terms stand for more general functions with rather different properties and purposes.
Non-cryptographic hash functions are used in hash tables and to detect accidental errors; their constructions frequently provide no resistance to a deliberate attack. For example, a denial-of-service attack on hash tables is possible if the collisions are easy to find, as in the case of linear cyclic redundancy check (CRC) functions.
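The ease of finding CRC collisions can be demonstrated directly: because CRC-32 has only a 32-bit output, a birthday search over random strings collides after roughly 2^16 attempts. The sketch below uses Python's zlib; the string length and seed are illustrative choices.

```python
import random
import zlib

# Birthday search for a CRC-32 collision over random 8-byte strings
# (sizes and seed are illustrative). This is why CRCs must not be used
# where an attacker can choose the inputs.
rng = random.Random(42)
seen = {}
while True:
    s = rng.randbytes(8)
    c = zlib.crc32(s)
    if c in seen and seen[c] != s:
        break                      # two distinct strings, same CRC-32
    seen[c] = s
assert s != seen[c]
assert zlib.crc32(s) == zlib.crc32(seen[c])
```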
== Properties ==
Most cryptographic hash functions are designed to take a string of any length as input and produce a fixed-length hash value.
A cryptographic hash function must be able to withstand all known types of cryptanalytic attack. In theoretical cryptography, the security level of a cryptographic hash function has been defined using the following properties:
Pre-image resistance
Given a hash value h, it should be difficult to find any message m such that h = hash(m). This concept is related to that of a one-way function. Functions that lack this property are vulnerable to preimage attacks.
Second pre-image resistance
Given an input m1, it should be difficult to find a different input m2 such that hash(m1) = hash(m2). This property is sometimes referred to as weak collision resistance. Functions that lack this property are vulnerable to second-preimage attacks.
Collision resistance
It should be difficult to find two different messages m1 and m2 such that hash(m1) = hash(m2). Such a pair is called a cryptographic hash collision. This property is sometimes referred to as strong collision resistance. It requires a hash value at least twice as long as that required for pre-image resistance; otherwise, collisions may be found by a birthday attack.
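The birthday bound is easy to observe experimentally on a truncated digest; truncating SHA-256 to 24 bits (an illustrative choice) makes a collision findable in roughly 2^12 attempts, while the full 256-bit function remains collision-free in practice.

```python
import hashlib
from itertools import count

# Birthday attack on a truncated hash: collide the first 24 bits of SHA-256
# (truncation width and message encoding are illustrative choices).
seen = {}
for i in count():
    prefix = hashlib.sha256(str(i).encode()).digest()[:3]   # 24-bit "hash"
    if prefix in seen:
        m1, m2 = seen[prefix], i        # two distinct messages, same prefix
        break
    seen[prefix] = i

assert m1 != m2
assert hashlib.sha256(str(m1).encode()).digest()[:3] == \
       hashlib.sha256(str(m2).encode()).digest()[:3]
```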
Collision resistance implies second pre-image resistance but does not imply pre-image resistance. The weaker assumption is always preferred in theoretical cryptography, but in practice, a hash function that is only second pre-image resistant is considered insecure and is therefore not recommended for real applications.
Informally, these properties mean that a malicious adversary cannot replace or modify the input data without changing its digest. Thus, if two strings have the same digest, one can be very confident that they are identical. Second pre-image resistance prevents an attacker from crafting a document with the same hash as a document the attacker cannot control. Collision resistance prevents an attacker from creating two distinct documents with the same hash.
A function meeting these criteria may still have undesirable properties. Currently, popular cryptographic hash functions are vulnerable to length-extension attacks: given hash(m) and len(m) but not m, by choosing a suitable m′ an attacker can calculate hash(m ∥ m′), where ∥ denotes concatenation. This property can be used to break naive authentication schemes based on hash functions. The HMAC construction works around these problems.
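The HMAC construction mentioned above is available directly in Python's hmac module; the sketch below (key and message are illustrative) shows tag generation and constant-time verification.

```python
import hashlib
import hmac

# HMAC sidesteps length-extension attacks by mixing the key into the hash
# twice (the key and message here are illustrative).
key = b"secret-key"
tag = hmac.new(key, b"message", hashlib.sha256).hexdigest()

# Verification should use a constant-time comparison:
assert hmac.compare_digest(
    tag, hmac.new(key, b"message", hashlib.sha256).hexdigest())
assert not hmac.compare_digest(
    tag, hmac.new(key, b"tampered", hashlib.sha256).hexdigest())
```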
In practice, collision resistance is insufficient for many practical uses. In addition to collision resistance, it should be impossible for an adversary to find two messages with substantially similar digests; or to infer any useful information about the data, given only its digest. In particular, a hash function should behave as much as possible like a random function (often called a random oracle in proofs of security) while still being deterministic and efficiently computable. This rules out functions like the SWIFFT function, which can be rigorously proven to be collision-resistant assuming that certain problems on ideal lattices are computationally difficult, but, as a linear function, does not satisfy these additional properties.
Checksum algorithms, such as CRC32 and other cyclic redundancy checks, are designed to meet much weaker requirements and are generally unsuitable as cryptographic hash functions. For example, a CRC was used for message integrity in the WEP encryption standard, but an attack was readily discovered, which exploited the linearity of the checksum.
=== Degree of difficulty ===
In cryptographic practice, "difficult" generally means "almost certainly beyond the reach of any adversary who must be prevented from breaking the system for as long as the security of the system is deemed important". The meaning of the term is therefore somewhat dependent on the application since the effort that a malicious agent may put into the task is usually proportional to their expected gain. However, since the needed effort usually multiplies with the digest length, even a thousand-fold advantage in processing power can be neutralized by adding a dozen bits to the latter.
For messages selected from a limited set of messages, for example passwords or other short messages, it can be feasible to invert a hash by trying all possible messages in the set. Because cryptographic hash functions are typically designed to be computed quickly, special key derivation functions that require greater computing resources have been developed that make such brute-force attacks more difficult.
In some theoretical analyses "difficult" has a specific mathematical meaning, such as "not solvable in asymptotic polynomial time". Such interpretations of difficulty are important in the study of provably secure cryptographic hash functions but do not usually have a strong connection to practical security. For example, an exponential-time algorithm can sometimes still be fast enough to make a feasible attack. Conversely, a polynomial-time algorithm (e.g., one that requires n^20 steps for n-digit keys) may be too slow for any practical use.
== Illustration ==
An illustration of the potential use of a cryptographic hash is as follows: Alice poses a tough math problem to Bob and claims that she has solved it. Bob would like to try it himself, but would yet like to be sure that Alice is not bluffing. Therefore, Alice writes down her solution, computes its hash, and tells Bob the hash value (whilst keeping the solution secret). Then, when Bob comes up with the solution himself a few days later, Alice can prove that she had the solution earlier by revealing it and having Bob hash it and check that it matches the hash value given to him before. (This is an example of a simple commitment scheme; in actual practice, Alice and Bob will often be computer programs, and the secret would be something less easily spoofed than a claimed puzzle solution.)
== Applications ==
=== Verifying the integrity of messages and files ===
An important application of secure hashes is the verification of message integrity. Comparing message digests (hash digests over the message) calculated before, and after, transmission can determine whether any changes have been made to the message or file.
MD5, SHA-1, or SHA-2 hash digests are sometimes published on websites or forums to allow verification of integrity for downloaded files, including files retrieved using file sharing such as mirroring. This practice establishes a chain of trust as long as the hashes are posted on a trusted site – usually the originating site – authenticated by HTTPS. Using a cryptographic hash and a chain of trust detects malicious changes to the file. Non-cryptographic error-detecting codes such as cyclic redundancy checks only protect against non-malicious alterations of the file, since an intentional spoof can readily be crafted to have the colliding code value.
=== Signature generation and verification ===
Almost all digital signature schemes require a cryptographic hash to be calculated over the message. This allows the signature calculation to be performed on the relatively small, statically sized hash digest. The message is considered authentic if the signature verification succeeds given the signature and recalculated hash digest over the message. So the message integrity property of the cryptographic hash is used to create secure and efficient digital signature schemes.
=== Password verification ===
Password verification commonly relies on cryptographic hashes. Storing all user passwords as cleartext can result in a massive security breach if the password file is compromised. One way to reduce this danger is to only store the hash digest of each password. To authenticate a user, the password presented by the user is hashed and compared with the stored hash. A password reset method is required when password hashing is performed; original passwords cannot be recalculated from the stored hash value.
However, use of standard cryptographic hash functions, such as the SHA series, is no longer considered safe for password storage. These algorithms are designed to be computed quickly, so if the hashed values are compromised, it is possible to try guessed passwords at high rates. Common graphics processing units can try billions of possible passwords each second. Password hash functions that perform key stretching – such as PBKDF2, scrypt or Argon2 – commonly use repeated invocations of a cryptographic hash to increase the time (and in some cases computer memory) required to perform brute-force attacks on stored password hash digests. For details, see § Attacks on hashed passwords.
A password hash also requires the use of a large random, non-secret salt value that can be stored with the password hash. The salt is hashed with the password, altering the password hash mapping for each password, thereby making it infeasible for an adversary to store tables of precomputed hash values to which the password hash digest can be compared or to test a large number of purloined hash values in parallel.
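Salting and key stretching combine in PBKDF2, which is available in Python's hashlib; in the sketch below the password, salt length, and iteration count are illustrative choices (production code should follow current guidance on iteration counts).

```python
import hashlib
import os

# Salted, stretched password hashing with PBKDF2-HMAC-SHA256
# (password, salt length, and iteration count are illustrative).
salt = os.urandom(16)                 # random, non-secret, stored with the hash
digest = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)

# Same password + same salt -> same digest (this is how verification works);
# a different salt gives an unrelated digest, defeating precomputed tables.
assert digest == hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)
assert digest != hashlib.pbkdf2_hmac("sha256", b"correct horse",
                                     os.urandom(16), 100_000)
```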
=== Proof-of-work ===
A proof-of-work system (or protocol, or function) is an economic measure to deter denial-of-service attacks and other service abuses such as spam on a network by requiring some work from the service requester, usually meaning processing time by a computer. A key feature of these schemes is their asymmetry: the work must be moderately hard (but feasible) on the requester side but easy to check for the service provider. One popular system – used in Bitcoin mining and Hashcash – uses partial hash inversions to prove that work was done, to unlock a mining reward in Bitcoin, and as a good-will token to send an e-mail in Hashcash. The sender is required to find a message whose hash value begins with a number of zero bits. The average work that the sender needs to perform in order to find a valid message is exponential in the number of zero bits required in the hash value, while the recipient can verify the validity of the message by executing a single hash function. For instance, in Hashcash, a sender is asked to generate a header whose 160-bit SHA-1 hash value has the first 20 bits as zeros. The sender will, on average, have to try 2^19 times to find a valid header.
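A toy partial-hash-inversion scheme of this kind can be sketched in a few lines; here the header, the use of SHA-256, and the 12-bit difficulty are illustrative choices kept small so the search finishes instantly (real systems like Hashcash use harder targets).

```python
import hashlib
from itertools import count

# Toy proof-of-work: find a nonce whose SHA-256 digest starts with
# `zero_bits` zero bits (header and difficulty are illustrative).
def mine(header: bytes, zero_bits: int) -> int:
    for nonce in count():
        digest = hashlib.sha256(header + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - zero_bits) == 0:
            return nonce

nonce = mine(b"hello:", 12)           # ~2**12 tries on average
digest = hashlib.sha256(b"hello:" + str(nonce).encode()).digest()
# Verification is a single hash: check the 12 leading zero bits.
assert int.from_bytes(digest, "big") >> (256 - 12) == 0
```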
=== File or data identifier ===
A message digest can also serve as a means of reliably identifying a file; several source code management systems, including Git, Mercurial and Monotone, use the sha1sum of various types of content (file content, directory trees, ancestry information, etc.) to uniquely identify them. Hashes are used to identify files on peer-to-peer filesharing networks. For example, in an ed2k link, an MD4-variant hash is combined with the file size, providing sufficient information for locating file sources, downloading the file, and verifying its contents. Magnet links are another example. Such file hashes are often the top hash of a hash list or a hash tree, which allows for additional benefits.
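Content identification by digest reduces to a single hash call over a canonical encoding. The sketch below reproduces the well-known format of historical Git blob ids ("blob <size>\0" prepended to the content, hashed with SHA-1); note that newer Git repositories can instead use SHA-256, and the sample contents are illustrative.

```python
import hashlib

# Git-style content identifier for a blob: SHA-1 over "blob <size>\0" + data
# (historical Git format; sample contents are illustrative).
def git_blob_id(data: bytes) -> str:
    header = b"blob " + str(len(data)).encode() + b"\0"
    return hashlib.sha1(header + data).hexdigest()

assert git_blob_id(b"hello\n") == git_blob_id(b"hello\n")  # same content, same id
assert git_blob_id(b"hello\n") != git_blob_id(b"hello!\n")
```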
One of the main applications of a hash function is to allow the fast look-up of data in a hash table. Being hash functions of a particular kind, cryptographic hash functions lend themselves well to this application too.
However, compared with standard hash functions, cryptographic hash functions tend to be much more expensive computationally. For this reason, they tend to be used in contexts where users must protect themselves against the possibility of forgery (the creation of data with the same digest as the expected data) by potentially malicious participants – for example, open-source applications with multiple download mirrors, where a malicious file could be substituted with the same appearance to the user, or an authentic file could be modified to contain malicious data.
==== Content-addressable storage ====
== Hash functions based on block ciphers ==
There are several methods to use a block cipher to build a cryptographic hash function, specifically a one-way compression function.
The methods resemble the block cipher modes of operation usually used for encryption. Many well-known hash functions, including MD4, MD5, SHA-1 and SHA-2, are built from block-cipher-like components designed for the purpose, with feedback to ensure that the resulting function is not invertible. SHA-3 finalists included functions with block-cipher-like components (e.g., Skein, BLAKE) though the function finally selected, Keccak, was built on a cryptographic sponge instead.
A standard block cipher such as AES can be used in place of these custom block ciphers; that might be useful when an embedded system needs to implement both encryption and hashing with minimal code size or hardware area. However, that approach can have costs in efficiency and security. The ciphers in hash functions are built for hashing: they use large keys and blocks, can efficiently change keys every block, and have been designed and vetted for resistance to related-key attacks. General-purpose ciphers tend to have different design goals. In particular, AES has key and block sizes that make it nontrivial to use to generate long hash values; AES encryption becomes less efficient when the key changes each block; and related-key attacks make it potentially less secure for use in a hash function than for encryption.
== Hash function design ==
=== Merkle–Damgård construction ===
A hash function must be able to process an arbitrary-length message into a fixed-length output. This can be achieved by breaking the input up into a series of equally sized blocks, and operating on them in sequence using a one-way compression function. The compression function can either be specially designed for hashing or be built from a block cipher. A hash function built with the Merkle–Damgård construction is as resistant to collisions as is its compression function; any collision for the full hash function can be traced back to a collision in the compression function.
The last block processed should also be unambiguously length padded; this padding, known as Merkle–Damgård strengthening, is crucial to the security of the construction. Most common classical hash functions, including SHA-1 and MD5, take this form.
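The chaining structure can be sketched in a few lines of Python. This is a toy illustration only: the compression function borrows SHA-256 as a stand-in for a purpose-built primitive, but the block splitting and unambiguous length padding mirror the construction described above:

```python
import hashlib

BLOCK_SIZE = 64  # bytes per message block (illustrative choice)

def compress(state: bytes, block: bytes) -> bytes:
    # Stand-in one-way compression function: a real design uses a
    # purpose-built primitive; SHA-256 here only shows the chaining.
    return hashlib.sha256(state + block).digest()

def md_hash(message: bytes, iv: bytes = b"\x00" * 32) -> bytes:
    # Merkle-Damgard strengthening: append 0x80, zero padding, then
    # the 8-byte big-endian message length, so padding is unambiguous.
    padded = message + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % BLOCK_SIZE)
    padded += len(message).to_bytes(8, "big")

    state = iv
    for i in range(0, len(padded), BLOCK_SIZE):
        state = compress(state, padded[i:i + BLOCK_SIZE])
    return state
```

Because each block feeds the chaining value into the next compression call, a collision in the full hash implies a collision somewhere in the compression function, as noted above.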
=== Wide pipe versus narrow pipe ===
A straightforward application of the Merkle–Damgård construction, where the size of the hash output is equal to the internal state size (between each compression step), results in a narrow-pipe hash design. This design suffers from many inherent flaws, including length-extension attacks, multicollisions, long-message attacks, and generate-and-paste attacks, and it cannot be parallelized. As a result, modern hash functions are built on wide-pipe constructions that have a larger internal state size – ranging from tweaks of the Merkle–Damgård construction to new constructions such as the sponge construction and the HAIFA construction. None of the entrants in the NIST hash function competition used a classical Merkle–Damgård construction.
Meanwhile, truncating the output of a longer hash, as is done in SHA-512/256, also defeats many of these attacks.
== Use in building other cryptographic primitives ==
Hash functions can be used to build other cryptographic primitives. For these other primitives to be cryptographically secure, care must be taken to build them correctly.
Message authentication codes (MACs) (also called keyed hash functions) are often built from hash functions. HMAC is such a MAC.
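Python's standard library implements HMAC directly; a minimal sketch (the key and message values here are hypothetical):

```python
import hmac
import hashlib

key = b"shared-secret-key"        # hypothetical shared key
message = b"amount=100&to=alice"  # hypothetical message

# The sender computes an HMAC-SHA256 tag over the message...
tag = hmac.new(key, message, hashlib.sha256).digest()

# ...and the receiver verifies it with a constant-time comparison,
# which avoids leaking information through timing differences.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
```

Note that HMAC's nested construction, rather than a naive hash of key plus message, is what makes it safe to build from Merkle–Damgård hashes.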
Just as block ciphers can be used to build hash functions, hash functions can be used to build block ciphers. Luby–Rackoff constructions using hash functions can be provably secure if the underlying hash function is secure. Also, many hash functions (including SHA-1 and SHA-2) are built by using a special-purpose block cipher in a Davies–Meyer or other construction. That cipher can also be used in a conventional mode of operation, without the same security guarantees; examples include SHACAL, BEAR, and LION.
Pseudorandom number generators (PRNGs) can be built using hash functions. This is done by combining a (secret) random seed with a counter and hashing it.
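A minimal sketch of this seed-plus-counter approach in Python, using SHA-256 (illustrative only; production code should use a vetted CSPRNG):

```python
import hashlib

def hash_prng(seed: bytes, n_bytes: int) -> bytes:
    # Stretch a secret seed into a longer pseudorandom stream by
    # hashing seed || counter for successive counter values.
    out = b""
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n_bytes]
```

The output is deterministic for a given seed, so the seed must stay secret and be freshly random for each use.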
Some hash functions, such as Skein, Keccak, and RadioGatún, output an arbitrarily long stream and can be used as a stream cipher, and stream ciphers can also be built from fixed-length digest hash functions. Often this is done by first building a cryptographically secure pseudorandom number generator and then using its stream of random bytes as keystream. SEAL is a stream cipher that uses SHA-1 to generate internal tables, which are then used in a keystream generator more or less unrelated to the hash algorithm. SEAL is not guaranteed to be as strong (or weak) as SHA-1. Similarly, the key expansion of the HC-128 and HC-256 stream ciphers makes heavy use of the SHA-256 hash function.
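The arbitrary-length-output idea can be illustrated with SHAKE-256 from Python's hashlib: derive a keystream of exactly the message length and XOR it in. This is a sketch, not a vetted cipher; it omits authentication, and the nonce must never repeat for a given key:

```python
import hashlib

def xor_stream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # SHAKE-256 is an extendable-output function, so it can produce a
    # keystream of whatever length the message requires.
    keystream = hashlib.shake_256(key + nonce).digest(len(data))
    return bytes(d ^ k for d, k in zip(data, keystream))

# XOR is its own inverse, so the same function encrypts and decrypts.
ct = xor_stream(b"key", b"nonce-1", b"attack at dawn")
assert xor_stream(b"key", b"nonce-1", ct) == b"attack at dawn"
```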
== Concatenation ==
Concatenating outputs from multiple hash functions provides collision resistance as good as the strongest of the algorithms included in the concatenated result. For example, older versions of Transport Layer Security (TLS) and Secure Sockets Layer (SSL) used concatenated MD5 and SHA-1 sums. This ensures that a method to find collisions in one of the hash functions does not defeat data protected by both hash functions.
For Merkle–Damgård construction hash functions, the concatenated function is as collision-resistant as its strongest component, but not more collision-resistant. Antoine Joux observed that 2-collisions lead to n-collisions: if it is feasible for an attacker to find two messages with the same MD5 hash, then they can find as many additional messages with that same MD5 hash as they desire, with no greater difficulty. Among those n messages with the same MD5 hash, there is likely to be a collision in SHA-1. The additional work needed to find the SHA-1 collision (beyond the exponential birthday search) requires only polynomial time.
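The MD5 || SHA-1 concatenation used by older TLS/SSL versions is easy to reproduce with Python's hashlib:

```python
import hashlib

def md5_sha1_concat(data: bytes) -> bytes:
    # The 16-byte MD5 digest followed by the 20-byte SHA-1 digest,
    # giving a 36-byte combined value.
    return hashlib.md5(data).digest() + hashlib.sha1(data).digest()
```

As Joux's observation shows, this combined value is only about as collision-resistant as its strongest component, not the sum of both.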
== Cryptographic hash algorithms ==
There are many cryptographic hash algorithms; this section lists a few algorithms that are referenced relatively often. A more extensive list can be found on the page containing a comparison of cryptographic hash functions.
=== MD5 ===
MD5 was designed by Ronald Rivest in 1991 to replace an earlier hash function, MD4, and was specified in 1992 as RFC 1321. Collisions against MD5 can be calculated within seconds, which makes the algorithm unsuitable for most use cases where a cryptographic hash is required. MD5 produces a digest of 128 bits (16 bytes).
=== SHA-1 ===
SHA-1 was developed as part of the U.S. Government's Capstone project. The original specification – now commonly called SHA-0 – of the algorithm was published in 1993 under the title Secure Hash Standard, FIPS PUB 180, by U.S. government standards agency NIST (National Institute of Standards and Technology). It was withdrawn by the NSA shortly after publication and was superseded by the revised version, published in 1995 in FIPS PUB 180-1 and commonly designated SHA-1. Collisions against the full SHA-1 algorithm can be produced using the SHAttered attack, and the hash function should be considered broken. SHA-1 produces a hash digest of 160 bits (20 bytes).
Documents may refer to SHA-1 as just "SHA", even though this may conflict with the other Secure Hash Algorithms such as SHA-0, SHA-2, and SHA-3.
=== RIPEMD-160 ===
RIPEMD (RACE Integrity Primitives Evaluation Message Digest) is a family of cryptographic hash functions developed in Leuven, Belgium, by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel at the COSIC research group at the Katholieke Universiteit Leuven, and first published in 1996. RIPEMD was based upon the design principles used in MD4 and is similar in performance to the more popular SHA-1. RIPEMD-160 has, however, not been broken. As the name implies, RIPEMD-160 produces a hash digest of 160 bits (20 bytes).
=== Whirlpool ===
Whirlpool is a cryptographic hash function designed by Vincent Rijmen and Paulo S. L. M. Barreto, who first described it in 2000. Whirlpool is based on a substantially modified version of the Advanced Encryption Standard (AES). Whirlpool produces a hash digest of 512 bits (64 bytes).
=== SHA-2 ===
SHA-2 (Secure Hash Algorithm 2) is a set of cryptographic hash functions designed by the United States National Security Agency (NSA), first published in 2001. They are built using the Merkle–Damgård structure, from a one-way compression function itself built using the Davies–Meyer structure from a specialized block cipher.
SHA-2 basically consists of two hash algorithms: SHA-256 and SHA-512. SHA-224 is a variant of SHA-256 with different starting values and truncated output. SHA-384 and the lesser-known SHA-512/224 and SHA-512/256 are all variants of SHA-512. SHA-512 is more secure than SHA-256 and is commonly faster than SHA-256 on 64-bit machines such as AMD64.
The output size in bits is given by the extension to the "SHA" name, so SHA-224 has an output size of 224 bits (28 bytes); SHA-256, 32 bytes; SHA-384, 48 bytes; and SHA-512, 64 bytes.
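This naming convention can be checked with Python's hashlib, whose `digest_size` attribute is reported in bytes:

```python
import hashlib

# The numeric suffix of each SHA-2 name is its output size in bits.
for name in ("sha224", "sha256", "sha384", "sha512"):
    bits = hashlib.new(name).digest_size * 8
    assert str(bits) in name
    print(name, bits)
```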
=== SHA-3 ===
SHA-3 (Secure Hash Algorithm 3) was released by NIST on August 5, 2015. SHA-3 is a subset of the broader cryptographic primitive family Keccak. The Keccak algorithm is the work of Guido Bertoni, Joan Daemen, Michael Peeters, and Gilles Van Assche. Keccak is based on a sponge construction, which can also be used to build other cryptographic primitives such as a stream cipher. SHA-3 provides the same output sizes as SHA-2: 224, 256, 384, and 512 bits.
Configurable output sizes can also be obtained using the SHAKE-128 and SHAKE-256 functions. Here the -128 and -256 extensions to the name imply the security strength of the function rather than the output size in bits.
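Python's hashlib exposes both SHAKE functions; the caller picks the output length, and for an extendable-output function a shorter output is always a prefix of a longer one:

```python
import hashlib

short = hashlib.shake_128(b"message").hexdigest(16)   # 16 bytes of output
longer = hashlib.shake_128(b"message").hexdigest(64)  # 64 bytes of output

# XOF outputs of the same input are nested: the short digest is a
# prefix of the long one.
assert longer.startswith(short)
```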
=== BLAKE2 ===
BLAKE2, an improved version of BLAKE, was announced on December 21, 2012. It was created by Jean-Philippe Aumasson, Samuel Neves, Zooko Wilcox-O'Hearn, and Christian Winnerlein with the goal of replacing the widely used but broken MD5 and SHA-1 algorithms. When run on 64-bit x64 and ARM architectures, BLAKE2b is faster than SHA-3, SHA-2, SHA-1, and MD5. Although BLAKE and BLAKE2 have not been standardized as SHA-3 has, BLAKE2 has been used in many protocols including the Argon2 password hash, for the high efficiency that it offers on modern CPUs. As BLAKE was a candidate for SHA-3, BLAKE and BLAKE2 both offer the same output sizes as SHA-3 – including a configurable output size.
=== BLAKE3 ===
BLAKE3, an improved version of BLAKE2, was announced on January 9, 2020. It was created by Jack O'Connor, Jean-Philippe Aumasson, Samuel Neves, and Zooko Wilcox-O'Hearn. BLAKE3 is a single algorithm, in contrast to BLAKE and BLAKE2, which are algorithm families with multiple variants. The BLAKE3 compression function is closely based on that of BLAKE2s, with the biggest difference being that the number of rounds is reduced from 10 to 7. Internally, BLAKE3 is a Merkle tree, and it supports higher degrees of parallelism than BLAKE2.
== Attacks on cryptographic hash algorithms ==
There is a long list of cryptographic hash functions but many have been found to be vulnerable and should not be used. For instance, NIST selected 51 hash functions as candidates for round 1 of the SHA-3 hash competition, of which 10 were considered broken and 16 showed significant weaknesses and therefore did not make it to the next round; more information can be found on the main article about the NIST hash function competitions.
Even if a hash function has never been broken, a successful attack against a weakened variant may undermine the experts' confidence. For instance, in August 2004 collisions were found in several then-popular hash functions, including MD5. These weaknesses called into question the security of stronger algorithms derived from the weak hash functions – in particular, SHA-1 (a strengthened version of SHA-0), RIPEMD-128, and RIPEMD-160 (both strengthened versions of RIPEMD).
On August 12, 2004, Joux, Carribault, Lemuel, and Jalby announced a collision for the full SHA-0 algorithm. Joux et al. accomplished this using a generalization of the Chabaud and Joux attack. They found that the collision had complexity 2^51 and took about 80,000 CPU hours on a supercomputer with 256 Itanium 2 processors – equivalent to 13 days of full-time use of the supercomputer.
In February 2005, an attack on SHA-1 was reported that would find collisions in about 2^69 hashing operations, rather than the 2^80 expected for a 160-bit hash function. In August 2005, another attack on SHA-1 was reported that would find collisions in 2^63 operations. Other theoretical weaknesses of SHA-1 have been known, and in February 2017 Google announced a collision in SHA-1. Security researchers recommend that new applications avoid these problems by using later members of the SHA family, such as SHA-2, or by using techniques such as randomized hashing that do not require collision resistance.
A successful, practical attack broke MD5 (used within certificates for Transport Layer Security) in 2008.
Many cryptographic hashes are based on the Merkle–Damgård construction. All cryptographic hashes that directly use the full output of a Merkle–Damgård construction are vulnerable to length extension attacks. This makes MD5, SHA-1, RIPEMD-160, Whirlpool, and the SHA-256/SHA-512 hash algorithms all vulnerable to this specific attack. SHA-3, BLAKE2, BLAKE3, and the truncated SHA-2 variants are not vulnerable to this type of attack.
== Attacks on hashed passwords ==
Rather than store plain user passwords, controlled-access systems frequently store the hash of each user's password in a file or database. When someone requests access, the password they submit is hashed and compared with the stored value. If the database is stolen (an all-too-frequent occurrence), the thief will only have the hash values, not the passwords.
Passwords may still be retrieved by an attacker from the hashes, because most people choose passwords in predictable ways. Lists of common passwords are widely circulated and many passwords are short enough that even all possible combinations may be tested if calculation of the hash does not take too much time.
The use of a cryptographic salt prevents some attacks, such as building files of precomputed hash values, e.g. rainbow tables. But searches on the order of 100 billion tests per second are possible with high-end graphics processors, making direct attacks possible even with salt.
The United States National Institute of Standards and Technology recommends storing passwords using special hashes called key derivation functions (KDFs) that have been created to slow brute force searches. Slow hashes include PBKDF2, bcrypt, scrypt, Argon2, Balloon, and some recent modes of Unix crypt. For KDFs that perform multiple hashes to slow execution, NIST recommends an iteration count of 10,000 or more.
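A minimal password-storage sketch using PBKDF2 from Python's standard library (the password, salt size, and iteration count here are illustrative; the iteration count should be tuned to the target hardware):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)   # a unique per-user salt defeats precomputed tables
iterations = 100_000    # well above NIST's 10,000 minimum

# PBKDF2-HMAC-SHA256 deliberately repeats the hash to slow brute force.
stored = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

def verify(candidate: bytes) -> bool:
    # Re-derive with the stored salt and iteration count and compare.
    return hashlib.pbkdf2_hmac("sha256", candidate, salt, iterations) == stored
```

In a real system, the salt and iteration count are stored alongside the derived key so that verification can repeat the same computation.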
== See also ==
== References ==
=== Citations ===
=== Sources ===
Menezes, Alfred J.; van Oorschot, Paul C.; Vanstone, Scott A. (7 December 2018). "Hash functions". Handbook of Applied Cryptography. CRC Press. pp. 33–. ISBN 978-0-429-88132-9.
Aumasson, Jean-Philippe (6 November 2017). Serious Cryptography: A Practical Introduction to Modern Encryption. No Starch Press. ISBN 978-1-59327-826-7. OCLC 1012843116.
== External links ==
Paar, Christof; Pelzl, Jan (2009). "11: Hash Functions". Understanding Cryptography, A Textbook for Students and Practitioners. Springer. Archived from the original on 2012-12-08. (companion web site contains online cryptography course that covers hash functions)
"The ECRYPT Hash Function Website".
Buldas, A. (2011). "Series of mini-lectures about cryptographic hash functions". Archived from the original on 2012-12-06.
Open-source Python-based application with GUI used to verify downloads.
Group-based cryptography is the use of groups to construct cryptographic primitives. A group is a very general algebraic object, and most cryptographic schemes use groups in some way. In particular, Diffie–Hellman key exchange uses finite cyclic groups, so the term group-based cryptography refers mostly to cryptographic protocols that use infinite non-abelian groups such as braid groups.
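As a point of contrast with the infinite non-abelian groups studied here, a minimal sketch of Diffie–Hellman over a finite cyclic group (the multiplicative group mod a prime) might look like the following; the 64-bit prime is far too small for real use and is chosen only so the example runs instantly:

```python
import secrets

# Toy Diffie-Hellman over the multiplicative group mod a prime.
p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a prime (too small for real use)
g = 5

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)   # Alice publishes g^a mod p
B = pow(g, b, p)   # Bob publishes g^b mod p

# Both sides derive the same shared secret g^(a*b) mod p.
assert pow(B, a, p) == pow(A, b, p)
```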
== Examples ==
Shpilrain–Zapata public-key protocols
Magyarik–Wagner public key protocol
Anshel–Anshel–Goldfeld key exchange
Ko–Lee et al. key exchange protocol
== See also ==
Non-commutative cryptography
== References ==
== Further reading ==
Paul, Kamakhya; Goswami, Pinkimani; Singh, Madan Mohan (2022). "Algebraic Braid Group Public Key Cryptography". Jnanabha. 52 (2): 218–223. ISSN 0304-9892 (print); ISSN 2455-7463 (online).
== External links ==
Cryptography and Braid Groups page (archived version 7/17/2017)
The GeForce RTX 40 series is a family of consumer graphics processing units (GPUs) developed by Nvidia as part of its GeForce line of graphics cards, succeeding the GeForce RTX 30 series. The series was announced on September 20, 2022, at the GPU Technology Conference, and launched on October 12, 2022, starting with its flagship model, the RTX 4090. It was succeeded by the GeForce RTX 50 series, which debuted on January 30, 2025, after being previously announced at CES.
The cards are based on Nvidia's Ada Lovelace architecture and feature Nvidia RTX's third-generation RT cores for hardware-accelerated real-time ray tracing, and fourth-generation deep-learning-focused Tensor Cores.
== Architecture ==
Architectural highlights of the Ada Lovelace architecture include the following:
CUDA Compute Capability 8.9
TSMC 4N process (5 nm custom designed for Nvidia) – not to be confused with N4
Fourth-generation Tensor Cores with FP8, FP16, bfloat16, TensorFloat-32 (TF32) and sparsity acceleration
Third-generation Ray Tracing Cores, along with concurrent ray tracing, shading and compute
Shader Execution Reordering – needs to be enabled by the developer
Dual NVENC with 8K 10-bit 120FPS AV1 fixed function hardware encoding
A new generation of Optical Flow Accelerator to aid DLSS 3.0 intermediate AI-based frame generation
No NVLink support
DisplayPort 1.4a and HDMI 2.1 display connections
Double-precision (FP64) performance of Ada Lovelace silicon is 1/64 of single-precision (FP32) performance.
== Release ==
The RTX 4090 was released as the first model of the series on October 12, 2022, for $1,599 US, and the 16GB RTX 4080 was released on November 16, 2022, for $1,199 US. An RTX 4080 12GB was announced in September 2022, originally to be priced at $899 US; however, following some controversy in the media, it was "unlaunched" by Nvidia. On January 5, 2023, that model was released as the RTX 4070 Ti for $799 US. The RTX 4070 was then released on April 13, 2023, at $599 US MSRP. The RTX 4060 Ti was released on May 24, 2023, at $399 US, and the RTX 4060 on June 29, 2023, at $299 US. An RTX 4060 Ti 16GB followed on July 18, 2023, at $499 US. On January 8, 2024, Nvidia released the RTX 4070 Super at $599, the RTX 4070 Ti Super at $799, and the RTX 4080 Super at $999. These video cards were launched at higher specs and lower prices than their original counterparts. In the same vein, production of the RTX 4080 and RTX 4070 Ti was stopped in favor of the Super series, while the 4070 remained. In August 2024, Nvidia, citing the need "to improve supply and availability", introduced a variant of the RTX 4070 card with GDDR6 running at 20Gbps while all the other specs remain the same.
== Reception ==
=== RTX 4090 ===
Upon release, the RTX 4090's performance received praise from reviewers, with Tom's Hardware saying "it now ranks among the best graphics cards". A review by PC Gamer gave it 83/100, calling it "a hell of an introduction to the sort of extreme performance Ada can deliver when given a long leash". Tom Warren in a review for The Verge said that the RTX 4090 is "a beast of a graphics card that marks a new era for PC gaming". John Loeffler from GamesRadar wrote that the RTX 4090 offers "an incredible gen-on-gen performance improvement over the RTX 3090", "even putting the Nvidia GeForce RTX 3090 Ti to shame" and being "too powerful for a lot of gamers to really take advantage of". German online magazine golem.de described the RTX 4090 as the first GPU capable of native 4K gaming with ray tracing, as with the predecessor "one often preferred to forego ray tracing in favor of native resolution, especially in 4K." Aside from gaming performance, PCWorld's review highlighted the RTX 4090's benefits for content creators and streamers with its 24GB VRAM and AV1 encoding ability and was positive towards the Founders Edition's quiet operation and cooling ability. According to Joanna Nelius from USA Today, "The RTX 4090 drives such high frame rates that it would be a waste to have anything other than a 4K monitor, especially one with a refresh rate under 144Hz."
However, it also received criticism for its value proposition given that it begins at $1,599. Analysis by TechSpot found that the RTX 4090's value at 1440p was worse than the RTX 3090 Ti and that the RTX 4090 did not make much sense for 1440p as it was limited by CPU bottlenecks. Power consumption was another point of criticism for the RTX 4090. The RTX 4090 has a TDP of 450W compared to the 350W of its last generation equivalent. However, Jarred Walton of Tom's Hardware noted that the RTX 4090 has the same 450W TDP as the RTX 3090 Ti while delivering much higher performance with that power consumption.
Aftermarket AIB RTX 4090 variants received criticism in particular for their massive cooler sizes with some models being 4 PCIe slots tall. This made it difficult to fit an RTX 4090 into many mainstream PC cases. YouTube reviewer JayzTwoCents showed an Asus ROG Strix RTX 4090 model being comparable in size to an entire PlayStation 5 console. Another comparison showed Nvidia's RTX 4090 Founders Edition next to the Xbox Series X for size. It was theorized that a reason for the massive coolers on some models is that the RTX 4090 was originally designed to have a TDP up to 600W before it was reduced to its official 450W TDP.
=== RTX 4080 ===
The RTX 4080 received more mixed reviews when compared to the RTX 4090. TechSpot gave the RTX 4080 score of 80/100. Antony Leather of Forbes found that the RTX 4080 consistently performed better than the RTX 3090 Ti. The GPU's power efficiency was positively received with Digital Trends finding that the GPU had an average power draw of 271W despite its rated 320W TDP.
The RTX 4080's $1199 price received criticism for its dramatic increase from that of the RTX 3080. In his review for RockPaperShotgun, James Archer wrote that the RTX 4080 "produces sizeable gains on the RTX 3080, though they're not exactly proportional to the price rise". In another critique, RockPaperShotgun highlighted that AIB models can significantly exceed the base $1199 Founders Edition price, creating further value considerations. The Asus ROG Strix AIB model they reviewed came in at $1550 which is $50 less than the RTX 4090 Founders Edition. Tom Warren of The Verge recommended waiting to see what AMD could deliver in performance and value with their RDNA 3 GPUs. AMD's direct competitor, the Radeon RX 7900 XTX, comes in at $999 compared to the $1199 price of the RTX 4080.
The RTX 4080 received criticism for reusing the RTX 4090's massive 4-slot coolers, which are larger than required to cool the RTX 4080's 320W TDP; a smaller cooler would have been sufficient. The RTX 3080 and RTX 3080 Ti, with their respective 320W and 350W TDPs, maintained 2-slot coolers, while the 320W RTX 4080 has a 3-slot cooler on the Founders Edition and 4 slots on many AIB models.
It was reported that RTX 4080 sales were quite weak compared to those of the RTX 4090, which had quickly sold out during its launch a month earlier. The global cost-of-living crisis and the RTX 4080's generational pricing increase have been suggested as major contributing factors for the poor sales numbers.
=== RTX 4070 Ti ===
Following its relaunch as the RTX 4070 Ti, the GPU received mixed reviews. While its good thermal, 1080p, and 1440p performance were praised, its 4K performance is considered weak for the high $800 price tag.
Jacob Roach from Digital Trends gave the RTX 4070 Ti a 6/10, criticizing the lower memory bandwidth compared to the RTX 4080, its worse 4K performance, and the lack of a Founder's Edition model, with fears that the board partner cards would sell for over the $800 price tag, although he did praise the thermal performance and efficiency of his unit.
Michael Justin Allen Sexton from PCMag reviewed the ZOTAC RTX 4070 Ti Amp Extreme Airo model, giving it a 3.5/5, criticizing the large size of the card and it being only $100 less than AMD's direct competitor, the Radeon RX 7900 XT, which comes in at $899 compared to the $799 price of the RTX 4070 Ti.
It was reported that RTX 4070 Ti sales were quite weak, similar to the RTX 4080 at launch.
=== RTX 4070 ===
At launch, Jacob Roach from Digital Trends gave the RTX 4070 a 4.5/5, praising its efficiency, the fact the Founder's Edition only took up two slots, and its good GPU performance at 1440p and 4K. However, following the launch of the AMD Radeon RX 7800 XT, AMD's direct competitor to the RTX 4070, Monica J. White from Digital Trends compared the latter to the RTX 4070, and said she would recommend the 7800 XT over the RTX 4070 because of better rasterization performance at 1440p, the fact that the 7800 XT had 16GB of VRAM compared to 12GB on the RTX 4070, and that the 7800 XT had a $100 cheaper price.
After the launch of AMD's Radeon RX 7800 XT, the direct competitor to the RTX 4070, which was released on September 6, 2023, the price of some RTX 4070 cards was lowered to $550 and the official MSRP was lowered to $550 following the launch of the Super series.
=== RTX 4060 Ti (8GB & 16GB) ===
The RTX 4060 Ti 8GB released in late May 2023. Many reviewers and customers denounced Nvidia for the lack of appropriate video memory for the release of the card, pointing out that the RTX 3060 of the previous generation was released with 12GB of VRAM. Furthermore, many reviews mentioned that an 8GB card was simply not enough for modern standards; for example, Digital Trends said "It makes sense for the upcoming RTX 4060, which is launching in July with 8GB of VRAM for $300, but not for the $400 RTX 4060 Ti. At this price and performance, it should have 16GB of memory, or at the very least, a larger bus size." A few weeks later, prompted by consumer backlash, Nvidia announced it would release the RTX 4060 Ti 16GB in July 2023.
Despite that, the release of the 16GB variant itself was met with suspicion. Nvidia did not send samples for review, including to big names such as Gamers Nexus, Hardware Unboxed, and Linus Tech Tips. It was later found that AIB partners were also prevented, presumably by Nvidia, from giving cards out for review. According to GamesRadar: "Coming in at $499, the Nvidia RTX 4060 Ti 16GB is arguably a hard sell. Normally, we'd test whether the GPU offers bang for buck, but it looks like reviews aren't going to be a thing. In a way, that sort of makes sense, as it should perform fairly similarly to the existing 8GB model, with the added benefit of more VRAM."
YouTube reviewers Hardware Unboxed criticized both models of the RTX 4060 Ti, calling the 8GB of VRAM model "laughably bad at $400" and "frankly, we don't even need to test the RTX 4060 Ti 8GB to tell you it's a disgustingly poor value product...". They criticized its 8GB of VRAM, which led to poor 1% low frame rates at 1080p and unusable 1% low frame rates at 1440p in games such as The Last of Us Part I and The Callisto Protocol both using the ultra quality preset. A Plague Tale: Requiem was also unusable at 1080p not only using the ultra quality preset with ray tracing enabled, but using the high quality preset with ray tracing enabled as it suffered similar unusable 1% low performance. While Halo Infinite and Forspoken did not have poor 1% low performance at 1080p, they instead sometimes failed to load textures with the 8GB RTX 4060 Ti due to a lack of VRAM. These were not an issue with the 16GB RTX 4060 Ti, however, the 16GB RTX 4060 Ti was criticized for costing $500 to "fix" the issues with running out of VRAM, which was $100 more than the 8GB RTX 4060 Ti for otherwise identical performance outside of VRAM limitations.
=== RTX 4060 ===
Jacob Roach from Digital Trends gave the RTX 4060 a 3/5, criticizing the poor performance of the GPU for the price, its low 8GB of VRAM capacity, the slow memory interface, and its performance at 1440p, although its efficiency was praised. He stated: "In the past, the RTX 4060 would have been classified as a workhorse GPU... The RTX 4060 falls short of filling that post, offering solid generational improvements but middling competitive performance, and relying on DLSS 3 and ray tracing to find its value."
== Products ==
=== Desktop ===
RTX 4060 and RTX 4060 Ti, as well as a later revision of the RTX 4070, use GDDR6 video memory. All other cards use GDDR6X video memory manufactured by Micron.
RTX 4060 and RTX 4060 Ti have a limited 8 lane PCI Express 4.0 bus interface. All other cards support the full 16 lanes.
A variant of the RTX 4070 with 20Gbps GDDR6 VRAM was introduced in August 2024; it is slightly slower (on average 1% slower according to a single review so far), with all other specs identical.
=== Mobile ===
All SKUs feature GDDR6 memory.
== Controversies ==
=== RTX 4080 12GB ===
When the 12GB variant of the RTX 4080 was announced, numerous publications, prominent YouTubers, reviewers, and the community criticized Nvidia for marketing it as an RTX 4080, given the equivalent performance of previous XX80-class cards when compared with other cards of the same series, and the large gap in specifications and performance compared to the 16GB variant. The criticism focused on Nvidia's naming scheme - two 4080 models were to be offered, an "RTX 4080 12GB" and an "RTX 4080 16GB" - the obvious interpretation was that the only difference between the two products would be a difference in VRAM capacity. However, unlike other examples of Nvidia GPUs with differing memory configurations within the same class which have typically been very close in processing performance, the RTX 4080 12GB uses a completely different processing unit and memory architecture: the 4080 12GB would use the AD104 die, which features 27% fewer CUDA cores than the 16GB variant's AD103 die. It would also have a reduced TDP (285W - the 16GB variant was configured to draw up to 320W), and a cut-down 192-bit memory bus. In prior generations (e.g., the GeForce GTX 10, RTX 20 and RTX 30 series), a 192-bit bus width had been reserved for XX60-class cards and below. These changes made the card up to 30% slower than the RTX 4080 16GB in memory-agnostic performance, while being priced significantly higher than previous XX70-class cards ($900 vs. $500 for the RTX 3070, approximately 80% more expensive).
On October 14, 2022, Nvidia announced that due to the confusion caused by the naming scheme, it would be "unlaunching"—i.e. postponing the launch of—the RTX 4080 12GB, with the RTX 4080 16GB's launch remaining unaffected.
On January 3, 2023, Nvidia reintroduced the RTX 4080 12GB as the RTX 4070 Ti during CES 2023 and reduced its list price by $100. The 4070 Ti's production was stopped with the introduction of the 4070 Ti Super.
=== 12VHPWR connector failures ===
=== RTX 4090 U.S. export restrictions ===
The United States Department of Commerce began enacting restrictions on the GeForce RTX 4090 for export to certain countries in 2023. This was targeted mainly towards China as an attempt to halt its AI development. However as a result of this, the market price of the RTX 4090 rose by 25% as China began to stockpile 4090s. This led to Nvidia releasing a China-specific model of the RTX 4090 called the RTX 4090D (D standing for "Dragon"). The RTX 4090D features a shaved down AD102 die with 14592 CUDA cores, down from 16384 cores of the original RTX 4090 SKU.
The restrictions began on November 17, 2023, effectively banning the RTX 4090 from China and other countries on the export restricted list.
== Gallery ==
== See also ==
GeForce GTX 10 series
GeForce GTX 16 series
GeForce RTX 20 series
GeForce RTX 30 series
GeForce RTX 50 series
Ada Lovelace (microarchitecture)
Nvidia Workstation GPUs (formerly Quadro)
Nvidia Data Center GPUs (formerly Tesla)
List of Nvidia graphics processing units
Radeon RX 7000 series – competing AMD series released in a similar time-frame
== References ==
== External links ==
Official website
Nvidia GeForce RTX 40 series' announcement
NVIDIA ADA GPU ARCHITECTURE whitepaper
Media related to Nvidia GeForce 40 series video cards at Wikimedia Commons | Wikipedia/GeForce_40_series |
In cryptography, key size or key length refers to the number of bits in a key used by a cryptographic algorithm (such as a cipher).
Key length defines the upper-bound on an algorithm's security (i.e. a logarithmic measure of the fastest known attack against an algorithm), because the security of all algorithms can be violated by brute-force attacks. Ideally, the lower-bound on an algorithm's security is by design equal to the key length (that is, the algorithm's design does not detract from the degree of security inherent in the key length).
Most symmetric-key algorithms are designed to have security equal to their key length. However, after design, a new attack might be discovered. For instance, Triple DES was designed to have a 168-bit key, but an attack of complexity 2^112 is now known (i.e. Triple DES now has only 112 bits of security, and of the 168 bits in the key the attack has rendered 56 'ineffective' towards security). Nevertheless, as long as the security (understood as "the amount of effort it would take to gain access") is sufficient for a particular application, then it does not matter if key length and security coincide. This is important for asymmetric-key algorithms, because no such algorithm is known to satisfy this property; elliptic curve cryptography comes the closest with an effective security of roughly half its key length.
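To make the 2^112 figure concrete, a rough back-of-the-envelope calculation (assuming a hypothetical attacker testing 10^12 keys per second) shows how far such a work factor is from feasibility:

```python
trials_per_second = 10**12                 # hypothetical, very fast attacker
seconds = 2**112 / trials_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"about {years:.1e} years")          # on the order of 10**14 years
```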
== Significance ==
Keys are used to control the operation of a cipher so that only the correct key can convert encrypted text (ciphertext) to plaintext. All commonly-used ciphers are based on publicly known algorithms or are open source and so it is only the difficulty of obtaining the key that determines security of the system, provided that there is no analytic attack (i.e. a "structural weakness" in the algorithms or protocols used), and assuming that the key is not otherwise available (such as via theft, extortion, or compromise of computer systems). The widely accepted notion that the security of the system should depend on the key alone has been explicitly formulated by Auguste Kerckhoffs (in the 1880s) and Claude Shannon (in the 1940s); the statements are known as Kerckhoffs' principle and Shannon's Maxim respectively.
A key should, therefore, be large enough that a brute-force attack (possible against any encryption algorithm) is infeasible – i.e. would take too long and/or would take too much memory to execute. Shannon's work on information theory showed that to achieve so-called 'perfect secrecy', the key length must be at least as large as the message and only used once (this algorithm is called the one-time pad). In light of this, and the practical difficulty of managing such long keys, modern cryptographic practice has discarded the notion of perfect secrecy as a requirement for encryption, and instead focuses on computational security, under which the computational requirements of breaking an encrypted text must be infeasible for an attacker.
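Shannon's condition for perfect secrecy can be illustrated with a minimal sketch of the one-time pad; this is a toy illustration (variable and function names are invented for the example), not a production implementation:

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    # The key must be as long as the message and used only once.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))  # truly fresh, message-length key

ciphertext = otp_xor(msg, key)
# XOR with the same key recovers the plaintext exactly.
assert otp_xor(ciphertext, key) == msg
```

Managing message-length single-use keys is what makes this impractical, which is why modern practice targets computational security instead.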
== Key size and encryption system ==
Encryption systems are often grouped into families. Common families include symmetric systems (e.g. AES) and asymmetric systems (e.g. RSA and Elliptic-curve cryptography [ECC]). They may be grouped according to the central algorithm used (e.g. ECC and Feistel ciphers). Because each of these has a different level of cryptographic complexity, it is usual to have different key sizes for the same level of security, depending upon the algorithm used. For example, the security available with a 1024-bit key using asymmetric RSA is considered approximately equal in security to an 80-bit key in a symmetric algorithm.
The actual degree of security achieved over time varies, as more computational power and more powerful mathematical analytic methods become available. For this reason, cryptologists tend to look for indicators that an algorithm or key length shows signs of potential vulnerability, in order to move to longer key sizes or more difficult algorithms. For example, as of May 2007, a 1039-bit integer was factored with the special number field sieve using 400 computers over 11 months. The factored number was of a special form; the special number field sieve cannot be used on RSA keys. The computation is roughly equivalent to breaking a 700-bit RSA key. However, this might be an advance warning that 1024-bit RSA keys used in secure online commerce should be deprecated, since they may become breakable in the foreseeable future. Cryptography professor Arjen Lenstra observed that "Last time, it took nine years for us to generalize from a special to a nonspecial, hard-to-factor number" and when asked whether 1024-bit RSA keys are dead, said: "The answer to that question is an unqualified yes."
The 2015 Logjam attack revealed additional dangers in using Diffie-Hellman key exchange when only one or a few common 1024-bit or smaller prime moduli are in use. This practice, somewhat common at the time, allows large amounts of communications to be compromised at the expense of attacking a small number of primes.
== Brute-force attack ==
Even if a symmetric cipher is currently unbreakable by exploiting structural weaknesses in its algorithm, it may be possible to run through the entire space of keys in what is known as a brute-force attack. Because longer symmetric keys require exponentially more work to brute force search, a sufficiently long symmetric key makes this line of attack impractical.
With a key of length n bits, there are 2^n possible keys. This number grows very rapidly as n increases. The large number of operations (2^128) required to try all possible 128-bit keys is widely considered out of reach for conventional digital computing techniques for the foreseeable future. However, a quantum computer capable of running Grover's algorithm would be able to search the possible keys more efficiently: a suitably sized quantum computer would reduce a 128-bit key down to 64-bit security, roughly a DES equivalent. This is one of the reasons why AES supports key lengths of 256 bits and longer.
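The arithmetic above can be sketched directly: the classical brute-force work factor is 2^n, and Grover's algorithm reduces an n-bit key to roughly n/2 bits of effective security.

```python
def classical_keyspace(n_bits: int) -> int:
    # Worst-case brute force must try all 2^n keys.
    return 2 ** n_bits

def grover_effective_bits(n_bits: int) -> int:
    # Grover search needs ~2^(n/2) invocations, i.e. n/2 bits of security.
    return n_bits // 2

assert classical_keyspace(128) == 2 ** 128
assert grover_effective_bits(128) == 64    # roughly DES-equivalent
assert grover_effective_bits(256) == 128   # why 256-bit keys are recommended
```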
== Symmetric algorithm key lengths ==
IBM's Lucifer cipher was selected in 1974 as the base for what would become the Data Encryption Standard. Lucifer's key length was reduced from 128 bits to 56 bits, which the NSA and NIST argued was sufficient for non-governmental protection at the time. The NSA has major computing resources and a large budget; some cryptographers including Whitfield Diffie and Martin Hellman complained that this made the cipher so weak that NSA computers would be able to break a DES key in a day through brute force parallel computing. The NSA disputed this, claiming that brute-forcing DES would take them "something like 91 years".
However, by the late 90s, it became clear that DES could be cracked in a few days' time-frame with custom-built hardware such as could be purchased by a large corporation or government. The book Cracking DES (O'Reilly and Associates) tells of the successful ability in 1998 to break 56-bit DES by a brute-force attack mounted by a cyber civil rights group with limited resources; see EFF DES cracker. Even before that demonstration, 56 bits was considered insufficient length for symmetric algorithm keys for general use. Because of this, DES was replaced in most security applications by Triple DES, which has 112 bits of security when using 168-bit keys (triple key).
The Advanced Encryption Standard published in 2001 uses key sizes of 128, 192 or 256 bits. Many observers consider 128 bits sufficient for the foreseeable future for symmetric algorithms of AES's quality until quantum computers become available. However, as of 2015, the U.S. National Security Agency has issued guidance that it plans to switch to quantum computing resistant algorithms and now requires 256-bit AES keys for data classified up to Top Secret.
In 2003, the U.S. National Institute of Standards and Technology (NIST) proposed phasing out 80-bit keys by 2015. As of 2005, 80-bit keys were allowed only until 2010.
Since 2015, NIST guidance says that "the use of keys that provide less than 112 bits of security strength for key agreement is now disallowed." NIST approved symmetric encryption algorithms include three-key Triple DES, and AES. Approvals for two-key Triple DES and Skipjack were withdrawn in 2015; the NSA's Skipjack algorithm used in its Fortezza program employs 80-bit keys.
== Asymmetric algorithm key lengths ==
The effectiveness of public key cryptosystems depends on the intractability (computational and theoretical) of certain mathematical problems such as integer factorization. These problems are time-consuming to solve, but usually faster than trying all possible keys by brute force. Thus, asymmetric keys must be longer for equivalent resistance to attack than symmetric algorithm keys. The most common methods are assumed to be weak against sufficiently powerful quantum computers in the future.
Since 2015, NIST recommends a minimum of 2048-bit keys for RSA, an update to the widely accepted recommendation of a 1024-bit minimum since at least 2002.
1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys, 3072-bit RSA keys to 128-bit symmetric keys, and 15360-bit RSA keys to 256-bit symmetric keys. In 2003, RSA Security claimed that 1024-bit keys were likely to become crackable sometime between 2006 and 2010, while 2048-bit keys are sufficient until 2030. As of 2020 the largest RSA key publicly known to be cracked is RSA-250 with 829 bits.
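The equivalences quoted above can be captured in a simple lookup table, a sketch of how a policy check might encode them (the mapping values come directly from the text):

```python
# NIST-quoted equivalences between RSA modulus sizes and
# symmetric key strengths, as listed above.
RSA_TO_SYMMETRIC_BITS = {1024: 80, 2048: 112, 3072: 128, 15360: 256}

def symmetric_equivalent(rsa_bits: int) -> int:
    # Look up the symmetric strength equivalent to an RSA key size.
    return RSA_TO_SYMMETRIC_BITS[rsa_bits]

assert symmetric_equivalent(2048) == 112   # current NIST minimum for RSA
assert symmetric_equivalent(15360) == 256
```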
The Finite Field Diffie-Hellman algorithm has roughly the same key strength as RSA for the same key sizes. The work factor for breaking Diffie-Hellman is based on the discrete logarithm problem, which is related to the integer factorization problem on which RSA's strength is based. Thus, a 2048-bit Diffie-Hellman key has about the same strength as a 2048-bit RSA key.
Elliptic-curve cryptography (ECC) is an alternative set of asymmetric algorithms that is equivalently secure with shorter keys, requiring only approximately twice as many bits as the equivalent symmetric algorithm. A 256-bit Elliptic-curve Diffie–Hellman (ECDH) key has approximately the same safety factor as a 128-bit AES key. A message encrypted with an elliptic key algorithm using a 109-bit long key was broken in 2004.
The NSA previously recommended 256-bit ECC for protecting classified information up to the SECRET level, and 384-bit for TOP SECRET; in 2015 it announced plans to transition to quantum-resistant algorithms by 2024, and until then recommends 384-bit for all classified information.
== Effect of quantum computing attacks on key strength ==
The two best known quantum computing attacks are based on Shor's algorithm and Grover's algorithm. Of the two, Shor's offers the greater risk to current security systems.
Derivatives of Shor's algorithm are widely conjectured to be effective against all mainstream public-key algorithms including RSA, Diffie-Hellman and elliptic curve cryptography. According to Professor Gilles Brassard, an expert in quantum computing: "The time needed to factor an RSA integer is the same order as the time needed to use that same integer as modulus for a single RSA encryption. In other words, it takes no more time to break RSA on a quantum computer (up to a multiplicative constant) than to use it legitimately on a classical computer." The general consensus is that these public key algorithms are insecure at any key size if sufficiently large quantum computers capable of running Shor's algorithm become available. The implication of this attack is that all data encrypted using current standards based security systems such as the ubiquitous SSL used to protect e-commerce and Internet banking and SSH used to protect access to sensitive computing systems is at risk. Encrypted data protected using public-key algorithms can be archived and may be broken at a later time, commonly known as retroactive/retrospective decryption or "harvest now, decrypt later".
Mainstream symmetric ciphers (such as AES or Twofish) and collision resistant hash functions (such as SHA) are widely conjectured to offer greater security against known quantum computing attacks. They are widely thought most vulnerable to Grover's algorithm. Bennett, Bernstein, Brassard, and Vazirani proved in 1996 that a brute-force key search on a quantum computer cannot be faster than roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case. Thus in the presence of large quantum computers an n-bit key can provide at least n/2 bits of security. Quantum brute force is easily defeated by doubling the key length, which has little extra computational cost in ordinary use. This implies that at least a 256-bit symmetric key is required to achieve a 128-bit security rating against a quantum computer. As mentioned above, the NSA announced in 2015 that it plans to transition to quantum-resistant algorithms.
In a 2016 Quantum Computing FAQ, the NSA affirmed:
"A sufficiently large quantum computer, if built, would be capable of undermining all widely-deployed public key algorithms used for key establishment and digital signatures. [...] It is generally accepted that quantum computing techniques are much less effective against symmetric algorithms than against current widely used public key algorithms. While public key cryptography requires changes in the fundamental design to protect against a potential future quantum computer, symmetric key algorithms are believed to be secure provided a sufficiently large key size is used. [...] The public-key algorithms (RSA, Diffie-Hellman, [Elliptic-curve Diffie–Hellman] ECDH, and [Elliptic Curve Digital Signature Algorithm] ECDSA) are all vulnerable to attack by a sufficiently large quantum computer. [...] While a number of interesting quantum resistant public key algorithms have been proposed external to NSA, nothing has been standardized by NIST, and NSA is not specifying any commercial quantum resistant standards at this time. NSA expects that NIST will play a leading role in the effort to develop a widely accepted, standardized set of quantum resistant algorithms. [...] Given the level of interest in the cryptographic community, we hope that there will be quantum resistant algorithms widely available in the next decade. [...] The AES-256 and SHA-384 algorithms are symmetric, and believed to be safe from attack by a large quantum computer."
In a 2022 press release, the NSA notified:
"A cryptanalytically-relevant quantum computer (CRQC) would have the potential to break public-key systems (sometimes referred to as asymmetric cryptography) that are used today. Given foreign pursuits in quantum computing, now is the time to plan, prepare and budget for a transition to [quantum-resistant] QR algorithms to assure sustained protection of [National Security Systems] NSS and related assets in the event a CRQC becomes an achievable reality."
Since September 2022, the NSA has been transitioning from the Commercial National Security Algorithm Suite (now referred to as CNSA 1.0), originally launched in January 2016, to the Commercial National Security Algorithm Suite 2.0 (CNSA 2.0), both summarized below:
CNSA 2.0
CNSA 1.0
== See also ==
Key stretching
== Notes ==
== References ==
== Further reading ==
Recommendation for Key Management — Part 1: general, NIST Special Publication 800-57. March, 2007
Blaze, Matt; Diffie, Whitfield; Rivest, Ronald L.; et al. "Minimal Key Lengths for Symmetric Ciphers to Provide Adequate Commercial Security". January, 1996
Arjen K. Lenstra, Eric R. Verheul: Selecting Cryptographic Key Sizes. J. Cryptology 14(4): 255-293 (2001) — Citeseer link
== External links ==
www.keylength.com: An online keylength calculator
Articles discussing the implications of quantum computing
NIST cryptographic toolkit
Burt Kaliski: TWIRL and RSA key sizes (May 2003) | Wikipedia/Cryptographic_key_length |
In mathematics, the graph of a function f is the set of ordered pairs (x, y), where f(x) = y. In the common case where x and f(x) are real numbers, these pairs are Cartesian coordinates of points in a plane and often form a curve.
The graphical representation of the graph of a function is also known as a plot.
In the case of functions of two variables – that is, functions whose domain consists of pairs (x, y) – the graph usually refers to the set of ordered triples (x, y, z) where f(x, y) = z. This is a subset of three-dimensional space; for a continuous real-valued function of two real variables, its graph forms a surface, which can be visualized as a surface plot.
In science, engineering, technology, finance, and other areas, graphs are tools used for many purposes. In the simplest case one variable is plotted as a function of another, typically using rectangular axes; see Plot (graphics) for details.
A graph of a function is a special case of a relation.
In the modern foundations of mathematics, and, typically, in set theory, a function is actually equal to its graph. However, it is often useful to see functions as mappings, which consist not only of the relation between input and output, but also of which set is the domain and which set is the codomain. For example, to say whether a function is onto (surjective) or not, the codomain should be taken into account. The graph of a function on its own does not determine the codomain. It is common to use both terms, function and graph of a function, since even if considered the same object, they indicate viewing it from a different perspective.
== Definition ==
Given a function f : X → Y from a set X (the domain) to a set Y (the codomain), the graph of the function is the set
G(f) = {(x, f(x)) : x ∈ X},
which is a subset of the Cartesian product X × Y. In the definition of a function in terms of set theory, it is common to identify a function with its graph, although, formally, a function is formed by the triple consisting of its domain, its codomain and its graph.
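For a finite domain, the graph can be computed literally as a set of pairs; a minimal sketch (the helper name is illustrative):

```python
def graph(f, domain):
    """Return G(f) = {(x, f(x)) : x in domain}."""
    return {(x, f(x)) for x in domain}

# Example: the squaring function on a small finite domain.
G = graph(lambda x: x * x, {-2, -1, 0, 1, 2})
assert G == {(-2, 4), (-1, 1), (0, 0), (1, 1), (2, 4)}
```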
== Examples ==
=== Functions of one variable ===
The graph of the function f : {1, 2, 3} → {a, b, c, d} defined by
f(x) = a if x = 1,  d if x = 2,  c if x = 3,
is the subset of the set {1, 2, 3} × {a, b, c, d} given by
G(f) = {(1, a), (2, d), (3, c)}.
From the graph, the domain {1, 2, 3} is recovered as the set of first components of the pairs in the graph:
{1, 2, 3} = {x : ∃y such that (x, y) ∈ G(f)}.
Similarly, the range can be recovered as
{a, c, d} = {y : ∃x such that (x, y) ∈ G(f)}.
The codomain {a, b, c, d}, however, cannot be determined from the graph alone.
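The recovery of domain and range (but not codomain) from the graph can be checked directly on the finite example above:

```python
# The graph of f : {1,2,3} -> {a,b,c,d} from the example above.
G = {(1, 'a'), (2, 'd'), (3, 'c')}

domain = {x for (x, _) in G}   # first components of each pair
image = {y for (_, y) in G}    # second components of each pair

assert domain == {1, 2, 3}
assert image == {'a', 'c', 'd'}
# The unused codomain element 'b' cannot be recovered from G alone.
```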
The graph of the cubic polynomial on the real line
f(x) = x³ − 9x
is
{(x, x³ − 9x) : x is a real number}.
If this set is plotted on a Cartesian plane, the result is a curve (see figure).
=== Functions of two variables ===
The graph of the trigonometric function
f(x, y) = sin(x²) cos(y²)
is
{(x, y, sin(x²) cos(y²)) : x and y are real numbers}.
If this set is plotted on a three-dimensional Cartesian coordinate system, the result is a surface (see figure).
It is often helpful to show, together with the graph, the gradient of the function and several level curves. The level curves can be mapped onto the function surface or projected onto the bottom plane. The second figure shows such a drawing of the graph of the function
f(x, y) = −(cos(x²) + cos(y²))².
== See also ==
== References ==
== Further reading ==
== External links ==
Weisstein, Eric W. "Function Graph." From MathWorld—A Wolfram Web Resource. | Wikipedia/Graph_of_a_function_of_two_variables |
In cryptography, a Feistel cipher (also known as Luby–Rackoff block cipher) is a symmetric structure used in the construction of block ciphers, named after the German-born physicist and cryptographer Horst Feistel, who did pioneering research while working for IBM; it is also commonly known as a Feistel network. A large number of block ciphers use the scheme, including the US Data Encryption Standard, the Soviet/Russian GOST and the more recent Blowfish and Twofish ciphers. In a Feistel cipher, encryption and decryption are very similar operations, and both consist of iteratively running a function called a "round function" a fixed number of times.
== History ==
Many modern symmetric block ciphers are based on Feistel networks. Feistel networks were first seen commercially in IBM's Lucifer cipher, designed by Horst Feistel and Don Coppersmith in 1973. Feistel networks gained respectability when the U.S. Federal Government adopted the DES (a cipher based on Lucifer, with changes made by the NSA) in 1976. Like other components of the DES, the iterative nature of the Feistel construction makes implementing the cryptosystem in hardware easier (particularly on the hardware available at the time of DES's design).
== Design ==
A Feistel network uses a round function, a function which takes two inputs – a data block and a subkey – and returns one output of the same size as the data block. In each round, the round function is run on half of the data to be encrypted, and its output is XORed with the other half of the data. This is repeated a fixed number of times, and the final output is the encrypted data. An important advantage of Feistel networks compared to other cipher designs such as substitution–permutation networks is that the entire operation is guaranteed to be invertible (that is, encrypted data can be decrypted), even if the round function is not itself invertible. The round function can be made arbitrarily complicated, since it does not need to be designed to be invertible. Furthermore, the encryption and decryption operations are very similar, even identical in some cases, requiring only a reversal of the key schedule. Therefore, the size of the code or circuitry required to implement such a cipher is nearly halved. Unlike substitution–permutation networks, Feistel networks also do not depend on a substitution box that could cause timing side-channels in software implementations.
== Theoretical work ==
The structure and properties of Feistel ciphers have been extensively analyzed by cryptographers.
Michael Luby and Charles Rackoff analyzed the Feistel cipher construction and proved that if the round function is a cryptographically secure pseudorandom function, with Ki used as the seed, then 3 rounds are sufficient to make the block cipher a pseudorandom permutation, while 4 rounds are sufficient to make it a "strong" pseudorandom permutation (which means that it remains pseudorandom even to an adversary who gets oracle access to its inverse permutation). Because of this very important result of Luby and Rackoff, Feistel ciphers are sometimes called Luby–Rackoff block ciphers.
Further theoretical work has generalized the construction somewhat and given more precise bounds for security.
== Construction details ==
Let F be the round function and let K₀, K₁, …, Kₙ be the sub-keys for the rounds 0, 1, …, n respectively.
Then the basic operation is as follows:
Split the plaintext block into two equal pieces: (L₀, R₀).
For each round i = 0, 1, …, n, compute
Lᵢ₊₁ = Rᵢ,
Rᵢ₊₁ = Lᵢ ⊕ F(Rᵢ, Kᵢ),
where ⊕ means XOR. Then the ciphertext is (Rₙ₊₁, Lₙ₊₁).
Decryption of a ciphertext (Rₙ₊₁, Lₙ₊₁) is accomplished by computing, for i = n, n − 1, …, 0,
Rᵢ = Lᵢ₊₁,
Lᵢ = Rᵢ₊₁ ⊕ F(Lᵢ₊₁, Kᵢ).
Then (L₀, R₀) is the plaintext again.
The diagram illustrates both encryption and decryption. Note the reversal of the subkey order for decryption; this is the only difference between encryption and decryption.
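The construction above can be sketched as a toy balanced Feistel network on 32-bit halves. The round function F here is an arbitrary, deliberately non-invertible mix chosen for illustration (not from any real cipher); the network is invertible regardless of F, which is the point of the construction:

```python
MASK = 0xFFFFFFFF  # work on 32-bit halves

def F(half: int, subkey: int) -> int:
    # Arbitrary non-invertible round function (illustrative only).
    return ((half * 2654435761) ^ subkey) & MASK

def feistel_encrypt(L, R, subkeys):
    # L_{i+1} = R_i;  R_{i+1} = L_i XOR F(R_i, K_i)
    for k in subkeys:
        L, R = R, L ^ F(R, k)
    return R, L  # ciphertext is (R_{n+1}, L_{n+1})

def feistel_decrypt(R, L, subkeys):
    # R_i = L_{i+1};  L_i = R_{i+1} XOR F(L_{i+1}, K_i)
    for k in reversed(subkeys):
        L, R = R ^ F(L, k), L
    return L, R  # plaintext (L_0, R_0)

subkeys = [0xDEADBEEF, 0x01234567, 0x89ABCDEF, 0x55AA55AA]
ct = feistel_encrypt(0x01020304, 0x05060708, subkeys)
assert feistel_decrypt(*ct, subkeys) == (0x01020304, 0x05060708)
```

Note that decryption runs the same loop shape with the subkey order reversed, mirroring the "only difference between encryption and decryption" observation above.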
=== Unbalanced Feistel cipher ===
Unbalanced Feistel ciphers use a modified structure where L₀ and R₀ are not of equal lengths. The Skipjack cipher is an example of such a cipher. The Texas Instruments digital signature transponder uses a proprietary unbalanced Feistel cipher to perform challenge–response authentication.
The Thorp shuffle is an extreme case of an unbalanced Feistel cipher in which one side is a single bit. This has better provable security than a balanced Feistel cipher but requires more rounds.
=== Other uses ===
The Feistel construction is also used in cryptographic algorithms other than block ciphers. For example, the optimal asymmetric encryption padding (OAEP) scheme uses a simple Feistel network to randomize ciphertexts in certain asymmetric-key encryption schemes.
A generalized Feistel algorithm can be used to create strong permutations on small domains of size not a power of two (see format-preserving encryption).
=== Feistel networks as a design component ===
Whether the entire cipher is a Feistel cipher or not, Feistel-like networks can be used as a component of a cipher's design. For example, MISTY1 is a Feistel cipher using a three-round Feistel network in its round function, Skipjack is a modified Feistel cipher using a Feistel network in its G permutation, and Threefish (part of Skein) is a non-Feistel block cipher that uses a Feistel-like MIX function.
== List of Feistel ciphers ==
Feistel or modified Feistel:
Generalised Feistel:
CAST-256
CLEFIA
MacGuffin
RC2
RC6
Skipjack
SMS4
== See also ==
Cryptography
Stream cipher
Substitution–permutation network
Lifting scheme for discrete wavelet transform has pretty much the same structure
Format-preserving encryption
Lai–Massey scheme
== References == | Wikipedia/Feistel_network |
Terrestrial Trunked Radio (TETRA; formerly known as Trans-European Trunked Radio), a European standard for a trunked radio system, is a professional mobile radio and two-way transceiver specification. TETRA was specifically designed for use by government agencies, emergency services, (police forces, fire departments, ambulance) for public safety networks, rail transport staff for train radios, transport services and the military. TETRA is the European version of trunked radio, similar to Project 25.
TETRA is a European Telecommunications Standards Institute (ETSI) standard, first version published 1995; it is mentioned by the European Radiocommunications Committee (ERC).
== Description ==
TETRA uses time-division multiple access (TDMA) with four user channels on one radio carrier and 25 kHz spacing between carriers. Both point-to-point and point-to-multipoint transfer can be used. Digital data transmission is also included in the standard though at a low data rate.
TETRA Mobile Stations (MS) can communicate using direct-mode operation (DMO) or trunked-mode operation (TMO), the latter using switching and management infrastructure (SwMI) made of TETRA base stations (TBS). As well as allowing direct communications in situations where network coverage is not available, DMO also includes the possibility of using a sequence of one or more TETRA terminals as relays. This functionality is called DMO gateway (from DMO to TMO) or DMO repeater (from DMO to DMO). In emergency situations this feature allows direct communications underground or in areas of bad coverage.
In addition to voice and dispatch services, the TETRA system supports several types of data communication. Status messages and short data services (SDS) are provided over the system's main control channel, while packet-switched data or circuit-switched data communication uses specifically assigned channels.
TETRA provides for authentication of terminals towards infrastructure and vice versa. For protection against eavesdropping, air interface encryption and end-to-end encryption is available.
The common mode of operation is in a group calling mode in which a single button push will connect the user to the users in a selected call group and/or a dispatcher. It is also possible for the terminal to act as a one-to-one walkie talkie but without the normal range limitation since the call still uses the network. TETRA terminals can act as mobile phones (cell phones), with a full-duplex direct connection to other TETRA Users or the PSTN. Emergency buttons, provided on the terminals, enable the users to transmit emergency signals, to the dispatcher, overriding any other activity taking place at the same time.
== Advantages ==
The main advantages of TETRA over other technologies (such as GSM) are:
The much lower frequency used gives longer range, which in turn permits very high levels of geographic coverage with a smaller number of transmitters, thus cutting infrastructure costs.
During a voice call, the communications are not interrupted when moving to another network site. This is a feature that dPMR networks also typically provide; it allows a number of fall-back modes, such as the ability for a base station to process local calls. So-called 'mission critical' networks can be built with TETRA where all aspects are fail-safe/multiple-redundant.
In the absence of a network, mobiles/portables can use 'direct mode' whereby they share channels directly (walkie-talkie mode).
Gateway mode - where a single mobile with connection to the network can act as a relay for other nearby mobiles that are out of range of the infrastructure. A dedicated transponder system isn't required in order to achieve this functionality, unlike with analogue radio systems.
TETRA also provides a point-to-point function that traditional analogue emergency services radio systems did not provide. This enables users to have a one-to-one trunked 'radio' link between sets without the need for the direct involvement of a control room operator/dispatcher.
Unlike cellular technologies, which connect one subscriber to one other subscriber (one-to-one), TETRA is built to do one-to-one, one-to-many and many-to-many. These operational modes are directly relevant to the public safety and professional users.
Security TETRA supports terminal registration, authentication, air-interface encryption and end-to-end encryption.
Rapid deployment (transportable) network solutions are available for disaster relief and temporary capacity provision.
Network solutions are available in both reliable circuit-switched (telephone like) architectures and flat, IP architectures with soft (software) switches.
Further information is available from the TETRA Association (formerly TETRA MoU) and the standards can be downloaded for free from ETSI.
== Disadvantages ==
Its main disadvantages are:
Serious security issues have been identified, including an intentional weakening of the TEA1 cipher, constituting a full break within a minute on consumer hardware. (See Description)
Requires a linear amplifier to meet the stringent RF specifications that allow it to exist alongside other radio services.
Data transfer is slow by modern standards.
Up to 7.2 kbit/s per timeslot, in the case of point-to-point connections, and 3.5 kbit/s per timeslot in case of IP encapsulation.
Both options permit the use of between one and four timeslots.
Different implementations include one of the previous connectivity capabilities, both, or none, and one timeslot or more.
These rates are ostensibly faster than the competing technologies DMR, dPMR, and P25 are capable of.
The latest version of the standard supports 115.2 kbit/s in 25 kHz or up to 691.2 kbit/s in an expanded 150 kHz channel. To overcome the limitations, many software vendors have begun to consider hybrid solutions where TETRA is used for critical signalling while large data synchronization and transfer of images and video is done over 3G / LTE.
== Technical details ==
=== Radio aspects ===
For its modulation, TETRA uses π/4 differential quadrature phase-shift keying (π/4-DQPSK). The symbol (baud) rate is 18,000 symbols per second, and each symbol maps to 2 bits, thus resulting in 36,000 bit/s gross.
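The symbol-rate arithmetic above is simple enough to check directly:

```python
# Gross bit-rate arithmetic for TETRA's pi/4-DQPSK, per the text above.
SYMBOL_RATE = 18_000      # symbols per second
BITS_PER_SYMBOL = 2       # pi/4-DQPSK encodes 2 bits per symbol

gross_bit_rate = SYMBOL_RATE * BITS_PER_SYMBOL
assert gross_bit_rate == 36_000  # bit/s gross
```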
As a form of phase shift keying is used to transmit data during each burst, it would seem reasonable to expect the transmit power to be constant. However it is not. This is because the sidebands, which are essentially a repetition of the data in the main carrier's modulation, are filtered off with a sharp filter so that unnecessary spectrum is not used up. This results in an amplitude modulation and is why TETRA requires linear amplifiers. The resulting ratio of peak to mean (RMS) power is 3.65 dB. If non-linear (or not-linear enough) amplifiers are used, the sidebands re-appear and cause interference on adjacent channels. Commonly used techniques for achieving the necessary linearity include Cartesian loops, and adaptive predistortion.
The base stations normally transmit continuously and (simultaneously) receive continuously from various mobiles on different carrier frequencies; hence the TETRA system is a frequency-division duplex (FDD) system. TETRA also uses FDMA/TDMA (see above) like GSM. The mobiles normally only transmit on 1 slot/4 and receive on 1 slot/4 (instead of 1 slot/8 for GSM).
Speech signals in TETRA are sampled at 8 kHz and then compressed with a vocoder using algebraic code-excited linear prediction (ACELP). This creates a data stream of 4.567 kbit/s. This data stream is error-protection encoded before transmission to allow correct decoding even in noisy (erroneous) channels. The data rate after coding is 7.2 kbit/s, which is the capacity of a single traffic slot when 17 of the 18 frames in each multiframe carry traffic.
A single slot consists of 255 usable symbols, the remaining time is used up with synchronisation sequences and turning on/off, etc. A single frame consists of 4 slots, and a multiframe (whose duration is 1.02 seconds) consists of 18 frames. Hyperframes also exist, but are mostly used for providing synchronisation to encryption algorithms.
The downlink (i.e., the output of the base station) is normally a continuous transmission consisting of either specific communications with mobile(s), synchronisation, or other general broadcasts. All slots are usually filled with a burst even if idle (continuous mode). Although the system uses 18 frames per second, only 17 of these are used for traffic channels; the 18th frame is reserved for signalling, Short Data Service messages (like SMS in GSM) or synchronisation. The frame structure (18,000 symbols/s; 255 symbols per slot; 4 slots per frame, giving 17.65 frames per second) is the cause of the perceived "amplitude modulation" at about 17 Hz, which is especially apparent in mobiles/portables that transmit on only one slot in four. They use the remaining three slots to switch frequency, receive a burst from the base station two slots later, and then return to their transmit frequency (TDMA).
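The timing figures quoted above are mutually consistent, which a few lines of arithmetic confirm (a sketch using only the numbers from the text):

```python
# TETRA air-interface timing, cross-checked from the figures in the text.
SYMBOL_RATE = 18_000           # symbols per second
BITS_PER_SYMBOL = 2            # pi/4-DQPSK maps each symbol to 2 bits
SYMBOLS_PER_SLOT = 255
SLOTS_PER_FRAME = 4
FRAMES_PER_MULTIFRAME = 18

gross_bit_rate = SYMBOL_RATE * BITS_PER_SYMBOL            # 36,000 bit/s
frames_per_second = SYMBOL_RATE / (SYMBOLS_PER_SLOT * SLOTS_PER_FRAME)
multiframe_seconds = FRAMES_PER_MULTIFRAME / frames_per_second

assert gross_bit_rate == 36_000
assert round(frames_per_second, 2) == 17.65   # the ~17 Hz "amplitude modulation"
assert round(multiframe_seconds, 2) == 1.02   # multiframe duration from the text
```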
=== Radio frequencies ===
=== Air interface encryption ===
To provide confidentiality, the TETRA air interface is encrypted using one of the TETRA Encryption Algorithm (TEA) ciphers. The encryption provides confidentiality (protection against eavesdropping) as well as protection of signalling.
Currently seven different ciphers are standardized: TEA1 to TEA4 in TEA Set A and TEA5 to TEA7 in TEA Set B. These TEA ciphers should not be confused with the block cipher Tiny Encryption Algorithm. The TEA ciphers have different availability due to export controls and use restrictions. In the past, few details were published concerning these proprietary ciphers. Riess mentions in early TETRA design documents that encryption should be done with a stream cipher, due to the property of not propagating transmission errors. Parkinson later confirmed this and explained that TEA is a stream cipher with 80-bit keys. The algorithms were later reverse-engineered, and it emerged that TEA1 reduces its effective key strength to 32 bits. TEA1 and TEA4 provide basic-level security and are meant for commercial use. The TEA2 cipher is restricted to European public safety organisations. The TEA3 cipher is for situations where TEA2 would be suitable but is not available.
==== Security vulnerabilities of the air interface encryption ====
An in-depth review of the TETRA standard and encryption algorithms, published in July 2023 by the company Midnight Blue and the first to be made public in 20 years, found multiple security flaws, referred to by the company as "TETRA:BURST". A total of five flaws were filed to the CVE database:
The Air Interface Encryption (AIE) keystream generator is vulnerable to decryption oracle attacks due to the use of publicly-broadcast network time—keystream reuse can be triggered for acknowledged unicast communication when using static keys (security class 2). This is not applicable for group communication.
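Why keystream reuse is fatal can be shown generically (a toy sketch with an arbitrary made-up keystream, not the actual TETRA AIE keystream generator): XORing two ciphertexts produced under the same keystream cancels the keystream and exposes the XOR of the plaintexts.

```python
# Toy illustration of keystream reuse -- NOT the real TETRA AIE cipher.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes([0x5A, 0xC3, 0x91, 0x0F, 0x77, 0x24, 0xE8, 0x3D] * 4)

p1 = b"DISPATCH UNIT 7 TO MAIN GATE"
p2 = b"ALL UNITS HOLD YOUR POSITION"
c1 = xor_bytes(p1, keystream)   # both messages encrypted with the
c2 = xor_bytes(p2, keystream)   # SAME keystream (the flaw)

# An eavesdropper who never sees the keystream still learns p1 XOR p2:
leak = xor_bytes(c1, c2)
assert leak == xor_bytes(p1, p2)

# Knowing (or guessing) one plaintext then recovers the other outright:
recovered = xor_bytes(leak, p1)
assert recovered == p2
```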
TEA1 contains a secret reduction step that effectively downgrades the cryptographic strength from 80 to 32 bits, allowing anyone to break the cipher and subsequently decrypt the signal in as little as one minute using a consumer laptop. "There's no other way in which this can function than that this is an intentional backdoor," "This constitutes a full break of the cipher, allowing for interception or manipulation of radio traffic", according to the news report posted on ComputerWeekly. The deliberately weakened TEA1 flaw seems to be known in intelligence circles and is referred to in the famous 2006 Wikileaks dump of US diplomatic communications.
AIE contains no authentication for the ciphertext, making malleability attacks possible.
The cryptographic anonymization scheme is weak and can be partially reversed to track users.
The authentication algorithm theoretically allowed attackers to set the Derived Cipher Key (DCK) to 0 but not to decrypt traffic.
In addition, the Midnight Blue team spotted a "peculiarity regarding the TEA3 S-box" but has yet to determine whether it constitutes a weakness.
These vulnerabilities remained publicly unknown for 28 years after TETRA's publication because TETRA does not make the definitions of its cryptographic algorithms public, an example of security through obscurity. The Midnight Blue team gained access to TETRA's cryptographic code by attacking the trusted execution environment on a TETRA-enabled radio. The team points to a list of previously broken cryptographic systems relying on obscurity and argues that Kerckhoffs's principle should have been followed: the system would have been safer had its structure been publicly known.
=== Cell selection ===
==== Cell re-selection (or hand-over) in images ====
This first representation demonstrates where the slow reselect threshold (SRT), the fast reselect threshold (FRT), and propagation delay exceed parameters are most likely to be. These are represented in association with the decaying radio carrier as the distance increases from the TETRA base station.
From this illustration, the SRT and FRT triggering points are associated with the decaying radio signal strength of the respective cell carriers. The thresholds are situated so that cell reselection procedures occur in time and ensure communication continuity for ongoing calls.
==== Initial cell selection ====
The next diagram illustrates the initial cell selection for a given TETRA radio. The initial cell selection is performed by procedures located in the MLE and in the MAC. When the cell selection is made, and any registration is performed, the mobile station (MS) is said to be attached to the cell. The mobile is allowed to initially select any suitable cell that has a positive C1 value; i.e., the received signal level is greater than the minimum receive level for access parameter.
The initial cell selection procedure shall ensure that the MS selects a cell in which it can reliably decode downlink data (i.e., on a main control channel/MCCH), and which has a high probability of uplink communication. The minimum conditions that shall have to be met are that C1 > 0. Access to the network shall be conditional on the successful selection of a cell.
At mobile switch on, the mobile makes its initial cell selection of one of the base stations, which indicates the initial exchanges at activation.
Refer to EN 300 392-2, clause 16.3.1, Activation and control of underlying MLE service
Note 18.5.12 Minimum RX access level
The minimum receive access level information element shall indicate the minimum received signal level required at the SwMI in a cell, either the serving cell or a neighbour cell as defined in table 18.24.
==== Cell improvable ====
The next diagram illustrates where a given TETRA radio cell becomes improvable. The serving cell becomes improvable when both of the following hold: the C1 of the serving cell has been below the slow reselect threshold (defined in the cell reselection parameters of the radio network) for a period of 5 seconds, and the C1 or C2 of a neighbour cell has exceeded the C1 of the serving cell by the slow reselect hysteresis for a period of 5 seconds.
==== Cell usable ====
The next diagram illustrates where a given TETRA radio cell becomes usable. A neighbour cell becomes radio usable when the cell has a downlink radio connection of sufficient quality.
The following conditions must be met in order to declare a neighbour cell radio usable: the neighbour cell has a path loss parameter C1 or C2 that is, for a period of 5 seconds, greater than the fast reselect threshold plus the fast reselect hysteresis, and the service level provided by the neighbour cell is higher than that of the serving cell. No successful cell reselection shall have taken place within the previous 15 seconds unless MM requests a cell reselection. The MS-MLE shall check the criterion for serving cell relinquishment each time a neighbour cell is scanned or monitored.
The following conditions will cause the MS to rate the neighbour cell to have higher service level than the current serving cell:
The MS subscriber class is supported on the neighbour cell but not on the serving cell.
The neighbour cell is a priority cell and the serving cell is not.
The neighbour cell supports a service (that is, TETRA standard speech, packet data, or encryption) that is not supported by the serving cell and the MS requires that service to be available.
The cell service level indicates that the neighbour cell is less loaded than the serving cell.
==== Cell relinquishable (abandonable) ====
The next diagram illustrates where a given TETRA radio cell becomes relinquishable (abandonable). The serving cell becomes relinquishable when both of the following hold: the C1 of the serving cell has been below the fast reselect threshold (defined in the cell reselection parameters of the radio network) for a period of 5 seconds, and the C1 or C2 of a neighbour cell has exceeded the C1 of the serving cell by the fast reselect hysteresis for a period of 5 seconds.
No successful cell reselection shall have taken place within the previous 15 seconds unless Mobility Management (MM) requests a cell reselection. The MS-MLE shall check the criterion for serving cell relinquishment each time a neighbour cell is scanned or monitored.
==== Radio down-link failure ====
When the FRT threshold is breached, the MS is in a situation where it is essential to relinquish (or abandon) the serving cell and obtain another of at least usable quality. That is to say, the mobile station is aware that the radio signal is decaying rapidly and must reselect a cell quickly, before communications are terminated by radio link failure. When the mobile station's radio signal breaches the minimum receive level, the radio can no longer maintain acceptable communications for the user, and the radio link is broken.
Radio link failure: (C1 < 0). Using the suggested values, this would be satisfied with the serving cell level below −105 dBm. Cell reselection procedures are then activated in order to find a suitable radio base station.
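The reselection states described above can be summarised in a small decision sketch. The −105 dBm minimum receive level and the C1 definition follow the text; the SRT/FRT values are assumptions chosen for the demo, and the 5-second dwell timers and neighbour-cell hysteresis comparisons are omitted for brevity.

```python
# Illustrative classification of the serving cell against the reselection
# thresholds described above. Threshold values are assumptions for the demo.
MIN_RX_LEVEL = -105.0   # dBm: C1 < 0 below this (radio link failure)
FAST_RESELECT = -100.0  # dBm: FRT, serving cell "relinquishable" below this
SLOW_RESELECT = -95.0   # dBm: SRT, serving cell "improvable" below this

def c1(rssi_dbm: float) -> float:
    """Path-loss parameter C1: margin above the minimum receive level."""
    return rssi_dbm - MIN_RX_LEVEL

def serving_cell_state(rssi_dbm: float) -> str:
    if c1(rssi_dbm) < 0:
        return "radio link failure"
    if rssi_dbm < FAST_RESELECT:
        return "relinquishable"   # must reselect rapidly
    if rssi_dbm < SLOW_RESELECT:
        return "improvable"       # reselect if a neighbour is better
    return "ok"

assert serving_cell_state(-90.0) == "ok"
assert serving_cell_state(-97.0) == "improvable"
assert serving_cell_state(-103.0) == "relinquishable"
assert serving_cell_state(-108.0) == "radio link failure"
```

In a full implementation each transition additionally requires the 5-second persistence and a neighbour cell exceeding the serving cell by the relevant hysteresis, as the text describes.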
== Man-machine interface (MMI) ==
=== Virtual MMI for terminals ===
Any given TETRA radio terminal using Java (Java ME/CLDC) based technology provides the end user with the communication rights necessary to fulfil his or her work role on any short-duration assignment.
For dexterity, flexibility, and the ability to evolve, the public transportation radio engineering department chose to use the open-source Java language specification, administered by Sun and the associated working groups, to produce a transport application tool kit.
Service acquisition allows different authorised agents to establish communication channels between different services by calling the service identity, without possessing complete knowledge of the ISSI, GSSI, or any other TETRA-related communication establishment numbering plan. Service acquisition is administered through a centralised communication-rights service, or role allocation server, interfaced to the TETRA core network.
In summary, the TETRA MMI aims are to:
Allow any given agent on duty to use any given radio terminal without material constraint.
Provide specific transportation application software to the end-user agents (service acquisition, fraud, and aggression control).
This transport application tool kit has been produced successfully with TETRA communication technology and meets the public transport application requirements described hereafter.
The home (main) menu presents the end user with three possibilities:
Service acquisition,
Status SDS,
End-user parameters.
Service acquisition provides a means of virtually personalising any given radio terminal to the end user, and onto the TETRA network, for as long as the end user keeps the terminal in his or her possession.
Status SDS provides the end user with a mechanism for generating a 440 Hz repeating tone that signals a fraud occurrence to members within the same (dynamic or static) Group Short Subscriber Identity (GSSI), or to a specific Individual Short Subscriber Identity (ISSI), for the duration of the assignment (an hour, a morning patrol, or another short period allocated to the assignment). The advantage is that each end user may attach themselves to any given terminal and group for short durations without requiring major reconfiguration by means of radio software programming tools. The aggression feature functions similarly, but with a higher tone frequency (880 Hz) and a quicker repetition, to highlight the urgency of the alert.
The parameters tab provides an essential means for the terminal end user to pre-configure the target destination number (a preprogrammed ISSI or GSSI). With this pre-programmed destination number, the end user can liaise with the destination radio terminal or role allocation server and communicate in the group, or with a dedicated server where service acquisition requests are received, preprocessed, and ultimately dispatched through the TETRA core network. This simplifies the reconfiguration or recycling process, allowing flexibility on short assignments.
The parameters tab also provides a means of choosing between preselected tones to match the work group requirements for the purposes of fraud and aggression alerts. Any key available on the keypad can be selected to serve as an aggression or fraud quick key through the transport application software tool kit. It is recommended to use the asterisk and hash keys as the fraud and aggression quick keys, respectively. For the fraud and aggression tones, it is likewise recommended to use a 440 Hz slowly repeating tone (500 millisecond gap) and an 880 Hz fast repeating tone (250 millisecond gap), respectively. The tone options are as follows: 440 Hz, 620 Hz, 880 Hz, and 1060 Hz.
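The recommended cadences can be sketched as a simple sample generator. This is a hedged illustration only: the frequencies and gap durations are the ones recommended above, while the 8 kHz sample rate (matching TETRA's speech sampling) and the burst duration are assumptions, since the text specifies only the gap lengths.

```python
import math

SAMPLE_RATE = 8000  # Hz; an assumption, matching TETRA's 8 kHz speech sampling

def alert_tone(freq_hz: float, tone_ms: int, gap_ms: int, repeats: int) -> list:
    """Generate `repeats` cycles of a sine burst followed by silence.

    Fraud alert:      440 Hz, 500 ms gap (slow repetition)
    Aggression alert: 880 Hz, 250 ms gap (fast repetition)
    """
    tone_n = SAMPLE_RATE * tone_ms // 1000
    gap_n = SAMPLE_RATE * gap_ms // 1000
    samples = []
    for _ in range(repeats):
        samples.extend(math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
                       for n in range(tone_n))
        samples.extend([0.0] * gap_n)   # the silent gap between bursts
    return samples

fraud = alert_tone(440, tone_ms=500, gap_ms=500, repeats=2)
aggression = alert_tone(880, tone_ms=500, gap_ms=250, repeats=2)

assert len(fraud) == 2 * (4000 + 4000)       # 500 ms tone + 500 ms gap, twice
assert len(aggression) == 2 * (4000 + 2000)  # faster cadence signals urgency
assert all(s == 0.0 for s in fraud[4000:8000])  # first gap is silent
```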
The parameters page provides an aid (help) menu, and the last tab within parameters briefly describes the tool kit version and the history of the transport application tool kit to date.
=== TETRA Enhanced Data Service (TEDS) ===
The TETRA Association, working with ETSI, developed the TEDS standard, a wideband data solution, which enhances TETRA with a much higher capacity and throughput for data. In addition to those provided by TETRA, TEDS uses a range of adaptive modulation schemes and a number of different carrier sizes from 25 kHz to 150 kHz. Initial implementations of TEDS will be in the existing TETRA radio spectrum, and will likely employ 50 kHz channel bandwidths as this enables an equivalent coverage footprint for voice and TEDS services. TEDS performance is optimised for wideband data rates, wide area coverage and spectrum efficiency.
Advances in DSP technology have led to the introduction of multi-carrier transmission standards employing QAM modulation. WiMAX, Wi-Fi and TEDS standards are part of this family.
Refer also to:
JSR-118;
Mobile Information Device Profile, JSR-37;
Wireless Messaging API, JSR-120;
Connected Limited Device Configuration, JSR-139; and
Java Technology for the Wireless Industry (JTWI), JSR-185.
== Comparison to Project 25 ==
Project 25 and TETRA are used for public-safety and private-sector radio networks worldwide; however, they differ in some technical features and capabilities.
TETRA: It is optimized for high-population-density areas, with high spectral efficiency (four time slots in 25 kHz: four communication channels per 25 kHz channel, an efficient use of spectrum). It supports full-duplex voice, data, and messaging, but is generally unavailable for simulcast or the VHF band, although particular vendors have introduced simulcast and VHF into their TETRA platforms.
P25: It is optimized for wide-area coverage with low population density and supports simulcast, but its data support is limited. Phase 1 P25 radio systems operate in a 12.5 kHz analogue, digital, or mixed mode, while P25 Phase II uses a 2-timeslot TDMA structure in 12.5 kHz channels.
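The spectral-efficiency comparison reduces to simple division, using the channel widths and slot counts given above:

```python
# Equivalent bandwidth per voice channel, using the figures in the text.
tetra_khz = 25.0 / 4        # 4 TDMA slots in a 25 kHz carrier
p25_phase1_khz = 12.5 / 1   # one FDMA channel per 12.5 kHz
p25_phase2_khz = 12.5 / 2   # 2 TDMA slots in a 12.5 kHz channel

assert tetra_khz == 6.25
assert p25_phase1_khz == 12.5
assert p25_phase2_khz == tetra_khz   # Phase II matches TETRA's per-channel efficiency
```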
Currently, P25 is deployed in more than 53 countries and TETRA in more than 114 countries.
== Professional usage ==
At the end of 2009 there were 114 countries using TETRA systems in Europe, Middle East, Africa, Asia Pacific, Caribbean and Latin America.
The TETRA-system is in use by the public sector in the following countries. Only TETRA network infrastructure installations are listed. TETRA being an open standard, each of these networks can use any mix of TETRA mobile terminals from a wide range of suppliers.
== Amateur Radio usage ==
In the past decade, TETRA has seen an uptick in usage in the amateur radio community. The perceived higher audio quality compared with other digital voice modes, capacity for packet data, SDS, single-frequency DMO repeaters, the close proximity of the UHF (430–440 MHz) amateur radio band, and full-duplex audio in TMO are motivating arguments for experimenting with contacts using this technology.
Multiple constraints have to be noted when using TETRA for amateur radio service:
In most countries, encryption cannot be used.
Most older (pre-2010) terminals don't cover the Region 1 amateur radio frequency range (430–440 MHz) natively and must be modified via software, with a possible impact on RF performance.
Multiple amateur DMO and TMO networks are established throughout Europe.
Additionally, an open-source project aims to create a complete SDR-based TETRA stack, with a working DMO repeater proof of concept.
== See also ==
Digital mobile radio, a TDMA digital radio standard from ETSI
Digital private mobile radio (dPMR), an FDMA digital radio standard from ETSI
NXDN, a two-way FDMA digital radio protocol from Icom and JVC Kenwood
P25 (Project 25), a TIA APCO standard (USA)
TETRAPOL (previously MATRA)
== References ==
== External links ==
ETSI's website with free access to TETRA standards
HamTetra Community Resources
Report on the health effects of TETRA prepared for the Home Office
TETRA in use by radio amateurs
The Critical Communications Association (TCCA)
Radiocommunication objectives and requirements for Public Protection and Disaster Relief (PPDR) | Wikipedia/Terrestrial_Trunked_Radio |
The Secure Hash Algorithms are a family of cryptographic hash functions published by the National Institute of Standards and Technology (NIST) as a U.S. Federal Information Processing Standard (FIPS), including:
SHA-0: A retronym applied to the original version of the 160-bit hash function published in 1993 under the name "SHA". It was withdrawn shortly after publication due to an undisclosed "significant flaw" and replaced by the slightly revised version SHA-1.
SHA-1: A 160-bit hash function which resembles the earlier MD5 algorithm. This was designed by the National Security Agency (NSA) to be part of the Digital Signature Algorithm. Cryptographic weaknesses were discovered in SHA-1, and the standard was no longer approved for most cryptographic uses after 2010.
SHA-2: A family of two similar hash functions, with different block sizes, known as SHA-256 and SHA-512. They differ in the word size; SHA-256 uses 32-bit words where SHA-512 uses 64-bit words. There are also truncated versions of each standard, known as SHA-224, SHA-384, SHA-512/224 and SHA-512/256. These were also designed by the NSA.
SHA-3: A hash function formerly called Keccak, chosen in 2012 after a public competition among non-NSA designers. It supports the same hash lengths as SHA-2, and its internal structure differs significantly from the rest of the SHA family.
The corresponding standards are FIPS PUB 180 (original SHA), FIPS PUB 180-1 (SHA-1), and FIPS PUB 180-2 (SHA-1, SHA-256, SHA-384, and SHA-512). NIST published the SHA-3 standard as FIPS PUB 202, separate from the Secure Hash Standard (SHS).
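All of the approved variants except the withdrawn SHA-0 are exposed by, for example, Python's standard `hashlib` module; a quick sketch against the well-known "abc" test vectors:

```python
import hashlib

msg = b"abc"

# SHA-1: 160-bit digest (no longer approved for most uses)
assert hashlib.sha1(msg).hexdigest() == (
    "a9993e364706816aba3e25717850c26c9cd0d89d")

# SHA-2 family: SHA-256 uses 32-bit words, SHA-512 uses 64-bit words
assert hashlib.sha256(msg).hexdigest() == (
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad")
assert hashlib.sha512(msg).digest_size == 64   # 512 bits

# Truncated SHA-2 and the SHA-3 (Keccak) variants share the same lengths
assert hashlib.sha384(msg).digest_size == 48
assert hashlib.sha3_256(msg).digest_size == 32
```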
== Comparison of SHA functions ==
In the table below, internal state means the "internal hash sum" after each compression of a data block.
== Validation ==
All SHA-family algorithms, as FIPS-approved security functions, are subject to official validation by the CMVP (Cryptographic Module Validation Program), a joint program run by the American National Institute of Standards and Technology (NIST) and the Canadian Communications Security Establishment (CSE).
== References == | Wikipedia/SHA_hash_functions |
Public-key cryptography, or asymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of a public key and a corresponding private key. Key pairs are generated with cryptographic algorithms based on mathematical problems termed one-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security. There are many kinds of public-key cryptosystems, with different security goals, including digital signature, Diffie–Hellman key exchange, public-key key encapsulation, and public-key encryption.
Public key algorithms are fundamental security primitives in modern cryptosystems, including applications and protocols that offer assurance of the confidentiality and authenticity of electronic communications and data storage. They underpin numerous Internet standards, such as Transport Layer Security (TLS), SSH, S/MIME, and PGP. Compared to symmetric cryptography, public-key cryptography can be too slow for many purposes, so these protocols often combine symmetric cryptography with public-key cryptography in hybrid cryptosystems.
== Description ==
Before the mid-1970s, all cipher systems used symmetric key algorithms, in which the same cryptographic key is used with the underlying algorithm by both the sender and the recipient, who must both keep it secret. Of necessity, the key in every such system had to be exchanged between the communicating parties in some secure way prior to any use of the system – for instance, via a secure channel. This requirement is never trivial and very rapidly becomes unmanageable as the number of participants increases, or when secure channels are not available, or when, (as is sensible cryptographic practice), keys are frequently changed. In particular, if messages are meant to be secure from other users, a separate key is required for each possible pair of users.
By contrast, in a public-key cryptosystem, the public keys can be disseminated widely and openly, and only the corresponding private keys need be kept secret.
The two best-known types of public key cryptography are digital signature and public-key encryption:
In a digital signature system, a sender can use a private key together with a message to create a signature. Anyone with the corresponding public key can verify whether the signature matches the message, but a forger who does not know the private key cannot find any message/signature pair that will pass verification with the public key. For example, a software publisher can create a signature key pair and include the public key in software installed on computers. Later, the publisher can distribute an update to the software signed using the private key, and any computer receiving an update can confirm it is genuine by verifying the signature using the public key. As long as the software publisher keeps the private key secret, even if a forger can distribute malicious updates to computers, they cannot convince the computers that any malicious updates are genuine.
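The sign/verify asymmetry can be demonstrated with textbook RSA on deliberately tiny numbers. This is a toy sketch for illustration only: real signatures use keys thousands of bits long plus padding and hashing.

```python
# Textbook RSA signature with toy parameters -- insecure, illustration only.
p, q = 61, 53
n = p * q                 # 3233: public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (2753)

def sign(message: int) -> int:
    return pow(message, d, n)            # requires the private key

def verify(message: int, signature: int) -> bool:
    return pow(signature, e, n) == message   # anyone can check

sig = sign(65)
assert verify(65, sig)       # genuine signature passes
assert not verify(66, sig)   # altered message fails verification
```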
In a public-key encryption system, anyone with a public key can encrypt a message, yielding a ciphertext, but only those who know the corresponding private key can decrypt the ciphertext to obtain the original message. For example, a journalist can publish the public key of an encryption key pair on a web site so that sources can send secret messages to the news organization in ciphertext. Only the journalist who knows the corresponding private key can decrypt the ciphertexts to obtain the sources' messages—an eavesdropper reading email on its way to the journalist cannot decrypt the ciphertexts. However, public-key encryption does not conceal metadata like what computer a source used to send a message, when they sent it, or how long it is. Public-key encryption on its own also does not tell the recipient anything about who sent a message; it just conceals the content of the message.
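Encryption runs the same machinery in the opposite direction: the public exponent encrypts, the private exponent decrypts. Again a toy textbook-RSA sketch without padding, for illustration only.

```python
# Textbook RSA encryption with toy parameters -- insecure, illustration only.
p, q = 61, 53
n = p * q                            # 3233: public modulus
e = 17                               # public: anyone can encrypt
d = pow(e, -1, (p - 1) * (q - 1))    # private: only the holder can decrypt

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

ciphertext = encrypt(65)
assert ciphertext != 65           # the message is hidden...
assert decrypt(ciphertext) == 65  # ...and recoverable only with the private key
```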
One important issue is confidence/proof that a particular public key is authentic, i.e. that it is correct and belongs to the person or entity claimed, and has not been tampered with or replaced by some (perhaps malicious) third party. There are several possible approaches, including:
A public key infrastructure (PKI), in which one or more third parties – known as certificate authorities – certify ownership of key pairs. TLS relies upon this. This implies that the PKI system (software, hardware, and management) is trust-able by all involved.
A "web of trust" decentralizes authentication by using individual endorsements of links between a user and the public key belonging to that user. PGP uses this approach, in addition to lookup in the domain name system (DNS). The DKIM system for digitally signing emails also uses this approach.
== Applications ==
The most obvious application of a public key encryption system is for encrypting communication to provide confidentiality – a message that a sender encrypts using the recipient's public key, which can be decrypted only by the recipient's paired private key.
Another application in public key cryptography is the digital signature. Digital signature schemes can be used for sender authentication.
Non-repudiation systems use digital signatures to ensure that one party cannot successfully dispute its authorship of a document or communication.
Further applications built on this foundation include: digital cash, password-authenticated key agreement, time-stamping services and non-repudiation protocols.
== Hybrid cryptosystems ==
Because asymmetric key algorithms are nearly always much more computationally intensive than symmetric ones, it is common to use a public/private asymmetric key-exchange algorithm to encrypt and exchange a symmetric key, which is then used by symmetric-key cryptography to transmit data using the now-shared symmetric key for a symmetric key encryption algorithm. PGP, SSH, and the SSL/TLS family of schemes use this procedure; they are thus called hybrid cryptosystems. The initial asymmetric cryptography-based key exchange to share a server-generated symmetric key from the server to client has the advantage of not requiring that a symmetric key be pre-shared manually, such as on printed paper or discs transported by a courier, while providing the higher data throughput of symmetric key cryptography over asymmetric key cryptography for the remainder of the shared connection.
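A minimal sketch of this hybrid pattern, assuming a toy Diffie–Hellman group and a SHA-256-derived keystream standing in for a real symmetric cipher. Illustration only: real deployments use groups of 2048 bits or more, an authenticated cipher, and authentication of the exchange itself to stop man-in-the-middle attacks.

```python
import hashlib
import secrets

P, G = 4294967291, 5   # toy DH group (largest 32-bit prime); far too small in practice

# Each side picks a private value and publishes G^x mod P
a = secrets.randbelow(P - 2) + 1
b = secrets.randbelow(P - 2) + 1
A, B = pow(G, a, P), pow(G, b, P)

# Both sides derive the same shared secret without ever transmitting it
shared_client = pow(B, a, P)
shared_server = pow(A, b, P)
assert shared_client == shared_server

def keystream(secret: int, length: int) -> bytes:
    """Hash the shared secret (plus a counter) into symmetric key material."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(secret.to_bytes(8, "big")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

# Bulk data then flows under the cheap symmetric keystream
msg = b"bulk application data"
ks = keystream(shared_client, len(msg))
ct = bytes(m ^ k for m, k in zip(msg, ks))
pt = bytes(c ^ k for c, k in zip(ct, keystream(shared_server, len(ct))))
assert pt == msg
```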
== Weaknesses ==
As with all security-related systems, there are various potential weaknesses in public-key cryptography. Aside from poor choice of an asymmetric key algorithm (there are few that are widely regarded as satisfactory) or too short a key length, the chief security risk is that the private key of a pair becomes known. All security of messages, authentication, etc., will then be lost.
Additionally, with the advent of quantum computing, many asymmetric key algorithms are considered vulnerable to attacks, and new quantum-resistant schemes are being developed to overcome the problem.
=== Algorithms ===
All public key schemes are in theory susceptible to a "brute-force key search attack". However, such an attack is impractical if the amount of computation needed to succeed – termed the "work factor" by Claude Shannon – is out of reach of all potential attackers. In many cases, the work factor can be increased by simply choosing a longer key. But other algorithms may inherently have much lower work factors, making resistance to a brute-force attack (e.g., from longer keys) irrelevant. Some special and specific algorithms have been developed to aid in attacking some public key encryption algorithms; both RSA and ElGamal encryption have known attacks that are much faster than the brute-force approach. None of these are sufficiently improved to be actually practical, however.
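The effect of key length on the work factor is straightforwardly exponential, which a couple of lines make concrete (counting worst-case key-search trials only; algorithm-specific attacks, as noted above, can make this bound irrelevant):

```python
# Work factor of a brute-force key search grows exponentially with key length.
def worst_case_trials(key_bits: int) -> int:
    return 2 ** key_bits

assert worst_case_trials(32) == 4_294_967_296   # small enough to be feasible
assert worst_case_trials(80) // worst_case_trials(32) == 2 ** 48

# Each added bit doubles the work: lengthening the key raises the work
# factor without changing the underlying algorithm.
assert worst_case_trials(129) == 2 * worst_case_trials(128)
```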
Major weaknesses have been found for several formerly promising asymmetric key algorithms. The "knapsack packing" algorithm was found to be insecure after the development of a new attack. As with all cryptographic functions, public-key implementations may be vulnerable to side-channel attacks that exploit information leakage to simplify the search for a secret key. These are often independent of the algorithm being used. Research is underway to both discover, and to protect against, new attacks.
=== Alteration of public keys ===
Another potential security vulnerability in using asymmetric keys is the possibility of a "man-in-the-middle" attack, in which the communication of public keys is intercepted by a third party (the "man in the middle") and then modified to provide different public keys instead. Encrypted messages and responses must, in all instances, be intercepted, decrypted, and re-encrypted by the attacker using the correct public keys for the different communication segments so as to avoid suspicion.
A communication is said to be insecure where data is transmitted in a manner that allows for interception (also called "sniffing"). These terms refer to reading the sender's private data in its entirety. A communication is particularly unsafe when interceptions can not be prevented or monitored by the sender.
A man-in-the-middle attack can be difficult to implement due to the complexities of modern security protocols. However, the task becomes simpler when a sender is using insecure media such as public networks, the Internet, or wireless communication. In these cases an attacker can compromise the communications infrastructure rather than the data itself. A hypothetical malicious staff member at an Internet service provider (ISP) might find a man-in-the-middle attack relatively straightforward. Capturing the public key would only require searching for the key as it gets sent through the ISP's communications hardware; in properly implemented asymmetric key schemes, this is not a significant risk.
In some advanced man-in-the-middle attacks, one side of the communication will see the original data while the other will receive a malicious variant. Asymmetric man-in-the-middle attacks can prevent users from realizing their connection is compromised. This remains so even when one user's data is known to be compromised because the data appears fine to the other user. This can lead to confusing disagreements between users such as "it must be on your end!" when neither user is at fault. Hence, man-in-the-middle attacks are only fully preventable when the communications infrastructure is physically controlled by one or both parties; such as via a wired route inside the sender's own building. In summation, public keys are easier to alter when the communications hardware used by a sender is controlled by an attacker.
=== Public key infrastructure ===
One approach to prevent such attacks involves the use of a public key infrastructure (PKI); a set of roles, policies, and procedures needed to create, manage, distribute, use, store and revoke digital certificates and manage public-key encryption. However, this has potential weaknesses.
For example, the certificate authority issuing the certificate must be trusted by all participating parties to have properly checked the identity of the key-holder, to have ensured the correctness of the public key when it issues a certificate, to be secure from computer piracy, and to have made arrangements with all participants to check all their certificates before protected communications can begin. Web browsers, for instance, are supplied with a long list of "self-signed identity certificates" from PKI providers – these are used to check the bona fides of the certificate authority and then, in a second step, the certificates of potential communicators. An attacker who could subvert one of those certificate authorities into issuing a certificate for a bogus public key could then mount a "man-in-the-middle" attack as easily as if the certificate scheme were not used at all. An attacker who penetrates an authority's servers and obtains its store of certificates and keys (public and private) would be able to spoof, masquerade, decrypt, and forge transactions without limit, assuming that they were able to place themselves in the communication stream.
Despite its theoretical and potential problems, public key infrastructure is widely used. Examples include TLS and its predecessor SSL, which are commonly used to provide security for web browser transactions (for example, most websites utilize TLS for HTTPS).
Aside from the resistance to attack of a particular key pair, the security of the certification hierarchy must be considered when deploying public key systems. A certificate authority – usually a purpose-built program running on a server computer – vouches for the identities assigned to specific private keys by producing a digital certificate. Public key digital certificates are typically valid for several years at a time, so the associated private keys must be held securely over that time. When a private key used for certificate creation higher in the PKI server hierarchy is compromised, or accidentally disclosed, then a "man-in-the-middle attack" is possible, making any subordinate certificate wholly insecure.
=== Unencrypted metadata ===
Most of the available public-key encryption software does not conceal metadata in the message header, which might include the identities of the sender and recipient, the sending date, subject field, and the software they use etc. Rather, only the body of the message is concealed and can only be decrypted with the private key of the intended recipient. This means that a third party could construct quite a detailed model of participants in a communication network, along with the subjects being discussed, even if the message body itself is hidden.
However, there has been a recent demonstration of messaging with encrypted headers, which obscures the identities of the sender and recipient, and significantly reduces the available metadata to a third party. The concept is based around an open repository containing separately encrypted metadata blocks and encrypted messages. Only the intended recipient is able to decrypt the metadata block, and having done so they can identify and download their messages and decrypt them. Such a messaging system is at present in an experimental phase and not yet deployed. Scaling this method would reveal to the third party only the inbox server being used by the recipient and the timestamp of sending and receiving. The server could be shared by thousands of users, making social network modelling much more challenging.
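A minimal sketch of such a repository follows, assuming for brevity a pre-shared symmetric key per recipient rather than the public-key encryption a real scheme would use; the cipher is a toy SHA-256 keystream, purely illustrative:

```python
import hashlib, hmac, json, os

# Every repository entry is an opaque pair (metadata blob, message blob).
# Only a reader holding the right key can recognise and decrypt its entries.
def keystream_xor(key, nonce, data):
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))  # XOR stream; self-inverse

def deposit(repo, recipient_key, metadata, body):
    nonce = os.urandom(16)
    blob = keystream_xor(recipient_key, nonce, json.dumps(metadata).encode())
    tag = hmac.new(recipient_key, nonce + blob, hashlib.sha256).digest()
    repo.append({"nonce": nonce, "meta": blob, "tag": tag,
                 "body": keystream_xor(recipient_key, nonce + b"m", body)})

def fetch_mine(repo, my_key):
    """Trial-authenticate every metadata block; keep the ones that verify."""
    mine = []
    for e in repo:
        expect = hmac.new(my_key, e["nonce"] + e["meta"], hashlib.sha256).digest()
        if hmac.compare_digest(expect, e["tag"]):
            meta = json.loads(keystream_xor(my_key, e["nonce"], e["meta"]))
            body = keystream_xor(my_key, e["nonce"] + b"m", e["body"])
            mine.append((meta, body))
    return mine

repo = []
alice_key, bob_key = os.urandom(32), os.urandom(32)
deposit(repo, bob_key, {"from": "alice", "subject": "hi"}, b"hello bob")
deposit(repo, alice_key, {"from": "carol", "subject": "re"}, b"hello alice")

# Bob recovers only his message; an outside observer sees two opaque blobs.
assert fetch_mine(repo, bob_key) == [({"from": "alice", "subject": "hi"}, b"hello bob")]
```

The trial-decryption scan is what limits the third party's view to inbox server and timestamps, as the paragraph notes: nothing in the stored entries identifies sender, recipient, or subject.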
== History ==
During the early history of cryptography, two parties would rely upon a key that they would exchange by means of a secure, but non-cryptographic, method such as a face-to-face meeting, or a trusted courier. This key, which both parties must then keep absolutely secret, could then be used to exchange encrypted messages. A number of significant practical difficulties arise with this approach to distributing keys.
=== Anticipation ===
In his 1874 book The Principles of Science, William Stanley Jevons wrote:
Can the reader say what two numbers multiplied together will produce the number 8616460799? I think it unlikely that anyone but myself will ever know.
Here he described the relationship of one-way functions to cryptography, and went on to discuss specifically the factorization problem used to create a trapdoor function. In July 1996, mathematician Solomon W. Golomb said: "Jevons anticipated a key feature of the RSA Algorithm for public key cryptography, although he certainly did not invent the concept of public key cryptography."
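Jevons's number no longer poses any difficulty: naive trial division finds its factors in a fraction of a second on modern hardware, which illustrates why key sizes must grow with computing power.

```python
# Factor Jevons's 1874 challenge number by simple trial division.
def smallest_factor(n):
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2  # n is odd, so only odd candidates are needed
    return n

n = 8616460799
p = smallest_factor(n)
print(p, n // p)  # prints: 89681 96079
```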
=== Classified discovery ===
In 1970, James H. Ellis, a British cryptographer at the UK Government Communications Headquarters (GCHQ), conceived of the possibility of "non-secret encryption" (now called public key cryptography), but could see no way to implement it.
In 1973, his colleague Clifford Cocks implemented what has become known as the RSA encryption algorithm, giving a practical method of "non-secret encryption", and in 1974 another GCHQ mathematician and cryptographer, Malcolm J. Williamson, developed what is now known as Diffie–Hellman key exchange.
The scheme was also passed to the US's National Security Agency. Both organisations had a military focus and only limited computing power was available in any case; the potential of public key cryptography remained unrealised by either organization:
I judged it most important for military use ... if you can share your key rapidly and electronically, you have a major advantage over your opponent. Only at the end of the evolution from Berners-Lee designing an open internet architecture for CERN, its adaptation and adoption for the Arpanet ... did public key cryptography realise its full potential.
—Ralph Benjamin
These discoveries were not publicly acknowledged for 27 years, until the research was declassified by the British government in 1997.
=== Public discovery ===
In 1976, an asymmetric key cryptosystem was published by Whitfield Diffie and Martin Hellman who, influenced by Ralph Merkle's work on public key distribution, disclosed a method of public key agreement. This method of key exchange, which uses exponentiation in a finite field, came to be known as Diffie–Hellman key exchange. This was the first published practical method for establishing a shared secret key over an authenticated (but not confidential) communications channel without using a prior shared secret. Merkle's "public key-agreement technique" became known as Merkle's Puzzles, and was invented in 1974 and only published in 1978. This makes asymmetric encryption a rather new field in cryptography, although cryptography itself dates back more than 2,000 years.
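The exchange can be sketched with deliberately small parameters; both parties arrive at the same shared value without ever transmitting their secret exponents:

```python
# Diffie-Hellman key agreement with toy parameters.
# Real deployments use groups of 2048 bits or more.
p, g = 0xFFFFFFFB, 5   # small prime modulus (2**32 - 5) and generator

a = 123456     # Alice's secret exponent
b = 654321     # Bob's secret exponent

A = pow(g, a, p)       # Alice sends A to Bob (over an open channel)
B = pow(g, b, p)       # Bob sends B to Alice

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # (g**b)**a == (g**a)**b  (mod p)
```

The equality holds because exponentiation commutes in the group, while recovering `a` or `b` from the public values is the discrete logarithm problem.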
In 1977, a generalization of Cocks's scheme was independently invented by Ron Rivest, Adi Shamir and Leonard Adleman, all then at MIT. The authors published their work in 1978, and the algorithm came to be known as RSA, from their initials. RSA uses exponentiation modulo a product of two very large primes to encrypt and decrypt, performing both public key encryption and public key digital signatures. Its security is connected to the extreme difficulty of factoring large integers, a problem for which there is no known efficient general technique. A description of the algorithm had earlier been published in Martin Gardner's Mathematical Games column in the August 1977 issue of Scientific American.
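The scheme can be illustrated with the classic textbook-RSA toy parameters; real keys use primes of a thousand bits or more, plus padding such as OAEP:

```python
# Textbook RSA with tiny primes: encryption and decryption are both
# modular exponentiation, and security rests on the difficulty of
# factoring n. Illustration only.
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent (Python 3.8+), 2753

m = 65                         # a message, encoded as a number < n
c = pow(m, e, n)               # encrypt with the public key (e, n)
assert pow(c, d, n) == m       # decrypt with the private key (d, n)

# Signing is the reverse: exponentiate with d, verify with e.
sig = pow(m, d, n)
assert pow(sig, e, n) == m
```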
Since the 1970s, a large number and variety of encryption, digital signature, key agreement, and other techniques have been developed, including the Rabin cryptosystem, ElGamal encryption, DSA and ECC.
== Examples ==
Examples of well-regarded asymmetric key techniques for varied purposes include:
Diffie–Hellman key exchange protocol
DSS (Digital Signature Standard), which incorporates the Digital Signature Algorithm
ElGamal
Elliptic-curve cryptography
Elliptic Curve Digital Signature Algorithm (ECDSA)
Elliptic-curve Diffie–Hellman (ECDH)
Ed25519 and Ed448 (EdDSA)
X25519 and X448 (ECDH over Curve25519 and Curve448)
Various password-authenticated key agreement techniques
Paillier cryptosystem
RSA encryption algorithm (PKCS#1)
Cramer–Shoup cryptosystem
YAK authenticated key agreement protocol
Examples of asymmetric key algorithms not yet widely adopted include:
NTRUEncrypt cryptosystem
Kyber
McEliece cryptosystem
Examples of notable – yet insecure – asymmetric key algorithms include:
Merkle–Hellman knapsack cryptosystem
Examples of protocols using asymmetric key algorithms include:
S/MIME
GPG, an implementation of OpenPGP (an Internet Standard)
EMV, EMV Certificate Authority
IPsec
PGP
ZRTP, a secure VoIP protocol
Transport Layer Security standardized by IETF and its predecessor Secure Socket Layer
SILC
SSH
Bitcoin
Off-the-Record Messaging
== See also ==
== Notes ==
== References ==
== External links ==
Oral history interview with Martin Hellman, Charles Babbage Institute, University of Minnesota. Leading cryptography scholar Martin Hellman discusses the circumstances and fundamental insights of his invention of public key cryptography with collaborators Whitfield Diffie and Ralph Merkle at Stanford University in the mid-1970s.
An account of how GCHQ kept their invention of PKE secret until 1997
Nigel Smart is a professor at COSIC at the Katholieke Universiteit Leuven and Chief Academic Officer at Zama. He is a cryptographer with interests in the theory of cryptography and its application in practice.
== Education ==
Smart received a BSc degree in mathematics from the University of Reading in 1989. He then obtained his PhD degree from the University of Kent at Canterbury in 1992; his thesis was titled The Computer Solutions of Diophantine Equations.
== Career ==
Smart proceeded to work as a research fellow at the University of Kent, the Erasmus University Rotterdam, and Cardiff University until 1995. From 1995 to 1997, he was a lecturer in mathematics at the University of Kent, and then spent three years in industry at Hewlett-Packard from 1997 to 2000. From 2000 to 2017 he was at the University of Bristol, where he founded the cryptology research group. Since 2018 he has been based in the COSIC group at the Katholieke Universiteit Leuven.
Smart held a Royal Society Wolfson Merit Award (2008–2013) and two ERC Advanced Grants (2011–2016 and 2016–2021). He was a director of the International Association for Cryptologic Research (2012–2014), and was elected vice president for the period 2014–2016. In 2016 he was named a Fellow of the IACR.
== Research ==
Smart is best known for his work in elliptic curve cryptography, especially his work on the ECDLP. He has also worked on pairing-based cryptography, contributing a number of algorithms such as SK-KEM and the Ate pairing.
Smart carries out research on a wide variety of topics in cryptography, and has been instrumental in the effort to make secure multiparty computation practical.
His work with Gentry and Halevi on performing the first large calculation using Fully Homomorphic Encryption won the IBM Pat Goldberg Best Paper Award for 2012.
In addition to his three years at HP Laboratories, Smart was a founder of the startup Identum, specialising in pairing-based cryptography and identity-based encryption. This was bought by Trend Micro in 2008. In 2013 he formed, with Yehuda Lindell, Unbound Security (formerly called Dyadic Security), a company focusing on deploying distributed cryptographic solutions based on multi-party computation. Unbound Security was bought by Coinbase in 2021. He is also the co-founder, along with Kenny Paterson, of the Real World Crypto conference series.
=== Publications ===
Nigel P. Smart (1998). The Algorithmic Resolution of Diophantine Equations. Cambridge University Press. ISBN 978-0-521-64633-8.
Ian F. Blake, Gadiel Seroussi and Nigel P. Smart (1999). Elliptic Curves in Cryptography. Cambridge University Press. ISBN 978-0-521-65374-9.
Nigel P. Smart (2002). Cryptography An Introduction. McGraw Hill. ISBN 978-0-07-709987-9.
I.F. Blake; G. Seroussi & Nigel P. Smart (2004). Advances in Elliptic Curve Cryptography. Cambridge University Press. ISBN 978-0-521-60415-4.
Nigel P. Smart, ed. (2005). Cryptography and Coding. Springer-Verlag, LNCS 3796. ISBN 978-3-540-30276-6.
Nigel P. Smart, ed. (2008). Advances in Cryptology - Eurocrypt 2008. Springer-Verlag, LNCS 4965. ISBN 978-3-540-78966-6.
Daniel Page & Nigel P. Smart (2014). What Is Computer Science? An Information Security Perspective. Springer-Verlag. ISBN 978-3-319-04041-7.
Nigel P. Smart (2015). Cryptography Made Simple. Springer International Publishing. ISBN 978-3-319-21935-6.
Arpita Patra & Nigel P. Smart (2017). Progress in Cryptology - INDOCRYPT 2017. Springer-Verlag. ISBN 978-3-319-71667-1.
== References ==
Open Network for Digital Commerce (ONDC) is a public technology initiative launched by the Department for Promotion of Industry and Internal Trade (DPIIT), Government of India to foster decentralized open e-commerce model and is led by a private non-profit Section 8 company. It was incorporated on 31 December 2021 with initial investment from Quality Council of India and Protean eGov Technologies Limited (formerly NSDL e-Governance Infrastructure Limited).
== History ==
ONDC is not an application, an intermediary, or software, but a set of specifications designed to foster open interchange and connections between shoppers, technology platforms, and retailers. ONDC was incorporated with the mission and vision of creating an inclusive ecosystem of e-commerce.
On 5 July 2021, a nine-member Advisory Council was constituted by the Department for Promotion of Industry and Internal Trade (DPIIT). The Quality Council of India (QCI) was tasked with incubating the ONDC based on open source methodologies using open specifications and network protocols. It incorporates inventory, logistics and dispute resolution, and is planned to cover 25% of domestic digital commerce within two years of launch. ONDC will unbundle delivery services so that customers can choose their own logistics provider. By 26 October 2021, QCI had established a team of experts and on-boarded some small and medium-sized industries as volunteers for project execution, while DPIIT approved ₹10 crore as initial investment.
From November 2021 to March 2022, various public and private sector entities picked up stakes in ONDC by investing seed money to become early promoters. These include Punjab National Bank (9.5% for ₹25 crore), State Bank of India (7.84% for ₹10 crore), Axis Bank (7.84%), Kotak Mahindra Bank (7.84%), BSE Investments (5.88%), Central Depository Services (6.78%), ICICI Bank (5.97% for ₹10 crore) and Small Industries Development Bank of India (7.84% for ₹10 crore). On 23 March 2022, Common Service Centres (CSC) under the Ministry of Electronics and Information Technology (MeitY) announced that it will promote ONDC for e-commerce and logistics in rural areas through 3 lakh Grameen e-Stores. The National Payments Corporation of India (NPCI) and the National Stock Exchange of India (NSE) committed funding for ONDC as promoters. On 31 August 2022, the Ministry of Finance and the Reserve Bank of India (RBI) cleared NPCI to acquire a 10% stake in ONDC by investing ₹10 crore. Bank of India acquired a 5.56% stake by investing ₹10 crore.
As of July 2022, more than 20 organisations, including UCO Bank, HDFC Bank and Bank of Baroda, had committed investments of ₹255 crore (US$30 million) into the project. HDFC Bank acquired a 7.84% stake in ONDC. In April, ONDC had received ₹157.5 crore for the first stage of the project from 17 banks and financial institutions. For the pilot project, eSamudaay will help in dealing with the consumer-facing interface, Gofrugal Technologies supplied enterprise resource planning software, GrowthFalcons will handle digital marketing, and SellerApp will help sellers with automation and provide digital insight on sales. On 9 August 2022, SIDBI signed a memorandum of understanding (MoU) with ONDC to bring small industries onto the network. To make the transition to ONDC easier, Yes Bank is working with SellerApp for business enterprise customers.
As per Nandan Nilekani, the Account Aggregator Network (AAN) will work alongside ONDC, allowing everyone in the supply chain to access formal credit in an efficient manner. ONDC signed an MoU with the Jammu and Kashmir Trade Promotion Organization (JKTPO) on 25 August 2022 to increase e-commerce adoption in Jammu and Kashmir. The Union government is planning to integrate the One District One Product scheme with ONDC. IDFC First Bank joined ONDC on 8 September 2022 and will push its current account holders to use the network for business transactions. During talks at Stanford University, Piyush Goyal clarified that through ONDC India is interested in creating multiple unicorns instead of a few trillion-dollar e-commerce companies. ONDC signed an MoU with the Government of Madhya Pradesh on 22 November 2022 to increase local adoption, sprint programming and software development. ONDC will incorporate the mobility, hospitality and travel sectors.
As per McKinsey & Company, by 2030 ONDC has the potential to increase digital consumption in India by $340 billion. On 6 August 2024, Government of Nagaland and ONDC signed MoU for supporting digital sales of small companies, farmers, craftsmen, and entrepreneurs. Ilandlo Services has been appointed as the nodal agency.
=== DigiReady Certification ===
The DigiReady Certification portal was launched, as announced by ONDC and the Quality Council of India on 8 February 2024. Through the use of this online self-assessment tool, small and medium-sized enterprises can determine how well-equipped they are to join the ONDC platform as vendors. The certification procedure assesses a number of digital readiness factors, such as the availability of the paperwork required for online operations, the user's familiarity with software and technology, the ability to integrate with current digital processes, and the effectiveness of order and catalog management. Participants will receive an e-certificate designating their company as DigiReady to onboard one of the network seller partners upon satisfactorily answering all critical questions. Anyone who answers a question wrong will be sent to a hand-holding tool to help them become more proficient in the appropriate module so they can answer questions correctly. Seller Applications will contact entities to make the on-boarding process easier for them when they are DigiReady and give their permission.
=== Build for Bharat hackathon ===
The ONDC distributed equity capital of up to $250,000 for Antler as part of the initiative. Additionally, teams that can devise original solutions to challenging business issues in the nation's e-commerce environment will receive $100,000 in Google Cloud credits. There were 27,000 participants in all, including businesses, startups, and academic institutions from all around India.
In cooperation with the Startup India platform, Google Cloud, Antler, Protean, and Paytm, the program was launched in December 2023. With experts, leaders, venture capitalists, and incubators present, it was held in over 50 cities. On 28 June 2024, the hackathon's second edition was introduced in three categories: foundation solutions, scalable solutions, and next-generation initiatives.
=== Reference application ===
On September 11, 2024, ONDC and Bhashini jointly released Saarthi, a reference application meant to help companies build their own unique buyer-side applications. Saarthi helps ONDC network users create buyer apps with multilingual functionality. The app currently supports Hindi, English, Marathi, Bengali, and Tamil. The intention is to eventually support all 22 languages. Saarthi's speech recognition, transliteration, and real-time AI-driven translation capabilities help businesses reach a wider audience, penetrate new markets, and attract more clients. Regular security upgrades for Saarthi were promised by ONDC. Scalability and simplicity of integration are key design considerations for the Saarthi reference application.
== Pilot phases ==
The pilot phase was launched on 29 April 2022 in five cities; New Delhi, Bengaluru, Bhopal, Shillong and Coimbatore. In April, Bengaluru based Woolly Farms received an order from buyer application Paytm. This was the first transaction on ONDC.
Restaurant management platform Petpooja partnered with marketing agency GrowthFalcons to onboard food and beverage brands on ONDC network. They help restaurants with discovery and demand analytics. On 5 May 2022, GrowthFalcons, Paytm and logistics company LoadShare successfully completed the first live cascaded transaction on ONDC. The delivery happened on Petpooja point of sale (PoS) terminal.
By August 2022, all systems were expected to become operational, with ONDC servicing 100 cities and opening for wider public use in Bengaluru, New Delhi, Coimbatore, Bhopal and Shillong, where pilot projects were ongoing. By the end of 2022, ONDC planned to go live at the national level, covering the entire country. ONDC expanded to Noida, Faridabad, Lucknow, Bijnor, Bhopal, Chhindwara, Kolkata, Pune, Chennai, Kannur, Thrissur, Udupi, Kanchipuram, Pollachi, Mannar and Ramanathapuram on 18 July 2022.
ONDC activated ecommerce in agriculture domain on 20 June 2022 with National Bank for Agriculture and Rural Development (NABARD). ONDC is attempting to integrate all Agri-tech and Non Agri-tech ecommerce platforms into common network while these companies build solutions that can further accelerate the adoption and establish market linkage for Farmer Producer Organizations.
The Government of Uttar Pradesh started actively promoting the One District One Product (ODOP) scheme on ONDC. DPIIT will initiate integration of Union Government ministries and various departments under state governments with ONDC. The platform will try to rectify issues registered with the National Consumer Helpline, such as delivery of wrong, defective or damaged goods, non-delivery or delayed delivery, failed refunds, or violation of service agreements. The government is planning to leverage the incubators under the Startup India Seed Fund Scheme (SISFS) to build applications for ONDC with vernacular language support for greater accessibility. ONDC capped the referral commission for sending a buyer to a seller at 3%, compared to around 30% on major e-commerce platforms.
From 21 cities, the pilot phase moved to 51 cities by the third week of August. The Economic Times reported that ONDC was facing teething troubles with respect to address and geolocation APIs. From September, Bengaluru would become the first city to get all-user access. By 2024, ONDC is planning to bring 3 crore sellers and 30 crore shoppers onto the network. On 31 August 2022, the government announced that ONDC would move to a beta-testing phase for the general public. ONDC will charge a fee from sellers for maintenance and development of the network.
By allowing the integration of CSC's e-Grameen app on the ONDC network as a buyer application, Common Services Centers (CSCs), under the Ministry of Electronics and Information Technology (MEitY), have teamed with the ONDC to make e-commerce accessible to rural communities. During the initial stage, CSC will function as a buyer-side platform, enabling citizens who visit the center to place orders using the e-Grameen app. The sellers who have registered on the CSC platform will be able to accept orders over the ONDC network during the second phase.
=== Public Distribution System ===
In Himachal Pradesh, Ministry of Consumer Affairs, Food and Public Distribution started a pilot program to add fair price stores to ONDC. The program is being funded as a pilot by the Department of Food and Public Distribution (DFPD) in collaboration with nStore, a technology partner, and MicroSave, a company that offers fair price shops assistance. On 6 February 2024, the first order was placed at 11 fair price stores to kick off the experimental program. The pilot helps fair price stores boost their usage and income from market goods while also giving customers in rural areas access to groceries and other necessities through ONDC. Fair pricing shops on ONDC now provide household groceries that were previously only available at regular kirana stores. These products are available for purchase using ONDC buyer apps, such as Paytm. The National Cooperative Consumers' Federation (NCCF) operates fair price stores all over India. Other states will follow suit with the trial project's success.
=== Public beta ===
The public beta was rolled out on 30 September 2022 in Bangalore Urban district. On the first day, LoadShare processed 100 orders. The Paytm app faced an issue with a missing order option, and there was one order cancellation from the merchant side. As per the government, ONDC will charge 8–10% of the selling price of the product, compared to the 18–40% charged by Amazon and Flipkart. The commission rate is not fixed and will be decided by market forces. Small retailers also have less reason to fear that the platform will open dark stores, and restaurants will have access to customer data. From 15 October 2022, ONDC initiated an outreach program for restaurants and cloud kitchens in Bengaluru. From December 2022, the public beta was set to roll out in Delhi and Mumbai.
From March 2023, ONDC started network-wide scoring of sellers. This network-wide rating includes elements of feedback regarding transactions and performance of the sellers, including the extent of grievance resolution. As of 16 January 2023, ONDC had completed 4,000 successful transactions as part of beta testing. NPCI Bharat Bill Payment System launched the NOCS platform, which will provide reconciliation and settlement services for ONDC transactions. It will be integrated with banks, fintechs and e-commerce players, going live initially with AU Small Finance Bank, Axis Bank, HDFC Bank, IDFC First Bank, and YES Bank. During Diwali week, ONDC recorded nearly 1.2 million transactions from 6 November to 13 November 2023 across 600+ cities. In December 2023, ONDC touched 5.5 million transactions in a month, of which 2.1 million were from the retail category and 3.4 million from the mobility category. Retail purchases grew from 1,281 in January 2023 to 2.1 million in December 2023.
In an attempt to guarantee that consumers and sellers reap the benefits and that a strong e-commerce ecosystem is constructed to endure any disturbance in the physical market, the Indian government has started a feasibility study to integrate all of its associated platforms with ONDC. A test of this kind involves combining E-NAM with ONDC for agricultural commodities.
Over 59,000 of the 2.3 lakh sellers and service providers that have been on-boarded by different seller platforms are food and beverage suppliers, according to Som Parkash, Minister of State for Commerce and Industry. In order to educate its members about the advantages of the ONDC platform, ONDC collaborates closely with the National Restaurant Association of India (NRAI) and the Federation of Hotel & Restaurant Associations of India (FHRAI). In addition, ONDC is actively collaborating with the Ministry of Micro, Small, and Medium Enterprises to connect MSME-Mart, which has over 2 lakh MSMEs, with ONDC and to onboard MSMEs to the network through current seller applications. The "Feet on Street" initiative by ONDC assists Network Participants (NPs) in locating and instructing vendors regarding the advantages of ONDC and the process of joining through seller applications. It also provides handholding support to sellers during the onboarding process and in developing an entry-level basic catalogue. An additional project is the ONDC Academy, a collection of instructive and enlightening textual and video content that offers curated learning experiences, best practices, and advice for every member of the ONDC network to have a successful e-commerce journey.
In February 2024, ONDC fulfilled over 7.1 million orders in total. ONDC exceeded 6.75 million cumulative orders in January 2024. Of them, 3.56 million (or 52.8%) were orders related to mobility, and 3.19 million (or 47.2%) were non-mobility orders. Currently, over 370,000 vendors and service providers use ONDC. More than 800 cities have seen transaction fulfillment from ONDC. Additionally, ONDC has started a pilot program to onboard street food sellers in Lucknow and Delhi. 500 street food vendors will be added to the system in each of these cities, and more seller and buyer apps will be added as part of a national rollout. 8.9 million transactions were made in the retail and ride-hailing divisions, according to ONDC, which also saw a 23% increase in total transaction volume month over month. The retail category saw a record-breaking 200,000 retail transactions in a single day in May 2024, with 5 million orders placed, up from 3.59 million in April 2024. With 630,000 orders in the home and kitchen category, 330,000 orders in fashion, and 2 million orders in other retail subcategories, the grocery and food delivery categories each surpassed the one million order threshold for the first time. The food category made up only 20% of all retail orders, down from 76% in the previous year. In terms of Network orders, Delhi, Uttar Pradesh, and Maharashtra continued to be the top three states. Bihar recorded a 42% increase in orders, while Uttar Pradesh saw nearly twice as many.
Compared to 12.90 million transactions in September 2024, ONDC saw a 7.6 percent increase to 14 million transactions in the month of October 2024. 5.5 million came from the mobility category and 8.4 million from the non-mobility category. These numbers have increased by 10% and 7.6%, respectively. Groceries generated almost one million transactions, while the food and beverage category made the largest contribution in the non-mobility space with two million. The fashion section has recorded 11 million transactions, compared to 9 million in September. In October 2023, ONDC recorded just 4.5 million transactions, but year-over-year, its transactions have increased by 200 percent.
=== B2B e-commerce ===
ONDC will start testing B2B commerce from December 2022. Network participants initiated ONDC process integration from November.
=== Technical Resources ===
These are some helpful links to read more about ONDC.
Official Github repo - https://github.com/ONDC-Official
Technical resources - https://resources.ondc.org/tech-resources
News and Articles - https://opencommerce.network/
== ONDC Advisory Council ==
The primary role of the advisory council is to watch over ONDC implementation in the country. The members were selected based on their experience in fields such as technology, finance and commerce. Members included Nandan Nilekani, co-founder of Infosys; Anjali Bansal, founder of Avaana Capital; R. S. Sharma, Chief Executive Officer of the National Health Authority; Adil Zainulbhai, Chairman of the Quality Council of India and the Capacity Building Commission; Arvind Gupta, co-founder and head of the Digital India Foundation; Dilip Asbe, Managing Director and Chief Executive Officer of the National Payments Corporation of India; Suresh Sethi, Managing Director and Chief Executive Officer of National Securities Depository Limited; Praveen Khandelwal, Secretary-General of the Confederation of All India Traders; and Kumar Rajagopalan, Chief Executive Officer of the Retailers Association of India. The convener of the ONDC Advisory Council is Anil Agrawal, Additional Secretary in the Department for Promotion of Industry and Internal Trade under the Ministry of Commerce and Industry. Some of the members are volunteers of iSPIRT. Supriyo Ghosh is the chief architect behind ONDC.
As per the ONDC Advisory Council, a total of seven apps (one on the buyer side, five on the seller side and one from a logistics service provider) completed cascaded transactions in the pilot phase for the grocery, and food and beverage segments using the Beckn Protocol as of 23 June 2022 in all five cities. The council was tasked with increasing awareness and initiating a pilot phase among offline traders and handicraft artisans to prioritise their on-boarding. From 24 June 2022, it started working on network expansion, governance and upgrades for the platform ahead of the nationwide rollout.
== Objectives ==
The major objectives include:
ending monopolies of the platforms
democratisation and decentralisation
digitisation of the value chain
standardisation of operations
inclusivity and access for sellers, especially small and medium enterprises as well as local businesses
increased efficiency in logistics
more choices and independence for consumers
ensured data privacy and confidentiality
decreased cost of operation
It is compared to Unified Payments Interface (UPI).
== Structure ==
The ONDC uses a "free software methodology, open specifications and open network protocol". On the ONDC, consumers and merchants can transact goods and services independent of any platform or application. Beckn provides the technology and specification layer on which ONDC designs the network policies, network trust, grievance handling and reputation system. ONDC plans to implement a dynamic pricing model and digitised inventory management and to optimise delivery costs to help reduce the cost of doing business for everyone on the platform. ONDC will use a hyperlocal search engine model based on GPS proximity data as the default setting. The buyer can independently select the seller and logistics partner to complete an order.
=== Beckn Protocol ===
The backend of the ONDC is built on the Beckn Protocol, an open and interoperable protocol for decentralised digital commerce. Beckn Gateways provide anonymised, aggregated data generated from the network. Interoperability is achieved by unbundling the packet transmission layer from the experience layer, so that the core stages of a commercial transaction, such as discovery, order booking, payment, delivery and fulfilment, can happen in a standardised manner. The protocol can be customised to the requirements of customer and provider by taking a modular approach. Beckn is not e-commerce specific and can be used in other areas of digital public infrastructure such as healthcare and mobility.
Nandan Nilekani, Pramod Varma and Sujith Nair developed the Beckn Protocol in partnership with ISPIRT. The project is supported by the Beckn Foundation, and the Beckn brand is owned by the Open Shared Mobility Foundation, in which Nandan Nilekani is the sole investor. It facilitates the third layer of public digital infrastructure, the digital transaction layer, in an open digital ecosystem, which helps promote market competition and regulate anti-competitive behavior. The development came after US standards bodies failed to set new standards, leaving it to Big Tech companies to solve the issue. The Beckn Protocol is a substitute for, and compatible with, the similar protocols developed in the US that are in use globally for transactions and for sending information between computers. The success of UPI gave further impetus to developing more open-source ecosystems in other areas at national scale.
=== Industry Working Group ===
Microsoft and Oracle Corporation are seeking partnerships to supply technology solutions that can add value to the base ONDC network by helping companies scale up their operations. They are part of an industry working group developing buyer, seller and logistics-provider registries, a network reputation index and an online dispute resolution framework. To ease the on-boarding process, Microsoft started providing a seller toolkit, while Oracle is designing a neutral, transparent reputation index for sellers. Amazon is working with ONDC to digitise Kirana stores. Sequoia Capital, SoftBank Group and the Indian Venture Capital Association have started pushing their portfolio companies to actively take part in ONDC.
Microsoft announced that it is going to join ONDC to introduce social commerce in India and will also launch a shopping app for buyers to help them in price discovery. From August 2022 to January 2024, PhonePe will invest $15 million for developing ecommerce platform on the network. Amazon is exploring how to integrate with the network. Zoho Corporation will provide technical support and facilitate live chat among buyers and sellers.
To streamline the payment process for buyers, sellers and logistics partners, Razorpay is going to integrate a payment reconciliation service on ONDC to provide a single view of all transactions. It will be available to all network participants and will validate every transaction with an audit trail for documentation, helping to resolve future disputes or discrepancies.
Mahindra Logistics will provide same-day and next-day intra-city and inter-city pick-up and delivery services to all sellers on ONDC. Logistics company XpressBees will help ONDC offer pan-India delivery services to over 2,800 cities and 20,000 pincodes. On 19 December 2023, ONDC announced two distinct partnerships with Google and Meta Platforms in order to increase platform awareness through the use of WhatsApp and Google Maps. It will aid in increasing the number of monthly transactions between big businesses and retail customers on the platform. Kochi Metro will be integrated with Google Maps by ONDC, enabling customers to purchase tickets straight from the app by utilizing the ONDC framework. Meta is planning to incorporate the ONDC framework into WhatsApp in order to expedite the on-boarding process for small businesses. From 2024, Meta will concentrate on monetization via the ONDC platform. Additionally, Google Maps will integrate more public transportation options. In order to enable 500,000 micro, small and medium-sized enterprises (MSMEs) create conversational buyer and seller experiences on WhatsApp and move them into the digital platform, Meta and ONDC will collaborate to develop digital skills.
A Memorandum of Understanding (MoU) was signed by ONDC and Dhiway to collaboratively develop an interoperable distributed ledger network as a scoring repository called Confidex using CORD, an open-source, layer-one distributed ledger framework. With the goal of making the entire e-commerce process—from on-boarding to payment reconciliation—simpler, Zoho Corporation introduced Vikra on September 25, 2024. Vikra enables businesses to generate product catalogues, sell through platforms like Paytm, Ola, and Snapdeal, and onboard employees more quickly.
==== Concerns ====
Industry has raised concerns about the liability of network participants for consumer complaints and the implementation of e-commerce rules across various stakeholders. There are also open questions regarding third-party network policy audits and whether they will cover search algorithms. Experts are discussing how stakeholders can ensure that any entity developing both buyer-side and seller-side apps on ONDC is prevented from gaming the system. Companies and ONDC are also trying to resolve the issues of network-disruption liability and uptime guarantees. The private sector is waiting for traction before making any big, long-term investments in ONDC.
According to JM Financial, it is a challenge for ONDC to obtain data and customer buying-behavior algorithms from the larger e-commerce players, and the ONDC framework limits data usage and storage, which affects interoperability. As per CEO T Koshy, ONDC will not subsidize pricing for customers using shareholders' money; instead it will focus on innovation and special value addition to products. The government told ONDC to prepare a detailed report on the grievance redressal mechanism and submit it by the end of September 2022; the system needs to be robust and competitive. A third-party agency will handle Online Dispute Resolution (ODR) by checking the digital trail left by individual orders. Customers can check complaint status and how many are resolved, and this will feed into the final rating of merchants. ONDC will record all complaints filed against a merchant, linked directly to the network-wide reputation index that tabulates the percentages of complaints raised, resolved or unresolved, as well as those that consumers may choose to take to consumer courts. The draft ODR policy will be published by an expert committee of NITI Aayog, and the ONDC dashboard will publish monthly statistics of complaints received by seller-side apps. On 2 September 2022, ONDC invited public feedback on an Issue and Grievance Management Structure (IGMS) for resolving buyer and seller complaints. The mechanism will be similar to what the non-profit company Sahamati uses for its Account Aggregator Network, but faster and more traceable.
According to Consumer Affairs Secretary Rohit Kumar Singh, in order for customers to receive satisfactory services, ONDC must address the liability issue. He alerted T Koshy, CEO of ONDC, to the problem, saying that without resolving it, ONDC might not be able to serve customers.
== Acceptance ==
=== Buyer Apps ===
Paytm
Mystore
Magicpin
Khojle by Jagran
Ola
Spice Money
Crafts Villa
NobrokerHood
=== Seller Apps ===
Snapdeal
Bitsila
Mystore
ideamasters
GrowthFalcons
Meesho
DIGIIT
UdyamWell
Easy Pay
=== Logistics ===
Ekart
Dunzo
Delhivery
Loadshare
Porter
As of 12 May 2022, Flipkart, Reliance Retail and Amazon had started talks with ONDC officials. Ekart and Dunzo joined ONDC as Logistics Network Partners (LNP). Paytm and PhonePe, which are in advanced stages of integration, will act as Payment Network Partners (PNP) for ONDC; PhonePe is planning to join ONDC from both the buyer and seller sides. As of 15 May 2022, 24 e-commerce platforms were joining ONDC. Paytm Mall confirmed joining the ONDC network with a primary focus on exports. As of 6 July, 200 sellers were active in the grocery, and food and beverage categories. To access ONDC, a seller has to join a seller platform. After the success of Tez atop the Unified Payments Interface (UPI), Google initiated talks with ONDC to integrate Google Shopping with the platform. Ecom Express and Shiprocket are at the final stages of integration as LNPs.
Paytm from the buyer side; GoFrugal, SellerApp, Growth Falcon, eSamudaay and Digiit from the seller side; and LoadShare from the logistics side are successfully integrated with ONDC. StoreHippo started testing an ONDC-connected marketplace for small and medium-sized enterprises. Grab integrated as an LNP for hyperlocal intercity delivery on 1 August 2022. As of August, Titan Company, Zoho Corporation and the retail intelligence platform Bizom were in advanced stages of integration with the network. IDBI Bank, India Post, Marico and Airtel are also interested in joining ONDC. Dunzo for Business (D4B) joined ONDC as an LNP on 4 August to provide last-mile delivery for business-to-business (B2B) transactions.
By the end of August, SellerApp will onboard 3,000+ retail stores from Bengaluru. Innoviti Payment Solutions is integrating a seller-side app for small merchants on ONDC to let them publish their goods and offer credit to buyers. The business banking solution provider Zyro joined ONDC on the seller side and will help small retailers create digital stores on the network. Blowhorn integrated with ONDC as a logistics network partner to perform last-mile delivery and, in the final phase of testing, will go live from the fourth week of September 2022. Menson will onboard 11,500 restaurants on ONDC. The crowdsourced third-party logistics platform Shadowfax integrated with ONDC on 19 October 2022. The Kochi Open Mobility Network (KOMN) will integrate with ONDC to make transporters in Kerala discoverable on the network. On 23 November 2022, Meesho joined the network and started a pilot project in Bengaluru.
On 24 February 2023, Amazon announced the integration of its logistics network and Smart Commerce Services with ONDC. Within the first year of operations, 31,000 merchants with 37 lakh products were registered on the network. In the mobility space, 56,000 vehicles were registered with ONDC, with a daily average of 20,000 rides. As of 24 April 2023, ONDC passed the milestone of 5,000 daily orders in the foods and beverages, and grocery categories, averaging 6,000 orders on weekends, and it crossed the 10,000-order mark on 30 April 2023. On 24 May 2023, ONDC completed one year of operation, having expanded to over 236 cities of India with approximately 36,000 sellers, over 45 network participants and operations in more than 8 categories. According to the Ministry of Commerce, ONDC is handling on average 13,000+ retail orders weekly and 36,000+ mobility rides per day. As of 27 November 2023, DTDC is operational on ONDC.
On 12 January 2024, Hyperlocal Pharmacy Aggregator MedPay became a member of ONDC. Customers can now access its drugstore network via third-party buyer apps. Senco Gold & Diamonds became the first jewellery brand to join ONDC.
=== Agribusiness ===
On ONDC, 4,000 farmer producer organizations (FPOs) have offered up to 3,100 different types of value-added agricultural products since April 2023, and 6,000 FPOs are expected to be on ONDC by the end of FY24. FPO Rich Returns, situated in Rajasthan, sold ₹2,00,000 worth of millet-based products, garlic papadam and chickpea items through Mystore in November 2023. Aryahi Fed Farmers Producer Company, a collective of 673 farmers located in Uttar Pradesh, is among the first to sell more than ₹5,00,000 worth of honey and millet-based value-added products through Mystore within six months of joining the network. Using the Mystore online marketplace, the Odisha-based farmers' cooperative Daringbadi Farmers Producer Company supplied Geographical Indication (GI)-tagged turmeric powder to customers in 11 states.
With the goal of raising sales to ₹10 million by 2024, BASIX Krishi Samruddhi, a group that has assisted 59 FPOs onto ONDC across Odisha, West Bengal, Bihar, and Uttar Pradesh, reported total sales of ₹4 million in 2023. Because of India Post's extensive network, FPOs leveraged its services to distribute their goods, but since then, more logistics partners have been recruited to guarantee even faster product delivery to customers.
Under a central scheme launched in 2020 with a budgetary provision of ₹6,865 crore, nearly 5,000 out of 8,000 registered Farmer Producer Organizations (FPOs) have been onboarded on the ONDC portal for selling the produce online to consumers across the country, against the Ministry of Agriculture and Farmers' Welfare's target of 10,000.
=== Retail ===
Reliance Retail has started a pilot program on ONDC using the Fynd retail platform. The pilot is live in Madurai. Reliance Retail will scale it up to five locations if it is successful, and eventually to 100 outlets on ONDC. All of Reliance Retail's brands will gradually launch on ONDC.
On 30 July 2024, ONDC introduced an interoperable QR code to link online and offline vendors, regional craftspeople, and neighborhood store owners with consumers. With ONDC-registered buyer apps like Magicpin and Paytm, merchants can create a unique QR code that customers can scan to access their online store on the ONDC platform.
==== Fast-moving consumer goods ====
Fast-moving consumer goods (FMCG) companies and brands such as ITC Limited, Hindustan Unilever, Dabur and Nivea started talks on joining the network. Shopify also started showing interest on ONDC platform. Products with geographical indicator tags, Khadi and those made by Scheduled Tribes will be promoted on the platform for national level market access and community development. Snapdeal will go live on ONDC by end of August 2022. It will deal on fashion, home and beauty, and personal care categories connecting 2,500 cities. The company will take help of LNP for last mile delivery. Livestream shopping platform Kiko Live will start operation from September 2022 starting with grocery, stationary items, medicine, gifts and novelty, flowers, fashion and cosmetics, electronics, electrical and hardware, dairy products and fresh meat.
Through its multi-brand direct-to-consumer platform, UShop, Hindustan Unilever became the first fast-moving consumer goods (FMCG) company to join ONDC in 2022. They will also help onboard nearly 1.3 million Kirana stores on the platform. The Coca-Cola Company introduced Coke Shop to join the ONDC. For data-driven insights, market intelligence, and strategy, it also partnered with SellerApp. UK based The Body Shop became the first international beauty and skincare brand to join the platform. By November 2023, ONDC surpassed 1.5 million monthly e-commerce purchases, with food accounting for almost one-third of all orders placed by customers. As orders soar, retail sales expand beyond food to include fashion, beauty, and electronics. ONDC has welcomed the Good Glamm Group, a conglomerate that specializes in content creation, commerce, and community, as a direct seller on 12 December 2023. The company hopes to reach 100% more customers and enhance its revenue potential by 50%.
=== Finance ===
The ONDC successfully tested a few personal loan transactions with Easy Pay and DMI Finance in January 2024. ONDC is constructing the framework for three major product categories: wealth management, insurance, and credit. The platform is currently undergoing various stages of integration with Aditya Birla Capital, Tata Capital, Canara Bank, Kotak Mahindra Bank, Bajaj Finserv, Bajaj Allianz and Aditya Birla Health. Enabling small business owners to transact loans is one of ONDC's primary goals. It is creating the framework for a financing product that is based on GST invoices. Additionally, efforts are being made to standardize the data format that is stored on buyer-side applications' platforms. The goal is to enable lenders and insurance providers to incorporate the data into their rule engines so they can identify the right clients. This will make it possible to process financial product purchases in a number of use cases simply. As of now, 85 enterprises have expressed interest in the credit segment; however, only 7 of them enrolled in the pilot program. Companies from the buyer-side applications are Tata Digital, Paynearby, Easypay, and Rapidor. During the trial period, DMI Finance, Aditya Birla Finance, and Karnataka Bank are the lenders. ONDC will first offer health, auto, and marine insurance.
Early adopters of buyer-side apps are anticipated to include Policybazaar (Marine), Cliniq 360 (Health), and Insurance Dekho (Marine, Health, and Motor). The insurers Aditya Birla Health (Health), Kotak General (Marine), and Bajaj Allianz (Marine, Motor) are anticipated to go live first. With a few chosen partners, ONDC is developing sachet marine insurance products, costing as little as ₹3 to ₹5, to cover supplies during transit and lessen the risk of doing business for smaller sellers. The first wealth management product would be sachet mutual fund investments, accessible to those at the bottom of the economic pyramid for less than ₹100. MF Utilities is the first seller-side app that ONDC has onboarded; it makes it possible for a buyer-side app to trade in mutual funds and show a single snapshot of an individual's holdings across platforms.
=== Mobility ===
ONDC entered the urban mobility space in partnership with Namma Yatri, an auto-rickshaw-hailing mobile application launched by Juspay Technologies, and has completed 25,000 daily rides on the app. Namma Yatri will operate on a zero-commission approach to link drivers and passengers seeking trips in Delhi. It is currently in use in Kolkata, Bangalore, Kochi, and Mysore. Over 10,000 drivers have been onboarded by Namma Yatri in Delhi, with over 50,000 drivers expected to join in the following three months. In 2023, drivers received ₹330 crore for rides made via the app. As of 2023, Namma Yatri had over 1.7 lakh drivers on board and 40 lakh riders who had booked 20 million rides using the application.
On 18 December 2023, the Hyderabad Auto & Taxi Drivers Association announced the release of a new ride-hailing app supported by ONDC protocols. The app was developed by Yaary, a startup headquartered in Hyderabad. More than 20,000 auto and taxi drivers in Hyderabad have already been onboarded by the firm, which is working with other driver associations in other cities to introduce comparable mobility apps.
In Chennai, Namma Yatri made its debut on 29 January 2024. As the first metro service in India to integrate with the ONDC, Chennai Metro enables users to easily purchase single and return travel tickets through a variety of ONDC network apps, including RedBus, Namma Yatri, and Rapido. Millions of Indian commuters will have easier access to urban transit as metro systems in Kochi, Kanpur, Pune, and other cities are set to follow Chennai's lead. As part of its partnership with ONDC, Uber is looking into services including intercity bus and metro train ticket purchases in India.
=== Tourism ===
EaseMyTrip announced the signing of a Letter of Intent (LOI) to join the ONDC on 23 May 2024. In order to provide travel services on the ONDC, EaseMyTrip has announced the launch of ScanMyTrip on 12 September 2024. By allowing them to offer their services—flights, hotels, and homestays—on the ONDC, ScanMyTrip seeks to empower MSMEs, homestays, and OTAs (Online Travel Agencies).
==== Hospitality industry ====
On 21 June 2022, National Restaurant Association of India (NRAI) initiated talks with officials from ONDC. Initially some large and small restaurants will join the network as pilot project and work on technology standardization. Restaurant Network Partners (RNP) under ONDC will help in on-boarding process and act as an aggregator. Post on-boarding, RNP will educate and train the restaurants on online delivery business, maintain packaging and hygiene norms and ensure that food preparation is as per service level agreements to protect consumer interest. Building consumer pipeline and delivery infrastructure will not come under RNP.
The Indian Railway Catering and Tourism Corporation (IRCTC) is also engaging with ONDC to let consumers buy things during travel. To de-risk its online sales from the food aggregators Zomato and Swiggy, Domino's went live on ONDC in the National Capital Region; the platform integration was finished on 30 January 2024, and other cities will go live in a phased manner. Rebel Foods, a cloud kitchen company, expanded its EatSure direct-to-consumer (D2C) platform to ONDC.
McDonald's restaurants offering à la carte and meal options that serve the Eastern and Northern regions of India joined the ONDC. At the ONDC Startup Mahotsav on 17 May 2024, EaseMyTrip signed a Letter of Intent to become a part of the ONDC Network.
From July 2024, the Bruhat Bengaluru Hoteliers Association (BBHA) and GrowthFalcons will collaborate to focus on the meal delivery industry. Bengaluru is currently placing at least 50,000–60,000 food delivery orders each day on ONDC using Ola Consumer as the buying app.
=== Automotive industry ===
Hero MotoCorp joined ONDC and would provide products, accessories, and parts for two-wheelers. 'Hero Genuine Parts' will be made available for customers using any buyer app on the network, including Mystore and Paytm.
=== Entertainment ===
On 24 September 2024, OTTplay by HT Media Labs announced that it was joining the ONDC network, becoming the first OTT platform to do so. OTTplay will offer its membership plans via gift cards starting from ₹249 per month, including popular packs such as Jhakaas, Simply South, and Totally Sorted, which will be easily accessible to customers over the ONDC network. The membership options let users explore and enjoy entertainment catered to their own interests.
=== Cross-border trade ===
The first successful international cross-border transaction was carried out by ONDC and the Singapore-based business-to-business network Proxtera on 10 January 2024.
== References ==
== External links ==
Official website | Wikipedia/Open_Network_for_Digital_Commerce |
In cryptography, a pseudorandom function family, abbreviated PRF, is a collection of efficiently-computable functions which emulate a random oracle in the following way: no efficient algorithm can distinguish (with significant advantage) between a function chosen randomly from the PRF family and a random oracle (a function whose outputs are fixed completely at random). Pseudorandom functions are vital tools in the construction of cryptographic primitives, especially secure encryption schemes.
Pseudorandom functions are not to be confused with pseudorandom generators (PRGs). The guarantee of a PRG is that a single output appears random if the input was chosen at random. On the other hand, the guarantee of a PRF is that all its outputs appear random, regardless of how the corresponding inputs were chosen, as long as the function was drawn at random from the PRF family.
A pseudorandom function family can be constructed from any pseudorandom generator, using, for example, the "GGM" construction given by Goldreich, Goldwasser, and Micali. While in practice, block ciphers are used in most instances where a pseudorandom function is needed, they do not, in general, constitute a pseudorandom function family, as block ciphers such as AES are defined for only limited numbers of input and key sizes.
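The GGM tree construction mentioned above can be sketched as follows. This is a hedged illustration, not a production implementation: SHA-256 with a prefix byte stands in for a length-doubling PRG, and the function names are choices made here, not part of the original construction, which assumes an arbitrary PRG.

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    """Length-doubling PRG G(s) = (G_0(s), G_1(s)).
    SHA-256 stands in for a real PRG, purely for illustration."""
    return (hashlib.sha256(b"0" + seed).digest(),
            hashlib.sha256(b"1" + seed).digest())

def ggm_prf(seed: bytes, x: str) -> bytes:
    """GGM construction: descend a binary tree rooted at the seed,
    letting each input bit pick the left or right PRG half."""
    value = seed
    for bit in x:
        left, right = prg(value)
        value = left if bit == "0" else right
    return value

# Deterministic, and distinct inputs land in distinct tree leaves.
assert ggm_prf(b"seed", "0110") == ggm_prf(b"seed", "0110")
assert ggm_prf(b"seed", "0110") != ggm_prf(b"seed", "1110")
```

Each input bit chooses one subtree, so an n-bit input selects one of 2^n leaves while only the short root seed is stored, which is exactly how the construction replaces a huge random table with a PRG.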
== Motivations from random functions ==
A PRF is an efficient (i.e. computable in polynomial time), deterministic function that maps between two sets (a domain and a range) and looks like a truly random function.
Essentially, a truly random function would just be composed of a lookup table filled with uniformly distributed random entries. However, in practice, a PRF is given an input string in the domain and a hidden random seed and runs multiple times with the same input string and seed, always returning the same value. Nonetheless, given an arbitrary input string, the output looks random if the seed is taken from a uniform distribution.
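The lookup-table picture of a truly random function can be made concrete with a lazily sampled table. This is a sketch; the class name and 16-byte output length are arbitrary choices made here.

```python
import secrets

class TrulyRandomFunction:
    """A truly random function, realised as a lookup table whose
    entries are sampled uniformly the first time they are needed."""

    def __init__(self, out_bytes: int = 16):
        self.table = {}
        self.out_bytes = out_bytes

    def __call__(self, x: bytes) -> bytes:
        if x not in self.table:                     # lazy sampling
            self.table[x] = secrets.token_bytes(self.out_bytes)
        return self.table[x]

f = TrulyRandomFunction()
# Like any function, repeated queries on the same input agree,
# while each fresh input receives an independent uniform value.
assert f(b"query") == f(b"query")
assert len(f(b"another query")) == 16
```

A PRF replaces this table, which grows with every query, by a short hidden seed that determines all outputs at once.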
A PRF is considered to be good if its behavior is indistinguishable from a truly random function. Therefore, given an output from either the truly random function or a PRF, there should be no efficient method to correctly determine whether the output was produced by the truly random function or the PRF.
== Formal definition ==
Pseudorandom functions take inputs $x\in \{0,1\}^{*}$, where ${}^{*}$ is the Kleene star. Both the input size $I=|x|$ and output size $\lambda$ depend only on the index size $n:=|s|$.
A family of functions, $f_{s}\colon \{0,1\}^{I(n)}\rightarrow \{0,1\}^{\lambda (n)}$, is pseudorandom if the following conditions are satisfied:
There exists a polynomial-time algorithm that computes $f_{s}(x)$ given any $s$ and $x$.
Let $F_{n}$ be the distribution of functions $f_{s}$ where $s$ is uniformly distributed over $\{0,1\}^{n}$, and let $RF_{n}$ denote the uniform distribution over the set of all functions from $\{0,1\}^{I(n)}$ to $\{0,1\}^{\lambda (n)}$. Then we require that $F_{n}$ and $RF_{n}$ be computationally indistinguishable, where $n$ is the security parameter. That is, for any adversary that can query the oracle of a function sampled from either $F_{n}$ or $RF_{n}$, the advantage with which she can tell which kind of oracle she is given is negligible in $n$.
== Oblivious pseudorandom functions ==
In an oblivious pseudorandom function, abbreviated OPRF, information is concealed from two parties that are involved in a PRF. Alice cryptographically hashes her secret value and cryptographically blinds the hash to produce the message she sends to Bob; Bob mixes in his secret value and gives the result back to Alice, who unblinds it to get the final output. Bob is not able to see either Alice's secret value or the final output, and Alice is not able to see Bob's secret input, but Alice sees the final output, which is a PRF of the two inputs: Alice's secret and Bob's secret. This enables transactions of sensitive cryptographic information to be secure even between untrusted parties.
An OPRF is used in some implementations of password-authenticated key agreement.
An OPRF is used in the Password Monitor functionality in Microsoft Edge.
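The blind-evaluate-unblind exchange described above can be sketched with a toy Diffie-Hellman-style OPRF in a small multiplicative group. Everything below is an illustrative assumption: the tiny Schnorr-group parameters, the names, and the naive hash-to-group map are chosen for readability; real OPRFs use elliptic-curve groups and standardized hash-to-curve methods.

```python
import hashlib
import secrets

# Toy Schnorr group (illustrative parameters only): p = 2q + 1 with
# q prime, and g = 4 generating the subgroup of order q.
p, q_ord, g = 2039, 1019, 4

def hash_to_group(data: bytes) -> int:
    """Naive hash-to-group: map the input to a subgroup element."""
    e = int.from_bytes(hashlib.sha256(data).digest(), "big") % q_ord
    return pow(g, e, p)

k = secrets.randbelow(q_ord - 1) + 1    # Bob's secret PRF key

# Alice: hash her secret input, then blind it with a random exponent r.
x = b"alice's secret input"
h = hash_to_group(x)
r = secrets.randbelow(q_ord - 1) + 1
blinded = pow(h, r, p)                  # sent to Bob; hides h (and x)

# Bob: raise the blinded element to his key and return it.
evaluated = pow(blinded, k, p)          # Bob never sees h or the output

# Alice: unblind with r^-1 mod q to obtain h^k, without learning k.
unblinded = pow(evaluated, pow(r, -1, q_ord), p)
output = hashlib.sha256(x + unblinded.to_bytes(2, "big")).digest()

assert unblinded == pow(h, k, p)        # equals Bob's direct evaluation
```

The unblinding works because (h^r)^k raised to r^-1 mod q equals h^k in a group of order q, so Alice recovers the PRF value of her input under Bob's key while each party's secret stays hidden.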
== Application ==
PRFs can be used for:
dynamic perfect hashing; even if the adversary can change the key-distribution depending on the values the hashing function has assigned to the previous keys, the adversary can not force collisions.
Constructing deterministic, memoryless authentication schemes (message authentication code based) which are provably secure against chosen message attack.
Distributing unforgeable ID numbers, which can be locally verified by stations that contain only a small amount of storage.
Constructing identification friend or foe systems.
== See also ==
Pseudorandom permutation
== Notes ==
== References ==
Goldreich, Oded (2001). Foundations of Cryptography: Basic Tools. Cambridge: Cambridge University Press. ISBN 978-0-511-54689-1.
Pass, Rafael, A Course in Cryptography (PDF), retrieved 22 December 2015 | Wikipedia/Pseudorandom_function_family |
Multivariate cryptography is the generic term for asymmetric cryptographic primitives based on multivariate polynomials over a finite field $F$. In certain cases, those polynomials could be defined over both a ground and an extension field. If the polynomials have degree two, we talk about multivariate quadratics. Solving systems of multivariate polynomial equations is proven to be NP-complete. For this reason, those schemes are often considered to be good candidates for post-quantum cryptography. Multivariate cryptography has been very productive in terms of design and cryptanalysis. Overall, the situation is now more stable, and the strongest schemes have withstood the test of time. It is commonly accepted that multivariate cryptography has turned out to be more successful as an approach to building signature schemes, primarily because multivariate schemes provide the shortest signatures among post-quantum algorithms.
== History ==
Tsutomu Matsumoto and Hideki Imai (1988) presented their so-called C* scheme at the Eurocrypt conference. Although C* was broken by Jacques Patarin (1995), the general principle of Matsumoto and Imai has inspired a generation of improved proposals. In later work, Patarin developed the "Hidden Monomial Cryptosystems" scheme, based on a ground field and an extension field. "Hidden Field Equations" (HFE), developed by Patarin in 1996, remains a popular multivariate scheme today [P96]. The security of HFE has been thoroughly investigated, beginning with a direct Gröbner basis attack [FJ03, GJS06], key-recovery attacks (Kipnis & Shamir 1999) [BFP13], and more. The plain version of HFE is considered to be practically broken, in the sense that secure parameters lead to an impractical scheme. However, some simple variants of HFE, such as the minus variant and the vinegar variant, allow one to strengthen the basic HFE against all known attacks.
In addition to HFE, Patarin developed other schemes. In 1997 he presented “Balanced Oil & Vinegar” and in 1999 “Unbalanced Oil and Vinegar”, in cooperation with Aviad Kipnis and Louis Goubin (Kipnis, Patarin & Goubin 1999).
== Construction ==
Multivariate quadratics involve a public and a private key. The private key consists of two affine transformations, S and T, and an easy-to-invert quadratic map P′ : F^m → F^n. We denote the n × n matrix of the affine endomorphism S : F^n → F^n by M_S and its shift vector by v_S ∈ F^n, and similarly for T : F^m → F^m. In other words, S(x) = M_S·x + v_S and T(y) = M_T·y + v_T.
The triple (S⁻¹, P′⁻¹, T⁻¹) is the private key, also known as the trapdoor. The public key is the composition P = S ∘ P′ ∘ T, which is by assumption hard to invert without knowledge of the trapdoor.
== Signature ==
Signatures are generated using the private key and are verified using the public key as follows. The message is hashed to a vector y ∈ F^n via a known hash function. The signature is x = P⁻¹(y) = T⁻¹(P′⁻¹(S⁻¹(y))).
The receiver of the signed document must be in possession of the public key P. He computes the hash y and checks that the signature x fulfils P(x) = y.
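The construction and signing procedure above can be sketched end to end with a deliberately tiny toy instance. The sketch below works over GF(7) with n = m = 2 and uses a triangular ("tame") quadratic map as the easy-to-invert P′; the field size, dimensions, and the triangular choice are illustrative assumptions only, and the parameters are far too small to be secure.

```python
import random

q = 7  # toy ground field GF(7)

def inv(a):                        # modular inverse in GF(7)
    return pow(a % q, q - 2, q)

def rand_affine():
    # random invertible affine map x -> Mx + v over GF(7)^2, with its inverse
    while True:
        M = [[random.randrange(q) for _ in range(2)] for _ in range(2)]
        det = (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % q
        if det:
            break
    v = [random.randrange(q) for _ in range(2)]
    d = inv(det)
    Minv = [[M[1][1] * d % q, -M[0][1] * d % q],
            [-M[1][0] * d % q, M[0][0] * d % q]]
    fwd = lambda x: [(M[0][0] * x[0] + M[0][1] * x[1] + v[0]) % q,
                     (M[1][0] * x[0] + M[1][1] * x[1] + v[1]) % q]
    bwd = lambda y: [(Minv[0][0] * (y[0] - v[0]) + Minv[0][1] * (y[1] - v[1])) % q,
                     (Minv[1][0] * (y[0] - v[0]) + Minv[1][1] * (y[1] - v[1])) % q]
    return fwd, bwd

# Easy-to-invert quadratic map P': triangular, so it inverts by substitution.
def P_prime(x):     return [x[0], (x[1] + x[0] * x[0]) % q]
def P_prime_inv(y): return [y[0], (y[1] - y[0] * y[0]) % q]

S, S_inv = rand_affine()
T, T_inv = rand_affine()

def public_map(x):   # public key P = S ∘ P' ∘ T
    return S(P_prime(T(x)))

def sign(y):         # trapdoor (S⁻¹, P'⁻¹, T⁻¹): invert each layer in turn
    return T_inv(P_prime_inv(S_inv(y)))

y = [3, 5]                   # stand-in for the hashed message
x = sign(y)
assert public_map(x) == y    # verification: P(x) = y
```

An attacker who sees only the quadratic polynomials of the composed map P, without the layer decomposition, faces the hard inversion problem; the signer inverts each layer separately.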
== Applications ==
Unbalanced Oil and Vinegar
Hidden Field Equations
SFLASH by NESSIE
Rainbow
TTS
QUARTZ
QUAD (cipher)
Four multivariate cryptography signature schemes (GeMSS, LUOV, Rainbow and MQDSS) made their way into the second round of the NIST post-quantum competition: see slide 12 of the report.
== References ==
[BFP13] L. Bettale, Jean-Charles Faugère, and L. Perret, Cryptanalysis of HFE, Multi-HFE and Variants for Odd and Even Characteristic. DCC'13
[FJ03] Jean-Charles Faugère and A. Joux, Algebraic Cryptanalysis of Hidden Field Equation (HFE) Cryptosystems Using Gröbner Bases. CRYPTO'03
[GJS06] L. Granboulan, Antoine Joux, J. Stern: Inverting HFE Is Quasipolynomial. CRYPTO'06.
Kipnis, Aviad; Shamir, Adi (1999). "Cryptanalysis of the HFE Public Key Cryptosystem by Relinearization". Advances in Cryptology – CRYPTO' 99. Berlin, Heidelberg: Springer. doi:10.1007/3-540-48405-1_2. ISBN 978-3-540-66347-8. ISSN 0302-9743. MR 1729291.
Kipnis, Aviad; Patarin, Jacques; Goubin, Louis (1999). "Unbalanced Oil and Vinegar Signature Schemes" (PDF). In Jacques Stern (ed.). Advances in Cryptology – CRYPTO' 99. Eurocrypt'99. Springer. doi:10.1007/3-540-48910-x_15. ISBN 3-540-65889-0. ISSN 0302-9743. MR 1717470.
Matsumoto, Tsutomu; Imai, Hideki (1988). "Public Quadratic Polynomial-Tuples for Efficient Signature-Verification and Message-Encryption". Lecture Notes in Computer Science. Berlin, Heidelberg: Springer. doi:10.1007/3-540-45961-8_39. ISBN 978-3-540-50251-7. ISSN 0302-9743. MR 0994679.
Patarin, Jacques (1995). "Cryptanalysis of the Matsumoto and Imai Public Key Scheme of Eurocrypt'88". Advances in Cryptology – CRYPTO' 95. Lecture Notes in Computer Science. Vol. 963. Berlin, Heidelberg: Springer. pp. 248–261. doi:10.1007/3-540-44750-4_20. ISBN 978-3-540-60221-7. ISSN 0302-9743. MR 1445572.
[P96] Jacques Patarin, Hidden Field Equations (HFE) and Isomorphisms of Polynomials (IP): two new Families of Asymmetric Algorithms (extended version); Eurocrypt '96
Christopher Wolf and Bart Preneel, Taxonomy of Public Key Schemes based on the problem of Multivariate Quadratic equations; Current Version: 2005-12-15
An Braeken, Christopher Wolf, and Bart Preneel, A Study of the Security of Unbalanced Oil and Vinegar Signature Schemes, Current Version: 2005-08-06
Jintai Ding, Research Project: Cryptanalysis on Rainbow and TTS multivariate public key signature scheme
Jacques Patarin, Nicolas Courtois, Louis Goubin, SFLASH, a fast asymmetric signature scheme for low-cost smartcards. Primitive specification and supporting documentation.
Bo-Yin Yang, Chen-Mou Cheng, Bor-Rong Chen, and Jiun-Ming Chen, Implementing Minimized Multivariate PKC on Low-Resource Embedded Systems, 2006
Bo-Yin Yang, Jiun-Ming Chen, and Yen-Hung Chen, TTS: High-Speed Signatures on a Low-Cost Smart Card, 2004
Nicolas T. Courtois, Short Signatures, Provable Security, Generic Attacks and Computational Security of Multivariate Polynomial Schemes such as HFE, Quartz and Sflash, 2005
Alfred J. Menezes, Paul C. van Oorschot, and Scott A. Vanstone, Handbook of Applied Cryptography, 1997
== External links ==
[1] The HFE public key encryption and signature
[2] HFEBoost
In mathematics, the supersingular isogeny graphs are a class of expander graphs that arise in computational number theory and have been applied in elliptic-curve cryptography. Their vertices represent supersingular elliptic curves over finite fields and their edges represent isogenies between curves.
== Definition and properties ==
A supersingular isogeny graph is determined by choosing a large prime number p and a small prime number ℓ, and considering the class of all supersingular elliptic curves defined over the finite field F_{p²}. There are approximately (p + 1)/12 such curves, any two of which can be related by isogenies. The vertices in the supersingular isogeny graph represent these curves (or more concretely, their j-invariants, elements of F_{p²}) and the edges represent isogenies of degree ℓ between two curves.
The supersingular isogeny graphs are (ℓ + 1)-regular graphs, meaning that each vertex has exactly ℓ + 1 neighbors. They were proven by Pizer to be Ramanujan graphs, graphs with optimal expansion properties for their degree. The proof is based on Pierre Deligne's proof of the Ramanujan–Petersson conjecture.
== Cryptographic applications ==
One proposal for a cryptographic hash function involves starting from a fixed vertex of a supersingular isogeny graph, using the bits of the binary representation of an input value to determine a sequence of edges to follow in a walk in the graph, and using the identity of the vertex reached at the end of the walk as the hash value for the input. The security of the proposed hashing scheme rests on the assumption that it is difficult to find paths in this graph that connect arbitrary pairs of vertices.
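The walk can be sketched with a stand-in graph. A real construction (such as the Charles–Goren–Lauter hash) walks the supersingular ℓ-isogeny graph itself and excludes backtracking; the small 3-regular graph and the two-bits-per-step input encoding below are assumptions made purely to illustrate the walk idea, not a secure hash.

```python
# A symmetric 3-regular graph on 6 vertices, standing in for an
# (ℓ + 1)-regular supersingular isogeny graph with ℓ = 2.
ADJ = {
    0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1, 5],
    3: [0, 4, 5], 4: [1, 3, 5], 5: [2, 3, 4],
}

def walk_hash(data: bytes, start: int = 0) -> int:
    """Walk the graph, steering by the bits of the input; the final
    vertex reached serves as the hash value."""
    v = start                            # fixed starting vertex
    for byte in data:
        for shift in range(0, 8, 2):     # consume the input 2 bits at a time
            step = (byte >> shift) & 0b11
            v = ADJ[v][step % 3]         # follow one of the 3 incident edges
    return v

digest = walk_hash(b"hello")             # a vertex label in 0..5
```

Finding a second input that reaches the same final vertex corresponds to finding a second path between the start and end vertices, which is the path-finding problem the scheme's security rests on.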
It has also been proposed to use walks in two supersingular isogeny graphs with the same vertex set but different edge sets (defined using different choices of the ℓ parameter) to develop a key exchange primitive analogous to Diffie–Hellman key exchange, called supersingular isogeny key exchange, suggested as a form of post-quantum cryptography. However, a leading variant of supersingular isogeny key exchange was broken in 2022 using non-quantum methods.
== References ==
Network switching subsystem (NSS) (or GSM core network) is the component of a GSM system that carries out call switching and mobility management functions for mobile phones roaming on the network of base stations. It is owned and deployed by mobile phone operators and allows mobile devices to communicate with each other and telephones in the wider public switched telephone network (PSTN). The architecture contains specific features and functions which are needed because the phones are not fixed in one location.
The NSS originally consisted of the circuit-switched core network, used for traditional GSM services such as voice calls, SMS, and circuit switched data calls. It was extended with an overlay architecture to provide packet-switched data services known as the GPRS core network. This allows GSM mobile phones to have access to services such as WAP, MMS and the Internet.
== Mobile switching center (MSC) ==
=== Description ===
The mobile switching center (MSC) is the primary service delivery node for GSM/CDMA, responsible for routing voice calls and SMS as well as other services (such as conference calls, FAX, and circuit-switched data).
The MSC sets up and releases the end-to-end connection, handles mobility and hand-over requirements during the call and takes care of charging and real-time prepaid account monitoring.
In the GSM mobile phone system, in contrast with earlier analogue services, fax and data information is sent digitally encoded directly to the MSC. Only at the MSC is this re-coded into an "analogue" signal (although actually this will almost certainly mean sound is encoded digitally as a pulse-code modulation (PCM) signal in a 64-kbit/s timeslot, known as a DS0 in America).
There are various names for MSCs in different contexts, which reflects their complex role in the network. All of these terms could refer to the same MSC performing different functions at different times.
The gateway MSC (G-MSC) is the MSC that determines which "visited MSC" (V-MSC) the subscriber who is being called is currently located at. It also interfaces with the PSTN. All mobile to mobile calls and PSTN to mobile calls are routed through a G-MSC. The term is only valid in the context of one call, since any MSC may provide both the gateway function and the visited MSC function. However, some manufacturers design dedicated high capacity MSCs which do not have any base station subsystems (BSS) connected to them. These MSCs will then be the gateway MSC for many of the calls they handle.
The visited MSC (V-MSC) is the MSC where a customer is currently located. The visitor location register (VLR) associated with this MSC will have the subscriber's data in it.
The anchor MSC is the MSC from which a handover has been initiated. The target MSC is the MSC toward which a handover should take place. A mobile switching center server is a part of the redesigned MSC concept starting from 3GPP Release 4.
=== Mobile switching center server (MSC-Server, MSCS or MSS) ===
The mobile switching center server is a soft-switch variant (therefore it may be referred to as mobile soft switch, MSS) of the mobile switching center, which provides circuit-switched calling, mobility management, and GSM services to the mobile phones roaming within the area that it serves. The functionality enables a split between the control plane (signalling) and the user plane (bearer, handled by a network element called the media gateway, MGW), which guarantees better placement of network elements within the network.
The MSS and media gateway (MGW) make it possible to cross-connect circuit-switched calls using IP or ATM AAL2 as well as TDM. More information is available in 3GPP TS 23.205.
The term Circuit switching (CS) used here originates from traditional telecommunications systems. However, modern MSS and MGW devices mostly use generic Internet technologies and form next-generation telecommunication networks. MSS software may run on generic computers or virtual machines in cloud environment.
=== Other GSM core network elements connected to the MSC ===
The MSC connects to the following elements:
The home location register (HLR) for obtaining data about the SIM and mobile services ISDN number (MSISDN; i.e., the telephone number).
The base station subsystems (BSS) which handles the radio communication with 2G and 2.5G mobile phones.
The UMTS terrestrial radio access network (UTRAN) which handles the radio communication with 3G mobile phones.
The visitor location register (VLR) provides subscriber information when the subscriber is outside its home network.
Other MSCs for procedures such as hand over.
=== Procedures implemented ===
Tasks of the MSC include:
Delivering calls to subscribers as they arrive based on information from the VLR.
Connecting outgoing calls to other mobile subscribers or the PSTN.
Delivering SMSs from subscribers to the short message service center (SMSC) and vice versa.
Arranging handovers from BSC to BSC.
Carrying out handovers from this MSC to another.
Supporting supplementary services such as conference calls or call hold.
Generating billing information.
== Home location register (HLR) ==
The home location register (HLR) is a central database that contains details of each mobile phone subscriber that is authorized to use the GSM core network. There can be several logical, and physical, HLRs per public land mobile network (PLMN), though one international mobile subscriber identity (IMSI)/MSISDN pair can be associated with only one logical HLR (which can span several physical nodes) at a time.
The HLRs store details of every SIM card issued by the mobile phone operator. Each SIM has a unique identifier called an IMSI which is the primary key to each HLR record.
Another important item of data associated with the SIM are the MSISDNs, which are the telephone numbers used by mobile phones to make and receive calls. The primary MSISDN is the number used for making and receiving voice calls and SMS, but it is possible for a SIM to have other secondary MSISDNs associated with it for fax and data calls. Each MSISDN is also a unique key to the HLR record. The HLR data is stored for as long as a subscriber remains with the mobile phone operator.
Examples of other data stored in the HLR against an IMSI record is:
GSM services that the subscriber has requested or been given.
General Packet Radio Service (GPRS) settings to allow the subscriber to access packet services.
Current location of subscriber (VLR and serving GPRS support node/SGSN).
Call divert settings applicable for each associated MSISDN.
The HLR is a system which directly receives and processes MAP transactions and messages from elements in the GSM network, for example, the location update messages received as mobile phones roam around.
=== Other GSM core network elements connected to the HLR ===
The HLR connects to the following elements:
The G-MSC for handling incoming calls
The VLR for handling requests from mobile phones to attach to the network
The SMSC for handling incoming SMSs
The voice mail system for delivering notifications to the mobile phone that a message is waiting
The AuC for authentication and ciphering and exchange of data (triplets)
=== Procedures implemented ===
The main function of the HLR is to manage the fact that SIMs and phones move around a lot. The following procedures are implemented to deal with this:
Manage the mobility of subscribers by means of updating their position in administrative areas called "location areas", which are identified with a LAC. The movement of a user from one LA to another is followed by the HLR with a location area update procedure.
Send the subscriber data to a VLR or SGSN when a subscriber first roams there.
Broker between the G-MSC or SMSC and the subscriber's current VLR in order to allow incoming calls or text messages to be delivered.
Remove subscriber data from the previous VLR when a subscriber has roamed away from it.
Responsible for all SRI (Send Routing Information) related queries (i.e., for an SRI invoke, the HLR should return an SRI acknowledgement or SRI reply).
== Authentication center (AuC) ==
=== Description ===
The authentication center (AuC) is a function to authenticate each SIM card that attempts to connect to the GSM core network (typically when the phone is powered on). Once the authentication is successful, the HLR is allowed to manage the SIM and services described above. An encryption key is also generated that is subsequently used to encrypt all wireless communications (voice, SMS, etc.) between the mobile phone and the GSM core network.
If the authentication fails, then no services are possible from that particular combination of SIM card and mobile phone operator attempted. There is an additional form of identification check performed on the serial number of the mobile phone described in the EIR section below, but this is not relevant to the AuC processing.
Proper implementation of security in and around the AuC is a key part of an operator's strategy to avoid SIM cloning.
The AuC does not engage directly in the authentication process, but instead generates data known as triplets for the MSC to use during the procedure. The security of the process depends upon a shared secret between the AuC and the SIM called the Ki. The Ki is securely burned into the SIM during manufacture and is also securely replicated onto the AuC. This Ki is never transmitted between the AuC and SIM, but is combined with the IMSI to produce a challenge/response for identification purposes and an encryption key called Kc for use in over the air communications.
=== Other GSM core network elements connected to the AuC ===
The AuC connects to the following elements:
The MSC which requests a new batch of triplet data for an IMSI after the previous data have been used. This ensures that same keys and challenge responses are not used twice for a particular mobile.
=== Procedures implemented ===
The AuC stores the following data for each IMSI:
the Ki
Algorithm id. (the standard algorithms are called A3 or A8, but an operator may choose a proprietary one).
When the MSC asks the AuC for a new set of triplets for a particular IMSI, the AuC first generates a random number known as RAND. This RAND is then combined with the Ki to produce two numbers as follows:
The Ki and RAND are fed into the A3 algorithm and the signed response (SRES) is calculated.
The Ki and RAND are fed into the A8 algorithm and a session key called Kc is calculated.
The numbers (RAND, SRES, Kc) form the triplet sent back to the MSC. When a particular IMSI requests access to the GSM core network, the MSC sends the RAND part of the triplet to the SIM. The SIM then feeds this number and the Ki (which is burned onto the SIM) into the A3 algorithm as appropriate and an SRES is calculated and sent back to the MSC. If this SRES matches with the SRES in the triplet (which it should if it is a valid SIM), then the mobile is allowed to attach and proceed with GSM services.
After successful authentication, the MSC sends the encryption key Kc to the base station controller (BSC) so that all communications can be encrypted and decrypted. Of course, the mobile phone can generate the Kc itself by feeding the same RAND supplied during authentication and the Ki into the A8 algorithm.
The AuC is usually collocated with the HLR, although this is not necessary. Whilst the procedure is secure for most everyday use, it is by no means hack proof. Therefore, a new set of security methods was designed for 3G phones.
In practice, the A3 and A8 algorithms are generally implemented together (known as A3/A8, see COMP128). An A3/A8 algorithm is implemented in Subscriber Identity Module (SIM) cards and in GSM network authentication centers. It is used to authenticate the customer and generate a key for encrypting voice and data traffic, as defined in 3GPP TS 43.020 (03.20 before Rel-4). Development of A3 and A8 algorithms is considered a matter for individual GSM network operators, although example implementations are available. The A5 algorithm is used to encrypt Global System for Mobile Communications (GSM) cellular communications.
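The triplet flow described above can be sketched as follows, using HMAC-SHA256 as a stand-in for the operator-specific A3/A8 algorithms (the real algorithms, such as COMP128, are different and produce fixed-size GSM outputs; only the message flow is mirrored here, and all key sizes are illustrative assumptions).

```python
import hmac, hashlib, os

def a3(ki: bytes, rand: bytes) -> bytes:       # stand-in A3 -> SRES (4 bytes)
    return hmac.new(ki, b"A3" + rand, hashlib.sha256).digest()[:4]

def a8(ki: bytes, rand: bytes) -> bytes:       # stand-in A8 -> Kc (8 bytes)
    return hmac.new(ki, b"A8" + rand, hashlib.sha256).digest()[:8]

# AuC side: Ki is the shared secret that is also burned into the SIM
ki = os.urandom(16)
rand = os.urandom(16)                          # fresh random challenge
triplet = (rand, a3(ki, rand), a8(ki, rand))   # (RAND, SRES, Kc) sent to the MSC

# SIM side: receives only RAND and recomputes SRES from its own copy of Ki;
# Ki itself never crosses the air interface
sres_from_sim = a3(ki, rand)
assert sres_from_sim == triplet[1]             # MSC compares; match => attach allowed
```

The SIM can likewise recompute Kc from the same RAND, so both ends derive the session encryption key without ever transmitting it.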
== Visitor location register (VLR) ==
=== Description ===
The Visitor Location Register (VLR) is a database of the MSs (Mobile stations) that have roamed into the jurisdiction of the Mobile Switching Center (MSC) which it serves. Each main base transceiver station in the network is served by exactly one VLR (one BTS may be served by many MSCs in case of MSC in pool), hence a subscriber cannot be present in more than one VLR at a time.
The data stored in the VLR has either been received from the Home Location Register (HLR), or collected from the MS. In practice, for performance reasons, most vendors integrate the VLR directly to the V-MSC and, where this is not done, the VLR is very tightly linked with the MSC via a proprietary interface. Whenever an MSC detects a new MS in its network, in addition to creating a new record in the VLR, it also updates the HLR of the mobile subscriber, apprising it of the new location of that MS. If VLR data is corrupted it can lead to serious issues with text messaging and call services.
Data stored include:
IMSI (the subscriber's identity number).
Authentication data.
MSISDN (the subscriber's phone number).
GSM services that the subscriber is allowed to access.
access point (GPRS) subscribed.
The HLR address of the subscriber.
SCP Address(For Prepaid Subscriber).
=== Procedures implemented ===
The primary functions of the VLR are:
To inform the HLR that a subscriber has arrived in the particular area covered by the VLR.
To track where the subscriber is within the VLR area (location area) when no call is ongoing.
To allow or disallow which services the subscriber may use.
To allocate roaming numbers during the processing of incoming calls.
To purge the subscriber record if a subscriber becomes inactive whilst in the area of a VLR. The VLR deletes the subscriber's data after a fixed time period of inactivity and informs the HLR (e.g., when the phone has been switched off and left off or when the subscriber has moved to an area with no coverage for a long time).
To delete the subscriber record when a subscriber has explicitly moved to another VLR area, as instructed by the HLR.
== Equipment identity register (EIR) ==
The equipment identity register (EIR) is a system that handles real-time requests from the switching equipment (MSC, SGSN, MME) to check the IMEI of mobile devices (CheckIMEI). The answer contains the result of the check:
whitelisted – the device is allowed to register on the network.
blacklisted – the device is prohibited from registering on the network.
greylisted – the device is allowed to register on the network temporarily.
An error ‘unknown equipment’ may also be returned.
The switching equipment must use the EIR response to determine whether or not to allow the device to register or re-register on the network. Since the response of switching equipment to ‘greylisted’ and ‘unknown equipment’ responses is not clearly described in the standard, they are most often not used.
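The response logic can be sketched with simple set-based lists (real EIRs answer MAP or Diameter CheckIMEI requests against operator databases; the identifiers and set names below are invented for illustration):

```python
# Invented example IMEIs; a production EIR would query operator databases.
BLACKLIST = {"356938035643809"}      # e.g. stolen or lost devices
GREYLIST  = {"490154203237518"}      # devices tolerated temporarily

def check_imei(imei: str) -> str:
    """Return the EIR verdict the switching equipment acts on."""
    if imei in BLACKLIST:
        return "blacklisted"         # registration refused
    if imei in GREYLIST:
        return "greylisted"          # allowed to register temporarily
    return "whitelisted"             # default: allowed on the network

verdict = check_imei("123456789012345")   # -> "whitelisted"
```

This mirrors the common deployment described below, where the default answer is "whitelisted" and only explicitly listed IMEIs are refused.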
Most often, an EIR uses the IMEI blacklist feature, which contains the IMEIs of devices that need to be banned from the network; as a rule, these are stolen or lost devices. Mobile operators rarely use EIR capabilities to block devices on their own initiative. Usually blocking begins when a country passes a law obliging all of its cellular operators to do so. Therefore, deliveries of the basic components of the network switching subsystem (core network) often already include an EIR with basic functionality: a "whitelisted" response to all CheckIMEI requests, plus the ability to populate an IMEI blacklist whose entries receive a "blacklisted" response.
When a legislative framework for blocking device registration in cellular networks appears in a country, the telecommunications regulator usually operates a Central EIR (CEIR) system, which is integrated with the EIRs of all operators and transmits to them the current lists of identifiers to be used when processing CheckIMEI requests. This may bring many new requirements for EIR systems that are not present in legacy EIRs:
Synchronizing lists with CEIR. CEIR systems are not described by a standard, so the protocols and exchange mode may differ from country to country.
Supporting additional lists – IMEI white list, IMEI grey list, list of allocated TACs, etc.
Supporting not only IMEIs in lists but also bindings – IMEI-IMSI, IMEI-MSISDN, IMEI-IMSI-MSISDN.
Supporting customized logic for applying the lists.
Automatically adding an item to a list in certain scenarios.
Sending SMS notifications to subscribers in certain scenarios.
Integration with the billing system to receive IMSI-MSISDN bindings.
Accumulating subscriber profiles (history of device changes).
Long-term storage of the processing results of all CheckIMEI requests.
Other functions may be required in individual cases. For example, Kazakhstan has introduced mandatory registration of devices and their binding to subscribers. When a subscriber appears in the network with a new device, network operation is not blocked completely; instead, the subscriber is allowed to register the device. To achieve this, all services are blocked except calls to a specific service number and SMS to a specific service number, and all Internet traffic is redirected to a specific landing page. This is possible because the EIR can send commands to several MNO systems (HLR, PCRF, SMSC, etc.).
The most common suppliers of individual EIR systems (not as part of a complex solution) are the companies BroadForward, Mahindra Comviva, Mavenir, Nokia, Eastwind.
== Other support functions ==
Connected more or less directly to the GSM core network are many other functions.
=== Billing center (BC) ===
The billing center is responsible for processing the toll tickets generated by the VLRs and HLRs and generating a bill for each subscriber. It is also responsible for generating billing data of roaming subscriber.
=== Multimedia messaging service center (MMSC) ===
The multimedia messaging service center supports the sending of multimedia messages (e.g., images, audio, video and their combinations) to (or from) MMS-enabled handsets.
=== Voicemail system (VMS) ===
The voicemail system records and stores voicemail.
=== Lawful interception functions ===
According to U.S. law, which has also been copied into many other countries, especially in Europe, all telecommunications equipment must provide facilities for monitoring the calls of selected users. There must be some level of support for this built into any of the different elements. The concept of lawful interception is also known, following the relevant U.S. law, as CALEA.
Generally, lawful Interception implementation is similar to the implementation of conference call. While A and B are talking with each other, C can join the call and listen silently.
== See also ==
The GSM core network.
Base station subsystem
COMP128
3GPP
== References ==
== External links ==
3GPP - the standardisation body for GSM and UMTS
UMTS Networks: Protocols, Terminology and Implementation - a PDF eBook by Gunnar Heine
In cryptanalysis, gardening is the act of encouraging a target to use known plaintext in an encrypted message, typically by performing some action the target is sure to report. It was a term used during World War II at the British Government Code and Cypher School at Bletchley Park, England, for schemes to entice the Germans to include particular words, which the British called "cribs", in their encrypted messages. This term presumably came from RAF minelaying missions, or "gardening" sorties. "Gardening" was standard RAF slang for sowing mines in rivers, ports and oceans from low heights, possibly because each sea area around the European coasts was given a code-name of flowers or vegetables.
The technique is claimed to have been most effective against messages produced by the German Navy's Enigma machines. If the Germans had recently swept a particular area for mines, and analysts at Bletchley Park were in need of some cribs, they might (and apparently did on several occasions) request that the area be mined again. This would hopefully evoke encrypted messages from the local command mentioning Minen (German for mines), the location, and perhaps messages also from the headquarters with minesweeping ships to assign to that location, mentioning the same. It worked often enough to try several times.
This crib-based decryption is usually not considered a chosen-plaintext attack, even though plain text effectively chosen by the British was injected into the ciphertext, because the choice was very limited and the cryptanalysts did not care what the crib was so long as they knew it. Most chosen-plaintext cryptanalysis requires very specific patterns (e.g. long repetitions of "AAA...", "BBB...", "CCC...", etc.) which could not be mistaken for normal messages. It does, however, show that the boundary between these two is somewhat fuzzy.
Another notable example occurred during the lead up to the Battle of Midway. U.S. cryptanalysts had decrypted numerous Japanese messages about a planned operation at "AF", but the code word "AF" came from a second location code book which was not known. Suspecting it was Midway island, they arranged for the garrison there to report in the clear about a breakdown of their desalination plant. A Japanese report about "AF" being short of fresh water soon followed, confirming the guess.
== See also ==
Cryptanalysis of the Enigma
Known-plaintext attack
== Notes ==
A progressive web application (PWA), or progressive web app, is a type of web app that can be installed on a device as a standalone application. PWAs are installed using the offline cache of the device's web browser.
PWAs were introduced from 2016 as an alternative to native (device-specific) applications, with the advantage that they do not require separate bundling or distribution for different platforms. They can be used on a range of different systems, including desktop and mobile devices. Publishing the app to digital distribution systems, such as the Apple App Store, Google Play, or the Microsoft Store on Windows, is optional.
Because a PWA is delivered in the form of a webpage or website built using common web technologies including HTML, CSS, JavaScript, and WebAssembly, it can work on any platform with a PWA-compatible browser. As of 2021, PWA features are supported to varying degrees by Google Chrome, Apple Safari, Brave, Firefox for Android, and Microsoft Edge but not by Firefox for desktop.
== History ==
=== Predecessors ===
At Apple's Worldwide Developers Conference in 2007, Steve Jobs announced that the iPhone would "run applications created with Web 2.0 Internet standards". No software development kit (SDK) was required, and the apps would be fully integrated into the device through the Safari browser engine. This model was later switched to the App Store, as a means of appeasing frustrated developers. In October 2007 Jobs announced that an SDK would be launched the following year. As a result, although Apple continued to support web apps, the vast majority of iOS applications shifted toward the App Store.
Beginning in the early 2010s, dynamic web pages allowed web technologies to be used to create interactive web applications. Responsive web design, and the screen-size flexibility it provides, have made PWA development more accessible. Continued enhancements to HTML, CSS, and JavaScript allowed web applications to incorporate greater levels of interactivity, making native-like experiences possible on a website.
In 2013, Mozilla released Firefox OS. It was intended to be an open-source operating system for running web apps as native apps on mobile devices. Firefox OS was based on the Gecko rendering engine with a user interface called Gaia, written in HTML5. The development of Firefox OS ended in 2016, and the project was completely discontinued in 2017, although a fork of Firefox OS was used as the basis of KaiOS, a feature phone platform.
=== Initial introduction ===
In 2015, designer Frances Berriman and Google Chrome engineer Alex Russell coined the term "progressive web apps" to describe apps taking advantage of new features supported by modern browsers, including service workers and web app manifests, that let users upgrade web apps to progressive web applications in their native operating system (OS). Google then put significant efforts into promoting PWA development for Android. Firefox introduced support for service workers in 2016, and Microsoft Edge and Apple Safari followed in 2018, making service workers available on all major systems.
By 2019, PWAs were supported by desktop versions of some browsers, including Microsoft Edge (on Windows) and Google Chrome (on Windows, macOS, ChromeOS, and Linux).
In December 2020, Firefox for desktop abandoned the implementation of PWAs (specifically, removed the prototype "site-specific browser" configuration that had been available as an experimental feature). A Firefox architect noted: "The signal I hope we are sending is that PWA support is not coming to desktop Firefox anytime soon." Mozilla supports PWAs on Android and plans to keep supporting it.
== Characteristics ==
Progressive web apps are designed to work on any browser that is compliant with the appropriate web standards. As with other cross-platform solutions, the goal is to help developers build cross-platform apps more easily than they would with native apps. Progressive web apps employ the progressive enhancement web development strategy.
Some progressive web apps use an architectural approach called the App Shell Model. In this model, service workers store the Basic User Interface or "shell" of the responsive web design web application in the browser's offline cache. This model allows for PWAs to maintain native-like use with or without web connectivity. This can improve loading time, by providing an initial static frame, a layout or architecture into which content can be loaded progressively as well as dynamically.
=== Technical criteria ===
The technical baseline criteria for a site to be considered a progressive web app and therefore capable of being installed by browsers were described by Russell in 2016 and updated since:
Originate from a secure origin. Served over TLS and have no active mixed content. Progressive web apps must be served via HTTPS to ensure user privacy, security, and content authenticity.
Register a service worker with a fetch handler. Progressive web apps must use service workers to create programmable content caches. Unlike a regular HTTP web cache, which caches content after the first use and then relies on various heuristics to guess when content is no longer needed, programmable caches can explicitly prefetch content before it is used for the first time and explicitly discard it when it is no longer needed. This requirement helps pages remain accessible offline or on low-quality networks.
Reference a web app manifest. The manifest must contain at least the key properties name or short_name, start_url, display (with a value of standalone, fullscreen or minimal-ui), and icons (with 192 px and 512 px versions). Information contained in the manifest makes PWAs easily shareable via a URL, discoverable by search engines, and alleviates complex installation procedures (though PWAs may still be listed in a third-party app store). Furthermore, PWAs support native app-style interactions and navigation, including being added to the home screen, displaying splash screens, etc.
== Technologies ==
There are many technologies commonly used to create progressive web apps. A web application is considered a PWA if it satisfies the installation criteria, thus can work offline and can be added to the device's home screen. To meet this definition, all PWAs require at minimum a manifest and a service worker. Other technologies may be used to store data, communicate with servers or execute code.
=== Manifest ===
The web app manifest is a World Wide Web Consortium (W3C) specification defining a JSON-based manifest (usually labelled manifest.json) to provide developers with a centralized place to put metadata associated with a web application including:
The name of the web application
Links to the web app icons or image objects
The preferred URL to launch or open the web app
The web app configuration data
Default orientation of the web app
The option to set the display mode, e.g. full screen
This metadata is crucial for an app to be added to a home screen or otherwise listed alongside native apps.
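A minimal manifest covering the metadata listed above might look like the following sketch (the app name, icon paths, and values are hypothetical, not taken from any particular application):

```json
{
  "name": "Example App",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "orientation": "portrait",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

Browsers discover the manifest through a `<link rel="manifest" href="/manifest.json">` element in the page's HTML.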
==== iOS support ====
iOS Safari implements manifests only partially; instead, much of the PWA metadata can be defined via Apple-specific extensions to the meta tags. These tags allow developers to enable full-screen display, define icons and splash screens, and specify a name for the application.
=== Service workers ===
A service worker is a web worker that implements a programmable network proxy that can respond to web/HTTP requests from the main document. It is able to check the availability of a remote server, cache content when that server is available, and serve that content to the document later. Service workers, like any other web workers, work separately from the main document context. Service workers can handle push notifications and synchronize data in the background, cache or retrieve resource requests, intercept network requests and receive centralized updates independently of the document that registered them, even when that document is not loaded.
Service workers go through a three-step lifecycle of Registration, Installation and Activation. Registration involves telling the browser the location of the service worker in preparation for installation. Installation occurs when there is no service worker installed in the browser for the web app, or if there is an update to the service worker. Activation occurs when all of the PWA's pages are closed, so that there is no conflict between the previous version and the updated one. The lifecycle also helps maintain consistency when switching among versions of a service worker since only a single service worker can be active for a domain.
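The programmable-cache behaviour that a service worker's fetch handler provides can be sketched outside the browser. In the stand-in below, the browser's Cache and fetch APIs are replaced by a Map and a callback, and the function names (precache, handleFetch, evict) are hypothetical; the point is the explicit prefetch/serve/discard cycle described above.

```javascript
// Simulation of a service worker's programmable cache (sketch, not the
// browser API). precache ~ the install step, handleFetch ~ the fetch
// handler's cache-first strategy, evict ~ cleanup during activation.
const cache = new Map();

// Install: explicitly prefetch the app shell before it is ever requested.
function precache(urls, fetchFn) {
  for (const url of urls) cache.set(url, fetchFn(url));
}

// Fetch: serve from the cache when possible, otherwise fetch and store.
function handleFetch(url, fetchFn) {
  if (cache.has(url)) return cache.get(url);
  const response = fetchFn(url);
  cache.set(url, response);
  return response;
}

// Activate: explicitly discard entries that are no longer needed.
function evict(urls) {
  for (const url of urls) cache.delete(url);
}

const network = (url) => `body of ${url}`;
precache(['/index.html', '/app.css'], network);

// Once precached, the "page" loads even if the network is unavailable.
console.log(handleFetch('/index.html', () => { throw new Error('offline'); }));
// → body of /index.html
```

In a real service worker the same logic is written against `caches.open()`, `cache.addAll()`, and the `fetch` event, but the control flow is the one shown here.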
=== WebAssembly ===
WebAssembly allows precompiled code to run in a web browser, at near-native speed. Thus, libraries written in languages such as C can be added to web apps. Announced in 2015 and first released in March 2017, WebAssembly became a W3C recommendation on December 5, 2019 and it received the Programming Languages Software Award from ACM SIGPLAN in 2021.
=== Data storage ===
Progressive Web App execution contexts get unloaded whenever possible, so progressive web apps need to store the majority of their long-term internal state (user data, dynamically loaded application resources) in one of the following manners:
Web Storage
Web Storage is a W3C standard API that enables key-value storage in modern browsers. The API consists of two objects, sessionStorage (that enables session-only storage that gets wiped upon browser session end) and localStorage (that enables storage that persists across sessions).
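Both Web Storage objects expose the same getItem/setItem/removeItem interface. The sketch below uses a Map-backed stand-in so it runs outside a browser; in a real page, window.localStorage and window.sessionStorage provide this API natively, and values are always stored as strings.

```javascript
// Map-backed stand-in for the Web Storage interface (for illustration
// only; browsers provide window.localStorage natively).
const localStorage = {
  _data: new Map(),
  setItem(key, value) { this._data.set(key, String(value)); }, // values are coerced to strings
  getItem(key) { return this._data.has(key) ? this._data.get(key) : null; },
  removeItem(key) { this._data.delete(key); },
};

localStorage.setItem('theme', 'dark');        // persists across sessions in a real browser
console.log(localStorage.getItem('theme'));   // → dark
console.log(localStorage.getItem('missing')); // → null, not undefined
```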
Indexed Database API
Indexed Database API is a W3C standard database API available in all major browsers. It enables storage of JSON objects and of any structures representable as a string, and it can be used with a wrapper library providing additional constructs around it.
== Comparison with native apps ==
In 2017, Twitter released Twitter Lite, a PWA alternative to the official native Android and iOS apps. According to Twitter, Twitter Lite consumed only 1–3% of the size of the native apps. Starbucks provides a PWA that is 99.84% smaller than its equivalent iOS app. After deploying its PWA, Starbucks doubled the number of online orders, with desktop users ordering at about the same rate as mobile app users.
A 2018 review published by Forbes found that users of Pinterest's PWA spent 40% more time on the site compared to the previous mobile website. Ad revenue rates also increased by 44%, and core engagements by 60%. Flipkart saw 60% of customers who had uninstalled their native app return to use the Flipkart PWA. Lancôme saw an 84% decrease in time until the page is interactive, leading to a 17% increase in conversions and a 53% increase in mobile sessions on iOS with their PWA.
== Release via app stores ==
Since a PWA does not require separate bundling or distribution for different platforms and is available to users via the web, it is not necessary for developers to release it over digital distribution systems like the Apple App Store, Google Play, Microsoft Store, or Samsung Galaxy Store. The major app stores support the publication of PWAs to varying degrees. Google Play, Microsoft Store, and Samsung Galaxy Store support PWAs, but the Apple App Store does not. Microsoft Store publishes some qualifying PWAs automatically (even without app authors' requests) after discovering them via Bing indexing.
== See also ==
Google Lighthouse, an open-source audit tool for PWAs developed by Google
== References ==
== External links ==
Web Applications Working Group index of standards | Wikipedia/Progressive_web_application |
Tor is a free overlay network for enabling anonymous communication. It is built on free and open-source software run by over seven thousand volunteer-operated relays worldwide, as well as by millions of users who route their Internet traffic via random paths through these relays.
Using Tor makes it more difficult to trace a user's Internet activity by preventing any single point on the Internet (other than the user's device) from being able to view both where traffic originated from and where it is ultimately going to at the same time. This conceals a user's location and usage from anyone performing network surveillance or traffic analysis from any such point, protecting the user's freedom and ability to communicate confidentially.
== History ==
The core principle of Tor, known as onion routing, was developed in the mid-1990s by United States Naval Research Laboratory employees: mathematician Paul Syverson and computer scientists Michael G. Reed and David Goldschlag, to protect American intelligence communications online. Onion routing is implemented by means of encryption in the application layer of the communication protocol stack, nested like the layers of an onion. The alpha version of Tor, developed by Syverson and computer scientists Roger Dingledine and Nick Mathewson and then called The Onion Routing project (which was later given the acronym "Tor"), was launched on 20 September 2002. The first public release occurred a year later.
In 2004, the Naval Research Laboratory released the code for Tor under a free license, and the Electronic Frontier Foundation (EFF) began funding Dingledine and Mathewson to continue its development. In 2006, Dingledine, Mathewson, and five others founded The Tor Project, a Massachusetts-based 501(c)(3) research-education nonprofit organization responsible for maintaining Tor. The EFF acted as The Tor Project's fiscal sponsor in its early years, and early financial supporters included the U.S. Bureau of Democracy, Human Rights, and Labor and International Broadcasting Bureau, Internews, Human Rights Watch, the University of Cambridge, Google, and Netherlands-based Stichting NLnet.
Over the course of its existence, various Tor vulnerabilities have been discovered and occasionally exploited. Attacks against Tor are an active area of academic research that is welcomed by The Tor Project itself.
== Usage ==
Tor enables its users to surf the Internet, chat and send instant messages anonymously, and is used by a wide variety of people for both licit and illicit purposes. Tor has, for example, been used by criminal enterprises, hacktivism groups, and law enforcement agencies at cross purposes, sometimes simultaneously; likewise, agencies within the U.S. government variously fund Tor (the U.S. State Department, the National Science Foundation, and – through the Broadcasting Board of Governors, which itself partially funded Tor until October 2012 – Radio Free Asia) and seek to subvert it. Tor was one of a dozen circumvention tools evaluated by a Freedom House-funded report based on user experience from China in 2010, which include Ultrasurf, Hotspot Shield, and Freegate.
Tor is not meant to completely solve the issue of anonymity on the web. Tor is not designed to completely erase tracking but instead to reduce the likelihood for sites to trace actions and data back to the user.
Tor can be used for both legal and illegal purposes. These include privacy protection and censorship circumvention on the one hand, and distribution of child abuse content, drug sales, or malware distribution on the other.
Tor has been described by The Economist, in relation to Bitcoin and Silk Road, as being "a dark corner of the web". It has been targeted by the American National Security Agency and the British GCHQ signals intelligence agencies, albeit with marginal success, and more successfully by the British National Crime Agency in its Operation Notarise. At the same time, GCHQ has been using a tool named "Shadowcat" for "end-to-end encrypted access to VPS over SSH using the Tor network". Tor can be used for anonymous defamation, unauthorized news leaks of sensitive information, copyright infringement, distribution of illegal sexual content, selling controlled substances, weapons, and stolen credit card numbers, money laundering, bank fraud, credit card fraud, identity theft and the exchange of counterfeit currency; the black market utilizes the Tor infrastructure, at least in part, in conjunction with Bitcoin. It has also been used to brick IoT devices.
In its complaint against Ross William Ulbricht of Silk Road, the US Federal Bureau of Investigation acknowledged that Tor has "known legitimate uses". According to CNET, Tor's anonymity function is "endorsed by the Electronic Frontier Foundation (EFF) and other civil liberties groups as a method for whistleblowers and human rights workers to communicate with journalists". EFF's Surveillance Self-Defense guide includes a description of where Tor fits in a larger strategy for protecting privacy and anonymity.
In 2014, the EFF's Eva Galperin told Businessweek that "Tor's biggest problem is press. No one hears about that time someone wasn't stalked by their abuser. They hear how somebody got away with downloading child porn."
The Tor Project states that Tor users include "normal people" who wish to keep their Internet activities private from websites and advertisers, people concerned about cyber-spying, and users who are evading censorship such as activists, journalists, and military professionals. In November 2013, Tor had about four million users. According to the Wall Street Journal, in 2012 about 14% of Tor's traffic connected from the United States, with people in "Internet-censoring countries" as its second-largest user base. Tor is increasingly used by victims of domestic violence and the social workers and agencies that assist them, even though shelter workers may or may not have had professional training on cyber-security matters. Properly deployed, however, it precludes digital stalking, which has increased due to the prevalence of digital media in contemporary online life. Along with SecureDrop, Tor is used by news organizations such as The Guardian, The New Yorker, ProPublica and The Intercept to protect the privacy of whistleblowers.
In March 2015, the Parliamentary Office of Science and Technology released a briefing which stated that "There is widespread agreement that banning online anonymity systems altogether is not seen as an acceptable policy option in the U.K." and that "Even if it were, there would be technical challenges." The report further noted that Tor "plays only a minor role in the online viewing and distribution of indecent images of children" (due in part to its inherent latency); its usage by the Internet Watch Foundation, the utility of its onion services for whistleblowers, and its circumvention of the Great Firewall of China were touted.
Tor's executive director, Andrew Lewman, also said in August 2014 that agents of the NSA and the GCHQ have anonymously provided Tor with bug reports.
The Tor Project's FAQ offers supporting reasons for the EFF's endorsement:
Criminals can already do bad things. Since they're willing to break laws, they already have lots of options available that provide better privacy than Tor provides...
Tor aims to provide protection for ordinary people who want to follow the law. Only criminals have privacy right now, and we need to fix that...
So yes, criminals could in theory use Tor, but they already have better options, and it seems unlikely that taking Tor away from the world will stop them from doing their bad things. At the same time, Tor and other privacy measures can fight identity theft, physical crimes like stalking, and so on.
== Operation ==
Tor aims to conceal its users' identities and their online activity from surveillance and traffic analysis by separating identification and routing. It is an implementation of onion routing, which encrypts and then randomly bounces communications through a network of relays run by volunteers around the globe. These onion routers employ encryption in a multi-layered manner (hence the onion metaphor) to ensure perfect forward secrecy between relays, thereby providing users with anonymity in network location. That anonymity extends to the hosting of censorship-resistant content by Tor's anonymous onion service feature. Furthermore, by keeping some of the entry relays (bridge relays) secret, users can evade Internet censorship that relies upon blocking public Tor relays.
Because the IP address of the sender and the recipient are not both in cleartext at any hop along the way, anyone eavesdropping at any point along the communication channel cannot directly identify both ends. Furthermore, to the recipient, it appears that the last Tor node (called the exit node), rather than the sender, is the originator of the communication.
=== Originating traffic ===
A Tor user's SOCKS-aware applications can be configured to direct their network traffic through a Tor instance's SOCKS interface, which is listening on TCP port 9050 (for standalone Tor) or 9150 (for Tor Browser bundle) at localhost. Tor periodically creates virtual circuits through the Tor network through which it can multiplex and onion-route that traffic to its destination. Once inside a Tor network, the traffic is sent from router to router along the circuit, ultimately reaching an exit node at which point the cleartext packet is available and is forwarded on to its original destination. Viewed from the destination, the traffic appears to originate at the Tor exit node.
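The SOCKS listener described above is set in Tor's torrc configuration file; a minimal excerpt might look like the following (the value shown is the standalone default, included for illustration):

```
# torrc excerpt (illustrative): standalone Tor's default SOCKS listener
SocksPort 9050        # accept SOCKS connections on localhost:9050
```

A SOCKS-aware application is then pointed at 127.0.0.1:9050 (or 9150 for the Tor Browser bundle) as its SOCKS5 proxy.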
Tor's application independence sets it apart from most other anonymity networks: it works at the Transmission Control Protocol (TCP) stream level. Applications whose traffic is commonly anonymized using Tor include Internet Relay Chat (IRC), instant messaging, and World Wide Web browsing.
=== Onion services ===
Tor can also provide anonymity to websites and other servers. Servers configured to receive inbound connections only through Tor are called onion services (formerly, hidden services). Rather than revealing a server's IP address (and thus its network location), an onion service is accessed through its onion address, usually via the Tor Browser or some other software designed to use Tor. The Tor network understands these addresses by looking up their corresponding public keys and introduction points from a distributed hash table within the network. It can route data to and from onion services, even those hosted behind firewalls or network address translators (NAT), while preserving the anonymity of both parties. Tor is necessary to access these onion services. Because the connection never leaves the Tor network, and is handled by the Tor application on both ends, the connection is always end-to-end encrypted.
Onion services were first specified in 2003 and have been deployed on the Tor network since 2004. They are unlisted by design, and can only be discovered on the network if the onion address is already known, though a number of sites and services do catalog publicly known onion addresses. Popular sources of .onion links include Pastebin, Twitter, Reddit, other Internet forums, and tailored search engines.
While onion services are often discussed in terms of websites, they can be used for any TCP service, and are commonly used for increased security or easier routing to non-web services, such as secure shell remote login, chat services such as IRC and XMPP, or file sharing. They have also become a popular means of establishing peer-to-peer connections in messaging and file sharing applications. Web-based onion services can be accessed from a standard web browser without client-side connection to the Tor network using services like Tor2web, which remove client anonymity.
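On the server side, publishing a local service as an onion service takes two torrc directives; in the sketch below, the directory path and port numbers are hypothetical:

```
# torrc excerpt (sketch): expose a local web server as an onion service
HiddenServiceDir /var/lib/tor/my_service/   # keys and hostname are stored here
HiddenServicePort 80 127.0.0.1:8080         # onion port 80 -> local port 8080
```

After Tor restarts, the generated .onion hostname can be read from the hostname file inside the configured HiddenServiceDir.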
== Attacks and limitations ==
Like all software with an attack surface, Tor's protections have limitations, and Tor's implementation or design have been vulnerable to attacks at various points throughout its history. While most of these limitations and attacks are minor, either being fixed without incident or proving inconsequential, others are more notable.
=== End-to-end traffic correlation ===
Tor is designed to provide relatively high performance network anonymity against an attacker with a single vantage point on the connection (e.g., control over one of the three relays, the destination server, or the user's internet service provider). Like all current low-latency anonymity networks, Tor cannot and does not attempt to protect against an attacker performing simultaneous monitoring of traffic at the boundaries of the Tor network—i.e., the traffic entering and exiting the network. While Tor does provide protection against traffic analysis, it cannot prevent traffic confirmation via end-to-end correlation.
There are no documented cases of this limitation being used at scale; as of the 2013 Snowden leaks, law enforcement agencies such as the NSA were unable to perform dragnet surveillance on Tor itself, and relied on attacking other software used in conjunction with Tor, such as vulnerabilities in web browsers.
However, targeted attacks have been able to make use of traffic confirmation on individual Tor users, via police surveillance or investigations confirming that a particular person already under suspicion was sending Tor traffic at the exact times the connections in question occurred. The relay early traffic confirmation attack also relied on traffic confirmation as part of its mechanism, though on requests for onion service descriptors, rather than traffic to the destination server.
=== Consensus attacks ===
Like many decentralized systems, Tor relies on a consensus mechanism to periodically update its current operating parameters. For Tor, these include network parameters like which nodes are good and bad relays, exits, guards, and how much traffic each can handle. Tor's architecture for deciding the consensus relies on a small number of directory authority nodes voting on current network parameters. Currently, there are nine directory authority nodes, and their health is publicly monitored. The IP addresses of the authority nodes are hard coded into each Tor client. The authority nodes vote every hour to update the consensus, and clients download the most recent consensus on startup. A compromise of the majority of the directory authorities could alter the consensus in a way that is beneficial to an attacker. Alternatively, a network congestion attack, such as a DDoS, could theoretically prevent the consensus nodes from communicating, and thus prevent voting to update the consensus (though such an attack would be visible).
=== Server-side restrictions ===
Tor makes no attempt to conceal the IP addresses of exit relays, or hide from a destination server the fact that a user is connecting via Tor. Operators of Internet sites therefore have the ability to prevent traffic from Tor exit nodes or to offer reduced functionality for Tor users. For example, Wikipedia generally forbids all editing when using Tor or when using an IP address also used by a Tor exit node, and the BBC blocks the IP addresses of all known Tor exit nodes from its iPlayer service.
Apart from intentional restrictions of Tor traffic, Tor use can trigger defense mechanisms on websites intended to block traffic from IP addresses observed to generate malicious or abnormal traffic. Because traffic from all Tor users is shared by a comparatively small number of exit relays, tools can misidentify distinct sessions as originating from the same user, and attribute the actions of a malicious user to a non-malicious user, or observe an unusually large volume of traffic for one IP address. Conversely, a site may observe a single session connecting from different exit relays, with different Internet geolocations, and assume the connection is malicious, or trigger geo-blocking. When these defense mechanisms are triggered, it can result in the site blocking access, or presenting captchas to the user.
=== Relay early traffic confirmation attack ===
In July 2014, the Tor Project issued a security advisory for a "relay early traffic confirmation" attack, disclosing the discovery of a group of relays attempting to de-anonymize onion service users and operators. A set of onion service directory nodes (i.e., the Tor relays responsible for providing information about onion services) were found to be modifying traffic of requests. The modifications made it so the requesting client's guard relay, if controlled by the same adversary as the onion service directory node, could easily confirm that the traffic was from the same request. This would allow the adversary to simultaneously know the onion service involved in the request, and the IP address of the client requesting it (where the requesting client could be a visitor or owner of the onion service).
The attacking nodes joined the network on 30 January, using a Sybil attack to comprise 6.4% of guard relay capacity, and were removed on 4 July. In addition to removing the attacking relays, the Tor application was patched to prevent the specific traffic modifications that made the attack possible.
In November 2014, there was speculation in the aftermath of Operation Onymous, resulting in 17 arrests internationally, that a Tor weakness had been exploited. A representative of Europol was secretive about the method used, saying: "This is something we want to keep for ourselves. The way we do this, we can't share with the whole world, because we want to do it again and again and again."
A BBC source cited a "technical breakthrough" that allowed tracking physical locations of servers, and the initial number of infiltrated sites led to the exploit speculation. A Tor Project representative downplayed this possibility, suggesting that execution of more traditional police work was more likely.
In November 2015, court documents suggested a connection between the attack and arrests, and raised concerns about security research ethics. The documents revealed that the FBI obtained IP addresses of onion services and their visitors from a "university-based research institute", leading to arrests. Reporting from Motherboard found that the timing and nature of the relay early traffic confirmation attack matched the description in the court documents. Multiple experts, including a senior researcher with the ICSI of UC Berkeley, Edward Felten of Princeton University, and the Tor Project agreed that the CERT Coordination Center of Carnegie Mellon University was the institute in question. Concerns raised included the role of an academic institution in policing, sensitive research involving non-consenting users, the non-targeted nature of the attack, and the lack of disclosure about the incident.
=== Vulnerable applications ===
Many attacks targeted at Tor users result from flaws in applications used with Tor, either in the application itself, or in how it operates in combination with Tor. E.g., researchers with Inria in 2011 performed an attack on BitTorrent users by attacking clients that established connections both using and not using Tor, then associating other connections shared by the same Tor circuit.
==== Fingerprinting ====
When using Tor, applications may still provide data tied to a device, such as information about screen resolution, installed fonts, language configuration, or supported graphics functionality, reducing the set of users a connection could possibly originate from, or uniquely identifying them. This information is known as the device fingerprint, or browser fingerprint in the case of web browsers. Applications implemented with Tor in mind, such as Tor Browser, can be designed to minimize the amount of information leaked by the application and reduce its fingerprint.
==== Eavesdropping ====
Tor cannot encrypt the traffic between an exit relay and the destination server.
If an application does not add an additional layer of end-to-end encryption between the client and the server, such as Transport Layer Security (TLS, used in HTTPS) or the Secure Shell (SSH) protocol, this allows the exit relay to capture and modify traffic. Attacks from malicious exit relays have recorded usernames and passwords, and modified Bitcoin addresses to redirect transactions.
Some of these attacks involved actively removing the HTTPS protections that would have otherwise been used. To attempt to prevent this, Tor Browser has since made it so only connections via onion services or HTTPS are allowed by default.
==== Firefox/Tor browser attacks ====
In 2011, the Dutch authority investigating child pornography discovered the IP address of a Tor onion service site from an unprotected administrator's account and gave it to the FBI, who traced it to Aaron McGrath. After a year of surveillance, the FBI launched "Operation Torpedo" which resulted in McGrath's arrest and allowed them to install their Network Investigative Technique (NIT) malware on the servers for retrieving information from the users of the three onion service sites that McGrath controlled. The technique exploited a vulnerability in Firefox/Tor Browser that had already been patched, and therefore targeted users that had not updated. A Flash application sent a user's IP address directly back to an FBI server, and resulted in revealing at least 25 US users as well as numerous users from other countries. McGrath was sentenced to 20 years in prison in early 2014, while at least 18 others (including a former Acting HHS Cyber Security Director) were sentenced in subsequent cases.
In August 2013, it was discovered that the Firefox browsers in many older versions of the Tor Browser Bundle were vulnerable to a JavaScript-deployed shellcode attack, as NoScript was not enabled by default. Attackers used this vulnerability to extract users' MAC and IP addresses and Windows computer names. News reports linked this to an FBI operation targeting Freedom Hosting's owner, Eric Eoin Marques, who was arrested on a provisional extradition warrant issued by a United States court on 29 July. The FBI extradited Marques from Ireland to the state of Maryland on four charges: distributing, conspiring to distribute, and advertising child pornography, as well as aiding and abetting advertising of child pornography. The FBI acknowledged the attack in a 12 September 2013 court filing in Dublin; further technical details from a training presentation leaked by Edward Snowden revealed the code name for the exploit as "EgotisticalGiraffe".
In 2022, Kaspersky researchers found that when looking up "Tor Browser" in Chinese on YouTube, one of the URLs provided under the top-ranked Chinese-language video actually pointed to malware disguised as Tor Browser. Once installed, it saved browsing history and form data that the genuine Tor Browser discards by default, and downloaded malicious components if the device's IP address was in China. Kaspersky researchers noted that the malware was not stealing data to sell for profit, but was designed to identify users.
==== Onion service configuration ====
Like client applications that use Tor, servers relying on onion services for protection can introduce their own weaknesses. Servers that are reachable through Tor onion services and the public Internet can be subject to correlation attacks, and all onion services are susceptible to misconfigured services (e.g., identifying information included by default in web server error responses), leaking uptime and downtime statistics, intersection attacks, or various user errors. The OnionScan program, written by independent security researcher Sarah Jamie Lewis, comprehensively examines onion services for such flaws and vulnerabilities.
== Software ==
The main implementation of Tor is written primarily in C. Starting in 2020, the Tor Project began development of a full rewrite of the C Tor codebase in Rust. The project, named Arti, was publicly announced in July 2021.
=== Tor Browser ===
The Tor Browser is a web browser capable of accessing the Tor network. It was created as the Tor Browser Bundle by Steven J. Murdoch and announced in January 2008. The Tor Browser consists of a modified Mozilla Firefox ESR web browser, the TorButton, TorLauncher, NoScript and the Tor proxy. Users can run the Tor Browser from removable media. It can operate under Microsoft Windows, macOS, Android and Linux.
The default search engine is DuckDuckGo (until version 4.5, Startpage.com was its default). The Tor Browser automatically starts Tor background processes and routes traffic through the Tor network. Upon termination of a session the browser deletes privacy-sensitive data such as HTTP cookies and the browsing history. This is effective in reducing web tracking and canvas fingerprinting, and it also helps to prevent creation of a filter bubble.
To allow download from places where accessing the Tor Project URL may be risky or blocked, a GitHub repository is maintained with links for releases hosted in other domains.
=== Tor Messenger ===
On 29 October 2015, the Tor Project released Tor Messenger Beta, an instant messaging program based on Instantbird with Tor and OTR built in and used by default. Like Pidgin and Adium, Tor Messenger supports multiple instant messaging protocols; however, it accomplishes this without relying on libpurple, implementing all chat protocols in the memory-safe language JavaScript instead.
According to Lucian Armasu of Tom's Hardware, in April 2018, the Tor Project shut down the Tor Messenger project for three reasons: the developers of "Instabird" [sic] had discontinued support for their own software, limited resources, and known metadata problems. The Tor Messenger developers explained that overcoming any vulnerabilities discovered in the future would be impossible due to the project relying on outdated software dependencies.
=== Tor Phone ===
In 2016, Tor developer Mike Perry announced a prototype Tor-enabled smartphone based on CopperheadOS. It was meant as a direction for Tor on mobile. The project was called 'Mission Improbable'. Copperhead's then lead developer Daniel Micay welcomed the prototype.
=== Third-party applications ===
The Vuze (formerly Azureus) BitTorrent client, Bitmessage anonymous messaging system, and TorChat instant messenger include Tor support. The Briar messenger routes all messaging via Tor by default. OnionShare allows users to share files using Tor.
The Guardian Project is actively developing a free and open-source suite of applications and firmware for the Android operating system to improve the security of mobile communications. The applications include the ChatSecure instant messaging client, Orbot Tor implementation (also available for iOS), Orweb (discontinued) privacy-enhanced mobile browser, Orfox, the mobile counterpart of the Tor Browser, ProxyMob Firefox add-on, and ObscuraCam.
Onion Browser is an open-source, privacy-enhancing web browser for iOS, which uses Tor. It is available in the iOS App Store, and its source code is available on GitHub.
Brave added support for Tor in its desktop browser's private-browsing mode.
=== Security-focused operating systems ===
In September 2024, it was announced that Tails, a security-focused operating system, had become part of the Tor Project. Other security-focused operating systems that make or made extensive use of Tor include Hardened Linux From Scratch, Incognito, Liberté Linux, Qubes OS, Subgraph, Parrot OS, Tor-ramdisk, and Whonix.
== Reception, impact, and legislation ==
Tor has been praised for providing privacy and anonymity to vulnerable Internet users such as political activists fearing surveillance and arrest, ordinary web users seeking to circumvent censorship, and people who have been threatened with violence or abuse by stalkers. The U.S. National Security Agency (NSA) has called Tor "the king of high-secure, low-latency Internet anonymity", and BusinessWeek magazine has described it as "perhaps the most effective means of defeating the online surveillance efforts of intelligence agencies around the world". Other media have described Tor as "a sophisticated privacy tool", "easy to use" and "so secure that even the world's most sophisticated electronic spies haven't figured out how to crack it".
Advocates for Tor say it supports freedom of expression, including in countries where the Internet is censored, by protecting the privacy and anonymity of users. The mathematical underpinnings of Tor lead it to be characterized as acting "like a piece of infrastructure, and governments naturally fall into paying for infrastructure they want to use".
The project was originally developed on behalf of the U.S. intelligence community and continues to receive U.S. government funding, and has been criticized as "more resembl[ing] a spook project than a tool designed by a culture that values accountability or transparency". As of 2012, 80% of The Tor Project's $2M annual budget came from the United States government, with the U.S. State Department, the Broadcasting Board of Governors, and the National Science Foundation as major contributors, aiming "to aid democracy advocates in authoritarian states". Other public sources of funding include DARPA, the U.S. Naval Research Laboratory, and the Government of Sweden. Some have proposed that the government values Tor's commitment to free speech, and uses the darknet to gather intelligence. Tor also receives funding from NGOs including Human Rights Watch, and private sponsors including Reddit and Google. Dingledine said that the United States Department of Defense funds are more similar to a research grant than a procurement contract. Tor executive director Andrew Lewman said that even though it accepts funds from the U.S. federal government, the Tor service did not collaborate with the NSA to reveal identities of users.
Critics say that Tor is not as secure as it claims, pointing to U.S. law enforcement's investigations and shutdowns of Tor-using sites such as web-hosting company Freedom Hosting and online marketplace Silk Road. In October 2013, after analyzing documents leaked by Edward Snowden, The Guardian reported that the NSA had repeatedly tried to crack Tor and had failed to break its core security, although it had had some success attacking the computers of individual Tor users. The Guardian also published a 2012 NSA classified slide deck, entitled "Tor Stinks", which said: "We will never be able to de-anonymize all Tor users all the time", but "with manual analysis we can de-anonymize a very small fraction of Tor users". When Tor users are arrested, it is typically due to human error, not to the core technology being hacked or cracked. On 7 November 2014, for example, a joint operation by the FBI, ICE Homeland Security Investigations, and European law enforcement agencies led to 17 arrests and the seizure of 27 sites containing 400 pages. A late 2014 report by Der Spiegel using a new cache of Snowden leaks revealed, however, that as of 2012 the NSA deemed Tor on its own as a "major threat" to its mission, and when used in conjunction with other privacy tools such as OTR, Cspace, ZRTP, RedPhone, Tails, and TrueCrypt was ranked as "catastrophic," leading to a "near-total loss/lack of insight to target communications, presence..."
=== 2011 ===
In March 2011, The Tor Project received the Free Software Foundation's 2010 Award for Projects of Social Benefit. The citation read, "Using free software, Tor has enabled roughly 36 million people around the world to experience freedom of access and expression on the Internet while keeping them in control of their privacy and anonymity. Its network has proved pivotal in dissident movements in both Iran and more recently Egypt."
Iran tried to block Tor at least twice in 2011. One attempt simply blocked all servers with 2-hour-expiry security certificates; it was successful for less than 24 hours.
=== 2012 ===
In 2012, Foreign Policy magazine named Dingledine, Mathewson, and Syverson among its Top 100 Global Thinkers "for making the web safe for whistleblowers".
=== 2013 ===
In 2013, Jacob Appelbaum described Tor as a "part of an ecosystem of software that helps people regain and reclaim their autonomy. It helps to enable people to have agency of all kinds; it helps others to help each other and it helps you to help yourself. It runs, it is open and it is supported by a large community spread across all walks of life."
In June 2013, whistleblower Edward Snowden used Tor to send information about PRISM to The Washington Post and The Guardian.
=== 2014 ===
In 2014, the Russian government offered a $111,000 contract to "study the possibility of obtaining technical information about users and users' equipment on the Tor anonymous network".
In September 2014, in response to reports that Comcast had been discouraging customers from using the Tor Browser, Comcast issued a public statement that "We have no policy against Tor, or any other browser or software."
In October 2014, The Tor Project hired the public relations firm Thomson Communications to improve its public image (particularly regarding the terms "Dark Net" and "hidden services," which are widely viewed as being problematic) and to educate journalists about the technical aspects of Tor.
Turkey blocked downloads of Tor Browser from the Tor Project.
=== 2015 ===
In June 2015, the special rapporteur from the United Nations' Office of the High Commissioner for Human Rights specifically mentioned Tor in the context of the debate in the U.S. about allowing so-called backdoors in encryption programs for law enforcement purposes in an interview for The Washington Post.
In July 2015, the Tor Project announced an alliance with the Library Freedom Project to establish exit nodes in public libraries. The pilot program established a middle relay running on the excess bandwidth afforded by the Kilton Library in Lebanon, New Hampshire, making it the first library in the U.S. to host a Tor node; the program was briefly put on hold when the local city manager and deputy sheriff voiced concerns over the cost of defending search warrants for information passed through the Tor exit node. Although the Department of Homeland Security (DHS) had alerted New Hampshire authorities to the fact that Tor is sometimes used by criminals, the Lebanon Deputy Police Chief and the Deputy City Manager averred that no pressure to strong-arm the library was applied, and the service was re-established on 15 September 2015. U.S. Rep. Zoe Lofgren (D-Calif.) released a letter on 10 December 2015, in which she asked the DHS to clarify its procedures, stating that "While the Kilton Public Library's board ultimately voted to restore their Tor relay, I am no less disturbed by the possibility that DHS employees are pressuring or persuading public and private entities to discontinue or degrade services that protect the privacy and anonymity of U.S. citizens." In a 2016 interview, Kilton Library IT Manager Chuck McAndrew stressed the importance of getting libraries involved with Tor: "Librarians have always cared deeply about protecting privacy, intellectual freedom, and access to information (the freedom to read). Surveillance has a very well-documented chilling effect on intellectual freedom. It is the job of librarians to remove barriers to information." The second library to host a Tor node was the Las Naves Public Library in Valencia, Spain, implemented in the first months of 2016.
In August 2015, an IBM security research group, called "X-Force", put out a quarterly report that advised companies to block Tor on security grounds, citing a "steady increase" in attacks from Tor exit nodes as well as botnet traffic.
In September 2015, Luke Millanta created OnionView (now defunct), a web service that plots the location of active Tor relay nodes onto an interactive map of the world. The project's purpose was to detail the network's size and escalating growth rate.
In December 2015, Daniel Ellsberg (of the Pentagon Papers), Cory Doctorow (of Boing Boing), Edward Snowden, and artist-activist Molly Crabapple, amongst others, announced their support of Tor.
=== 2016 ===
In March 2016, New Hampshire state representative Keith Ammon introduced a bill allowing public libraries to run privacy software. The bill specifically referenced Tor. The text was crafted with extensive input from Alison Macrina, the director of the Library Freedom Project. The bill was passed by the House 268–62.
Also in March 2016, the first Tor node, specifically a middle relay, was established at a library in Canada, the Graduate Resource Centre (GRC) in the Faculty of Information and Media Studies (FIMS) at the University of Western Ontario. Given that the running of a Tor exit node is an unsettled area of Canadian law, and that in general institutions are more capable than individuals to cope with legal pressures, Alison Macrina of the Library Freedom Project has opined that in some ways she would like to see intelligence agencies and law enforcement attempt to intervene in the event that an exit node were established.
On 16 May 2016, CNN reported on the case of core Tor developer "isis agora lovecruft", who had fled to Germany under the threat of a subpoena by the FBI during the Thanksgiving break of the previous year. The Electronic Frontier Foundation legally represented lovecruft.
On 2 December 2016, The New Yorker reported on burgeoning digital privacy and security workshops in the San Francisco Bay Area, particularly at the hackerspace Noisebridge, in the wake of the 2016 United States presidential election; downloading the Tor browser was mentioned. Also in December 2016, Turkey blocked the use of Tor, together with ten of the most-used VPN services in Turkey, which were popular ways of accessing banned social media sites and services.
Tor (and Bitcoin) was fundamental to the operation of the dark web marketplace AlphaBay, which was taken down in an international law enforcement operation in July 2017. Despite federal claims that Tor would not shield a user, however, elementary operational security errors outside of the ambit of the Tor network led to the site's downfall.
=== 2017 ===
In June 2017, the Democratic Socialists of America recommended intermittent Tor usage for politically active organizations and individuals as a defensive mitigation against information security threats. And in August 2017, according to news reports, cybersecurity firms that specialize in monitoring and researching the dark web (which relies on Tor as its infrastructure) on behalf of banks and retailers routinely share their findings with the FBI and other law enforcement agencies "when possible and necessary" regarding illegal content. The Russian-speaking underground offering a crime-as-a-service model is regarded as particularly robust.
=== 2018 ===
In June 2018, Venezuela blocked access to the Tor network. The block affected both direct connections to the network and connections being made via bridge relays.
On 20 June 2018, Bavarian police raided the homes of the board members of the non-profit Zwiebelfreunde, a member of torservers.net, which handles the European financial transactions of riseup.net in connection with a blog post there which apparently promised violence against the upcoming Alternative for Germany convention. Tor came out strongly against the raid on its support organization, which provides legal and financial aid for the setting up and maintenance of high-speed relays and exit nodes. According to torservers.net, on 23 August 2018 the German court at Landgericht München ruled that the raid and seizures were illegal. The hardware and documentation seized had been kept under seal, and purportedly were neither analyzed nor evaluated by the Bavarian police.
Since October 2018, Chinese online communities within Tor have begun to dwindle due to increased efforts to stop them by the Chinese government.
=== 2019 ===
In November 2019, Edward Snowden called for a full, unabridged simplified Chinese translation of his autobiography, Permanent Record, as the Chinese publisher had violated their agreement by expurgating all mentions of Tor and other matters deemed politically sensitive by the Chinese Communist Party.
=== 2021 ===
On 8 December 2021, the Russian government agency Roskomnadzor announced that it had banned Tor and six VPN services for failing to abide by the Russian Internet blacklist. Russian ISPs unsuccessfully attempted to block Tor's main website as well as several bridges beginning on 1 December 2021. The Tor Project has appealed to Russian courts over this ban.
=== 2022 ===
In response to Internet censorship during the Russian invasion of Ukraine, the BBC and VOA have directed Russian audiences to Tor. The Russian government increased efforts to block access to Tor through technical and political means, while the network reported an increase in traffic from Russia, and increased Russian use of its anti-censorship Snowflake tool.
Russian courts temporarily lifted the blockade on Tor's website (but not connections to relays) on 24 May 2022, due to Russian law requiring that the Tor Project be involved in the case. However, the blockade was reinstated on 21 July 2022.
Iran implemented rolling internet blackouts during the Mahsa Amini protests, and Tor and Snowflake were used to circumvent them.
China, with its highly centralized control of its internet, had effectively blocked Tor.
== Improved security ==
Tor responded to earlier vulnerabilities listed above by patching them and improving security. In one way or another, human (user) errors can lead to detection. The Tor Project website provides best-practice instructions on how to properly use the Tor Browser. When improperly used, Tor is not secure. For example, Tor warns its users that not all traffic is protected; only the traffic routed through the Tor Browser is protected. Users are also warned to use HTTPS versions of websites, not to torrent with Tor, not to enable browser plugins, not to open documents downloaded through Tor while online, and to use safe bridges. Users are also warned that they cannot provide their name or other revealing information in web forums over Tor and stay anonymous at the same time.
Despite claims made by intelligence agencies in 2013 that 80% of Tor users would be de-anonymized within six months, that has still not happened. In fact, as late as September 2016, the FBI could not locate, de-anonymize and identify the Tor user who hacked into the email account of a staffer on Hillary Clinton's email server.
The best tactic of law enforcement agencies to de-anonymize users appears to remain with Tor-relay adversaries running poisoned nodes, as well as counting on the users themselves using the Tor browser improperly. For example, downloading a video through the Tor browser and then opening the same file on an unprotected hard drive while online can make the users' real IP addresses available to authorities.
=== Odds of detection ===
When properly used, odds of being de-anonymized through Tor are said to be extremely low. Tor project's co-founder Nick Mathewson explained that the problem of "Tor-relay adversaries" running poisoned nodes means that a theoretical adversary of this kind is not the network's greatest threat:
"No adversary is truly global, but no adversary needs to be truly global," he says. "Eavesdropping on the entire Internet is a several-billion-dollar problem. Running a few computers to eavesdrop on a lot of traffic, a selective denial of service attack to drive traffic to your computers, that's like a tens-of-thousands-of-dollars problem." At the most basic level, an attacker who runs two poisoned Tor nodes—one entry, one exit—is able to analyse traffic and thereby identify the tiny, unlucky percentage of users whose circuit happened to cross both of those nodes. In 2016, the Tor network offered a total of around 7,000 relays, including around 2,000 guard (entry) nodes and around 1,000 exit nodes, so the odds of such an event happening were one in two million (1⁄2000 × 1⁄1000), give or take.
Tor does not provide protection against end-to-end timing attacks: if an attacker can watch the traffic coming out of the target computer, and also the traffic arriving at the target's chosen destination (e.g. a server hosting a .onion site), that attacker can use statistical analysis to discover that they are part of the same circuit.
A similar attack has been used by German authorities to track down users related to Boystown.
== Levels of security ==
Depending on individual user needs, the Tor Browser offers three levels of security, located under the Security Level icon (the small gray shield at the top-right of the screen) > Advanced Security Settings. In addition to encrypting the data, including constantly changing an IP address through a virtual circuit comprising successive, randomly selected Tor relays, several other layers of security are at a user's disposal:
=== Standard ===
At this level, all features from the Tor Browser and other websites are enabled.
=== Safer ===
This level eliminates website features that are often pernicious to the user. This may cause some sites to lose functionality. JavaScript is disabled on all non-HTTPS sites; some fonts and mathematical symbols are disabled. Also, audio and video (HTML5 media) are click-to-play.
=== Safest ===
This level only allows website features required for static sites and basic services. These changes affect images, media, and scripts. JavaScript is disabled by default on all sites; some fonts, icons, math symbols, and images are disabled; audio and video (HTML5 media) are click-to-play.
=== Introduction of proof-of-work defense for Onion services ===
In 2023, Tor unveiled a new defense mechanism to safeguard its onion services against denial of service (DoS) attacks. With the release of Tor 0.4.8, this proof-of-work (PoW) defense promises to prioritize legitimate network traffic while deterring malicious attacks.
== Notes ==
== See also ==
== Citations ==
== General and cited references ==
== External links ==
Official website | Wikipedia/Tor_(anonymity_network) |
MARS is a block cipher that was IBM's submission to the Advanced Encryption Standard process. MARS was selected as an AES finalist in August 1999, after the AES2 conference in March 1999, where it was voted as the fifth and last finalist algorithm.
The MARS design team included Don Coppersmith, who had been involved in the creation of the previous Data Encryption Standard (DES) twenty years earlier. The project was specifically designed to resist future advances in cryptography by adopting a layered, compartmentalized approach.
IBM's official report stated that MARS and Serpent were the only two finalists to implement any form of safety net with regard to would-be advances in cryptographic mathematics. The Twofish team made a similar statement about its cipher.
MARS has a 128-bit block size and a variable key size of between 128 and 448 bits (in 32-bit increments). Unlike most block ciphers, MARS has a heterogeneous structure: several rounds of a cryptographic core are "jacketed" by unkeyed mixing rounds, together with key whitening.
== Security analysis ==
Subkeys with long runs of ones or zeroes may lead to efficient attacks on MARS. The two least significant bits of the round keys used in multiplication are always set to 1. Thus, there are always two inputs that are unchanged by the multiplication regardless of the subkey, and two others whose output is fixed regardless of the subkey.
A meet-in-the-middle attack published in 2004 by John Kelsey and Bruce Schneier can break 21 out of 32 rounds of MARS.
== Notes and references ==
== External links ==
256bit Ciphers - MARS Reference implementation and derived code
Specification of MARS Archived 2018-09-11 at the Wayback Machine
IBM's page on MARS
SCAN's entry on MARS
John Savard's description of MARS | Wikipedia/MARS_(cryptography) |
One-key MAC (OMAC) is a family of message authentication codes constructed from a block cipher much like the CBC-MAC algorithm. It may be used to provide assurance of the authenticity and, hence, the integrity of data. Two versions are defined:
The original OMAC of February 2003, which is rarely used. The preferred name is now "OMAC2".
The OMAC1 refinement, which became an NIST recommendation in May 2005 under the name CMAC.
OMAC is free for all uses: it is not covered by any patents.
== History ==
The core of the CMAC algorithm is a variation of CBC-MAC that Black and Rogaway proposed and analyzed under the name "XCBC" and submitted to NIST. The XCBC algorithm efficiently addresses the security deficiencies of CBC-MAC, but requires three keys.
Iwata and Kurosawa proposed an improvement of XCBC that requires less key material (just one key) and named the resulting algorithm One-Key CBC-MAC (OMAC) in their papers. They later submitted the OMAC1 (= CMAC), a refinement of OMAC, and additional security analysis.
== Algorithm ==
To generate an ℓ-bit CMAC tag (t) of a message (m) using a b-bit block cipher (E) and a secret key (k), one first generates two b-bit sub-keys (k1 and k2) using the following algorithm (this is equivalent to multiplication by x and x² in the finite field GF(2^b)). Let ≪ denote the standard left-shift operator and ⊕ denote bit-wise exclusive or:
Calculate a temporary value k0 = Ek(0).
If msb(k0) = 0, then k1 = k0 ≪ 1, else k1 = (k0 ≪ 1) ⊕ C; where C is a certain constant that depends only on b. (Specifically, C is the non-leading coefficients of the lexicographically first irreducible degree-b binary polynomial with the minimal number of ones: 0x1B for 64-bit, 0x87 for 128-bit, and 0x425 for 256-bit blocks.)
If msb(k1) = 0, then k2 = k1 ≪ 1, else k2 = (k1 ≪ 1) ⊕ C.
Return keys (k1, k2) for the MAC generation process.
As a small example, suppose b = 4, C = 0011₂, and k0 = Ek(0) = 0101₂. Then k1 = 1010₂ and k2 = 0100₂ ⊕ 0011₂ = 0111₂.
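The doubling step can be written directly from the description above. This is a minimal Python sketch (the function name cmac_subkeys is illustrative); it reproduces the small example with b = 4 and C = 0011, and with b = 128 and C = 0x87 it matches the standard CMAC subkey schedule.

```python
def cmac_subkeys(k0: int, b: int, C: int) -> tuple[int, int]:
    """Derive the CMAC subkeys k1, k2 from k0 = Ek(0).

    Each step doubles in GF(2^b): shift left by one bit and, if the
    bit shifted out (the former msb) was 1, reduce by XOR-ing in C.
    """
    mask = (1 << b) - 1  # keep values to b bits

    def dbl(x: int) -> int:
        shifted = (x << 1) & mask
        return shifted ^ C if (x >> (b - 1)) & 1 else shifted

    k1 = dbl(k0)
    k2 = dbl(k1)
    return k1, k2

# The small example above: b = 4, C = 0011, k0 = Ek(0) = 0101
k1, k2 = cmac_subkeys(0b0101, 4, 0b0011)
print(f"k1={k1:04b} k2={k2:04b}")  # k1=1010 k2=0111
```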
The CMAC tag generation process is as follows:
Divide message into b-bit blocks m = m1 ∥ ... ∥ mn−1 ∥ mn, where m1, ..., mn−1 are complete blocks. (The empty message is treated as one incomplete block.)
If mn is a complete block then mn′ = k1 ⊕ mn else mn′ = k2 ⊕ (mn ∥ 10...0₂).
Let c0 = 00...0₂.
For i = 1, ..., n − 1, calculate ci = Ek(ci−1 ⊕ mi).
cn = Ek(cn−1 ⊕ mn′)
Output t = msbℓ(cn).
The verification process is as follows:
Use the above algorithm to generate the tag.
Check that the generated tag is equal to the received tag.
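Putting the pieces together, the tag-generation and verification steps can be sketched as follows. This is not a standard-conformant implementation: the 16-bit toy_ek is only a stand-in for a real block cipher E (real CMAC uses e.g. AES), and the subkey values at the bottom are illustrative constants rather than values derived from Ek(0).

```python
import hashlib

def toy_ek(x: int) -> int:
    """Toy 16-bit keyed function standing in for the block cipher E.
    NOT secure -- it only illustrates the data flow."""
    return int.from_bytes(hashlib.sha256(b"key" + x.to_bytes(2, "big")).digest()[:2], "big")

def cmac_tag(msg: bytes, ek, b_bytes: int, k1: int, k2: int, tlen: int) -> int:
    """Tag generation following the numbered steps above."""
    b = 8 * b_bytes
    # Step 1: split into b-bit blocks (the empty message is one incomplete block).
    blocks = [msg[i:i + b_bytes] for i in range(0, len(msg), b_bytes)] or [b""]
    last = blocks[-1]
    # Step 2: mask a complete final block with k1, or pad with 10...0 and use k2.
    if len(last) == b_bytes:
        mn = int.from_bytes(last, "big") ^ k1
    else:
        padded = last + b"\x80" + b"\x00" * (b_bytes - len(last) - 1)
        mn = int.from_bytes(padded, "big") ^ k2
    # Steps 3-5: CBC chain starting from c0 = 0.
    c = 0
    for blk in blocks[:-1]:
        c = ek(c ^ int.from_bytes(blk, "big"))
    c = ek(c ^ mn)
    # Step 6: t = msb_l(cn).
    return c >> (b - tlen)

def cmac_verify(msg: bytes, tag: int, ek, b_bytes: int, k1: int, k2: int, tlen: int) -> bool:
    # Recompute and compare; a production implementation would use a
    # constant-time comparison to avoid timing side channels.
    return cmac_tag(msg, ek, b_bytes, k1, k2, tlen) == tag

k1, k2 = 0x1A2B, 0x3C4D  # illustrative; real CMAC derives these from Ek(0)
t = cmac_tag(b"hello world", toy_ek, 2, k1, k2, 8)
assert cmac_verify(b"hello world", t, toy_ek, 2, k1, k2, 8)
```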
== Variants ==
CMAC-C1 is a variant of CMAC that provides additional commitment and context-discovery security guarantees.
== Implementations ==
Python implementation: see the usage of the AES_CMAC() function in "impacket/blob/master/tests/misc/test_crypto.py", and its definition in "impacket/blob/master/impacket/crypto.py"
Ruby implementation
== References ==
== External links ==
RFC 4493 The AES-CMAC Algorithm
RFC 4494 The AES-CMAC-96 Algorithm and Its Use with IPsec
RFC 4615 The Advanced Encryption Standard-Cipher-based Message Authentication Code-Pseudo-Random Function-128 (AES-CMAC-PRF-128)
OMAC Online Test
More information on OMAC
Rust implementation | Wikipedia/OMAC_(cryptography) |
A cryptographic hash function (CHF) is a hash algorithm (a map of an arbitrary binary string to a binary string with a fixed size of n bits) that has special properties desirable for a cryptographic application:
the probability of a particular n-bit output result (hash value) for a random input string ("message") is 2^−n (as for any good hash), so the hash value can be used as a representative of the message;
finding an input string that matches a given hash value (a pre-image) is infeasible, assuming all input strings are equally likely. The resistance to such search is quantified as security strength: a cryptographic hash with n bits of hash value is expected to have a preimage resistance strength of n bits, unless the space of possible input values is significantly smaller than 2^n (a practical example can be found in § Attacks on hashed passwords);
a second preimage resistance strength, with the same expectations, refers to a similar problem of finding a second message that matches the given hash value when one message is already known;
finding any pair of different messages that yield the same hash value (a collision) is also infeasible: a cryptographic hash is expected to have a collision resistance strength of n/2 bits (lower due to the birthday paradox).
Cryptographic hash functions have many information-security applications, notably in digital signatures, message authentication codes (MACs), and other forms of authentication. They can also be used as ordinary hash functions, to index data in hash tables, for fingerprinting, to detect duplicate data or uniquely identify files, and as checksums to detect accidental data corruption. Indeed, in information-security contexts, cryptographic hash values are sometimes called (digital) fingerprints, checksums, (message) digests, or just hash values, even though all these terms stand for more general functions with rather different properties and purposes.
Non-cryptographic hash functions are used in hash tables and to detect accidental errors; their constructions frequently provide no resistance to a deliberate attack. For example, a denial-of-service attack on hash tables is possible if the collisions are easy to find, as in the case of linear cyclic redundancy check (CRC) functions.
== Properties ==
Most cryptographic hash functions are designed to take a string of any length as input and produce a fixed-length hash value.
A cryptographic hash function must be able to withstand all known types of cryptanalytic attack. In theoretical cryptography, the security level of a cryptographic hash function has been defined using the following properties:
Pre-image resistance
Given a hash value h, it should be difficult to find any message m such that h = hash(m). This concept is related to that of a one-way function. Functions that lack this property are vulnerable to preimage attacks.
Second pre-image resistance
Given an input m1, it should be difficult to find a different input m2 such that hash(m1) = hash(m2). This property is sometimes referred to as weak collision resistance. Functions that lack this property are vulnerable to second-preimage attacks.
Collision resistance
It should be difficult to find two different messages m1 and m2 such that hash(m1) = hash(m2). Such a pair is called a cryptographic hash collision. This property is sometimes referred to as strong collision resistance. It requires a hash value at least twice as long as that required for pre-image resistance; otherwise, collisions may be found by a birthday attack.
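The birthday bound can be demonstrated on a deliberately truncated hash. The sketch below (truncation to two bytes is purely for demonstration) keeps a 16-bit digest, so a collision is expected after roughly 2^8 = 256 random messages, far fewer than the ~2^16 attempts needed to hit one specific digest value.

```python
import hashlib
from itertools import count

def truncated_hash(msg: bytes, nbytes: int = 2) -> bytes:
    """First nbytes of SHA-256: a toy 16-bit 'hash' for demonstration."""
    return hashlib.sha256(msg).digest()[:nbytes]

def birthday_collision(nbytes: int = 2):
    """Hash distinct messages until two of them share a digest.

    For an n-bit digest, a collision is expected after about
    2^(n/2) attempts (the birthday paradox); by the pigeonhole
    principle one must occur within 2^n + 1 attempts.
    """
    seen = {}
    for i in count():
        m = str(i).encode()
        d = truncated_hash(m, nbytes)
        if d in seen:
            return seen[d], m, i + 1  # colliding pair and attempt count
        seen[d] = m

m1, m2, tries = birthday_collision()
assert m1 != m2 and truncated_hash(m1) == truncated_hash(m2)
print(f"collision after {tries} attempts")
```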
Collision resistance implies second pre-image resistance but does not imply pre-image resistance. The weaker assumption is always preferred in theoretical cryptography, but in practice, a hash function that is only second pre-image resistant is considered insecure and is therefore not recommended for real applications.
Informally, these properties mean that a malicious adversary cannot replace or modify the input data without changing its digest. Thus, if two strings have the same digest, one can be very confident that they are identical. Second pre-image resistance prevents an attacker from crafting a document with the same hash as a document the attacker cannot control. Collision resistance prevents an attacker from creating two distinct documents with the same hash.
A function meeting these criteria may still have undesirable properties. Currently, popular cryptographic hash functions are vulnerable to length-extension attacks: given hash(m) and len(m) but not m, by choosing a suitable m′ an attacker can calculate hash(m ∥ m′), where ∥ denotes concatenation. This property can be used to break naive authentication schemes based on hash functions. The HMAC construction works around these problems.
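The workaround can be illustrated with Python's standard hmac module (the key and message below are made up). A naive MAC built as hash(key ∥ m) with a length-extendable hash like SHA-256 lets an attacker who knows the tag and len(key ∥ m) compute a valid tag for m plus a chosen suffix; HMAC's nested construction does not have this property.

```python
import hashlib
import hmac

key = b"secret key"          # illustrative values
msg = b"amount=100&to=alice"

# Naive MAC: vulnerable to length extension with SHA-256, since the
# digest is exactly the hash's internal state after processing
# key || msg, and an attacker can resume hashing from that state.
naive_tag = hashlib.sha256(key + msg).hexdigest()

# HMAC wraps the hash twice with padded keys, defeating extension:
# HMAC(k, m) = H((k XOR opad) || H((k XOR ipad) || m))
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# Verification should use a constant-time comparison:
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())
```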
In practice, collision resistance is insufficient for many practical uses. In addition to collision resistance, it should be impossible for an adversary to find two messages with substantially similar digests; or to infer any useful information about the data, given only its digest. In particular, a hash function should behave as much as possible like a random function (often called a random oracle in proofs of security) while still being deterministic and efficiently computable. This rules out functions like the SWIFFT function, which can be rigorously proven to be collision-resistant assuming that certain problems on ideal lattices are computationally difficult, but, as a linear function, does not satisfy these additional properties.
Checksum algorithms, such as CRC32 and other cyclic redundancy checks, are designed to meet much weaker requirements and are generally unsuitable as cryptographic hash functions. For example, a CRC was used for message integrity in the WEP encryption standard, but an attack was readily discovered, which exploited the linearity of the checksum.
=== Degree of difficulty ===
In cryptographic practice, "difficult" generally means "almost certainly beyond the reach of any adversary who must be prevented from breaking the system for as long as the security of the system is deemed important". The meaning of the term is therefore somewhat dependent on the application since the effort that a malicious agent may put into the task is usually proportional to their expected gain. However, since the needed effort usually multiplies with the digest length, even a thousand-fold advantage in processing power can be neutralized by adding a dozen bits to the latter.
For messages selected from a limited set of messages, for example passwords or other short messages, it can be feasible to invert a hash by trying all possible messages in the set. Because cryptographic hash functions are typically designed to be computed quickly, special key derivation functions that require greater computing resources have been developed that make such brute-force attacks more difficult.
In some theoretical analyses "difficult" has a specific mathematical meaning, such as "not solvable in asymptotic polynomial time". Such interpretations of difficulty are important in the study of provably secure cryptographic hash functions but do not usually have a strong connection to practical security. For example, an exponential-time algorithm can sometimes still be fast enough to make a feasible attack. Conversely, a polynomial-time algorithm (e.g., one that requires n^20 steps for n-digit keys) may be too slow for any practical use.
== Illustration ==
An illustration of the potential use of a cryptographic hash is as follows: Alice poses a tough math problem to Bob and claims that she has solved it. Bob would like to try it himself, but would yet like to be sure that Alice is not bluffing. Therefore, Alice writes down her solution, computes its hash, and tells Bob the hash value (whilst keeping the solution secret). Then, when Bob comes up with the solution himself a few days later, Alice can prove that she had the solution earlier by revealing it and having Bob hash it and check that it matches the hash value given to him before. (This is an example of a simple commitment scheme; in actual practice, Alice and Bob will often be computer programs, and the secret would be something less easily spoofed than a claimed puzzle solution.)
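The exchange above can be sketched as a simple hash commitment (illustrative only; the puzzle answer is hypothetical, and the random nonce prevents Bob from simply guessing likely solutions and hashing them):

```python
import hashlib
import os

# Alice commits to her solution without revealing it.
solution = b"x = 42"                      # hypothetical puzzle answer
nonce = os.urandom(16)                    # hides low-entropy solutions
commitment = hashlib.sha256(nonce + solution).hexdigest()

# Later, Alice reveals (nonce, solution); Bob recomputes and checks.
assert hashlib.sha256(nonce + solution).hexdigest() == commitment
```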
== Applications ==
=== Verifying the integrity of messages and files ===
An important application of secure hashes is the verification of message integrity. Comparing message digests (hash digests over the message) calculated before, and after, transmission can determine whether any changes have been made to the message or file.
MD5, SHA-1, or SHA-2 hash digests are sometimes published on websites or forums to allow verification of integrity for downloaded files, including files retrieved using file sharing such as mirroring. This practice establishes a chain of trust as long as the hashes are posted on a trusted site – usually the originating site – authenticated by HTTPS. Using a cryptographic hash and a chain of trust detects malicious changes to the file. Non-cryptographic error-detecting codes such as cyclic redundancy checks only prevent against non-malicious alterations of the file, since an intentional spoof can readily be crafted to have the colliding code value.
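A minimal sketch of such a check, hashing the file in chunks so a large download need not fit in memory; the file name and expected digest are placeholders:

```python
import hashlib

def file_sha256(path, chunk_size=65536):
    """Hash a file in fixed-size chunks and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published on the (HTTPS-authenticated) site:
# expected = "..."  # value copied from the download page
# assert file_sha256("release.tar.gz") == expected
```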
=== Signature generation and verification ===
Almost all digital signature schemes require a cryptographic hash to be calculated over the message. This allows the signature calculation to be performed on the relatively small, statically sized hash digest. The message is considered authentic if the signature verification succeeds given the signature and recalculated hash digest over the message. So the message integrity property of the cryptographic hash is used to create secure and efficient digital signature schemes.
=== Password verification ===
Password verification commonly relies on cryptographic hashes. Storing all user passwords as cleartext can result in a massive security breach if the password file is compromised. One way to reduce this danger is to only store the hash digest of each password. To authenticate a user, the password presented by the user is hashed and compared with the stored hash. A password reset method is required when password hashing is performed; original passwords cannot be recalculated from the stored hash value.
However, use of standard cryptographic hash functions, such as the SHA series, is no longer considered safe for password storage. These algorithms are designed to be computed quickly, so if the hashed values are compromised, it is possible to try guessed passwords at high rates. Common graphics processing units can try billions of possible passwords each second. Password hash functions that perform key stretching – such as PBKDF2, scrypt or Argon2 – commonly use repeated invocations of a cryptographic hash to increase the time (and in some cases computer memory) required to perform brute-force attacks on stored password hash digests. For details, see § Attacks on hashed passwords.
A password hash also requires the use of a large random, non-secret salt value that can be stored with the password hash. The salt is hashed with the password, altering the password hash mapping for each password, thereby making it infeasible for an adversary to store tables of precomputed hash values to which the password hash digest can be compared or to test a large number of purloined hash values in parallel.
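A sketch of salted, deliberately slow password hashing using PBKDF2-HMAC-SHA256 from Python's standard library; the iteration count here is illustrative (NIST requires at least 10,000, and real deployments commonly use far more):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000   # illustrative; tune to your hardware and threat model

def hash_password(password, salt=None):
    """Return (salt, digest) using the slow PBKDF2-HMAC-SHA256 KDF."""
    salt = salt if salt is not None else os.urandom(16)  # random, non-secret
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Because each password gets its own salt, identical passwords produce different stored digests, defeating precomputed tables.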
=== Proof-of-work ===
A proof-of-work system (or protocol, or function) is an economic measure to deter denial-of-service attacks and other service abuses such as spam on a network by requiring some work from the service requester, usually meaning processing time by a computer. A key feature of these schemes is their asymmetry: the work must be moderately hard (but feasible) on the requester side but easy to check for the service provider. One popular system – used in Bitcoin mining and Hashcash – uses partial hash inversions to prove that work was done, to unlock a mining reward in Bitcoin, and as a good-will token to send an e-mail in Hashcash. The sender is required to find a message whose hash value begins with a number of zero bits. The average work that the sender needs to perform in order to find a valid message is exponential in the number of zero bits required in the hash value, while the recipient can verify the validity of the message by executing a single hash function. For instance, in Hashcash, a sender is asked to generate a header whose 160-bit SHA-1 hash value has the first 20 bits as zeros. The sender will, on average, have to try 2^19 times to find a valid header.
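The partial hash inversion can be sketched as follows (using SHA-256 rather than Hashcash's SHA-1, and a hypothetical header; with 12 required zero bits, roughly 2^12 attempts are expected):

```python
import hashlib
from itertools import count

def proof_of_work(header, zero_bits):
    """Find a nonce so that SHA-256(header || nonce) starts with zero_bits zeros."""
    target = 1 << (256 - zero_bits)
    for nonce in count():
        digest = hashlib.sha256(header + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:   # leading bits are zero
            return nonce

# Finding the nonce takes many hashes; checking it takes exactly one.
nonce = proof_of_work(b"example-header:", 12)
```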
=== File or data identifier ===
A message digest can also serve as a means of reliably identifying a file; several source code management systems, including Git, Mercurial and Monotone, use the sha1sum of various types of content (file content, directory trees, ancestry information, etc.) to uniquely identify them. Hashes are used to identify files on peer-to-peer filesharing networks. For example, in an ed2k link, an MD4-variant hash is combined with the file size, providing sufficient information for locating file sources, downloading the file, and verifying its contents. Magnet links are another example. Such file hashes are often the top hash of a hash list or a hash tree, which allows for additional benefits.
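As a concrete instance of content identification, Git's object id for file content is the SHA-1 of a short header concatenated with the content; a minimal re-implementation:

```python
import hashlib

def git_blob_id(content):
    """Git's object id for a blob: SHA-1 over 'blob <size>\\0' plus the content."""
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# Matches `git hash-object` run on the same bytes.
print(git_blob_id(b"hello world\n"))
```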
One of the main applications of a hash function is to allow the fast look-up of data in a hash table. Being hash functions of a particular kind, cryptographic hash functions lend themselves well to this application too.
However, compared with standard hash functions, cryptographic hash functions tend to be much more expensive computationally. For this reason, they tend to be used in contexts where it is necessary for users to protect themselves against the possibility of forgery (the creation of data with the same digest as the expected data) by potentially malicious participants, such as open source applications with multiple sources of download, where malicious files could be substituted in with the same appearance to the user, or an authentic file is modified to contain malicious data.
==== Content-addressable storage ====
== Hash functions based on block ciphers ==
There are several methods to use a block cipher to build a cryptographic hash function, specifically a one-way compression function.
The methods resemble the block cipher modes of operation usually used for encryption. Many well-known hash functions, including MD4, MD5, SHA-1 and SHA-2, are built from block-cipher-like components designed for the purpose, with feedback to ensure that the resulting function is not invertible. SHA-3 finalists included functions with block-cipher-like components (e.g., Skein, BLAKE) though the function finally selected, Keccak, was built on a cryptographic sponge instead.
A standard block cipher such as AES can be used in place of these custom block ciphers; that might be useful when an embedded system needs to implement both encryption and hashing with minimal code size or hardware area. However, that approach can have costs in efficiency and security. The ciphers in hash functions are built for hashing: they use large keys and blocks, can efficiently change keys every block, and have been designed and vetted for resistance to related-key attacks. General-purpose ciphers tend to have different design goals. In particular, AES has key and block sizes that make it nontrivial to use to generate long hash values; AES encryption becomes less efficient when the key changes each block; and related-key attacks make it potentially less secure for use in a hash function than for encryption.
== Hash function design ==
=== Merkle–Damgård construction ===
A hash function must be able to process an arbitrary-length message into a fixed-length output. This can be achieved by breaking the input up into a series of equally sized blocks, and operating on them in sequence using a one-way compression function. The compression function can either be specially designed for hashing or be built from a block cipher. A hash function built with the Merkle–Damgård construction is as resistant to collisions as is its compression function; any collision for the full hash function can be traced back to a collision in the compression function.
The last block processed should also be unambiguously length padded; this is crucial to the security of this construction. This construction is called the Merkle–Damgård construction. Most common classical hash functions, including SHA-1 and MD5, take this form.
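A toy sketch of the construction, using truncated SHA-256 as a stand-in compression function (real designs use a dedicated compression function); note the unambiguous length padding on the final block:

```python
import hashlib

def toy_md(message, block_size=64):
    """Toy Merkle-Damgard: chain a compression function over padded blocks."""
    # Unambiguous length padding: 0x80, zero bytes, then 8-byte bit length.
    bit_len = (8 * len(message)).to_bytes(8, "big")
    padded = message + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % block_size)
    padded += bit_len

    state = b"\x00" * 32                       # fixed initialization vector
    for i in range(0, len(padded), block_size):
        block = padded[i:i + block_size]
        state = hashlib.sha256(state + block).digest()   # compression step
    return state
```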
=== Wide pipe versus narrow pipe ===
A straightforward application of the Merkle–Damgård construction, where the size of hash output is equal to the internal state size (between each compression step), results in a narrow-pipe hash design. This design causes many inherent flaws, including length-extension, multicollisions, long message attacks, generate-and-paste attacks, and also cannot be parallelized. As a result, modern hash functions are built on wide-pipe constructions that have a larger internal state size – which range from tweaks of the Merkle–Damgård construction to new constructions such as the sponge construction and HAIFA construction. None of the entrants in the NIST hash function competition use a classical Merkle–Damgård construction.
Meanwhile, truncating the output of a longer hash, such as used in SHA-512/256, also defeats many of these attacks.
== Use in building other cryptographic primitives ==
Hash functions can be used to build other cryptographic primitives. For these other primitives to be cryptographically secure, care must be taken to build them correctly.
Message authentication codes (MACs) (also called keyed hash functions) are often built from hash functions. HMAC is such a MAC.
Just as block ciphers can be used to build hash functions, hash functions can be used to build block ciphers. Luby-Rackoff constructions using hash functions can be provably secure if the underlying hash function is secure. Also, many hash functions (including SHA-1 and SHA-2) are built by using a special-purpose block cipher in a Davies–Meyer or other construction. That cipher can also be used in a conventional mode of operation, without the same security guarantees; for example, SHACAL, BEAR and LION.
Pseudorandom number generators (PRNGs) can be built using hash functions. This is done by combining a (secret) random seed with a counter and hashing it.
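A minimal sketch of this seed-plus-counter approach; production systems should use a vetted design such as NIST SP 800-90A's Hash_DRBG rather than this illustration:

```python
import hashlib

def hash_drbg(seed, n_bytes):
    """Sketch of a counter-mode hash PRNG: concatenate hash(seed || counter)."""
    out = b""
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n_bytes]
```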
Some hash functions, such as Skein, Keccak, and RadioGatún, output an arbitrarily long stream and can be used as a stream cipher, and stream ciphers can also be built from fixed-length digest hash functions. Often this is done by first building a cryptographically secure pseudorandom number generator and then using its stream of random bytes as keystream. SEAL is a stream cipher that uses SHA-1 to generate internal tables, which are then used in a keystream generator more or less unrelated to the hash algorithm. SEAL is not guaranteed to be as strong (or weak) as SHA-1. Similarly, the key expansion of the HC-128 and HC-256 stream ciphers makes heavy use of the SHA-256 hash function.
== Concatenation ==
Concatenating outputs from multiple hash functions provides collision resistance as good as the strongest of the algorithms included in the concatenated result. For example, older versions of Transport Layer Security (TLS) and Secure Sockets Layer (SSL) used concatenated MD5 and SHA-1 sums. This ensures that a method to find collisions in one of the hash functions does not defeat data protected by both hash functions.
For Merkle–Damgård construction hash functions, the concatenated function is as collision-resistant as its strongest component, but not more collision-resistant. Antoine Joux observed that 2-collisions lead to n-collisions: if it is feasible for an attacker to find two messages with the same MD5 hash, then they can find as many additional messages with that same MD5 hash as they desire, with no greater difficulty. Among those n messages with the same MD5 hash, there is likely to be a collision in SHA-1. The additional work needed to find the SHA-1 collision (beyond the exponential birthday search) requires only polynomial time.
== Cryptographic hash algorithms ==
There are many cryptographic hash algorithms; this section lists a few algorithms that are referenced relatively often. A more extensive list can be found on the page containing a comparison of cryptographic hash functions.
=== MD5 ===
MD5 was designed by Ronald Rivest in 1991 to replace an earlier hash function, MD4, and was specified in 1992 as RFC 1321. Collisions against MD5 can be calculated within seconds, which makes the algorithm unsuitable for most use cases where a cryptographic hash is required. MD5 produces a digest of 128 bits (16 bytes).
=== SHA-1 ===
SHA-1 was developed as part of the U.S. Government's Capstone project. The original specification – now commonly called SHA-0 – of the algorithm was published in 1993 under the title Secure Hash Standard, FIPS PUB 180, by U.S. government standards agency NIST (National Institute of Standards and Technology). It was withdrawn by the NSA shortly after publication and was superseded by the revised version, published in 1995 in FIPS PUB 180-1 and commonly designated SHA-1. Collisions against the full SHA-1 algorithm can be produced using the shattered attack and the hash function should be considered broken. SHA-1 produces a hash digest of 160 bits (20 bytes).
Documents may refer to SHA-1 as just "SHA", even though this may conflict with the other Secure Hash Algorithms such as SHA-0, SHA-2, and SHA-3.
=== RIPEMD-160 ===
RIPEMD (RACE Integrity Primitives Evaluation Message Digest) is a family of cryptographic hash functions developed in Leuven, Belgium, by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel at the COSIC research group at the Katholieke Universiteit Leuven, and first published in 1996. RIPEMD was based upon the design principles used in MD4 and is similar in performance to the more popular SHA-1. RIPEMD-160 has, however, not been broken. As the name implies, RIPEMD-160 produces a hash digest of 160 bits (20 bytes).
=== Whirlpool ===
Whirlpool is a cryptographic hash function designed by Vincent Rijmen and Paulo S. L. M. Barreto, who first described it in 2000. Whirlpool is based on a substantially modified version of the Advanced Encryption Standard (AES). Whirlpool produces a hash digest of 512 bits (64 bytes).
=== SHA-2 ===
SHA-2 (Secure Hash Algorithm 2) is a set of cryptographic hash functions designed by the United States National Security Agency (NSA), first published in 2001. They are built using the Merkle–Damgård structure, from a one-way compression function itself built using the Davies–Meyer structure from a (classified) specialized block cipher.
SHA-2 basically consists of two hash algorithms: SHA-256 and SHA-512. SHA-224 is a variant of SHA-256 with different starting values and truncated output. SHA-384 and the lesser-known SHA-512/224 and SHA-512/256 are all variants of SHA-512. SHA-512 is more secure than SHA-256 and is commonly faster than SHA-256 on 64-bit machines such as AMD64.
The output size in bits is given by the extension to the "SHA" name, so SHA-224 has an output size of 224 bits (28 bytes); SHA-256, 32 bytes; SHA-384, 48 bytes; and SHA-512, 64 bytes.
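These sizes can be checked directly against Python's hashlib:

```python
import hashlib

# Digest sizes of the SHA-2 family, in bytes.
for name, size in [("sha224", 28), ("sha256", 32),
                   ("sha384", 48), ("sha512", 64)]:
    assert hashlib.new(name, b"").digest_size == size
print("all SHA-2 output sizes match")
```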
=== SHA-3 ===
SHA-3 (Secure Hash Algorithm 3) was released by NIST on August 5, 2015. SHA-3 is a subset of the broader cryptographic primitive family Keccak. The Keccak algorithm is the work of Guido Bertoni, Joan Daemen, Michael Peeters, and Gilles Van Assche. Keccak is based on a sponge construction, which can also be used to build other cryptographic primitives such as a stream cipher. SHA-3 provides the same output sizes as SHA-2: 224, 256, 384, and 512 bits.
Configurable output sizes can also be obtained using the SHAKE-128 and SHAKE-256 functions. Here the -128 and -256 extensions to the name imply the security strength of the function rather than the output size in bits.
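For example, with Python's hashlib the caller chooses the SHAKE output length; as with any extendable-output function, a shorter output is a prefix of a longer one:

```python
import hashlib

short = hashlib.shake_256(b"data").hexdigest(16)   # 16-byte output
longer = hashlib.shake_256(b"data").hexdigest(64)  # 64-byte output
assert longer.startswith(short)                    # prefix property of XOFs
```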
=== BLAKE2 ===
BLAKE2, an improved version of BLAKE, was announced on December 21, 2012. It was created by Jean-Philippe Aumasson, Samuel Neves, Zooko Wilcox-O'Hearn, and Christian Winnerlein with the goal of replacing the widely used but broken MD5 and SHA-1 algorithms. When run on 64-bit x64 and ARM architectures, BLAKE2b is faster than SHA-3, SHA-2, SHA-1, and MD5. Although BLAKE and BLAKE2 have not been standardized as SHA-3 has, BLAKE2 has been used in many protocols including the Argon2 password hash, for the high efficiency that it offers on modern CPUs. As BLAKE was a candidate for SHA-3, BLAKE and BLAKE2 both offer the same output sizes as SHA-3 – including a configurable output size.
=== BLAKE3 ===
BLAKE3, an improved version of BLAKE2, was announced on January 9, 2020. It was created by Jack O'Connor, Jean-Philippe Aumasson, Samuel Neves, and Zooko Wilcox-O'Hearn. BLAKE3 is a single algorithm, in contrast to BLAKE and BLAKE2, which are algorithm families with multiple variants. The BLAKE3 compression function is closely based on that of BLAKE2s, with the biggest difference being that the number of rounds is reduced from 10 to 7. Internally, BLAKE3 is a Merkle tree, and it supports higher degrees of parallelism than BLAKE2.
== Attacks on cryptographic hash algorithms ==
There is a long list of cryptographic hash functions but many have been found to be vulnerable and should not be used. For instance, NIST selected 51 hash functions as candidates for round 1 of the SHA-3 hash competition, of which 10 were considered broken and 16 showed significant weaknesses and therefore did not make it to the next round; more information can be found on the main article about the NIST hash function competition.
Even if a hash function has never been broken, a successful attack against a weakened variant may undermine the experts' confidence. For instance, in August 2004 collisions were found in several then-popular hash functions, including MD5. These weaknesses called into question the security of stronger algorithms derived from the weak hash functions – in particular, SHA-1 (a strengthened version of SHA-0), RIPEMD-128, and RIPEMD-160 (both strengthened versions of RIPEMD).
On August 12, 2004, Joux, Carribault, Lemuel, and Jalby announced a collision for the full SHA-0 algorithm. Joux et al. accomplished this using a generalization of the Chabaud and Joux attack. They found that the collision had complexity 2^51 and took about 80,000 CPU hours on a supercomputer with 256 Itanium 2 processors – equivalent to 13 days of full-time use of the supercomputer.
In February 2005, an attack on SHA-1 was reported that would find collisions in about 2^69 hashing operations, rather than the 2^80 expected for a 160-bit hash function. In August 2005, another attack on SHA-1 was reported that would find collisions in 2^63 operations. Other theoretical weaknesses of SHA-1 have been known, and in February 2017 Google announced a collision in SHA-1. Security researchers recommend that new applications avoid these problems by using later members of the SHA family, such as SHA-2, or using techniques such as randomized hashing that do not require collision resistance.
A successful, practical attack broke MD5 (used within certificates for Transport Layer Security) in 2008.
Many cryptographic hashes are based on the Merkle–Damgård construction. All cryptographic hashes that directly use the full output of a Merkle–Damgård construction are vulnerable to length extension attacks. This makes the MD5, SHA-1, RIPEMD-160, Whirlpool, and the SHA-256 / SHA-512 hash algorithms all vulnerable to this specific attack. SHA-3, BLAKE2, BLAKE3, and the truncated SHA-2 variants are not vulnerable to this type of attack.
== Attacks on hashed passwords ==
Rather than store plain user passwords, controlled-access systems frequently store the hash of each user's password in a file or database. When someone requests access, the password they submit is hashed and compared with the stored value. If the database is stolen (an all-too-frequent occurrence), the thief will only have the hash values, not the passwords.
Passwords may still be retrieved by an attacker from the hashes, because most people choose passwords in predictable ways. Lists of common passwords are widely circulated and many passwords are short enough that even all possible combinations may be tested if calculation of the hash does not take too much time.
The use of cryptographic salt prevents some attacks, such as building files of precomputed hash values, e.g. rainbow tables. But searches on the order of 100 billion tests per second are possible with high-end graphics processors, making direct attacks possible even with salt.
The United States National Institute of Standards and Technology recommends storing passwords using special hashes called key derivation functions (KDFs) that have been created to slow brute force searches. Slow hashes include pbkdf2, bcrypt, scrypt, argon2, Balloon and some recent modes of Unix crypt. For KDFs that perform multiple hashes to slow execution, NIST recommends an iteration count of 10,000 or more.
== See also ==
== References ==
=== Citations ===
=== Sources ===
Menezes, Alfred J.; van Oorschot, Paul C.; Vanstone, Scott A. (7 December 2018). "Hash functions". Handbook of Applied Cryptography. CRC Press. pp. 33–. ISBN 978-0-429-88132-9.
Aumasson, Jean-Philippe (6 November 2017). Serious Cryptography: A Practical Introduction to Modern Encryption. No Starch Press. ISBN 978-1-59327-826-7. OCLC 1012843116.
== External links ==
Paar, Christof; Pelzl, Jan (2009). "11: Hash Functions". Understanding Cryptography, A Textbook for Students and Practitioners. Springer. Archived from the original on 2012-12-08. (companion web site contains online cryptography course that covers hash functions)
"The ECRYPT Hash Function Website".
Buldas, A. (2011). "Series of mini-lectures about cryptographic hash functions". Archived from the original on 2012-12-06.
Open-source Python-based application with a GUI used to verify downloads.
The basic study of system design is the understanding of component parts and their subsequent interaction with one another.
Systems design has appeared in a variety of fields, including sustainability, computer/software architecture, and sociology.
== Product Development ==
If the broader topic of product development "blends the perspective of marketing, design, and manufacturing into a single approach to product development," then design is the act of taking the marketing information and creating the design of the product to be manufactured.
Thus in product development, systems design involves the process of defining and developing systems, such as interfaces and data, for an electronic control system to satisfy specified requirements. Systems design could be seen as the application of systems theory to product development. There is some overlap with the disciplines of systems analysis, systems architecture and systems engineering.
=== Physical design ===
The physical design relates to the actual input and output processes of the system. This is explained in terms of how data is input into a system, how it is verified/authenticated, how it is processed, and how it is displayed.
In physical design, the following requirements about the system are decided.
Input requirements,
Output requirements,
Storage requirements,
Processing requirements,
System control and backup or recovery.
Put another way, the physical portion of system design can generally be broken down into three sub-tasks:
User Interface Design
Data Design
Process Design
=== Architecture design ===
Designing the overall structure of a system focuses on creating a scalable, reliable, and efficient system. For example, services like Google, Twitter, Facebook, Amazon, and Netflix exemplify large-scale distributed systems. Here are key considerations:
Functional and non-functional requirements
Capacity estimation
Usage of relational and/or NoSQL databases
Vertical scaling, horizontal scaling, sharding
Load balancing
Primary-secondary replication
Cache and CDN
Stateless and Stateful servers
Datacenter georouting
Message Queue, Publish-Subscribe Architecture
Performance Metrics Monitoring and Logging
Build, test, configure, and deploy automation
Finding single points of failure
API Rate Limiting
Service Level Agreement
=== Machine Learning Systems Design ===
Machine learning systems design focuses on building scalable, reliable, and efficient systems that integrate machine learning (ML) models to solve real-world problems. ML systems require careful consideration of data pipelines, model training, and deployment infrastructure. ML systems are often used in applications such as recommendation engines, fraud detection, and natural language processing.
Key components to consider when designing ML systems include:
Problem Definition: Clearly define the problem, data requirements, and evaluation metrics. Success criteria often involve accuracy, latency, and scalability.
Data Pipeline: Build automated pipelines to collect, clean, transform, and validate data.
Model Selection and Training: Choose appropriate algorithms (e.g., linear regression, decision trees, neural networks) and train models using frameworks like TensorFlow or PyTorch.
Deployment and Serving: Deploy trained models to production environments using scalable architectures such as containerized services (e.g., Docker and Kubernetes).
Monitoring and Maintenance: Continuously monitor model performance, retrain as necessary, and ensure data drift is addressed.
Designing an ML system involves balancing trade-offs between accuracy, latency, cost, and maintainability, while ensuring system scalability and reliability. The discipline overlaps with MLOps, a set of practices that unifies machine learning development and operations to ensure smooth deployment and lifecycle management of ML systems.
== See also ==
== References ==
== Further reading ==
Bentley, Lonnie D.; Dittman, Kevin C.; Whitten, Jeffrey L. (2004) [1986]. System analysis and design methods.
Churchman, C. West (1971). The Design of Inquiring Systems: Basic Concepts of Systems and Organization. New York: Basic Books. ISBN 0-465-01608-1.
Gosling, William (1962). The design of engineering systems. New York: Wiley.
Hawryszkiewycz, Igor T. (1994). Introduction to system analysis and design. Prentice Hall PTR.
Levin, Mark S. (2015). Modular system design and evaluation. Springer.
Maier, Mark W.; Rechtin, Eberhardt (2000). The Art of System Architecting (Second ed.). Boca Raton: CRC Press.
J. H. Saltzer; D. P. Reed; D. D. Clark (1 November 1984). "End-to-end arguments in system design" (PDF). ACM Transactions on Computer Systems. 2 (4): 277–288. doi:10.1145/357401.357402. ISSN 0734-2071. S2CID 215746877. Wikidata Q56503280.
Whitten, Jeffrey L.; Bentley, Lonnie D.; Dittman, Kevin C. (2004). Fundamentals of system analysis and design methods.
== External links ==
Interactive System Design. Course by Chris Johnson, 1993
[1] Course by Prof. Birgit Weller, 2020
Symmetric-key algorithms are algorithms for cryptography that use the same cryptographic keys for both the encryption of plaintext and the decryption of ciphertext. The keys may be identical, or there may be a simple transformation to go between the two keys. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link. The requirement that both parties have access to the secret key is one of the main drawbacks of symmetric-key encryption, in comparison to public-key encryption (also known as asymmetric-key encryption). However, symmetric-key encryption algorithms are usually better for bulk encryption. With exception of the one-time pad they have a smaller key size, which means less storage space and faster transmission. Due to this, asymmetric-key encryption is often used to exchange the secret key for symmetric-key encryption.
== Types ==
Symmetric-key encryption can use either stream ciphers or block ciphers.
Stream ciphers encrypt the digits (typically bytes), or letters (in substitution ciphers) of a message one at a time. An example is ChaCha20. Substitution ciphers are well-known ciphers, but can be easily decrypted using a frequency table.
Block ciphers take a number of bits and encrypt them in a single unit, padding the plaintext to achieve a multiple of the block size. The Advanced Encryption Standard (AES) algorithm, approved by NIST in December 2001, uses 128-bit blocks.
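Block-cipher padding is commonly done with PKCS#7, which appends n bytes each of value n to reach a multiple of the block size; a minimal sketch:

```python
def pkcs7_pad(data, block_size=16):
    """Append n bytes of value n so the length is a multiple of block_size."""
    n = block_size - len(data) % block_size
    return data + bytes([n]) * n

def pkcs7_unpad(padded):
    """Strip and validate PKCS#7 padding."""
    n = padded[-1]
    if n == 0 or n > len(padded) or padded[-n:] != bytes([n]) * n:
        raise ValueError("invalid padding")
    return padded[:-n]
```

Note that an input already a multiple of the block size still gains a full block of padding, so unpadding is always unambiguous.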
== Implementations ==
Examples of popular symmetric-key algorithms include Twofish, Serpent, AES (Rijndael), Camellia, Salsa20, ChaCha20, Blowfish, CAST5, Kuznyechik, RC4, DES, 3DES, Skipjack, Safer, and IDEA.
== Use as a cryptographic primitive ==
Symmetric ciphers are commonly used to achieve cryptographic primitives other than encryption.
Encrypting a message does not guarantee that it will remain unchanged while encrypted. Hence, often a message authentication code is added to a ciphertext to ensure that changes to the ciphertext will be noted by the receiver. Message authentication codes can be constructed from an AEAD cipher (e.g. AES-GCM).
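The source mentions AEAD ciphers such as AES-GCM, which are not in the Python standard library; as a stdlib-only sketch of the same principle (a MAC over the ciphertext lets the receiver notice tampering), the following hypothetical encrypt-then-MAC scheme combines a toy hash-based keystream with HMAC-SHA-256. It illustrates the idea only and is not AES-GCM.

```python
import hmac
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # toy keystream: SHA-256 in counter mode (for illustration only)
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(enc_key: bytes, mac_key: bytes, msg: bytes) -> bytes:
    ct = bytes(a ^ b for a, b in zip(msg, keystream(enc_key, len(msg))))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag  # ciphertext followed by its MAC

def open_(enc_key: bytes, mac_key: bytes, sealed: bytes) -> bytes:
    ct, tag = sealed[:-32], sealed[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        raise ValueError("MAC check failed: ciphertext was modified")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, len(ct))))

sealed = seal(b"enc-key", b"mac-key", b"attack at dawn")
assert open_(b"enc-key", b"mac-key", sealed) == b"attack at dawn"

tampered = bytes([sealed[0] ^ 1]) + sealed[1:]  # flip one ciphertext bit
try:
    open_(b"enc-key", b"mac-key", tampered)
    detected = False
except ValueError:
    detected = True
assert detected  # the change was noted by the receiver
```

Without the MAC, the flipped bit would have silently flipped the corresponding plaintext bit.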
However, symmetric ciphers cannot be used for non-repudiation purposes except by involving additional parties. See the ISO/IEC 13888-2 standard.
Another application is to build hash functions from block ciphers. See one-way compression function for descriptions of several such methods.
== Construction of symmetric ciphers ==
Many modern block ciphers are based on a construction proposed by Horst Feistel. Feistel's construction makes it possible to build invertible functions from other functions that are themselves not invertible.
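Feistel's construction can be illustrated with a toy network: the round function below (a truncated hash) is not invertible, yet the cipher as a whole is inverted simply by running the rounds in reverse order. A sketch for illustration, not a production cipher; the round keys and block size are arbitrary choices.

```python
import hashlib

def F(half: bytes, round_key: bytes) -> bytes:
    # non-invertible round function: truncated SHA-256
    return hashlib.sha256(round_key + half).digest()[:len(half)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(block: bytes, round_keys) -> bytes:
    L, R = block[:len(block) // 2], block[len(block) // 2:]
    for k in round_keys:
        L, R = R, xor(L, F(R, k))  # classic Feistel round
    return L + R

def feistel_decrypt(block: bytes, round_keys) -> bytes:
    L, R = block[:len(block) // 2], block[len(block) // 2:]
    for k in reversed(round_keys):
        L, R = xor(R, F(L, k)), L  # invert one round
    return L + R

keys = [b"k1", b"k2", b"k3", b"k4"]
pt = b"16-byte message!"
assert feistel_decrypt(feistel_encrypt(pt, keys), keys) == pt
```

Note that decryption never inverts F itself; it only recomputes F on the half that was passed through unchanged.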
== Security of symmetric ciphers ==
Symmetric ciphers have historically been susceptible to known-plaintext attacks, chosen-plaintext attacks, differential cryptanalysis and linear cryptanalysis. Careful construction of the functions for each round can greatly reduce the chances of a successful attack. It is also possible to increase the key length or the rounds in the encryption process to better protect against attack. This, however, tends to increase the processing power and decrease the speed at which the process runs due to the amount of operations the system needs to do.
Most modern symmetric-key algorithms appear to be resistant to attacks by quantum computers. Grover's algorithm would speed up a brute-force attack, reducing it to roughly the square root of the time traditionally required, although this vulnerability can be compensated for by doubling the key length. For example, a 128-bit AES cipher would not be secure against such an attack, as it would reduce the time required to test all possible keys from over 10 quintillion years to about six months. By contrast, it would still take a quantum computer the same amount of time to break a 256-bit AES cipher as it would a conventional computer to break a 128-bit AES cipher. For this reason, AES-256 is believed to be "quantum resistant".
== Key management ==
== Key establishment ==
Symmetric-key algorithms require both the sender and the recipient of a message to have the same secret key. All early cryptographic systems required either the sender or the recipient to somehow receive a copy of that secret key over a physically secure channel.
Nearly all modern cryptographic systems still use symmetric-key algorithms internally to encrypt the bulk of the messages, but they eliminate the need for a physically secure channel by using Diffie–Hellman key exchange or some other public-key protocol to securely come to agreement on a fresh new secret key for each session/conversation (forward secrecy).
== Key generation ==
When used with asymmetric ciphers for key transfer, pseudorandom key generators are nearly always used to generate the symmetric cipher session keys. However, lack of randomness in those generators or in their initialization vectors is disastrous and has led to cryptanalytic breaks in the past. Therefore, it is essential that an implementation use a source of high entropy for its initialization.
== Reciprocal cipher ==
A reciprocal cipher is a cipher where, just as one enters the plaintext into the cryptography system to get the ciphertext, one could enter the ciphertext into the same place in the system to get the plaintext. A reciprocal cipher is also sometimes referred to as a self-reciprocal cipher.
Practically all mechanical cipher machines implement a reciprocal cipher, a mathematical involution on each typed-in letter.
Instead of designing two kinds of machines, one for encrypting and one for decrypting, all the machines can be identical and can be set up (keyed) the same way.
Examples of reciprocal ciphers include:
Atbash
Beaufort cipher
Enigma machine
Marie Antoinette and Axel von Fersen communicated with a self-reciprocal cipher.
the Porta polyalphabetic cipher is self-reciprocal.
Purple cipher
RC4
ROT13
XOR cipher
Vatsyayana cipher
The majority of all modern ciphers can be classified as either a stream cipher, most of which use a reciprocal XOR cipher combiner, or a block cipher, most of which use a Feistel cipher or Lai–Massey scheme with a reciprocal transformation in each round.
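The reciprocal property, that one transformation both encrypts and decrypts, can be checked directly for two of the ciphers listed above, ROT13 and the XOR cipher; a short sketch using only the Python standard library:

```python
import codecs

# ROT13: applying the same transformation twice recovers the plaintext
msg = "Attack at dawn"
ct = codecs.encode(msg, "rot13")
assert ct == "Nggnpx ng qnja"
assert codecs.encode(ct, "rot13") == msg

# XOR cipher: XORing with the same keystream is its own inverse (an involution)
key = bytes([0x5A] * 14)
pt = b"Attack at dawn"
xor = lambda data, k: bytes(a ^ b for a, b in zip(data, k))
assert xor(xor(pt, key), key) == pt
```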
== Notes ==
== References == | Wikipedia/Private_key_cryptography |
In cryptography, a distributed point function is a cryptographic primitive that allows two distributed processes to share a piece of information, and compute functions of their shared information, without revealing the information itself to either process. It is a form of secret sharing.
Given any two values $a$ and $b$, one can define a point function $P_{a,b}(x)$ (a variant of the Kronecker delta function) by

$P_{a,b}(x) = \begin{cases} b & \text{for } x = a \\ 0 & \text{for } x \neq a \end{cases}$

That is, it is zero everywhere except at $a$, where its value is $b$.
A distributed point function consists of a family of functions $f_k$, parameterized by keys $k$, and a method for deriving two keys $q$ and $r$ from any two input values $a$ and $b$, such that for all $x$,

$P_{a,b}(x) = f_q(x) \oplus f_r(x),$

where $\oplus$ denotes the bitwise exclusive or of the two function values. However, given only one of these two keys, the values of $f$ for that key should be indistinguishable from random.
It is known how to construct an efficient distributed point function from another cryptographic primitive, a one-way function.
In the other direction, if a distributed point function is known, then it is possible to perform private information retrieval.
As a simplified example of this, it is possible to test whether a key $a$ belongs to a replicated distributed database without revealing to the database servers (unless they collude with each other) which key was sought. To find the key $a$ in the database, create a distributed point function for $P_{a,1}(x)$ and send the resulting two keys $q$ and $r$ to two different servers holding copies of the database. Each server applies its function $f_q$ or $f_r$ to all the keys in its copy of the database, and returns the exclusive or of the results. The two returned values will differ if $a$ belongs to the database, and will be equal otherwise.
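A naive construction meeting this definition XOR-shares the whole truth table of the point function: one key is a random table and the other is the same table with $b$ XORed in at position $a$. The keys are as large as the domain (real distributed point functions achieve far shorter keys), but the sketch below demonstrates both the correctness property and the two-server membership test; the domain size and database values are illustrative.

```python
import secrets

DOMAIN = 32  # toy domain {0, ..., 31}; real DPF domains are much larger

def gen_keys(a: int, b: int):
    # naive DPF: XOR secret-share the truth table of P_{a,b}
    q_key = [secrets.randbelow(256) for _ in range(DOMAIN)]
    r_key = list(q_key)
    r_key[a] ^= b                # f_q(x) XOR f_r(x) = b exactly at x = a
    return q_key, r_key

def f(key, x):
    # evaluate one share; a single key alone is uniformly random
    return key[x]

a, b = 13, 1
q_key, r_key = gen_keys(a, b)
assert all((f(q_key, x) ^ f(r_key, x)) == (b if x == a else 0)
           for x in range(DOMAIN))

# two-server membership test: each server XORs f over its copy of the database
def server_answer(key, database):
    ans = 0
    for entry in database:
        ans ^= f(key, entry)
    return ans

database = {4, 13, 27}
assert server_answer(q_key, database) != server_answer(r_key, database)  # 13 present
q2, r2 = gen_keys(9, 1)
assert server_answer(q2, database) == server_answer(r2, database)        # 9 absent
```

Each server sees only one uniformly random table, so neither learns which key was sought.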
== References == | Wikipedia/Distributed_point_function |
Malleability is a property of some cryptographic algorithms. An encryption algorithm is "malleable" if it is possible to transform a ciphertext into another ciphertext which decrypts to a related plaintext. That is, given an encryption of a plaintext $m$, it is possible to generate another ciphertext which decrypts to $f(m)$, for a known function $f$, without necessarily knowing or learning $m$.
Malleability is often an undesirable property in a general-purpose cryptosystem, since it allows an attacker to modify the contents of a message. For example, suppose that a bank uses a stream cipher to hide its financial information, and a user sends an encrypted message containing, say, "TRANSFER $0000100.00 TO ACCOUNT #199." If an attacker can modify the message on the wire, and can guess the format of the unencrypted message, the attacker could change the amount of the transaction, or the recipient of the funds, e.g. "TRANSFER $0100000.00 TO ACCOUNT #227". Malleability does not refer to the attacker's ability to read the encrypted message. Both before and after tampering, the attacker cannot read the encrypted message.
On the other hand, some cryptosystems are malleable by design. In other words, in some circumstances it may be viewed as a feature that anyone can transform an encryption of $m$ into a valid encryption of $f(m)$ (for some restricted class of functions $f$) without necessarily learning $m$. Such schemes are known as homomorphic encryption schemes.
A cryptosystem may be semantically secure against chosen-plaintext attacks or even non-adaptive chosen-ciphertext attacks (CCA1) while still being malleable. However, security against adaptive chosen-ciphertext attacks (CCA2) is equivalent to non-malleability.
== Example malleable cryptosystems ==
In a stream cipher, the ciphertext is produced by taking the exclusive or of the plaintext and a pseudorandom stream based on a secret key $k$, as $E(m) = m \oplus S(k)$. An adversary can construct an encryption of $m \oplus t$ for any $t$, as $E(m) \oplus t = m \oplus t \oplus S(k) = E(m \oplus t)$.
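The algebra above can be checked numerically. In the sketch below, $S(k)$ is a stand-in keystream derived from SHA-256 (an illustration, not a real stream cipher), and the attacker, who can guess the plaintext format, chooses $t$ as the XOR of the guessed and desired messages:

```python
import hashlib

def S(k: bytes, n: int) -> bytes:
    # placeholder pseudorandom stream derived from the key (illustration only)
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(k + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

k = b"secret key"
m = b"TRANSFER $0000100.00 TO ACCOUNT #199"
E_m = xor(m, S(k, len(m)))          # what the attacker observes on the wire

# attacker never touches k: it picks t from its guess of m and its target
target = b"TRANSFER $0100000.00 TO ACCOUNT #227"
t = xor(m, target)
forged = xor(E_m, t)                # E(m) XOR t = E(m XOR t)

# the receiver decrypts the forged ciphertext to the attacker's message
assert xor(forged, S(k, len(m))) == target
```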
In the RSA cryptosystem, a plaintext $m$ is encrypted as $E(m) = m^e \bmod n$, where $(e, n)$ is the public key. Given such a ciphertext, an adversary can construct an encryption of $mt$ for any $t$, as $E(m) \cdot t^e \bmod n = (mt)^e \bmod n = E(mt)$. For this reason, RSA is commonly used together with padding methods such as OAEP or PKCS1.
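With textbook (unpadded) RSA and deliberately tiny parameters, the identity above can be verified directly; the numbers are illustrative only:

```python
# toy RSA public key (insecure, for illustration): n = 61 * 53, e = 17
n, e = 61 * 53, 17

m, t = 42, 5
E_m = pow(m, e, n)                   # ciphertext the attacker observes

forged = (E_m * pow(t, e, n)) % n    # E(m) * t^e mod n
assert forged == pow((m * t) % n, e, n)  # equals E(mt): malleation succeeded
```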
In the ElGamal cryptosystem, a plaintext $m$ is encrypted as $E(m) = (g^b, mA^b)$, where $(g, A)$ is the public key. Given such a ciphertext $(c_1, c_2)$, an adversary can compute $(c_1, t \cdot c_2)$, which is a valid encryption of $tm$, for any $t$.
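The same check for ElGamal, instantiated in a small multiplicative group modulo a prime (toy parameters, not secure):

```python
p, g = 467, 2           # toy group modulo a prime (illustrative parameters)
x = 127                 # receiver's secret key
A = pow(g, x, p)        # public key component

def decrypt(c1, c2):
    # m = c2 / c1^x mod p
    return (c2 * pow(c1, -x, p)) % p

m, b = 123, 45          # plaintext and sender's ephemeral exponent
c1, c2 = pow(g, b, p), (m * pow(A, b, p)) % p
assert decrypt(c1, c2) == m

t = 7                   # attacker multiplies the second component by t
assert decrypt(c1, (t * c2) % p) == (t * m) % p   # valid encryption of tm
```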
In contrast, the Cramer-Shoup system (which is based on ElGamal) is not malleable.
In the Paillier, ElGamal, and RSA cryptosystems, it is also possible to combine several ciphertexts together in a useful way to produce a related ciphertext. In Paillier, given only the public key and an encryption of $m_1$ and $m_2$, one can compute a valid encryption of their sum $m_1 + m_2$. In ElGamal and in RSA, one can combine encryptions of $m_1$ and $m_2$ to obtain a valid encryption of their product $m_1 m_2$.
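Paillier's additive property can also be verified with toy parameters; the sketch below uses the common choice $g = n+1$ and the decryption function $L(u) = (u-1)/n$, with primes far too small for real use:

```python
import math
import random

# toy Paillier key (insecure, illustrative): n = p*q, g = n + 1
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)                 # Carmichael's lambda(n)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # mu = L(g^lambda mod n^2)^-1 mod n

def encrypt(m: int) -> int:
    r = random.choice([x for x in range(2, n) if math.gcd(x, n) == 1])
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

m1, m2 = 41, 87
c = (encrypt(m1) * encrypt(m2)) % n2  # multiplying ciphertexts ...
assert decrypt(c) == (m1 + m2) % n    # ... adds the plaintexts
```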
Block ciphers in the cipher block chaining mode of operation, for example, are partly malleable: flipping a bit in a ciphertext block will completely mangle the plaintext it decrypts to, but will result in the same bit being flipped in the plaintext of the next block. This allows an attacker to 'sacrifice' one block of plaintext in order to change some data in the next one, possibly managing to maliciously alter the message. This is essentially the core idea of the padding oracle attack on CBC, which allows the attacker to decrypt almost an entire ciphertext without knowing the key. For this and many other reasons, a message authentication code is required to guard against any method of tampering.
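The bit-flipping behaviour can be demonstrated with a toy 8-byte block cipher (a small Feistel network standing in for a real cipher such as AES, which is not in the Python standard library) run in CBC mode: flipping one bit in ciphertext block 0 mangles plaintext block 0 and flips the same bit in plaintext block 1.

```python
import hashlib

BS = 8  # toy block size in bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def F(half, key, i):
    return hashlib.sha256(key + bytes([i]) + half).digest()[:BS // 2]

def enc_block(block, key):          # 4-round Feistel block cipher
    L, R = block[:BS // 2], block[BS // 2:]
    for i in range(4):
        L, R = R, xor(L, F(R, key, i))
    return L + R

def dec_block(block, key):
    L, R = block[:BS // 2], block[BS // 2:]
    for i in reversed(range(4)):
        L, R = xor(R, F(L, key, i)), L
    return L + R

def cbc_encrypt(pt, key, iv):
    prev, out = iv, []
    for i in range(0, len(pt), BS):
        prev = enc_block(xor(pt[i:i + BS], prev), key)
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(ct, key, iv):
    prev, out = iv, []
    for i in range(0, len(ct), BS):
        block = ct[i:i + BS]
        out.append(xor(dec_block(block, key), prev))
        prev = block
    return b"".join(out)

key, iv = b"k", bytes(BS)
pt = b"block000block111"            # two 8-byte blocks
ct = cbc_encrypt(pt, key, iv)
assert cbc_decrypt(ct, key, iv) == pt

tampered = bytes([ct[0] ^ 0x01]) + ct[1:]       # flip one bit in block 0
pt2 = cbc_decrypt(tampered, key, iv)
assert pt2[:BS] != pt[:BS]                      # block 0 is mangled
assert pt2[BS:] == bytes([pt[BS] ^ 0x01]) + pt[BS + 1:]  # same bit flipped in block 1
```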
== Complete non-malleability ==
Fischlin, in 2005, defined the notion of complete non-malleability as the ability of the system to remain non-malleable while giving the adversary additional power to choose a new public key which could be a function of the original public key. In other words, the adversary shouldn't be able to come up with a ciphertext whose underlying plaintext is related to the original message through a relation that also takes public keys into account.
== See also ==
Homomorphic encryption
== References == | Wikipedia/Malleability_(cryptography) |
NaSHA is a hash function accepted as a first round SHA-3 candidate for the NIST hash function competition.
NaSHA was designed by Smile Markovski and Aleksandra Mileva with contributions by Simona Samardziski (programmer) and Boro Jakimovski (programmer). NaSHA supports internal state sizes of 1024 and 2048 bits, and arbitrary output sizes between 125 and 512 bits. It uses quasigroup string transformations with quasigroups of order 2^64, defined by extended Feistel networks. The quasigroups used in every iteration of the compression function are different and depend on the processed message block.
The authors claim performance of up to 23.06 cycles per byte on an Intel Core 2 Duo in 64-bit mode.
Cryptanalysis during the SHA-3 competition indicated that the 384/512 version of NaSHA is susceptible to collision attacks; the authors disputed those attacks and also included small changes to achieve the strength of the 224/256 version.
== References ==
== External links ==
The official NaSHA website
The First Round SHA-3 candidates | Wikipedia/NaSHA_(hash_function) |
Post-Quantum Cryptography Standardization is a program and competition by NIST to update their standards to include post-quantum cryptography. It was announced at PQCrypto 2016. 23 signature schemes and 59 encryption/KEM schemes were submitted by the initial submission deadline at the end of 2017 of which 69 total were deemed complete and proper and participated in the first round. Seven of these, of which 3 are signature schemes, have advanced to the third round, which was announced on July 22, 2020.
On August 13, 2024, NIST released final versions of the first three Post Quantum Crypto Standards: FIPS 203, FIPS 204, and FIPS 205.
== Background ==
Academic research on the potential impact of quantum computing dates back to at least 2001. A NIST published report from April 2016 cites experts that acknowledge the possibility of quantum technology to render the commonly used RSA algorithm insecure by 2030. As a result, a need to standardize quantum-secure cryptographic primitives was pursued. Since most symmetric primitives are relatively easy to modify in a way that makes them quantum resistant, efforts have focused on public-key cryptography, namely digital signatures and key encapsulation mechanisms. In December 2016 NIST initiated a standardization process by announcing a call for proposals.
The competition is now in its third round out of an expected four, where in each round some algorithms are discarded and others are studied more closely. NIST hopes to publish the standardization documents by 2024, but may speed up the process if major breakthroughs in quantum computing are made.
It is currently undecided whether the future standards will be published as FIPS or as NIST Special Publication (SP).
== Round one ==
Under consideration were:
(strikethrough means it had been withdrawn)
=== Round one submissions published attacks ===
Guess Again by Lorenz Panny
RVB by Lorenz Panny
RaCoSS by Daniel J. Bernstein, Andreas Hülsing, Tanja Lange and Lorenz Panny
HK17 by Daniel J. Bernstein and Tanja Lange
SRTPI by Bo-Yin Yang
WalnutDSA
by Ward Beullens and Simon R. Blackburn
by Matvei Kotov, Anton Menshov and Alexander Ushakov
DRS by Yang Yu and Léo Ducas
DAGS by Elise Barelli and Alain Couvreur
Edon-K by Matthieu Lequesne and Jean-Pierre Tillich
RLCE by Alain Couvreur, Matthieu Lequesne, and Jean-Pierre Tillich
Hila5 by Daniel J. Bernstein, Leon Groot Bruinderink, Tanja Lange and Lorenz Panny
Giophantus by Ward Beullens, Wouter Castryck and Frederik Vercauteren
RankSign by Thomas Debris-Alazard and Jean-Pierre Tillich
McNie by Philippe Gaborit; Terry Shue Chien Lau and Chik How Tan
== Round two ==
Candidates moving on to the second round were announced on January 30, 2019. They are:
== Round three ==
On July 22, 2020, NIST announced seven finalists ("first track"), as well as eight alternate algorithms ("second track"). The first track contains the algorithms which appear to have the most promise, and will be considered for standardization at the end of the third round. Algorithms in the second track could still become part of the standard, after the third round ends. NIST expects some of the alternate candidates to be considered in a fourth round. NIST also suggests it may re-open the signature category for new schemes proposals in the future.
On June 7–9, 2021, NIST conducted the third PQC standardization conference, virtually. The conference included candidates' updates and discussions on implementations, on performances, and on security issues of the candidates. A small amount of focus was spent on intellectual property concerns.
=== Finalists ===
=== Alternate candidates ===
=== Intellectual property concerns ===
After NIST's announcement regarding the finalists and the alternate candidates, various intellectual property concerns were voiced, notably surrounding lattice-based schemes such as Kyber and NewHope. NIST holds signed statements from submitting groups clearing any legal claims, but there is still a concern that third parties could raise claims. NIST claims that they will take such considerations into account while picking the winning algorithms.
=== Round three submissions published attacks ===
Rainbow: by Ward Beullens on a classical computer
=== Adaptations ===
During this round, some candidates have shown to be vulnerable to some attack vectors. It forces these candidates to adapt accordingly:
CRYSTAL-Kyber and SABER
may change the nested hashes used in their proposals in order for their security claims to hold.
FALCON
side channel attack using electromagnetic measurements to extract the secret signing keys. A masking may be added in order to resist the attack. This adaptation affects performance and should be considered whilst standardizing.
=== Selected Algorithms 2022 ===
On July 5, 2022, NIST announced the first group of winners from its six-year competition.
== Round four ==
On July 5, 2022, NIST announced four candidates for PQC Standardization Round 4.
=== Round four submissions published attacks ===
SIKE: by Wouter Castryck and Thomas Decru on a classical computer
=== Selected Algorithm 2025 ===
On March 11, 2025, NIST announced the selection of a backup algorithm for KEM.
== First release ==
On August 13, 2024, NIST released final versions of its first three Post Quantum Crypto Standards. According to the release announcement:
While there have been no substantive changes made to the standards since the draft versions, NIST has changed the algorithms’ names to specify the versions that appear in the three finalized standards, which are:
Federal Information Processing Standard (FIPS) 203, intended as the primary standard for general encryption. Among its advantages are comparatively small encryption keys that two parties can exchange easily, as well as its speed of operation. The standard is based on the CRYSTALS-Kyber algorithm, which has been renamed ML-KEM, short for Module-Lattice-Based Key-Encapsulation Mechanism.
FIPS 204, intended as the primary standard for protecting digital signatures. The standard uses the CRYSTALS-Dilithium algorithm, which has been renamed ML-DSA, short for Module-Lattice-Based Digital Signature Algorithm.
FIPS 205, also designed for digital signatures. The standard employs the Sphincs+ algorithm, which has been renamed SLH-DSA, short for Stateless Hash-Based Digital Signature Algorithm. The standard is based on a different math approach than ML-DSA, and it is intended as a backup method in case ML-DSA proves vulnerable.
Similarly, when the draft FIPS 206 standard built around FALCON is released, the algorithm will be dubbed FN-DSA, short for FFT (fast-Fourier transform) over NTRU-Lattice-Based Digital Signature Algorithm.
On March 11, 2025, NIST released HQC as the fifth algorithm for post-quantum asymmetric encryption as used for key encapsulation / exchange. The new algorithm serves as a backup for ML-KEM, the main algorithm for general encryption. HQC is based on different math than ML-KEM, mitigating the risk should a weakness be found in ML-KEM. The draft standard incorporating the HQC algorithm is expected in early 2026, with the final version in 2027.
== Additional Digital Signature Schemes ==
=== Round One ===
NIST received 50 submissions and deemed 40 to be complete and proper according to the submission requirements. Under consideration are:
(strikethrough means it has been withdrawn)
=== Round one submissions published attacks ===
3WISE by Daniel Smith-Tone
EagleSign by Mehdi Tibouchi
KAZ-SIGN by Daniel J. Bernstein; Scott Fluhrer
Xifrat1-Sign.I by Lorenz Panny
eMLE-Sig 2.0 by Mehdi Tibouchi (implementation by Lorenz Panny)
HPPC by Ward Beullens; Pierre Briaud, Maxime Bros, and Ray Perlner
ALTEQ by Markku-Juhani O. Saarinen (implementation only?)
Biscuit by Charles Bouillaguet
MEDS by Markku-Juhani O. Saarinen and Ward Beullens (implementation only)
FuLeeca by Felicitas Hörmann and Wessel van Woerden
LESS by the LESS team (implementation only)
DME-Sign by Markku-Juhani O. Saarinen (implementation only?); Pierre Briaud, Maxime Bros, Ray Perlner, and Daniel Smith-Tone
EHTv3 by Eamonn Postlethwaite and Wessel van Woerden; Keegan Ryan and Adam Suhl
Enhanced pqsigRM by Thomas Debris-Alazard, Pierre Loisel and Valentin Vasseur; Pierre Briaud, Maxime Bros, Ray Perlner and Daniel Smith-Tone
HAETAE by Markku-Juhani O. Saarinen (implementation only?)
HuFu by Markku-Juhani O. Saarinen
SDitH by Kevin Carrier and Jean-Pierre Tillich; Kevin Carrier, Valérian Hatey, and Jean-Pierre Tillich
VOX by Hiroki Furue and Yasuhiko Ikematsu
AIMer by Fukang Liu, Mohammad Mahzoun, Morten Øygarden, Willi Meier
SNOVA by Yasuhiko Ikematsu and Rika Akiyama
PROV by Ludovic Perret, and River Moreira Ferreira (implementation only)
=== Round Two ===
NIST deemed 14 submissions to pass to the second round.
== See also ==
Advanced Encryption Standard process
CAESAR Competition – Competition to design authenticated encryption schemes
Lattice-based cryptography
NIST hash function competition
== References ==
== External links ==
NIST's official Website on the standardization process
Post-quantum cryptography website by djb | Wikipedia/Post-Quantum_Cryptography_Standardization |
Pairing-based cryptography is the use of a pairing between elements of two cryptographic groups to a third group with a mapping $e : G_1 \times G_2 \to G_T$ to construct or analyze cryptographic systems.
== Definition ==
The following definition is commonly used in most academic papers.
Let $\mathbb{F}_q$ be a finite field over prime $q$, $G_1, G_2$ two additive cyclic groups of prime order $q$, and $G_T$ another cyclic group of order $q$ written multiplicatively. A pairing is a map $e : G_1 \times G_2 \rightarrow G_T$ which satisfies the following properties:
Bilinearity: $\forall a, b \in \mathbb{F}_q^{*},\ P \in G_1,\ Q \in G_2 :\ e(aP, bQ) = e(P, Q)^{ab}$
Non-degeneracy: $e \neq 1$
Computability: there exists an efficient algorithm to compute $e$.
== Classification ==
If the same group is used for the first two groups (i.e. $G_1 = G_2$), the pairing is called symmetric and is a mapping from two elements of one group to an element from a second group.
Some researchers classify pairing instantiations into three (or more) basic types:
$G_1 = G_2$;
$G_1 \neq G_2$, but there is an efficiently computable homomorphism $\phi : G_2 \to G_1$;
$G_1 \neq G_2$, and there are no efficiently computable homomorphisms between $G_1$ and $G_2$.
== Usage in cryptography ==
If symmetric, pairings can be used to reduce a hard problem in one group to a different, usually easier problem in another group.
For example, in groups equipped with a bilinear mapping such as the Weil pairing or Tate pairing, generalizations of the computational Diffie–Hellman problem are believed to be infeasible while the simpler decisional Diffie–Hellman problem can be easily solved using the pairing function. The first group is sometimes referred to as a Gap Group because of the assumed difference in difficulty between these two problems in the group.
Let $e$ be a non-degenerate, efficiently computable, bilinear pairing. Let $g$ be a generator of $G$. Consider an instance of the CDH problem, $g$, $g^x$, $g^y$. Intuitively, the pairing function $e$ does not help us compute $g^{xy}$, the solution to the CDH problem. It is conjectured that this instance of the CDH problem is intractable. Given $g^z$, we may check to see if $g^z = g^{xy}$ without knowledge of $x$, $y$, and $z$, by testing whether $e(g^x, g^y) = e(g, g^z)$ holds.
By using the bilinear property, we see that if $e(g^x, g^y) = e(g,g)^{xy} = e(g,g)^{z} = e(g, g^z)$, then, since $G_T$ is a prime order group, $xy = z$.
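Real pairings require elliptic-curve machinery, but the check above can be made concrete with a toy symmetric pairing on the additive group $\mathbb{Z}_{11}$, where "$g^x$" is written additively as $xg \bmod 11$: define $e(a, b) = h^{ab} \bmod 23$ with $h = 2$ of order 11. This toy map is bilinear but offers no security, since group elements reveal their own discrete logarithms; it only makes the algebra concrete.

```python
q, p, h = 11, 23, 2          # subgroup order q; h = 2 has order 11 in Z_23^*

def e(a: int, b: int) -> int:
    # toy bilinear map e: Z_q x Z_q -> <h> inside Z_p^*
    return pow(h, (a * b) % q, p)

g = 1                        # generator of (Z_q, +)
x, y = 3, 5
gx, gy = (x * g) % q, (y * g) % q        # "g^x", "g^y" in additive notation

# bilinearity: e(x*g, y*g) == e(g, g)^(x*y)
assert e(gx, gy) == pow(e(g, g), (x * y) % q, p)

# DDH check: is a claimed g^z equal to the Diffie-Hellman value g^(xy)?
z_good, z_bad = (x * y) % q, 7
assert e(gx, gy) == e(g, (z_good * g) % q)   # accepted
assert e(gx, gy) != e(g, (z_bad * g) % q)    # rejected
```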
While first used for cryptanalysis, pairings have also been used to construct many cryptographic systems for which no other efficient implementation is known, such as identity-based encryption or attribute-based encryption schemes. The security level of some pairing-friendly elliptic curves has since been reduced by advances in cryptanalysis.
Pairing-based cryptography is used in the KZG cryptographic commitment scheme.
A contemporary example of using bilinear pairings is exemplified in the BLS digital signature scheme.
Pairing-based cryptography relies on hardness assumptions separate from those of, for example, elliptic-curve cryptography, which is older and has been studied for a longer time.
== Cryptanalysis ==
In June 2012 the National Institute of Information and Communications Technology (NICT), Kyushu University, and Fujitsu Laboratories Limited improved the previous bound for successfully computing a discrete logarithm on a supersingular elliptic curve from 676 bits to 923 bits.
In 2016, the Extended Tower Number Field Sieve algorithm allowed to reduce the complexity of finding discrete logarithm in some resulting groups of pairings. There are several variants of the multiple and extended tower number field sieve algorithm expanding the applicability and improving the complexity of the algorithm. A unified description of all such algorithms with further improvements was published in 2019. In view of these advances, several works provided revised concrete estimates on the key sizes of secure pairing-based cryptosystems.
== References ==
== External links ==
Lecture on Pairing-Based Cryptography
Ben Lynn's PBC Library | Wikipedia/Pairing-based_cryptography |
Hyperelliptic curve cryptography is similar to elliptic curve cryptography (ECC) insofar as the Jacobian of a hyperelliptic curve is an abelian group in which to do arithmetic, just as we use the group of points on an elliptic curve in ECC.
== Definition ==
An (imaginary) hyperelliptic curve of genus $g$ over a field $K$ is given by the equation

$C : y^2 + h(x)y = f(x) \in K[x,y]$

where $h(x) \in K[x]$ is a polynomial of degree not larger than $g$ and $f(x) \in K[x]$ is a monic polynomial of degree $2g+1$. From this definition it follows that elliptic curves are hyperelliptic curves of genus 1. In hyperelliptic curve cryptography $K$ is often a finite field. The Jacobian of $C$, denoted $J(C)$, is a quotient group, thus the elements of the Jacobian are not points, they are equivalence classes of divisors of degree 0 under the relation of linear equivalence. This agrees with the elliptic curve case, because it can be shown that the Jacobian of an elliptic curve is isomorphic with the group of points on the elliptic curve. The use of hyperelliptic curves in cryptography came about in 1989 from Neal Koblitz. Although introduced only 3 years after ECC, not many cryptosystems implement hyperelliptic curves because the implementation of the arithmetic isn't as efficient as with cryptosystems based on elliptic curves or factoring (RSA). The efficiency of implementing the arithmetic depends on the underlying finite field $K$; in practice it turns out that finite fields of characteristic 2 are a good choice for hardware implementations, while software is usually faster in odd characteristic.
The Jacobian of a hyperelliptic curve is an abelian group and as such it can serve as a group for the discrete logarithm problem (DLP). In short, suppose we have an abelian group $G$ and $g$ an element of $G$; the DLP on $G$ entails finding the integer $a$ given two elements of $G$, namely $g$ and $g^a$. The first type of group used was the multiplicative group of a finite field; later, Jacobians of (hyper)elliptic curves were also used. If the hyperelliptic curve is chosen with care, then Pollard's rho method is the most efficient way to solve the DLP. This means that, if the Jacobian has $n$ elements, the running time is exponential in $\log(n)$. This makes it possible to use Jacobians of a fairly small order, thus making the system more efficient. But if the hyperelliptic curve is chosen poorly, the DLP will become quite easy to solve. In this case there are known attacks which are more efficient than generic discrete logarithm solvers or even subexponential. Hence these hyperelliptic curves must be avoided. Considering various attacks on the DLP, it is possible to list the features of hyperelliptic curves that should be avoided.
== Attacks against the DLP ==
All generic attacks on the discrete logarithm problem in finite abelian groups, such as the Pohlig–Hellman algorithm and Pollard's rho method, can be used to attack the DLP in the Jacobian of a hyperelliptic curve. The Pohlig–Hellman attack reduces the difficulty of the DLP by looking at the order of the group we are working with. Suppose the group $G$ that is used has $n = p_1^{r_1} \cdots p_k^{r_k}$ elements, where $p_1^{r_1} \cdots p_k^{r_k}$ is the prime factorization of $n$. Pohlig–Hellman reduces the DLP in $G$ to DLPs in subgroups of order $p_i$ for $i = 1, \ldots, k$. So for $p$ the largest prime divisor of $n$, the DLP in $G$ is just as hard to solve as the DLP in the subgroup of order $p$. Therefore, we would like to choose $G$ such that the largest prime divisor $p$ of $\#G = n$ is almost equal to $n$ itself. Requiring $\frac{n}{p} \leq 4$ usually suffices.
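This criterion on the group order can be checked with a short script (a toy sketch using trial division, so it is only practical for small orders; the function names are our own):

```python
def largest_prime_factor(n):
    """Largest prime factor of n > 1, by trial division (toy sizes only)."""
    largest, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            largest, n = d, n // d
        d += 1
    return max(largest, n) if n > 1 else largest

def pohlig_hellman_ok(n):
    """True if n / p <= 4, where p is the largest prime factor of the order n."""
    return n // largest_prime_factor(n) <= 4

print(pohlig_hellman_ok(4 * 251))  # True: n/p = 4, an acceptable cofactor
print(pohlig_hellman_ok(30))       # False: largest prime factor 5, n/p = 6
```

Real group orders are hundreds of bits, so an actual implementation would rely on the factorization being known by construction rather than computed.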
The index calculus algorithm is another algorithm that can be used to solve the DLP under some circumstances. For Jacobians of (hyper)elliptic curves there exists an index calculus attack on the DLP. If the genus of the curve becomes too high, the attack will be more efficient than Pollard's rho. Today it is known that even a genus of $g = 3$ cannot assure security. Hence we are left with elliptic curves and hyperelliptic curves of genus 2.
Another restriction on the hyperelliptic curves we can use comes from the Menezes–Okamoto–Vanstone attack / Frey–Rück attack. The first, often called MOV for short, was developed in 1993; the second came about in 1994. Consider a (hyper)elliptic curve $C$ over a finite field $\mathbb{F}_q$ where $q$ is the power of a prime number. Suppose the Jacobian of the curve has $n$ elements and $p$ is the largest prime divisor of $n$. For $k$ the smallest positive integer such that $p \mid q^k - 1$, there exists a computable injective group homomorphism from the subgroup of $J(C)$ of order $p$ to $\mathbb{F}_{q^k}^{*}$. If $k$ is small, we can solve the DLP in $J(C)$ by using the index calculus attack in $\mathbb{F}_{q^k}^{*}$. For arbitrary curves $k$ is very large (around the size of $q^g$); so even though the index calculus attack is quite fast for multiplicative groups of finite fields, this attack is not a threat for most curves. The injective function used in this attack is a pairing, and there are some applications in cryptography that make use of pairings. In such applications it is important to balance the hardness of the DLP in $J(C)$ and $\mathbb{F}_{q^k}^{*}$; depending on the security level, values of $k$ between 6 and 12 are useful.
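The value $k$ above (the embedding degree) is simply the multiplicative order of $q$ modulo $p$, which can be computed directly (a minimal sketch; it assumes $p$ does not divide $q$, and the example values are arbitrary toy choices):

```python
def embedding_degree(q, p):
    """Smallest k >= 1 with p | q^k - 1, i.e. the order of q modulo p."""
    k, t = 1, q % p
    while t != 1:
        t = (t * q) % p
        k += 1
    return k

# Toy example: 2 has order 3 modulo 7 (powers 2, 4, 1), so k = 3.
print(embedding_degree(2, 7))   # 3
```

A curve is vulnerable to MOV/Frey–Rück exactly when this loop terminates quickly; pairing-based applications deliberately pick curves where it stops at a moderate value such as 6 to 12.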
The subgroup of $\mathbb{F}_{q^k}^{*}$ is a torus. There exists some independent usage in torus-based cryptography.
We also have a problem if $p$, the largest prime divisor of the order of the Jacobian, is equal to the characteristic of $\mathbb{F}_q$. By a different injective map we could then consider the DLP in the additive group of $\mathbb{F}_q$ instead of the DLP on the Jacobian. However, the DLP in this additive group is trivial to solve, as can easily be seen. So these curves, called anomalous curves, are also not to be used for the DLP.
== Order of the Jacobian ==
Hence, in order to choose a good curve and a good underlying finite field, it is important to know the order of the Jacobian. Consider a hyperelliptic curve $C$ of genus $g$ over the field $\mathbb{F}_q$ where $q$ is the power of a prime number, and define $C_k$ as $C$ but now over the field $\mathbb{F}_{q^k}$. It can be shown that the order of the Jacobian of $C_k$ lies in the interval $[(\sqrt{q}^{\,k} - 1)^{2g}, (\sqrt{q}^{\,k} + 1)^{2g}]$, called the Hasse–Weil interval.
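The endpoints of the Hasse–Weil interval are easy to evaluate numerically (a small sketch; exact integer bounds would need rounding, which is omitted here):

```python
import math

def hasse_weil_interval(q, k, g):
    """Bounds (sqrt(q)^k - 1)^(2g) and (sqrt(q)^k + 1)^(2g) on |J(C_k)|."""
    s = math.sqrt(q) ** k
    return ((s - 1) ** (2 * g), (s + 1) ** (2 * g))

# Genus-2 curve over F_4: the Jacobian order lies between 1 and 81.
print(hasse_weil_interval(4, 1, 2))   # (1.0, 81.0)
```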
But there is more: we can compute the order using the zeta function of hyperelliptic curves. Let $A_k$ be the number of points on $C_k$. Then we define the zeta function of $C = C_1$ as $Z_C(t) = \exp\left(\sum_{i=1}^{\infty} A_i \frac{t^i}{i}\right)$. For this zeta function it can be shown that $Z_C(t) = \frac{P(t)}{(1-t)(1-qt)}$, where $P(t)$ is a polynomial of degree $2g$ with coefficients in $\mathbb{Z}$. Furthermore, $P(t)$ factors as $P(t) = \prod_{i=1}^{g} (1 - a_i t)(1 - \bar{a_i} t)$, where $a_i \in \mathbb{C}$ for all $i = 1, \ldots, g$. Here $\bar{a}$ denotes the complex conjugate of $a$. Finally, we have that the order of $J(C_k)$ equals $\prod_{i=1}^{g} |1 - a_i^k|^2$. Hence orders of Jacobians can be found by computing the roots of $P(t)$.
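Concretely, once the inverse roots $a_i$ of $P(t)$ are known, the orders over all extension fields follow at once. The sketch below uses an assumed genus-1 example with $q = 5$ and Frobenius trace 2, so $a_1 = 1 + 2i$ (toy numbers invented for illustration, not taken from the article):

```python
def jacobian_order(inverse_roots, k):
    """|J(C_k)| = prod_i |1 - a_i^k|^2, with one root per conjugate pair."""
    prod = 1.0
    for a in inverse_roots:
        prod *= abs(1 - a ** k) ** 2
    return round(prod)

a1 = 1 + 2j                     # a1 * conj(a1) = q = 5, a1 + conj(a1) = trace = 2
print(jacobian_order([a1], 1))  # 4  = q + 1 - trace, the order over F_5
print(jacobian_order([a1], 2))  # 32 = the order over F_25
```

The value for $k = 2$ agrees with the standard trace recursion $t_2 = t_1^2 - 2q$, since $q^2 + 1 - t_2 = 25 + 1 + 6 = 32$.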
== References ==
== External links ==
Colm Ó hÉigeartaigh Implementation of some hyperelliptic curve algorithms using MIRACL
DJ Bernstein Surface1271: high-speed genus-2-hyperelliptic-curve cryptography – incomplete work from 2006 intending to produce a Diffie–Hellman variant, but stalled due to difficulties in choosing surfaces (in turn because point-counting for large surfaces is unavailable). Contains software for the Pentium M scalar multiplication on a Kummer surface.
Network coding has been shown to optimally use bandwidth in a network, maximizing information flow, but the scheme is inherently vulnerable to pollution attacks by malicious nodes in the network. A node injecting garbage can quickly affect many receivers. The pollution of network packets spreads quickly since the output of even an honest node is corrupted if at least one of the incoming packets is corrupted.
An attacker can corrupt a packet even if it is encrypted, by either forging the signature or producing a collision under the hash function; this gives the attacker access to the packets and the ability to corrupt them. Denis Charles, Kamal Jain and Kristin Lauter designed a new homomorphic encryption signature scheme for use with network coding to prevent pollution attacks.
The homomorphic property of the signatures allows nodes to sign any linear combination of the incoming packets without contacting the signing authority. In this scheme it is computationally infeasible for a node to sign a linear combination of the packets without disclosing what linear combination was used in the generation of the packet. Furthermore, the signature scheme can be proven secure under well-known cryptographic assumptions: the hardness of the discrete logarithm problem and the computational elliptic-curve Diffie–Hellman problem.
== Network coding ==
Let $G = (V, E)$ be a directed graph where $V$ is a set whose elements are called vertices or nodes, and $E$ is a set of ordered pairs of vertices, called arcs, directed edges, or arrows. A source $s \in V$ wants to transmit a file $D$ to a set $T \subseteq V$ of the vertices. One chooses a vector space $W/\mathbb{F}_p$ (say of dimension $d$), where $p$ is a prime, and views the data to be transmitted as a bunch of vectors $w_1, \ldots, w_k \in W$
. The source then creates the augmented vectors $v_1, \ldots, v_k$ by setting $v_i = (0, \ldots, 0, 1, \ldots, 0, w_{i_1}, \ldots, w_{i_d})$, where $w_{i_j}$ is the $j$-th coordinate of the vector $w_i$. There are $(i-1)$ zeros before the first '1' appears in $v_i$. One can assume without loss of generality that the vectors $v_i$ are linearly independent. We denote the linear subspace (of $\mathbb{F}_p^{k+d}$) spanned by these vectors by $V$
. Each outgoing edge $e \in E$ computes a linear combination, $y(e)$, of the vectors entering the vertex $v = \mathrm{in}(e)$ where the edge originates, that is to say $y(e) = \sum_{f : \mathrm{out}(f) = v} m_e(f)\, y(f)$ where $m_e(f) \in \mathbb{F}_p$
. We consider the source as having $k$ input edges carrying the $k$ vectors $w_i$. By induction, one has that the vector $y(e)$ on any edge is a linear combination $y(e) = \sum_{1 \leq i \leq k} g_i(e)\, v_i$ and is a vector in $V$. The $k$-dimensional vector $g(e) = (g_1(e), \ldots, g_k(e))$ is simply the first $k$ coordinates of the vector $y(e)$
. We call the matrix whose rows are the vectors $g(e_1), \ldots, g(e_k)$, where $e_i$ are the incoming edges for a vertex $t \in T$, the global encoding matrix for $t$ and denote it as $G_t$. In practice the encoding vectors are chosen at random, so the matrix $G_t$ is invertible with high probability. Thus, any receiver, on receiving $y_1, \ldots, y_k$, can find $w_1, \ldots, w_k$ by solving
$\begin{bmatrix} y_1' \\ y_2' \\ \vdots \\ y_k' \end{bmatrix} = G_t \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_k \end{bmatrix}$
where the $y_i'$ are the vectors formed by removing the first $k$ coordinates of the vector $y_i$.
== Decoding at the receiver ==
Each receiver, $t \in T$, gets $k$ vectors $y_1, \ldots, y_k$ which are random linear combinations of the $v_i$'s.
In fact, if $y_i = (\alpha_{i1}, \ldots, \alpha_{ik}, a_{i_1}, \ldots, a_{i_d})$, then $y_i = \sum_{1 \leq j \leq k} \alpha_{ij} v_j$. Thus we can invert the linear transformation to find the $v_i$'s with high probability.
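The encode/decode round trip can be walked through end to end with tiny parameters ($k = d = 2$, $p = 13$; all concrete values below are illustrative, not from the article):

```python
p = 13
w = [[3, 7], [5, 11]]                       # data vectors w_1, w_2
v = [[1, 0] + w[0], [0, 1] + w[1]]          # augmented vectors v_i in F_p^(k+d)

def combine(coeffs, vecs):
    """The per-edge operation: sum of coefficient * vector, mod p."""
    return [sum(c * x for c, x in zip(coeffs, col)) % p for col in zip(*vecs)]

# The receiver obtains two combinations; the first k coordinates of each y_i
# form a row of the global encoding matrix G_t.
y = [combine([2, 3], v), combine([5, 7], v)]
G = [row[:2] for row in y]
yp = [row[2:] for row in y]                 # the y_i' payload parts

# Invert the 2x2 matrix G_t mod p (Fermat inverse for the determinant).
det = (G[0][0] * G[1][1] - G[0][1] * G[1][0]) % p
inv = pow(det, p - 2, p)
Ginv = [[G[1][1] * inv % p, -G[0][1] * inv % p],
        [-G[1][0] * inv % p, G[0][0] * inv % p]]
recovered = [[sum(Ginv[i][j] * yp[j][c] for j in range(2)) % p
              for c in range(2)] for i in range(2)]
print(recovered)                            # [[3, 7], [5, 11]]
```

Here the combining coefficients are fixed so the example is deterministic; in practice they are drawn at random, and decoding fails only in the rare case that $G_t$ is singular.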
== History ==
Krohn, Freedman and Mazières proposed a scheme in 2004: suppose we have a hash function $H : V \longrightarrow G$ such that:
$H$ is collision resistant: it is hard to find $x$ and $y$ such that $H(x) = H(y)$;
$H$ is a homomorphism: $H(x+y) = H(x) + H(y)$.
Then the server can securely distribute $H(v_i)$ to each receiver, and to check whether $y = \sum_{1 \leq i \leq k} \alpha_i v_i$, we can check whether $H(y) = \sum_{1 \leq i \leq k} \alpha_i H(v_i)$.
The problem with this method is that the server needs to transfer secure information to each of the receivers. The hash function $H$ needs to be transmitted to all the nodes in the network through a separate secure channel; moreover, $H$ is expensive to compute, and secure transmission of $H$ is not economical either.
== Advantages of homomorphic signatures ==
Establishes authentication in addition to detecting pollution.
No need for distributing secure hash digests.
Smaller bit lengths in general will suffice. Signatures of length 180 bits have as much security as 1024 bit RSA signatures.
Public information does not change for subsequent file transmission.
== Signature scheme ==
The homomorphic property of the signatures allows nodes to sign any linear combination of the incoming packets without contacting the signing authority.
=== Elliptic curve cryptography over a finite field ===
Elliptic curve cryptography over a finite field is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields.
Let $\mathbb{F}_q$ be a finite field such that $q$ is not a power of 2 or 3. Then an elliptic curve $E$ over $\mathbb{F}_q$ is a curve given by an equation of the form
$y^2 = x^3 + ax + b,$
where $a, b \in \mathbb{F}_q$ such that $4a^3 + 27b^2 \neq 0$.
Let $K \supseteq \mathbb{F}_q$; then $E(K) = \{(x, y) \mid y^2 = x^3 + ax + b\} \cup \{O\}$ forms an abelian group with $O$ as identity. The group operations can be performed efficiently.
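The group law can be sketched in affine coordinates (a minimal illustration over a toy field; $O$ is represented as `None`, and field inverses use Fermat's little theorem):

```python
def ec_add(P, Q, a, p):
    """Add points on y^2 = x^3 + a*x + b over F_p; None is the identity O."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                              # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, p - 2, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, p - 2, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

# Toy curve y^2 = x^3 + 7 over F_11; (3, 1) turns out to have order 3.
print(ec_add((3, 1), (3, 1), 0, 11))   # (3, 10)
print(ec_add((3, 1), (3, 10), 0, 11))  # None, i.e. the identity O
```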
=== Weil pairing ===
The Weil pairing is a construction of roots of unity by means of functions on an elliptic curve $E$, in such a way as to constitute a pairing (a bilinear form, though with multiplicative notation) on the torsion subgroup of $E$. Let $E/\mathbb{F}_q$ be an elliptic curve and let $\bar{\mathbb{F}}_q$ be an algebraic closure of $\mathbb{F}_q$. If $m$ is an integer relatively prime to the characteristic of the field $\mathbb{F}_q$, then the group of $m$-torsion points is $E[m] = \{P \in E(\bar{\mathbb{F}}_q) : mP = O\}$.
If $E/\mathbb{F}_q$ is an elliptic curve and $\gcd(m, q) = 1$, then $E[m] \cong (\mathbb{Z}/m\mathbb{Z}) \times (\mathbb{Z}/m\mathbb{Z})$.
There is a map $e_m : E[m] \times E[m] \rightarrow \mu_m(\mathbb{F}_q)$ such that:
(Bilinear) $e_m(P+R, Q) = e_m(P, Q)\, e_m(R, Q)$ and $e_m(P, Q+R) = e_m(P, Q)\, e_m(P, R)$.
(Non-degenerate) $e_m(P, Q) = 1$ for all $P$ implies that $Q = O$.
(Alternating) $e_m(P, P) = 1$.
Also, $e_m$ can be computed efficiently.
=== Homomorphic signatures ===
Let $p$ be a prime and $q$ a prime power. Let $V/\mathbb{F}_p$ be a vector space of dimension $D$ and let $E/\mathbb{F}_q$ be an elliptic curve such that $P_1, \ldots, P_D \in E[p]$.
Define $h : V \longrightarrow E[p]$ as follows: $h(u_1, \ldots, u_D) = \sum_{1 \leq i \leq D} u_i P_i$. The function $h$ is an arbitrary homomorphism from $V$ to $E[p]$.
The server chooses $s_1, \ldots, s_D$ secretly in $\mathbb{F}_p$ and publishes a point $Q$ of $p$-torsion such that $e_p(P_i, Q) \neq 1$; it also publishes $(P_i, s_i Q)$ for $1 \leq i \leq D$.
The signature of the vector $v = (u_1, \ldots, u_D)$ is $\sigma(v) = \sum_{1 \leq i \leq D} u_i s_i P_i$.
Note: this signature is homomorphic, since the computation of $h$ is a homomorphism.
=== Signature verification ===
Given $v = (u_1, \ldots, u_D)$ and its signature $\sigma$, verify that
$e_p(\sigma, Q) = e_p\left(\sum_{1 \leq i \leq D} u_i s_i P_i,\, Q\right) = \prod_i e_p(u_i s_i P_i, Q) = \prod_i e_p(u_i P_i, s_i Q).$
The verification crucially uses the bilinearity of the Weil pairing.
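The mechanics of combining and verifying can be modelled on a toy cyclic group: below, "points" are integers mod a prime $\ell$ and $e(u, v) = uv \bmod \ell$ is a stand-in bilinear map (written additively in the target), not an actual Weil pairing; all parameter values are invented for illustration.

```python
ell = 1009                            # stand-in for the prime torsion order p
P = [5, 11, 23]                       # stand-ins for the points P_1..P_D
s = [101, 202, 303]                   # server's secret scalars s_i
Q = 7                                 # the published point Q
sQ = [(si * Q) % ell for si in s]     # published values s_i * Q

def e(u, v):
    """Stand-in bilinear map: e(a*u, v) = a*e(u, v) = e(u, a*v). NOT a Weil pairing."""
    return (u * v) % ell

def sign(u):
    """sigma(v) = sum_i u_i * s_i * P_i (only the server knows the s_i)."""
    return sum(ui * si * Pi for ui, si, Pi in zip(u, s, P)) % ell

def verify(u, sig):
    """Compare e(sigma, Q) with the combination of e(u_i * P_i, s_i * Q)."""
    rhs = sum(e((ui * Pi) % ell, sQi) for ui, Pi, sQi in zip(u, P, sQ)) % ell
    return e(sig, Q) == rhs

v1, v2 = [1, 0, 4], [0, 2, 5]
# An intermediate node combines packets and signatures with the same coefficients:
comb = [(3 * a + 8 * b) % ell for a, b in zip(v1, v2)]
sig = (3 * sign(v1) + 8 * sign(v2)) % ell
print(verify(comb, sig))              # True: the combined signature verifies
```

Because the target group is written additively here, the article's product over $i$ becomes a sum; the bilinearity argument is otherwise identical, and any tampering with `comb` makes verification fail.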
== System setup ==
The server computes $\sigma(v_i)$ for each $1 \leq i \leq k$ and transmits $(v_i, \sigma(v_i))$.
At each edge $e$, while computing $y(e) = \sum_{f \in E : \mathrm{out}(f) = \mathrm{in}(e)} m_e(f)\, y(f)$, also compute $\sigma(y(e)) = \sum_{f \in E : \mathrm{out}(f) = \mathrm{in}(e)} m_e(f)\, \sigma(y(f))$ on the elliptic curve $E$.
The signature is a point on the elliptic curve with coordinates in $\mathbb{F}_q$. Thus the size of the signature is $2 \log q$ bits (which is some constant times $\log p$ bits, depending on the relative size of $p$ and $q$), and this is the transmission overhead. The computation of the signature $h(e)$ at each vertex requires $O(d_{\mathrm{in}} \log p \log^{1+\epsilon} q)$ bit operations, where $d_{\mathrm{in}}$ is the in-degree of the vertex $\mathrm{in}(e)$. The verification of a signature requires $O((d+k) \log^{2+\epsilon} q)$ bit operations.
== Proof of security ==
One way an attacker can defeat the scheme is by producing a collision under the hash function: given points $(P_1, \ldots, P_r)$ in $E[p]$, find $a = (a_1, \ldots, a_r) \in \mathbb{F}_p^r$ and $b = (b_1, \ldots, b_r) \in \mathbb{F}_p^r$ such that $a \neq b$ and $\sum_{1 \leq i \leq r} a_i P_i = \sum_{1 \leq j \leq r} b_j P_j$.
Proposition: There is a polynomial-time reduction from the discrete log on the cyclic group of order $p$ on elliptic curves to Hash-Collision.
If $r = 2$, then we get $xP + yQ = uP + vQ$. Thus $(x-u)P + (y-v)Q = 0$.
We claim that $x \neq u$ and $y \neq v$. Suppose that $x = u$; then we would have $(y-v)Q = 0$, but $Q$ is a point of order $p$ (a prime), thus $y - v \equiv 0 \pmod{p}$. In other words, $y = v$ in $\mathbb{F}_p$. This contradicts the assumption that $(x, y)$ and $(u, v)$ are distinct pairs in $\mathbb{F}_p^2$. Thus we have that $Q = -(x-u)(y-v)^{-1} P$, where the inverse is taken modulo $p$.
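The algebra of the $r = 2$ case can be traced in a toy additive group $\mathbb{Z}_p$, where $P = 1$ and the "discrete log" of $Q$ is just $Q$ itself (purely illustrative numbers; a fabricated collision stands in for the collision oracle):

```python
p = 1009
d = 421                               # the unknown discrete log of Q
P, Q = 1, d
x, y = 15, 300                        # one preimage (x, y)
v = (y + 1) % p                       # fabricate a colliding (u, v) != (x, y),
u = (x + (y - v) * d) % p             # as a collision oracle might return
assert (x * P + y * Q) % p == (u * P + v * Q) % p

# From the collision, recover d = -(x - u) * (y - v)^(-1) mod p.
recovered = (-(x - u) * pow(y - v, p - 2, p)) % p
print(recovered)                      # 421
```

In the real setting the group elements are elliptic curve points, so fabricating the collision is hard; the point of the reduction is that anyone who *can* produce collisions can extract discrete logs this way.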
If we have $r > 2$ then we can do one of two things. Either we can take $P_1 = P$ and $P_2 = Q$ as before and set $P_i = O$ for $i > 2$ (in this case the proof reduces to the case when $r = 2$), or we can take $P_1 = r_1 P$ and $P_i = r_i Q$, where the $r_i$ are chosen at random from $\mathbb{F}_p$. We get one equation in one unknown (the discrete log of $Q$). It is quite possible that the equation we get does not involve the unknown. However, this happens with very small probability, as we argue next. Suppose the algorithm for Hash-Collision gave us that
$a r_1 P + \sum_{2 \leq i \leq r} b_i r_i Q = 0.$
Then as long as $\sum_{2 \leq i \leq r} b_i r_i \not\equiv 0 \pmod{p}$, we can solve for the discrete log of $Q$. But the $r_i$'s are unknown to the oracle for Hash-Collision, and so we can interchange the order in which this process occurs. In other words, given $b_i$, for $2 \leq i \leq r$, not all zero, what is the probability that the $r_i$'s we chose satisfy $\sum_{2 \leq i \leq r} b_i r_i = 0$? It is clear that the latter probability is $\frac{1}{p}$. Thus with high probability we can solve for the discrete log of $Q$.
We have shown that producing hash collisions in this scheme is difficult. The other method by which an adversary can foil the system is by forging a signature. This signature scheme is essentially the aggregate version of the Boneh–Lynn–Shacham signature scheme, for which forging a signature is shown to be at least as hard as solving the elliptic-curve Diffie–Hellman problem. The only known way to solve this problem on elliptic curves is by computing discrete logs. Thus forging a signature is at least as hard as solving the computational co-Diffie–Hellman problem on elliptic curves, and probably as hard as computing discrete logs.
== See also ==
Network coding
Homomorphic encryption
Elliptic-curve cryptography
Weil pairing
Elliptic-curve Diffie–Hellman
Elliptic Curve Digital Signature Algorithm
Digital Signature Algorithm
== References ==
== External links ==
Comprehensive View of a Live Network Coding P2P System
Signatures for Network Coding (presentation), CISS 2006, Princeton
University at Buffalo Lecture Notes on Coding Theory – Dr. Atri Rudra
Kleptography is the study of stealing information securely and subliminally. The term was introduced by Adam Young and Moti Yung in the Proceedings of Advances in Cryptology – Crypto '96.
Kleptography is a subfield of cryptovirology and is a natural extension of the theory of subliminal channels that was pioneered by Gus Simmons while at Sandia National Laboratory. A kleptographic backdoor is synonymously referred to as an asymmetric backdoor. Kleptography encompasses secure and covert communications through cryptosystems and cryptographic protocols. This is reminiscent of, but not the same as, steganography, which studies covert communications through graphics, video, digital audio data, and so forth.
== Kleptographic attack ==
=== Meaning ===
A kleptographic attack is an attack which uses asymmetric cryptography to implement a cryptographic backdoor. For example, one such attack could be to subtly modify how the public and private key pairs are generated by the cryptosystem so that the private key could be derived from the public key using the attacker's private key. In a well-designed attack, the outputs of the infected cryptosystem would be computationally indistinguishable from the outputs of the corresponding uninfected cryptosystem. If the infected cryptosystem is a black-box implementation such as a hardware security module, a smartcard, or a Trusted Platform Module, a successful attack could go completely unnoticed.
A reverse engineer might be able to uncover a backdoor inserted by an attacker, and when it is a symmetric backdoor, even use it themself. However, by definition a kleptographic backdoor is asymmetric and the reverse-engineer cannot use it. A kleptographic attack (asymmetric backdoor) requires a private key known only to the attacker in order to use the backdoor. In this case, even if the reverse engineer was well-funded and gained complete knowledge of the backdoor, it would remain useless for them to extract the plaintext without the attacker's private key.
=== Construction ===
Kleptographic attacks can be constructed as a cryptotrojan that infects a cryptosystem and opens a backdoor for the attacker, or can be implemented by the manufacturer of a cryptosystem. The attack does not necessarily have to reveal the entirety of the cryptosystem's output; a more complicated attack technique may alternate between producing uninfected output and insecure data with the backdoor present.
=== Design ===
Kleptographic attacks have been designed for RSA key generation, the Diffie–Hellman key exchange, the Digital Signature Algorithm, and other cryptographic algorithms and protocols. SSL, SSH, and IPsec protocols are vulnerable to kleptographic attacks. In each case, the attacker is able to compromise the particular cryptographic algorithm or protocol by inspecting the information that the backdoor information is encoded in (e.g., the public key, the digital signature, the key exchange messages, etc.) and then exploiting the logic of the asymmetric backdoor using their secret key (usually a private key).
A. Juels and J. Guajardo proposed a method (KEGVER) through which a third party can verify RSA key generation. This is devised as a form of distributed key generation in which the secret key is only known to the black box itself. This assures that the key generation process was not modified and that the private key cannot be reproduced through a kleptographic attack.
=== Examples ===
Four practical examples of kleptographic attacks (including a simplified SETUP attack against RSA) can be found in JCrypTool 1.0, the platform-independent version of the open-source CrypTool project. A demonstration of the prevention of kleptographic attacks by means of the KEGVER method is also implemented in JCrypTool.
The Dual_EC_DRBG cryptographic pseudo-random number generator from NIST SP 800-90A is thought to contain a kleptographic backdoor. Dual_EC_DRBG utilizes elliptic curve cryptography, and the NSA is thought to hold a private key which, together with bias flaws in Dual_EC_DRBG, allows the NSA to decrypt, for example, SSL traffic between computers using Dual_EC_DRBG. The algebraic nature of the attack follows the structure of the repeated Dlog kleptogram in the work of Young and Yung.
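The repeated-Dlog idea can be illustrated with a toy Diffie–Hellman device whose next session secret is derived from a value only the attacker can recompute. All parameters below are insecure toy choices made for illustration ($2^{127} - 1$ is prime); this is a sketch of the general shape of such a backdoor, not any deployed design.

```python
import hashlib

p = 2**127 - 1                       # toy prime modulus (a Mersenne prime)
g = 5

def h(n):
    """Hash a group element down to an exponent (models a key-derivation step)."""
    return int.from_bytes(hashlib.sha256(str(n).encode()).digest(), "big") % (p - 1)

a = 123456789                        # attacker's private key, baked into the device
Y = pow(g, a, p)                     # attacker's public key

# Infected device: session 1 uses secret x1; session 2's secret is derived
# from Y^x1, which looks random to anyone without the attacker's key a.
x1 = 987654321
pub1 = pow(g, x1, p)
x2 = h(pow(Y, x1, p))
pub2 = pow(g, x2, p)

# Attacker, observing only the public value pub1, recovers x2:
# pub1^a = g^(x1*a) = Y^x1, so hashing it reproduces the device's derivation.
x2_recovered = h(pow(pub1, a, p))
print(x2_recovered == x2)            # True
```

The outputs `pub1`, `pub2` are valid Diffie–Hellman public keys, so a reverse engineer who extracts the device's code learns that the backdoor exists but, lacking `a`, still cannot exploit it, which is the defining asymmetric property.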
== References ==
In cryptography, Tiger is a cryptographic hash function designed by Ross Anderson and Eli Biham in 1995 for efficiency on 64-bit platforms. The size of a Tiger hash value is 192 bits. Truncated versions (known as Tiger/128 and Tiger/160) can be used for compatibility with protocols assuming a particular hash size. Unlike the SHA-2 family, no distinguishing initialization values are defined; they are simply prefixes of the full Tiger/192 hash value.
Tiger2 is a variant where the message is padded by first appending a byte with the hexadecimal value of 0x80 as in MD4, MD5 and SHA, rather than with the hexadecimal value of 0x01 as in the case of Tiger. The two variants are otherwise identical.
== Algorithm ==
Tiger is based on the Merkle–Damgård construction. The one-way compression function operates on 64-bit words, maintaining 3 words of state and processing 8 words of data. There are 24 rounds, using a combination of operation mixing with XOR and addition/subtraction, rotates, and S-box lookups, and a fairly intricate key scheduling algorithm for deriving 24 round keys from the 8 input words.
Although fast in software, Tiger's large S-boxes (four S-boxes, each with 256 64-bit entries totaling 8 KiB) make implementations in hardware or microcontrollers difficult.
== Usage ==
Tiger is frequently used in Merkle hash tree form, where it is referred to as TTH (Tiger Tree Hash). TTH is used by many clients on the Direct Connect and Gnutella file sharing networks, and can optionally be included in the BitTorrent metafile for better content availability.
Tiger was considered for inclusion in the OpenPGP standard, but was abandoned in favor of RIPEMD-160.
== OID ==
RFC 2440 refers to TIGER as having no OID, whereas the GNU Coding Standards list TIGER as having OID 1.3.6.1.4.1.11591.12.2. In the IPSEC subtree, HMAC-TIGER is assigned OID 1.3.6.1.5.5.8.1.3. No OID for TTH has been announced yet.
== Byte order ==
The specification of Tiger does not define the way its output should be printed but only defines the result to be three ordered 64-bit integers. The "testtiger" program at the author's homepage was intended to allow easy testing of the test source code, rather than to define any particular print order. The protocols Direct Connect and ADC as well as the program tthsum use little-endian byte order, which is also preferred by one of the authors.
== Examples ==
In the example below, the 192-bit (24-byte) Tiger hashes are represented as 48 hexadecimal digits in little-endian byte order. The following demonstrates a 43-byte ASCII input and the corresponding Tiger hashes:
Tiger("The quick brown fox jumps over the lazy dog") =
6d12a41e72e644f017b6f0e2f7b44c6285f06dd5d2c5b075
Tiger2("The quick brown fox jumps over the lazy dog") =
976abff8062a2e9dcea3a1ace966ed9c19cb85558b4976d8
Even a small change in the message will (with very high probability) result in a completely different hash, e.g. changing d to c:
Tiger("The quick brown fox jumps over the lazy cog") =
a8f04b0f7201a0d728101c9d26525b31764a3493fcd8458f
Tiger2("The quick brown fox jumps over the lazy cog") =
09c11330283a27efb51930aa7dc1ec624ff738a8d9bdd3df
The hash of the zero-length string is:
Tiger("") =
3293ac630c13f0245f92bbb1766e16167a4e58492dde73f3
Tiger2("") =
4441be75f6018773c206c22745374b924aa8313fef919f41
== Cryptanalysis ==
Unlike MD5 or SHA-0/1, there are no known effective attacks on the full 24-round Tiger except for pseudo-near collision. While MD5 processes its state with 64 simple 32-bit operations per 512-bit block and SHA-1 with 80, Tiger updates its state with a total of 144 such operations per 512-bit block, additionally strengthened by large S-box look-ups.
John Kelsey and Stefan Lucks have found a collision-finding attack on 16-round Tiger with a time complexity equivalent to about 2^44 compression function invocations and another attack that finds pseudo-near collisions in 20-round Tiger with work less than that of 2^48 compression function invocations. Florian Mendel et al. have improved upon these attacks by describing a collision attack spanning 19 rounds of Tiger, and a 22-round pseudo-near-collision attack. These attacks require a work effort equivalent to about 2^62 and 2^44 evaluations of the Tiger compression function, respectively.
== See also ==
Hash function security summary
Comparison of cryptographic hash functions
List of hash functions
Serpent – a block cipher by the same authors
== References ==
== External links ==
The Tiger home page
The MD2 Message-Digest Algorithm is a cryptographic hash function developed by Ronald Rivest in 1989. The algorithm is optimized for 8-bit computers. MD2 is specified in IETF RFC 1319. The "MD" in MD2 stands for "Message Digest".
Even though MD2 is not yet fully compromised, the IETF retired MD2 to "historic" status in 2011, citing "signs of weakness". It is deprecated in favor of SHA-256 and other strong hashing algorithms.
Nevertheless, as of 2014, it remained in use in public key infrastructures as part of certificates generated with MD2 and RSA.
== Description ==
The 128-bit hash value of any message is formed by padding it to a multiple of the block length (128 bits or 16 bytes) and adding a 16-byte checksum to it. For the actual calculation, a 48-byte auxiliary block and a 256-byte S-table are used. The constants were generated by shuffling the integers 0 through 255 using a variant of Durstenfeld's algorithm with a pseudorandom number generator based on decimal digits of π (pi) (see nothing up my sleeve number). The algorithm runs through a loop where it permutes each byte in the auxiliary block 18 times for every 16 input bytes processed. Once all of the blocks of the (lengthened) message have been processed, the first partial block of the auxiliary block becomes the hash value of the message.
The S-table values in hex are:
{ 0x29, 0x2E, 0x43, 0xC9, 0xA2, 0xD8, 0x7C, 0x01, 0x3D, 0x36, 0x54, 0xA1, 0xEC, 0xF0, 0x06, 0x13,
0x62, 0xA7, 0x05, 0xF3, 0xC0, 0xC7, 0x73, 0x8C, 0x98, 0x93, 0x2B, 0xD9, 0xBC, 0x4C, 0x82, 0xCA,
0x1E, 0x9B, 0x57, 0x3C, 0xFD, 0xD4, 0xE0, 0x16, 0x67, 0x42, 0x6F, 0x18, 0x8A, 0x17, 0xE5, 0x12,
0xBE, 0x4E, 0xC4, 0xD6, 0xDA, 0x9E, 0xDE, 0x49, 0xA0, 0xFB, 0xF5, 0x8E, 0xBB, 0x2F, 0xEE, 0x7A,
0xA9, 0x68, 0x79, 0x91, 0x15, 0xB2, 0x07, 0x3F, 0x94, 0xC2, 0x10, 0x89, 0x0B, 0x22, 0x5F, 0x21,
0x80, 0x7F, 0x5D, 0x9A, 0x5A, 0x90, 0x32, 0x27, 0x35, 0x3E, 0xCC, 0xE7, 0xBF, 0xF7, 0x97, 0x03,
0xFF, 0x19, 0x30, 0xB3, 0x48, 0xA5, 0xB5, 0xD1, 0xD7, 0x5E, 0x92, 0x2A, 0xAC, 0x56, 0xAA, 0xC6,
0x4F, 0xB8, 0x38, 0xD2, 0x96, 0xA4, 0x7D, 0xB6, 0x76, 0xFC, 0x6B, 0xE2, 0x9C, 0x74, 0x04, 0xF1,
0x45, 0x9D, 0x70, 0x59, 0x64, 0x71, 0x87, 0x20, 0x86, 0x5B, 0xCF, 0x65, 0xE6, 0x2D, 0xA8, 0x02,
0x1B, 0x60, 0x25, 0xAD, 0xAE, 0xB0, 0xB9, 0xF6, 0x1C, 0x46, 0x61, 0x69, 0x34, 0x40, 0x7E, 0x0F,
0x55, 0x47, 0xA3, 0x23, 0xDD, 0x51, 0xAF, 0x3A, 0xC3, 0x5C, 0xF9, 0xCE, 0xBA, 0xC5, 0xEA, 0x26,
0x2C, 0x53, 0x0D, 0x6E, 0x85, 0x28, 0x84, 0x09, 0xD3, 0xDF, 0xCD, 0xF4, 0x41, 0x81, 0x4D, 0x52,
0x6A, 0xDC, 0x37, 0xC8, 0x6C, 0xC1, 0xAB, 0xFA, 0x24, 0xE1, 0x7B, 0x08, 0x0C, 0xBD, 0xB1, 0x4A,
0x78, 0x88, 0x95, 0x8B, 0xE3, 0x63, 0xE8, 0x6D, 0xE9, 0xCB, 0xD5, 0xFE, 0x3B, 0x00, 0x1D, 0x39,
0xF2, 0xEF, 0xB7, 0x0E, 0x66, 0x58, 0xD0, 0xE4, 0xA6, 0x77, 0x72, 0xF8, 0xEB, 0x75, 0x4B, 0x0A,
0x31, 0x44, 0x50, 0xB4, 0x8F, 0xED, 0x1F, 0x1A, 0xDB, 0x99, 0x8D, 0x33, 0x9F, 0x11, 0x83, 0x14 }
== MD2 hashes ==
The 128-bit (16-byte) MD2 hashes (also termed message digests) are typically represented as 32-digit hexadecimal numbers. The following demonstrates a 43-byte ASCII input and the corresponding MD2 hash:
MD2("The quick brown fox jumps over the lazy dog") =
03d85a0d629d2c442e987525319fc471
As the result of the avalanche effect in MD2, even a small change in the input message will (with overwhelming probability) result in a completely different hash. For example, changing the letter d to c in the message results in:
MD2("The quick brown fox jumps over the lazy cog") =
6b890c9292668cdbbfda00a4ebf31f05
The hash of the zero-length string is:
MD2("") =
8350e5a3e24c153df2275c9f80692773
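The algorithm described above is compact enough to check against these test vectors. Below is a minimal Python sketch of MD2 following RFC 1319 (the checksum step uses the XOR-assignment form, a well-known correction to the RFC's prose), with the S-table transcribed from the listing earlier in this article; it is illustrative rather than a hardened implementation:

```python
# S-table transcribed from the listing above (256 bytes)
S = bytes.fromhex(
    "292e43c9a2d87c013d3654a1ecf00613"
    "62a705f3c0c7738c98932bd9bc4c82ca"
    "1e9b573cfdd4e01667426f188a17e512"
    "be4ec4d6da9ede49a0fbf58ebb2fee7a"
    "a968799115b2073f94c210890b225f21"
    "807f5d9a5a903227353ecce7bff79703"
    "ff1930b348a5b5d1d75e922aac56aac6"
    "4fb838d296a47db676fc6be29c7404f1"
    "459d705964718720865bcf65e62da802"
    "1b6025adaeb0b9f61c46616934407e0f"
    "5547a323dd51af3ac35cf9cebac5ea26"
    "2c530d6e85288409d3dfcdf441814d52"
    "6adc37c86cc1abfa24e17b080cbdb14a"
    "7888958be363e86de9cbd5fe3b001d39"
    "f2efb70e6658d0e4a67772f8eb754b0a"
    "314450b48fed1f1adb998d339f118314"
)

def md2(message: bytes) -> str:
    # Pad to a multiple of 16 bytes: append i bytes, each of value i
    i = 16 - len(message) % 16
    data = message + bytes([i]) * i

    # Compute the 16-byte checksum over the padded message and append it
    checksum, last = bytearray(16), 0
    for off in range(0, len(data), 16):
        for j in range(16):
            checksum[j] ^= S[data[off + j] ^ last]
            last = checksum[j]
    data += bytes(checksum)

    # Process each 16-byte block with 18 permutation rounds over the
    # 48-byte auxiliary block X
    X = bytearray(48)
    for off in range(0, len(data), 16):
        for j in range(16):
            X[16 + j] = data[off + j]
            X[32 + j] = X[16 + j] ^ X[j]
        t = 0
        for rnd in range(18):
            for k in range(48):
                t = X[k] = X[k] ^ S[t]
            t = (t + rnd) % 256
    return bytes(X[:16]).hex()   # first 16 bytes of X are the digest
```

Running this function on the inputs above reproduces the three digests shown.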
== Security ==
In 1995, Rogier and Chauvaud presented collisions of MD2's compression function, although they were unable to extend the attack to the full MD2. The described collisions were published in 1997.
In 2004, MD2 was shown to be vulnerable to a preimage attack with time complexity equivalent to 2^104 applications of the compression function. The author concludes, "MD2 can no longer be considered a secure one-way hash function".
In 2008, the preimage attack on MD2 was further improved, with a time complexity of 2^73 compression function evaluations and memory requirements of 2^73 message blocks.
In 2009, MD2 was shown to be vulnerable to a collision attack with time complexity of 2^63.3 compression function evaluations and memory requirements of 2^52 hash values. This is slightly better than the birthday attack, which is expected to take 2^65.5 compression function evaluations.
In 2009, security updates were issued disabling MD2 in OpenSSL, GnuTLS, and Network Security Services.
== See also ==
Hash function security summary
Comparison of cryptographic hash functions
MD4
MD5
MD6
SHA-1
== References ==
== Further reading ==
== External links == | Wikipedia/MD2_(cryptography) |
The Proceedings of the USSR Academy of Sciences (Russian: Доклады Академии Наук СССР, Doklady Akademii Nauk SSSR (DAN SSSR), French: Comptes Rendus de l'Académie des Sciences de l'URSS [kɔ̃t ʁɑ̃dy də lakademi de sjɑ̃s də ly.ɛʁ.ɛs.ɛs]) was a Soviet journal that was dedicated to publishing original, academic research papers in physics, mathematics, chemistry, geology, and biology. It was first published in 1933 and ended in 1992 with volume 322, issue 3.
Today, it is continued by Doklady Akademii Nauk (Russian: Доклады Академии Наук), which began publication in 1992. The journal is also known as the Proceedings of the Russian Academy of Sciences (RAS).
Doklady has had a complicated publication and translation history. A number of translation journals exist which publish selected articles from the original by subject section; these are listed below.
The journal is indexed in Russian Science Citation Index.
== History ==
The Russian Academy of Sciences dates from 1724, with a continuous series of variously named publications dating from 1726.
Doklady Akademii Nauk SSSR-Comptes Rendus de l'Académie des Sciences de l'URSS, Seriya A first appeared in 1925 publishing original, academic research papers in the sciences, mathematics and technology in Russian or very occasionally in English, French or German.
Novaia Seriya replaced Seriya A in 1933, with articles published in Russian and immediately followed by a translation into English, French, or German.
In 1935, with v.3(8), Comptes Rendus de l'Académie des Sciences de l'URSS was split off as a separate journal that continued publishing corresponding Doklady translations until 1947, when it ceased publication. The change in volume numbering that took effect with 1935, v.3(8) effectively renumbered 1933; 1934, v.1-4; and 1935, v.1-2 as: v.1, 1933; v.2-5, 1934; and v.6-7, 1935.
Doklady Akademii Nauk SSSR continued publication until 1992, when it changed title to Doklady Akademii Nauk.
Some Comptes Rendus (Doklady)... articles also appear to have been published in European journals, and Reaxys/Beilstein and Berichte der Deutschen Chemischen Gesellschaft (Chemische Berichte), Web of Science, Chemical Abstracts (SciFinder) and "Doklady" all use different transliteration schemes.
An excellent review of the multiplicity of national/international transliteration schemes is given by Hoseh.
=== Indexing problems ===
SciFinder incorrectly assigned Doklady Akademii Nauk SSSR, Seriya A to articles published between 1933 and 1935, v.2 in Doklady Akademii Nauk SSSR and to articles published between 1935, v.3(8) to v.56(2), 1947 in Comptes Rendus de l'Académie des Sciences de l'URSS. The print Chemical Abstracts provides the correct journal title. SciFinder records for articles published in 1935, v.3(8) are inconsistently shown as either v.3 or v.8. SciFinder records also do not consistently provide both the Russian and English, French or German pagination for the 1933–1935, v.2 articles.
Web of Science adds a further complication by incorrectly omitting (Doklady) from the journal title,
while Reaxys/Beilstein incorrectly gives Chemische Berichte as the journal title for articles appearing in Berichte der Deutschen Chemischen Gesellschaft.
Records in SciFinder and INSPEC for articles published in 1933; 1934, v.1-4; and 1935, v.1-2, are shown in Web of Science as v.1, 1933 – v.7, 1935, because the Web of Science records were indexed for the Century of Science from library volumes that had been subsequently renumbered by the cooperating library.
== Translations ==
English translations of "Doklady" articles began again in 1956, when the American Institute of Physics began publishing Soviet Physics - Doklady. Other publications soon followed and within a few years most Doklady subject sections were available in English translation.
Doklady translation journals are as of 2013 published by Pleiades Publishing, Ltd and distributed online by Springer Science+Business Media. The online translation journals now only publish "selected" articles from the original Russian-language journal, which is also available online from 2006.
Determining the appropriate English translation journal for articles published in ‘Doklady’ is often problematic.
One approach is to check the Russian Doklady issue contents page to identify the subject section and then determine in which translation journal the article was published. The translation journal's contents pages will provide a concordance linking the ‘Doklady’ page numbers with the ‘translation journal’ page numbers.
Another approach is to search SciFinder for the original article, which may show the Doklady subject section, since Chemical Abstracts only indexed the original Russian language articles until 1995.
=== Translation journal titles and their Doklady subject sections ===
Several journals of translations exist, each containing articles that pertain to a particular subject.
==== Doklady Biochemistry (Biochemistry) ====
Incorporated into Doklady Biochemistry & Biophysics.
==== Doklady Biochemistry and Biophysics ====
Covers biochemistry, biophysics, and molecular biology. Incorporates Doklady Biochemistry and Doklady Biophysics.
==== Doklady. Botanical Sciences Sections ====
Its first issue, Jan/Jun 1964, through to the June 1965 issue was called Proceedings of the Academy of Sciences of the USSR. Botanical Sciences Sections and was published by Consultants Bureau Enterprises. Afterward it was published by Consultants Bureau as Doklady. Botanical Sciences Sections (ISSN 0012-4982).
Note that volumes 142-153 (1962–1963) were also called Doklady. Biological Sciences Sections.
==== Doklady Chemical Technology ====
Chemical Technology. Incorporated into Doklady Chemistry.
==== Doklady Chemistry ====
Covers chemistry and chemical technology. Incorporating Doklady Chemical Technology.
==== Doklady of the Academy of Sciences of the USSR. Earth Sciences Sections ====
This journal, also called Proceedings of the Academy of Sciences of the USSR. Geological Sciences Sections and Proceedings of the Academy of Sciences of the USSR. Geochemistry Sections, was published bimonthly as ISSN 0012-494X. Its first issue, Jan/Feb 1959, through to the July/Aug 1959 issue, was published by Consultants Bureau. The Sept/Oct 1959 to Nov/Dec 1961 issues were published by Consultants Bureau Enterprises. Scripta Technica, Inc then published it from 1962 to 1966. Finally, the American Geological Institute published it from 1967 to 1972.
==== Doklady. Earth Science Sections ====
This was published by the American Geological Institute from 1973 to 1985.
==== Transactions (Doklady) of the USSR Academy of Sciences. Earth Science Sections ====
This journal, published bimonthly as ISSN 0891-5571, was published by Scripta Technica in cooperation with the American Geological Institute and American Geophysical Union from 1986 to 1993.
==== Doklady Earth Sciences ====
Covers geochemistry, geography, geology, geophysics, hydrogeology, lithology, mineralogy, paleontology, petrography, oceanology, and soil science.
==== Soviet Mathematics ====
This was published by the American Mathematical Society as ISSN 0038-5573. It contained translations of the mathematics sections of volumes 130-220 (1960-early 1975).
==== Soviet Mathematics - Doklady ====
This was published by the American Mathematical Society as ISSN 0197-6788. It first appeared in 1979 as volume 20, and ceased in 1992 with volume 45, issue 1. The journal contained translations of the mathematics sections of volumes 244–322 of DAN SSSR (1979–end).
==== Doklady Mathematics ====
Covers mathematics.
==== Doklady Physical Chemistry ====
Covers physical chemistry.
==== Doklady Physics ====
Doklady Physics covers aerodynamics, astronomy, astrophysics, cosmology, crystallography, cybernetics and control theory, fluid mechanics, heat engineering, hydraulics, mathematical physics, mechanics, physics, power engineering, technical physics, theory of elasticity.
==== Doklady Soil Science ====
Incorporated into Doklady Earth Sciences.
== Footnotes ==
== References ==
Johns Hopkins University Libraries' catalog page
NIST Research Library's catalog page | Wikipedia/Proceedings_of_the_USSR_Academy_of_Sciences |
Geometric cryptography is an area of cryptology where messages and ciphertexts are represented by geometric quantities such as angles or intervals and where computations are performed by ruler and compass constructions. The difficulty or impossibility of solving certain geometric problems, such as trisection of an angle using ruler and compass only, is the basis for the various protocols in geometric cryptography. This field of study was suggested by Mike Burmester, Ronald L. Rivest and Adi Shamir in 1996. Though the cryptographic methods based on geometry have practically no real-life applications, they are of use as pedagogic tools for the elucidation of other more complex cryptographic protocols. Geometric cryptography may have applications in the future once current mainstream encryption methods are made obsolete by quantum computing.
== A geometric one-way function ==
Some of the geometric cryptographic methods are based on the impossibility of trisecting an angle using ruler and compass. Given an arbitrary angle, there is a straightforward ruler and compass construction for finding the triple of the given angle. But there is no ruler and compass construction for finding the angle which is exactly one-third of an arbitrary angle. Hence the function which assigns the triple of an angle to a given angle can be thought of as a one-way function, the only constructions allowed being ruler and compass constructions.
== A geometric identification protocol ==
A geometric identification protocol has been suggested based on the one-way function indicated above.
Assume that Alice wishes to establish a means of proving her identity later to Bob.
Initialization: Alice publishes a copy of an angle YA which is constructed by Alice as the triple of an angle XA she has constructed at random. Because trisecting an angle with ruler and compass is impossible, Alice is confident that she is the only one who knows XA.
Identification Protocol:
Alice gives Bob a copy of an angle R which she has constructed as the triple of an angle K that she has selected at random.
Bob flips a coin and tells Alice the result.
If Bob says "heads" Alice gives Bob a copy of the angle K and Bob checks that 3*K = R.
If Bob says "tails" Alice gives Bob a copy of the angle L = K + XA and Bob checks that 3*L = R + YA.
The four steps are repeated t times independently. Bob accepts Alice's proof of identity only if all t checks are successful.
This protocol is an interactive proof of knowledge of the angle XA (the identity of Alice) with error 2^−t. The protocol is also zero-knowledge.
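The rounds of the protocol can be simulated numerically. The sketch below models angles as floating-point degrees; this is a simulation only, since with numeric angles division by 3 is trivial — the scheme's security rests entirely on restricting computation to ruler-and-compass constructions:

```python
import random

def triple(angle):                 # the "easy" direction: angle tripling
    return (3 * angle) % 360

def same_angle(a, b, eps=1e-6):    # equality of angles modulo 360 degrees
    d = (a - b) % 360
    return d < eps or d > 360 - eps

X_A = random.uniform(0, 360)       # Alice's secret angle
Y_A = triple(X_A)                  # her published commitment Y_A = 3 * X_A

def run_round():
    K = random.uniform(0, 360)     # fresh random angle each round
    R = triple(K)                  # Alice sends R = 3 * K to Bob
    if random.random() < 0.5:      # Bob says "heads"
        return same_angle(triple(K), R)
    L = (K + X_A) % 360            # Bob says "tails"
    return same_angle(triple(L), (R + Y_A) % 360)

# An honest Alice passes every round; a cheater fails each round
# with probability 1/2, hence the 2^-t soundness error over t rounds.
honest = all(run_round() for _ in range(40))
```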
== See also ==
Post-quantum cryptography
== References == | Wikipedia/Geometric_cryptography |
In theoretical computer science and cryptography, a trapdoor function is a function that is easy to compute in one direction, yet difficult to compute in the opposite direction (finding its inverse) without special information, called the "trapdoor". Trapdoor functions are a special case of one-way functions and are widely used in public-key cryptography.
In mathematical terms, if f is a trapdoor function, then there exists some secret information t, such that given f(x) and t, it is easy to compute x. Consider a padlock and its key. It is trivial to change the padlock from open to closed without using the key, by pushing the shackle into the lock mechanism. Opening the padlock easily, however, requires the key to be used. Here the key t is the trapdoor and the padlock is the trapdoor function.
An example of a simple mathematical trapdoor is "6895601 is the product of two prime numbers. What are those numbers?" A typical "brute-force" solution would be to try dividing 6895601 by many prime numbers until finding the answer. However, if one is told that 1931 is one of the numbers, one can find the answer by entering "6895601 ÷ 1931" into any calculator. This example is not a sturdy trapdoor function – modern computers can guess all of the possible answers within a second – but this sample problem could be improved by using the product of two much larger primes.
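The gap between the two directions can be seen directly with the numbers from the example above:

```python
n = 6895601

# Without the trapdoor: brute-force trial division until a factor is found
p = next(k for k in range(2, n) if n % k == 0)

# With the trapdoor (being told that 1931 is one factor): a single division
q = n // 1931

assert (p, q) == (1931, 3571) and p * q == n
```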
Trapdoor functions came to prominence in cryptography in the mid-1970s with the publication of asymmetric (or public-key) encryption techniques by Diffie, Hellman, and Merkle. Indeed, Diffie & Hellman (1976) coined the term. Several function classes had been proposed, and it soon became obvious that trapdoor functions are harder to find than was initially thought. For example, an early suggestion was to use schemes based on the subset sum problem. This turned out rather quickly to be unsuitable.
As of 2004, the best known trapdoor function (family) candidates are the RSA and Rabin families of functions. Both are written as exponentiation modulo a composite number, and both are related to the problem of prime factorization.
Functions related to the hardness of the discrete logarithm problem (either modulo a prime or in a group defined over an elliptic curve) are not known to be trapdoor functions, because there is no known "trapdoor" information about the group that enables the efficient computation of discrete logarithms.
A trapdoor in cryptography has the very specific aforementioned meaning and is not to be confused with a backdoor (these are frequently used interchangeably, which is incorrect). A backdoor is a deliberate mechanism that is added to a cryptographic algorithm (e.g., a key pair generation algorithm, digital signing algorithm, etc.) or operating system, for example, that permits one or more unauthorized parties to bypass or subvert the security of the system in some fashion.
== Definition ==
A trapdoor function is a collection of one-way functions { f_k : D_k → R_k } (k ∈ K), in which all of K, D_k, R_k are subsets of binary strings {0, 1}*, satisfying the following conditions:
There exists a probabilistic polynomial time (PPT) sampling algorithm Gen s.t. Gen(1^n) = (k, t_k) with k ∈ K ∩ {0, 1}^n and t_k ∈ {0, 1}* satisfying |t_k| < p(n), in which p is some polynomial. Each t_k is called the trapdoor corresponding to k. Each trapdoor can be efficiently sampled.
Given input k, there also exists a PPT algorithm that outputs x ∈ D_k. That is, each D_k can be efficiently sampled.
For any k ∈ K, there exists a PPT algorithm that correctly computes f_k.
For any k ∈ K, there exists a PPT algorithm A s.t. for any x ∈ D_k, let y = A(k, f_k(x), t_k), and then we have f_k(y) = f_k(x). That is, given the trapdoor, it is easy to invert.
For any k ∈ K, without the trapdoor t_k, for any PPT algorithm, the probability to correctly invert f_k (i.e., given f_k(x), find a pre-image x′ such that f_k(x′) = f_k(x)) is negligible.
If each function in the collection above is a one-way permutation, then the collection is also called a trapdoor permutation.
== Examples ==
In the following two examples, we always assume that it is difficult to factorize a large composite number (see Integer factorization).
=== RSA assumption ===
In this example, the inverse d of e modulo φ(n) (Euler's totient function of n) is the trapdoor:
f(x) = x^e mod n.
If the factorization of n = pq is known, then φ(n) = (p − 1)(q − 1) can be computed. With this, the inverse d of e can be computed as d = e^−1 mod φ(n), and then given y = f(x), we can find x = y^d mod n = x^(ed) mod n = x mod n.
Its hardness follows from the RSA assumption.
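As a toy numeric illustration of the RSA trapdoor (tiny primes chosen for readability; real keys use primes hundreds of digits long, and the three-argument `pow` with exponent −1 requires Python 3.8+):

```python
p, q = 61, 53                 # the secret factorization of n
n = p * q                     # public modulus
e = 17                        # public exponent, coprime to phi(n)

phi = (p - 1) * (q - 1)       # computable only given the factorization
d = pow(e, -1, phi)           # the trapdoor: d = e^-1 mod phi(n)

x = 42
y = pow(x, e, n)              # easy direction: y = f(x) = x^e mod n
assert pow(y, d, n) == x      # with the trapdoor, inversion is immediate
```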
=== Rabin's quadratic residue assumption ===
Let n be a large composite number such that n = pq, where p and q are large primes such that p ≡ 3 (mod 4) and q ≡ 3 (mod 4), and kept confidential to the adversary. The problem is to compute z given a such that a ≡ z^2 (mod n). The trapdoor is the factorization of n. With the trapdoor, the solutions of z can be given as cx + dy, cx − dy, −cx + dy, −cx − dy, where a ≡ x^2 (mod p), a ≡ y^2 (mod q), c ≡ 1 (mod p), c ≡ 0 (mod q), d ≡ 0 (mod p), d ≡ 1 (mod q). See Chinese remainder theorem for more details. Note that given the primes p and q, we can find x ≡ a^((p+1)/4) (mod p) and y ≡ a^((q+1)/4) (mod q). Here the conditions p ≡ 3 (mod 4) and q ≡ 3 (mod 4) guarantee that the solutions x and y are well defined.
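A toy numeric walk-through of Rabin inversion with the trapdoor (tiny primes for readability; c and d below are the Chinese-remainder coefficients just defined):

```python
p, q = 7, 11                      # trapdoor primes with p ≡ q ≡ 3 (mod 4)
n = p * q

z = 17
a = z * z % n                     # easy direction: squaring modulo n

# Square roots modulo each prime via the (p+1)/4 exponent trick
x = pow(a, (p + 1) // 4, p)
y = pow(a, (q + 1) // 4, q)

# CRT coefficients: c ≡ 1 (mod p), 0 (mod q); d ≡ 0 (mod p), 1 (mod q)
c = q * pow(q, -1, p)
d = p * pow(p, -1, q)

# All four square roots of a modulo n
roots = {(s * c * x + t * d * y) % n for s in (1, -1) for t in (1, -1)}
assert all(r * r % n == a for r in roots)
assert z in roots                 # the original z is recovered
```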
== See also ==
One-way function
== Notes ==
== References ==
Diffie, W.; Hellman, M. (1976), "New directions in cryptography" (PDF), IEEE Transactions on Information Theory, 22 (6): 644–654, CiteSeerX 10.1.1.37.9720, doi:10.1109/TIT.1976.1055638
Pass, Rafael, A Course in Cryptography (PDF), retrieved 27 November 2015
Goldwasser, Shafi, Lecture Notes on Cryptography (PDF), retrieved 25 November 2015
Ostrovsky, Rafail, Foundations of Cryptography (PDF), retrieved 27 November 2015
Dodis, Yevgeniy, Introduction to Cryptography Lecture Notes (Fall 2008), retrieved 17 December 2015
Lindell, Yehuda, Foundations of Cryptography (PDF), retrieved 17 December 2015 | Wikipedia/Trapdoor_one-way_function |
The Cryptographic Modernization Program is a Department of Defense directed, NSA Information Assurance Directorate led effort to transform and modernize Information Assurance capabilities for the 21st century. It has three phases:
Replacement- All at risk devices to be replaced.
Modernization- Integrate modular (programmable/ embedded) crypto solutions.
Transformation- Be compliant to GIG/ NetCentrics requirements.
The CM is a joint initiative to upgrade the DoD crypto inventory. Of the 1.3 million cryptographic devices in the U.S. inventory, 73 percent will be replaced over the next 10 to 15 years by ongoing and planned C4ISR systems programs, Information Technology modernization initiatives and advanced weapons platforms.
All command and control, communications, computer, intelligence, surveillance, reconnaissance, information technology and weapons systems that rely upon cryptography for the provision of assured confidentiality, integrity, and authentication services will become a part of this long-term undertaking. The Cryptographic Modernization program is a tightly integrated partnership between the NSA, the military departments, operational commands, defense agencies, the Joint Staff, federal government entities and industry.
The program is a multibillion-dollar, multi-year undertaking that will transform cryptographic security capabilities for national security systems at all echelons and points of use. It will exploit new and emerging technologies, provide advanced enabling infrastructure capabilities, and at the same time, modernize legacy devices that are now operationally employed.
The program also directly supports the DoD vision of the Global Information Grid. The security configuration features enable new cryptosystems to provide secure information delivery anywhere on the global grid while using the grid itself for security configuration and provisioning—seamless integration.
== Technology ==
=== Cryptography ===
Most modernized devices will include both Suite A (US only) and Suite B support. This allows for protection of sensitive government data as well as interoperability with coalition partners, such as NATO. The program includes the DOD's Key Management Initiative which is designed to replace cumbersome special purpose channels for distribution of cryptographic keys with a network-based approach by 2015.
=== Interoperability ===
The NSA has also led the effort to create standards for devices to prevent vendor lock in.
High Assurance Internet Protocol Encryptor (HAIPE)
Link Encryptor Family (LEF)
Secure Communications Interoperability Protocol (SCIP)
=== Devices ===
The modernized devices that are being built usually include the ability to add to or replace the current algorithms as firmware updates as newer ones become available.
== References == | Wikipedia/Cryptographic_Modernization_Program |
NSA Suite A Cryptography is NSA cryptography which "contains classified algorithms that will not be released." "Suite A will be used for the protection of some categories of especially sensitive information (a small percentage of the overall national security-related information assurance market)."
Incomplete list of Suite A algorithms:
ACCORDION
BATON
CDL 1
CDL 2
FFC
FIREFLY
JOSEKI
KEESEE
MAYFLY
MEDLEY
MERCATOR
SAVILLE
SHILLELAGH
WALBURN
WEASEL
A recently discovered, Internet-available procurement specification document for the military's new key load device, the NGLD-M, reveals additional, more current Suite A algorithm names and their uses (page 48, section 3.2.7.1 Algorithms):
ACCORDION 1.3 & 3.0 - TrKEK Encrypt/Decrypt and Internal Key Wrap, respectively.
SPONDULIX-S - KMI Key Agreement
WATARI - Secure Software Confidentiality
KM-TG Series - Security Software Signature
SILVER LINING - Security Software Signature
== See also ==
Commercial National Security Algorithm Suite
NSA Suite B Cryptography
== References ==
General
NSA Suite B Cryptography / Cryptographic Interoperability | Wikipedia/NSA_Suite_A_Cryptography |
In computer science and cryptography, Whirlpool (sometimes styled WHIRLPOOL) is a cryptographic hash function. It was designed by Vincent Rijmen (co-creator of the Advanced Encryption Standard) and Paulo S. L. M. Barreto, who first described it in 2000.
The hash has been recommended by the NESSIE project. It has also been adopted by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as part of the joint ISO/IEC 10118-3 international standard.
== Design features ==
Whirlpool is a hash designed after the Square block cipher, and is considered to be in that family of block cipher functions.
Whirlpool is a Miyaguchi-Preneel construction based on a substantially modified Advanced Encryption Standard (AES).
Whirlpool takes a message of any length less than 2^256 bits and returns a 512-bit message digest.
The authors have declared that
"WHIRLPOOL is not (and will never be) patented. It may be used free of charge for any purpose."
=== Version changes ===
The original Whirlpool will be called Whirlpool-0, the first revision of Whirlpool will be called Whirlpool-T and the latest version will be called Whirlpool in the following test vectors.
In the first revision in 2001, the S-box was changed from a randomly generated one with good cryptographic properties to one which has better cryptographic properties and is easier to implement in hardware.
In the second revision (2003), a flaw in the diffusion matrix was found that lowered the estimated security of the algorithm below its potential. Changing the 8×8 rotating matrix constants from (1, 1, 3, 1, 5, 8, 9, 5) to (1, 1, 4, 1, 8, 5, 2, 9) solved this issue.
== Internal structure ==
The Whirlpool hash function is a Merkle–Damgård construction based on an AES-like block cipher W in Miyaguchi–Preneel mode.
The block cipher W consists of an 8×8 state matrix S of bytes, for a total of 512 bits.
The encryption process consists of updating the state with four round functions over 10 rounds. The four round functions are SubBytes (SB), ShiftColumns (SC), MixRows (MR) and AddRoundKey (AK). During each round the new state is computed as S = AK ∘ MR ∘ SC ∘ SB(S).
=== SubBytes ===
The SubBytes operation applies a non-linear permutation (the S-box) to each byte of the state independently. The 8-bit S-box is composed of 3 smaller 4-bit S-boxes.
=== ShiftColumns ===
The ShiftColumns operation cyclically shifts each byte in each column of the state. Column j has its bytes shifted downwards by j positions.
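As an illustration, the index permutation performed by ShiftColumns can be sketched as follows (the state is modeled as a nested list of rows; illustrative only, not taken from a reference implementation):

```python
def shift_columns(state):
    """Rotate column j of an 8x8 state downward by j positions."""
    out = [[0] * 8 for _ in range(8)]
    for i in range(8):
        for j in range(8):
            out[(i + j) % 8][j] = state[i][j]
    return out

# Example: column 0 is unchanged, column 1 moves down by one row, etc.
state = [[8 * i + j for j in range(8)] for i in range(8)]
shifted = shift_columns(state)
```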
=== MixRows ===
The MixRows operation is a right-multiplication of each row by an 8×8 matrix over GF(2^8). The matrix is chosen such that the branch number (an important property when looking at resistance to differential cryptanalysis) is 9, which is maximal.
=== AddRoundKey ===
The AddRoundKey operation uses bitwise xor to add a key calculated by the key schedule to the current state. The key schedule is identical to the encryption itself, except the AddRoundKey function is replaced by an AddRoundConstant function that adds a predetermined constant in each round.
== Whirlpool hashes ==
The Whirlpool algorithm has undergone two revisions since its original 2000 specification.
People incorporating Whirlpool will most likely use the most recent revision of Whirlpool; while there are no known security weaknesses in earlier versions of Whirlpool, the most recent revision has better hardware implementation efficiency characteristics, and is also likely to be more secure. As mentioned earlier, it is also the version adopted in the ISO/IEC 10118-3 international standard.
The 512-bit (64-byte) Whirlpool hashes (also termed message digests) are typically represented as 128-digit hexadecimal numbers.
The following demonstrates a 43-byte ASCII input (not including quotes) and the corresponding Whirlpool hashes:
== Implementations ==
The authors provide reference implementations of the Whirlpool algorithm, including a version written in C and a version written in Java. These reference implementations have been released into the public domain.
Research on the security analysis of the Whirlpool function, however, has revealed that on average the introduction of 8 random faults is sufficient to compromise the 512-bit Whirlpool hash message being processed and the secret key of HMAC-Whirlpool within the context of Cloud of Things (CoTs). This emphasizes the need for increased security measures in its implementation.
== Adoption ==
Two of the first widely used mainstream cryptographic programs that started using Whirlpool were FreeOTFE, followed by TrueCrypt in 2005.
VeraCrypt (a fork of TrueCrypt) included Whirlpool (the final version) as one of its supported hash algorithms.
== See also ==
Digital timestamping
== References ==
== External links ==
The WHIRLPOOL Hash Function at the Wayback Machine (archived 2017-11-29)
Jacksum on SourceForge, a Java implementation of all three revisions of Whirlpool
whirlpool on GitHub – An open source Go implementation of the latest revision of Whirlpool
A Matlab Implementation of the Whirlpool Hashing Function
RHash, an open source command-line tool, which can calculate and verify Whirlpool hash.
Perl Whirlpool module at CPAN
Digest module implementing the Whirlpool hashing algorithm in Ruby
Ironclad a Common Lisp cryptography package containing a Whirlpool implementation
The ISO/IEC 10118-3:2004 standard
Test vectors for the Whirlpool hash from the NESSIE project
Managed C# implementation
Python Whirlpool module | Wikipedia/Whirlpool_(cryptography) |
The Java Cryptography Extension (JCE) is an officially released Standard Extension to the Java Platform and part of Java Cryptography Architecture (JCA). JCE provides a framework and implementation for encryption, key generation and key agreement, and Message Authentication Code (MAC) algorithms. JCE supplements the Java platform, which already includes interfaces and implementations of message digests and digital signatures. Installation is specific to the version of the Java Platform being used, with downloads available for Java 6, Java 7, and Java 8. "The unlimited policy files are required only for JDK 8, 7, and 6 updates earlier than 8u161, 7u171, and 6u181. On those versions and later, the stronger cryptographic algorithms are available by default."
== References ==
== External links ==
Java Cryptography Architecture (JCA) Reference Guide
JDK-8170157 : Enable unlimited cryptographic policy by default in Oracle JDK builds | Wikipedia/Java_Cryptography_Extension |
In computing, the Java Cryptography Architecture (JCA) is a framework for working with cryptography using the Java programming language. It forms part of the Java security API, and was first introduced in JDK 1.1 in the java.security package.
The JCA uses a "provider"-based architecture and contains a set of APIs for various purposes, such as encryption, key generation and management, secure random-number generation, certificate validation, etc. These APIs provide an easy way for developers to integrate security into application code.
== See also ==
Java Cryptography Extension
Bouncy Castle (cryptography)
== External links ==
Official JCA guides: JavaSE6, JavaSE7, JavaSE8, JavaSE9, JavaSE10, JavaSE11 | Wikipedia/Java_Cryptography_Architecture |
A cryptographic protocol is an abstract or concrete protocol that performs a security-related function and applies cryptographic methods, often as sequences of cryptographic primitives. A protocol describes how the algorithms should be used and includes details about data structures and representations, at which point it can be used to implement multiple, interoperable versions of a program.
Cryptographic protocols are widely used for secure application-level data transport. A cryptographic protocol usually incorporates at least some of these aspects:
Key agreement or establishment
Entity authentication
Symmetric encryption and message authentication material construction
Secured application-level data transport
Non-repudiation methods
Secret sharing methods
Secure multi-party computation
For example, Transport Layer Security (TLS) is a cryptographic protocol that is used to secure web (HTTPS) connections. It has an entity authentication mechanism, based on the X.509 system; a key setup phase, where a symmetric encryption key is formed by employing public-key cryptography; and an application-level data transport function. These three aspects have important interconnections. Standard TLS does not have non-repudiation support.
There are other types of cryptographic protocols as well, and even the term itself has various readings; cryptographic application protocols often use one or more underlying key agreement methods, which are also sometimes themselves referred to as "cryptographic protocols". For instance, TLS employs what is known as the Diffie–Hellman key exchange; although this is only one component of TLS, Diffie–Hellman may be seen as a complete cryptographic protocol in itself for other applications.
== Advanced cryptographic protocols ==
A wide variety of cryptographic protocols go beyond the traditional goals of data confidentiality, integrity, and authentication to also secure a variety of other desired characteristics of computer-mediated collaboration. Blind signatures can be used for digital cash and digital credentials to prove that a person holds an attribute or right without revealing that person's identity or the identities of parties that person transacted with. Secure digital timestamping can be used to prove that data (even if confidential) existed at a certain time. Secure multiparty computation can be used to compute answers (such as determining the highest bid in an auction) based on confidential data (such as private bids), so that when the protocol is complete the participants know only their own input and the answer. End-to-end auditable voting systems provide sets of desirable privacy and auditability properties for conducting e-voting. Undeniable signatures include interactive protocols that allow the signer to prove a forgery and limit who can verify the signature. Deniable encryption augments standard encryption by making it impossible for an attacker to mathematically prove the existence of a plain text message. Digital mixes create hard-to-trace communications.
== Formal verification ==
Cryptographic protocols can sometimes be verified formally on an abstract level. When this is done, it is necessary to formalize the environment in which the protocol operates in order to identify threats. This is frequently done through the Dolev–Yao model.
Logics, concepts and calculi used for formal reasoning of security protocols:
Burrows–Abadi–Needham logic (BAN logic)
Dolev–Yao model
π-calculus
Protocol composition logic (PCL)
Strand space
Research projects and tools used for formal verification of security protocols:
Automated Validation of Internet Security Protocols and Applications (AVISPA) and follow-up project AVANTSSAR.
Constraint Logic-based Attack Searcher (CL-AtSe)
Open-Source Fixed-Point Model-Checker (OFMC)
SAT-based Model-Checker (SATMC)
Casper
CryptoVerif
Cryptographic Protocol Shapes Analyzer (CPSA)
Knowledge In Security protocolS (KISS)
Maude-NRL Protocol Analyzer (Maude-NPA)
ProVerif
Scyther
Tamarin Prover
Squirrel
=== Notion of abstract protocol ===
To formally verify a protocol it is often abstracted and modelled using Alice & Bob notation. A simple example is the following:
A → B : {X}_{K_{A,B}}
This states that Alice (A) intends a message for Bob (B) consisting of a message X encrypted under the shared key K_{A,B}.
== Examples ==
Internet Key Exchange
IPsec
Kerberos
Off-the-Record Messaging
Point to Point Protocol
Secure Shell (SSH)
Signal Protocol
Transport Layer Security
ZRTP
== See also ==
List of cryptosystems
Secure channel
Security Protocols Open Repository
Comparison of cryptography libraries
Quantum cryptographic protocol
== References ==
== Further reading ==
Ermoshina, Ksenia; Musiani, Francesca; Halpin, Harry (September 2016). "End-to-End Encrypted Messaging Protocols: An Overview" (PDF). In Bagnoli, Franco; et al. (eds.). Internet Science. INSCI 2016. Florence, Italy: Springer. pp. 244–254. doi:10.1007/978-3-319-45982-0_22. ISBN 978-3-319-45982-0. | Wikipedia/Cryptographic_protocols |
Information theory is the mathematical study of the quantification, storage, and communication of information. The field was established and formalized by Claude Shannon in the 1940s, though early contributions were made in the 1920s through the works of Harry Nyquist and Ralph Hartley. It is at the intersection of electronic engineering, mathematics, statistics, computer science, neurobiology, physics, and electrical engineering.
A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome from a roll of a die (which has six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory and information-theoretic security.
Applications of fundamental topics of information theory include source coding/data compression (e.g. for ZIP files), and channel coding/error detection and correction (e.g. for DSL). Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones and the development of the Internet and artificial intelligence. The theory has also found applications in other areas, including statistical inference, cryptography, neurobiology, perception, signal processing, linguistics, the evolution and function of molecular codes (bioinformatics), thermal physics, molecular dynamics, black holes, quantum computing, information retrieval, intelligence gathering, plagiarism detection, pattern recognition, anomaly detection, the analysis of music, art creation, imaging system design, study of outer space, the dimensionality of space, and epistemology.
== Overview ==
Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled A Mathematical Theory of Communication, in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel, and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent.
Coding theory is concerned with finding explicit methods, called codes, for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible.
A third class of information theory codes are cryptographic algorithms (both codes and ciphers). Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis, such as the unit ban.
== Historical background ==
The landmark event establishing the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948. Historian James Gleick rated the paper as the most important development of 1948, noting that the paper was "even more profound and more fundamental" than the transistor. He came to be known as the "father of information theory". Shannon outlined some of his initial ideas of information theory as early as 1939 in a letter to Vannevar Bush.
Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation W = K log m (recalling the Boltzmann constant), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as H = log S^n = n log S, where S was the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit, which has since sometimes been called the hartley in his honor as a unit, scale, or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers.
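Hartley's quantification H = n log S can be illustrated numerically; the following is a minimal sketch (the alphabet size and message length are arbitrary choices for illustration):

```python
import math

# Hartley's measure: a transmission of n symbols drawn from an alphabet of
# S possible symbols carries H = log S^n = n log S units of information.
S = 26   # alphabet size (e.g. letters)
n = 5    # symbols per transmission

H_hartleys = n * math.log10(S)  # base 10 -> decimal digits (hartleys)
H_bits = n * math.log2(S)       # base 2  -> bits
```

The same quantity in different units differs only by a constant factor, log2(10) bits per hartley.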
Much of the mathematics behind information theory with events of different probabilities were developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory.
In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion:
"The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."
With it came the ideas of:
the information entropy and redundancy of a source, and its relevance through the source coding theorem;
the mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem;
the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel; as well as
the bit—a new way of seeing the most fundamental unit of information.
== Quantities of information ==
Information theory is based on probability theory and statistics, where quantified information is usually described in terms of bits. Information theory often concerns itself with measures of information of the distributions associated with random variables. One of the most important measures is called entropy, which forms the building block of many other measures. Entropy allows quantification of measure of information in a single random variable. Another useful concept is mutual information defined on two random variables, which describes the measure of information in common between those variables, which can be used to describe their correlation. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution.
The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit or shannon, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the decimal digit, which is based on the common logarithm.
In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0. This is justified because
lim_{p→0+} p log p = 0 for any logarithmic base.
=== Entropy of an information source ===
Based on the probability mass function of each source symbol to be communicated, the Shannon entropy H, in units of bits (per symbol), is given by
H = −∑_i p_i log₂(p_i)
where pi is the probability of occurrence of the i-th possible value of the source symbol. This equation gives the entropy in units of "bits" (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base e, where e is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base 2^8 = 256 will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol.
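The unit conventions can be checked with a short sketch (the distributions are the coin and die examples used for illustration above):

```python
import math

def entropy(probs, base=2):
    # Shannon entropy H = -sum_i p_i * log(p_i); the log base sets the unit.
    return -sum(p * math.log(p, base) for p in probs if p > 0)

fair_coin = [0.5, 0.5]
fair_die = [1/6] * 6

h_coin_bits = entropy(fair_coin)           # 1 bit (shannon) per symbol
h_die_bits = entropy(fair_die)             # log2(6) ~ 2.585 bits per symbol
h_coin_nats = entropy(fair_coin, math.e)   # ln(2) ~ 0.693 nats per symbol
```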
Intuitively, the entropy H(X) of a discrete random variable X is a measure of the amount of uncertainty associated with the value of X when only its distribution is known.
The entropy of a source that emits a sequence of N symbols that are independent and identically distributed (iid) is N ⋅ H bits (per message of N symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length N will be less than N ⋅ H.
If one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted. Between these two extremes, information can be quantified as follows. If
𝕏 is the set of all messages {x1, ..., xn} that X could be, and p(x) is the probability of some x ∈ 𝕏, then the entropy, H, of X is defined:
H(X) = E_X[I(x)] = −∑_{x∈𝕏} p(x) log p(x).
(Here, I(x) is the self-information, which is the entropy contribution of an individual message, and E_X is the expected value.) A property of entropy is that it is maximized when all the messages in the message space are equiprobable, p(x) = 1/n; i.e., most unpredictable, in which case H(X) = log n.
The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit:
H_b(p) = −p log₂ p − (1 − p) log₂(1 − p).
=== Joint entropy ===
The joint entropy of two discrete random variables X and Y is merely the entropy of their pairing: (X, Y). This implies that if X and Y are independent, then their joint entropy is the sum of their individual entropies.
For example, if (X, Y) represents the position of a chess piece—X the row and Y the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece.
H(X,Y) = E_{X,Y}[−log p(x,y)] = −∑_{x,y} p(x,y) log p(x,y)
Despite similar notation, joint entropy should not be confused with cross-entropy.
=== Conditional entropy (equivocation) ===
The conditional entropy or conditional uncertainty of X given random variable Y (also called the equivocation of X about Y) is the average conditional entropy over Y:
H(X|Y) = E_Y[H(X|y)] = −∑_{y∈Y} p(y) ∑_{x∈X} p(x|y) log p(x|y) = −∑_{x,y} p(x,y) log p(x|y).
Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that:
H(X|Y) = H(X,Y) − H(Y).
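This chain-rule identity can be verified numerically on a toy joint distribution (the probability values below are arbitrary illustrations):

```python
import math
from collections import defaultdict

def H(probs):
    # Entropy in bits.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A toy joint distribution p(x, y).
pxy = {('a', 0): 0.25, ('a', 1): 0.25, ('b', 0): 0.40, ('b', 1): 0.10}

# Marginal p(y).
py = defaultdict(float)
for (x, y), p in pxy.items():
    py[y] += p

# H(X|Y) = -sum_{x,y} p(x,y) log2 p(x|y), with p(x|y) = p(x,y) / p(y).
h_x_given_y = -sum(p * math.log2(p / py[y]) for (x, y), p in pxy.items())

# Chain rule: H(X|Y) = H(X,Y) - H(Y).
h_chain = H(pxy.values()) - H(py.values())
```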
=== Mutual information (transinformation) ===
Mutual information measures the amount of information that can be obtained about one random variable by observing another. It is important in communication where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of X relative to Y is given by:
I(X;Y) = E_{X,Y}[SI(x,y)] = ∑_{x,y} p(x,y) log [p(x,y) / (p(x) p(y))]
where SI (Specific mutual Information) is the pointwise mutual information.
A basic property of the mutual information is that
I(X;Y) = H(X) − H(X|Y).
That is, knowing Y, we can save an average of I(X; Y) bits in encoding X compared to not knowing Y.
Mutual information is symmetric:
I(X;Y) = I(Y;X) = H(X) + H(Y) − H(X,Y).
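Both the definition of mutual information and this entropy identity can be checked on a small joint distribution (values chosen arbitrarily for illustration):

```python
import math
from collections import defaultdict

def H(probs):
    # Entropy in bits.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A toy joint distribution p(x, y) and its marginals.
pxy = {('a', 0): 0.25, ('a', 1): 0.25, ('b', 0): 0.40, ('b', 1): 0.10}
px, py = defaultdict(float), defaultdict(float)
for (x, y), p in pxy.items():
    px[x] += p
    py[y] += p

# Definition: I(X;Y) = sum_{x,y} p(x,y) log2 [p(x,y) / (p(x) p(y))].
mi = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

# Identity: I(X;Y) = H(X) + H(Y) - H(X,Y); symmetric in X and Y.
identity = H(px.values()) + H(py.values()) - H(pxy.values())
```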
Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X:
I(X;Y) = E_{p(y)}[D_KL(p(X|Y=y) ‖ p(X))].
In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution:
I(X;Y) = D_KL(p(X,Y) ‖ p(X) p(Y)).
Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ2 test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution.
=== Kullback–Leibler divergence (information gain) ===
The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution
p(X), and an arbitrary probability distribution q(X). If we compress data in a manner that assumes q(X) is the distribution underlying some data, when, in reality, p(X) is the correct distribution, the Kullback–Leibler divergence is the average number of additional bits per datum necessary for compression. It is thus defined:
D_KL(p(X) ‖ q(X)) = ∑_{x∈X} −p(x) log q(x) − ∑_{x∈X} −p(x) log p(x) = ∑_{x∈X} p(x) log [p(x)/q(x)].
Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric).
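A short sketch makes the asymmetry concrete (the two distributions are arbitrary illustrations):

```python
import math

def kl(p, q):
    # D_KL(p || q) in bits; assumes q(x) > 0 wherever p(x) > 0.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]   # the "true" distribution
q = [0.9, 0.1]   # a mistaken model

forward = kl(p, q)   # extra bits per symbol paid for compressing with q
reverse = kl(q, p)   # generally different: KL is not symmetric
```

Note that D_KL(p ‖ p) = 0, matching the intuition that a correct model incurs no compression penalty.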
Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution
p(x). If Alice knows the true distribution p(x), while Bob believes (has a prior) that the distribution is q(x), then Bob will be more surprised than Alice, on average, upon seeing the value of X. The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the log is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him.
=== Directed Information ===
Directed information, I(X^n → Y^n), is an information theory measure that quantifies the information flow from the random process X^n = {X_1, X_2, ..., X_n} to the random process Y^n = {Y_1, Y_2, ..., Y_n}. The term directed information was coined by James Massey and is defined as:
I(X^n → Y^n) ≜ ∑_{i=1}^{n} I(X^i; Y_i | Y^{i−1}),
where
I(X^i; Y_i | Y^{i−1}) is the conditional mutual information I(X_1, X_2, ..., X_i; Y_i | Y_1, Y_2, ..., Y_{i−1}).
In contrast to mutual information, directed information is not symmetric. The
I(X^n → Y^n) measures the information bits that are transmitted causally from X^n to Y^n. Directed information has many applications in problems where causality plays an important role, such as the capacity of channels with feedback, the capacity of discrete memoryless networks with feedback, gambling with causal side information, compression with causal side information,
real-time control communication settings, and in statistical physics.
=== Other quantities ===
Other important information theoretic quantities include the Rényi entropy and the Tsallis entropy (generalizations of the concept of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information. Also, pragmatic information has been proposed as a measure of how much information has been used in making a decision.
== Coding theory ==
Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source.
Data compression (source coding): There are two formulations for the compression problem:
lossless data compression: the data must be reconstructed exactly;
lossy data compression: allocates bits needed to reconstruct the data, within a specified fidelity level measured by a distortion function. This subset of information theory is called rate–distortion theory.
Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel.
This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal.
=== Source theory ===
Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.
==== Rate ====
Information rate is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is:
r = lim_{n→∞} H(X_n | X_{n−1}, X_{n−2}, X_{n−3}, ...);
that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is:
r = lim_{n→∞} (1/n) H(X_1, X_2, ..., X_n);
that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result.
The information rate is defined as:
r = lim_{n→∞} (1/n) I(X_1, X_2, ..., X_n; Y_1, Y_2, ..., Y_n);
It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding.
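For a stationary Markov source, the conditional-entropy limit reduces to the entropy of the next symbol given the current one. A sketch with an arbitrarily chosen two-state chain:

```python
import math

def H(probs):
    # Entropy in bits.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Transition matrix P[i][j] = Pr(next = j | current = i) for a two-state chain.
P = [[0.9, 0.1],
     [0.4, 0.6]]

# Stationary distribution pi (solves pi = pi P); closed form for two states.
pi0 = P[1][0] / (P[0][1] + P[1][0])
pi = [pi0, 1 - pi0]

# Entropy rate r = sum_i pi_i * H(row i): the conditional entropy of the
# next symbol given the current symbol.
rate = sum(p_i * H(row) for p_i, row in zip(pi, P))

# The source's memory lowers the rate below the entropy of the marginal.
iid_entropy = H(pi)
```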
=== Channel capacity ===
Communications over a channel is the primary motivation of information theory. However, channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality.
Consider the communications process over a discrete channel. A simple model of the process is shown below:
Message W → [Encoder f_n] → Encoded sequence X^n → [Channel p(y|x)] → Received sequence Y^n → [Decoder g_n] → Estimated message Ŵ
Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let p(y|x) be the conditional probability distribution function of Y given X. We will consider p(y|x) to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of f(x), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by:
C = max_f I(X;Y).
This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error.
Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity.
==== Capacity of particular channel models ====
A continuous-time analog communications channel subject to Gaussian noise—see Shannon–Hartley theorem.
A binary symmetric channel (BSC) with crossover probability p is a binary input, binary output channel that flips the input bit with probability p. The BSC has a capacity of 1 − Hb(p) bits per channel use, where Hb is the binary entropy function to the base-2 logarithm.
A binary erasure channel (BEC) with erasure probability p is a binary input, ternary output channel. The possible channel outputs are 0, 1, and a third symbol 'e' called an erasure. The erasure represents complete loss of information about an input bit. The capacity of the BEC is 1 − p bits per channel use.
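These two closed-form capacities are easy to sketch:

```python
import math

def Hb(p):
    # Binary entropy function, in bits.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    # Binary symmetric channel: C = 1 - Hb(p) bits per channel use.
    return 1 - Hb(p)

def bec_capacity(p):
    # Binary erasure channel: C = 1 - p bits per channel use.
    return 1 - p
```

A noiseless BSC (p = 0) has full capacity, while a BSC with p = 0.5 carries no information at all, since its output is independent of its input.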
==== Channels with memory and directed information ====
In practice many channels have memory. Namely, at time
i the channel is given by the conditional probability P(y_i | x_i, x_{i−1}, x_{i−2}, ..., x_1, y_{i−1}, y_{i−2}, ..., y_1).
It is often more convenient to use the notation x^i = (x_i, x_{i−1}, x_{i−2}, ..., x_1), in which case the channel becomes P(y_i | x^i, y^{i−1}).
In such a case the capacity is given by the mutual information rate when there is no feedback available, and by the directed information rate whether or not feedback is available (if there is no feedback, the directed information equals the mutual information).
=== Fungible information ===
Fungible information is information for which the means of encoding is not important. Classical information theorists and computer scientists are mainly concerned with information of this sort. It is sometimes referred to as speakable information.
== Applications to other fields ==
=== Intelligence uses and secrecy applications ===
Information theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability.
Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods comes from the assumption that no known attack can break them in a practical amount of time.
Information theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material.
=== Pseudorandom number generation ===
Pseudorandom number generators are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor, and so not for cryptographic uses.
=== Seismic exploration ===
One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods.
=== Semiotics ===
Semioticians Doede Nauta and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics. Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing."
Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and Ferruccio Rossi-Landi to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones.
=== Integrated process organization of neural information ===
Quantitative information theoretic methods have been applied in cognitive science to analyze the integrated process organization of neural information in the context of the binding problem in cognitive neuroscience. In this context, either an information-theoretical measure, such as functional clusters (Gerald Edelman and Giulio Tononi's functional clustering model and dynamic core hypothesis (DCH)) or effective information (Tononi's integrated information theory (IIT) of consciousness), is defined (on the basis of a reentrant process organization, i.e. the synchronization of neurophysiological activity between groups of neuronal populations), or the measure of the minimization of free energy on the basis of statistical methods (Karl J. Friston's free energy principle (FEP), an information-theoretical measure which states that every adaptive change in a self-organized system leads to a minimization of free energy, and the Bayesian brain hypothesis).
=== Miscellaneous applications ===
Information theory also has applications in the search for extraterrestrial intelligence, black holes, bioinformatics, and gambling.
== See also ==
=== Applications ===
=== History ===
Hartley, R.V.L.
History of information theory
Shannon, C.E.
Timeline of information theory
Yockey, H.P.
Andrey Kolmogorov
=== Theory ===
=== Concepts ===
== References ==
== Further reading ==
=== The classic work ===
=== Other journal articles ===
=== Textbooks on information theory ===
=== Other books ===
== External links ==
"Information", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Lambert F. L. (1999), "Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms - Examples of Entropy Increase? Nonsense!", Journal of Chemical Education
IEEE Information Theory Society and ITSOC Monographs, Surveys, and Reviews Archived 2018-06-12 at the Wayback Machine
Export control is legislation that regulates the export of goods, software and technology. Some items could potentially be useful for purposes that are contrary to the interest of the exporting country; these items are considered controlled, and their export is regulated to restrict harmful use. Many governments implement export controls. Typically, legislation lists and classifies the controlled items, classifies the destinations, and requires exporters to apply to a local government department for a licence.
== History ==
The United States has had export controls since the American Revolution, although the modern export control regimes can be traced back to the Trading with the Enemy Act of 1917. A significant piece of legislation was the Export Control Act of 1940 which inter alia aimed to restrict shipments of material to pre-war Japan. In the United Kingdom, the Import, Export and Customs Power (Defence) Act of 1939 was the main legislation prior to World War II.
Post WWII, the Coordinating Committee for Multilateral Export Controls (CoCom) was founded in 1948, and continued until 1994. It was an early Multilateral export control regime.
== Principles ==
In most export control regimes, legislation lists the items which are deemed 'controlled', and lists the destinations to which exports are restricted in some way. The lists of what is controlled often arise from some harmonised regime.
=== Classification ===
Goods may be classified using various classification systems. The United States uses the Export Control Classification Number (ECCN), India uses the Special Chemicals, Organisms, Materials, Equipment and Technologies (SCOMET) list, and Japan uses the Ministry of Economy, Trade and Industry (METI) lists.
Some items may be categorised as "designed or modified for military use", some as dual use, and some will not be export controlled. Dual use means that the device has both a civilian and a military purpose.
In several jurisdictions, classifications distinguish between goods, equipment, materials, software and technology; the last two being often considered intangible. Classifications may also be by destination purpose, including cryptography, laser, sonar and torture equipment.
=== Destination ===
An exporting country will consider the impact of its export control policy on its relationships with other countries. Sometimes countries will have trade agreements or arrangements with a group of other countries, which may specify that licences are not required for certain goods. For example, within the EU, licences are not required for shipping civilian goods to other member states; however, licences are required for restricted, military goods.
The exporting country's legislation will demand certain handling for goods in different classifications to destination countries. This could include:
The destination is barred from receiving the goods. This could be because of economic sanctions, because the export would be against the interests of the country, or for other reasons, for example human rights concerns. Exports of that class of goods to that destination are not permitted.
Shipping the goods may require licences from the government. The government has legislation that requires certain items to be scrutinised, perhaps because of the item type, perhaps because of the destination, so there is opportunity for decision making based on policies. While some policies are published, the fine-grained detail may be subject to national security restrictions and based on intelligence. A goal of the legislation is to push exports that are close to the threshold through scrutiny, so a decision can be made by the exporter's government based on unpublished or partially-published criteria. The outcome of a licence application could be:
No licence required - the combination of exported item and destination is not controlled, although some jurisdictions may still request scrutiny for particular items or for particular destinations
Licence granted - the government has concluded that the shipment will not significantly harm national interests, partner nations' interests, human rights, or other criteria
Licence denied - the government concludes that this shipment is contrary to national interests etc.
The export of the items is without concern, and that class of goods can be shipped without impediment from export control legislation.
The end user of the goods or some broker will typically be declared, and similar restrictions apply as to countries.
Some individuals or entities may be listed, so that even if the item could normally be exported to the country without a licence, additional restrictions apply for that individual or entity.
=== Licence ===
For any given item being exported, the categorisations will typically lead to different treatments for a given destination, e.g. 'No Licence Required' (NLR) or 'Licence Required'.
If a licence is required for the item, to the destination, the licence issuer will require information as part of the licence application, typically including:
name, address, business number (e.g. EORI) of the exporter
technical details of the exported item, possibly including product documentation, part numbers and likely uses
value of the item and quantity to be shipped. This can be difficult to assess with intangibles (i.e. software or technology)
actual purpose of the item, often with a declaration to this effect from the intended recipient, and assurances that the items will not be used elsewhere.
shipment route, consignee addresses, brokers, agents and other involved third parties.
The declaration from end user could be an End User Undertaking (EUU), an End User Statement (EUS) or an End-user certificate. These EUUs will typically include intended use, and make assurances as to the applications for the goods, e.g. not to be used within missiles.
Licences can subsequently be obtained from the appropriate government department in the exporter's jurisdiction.
A licence will usually have some terms, such as:
obtain and keep EUUs from every destination organisation. (Applies to open-type licences; standard-type licences may require submission of the EUU at the time of licence application.)
include the licence number with the shipping documentation
include some text with the documentation accompanying the item
notify some authorities and make the item available for inspection prior to shipment
keep records of the shipment, and anticipate a potential audit by an enforcement agency — a record keeping requirement
report the shipments made to some authority within some period — a reporting requirement.
=== Administration and enforcement ===
The process of classification, assessment, licensing, and then confirming compliance with the licence terms is typically handled by a government agency in the exporting country. These include BAFA (the Federal Office of Economics and Export Control) in Germany, BIS (and E2C2) in the US, and ECJU in the UK.
== Circumvention ==
During the development of the Lockheed SR-71 Blackbird supersonic spy plane, the US used 'Third-world countries and bogus operations' in order to obtain sufficient ore to create the titanium for the aircraft.
== Worldwide ==
=== Organisations for harmonisation of controlled items ===
These are known as multilateral export control regimes:
Australia Group (AG)
Chemical Weapons Convention (CWC)
Missile Technology Control Regime (MTCR)
Nuclear Suppliers Group (NSG)
Wassenaar Arrangement (WA)
Zangger Committee
The regimes mean that supporting nations will tend to have similar classifications in their individual legislation. It reduces the administrative load on each of the nations. The harmonised regimes reduce the opportunity for 'tourism' where a particular country is chosen for its lax controls pertaining to some particular item. Even with the harmonised regimes, some countries choose to augment with additional classification, e.g. the USA with its 'xx99x' ECCN classifications in their Commerce Control List.
== United States of America ==
There are several agencies regulating exports in the US.
Directorate of Defense Trade Controls (DDTC) enforces the International Traffic in Arms Regulations (ITAR), which deals with military equipment and similar
Office of Foreign Assets Control (OFAC) deals with programmes of sanctions against various entities
Bureau of Industry and Security (BIS), and particularly the Office of Export Enforcement, enforces Export Administration Regulations (EAR), which cover exports in general. BIS also runs the web-based licensing system, SNAP-R.
Other government bodies involved in export control include the Export Enforcement Coordination Center (E2C2) of Homeland Security Investigations.
Companies engaging in export may be required to establish an Export Management and Compliance Program.
There are some particular treatments for cryptographic exports, where the NSA may require separate notification of intent to publish cryptographic software.
== European Union ==
The export control legislation applying in the EU is Council EU Regulation 2021/821, which came into force on 2021-09-09. This recast the previous legislation, 428/2009. The regulation demands that authorisations are required for exports of sensitive items to certain places.
Competent authorities in each EU member state provide the licensing service, e.g. BAFA in Germany, SBDU in France and UAMA in Italy.
Organisations performing exports should have an Internal Compliance Programme (ICP).
The "dual use" regime was established in 2000.
== United Kingdom ==
The principal legislation is the retained EU regulation 428/2009, which still applies with amendments because of the EU Withdrawal Act. This regulation is harmonised with the Export Control Order. The newer 'recast' EU regulation 2021/821 does not apply to Great Britain, since it came into force after Brexit. There is also the similar-sounding Export Control Act 2002, which grants powers to the Secretary of State to impose such rules, and this still applies.
Since Brexit, the Northern Ireland Protocol keeps Northern Ireland within the UK customs territory, but de facto means Northern Ireland is aligned with the EU customs. Consequently exports from Northern Ireland are subject to the EU regulations, including the new EU regulation 2021/821. No export licences are required for movement of dual-use goods between Great Britain and Northern Ireland. The Windsor Framework doesn't appear to impact export control.
The export classifications are declared in the UK strategic export control lists.
It is administered by the Export Control Joint Unit (ECJU), part of the Department for Business and Trade, with a web-based administration system SPIRE. (A new system LITE is being phased in since 2021, with a public beta for SIELs only from September 2024.) ECJU also manage enforcement and audit of licence compliance. It is recommended that companies involved in exports nominate staff, conduct training, keep records, perform internal audits and commit to compliance.
Licences include Standard Individual Export Licences (SIEL), Open Individual Export Licences (OIEL) and Open General Export Licences (OGEL) also known as General Export Authorisation (GEA), formerly known before 2013 as an Open General Licence (OGL).
== See also ==
Arms control
CITES
FDI screening
Sanitary and phytosanitary (SPS) measures
== References ==
In cryptography, security level is a measure of the strength that a cryptographic primitive — such as a cipher or hash function — achieves. Security level is usually expressed as a number of "bits of security" (also security strength), where n-bit security means that the attacker would have to perform 2^n operations to break it, but other methods have been proposed that more closely model the costs for an attacker. This allows for convenient comparison between algorithms and is useful when combining multiple primitives in a hybrid cryptosystem, so there is no clear weakest link. For example, AES-128 (key size 128 bits) is designed to offer a 128-bit security level, which is considered roughly equivalent to RSA with a 3072-bit key.
In this context, security claim or target security level is the security level that a primitive was initially designed to achieve, although "security level" is also sometimes used in those contexts. When attacks are found that have lower cost than the security claim, the primitive is considered broken.
== In symmetric cryptography ==
Symmetric algorithms usually have a strictly defined security claim. For symmetric ciphers, it is typically equal to the key size of the cipher — equivalent to the complexity of a brute-force attack. Cryptographic hash functions with an output size of n bits usually have a collision resistance security level of n/2 and a preimage resistance level of n. This is because the generic birthday attack can always find collisions in 2^(n/2) steps. For example, SHA-256 offers 128-bit collision resistance and 256-bit preimage resistance.
However, there are some exceptions to this. Phelix and Helix are 256-bit ciphers offering a 128-bit security level. The SHAKE variants of SHA-3 also differ: for a 256-bit output size, SHAKE128 provides a 128-bit security level for both collision and preimage resistance.
== In asymmetric cryptography ==
The design of most asymmetric algorithms (i.e. public-key cryptography) relies on neat mathematical problems that are efficient to compute in one direction, but inefficient to reverse by the attacker. However, attacks against current public-key systems are always faster than brute-force search of the key space. Their security level isn't set at design time, but represents a computational hardness assumption, which is adjusted to match the best currently known attack.
Various recommendations have been published that estimate the security level of asymmetric algorithms, which differ slightly due to different methodologies.
For the RSA cryptosystem at 128-bit security level, NIST and ENISA recommend using 3072-bit keys and IETF 3253 bits. The conversion from key length to a security level estimate is based on the complexity of the GNFS.: §7.5
Diffie–Hellman key exchange and DSA are similar to RSA in terms of the conversion from key length to a security level estimate.: §7.5
Elliptic curve cryptography requires shorter keys, so the recommendations for the 128-bit level are 256-383 bits (NIST), 256 bits (ENISA) and 242 bits (IETF). The conversion from key size f to security level is approximately f/2: this is because the best known method to break the elliptic curve discrete logarithm problem, the rho method, finishes in about 0.886·2^(f/2) additions.
== Typical levels ==
The following table gives examples of typical security levels for types of algorithms as found in s5.6.1.1 of the US NIST SP-800-57 Recommendation for Key Management.: Table 2
Under NIST recommendation, a key of a given security level should only be transported under protection using an algorithm of equivalent or higher security level.
The security level is given for the cost of breaking one target, not the amortized cost for a group of targets. It takes 2^128 operations to find an AES-128 key, and the same number of amortized operations is required per key for any number m of keys. On the other hand, breaking m ECC keys using the rho method requires only sqrt(m) times the base cost.
=== Meaning of "broken" ===
A cryptographic primitive is considered broken when an attack is found to have less than its advertised level of security. However, not all such attacks are practical: most currently demonstrated attacks take fewer than 2^40 operations, which translates to a few hours on an average PC. The costliest demonstrated attack on hash functions is the 2^61.2 attack on SHA-1, which took 2 months on 900 GTX 970 GPUs and cost US$75,000 (although the researchers estimate only $11,000 was needed to find a collision).
Aumasson draws the line between practical and impractical attacks at 2^80 operations. He proposes a new terminology:
A broken primitive has an attack taking ≤ 2^80 operations. An attack can be plausibly carried out.
A wounded primitive has an attack taking between 2^80 and around 2^100 operations. An attack is not possible right now, but future improvements are likely to make it possible.
An attacked primitive has an attack that is cheaper than the security claim, but much costlier than 2^100. Such an attack is too far from being practical.
Finally, an analyzed primitive is one with no attacks cheaper than its security claim.
== References ==
== Further reading ==
Aumasson, Jean-Philippe (2020). Too Much Crypto (PDF). Real World Crypto Symposium.
== See also ==
Computational hardness assumption
40-bit encryption
Cipher security summary
Hash function security summary
A backdoor is a typically covert method of bypassing normal authentication or encryption in a computer, product, embedded device (e.g. a home router), or its embodiment (e.g. part of a cryptosystem, algorithm, chipset, or even a "homunculus computer"—a tiny computer-within-a-computer such as that found in Intel's AMT technology). Backdoors are most often used for securing remote access to a computer, or obtaining access to plaintext in cryptosystems. From there it may be used to gain access to privileged information like passwords, corrupt or delete data on hard drives, or transfer information within autoschediastic networks.
In the United States, the 1994 Communications Assistance for Law Enforcement Act forces internet providers to provide backdoors for government authorities. In 2024, the U.S. government realized that China had been tapping communications in the U.S. using that infrastructure for months, or perhaps longer; China recorded presidential candidate campaign office phone calls, including those of employees of the then-vice president and of the candidates themselves.
A backdoor may take the form of a hidden part of a program, a separate program (e.g. Back Orifice may subvert the system through a rootkit), code in the firmware of the hardware, or parts of an operating system such as Windows. Trojan horses can be used to create vulnerabilities in a device. A Trojan horse may appear to be an entirely legitimate program, but when executed, it triggers an activity that may install a backdoor. Although some are secretly installed, other backdoors are deliberate and widely known. These kinds of backdoors have "legitimate" uses such as providing the manufacturer with a way to restore user passwords.
Many systems that store information within the cloud fail to create accurate security measures. If many systems are connected within the cloud, hackers can gain access to all other platforms through the most vulnerable system. Default passwords (or other default credentials) can function as backdoors if they are not changed by the user. Some debugging features can also act as backdoors if they are not removed in the release version. In 1993, the United States government attempted to deploy an encryption system, the Clipper chip, with an explicit backdoor for law enforcement and national security access. The chip was unsuccessful.
Recent proposals to counter backdoors include creating a database of backdoors' triggers and then using neural networks to detect them.
== Overview ==
The threat of backdoors surfaced when multiuser and networked operating systems became widely adopted. Petersen and Turn discussed computer subversion in a paper published in the proceedings of the 1967 AFIPS Conference. They noted a class of active infiltration attacks that use "trapdoor" entry points into the system to bypass security facilities and permit direct access to data. The use of the word trapdoor here clearly coincides with more recent definitions of a backdoor. However, since the advent of public key cryptography the term trapdoor has acquired a different meaning (see trapdoor function), and the term "backdoor" is now preferred; "trapdoor" has fallen out of use in this sense. More generally, such security breaches were discussed at length in a RAND Corporation task force report published under DARPA sponsorship by J.P. Anderson and D.J. Edwards in 1970.
While initially targeting the computer vision domain, backdoor attacks have expanded to encompass various other domains, including text, audio, ML-based computer-aided design, and ML-based wireless signal classification. Additionally, vulnerabilities in backdoors have been demonstrated in deep generative models, reinforcement learning (e.g., AI GO), and deep graph models. These broad-ranging potential risks have prompted concerns from national security agencies regarding their potentially disastrous consequences.
A backdoor in a login system might take the form of a hard coded user and password combination which gives access to the system. An example of this sort of backdoor was used as a plot device in the 1983 film WarGames, in which the architect of the "WOPR" computer system had inserted a hardcoded password-less account which gave the user access to the system, and to undocumented parts of the system (in particular, a video game-like simulation mode and direct interaction with the artificial intelligence).
Although the number of backdoors in systems using proprietary software (software whose source code is not publicly available) is not widely credited, they are nevertheless frequently exposed. Programmers have even succeeded in secretly installing large amounts of benign code as Easter eggs in programs, although such cases may involve official forbearance, if not actual permission.
== Politics and attribution ==
There are a number of cloak and dagger considerations that come into play when apportioning responsibility.
Covert backdoors sometimes masquerade as inadvertent defects (bugs) for reasons of plausible deniability. In some cases, these might begin life as an actual bug (inadvertent error), which, once discovered are then deliberately left unfixed and undisclosed, whether by a rogue employee for personal advantage, or with executive awareness and oversight.
It is also possible for an entirely above-board corporation's technology base to be covertly and untraceably tainted by external agents (hackers), though this level of sophistication is thought to exist mainly at the level of nation state actors. For example, if a photomask obtained from a photomask supplier differs in a few gates from its photomask specification, a chip manufacturer would be hard-pressed to detect this if otherwise functionally silent; a covert rootkit running in the photomask etching equipment could enact this discrepancy unbeknown to the photomask manufacturer, either, and by such means, one backdoor potentially leads to another.
In general terms, the long dependency-chains in the modern, highly specialized technological economy and innumerable human-elements process control-points make it difficult to conclusively pinpoint responsibility at such time as a covert backdoor becomes unveiled.
Even direct admissions of responsibility must be scrutinized carefully if the confessing party is beholden to other powerful interests.
== Examples ==
=== Worms ===
Many computer worms, such as Sobig and Mydoom, install a backdoor on the affected computer (generally a PC on broadband running Microsoft Windows and Microsoft Outlook). Such backdoors appear to be installed so that spammers can send junk e-mail from the infected machines. Others, such as the Sony/BMG rootkit, placed secretly on millions of music CDs through late 2005, are intended as DRM measures—and, in that case, as data-gathering agents, since both surreptitious programs they installed routinely contacted central servers.
A sophisticated attempt to plant a backdoor in the Linux kernel, exposed in November 2003, added a small and subtle code change by subverting the revision control system. In this case, a two-line change appeared to check root access permissions of a caller to the sys_wait4 function, but because it used assignment = instead of equality checking ==, it actually granted root permissions to the caller. This difference is easily overlooked, and could even be interpreted as an accidental typographical error, rather than an intentional attack.
In January 2014, a backdoor was discovered in certain Samsung Android products, like the Galaxy devices. The Samsung proprietary Android versions are fitted with a backdoor that provides remote access to the data stored on the device. In particular, the Samsung Android software that is in charge of handling the communications with the modem, using the Samsung IPC protocol, implements a class of requests known as remote file server (RFS) commands, that allows the backdoor operator to perform via modem remote I/O operations on the device hard disk or other storage. As the modem is running Samsung proprietary Android software, it is likely that it offers over-the-air remote control that could then be used to issue the RFS commands and thus to access the file system on the device.
=== Object code backdoors ===
Harder to detect backdoors involve modifying object code, rather than source code—object code is much harder to inspect, as it is designed to be machine-readable, not human-readable. These backdoors can be inserted either directly in the on-disk object code, or inserted at some point during compilation, assembly linking, or loading—in the latter case the backdoor never appears on disk, only in memory. Object code backdoors are difficult to detect by inspection of the object code, but are easily detected by simply checking for changes (differences), notably in length or in checksum, and in some cases can be detected or analyzed by disassembling the object code. Further, object code backdoors can be removed (assuming source code is available) by simply recompiling from source on a trusted system.
Thus for such backdoors to avoid detection, all extant copies of a binary must be subverted, and any validation checksums must also be compromised, and source must be unavailable, to prevent recompilation. Alternatively, these other tools (length checks, diff, checksumming, disassemblers) can themselves be compromised to conceal the backdoor, for example detecting that the subverted binary is being checksummed and returning the expected value, not the actual value. To conceal these further subversions, the tools must also conceal the changes in themselves—for example, a subverted checksummer must also detect if it is checksumming itself (or other subverted tools) and return false values. This leads to extensive changes in the system and tools being needed to conceal a single change.
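The difference-checking defense described above can be sketched in a few lines. The file path and expected digest in this example are hypothetical, and, as the text notes, the check only helps if the reference digest was recorded on a trusted system and the checksumming tool itself has not been subverted:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_binary(path: str, expected_digest: str) -> bool:
    """Detect object-code tampering by comparing against a trusted digest.

    Any modification to the binary, however subtle, changes the digest,
    which is why an attacker must also subvert the checksumming tools.
    """
    return sha256_of(path) == expected_digest
```

Even a one-byte patch to the binary flips the comparison, which is why concealing an object-code backdoor forces the cascade of further subversions described above.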
As object code can be regenerated by recompiling (reassembling, relinking) the original source code, making a persistent object code backdoor (without modifying source code) requires subverting the compiler itself—so that when it detects that it is compiling the program under attack it inserts the backdoor—or alternatively the assembler, linker, or loader. As this requires subverting the compiler, this in turn can be fixed by recompiling the compiler, removing the backdoor insertion code. This defense can in turn be subverted by putting a source meta-backdoor in the compiler, so that when it detects that it is compiling itself it then inserts this meta-backdoor generator, together with the original backdoor generator for the original program under attack. After this is done, the source meta-backdoor can be removed, and the compiler recompiled from original source with the compromised compiler executable: the backdoor has been bootstrapped. This attack dates to a 1974 paper by Karger and Schell, and was popularized in Thompson's 1984 article, entitled "Reflections on Trusting Trust"; it is hence colloquially known as the "Trusting Trust" attack. See compiler backdoors, below, for details. Analogous attacks can target lower levels of the system,
such as the operating system, and can be inserted during the system booting process; these are also mentioned by Karger and Schell in 1974, and now exist in the form of boot sector viruses.
=== Asymmetric backdoors ===
A traditional backdoor is a symmetric backdoor: anyone that finds the backdoor can in turn use it. The notion of an asymmetric backdoor was introduced by Adam Young and Moti Yung in the Proceedings of Advances in Cryptology – Crypto '96. An asymmetric backdoor can only be used by the attacker who plants it, even if the full implementation of the backdoor becomes public (e.g. via publishing, being discovered and disclosed by reverse engineering, etc.). Also, it is computationally intractable to detect the presence of an asymmetric backdoor under black-box queries. This class of attacks has been termed kleptography; they can be carried out in software, hardware (for example, smartcards), or a combination of the two. The theory of asymmetric backdoors is part of a larger field now called cryptovirology. Notably, the NSA inserted a kleptographic backdoor into the Dual EC DRBG standard.
There exists an experimental asymmetric backdoor in RSA key generation. This OpenSSL RSA backdoor, designed by Young and Yung, utilizes a twisted pair of elliptic curves, and has been made available.
== Compiler backdoors ==
A sophisticated form of black box backdoor is a compiler backdoor, where not only is a compiler subverted—to insert a backdoor in some other program, such as a login program—but it is further modified to detect when it is compiling itself and then insert both the backdoor-insertion code (targeting the other program) and the code that performs this self-modification, like the mechanism through which retroviruses infect their host. This can be done by modifying the source code, and the resulting compromised compiler (object code) can compile the original (unmodified) source code and insert itself: the exploit has been boot-strapped.
This attack was originally presented in Karger & Schell (1974), which was a United States Air Force security analysis of Multics, where they described such an attack on a PL/I compiler, and call it a "compiler trap door". They also mention a variant where the system initialization code is modified to insert a backdoor during booting, as this is complex and poorly understood, and call it an "initialization trapdoor"; this is now known as a boot sector virus.
This attack was then actually implemented by Ken Thompson, and popularized in his Turing Award acceptance speech in 1983, "Reflections on Trusting Trust", which points out that trust is relative, and the only software one can truly trust is code where every step of the bootstrapping has been inspected. This backdoor mechanism is based on the fact that people only review source (human-written) code, and not compiled machine code (object code). A program called a compiler is used to create the second from the first, and the compiler is usually trusted to do an honest job.
Thompson's paper describes a modified version of the Unix C compiler that would put an invisible backdoor in the Unix login command when it noticed that the login program was being compiled, and would also add this feature undetectably to future compiler versions upon their compilation as well. As the compiler itself was a compiled program, users would be extremely unlikely to notice the machine code instructions that performed these tasks. (Because of the second task, the compiler's source code would appear "clean".) What's worse, in Thompson's proof of concept implementation, the subverted compiler also subverted the analysis program (the disassembler), so that anyone who examined the binaries in the usual way would not actually see the real code that was running, but something else instead.
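The mechanism can be caricatured in a toy model in which "compilation" is just a string transformation. Every name below is invented for illustration, and a real attack operates on machine code rather than strings; the point the model preserves is that the backdoor never appears in any source file, only in binaries produced by an already-subverted compiler:

```python
BACKDOOR = "# grant access for the secret password"

def honest_compile(source: str) -> str:
    """Stand-in for an honest compiler: the "binary" mirrors the source."""
    return f"BIN[{source}]"

def subverted_compile(source: str) -> str:
    """Toy model of Thompson's self-reproducing compiler backdoor.

    The backdoor is injected into the login "binary", and the injection
    logic re-inserts itself whenever the compiler's own (clean-looking)
    source is being compiled, so it survives recompilation.
    """
    if "def login" in source:        # compiling the target program
        return honest_compile(source) + "\n" + BACKDOOR
    if "def compile" in source:      # the compiler compiling itself
        return honest_compile(source) + "\n# (re-inserted injection logic)"
    return honest_compile(source)
```

Reviewing the login program's source, or even the compiler's source, reveals nothing; only inspection of the produced binaries could, which is exactly what the subverted disassembler in Thompson's proof of concept prevented.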
Karger and Schell gave an updated analysis of the original exploit in 2002, and, in 2009, Wheeler wrote a historical overview and survey of the literature. In 2023, Cox published an annotated version of Thompson's backdoor source code.
=== Occurrences ===
Thompson's version was, officially, never released into the wild. However, it is believed that a version was distributed to BBN and at least one use of the backdoor was recorded. There are scattered anecdotal reports of such backdoors in subsequent years.
In August 2009, an attack of this kind was discovered by Sophos labs. The W32/Induc-A virus infected the program compiler for Delphi, a Windows programming language. The virus introduced its own code into the compilation of new Delphi programs, allowing it to infect and propagate to many systems without the knowledge of the software programmer. The virus looks for a Delphi installation, modifies the SysConst.pas file, which is part of the source code of the standard library, and recompiles it. After that, every program compiled by that Delphi installation will contain the virus. An attack that propagates by building its own Trojan horse can be especially hard to discover. It resulted in many software vendors releasing infected executables without realizing it, sometimes dismissing the detections as false positives. After all, the executable was not tampered with; the compiler was. It is believed that the Induc-A virus had been propagating for at least a year before it was discovered.
In 2015, a malicious copy of Xcode, XcodeGhost, performed a similar attack and infected iOS apps from a dozen software companies in China. Globally, 4,000 apps were found to be affected. It was not a true Thompson Trojan, as it did not infect development tools themselves, but it did prove that toolchain poisoning can cause substantial damage.
=== Countermeasures ===
Once a system has been compromised with a backdoor or Trojan horse, such as the Trusting Trust compiler, it is very hard for the "rightful" user to regain control of the system – typically one should rebuild a clean system and transfer data (but not executables) over. However, several practical weaknesses in the Trusting Trust scheme have been suggested. For example, a sufficiently motivated user could painstakingly review the machine code of the untrusted compiler before using it. As mentioned above, there are ways to hide the Trojan horse, such as subverting the disassembler; but there are ways to counter that defense, too, such as writing a disassembler from scratch.
A generic method to counter trusting trust attacks is called diverse double-compiling. The method requires a different compiler and the source code of the compiler-under-test. That source, compiled with both compilers, results in two different stage-1 compilers, which however should have the same behavior. Thus the same source compiled with both stage-1 compilers must then result in two identical stage-2 compilers. A formal proof is given that the latter comparison guarantees that the purported source code and executable of the compiler-under-test correspond, under some assumptions. This method was applied by its author to verify that the C compiler of the GCC suite (v. 3.0.4) contained no trojan, using icc (v. 11.0) as the different compiler.
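The four compilation steps described above can be modeled in miniature. The string encodings of "binaries" and the trojan-propagation rule below are invented for illustration (a sketch of the comparison structure, not of Wheeler's actual procedure); the essential assumption is that honest compilation is deterministic, so two correct compilers for the same source regenerate bit-identical stage-2 output:

```python
COMPILER_SRC = "compiler-source"   # sT: purported source of the compiler under test

def execute(compiler_bin: str, source: str) -> str:
    """Run a compiler "binary" on some source, yielding a new binary.

    Honest semantics depend only on the source; a subverted binary
    additionally propagates its trojan when compiling the compiler.
    """
    out = f"BIN({source})"
    if "TROJAN" in compiler_bin and source == COMPILER_SRC:
        out += "+TROJAN"
    return out

def ddc_check(c_under_test: str, trusted_compiler: str) -> bool:
    """Diverse double-compiling, following the steps in the text."""
    stage1_t = execute(c_under_test, COMPILER_SRC)
    stage1_a = execute(trusted_compiler, COMPILER_SRC)
    stage2_t = execute(stage1_t, COMPILER_SRC)
    stage2_a = execute(stage1_a, COMPILER_SRC)
    return stage2_t == stage2_a        # identical iff no hidden trojan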
In practice such verifications are not done by end users, except in extreme circumstances of intrusion detection and analysis, due to the rarity of such sophisticated attacks, and because programs are typically distributed in binary form. Removing backdoors (including compiler backdoors) is typically done by simply rebuilding a clean system. However, the sophisticated verifications are of interest to operating system vendors, to ensure that they are not distributing a compromised system, and in high-security settings, where such attacks are a realistic concern.
== List of known backdoors ==
Back Orifice was created in 1998 by hackers from Cult of the Dead Cow group as a remote administration tool. It allowed Windows computers to be remotely controlled over a network and parodied the name of Microsoft's BackOffice.
The Dual EC DRBG cryptographically secure pseudorandom number generator was revealed in 2013 to possibly have a kleptographic backdoor deliberately inserted by NSA, who also had the private key to the backdoor.
Several backdoors in the unlicensed copies of WordPress plug-ins were discovered in March 2014. They were inserted as obfuscated JavaScript code and silently created, for example, an admin account in the website database. A similar scheme was later exposed in a Joomla plugin.
Borland Interbase versions 4.0 through 6.0 had a hard-coded backdoor, put there by the developers. The server code contains a compiled-in backdoor account (username: politically, password: correct), which could be accessed over a network connection; a user logging in with this backdoor account could take full control over all Interbase databases. The backdoor was detected in 2001 and a patch was released.
A backdoor inserted in 2008 into Juniper Networks' ScreenOS firmware, versions 6.2.0r15 through 6.2.0r18 and 6.3.0r12 through 6.3.0r20, gives any user administrative access when a special master password is used.
Several backdoors were discovered in C-DATA Optical Line Termination (OLT) devices. Researchers released the findings without notifying C-DATA because they believe the backdoors were intentionally placed by the vendor.
A backdoor in versions 5.6.0 and 5.6.1 of the popular Linux utility XZ Utils was discovered in March 2024 by software developer Andres Freund. The backdoor gives an attacker who possesses a specific Ed448 private key remote code execution capabilities on the affected Linux systems. The issue has been assigned a CVSS score of 10.0, the highest possible score.
== See also ==
Backdoor:Win32.Hupigon
Hardware backdoor
Titanium (malware)
== Notes ==
== References ==
== External links ==
Finding and Removing Backdoors
Three Archaic Backdoor Trojan Programs That Still Serve Great Pranks Archived 2015-05-27 at the Wayback Machine
Backdoors removal — List of backdoors and their removal instructions.
FAQ Farm's Backdoors FAQ: wiki question and answer forum
Atomic energy or energy of atoms is energy carried by atoms. The term originated in 1903 when Ernest Rutherford began to speak of the possibility of atomic energy. H. G. Wells popularized the phrase "splitting the atom", before discovery of the atomic nucleus.
Atomic energy includes:
Nuclear binding energy, the energy required to split a nucleus of an atom.
Nuclear potential energy, the potential energy of the particles inside an atomic nucleus.
Nuclear reaction, a process in which nuclei or nuclear particles interact, resulting in products different from the initial ones; see also nuclear fission and nuclear fusion.
Radioactive decay, the set of various processes by which unstable atomic nuclei (nuclides) emit subatomic particles.
Atomic energy is the source of nuclear power, which uses sustained nuclear fission to generate heat and electricity. It is also the source of the explosive force of an atomic bomb.
== See also ==
Atomic Age
Index of environmental articles
== References ==
In cryptography, a public key certificate, also known as a digital certificate or identity certificate, is an electronic document used to prove the validity of a public key. The certificate includes the public key and information about it, information about the identity of its owner (called the subject), and the digital signature of an entity that has verified the certificate's contents (called the issuer). If the device examining the certificate trusts the issuer and finds the signature to be a valid signature of that issuer, then it can use the included public key to communicate securely with the certificate's subject. In email encryption, code signing, and e-signature systems, a certificate's subject is typically a person or organization. However, in Transport Layer Security (TLS) a certificate's subject is typically a computer or other device, though TLS certificates may identify organizations or individuals in addition to their core role in identifying devices. TLS, sometimes called by its older name Secure Sockets Layer (SSL), is notable for being a part of HTTPS, a protocol for securely browsing the web.
In a typical public-key infrastructure (PKI) scheme, the certificate issuer is a certificate authority (CA), usually a company that charges customers a fee to issue certificates for them. By contrast, in a web of trust scheme, individuals sign each other's keys directly, in a format that performs a similar function to a public key certificate. In case of key compromise, a certificate may need to be revoked.
The most common format for public key certificates is defined by X.509. Because X.509 is very general, the format is further constrained by profiles defined for certain use cases, such as Public Key Infrastructure (X.509) as defined in RFC 5280.
== Types of certificate ==
=== TLS/SSL server certificate ===
The Transport Layer Security (TLS) protocol – as well as its outdated predecessor, the Secure Sockets Layer (SSL) protocol – ensures that the communication between a client computer and a server is secure. The protocol requires the server to present a digital certificate, proving that it is the intended destination. The connecting client conducts certification path validation, ensuring that:
The subject of the certificate matches the hostname (not to be confused with the domain name) to which the client is trying to connect.
A trusted certificate authority has signed the certificate.
The Subject field of the certificate must identify the primary hostname of the server as the Common Name. The hostname must be publicly accessible, not using private addresses or reserved domains. A certificate may be valid for multiple hostnames (e.g., a domain and its subdomains). Such certificates are commonly called Subject Alternative Name (SAN) certificates or Unified Communications Certificates (UCC). These certificates contain the Subject Alternative Name field, though many CAs also put them into the Subject Common Name field for backward compatibility. If some of the hostnames contain an asterisk (*), a certificate may also be called a wildcard certificate.
Once the certification path validation is successful, the client can establish an encrypted connection with the server.
Internet-facing servers, such as public web servers, must obtain their certificates from a trusted, public certificate authority (CA).
=== TLS/SSL client certificate ===
Client certificates authenticate the client connecting to a TLS service, for instance to provide access control. Because most services provide access to individuals rather than devices, most client certificates contain an email address or personal name rather than a hostname. In addition, the certificate authority that issues the client certificate is usually the service provider to which the client connects, because it is the provider that needs to perform authentication. Some service providers even offer free SSL certificates as part of their packages.
While most web browsers support client certificates, the most common form of authentication on the Internet is a username and password pair. Client certificates are more common in virtual private networks (VPN) and Remote Desktop Services, where they authenticate devices.
=== Email certificate ===
In accordance with the S/MIME protocol, email certificates can both establish the message integrity and encrypt messages. To establish encrypted email communication, the communicating parties must have their digital certificates in advance. Each must send the other one digitally signed email and opt to import the sender's certificate.
Some publicly trusted certificate authorities provide email certificates, but more commonly S/MIME is used when communicating within a given organization, and that organization runs its own CA, which is trusted by participants in that email system.
=== Self-signed and root certificates ===
A self-signed certificate is a certificate with a subject that matches its issuer, and a signature that can be verified by its own public key.
Self-signed certificates have their own limited uses. They have full trust value when the issuer and the sole user are the same entity. For example, the Encrypting File System on Microsoft Windows issues a self-signed certificate on behalf of the encrypting user and uses it to transparently decrypt data on the fly. The digital certificate chain of trust starts with a self-signed certificate, called a root certificate, trust anchor, or trust root. A certificate authority self-signs a root certificate to be able to sign other certificates.
An intermediate certificate has a similar purpose to the root certificate – its only use is to sign other certificates. However, an intermediate certificate is not self-signed. A root certificate or another intermediate certificate needs to sign it.
An end-entity or leaf certificate is any certificate that cannot sign other certificates. For instance, TLS/SSL server and client certificates, email certificates, code signing certificates, and qualified certificates are all end-entity certificates.
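The subject/issuer linkage that distinguishes root, intermediate, and end-entity certificates can be sketched as follows. The dictionary representation is a simplification, and a real validator would also verify each signature, validity period, key usage, and revocation status; this sketch checks only the chain structure described above:

```python
def is_self_signed(cert: dict) -> bool:
    """A root certificate's subject matches its issuer."""
    return cert["subject"] == cert["issuer"]

def chain_is_well_formed(chain: list) -> bool:
    """Check a certificate chain, leaf first.

    Each certificate must be issued by the next one in the chain, and
    the chain must terminate in a self-signed root (the trust anchor).
    """
    if not chain:
        return False
    for child, parent in zip(chain, chain[1:]):
        if child["issuer"] != parent["subject"]:
            return False
    return is_self_signed(chain[-1])
```

In this model a leaf certificate signed directly by a root forms a two-element chain, while a chain ending in an intermediate (which is never self-signed) is rejected.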
=== Subject Alternative Name certificate ===
Subject Alternative Name (SAN) certificates are an extension to X.509 that allows various values to be associated with a security certificate using a subjectAltName field. These values are called Subject Alternative Names (SANs). Names include:
Email addresses
IP addresses
URIs
DNS names: this is usually also provided as the Common Name RDN within the Subject field of the main certificate.
Directory names: alternative Distinguished Names to that given in the Subject.
Other names, given as a General Name or Universal Principal Name: a registered object identifier followed by a value.
RFC 2818 (May 2000) specifies Subject Alternative Names as the preferred method of adding DNS names to certificates, deprecating the previous method of putting DNS names in the commonName field. Google Chrome version 58 (March 2017) removed support for checking the commonName field at all, instead only looking at the SANs.
As shown in the picture of Wikimedia's section on the right, the SAN field can contain wildcards. Not all vendors support or endorse mixing wildcards into SAN certificates.
=== Wildcard certificate ===
A public key certificate which uses an asterisk * (the wildcard) in its domain name fragment is called a Wildcard certificate.
Through the use of *, a single certificate may be used for multiple sub-domains. It is commonly used for transport layer security in computer networking.
For example, a single wildcard certificate for https://*.example.com will secure all these subdomains on the https://*.example.com domain:
payment.example.com
contact.example.com
login-secure.example.com
www.example.com
Instead of obtaining separate certificates for each subdomain, a single wildcard certificate can cover the domain and its subdomains, reducing cost.
Because the wildcard only covers one level of subdomains (the asterisk doesn't match full stops), these domains would not be valid for the certificates:
test.login.example.com
example.com
Note possible exceptions by CAs, for example wildcard-plus cert by DigiCert contains an automatic "Plus" property for the naked domain example.com.
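A minimal matcher for the single-level behavior illustrated above might look like this. It is a sketch: only a leading `*.` label is handled, the asterisk never spans a dot, and CA-specific extras such as DigiCert's naked-domain "Plus" behavior are deliberately not modeled:

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Single-level wildcard matching for a leading `*.` label.

    `*.example.com` matches `www.example.com` but neither the naked
    domain `example.com` nor the deeper `test.login.example.com`.
    """
    if not pattern.startswith("*."):
        return pattern.lower() == hostname.lower()
    suffix = pattern[1:].lower()          # ".example.com"
    host = hostname.lower()
    if not host.endswith(suffix):
        return False
    first_label = host[: -len(suffix)]
    # exactly one non-empty label may replace the asterisk
    return bool(first_label) and "." not in first_label
```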
==== Limitations ====
Only a single level of subdomain matching is supported in accordance with RFC 2818.
It is not possible to get a wildcard for an Extended Validation Certificate. A workaround could be to add every virtual host name in the Subject Alternative Name (SAN) extension, the major problem being that the certificate needs to be reissued whenever a new virtual server is added. (See Transport Layer Security § Support for name-based virtual servers for more information.)
Wildcards can be added as domains in multi-domain certificates or Unified Communications Certificates (UCC). In addition, wildcards themselves can have subjectAltName extensions, including other wildcards. For example, the wildcard certificate *.wikipedia.org has *.m.wikimedia.org as a Subject Alternative Name. Thus it secures www.wikipedia.org as well as the completely different website name meta.m.wikimedia.org.
RFC 6125 argues against wildcard certificates on security grounds, in particular "partial wildcards".
==== Further examples ====
The wildcard applies only to one level of the domain name. *.example.com matches sub1.example.com but not example.com and not sub2.sub1.example.com
The wildcard may appear anywhere inside a label as a "partial wildcard" according to early specifications
f*.domain.com is OK. It will match frog.domain.com but not frog.super.domain.com
baz*.example.net is OK and matches baz1.example.net
*baz.example.net is OK and matches foobaz.example.net
b*z.example.net is OK and matches buzz.example.net
However, use of "partial-wildcard" certs is not recommended. As of 2011, partial wildcard support is optional, and is explicitly disallowed in SubjectAltName headers that are required for multi-name certificates. All major browsers have deliberately removed support for partial-wildcard certificates; they will result in an "SSL_ERROR_BAD_CERT_DOMAIN" error. Similarly, it is typical for standard libraries in programming languages not to support "partial-wildcard" certificates. For example, any "partial-wildcard" certificate will not work with the latest versions of both Python and Go. Thus,
Do not allow a label that consists entirely of just a wildcard unless it is the left-most label
sub1.*.domain.com is not allowed.
A cert with multiple wildcards in a name is not allowed.
*.*.domain.com
A cert with * plus a top-level domain is not allowed.
*.com
Too general and should not be allowed.
*
International domain names encoded in ASCII (A-label) are labels that are ASCII-encoded and begin with xn--. URLs with international labels cannot contain wildcards.
xn--caf-dma.com is café.com
xn--caf-dma*.com is not allowed
Lw*.xn--caf-dma.com is allowed
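The restrictions listed above can be collected into one validity check. This is a sketch of the listed rules only; it accepts the partial-wildcard forms from the early specifications even though, as noted earlier, modern browsers reject them:

```python
def wildcard_pattern_allowed(pattern: str) -> bool:
    """Apply the listed restrictions to a candidate certificate name.

    Modeled rules: no lone `*`, at most one wildcard, a bare-asterisk
    label only in the left-most position, no wildcard covering a bare
    top-level domain, and no wildcard inside an IDN A-label (`xn--`).
    """
    labels = pattern.split(".")
    if pattern == "*" or pattern.count("*") > 1:
        return False
    if "*" in pattern and len(labels) < 3:
        return False                      # forbids e.g. "*.com"
    for i, label in enumerate(labels):
        if label == "*" and i != 0:
            return False                  # forbids "sub1.*.domain.com"
        if "*" in label and label.startswith("xn--"):
            return False                  # wildcard inside an A-label
    return True
```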
=== Other certificates ===
EMV certificate: EMV is a payment method based on a technical standard for payment cards, payment terminals and automated teller machines (ATM). EMV payment cards are preloaded with a card issuer certificate, signed by the EMV certificate authority to validate authenticity of the payment card during the payment transaction.
Code-signing certificate: Certificates can validate apps (or their binaries) to ensure they were not tampered with during delivery.
Qualified certificate: A certificate identifying an individual, typically for electronic signature purposes. These are most commonly used in Europe, where the eIDAS regulation standardizes them and requires their recognition.
Role-based certificate: Defined in the X.509 Certificate Policy for the Federal Bridge Certification Authority (FBCA), role-based certificates "identify a specific role on behalf of which the subscriber is authorized to act rather than the subscriber’s name and are issued in the interest of supporting accepted business practices."
Group certificate: Defined in the X.509 Certificate Policy for the Federal Bridge Certification Authority (FBCA), for "cases where there are several entities acting in one capacity, and where non-repudiation for transactions is not desired."
== Common fields ==
These are some of the most common fields in certificates. Most certificates contain a number of fields not listed here. Note that in terms of a certificate's X.509 representation, a certificate is not "flat" but contains these fields nested in various structures within the certificate.
Serial Number: Used to uniquely identify the certificate within a CA's systems. In particular this is used to track revocation information.
Subject: The entity a certificate belongs to: a machine, an individual, or an organization.
Issuer: The entity that verified the information and signed the certificate.
Not Before: The earliest time and date on which the certificate is valid. Usually set to a few hours or days prior to the moment the certificate was issued, to avoid clock skew problems.
Not After: The time and date past which the certificate is no longer valid.
Key Usage: The valid cryptographic uses of the certificate's public key. Common values include digital signature validation, key encipherment, and certificate signing.
Extended Key Usage: The applications in which the certificate may be used. Common values include TLS server authentication, email protection, and code signing.
Public Key: A public key belonging to the certificate subject.
Signature Algorithm: This contains a hashing algorithm and a digital signature algorithm. For example, "sha256RSA", where sha256 is the hashing algorithm and RSA is the signature algorithm.
Signature: The body of the certificate is hashed (using the hashing algorithm in the "Signature Algorithm" field) and then the hash is signed (using the signature algorithm in the "Signature Algorithm" field) with the issuer's private key.
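The hash-then-sign split can be illustrated by parsing a display name like "sha256RSA" and computing the digest of the to-be-signed bytes. The parsing is naive string handling for illustration only (real certificates carry an ASN.1 object identifier, not this display name), and the private-key signing step itself is not modeled:

```python
import hashlib

def split_signature_algorithm(name: str):
    """Split a display name like "sha256RSA" into (hash, signature) parts."""
    for hash_alg in ("sha256", "sha384", "sha512", "sha1"):
        if name.lower().startswith(hash_alg):
            return hash_alg, name[len(hash_alg):]
    raise ValueError(f"unrecognized algorithm name: {name}")

def digest_to_be_signed(tbs_certificate: bytes, hash_alg: str) -> bytes:
    """Hash the to-be-signed certificate body; the issuer then signs
    this digest with its private key (not modeled here)."""
    return hashlib.new(hash_alg, tbs_certificate).digest()
```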
=== Example ===
This is an example of a decoded SSL/TLS certificate retrieved from SSL.com's website. The issuer's common name (CN) is shown as SSL.com EV SSL Intermediate CA RSA R3, identifying this as an Extended Validation (EV) certificate. Validated information about the website's owner (SSL Corp) is located in the Subject field. The X509v3 Subject Alternative Name field contains a list of domain names covered by the certificate. The X509v3 Extended Key Usage and X509v3 Key Usage fields show all appropriate uses.
== Usage in the European Union ==
In the European Union, (advanced) electronic signatures on legal documents are commonly performed using digital signatures with accompanying identity certificates. However, only qualified electronic signatures (which require using a qualified trust service provider and signature creation device) are given the same power as a physical signature.
== Certificate authorities ==
In the X.509 trust model, a certificate authority (CA) is responsible for signing certificates. These certificates act as an introduction between two parties, which means that a CA acts as a trusted third party. A CA processes requests from people or organizations requesting certificates (called subscribers), verifies the information, and potentially signs an end-entity certificate based on that information. To perform this role effectively, a CA needs to have one or more broadly trusted root certificates or intermediate certificates and the corresponding private keys. CAs may achieve this broad trust by having their root certificates included in popular software, or by obtaining a cross-signature from another CA delegating trust. Other CAs are trusted within a relatively small community, like a business, and are distributed by other mechanisms like Windows Group Policy.
Certificate authorities are also responsible for maintaining up-to-date revocation information about certificates they have issued, indicating whether certificates are still valid. They provide this information through Online Certificate Status Protocol (OCSP) and/or Certificate Revocation Lists (CRLs). Some of the larger certificate authorities in the market include IdenTrust, DigiCert, and Sectigo.
== Root programs ==
Some major software contain a list of certificate authorities that are trusted by default. This makes it easier for end-users to validate certificates, and easier for people or organizations that request certificates to know which certificate authorities can issue a certificate that will be broadly trusted. This is particularly important in HTTPS, where a web site operator generally wants to get a certificate that is trusted by nearly all potential visitors to their web site.
The policies and processes a provider uses to decide which certificate authorities their software should trust are called root programs. The most influential root programs are:
Microsoft Root Program
Apple Root Program
Mozilla Root Program
Oracle Java root program
Adobe AATL Adobe Approved Trust List and EUTL root programs (used for document signing)
Browsers other than Firefox generally use the operating system's facilities to decide which certificate authorities are trusted. So, for instance, Chrome on Windows trusts the certificate authorities included in the Microsoft Root Program, while on macOS or iOS, Chrome trusts the certificate authorities in the Apple Root Program. Edge and Safari use their respective operating system trust stores as well, but each is only available on a single OS. Firefox uses the Mozilla Root Program trust store on all platforms.
The Mozilla Root Program is operated publicly, and its certificate list is part of the open source Firefox web browser, so it is broadly used outside Firefox. For instance, while there is no common Linux Root Program, many Linux distributions, like Debian, include a package that periodically copies the contents of the Firefox trust list, which is then used by applications.
Root programs generally provide a set of valid purposes with the certificates they include. For instance, some CAs may be considered trusted for issuing TLS server certificates, but not for code signing certificates. This is indicated with a set of trust bits in a root certificate storage system.
== Revocation ==
A certificate may be revoked before it expires, which signals that it is no longer valid. Without revocation, an attacker would be able to exploit such a compromised or misissued certificate until expiry. Hence, revocation is an important part of a public key infrastructure. Revocation is performed by the issuing certificate authority, which produces a cryptographically authenticated statement of revocation.
For distributing revocation information to clients, timeliness of the discovery of revocation (and hence the window for an attacker to exploit a compromised certificate) trades off against resource usage in querying revocation statuses and privacy concerns. If revocation information is unavailable (either due to accident or an attack), clients must decide whether to fail-hard and treat a certificate as if it is revoked (and so degrade availability) or to fail-soft and treat it as unrevoked (and allow attackers to sidestep revocation).
Due to the cost of revocation checks and the availability impact from potentially-unreliable remote services, Web browsers limit the revocation checks they will perform, and will fail-soft where they do. Certificate revocation lists are too bandwidth-costly for routine use, and the Online Certificate Status Protocol presents connection latency and privacy issues. Other schemes have been proposed but have not yet been successfully deployed to enable fail-hard checking.
== Website security ==
The most common use of certificates is for HTTPS-based web sites. A web browser validates that an HTTPS web server is authentic, so that the user can be confident that their interaction with the web site has no eavesdroppers and that the web site is who it claims to be. This security is important for electronic commerce. In practice, a web site operator obtains a certificate by applying to a certificate authority with a certificate signing request. The certificate request is an electronic document that contains the web site name, company information and the public key. The certificate provider signs the request, thus producing a public certificate. During web browsing, this public certificate is served to any web browser that connects to the web site and proves to the web browser that the provider believes it has issued a certificate to the owner of the web site.
As an example, when a user connects to https://www.example.com/ with their browser, if the browser does not give any certificate warning message, then the user can be theoretically sure that interacting with https://www.example.com/ is equivalent to interacting with the entity in contact with the email address listed in the public registrar under "example.com", even though that email address may not be displayed anywhere on the web site. No other surety of any kind is implied. Further, the relationship between the purchaser of the certificate, the operator of the web site, and the generator of the web site content may be tenuous and is not guaranteed. At best, the certificate guarantees uniqueness of the web site, provided that the web site itself has not been compromised (hacked) or the certificate issuing process subverted.
A certificate provider can opt to issue three types of certificates, each requiring its own degree of vetting rigor. In order of increasing rigor (and naturally, cost) they are: Domain Validation, Organization Validation and Extended Validation. These rigors are loosely agreed upon by voluntary participants in the CA/Browser Forum.
=== Validation levels ===
==== Domain validation ====
A certificate provider will issue a domain-validated (DV) certificate to a purchaser if the purchaser can demonstrate one vetting criterion: the right to administratively manage the affected DNS domain(s).
==== Organization validation ====
A certificate provider will issue an organization validation (OV) class certificate to a purchaser if the purchaser can meet two criteria: the right to administratively manage the domain name in question, and perhaps, the organization's actual existence as a legal entity. A certificate provider publishes its OV vetting criteria through its certificate policy.
==== Extended validation ====
To acquire an Extended Validation (EV) certificate, the purchaser must persuade the certificate provider of its legal identity, including manual verification checks by a human. As with OV certificates, a certificate provider publishes its EV vetting criteria through its certificate policy.
Until 2019, major browsers such as Chrome and Firefox generally offered users a visual indication of the legal identity when a site presented an EV certificate, showing the legal name before the domain with a bright green color to highlight the change. Most browsers have since deprecated this feature, providing no visual difference to the user based on the type of certificate used. This change followed security concerns raised by forensic experts and successful attempts to purchase EV certificates to impersonate famous organizations, which proved the inefficiency of these visual indicators and highlighted potential abuses.
=== Weaknesses ===
A web browser will give no warning to the user if a web site suddenly presents a different certificate, even if that certificate has a lower number of key bits, even if it has a different provider, and even if the previous certificate had an expiry date far into the future. Where certificate providers are under the jurisdiction of governments, those governments may have the freedom to order the provider to generate any certificate, such as for the purposes of law enforcement. Subsidiary wholesale certificate providers also have the freedom to generate any certificate.
All web browsers come with an extensive built-in list of trusted root certificates, many of which are controlled by organizations that may be unfamiliar to the user. Each of these organizations is free to issue any certificate for any web site and have the guarantee that web browsers that include its root certificates will accept it as genuine. In this instance, end users must rely on the developer of the browser software to manage its built-in list of certificates and on the certificate providers to behave correctly and to inform the browser developer of problematic certificates. While uncommon, there have been incidents in which fraudulent certificates have been issued: in some cases, the browsers have detected the fraud; in others, some time passed before browser developers removed these certificates from their software.
The list of built-in certificates is also not limited to those provided by the browser developer: users (and to a degree applications) are free to extend the list for special purposes such as for company intranets. This means that if someone gains access to a machine and can install a new root certificate in the browser, that browser will recognize websites that use the inserted certificate as legitimate.
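As an illustration of this reliance on a platform trust store, Python's standard `ssl` module loads whichever roots the operating system (or distribution package) provides; the exact counts vary by platform, so the values below are only examples:

```python
import ssl

# Load the platform's default trust store, as a browser that relies on
# the operating system would.
ctx = ssl.create_default_context()
stats = ctx.cert_store_stats()
print(stats)  # e.g. {'x509': 140, 'crl': 0, 'x509_ca': 140}

# 'x509_ca' counts the certificates the context will accept as trust
# anchors; each one can vouch for any web site.
assert set(stats) == {"x509", "crl", "x509_ca"}
assert 0 <= stats["x509_ca"] <= stats["x509"]
```

Adding a root to this store (for example, for a company intranet) immediately makes every certificate it issues acceptable to applications built on the context.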
For provable security, this reliance on something external to the system has the consequence that any public key certification scheme has to rely on some special setup assumption, such as the existence of a certificate authority.
=== Usefulness versus unsecured web sites ===
In spite of the limitations described above, certificate-authenticated TLS is considered mandatory by all security guidelines whenever a web site hosts confidential information or performs material transactions: in practice, despite these weaknesses, web sites secured by public key certificates are still more secure than unsecured http:// web sites.
== Standards ==
The National Institute of Standards and Technology (NIST) Computer Security Division provides guidance documents for public key certificates:
SP 800-32 Introduction to Public Key Technology and the Federal PKI Infrastructure
SP 800-25 Federal Agency Use of Public Key Technology for Digital Signatures and Authentication
== See also ==
Authorization certificate
Pretty Good Privacy
== References ==
Differential cryptanalysis is a general form of cryptanalysis applicable primarily to block ciphers, but also to stream ciphers and cryptographic hash functions. In the broadest sense, it is the study of how differences in information input can affect the resultant difference at the output. In the case of a block cipher, it refers to a set of techniques for tracing differences through the network of transformations, discovering where the cipher exhibits non-random behavior, and exploiting such properties to recover the secret key.
== History ==
The discovery of differential cryptanalysis is generally attributed to Eli Biham and Adi Shamir in the late 1980s, who published a number of attacks against various block ciphers and hash functions, including a theoretical weakness in the Data Encryption Standard (DES). Biham and Shamir noted that DES was surprisingly resistant to differential cryptanalysis, but that small modifications to the algorithm would make it much more susceptible.
In 1994, a member of the original IBM DES team, Don Coppersmith, published a paper stating that differential cryptanalysis was known to IBM as early as 1974, and that defending against differential cryptanalysis had been a design goal. According to author Steven Levy, IBM had discovered differential cryptanalysis on its own, and the NSA was apparently well aware of the technique. IBM kept some secrets, as Coppersmith explains: "After discussions with NSA, it was decided that disclosure of the design considerations would reveal the technique of differential cryptanalysis, a powerful technique that could be used against many ciphers. This in turn would weaken the competitive advantage the United States enjoyed over other countries in the field of cryptography." Within IBM, differential cryptanalysis was known as the "T-attack" or "Tickle attack".
While DES was designed with resistance to differential cryptanalysis in mind, other contemporary ciphers proved to be vulnerable. An early target for the attack was the FEAL block cipher. The original proposed version with four rounds (FEAL-4) can be broken using only eight chosen plaintexts, and even a 31-round version of FEAL is susceptible to the attack. By contrast, breaking the full 16-round DES requires an effort on the order of 2^47 chosen plaintexts.
== Attack mechanics ==
Differential cryptanalysis is usually a chosen plaintext attack, meaning that the attacker must be able to obtain ciphertexts for some set of plaintexts of their choosing. There are, however, extensions that would allow a known plaintext or even a ciphertext-only attack. The basic method uses pairs of plaintexts related by a constant difference. Difference can be defined in several ways, but the eXclusive OR (XOR) operation is usual. The attacker then computes the differences of the corresponding ciphertexts, hoping to detect statistical patterns in their distribution. The resulting pair of differences is called a differential. Their statistical properties depend upon the nature of the S-boxes used for encryption, so the attacker analyses differentials
(Δx, Δy), where Δy = S(x ⊕ Δx) ⊕ S(x) (and ⊕ denotes exclusive or), for each such S-box S. In the basic attack, one particular ciphertext difference is expected to be especially frequent. In this way, the cipher can be distinguished from random. More sophisticated variations allow the key to be recovered faster than an exhaustive search.
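The statistical behavior of an S-box under all differentials (Δx, Δy) is summarized in its difference distribution table. A minimal sketch follows; the 3-bit S-box is an arbitrary illustration of ours, not from any real cipher:

```python
def difference_distribution_table(sbox):
    """Count, for every input difference dx, how often each output
    difference dy = S(x ^ dx) ^ S(x) occurs over all inputs x."""
    n = len(sbox)
    ddt = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            dy = sbox[x ^ dx] ^ sbox[x]
            ddt[dx][dy] += 1
    return ddt

# A toy 3-bit S-box (chosen arbitrarily for illustration):
S = [3, 1, 4, 7, 2, 6, 5, 0]
ddt = difference_distribution_table(S)

assert ddt[0][0] == 8                     # zero difference always maps to zero
assert all(sum(row) == 8 for row in ddt)  # each row accounts for all 8 inputs
# The largest non-trivial entry, divided by 8, is the best differential
# probability an attacker can exploit; for XOR differences the solutions
# come in pairs, so entries are always even.
best = max(ddt[dx][dy] for dx in range(1, 8) for dy in range(8))
assert best >= 2 and best % 2 == 0
```

A cipher designer wants `best` to be as small as possible for every S-box, which is exactly the property the attack exploits when it is not.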
In the most basic form of key recovery through differential cryptanalysis, an attacker requests the ciphertexts for a large number of plaintext pairs, then assumes that the differential holds for at least r − 1 rounds, where r is the total number of rounds. The attacker then deduces which round keys (for the final round) are possible, assuming the difference between the blocks before the final round is fixed. When round keys are short, this can be achieved by simply exhaustively decrypting the ciphertext pairs one round with each possible round key. When one round key has been deemed a potential round key considerably more often than any other key, it is assumed to be the correct round key.
For any particular cipher, the input difference must be carefully selected for the attack to be successful. An analysis of the algorithm's internals is undertaken; the standard method is to trace a path of highly probable differences through the various stages of encryption, termed a differential characteristic.
Since differential cryptanalysis became public knowledge, it has become a basic concern of cipher designers. New designs are expected to be accompanied by evidence that the algorithm is resistant to this attack, and many, including the Advanced Encryption Standard, have been proven secure against it.
== Attack in detail ==
The attack relies primarily on the fact that a given input/output difference pattern only occurs for certain values of inputs. Usually the attack is applied in essence to the non-linear components as if they were a solid component (usually they are in fact look-up tables or S-boxes). Observing the desired output difference (between two chosen or known plaintext inputs) suggests possible key values.
For example, if a differential of 1 => 1 (implying a difference in the least significant bit (LSB) of the input leads to an output difference in the LSB) occurs with probability of 4/256 (possible with the non-linear function in the AES cipher for instance) then for only 4 values (or 2 pairs) of inputs is that differential possible. Suppose we have a non-linear function where the key is XOR'ed before evaluation and the values that allow the differential are {2,3} and {4,5}. If the attacker sends in the values of {6, 7} and observes the correct output difference it means the key is either 6 ⊕ K = 2, or 6 ⊕ K = 4, meaning the key K is either 2 or 4.
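The example above can be made concrete. The sketch below uses a hypothetical 3-bit S-box constructed by us so that the differential 1 → 1 holds exactly for the input pairs {2,3} and {4,5}; the secret key and chosen plaintexts are likewise illustrative:

```python
# Hypothetical 3-bit S-box: S(x^1)^S(x) == 1 exactly when x is in {2,3,4,5}.
S = [4, 6, 0, 1, 2, 3, 5, 7]

def oracle(p, key):
    """One-S-box 'cipher': the key is XORed in before the S-box."""
    return S[p ^ key]

SECRET = 4  # unknown to the attacker

# The attacker chooses a plaintext pair with difference 1 and observes
# the output difference:
p0, p1 = 6, 7
out_diff = oracle(p0, SECRET) ^ oracle(p1, SECRET)

# Every key guess consistent with the observation remains a candidate:
candidates = {k for k in range(8)
              if S[p0 ^ k] ^ S[p1 ^ k] == out_diff}
assert SECRET in candidates
assert len(candidates) < 8       # the observation has ruled out keys

# A second chosen pair, with a different input difference, narrows the
# set further; the true key always survives the filtering.
q0, q1 = 0, 2
out_diff2 = oracle(q0, SECRET) ^ oracle(q1, SECRET)
candidates &= {k for k in range(8)
               if S[q0 ^ k] ^ S[q1 ^ k] == out_diff2}
assert SECRET in candidates
assert len(candidates) == 2      # one "ghost" key remains alongside 4
```

Real attacks apply the same filtering to the last round key of a full cipher, using pairs that follow a high-probability characteristic through the earlier rounds.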
In essence, to protect a cipher from the attack, an n-bit non-linear function should have a maximum differential probability as close to 2^-(n-1) as possible (the ideal differential uniformity). When this holds, the differential attack requires as much work to determine the key as simply brute-forcing it.
The AES non-linear function has a maximum differential probability of 4/256 (most entries, however, are either 0 or 2). This means that in theory one could determine the key with half as much work as brute force; however, the high branch number of AES prevents any high-probability trails from existing over multiple rounds. In fact, the AES cipher would be just as immune to differential and linear attacks with a much weaker non-linear function. The exceptionally high branch number (active S-box count) of 25 over 4 rounds means that over 8 rounds no attack involves fewer than 50 non-linear transforms, so the probability of success does not exceed Pr[attack] ≤ Pr[best attack on S-box]^50. For example, with the current S-box, AES emits no fixed differential with a probability higher than (4/256)^50, or 2^-300, which is far lower than the required threshold of 2^-128 for a 128-bit block cipher. This would have allowed room for a more efficient S-box: even if it were 16-uniform, the probability of attack would still have been only 2^-200.
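The arithmetic behind the quoted bound can be checked directly:

```python
import math

# Maximum differential probability of the AES S-box:
best_sbox_dp = 4 / 256                 # exactly 2^-6

# 50 active S-boxes over 8 rounds bound any trail's probability:
bound_log2 = 50 * math.log2(best_sbox_dp)
assert bound_log2 == -300              # (4/256)^50 = 2^-300
assert bound_log2 < -128               # far below the 2^-128 threshold

# Even a hypothetical 16-uniform S-box would still suffice:
weaker = 50 * math.log2(16 / 256)
assert weaker == -200                  # (16/256)^50 = 2^-200
```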
There exist no bijections with 2-uniformity for even-sized inputs/outputs. They exist in odd-dimension binary fields (such as GF(2^7)) using either cubing or inversion (other exponents can be used as well). For instance, S(x) = x^3 in any odd binary field is immune to differential and linear cryptanalysis. This is in part why the MISTY designs use 7- and 9-bit functions in their 16-bit non-linear function. What these functions gain in immunity to differential and linear attacks, they lose to algebraic attacks: they can be described and solved via a SAT solver. This is in part why AES (for instance) has an affine mapping after the inversion.
== Specialized types ==
Higher-order differential cryptanalysis
Truncated differential cryptanalysis
Impossible differential cryptanalysis
Boomerang attack
== See also ==
Cryptography
Integral cryptanalysis
Linear cryptanalysis
Differential equations of addition
== References ==
== Further reading ==
== External links ==
A tutorial on differential (and linear) cryptanalysis
Helger Lipmaa's links on differential cryptanalysis
A description of the attack applied to DES at the Wayback Machine (archived October 19, 2007)
End-to-end auditable or end-to-end voter verifiable (E2E) systems are voting systems with stringent integrity properties and strong tamper resistance. E2E systems use cryptographic techniques to provide voters with receipts that allow them to verify their votes were counted as cast, without revealing which candidates a voter supported to an external party. As such, these systems are sometimes called receipt-based systems.
== Overview ==
Electronic voting systems arrive at their final vote totals by a series of steps:
voters cast ballots either electronically or manually,
cast vote records are tallied to generate totals,
where elections are conducted locally, such as at the precinct or county level, the results from each level are combined to produce the final tally.
Classical approaches to election integrity focus on ensuring the security of each step individually, going from voter intent to the final total. Such approaches have generally fallen out of favor with distributed system designers, as this local focus may miss some vulnerabilities while over-protecting others. The alternative is to use end-to-end measures that are designed to demonstrate the integrity of the entire chain.
Comprehensive coverage of election integrity frequently involves multiple stages. Voters are expected to verify that they have marked their ballots as intended, recounts or audits are used to protect the step from marked ballots to ballot-box totals, and publication of all subtotals allows public verification that the overall totals correctly sum the ballot-box totals. Conventional voting schemes do not meet this standard, and as a result cannot conclusively prove that no votes have been tampered with at any point; voters and auditors must instead verify each individual step is fully secure, which may be difficult and introduces many points of failure.
While measures such as voter verified paper audit trails and manual recounts measure the effectiveness of some steps, they offer only weak measurement of the integrity of the physical or electronic ballot boxes. Ballots could be removed, replaced, or could have marks added to them without detection (i.e. to fill in undervoted contests with votes for a desired candidate or to overvote and spoil votes for undesired candidates). This shortcoming motivated the development of the end-to-end auditable voting systems discussed here, sometimes referred to as E2E voting systems. These attempt to cover the entire path from voter attempt to election totals with just two measures:
Individual verifiability, by which any voter may check that their ballot is correctly included in the electronic ballot box, and
Universal verifiability, by which anyone may determine that all of the ballots in the box have been correctly counted.
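A deliberately non-cryptographic sketch of these two checks follows; a real E2E system publishes encrypted ballots together with tally proofs, so a receipt reveals nothing about vote content. The bulletin-board layout and receipt identifiers below are entirely our own invention:

```python
# Public "bulletin board" of (receipt id, encrypted ballot) pairs.
bulletin_board = [
    ("r-101", "ciphertext-a"),
    ("r-102", "ciphertext-b"),
    ("r-103", "ciphertext-c"),
]

def individual_check(receipt_id):
    """Individual verifiability: any voter can confirm their own
    ballot is present in the public ballot box."""
    return any(rid == receipt_id for rid, _ in bulletin_board)

def universal_check(claimed_ballot_count):
    """Universal verifiability (simplified): anyone can confirm the
    announced tally covers exactly the posted ballots."""
    return len(bulletin_board) == claimed_ballot_count

assert individual_check("r-102")       # my ballot is in the box
assert not individual_check("r-999")   # a dropped ballot is detectable
assert universal_check(3)              # totals cover all posted ballots
```

In a real system the second check is performed on ciphertexts with a verifiable tally proof, not by simple counting.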
Because of the importance of the right to a secret ballot, most E2E voting schemes also attempt to meet a third requirement called receipt-freeness:
No voter can prove how he or she voted to any third party.
It was originally believed that these two goals could not be achieved simultaneously, but further research has since shown that they can co-exist. Both are combined in the 2005 Voluntary Voting System Guidelines promulgated by the Election Assistance Commission, and this definition is also predominant in the academic literature.
To address ballot stuffing, the following measure can be adopted:
Eligibility verifiability, by which anyone may determine that all counted ballots were cast by registered voters.
Alternatively, assertions regarding ballot stuffing can be externally verified by comparing the number of ballots on hand with the number of registered voters recorded as having voted, and by auditing other aspects of the registration and ballot delivery system.
Support for E2E auditability, based on prior experience using it with in-person elections, is also seen as a requirement for remote voting over the Internet by many experts.
== Proposed E2E Systems ==
In 2004, David Chaum proposed a solution that allows each voter to verify that their votes are cast appropriately and that the votes are accurately tallied using visual cryptography. After the voter selects their candidates, a voting machine prints out a specially formatted version of the ballot on two transparencies. When the layers are stacked, they show the human-readable vote. However, each transparency is encrypted with a form of visual cryptography so that it alone does not reveal any information unless it is decrypted. The voter selects one layer to destroy at the poll. The voting machine retains an electronic copy of the other layer and gives the physical copy as a receipt to allow the voter to confirm that the electronic ballot was not later changed. The system detects changes to the voter's ballot and uses a mix-net decryption procedure to check if each vote is accurately counted. Karlof, Sastry and Wagner pointed out that there are issues with both of the Chaum and VoteHere cryptographic solutions.
Chaum's team subsequently developed Punchscan, which has stronger security properties and uses simpler paper ballots. The paper ballots are voted on and then a privacy-preserving portion of the ballot is scanned by an optical scanner.
The Prêt à Voter system, invented by Peter Ryan, uses a shuffled candidate order and a traditional mix network. As in Punchscan, the votes are made on paper ballots and a portion of the ballot is scanned.
The Scratch and Vote system, invented by Ben Adida, uses a scratch-off surface to hide cryptographic information that can be used to verify the correct printing of the ballot.
The ThreeBallot voting protocol, invented by Ron Rivest, was designed to provide some of the benefits of a cryptographic voting system without using cryptography. It can in principle be implemented on paper although the presented version requires an electronic verifier.
The Scantegrity and Scantegrity II systems provide E2E properties. Rather than replacing the entire voting system, as is the case in all the preceding examples, it works as an add-on for existing optical scan voting systems, producing conventional voter-verifiable paper ballots suitable for risk-limiting audits. Scantegrity II employs invisible ink and was developed by a team that included Chaum, Rivest, and Ryan.
The STAR-Vote system was defined for Travis County, the fifth most populous county in Texas, and home of the state capital, Austin. It illustrated another way to combine an E2E system with conventionally auditable paper ballots, produced in this case by a ballot marking device. The project produced a detailed spec and request for proposals in 2016, and bids were received for all the components, but no existing contractor with an EAC-certified voting system was willing to adapt their system to work with the novel cryptographic open-source components, as required by the RFP.
Building on the STAR-Vote experience, Josh Benaloh at Microsoft led the design and development of ElectionGuard, a software development kit that can be combined with existing voting systems to add E2E support. The voting system interprets the voter's choices, stores them for further processing, then calls ElectionGuard, which encrypts these interpretations and prints a receipt for the voter. The receipt has a number which corresponds to the encrypted interpretation. The voter can then disavow the ballot (spoil it), and vote again. Later, independent sources, such as political parties, can obtain the file of numbered encrypted ballots and sum the different contests on the encrypted file to see if they match the election totals. The voter can ask those independent sources if the number(s) on the voter's receipt(s) appear in the file. If enough voters check that their numbers are in the file, they will find if ballots are omitted. Voters can get the decrypted contents of their spoiled ballots, to determine if they accurately match what the voter remembers was on those ballots. The voter cannot get decrypted copies of voted ballots, to prevent selling votes. If enough voters check spoiled ballots, they will expose mistakes in encryption. ElectionGuard does not detect ballot stuffing, which must be detected by traditional records. It does not detect people who falsify receipts, claiming their ballot is missing or was interpreted in error. Election officials will need to decide how to track claimed errors, how many are needed to start an investigation, how to investigate, and how to recover from errors; state law may give staff no authority to take action. ElectionGuard does not tally write-ins, except as an undifferentiated total. It is incompatible with overvotes.
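The homomorphic-tallying idea that lets independent parties sum an encrypted ballot file can be sketched with textbook exponential ElGamal. The group parameters and key below are toy values of our choosing, for illustration only; ElectionGuard itself uses large standardized groups, threshold-shared keys, and zero-knowledge proofs of ballot well-formedness:

```python
import random

p = 467            # toy safe prime: p = 2*233 + 1 (never use in practice)
g = 4              # generator of the order-233 subgroup
x = 57             # trustee secret key (toy)
h = pow(g, x, p)   # election public key

def encrypt(vote, rng):
    """Encrypt a 0/1 vote as (g^r, g^vote * h^r): exponential ElGamal."""
    r = rng.randrange(1, 233)
    return (pow(g, r, p), pow(g, vote, p) * pow(h, r, p) % p)

def add(c1, c2):
    """Componentwise product of ciphertexts encrypts the sum of votes."""
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def decrypt_tally(c, max_tally):
    a, b = c
    m = b * pow(a, -x, p) % p          # recovers g^tally
    for t in range(max_tally + 1):     # small discrete log by search
        if pow(g, t, p) == m:
            return t

rng = random.Random(1)
votes = [1, 0, 1, 1, 0]
total = encrypt(votes[0], rng)
for v in votes[1:]:
    total = add(total, encrypt(v, rng))
# Anyone can compute `total` from the public ciphertext file; only the
# trustees can decrypt it, and they decrypt just the sum, never a ballot.
assert decrypt_tally(total, len(votes)) == sum(votes)  # tally of 3
```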
== Use in elections ==
The city of Takoma Park, Maryland used Scantegrity II for its 2009 and 2011 city elections.
Helios has been used since 2009 by several organizations and universities for general elections, board elections, and student council elections.
Wombat Voting was used in student council elections at the private research college Interdisciplinary Center Herzliya in 2011 and 2012, as well as in the primary elections for the Israeli political party Meretz in 2012.
A modified version of Prêt à Voter was used as part of the vVote poll-site electronic voting system at the 2014 Victorian State Election in Australia.
ElectionGuard was combined with a voting system from VotingWorks and used for the Fulton, Wisconsin spring primary election on February 18, 2020.
A touch-screen based DRE-ip implementation was trialed in a polling station in Gateshead on 2 May 2019 as part of the 2019 United Kingdom local elections. A browser-based DRE-ip implementation was used in an online voting trial in October 2022 among the residents of New Town, Kolkata, India during the 2022 Durga Puja festival celebration.
== Examples ==
ADDER
Helios
Prêt à Voter
Punchscan
Scantegrity
Wombat Voting
ThreeBallot
Bingo Voting
homomorphic secret sharing
DRE-i (E2E verifiable e-voting without tallying authorities based on pre-computation)
DRE-ip (E2E verifiable e-voting without tallying authorities based on real-time computation)
Assembly Voting X
== References ==
== External links ==
Verifying Elections with Cryptography — Video of Ben Adida's 90-minute tech talk
Helios: Web-based Open-Audit Voting — PDF describing Ben Adida's Helios web-site
Helios Voting System web-site
Simple Auditable & Anonymous Voting Scheme
Study on Poll-Site Voting and Verification Systems — A review of existing electronic voting systems and its verification systems in supervised environments.
a 2020 MIT Media Lab article about end to end verifiable voting systems, includes discussion of blockchains
A Really Secret Ballot — Article by The Economist
The Dolev–Yao model, named after its authors Danny Dolev and Andrew Yao, is a formal model used to prove properties of interactive cryptographic protocols.
== The network ==
The network is represented by a set of abstract machines that can exchange messages.
These messages consist of formal terms. These terms reveal some of the internal structure of the messages, but some parts will hopefully remain opaque to the adversary.
== The adversary ==
The adversary in this model can overhear, intercept, and synthesize any message and is only limited by the constraints of the cryptographic methods used. In other words: "the attacker carries the message."
This omnipotence has been very difficult to model, and many threat models simplify it, as has been done for the attacker in ubiquitous computing.
== The algebraic model ==
Cryptographic primitives are modeled by abstract operators. For example, asymmetric encryption for a user x is represented by the encryption function E_x and the decryption function D_x. Their main properties are that their composition is the identity function (D_x E_x = E_x D_x = 1) and that an encrypted message E_x(M) reveals nothing about M. Unlike in the real world, the adversary can neither manipulate the encryption's bit representation nor guess the key. The attacker may, however, re-use any messages that have been sent and therefore become known. The attacker can encrypt or decrypt these with any keys he knows, to forge subsequent messages.
A protocol is modeled as a set of sequential runs, alternating between queries (sending a message over the network) and responses (obtaining a message from the network).
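These attacker capabilities can be sketched as a deduction closure over symbolic terms. The term encoding below (tuples for encryption and pairing, strings for atoms) is our own choice, not a standard notation:

```python
# Terms: ("enc", key, msg), ("pair", a, b), or atomic strings.

def close(knowledge):
    """Close the attacker's knowledge under projection of pairs and
    decryption with known keys (the Dolev-Yao deduction rules for
    message analysis)."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for term in list(known):
            if isinstance(term, tuple):
                if term[0] == "pair":                   # split pairs
                    new = {term[1], term[2]}
                elif term[0] == "enc" and term[1] in known:
                    new = {term[2]}                     # decrypt with a known key
                else:
                    new = set()
                if not new <= known:
                    known |= new
                    changed = True
    return known

# The adversary overhears an encrypted message but lacks the key:
observed = {("enc", "k_ab", "secret"), "k_other"}
assert "secret" not in close(observed)
# Once the key leaks, the plaintext becomes derivable:
assert "secret" in close(observed | {"k_ab"})
```

Tools for symbolic protocol verification explore exactly this kind of derivability question, together with rules for synthesizing new messages from known terms.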
=== Remark ===
The symbolic nature of the Dolev–Yao model makes it more manageable than computational models and accessible to algebraic methods but potentially less realistic. However, both kinds of models for cryptographic protocols have been related. Also, symbolic models are very well suited to show that a protocol is broken, rather than secure, under the given assumptions about the attacker's capabilities.
== See also ==
Security
Cryptographic protocol
== References ==
Quantum key distribution (QKD) protocols are used in quantum key distribution. The first protocol of that kind was BB84, introduced in 1984 by Charles H. Bennett and Gilles Brassard. After that, many other protocols have been defined.
== List of quantum key distribution protocols ==
BB84 (1984) is a quantum key distribution scheme that allows two parties to securely establish a private key for use in one-time pad encryption. It relies on an authenticated public classical channel and on the quantum property that information gain is only possible at the expense of disturbing the signal when the two states being distinguished are not orthogonal.
E91 protocol (1991) is a quantum cryptography method that uses entangled pairs of photons to generate keys for secure communication, with the ability to detect any attempts at eavesdropping by an external party through the violation of Bell's Theorem and the preservation of perfect correlation between the measurements of the two parties.
BBM92 protocol (1992) is a quantum key distribution method that uses polarized entangled photon pairs and decoy states to securely transmit non-orthogonal quantum signals.
B92 protocol (1992) is a quantum key distribution method in which the sender transmits one of two nonorthogonal quantum states. Its unconditional security, even over lossy and noisy channels, has been shown using entanglement distillation arguments together with local filtering and Z-basis measurements, with the security of the transmission determined by the number of errors and the number of filter pairs used.
MSZ96 protocol (1996) uses four nonorthogonal quantum states of a weak optical field to encode a cryptographic key bit without the use of photon polarization or entangled photons.
Six-state protocol (1998) is a method of transmitting secure information using quantum cryptography that is more resistant to noise and easier to detect errors in compared to the BB84 protocol, due to its use of a six-state polarization scheme on three orthogonal bases and its ability to tolerate a noisier channel.
DPS protocol (2002) is a simple and efficient quantum key distribution (QKD) method that does not require a basis selection process like the traditional BB84 protocol, has a simpler receiver configuration with fewer detectors, uses efficient sequential pulses in the time domain for high key creation speed, and is robust against photon-number splitting attacks even with weak coherent light.
Decoy state protocol (2003) is a method used in practical quantum cryptography systems that uses multiple intensity levels at the transmitter's source and monitors bit error rates to detect and prevent photon number splitting attacks, enabling higher secure transmission rates or longer maximum channel lengths.
SARG04 (2004) is a quantum key distribution protocol that was developed as a more robust version of BB84, especially against photon-number-splitting attacks, for use with attenuated laser pulses in situations where the information is originated by a Poissonian source producing weak pulses and received by an imperfect detector.
COW protocol (2005) allows for secure communication between two parties by transmitting a key using weak coherent pulses of light; it has the advantages of requiring only a random number generator on the client side and of transmitting key information at a high rate.
Three-stage quantum cryptography protocol (2006) is a method of data encryption that uses random polarization rotations by the two authenticated parties to continuously encrypt data using single photons; it can also be used for key exchange, admits a multi-photon variant, and can be modified to address man-in-the-middle attacks.
KMB09 protocol (2009) allows for increased transmission distances between Alice and Bob by using two mutually unbiased bases and introducing a minimum index transmission error rate and quantum bit error rate, which is particularly effective for higher-dimensional photon states.
HDQKD is a technology that enables secure communication between two parties by encoding quantum information in high dimensions, such as optical angular momentum modes, and transmitting it over long distances through multicore fibers or free-space links.
T12 protocol aims to increase the practicality of QKD by removing certain idealizations and including features that can increase the key rate of the system.
In computing, a database is an organized collection of data or a type of data store based on the use of a database management system (DBMS), the software that interacts with end users, applications, and the database itself to capture and analyze the data. The DBMS additionally encompasses the core facilities provided to administer the database. The sum total of the database, the DBMS and the associated applications can be referred to as a database system. Often the term "database" is also used loosely to refer to any of the DBMS, the database system or an application associated with the database.
Before digital storage and retrieval of data had become widespread, index cards were used for data storage in a wide range of applications and environments: in the home to record and store recipes, shopping lists, contact information and other organizational data; in business to record presentation notes, project research and notes, and contact information; in schools as flash cards or other visual aids; and in academic research to hold data such as bibliographical citations or notes in a card file. Professional book indexers used index cards in the creation of book indexes until they were replaced by indexing software in the 1980s and 1990s.
Small databases can be stored on a file system, while large databases are hosted on computer clusters or cloud storage. The design of databases spans formal techniques and practical considerations, including data modeling, efficient data representation and storage, query languages, security and privacy of sensitive data, and distributed computing issues, including supporting concurrent access and fault tolerance.
Computer scientists may classify database management systems according to the database models that they support. Relational databases became dominant in the 1980s. These model data as rows and columns in a series of tables, and the vast majority use SQL for writing and querying data. In the 2000s, non-relational databases became popular; these are collectively referred to as NoSQL because they use query languages other than SQL.
== Terminology and overview ==
Formally, a "database" refers to a set of related data accessed through the use of a "database management system" (DBMS), which is an integrated set of computer software that allows users to interact with one or more databases and provides access to all of the data contained in the database (although restrictions may exist that limit access to particular data). The DBMS provides various functions that allow entry, storage and retrieval of large quantities of information and provides ways to manage how that information is organized.
Because of the close relationship between them, the term "database" is often used casually to refer to both a database and the DBMS used to manipulate it.
Outside the world of professional information technology, the term database is often used to refer to any collection of related data (such as a spreadsheet or a card index) as size and usage requirements typically necessitate use of a database management system.
Existing DBMSs provide various functions that allow management of a database and its data which can be classified into four main functional groups:
Data definition – Creation, modification and removal of definitions that detail how the data is to be organized.
Update – Insertion, modification, and deletion of the data itself.
Retrieval – Selecting data according to specified criteria (e.g., a query, a position in a hierarchy, or a position in relation to other data) and providing that data either directly to the user, or making it available for further processing by the database itself or by other applications. The retrieved data may be made available in a more or less direct form without modification, as it is stored in the database, or in a new form obtained by altering it or combining it with existing data from the database.
Administration – Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by some event such as an unexpected system failure.
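The first three functional groups above can be illustrated with a minimal sketch using Python's built-in sqlite3 module (the table and column names are invented for the example; administration is largely handled by DBMS facilities rather than application code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Data definition: create (and later possibly alter or drop) a table definition.
cur.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")

# Update: insertion and modification of the data itself.
cur.execute("INSERT INTO employee (name, salary) VALUES (?, ?)", ("Ada", 90000.0))
cur.execute("UPDATE employee SET salary = salary * 1.1 WHERE name = ?", ("Ada",))

# Retrieval: selecting data according to specified criteria.
rows = cur.execute("SELECT name, salary FROM employee WHERE salary > 50000").fetchall()

# Administration (users, integrity, recovery) is provided by the DBMS itself;
# here we only commit the transaction.
conn.commit()
```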
Both a database and its DBMS conform to the principles of a particular database model. "Database system" refers collectively to the database model, database management system, and database.
Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large-volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions.
Since DBMSs comprise a significant market, computer and storage vendors often take into account DBMS requirements in their own development plans.
Databases and DBMSs can be categorized according to the database model(s) that they support (such as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone), the query language(s) used to access the database (such as SQL or XQuery), and their internal engineering, which affects performance, scalability, resilience, and security.
== History ==
The sizes, capabilities, and performance of databases and their respective DBMSs have grown in orders of magnitude. These performance increases were enabled by the technology progress in the areas of processors, computer memory, computer storage, and computer networks. The concept of a database was made possible by the emergence of direct access storage media such as magnetic disks, which became widely available in the mid-1960s; earlier systems relied on sequential storage of data on magnetic tape. The subsequent development of database technology can be divided into three eras based on data model or structure: navigational, SQL/relational, and post-relational.
The two main early navigational data models were the hierarchical model and the CODASYL model (network model). These were characterized by the use of pointers (often physical disk addresses) to follow relationships from one record to another.
The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems dominated in all large-scale data processing applications, and as of 2018 they remain dominant: IBM Db2, Oracle, MySQL, and Microsoft SQL Server are the most searched-for DBMSs. The dominant database language, standardized SQL for the relational model, has influenced database languages for other data models.
Object databases were developed in the 1980s to overcome the inconvenience of object–relational impedance mismatch, which led to the coining of the term "post-relational" and also the development of hybrid object–relational databases.
The next generation of post-relational databases in the late 2000s became known as NoSQL databases, introducing fast key–value stores and document-oriented databases. A competing "next generation" known as NewSQL databases attempted new implementations that retained the relational/SQL model while aiming to match the high performance of NoSQL compared to commercially available relational DBMSs.
=== 1960s, navigational DBMS ===
The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing. The Oxford English Dictionary cites a 1962 report by the System Development Corporation of California as the first to use the term "data-base" in a specific technical sense.
As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s a number of such systems had come into commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store (IDS), founded the Database Task Group within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971, the Database Task Group delivered their standard, which generally became known as the CODASYL approach, and soon a number of commercial products based on this approach entered the market.
The CODASYL approach offered applications the ability to navigate around a linked data set which was formed into a large network. Applications could find records by one of three methods:
Use of a primary key (known as a CALC key, typically implemented by hashing)
Navigating relationships (called sets) from one record to another
Scanning all the records in a sequential order
Later systems added B-trees to provide alternate access paths. Many CODASYL databases also added a declarative query language for end users (as distinct from the navigational API). However, CODASYL databases were complex and required significant training and effort to produce useful applications.
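A hypothetical in-memory sketch of the three CODASYL access methods, with record and set names invented for illustration (real CODASYL systems navigated disk records via physical pointers, not Python dictionaries):

```python
# Toy record store: each record carries data plus a pointer to the next
# member of its "set", mimicking CODASYL's linked structure.
records = {
    "ORD-1": {"customer": "Acme", "next_in_set": "ORD-2"},
    "ORD-2": {"customer": "Acme", "next_in_set": None},
}

# 1. Direct access via the CALC (primary) key, implemented here as a
#    dictionary lookup standing in for hashing.
rec = records["ORD-1"]

# 2. Navigating a set: follow pointers from one record to the next.
chain, key = [], "ORD-1"
while key is not None:
    chain.append(key)
    key = records[key]["next_in_set"]

# 3. Sequential scan over all records.
all_keys = sorted(records)
```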
IBM also had its own DBMS in 1966, known as Information Management System (IMS). IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a strict hierarchy for its model of data navigation instead of CODASYL's network model. Both concepts later became known as navigational databases due to the way data was accessed: the term was popularized by Bachman's 1973 Turing Award presentation The Programmer as Navigator. IMS is classified by IBM as a hierarchical database. IDMS and Cincom Systems' TOTAL databases are classified as network databases. IMS remains in use as of 2014.
=== 1970s, relational DBMS ===
Edgar F. Codd worked at IBM in San Jose, California, in an office primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the lack of a "search" facility. In 1970, he wrote a number of papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.
The paper described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in CODASYL, Codd's idea was to organize the data as a number of "tables", each table being used for a different type of entity. Each table would contain a fixed number of columns containing the attributes of the entity. One or more columns of each table were designated as a primary key by which the rows of the table could be uniquely identified; cross-references between tables always used these primary keys, rather than disk addresses, and queries would join tables based on these key relationships, using a set of operations based on the mathematical system of relational calculus (from which the model takes its name). Splitting the data into a set of normalized tables (or relations) aimed to ensure that each "fact" was only stored once, thus simplifying update operations. Virtual tables called views could present the data in different ways for different users, but views could not be directly updated.
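A minimal sketch of these ideas using Python's sqlite3 module: each table holds one type of entity, cross-references use primary keys rather than physical addresses, and a view presents joined data without storing it (the schema and names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE department (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
CREATE TABLE employee   (emp_id  INTEGER PRIMARY KEY, name TEXT,
                         dept_id INTEGER REFERENCES department(dept_id));
INSERT INTO department VALUES (1, 'Research');
INSERT INTO employee   VALUES (10, 'Codd', 1);
-- A virtual table (view) presents the joined data without storing it.
CREATE VIEW emp_dept AS
  SELECT e.name, d.dept_name FROM employee e JOIN department d USING (dept_id);
""")
result = cur.execute("SELECT * FROM emp_dept").fetchall()
```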
Codd used mathematical terms to define the model: relations, tuples, and domains rather than tables, rows, and columns. The terminology that is now familiar came from early implementations. Codd would later criticize the tendency for practical implementations to depart from the mathematical foundations on which the model was based.
The use of primary keys (user-oriented identifiers) to represent cross-table relationships, rather than disk addresses, had two primary motivations. From an engineering perspective, it enabled tables to be relocated and resized without expensive database reorganization. But Codd was more interested in the difference in semantics: the use of explicit identifiers made it easier to define update operations with clean mathematical definitions, and it also enabled query operations to be defined in terms of the established discipline of first-order predicate calculus; because these operations have clean mathematical properties, it becomes possible to rewrite queries in provably correct ways, which is the basis of query optimization. There is no loss of expressiveness compared with the hierarchic or network models, though the connections between tables are no longer so explicit.
In the hierarchic and network models, records were allowed to have a complex internal structure. For example, the salary history of an employee might be represented as a "repeating group" within the employee record. In the relational model, the process of normalization led to such internal structures being replaced by data held in multiple tables, connected only by logical keys.
For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach, all of this data would be placed in a single variable-length record. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.
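The normalization just described might be sketched as follows with sqlite3 (the schema is invented for the example); note that no phone row exists unless a number was actually provided:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user    (user_id  INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE address (addr_id  INTEGER PRIMARY KEY,
                      user_id  INTEGER REFERENCES user(user_id), street TEXT);
CREATE TABLE phone   (phone_id INTEGER PRIMARY KEY,
                      user_id  INTEGER REFERENCES user(user_id), number TEXT);
INSERT INTO user VALUES (1, 'Alice');
-- Alice supplied one address and no phone number:
-- rows exist in the optional tables only where data was provided.
INSERT INTO address VALUES (1, 1, '1 Main St');
""")
phones = conn.execute("SELECT number FROM phone WHERE user_id = 1").fetchall()
```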
As well as identifying rows/records using logical identifiers rather than disk addresses, Codd changed the way in which applications assembled data from multiple records. Rather than requiring applications to gather data one record at a time by navigating the links, they would use a declarative query language that expressed what data was required, rather than the access path by which it should be found. Finding an efficient access path to the data became the responsibility of the database management system, rather than the application programmer. This process, called query optimization, depended on the fact that queries were expressed in terms of mathematical logic.
Codd's paper inspired teams at various universities to research the subject, including one at University of California, Berkeley led by Eugene Wong and Michael Stonebraker, who started INGRES using funding that had already been allocated for a geographical database project and student programmers to produce code. Beginning in 1973, INGRES delivered its first test products which were generally ready for widespread use in 1979. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL. Over time, INGRES moved to the emerging SQL standard.
IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell wrote MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. Most other DBMS implementations usually called relational are actually SQL DBMSs.
In 1970, the University of Michigan began development of the MICRO Information Management System based on D.L. Childs' Set-Theoretic Data model. The university in 1974 hosted a debate between Codd and Bachman which Bruce Lindsay of IBM later described as "throwing lightning bolts at each other!". MICRO was used to manage very large data sets by the US Department of Labor, the U.S. Environmental Protection Agency, and researchers from the University of Alberta, the University of Michigan, and Wayne State University. It ran on IBM mainframe computers using the Michigan Terminal System. The system remained in production until 1998.
=== Integrated approach ===
In the 1970s and 1980s, attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at a lower cost. Examples were IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine.
Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term, these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However, this idea is still pursued in certain applications by some companies like Netezza and Oracle (Exadata).
=== Late 1970s, SQL DBMS ===
IBM formed a team led by Codd that started working on a prototype system, System R, despite opposition from others at the company. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (some of which is optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language – SQL – had been added. Codd's ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (IBM Db2).
Larry Ellison's Oracle Database (or more simply, Oracle) started from a different chain, based on IBM's papers on System R. Though Oracle V1 implementations were completed in 1978, it was with Oracle Version 2 that Ellison beat IBM to market in 1979.
Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission-critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).
In Sweden, Codd's paper was also read and Mimer SQL was developed in the mid-1970s at Uppsala University. In 1984, this project was consolidated into an independent enterprise.
Another data model, the entity–relationship model, emerged in 1976 and gained popularity for database design as it emphasized a more familiar description than the earlier relational model. Later on, entity–relationship constructs were retrofitted as a data modeling construct for the relational model, and the difference between the two has become irrelevant.
=== 1980s, on the desktop ===
Besides IBM and various software companies such as Sybase and Informix Corporation, most large computer hardware vendors by the 1980s had their own database systems such as DEC's VAX Rdb/VMS. The decade ushered in the age of desktop computing. The new computers empowered their users with spreadsheets like Lotus 1-2-3 and database software like dBASE. The dBASE product was lightweight and easy for any computer user to understand out of the box. C. Wayne Ratliff, the creator of dBASE, stated: "dBASE was different from programs like BASIC, C, FORTRAN, and COBOL in that a lot of the dirty work had already been done. The data manipulation is done by dBASE instead of by the user, so the user can concentrate on what he is doing, rather than having to mess with the dirty details of opening, reading, and closing files, and managing space allocation." dBASE was one of the top selling software titles in the 1980s and early 1990s.
=== 1990s, object-oriented ===
By the start of the decade databases had become a billion-dollar industry in about ten years. The 1990s, along with a rise in object-oriented programming, saw a growth in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows for relations between data to be related to objects and their attributes and not to individual fields. The term "object–relational impedance mismatch" described the inconvenience of translating between programmed objects and database tables. Object databases and object–relational databases attempt to solve this problem by providing an object-oriented language (sometimes as extensions to SQL) that programmers can use as alternative to purely relational SQL. On the programming side, libraries known as object–relational mappings (ORMs) attempt to solve the same problem.
=== 2000s, NoSQL and NewSQL ===
Database sales grew rapidly during the dotcom bubble and, after its end, the rise of ecommerce. The popularity of open source databases such as MySQL has grown since 2000, to the extent that Ken Jacobs of Oracle said in 2005 that perhaps "these guys are doing to us what we did to IBM".
XML databases are a type of structured document-oriented database that allows querying based on XML document attributes. XML databases are mostly used in applications where the data is conveniently viewed as a collection of documents, with a structure that can vary from the very flexible to the highly rigid: examples include scientific articles, patents, tax filings, and personnel records.
NoSQL databases are often very fast, do not require fixed table schemas, avoid join operations by storing denormalized data, and are designed to scale horizontally.
In recent years, there has been a strong demand for massively distributed databases with high partition tolerance, but according to the CAP theorem, it is impossible for a distributed system to simultaneously provide consistency, availability, and partition tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. For that reason, many NoSQL databases are using what is called eventual consistency to provide both availability and partition tolerance guarantees with a reduced level of data consistency.
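A toy model of eventual consistency, assuming a simple last-writer-wins reconciliation rule (real NoSQL stores use more sophisticated mechanisms such as vector clocks; all names here are illustrative):

```python
# Two replicas accept writes independently (availability under partition)
# and reconcile later by comparing per-key timestamps: last writer wins.
def reconcile(replica_a, replica_b):
    merged = {}
    for key in set(replica_a) | set(replica_b):
        candidates = [r[key] for r in (replica_a, replica_b) if key in r]
        merged[key] = max(candidates, key=lambda v: v[0])  # v = (timestamp, value)
    return merged

a = {"cart": (1, ["book"])}
b = {"cart": (2, ["book", "pen"])}   # a later write made during a partition
converged = reconcile(a, b)          # replicas converge once reconciled
```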
NewSQL is a class of modern relational databases that aims to provide the same scalable performance of NoSQL systems for online transaction processing (read-write) workloads while still using SQL and maintaining the ACID guarantees of a traditional database system.
== Use cases ==
Databases are used to support internal operations of organizations and to underpin online interactions with customers and suppliers (see Enterprise software).
Databases are used to hold administrative information and more specialized data, such as engineering data or economic models. Examples include computerized library systems, flight reservation systems, computerized parts inventory systems, and many content management systems that store websites as collections of webpages in a database.
== Classification ==
One way to classify databases involves the type of their contents, for example: bibliographic, document-text, statistical, or multimedia objects. Another way is by their application area, for example: accounting, music compositions, movies, banking, manufacturing, or insurance. A third way is by some technical aspect, such as the database structure or interface type. This section lists a few of the adjectives used to characterize different kinds of databases.
An in-memory database is a database that primarily resides in main memory, but is typically backed-up by non-volatile computer data storage. Main memory databases are faster than disk databases, and so are often used where response time is critical, such as in telecommunications network equipment.
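SQLite's ":memory:" mode gives a convenient illustration of an in-memory database, though a production in-memory DBMS would additionally snapshot to non-volatile storage (the session table here is invented for the example):

```python
import sqlite3

# The whole database lives in main memory; it vanishes when the
# connection closes, so durability must be provided separately.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE session (id INTEGER PRIMARY KEY, token TEXT)")
mem.execute("INSERT INTO session (token) VALUES ('abc123')")
token = mem.execute("SELECT token FROM session WHERE id = 1").fetchone()[0]
```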
An active database includes an event-driven architecture which can respond to conditions both inside and outside the database. Possible uses include security monitoring, alerting, statistics gathering and authorization. Many databases provide active database features in the form of database triggers.
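Database triggers, the most common form of active-database support, can be sketched with sqlite3 (the account and audit schema is invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL);
CREATE TABLE audit   (account_id INTEGER, old_balance REAL, new_balance REAL);
-- The trigger fires automatically inside the database on every update,
-- without any cooperation from the application.
CREATE TRIGGER log_balance AFTER UPDATE OF balance ON account
BEGIN
  INSERT INTO audit VALUES (OLD.id, OLD.balance, NEW.balance);
END;
INSERT INTO account VALUES (1, 100.0);
UPDATE account SET balance = 150.0 WHERE id = 1;
""")
audit_rows = conn.execute("SELECT * FROM audit").fetchall()
```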
A cloud database relies on cloud technology. Both the database and most of its DBMS reside remotely, "in the cloud", while its applications are developed by programmers and later maintained and used by end-users through a web browser and open APIs.
Data warehouses archive data from operational databases and often from external sources such as market research firms. The warehouse becomes the central source of data for use by managers and other end-users who may not have access to operational data. For example, sales data might be aggregated to weekly totals and converted from internal product codes to use UPCs so that they can be compared with ACNielsen data. Some basic and essential components of data warehousing include extracting, analyzing, and mining data, transforming, loading, and managing data so as to make them available for further use.
A deductive database combines logic programming with a relational database.
A distributed database is one in which both the data and the DBMS span multiple computers.
A document-oriented database is designed for storing, retrieving, and managing document-oriented, or semi structured, information. Document-oriented databases are one of the main categories of NoSQL databases.
An embedded database system is a DBMS which is tightly integrated with application software that requires access to stored data in such a way that the DBMS is hidden from the application's end-users and requires little or no ongoing maintenance.
End-user databases consist of data developed by individual end-users. Examples of these are collections of documents, spreadsheets, presentations, multimedia, and other files. Several products exist to support such databases.
A federated database system comprises several distinct databases, each with its own DBMS. It is handled as a single database by a federated database management system (FDBMS), which transparently integrates multiple autonomous DBMSs, possibly of different types (in which case it would also be a heterogeneous database system), and provides them with an integrated conceptual view.
Sometimes the term multi-database is used as a synonym for federated database, though it may refer to a less integrated (e.g., without an FDBMS and a managed integrated schema) group of databases that cooperate in a single application. In this case, typically middleware is used for distribution, which typically includes an atomic commit protocol (ACP), e.g., the two-phase commit protocol, to allow distributed (global) transactions across the participating databases.
A graph database is a kind of NoSQL database that uses graph structures with nodes, edges, and properties to represent and store information. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases.
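A minimal sketch of the property-graph idea in plain Python, with node and edge names invented for illustration (real graph databases add indexing, persistence, and a query language such as a graph traversal DSL):

```python
# Nodes and edges each carry arbitrary key-value properties.
nodes = {
    "alice": {"label": "Person", "age": 34},
    "bob":   {"label": "Person", "age": 29},
}
edges = [
    ("alice", "KNOWS", "bob", {"since": 2015}),
]

def neighbours(node, rel):
    """Follow outgoing edges of one relationship type."""
    return [dst for src, r, dst, _props in edges if src == node and r == rel]

friends = neighbours("alice", "KNOWS")
```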
An array DBMS is a kind of NoSQL DBMS that allows modeling, storage, and retrieval of (usually large) multi-dimensional arrays such as satellite images and climate simulation output.
In a hypertext or hypermedia database, any word or a piece of text representing an object, e.g., another piece of text, an article, a picture, or a film, can be hyperlinked to that object. Hypertext databases are particularly useful for organizing large amounts of disparate information. For example, they are useful for organizing online encyclopedias, where users can conveniently jump around the text. The World Wide Web is thus a large distributed hypertext database.
A knowledge base (abbreviated KB, kb or Δ) is a special kind of database for knowledge management, providing the means for the computerized collection, organization, and retrieval of knowledge; it may also hold a collection of data representing problems together with their solutions and related experiences.
A mobile database can be carried on or synchronized from a mobile computing device.
Operational databases store detailed data about the operations of an organization. They typically process relatively high volumes of updates using transactions. Examples include customer databases that record contact, credit, and demographic information about a business's customers, personnel databases that hold information such as salary, benefits, skills data about employees, enterprise resource planning systems that record details about product components, parts inventory, and financial databases that keep track of the organization's money, accounting and financial dealings.
A parallel database seeks to improve performance through parallelization for tasks such as loading data, building indexes and evaluating queries.
The major parallel DBMS architectures which are induced by the underlying hardware architecture are:
Shared memory architecture, where multiple processors share the main memory space, as well as other data storage.
Shared disk architecture, where each processing unit (typically consisting of multiple processors) has its own main memory, but all units share the other storage.
Shared-nothing architecture, where each processing unit has its own main memory and other storage.
Probabilistic databases employ fuzzy logic to draw inferences from imprecise data.
Real-time databases process transactions fast enough for the result to come back and be acted on right away.
A spatial database can store data with multidimensional features. Queries on such data include location-based queries, such as "Where is the closest hotel in my area?".
A temporal database has built-in time aspects, for example a temporal data model and a temporal version of SQL. More specifically the temporal aspects usually include valid-time and transaction-time.
A terminology-oriented database builds upon an object-oriented database, often customized for a specific field.
An unstructured data database is intended to store in a manageable and protected way diverse objects that do not fit naturally and conveniently in common databases. It may include email messages, documents, journals, multimedia objects, etc. The name may be misleading since some objects can be highly structured. However, the entire possible object collection does not fit into a predefined structured framework. Most established DBMSs now support unstructured data in various ways, and new dedicated DBMSs are emerging.
== Database management system ==
Connolly and Begg define database management system (DBMS) as a "software system that enables users to define, create, maintain and control access to the database." Examples of DBMSs include MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle Database, and Microsoft Access.
The DBMS acronym is sometimes extended to indicate the underlying database model, with RDBMS for the relational, OODBMS for the object (oriented) and ORDBMS for the object–relational model. Other extensions can indicate some other characteristic, such as DDBMS for a distributed database management system.
The functionality provided by a DBMS can vary enormously. The core functionality is the storage, retrieval and update of data. Codd proposed the following functions and services a fully-fledged general purpose DBMS should provide:
Data storage, retrieval and update
User accessible catalog or data dictionary describing the metadata
Support for transactions and concurrency
Facilities for recovering the database should it become damaged
Support for authorization of access and update of data
Access support from remote locations
Enforcing constraints to ensure data in the database abides by certain rules
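Several of these services can be seen in miniature with Python's built-in sqlite3 module. This is only a sketch: SQLite is an embedded DBMS, so remote access and user authorization are out of scope, and the table and CHECK constraint below are invented for the illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Data storage, plus a rule the DBMS itself will enforce (a CHECK constraint)
cur.execute("CREATE TABLE car (id INTEGER PRIMARY KEY, engine TEXT "
            "CHECK (engine IN ('petrol', 'diesel', 'electric')))")
cur.execute("INSERT INTO car (engine) VALUES ('electric')")

# A user-accessible catalog: SQLite exposes its metadata via sqlite_master
tables = [row[0] for row in
          cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)  # ['car']

# Constraint enforcement: a value outside the allowed set is rejected
error = None
try:
    cur.execute("INSERT INTO car (engine) VALUES ('steam')")
except sqlite3.IntegrityError as exc:
    error = exc
print(error is not None)  # True
```

Fuller DBMSs provide the remaining services (authorization, remote access, recovery facilities) through comparable interfaces.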
It is also generally expected that the DBMS will provide a set of utilities for such purposes as may be necessary to administer the database effectively, including import, export, monitoring, defragmentation and analysis utilities. The core part of the DBMS, interacting between the database and the application interface, is sometimes referred to as the database engine.
Often DBMSs will have configuration parameters that can be statically and dynamically tuned, for example the maximum amount of main memory on a server the database can use. The trend is to minimize the amount of manual configuration, and for cases such as embedded databases the need to target zero-administration is paramount.
The major enterprise DBMSs have tended to increase in size and functionality and can have involved thousands of person-years of development effort throughout their lifetimes.
Early multi-user DBMS typically only allowed for the application to reside on the same computer with access via terminals or terminal emulation software. The client–server architecture was a development where the application resided on a client desktop and the database on a server allowing the processing to be distributed. This evolved into a multitier architecture incorporating application servers and web servers with the end user interface via a web browser with the database only directly connected to the adjacent tier.
A general-purpose DBMS will provide public application programming interfaces (API) and optionally a processor for database languages such as SQL to allow applications to be written to interact with and manipulate the database. A special purpose DBMS may use a private API and be specifically customized and linked to a single application. For example, an email system performs many of the functions of a general-purpose DBMS such as message insertion, message deletion, attachment handling, blocklist lookup, and associating messages with an email address; however, these functions are limited to what is required to handle email.
== Application ==
External interaction with the database will be via an application program that interfaces with the DBMS. This can range from a database tool that allows users to execute SQL queries textually or graphically, to a website that happens to use a database to store and search information.
=== Application program interface ===
A programmer will code interactions to the database (sometimes referred to as a datasource) via an application program interface (API) or via a database language. The particular API or language chosen will need to be supported by the DBMS, possibly indirectly via a preprocessor or a bridging API. Some APIs aim to be database independent, ODBC being a commonly known example. Other common APIs include JDBC and ADO.NET.
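Python's DB-API 2.0 (PEP 249) plays a role analogous to ODBC or JDBC: drivers for different DBMSs expose the same call interface, so application code largely carries over between them. A sketch using the built-in sqlite3 driver (the employee table is hypothetical):

```python
import sqlite3  # one DB-API 2.0 driver; drivers for other DBMSs share this shape

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (name TEXT, dept TEXT)")

# Parameter placeholders keep data separate from the SQL text
# (the placeholder style is driver-dependent: '?' here, '%s' in some others)
cur.execute("INSERT INTO employee VALUES (?, ?)", ("Ada", "R&D"))
conn.commit()

row = cur.execute("SELECT name FROM employee WHERE dept = ?",
                  ("R&D",)).fetchone()
print(row)  # ('Ada',)
```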
== Database languages ==
Database languages are special-purpose languages, which allow one or more of the following tasks, sometimes distinguished as sublanguages:
Data control language (DCL) – controls access to data;
Data definition language (DDL) – defines data types such as creating, altering, or dropping tables and the relationships among them;
Data manipulation language (DML) – performs tasks such as inserting, updating, or deleting data occurrences;
Data query language (DQL) – allows searching for information and computing derived information.
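Three of the four sublanguages can be illustrated with SQLite through Python; SQLite, as an embedded engine without user accounts, has no DCL such as GRANT or REVOKE, and the schema below is invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define tables and a relationship between them
cur.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, "
            "dept_id INTEGER REFERENCES dept(id))")

# DML: insert and update data occurrences
cur.execute("INSERT INTO dept VALUES (1, 'Sales')")
cur.execute("INSERT INTO emp VALUES (1, 'Joe', 1)")
cur.execute("UPDATE emp SET name = 'Joseph' WHERE id = 1")

# DQL: search for information and compute derived information
count, = cur.execute("SELECT COUNT(*) FROM emp "
                     "JOIN dept ON emp.dept_id = dept.id "
                     "WHERE dept.name = 'Sales'").fetchone()
print(count)  # 1
```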
Database languages are specific to a particular data model. Notable examples include:
SQL combines the roles of data definition, data manipulation, and query in a single language. It was one of the first commercial languages for the relational model, although it departs in some respects from the relational model as described by Codd (for example, the rows and columns of a table can be ordered). SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the International Organization for Standardization (ISO) in 1987. The standards have been regularly enhanced since and are supported (with varying degrees of conformance) by all mainstream commercial relational DBMSs.
OQL is an object model language standard (from the Object Data Management Group). It has influenced the design of some of the newer query languages like JDOQL and EJB QL.
XQuery is a standard XML query language implemented by XML database systems such as MarkLogic and eXist, by relational databases with XML capability such as Oracle and Db2, and also by in-memory XML processors such as Saxon.
SQL/XML combines XQuery with SQL.
A database language may also incorporate features like:
DBMS-specific configuration and storage engine management
Computations to modify query results, like counting, summing, averaging, sorting, grouping, and cross-referencing
Constraint enforcement (e.g. in an automotive database, only allowing one engine type per car)
Application programming interface version of the query language, for programmer convenience
== Storage ==
Database storage is the container of the physical materialization of a database. It comprises the internal (physical) level in the database architecture. It also contains all the information needed (e.g., metadata, "data about the data", and internal data structures) to reconstruct the conceptual level and external level from the internal level when needed. Databases as digital objects contain three layers of information which must be stored: the data, the structure, and the semantics. Proper storage of all three layers is needed for future preservation and longevity of the database.
Putting data into permanent storage is generally the responsibility of the database engine a.k.a. "storage engine". Though typically accessed by a DBMS through the underlying operating system (and often using the operating systems' file systems as intermediates for storage layout), storage properties and configuration settings are extremely important for the efficient operation of the DBMS, and thus are closely maintained by database administrators.
A DBMS, while in operation, always has its database residing in several types of storage (e.g., memory and external storage). The database data and the additional needed information, possibly in very large amounts, are coded into bits. Data typically reside in the storage in structures that look completely different from the way the data look at the conceptual and external levels, but in ways that attempt to optimize (the best possible) these levels' reconstruction when needed by users and programs, as well as for computing additional types of needed information from the data (e.g., when querying the database).
Some DBMSs support specifying which character encoding was used to store data, so multiple encodings can be used in the same database.
Various low-level database storage structures are used by the storage engine to serialize the data model so it can be written to the medium of choice. Techniques such as indexing may be used to improve performance. Conventional storage is row-oriented, but there are also column-oriented and correlation databases.
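The effect of an index on query evaluation can be observed directly with SQLite's EXPLAIN QUERY PLAN. This is a sketch (the table is invented), and the exact plan wording varies between SQLite versions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (k INTEGER, v TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, str(i)) for i in range(1000)])

def plan(sql):
    # Each EXPLAIN QUERY PLAN row ends with a human-readable detail string
    return " ".join(row[-1] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT v FROM t WHERE k = 500")  # e.g. "SCAN t": a full scan
cur.execute("CREATE INDEX idx_k ON t(k)")
after = plan("SELECT v FROM t WHERE k = 500")   # now a search via idx_k

print("USING INDEX" in after)  # True
```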
=== Materialized views ===
Often storage redundancy is employed to increase performance. A common example is storing materialized views, which consist of frequently needed external views or query results. Storing such views saves recomputing them each time they are needed, which can be expensive. The downsides of materialized views are the overhead incurred when updating them to keep them synchronized with their original updated database data, and the cost of storage redundancy.
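SQLite has only non-materialized views, but both the idea and its refresh overhead can be sketched by storing a query result as an ordinary table (the sales schema is invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("north", 10), ("north", 20), ("south", 5)])

# "Materialize" an aggregate query result as an ordinary table
cur.execute("CREATE TABLE sales_by_region AS "
            "SELECT region, SUM(amount) AS total FROM sales GROUP BY region")

# The downside: after base-table updates the stored copy is stale
# until it is refreshed
cur.execute("INSERT INTO sales VALUES ('south', 45)")
cur.execute("DELETE FROM sales_by_region")
cur.execute("INSERT INTO sales_by_region "
            "SELECT region, SUM(amount) FROM sales GROUP BY region")

south_total = cur.execute("SELECT total FROM sales_by_region "
                          "WHERE region = 'south'").fetchone()
print(south_total)  # (50,)
```

Real DBMSs with native materialized views automate this refresh, either on commit or on demand.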
=== Replication ===
Occasionally a database employs storage redundancy by database objects replication (with one or more copies) to increase data availability (both to improve performance of simultaneous multiple end-user accesses to the same database object, and to provide resiliency in a case of partial failure of a distributed database). Updates of a replicated object need to be synchronized across the object copies. In many cases, the entire database is replicated.
=== Virtualization ===
With data virtualization, the data used remains in its original locations and real-time access is established to allow analytics across multiple sources. This can aid in resolving some technical difficulties such as compatibility problems when combining data from various platforms, lowering the risk of error caused by faulty data, and guaranteeing that the newest data is used. Furthermore, avoiding the creation of a new database containing personal information can make it easier to comply with privacy regulations. However, with data virtualization, the connection to all necessary data sources must be operational as there is no local copy of the data, which is one of the main drawbacks of the approach.
== Security ==
Database security deals with all various aspects of protecting the database content, its owners, and its users. It ranges from protection from intentional unauthorized database uses to unintentional database accesses by unauthorized entities (e.g., a person or a computer program).
Database access control deals with controlling who (a person or a certain computer program) are allowed to access what information in the database. The information may comprise specific database objects (e.g., record types, specific records, data structures), certain computations over certain objects (e.g., query types, or specific queries), or using specific access paths to the former (e.g., using specific indexes or other data structures to access information). Database access controls are set by special authorized (by the database owner) personnel that uses dedicated protected security DBMS interfaces.
This may be managed directly on an individual basis, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or subsets of it called "subschemas". For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data. If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases.
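The subschema idea can be sketched with an SQL view. SQLite has no per-user GRANTs, so this models only the restricted view itself, not its enforcement; the employee columns are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, "
            "salary INTEGER, medical_notes TEXT)")
cur.execute("INSERT INTO employee VALUES (1, 'Joe', 50000, 'confidential')")

# A payroll "subschema": only the columns that group of users may see
cur.execute("CREATE VIEW payroll AS SELECT id, name, salary FROM employee")

row = cur.execute("SELECT * FROM payroll").fetchone()
print(row)  # (1, 'Joe', 50000) -- the medical data is not exposed
```

In a multi-user DBMS, access to the base table would be revoked and only the view granted to the payroll group.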
Data security in general deals with protecting specific chunks of data, both physically (i.e., from corruption, or destruction, or removal; e.g., see physical security), or the interpretation of them, or parts of them to meaningful information (e.g., by looking at the strings of bits that they comprise, concluding specific valid credit-card numbers; e.g., see data encryption).
Change and access logging records who accessed which attributes, what was changed, and when it was changed. Logging services allow for a forensic database audit later by keeping a record of access occurrences and changes. Sometimes application-level code is used to record changes rather than leaving this to the database. Monitoring can be set up to attempt to detect security breaches. Organizations must therefore take database security seriously: it safeguards them against security breaches and malicious activities such as firewall intrusion, virus spread, and ransomware, and helps protect essential company information that must not be shared with outsiders.
== Transactions and concurrency ==
Database transactions can be used to introduce some level of fault tolerance and data integrity after recovery from a crash. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring or releasing a lock, etc.), an abstraction supported in database and also other systems. Each transaction has well defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands).
The acronym ACID describes some ideal properties of a database transaction: atomicity, consistency, isolation, and durability.
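Atomicity in particular is easy to demonstrate: either all of a transaction's operations take effect or none do. A sketch with sqlite3, using an invented CHECK constraint to make the second half of a transfer fail:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, "
             "balance INTEGER CHECK (balance >= 0))")
conn.execute("INSERT INTO account VALUES (1, 100), (2, 100)")
conn.commit()

# A transfer whose second statement violates the CHECK constraint
try:
    conn.execute("UPDATE account SET balance = balance + 150 WHERE id = 2")
    conn.execute("UPDATE account SET balance = balance - 150 WHERE id = 1")
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()  # atomicity: undo the partial transfer

balances = [r[0] for r in conn.execute(
    "SELECT balance FROM account ORDER BY id")]
print(balances)  # [100, 100] -- neither half of the transfer survived
```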
== Migration ==
A database built with one DBMS is not portable to another DBMS (i.e., the other DBMS cannot run it). However, in some situations, it is desirable to migrate a database from one DBMS to another. The reasons are primarily economical (different DBMSs may have different total costs of ownership or TCOs), functional, and operational (different DBMSs may have different capabilities). The migration involves the database's transformation from one DBMS type to another. The transformation should maintain (if possible) the database-related applications (i.e., all related application programs) intact. Thus, the database's conceptual and external architectural levels should be maintained in the transformation. It may also be desired that some aspects of the internal architectural level are maintained. A complex or large database migration may be a complicated and costly (one-time) project by itself, which should be factored into the decision to migrate. This is in spite of the fact that tools may exist to help migration between specific DBMSs. Typically, a DBMS vendor provides tools to help import databases from other popular DBMSs.
== Building, maintaining, and tuning ==
After designing a database for an application, the next stage is building the database. Typically, an appropriate general-purpose DBMS can be selected to be used for this purpose. A DBMS provides the needed user interfaces to be used by database administrators to define the needed application's data structures within the DBMS's respective data model. Other user interfaces are used to select needed DBMS parameters (like security related, storage allocation parameters, etc.).
When the database is ready (all its data structures and other needed components are defined), it is typically populated with the application's initial data (database initialization, which is typically a distinct project; in many cases using specialized DBMS interfaces that support bulk insertion) before making it operational. In some cases, the database becomes operational while empty of application data, and data are accumulated during its operation.
After the database is created, initialized and populated it needs to be maintained. Various database parameters may need changing and the database may need to be tuned for better performance; the application's data structures may be changed or added, new related application programs may be written to add to the application's functionality, and so on.
== Backup and restore ==
Sometimes it is desired to bring a database back to a previous state (for many reasons, e.g., cases when the database is found corrupted due to a software error, or if it has been updated with erroneous data). To achieve this, a backup operation is done occasionally or continuously, where each desired database state (i.e., the values of its data and their embedding in database's data structures) is kept within dedicated backup files (many techniques exist to do this effectively). When it is decided by a database administrator to bring the database back to this state (e.g., by specifying this state by a desired point in time when the database was in this state), these files are used to restore that state.
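As a small illustration, SQLite exposes an online backup API through Python's sqlite3 module; a backup taken earlier can be copied back to restore a previous state (the data here is invented):

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x INTEGER)")
src.execute("INSERT INTO t VALUES (42)")
src.commit()

# Back up the live database into another database (which could be a file)
backup = sqlite3.connect(":memory:")
src.backup(backup)

# Simulate damage, then restore the earlier state from the backup
src.execute("DELETE FROM t")
src.commit()
backup.backup(src)

restored = src.execute("SELECT x FROM t").fetchone()
print(restored)  # (42,)
```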
== Static analysis ==
Static analysis techniques for software verification can also be applied in the scenario of query languages. In particular, the abstract interpretation framework has been extended to the field of query languages for relational databases as a way to support sound approximation techniques. The semantics of query languages can be tuned according to suitable abstractions of the concrete domain of data. The abstraction of relational database systems has many interesting applications, in particular, for security purposes, such as fine-grained access control, watermarking, etc.
== Miscellaneous features ==
Other DBMS features might include:
Database logs – This helps in keeping a history of the executed functions.
Graphics component for producing graphs and charts, especially in a data warehouse system.
Query optimizer – Performs query optimization on every query to choose an efficient query plan (a partial order (tree) of operations) to be executed to compute the query result. May be specific to a particular storage engine.
Tools or hooks for database design, application programming, application program maintenance, database performance analysis and monitoring, database configuration monitoring, DBMS hardware configuration (a DBMS and related database may span computers, networks, and storage units) and related database mapping (especially for a distributed DBMS), storage allocation and database layout monitoring, storage migration, etc.
Increasingly, there are calls for a single system that incorporates all of these core functionalities into the same build, test, and deployment framework for database management and source control. Borrowing from other developments in the software industry, some vendors market such offerings as "DevOps for database".
== Design and modeling ==
The first task of a database designer is to produce a conceptual data model that reflects the structure of the information to be held in the database. A common approach to this is to develop an entity–relationship model, often with the aid of drawing tools. Another popular approach is the Unified Modeling Language. A successful data model will accurately reflect the possible state of the external world being modeled: for example, if people can have more than one phone number, it will allow this information to be captured. Designing a good conceptual data model requires a good understanding of the application domain; it typically involves asking deep questions about the things of interest to an organization, like "can a customer also be a supplier?", or "if a product is sold with two different forms of packaging, are those the same product or different products?", or "if a plane flies from New York to Dubai via Frankfurt, is that one flight or two (or maybe even three)?". The answers to these questions establish definitions of the terminology used for entities (customers, products, flights, flight segments) and their relationships and attributes.
Producing the conceptual data model sometimes involves input from business processes, or the analysis of workflow in the organization. This can help to establish what information is needed in the database, and what can be left out. For example, it can help when deciding whether the database needs to hold historic data as well as current data.
Having produced a conceptual data model that users are happy with, the next stage is to translate this into a schema that implements the relevant data structures within the database. This process is often called logical database design, and the output is a logical data model expressed in the form of a schema. Whereas the conceptual data model is (in theory at least) independent of the choice of database technology, the logical data model will be expressed in terms of a particular database model supported by the chosen DBMS. (The terms data model and database model are often used interchangeably, but in this article we use data model for the design of a specific database, and database model for the modeling notation used to express that design).
The most popular database model for general-purpose databases is the relational model, or more precisely, the relational model as represented by the SQL language. The process of creating a logical database design using this model uses a methodical approach known as normalization. The goal of normalization is to ensure that each elementary "fact" is only recorded in one place, so that insertions, updates, and deletions automatically maintain consistency.
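As a toy illustration (the tables and names are invented), a design that records a customer's city on every order stores the same fact repeatedly; normalization records it once, so a single update keeps every query consistent:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Unnormalized: the fact "Joe lives in Oslo" is repeated on every order
cur.execute("CREATE TABLE orders_flat "
            "(order_id INTEGER, customer TEXT, city TEXT)")
cur.executemany("INSERT INTO orders_flat VALUES (?, ?, ?)",
                [(1, "Joe", "Oslo"), (2, "Joe", "Oslo")])

# Normalized: each elementary fact is recorded in one place
cur.execute("CREATE TABLE customer (name TEXT PRIMARY KEY, city TEXT)")
cur.execute("CREATE TABLE orders (order_id INTEGER, "
            "customer TEXT REFERENCES customer(name))")
cur.execute("INSERT INTO customer VALUES ('Joe', 'Oslo')")
cur.executemany("INSERT INTO orders VALUES (?, ?)", [(1, "Joe"), (2, "Joe")])

# A move now needs one update, and no order can disagree about the city
cur.execute("UPDATE customer SET city = 'Bergen' WHERE name = 'Joe'")
cities = {r[0] for r in cur.execute(
    "SELECT city FROM customer "
    "JOIN orders ON orders.customer = customer.name")}
print(cities)  # {'Bergen'}
```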
The final stage of database design is to make the decisions that affect performance, scalability, recovery, security, and the like, which depend on the particular DBMS. This is often called physical database design, and the output is the physical data model. A key goal during this stage is data independence, meaning that the decisions made for performance optimization purposes should be invisible to end-users and applications. There are two types of data independence: Physical data independence and logical data independence. Physical design is driven mainly by performance requirements, and requires a good knowledge of the expected workload and access patterns, and a deep understanding of the features offered by the chosen DBMS.
Another aspect of physical database design is security. It involves both defining access control to database objects as well as defining security levels and methods for the data itself.
=== Models ===
A database model is a type of data model that determines the logical structure of a database and fundamentally determines in which manner data can be stored, organized, and manipulated. The most popular example of a database model is the relational model (or the SQL approximation of relational), which uses a table-based format.
Common logical data models for databases include:
Navigational databases
Hierarchical database model
Network model
Graph database
Relational model
Entity–relationship model
Enhanced entity–relationship model
Object model
Document model
Entity–attribute–value model
Star schema
An object–relational database combines the two related structures.
Physical data models include:
Inverted index
Flat file
Other models include:
Multidimensional model
Array model
Multivalue model
Specialized models are optimized for particular types of data:
XML database
Semantic model
Content store
Event store
Time series model
=== External, conceptual, and internal views ===
A database management system provides three views of the database data:
The external level defines how each group of end-users sees the organization of data in the database. A single database can have any number of views at the external level.
The conceptual level (or logical level) unifies the various external views into a compatible global view. It provides the synthesis of all the external views. It is out of the scope of the various database end-users, and is rather of interest to database application developers and database administrators.
The internal level (or physical level) is the internal organization of data inside a DBMS. It is concerned with cost, performance, scalability and other operational matters. It deals with storage layout of the data, using storage structures such as indexes to enhance performance. Occasionally it stores data of individual views (materialized views), computed from generic data, if performance justification exists for such redundancy. It balances all the external views' performance requirements, possibly conflicting, in an attempt to optimize overall performance across all activities.
While there is typically only one conceptual and internal view of the data, there can be any number of different external views. This allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. For example, a financial department of a company needs the payment details of all employees as part of the company's expenses, but does not need details about employees that are in the interest of the human resources department. Thus different departments need different views of the company's database.
The three-level database architecture relates to the concept of data independence which was one of the major initial driving forces of the relational model. The idea is that changes made at a certain level do not affect the view at a higher level. For example, changes in the internal level do not affect application programs written using conceptual level interfaces, which reduces the impact of making physical changes to improve performance.
The conceptual view provides a level of indirection between internal and external. On the one hand it provides a common view of the database, independent of different external view structures, and on the other hand it abstracts away details of how the data are stored or managed (internal level). In principle every level, and even every external view, can be presented by a different data model. In practice usually a given DBMS uses the same data model for both the external and the conceptual levels (e.g., relational model). The internal level, which is hidden inside the DBMS and depends on its implementation, requires a different level of detail and uses its own types of data structure types.
== Research ==
Database technology has been an active research topic since the 1960s, both in academia and in the research and development groups of companies (for example IBM Research). Research activity includes theory and development of prototypes. Notable research topics have included models, the atomic transaction concept, related concurrency control techniques, query languages and query optimization methods, RAID, and more.
The database research area has several dedicated academic journals (for example, ACM Transactions on Database Systems-TODS, Data and Knowledge Engineering-DKE) and annual conferences (e.g., ACM SIGMOD, ACM PODS, VLDB, IEEE ICDE).
== Further reading ==
Ling Liu and Tamer M. Özsu (Eds.) (2009). Encyclopedia of Database Systems, 4100 pp., 60 illus. ISBN 978-0-387-49616-0.
Gray, J. and Reuter, A. Transaction Processing: Concepts and Techniques, 1st edition, Morgan Kaufmann Publishers, 1992.
Kroenke, David M. and David J. Auer. Database Concepts. 3rd ed. New York: Prentice, 2007.
Raghu Ramakrishnan and Johannes Gehrke, Database Management Systems.
Abraham Silberschatz, Henry F. Korth, S. Sudarshan, Database System Concepts.
Lightstone, S.; Teorey, T.; Nadeau, T. (2007). Physical Database Design: the database professional's guide to exploiting indexes, views, storage, and more. Morgan Kaufmann Press. ISBN 978-0-12-369389-1.
Teorey, T.; Lightstone, S. and Nadeau, T. Database Modeling & Design: Logical Design, 4th edition, Morgan Kaufmann Press, 2005. ISBN 0-12-685352-5.
CMU Database courses playlist
MIT OCW 6.830 | Fall 2010 | Database Systems
Berkeley CS W186
== External links ==
DB File extension – information about files with the DB extension
In database systems, isolation is one of the ACID (Atomicity, Consistency, Isolation, Durability) transaction properties. It determines how transaction integrity is visible to other users and systems. A lower isolation level increases the ability of many users to access the same data at the same time, but also increases the number of concurrency effects (such as dirty reads or lost updates) users might encounter. Conversely, a higher isolation level reduces the types of concurrency effects that users may encounter, but requires more system resources and increases the chances that one transaction will block another.
== DBMS concurrency control ==
Concurrency control comprises the underlying mechanisms in a DBMS which handle isolation and guarantee related correctness. It is heavily used by the database and storage engines both to guarantee the correct execution of concurrent transactions, and (via different mechanisms) the correctness of other DBMS processes. The transaction-related mechanisms typically constrain the database data access operations' timing (transaction schedules) to certain orders characterized as the serializability and recoverability schedule properties. Constraining database access operation execution typically means reduced performance (measured by rates of execution), and thus concurrency control mechanisms are typically designed to provide the best performance possible under the constraints. Often, when possible without harming correctness, the serializability property is compromised for better performance. However, recoverability cannot be compromised, since compromising it typically results in a database integrity violation.
Two-phase locking is the most common transaction concurrency control method in DBMSs, used to provide both serializability and recoverability for correctness. In order to access a database object a transaction first needs to acquire a lock for this object. Depending on the access operation type (e.g., reading or writing an object) and on the lock type, acquiring the lock may be blocked and postponed, if another transaction is holding a lock for that object.
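A minimal sketch of the lock table behind this idea, in Python; real lock managers also support blocking waits, lock upgrades, and deadlock detection, all of which are omitted here:

```python
# Toy lock table: shared ("S") locks are mutually compatible, while an
# exclusive ("X") lock conflicts with any lock held by another transaction.
class LockManager:
    def __init__(self):
        self.locks = {}  # object -> list of (transaction, mode) holders

    def acquire(self, txn, obj, mode):
        holders = self.locks.setdefault(obj, [])
        for other, other_mode in holders:
            if other != txn and (mode == "X" or other_mode == "X"):
                return False  # conflict: a real DBMS would block and postpone
        holders.append((txn, mode))
        return True

    def release_all(self, txn):  # the shrinking phase, at commit or abort
        for holders in self.locks.values():
            holders[:] = [h for h in holders if h[0] != txn]

lm = LockManager()
read_t1 = lm.acquire("T1", "row1", "S")         # True: first reader
read_t2 = lm.acquire("T2", "row1", "S")         # True: shared locks coexist
write_t2 = lm.acquire("T2", "row1", "X")        # False: T1 is still reading
lm.release_all("T1")
write_t2_retry = lm.acquire("T2", "row1", "X")  # True once T1 has finished
print(read_t1, read_t2, write_t2, write_t2_retry)
```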
== Client-side isolation ==
Isolation is typically enforced at the database level. However, various client-side systems can also be used. It can be controlled in application frameworks or runtime containers such as J2EE Entity Beans. On older systems, it may be implemented systemically by the application developers, for example through the use of temporary tables. In two-tier, three-tier, or n-tier web applications a transaction manager can be used to maintain isolation. A transaction manager is middleware which sits between an app service (back-end application service) and the operating system. A transaction manager can provide global isolation and atomicity. It tracks when new servers join a transaction and coordinates an atomic commit protocol among the servers. The details are abstracted from the app, making transactions simpler and easier to code. A transaction processing monitor (TPM) is a collection of middleware including a transaction manager. A TPM might provide local isolation to an app with a lock manager.
== Read phenomena ==
The ANSI/ISO SQL-92 standard refers to three different read phenomena that can occur when a transaction retrieves data that another transaction might have updated.
In the following examples, two transactions take place. In transaction 1, a query is performed, then in transaction 2, an update is performed, and finally in transaction 1, the same query is performed again.
The examples use the following relation, users:

 id | name | age
----+------+-----
  1 | Joe  |  20
  2 | Jill |  25
=== Dirty reads ===
A dirty read (also known as an uncommitted dependency) occurs when a transaction retrieves a row that has been updated by another transaction that is not yet committed.
In this example, transaction 1 retrieves the row with id 1, then transaction 2 updates the row with id 1, and finally transaction 1 retrieves the row with id 1 again. Now if transaction 2 rolls back its update (already retrieved by transaction 1) or performs other updates, then the view of the row may be wrong in transaction 1. At the READ UNCOMMITTED isolation level, the second SELECT in transaction 1 retrieves the updated row: this is a dirty read. At the READ COMMITTED, REPEATABLE READ, and SERIALIZABLE isolation levels, the second SELECT in transaction 1 retrieves the initial row.
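The difference between the isolation levels can be modeled with a committed store plus per-transaction write buffers: a READ UNCOMMITTED reader peeks at other transactions' pending writes, while a READ COMMITTED reader does not. This is a minimal sketch with invented names, not any real DBMS's implementation.

```python
committed = {1: {"name": "Joe", "age": 20}}    # rows keyed by id
uncommitted = {}                               # txn -> pending (dirty) writes

def write(txn, row_id, values):
    uncommitted.setdefault(txn, {})[row_id] = values

def read(row_id, level):
    if level == "READ UNCOMMITTED":
        for pending in uncommitted.values():   # dirty: peek at pending writes
            if row_id in pending:
                return pending[row_id]
    return committed.get(row_id)               # otherwise only committed data

def rollback(txn):
    uncommitted.pop(txn, None)                 # pending writes simply vanish

first = read(1, "READ UNCOMMITTED")            # transaction 1's first SELECT
write("T2", 1, {"name": "Joe", "age": 21})     # transaction 2 updates, uncommitted
dirty = read(1, "READ UNCOMMITTED")            # sees the uncommitted value: dirty read
clean = read(1, "READ COMMITTED")              # sees only the initial row
rollback("T2")                                 # T2 rolls back: the dirty value never existed
assert first["age"] == 20 and dirty["age"] == 21 and clean["age"] == 20
```

After the rollback, transaction 1's dirty read refers to data that was never part of any committed database state.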
=== Non-repeatable reads ===
A non-repeatable read occurs when a transaction retrieves a row twice and that row is updated by another transaction that is committed in between.
In this example, transaction 1 retrieves the row with id 1, then transaction 2 updates the row with id 1 and is committed, and finally transaction 1 retrieves the row with id 1 again. At the READ UNCOMMITTED and READ COMMITTED isolation levels, the second SELECT in transaction 1 retrieves the updated row: this is a non-repeatable read. At the REPEATABLE READ and SERIALIZABLE isolation levels, the second SELECT in transaction 1 retrieves the initial row.
=== Phantom reads ===
A phantom read occurs when a transaction retrieves a set of rows twice and new rows are inserted into or removed from that set by another transaction that is committed in between.
In this example, transaction 1 retrieves the set of rows with age greater than 17, then transaction 2 inserts a row with age 26 and is committed, and finally transaction 1 retrieves the set of rows with age greater than 17 again. At the READ UNCOMMITTED, READ COMMITTED, and REPEATABLE READ isolation levels, the second SELECT in transaction 1 retrieves the new set of rows that includes the inserted row: this is a phantom read. At the SERIALIZABLE isolation level, the second SELECT in transaction 1 retrieves the initial set of rows.
There are two basic strategies used to prevent non-repeatable reads and phantom reads. In the first strategy, lock-based concurrency control, transaction 2 is committed only after transaction 1 is committed or rolled back, producing the serial schedule T1, T2. In the other strategy, multiversion concurrency control, transaction 2 is permitted to commit immediately, while transaction 1, which started earlier, continues to operate on an old snapshot of the database taken at its start. When transaction 1 eventually tries to commit, the DBMS checks whether committing would be equivalent to the serial schedule T1, T2: if so, transaction 1 is committed; otherwise, there is a commit conflict and transaction 1 is rolled back with a serialization failure.
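The multiversion strategy with a "first committer wins" conflict check can be sketched as follows. Each transaction reads from a snapshot taken at its start, and a commit fails if another transaction committed a conflicting write in between; the structure is hypothetical and deliberately simplified compared to any real engine.

```python
class MVCCStore:
    def __init__(self):
        self.data = {}          # key -> committed value
        self.version = 0        # bumped on every commit
        self.last_write = {}    # key -> version that last wrote it

    def begin(self):
        return {"start": self.version, "snapshot": dict(self.data), "writes": {}}

    def read(self, txn, key):
        # A transaction sees its own writes, else its start-time snapshot.
        return txn["writes"].get(key, txn["snapshot"].get(key))

    def write(self, txn, key, value):
        txn["writes"][key] = value

    def commit(self, txn):
        # Conflict if any key we wrote was committed after our snapshot.
        for key in txn["writes"]:
            if self.last_write.get(key, -1) > txn["start"]:
                return False                    # serialization failure: roll back
        self.version += 1
        for key, value in txn["writes"].items():
            self.data[key] = value
            self.last_write[key] = self.version
        return True

store = MVCCStore()
store.data["x"] = 1
t1 = store.begin()
t2 = store.begin()
store.write(t2, "x", 2)
assert store.commit(t2)                 # transaction 2 commits immediately
assert store.read(t1, "x") == 1         # transaction 1 still sees its snapshot
store.write(t1, "x", 3)
assert not store.commit(t1)             # commit conflict: transaction 1 rolls back
```

Because transaction 1 keeps reading its snapshot, non-repeatable and phantom reads cannot occur; the price is that its later conflicting commit is rejected.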
Under lock-based concurrency control, non-repeatable reads and phantom reads may occur when read locks are not acquired when performing a SELECT, or when the acquired locks on affected rows are released as soon as the SELECT is performed. Under multiversion concurrency control, non-repeatable reads and phantom reads may occur when the requirement that a transaction affected by a commit conflict must be rolled back is relaxed.
== Isolation levels ==
Of the four ACID properties in a DBMS (Database Management System), the isolation property is the one most often relaxed. When attempting to maintain the highest level of isolation, a DBMS usually acquires locks on data, which may result in a loss of concurrency, or implements multiversion concurrency control, which may require the application to add logic such as retrying transactions that fail with serialization errors.
Most DBMSs offer a number of transaction isolation levels, which control the degree of locking that occurs when selecting data. For many database applications, the majority of database transactions can be constructed to avoid requiring high isolation levels (e.g. SERIALIZABLE level), thus reducing the locking overhead for the system. The programmer must carefully analyze database access code to ensure that any relaxation of isolation does not cause software bugs that are difficult to find. Conversely, if higher isolation levels are used, the possibility of deadlock is increased, which also requires careful analysis and programming techniques to avoid.
Since each isolation level is stronger than those below, in that no higher isolation level allows an action forbidden by a lower one, the standard permits a DBMS to run a transaction at an isolation level stronger than that requested (e.g., a "Read committed" transaction may actually be performed at a "Repeatable read" isolation level).
The isolation levels defined by the ANSI/ISO SQL standard are listed as follows.
=== Serializable ===
This is the highest isolation level.
With a lock-based concurrency control DBMS implementation, serializability requires read and write locks (acquired on selected data) to be released at the end of the transaction. Also range-locks must be acquired when a SELECT query uses a ranged WHERE clause, especially to avoid the phantom reads phenomenon.
When using non-lock based concurrency control, no locks are acquired; however, if the system detects a write collision among several concurrent transactions, only one of them is allowed to commit. See snapshot isolation for more details on this topic.
From (Second Informal Review Draft) ISO/IEC 9075:1992, Database Language SQL, July 30, 1992:
The execution of concurrent SQL-transactions at isolation level SERIALIZABLE is guaranteed to be serializable. A serializable execution is defined to be an execution of the operations of concurrently executing SQL-transactions that produces the same effect as some serial execution of those same SQL-transactions. A serial execution is one in which each SQL-transaction executes to completion before the next SQL-transaction begins.
=== Repeatable reads ===
In this isolation level, a lock-based concurrency control DBMS implementation keeps read and write locks (acquired on selected data) until the end of the transaction. However, range-locks are not managed, so phantom reads can occur.
Write skew is possible at this isolation level in some systems. It occurs when two different writers, each having previously read the columns they are about to update, are both allowed to write to the same column(s) in a table, leaving the data in a state that is a mix of the two transactions and that neither transaction alone would have produced.
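Write skew can be sketched with the classic on-call roster example: two transactions each read the same data, each sees the invariant satisfied, and each makes a disjoint update, so neither commit conflicts under snapshot-based concurrency control, yet the invariant ends up violated. This is an illustrative toy, not a real transaction API.

```python
# Invariant to preserve: at least one doctor must remain on call.
db = {"alice_on_call": True, "bob_on_call": True}

snap1 = dict(db)   # transaction 1's snapshot
snap2 = dict(db)   # transaction 2's snapshot (taken concurrently)

# Each transaction checks the invariant against its own snapshot...
write1 = {}
write2 = {}
if sum(snap1.values()) >= 2:        # "someone else is still on call"
    write1 = {"alice_on_call": False}
if sum(snap2.values()) >= 2:        # ...and so does the other
    write2 = {"bob_on_call": False}

# ...and both commits succeed because their write sets are disjoint.
db.update(write1)
db.update(write2)
assert sum(db.values()) == 0        # write skew: nobody is left on call
```

A SERIALIZABLE implementation would detect that the two transactions' reads and writes overlap and abort one of them.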
=== Read committed ===
In this isolation level, a lock-based concurrency control DBMS implementation keeps write locks (acquired on selected data) until the end of the transaction, but read locks are released as soon as the SELECT operation is performed (so the non-repeatable reads phenomenon can occur in this isolation level). As in the previous level, range-locks are not managed.
Put more simply, read committed is an isolation level that guarantees that any data read was committed at the moment it is read. It simply restricts the reader from seeing any intermediate, uncommitted ('dirty') data. It makes no promise whatsoever that if the transaction re-issues the read, it will find the same data; the data is free to change after it is read.
=== Read uncommitted ===
This is the lowest isolation level. In this level, dirty reads are allowed, so one transaction may see not-yet-committed changes made by other transactions.
== Default isolation level ==
The default isolation level of different DBMSs varies quite widely. Most databases that feature transactions allow the user to set any isolation level. Some DBMSs also require additional syntax when performing a SELECT statement to acquire locks (e.g. SELECT ... FOR UPDATE to acquire exclusive write locks on accessed rows).
However, the definitions above have been criticized as being ambiguous, and as not accurately reflecting the isolation provided by many databases:
This paper shows a number of weaknesses in the anomaly approach to defining isolation levels. The three ANSI phenomena are ambiguous, and even in their loosest interpretations do not exclude some anomalous behavior ... This leads to some counter-intuitive results. In particular, lock-based isolation levels have different characteristics than their ANSI equivalents. This is disconcerting because commercial database systems typically use locking implementations. Additionally, the ANSI phenomena do not distinguish between a number of types of isolation level behavior that are popular in commercial systems.
There are also other criticisms concerning ANSI SQL's isolation definition, in that it encourages implementors to do "bad things":
... it relies in subtle ways on an assumption that a locking schema is used for concurrency control, as opposed to an optimistic or multi-version concurrency scheme. This implies that the proposed semantics are ill-defined.
== Isolation levels vs read phenomena ==
"Anomaly serializable" is not the same as serializable: being free of all three phenomena types is necessary, but not sufficient, for a schedule to be serializable.
== See also ==
Atomicity
Consistency
Durability
Lock (database)
Optimistic concurrency control
Relational Database Management System
Snapshot isolation
== References ==
== External links ==
Oracle® Database Concepts, chapter 13 Data Concurrency and Consistency, Preventable Phenomena and Transaction Isolation Levels
Oracle® Database SQL Reference, chapter 19 SQL Statements: SAVEPOINT to UPDATE, SET TRANSACTION
in JDBC: Connection constant fields, Connection.getTransactionIsolation(), Connection.setTransactionIsolation(int)
in Spring Framework: @Transactional, Isolation
P. Bailis. When is "ACID" ACID? Rarely
Anchor modeling is an agile database modeling technique suited for information that changes over time both in structure and content. It provides a graphical notation used for conceptual modeling similar to that of entity-relationship modeling, with extensions for working with temporal data. The modeling technique involves four modeling constructs: the anchor, attribute, tie and knot, each capturing different aspects of the domain being modeled. The resulting models can be translated to physical database designs using formalized rules. When such a translation is done the tables in the relational database will mostly be in the sixth normal form.
Unlike the star schema (dimensional modelling) and the classical relational model (3NF), data vault and anchor modeling are well-suited for capturing changes that occur when a source system is changed or added, but are considered advanced techniques which require experienced data architects. Both data vaults and anchor models are entity-based models, but anchor models have a more normalized approach.
== Philosophy ==
Anchor modeling was created in order to take advantage of the benefits from a high degree of normalization while avoiding its drawbacks which higher normal forms have with regards to human readability. Advantages such as being able to non-destructively evolve the model, avoid null values, and keep the information free from redundancies are gained. Performance issues due to extra joins are largely avoided thanks to a feature in modern database engines called join elimination or table elimination. In order to handle changes in the information content, anchor modeling emulates aspects of a temporal database in the resulting relational database schema.
== History ==
The earliest installations using anchor modeling were made in 2004 in Sweden, when a data warehouse for an insurance company was built using the technique.
In 2007 the technique was being used in a few data warehouses and one online transaction processing (OLTP) system, and it was presented internationally by Lars Rönnbäck at the 2007 Transforming Data with Intelligence (TDWI) conference in Amsterdam. This stirred enough interest for the technique to warrant a more formal description. Since then, research concerning anchor modeling has been carried out in a collaboration between the creators Olle Regardt and Lars Rönnbäck and a team at the Department of Computer and Systems Sciences, Stockholm University.
The first paper, in which anchor modeling is formalized, was presented in 2008 at the 28th International Conference on Conceptual Modeling and won the best paper award.
A commercial web site provides material on anchor modeling which is free to use under a Creative Commons license. An online modeling tool is also available, which is free to use and is open source.
== Basic notions ==
Anchor modeling has four basic modeling concepts: anchors, attributes, ties, and knots. Anchors are used to model entities and events, attributes are used to model properties of anchors, ties model the relationships between anchors, and knots are used to model shared properties, such as states. Attributes and ties can be historized when changes in the information they model need to be kept.
An example model showing the different graphical symbols for all the concepts can be seen below. The symbols resemble those used in entity–relationship modeling, with a couple of extensions. A double outline on an attribute or tie indicates that a history of changes is kept. The knot symbol (an outlined square with rounded edges) is also available, but knots cannot be historized. The anchor symbol is a solid square.
== Temporal aspects ==
Anchor modeling handles two types of informational evolution: structural changes and content changes. Changes to the structure of information are represented through extensions. The high degree of normalization makes it possible to non-destructively add the necessary modeling concepts needed to capture a change, in such a way that every previous schema always remains as a subset of the current schema. Since the existing schema is not touched, this gives the benefit of being able to evolve the database in a highly iterative manner and without causing any downtime.
Changes in the content of information are handled by emulating similar features of a temporal database in a relational database. In anchor modeling, pieces of information can be tied to points in time or to intervals of time (both open and closed). The time points when events occur are modeled using attributes, e.g., the birth dates of persons or the time of a purchase. The intervals of time in which a value is valid are captured through the historization of attributes and ties, e.g., the changes of hair color of a person or the period of time during which a person was married. In a relational database this is achieved by adding a single column, with a data type granular enough to capture the speed of the changes, to the table corresponding to the historized attribute or tie. This adds a slight complexity, as more than one row in the table has to be examined in order to know if an interval is closed or not.
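A historized attribute can be sketched as 3-tuples of (identity, value, valid-from), in the style of the Donald Duck outfit-color example in the relational representation section. To find the value in effect at a given point in time, one takes the row with the latest valid-from that is not after that point; the helper function here is an illustrative sketch, not part of any anchor modeling tool.

```python
# Historized attribute: (identity, value, start of validity interval).
outfit_color = [
    ("#44", "Orange", "1938-04-15"),
    ("#44", "Green",  "1939-04-28"),
    ("#44", "Blue",   "1940-12-13"),
]

def value_at(history, identity, at):
    """Latest value whose interval had started by `at` (ISO dates sort lexically)."""
    rows = [(start, value) for ident, value, start in history
            if ident == identity and start <= at]
    return max(rows)[1] if rows else None   # max on start date picks the current row

assert value_at(outfit_color, "#44", "1939-01-01") == "Orange"
assert value_at(outfit_color, "#44", "1941-01-01") == "Blue"
```

Note how answering the question requires comparing more than one row: a single row does not reveal whether its interval has already been closed by a later one.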
Points or intervals of time not directly related to the domain being modeled, such as the points of time information entered the database, are handled through the use of metadata in anchor modeling, rather than any of the above-mentioned constructs. If information about such changes to the database needs to be kept then bitemporal anchor modeling can be used, where in addition to updates, also delete statements become non-destructive.
== Relational representation ==
In anchor modeling there is a one-to-one mapping between the symbols used in the conceptual model and tables in the relational database. Every anchor, attribute, tie, and knot have a corresponding table in the database with an unambiguously defined structure. A conceptual model can thereby be translated to a relational database schema using simple automated rules, and vice versa. This is different from many other modeling techniques in which there are complex and sometimes subjective translation steps between the conceptual, logical, and physical levels.
Anchor tables contain a single column in which identities are stored. An identity is assumed to be the only property of an entity that is always present and immutable. As identities are rarely available from the domain being modeled, they are instead technically generated, e.g., from an incrementing number sequence.
An example of an anchor for the identities of the nephews of Donald Duck is a set of 1-tuples:
{⟨#42⟩, ⟨#43⟩, ⟨#44⟩}
Knots can be thought of as the combination of an anchor and a single attribute. Knot tables contain two columns, one for an identity and one for a value. Due to storing identities and values together, knots cannot be historized. Their usefulness comes from being able to reduce storage requirements and improve performance, since tables referencing knots can store a short value rather than a long string.
An example of a knot for genders is a set of 2-tuples:
{⟨#1, 'Male'⟩, ⟨#2, 'Female'⟩}
Static attribute tables contain two columns, one for the identity of the entity to which the value belongs and one for the actual property value. Historized attribute tables have an extra column for storing the starting point of a time interval. In a knotted attribute table, the value column is an identity that references a knot table.
An example of a static attribute for their names is a set of 2-tuples:
{⟨#42, 'Huey'⟩, ⟨#43, 'Dewey'⟩, ⟨#44, 'Louie'⟩}
An example of a knotted static attribute for their genders is a set of 2-tuples:
{⟨#42, #1⟩, ⟨#43, #1⟩, ⟨#44, #1⟩}
An example of a historized attribute for the (changing) colors of their outfits is a set of 3-tuples:
{⟨#44, 'Orange', 1938-04-15⟩, ⟨#44, 'Green', 1939-04-28⟩, ⟨#44, 'Blue', 1940-12-13⟩}
Static tie tables relate two or more anchors to each other, and contain two or more columns for storing the identities. Historized tie tables have an extra column for storing the starting point of a time interval. Knotted tie tables have an additional column for each referenced knot.
An example of a static tie for the sibling relationship is a set of 2-tuples:
{⟨#42, #43⟩, ⟨#42, #44⟩, ⟨#43, #42⟩, ⟨#43, #44⟩, ⟨#44, #42⟩, ⟨#44, #43⟩}
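The example tables above translate directly into sets of tuples, and a query that reconstructs an entity joins the anchor with its attribute tables on the identity column. The following sketch mirrors the nephew examples, joining names and knotted genders; the variable names are illustrative.

```python
anchor  = {("#42",), ("#43",), ("#44",)}                     # anchor: 1-tuples
gender  = {("#1", "Male"), ("#2", "Female")}                 # knot: identity, value
name    = {("#42", "Huey"), ("#43", "Dewey"), ("#44", "Louie")}  # static attribute
has_gen = {("#42", "#1"), ("#43", "#1"), ("#44", "#1")}      # knotted static attribute

# Joins on the identity column, expressed as dictionary lookups.
gender_by_id = dict(gender)
name_by_id   = dict(name)
gen_by_duck  = dict(has_gen)

nephews = {ident: (name_by_id[ident], gender_by_id[gen_by_duck[ident]])
           for (ident,) in anchor}
assert nephews["#43"] == ("Dewey", "Male")
```

The knot pays off in exactly this pattern: the attribute table stores the short identity "#1" three times, while the string 'Male' is stored only once.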
The resulting tables will all be in sixth normal form except for ties in which not all columns are part of the primary key.
== Compared to other approaches ==
In the 2000s, several data warehouse data modeling patterns have been introduced with the goal of achieving agile data warehouses, including ensemble modeling forms such as anchor modeling, data vault modeling, focal point modeling, and others.
=== Data vault comparison ===
In 2013 at the data modeling conference BI Podium in the Netherlands, Lars Rönnbäck presented a comparison of anchor modeling and data vault modeling.
== References ==
== External links ==
Anchor modeling blog, with video tutorials and research information
Online anchor modeling tool
This is a comparison of object–relational database management systems (ORDBMSs). Each system has at least some features of an object–relational database; they vary widely in their completeness and the approaches taken.
The following tables compare general and technical information; please see the individual products' articles for further information. Unless otherwise specified in footnotes, comparisons are based on the stable versions without any add-ons, extensions or external programs.
== Basic data ==
== Object features ==
Information about which fundamental ORDBMS features are implemented natively.
== Data types ==
Information about what data types are implemented natively.
== See also ==
Comparison of database administration tools
Comparison of object database management systems
Comparison of relational database management systems
List of relational database management systems
== Notes ==
== External links ==
Arvin.dk, comparison of different SQL implementations
In database systems, atomicity (from Ancient Greek: ἄτομος, romanized: átomos, lit. 'undividable') is one of the ACID (Atomicity, Consistency, Isolation, Durability) transaction properties. An atomic transaction is an indivisible and irreducible series of database operations such that either all occur, or none occur. A guarantee of atomicity prevents partial database updates from occurring, because they can cause greater problems than rejecting the whole series outright. As a consequence, the transaction cannot be observed to be in progress by another database client. At one moment in time, it has not yet happened, and at the next it has already occurred in whole (or nothing happened if the transaction was cancelled in progress).
An example of an atomic transaction is a monetary transfer from bank account A to account B. It consists of two operations, withdrawing the money from account A and saving it to account B. Performing these operations in an atomic transaction ensures that the database remains in a consistent state, that is, money is neither lost nor created if either of those two operations fails.
The same term is also used in the definition of First normal form in database systems, where it instead refers to the concept that the values for fields may not consist of multiple smaller values to be decomposed, such as a string into which multiple names, numbers, dates, or other types may be packed.
== Orthogonality ==
Atomicity does not behave completely orthogonally with regard to the other ACID properties of transactions. For example, isolation relies on atomicity to roll back the enclosing transaction in the event of an isolation violation such as a deadlock; consistency also relies on atomicity to roll back the enclosing transaction in the event of a consistency violation by an illegal transaction.
As a result of this, a failure to detect a violation and roll back the enclosing transaction may cause an isolation or consistency failure.
== Implementation ==
Typically, systems implement atomicity by providing some mechanism to indicate which transactions have started and which finished; or by keeping a copy of the data before any changes occurred (read-copy-update). Several filesystems have developed methods for avoiding the need to keep multiple copies of data, using journaling (see journaling file system). Databases usually implement this using some form of logging/journaling to track changes. The system synchronizes the logs (often the metadata) as necessary after changes have successfully taken place. Afterwards, crash recovery ignores incomplete entries. Although implementations vary depending on factors such as concurrency issues, the principle of atomicity, i.e. complete success or complete failure, remains.
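The logging approach can be sketched as a write-ahead log: every change is recorded before being applied, a commit marker makes a transaction's changes permanent, and recovery replays only transactions whose commit record reached the log. The log format and names here are invented for illustration.

```python
def recover(log_lines):
    """Rebuild state from a log, applying only committed transactions."""
    # First pass: collect the ids of transactions that reached COMMIT.
    committed = {parts[1] for parts in (line.split() for line in log_lines)
                 if parts[0] == "COMMIT"}
    # Second pass: replay writes in log order, ignoring incomplete entries.
    state = {}
    for line in log_lines:
        parts = line.split()                 # format: WRITE <txn> <key> <value>
        if parts[0] == "WRITE" and parts[1] in committed:
            state[parts[2]] = parts[3]
    return state

log = [
    "WRITE T1 balance_a 50",
    "WRITE T1 balance_b 150",
    "COMMIT T1",
    "WRITE T2 balance_a 0",    # T2 crashed before its COMMIT was logged
]
assert recover(log) == {"balance_a": "50", "balance_b": "150"}
```

T1's transfer is applied in whole, while T2's partial update disappears entirely: complete success or complete failure.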
Ultimately, any application-level implementation relies on operating-system functionality. At the file-system level, POSIX-compliant systems provide system calls such as open(2) and flock(2) that allow applications to atomically open or lock a file. At the process level, POSIX Threads provide adequate synchronization primitives.
At the hardware level, atomic operations such as test-and-set, fetch-and-add, compare-and-swap, or load-link/store-conditional are required, together with memory barriers. Portable operating systems cannot simply block interrupts to implement synchronization, since hardware that lacks concurrent execution (such as hyper-threading or multi-processing) is now extremely rare.
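The compare-and-swap primitive and the retry loop built on it can be sketched as follows. In real hardware the check-and-install step is a single atomic instruction; here a lock merely stands in for that guarantee, so this is a model of the concept rather than how CAS is implemented.

```python
import threading

class Cell:
    def __init__(self, value):
        self.value = value
        self._guard = threading.Lock()   # stand-in for the hardware's atomicity

    def compare_and_swap(self, expected, new):
        """Atomically: if value == expected, install new and report success."""
        with self._guard:
            if self.value == expected:
                self.value = new
                return True
            return False

def atomic_increment(cell):
    while True:                          # classic CAS retry loop
        old = cell.value
        if cell.compare_and_swap(old, old + 1):
            return                       # our update won; otherwise retry

counter = Cell(0)
threads = [threading.Thread(target=lambda: [atomic_increment(counter)
                                            for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter.value == 4000             # no increments lost to races
```

With a plain unsynchronized `cell.value += 1`, increments from concurrent threads could interleave and be lost; the retry loop detects such interference and tries again.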
== See also ==
Atomic operation
Transaction processing
Long-running transaction
Read-copy-update
== References == | Wikipedia/Atomicity_(database_systems) |
In computing, the network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, is not restricted to being a hierarchy or lattice.
The network model was adopted by the CODASYL Data Base Task Group in 1969 and underwent a major update in 1971. It is sometimes known as the CODASYL model for this reason. A number of network database systems became popular on mainframe and minicomputers through the 1970s before being widely replaced by relational databases in the 1980s.
== Overview ==
While the hierarchical database model structures data as a tree of records, with each record having one parent record and many children, the network model allows each record to have multiple parent and child records, forming a generalized graph structure. This property applies at two levels: the schema is a generalized graph of record types connected by relationship types (called "set types" in CODASYL), and the database itself is a generalized graph of record occurrences connected by relationships (CODASYL "sets"). Cycles are permitted at both levels.
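The two-level graph structure can be sketched with record occurrences connected by CODASYL-style sets: each set occurrence has one owner record and many members, and, unlike in the hierarchical model, a record may be a member of sets owned by different record types. The record and set names below are invented for illustration.

```python
orders = {"O1": "order 1", "O2": "order 2"}   # member record occurrences

# Two set types over the same member records: an order belongs both to a
# customer and to a product, so each order effectively has two "parents".
placed_by = {"C1": ["O1", "O2"]}              # customer -> its orders
line_of   = {"P1": ["O1"], "P2": ["O2"]}      # product  -> its orders

def owners(record, *set_types):
    """Navigate from a member record to its owner in each set type."""
    return [owner for s in set_types for owner, members in s.items()
            if record in members]

assert owners("O1", placed_by, line_of) == ["C1", "P1"]
```

Navigating owner-to-member and member-to-owner links like this is exactly the low-level, record-at-a-time access style that the relational model's declarative queries later displaced.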
The chief argument in favour of the network model, in comparison to the hierarchical model, was that it allowed a more natural modeling of relationships between entities. Although the model was widely implemented and used, it failed to become dominant for two main reasons. Firstly, IBM chose to stick to the hierarchical model with semi-network extensions in their established products such as IMS and DL/I. Secondly, it was eventually displaced by the relational model, which offered a higher-level, more declarative interface. Until the early 1980s the performance benefits of the low-level navigational interfaces offered by hierarchical and network databases were persuasive for many large-scale applications, but as hardware became faster, the extra productivity and flexibility of the relational model led to the gradual obsolescence of the network model in corporate enterprise usage.
== History ==
The network model's original inventor was Charles Bachman, and it was developed into a standard specification published in 1969 by the Conference on Data Systems Languages (CODASYL) Consortium. This was followed by a second publication in 1971, which became the basis for most implementations. Subsequent work continued into the early 1980s, culminating in an ISO specification, but this had little influence on products.
Bachman's influence is recognized in the term Bachman diagram, a diagrammatic notation that represents a database schema expressed using the network model. In a Bachman diagram, named rectangles represent record types, and arrows represent one-to-many relationship types between records (CODASYL set types).
== Database systems ==
Some well-known database systems that use the network model include:
IMAGE for HP 3000
Integrated Data Store (IDS)
IDMS (Integrated Database Management System)
Univac DMS-1100
Norsk Data SIBAS
Oracle CODASYL DBMS for OpenVMS (originally known as DEC VAX DBMS)
== See also ==
Navigational database
Graph database
== References ==
Kroenke, David M. (1997). Database Processing: Fundamentals, Design, and Implementation. Prentice-Hall.
== Further reading ==
Charles W. Bachman, The Programmer as Navigator. Turing Award lecture, Communications of the ACM, Volume 16, Issue 11, 1973, pp. 653–658, ISSN 0001-0782, doi:10.1145/355611.362534
== External links ==
"CODASYL Systems Committee "Survey of Data Base Systems"" (PDF). 1968-09-03. Archived from the original (PDF) on 2007-10-12.
Network (CODASYL) Data Model
SIBAS Database running on Norsk Data Servers
William H. Inmon (born 1945) is an American computer scientist, recognized by many as the father of the data warehouse. Inmon wrote the first book, held the first conference (with Arnie Barnett), wrote the first column in a magazine and was the first to offer classes in data warehousing. Inmon created the accepted definition of what a data warehouse is: a subject-oriented, non-volatile, integrated, time-variant collection of data in support of management's decisions. Compared with the approach of the other pioneering architect of data warehousing, Ralph Kimball, Inmon's approach is often characterized as a top-down approach.
== Biography ==
William H. Inmon was born July 20, 1945, in San Diego, California. He received his Bachelor of Science degree in mathematics from Yale University in 1967, and his Master of Science degree in computer science from New Mexico State University.
He worked for American Management Systems and Coopers & Lybrand before 1991, when he founded the company Prism Solutions, which he took public. In 1995 he founded Pine Cone Systems, which was renamed Ambeo later on. In 1999, he created a corporate information factory web site for his consulting business.
Inmon coined terms such as the government information factory, as well as data warehousing 2.0. Inmon promotes building, usage, and maintenance of data warehouses and related topics. His books include "Building the Data Warehouse" (1992, with later editions) and "DW 2.0: The Architecture for the Next Generation of Data Warehousing" (2008).
In July 2007, Inmon was named by Computerworld as one of the ten people that most influenced the first 40 years of the computer industry.
Inmon's association with data warehousing stems from the fact that he wrote the first book on data warehousing, held the first conference on data warehousing (with Arnie Barnett), wrote the first column in a magazine on data warehousing, has written over 1,000 articles on data warehousing in journals and newsletters, created the first fold-out wall chart for data warehousing, and conducted the first classes on data warehousing.
In 2012, Inmon developed and made public technology known as "textual disambiguation". Textual disambiguation applies context to raw text and reformats the raw text and context into a standard database format. Once raw text is passed through textual disambiguation, it can easily and efficiently be accessed and analyzed by standard business intelligence technology. Textual disambiguation is accomplished through the execution of TextualETL.
Inmon owns and operates Forest Rim Technology, a company that applies and implements data warehousing solutions executed through textual disambiguation and TextualETL.
== Awards ==
(2002) DAMA International Professional Achievement Award for, "major contributions as the 'father of data warehousing' and a recognized thought leader in decision support" from DAMA International, The Global Data Management Community.
(2018) Received a Lifetime Achievement Award from Data Modelling Zone.
(December 2020) Received a Lifetime Achievement Award from Project Management Institute (PMI).
== Publications ==
Bill Inmon has published more than 60 books in nine languages and 2,000 articles on data warehousing and data management.
Effective Data Base Design, Prentice-Hall, 1981, ISBN 9780132414890
Inmon, William H.; Bird, Thomas J. (1986), The Dynamics of Data Base, Prentice-Hall, ISBN 9780132214742
Information Engineering For The Practitioner : Putting Theory into Practice, Prentice-Hall, 1988, ISBN 9780134645797
Information Systems Architecture, Development in the 90's, QED Pub. Group, 1992, ISBN 978-0894354106
Building the Data Warehouse, Wiley, 1992, ISBN 9780764599446
Inmon, William H.; Kelley, Chuck (1993), Rdb/VMS: Developing the Data Warehouse, QED Pub. Group, ISBN 9780894354298
Inmon, William H.; Imhoff Claudia; Battas, Greg (1996) Building the Operational Data Store, Wiley, ISBN 0-471-12822-8
Inmon, William H.; Imhoff, Claudia; Sousa, Ryan (1998), Corporate Information Factory, Wiley, ISBN 978-0471399612
Inmon, William H.; Terdeman, R. H.; Imhoff, Claudia (2000), Exploration Warehousing: Turning Business Information into Business Opportunity, Wiley, ISBN 978-0471374732
Inmon, William H.; Oneil, Bonnie; Fryman, Lowell (2007), Business Metadata: Capturing Enterprise Knowledge, Elsevier Press, ISBN 978-0123737267
Inmon, William H.; Nesavich, Tony (2007), Tapping Into Unstructured Data: Integrating Unstructured Data and Textual Analytics Into Business Intelligence, Prentice-Hall, ISBN 978-0132360296
Inmon, William H.; Strauss, Derek; Neushloss, Genia (2008), DW 2.0: The Architecture for the Next Generation of Data Warehousing (Morgan Kaufman Series in Data Management Systems), Elsevier Press, ISBN 978-0123743190
Inmon, William H.; Linstedt, Daniel; Levins, Mary (2014), Data Architecture: A Primer for the Data Scientist, Academic Press, ISBN 978-0128169162
Brestoff, Nelson E.; Inmon, William H. (2015), Preventing Litigation: An Early Warning System to Get Big Value Out of Big Data, Business Expert Press, ISBN 978-1631573156
Data Lake Architecture: Designing the Data Lake and Avoiding the Garbage Dump, Technics Publications, 2016, ASIN B01DPEGSO4
Turning Text into Gold: Taxonomies and Textual Analytics, Technics Publications, 2017, ASIN B01N7OK2SZ
Turning Spreadsheets into Corporate Data, Technics Publications, 2017, ISBN 978-1634622288
Hearing the Voice of the Customer, Technics Publications, ISBN 978-1634623315
Inmon, William H.; Puppini, Francesco (2020), The Unified Star Schema: An Agile and Resilient Approach to Data Warehouse and Analytics Design, Technics Publications, ISBN 978-1634628877
Inmon, William H.; Srivastava, Ranjeet (2020), The Textual Warehouse, Technics Publications, ISBN 978-1634629546
Inmon, William H.; Levins, Mary; Srivastava, Ranjeet (2021), Building the Data Lakehouse, Technics Publications, ISBN 978-1634629669
== See also ==
Single version of the truth
The Kimball lifecycle, a high-level sequence of tasks used to design, develop, and deploy a data warehouse or business intelligence system
== References ==
== External links ==
Corporate Information Factory - Internet Archive's copy of www.inmoncif.com, retrieved on 2016-01-16 | Wikipedia/Corporate_information_factory |
An application server is a server that hosts applications or software that delivers a business application through a communication protocol. For a typical web application, the application server sits behind the web servers.
An application server framework is a service layer model. It includes software components available to a software developer through an application programming interface. An application server may have features such as clustering, fail-over, and load-balancing. The goal is for developers to focus on the business logic.
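As a sketch of this service-layer idea, Python's WSGI interface plays the role of the application server contract: the server framework handles networking and process management, while the developer supplies only business logic. The example below is a minimal illustration; the `greet` function and query handling are invented for the example.

```python
from wsgiref.util import setup_testing_defaults

# Hypothetical business-logic function: the framework's goal is to let
# the developer focus on code like this, not on sockets or protocols.
def greet(name):
    return f"Hello, {name}!"

# A minimal WSGI application: the server framework calls this callable
# for every request and handles networking, clustering, etc. itself.
def app(environ, start_response):
    name = environ.get("QUERY_STRING") or "world"
    body = greet(name).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the application without a real network, as a test client would.
environ = {}
setup_testing_defaults(environ)
environ["QUERY_STRING"] = "app-server"
status_holder = {}
def start_response(status, headers):
    status_holder["status"] = status
result = b"".join(app(environ, start_response))
```

In a deployment, a real server (behind a web server, as described above) would call `app` instead of this in-process harness.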
== Java application servers ==
Jakarta EE (formerly Java EE or J2EE) defines the core set of API and features of Java application servers.
The Jakarta EE infrastructure is partitioned into logical containers.
EJB container: Enterprise Beans are used to manage transactions. According to the Java BluePrints, the business logic of an application resides in Enterprise Beans—a modular server component providing many features, including declarative transaction management, and improving application scalability.
Web container: the web modules include Jakarta Servlets and Jakarta Server Pages (JSP).
JCA container (Jakarta Connectors)
JMS provider (Jakarta Messaging)
== Microsoft ==
Microsoft's .NET positions their middle-tier applications and services infrastructure in the Windows Server operating system and the .NET Framework technologies in the role of an application server. The Windows Application Server role includes Internet Information Services (IIS) to provide web server support, the .NET Framework to provide application support, ASP.NET to provide server side scripting, COM+ for application component communication, Message Queuing for multithreaded processing, and the Windows Communication Foundation (WCF) for application communication.
== PHP application servers ==
PHP application servers run and manage PHP applications.
Zend Server, built by Zend, provides application server functionality for PHP-based applications.
RoadRunner, built by Spiral Scout, is a high-performance PHP application server, load balancer, and process manager written in Go.
== Third-party ==
Mono (a cross platform open-source implementation of .NET supporting nearly all its features, with the exception of Windows OS-specific features), sponsored by Microsoft and released under the MIT License
== Mobile application servers ==
Mobile application servers provide data delivery to mobile devices.
=== Mobile features ===
Core capabilities of mobile application services include:
Data routing – data is packaged in smaller (REST) objects with some business logic to minimize demands on bandwidth and battery
Orchestration – transactions and data integration across multiple sources
Authentication service – secure connectivity to back-end systems is managed by the mobile middleware
Off-line support – allows users to access and use data even when the device is not connected
Security – data encryption, device control, SSL, call logging
=== Mobile challenges ===
Although most standards-based infrastructure (including SOAs) is designed to connect to any system independent of vendor, product, or technology, most enterprises have trouble connecting back-end systems to mobile applications, because mobile devices add the following technological challenges:
Limited resources – mobile devices have limited power and bandwidth
Intermittent connectivity – cellular service and Wi-Fi coverage are often not continuous
Difficult to secure – mobility and BYOD practices make it hard to secure mobile devices
== Deployment models ==
An application server can be deployed:
On premises
Cloud
Private cloud
Platform as a service (PaaS)
== See also ==
Application service provider
List of application servers
== References ==
| Wikipedia/Application_server
In computer science, a consistency model specifies a contract between the programmer and a system, wherein the system guarantees that if the programmer follows the rules for operations on memory, memory will be consistent and the results of reading, writing, or updating memory will be predictable. Consistency models are used in distributed systems like distributed shared memory systems or distributed data stores (such as filesystems, databases, optimistic replication systems or web caching). Consistency is different from coherence, which occurs in systems that are cached or cache-less, and is consistency of data with respect to all processors. Coherence deals with maintaining a global order in which writes to a single location or single variable are seen by all processors. Consistency deals with the ordering of operations to multiple locations with respect to all processors.
High level languages, such as C++ and Java, maintain the consistency contract by translating memory operations into low-level operations in a way that preserves memory semantics, reordering some memory instructions, and encapsulating required synchronization with library calls such as pthread_mutex_lock().
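A minimal Python analogue of this contract, using `threading.Lock` in place of `pthread_mutex_lock`: as long as every access to the shared counter goes through the lock, the result is predictable regardless of how the runtime interleaves the underlying memory operations. This is a sketch of the idea, not a statement about any particular compiler's translation.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:        # acquire: all prior writes become visible here
            counter += 1  # read-modify-write performed atomically
                          # release: our write is published to other threads

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock, counter is deterministically 4 * 10_000 = 40_000.
```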
== Example ==
Assume that the following case occurs:
The row X is replicated on nodes M and N
The client A writes row X to node M
After a period of time t, client B reads row X from node N
The consistency model determines whether client B will definitely see the write performed by client A, will definitely not, or cannot depend on seeing the write.
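The scenario can be modelled with a toy simulation; the node and client roles follow the example above, and the explicit replication step is invented for illustration.

```python
# Toy model of the example: row X is replicated on nodes M and N, and
# replication from M to N is asynchronous.  Whether client B sees
# client A's write depends on whether replication ran before the read.
class Node:
    def __init__(self):
        self.rows = {}

M, N = Node(), Node()
M.rows["X"] = "old"
N.rows["X"] = "old"

# Client A writes row X to node M.
M.rows["X"] = "new"

# Client B reads row X from node N before replication: a stale value is
# allowed under a weak model, forbidden under a strong (linearizable) one.
before = N.rows["X"]

# The asynchronous replication step eventually copies M's state to N.
N.rows["X"] = M.rows["X"]
after = N.rows["X"]
```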
== Types ==
Consistency models define rules for the apparent order and visibility of updates, and are on a continuum with tradeoffs. There are two methods to define and categorize consistency models: issue and view.
Issue
Issue method describes the restrictions that define how a process can issue operations.
View
View method describes the order of operations visible to processes.
For example, a consistency model can define that a process is not allowed to issue an operation until all previously issued operations are completed. Different consistency models enforce different conditions. One consistency model can be considered stronger than another if it requires all conditions of that model and more. In other words, a model with fewer constraints is considered a weaker consistency model.
These models define how the hardware needs to be laid out and, at a high level, how the programmer must code. The chosen model also affects how the compiler can re-order instructions. Generally, if control dependencies between instructions and writes to the same location are kept ordered, then the compiler can reorder as required. However, among the models described below, some allow writes before loads to be reordered while some do not.
== Strong consistency models ==
=== Strict consistency ===
Strict consistency is the strongest consistency model. Under this model, a write to a variable by any processor needs to be seen instantaneously by all processors.
The strict model diagram and non-strict model diagrams describe the time constraint – instantaneous. It can be better understood as though a global clock is present in which every write should be reflected in all processor caches by the end of that clock period. The next operation must happen only in the next clock period.
In the following diagram, P means "process" and the global clock's value is represented in the Sequence column.
This is the most rigid model. In this model, the programmer's expected result will be received every time. It is deterministic. Its practical relevance is restricted to a thought experiment and formalism, because instantaneous message exchange is impossible. It doesn't help in answering the question of conflict resolution in concurrent writes to the same data item, because it assumes concurrent writes to be impossible.
=== Sequential consistency ===
The sequential consistency model was proposed by Lamport (1979). It is a weaker memory model than strict consistency model. A write to a variable does not have to be seen instantaneously, however, writes to variables by different processors have to be seen in the same order by all processors. Sequential consistency is met if "the result of any execution is the same as if the (read and write) operations of all processes on the data store were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program." Adve and Gharachorloo, 1996 define two requirements to implement sequential consistency: program order and write atomicity.
Program order: Program order guarantees that each process issues a memory request ordered by its program.
Write atomicity: Write atomicity defines that memory requests are serviced based on the order of a single FIFO queue.
In sequential consistency, there is no notion of time or of most recent write operations. There is some interleaving of operations that is the same for all processes. A process can see the write operations of all processes, but only its own read operations. Program order within each processor and sequential ordering of operations between processors should be maintained. In order to preserve sequential order of execution between processors, all operations must appear to execute instantaneously or atomically with respect to every other processor.
These operations need only "appear" to be completed because it is physically impossible to send information instantaneously. For instance, in a system utilizing a single globally shared bus, once a bus line is posted with information, it is guaranteed that all processors will see the information at the same instant. Thus, passing the information to the bus line completes the execution with respect to all processors and has appeared to have been executed. Cache-less architectures or cached architectures with interconnect networks that are not instantaneous can contain a slow path between processors and memories. These slow paths can result in sequential inconsistency, because some memories receive the broadcast data faster than others.
Sequential consistency can produce non-deterministic results. This is because the sequence of sequential operations between processors can be different during different runs of the program. All memory operations need to happen in the program order.
Linearizability (also known as atomic consistency or atomic memory) can be defined as sequential consistency with a real-time constraint, obtained by considering a begin time and an end time for each operation. An execution is linearizable if each operation appears to take place atomically at some point between its begin time and its end time, and the resulting total order satisfies sequential consistency.
Verifying sequential consistency through model checking is undecidable in general, even for finite-state cache coherence protocols.
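For a single finite execution trace, however, the definition can be checked directly by enumerating interleavings. The following brute-force sketch (illustrative only, exponential in the number of operations) tests whether per-process operation lists admit a legal sequential order:

```python
def sequentially_consistent(programs, initial=0):
    """Is there one interleaving, respecting each process's program
    order, in which every read returns the latest write to its variable?
    programs: list of per-process op lists; ops are ('w', var, value)
    written or ('r', var, value) observed."""
    def search(indices, mem):
        if all(i == len(p) for i, p in zip(indices, programs)):
            return True
        for p, prog in enumerate(programs):
            i = indices[p]
            if i == len(prog):
                continue
            kind, var, val = prog[i]
            if kind == 'r' and mem.get(var, initial) != val:
                continue  # this interleaving cannot explain the read
            new_mem = mem if kind == 'r' else {**mem, var: val}
            nxt = list(indices)
            nxt[p] += 1
            if search(tuple(nxt), new_mem):
                return True
        return False
    return search(tuple(0 for _ in programs), {})

# Classic litmus test: each process writes its flag, then reads the
# other's.  Both reads returning 0 is NOT sequentially consistent...
sc1 = sequentially_consistent([
    [('w', 'x', 1), ('r', 'y', 0)],
    [('w', 'y', 1), ('r', 'x', 0)],
])
# ...but one process reading 0 while the other reads 1 is.
sc2 = sequentially_consistent([
    [('w', 'x', 1), ('r', 'y', 0)],
    [('w', 'y', 1), ('r', 'x', 1)],
])
```

This checks one concrete trace; it does not contradict the undecidability result above, which concerns all possible executions of a protocol.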
=== Causal consistency ===
Causal consistency, defined by Hutto and Ahamad (1990), is a weakening of the sequential consistency model by categorizing events into those causally related and those that are not. It requires that only write operations that are causally related need to be seen in the same order by all processes. For example, if an event b takes effect from an earlier event a, causal consistency guarantees that all processes see event b after event a. Tanenbaum et al., 2007 provide a stricter definition, that a data store is considered causally consistent under the following conditions:
Writes that are potentially causally related must be seen by all processes in the same order.
Concurrent writes may be seen in a different order on different machines.
This model relaxes sequential consistency on concurrent writes by a processor and on writes that are not causally related. Two writes become causally related if one write to a variable depends on a previous write to any variable, for example because the processor doing the second write has just read the result of the first write. The two writes could have been done by the same processor or by different processors.
As in sequential consistency, reads do not need to reflect changes instantaneously, however, they need to reflect all changes to a variable sequentially.
W(x)2 happens after W(x)1 due to the read made by P2 to x before W(x)2, hence this example is causally consistent under Hutto and Ahamad's definition (although not under Tanenbaum et al.'s, because W(x)2 and W(x)3 are not seen in the same order for all processes). However R(x)2 and R(x)3 happen in a different order on P3 and P4, hence this example is sequentially inconsistent.
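In practice, causal relationships like these are often tracked with vector clocks: write a causally precedes write b exactly when a's clock is componentwise less than or equal to b's (and not equal), and otherwise the writes are concurrent. A minimal sketch, with an illustrative three-process scenario:

```python
# Each process keeps a vector clock; comparing clocks recovers the
# happens-before relation used by causal consistency.
def happens_before(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def concurrent(a, b):
    return not happens_before(a, b) and not happens_before(b, a)

# P1 writes x (event a); P2 reads it and then writes x (event b):
# b inherits a's history, so every process must see a before b.
clock_a = (1, 0, 0)   # P1's write
clock_b = (1, 1, 0)   # P2's write after reading P1's
# A write by P3 that read neither is concurrent with both, so
# different processes may observe it in different orders.
clock_c = (0, 0, 1)
```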
=== Processor consistency ===
In order for consistency in data to be maintained and to attain scalable processor systems where every processor has its own memory, the processor consistency model was derived. All processors need to be consistent in the order in which they see writes done by one processor and in the way they see writes by different processors to the same location (coherence is maintained). However, they do not need to be consistent when the writes are by different processors to different locations.
Every write operation can be divided into several sub-writes to all memories. A read from one such memory can happen before the write to this memory completes. Therefore, the data read can be stale. Thus, a processor under PC can execute a younger load when an older store needs to be stalled. Read before write, read after read and write before write ordering is still preserved in this model.
The processor consistency model is similar to the PRAM consistency model with a stronger condition that defines all writes to the same memory location must be seen in the same sequential order by all other processes. Processor consistency is weaker than sequential consistency but stronger than the PRAM consistency model.
The Stanford DASH multiprocessor system implements a variation of processor consistency which is incomparable (neither weaker nor stronger) to Goodman's definitions. All processors need to be consistent in the order in which they see writes by one processor and in the way they see writes by different processors to the same location. However, they do not need to be consistent when the writes are by different processors to different locations.
=== Pipelined RAM consistency, or FIFO consistency ===
Pipelined RAM consistency (PRAM consistency) was presented by Lipton and Sandberg in 1988 as one of the first described consistency models. Due to its informal definition, there are in fact at least two subtly different implementations, one by Ahamad et al. and one by Mosberger.
In PRAM consistency, all processes view the operations of a single process in the same order that they were issued by that process, while operations issued by different processes can be viewed in different order from different processes. PRAM consistency is weaker than processor consistency. PRAM relaxes the need to maintain coherence to a location across all its processors. Here, reads to any variable can be executed before writes in a processor. Read before write, read after read and write before write ordering is still preserved in this model.
=== Cache consistency ===
Cache consistency requires that all write operations to the same memory location are performed in some sequential order. Cache consistency is weaker than processor consistency and incomparable with PRAM consistency.
=== Slow consistency ===
In slow consistency, if a process reads a value previously written to a memory location, it cannot subsequently read any earlier value from that location. Writes performed by a process are immediately visible to that process. Slow consistency is a weaker model than PRAM and cache consistency.
Example:
Slow memory diagram depicts a slow consistency example. The first process writes 1 to the memory location X and then it writes 1 to the memory location Y. The second process reads 1 from Y and it then reads 0 from X even though X was written before Y.
Hutto, Phillip W., and Mustaque Ahamad (1990) illustrate that by appropriate programming, slow memory (consistency) can be expressive and efficient. They mention that slow memory has two valuable properties; locality and supporting reduction from atomic memory. They propose two algorithms to present the expressiveness of slow memory.
== Session guarantees ==
These four consistency models were proposed in a 1994 paper. They focus on guarantees in the situation where only a single user or application is making data modifications.
=== Monotonic read consistency ===
If a process reads the value of a data item x, any successive read operation on x by that process will always return that same value or a more recent value.
=== Monotonic write consistency ===
A write operation by a process on a data item X is completed before any successive write operation on X by the same process.
=== Read-your-writes consistency ===
A value written by a process on a data item X will always be available to a successive read operation performed by the same process on data item X.
=== Writes-follows-reads consistency ===
A write operation by a process on a data item x following a previous read operation on x by the same process is guaranteed to take place on the same or a more recent value of x that was read.
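As an illustration of the first of these guarantees, a client library can enforce monotonic reads by remembering the highest version it has returned and skipping any replica that is behind it. The following sketch is a hypothetical design, not any specific system's API:

```python
# Illustrative replica: a value plus a monotonically increasing version.
class Replica:
    def __init__(self, value, version):
        self.value, self.version = value, version

class SessionClient:
    def __init__(self):
        self.last_seen = 0  # highest version this session has read

    def read(self, replicas):
        # Accept only replicas at least as fresh as what we've seen,
        # guaranteeing the session never observes time going backwards.
        for r in replicas:
            if r.version >= self.last_seen:
                self.last_seen = max(self.last_seen, r.version)
                return r.value
        raise RuntimeError("no replica fresh enough for this session")

fresh = Replica("new", version=2)
stale = Replica("old", version=1)
client = SessionClient()
first = client.read([fresh, stale])   # reads version 2
second = client.read([stale, fresh])  # skips the stale replica
```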
== Weak memory consistency models ==
The following models require specific synchronization by programmers.
=== Weak ordering ===
Weak ordering classifies memory operations into two categories: data operations and synchronization operations. To enforce program order, a programmer needs to find at least one synchronisation operation in a program. Synchronization operations signal the processor to make sure it has completed and seen all previous operations done by all processors. Program order and atomicity are maintained only on synchronisation operations and not on all reads and writes. This was derived from the understanding that certain memory operations – such as those conducted in a critical section – need not be seen by all processors until after all operations in the critical section are completed. It assumes reordering memory operations to data regions between synchronisation operations does not affect the outcome of the program. This exploits the fact that programs written to be executed on a multi-processor system contain the required synchronization to make sure that data races do not occur and SC outcomes are always produced.
Coherence is not relaxed in this model. Once these requirements are met, all other "data" operations can be reordered. The way this works is that a counter tracks the number of data operations and until this counter becomes zero, the synchronisation operation isn't issued. Furthermore, no more data operations are issued unless all the previous synchronisations are completed. Memory operations in between two synchronisation variables can be overlapped and reordered without affecting the correctness of the program. This model ensures that write atomicity is always maintained, therefore no additional safety net is required for weak ordering.
In order to maintain weak ordering, write operations prior to a synchronization operation must be globally performed before the synchronization operation. Operations present after a synchronization operation should also be performed only after the synchronization operation completes. Therefore, accesses to synchronization variables is sequentially consistent and any read or write should be performed only after previous synchronization operations have completed.
There is high reliance on explicit synchronization in the program. For weak ordering models, the programmer must use atomic locking instructions such as test-and-set, fetch-and-op, store conditional, load linked or must label synchronization variables or use fences.
=== Release consistency ===
The release consistency model relaxes the weak consistency model by distinguishing the entrance synchronization operation from the exit synchronization operation. Under weak ordering, when a synchronization operation is to be seen, all operations in all processors need to be visible before the synchronization operation is done and the processor proceeds. However, under the release consistency model, during the entry to a critical section, termed as "acquire", all operations with respect to the local memory variables need to be completed. During the exit, termed as "release", all changes made by the local processor should be propagated to all other processors. Coherence is still maintained.
The acquire operation is a load/read that is performed to access the critical section. A release operation is a store/write performed to allow other processors to use the shared variables.
Among synchronization variables, sequential consistency or processor consistency can be maintained. Using SC, all competing synchronization variables should be processed in order. However, with PC, a pair of competing variables need to only follow this order. Younger acquires can be allowed to happen before older releases.
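The acquire/release discipline can be sketched as a toy model in which writes inside a critical section go to a processor-local buffer and become globally visible only on release (all names are illustrative; this is not a hardware-accurate simulation):

```python
# Shared memory starts with initial values for both variables.
shared = {"x": 0, "y": 0}

class Processor:
    def __init__(self):
        self.local = {}

    def write(self, var, value):
        self.local[var] = value   # buffered locally, not yet visible

    def release(self):
        shared.update(self.local) # publish all local writes at once
        self.local.clear()

    def acquire(self):
        return dict(shared)       # observe everything published so far

p1, p2 = Processor(), Processor()
p1.write("x", 1)
p1.write("y", 2)
seen_before = p2.acquire()  # p1 has not released: old values observed
p1.release()
seen_after = p2.acquire()   # after the release, both writes are visible
```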
==== RCsc and RCpc ====
There are two types of release consistency: release consistency with sequential consistency (RCsc) and release consistency with processor consistency (RCpc). The suffix denotes which type of consistency is applied to the operations designated below as special.
There are special (cf. ordinary) memory operations, themselves consisting of two classes of operations: sync or nsync operations. The latter are operations not used for synchronisation; the former are, and consist of acquire and release operations. An acquire is effectively a read memory operation used to obtain access to a certain set of shared locations. Release, on the other hand, is a write operation that is performed for granting permission to access the shared locations.
For sequential consistency (RCsc), the constraints are:
acquire → all,
all → release,
special → special.
For processor consistency (RCpc) the write to read program order is relaxed, having constraints:
acquire → all,
all → release,
special → special (except when special write is followed by special read).
Note: the above notation A → B implies that if operation A precedes operation B in program order, then that ordering is enforced in the execution.
=== Entry consistency ===
This is a variant of the release consistency model. It also requires the use of acquire and release instructions to explicitly state an entry or exit to a critical section. However, under entry consistency, every shared variable is assigned a synchronization variable specific to it. This way, only when the acquire is to variable x, all operations related to x need to be completed with respect to that processor. This allows concurrent operations of different critical sections of different shared variables to occur. Concurrency cannot be seen for critical operations on the same shared variable. Such a consistency model will be useful when different matrix elements can be processed at the same time.
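A rough analogue of this per-variable discipline is giving each shared variable its own lock, so that critical sections guarding different variables can run concurrently; the sketch below is an illustration of the programming model, not a real entry-consistency implementation.

```python
import threading

# One synchronization object (lock) per shared variable, as entry
# consistency prescribes.  Updates to x and y can proceed in parallel.
locks = {"x": threading.Lock(), "y": threading.Lock()}
data = {"x": 0, "y": 0}

def update(var, amount, times):
    for _ in range(times):
        with locks[var]:      # acquire only this variable's lock
            data[var] += amount

tx = threading.Thread(target=update, args=("x", 1, 1000))
ty = threading.Thread(target=update, args=("y", 1, 1000))
tx.start(); ty.start()
tx.join(); ty.join()
```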
=== Local consistency ===
In local consistency, each process performs its own operations in the order defined by its program. There is no constraint on the ordering in which the write operations of other processes appear to be performed. Local consistency is the weakest consistency model in shared memory systems.
=== General consistency ===
In general consistency, all the copies of a memory location are eventually identical after all processes' writes are completed.
=== Eventual consistency ===
Eventual consistency is a weak consistency model for a system that lacks simultaneous updates. It guarantees that, if no new updates are made for a sufficiently long time, all replicas eventually become consistent.
Most shared decentralized databases have an eventual consistency model, either BASE: basically available; soft state; eventually consistent, or a combination of ACID and BASE sometimes called SALT: sequential; agreed; ledgered; tamper-resistant, and also symmetric; admin-free; ledgered; and time-consensual.
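One common concrete realization of eventual consistency is last-writer-wins anti-entropy: replicas store (value, timestamp) pairs and periodically exchange state, with the newer timestamp winning. Once updates stop and every pair of replicas has gossiped, all copies converge. A minimal sketch (replica contents are invented for the example):

```python
# Merge two replica states, keeping the entry with the newer timestamp.
def merge(a, b):
    out = dict(a)
    for key, (value, ts) in b.items():
        if key not in out or ts > out[key][1]:
            out[key] = (value, ts)
    return out

r1 = {"x": ("apple", 1)}
r2 = {"x": ("banana", 2), "y": ("cherry", 1)}

# One round of pairwise gossip in each direction converges the replicas.
r1 = merge(r1, r2)
r2 = merge(r2, r1)
```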
== Relaxed memory consistency models ==
Some different consistency models can be defined by relaxing one or more requirements in sequential consistency called relaxed consistency models. These consistency models do not provide memory consistency at the hardware level. In fact, the programmers are responsible for implementing the memory consistency by applying synchronization techniques. The above models are classified based on four criteria and are detailed further.
There are four comparisons to define the relaxed consistency:
Relaxation
One way to categorize the relaxed consistency is to define which sequential consistency requirements are relaxed. We can have less strict models by relaxing either program order or write atomicity requirements defined by Adve and Gharachorloo, 1996. Program order guarantees that each process issues a memory request ordered by its program and write atomicity defines that memory requests are serviced based on the order of a single FIFO queue. In relaxing program order, any or all the ordering of operation pairs, write-after-write, read-after-write, or read/write-after-read, can be relaxed. In the relaxed write atomicity model, a process can view its own writes before any other processors.
Synchronizing vs. non-synchronizing
A synchronizing model can be defined by dividing the memory accesses into two groups and assigning different consistency restrictions to each group considering that one group can have a weak consistency model while the other one needs a more restrictive consistency model. In contrast, a non-synchronizing model assigns the same consistency model to the memory access types.
Issue vs. view-based
Issue method provides sequential consistency simulation by defining the restrictions for processes to issue memory operations, whereas view method describes the visibility restrictions on the order of events for processes.
Relative model strength
Some consistency models are more restrictive than others. In other words, strict consistency models enforce more constraints as consistency requirements. The strength of a model can be defined by the program order or atomicity relaxations and the strength of models can also be compared. Some models are directly related if they apply the same relaxations or more. On the other hand, the models that relax different requirements are not directly related.
Sequential consistency has two requirements, program order and write atomicity. Different relaxed consistency models can be obtained by relaxing these requirements. This is done so that, along with relaxed constraints, the performance increases, but the programmer is responsible for implementing the memory consistency by applying synchronisation techniques and must have a good understanding of the hardware.
Potential relaxations:
Write to read program order
Write to write program order
Read to read and read to write program orders
=== Relaxed write to read ===
An approach to improving performance at the hardware level is to relax the program order of a write followed by a read, which effectively hides the latency of write operations. This relaxation allows subsequent reads to proceed in a relaxed order with respect to previous writes from the same processor. Because of it, some programs may fail to give SC results, whereas others are still expected to give consistent results because the remaining program order constraints are enforced.
Three models fall under this category. The IBM 370 model is the strictest model. A read can be complete before an earlier write to a different address, but it is prohibited from returning the value of the write unless all the processors have seen the write. The SPARC V8 total store ordering model (TSO) model partially relaxes the IBM 370 Model, it allows a read to return the value of its own processor's write with respect to other writes to the same location i.e. it returns the value of its own write before others see it. Similar to the previous model, this cannot return the value of write unless all the processors have seen the write. The processor consistency model (PC) is the most relaxed of the three models and relaxes both the constraints such that a read can complete before an earlier write even before it is made visible to other processors.
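The write-to-read relaxation can be illustrated with a toy store-buffer model: each processor buffers its stores, a load forwards from the processor's own buffer when possible and otherwise reads memory, and buffered stores drain to memory later. This is how both processors in the classic litmus test can read 0. The sketch below is an illustration, not a faithful model of any of the three architectures.

```python
memory = {"x": 0, "y": 0}

class Proc:
    def __init__(self):
        self.buffer = []                 # FIFO store buffer

    def store(self, var, val):
        self.buffer.append((var, val))   # not yet globally visible

    def load(self, var):
        # Store-to-load forwarding: a processor sees its own latest
        # buffered write before other processors do.
        for v, val in reversed(self.buffer):
            if v == var:
                return val
        return memory[var]

    def drain(self):
        for var, val in self.buffer:
            memory[var] = val
        self.buffer.clear()

p1, p2 = Proc(), Proc()
p1.store("x", 1)
p2.store("y", 1)
r1 = p1.load("y")   # 0: p2's store is still in p2's buffer
r2 = p2.load("x")   # 0: p1's store is still in p1's buffer
p1.drain(); p2.drain()
```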
In Example A, the result is possible only in IBM 370 because read(A) is not issued until the write(A) in that processor is completed. On the other hand, this result is possible in TSO and PC because they allow the reads of the flags before the writes of the flags in a single processor.
In Example B the result is possible only with PC as it allows P2 to return the value of a write even before it is visible to P3. This won't be possible in the other two models.
To ensure sequential consistency in the above models, safety nets or fences are used to manually enforce the constraint. The IBM 370 model has some specialised serialisation instructions which are manually placed between operations. These instructions can consist of memory instructions or non-memory instructions such as branches. On the other hand, the TSO and PC models do not provide safety nets, but the programmers can still use read-modify-write operations to make it appear like the program order is still maintained between a write and a following read. In the case of TSO, program order appears to be maintained if the read or the write is replaced by (or already forms part of) a read-modify-write; this requires the write in the read-modify-write to be a 'dummy' that writes back the value read. Similarly for PC, program order seems to be maintained if the read is replaced by a write or is already part of a read-modify-write.
However, compiler optimisations cannot be done after exercising this relaxation alone. Compiler optimisations require the full flexibility of reordering any two operations in the PO, so the ability to reorder a write with respect to a read is not sufficiently helpful in this case.
=== Relaxed write to read and write to write ===
Some models relax the program order even further by relaxing even the ordering constraints between writes to different locations. The SPARC V8 partial store ordering model (PSO) is the only example of such a model. The ability to pipeline and overlap writes to different locations from the same processor is the key hardware optimisation enabled by PSO. PSO is similar to TSO in terms of atomicity requirements, in that it allows a processor to read the value of its own write and prevents other processors from reading another processor's write before the write is visible to all other processors. Program order between two writes is maintained by PSO using an explicit STBAR instruction. The STBAR is inserted into the write buffer in implementations with FIFO write buffers. A counter is used to determine when all the writes before the STBAR instruction have completed: a write issued to the memory system increments the counter, a write acknowledgement decrements it, and when the counter reaches zero all the previous writes are complete.
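The counter mechanism described above for retiring an STBAR can be sketched as follows (a toy model with invented names):

```python
# Outstanding-write counter as used to retire a store barrier: issuing
# a write increments it, each acknowledgement decrements it, and the
# barrier completes only once the counter drops back to zero.
class WriteBuffer:
    def __init__(self):
        self.outstanding = 0

    def issue_write(self):
        self.outstanding += 1         # write sent to the memory system

    def ack_write(self):
        self.outstanding -= 1         # memory system acknowledged it

    def stbar_complete(self):
        return self.outstanding == 0  # all earlier writes finished

buf = WriteBuffer()
buf.issue_write()
buf.issue_write()
blocked = not buf.stbar_complete()    # two writes still outstanding
buf.ack_write()
buf.ack_write()
done = buf.stbar_complete()           # barrier may now retire
```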
In examples A and B, PSO allows both of these non-sequentially consistent results. The safety net that PSO provides is similar to TSO's: it imposes program order from a write to a read and enforces write atomicity.
As with the previous models, the relaxations allowed by PSO are not sufficiently flexible to be useful for compiler optimisation, which requires much more flexible reordering.
=== Relaxing read and read to write program orders: Alpha, RMO, and PowerPC ===
In some models, all operations to different locations are relaxed. A read or write may be reordered with respect to a different read or write in a different location. The weak ordering may be classified under this category and two types of release consistency models (RCsc and RCpc) also come under this model. Three commercial architectures are also proposed under this category of relaxation: the Digital Alpha, SPARC V9 relaxed memory order (RMO), and IBM PowerPC models.
These three commercial architectures provide explicit fence instructions as their safety nets. The Alpha model provides two kinds of fence instruction: the memory barrier (MB) and the write memory barrier (WMB). The MB operation can be used to maintain program order of any memory operation before the MB with respect to any memory operation after the barrier, while the WMB maintains program order only among writes. The SPARC V9 RMO model provides a MEMBAR instruction that can be customised to order previous reads and writes with respect to future read and write operations. There is no need to use read-modify-writes to achieve this order because the MEMBAR instruction can order a write with respect to a succeeding read. The PowerPC model uses a single fence instruction, SYNC. It is similar to the MB instruction, with the exception that reads can occur out of program order even if a SYNC is placed between two reads to the same location. The PowerPC model also differs from Alpha and RMO in terms of atomicity: it allows a write to be seen earlier than a read's completion, so a combination of read-modify-write operations may be required to create the illusion of write atomicity.
RMO and PowerPC allow the reordering of reads to the same location, and both models violate sequential consistency in examples A and B. An additional relaxation allowed in these models is that memory operations following a read operation can be overlapped and reordered with respect to that read. Alpha and RMO allow a read to return the value of another processor's early write. From a programmer's perspective, these models must maintain the illusion of write atomicity even though they allow the processor to read its own write early.
== Transactional memory models ==
The transactional memory model combines cache coherency and memory consistency models as a communication model for shared-memory systems supported by software or hardware; a transactional memory model provides both memory consistency and cache coherency. A transaction is a sequence of operations executed by a process that transforms data from one consistent state to another. A transaction either commits when there is no conflict or aborts. When a transaction commits, all of its changes become visible to the other processes on completion; when it aborts, all of its changes are discarded. Compared to relaxed consistency models, a transactional model is easier to use and can provide higher performance than a sequential consistency model.
== Other consistency models ==
Some other consistency models are as follows:
Causal+ consistency
Cross-Client Monotonicity
Delta consistency
Fork consistency
One-copy serializability
Serializability
Vector-field consistency
Weak consistency
Strong consistency
Several other consistency models have been conceived to express restrictions with respect to ordering or visibility of operations, or to deal with specific fault assumptions.
== Consistency and replication ==
Tanenbaum et al., 2007 define two main reasons for replication: reliability and performance. Reliability is achieved in a replicated file system by switching to another replica when the current replica fails; replication also protects data from corruption by providing multiple copies on different replicas. Replication improves performance by dividing the work. While replication can improve performance and reliability, it can cause consistency problems between the multiple copies of the data. The copies are consistent if a read operation returns the same value from all copies and a write operation, as a single atomic operation (transaction), updates all copies before any other operation takes place. Tanenbaum, Andrew, & Maarten Van Steen, 2007 refer to this type of consistency as tight consistency, provided by synchronous replication. However, applying global synchronization to keep all copies consistent is costly. One way to decrease the cost of global synchronization and improve performance is to weaken the consistency restrictions.
=== Data-centric consistency models ===
Tanenbaum et al., 2007 defines the consistency model as a contract between the software (processes) and memory implementation (data store). This model guarantees that if the software follows certain rules, the memory works correctly. Since, in a system without a global clock, defining the last operation among writes is difficult, some restrictions can be applied on the values that can be returned by a read operation. The goal of data-centric consistency models is to provide a consistent view on a data store where processes may carry out concurrent updates.
==== Consistent ordering of operations ====
Some consistency models such as sequential and also causal consistency models deal with the order of operations on shared replicated data in order to provide consistency. In these models, all replicas must agree on a consistent global ordering of updates.
===== Grouping operations =====
In grouping operations, accesses to synchronization variables are sequentially consistent. A process is allowed to access a synchronization variable only after all previous writes have completed. In other words, accesses to synchronization variables are not permitted until all operations on the synchronization variables have been completely performed.
=== Client-centric consistency models ===
In distributed systems, maintaining sequential consistency in order to control concurrent operations is essential. In some special data stores without simultaneous updates, client-centric consistency models can deal with inconsistencies in a less costly way.
=== Consistency protocols ===
The implementation of a consistency model is defined by a consistency protocol. Tanenbaum et al., 2007 illustrates some consistency protocols for data-centric models.
==== Continuous consistency ====
Continuous consistency was introduced by Yu and Vahdat (2000). In this model, the consistency semantics of an application are described using conits in the application. Since consistency requirements can differ with application semantics, Yu and Vahdat (2000) argue that a predefined uniform consistency model may not be an appropriate approach: the application should specify the consistency requirements that satisfy its semantics. In this model, an application specifies each consistency requirement as a conit (an abbreviation of "consistency unit"). A conit can be physical or logical and is used to measure consistency. Tanenbaum et al., 2007 describe the notion of a conit by giving an example.
There are three inconsistencies that can be tolerated by applications.
Deviation in numerical values
Numerical deviation bounds the difference between the conit value and the relative value of the last update. A weight can be assigned to writes to define their importance in a specific application, and the total weight of unseen writes for a conit can be defined as the numerical deviation in an application. There are two types of numerical deviation: absolute and relative.
Deviation in ordering
Ordering deviation is the discrepancy between the local order of writes in a replica and their relative ordering in the eventual final image.
Deviation in staleness between replicas
Staleness deviation defines the validity of the oldest write by bounding the difference between the current time and the time of the oldest write on a conit not seen locally. Each server has a local queue of uncertain writes whose actual order must be determined before they are applied to a conit. The maximal length of the uncertain-writes queue is the bound on ordering deviation. When the number of writes exceeds the limit, the server, instead of accepting newly submitted writes, attempts to commit the uncertain writes by communicating with other servers about the order in which the writes should be executed.
If all three deviation bounds are set to zero, the continuous consistency model reduces to strong consistency.
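The three deviation bounds can be sketched in code. Below is a minimal, illustrative Python model; the class and field names are invented here and are not taken from Yu and Vahdat's implementation:

```python
class Conit:
    """Tracks the three deviation bounds of continuous consistency
    for one consistency unit (conit). All thresholds are illustrative."""

    def __init__(self, max_numerical, max_order, max_staleness_s):
        self.max_numerical = max_numerical      # bound on total weight of unseen writes
        self.max_order = max_order              # bound on length of the uncertain-writes queue
        self.max_staleness_s = max_staleness_s  # bound on age of the oldest unseen write
        self.unseen_writes = []                 # list of (weight, timestamp) pairs

    def record_unseen_write(self, weight, timestamp):
        self.unseen_writes.append((weight, timestamp))

    def must_synchronize(self, now):
        """True if any deviation bound is exceeded, forcing synchronization."""
        numerical = sum(w for w, _ in self.unseen_writes)
        order = len(self.unseen_writes)
        oldest = min((t for _, t in self.unseen_writes), default=now)
        return (numerical > self.max_numerical
                or order > self.max_order
                or (now - oldest) > self.max_staleness_s)

# With all bounds set to zero, any unseen write forces synchronization,
# which corresponds to strong consistency.
strong = Conit(0, 0, 0)
strong.record_unseen_write(weight=1, timestamp=0.0)
print(strong.must_synchronize(now=1.0))  # True
```

Looser bounds let a replica defer synchronization until one of the three deviations grows too large.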
==== Primary-based protocols ====
Primary-based protocols can be considered a class of consistency protocols that are simpler to implement. For instance, sequential ordering is a popular consistency model when a consistent ordering of operations is required, and it can be implemented with a primary-based protocol. In these protocols, each data item in the data store has an associated primary that coordinates write operations on that data item.
===== Remote-write protocols =====
In the simplest primary-based protocol that supports replication, also known as primary-backup protocol, write operations are forwarded to a single server and read operations can be performed locally.
Example: Tanenbaum et al., 2007 give an example of a primary-backup protocol; the diagram of the primary-backup protocol illustrates it. When a client requests a write, the request is forwarded to the primary server. The primary sends the request to the backups to perform the update, receives an update acknowledgement from all backups, and then sends an acknowledgement of write completion to the client. Any client can read the latest available update locally. The trade-off of this protocol is that the client that issued the update request may have to wait a long time for the acknowledgement before it can continue. This problem can be solved by performing the update locally and then asking the other backups to perform their updates. The non-blocking primary-backup protocol does not guarantee consistency of the update on all backup servers, but it improves performance. In the primary-backup protocol, all processes see the same order of write operations, since the protocol orders all incoming writes by a globally unique time. Blocking protocols guarantee that processes view the result of the last write operation.
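The blocking remote-write scheme can be sketched as follows; the class and method names are illustrative, not part of any standard protocol definition:

```python
class Replica:
    """A backup server holding a local copy of the data store."""
    def __init__(self):
        self.store = {}

    def apply_update(self, key, value):
        self.store[key] = value
        return "ack"

class Primary(Replica):
    """Primary server in a blocking primary-backup protocol: a write
    returns to the client only after every backup has acknowledged."""
    def __init__(self, backups):
        super().__init__()
        self.backups = backups

    def write(self, key, value):
        self.apply_update(key, value)
        # Forward the update and block until all backups acknowledge.
        acks = [b.apply_update(key, value) for b in self.backups]
        assert all(a == "ack" for a in acks)
        return "write committed"

backups = [Replica() for _ in range(3)]
primary = Primary(backups)
primary.write("x", 42)

# Reads can then be served locally from any replica.
print(all(b.store["x"] == 42 for b in backups))  # True
```

The non-blocking variant would return to the client immediately after the primary's local update and forward to the backups asynchronously, trading consistency guarantees for latency.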
===== Local-write protocols =====
In primary-based local-write protocols, the primary copy moves between the processes that wish to perform an update. To update a data item, a process first moves the item to its own location. As a result, successive write operations can be performed locally, while each process can read its local copy of the data item. After the primary finishes its update, the update is forwarded to the other replicas, which all apply it locally. This non-blocking approach can improve performance.
The diagram of the local-write protocol depicts the local-write approach in primary-based protocols. A process requests a write operation on a data item x, and the current server becomes the new primary for x. The write operation is performed, and when the request is finished, the primary sends an update request to the other backup servers. Each backup sends an acknowledgement to the primary after finishing the update operation.
==== Replicated-write protocols ====
In replicated-write protocols, unlike the primary-based protocol, all updates are carried out to all replicas.
===== Active replication =====
In active replication, there is a process associated with each replica to perform the write operations. In other words, updates are sent to each replica in the form of an operation to be executed, and all updates must be performed in the same order at all replicas. As a result, a totally ordered multicast mechanism is required, and there is a scalability issue in implementing such a mechanism in large distributed systems. An alternative approach is to send each operation to a central coordinator (sequencer), which first assigns a sequence number to the operation and then forwards it to all replicas. However, the second approach does not solve the scalability problem either.
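The sequencer-based variant can be sketched as follows (the names are illustrative; a real system would enforce in-order delivery rather than sorting after the fact):

```python
import itertools

class Sequencer:
    """Central coordinator that assigns a global sequence number to each
    operation before forwarding it to every replica."""
    def __init__(self, replicas):
        self.replicas = replicas
        self.counter = itertools.count()

    def submit(self, op):
        seq = next(self.counter)
        for r in self.replicas:
            r.deliver(seq, op)

class OrderedReplica:
    """Applies operations strictly in sequence-number order."""
    def __init__(self):
        self.log = []

    def deliver(self, seq, op):
        self.log.append((seq, op))
        self.log.sort()  # stand-in for enforcing in-order delivery

replicas = [OrderedReplica() for _ in range(3)]
seq = Sequencer(replicas)
for op in ["inc x", "dec y", "inc x"]:
    seq.submit(op)

# All replicas see the same totally ordered log of operations.
print(replicas[0].log == replicas[1].log == replicas[2].log)  # True
```

The sequencer guarantees a total order cheaply, but it is itself a central bottleneck, which is why this variant does not remove the scalability problem.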
===== Quorum-based protocols =====
Voting is another approach used in replicated-write protocols. In this approach, a client requests and receives permission from multiple servers in order to read and write replicated data. As an example, suppose that in a distributed file system a file is replicated on N servers. To update the file, a client must send a request to at least N/2 + 1 servers and obtain their agreement to perform the update. After the agreement, the changes are applied to the file and a new version number is assigned to the updated file. Similarly, to read the replicated file, a client sends a request to N/2 + 1 servers in order to receive the associated version numbers from those servers. The read operation completes only if all received version numbers are the most recent version.
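A minimal sketch of the version-number scheme, assuming N = 5 and ignoring failures (the overlap of any two quorums of size N/2 + 1 is what guarantees a reader sees the latest version):

```python
N = 5
QUORUM = N // 2 + 1  # 3 of 5 servers must agree

class Server:
    def __init__(self):
        self.version = 0
        self.contents = ""

def quorum_write(servers, contents):
    """Apply the update on a write quorum and assign a new version number."""
    quorum = servers[:QUORUM]
    new_version = max(s.version for s in quorum) + 1
    for s in quorum:
        s.version = new_version
        s.contents = contents
    return new_version

def quorum_read(servers):
    """Read from a quorum and return the contents with the highest version.
    Any two quorums of size N/2 + 1 intersect, so the newest write is seen."""
    quorum = servers[-QUORUM:]  # deliberately a different subset of servers
    newest = max(quorum, key=lambda s: s.version)
    return newest.version, newest.contents

servers = [Server() for _ in range(N)]
quorum_write(servers, "v1 of the file")
version, contents = quorum_read(servers)
print(version, contents)  # 1 v1 of the file
```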
===== Cache-coherence protocols =====
In a replicated file system, a cache-coherence protocol provides the cache consistency while caches are generally controlled by clients. In many approaches, cache consistency is provided by the underlying hardware. Some other approaches in middleware-based distributed systems apply software-based solutions to provide the cache consistency.
Cache consistency models can differ in their coherence detection strategies, which define when inconsistencies occur. There are two approaches to detecting inconsistency: static and dynamic. In the static solution, a compiler determines which variables can cause cache inconsistency and inserts instructions that avoid the problem. In the dynamic solution, the server checks for inconsistencies at run time to control the consistency of cached data that has changed since it was cached.
The coherence enforcement strategy is another cache-coherence protocol. It defines how to provide the consistency in caches by using the copies located on the server. One way to keep the data consistent is to never cache the shared data. A server can keep the data and apply some consistency protocol such as primary-based protocols to ensure the consistency of shared data. In this solution, only private data can be cached by clients. In the case that shared data is cached, there are two approaches in order to enforce the cache coherence.
In the first approach, when shared data is updated, the server forwards an invalidation to all caches. In the second approach, the update itself is propagated. Most caching systems apply one of these two approaches or dynamically choose between them.
== See also ==
Cache coherence – Equivalence of all cached copies of a memory location
Distributed shared memory – software and hardware implementations in which each cluster node accesses a large shared memory
Non-uniform memory access – Computer memory design used in multiprocessing
== References ==
== Further reading ==
Paolo Viotti; Marko Vukolic (2016). "Consistency in Non-Transactional Distributed Storage Systems". ACM Computing Surveys. 49 (1): 19:1–19:34. arXiv:1512.00168. doi:10.1145/2926965. S2CID 118557.
Sezgin, Ali (August 2004). Formalization and Verification of Shared Memory (PDF) (PhD thesis). The University of Utah. Retrieved June 27, 2024. (contains many valuable references)
Yelick, Katherine; Bonachea, Dan; Wallace, Charles (2004), A Proposal for a UPC Memory Consistency Model, v1.0 (PDF), Lawrence Berkeley National Lab Tech Report, doi:10.2172/823757, LBNL-54983, archived from the original (PDF) on 20 May 2005
Mosberger, David (1993). "Memory Consistency Models". Operating Systems Review. 27 (1): 18–26. CiteSeerX 10.1.1.331.2924. doi:10.1145/160551.160553. S2CID 16123648.
Sarita V. Adve; Kourosh Gharachorloo (December 1996). "Shared Memory Consistency Models: A Tutorial" (PDF). IEEE Computer. 29 (12): 66–76. CiteSeerX 10.1.1.36.8566. doi:10.1109/2.546611. Archived from the original (PDF) on 2008-09-07. Retrieved 2008-05-28.
Steinke, Robert C.; Gary J. Nutt (2004). "A unified theory of shared memory consistency". Journal of the ACM. 51 (5): 800–849. arXiv:cs.DC/0208027. doi:10.1145/1017460.1017464. S2CID 3206071.
Terry, Doug (December 2013). "Replicated Data Consistency Explained Through Baseball". Communications of the ACM. 56 (12): 82–89. https://www.microsoft.com/en-us/research/wp-content/uploads/2011/10/ConsistencyAndBaseballReport.pdf
== External links ==
IETF slides
Memory Ordering in Modern Microprocessors, Part I and Part II, by Paul E. McKenney (2005). Linux Journal
The ACM Symposium on Principles of Database Systems (PODS) is an international research conference on database theory, and has been held yearly since 1982. It is sponsored by three Association for Computing Machinery SIGs, SIGAI, SIGACT, and SIGMOD. Since 1991, PODS has been held jointly with the ACM SIGMOD Conference, a research conference on systems aspects of data management.
PODS is regarded as one of the top conferences in the area of database theory and data algorithms. It holds the highest rating of A* in the CORE2021 ranking. The conference typically accepts between 20 and 40 papers each year, with acceptance rates fluctuating between 25% and 35%.
== External links ==
Official website
dblp: Symposium on Principles of Database Systems (PODS)
In computer science, ACID (atomicity, consistency, isolation, durability) is a set of properties of database transactions intended to guarantee data validity despite errors, power failures, and other mishaps. In the context of databases, a sequence of database operations that satisfies the ACID properties (which can be perceived as a single logical operation on the data) is called a transaction. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction.
In 1983, Andreas Reuter and Theo Härder coined the acronym ACID, building on earlier work by Jim Gray who named atomicity, consistency, and durability, but not isolation, when characterizing the transaction concept. These four properties are the major guarantees of the transaction paradigm, which has influenced many aspects of development in database systems.
According to Gray and Reuter, the IBM Information Management System supported ACID transactions as early as 1973 (although the acronym was created later).
BASE stands for basically available, soft state, and eventually consistent: the acronym highlights that BASE is the opposite of ACID, like their chemical counterparts. ACID databases prioritize consistency over availability: the whole transaction fails if an error occurs in any step. BASE databases instead prioritize availability over consistency: rather than failing the transaction, users can temporarily access inconsistent data; data consistency is achieved eventually, not immediately.
== Characteristics ==
The characteristics of these four properties as defined by Reuter and Härder are as follows:
=== Atomicity ===
Transactions are often composed of multiple statements. Atomicity guarantees that each transaction is treated as a single "unit", which either succeeds completely or fails completely: if any of the statements constituting a transaction fails to complete, the entire transaction fails and the database is left unchanged. An atomic system must guarantee atomicity in each and every situation, including power failures, errors, and crashes. A guarantee of atomicity prevents updates to the database from occurring only partially, which can cause greater problems than rejecting the whole series outright. As a consequence, the transaction cannot be observed to be in progress by another database client. At one moment in time, it has not yet happened, and at the next, it has already occurred in whole (or nothing happened if the transaction was cancelled in progress).
=== Consistency ===
Consistency ensures that a transaction can only bring the database from one consistent state to another, preserving database invariants: any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof. This prevents database corruption by an illegal transaction. An example of a database invariant is referential integrity, which guarantees the primary key–foreign key relationship.
=== Isolation ===
Transactions are often executed concurrently (e.g., multiple transactions reading and writing to a table at the same time). Isolation ensures that concurrent execution of transactions leaves the database in the same state that would have been obtained if the transactions were executed sequentially. Isolation is the main goal of concurrency control; depending on the isolation level used, the effects of an incomplete transaction might not be visible to other transactions.
=== Durability ===
Durability guarantees that once a transaction has been committed, it will remain committed even in the case of a system failure (e.g., power outage or crash). This usually means that completed transactions (or their effects) are recorded in non-volatile memory.
== Examples ==
The following examples further illustrate the ACID properties. In these examples, the database table has two columns, A and B. An integrity constraint requires that the value in A and the value in B must sum to 100. The following SQL code creates a table as described above:
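The code itself is not reproduced here; a minimal sketch using Python's sqlite3 module (the table and constraint names are assumptions) might look like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Two integer columns A and B whose values must always sum to 100.
conn.execute("""
    CREATE TABLE acidtest (
        A INTEGER,
        B INTEGER,
        CONSTRAINT sum_constraint CHECK (A + B = 100)
    )
""")
conn.execute("INSERT INTO acidtest (A, B) VALUES (30, 70)")  # satisfies A + B = 100

try:
    conn.execute("INSERT INTO acidtest (A, B) VALUES (30, 60)")  # violates the constraint
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The CHECK constraint is the database-enforced form of the integrity rule used throughout the examples below.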
=== Atomicity ===
Atomicity is the guarantee that a series of database operations in an atomic transaction will either all occur (a successful operation) or none will occur (an unsuccessful operation). The series of operations cannot be separated, with only some of them being executed, which makes the series "indivisible". A guarantee of atomicity prevents updates to the database from occurring only partially, which can cause greater problems than rejecting the whole series outright. In other words, atomicity means indivisibility and irreducibility. Alternatively, we may say that a logical transaction may be composed of several physical transactions; unless and until all component physical transactions are executed, the logical transaction has not occurred.
An example of an atomic transaction is a monetary transfer from bank account A to account B. It consists of two operations, withdrawing the money from account A and depositing it to account B. We would not want to see the amount removed from account A before we are sure it has also been transferred into account B. Performing these operations in an atomic transaction ensures that the database remains in a consistent state, that is, money is neither debited nor credited if either of those two operations fails.
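Such a transfer can be sketched with SQLite; the account names and the non-negative balance rule below are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER CHECK (balance >= 0))"
)
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 100), ("B", 0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Withdraw from src and deposit to dst as one atomic transaction:
    if either step fails, the rollback leaves both balances unchanged."""
    try:
        with conn:  # sqlite3 commits on success, rolls back on exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
        return True
    except sqlite3.IntegrityError:
        return False

transfer(conn, "A", "B", 30)    # succeeds: A = 70, B = 30
transfer(conn, "A", "B", 1000)  # violates the CHECK, rolled back entirely
print(dict(conn.execute("SELECT name, balance FROM accounts")))  # {'A': 70, 'B': 30}
```

The failed second transfer leaves no partial debit behind, which is exactly the atomicity guarantee described above.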
=== Consistency failure ===
Consistency is a very general term, which demands that the data must meet all validation rules. In the previous example, the validation is a requirement that A + B = 100. All validation rules must be checked to ensure consistency. Assume that a transaction attempts to subtract 10 from A without altering B. Because consistency is checked after each transaction, it is known that A + B = 100 before the transaction begins. If the transaction removes 10 from A successfully, atomicity will be achieved. However, a validation check will show that A + B = 90, which is inconsistent with the rules of the database. The entire transaction must be canceled and the affected rows rolled back to their pre-transaction state. If there had been other constraints, triggers, or cascades, every single change operation would have been checked in the same way as above before the transaction was committed. Similar issues may arise with other constraints. We may have required the data types of both A and B to be integers. If we were then to enter, say, the value 13.5 for A, the transaction will be canceled, or the system may give rise to an alert in the form of a trigger (if/when the trigger has been written to this effect). Another example would be integrity constraints, which would not allow us to delete a row in one table whose primary key is referred to by at least one foreign key in other tables.
=== Isolation failure ===
To demonstrate isolation, we assume two transactions execute at the same time, each attempting to modify the same data. One of the two must wait until the other completes in order to maintain isolation.
Consider two transactions:
T1 transfers 10 from A to B.
T2 transfers 20 from B to A.
Combined, there are four actions:
T1 subtracts 10 from A.
T1 adds 10 to B.
T2 subtracts 20 from B.
T2 adds 20 to A.
If these operations are performed in order, isolation is maintained, although T2 must wait. Consider what happens if T1 fails halfway through. The database eliminates T1's effects, and T2 sees only valid data.
By interleaving the transactions, the actual order of actions might be:
T1 subtracts 10 from A.
T2 subtracts 20 from B.
T2 adds 20 to A.
T1 adds 10 to B.
Again, consider what happens if T1 fails while modifying B in Step 4. By the time T1 fails, T2 has already modified A; it cannot be restored to the value it had before T1 without leaving an invalid database. This is known as a write-write contention, because two transactions attempted to write to the same data field. In a typical system, the problem would be resolved by reverting to the last known good state, canceling the failed transaction T1, and restarting the interrupted transaction T2 from the good state.
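The resolution by locking can be sketched as follows. This is a minimal illustration in Python, not a real DBMS mechanism; the fixed lock-acquisition order is an added detail to avoid deadlock:

```python
import threading

balances = {"A": 50, "B": 50}
locks = {name: threading.Lock() for name in balances}

def transfer(src, dst, amount):
    """Acquire locks on both rows (in a fixed global order to avoid
    deadlock) before touching either, in the spirit of two-phase locking."""
    first, second = sorted([src, dst])
    with locks[first], locks[second]:
        balances[src] -= amount
        balances[dst] += amount

t1 = threading.Thread(target=transfer, args=("A", "B", 10))  # T1
t2 = threading.Thread(target=transfer, args=("B", "A", 20))  # T2
t1.start(); t2.start()
t1.join(); t2.join()

# The locks force one serial order, so the invariant A + B = 100 holds.
print(balances["A"] + balances["B"])  # 100
```

Whichever thread wins the race runs to completion before the other touches the rows, so the harmful interleaving in steps 1–4 above cannot occur.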
=== Durability failure ===
Consider a transaction that transfers 10 from A to B. First, it removes 10 from A, then it adds 10 to B. At this point, the user is told the transaction was a success. However, the changes are still queued in the disk buffer waiting to be committed to disk. Power fails and the changes are lost, but the user assumes (understandably) that the changes persist.
== Implementation ==
Processing a transaction often requires a sequence of operations that is subject to failure for a number of reasons. For instance, the system may have no room left on its disk drives, or it may have used up its allocated CPU time. There are two popular families of techniques: write-ahead logging and shadow paging. In both cases, locks must be acquired on all information to be updated, and depending on the level of isolation, possibly on all data that may be read as well. In write-ahead logging, durability is guaranteed by writing the prospective change to a persistent log before changing the database. That allows the database to return to a consistent state in the event of a crash. In shadowing, updates are applied to a partial copy of the database, and the new copy is activated when the transaction commits.
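A toy sketch of write-ahead logging, with an invented file format and class names, shows the essential ordering: log first, then apply:

```python
import json, os, tempfile

class WALStore:
    """Write-ahead logging in miniature: every change is appended to a
    persistent log before the in-memory 'database' is modified, so the
    state can be rebuilt after a crash by replaying the log."""

    def __init__(self, log_path):
        self.log_path = log_path
        self.data = {}
        self._replay()

    def _replay(self):
        if os.path.exists(self.log_path):
            with open(self.log_path) as f:
                for line in f:
                    record = json.loads(line)
                    self.data[record["key"]] = record["value"]

    def put(self, key, value):
        # 1. Durably log the intended change ...
        with open(self.log_path, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
            f.flush()
            os.fsync(f.fileno())
        # 2. ... only then apply it in memory.
        self.data[key] = value

log = os.path.join(tempfile.mkdtemp(), "wal.log")
store = WALStore(log)
store.put("x", 1)
store.put("x", 2)

# Simulate a crash: discard the in-memory state and recover from the log.
recovered = WALStore(log)
print(recovered.data)  # {'x': 2}
```

A real engine adds commit records, checkpointing, and log truncation, but the invariant is the same: no change reaches the database before it reaches the log.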
=== Locking vs. multiversioning ===
Many databases rely upon locking to provide ACID capabilities. Locking means that the transaction marks the data that it accesses so that the DBMS knows not to allow other transactions to modify it until the first transaction succeeds or fails. The lock must always be acquired before processing data, including data that is read but not modified. Non-trivial transactions typically require a large number of locks, resulting in substantial overhead as well as blocking other transactions. For example, if user A is running a transaction that has to read a row of data that user B wants to modify, user B must wait until user A's transaction completes. Two-phase locking is often applied to guarantee full isolation.
An alternative to locking is multiversion concurrency control, in which the database provides each reading transaction the prior, unmodified version of data that is being modified by another active transaction. This allows readers to operate without acquiring locks, i.e., writing transactions do not block reading transactions, and readers do not block writers. Going back to the example, when user A's transaction requests data that user B is modifying, the database provides A with the version of that data that existed when user B started his transaction. User A gets a consistent view of the database even if other users are changing data. One implementation, namely snapshot isolation, relaxes the isolation property.
=== Distributed transactions ===
Guaranteeing ACID properties in a distributed transaction across a distributed database, where no single node is responsible for all data affecting a transaction, presents additional complications. Network connections might fail, or one node might successfully complete its part of the transaction and then be required to roll back its changes because of a failure on another node. The two-phase commit protocol (not to be confused with two-phase locking) provides atomicity for distributed transactions to ensure that each participant in the transaction agrees on whether the transaction should be committed or not. Briefly, in the first phase, one node (the coordinator) interrogates the other nodes (the participants), and only when all reply that they are prepared does the coordinator, in the second phase, formalize the transaction.
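The two phases can be sketched as follows. This is a minimal, failure-free illustration; a real protocol must also handle timeouts, participant recovery, and coordinator crashes:

```python
class Participant:
    """One node in a two-phase commit: votes in phase one and obeys the
    coordinator's decision in phase two."""
    def __init__(self, can_commit=True):
        self.can_commit = can_commit
        self.state = "initial"

    def prepare(self):
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def finish(self, decision):
        self.state = decision

def two_phase_commit(participants):
    # Phase 1: the coordinator asks every participant whether it is prepared.
    votes = [p.prepare() for p in participants]
    # Phase 2: commit only if all voted yes; otherwise abort everywhere.
    decision = "committed" if all(votes) else "aborted"
    for p in participants:
        p.finish(decision)
    return decision

print(two_phase_commit([Participant(), Participant()]))        # committed
print(two_phase_commit([Participant(), Participant(False)]))   # aborted
```

A single "no" vote in phase one forces every participant to abort, which is how the protocol preserves atomicity across nodes.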
== See also ==
Eventual consistency (BASE: basically available)
CAP theorem
Concurrency control
Java Transaction API
Open Systems Interconnection
Transactional NTFS
Two-phase commit protocol
CRUD
== References == | Wikipedia/ACID_(computer_science) |
An application programming interface (API) is a connection between computers or between computer programs. It is a type of software interface, offering a service to other pieces of software. A document or standard that describes how to build such a connection or interface is called an API specification. A computer system that meets this standard is said to implement or expose an API. The term API may refer either to the specification or to the implementation.
In contrast to a user interface, which connects a computer to a person, an application programming interface connects computers or pieces of software to each other. It is not intended to be used directly by a person (the end user) other than a computer programmer who is incorporating it into software. An API is often made up of different parts which act as tools or services that are available to the programmer. A program or a programmer that uses one of these parts is said to call that portion of the API. The calls that make up the API are also known as subroutines, methods, requests, or endpoints. An API specification defines these calls, meaning that it explains how to use or implement them.
One purpose of APIs is to hide the internal details of how a system works, exposing only those parts a programmer will find useful and keeping them consistent even if the internal details later change. An API may be custom-built for a particular pair of systems, or it may be a shared standard allowing interoperability among many systems.
The term API is often used to refer to web APIs, which allow communication between computers that are joined by the internet. There are also APIs for programming languages, software libraries, computer operating systems, and computer hardware. APIs originated in the 1940s, though the term did not emerge until the 1960s and 70s.
== Purpose ==
An API opens a software system to interactions from the outside. It allows two software systems to communicate across a boundary — an interface — using mutually agreed-upon signals. In other words, an API connects software entities together. Unlike a user interface, an API is typically not visible to users. It is an "under the hood" portion of a software system, used for machine-to-machine communication.
A well-designed API exposes only objects or actions needed by software or software developers. It hides details that have no use. This abstraction simplifies programming.
Building software using APIs has been compared to using building-block toys, such as Lego bricks. Software services or software libraries are analogous to the bricks; they may be joined together via their APIs, composing a new software product. The process of joining is called integration.
As an example, consider a weather sensor that offers an API. When a certain message is transmitted to the sensor, it will detect the current weather conditions and reply with a weather report. The message that activates the sensor is an API call, and the weather report is an API response. A weather forecasting app might integrate with a number of weather sensor APIs, gathering weather data from throughout a geographical area.
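The sensor example above can be sketched in a few lines of Python; the class and field names here are hypothetical, invented only to illustrate the call-and-response shape, not taken from any real sensor API.

```python
class WeatherSensor:
    """Simulated sensor exposing a tiny API."""

    def __init__(self, temperature_c, condition):
        self._temperature_c = temperature_c
        self._condition = condition

    def get_report(self):
        """API call: returns the current weather report (the API response)."""
        return {"temperature_c": self._temperature_c, "condition": self._condition}


class ForecastApp:
    """An app integrating several sensor APIs across a geographical area."""

    def __init__(self, sensors):
        self._sensors = sensors

    def average_temperature(self):
        # Gather one report per sensor, then aggregate.
        reports = [s.get_report() for s in self._sensors]
        return sum(r["temperature_c"] for r in reports) / len(reports)
```

The app never needs to know how a sensor measures the weather; it only relies on the agreed-upon `get_report` call.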
An API is often compared to a contract. It represents an agreement between parties: a service provider who offers the API and the software developers who rely upon it. If the API remains stable, or if it changes only in predictable ways, developers' confidence in the API will increase. This may increase their use of the API.
== History of the term ==
The term API initially described an interface only for end-user-facing programs, known as application programs. This origin is still reflected in the name "application programming interface." Today, the term is broader, including also utility software and even hardware interfaces.
The idea of the API is much older than the term itself. British computer scientists Maurice Wilkes and David Wheeler worked on a modular software library in the 1940s for EDSAC, an early computer. The subroutines in this library were stored on punched paper tape organized in a filing cabinet. This cabinet also contained what Wilkes and Wheeler called a "library catalog" of notes about each subroutine and how to incorporate it into a program. Today, such a catalog would be called an API (or an API specification or API documentation) because it instructs a programmer on how to use (or "call") each subroutine that the programmer needs.
Wilkes and Wheeler's book The Preparation of Programs for an Electronic Digital Computer contains the first published API specification. Joshua Bloch considers that Wilkes and Wheeler "latently invented" the API, because it is more of a concept that is discovered than invented.
The term "application program interface" (without an -ing suffix) is first recorded in a paper called Data structures and techniques for remote computer graphics presented at an AFIPS conference in 1968. The authors of this paper use the term to describe the interaction of an application—a graphics program in this case—with the rest of the computer system. A consistent application interface (consisting of Fortran subroutine calls) was intended to free the programmer from dealing with idiosyncrasies of the graphics display device, and to provide hardware independence if the computer or the display were replaced.
The term was introduced to the field of databases by C. J. Date in a 1974 paper called The Relational and Network Approaches: Comparison of the Application Programming Interface. An API became a part of the ANSI/SPARC framework for database management systems. This framework treated the application programming interface separately from other interfaces, such as the query interface. Database professionals in the 1970s observed these different interfaces could be combined; a sufficiently rich application interface could support the other interfaces as well.
This observation led to APIs that supported all types of programming, not just application programming. By 1990, the API was defined simply as "a set of services available to a programmer for performing certain tasks" by technologist Carl Malamud.
The idea of the API was expanded again with the dawn of remote procedure calls and web APIs. As computer networks became common in the 1970s and 80s, programmers wanted to call libraries located not only on their local computers, but on computers located elsewhere. These remote procedure calls were well supported by the Java language in particular. In the 1990s, with the spread of the internet, standards like CORBA, COM, and DCOM competed to become the most common way to expose API services.
Roy Fielding's dissertation Architectural Styles and the Design of Network-based Software Architectures at UC Irvine in 2000 outlined Representational state transfer (REST) and described the idea of a "network-based Application Programming Interface" that Fielding contrasted with traditional "library-based" APIs. XML and JSON web APIs saw widespread commercial adoption beginning in 2000 and continuing as of 2021. The web API is now the most common meaning of the term API.
The Semantic Web proposed by Tim Berners-Lee in 2001 included "semantic APIs" that recast the API as an open, distributed data interface rather than a software behavior interface. Proprietary interfaces and agents became more widespread than open ones, but the idea of the API as a data interface took hold. Because web APIs are widely used to exchange data of all kinds online, API has become a broad term describing much of the communication on the internet. When used in this way, the term API has overlap in meaning with the term communication protocol.
== Types ==
=== Libraries and frameworks ===
The interface to a software library is one type of API. The API describes and prescribes the "expected behavior" (a specification) while the library is an "actual implementation" of this set of rules.
A single API can have multiple implementations (or none, being abstract) in the form of different libraries that share the same programming interface.
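This separation of specification from implementation can be sketched with Python's abstract base classes; the `Cache` interface and its two implementations below are illustrative, not taken from any real library.

```python
from abc import ABC, abstractmethod


class Cache(ABC):
    """The API: a specification of expected behavior, with no implementation."""

    @abstractmethod
    def put(self, key, value): ...

    @abstractmethod
    def get(self, key): ...


class DictCache(Cache):
    """One implementation of the Cache API, backed by a dictionary."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


class NullCache(Cache):
    """Another implementation of the same API that stores nothing."""

    def put(self, key, value):
        pass

    def get(self, key):
        return None
```

Client code written against `Cache` works unchanged with either implementation, which is the point of keeping the interface separate.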
The separation of the API from its implementation can allow programs written in one language to use a library written in another. For example, because Scala and Java compile to compatible bytecode, Scala developers can take advantage of any Java API.
API use can vary depending on the type of programming language involved.
An API for a procedural language such as Lua could consist primarily of basic routines to execute code, manipulate data or handle errors, while an API for an object-oriented language, such as Java, would provide a specification of classes and their class methods. Hyrum's law states that "With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody." Meanwhile, several studies show that most applications that use an API tend to use a small part of the API.
Language bindings are also APIs. By mapping the features and capabilities of one language to an interface implemented in another language, a language binding allows a library or service written in one language to be used when developing in another language. Tools such as SWIG and F2PY, a Fortran-to-Python interface generator, facilitate the creation of such interfaces.
An API can also be related to a software framework: a framework can be based on several libraries implementing several APIs, but unlike the normal use of an API, the access to the behavior built into the framework is mediated by extending its content with new classes plugged into the framework itself.
Moreover, the overall program flow of control can be out of the control of the caller and in the framework's hands by inversion of control or a similar mechanism.
=== Operating systems ===
An API can specify the interface between an application and the operating system. POSIX, for example, specifies a set of common APIs that aim to enable an application written for a POSIX conformant operating system to be compiled for another POSIX conformant operating system.
Linux and Berkeley Software Distribution are examples of operating systems that implement the POSIX APIs.
Microsoft has shown a strong commitment to a backward-compatible API, particularly within its Windows API (Win32) library, so older applications may run on newer versions of Windows using an executable-specific setting called "Compatibility Mode".
An API differs from an application binary interface (ABI) in that an API is source code based while an ABI is binary based. For instance, POSIX provides APIs while the Linux Standard Base provides an ABI.
=== Remote APIs ===
Remote APIs allow developers to manipulate remote resources through protocols, specific standards for communication that allow different technologies to work together, regardless of language or platform.
For example, the Java Database Connectivity API allows developers to query many different types of databases with the same set of functions, while the Java remote method invocation API uses the Java Remote Method Protocol to allow invocation of functions that operate remotely, but appear local to the developer.
Therefore, remote APIs are useful in maintaining the object abstraction in object-oriented programming; a method call, executed locally on a proxy object, invokes the corresponding method on the remote object, using the remoting protocol, and acquires the result to be used locally as a return value.
A modification of the proxy object will also result in a corresponding modification of the remote object.
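The proxy pattern described above can be sketched in Python, with JSON standing in for the remoting protocol and a direct method call standing in for the network transport; all class and method names are hypothetical.

```python
import json


class RemoteCounter:
    """Stands in for an object living on another machine."""

    def __init__(self):
        self.value = 0

    def handle(self, message):
        # Decode the "wire" message and apply the requested method.
        request = json.loads(message)
        if request["method"] == "increment":
            self.value += request["amount"]
        return json.dumps({"result": self.value})


class CounterProxy:
    """Local proxy: method calls look local but run on the remote object."""

    def __init__(self, remote):
        self._remote = remote  # in a real system, a network connection

    def increment(self, amount):
        # Marshal the call, send it over the "protocol", unmarshal the result.
        request = json.dumps({"method": "increment", "amount": amount})
        reply = self._remote.handle(request)
        return json.loads(reply)["result"]
```

To the caller, `proxy.increment(2)` is an ordinary local method call; the marshalling and remote execution are hidden behind the API.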
=== Web APIs ===
Web APIs are the defined interfaces through which interactions happen between an enterprise and applications that use its assets. Such an interface is often governed by a service level agreement (SLA) that identifies the functional provider and exposes the service path or URL for its API users. An API approach is an architectural approach that revolves around providing a program interface to a set of services to different applications serving different types of consumers.
When used in the context of web development, an API is typically defined as a set of specifications, such as Hypertext Transfer Protocol (HTTP) request messages, along with a definition of the structure of response messages, usually in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format. An example might be a shipping company API that can be added to an eCommerce-focused website to facilitate ordering shipping services and automatically include current shipping rates, without the site developer having to enter the shipper's rate table into a web database. While "web API" historically has been virtually synonymous with web service, the recent trend (so-called Web 2.0) has been moving away from Simple Object Access Protocol (SOAP) based web services and service-oriented architecture (SOA) towards more direct representational state transfer (REST) style web resources and resource-oriented architecture (ROA). Part of this trend is related to the Semantic Web movement toward Resource Description Framework (RDF), a concept to promote web-based ontology engineering technologies. Web APIs allow the combination of multiple APIs into new applications known as mashups.
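The shipping-company example might look like the following sketch, where the JSON response body and its field names are invented for illustration; a real web API would define its own schema.

```python
import json

# Hypothetical body of an HTTP response from a shipping company's web API.
response_body = """
{
  "carrier": "ExampleShip",
  "rates": [
    {"service": "ground",    "days": 5, "price_usd": 8.99},
    {"service": "overnight", "days": 1, "price_usd": 29.99}
  ]
}
"""


def cheapest_rate(body):
    """Parse the JSON response and pick the lowest-priced service."""
    rates = json.loads(body)["rates"]
    return min(rates, key=lambda r: r["price_usd"])
```

The site developer only parses the structured response; the shipper's rate table never has to be copied into the site's own database.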
In the social media space, web APIs have allowed web communities to facilitate sharing content and data between communities and applications. In this way, content that is created in one place dynamically can be posted and updated to multiple locations on the web. For example, Twitter's REST API allows developers to access core Twitter data and the Search API provides methods for developers to interact with Twitter Search and trends data.
== Design ==
The design of an API has significant impact on its usage. The principle of information hiding describes the role of programming interfaces as enabling modular programming by hiding the implementation details of the modules so that users of modules need not understand the complexities inside the modules. Thus, the design of an API attempts to provide only the tools a user would expect. The design of programming interfaces represents an important part of software architecture, the organization of a complex piece of software.
== Release policies ==
APIs are one of the more common ways technology companies integrate. Those that provide and use APIs are considered as being members of a business ecosystem.
The main policies for releasing an API are:
Private: The API is for internal company use only.
Partner: Only specific business partners can use the API. For example, vehicle for hire companies such as Uber and Lyft allow approved third-party developers to directly order rides from within their apps. This allows the companies to exercise quality control by curating which apps have access to the API, and provides them with an additional revenue stream.
Public: The API is available for use by the public. For example, Microsoft makes the Windows API public, and Apple releases its API Cocoa, so that software can be written for their platforms. Not all public APIs are generally accessible by everybody. For example, Internet service providers like Cloudflare or Voxility use RESTful APIs to allow customers and resellers access to their infrastructure information, DDoS stats, network performance or dashboard controls. Access to such APIs is granted either by "API tokens" or customer status validations.
=== Public API implications ===
An important factor when an API becomes public is its "interface stability". Changes to the API—for example adding new parameters to a function call—could break compatibility with the clients that depend on that API.
When parts of a publicly presented API are subject to change and thus not stable, such parts of a particular API should be documented explicitly as "unstable". For example, in the Google Guava library, the parts that are considered unstable, and that might change soon, are marked with the Java annotation @Beta.
A public API can sometimes declare parts of itself as deprecated or rescinded. This usually means that part of the API should be considered a candidate for being removed, or modified in a backward incompatible way. Therefore, these changes allow developers to transition away from parts of the API that will be removed or not supported in the future.
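In Python, for instance, a deprecated part of a public API is conventionally flagged with the standard `warnings` module, giving clients time to transition; the function names below are hypothetical.

```python
import warnings


def new_endpoint(x):
    """The supported replacement API."""
    return x * 2


def old_endpoint(x):
    """Deprecated part of the public API, kept for backward compatibility."""
    warnings.warn(
        "old_endpoint() is deprecated; use new_endpoint() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    # Delegate to the replacement so behavior stays consistent.
    return new_endpoint(x)
```

Callers still get correct results, but tooling and test runs surface the warning, prompting migration before the old call is removed.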
Client code may contain innovative or opportunistic usages that were not intended by the API designers. In other words, for a library with a significant user base, when an element becomes part of the public API, it may be used in diverse ways.
On February 19, 2020, Akamai published their annual “State of the Internet” report, showcasing the growing trend of cybercriminals targeting public API platforms at financial services worldwide. From December 2017 through November 2019, Akamai witnessed 85.42 billion credential violation attacks. About 20%, or 16.55 billion, were against hostnames defined as API endpoints. Of these, 473.5 million have targeted financial services sector organizations.
== Documentation ==
API documentation describes what services an API offers and how to use those services, aiming to cover everything a client would need to know for practical purposes.
Documentation is crucial for the development and maintenance of applications using the API.
API documentation is traditionally found in documentation files but can also be found in social media such as blogs, forums, and Q&A websites.
Traditional documentation files are often presented via a documentation system, such as Javadoc or Pydoc, that has a consistent appearance and structure.
However, the types of content included in the documentation differ from API to API.
In the interest of clarity, API documentation may include a description of classes and methods in the API as well as "typical usage scenarios, code snippets, design rationales, performance discussions, and contracts", but implementation details of the API services themselves are usually omitted. It can take a number of forms, including instructional documents, tutorials, and reference works, and may cover a variety of information types, such as guides and functionalities.
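A small Python example of such documentation: the docstring below packs a description, a typical usage snippet, and a restrictions note into one place, in the style consumed by tools like Pydoc. The function itself is hypothetical.

```python
def parse_version(text):
    """Parse a dotted version string such as "1.4.2" into a tuple of ints.

    Typical usage:
        parse_version("2.0.1")  ->  (2, 0, 1)

    Restrictions: ``text`` must not be None and must contain only
    dot-separated decimal integers; this function is not locale-aware.
    """
    return tuple(int(part) for part in text.split("."))
```

Running `pydoc` or `help(parse_version)` renders this docstring as the function's reference documentation.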
Restrictions and limitations on how the API can be used are also covered by the documentation. For instance, documentation for an API function could note that its parameters cannot be null, or that the function itself is not thread-safe. Because API documentation tends to be comprehensive, it is a challenge for writers to keep the documentation updated and for users to read it carefully, potentially yielding bugs.
API documentation can be enriched with metadata information like Java annotations. This metadata can be used by the compiler, tools, and by the run-time environment to implement custom behaviors or custom handling.
It is possible to generate API documentation in a data-driven manner. By observing many programs that use a given API, it is possible to infer the typical usages, as well the required contracts and directives. Then, templates can be used to generate natural language from the mined data.
== Dispute over copyright protection for APIs ==
In 2010, Oracle Corporation sued Google for having distributed a new implementation of Java embedded in the Android operating system. Google had not acquired any permission to reproduce the Java API, although permission had been given to the similar OpenJDK project. Judge William Alsup ruled in the Oracle v. Google case that APIs cannot be copyrighted in the U.S. and that a victory for Oracle would have widely expanded copyright protection to a "functional set of symbols" and allowed the copyrighting of simple software commands:
To accept Oracle's claim would be to allow anyone to copyright one version of code to carry out a system of commands and thereby bar all others from writing its different versions to carry out all or part of the same commands.
Alsup's ruling was overturned in 2014 on appeal to the Court of Appeals for the Federal Circuit, though the question of whether such use of APIs constitutes fair use was left unresolved.
In 2016, following a two-week trial, a jury determined that Google's reimplementation of the Java API constituted fair use, but Oracle vowed to appeal the decision. Oracle won on its appeal, with the Court of Appeals for the Federal Circuit ruling that Google's use of the APIs did not qualify for fair use. In 2019, Google appealed to the Supreme Court of the United States over both the copyrightability and fair use rulings, and the Supreme Court granted review. Due to the COVID-19 pandemic, the oral hearings in the case were delayed until October 2020.
The case was decided by the Supreme Court in Google's favor.
== Examples ==
== See also ==
== References ==
== Further reading ==
Taina Bucher (16 November 2013). "Objects of Intense Feeling: The Case of the Twitter API". Computational Culture (3). ISSN 2047-2390. Argues that "APIs are far from neutral tools" and form a key part of contemporary programming, understood as a fundamental part of culture.
What is an API? – in the U.S. Supreme Court opinion, Google v. Oracle 2021, pp. 3–7 – "For each task, there is computer code; API (also known as Application Program Interface) is the method for calling that 'computer code' (instruction – like a recipe – rather than cooking instruction, this is machine instruction) to be carried out"
Maury, Innovation and Change – Cory Ondrejka, February 28, 2014 – "...proposed a public API to let computers talk to each other".
== External links ==
Forrester : IT industry : API Case : Google v. Oracle – May 20, 2021 – content format: Audio with text – length 26:41
A bibliographic database is a database of bibliographic records. This is an organised online collection of references to published written works like journal and newspaper articles, conference proceedings, reports, government and legal publications, patents and books. In contrast to library catalogue entries, a majority of the records in bibliographic databases describe articles and conference papers rather than complete monographs, and they generally contain very rich subject descriptions in the form of keywords, subject classification terms, or abstracts.
A bibliographic database may cover a wide range of topics or one academic field like computer science. A significant number of bibliographic databases are marketed under a trade name by licensing agreement from vendors, or directly from their makers: the indexing and abstracting services.
Many bibliographic databases have evolved into digital libraries, providing the full text of the organised contents: for instance, CORE also organises and mirrors scholarly articles and OurResearch develops a search engine for open access content in Unpaywall. Others merge with non-bibliographic and scholarly databases to create more complete disciplinary search engine systems, such as Chemical Abstracts or Entrez.
== History ==
Prior to the mid-20th century, individuals searching for published literature had to rely on printed bibliographic indexes, generated manually from index cards.
During the early 1960s computers were used to digitize text for the first time; the purpose was to reduce the cost and time required to publish two American abstracting journals, the Index Medicus of the National Library of Medicine and the Scientific and Technical Aerospace Reports of the National Aeronautics and Space Administration (NASA). By the late 1960s, such bodies of digitized alphanumeric information, known as bibliographic and numeric databases, constituted a new type of information resource. Online interactive retrieval became commercially viable in the early 1970s over private telecommunications networks. The first services offered a few databases of indexes and abstracts of scholarly literature. These databases contained bibliographic descriptions of journal articles that were searchable by keywords in author and title, and sometimes by journal name or subject heading. The user interfaces were crude, the access was expensive, and searching was done by librarians on behalf of "end users".
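The kind of keyword retrieval those early services offered, searching bibliographic records by author, title, or subject terms, can be sketched as follows; the record fields and sample data are invented for illustration.

```python
# Minimal in-memory bibliographic records with rich subject descriptions.
records = [
    {"author": "Smith, J.", "title": "Indexing at scale",
     "keywords": ["indexing", "databases"]},
    {"author": "Li, W.", "title": "Abstracting services",
     "keywords": ["abstracting", "history"]},
]


def search(records, term):
    """Return records matching the term in author, title, or keywords."""
    term = term.lower()
    return [
        r for r in records
        if term in r["author"].lower()
        or term in r["title"].lower()
        or term in (k.lower() for k in r["keywords"])
    ]
```

A real service adds controlled subject vocabularies, abstracts, and ranking, but the searchable-fields structure is the same.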
== See also ==
Citation index
Document-oriented database
Full-text database
Institutional repository
ISBNdb.com
List of academic databases and search engines
Online public access catalog (OPAC; library catalog)
== References ==
In database systems, durability is the ACID property that guarantees that the effects of transactions that have been committed will survive permanently, even in cases of failures, including incidents and catastrophic events. For example, if a flight booking reports that a seat has successfully been booked, then the seat will remain booked even if the system crashes.
Formally, a database system ensures the durability property if it tolerates three types of failures: transaction, system, and media failures. In particular, a transaction fails if its execution is interrupted before all its operations have been processed by the system. These kinds of interruptions can originate at the transaction level from data-entry errors, operator cancellation, timeout, or application-specific errors, like withdrawing money from a bank account with insufficient funds. At the system level, a failure occurs if the contents of the volatile storage are lost, due, for instance, to system crashes, like out-of-memory events. At the media level, where media means a stable storage that withstands system failures, failures happen when the stable storage, or part of it, is lost. These cases are typically represented by disk failures.
Thus, to be durable, the database system should implement strategies and operations that guarantee that the effects of transactions that have been committed before the failure will survive the event (even by reconstruction), while the changes of incomplete transactions, which have not been committed yet at the time of failure, will be reverted and will not affect the state of the database system. These behaviours are proven to be correct when the execution of transactions has respectively the resilience and recoverability properties.
== Mechanisms ==
In transaction-based systems, the mechanisms that assure durability are historically associated with the concept of reliability of systems, as proposed by Jim Gray in 1981. This concept includes durability, but it also relies on aspects of the atomicity and consistency properties. Specifically, a reliability mechanism requires primitives that explicitly state the beginning, the end, and the rollback of transactions, which are also implied for the other two aforementioned properties. In this article, only the mechanisms strictly related to durability have been considered. These mechanisms are divided into three levels: transaction, system, and media level. This can be seen as well for scenarios where failures could happen and that have to be considered in the design of database systems to address durability.
=== Transaction level ===
Durability against failures that occur at transaction level, such as canceled calls and inconsistent actions that may be blocked before committing by constraints and triggers, is guaranteed by the serializability property of the execution of transactions. The state generated by the effects of previously committed transactions is available in main memory and, thus, is resilient, while the changes carried by non-committed transactions can be undone. In fact, thanks to serializability, they can be discerned from other transactions and, therefore, their changes are discarded. In addition, it is relevant to consider that in-place changes, which overwrite old values without keeping any kind of history, are discouraged. There exist multiple approaches that keep track of the history of changes, such as timestamp-based solutions or logging and locking.
=== System level ===
At system level, failures happen, by definition, when the contents of the volatile storage are lost. This can occur in events like system crashes or power outages. Existing database systems use volatile storage (i.e. the main memory of the system) for different purposes: some store their whole state and data in it, even without any durability guarantee; others keep the state and the data, or part of them, in memory, but also use the non-volatile storage for data; other systems only keep the state in main memory, while keeping all the data on disk. The reason behind the choice of having volatile storage, which is subject to this type of failure, and non-volatile storage, is found in the performance differences of the existing technologies that are used to implement these kinds of storage. However, the situation is likely to evolve as the popularity of non-volatile memories (NVM) technologies grows.
In systems that include non-volatile storage, durability can be achieved by keeping and flushing an immutable sequential log of the transactions to such non-volatile storage before acknowledging commitment. Thanks to their atomicity property, the transactions can be considered the unit of work in the recovery process that guarantees durability while exploiting the log. In particular, the logging mechanism is called write-ahead log (WAL) and allows durability by buffering changes to the disk before they are synchronized from the main memory. In this way, by reconstruction from the log file, all committed transactions are resilient to system-level failures, because they can be redone. Non-committed transactions, instead, are recoverable, since their operations are logged to non-volatile storage before they effectively modify the state of the database. In this way, the partially executed operations can be undone without affecting the state of the system. After that, those transactions that were incomplete can be redone. Therefore, the transaction log from non-volatile storage can be reprocessed to recreate the system state right before any later system-level failure. Logging is done as a combination of tracking data and operations (i.e. transactions) for performance reasons.
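The write-ahead discipline can be sketched as follows, with an in-memory buffer standing in for the non-volatile log; a real system would append to a file and fsync before acknowledging the commit, and all names here are illustrative.

```python
import io


class WALStore:
    """Write-ahead logging sketch: log a change durably before applying it."""

    def __init__(self, log):
        self.log = log        # stands in for stable, non-volatile storage
        self.data = {}        # volatile in-memory state

    def set(self, key, value):
        # 1. Append the change to the log first ...
        self.log.write(f"SET {key} {value}\n")
        # (a real system would flush/fsync here before acknowledging commit)
        # 2. ... then change the volatile state.
        self.data[key] = value

    @classmethod
    def recover(cls, log_text):
        """Rebuild the committed state by replaying the log after a crash."""
        store = cls(io.StringIO())
        for line in log_text.splitlines():
            op, key, value = line.split()
            store.data[key] = value
        return store
```

Because every committed change reaches the log before memory, replaying the log after a system failure reproduces exactly the committed state.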
=== Media level ===
At media level, failure scenarios affect non-volatile storage, like hard disk drives, solid-state drives, and other types of storage hardware components. To guarantee durability at this level, the database system shall rely on stable memory, which is a memory that is completely and ideally failure-resistant. This kind of memory can be achieved with mechanisms of replication and robust writing protocols.
Many tools and technologies are available to provide a logical stable memory, such as the mirroring of disks, and their choice depends on the requirements of the specific applications. In general, replication and redundancy strategies and architectures that behave like stable memory are available at different levels of the technology stack. In this way, even in case of catastrophic events where the storage hardware is damaged, data loss can be prevented. At this level, there is a strong bond between durability and system and data recovery, in the sense that the main goal is to preserve the data, not necessarily in online replicas, but also as offline copies. These last techniques fall into the categories of backup, data loss prevention, and IT disaster recovery.
Therefore, in case of media failure, the durability of transactions is guaranteed by the ability to reconstruct the state of the database from the log files stored in the stable memory, in any way it was implemented in the database system. There exist several mechanisms to store and reconstruct the state of a database system that improves the performance, both in terms of space and time, compared to managing all the log files created from the beginning of the database system. These mechanisms often include incremental dumping, differential files, and checkpoints.
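The mirroring idea mentioned above can be sketched as follows, with two dictionaries standing in for independent disks; a real implementation would sit at the device or RAID-controller level, and the names here are illustrative.

```python
class MirroredStorage:
    """Stable-storage sketch: every write goes to two replicas, so the
    loss of one replica (a media failure) does not lose committed data."""

    def __init__(self):
        self.primary = {}   # stands in for one physical disk
        self.mirror = {}    # stands in for an independent second disk

    def write(self, key, value):
        # A robust writing protocol completes both copies before
        # reporting success to the caller.
        self.primary[key] = value
        self.mirror[key] = value

    def read(self, key):
        # If the primary is lost or corrupted, fall back to the mirror.
        if key in self.primary:
            return self.primary[key]
        return self.mirror.get(key)
```

The pair behaves like a single, ideally failure-resistant memory as long as both replicas do not fail at once.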
=== Distributed databases ===
In distributed transactions, ensuring durability requires additional mechanisms to preserve a consistent state sequence across all database nodes. This means, for example, that a single node may not be enough to decide to conclude a transaction by committing it. In fact, the resources used in that transaction may be on other nodes, where other transactions are occurring concurrently. Otherwise, in case of failure, if consistency could not be guaranteed, it would be impossible to acknowledge a safe state of the database for recovery. For this reason, all participating nodes must coordinate before a commit can be acknowledged. This is usually done by a two-phase commit protocol.
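A stripped-down sketch of the two-phase commit protocol follows, ignoring timeouts, logging, and coordinator failure; the class and function names are illustrative.

```python
class Participant:
    """A database node taking part in a distributed transaction."""

    def __init__(self, can_commit=True):
        self.can_commit = can_commit
        self.state = "active"

    def prepare(self):
        # Phase 1: vote. After voting yes, a real node must durably
        # guarantee it can still commit if told to.
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"


def two_phase_commit(participants):
    """Coordinator: commit only if every node votes yes; otherwise abort all."""
    if all(p.prepare() for p in participants):   # phase 1: prepare/vote
        for p in participants:
            p.commit()                           # phase 2: commit everywhere
        return "committed"
    for p in participants:
        p.abort()                                # phase 2: abort everywhere
    return "aborted"
```

A single dissenting vote aborts the transaction on every node, which is what keeps the distributed state consistent enough to recover from.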
In addition, in distributed databases, even the protocols for logging and recovery shall address the issues of distributed environments, such as deadlocks, that could prevent the resilience and recoverability of transactions and, thus, durability. A widely adopted family of algorithms that ensures these properties is Algorithms for Recovery and Isolation Exploiting Semantics (ARIES).
== See also ==
Atomicity
Consistency
Isolation
Relational database management system
Data breach
== References ==
== Further reading ==
Campbell, Laine; Majors, Charity (2017). Database Reliability Engineering. O'Reilly Media, Inc. ISBN 9781491926215.
Taylor, C.A.; Gittens, M.S.; Miranskyy, A.V. (June 2008). "A case study in database reliability: Component types, usage profiles, and testing". Proceedings of the 1st international workshop on Testing database systems. pp. 1–6. doi:10.1145/1385269.1385283. ISBN 9781605582337. S2CID 16101765.
== External links ==
Durability aspects in Oracle's databases
MySQL InnoDB documentation on support of ACID properties
PostgreSQL's documentation on reliability
Microsoft SQL Server Control Transaction Durability
Interactive latency visualization for different types of storages from Berkeley
Datavault or data vault modeling is a database modeling method that is designed to provide long-term historical storage of data coming in from multiple operational systems. It is also a method of looking at historical data that deals with issues such as auditing, tracing of data, loading speed and resilience to change as well as emphasizing the need to trace where all the data in the database came from. This means that every row in a data vault must be accompanied by record source and load date attributes, enabling an auditor to trace values back to the source. The concept was published in 2000 by Dan Linstedt.
Data vault modeling makes no distinction between good and bad data ("bad" meaning not conforming to business rules). This is summarized in the statement that a data vault stores "a single version of the facts" (also expressed by Dan Linstedt as "all the data, all of the time") as opposed to the practice in other data warehouse methods of storing "a single version of the truth" where data that does not conform to the definitions is removed or "cleansed". A data vault enterprise data warehouse provides both: a single version of facts and a single source of truth.
The modeling method is designed to be resilient to change in the business environment where the data being stored is coming from, by explicitly separating structural information from descriptive attributes. Data vault is designed to enable parallel loading as much as possible, so that very large implementations can scale out without the need for major redesign.
Unlike the star schema (dimensional modelling) and the classical relational model (3NF), data vault and anchor modeling are well-suited for capturing changes that occur when a source system is changed or added, but are considered advanced techniques which require experienced data architects. Both data vaults and anchor models are entity-based models, but anchor models have a more normalized approach.
== History and philosophy ==
In its early days, Dan Linstedt referred to the modeling technique which was to become data vault as common foundational warehouse architecture or common foundational modeling architecture. In data warehouse modeling there are two well-known competing options for modeling the layer where the data are stored. Either you model according to Ralph Kimball, with conformed dimensions and an enterprise data bus, or you model according to Bill Inmon with the database normalized. Both techniques have issues when dealing with changes in the systems feeding the data warehouse. For conformed dimensions you also have to cleanse data (to conform it) and this is undesirable in a number of cases since this inevitably will lose information. Data vault is designed to avoid or minimize the impact of those issues, by moving them to areas of the data warehouse that are outside the historical storage area (cleansing is done in the data marts) and by separating the structural items (business keys and the associations between the business keys) from the descriptive attributes.
Dan Linstedt, the creator of the method, describes the resulting database as follows:
"The Data Vault Model is a detail oriented, historical tracking and uniquely linked set of normalized tables that support one or more functional areas of business. It is a hybrid approach encompassing the best of breed between 3rd normal form (3NF) and star schema. The design is flexible, scalable, consistent and adaptable to the needs of the enterprise"
Data vault's philosophy is that all data is relevant data, even if it is not in line with established definitions and business rules. If data are not conforming to these definitions and rules then that is a problem for the business, not the data warehouse. The determination of data being "wrong" is an interpretation of the data that stems from a particular point of view that may not be valid for everyone, or at every point in time. Therefore the data vault must capture all data and only when reporting or extracting data from the data vault is the data being interpreted.
Another issue to which data vault is a response is that more and more there is a need for complete auditability and traceability of all the data in the data warehouse. Due to Sarbanes-Oxley requirements in the USA and similar measures in Europe this is a relevant topic for many business intelligence implementations, hence the focus of any data vault implementation is complete traceability and auditability of all information.
Data Vault 2.0 is the new specification. It is an open standard. The new specification consists of three pillars: methodology (SEI/CMMI, Six Sigma, SDLC, etc.), architecture (among others an input layer (data stage, called persistent staging area in Data Vault 2.0), a presentation layer (data mart), and handling of data quality services and master data services), and the model. Within the methodology, the implementation of best practices is defined. Data Vault 2.0 focuses on including new components such as big data and NoSQL, and also on the performance of the existing model. The old specification (documented here for the most part) is highly focused on data vault modeling. It is documented in the book Building a Scalable Data Warehouse with Data Vault 2.0.
It is necessary to evolve the specification to include the new components, along with the best practices in order to keep the EDW and BI systems current with the needs and desires of today's businesses.
=== History ===
Data vault modeling was originally conceived by Dan Linstedt in the 1990s and was released in 2000 as a public domain modeling method. In a series of five articles in The Data Administration Newsletter the basic rules of the Data Vault method are expanded and explained. These contain a general overview, an overview of the components, a discussion about end dates and joins, link tables, and an article on loading practices.
An alternative (and seldom used) name for the method is "Common Foundational Integration Modelling Architecture."
Data Vault 2.0 has arrived on the scene as of 2013 and brings to the table Big Data, NoSQL, unstructured, semi-structured seamless integration, along with methodology, architecture, and implementation best practices.
=== Alternative interpretations ===
According to Dan Linstedt, the Data Model is inspired by (or patterned off) a simplistic view of neurons, dendrites, and synapses – where neurons are associated with Hubs and Hub Satellites, Links are dendrites (vectors of information), and other Links are synapses (vectors in the opposite direction). By using a data mining set of algorithms, links can be scored with confidence and strength ratings. They can be created and dropped on the fly in accordance with learning about relationships that currently don't exist. The model can be automatically morphed, adapted, and adjusted as it is used and fed new structures.
Another view is that a data vault model provides an ontology of the Enterprise in the sense that it describes the terms in the domain of the enterprise (Hubs) and the relationships among them (Links), adding descriptive attributes (Satellites) where necessary.
Another way to think of a data vault model is as a graphical model. The data vault model actually provides a "graph based" model with hubs and relationships in a relational database world. In this manner, the developer can use SQL to get at graph-based relationships with sub-second responses.
== Basic notions ==
Data vault attempts to solve the problem of dealing with change in the environment by separating the business keys (that do not mutate as often, because they uniquely identify a business entity) and the associations between those business keys, from the descriptive attributes of those keys.
The business keys and their associations are structural attributes, forming the skeleton of the data model. The data vault method has as one of its main axioms that real business keys only change when the business changes and are therefore the most stable elements from which to derive the structure of a historical database. If you use these keys as the backbone of a data warehouse, you can organize the rest of the data around them. This means that choosing the correct keys for the hubs is of prime importance for the stability of your model. The keys are stored in tables with a few constraints on the structure. These key-tables are called hubs.
=== Hubs ===
Hubs contain a list of unique business keys with low propensity to change. Hubs also contain a surrogate key for each Hub item and metadata describing the origin of the business key. The descriptive attributes for the information on the Hub (such as the description for the key, possibly in multiple languages) are stored in structures called Satellite tables which will be discussed below.
The Hub contains at least the following fields:
a surrogate key, used to connect the other structures to this table.
a business key, the driver for this hub. The business key can consist of multiple fields.
the record source, which can be used to see what system loaded each business key first.
optionally, you can also have metadata fields with information about manual updates (user/time) and the extraction date.
A hub is not allowed to contain multiple business keys, except when two systems deliver the same business key but with collisions that have different meanings.
Hubs should normally have at least one satellite.
==== Hub example ====
This is an example of a hub table containing cars, called "Car" (H_CAR). The driving key is the vehicle identification number.
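The hub can be sketched in SQL DDL roughly as follows; the exact column names and data types are illustrative assumptions, not prescribed by the method:

```sql
-- Hypothetical hub for cars: surrogate key, business key, and load metadata.
CREATE TABLE H_CAR (
    H_CAR_SEQ      INTEGER      NOT NULL,  -- surrogate key
    VEHICLE_ID_NR  VARCHAR(17)  NOT NULL,  -- business key (vehicle identification number)
    LOAD_DTS       TIMESTAMP    NOT NULL,  -- load date/time stamp
    RECORD_SRC     VARCHAR(20)  NOT NULL,  -- record source (first system to load the key)
    PRIMARY KEY (H_CAR_SEQ),
    UNIQUE (VEHICLE_ID_NR)
);
```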
=== Links ===
Associations or transactions between business keys (relating for instance the hubs for customer and product with each other through the purchase transaction) are modeled using link tables. These tables are basically many-to-many join tables, with some metadata.
Links can link to other links, to deal with changes in granularity (for instance, adding a new key to a database table would change the grain of the database table). For instance, if you have an association between customer and address, you could add a reference to a link between the hubs for product and transport company. This could be a link called "Delivery". Referencing a link in another link is considered a bad practice, because it introduces dependencies between links that make parallel loading more difficult. Since a link to another link is the same as a new link with the hubs from the other link, in these cases creating the links without referencing other links is the preferred solution (see the section on loading practices for more information).
Links sometimes link hubs to information that is not by itself enough to construct a hub. This occurs when one of the business keys associated by the link is not a real business key. As an example, take an order form with "order number" as key, and order lines that are keyed with a semi-random number to make them unique. Let's say, "unique number". The latter key is not a real business key, so it is no hub. However, we do need to use it in order to guarantee the correct granularity for the link. In this case, we do not use a hub with surrogate key, but add the business key "unique number" itself to the link. This is done only when there is no possibility of ever using the business key for another link or as key for attributes in a satellite. This construct has been called a 'peg-legged link' by Dan Linstedt on his (now defunct) forum.
Links contain the surrogate keys for the hubs that are linked, their own surrogate key for the link and metadata describing the origin of the association. The descriptive attributes for the information on the association (such as the time, price or amount) are stored in structures called satellite tables which are discussed below.
==== Link example ====
This is an example of a link table between two hubs for cars (H_CAR) and persons (H_PERSON). The link is called "Driver" (L_DRIVER).
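A sketch of the link in SQL DDL, again with illustrative column names, assuming hubs H_CAR and H_PERSON with the corresponding surrogate keys:

```sql
-- Hypothetical link relating cars and persons ("Driver").
CREATE TABLE L_DRIVER (
    L_DRIVER_SEQ  INTEGER     NOT NULL,  -- surrogate key of the link itself
    H_CAR_SEQ     INTEGER     NOT NULL,  -- surrogate key from the car hub
    H_PERSON_SEQ  INTEGER     NOT NULL,  -- surrogate key from the person hub
    LOAD_DTS      TIMESTAMP   NOT NULL,  -- load date/time stamp
    RECORD_SRC    VARCHAR(20) NOT NULL,  -- origin of the association
    PRIMARY KEY (L_DRIVER_SEQ),
    FOREIGN KEY (H_CAR_SEQ)    REFERENCES H_CAR (H_CAR_SEQ),
    FOREIGN KEY (H_PERSON_SEQ) REFERENCES H_PERSON (H_PERSON_SEQ),
    UNIQUE (H_CAR_SEQ, H_PERSON_SEQ)     -- one row per distinct association
);
```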
=== Satellites ===
The hubs and links form the structure of the model, but have no temporal attributes and hold no descriptive attributes. These are stored in separate tables called satellites. These consist of metadata linking them to their parent hub or link, metadata describing the origin of the association and attributes, as well as a timeline with start and end dates for the attribute. Where the hubs and links provide the structure of the model, the satellites provide the "meat" of the model, the context for the business processes that are captured in hubs and links. These attributes are stored both with regards to the details of the matter as well as the timeline and can range from quite complex (all of the fields describing a client's complete profile) to quite simple (a satellite on a link with only a valid-indicator and a timeline).
Usually the attributes are grouped in satellites by source system. However, descriptive attributes such as size, cost, speed, amount or color can change at different rates, so you can also split these attributes up in different satellites based on their rate of change.
All the tables contain metadata, minimally describing at least the source system and the date on which this entry became valid, giving a complete historical view of the data as it enters the data warehouse.
An effectivity satellite is a satellite built on a link, "and record[s] the time period when the corresponding link records start and end effectivity".
==== Satellite example ====
This is an example of a satellite on the driver link between the hubs for cars and persons, called "Driver insurance" (S_DRIVER_INSURANCE). This satellite contains attributes that are specific to the insurance of the relationship between the car and the person driving it: for instance, an indicator whether this is the primary driver, the name of the insurance company for this car and person (which could also be a separate hub), and a summary of the number of accidents involving this combination of vehicle and driver. Also included is a reference to a lookup or reference table called R_RISK_CATEGORY containing the codes for the risk category in which this relationship is deemed to fall.
(*) at least one attribute is mandatory.
(**) sequence number becomes mandatory if it is needed to enforce uniqueness for multiple valid satellites on the same hub or link.
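A sketch of the satellite in SQL DDL; the column names are illustrative assumptions, the validity timeline follows the start/end-date convention described above, and the risk category code deliberately has no physical foreign key to the reference table:

```sql
-- Hypothetical satellite on the L_DRIVER link with insurance attributes.
CREATE TABLE S_DRIVER_INSURANCE (
    L_DRIVER_SEQ       INTEGER     NOT NULL,  -- parent link
    LOAD_DTS           TIMESTAMP   NOT NULL,  -- start of validity
    LOAD_END_DTS       TIMESTAMP,             -- end of validity (NULL while current)
    RECORD_SRC         VARCHAR(20) NOT NULL,  -- origin of the attributes
    PRIMARY_DRIVER_IND CHAR(1),               -- (*) is this the primary driver?
    INSURANCE_COMPANY  VARCHAR(60),           -- (*) could also be a separate hub
    NR_OF_ACCIDENTS    INTEGER,               -- (*) accident summary
    R_RISK_CATEGORY_CD VARCHAR(10),           -- code into R_RISK_CATEGORY, no physical FK
    PRIMARY KEY (L_DRIVER_SEQ, LOAD_DTS),
    FOREIGN KEY (L_DRIVER_SEQ) REFERENCES L_DRIVER (L_DRIVER_SEQ)
);
```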
=== Reference tables ===
Reference tables are a normal part of a healthy data vault model. They are there to prevent redundant storage of simple reference data that is referenced a lot. More formally, Dan Linstedt defines reference data as follows:
Any information deemed necessary to resolve descriptions from codes, or to translate keys in to (sic) a consistent manner. Many of these fields are "descriptive" in nature and describe a specific state of the other more important information. As such, reference data lives in separate tables from the raw Data Vault tables.
Reference tables are referenced from Satellites, but never bound with physical foreign keys. There is no prescribed structure for reference tables: use what works best in your specific case, ranging from simple lookup tables to small data vaults or even stars. They can be historical or have no history, but it is recommended that you stick to the natural keys and not create surrogate keys in that case. Normally, data vaults have a lot of reference tables, just like any other Data Warehouse.
==== Reference example ====
This is an example of a reference table with risk categories for drivers of vehicles. It can be referenced from any satellite in the data vault; here it is referenced from the satellite S_DRIVER_INSURANCE. The reference table is R_RISK_CATEGORY.
(*) at least one attribute is mandatory.
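Following the recommendation to stick to natural keys for reference tables, a minimal sketch (column names are illustrative) could be:

```sql
-- Hypothetical reference table resolving risk category codes to descriptions.
CREATE TABLE R_RISK_CATEGORY (
    RISK_CATEGORY_CD   VARCHAR(10)  NOT NULL,  -- natural key: the code itself
    RISK_CATEGORY_DESC VARCHAR(100) NOT NULL,  -- (*) description of the category
    PRIMARY KEY (RISK_CATEGORY_CD)
);
```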
== Loading practices ==
The ETL for updating a data vault model is fairly straightforward (see Data Vault Series 5 – Loading Practices). First you have to load all the hubs, creating surrogate IDs for any new business keys. Having done that, you can resolve all business keys to surrogate IDs by querying the hub. The second step is to resolve the links between hubs and create surrogate IDs for any new associations. At the same time, you can also create all satellites that are attached to hubs, since you can resolve the key to a surrogate ID. Once you have created all the new links with their surrogate keys, you can add the satellites to all the links.
Since the hubs are not joined to each other except through links, you can load all the hubs in parallel. Since links are not attached directly to each other, you can load all the links in parallel as well. Since satellites can be attached only to hubs and links, you can also load these in parallel.
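The hub-loading step, for example, can be expressed as one idempotent statement per hub, which is what makes the parallel loading possible. In this sketch the staging table STG_CAR and the sequence SEQ_H_CAR are assumptions; only business keys not already present are inserted:

```sql
-- Insert business keys from staging that are not yet in the hub.
-- Each hub can be loaded this way independently, and therefore in parallel.
INSERT INTO H_CAR (H_CAR_SEQ, VEHICLE_ID_NR, LOAD_DTS, RECORD_SRC)
SELECT NEXT VALUE FOR SEQ_H_CAR,   -- new surrogate ID
       s.VEHICLE_ID_NR,
       CURRENT_TIMESTAMP,
       s.RECORD_SRC
FROM   STG_CAR s
WHERE  NOT EXISTS (SELECT 1
                   FROM   H_CAR h
                   WHERE  h.VEHICLE_ID_NR = s.VEHICLE_ID_NR);
```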
The ETL is quite straightforward and lends itself to easy automation or templating. Problems occur only with links relating to other links, because resolving the business keys in the link only leads to another link that has to be resolved as well. Due to the equivalence of this situation with a link to multiple hubs, this difficulty can be avoided by remodeling such cases and this is in fact the recommended practice.
Data is never deleted from the data vault, unless you have a technical error while loading data.
== Data vault and dimensional modelling ==
The data vault modelled layer is normally used to store data. It is not optimised for query performance, nor is it easy to query by the well-known query-tools such as Cognos, Oracle Business Intelligence Suite Enterprise Edition, SAP Business Objects, Pentaho et al. Since these end-user computing tools expect or prefer their data to be contained in a dimensional model, a conversion is usually necessary.
For this purpose, the hubs and related satellites on those hubs can be considered as dimensions and the links and related satellites on those links can be viewed as fact tables in a dimensional model. This enables you to quickly prototype a dimensional model out of a data vault model using views.
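Such a prototype dimension can be a simple view over a hub and its satellite. This sketch assumes a hypothetical satellite S_CAR on the car hub holding descriptive attributes, with an open-ended end date marking the current version:

```sql
-- Prototype a "car" dimension from the hub and its satellite.
CREATE VIEW DIM_CAR AS
SELECT h.H_CAR_SEQ     AS CAR_KEY,      -- dimension key for the fact tables
       h.VEHICLE_ID_NR,                 -- business key
       s.COLOR,                         -- hypothetical descriptive attributes
       s.MODEL
FROM   H_CAR h
JOIN   S_CAR s ON s.H_CAR_SEQ = h.H_CAR_SEQ
WHERE  s.LOAD_END_DTS IS NULL;          -- current satellite version only
```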
Note that while it is relatively straightforward to move data from a data vault model to a (cleansed) dimensional model, the reverse is not as easy, given the denormalized nature of the dimensional model's fact tables, fundamentally different to the third normal form of the data vault.
== Methodology ==
The data vault methodology is based on SEI/CMMI Level 5 best practices. It includes multiple components of CMMI Level 5, and combines them with best practices from Six Sigma, total quality management (TQM), and SDLC. Particularly, it is focused on Scott Ambler's agile methodology for build out and deployment. Data vault projects have a short, scope-controlled release cycle and should consist of a production release every 2 to 3 weeks.
Teams using the data vault methodology should readily adapt to the repeatable, consistent, and measurable projects that are expected at CMMI Level 5. Data that flow through the EDW data vault system will begin to follow the TQM life-cycle that has long been missing from BI (business intelligence) projects.
== Tools ==
Some examples of tools are:
DataVault4dbt
2150 Datavault Builder
Astera DW Builder
Wherescape
Vaultspeed
AutomateDV
== See also ==
Bill Inmon – American computer scientist
Data lake – Repository of data stored in a raw format
Data warehouse – Centralized storage of knowledge
The Kimball lifecycle – Methodology for developing data warehouses, developed by Ralph Kimball – American computer scientist
Staging area – Location where items are gathered before use
Agile Business Intelligence – Use of agile software development for business intelligence projects
== References ==
=== Citations ===
=== Sources ===
== Literature ==
Patrick Cuba: The Data Vault Guru. A Pragmatic Guide on Building a Data Vault. Self-published, 2020, ISBN 979-8-6913-0808-6.
John Giles: The Elephant in the Fridge. Guided Steps to Data Vault Success through Building Business-Centered Models. Technics, Basking Ridge 2019, ISBN 978-1-63462-489-3.
Kent Graziano: Better Data Modeling. An Introduction to Agile Data Engineering Using Data Vault 2.0. Data Warrior, Houston 2015.
Hans Hultgren: Modeling the Agile Data Warehouse with Data Vault. Brighton Hamilton, Denver et al. 2012, ISBN 978-0-615-72308-2.
Dirk Lerner: Data Vault für agile Data-Warehouse-Architekturen. In: Stephan Trahasch, Michael Zimmer (eds.): Agile Business Intelligence. Theorie und Praxis. dpunkt.verlag, Heidelberg 2016, ISBN 978-3-86490-312-0, pp. 83–98.
Daniel Linstedt: Super Charge Your Data Warehouse. Invaluable Data Modeling Rules to Implement Your Data Vault. Linstedt, Saint Albans, Vermont 2011, ISBN 978-1-4637-7868-2.
Daniel Linstedt, Michael Olschimke: Building a Scalable Data Warehouse with Data Vault 2.0. Morgan Kaufmann, Waltham, Massachusetts 2016, ISBN 978-0-12-802510-9.
Dani Schnider, Claus Jordan et al.: Data Warehouse Blueprints. Business Intelligence in der Praxis. Hanser, Munich 2016, ISBN 978-3-446-45075-2, pp. 35–37, 161–173.
== External links ==
The homepage of Dan Linstedt, the inventor of Data Vault modeling
In database systems, consistency (or correctness) refers to the requirement that any given database transaction must change affected data only in allowed ways. Any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof. This does not guarantee correctness of the transaction in all ways the application programmer might have wanted (that is the responsibility of application-level code) but merely that any programming errors cannot result in the violation of any defined database constraints.
In a distributed system, in reference to the CAP theorem, consistency can also be understood as the guarantee that, after a successful write, update, or delete of a record, any subsequent read request immediately receives the latest value of that record.
== As an ACID guarantee ==
Consistency is one of the four guarantees that define ACID transactions; however, significant ambiguity exists about the nature of this guarantee. It is defined variously as:
The guarantee that database constraints are not violated, particularly once a transaction commits.
The guarantee that any transactions started in the future necessarily see the effects of other transactions committed in the past.
As these various definitions are not mutually exclusive, it is possible to design a system that guarantees "consistency" in every sense of the word, as most relational database management systems in common use today arguably do.
== As a CAP trade-off ==
The CAP theorem is based on three trade-offs, one of which is "atomic consistency" (shortened to "consistency" for the acronym), about which the authors note, "Discussing atomic consistency is somewhat different than talking about an ACID database, as database consistency refers to transactions, while atomic consistency refers only to a property of a single request/response operation sequence. And it has a different meaning than the Atomic in ACID, as it subsumes the database notions of both Atomic and Consistent." In the CAP theorem, you can only have two of the following three properties: consistency, availability, or partition tolerance. Therefore, consistency may have to be traded off in some database systems.
== See also ==
Consistency model
CAP theorem
Referential integrity
Eventual consistency
== References ==
According to Ralph Kimball, in a data warehouse, a degenerate dimension is a dimension key (primary key for a dimension table) in the fact table that does not have its own dimension table, because all the interesting attributes have been placed in analytic dimensions. The term "degenerate dimension" was originated by Ralph Kimball.
As Bob Becker says: "Degenerate dimensions commonly occur when the fact table's grain is a single transaction (or transaction line). Transaction control header numbers assigned by the operational business process are typically degenerate dimensions, such as order, ticket, credit card transaction, or check numbers. These degenerate dimensions are natural keys of the 'parents' of the line items."
Even though there is no corresponding dimension table of attributes, degenerate dimensions can be quite useful for grouping together related fact tables rows. For example, retail point-of-sale transaction numbers tie all the individual items purchased together into a single market basket. In health care, degenerate dimensions can group the claims items related to a single hospital stay or episode of care.
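As an illustration, a retail sales fact table at transaction-line grain might carry the transaction number directly in the fact table, with no dimension table behind it. All table and column names here are hypothetical:

```sql
-- Hypothetical fact table; pos_transaction_nr is the degenerate dimension.
CREATE TABLE fact_sales (
    date_key           INTEGER      NOT NULL,  -- FK to a date dimension
    product_key        INTEGER      NOT NULL,  -- FK to a product dimension
    store_key          INTEGER      NOT NULL,  -- FK to a store dimension
    pos_transaction_nr VARCHAR(20)  NOT NULL,  -- degenerate dimension: no dimension table
    quantity           INTEGER      NOT NULL,
    sales_amount       DECIMAL(9,2) NOT NULL
);

-- The degenerate dimension groups line items back into one market basket:
SELECT pos_transaction_nr, SUM(sales_amount) AS basket_total
FROM   fact_sales
GROUP  BY pos_transaction_nr;
```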
== Other uses of the term ==
Although most writers and practitioners use the term degenerate dimension correctly, it is very easy to find misleading definitions in online and printed sources. For example, the Oracle FAQ defines a degenerate dimension as a "data dimension that is stored in the fact table rather than a separate dimension table. This eliminates the need to join to a dimension table. You can use the data in the degenerate dimension to limit or 'slice and dice' your fact table measures."
This common interpretation implies that it is good dimensional modeling practice to place dimension attributes in the fact table, as long as you call them a degenerate dimension. This is not the case; the concept of degenerate dimension was developed by Kimball to support a specific, well-defined exception to the otherwise ironclad rule that dimension attributes are always pulled out into dimension tables.
== See also ==
Data warehouse
Dimension table
Early-arriving fact
Fact table
Measure (data warehouse)
== Notes ==
== Bibliography ==
Kimball, Ralph et al. (1998); The Data Warehouse Lifecycle Toolkit, p17. Pub. Wiley. ISBN 0-471-25547-5.
Kimball, Ralph (1996); The Data Warehouse Toolkit, p. 100. Pub. Wiley. ISBN 0-471-15337-0.
== External links ==
Becker, Bob (3 June 2003). "Design Tip #46: Another Look at Degenerate Dimensions". Fact Table Core Concepts. Kimball Group. Retrieved 25 January 2013.
A data control language (DCL) is a syntax similar to a computer programming language used to control access to data stored in a database (authorization). In particular, it is a component of Structured Query Language (SQL). Data control language is one of the logical groups of SQL commands. SQL is the standard language for relational database management systems. SQL statements are used to perform tasks such as inserting data into a database, deleting or updating data in a database, or retrieving data from a database.
Though database systems use SQL, they also have their own additional proprietary extensions that are usually only used on their system. For example, Microsoft SQL Server uses Transact-SQL (T-SQL), which is an extension of SQL. Similarly, Oracle uses PL/SQL, which is an Oracle-specific SQL extension. However, the standard SQL commands such as "Select", "Insert", "Update", "Delete", "Create", and "Drop" can be used to accomplish almost everything that one needs to do with a database.
Examples of DCL commands include the SQL commands:
GRANT to allow specified users to perform specified tasks.
REVOKE to remove users' access rights to database objects.
The operations for which privileges may be granted to or revoked from a user or role apply to both the Data definition language (DDL) and the Data manipulation language (DML), and may include CONNECT, SELECT, INSERT, UPDATE, DELETE, EXECUTE, and USAGE.
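In their simplest form, and with a placeholder table and user name, the two commands are used as follows:

```sql
-- Allow the user to read and modify the table...
GRANT SELECT, UPDATE ON inventory TO clerk;

-- ...then withdraw the modification right again.
REVOKE UPDATE ON inventory FROM clerk;
```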
== Microsoft SQL Server ==
In Microsoft SQL Server there are four groups of SQL commands:
Data Manipulation Language (DML)
Data Definition Language (DDL)
Data Control Language (DCL)
Transaction Control Language (TCL)
DCL commands are used for access control and permission management for users in the database. With them we can easily allow or deny some actions for users on the tables or records (row level security).
DCL commands are:
GRANT
gives specified permissions on a table (or other object) to, or assigns a specified role with certain permissions to, specified groups or users of a database;
REVOKE
takes away specified permissions on a table (or other object) from, or takes away a specified role with certain permissions from, specified groups or users of a database;
DENY
denies a specified permission to a security object.
For example, GRANT can be used to give a user privileges to perform SELECT, INSERT, UPDATE and DELETE on a specific table or on multiple tables.
The REVOKE command is used to take a privilege away entirely (the default) or to revoke specific commands such as UPDATE or DELETE, depending on requirements.
=== Example ===
In the first example, GRANT gives privileges to user User1 to do SELECT, INSERT, UPDATE and DELETE on the table named Employees.
In the second example, REVOKE removes User1's privileges to use the INSERT command on the table Employees.
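Written out in T-SQL, the two statements described here would read:

```sql
-- First example: grant DML privileges on Employees to User1.
GRANT SELECT, INSERT, UPDATE, DELETE ON Employees TO User1;

-- Second example: remove User1's INSERT privilege again.
REVOKE INSERT ON Employees FROM User1;
```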
DENY is a specific command. Every user has a list of privileges that are either granted or denied, and DENY exists to explicitly prohibit particular privileges on database objects, even if those privileges would otherwise be available through role or group membership.
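Continuing with the same hypothetical table and user, an explicit denial looks like this:

```sql
-- User1 cannot delete rows from Employees, even if DELETE is granted via a role.
DENY DELETE ON Employees TO User1;
```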
== Oracle Database ==
Oracle Database divides SQL statements into the following types:
Data Definition Language (DDL) Statements
Data Manipulation Language (DML) Statements
Transaction Control Statements
Session Control Statements
System Control Statement
Embedded SQL Statements
For details, refer to the Oracle TCL documentation.
Data definition language (DDL) statements let you perform these tasks:
Create, alter, and drop schema objects
Grant and revoke privileges and roles
Analyze information on a table, index, or cluster
Establish auditing options
Add comments to the data dictionary
So Oracle Database DDL commands include the GRANT and REVOKE privilege statements, which are part of the data control language in Microsoft SQL Server.
Syntax for grant and revoke in Oracle Database:
=== Example ===
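In outline, and using Oracle's familiar sample schema names purely as placeholders, grant and revoke statements look like this:

```sql
-- Grant object privileges, optionally letting the grantee pass them on.
GRANT SELECT, UPDATE ON hr.employees TO scott WITH GRANT OPTION;

-- Revoke them again.
REVOKE SELECT, UPDATE ON hr.employees FROM scott;
```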
=== Transaction Control Statements in Oracle ===
Transaction control statements manage changes made by DML statements. The transaction control statements are:
COMMIT
ROLLBACK
SAVEPOINT
SET TRANSACTION
SET CONSTRAINT
== MySQL ==
The MySQL server divides SQL statements into the following types:
Data Definition Statements
Data Manipulation Statements
Transactional and Locking Statements
Replication Statements
Prepared Statements
Compound Statement Syntax
Database Administration Statements
Utility Statements
For details, refer to the MySQL documentation on transactional statements.
The GRANT and REVOKE syntax is documented under Database Administration Statements → Account Management Statements.
The GRANT statement enables system administrators to grant privileges and roles, which can be granted to user accounts and roles. These syntax restrictions apply:
GRANT cannot mix granting both privileges and roles in the same statement. A given GRANT statement must grant either privileges or roles.
The ON clause distinguishes whether the statement grants privileges or roles:
With ON, the statement grants privileges
Without ON, the statement grants roles.
It is permitted to assign both privileges and roles to an account, but you must use separate GRANT statements, each with syntax appropriate to what is to be granted.
The REVOKE statement enables system administrators to revoke privileges and roles, which can be revoked from user accounts and roles.
=== Examples ===
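A sketch of these restrictions in MySQL syntax, with placeholder account, role, and object names:

```sql
-- With ON: the statement grants privileges.
GRANT SELECT, INSERT ON shop.orders TO 'app_user'@'localhost';

-- Without ON: the statement grants a role (must be a separate statement).
GRANT 'read_orders' TO 'report_user'@'localhost';

-- Revoking a privilege again.
REVOKE INSERT ON shop.orders FROM 'app_user'@'localhost';
```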
== PostgreSQL ==
In PostgreSQL, executing DCL statements is transactional, and they can be rolled back. GRANT and REVOKE are the SQL commands used to control the privileges given to users in a database.
== SQLite ==
SQLite does not have any DCL commands, as it does not have usernames or logins. Instead, SQLite depends on file-system permissions to define who can open and access a database.
== See also ==
Data definition language
Data manipulation language
Data query language
== References ==
Proof-of-payment (POP) or proof-of-fare (POF) is an honor-based fare collection system used on many public transportation systems. Instead of checking each passenger as they enter a fare control zone, passengers are required to carry a paper ticket, transit pass, transit smartcard — or open payment methods such as contactless credit or debit cards (if applicable) — after swiping or tapping on smart card readers, to prove that they have paid the valid fare. Fares are enforced via random spot-checks by inspectors such as conductors or enforcement officers, to ensure that passengers have paid their fares and are not committing fare evasion. On many systems, a passenger can purchase a single-use ticket or multi-use pass at any time in advance, but must insert the ticket or pass into a validation machine immediately before use. Validation machines in stations or on board vehicles time stamp the ticket. The ticket is then valid for some period of time after the stamped time.
This method is implemented when the transit authority believes it will lose less money to the resultant fare evasion than it would cost to install and maintain a more direct collection method. It may be used in systems whose passenger volume and density are not very high most of the time—as passenger volumes increase, more-direct collection methods become more profitable. However, in some countries it is common even on systems with very high passenger volume. Proof-of-payment is usually applied on one-person operated rail and road vehicles as well as on automatically operated rail lines.
The honor system can be complemented with a more direct collection approach where this would be feasible—a transit authority using POP will usually post fare inspectors, sometimes armed as a police force, to man entrances to stations on a discretionary basis when a high volume of passengers is expected. For example, transit users leaving a stadium after a major concert or sporting event will likely have to buy a ticket from an attendant (or show proof of payment) to gain access to the station serving the stadium. Direct fare collection methods may also be used at major hubs in systems that otherwise use POP. An example of this is the Tower City station on Cleveland's RTA Rapid Transit Red Line, which uses faregates.
Travel without a valid ticket is not usually a criminal offense, but a penalty fare or a fine can be charged.
== Advantages and disadvantages ==
Advantages of proof-of-payment include lower labor costs for fare collection, simpler station design, easier access for mobility-impaired passengers, easier access for those carrying packages or in case of an emergency, and a more open feel for passengers. On buses, proof-of-payment saves drivers the time needed to collect fares, and makes it possible for all doors to be used for boarding. Validated tickets can double as transfers between lines. Collecting fares outside a bus "offers the greatest potential for reducing dwell time."
Disadvantages include higher rates of fare evasion, reduced security on station platforms when no barrier is used, increased potential of racial profiling and other unequal enforcement as "likely fare evaders" are targeted, and regularly exposing passengers to unpleasant confrontational situations when a rider without the proper proof is detained and removed from the vehicle. Visitors unfamiliar with a system's validation requirements who innocently misunderstand the rules are especially likely to get into trouble.
== Worldwide uses ==
Proof-of-payment is popular in Germany, where it was widely introduced during the labor shortages resulting from the Economic Miracle of the 1960s. It has also been adopted in Eastern Europe and Canada and has made some inroads in newer systems in the United States. The first use of the term "POP" or "Proof of Payment" on a rail line in North America is believed to have been in Edmonton in 1980. Since then, many new light rail, streetcar, and bus rapid transit systems have adopted the procedure, mainly to speed up boarding by avoiding the crowding at doors that occurs when fares are paid at a farebox beside the driver, as is common practice on traditional buses. TriMet in Portland, Oregon was the first large transit agency to adopt proof of payment on its bus system, from September 1982 to April 1984. It was discontinued after the agency found that fare evasion and vandalism had increased, while drivers no longer waiting for fares to be paid had added little productivity. San Francisco's MUNI system became the first North American system-wide adopter of proof-of-payment on July 1, 2012, across its buses, light rail, and heritage streetcars (with the exception of cable cars), allowing boarding through all available doors.
== References ==
== External links ==
TCRP Report 80: A Toolkit for Self-Service, Barrier-Free Fare Collection (PDF)