In cryptography, a block cipher mode of operation is an algorithm that uses a block cipher to provide information security such as confidentiality or authenticity.[1] A block cipher by itself is only suitable for the secure cryptographic transformation (encryption or decryption) of one fixed-length group of bits called a block.[2] A mode of operation describes how to repeatedly apply a cipher's single-block operation to securely transform amounts of data larger than a block.[3][4][5]
Most modes require a unique binary sequence, often called an initialization vector (IV), for each encryption operation. The IV must be non-repeating, and for some modes must also be random. The initialization vector is used to ensure that distinct ciphertexts are produced even when the same plaintext is encrypted multiple times independently with the same key.[6] Block ciphers may be capable of operating on more than one block size, but during transformation the block size is always fixed. Block cipher modes operate on whole blocks and require that the final data fragment be padded to a full block if it is smaller than the current block size.[2] There are, however, modes that do not require padding because they effectively use a block cipher as a stream cipher.
Historically, encryption modes have been studied extensively in regard to their error propagation properties under various scenarios of data modification. Later development regarded integrity protection as an entirely separate cryptographic goal. Some modern modes of operation combine confidentiality and authenticity in an efficient way, and are known as authenticated encryption modes.[7]
The earliest modes of operation, ECB, CBC, OFB, and CFB (see below for all), date back to 1981 and were specified in FIPS 81, DES Modes of Operation. In 2001, the US National Institute of Standards and Technology (NIST) revised its list of approved modes of operation by including AES as a block cipher and adding CTR mode in SP800-38A, Recommendation for Block Cipher Modes of Operation. Finally, in January 2010, NIST added XTS-AES in SP800-38E, Recommendation for Block Cipher Modes of Operation: The XTS-AES Mode for Confidentiality on Storage Devices. Other confidentiality modes exist which have not been approved by NIST. For example, CTS is ciphertext stealing mode, and it is available in many popular cryptographic libraries.
The block cipher modes ECB, CBC, OFB, CFB, CTR, and XTS provide confidentiality, but they do not protect against accidental modification or malicious tampering. Modification or tampering can be detected with a separate message authentication code such as CBC-MAC, or a digital signature. The cryptographic community recognized the need for dedicated integrity assurances and NIST responded with HMAC, CMAC, and GMAC. HMAC was approved in 2002 as FIPS 198, The Keyed-Hash Message Authentication Code (HMAC), CMAC was released in 2005 under SP800-38B, Recommendation for Block Cipher Modes of Operation: The CMAC Mode for Authentication, and GMAC was formalized in 2007 under SP800-38D, Recommendation for Block Cipher Modes of Operation: Galois/Counter Mode (GCM) and GMAC.
The cryptographic community observed that compositing (combining) a confidentiality mode with an authenticity mode could be difficult and error prone. They therefore began to supply modes which combined confidentiality and data integrity into a single cryptographic primitive (an encryption algorithm). These combined modes are referred to as authenticated encryption, AE or "authenc". Examples of AE modes are CCM (SP800-38C), GCM (SP800-38D), CWC, EAX, IAPM, and OCB.
Modes of operation are defined by a number of national and internationally recognized standards bodies. Notable standards organizations include NIST, ISO (with ISO/IEC 10116[5]), the IEC, the IEEE, ANSI, and the IETF.
An initialization vector (IV) or starting variable (SV)[5] is a block of bits that is used by several modes to randomize the encryption and hence to produce distinct ciphertexts even if the same plaintext is encrypted multiple times, without the need for a slower re-keying process.[citation needed]
An initialization vector has different security requirements than a key, so the IV usually does not need to be secret. For most block cipher modes it is important that an initialization vector is never reused under the same key, i.e. it must be a cryptographic nonce. Many block cipher modes have stronger requirements, such as a requirement that the IV be random or pseudorandom. Some block ciphers have particular problems with certain initialization vectors, such as an all-zero IV generating no encryption (for some keys).
It is recommended to review the IV requirements for the particular block cipher mode in the relevant specification, for example SP800-38A.
For CBC and CFB, reusing an IV leaks some information about the first block of plaintext, and about any common prefix shared by the two messages.
For OFB and CTR, reusing an IV causes key bitstream re-use, which breaks security.[8]This can be seen because both modes effectively create a bitstream that is XORed with the plaintext, and this bitstream is dependent on the key and IV only.
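This keystream re-use can be demonstrated in a few lines. The following is a minimal sketch (assuming the pyca/cryptography package; the key, nonce, and messages are illustrative) that encrypts two messages under AES-CTR with the same key and nonce and shows that XORing the two ciphertexts exposes the XOR of the two plaintexts:

    # Nonce/IV reuse in CTR (or OFB): both messages are XORed with the same
    # keystream, so the keystream cancels out of c1 XOR c2.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, nonce = os.urandom(16), os.urandom(16)        # the same nonce is (wrongly) reused

    def ctr_encrypt(msg: bytes) -> bytes:
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return enc.update(msg) + enc.finalize()

    p1, p2 = b"transfer $100 to alice", b"transfer $999 to mallo"
    c1, c2 = ctr_encrypt(p1), ctr_encrypt(p2)

    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
    assert xor(c1, c2) == xor(p1, p2)                  # plaintext relationship leaks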
In CBC mode, the IV must be unpredictable (random or pseudorandom) at encryption time; in particular, the (previously) common practice of re-using the last ciphertext block of a message as the IV for the next message is insecure (for example, this method was used by SSL 2.0). If an attacker knows the IV (or the previous block of ciphertext) before the next plaintext is specified, they can check their guess about plaintext of some block that was encrypted with the same key before (this is known as the TLS CBC IV attack).[9]
For some keys, an all-zero initialization vector may cause some block cipher modes (CFB-8, OFB-8) to get their internal state stuck at all-zero. For CFB-8, an all-zero IV combined with an all-zero plaintext causes 1/256 of keys to generate no encryption: the plaintext is returned as the ciphertext.[10] For OFB-8, using an all-zero initialization vector will generate no encryption for 1/256 of keys.[11] OFB-8 encryption returns the plaintext unencrypted for affected keys.
Some modes (such as AES-SIV and AES-GCM-SIV) are built to be more nonce-misuse resistant, i.e. resilient to scenarios in which the randomness generation is faulty or under the control of the attacker.
A block cipher works on units of a fixed size (known as a block size), but messages come in a variety of lengths. So some modes (namely ECB and CBC) require that the final block be padded before encryption. Several padding schemes exist. The simplest is to add null bytes to the plaintext to bring its length up to a multiple of the block size, but care must be taken that the original length of the plaintext can be recovered; this is trivial, for example, if the plaintext is a C style string which contains no null bytes except at the end. Slightly more complex is the original DES method, which is to add a single one bit, followed by enough zero bits to fill out the block; if the message ends on a block boundary, a whole padding block will be added. Most sophisticated are CBC-specific schemes such as ciphertext stealing or residual block termination, which do not cause any extra ciphertext, at the expense of some additional complexity. Schneier and Ferguson suggest two possibilities, both simple: append a byte with value 128 (hex 80), followed by as many zero bytes as needed to fill the last block, or pad the last block with n bytes all with value n.
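As a minimal sketch of the last padding rule mentioned above (pad the last block with n bytes all of value n, the rule also used by PKCS#7), the following snippet pads and unpads a message for a 16-byte block size; the block size constant and function names are illustrative:

    # Pad with n bytes each of value n (1 <= n <= block size); a full block of
    # padding is added when the message is already block-aligned, so that the
    # original length can always be recovered unambiguously.
    BLOCK = 16

    def pad(data: bytes) -> bytes:
        n = BLOCK - (len(data) % BLOCK)
        return data + bytes([n]) * n

    def unpad(data: bytes) -> bytes:
        n = data[-1]
        if not 1 <= n <= BLOCK or data[-n:] != bytes([n]) * n:
            raise ValueError("invalid padding")
        return data[:-n]

    assert unpad(pad(b"hello")) == b"hello"
    assert len(pad(b"0123456789abcdef")) == 32           # whole extra block appended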
CFB, OFB and CTR modes do not require any special measures to handle messages whose lengths are not multiples of the block size, since the modes work by XORing the plaintext with the output of the block cipher. The last partial block of plaintext is XORed with the first few bytes of the last keystream block, producing a final ciphertext block that is the same size as the final partial plaintext block. This characteristic of stream ciphers makes them suitable for applications that require the encrypted ciphertext data to be the same size as the original plaintext data, and for applications that transmit data in streaming form where it is inconvenient to add padding bytes.
A number of modes of operation have been designed to combine secrecy and authentication in a single cryptographic primitive. Examples of such modes are[12] integrity-aware cipher block chaining (IACBC)[clarification needed], integrity-aware parallelizable mode (IAPM),[13] OCB, EAX, CWC, CCM, and GCM. Authenticated encryption modes are classified as single-pass modes or double-pass modes.
In addition, some modes also allow for the authentication of unencrypted associated data, and these are called AEAD (authenticated encryption with associated data) schemes. For example, EAX mode is a double-pass AEAD scheme while OCB mode is single-pass.
Galois/counter mode (GCM) combines the well-known counter mode of encryption with the new Galois mode of authentication. The key feature is the ease of parallel computation of the Galois field multiplication used for authentication. This feature permits higher throughput than encryption modes that use chaining, such as CBC.
GCM is defined for block ciphers with a block size of 128 bits. Galois message authentication code (GMAC) is an authentication-only variant of the GCM which can form an incremental message authentication code. Both GCM and GMAC can accept initialization vectors of arbitrary length. GCM can take full advantage of parallel processing and implementing GCM can make efficient use of an instruction pipeline or a hardware pipeline. The CBC mode of operation incurs pipeline stalls that hamper its efficiency and performance.
Like in CTR, blocks are numbered sequentially, and then this block number is combined with an IV and encrypted with a block cipher E, usually AES. The result of this encryption is then XORed with the plaintext to produce the ciphertext. Like all counter modes, this is essentially a stream cipher, and so it is essential that a different IV is used for each stream that is encrypted.
The ciphertext blocks are considered coefficients of a polynomial which is then evaluated at a key-dependent point H, using finite field arithmetic. The result is then encrypted, producing an authentication tag that can be used to verify the integrity of the data. The encrypted text then contains the IV, ciphertext, and authentication tag.
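The following is a minimal usage sketch of GCM as an AEAD mode, assuming the high-level AESGCM class from the pyca/cryptography package (the nonce, associated data and message are illustrative; this particular API appends the authentication tag to the ciphertext):

    # AES-GCM: the associated data is authenticated but not encrypted; decryption
    # verifies the tag and raises InvalidTag if anything was tampered with.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                               # must never repeat for this key
    aad = b"packet header (authenticated only)"

    ciphertext = aesgcm.encrypt(nonce, b"secret payload", aad)
    assert aesgcm.decrypt(nonce, ciphertext, aad) == b"secret payload"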
Counter with cipher block chaining message authentication code (counter with CBC-MAC; CCM) is an authenticated encryption algorithm designed to provide both authentication and confidentiality. CCM mode is only defined for block ciphers with a block length of 128 bits.[14][15]
Synthetic initialization vector (SIV) is a nonce-misuse resistant block cipher mode.
SIV synthesizes an internal IV using the pseudorandom function S2V. S2V is a keyed hash based on CMAC, and its input is the sequence of associated data fields followed by the plaintext.
SIV encrypts the S2V output and the plaintext using AES-CTR, keyed with the encryption key (K2).
SIV can support external nonce-based authenticated encryption, in which case one of the authenticated data fields is utilized for this purpose. RFC 5297[16] specifies that for interoperability purposes the last authenticated data field should be used as the external nonce.
Owing to the use of two keys, the authentication key K1 and encryption key K2, naming schemes for SIV AEAD-variants may lead to some confusion; for example AEAD_AES_SIV_CMAC_256 refers to AES-SIV with two AES-128 keys and not AES-256.
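A minimal usage sketch of AES-SIV, assuming the AESSIV class available in recent releases of the pyca/cryptography package (key size and messages are illustrative). The 256-bit key below corresponds to AEAD_AES_SIV_CMAC_256, i.e. two AES-128 subkeys as described above; encryption is deterministic, so repeating the same inputs repeats the same output, which is the worst that nonce misuse can leak:

    from cryptography.hazmat.primitives.ciphers.aead import AESSIV

    key = AESSIV.generate_key(256)                       # K1 || K2, two AES-128 keys
    aessiv = AESSIV(key)
    # Per RFC 5297, an external nonce (if any) goes in the last associated-data field.
    aad = [b"header", b"external-nonce-0001"]

    ct1 = aessiv.encrypt(b"secret payload", aad)
    ct2 = aessiv.encrypt(b"secret payload", aad)
    assert ct1 == ct2                                    # deterministic for identical inputs
    assert aessiv.decrypt(ct1, aad) == b"secret payload"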
AES-GCM-SIV is a mode of operation for the Advanced Encryption Standard which provides similar performance to Galois/counter mode as well as misuse resistance in the event of the reuse of a cryptographic nonce. The construction is defined in RFC 8452.[17]
AES-GCM-SIV synthesizes the internal IV. It derives a hash of the additional authenticated data and plaintext using the POLYVAL Galois hash function. The hash is then encrypted with an AES key, and used as the authentication tag and as the AES-CTR initialization vector.
AES-GCM-SIV is an improvement over the very similarly named algorithm GCM-SIV, with a few very small changes (e.g. how AES-CTR is initialized), which yield practical benefits to its security: "This addition allows for encrypting up to 2^50 messages with the same key, compared to the significant limitation of only 2^32 messages that were allowed with GCM-SIV."[18]
Many modes of operation have been defined. Some of these are described below. The purpose of cipher modes is to mask patterns which exist in encrypted data, as illustrated in the description of the weakness of ECB.
Different cipher modes mask patterns by cascading outputs from the cipher block or other globally deterministic variables into the subsequent cipher block. The inputs of the listed modes are summarized in the following table:
Note: g(i) is any deterministic function, often the identity function.
The simplest of the encryption modes is the electronic codebook (ECB) mode (named after conventional physical codebooks[19]). The message is divided into blocks, and each block is encrypted separately. ECB is not recommended for use in cryptographic protocols: the disadvantage of this method is a lack of diffusion, wherein it fails to hide data patterns when it encrypts identical plaintext blocks into identical ciphertext blocks.[20][21][22]
A striking example of the degree to which ECB can leave plaintext data patterns in the ciphertext can be seen when ECB mode is used to encrypt a bitmap image which contains large areas of uniform color. While the color of each individual pixel has supposedly been encrypted, the overall image may still be discerned, as the pattern of identically colored pixels in the original remains visible in the encrypted version.
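The same effect can be shown directly at the block level. A minimal sketch, assuming the pyca/cryptography package (key and message are illustrative), encrypts a plaintext made of repeated 16-byte blocks under AES-ECB and observes that every ciphertext block is identical:

    # ECB leaks structure: equal plaintext blocks give equal ciphertext blocks.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    plaintext = b"ATTACK AT DAWN!!" * 4                  # four identical 16-byte blocks
    ciphertext = enc.update(plaintext) + enc.finalize()

    blocks = [ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)]
    assert len(set(blocks)) == 1                         # all ciphertext blocks identical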
ECB mode can also make protocols without integrity protection even more susceptible to replay attacks, since each block gets decrypted in exactly the same way.[citation needed]
Ehrsam, Meyer, Smith and Tuchman invented the cipher block chaining (CBC) mode of operation in 1976. In CBC mode, each block of plaintext is XORed with the previous ciphertext block before being encrypted. This way, each ciphertext block depends on all plaintext blocks processed up to that point. To make each message unique, an initialization vector must be used in the first block.
If the first block has index 1, the mathematical formula for CBC encryption is Ci = EK(Pi ⊕ Ci−1), with C0 = IV,
while the mathematical formula for CBC decryption is Pi = DK(Ci) ⊕ Ci−1, with C0 = IV.
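A minimal sketch of these formulas in code, using the pyca/cryptography package only as the raw single-block primitive EK / DK (its ECB mode applied to one block at a time) with the CBC chaining written out by hand; the plaintext is assumed to be already padded to a multiple of 16 bytes:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
        E = Cipher(algorithms.AES(key), modes.ECB()).encryptor().update
        prev, out = iv, b""
        for i in range(0, len(plaintext), 16):
            prev = E(xor(plaintext[i:i + 16], prev))     # C_i = E_K(P_i xor C_{i-1})
            out += prev
        return out

    def cbc_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
        D = Cipher(algorithms.AES(key), modes.ECB()).decryptor().update
        prev, out = iv, b""
        for i in range(0, len(ciphertext), 16):
            block = ciphertext[i:i + 16]
            out += xor(D(block), prev)                   # P_i = D_K(C_i) xor C_{i-1}
            prev = block
        return out

    key, iv = os.urandom(16), os.urandom(16)
    msg = b"exactly sixteen!" * 3
    assert cbc_decrypt(key, iv, cbc_encrypt(key, iv, msg)) == msg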
CBC has been the most commonly used mode of operation. Its main drawbacks are that encryption is sequential (i.e., it cannot be parallelized), and that the message must be padded to a multiple of the cipher block size. One way to handle this last issue is through the method known as ciphertext stealing. Note that a one-bit change in a plaintext or initialization vector (IV) affects all following ciphertext blocks.
Decrypting with the incorrect IV causes the first block of plaintext to be corrupt but subsequent plaintext blocks will be correct. This is because each block is XORed with the ciphertext of the previous block, not the plaintext, so one does not need to decrypt the previous block before using it as the IV for the decryption of the current one. This means that a plaintext block can be recovered from two adjacent blocks of ciphertext. As a consequence, decryptioncanbe parallelized. Note that a one-bit change to the ciphertext causes complete corruption of the corresponding block of plaintext, and inverts the corresponding bit in the following block of plaintext, but the rest of the blocks remain intact. This peculiarity is exploited in different padding oracle attacks, such as POODLE.
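The bit-flipping behaviour described above is easy to observe. A minimal sketch, assuming the pyca/cryptography package (key, IV and plaintext are illustrative), flips one ciphertext bit and checks which plaintext bytes change after decryption:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, iv = os.urandom(16), os.urandom(16)
    pt = b"A" * 48                                       # three 16-byte blocks
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ct = bytearray(enc.update(pt) + enc.finalize())

    ct[16] ^= 0x01                                       # flip one bit in ciphertext block 2
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    out = dec.update(bytes(ct)) + dec.finalize()

    assert out[0:16] == pt[0:16]                         # block 1 unaffected
    assert out[16:32] != pt[16:32]                       # block 2 completely garbled
    assert out[32] == (pt[32] ^ 0x01)                    # same bit flipped in block 3
    assert out[33:48] == pt[33:48]                       # rest of block 3 intact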
Explicit initialization vectors take advantage of this property by prepending a single random block to the plaintext. Encryption is done as normal, except the IV does not need to be communicated to the decryption routine. Whatever IV decryption uses, only the random block is "corrupted". It can be safely discarded and the rest of the decryption is the original plaintext.
The propagating cipher block chaining[23] or plaintext cipher-block chaining[24] mode was designed to cause small changes in the ciphertext to propagate indefinitely when decrypting, as well as when encrypting. In PCBC mode, each block of plaintext is XORed with both the previous plaintext block and the previous ciphertext block before being encrypted. Like with CBC mode, an initialization vector is used in the first block.
Unlike CBC, decrypting PCBC with the incorrect IV (initialization vector) causes all blocks of plaintext to be corrupt.
The encryption and decryption algorithms are as follows: encryption is Ci = EK(Pi ⊕ Pi−1 ⊕ Ci−1) and decryption is Pi = DK(Ci) ⊕ Pi−1 ⊕ Ci−1, with P0 ⊕ C0 = IV.
PCBC is used in Kerberos v4 and WASTE, most notably, but otherwise is not common.
On a message encrypted in PCBC mode, if two adjacent ciphertext blocks are exchanged, this does not affect the decryption of subsequent blocks.[25]For this reason, PCBC is not used in Kerberos v5.
The cipher feedback (CFB) mode, in its simplest form, uses the entire output of the block cipher. In this variation, it is very similar to CBC, turning a block cipher into a self-synchronizing stream cipher. CFB decryption in this variation is almost identical to CBC encryption performed in reverse: encryption is Ci = EK(Ci−1) ⊕ Pi and decryption is Pi = EK(Ci−1) ⊕ Ci, with C0 = IV.
NIST SP800-38A defines CFB with a bit-width.[26]The CFB mode also requires an integer parameter, denoted s, such that 1 ≤ s ≤ b. In the specification of the CFB mode below, each plaintext segment (Pj) and ciphertext segment (Cj) consists of s bits. The value of s is sometimes incorporated into the name of the mode, e.g., the 1-bit CFB mode, the 8-bit CFB mode, the 64-bit CFB mode, or the 128-bit CFB mode.
These modes will truncate the output of the underlying block cipher.
CFB-1 is considered self synchronizing and resilient to loss of ciphertext; "When the 1-bit CFB mode is used, then the synchronization is automatically restored b+1 positions after the inserted or deleted bit. For other values of s in the CFB mode, and for the other confidentiality modes in this recommendation, the synchronization must be restored externally." (NIST SP800-38A). I.e. 1-bit loss in a 128-bit-wide block cipher like AES will render 129 invalid bits before emitting valid bits.
CFB may also self synchronize in some special cases other than those specified. For example, a one bit change in CFB-128 with an underlying 128 bit block cipher, will re-synchronize after two blocks. (However, CFB-128 etc. will not handle bit loss gracefully; a one-bit loss will cause the decryptor to lose alignment with the encryptor)
Like CBC mode, changes in the plaintext propagate forever in the ciphertext, and encryption cannot be parallelized. Also like CBC, decryption can be parallelized.
CFB, OFB and CTR share two advantages over CBC mode: the block cipher is only ever used in the encrypting direction, and the message does not need to be padded to a multiple of the cipher block size (though ciphertext stealing can also be used for CBC mode to make padding unnecessary).
The output feedback (OFB) mode makes a block cipher into a synchronous stream cipher. It generates keystream blocks, which are then XORed with the plaintext blocks to get the ciphertext. Just as with other stream ciphers, flipping a bit in the ciphertext produces a flipped bit in the plaintext at the same location. This property allows many error-correcting codes to function normally even when applied before encryption.
Because of the symmetry of the XOR operation, encryption and decryption are exactly the same: Cj = Pj ⊕ Oj and Pj = Cj ⊕ Oj, where the keystream blocks are Oj = EK(Oj−1) with O0 = IV.
Each output feedback block cipher operation depends on all previous ones, and so cannot be performed in parallel. However, because the plaintext or ciphertext is only used for the final XOR, the block cipher operations may be performed in advance, allowing the final step to be performed in parallel once the plaintext or ciphertext is available.
It is possible to obtain an OFB mode keystream by using CBC mode with a constant string of zeroes as input. This can be useful, because it allows the usage of fast hardware implementations of CBC mode for OFB mode encryption.
Using OFB mode with a partial block as feedback, like CFB mode, reduces the average cycle length by a factor of 2^32 or more. A mathematical model proposed by Davies and Parkin and substantiated by experimental results showed that an average cycle length near the obtainable maximum can be achieved only with full feedback. For this reason, support for truncated feedback was removed from the specification of OFB.[27]
Like OFB, counter mode turns a block cipher into a stream cipher. It generates the next keystream block by encrypting successive values of a "counter". The counter can be any function which produces a sequence which is guaranteed not to repeat for a long time, although an actual increment-by-one counter is the simplest and most popular. The usage of a simple deterministic input function used to be controversial; critics argued that "deliberately exposing a cryptosystem to a known systematic input represents an unnecessary risk".[28] However, today CTR mode is widely accepted, and any problems are considered a weakness of the underlying block cipher, which is expected to be secure regardless of systemic bias in its input.[29] Along with CBC, CTR mode is one of two block cipher modes recommended by Niels Ferguson and Bruce Schneier.[30]
CTR mode was introduced by Whitfield Diffie and Martin Hellman in 1979.[29]
CTR mode has similar characteristics to OFB, but also allows a random-access property during decryption. CTR mode is well suited to operate on a multi-processor machine, where blocks can be encrypted in parallel. Furthermore, it does not suffer from the short-cycle problem that can affect OFB.[31]
If the IV/nonce is random, then they can be combined with the counter using any invertible operation (concatenation, addition, or XOR) to produce the actual unique counter block for encryption. In case of a non-random nonce (such as a packet counter), the nonce and counter should be concatenated (e.g., storing the nonce in the upper 64 bits and the counter in the lower 64 bits of a 128-bit counter block). Simply adding or XORing the nonce and counter into a single value would break the security under a chosen-plaintext attack in many cases, since the attacker may be able to manipulate the entire IV–counter pair to cause a collision. Once an attacker controls the IV–counter pair and plaintext, XOR of the ciphertext with the known plaintext would yield a value that, when XORed with the ciphertext of the other block sharing the same IV–counter pair, would decrypt that block.[32]
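A minimal sketch of this nonce-and-counter layout, using the pyca/cryptography package only for the raw AES block call (via ECB on one block at a time): a 64-bit nonce occupies the upper half and a big-endian 64-bit block counter the lower half of each 128-bit counter block, and the resulting keystream is truncated to the message length, so no padding is needed:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def ctr_encrypt(key: bytes, nonce8: bytes, msg: bytes) -> bytes:
        E = Cipher(algorithms.AES(key), modes.ECB()).encryptor().update
        keystream = b""
        for ctr in range((len(msg) + 15) // 16):
            counter_block = nonce8 + ctr.to_bytes(8, "big")   # nonce || counter
            keystream += E(counter_block)
        return bytes(m ^ k for m, k in zip(msg, keystream))   # truncated to len(msg)

    key, nonce = os.urandom(16), os.urandom(8)
    msg = b"an odd-length message needs no padding"
    ct = ctr_encrypt(key, nonce, msg)
    assert len(ct) == len(msg)
    assert ctr_encrypt(key, nonce, ct) == msg            # decryption is the same operation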
Note that the nonce in this diagram is equivalent to the initialization vector (IV) in the other diagrams. However, if the offset/location information is corrupt, it will be impossible to partially recover such data due to the dependence on byte offset.
"Error propagation" properties describe how a decryption behaves during bit errors, i.e. how error in one bit cascades to different decrypted bits.
Bit errors may occur intentionally in attacks or randomly due to transmission errors.
For modern authenticated encryption (AEAD) or protocols with message authentication codes chained in MAC-then-encrypt order, any bit error should completely abort decryption rather than hand the decryptor a plaintext containing specific bit errors; i.e. if decryption succeeds, there should not be any bit errors. As such, error propagation is a less important subject in modern cipher modes than in traditional confidentiality-only modes.
(Source: SP800-38A Table D.2: Summary of Effect of Bit Errors on Decryption)
It might be observed, for example, that a one-block error in the transmitted ciphertext would result in a one-block error in the reconstructed plaintext for ECB mode encryption, while in CBC mode such an error would affect two blocks. Some felt that such resilience was desirable in the face of random errors (e.g., line noise), while others argued that error correcting increased the scope for attackers to maliciously tamper with a message.
However, when proper integrity protection is used, such an error will result (with high probability) in the entire message being rejected. If resistance to random error is desirable, error-correcting codes should be applied to the ciphertext before transmission.
Many more modes of operation for block ciphers have been suggested. Some have been accepted, fully described (even standardized), and are in use. Others have been found insecure, and should never be used. Still others do not fit the categories of confidentiality, authenticity, or authenticated encryption – for example key feedback mode and Davies–Meyer hashing.
NIST maintains a list of proposed modes for block ciphers at Modes Development.[26][33]
Disk encryption often uses special purpose modes specifically designed for the application. Tweakable narrow-block encryption modes (LRW, XEX, and XTS) and wide-block encryption modes (CMC and EME) are designed to securely encrypt sectors of a disk (see disk encryption theory).
Many modes use an initialization vector (IV) which, depending on the mode, may have requirements such as being only used once (a nonce) or being unpredictable ahead of its publication, etc. Reusing an IV with the same key in CTR, GCM or OFB mode results in XORing the same keystream with two or more plaintexts, a clear misuse of a stream, with a catastrophic loss of security. Deterministic authenticated encryption modes such as the NIST Key Wrap algorithm and the SIV (RFC 5297) AEAD mode do not require an IV as an input, and return the same ciphertext and authentication tag every time for a given plaintext and key. Other IV misuse-resistant modes such as AES-GCM-SIV benefit from an IV input, for example in the maximum amount of data that can be safely encrypted with one key, while not failing catastrophically if the same IV is used multiple times.
Block ciphers can also be used in other cryptographic protocols. They are generally used in modes of operation similar to the block modes described here. As with all protocols, to be cryptographically secure, care must be taken to design these modes of operation correctly.
There are several schemes which use a block cipher to build a cryptographic hash function. See one-way compression function for descriptions of several such methods.
Cryptographically secure pseudorandom number generators(CSPRNGs) can also be built using block ciphers.
Message authentication codes (MACs) are often built from block ciphers. CBC-MAC, OMAC and PMAC are examples.
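As a minimal sketch of a block-cipher-based MAC, the following uses the CMAC implementation in the pyca/cryptography package over AES (key and message are illustrative):

    import os
    from cryptography.hazmat.primitives.cmac import CMAC
    from cryptography.hazmat.primitives.ciphers import algorithms

    key = os.urandom(16)

    c = CMAC(algorithms.AES(key))
    c.update(b"message to authenticate")
    tag = c.finalize()

    verifier = CMAC(algorithms.AES(key))
    verifier.update(b"message to authenticate")
    verifier.verify(tag)                                 # raises InvalidSignature on mismatch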
Source: https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Counter_mode
A cascading failure is a failure in a system of interconnected parts in which the failure of one or few parts leads to the failure of other parts, growing progressively as a result of positive feedback. This can occur when a single part fails, increasing the probability that other portions of the system fail.[1][2][3] Such a failure may happen in many types of systems, including power transmission, computer networking, finance, transportation systems, organisms, the human body, and ecosystems.
Cascading failures may occur when one part of the system fails. When this happens, other parts must then compensate for the failed component. This in turn overloads these nodes, causing them to fail as well, prompting additional nodes to fail one after another.
Cascading failure is common in power grids when one of the elements fails (completely or partially) and shifts its load to nearby elements in the system. Those nearby elements are then pushed beyond their capacity so they become overloaded and shift their load onto other elements. Cascading failure is a common effect seen in high voltage systems, where a single point of failure (SPF) on a fully loaded or slightly overloaded system results in a sudden spike across all nodes of the system. This surge current can induce the already overloaded nodes into failure, setting off more overloads and thereby taking down the entire system in a very short time.
This failure process cascades through the elements of the system like a ripple on a pond and continues until substantially all of the elements in the system are compromised and/or the system becomes functionally disconnected from the source of its load. For example, under certain conditions a large power grid can collapse after the failure of a single transformer.
Monitoring the operation of a system in real time, and judicious disconnection of parts, can help stop a cascade. Another common technique is to calculate a safety margin for the system by computer simulation of possible failures, to establish safe operating levels below which none of the calculated scenarios is predicted to cause cascading failure, and to identify the parts of the network which are most likely to cause cascading failures.[4]
One of the primary problems with preventing electrical grid failures is that the speed of the control signal is no faster than the speed of the propagating power overload, i.e. since both the control signal and the electrical power are moving at the same speed, it is not possible to isolate the outage by sending a warning ahead to isolate the element.
Cascading failure caused the following power outages:
Cascading failures can also occur in computer networks (such as the Internet) in which network traffic is severely impaired or halted to or between larger sections of the network, caused by failing or disconnected hardware or software. In this context, the cascading failure is known by the term cascade failure. A cascade failure can affect large groups of people and systems.
The cause of a cascade failure is usually the overloading of a single, crucial router or node, which causes the node to go down, even briefly. It can also be caused by taking a node down for maintenance or upgrades. In either case, traffic is routed to or through another (alternative) path. This alternative path, as a result, becomes overloaded, causing it to go down, and so on. It will also affect systems which depend on the node for regular operation.
The symptoms of a cascade failure include packet loss and high network latency, not just to single systems, but to whole sections of a network or the internet. The high latency and packet loss are caused by the nodes that fail to operate due to congestion collapse, which causes them to still be present in the network but without much or any useful communication going through them. As a result, routes can still be considered valid, without them actually providing communication.
If enough routes go down because of a cascade failure, a complete section of the network or internet can become unreachable. Although undesired, this can help speed up the recovery from this failure as connections will time out, and other nodes will give up trying to establish connections to the section(s) that have become cut off, decreasing load on the involved nodes.
A common occurrence during a cascade failure is a walking failure, where sections go down, causing the next section to fail, after which the first section comes back up. This ripple can make several passes through the same sections or connecting nodes before stability is restored.
Cascade failures are a relatively recent development, with the massive increase in traffic and the high interconnectivity between systems and networks. The term was first applied in this context in the late 1990s by a Dutch IT professional and has slowly become a relatively common term for this kind of large-scale failure.[citation needed]
Network failures typically start when a single network node fails. Initially, the traffic that would normally go through the node is stopped. Systems and users get errors about not being able to reach hosts. Usually, the redundant systems of an ISP respond very quickly, choosing another path through a different backbone. The routing path through this alternative route is longer, with more hops and subsequently going through more systems that normally do not process the amount of traffic suddenly offered.
This can cause one or more systems along the alternative route to go down, creating similar problems of their own.
Related systems are also affected in this case. As an example, DNS resolution might fail, and services that would normally keep systems interconnected might break connections that are not even directly involved in the actual systems that went down. This, in turn, may cause seemingly unrelated nodes to develop problems, which can cause another cascade failure all on its own.
In December 2012, a partial loss (40%) of Gmail service occurred globally, for 18 minutes. This loss of service was caused by a routine update of load balancing software which contained faulty logic—in this case, the error was caused by logic using an inappropriate 'all' instead of the more appropriate 'some'.[5] The cascading error was fixed by fully updating a single node in the network instead of partially updating all nodes at one time.
Certain load-bearing structures with discrete structural components can be subject to the "zipper effect", where the failure of a single structural member increases the load on adjacent members. In the case of the Hyatt Regency walkway collapse, a suspended walkway (which was already overstressed due to an error in construction) failed when a single vertical suspension rod failed, overloading the neighboring rods which failed sequentially (i.e. like a zipper). A bridge that can have such a failure is called fracture critical, and numerous bridge collapses have been caused by the failure of a single part. Properly designed structures use an adequate factor of safety and/or alternate load paths to prevent this type of mechanical cascade failure.[6]
Fracture cascade is a phenomenon in geology describing the triggering of a chain reaction of subsequent fractures by a single fracture.[7] The initial fracture leads to the propagation of additional fractures, causing a cascading effect throughout the material.
Fracture cascades can occur in various materials, including rocks, ice, metals, and ceramics.[8] A common example is the bending of dry spaghetti, which in most cases breaks into more than 2 pieces, as first observed by Richard Feynman.[8]
In the context of osteoporosis, a fracture cascade is the increased risk of subsequent bone fractures after an initial one.[9]
Biochemical cascades exist in biology, where a small reaction can have system-wide implications. One negative example is the ischemic cascade, in which a small ischemic attack releases toxins which kill off far more cells than the initial damage, resulting in more toxins being released. Current research aims to find a way to block this cascade in stroke patients to minimize the damage.
In the study of extinction, sometimes the extinction of one species will cause many other extinctions to happen. Such a species is known as a keystone species.
Another example is the Cockcroft–Walton generator, which can also experience cascade failures wherein one failed diode can result in all the diodes failing in a fraction of a second.
Yet another example of this effect in a scientific experiment was the implosion in 2001 of several thousand fragile glass photomultiplier tubes used in the Super-Kamiokande experiment, where the shock wave caused by the failure of a single detector appears to have triggered the implosion of the other detectors in a chain reaction.
In finance, the risk of cascading failures of financial institutions is referred to as systemic risk: the failure of one financial institution may cause other financial institutions (its counterparties) to fail, cascading throughout the system. Institutions that are believed to pose systemic risk are deemed either "too big to fail" (TBTF) or "too interconnected to fail" (TICTF), depending on why they appear to pose a threat.
Note however that systemic risk is not due to individual institutions per se, but due to the interconnections. Frameworks to study and predict the effects of cascading failures have been developed in the research literature.[10][11][12]
A related (though distinct) type of cascading failure in finance occurs in the stock market, exemplified by the 2010 Flash Crash.[12]
Diverse infrastructures such as water supply, transportation, fuel and power stations are coupled together and depend on each other for functioning. Owing to this coupling, interdependent networks are extremely sensitive to random failures, and in particular to targeted attacks, such that a failure of a small fraction of nodes in one network can trigger an iterative cascade of failures in several interdependent networks.[13][14] Electrical blackouts frequently result from a cascade of failures between interdependent networks, and the problem has been dramatically exemplified by the several large-scale blackouts that have occurred in recent years. Blackouts are a fascinating demonstration of the important role played by the dependencies between networks. For example, the 2003 Italy blackout resulted in a widespread failure of the railway network, health care systems, and financial services and, in addition, severely influenced the telecommunication networks. The partial failure of the communication system in turn further impaired the electrical grid management system, thus producing a positive feedback on the power grid.[15] This example emphasizes how inter-dependence can significantly magnify the damage in an interacting network system.
A model for cascading failures due to overload propagation is the Motter–Lai model.[16]
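As an illustration only (a toy overload-redistribution model in the spirit of, but not identical to, the Motter–Lai model), the following sketch assumes the networkx package: each node's capacity is set slightly above its initial betweenness load, one node is removed, loads are recomputed, and any node pushed over its capacity fails in the next round:

    import networkx as nx

    def cascade(G, seed_node, tolerance=0.2):
        # Capacity = (1 + tolerance) * initial load, with load measured as betweenness.
        capacity = {v: (1 + tolerance) * load
                    for v, load in nx.betweenness_centrality(G).items()}
        H = G.copy()
        H.remove_node(seed_node)
        failed = {seed_node}
        while True:
            load = nx.betweenness_centrality(H)
            overloaded = [v for v in H.nodes if load[v] > capacity[v]]
            if not overloaded:
                return failed
            H.remove_nodes_from(overloaded)
            failed.update(overloaded)

    G = nx.barabasi_albert_graph(200, 2, seed=1)
    print(len(cascade(G, seed_node=0)), "of", G.number_of_nodes(), "nodes failed")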
Source: https://en.wikipedia.org/wiki/Cascading_failure
In writing and typography, a ligature occurs where two or more graphemes or letters are joined to form a single glyph. Examples are the characters ⟨æ⟩ and ⟨œ⟩ used in English and French, in which the letters ⟨a⟩ and ⟨e⟩ are joined for the first ligature and the letters ⟨o⟩ and ⟨e⟩ are joined for the second ligature. For stylistic and legibility reasons, ⟨f⟩ and ⟨i⟩ are often merged to create ⟨fi⟩ (where the tittle on the ⟨i⟩ merges with the hood of the ⟨f⟩); the same is true of ⟨s⟩ and ⟨t⟩ to create ⟨st⟩. The common ampersand, ⟨&⟩, developed from a ligature in which the handwritten Latin letters ⟨e⟩ and ⟨t⟩ (spelling et, Latin for 'and') were combined.[1]
The earliest known scripts, Sumerian cuneiform and Egyptian hieratic, both include many cases of character combinations that gradually evolve from ligatures into separately recognizable characters. Other notable ligatures, such as the Brahmic abugidas and the Germanic bind rune, figure prominently throughout ancient manuscripts. These new glyphs emerge alongside the proliferation of writing with a stylus, whether on paper or clay, and often for a practical reason: faster handwriting. Merchants especially needed a way to speed up the process of written communication and found that conjoining letters and abbreviating words for lay use was more convenient for record keeping and transaction than the bulky long forms.[citation needed]
Around the 9th and 10th centuries, monasteries became a fountainhead for these types of script modifications. Medieval scribes who wrote in Latin increased their writing speed by combining characters and by introducing notational abbreviations. Others conjoined letters for aesthetic purposes. For example, in blackletter, letters with right-facing bowls (⟨b⟩, ⟨o⟩, and ⟨p⟩) and those with left-facing bowls (⟨c⟩, ⟨e⟩, ⟨o⟩, ⟨d⟩, ⟨g⟩ and ⟨q⟩) were written with the facing edges of the bowls superimposed. In many script forms, characters such as ⟨h⟩, ⟨m⟩, and ⟨n⟩ had their vertical strokes superimposed.[citation needed] Scribes also used notational abbreviations to avoid having to write a whole character in one stroke. Manuscripts in the fourteenth century employed hundreds of such abbreviations.[citation needed]
In handwriting, a ligature is made by joining two or more characters in an atypical fashion by merging their parts, or by writing one above or inside the other. In printing, a ligature is a group of characters that is typeset as a unit, so the characters do not have to be joined. For example, in some cases the ⟨fi⟩ ligature prints the letters ⟨f⟩ and ⟨i⟩ with a greater separation than when they are typeset as separate letters. When printing with movable type was invented around 1450,[4] typefaces included many ligatures and additional letters, as they were based on handwriting. Ligatures made printing with movable type easier because one sort would replace frequent combinations of letters and also allowed more complex and interesting character designs which would otherwise collide with one another.[citation needed]
Because of their complexity, ligatures began to fall out of use in the 20th century. Sans serif typefaces, increasingly used for body text, generally avoid ligatures, though notable exceptions include Gill Sans and Futura. Inexpensive phototypesetting machines in the 1970s (which did not require journeyman knowledge or training to operate) also generally avoid them. A few, however, became characters in their own right; see the sections below about the German ß, various Latin accented letters, ⟨&⟩, et al.
The trend against digraph use was further strengthened by the desktop publishing revolution. Early computer software in particular had no way to allow for ligature substitution (the automatic use of ligatures where appropriate), while most new digital typefaces did not include ligatures. As most of the early PC development was designed for the English language (which already treated ligatures as optional at best), dependence on ligatures did not carry over to digital. Ligature use fell as the number of traditional hand compositors and hot metal typesetting machine operators dropped because of the mass production of the IBM Selectric brand of electric typewriter in 1961. A designer active in the period commented: "some of the world's greatest typefaces were quickly becoming some of the world's worst fonts."[5]
Ligatures have grown in popularity in the 21st century because of an increasing interest in creating typesetting systems that evoke arcane designs and classical scripts. One of the first computer typesetting programs to take advantage of computer-driven typesetting (and later laser printers) was Donald Knuth's TeX program. Now the standard method of mathematical typesetting, its default fonts are explicitly based on nineteenth-century styles. Many new fonts feature extensive ligature sets; these include FF Scala, Seria and others by Martin Majoor, and Hoefler Text by Jonathan Hoefler. Mrs Eaves by Zuzana Licko contains a particularly large set to allow designers to create dramatic display text with a feel of antiquity.
A parallel use of ligatures is seen in the creation of script fonts that join letterforms to simulate handwriting effectively. This trend is caused in part by the increased support for other languages and alphabets in modern computing, many of which use ligatures somewhat extensively. This has caused the development of new digital typesetting techniques such as OpenType, and the incorporation of ligature support into the text display systems of macOS, Windows, and applications like Microsoft Office. An increasing modern trend is to use a "Th" ligature which reduces spacing between these letters to make it easier to read, a trait infrequent in metal type.[6][7][8]
Today, modern font programming divides ligatures into three groups, which can be activated separately: standard, contextual and historical. Standard ligatures are needed to allow the font to display without errors such as character collision. Designers sometimes find contextual and historic ligatures desirable for creating effects or to evoke an old-fashioned print look.[citation needed]
Many ligatures combine ⟨f⟩ with the following letter. A particularly prominent example is ⟨fi⟩ (or ⟨fi⟩, rendered with two normal letters). The tittle of the ⟨i⟩ in many typefaces collides with the hood of the ⟨f⟩ when placed beside each other in a word, and the two are combined into a single glyph with the tittle absorbed into the ⟨f⟩. Other ligatures with the letter f include ⟨fj⟩,[a] ⟨fl⟩ (fl), ⟨ff⟩ (ff), ⟨ffi⟩ (ffi), and ⟨ffl⟩ (ffl). In Linotype, ligature matrices for ⟨fa⟩, ⟨fe⟩, ⟨fo⟩, ⟨fr⟩, ⟨fs⟩, ⟨ft⟩, ⟨fb⟩, ⟨fh⟩, ⟨fu⟩, ⟨fy⟩, and for ⟨f⟩ followed by a full stop, comma, or hyphen are optional in many typefaces,[9] as well as the equivalent set for the doubled ⟨ff⟩, as a method to overcome the machine's physical restrictions.[citation needed]
These arose because with the usual type sort for lowercase ⟨f⟩, the end of its hood is on a kern, which would be damaged by collision with raised parts of the next letter.[citation needed]
Ligatures crossing the morpheme boundary of a composite word are sometimes considered incorrect, especially in official German orthography as outlined in the Duden. An English example of this would be ⟨ff⟩ in shelfful; a German example would be Schifffahrt ("boat trip").[b] Some computer programs (such as TeX) provide a setting to disable ligatures for German, while some users have also written macros to identify which ligatures to disable.[10][11]
Turkish distinguishes dotted and dotless "I". If a ligature with f were to be used in words such as fırın [oven] and fikir [idea], this contrast would be obscured. The ⟨fi⟩ ligature, at least in the form typical of other languages, is therefore not used in Turkish typography.[citation needed]
Remnants of the ligatures ⟨ſʒ⟩/⟨ſz⟩ ("sharp s", eszett) and ⟨tʒ⟩/⟨tz⟩ ("sharp t", tezett) from Fraktur, a family of German blackletter typefaces, originally mandatory in Fraktur but now employed only stylistically, can be seen to this day on street signs for city squares whose name contains Platz or ends in -platz. Instead, the "sz" ligature has merged into a single character, the German ß – see below.
Sometimes, ligatures for ⟨st⟩ (st), ⟨ſt⟩ (ſt), ⟨ch⟩, ⟨ck⟩, ⟨ct⟩, ⟨Qu⟩ and ⟨Th⟩ are used (e.g. in the typeface Linux Libertine).[citation needed]
Besides conventional ligatures, in the metal type era some newspapers commissioned custom condensed single sorts for common long names that might appear in news headings, such as "Eisenhower" or "Chamberlain". In these cases the characters did not appear combined, just more tightly spaced than if printed conventionally.[12]
The German letter ⟨ß⟩ (Eszett, also called the scharfes S, meaning sharp s) is an official letter of the alphabet in Germany and Austria. A recognizable ligature representing the ⟨sz⟩ digraph develops in handwriting in the early 14th century.[13] Its name Es-zett (meaning S-Z) suggests a connection of "long s and z" (ſʒ), but the Latin script also knows a ligature of "long s over round s" (ſs). Since German was mostly set in blackletter typefaces until the 1940s, and those typefaces were rarely set in uppercase, a capital version of the Eszett never came into common use, even though its creation has been discussed since the end of the 19th century. Therefore, the common replacement in uppercase typesetting was originally SZ (Maße "measure" → MASZE, different from Masse "mass" → MASSE) and later SS (Maße → MASSE). Until 2017, the SS replacement was the only valid spelling according to the official orthography in Germany and Austria. In Switzerland, the ß is omitted altogether in favour of ss. The capital version (ẞ) of the Eszett character was occasionally used since 1905/06, has been part of Unicode since 2008, and has appeared in more and more typefaces. Since the end of 2010, the Ständiger Ausschuss für geographische Namen (StAGN) has suggested the new upper case character for "ß" rather than replacing it with "SS" or "SZ" for geographical names.[14] A new standardized German keyboard layout (DIN 2137-T2) has included the capital ß since 2012. The new character entered the official orthographic rules in June 2017.[citation needed]
A prominent feature of the colonial orthography created by John Eliot (later used in the first Bible printed in the Americas, the Massachusett-language Mamusse Wunneetupanatamwe Up-Biblum God, published in 1663) was the use of the double-o ligature ⟨ꝏ⟩ to represent the /u/ of food as opposed to the /ʊ/ of hook (although Eliot himself used ⟨oo⟩ and ⟨ꝏ⟩ interchangeably).[clarification needed] In the orthography in use since 2000 in the Wampanoag communities participating in the Wôpanâak Language Reclamation Project (WLRP), the ligature was replaced with the numeral ⟨8⟩, partly because of its ease in typesetting and display as well as its similarity to the o-u ligature ⟨Ȣ⟩ used in Abenaki. For example, compare the colonial-era spelling seepꝏash[15] with the modern Wôpanâak Language Reclamation Project (WLRP) spelling seep8ash.[16]
As the letter ⟨W⟩ is an addition to the Latin alphabet that originated in the seventh century, the phoneme it represents was formerly written in various ways. In Old English, the runic letter wynn ⟨Ƿ⟩ was used, but Norman influence forced wynn out of use. By the 14th century, the "new" letter ⟨W⟩, which originated as two ⟨V⟩ glyphs or ⟨U⟩ glyphs joined, developed into a legitimate letter with its own position in the alphabet. Because of its relative youth compared to other letters of the alphabet, only a few European languages (English, Dutch, German, Polish, Welsh, Maltese, and Walloon) use the letter in native words.[citation needed]
The character ⟨Æ⟩ (lower case ⟨æ⟩; in ancient times named æsc) when used in Danish, Norwegian, Icelandic, or Old English is not a typographic ligature. It is a distinct letter — a vowel — and when collated, may be given a different place in the alphabetical order than Ae.[citation needed]
In modern English orthography, ⟨Æ⟩ is not considered an independent letter but a spelling variant, for example: "encyclopædia" versus "encyclopaedia" or "encyclopedia". In this use, ⟨Æ⟩ comes from Medieval Latin, where it was an optional ligature in some specific words that had been transliterated and borrowed from Ancient Greek, for example, "Æneas". It is still found as a variant in English and French words descended or borrowed from Medieval Latin, but the trend has recently been towards printing the ⟨A⟩ and ⟨E⟩ separately.[17]
Similarly, ⟨Œ⟩ and ⟨œ⟩, while normally printed as ligatures in French, are replaced by component letters if technical restrictions require it.[citation needed]
In German orthography, the umlauted vowels ⟨ä⟩, ⟨ö⟩, and ⟨ü⟩ historically arose from ⟨ae⟩, ⟨oe⟩, ⟨ue⟩ ligatures (strictly, from these vowels with a small letter ⟨e⟩ written as a diacritic, for example ⟨aͤ⟩, ⟨oͤ⟩, ⟨uͤ⟩). It is common practice to replace them with ⟨ae⟩, ⟨oe⟩, ⟨ue⟩ digraphs when the diacritics are unavailable, for example in electronic conversation. Phone books treat umlauted vowels as equivalent to the relevant digraph (so that a name Müller will appear at the same place as if it were spelled Mueller; German surnames have a strongly fixed orthography, either a name is spelled with ⟨ü⟩ or with ⟨ue⟩); however, the alphabetic order used in other books treats them as equivalent to the simple letters ⟨a⟩, ⟨o⟩ and ⟨u⟩. The convention in Scandinavian languages and Finnish is different: there the umlaut vowels are treated as independent letters with positions at the end of the alphabet.[citation needed]
In Middle English, the word the (written þe) was frequently abbreviated as ⟨þͤ⟩, a ⟨þ⟩ (thorn) with a small ⟨e⟩ written as a diacritic. Similarly, the word that was abbreviated to ⟨þͭ⟩, a ⟨þ⟩ with a small ⟨t⟩ written as a diacritic. During the latter Middle English and Early Modern English periods, the thorn in its common script, or cursive, form came to resemble a ⟨y⟩ shape. With the arrival of movable type printing, the substitution of ⟨y⟩ for ⟨Þ⟩ became ubiquitous, leading to the common "ye", as in 'Ye Olde Curiositie Shoppe'. One major reason for this was that ⟨y⟩ existed in the printer's types that William Caxton and his contemporaries imported from Belgium and the Netherlands, while ⟨Þ⟩ did not.[18]
The ring diacritic used in vowels such as ⟨å⟩ likewise originated as an ⟨o⟩-ligature.[19] Before the replacement of the older "aa" with "å" became a de facto practice, an "a" with another "a" on top (aͣ) could sometimes be used, for example in Johannes Bureus's Runa: ABC-Boken (1611).[20] The ⟨uo⟩ ligature ů in particular saw use in Early Modern High German, but it merged in later Germanic languages with ⟨u⟩ (e.g. MHG fuosz, ENHG fuͦß, Modern German Fuß "foot"). It survives in Czech, where it is called kroužek.
The letter hwair (ƕ), used only in transliteration of the Gothic language, resembles a ⟨hw⟩ ligature. It was introduced by philologists around 1900 to replace the digraph ⟨hv⟩ formerly used to express the phoneme in question, e.g. by Migne in the 1860s (Patrologia Latina vol. 18).
The Byzantines had a unique o-u ligature ⟨Ȣ⟩ that, while originally based on the Greek alphabet's ο-υ, carried over into Latin alphabets as well. This ligature is still seen today on icon artwork in Greek Orthodox churches, and sometimes in graffiti or other forms of informal or decorative writing.[citation needed]
Gha ⟨ƣ⟩, a rarely used letter based on Q and G, was misconstrued by the ISO to be an OI ligature because of its appearance, and is thus known (to the ISO and, in turn, Unicode) as "Oi". Historically, it was used in many Latin-based orthographies of Turkic (e.g., Azerbaijani) and other central Asian languages.[citation needed]
The International Phonetic Alphabet formerly used ligatures to represent affricate consonants, of which six are encoded in Unicode: ʣ, ʤ, ʥ, ʦ, ʧ and ʨ. One fricative consonant is still represented with a ligature: ɮ, and the extensions to the IPA contain three more: ʩ, ʪ and ʫ.[citation needed]
The Initial Teaching Alphabet, a short-lived alphabet intended for young children, used a number of ligatures to represent long vowels: ⟨ꜷ⟩, ⟨æ⟩, ⟨œ⟩, ⟨ᵫ⟩, ⟨ꭡ⟩, and ligatures for ⟨ee⟩, ⟨ou⟩ and ⟨oi⟩ that are not encoded in Unicode. Ligatures for consonants also existed, including ligatures of ⟨ʃh⟩, ⟨ʈh⟩, ⟨wh⟩, ⟨ʗh⟩, ⟨ng⟩ and a reversed ⟨t⟩ with ⟨h⟩ (neither the reversed t nor any of the consonant ligatures are in Unicode).[citation needed]
Rarer ligatures also exist, including ⟨ꜳ⟩; ⟨ꜵ⟩; ⟨ꜷ⟩; ⟨ꜹ⟩; ⟨ꜻ⟩ (barred ⟨av⟩); ⟨ꜽ⟩; ⟨ꝏ⟩, which is used in medieval Nordic languages for /oː/ (a long close-mid back rounded vowel),[21] as well as in some orthographies of the Massachusett language to represent uː (a long close back rounded vowel); ᵺ; ỻ, which was used in Medieval Welsh to represent ɬ (the voiceless lateral fricative);[21] ꜩ; ᴂ; ᴔ; and ꭣ. These have Unicode codepoints (in code block Latin Extended-E for characters used in German dialectology (Teuthonista),[22] the Anthropos alphabet, Sakha and Americanist usage).[citation needed]
The most common ligature in modern usage is the ampersand ⟨&⟩. This was originally a ligature of ⟨E⟩ and ⟨t⟩, forming the Latin et, meaning and. It has exactly the same use in French and in English. The ampersand comes in many different forms. Because of its ubiquity, it is generally no longer considered a ligature, but a logogram. Like many other ligatures, it has at times been considered a letter (e.g., in early Modern English); in English it is pronounced and, not et, except in the case of &c, pronounced et cetera. In most typefaces, it does not immediately resemble the two letters used to form it, although certain typefaces use designs in the form of a ligature (examples include the original versions of Futura and Univers, Trebuchet MS, and Civilité, known in modern times as the italic of Garamond).[citation needed]
Similarly, the number sign ⟨#⟩ originated as a stylized abbreviation of the Roman term libra pondo, written as ℔.[23] Over time, the number sign was simplified to how it is seen today, with two horizontal strokes across two slash-like strokes.[24] Now a logogram, the symbol is used mainly to denote (in the US) numbers, and weight in pounds.[25] It has also been used popularly on push-button telephones and as the hashtag indicator.[26]
The at sign ⟨@⟩ is possibly a ligature, but there are many different theories about the origin. One theory says that the French word à (meaning at) was simplified by scribes who, instead of lifting the pen to write the grave accent, drew an arc around the ⟨a⟩. Another states that it is short for the Latin word for toward, ad, with the ⟨d⟩ being represented by the arc. Another says it is short for an abbreviation of the term each at, with the ⟨e⟩ encasing the ⟨a⟩.[27] Around the 18th century, it started being used in commerce to indicate price per unit, as "15 units @ $1".[28] After the popularization of email, this fairly unpopular character became widely known, used to tag specific users.[29] Lately, it has been used to de-gender nouns in Spanish with no agreed pronunciation.[30]
The dollar sign ⟨$⟩ possibly originated as a ligature (for "pesos", although there are other theories as well) but is now a logogram.[31] At least once, the United States dollar used a symbol resembling an overlapping U-S ligature, with the right vertical bar of the U intersecting through the middle of the S (US) to resemble the modern dollar sign.[32]
TheSpanish pesetawas sometimes abbreviated by a ligature⟨₧⟩(fromPts). The ligature⟨₣⟩(F-with-bar) was proposed in 1968 byÉdouard Balladur,Minister of Economy.[33]as a symbol forFrench francbut was never adopted and has never been officially used.[34]
Inastronomy, theplanetary symbolfor Mercury (☿) may be a ligature ofMercury'scaduceusand a cross (which was added in the 16th century to Christianize the pagan symbol),[35]though other sources disagree;[36]the symbol for Venus♀may be a ligature of the Greek letters⟨ϕ⟩(phi) and⟨κ⟩(kappa).[36]The symbol for Jupiter (♃) descends from a Greekzetawith ahorizontal stroke,⟨Ƶ⟩, as an abbreviation forZeus.[35][37]Saturn'sastronomical symbol(♄) has been traced back to the GreekOxyrhynchus Papyri, where it can be seen to be a Greekkappa-rhowith ahorizontal stroke, as an abbreviation forΚρονος(Cronus), the Greek name for the planet.[35]It later came to look like a lower-case Greeketa, with the cross added at the top in the 16th century to Christianize it.
The dwarf planetPlutois symbolized by a PL ligature,♇.
A different PL ligature,⅊, represents theproperty linein surveying.[citation needed]
In engineering diagrams, a CL ligature,℄, represents the center line of an object.[citation needed]
Theinterrobang⟨‽⟩is an unconventional punctuation meant to combine the interrogation point (or thequestion mark) and the bang (printer's slang forexclamation mark) into one symbol, used to denote a sentence which is both a question and is exclaimed. For example, the sentence "Is that actually true‽" shows that the speaker is surprised while asking their question.[38]
Alchemyuseda set of mostly standardized symbols, many of which were ligatures: 🜇 (AR, foraqua regia); 🜈 (S inside a V, foraqua vitae); 🝫 (MB, forbalneum Mariae[Mary's bath], adouble boiler); 🝬 (VB, forbalneum vaporis, a steam bath); and 🝛 (aaawithoverline, foramalgam).[citation needed]
Composer Arnold Schoenberg introduced two ligatures as musical symbols to denote melody and countermelody. The symbols are ligatures of HT and NT, 𝆦 and 𝆧, from the German Hauptstimme and Nebenstimme respectively.[39][40]
Digraphs, such as⟨ll⟩inSpanishorWelsh, are not ligatures in the general case as the two letters are displayed as separate glyphs: although written together, when they are joined in handwriting oritalicfonts the base form of the letters is not changed and the individual glyphs remain separate. Like some ligatures discussed above, these digraphs may or may not be considered individual letters in their respective languages. Until the 1994 spelling reform, the digraphs⟨ch⟩and⟨ll⟩were considered separate letters in Spanish forcollationpurposes. Catalan makes a difference between "Spanish ll" or palatalized l, writtenllas inllei(law), and "French ll" or geminated l, writtenl·las incol·lega(colleague).[citation needed]
The difference can be illustrated with the French digraphœu, which is composed of the ligatureœand the simplex letteru.[citation needed]
InDutch,⟨ij⟩can be considered a digraph, a ligature, or a letter in itself, depending on the standard used. Its uppercase andlowercaseforms are often available as a single glyph with a distinctive ligature in several professional typefaces (e.g.Zapfino).Sans serifuppercase⟨IJ⟩glyphs, popular in theNetherlands, typically use a ligature resembling a⟨U⟩with a broken left-hand stroke. Adding to the confusion, Dutch handwriting can render⟨y⟩(which is not found in native Dutch words, but occurs in words borrowed from other languages) as a⟨ij⟩-glyph without the dots in its lowercase form and the⟨IJ⟩in its uppercase form looking virtually identical (only slightly bigger). When written as two separate letters, both should be capitalized – or both not – to form a correctly spelled word, likeIJsorijs(ice).[citation needed]
Ligatures are not limited to Latin script:
Written Chinesehas a long history of creating new characters by merging parts or wholes of otherChinese characters. However, a few of these combinations do not representmorphemesbut retain the original multi-character (multiple morpheme) reading and are therefore not considered true characters themselves. In Chinese, these ligatures are calledhéwén(合文) orhéshū(合書); seepolysyllabic Chinese charactersfor more.
One popular ligature used onchūntiēdecorations used forChinese Lunar New Yearis a combination of the four characters forzhāocái jìnbǎo(招財進寶), meaning "ushering in wealth and fortune" and used as a popular New Year's greeting.
In 1924,Du Dingyou(杜定友; 1898–1967) created the ligature圕from two of the three characters圖書館(túshūguǎn), meaning "library".[43]Although it does have an assigned pronunciation oftuānand appears in many dictionaries, it is not amorphemeand cannot be used as such in Chinese. Instead, it is usually considered a graphic representation oftúshūguǎn.
In recent years, a Chineseinternet meme, theGrass Mud Horse, has had such a ligature associated with it combining the three relevant Chinese characters草,泥, and马(Cǎonímǎ).
Similar to the ligatures were several "two-syllable Chinese characters" (雙音節漢字) created in the 19th century asChinese charactersforSI units. In Chinese these units are disyllabic and standardly written with two characters, as厘米límǐ"centimeter" (厘centi-,米meter) or千瓦qiānwǎ"kilowatt". However, in the 19th century these were often written via compound characters, pronounced disyllabically, such as瓩for千瓦or糎for厘米– some of these characters were also used in Japan, where they were pronounced with borrowed European readings instead. These have now fallen out of general use, but are occasionally seen.[44]
The CJK Compatibility Unicode block features characters that were combined into a single square character in legacy character sets so that they fit in with the fixed-width layout of Japanese text.
For example, the Japanese equivalent of "stock company",株式会社(kabushiki gaisha) can be represented in 1 Unicode character⟨㍿⟩.
Its romanized abbreviationK.K.can also be 1 character⟨㏍⟩.
There are other Latin abbreviations such askgfor "kilogram" that can be ligated into 1 square character⟨㎏⟩.
TheOpenTypefont format includes features for associating multipleglyphsto a single character, used for ligature substitution. Typesetting software may or may not implement this feature, even if it is explicitly present in the font's metadata.XeTeXis a TeX typesetting engine designed to make the most of such advanced features. This type of substitution used to be needed mainly for typesetting Arabic texts, but ligature lookups and substitutions are being put into all kinds of Western Latin OpenType fonts. In OpenType, there are standardliga, historicalhlig, contextualclig, discretionarydligand requiredrligligatures.
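As a rough illustration of how these OpenType features are surfaced programmatically, the following Python sketch uses the fontTools library to list the GSUB (glyph substitution) feature tags present in a font file. The file name is a placeholder, and the exact object path is assumed to follow fontTools' documented object model.

```python
from fontTools.ttLib import TTFont

# Path is a placeholder; point it at any OpenType font on your system.
font = TTFont("ExampleFont.otf")

if "GSUB" in font:
    # Each FeatureRecord carries a four-letter OpenType feature tag,
    # e.g. 'liga' (standard), 'dlig' (discretionary), 'hlig' (historical).
    tags = {rec.FeatureTag for rec in font["GSUB"].table.FeatureList.FeatureRecord}
    print("Substitution features:", sorted(tags))
else:
    print("Font has no GSUB table, so no ligature substitution is defined.")
```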
Opinion is divided over whether it is the job of writers or typesetters to decide where to use ligatures.TeXis an example of a computer typesetting system that makes use of ligatures automatically. TheComputer ModernRoman typeface provided with TeX includes the five common ligatures⟨ff⟩,⟨fi⟩,⟨fl⟩,⟨ffi⟩, and⟨ffl⟩. When TeX finds these combinations in a text, it substitutes the appropriate ligature, unless overridden by the typesetter.
CSS3provides control over these properties usingfont-feature-settings,[45]though the CSS Fonts Module Level 4 draft standard indicates that authors should prefer several other properties.[46]Those includefont-variant-ligatures,common-ligatures,discretionary-ligatures,historical-ligatures, andcontextual.[47]
The table below shows discrete letter pairs on the left, the corresponding Unicode ligature in the middle column, and the Unicode code point on the right. Provided you are using an operating system and browser that can handle Unicode, and have the correct Unicode fonts installed, some or all of these will display correctly. See also the provided graphic.
Unicodemaintains that ligaturing is a presentation issue rather than a character definition issue, and that, for example, "if a modern font is asked to display 'h' followed by 'r', and the font has an 'hr' ligature in it, it can display the ligature." Accordingly, the use of the special Unicode ligature characters is "discouraged", and "no more will be encoded in any circumstances".[48](Unicode has continued to add ligatures, but only in such cases that the ligatures were used as distinct letters in a language or could be interpreted as standalonesymbols. For example, ligatures such as æ and œ are not used to replace arbitrary "ae" or "oe" sequences; it is generally considered incorrect to write "does" as "dœs".)
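This distinction can be seen with Python's built-in unicodedata module: compatibility normalization (NFKC) folds presentation-only ligature code points such as U+FB01 back into their component letters, while a letter such as ⟨æ⟩ is left untouched. A minimal check:

```python
import unicodedata

print(unicodedata.normalize("NFKC", "ﬁnd"))           # 'find' – the fi ligature decomposes
print(unicodedata.normalize("NFKC", "encyclopædia"))  # unchanged – æ is a letter, not a ligature
print(unicodedata.name("ﬁ"))                          # 'LATIN SMALL LIGATURE FI'
```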
Microsoft Worddisables ligature substitution by default, largely forbackward compatibilitywhen editing documents created in earlier versions of Word. Users can enable automatic ligature substitution on the Advanced tab of the Font dialog box.
LibreOffice Writer enables standard ligature substitution by default for OpenType fonts. Users can enable or disable any ligature substitution in the Features dialog box, which is accessible via the Features button of the Character dialog box; alternatively, a syntax combining the font name and feature can be entered in the Font Name input box, for example: Noto Sans:liga=0.
There are separatecode pointsfor the digraphDZ, theDutchdigraphIJ, and for theSerbo-Croatian digraphsDŽ, LJ, and NJ. Although similar, these aredigraphs, not ligatures. SeeDigraphs in Unicode.
Four "ligature ornaments" are included from U+1F670 to U+1F673 in theOrnamental Dingbatsblock: regular and bold variants of ℯT (script e and T) and of ɛT (open E and T).
Typographic ligatures are used in a form ofcontemporary art,[58]as can be illustrated by Chinese artistXu Bing's work in which he combines Latin letters to form characters that resemble Chinese.[59]Croatian designer Maja Škripelj also created a ligature that combinedGlagolitic lettersⰘⰓ foreuro coins.[60]
|
https://en.wikipedia.org/wiki/Typographic_ligature
|
Innumerical analysis, aquasi-Newton methodis aniterative numerical methodused either tofind zeroesor tofind local maxima and minimaof functions via an iterativerecurrence formulamuch like the one forNewton's method, except using approximations of thederivativesof the functions in place of exact derivatives. Newton's method requires theJacobian matrixof allpartial derivativesof a multivariate function when used to search for zeros or theHessian matrixwhen usedfor finding extrema. Quasi-Newton methods, on the other hand, can be used when the Jacobian matrices or Hessian matrices are unavailable or are impractical to compute at every iteration.
Someiterative methodsthat reduce to Newton's method, such assequential quadratic programming, may also be considered quasi-Newton methods.
Newton's method to find zeroes of a function $g$ of multiple variables is given by $x_{n+1} = x_n - [J_g(x_n)]^{-1} g(x_n)$, where $[J_g(x_n)]^{-1}$ is the left inverse of the Jacobian matrix $J_g(x_n)$ of $g$ evaluated for $x_n$.
Strictly speaking, any method that replaces the exact Jacobian $J_g(x_n)$ with an approximation is a quasi-Newton method.[1] For instance, the chord method (where $J_g(x_n)$ is replaced by $J_g(x_0)$ for all iterations) is a simple example. The methods given below for optimization refer to an important subclass of quasi-Newton methods, secant methods.[2]
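To make the distinction concrete, here is a minimal NumPy sketch (the system of equations and the starting point are chosen purely for illustration) of Newton's method next to the chord method, which freezes the Jacobian at the initial iterate:

```python
import numpy as np

def g(x):                      # example system: unit circle intersected with the line x0 = x1
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

def jacobian(x):
    return np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

def solve(x0, fixed_jacobian=False, steps=20):
    x = np.asarray(x0, dtype=float)
    J0 = jacobian(x)                           # the chord method reuses this matrix
    for _ in range(steps):
        J = J0 if fixed_jacobian else jacobian(x)
        x = x - np.linalg.solve(J, g(x))       # Newton / quasi-Newton step
    return x

print(solve([1.0, 0.5]))                       # Newton: fast (quadratic) convergence
print(solve([1.0, 0.5], fixed_jacobian=True))  # chord: slower, but no Jacobian re-evaluation
```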
Using methods developed to find extrema in order to find zeroes is not always a good idea, as the majority of the methods used to find extrema require that the matrix that is used is symmetrical. While this holds in the context of the search for extrema, it rarely holds when searching for zeroes.Broyden's "good" and "bad" methodsare two methods commonly used to find extrema that can also be applied to find zeroes. Other methods that can be used are thecolumn-updating method, theinverse column-updating method, the quasi-Newton least squares method and the quasi-Newton inverse least squares method.
More recently quasi-Newton methods have been applied to find the solution of multiple coupled systems of equations (e.g. fluid–structure interaction problems or interaction problems in physics). They allow the solution to be found by solving each constituent system separately (which is simpler than the global system) in a cyclic, iterative fashion until the solution of the global system is found.[2][3]
The search for a minimum or maximum of a scalar-valued function is closely related to the search for the zeroes of the gradient of that function. Therefore, quasi-Newton methods can be readily applied to find extrema of a function. In other words, if $g$ is the gradient of $f$, then searching for the zeroes of the vector-valued function $g$ corresponds to the search for the extrema of the scalar-valued function $f$; the Jacobian of $g$ now becomes the Hessian of $f$. The main difference is that the Hessian matrix is a symmetric matrix, unlike the Jacobian when searching for zeroes. Most quasi-Newton methods used in optimization exploit this symmetry.
Inoptimization,quasi-Newton methods(a special case ofvariable-metric methods) are algorithms for finding localmaxima and minimaoffunctions. Quasi-Newton methods for optimization are based onNewton's methodto find thestationary pointsof a function, points where the gradient is 0. Newton's method assumes that the function can be locally approximated as aquadraticin the region around the optimum, and uses the first and second derivatives to find the stationary point. In higher dimensions, Newton's method uses the gradient and theHessian matrixof secondderivativesof the function to be minimized.
In quasi-Newton methods the Hessian matrix does not need to be computed. The Hessian is updated by analyzing successive gradient vectors instead. Quasi-Newton methods are a generalization of thesecant methodto find the root of the first derivative for multidimensional problems. In multiple dimensions the secant equation isunder-determined, and quasi-Newton methods differ in how they constrain the solution, typically by adding a simple low-rank update to the current estimate of the Hessian.
The first quasi-Newton algorithm was proposed byWilliam C. Davidon, a physicist working atArgonne National Laboratory. He developed the first quasi-Newton algorithm in 1959: theDFP updating formula, which was later popularized by Fletcher and Powell in 1963, but is rarely used today. The most common quasi-Newton algorithms are currently theSR1 formula(for "symmetric rank-one"), theBHHHmethod, the widespreadBFGS method(suggested independently by Broyden, Fletcher, Goldfarb, and Shanno, in 1970), and its low-memory extensionL-BFGS. The Broyden's class is a linear combination of the DFP and BFGS methods.
The SR1 formula does not guarantee the update matrix to maintainpositive-definitenessand can be used for indefinite problems. TheBroyden's methoddoes not require the update matrix to be symmetric and is used to find the root of a general system of equations (rather than the gradient) by updating theJacobian(rather than the Hessian).
One of the chief advantages of quasi-Newton methods over Newton's method is that the Hessian matrix (or, in the case of quasi-Newton methods, its approximation) $B$ does not need to be inverted. Newton's method, and its derivatives such as interior point methods, require the Hessian to be inverted, which is typically implemented by solving a system of linear equations and is often quite costly. In contrast, quasi-Newton methods usually generate an estimate of $B^{-1}$ directly.
As in Newton's method, one uses a second-order approximation to find the minimum of a function $f(x)$. The Taylor series of $f(x)$ around an iterate is

$f(x_k + \Delta x) \approx f(x_k) + \nabla f(x_k)^{\mathrm T} \Delta x + \tfrac{1}{2} \Delta x^{\mathrm T} B \, \Delta x,$

where ($\nabla f$) is the gradient, and $B$ an approximation to the Hessian matrix.[4] The gradient of this approximation (with respect to $\Delta x$) is

$\nabla f(x_k + \Delta x) \approx \nabla f(x_k) + B \, \Delta x,$

and setting this gradient to zero (which is the goal of optimization) provides the Newton step:

$\Delta x = -B^{-1} \nabla f(x_k).$

The Hessian approximation $B$ is chosen to satisfy

$\nabla f(x_k + \Delta x) = \nabla f(x_k) + B \, \Delta x,$
which is called the secant equation (the Taylor series of the gradient itself). In more than one dimension $B$ is underdetermined. In one dimension, solving for $B$ and applying the Newton's step with the updated value is equivalent to the secant method. The various quasi-Newton methods differ in their choice of the solution to the secant equation (in one dimension, all the variants are equivalent). Most methods (but with exceptions, such as Broyden's method) seek a symmetric solution ($B^{\mathrm T} = B$); furthermore, the variants listed below can be motivated by finding an update $B_{k+1}$ that is as close as possible to $B_k$ in some norm; that is, $B_{k+1} = \operatorname{argmin}_B \|B - B_k\|_V$, where $V$ is some positive-definite matrix that defines the norm. An approximate initial value $B_0 = \beta I$ is often sufficient to achieve rapid convergence, although there is no general strategy to choose $\beta$.[5] Note that $B_0$ should be positive-definite. The unknown $x_k$ is updated applying the Newton's step calculated using the current approximate Hessian matrix $B_k$: the step is $\Delta x_k = -\alpha_k B_k^{-1} \nabla f(x_k)$, with the step length $\alpha_k$ chosen to satisfy suitable line-search conditions (such as the Wolfe conditions), and the new iterate is $x_{k+1} = x_k + \Delta x_k$.
The gradient difference $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$, together with the step $\Delta x_k$, is used to update the approximate Hessian $B_{k+1}$, or directly its inverse $H_{k+1} = B_{k+1}^{-1}$ using the Sherman–Morrison formula.
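Putting the pieces above together, the following is a minimal sketch of the iteration using the BFGS inverse-Hessian update (one of the most popular choices). The objective, starting point, and simple Armijo backtracking are illustrative assumptions; a production implementation would use a line search enforcing the Wolfe conditions.

```python
import numpy as np

def f(x):            # illustrative convex objective with minimizer (1, -2)
    return (x[0] - 1.0)**2 + 10.0 * (x[1] + 2.0)**2

def grad(x):
    return np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])

def backtracking(x, p, alpha=1.0, beta=0.5, c=1e-4):
    # crude Armijo backtracking; real implementations enforce the Wolfe conditions
    fx, gx = f(x), grad(x)
    while f(x + alpha * p) > fx + c * alpha * (gx @ p):
        alpha *= beta
    return alpha

x = np.array([5.0, 5.0])
H = np.eye(2)                          # H_0 = I approximates the inverse Hessian
for _ in range(30):
    g_k = grad(x)
    if np.linalg.norm(g_k) < 1e-8:
        break
    p = -H @ g_k                       # quasi-Newton search direction
    alpha = backtracking(x, p)
    s = alpha * p                      # s_k = x_{k+1} - x_k
    y = grad(x + s) - g_k              # y_k = grad f(x_{k+1}) - grad f(x_k)
    x = x + s
    rho = 1.0 / (y @ s)
    I = np.eye(2)
    # BFGS inverse-Hessian update; H_{k+1} satisfies the secant equation H_{k+1} y_k = s_k
    H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)

print(x)    # converges to approximately [1, -2]
```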
The most popular update formulas are:
Other methods are Pearson's method, McCormick's method, the Powell symmetric Broyden (PSB) method and Greenstadt's method.[2] These recursive low-rank matrix updates can also be represented as an initial matrix plus a low-rank correction. This is the compact quasi-Newton representation, which is particularly effective for constrained and/or large problems.
When $f$ is a convex quadratic function with positive-definite Hessian $B$, one would expect the matrices $H_k$ generated by a quasi-Newton method to converge to the inverse Hessian $H = B^{-1}$. This is indeed the case for the class of quasi-Newton methods based on least-change updates.[6]
Implementations of quasi-Newton methods are available in many programming languages.
Notable open source implementations include:
Notable proprietary implementations include:
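For instance, SciPy's open-source optimize module exposes both BFGS and its limited-memory variant; a typical call looks like the following (the Rosenbrock test function and its gradient ship with SciPy):

```python
from scipy.optimize import minimize, rosen, rosen_der

x0 = [1.3, 0.7, 0.8, 1.9, 1.2]

# Quasi-Newton (BFGS): builds an inverse-Hessian estimate from gradients only.
res = minimize(rosen, x0, method="BFGS", jac=rosen_der)
print(res.x)            # close to the known minimizer (1, 1, 1, 1, 1)

# Limited-memory variant, suited to large problems.
res = minimize(rosen, x0, method="L-BFGS-B", jac=rosen_der)
print(res.nit, res.fun)
```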
|
https://en.wikipedia.org/wiki/Quasi-Newton_methods
|
Innumber theory, anintegerqis aquadratic residuemodulonif it iscongruentto aperfect squaremodulon; that is, if there exists an integerxsuch that
Otherwise,qis aquadratic nonresiduemodulon.
Quadratic residues are used in applications ranging fromacoustical engineeringtocryptographyand thefactoring of large numbers.
Fermat,Euler,Lagrange,Legendre, and other number theorists of the 17th and 18th centuries established theorems[1]and formed conjectures[2]about quadratic residues, but the first systematic treatment is § IV ofGauss'sDisquisitiones Arithmeticae(1801). Article 95 introduces the terminology "quadratic residue" and "quadratic nonresidue", and says that if the context makes it clear, the adjective "quadratic" may be dropped.
For a given n, a list of the quadratic residues modulo n may be obtained by simply squaring all the numbers 0, 1, ..., n − 1. Since a ≡ b (mod n) implies a² ≡ b² (mod n), any other quadratic residue is congruent (mod n) to some in the obtained list. But the obtained list is not composed of mutually incongruent quadratic residues (mod n) only. Since a² ≡ (n − a)² (mod n), the list obtained by squaring all numbers in the list 1, 2, ..., n − 1 (or in the list 0, 1, ..., n) is symmetric (mod n) around its midpoint, hence it is actually only needed to square all the numbers in the list 0, 1, ..., ⌊n/2⌋. The list so obtained may still contain mutually congruent numbers (mod n). Thus, the number of mutually noncongruent quadratic residues modulo n cannot exceed n/2 + 1 (n even) or (n + 1)/2 (n odd).[3]
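The procedure just described translates directly into a few lines of Python (a quick sketch; the residue 0 is included, matching the convention used above):

```python
def quadratic_residues(n):
    # Squaring 0, 1, ..., floor(n/2) suffices, by the symmetry a^2 = (n - a)^2 (mod n).
    return sorted({(a * a) % n for a in range(n // 2 + 1)})

print(quadratic_residues(10))   # [0, 1, 4, 5, 6, 9]
print(quadratic_residues(11))   # [0, 1, 3, 4, 5, 9]
```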
The product of two residues is always a residue.
Modulo 2, every integer is a quadratic residue.
Modulo an oddprime numberpthere are (p+ 1)/2 residues (including 0) and (p− 1)/2 nonresidues, byEuler's criterion. In this case, it is customary to consider 0 as a special case and work within themultiplicative group of nonzero elementsof thefield(Z/pZ){\displaystyle (\mathbb {Z} /p\mathbb {Z} )}. In other words, every congruence class except zero modulophas a multiplicative inverse. This is not true for composite moduli.[4]
Following this convention, the multiplicative inverse of a residue is a residue, and the inverse of a nonresidue is a nonresidue.[5]
Following this convention, modulo an odd prime number there is an equal number of residues and nonresidues.[4]
Modulo a prime, the product of two nonresidues is a residue and the product of a nonresidue and a (nonzero) residue is a nonresidue.[5]
The first supplement[6]to thelaw of quadratic reciprocityis that ifp≡ 1 (mod 4) then −1 is a quadratic residue modulop, and ifp≡ 3 (mod 4) then −1 is a nonresidue modulop. This implies the following:
Ifp≡ 1 (mod 4) the negative of a residue modulopis a residue and the negative of a nonresidue is a nonresidue.
Ifp≡ 3 (mod 4) the negative of a residue modulopis a nonresidue and the negative of a nonresidue is a residue.
All odd squares are ≡ 1 (mod 8) and thus also ≡ 1 (mod 4). Ifais an odd number andm= 8, 16, or some higher power of 2, thenais a residue modulomif and only ifa≡ 1 (mod 8).[7]
For example, mod (32) the odd squares are
and the even ones are
So a nonzero number is a residue mod 8, 16, etc., if and only if it is of the form 4k(8n+ 1).
A numberarelatively prime to an odd primepis a residue modulo any power ofpif and only if it is a residue modulop.[8]
If the modulus ispn,
Notice that the rules are different for powers of two and powers of odd primes.
Modulo an odd prime powern=pk, the products of residues and nonresidues relatively prime topobey the same rules as they do modp;pis a nonresidue, and in general all the residues and nonresidues obey the same rules, except that the products will be zero if the power ofpin the product ≥n.
Modulo 8, the product of the nonresidues 3 and 5 is the nonresidue 7, and likewise for permutations of 3, 5 and 7. In fact, the multiplicative group of the non-residues and 1 form theKlein four-group.
The basic fact in this case is
Modulo a composite number, the product of two residues is a residue. The product of a residue and a nonresidue may be a residue, a nonresidue, or zero.
For example, from the table for modulus 6, the residues among 1, 2, 3, 4, 5 are 1, 3, and 4.
The product of the residue 3 and the nonresidue 5 is the residue 3, whereas the product of the residue 4 and the nonresidue 2 is the nonresidue 2.
Also, the product of two nonresidues may be either a residue, a nonresidue, or zero.
For example, from the table for modulus 15, the residues among 1, 2, ..., 14 are 1, 4, 6, 9, and 10.
The product of the nonresidues 2 and 8 is the residue 1, whereas the product of the nonresidues 2 and 7 is the nonresidue 14.
This phenomenon can best be described using the vocabulary of abstract algebra. The congruence classes relatively prime to the modulus are agroupunder multiplication, called thegroup of unitsof thering(Z/nZ){\displaystyle (\mathbb {Z} /n\mathbb {Z} )}, and the squares are asubgroupof it. Different nonresidues may belong to differentcosets, and there is no simple rule that predicts which one their product will be in. Modulo a prime, there is only the subgroup of squares and a single coset.
The fact that, e.g., modulo 15 the product of the nonresidues 3 and 5, or of the nonresidue 5 and the residue 9, or the two residues 9 and 10 are all zero comes from working in the full ring(Z/nZ){\displaystyle (\mathbb {Z} /n\mathbb {Z} )}, which haszero divisorsfor compositen.
For this reason some authors[10]add to the definition that a quadratic residueamust not only be a square but must also berelatively primeto the modulusn. (ais coprime tonif and only ifa2is coprime ton.)
Although it makes things tidier, this article does not insist that residues must be coprime to the modulus.
Gauss[11]usedRandNto denote residuosity and non-residuosity, respectively;
Although this notation is compact and convenient for some purposes,[12][13]a more useful notation is theLegendre symbol, also called thequadratic character, which is defined for all integersaand positive oddprime numberspas
There are two reasons why numbers ≡ 0 (mod p) are treated specially. As we have seen, it makes many formulas and theorems easier to state. The other (related) reason is that the quadratic character is a homomorphism from the multiplicative group of nonzero congruence classes modulo p to the complex numbers under multiplication. Setting $(\tfrac{np}{p}) = 0$ allows its domain to be extended to the multiplicative semigroup of all the integers.[14]
One advantage of this notation over Gauss's is that the Legendre symbol is a function that can be used in formulas.[15]It can also easily be generalized tocubic, quartic and higher power residues.[16]
There is a generalization of the Legendre symbol for composite values of p, the Jacobi symbol, but its properties are not as simple: if m is composite and the Jacobi symbol $(\tfrac{a}{m}) = -1$, then a N m, and if a R m then $(\tfrac{a}{m}) = 1$, but if $(\tfrac{a}{m}) = 1$ we do not know whether a R m or a N m. For example: $(\tfrac{2}{15}) = 1$ and $(\tfrac{4}{15}) = 1$, but 2 N 15 and 4 R 15. If m is prime, the Jacobi and Legendre symbols agree.
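A short Python sketch makes the caveat concrete: Euler's criterion gives the Legendre symbol for a prime modulus, while the Jacobi symbol for a composite modulus can equal 1 even when the number is a nonresidue, as in the (2/15) example above. SymPy's jacobi_symbol is used here and assumed to be available; the brute-force check works for any modulus.

```python
def legendre(a, p):
    # Euler's criterion: a^((p-1)/2) is 1, p-1, or 0 modulo an odd prime p.
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def is_residue(a, n):
    # Brute-force check, valid for any modulus n.
    return any((x * x) % n == a % n for x in range(n))

print(legendre(2, 7), is_residue(2, 7))   # 1 True   (2 = 3^2 mod 7)
print(legendre(3, 7), is_residue(3, 7))   # -1 False

# The Jacobi symbol (2/15) is +1, yet 2 is not a square modulo 15.
try:
    from sympy import jacobi_symbol
    print(jacobi_symbol(2, 15), is_residue(2, 15))   # 1 False
except ImportError:
    pass
```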
Although quadratic residues appear to occur in a rather random pattern modulon, and this has been exploited in suchapplicationsasacousticsandcryptography, their distribution also exhibits some striking regularities.
UsingDirichlet's theoremon primes inarithmetic progressions, thelaw of quadratic reciprocity, and theChinese remainder theorem(CRT) it is easy to see that for anyM> 0 there are primespsuch that the numbers 1, 2, ...,Mare all residues modulop.
For example, if p ≡ 1 (mod 8), (mod 12), (mod 5) and (mod 28), then by the law of quadratic reciprocity 2, 3, 5, and 7 will all be residues modulo p, and thus all numbers 1–10 will be. The CRT says that this is the same as p ≡ 1 (mod 840), and Dirichlet's theorem says there are an infinite number of primes of this form. 2521 is the smallest, and indeed 1² ≡ 1, 1046² ≡ 2, 123² ≡ 3, 2² ≡ 4, 643² ≡ 5, 87² ≡ 6, 668² ≡ 7, 429² ≡ 8, 3² ≡ 9, and 529² ≡ 10 (mod 2521).
The first of these regularities stems fromPeter Gustav Lejeune Dirichlet's work (in the 1830s) on theanalytic formulafor theclass numberof binaryquadratic forms.[17]Letqbe a prime number,sa complex variable, and define aDirichlet L-functionas
Dirichlet showed that ifq≡ 3 (mod 4), then
Therefore, in this case (primeq≡ 3 (mod 4)), the sum of the quadratic residues minus the sum of the nonresidues in the range 1, 2, ...,q− 1 is a negative number.
For example, modulo 11,
In fact the difference will always be an odd multiple of q if q > 3.[18] In contrast, for prime q ≡ 1 (mod 4), the sum of the quadratic residues minus the sum of the nonresidues in the range 1, 2, ..., q − 1 is zero, implying that both sums equal $\frac{q(q-1)}{4}$.[citation needed]
Dirichlet also proved that for primeq≡ 3 (mod 4),
This implies that there are more quadratic residues than nonresidues among the numbers 1, 2, ..., (q− 1)/2.
For example, modulo 11 there are four residues less than 6 (namely 1, 3, 4, and 5), but only one nonresidue (2).
An intriguing fact about these two theorems is that all known proofs rely on analysis; no-one has ever published a simple or direct proof of either statement.[19]
Ifpandqare odd primes, then:
((pis a quadratic residue modq) if and only if (qis a quadratic residue modp)) if and only if (at least one ofpandqis congruent to 1 mod 4).
That is:
where $(\tfrac{p}{q})$ is the Legendre symbol.
Thus, for numbersaand odd primespthat don't dividea:
Modulo a primep, the number of pairsn,n+ 1 wherenRpandn+ 1 Rp, ornNpandn+ 1 Rp, etc., are almost equal. More precisely,[20][21]letpbe an odd prime. Fori,j= 0, 1 define the sets
and let
That is,
Then ifp≡ 1 (mod 4)
and ifp≡ 3 (mod 4)
For example: (residues inbold)
Modulo 17
Modulo 19
Gauss (1828)[22]introduced this sort of counting when he proved that ifp≡ 1 (mod 4) thenx4≡ 2 (modp) can be solved if and only ifp=a2+ 64b2.
The values of $(\tfrac{a}{p})$ for consecutive values of a mimic a random variable like a coin flip.[23] Specifically, Pólya and Vinogradov proved[24] (independently) in 1918 that for any nonprincipal Dirichlet character χ(n) modulo q and any integers M and N,
inbig O notation. Setting
this shows that the number of quadratic residues moduloqin any interval of lengthNis
It is easy[25]to prove that
In fact,[26]
MontgomeryandVaughanimproved this in 1977, showing that, if thegeneralized Riemann hypothesisis true then
This result cannot be substantially improved, forSchurhad proved in 1918 that
andPaleyhad proved in 1932 that
for infinitely manyd> 0.
The least quadratic residue modpis clearly 1. The question of the magnitude of the least quadratic non-residuen(p) is more subtle, but it is always prime, with 7 appearing for the first time at 71.
The Pólya–Vinogradov inequality above gives O(√p log p).
The best unconditional estimate is n(p) ≪ p^θ for any θ > 1/(4√e), obtained by estimates of Burgess on character sums.[27]
Assuming the generalised Riemann hypothesis, Ankeny obtained n(p) ≪ (log p)².[28]
Linnikshowed that the number ofpless thanXsuch thatn(p) > Xεis bounded by a constant depending on ε.[27]
The least quadratic non-residues modpfor odd primespare:
Letpbe an odd prime. Thequadratic excessE(p) is the number of quadratic residues on the range (0,p/2) minus the number in the range (p/2,p) (sequenceA178153in theOEIS). Forpcongruent to 1 mod 4, the excess is zero, since −1 is a quadratic residue and the residues are symmetric underr↔p−r. Forpcongruent to 3 mod 4, the excessEis always positive.[29]
That is, given a numberaand a modulusn, how hard is it
An important difference between prime and composite moduli shows up here. Modulo a primep, a quadratic residueahas 1 + (a|p) roots (i.e. zero ifaNp, one ifa≡ 0 (modp), or two ifaRpand gcd(a,p) = 1.)
In general, if a composite modulus n is written as a product of powers of distinct primes, and there are n₁ roots modulo the first one, n₂ modulo the second, ..., there will be n₁n₂... roots modulo n.
The theoretical way solutions modulo the prime powers are combined to make solutions modulonis called theChinese remainder theorem; it can be implemented with an efficient algorithm.[30]
For example:
First off, if the modulus n is prime the Legendre symbol $(\tfrac{a}{n})$ can be quickly computed using a variation of Euclid's algorithm[31] or Euler's criterion. If it is −1 there is no solution.
Secondly, assuming that $(\tfrac{a}{n}) = 1$, if n ≡ 3 (mod 4), Lagrange found that the solutions are given by $x \equiv \pm a^{(n+1)/4} \pmod{n}$,
and Legendre found a similar solution[32] if n ≡ 5 (mod 8): $x \equiv \pm a^{(n+3)/8} \pmod{n}$ if $a^{(n-1)/4} \equiv 1 \pmod{n}$, and $x \equiv \pm 2a(4a)^{(n-5)/8} \pmod{n}$ if $a^{(n-1)/4} \equiv -1 \pmod{n}$.
For prime n ≡ 1 (mod 8), however, there is no known formula. Tonelli[33] (in 1891) and Cipolla[34] found efficient algorithms that work for all prime moduli. Both algorithms require finding a quadratic nonresidue modulo n, and there is no efficient deterministic algorithm known for doing that. But since half the numbers between 1 and n are nonresidues, picking numbers x at random and calculating the Legendre symbol $(\tfrac{x}{n})$ until a nonresidue is found will quickly produce one. A slight variant of this algorithm is the Tonelli–Shanks algorithm.
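The n ≡ 3 (mod 4) case is simple enough to show directly, and general prime moduli can be handled with SymPy's sqrt_mod, which implements the Tonelli–Shanks idea. This is a sketch; the specific numbers are arbitrary examples.

```python
def sqrt_mod_p3(a, p):
    """Square root of a modulo a prime p with p % 4 == 3, assuming a is a residue."""
    x = pow(a, (p + 1) // 4, p)
    return x if (x * x) % p == a % p else None   # None means a was a nonresidue

p = 23                      # 23 ≡ 3 (mod 4)
print(sqrt_mod_p3(13, 23))  # 6, since 6^2 = 36 ≡ 13 (mod 23)

# General case (including p ≡ 1 mod 8) via SymPy's Tonelli–Shanks-based routine.
from sympy.ntheory import sqrt_mod
print(sqrt_mod(10, 13))     # a root (6 or 7); both square to 10 (mod 13)
```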
If the modulusnis aprime powern=pe, a solution may be found modulopand "lifted" to a solution modulonusingHensel's lemmaor an algorithm of Gauss.[8]
If the modulusnhas been factored into prime powers the solution was discussed above.
If n is not congruent to 2 modulo 4 and the Kronecker symbol $(\tfrac{a}{n}) = -1$, then there is no solution; if n is congruent to 2 modulo 4 and $(\tfrac{a}{n/2}) = -1$, then there is also no solution. If n is not congruent to 2 modulo 4 and $(\tfrac{a}{n}) = 1$, or n is congruent to 2 modulo 4 and $(\tfrac{a}{n/2}) = 1$, there may or may not be one.
If the complete factorization of n is not known, and $(\tfrac{a}{n}) = 1$ and n is not congruent to 2 modulo 4, or n is congruent to 2 modulo 4 and $(\tfrac{a}{n/2}) = 1$, the problem is known to be equivalent to integer factorization of n (i.e. an efficient solution to either problem could be used to solve the other efficiently).
The above discussion indicates how knowing the factors ofnallows us to find the roots efficiently. Say there were an efficient algorithm for finding square roots modulo a composite number. The articlecongruence of squaresdiscusses how finding two numbers x and y wherex2≡y2(modn)andx≠ ±ysuffices to factorizenefficiently. Generate a random number, square it modulon, and have the efficient square root algorithm find a root. Repeat until it returns a number not equal to the one we originally squared (or its negative modulon), then follow the algorithm described in congruence of squares. The efficiency of the factoring algorithm depends on the exact characteristics of the root-finder (e.g. does it return all roots? just the smallest one? a random one?), but it will be efficient.[35]
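The reduction is short enough to sketch in Python. The square-root oracle here is hypothetical and is simulated by brute force purely so the sketch runs; an efficient version of it is exactly what is assumed not to exist.

```python
import math
import random

def oracle_sqrt(y, n):
    # Stand-in for the hypothetical efficient root-finder: brute force over all residues.
    roots = [x for x in range(n) if (x * x) % n == y]
    return random.choice(roots) if roots else None

def factor_with_oracle(n, tries=50):
    for _ in range(tries):
        r = random.randrange(2, n)
        x = oracle_sqrt((r * r) % n, n)
        if x is None or x in (r % n, (-r) % n):
            continue                      # got back +/- r; learn nothing, retry
        d = math.gcd(x - r, n)
        if 1 < d < n:
            return d                      # nontrivial factor found
    return None

print(factor_with_oracle(8051))           # e.g. 83 or 97 (8051 = 83 * 97)
```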
Determining whetherais a quadratic residue or nonresidue modulon(denotedaRnoraNn) can be done efficiently for primenby computing the Legendre symbol. However, for compositen, this forms thequadratic residuosity problem, which is not known to be ashardas factorization, but is assumed to be quite hard.
On the other hand, if we want to know if there is a solution forxless than some given limitc, this problem isNP-complete;[36]however, this is afixed-parameter tractableproblem, wherecis the parameter.
In general, to determine ifais a quadratic residue modulo compositen, one can use the following theorem:[37]
Letn> 1, andgcd(a,n) = 1. Thenx2≡a(modn)is solvable if and only if:
Note: This theorem essentially requires that the factorization ofnis known. Also notice that ifgcd(a,n) =m, then the congruence can be reduced toa/m≡x2/m(modn/m), but then this takes the problem away from quadratic residues (unlessmis a square).
The list of the number of quadratic residues modulon, forn= 1, 2, 3 ..., looks like:
A formula to count the number of squares modulonis given by Stangl.[38]
Sound diffusershave been based on number-theoretic concepts such asprimitive rootsand quadratic residues.[39]
Paley graphsare dense undirected graphs, one for each primep≡ 1 (mod 4), that form an infinite family ofconference graphs, which yield an infinite family ofsymmetricconference matrices.
Paley digraphs are directed analogs of Paley graphs, one for eachp≡ 3 (mod 4), that yieldantisymmetricconference matrices.
The construction of these graphs uses quadratic residues.
The fact that finding a square root of a number modulo a large compositenis equivalent to factoring (which is widely believed to be ahard problem) has been used for constructingcryptographic schemessuch as theRabin cryptosystemand theoblivious transfer. Thequadratic residuosity problemis the basis for theGoldwasser-Micali cryptosystem.
Thediscrete logarithmis a similar problem that is also used in cryptography.
Euler's criterionis a formula for the Legendre symbol (a|p) wherepis prime. Ifpis composite the formula may or may not compute (a|p) correctly. TheSolovay–Strassen primality testfor whether a given numbernis prime or composite picks a randomaand computes (a|n) using a modification of Euclid's algorithm,[40]and also using Euler's criterion.[41]If the results disagree,nis composite; if they agree,nmay be composite or prime. For a compositenat least 1/2 the values ofain the range 2, 3, ...,n− 1 will return "nis composite"; for primennone will. If, after using many different values ofa,nhas not been proved composite it is called a "probable prime".
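A compact Python version of the test follows. The Jacobi symbol is computed with the standard binary-style algorithm rather than Euclid's algorithm proper, and the number of rounds is an arbitrary choice.

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:                  # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                        # reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20):
    if n in (2, 3):
        return True
    if n < 2 or n % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        j = jacobi(a, n) % n               # represent -1 as n - 1
        if j == 0 or pow(a, (n - 1) // 2, n) != j:
            return False                   # witness found: n is definitely composite
    return True                            # n is a probable prime

print(solovay_strassen(2521))              # True – 2521 is prime
print(solovay_strassen(221))               # False with overwhelming probability (221 = 13 · 17)
```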
TheMiller–Rabin primality testis based on the same principles. There is a deterministic version of it, but the proof that it works depends on thegeneralized Riemann hypothesis; the output from this test is "nis definitely composite" or "eithernis prime or the GRH is false". If the second output ever occurs for a compositen, then the GRH would be false, which would have implications through many branches of mathematics.
In § VI of theDisquisitiones Arithmeticae[42]Gauss discusses two factoring algorithms that use quadratic residues and thelaw of quadratic reciprocity.
Several modern factorization algorithms (includingDixon's algorithm, thecontinued fraction method, thequadratic sieve, and thenumber field sieve) generate small quadratic residues (modulo the number being factorized) in an attempt to find acongruence of squareswhich will yield a factorization. The number field sieve is the fastest general-purpose factorization algorithm known.
The following table (sequenceA096008in theOEIS) lists the quadratic residues mod 1 to 75 (ared numbermeans it is not coprime ton). (For the quadratic residues coprime ton, seeOEIS:A096103, and for nonzero quadratic residues, seeOEIS:A046071.)
TheDisquisitiones Arithmeticaehas been translated from Gauss'sCiceronian LatinintoEnglishandGerman. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of theGauss sum, the investigations intobiquadratic reciprocity, and unpublished notes.
|
https://en.wikipedia.org/wiki/Quadratic_residue#Prime_power_modulus
|
So help me Godis a phrase often used to give anoath, sometimes optionally as part of anoath of office. It is used in some jurisdictions as an oath for performing a public duty, such as an appearance in court. The phrase implies greater care than usual in the truthfulness of one's testimony or in the performance of one's duty.
Notably, the wordhelpinso help me Godis in thesubjunctive mood.
InAustralia, theOath of Allegianceis available in two forms, one of which contains the phrase "So help me God!"[1]
InCanada, the Oath of Office, Oath of Allegiance, and Oath of Members of the Privy Council may be sworn, and end in "So help me God." They may also be solemnly affirmed, and in such case the phrase is omitted.[2]
TheConstitution of Fiji, Chapter 17requires this phrase for theoath of allegiance, and before service to the republic from the President's office or Vice-President's office, a ministerial position, or a judicial position.
In New Zealand the Oath of Allegiance is available in English or Māori in two forms, one an oath containing the phrase 'so help me God' and the other an affirmation which does not. The Police Act 1958 and the Oaths Modernisation Bill still include the phrase.[3][4]
TheOath of Allegianceset out in thePromissory Oaths Act 1868ends with this phrase, and is required to be taken by various office-holders.[5]
The phrase "So help me God" is prescribed in oaths as early as theJudiciary Act of 1789, for U.S. officers other than the President. The act makes the semantic distinction between anaffirmationand anoath.[6]The oath, religious in essence, includes the phrase "so help me God" and "[I] swear". The affirmation uses "[I] affirm". Both serve the same purpose and are described as one (i.e. "... solemnly swear, or affirm, that ...")[7]
In theUnited States, theNo Religious Test Clausestates that "no religious test shall ever be required as a qualification to any office or public trust under the United States." Still, there are federal oaths which do include the phrase "So help me God", such as forjusticesandjudgesin28 U.S.C.§ 453.[8]
There is no law that requires Presidents to add the words "So help me God" at the end of the oath (or to use a Bible). Some historians maintain thatGeorge Washingtonhimself added the phrase to the end of his first oath, setting a precedent for future presidents and continuing what was already established practice in his day[9]and that all Presidents since have used this phrase, according to Marvin Pinkert, executive director of theNational Archives Experience.[10]Many other historians reject this story given that "it was not until 65 years after the event that the story that Washington added this phrase first appeared in a published volume" and other witnesses, who were present for the event, did not cite him as having added the phrase.[11]These historians further note that "we have no convincing contemporary evidence that any president said "so help me God" until September 1881, when Chester A. Arthur took the oath after the death of James Garfield."[12]It is demonstrable, however, that those historians are in error regarding their claim that there is no "contemporary evidence" of a president saying "so help me God" until 1881. Richard Gardiner's research published in theWhite House History Quarterly, November 2024, offers contemporary evidence for presidents who used the phrase going back to William Henry Harrison in 1841, and Andrew Jackson.[13]
TheUnited States Oath of Citizenship(officially referred to as the "Oath of Allegiance", 8 C.F.R. Part 337 (2008)), taken by all immigrants who wish to becomeUnited States citizens, includes the phrase "so help me God"; however8 CFR337.1provides that the phrase is optional.
TheEnlistment oathand officer'sOath of Officeboth contain this phrase. A change in October 2013 to Air Force Instruction 36-2606[14]made it mandatory to include the phrase during Air Force enlistments/reenlistments. This change has made the instruction "consistent with the language mandated in 10 USC 502".[15]The Air Force announced on September 17, 2014, that it revoked this previous policy change, allowing anyone to omit "so help me God" from the oath.[16]
Some of the states have specified that the words "so help me God" were used in oath of office, and also required ofjurors, witnesses in court,notaries public, and state employees. Alabama, Connecticut, Delaware, Kentucky, Louisiana, Maine, Mississippi, New Mexico, North Carolina, Texas, and Virginia retain the required "so help me God" as part of the oath to public office. Historically, Maryland and South Carolina did include it but both have been successfully challenged in court. Other states, such as New Hampshire, North Dakota and Rhode Island allow exceptions or alternative phrases. In Wisconsin, the specific language of the oath has been repealed.[17]
InCroatia, the text of presidential oath, which is defined by the Presidential Elections Act amendments of 1997 (Article 4), ends with "Tako mi Bog pomogao" (So help me God).[18][19]
In 2009, concerns about the phrase infringing onConstitution of Croatiawere raised.Constitutional Court of Croatiaruled them out in 2017, claiming that it is compatible with constitution and secular state.[20][21][22]The court said the phrase is in neither direct nor indirect relation to any religious beliefs of theelected president. It doesn't represent a theist or religious belief and does not stop the president in any way from expressing any other religious belief. Saying the phrase while taking the presidential oath does not force a certain belief on the President and does not infringe on their religious freedoms.[22]
In the inauguration ofDutch monarchs, the phrase "zo waarlijk helpe mij God Almachtig" ("So help me God Almighty") is used at the conclusion of the monarch's oath.[23]
In theOath of Officeof thePresident of the Philippines, the phrase "So help me God" (Filipino:Kasihan nawâ akó ng Diyos) is mandatory in oaths.[24]An affirmation, however, has exactly the same legal effect as an oath.
In medieval France, tradition held that when the Duke of Brittany or other royalty entered the city ofRennes, they would proclaimEt qu'ainsi Dieu me soit en aide("And so help me God").[25]
The phraseSo wahr mir Gott helfe(literally "as true as God may help me") is an optional part in oaths of office prescribed for civil servants, soldiers, judges as well as members and high representatives of the federal and state governments such as theFederal President,Federal Chancellorand theMinister Presidents. Parties and witnesses in criminal and civil proceedings may also be placed under oath with this phrase. In such proceedings, the judge first speaks the wordsYou swear [by God Almighty and All-Knowing] that to the best of your knowledge you have spoken the pure truth and not concealed anything.The witness or party then must answerI swear it [, so help me God]. The words between brackets are added or omitted according to the preference of the person placed under oath.[26]If the person concerned raises a conscientious objection against any kind of oath, the judge may speak the wordsAware of your responsibility in court, you affirm that to the best of your knowledge you have spoken the pure truth and not concealed anythingto which the person needs to replyYes.[27]Both forms of the oath and the affirmation carry the same penalty, if the person is found to have lied. Contrary to the oath without a religious phrase, this kind of affirmation is not necessarily available outside court proceedings (e.g. for an oath of office).
The traditional oath of witnesses in Austrian courts ends with the phraseso wahr mir Gott helfe. There are, however, exemptions for witnesses of different religious denominations as well as those unaffiliated with any religion. The oath is rarely practised in civil trials and was completely abolished for criminal procedures in 2008. The phraseso wahr mir Gott helfeis also an (optional) part in the oath of surveyors who testify as expert witnesses as well as court-certified interpreters. Unlike in Germany, the phraseso wahr mir Gott helfeis not part of the oath of office of theFederal President, members of the federal government or state governors, who may or may not add a religious affirmation after the form of oath prescribed by the constitution.
ThePolishphrase is "Tak mi dopomóż Bóg" or "Tak mi, Boże, dopomóż." It has been used in most version of thePolish Army oaths, however other denominations use different phrases. President, prime minister, deputy prime ministers, ministers and members of both houses of parliament can add this phrase at the end of the oath of their office.[28]
InRomania, the oath translation is "Așa să-mi ajute Dumnezeu!", which is used in various ceremonies such as the ministers' oath in front of the president of the republic or the magistrates' oath.
|
https://en.wikipedia.org/wiki/So_help_me_God
|
Inabstract algebra,group theorystudies thealgebraic structuresknown asgroups.
The concept of a group is central to abstract algebra: other well-known algebraic structures, such asrings,fields, andvector spaces, can all be seen as groups endowed with additionaloperationsandaxioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra.Linear algebraic groupsandLie groupsare two branches of group theory that have experienced advances and have become subject areas in their own right.
Various physical systems, such ascrystalsand thehydrogen atom, andthree of the fourknown fundamental forces in the universe, may be modelled bysymmetry groups. Thus group theory and the closely relatedrepresentation theoryhave many important applications inphysics,chemistry, andmaterials science. Group theory is also central topublic key cryptography.
The earlyhistory of group theorydates from the 19th century. One of the most important mathematical achievements of the 20th century[1]was the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a completeclassification of finite simple groups.
Group theory has three main historical sources:number theory, the theory ofalgebraic equations, andgeometry. The number-theoretic strand was begun byLeonhard Euler, and developed byGauss'swork onmodular arithmeticand additive and multiplicative groups related toquadratic fields. Early results about permutation groups were obtained byLagrange,Ruffini, andAbelin their quest for general solutions of polynomial equations of high degree.Évariste Galoiscoined the term "group" and established a connection, now known asGalois theory, between the nascent theory of groups andfield theory. In geometry, groups first became important inprojective geometryand, later,non-Euclidean geometry.Felix Klein'sErlangen programproclaimed group theory to be the organizing principle of geometry.
Galois, in the 1830s, was the first to employ groups to determine the solvability ofpolynomial equations.Arthur CayleyandAugustin Louis Cauchypushed these investigations further by creating the theory of permutation groups. The second historical source for groups stems fromgeometricalsituations. In an attempt to come to grips with possible geometries (such aseuclidean,hyperbolicorprojective geometry) using group theory,Felix Kleininitiated theErlangen programme.Sophus Lie, in 1884, started using groups (now calledLie groups) attached toanalyticproblems. Thirdly, groups were, at first implicitly and later explicitly, used inalgebraic number theory.
The different scope of these early sources resulted in different notions of groups. The theory of groups was unified starting around 1880. Since then, the impact of group theory has been ever growing, giving rise to the birth ofabstract algebrain the early 20th century,representation theory, and many more influential spin-off domains. Theclassification of finite simple groupsis a vast body of work from the mid 20th century, classifying all thefinitesimple groups.
The range of groups being considered has gradually expanded from finite permutation groups and special examples ofmatrix groupsto abstract groups that may be specified through apresentationbygeneratorsandrelations.
The firstclassof groups to undergo a systematic study waspermutation groups. Given any setXand a collectionGofbijectionsofXinto itself (known aspermutations) that is closed under compositions and inverses,Gis a groupactingonX. IfXconsists ofnelements andGconsists ofallpermutations,Gis thesymmetric groupSn; in general, any permutation groupGis asubgroupof the symmetric group ofX. An early construction due toCayleyexhibited any group as a permutation group, acting on itself (X=G) by means of the leftregular representation.
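As a small concrete illustration (not tied to any particular source), the following Python snippet represents permutations of {0, 1, 2} as tuples and checks that the set of all of them is closed under composition and inverses, i.e. that it forms the symmetric group S3 acting on a three-element set.

```python
from itertools import permutations

X = (0, 1, 2)
S3 = list(permutations(X))                     # all 6 bijections of X, written as tuples

def compose(p, q):
    # (p o q)(i) = p(q(i)); a permutation tuple p sends i to p[i]
    return tuple(p[q[i]] for i in X)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

assert all(compose(p, q) in S3 for p in S3 for q in S3)   # closed under composition
assert all(inverse(p) in S3 for p in S3)                  # closed under inverses
assert compose((1, 0, 2), (0, 2, 1)) == (1, 2, 0)         # a sample product of two elements
print(len(S3), "elements – the symmetric group S3")
```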
In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that forn≥ 5, thealternating groupAnissimple, i.e. does not admit any propernormal subgroups. This fact plays a key role in theimpossibility of solving a general algebraic equation of degreen≥ 5in radicals.
The next important class of groups is given bymatrix groups, orlinear groups. HereGis a set consisting of invertiblematricesof given ordernover afieldKthat is closed under the products and inverses. Such a group acts on then-dimensional vector spaceKnbylinear transformations. This action makes matrix groups conceptually similar to permutation groups, and the geometry of the action may be usefully exploited to establish properties of the groupG.
Permutation groups and matrix groups are special cases oftransformation groups: groups that act on a certain spaceXpreserving its inherent structure. In the case of permutation groups,Xis a set; for matrix groups,Xis avector space. The concept of a transformation group is closely related with the concept of asymmetry group: transformation groups frequently consist ofalltransformations that preserve a certain structure.
The theory of transformation groups forms a bridge connecting group theory withdifferential geometry. A long line of research, originating withLieandKlein, considers group actions onmanifoldsbyhomeomorphismsordiffeomorphisms. The groups themselves may bediscreteorcontinuous.
Most groups considered in the first stage of the development of group theory were "concrete", having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of anabstract groupbegan to take hold, where "abstract" means that the nature of the elements are ignored in such a way that twoisomorphic groupsare considered as the same group. A typical way of specifying an abstract group is through apresentationbygenerators and relations,
A significant source of abstract groups is given by the construction of afactor group, orquotient group,G/H, of a groupGby anormal subgroupH.Class groupsofalgebraic number fieldswere among the earliest examples of factor groups, of much interest innumber theory. If a groupGis a permutation group on a setX, the factor groupG/His no longer acting onX; but the idea of an abstract group permits one not to worry about this discrepancy.
The change of perspective from concrete to abstract groups makes it natural to consider properties of groups that are independent of a particular realization, or in modern language, invariant underisomorphism, as well as the classes of group with a given such property:finite groups,periodic groups,simple groups,solvable groups, and so on. Rather than exploring properties of an individual group, one seeks to establish results that apply to a whole class of groups. The new paradigm was of paramount importance for the development of mathematics: it foreshadowed the creation ofabstract algebrain the works ofHilbert,Emil Artin,Emmy Noether, and mathematicians of their school.[citation needed]
An important elaboration of the concept of a group occurs ifGis endowed with additional structure, notably, of atopological space,differentiable manifold, oralgebraic variety. If the multiplication and inversion of the group are compatible with this structure, that is, they arecontinuous,smoothorregular(in the sense of algebraic geometry) maps, thenGis atopological group, aLie group, or analgebraic group.[2]
The presence of extra structure relates these types of groups with other mathematical disciplines and means that more tools are available in their study. Topological groups form a natural domain forabstract harmonic analysis, whereasLie groups(frequently realized as transformation groups) are the mainstays ofdifferential geometryand unitaryrepresentation theory. Certain classification questions that cannot be solved in general can be approached and resolved for special subclasses of groups. Thus,compact connected Lie groupshave been completely classified. There is a fruitful relation between infinite abstract groups and topological groups: whenever a groupΓcan be realized as alatticein a topological groupG, the geometry and analysis pertaining toGyield important results aboutΓ. A comparatively recent trend in the theory of finite groups exploits their connections with compact topological groups (profinite groups): for example, a singlep-adic analytic groupGhas a family of quotients which are finitep-groupsof various orders, and properties ofGtranslate into the properties of its finite quotients.
During the twentieth century, mathematicians investigated some aspects of the theory of finite groups in great depth, especially thelocal theoryof finite groups and the theory ofsolvableandnilpotent groups.[citation needed]As a consequence, the completeclassification of finite simple groupswas achieved, meaning that all thosesimple groupsfrom which all finite groups can be built are now known.
During the second half of the twentieth century, mathematicians such asChevalleyandSteinbergalso increased our understanding of finite analogs ofclassical groups, and other related groups. One such family of groups is the family ofgeneral linear groupsoverfinite fields.
Finite groups often occur when consideringsymmetryof mathematical or
physical objects, when those objects admit just a finite number of structure-preserving transformations. The theory ofLie groups,
which may be viewed as dealing with "continuous symmetry", is strongly influenced by the associatedWeyl groups. These are finite groups generated by reflections which act on a finite-dimensionalEuclidean space. The properties of finite groups can thus play a role in subjects such astheoretical physicsandchemistry.
Saying that a groupGactson a setXmeans that every element ofGdefines a bijective map on the setXin a way compatible with the group structure. WhenXhas more structure, it is useful to restrict this notion further: a representation ofGon avector spaceVis agroup homomorphism:
whereGL(V) consists of the invertiblelinear transformationsofV. In other words, to every group elementgis assigned anautomorphismρ(g) such thatρ(g) ∘ρ(h) =ρ(gh)for anyhinG.
This definition can be understood in two directions, both of which give rise to whole new domains of mathematics.[3]On the one hand, it may yield new information about the groupG: often, the group operation inGis abstractly given, but viaρ, it corresponds to themultiplication of matrices, which is very explicit.[4]On the other hand, given a well-understood group acting on a complicated object, this simplifies the study of the object in question. For example, ifGis finite, it is known thatVabove decomposes intoirreducible parts(seeMaschke's theorem). These parts, in turn, are much more easily manageable than the wholeV(viaSchur's lemma).
Given a groupG,representation theorythen asks what representations ofGexist. There are several settings, and the employed methods and obtained results are rather different in every case:representation theory of finite groupsand representations ofLie groupsare two main subdomains of the theory. The totality of representations is governed by the group'scharacters. For example,Fourier polynomialscan be interpreted as the characters ofU(1), the group ofcomplex numbersofabsolute value1, acting on theL2-space of periodic functions.
ALie groupis agroupthat is also adifferentiable manifold, with the property that the group operations are compatible with thesmooth structure. Lie groups are named afterSophus Lie, who laid the foundations of the theory of continuoustransformation groups. The termgroupes de Liefirst appeared in French in 1893 in the thesis of Lie's studentArthur Tresse, page 3.[5]
Lie groups represent the best-developed theory ofcontinuous symmetryofmathematical objectsandstructures, which makes them indispensable tools for many parts of contemporary mathematics, as well as for moderntheoretical physics. They provide a natural framework for analysing the continuous symmetries ofdifferential equations(differential Galois theory), in much the same way as permutation groups are used inGalois theoryfor analysing the discrete symmetries ofalgebraic equations. An extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations.
Groups can be described in different ways. Finite groups can be described by writing down thegroup tableconsisting of all possible multiplicationsg•h. A more compact way of defining a group is bygenerators and relations, also called thepresentationof a group. Given any setFof generators{gi}i∈I{\displaystyle \{g_{i}\}_{i\in I}}, thefree groupgenerated byFsurjects onto the groupG. The kernel of this map is called the subgroup of relations, generated by some subsetD. The presentation is usually denoted by⟨F∣D⟩.{\displaystyle \langle F\mid D\rangle .}For example, the group presentation⟨a,b∣aba−1b−1⟩{\displaystyle \langle a,b\mid aba^{-1}b^{-1}\rangle }describes a group which is isomorphic toZ×Z.{\displaystyle \mathbb {Z} \times \mathbb {Z} .}A string consisting of generator symbols and their inverses is called aword.
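To make the notion of a word concrete, here is a small illustrative Python sketch (not from the source; the two-generator alphabet is an assumption) that freely reduces a word over generators a, b by cancelling adjacent inverse pairs, writing a⁻¹ as 'A' and b⁻¹ as 'B'. In the free group two words represent the same element exactly when their free reductions coincide; in the presented group ⟨a, b ∣ aba⁻¹b⁻¹⟩ the first example below also becomes trivial, which is why that group is abelian.

def reduce_word(word):
    """Freely reduce a word given as a list of symbols 'a', 'b', 'A' (= a^-1), 'B' (= b^-1)."""
    stack = []
    for s in word:
        if stack and stack[-1] == s.swapcase():   # an adjacent inverse pair cancels
            stack.pop()
        else:
            stack.append(s)
    return stack

# The commutator a b a^-1 b^-1 is not trivial in the free group...
print(reduce_word(list("abAB")))   # ['a', 'b', 'A', 'B']
# ...but a a^-1 b b^-1 reduces to the identity (the empty word).
print(reduce_word(list("aAbB")))   # []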
Combinatorial group theorystudies groups from the perspective of generators and relations.[6]It is particularly useful where finiteness assumptions are satisfied, for example finitely generated groups, or finitely presented groups (i.e. in addition the relations are finite). The area makes use of the connection ofgraphsvia theirfundamental groups. A fundamental theorem of this area is that every subgroup of a free group is free.
There are several natural questions arising from giving a group by its presentation. Theword problemasks whether two words are effectively the same group element. By relating the problem toTuring machines, one can show that there is in general noalgorithmsolving this task. Another, generally harder, algorithmically insoluble problem is thegroup isomorphism problem, which asks whether two groups given by different presentations are actually isomorphic. For example, the group with presentation⟨x,y∣xyxyx=e⟩,{\displaystyle \langle x,y\mid xyxyx=e\rangle ,}is isomorphic to the additive groupZof integers, although this may not be immediately apparent. (Writingz=xy{\displaystyle z=xy}, one hasG≅⟨z,y∣z3=y⟩≅⟨z⟩.{\displaystyle G\cong \langle z,y\mid z^{3}=y\rangle \cong \langle z\rangle .})
Geometric group theoryattacks these problems from a geometric viewpoint, either by viewing groups as geometric objects, or by finding suitable geometric objects a group acts on.[7]The first idea is made precise by means of theCayley graph, whose vertices correspond to group elements and edges correspond to right multiplication in the group. Given two elements, one constructs theword metricgiven by the length of the minimal path between the elements. A theorem ofMilnorand Svarc then says that given a groupGacting in a reasonable manner on ametric spaceX, for example acompact manifold, thenGisquasi-isometric(i.e. looks similar from a distance) to the spaceX.
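The following Python sketch (illustrative only; the choice of the symmetric group S3 with a transposition and a 3-cycle as generating set is an assumption) builds the Cayley graph of a small group by right multiplication and computes the word metric from the identity by breadth-first search.

from itertools import permutations
from collections import deque

def compose(p, q):
    """Compose permutations given as tuples: (p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

elements = list(permutations(range(3)))            # the six elements of S3
gens = [(1, 0, 2), (1, 2, 0), (2, 0, 1)]           # a transposition, a 3-cycle, and its inverse

# Cayley graph: an edge from g to g*s for every generator s (right multiplication)
edges = {g: [compose(g, s) for s in gens] for g in elements}

# Word metric: graph distance from the identity, found by breadth-first search
identity = (0, 1, 2)
dist = {identity: 0}
queue = deque([identity])
while queue:
    g = queue.popleft()
    for h in edges[g]:
        if h not in dist:
            dist[h] = dist[g] + 1
            queue.append(h)

print(dist)   # every element of S3 lies within distance 2 of the identity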
Given a structured objectXof any sort, asymmetryis a mapping of the object onto itself which preserves the structure. This occurs in many cases, for example for the symmetries of geometric figures, of crystal patterns, or of the roots of a polynomial equation.
The axioms of a group formalize the essential aspects ofsymmetry. Symmetries form a group: they areclosedbecause if you take a symmetry of an object, and then apply another symmetry, the result will still be a symmetry. The identity keeping the object fixed is always a symmetry of an object. Existence of inverses is guaranteed by undoing the symmetry and the associativity comes from the fact that symmetries are functions on a space, and composition of functions is associative.
Frucht's theoremsays that every group is the symmetry group of somegraph. So every abstract group is actually the symmetries of some explicit object.
The saying of "preserving the structure" of an object can be made precise by working in acategory. Maps preserving the structure are then themorphisms, and the symmetry group is theautomorphism groupof the object in question.
Applications of group theory abound. Almost all structures inabstract algebraare special cases of groups.Rings, for example, can be viewed asabelian groups(corresponding to addition) together with a second operation (corresponding to multiplication). Therefore, group theoretic arguments underlie large parts of the theory of those entities.
Galois theoryuses groups to describe the symmetries of the roots of a polynomial (or more precisely the automorphisms of the algebras generated by these roots). Thefundamental theorem of Galois theoryprovides a link betweenalgebraic field extensionsand group theory. It gives an effective criterion for the solvability of polynomial equations in terms of the solvability of the correspondingGalois group. For example,S5, thesymmetric groupin 5 elements, is not solvable which implies that the generalquintic equationcannot be solved by radicals in the way equations of lower degree can. The theory, being one of the historical roots of group theory, is still fruitfully applied to yield new results in areas such asclass field theory.
Algebraic topologyis another domain which prominentlyassociatesgroups to the objects the theory is interested in. There, groups are used to describe certain invariants oftopological spaces. They are called "invariants" because they are defined in such a way that they do not change if the space is subjected to somedeformation. For example, thefundamental group"counts" how many paths in the space are essentially different. ThePoincaré conjecture, proved in 2002/2003 byGrigori Perelman, is a prominent application of this idea. The influence is not unidirectional, though. For example, algebraic topology makes use ofEilenberg–MacLane spaceswhich are spaces with prescribedhomotopy groups. Similarlyalgebraic K-theoryrelies in a way onclassifying spacesof groups. Finally, the name of thetorsion subgroupof an infinite group shows the legacy of topology in group theory.
Algebraic geometrylikewise uses group theory in many ways.Abelian varietieshave been introduced above. The presence of the group operation yields additional information which makes these varieties particularly accessible. They also often serve as a test for new conjectures. (For example theHodge conjecture(in certain cases).) The one-dimensional case, namelyelliptic curvesis studied in particular detail. They are both theoretically and practically intriguing.[8]In another direction,toric varietiesarealgebraic varietiesacted on by atorus. Toroidal embeddings have recently led to advances inalgebraic geometry, in particularresolution of singularities.[9]
Algebraic number theorymakes use of groups for some important applications. For example,Euler's product formula,
{\displaystyle \sum _{n\geq 1}{\frac {1}{n^{s}}}=\prod _{p{\text{ prime}}}{\frac {1}{1-p^{-s}}},}
captures the fact that any integer decomposes in a unique way intoprimes. The failure of this statement formore general ringsgives rise toclass groupsandregular primes, which feature inKummer'streatment ofFermat's Last Theorem.
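As a quick numerical illustration (a sketch, not from the source), the partial sums of 1/n^s and the partial Euler product over small primes approach the same value; here s = 2, where both converge to π²/6.

import math

def zeta_partial(s, n_max):
    """Partial sum of the Dirichlet series for the zeta function."""
    return sum(1 / n ** s for n in range(1, n_max + 1))

def euler_product_partial(s, p_max):
    """Partial Euler product over the primes up to p_max."""
    primes = [p for p in range(2, p_max + 1)
              if all(p % d for d in range(2, math.isqrt(p) + 1))]
    product = 1.0
    for p in primes:
        product *= 1 / (1 - p ** (-s))
    return product

s = 2
print(zeta_partial(s, 100000))         # ~1.64492
print(euler_product_partial(s, 1000))  # ~1.6449
print(math.pi ** 2 / 6)                # 1.6449340668...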
Analysis on Lie groups and certain other groups is calledharmonic analysis.Haar measures, that is, integrals invariant under the translation in a Lie group, are used forpattern recognitionand otherimage processingtechniques.[10]
Incombinatorics, the notion ofpermutationgroup and the concept of group action are often used to simplify the counting of a set of objects; see in particularBurnside's lemma.
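For instance (an illustrative sketch, not taken from the source), Burnside's lemma counts two-coloured necklaces of four beads up to rotation: the number of orbits equals the average number of colourings fixed by each rotation, and a brute-force orbit count agrees.

from itertools import product

def rotate(coloring, k):
    """Rotate a tuple of bead colours by k positions."""
    return coloring[k:] + coloring[:k]

colorings = list(product([0, 1], repeat=4))       # all 2^4 colourings of 4 beads

# Brute force: collect one canonical representative per rotation orbit
orbits = {min(rotate(c, k) for k in range(4)) for c in colorings}
print(len(orbits))                                # 6

# Burnside's lemma: average number of colourings fixed by each of the 4 rotations
fixed = [sum(1 for c in colorings if rotate(c, k) == c) for k in range(4)]
print(sum(fixed) // 4)                            # (16 + 2 + 4 + 2) / 4 = 6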
The presence of the 12-periodicityin thecircle of fifthsyields applications ofelementary group theoryinmusical set theory.Transformational theorymodels musical transformations as elements of a mathematical group.
Inphysics, groups are important because they describe the symmetries which the laws of physics seem to obey. According toNoether's theorem, every continuous symmetry of a physical system corresponds to aconservation lawof the system. Physicists are very interested in group representations, especially of Lie groups, since these representations often point the way to the "possible" physical theories. Examples of the use of groups in physics include theStandard Model,gauge theory, theLorentz group, and thePoincaré group.
Group theory can be used to resolve the incompleteness of the statistical interpretations of mechanics developed byWillard Gibbs, relating to the summing of an infinite number of probabilities to yield a meaningful solution.[11]
Inchemistryandmaterials science,point groupsare used to classify regular polyhedra, and thesymmetries of molecules, andspace groupsto classifycrystal structures. The assigned groups can then be used to determine physical properties (such aschemical polarityandchirality), spectroscopic properties (particularly useful forRaman spectroscopy,infrared spectroscopy, circular dichroism spectroscopy, magnetic circular dichroism spectroscopy, UV/Vis spectroscopy, and fluorescence spectroscopy), and to constructmolecular orbitals.
Molecular symmetryis responsible for many physical and spectroscopic properties of compounds and provides relevant information about how chemical reactions occur. In order to assign a point group for any given molecule, it is necessary to find the set of symmetry operations present on it. The symmetry operation is an action, such as a rotation around an axis or a reflection through a mirror plane. In other words, it is an operation that moves the molecule such that it is indistinguishable from the original configuration. In group theory, the rotation axes and mirror planes are called "symmetry elements". These elements can be a point, line or plane with respect to which the symmetry operation is carried out. The symmetry operations of a molecule determine the specific point group for this molecule.
Inchemistry, there are five important symmetry operations. They are identity operation (E), rotation operation or proper rotation (Cn), reflection operation (σ), inversion (i) and rotation reflection operation or improper rotation (Sn). The identity operation (E) consists of leaving the molecule as it is. This is equivalent to any number of full rotations around any axis. This is a symmetry of all molecules, whereas the symmetry group of achiralmolecule consists of only the identity operation. An identity operation is a characteristic of every molecule even if it has no symmetry. Rotation around an axis (Cn) consists of rotating the molecule around a specific axis by a specific angle. It is rotation through the angle 360°/n, wherenis an integer, about a rotation axis. For example, if awatermolecule rotates 180° around the axis that passes through theoxygenatom and between thehydrogenatoms, it is in the same configuration as it started. In this case,n= 2, since applying it twice produces the identity operation. In molecules with more than one rotation axis, the Cnaxis having the largest value of n is the highest order rotation axis or principal axis. For example inboron trifluoride(BF3), the highest order of rotation axis isC3, so the principal axis of rotation isC3.
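As a small numerical sketch of the C2 operation just described (the coordinates below are idealised placeholders, not data from the source), a 180° rotation about the axis through the oxygen atom maps a water-like arrangement onto itself by swapping the two hydrogens.

import numpy as np

# Idealised coordinates (arbitrary units): O on the rotation axis, H atoms placed symmetrically
atoms = {"O":  np.array([0.00, 0.0,  0.00]),
         "H1": np.array([0.76, 0.0, -0.59]),
         "H2": np.array([-0.76, 0.0, -0.59])}

# C2 operation: rotation by 180 degrees about the z axis, i.e. (x, y, z) -> (-x, -y, z)
C2 = np.diag([-1.0, -1.0, 1.0])
rotated = {name: C2 @ position for name, position in atoms.items()}

# The configuration is indistinguishable from the original: O is fixed, H1 and H2 swap
assert np.allclose(rotated["O"], atoms["O"])
assert np.allclose(rotated["H1"], atoms["H2"])
assert np.allclose(rotated["H2"], atoms["H1"])
print("C2 maps the arrangement onto itself")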
In the reflection operation (σ) many molecules have mirror planes, although they may not be obvious. The reflection operation exchanges left and right, as if each point had moved perpendicularly through the plane to a position exactly as far from the plane as when it started. When the plane is perpendicular to the principal axis of rotation, it is calledσh(horizontal). Other planes, which contain the principal axis of rotation, are labeled vertical (σv) or dihedral (σd).
Inversion (i) is a more complex operation. Each point moves through the center of the molecule to a position opposite the original position and as far from the central point as where it started. Many molecules that seem at first glance to have an inversion center do not; for example,methaneand othertetrahedralmolecules lack inversion symmetry. To see this, hold a methane model with two hydrogen atoms in the vertical plane on the right and two hydrogen atoms in the horizontal plane on the left. Inversion results in two hydrogen atoms in the horizontal plane on the right and two hydrogen atoms in the vertical plane on the left. Inversion is therefore not a symmetry operation of methane, because the orientation of the molecule following the inversion operation differs from the original orientation. The last operation, improper rotation or rotation-reflection (Sn), requires a rotation of 360°/n followed by reflection through a plane perpendicular to the axis of rotation.
Very large groups of prime order constructed inelliptic curve cryptographyserve forpublic-key cryptography. Cryptographical methods of this kind benefit from the flexibility of the geometric objects, hence their group structures, together with the complicated structure of these groups, which make thediscrete logarithmvery hard to calculate. One of the earliest encryption protocols,Caesar's cipher, may also be interpreted as a (very easy) group operation. Most cryptographic schemes use groups in some way. In particularDiffie–Hellman key exchangeuses finitecyclic groups. So the termgroup-based cryptographyrefers mostly tocryptographic protocolsthat use infinitenon-abelian groupssuch as abraid group.
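As a toy sketch of the group-theoretic idea (the parameters below are illustrative only and offer no security; real deployments use large standardised groups or elliptic-curve groups), Diffie–Hellman key exchange works inside the cyclic group generated by g modulo a prime p: both parties arrive at the same group element without transmitting their secret exponents.

import secrets

p = 2_147_483_647      # a prime (2^31 - 1); the group is the multiplicative group mod p
g = 7                  # assumed generator of a large cyclic subgroup, for illustration

a = secrets.randbelow(p - 2) + 1     # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1     # Bob's secret exponent

A = pow(g, a, p)       # Alice publishes g^a
B = pow(g, b, p)       # Bob publishes g^b

# Both sides compute g^(ab); recovering a or b from A or B is the discrete logarithm problem
assert pow(B, a, p) == pow(A, b, p)
print("shared group element:", pow(B, a, p))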
|
https://en.wikipedia.org/wiki/Group_theory#Finite_groups
|
TheBabington Plotwas a plan in 1586 to assassinate QueenElizabeth I, aProtestant, and putMary, Queen of Scots, herCatholiccousin, on the English throne. It led to Mary's execution, a result of a letter sent by Mary (who had been imprisoned for 19 years since 1568 in England at the behest of Elizabeth) in which she consented to the assassination of Elizabeth.[1]
The long-term goal of the plot was the invasion of England by the Spanish forces of KingPhilip IIand theCatholic League in France, leading to the restoration of the old religion. The plot was discovered by Elizabeth's spymaster SirFrancis Walsinghamand used to entrap Mary for the purpose of removing her as a claimant to the English throne.
The chief conspirators wereAnthony BabingtonandJohn Ballard. Babington, a youngrecusant, was recruited by Ballard, aJesuitpriest who hoped to rescue the Scottish queen. Working for Walsingham weredouble agentsRobert PoleyandGilbert Gifford, as well asThomas Phelippes, a spy agent andcryptanalyst, and the Puritan spyMaliverey Catilyn. The turbulent Catholic deacon Gifford had been in Walsingham's service since the end of 1585 or the beginning of 1586. Gifford obtained a letter of introduction to Queen Mary from a confidant and spy for her,Thomas Morgan. Walsingham then placed double agent Gifford and spy decipherer Phelippes insideChartley Castle, where Queen Mary was imprisoned. Gifford organised the Walsingham plan to place Babington's and Queen Mary'sencryptedcommunications into a beer barrel cork which were then intercepted by Phelippes, decoded and sent to Walsingham.[2]
On 7 July 1586, the only Babington letter that was sent to Mary was decoded by Phelippes. Mary responded in code on 17 July 1586 ordering the would-be rescuers to assassinate Queen Elizabeth. The response letter also included deciphered phrases indicating her desire to be rescued: "The affairs being thus prepared" and "I may suddenly be transported out of this place". At the Fotheringay trial in October 1586, Elizabeth's Lord High Treasurer William Cecil –Lord Burghley– and Walsingham used the letter against Mary who refused to admit that she was guilty. However, Mary was betrayed by her secretariesNauandCurle, who confessed under pressure that the letter was mainly truthful.[3]
Mary, Queen of Scots, a Roman Catholic, was regarded by Roman Catholics as the legitimate heir to the throne of England. In 1568, she escaped imprisonment by Scottish rebels and sought the aid of her first cousin once removed,Queen Elizabeth I, a year after her forced abdication from the throne ofScotland. The issuance of thepapal bullRegnans in ExcelsisbyPope Pius Von 25 February 1570, granted English Catholics authority to overthrow the English queen. Queen Mary became the focal point of numerous plots and intrigues to restore England to its former religion, Catholicism, and to depose Elizabeth and even to take her life. Rather than rendering the expected aid, Elizabeth imprisoned Mary for nineteen years in the charge of a succession of jailers, principally theEarl of Shrewsbury.
In 1584, Elizabeth'sPrivy Councilsigned a "Bond of Association" designed by Cecil and Walsingham which stated that anyone within the line of succession to the throneon whose behalfanyone plotted against the Queen, would be excluded from the line and executed. This was agreed upon by hundreds of Englishmen, who likewise signed the Bond. Mary also agreed to sign the Bond. The following year, Parliament passed theAct of Association, which provided for the execution of anyone who would benefit from the death of the Queen if a plot against her was discovered. Because of the bond, Mary could be executed if a plot was initiated by others that could lead to her accession to England's throne.[4]
In 1585, Elizabeth ordered Mary to be transferred in a coach and under heavy guard and placed under the strictest confinement atChartley HallinStaffordshire, under the control ofSir Amias Paulet. She was prohibited any correspondence with the outside world.PuritanPaulet was chosen by Queen Elizabeth in part because he abhorred Queen Mary's Catholic faith.
Reacting to the growing threat posed by Catholics, urged on by the pope and other Catholic monarchs in Europe,Francis Walsingham, Queen Elizabeth'sSecretary of Stateandspymaster, together withWilliam Cecil, Elizabeth's chief advisor, realised that if Mary could be implicated in a plot to assassinate Elizabeth, she could be executed and thepapistthreat diminished. As he wrote to theEarl of Leicester: "So long as that devilish woman lives, neither Her Majesty must make account to continue in quiet possession of her crown, nor her faithful servants assure themselves of safety of their lives."[5]Walsingham used Babington to ensnare Queen Mary by sending his double agent,Gilbert Giffordto Paris to obtain the confidence of Morgan, then locked in the Bastille. Morgan previously worked forGeorge Talbot, 6th Earl of Shrewsbury, an earlier jailer of Queen Mary. Through Shrewsbury, Queen Mary became acquainted with Morgan. Queen Mary sent Morgan to Paris to deliver letters to the French court. While in Paris, Morgan became involved in a previous plot designed by William Parry, which resulted in Morgan's incarceration in the Bastille. In 1585 Gifford was arrested returning to England while coming through Rye in Sussex with letters of introduction from Morgan to Queen Mary. Walsingham released Gifford to work as adouble agent, in the Babington Plot. Gifford used the alias "No. 4" just as he had used other aliases such as Colerdin, Pietro and Cornelys. Walsingham had Gifford function as a courier in the entrapment plot against Queen Mary.
The Babington plot was related to several separate plans:
At the behest of Mary's French supporters,John Ballard, aJesuitpriest and agent of the Roman Church, went to England on various occasions in 1585 to secure promises of aid from the northern Catholic gentry on behalf of Mary. In March 1586, he met withJohn Savage, an ex-soldier who was involved in a separate plot against Elizabeth and who had sworn an oath to assassinate the queen. He was resolved in this plot after consulting with three friends: Dr. William Gifford,Christopher Hodgsonand Gilbert Gifford. Gilbert Gifford had been arrested by Walsingham and agreed to be a double agent. Gifford was already in Walsingham's employ by the time Savage was going ahead with the plot, according to Conyers Read.[7]Later that same year, Gifford reported to Charles Paget and the Spanish diplomat DonBernardino de Mendoza, and told them that English Catholics were prepared to mount an insurrection against Elizabeth, provided that they would be assured of foreign support. While it was uncertain whether Ballard's report of the extent of Catholic opposition was accurate, what was certain is that he was able to secure assurances that support would be forthcoming. He then returned to England, where he persuaded a member of the Catholic gentry, Anthony Babington, to lead and organise the English Catholics against Elizabeth. Ballard informed Babington about the plans that had been so far proposed. Babington's later confession made it clear that Ballard was sure of the support of theCatholic League:
He told me he was retorned from Fraunce uppon this occasion. Being with Mendoza at Paris, he was informed that in regarde of the iniuries don by our state unto the greatest Christian princes, by the nourishinge of sedition and divisions in their provinces, by withholding violently the lawful possessions of some, by invasion of the Indies and by piracy, robbing the treasure and the wealthe of others, and sondry intolerable wronges for so great and mighty princes to indure, it was resolved by the Catholique league to seeke redresse and satisfaction, which they had vowed to performe this sommer without farther delay, havinge in readiness suche forces and all warlike preparations as the like was never scene in these partes of Christendome ... The Pope was chief disposer, the most Christian king and the king Catholic with all other princes of the league concurred as instruments for the righting of these wronges, and reformation of religion. The conductors of this enterprise for the French nation, theD. of Guise, or his brother theD. de Main; for the Italian and Hispanishe forces, the P. of Parma; the whole number about 60,000.[8]
Despite this assurance of this foreign support, Babington was hesitant, as he thought that no foreign invasion would succeed for as long as Elizabeth remained, to which Ballard answered that the plans of John Savage would take care of that. After a lengthy discussion with friends and soon-to-be fellow conspirators, Babington consented to join and to lead the conspiracy.[9]
Unfortunately for the conspirators, Walsingham was certainly aware of some of the aspects of the plot, based on reports by his spies, most notably Gilbert Gifford, who kept tabs on all the major participants. While he could have shut down some part of the plot and arrested some of those involved within reach, he still lacked any piece of evidence that would prove Queen Mary's active participation in the plot and he feared to commit any mistake which might cost Elizabeth her life.
After theThrockmorton Plot, Queen Elizabeth had issued a decree in July 1584, which prevented all communication to and from Mary. However, Walsingham and Cecil realised that that decree also impaired their ability to entrap Mary. They needed evidence for which she could be executed based on their Bond of Association tenets. Thus Walsingham established a new line of communication, one which he could carefully control without incurring any suspicion from Mary. Gifford approached the French ambassador to England,Guillaume de l'Aubespine, Baron de Châteauneuf-sur-Cher, and described the new correspondence arrangement that had been designed by Walsingham. Gifford and jailer Paulet had arranged for a local brewer to facilitate the movement of messages between Queen Mary and her supporters by placing them in a watertight box inside a beer barrel.[10][11]Thomas Phelippes, a cipher and language expert in Walsingham's employ, was then quartered at Chartley Hall to receive the messages, decode them and send them to Walsingham. Gifford submitted a code table (supplied by Walsingham) to Chateauneuf and requested the first message be sent to Mary.[12]
All subsequent messages to Mary would be sent via diplomatic packets to Chateauneuf, who then passed them on to Gifford. Gifford would pass them on to Walsingham, who would confide them to Phelippes. The cipher used was anomenclator cipher.[13]Phelippes would decode and make a copy of the letter. The letter was then resealed and given back to Gifford, who would pass it on to the brewer. The brewer would then smuggle the letter to Mary. If Mary sent a letter to her supporters, it would go through the reverse process. In short order, every message coming to and from Chartley was intercepted and read by Walsingham.[14]
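The Python sketch below is illustrative only: the substitution table and code symbols are invented and are not Mary's actual cipher, but they show the general structure of a nomenclator, a substitution alphabet combined with special symbols standing for whole names, which is the kind of system Phelippes had to break.

# Invented tables; a real nomenclator mixed hundreds of symbols, code words and nulls
substitution = dict(zip("abcdefghijklmnopqrstuvwxyz",
                        "QWERTYUIOPASDFGHJKLZXCVBNM"))
code_symbols = {"elizabeth": "#1", "walsingham": "#2", "babington": "#3"}

def encipher(plaintext):
    out = []
    for word in plaintext.lower().split():
        if word in code_symbols:               # whole-name code symbol
            out.append(code_symbols[word])
        else:                                  # letter-by-letter substitution
            out.append("".join(substitution.get(ch, ch) for ch in word))
    return " ".join(out)

print(encipher("inform Babington of the design"))   # OFYGKD #3 GY ZIT RTLOUF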
Babington wrote to Mary:[15]
Myself with ten gentlemen and a hundred of our followers will undertake the delivery of your royal person from the hands of your enemies. For the dispatch of the usurper, from the obedience of whom we are by the excommunication of her made free, there be six noble gentlemen, all my private friends, who for the zeal they bear to the Catholic cause and your Majesty's service will undertake that tragical execution.[16][17]
This letter was received by Mary on 14 July 1586, who was in a dark mood knowing that her son had betrayed her in favour of Elizabeth,[18]and three days later she replied to Babington in a long letter in which she outlined the components of a successful rescue and the need to assassinate Elizabeth. She also stressed the necessity of foreign aid if the rescue attempt was to succeed:
For I have long ago shown unto the foreign Catholic princes, what they have done against the King of Spain, and in the time the Catholics here remaining, exposed to all persecutions and cruelty, do daily diminish in number, forces, means and power. So as, if remedy be not thereunto speedily provided, I fear not a little but they shall become altogether unable for ever to rise again and to receive any aid at all, whensoever it were offered. Then for mine own part, I pray you to assure our principal friends that, albeit I had not in this cause any particular interest in this case... I shall be always ready and most willing to employ therein my life and all that I have, or may ever look for, in this world."[19][20][21]
Mary, in her response letter, advised the would-be rescuers to confront thePuritansand to link her case to the Queen of England as her heir.
These pretexts may serve to found and establish among all associations, or considerations general, as done only for your preservation and defence, as well in religions as lands, lives and goods, against the oppressions and attempts of said Puritans, without directly writing, or giving out anything against the Queen, but rather showing yourselves willing to maintain her, and her lawful heirs after her, not naming me.[22]
Mary was clear in her support for the murder of Elizabeth if that would have led to her liberty and Catholic domination of England. In addition, Queen Mary supported in that letter, and in another one to Ambassador Bernardino de Mendoza, a Spanish invasion of England.
The letter was again intercepted and deciphered by Phelippes. But this time, Phelippes, on the direction of Walsingham, kept the original and made a copy, adding a request for the names of the conspirators:[23][24]
I would be glad to know the names and quelityes of the sixe gentlemen which are to accomplish the dessignement, for that it may be, I shall be able uppon knowledge of the parties to give you some further advise necessarye to be followed therein; and even so do I wish to be made acquainted with the names of all such principall persons ... as also from time to time particularlye how you proceede and as sone as you may for the same purpose who bee alredye and how farr every one privye hereunto.[25][26][27]
Then, a letter was sent that would destroy Mary's life.
Let the great plot commence. Signed Mary
John Ballardwas arrested on 4 August 1586, and under torture he confessed and implicated Babington. Although Babington was able to receive the letter with the postscript, he was not able to reply with the names of the conspirators, as he was arrested. Others were taken prisoner by 15 August 1586. Mary's two secretaries,Claude NauandGilbert Curle, and a clerkJérôme Pasquierwere likewise taken into custody and interrogated.[28]A large chest filled with Mary's papers seized at Chartley was taken to London.[29]
The conspirators were sentenced to death fortreasonand conspiracy against the crown, and were to behanged, drawn, and quartered. This first group included Babington, Ballard,Chidiock Tichborne,Thomas Salisbury,Henry Donn, Robert Barnewell andJohn Savage. A further group of seven men includingEdward Habington, Charles Tilney, Edward Jones, John Charnock, John Travers, Jerome Bellamy, and Robert Gage, were tried and convicted shortly afterward. Ballard and Babington were executed on 20 September 1586 along with the other men who had been tried with them. Such was the public outcry at the horror of their execution that Elizabeth changed the order for the second group to be allowed to hang until "quite dead" before disembowelling and quartering.
In October 1586, Mary was sent to be tried atFotheringhay CastleinNorthamptonshireby 46 English lords, bishops and earls. She was not permitted legal counsel, to review the evidence against her, or to call witnesses. Portions of Phelippes' letter translations were read at the trial. Mary denied knowing Babington and Ballard,[30]but it was insisted that she had sent a reply to Babington using the same cipher code, entrusting the letter to a servant in a blue coat.[31]Mary was convicted of treason against England. One English Lord voted not guilty. Elizabeth signed her cousin-once-removed's death warrant,[32]and on 8 February 1587, in front of 300 witnesses, Mary, Queen of Scots, was executed by beheading.[33]
Mary Stuart(German:Maria Stuart), a dramatised version of the last days ofMary, Queen of Scots, including the Babington Plot, was written byFriedrich Schillerand performed inWeimar, Germany, in 1800. This in turn formed the basis forMaria Stuarda, an opera byDonizetti, in 1835. The Babington Plot occurs before the events of the opera and is referenced only twice during it; on the second such occasion Mary admits her own part in it in private to her confessor (a role taken by Lord Talbot in the opera, although not in real life).
The story of the Babington Plot is dramatised in the novelConies in the HaybyJane Lane(ISBN0-7551-0835-3), and features prominently inAnthony Burgess'sA Dead Man in Deptford. A fictional account is given in theMy Storybook series,The Queen's Spies(retitledTo Kill A Queen2008) told in diary format by a fictional Elizabethan girl, Kitty.
The Babington plot forms the historical background – and provides much of the intrigue – forHoly Spy, the 7th in the historical detective series by Rory Clements, featuring John Shakespeare, an intelligencer forWalsinghamand elder brother of the more famousWill.
The simplified version of the Babington plot is also the subject of the children's or Young Adult novelA Traveller in Time(1939), byAlison Uttley, who grew up near the Babington family home in Derbyshire. A young modern girl finds that she slips back to the time shortly before the Plot is about to be implemented. This was later made into a BBC TV mini-series in 1978, with small changes to the original novel.
The Babington Plot is also dramatized in the 2017Ken FollettnovelA Column of Fire, in Jacopo della Quercia's 2015 novelLicense to Quill, and inSJ Parris's 2020 novelExecution, the latest of her novels featuringGiordano Brunoas protagonist.
The Babington Plot is also dramatized in the 2024 T.S. Milbourne novel "Gilbert Gifford".
The novel focuses on Gilbert Gifford, the double agent in the Babington Plot, and portrays his complicated position through the dramatization of the people involved in the plot.
The plot figures prominently in the first chapter ofThe Code Book, a survey of the history of cryptography written bySimon Singhand published in 1999.
Episode four of the 1971 television miniseriesElizabeth R(titled "Horrible Conspiracies") is devoted to the Babington Plot. It is also depicted in the miniseriesElizabeth I(2005) and the filmsMary, Queen of Scots(1971),Elizabeth: The Golden Age(2007) andMary Queen of Scots(2018).
A 45-minute drama entitledThe Babington Plot, written by Michael Butt and directed bySasha Yevtushenko, was broadcast onBBC Radio 4on 2 December 2008 as part of theAfternoon Drama.[34]This drama took the form of a documentary on the first anniversary of the executions, with the story being told from the perspectives of Thomas Salisbury, Robert Poley, Gilbert Gifford and others who, while not conspirators, are in some way connected with the events, all of whom are interviewed by the Presenter (played byStephen Greif). The cast also includedSamuel BarnettasThomas Salisbury,Burn GormanasRobert Poley, Jonathan Taffler asThomas Phelippesand Inam Mirza asGilbert Gifford.
Episode one of the 2017 BBC miniseriesElizabeth I's Secret Agents[35](broadcast in the U.S. onPBSin 2018 asQueen Elizabeth's Secret Agents[36]) deals in part with the Babington plot.
|
https://en.wikipedia.org/wiki/Babington_Plot
|
Cooperative bankingis retail and commercial banking organized on a cooperative basis. Cooperativebanking institutionstake deposits and lend money in most parts of the world.
Cooperative banking, as discussed here, includes retail banking carried out bycredit unions,mutual savings banks,building societiesandcooperatives, as well as commercial banking services provided bymutual organizations(such ascooperative federations) to cooperative businesses.
Cooperative banks are owned by their customers and follow thecooperative principleof one person, one vote. Co-operative banks are often regulated under both banking and cooperative legislation. They provide services such as savings and loans to non-members as well as to members, and some participate in the wholesale markets for bonds, money and even equities.[1]Many cooperative banks are traded on publicstock markets, with the result that they are partly owned by non-members.
Member control can be diluted by these outside stakes, so they may be regarded as semi-cooperative.
Cooperative banking systems are also usually more integrated than credit union systems. Local branches of co-operative banks select their own boards of directors and manage their own operations, but most strategic decisions require approval from a central office. Credit unions usually retain strategic decision-making at a local level, though they share back-office functions, such as access to the global payments system, by federating.
Some cooperative banks are criticized for diluting their cooperative principles. Principles 2-4 of the "Statement on the Co-operative Identity" can be interpreted to require that members must control both the governance systems and capital of their cooperatives. A cooperative bank that raises capital on public stock markets creates a second class of shareholders who compete with the members for control. In some circumstances, the members may lose control. This effectively means that the bank ceases to be a cooperative. Accepting deposits from non-members may also lead to a dilution of member control.
Credit unions have the purpose of promoting thrift, providing credit at reasonable rates, and providing other financial services to their members.[2]Their members are usually required to share acommon bond, such as locality, employer, religion or profession, and credit unions are usually funded entirely by member deposits, and avoid outside borrowing.
They are typically (though not exclusively) the smaller form of cooperative banking institution.
In some countries they are restricted to providing only unsecured personal loans, whereas in others, they can provide business loans to farmers, and mortgages.
The special banks providing Long Term Loans are calledLand Development Banks(LDBs). The first LDB was started at Jhang inPunjabin 1920. This bank is also based on cooperative principles. The main objective of the LDBs is to promote the development of land and agriculture and increase agricultural production. The LDBs provide long-term finance to members directly through their branches.[3]
Building societies exist in Britain, Ireland and several Commonwealth countries. They are similar to credit unions in organisation, though few enforce acommon bond. However, rather than promoting thrift and offering unsecured and business loans, their purpose is to provide home mortgages for members. Borrowers and depositors are society members, setting policy and appointing directors on a one-member, one-vote basis. Building societies often provide other retail banking services, such as current accounts, credit cards and personal loans. In the United Kingdom, regulations permit up to half of their lending to be funded by debt to non-members, allowing societies to access wholesale bond and money markets to fund mortgages. The world's largest building society is Britain'sNationwide Building Society.
Mutual savings banksand mutualsavings and loan associationswere very common in the 19th and 20th centuries, but declined in number and market share in the late 20th century, becoming globally less significant than cooperative banks, building societies and credit unions.
Trustee savings banksare similar to other savings banks, but they are not cooperatives, as they are controlled by trustees, rather than their depositors.
The most important international associations of cooperative banks are the Brussels-basedEuropean Association of Co-operative Bankswhich has 28 European and non-European members, and the Paris-based International Cooperative Banking Association (ICBA), which has member institutions from around the world too.
In Canada, cooperative banking is provided by credit unions (caisses populairesin French). As of September 30, 2012, there were 357 credit unions andcaisses populairesaffiliated with Credit Union Central of Canada. They operated 1,761 branches across the country with 5.3 million members and $149.7 billion in assets.[4]
Thecaisse populairemovement started byAlphonse DesjardinsinQuebec,Canada, pioneered credit unions. Desjardins opened the first credit union in North America in 1900, from his home inLévis, Quebec, marking the beginning of theMouvement Desjardins. He was interested in bringing financial protection to working people.
Britishbuilding societiesdeveloped into general-purpose savings and banking institutions with ‘one member, one vote’ ownership and can be seen as a form of financial cooperative (although manyde-mutualisedinto conventionally owned banks in the 1980s and 1990s). Until 2017, theCo-operative GroupincludedThe Co-operative Bank; however, despite its name, the Co-operative Bank was not itself a trueco-operativeas it was not owned directly by its members. Instead it was part-owned by a holding company which was itself a co-operative – theCo-operative Banking Group.[5]It still retains aninsuranceprovider,The Co-operative Insurance, noted for promotingethical investment. For the financial year 2021/2022, the British building society sector had assets of around £483 billion, of which more than half were accounted for by the cooperativeNationwide Building Society.
Important continental cooperative banking systems include theCrédit Agricole,Crédit Mutuel, andGroupe BPCEin France,Caja Rural Cooperative GroupandCajamar Cooperative Groupin Spain,Rabobankin the Netherlands, theGerman Cooperative Financial Groupin Germany,ICCREA BancaandCassa Centrale Banca - Credito Cooperativo Italianoin Italy,Migrosand Coop Bank in Switzerland, and theRaiffeisen Banking Groupin Austria. The cooperative banks that are members of theEuropean Association of Co-operative Bankshave 130 million customers, 4 trillion euros in assets, and 17% of Europe's deposits. The International Confederation of Cooperative Banks (CIBP) is the oldest association of cooperative banks at international level.
In theNordic countries, there is a clear distinction betweenmutual savings banks(Sparbank) and truecredit unions(Andelsbank).
In Italy, a 2015 reform required popular banks (Italian:Banca Popolare) with assets of greater than €8 billion todemutualizeinto joint-stock companies (Italian:società per azioni).[6]
Credit unions in the United Stateshad 96.3 million members in 2013 and assets of $1.06 trillion.[7][8]The sector had a failure rate five times lower than that of other banks during the2008 financial crisis[9]and more than doubled lending to small businesses between 2008 and 2016, from $30 billion to $60 billion, while lending to small businesses overall during the same period declined by around $100 billion.[10]
Public trust in credit unions in the United States stands at 60%, compared to 30% for big banks[11]and small businesses are 80% less likely to be dissatisfied with a credit union than with a big bank.[12]
Cooperative banks serve an important role in theIndian economy, especially in rural areas. In urban areas, they mainly serve small industry and self-employed workers. They are registered under the Cooperative Societies Act, 1912. They are regulated by theReserve Bank of Indiaunder theBanking Regulation Act, 1949and Banking Laws (Application to Cooperative Societies) Act, 1965.[13]Anyonya Sahakari Mandali, established in 1889 in the province ofBaroda, is the earliest known cooperative credit union in India.[14]
The Cooperative Credit System in India consists of Short Term and Long Term credit institutions. The short-term credit structure, which takes care of the short term (1 to 5 years) credit needs of the farmers, is a three-tier structure in most of the States viz., Primary Agricultural Cooperative Societies (PACCS) at the village level, District Central Cooperative Banks at the District level and State Cooperative Bank at the State level, and two-tier in some States viz., State Cooperative Banks and PACCS. The long-term credit structure, which caters to the long-term credit needs of the farmers (up to 20 years), is a two-tier structure with Primary Agriculture and Rural Development Banks (PARDBs) at the village level and State Agriculture and Rural Development Banks. The State Cooperative Banks and Central Cooperative Banks are licensed by Reserve Bank of India under the Banking Regulation Act. While the StCBs and DCCBs function like a normal Bank they focus mainly on agricultural credit. While Reserve Bank of India is the Regulating Authority,National Bank for Agriculture and Rural Development(NABARD) provides refinance support and takes care of inspection of StCBs and DCCBs. The first Cooperative Credit Society in India was started in 1904 at Thiroor in Tiruvallur District in Tamil Nadu.
Primary Cooperative Banks which are otherwise known as Urban Cooperative Banks are registered as Cooperative Societies under the Cooperative Societies Acts of the concerned States or the Multi-State Cooperative Societies Act function in urban areas and their business is similar to that of Commercial Banks. They are licensed by RBI to do banking business. Reserve Bank of India is both the controlling and inspecting authority for the Primary Cooperative Banks.
Ofek(Hebrew: אופק) is a cooperative initiative founded in mid-2012 that intended to establish the first cooperative bank in Israel.[15]
The recent phenomena ofmicrocreditandmicrofinanceare often based on a cooperative model. These focus onsmall businesslending. In 2006,Muhammad Yunus, founder of theGrameen Bankin Bangladesh, won theNobel Peace Prizefor his ideas regarding development and his pursuit of the microcredit concept. In this concept the institution provides micro loans to people who couldn't otherwise secure loans through conventional means.
However, cooperative banking differs from modern microfinance. Particularly, members’ control over financial resources is the distinguishing feature between the cooperative model and modern microfinance. The not-for-profit orientation of modern microfinance has gradually been replaced by full-cost recovery and self-sustainable microfinance approaches. The microfinance model has been gradually absorbed by market-oriented or for-profit institutions in most underdeveloped economies. The current dominant model of microfinance, whether it is provided by not-for-profit or for-profit institutions, places the control over financial resources and their allocation in the hands of a small number of microfinance providers that benefit from the highly profitable sector.
Cooperative banking is different in many aspects from standard microfinance institutions, both for-profit and not-for-profit organizations. Although group lending may seemingly share some similarities with cooperative concepts, in terms of joint liability, the distinctions are much greater, especially when it comes to autonomy, mobilization and control over resources, legal and organizational identity, and decision-making. Early financial cooperatives founded in Germany were more able to provide larger loans relative to the borrowers’ income, with longer-term maturity at lower interest rates compared to modern standard microfinance institutions. The main source of funds for cooperatives is local savings, while microfinance institutions in underdeveloped economies rely heavily on donations, foreign funds, external borrowing, or retained earnings, which implies high interest rates. High interest rates, short-term maturities, and tight repayment schedules are destructive instruments for low- and middle-income borrowers, which may lead to serious debt traps or, in the best scenarios, will not support any sort of capital accumulation. Without improving the ability of agents to earn, save, and accumulate wealth, there are no real economic gains from financial markets to the lower- and middle-income populations.[16]
A 2013 report byILOconcluded that cooperative banks outperformed their competitors during the2008 financial crisis. The cooperative banking sector had a 20% market share of the European banking sector, but accounted for only 7% of all the write-downs and losses between the third quarter of 2007 and first quarter of 2011. Cooperative banks were also over-represented in lending to small and medium-sized businesses in all of the 10 countries included in the report.[28]
Credit unionsin the US had a failure rate five times lower than that of other banks during the crisis[9]and more than doubled lending to small businesses between 2008 and 2016, from $30 billion to $60 billion, while lending to small businesses overall during the same period declined by around $100 billion.[10]
|
https://en.wikipedia.org/wiki/Cooperative_banking
|
Olaf Sporns(born 18 September 1963) is Provost Professor in Psychological and Brain Sciences atIndiana Universityand scientific co-director of the university's Network Science Institute.[1]He is the founding editor of theacademic journalNetwork Neuroscience, published byMIT Press.[citation needed][2]
Sporns received his degree fromUniversity of TübingeninTübingen, West Germany, before going toNew Yorkto study at theRockefeller UniversityunderGerald Edelman. After receiving his doctorate, he followed Edelman to theNeurosciences InstituteinLa Jolla,California.
His focus is in the area of computational cognitive neuroscience. His topics of study include functional integration and binding in the cerebral cortex, neural models of perception and action, network structure and dynamics, applications of information theory to the brain and embodied cognitive science using robotics.[3]He was awarded aGuggenheim Fellowshipin 2011 in the Natural Sciences category.[citation needed]
One of the core areas of research being conducted by Sporns is in the area of complexity of the brain. One aspect in particular is howsmall-world networkeffects are seen in the neural connections which are decentralized in the brain.[4]Research in collaboration with scientists across the world has revealed that there are pathways in the brain that are very well connected.[5]This is insightful for understanding how the architecture of the brain may relate toschizophrenia,autismandAlzheimer's disease.
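As a minimal sketch of the small-world idea (illustrative parameters, not brain data), the networkx snippet below compares a regular ring lattice with a lightly rewired version: a few random shortcuts sharply reduce the average path length while clustering stays comparatively high.

import networkx as nx

n, k = 200, 6    # 200 nodes, each initially joined to its 6 nearest neighbours

lattice = nx.connected_watts_strogatz_graph(n, k, p=0.0, seed=1)       # no rewiring
small_world = nx.connected_watts_strogatz_graph(n, k, p=0.05, seed=1)  # 5% shortcuts

for name, G in [("ring lattice", lattice), ("small world", small_world)]:
    print(name,
          "clustering:", round(nx.average_clustering(G), 3),
          "average path length:", round(nx.average_shortest_path_length(G), 2))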
Sporns is also interested in understanding the relationship between statistical properties of neuronal populations and perceptual data. How does an organism use and structure its environment in such a way as to achieve (statistically) complex input? To this end, he has run statistical analysis on movement patterns and input within simulations, videos and robotic devices.[citation needed]
Sporns also has a research interest in reward models of the brain utilizing robots.[6]The reward models have shown ways in whichdopamineis onset bydrug addiction.
Though not directly related to his core research, in early 2000 Sporns was interested indeveloping robots with human-like qualities in their ability to learn.[7]
|
https://en.wikipedia.org/wiki/Olaf_Sporns
|
Atext-to-video modelis amachine learning modelthat uses anatural languagedescription as input to produce avideorelevant to the input text.[1]Advancements during the 2020s in the generation of high-quality, text-conditioned videos have largely been driven by the development of videodiffusion models.[2]
There are different models, includingopen sourcemodels. CogVideo, which accepts Chinese-language input,[3]is the earliest text-to-video model "of 9.4 billion parameters" to be developed, with its demo version of open source codes first presented onGitHubin 2022.[4]That year,Meta Platformsreleased a partial text-to-video model called "Make-A-Video",[5][6][7]andGoogle'sBrain(laterGoogle DeepMind) introduced Imagen Video, a text-to-video model with 3DU-Net.[8][9][10][11][12]
In March 2023, a research paper titled "VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation" was published, presenting a novel approach to video generation.[13]The VideoFusion model decomposes the diffusion process into two components: base noise and residual noise, which are shared across frames to ensure temporal coherence. By utilizing a pre-trained image diffusion model as a base generator, the model efficiently generated high-quality and coherent videos. Fine-tuning the pre-trained model on video data addressed the domain gap between image and video data, enhancing the model's ability to produce realistic and consistent video sequences.[14]In the same month,Adobeintroduced Firefly AI as part of its features.[15]
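The snippet below is a rough numerical sketch of the decomposition idea described for VideoFusion, not the actual implementation: a single base noise tensor is shared by all frames and mixed with an independent per-frame residual, so that the starting noise of neighbouring frames is strongly correlated. The mixing weight alpha is a free parameter of this sketch.

import numpy as np

rng = np.random.default_rng(0)
num_frames, height, width, channels = 8, 32, 32, 3

# One base noise tensor shared by every frame (content common to the whole clip)...
base_noise = rng.standard_normal((1, height, width, channels))
# ...plus an independent residual noise tensor for each frame (frame-to-frame variation)
residual_noise = rng.standard_normal((num_frames, height, width, channels))

alpha = 0.8   # weight of the shared component; the square-root weights keep unit variance
frame_noise = np.sqrt(alpha) * base_noise + np.sqrt(1 - alpha) * residual_noise

# Neighbouring frames now start the reverse diffusion from correlated noise
corr = np.corrcoef(frame_noise[0].ravel(), frame_noise[1].ravel())[0, 1]
print(f"noise correlation between frame 0 and frame 1: {corr:.2f}")   # about 0.8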
In January 2024,Googleannounced development of a text-to-video model named Lumiere which is anticipated to integrate advanced video editing capabilities.[16]Matthias NiessnerandLourdes Agapitoat AI companySynthesiawork on developing 3D neural rendering techniques that can synthesise realistic video by using 2D and 3D neural representations of shape, appearances, and motion for controllable video synthesis of avatars.[17]In June 2024, Luma Labs launched itsDream Machinevideo tool.[18][19]That same month,[20]Kuaishouextended its Kling AI text-to-video model to international users. In July 2024,TikTokownerByteDancereleased Jimeng AI in China, through its subsidiary, Faceu Technology.[21]By September 2024, the Chinese AI companyMiniMaxdebuted its video-01 model, joining other established AI model companies likeZhipu AI,Baichuan, andMoonshot AI, which contribute to China’s involvement in AI technology.[22]
Alternative approaches to text-to-video models include[23]Google's Phenaki, Hour One,Colossyan,[3]Runway's Gen-3 Alpha,[24][25]and OpenAI'sSora.[26][27]Several additional text-to-video models, such as Plug-and-Play, Text2LIVE, and TuneAVideo, have emerged.[28]Googleis also preparing to launch a video generation tool named Veo forYouTube Shortsin 2025.[29]FLUX.1developer Black Forest Labs has announced its text-to-video model SOTA.[30]
Several architectures have been used to create text-to-video models. Similar toText-to-Imagemodels, these models can be trained usingRecurrent Neural Networks(RNNs) such aslong short-term memory(LSTM) networks, which have been used for Pixel Transformation Models and Stochastic Video Generation Models, aiding consistency and realism respectively.[31]An alternative to these is the use of transformer models.Generative adversarial networks(GANs) andVariational autoencoders(VAEs), which can aid in the prediction of human motion,[32]as well as diffusion models, have also been used to develop the image generation aspects of the model.[33]
Text-video datasets used to train models include, but are not limited to, WebVid-10M, HDVILA-100M, CCV, ActivityNet, and Panda-70M.[34][35]These datasets contain millions of original videos of interest, generated videos, captioned videos, and textual information that help train models for accuracy. Text-prompt datasets used to train models include, but are not limited to, PromptSource, DiffusionDB, and VidProM.[34][35]These datasets provide the range of text inputs needed to teach models how to interpret a variety of textual prompts.
The video generation process involves synchronizing the text inputs with video frames, ensuring alignment and consistency throughout the sequence.[35]This predictive process is subject to decline in quality as the length of the video increases due to resource limitations.[35]
Despite the rapid evolution of text-to-video models in their performance, a primary limitation is that they are very computationally demanding, which limits their capacity to provide high-quality and lengthy outputs.[36][37]Additionally, these models require a large amount of specific training data to be able to generate high-quality and coherent outputs, which raises the issue of accessibility.[37][36]
Moreover, models may misinterpret textual prompts, resulting in video outputs that deviate from the intended meaning. This can occur due to limitations in capturing semantic context embedded in text, which affects the model’s ability to align generated video with the user’s intended message.[37][35]Various models, including Make-A-Video, Imagen Video, Phenaki, CogVideo, GODIVA, and NUWA, are currently being tested and refined to enhance their alignment capabilities and overall performance in text-to-video generation.[37]
Another issue with the outputs is that text or fine details in AI-generated videos often appear garbled, a problem thatstable diffusionmodels also struggle with. Examples include distorted hands and unreadable text.
The deployment of Text-to-Video models raises ethical considerations related to content generation. These models have the potential to create inappropriate or unauthorized content, including explicit material, graphic violence, misinformation, and likenesses of real individuals without consent.[38]Ensuring that AI-generated content complies with established standards for safe and ethical usage is essential, as content generated by these models may not always be easily identified as harmful or misleading. The ability of AI to recognize and filter out NSFW or copyrighted content remains an ongoing challenge, with implications for both creators and audiences.[38]
Text-to-video models offer a broad range of applications that may benefit various fields, from educational and promotional to creative industries. These models can streamline content creation for training videos, movie previews, gaming assets, and visualizations, making it easier to generate content.[39]
|
https://en.wikipedia.org/wiki/Text-to-video_model
|
Therepeating circleis an instrument forgeodetic surveying, developed from thereflecting circlebyÉtienne Lenoirin 1784.[1]He invented it while an assistant ofJean-Charles de Borda, who later improved the instrument. It was notable as being the equal of thegreat theodolitecreated by the renowned instrument maker,Jesse Ramsden. It was used tomeasurethemeridian arcfromDunkirktoBarcelonabyJean Baptiste DelambreandPierre Méchain(see:meridian arc of Delambre and Méchain).
The repeating circle is made of twotelescopesmounted on a shared axis with scales to measure the angle between the two. The instrument combines multiple measurements to increase accuracy with the following procedure: one telescope is sighted on the first target and the other on the second, so that the scale records the angle between them; the two telescopes are then clamped together and the whole instrument is rotated until the second telescope points at the first target; finally, the first telescope alone is moved back to sight the second target.
At this stage, the angle on the instrument is double the angle of interest between the points. Repeating the procedure causes the instrument to show 4× the angle of interest, with further iterations increasing it to 6×, 8×, and so on. In this way, many measurements can be added together, allowing some of the random measurement errors to cancel out.[2]
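A small simulation (illustrative numbers only, not from the source) shows why the accumulation helps: if each sighting carries an independent random error, dividing the accumulated reading by the number of repetitions shrinks the spread of the estimate roughly as one over the square root of that number.

import random

random.seed(0)
true_angle = 37.25    # degrees; a made-up angle of interest
sigma = 0.01          # standard deviation of a single sighting error, in degrees

def sight_once():
    return true_angle + random.gauss(0, sigma)

def repeating_circle_estimate(n_repeats):
    # Each repetition adds one more copy of the angle (with fresh sighting error) to the scale
    accumulated = sum(sight_once() for _ in range(n_repeats))
    return accumulated / n_repeats

for n in (1, 4, 16, 64):
    estimates = [repeating_circle_estimate(n) for _ in range(2000)]
    mean = sum(estimates) / len(estimates)
    spread = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
    print(f"{n:3d} repetitions: spread of the estimate ~ {spread:.4f} degrees")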
The repeating circle was used byCésar-François Cassini de Thury, assisted byPierre Méchain, for thetriangulationof theAnglo-French Survey. It would later be used for theArc measurement of Delambre and Méchainas improvements in the measuring device designed by Borda and used for this survey also raised hopes for a more accurate determination of the length of theFrench meridian arc.[3]
When themetrewas chosen as an international unit of length, it was well known that by measuring the latitude of two stations inBarcelona, Méchain had found that the difference between these latitudes was greater than predicted by direct measurement of distance by triangulation and that he did not dare to admit this inaccuracy.[4][5]This was later explained by clearance in the central axis of the repeating circle causing wear and consequently thezenithmeasurements contained significantsystematic errors.[6]
|
https://en.wikipedia.org/wiki/Repeating_circle
|
Real-time locating systems(RTLS), also known asreal-time tracking systems, are used to automaticallyidentifyandtrackthe location of objects or people inreal time, usually within a building or other contained area. Wireless RTLS tags are attached to objects or worn by people, and in most RTLS, fixed reference points receive wireless signals from tags to determine their location.[1]Examples of real-time locating systems include tracking automobiles through anassembly line, locating pallets of merchandise in a warehouse, or finding medical equipment in a hospital.
The physical layer of RTLS technology is often radio frequency (RF) communication. Some RTLS tags use optical (usually infrared) or acoustic (usually ultrasound) technology together with, or in place of, RF. Fixed reference points can be transmitters, receivers, or both, resulting in numerous possible technology combinations.
RTLS are a form oflocal positioning systemand do not usually refer toGPSor tomobile phone tracking. Location information usually does not include speed, direction, or spatial orientation.
The term RTLS was created (circa 1998) at theID EXPOtrade show by Tim Harrington (WhereNet), Jay Werb (PinPoint), and Bert Moore (Automatic Identification Manufacturers, Inc., AIM). It was created to describe and differentiate anemerging technologythat not only provided the automatic identification capabilities of activeRFIDtags, but also added the ability to view the location on a computer screen. It was at this show that the first examples of a commercial radio based RTLS system were shown by PinPoint and WhereNet. Although this capability had been utilized previously by military and government agencies, the technology had been too expensive for commercial purposes. In the early 1990s, the first commercial RTLS were installed at three healthcare facilities in the United States and were based on the transmission and decoding ofinfrared lightsignals from actively transmitting tags. Since then, new technology has emerged that also enables RTLS to be applied to passive tag applications.
RTLS are generally used in indoor and/or confined areas, such as buildings, and do not provide global coverage likeGPS. RTLS tags are affixed to mobile items, such as equipment or personnel, to be tracked or managed. RTLS reference points, which can be either transmitters or receivers, are spaced throughout a building (or similar area of interest) to provide the desired tag coverage. In most cases, the more RTLS reference points that are installed, the better the location accuracy, until the technology limitations are reached.
A number of disparate system designs are all referred to as "real-time locating systems". Two primary system design elements are locating at choke points and locating in relative coordinates.
The simplest form ofchoke pointlocating is where short range ID signals from a moving tag are received by a single fixed reader in a sensory network, thus indicating the location coincidence of reader and tag. Alternately, a choke point identifier can be received by the moving tag and then relayed, usually via a second wireless channel, to a location processor. Accuracy is usually defined by the sphere spanned with the reach of the choke point transmitter or receiver. The use of directional antennas, or technologies such as infrared or ultrasound that are blocked by room partitions, can support choke points of various geometries.[2]
ID signals from a tag are received by a multiplicity of readers in asensory network, and a position is estimated using one or more locating algorithms, such astrilateration,multilateration, ortriangulation. Equivalently, ID signals from several RTLS reference points can be received by a tag and relayed back to a location processor. Localization with multiple reference points requires that distances between reference points in the sensory network be known in order to precisely locate a tag, and the determination of distances is calledranging.
Another way to calculate relative location is viamobile tagscommunicating with one another. The tag(s) will then relay this information to a location processor.
RF trilateration uses estimated ranges from multiple receivers to estimate the location of a tag. RF triangulation uses the angles at which the RF signals arrive at multiple receivers to estimate the location of a tag. Many obstructions, such as walls or furniture, can distort the estimated range and angle readings, leading to varying quality of the location estimate. Estimation-based locating is often characterized by its accuracy at a given distance, such as 90% accuracy within a 10-meter range.
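As a rough illustration of range-based locating, the following sketch performs a simple two-dimensional least-squares trilateration from noisy range estimates to known reference points. The coordinates, noise level, and function names are illustrative assumptions, not drawn from any particular RTLS product.

```python
import numpy as np

def trilaterate_2d(anchors, ranges):
    """Estimate a tag position from ranges to known anchors (2D, least squares).
    anchors: (N, 2) coordinates of fixed reference points; ranges: N measured distances.
    Subtracting the first range equation from the others linearizes the problem."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0, r0 = anchors[0, 0], anchors[0, 1], ranges[0]
    A = 2.0 * (anchors[1:] - anchors[0])                      # (N-1, 2)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - (x0**2 + y0**2))  # (N-1,)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]   # fixed readers, metres
true_pos = np.array([3.0, 7.0])
ranges = [np.linalg.norm(true_pos - np.array(a)) + np.random.normal(0, 0.1) for a in anchors]
print(trilaterate_2d(anchors, ranges))            # close to (3, 7)
```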
Some systems use locating technologies that can't pass through walls, such as infrared or ultrasound. These require line of sight (or near line of sight) to communicate properly. As a result, they tend to be more accurate in indoor environments.
RTLS can be used in numerouslogisticalor operational areas to:
RTLS may be seen as a threat toprivacywhen used to determine the location of people. The newly declared human right ofinformational self-determinationgives the right to prevent one's identity andpersonal datafrom being disclosed to others and also covers disclosure of locality, though this does not generally apply to theworkplace.
Several prominentlabor unionshave spoken out against the use of RTLS systems to track workers, calling them "the beginning ofBig Brother" and "aninvasion of privacy".[5]
Current location-tracking technologies can be used to pinpoint users of mobile devices in several ways. First, service providers have access to network-based and handset-based technologies that can locate a phone for emergency purposes. Second, historical location can frequently be discerned from service provider records. Thirdly, other devices such as Wi-Fi hotspots or IMSI catchers can be used to track nearby mobile devices in real time. Finally, hybrid positioning systems combine different methods in an attempt to overcome each individual method's shortcomings.[6]
There is a wide variety of systems concepts and designs to provide real-time locating.[7]
A general model for selection of the best solution for a locating problem has been constructed at theRadboud University of Nijmegen.[19]Many of these references do not comply with the definitions given in international standardization with ISO/IEC 19762-5[20]and ISO/IEC 24730-1.[21]However, some aspects of real-time performance are served and aspects of locating are addressed in context of absolute coordinates.
Depending on the physical technology used, at least one and often some combination of ranging and/or angulating methods are used to determine location:
Real-time locating is affected by a variety of errors. Many of the major reasons relate to the physics of the locating system, and may not be reduced by improving the technical equipment.
Many RTLS systems require direct and clear line-of-sight visibility. For those systems, where there is no visibility between mobile tags and fixed nodes, there will be no result, or an invalid result, from the locating engine. This applies to satellite locating as well as to other RTLS approaches such as angle of arrival and time of arrival. Fingerprinting is a way to overcome the visibility issue: if the locations in the tracking area have distinct measurement fingerprints, line of sight is not necessarily needed. For example, if each location has a unique combination of signal strength readings from transmitters, the location system will function properly. This is true, for example, with some Wi-Fi based RTLS solutions. However, having distinct signal strength fingerprints in each location typically requires a fairly high density of transmitters.
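A minimal sketch of the fingerprinting idea, assuming a small hypothetical database of surveyed signal-strength vectors: an observed vector is matched to the stored fingerprint that is closest in signal space (nearest neighbour). Real deployments use far denser surveys and more robust matching.

```python
# Hypothetical RSSI values in dBm; each surveyed location stores a vector of readings
# from known transmitters (AP1, AP2, AP3).
import math

fingerprints = {
    "room_101": (-40, -70, -80),
    "room_102": (-65, -45, -75),
    "corridor": (-55, -60, -60),
}

def locate(observed):
    # Pick the surveyed location whose fingerprint is nearest in signal space.
    return min(fingerprints, key=lambda loc: math.dist(fingerprints[loc], observed))

print(locate((-50, -62, -63)))   # -> "corridor"
```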
The measured location may appear entirely wrong. This is generally a result of overly simple operational models that fail to compensate for the many sources of error; it is impossible to report a proper location while ignoring those errors.
Real time is not a registered brand and implies no inherent quality; a variety of offerings sail under this term. Since motion causes location changes, the latency required to compute a new location may become dominant relative to the motion itself. Either an RTLS system that requires waiting for new results is not worth the money, or an operational concept that demands faster location updates does not match the chosen system's approach.
Location will never be reported exactly: the terms real-time and precision conflict in terms of measurement theory, just as precision and cost conflict in economic terms. This does not rule out precision, but limitations at higher update rates are inevitable.
A reported location that remains steadily offset from the physical position generally indicates insufficient over-determination and missing visibility along at least one link from fixed anchors to mobile transponders. Such an effect can also be caused by inadequate compensation for calibration needs.
Noise from various sources has an erratic influence on the stability of results. Smoothing the output to provide a steady appearance increases latency, which conflicts with real-time requirements.
Since objects with mass cannot jump instantaneously, such effects are mostly outside physical reality. Jumps in the reported location that do not correspond to any movement of the object itself generally indicate improper modeling in the location engine; such an effect is caused by changes in which secondary responses dominate.
A stationary object may be reported as moving when the measurements become biased by secondary-path reflections whose weight increases over time. This effect is caused by simple averaging and indicates insufficient discrimination of the first echoes.
The basic issues of RTLS are standardized by theInternational Organization for Standardizationand theInternational Electrotechnical Commissionunder the ISO/IEC 24730 series. In this series of standards, the basic standard ISO/IEC 24730-1 identifies the terms describing a form of RTLS used by a set of vendors but does not encompass the full scope of RTLS technology.
Currently several standards are published:
These standards do not stipulate any special method of computing locations, nor the method of measuring locations. This may be defined in specifications for trilateration, triangulation, or any hybrid approaches to trigonometric computing for planar or spherical models of a terrestrial area.
In RTLS applications in the healthcare industry, various studies have discussed the limitations of the currently adopted RTLS. The currently used technologies (RFID, Wi-Fi, and UWB, all RF-based) can be hazardous in the sense of interfering with sensitive equipment. A study carried out by Dr Erik Jan van Lieshout of the Academic Medical Centre of the University of Amsterdam, published in JAMA (the Journal of the American Medical Association),[24] claimed that "RFID and UWB could shut down equipment patients rely on", as "RFID caused interference in 34 of the 123 tests they performed". The first Bluetooth RTLS provider in the medical industry supports this in its article: "The fact that RFID cannot be used near sensitive equipment should in itself be a red flag to the medical industry". The RFID Journal responded to this study not by negating it but by describing a real-world mitigation: "The Purdue study showed no effect when ultrahigh-frequency (UHF) systems were kept at a reasonable distance from medical equipment. So placing readers in utility rooms, near elevators and above doors between hospital wings or departments to track assets is not a problem".[25] However, the question of "keeping at a reasonable distance" may still be an open one for RTLS technology adopters and providers in medical facilities.
In many applications it is difficult, and at the same time important, to make a proper choice among the various communication technologies (e.g., RFID, Wi-Fi) that an RTLS may include. Wrong design decisions made at early stages can lead to catastrophic results for the system and significant cost for fixes and redesign. To address this problem, a special methodology for RTLS design-space exploration was developed, combining steps such as modelling, requirements specification, and verification into a single efficient process.[26]
|
https://en.wikipedia.org/wiki/Real_time_locating
|
Adata dictionary, ormetadata repository, as defined in theIBM Dictionary of Computing, is a "centralized repository of information about data such as meaning, relationships to other data, origin, usage, and format".[1]Oracledefines it as a collection of tables with metadata. The term can have one of several closely related meanings pertaining todatabasesanddatabase management systems(DBMS):
The termsdata dictionaryanddata repositoryindicate a more general software utility than a catalogue. Acatalogueis closely coupled with the DBMS software. It provides the information stored in it to the user and the DBA, but it is mainly accessed by the various software modules of the DBMS itself, such asDDLandDMLcompilers, the query optimiser, the transaction processor, report generators, and the constraint enforcer. On the other hand, adata dictionaryis a data structure that storesmetadata, i.e., (structured) data about information. The software package for a stand-alone data dictionary or data repository may interact with the software modules of the DBMS, but it is mainly used by the designers, users and administrators of a computer system for information resource management. These systems maintain information on system hardware and software configuration, documentation, application and users as well as other information relevant to system administration.[2]
If a data dictionary system is used only by the designers, users, and administrators and not by the DBMS Software, it is called apassive data dictionary.Otherwise, it is called anactive data dictionaryordata dictionary.When a passive data dictionary is updated, it is done so manually and independently from any changes to a DBMS (database) structure. With an active data dictionary, the dictionary is updated first and changes occur in the DBMS automatically as a result.
Databaseusersandapplicationdevelopers can benefit from an authoritative data dictionary document that catalogs the organization, contents, and conventions of one or more databases.[3]This typically includes the names and descriptions of varioustables(recordsorentities) and their contents (fields) plus additional details, like thetypeand length of eachdata element. Another important piece of information that a data dictionary can provide is the relationship between tables. This is sometimes referred to inentity-relationshipdiagrams (ERDs), or if using set descriptors, identifying which sets database tables participate in.
In an active data dictionary constraints may be placed upon the underlying data. For instance, a range may be imposed on the value of numeric data in a data element (field), or a record in a table may be forced to participate in a set relationship with another record-type. Additionally, a distributed DBMS may have certain location specifics described within its active data dictionary (e.g. where tables are physically located).
The data dictionary consists of record types (tables) created in the database by systems generated command files, tailored for each supported back-end DBMS. Oracle has a list of specific views for the "sys" user. This allows users to look up the exact information that is needed. Command files contain SQL Statements forCREATE TABLE,CREATE UNIQUE INDEX,ALTER TABLE(for referential integrity), etc., using the specific statement required by that type of database.
There is no universal standard as to the level of detail in such a document.
In the construction of database applications, it can be useful to introduce an additional layer of data dictionary software, i.e.middleware, which communicates with the underlying DBMS data dictionary. Such a "high-level" data dictionary may offer additional features and a degree of flexibility that goes beyond the limitations of the native "low-level" data dictionary, whose primary purpose is to support the basic functions of the DBMS, not the requirements of a typical application. For example, a high-level data dictionary can provide alternativeentity-relationship modelstailored to suit different applications that share a common database.[4]Extensions to the data dictionary also can assist inquery optimizationagainstdistributed databases.[5]Additionally, DBA functions are often automated using restructuring tools that are tightly coupled to an active data dictionary.
Software frameworksaimed atrapid application developmentsometimes include high-level data dictionary facilities, which can substantially reduce the amount of programming required to buildmenus,forms, reports, and other components of a database application, including the database itself. For example, PHPLens includes aPHPclass libraryto automate the creation of tables, indexes, andforeign keyconstraintsportablyfor multiple databases.[6]Another PHP-based data dictionary, part of the RADICORE toolkit, automatically generates programobjects,scripts, and SQL code for menus and forms withdata validationand complexjoins.[7]For theASP.NETenvironment,Base One'sdata dictionary provides cross-DBMS facilities for automated database creation, data validation, performance enhancement (cachingand index utilization),application security, and extendeddata types.[8]Visual DataFlexfeatures[9]provides the ability to use DataDictionaries as class files to form middle layer between the user interface and the underlying database. The intent is to create standardized rules to maintain data integrity and enforce business rules throughout one or more related applications.
Some industries use generalized data dictionaries as technical standards to ensure interoperability between systems. The real estate industry, for example, abides by aRESO's Data Dictionaryto which theNational Association of REALTORSmandates[10]itsMLSscomply with through its policy handbook.[11]This intermediate mapping layer for MLSs' native databases is supported by software companies which provide API services to MLS organizations.
Developers use adata description specification(DDS) to describe data attributes in file descriptions that are external to the application program that processes the data, in the context of anIBM i.[12]Thesys.ts$table in Oracle stores information about every table in the database. It is part of the data dictionary that is created when theOracle Databaseis created.[13]Developers may also use DDS context fromfree and open-source software(FOSS) for structured and transactional queries in open environments.
Here is a non-exhaustive list of typical items found in a data dictionary for columns or fields:
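A minimal sketch (Python, with illustrative field names) of how such column-level entries, for example the element name, data type, length, nullability, default value, and description, might be represented in application code; these names are assumptions for illustration rather than the schema of any specific DBMS catalogue.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ColumnEntry:
    table_name: str          # table (entity) the column belongs to
    column_name: str         # name of the field / data element
    data_type: str           # e.g. "VARCHAR", "NUMBER", "DATE"
    length: Optional[int]    # maximum length or precision, if applicable
    nullable: bool           # whether NULLs are allowed
    default: Optional[str]   # default value, if any
    description: str         # meaning of the data element
    is_primary_key: bool = False

entry = ColumnEntry("customer", "customer_id", "NUMBER", 10, False, None,
                    "Surrogate key identifying a customer", is_primary_key=True)
print(entry)
```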
|
https://en.wikipedia.org/wiki/Data_dictionary
|
Inmathematics, aMarkov information source, or simply, aMarkov source, is aninformation sourcewhose underlying dynamics are given by a stationary finiteMarkov chain.
An information source is a sequence of random variables ranging over a finite alphabet Γ, having a stationary distribution.
A Markov information source is then a (stationary) Markov chain M, together with a function f : S → Γ
that maps states S of the Markov chain to letters in the alphabet Γ.
A unifilar Markov source is a Markov source for which the values f(s_k) are distinct whenever each of the states s_k is reachable, in one step, from a common prior state. Unifilar sources are notable in that many of their properties are far more easily analyzed, as compared to the general case.
Markov sources are commonly used incommunication theory, as a model of atransmitter. Markov sources also occur innatural language processing, where they are used to represent hidden meaning in a text. Given the output of a Markov source, whose underlying Markov chain is unknown, the task of solving for the underlying chain is undertaken by the techniques ofhidden Markov models, such as theViterbi algorithm.
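A minimal sketch of a Markov information source, with an arbitrary three-state transition matrix and an emission function f mapping states to a two-letter alphabet; the numbers are illustrative only.

```python
import random

P = [                      # transition probabilities P[i][j] = Pr(next = j | current = i)
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
]
f = {0: 'a', 1: 'b', 2: 'b'}   # emission function f : S -> Gamma (not unifilar in general)

def generate(length, state=0):
    """Emit f(state) at each step, then move to the next state according to P."""
    out = []
    for _ in range(length):
        out.append(f[state])
        state = random.choices(range(3), weights=P[state])[0]
    return ''.join(out)

print(generate(20))
```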
|
https://en.wikipedia.org/wiki/Markov_information_source
|
AnRDP shopis a website where access tohacked computersis sold tocybercriminals.
The computers may be acquired via scanning the web for openRemote Desktop Protocolconnections andbrute-forcingpasswords.[1]High-valueransomwaretargets are sometimes available such as airports.[2]Access to a compromised machine retails from $3 to $19 depending on automatically gathered system and network metrics using a standardisedback door.[3][4]
Many such shops exist on thedark web.
Ukrainian sites such asxDedic[3]do not sell access to machines within theformer Soviet nations.[5]
|
https://en.wikipedia.org/wiki/RDP_shop
|
"Syndicate–2"was adisinformation operationdeveloped and carried out by theState Political Directorate, aimed at eliminatingSavinkov's anti–Soviet underground.
The interest of the famous terroristBoris Savinkovto participate in underground anti–Soviet activities prompted the extraordinary commissioners to develop a plan to involve him in such activities under the supervision of special services, in order to eliminate the entire underground network. Such a plan was developed in the Counterintelligence Department of the State Political Administration under the People's Commissariat of Internal Affairs of the Russian Socialist Federative Soviet Republic, created in 1922. On May 12, 1922, a circular letter "On the Savinkov's Organization" was issued (it was published on the fourth day of the department's existence and became the first circular letter published). This letter addressed the issue of a new method ofcounterintelligencework – the creation of legendary organizations. Operation Syndicate–2 was carried out in parallel with a similar OperationTrust, aimed at liquidating the monarchist underground. Operation Syndicate–2 involved the head of the Counterintelligence DivisionArtur Artuzov, Deputy ChiefRoman Pilar, Assistant Chief Sergei Puzitskiy and the personnel of the 6th Division of the Counterintelligence Division: Chief Ignatiy Sosnovskiy, Assistant Chief Nikolai Demidenko, secret officer Andrey Fedorov, authorized Grigory Syroezhkin,Semyon Gendin, Assistant to the Authorized Counterintelligence Department of the Plenipotentiary Representation of the State Political Administration for the Western Territory, Jan Krikman.[1]
After the failure of the resistance inPolandand a series of failures in the anti–Soviet field, Boris Savinkov decided to single–handedly organize uprisings and terrorist acts in Russia, reviving thePeople's Union for the Defense of the Motherland and Freedom. While inParis, in the summer of 1922, he sent hisadjutantLeonid Sheshenya to Russia, where he was detained by Soviet border guards[2]while crossing the border from Poland.[3]Through Sheshenya (who was under the threat of beingshotfor participating inBalakhovich's formations and agreed to cooperate with the United State Political Directorate), the extraordinary commissioners managed to uncover two agents – Mikhail Zekunov and V. Gerasimov, who turned out to be the leader of an underground organization.[4]Also, on the basis of Sheshenya's testimony, the cells of the People's Union for the Defense of the Motherland and Freedom in the Western Territory were liquidated.[5]
In the Counterintelligence Department, a project was developed according to which secret officer Andrei Fedorov should go abroad under the guise of a member of the Central Committee of the Liberal Democrats Party, Andrei Mukhin, in order to convince Savinkov of the existence of a capable underground organization in the Soviet Union and persuade him to cooperate. In addition, the extraordinary commissioners managed to recruit Zekunov, arrested in September 1922, who, after a month of briefing, was sent to Poland, where he met with Sheshenya's relative, a member of the People's Union for the Defense of the Motherland and Freedom, Ivan Fomichev. Fomichev sent Zekunov toWarsaw, where he reported to the resident of Savinkov,Dmitry Filosofov, the information that Sheshenya had come into contact with a large counter–revolutionary organization in the Soviet State, and handed over a letter to Sheshenya addressed to Savinkov. In June1923, Fedorov went to Poland. InVilno, he met with Ivan Fomichev, with whom they went to Warsaw. Fedorov demanded a meeting with Savinkov, in which he was denied (as envisaged by the extraordinary commissioners), instead Filosofov talked with him. Filosofov was suspicious of Fedorov, but he listened to his statement, and decided to send Fomichev to the Soviet Union for reconnaissance, which he informed Savinkov about in a letter. Savinkov approved the decision of his resident.[4]
Fomichev, who arrived inMoscow, was first set up by extraordinary commissioners with a real counter–revolutionary, Professor Isachenko, who headed a monarchist organisation, in the hope that the political opponents would quarrel and Fomichev would get the impression that the only force with which to cooperate was the "Liberal Democrats". And so it happened, after this conversation, Fomichev got to a meeting of the joint center of "Liberal Democrats" and Savinkovites, where he made a proposal for cooperation (Professor Isachenko was sent to theInternal Prison of the State Political Administration on Lubyanka, and, apparently, was shot). The "Liberal Democrats" accepted the proposal to work together, but set a condition for political consultations with Savinkov personally. The information received from Fomichev was accepted by Filosofov with enthusiasm, and he even forgot to inform Savinkov himself about it, who learned about the result of the trip by accident. Savinkov was very angry at this behavior of the residents, and threatened to remove all local leaders of the Union. He was in thought, comparing all the known facts, studying the program documents of the Liberal Democrats Party, which were drawn up with the participation of Artuzov, Puzitsky andMenzhinsky.[4]
Meanwhile, on June 11, 1923, Fedorov went to Paris from Warsaw to meet with Savinkov.[3]Savinkov was still not sure that the Liberal Democrats were not a provocation by the extraordinary commissioners, and decided to send one of his closest associates, Sergei Pavlovsky, to Fedorov, who suspected that the organization was provocative in nature. However, the check failed, Fedorov did not succumb to the provocation, and achieved an audience with Savinkov, playing a scene of a quarrel over resentment and disappointment in Savinkov and his associates. Savinkov calmed Fedorov, and sent Pavlovsky to the Soviet Union (without giving details of the leadership of the Liberal Democrats).[4]In addition, Fomichev and Fedorov contacted Polish intelligence, passed on some false information (prepared by the State Political Administration) and agreed on permanent cooperation.[5]
Pavlovsky arrived in Poland in August 1923, illegally crossed the border with the Soviet Union on August 17,[3]killing a Red Army soldier, but instead of immediately proceeding to check the activities of Sheshenya, Zekunov and others, inBelarushe organized an armed group from among the members of the People's Union for the Defense of the Motherland and freedoms, along with which he began to deal with the expropriation of banks, mail trains and the murder of communists. On September 16,[3]he moved to Moscow, where two days later, during a meeting with Sheshenya and the leaders of the Liberal Democrats, he was arrested and taken to the internal prison of the State Political Administration. There he was presented with a list of his most important crimes and made it clear that he would only be able to avoid execution by cooperating with the State Political Administration. He was asked to write a letter to Filosofov, and he agreed, having an agreement with Savinkov, that if he did not put an end to any proposal in the letter, it would be a sign that he was arrested. But the attempt to send a letter with a conditional sign failed, since Pavlovsky, with his persistent interest in whether the emergency commissioners were not afraid that Savinkov would learn about the arrest of his assistant, aroused their suspicions, and a secret technique was unraveled by the cipher clerks. The letter was forced to be rewritten. Savinkov, having received a letter without a secret sign, trusted Pavlovsky and wrote a message to the Liberal Democrats in which he expressed a desire to come to Russia. Savinkov's wish was that Pavlovsky should come to Europe for him, but the extraordinary commissioners could not let Pavlovsky go, and they invented the legend that he allegedly went to the south of Russia with the aim of expropriation, was wounded and bedridden. Such news confused Savinkov, planting suspicions in his mind, but they were discarded, since Savinkov was driven by the fear of being late at the right moment for active action. In addition, the extraordinary commissioners organized meetings of Fomichev with the "leaders of anti–Soviet groups" inRostov–on–DonandMineralnye Vody, who were represented by officers of the Counterintelligence Department Ibragim Abyssalov and Ignatiy Sosnovsky. In June 1924, Fomichev arrived in Paris and convinced Savinkov of the need for a trip.[5]
In August 1924, Savinkov returned to the Soviet Union, accompanied by Alexander Dikhoff, his wife Lyubov, Fomichev and Fedorov.[4]Fedorov separated from the group in Vilno, promising to meet them on Soviet territory. On August 15, they crossed the border through a passage prepared by the United State Political Administration. They reachedMinsk, where Savinkov, and Alexander and Lyubov Dikgof, betrayed by Andrei Fedorov, were arrested in a safe house on August 16, 1924, and sent to Moscow.[3]On August 18, they were taken to the inner prison of the United State Political Administration.[1]
At the trial Savinkov confessed to everything and especially noted the fact that "all his life he worked only for the people and in their name". Cooperating with the investigation, he presented at the trial the version invented by the extraordinary commissioners in order to keep the details of Operation Syndicate–2 secret, and stated that he repented of his crimes and admitted "all his political activities since the October Socialist Revolution were a mistake and delusion". On August 29, 1924 theSupreme Courtsentenced Savinkov to death with confiscation of property, since he deserved five years in prison and five death sentences for the cumulative crimes. However, the court petitioned for a mitigation of the sentence due to the convicted person's admission of his guilt and his readiness to make amends to the Soviet authorities. The motion of the Military Collegium of the Supreme Court was approved by the Presidium of theCentral Executive Committee of the Soviet Union, and the capital punishment was replaced by imprisonment for a term of ten years.[4]
Being in the internal prison of the United State Political Administration on Lubyanka, in unprecedented conditions (in the cell where his mistress Lyubov Dikhoff lived with him periodically, there was a carpet, furniture, the prisoner was allowed to write memoirs, some of which were even published, and he was paid a fee), Savinkov kept a diary in which he continued to insist on a fictional version of the motives for crossing the border. On the morning of May 7, 1925, Savinkov wrote a letter toFelix Dzerzhinsky, in which he asked to explain why he was being held in prison, and not shot or not allowed to work for the Soviet regime. Dzerzhinsky did not answer in writing, only ordering to convey that he started talking about freedom early. In the evening of that day, employees of the United State Political Administration Speransky, Puzitsky and Syroezhkin accompanied Savinkov for a walk inTsaritsinsky Park, three hours later they returned to Lubyanka, to Pillar's office on the fifth floor to wait for the guards. At some point, Puzitsky left the office, in which there was Speransky, sitting on the sofa, Syroezhkin, sitting at the table, and Savinkov.
Researchers have not yet come to a consensus about subsequent events. The official version says that Savinkov paced the room and suddenly jumped out of the window into the courtyard. However, the investigator conducting the official investigation noted that Savinkov had been sitting at a round table opposite one of the extraordinary commissioners. Boris Gudz, a close friend of Syroezhkin, who was at that moment in the next room, said in the 1990s that Savinkov walked around the room and dived through the window head first; Syroezhkin managed to catch him by the leg but could not hold on, as he had an injury to one of his hands. The first report of Savinkov's suicide, written in the United State Political Administration, edited by Felix Dzerzhinsky and approved by Joseph Stalin, was published on May 13 in the newspaper Pravda. The suicide version was circulated by the Soviet press and part of the émigré community. Doubts about the official version were among the first to be expressed by Alexander Solzhenitsyn in The Gulag Archipelago. He wrote that in the Kolyma camp the former extraordinary commissioner Arthur Prubel, while dying, told someone that he was one of four people who threw Savinkov out of the window. Some modern historians are also inclined to believe that Savinkov was killed.[4]
During the operation "Syndicate–2", most of the "Savinkovites" were identified, conducting clandestine work on the territory of the Soviet Union, the "People's Union for the Defense of the Motherland and Freedom" was finally defeated: the cells of the "Union" were liquidated inSmolensk, Bryansk,Vitebsk,GomelProvinces,[5]on the territory of the Petrograd Military District, 23 Savinkov's residencies inMoscow,Samara,Saratov,Kharkov,Kiev,Tula,Odessa. There have been several major trials, including the "Case of Forty–Four", "Case of Twelve", "Case of Forty–Three".[2]Agents of the People's Union for the Defense of the Motherland and Freedom Veselov, Gorelov, Nagel–Neiman, Rosselevich, the organizers of the terrorist acts V. I. Svezhevsky and Mikhail Gnilorybov and others were arrested and convicted.[5]Alexander and Lyubov Dikhoff were amnestied and lived in Moscow until 1936, when they were sentenced to 5 years in aforced labor campas "socially dangerous elements", they ended up in the Kolyma. There, in 1939, Alexander Arkadyevich was shot. Lyubov Efimovna survived, settled in exile in Magadan, and worked as a librarian. She died in Mariupol in 1969. Pavlovsky was shot in 1924, Sheshenya worked for the United State Political Administration, and was shot in 1937, Fomichev was released, lived in the village, was shot in 1929.[3][5]
Vyacheslav Menzhinsky, Roman Pillar, Sergei Puzitsky, Nikolai Demidenko, Andrey Fedorov, Grigory Syroezhkin were presented with theOrder of the Red Banner.Artur Artuzov, Ignatiy Sosnovsky,Semyon Gendinand I. P. Krikman received gratitude from the government of the Soviet Union.[5]
Subsequently, almost all the extraordinary commissioners who participated in the operation were shot during theStalinist Purges:[3]Andrei Fedorov, in the organs of the All–Russian Extraordinary Commission since 1920, one of the main participant in the Syndicate–2 operation and the capture of Boris Savinkov (disguised as officer Mukhin), shot on September 20, 1937 in Moscow. Roman Pillar, Sergei Puzitsky,Artur Artuzov, Ignatiy Sosnovsky were shot in 1937. Grigory Syroezhkin andSemyon Gendin– in 1939.
Based on the operation in 1968, the novel "Retribution" was written by the writer Vasily Ardamatsky. In the same year, according to a script based on the novel "Retribution", the film "Crash" was shot by directorVladimir Chebotaryov. In 1981, director Mark Orlov filmed a remake of the six–part television movie "Syndicate–2". An operation similar to Syndicate–2 is also at the heart of the 2014 television series Wolf Sun, which chronicles the activities of Soviet intelligence in Poland in the 1920s.
|
https://en.wikipedia.org/wiki/Syndicate%E2%80%932
|
In the context ofnetwork theory, acomplex networkis agraph(network) with non-trivialtopologicalfeatures—features that do not occur in simple networks such aslatticesorrandom graphsbut often occur in networks representing real systems. The study of complex networks is a young and active area of scientific research[1][2](since 2000) inspired largely by empirical findings of real-world networks such ascomputer networks,biological networks, technological networks,brain networks,[3][4]climate networksandsocial networks.
Mostsocial,biological, andtechnological networksdisplay substantial non-trivial topological features, with patterns of connection between their elements that are neither purely regular nor purely random. Such features include a heavy tail in thedegree distribution, a highclustering coefficient,assortativityor disassortativity among vertices,community structure, andhierarchical structure. In the case of directed networks these features also includereciprocity, triad significance profile and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such aslatticesandrandom graphs, do not show these features. The most complex structures can be realized by networks with a medium number of interactions.[5]This corresponds to the fact that the maximum information content (entropy) is obtained for medium probabilities.
Two well-known and much studied classes of complex networks arescale-free networks[6]andsmall-world networks,[7][8]whose discovery and definition are canonical case-studies in the field. Both are characterized by specific structural features—power-lawdegree distributionsfor the former and short path lengths and highclusteringfor the latter. However, as the study of complex networks has continued to grow in importance and popularity, many other aspects of network structures have attracted attention as well.
The field continues to develop at a brisk pace, and has brought together researchers from many areas includingmathematics,physics, electric power systems,[9]biology,climate,computer science,sociology,epidemiology, and others.[10]Ideas and tools from network science and engineering have been applied to the analysis of metabolic and genetic regulatory networks; the study of ecosystem stability and robustness;[11]clinical science;[12]the modeling and design of scalable communication networks such as the generation and visualization of complex wireless networks;[13]and a broad range of other practical issues. Network science is the topic of many conferences in a variety of different fields, and has been the subject of numerous books both for the lay person and for the expert.
A network is called scale-free[6][14]if its degree distribution, i.e., the probability that a node selected uniformly at random has a certain number of links (degree), follows a mathematical function called apower law. The power law implies that the degree distribution of these networks has no characteristic scale. In contrast, networks with a single well-defined scale are somewhat similar to a lattice in that every node has (roughly) the same degree. Examples of networks with a single scale include theErdős–Rényi (ER) random graph,random regular graphs,regular lattices, andhypercubes. Some models of growing networks that produce scale-invariant degree distributions are theBarabási–Albert modeland thefitness model. In a network with a scale-free degree distribution, some vertices have a degree that is orders of magnitude larger than the average - these vertices are often called "hubs", although this language is misleading as, by definition, there is no inherent threshold above which a node can be viewed as a hub. If there were such a threshold, the network would not be scale-free.
Interest in scale-free networks began in the late 1990s with the reporting of discoveries of power-law degree distributions in real world networks such as the World Wide Web, the network of Autonomous systems (ASs), some networks of Internet routers, protein interaction networks, email networks, etc. Most of these reported "power laws" fail when challenged with rigorous statistical testing, but the more general idea of heavy-tailed degree distributions, which many of these networks do genuinely exhibit (before finite-size effects occur), is very different from what one would expect if edges existed independently and at random (i.e., if they followed a Poisson distribution). There are many different ways to build a network with a power-law degree distribution. The Yule process is a canonical generative process for power laws, and has been known since 1925. However, it is known by many other names due to its frequent reinvention, e.g., the Gibrat principle by Herbert A. Simon, the Matthew effect, cumulative advantage, and preferential attachment by Barabási and Albert for power-law degree distributions. Recently, hyperbolic geometric graphs have been suggested as yet another way of constructing scale-free networks.
Some networks with a power-law degree distribution (and specific other types of structure) can be highly resistant to the random deletion of vertices, i.e., the vast majority of vertices remain connected together in a giant component. Such networks can also be quite sensitive to targeted attacks aimed at fracturing the network quickly. When the graph is uniformly random except for the degree distribution, these critical vertices are the ones with the highest degree, and have thus been implicated in the spread of disease (natural and artificial) in social and communication networks, and in the spread of fads (both of which are modeled by a percolation or branching process). While random graphs (ER) have an average distance of order log N between nodes,[7] where N is the number of nodes, scale-free graphs can have an average distance of order log log N.
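For a rough comparison, the following sketch (assuming the networkx library is available) generates an Erdős–Rényi random graph and a Barabási–Albert preferential-attachment graph with similar mean degree; the heavy-tailed degree distribution of the latter typically shows up as a far larger maximum degree (the "hubs").

```python
import networkx as nx

n = 10_000
er = nx.gnp_random_graph(n, p=4 / n)       # ER graph, mean degree ~4
ba = nx.barabasi_albert_graph(n, m=2)      # BA preferential attachment, mean degree ~4

print("max degree ER:", max(dict(er.degree()).values()))
print("max degree BA:", max(dict(ba.degree()).values()))  # typically far larger (hubs)
```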
A network is called a small-world network[7]by analogy with thesmall-world phenomenon(popularly known assix degrees of separation). The small world hypothesis, which was first described by the Hungarian writerFrigyes Karinthyin 1929, and tested experimentally byStanley Milgram(1967), is the idea that two arbitrary people are connected by only six degrees of separation, i.e. the diameter of the corresponding graph of social connections is not much larger than six. In 1998,Duncan J. WattsandSteven Strogatzpublished the first small-world network model, which through a single parameter smoothly interpolates between a random graph and a lattice.[7]Their model demonstrated that with the addition of only a small number of long-range links, a regular graph, in which the diameter is proportional to the size of the network, can be transformed into a "small world" in which the average number of edges between any two vertices is very small (mathematically, it should grow as the logarithm of the size of the network), while the clustering coefficient stays large. It is known that a wide variety of abstract graphs exhibit the small-world property, e.g., random graphs and scale-free networks. Further, real world networks such as theWorld Wide Weband the metabolic network also exhibit this property.
In the scientific literature on networks, there is some ambiguity associated with the term "small world". In addition to referring to the size of the diameter of the network, it can also refer to the co-occurrence of a small diameter and a highclustering coefficient. The clustering coefficient is a metric that represents the density of triangles in the network. For instance, sparse random graphs have a vanishingly small clustering coefficient while real world networks often have a coefficient significantly larger. Scientists point to this difference as suggesting that edges are correlated in real world networks. Approaches have been developed to generate network models that exhibit high correlations, while preserving the desired degree distribution and small-world properties. These approaches can be used to generate analytically solvable toy models for research into these systems.[15]
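The co-occurrence of short paths and high clustering can be demonstrated with the Watts–Strogatz model. The sketch below (again assuming networkx, with arbitrary parameters) rewires a ring lattice with increasing probability p and prints the clustering coefficient and average shortest path length; a few long-range links collapse the path length while the clustering stays comparatively high.

```python
import networkx as nx

n, k = 1000, 10
for p in (0.0, 0.01, 0.1):
    G = nx.connected_watts_strogatz_graph(n, k, p)   # rewire ring-lattice edges with prob p
    print(f"p={p:<5} clustering={nx.average_clustering(G):.3f} "
          f"avg path length={nx.average_shortest_path_length(G):.2f}")
```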
Many real networks are embedded in space. Examples include transportation and other infrastructure networks, and brain networks.[3][4]Several models for spatial networks have been developed.[16]
|
https://en.wikipedia.org/wiki/Complex_network
|
Inmechanicsandphysics,simple harmonic motion(sometimes abbreviated asSHM) is a special type ofperiodicmotionan object experiences by means of arestoring forcewhose magnitude is directlyproportionalto the distance of the object from an equilibrium position and acts towards the equilibrium position. It results in anoscillationthat is described by asinusoidwhich continues indefinitely (if uninhibited byfrictionor any otherdissipationofenergy).[1]
Simple harmonic motion can serve as amathematical modelfor a variety of motions, but is typified by the oscillation of amasson aspringwhen it is subject to the linearelasticrestoring force given byHooke's law. The motion issinusoidalin time and demonstrates a singleresonantfrequency. Other phenomena can be modeled by simple harmonic motion, including the motion of asimple pendulum, although for it to be an accurate model, thenet forceon the object at the end of the pendulum must be proportional to the displacement (and even so, it is only a good approximation when the angle of the swing is small; seesmall-angle approximation). Simple harmonic motion can also be used to modelmolecular vibration.
Simple harmonic motion provides a basis for the characterization of more complicated periodic motion through the techniques ofFourier analysis.
The motion of aparticlemoving along a straight line with anaccelerationwhose direction is always toward afixed pointon the line and whose magnitude is proportional to the displacement from the fixed point is called simple harmonic motion.[2]
In the diagram, asimple harmonic oscillator, consisting of a weight attached to one end of a spring, is shown. The other end of the spring is connected to a rigid support such as a wall. If the system is left at rest at theequilibriumposition then there is no netforceacting on the mass. However, if the mass is displaced from the equilibrium position, the springexertsa restoringelasticforce that obeysHooke's law.
Mathematically, F = −kx, where F is the restoring elastic force exerted by the spring (in SI units: N), k is the spring constant (N·m−1), and x is the displacement from the equilibrium position (in metres).
For any simple mechanical harmonic oscillator:
Once the mass is displaced from its equilibrium position, it experiences a net restoring force. As a result, itacceleratesand starts going back to the equilibrium position. When the mass moves closer to the equilibrium position, the restoring force decreases. At the equilibrium position, the net restoring force vanishes. However, atx= 0, the mass hasmomentumbecause of the acceleration that the restoring force has imparted. Therefore, the mass continues past the equilibrium position, compressing the spring. A net restoring force then slows it down until itsvelocityreaches zero, whereupon it is accelerated back to the equilibrium position again.
As long as the system has noenergyloss, the mass continues to oscillate. Thus simple harmonic motion is a type ofperiodicmotion. If energy is lost in the system, then the mass exhibitsdamped oscillation.
Note that if the real-space and phase-space plots are not co-linear, the phase-space motion becomes elliptical. The area enclosed depends on the amplitude and the maximum momentum.
InNewtonian mechanics, for one-dimensional simple harmonic motion, the equation of motion, which is a second-order linearordinary differential equationwithconstantcoefficients, can be obtained by means ofNewton's second lawandHooke's lawfor amasson aspring.
F_net = m·d²x/dt² = −kx, where m is the inertial mass of the oscillating body, x is its displacement from the equilibrium (or mean) position, and k is a constant (the spring constant for a mass on a spring).
Therefore, d²x/dt² = −(k/m)·x.
Solving the differential equation above produces a solution that is a sinusoidal function: x(t) = c₁·cos(ωt) + c₂·sin(ωt), where ω = √(k/m). The meaning of the constants c₁ and c₂ can easily be found: setting t = 0 in the equation above shows that x(0) = c₁, so c₁ is the initial position of the particle, c₁ = x₀; taking the derivative of that equation and evaluating at zero gives x′(0) = ω·c₂, so c₂ is the initial speed of the particle divided by the angular frequency, c₂ = v₀/ω. Thus: x(t) = x₀·cos(√(k/m)·t) + (v₀/√(k/m))·sin(√(k/m)·t).
This equation can also be written in the form x(t) = A·cos(ωt − φ), where
or equivalently
In the solution,c1andc2are two constants determined by the initial conditions (specifically, the initial position at timet= 0isc1, while the initial velocity isc2ω), and the origin is set to be the equilibrium position.[A]Each of these constants carries a physical meaning of the motion:Ais theamplitude(maximum displacement from the equilibrium position),ω= 2πfis theangular frequency, andφis the initialphase.[B]
Using the techniques of calculus, the velocity and acceleration as a function of time can be found: v(t) = dx/dt = −Aω·sin(ωt − φ),
a(t) = d²x/dt² = −Aω²·cos(ωt − φ).
By definition, if a mass m is under SHM its acceleration is directly proportional to displacement: a(x) = −ω²x, where ω² = k/m.
Since ω = 2πf, f = (1/2π)·√(k/m), and, since T = 1/f where T is the time period, T = 2π·√(m/k).
These equations demonstrate that the simple harmonic motion isisochronous(the period and frequency are independent of the amplitude and the initial phase of the motion).
Substituting ω² with k/m, the kinetic energy K of the system at time t is K(t) = ½·m·v²(t) = ½·m·ω²·A²·sin²(ωt − φ) = ½·k·A²·sin²(ωt − φ), and the potential energy is U(t) = ½·k·x²(t) = ½·k·A²·cos²(ωt − φ). In the absence of friction and other energy loss, the total mechanical energy has a constant value E = K + U = ½·k·A².
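The closed-form solution and the constancy of the total energy can be checked numerically; the following sketch uses arbitrary illustrative values for m, k, A, and φ.

```python
import numpy as np

m, k = 0.5, 8.0                    # mass (kg) and spring constant (N/m), example values
A, phi = 0.1, 0.0                  # amplitude (m) and initial phase
omega = np.sqrt(k / m)

t = np.linspace(0.0, 5.0, 1000)
x = A * np.cos(omega * t - phi)            # displacement
v = -A * omega * np.sin(omega * t - phi)   # velocity

K = 0.5 * m * v**2                 # kinetic energy
U = 0.5 * k * x**2                 # potential energy
E = K + U                          # total energy, should equal k*A^2/2 at all times
print("period T =", 2 * np.pi * np.sqrt(m / k))
print("max |E - kA^2/2| =", np.max(np.abs(E - 0.5 * k * A**2)))  # ~0 (rounding only)
```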
The following physical systems are some examples ofsimple harmonic oscillator.
A mass m attached to a spring of spring constant k exhibits simple harmonic motion in closed space. The equation for the period, T = 2π·√(m/k), shows that the period of oscillation is independent of the amplitude, though in practice the amplitude should be small. The above equation is also valid in the case when an additional constant force is applied to the mass; i.e., the additional constant force cannot change the period of oscillation.
Simple harmonic motion can be considered the one-dimensionalprojectionofuniform circular motion. If an object moves with angular speedωaround a circle of radiusrcentered at theoriginof thexy-plane, then its motion along each coordinate is simple harmonic motion with amplituderand angular frequencyω.
The motion of a body in which it moves to and fro about a definite point is also called oscillatory motion or vibratory motion. The time period can be calculated by T = 2π·√(l/g), where l is the distance from the axis of rotation to the centre of mass of the object undergoing SHM and g is the gravitational acceleration. This is analogous to the mass-spring system.
In the small-angle approximation, the motion of a simple pendulum is approximated by simple harmonic motion. The period of a mass attached to a pendulum of length l with gravitational acceleration g is given by T = 2π·√(l/g).
This shows that the period of oscillation is independent of the amplitude and mass of the pendulum but not of the acceleration due to gravity, g; therefore a pendulum of the same length on the Moon would swing more slowly due to the Moon's lower gravitational field strength. Because the value of g varies slightly over the surface of the Earth, the time period will vary slightly from place to place and will also vary with height above sea level.
This approximation is accurate only for small angles because of the expression forangular accelerationαbeing proportional to the sine of the displacement angle:
−mgl·sin θ = I·α,
where I is the moment of inertia. When θ is small, sin θ ≈ θ, and therefore the expression becomes
−mgl·θ = I·α
which makes angular acceleration directly proportional and opposite toθ, satisfying the definition of simple harmonic motion (that net force is directly proportional to the displacement from the mean position and is directed towards the mean position).
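A quick numerical illustration of the dependence of the period on g, using the standard values g ≈ 9.81 m/s² for Earth and roughly 1.62 m/s² for the Moon; the pendulum length is an arbitrary example.

```python
import math

def pendulum_period(length_m, g):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(l/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

L = 1.0                                            # metres
print("Earth:", pendulum_period(L, 9.81), "s")     # ~2.0 s
print("Moon: ", pendulum_period(L, 1.62), "s")     # ~4.9 s, slower swing
```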
A Scotch yoke mechanism can be used to convert between rotational motion and linear reciprocating motion. The linear motion can take various forms depending on the shape of the slot, but the basic yoke with a constant rotation speed produces a linear motion that is simple harmonic in form.
x(t) = A·sin(ωt + φ′), where tan φ′ = c₁/c₂,
|
https://en.wikipedia.org/wiki/Simple_harmonic_motion
|
In digital logic, aninverterorNOT gateis alogic gatewhich implementslogical negation. It outputs abitopposite of the bit that is put into it. The bits are typically implemented as two differingvoltagelevels.
The NOT gate outputs a zero when given a one, and a one when given a zero. Hence, it inverts its inputs. Colloquially, this inversion of bits is called "flipping" bits.[1]As with all binary logic gates, other pairs of symbols — such as true and false, or high and low — may be used in lieu of one and zero.
It is equivalent to thelogical negationoperator (¬) inmathematical logic. Because it has only one input, it is aunary operationand has the simplest type oftruth table. It is also called the complement gate[2]because it produces theones' complementof a binary number, swapping 0s and 1s.
The NOT gate is one of three basic logic gates from which anyBoolean circuitmay be built up. Together with theAND gateand theOR gate, any function in binary mathematics may be implemented. All otherlogic gatesmay be made from these three.[3]
The terms "programmable inverter" or "controlled inverter" do not refer to this gate; instead, these terms refer to theXOR gatebecause it can conditionally function like a NOT gate.[1][3]
The traditional symbol for an inverter circuit is a triangle touching a small circle or "bubble". Input and output lines are attached to the symbol; the bubble is typically attached to the output line. To symbolizeactive-low input, sometimes the bubble is instead placed on the input line.[4]Sometimes only the circle portion of the symbol is used, and it is attached to the input or output of another gate; the symbols forNANDandNORare formed in this way.[3]
A bar oroverline( ‾ ) above a variable can denote negation (or inversion or complement) performed by a NOT gate.[4]A slash (/) before the variable is also used.[3]
An inverter circuit outputs a voltage representing the opposite logic-level to its input. Its main function is to invert the input signal applied. If the applied input is low then the output becomes high and vice versa. Inverters can be constructed using a singleNMOStransistor or a singlePMOStransistor coupled with aresistor. Since this "resistive-drain" approach uses only a single type of transistor, it can be fabricated at a low cost. However, because current flows through the resistor in one of the two states, the resistive-drain configuration is disadvantaged for power consumption and processing speed. Alternatively, inverters can be constructed using two complementary transistors in aCMOSconfiguration. This configuration greatly reduces power consumption since one of the transistors is always off in both logic states.[5]Processing speed can also be improved due to the relatively low resistance compared to the NMOS-only or PMOS-only type devices. Inverters can also be constructed withbipolar junction transistors(BJT) in either aresistor–transistor logic(RTL) or atransistor–transistor logic(TTL) configuration.
Digitalelectronics circuits operate at fixed voltage levels corresponding to a logical 0 or 1 (seebinary). An inverter circuit serves as the basic logic gate to swap between those two voltage levels. Implementation determines the actual voltage, but common levels include (0, +5V) for TTL circuits.
The inverter is a basic building block in digital electronics. Multiplexers, decoders, state machines, and other sophisticated digital devices may use inverters.
Thehex inverteris anintegrated circuitthat contains six (hexa-) inverters. For example, the7404TTLchip which has 14 pins and the 4049CMOSchip which has 16 pins, 2 of which are used for power/referencing, and 12 of which are used by the inputs and outputs of the six inverters (the 4049 has 2 pins with no connection).
The analytical representation of the NOT gate is f(a)=1−a{\displaystyle f(a)=1-a}.
If no specific NOT gates are available, one can be made from the universalNANDorNORgates,[6]or anXOR gateby setting one input to high.
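A minimal sketch in Python (bit values 0/1; the helper names are illustrative, not a standard library API) shows how inversion falls out of NAND or XOR as described above:

def nand(a, b):
    # NAND of two bits
    return 1 - (a & b)

def not_from_nand(a):
    # Tie both NAND inputs together to invert a single bit
    return nand(a, a)

def not_from_xor(a):
    # XOR with a constant 1 flips the bit
    return a ^ 1

for bit in (0, 1):
    print(bit, "->", not_from_nand(bit), not_from_xor(bit))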
Digital inverter quality is often measured using the voltage transfer curve (VTC), which is a plot of output vs. input voltage. From such a graph, device parameters including noise tolerance, gain, and operating logic levels can be obtained.
Ideally, the VTC appears as an inverted step function – this would indicate precise switching betweenonandoff– but in real devices, a gradual transition region exists. The VTC indicates that for low input voltage, the circuit outputs high voltage; for high input, the output tapers off towards the low level. The slope of this transition region is a measure of quality – steep (close to vertical) slopes yield precise switching.
The tolerance to noise can be measured by comparing the minimum input to the maximum output for each region of operation (on / off).
Since the transition region is steep and approximately linear, a properly-biased CMOS inverter digital logic gate may be used as a high-gain analoglinear amplifier[7][8][9][10][11]or even combined to form anopamp.[12]Maximum gain is achieved when the input and output operating points are the same voltage, which can be biased by connecting a resistor between the output and input.[13]
|
https://en.wikipedia.org/wiki/Inverter_(logic_gate)
|
Instatisticsandcontrol theory,Kalman filtering(also known aslinear quadratic estimation) is analgorithmthat uses a series of measurements observed over time, includingstatistical noiseand other inaccuracies, to produce estimates of unknown variables that tend to be more accurate than those based on a single measurement, by estimating ajoint probability distributionover the variables for each time-step. The filter is constructed as a mean squared error minimiser, but an alternative derivation of the filter is also provided showing how the filter relates to maximum likelihood statistics.[1]The filter is named afterRudolf E. Kálmán.
Kalman filtering[2]has numerous technological applications. A common application is forguidance, navigation, and controlof vehicles, particularly aircraft, spacecraft and shipspositioned dynamically.[3]Furthermore, Kalman filtering is much applied intime seriesanalysis tasks such assignal processingandeconometrics. Kalman filtering is also important for roboticmotion planningand control,[4][5]and can be used fortrajectory optimization.[6]Kalman filtering also works for modeling thecentral nervous system's control of movement. Due to the time delay between issuing motor commands and receivingsensory feedback, the use of Kalman filters[7]provides a realistic model for making estimates of the current state of a motor system and issuing updated commands.[8]
The algorithm works via a two-phase process: a prediction phase and an update phase. In the prediction phase, the Kalman filter produces estimates of the currentstate variables, including their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some error, including random noise) is observed, these estimates are updated using aweighted average, with more weight given to estimates with greater certainty. The algorithm isrecursive. It can operate inreal time, using only the present input measurements and the state calculated previously and its uncertainty matrix; no additional past information is required.
Optimality of Kalman filtering assumes that errors have anormal (Gaussian)distribution. In the words ofRudolf E. Kálmán: "The following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear."[9]Regardless of Gaussianity, however, if the process and measurement covariances are known, then the Kalman filter is the best possiblelinearestimator in theminimum mean-square-error sense,[10]although there may be better nonlinear estimators. It is a common misconception (perpetuated in the literature) that the Kalman filter cannot be rigorously applied unless all noise processes are assumed to be Gaussian.[11]
Extensions andgeneralizationsof the method have also been developed, such as theextended Kalman filterand theunscented Kalman filterwhich work onnonlinear systems. The basis is ahidden Markov modelsuch that thestate spaceof thelatent variablesiscontinuousand all latent and observed variables have Gaussian distributions. Kalman filtering has been used successfully inmulti-sensor fusion,[12]and distributedsensor networksto develop distributed orconsensusKalman filtering.[13]
The filtering method is named for HungarianémigréRudolf E. Kálmán, althoughThorvald Nicolai Thiele[14][15]andPeter Swerlingdeveloped a similar algorithm earlier. Richard S. Bucy of theJohns Hopkins Applied Physics Laboratorycontributed to the theory, causing it to be known sometimes as Kalman–Bucy filtering. Kalman was inspired to derive the Kalman filter by applying state variables to theWiener filtering problem.[16]Stanley F. Schmidtis generally credited with developing the first implementation of a Kalman filter. He realized that the filter could be divided into two distinct parts, with one part for time periods between sensor outputs and another part for incorporating measurements.[17]It was during a visit by Kálmán to theNASA Ames Research Centerthat Schmidt saw the applicability of Kálmán's ideas to the nonlinear problem of trajectory estimation for theApollo programresulting in its incorporation in theApollo navigation computer.[18]: 16
This digital filter is sometimes termed theStratonovich–Kalman–Bucy filterbecause it is a special case of a more general, nonlinear filter developed by theSovietmathematicianRuslan Stratonovich.[19][20][21][22]In fact, some of the special case linear filter's equations appeared in papers by Stratonovich that were published before the summer of 1961, when Kalman met with Stratonovich during a conference in Moscow.[23]
This Kalman filtering was first described and developed partially in technical papers by Swerling (1958), Kalman (1960) and Kalman and Bucy (1961).
The Apollo computer used 2k of magnetic core RAM and 36k wire rope [...]. The CPU was built from ICs [...]. Clock speed was under 100 kHz [...]. The fact that the MIT engineers were able to pack such good software (one of the very first applications of the Kalman filter) into such a tiny computer is truly remarkable.
Kalman filters have been vital in the implementation of the navigation systems ofU.S. Navynuclearballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy'sTomahawk missileand theU.S. Air Force'sAir Launched Cruise Missile. They are also used in the guidance and navigation systems ofreusable launch vehiclesand theattitude controland navigation systems of spacecraft which dock at theInternational Space Station.[24]
Kalman filtering uses a system's dynamic model (e.g., physical laws of motion), known control inputs to that system, and multiple sequential measurements (such as from sensors) to form an estimate of the system's varying quantities (itsstate) that is better than the estimate obtained by using only one measurement alone. As such, it is a commonsensor fusionanddata fusionalgorithm.
Noisy sensor data, approximations in the equations that describe the system evolution, and external factors that are not accounted for, all limit how well it is possible to determine the system's state. The Kalman filter deals effectively with the uncertainty due to noisy sensor data and, to some extent, with random external factors. The Kalman filter produces an estimate of the state of the system as an average of the system's predicted state and of the new measurement using aweighted average. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from thecovariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated uncertainty than either alone. This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that Kalman filter worksrecursivelyand requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state.
The measurements' certainty-grading and current-state estimate are important considerations. It is common to discuss the filter's response in terms of the Kalman filter'sgain. The Kalman gain is the weight given to the measurements and current-state estimate, and can be "tuned" to achieve a particular performance. With a high gain, the filter places more weight on the most recent measurements, and thus conforms to them more responsively. With a low gain, the filter conforms to the model predictions more closely. At the extremes, a high gain (close to one) will result in a more jumpy estimated trajectory, while a low gain (close to zero) will smooth out noise but decrease the responsiveness.
When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded intomatricesbecause of the multiple dimensions involved in a single set of calculations. This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances.
As an example application, consider the problem of determining the precise location of a truck. The truck can be equipped with aGPSunit that provides an estimate of the position within a few meters. The GPS estimate is likely to be noisy; readings 'jump around' rapidly, though remaining within a few meters of the real position. In addition, since the truck is expected to follow the laws of physics, its position can also be estimated by integrating its velocity over time, determined by keeping track of wheel revolutions and the angle of the steering wheel. This is a technique known asdead reckoning. Typically, the dead reckoning will provide a very smooth estimate of the truck's position, but it willdriftover time as small errors accumulate.
For this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the physicallaws of motion(the dynamic or "state transition" model). Not only will a new position estimate be calculated, but also a new covariance will be calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead reckoning position estimate at high speeds but very certain about the position estimate at low speeds. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, as the dead reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back toward the real position but not disturb it to the point of becoming noisy and rapidly jumping.
The Kalman filter is an efficientrecursive filterestimatingthe internal state of alinear dynamic systemfrom a series ofnoisymeasurements. It is used in a wide range ofengineeringandeconometricapplications fromradarandcomputer visionto estimation of structural macroeconomic models,[25][26]and is an important topic incontrol theoryandcontrol systemsengineering. Together with thelinear-quadratic regulator(LQR), the Kalman filter solves thelinear–quadratic–Gaussian controlproblem (LQG). The Kalman filter, the linear-quadratic regulator, and the linear–quadratic–Gaussian controller are solutions to what arguably are the most fundamental problems of control theory.
In most applications, the internal state is much larger (has moredegrees of freedom) than the few "observable" parameters which are measured. However, by combining a series of measurements, the Kalman filter can estimate the entire internal state.
For theDempster–Shafer theory, each state equation or observation is considered a special case of alinear belief functionand the Kalman filtering is a special case of combining linear belief functions on a join-tree orMarkov tree. Additional methods includebelief filteringwhich use Bayes or evidential updates to the state equations.
A wide variety of Kalman filters exists by now: Kalman's original formulation - now termed the "simple" Kalman filter, theKalman–Bucy filter, Schmidt's "extended" filter, theinformation filter, and a variety of "square-root" filters that were developed by Bierman, Thornton, and many others. Perhaps the most commonly used type of very simple Kalman filter is thephase-locked loop, which is now ubiquitous in radios, especiallyfrequency modulation(FM) radios, television sets,satellite communicationsreceivers, outer space communications systems, and nearly any otherelectroniccommunications equipment.
Kalman filtering is based onlinear dynamic systemsdiscretized in the time domain. They are modeled on aMarkov chainbuilt onlinear operatorsperturbed by errors that may includeGaussiannoise. Thestateof the target system refers to the ground truth (yet hidden) system configuration of interest, which is represented as avectorofreal numbers. At eachdiscrete timeincrement, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the measurable outputs (i.e., observation) from the true ("hidden") state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the difference that the hidden state variables have values in a continuous space as opposed to a discrete state space as for the hidden Markov model. There is a strong analogy between the equations of a Kalman Filter and those of the hidden Markov model. A review of this and other models is given in Roweis andGhahramani(1999)[27]and Hamilton (1994), Chapter 13.[28]
In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the following framework. This means specifying the following matrices for each time-step k{\displaystyle k}:
As seen below, it is common in many applications that the matricesF{\displaystyle \mathbf {F} },H{\displaystyle \mathbf {H} },Q{\displaystyle \mathbf {Q} },R{\displaystyle \mathbf {R} }, andB{\displaystyle \mathbf {B} }are constant across time, in which case theirk{\displaystyle k}index may be dropped.
The Kalman filter model assumes the true state at timek{\displaystyle k}is evolved from the state atk−1{\displaystyle k-1}according to
where
IfQ{\displaystyle \mathbf {Q} }is independent of time, one may, following Roweis and Ghahramani,[27]: 307writew∙{\displaystyle \mathbf {w} _{\bullet }}instead ofwk{\displaystyle \mathbf {w} _{k}}to emphasize that the noise has no explicit knowledge of time.
At timek{\displaystyle k}an observation (or measurement)zk{\displaystyle \mathbf {z} _{k}}of the true statexk{\displaystyle \mathbf {x} _{k}}is made according to
where
Analogously to the situation forwk{\displaystyle \mathbf {w} _{k}}, one may writev∙{\displaystyle \mathbf {v} _{\bullet }}instead ofvk{\displaystyle \mathbf {v} _{k}}ifR{\displaystyle \mathbf {R} }is independent of time.
The initial state, and the noise vectors at each step{x0,w1,…,wk,v1,…,vk}{\displaystyle \{\mathbf {x} _{0},\mathbf {w} _{1},\dots ,\mathbf {w} _{k},\mathbf {v} _{1},\dots ,\mathbf {v} _{k}\}}are all assumed to be mutuallyindependent.
Many real-time dynamic systems do not exactly conform to this model. In fact, unmodeled dynamics can seriously degrade the filter performance, even when it was supposed to work with unknown stochastic signals as inputs. The reason for this is that the effect of unmodeled dynamics depends on the input, and, therefore, can bring the estimation algorithm to instability (it diverges). On the other hand, independent white noise signals will not make the algorithm diverge. The problem of distinguishing between measurement noise and unmodeled dynamics is a difficult one and is treated as a problem of control theory usingrobust control.[29][30]
The Kalman filter is arecursiveestimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notationx^n∣m{\displaystyle {\hat {\mathbf {x} }}_{n\mid m}}represents the estimate ofx{\displaystyle \mathbf {x} }at timengiven observations up to and including at timem≤n.
The state of the filter is represented by two variables:
The algorithm structure of the Kalman filter resembles that ofAlpha beta filter. The Kalman filter can be written as a single equation; however, it is most often conceptualized as two distinct phases: "Predict" and "Update". The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as thea prioristate estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, theinnovation(the pre-fit residual), i.e. the difference between the currenta prioriprediction and the current observation information, is multiplied by the optimal Kalman gain and combined with the previous state estimate to refine the state estimate. This improved estimate based on the current observation is termed thea posterioristate estimate.
Typically, the two phases alternate, with the prediction advancing the state until the next scheduled observation, and the update incorporating the observation. However, this is not necessary; if an observation is unavailable for some reason, the update may be skipped and multiple prediction procedures performed. Likewise, if multiple independent observations are available at the same time, multiple update procedures may be performed (typically with different observation matricesHk).[31][32]
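The predict and update phases can be written compactly with NumPy. This is a generic textbook sketch using the matrix names of this article (F, B, Q, H, R), not tied to any particular application:

import numpy as np

def kf_predict(x, P, F, Q, B=None, u=None):
    # A priori (predicted) state and covariance
    x_pred = F @ x if B is None else F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    # A posteriori state and covariance after incorporating measurement z
    y = z - H @ x_pred                    # innovation (pre-fit residual)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # optimal Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_new, P_new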
The formula for the updated (a posteriori) estimate covariance above is valid for the optimalKkgain that minimizes the residual error, in which form it is most widely used in applications. Proof of the formulae is found in thederivationssection, where the formula valid for anyKkis also shown.
A more intuitive way to express the updated state estimate (x^k∣k{\displaystyle {\hat {\mathbf {x} }}_{k\mid k}}) is:
This expression resembles a linear interpolation x=(1−t)a+tb{\displaystyle x=(1-t)(a)+t(b)} for t{\displaystyle t} in [0, 1].
In our case:
This expression also resembles thealpha beta filterupdate step.
If the model is accurate, and the values forx^0∣0{\displaystyle {\hat {\mathbf {x} }}_{0\mid 0}}andP0∣0{\displaystyle \mathbf {P} _{0\mid 0}}accurately reflect the distribution of the initial state values, then the following invariants are preserved:
whereE[ξ]{\displaystyle \operatorname {E} [\xi ]}is theexpected valueofξ{\displaystyle \xi }. That is, all estimates have a mean error of zero.
Also:
so covariance matrices accurately reflect the covariance of estimates.
Practical implementation of a Kalman filter is often difficult because it is hard to obtain a good estimate of the noise covariance matricesQkandRk. Extensive research has been done to estimate these covariances from data. One practical method of doing this is theautocovariance least-squares (ALS)technique that uses the time-laggedautocovariancesof routine operating data to estimate the covariances.[33][34]TheGNU OctaveandMatlabcode used to calculate the noise covariance matrices using the ALS technique is available online under theGNU General Public License.[35]The Field Kalman Filter (FKF), a Bayesian algorithm which allows simultaneous estimation of the state, parameters and noise covariance, has been proposed.[36]The FKF algorithm has a recursive formulation, good observed convergence, and relatively low complexity, suggesting that it may be a worthwhile alternative to the autocovariance least-squares methods. Another approach is theOptimized Kalman Filter(OKF), which considers the covariance matrices not as representatives of the noise, but rather as parameters aimed at achieving the most accurate state estimation.[37]These two views coincide under the KF assumptions, but often contradict each other in real systems. Thus, OKF's state estimation is more robust to modeling inaccuracies.
The Kalman filter provides an optimal state estimation in cases where a) the model matches the real system perfectly, b) the entering noise is "white" (uncorrelated), and c) the covariances of the noise are known exactly. Correlated noise can also be treated using Kalman filters.[38]Several methods for the noise covariance estimation have been proposed during past decades, including ALS, mentioned in the section above. More generally, if the model assumptions do not match the real system perfectly, then optimal state estimation is not necessarily obtained by settingQkandRkto the covariances of the noise. Instead, in that case, the parametersQkandRkmay be set to explicitly optimize the state estimation,[37]e.g., using standardsupervised learning.
After the covariances are set, it is useful to evaluate the performance of the filter; i.e., whether it is possible to improve the state estimation quality. If the Kalman filter works optimally, the innovation sequence (the output prediction error) is a white noise, therefore the whiteness property of theinnovationsmeasures filter performance. Several different methods can be used for this purpose.[39]If the noise terms are distributed in a non-Gaussian manner, methods for assessing performance of the filter estimate, which use probability inequalities or large-sample theory, are known in the literature.[40][41]
Consider a truck on frictionless, straight rails. Initially, the truck is stationary at position 0, but it is buffeted this way and that by random uncontrolled forces. We measure the position of the truck every Δtseconds, but these measurements are imprecise; we want to maintain a model of the truck's position andvelocity. We show here how we derive the model from which we create our Kalman filter.
SinceF,H,R,Q{\displaystyle \mathbf {F} ,\mathbf {H} ,\mathbf {R} ,\mathbf {Q} }are constant, their time indices are dropped.
The position and velocity of the truck are described by the linear state space
wherex˙{\displaystyle {\dot {x}}}is the velocity, that is, the derivative of position with respect to time.
We assume that between the (k− 1) andktimestep, uncontrolled forces cause a constant acceleration ofakthat isnormally distributedwith mean 0 and standard deviationσa. FromNewton's laws of motionwe conclude that
(there is noBu{\displaystyle \mathbf {B} u}term since there are no known control inputs. Instead,akis the effect of an unknown input andG{\displaystyle \mathbf {G} }applies that effect to the state vector) where
so that
where
The matrixQ{\displaystyle \mathbf {Q} }is not full rank (it is of rank one ifΔt≠0{\displaystyle \Delta t\neq 0}). Hence, the distributionN(0,Q){\displaystyle N(0,\mathbf {Q} )}is not absolutely continuous and hasno probability density function. Another way to express this, avoiding explicit degenerate distributions is given by
At each time phase, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noisevkis also distributed normally, with mean 0 and standard deviationσz.
where
and
We know the initial starting state of the truck with perfect precision, so we initialize
and to tell the filter that we know the exact position and velocity, we give it a zero covariance matrix:
If the initial position and velocity are not known perfectly, the covariance matrix should be initialized with suitable variances on its diagonal:
The filter will then prefer the information from the first measurements over the information already in the model.
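A runnable sketch of this truck example (the specific noise magnitudes and the number of steps are illustrative assumptions, not values from the text) simulates the buffeted truck and filters its noisy position measurements:

import numpy as np

dt, sigma_a, sigma_z = 1.0, 0.2, 3.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity transition
G = np.array([[0.5 * dt**2], [dt]])     # maps random acceleration into the state
Q = sigma_a**2 * (G @ G.T)              # process noise covariance (rank one)
H = np.array([[1.0, 0.0]])              # only position is measured
R = np.array([[sigma_z**2]])            # measurement noise covariance

x_est = np.zeros((2, 1))                # exactly known initial state ...
P = np.zeros((2, 2))                    # ... hence zero initial covariance
x_true = np.zeros((2, 1))
rng = np.random.default_rng(seed=0)

for k in range(100):
    a = rng.normal(0.0, sigma_a)        # random buffeting acceleration
    x_true = F @ x_true + G * a
    z = H @ x_true + rng.normal(0.0, sigma_z)
    # predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T / S                     # S is 1x1 here, so division suffices
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P

print("estimate:", x_est.ravel(), "truth:", x_true.ravel())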
For simplicity, assume that the control inputuk=0{\displaystyle \mathbf {u} _{k}=\mathbf {0} }. Then the Kalman filter may be written:
A similar equation holds if we include a non-zero control input. Gain matricesKk{\displaystyle \mathbf {K} _{k}}evolve independently of the measurementszk{\displaystyle \mathbf {z} _{k}}. From above, the four equations needed for updating the Kalman gain are as follows:
Since the gain matrices depend only on the model, and not the measurements, they may be computed offline. Convergence of the gain matricesKk{\displaystyle \mathbf {K} _{k}}to an asymptotic matrixK∞{\displaystyle \mathbf {K} _{\infty }}applies for conditions established in Walrand and Dimakis.[42]Simulations establish the number of steps to convergence. For the moving truck example described above, withΔt=1{\displaystyle \Delta t=1}andσa2=σz2=σx2=σx˙2=1{\displaystyle \sigma _{a}^{2}=\sigma _{z}^{2}=\sigma _{x}^{2}=\sigma _{\dot {x}}^{2}=1}, simulation shows convergence in10{\displaystyle 10}iterations.
Using the asymptotic gain, and assumingHk{\displaystyle \mathbf {H} _{k}}andFk{\displaystyle \mathbf {F} _{k}}are independent ofk{\displaystyle k}, the Kalman filter becomes alinear time-invariantfilter:
The asymptotic gainK∞{\displaystyle \mathbf {K} _{\infty }}, if it exists, can be computed by first solving the following discreteRiccati equationfor the asymptotic state covarianceP∞{\displaystyle \mathbf {P} _{\infty }}:[42]
The asymptotic gain is then computed as before.
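One way to obtain the asymptotic gain in practice is simply to iterate the filter's covariance recursion until the gain stops changing, as sketched below (a dedicated discrete Riccati solver such as scipy.linalg.solve_discrete_are could be used instead; the numeric values reproduce the unit-variance truck example mentioned above):

import numpy as np

def asymptotic_gain(F, H, Q, R, tol=1e-9, max_iter=1000):
    # Iterate predict/update covariance steps until the Kalman gain converges
    n = F.shape[0]
    P = np.eye(n)
    K_prev = np.zeros((n, H.shape[0]))
    for _ in range(max_iter):
        P = F @ P @ F.T + Q                   # a priori covariance
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        P = (np.eye(n) - K @ H) @ P           # a posteriori covariance
        if np.max(np.abs(K - K_prev)) < tol:
            break
        K_prev = K
    return K

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2], [dt]])
print(asymptotic_gain(F, np.array([[1.0, 0.0]]), G @ G.T, np.array([[1.0]])))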
Additionally, a form of the asymptotic Kalman filter more commonly used in control theory is given by
where
This leads to an estimator of the form
The Kalman filter can be derived as ageneralized least squaresmethod operating on previous data.[43]
Starting with our invariant on the error covariancePk|kas above
substitute in the definition ofx^k∣k{\displaystyle {\hat {\mathbf {x} }}_{k\mid k}}
and substitutey~k{\displaystyle {\tilde {\mathbf {y} }}_{k}}
andzk{\displaystyle \mathbf {z} _{k}}
and by collecting the error vectors we get
Since the measurement errorvkis uncorrelated with the other terms, this becomes
by the properties ofvector covariancethis becomes
which, using our invariant onPk|k−1and the definition ofRkbecomes
This formula (sometimes known as theJoseph formof the covariance update equation) is valid for any value ofKk. It turns out that ifKkis the optimal Kalman gain, this can be simplified further as shown below.
The Kalman filter is aminimum mean-square error (MMSE)estimator. The error in thea posterioristate estimation is
We seek to minimize the expected value of the square of the magnitude of this vector,E[‖xk−x^k|k‖2]{\displaystyle \operatorname {E} \left[\left\|\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k|k}\right\|^{2}\right]}. This is equivalent to minimizing thetraceof thea posterioriestimatecovariance matrixPk|k{\displaystyle \mathbf {P} _{k|k}}. By expanding out the terms in the equation above and collecting, we get:
The trace is minimized when itsmatrix derivativewith respect to the gain matrix is zero. Using thegradient matrix rulesand the symmetry of the matrices involved we find that
Solving this forKkyields the Kalman gain:
This gain, which is known as theoptimal Kalman gain, is the one that yields MMSE estimates when used.
The formula used to calculate thea posteriorierror covariance can be simplified when the Kalman gain equals the optimal value derived above. Multiplying both sides of our Kalman gain formula on the right bySkKkT, it follows that
Referring back to our expanded formula for thea posteriorierror covariance,
we find the last two terms cancel out, giving
This formula is computationally cheaper and thus nearly always used in practice, but is only correct for the optimal gain. If arithmetic precision is unusually low causing problems withnumerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; thea posteriorierror covariance formula as derived above (Joseph form) must be used.
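For reference, a minimal sketch of the Joseph-form update (valid for any gain, as stated above) next to the simplified optimal-gain form:

import numpy as np

def joseph_update(P_pred, K, H, R):
    # Valid for any gain K; numerically more robust than (I - K H) P_pred
    I = np.eye(P_pred.shape[0])
    A = I - K @ H
    return A @ P_pred @ A.T + K @ R @ K.T

def simplified_update(P_pred, K, H):
    # Cheaper form, correct only when K is the optimal Kalman gain
    return (np.eye(P_pred.shape[0]) - K @ H) @ P_pred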
The Kalman filtering equations provide an estimate of the statex^k∣k{\displaystyle {\hat {\mathbf {x} }}_{k\mid k}}and its error covariancePk∣k{\displaystyle \mathbf {P} _{k\mid k}}recursively. The estimate and its quality depend on the system parameters and the noise statistics fed as inputs to the estimator. This section analyzes the effect of uncertainties in the statistical inputs to the filter.[44]In the absence of reliable statistics or the true values of noise covariance matricesQk{\displaystyle \mathbf {Q} _{k}}andRk{\displaystyle \mathbf {R} _{k}}, the expression
no longer provides the actual error covariance. In other words,Pk∣k≠E[(xk−x^k∣k)(xk−x^k∣k)T]{\displaystyle \mathbf {P} _{k\mid k}\neq E\left[\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)^{\textsf {T}}\right]}. In most real-time applications, the covariance matrices that are used in designing the Kalman filter are different from the actual (true) noise covariances matrices.[citation needed]This sensitivity analysis describes the behavior of the estimation error covariance when the noise covariances as well as the system matricesFk{\displaystyle \mathbf {F} _{k}}andHk{\displaystyle \mathbf {H} _{k}}that are fed as inputs to the filter are incorrect. Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs to the estimator.
This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted byQka{\displaystyle \mathbf {Q} _{k}^{a}}andRka{\displaystyle \mathbf {R} _{k}^{a}}respectively, whereas the design values used in the estimator areQk{\displaystyle \mathbf {Q} _{k}}andRk{\displaystyle \mathbf {R} _{k}}respectively. The actual error covariance is denoted byPk∣ka{\displaystyle \mathbf {P} _{k\mid k}^{a}}andPk∣k{\displaystyle \mathbf {P} _{k\mid k}}as computed by the Kalman filter is referred to as the Riccati variable. WhenQk≡Qka{\displaystyle \mathbf {Q} _{k}\equiv \mathbf {Q} _{k}^{a}}andRk≡Rka{\displaystyle \mathbf {R} _{k}\equiv \mathbf {R} _{k}^{a}}, this means thatPk∣k=Pk∣ka{\displaystyle \mathbf {P} _{k\mid k}=\mathbf {P} _{k\mid k}^{a}}. While computing the actual error covariance usingPk∣ka=E[(xk−x^k∣k)(xk−x^k∣k)T]{\displaystyle \mathbf {P} _{k\mid k}^{a}=E\left[\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)^{\textsf {T}}\right]}, substituting forx^k∣k{\displaystyle {\widehat {\mathbf {x} }}_{k\mid k}}and using the fact thatE[wkwkT]=Qka{\displaystyle E\left[\mathbf {w} _{k}\mathbf {w} _{k}^{\textsf {T}}\right]=\mathbf {Q} _{k}^{a}}andE[vkvkT]=Rka{\displaystyle E\left[\mathbf {v} _{k}\mathbf {v} _{k}^{\textsf {T}}\right]=\mathbf {R} _{k}^{a}}, results in the following recursive equations forPk∣ka{\displaystyle \mathbf {P} _{k\mid k}^{a}}:
and
While computingPk∣k{\displaystyle \mathbf {P} _{k\mid k}}, by design the filter implicitly assumes thatE[wkwkT]=Qk{\displaystyle E\left[\mathbf {w} _{k}\mathbf {w} _{k}^{\textsf {T}}\right]=\mathbf {Q} _{k}}andE[vkvkT]=Rk{\displaystyle E\left[\mathbf {v} _{k}\mathbf {v} _{k}^{\textsf {T}}\right]=\mathbf {R} _{k}}. The recursive expressions forPk∣ka{\displaystyle \mathbf {P} _{k\mid k}^{a}}andPk∣k{\displaystyle \mathbf {P} _{k\mid k}}are identical except for the presence ofQka{\displaystyle \mathbf {Q} _{k}^{a}}andRka{\displaystyle \mathbf {R} _{k}^{a}}in place of the design valuesQk{\displaystyle \mathbf {Q} _{k}}andRk{\displaystyle \mathbf {R} _{k}}respectively. Research has been done to analyze the robustness of Kalman filter systems.[45]
One problem with the Kalman filter is itsnumerical stability. If the process noise covarianceQkis small, round-off error often causes a small positive eigenvalue of the state covariance matrixPto be computed as a negative number. This renders the numerical representation ofPindefinite, while its true form ispositive-definite.
Positive definite matrices have the property that they have a factorization into the product of anon-singular,lower-triangular matrixSand itstranspose:P=S·ST. The factorScan be computed efficiently using theCholesky factorizationalgorithm. This product form of the covariance matrixPis guaranteed to be symmetric, and for all 1 <= k <= n, the k-th diagonal elementPkkis equal to theeuclidean normof the k-th row ofS, which is necessarily positive. An equivalent form, which avoids many of thesquare rootoperations involved in theCholesky factorizationalgorithm, yet preserves the desirable numerical properties, is the U-D decomposition form,P=U·D·UT, whereUis aunit triangular matrix(with unit diagonal), andDis a diagonal matrix.
Between the two, the U-D factorization uses the same amount of storage, and somewhat less computation, and is the most commonly used triangular factorization. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions,[46]: 69while on 21st-century computers they are only slightly more expensive.)
Efficient algorithms for the Kalman prediction and update steps in the factored form were developed by G. J. Bierman and C. L. Thornton.[46][47]
TheL·D·LTdecompositionof the innovation covariance matrixSkis the basis for another type of numerically efficient and robust square root filter.[48]The algorithm starts with the LU decomposition as implemented in the Linear Algebra PACKage (LAPACK). These results are further factored into theL·D·LTstructure with methods given by Golub and Van Loan (algorithm 4.1.2) for a symmetric nonsingular matrix.[49]Any singular covariance matrix ispivotedso that the first diagonal partition isnonsingularandwell-conditioned. The pivoting algorithm must retain any portion of the innovation covariance matrix directly corresponding to observed state-variablesHk·xk|k-1that are associated with auxiliary observations inyk. Thel·d·ltsquare-root filter requiresorthogonalizationof the observation vector.[47][48]This may be done with the inverse square-root of the covariance matrix for the auxiliary variables using Method 2 in Higham (2002, p. 263).[50]
The Kalman filter is efficient for sequential data processing oncentral processing units(CPUs), but in its original form it is inefficient on parallel architectures such asgraphics processing units(GPUs). It is however possible to express the filter-update routine in terms of an associative operator using the formulation in Särkkä and García-Fernández (2021).[51]The filter solution can then be retrieved by the use of aprefix sumalgorithm which can be efficiently implemented on GPU.[52]This reduces thecomputational complexityfromO(N){\displaystyle O(N)}in the number of time steps toO(log(N)){\displaystyle O(\log(N))}.
The Kalman filter can be presented as one of the simplestdynamic Bayesian networks. The Kalman filter calculates estimates of the true values of states recursively over time using incoming measurements and a mathematical process model. Similarly,recursive Bayesian estimationcalculatesestimatesof an unknownprobability density function(PDF) recursively over time using incoming measurements and a mathematical process model.[53]
In recursive Bayesian estimation, the true state is assumed to be an unobservedMarkov process, and the measurements are the observed states of a hidden Markov model (HMM).
Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state.
Similarly, the measurement at thek-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state.
Using these assumptions the probability distribution over all states of the hidden Markov model can be written simply as:
However, when a Kalman filter is used to estimate the statex, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current timestep. This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set.
This results in thepredictandupdatephases of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (k− 1)-th timestep to thek-th and the probability distribution associated with the previous state, over all possiblexk−1{\displaystyle x_{k-1}}.
The measurement set up to timetis
The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state.
The denominator
is a normalization term.
The remaining probability density functions are
The PDF at the previous timestep is assumed inductively to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements, therefore the PDF forxk{\displaystyle \mathbf {x} _{k}}given the measurementsZk{\displaystyle \mathbf {Z} _{k}}is the Kalman filter estimate.
Related to the recursive Bayesian interpretation described above, the Kalman filter can be viewed as agenerative model, i.e., a process forgeneratinga stream of random observationsz= (z0,z1,z2, ...). Specifically, the process is
This process has identical structure to thehidden Markov model, except that the discrete state and observations are replaced with continuous variables sampled from Gaussian distributions.
In some applications, it is useful to compute theprobabilitythat a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as themarginal likelihoodbecause it integrates over ("marginalizes out") the values of the hidden state variables, so it can be computed using only the observed signal. The marginal likelihood can be useful to evaluate different parameter choices, or to compare the Kalman filter against other models usingBayesian model comparison.
It is straightforward to compute the marginal likelihood as a side effect of the recursive filtering computation. By thechain rule, the likelihood can be factored as the product of the probability of each observation given previous observations,
and because the Kalman filter describes a Markov process, all relevant information from previous observations is contained in the current state estimatex^k∣k−1,Pk∣k−1.{\displaystyle {\hat {\mathbf {x} }}_{k\mid k-1},\mathbf {P} _{k\mid k-1}.}Thus the marginal likelihood is given by
i.e., a product of Gaussian densities, each corresponding to the density of one observationzkunder the current filtering distributionHkx^k∣k−1,Sk{\displaystyle \mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1},\mathbf {S} _{k}}. This can easily be computed as a simple recursive update; however, to avoidnumeric underflow, in a practical implementation it is usually desirable to compute thelogmarginal likelihoodℓ=logp(z){\displaystyle \ell =\log p(\mathbf {z} )}instead. Adopting the conventionℓ(−1)=0{\displaystyle \ell ^{(-1)}=0}, this can be done via the recursive update rule
wheredy{\displaystyle d_{y}}is the dimension of the measurement vector.[54]
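In code, each step of this recursion adds the log of a Gaussian density of the innovation under its covariance. A hedged sketch (the function name and calling convention are illustrative):

import numpy as np

def loglik_increment(y, S):
    # y = z_k - H_k x_{k|k-1} is the innovation, S its covariance S_k
    d_y = y.shape[0]
    _, logdet = np.linalg.slogdet(S)
    quad = float(y.T @ np.linalg.solve(S, y))
    return -0.5 * (quad + logdet + d_y * np.log(2.0 * np.pi))

# Inside the filter loop, starting from ell = 0, one accumulates
# ell += loglik_increment(y, S) at every update step.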
An important application where such a (log) likelihood of the observations (given the filter parameters) is used is multi-target tracking. For example, consider an object tracking scenario where a stream of observations is the input, however, it is unknown how many objects are in the scene (or, the number of objects is known but is greater than one). For such a scenario, it can be unknown apriori which observations/measurements were generated by which object. A multiple hypothesis tracker (MHT) typically will form different track association hypotheses, where each hypothesis can be considered as a Kalman filter (for the linear Gaussian case) with a specific set of parameters associated with the hypothesized object. Thus, it is important to compute the likelihood of the observations for the different hypotheses under consideration, such that the most-likely one can be found.
In cases where the dimension of the observation vectoryis bigger than the dimension of the state space vectorx, the information filter can avoid the inversion of a bigger matrix in the Kalman gain calculation at the price of inverting a smaller matrix in the prediction step, thus saving computing time. Additionally, the information filter allows for system information initialization according toI1|0=P1|0−1=0{\displaystyle {I_{1|0}=P_{1|0}^{-1}=0}}, which would not be possible for the regular Kalman filter.[55]In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by theinformation matrixandinformationvector respectively. These are defined as:
Similarly the predicted covariance and state have equivalent information forms, defined as:
and the measurement covariance and measurement vector, which are defined as:
The information update now becomes a trivial sum.[56]
The main advantage of the information filter is thatNmeasurements can be filtered at each time step simply by summing their information matrices and vectors.
To predict the information filter the information matrix and vector can be converted back to their state space equivalents, or alternatively the information space prediction can be used.[56]
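A minimal sketch of the information-form update, under the definitions above (the function names are illustrative), shows why fusing N measurements reduces to a sum:

import numpy as np

def to_information(x, P):
    # Convert a state estimate and covariance to information vector/matrix
    Y = np.linalg.inv(P)
    return Y @ x, Y

def information_update(y_pred, Y_pred, measurements):
    # measurements is an iterable of (z, H, R) triples; each sensor simply
    # adds its information contribution to the predicted information form
    y, Y = y_pred.copy(), Y_pred.copy()
    for z, H, R in measurements:
        Rinv = np.linalg.inv(R)
        y += H.T @ Rinv @ z
        Y += H.T @ Rinv @ H
    return y, Y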
The optimal fixed-lag smoother provides the optimal estimate ofx^k−N∣k{\displaystyle {\hat {\mathbf {x} }}_{k-N\mid k}}for a given fixed-lagN{\displaystyle N}using the measurements fromz1{\displaystyle \mathbf {z} _{1}}tozk{\displaystyle \mathbf {z} _{k}}.[57]It can be derived using the previous theory via an augmented state, and the main equation of the filter is the following:
where:
If the estimation error covariance is defined so that
then we have that the improvement on the estimation ofxt−i{\displaystyle \mathbf {x} _{t-i}}is given by:
The optimal fixed-interval smoother provides the optimal estimate ofx^k∣n{\displaystyle {\hat {\mathbf {x} }}_{k\mid n}}(k<n{\displaystyle k<n}) using the measurements from a fixed intervalz1{\displaystyle \mathbf {z} _{1}}tozn{\displaystyle \mathbf {z} _{n}}. This is also called "Kalman Smoothing". There are several smoothing algorithms in common use.
The Rauch–Tung–Striebel (RTS) smoother is an efficient two-pass algorithm for fixed interval smoothing.[58]
The forward pass is the same as the regular Kalman filter algorithm. Thesefiltereda-priori and a-posteriori state estimatesx^k∣k−1{\displaystyle {\hat {\mathbf {x} }}_{k\mid k-1}},x^k∣k{\displaystyle {\hat {\mathbf {x} }}_{k\mid k}}and covariancesPk∣k−1{\displaystyle \mathbf {P} _{k\mid k-1}},Pk∣k{\displaystyle \mathbf {P} _{k\mid k}}are saved for use in the backward pass (forretrodiction).
In the backward pass, we compute thesmoothedstate estimatesx^k∣n{\displaystyle {\hat {\mathbf {x} }}_{k\mid n}}and covariancesPk∣n{\displaystyle \mathbf {P} _{k\mid n}}. We start at the last time step and proceed backward in time using the following recursive equations:
where
xk∣k{\displaystyle \mathbf {x} _{k\mid k}}is the a-posteriori state estimate of timestepk{\displaystyle k}andxk+1∣k{\displaystyle \mathbf {x} _{k+1\mid k}}is the a-priori state estimate of timestepk+1{\displaystyle k+1}. The same notation applies to the covariance.
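A sketch of the RTS backward pass, assuming a constant transition matrix F and that the forward pass has stored both the a-priori and a-posteriori estimates as lists (the function signature is illustrative):

import numpy as np

def rts_backward_pass(x_filt, P_filt, x_pred, P_pred, F):
    # x_filt[k], P_filt[k]: x_{k|k}, P_{k|k};  x_pred[k], P_pred[k]: x_{k|k-1}, P_{k|k-1}
    n = len(x_filt)
    x_s, P_s = [None] * n, [None] * n
    x_s[-1], P_s[-1] = x_filt[-1], P_filt[-1]
    for k in range(n - 2, -1, -1):
        C = P_filt[k] @ F.T @ np.linalg.inv(P_pred[k + 1])     # smoother gain
        x_s[k] = x_filt[k] + C @ (x_s[k + 1] - x_pred[k + 1])
        P_s[k] = P_filt[k] + C @ (P_s[k + 1] - P_pred[k + 1]) @ C.T
    return x_s, P_s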
An alternative to the RTS algorithm is the modified Bryson–Frazier (MBF) fixed interval smoother developed by Bierman.[47]This also uses a backward pass that processes data saved from the Kalman filter forward pass. The equations for the backward pass involve the recursive
computation of data which are used at each observation time to compute the smoothed state and covariance.
The recursive equations are
whereSk{\displaystyle \mathbf {S} _{k}}is the residual covariance andC^k=I−KkHk{\displaystyle {\hat {\mathbf {C} }}_{k}=\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}}. The smoothed state and covariance can then be found by substitution in the equations
or
An important advantage of the MBF is that it does not require finding the inverse of the covariance matrix. Bierman's derivation is based on the RTS smoother, which assumes that the underlying distributions are Gaussian. However, a derivation of the MBF based on the concept of the fixed point smoother, which does not require the Gaussian assumption, is given by Gibbs.[59]
The MBF can also be used to perform consistency checks on the filter residuals and the difference between the value of a filter state after an update and the smoothed value of the state, that isxk∣k−xk∣n{\displaystyle \mathbf {x} _{k\mid k}-\mathbf {x} _{k\mid n}}.[60]
The minimum-variance smoother can attain the best-possible error performance, provided that the models are linear, their parameters and the noise statistics are known precisely.[61]This smoother is a time-varying state-space generalization of the optimal non-causalWiener filter.
The smoother calculations are done in two passes. The forward calculations involve a one-step-ahead predictor and are given by
The above system is known as the inverse Wiener-Hopf factor. The backward recursion is the adjoint of the above forward system. The result of the backward passβk{\displaystyle \beta _{k}}may be calculated by operating the forward equations on the time-reversedαk{\displaystyle \alpha _{k}}and time reversing the result. In the case of output estimation, the smoothed estimate is given by
Taking the causal part of this minimum-variance smoother yields
which is identical to the minimum-variance Kalman filter. The above solutions minimize the variance of the output estimation error. Note that the Rauch–Tung–Striebel smoother derivation assumes that the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly.
A continuous-time version of the above smoother is described in.[62][63]
Expectation–maximization algorithmsmay be employed to calculate approximatemaximum likelihoodestimates of unknown state-space parameters within minimum-variance filters and smoothers. Often uncertainties remain within problem assumptions. A smoother that accommodates uncertainties can be designed by adding a positive definite term to the Riccati equation.[64]
In cases where the models are nonlinear, step-wise linearizations may be within the minimum-variance filter and smoother recursions (extended Kalman filtering).
Pioneering research on the perception of sounds at different frequencies was conducted by Fletcher and Munson in the 1930s. Their work led to a standard way of weighting measured sound levels within investigations of industrial noise and hearing loss. Frequency weightings have since been used within filter and controller designs to manage performance within bands of interest.
Typically, a frequency shaping function is used to weight the average power of the error spectral density in a specified frequency band. Lety−y^{\displaystyle \mathbf {y} -{\hat {\mathbf {y} }}}denote the output estimation error exhibited by a conventional Kalman filter. Also, letW{\displaystyle \mathbf {W} }denote a causal frequency weighting transfer function. The optimum solution which minimizes the variance ofW(y−y^){\displaystyle \mathbf {W} \left(\mathbf {y} -{\hat {\mathbf {y} }}\right)}arises by simply constructingW−1y^{\displaystyle \mathbf {W} ^{-1}{\hat {\mathbf {y} }}}.
The design ofW{\displaystyle \mathbf {W} }remains an open question. One way of proceeding is to identify a system which generates the estimation error and settingW{\displaystyle \mathbf {W} }equal to the inverse of that system.[65]This procedure may be iterated to obtain mean-square error improvement at the cost of increased filter order. The same technique can be applied to smoothers.
The basic Kalman filter is limited to a linear assumption. More complex systems, however, can benonlinear. The nonlinearity can be associated either with the process model or with the observation model or with both.
The most common variants of the Kalman filter for nonlinear systems are the extended Kalman filter and the unscented Kalman filter. Which filter is more suitable depends on the non-linearity indices of the process and observation models.[66]
In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be nonlinear functions. These functions must, however, be differentiable.
The functionfcan be used to compute the predicted state from the previous estimate and similarly the functionhcan be used to compute the predicted measurement from the predicted state. However,fandhcannot be applied to the covariance directly. Instead a matrix of partial derivatives (theJacobian) is computed.
At each timestep the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the nonlinear function around the current estimate.
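When analytic Jacobians are inconvenient, they are often approximated by finite differences; the sketch below is a generic numerical approach (not the EKF equations themselves) that linearizes a nonlinear measurement function around the current estimate:

import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    # Finite-difference Jacobian of func evaluated at x
    fx = func(x)
    J = np.zeros((fx.shape[0], x.shape[0]))
    for i in range(x.shape[0]):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

# Example: range-only measurement h(x) = sqrt(px^2 + py^2) of a 2-D position
h = lambda x: np.array([np.hypot(x[0], x[1])])
print(numerical_jacobian(h, np.array([3.0, 4.0])))  # approximately [[0.6, 0.8]]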
When the state transition and observation models—that is, the predict and update functionsf{\displaystyle f}andh{\displaystyle h}—are highly nonlinear, the extended Kalman filter can give particularly poor performance.[67][68]This is because the covariance is propagated through linearization of the underlying nonlinear model. The unscented Kalman filter (UKF)[67]uses a deterministic sampling technique known as theunscented transformation (UT)to pick a minimal set of sample points (called sigma points) around the mean. The sigma points are then propagated through the nonlinear functions, from which a new mean and covariance estimate are formed. The resulting filter depends on how the transformed statistics of the UT are calculated and which set of sigma points are used. It should be remarked that it is always possible to construct new UKFs in a consistent way.[69]For certain systems, the resulting UKF more accurately estimates the true mean and covariance.[70]This can be verified withMonte Carlo samplingorTaylor seriesexpansion of the posterior statistics. In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically), if not impossible (if those functions are not differentiable).
For arandomvectorx=(x1,…,xL){\displaystyle \mathbf {x} =(x_{1},\dots ,x_{L})}, sigma points are any set of vectors
attributed with
A simple choice of sigma points and weights forxk−1∣k−1{\displaystyle \mathbf {x} _{k-1\mid k-1}}in the UKF algorithm is
wherex^k−1∣k−1{\displaystyle {\hat {\mathbf {x} }}_{k-1\mid k-1}}is the mean estimate ofxk−1∣k−1{\displaystyle \mathbf {x} _{k-1\mid k-1}}. The vectorAj{\displaystyle \mathbf {A} _{j}}is thejth column ofA{\displaystyle \mathbf {A} }wherePk−1∣k−1=AAT{\displaystyle \mathbf {P} _{k-1\mid k-1}=\mathbf {AA} ^{\textsf {T}}}. Typically,A{\displaystyle \mathbf {A} }is obtained viaCholesky decompositionofPk−1∣k−1{\displaystyle \mathbf {P} _{k-1\mid k-1}}. With some care the filter equations can be expressed in such a way thatA{\displaystyle \mathbf {A} }is evaluated directly without intermediate calculations ofPk−1∣k−1{\displaystyle \mathbf {P} _{k-1\mid k-1}}. This is referred to as thesquare-root unscented Kalman filter.[71]
The weight of the mean value,W0{\displaystyle W_{0}}, can be chosen arbitrarily.
Another popular parameterization (which generalizes the above) is
α{\displaystyle \alpha }andκ{\displaystyle \kappa }control the spread of the sigma points.β{\displaystyle \beta }is related to the distribution ofx{\displaystyle x}. Note that this is an overparameterization in the sense that any one ofα{\displaystyle \alpha },β{\displaystyle \beta }andκ{\displaystyle \kappa }can be chosen arbitrarily.
Appropriate values depend on the problem at hand, but a typical recommendation isα=1{\displaystyle \alpha =1},β=0{\displaystyle \beta =0}, andκ≈3L/2{\displaystyle \kappa \approx 3L/2}.[citation needed]If the true distribution ofx{\displaystyle x}is Gaussian,β=2{\displaystyle \beta =2}is optimal.[72]
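A sketch of the simple symmetric sigma-point choice described above (the weight of the mean point W0 may be chosen freely in this parameterization; 1/3 below is an arbitrary example value):

import numpy as np

def sigma_points(x_hat, P, W0=1.0 / 3.0):
    # Returns 2L+1 sigma points and weights; requires W0 < 1 and P positive-definite
    L = x_hat.shape[0]
    A = np.linalg.cholesky(P)                 # P = A A^T
    scale = np.sqrt(L / (1.0 - W0))
    points = [x_hat]
    for j in range(L):
        points.append(x_hat + scale * A[:, j])
        points.append(x_hat - scale * A[:, j])
    weights = np.full(2 * L + 1, (1.0 - W0) / (2 * L))
    weights[0] = W0
    return np.array(points), weights

# The weighted mean of the points reproduces x_hat and the weighted spread
# reproduces P, which is easy to verify numerically.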
As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa.
Given estimates of the mean and covariance,x^k−1∣k−1{\displaystyle {\hat {\mathbf {x} }}_{k-1\mid k-1}}andPk−1∣k−1{\displaystyle \mathbf {P} _{k-1\mid k-1}}, one obtainsN=2L+1{\displaystyle N=2L+1}sigma points as described in the section above. The sigma points are propagated through the transition functionf.
The propagated sigma points are weighed to produce the predicted mean and covariance.
whereWja{\displaystyle W_{j}^{a}}are the first-order weights of the original sigma points, andWjc{\displaystyle W_{j}^{c}}are the second-order weights. The matrixQk{\displaystyle \mathbf {Q} _{k}}is the covariance of the transition noise,wk{\displaystyle \mathbf {w} _{k}}.
Given prediction estimates $\hat{\mathbf{x}}_{k\mid k-1}$ and $\mathbf{P}_{k\mid k-1}$, a new set of $N=2L+1$ sigma points $\mathbf{s}_{0},\dots,\mathbf{s}_{2L}$ with corresponding first-order weights $W_{0}^{a},\dots,W_{2L}^{a}$ and second-order weights $W_{0}^{c},\dots,W_{2L}^{c}$ is calculated.[73] These sigma points are transformed through the measurement function $h$.
Then the empirical mean and covariance of the transformed points are calculated.
where $\mathbf{R}_{k}$ is the covariance matrix of the observation noise, $\mathbf{v}_{k}$. Additionally, the cross covariance matrix is also needed
The Kalman gain is
The updated mean and covariance estimates are
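Continuing the sketch above (and reusing its sigma_points helper and NumPy import), the update step might look roughly as follows; again this is an illustrative sketch rather than a reference implementation.

def ukf_update(x_pred, P_pred, z, h, R):
    # Transform the predicted sigma points through h and apply the gain formula.
    S, Wa, Wc = sigma_points(x_pred, P_pred)
    Z = np.array([h(s) for s in S])                 # transformed sigma points
    z_hat = Wa @ Z                                  # predicted measurement
    dz = Z - z_hat
    dx = S - x_pred
    S_k = (Wc[:, None] * dz).T @ dz + R             # innovation covariance
    C_k = (Wc[:, None] * dx).T @ dz                 # state-measurement cross covariance
    K = C_k @ np.linalg.inv(S_k)                    # Kalman gain
    return x_pred + K @ (z - z_hat), P_pred - K @ S_k @ K.T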
When the observation model $p(\mathbf{z}_{k}\mid \mathbf{x}_{k})$ is highly non-linear and/or non-Gaussian, it may prove advantageous to apply Bayes' rule and estimate
where $p(\mathbf{x}_{k}\mid \mathbf{z}_{k})\approx {\mathcal{N}}(g(\mathbf{z}_{k}),Q(\mathbf{z}_{k}))$ for nonlinear functions $g,Q$. This replaces the generative specification of the standard Kalman filter with a discriminative model for the latent states given observations.
Under a stationary state model
where $\mathbf{T}=\mathbf{F}\mathbf{T}\mathbf{F}^{\intercal}+\mathbf{C}$, if
then given a new observation $\mathbf{z}_{k}$, it follows that[74]
where
Note that this approximation requires $Q(\mathbf{z}_{k})^{-1}-\mathbf{T}^{-1}$ to be positive-definite; in the case that it is not,
is used instead. Such an approach proves particularly useful when the dimensionality of the observations is much greater than that of the latent states,[75] and can be used to build filters that are particularly robust to nonstationarities in the observation model.[76]
Adaptive Kalman filters allow the filter to adapt to process dynamics that are not modeled in the process model $\mathbf{F}(t)$, which happens, for example, in the context of a maneuvering target when a reduced-order (constant velocity) Kalman filter is employed for tracking.[77]
Kalman–Bucy filtering (named for Richard Snowden Bucy) is a continuous time version of Kalman filtering.[78][79]
It is based on the state space model
where $\mathbf{Q}(t)$ and $\mathbf{R}(t)$ represent the intensities of the two white noise terms $\mathbf{w}(t)$ and $\mathbf{v}(t)$, respectively.
The filter consists of two differential equations, one for the state estimate and one for the covariance:
where the Kalman gain is given by
Note that in this expression for $\mathbf{K}(t)$ the covariance of the observation noise $\mathbf{R}(t)$ represents at the same time the covariance of the prediction error (or innovation) ${\tilde{\mathbf{y}}}(t)=\mathbf{z}(t)-\mathbf{H}(t){\hat{\mathbf{x}}}(t)$; these covariances are equal only in the case of continuous time.[80]
The distinction between the prediction and update steps of discrete-time Kalman filtering does not exist in continuous time.
The second differential equation, for the covariance, is an example of a Riccati equation. Nonlinear generalizations to Kalman–Bucy filters include the continuous-time extended Kalman filter.
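As a rough illustration of how these coupled differential equations could be integrated numerically (ignoring any control input, and using plain forward Euler where a real implementation would use a proper ODE solver):

import numpy as np

def kalman_bucy_step(x, P, z, F, H, Q, R, dt):
    # One forward-Euler step of the state-estimate and Riccati ODEs.
    K = P @ H.T @ np.linalg.inv(R)                  # continuous-time Kalman gain
    dx = F @ x + K @ (z - H @ x)                    # d/dt of the state estimate
    dP = F @ P + P @ F.T + Q - K @ R @ K.T          # matrix Riccati equation
    return x + dt * dx, P + dt * dP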
Most physical systems are represented as continuous-time models while discrete-time measurements are made frequently for state estimation via a digital processor. Therefore, the system model and measurement model are given by
where
The prediction equations are derived from those of the continuous-time Kalman filter without update from measurements, i.e., $\mathbf{K}(t)=0$. The predicted state and covariance are calculated respectively by solving a set of differential equations with the initial value equal to the estimate at the previous step.
For the case of linear time-invariant systems, the continuous-time dynamics can be exactly discretized into a discrete-time system using matrix exponentials.
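One standard way to perform this exact discretization is Van Loan's matrix-exponential construction; the sketch below assumes F and Q are the continuous-time dynamics matrix and process-noise intensity, and uses SciPy's expm.

import numpy as np
from scipy.linalg import expm

def discretize_lti(F, Q, dt):
    # Van Loan's construction: exponentiate a block matrix built from F and Q.
    n = F.shape[0]
    M = np.block([[-F, Q], [np.zeros((n, n)), F.T]]) * dt
    E = expm(M)
    F_d = E[n:, n:].T              # exp(F * dt), the discrete transition matrix
    Q_d = F_d @ E[:n, n:]          # discrete-time process-noise covariance
    return F_d, Q_d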
The update equations are identical to those of the discrete-time Kalman filter.
The traditional Kalman filter has also been employed for the recovery of sparse, possibly dynamic, signals from noisy observations. Recent works[81][82][83] utilize notions from the theory of compressed sensing/sampling, such as the restricted isometry property and related probabilistic recovery arguments, for sequentially estimating the sparse state in intrinsically low-dimensional systems.
Since linear Gaussian state-space models lead to Gaussian processes, Kalman filters can be viewed as sequential solvers for Gaussian process regression.[84]
|
https://en.wikipedia.org/wiki/Kalman_filter
|
In geometry, the equal incircles theorem derives from a Japanese Sangaku, and pertains to the following construction: a series of rays are drawn from a given point to a given line such that the inscribed circles of the triangles formed by adjacent rays and the base line are equal. In the illustration the equal blue circles define the spacing between the rays, as described.
The theorem states that the incircles of the triangles formed (starting from any given ray) by every other ray, every third ray, etc. and the base line are also equal. The case of every other ray is illustrated above by the green circles, which are all equal.
From the fact that the theorem does not depend on the angle of the initial ray, it can be seen that the theorem properly belongs to analysis, rather than geometry, and must relate to a continuous scaling function which defines the spacing of the rays. In fact, this function is the hyperbolic sine.
The theorem is a direct corollary of the following lemma:
Suppose that the nth ray makes an angle $\gamma_{n}$ with the normal to the baseline. If $\gamma_{n}$ is parameterized according to the equation $\tan\gamma_{n}=\sinh\theta_{n}$, then values of $\theta_{n}=a+nb$, where $a$ and $b$ are real constants, define a sequence of rays that satisfy the condition of equal incircles, and furthermore any sequence of rays satisfying the condition can be produced by suitable choice of the constants $a$ and $b$.
In the diagram, lines PS and PT are adjacent rays making angles $\gamma_{n}$ and $\gamma_{n+1}$ with line PR, which is perpendicular to the baseline, RST.
Line QXOY is parallel to the baseline and passes through O, the center of the incircle of $\triangle$PST, which is tangent to the rays at W and Z. Also, line PQ has length $h-r$, and line QR has length $r$, the radius of the incircle.
Then $\triangle$OWX is similar to $\triangle$PQX and $\triangle$OZY is similar to $\triangle$PQY, so that $XO=r\sec\gamma_{n}$ and $OY=r\sec\gamma_{n+1}$, and from XY = XO + OY we get $(h-r)(\tan\gamma_{n+1}-\tan\gamma_{n})=r(\sec\gamma_{n}+\sec\gamma_{n+1}).$
This relation on a set of angles, $\{\gamma_{m}\}$, expresses the condition of equal incircles.
To prove the lemma, we set $\tan\gamma_{n}=\sinh(a+nb)$, which gives $\sec\gamma_{n}=\cosh(a+nb)$.
Using $a+(n+1)b=(a+nb)+b$, we apply the addition rules for $\sinh$ and $\cosh$, and verify that the equal incircles relation is satisfied by setting $\frac{r}{h-r}=\tanh\tfrac{1}{2}b.$
This gives an expression for the parameter $b$ in terms of the geometric measures, $h$ and $r$. With this definition of $b$ we then obtain an expression for the radii, $r_{N}$, of the incircles formed by taking every $N$th ray as the sides of the triangles: $\frac{r_{N}}{h-r_{N}}=\tanh\tfrac{1}{2}Nb.$
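The construction can also be checked numerically. The following short sketch, with arbitrary illustrative constants h, a and b, computes incircle radii directly from the triangle geometry and confirms that adjacent rays, and likewise every second ray, yield equal incircles.

import math

def incircle_radius(h, theta_a, theta_b):
    # Triangle with apex P at height h above the baseline; its two sides lie on
    # rays with tan(gamma) = sinh(theta_a) and tan(gamma) = sinh(theta_b).
    base = h * (math.sinh(theta_b) - math.sinh(theta_a))   # segment cut on the baseline
    side_a = h * math.cosh(theta_a)                        # ray length h * sec(gamma_a)
    side_b = h * math.cosh(theta_b)                        # ray length h * sec(gamma_b)
    s = (base + side_a + side_b) / 2                       # semiperimeter
    return (base * h / 2) / s                              # r = area / semiperimeter

h, a, b = 1.0, 0.3, 0.4
print([incircle_radius(h, a + n * b, a + (n + 1) * b) for n in range(5)])  # adjacent rays: equal radii
print([incircle_radius(h, a + n * b, a + (n + 2) * b) for n in range(5)])  # every other ray: equal, larger radii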
|
https://en.wikipedia.org/wiki/Equal_incircles_theorem
|
In computer networking, ARP spoofing (also ARP cache poisoning or ARP poison routing) is a technique by which an attacker sends (spoofed) Address Resolution Protocol (ARP) messages onto a local area network. Generally, the aim is to associate the attacker's MAC address with the IP address of another host, such as the default gateway, causing any traffic meant for that IP address to be sent to the attacker instead.
ARP spoofing may allow an attacker to intercept data frames on a network, modify the traffic, or stop all traffic. Often the attack is used as an opening for other attacks, such as denial of service, man in the middle, or session hijacking attacks.[1]
The attack can only be used on networks that use ARP, and requires that the attacker has direct access to the local network segment to be attacked.[2]
The Address Resolution Protocol (ARP) is a widely used communications protocol for resolving Internet layer addresses into link layer addresses.
When an Internet Protocol (IP) datagram is sent from one host to another in a local area network, the destination IP address must be resolved to a MAC address for transmission via the data link layer. When another host's IP address is known, and its MAC address is needed, a broadcast packet is sent out on the local network. This packet is known as an ARP request. The destination machine with the IP in the ARP request then responds with an ARP reply that contains the MAC address for that IP.[2]
ARP is a stateless protocol. Network hosts will automatically cache any ARP replies they receive, regardless of whether network hosts requested them. Even ARP entries that have not yet expired will be overwritten when a new ARP reply packet is received. There is no method in the ARP protocol by which a host can authenticate the peer from which the packet originated. This behavior is the vulnerability that allows ARP spoofing to occur.[1][2][3]
The basic principle behind ARP spoofing is to exploit the lack of authentication in the ARP protocol by sending spoofed ARP messages onto the LAN. ARP spoofing attacks can be run from a compromised host on the LAN, or from an attacker's machine that is connected directly to the target LAN.
An attacker using ARP spoofing disguises itself as a host in the transmission of data on the network between the users.[4] The users then do not know that the attacker is not the real host on the network.[4]
Generally, the goal of the attack is to associate the attacker's host MAC address with the IP address of a target host, so that any traffic meant for the target host will be sent to the attacker's host. The attacker may choose to inspect the packets (spying), while forwarding the traffic to the actual default destination to avoid discovery, modify the data before forwarding it (man-in-the-middle attack), or launch a denial-of-service attack by causing some or all of the packets on the network to be dropped.
The simplest form of certification is the use of static, read-only entries for critical services in the ARP cache of a host. IP address-to-MAC address mappings in the local ARP cache may be statically entered. Hosts do not need to transmit ARP requests where such entries exist.[5] While static entries provide some security against spoofing, they result in maintenance effort, as address mappings for all systems in the network must be generated and distributed. This does not scale on a large network, since the mapping has to be set for each pair of machines: each machine must hold an ARP entry for every other machine on the network, i.e. n−1 entries on each of the n machines, or n²−n entries in total.
Software that detects ARP spoofing generally relies on some form of certification or cross-checking of ARP responses. Uncertified ARP responses are then blocked. These techniques may be integrated with the DHCP server so that both dynamic and static IP addresses are certified. This capability may be implemented in individual hosts or may be integrated into Ethernet switches or other network equipment. The existence of multiple IP addresses associated with a single MAC address may indicate an ARP spoof attack, although there are legitimate uses of such a configuration. In a more passive approach, a device listens for ARP replies on a network, and sends a notification via email when an ARP entry changes.[6]
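A minimal sketch of such a passive detector, assuming the scapy packet-manipulation library is available, could watch ARP replies and flag any IP address whose associated MAC address changes:

from scapy.all import ARP, sniff

seen = {}  # IP address -> MAC address observed in earlier ARP replies

def check(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:      # op 2 = ARP reply ("is-at")
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        if ip in seen and seen[ip] != mac:
            print(f"Possible ARP spoofing: {ip} moved from {seen[ip]} to {mac}")
        seen[ip] = mac

sniff(filter="arp", prn=check, store=False)         # needs packet-capture privileges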
AntiARP[7]also provides Windows-based spoofing prevention at the kernel level. ArpStar is a Linux module for kernel 2.6 and Linksys routers that drops invalid packets that violate mapping, and contains an option to repoison or heal.
Some virtualized environments such as KVM also provide security mechanisms to prevent MAC spoofing between guests running on the same host.[8]
Additionally some Ethernet adapters provide MAC and VLAN anti-spoofing features.[9]
OpenBSD watches passively for hosts impersonating the local host and notifies in case of any attempt to overwrite a permanent entry.[10]
Operating systems react differently. Linux ignores unsolicited replies, but, on the other hand, uses responses to requests from other machines to update its cache. Solaris accepts updates on entries only after a timeout. In Microsoft Windows, the behavior of the ARP cache can be configured through several registry entries under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, ArpCacheLife, ArpCacheMinReferenceLife, ArpUseEtherSNAP, ArpTRSingleRoute, ArpAlwaysSourceRoute, ArpRetryCount.[11]
The techniques that are used in ARP spoofing can also be used to implement redundancy of network services. For example, some software allows a backup server to issue a gratuitous ARP request in order to take over for a defective server and transparently offer redundancy.[12][13] Circle[14] and CUJO are two companies that have commercialized products centered around this strategy.
ARP spoofing is often used by developers to debug IP traffic between two hosts when a switch is in use: if host A and host B are communicating through an Ethernet switch, their traffic would normally be invisible to a third monitoring host M. The developer configures A to have M's MAC address for B, and B to have M's MAC address for A; and also configures M to forward packets. M can now monitor the traffic, exactly as in a man-in-the-middle attack.
Some of the tools that can be used to carry out ARP spoofing attacks:
|
https://en.wikipedia.org/wiki/ARP_spoofing
|
An illegal opcode, also called an unimplemented operation,[1] unintended opcode[2] or undocumented instruction, is an instruction to a CPU that is not mentioned in any official documentation released by the CPU's designer or manufacturer, which nevertheless has an effect. Illegal opcodes were common on older CPUs designed during the 1970s, such as the MOS Technology 6502, Intel 8086, and the Zilog Z80. Unlike modern processors, those older processors have a very limited transistor budget, and thus to save space their designers often omitted circuitry to detect invalid opcodes and generate a trap to an error handler. The operation of many of these opcodes happens as a side effect of the wiring of transistors in the CPU, and usually combines functions of the CPU that were not intended to be combined. On old and modern processors, there are also instructions intentionally included in the processor by the manufacturer, but that are not documented in any official specification.
While most accidental illegal instructions have useless or even highly undesirable effects (such as crashing the computer), some can have useful functions in certain situations. Such instructions were sometimes exploited in computer games of the 1970s and 1980s to speed up certain time-critical sections. Another common use was in the ongoing battle between copy protection implementations and cracking. Here, they were a form of security through obscurity, and their secrecy usually did not last very long.
A danger associated with the use of illegal instructions was that, given the fact that the manufacturer does not guarantee their existence and function, they might disappear or behave differently with any change of the CPU internals or any new revision of the CPU, rendering programs that use them incompatible with the newer revisions. For example, a number of older Apple II games did not work correctly on the newer Apple IIc, because the latter used a newer CPU revision – the 65C02 – that did away with illegal opcodes.
Later CPUs, such as the 80186, 80286, 68000 and its descendants, do not have illegal opcodes that are widely known/used. Ideally, the CPU will behave in a well-defined way when it finds an unknown opcode in the instruction stream, such as triggering a certain exception or fault condition. The operating system's exception or fault handler will then usually terminate the application that caused the fault, unless the program had previously established its own exception/fault handler, in which case that handler would receive control. Another, less common way of handling illegal instructions is by defining them to do nothing except taking up time and space (equivalent to the CPU's official NOP instruction); this method is used by the TMS9900 and 65C02 processors, among others. Alternatively, unknown instructions can be emulated in software (e.g. LOADALL), or even "new" pseudo-instructions can be implemented. Some BIOSes, memory managers, and operating systems take advantage of this, for example, to let V86 tasks communicate with the underlying system, i.e. BOP (from "BIOS Operation") utilized by the Windows NTVDM.[3]
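To illustrate the two handling strategies in the abstract, here is a toy instruction dispatcher; the opcodes and handlers are entirely made up and are not modelled on any real CPU.

class IllegalOpcode(Exception):
    pass

HANDLERS = {
    0xEA: lambda cpu: None,                          # a documented NOP: no state change
    0xE8: lambda cpu: cpu.update(x=cpu["x"] + 1),    # a made-up "increment X" opcode
}

def step(cpu, opcode, unknown_as_nop=False):
    handler = HANDLERS.get(opcode)
    if handler is None:
        if unknown_as_nop:
            return                                   # 65C02/TMS9900 style: do nothing
        raise IllegalOpcode(hex(opcode))             # trap to a fault handler instead
    handler(cpu)

cpu = {"x": 0}
step(cpu, 0xE8)          # known opcode: x becomes 1
step(cpu, 0xFF, True)    # unknown opcode silently consumed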
In spite of Intel's guarantee against such instructions, research using techniques such as fuzzing uncovered a vast number of undocumented instructions in x86 processors as late as 2018.[4] Some of these instructions are shared across processor manufacturers, indicating that Intel and AMD are both aware of the instruction and its purpose, despite it not appearing in any official specification. Other instructions are specific to manufacturers or specific product lines. The purpose of the majority of x86 undocumented instructions is unknown.
Today, the details of these instructions are mainly of interest for exact emulation of older systems.
|
https://en.wikipedia.org/wiki/Unintended_instructions
|
RFC 4210 (CMPv2, 2005), RFC 9480 (CMPv3, 2023)
RFC 2510 (CMPv1, 1999)
The Certificate Management Protocol (CMP) is an Internet protocol standardized by the IETF used for obtaining X.509 digital certificates in a public key infrastructure (PKI).
CMP is a very feature-rich and flexible protocol, supporting many types of cryptography.
CMP messages are self-contained, which, as opposed to EST, makes the protocol independent of the transport mechanism and provides end-to-end security.
CMP messages are encoded in ASN.1, using the DER method.
CMP is described in RFC 4210. Enrollment request messages employ the Certificate Request Message Format (CRMF), described in RFC 4211.
The only other protocol so far using CRMF is Certificate Management over CMS (CMC), described in RFC 5273.
An obsolete version of CMP is described in RFC 2510, the respective CRMF version in RFC 2511.
In November 2023, CMP Updates, CMP Algorithms, and CoAP transfer for CMP were published, as well as the Lightweight CMP Profile focusing on industrial use.
In a public key infrastructure (PKI), so-called end entities (EEs) act as CMP clients, requesting one or more certificates for themselves from a certificate authority (CA), which issues the legal certificates and acts as a CMP server. Any number of registration authorities (RAs), or none, can be used to mediate between the EEs and CAs, each having both a downstream CMP server interface and an upstream CMP client interface. Using a "cross-certification request" a CA can get a certificate signed by another CA.
CMP messages are usually transferred using HTTP, but any reliable means of transportation can be used.
The Content-Type used is application/pkixcmp; older versions of the draft used application/pkixcmp-poll, application/x-pkixcmp or application/x-pkixcmp-poll.
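As a hedged illustration of the HTTP transfer, a client could POST a DER-encoded CMP request and read back the DER-encoded response roughly as follows; the URL and the request bytes are assumed to exist already.

import requests

def send_cmp_message(url: str, der_request: bytes) -> bytes:
    # POST the DER-encoded PKIMessage and return the DER-encoded response body.
    response = requests.post(
        url,
        data=der_request,
        headers={"Content-Type": "application/pkixcmp"},
    )
    response.raise_for_status()
    return response.content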
|
https://en.wikipedia.org/wiki/Certificate_Management_Protocol
|
HTTP Strict Transport Security (HSTS) is a policy mechanism that helps to protect websites against man-in-the-middle attacks such as protocol downgrade attacks[1] and cookie hijacking. It allows web servers to declare that web browsers (or other complying user agents) should automatically interact with it using only HTTPS connections, which provide Transport Layer Security (TLS/SSL), unlike the insecure HTTP used alone. HSTS is an IETF standards track protocol and is specified in RFC 6797.
The HSTS Policy is communicated by the server to the user agent via an HTTP response header field named Strict-Transport-Security. HSTS Policy specifies a period of time during which the user agent should only access the server in a secure fashion.[2]: §5.2 Websites using HSTS often do not accept clear text HTTP, either by rejecting connections over HTTP or systematically redirecting users to HTTPS (though this is not required by the specification). The consequence of this is that a user agent not capable of doing TLS will not be able to connect to the site.
The protection only applies after a user has visited the site at least once, relying on the principle of "trust on first use". The way this protection works is that when a user enters or selects an HTTP (not HTTPS) URL for the site, the client, such as a Web browser, will automatically upgrade to HTTPS without making an HTTP request, thereby preventing any HTTP man-in-the-middle attack from occurring.
The HSTS specification was published as RFC 6797 on 19 November 2012 after being approved on 2 October 2012 by the IESG for publication as a Proposed Standard RFC.[3] The authors originally submitted it as an Internet Draft on 17 June 2010. With the conversion to an Internet Draft, the specification name was altered from "Strict Transport Security" (STS) to "HTTP Strict Transport Security", because the specification applies only to HTTP.[4] The HTTP response header field defined in the HSTS specification however remains named "Strict-Transport-Security".
The last so-called "community version" of the then-named "STS" specification was published on 18 December 2009, with revisions based on community feedback.[5]
The original draft specification by Jeff Hodges from PayPal, Collin Jackson, and Adam Barth was published on 18 September 2009.[6]
The HSTS specification is based on original work by Jackson and Barth as described in their paper "ForceHTTPS: Protecting High-Security Web Sites from Network Attacks".[7]
Additionally, HSTS is the realization of one facet of an overall vision for improving web security, put forward by Jeff Hodges and Andy Steingruebl in their 2010 paper The Need for Coherent Web Security Policy Framework(s).[8]
A server implements an HSTS policy by supplying a header over an HTTPS connection (HSTS headers over HTTP are ignored).[1] For example, a server could send a header such that future requests to the domain for the next year (max-age is specified in seconds; 31,536,000 is equal to one non-leap year) use only HTTPS: Strict-Transport-Security: max-age=31536000.
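As a minimal sketch, a server built with a web framework such as Flask could attach this header to every response via a response hook; the framework choice here is purely illustrative.

from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts_header(response):
    # One non-leap year in seconds, as in the example above. The header is only
    # meaningful when the response is actually served over HTTPS.
    response.headers["Strict-Transport-Security"] = "max-age=31536000"
    return response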
When a web application issues HSTS Policy to user agents, conformant user agents behave as follows:[2]: §5
The HSTS Policy helps protect web application users against some passive (eavesdropping) and active network attacks.[2]: §2.4 A man-in-the-middle attacker has a greatly reduced ability to intercept requests and responses between a user and a web application server while the user's browser has HSTS Policy in effect for that web application.
The most important security vulnerability that HSTS can fix is SSL-stripping man-in-the-middle attacks, first publicly introduced by Moxie Marlinspike in his 2009 BlackHat Federal talk "New Tricks For Defeating SSL In Practice".[9][10] The SSL (and TLS) stripping attack works by transparently converting a secure HTTPS connection into a plain HTTP connection. The user can see that the connection is insecure, but crucially there is no way of knowing whether the connection should be secure. At the time of Marlinspike's talk, many websites did not use TLS/SSL, therefore there was no way of knowing (without prior knowledge) whether the use of plain HTTP was due to an attack, or simply because the website had not implemented TLS/SSL. Additionally, no warnings are presented to the user during the downgrade process, making the attack fairly subtle to all but the most vigilant. Marlinspike's sslstrip tool fully automates the attack.[citation needed]
HSTS addresses this problem[2]: §2.4 by informing the browser that connections to the site should always use TLS/SSL. The HSTS header can be stripped by the attacker if this is the user's first visit. Google Chrome, Mozilla Firefox, Internet Explorer, and Microsoft Edge attempt to limit this problem by including a "pre-loaded" list of HSTS sites.[11][12][13] Unfortunately this solution cannot scale to include all websites on the internet. See limitations, below.
HSTS can also help to prevent having one's cookie-based website login credentials stolen by widely available tools such as Firesheep.[14]
Because HSTS is time limited, it is sensitive to attacks involving shifting the victim's computer time, e.g. using false NTP packets.[15]
The initial request remains unprotected from active attacks if it uses an insecure protocol such as plain HTTP or if the URI for the initial request was obtained over an insecure channel.[2]: §14.6 The same applies to the first request after the activity period specified in the advertised HSTS Policy max-age (sites should set a period of several days or months depending on user activity and behavior).
Google Chrome, Mozilla Firefox, and Internet Explorer/Microsoft Edge address this limitation by implementing a "HSTS preloaded list", which is a list that contains known sites supporting HSTS.[16][11][12][13] This list is distributed with the browser so that it uses HTTPS for the initial request to the listed sites as well. As previously mentioned, these pre-loaded lists cannot scale to cover the entire Web. A potential solution might be achieved by using DNS records to declare HSTS Policy, and accessing them securely via DNSSEC, optionally with certificate fingerprints to ensure validity (which requires running a validating resolver to avoid last mile issues).[17]
Junade Ali has noted that HSTS is ineffective against the use of phony domains; by using DNS-based attacks, it is possible for a man-in-the-middle interceptor to serve traffic from an artificial domain which is not on the HSTS preload list.[18] This can be made possible by DNS spoofing attacks,[19] or simply by a domain name that misleadingly resembles the real domain name, such as www.example.org instead of www.example.com.
Even with an HSTS preloaded list, HSTS cannot prevent advanced attacks against TLS itself, such as the BEAST or CRIME attacks introduced by Juliano Rizzo and Thai Duong. Attacks against TLS itself are orthogonal to HSTS policy enforcement. Neither can it protect against attacks on the server itself: if someone compromises it, it will happily serve any content over TLS.
HSTS can be used to near-indelibly tag visiting browsers with recoverable identifying data (supercookies) which can persist in and out of browser "incognito" privacy modes. By creating a web page that makes multiple HTTP requests to selected domains, for example, if twenty browser requests to twenty different domains are used, theoretically over one million visitors can be distinguished (2^20) due to the resulting requests arriving via HTTP vs. HTTPS; the latter being the previously recorded binary "bits" established earlier via HSTS headers.[20]
Depending on the actual deployment there are certain threats (e.g. cookie injection attacks) that can be avoided by following best practices.
|
https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security
|
The Game of Life, also known as Conway's Game of Life or simply Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970.[1] It is a zero-player game,[2][3] meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves. It is Turing complete and can simulate a universal constructor or any other Turing machine.
The universe of the Game of Life is an infinite, two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, live or dead (or populated and unpopulated, respectively). Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur: any live cell with two or three live neighbours survives to the next generation; any dead cell with exactly three live neighbours becomes a live cell; and all other live cells die, while all other dead cells stay dead.
The initial pattern constitutes the seed of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed, live or dead; births and deaths occur simultaneously, and the discrete moment at which this happens is sometimes called a tick.[nb 1] Each generation is a pure function of the preceding one. The rules continue to be applied repeatedly to create further generations.
Stanisław Ulam, while working at theLos Alamos National Laboratoryin the 1940s, studied the growth of crystals, using a simplelattice networkas his model.[7]At the same time,John von Neumann, Ulam's colleague at Los Alamos, was working on the problem ofself-replicating systems.[8]: 1Von Neumann's initial design was founded upon the notion of one robot building another robot. This design is known as the kinematic model.[9][10]As he developed this design, von Neumann came to realize the great difficulty of building a self-replicating robot, and of the great cost in providing the robot with a "sea of parts" from which to build its replicant. Von Neumann wrote a paper entitled "The general and logical theory of automata" for theHixon Symposiumin 1948.[11]Ulam was the one who suggested using adiscretesystem for creating a reductionist model of self-replication.[8]: 3[12]: xxixUlam and von Neumann created a method for calculating liquid motion in the late 1950s. The driving concept of the method was to consider a liquid as a group of discrete units and calculate the motion of each based on its neighbours' behaviours.[13]: 8Thus was born the first system of cellular automata. Like Ulam's lattice network,von Neumann's cellular automataare two-dimensional, with his self-replicator implemented algorithmically. The result was auniversal copier and constructorworking within a cellular automaton with a small neighbourhood (only those cells that touch are neighbours; for von Neumann's cellular automata, onlyorthogonalcells), and with 29 states per cell. Von Neumann gave anexistence proofthat a particular pattern would make endless copies of itself within the given cellular universe by designing a 200,000 cell configuration that could do so. This design is known as thetessellationmodel, and is called avon Neumann universal constructor.[14]
Motivated by questions in mathematical logic and in part by work onsimulation gamesby Ulam, among others,John Conwaybegan doing experiments in 1968 with a variety of different two-dimensional cellular automaton rules. Conway's initial goal was to define an interesting and unpredictable cellular automaton.[3]According toMartin Gardner, Conway experimented with different rules, aiming for rules that would allow for patterns to "apparently" grow without limit, while keeping it difficult toprovethat any given pattern would do so. Moreover, some "simple initial patterns" should "grow and change for a considerable period of time" before settling into a static configuration or a repeating loop.[1]Conway later wrote that the basic motivation for Life was to create a "universal" cellular automaton.[15][better source needed]
The game made its first public appearance in the October 1970 issue ofScientific American, inMartin Gardner's "Mathematical Games" column, which was based on personal conversations with Conway. Theoretically, the Game of Life has the power of auniversal Turing machine: anything that can be computedalgorithmically can be computed within the Game of Life.[16][2]Gardner wrote, "Because of Life's analogies with the rise, fall, and alterations of a society of living organisms, it belongs to a growing class of what are called 'simulation games' (games that resemble real-life processes)."[1]
Since its publication, the Game of Life has attracted much interest because of the surprising ways in which the patterns can evolve. It provides an example ofemergenceandself-organization.[3]A version of Life that incorporates random fluctuations has been used inphysicsto studyphase transitionsandnonequilibrium dynamics.[17]The game can also serve as a didacticanalogy, used to convey the somewhat counter-intuitive notion that design and organization can spontaneously emerge in the absence of a designer. For example, philosopherDaniel Dennetthas used the analogy of the Game of Life "universe" extensively to illustrate the possible evolution of complex philosophical constructs, such asconsciousnessandfree will, from the relatively simple set of deterministic physical laws which might govern our universe.[18][19][20]
The popularity of the Game of Life was helped by its coming into being at the same time as increasingly inexpensive computer access. The game could be run for hours on these machines, which would otherwise have remained unused at night. In this respect, it foreshadowed the later popularity of computer-generatedfractals. For many, the Game of Life was simply a programming challenge: a fun way to use otherwise wastedCPUcycles. For some, however, the Game of Life had more philosophical connotations. It developed a cult following through the 1970s and beyond; current developments have gone so far as to create theoretic emulations of computer systems within the confines of a Game of Life board.[21][22]
Many different types of patterns occur in the Game of Life, which are classified according to their behaviour. Common pattern types include: still lifes, which do not change from one generation to the next; oscillators, which return to their initial state after a finite number of generations; and spaceships, which translate themselves across the grid.
The earliest interesting patterns in the Game of Life were discovered without the use of computers. The simplest still lifes and oscillators were discovered while tracking the fates of various small starting configurations usinggraph paper,blackboards, and physical game boards, such as those used inGo. During this early research, Conway discovered that the R-pentominofailed to stabilize in a small number of generations. In fact, it takes 1103 generations to stabilize, by which time it has a population of 116 and has generated six escapinggliders;[23]these were the first spaceships ever discovered.[24]
Frequently occurring[25][26] examples (in that they emerge frequently from a random starting configuration of cells) of the three aforementioned pattern types are shown below, with live cells shown in black and dead cells in white. Period refers to the number of ticks a pattern must iterate through before returning to its initial configuration.
Thepulsar[27]is the most common period-3 oscillator. The great majority of naturally occurring oscillators have a period of 2, like the blinker and the toad, but oscillators of all periods are known to exist,[28][29][30]and oscillators of periods 4, 8, 14, 15, 30, and a few others have been seen to arise from random initial conditions.[31]Patterns which evolve for long periods before stabilizing are calledMethuselahs, the first-discovered of which was the R-pentomino.Diehardis a pattern that disappears after 130 generations. Starting patterns of eight or more cells can be made to die after an arbitrarily long time.[32]Acorntakes 5,206 generations to generate 633 cells, including 13 escaped gliders.[33]
Conway originally conjectured that no pattern can grow indefinitely—i.e. that for any initial configuration with a finite number of living cells, the population cannot grow beyond some finite upper limit. In the game's original appearance in "Mathematical Games", Conway offered a prize of fifty dollars (equivalent to $400 in 2024) to the first person who could prove or disprove the conjecture before the end of 1970. The prize was won in November by a team from theMassachusetts Institute of Technology, led byBill Gosper; the "Gosper glider gun" produces its first glider on the 15th generation, and another glider every 30th generation from then on. For many years, this glider gun was the smallest one known.[34]In 2015, a gun called the "Simkin glider gun", which releases a glider every 120th generation, was discovered that has fewer live cells but which is spread out across a larger bounding box at its extremities.[35]
Smaller patterns were later found that also exhibit infinite growth. All three of the patterns shown below grow indefinitely. The first two create a singleblock-laying switch engine: a configuration that leaves behind two-by-two still life blocks as it translates itself across the game's universe.[36]The third configuration creates two such patterns. The first has only ten live cells, which has been proven to be minimal.[37]The second fits in a five-by-five square, and the third is only one cell high.
Later discoveries included otherguns, which are stationary, and which produce gliders or other spaceships;puffer trains, which move along leaving behind a trail of debris; andrakes, which move and emit spaceships.[38]Gosper also constructed the first pattern with anasymptotically optimalquadratic growth rate, called abreederorlobster, which worked by leaving behind a trail of guns.
It is possible for gliders to interact with other objects in interesting ways. For example, if two gliders are shot at a block in a specific position, the block will move closer to the source of the gliders. If three gliders are shot in just the right way, the block will move farther away. Thissliding block memorycan be used to simulate acounter. It is possible to constructlogic gatessuch asAND,OR, andNOTusing gliders. It is possible to build a pattern that acts like afinite-state machineconnected to two counters. This has the same computational power as auniversal Turing machine, so the Game of Life is theoretically as powerful as any computer with unlimited memory and no time constraints; it isTuring complete.[16][2]In fact, several different programmable computer architectures[39][40]have been implemented in the Game of Life, including a pattern that simulatesTetris.[41]
Until the 2010s, all known spaceships could only move orthogonally or diagonally. Spaceships which move neither orthogonally nor diagonally are commonly referred to as oblique spaceships.[42][43] On May 18, 2010, Andrew J. Wade announced the first oblique spaceship, dubbed "Gemini", which creates a copy of itself displaced by (5, 1) while destroying its parent.[44][43] This pattern replicates in 34 million generations, and uses an instruction tape made of gliders oscillating between two stable configurations made of Chapman–Greene construction arms. These, in turn, create new copies of the pattern, and destroy the previous copy. In December 2015, diagonal versions of the Gemini were built.[45]
A more specific case is aknightship, a spaceship that moves two squares left for every one square it moves down (like aknight in chess), whose existence had been predicted byElwyn Berlekampsince 1982. The first elementary knightship, Sir Robin, was discovered in 2018 by Adam P. Goucher.[46]This is the first new spaceship movement pattern for an elementary spaceship found in forty-eight years. "Elementary" means that it cannot be decomposed into smaller interacting patterns such as gliders and still lifes.[47]
A pattern can contain a collection of guns that fire gliders in such a way as to construct new objects, including copies of the original pattern. Auniversal constructorcan be built which contains a Turing complete computer, and which can build many types of complex objects, including more copies of itself.[2]On November 23, 2013, Dave Greene built the firstreplicatorin the Game of Life that creates a complete copy of itself, including the instruction tape.[48]In October 2018, Adam P. Goucher finished his construction of the 0E0P metacell, a metacell capable of self-replication. This differed from previous metacells, such as the OTCA metapixel by Brice Due, which only worked with already constructed copies near them. The 0E0P metacell works by using construction arms to create copies that simulate the programmed rule.[49]The actual simulation of the Game of Life or otherMoore neighbourhoodrules is done by simulating an equivalent rule using thevon Neumann neighbourhoodwith more states.[50]The name 0E0P is short for "Zero Encoded by Zero Population", which indicates that instead of a metacell being in an "off" state simulating empty space, the 0E0P metacell removes itself when the cell enters that state, leaving a blank space.[51]
Many patterns in the Game of Life eventually become a combination of still lifes, oscillators, and spaceships; other patterns may be called chaotic. A pattern may stay chaotic for a very long time until it eventually settles to such a combination.
The Game of Life is undecidable, which means that given an initial pattern and a later pattern, no algorithm exists that can tell whether the later pattern is ever going to appear. Given that the Game of Life is Turing-complete, this is a corollary of the halting problem: the problem of determining whether a given program will finish running or continue to run forever from an initial input.[2]
From most random initial patterns of living cells on the grid, observers will find the population constantly changing as the generations tick by. The patterns that emerge from the simple rules may be considered a form of mathematical beauty. Small isolated subpatterns with no initial symmetry tend to become symmetrical. Once this happens, the symmetry may increase in richness, but it cannot be lost unless a nearby subpattern comes close enough to disturb it. In a very few cases, the society eventually dies out, with all living cells vanishing, though this may not happen for a great many generations. Most initial patterns eventually burn out, producing either stable figures or patterns that oscillate forever between two or more states;[52][53] many also produce one or more gliders or spaceships that travel indefinitely away from the initial location. Because of the nearest-neighbour based rules, no information can travel through the grid at a greater rate than one cell per unit time, so this velocity is said to be the cellular automaton speed of light and denoted c.
Early patterns with unknown futures, such as the R-pentomino, led computer programmers to write programs to track the evolution of patterns in the Game of Life. Most of the early algorithms were similar: they represented the patterns as two-dimensional arrays in computer memory. Typically, two arrays are used: one to hold the current generation, and one to calculate its successor. Often 0 and 1 represent dead and live cells, respectively. A nested for loop considers each element of the current array in turn, counting the live neighbours of each cell to decide whether the corresponding element of the successor array should be 0 or 1. The successor array is displayed. For the next iteration, the arrays may swap roles so that the successor array in the last iteration becomes the current array in the next iteration, or one may copy the values of the second array into the first array then update the second array from the first array again.
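A direct sketch of this two-array scheme, treating every cell outside a finite grid as dead, might look as follows.

def step_arrays(grid):
    # grid is a list of equal-length rows of 0/1 values.
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            live = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols
            )
            # B3/S23: born with exactly three neighbours, survive with two or three.
            nxt[r][c] = 1 if live == 3 or (grid[r][c] and live == 2) else 0
    return nxt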
A variety of minor enhancements to this basic scheme are possible, and there are many ways to save unnecessary computation. A cell that did not change at the last time step, and none of whose neighbours changed, is guaranteed not to change at the current time step as well, so a program that keeps track of which areas are active can save time by not updating inactive zones.[54]
To avoid decisions and branches in the counting loop, the rules can be rearranged from an egocentric approach of the inner field regarding its neighbours to a scientific observer's viewpoint: if the sum of all nine fields in a given neighbourhood is three, the inner field state for the next generation will be life; if the all-field sum is four, the inner field retains its current state; and every other sum sets the inner field to death.
To save memory, the storage can be reduced to one array plus two line buffers. One line buffer is used to calculate the successor state for a line, then the second line buffer is used to calculate the successor state for the next line. The first buffer is then written to its line and freed to hold the successor state for the third line. If atoroidalarray is used, a third buffer is needed so that the original state of the first line in the array can be saved until the last line is computed.
In principle, the Game of Life field is infinite, but computers have finite memory. This leads to problems when the active area encroaches on the border of the array. Programmers have used several strategies to address these problems. The simplest strategy is to assume that every cell outside the array is dead. This is easy to program but leads to inaccurate results when the active area crosses the boundary. A more sophisticated trick is to consider the left and right edges of the field to be stitched together, and the top and bottom edges also, yielding atoroidalarray. The result is that active areas that move across a field edge reappear at the opposite edge. Inaccuracy can still result if the pattern grows too large, but there are no pathological edge effects. Techniques of dynamic storage allocation may also be used, creating ever-larger arrays to hold growing patterns. The Game of Life on a finite field is sometimes explicitly studied; some implementations, such asGolly, support a choice of the standard infinite field, a field infinite only in one dimension, or a finite field, with a choice of topologies such as a cylinder, a torus, or aMöbius strip.
Alternatively, programmers may abandon the notion of representing the Game of Life field with a two-dimensional array, and use a different data structure, such as a vector of coordinate pairs representing live cells. This allows the pattern to move about the field unhindered, as long as the population does not exceed the size of the live-coordinate array. The drawback is that counting live neighbours becomes a hash-table lookup or search operation, slowing down simulation speed. With more sophisticated data structures this problem can also be largely solved.[citation needed]
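A sketch of this coordinate-based representation, storing live cells as a set of (x, y) pairs and tallying neighbour counts in a dictionary, is shown below; the usage example uses the standard five-cell glider.

from collections import Counter

def step_sparse(live):
    # Count how many live neighbours each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if dx or dy
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step_sparse(glider)
print(sorted(glider))        # the same glider shape, shifted one cell diagonally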
For exploring large patterns at great time depths, sophisticated algorithms such as Hashlife may be useful. There is also a method for implementation of the Game of Life and other cellular automata using arbitrary asynchronous updates while still exactly emulating the behaviour of the synchronous game.[55] Source code examples that implement the basic Game of Life scenario in various programming languages, including C, C++, Java and Python, can be found at Rosetta Code.[56]
Since the Game of Life's inception, new, similar cellular automata have been developed. The standard Game of Life is symbolized in rule-string notation as B3/S23. A cell is born if it has exactly three neighbours, survives if it has two or three living neighbours, and dies otherwise. The first number, or list of numbers, is what is required for a dead cell to be born. The second set is the requirement for a live cell to survive to the next generation. Hence B6/S16 means "a cell is born if there are six neighbours, and lives on if there are either one or six neighbours". Cellular automata on a two-dimensional grid that can be described in this way are known as Life-like cellular automata. Another common Life-like automaton, Highlife, is described by the rule B36/S23, because having six neighbours, in addition to the original game's B3/S23 rule, causes a birth. HighLife is best known for its frequently occurring replicators.[57][58]
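Such B/S rules can be applied with a small generalization of a set-based stepper: parse the rule string into birth and survival neighbour counts. The sketch below is illustrative only.

from collections import Counter

def parse_rule(rule):                        # e.g. "B36/S23" for HighLife
    birth, survive = rule.upper().split("/")
    return {int(d) for d in birth[1:]}, {int(d) for d in survive[1:]}

def step_rule(live, rule="B3/S23"):
    birth, survive = parse_rule(rule)
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if dx or dy
    )
    return {cell for cell, n in counts.items()
            if (n in survive if cell in live else n in birth)}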
Additional Life-like cellular automata exist. The vast majority of these 2^18 different rules[59] produce universes that are either too chaotic or too desolate to be of interest, but a large subset do display interesting behaviour. A further generalization produces the isotropic rulespace, with 2^102 possible cellular automaton rules[60] (the Game of Life again being one of them). These are rules that use the same square grid as the Life-like rules and the same eight-cell neighbourhood, and are likewise invariant under rotation and reflection. However, in isotropic rules, the positions of neighbour cells relative to each other may be taken into account in determining a cell's future state—not just the total number of those neighbours.
Some variations on the Game of Life modify the geometry of the universe as well as the rules. The above variations can be thought of as a two-dimensional square, because the world is two-dimensional and laid out in a square grid. One-dimensional square variations, known aselementary cellular automata,[61]and three-dimensional square variations have been developed, as have two-dimensionalhexagonal and triangularvariations. A variant usingaperiodic tilinggrids has also been made.[62]
Conway's rules may also be generalized such that instead of two states,liveanddead, there are three or more. State transitions are then determined either by a weighting system or by a table specifying separate transition rules for each state; for example, Mirek's Cellebration's multi-coloured Rules Table and Weighted Life rule families each include sample rules equivalent to the Game of Life.
Patterns relating to fractals and fractal systems may also be observed in certain Life-like variations. For example, the automaton B1/S12 generates four very close approximations to theSierpinski trianglewhen applied to a single live cell. The Sierpinski triangle can also be observed in the Game of Life by examining the long-term growth of an infinitely long single-cell-thick line of live cells,[63]as well as in Highlife,Seeds (B2/S), andStephen Wolfram'sRule 90.[64]
Immigration is a variation that is very similar to the Game of Life, except that there are twoonstates, often expressed as two different colours. Whenever a new cell is born, it takes on the on state that is the majority in the three cells that gave it birth. This feature can be used to examine interactions betweenspaceshipsand other objects within the game.[65]Another similar variation, called QuadLife, involves four different on states. When a new cell is born from three different on neighbours, it takes the fourth value, and otherwise, like Immigration, it takes the majority value.[66]Except for the variation among on cells, both of these variations act identically to the Game of Life.
Various musical composition techniques use the Game of Life, especially inMIDIsequencing.[67]A variety of programs exist for creating sound from patterns generated in the Game of Life.[68][69][70]
Computers have been used to follow and simulate the Game of Life since it was first publicized. When John Conway was first investigating how various starting configurations developed, he tracked them by hand using agoboard with its black and white stones. This was tedious and prone to errors. The first interactive Game of Life program was written in an early version ofALGOL 68Cfor thePDP-7byM. J. T. GuyandS. R. Bourne. The results were published in the October 1970 issue ofScientific American, along with the statement: "Without its help, some discoveries about the game would have been difficult to make."[1]
A color version of the Game of Life was written by Ed Hall in 1976 forCromemcomicrocomputers, and a display from that program filled the cover of the June 1976 issue ofByte.[71]The advent of microcomputer-based color graphics from Cromemco has been credited with a revival of interest in the game.[72]
Two early implementations of the Game of Life on home computers were by Malcolm Banthorpe written inBBC BASIC. The first was in the January 1984 issue ofAcorn Usermagazine, and Banthorpe followed this with a three-dimensional version in the May 1984 issue.[73]Susan Stepney, Professor of Computer Science at theUniversity of York, followed this up in 1988 with Life on the Line, a program that generated one-dimensional cellular automata.[74]
There are now thousands of Game of Life programs online, so a full list will not be provided here. The following is a small selection of programs with some special claim to notability, such as popularity or unusual features. Most of these programs incorporate a graphical user interface for pattern editing and simulation, the capability for simulating multiple rules including the Game of Life, and a large library of interesting patterns in the Game of Life and other cellular automaton rules.
Google implemented aneaster eggof the Game of Life in 2012. Users who search for the term are shown an implementation of the game in the search results page.[77]
The visual novelAnonymous;Codeincludes a basic implementation of the Game of Life in it, which is connected to the plot of the novel. Near the end ofAnonymous;Code, a certain pattern that appears throughout the game as a tattoo on the heroine Momo Aizaki has to be entered into the Game of Life to complete the game (Kok's galaxy, the same pattern used as thelogofor the open-source Game of Life program Golly).
|
https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
|
The 1970s commodities boom refers to the rise of many commodity prices in the 1970s. Excess demand was created as the money supply increased too much, and supply shocks came from the Arab–Israeli conflict, initially between Israel and Egypt. The Six-Day War, in which Israel captured and then occupied the Sinai Peninsula for 15 years, and the closure of the Suez Canal (1967–1975) for 8 of those years, led to supply shocks. 66% of oil consumed by Europe at that time came through the Suez Canal and had to be redirected around the continent of Africa.[1][verification needed] 15% of all maritime trade passed through the Suez Canal in 1966, the year before it closed.[2][verification needed]
The Yom Kippur War in late 1973 was Egypt's attempt to cross the Suez Canal and take the Sinai Peninsula back from Israeli occupation. On October 19, 1973, Richard Nixon requested $2.2 billion to support Israel in the Yom Kippur War. That resulted in OAPEC countries cutting production of oil and placing an embargo on oil exports to the United States and other countries backing Israel. That was the start of the 1973 oil crisis.[4][verification needed]
In the early 1970s, the Soviet Union and many other planned economies started importing large amounts of grains in the global markets, driving up demand for grains and oilseeds.[5][verification needed]
Sugar prices spiked in the 1970s because of Soviet Union demand/hoarding and possible futures contracts market manipulation. The Soviet Union was the largest producer of sugar at the time. In 1974, Coca-Cola switched over to high-fructose corn syrup because of the elevated prices.[6][7][verification needed]
The price of coffee went up in the mid 1970s because of the black frost of 1975, which killed 66% of the coffee trees in Brazil, the number one producer of coffee at the time. A big earthquake in Guatemala in 1976 disrupted supply chains in what was then the world's fifth biggest coffee exporter. The Angolan Civil War started in 1975, disrupting Angola's coffee production and shipping.[8][verification needed]
The United States weaned itself off the gold standard in the 1970s, allowing the price of gold to float. The price of gold went from a set exchange rate of $42.22 per troy ounce in 1973 to almost $200 per ounce in 1976.[9][verification needed]
Price controls were implemented in the early 1970s to combat inflation, but when those price controls were lifted on commodities like copper, the prices appreciated. These quick price rises were known as the Nixon shock.[10][11][verification needed]
Coal price increases in the 1970s were primarily demand driven. There was a strike by the United Mine Workers of America called the Brookside Strike in late 1974 that lasted for 3 weeks but was said to have little impact on prices.[12][verification needed]
After World War II the Bretton Woods system fixed exchange rates between countries and pegged currencies to gold. This kept some commodities like gold very stable. When the Bretton Woods system failed in the early 1970s and exchange rates began to float, commodities also became very volatile.[13][10][verification needed]
|
https://en.wikipedia.org/wiki/1970s_commodities_boom
|
Journaled File System (JFS) is a 64-bit journaling file system created by IBM. There are versions for the AIX, OS/2, eComStation, ArcaOS and Linux operating systems. The latter is available as free software under the terms of the GNU General Public License (GPL). HP-UX has another, different filesystem named JFS that is actually an OEM version of Veritas Software's VxFS.
In the AIX operating system, two generations of JFS exist, which are calledJFS(JFS1) andJFS2respectively.[1]
IBM'sJFSwas originally designed for32-bitsystems.JFS2was designed for64-bitsystems.[2]
In other operating systems, such as OS/2 and Linux, only the second generation exists and is called simplyJFS.[3]This should not be confused with JFS inAIXthat actually refers to JFS1.
IBM introduced JFS with the initial release of AIX version 3.1 in February 1990. This file system, now called JFS1 on AIX, was the premier file system for AIX over the following decade and was installed in thousands or millions of customers' AIX systems. Historically, the JFS1 file system is very closely tied to the memory manager of AIX,[1] which is a typical design for a file system supporting only one operating system. JFS was one of the first file systems to support journaling.
In 1995, work began to enhance the file system to be more scalable and to support machines that had more than one processor. Another goal was to have a more portable file system, capable of running on multiple operating systems. After several years of designing, coding, and testing, the new JFS was first shipped in OS/2 Warp Server for eBusiness in April 1999, and then in OS/2 Warp Client in October 2000. In December 1999, a snapshot of the original OS/2 JFS source was granted to the open source community and work began to port JFS to Linux. The first stable release of JFS for Linux appeared in June 2001.[3] The JFS for Linux project is maintained by a small group of contributors known as the JFS Core Team.[4] This source release also formed the basis of a re-port of the open-source JFS back to OS/2.
In parallel with this effort, some of the JFS development team returned to the AIX Operating System Development Group in 1997 and started to move this new JFS source base to the AIX operating system. In May 2001, a second journaled file system,Enhanced Journaled File System (JFS2), was made available for AIX 5L.[1][3]
Early in 2008 there was speculation that IBM was no longer interested in maintaining JFS and thus it should not be used in production environments.[5] However, Dave Kleikamp, a member of the IBM Linux Technology Center and JFS Core Team,[4] explained that they still follow changes in the Linux kernel and try to fix potential software bugs. He went on to add that certain distributions expect a larger resource commitment from them and opt not to support the filesystem.[6]
In 2012,TRIMcommand support forsolid-state driveswas added to JFS.[7]
JFS supports the following features.[8][9]
JFS is a journaling file system. Rather than adding journaling as an add-on feature, as in the ext3 file system, it was implemented from the start. The journal can be up to 128 MB. JFS journals metadata only, which means that metadata will remain consistent but user files may be corrupted after a crash or power loss. JFS's journaling is similar to XFS in that it only journals parts of the inode.[10]
JFS uses aB+ treeto accelerate lookups in directories. JFS can store 8 entries of a directory in the directory'sinodebefore moving the entries to a B+ tree. JFS also indexes extents in a B+ tree.
JFS dynamically allocates space for diskinodesas necessary. Each inode is 512 bytes. 32 inodes are allocated on a 16 kB Extent.
JFS allocates files as extents. An extent is a variable-length sequence of aggregate blocks. An extent may be located in several allocation groups; to handle this, the extents are indexed in a B+ tree for better performance when locating them.
Compressionis supported only in JFS1 on AIX and uses a variation of theLZ algorithm. Because of highCPU usageand increased free spacefragmentation, compression is not recommended for use other than on a single userworkstationor off-linebackupareas.
JFS normally applies read-shared, write-exclusive locking to files, which avoids data inconsistencies but imposes write serialization at the file level. The CIO option disables this locking. Applications such as relational databases which maintain data consistency themselves can use this option to largely eliminate filesystem overheads.[11]
JFS uses allocation groups. Allocation groups divide the aggregate space into chunks. This allows JFS to use resource allocation policies to achieve great I/O performance. The first policy is to try to cluster disk blocks and disk inodes for related data in the same AG in order to achieve good locality for the disk. The second policy is to distribute unrelated data throughout the file system in an attempt to minimize free-space fragmentation. When there is an open file JFS will lock the AG the file resides in and only allow the open file to grow. This reduces fragmentation as only the open file can write to the AG.
Thesuperblockmaintains information about the entire file system and includes the following fields:
In the Linux operating system, JFS is supported with thekernelmodule (since the kernel version2.4.18pre9-ac4) and the complementaryuserspaceutilities packaged under the nameJFSutils. MostLinux distributionssupport JFS unless it is specifically removed due to space restrictions, such as onlive CDs.[citation needed]
According to benchmarks of the available filesystems for Linux, JFS is fast and reliable, with consistently good performance under different kinds of load.[12]
Actual usage of JFS in Linux is uncommon; however, JFS does have a niche role in Linux: unlike most other Linux file systems, it offers a case-insensitive mount option.[13]
There are also potential problems with JFS, such as its implementation of journal writes: they can be postponed until another trigger occurs, potentially indefinitely, which can cause data loss over a theoretically unbounded timeframe.[14]
|
https://en.wikipedia.org/wiki/JFS_(file_system)
|
Ineconomics,non-convexityrefers to violations of theconvexity assumptions of elementary economics. Basic economics textbooks concentrate on consumers withconvex preferences(that do not prefer extremes to in-between values) and convexbudget setsand on producers with convexproduction sets; for convex models, the predicted economic behavior is well understood.[1][2]When convexity assumptions are violated, then many of the good properties of competitive markets need not hold: Thus, non-convexity is associated withmarket failures,[3][4]wheresupply and demanddiffer or wheremarket equilibriacan beinefficient.[1][4][5][6][7][8]Non-convex economies are studied withnonsmooth analysis, which is a generalization ofconvex analysis.[8][9][10][11]
If a preference set isnon-convex, then some prices determine a budget-line that supports twoseparateoptimal-baskets. For example, we can imagine that, for zoos, a lion costs as much as an eagle, and further that a zoo's budget suffices for one eagle or one lion. We can suppose also that a zoo-keeper views either animal as equally valuable. In this case, the zoo would purchase either one lion or one eagle. Of course, a contemporary zoo-keeper does not want to purchase half of an eagle and half of a lion. Thus, the zoo-keeper's preferences are non-convex: The zoo-keeper prefers having either animal to having any strictly convex combination of both.
When the consumer's preference set is non-convex, then (for some prices) the consumer's demand is not connected; a disconnected demand implies some discontinuous behavior by the consumer, as discussed by Harold Hotelling:
If indifference curves for purchases be thought of as possessing a wavy character, convex to the origin in some regions and concave in others, we are forced to the conclusion that it is only the portions convex to the origin that can be regarded as possessing any importance, since the others are essentially unobservable. They can be detected only by the discontinuities that may occur in demand with variation in price-ratios, leading to an abrupt jumping of a point of tangency across a chasm when the straight line is rotated. But, while such discontinuities may reveal the existence of chasms, they can never measure their depth. The concave portions of the indifference curves and their many-dimensional generalizations, if they exist, must forever remain in unmeasurable obscurity.[12]
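The jump Hotelling describes can be reproduced with a toy computation; the following is a rough sketch with made-up numbers and my own function names, not taken from the cited literature. It gives a consumer the extreme-favouring utility u(x, y) = x² + y², whose upper contour sets are non-convex, and maximizes it along a budget line; as the price ratio crosses 1, the chosen basket jumps from one axis to the other, so the demand correspondence is disconnected.

```python
import numpy as np

def demand(px, py, income, n=10001):
    """Maximize the non-convex-preference utility u(x, y) = x^2 + y^2
    on the budget line px*x + py*y = income by brute-force search."""
    x = np.linspace(0.0, income / px, n)   # feasible x along the budget line
    y = (income - px * x) / py             # corresponding y
    u = x**2 + y**2                        # preference for extremes: non-convex upper contour sets
    best = np.argmax(u)
    return x[best], y[best]

income = 10.0
for px in (0.8, 0.99, 1.01, 1.2):          # vary the price of x while py = 1
    print(px, demand(px, 1.0, income))
# As px rises past py, the chosen basket jumps from "all x" to "all y":
# the demand correspondence is disconnected, which is the abrupt jump of the
# tangency point described in the quotation above.
```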
The difficulties of studying non-convex preferences were emphasized byHerman Wold[13]and again byPaul Samuelson, who wrote that non-convexities are "shrouded in eternaldarkness ...",[14]according to Diewert.[15]
When convexity assumptions are violated, then many of the good properties of competitive markets need not hold: Thus, non-convexity is associated withmarket failures, wheresupply and demanddiffer or wheremarket equilibriacan beinefficient.[1]Non-convex preferences were illuminated from 1959 to 1961 by a sequence of papers inThe Journal of Political Economy(JPE). The main contributors wereMichael Farrell,[16]Francis Bator,[17]Tjalling Koopmans,[18]and Jerome Rothenberg.[19]In particular, Rothenberg's paper discussed the approximate convexity of sums of non-convex sets.[20]TheseJPE-papers stimulated a paper byLloyd ShapleyandMartin Shubik, which considered convexified consumer-preferences and introduced the concept of an "approximate equilibrium".[21]TheJPE-papers and the Shapley–Shubik paper influenced another notion of "quasi-equilibria", due toRobert Aumann.[22][23]
Non-convex sets have been incorporated in the theories of general economic equilibria.[24]These results are described in graduate-level textbooks inmicroeconomics,[25]general equilibrium theory,[26]game theory,[27]mathematical economics,[28]and applied mathematics (for economists).[29]TheShapley–Folkman lemmaestablishes that non-convexities are compatible with approximate equilibria in markets with many consumers; these results also apply toproduction economieswith many smallfirms.[30]
Non-convexity is important under oligopolies and especially monopolies.[8] Concerns with large producers exploiting market power initiated the literature on non-convex sets, when Piero Sraffa wrote about firms with increasing returns to scale in 1926,[31] after which Harold Hotelling wrote about marginal cost pricing in 1938.[32] Both Sraffa and Hotelling illuminated the market power of producers without competitors, clearly stimulating a literature on the supply-side of the economy.[33]
Recent research in economics has recognized non-convexity in new areas of economics. In these areas, non-convexity is associated withmarket failures, whereequilibrianeed not beefficientor where no competitive equilibrium exists becausesupply and demanddiffer.[1][4][5][6][7][8]Non-convex sets arise also withenvironmental goods(and otherexternalities),[6][7]and with market failures,[3]andpublic economics.[5][34]Non-convexities occur also withinformation economics,[35]and withstock markets[8](and otherincomplete markets).[36][37]Such applications continued to motivate economists to study non-convex sets.[1]In some cases, non-linear pricing or bargaining may overcome the failures of markets with competitive pricing; in other cases, regulation may be justified.
The previously mentioned applications concern non-convexities in finite-dimensionalvector spaces, where points represent commodity bundles. However, economists also consider dynamic problems of optimization over time, using the theories ofdifferential equations,dynamic systems,stochastic processes, andfunctional analysis: Economists use the following optimization methods:
In these theories, regular problems involve convex functions defined on convex domains, and this convexity allows simplifications of techniques and economically meaningful interpretations of the results.[43][44][45] In economics, dynamic programming was used by Martin Beckmann and Richard F. Muth for work on inventory theory and consumption theory.[46] Robert C. Merton used dynamic programming in his 1973 article on the intertemporal capital asset pricing model.[47] (See also Merton's portfolio problem). In Merton's model, investors chose between income today and future income or capital gains, and their solution is found via dynamic programming. Stokey, Lucas & Prescott use dynamic programming to solve problems in economic theory, problems involving stochastic processes.[48] Dynamic programming has been used in optimal economic growth, resource extraction, principal–agent problems, public finance, business investment, asset pricing, factor supply, and industrial organization. Ljungqvist & Sargent apply dynamic programming to study a variety of theoretical questions in monetary policy, fiscal policy, taxation, economic growth, search theory, and labor economics.[49] Dixit & Pindyck used dynamic programming for capital budgeting.[50] For dynamic problems, non-convexities also are associated with market failures,[51] just as they are for fixed-time problems.[52]
Economists have increasingly studied non-convex sets with nonsmooth analysis, which generalizes convex analysis. Convex analysis centers on convex sets and convex functions, for which it provides powerful ideas and clear results, but it is not adequate for the analysis of non-convexities, such as increasing returns to scale.[53] "Non-convexities in [both] production and consumption ... required mathematical tools that went beyond convexity, and further development had to await the invention of non-smooth calculus": For example, Clarke's differential calculus for Lipschitz continuous functions, which uses Rademacher's theorem and which is described by Rockafellar & Wets (1998)[54] and Mordukhovich (2006),[9] according to Khan (2008).[10] Brown (1995, pp. 1967–1968) wrote that the "major methodological innovation in the general equilibrium analysis of firms with pricing rules" was "the introduction of the methods of non-smooth analysis, as a [synthesis] of global analysis (differential topology) and [of] convex analysis." According to Brown (1995, p. 1966), "Non-smooth analysis extends the local approximation of manifolds by tangent planes [and extends] the analogous approximation of convex sets by tangent cones to sets" that can be non-smooth or non-convex.[11][55]
Exercise 45, page 146:Wold, Herman; Juréen, Lars (in association with Wold) (1953). "8 Some further applications of preference fields (pp. 129–148)".Demand analysis: A study in econometrics. Wiley publications in statistics. New York: John Wiley and Sons, Inc. Stockholm: Almqvist and Wiksell.MR0064385.
It will be noted that any point where the indifference curves are convex rather than concave cannot be observed in a competitive market. Such points are shrouded in eternal darkness—unless we make our consumer a monopsonist and let him choose between goods lying on a very convex "budget curve" (along which he is affecting the price of what he buys). In this monopsony case, we could still deduce the slope of the man's indifference curve from the slope of the observed constraint at the equilibrium point.
A gulf profound as that Serbonian Bog
Betwixt Damiata and Mount Casius old,
Where Armies whole have sunk.
Koopmans (1961, p. 478) and others, for example Farrell (1959, pp. 390–391), Farrell (1961a, p. 484), Bator (1961a, pp. 482–483), Rothenberg (1960, p. 438), and Starr (1969, p. 26), commented on Koopmans (1957, pp. 1–126, especially 9–16 [1.3 Summation of opportunity sets], 23–35 [1.6 Convex sets and the price implications of optimality], and 35–37 [1.7 The role of convexity assumptions in the analysis]):
Koopmans, Tjalling C. (1957). "Allocation of resources and the price system". In Koopmans, Tjalling C. (ed.). Three essays on the state of economic science. New York: McGraw–Hill Book Company. pp. 1–126. ISBN 0-07-035337-9.
Aumann, Robert J.(January–April 1964). "Markets with a continuum of traders".Econometrica.32(1–2):39–50.doi:10.2307/1913732.JSTOR1913732.MR0172689.
Aumann, Robert J.(August 1965)."Integrals of set-valued functions".Journal of Mathematical Analysis and Applications.12(1):1–12.doi:10.1016/0022-247X(65)90049-1.MR0185073.
Pages 52–55 with applications on pages 145–146, 152–153, and 274–275:Mas-Colell, Andreu(1985). "1.L Averages of sets".The Theory of General Economic Equilibrium: ADifferentiableApproach. Econometric Society Monographs. Cambridge University Press.ISBN0-521-26514-2.MR1113262.
Theorem C(6) on page 37 and applications on pages 115-116, 122, and 168:Hildenbrand, Werner(1974).Core and equilibria of a large economy. Princeton studies in mathematical economics. Princeton, NJ: Princeton University Press.ISBN978-0-691-04189-6.MR0389160.
Page 628:Mas–Colell, Andreu; Whinston, Michael D.; Green, Jerry R. (1995). "17.1 Large economies and nonconvexities".Microeconomic theory. Oxford University Press. pp.627–630.ISBN978-0-19-507340-9.
Ellickson (1994, p. xviii), and especially Chapter 7 "Walras meets Nash" (especially section 7.4 "Nonconvexity" pages 306–310 and 312, and also 328–329) and Chapter 8 "What is Competition?" (pages 347 and 352):Ellickson, Bryan (1994).Competitive equilibrium: Theory and applications. Cambridge University Press.ISBN978-0-521-31988-1.
Page 309:Moore, James C. (1999).Mathematical methods for economic theory: VolumeI. Studies in economic theory. Vol. 9. Berlin: Springer-Verlag.doi:10.1007/978-3-662-08544-8.ISBN3-540-66235-9.MR1727000.
Pages 47–48:Florenzano, Monique; Le Van, Cuong (2001).Finite dimensional convexity and optimization. Studies in economic theory. Vol. 13. in cooperation with Pascal Gourdel. Berlin: Springer-Verlag.doi:10.1007/978-3-642-56522-9.ISBN3-540-41516-5.MR1878374.S2CID117240618.
Heal, G. M. (April 1998).The Economics of Increasing Returns(PDF). PaineWebber working paper series in money, economics, and finance. Columbia Business School. PW-97-20. Archived fromthe original(PDF)on 15 September 2015. Retrieved5 March2011.
|
https://en.wikipedia.org/wiki/Non-convexity_(economics)
|
Afrikaans, Azerbaijani, Indonesian, Malay, Bosnian, Catalan, Czech, Danish, German (Germany), Estonian, English (United States), Spanish (Spain), Spanish (Latin America), Basque, Filipino, French (France), Galician, Croatian, Zulu, Icelandic, Italian, Swahili, Latvian, Lithuanian, Hungarian, Dutch, Norwegian, Uzbek, Polish, Portuguese (Brazil), Portuguese (Portugal), Romanian, Albanian, Slovak, Slovenian, Finnish, Swedish, Vietnamese, Turkish, Greek, Bulgarian, Kyrgyz, Kazakh, Macedonian, Mongolian, Russian, Serbian, Ukrainian, Georgian, Armenian, Hebrew, Urdu, Arabic, Persian, Amharic, Nepali, Hindi, Marathi, Bengali, Punjabi, Gujarati, Tamil, Telugu, Kannada, Malayalam, Sinhala, Thai, Lao, Burmese, Khmer, Korean, Japanese, Simplified Chinese, Traditional Chinese[4]
English, Arabic, Catalan, Croatian, Czech, Danish, Dutch, Finnish, French, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Malay, Norwegian Bokmål, Polish, Portuguese, Romanian, Russian, Simplified Chinese, Slovak, Spanish, Swedish, Thai, Traditional Chinese, Turkish, Ukrainian, Vietnamese[6]
|
https://en.wikipedia.org/wiki/Comparison_of_web_map_services
|
Thepercolation thresholdis a mathematical concept inpercolation theorythat describes the formation of long-range connectivity inrandomsystems. Below the threshold a giantconnected componentdoes not exist; while above it, there exists a giant component of the order of system size. In engineering andcoffee making, percolation represents the flow of fluids throughporous media, but in the mathematics and physics worlds it generally refers to simplifiedlattice modelsof random systems or networks (graphs), and the nature of the connectivity in them. The percolation threshold is the critical value of the occupation probabilityp, or more generally a critical surface for a group of parametersp1,p2, ..., such that infinite connectivity (percolation) first occurs.[1]
The most common percolation model is to take a regular lattice, like a square lattice, and make it into a random network by randomly "occupying" sites (vertices) or bonds (edges) with a statistically independent probabilityp. At a critical thresholdpc, large clusters and long-range connectivity first appear, and this is called thepercolation threshold. Depending on the method for obtaining the random network, one distinguishes between thesite percolationthreshold and thebond percolationthreshold. More general systems have several probabilitiesp1,p2, etc., and the transition is characterized by acritical surfaceormanifold. One can also consider continuum systems, such as overlapping disks and spheres placed randomly, or the negative space (Swiss-cheesemodels).
To understand the threshold, one can consider a quantity such as the probability that there is a continuous path from one boundary to another along occupied sites or bonds, that is, within a single cluster. For example, one can consider a square system and ask for the probability P that there is a path from the top boundary to the bottom boundary. As a function of the occupation probability p, one finds a sigmoidal plot that goes from P = 0 at p = 0 to P = 1 at p = 1. The larger the square is compared to the lattice spacing, the sharper the transition will be. When the system size goes to infinity, P(p) will be a step function at the threshold value pc. For finite large systems, P(pc) is a constant whose value depends upon the shape of the system; for the square system discussed above, P(pc) = 1⁄2 exactly for any lattice by a simple symmetry argument.
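A minimal Monte Carlo sketch of this crossing probability for site percolation on an L × L square lattice follows; the helper names and the union–find bookkeeping are my own choices, not taken from any particular package.

```python
import random

def percolates(L, p, rng=random):
    """True if occupied sites form a top-to-bottom path on an L x L square lattice."""
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    parent = list(range(L * L + 2))        # two virtual nodes: top = L*L, bottom = L*L + 1

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for i in range(L):
        for j in range(L):
            if not occ[i][j]:
                continue
            k = i * L + j
            if i == 0:
                union(k, L * L)            # first row touches the virtual top
            if i == L - 1:
                union(k, L * L + 1)        # last row touches the virtual bottom
            if i > 0 and occ[i - 1][j]:
                union(k, (i - 1) * L + j)
            if j > 0 and occ[i][j - 1]:
                union(k, i * L + j - 1)
    return find(L * L) == find(L * L + 1)

def crossing_probability(L, p, trials=200):
    return sum(percolates(L, p) for _ in range(trials)) / trials

# Near the known square-lattice site threshold p_c ~ 0.5927 the curve rises most
# steeply, and the rise sharpens as L grows:
for p in (0.55, 0.59, 0.63):
    print(p, crossing_probability(64, p))
```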
There are other signatures of the critical threshold. For example, the size distribution (number of clusters of size s) drops off as a power law for large s at the threshold, ns(pc) ~ s−τ, where τ is a dimension-dependent percolation critical exponent. For an infinite system, the critical threshold corresponds to the first point (as p increases) where the size of the clusters becomes infinite.
In the systems described so far, it has been assumed that the occupation of a site or bond is completely random: this is the so-called Bernoulli percolation. For a continuum system, random occupancy corresponds to the points being placed by a Poisson process. Further variations involve correlated percolation, such as percolation clusters related to Ising and Potts models of ferromagnets, in which the bonds are put down by the Fortuin–Kasteleyn method.[2] In bootstrap or k-sat percolation, sites and/or bonds are first occupied and then successively culled from a system if a site does not have at least k neighbors. Another important model of percolation, in a different universality class altogether, is directed percolation, where connectivity along a bond depends upon the direction of the flow. Another variation of recent interest is explosive percolation, whose thresholds are listed on that page.
Over the last several decades, a tremendous amount of work has gone into finding exact and approximate values of the percolation thresholds for a variety of these systems. Exact thresholds are only known for certain two-dimensional lattices that can be broken up into a self-dual array, such that under a triangle-triangle transformation, the system remains the same. Studies using numerical methods have led to numerous improvements in algorithms and several theoretical discoveries.
Simple duality in two dimensions implies that all fully triangulated lattices (e.g., the triangular, union jack, cross dual, martini dual, asanoha or 3-12 dual, and the Delaunay triangulation) have site thresholds of 1⁄2, and self-dual lattices (square, martini-B) have bond thresholds of 1⁄2.
The notation such as (4,8²) comes from Grünbaum and Shephard,[3] and indicates that around a given vertex, going in the clockwise direction, one encounters first a square and then two octagons. Besides the eleven Archimedean lattices composed of regular polygons with every site equivalent, many other more complicated lattices with sites of different classes have been studied.
Error bars in the last digit or digits are shown by numbers in parentheses. Thus, 0.729724(3) signifies 0.729724 ± 0.000003, and 0.74042195(80) signifies 0.74042195 ± 0.00000080. The error bars variously represent one or two standard deviations in net error (including statistical and expected systematic error), or an empirical confidence interval, depending upon the source.
For a random tree-like network (i.e., a connected network with no cycles) without degree-degree correlation, it can be shown that such a network can have a giant component, and the percolation threshold (transmission probability) is given by
pc=1g1′(1)=⟨k⟩⟨k2⟩−⟨k⟩{\displaystyle p_{c}={\frac {1}{g_{1}'(1)}}={\frac {\langle k\rangle }{\langle k^{2}\rangle -\langle k\rangle }}}.
Whereg1(z){\displaystyle g_{1}(z)}is thegenerating functioncorresponding to theexcess degree distribution,⟨k⟩{\displaystyle {\langle k\rangle }}is the average degree of the network and⟨k2⟩{\displaystyle {\langle k^{2}\rangle }}is the secondmomentof thedegree distribution. So, for example, for anER network, since the degree distribution is aPoisson distribution, where⟨k2⟩=⟨k⟩2+⟨k⟩,{\displaystyle {\langle k^{2}\rangle =\langle k\rangle ^{2}+\langle k\rangle },}the threshold is atpc=⟨k⟩−1{\displaystyle p_{c}={\langle k\rangle }^{-1}}.
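A short sketch (helper names are mine) that estimates this threshold directly from a degree sequence via pc = ⟨k⟩ / (⟨k²⟩ − ⟨k⟩) and checks the Erdős–Rényi special case pc = 1/⟨k⟩ stated above:

```python
import math
import random

def percolation_threshold(degrees):
    """p_c = <k> / (<k^2> - <k>) for an uncorrelated, locally tree-like network."""
    n = len(degrees)
    k1 = sum(degrees) / n
    k2 = sum(d * d for d in degrees) / n
    return k1 / (k2 - k1)

rng = random.Random(0)

def poisson(lam):
    """Knuth's method; adequate for small lam."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        prod *= rng.random()
        if prod <= limit:
            return k - 1

# ER-like degree sequence: Poisson with mean <k> = 4, so the estimate should be
# close to 1/<k> = 0.25 up to sampling noise.
degrees = [poisson(4.0) for _ in range(100000)]
print(percolation_threshold(degrees))
```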
In networks with lowclustering,0<C≪1{\displaystyle 0<C\ll 1}, the critical point gets scaled by(1−C)−1{\displaystyle (1-C)^{-1}}such that:[4]
pc=11−C1g1′(1).{\displaystyle p_{c}={\frac {1}{1-C}}{\frac {1}{g_{1}'(1)}}.}
This indicates that for a given degree distribution, the clustering leads to a larger percolation threshold, mainly because for a fixed number of links, the clustering structure reinforces the core of the network with the price of diluting the global connections. For networks with high clustering, strong clustering could induce the core–periphery structure, in which the core and periphery might percolate at different critical points, and the above approximate treatment is not applicable.[5]
(4, 8²)
Note: sometimes "hexagonal" is used in place of honeycomb, although in some contexts a triangular lattice is also called ahexagonal lattice.z= bulkcoordination number.
In this section, sq-1,2,3 corresponds to square (NN+2NN+3NN),[39]etc. Equivalent to square-2N+3N+4N,[40]sq(1,2,3).[41]tri = triangular, hc = honeycomb.
Here NN = nearest neighbor, 2NN = second nearest neighbor (or next nearest neighbor), 3NN = third nearest neighbor (or next-next nearest neighbor), etc. These are also called 2N, 3N, 4N respectively in some papers.[39]
Here, one distorts a regular lattice of unit spacing by moving vertices uniformly within the box(x−α,x+α),(y−α,y+α){\displaystyle (x-\alpha ,x+\alpha ),(y-\alpha ,y+\alpha )}, and considers percolation when sites are within Euclidean distanced{\displaystyle d}of each other.
Site threshold is number of overlapping objects per lattice site.kis the length (net area). Overlapping squares are shown in the complex neighborhood section. Here z is the coordination number to k-mers of either orientation, withz=k2+10k−2{\displaystyle z=k^{2}+10k-2}for1×k{\displaystyle 1\times k}sticks.
0.5483(2)[56]
0.18019(9)[56]
0.50004(64)[56]
0.1093(2)[56]
The coverage is calculated frompc{\displaystyle p_{c}}byϕc=1−(1−pc)2k{\displaystyle \phi _{c}=1-(1-p_{c})^{2k}}for1×k{\displaystyle 1\times k}sticks, because there are2k{\displaystyle 2k}sites where a stick will cause an overlap with a given site.
For aligned1×k{\displaystyle 1\times k}sticks:ϕc=1−(1−pc)k{\displaystyle \phi _{c}=1-(1-p_{c})^{k}}
In AB percolation, apsite{\displaystyle p_{\mathrm {site} }}is the proportion of A sites among B sites, and bonds are drawn between sites of opposite species.[59]It is also called antipercolation.
In colored percolation, occupied sites are assigned one ofn{\displaystyle n}colors with equal probability, and connection is made along bonds between neighbors of different colors.[60]
Site bond percolation. Hereps{\displaystyle p_{s}}is the site occupation probability andpb{\displaystyle p_{b}}is the bond occupation probability, and connectivity is made only if both the sites and bonds along a path are occupied. The criticality condition becomes a curvef(ps,pb){\displaystyle f(p_{s},p_{b})}= 0, and some specific critical pairs(ps,pb){\displaystyle (p_{s},p_{b})}are listed below.
Square lattice:
Honeycomb (hexagonal) lattice:
Kagome lattice:
* For values on different lattices, see "An investigation of site-bond percolation on many lattices".[65]
Approximate formula for site-bond percolation on a honeycomb lattice
Laves lattices are the duals to the Archimedean lattices. Drawings from Ref.[6] See also Uniform tilings.
D(3²,4,3,4) = (2⁄3)(5³) + (1⁄3)(5⁴)
D(3,6,3,6) = (1⁄3)(4⁶) + (2⁄3)(4³)
D(3,4,6,4) = (1⁄6)(4⁶) + (2⁄6)(4³) + (3⁄6)(4⁴)
D(4,8²) = (1⁄2)(3⁴) + (1⁄2)(3⁸)
D(4,6,12) = (1⁄6)(3¹²) + (2⁄6)(3⁶) + (1⁄2)(3⁴)
D(3,12²) = (2⁄3)(3³) + (1⁄3)(3¹²)
Top 3 lattices: #13, #12, #36. Bottom 3 lattices: #34, #37, #11.[3]
Top 2 lattices: #35, #30. Bottom 2 lattices: #41, #42.[3]
Top 4 lattices: #22, #23, #21, #20. Bottom 3 lattices: #16, #17, #15.[3]
Top 2 lattices: #31, #32. Bottom lattice: #33.[3]
This figure shows something similar to the 2-uniform lattice #37, except the polygons are not all regular (there is a rectangle in the place of the two squares) and the size of the polygons is changed. This lattice is in the isoradial representation in which each polygon is inscribed in a circle of unit radius. The two squares in the 2-uniform lattice must now be represented as a single rectangle in order to satisfy the isoradial condition. The lattice is shown by black edges, and the dual lattice by red dashed lines. The green circles show the isoradial constraint on both the original and dual lattices. The yellow polygons highlight the three types of polygons on the lattice, and the pink polygons highlight the two types of polygons on the dual lattice. The lattice has vertex types (1⁄2)(3³,4²) + (1⁄2)(3,4,6,4), while the dual lattice has vertex types (1⁄15)(4⁶) + (6⁄15)(4²,5²) + (2⁄15)(5³) + (6⁄15)(5²,4). The critical point is where the longer bonds (on both the lattice and dual lattice) have occupation probability p = 2 sin(π/18) = 0.347296..., which is the bond percolation threshold on a triangular lattice, and the shorter bonds have occupation probability 1 − 2 sin(π/18) = 0.652703..., which is the bond percolation threshold on a hexagonal lattice. These results follow from the isoradial condition[71] but also follow from applying the star-triangle transformation to certain stars on the honeycomb lattice. Finally, it can be generalized to having three different probabilities in the three different directions, p1, p2 and p3 for the long bonds, and 1 − p1, 1 − p2, and 1 − p3 for the short bonds, where p1, p2 and p3 satisfy the critical surface for the inhomogeneous triangular lattice.
To the left, center, and right are: the martini lattice, the martini-A lattice, the martini-B lattice. Below: the martini covering/medial lattice, same as the 2×2, 1×1 subnet for kagome-type lattices (removed).
Some other examples of generalized bow-tie lattices (a-d) and the duals of the lattices (e-h):
The 2 x 2, 3 x 3, and 4 x 4 subnet kagome lattices. The 2 × 2 subnet is also known as the "triangular kagome" lattice.[80]
(For more results and comparison to the jamming density, seeRandom sequential adsorption)
The threshold gives the fraction of sites occupied by the objects when site percolation first takes place (not at full jamming). For longer k-mers see Ref.[89]
Here, we are dealing with networks that are obtained by covering a lattice with dimers, and then considering bond percolation on the remaining bonds. In discrete mathematics, this problem is known as the 'perfect matching' or the 'dimer covering' problem.
System is composed of ordinary (non-avoiding) random walks of length l on the square lattice.[91]
For disks,nc=4r2N/L2{\displaystyle n_{c}=4r^{2}N/L^{2}}equals the critical number of disks per unit area, measured in units of the diameter2r{\displaystyle 2r}, whereN{\displaystyle N}is the number of objects andL{\displaystyle L}is the system size
For disks,ηc=πr2N/L2=(π/4)nc{\displaystyle \eta _{c}=\pi r^{2}N/L^{2}=(\pi /4)n_{c}}equals critical total disk area.
4ηc{\displaystyle 4\eta _{c}}gives the number of disk centers within the circle of influence (radius 2 r).
rc=LηcπN=L2ncN{\displaystyle r_{c}=L{\sqrt {\frac {\eta _{c}}{\pi N}}}={\frac {L}{2}}{\sqrt {\frac {n_{c}}{N}}}}is the critical disk radius.
ηc=πabN/L2{\displaystyle \eta _{c}=\pi abN/L^{2}}for ellipses of semi-major and semi-minor axes of a and b, respectively. Aspect ratioϵ=a/b{\displaystyle \epsilon =a/b}witha>b{\displaystyle a>b}.
ηc=ℓmN/L2{\displaystyle \eta _{c}=\ell mN/L^{2}}for rectangles of dimensionsℓ{\displaystyle \ell }andm{\displaystyle m}. Aspect ratioϵ=ℓ/m{\displaystyle \epsilon =\ell /m}withℓ>m{\displaystyle \ell >m}.
ηc=πxN/(4L2(x−2)){\displaystyle \eta _{c}=\pi xN/(4L^{2}(x-2))}for power-law distributed disks withProb(radius≥R)=R−x{\displaystyle {\hbox{Prob(radius}}\geq R)=R^{-x}},R≥1{\displaystyle R\geq 1}.
ϕc=1−e−ηc{\displaystyle \phi _{c}=1-e^{-\eta _{c}}}equals critical area fraction.
For disks, Ref.[100]useϕc=1−e−πx/2{\displaystyle \phi _{c}=1-e^{-\pi x/2}}wherex{\displaystyle x}is the density of disks of radius1/2{\displaystyle 1/{\sqrt {2}}}.
nc=ℓ2N/L2{\displaystyle n_{c}=\ell ^{2}N/L^{2}}equals number of objects of maximum lengthℓ=2a{\displaystyle \ell =2a}per unit area.
For ellipses,nc=(4ϵ/π)ηc{\displaystyle n_{c}=(4\epsilon /\pi )\eta _{c}}
For void percolation,ϕc=e−ηc{\displaystyle \phi _{c}=e^{-\eta _{c}}}is the critical void fraction.
For more ellipse values, see[110][113]
For more rectangle values, see[116]
Both ellipses and rectangles belong to the superellipses, with|x/a|2m+|y/b|2m=1{\displaystyle |x/a|^{2m}+|y/b|^{2m}=1}. For more percolation values of superellipses, see.[103]
For the monodisperse particle systems, the percolation thresholds of concave-shaped superdisks are obtained as seen in[122]
For binary dispersions of disks, see[96][123][124]
*Theoretical estimate
Assuming power-law correlationsC(r)∼|r|−α{\displaystyle C(r)\sim |r|^{-\alpha }}
his the thickness of the slab,h× ∞ × ∞. Boundary conditions (b.c.) refer to the top and bottom planes of the slab.
Filling factor = fraction of space filled by touching spheres at every lattice site (for systems with uniform bond length only). Also calledAtomic Packing Factor.
Filling fraction (or Critical Filling Fraction) = filling factor * pc(site).
NN = nearest neighbor, 2NN = next-nearest neighbor, 3NN = next-next-nearest neighbor, etc.
k×k×k cubes are cubes of occupied sites on a lattice, and are equivalent to extended-range percolation of a cube of length (2k+1), with edges and corners removed, with z = (2k+1)³ − 12(2k−1) − 9 (center site not counted in z).
Question: the bond thresholds for the hcp and fcc lattices agree within the small statistical error. Are they identical, and if not, how far apart are they? Which threshold is expected to be bigger? Similarly for the ice and diamond lattices. See[189]
Here, one distorts a regular lattice of unit spacing by moving vertices uniformly within the cube(x−α,x+α),(y−α,y+α),(z−α,z+α){\displaystyle (x-\alpha ,x+\alpha ),(y-\alpha ,y+\alpha ),(z-\alpha ,z+\alpha )}, and considers percolation when sites are within Euclidean distanced{\displaystyle d}of each other.
Site threshold is the number of overlapping objects per lattice site. The coverage φcis the net fraction of sites covered, andvis the volume (number of cubes). Overlapping cubes are given in the section on thresholds of 3D lattices. Here z is the coordination number to k-mers of either orientation, withz=6k2+18k−4{\displaystyle z=6k^{2}+18k-4}
The coverage is calculated frompc{\displaystyle p_{c}}byϕc=1−(1−pc)3k{\displaystyle \phi _{c}=1-(1-p_{c})^{3k}}for sticks, andϕc=1−(1−pc)3k2{\displaystyle \phi _{c}=1-(1-p_{c})^{3k^{2}}}for plaquettes.
All overlapping except for jammed spheres and polymer matrix.
ηc=(4/3)πr3N/L3{\displaystyle \eta _{c}=(4/3)\pi r^{3}N/L^{3}}is the total volume (for spheres), where N is the number of objects and L is the system size.
ϕc=1−e−ηc{\displaystyle \phi _{c}=1-e^{-\eta _{c}}}is the critical volume fraction, valid for overlapping randomly placed objects.
For disks and plates, these are effective volumes and volume fractions.
For void ("Swiss-Cheese" model),ϕc=e−ηc{\displaystyle \phi _{c}=e^{-\eta _{c}}}is the critical void fraction.
For more results on void percolation around ellipsoids and elliptical plates, see.[214]
For more ellipsoid percolation values see.[201]
For spherocylinders, H/D is the ratio of the height to the diameter of the cylinder, which is then capped by hemispheres. Additional values are given in.[198]
For superballs, m is the deformation parameter, the percolation values are given in.,[215][216]In addition, the thresholds of concave-shaped superballs are also determined in[122]
For cuboid-like particles (superellipsoids), m is the deformation parameter, more percolation values are given in.[200]
Void percolation refers to percolation in the space around overlapping objects. Hereϕc{\displaystyle \phi _{c}}refers to the fraction of the space occupied by the voids (not of the particles) at the critical point, and is related toηc{\displaystyle \eta _{c}}byϕc=e−ηc{\displaystyle \phi _{c}=e^{-\eta _{c}}}.ηc{\displaystyle \eta _{c}}is defined as in the continuum percolation section above.
∗{\displaystyle ^{*}}In drilling percolation, the site thresholdpc{\displaystyle p_{c}}represents the fraction of columns in each direction that have not been removed, andϕc=pc3{\displaystyle \phi _{c}=p_{c}^{3}}. For the 1d drilling, we haveϕc=pc{\displaystyle \phi _{c}=p_{c}}(columns)pc{\displaystyle p_{c}}(sites).
†In tube percolation, the bond threshold represents the value of the parameterμ{\displaystyle \mu }such that the probability of putting a bond between neighboring vertical tube segments is1−e−μhi{\displaystyle 1-e^{-\mu h_{i}}}, wherehi{\displaystyle h_{i}}is the overlap height of two adjacent tube segments.[234]
ηc=(πd/2/Γ[d/2+1])rdN/Ld.{\displaystyle \eta _{c}=(\pi ^{d/2}/\Gamma [d/2+1])r^{d}N/L^{d}.}
In 4d,ηc=(1/2)π2r4N/L4{\displaystyle \eta _{c}=(1/2)\pi ^{2}r^{4}N/L^{4}}.
In 5d,ηc=(8/15)π2r5N/L5{\displaystyle \eta _{c}=(8/15)\pi ^{2}r^{5}N/L^{5}}.
In 6d,ηc=(1/6)π3r6N/L6{\displaystyle \eta _{c}=(1/6)\pi ^{3}r^{6}N/L^{6}}.
ϕc=1−e−ηc{\displaystyle \phi _{c}=1-e^{-\eta _{c}}}is the critical volume fraction, valid for overlapping objects.
For void models,ϕc=e−ηc{\displaystyle \phi _{c}=e^{-\eta _{c}}}is the critical void fraction, andηc{\displaystyle \eta _{c}}is the total volume of the overlapping objects
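The dimension-specific prefactors quoted above are simply the volume of the unit d-ball, π^(d/2) / Γ(d/2 + 1), from the general formula; a quick throwaway check of my own, not taken from any cited source:

```python
import math

def eta_prefactor(d):
    """Coefficient of r^d N / L^d in eta_c = (pi^(d/2) / Gamma(d/2 + 1)) r^d N / L^d."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

print(eta_prefactor(4), math.pi ** 2 / 2)        # 4d: (1/2) pi^2
print(eta_prefactor(5), 8 * math.pi ** 2 / 15)   # 5d: (8/15) pi^2
print(eta_prefactor(6), math.pi ** 3 / 6)        # 6d: (1/6) pi^3
```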
For thresholds on high dimensional hypercubic lattices, we have the asymptotic series expansions[237][245][248]
pcsite(d)=σ−1+32σ−2+154σ−3+834σ−4+657748σ−5+11907796σ−6+O(σ−7){\displaystyle p_{c}^{\mathrm {site} }(d)=\sigma ^{-1}+{\frac {3}{2}}\sigma ^{-2}+{\frac {15}{4}}\sigma ^{-3}+{\frac {83}{4}}\sigma ^{-4}+{\frac {6577}{48}}\sigma ^{-5}+{\frac {119077}{96}}\sigma ^{-6}+{\mathcal {O}}(\sigma ^{-7})}
pcbond(d)=σ−1+52σ−3+152σ−4+57σ−5+485512σ−6+O(σ−7){\displaystyle p_{c}^{\mathrm {bond} }(d)=\sigma ^{-1}+{\frac {5}{2}}\sigma ^{-3}+{\frac {15}{2}}\sigma ^{-4}+57\sigma ^{-5}+{\frac {4855}{12}}\sigma ^{-6}+{\mathcal {O}}(\sigma ^{-7})}
whereσ=2d−1{\displaystyle \sigma =2d-1}. For 13-dimensional bond percolation, for example, the deviation of the truncated series from the measured value is less than 10⁻⁶, and these formulas can be useful for higher-dimensional systems.
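A small helper (my own function names) that evaluates these truncated expansions; the truncation error is only controlled for large d, so results for low dimensions should not be trusted:

```python
def pc_site_asymptotic(d):
    """Truncated site-threshold expansion for the d-dimensional hypercubic lattice."""
    sigma = 2 * d - 1
    coeffs = [1, 3 / 2, 15 / 4, 83 / 4, 6577 / 48, 119077 / 96]   # powers sigma^-1 .. sigma^-6
    return sum(c * sigma ** -(i + 1) for i, c in enumerate(coeffs))

def pc_bond_asymptotic(d):
    """Truncated bond-threshold expansion; note the absent sigma^-2 term."""
    sigma = 2 * d - 1
    coeffs = [1, 0, 5 / 2, 15 / 2, 57, 4855 / 12]
    return sum(c * sigma ** -(i + 1) for i, c in enumerate(coeffs))

print(pc_bond_asymptotic(13))   # agrees with the measured value to better than 1e-6, per the text
print(pc_site_asymptotic(13))
```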
In a one-dimensional chain we establish bonds between distinct sitesi{\displaystyle i}andj{\displaystyle j}with probabilityp=C|i−j|1+σ{\displaystyle p={\frac {C}{|i-j|^{1+\sigma }}}}decaying as a power-law with an exponentσ>0{\displaystyle \sigma >0}. Percolation occurs[251][252]at a critical valueCc<1{\displaystyle C_{c}<1}forσ<1{\displaystyle \sigma <1}. The numerically determined percolation thresholds are given by:[253]
In these lattices there may be two percolation thresholds: the lower threshold is the probability above which infinite clusters appear, and the upper is the probability above which there is a unique infinite cluster.
Note: {m,n} is the Schläfli symbol, signifying a hyperbolic lattice in which n regular m-gons meet at every vertex
For bond percolation on {P,Q}, we have by dualitypc,ℓ(P,Q)+pc,u(Q,P)=1{\displaystyle p_{c,\ell }(P,Q)+p_{c,u}(Q,P)=1}. For site percolation,pc,ℓ(3,Q)+pc,u(3,Q)=1{\displaystyle p_{c,\ell }(3,Q)+p_{c,u}(3,Q)=1}because of the self-matching of triangulated lattices.
Cayley tree (Bethe lattice) with coordination numberz:pc=1/(z−1){\displaystyle z:p_{c}=1/(z-1)}
nn = nearest neighbors. For a (d+ 1)-dimensional hypercubic system, the hypercube is in d dimensions and the time direction points to the 2D nearest neighbors.
(1+1)-d square with z NN, square lattice for z odd, tilted square lattice for z even
For large z, pc~ 1/z[279]
p_b = bond threshold
p_s = site threshold
Site-bond percolation is equivalent to having different probabilities of connections:
P_0 = probability that no sites are connected
P_2 = probability that exactly one descendant is connected to the upper vertex (two connected together)
P_3 = probability that both descendants are connected to the original vertex (all three connected together)
Formulas:
P_0 = (1-p_s) + p_s(1-p_b)^2
P_2 = p_s p_b (1-p_b)
P_3 = p_s p_b^2
P_0 + 2P_2 + P_3 = 1
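The three cases are exhaustive once the two descendants are distinguished, so the formulas above must satisfy the stated identity; a quick numerical sanity check of my own:

```python
from itertools import product

def branch_probabilities(ps, pb):
    p0 = (1 - ps) + ps * (1 - pb) ** 2   # no descendant connected
    p2 = ps * pb * (1 - pb)              # one specific descendant connected
    p3 = ps * pb ** 2                    # both descendants connected
    return p0, p2, p3

for ps, pb in product((0.2, 0.5, 0.9), repeat=2):
    p0, p2, p3 = branch_probabilities(ps, pb)
    assert abs(p0 + 2 * p2 + p3 - 1) < 1e-12   # P_0 + 2*P_2 + P_3 = 1
print("identity holds on the sampled grid")
```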
Inhomogeneous triangular lattice bond percolation[20]
1−p1−p2−p3+p1p2p3=0{\displaystyle 1-p_{1}-p_{2}-p_{3}+p_{1}p_{2}p_{3}=0}
Inhomogeneous honeycomb lattice bond percolation = kagome lattice site percolation[20]
1−p1p2−p1p3−p2p3+p1p2p3=0{\displaystyle 1-p_{1}p_{2}-p_{1}p_{3}-p_{2}p_{3}+p_{1}p_{2}p_{3}=0}
Inhomogeneous (3,12^2) lattice, site percolation[7][281]
1−3(s1s2)2+(s1s2)3=0,{\displaystyle 1-3(s_{1}s_{2})^{2}+(s_{1}s_{2})^{3}=0,}ors1s2=1−2sin(π/18){\displaystyle s_{1}s_{2}=1-2\sin(\pi /18)}
Inhomogeneous union-jack lattice, site percolation with probabilitiesp1,p2,p3,p4{\displaystyle p_{1},p_{2},p_{3},p_{4}}[282]
p3=1−p1;p4=1−p2{\displaystyle p_{3}=1-p_{1};\qquad p_{4}=1-p_{2}}
Inhomogeneous martini lattice, bond percolation[74][283]
1−(p1p2r3+p2p3r1+p1p3r2)−(p1p2r1r2+p1p3r1r3+p2p3r2r3)+p1p2p3(r1r2+r1r3+r2r3)+r1r2r3(p1p2+p1p3+p2p3)−2p1p2p3r1r2r3=0{\displaystyle 1-(p_{1}p_{2}r_{3}+p_{2}p_{3}r_{1}+p_{1}p_{3}r_{2})-(p_{1}p_{2}r_{1}r_{2}+p_{1}p_{3}r_{1}r_{3}+p_{2}p_{3}r_{2}r_{3})+p_{1}p_{2}p_{3}(r_{1}r_{2}+r_{1}r_{3}+r_{2}r_{3})+r_{1}r_{2}r_{3}(p_{1}p_{2}+p_{1}p_{3}+p_{2}p_{3})-2p_{1}p_{2}p_{3}r_{1}r_{2}r_{3}=0}
Inhomogeneous martini lattice, site percolation.r= site in the star
1−r(p1p2+p1p3+p2p3−p1p2p3)=0{\displaystyle 1-r(p_{1}p_{2}+p_{1}p_{3}+p_{2}p_{3}-p_{1}p_{2}p_{3})=0}
Inhomogeneous martini-A (3–7) lattice, bond percolation. Left side (top of "A" to bottom):r2,p1{\displaystyle r_{2},\ p_{1}}. Right side:r1,p2{\displaystyle r_{1},\ p_{2}}. Cross bond:r3{\displaystyle \ r_{3}}.
1−p1r2−p2r1−p1p2r3−p1r1r3−p2r2r3+p1p2r1r3+p1p2r2r3+p1r1r2r3+p2r1r2r3−p1p2r1r2r3=0{\displaystyle 1-p_{1}r_{2}-p_{2}r_{1}-p_{1}p_{2}r_{3}-p_{1}r_{1}r_{3}-p_{2}r_{2}r_{3}+p_{1}p_{2}r_{1}r_{3}+p_{1}p_{2}r_{2}r_{3}+p_{1}r_{1}r_{2}r_{3}+p_{2}r_{1}r_{2}r_{3}-p_{1}p_{2}r_{1}r_{2}r_{3}=0}
Inhomogeneous martini-B (3–5) lattice, bond percolation
Inhomogeneous martini lattice with outside enclosing triangle of bonds, probabilitiesy,x,z{\displaystyle y,x,z}from inside to outside, bond percolation[283]
1−3z+z3−(1−z2)[3x2y(1+y−y2)(1+z)+x3y2(3−2y)(1+2z)]=0{\displaystyle 1-3z+z^{3}-(1-z^{2})[3x^{2}y(1+y-y^{2})(1+z)+x^{3}y^{2}(3-2y)(1+2z)]=0}
Inhomogeneous checkerboard lattice, bond percolation[58][94]
1−(p1p2+p1p3+p1p4+p2p3+p2p4+p3p4)+p1p2p3+p1p2p4+p1p3p4+p2p3p4=0{\displaystyle 1-(p_{1}p_{2}+p_{1}p_{3}+p_{1}p_{4}+p_{2}p_{3}+p_{2}p_{4}+p_{3}p_{4})+p_{1}p_{2}p_{3}+p_{1}p_{2}p_{4}+p_{1}p_{3}p_{4}+p_{2}p_{3}p_{4}=0}
Inhomogeneous bow-tie lattice, bond percolation[57][94]
1−(p1p2+p1p3+p1p4+p2p3+p2p4+p3p4)+p1p2p3+p1p2p4+p1p3p4+p2p3p4−u(1−p1p2−p3p4+p1p2p3p4)=0{\displaystyle 1-(p_{1}p_{2}+p_{1}p_{3}+p_{1}p_{4}+p_{2}p_{3}+p_{2}p_{4}+p_{3}p_{4})+p_{1}p_{2}p_{3}+p_{1}p_{2}p_{4}+p_{1}p_{3}p_{4}+p_{2}p_{3}p_{4}-u(1-p_{1}p_{2}-p_{3}p_{4}+p_{1}p_{2}p_{3}p_{4})=0}
wherep1,p2,p3,p4{\displaystyle p_{1},p_{2},p_{3},p_{4}}are the four bonds around the square andu{\displaystyle u}is the diagonal bond connecting the vertex between bondsp4,p1{\displaystyle p_{4},p_{1}}andp2,p3{\displaystyle p_{2},p_{3}}.
|
https://en.wikipedia.org/wiki/Percolation_threshold
|
Innumber theory, aWilson primeis aprime numberp{\displaystyle p}such thatp2{\displaystyle p^{2}}divides(p−1)!+1{\displaystyle (p-1)!+1}, where "!{\displaystyle !}" denotes thefactorial function; compare this withWilson's theorem, which states that every primep{\displaystyle p}divides(p−1)!+1{\displaystyle (p-1)!+1}. Both are named for 18th-centuryEnglishmathematicianJohn Wilson; in 1770,Edward Waringcredited the theorem to Wilson,[1]although it had been stated centuries earlier byIbn al-Haytham.[2]
The only known Wilson primes are5,13, and563(sequenceA007540in theOEIS). Costa et al. write that "the casep=5{\displaystyle p=5}is trivial", and credit the observation that 13 is a Wilson prime toMathews (1892).[3][4]Early work on these numbers included searches byN. G. W. H. BeegerandEmma Lehmer,[5][3][6]but 563 was not discovered until the early 1950s, when computer searches could be applied to the problem.[3][7][8]If any others exist, they must be greater than 2 × 1013.[3]It has beenconjecturedthat infinitely many Wilson primes exist, and that the number of Wilson primes in an interval[x,y]{\displaystyle [x,y]}is aboutloglogxy{\displaystyle \log \log _{x}y}.[9]
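A direct way to test the defining condition p² | (p − 1)! + 1 for small p is to accumulate the factorial modulo p². This brute-force sketch (helper names are mine, and the approach is hopelessly slow compared with the 2 × 10¹³ search bound mentioned above) recovers the three known Wilson primes:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_wilson_prime(p):
    """True if p is prime and p^2 divides (p - 1)! + 1."""
    if not is_prime(p):
        return False
    modulus = p * p
    fact = 1
    for k in range(2, p):            # (p - 1)! reduced modulo p^2 at every step
        fact = fact * k % modulus
    return (fact + 1) % modulus == 0

print([p for p in range(2, 1000) if is_wilson_prime(p)])   # [5, 13, 563]
```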
Several computer searches have been done in the hope of finding new Wilson primes.[10][11][12]TheIbercivisdistributed computingproject includes a search for Wilson primes.[13]Another search was coordinated at theGreat Internet Mersenne Prime Searchforum.[14]
Wilson's theorem can be expressed in general as(n−1)!(p−n)!≡(−1)nmodp{\displaystyle (n-1)!(p-n)!\equiv (-1)^{n}\ {\bmod {p}}}for everyintegern≥1{\displaystyle n\geq 1}and primep≥n{\displaystyle p\geq n}. Generalized Wilson primes of ordernare the primespsuch thatp2{\displaystyle p^{2}}divides(n−1)!(p−n)!−(−1)n{\displaystyle (n-1)!(p-n)!-(-1)^{n}}.
It was conjectured that for everynatural numbern, there are infinitely many Wilson primes of ordern.
The smallest generalized Wilson primes of ordern{\displaystyle n}are:
A primep{\displaystyle p}satisfying thecongruence(p−1)!≡−1+Bp(modp2){\displaystyle (p-1)!\equiv -1+Bp\ (\operatorname {mod} {p^{2}})}with small|B|{\displaystyle |B|}can be called anear-Wilson prime. Near-Wilson primes withB=0{\displaystyle B=0}are bona fide Wilson primes. The table on the right lists all such primes with|B|≤100{\displaystyle |B|\leq 100}from 106up to 4×1011.[3]
AWilson numberis a natural numbern{\displaystyle n}such thatW(n)≡0(modn2){\displaystyle W(n)\equiv 0\ (\operatorname {mod} {n^{2}})}, whereW(n)=±1+∏gcd(k,n)=11≤k≤nk,{\displaystyle W(n)=\pm 1+\prod _{\stackrel {1\leq k\leq n}{\gcd(k,n)=1}}{k},}and where the±1{\displaystyle \pm 1}term is positiveif and only ifn{\displaystyle n}has aprimitive rootand negative otherwise.[15]For every natural numbern{\displaystyle n},W(n){\displaystyle W(n)}is divisible byn{\displaystyle n}, and the quotients (called generalizedWilson quotients) are listed inOEIS:A157249. The Wilson numbers are
If a Wilson numbern{\displaystyle n}is prime, thenn{\displaystyle n}is a Wilson prime. There are 13 Wilson numbers up to 5×108.[16]
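The condition W(n) ≡ 0 (mod n²) can likewise be checked by brute force for small n. In this sketch (my own code) the sign of the ±1 term follows the definition above: +1 when n has a primitive root, which holds exactly for n = 1, 2, 4, p^k and 2p^k with p an odd prime, and −1 otherwise.

```python
from math import gcd

def has_primitive_root(n):
    """n has a primitive root iff n is 1, 2, 4, p^k or 2*p^k for an odd prime p."""
    if n in (1, 2, 4):
        return True
    m = n // 2 if n % 2 == 0 else n
    if m % 2 == 0:
        return False                 # divisible by 4 (and larger than 4)
    p = next(d for d in range(3, m + 1, 2) if m % d == 0)
    while m % p == 0:
        m //= p
    return m == 1                    # m was a power of the single odd prime p

def is_wilson_number(n):
    modulus = n * n
    prod = 1
    for k in range(1, n + 1):
        if gcd(k, n) == 1:
            prod = prod * k % modulus
    w = prod + 1 if has_primitive_root(n) else prod - 1
    return w % modulus == 0

print([n for n in range(1, 600) if is_wilson_number(n)])   # should print [1, 5, 13, 563]
```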
|
https://en.wikipedia.org/wiki/Wilson_prime
|
Skip graphsare a kind of distributed data structure based onskip lists.
They were invented in 2003 by James Aspnes and Gauri Shah.
A nearly identical data structure called SkipNet was independently invented by Nicholas Harvey, Michael Jones, Stefan Saroiu, Marvin Theimer and Alec Wolman, also in 2003.[1]
Skip graphs have the full functionality of a balanced tree in a distributed system. Skip graphs are mostly used in searching peer-to-peer networks. Because they provide the ability to query by key ordering, they improve over search tools based only on hash table functionality. In contrast to skip lists and other tree data structures, they are very resilient and can tolerate a large fraction of node failures. In addition, constructing, inserting, searching, and repairing a skip graph that was disturbed by failing nodes can be done by straightforward algorithms.[2]
A skip graph is adistributeddata structurebased onskip listsdesigned to resemble abalanced search tree. They are one of several methods to implement adistributed hash table, which are used to locate resources stored in different locations across a network, given the name (or key) of the resource.
Skip graphs offer several benefits over other distributed hash table schemes such asChord (peer-to-peer)andTapestry (DHT), including addition and deletion in expected logarithmic time, logarithmic space per resource to store indexing information, no required knowledge of the number of nodes in a set and support for complex range queries. A major distinction from Chord and Tapestry is that there is no hashing of search keys of resources, which allows related resources to be near each other in the skip graph; this property makes searches for values within a given range feasible. Another strength of skip graphs is the resilience to node failure in both random andadversarialfailure models.
As withskip lists, nodes are arranged in increasing order in multiple levels; each node in leveliis contained in leveli+1 with some probability p (an adjustable parameter).
Level 0 consists of one doubly linked list containing all of the nodes in the set. Lists become increasingly sparse at higher levels, until at the top level each list is composed of just one node. Where skip graphs differ from skip lists is that each level i ≥ 1 will contain multiple lists; membership of a key x in a list is defined by the membership vector m(x){\displaystyle m(x)}. The membership vector is defined as an infinite random word over a fixed alphabet; each list in the skip graph is identified by a finite word w from the same alphabet, and a node x is a member of the list exactly when w is a prefix of m(x){\displaystyle m(x)}.[2]
Skip graphs support the basic operations ofsearch,insertanddelete. Skip graphs will also support the more complexrange searchoperation.
The search algorithm for skip graphs is almost identical to the search algorithm for skip lists, but it is modified to run in a distributed system. Searches start at the top level and traverse the structure; at each level, the search traverses the list until the next node contains a greater key. When a greater key is found, the search drops to the next level, continuing until the key is found or it is determined that the key is not contained in the set of nodes. If the key is not contained in the set of nodes, the largest value less than the search key is returned.
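The following compact, non-distributed sketch is purely illustrative (the names and the in-memory representation are mine): membership vectors are random bit strings, the level-i lists group keys sharing the first i bits of their vectors, and a search walks along a level's sorted list until the next key would overshoot the target, then drops a level.

```python
import random
from bisect import insort

LEVELS = 8   # number of membership-vector bits in this toy model

def build_skip_graph(keys, seed=0):
    """Return (membership vectors, lists) where lists[i][prefix] is the sorted
    level-i list of keys whose membership vector starts with `prefix`."""
    rng = random.Random(seed)
    mv = {k: tuple(rng.randrange(2) for _ in range(LEVELS)) for k in keys}
    lists = [dict() for _ in range(LEVELS + 1)]
    for k in keys:
        for i in range(LEVELS + 1):
            insort(lists[i].setdefault(mv[k][:i], []), k)   # level 0 uses the empty prefix
    return mv, lists

def search(start, target, mv, lists):
    """Largest key <= target (or the smallest key if none), starting from node `start`."""
    current = start
    for i in range(LEVELS, -1, -1):
        lst = lists[i][mv[current][:i]]          # the level-i list containing `current`
        pos = lst.index(current)
        while pos + 1 < len(lst) and lst[pos + 1] <= target:
            pos += 1                             # move right while we do not overshoot
        while pos > 0 and lst[pos] > target:
            pos -= 1                             # back up if we started past the target
        current = lst[pos]
    return current

keys = sorted(random.Random(1).sample(range(1000), 40))
mv, lists = build_skip_graph(keys)
print(search(keys[0], 500, mv, lists))           # largest stored key <= 500
```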
Each node in a list has the following fields:
Analysis performed by William Pugh shows that on average, a skip list and by extension a skip graph containsO(lognlog(1/p)){\displaystyle O\left({\frac {\log n}{\log(1/p)}}\right)}levels for a fixed value ofp.[3]Given that at most11−p{\displaystyle {\frac {1}{1-p}}}nodes are searched on average per level, the total expected number of messages sent isO(logn(1−p)log(1/p)){\displaystyle O\left({\frac {\log n}{(1-p)\log(1/p)}}\right)}and the expected time for the search isO(logn(1−p)log(1/p)){\displaystyle O\left({\frac {\log n}{(1-p)\log(1/p)}}\right)}.[2]Therefore, for a fixed value ofp, the search operation is expected to takeO(logn)time usingO(logn)messages.[2]
Insertion is done in two phases and requires that a new node u knows some introducing node v; the introducing node may be any other node currently in the skip graph. In the first phase the new node u uses the introducing node v to search for its own key; this search is expected to fail and return the node s with the largest key smaller than u. In the second phase u inserts itself in each level until it is the only element in a list at the top level.[2] Insertion at each level is performed using standard doubly linked list operations; the left neighbor's next pointer is changed to point to the new node and the right neighbor's previous pointer is changed to point to the new node.
Similar to the search, the insert operation takes expected O(logn) messages and O(logn) time. With a fixed value of p, the search operation in phase 1 is expected to take O(logn) time and messages. In phase 2, at each level L ≥ 0, u communicates with an average of 1/p other nodes to locate s′; this requires O(1/p) time and messages, which for fixed p is O(1) time and messages for each step in phase 2.[2]
Nodes may be deleted in parallel at each level inO(1) time andO(logn) messages.[2]When a node wishes to leave the graph it must send messages to its immediate neighbors to rearrange their next and previous pointers.[2]
The skip graph contains an average ofO(logn) levels; at each levelumust send 2 messages to complete a delete operation on a doubly linked list. As operations on each level may be done in parallel the delete operation may be finished usingO(1) time and expectedO(logn) messages.
In skip graphs, fault tolerance describes the number of nodes which can be disconnected from the skip graph by failures of other nodes.[2] Two failure models have been examined: random failures and adversarial failures. In the random failure model any node may fail independently of any other node with some probability. The adversarial model assumes node failures are planned such that the worst possible failure is achieved at each step; the entire skip graph structure is known and failures are chosen to maximize node disconnection. A drawback of skip graphs is that there is no repair mechanism; currently the only way to repair a damaged skip graph is to build a new skip graph with the surviving nodes.
Skip graphs are highly resistant to random failures. By maintaining information on the state of neighbors and using redundant links to avoid failed neighbors, normal operations can continue even with a large number of node failures. While the number of failed nodes is less thanO(1logn){\displaystyle O\left({\frac {1}{\log n}}\right)}, the skip graph can continue to function normally.[2]Simulations performed by James Aspnes show that a skip graph with 131072 nodes was able to tolerate up to 60% of its nodes failing before surviving nodes were isolated.[2]While other distributed data structures may be able to achieve higher levels of resiliency, they tend to be much more complex.
Adversarial failure is difficult to simulate in a large network as it becomes difficult to find worst-case failure patterns.[2]Theoretical analysis shows that the resilience depends on thevertex expansion ratioof the graph, defined as follows. For a set of nodes A in the graph G, the expansion factor is the number of nodes not in A but adjacent to a node in A, divided by the number of nodes in A. If skip graphs have a sufficiently large expansion ratio ofΩ(1logn){\displaystyle \Omega \left({\frac {1}{\log n}}\right)}, then at mostO(flogn){\displaystyle O(f\log n)}nodes may be separated, even if up to f failures are specifically targeted.[2]
|
https://en.wikipedia.org/wiki/Skip_graph
|
Ineconomics, theLorenz curveis a graphical representation of thedistribution of incomeor ofwealth. It was developed byMax O. Lorenzin 1905 for representinginequalityof thewealth distribution.
The curve is agraphshowing the proportion of overall income or wealth assumed by the bottomx%of the people, although this is not rigorously true for a finite population (see below). It is often used to representincome distribution, where it shows for the bottomx%of households, what percentage(y%)of the total income they have. Thepercentageof households is plotted on thex-axis, the percentage of income on they-axis. It can also be used to show distribution ofassets. In such use, many economists consider it to be a measure ofsocial inequality.
The concept is useful in describing inequality among the size of individuals inecology[1]and in studies ofbiodiversity, where the cumulative proportion of species is plotted against the cumulative proportion of individuals.[2]It is also useful inbusiness modeling: e.g., inconsumer finance, to measure the actual percentagey%ofdelinquenciesattributable to thex%of people with worstrisk scores. Lorenz curves were also applied toepidemiologyandpublic health, e.g., to measure pandemic inequality as the distribution of nationalcumulative incidence(y%) generated by the population residing in areas (x%) ranked with respect to their local epidemicattack rate.[3]
Points on the Lorenz curve represent statements such as, "the bottom 20% of all households have 10% of the total income."
A perfectly equal income distribution would be one in which every person has the same income. In this case, the bottom N% of society would always have N% of the income. This can be depicted by the straight line y = x, called the "line of perfect equality."
By contrast, a perfectly unequal distribution would be one in which one person has all the income and everyone else has none. In that case, the curve would be aty= 0%for allx< 100%, andy= 100%whenx= 100%. This curve is called the "line of perfect inequality."
TheGini coefficientis the ratio of the area between the line of perfect equality and the observed Lorenz curve to the area between the line of perfect equality and the line of perfect inequality. The higher the coefficient, the more unequal the distribution is. In the diagram on the right, this is given by the ratioA/ (A+B), whereAandBare the areas of regions as marked in the diagram.
The Lorenz curve is a probability plot (aP–P plot) comparing the distribution of avariableagainst a hypothetical uniform distribution of that variable. It can usually be represented by a functionL(F), whereF, the cumulative portion of the population, is represented by the horizontal axis, andL, the cumulative portion of the total wealth or income, is represented by the vertical axis.
The curve L need not be a smoothly increasing function of F. For wealth distributions, for instance, there may be oligarchies or people with negative wealth.[4]
For a discrete distribution of Y given by valuesy1, ...,ynin non-decreasing order(yi≤yi+1)and their probabilitiesf(yj):=Pr(Y=yj){\displaystyle f(y_{j}):=\Pr(Y=y_{j})}the Lorenz curve is thecontinuouspiecewise linear functionconnecting the points(Fi,Li),i= 0 ton, whereF0= 0,L0= 0, and fori= 1 ton:Fi:=∑j=1if(yj)Si:=∑j=1if(yj)yjLi:=SiSn{\displaystyle {\begin{aligned}F_{i}&:=\sum _{j=1}^{i}f(y_{j})\\S_{i}&:=\sum _{j=1}^{i}f(y_{j})\,y_{j}\\L_{i}&:={\frac {S_{i}}{S_{n}}}\end{aligned}}}
When allyiare equally probable with probabilities1 /nthis simplifies toFi=inSi=1n∑j=1iyjLi=SiSn{\displaystyle {\begin{aligned}F_{i}&={\frac {i}{n}}\\S_{i}&={\frac {1}{n}}\sum _{j=1}^{i}\;y_{j}\\L_{i}&={\frac {S_{i}}{S_{n}}}\end{aligned}}}
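The discrete, equal-probability formulas above map directly onto a short computation. The following sketch (the sample incomes and function names are invented for illustration) returns the Lorenz points (F_i, L_i) and the Gini coefficient A / (A + B), computed here as 1 − 2B with the trapezoidal rule, which is exact for a piecewise linear curve.

```python
# Sketch: Lorenz curve points (F_i, L_i) and Gini coefficient for equally
# probable observations y_1 <= ... <= y_n, using the formulas above.
# The sample incomes below are illustrative only.

def lorenz_points(values):
    ys = sorted(values)                    # non-decreasing order
    n = len(ys)
    total = sum(ys)
    F, L = [0.0], [0.0]
    running = 0.0
    for i, y in enumerate(ys, start=1):
        running += y
        F.append(i / n)                    # F_i = i / n
        L.append(running / total)          # L_i = S_i / S_n
    return F, L

def gini(values):
    # Gini = A / (A + B) = 1 - 2B, where B is the area under the Lorenz curve;
    # the trapezoidal rule is exact because the curve is piecewise linear.
    F, L = lorenz_points(values)
    area_B = sum((F[i] - F[i - 1]) * (L[i] + L[i - 1]) / 2 for i in range(1, len(F)))
    return 1 - 2 * area_B

if __name__ == "__main__":
    incomes = [10, 20, 30, 40, 100]        # illustrative data
    print(lorenz_points(incomes))
    print(round(gini(incomes), 3))
```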
For acontinuous distributionwith theprobability density functionfand thecumulative distribution functionF, the Lorenz curveLis given by:L(F(x))=∫−∞xtf(t)dt∫−∞∞tf(t)dt=∫−∞xtf(t)dtμ{\displaystyle L(F(x))={\frac {\int _{-\infty }^{x}t\,f(t)\,dt}{\int _{-\infty }^{\infty }t\,f(t)\,dt}}={\frac {\int _{-\infty }^{x}t\,f(t)\,dt}{\mu }}}whereμ{\displaystyle \mu }denotes the average. The Lorenz curveL(F)may then be plotted as a function parametric inx:L(x)vs.F(x). In other contexts, the quantity computed here is known as the length biased (or size biased) distribution; it also has an important role in renewal theory.
Alternatively, for acumulative distribution functionF(x)with inversex(F), the Lorenz curveL(F)is directly given by:L(F)=∫0Fx(F1)dF1∫01x(F1)dF1{\displaystyle L(F)={\frac {\int _{0}^{F}x(F_{1})\,dF_{1}}{\int _{0}^{1}x(F_{1})\,dF_{1}}}}
The inversex(F)may not exist because the cumulative distribution function has intervals of constant values. However, the previous formula can still apply by generalizing the definition ofx(F):x(F1)=inf{y:F(y)≥F1}{\displaystyle x(F_{1})=\inf\{y:F(y)\geq F_{1}\}}whereinfis theinfimum.
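A minimal sketch of the generalized inverse defined above, for a discrete distribution given as value–probability pairs; the example probabilities are an assumption made only for illustration.

```python
# Sketch: generalized inverse x(F1) = inf{y : F(y) >= F1} for a discrete
# distribution given as (value, probability) pairs. Illustrative data only.

def generalized_inverse(points, F1):
    """points: list of (y_j, f(y_j)) with y_j in non-decreasing order."""
    cumulative = 0.0
    for y, p in points:
        cumulative += p
        if cumulative >= F1:        # first y where F(y) >= F1
            return y
    return points[-1][0]

if __name__ == "__main__":
    dist = [(0, 0.5), (1, 0.3), (2, 0.2)]
    print(generalized_inverse(dist, 0.5))   # 0  (F already reaches 0.5 at y = 0)
    print(generalized_inverse(dist, 0.6))   # 1
```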
For an example of a Lorenz curve, seePareto distribution.
A Lorenz curve always starts at (0,0) and ends at (1,1).
The Lorenz curve is not defined if the mean of the probability distribution is zero or infinite.
The Lorenz curve for a probability distribution is acontinuous function. However, Lorenz curves representing discontinuous functions can be constructed as the limit of Lorenz curves of probability distributions, the line of perfect inequality being an example.
The information in a Lorenz curve may be summarized by theGini coefficientand theLorenz asymmetry coefficient.[1]
The Lorenz curve cannot rise above the line of perfect equality.
A Lorenz curve that never falls beneath a second Lorenz curve and at least once runs above it, has Lorenz dominance over the second one.[5]
If the variable being measured cannot take negative values, the Lorenz curve:
Note, however, that a Lorenz curve for net worth would start out by going negative, because some people have a negative net worth due to debt.
The Lorenz curve is invariant under positive scaling. IfXis a random variable, for any positive numbercthe random variablecXhas the same Lorenz curve asX.
The Lorenz curve is flipped twice, once about F = 0.5 and once about L = 0.5, by negation. If X is a random variable with Lorenz curve LX(F), then −X has the Lorenz curve: {\displaystyle L_{-X}(F)=1-L_{X}(1-F)}
The Lorenz curve is changed by translations so that the equality gapF−L(F)changes in proportion to the ratio of the original and translated means. IfXis a random variable with a Lorenz curveLX(F)and meanμX, then for any constantc≠ −μX,X+chas a Lorenz curve defined by:F−LX+c(F)=μXμX+c(F−LX(F)){\displaystyle F-L_{X+c}(F)={\frac {\mu _{X}}{\mu _{X}+c}}(F-L_{X}(F))}
For a cumulative distribution functionF(x)with meanμand (generalized) inversex(F), then for anyFwith 0 <F< 1:
Both {\displaystyle L(F)=F^{P}} and {\displaystyle L(F)=1-(1-F)^{1/P}}, for P ≥ 1, are well-known functional forms for the Lorenz curve.[6]
|
https://en.wikipedia.org/wiki/Lorenz_curve
|
Inspherical trigonometry, thelaw of cosines(also called thecosine rule for sides[1]) is a theorem relating the sides and angles ofspherical triangles, analogous to the ordinarylaw of cosinesfrom planetrigonometry.
Given a unit sphere, a "spherical triangle" on the surface of the sphere is defined by thegreat circlesconnecting three pointsu,v, andwon the sphere (shown at right). If the lengths of these three sides area(fromutov),b(fromutow), andc(fromvtow), and the angle of the corner oppositecisC, then the (first) spherical law of cosines states:[2][1]
cosc=cosacosb+sinasinbcosC{\displaystyle \cos c=\cos a\cos b+\sin a\sin b\cos C\,}
Since this is a unit sphere, the lengthsa,b, andcare simply equal to the angles (inradians) subtended by those sides from the center of the sphere. (For a non-unit sphere, the lengths are the subtended angles times the radius, and the formula still holds ifa,bandcare reinterpreted as the subtended angles). As a special case, forC=π/2, thencosC= 0, and one obtains the spherical analogue of thePythagorean theorem:
cosc=cosacosb{\displaystyle \cos c=\cos a\cos b\,}
If the law of cosines is used to solve forc, the necessity of inverting the cosine magnifiesrounding errorswhencis small. In this case, the alternative formulation of thelaw of haversinesis preferable.[3]
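A brief numerical sketch of the two formulations discussed above, assuming a unit sphere and angles in radians; the sample values are arbitrary. The second function uses the standard law of haversines, which avoids the loss of precision of the inverse cosine when c is small.

```python
# Sketch: computing side c of a spherical triangle from sides a, b and
# included angle C, on a unit sphere (all quantities in radians).
import math

def side_from_cosine_rule(a, b, C):
    # cos c = cos a cos b + sin a sin b cos C
    cos_c = math.cos(a) * math.cos(b) + math.sin(a) * math.sin(b) * math.cos(C)
    return math.acos(max(-1.0, min(1.0, cos_c)))   # clamp against rounding

def side_from_haversines(a, b, C):
    # Law of haversines, numerically better when c is small:
    # hav(c) = hav(a - b) + sin a sin b hav(C), with hav(x) = sin^2(x/2).
    hav = lambda x: math.sin(x / 2) ** 2
    hav_c = hav(a - b) + math.sin(a) * math.sin(b) * hav(C)
    return 2 * math.asin(math.sqrt(hav_c))

if __name__ == "__main__":
    a, b, C = 0.3, 0.4, math.pi / 2          # illustrative values
    print(side_from_cosine_rule(a, b, C))
    print(side_from_haversines(a, b, C))     # the two results agree closely
```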
A variation on the law of cosines, the second spherical law of cosines,[4](also called thecosine rule for angles[1]) states:
cosC=−cosAcosB+sinAsinBcosc{\displaystyle \cos C=-\cos A\cos B+\sin A\sin B\cos c\,}
whereAandBare the angles of the corners opposite to sidesaandb, respectively. It can be obtained from consideration of aspherical triangle dualto the given one.
Letu,v, andwdenote theunit vectorsfrom the center of the sphere to those corners of the triangle. The angles and distances do not change if the coordinate system is rotated, so we can rotate the coordinate system so thatu{\displaystyle \mathbf {u} }is at thenorth poleandv{\displaystyle \mathbf {v} }is somewhere on theprime meridian(longitude of 0). With this rotation, thespherical coordinatesforv{\displaystyle \mathbf {v} }are(r,θ,ϕ)=(1,a,0),{\displaystyle (r,\theta ,\phi )=(1,a,0),}whereθis the angle measured from the north pole not from the equator, and the spherical coordinates forw{\displaystyle \mathbf {w} }are(r,θ,ϕ)=(1,b,C).{\displaystyle (r,\theta ,\phi )=(1,b,C).}The Cartesian coordinates forv{\displaystyle \mathbf {v} }are(x,y,z)=(sina,0,cosa){\displaystyle (x,y,z)=(\sin a,0,\cos a)}and the Cartesian coordinates forw{\displaystyle \mathbf {w} }are(x,y,z)=(sinbcosC,sinbsinC,cosb).{\displaystyle (x,y,z)=(\sin b\cos C,\sin b\sin C,\cos b).}The value ofcosc{\displaystyle \cos c}is the dot product of the two Cartesian vectors, which issinasinbcosC+cosacosb.{\displaystyle \sin a\sin b\cos C+\cos a\cos b.}
Letu,v, andwdenote theunit vectorsfrom the center of the sphere to those corners of the triangle. We haveu·u= 1,v·w= cosc,u·v= cosa, andu·w= cosb. The vectorsu×vandu×whave lengthssinaandsinbrespectively and the angle between them isC, sosinasinbcosC=(u×v)⋅(u×w)=(u⋅u)(v⋅w)−(u⋅w)(v⋅u)=cosc−cosacosb{\displaystyle {\begin{aligned}\sin a\sin b\cos C&=({\mathbf {u}}\times {\mathbf {v}})\cdot ({\mathbf {u}}\times {\mathbf {w}})\\&=({\mathbf {u}}\cdot {\mathbf {u}})({\mathbf {v}}\cdot {\mathbf {w}})-({\mathbf {u}}\cdot {\mathbf {w}})({\mathbf {v}}\cdot {\mathbf {u}})\\&=\cos c-\cos a\cos b\end{aligned}}}
usingcross products,dot products, and theBinet–Cauchy identity(p×q)⋅(r×s)=(p⋅r)(q⋅s)−(p⋅s)(q⋅r).{\displaystyle ({\mathbf {p}}\times {\mathbf {q}})\cdot ({\mathbf {r}}\times {\mathbf {s}})=({\mathbf {p}}\cdot {\mathbf {r}})({\mathbf {q}}\cdot {\mathbf {s}})-({\mathbf {p}}\cdot {\mathbf {s}})({\mathbf {q}}\cdot {\mathbf {r}}).}
The following proof relies on the concept ofquaternionsand is based on a proof given in Brand:[5]Letu,v, andwdenote theunit vectorsfrom the center of the unit sphere to those corners of the triangle. We define the quaternionu= (0,u) = 0 +uxi+uyj+uzk. The quaternionuis used to represent a rotation by 180° around the axis indicated by the vectoru. We note that using−uas the axis of rotation gives the same result, and that the rotation is its own inverse. We also definev= (0,v)andw= (0,w).
We compute theproductof quaternions, which also gives the composition of the corresponding rotations:
where(f,g)represents the real (scalar) and imaginary (vector) parts of a quaternion,ais the angle betweenuandv, andw′ = (u×v) / |u×v|is the axis of the rotation that movesutovalong a great circle. Similarly we define:
The quaternionsq,r, andsare used to represent rotations with axes of rotationw′,u′, andv′, respectively, and angles of rotation2a,2b, and2c, respectively. (Because these are double angles, each ofq,r, andsrepresents two applications of the rotation implied by an edge of the spherical triangle.)
From the definitions, it follows that
which tells us that the composition of these rotations is the identity transformation. In particular,rq=s−1gives us
Expanding the left-hand side, we obtain
Equating the real parts on both sides of the identity, we obtain
Becauseu′is parallel tov×w,w′is parallel tou×v= −v×u, andCis the angle betweenv×wandv×u, it follows thatu′⋅w′=−cosC{\displaystyle {\mathbf {u}}'\cdot {\mathbf {w}}'=-\cos C}. Thus,
The first and second spherical laws of cosines can be rearranged to put the sides (a,b,c) and angles (A,B,C) on opposite sides of the equations:cosC=cosc−cosacosbsinasinbcosc=cosC+cosAcosBsinAsinB{\displaystyle {\begin{aligned}\cos C&={\frac {\cos c-\cos a\cos b}{\sin a\sin b}}\\\cos c&={\frac {\cos C+\cos A\cos B}{\sin A\sin B}}\\\end{aligned}}}
Forsmallspherical triangles, i.e. for smalla,b, andc, the spherical law of cosines is approximately the same as the ordinary planar law of cosines,c2≈a2+b2−2abcosC.{\displaystyle c^{2}\approx a^{2}+b^{2}-2ab\cos C\,.}
To prove this, we will use thesmall-angle approximationobtained from theMaclaurin seriesfor the cosine and sine functions:cosa=1−a22+O(a4)sina=a+O(a3){\displaystyle {\begin{aligned}\cos a&=1-{\frac {a^{2}}{2}}+O\left(a^{4}\right)\\\sin a&=a+O\left(a^{3}\right)\end{aligned}}}
Substituting these expressions into the spherical law of cosines yields:
1−c22+O(c4)=1−a22−b22+a2b24+O(a4)+O(b4)+cos(C)(ab+O(a3b)+O(ab3)+O(a3b3)){\displaystyle 1-{\frac {c^{2}}{2}}+O\left(c^{4}\right)=1-{\frac {a^{2}}{2}}-{\frac {b^{2}}{2}}+{\frac {a^{2}b^{2}}{4}}+O\left(a^{4}\right)+O\left(b^{4}\right)+\cos(C)\left(ab+O\left(a^{3}b\right)+O\left(ab^{3}\right)+O\left(a^{3}b^{3}\right)\right)}
or after simplifying:
c2=a2+b2−2abcosC+O(c4)+O(a4)+O(b4)+O(a2b2)+O(a3b)+O(ab3)+O(a3b3).{\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C+O\left(c^{4}\right)+O\left(a^{4}\right)+O\left(b^{4}\right)+O\left(a^{2}b^{2}\right)+O\left(a^{3}b\right)+O\left(ab^{3}\right)+O\left(a^{3}b^{3}\right).}
Thebig Oterms foraandbare dominated byO(a4) +O(b4)asaandbget small, so we can write this last expression as:
c2=a2+b2−2abcosC+O(a4)+O(b4)+O(c4).{\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C+O\left(a^{4}\right)+O\left(b^{4}\right)+O\left(c^{4}\right).}
Various trigonometric equations equivalent to the spherical law of cosines were used in the course of solving astronomical problems by medieval Islamic astronomersal-Khwārizmī(9th century) andal-Battānī(c. 900), Indian astronomerNīlakaṇṭha(15th century), and Austrian astronomerGeorg von Peuerbach(15th century) but none of them treated it as a general method for solving spherical triangles.[6]For example, al-Khwārizmī calculated the azimuthα{\displaystyle \alpha }of the Sun in terms of its altitudeh{\displaystyle h}, terrestrial latitudeϕ{\displaystyle \phi }, and ortive amplitudeψ{\displaystyle \psi }(angular distance between due East and the Sun's rising place on the horizon) ascosα=(sinψ−tanϕsinh)/cosh{\displaystyle \cos \alpha =(\sin \psi -\tan \phi \sin h)/\cos h}.[7](SeeHorizontal coordinate system.)
The spherical law of cosines appeared as an independent trigonometrical identity for solving spherical triangles in Peuerbach's student Regiomontanus's De triangulis omnimodis (unfinished at Regiomontanus's death in 1476, published posthumously in 1533), a foundational work for European trigonometry and astronomy which comprehensively described how to solve plane and spherical triangles. Regiomontanus used nearly the modern form, but written in terms of the versine, {\displaystyle \operatorname {vers} x=1-\cos x}, rather than the cosine.[8]
Mathematical historians have speculated that Regiomontanus may have adapted the result from specific astronomical examples in al-Battānī'sKitāb az-Zīj aṣ-Ṣābi’, which was published in Latin translation annotated by Regiomontanus in 1537.
|
https://en.wikipedia.org/wiki/Spherical_law_of_cosines
|
This article listsmathematical identities, that is,identically true relationsholding inmathematics.
|
https://en.wikipedia.org/wiki/List_of_mathematical_identities
|
Acombination lockis a type oflocking devicein which asequenceof symbols, usually numbers, is used to open the lock. The sequence may be entered using a single rotating dial which interacts with several discs orcams, by using a set of several rotating discs with inscribed symbols which directly interact with the locking mechanism, or through an electronic or mechanical keypad. Types range from inexpensive three-digitluggage locksto high-securitysafes. Unlike ordinarypadlocks, combination locks do not use keys.
The earliest known combination lock was excavated in aRomanperiod tomb on theKerameikos,Athens. Attached to a small box, it featured several dials instead of keyholes.[1]In 1206, theMuslimengineerIsmail al-Jazaridocumented a combination lock in his bookal-Ilm Wal-Amal al-Nafi Fi Sina'at al-Hiyal(The Book of Knowledge of Ingenious Mechanical Devices).[2]Muhammad al-Asturlabi (ca. 1200) also made combination locks.[3]
Gerolamo Cardanolater described a combination lock in the 16th century.
U.S. Patents regarding combination padlocks by J.B. Gray in 1841[4]and by J.E. Treat in 1869[5]describe themselves as improvements, suggesting that such mechanisms were already in use.
Joseph Loch was said to have invented the modern combination lock forTiffany's Jewelersin New York City, and from the 1870s to the early 1900s, made many more improvements in the designs and functions of such locks.[6]However, his patent claim states: "I do not claim as my invention a tumbler composed of two disks, one working within the other, such not being my invention.", but there is no reference to prior art of this type of lock.
The first commercially viable single-dial combination lock was patented on 1 February 1910 by John Junkunc, owner of American Lock Company.[7]
One of the simplest types of combination lock, often seen in low-securitybicyclelocks,briefcases, andsuitcases, uses several rotating discs with notches cut into them. The lock is secured by a pin with several teeth on it which hook into the rotating discs. When the notches in the discs align with the teeth on the pin, the lock can be opened.
Therotary combination locksfound onpadlocks, lockers, orsafesmay use a single dial which interacts with several parallel discs orcams. Customarily, a lock of this type is opened by rotating the dial clockwise to the first numeral, counterclockwise to the second, and so on in an alternating fashion until the last numeral is reached. The cams typically have an indentation or notch, and when the correctpermutationis entered, the notches align, allowing the latch to fit into them and open the lock.
The C. L. Gougler Keyless Locks Company manufactured locks for which the combination was a set number of audible clicks to the left and right, allowing them to be unlocked in darkness or by the vision-impaired.[citation needed]
In 1978 a combination lock that could be set by the user to a sequence of their own choosing was invented by Andrew Elliot Rae.[8] The electronic keypad was invented around the same time, and he was unable to find manufacturers to back his mechanical lock for lockers, luggage, or briefcases. The silicon-chip locks never became popular because they required battery power to maintain their integrity. After the patent expired, the original mechanical invention was quickly manufactured and sold worldwide, mainly for luggage, lockers, and hotel safes. It is now a standard part of the luggage used by travellers.
Many doors use combination locks which require the user to enter a numeric sequence on akeypadto gain entry. These special locks usually require the additional use of electronic circuitry, although purely mechanical keypad locks have been available since 1936.[9]The chief advantage of this system is that multiple persons can be granted access without having to supply an expensive physical key to each person. Also, in case the key is compromised, "changing" the lock requires only configuring a new key code and informing the users, which will generally be cheaper and quicker than the same process for traditional key locks.
Electronic combination locks, while generally safe from the attacks on their mechanical counterparts, suffer from their own set of flaws. If the arrangement of numbers is fixed, it is easy to determine the lock sequence by viewing several successful accesses. Similarly, the numbers in the combination (but not the actual sequence) may be determined by which keys show signs of recent use. More advanced electronic locks may scramble the numbers' locations randomly to prevent these attacks.
There is a variation of the traditional dial based combination lock wherein the "secret" is encoded in an electronic microcontroller. These are popular for safe andbank vaultdoors where tradition tends towards dial locks rather than keys. They allow many valid combinations, one per authorized user, so changing one person's access has no effect on other users. These locks often have auditing features, recording which combination is used at what time for every opening. Power for the lock may be provided by a battery or by a tiny generator set in operation by spinning the dial.[10][11]
|
https://en.wikipedia.org/wiki/Combination_lock
|
Incomputer programming,feature-oriented programming(FOP) orfeature-oriented software development(FOSD) is aprogramming paradigmfor program generation insoftware product lines(SPLs) and for incremental development of programs.
FOSD arose out of layer-based designs and levels of abstraction in network protocols and extensible database systems in the late-1980s.[1]A program was a stack of layers. Each layer added functionality to previously composed layers and different compositions of layers produced different programs. Not surprisingly, there was a need for a compact language to express such designs. Elementary algebra fit the bill: each layer was a function (aprogram transformation) that added new code to an existing program to produce a new program, and a program's design was modeled by an expression, i.e., a composition of transformations (layers). The figure to the left illustrates the stacking of layers i, j, and h (where h is on the bottom and i is on the top). The algebraic notations i(j(h)), i•j•h, and i+j+h have been used to express these designs.
Over time, layers were equated to features, where afeatureis an increment in program functionality. The paradigm for program design and generation was recognized to be an outgrowth of relational query optimization, where query evaluation programs were defined as relational algebra expressions, andquery optimizationwas expression optimization.[2]A software product line is a family of programs where each program is defined by a unique composition of features. FOSD has since evolved into the study of feature modularity, tools, analyses, and design techniques to support feature-based program generation.
The second generation of FOSD research was on feature interactions, which originated in telecommunications. Later, the termfeature-oriented programmingwas coined;[3]this work exposed interactions between layers. Interactions require features to be adapted when composed with other features.
A third generation of research focussed on the fact that every program has multiple representations (e.g., source, makefiles, documentation, etc.), and adding a feature to a program should elaborate each of its representations so that all remain consistent. Additionally, some of these representations could be generated (or derived) from others. In the sections below, the mathematics of the three most recent generations of FOSD, namely GenVoca,[1] AHEAD,[4] and FOMDD,[5][6] are described, and links to product lines that have been developed using FOSD tools are provided. Additional results that apply to all generations of FOSD are FOSD metamodels, FOSD program cubes, and FOSD feature interactions.
GenVoca(aportmanteauof the names Genesis and Avoca)[1]is a compositional paradigm for defining programs of product lines. Base programs are 0-ary functions or transformations calledvalues:
and features are unary functions/transformations that elaborate (modify, extend, refine) a program:
where + denotes function composition. Thedesignof a program is a named expression, e.g.:
AGenVoca modelof a domain or software product line is a collection of base programs and features (seeMetaModelsandProgram Cubes).
The set of programs (expressions) that can be created defines a product line. Expression optimization is program design optimization, and expression evaluation is program generation.
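As a rough illustration of the algebra just described, and not of GenVoca's actual notation or tooling, base programs can be modelled as values and features as unary functions composed on top of them; every name below is invented for the example.

```python
# Sketch of the GenVoca idea: base programs are values, features are unary
# functions that elaborate a program, and a design is a composition of
# features applied to a base. All names here are illustrative.

def base_h():
    """A base program (0-ary value), modelled here as a set of operations."""
    return {"store", "retrieve"}

def feature_j(program):
    """Feature j: adds logging to an existing program."""
    return program | {"log_access"}

def feature_i(program):
    """Feature i: adds encryption to an existing program."""
    return program | {"encrypt", "decrypt"}

if __name__ == "__main__":
    # The design i + j + h is the expression i(j(h)).
    p = feature_i(feature_j(base_h()))
    print(sorted(p))   # the product with features h, j and i composed
```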
GenVoca features were originally implemented using C preprocessor (#ifdef feature ... #endif) techniques. A more advanced technique, calledmixin layers, showed the connection of features to object-oriented collaboration-based designs.
Algebraic Hierarchical Equations for Application Design(AHEAD)[4]generalized GenVoca in two ways. First, it revealed the internal structure of GenVoca values as tuples. Every program has multiple representations, such as source, documentation, bytecode, and makefiles. A GenVoca value is a tuple of program representations. In a product line of parsers, for example, a base parser f is defined by its grammar gf, Java source sf, and documentation df. Parser f is modeled by the tuple f=[gf, sf, df]. Each program representation may have subrepresentations, and they too may have subrepresentations, recursively. In general, a GenVoca value is a tuple of nested tuples that define a hierarchy of representations for a particular program.
Example. Suppose terminal representations are files. In AHEAD, grammar gfcorresponds to a single BNF file, source sfcorresponds to a tuple of Java files [c1…cn], and documentation dfis a tuple of HTML files [h1…hk]. A GenVoca value (nested tuples) can be depicted as a directed graph: the graph for parser f is shown in the figure to the right. Arrows denote projections, i.e., mappings from a tuple to one of its components. AHEAD implements tuples as file directories, so f is a directory containing file gfand subdirectories sfand df. Similarly, directory sfcontains files c1…cn, and directory df contains files h1…hk.
Second, AHEAD expresses features as nested tuples of unary functions calleddeltas. Deltas can beprogram refinements(semantics-preserving transformations),extensions(semantics-extending transformations),
orinteractions(semantics-altering transformations). We use the neutral term “delta” to represent all of these possibilities, as each occurs in FOSD.
To illustrate, suppose feature j extends a grammar by Δgj(new rules and tokens are added), extends source code by Δsj(new classes and members are added and existing methods are modified), and extends documentation by Δdj. The tuple of deltas for feature j is modeled by j=[Δgj,Δsj,Δdj], which we call adelta tuple. Elements of delta tuples can themselves be delta tuples. Example: Δsjrepresents the changes that are made to each class in sfby feature j, i.e., Δsj=[Δc1…Δcn].
The representations of a program are computed recursively by nested vector addition. The representations for parser p2 (whose GenVoca expression is j+f) are: p2 = j + f = [Δgj, Δsj, Δdj] + [gf, sf, df] = [Δgj+gf, Δsj+sf, Δdj+df]
That is, the grammar of p2 is the base grammar composed with its extension (Δgj+gf), the source of p2 is the base source composed with its extension (Δsj+sf), and so on. As elements of delta tuples can themselves be delta tuples, composition recurses, e.g., Δsj+sf = [Δc1…Δcn]+[c1…cn] = [Δc1+c1…Δcn+cn].
Summarizing, GenVoca values are nested tuples of program artifacts, and features are nested delta tuples, where + recursively composes them by vector addition. This is the essence of AHEAD.
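The recursive "vector addition" of nested tuples can be sketched as follows; representing artifacts as strings, tuples as dicts, and leaf composition as concatenation are assumptions made purely to illustrate the recursion, not AHEAD's actual implementation.

```python
# Sketch of AHEAD-style composition: values and deltas are nested dicts
# (tuples of representations), and '+' composes them recursively.
# Leaf composition here is string concatenation, an illustrative stand-in
# for applying a delta to an artifact.

def compose(delta, base):
    """Recursively compose a nested delta tuple with a nested value tuple."""
    if isinstance(base, dict):
        out = dict(base)                       # keep artifacts the delta does not touch
        for key, d in delta.items():
            out[key] = compose(d, base[key]) if key in base else d
        return out
    return f"{base} + {delta}"                 # leaf: apply the delta to the artifact

if __name__ == "__main__":
    f = {"grammar": "g_f", "source": {"c1": "c1_f"}, "docs": "d_f"}       # value
    j = {"grammar": "dg_j", "source": {"c1": "dc1_j", "c2": "c2_j"}}      # delta tuple
    print(compose(j, f))
```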
The ideas presented above concretely expose two FOSD principles. The Principle of Uniformity states that all program artifacts are treated and modified in the same way. (This is evidenced by the deltas for different artifact types above.) The Principle of Scalability states that all levels of abstraction are treated uniformly. (This gives rise to the hierarchical nesting of tuples above.)
The original implementation of AHEAD is the AHEAD Tool Suite and Jak language, which exhibits both the Principles of Uniformity and Scalability. Next-generation tools include CIDE[10]and FeatureHouse.[11]
Feature-Oriented Model-Driven Design(FOMDD)[5][6]combines the ideas of AHEAD withModel-Driven Design(MDD) (a.k.a.Model-Driven Architecture(MDA)). AHEAD functions capture the lockstep update of program artifacts when a feature is added to a program. But there are other functional relationships among program artifacts that express derivations. For example, the relationship between a grammar gfand its parser source sfis defined by a compiler-compiler tool, e.g., javacc. Similarly, the relationship between Java source sfand its bytecode bfis defined by the javac compiler. Acommuting diagramexpresses these relationships. Objects are program representations, downward arrows are derivations, and horizontal arrows are deltas. The figure to the right shows the commuting diagram for program p3= i+j+h = [g3,s3,b3].
A fundamental property of acommuting diagramis that all paths between two objects are equivalent. For example, one way to derive the bytecode b3of parser p3(lower right object in the figure to the right) from grammar ghof parser h (upper left object) is to derive the bytecode bhand refine to b3, while another way refines ghto g3, and then derive b3, where + represents delta composition and () is function or tool application:
There are {\displaystyle {\tbinom {4}{2}}} = 6 possible paths to derive the bytecode b3 of parser p3 from the grammar gh of parser h. Each path represents a metaprogram whose execution generates the target object (b3) from the starting object (gh).
There is a potential optimization: traversing each arrow of acommuting diagramhas a cost. The cheapest (i.e., shortest) path between two objects in acommuting diagramis ageodesic, which represents the most efficient metaprogram that produces the target object from a given object.
Commuting diagramsare important for at least two reasons: (1) there is the possibility of optimizing the generation of artifacts (e.g., geodesics) and (2) they specify different ways of constructing a target object from a starting object.[5][12]A path through a diagram corresponds to a tool chain: for an FOMDD model to be consistent, it should be proven (or demonstrated through testing) that all tool chains that map one object to another in fact yield equivalent results. If this is not the case, then either there is a bug in one or more of the tools or the FOMDD model is wrong.
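A toy sketch of finding a geodesic, i.e. the cheapest tool chain, in a commuting diagram modelled as a weighted directed graph; the object names and edge costs below are invented for the example.

```python
# Sketch: the cheapest path (geodesic) between two objects of a commuting
# diagram, modelled as a weighted directed graph. Node names and edge costs
# are illustrative assumptions only.
import heapq

def geodesic(edges, start, goal):
    """Dijkstra over a dict {node: [(neighbour, cost), ...]}."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in edges.get(node, []):
            heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return None

if __name__ == "__main__":
    # Objects g_h, g_3, s_h, s_3, b_h, b_3; horizontal arrows are deltas,
    # vertical arrows are derivations (e.g. parser generator, compiler),
    # each with a toy cost.
    diagram = {
        "g_h": [("g_3", 2), ("s_h", 5)],
        "g_3": [("s_3", 5)],
        "s_h": [("s_3", 3), ("b_h", 4)],
        "s_3": [("b_3", 4)],
        "b_h": [("b_3", 6)],
    }
    print(geodesic(diagram, "g_h", "b_3"))     # cheapest tool chain and its cost
```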
|
https://en.wikipedia.org/wiki/Feature-oriented_programming
|
Semi-structured data[1]is a form ofstructured datathat does not obey the tabular structure of data models associated withrelational databasesor other forms ofdata tables, but nonetheless containstagsor other markers to separate semantic elements and enforce hierarchies of records and fields within the data. Therefore, it is also known asself-describingstructure.
In semi-structured data, the entities belonging to the same class may have differentattributeseven though they are grouped together, and the attributes' order is not important.
Semi-structured data have become increasingly common since the advent of the Internet, where full-text documents and databases are no longer the only forms of data and different applications need a medium for exchanging information. In object-oriented databases, one often finds semi-structured data.
XML,[2]other markup languages,email, andEDIare all forms of semi-structured data.OEM(Object Exchange Model)[3]was created prior to XML as a means of self-describing a data structure. XML has been popularized by web services that are developed utilizingSOAPprinciples.
Some types of data described here as "semi-structured", especially XML, suffer from the impression that they are incapable of structural rigor at the same functional level as Relational Tables and Rows. Indeed, the view of XML as inherently semi-structured (previously, it was referred to as "unstructured") has handicapped its use for a widening range of data-centric applications. Even documents, normally thought of as the epitome of semi-structure, can be designed with virtually the same rigor asdatabase schema, enforced by theXML schemaand processed by both commercial and custom software programs without reducing their usability by human readers.
In view of this fact, XML might be referred to as having "flexible structure" capable of human-centric flow and hierarchy as well as highly rigorous element structure and data typing.
The concept of XML as "human-readable", however, can only be taken so far. Some implementations/dialects of XML, such as the XML representation of the contents of a Microsoft Word document, as implemented in Office 2007 and later versions, utilize dozens or even hundreds of different kinds of tags that reflect a particular problem domain - in Word's case, formatting at the character and paragraph and document level, definitions of styles, inclusion of citations, etc. - which are nested within each other in complex ways. Understanding even a portion of such an XML document by reading it, let alone catching errors in its structure, is impossible without a very deep prior understanding of the specific XML implementation, along with assistance by software that understands the XML schema that has been employed. Such text is not "human-understandable" any more than a book written in Swahili (which uses the Latin alphabet) would be to an American or Western European who does not know a word of that language: the tags are symbols that are meaningless to a person unfamiliar with the domain.
JSON, or JavaScript Object Notation, is an open standard format that uses human-readable text to transmit data objects. JSON has been popularized by web services developed utilizing REST principles.
Databases such asMongoDBandCouchbasestore data natively in JSON format, leveraging the pros of semi-structured data architecture.
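A small illustration of the point above: records of the same class may carry different attributes, and the structure travels with the data. The field names and values below are invented for the example.

```python
# Sketch: semi-structured records of the same class ("person") with
# different attributes, exchanged as self-describing JSON. Illustrative only.
import json

people = [
    {"name": "Ada",   "email": "ada@example.org", "languages": ["en", "fr"]},
    {"name": "Grace", "phone": "+1-555-0100"},            # no email, extra field
    {"name": "Alan",  "email": "alan@example.org", "office": {"building": "B", "room": 12}},
]

# Serialise / deserialise: tags (keys) accompany the data, so no external schema
# is needed, but access must tolerate missing fields.
payload = json.dumps(people)
for person in json.loads(payload):
    print(person["name"], "->", person.get("email", "no email on record"))
```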
Thesemi-structured modelis adatabase modelwhere there is no separation between thedataand theschema, and the amount of structure used depends on the purpose.
The advantages of this model are the following:
The primary trade-off being made in using a semi-structureddatabase modelis that queries cannot be made as efficiently as in a more constrained structure, such as in therelational model. Typically the records in a semi-structured database are stored with unique IDs that are referenced with pointers to their location on disk. This makes navigational or path-based queries quite efficient, but for doing searches over many records (as is typical inSQL), it is not as efficient because it has to seek around the disk following pointers.
TheObject Exchange Model(OEM) is one standard to express semi-structured data, another way isXML.
|
https://en.wikipedia.org/wiki/Semi-structured_data
|
Data-driven control systemsare a broad family ofcontrol systems, in which theidentificationof the process model and/or the design of the controller are based entirely onexperimental datacollected from the plant.[1]
In many control applications, deriving a mathematical model of the plant is considered a hard task, requiring effort and time from the process and control engineers. This problem is overcome by data-driven methods, which fit a system model, chosen from a specific model class, to the experimental data collected. The control engineer can then exploit this model to design a proper controller for the system. However, it is still difficult to find a simple yet reliable model of a physical system that includes only those dynamics of the system that are of interest for the control specifications. Direct data-driven methods instead allow one to tune a controller, belonging to a given class, without the need for an identified model of the system. In this way, one can also simply weight the process dynamics of interest inside the control cost function, and exclude the dynamics that are out of interest.
The standard approach to control system design is organized in two steps: first, a model {\displaystyle {\widehat {G}}} of the plant {\displaystyle G_{0}} is identified from experimental data, together with an uncertainty set {\displaystyle \Gamma } containing the true plant; second, a controller is designed on the basis of the identified model.
Typical objectives ofsystem identificationare to haveG^{\displaystyle {\widehat {G}}}as close as possible toG0{\displaystyle G_{0}}, and to haveΓ{\displaystyle \Gamma }as small as possible. However, from anidentification for controlperspective, what really matters is the performance achieved by the controller, not the intrinsic quality of the model.
One way to deal with uncertainty is to design a controller that has an acceptable performance with all models in {\displaystyle \Gamma }, including {\displaystyle G_{0}}. This is the main idea behind the robust control design procedure, which aims at building frequency-domain uncertainty descriptions of the process. However, being based on worst-case assumptions rather than on the idea of averaging out the noise, this approach typically leads to conservative uncertainty sets. Data-driven techniques, by contrast, deal with uncertainty by working on the experimental data themselves, avoiding excessive conservatism.
In the following, the main classifications of data-driven control systems are presented.
There are many methods available for controlling such systems.
The fundamental distinction is between indirect and direct controller design methods. The former group of techniques retains the standard two-step approach, i.e., first a model is identified, then a controller is tuned based on that model. The main issue in doing so is that the controller is computed from the estimated model {\displaystyle {\widehat {G}}} (according to the certainty equivalence principle), but in practice {\displaystyle {\widehat {G}}\neq G_{0}}. To overcome this problem, the idea behind the latter group of techniques is to map the experimental data directly onto the controller, without any model being identified in between.
Another important distinction is between iterative and noniterative (or one-shot) methods. In the former group, repeated iterations are needed to estimate the controller parameters: the optimization problem is solved based on the results of the previous iteration, and the estimate is expected to become more and more accurate at each iteration. This approach is also well suited to on-line implementations (see below). In the latter group, the (optimal) controller parametrization is provided by a single optimization problem. This is particularly important for those systems in which iterations or repetitions of the data-collection experiment are limited or even not allowed (for example, due to economic constraints). In such cases, one should select a design technique capable of delivering a controller from a single data set. This approach is often implemented off-line (see below).
Since, in practical industrial applications, open-loop or closed-loop data are often available continuously, on-line data-driven techniques use those data to improve the quality of the identified model and/or the performance of the controller each time new information is collected from the plant. Off-line approaches, instead, work on batches of data, which may be collected only once, or multiple times at regular (but rather long) intervals of time.
The iterative feedback tuning (IFT) method was introduced in 1994,[2]starting from the observation that, in identification for control, each iteration is based on the (wrong) certainty equivalence principle.
IFT is a model-free technique for the direct iterative optimization of the parameters of a fixed-order controller; such parameters can be successively updated using information coming from standard (closed-loop) system operation.
Letyd{\displaystyle y^{d}}be a desired output to the reference signalr{\displaystyle r}; the error between the achieved and desired response isy~(ρ)=y(ρ)−yd{\displaystyle {\tilde {y}}(\rho )=y(\rho )-y^{d}}. The control design objective can be formulated as the minimization of the objective function:
Given the objective function to minimize, the quasi-Newton method can be applied, i.e., a gradient-based minimization using a gradient search of the type: {\displaystyle \rho _{i+1}=\rho _{i}-\gamma _{i}R_{i}^{-1}{\frac {d{\widehat {J}}}{d\rho }}(\rho _{i})}
The valueγi{\displaystyle \gamma _{i}}is the step size,Ri{\displaystyle R_{i}}is an appropriate positive definite matrix anddJ^dρ{\displaystyle {\frac {d{\widehat {J}}}{d\rho }}}is an approximation of the gradient; the true value of the gradient is given by the following:
The value ofδyδρ(t,ρ){\displaystyle {\frac {\delta y}{\delta \rho }}(t,\rho )}is obtained through the following three-step methodology:
A crucial factor for the convergence speed of the algorithm is the choice ofRi{\displaystyle R_{i}}; wheny~{\displaystyle {\tilde {y}}}is small, a good choice is the approximation given by the Gauss–Newton direction:
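As a rough illustration of the gradient search described above, and not of IFT's actual closed-loop experiment protocol, the sketch below iterates the update rho_{i+1} = rho_i - gamma_i * R_i^{-1} * dJ/drho(rho_i) on a toy cost; the cost function and the finite-difference gradient estimate are stand-ins for the quantities that IFT would obtain from dedicated experiments on the plant.

```python
# Sketch of the gradient-search iteration described above:
#   rho_{i+1} = rho_i - gamma_i * R_i^{-1} * dJ/drho(rho_i).
# The toy cost and the finite-difference gradient below are illustrative
# stand-ins; in IFT the gradient estimate comes from closed-loop experiments.
import numpy as np

def cost(rho):
    # Toy quadratic cost standing in for J(rho); illustrative only.
    target = np.array([1.0, -0.5])
    return float(np.sum((rho - target) ** 2))

def grad_estimate(rho, eps=1e-5):
    # Finite-difference stand-in for the gradient estimate dJ^/drho.
    g = np.zeros_like(rho)
    for k in range(len(rho)):
        step = np.zeros_like(rho)
        step[k] = eps
        g[k] = (cost(rho + step) - cost(rho - step)) / (2 * eps)
    return g

rho = np.array([0.0, 0.0])
R = np.eye(2)                        # positive definite scaling matrix R_i
gamma = 0.2                          # step size gamma_i
for i in range(50):
    rho = rho - gamma * np.linalg.solve(R, grad_estimate(rho))
print(rho)                           # converges towards the minimiser of the toy cost
```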
Noniterative correlation-based tuning (nCbT) is a noniterative method for data-driven tuning of a fixed-structure controller.[3]It provides a one-shot method to directly synthesize a controller based on a single dataset.
Suppose thatG{\displaystyle G}denotes an unknown LTI stable SISO plant,M{\displaystyle M}a user-defined reference model andF{\displaystyle F}a user-defined weighting function. An LTI fixed-order controller is indicated asK(ρ)=βTρ{\displaystyle K(\rho )=\beta ^{T}\rho }, whereρ∈Rn{\displaystyle \rho \in \mathbb {R} ^{n}}, andβ{\displaystyle \beta }is a vector of LTI basis functions. Finally,K∗{\displaystyle K^{*}}is an ideal LTI controller of any structure, guaranteeing a closed-loop functionM{\displaystyle M}when applied toG{\displaystyle G}.
The goal is to minimize the following objective function:
J(ρ){\displaystyle J(\rho )}is a convex approximation of the objective function obtained from a model reference problem, supposing that1(1+K(ρ)G)≈1(1+K∗G){\displaystyle {\frac {1}{(1+K(\rho )G)}}\approx {\frac {1}{(1+K^{*}G)}}}.
WhenG{\displaystyle G}is stable and minimum-phase, the approximated model reference problem is equivalent to the minimization of the norm ofε(t){\displaystyle \varepsilon (t)}in the scheme in figure.
The input signalr(t){\displaystyle r(t)}is supposed to be a persistently exciting input signal andv(t){\displaystyle v(t)}to be generated by a stable data-generation mechanism. The two signals are thus uncorrelated in an open-loop experiment; hence, the ideal errorε(t,ρ∗){\displaystyle \varepsilon (t,\rho ^{*})}is uncorrelated withr(t){\displaystyle r(t)}. The control objective thus consists in findingρ{\displaystyle \rho }such thatr(t){\displaystyle r(t)}andε(t,ρ∗){\displaystyle \varepsilon (t,\rho ^{*})}are uncorrelated.
The vector ofinstrumental variablesζ(t){\displaystyle \zeta (t)}is defined as:
whereℓ1{\displaystyle \ell _{1}}is large enough andrW(t)=Wr(t){\displaystyle r_{W}(t)=Wr(t)}, whereW{\displaystyle W}is an appropriate filter.
The correlation function is:
and the optimization problem becomes:
Denoting withϕr(ω){\displaystyle \phi _{r}(\omega )}the spectrum ofr(t){\displaystyle r(t)}, it can be demonstrated that, under some assumptions, ifW{\displaystyle W}is selected as:
then, the following holds:
There is no guarantee that the controllerK{\displaystyle K}that minimizesJN,ℓ1{\displaystyle J_{N,\ell _{1}}}is stable. Instability may occur in the following cases:
Consider a stabilizing controllerKs{\displaystyle K_{s}}and the closed loop transfer functionMs=KsG1+KsG{\displaystyle M_{s}={\frac {K_{s}G}{1+K_{s}G}}}.
Define:
Condition 1. is enforced when:
The model reference design with stability constraint becomes:
Aconvex data-driven estimationofδ(ρ){\displaystyle \delta (\rho )}can be obtained through thediscrete Fourier transform.
Define the following:
Forstable minimum phase plants, the followingconvex data-driven optimization problemis given:
Virtual Reference Feedback Tuning (VRFT) is a noniterative method for data-driven tuning of a fixed-structure controller. It provides a one-shot method to directly synthesize a controller based on a single dataset.
VRFT was first proposed in[4]and then extended to LPV systems.[5]VRFT also builds on ideas given in[6]asVRD2{\displaystyle VRD^{2}}.
The main idea is to define a desired closed loop modelM{\displaystyle M}and to use its inverse dynamics to obtain a virtual referencerv(t){\displaystyle r_{v}(t)}from the measured output signaly(t){\displaystyle y(t)}.
The virtual signals arerv(t)=M−1y(t){\displaystyle r_{v}(t)=M^{-1}y(t)}andev(t)=rv(t)−y(t).{\displaystyle e_{v}(t)=r_{v}(t)-y(t).}
The optimal controller is obtained from noiseless data by solving the following optimization problem:
where the optimization function is given as follows:
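A minimal sketch of the virtual-reference idea on a toy first-order plant, assuming noiseless data and a controller class that contains the ideal controller; the plant, the reference model, and the controller structure are all illustrative assumptions, not part of the original VRFT formulation.

```python
# Sketch of the VRFT idea on a toy example (illustrative assumptions only).
# Plant (unknown to the designer):  y(t+1) = a*y(t) + b*u(t)
# Reference model M:                y(t+1) = (1-m)*y(t) + m*r(t)
# Controller class (PI-like):       u(t) = u(t-1) + rho1*e(t) + rho2*e(t-1)
import numpy as np

rng = np.random.default_rng(0)
a, b, m = 0.8, 2.0, 0.4
N = 200

# Open-loop experiment with a persistently exciting input.
u = rng.standard_normal(N)
y = np.zeros(N + 1)
for t in range(N):
    y[t + 1] = a * y[t] + b * u[t]

# Virtual reference r_v = M^{-1} y and virtual error e_v = r_v - y.
r_v = (y[1:N + 1] - (1 - m) * y[:N]) / m     # r_v(t), t = 0..N-1
e_v = r_v - y[:N]

# Least squares: u(t) - u(t-1) = rho1*e_v(t) + rho2*e_v(t-1), t = 1..N-1.
Phi = np.column_stack([e_v[1:], e_v[:-1]])
du = u[1:] - u[:-1]
rho, *_ = np.linalg.lstsq(Phi, du, rcond=None)

print(rho)     # with noiseless data this recovers [m/b, -a*m/b] = [0.2, -0.16]
```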
|
https://en.wikipedia.org/wiki/Data-driven_control_system
|
Accelerandois a 2005 science fiction novel consisting of a series of interconnected short stories written by British authorCharles Stross. As well as normal hardback and paperback editions, it was released as a freee-bookunder theCC BY-NC-ND license.Accelerandowon theLocus Awardin 2006,[1]and was nominated for several other awards in 2005 and 2006, including theHugo,Campbell,Clarke, andBritish Science Fiction Association Awards.[1][2]
In Italian,accelerandomeans "speeding up" and is used as atempo markinginmusical notation. In Stross' novel, it refers to theaccelerating rateat which humanity in general, and/or the novel's characters, head towards thetechnological singularity.
The book is a collection of nine short stories telling the tale of threegenerationsof a family before, during, and after atechnological singularity. It was originally written as a series ofnovelettesand novellas, all published inAsimov's Science Fictionmagazine in the period 2001 to 2004. According to Stross, the initial inspiration for the stories was his experience working as a programmer for a high-growth company during thedot-com boomof the 1990s.[3]
The first three stories follow the character ofagalmic"venturealtruist" Manfred Macx, starting in the early 21st century; the next three stories follow his daughter Amber; and the final three focus largely on Amber's son Sirhan in the completely transformed world at the end of the century.
Stross describes humanity's situation inAccelerandoas dire:
In the background of what looks like a Panglossian techno-optimist novel, horrible things are happening. Most of humanity is wiped out, then arbitrarily resurrected in mutilated form by the Vile Offspring. Capitalism eatseverythingthen the logic of competition pushes it so far that merely human entities can no longer compete; we're a fat, slow-moving, tasty resource – like the dodo. Our narrative perspective, Aineko, isnota talking cat: it's a vastly superintelligent AI, coolly calculating, that has worked out that human beings are more easily manipulated if they think they're dealing with a furry toy. The cat body is a sock puppet wielded by an abusive monster.[4]
As events progress inAccelerando, the planets of theSolar Systemare dismantled over time to form aMatrioshka brain, a vast solar-powered computational device inhabited by minds inconceivably more advanced and complex than naturally evolved intelligences such as human beings. This proves to be a normal stage in the life-cycle of an inhabited solar system; the galaxies are revealed to be filled with such Matrioshka brains. Intelligent consciousnesses outside of Matrioshka brains may communicate viawormholenetworks.
The notion that the universe is dominated by a communications network ofsuperintelligencesbears comparison withOlaf Stapledon's 1937 science-fiction novelStar Maker, although Stapledon's advanced civilisations are said to communicate psychically rather than informatically.
In the following table, the chapter number (#), chapter name and original magazine date of publication, and a brief synopsis are given. The nine stories are grouped into three parts.
The novel contains numerous allusions to real-world scientific concepts and individuals, including:
Accelerandowon the 2006Locus Awardfor Best Science Fiction Novel,[1]and the 2010 Estonian SF Award for Best Translated Novel of 2009.[12]Additionally, the novel was shortlisted for several other awards, including:
The original short story "Lobsters" (June 2001) was shortlisted for:
The original short story "Halo" (June 2002) was shortlisted for:
The original short story "Router" (September 2002) was shortlisted for:
The original short story "Nightfall" (April 2003) was shortlisted for:
The original short story "Elector" (September 2004) was shortlisted for:
|
https://en.wikipedia.org/wiki/Accelerando
|
Revlon, Inc. v. MacAndrews & Forbes Holdings, Inc., 506 A.2d 173 (Del. 1986),[1]was alandmark decisionof theDelaware Supreme Courtonhostile takeovers.
The Court declared that, in certain limited circumstances indicating that the "sale" or "break-up" of the company is inevitable, the fiduciary obligations of the directors of a target corporation are narrowed significantly, the singular responsibility of the board being to maximize immediate stockholder value by securing the highest price available. The role of the board of directors transforms from "defenders of the corporate bastion to auctioneers charged with getting the best price for the stockholders at a sale of the company."[2] Accordingly, the board's actions are evaluated in a different frame of reference. In such a context, that conduct cannot be judicially reviewed pursuant to the traditional business judgment rule, but instead will be scrutinized for reasonableness in relation to this discrete obligation.
The force of this statement spurred a corporate takeover frenzy, since directors believed that they were compelled to conduct an auction whenever their corporation appeared to be "in play," so as to not violate their fiduciary duties to the shareholders.
Colloquially, the board of a firm that is "inRevlonmode" acquires certainRevlonduties, which requires the firm to be auctioned or sold to the highest bidder.
The Court reached this holding in affirming the issuance by theDelaware Court of Chancerybelow of a preliminary injunction precluding Revlon, Inc. from consummating a proposed transaction with one of two competing bidders that effectively ended an active and ongoing auction to acquire the company.
CEORonald PerelmanofPantry Prideapproached theRevloncorporation, proposing either a negotiated transaction or, if necessary, ahostiletender offer, at a price of between $42 and $45 per share. Revlon's board rejected the negotiated transaction, fearing that the acquisition would be financed byjunk bondsand result in the corporation's dissolution.
To prevent thehostiletender offer, the Revlon board promptly undertook defensive action. Most notably, it adopted a Note Purchase Rights Plan, a variation on the traditionalpoison pillthat, when triggered, resulted in the issuance of debt rather than equity rights to existing shareholders other than the unapproved bidder.
Shortly thereafter, Pantry Pride declared a hostile cash tender offer for any or all Revlon shares at a price of $47.50, subject to its ability to secure financing and to the redemption of the rights issued to shareholders under the newly adopted Rights Plan.
The Revlon board responded by advising shareholders to reject the offer as inadequate, and it commenced its own offer to repurchase a significant percentage of its own outstanding shares in exchange for senior subordinated notes and convertible preferred stock valued at $100 per share. The offer was quickly oversubscribed and in exchange for 10 million of its own tendered shares, the company issued notes that contained covenants restricting Revlon's ability to incur debt, sell assets or issue dividends going forward.
The successful consummation of the Revlon repurchase program effectively thwarted Pantry Pride's outstanding tender offer. A few weeks later, however, Pantry Pride issued a new one that, taking into account the completed exchange offer, reflected value essentially equivalent to its first offer. Following rejection of this offer by the Revlon board, Pantry Pride repeatedly revised its offer over the course of the next several weeks, raising the offer price to $50, and later to $53 per share.
During this same period, the Revlon board had commenced discussions with Forstmann, Little regarding a possible leveraged buyout led by Forstmann as an alternative to the acquisition by Pantry Pride. It quickly reached agreement in principle on a transaction at a price of $56. The terms of the proposed deal importantly included a waiver of the restrictive covenants contained in the notes issued by Revlon in the earlier repurchase. The announcement of the proposed deal, and in particular the anticipated waiver of the covenants, sent the trading value of the notes into a steep decline, engendering threats of litigation from now irate noteholders.
Pantry Pride promptly raised the price of its offer to $56.25 per share. It further announced publicly that it would top any ensuing bid that Forstmann might make, if only by a fraction. In light of this, Forstmann expressed reluctance to reenter the bidding without significant assurances from Revlon that any resulting deal would close. The Revlon board assuaged Forstmann's concern. Less than a week following Pantry Pride's $56.25 offer, it struck a deal with Forstmann pursuant to which Forstmann would pay $57.25 per share conditioned on its receipt of a lock-up option to purchase one of Revlon's important business divisions at a discounted price should another acquirer secure 40% or more of Revlon's outstanding stock, a $25 million termination fee, a restrictive no-shop provision precluding the Revlon board from negotiating with Pantry Pride or any other rival bidder except under very narrow circumstances, removal of the Note Purchase Rights, and waiver of the restrictive covenants contained in the recently issued notes. Forstmann for its part agreed to support the par value of the Notes, still falling in value in the market, by exchanging them for new notes, presumably at the initial values of the Notes when they had been first issued.
Pantry Pride raised its offer to $58 per share. Simultaneously, it filed a claim in the Court of Chancery, seeking interim injunctive relief to nullify the asset option, the no-shop, the termination fee and the Rights. It argued that the board had breached its fiduciary duty by foreclosing Revlon stockholders from accepting its higher cash offer.
The Court of Chancery granted the requested relief, finding that the Revlon directors had acted to lock up the Forstmann deal by way of the challenged deal provisions out of concern for their potential liability to Revlon's disaffected and potentially litigious noteholders, a concern that would be allayed by Forstmann's agreement to restore the full value of the notes in connection with the new deal. The Court of Chancery found that, by thus pursuing their personal interests rather than maximizing the sale price for the benefit of the shareholders, the Revlon directors had breached their duty of loyalty.
The Delaware Supreme Court affirmed the judgment below.
First, the Court reviewed Pantry Pride's challenges to the Revlon board's defensive actions: the adoption of a poison pill and the consummation of the repurchase program. Referencing its recent decision inUnocal v. Mesa Petroleum, the Court observed initially that the business judgment rule, while generally applicable to a board's approval of a proposed merger, does not apply to a board's decision to implement anti-takeover measures, given the omnipresent specter that the board, in so doing, is serving its own interests in remaining in office at the expense of the interests of shareholders in securing maximum value.[3]Rather, it is the directors' threshold burden to establish that they had a reasonable basis for perceiving the need for defensive actions (typically by showing good faith and reasonable investigation) and that the action taken was reasonable in relation to the threat posed.
Applying this test, the Court found, first, that the Revlon board had acted reasonably and proportionately in adopting the Note Purchase Rights Plan in the face of a demonstrably inadequate offer of $45 per share, particularly since it retained the flexibility to redeem the rights in the event an acceptable offer should later appear and since the effect of such an action was to create bargaining leverage that resulted in significantly more favorable offers. It reached the same conclusion with respect to the exchange offer, for many of the same reasons.
However, a different legal standard applied once the board authorized negotiations for a merger with Forstmann: the break-up of the company or its sale to one suitor or another became inevitable, and the board clearly recognized that the company was for sale. It was no longer charged with protecting the shareholders and the corporate entity from perceived threats to its ability to continue to perform, but instead became obligated to maximize the company's immediate monetized value for the benefit of shareholders.
It was this new and far more narrow duty that the Revlon directors were found to have violated. By having agreed to structure the most recent Forstmann transaction in a way that effectively destroyed the ongoing bidding contest between Forstmann and Pantry Pride, the Revlon board was held to have acted contrary to its newly acquired, auctioneer-like obligation to pursue and secure the highest purchase price available for shareholders.
The Court was not swayed by defendants' claims that its concessions to Forstmann in fact resulted in a higher price than would otherwise have been available, while simultaneously enhancing the interests of noteholders by shoring up the sagging market for its outstanding notes.
The opinion provides two main passages meant to guide the actions of future boards, regarding when duties attach that lead to enhanced judicial scrutiny. The first of these passages explains that
When Pantry Pride increased its offer to $50 per share, and then to $53, it became apparent to all that the break-up of the company was inevitable. The Revlon board's authorization permitting management to negotiate a merger [*513] or buyout with a third party was a recognition that the company was for sale. The duty of the board had thus changed from the preservation of Revlon as a corporate entity to the maximization of the company's value at a sale for the stockholders' benefit. This significantly altered the board's responsibilities under the Unocal standards. It no longer faced threats to corporate policy and effectiveness, or to the stockholders' interests, from a grossly inadequate bid. The whole question of defensive measures became moot. The directors' role changed from defenders of the corporate bastion to auctioneers charged with getting the best price for the stockholders at a sale of the company.[4]
The other portion of the opinion which provides guidance can be found in the following:
The Revlon board argued that it acted in good faith in protecting the noteholders because Unocal permits consideration of other corporate constituencies ... However, such concern for non-stockholder interests is inappropriate when an auction among active bidders is in progress, and the object no longer is to protect or maintain the corporate enterprise but to sell it to the highest bidder.[2]
Given that factual and legal backdrop, the court concluded that the Revlon board impermissibly ended the "intense bidding contest on an insubstantial basis."[5]As a result, not only did the board's activities fail the new Revlon standard, but they also failed the Unocal standard.[6]
This opinion was written by Justice Andrew G.T. Moore.
Today, there are three levels of judicial review when an action is brought under the allegation of a breach of fiduciary duties.[7]As the court in Golden Cycle, LLC v. Allan stated, these levels are: "the deferentialbusiness judgment rule, the Unocal or Revlon enhanced scrutiny standard [and] the stringent standard of entire fairness."
The first and most deferential standard, the business judgment rule, has become virtually a rubber-stamp in Delaware corporate law for corporate boards to meet their duty of care.[8]It is the default standard (i.e., the facts must demonstrate why there should be a deviation from this level of review). The business judgment rule provides a rebuttable presumption "that in making a business decision the directors of a corporation acted on an informed basis, in good faith and in the honest belief that the action taken was in the best interests of the company."[9]Thus, at bottom, the business judgment rule reflects little more than process inquiry.
The Unocal and Revlon standards are similar in that they involve a reasonableness inquiry by the court and are triggered by particular factual events.[10] The Unocal standard focuses on the erection of defensive tactics by the target board, and involves a reasonableness review of whether a legitimate corporate threat exists and whether the response is proportionate to it.[11] The board's case is materially advanced when it can demonstrate that the board was independent, highly informed, and acted in good faith.[12] Revlon duties, on the other hand, are triggered by what may be loosely referred to as a "change in control", and require a general reasonableness standard.[13] This reasonableness standard requires virtually absolute independence of the board, careful attention to the type and scope of information to be considered by the board, good faith negotiation, and a focus on what constitutes the best value for the shareholders.[14] Finding the best value for shareholders may or may not require an auction, depending on the circumstances, and, again, this decision is subjected to a reasonableness inquiry.[15]
The entire fairness standard is triggered "where a majority of the directors approving the transaction were interested or where a majority stockholder stands on both sides of the transaction."[16]Directors can be found to be interested if they "appear on both sides of a transaction [or] expect to derive any personal financial benefit from it in the sense of self-dealing, as opposed to a benefit which devolves upon the corporation or all stockholders generally."[17]Once the entire fairness standard is triggered, the corporate board has the burden to demonstrate that the transaction was inherently fair to the shareholders, by both demonstrating fair dealing (i.e., process) and fair price (i.e., substance).[18]
Subsequent cases such asParamount v. Time(the Time Warner case) andParamount v. QVCaddressed when a board assumes theRevlonduty to auction the company and forego defensive measures that would otherwise be permissible underUnocal.
Expanding the scope of the intermediate enhanced scrutiny standard of judicial review previously announced in Unocal and Moran v. Household International, Inc., the Revlon opinion gave rise to years of academic debate and decisional law with respect to the events that should be deemed to trigger its application. Even today, questions persist as to the extent to which the doctrine has been absorbed into the traditional duty of care, particularly in connection with so-called ownership transactions such as mergers, and as to its interplay with the Unocal test traditionally applicable to defensive board action to fend off a hostile acquisition bid, and more recently to deal protection devices contained in merger agreements. See Omnicare v. NCS Healthcare, Inc., 818 A.2d 914 (Del. 2003); compare, e.g., In re Netsmart Technologies, Inc. Shareholders Litigation, 924 A.2d 171 (Del. Ch. 2007). Despite the expanse of precedent that Revlon has engendered in the more than 20 years since its issuance, the Revlon doctrine remains alive, well, and surprisingly vague in terms of its scope and application. Far more certain is that, as a result of the principles enunciated in the Revlon decision, cash mergers and change-of-control transactions will receive far closer judicial scrutiny than the broad judicial deference previously regarded as typical and appropriate, as will a board's approval of provisions in merger agreements that unduly restrict the board's ability to entertain more lucrative offers appearing between the signing of the merger agreement and its presentation for shareholder approval. Even actions by independent boards in such circumstances that fail to evidence a reasonable effort and intent to secure the highest and best price reasonably available are likely to invite searching judicial scrutiny. However, recent Delaware litigation deferred to an independent board's decision not to engage in negotiations with a competing bidder to try to obtain improved terms after a merger agreement had been signed.[19]
|
https://en.wikipedia.org/wiki/Revlon,_Inc._v._MacAndrews_%26_Forbes_Holdings,_Inc.
|
Automated journalism, also known as algorithmic journalism or robot journalism,[1][2][3] is a term that attempts to describe modern technological processes that have infiltrated the journalistic profession, such as news articles and videos generated by computer programs.[3][4][5] There are four main fields of application for automated journalism, namely automated content production, data mining, news dissemination and content optimization.[6] Through artificial intelligence (AI) software, stories are produced automatically by computers rather than human reporters. These programs interpret, organize, and present data in human-readable ways. Typically, the process involves an algorithm that scans large amounts of provided data, selects from an assortment of pre-programmed article structures, orders key points, and inserts details such as names, places, amounts, rankings, statistics, and other figures.[4] The output can also be customized to fit a certain voice, tone, or style.[2][3][4]
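The data-to-text pipeline described above can be illustrated with a small sketch: structured data comes in, a pre-programmed article structure is selected from a set of templates, and names and figures are slotted in. The feed fields, team names, and templates below are hypothetical and only stand in for the kind of structured input a commercial system would consume; this is a toy illustration, not any vendor's actual implementation.

```python
# A minimal sketch of template-based story generation from a hypothetical sports feed.
# Real systems add natural-language generation, style profiles, and far larger template sets.

def pick_template(home_score: int, away_score: int) -> str:
    """Select a pre-programmed article structure based on the data."""
    margin = abs(home_score - away_score)
    if margin == 0:
        return "{home} and {away} played to a {hs}-{as_} draw on {date}."
    if margin >= 20:
        return "{winner} routed {loser} {ws}-{ls} on {date}."
    return "{winner} edged {loser} {ws}-{ls} in a close game on {date}."

def generate_recap(game: dict) -> str:
    """Insert names, places, and figures from the data into the chosen template."""
    hs, as_ = game["home_score"], game["away_score"]
    winner, loser = (game["home"], game["away"]) if hs >= as_ else (game["away"], game["home"])
    return pick_template(hs, as_).format(
        home=game["home"], away=game["away"], hs=hs, as_=as_,
        winner=winner, loser=loser, ws=max(hs, as_), ls=min(hs, as_), date=game["date"])

# Hypothetical input record, standing in for a stats-provider feed.
print(generate_recap({"home": "Springfield", "away": "Shelbyville",
                      "home_score": 98, "away_score": 74, "date": "March 3"}))
```

A production pipeline differs mainly in scale: thousands of records per day, many more templates and ordering rules, and an output layer that adjusts voice and tone per publication.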
Data science and AI companies such asAutomated Insights,Narrative Science,United RobotsandMonokdevelop and provide these algorithms to news outlets.[4][7][8][9]As of 2016, only a few media organizations have used automated journalism. Early adopters include news providers such as theAssociated Press,Forbes,ProPublica, and theLos Angeles Times.[3]
Early implementations were mainly used for stories based on statistics and numerical figures. Common topics include sports recaps, weather, financial reports, real estate analysis, and earnings reviews.[3] StatSheet, an online platform covering college basketball, runs entirely on an automated program.[4] The Associated Press began using automation to cover 10,000 minor league baseball games annually, using a program from Automated Insights and statistics from MLB Advanced Media.[10] Outside of sports, the Associated Press also uses automation to produce stories on corporate earnings.[4] In 2006, Thomson Reuters announced its switch to automation to generate financial news stories on its online news platform.[11] More famously, an algorithm called Quakebot published a story about a 2014 California earthquake on The Los Angeles Times website within three minutes after the shaking had stopped.[4][7]
Automated journalism is sometimes seen as an opportunity to free journalists from routine reporting, providing them with more time for complex tasks. It also allows efficiency and cost-cutting, alleviating some financial burden that many news organizations face. However, automated journalism is also perceived as a threat to the authorship and quality of news and a threat to the livelihoods of human journalists.[2][3]
Robot reporters are built to produce large quantities of information at quicker speeds. TheAssociated Pressannounced that their use of automation has increased the volume of earnings reports from customers by more than ten times. With software fromAutomated Insightsand data from other companies, they can produce 150 to 300-word articles in the same time it takes journalists to crunch numbers and prepare information.[4]By automating routine stories and tasks, journalists are promised more time for complex jobs such as investigative reporting and in-depth analysis of events.[2][3]
Francesco Marconi[12]of the Associated Press stated that, through automation, the news agency freed up 20 percent[13]of reporters’ time to focus on higher-impact projects.
Automated journalism is cheaper because more content can be produced within less time. It also lowers labour costs for news organizations. Reduced human input means less spending on wages or salaries, paid leave, vacations, and employment insurance. Automation serves as a cost-cutting tool for news outlets that struggle with tight budgets but still wish to maintain the scope and quality of their coverage.[3][11]
In an automated story, there is often confusion about who should be credited as the author. Several participants of a study on algorithmic authorship[3]attributed the credit to the programmer; others perceived the news organization as the author, emphasizing the collaborative nature of the work. There is also no way for the reader to verify whether an article was written by a robot or human, which raises issues of transparency although such issues also arise with respect to authorship attribution between human authors too.[3][14]
Concerns about the perceived credibility of automated news are similar to concerns about the perceived credibility of news in general. Critics doubt whether algorithms are "fair and accurate, free from subjectivity, error, or attempted influence."[15] Again, these issues of fairness, accuracy, subjectivity, error, and attempts at influence or propaganda have also been present in articles written by humans for thousands of years. A common criticism is that machines do not replace human capabilities such as creativity, humour, and critical thinking. However, as the technology evolves, the aim is to mimic human characteristics. When the UK's Guardian newspaper used an AI to write an entire article in September 2020, commentators pointed out that the AI still relied on human editorial content. Austin Tanney, the head of AI at Kainos, said: "The Guardian got three or four different articles and spliced them together. They also gave it the opening paragraph. It doesn't belittle what it is. It was written by AI, but there was human editorial on that."[3][14][16]
The largest single study of readers' evaluations of news articles produced with and without the help of automation exposed 3,135 online news consumers to 24 articles. It found articles that had been automated were significantly less comprehensible, in part because they were considered to contain too many numbers. However, the automated articles were evaluated equally on other criteria including tone, narrative flow, and narrative structure.[17]
Beyond human evaluation, there are now numerous algorithmic methods to identify machine-written articles.[18] Although such articles may still contain errors that are obvious for a human to identify, they can at times score better with these automatic identifiers than human-written articles.[19]
Among the concerns about automation is the loss of employment for journalists as publishers switch to using AIs.[3][4][20]The use of automation has become a near necessity in newsrooms nowadays, in order to keep up with the ever-increasing demand for news stories, which in turn has affected the very nature of the journalistic profession.[6]In 2014, an annual census from The American Society of News Editors announced that the newspaper industry lost 3,800 full-time, professional editors.[21]Falling by more than 10% within a year, this is the biggest drop since the industry cut over 10,000 jobs in 2007 and 2008.[21][22]
There has been a significant amount of recent scholarship on the relationship between platform companies, such as Google and Facebook, and the news industry with researchers examining the impact of these platforms on the distribution and monetization of news content, as well as the implications for journalism and democracy.[23][24][25]Some scholars have extended this line of thinking to automated journalism and the use of AI in the news. A 2022 paper by theOxford Universityacademic Felix Simon, for example, argues that the concentration of AI tools and infrastructure in the hands of a few major technology companies, such as Google, Microsoft, and Amazon Web Services, is a significant issue for the news industry, as it risks shifting more control to these companies and increasing the industry's dependence on them.[26]Simon argues that this could lead to vendor lock-in, where news organizations become structurally dependent on AI provided by these companies and are unable to switch to another vendor without incurring significant costs. The companies also possess artefactual and contractual control[27]over their AI infrastructure and services, which could expose news organizations to the risk of unforeseen changes or the stopping of their AI solutions entirely. Additionally, the author argues the reliance on these companies for AI can make it more difficult for news organizations to understand the decisions or predictions made by the systems and can limit their ability to protect sources or proprietary business information.
A 2017Nieman Reportsarticle by Nicola Bruno[28]discusses whether or not machines will replace journalists and addresses concerns around the concept of automated journalism practices. Ultimately, Bruno came to the conclusion that AI would assist journalists, not replace them. "No automated software or amateur reporter will ever replace a good journalist", she said.
In 2020, however, Microsoft did just that, replacing 27 journalists with AI. One staff member was quoted by The Guardian as saying: "I spend all my time reading about how automation and AI is going to take all our jobs, and here I am – AI has taken my job." The journalist went on to say that replacing humans with software was risky, as existing staff were careful to stick to "very strict editorial guidelines" which ensured that users were not presented with violent or inappropriate content when opening their browser, for example.[29]
|
https://en.wikipedia.org/wiki/Automated_journalism
|
Government by algorithm[1](also known asalgorithmic regulation,[2]regulation by algorithms,algorithmic governance,[3][4]algocratic governance,algorithmic legal orderoralgocracy[5]) is an alternative form ofgovernmentorsocial orderingwhere the usage of computeralgorithmsis applied to regulations, law enforcement, and generally any aspect of everyday life such as transportation or land registration.[6][7][8][9][10]The term "government by algorithm" has appeared in academic literature as an alternative for "algorithmic governance" in 2013.[11]A related term, algorithmic regulation, is defined as setting the standard, monitoring and modifying behaviour by means of computational algorithms – automation ofjudiciaryis in its scope.[12]In the context of blockchain, it is also known asblockchain governance.[13]
Government by algorithm raises new challenges that are not captured in thee-governmentliterature and the practice of public administration.[14]Some sources equatecyberocracy, which is a hypotheticalform of governmentthat rules by the effective use of information,[15][16][17]with algorithmic governance, although algorithms are not the only means of processing information.[18][19]Nello Cristianiniand Teresa Scantamburlo argued that the combination of a human society and certain regulation algorithms (such as reputation-based scoring) forms asocial machine.[20]
In 1962, the director of the Institute for Information Transmission Problems of theRussian Academy of Sciencesin Moscow (later Kharkevich Institute),[21]Alexander Kharkevich, published an article in the journal "Communist" about a computer network for processing information and control of the economy.[22][23]In fact, he proposed to make a network like the modern Internet for the needs of algorithmic governance (ProjectOGAS). This created a serious concern among CIA analysts.[24]In particular,Arthur M. Schlesinger Jr.warned that"by 1970 the USSR may have a radically new production technology, involving total enterprises or complexes of industries, managed by closed-loop, feedback control employingself-teaching computers".[24]
Between 1971 and 1973, theChileangovernment carried outProject Cybersynduring thepresidency of Salvador Allende. This project was aimed at constructing a distributeddecision support systemto improve the management of the national economy.[25][2]Elements of the project were used in 1972 to successfully overcome the traffic collapse caused by aCIA-sponsored strike of forty thousand truck drivers.[26]
Also in the 1960s and 1970s,Herbert A. Simonchampionedexpert systemsas tools for rationalization and evaluation of administrative behavior.[27]The automation of rule-based processes was an ambition of tax agencies over many decades resulting in varying success.[28]Early work from this period includes Thorne McCarty's influential TAXMAN project[29]in the US and Ronald Stamper'sLEGOLproject[30]in the UK. In 1993, the computer scientistPaul Cockshottfrom theUniversity of Glasgowand the economist Allin Cottrell from theWake Forest Universitypublished the bookTowards a New Socialism, where they claim to demonstrate the possibility of a democraticallyplanned economybuilt on modern computer technology.[31]The Honourable JusticeMichael Kirbypublished a paper in 1998, where he expressed optimism that the then-available computer technologies such aslegal expert systemcould evolve to computer systems, which will strongly affect the practice of courts.[32]In 2006, attorneyLawrence Lessig, known for the slogan"Code is law", wrote:
[T]he invisible hand of cyberspace is building an architecture that is quite the opposite of its architecture at its birth. This invisible hand, pushed by government and by commerce, is constructing an architecture that will perfect control and make highly efficient regulation possible[33]
Since the 2000s, algorithms have been designed and used toautomatically analyze surveillance videos.[34]
In his 2006 book Virtual Migration, A. Aneesh developed the concept of algocracy, in which information technologies constrain human participation in public decision making.[35][36] Aneesh differentiated algocratic systems from bureaucratic systems (legal-rational regulation) as well as market-based systems (price-based regulation).[37]
In 2013, algorithmic regulation was coined byTim O'Reilly, founder and CEO of O'Reilly Media Inc.:
Sometimes the "rules" aren't really even rules. Gordon Bruce, the former CIO of the city of Honolulu, explained to me that when he entered government from the private sector and tried to make changes, he was told, "That's against the law." His reply was "OK. Show me the law." "Well, it isn't really a law. It's a regulation." "OK. Show me the regulation." "Well, it isn't really a regulation. It's a policy that was put in place by Mr. Somebody twenty years ago." "Great. We can change that!" [...] Laws should specify goals, rights, outcomes, authorities, and limits. If specified broadly, those laws can stand the test of time. Regulations, which specify how to execute those laws in much more detail, should be regarded in much the same way that programmers regard their code and algorithms, that is, as a constantly updated toolset to achieve the outcomes specified in the laws. [...] It's time for government to enter the age of big data. Algorithmic regulation is an idea whose time has come.[38]
In 2017, Ukraine'sMinistry of Justiceran experimentalgovernment auctionsusingblockchaintechnology to ensure transparency and hinder corruption in governmental transactions.[39]"Government by Algorithm?" was the central theme introduced at Data for Policy 2017 conference held on 6–7 September 2017 in London.[40]
A smart city is an urban area where collected surveillance data is used to improve various operations. Increases in computational power allow more automated decision making and the replacement of public agencies by algorithmic governance.[41] In particular, the combined use of artificial intelligence and blockchains for IoT may lead to the creation of sustainable smart city ecosystems.[42] Intelligent street lighting in Glasgow is an example of a successful government application of AI algorithms.[43] A study of smart city initiatives in the US shows that such initiatives require the public sector as a main organizer and coordinator, the private sector as a technology and infrastructure provider, and universities as expertise contributors.[44]
Thecryptocurrencymillionaire Jeffrey Berns proposed the operation oflocal governmentsinNevadaby tech firms in 2021.[45]Berns bought 67,000 acres (271 km2) in Nevada's ruralStorey County(population 4,104) for $170,000,000 (£121,000,000) in 2018 in order to develop a smart city with more than 36,000 residents that could generate an annual output of $4,600,000,000.[45]Cryptocurrency would be allowed for payments.[45]Blockchains, Inc. "Innovation Zone" was canceled in September 2021 after it failed to secure enough water[46]for the planned 36,000 residents, through water imports from a site located 100 miles away in the neighboringWashoe County.[47]A similar water pipeline proposed in 2007 was estimated to cost $100 million and would have taken about 10 years to develop.[47]With additional water rights purchased from Tahoe Reno Industrial General Improvement District, "Innovation Zone" would have acquired enough water for about 15,400 homes - meaning that it would have barely covered its planned 15,000 dwelling units, leaving nothing for the rest of the projected city and its 22 million square-feet of industrial development.[47]
InSaudi Arabia, the planners ofThe Lineassert that it will be monitored by AI to improve life by using data and predictive modeling.[48]
Tim O'Reilly suggested that data sources and reputation systems combined in algorithmic regulation can outperform traditional regulations.[38] For instance, once taxi drivers are rated by passengers, the quality of their services will improve automatically and "drivers who provide poor service are eliminated".[38] O'Reilly's suggestion is based on the control-theoretic concept of a feedback loop: improvements and declines in reputation enforce desired behavior.[20] The use of feedback loops for the management of social systems had already been suggested in management cybernetics by Stafford Beer.[50]
These connections are explored byNello Cristianiniand Teresa Scantamburlo, where the reputation-credit scoring system is modeled as an incentive given to the citizens and computed by asocial machine, so that rational agents would be motivated to increase their score by adapting their behaviour. Several ethical aspects of that technology are still being discussed.[20]
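A toy simulation can make the feedback loop concrete: passengers rate drivers, each rating updates a running reputation score, and drivers whose score falls below a threshold are removed from the platform. The rating model, threshold, and update rule below are illustrative assumptions only, not a description of any real ride-hailing or scoring system.

```python
import random

# Illustrative reputation feedback loop: low-scoring providers are eliminated over time.
random.seed(0)

drivers = [{"id": i, "quality": random.uniform(0.3, 1.0), "score": 5.0} for i in range(10)]
THRESHOLD = 4.0   # assumed cut-off below which a provider is removed
ALPHA = 0.1       # weight given to each new rating in the running reputation score

for week in range(20):
    for d in drivers:
        # Passengers rate from 1 to 5; higher intrinsic quality yields better ratings on average.
        rating = min(5.0, max(1.0, random.gauss(5.0 * d["quality"], 0.5)))
        d["score"] = (1 - ALPHA) * d["score"] + ALPHA * rating
    # The feedback step: providers whose reputation drops below the threshold are eliminated.
    drivers = [d for d in drivers if d["score"] >= THRESHOLD]

if drivers:
    avg_quality = sum(d["quality"] for d in drivers) / len(drivers)
    print(f"{len(drivers)} drivers remain, average intrinsic quality {avg_quality:.2f}")
else:
    print("no drivers remain above the threshold")
```

Running the loop shows the average intrinsic quality of the surviving providers rising over time, which is the behaviour O'Reilly's argument relies on; it also shows how sensitive the outcome is to the arbitrary threshold and the noise in the ratings.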
China'sSocial Credit Systemwas said to be a mass surveillance effort with a centralized numerical score for each citizen given for their actions, though newer reports say that this is a widespread misconception.[51][52][53]
Smart contracts, cryptocurrencies, and decentralized autonomous organizations are mentioned as means to replace traditional ways of governance.[54][55][10] Cryptocurrencies are currencies enabled by algorithms without a governmental central bank.[56] Central bank digital currency often employs similar technology, but is differentiated by the fact that it does use a central bank; it is soon to be employed by major unions and governments such as the European Union and China. Smart contracts are self-executable contracts whose objective is to reduce the need for trusted governmental intermediaries, arbitration and enforcement costs.[57][58] A decentralized autonomous organization is an organization represented by smart contracts that is transparent, controlled by shareholders and not influenced by a central government.[59][60][61] Smart contracts have been discussed for use in applications such as (temporary) employment contracts[62][63] and the automatic transfer of funds and property (i.e. inheritance, upon registration of a death certificate).[64][65][66][67] Some countries such as Georgia and Sweden have already launched blockchain programs focusing on property (land titles and real estate ownership).[39][68][69][70] Ukraine is also looking at other areas such as state registers.[39]
According to a study by Stanford University, 45% of the studied US federal agencies had experimented with AI and related machine learning (ML) tools up to 2020.[1] US federal agencies counted the number of artificial intelligence applications, which are listed below.[1] 53% of these applications were produced by in-house experts.[1] Commercial providers of the remaining applications include Palantir Technologies.[71]
In 2012,NOPDstarted a collaboration with Palantir Technologies in the field ofpredictive policing.[72]Besides Palantir's Gotham software, other similar (numerical analysis software) used by police agencies (such as the NCRIC) includeSAS.[73]
In the fight against money laundering,FinCENemploys the FinCEN Artificial Intelligence System (FAIS) since 1995.[74][75]
National health administration entities and organisations such as AHIMA (American Health Information Management Association) holdmedical records. Medical records serve as the central repository for planning patient care and documenting communication among patient and health care provider and professionals contributing to the patient's care. In the EU, work is ongoing on aEuropean Health Data Spacewhich supports the use of health data.[76]
The US Department of Homeland Security has employed the software ATLAS, which runs on Amazon Cloud. It scanned more than 16.5 million records of naturalized Americans and flagged approximately 124,000 of them for manual analysis and review by USCIS officers regarding denaturalization.[77][78] They were flagged due to potential fraud, public safety and national security issues. Some of the scanned data came from the Terrorist Screening Database and the National Crime Information Center.
NarxCare is US software[79] which combines data from the prescription registries of various U.S. states[80][81] and uses machine learning to generate various three-digit "risk scores" for prescriptions of medications and an overall "Overdose Risk Score", collectively referred to as Narx Scores,[82] in a process that potentially includes EMS and criminal justice data[79] as well as court records.[83]
In Estonia, artificial intelligence is used in itse-governmentto make it more automated and seamless. A virtual assistant will guide citizens through any interactions they have with the government. Automated and proactive services "push" services to citizens at key events of their lives (including births, bereavements, unemployment). One example is the automated registering of babies when they are born.[84]Estonia'sX-Road systemwill also be rebuilt to include even more privacy control and accountability into the way the government uses citizen's data.[85]
In Costa Rica, the possible digitalization of public procurement activities (i.e. tenders for public works) has been investigated. The paper discussing this possibility mentions that the use of ICT in procurement has several benefits such as increasing transparency, facilitating digital access to public tenders, reducing direct interaction between procurement officials and companies at moments of high integrity risk, increasing outreach and competition, and easier detection of irregularities.[86]
Besides using e-tenders for regularpublic works(construction of buildings, roads), e-tenders can also be used forreforestationprojects and othercarbon sinkrestoration projects.[87]Carbon sinkrestoration projectsmaybe part of thenationally determined contributionsplans in order to reach the nationalParis agreement goals.
Governmentprocurementaudit softwarecan also be used.[88][89]Audits are performed in some countries aftersubsidies have been received.
Some government agencies provide track and trace systems for services they offer. An example istrack and tracefor applications done by citizens (i.e. driving license procurement).[90]
Some government services useissue tracking systemsto keep track of ongoing issues.[91][92][93][94]
Judges' decisions in Australia are supported by the"Split Up" softwarein cases of determining the percentage of a split after adivorce.[95]COMPASsoftware is used in the USA to assess the risk ofrecidivismin courts.[96][97]According to the statement of Beijing Internet Court, China is the first country to create an internet court or cyber court.[98][99][100]The Chinese AI judge is avirtual recreationof an actual female judge. She "will help the court's judges complete repetitive basic work, including litigation reception, thus enabling professional practitioners to focus better on their trial work".[98]Also,Estoniaplans to employ artificial intelligence to decide small-claim cases of less than €7,000.[101]
Lawbotscan perform tasks that are typically done by paralegals or young associates at law firms. One such technology used by US law firms to assist in legal research is from ROSS Intelligence,[102]and others vary in sophistication and dependence on scriptedalgorithms.[103]Another legal technologychatbotapplication isDoNotPay.
Due to the COVID-19 pandemic in 2020, in-person final exams were impossible for thousands of students.[104]The public high schoolWestminster Highemployed algorithms to assign grades. UK'sDepartment for Educationalso employed a statistical calculus to assign final grades inA-levels, due to the pandemic.[105]
Besides use in grading, software systems like AI were used in preparation for college entrance exams.[106]
AI teaching assistants are being developed and used for education (e.g. Georgia Tech's Jill Watson)[107][108]and there is also an ongoing debate on the possibility of teachers being entirely replaced by AI systems (e.g. inhomeschooling).[109]
In 2018, an activist named Michihito Matsuda ran for mayor in theTama city area of Tokyoas a human proxy for anartificial intelligenceprogram.[110]While election posters and campaign material used the termrobot, and displayedstock imagesof a feminineandroid, the "AI mayor" was in fact amachine learning algorithmtrained using Tama city datasets.[111]The project was backed by high-profile executives Tetsuzo Matsumoto ofSoftbankand Norio Murakami ofGoogle.[112]Michihito Matsuda came third in the election, being defeated byHiroyuki Abe.[113]Organisers claimed that the 'AI mayor' was programmed to analyzecitizen petitionsput forward to thecity councilin a more 'fair and balanced' way than human politicians.[114]
In 2018, Cesar Hidalgo presented the idea of augmented democracy.[115] In an augmented democracy, legislation is done by digital twins of every single person.
In 2019, AI-powered messengerchatbotSAM participated in the discussions on social media connected to an electoral race in New Zealand.[116]The creator of SAM, Nick Gerritsen, believed SAM would be advanced enough to run as acandidateby late 2020, when New Zealand had its next general election.[117]
In 2022, the chatbot "Leader Lars" or "Leder Lars" was nominated forThe Synthetic Partyto run in the 2022Danishparliamentary election,[118]and was built by the artist collectiveComputer Lars.[119]Leader Lars differed from earlier virtual politicians by leading apolitical partyand by not pretending to be an objective candidate.[120]This chatbot engaged in critical discussions on politics with users from around the world.[121]
In 2023, in the Japanese town of Manazuru, a mayoral candidate called "AI Mayer" hoped to become the first AI-powered officeholder in Japan in November 2023. The candidacy is said to be supported by a group led by Michihito Matsuda.[122]
In the2024 United Kingdom general election, a businessman named Steve Endacott ran for the constituency ofBrighton Pavilionas an AI avatar named "AI Steve",[123]saying that constituents could interact with AI Steve to shape policy. Endacott stated that he would only attend Parliament to vote based on policies which had garnered at least 50% support.[124]AI Steve placed last with 179 votes.[125]
In February 2020, China launched amobile appto deal with theCoronavirus outbreak[127]called "close-contact-detector".[128]Users are asked to enter their name and ID number. The app is able to detect "close contact" using surveillance data (i.e. using public transport records, including trains and flights)[128]and therefore a potential risk of infection. Every user can also check the status of three other users. To make this inquiry users scan a Quick Response (QR) code on their smartphones using apps likeAlipayorWeChat.[129]The close contact detector can be accessed via popular mobile apps including Alipay. If a potential risk is detected, the app not only recommends self-quarantine, it also alerts local health officials.[130]
Alipay also has theAlipay Health Codewhich is used to keep citizens safe. This system generates a QR code in one of three colors (green, yellow, or red) after users fill in a form on Alipay with personal details. A green code enables the holder to move around unrestricted. A yellow code requires the user to stay at home for seven days and red means a two-week quarantine. In some cities such as Hangzhou, it has become nearly impossible to get around without showing one's Alipay code.[131]
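The colour rule described above is, at its core, a mapping from an assessed risk level to a movement restriction. A minimal sketch of such a mapping is shown below; the input fields and decision rules are hypothetical, since the actual criteria used by the Alipay Health Code are not publicly documented.

```python
# Hypothetical mapping from assessed risk factors to the restrictions described above.
RESTRICTIONS = {
    "green": "move around unrestricted",
    "yellow": "stay at home for seven days",
    "red": "two-week quarantine",
}

def health_code_colour(had_close_contact: bool, visited_high_risk_area: bool, has_symptoms: bool) -> str:
    """Toy classification; the real scoring model is not public."""
    if has_symptoms or had_close_contact:
        return "red"
    if visited_high_risk_area:
        return "yellow"
    return "green"

colour = health_code_colour(had_close_contact=False, visited_high_risk_area=True, has_symptoms=False)
print(colour, "->", RESTRICTIONS[colour])
```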
In Cannes, France, monitoring software has been used on footage shot by CCTV cameras, allowing authorities to monitor compliance with local social distancing and mask wearing during the COVID-19 pandemic. The system does not store identifying data, but rather alerts city authorities and police where breaches of the distancing and mask-wearing rules are spotted (allowing fines to be issued where needed). The algorithms used by the monitoring software can be incorporated into existing surveillance systems in public spaces (hospitals, stations, airports, shopping centres, ...).[132]
Cellphone data is used to locate infected patients in South Korea, Taiwan, Singapore and other countries.[133][134] In March 2020, the Israeli government enabled security agencies to track the mobile phone data of people presumed to have coronavirus. The measure was taken to enforce quarantine and protect those who may come into contact with infected citizens.[135] Also in March 2020, Deutsche Telekom shared private cellphone data with the federal government agency, the Robert Koch Institute, in order to research and prevent the spread of the virus.[136] Russia deployed facial recognition technology to detect quarantine breakers.[137] The Italian regional health commissioner Giulio Gallera said that "40% of people are continuing to move around anyway", as he had been informed by mobile phone operators.[138] In the US, Europe and the UK, Palantir Technologies has been engaged to provide COVID-19 tracking services.[139]
Tsunamis can be detected by tsunami warning systems, which can make use of AI.[140][141] Floods can also be detected using AI systems.[142] Wildfires can be predicted using AI systems.[143][144] Wildfire detection is possible with AI systems (e.g. through satellite data, aerial imagery, and GPS positions of personnel), which can help in the evacuation of people during wildfires,[145] in investigating how householders responded in wildfires,[146] and in spotting wildfires in real time using computer vision.[147][148] Earthquake detection systems are now improving alongside the development of AI technology, through measuring seismic data and implementing complex algorithms to improve detection and prediction rates.[149][150][151] Earthquake monitoring, phase picking, and seismic signal detection have developed through AI algorithms of deep learning, analysis, and computational models.[152] Locust breeding areas can be approximated using machine learning, which could help to stop locust swarms in an early phase.[153]
Algorithmic regulation is supposed to be a system of governance where more exact data, collected from citizens via their smart devices and computers, is used to more efficiently organize human life as a collective.[154][155]AsDeloitteestimated in 2017, automation of US government work could save 96.7 million federal hours annually, with a potential savings of $3.3 billion; at the high end, this rises to 1.2 billion hours and potential annual savings of $41.1 billion.[156]
There are potential risks associated with the use of algorithms in government. Those include:
According to the 2016 book Weapons of Math Destruction, algorithms and big data are suspected to increase inequality due to opacity, scale and damage.[159]
There is also a serious concern that gaming by the regulated parties might occur: once more transparency is brought into decision making by algorithmic governance, regulated parties might try to manipulate the outcome in their own favor and even use adversarial machine learning.[1][20] According to Harari, the conflict between democracy and dictatorship is seen as a conflict of two different data-processing systems; AI and algorithms may swing the advantage toward the latter by processing enormous amounts of information centrally.[160]
In 2018, the Netherlands employed an algorithmic system, SyRI (Systeem Risico Indicatie), to detect citizens perceived as being at high risk of committing welfare fraud; it quietly flagged thousands of people to investigators.[161] This caused a public protest. The district court of The Hague shut down SyRI, referencing Article 8 of the European Convention on Human Rights (ECHR).[162]
The contributors of the 2019 documentaryiHumanexpressed apprehension of "infinitely stable dictatorships" created by government AI.[163]
Due to public criticism, the Australian government announced the suspension ofRobodebt schemekey functions in 2019, and a review of all debts raised using the programme.[164]
In 2020, algorithms assigning exam grades to students in theUK sparked open protestunder the banner "Fuck the algorithm."[105]This protest was successful and the grades were taken back.[165]
In 2020, the US government software ATLAS, which runs on Amazon Cloud, sparked uproar from activists and Amazon's own employees.[166]
In 2021, Eticas Foundation launched a database of governmental algorithms calledObservatory of Algorithms with Social Impact(OASI).[167]
An initial approach towards transparency included theopen-sourcing of algorithms.[168]Software code can be looked into and improvements can be proposed throughsource-code-hosting facilities.
A 2019 poll conducted byIE University's Center for the Governance of Change in Spain found that 25% of citizens from selected European countries were somewhat or totally in favor of letting an artificial intelligence make important decisions about how their country is run.[169]The following table lists the results by country:
Researchers found some evidence that when citizens perceive their political leaders or security providers to be untrustworthy, disappointing, or immoral, they prefer to replace them by artificial agents, whom they consider to be more reliable.[170]The evidence is established by survey experiments on university students of all genders.
A 2021 poll byIE Universityindicates that 51% of Europeans are in favor of reducing the number of national parliamentarians and reallocating these seats to an algorithm. This proposal has garnered substantial support in Spain (66%), Italy (59%), and Estonia (56%). Conversely, the citizens of Germany, the Netherlands, the United Kingdom, and Sweden largely oppose the idea.[171]The survey results exhibit significant generational differences. Over 60% of Europeans aged 25-34 and 56% of those aged 34-44 support the measure, while a majority of respondents over the age of 55 are against it. International perspectives also vary: 75% of Chinese respondents support the proposal, whereas 60% of Americans are opposed.[171]
The 1970David Bowiesong "Saviour Machine" depicts an algocratic society run by the titular mechanism, which ended famine and war through "logic" but now threatens to cause an apocalypse due to its fear that its subjects have become excessively complacent.[172]
The novelsDaemon(2006) andFreedom™(2010) byDaniel Suarezdescribe a fictional scenario of global algorithmic regulation.[173]Matthew De Abaitua'sIf Thenimagines an algorithm supposedly based on "fairness" recreating a premodern rural economy.[174]
|
https://en.wikipedia.org/wiki/AI_mayor
|
CPU-Zis afreewaresystem profilingandmonitoringapplication forMicrosoft WindowsandAndroidthat detects thecentral processing unit,RAM,motherboardchipset, and other hardware features of a modernpersonal computerorAndroid device.
CPU-Z is more comprehensive in virtually all areas compared to the tools provided in Windows to identify various hardware components, and thus assists in identifying certain components without the need of opening the case; particularly the core revision andRAMclock rate. It also provides information on the system'sGPU.
|
https://en.wikipedia.org/wiki/CPU-Z
|
"Argument Clinic" is a sketch fromMonty Python's Flying Circus, written byJohn CleeseandGraham Chapman. The sketch was originally broadcast as part of the television series and has subsequently been performed live by the group. It relies heavily on wordplay and dialogue, and has been used as an example of how language works.
After the episode's end credits have scrolled, theBBC 1 mirror globeappears on screen, while acontinuity announcer(Eric Idle) introduces "five more minutes ofMonty Python's Flying Circus".[2]In the ensuing sketch, an unnamed man (Michael Palin) approaches a receptionist (Rita Davies) and says that he would like to have an argument. She directs him to a Mr. Barnard who occupies an office along the corridor. The customer enters an office in which Barnard (Graham Chapman) hurls angry insults at him. The customer says that he came into the room for an argument, causing Barnard to apologize and clarify that his office is dedicated to "abuse"; "argument" is next door. He politely sends the customer on his way before calling him a "stupidgit" out of earshot.[2]
The customer enters the next office, where Mr. Vibrating (John Cleese) is seated.[2]The customer asks if he is in the right office for an argument, to which Vibrating responds that he has already told him he is. The customer disputes this, and the men begin an argumentative back-and-forth exchange. Their exchange is a very shallow one, consisting mostly of petty and contradictory "is/isn't" responses, to the point that the customer feels that he is not getting what he paid for. They then argue over the very definition of an argument until Vibrating rings a bell and announces that the customer's paid time has concluded.
The customer is dissatisfied and tries to argue with Vibrating over whether he really got as much time as he paid for, but he insists that he is not allowed to argue unless he is paid for another session. The man finally relents and pays more money for additional arguing time, but Vibrating continues to insist that he has not paid, and another argument breaks out over that issue. He believes that he has caught Vibrating in a contradiction—arguing without being paid—but Vibrating counters that he could be arguing in his spare time. Frustrated, the customer storms out of the room.
He proceeds to explore other rooms in the clinic; he enters a room marked "Complaints" hoping to lodge a complaint, only to find that it is a complaintclinicin which the man in charge (Idle) is complaining about his shoes. The next office contains another man, Spreaders (Terry Jones), offering "being-hit-on-the-head lessons", which the customer finds a stupid concept. At that point aScotland Yarddetective, Inspector Fox "of the Light Entertainment Police, Comedy Division, Special Flying Squad" (Chapman) intervenes and declares the two men under arrest for participating in a confusing sketch. However, a second officer, Inspector Thompson's Gazelle "of the Programme Planning Police, Light Entertainment Division, Special Flying Squad" (Idle) comes in and charges the three for "self-conscious behaviour", saying "It's so and so of the Yard" every time the police appear, and for ending a sketch by having a police officer intervene.
As he realizes that he is a part of the skit's absurdity, another policeman (Cleese) enters the room to stop Thompson's Gazelle, followed by a hairy hand stopping him, and the sketch ends. Afterwards, the globe ident appears on screen while the announcer introduces "one more minute ofMonty Python's Flying Circus".[2]
The sketch parodies modern consumer culture, implying that anything can be purchased, even absurd things such as arguing, abuse, or being hit over the head.[3]The sketch was typical for Cleese and Chapman's writing at the time, as it relied on verbal comedy.[4]Python author Darl Larsen believes the sketch was influenced bymusic halland radio comedy, particularly that ofthe Goons, and notes that there is little camera movement during the original television recording.[3]
One line in the middle of the sketch, "An argument is a connected series of statements intended to establish a definite proposition" was taken almost verbatim from theOxford English Dictionary.[3]
The sketch originally appeared in the 29th episode of the original television series, entitled "The Money Programme",[5]and was released (in audio only) on the LPMonty Python's Previous Record, onCharisma Recordsin 1972.[6]
The sketch was subsequently performed live at theHollywood Bowlin September 1980, which was filmed and released asMonty Python Live at the Hollywood Bowl.[7]The sketch features the discussion with the receptionist (played here byCarol Cleveland), the abuse from Chapman, and most of the argument between Cleese and Palin. It is then ended abruptly by the entrance ofTerry Gilliam, on wires, singing "I've Got Two Legs".[8]A further live performance occurred in 1989 at theSecret Policeman's Ball, where Cleveland and Chapman's roles were replaced byDawn FrenchandChris Langham. This performance was subsequently released on DVD.[9]The sketch was performed again in July 2014 duringMonty Python Live (Mostly), withTerry Jonesfilling in for Chapman's role and Gilliam reprising "I've Got Two Legs".[10]
The sketch has been frequently used as an example of how not to argue, because, as Palin's character notes, it contains little more thanad hominemattacks andcontradiction,[11]and does not contribute tocritical thinking.[12]It has also been described as a "classical case in point" of dialogue where two parties are unwilling to co-operate,[13]and as an example of flawedlogic, since Palin is attempting to argue that Cleese is not arguing with him.[14]
The text of the argument has been presented as a good example of the workings ofEnglish grammar, where sentences can be reduced to simple subject/verb pairs.[15]It has been included as an example of analysing English in school textbooks.[1]The sketch has become popular withphilosophystudents, who note that arguing is "all we are good at", and wonder about the intellectual exercise one could get from paying for a professional quality debate.[16]
ThePython programming language, which contains many Monty Python references as feature names, has an internal-only module called "Argument Clinic" to pre-process Python files.[17][18]
The sketch is referenced in a line of dialogue in the TV showHouse, season 6, episode 10, "Wilson". The character Dr. Wilson says to Dr. House "I didn't come here for an argument," to which House replies "No, right, that's room 12A," echoing the lines from the "abuse room" in the Argument Clinic sketch.
|
https://en.wikipedia.org/wiki/Argument_Clinic
|
A geometric Brownian motion (GBM) (also known as exponential Brownian motion) is a continuous-time stochastic process in which the logarithm of the randomly varying quantity follows a Brownian motion (also called a Wiener process) with drift.[1] It is an important example of a stochastic process satisfying a stochastic differential equation (SDE); in particular, it is used in mathematical finance to model stock prices in the Black–Scholes model.
A stochastic process $S_t$ is said to follow a GBM if it satisfies the following stochastic differential equation (SDE):

$$dS_t = \mu S_t\,dt + \sigma S_t\,dW_t,$$
where $W_t$ is a Wiener process or Brownian motion, and $\mu$ ('the percentage drift') and $\sigma$ ('the percentage volatility') are constants.
The former parameter is used to model deterministic trends, while the latter parameter models unpredictable events occurring during the motion.
For an arbitrary initial value $S_0$ the above SDE has the analytic solution (under Itô's interpretation):

$$S_t = S_0\exp\!\left(\left(\mu - \frac{\sigma^2}{2}\right)t + \sigma W_t\right).$$
The derivation requires the use of Itô calculus. Applying Itô's formula to $\ln S_t$ leads to

$$d\ln S_t = \frac{dS_t}{S_t} - \frac{1}{2}\,\frac{dS_t\,dS_t}{S_t^2},$$
where $dS_t\,dS_t$ is the quadratic variation of the SDE.
When $dt \to 0$, $dt$ converges to 0 faster than $dW_t$,
since $dW_t^2 = O(dt)$. So the above infinitesimal can be simplified by

$$dS_t\,dS_t = \sigma^2 S_t^2\,dt.$$
Plugging the value of $dS_t$ in the above equation and simplifying, we obtain

$$\ln\frac{S_t}{S_0} = \left(\mu - \frac{\sigma^2}{2}\right)t + \sigma W_t.$$
Taking the exponential and multiplying both sides by $S_0$ gives the solution claimed above.
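As a quick numerical check of this closed-form solution, sample paths can be simulated directly from it. The sketch below uses NumPy with arbitrary illustrative parameters (the initial value, drift, and volatility are assumptions, not market data) and compares the empirical mean at maturity with $S_0 e^{\mu T}$, the expected value derived in the next section.

```python
import numpy as np

# Simulate GBM paths from the exact solution S_t = S0 * exp((mu - sigma^2/2) t + sigma W_t).
rng = np.random.default_rng(42)
S0, mu, sigma = 100.0, 0.05, 0.2          # illustrative initial value, drift, volatility
T, n_steps, n_paths = 1.0, 252, 10_000

dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

# Brownian increments dW ~ N(0, dt); their cumulative sum gives W_t on the time grid.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

S = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)

print("empirical mean at T:", S[:, -1].mean())
print("theoretical E[S_T]: ", S0 * np.exp(mu * T))
```

Because the exact solution is used, the paths carry no time-discretization error; an Euler–Maruyama scheme applied to the SDE itself would only approximate them.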
The process for $X_t = \ln\frac{S_t}{S_0}$, satisfying the SDE

$$dX_t = \left(\mu - \frac{\sigma^2}{2}\right)dt + \sigma\,dW_t,$$
or more generally the process solving the SDE

$$dX_t = m\,dt + v\,dW_t,$$
where $m$ and $v > 0$ are real constants and for an initial condition $X_0$, is called an Arithmetic Brownian Motion (ABM). This was the model postulated by Louis Bachelier in 1900 for stock prices, in the first published attempt to model Brownian motion, known today as the Bachelier model. As was shown above, the ABM SDE can be obtained through the logarithm of a GBM via Itô's formula. Similarly, a GBM can be obtained by exponentiation of an ABM through Itô's formula.
The above solution $S_t$ (for any value of $t$) is a log-normally distributed random variable with expected value and variance given by[2]

$$\operatorname{E}[S_t] = S_0 e^{\mu t}, \qquad \operatorname{Var}(S_t) = S_0^2 e^{2\mu t}\left(e^{\sigma^2 t} - 1\right).$$
They can be derived using the fact that $Z_t = \exp\!\left(\sigma W_t - \tfrac{1}{2}\sigma^2 t\right)$ is a martingale, and that

$$\operatorname{E}\!\left[\exp\!\left(2\sigma W_t - \sigma^2 t\right)\mid \mathcal{F}_s\right] = e^{\sigma^2 (t-s)}\exp\!\left(2\sigma W_s - \sigma^2 s\right), \qquad 0 \le s < t.$$
The probability density function of $S_t$ is:

$$f_{S_t}(s;\mu,\sigma,t) = \frac{1}{s\,\sigma\sqrt{2\pi t}}\,\exp\!\left(-\frac{\left(\ln s - \ln S_0 - \left(\mu - \tfrac{\sigma^2}{2}\right)t\right)^2}{2\sigma^2 t}\right), \qquad s > 0.$$
To derive the probability density function for GBM, we must use the Fokker–Planck equation to evaluate the time evolution of the PDF:

$$\frac{\partial p}{\partial t} = -\frac{\partial}{\partial S}\left[\mu S\,p(S,t)\right] + \frac{1}{2}\sigma^2\frac{\partial^2}{\partial S^2}\left[S^2\,p(S,t)\right], \qquad p(S,0) = \delta(S - S_0),$$
where $\delta(S)$ is the Dirac delta function. To simplify the computation, we may introduce a logarithmic transform $x = \log(S/S_0)$, leading to the form of GBM:

$$dx = \left(\mu - \frac{\sigma^2}{2}\right)dt + \sigma\,dW_t.$$
Then the equivalent Fokker–Planck equation for the evolution of the PDF becomes:

$$\frac{\partial p}{\partial t} = -\left(\mu - \frac{\sigma^2}{2}\right)\frac{\partial p}{\partial x} + \frac{1}{2}\sigma^2\frac{\partial^2 p}{\partial x^2}, \qquad p(x,0) = \delta(x).$$
Define $V = \mu - \sigma^2/2$ and $D = \sigma^2/2$. By introducing the new variables $\xi = x - Vt$ and $\tau = Dt$, the derivatives in the Fokker–Planck equation may be transformed as:

$$\frac{\partial p}{\partial t} = D\frac{\partial p}{\partial \tau} - V\frac{\partial p}{\partial \xi}, \qquad \frac{\partial p}{\partial x} = \frac{\partial p}{\partial \xi}, \qquad \frac{\partial^2 p}{\partial x^2} = \frac{\partial^2 p}{\partial \xi^2}.$$
Leading to the new form of the Fokker–Planck equation:

$$\frac{\partial p}{\partial \tau} = \frac{\partial^2 p}{\partial \xi^2}.$$
However, this is the canonical form of the heat equation, which has the solution given by the heat kernel:

$$p(\xi,\tau) = \frac{1}{\sqrt{4\pi\tau}}\exp\!\left(-\frac{\xi^2}{4\tau}\right).$$
Plugging in the original variables leads to the PDF for GBM:

$$p(S,t) = \frac{1}{S\sqrt{2\pi\sigma^2 t}}\exp\!\left(-\frac{\left(\log(S/S_0) - \left(\mu - \tfrac{\sigma^2}{2}\right)t\right)^2}{2\sigma^2 t}\right).$$
When deriving further properties of GBM, use can be made of the SDE of which GBM is the solution, or the explicit solution given above can be used. For example, consider the stochastic process $\log(S_t)$. This is an interesting process, because in the Black–Scholes model it is related to the log return of the stock price. Using Itô's lemma with $f(S) = \log(S)$ gives

$$d\log(S_t) = \left(\mu - \frac{\sigma^2}{2}\right)dt + \sigma\,dW_t.$$
It follows that $\operatorname{E}[\log(S_t)] = \log(S_0) + \left(\mu - \sigma^2/2\right)t$.
This result can also be derived by applying the logarithm to the explicit solution of GBM:

$$\log(S_t) = \log(S_0) + \left(\mu - \frac{\sigma^2}{2}\right)t + \sigma W_t.$$
Taking the expectation yields the same result as above: $\operatorname{E}[\log(S_t)] = \log(S_0) + \left(\mu - \sigma^2/2\right)t$.
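The identity can also be checked numerically in a few lines; the parameter values below are arbitrary and serve only to compare the formula against a Monte Carlo estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, mu, sigma, t = 1.0, 0.1, 0.3, 2.0                # arbitrary illustrative values

W_t = rng.normal(0.0, np.sqrt(t), size=1_000_000)    # W_t ~ N(0, t)
log_S_t = np.log(S0) + (mu - 0.5 * sigma**2) * t + sigma * W_t

print("Monte Carlo E[log S_t]:", log_S_t.mean())
print("formula:               ", np.log(S0) + (mu - 0.5 * sigma**2) * t)
```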
GBM can be extended to the case where there are multiple correlated price paths.[3]
Each price path follows the underlying process

$$dS_t^i = \mu_i S_t^i\,dt + \sigma_i S_t^i\,dW_t^i,$$
where the Wiener processes are correlated such that $\operatorname{E}\!\left(dW_t^i\,dW_t^j\right) = \rho_{i,j}\,dt$, where $\rho_{i,i} = 1$.
For the multivariate case, this implies that

$$\operatorname{Cov}\!\left(S_t^i, S_t^j\right) = S_0^i S_0^j\, e^{(\mu_i + \mu_j)t}\left(e^{\rho_{i,j}\sigma_i\sigma_j t} - 1\right).$$
A multivariate formulation that maintains the driving Brownian motions $W_t^i$ independent is
where the correlation between $S_t^i$ and $S_t^j$ is now expressed through the $\sigma_{i,j} = \rho_{i,j}\,\sigma_i\,\sigma_j$ terms.
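One common way to generate the correlated Wiener increments of the first formulation is through a Cholesky factor of the correlation matrix. The sketch below simulates two correlated GBM paths; the drift, volatility, and correlation values are illustrative assumptions only.

```python
import numpy as np

# Two correlated GBM paths; correlation is injected through a Cholesky factor of rho.
rng = np.random.default_rng(7)
S0 = np.array([100.0, 50.0])
mu = np.array([0.05, 0.03])
sigma = np.array([0.2, 0.3])
rho = np.array([[1.0, 0.6],
                [0.6, 1.0]])             # assumed correlation of the driving Brownian motions

T, n_steps = 1.0, 252
dt = T / n_steps
L = np.linalg.cholesky(rho)

S = np.empty((n_steps + 1, 2))
S[0] = S0
for k in range(n_steps):
    z = L @ rng.standard_normal(2)       # correlated standard normal draws
    dW = np.sqrt(dt) * z                 # correlated Brownian increments
    S[k + 1] = S[k] * np.exp((mu - 0.5 * sigma**2) * dt + sigma * dW)

log_returns = np.diff(np.log(S), axis=0)
print("terminal prices:", S[-1])
print("sample correlation of log-returns:",
      np.corrcoef(log_returns[:, 0], log_returns[:, 1])[0, 1])
```

The sample correlation of the log-returns should be close to the assumed value of 0.6, up to Monte Carlo noise.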
Geometric Brownian motion is used to model stock prices in the Black–Scholes model and is the most widely used model of stock price behavior.[4]
Some of the arguments for using GBM to model stock prices are:
However, GBM is not a completely realistic model; in particular, it falls short of reality in the following points:
Apart from modeling stock prices, Geometric Brownian motion has also found applications in the monitoring of trading strategies.[5]
In an attempt to make GBM more realistic as a model for stock prices, also in relation to thevolatility smileproblem, one can drop the assumption that the volatility (σ{\displaystyle \sigma }) is constant. If we assume that the volatility is adeterministicfunction of the stock price and time, this is called alocal volatilitymodel. A straightforward extension of the Black Scholes GBM is a local volatility SDE whose distribution is a mixture of distributions of GBM, the lognormal mixture dynamics, resulting in a convex combination of Black Scholes prices for options.[3][6][7][8]If instead we assume that the volatility has a randomness of its own—often described by a different equation driven by a different Brownian Motion—the model is called astochastic volatilitymodel, see for example theHeston model.[9]
|
https://en.wikipedia.org/wiki/Geometric_Brownian_motion
|
Disk encryption software is computer security software that protects the confidentiality of data stored on computer media (e.g., a hard disk, floppy disk, or USB device) by using disk encryption.
Compared to access controls commonly enforced by anoperating system(OS), encryption passively protects data confidentiality even when the OS is not active, for example, if data is read directly from the hardware or by a different OS. In addition,crypto-shreddingsuppresses the need to erase the data at the end of the disk's lifecycle.
Disk encryption generally refers to wholesale encryption that operates on an entirevolumemostly transparently to the user, the system, and applications. This is generally distinguished from file-level encryption that operates by user invocation on a single file or group of files, and which requires the user to decide which specific files should be encrypted. Disk encryption usually includes all aspects of the disk, including directories, so that an adversary cannot determine content, name or size of any file. It is well suited to portable devices such aslaptop computersandthumb driveswhich are particularly susceptible to being lost or stolen. If used properly, someone finding a lost device cannot penetrate actual data, or even know what files might be present.
The disk's data is protected usingsymmetric cryptographywith the key randomly generated when a disk's encryption is first established. This key is itself encrypted in some way using a password or pass-phrase known (ideally) only to the user. Thereafter, in order to access the disk's data, the user must supply the password to make the key available to the software. This must be done sometime after each operating system start-up before the encrypted data can be used.
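A minimal sketch of this key arrangement is shown below, using Python's third-party `cryptography` package: a random volume key is wrapped (encrypted) under a key derived from the user's password, and unlocking the volume means re-deriving that key and unwrapping. The function names, KDF parameters, and header layout are illustrative assumptions; real disk encryption software uses its own header formats, key-derivation settings, and ciphers.

```python
import base64
import os

from cryptography.fernet import Fernet, InvalidToken
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _password_key(password: bytes, salt: bytes) -> bytes:
    """Derive a key-encryption key from the password (PBKDF2 with illustrative parameters)."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(password))

def create_volume_header(password: bytes) -> tuple[bytes, bytes]:
    """Generate a random volume key and store it wrapped under the password-derived key."""
    volume_key = os.urandom(32)              # the symmetric key that actually encrypts the data
    salt = os.urandom(16)
    wrapped_key = Fernet(_password_key(password, salt)).encrypt(volume_key)
    return salt, wrapped_key                 # these two values would live in the volume header

def unlock_volume(password: bytes, salt: bytes, wrapped_key: bytes) -> bytes:
    """Recover the volume key; raises InvalidToken if the password is wrong."""
    return Fernet(_password_key(password, salt)).decrypt(wrapped_key)

salt, wrapped = create_volume_header(b"correct horse battery staple")
assert unlock_volume(b"correct horse battery staple", salt, wrapped)
try:
    unlock_volume(b"wrong password", salt, wrapped)
except InvalidToken:
    print("wrong password: the volume key stays inaccessible")
```

The design point the sketch captures is that changing the password only requires re-wrapping the small volume key, not re-encrypting the whole disk.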
Done in software,encryptiontypically operates at a level between all applications and most system programs and the low-leveldevice driversby "transparently" (from a user's point of view) encrypting data after it is produced by a program but before it is physically written to the disk. Conversely, it decrypts data immediately after being read but before it is presented to a program. Properly done, programs are unaware of these cryptographic operations.
Some disk encryption software (e.g.,TrueCryptorBestCrypt) provide features that generally cannot be accomplished withdisk hardware encryption: the ability to mount "container" files as encrypted logical disks with their ownfile system; and encrypted logical "inner" volumes which are secretly hidden within the free space of the more obvious "outer" volumes. Such strategies provideplausible deniability.
Well-known examples of disk encryption software include: BitLocker for Windows; FileVault for Apple OS X; LUKS, a free-software standard mainly for Linux; and TrueCrypt, a non-commercial freeware application for Windows, OS X, and Linux.
Some disk encryption systems, such as VeraCrypt and CipherShed (active open-source forks of the discontinued TrueCrypt project) and BestCrypt (proprietary trialware), offer levels of plausible deniability, which might be useful if a user is compelled to reveal the password of an encrypted volume.
Hidden volumes are asteganographicfeature that allows a second, "hidden", volume to reside within the apparent free space of a visible "container" volume (sometimes known as "outer" volume). The hidden volume has its own separate file system, password, and encryption key distinct from the container volume.
The content of the hidden volume is encrypted and resides in the free space of the file system of the outer volume—space which would otherwise be filled with random values if the hidden volume did not exist. When the outer container is brought online through the disk encryption software, whether the inner or outer volume ismounteddepends on the password provided. If the "normal" password/key of the outer volume proves valid, the outer volume is mounted; if the password/key of the hidden volume proves valid, then (and only then) can the existence of the hidden volume even be detected, and it is mounted; otherwise if the password/key does not successfully decrypt either the inner or outer volume descriptors, then neither is mounted.
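The mount decision described above can be sketched as follows. Here try_decrypt_header is a hypothetical helper standing in for whatever header-decryption routine a real product uses, and the separate outer/hidden header locations are assumptions for illustration.

```python
def mount(container, password, try_decrypt_header):
    """Decide which volume (if any) to mount, mirroring the outer/hidden logic above.

    try_decrypt_header(blob, password) is a hypothetical helper that returns a volume
    descriptor if the password decrypts the header, or None otherwise.
    """
    outer = try_decrypt_header(container.outer_header, password)
    if outer is not None:
        return ("outer", outer)          # normal password: mount the outer volume
    hidden = try_decrypt_header(container.hidden_header, password)
    if hidden is not None:
        return ("hidden", hidden)        # hidden-volume password: only now is it detectable
    return None                          # wrong password: neither volume is mounted
```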
Once a hidden volume has been created inside the visible container volume, the user will store important-looking information (but which the user does not actually mind revealing) on the outer volume, whereas more sensitive information is stored within the hidden volume.
If the user is forced to reveal a password, the user can reveal the password to the outer volume, without disclosing the existence of the hidden volume. The hidden volume will not be compromised, if the user takes certain precautions in overwriting the free areas of the "host" disk.[2]
Volumes, be they stored in a file or a device/partition, may intentionally not contain any discernible "signatures" or unencrypted headers. As cipher algorithms are designed to be indistinguishable from apseudorandom permutationwithout knowing thekey, the presence of data on the encrypted volume is also undetectable unless there are known weaknesses in the cipher.[3]This means that it is impossible to prove that any file or partition is an encrypted volume (rather than random data) without having the password to mount it. This characteristic also makes it impossible to determine if a volume contains another hidden volume.
A file hosted volume (as opposed to partitions) may look out of place in some cases since it will be entirely random data placed in a file intentionally. However, a partition or device hosted volume will look no different from a partition or device that has been wiped with a common disk wiping tool such asDarik's Boot and Nuke. One can plausibly claim that such a device or partition has been wiped to clear personal data.
Portable or "traveller mode" means the encryption software can be run without installation to the system hard drive. In this mode, the software typically installs a temporarydriverfrom the portable media. Since it is installing a driver (albeit temporarily), administrative privileges are still required.
Some disk encryption software allows encrypted volumes to be resized. Few systems implement this fully; most resort to using "sparse files" to achieve it.[citation needed]
Encrypted volumes contain "header" (or "CDB") data, which may be backed up. Overwriting these data will destroy the volume, so the ability to back them up is useful.
Restoring the backup copy of these data may reset the volume's password to what it was when the backup was taken.
|
https://en.wikipedia.org/wiki/Disk_encryption_software
|
In graph theory, a d-interval hypergraph is a kind of hypergraph constructed using intervals of real lines. The parameter d is a positive integer. The vertices of a d-interval hypergraph are the points of d disjoint lines (thus there are uncountably many vertices). The edges of the graph are d-tuples of intervals, one interval in every real line.[1]
The simplest case isd= 1. The vertex set of a 1-interval hypergraph is the set of real numbers; each edge in such a hypergraph is an interval of the real line. For example, theset{ [−2, −1], [0, 5], [3, 7] }defines a 1-interval hypergraph. Note the difference from aninterval graph: in an interval graph, the vertices are the intervals (afinite set); in a 1-interval hypergraph, the vertices are all points in the real line (anuncountable set).
As another example, in a 2-interval hypergraph, the vertex set is thedisjoint unionof two real lines, and each edge is a union of two intervals: one in line #1 and one in line #2.
The following two concepts are defined for d-interval hypergraphs just as for finite hypergraphs: a matching is a set of pairwise-disjoint edges, and ν(H) denotes the largest size of a matching in H; a cover (or transversal) is a set of vertices that intersects every edge, and τ(H) denotes the smallest size of a cover of H.
ν(H) ≤ τ(H) is true for any hypergraph H.
Tibor Gallaiproved that, in a 1-interval hypergraph, they are equal:τ(H) =ν(H). This is analogous toKőnig's theoremforbipartite graphs.
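For the 1-interval case, both quantities can be computed by the classic greedy sweep over right endpoints; the sketch below is an assumed implementation (not from the source) that returns a piercing set whose size equals the maximum number of pairwise-disjoint intervals, illustrating Gallai's equality.

```python
def greedy_piercing(intervals):
    """Minimum set of points stabbing every closed interval on one line.

    Sorting by right endpoint and piercing greedily yields tau(H) points, and the
    intervals that trigger a new point form a matching of the same size,
    so tau(H) = nu(H) for 1-interval hypergraphs.
    """
    points = []
    last_point = None
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if last_point is None or left > last_point:
            last_point = right        # pierce at the right endpoint
            points.append(right)
    return points

print(greedy_piercing([(-2, -1), (0, 5), (3, 7)]))  # -> [-1, 5]: two points suffice
```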
Gabor Tardos[1]proved that, in a 2-interval hypergraph,τ(H) ≤ 2ν(H), and it is tight (i.e., every 2-interval hypergraph with a matching of sizem, can be covered by2mpoints).
Kaiser[2] proved that, in a d-interval hypergraph, τ(H) ≤ d(d − 1)ν(H), and moreover, every d-interval hypergraph with a matching of size m can be covered by d(d − 1)m points, (d − 1)m points on each line.
Frick and Zerbib[3]proved a colorful ("rainbow") version of this theorem.
|
https://en.wikipedia.org/wiki/D-interval_hypergraph
|
Greetingis an act ofcommunicationin which human beings intentionally make their presence known to each other, to show attention to, and to suggest a type of relationship (usually cordial) orsocial status(formal or informal) between individuals or groups of people coming in contact with each other. Greetings are sometimes used just prior to aconversationor to greet in passing, such as on a sidewalk or trail. While greetingcustomsare highlyculture- and situation-specific and may change within a culture depending on social status and relationship, they exist in all known human cultures. Greetings can be expressed both audibly and physically, and often involve a combination of the two. This topic excludes military and ceremonialsalutesbut includes rituals other thangestures. A greeting, orsalutation, can also be expressed in written communications, such as letters and emails.
Some epochs and cultures have had very elaborate greeting rituals, e.g. greeting a sovereign. Conversely, secret societies often have furtive or arcane greeting gestures and rituals, such as a secret handshake, which allow members to recognize each other.
In some languages and cultures, the word or gesture is used as both greeting andfarewell.
A greeting can consist of an exchange of formal expression, kisses, handshakes, hugs, and various gestures. The form of greeting is determined by social etiquette, as well as by the relationship of the people.
The formal greeting may involve a verbal acknowledgment and sometimes a handshake, but beyond that, facial expression, gestures, body language, and eye contact can all signal what type of greeting is expected.[1]Gestures are the most obvious signal, for instance, greeting someone with open arms is generally a sign that a hug is expected.[2]However, crossing arms can be interpreted as a sign of hostility. The facial expression, body language, and eye contact reflect emotions and interest level. A frown, slouching and lowered eye contact suggests disinterest, while smiling and an exuberant attitude is a sign of welcome.
Many different gestures are used throughout the world as simple greetings. In Western cultures, thehandshakeis very common, though it has numerous subtle variations in the strength of grip, the vigour of the shake, the dominant position of one hand over the other, and whether or not the left hand is used.
Historically, when men normally wore hats out of doors, male greetings to people they knew, and sometimes those they did not, involved touching, raising slightly ("tipping"), or removing their hat in a variety of gestures. This basic gesture remained normal in very many situations from the Middle Ages until men typically ceased wearing hats in the mid-20th century. Hat-raising began with an element of recognition of superiority, where only the socially inferior party might perform it, but gradually lost this element; KingLouis XIV of Francemade a point of at least touching his hat to all women he encountered. However, the gesture was never used by women, for whom their head-covering included considerations of modesty. When a man was not wearing a hat he might touch his hair to the side of the front of his head to replicate a hat-tipping gesture. This was typically performed by lower classmen to social superiors, such as peasants to the land-owner, and is known as "tugging the forelock", which still sometimes occurs as a metaphor for submissive behaviour.
The Arabic termsalaam(literally "peace", from the spoken greeting that accompanies the gesture), refers to the practice of placing the right palm on the heart, before and after a handshake.
In Moroccan society, people of the same sex do not greet each other the same way as people of the opposite sex do. While people of the same sex (men or women) will shake hands, kiss on the cheek and even hug multiple times, a man and woman greeting each other in public will not go further than a handshake. This is due to Moroccan culture being conservative. Verbal greetings in Morocco can go from a basic salaam to asking about life details to make sure the other person is doing well.[3] In the kingdom of Morocco, the greeting should always be made with the right hand, as the left hand is traditionally considered unclean.
The most common Chinese greeting, Gongshou, features the right fist placed in the palm of the left hand and both shaken back and forth two or three times; it may be accompanied by a head nod or bow. The gesture may be used on meeting and parting, and when congratulating, thanking, or apologizing.
In India, it is common to see theNamastegreeting (or "Sat Sri Akal" forSikhs) where the palms of the hands are pressed together and held near the heart with the head gently bowed.
Among Christians in certain parts of the world such asPoland, the greeting phrase "Praise the Lord" has had common usage, especially in the pre-World War IIera.[4][5][6]
Adab, meaning respect and politeness, is a hand gesture used as a secular greeting in South Asia, especially of Urdu-speaking communities ofUttar Pradesh,Hyderabad, andBengalin India, as well as among theMuhajir peopleof Pakistan.[7]The gesture involves raising the right hand towards the face with palm inwards such that it is in front of the eyes and the fingertips are almost touching the forehead, as the upper torso is bent forward.[8]It is typical for the person to say "adab arz hai", or just "adab". It is often answered with the same or the word "Tasleem" is said as an answer or sometimes it is answered with a facial gesture of acceptance.
In Indonesia, a nation with a huge variety of cultures and religions, many greetings are expressed, from the formalized greeting of the highly stratified and hierarchicalJavaneseto the more egalitarian and practical greetings of outer islands.
Javanese,Batakand other ethnicities currently or formerly involved in the armed forces will salute a government-employed superior, and follow with a deep bow from the waist or short nod of the head and a passing, loose handshake. Hand position is highly important; the superior's hand must be higher than the inferior's. Muslim men will clasp both hands, palms together at the chest and utter the correct Islamicslametan(greeting) phrase, which may be followed by cheek-to-cheek contact, a quick hug or loose handshake. Pious Muslim women rotate their hands from a vertical to the perpendicular prayer-like position in order to barely touch the fingertips of the male greeter and may opt-out of the cheek-to-cheek contact.
If the male is anAbdi Dalemroyal servant, courtier or particularly "peko-peko" (taken directly from Japanese to mean obsequious) or even a highly formal individual, he will retreat backwards with head downcast, the left arm crossed against the chest and the right arm hanging down, never showing his side or back to his superior. His head must always be lower than that of his superior. Younger Muslim males and females will clasp their elder's or superior's outstretched hand to the forehead as a sign of respect and obeisance.
If a manual worker or a person with obviously dirty hands salutes or greets an elder or superior, he will show deference to his superior and avoid contact by bowing, touching the right forehead in a very quick salute, or giving a distant "slamet" gesture.
The traditional JavaneseSungkeminvolves clasping the palms of both hands together, aligning the thumbs with the nose, turning the head downwards and bowing deeply, bending from the knees. In a royal presence, the one performingsungkemwould kneel at the base of the throne.
A gesture called awaiis used in Thailand, where the hands are placed together palm to palm, approximately at nose level, while bowing. Thewaiis similar in form to the gesture referred to by the Japanese termgasshoby Buddhists. In Thailand, the men and women would usually press two palms together and bow a little while saying "Sawadee ka" (female speaker) or "Sawadee krap" (male speaker).
In Europe, the formal style of upper-class greeting used by a man to a woman in theEarly Modern Periodwas to hold the woman's presented hand (usually the right) with his right hand and kiss it while bowing. In cases of a low degree of intimacy, the hand is held but not kissed. The ultra-formal style, with the man's right knee on the floor, is now only used in marriage proposals, as a romantic gesture.
Cheek kissingis common in Europe, parts of Canada (Quebec) and Latin America and has become a standard greeting mainly in Southern Europe but also in some Central European countries.
While cheek kissing is a common greeting in many cultures, each country has a unique way of kissing. In Russia, Poland, Slovenia, Serbia, Macedonia, Montenegro, the Netherlands, Iran and Egypt it is customary to "kiss three times, on alternate cheeks".[9]Italians, Spanish, Hungarian, Romanians, Bosnia-and-Herzegovinans usually kiss twice in a greeting and in Mexico and Belgium only one kiss is necessary. In the Galapagos women kiss on the right cheek only[10]and in Oman, it is not unusual for men to kiss one another on the nose after a handshake.[11]French culture accepts a number of ways to greet depending on the region. Two kisses are most common throughout all of France but inProvencethree kisses are given and in Nantes four are exchanged.[12]However, in Finistère at the western tip of Brittany and Deux-Sèvres in the Poitou-Charentes region, one kiss is preferred.[13][14]
A spoken greeting or verbal greeting is acustomaryorritualisedword or phrase used to introduce oneself or to greet someone. Greeting habits are highly culture- and situation-specific and may change within a culture depending on social status.
As with gestures, some languages and cultures use the same word as both greeting andfarewell. Examples ofcolexifiedgreetings are "Good day" in English, "Drud" inPersian, "Sat Shri Akaal" inPunjabi, "As-salamu alaykum" inArabic, "Aloha" inHawaiian, "Shalom" inHebrew, "Namaste" inHindi, "Ayubowan" inSri Lanka, "Sawatdi" inThaiand "Ciao" inItalian.
In English, some common verbal greetings include "hello", "hi", "hey", "good morning", "good afternoon", and "good evening".
Voicemail greetings are pre-recorded messages that are automatically played to callers when theanswering machineorvoicemailsystem answers the call. Some systems allow for different greetings to be played to different callers.
In ruralBurundi, familiar women greet each other in a complex interlocking vocal rhythm calledakazehe, regardless of the meeting's contextual occasion or time.[15]
|
https://en.wikipedia.org/wiki/Greeting
|
In themathematicaldiscipline oflinear algebra, amatrix decompositionormatrix factorizationis afactorizationof amatrixinto a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems.
Innumerical analysis, different decompositions are used to implement efficient matrixalgorithms.
For example, when solving a system of linear equations $A\mathbf{x} = \mathbf{b}$, the matrix A can be decomposed via the LU decomposition. The LU decomposition factorizes a matrix into a lower triangular matrix L and an upper triangular matrix U. The systems $L(U\mathbf{x}) = \mathbf{b}$ and $U\mathbf{x} = L^{-1}\mathbf{b}$ require fewer additions and multiplications to solve, compared with the original system $A\mathbf{x} = \mathbf{b}$, though one might require significantly more digits in inexact arithmetic such as floating point.
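A minimal sketch of solving Ax = b via an LU factorization with SciPy; the matrix and right-hand side below are made-up values.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
b = np.array([10.0, 12.0])

lu, piv = lu_factor(A)      # factorize once: P A = L U
x = lu_solve((lu, piv), b)  # forward substitution, then back substitution

print(np.allclose(A @ x, b))  # True
```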
Similarly, the QR decomposition expresses A as QR with Q an orthogonal matrix and R an upper triangular matrix. The system $Q(R\mathbf{x}) = \mathbf{b}$ is solved by $R\mathbf{x} = Q^{T}\mathbf{b} = \mathbf{c}$, and the system $R\mathbf{x} = \mathbf{c}$ is solved by 'back substitution'. The number of additions and multiplications required is about twice that of using the LU solver, but no more digits are required in inexact arithmetic because the QR decomposition is numerically stable.
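Likewise, a sketch of a linear solve via QR, following the Rx = Qᵀb recipe above; the data are made up.

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])

Q, R = np.linalg.qr(A)          # A = Q R, with Q orthogonal and R upper triangular
c = Q.T @ b                     # reduce to R x = Q^T b
x = solve_triangular(R, c)      # back substitution (R is upper triangular)

print(np.allclose(A @ x, b))    # True
```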
The Jordan normal form and the Jordan–Chevalley decomposition are examples of decompositions based on eigenvalues and related concepts.
Scale-invariant decompositions are variants of existing matrix decompositions, such as the SVD, that are invariant with respect to diagonal scaling.
Analogous scale-invariant decompositions can be derived from other matrix decompositions; for example, to obtain scale-invariant eigenvalues.[3][4]
There exist analogues of the SVD, QR, LU and Cholesky factorizations for quasimatrices and cmatrices or continuous matrices.[13] A 'quasimatrix' is, like a matrix, a rectangular scheme whose elements are indexed, but one discrete index is replaced by a continuous index. Likewise, a 'cmatrix' is continuous in both indices. As an example of a cmatrix, one can think of the kernel of an integral operator.
These factorizations are based on early work byFredholm (1903),Hilbert (1904)andSchmidt (1907). For an account, and a translation to English of the seminal papers, seeStewart (2011).
|
https://en.wikipedia.org/wiki/Matrix_factorization
|
Agraphical modelorprobabilistic graphical model(PGM) orstructured probabilistic modelis aprobabilistic modelfor which agraphexpresses theconditional dependencestructure betweenrandom variables. Graphical models are commonly used inprobability theory,statistics—particularlyBayesian statistics—andmachine learning.
Generally, probabilistic graphical models use a graph-based representation as the foundation for encoding a distribution over a multi-dimensional space and a graph that is a compact orfactorizedrepresentation of a set of independences that hold in the specific distribution. Two branches of graphical representations of distributions are commonly used, namely,Bayesian networksandMarkov random fields. Both families encompass the properties of factorization and independences, but they differ in the set of independences they can encode and the factorization of the distribution that they induce.[1]
Consider an undirected graph in which each of B, C, and D is joined to A by an edge (and to no other vertex). Such a graph may have one of several interpretations; the common feature is that the presence of an edge implies some sort of dependence between the corresponding random variables. From this graph, we might deduce that B, C, and D are all conditionally independent given A. This means that if the value of A is known, then the values of B, C, and D provide no further information about each other. Equivalently (in this case), the joint probability distribution can be factorized as
$P[A, B, C, D] = f_{AB}(A, B)\, f_{AC}(A, C)\, f_{AD}(A, D)$
for some non-negative functions $f_{AB}, f_{AC}, f_{AD}$.
If the network structure of the model is a directed acyclic graph, the model represents a factorization of the joint probability of all random variables. More precisely, if the events are $X_1, \ldots, X_n$, then the joint probability satisfies
$P[X_1, \ldots, X_n] = \prod_{i=1}^{n} P[X_i \mid \mathrm{pa}(X_i)]$
where $\mathrm{pa}(X_i)$ is the set of parents of node $X_i$ (nodes with edges directed towards $X_i$). In other words, the joint distribution factors into a product of conditional distributions. For example, for a simple chain $X_1 \to X_2 \to X_3$, this factorization would be $P[X_1, X_2, X_3] = P[X_1]\, P[X_2 \mid X_1]\, P[X_3 \mid X_2]$.
Any two nodes areconditionally independentgiven the values of their parents. In general, any two sets of nodes are conditionally independent given a third set if a criterion calledd-separationholds in the graph. Local independences and global independences are equivalent in Bayesian networks.
This type of graphical model is known as a directed graphical model,Bayesian network, or belief network. Classic machine learning models likehidden Markov models,neural networksand newer models such asvariable-order Markov modelscan be considered special cases of Bayesian networks.
One of the simplest Bayesian Networks is theNaive Bayes classifier.
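As a small illustration of the directed factorization, consider a naive Bayes structure C → (X₁, X₂): the joint is P(C)·P(X₁|C)·P(X₂|C), and the class posterior is obtained by normalizing over C. The probability tables in the sketch below are made-up numbers, not from the source.

```python
# Hypothetical conditional probability tables for a two-feature naive Bayes network.
p_c = {"spam": 0.3, "ham": 0.7}
p_x1_given_c = {"spam": {"link": 0.8, "nolink": 0.2}, "ham": {"link": 0.1, "nolink": 0.9}}
p_x2_given_c = {"spam": {"caps": 0.6, "nocaps": 0.4}, "ham": {"caps": 0.2, "nocaps": 0.8}}

def posterior(x1, x2):
    # The joint factorizes as P(C) * P(X1|C) * P(X2|C); normalize to get P(C | X1, X2).
    joint = {c: p_c[c] * p_x1_given_c[c][x1] * p_x2_given_c[c][x2] for c in p_c}
    z = sum(joint.values())
    return {c: v / z for c, v in joint.items()}

print(posterior("link", "caps"))  # roughly {'spam': 0.91, 'ham': 0.09}
```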
A graphical model may also contain a cycle. Such a model may be interpreted in terms of each variable 'depending' on the values of its parents in some manner, suggesting a joint probability density that factors into one term per variable conditioned on its parents, but other interpretations are possible.[2]
The framework provides algorithms for discovering and analyzing structure in complex distributions, describing them succinctly, and extracting unstructured information, which allows graphical models to be constructed and utilized effectively.[1] Applications of graphical models include causal inference, information extraction, speech recognition, computer vision, decoding of low-density parity-check codes, modeling of gene regulatory networks, gene finding and diagnosis of diseases, and graphical models for protein structure.
|
https://en.wikipedia.org/wiki/Graphical_model
|
Incomputing,quadruple precision(orquad precision) is a binaryfloating-point–basedcomputer number formatthat occupies 16 bytes (128 bits) with precision at least twice the 53-bitdouble precision.
This 128-bit quadruple precision is designed not only for applications requiring results in higher than double precision,[1]but also, as a primary function, to allow the computation of double precision results more reliably and accurately by minimising overflow andround-off errorsin intermediate calculations and scratch variables.William Kahan, primary architect of the original IEEE 754 floating-point standard noted, "For now the10-byte Extended formatis a tolerable compromise between the value of extra-precise arithmetic and the price of implementing it to run fast; very soon two more bytes of precision will become tolerable, and ultimately a 16-byte format ... That kind of gradual evolution towards wider precision was already in view whenIEEE Standard 754 for Floating-Point Arithmeticwas framed."[2]
InIEEE 754-2008the 128-bit base-2 format is officially referred to asbinary128.
The IEEE 754 standard specifies a binary128 as having: a sign bit (1 bit), an exponent field 15 bits wide, and a significand with 113 bits of precision, of which 112 bits are explicitly stored.
The sign bit determines the sign of the number (including when this number is zero, which issigned). "1" stands for negative.
This gives from 33 to 36 significant decimal digits precision. If a decimal string with at most 33 significant digits is converted to the IEEE 754 quadruple-precision format, giving a normal number, and then converted back to a decimal string with the same number of digits, the final result should match the original string. If an IEEE 754 quadruple-precision number is converted to a decimal string with at least 36 significant digits, and then converted back to quadruple-precision representation, the final result must match the original number.[3]
The format is written with an implicit lead bit with value 1 unless the exponent is stored with all zeros (used to encode subnormal numbers and zeros). Thus only 112 bits of the significand appear in the memory format, but the total precision is 113 bits (approximately 34 decimal digits: log10(2^113) ≈ 34.016) for normal values; subnormals have gracefully degrading precision down to 1 bit for the smallest non-zero value. The bits are laid out, from most significant to least significant, as 1 sign bit, 15 exponent bits, and 112 significand bits.
The quadruple-precision binary floating-point exponent is encoded using anoffset binaryrepresentation, with the zero offset being 16383; this is also known as exponent bias in the IEEE 754 standard.
Thus, as defined by the offset binary representation, in order to get the true exponent, the offset of 16383 has to be subtracted from the stored exponent.
The stored exponents 0000₁₆ and 7FFF₁₆ are interpreted specially.
The minimum strictly positive (subnormal) value is 2^−16494 ≈ 10^−4965 and has a precision of only one bit. The minimum positive normal value is 2^−16382 ≈ 3.3621 × 10^−4932 and has a precision of 113 bits, i.e. ±2^−16494 as well. The maximum representable value is 2^16384 − 2^16271 ≈ 1.1897 × 10^4932.
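Since most Python builds have no native binary128 type, the layout above can still be illustrated by decoding a 128-bit pattern by hand. The sketch below is an assumed helper (not from the source) that extracts the sign, biased exponent, and significand fields and reconstructs the exact value; the example bit pattern encodes +1.0.

```python
from fractions import Fraction

def decode_binary128(hex_str):
    """Decode an IEEE 754 binary128 bit pattern given as 32 hex digits."""
    bits = int(hex_str, 16)
    sign = bits >> 127
    exponent = (bits >> 112) & 0x7FFF          # 15-bit biased exponent
    fraction = bits & ((1 << 112) - 1)         # 112 explicitly stored significand bits
    if exponent == 0x7FFF:                     # special values: infinities and NaNs
        return float("nan") if fraction else (-1) ** sign * float("inf")
    if exponent == 0:                          # subnormal: no implicit leading 1
        significand = Fraction(fraction, 1 << 112)
        scale = Fraction(2) ** (1 - 16383)
    else:                                      # normal: implicit leading 1, bias 16383
        significand = 1 + Fraction(fraction, 1 << 112)
        scale = Fraction(2) ** (exponent - 16383)
    return (-1) ** sign * significand * scale

print(decode_binary128("3FFF" + "0" * 28))     # 1.0: sign 0, biased exponent 16383, fraction 0
```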
These examples are given in bitrepresentation, inhexadecimal, of the floating-point value. This includes the sign, (biased) exponent, and significand.
By default, 1/3 rounds down like double precision, because of the odd number of bits in the significand. Thus the bits beyond the rounding point are 0101..., which is less than 1/2 of a unit in the last place.
A common software technique to implement nearly quadruple precision usingpairsofdouble-precisionvalues is sometimes calleddouble-double arithmetic.[4][5][6]Using pairs of IEEE double-precision values with 53-bit significands, double-double arithmetic provides operations on numbers with significands of at least[4]2 × 53 = 106 bits(actually 107 bits[7]except for some of the largest values, due to the limited exponent range), only slightly less precise than the 113-bit significand of IEEE binary128 quadruple precision. The range of a double-double remains essentially the same as the double-precision format because the exponent has still 11 bits,[4]significantly lower than the 15-bit exponent of IEEE quadruple precision (a range of1.8 × 10308for double-double versus1.2 × 104932for binary128).
In particular, a double-double/quadruple-precision valueqin the double-double technique is represented implicitly as a sumq=x+yof two double-precision valuesxandy, each of which supplies half ofq's significand.[5]That is, the pair(x,y)is stored in place ofq, and operations onqvalues(+, −, ×, ...)are transformed into equivalent (but more complicated) operations on thexandyvalues. Thus, arithmetic in this technique reduces to a sequence of double-precision operations; since double-precision arithmetic is commonly implemented in hardware, double-double arithmetic is typically substantially faster than more generalarbitrary-precision arithmetictechniques.[4][5]
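A minimal sketch of the error-free transformations underlying double-double arithmetic (Knuth's two-sum plus a simple double-double addition); the variable names and the renormalization step follow the usual textbook presentation rather than any particular library.

```python
def two_sum(a, b):
    """Return (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

def dd_add(x, y):
    """Add two double-double numbers x = (x_hi, x_lo) and y = (y_hi, y_lo)."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    hi, lo = two_sum(s, e)          # renormalize so |lo| is below half an ulp of hi
    return hi, lo

# 1 + 2**-60 is not representable in a single double, but survives as a double-double:
hi, lo = dd_add((1.0, 0.0), (2.0**-60, 0.0))
print(hi, lo)                        # 1.0 8.673617379884035e-19
```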
Note that double-double arithmetic has the following special characteristics:[8]
In addition to the double-double arithmetic, it is also possible to generate triple-double or quad-double arithmetic if higher precision is required without any higher precision floating-point library. They are represented as a sum of three (or four) double-precision values respectively. They can represent operations with at least 159/161 and 212/215 bits respectively.
A similar technique can be used to produce adouble-quad arithmetic, which is represented as a sum of two quadruple-precision values. They can represent operations with at least 226 (or 227) bits.[9]
Quadruple precision is often implemented in software by a variety of techniques (such as the double-double technique above, although that technique does not implement IEEE quadruple precision), since direct hardware support for quadruple precision is, as of 2016[update], less common (see "Hardware support" below). One can use generalarbitrary-precision arithmeticlibraries to obtain quadruple (or higher) precision, but specialized quadruple-precision implementations may achieve higher performance.
A separate question is the extent to which quadruple-precision types are directly incorporated into computerprogramming languages.
Quadruple precision is specified inFortranby thereal(real128)(moduleiso_fortran_envfrom Fortran 2008 must be used, the constantreal128is equal to 16 on most processors), or asreal(selected_real_kind(33, 4931)), or in a non-standard way asREAL*16. (Quadruple-precisionREAL*16is supported by theIntel Fortran Compiler[10]and by theGNU Fortrancompiler[11]onx86,x86-64, andItaniumarchitectures, for example.)
For theC programming language, ISO/IEC TS 18661-3 (floating-point extensions for C, interchange and extended types) specifies_Float128as the type implementing the IEEE 754 quadruple-precision format (binary128).[12]Alternatively, inC/C++with a few systems and compilers, quadruple precision may be specified by thelong doubletype, but this is not required by the language (which only requireslong doubleto be at least as precise asdouble), nor is it common.
On x86 and x86-64, the most common C/C++ compilers implementlong doubleas either 80-bitextended precision(e.g. theGNU C Compilergcc[13]and theIntel C++ Compilerwith a/Qlong‑doubleswitch[14]) or simply as being synonymous with double precision (e.g.Microsoft Visual C++[15]), rather than as quadruple precision. The procedure call standard for theARM 64-bit architecture(AArch64) specifies thatlong doublecorresponds to the IEEE 754 quadruple-precision format.[16]On a few other architectures, some C/C++ compilers implementlong doubleas quadruple precision, e.g. gcc onPowerPC(as double-double[17][18][19]) andSPARC,[20]or theSun Studio compilerson SPARC.[21]Even iflong doubleis not quadruple precision, however, some C/C++ compilers provide a nonstandard quadruple-precision type as an extension. For example, gcc provides a quadruple-precision type called__float128for x86, x86-64 andItaniumCPUs,[22]and onPowerPCas IEEE 128-bit floating-point using the -mfloat128-hardware or -mfloat128 options;[23]and some versions of Intel's C/C++ compiler for x86 and x86-64 supply a nonstandard quadruple-precision type called_Quad.[24]
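Whether a given toolchain's long double is 80-bit extended, double-double, or true binary128 can be probed at run time. A small sketch using NumPy's longdouble (which wraps the platform's C long double) prints the available precision; the reported significand width is whatever the local compiler chose, so the comment below is only the typical x86/gcc outcome.

```python
import numpy as np

info = np.finfo(np.longdouble)
# On x86 with gcc this typically reports nmant=63 (80-bit extended), not the 112 bits of binary128.
print("bits in stored significand (excluding the implicit bit):", info.nmant)
print("machine epsilon:", info.eps)
print("largest finite value:", info.max)
```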
Zigprovides support for it with itsf128type.[25]
Google's work-in-progress languageCarbonprovides support for it with the type calledf128.[26]
As of 2024,Rustis currently working on adding a newf128type for IEEE quadruple-precision 128-bit floats.[27]
IEEE quadruple precision was added to theIBM System/390G5 in 1998,[32]and is supported in hardware in subsequentz/Architectureprocessors.[33][34]The IBMPOWER9CPU (Power ISA 3.0) has native 128-bit hardware support.[23]
Native support of IEEE 128-bit floats is defined inPA-RISC1.0,[35]and inSPARCV8[36]and V9[37]architectures (e.g. there are 16 quad-precision registers %q0, %q4, ...), but no SPARC CPU implements quad-precision operations in hardware as of 2004[update].[38]
Non-IEEE extended-precision(128 bits of storage, 1 sign bit, 7 exponent bits, 112 fraction bits, 8 bits unused) was added to theIBM System/370series (1970s–1980s) and was available on someSystem/360models in the 1960s (System/360-85,[39]-195, and others by special request or simulated by OS software).
TheSiemens7.700 and 7.500 series mainframes and their successors support the same floating-point formats and instructions as the IBM System/360 and System/370.
The VAX processor implemented non-IEEE quadruple-precision floating point as its "H Floating-point" format. It had one sign bit, a 15-bit exponent, and 112 fraction bits; however, the layout in memory was significantly different from IEEE quadruple precision and the exponent bias also differed. Only a few of the earliest VAX processors implemented H Floating-point instructions in hardware; all the others emulated H Floating-point in software.
TheNEC Vector Enginearchitecture supports adding, subtracting, multiplying and comparing 128-bit binary IEEE 754 quadruple-precision numbers.[40]Two neighboring 64-bit registers are used. Quadruple-precision arithmetic is not supported in the vector register.[41]
TheRISC-Varchitecture specifies a "Q" (quad-precision) extension for 128-bit binary IEEE 754-2008 floating-point arithmetic.[42]The "L" extension (not yet certified) will specify 64-bit and 128-bit decimal floating point.[43]
Quadruple-precision (128-bit) hardware implementation should not be confused with "128-bit FPUs" that implementSIMDinstructions, such asStreaming SIMD ExtensionsorAltiVec, which refers to 128-bitvectorsof four 32-bit single-precision or two 64-bit double-precision values that are operated on simultaneously.
|
https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format
|
This is a list ofscientific phenomenaand concepts named after people(eponymous phenomena). For other lists of eponyms, seeeponym.
|
https://en.wikipedia.org/wiki/Scientific_phenomena_named_after_people
|
TheFermat primality testis aprobabilistictest to determine whether a number is aprobable prime.
Fermat's little theorem states that if p is prime and a is not divisible by p, then a^(p−1) ≡ 1 (mod p).
If we want to test whether p is prime, then we can pick random integers a not divisible by p and see whether the congruence holds. If it does not hold for a value of a, then p is composite. This congruence is unlikely to hold for a random a if p is composite.[1] Therefore, if the equality does hold for one or more values of a, then we say that p is probably prime.
However, note that the above congruence holds trivially for a ≡ 1 (mod p), because the congruence relation is compatible with exponentiation. It also holds trivially for a ≡ −1 (mod p) if p is odd, for the same reason. That is why one usually chooses a random a in the interval 1 < a < p − 1.
Any a such that a^(n−1) ≡ 1 (mod n) when n is composite is known as a Fermat liar. In this case n is called a Fermat pseudoprime to base a.
If we do pick an a such that a^(n−1) ≢ 1 (mod n), then a is known as a Fermat witness for the compositeness of n.
Suppose we wish to determine whether n = 221 is prime. Randomly pick 1 < a < 220, say a = 38. We check the above congruence and find that it holds: 38^220 ≡ 1 (mod 221).
Either 221 is prime, or 38 is a Fermat liar, so we take another a, say 24: 24^220 ≡ 81 ≢ 1 (mod 221).
So 221 is composite and 38 was indeed a Fermat liar. Furthermore, 24 is a Fermat witness for the compositeness of 221.
The algorithm can be written as follows:
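A minimal Python rendering of the test (a paraphrase of the usual pseudocode, not the source's exact wording); k is the number of random bases tried.

```python
import random

def fermat_is_probable_prime(n, k=10):
    """Repeat the Fermat test with k random bases; return False only if a witness is found."""
    if n <= 3:
        return n in (2, 3)
    for _ in range(k):
        a = random.randint(2, n - 2)          # avoid the trivial bases 1 and n - 1
        if pow(a, n - 1, n) != 1:             # a is a Fermat witness: n is composite
            return False
    return True                               # probably prime (or every base was a Fermat liar)

print(fermat_is_probable_prime(221))          # usually False: 221 = 13 * 17
print(pow(38, 220, 221), pow(24, 220, 221))   # 1 81, matching the worked example above
```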
Theavalues 1 andn− 1 are not used as the equality holds for allnand all oddnrespectively, hence testing them adds no value.
Using fast algorithms for modular exponentiation and multiprecision multiplication, the running time of this algorithm is O(k log²n · log log n) = Õ(k log²n), where k is the number of times we test a random a, and n is the value we want to test for primality; see Miller–Rabin primality test for details.
There are infinitely many Fermat pseudoprimes to any given base a > 1.[1]: Theorem 1 Even worse, there are infinitely many Carmichael numbers.[2] These are numbers n for which all values of a with gcd(a, n) = 1 are Fermat liars. For these numbers, repeated application of the Fermat primality test performs the same as a simple random search for factors. While Carmichael numbers are substantially rarer than prime numbers (Erdős' upper bound for the number of Carmichael numbers[3] is lower than the prime-counting estimate n/log(n)), there are enough of them that Fermat's primality test is not often used in the above form. Instead, other more powerful extensions of the Fermat test, such as Baillie–PSW, Miller–Rabin, and Solovay–Strassen, are more commonly used.
In general, if n is a composite number that is not a Carmichael number, then at least half of all a with gcd(a, n) = 1 are Fermat witnesses. For proof of this, let a be a Fermat witness and a_1, a_2, ..., a_s be Fermat liars. Then
(a · a_i)^(n−1) ≡ a^(n−1) · a_i^(n−1) ≡ a^(n−1) ≢ 1 (mod n),
and so all a · a_i for i = 1, 2, ..., s are Fermat witnesses. Since a is invertible modulo n, the products a · a_i are pairwise distinct modulo n, so the witnesses are at least as numerous as the liars.
As mentioned above, most applications use aMiller–RabinorBaillie–PSWtest for primality. Sometimes a Fermat test (along with some trial division by small primes) is performed first to improve performance.GMPsince version 3.0 uses a base-210 Fermat test after trial division and before running Miller–Rabin tests.Libgcryptuses a similar process with base 2 for the Fermat test, butOpenSSLdoes not.
In practice with most big number libraries such as GMP, the Fermat test is not noticeably faster than a Miller–Rabin test, and can be slower for many inputs.[4]
As an exception, OpenPFGW uses only the Fermat test for probable prime testing. The program is typically used with multi-thousand digit inputs with a goal of maximum speed with very large inputs. Another well known program that relies only on the Fermat test isPGPwhere it is only used for testing of self-generated large random values (an open source counterpart,GNU Privacy Guard, uses a Fermat pretest followed by Miller–Rabin tests).
|
https://en.wikipedia.org/wiki/Fermat_primality_test
|
Hypercube-based NEAT, or HyperNEAT,[1] is a generative encoding that evolves artificial neural networks (ANNs) with the principles of the widely used NeuroEvolution of Augmenting Topologies (NEAT) algorithm developed by Kenneth Stanley.[2] It is a novel technique for evolving large-scale neural networks using the geometric regularities of the task domain. It uses Compositional Pattern Producing Networks[3] (CPPNs), which are used to generate the images for Picbreeder.org and shapes for EndlessForms.com. HyperNEAT has recently been extended to also evolve plastic ANNs[4] and to evolve the location of every neuron in the network.[5]
|
https://en.wikipedia.org/wiki/HyperNEAT
|
Feature engineeringis a preprocessing step insupervised machine learningandstatistical modeling[1]which transforms raw data into a more effective set of inputs. Each input comprises several attributes, known as features. By providing models with relevant information, feature engineering significantly enhances their predictive accuracy and decision-making capability.[2][3][4]
Beyond machine learning, the principles of feature engineering are applied in various scientific fields, including physics. For example, physicists constructdimensionless numberssuch as theReynolds numberinfluid dynamics, theNusselt numberinheat transfer, and theArchimedes numberinsedimentation. They also develop first approximations of solutions, such as analytical solutions for thestrength of materialsin mechanics.[5]
One of the applications of feature engineering has been clustering of feature-objects or sample-objects in a dataset. Especially, feature engineering based on matrix decomposition has been extensively used for data clustering under non-negativity constraints on the feature coefficients. These include Non-negative Matrix Factorization (NMF),[6] Non-negative Matrix-Tri Factorization (NMTF),[7] Non-negative Tensor Decomposition/Factorization (NTF/NTD),[8] etc. The non-negativity constraints on coefficients of the feature vectors mined by the above-stated algorithms yield a part-based representation, and different factor matrices exhibit natural clustering properties. Several extensions of the above-stated feature engineering methods have been reported in the literature, including orthogonality-constrained factorization for hard clustering, and manifold learning to overcome inherent issues with these algorithms.
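A minimal sketch of clustering-oriented NMF with scikit-learn; the data matrix and the choice of two components are illustrative assumptions, not values from the source.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical non-negative data: 6 samples described by 4 features.
X = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
    [0, 1, 5, 3],
], dtype=float)

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)   # sample coefficients (n_samples x n_components)
H = model.components_        # feature loadings  (n_components x n_features)

# The dominant component per sample gives a simple (hard) clustering of the samples.
print(W.argmax(axis=1))
```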
Other classes of feature engineering algorithms include leveraging a common hidden structure across multiple inter-related datasets to obtain a consensus (common) clustering scheme. An example is Multi-view Classification based on Consensus Matrix Decomposition (MCMD),[2] which mines a common clustering scheme across multiple datasets. MCMD is designed to output two types of class labels (scale-variant and scale-invariant clustering).
Coupled matrix and tensor decompositions are popular in multi-view feature engineering.[9]
Feature engineering inmachine learningandstatistical modelinginvolves selecting, creating, transforming, and extracting data features. Key components include feature creation from existing data, transforming and imputing missing or invalid features, reducing data dimensionality through methods likePrincipal Components Analysis(PCA),Independent Component Analysis(ICA), andLinear Discriminant Analysis(LDA), and selecting the most relevant features for model training based on importance scores andcorrelation matrices.[10]
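A short sketch of one of the transformation steps mentioned above, dimensionality reduction with PCA in scikit-learn; the synthetic data and the choice of two components are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                             # hypothetical raw feature matrix
X[:, 1] = X[:, 0] * 2 + rng.normal(scale=0.1, size=200)    # one deliberately redundant feature

X_scaled = StandardScaler().fit_transform(X)   # PCA is sensitive to feature scale
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)                          # (200, 2)
print(pca.explained_variance_ratio_)            # share of variance kept by each component
```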
Features vary in significance.[11]Even relatively insignificant features may contribute to a model.Feature selectioncan reduce the number of features to prevent a model from becoming too specific to the training data set (overfitting).[12]
Feature explosion occurs when the number of identified features is too large for effective model estimation or optimization. Common causes include the use of feature templates (generating many features mechanically rather than coding individual features) and feature combinations (cross-products of existing features that cannot be represented by a linear system).
Feature explosion can be limited via techniques such as:regularization,kernel methods, andfeature selection.[13]
Automation of feature engineering is a research topic that dates back to the 1990s.[14] Machine learning software that incorporates automated feature engineering has been commercially available since 2016.[15] Related academic literature can be roughly separated into two types: approaches that extend relational learning, such as Multi-relational Decision Tree Learning (MRDTL), and approaches that automatically synthesize features from raw data, such as deep feature synthesis (DFS).
Multi-relational Decision Tree Learning (MRDTL) extends traditional decision tree methods torelational databases, handling complex data relationships across tables. It innovatively uses selection graphs asdecision nodes, refined systematically until a specific termination criterion is reached.[14]
Most MRDTL studies base implementations on relational databases, which results in many redundant operations. These redundancies can be reduced by using techniques such as tuple id propagation.[16][17]
There are a number of open-source libraries and tools that automate feature engineering on relational data and time series, such as featuretools (which implements deep feature synthesis), OneBM, and tsfresh (for time-series features).
OneBM is intended to help data scientists reduce data-exploration time, allowing them to try and discard many ideas in a short time. It also aims to enable non-experts, who are not familiar with data science, to quickly extract value from their data with little effort, time, and cost.[22]
The deep feature synthesis (DFS) algorithm beat 615 of 906 human teams in a competition.[32][33]
Thefeature storeis where the features are stored and organized for the explicit purpose of being used to either train models (by data scientists) or make predictions (by applications that have a trained model). It is a central location where you can either create or update groups of features created from multiple different data sources, or create and update new datasets from those feature groups for training models or for use in applications that do not want to compute the features but just retrieve them when it needs them to make predictions.[34]
A feature store includes the ability to store code used to generate features, apply the code to raw data, and serve those features to models upon request. Useful capabilities include feature versioning and policies governing the circumstances under which features can be used.[35]
Feature stores can be standalone software tools or built into machine learning platforms.
Feature engineering can be a time-consuming and error-prone process, as it requires domain expertise and often involves trial and error.[36][37]Deep learning algorithmsmay be used to process a large raw dataset without having to resort to feature engineering.[38]However, deep learning algorithms still require careful preprocessing and cleaning of the input data.[39]In addition, choosing the right architecture, hyperparameters, and optimization algorithm for a deep neural network can be a challenging and iterative process.[40]
|
https://en.wikipedia.org/wiki/Feature_extraction
|
Intheoretical computer science, anondeterministic Turing machine(NTM) is a theoretical model of computation whose governing rules specify more than one possible action when in some given situations. That is, an NTM's next state isnotcompletely determined by its action and the current symbol it sees, unlike adeterministic Turing machine.
NTMs are sometimes used inthought experimentsto examine the abilities and limits of computers. One of the most important open problems in theoreticalcomputer scienceis theP versus NP problem, which (among other equivalent formulations) concerns the question of how difficult it is to simulate nondeterministic computation with a deterministic computer.
In essence, a Turing machine is imagined to be a simple computer that reads and writes symbols one at a time on an endless tape by strictly following a set of rules. It determines what action it should perform next according to its internalstateandwhat symbol it currently sees. An example of one of a Turing Machine's rules might thus be: "If you are in state 2 and you see an 'A', then change it to 'B', move left, and switch to state 3."
In adeterministic Turing machine(DTM), the set of rules prescribes at most one action to be performed for any given situation.
A deterministic Turing machine has a transition function that, for a given state and symbol under the tape head, specifies three things: the symbol to be written on the tape, the direction in which the head should move (left or right), and the subsequent state of the finite control.
For example, an X on the tape in state 3 might make the DTM write a Y on the tape, move the head one position to the right, and switch to state 5.
In contrast to a deterministic Turing machine, in a nondeterministic Turing machine (NTM) the set of rules may prescribe more than one action to be performed for any given situation. For example, an X on the tape in state 3 might allow the NTM to choose between several actions, such as writing a Y, moving right, and switching to state 5, or alternatively leaving the X in place, moving left, and staying in state 3.
Because there can be multiple actions that can follow from a given situation, there can be multiple possible sequences of steps that the NTM can take starting from a given input. If at least one of these possible sequences leads to an "accept" state, the NTM is said to accept the input. While a DTM has a single "computation path" that it follows, an NTM has a "computationtree".
A nondeterministic Turing machine can be formally defined as a six-tuple M = (Q, Σ, ι, ⊔, A, δ), where Q is a finite set of states, Σ is a finite alphabet of tape symbols, ι ∈ Q is the initial state, ⊔ ∈ Σ is the blank symbol, A ⊆ Q is the set of accepting (final) states, and δ ⊆ ((Q \ A) × Σ) × (Q × Σ × {L, R}) is a relation on states and symbols called the transition relation.
The difference with a standard (deterministic)Turing machineis that, for deterministic Turing machines, the transition relation is a function rather than just a relation.
Configurations and theyieldsrelation on configurations, which describes the possible actions of the Turing machine given any possible contents of the tape, are as for standard Turing machines, except that theyieldsrelation is no longer single-valued. (If the machine is deterministic, the possible computations are all prefixes of a single, possibly infinite, path.)
The input for an NTM is provided in the same manner as for a deterministic Turing machine: the machine is started in the configuration in which the tape head is on the first character of the string (if any), and the tape is all blank otherwise.
An NTM accepts an input string if and only ifat least oneof the possible computational paths starting from that string puts the machine into an accepting state. When simulating the many branching paths of an NTM on a deterministic machine, we can stop the entire simulation as soon asanybranch reaches an accepting state.
As a mathematical construction used primarily in proofs, there are a variety of minor variations on the definition of an NTM, but these variations all accept equivalent languages.
The head movement in the output of the transition relation is often encoded numerically instead of using letters to represent moving the head Left (−1), Stationary (0), and Right (+1), giving a transition function output of (Q × Σ × {−1, 0, +1}). It is common to omit the stationary (0) output,[1] and instead insert the transitive closure of any desired stationary transitions.
Some authors add an explicitrejectstate,[2]which causes the NTM to halt without accepting. This definition still retains the asymmetry thatanynondeterministic branch can accept, buteverybranch must reject for the string to be rejected.
Any computational problem that can be solved by a DTM can also be solved by an NTM, and vice versa. However, it is believed that in general the time complexity may not be the same.
NTMs include DTMs as special cases, so every computation that can be carried out by a DTM can also be carried out by the equivalent NTM.
It might seem that NTMs are more powerful than DTMs, since they can allow trees of possible computations arising from the same initial configuration, accepting a string if any one branch in the tree accepts it. However, it is possible to simulate NTMs with DTMs, and in fact this can be done in more than one way.
One approach is to use a DTM of which the configurations represent multiple configurations of the NTM, and the DTM's operation consists of visiting each of them in turn, executing a single step at each visit, and spawning new configurations whenever the transition relation defines multiple continuations.
Another construction simulates NTMs with 3-tape DTMs, of which the first tape always holds the original input string, the second is used to simulate a particular computation of the NTM, and the third encodes a path in the NTM's computation tree.[3]The 3-tape DTMs are easily simulated with a normal single-tape DTM.
In the second construction, the constructed DTM effectively performs abreadth-first searchof the NTM's computation tree, visiting all possible computations of the NTM in order of increasing length until it finds an accepting one. Therefore, the length of an accepting computation of the DTM is, in general, exponential in the length of the shortest accepting computation of the NTM. This is believed to be a general property of simulations of NTMs by DTMs. TheP = NP problem, the most famous unresolved question in computer science, concerns one case of this issue: whether or not every problem solvable by a NTM in polynomial time is necessarily also solvable by a DTM in polynomial time.
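A small sketch of the breadth-first simulation idea: an NTM is given as a transition relation mapping (state, symbol) to a set of (state, write, move) options, and a deterministic program explores configurations level by level until some branch accepts. The machine below, which nondeterministically either accepts the current cell if it holds a '1' or moves right, is a made-up example.

```python
from collections import deque

def ntm_accepts(transitions, accept_states, tape, start_state, max_steps=10_000):
    """Breadth-first search over NTM configurations (state, head position, tape)."""
    start = (start_state, 0, tuple(tape))
    queue, seen = deque([start]), {start}
    for _ in range(max_steps):
        if not queue:
            return False                                   # every branch is a dead end
        state, head, tp = queue.popleft()
        if state in accept_states:
            return True                                    # some branch accepts
        symbol = tp[head] if 0 <= head < len(tp) else "_"
        for new_state, write, move in transitions.get((state, symbol), []):
            new_tape = list(tp) + (["_"] if head == len(tp) else [])
            new_tape[head] = write
            cfg = (new_state, max(head + move, 0), tuple(new_tape))
            if cfg not in seen:
                seen.add(cfg)
                queue.append(cfg)
    return False

# Guess-and-check machine: either accept the current cell if it is '1', or keep moving right.
transitions = {("q0", "1"): [("acc", "1", 0), ("q0", "1", 1)],
               ("q0", "0"): [("q0", "0", 1)]}
print(ntm_accepts(transitions, {"acc"}, list("0010"), "q0"))   # True
print(ntm_accepts(transitions, {"acc"}, list("0000"), "q0"))   # False
```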
An NTM has the property of bounded nondeterminism. That is, if an NTM always halts on a given input tapeTthen it halts in a bounded number of steps, and therefore can only have a bounded number of possible configurations.
Becausequantum computersusequantum bits, which can be insuperpositionsof states, rather than conventional bits, there is sometimes a misconception thatquantum computersare NTMs.[4]However, it is believed by experts (but has not been proven) that the power of quantum computers is, in fact, incomparable to that of NTMs; that is, problems likely exist that an NTM could efficiently solve that a quantum computer cannot and vice versa.[5][better source needed]In particular, it is likely thatNP-completeproblems are solvable by NTMs but not by quantum computers in polynomial time.
Intuitively speaking, while a quantum computer can indeed be in a superposition state corresponding to all possible computational branches having been executed at the same time (similar to an NTM), the final measurement will collapse the quantum computer into a randomly selected branch. This branch then does not, in general, represent the sought-for solution, unlike the NTM, which is allowed to pick the right solution among the exponentially many branches.
|
https://en.wikipedia.org/wiki/Nondeterministic_Turing_machine
|
Code-mixingis the mixing of two or more languages orlanguage varietiesin speech.[a]
Some scholars use the terms "code-mixing" and "code-switching" interchangeably, especially in studies ofsyntax,morphology, and otherformalaspects of language.[1][2]Others assume more specific definitions of code-mixing, but these specific definitions may be different in different subfields oflinguistics,education theory,communicationsetc.
Code-mixing is similar to the use or creation ofpidgins, but while a pidgin is created across groups that do not share a common language, code-mixing may occur within amultilingualsetting where speakers share more than one language.
Some linguists use the terms code-mixing and code-switching more or less interchangeably. Especially in formal studies of syntax, morphology, etc., both terms are used to refer toutterancesthat draw from elements of two or moregrammatical systems.[1]These studies are often interested in the alignment of elements from distinct systems, or on constraints that limit switching.
Some work defines code-mixing as the placing or mixing of various linguistic units (affixes, words, phrases, clauses) from two different grammatical systems within the same sentence and speech context, while code-switching is the placing or mixing of units (words, phrases, sentences) from two codes within the same speech context. The structural difference between code-switching and code-mixing is the position of the altered elements—for code-switching, the modification of the codes occurs intersententially, while for code-mixing, it occurs intrasententially.[3]
In other work the term code-switching emphasizes a multilingual speaker's movement from one grammatical system to another, while the term code-mixing suggests a hybrid form, drawing from distinct grammars. In other words,code-mixingemphasizes the formal aspects of language structures orlinguistic competence, whilecode-switchingemphasizeslinguistic performance.[citation needed]
While many linguists have worked to describe the difference between code-switching andborrowingof words or phrases, the term code-mixing may be used to encompass both types of language behavior.[4]
While linguists who are primarily interested in the structure or form of code-mixing may have relatively little interest to separate code-mixing from code-switching, some sociolinguists have gone to great lengths to differentiate the two phenomena. For these scholars, code-switching is associated with particularpragmaticeffects,discoursefunctions, or associations with groupidentity.[b]In this tradition, the termscode-mixingorlanguage alternationare used to describe more stable situations in which multiple languages are used without such pragmatic effects. See alsoCode-mixing as fused lect, below.
In studies of bilinguallanguage acquisition,code-mixingrefers to a developmental stage during which children mix elements of more than one language. Nearly all bilingual children go through a period in which they move from one language to another without apparent discrimination.[5]This differs from code-switching, which is understood as the socially and grammatically appropriate use of multiple varieties.
Beginning at thebabblingstage, young children in bilingual or multilingual environments produce utterances that combine elements of both (or all) of their developing languages. Some linguists suggest that this code-mixing reflects a lack of control or ability to differentiate the languages. Others argue that it is a product of limited vocabulary; very young children may know a word in one language but not in another. More recent studies argue that this early code-mixing is a demonstration of a developing ability to code-switch in socially appropriate ways.[5]
For young bilingual children, code-mixing may be dependent on the linguistic context, cognitive task demands, and interlocutor. Code-mixing may also function to fill gaps in their lexical knowledge. Some forms of code-mixing by young children may indicate risk forlanguage impairment.[6]
Inpsychologyand inpsycholinguisticsthe labelcode-mixingis used in theories that draw on studies of language alternation or code-switching to describe the cognitive structures underlying bilingualism. During the 1950s and 1960s, psychologists and linguists treated bilingual speakers as, in Grosjean's terms, "two monolinguals in one person".[7]This "fractional view" supposed that a bilingual speaker carried two separate mental grammars that were more or less identical to the mental grammars of monolinguals and that were ideally kept separate and used separately. Studies since the 1970s, however, have shown that bilinguals regularly combine elements from "separate" languages. These findings have led to studies of code-mixing in psychology and psycholinguistics.[8]
Sridhar and Sridhar define code-mixing as "the transition from using linguistic units (words, phrases, clauses, etc.) of one language to using those of another within a single sentence".[8]They note that this is distinct from code-switching in that it occurs in a single sentence (sometimes known asintrasentential switching) and in that it does not fulfill the pragmatic or discourse-oriented functions described by sociolinguists. (SeeCode-mixing in sociolinguisticsabove.) The practice of code-mixing, which draws from competence in two languages at the same time suggests that these competencies are not stored or processed separately. Code-mixing among bilinguals is therefore studied in order to explore the mental structures underlying language abilities.
A mixed language or a fused lect is a relatively stable mixture of two or more languages. What some linguists have described as "codeswitching as unmarked choice"[9] or "frequent codeswitching"[10] has more recently been described as "language mixing", or in the case of the most strictly grammaticalized forms as "fused lects".[11]
In areas where code-switching among two or more languages is very common, it may become normal for words from both languages to be used together in everyday speech. Unlike code-switching, where a switch tends to occur at semantically or sociolinguistically meaningful junctures,[c] this code-mixing has no specific meaning in the local context. A fused lect is identical to a mixed language in terms of semantics and pragmatics, but fused lects allow less variation since they are fully grammaticalized. In other words, there are grammatical structures of the fused lect that determine which source-language elements may occur.[11]
A mixed language is different from acreole language. Creoles are thought to develop from pidgins as they becomenativized.[12]Mixed languages develop from situations of code-switching. (See the distinction between code-mixing and pidgin above.)
There are many names for specific mixed languages or fused lects. These names are often used facetiously or carry apejorativesense.[13]Named varieties include the following, among others.
https://en.wikipedia.org/wiki/Code_mixing
In software testing, monkey testing is a technique where the user tests the application or system by providing random inputs and checking the behavior, or seeing whether the application or system will crash. Monkey testing is usually implemented as random, automated unit tests.
While the source of the name "monkey" is uncertain, it is believed by some that the name has to do with the infinite monkey theorem,[1] which states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare. Some others believe that the name comes from the classic Mac OS application "The Monkey" developed by Steve Capps prior to 1983. It used journaling hooks to feed random events into Mac programs, and was used to test for bugs in MacPaint.[2]
Monkey testing is also included in Android Studio as part of the standard testing tools for stress testing.[3]
Monkey testing can be categorized into smart monkey tests or dumb monkey tests.
Smart monkeys are usually identified by the following characteristics:[4]
Some smart monkeys are also referred to as brilliant monkeys,[citation needed] which perform testing according to the user's behavior and can estimate the probability of certain bugs.
Dumb monkeys, also known as "ignorant monkeys", are usually identified by the following characteristics:[citation needed]
Monkey testing is an effective way to identify some out-of-the-box errors. Since the scenarios tested are usually ad hoc, monkey testing can also be a good way to perform load and stress testing. The intrinsic randomness of monkey testing also makes it a good way to find major bugs that can break the entire system. Monkey testing is easy to set up, which makes it suitable for almost any application. Smart monkeys, if properly set up with an accurate state model, can be very good at finding various kinds of bugs.
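As a rough sketch (not taken from any particular tool), a "dumb monkey" test in Python might look like the following; handle_input is a hypothetical function under test, and the planted control-character bug exists only so that the demo occasionally fails.

```python
import random
import string

def handle_input(text: str) -> None:
    """Hypothetical function under test; replace with the real entry point."""
    if text.startswith("\x00"):          # deliberately planted bug for the demo
        raise ValueError("unexpected control character")

def dumb_monkey(iterations: int = 10_000, seed: int = 42) -> None:
    rng = random.Random(seed)            # fixed seed so a failure can be replayed
    alphabet = string.printable + "\x00\x1b"
    for i in range(iterations):
        text = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 50)))
        try:
            handle_input(text)           # the only check is "does it crash?"
        except Exception as exc:
            print(f"iteration {i}: input {text!r} crashed with {exc!r}")
            raise

if __name__ == "__main__":
    dumb_monkey()
```

Seeding the random generator, as in this sketch, is one simple way to mitigate the reproducibility problem discussed below.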
The randomness of monkey testing often makes the bugs found difficult or impossible to reproduce. Unexpected bugs found by monkey testing can also be challenging and time-consuming to analyze. In some systems, monkey testing can go on for a long time before finding a bug. For smart monkeys, effectiveness depends heavily on the state model provided, and developing a good state model can be expensive.[1]
While monkey testing is sometimes treated the same as fuzz testing,[5] and the two terms are usually used together,[6] some believe they are different, arguing that monkey testing is more about random actions while fuzz testing is more about random data input.[7] Monkey testing also differs from ad hoc testing in that ad hoc testing is performed without planning and documentation, and its objective is to divide the system randomly into subparts and check their functionality, which is not the case in monkey testing.
https://en.wikipedia.org/wiki/Monkey_testing
Location-based service (LBS) is a general term denoting software services which use geographic data and information to provide services or information to users.[1] LBS can be used in a variety of contexts, such as health, indoor object search,[2] entertainment,[3] work, personal life, etc.[4] Commonly used examples of location-based services include navigation software, social networking services, location-based advertising, and tracking systems.[5] LBS can also include mobile commerce when taking the form of coupons or advertising directed at customers based on their current location. LBS also includes personalized weather services and even location-based games.
LBS is critical to many businesses as well as government organizations for deriving real insight from data tied to the specific locations where activities take place. The spatial patterns that location-related data and services can reveal are among their most powerful and useful aspects: location is a common denominator in all of these activities and can be leveraged to better understand patterns and relationships. Banking, surveillance, online commerce, and many weapon systems are dependent on LBS.
Access policies are controlled by location data or time-of-day constraints, or a combination thereof. As such, an LBS is an information service with a number of uses in social networking, entertainment and security, which is accessible with mobile devices through the mobile network and which uses information on the geographical position of the mobile device.[6][7][8][9]
This concept of location-based systems is not compliant with the standardized concept of real-time locating systems (RTLS) and related local services, as noted in ISO/IEC 19762-5[10] and ISO/IEC 24730-1.[11] While networked computing devices generally do very well at informing consumers of days-old data, the computing devices themselves can also be tracked, even in real time. LBS privacy issues arise in that context and are documented below.
Location-based services (LBSs) are widely used in many computer systems and applications. Modern location-based services are made possible by technological developments such as theWorld Wide Web,satellite navigationsystems, and the widespread use ofmobile phones.[12]
Location-based services were developed by integrating data from satellite navigation systems, cellular networks, and mobile computing, to provide services based on the geographical locations of users.[13] Over its history, location-based software has evolved from simple synchronization-based service models to authenticated and complex tools for implementing virtually any location-based service model or facility.
There is currently no agreed upon criteria for defining the market size of location-based services, but theEuropean GNSS Agencyestimated that 40% of allcomputer applicationsused location-based software as of 2013, and 30% of all Internet searches were for locations.[14]
LBS is the ability to open and close specific data objects based on the use of location or time (or both) as controls and triggers, or as part of complex cryptographic key or hashing systems and the data they provide access to. Location-based services may be among the most heavily used application-layer decision frameworks in computing.
The Global Positioning System was first developed by the United States Department of Defense in the 1970s, and was made available for worldwide civilian use in the 1980s.[15] Research forerunners of today's location-based services include the infrared Active Badge system[16] (1989–1993), the Ericsson-Europolitan GSM LBS trial by Jörgen Johansson (1995), and the master's thesis written by Nokia employee Timo Rantalainen in 1995.[17]
In 1990 International Teletrac Systems (later PacTel Teletrac), founded in Los Angeles, California, introduced the world's first dynamic real-time stolen vehicle recovery services. Building on this, they began developing location-based services that could transmit information about location-based goods and services to custom-programmed alphanumeric Motorola pagers. In 1996 the US Federal Communications Commission (FCC) issued rules requiring all US mobile operators to locate emergency callers. This rule was a compromise resulting from US mobile operators seeking the support of the emergency community in order to obtain the same protection from lawsuits relating to emergency calls as fixed-line operators already had.
In 1997 Christopher Kingdon, of Ericsson, handed in the Location Services (LCS) stage 1 description to the joint GSM group of theEuropean Telecommunications Standards Institute(ETSI) and theAmerican National Standards Institute(ANSI). As a result, the LCS sub-working group was created under ANSI T1P1.5. This group went on to select positioning methods and standardize Location Services (LCS), later known as Location Based Services (LBS). Nodes defined include the Gateway Mobile Location Centre (GMLC), the Serving Mobile Location Centre (SMLC) and concepts such as Mobile Originating Location Request (MO-LR), Network Induced Location Request (NI-LR) and Mobile Terminating Location Request (MT-LR).
As a result of these efforts, the first digital location-based service patent was filed in the US in 1999 and was ultimately issued, after nine office actions, in March 2002. The patent[18] describes controls which, when applied to today's networking models, provide key value in all systems.
In 2000, after approval from the world's twelve largest telecom operators, Ericsson, Motorola and Nokia jointly formed and launched the Location Interoperability Forum Ltd (LIF). This forum first specified the Mobile Location Protocol (MLP), an interface between the telecom network and an LBS application running on a server in the Internet domain. Then, driven largely by the Vodafone group, LIF went on to specify the Location Enabling Server (LES), a "middleware" which simplifies the integration of multiple LBSs with an operator's infrastructure. In 2004 LIF was merged into the Open Mobile Alliance (OMA), and an LBS work group was formed within the OMA.
In 2002, Marex.com in Miami, Florida designed the world's first marine asset telemetry device for commercial sale. The device, designed by Marex and engineered by its partner firms in telecom and hardware, was capable of transmitting location data and retrieving location-based service data via both cellular and satellite-based communications channels. Utilizing the Orbcomm satellite network, the device had multi-level SOS features for both MAYDAY and marine assistance, vessel system condition and performance monitoring with remote notification, and a dedicated hardware device similar to GPS units. Based upon the device location, it was capable of providing detailed bearing, distance and communication information to the vessel operator in real time, in addition to the marine assistance and MAYDAY features. The concept and functionality were coined Location Based Services by the principal architect and product manager for Marex, Jason Manowitz, SVP, Product and Strategy. The device was branded as the Integrated Marine Asset Management System (IMAMS), and the proof-of-concept beta device was demonstrated to various US government agencies for vessel identification, tracking, and enforcement operations in addition to the commercial product line.[19] The device was capable of tracking assets including ships, planes, shipping containers, or any other mobile asset with a proper power source and antenna placement. Marex's financial challenges prevented product introduction and the beta device disappeared.
The first consumer LBS-capable mobile Web device was the Palm VII, released in 1999.[20] Two of the in-the-box applications made use of the ZIP-code–level positioning information and share the title of first consumer LBS application: the Weather.com app from The Weather Channel, and the TrafficTouch app from Sony-Etak/Metro Traffic.[21][22][23]
The first LBS services were launched during 2001 by TeliaSonera in Sweden (FriendFinder, yellow pages, houseposition, emergency call location etc.) and by EMT in Estonia (emergency call location, friend finder, TV game). TeliaSonera and EMT based their services on the Ericsson Mobile Positioning System (MPS).
Other early LBSs include Friendzone, launched by Swisscom in Switzerland in May 2001, using the technology of Valis Ltd. The service included friend finder, LBS dating and LBS games. The same service was launched later by Vodafone Germany, Orange Portugal and Pelephone in Israel.[21] Other early systems included Microsoft's Wi-Fi-based indoor location system RADAR (2000), MIT's Cricket project using ultrasound location (2000) and Intel's Place Lab with wide-area location (2003).[24]
In May 2002, go2 and AT&T Mobility launched the first (US) mobile LBS local search application that used Automatic Location Identification (ALI) technologies mandated by the FCC. go2 users were able to use AT&T's ALI to determine their location and search near that location to obtain a list of requested locations (stores, restaurants, etc.) ranked by proximity to the ALI provided by the AT&T wireless network. The ALI-determined location was also used as a starting point for turn-by-turn directions.
The main advantage is that mobile users do not have to manually specify postal codes or other location identifiers to use LBS, when they roam into a different location.
There are various companies that sell access to an individual's location history, and this is estimated to be a $12 billion industry composed of collectors, aggregators and marketplaces. As of 2021, a company named Near claimed to have data from 1.6 billion people in 44 different countries, Mobilewalla claimed data on 1.9 billion devices, and X-Mode claimed to have a database covering 25 percent of the U.S. adult population. An analysis conducted by the non-profit newsroom The Markup found that six out of 47 companies claimed over a billion devices in their databases. As of 2021, there are no rules or laws governing who can buy an individual's data.[25]
There are a number of ways in which the location of an object, such as a mobile phone or device, can be determined. Another emerging method for confirming location is IoT and blockchain-based relative object location verification.[26]
With control-plane locating, sometimes referred to as positioning, the mobile phone service provider gets the location based on the radio signal delay of the closest cell-phone towers (for phones without satellite navigation features), which can be quite slow as it uses the 'voice control' channel.[9] In the UK, networks do not use trilateration; LBS services use a single base station, with a "radius" of inaccuracy, to determine a phone's location. This technique was the basis of the E-911 mandate and is still used to locate cellphones as a safety measure. Newer phones and PDAs typically have an integrated A-GPS chip.
In addition there are emerging techniques like Real Time Kinematics and WiFi RTT (Round Trip Timing) as part of Precision Time Management services in WiFi and related protocols.
In order to provide a successful LBS technology the following factors must be met:
Several categories of methods can be used to find the location of the subscriber.[7][27] The simple and standard solution is LBS based on a satellite navigation system such as Galileo or GPS. Sony Ericsson's "NearMe" is one such example; it is used to maintain knowledge of the exact location. Satellite navigation is based on the concept of trilateration, a basic geometric principle that allows finding one location if one knows its distance from other, already known locations.
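As a sketch of the trilateration idea only (not of any particular positioning protocol), the following Python snippet estimates a 2-D position from three known anchor points and measured distances by subtracting circle equations to obtain a linear system; the anchor coordinates and distances are made up for the example.

```python
import numpy as np

def trilaterate_2d(anchors, distances):
    """Estimate a 2-D position from three anchors and measured distances.

    Subtracting the first circle equation from the other two yields a
    linear system in (x, y), which is solved directly.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2,
                  d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2])
    return np.linalg.solve(A, b)

# Example: anchors at known positions, distances measured to a point at (2, 1).
print(trilaterate_2d([(0, 0), (10, 0), (0, 10)],
                     [np.hypot(2, 1), np.hypot(8, 1), np.hypot(2, 9)]))
# -> approximately [2. 1.]
```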
A low-cost alternative to using location technology to track the player is not to track at all. This has been referred to as "self-reported positioning". It was used in the mixed reality game Uncle Roy All Around You in 2003 and considered for use in augmented reality games in 2006.[28] Instead of tracking technologies, players were given a map which they could pan around and subsequently mark their location upon.[29][30] With the rise of location-based networking, this is more commonly known as a user "check-in".
Near LBS (NLBS) involves local-range technologies such asBluetooth Low Energy,wireless LAN, infrared ornear-field communicationtechnologies, which are used to match devices to nearby services. This application allows a person to access information based on their surroundings; especially suitable for using inside closed premises, restricted or regional area.
Another alternative is an operator- and satellite-independent location service based on access into the deep level telecoms network (SS7). This solution enables accurate and quick determination of geographical coordinates of mobile phones by providing operator-independent location data and works also for handsets that do not have satellite navigation capability.
In addition, theIP addresscould provide the end-user's location.
Many otherlocal positioning systemsandindoor positioning systemsare available, especially for indoor use. GPS and GSM do not work very well indoors, so other techniques are used, including co-pilot beacon for CDMA networks, Bluetooth, UWB,RFIDand Wi-Fi.[31]
Location-based services may be employed in a number of applications, including:[7]
For the carrier, location-based services provide added value by enabling services such as:
In theU.S.theFCCrequires that all carriers meet certain criteria for supporting location-based services (FCC 94–102). The mandate requires 95% of handsets to resolve within 300 meters for network-based tracking (e.g. triangulation) and 150 meters for handset-based tracking (e.g. GPS). This can be especially useful when dialing anemergency telephone number– such asenhanced 9-1-1inNorth America, or112inEurope– so that the operator can dispatch emergency services such asemergency medical services,policeorfirefightersto the correct location. CDMA and iDEN operators have chosen to use GPS location technology for locating emergency callers. This led to rapidly increasing penetration of GPS in iDEN and CDMA handsets in North America and other parts of the world where CDMA is widely deployed. Even though no such rules are yet in place in Japan or in Europe the number of GPS-enabled GSM/WCDMA handset models is growing fast. According to the independent wireless analyst firmBerg Insightthe attach rate for GPS is growing rapidly in GSM/WCDMA handsets, from less than 8% in 2008 to 15% in 2009.[34]
As for economic impact, location-based services are estimated to have a $1.6 Trillion impact on the US economy alone.[35]
European operators are mainly usingCell IDfor locating subscribers. This is also a method used in Europe by companies that are using cell-based LBS as part of systems to recover stolen assets. In the US companies such asRave Wirelessin New York are using GPS and triangulation to enable college students to notify campus police when they are in trouble.
Currently there are roughly three different models for location-based apps on mobile devices. All allow one's location to be tracked by others, and each works in the same way at a high level, but they differ in their specific functions and features. Below is a comparison of an example application from each of the three models.[36]
Mobile messaging plays an essential role in LBS. Messaging, especially SMS, has been used in combination with various LBS applications, such as location-based mobile advertising. SMS is still the main technology carrying mobile advertising and marketing campaigns to mobile phones. A classic example of LBS applications using SMS is the delivery of mobile coupons or discounts to mobile subscribers who are near advertising restaurants, cafés or movie theatres. The Singaporean mobile operator MobileOne carried out such an initiative in 2007 that involved many local marketers, and it was reported to be a huge success in terms of subscriber acceptance.
The Location Privacy Protection Act of 2012 (S.1223)[37] was introduced by Senator Al Franken (D-MN) in order to regulate the transmission and sharing of user location data in the United States. It is based on the individual's one-time consent to participate in these services (opt in). The bill specifies the collecting entities, the collectable data and its usage. The bill does not specify, however, the period of time that the data-collecting entity can hold on to the user data (a limit of 24 hours seems appropriate since most of the services use the data for immediate searches, communications, etc.), and the bill does not include location data stored locally on the device (the user should be able to delete the contents of the location data document periodically just as they would delete a log document). The bill, which was approved by the Senate Judiciary Committee, would also require mobile services to disclose the names of the advertising networks or other third parties with which they share consumers' locations.[38]
With the passing of theCAN-SPAM Actin 2003, it became illegal in the United States to send any message to the end user without the end user specifically opting-in. This put an additional challenge on LBS applications as far as "carrier-centric" services were concerned. As a result, there has been a focus on user-centric location-based services and applications which give the user control of the experience, typically by opting in first via a website or mobile interface (such asSMS, mobile Web, andJava/BREWapplications).
TheEuropean Unionalso provides a legal framework for data protection that may be applied for location-based services, and more particularly several European directives such as: (1) Personal data: Directive 95/46/EC; (2) Personal data in electronic communications: Directive 2002/58/EC; (3) Data Retention:Directive 2006/24/EC. However the applicability of legal provisions to varying forms of LBS and of processing location data is unclear.[39]
One implication of this technology is that data about a subscriber's location and historical movements is owned and controlled by the network operators, including mobile carriers and mobile content providers.[40] Mobile content providers and app developers raise particular privacy concerns. Indeed, a 2013 MIT study[41][42] by de Montjoye et al. showed that 4 spatio-temporal points (approximate places and times) are enough to uniquely identify 95% of 1.5 million people in a mobility database. The study further shows that these constraints hold even when the resolution of the dataset is low; therefore, even coarse or blurred datasets provide little anonymity. A critical article by Dobson and Fisher[43] discusses the possibilities for misuse of location information.
Besides the legal framework there exist several technical approaches to protect privacy using privacy-enhancing technologies (PETs). Such PETs range from simplistic on/off switches[44] to sophisticated PETs using anonymization techniques (e.g. providing k-anonymity)[45] or cryptographic protocols.[46] Only a few LBSs offer such PETs; for example, Google Latitude offered an on/off switch and allowed users to pin their position to a freely definable location. Additionally, it is an open question how users perceive and trust different PETs; only one study so far has addressed user perception of state-of-the-art PETs.[47] Another set of techniques included in the PETs are the location obfuscation techniques, which slightly alter the location of the users in order to hide their real location while still being able to represent their position and receive services from their LBS provider.
Recent research has shown that crowdsourcing is also an effective approach to locating lost objects while still upholding the privacy of users. This is done by ensuring a limited level of interactions between users.[48]
https://en.wikipedia.org/wiki/Location-based_service
In taxonomy, binomial nomenclature ("two-term naming system"), also called binary nomenclature, is a formal system of naming species of living things by giving each a name composed of two parts, both of which use Latin grammatical forms, although they can be based on words from other languages. Such a name is called a binomial name (often shortened to just "binomial"), a binomen, binominal name, or a scientific name; more informally, it is also called a Latin name. In the International Code of Zoological Nomenclature (ICZN), the system is also called binominal nomenclature,[1] with an "n" before the "al" in "binominal", which is not a typographic error, meaning "two-name naming system".[2]
The first part of the name – the generic name – identifies the genus to which the species belongs, whereas the second part – the specific name or specific epithet – distinguishes the species within the genus. For example, modern humans belong to the genus Homo and within this genus to the species Homo sapiens. Tyrannosaurus rex is likely the most widely known binomial.[3] The formal introduction of this system of naming species is credited to Carl Linnaeus, effectively beginning with his work Species Plantarum in 1753.[4] But as early as 1622, Gaspard Bauhin had introduced, in his book Pinax theatri botanici (English: Illustrated exposition of plants), many names of genera that were later adopted by Linnaeus.[5] Binomial nomenclature was introduced in order to provide succinct, relatively stable and verifiable names that could be used and understood internationally, unlike common names, which are usually different in every language.[6]
The application of binomial nomenclature is now governed by various internationally agreed codes of rules, of which the two most important are theInternational Code of Zoological Nomenclature(ICZN) for animals and theInternational Code of Nomenclature for algae, fungi, and plants(ICNafporICN). Although the general principles underlying binomial nomenclature are common to these two codes, there are some differences in the terminology they use and their particular rules.
In modern usage, the first letter of the generic name is always capitalized in writing, while that of the specific epithet is not, even when derived from aproper nounsuch as the name of a person or place. Similarly, both parts areitalicizedin normal text (or underlined in handwriting). Thus the binomial name of the annual phlox (named after botanistThomas Drummond) is now written asPhlox drummondii. Often, after a species name is introduced in a text, the generic name is abbreviated to the first letter in subsequent mentions (e.g.,P. drummondii).
In scientific works, theauthorityfor a binomial name is usually given, at least when it is first mentioned, and the year of publication may be specified.
The wordbinomialis composed of two elements:bi-(Latinprefix meaning 'two') andnomial(theadjectiveform ofnomen, Latin for 'name'). In Medieval Latin, the related wordbinomiumwas used to signify one term in a binomial expression in mathematics.[7]In fact, the Latin wordbinomiummay validly refer to either of the epithets in the binomial name, which can equally be referred to as abinomen(pl.binomina).[8]
Before the adoption of the modern binomial system of naming species, a scientific name consisted of a generic name combined with a specific name that was from one to several words long. Together they formed a system of polynomial nomenclature.[9]These names had two separate functions: to designate or label the species, and to be a diagnosis or description. These two goals were eventually found to be incompatible.[10]In a simple genus that contained few species, it was easy to tell them apart with a one-word genus and a one-word specific name; but as more species were discovered, the names necessarily became longer and unwieldy—for instance,Plantago foliis ovato-lanceolatus pubescentibus, spica cylindrica, scapo tereti("plantain with pubescent ovate-lanceolate leaves, a cylindric spike and ateretescape"), which we know today asPlantago media.[11]
Such "polynomial names" may sometimes look like binomials, but are different. For example, Gerard's herbal (as amended by Johnson) describes various kinds of spiderwort: "The first is calledPhalangium ramosum, Branched Spiderwort; the second,Phalangium non ramosum, Unbranched Spiderwort. The other ... is aptly termedPhalangium Ephemerum Virginianum, Soon-Fading Spiderwort of Virginia".[12]The Latin phrases are short descriptions, rather than identifying labels.
The Bauhins, in particular Caspar Bauhin (1560–1624), took some important steps towards the binomial system by pruning the Latin descriptions, in many cases to two words.[13] The adoption by biologists of a system of strictly binomial nomenclature is due to Swedish botanist and physician Carl Linnaeus (1707–1778). It was in his 1753 Species Plantarum that Linnaeus began consistently using a one-word trivial name (nomen triviale) after a generic name (genus name) in a system of binomial nomenclature.[14] Trivial names had already appeared in his Critica Botanica (1737) and Philosophia Botanica (1751). This trivial name is what is now known as a specific epithet (ICNafp) or specific name (ICZN).[14] The Bauhins' genus names were retained in many of these, but the descriptive part was reduced to a single word.
Linnaeus's trivial names introduced the idea that the function of a name could simply be to give a species a unique label, meaning that the name no longer needed to be descriptive. Both parts could, for example, be derived from the names of people. Thus Gerard'sPhalangium ephemerum virginianumbecameTradescantia virginiana, where the genus name honouredJohn Tradescant the Younger,[note 1]an English botanist and gardener.[15]A bird in the parrot family was namedPsittacus alexandri, meaning "Alexander's parrot", afterAlexander the Great, whose armies introduced eastern parakeets to Greece.[16]Linnaeus's trivial names were much easier to remember and use than the parallel polynomial names, and eventually replaced them.[4]
The value of the binomial nomenclature system derives primarily from its economy, its widespread use, and the uniqueness and stability of names that the Codes ofZoologicalandBotanical,BacterialandViralNomenclature provide:
Binomial nomenclature for species has the effect that when a species is moved from one genus to another, sometimes the specific name or epithet must be changed as well. This may happen because the specific name is already used in the new genus, or toagree in genderwith the new genus if the specific epithet is an adjective modifying the genus name. Some biologists have argued for the combination of the genus name and specific epithet into a single unambiguous name, or for the use of uninomials (as used in nomenclature of ranks above species).[23][24]
Because genus names are unique only within a nomenclature code, it is possible for genera governed by different codes to share the same name (homonyms), and even for two species in different kingdoms to share the same binomial. At least 1,258 instances of genus-name duplication occur (mainly between zoology and botany).[25][26]
Nomenclature (including binomial nomenclature) is not the same as classification, although the two are related. Classification is the ordering of items into groups based on similarities or differences; inbiological classification, species are one of the kinds of item to be classified.[27]In principle, the names given to species could be completely independent of their classification. This is not the case for binomial names, since the first part of a binomial is the name of the genus into which the species is placed. Above the rank of genus, binomial nomenclature and classification are partly independent; for example, a species retains its binomial name if it is moved from one family to another or from one order to another, unless it better fits a different genus in the same or different family, or it is split from its old genus and placed in a newly created genus. The independence is only partial since the names of families and other higher taxa are usually based on genera.
Taxonomyincludes both nomenclature and classification. Its first stages (sometimes called "alpha taxonomy") are concerned with finding, describing and naming species of living orfossilorganisms.[28]Binomial nomenclature is thus an important part of taxonomy as it is the system by which species are named. Taxonomists are also concerned with classification, including its principles, procedures and rules.[29]
A complete binomial name is always treated grammatically as if it were a phrase in the Latin language (hence the common use of the term "Latin name" for a binomial name). However, the two parts of a binomial name can each be derived from a number of sources, of which Latin is only one. These include:
The first part of the name, which identifies the genus, must be a word that can be treated as a Latinsingularnoun in thenominative case. It must be unique within the purview of eachnomenclatural code, but can be repeated between them. ThusHuia recurvatais an extinct species of plant, found asfossilsinYunnan, China,[39]whereasHuia masoniiis a species of frog found inJava, Indonesia.[40]
The second part of the name, which identifies the species within the genus, is also treated grammatically as a Latin word. It can have one of a number of forms:
Whereas the first part of a binomial name must be unique within the purview of each nomenclatural code, the second part is quite commonly used in two or more genera (as is shown by examples ofhodgsoniiabove), but cannot be used more than once within a single genus. The full binomial name must be unique within each code.
From the early 19th century onwards it became ever more apparent that a body of rules was necessary to govern scientific names. In the course of time these becamenomenclature codes. TheInternational Code of Zoological Nomenclature(ICZN) governs the naming of animals,[42]theInternational Code of Nomenclature for algae, fungi, and plants(ICNafp) that of plants (includingcyanobacteria), and theInternational Code of Nomenclature of Bacteria(ICNB) that ofbacteria(includingArchaea).Virusnames are governed by theInternational Committee on Taxonomy of Viruses(ICTV), a taxonomic code, which determines taxa as well as names. These codes differ in certain ways, e.g.:
Unifying the different codes into a single code, the "BioCode", has been suggested,[47]although implementation is not in sight. (There is also a published code for a different system of biotic nomenclature, which does not use ranks above species, but instead namesclades. This is calledPhyloCode.)
As noted above, there are some differences between the codes in how binomials can be formed; for example theICZNallows both parts to be the same, while theICNafpdoes not. Another difference is in how personal names are used in forming specific names or epithets. TheICNafpsets out precise rules by which a personal name is to be converted to a specific epithet. In particular, names ending in a consonant (but not "er") are treated as first being converted into Latin by adding "-ius" (for a man) or "-ia" (for a woman), and then being made genitive (i.e. meaning "of that person or persons"). This produces specific epithets likelecardiifor Lecard (male),wilsoniaefor Wilson (female), andbrauniarumfor the Braun sisters.[48]By contrast, theICZNdoes not require the intermediate creation of a Latin form of a personal name, allowing the genitive ending to be added directly to the personal name.[49]This explains the difference between the names of the plantMagnolia hodgsoniiand the birdAnthus hodgsoni. Furthermore, theICNafprequires names not published in the form required by the code to be corrected to conform to it,[50]whereas theICZNis more protective of the form used by the original author.[51]
By tradition, the binomial names of species are usually typeset in italics; for example,Homo sapiens.[52]Generally, the binomial should be printed in afont styledifferent from that used in the normal text; for example, "Several moreHomo sapiensfossils were discovered." When handwritten, a binomial name should be underlined; for example,Homosapiens.[53]
The first part of the binomial, the genus name, is always written with an initial capital letter. Older sources, particularly botanical works published before the 1950s, used a different convention: if the second part of the name was derived from a proper noun, e.g., the name of a person or place, a capital letter was used. Thus, the modern formBerberis darwiniiwas written asBerberis Darwinii. A capital was also used when the name is formed by two nouns in apposition, e.g.,Panthera LeoorCentaurea Cyanus.[54][note 3]In current usage, the second part is never written with an initial capital.[56][57]
When used with a common name, the scientific name often follows in parentheses, although this varies with publication.[58]For example, "The house sparrow (Passer domesticus) is decreasing in Europe."
The binomial name should generally be written in full. The exception to this is when several species from the same genus are being listed or discussed in the same paper or report, or the same species is mentioned repeatedly; in which case the genus is written in full when it is first used, but may then be abbreviated to an initial (and a period/full stop).[59]For example, a list of members of the genusCanismight be written as "Canis lupus,C. aureus,C. simensis". In rare cases, this abbreviated form has spread to more general use; for example, the bacteriumEscherichia coliis often referred to as justE. coli, andTyrannosaurus rexis perhaps even better known simply asT. rex, these two both often appearing in this form in popular writing even where the full genus name has not already been given.
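As a small illustration of this abbreviation convention (a toy helper, not a rule taken from either code), a sketch that abbreviates the genus after its first mention in a list of names:

```python
def abbreviate_binomials(names):
    """Abbreviate the genus to its initial after its first mention,
    e.g. ['Canis lupus', 'Canis aureus'] -> ['Canis lupus', 'C. aureus']."""
    seen = set()
    out = []
    for name in names:
        genus, _, epithet = name.partition(" ")
        if genus in seen:
            out.append(f"{genus[0]}. {epithet}")
        else:
            seen.add(genus)
            out.append(name)
    return out

print(abbreviate_binomials(["Canis lupus", "Canis aureus", "Canis simensis"]))
# -> ['Canis lupus', 'C. aureus', 'C. simensis']
```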
The abbreviation "sp." is used when the actual specific name cannot or need not be specified. The abbreviation "spp." (plural) indicates "several species". These abbreviations are not italicised (or underlined).[60][61]For example: "Canissp." means "an unspecified species of the genusCanis", while "Canisspp." means "two or more species of the genusCanis". (These abbreviations should not be confused with the abbreviations "ssp." (zoology) or "subsp." (botany), plurals "sspp." or "subspp.", referring to one or moresubspecies. Seetrinomen(zoology) andinfraspecific name.)
The abbreviation "cf." (i.e.,conferin Latin) is used to compare individuals/taxa with known/described species. Conventions for use of the "cf." qualifier vary.[62]In paleontology, it is typically used when the identification is not confirmed.[63]For example, "Corvuscf.nasicus" was used to indicate "a fossil bird similar to theCuban crowbut not certainly identified as this species".[64]In molecular systematics papers, "cf." may be used to indicate one or more undescribed species assumed to be related to a described species. For example, in a paper describing the phylogeny of small benthic freshwater fish called darters, five undescribed putative species (Ozark, Sheltowee, Wildcat, Ihiyo, and Mamequit darters), notable for brightly colored nuptial males with distinctive color patterns,[65]were referred to as "Etheostomacf.spectabile" because they had been viewed as related to, but distinct from,Etheostoma spectabile(orangethroat darter).[66]This view was supported to varying degrees by DNA analysis. The somewhat informal use of taxa names with qualifying abbreviations is referred to asopen nomenclatureand it is not subject to strict usage codes.
In some contexts, the dagger symbol ("†") may be used before or after the binomial name to indicate that the species is extinct.
In scholarly texts, at least the first or main use of the binomial name is usually followed by the "authority" – a way of designating the scientist(s) who first published the name. The authority is written in slightly different ways in zoology and botany. For names governed by theICZNthe surname is usually written in full together with the date (normally only the year) of publication. One example of author citation of scientific name is: "AmabelaMöschler, 1880."[note 4]TheICZNrecommends that the "original author and date of a name should be cited at least once in each work dealing with the taxon denoted by that name."[67]For names governed by theICNafpthe name is generally reduced to a standard abbreviation and the date omitted. TheInternational Plant Names Indexmaintains an approved list of botanical author abbreviations. Historically, abbreviations were used in zoology too.
When the original name is changed, e.g., the species is moved to a different genus, both codes use parentheses around the original authority; theICNafpalso requires the person who made the change to be given. In theICNafp, the original name is then called thebasionym. Some examples:
Binomial nomenclature, as described here, is a system for naming species. Implicitly, it includes a system for naming genera, since the first part of the name of the species is a genus name. In a classification system based on ranks, there are also ways of naming ranks above the level of genus and below the level of species. Ranks above genus (e.g., family, order, class) receive one-part names, which are conventionally not written in italics. Thus, the house sparrow,Passer domesticus, belongs to the familyPasseridae.Familynames are normally based on genus names, such as in zoology,[69]although the endings used differ between zoology and botany.
Ranks below species receive three-part names, conventionally written in italics like the names of species. There are significant differences between theICZNand theICNafp. In zoology, the only formal rank below species is subspecies and the name is written simply as three parts (a trinomen). Thus, one of the subspecies of theolive-backed pipitisAnthus hodgsoni berezowskii. Informally, in some circumstances, aformmay be appended. For exampleHarmonia axyridisf.spectabilisis the harlequin ladybird in its black or melanic forms having four large orange or red spots. In botany, there are many ranks below species and although the name itself is written in three parts, a "connecting term" (not part of the name) is needed to show the rank. Thus, the American black elder isSambucus nigrasubsp.canadensis; the white-flowered form of the ivy-leaved cyclamen isCyclamen hederifoliumf.albiflorum.[70]
https://en.wikipedia.org/wiki/Binomial_nomenclature#Derivation_of_binomial_names
(Stochastic) variance reduction is an algorithmic approach to minimizing functions that can be decomposed into finite sums. By exploiting the finite sum structure, variance reduction techniques are able to achieve convergence rates that are impossible to achieve with methods that treat the objective as an infinite sum, as in the classical stochastic approximation setting.
Variance reduction approaches are widely used for training machine learning models such as logistic regression and support vector machines,[1] as these problems have finite-sum structure and uniform conditioning that make them ideal candidates for variance reduction.
A function $f$ is considered to have finite sum structure if it can be decomposed into a summation or average:
$$f(\theta )={\frac {1}{n}}\sum _{i=1}^{n}f_{i}(\theta ),$$
where the function value and derivative of each $f_{i}$ can be queried independently. Although variance reduction methods can be applied for any positive $n$ and any $f_{i}$ structure, their favorable theoretical and practical properties arise when $n$ is large compared to the condition number of each $f_{i}$, and when the $f_{i}$ have similar (but not necessarily identical) Lipschitz smoothness and strong convexity constants.
The finite sum structure should be contrasted with the stochastic approximation setting, which deals with functions of the form $f(\theta )=\operatorname {E} _{\xi }[F(\theta ,\xi )]$, the expected value of a function depending on a random variable $\xi $. Any finite sum problem can be optimized using a stochastic approximation algorithm by using $F(\cdot ,\xi )=f_{\xi }$.
Stochastic variance reduced methods without acceleration are able to find a minimum of $f$ within accuracy $\epsilon >0$, i.e. $f(x)-f(x_{*})\leq \epsilon $, in a number of steps of the order:
$$O{\Bigl (}{\bigl (}n+{\tfrac {L}{\mu }}{\bigr )}\log {\bigl (}{\tfrac {1}{\epsilon }}{\bigr )}{\Bigr )}.$$
The number of steps depends only logarithmically on the level of accuracy required, in contrast to the stochastic approximation framework, where the number of steps $O{\bigl (}L/(\mu \epsilon ){\bigr )}$ required grows proportionally to the accuracy required.
Stochastic variance reduction methods converge almost as fast as the gradient descent method's $O{\bigl (}(L/\mu )\log(1/\epsilon ){\bigr )}$ rate, despite using only a stochastic gradient, at a $1/n$ lower cost per step than gradient descent.
Accelerated methods in the stochastic variance reduction framework achieve even faster convergence rates, requiring only
$$O{\Bigl (}{\bigl (}n+{\sqrt {n{\tfrac {L}{\mu }}}}{\bigr )}\log {\bigl (}{\tfrac {1}{\epsilon }}{\bigr )}{\Bigr )}$$
steps to reach $\epsilon $ accuracy, potentially ${\sqrt {n}}$ times faster than non-accelerated methods. Lower complexity bounds[2] for the finite sum class establish that this rate is the fastest possible for smooth strongly convex problems.
Variance reduction approaches fall within 3 main categories: table averaging methods, full-gradient snapshot methods and dual methods. Each category contains methods designed for dealing with convex, non-smooth, and non-convex problems, each differing in hyper-parameter settings and other algorithmic details.
In the SAGA method,[3] the prototypical table averaging approach, a table of size $n$ is maintained that contains the last gradient witnessed for each $f_{i}$ term, which we denote $g_{i}$. At each step, an index $i$ is sampled, and a new gradient $\nabla f_{i}(x_{k})$ is computed. The iterate $x_{k}$ is updated with:
$$x_{k+1}=x_{k}-\gamma \left[\nabla f_{i}(x_{k})-g_{i}+{\frac {1}{n}}\sum _{j=1}^{n}g_{j}\right],$$
and afterwards table entry $i$ is updated with $g_{i}=\nabla f_{i}(x_{k})$.
SAGA is among the most popular of the variance reduction methods due to its simplicity, easily adaptable theory, and excellent performance. It is the successor of the SAG method,[4] improving on its flexibility and performance.
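A minimal sketch of the SAGA update in Python for an unregularized smooth finite sum, under the assumption that a per-term gradient oracle is available; the step size, iteration count and the least-squares example are illustrative only.

```python
import numpy as np

def saga(grads, x0, step, iters, seed=0):
    """Minimal SAGA sketch: `grads` is a list of per-term gradient functions."""
    n = len(grads)
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    g = np.array([grad(x) for grad in grads])   # gradient table, initialized at x0
    g_avg = g.mean(axis=0)
    for _ in range(iters):
        i = rng.integers(n)
        fresh = grads[i](x)
        x = x - step * (fresh - g[i] + g_avg)   # variance-reduced gradient step
        g_avg = g_avg + (fresh - g[i]) / n      # maintain the running table average
        g[i] = fresh
    return x

# Toy example: least squares, f_i(x) = 0.5 * (a_i @ x - b_i)^2
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
grads = [lambda x, a=a_i, y=b_i: a * (a @ x - y) for a_i, b_i in zip(A, b)]
print(saga(grads, np.zeros(2), step=0.05, iters=5000))  # converges towards [1, 1]
```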
The stochastic variance reduced gradient method (SVRG),[5] the prototypical snapshot method, uses a similar update, except that instead of using the average of a table it uses a full gradient that is reevaluated at a snapshot point ${\tilde {x}}$ at regular intervals of $m\geq n$ iterations. The update becomes:
$$x_{k+1}=x_{k}-\gamma \left[\nabla f_{i}(x_{k})-\nabla f_{i}({\tilde {x}})+\nabla f({\tilde {x}})\right].$$
This approach requires two stochastic gradient evaluations per step, one to compute $\nabla f_{i}(x_{k})$ and one to compute $\nabla f_{i}({\tilde {x}})$, whereas table averaging approaches need only one.
Despite the higher computational cost, SVRG is popular as its simple convergence theory is highly adaptable to new optimization settings. It also has lower storage requirements than tabular averaging approaches, which makes it applicable in many settings where tabular methods cannot be used.
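A corresponding sketch of SVRG under the same assumptions, reusing the least-squares example from the SAGA sketch; the snapshot full gradient is recomputed every m inner iterations instead of being stored per term.

```python
import numpy as np

def svrg(grads, x0, step, epochs, m, seed=0):
    """Minimal SVRG sketch: a full gradient is recomputed at a snapshot point
    every `m` inner steps and used to correct each stochastic gradient."""
    n = len(grads)
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = sum(grad(snapshot) for grad in grads) / n
        for _ in range(m):
            i = rng.integers(n)
            x = x - step * (grads[i](x) - grads[i](snapshot) + full_grad)
    return x

# Reusing the least-squares gradients defined in the SAGA sketch above:
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
grads = [lambda x, a=a_i, y=b_i: a * (a @ x - y) for a_i, b_i in zip(A, b)]
print(svrg(grads, np.zeros(2), step=0.05, epochs=50, m=10))  # converges towards [1, 1]
```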
Exploiting the dual representation of the objective leads to another variance reduction approach that is particularly suited to finite sums where each term has a structure that makes computing the convex conjugate $f_{i}^{*}$, or its proximal operator, tractable. The standard SDCA method[6] considers finite sums that have additional structure compared to the generic finite sum setting:
$$f(x)={\frac {1}{n}}\sum _{i=1}^{n}f_{i}(x^{T}v_{i})+{\frac {\lambda }{2}}\|x\|^{2},$$
where each $f_{i}$ is 1-dimensional and each $v_{i}$ is a data point associated with $f_{i}$.
SDCA solves the dual problem:
$$\max _{\alpha }\;-{\frac {1}{n}}\sum _{i=1}^{n}f_{i}^{*}(-\alpha _{i})-{\frac {\lambda }{2}}{\Bigl \|}{\frac {1}{\lambda n}}\sum _{i=1}^{n}\alpha _{i}v_{i}{\Bigr \|}^{2}$$
by a stochastic coordinate ascent procedure, where at each step the objective is optimized with respect to a randomly chosen coordinate $\alpha _{i}$, leaving all other coordinates the same. An approximate primal solution $x$ can be recovered from the $\alpha $ values:
$$x={\frac {1}{\lambda n}}\sum _{i=1}^{n}\alpha _{i}v_{i}.$$
This method obtains similar theoretical rates of convergence to other stochastic variance reduced methods, while avoiding the need to specify a step-size parameter. It is fast in practice when $\lambda $ is large, but significantly slower than the other approaches when $\lambda $ is small.
Accelerated variance reduction methods are built upon the standard methods above. The earliest approaches make use of proximal operators to accelerate convergence, either approximately or exactly. Direct acceleration approaches have also been developed.[7]
The catalyst framework[8] uses any of the standard methods above as an inner optimizer to approximately solve a proximal operator subproblem:
$$x_{k+1}\approx \operatorname {arg\,min} _{x}\left\{f(x)+{\frac {\kappa }{2}}\|x-y_{k}\|^{2}\right\},$$
after which it uses an extrapolation step to determine the next $y$:
$$y_{k+1}=x_{k+1}+\beta _{k+1}(x_{k+1}-x_{k}).$$
The catalyst method's flexibility and simplicity make it a popular baseline approach. It does not achieve the optimal rate of convergence among accelerated methods; it is potentially slower by up to a log factor in the hyper-parameters.
Proximal operations may also be applied directly to the $f_{i}$ terms to yield an accelerated method. The Point-SAGA method[9] replaces the gradient operations in SAGA with proximal operator evaluations, resulting in a simple, direct acceleration method:
$$x_{k+1}={\text{prox}}_{j}^{\gamma }(z_{k}),\quad {\text{where }}z_{k}=x_{k}+\gamma \left[g_{j}-{\frac {1}{n}}\sum _{i=1}^{n}g_{i}\right],$$
with the table update $g_{j}={\frac {1}{\gamma }}(z_{k}-x_{k+1})$ performed after each step. Here ${\text{prox}}_{j}^{\gamma }$ is defined as the proximal operator for the $j$th term:
$${\text{prox}}_{j}^{\gamma }(z)=\operatorname {arg\,min} _{x}\left\{f_{j}(x)+{\frac {1}{2\gamma }}\|x-z\|^{2}\right\}.$$
Unlike other known accelerated methods, Point-SAGA requires only a single iterate sequence $x$ to be maintained between steps, and it has the advantage of only having a single tunable parameter $\gamma $. It obtains the optimal accelerated rate of convergence for strongly convex finite-sum minimization without additional log factors.
https://en.wikipedia.org/wiki/Stochastic_variance_reduction
A mobile equipment identifier (MEID) is a globally unique number identifying a physical piece of CDMA2000 mobile station equipment. The number format is defined by the 3GPP2 report S.R0048 but in practical terms, it can be seen as an IMEI but with hexadecimal digits.
An MEID is 56 bits long (14 hexadecimal digits). It consists of three fields, including an 8-bit regional code (RR), a 24-bit manufacturer code, and a 24-bit manufacturer-assigned serial number. The check digit (CD) is not considered part of the MEID.
The MEID was created to replace electronic serial numbers (ESNs), whose virgin form was exhausted in November 2008.[1] As of TIA/EIA/IS-41 Revision D and TIA/EIA/IS-2000 Rev C, the ESN is still a required field in many messages—for compatibility, devices with an MEID can use a pseudo-ESN (pESN), which is a manufacturer code of 0x80 (formerly reserved) followed by the least significant 24 bits of the SHA-1 hash of the MEID.[2] MEIDs are used on CDMA mobile phones. GSM phones do not have an ESN or MIN, only an International Mobile Station Equipment Identity (IMEI) number.
Commonly, opening the phone's dialler and typing *#06# will display its MEID.[3]
The separation between international mobile equipment identifiers (IMEIs) used by GSM/UMTS and MEIDs is based on the number ranges. There are two administrators: the global decimal administrator (GDA) for IMEIs and the global hexadecimal administrator (GHA).
As of August 2006, the TIA acts as the GHA to assign MEID code prefixes (0xA0 and up), and the GSM Association acts as the global decimal administrator. TIA also allocates IMEI codes, specifically destined for dual-technology phones, out of the RR=99 range. This range is commonly (but not exclusively) used forLTE-capable handsets with CDMA support. Other administrators working under GSMA may also allocate any IMEI for use in dual-technology phones. For instance,AppleandLGnormally use RR=35 which is allocated byBABTwhileChinesebrands such asHuaweiuse two RR=86 IMEIs allocated by TAF for 3GPP networks alongside a distinct RR=99 decimal or RR=A0 hexadecimal MEID for 3GPP2 networks. Every IMEI can also be used as an MEID in CDMA devices (as well as in single-mode devices designed with GSM or other 3GPP protocols) but MEIDs can also contain hexadecimal digits and this MEID variant cannot be used as an IMEI.
There are two standard formats for MEIDs, and both can include an optional check-digit. This is defined by3GPP2 standard X.S0008.
The hexadecimal form is specified to be 14 digits grouped together and applies whether all digits are in the decimal range or whether some are in the range 'A'–'F'. In the first case, where all digits are in the range '0'–'9', the check digit is calculated using the normal base-10 Luhn algorithm; if at least one digit is in the range 'A'–'F', the check-digit algorithm uses base-16 arithmetic. The check digit is never transmitted or stored. It is intended to detect most (but not all) input errors; it is not intended to be a checksum or CRC to detect transmission errors. Consequently, it may be printed on phones or their packaging in case of manual entry of an MEID (e.g. because there is no bar code or the bar code is unreadable).
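A sketch of the check-digit computation, written for an arbitrary base; the assumption here is that the base-16 variant mirrors the base-10 Luhn rule, doubling every second digit from the right and summing the base-16 digits of each doubled value.

```python
def luhn_check_digit(payload: str, base: int = 10) -> str:
    """Luhn check digit in the given base; payload digits are most-significant first."""
    symbols = "0123456789ABCDEF"[:base]
    total = 0
    # Double every second digit, counting from the right of the payload.
    for pos, ch in enumerate(reversed(payload.upper())):
        value = symbols.index(ch)
        if pos % 2 == 0:
            value *= 2
            if value >= base:            # same as summing the two base-`base` digits
                value -= base - 1
        total += value
    return symbols[(-total) % base]

print(luhn_check_digit("7992739871"))         # base-10 example -> '3'
print(luhn_check_digit("A0000000112233", 16)) # hypothetical all-hex MEID-style value
```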
The decimal form is specified to be 18 digits grouped in a 5–5–4–4 pattern and is calculated by converting the manufacturer code portion (32 bits) to decimal, padding on the left with '0' digits to 10 digits, and separately converting the serial number portion to decimal, padding on the left to 8 digits. A check digit can be calculated from the 18-digit result using the standard base-10 Luhn algorithm and appended to the end. Note that to produce this form the MEID digits are treated as base-16 numbers even if all of them are in the range '0'–'9'.
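A sketch of the hexadecimal-to-decimal conversion described above, assuming the first 8 hexadecimal digits (32 bits) map to 10 decimal digits and the last 6 (24 bits) map to 8 decimal digits:

```python
def meid_hex_to_dec(meid_hex: str) -> str:
    """Convert a 14-digit hexadecimal MEID to its 18-digit decimal form."""
    meid_hex = meid_hex.upper()
    if len(meid_hex) != 14 or any(c not in "0123456789ABCDEF" for c in meid_hex):
        return ""                        # invalid MEID
    high = int(meid_hex[:8], 16)         # 32-bit portion -> 10 decimal digits
    low = int(meid_hex[8:], 16)          # 24-bit serial  -> 8 decimal digits
    return f"{high:010d}{low:08d}"

print(meid_hex_to_dec("A0000000112233"))  # hypothetical MEID, not a real device
```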
Because the pESN is formed by a hash on the MEID there is the potential for hash collisions. These will cause an extremely rare condition known as a 'collision' on a pure ESN-only network as the ESN is used for the calculation of the Public Long Code Mask (PLCM) used for communication with the base-station. Two mobiles using the same pESN within the same base-station area (operating on the same frequency) can result in call setup and page failures.
The probability of a collision has been carefully examined.[4]Roughly, it is estimated that even on a heavily loaded network the frequency of this situation is closer to 1 out of 1 million calls than to 1 out of 100 000.
3GPP2 specificationC.S0072provides a solution to this problem by allowing the PLCM to be established by the base station. It is easy for the base station to ensure that all PLCM codes are unique when this is done. This specification also allows the PLCM to be based on the MEID orIMSI.
A different problem occurs when ESN codes are stored in a database (such as forOTASP). In this situation, the risk of at least two phones having the same pseudo-ESN can be calculated using thebirthday paradoxand works out to about a 50 per cent probability in a database with 4,800 pseudo-ESN entries. 3GPP2 specificationsC.S0016(Revision C or higher) andC.S0066have been modified to allow the replacement MEID identifier to be transmitted, resolving this problem.
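The quoted figure can be checked with the usual birthday-problem approximation over the 2^24 possible pESN serial values; this is a back-of-the-envelope estimate, not part of the specifications.

```python
import math

def collision_probability(entries: int, space: int = 2**24) -> float:
    """Approximate probability that at least two of `entries` pESNs collide."""
    return 1.0 - math.exp(-entries * (entries - 1) / (2 * space))

print(round(collision_probability(4800), 3))   # roughly 0.5
```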
Another problem is that messages delivered on the forward paging channel using the pESN as an address could be delivered to multiple mobiles seemingly randomly. This problem can be avoided by usingmobile identification number(MIN) or IMSI based addressing instead.
This short Python script will convert an MEID to a pESN.
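A minimal sketch of such a conversion, following the pESN construction described earlier (0x80 followed by the 24 least-significant bits of the SHA-1 hash of the MEID); hashing the MEID in its 7-byte binary form is an assumption here, and the authoritative definition is in the 3GPP2 specifications.

```python
import hashlib

def meid_to_pesn(meid_hex: str) -> str:
    """Pseudo-ESN: 0x80 followed by the low 24 bits of SHA-1(MEID bytes).

    The MEID is assumed to be hashed in its binary (7-byte) form; consult
    3GPP2 S.R0048 / X.S0008 for the authoritative definition.
    """
    digest = hashlib.sha1(bytes.fromhex(meid_hex)).hexdigest()
    return "80" + digest[-6:].upper()    # last 3 bytes of the digest

print(meid_to_pesn("A0000000112233"))    # hypothetical MEID
```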
The CDG also provides a JavaScript calculator with more conversion options.
This C# method will convert an MEID from HEX to DEC format (or return empty for an invalid MEID HEX value)
https://en.wikipedia.org/wiki/Mobile_equipment_identifier
A Markov decision process (MDP), also called a stochastic dynamic program or stochastic control problem, is a model for sequential decision making when outcomes are uncertain.[1]
Originating from operations research in the 1950s,[2][3] MDPs have since gained recognition in a variety of fields, including ecology, economics, healthcare, telecommunications and reinforcement learning.[4] Reinforcement learning utilizes the MDP framework to model the interaction between a learning agent and its environment. In this framework, the interaction is characterized by states, actions, and rewards. The MDP framework is designed to provide a simplified representation of key elements of artificial intelligence challenges. These elements encompass the understanding of cause and effect, the management of uncertainty and nondeterminism, and the pursuit of explicit goals.[4]
The name comes from its connection to Markov chains, a concept developed by the Russian mathematician Andrey Markov. The "Markov" in "Markov decision process" refers to the underlying structure of state transitions that still follow the Markov property. The process is called a "decision process" because it involves making decisions that influence these state transitions, extending the concept of a Markov chain into the realm of decision-making under uncertainty.
A Markov decision process is a 4-tuple(S,A,Pa,Ra){\displaystyle (S,A,P_{a},R_{a})}, where: S is the set of states (the state space); A is the set of actions (the action space); P_a(s, s′) is the probability that taking action a in state s leads to state s′ at the next time step; and R_a(s, s′) is the immediate reward (or expected immediate reward) received after transitioning from state s to state s′ under action a.
A policy functionπ{\displaystyle \pi }is a (potentially probabilistic) mapping from state space (S{\displaystyle S}) to action space (A{\displaystyle A}).
The goal in a Markov decision process is to find a good "policy" for the decision maker: a functionπ{\displaystyle \pi }that specifies the actionπ(s){\displaystyle \pi (s)}that the decision maker will choose when in states{\displaystyle s}. Once a Markov decision process is combined with a policy in this way, this fixes the action for each state and the resulting combination behaves like aMarkov chain(since the action chosen in states{\displaystyle s}is completely determined byπ(s){\displaystyle \pi (s)}).
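For instance, in the notation above, fixing the policy turns the state dynamics into an ordinary Markov chain with transition kernel {\displaystyle P^{\pi }(s,s')=P_{\pi (s)}(s,s')}.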
The objective is to choose a policy π{\displaystyle \pi } that will maximize some cumulative function of the random rewards, typically the expected discounted sum over a potentially infinite horizon: {\displaystyle E\left[\sum _{t=0}^{\infty }\gamma ^{t}R_{a_{t}}(s_{t},s_{t+1})\right]} (with actions given by the policy, {\displaystyle a_{t}=\pi (s_{t})})
whereγ{\displaystyle \ \gamma \ }is the discount factor satisfying0≤γ≤1{\displaystyle 0\leq \ \gamma \ \leq \ 1}, which is usually close to1{\displaystyle 1}(for example,γ=1/(1+r){\displaystyle \gamma =1/(1+r)}for some discount rater{\displaystyle r}). A lower discount factor motivates the decision maker to favor taking actions early, rather than postpone them indefinitely.
Another possible, but closely related, objective that is commonly used is the H{\displaystyle H}-step return. This time, instead of using a discount factor γ{\displaystyle \ \gamma \ }, the agent is interested only in the first H{\displaystyle H} steps of the process, with each reward having the same weight: {\displaystyle E\left[\sum _{t=0}^{H-1}R_{a_{t}}(s_{t},s_{t+1})\right]}
where H{\displaystyle \ H\ } is the time horizon. Compared to the previous objective, the latter is more commonly used in learning theory.
A policy that maximizes the function above is called anoptimal policyand is usually denotedπ∗{\displaystyle \pi ^{*}}. A particular MDP may have multiple distinct optimal policies. Because of theMarkov property, it can be shown that the optimal policy is a function of the current state, as assumed above.
In many cases, it is difficult to represent the transition probability distributions,Pa(s,s′){\displaystyle P_{a}(s,s')}, explicitly. In such cases, a simulator can be used to model the MDP implicitly by providing samples from the transition distributions. One common form of implicit MDP model is an episodic environment simulator that can be started from an initial state and yields a subsequent state and reward every time it receives an action input. In this manner, trajectories of states, actions, and rewards, often calledepisodesmay be produced.
Another form of simulator is agenerative model, a single step simulator that can generate samples of the next state and reward given any state and action.[5](Note that this is a different meaning from the termgenerative modelin the context of statistical classification.) Inalgorithmsthat are expressed usingpseudocode,G{\displaystyle G}is often used to represent a generative model. For example, the expressions′,r←G(s,a){\displaystyle s',r\gets G(s,a)}might denote the action of sampling from the generative model wheres{\displaystyle s}anda{\displaystyle a}are the current state and action, ands′{\displaystyle s'}andr{\displaystyle r}are the new state and reward. Compared to an episodic simulator, a generative model has the advantage that it can yield data from any state, not only those encountered in a trajectory.
These model classes form a hierarchy of information content: an explicit model trivially yields a generative model through sampling from the distributions, and repeated application of a generative model yields an episodic simulator. In the opposite direction, it is only possible to learn approximate models throughregression. The type of model available for a particular MDP plays a significant role in determining which solution algorithms are appropriate. For example, thedynamic programmingalgorithms described in the next section require an explicit model, andMonte Carlo tree searchrequires a generative model (or an episodic simulator that can be copied at any state), whereas mostreinforcement learningalgorithms require only an episodic simulator.
An example of MDP is the Pole-Balancing model, which comes from classic control theory.
In this example, we have a cart to which a pole is hinged: the states are the observed position and velocity of the cart together with the angle and angular velocity of the pole, the actions are the forces applied to the cart, and the reward reflects how long the pole remains balanced (for example, a positive reward for every time step in which the pole has not fallen).
Solutions for MDPs with finite state and action spaces may be found through a variety of methods such asdynamic programming. The algorithms in this section apply to MDPs with finite state and action spaces and explicitly given transition probabilities and reward functions, but the basic concepts may be extended to handle other problem classes, for example usingfunction approximation. Also, some processes with countably infinite state and action spaces can beexactlyreduced to ones with finite state and action spaces.[6]
The standard family of algorithms to calculate optimal policies for finite state and action MDPs requires storage for two arrays indexed by state:valueV{\displaystyle V}, which contains real values, andpolicyπ{\displaystyle \pi }, which contains actions. At the end of the algorithm,π{\displaystyle \pi }will contain the solution andV(s){\displaystyle V(s)}will contain the discounted sum of the rewards to be earned (on average) by following that solution from states{\displaystyle s}.
The algorithm has two steps, (1) a value update and (2) a policy update, which are repeated in some order for all the states until no further changes take place. Both recursively update a new estimation of the optimal policy and state value using an older estimation of those values.
Their order depends on the variant of the algorithm; one can also do them for all states at once or state by state, and more often to some states than others. As long as no state is permanently excluded from either of the steps, the algorithm will eventually arrive at the correct solution.[7]
In value iteration (Bellman 1957), which is also calledbackward induction,
the π{\displaystyle \pi } function is not used; instead, the value of π(s){\displaystyle \pi (s)} is calculated within V(s){\displaystyle V(s)} whenever it is needed. Substituting the calculation of π(s){\displaystyle \pi (s)} into the calculation of V(s){\displaystyle V(s)} gives the combined step: {\displaystyle V_{i+1}(s):=\max _{a}\left\{\sum _{s'}P_{a}(s,s')\left(R_{a}(s,s')+\gamma V_{i}(s')\right)\right\}}
wherei{\displaystyle i}is the iteration number. Value iteration starts ati=0{\displaystyle i=0}andV0{\displaystyle V_{0}}as a guess of thevalue function. It then iterates, repeatedly computingVi+1{\displaystyle V_{i+1}}for all statess{\displaystyle s}, untilV{\displaystyle V}converges with the left-hand side equal to the right-hand side (which is the "Bellman equation" for this problem[clarification needed]).Lloyd Shapley's 1953 paper onstochastic gamesincluded as a special case the value iteration method for MDPs,[8]but this was recognized only later on.[9]
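As an illustration, a minimal value-iteration sketch in Python for a finite MDP given as explicit tables; the container layout (P[a][s][s'] and R[a][s][s']), the tolerance, and the discount factor are illustrative assumptions:

def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    V = {s: 0.0 for s in states}          # initial guess V_0
    while True:
        # combined step: V_{i+1}(s) = max_a sum_{s'} P_a(s,s') (R_a(s,s') + gamma V_i(s'))
        V_new = {
            s: max(sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2]) for s2 in states)
                   for a in actions)
            for s in states
        }
        converged = max(abs(V_new[s] - V[s]) for s in states) < tol
        V = V_new
        if converged:
            break
    # a greedy policy recovered from the converged values
    policy = {
        s: max(actions,
               key=lambda a: sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2]) for s2 in states))
        for s in states
    }
    return V, policy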
In policy iteration (Howard 1960), step one is performed once, and then step two is performed once, then both are repeated until policy converges. Then step one is again performed once and so on. (Policy iteration was invented by Howard to optimize Sears catalogue mailing, which he had been optimizing using value iteration.[10])
Instead of repeating step two to convergence, it may be formulated and solved as a set of linear equations. These equations are merely obtained by makings=s′{\displaystyle s=s'}in the step two equation.[clarification needed]Thus, repeating step two to convergence can be interpreted as solving the linear equations byrelaxation.
This variant has the advantage that there is a definite stopping condition: when the arrayπ{\displaystyle \pi }does not change in the course of applying step 1 to all states, the algorithm is completed.
Policy iteration is usually slower than value iteration for a large number of possible states.
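For comparison, a minimal policy-iteration sketch under the same illustrative table layout as above; the evaluation step is done by simple iterative relaxation (a fixed number of sweeps) rather than by solving the linear system directly:

def policy_iteration(states, actions, P, R, gamma=0.9, eval_sweeps=500):
    policy = {s: actions[0] for s in states}          # arbitrary initial policy
    while True:
        # policy evaluation: relax V(s) = sum_{s'} P(s,s') (R(s,s') + gamma V(s')) under the fixed policy
        V = {s: 0.0 for s in states}
        for _ in range(eval_sweeps):
            V = {s: sum(P[policy[s]][s][s2] * (R[policy[s]][s][s2] + gamma * V[s2])
                        for s2 in states)
                 for s in states}
        # policy improvement: act greedily with respect to the evaluated values
        new_policy = {
            s: max(actions,
                   key=lambda a: sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2]) for s2 in states))
            for s in states
        }
        if new_policy == policy:                      # stop once the policy no longer changes
            return V, policy
        policy = new_policy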
In modified policy iteration (van Nunen 1976;Puterman & Shin 1978), step one is performed once, and then step two is repeated several times.[11][12]Then step one is again performed once and so on.
In this variant, the steps are preferentially applied to states which are in some way important – whether based on the algorithm (there were large changes inV{\displaystyle V}orπ{\displaystyle \pi }around those states recently) or based on use (those states are near the starting state, or otherwise of interest to the person or program using the algorithm).
Algorithms for finding optimal policies withtime complexitypolynomial in the size of the problem representation exist for finite MDPs. Thus,decision problemsbased on MDPs are in computationalcomplexity classP.[13]However, due to thecurse of dimensionality, the size of the problem representation is often exponential in the number of state and action variables, limiting exact solution techniques to problems that have a compact representation. In practice, online planning techniques such asMonte Carlo tree searchcan find useful solutions in larger problems, and, in theory, it is possible to construct online planning algorithms that can find an arbitrarily near-optimal policy with no computational complexity dependence on the size of the state space.[14]
A Markov decision process is astochastic gamewith only one player.
The solution above assumes that the states{\displaystyle s}is known when action is to be taken; otherwiseπ(s){\displaystyle \pi (s)}cannot be calculated. When this assumption is not true, the problem is called a partially observable Markov decision process or POMDP.
Constrained Markov decision processes (CMDPs) are extensions of Markov decision processes (MDPs). There are three fundamental differences between MDPs and CMDPs.[15]
The method of Lagrange multipliers applies to CMDPs.
Many Lagrangian-based algorithms have been developed.
There are a number of applications for CMDPs. It has recently been used inmotion planningscenarios in robotics.[17]
In discrete-time Markov decision processes, decisions are made at discrete time intervals. However, for continuous-time Markov decision processes, decisions can be made at any time the decision maker chooses. In comparison to discrete-time Markov decision processes, continuous-time Markov decision processes can better model the decision-making process for a system that has continuous dynamics, i.e., system dynamics defined by ordinary differential equations (ODEs). These kinds of applications arise in queueing systems, epidemic processes, and population processes.
Like the discrete-time Markov decision processes, in continuous-time Markov decision processes the agent aims at finding the optimal policy which could maximize the expected cumulated reward. The only difference from the discrete-time case is that, due to the continuous nature of the time variable, the sum is replaced by an integral: {\displaystyle E\left[\int _{0}^{\infty }\gamma ^{t}r(s(t),a(t))\,dt\right]}
where0≤γ<1.{\displaystyle 0\leq \gamma <1.}
If the state space and action space are finite, we could use linear programming to find the optimal policy, which was one of the earliest approaches applied. Here we only consider the ergodic model, which means our continuous-time MDP becomes an ergodic continuous-time Markov chain under a stationary policy. Under this assumption, although the decision maker can make a decision at any time in the current state, there is no benefit in taking multiple actions. It is better to take an action only at the time when the system is transitioning from the current state to another state. Under some conditions,[18] if our optimal value function V∗{\displaystyle V^{*}} is independent of state i{\displaystyle i}, we will have the following inequality:
If there exists a functionh{\displaystyle h}, thenV¯∗{\displaystyle {\bar {V}}^{*}}will be the smallestg{\displaystyle g}satisfying the above equation. In order to findV¯∗{\displaystyle {\bar {V}}^{*}}, we could use the following linear programming model:
y(i,a){\displaystyle y(i,a)} is a feasible solution to the D-LP if y(i,a){\displaystyle y(i,a)} is nonnegative and satisfies the constraints in the D-LP problem. A feasible solution y∗(i,a){\displaystyle y^{*}(i,a)} to the D-LP is said to be an optimal solution if
for all feasible solutions y(i,a){\displaystyle y(i,a)} to the D-LP. Once we have found the optimal solution y∗(i,a){\displaystyle y^{*}(i,a)}, we can use it to establish the optimal policies.
In continuous-time MDP, if the state space and action space are continuous, the optimal criterion could be found by solving the Hamilton–Jacobi–Bellman (HJB) partial differential equation. In order to discuss the HJB equation, we need to reformulate our problem.
D(⋅){\displaystyle D(\cdot )} is the terminal reward function, s(t){\displaystyle s(t)} is the system state vector, a(t){\displaystyle a(t)} is the system control vector we are trying to find, and f(⋅){\displaystyle f(\cdot )} shows how the state vector changes over time. The Hamilton–Jacobi–Bellman equation, in its standard form, is: {\displaystyle 0=\max _{a}\left\{r(s(t),a(t))+{\frac {\partial V(s(t),t)}{\partial t}}+{\frac {\partial V(s(t),t)}{\partial s}}\cdot f(s(t),a(t))\right\}} with terminal condition V(s(T),T)=D(s(T)){\displaystyle V(s(T),T)=D(s(T))}.
We could solve the equation to find the optimal controla(t){\displaystyle a(t)}, which could give us the optimalvalue functionV∗{\displaystyle V^{*}}
Reinforcement learningis an interdisciplinary area ofmachine learningandoptimal controlthat has, as main objective, finding an approximately optimal policy for MDPs where transition probabilities and rewards are unknown.[19]
Reinforcement learning can solve Markov decision processes without explicit specification of the transition probabilities, which policy iteration instead requires. In this setting, transition probabilities and rewards must be learned from experience, i.e. by letting an agent interact with the MDP for a given number of steps. Both on a theoretical and on a practical level, effort is put into maximizing the sample efficiency, i.e. minimizing the number of samples needed to learn a policy whose performance is ε{\displaystyle \varepsilon }-close to the optimal one (due to the stochastic nature of the process, learning the optimal policy with a finite number of samples is, in general, impossible).
For the purpose of this section, it is useful to define a further function, which corresponds to taking the action a{\displaystyle a} and then continuing optimally (or according to whatever policy one currently has): {\displaystyle Q(s,a)=\sum _{s'}P_{a}(s,s')\left(R_{a}(s,s')+\gamma V(s')\right)}
While this function is also unknown, experience during learning is based on(s,a){\displaystyle (s,a)}pairs (together with the outcomes′{\displaystyle s'}; that is, "I was in states{\displaystyle s}and I tried doinga{\displaystyle a}ands′{\displaystyle s'}happened"). Thus, one has an arrayQ{\displaystyle Q}and uses experience to update it directly. This is known asQ-learning.
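A minimal tabular Q-learning sketch in Python; the environment interface (reset/step), the hyperparameters, and the epsilon-greedy exploration rule are illustrative assumptions rather than part of the definition above:

import random

def q_learning(env, states, actions, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = env.reset()                        # assumed: returns an initial state
        done = False
        while not done:
            # epsilon-greedy choice based on the current Q estimates
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s2, r, done = env.step(a)          # assumed: returns (next state, reward, done flag)
            # move Q(s,a) toward the sampled target r + gamma * max_a' Q(s',a')
            best_next = max(Q[(s2, act)] for act in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q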
Another application of MDPs in machine learning theory is called learning automata. This is also one type of reinforcement learning if the environment is stochastic. The first detailed survey of learning automata is by Narendra and Thathachar (1974), who originally described them explicitly as finite-state automata.[20] Similar to reinforcement learning, a learning automata algorithm also has the advantage of solving the problem when probability or rewards are unknown. The difference between learning automata and Q-learning is that the former technique omits the memory of Q-values, but updates the action probability directly to find the learning result. Learning automata is a learning scheme with a rigorous proof of convergence.[21]
In learning automata theory,a stochastic automatonconsists of:
The states of such an automaton correspond to the states of a "discrete-state discrete-parameterMarkov process".[22]At each time stept= 0,1,2,3,..., the automaton reads an input from its environment, updates P(t) to P(t+ 1) byA, randomly chooses a successor state according to the probabilities P(t+ 1) and outputs the corresponding action. The automaton's environment, in turn, reads the action and sends the next input to the automaton.[21]
Other than the rewards, a Markov decision process(S,A,P){\displaystyle (S,A,P)}can be understood in terms ofCategory theory. Namely, letA{\displaystyle {\mathcal {A}}}denote thefree monoidwith generating setA. LetDistdenote theKleisli categoryof theGiry monad. Then a functorA→Dist{\displaystyle {\mathcal {A}}\to \mathbf {Dist} }encodes both the setSof states and the probability functionP.
In this way, Markov decision processes could be generalized from monoids (categories with one object) to arbitrary categories. One can call the result(C,F:C→Dist){\displaystyle ({\mathcal {C}},F:{\mathcal {C}}\to \mathbf {Dist} )}acontext-dependent Markov decision process, because moving from one object to another inC{\displaystyle {\mathcal {C}}}changes the set of available actions and the set of possible states.[citation needed]
The terminology and notation for MDPs are not entirely settled. There are two main streams — one focuses on maximization problems from contexts like economics, using the terms action, reward, value, and calling the discount factorβorγ, while the other focuses on minimization problems from engineering and navigation[citation needed], using the terms control, cost, cost-to-go, and calling the discount factorα. In addition, the notation for the transition probability varies.
In addition, transition probability is sometimes writtenPr(s,a,s′){\displaystyle \Pr(s,a,s')},Pr(s′∣s,a){\displaystyle \Pr(s'\mid s,a)}or, rarely,ps′s(a).{\displaystyle p_{s's}(a).}
|
https://en.wikipedia.org/wiki/Markov_decision_process
|
Aninfluence diagram(ID) (also called arelevance diagram,decision diagramor adecision network) is a compact graphical and mathematical representation of a decision situation. It is a generalization of aBayesian network, in which not onlyprobabilistic inferenceproblems but alsodecision makingproblems (following themaximum expected utilitycriterion) can be modeled and solved.
The ID was first developed in the mid-1970s by decision analysts with intuitive semantics that are easy to understand. It is now widely adopted and is becoming an alternative to the decision tree, which typically suffers from exponential growth in the number of branches with each variable modeled. The ID is directly applicable in team decision analysis, since it allows incomplete sharing of information among team members to be modeled and solved explicitly. Extensions of the ID also find use in game theory as an alternative representation of the game tree.
An ID is adirected acyclic graphwith three types (plus one subtype) ofnodeand three types ofarc(or arrow) between nodes.
Nodes: a decision node (drawn as a rectangle) represents a variable the decision maker controls; an uncertainty node (drawn as an oval) represents a random variable; a deterministic node (a subtype of the uncertainty node, drawn as a double oval) represents a variable whose outcome is fully determined once its predecessors are known; and a value node (drawn as an octagon or diamond) represents the objective being optimized.
Arcs: functional arcs (ending in a value node) indicate which variables the objective depends on; conditional arcs (ending in an uncertainty or deterministic node) indicate probabilistic or functional dependence; and informational arcs (ending in a decision node) indicate that the outcome of the origin node is known before the decision is made.
Given a properly structured ID:
Alternative, information, and preference are termed the decision basis in decision analysis; they represent the three required components of any valid decision situation.
Formally, the semantics of an influence diagram is based on sequential construction of nodes and arcs, which implies a specification of all conditional independencies in the diagram. The specification is defined by the d{\displaystyle d}-separation criterion of Bayesian networks. According to this semantics, every node is probabilistically independent of its non-successor nodes given the outcome of its immediate predecessor nodes. Likewise, a missing arc between non-value node X{\displaystyle X} and non-value node Y{\displaystyle Y} implies that there exists a set of non-value nodes Z{\displaystyle Z}, e.g., the parents of Y{\displaystyle Y}, that renders Y{\displaystyle Y} independent of X{\displaystyle X} given the outcome of the nodes in Z{\displaystyle Z}.
Consider the simple influence diagram representing a situation where a decision-maker is planning their vacation.
The above example highlights the power of the influence diagram in representing an extremely important concept in decision analysis known as the value of information. Consider the following three scenarios: in scenario 1, the decision-maker knows the actual Weather Condition before choosing the Vacation Activity; in scenario 2, only the Weather Forecast is known before the Vacation Activity is chosen; in scenario 3, the Vacation Activity must be chosen with neither the Weather Forecast nor the Weather Condition known.
Scenario 1 is the best possible scenario for this decision situation, since there is no longer any uncertainty about what they care about (Weather Condition) when making their decision. Scenario 3, however, is the worst possible scenario for this decision situation, since they need to make their decision without any hint (Weather Forecast) about how what they care about (Weather Condition) will turn out.
The decision-maker is usually better off (and, on average, definitely no worse off) moving from scenario 3 to scenario 2 through the acquisition of new information. The most they should be willing to pay for such a move is called the value of information on Weather Forecast, which is essentially the value of imperfect information on Weather Condition.
The applicability of this simple ID and the value of information concept is tremendous, especially inmedical decision makingwhen most decisions have to be made with imperfect information about their patients, diseases, etc.
Influence diagrams are hierarchical and can be defined either in terms of their structure or in greater detail in terms of the functional and numerical relation between diagram elements. An ID that is consistently defined at all levels—structure, function, and number—is a well-defined mathematical representation and is referred to as awell-formed influence diagram(WFID). WFIDs can be evaluated usingreversalandremovaloperations to yield answers to a large class of probabilistic, inferential, and decision questions. More recent techniques have been developed byartificial intelligenceresearchers concerningBayesian network inference(belief propagation).
An influence diagram having only uncertainty nodes (i.e., a Bayesian network) is also called a relevance diagram. An arc connecting node A to B implies not only that "A is relevant to B", but also that "B is relevant to A" (i.e., relevance is a symmetric relationship).
|
https://en.wikipedia.org/wiki/Influence_diagram
|
Incomputing, afile systemorfilesystem(often abbreviated toFSorfs) governsfileorganization and access. Alocalfile system is a capability of anoperating systemthat services the applications running on the samecomputer.[1][2]Adistributed file systemis aprotocolthat provides file access betweennetworkedcomputers.
A file system provides adata storageservicethat allowsapplicationsto sharemass storage. Without a file system, applications could access the storage inincompatibleways that lead toresource contention,data corruptionanddata loss.
There are many file systemdesignsandimplementations– with various structure and features and various resulting characteristics such as speed, flexibility, security, size and more.
File systems have been developed for many types ofstorage devices, includinghard disk drives(HDDs),solid-state drives(SSDs),magnetic tapesandoptical discs.[3]
A portion of the computermain memorycan be set up as aRAM diskthat serves as a storage device for a file system. File systems such astmpfscan store files invirtual memory.
A virtual file system provides access to files that are either computed on request, called virtual files (see procfs and sysfs), or are mapped onto another file system used as backing storage.
Fromc.1900and before the advent of computers the termsfile system,filing systemandsystem for filingwere used to describe methods of organizing, storing and retrieving paper documents.[4]By 1961, the termfile systemwas being applied to computerized filing alongside the original meaning.[5]By 1964, it was in general use.[6]
A local file system'sarchitecturecan be described aslayers of abstractioneven though a particular file system design may not actually separate the concepts.[7]
Thelogical file systemlayer provides relatively high-level access via anapplication programming interface(API) for file operations including open, close, read and write – delegating operations to lower layers. This layer manages open file table entries and per-process file descriptors.[8]It provides file access, directory operations, security and protection.[7]
Thevirtual file system, an optional layer, supports multiple concurrent instances of physical file systems, each of which is called a file system implementation.[8]
Thephysical file systemlayer provides relatively low-level access to a storage device (e.g. disk). It reads and writesdata blocks, providesbufferingand othermemory managementand controls placement of blocks in specific locations on the storage medium. This layer usesdevice driversorchannel I/Oto drive the storage device.[7]
Afile name, orfilename, identifies a file to consuming applications and in some cases users.
A file name is unique so that an application can refer to exactly one file for a particular name. If the file system supports directories, then generally file name uniqueness is enforced within the context of each directory. In other words, a storage device can contain multiple files with the same name, but not in the same directory.
Most file systems restrict the length of a file name.
Some file systems match file names ascase sensitiveand others as case insensitive. For example, the namesMYFILEandmyfilematch the same file for case insensitive, but different files for case sensitive.
Most modern file systems allow a file name to contain a wide range of characters from theUnicodecharacter set. Some restrict characters such as those used to indicate special attributes such as a device, device type, directory prefix, file path separator, or file type.
File systems typically support organizing files intodirectories, also calledfolders, which segregate files into groups.
This may be implemented by associating the file name with an index in atable of contentsor aninodein aUnix-likefile system.
Directory structures may be flat (i.e. linear), or allow hierarchies by allowing a directory to contain directories, called subdirectories.
The first file system to support arbitrary hierarchies of directories was used in the Multics operating system.[9] The native file systems of Unix-like systems also support arbitrary directory hierarchies, as do Apple's Hierarchical File System and its successor HFS+ in classic Mac OS, the FAT file system in MS-DOS 2.0 and later versions of MS-DOS and in Microsoft Windows, the NTFS file system in the Windows NT family of operating systems, and the ODS-2 (On-Disk Structure-2) and higher levels of the Files-11 file system in OpenVMS.
In addition to data, the file content, a file system also manages associatedmetadatawhich may include but is not limited to:
A file system stores associated metadata separate from the content of the file.
Most file systems store the names of all the files in one directory in one place—the directory table for that directory—which is often stored like any other file.
Many file systems put only some of the metadata for a file in the directory table, and the rest of the metadata for that file in a completely separate structure, such as theinode.
Most file systems also store metadata not associated with any one particular file.
Such metadata includes information about unused regions—free space bitmap,block availability map—and information aboutbad sectors.
Often such information about anallocation groupis stored inside the allocation group itself.
Additional attributes can be associated on file systems, such asNTFS,XFS,ext2,ext3, some versions ofUFS, andHFS+, usingextended file attributes. Some file systems provide for user defined attributes such as the author of the document, the character encoding of a document or the size of an image.
Some file systems allow for different data collections to be associated with one file name. These separate collections may be referred to as streams or forks. Apple has long used a forked file system on the Macintosh, and Microsoft supports streams in NTFS. Some file systems maintain multiple past revisions of a file under a single file name; the file name by itself retrieves the most recent version, while prior saved versions can be accessed using a special naming convention such as "filename;4" or "filename(-4)" to access the version four saves ago.
Seecomparison of file systems § Metadatafor details on which file systems support which kinds of metadata.
A local file system tracks which areas of storage belong to which file and which are not being used.
When a file system creates a file, it allocates space for data. Some file systems permit or require specifying an initial space allocation and subsequent incremental allocations as the file grows.
To delete a file, the file system records that the file's space is free; available to use for another file.
A local file system manages storage space to provide a level of reliability and efficiency. Generally, it allocates storage device space in a granular manner, usually multiple physical units (i.e. bytes). For example, Apple DOS of the early 1980s allocated 256-byte sectors on a 140-kilobyte floppy disk using a track/sector map.[citation needed]
The granular nature results in unused space, sometimes calledslack space, for each file except for those that have the rare size that is a multiple of the granular allocation.[10]For a 512-byte allocation, the average unused space is 256 bytes. For 64 KB clusters, the average unused space is 32 KB.
Generally, the allocation unit size is set when the storage is configured.
Choosing a relatively small size compared to the files stored results in excessive access overhead.
Choosing a relatively large size results in excessive unused space.
Choosing an allocation size based on the average size of files expected to be in the storage tends to minimize unusable space.
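A small Python sketch of this arithmetic; the file sizes and the allocation-unit size are illustrative:

def slack_space(file_sizes, allocation_unit):
    # unused space per file is what is needed to round its size up to a whole allocation unit
    return sum((-size) % allocation_unit for size in file_sizes)

# e.g. three files stored with a 4096-byte allocation unit:
# slack_space([100, 5000, 8192], 4096) -> 3996 + 3192 + 0 = 7188 bytes of slack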
As a file system creates, modifies and deletes files, the underlying storage representation may becomefragmented. Files and the unused space between files will occupy allocation blocks that are not contiguous.
A file becomes fragmented if space needed to store its content cannot be allocated in contiguous blocks. Free space becomes fragmented when files are deleted.[11]
This is invisible to the end user and the system still works correctly. However, this can degrade performance on some storage hardware that works better with contiguous blocks, such as hard disk drives. Other hardware, such as solid-state drives, is not affected by fragmentation.
A file system often supports access control of data that it manages.
The intent of access control is often to prevent certain users from reading or modifying certain files.
Access control can also restrict access by program in order to ensure that data is modified in a controlled way. Examples include passwords stored in the metadata of the file or elsewhere andfile permissionsin the form of permission bits,access control lists, orcapabilities. The need for file system utilities to be able to access the data at the media level to reorganize the structures and provide efficient backup usually means that these are only effective for polite users but are not effective against intruders.
Methods for encrypting file data are sometimes included in the file system. This is very effective since there is no need for file system utilities to know the encryption seed to effectively manage the data. The risks of relying on encryption include the fact that an attacker can copy the data and use brute force to decrypt the data. Additionally, losing the seed means losing the data.
Some operating systems allow a system administrator to enabledisk quotasto limit a user's use of storage space.
A file system typically ensures that stored data remains consistent in both normal operations and exceptional situations such as a system crash, a power failure, or a failure of the storage medium.
Recovery from exceptional situations may include updating metadata, directory entries and handling data that was buffered but not written to storage media.
A file system might record events to allow analysis of issues such as:
Many file systems access data as a stream ofbytes. Typically, to read file data, a program provides amemory bufferand the file system retrieves data from the medium and then writes the data to the buffer. A write involves the program providing a buffer of bytes that the file system reads and then stores to the medium.
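A minimal Python sketch of this byte-stream style of access through a typical file API; the file name and sizes are illustrative:

# write: the program supplies a buffer of bytes that the file system stores on the medium
with open("example.bin", "wb") as f:
    f.write(b"\x00\x01\x02\x03")

# read: the file system retrieves data from the medium into the program's buffer
with open("example.bin", "rb") as f:
    buffer = f.read(4)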
Some file systems, or layers on top of a file system, allow a program to define arecordso that a program can read and write data as a structure; not an unorganized sequence of bytes.
If afixed lengthrecord definition is used, then locating the nthrecord can be calculated mathematically, which is relatively fast compared to parsing the data for record separators.
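For example (a sketch; the record length and file name are illustrative), the offset of the nth fixed-length record can be computed directly instead of scanning for record separators:

RECORD_LEN = 64

def read_record(path, n):
    with open(path, "rb") as f:
        f.seek(n * RECORD_LEN)        # record n starts at byte offset n * RECORD_LEN
        return f.read(RECORD_LEN)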
An identification for each record, also known as a key, allows a program to read, write and update records without regard to their location in storage. Such storage requires managing blocks of media, usually separating key blocks and data blocks. Efficient algorithms can be developed with pyramid structures for locating records.[12]
Typically, a file system can be managed by the user via various utility programs.
Some utilities allow the user to create, configure and remove an instance of a file system. It may allow extending or truncating the space allocated to the file system.
Directory utilities may be used to create, rename and deletedirectory entries, which are also known asdentries(singular:dentry),[13]and to alter metadata associated with a directory. Directory utilities may also include capabilities to create additional links to a directory (hard linksinUnix), to rename parent links (".." inUnix-likeoperating systems),[clarification needed]and to create bidirectional links to files.
File utilities create, list, copy, move and delete files, and alter metadata. They may be able to truncate data, truncate or extend space allocation, append to, move, and modify files in-place. Depending on the underlying structure of the file system, they may provide a mechanism to prepend to or truncate from the beginning of a file, insert entries into the middle of a file, or delete entries from a file. Utilities to free space for deleted files, if the file system provides an undelete function, also belong to this category.
Some file systems defer operations such as reorganization of free space, secure erasing of free space, and rebuilding of hierarchical structures by providing utilities to perform these functions at times of minimal activity. An example is the file systemdefragmentationutilities.
Some of the most important features of file system utilities are supervisory activities which may involve bypassing ownership or direct access to the underlying device. These include high-performance backup and recovery, data replication, and reorganization of various data structures and allocation tables within the file system.
Utilities, libraries and programs usefile system APIsto make requests of the file system. These include data transfer, positioning, updating metadata, managing directories, managing access specifications, and removal.
Frequently, retail systems are configured with a single file system occupying the entirestorage device.
Another approach is to partition the disk so that several file systems with different attributes can be used. One file system, for use as browser cache or email storage, might be configured with a small allocation size. This keeps the activity of creating and deleting files typical of browser activity in a narrow area of the disk where it will not interfere with other file allocations. Another partition might be created for the storage of audio or video files with a relatively large block size. Yet another may normally be set read-only and only periodically be set writable. Some file systems, such as ZFS and APFS, support multiple file systems sharing a common pool of free blocks, supporting several file systems with different attributes without having to reserve a fixed amount of space for each file system.[14][15]
A third approach, which is mostly used in cloud systems, is to use "disk images" to house additional file systems, with the same attributes or not, within another (host) file system as a file. A common example is virtualization: one user can run an experimental Linux distribution (using theext4file system) in a virtual machine under his/her production Windows environment (usingNTFS). The ext4 file system resides in a disk image, which is treated as a file (or multiple files, depending on thehypervisorand settings) in the NTFS host file system.
Having multiple file systems on a single system has the additional benefit that in the event of a corruption of a single file system, the remaining file systems will frequently still be intact. This includes virus destruction of thesystemfile system or even a system that will not boot. File system utilities which require dedicated access can be effectively completed piecemeal. In addition,defragmentationmay be more effective. Several system maintenance utilities, such as virus scans and backups, can also be processed in segments. For example, it is not necessary to backup the file system containing videos along with all the other files if none have been added since the last backup. As for the image files, one can easily "spin off" differential images which contain only "new" data written to the master (original) image. Differential images can be used for both safety concerns (as a "disposable" system - can be quickly restored if destroyed or contaminated by a virus, as the old image can be removed and a new image can be created in matter of seconds, even without automated procedures) and quick virtual machine deployment (since the differential images can be quickly spawned using a script in batches).
A disk file system takes advantage of the ability of disk storage media to randomly address data in a short amount of time. Additional considerations include the speed of accessing data following that initially requested and the anticipation that the following data may also be requested. This permits multiple users (or processes) access to various data on the disk without regard to the sequential location of the data. Examples include FAT (FAT12, FAT16, FAT32), exFAT, NTFS, ReFS, HFS and HFS+, HPFS, APFS, UFS, ext2, ext3, ext4, XFS, btrfs, Files-11, Veritas File System, VMFS, ZFS, ReiserFS, NSS and ScoutFS. Some disk file systems are journaling file systems or versioning file systems.
ISO 9660andUniversal Disk Format(UDF) are two common formats that targetCompact Discs,DVDsandBlu-raydiscs.Mount Rainieris an extension to UDF supported since 2.6 series of the Linux kernel and since Windows Vista that facilitates rewriting to DVDs.
Aflash file systemconsiders the special abilities, performance and restrictions offlash memorydevices. Frequently a disk file system can use a flash memory device as the underlying storage media, but it is much better to use a file system specifically designed for a flash device.[16]
Atape file systemis a file system and tape format designed to store files on tape.Magnetic tapesare sequential storage media with significantly longer random data access times than disks, posing challenges to the creation and efficient management of a general-purpose file system.
In a disk file system there is typically a master file directory, and a map of used and free data regions. Any file additions, changes, or removals require updating the directory and the used/free maps. Random access to data regions is measured in milliseconds so this system works well for disks.
Tape requires linear motion to wind and unwind potentially very long reels of media. This tape motion may take several seconds to several minutes to move the read/write head from one end of the tape to the other.
Consequently, a master file directory and usage map can be extremely slow and inefficient with tape. Writing typically involves reading the block usage map to find free blocks for writing, updating the usage map and directory to add the data, and then advancing the tape to write the data in the correct spot. Each additional file write requires updating the map and directory and writing the data, which may take several seconds to occur for each file.
Tape file systems instead typically allow for the file directory to be spread across the tape intermixed with the data, referred to asstreaming, so that time-consuming and repeated tape motions are not required to write new data.
However, a side effect of this design is that reading the file directory of a tape usually requires scanning the entire tape to read all the scattered directory entries. Most data archiving software that works with tape storage will store a local copy of the tape catalog on a disk file system, so that adding files to a tape can be done quickly without having to rescan the tape media. The local tape catalog copy is usually discarded if not used for a specified period of time, at which point the tape must be re-scanned if it is to be used in the future.
IBM has developed a file system for tape called theLinear Tape File System. The IBM implementation of this file system has been released as the open-sourceIBM Linear Tape File System — Single Drive Edition (LTFS-SDE)product. The Linear Tape File System uses a separate partition on the tape to record the index meta-data, thereby avoiding the problems associated with scattering directory entries across the entire tape.
Writing data to a tape, erasing, or formatting a tape is often a significantly time-consuming process and can take several hours on large tapes.[a]With many data tape technologies it is not necessary to format the tape before over-writing new data to the tape. This is due to the inherently destructive nature of overwriting data on sequential media.
Because of the time it can take to format a tape, typically tapes are pre-formatted so that the tape user does not need to spend time preparing each new tape for use. All that is usually necessary is to write an identifying media label to the tape before use, and even this can be automatically written by software when a new tape is used for the first time.
Another concept for file management is the idea of a database-based file system. Instead of, or in addition to, hierarchical structured management, files are identified by their characteristics, like type of file, topic, author, or similarrich metadata.[17]
IBM DB2 for i[18] (formerly known as DB2/400 and DB2 for i5/OS) is a database file system that is part of the object-based IBM i[19] operating system (formerly known as OS/400 and i5/OS), incorporating a single level store and running on IBM Power Systems (formerly known as AS/400 and iSeries). It was designed by Frank G. Soltis, IBM's former chief scientist for IBM i. Around 1978 to 1988, Frank G. Soltis and his team at IBM Rochester successfully designed and applied technologies like the database file system, where others such as Microsoft later failed.[20] These technologies are informally known as 'Fortress Rochester'[citation needed] and were in a few basic aspects extended from early mainframe technologies, but in many ways more advanced from a technological perspective.[citation needed]
Some other projects that are not "pure" database file systems but that use some aspects of a database file system:
Some programs need to either make multiple file system changes, or, if one or more of the changes fail for any reason, make none of the changes. For example, a program which is installing or updating software may write executables, libraries, and/or configuration files. If some of the writing fails and the software is left partially installed or updated, the software may be broken or unusable. An incomplete update of a key system utility, such as the commandshell, may leave the entire system in an unusable state.
Transaction processingintroduces theatomicityguarantee, ensuring that operations inside of a transaction are either all committed or the transaction can be aborted and the system discards all of its partial results. This means that if there is a crash or power failure, after recovery, the stored state will be consistent. Either the software will be completely installed or the failed installation will be completely rolled back, but an unusable partial install will not be left on the system. Transactions also provide theisolationguarantee[clarification needed], meaning that operations within a transaction are hidden from other threads on the system until the transaction commits, and that interfering operations on the system will be properlyserializedwith the transaction.
Windows, beginning with Vista, added transaction support toNTFS, in a feature calledTransactional NTFS, but its use is now discouraged.[21]There are a number of research prototypes of transactional file systems for UNIX systems, including the Valor file system,[22]Amino,[23]LFS,[24]and a transactionalext3file system on the TxOS kernel,[25]as well as transactional file systems targeting embedded systems, such as TFFS.[26]
Ensuring consistency across multiple file system operations is difficult, if not impossible, without file system transactions.File lockingcan be used as aconcurrency controlmechanism for individual files, but it typically does not protect the directory structure or file metadata. For instance, file locking cannot preventTOCTTOUrace conditions on symbolic links.
File locking also cannot automatically roll back a failed operation, such as a software upgrade; this requires atomicity.
Journaling file systems are one technique used to introduce transaction-level consistency to file system structures. Journal transactions are not exposed to programs as part of the OS API; they are only used internally to ensure consistency at the granularity of a single system call.
Data backup systems typically do not provide support for direct backup of data stored in a transactional manner, which makes the recovery of reliable and consistent data sets difficult. Most backup software simply notes what files have changed since a certain time, regardless of the transactional state shared across multiple files in the overall dataset. As a workaround, some database systems simply produce an archived state file containing all data up to that point, and the backup software only backs that up and does not interact directly with the active transactional databases at all. Recovery requires separate recreation of the database from the state file after the file has been restored by the backup software.
Anetwork file systemis a file system that acts as a client for a remote file access protocol, providing access to files on a server. Programs using local interfaces can transparently create, manage and access hierarchical directories and files in remote network-connected computers. Examples of network file systems include clients for theNFS,[27]AFS,SMBprotocols, and file-system-like clients forFTPandWebDAV.
Ashared disk file systemis one in which a number of machines (usually servers) all have access to the same external disk subsystem (usually astorage area network). The file system arbitrates access to that subsystem, preventing write collisions.[28]Examples includeGFS2fromRed Hat,GPFS, now known as Spectrum Scale, from IBM,SFSfrom DataPlow,CXFSfromSGI,StorNextfromQuantum Corporationand ScoutFS from Versity.
Some file systems expose elements of the operating system as files so they can be acted on via thefile system API. This is common inUnix-likeoperating systems, and to a lesser extent in other operating systems. Examples include:
In the 1970s disk and digital tape devices were too expensive for some earlymicrocomputerusers. An inexpensive basic data storage system was devised that used commonaudio cassettetape.
When the system needed to write data, the user was notified to press "RECORD" on the cassette recorder, then press "RETURN" on the keyboard to notify the system that the cassette recorder was recording. The system wrote a sound to provide time synchronization, thenmodulated soundsthat encoded a prefix, the data, achecksumand a suffix. When the system needed to read data, the user was instructed to press "PLAY" on the cassette recorder. The system wouldlistento the sounds on the tape waiting until a burst of sound could be recognized as the synchronization. The system would then interpret subsequent sounds as data. When the data read was complete, the system would notify the user to press "STOP" on the cassette recorder. It was primitive, but it (mostly) worked. Data was stored sequentially, usually in an unnamed format, although some systems (such as theCommodore PETseries of computers) did allow the files to be named. Multiple sets of data could be written and located by fast-forwarding the tape and observing at the tape counter to find the approximate start of the next data region on the tape. The user might have to listen to the sounds to find the right spot to begin playing the next data region. Some implementations even included audible sounds interspersed with the data.
In a flat file system, there are nosubdirectories; directory entries for all files are stored in a single directory.
When floppy disk media was first available this type of file system was adequate due to the relatively small amount of data space available. CP/M machines featured a flat file system, where files could be assigned to one of 16 user areas, and generic file operations could be narrowed to work on one area instead of defaulting to all of them. These user areas were no more than special attributes associated with the files; that is, it was not necessary to define a specific quota for each of these areas, and files could be added to groups for as long as there was still free storage space on the disk. The early Apple Macintosh also featured a flat file system, the Macintosh File System. It was unusual in that the file management program (Macintosh Finder) created the illusion of a partially hierarchical filing system on top of EMFS. This structure required every file to have a unique name, even if it appeared to be in a separate folder. IBM DOS/360 and OS/360 store entries for all files on a disk pack (volume) in a directory on the pack called a Volume Table of Contents (VTOC).
While simple, flat file systems become awkward as the number of files grows, making it difficult to organize data into related groups of files.
A recent addition to the flat file system family is Amazon's S3, a remote storage service, which is intentionally simplistic to allow users the ability to customize how their data is stored. The only constructs are buckets (imagine a disk drive of unlimited size) and objects (similar, but not identical, to the standard concept of a file). Advanced file management is made possible by the ability to use nearly any character (including '/') in an object's name, and the ability to select subsets of a bucket's content based on identical prefixes.
Anoperating system(OS) typically supports one or more file systems. Sometimes an OS and its file system are so tightly interwoven that it is difficult to describe them independently.
An OS typically provides file system access to the user. Often an OS providescommand line interface, such asUnix shell, WindowsCommand PromptandPowerShell, andOpenVMS DCL. An OS often also providesgraphical user interfacefile browserssuch as MacOSFinderand WindowsFile Explorer.
Unix-likeoperating systems create a virtual file system, which makes all the files on all the devices appear to exist in a single hierarchy. This means, in those systems, there is oneroot directory, and every file existing on the system is located under it somewhere. Unix-like systems can use aRAM diskor network shared resource as its root directory.
Unix-like systems assign a device name to each device, but this is not how the files on that device are accessed. Instead, to gain access to files on another device, the operating system must first be informed where in the directory tree those files should appear. This process is calledmountinga file system. For example, to access the files on aCD-ROM, one must tell the operating system "Take the file system from this CD-ROM and make it appear under such-and-such directory." The directory given to the operating system is called themount point– it might, for example, be/media. The/mediadirectory exists on many Unix systems (as specified in theFilesystem Hierarchy Standard) and is intended specifically for use as a mount point for removable media such as CDs, DVDs, USB drives or floppy disks. It may be empty, or it may contain subdirectories for mounting individual devices. Generally, only theadministrator(i.e.root user) may authorize the mounting of file systems.
Unix-likeoperating systems often include software and tools that assist in the mounting process and provide it new functionality. Some of these strategies have been coined "auto-mounting" as a reflection of their purpose.
Linuxsupports numerous file systems, but common choices for the system disk on a block device include the ext* family (ext2,ext3andext4),XFS,JFS, andbtrfs. For raw flash without aflash translation layer(FTL) orMemory Technology Device(MTD), there areUBIFS,JFFS2andYAFFS, among others.SquashFSis a common compressed read-only file system.
Solarisin earlier releases defaulted to (non-journaled or non-logging)UFSfor bootable and supplementary file systems. Solaris defaulted to, supported, and extended UFS.
Support for other file systems and significant enhancements were added over time, includingVeritas SoftwareCorp. (journaling)VxFS, Sun Microsystems (clustering)QFS, Sun Microsystems (journaling) UFS, and Sun Microsystems (open source, poolable, 128 bit compressible, and error-correcting)ZFS.
Kernel extensions were added to Solaris to allow for bootable VeritasVxFSoperation. Logging orjournalingwas added to UFS in Sun'sSolaris 7. Releases ofSolaris 10, Solaris Express,OpenSolaris, and other open source variants of the Solaris operating system later supported bootableZFS.
Logical Volume Managementallows for spanning a file system across multiple devices for the purpose of adding redundancy, capacity, and/or throughput. Legacy environments in Solaris may useSolaris Volume Manager(formerly known asSolstice DiskSuite). Multiple operating systems (including Solaris) may useVeritas Volume Manager. Modern Solaris based operating systems eclipse the need for volume management through leveraging virtual storage pools inZFS.
macOS (formerly Mac OS X)uses theApple File System(APFS), which in 2017 replaced a file system inherited fromclassic Mac OScalledHFS Plus(HFS+). Apple also uses the term "Mac OS Extended" for HFS+.[29]HFS Plus is ametadata-rich andcase-preservingbut (usually)case-insensitivefile system. Due to the Unix roots of macOS, Unix permissions were added to HFS Plus. Later versions of HFS Plus added journaling to prevent corruption of the file system structure and introduced a number of optimizations to the allocation algorithms in an attempt to defragment files automatically without requiring an external defragmenter.
File names can be up to 255 characters. HFS Plus usesUnicodeto store file names. On macOS, thefiletypecan come from thetype code, stored in file's metadata, or thefilename extension.
HFS Plus has three kinds of links: Unix-stylehard links, Unix-stylesymbolic links, andaliases. Aliases are designed to maintain a link to their original file even if they are moved or renamed; they are not interpreted by the file system itself, but by the File Manager code inuserland.
macOS 10.13 High Sierra, which was announced on June 5, 2017, at Apple's WWDC event, uses theApple File Systemonsolid-state drives.
macOS also supported theUFSfile system, derived from theBSDUnix Fast File System viaNeXTSTEP. However, as ofMac OS X Leopard, macOS could no longer be installed on a UFS volume, nor can a pre-Leopard system installed on a UFS volume be upgraded to Leopard.[30]As ofMac OS X LionUFS support was completely dropped.
Newer versions of macOS are capable of reading and writing to the legacyFATfile systems (16 and 32) common on Windows. They are also capable ofreadingthe newerNTFSfile systems for Windows. In order towriteto NTFS file systems on macOS versions prior toMac OS X Snow Leopardthird-party software is necessary. Mac OS X 10.6 (Snow Leopard) and later allow writing to NTFS file systems, but only after a non-trivial system setting change (third-party software exists that automates this).[31]
Finally, macOS supports reading and writing of theexFATfile system since Mac OS X Snow Leopard, starting from version 10.6.5.[32]
OS/21.2 introduced theHigh Performance File System(HPFS). HPFS supports mixed case file names in differentcode pages, long file names (255 characters), more efficient use of disk space, an architecture that keeps related items close to each other on the disk volume, less fragmentation of data,extent-basedspace allocation, aB+ treestructure for directories, and the root directory located at the midpoint of the disk, for faster average access. Ajournaled filesystem(JFS) was shipped in 1999.
PC-BSDis a desktop version of FreeBSD, which inheritsFreeBSD'sZFSsupport, similarly toFreeNAS. The new graphical installer ofPC-BSDcan handle/ (root) on ZFSandRAID-Zpool installs anddisk encryptionusingGeliright from the start in an easy convenient (GUI) way. The current PC-BSD 9.0+ 'Isotope Edition' has ZFS filesystem version 5 and ZFS storage pool version 28.
Plan 9 from Bell Labs treats everything as a file and accesses all objects as a file would be accessed (i.e., there is no ioctl or mmap): networking, graphics, debugging, authentication, capabilities, encryption, and other services are accessed via I/O operations on file descriptors. The 9P protocol removes the difference between local and remote files. File systems in Plan 9 are organized with the help of private, per-process namespaces, allowing each process to have a different view of the many file systems that provide resources in a distributed system.
TheInfernooperating system shares these concepts with Plan 9.
Windows makes use of the FAT, NTFS, exFAT, Live File System and ReFS file systems (the last of these is only supported and usable in Windows Server 2012, Windows Server 2016, Windows 8, Windows 8.1, and Windows 10; Windows cannot boot from it).
Windows uses a drive letter abstraction at the user level to distinguish one disk or partition from another. For example, the path C:\WINDOWS represents a directory WINDOWS on the partition represented by the letter C. Drive C: is most commonly used for the primary hard disk drive partition, on which Windows is usually installed and from which it boots. This "tradition" has become so firmly ingrained that bugs exist in many applications which make assumptions that the drive that the operating system is installed on is C. The use of drive letters, and the tradition of using "C" as the drive letter for the primary hard disk drive partition, can be traced to MS-DOS, where the letters A and B were reserved for up to two floppy disk drives. This in turn derived from CP/M in the 1970s, and ultimately from IBM's CP/CMS of 1967.
The family of FAT file systems is supported by almost all operating systems for personal computers, including all versions of Windows and MS-DOS/PC DOS, OS/2, and DR-DOS. (PC DOS is an OEM version of MS-DOS; MS-DOS was originally based on SCP's 86-DOS. DR-DOS was based on Digital Research's Concurrent DOS, a successor of CP/M-86.) The FAT file systems are therefore well suited as a universal exchange format between computers and devices of almost any type and age.
The FAT file system traces its roots back to an (incompatible) 8-bit FAT precursor inStandalone Disk BASICand the short-livedMDOS/MIDASproject.[citation needed]
Over the years, the file system has been expanded from FAT12 to FAT16 and FAT32. Various features have been added to the file system, including subdirectories, codepage support, extended attributes, and long filenames. Third parties such as Digital Research have incorporated optional support for deletion tracking, and volume/directory/file-based multi-user security schemes to support file and directory passwords and permissions such as read/write/execute/delete access rights. Most of these extensions are not supported by Windows.
The FAT12 and FAT16 file systems had a limit on the number of entries in theroot directoryof the file system and had restrictions on the maximum size of FAT-formatted disks orpartitions.
FAT32 addresses the limitations in FAT12 and FAT16, except for the file size limit of close to 4 GB, but it remains limited compared to NTFS.
FAT12, FAT16 and FAT32 also have a limit of eight characters for the file name, and three characters for the extension (such as .exe). This is commonly referred to as the 8.3 filename limit. VFAT, an optional extension to FAT12, FAT16 and FAT32, introduced in Windows 95 and Windows NT 3.5, allowed long file names (LFN) to be stored in the FAT file system in a backwards compatible fashion.
NTFS, introduced with the Windows NT operating system in 1993, allowed ACL-based permission control. Other features also supported by NTFS include hard links, multiple file streams, attribute indexing, quota tracking, sparse files, encryption, compression, and reparse points (directories working as mount-points for other file systems, symlinks, junctions, remote storage links).
exFAThas certain advantages over NTFS with regard tofile system overhead.[citation needed]
exFAT is not backward compatible with FAT file systems such as FAT12, FAT16 or FAT32. The file system is supported in newer Windows systems, such as Windows XP, Windows Server 2003, Windows Vista, Windows 2008, Windows 7, Windows 8, Windows 8.1, Windows 10 and Windows 11.
exFAT is supported in macOS starting with version 10.6.5 (Snow Leopard).[32]Support in other operating systems is sparse since implementing support for exFAT requires a license. exFAT is the only file system that is fully supported on both macOS and Windows that can hold files larger than 4 GB.[33][34]
Prior to the introduction of VSAM, OS/360 systems implemented a hybrid file system. The system was designed to easily support removable disk packs, so the information relating to all files on one disk (volume in IBM terminology) is stored on that disk in a flat system file called the Volume Table of Contents (VTOC). The VTOC stores all metadata for the file. Later a hierarchical directory structure was imposed with the introduction of the System Catalog, which can optionally catalog files (datasets) on resident and removable volumes. The catalog only contains information to relate a dataset to a specific volume. If the user requests access to a dataset on an offline volume, and they have suitable privileges, the system will attempt to mount the required volume. Cataloged and non-cataloged datasets can still be accessed using information in the VTOC, bypassing the catalog, if the required volume id is provided to the OPEN request. Still later, the VTOC was indexed to speed up access.
The IBM Conversational Monitor System (CMS) component of VM/370 uses a separate flat file system for each virtual disk (minidisk). File data and control information are scattered and intermixed. The anchor is a record called the Master File Directory (MFD), always located in the fourth block on the disk. Originally CMS used fixed-length 800-byte blocks, but later versions used larger blocks of up to 4K. Access to a data record requires two levels of indirection, where the file's directory entry (called a File Status Table (FST) entry) points to blocks containing a list of addresses of the individual records.
Data on the AS/400 and its successors consists of system objects mapped into the system virtual address space in a single-level store. Many types of objects are defined, including the directories and files found in other file systems. File objects, along with other types of objects, form the basis of the AS/400's support for an integrated relational database.
File systems limit storable data capacity, with limits generally driven by the typical size of storage devices at the time the file system is designed and by the sizes anticipated for the foreseeable future.
Since storage sizes have increased at a near-exponential rate (see Moore's law), newer storage devices often exceed existing file system limits within only a few years after introduction. This requires new file systems with ever-increasing capacity.
With higher capacity, the need for capabilities and therefore complexity increases as well. File system complexity typically varies proportionally with available storage capacity. Capacity issues aside, the file systems of early 1980s home computers with 50 KB to 512 KB of storage would not be a reasonable choice for modern storage systems with hundreds of gigabytes of capacity. Likewise, modern file systems would not be a reasonable choice for these early systems, since the complexity of modern file system structures would quickly consume the limited capacity of early storage systems.
It may be advantageous or necessary to move files to a different file system than the one in which they currently reside. Reasons include the need for an increase in the space requirements beyond the limits of the current file system. The depth of path may need to be increased beyond the restrictions of the file system. There may be performance or reliability considerations. Providing access to another operating system which does not support the existing file system is another reason.
In some cases conversion can be done in-place, although migrating the file system is more conservative, as it involves creating a copy of the data, and is therefore recommended.[39] On Windows, FAT and FAT32 file systems can be converted to NTFS via the convert.exe utility, but not the reverse.[39] On Linux, ext2 can be converted to ext3 (and converted back), and ext3 can be converted to ext4 (but not back),[40] and both ext3 and ext4 can be converted to btrfs, and converted back until the undo information is deleted.[41] These conversions are possible because they use the same format for the file data itself and relocate the metadata into empty space, in some cases using sparse file support.[41]
Migration has the disadvantage of requiring additional space although it may be faster. The best case is if there is unused space on media which will contain the final file system.
For example, to migrate a FAT32 file system to an ext2 file system, a new ext2 file system is created. Then the data from the FAT32 file system is copied to the ext2 one, and the old file system is deleted.
An alternative, when there is not sufficient space to retain the original file system until the new one is created, is to use a work area (such as a removable media). This takes longer but has the benefit of producing a backup.
Inhierarchical file systems, files are accessed by means of apaththat is a branching list of directories containing the file. Different file systems have different limits on the depth of the path. File systems also have a limit on the length of an individual file name.
Copying files with long names or located in paths of significant depth from one file system to another may cause undesirable results. This depends on how the utility doing the copying handles the discrepancy.
|
https://en.wikipedia.org/wiki/File_system
|
Monomorphization is a compile-time process where polymorphic functions are replaced by many monomorphic functions for each unique instantiation.[1] It is considered beneficial to undergo this transformation because it results in the output intermediate representation (IR) having specific types, which allows for more effective optimization. Additionally, many IRs are intended to be low-level and do not accommodate polymorphism. The resulting code is generally faster than dynamic dispatch, but may require more compilation time and storage space due to duplicating the function body.[2][3][4][5][6][7]
This is an example of a use of a generic identity function in Rust:
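(The article's original snippet is not preserved in this copy; the following is a minimal, representative sketch.)

```rust
// A generic identity function: one definition, usable at many types.
fn id<T>(x: T) -> T {
    x
}

fn main() {
    let int = id(10);           // instantiated as id::<i32>
    let text = id("some text"); // instantiated as id::<&str>
    println!("{int}, {text}");
}
```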
After monomorphization, this would become equivalent to
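(Again a representative sketch rather than the article's original listing; the suffixed function names are only illustrative, since real compilers mangle the generated names internally.)

```rust
// After monomorphization, the compiler has emitted one concrete function
// per instantiation; the generic definition itself produces no code.
fn id_i32(x: i32) -> i32 {
    x
}

fn id_str(x: &str) -> &str {
    x
}

fn main() {
    let int = id_i32(10);
    let text = id_str("some text");
    println!("{int}, {text}");
}
```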
|
https://en.wikipedia.org/wiki/Monomorphization
|
Attribute-based access control (ABAC), also known as policy-based access control for IAM, defines an access control paradigm whereby a subject's authorization to perform a set of operations is determined by evaluating attributes associated with the subject, object, requested operations, and, in some cases, environment attributes.[1]
ABAC is a method of implementing access control policies that is highly adaptable and can be customized using a wide range of attributes, making it suitable for use in distributed or rapidly changing environments. The only limitations on the policies that can be implemented with ABAC are the capabilities of the computational language and the availability of relevant attributes.[2]ABAC policy rules are generated as Boolean functions of the subject's attributes, the object's attributes, and the environment attributes.[3]
Unlike role-based access control (RBAC), which defines roles that carry a specific set of privileges associated with them and to which subjects are assigned, ABAC can express complex rule sets that evaluate many different attributes. By defining consistent subject and object attributes in security policies, ABAC eliminates the need to assign explicit authorizations to individual subjects, as required in a non-ABAC access method, reducing the complexity of managing access lists and groups.
Attribute values can be set-valued or atomic-valued. Set-valued attributes contain more than one atomic value; examples are role and project. Atomic-valued attributes contain only one atomic value; examples are clearance and sensitivity. Attributes can be compared to static values or to one another, thus enabling relation-based access control.[citation needed]
Although the concept itself has existed for many years, ABAC is considered a "next generation" authorization model because it provides dynamic, context-aware and risk-intelligent access control to resources. It allows access control policies that include specific attributes from many different information systems to be defined in order to resolve an authorization and achieve efficient regulatory compliance, giving enterprises flexibility in their implementations based on their existing infrastructures.
Attribute-based access control is sometimes referred to as policy-based access control (PBAC) or claims-based access control (CBAC), the latter being a Microsoft-specific term. The key standards that implement ABAC are XACML and ALFA (XACML).[4]
ABAC can be seen as:
ABAC comes with a recommended architecture, typically comprising a Policy Enforcement Point (PEP) that protects the resource and intercepts access requests, a Policy Decision Point (PDP) that evaluates those requests against the configured policies and returns a permit or deny decision, a Policy Information Point (PIP) that retrieves attribute values the PDP is missing from external sources such as directories or databases, and a Policy Administration Point (PAP) for managing the policies themselves.
Attributes can be about anything and anyone. They tend to fall into four categories: subject attributes describing the requesting user (e.g., department, clearance), action attributes describing the requested operation (e.g., view, edit, delete), object or resource attributes describing the targeted item (e.g., classification, owner), and contextual or environment attributes describing the circumstances of the request (e.g., time of day, client IP).
Policies are statements that bring together attributes to express what is and is not allowed. Policies in ABAC can be granting or denying policies. Policies can also be local or global and can be written in a way that they override other policies. Typical examples are rules such as "a user can view a document if the document is in the same department as the user" or "deny any access request made outside business hours".
With ABAC you can have an unlimited number of policies that cater to many different scenarios and technologies.[7]
Historically, access control models have included mandatory access control (MAC), discretionary access control (DAC), and more recently role-based access control (RBAC). These access control models are user-centric and do not take into account additional parameters such as resource information, the relationship between the user (the requesting entity) and the resource, and dynamic information, e.g. time of the day or user IP.
ABAC tries to address this by defining access control based on attributes which describe the requesting entity (the user), the targeted object or resource, the desired action (view, edit, delete), and environmental or contextual information. This is why access control is said to be attribute-based.
There are three main implementations of ABAC: XACML (the eXtensible Access Control Markup Language), ALFA (the Abbreviated Language For Authorization, which maps to XACML), and NGAC (Next Generation Access Control).
XACML, the eXtensible Access Control Markup Language, defines an architecture (shared with ALFA and NGAC), a policy language, and a request/response scheme. It does not handle attribute management (user attribute assignment, object attribute assignment, environment attribute assignment) which is left to traditionalIAMtools, databases, and directories.
Companies, including every branch in the United States military, have started using ABAC. At a basic level, ABAC protects data with 'IF/THEN/AND' rules rather than assign data to users. The US Department of Commerce has made this a mandatory practice and the adoption is spreading throughout several governmental and military agencies.[8]
The concept of ABAC can be applied at any level of the technology stack and an enterprise infrastructure. For example, ABAC can be used at the firewall, server, application, database, and data layer. The use of attributes brings additional context to evaluate the legitimacy of any request for access and informs the decision to grant or deny access.
An important consideration when evaluating ABAC solutions is to understand its potential overhead on performance and its impact on the user experience. It is expected that the more granular the controls, the higher the overhead.
ABAC can be used to apply attribute-based, fine-grained authorization to API methods or functions. For instance, a banking API may expose an approveTransaction(transId) method. ABAC can be used to secure the call: a policy author can write a rule along the lines of "a user can approve a transaction only if its amount is below the user's approval limit", as in the sketch below.
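A minimal sketch of such a check, assuming hypothetical attribute names (role, approval_limit, amount) and an illustrative policy; this is not XACML or ALFA syntax, just the rule expressed in ordinary code:

```rust
// Hedged sketch: the struct names, attribute names and the policy itself are
// illustrative assumptions, not part of any particular ABAC standard.
struct Subject {
    role: String,
    approval_limit: f64,
}

struct Transaction {
    id: u64,
    amount: f64,
}

// Attribute-based rule for approveTransaction(transId): permit only if the
// caller's role is "manager" AND the amount is within the caller's limit.
fn can_approve(subject: &Subject, tx: &Transaction) -> bool {
    subject.role == "manager" && tx.amount <= subject.approval_limit
}

fn main() {
    let caller = Subject { role: "manager".to_string(), approval_limit: 10_000.0 };
    let tx = Transaction { id: 42, amount: 2_500.0 };
    println!("permit approveTransaction({})? {}", tx.id, can_approve(&caller, &tx));
}
```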
The flow would then be as follows: the client application calls the API method; the PEP protecting the API intercepts the call and builds an authorization request describing the subject, action, and resource; the PDP evaluates the request against its policies, pulling any missing attributes from PIPs; and the PEP enforces the resulting permit or deny decision, either forwarding the call or rejecting it.
One of the key benefits to ABAC is that the authorization policies and attributes can be defined in a technology neutral way. This means policies defined for APIs or databases can be reused in the application space. Common applications that can benefit from ABAC are:
The same process and flow as the one described in the API section applies here too.
Security for databases has long been specific to the database vendors: Oracle VPD, IBM FGAC, and Microsoft RLS are all means to achieve fine-grained ABAC-like security.
An example would be a policy such as "a manager can only view transaction records for customers in their own region", enforced by dynamically filtering or rewriting the SQL query according to the manager's attributes.
Data security typically goes one step further than database security and applies control directly to the data element. This is often referred to as data-centric security. On traditional relational databases, ABAC policies can control access to data at the table, column, field, cell and sub-cell level using logical controls with filtering conditions and masking based on attributes. Attributes can be data-, user-, session- or tool-based to deliver the greatest level of flexibility in dynamically granting or denying access to a specific data element. On big data and distributed file systems such as Hadoop, ABAC applied at the data layer controls access at the folder, sub-folder, file, sub-file and other granular levels.
Attribute-based access control can also be applied to Big Data systems like Hadoop. Policies similar to those used previously can be applied when retrieving data from data lakes.[9][10]
As of Windows Server 2012, Microsoft has implemented an ABAC approach to controlling access to files and folders. This is achieved through dynamic access control (DAC)[11] and the Security Descriptor Definition Language (SDDL). SDDL can be seen as an ABAC language as it uses metadata of the user (claims) and of the file/folder to control access.
|
https://en.wikipedia.org/wiki/Attribute-based_access_control
|
An employment website is a website that deals specifically with employment or careers. Many employment websites are designed to allow employers to post job requirements for a position to be filled and are commonly known as job boards. Other employment sites offer employer reviews, career and job-search advice, and describe different job descriptions or employers. Through a job website, a prospective employee can locate and fill out a job application or submit resumes over the Internet for the advertised position.
The Online Career Center was developed in 1992 by Bill Warren[1] as a non-profit organization backed by forty major corporations to allow job hunters to post their resumes and recruiters to post job openings.[2]
In 1994, Robert J. McGovern began NetStart Inc. as software sold to companies for listing job openings on their websites and managing the incoming e-mails those listings generated. After an influx of two million dollars in investment capital,[3] he then transported this software to its own web address, at first listing the job openings from the companies who utilized the software.[4] NetStart Inc. changed its name in 1998 to operate under the name of their software, CareerBuilder.[5] The company received a further influx of seven million dollars from investment firms such as New Enterprise Associates to expand their operations.[6]
Six major newspapers joined forces in 1995 to list their classified sections online. The service was called CareerPath.com and featured help-wanted listings from the Los Angeles Times, the Boston Globe, Chicago Tribune, the New York Times, San Jose Mercury News and the Washington Post.[7]
The industry attempted to reach a broader, less tech-savvy base in 1998 when Hotjobs.com tried to buy a Super Bowl spot, but Fox rejected the ad for being in poor taste. The ad featured a janitor at a zoo sweeping out the elephant cage completely unbeknownst to the animal. The elephant sits down briefly and when it stands back up, the janitor has disappeared, suggesting the worker was now stuck in the elephant's anus. The ad was meant to illustrate a need for those stuck in jobs they hate, and to offer a solution through the company's web site.[8]
In 1999, Monster.com ran three 30-second Super Bowl ads for four million dollars.[9] One ad, which featured children speaking like adults, drolly intoning their dreams of working at various dead-end jobs to humorous effect, was far more popular than rival Hotjobs.com's ad about a security guard who transitions from a low-paying security job to the same job at a fancier building.[10] Soon thereafter, Monster.com was elevated to the top spot of online employment sites.[11] Hotjobs.com's ad wasn't as successful, but it gave the company enough of a boost for its IPO in August.[12]
After being purchased in a joint venture by Knight Ridder and Tribune Company in July,[13] CareerBuilder absorbed competitor boards CareerPath.com and then Headhunter.net, which had already acquired CareerMosaic. Even with these aggressive mergers, CareerBuilder still trailed behind the number one employment site Jobsonline.com, number two Monster.com and number three Hotjobs.com.[14]
Monster.com made a move in 2001 to purchase Hotjobs.com for $374 million in stock, but was unsuccessful due to Yahoo's unsolicited cash and stock bid of $430 million late in the year. Yahoo had previously announced plans to enter the job board business, but decided to jump-start that venture by purchasing the established brand.[15] In February 2010, Monster acquired HotJobs from Yahoo for $225 million.[16]
A job board is a website that facilitates job hunting; job boards range from large-scale generalist sites to niche boards for job categories such as engineering, legal, insurance, social work, teaching, and mobile app development, as well as cross-sector categories such as green jobs, ethical jobs and seasonal jobs. Users can typically upload their résumés and submit them to potential employers and recruiters for review, while employers and recruiters can post job ads and search for potential employees.
The termjob search enginemight refer to a job board with asearch enginestyle interface, or to a web site that actually indexes and searches other web sites.
Niche job boards are starting to play a bigger role in providing more targeted job vacancies to candidates and more targeted employees to employers. Job boards such as airport jobs and federal jobs, among others, provide a very focused way of reducing the time spent applying to the most appropriate role. USAJobs.gov is the United States' official website for jobs. It gathers job listings from over 500 federal agencies.[17]
Some web sites are simplysearch enginesthat collect results from multiple independent job boards. This is an example of bothmetasearch(since these are search engines which search other search engines) andvertical search(since the searches are limited to a specific topic - job listings).
Some of these new search engines primarily index traditional job boards. These sites aim to provide a "one-stop shop" for job-seekers who don't need to search the underlying job boards. In 2006, tensions developed between the job boards and several scraper sites, with Craigslist banning scrapers from its job classifieds and Monster.com specifically banning scrapers through its adoption of a robots exclusion standard on all its pages, while others have embraced them.
Industry specific posting boards are also appearing. These consolidate all the vacancies in a very specific industry. The largest "niche" job board isDice.comwhich focuses on the IT industry. Many industry and professional associations offer members a job posting capability on the association website.
An employer review website is a type of employment website where past and current employees post comments about their experiences working for a company or organization. An employer review website usually takes the form of an internet forum. Typical comments are about management, working conditions, and pay. Although employer review websites may produce links to potential employers, they do not necessarily list vacancies.[18][19]
Although many sites that provide access to job advertisements include pages with advice about writing resumes and CVs, performing well in interviews, and other topics of interest to job seekers, there are sites that specialize in providing information of this kind rather than job opportunities. One such site is Working in Canada. It does provide links to the Canadian Job Bank; however, most of its content is information about local labor markets (in Canada), requirements for working in various occupations, information about relevant laws and regulations, government services and grants, and so on. Most items could be of interest to people in various roles and conditions, including those considering career options, job seekers, employers and employees.
Employment sites typically charge fees to employers for listing job postings. Often these are flat fees for a specific duration (30 days, 60 days, etc.). Other sites may allow employers to post basic listings for free, but charge a fee for more prominent placement of listings in search results. Employment sites like job aggregators use "pay-per-click" or pay-for-performance models, where the employer listing the job pays for clicks on the listing.[20][21]
In Japan, some sites have come under fire for allowing employers to list a job for free for an initial duration, then charging exorbitant fees after the free period expires. Most of these sites seem to have appeared within the last year in response to the labor shortage in Japan.[22]
Many job search engines and job boards encourage users to post theirresumeand contact details. While this is attractive for the site operators (who sell access to the resume bank toheadhuntersand recruiters), job-seekers should exercise caution in uploading personal information, since they have no control over where their resume will eventually be seen. Their resume may be viewed by a current employer or, worse, by criminals who may use information from it to amass and sell personal contact information, or even perpetrateidentity theft.[23][24]
|
https://en.wikipedia.org/wiki/Employer_review_website
|
Causal inference is the process of determining the independent, actual effect of a particular phenomenon that is a component of a larger system. The main difference between causal inference and inference of association is that causal inference analyzes the response of an effect variable when a cause of the effect variable is changed.[1][2] The study of why things occur is called etiology, and can be described using the language of scientific causal notation. Causal inference is said to provide the evidence of causality theorized by causal reasoning.
Causal inference is widely studied across all sciences. Several innovations in the development and implementation of methodology designed to determine causality have proliferated in recent decades. Causal inference remains especially difficult where experimentation is difficult or impossible, which is common throughout most sciences.
The approaches to causal inference are broadly applicable across all types of scientific disciplines, and many methods of causal inference that were designed for certain disciplines have found use in other disciplines. This article outlines the basic process behind causal inference and details some of the more conventional tests used across different disciplines; however, this should not be mistaken as a suggestion that these methods apply only to those disciplines, merely that they are the most commonly used in that discipline.
Causal inference is difficult to perform and there is significant debate amongst scientists about the proper way to determine causality. Despite other innovations, there remain concerns of misattribution by scientists of correlative results as causal, of the usage of incorrect methodologies by scientists, and of deliberate manipulation by scientists of analytical results in order to obtain statistically significant estimates. Particular concern is raised in the use of regression models, especially linear regression models.
Inferring the cause of something has been described as:
Causal inference is conducted via the study of systems where the measure of one variable is suspected to affect the measure of another. Causal inference is conducted with regard to the scientific method. The first step of causal inference is to formulate a falsifiable null hypothesis, which is subsequently tested with statistical methods. Frequentist statistical inference is the use of statistical methods to determine the probability that the data occur under the null hypothesis by chance; Bayesian inference is used to determine the effect of an independent variable.[5] Statistical inference is generally used to distinguish variations in the original data that are random variation from the effect of a well-specified causal mechanism. Notably, correlation does not imply causation, so the study of causality is as concerned with the study of potential causal mechanisms as it is with variation amongst the data.[citation needed] A frequently sought-after standard of causal inference is an experiment wherein treatment is randomly assigned but all other confounding factors are held constant. Most of the efforts in causal inference are in the attempt to replicate experimental conditions.
Epidemiological studies employ differentepidemiological methodsof collecting and measuring evidence of risk factors and effect and different ways of measuring association between the two. Results of a 2020 review of methods for causal inference found that using existing literature for clinical training programs can be challenging. This is because published articles often assume an advanced technical background, they may be written from multiple statistical, epidemiological, computer science, or philosophical perspectives, methodological approaches continue to expand rapidly, and many aspects of causal inference receive limited coverage.[6]
Common frameworks for causal inference include the causal pie model (component-cause), Pearl's structural causal model (causal diagram + do-calculus), structural equation modeling, and the Rubin causal model (potential-outcome), which are often used in areas such as social sciences and epidemiology.[7]
Experimental verification of causal mechanisms is possible using experimental methods. The main motivation behind an experiment is to hold other experimental variables constant while purposefully manipulating the variable of interest. If the experiment produces statistically significant effects as a result of only the treatment variable being manipulated, there are grounds to believe that a causal effect can be assigned to the treatment variable, assuming that other standards for experimental design have been met.
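As a toy illustration of this logic (not drawn from the article), the sketch below simulates a randomized experiment and estimates the average treatment effect as the difference in mean outcomes between treated and control groups; every number in it is an arbitrary assumption:

```rust
// Toy simulation of a randomized experiment; all numbers are arbitrary
// assumptions. A tiny linear congruential generator stands in for an RNG crate.
struct Lcg(u64);

impl Lcg {
    // Returns a pseudo-random value in [0, 1).
    fn next_f64(&mut self) -> f64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

fn main() {
    let mut rng = Lcg(42);
    let true_effect = 2.0; // the causal effect built into the simulation
    let (mut sum_t, mut n_t, mut sum_c, mut n_c) = (0.0, 0u32, 0.0, 0u32);

    for _ in 0..10_000 {
        let noise = rng.next_f64() - 0.5;   // idiosyncratic variation
        let treated = rng.next_f64() < 0.5; // random assignment breaks confounding
        let outcome = 1.0 + if treated { true_effect } else { 0.0 } + noise;
        if treated {
            sum_t += outcome;
            n_t += 1;
        } else {
            sum_c += outcome;
            n_c += 1;
        }
    }

    // Under random assignment, the difference in group means estimates the
    // average treatment effect.
    let ate = sum_t / n_t as f64 - sum_c / n_c as f64;
    println!("estimated effect: {ate:.3} (true effect: {true_effect})");
}
```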
Quasi-experimental verification of causal mechanisms is conducted when traditional experimental methods are unavailable. This may be the result of prohibitive costs of conducting an experiment, or the inherent infeasibility of conducting an experiment, especially experiments that are concerned with large systems such as economies or electoral systems, or for treatments that are considered to present a danger to the well-being of test subjects. Quasi-experiments may also occur where information is withheld for legal reasons.
Epidemiology studies patterns of health and disease in defined populations of living beings in order to infer causes and effects. An association between an exposure to a putative risk factor and a disease may be suggestive of, but is not equivalent to, causality because correlation does not imply causation. Historically, Koch's postulates have been used since the 19th century to decide if a microorganism was the cause of a disease. In the 20th century the Bradford Hill criteria, described in 1965,[8] have been used to assess causality of variables outside microbiology, although even these criteria are not exclusive ways to determine causality.
Inmolecular epidemiologythe phenomena studied are on amolecular biologylevel, including genetics, wherebiomarkersare evidence of cause or effects.
A recent trend[when?] is to identify evidence for influence of the exposure on molecular pathology within diseased tissue or cells, in the emerging interdisciplinary field of molecular pathological epidemiology (MPE).[independent source needed] Linking the exposure to molecular pathologic signatures of the disease can help to assess causality.[independent source needed] Considering the inherent nature of heterogeneity of a given disease, the unique disease principle, disease phenotyping and subtyping are trends in biomedical and public health sciences, exemplified by personalized medicine and precision medicine.[independent source needed]
Causal inference has also been used for treatment effect estimation. Assuming a set of observable patient symptoms (X) caused by a set of hidden causes (Z), we can choose whether or not to give a treatment t. The result of giving or not giving the treatment is the estimated effect y. If the treatment is not guaranteed to have a positive effect, then the decision whether the treatment should be applied or not depends firstly on expert knowledge that encompasses the causal connections. For novel diseases, this expert knowledge may not be available. As a result, we rely solely on past treatment outcomes to make decisions. A modified variational autoencoder can be used to model the causal graph described above.[9] While the above scenario could be modelled without the use of the hidden confounder (Z), we would lose the insight that a patient's symptoms, together with other factors, impact both the treatment assignment and the outcome.
Causal inference is an important concept in the field of causal artificial intelligence. Determination of cause and effect from joint observational data for two time-independent variables, say X and Y, has been tackled using asymmetry between evidence for some model in the directions X → Y and Y → X. The primary approaches are based on algorithmic information theory models and noise models.[citation needed]
Noise models incorporate an independent noise term into the model and compare the evidence for the two directions.
Here are some of the noise models for the hypothesis Y → X with the noise E:
The common assumptions in these models are that the noise term is statistically independent of the putative cause and that there are no unobserved confounders.
On an intuitive level, the idea is that the factorization of the joint distribution P(Cause, Effect) into P(Cause)*P(Effect | Cause) typically yields models of lower total complexity than the factorization into P(Effect)*P(Cause | Effect). Although the notion of "complexity" is intuitively appealing, it is not obvious how it should be precisely defined.[13]A different family of methods attempt to discover causal "footprints" from large amounts of labeled data, and allow the prediction of more flexible causal relations.[14]
The social sciences in general have moved increasingly toward including quantitative frameworks for assessing causality. Much of this has been described as a means of providing greater rigor to social science methodology. Political science was significantly influenced by the publication of Designing Social Inquiry, by Gary King, Robert Keohane, and Sidney Verba, in 1994. King, Keohane, and Verba recommend that researchers apply both quantitative and qualitative methods and adopt the language of statistical inference to be clearer about their subjects of interest and units of analysis.[15][16] Proponents of quantitative methods have also increasingly adopted the potential outcomes framework, developed by Donald Rubin, as a standard for inferring causality.[citation needed]
While much of the emphasis remains on statistical inference in the potential outcomes framework, social science methodologists have developed new tools to conduct causal inference with both qualitative and quantitative methods, sometimes called a "mixed methods" approach.[17][18] Advocates of diverse methodological approaches argue that different methodologies are better suited to different subjects of study. Sociologist Herbert Smith and political scientists James Mahoney and Gary Goertz have cited the observation of Paul W. Holland, a statistician and author of the 1986 article "Statistics and Causal Inference", that statistical inference is most appropriate for assessing the "effects of causes" rather than the "causes of effects".[19][20] Qualitative methodologists have argued that formalized models of causation, including process tracing and fuzzy set theory, provide opportunities to infer causation through the identification of critical factors within case studies or through a process of comparison among several case studies.[16] These methodologies are also valuable for subjects in which a limited number of potential observations or the presence of confounding variables would limit the applicability of statistical inference.[citation needed]
On longer timescales, persistence studies use causal inference to link historical events to later political, economic and social outcomes.[21]
In the economic sciences and political sciences causal inference is often difficult, owing to the real-world complexity of economic and political realities and the inability to recreate many large-scale phenomena within controlled experiments. Causal inference in the economic and political sciences continues to see improvement in methodology and rigor, due to the increased level of technology available to social scientists, the increase in the number of social scientists and research, and improvements to causal inference methodologies throughout the social sciences.[22]
Despite the difficulties inherent in determining causality in economic systems, several widely employed methods exist throughout those fields.
Economists and political scientists can use theory (often studied in theory-driven econometrics) to estimate the magnitude of supposedly causal relationships in cases where they believe a causal relationship exists.[23] Theorists can presuppose a mechanism believed to be causal and describe the effects using data analysis to justify their proposed theory. For example, theorists can use logic to construct a model, such as theorizing that rain causes fluctuations in economic productivity but that the converse is not true.[24] However, using purely theoretical claims that do not offer any predictive insights has been called "pre-scientific" because there is no ability to predict the impact of the supposed causal properties.[5] It is worth reiterating that regression analysis in the social sciences does not inherently imply causality, as many phenomena may correlate in the short run or in particular datasets but demonstrate no correlation in other time periods or other datasets. Thus, the attribution of causality to correlative properties is premature absent a well-defined and reasoned causal mechanism.
The instrumental variables (IV) technique is a method of determining causality that involves the elimination of a correlation between one of a model's explanatory variables and the model's error term. This method presumes that if a model's error term moves similarly with the variation of another variable, then the model's error term is probably an effect of variation in that explanatory variable. The elimination of this correlation through the introduction of a new instrumental variable thus reduces the error present in the model as a whole.[25]
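A minimal sketch of the idea on simulated data, using the simple IV estimator cov(Z, Y)/cov(Z, X); the data-generating process and coefficients are illustrative assumptions, not the setup of any particular study:

```rust
// Hedged sketch of the instrumental-variables idea on simulated data; the
// coefficients and the tiny RNG are illustrative assumptions. Z shifts X but
// affects Y only through X, while U confounds X and Y.
struct Lcg(u64);

impl Lcg {
    // Pseudo-random value in [-0.5, 0.5).
    fn next_centered(&mut self) -> f64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64 - 0.5
    }
}

fn mean(v: &[f64]) -> f64 {
    v.iter().sum::<f64>() / v.len() as f64
}

fn cov(a: &[f64], b: &[f64]) -> f64 {
    let (ma, mb) = (mean(a), mean(b));
    a.iter().zip(b).map(|(x, y)| (x - ma) * (y - mb)).sum::<f64>() / a.len() as f64
}

fn main() {
    let mut rng = Lcg(7);
    let true_beta = 1.5;
    let (mut z, mut x, mut y) = (Vec::new(), Vec::new(), Vec::new());

    for _ in 0..50_000 {
        let u = rng.next_centered();  // unobserved confounder
        let zi = rng.next_centered(); // instrument, independent of u
        let xi = zi + u + 0.1 * rng.next_centered();
        let yi = true_beta * xi + 2.0 * u + 0.1 * rng.next_centered();
        z.push(zi);
        x.push(xi);
        y.push(yi);
    }

    let ols = cov(&x, &y) / cov(&x, &x); // biased: picks up the confounder
    let iv = cov(&z, &y) / cov(&z, &x);  // consistent under the IV assumptions
    println!("true effect {true_beta}, naive OLS {ols:.3}, IV estimate {iv:.3}");
}
```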
Model specification is the act of selecting a model to be used in data analysis. Social scientists (and, indeed, all scientists) must determine the correct model to use because different models are good at estimating different relationships.[26]
Model specification can be useful in determining causality that is slow to emerge, where the effects of an action in one period are only felt in a later period. It is worth remembering that correlations only measure whether two variables have similar variance, not whether they affect one another in a particular direction; thus, one cannot determine the direction of a causal relation based on correlations only. Because causal acts are believed to precede causal effects, social scientists can use a model that looks specifically for the effect of one variable on another over a period of time. This leads to using the variables representing phenomena happening earlier as treatment effects, where econometric tests are used to look for later changes in data that are attributed to the effect of such treatment effects, where a meaningful difference in results following a meaningful difference in treatment effects may indicate causality between the treatment effects and the measured effects (e.g., Granger-causality tests). Such studies are examples oftime-series analysis.[27]
Other variables, or regressors in regression analysis, are either included or not included across various implementations of the same model to ensure that different sources of variation can be studied more separately from one another. This is a form of sensitivity analysis: it is the study of how sensitive an implementation of a model is to the addition of one or more new variables.[28]
A chief motivating concern in the use of sensitivity analysis is the pursuit of discoveringconfounding variables. Confounding variables are variables that have a large impact on the results of a statistical test but are not the variable that causal inference is trying to study. Confounding variables may cause a regressor to appear to be significant in one implementation, but not in another.
Another reason for the use of sensitivity analysis is to detectmulticollinearity. Multicollinearity is the phenomenon where the correlation between two explanatory variables is very high. A high level of correlation between two such variables can dramatically affect the outcome of a statistical analysis, where small variations in highly correlated data can flip the effect of a variable from a positive direction to a negative direction, or vice versa. This is an inherent property of variance testing. Determining multicollinearity is useful in sensitivity analysis because the elimination of highly correlated variables in different model implementations can prevent the dramatic changes in results that result from the inclusion of such variables.[29]
However, there are limits to sensitivity analysis' ability to prevent the deleterious effects of multicollinearity, especially in the social sciences, where systems are complex. Because it is theoretically impossible to include or even measure all of the confounding factors in a sufficiently complex system, econometric models are susceptible to the common-cause fallacy, where causal effects are incorrectly attributed to the wrong variable because the correct variable was not captured in the original data. This is an example of the failure to account for alurking variable.[30]
Recently, improved methodology in design-based econometrics has popularized the use of both natural experiments and quasi-experimental research designs to study the causal mechanisms that such experiments are believed to identify.[31]
Despite the advancements in the development of methodologies used to determine causality, significant weaknesses in determining causality remain. These weaknesses can be attributed both to the inherent difficulty of determining causal relations in complex systems but also to cases of scientific malpractice.
Separate from the difficulties of causal inference, the perception that large numbers of scholars in the social sciences engage in non-scientific methodology exists among some large groups of social scientists. Criticism of economists and social scientists as passing off descriptive studies as causal studies are rife within those fields.[5]
In the sciences, especially in the social sciences, there is concern among scholars that scientific malpractice is widespread. As scientific study is a broad topic, there are theoretically limitless ways to have a causal inference undermined through no fault of a researcher. Nonetheless, there remain concerns among scientists that large numbers of researchers do not perform basic duties or practice sufficiently diverse methods in causal inference.[32][22][33][failed verification][34]
One prominent example of common non-causal methodology is the erroneous assumption of correlative properties as causal properties. There is no inherent causality in phenomena that correlate. Regression models are designed to measure variance within data relative to a theoretical model: there is nothing to suggest that data presenting high levels of covariance have any meaningful relationship (absent a proposed causal mechanism with predictive properties or a random assignment of treatment). The use of flawed methodology has been claimed to be widespread, with common examples of such malpractice being the overuse of correlative models, especially the overuse of regression models and particularly linear regression models.[5] The presupposition that two correlated phenomena are inherently related is a logical fallacy known as spurious correlation. Some social scientists claim that widespread use of methodology that attributes causality to spurious correlations has been detrimental to the integrity of the social sciences, although improvements stemming from better methodologies have been noted.[31]
A potential effect of scientific studies that erroneously conflate correlation with causality is an increase in the number of scientific findings whose results are not reproducible by third parties. Such non-reproducibility is a logical consequence of findings that correlate only temporarily being overgeneralized into mechanisms that have no inherent relationship, where new data do not contain the previous, idiosyncratic correlations of the original data. Debates over the effect of malpractice versus the effect of the inherent difficulties of searching for causality are ongoing.[35] Critics of widely practiced methodologies argue that researchers have engaged in statistical manipulation in order to publish articles that supposedly demonstrate evidence of causality but are actually examples of spurious correlation being touted as evidence of causality: such endeavors may be referred to as P hacking.[36] To prevent this, some have advocated that researchers preregister their research designs prior to conducting their studies so that they do not inadvertently overemphasize a nonreproducible finding that was not the initial subject of inquiry but was found to be statistically significant during data analysis.[37]
|
https://en.wikipedia.org/wiki/Causal_inference
|
PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term "web page" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google:
PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.[1]
Currently, PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known.[2][3]As of September 24, 2019, all patents associated with PageRank have expired.[4]
PageRank is a link analysis algorithm and it assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is referred to as the PageRank of E and denoted by PR(E).
A PageRank results from a mathematical algorithm based on the webgraph, created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or mayoclinic.org. The rank value indicates the importance of a particular page. A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself.
Numerous academic papers concerning PageRank have been published since Page and Brin's original paper.[5]In practice, the PageRank concept may be vulnerable to manipulation. Research has been conducted into identifying falsely influenced PageRank rankings. The goal is to find an effective means of ignoring links from documents with falsely influenced PageRank.[6]
Other link-based ranking algorithms for Web pages include theHITS algorithminvented byJon Kleinberg(used byTeomaand nowAsk.com), the IBMCLEVER project, theTrustRankalgorithm, theHummingbirdalgorithm,[7]and theSALSA algorithm.[8]
The eigenvalue problem behind PageRank's algorithm was independently rediscovered and reused in many scoring problems. In 1895, Edmund Landau suggested using it for determining the winner of a chess tournament.[9][10] The eigenvalue problem was also suggested in 1976 by Gabriel Pinski and Francis Narin, who worked on scientometrics ranking scientific journals,[11] in 1977 by Thomas Saaty in his concept of the Analytic Hierarchy Process which weighted alternative choices,[12] and in 1995 by Bradley Love and Steven Sloman as a cognitive model for concepts, the centrality algorithm.[13][14]
A search engine called "RankDex" from IDD Information Services, designed byRobin Liin 1996, developed a strategy for site-scoring and page-ranking.[15]Li referred to his search mechanism as "link analysis," which involved ranking the popularity of a web site based on how many other sites had linked to it.[16]RankDex, the first search engine with page-ranking and site-scoring algorithms, was launched in 1996.[17]Li filed a patent for the technology in RankDex in 1997; it was granted in 1999.[18]He later used it when he foundedBaiduin China in 2000.[19][20]Google founderLarry Pagereferenced Li's work as a citation in some of his U.S. patents for PageRank.[21][17][22]
Larry Page and Sergey Brin developed PageRank at Stanford University in 1996 as part of a research project about a new kind of search engine. An interview with Héctor García-Molina, Stanford Computer Science professor and advisor to Sergey,[23] provides background into the development of the page-rank algorithm.[24] Sergey Brin had the idea that information on the web could be ordered in a hierarchy by "link popularity": a page ranks higher as there are more links to it.[25] The system was developed with the help of Scott Hassan and Alan Steremberg, both of whom were cited by Page and Brin as being critical to the development of Google.[5] Rajeev Motwani and Terry Winograd co-authored with Page and Brin the first paper about the project, describing PageRank and the initial prototype of the Google search engine, published in 1998.[5] Shortly after, Page and Brin founded Google Inc., the company behind the Google search engine. While just one of many factors that determine the ranking of Google search results, PageRank continues to provide the basis for all of Google's web-search tools.[26]
The name "PageRank" plays on the name of developer Larry Page, as well as of the concept of aweb page.[27][28]The word is a trademark of Google, and the PageRank process has beenpatented(U.S. patent 6,285,999). However, the patent is assigned to Stanford University and not to Google. Google has exclusive license rights on the patent from Stanford University. The university received 1.8 million shares of Google in exchange for use of the patent; it sold the shares in 2005 for $336 million.[29][30]
PageRank was influenced by citation analysis, developed early by Eugene Garfield in the 1950s at the University of Pennsylvania, and by Hyper Search, developed by Massimo Marchiori at the University of Padua. In the same year PageRank was introduced (1998), Jon Kleinberg published his work on HITS. Google's founders cite Garfield, Marchiori, and Kleinberg in their original papers.[5][31]
The PageRank algorithm outputs aprobability distributionused to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value.
A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a document with a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to said document.
Assume a small universe of four web pages: A, B, C, and D. Links from a page to itself are ignored. Multiple outbound links from one page to another page are treated as a single link. PageRank is initialized to the same value for all pages. In the original form of PageRank, the sum of PageRank over all pages was the total number of pages on the web at that time, so each page in this example would have an initial value of 1. However, later versions of PageRank, and the remainder of this section, assume a probability distribution between 0 and 1. Hence the initial value for each page in this example is 0.25.
The PageRank transferred from a given page to the targets of its outbound links upon the next iteration is divided equally among all outbound links.
If the only links in the system were from pages B, C, and D to A, each link would transfer 0.25 PageRank to A upon the next iteration, for a total of 0.75.
Suppose instead that page B had a link to pages C and A, page C had a link to page A, and page D had links to all three pages. Thus, upon the first iteration, page B would transfer half of its existing value (0.125) to page A and the other half (0.125) to page C. Page C would transfer all of its existing value (0.25) to the only page it links to, A. Since D had three outbound links, it would transfer one third of its existing value, or approximately 0.083, to A. At the completion of this iteration, page A will have a PageRank of approximately 0.458.
In other words, the PageRank conferred by an outbound link is equal to the document's own PageRank score divided by the number of outbound links L(·).
In the general case, the PageRank value for any page u can be expressed as:
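(The displayed equation is not preserved in this copy; restated in standard form:)

PR(u) = \sum_{v \in B_u} \frac{PR(v)}{L(v)}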
i.e. the PageRank value for a page u is dependent on the PageRank values for each page v contained in the set B_u (the set containing all pages linking to page u), divided by the number L(v) of links from page v.
The PageRank theory holds that an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue following links is a damping factor d. The probability that they instead jump to any random page is 1 − d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85.[5]
The damping factor is subtracted from 1 (and in some variations of the algorithm, the result is divided by the number of documents (N) in the collection) and this term is then added to the product of the damping factor and the sum of the incoming PageRank scores. That is,
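(Restated in standard form, consistent with the comparison in the next paragraph:)

PR(A) = \frac{1-d}{N} + d \left( \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)} + \cdots \right)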
So any page's PageRank is derived in large part from the PageRanks of other pages. The damping factor adjusts the derived value downward. The original paper, however, gave the following formula, which has led to some confusion:
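(The original paper's variant, restated:)

PR(A) = 1 - d + d \left( \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)} + \cdots \right)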
The difference between them is that the PageRank values in the first formula sum to one, while in the second formula each PageRank is multiplied byNand the sum becomesN. A statement in Page and Brin's paper that "the sum of all PageRanks is one"[5]and claims by other Google employees[32]support the first variant of the formula above.
Page and Brin confused the two formulas in their most popular paper "The Anatomy of a Large-Scale Hypertextual Web Search Engine", where they mistakenly claimed that the latter formula formed a probability distribution over web pages.[5]
Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents.
The formula uses a model of a random surfer who reaches their target site after several clicks, then switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as a Markov chain in which the states are pages, and the transitions are the links between pages – all of which are equally probable.
If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. If the random surfer arrives at a sink page, it picks another URL at random and continues surfing again.
When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. In other words, to be fair with pages that are not sinks, these random transitions are added to all nodes in the Web. This residual probability, d, is usually set to 0.85, estimated from the frequency with which an average surfer uses his or her browser's bookmark feature. So, the equation is as follows.
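In its commonly quoted form,

PR(p_i) = \frac{1-d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)},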
where p_1, p_2, ..., p_N are the pages under consideration, M(p_i) is the set of pages that link to p_i, L(p_j) is the number of outbound links on page p_j, and N is the total number of pages.
The PageRank values are the entries of the dominant right eigenvector of the modified adjacency matrix, rescaled so that each column adds up to one. This makes PageRank a particularly elegant metric: the eigenvector is the column vector R of PageRank values over all pages, and R is the solution of a linear equation in which the adjacency function ℓ(p_i, p_j) is the ratio of the number of links outbound from page j to page i to the total number of outbound links of page j. The adjacency function is 0 if page p_j does not link to p_i, and it is normalized such that, for each j, the elements of each column sum up to 1, so the matrix is a stochastic matrix (for more details see the computation section below). Thus this is a variant of the eigenvector centrality measure used commonly in network analysis.
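Concretely, writing ℓ for the matrix with entries ℓ(p_i, p_j), the equation satisfied by R is commonly given as

R = \frac{1-d}{N}\,\mathbf{1} + d\,\ell\,R, \qquad \sum_{i} \ell(p_i, p_j) = 1 \ \text{for each } j,

where \mathbf{1} is the all-ones column vector of length N.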
Because of the large eigengap of the modified adjacency matrix above,[33] the values of the PageRank eigenvector can be approximated to a high degree of accuracy within only a few iterations.
Google's founders, in their original paper,[31] reported that the PageRank algorithm for a network consisting of 322 million links (in-edges and out-edges) converges to within a tolerable limit in 52 iterations. The convergence in a network of half the above size took approximately 45 iterations. Through this data, they concluded the algorithm can be scaled very well and that the scaling factor for extremely large networks would be roughly linear in log n, where n is the size of the network.
As a result of Markov theory, it can be shown that the PageRank of a page is the probability of arriving at that page after a large number of clicks. This happens to equal t^{-1}, where t is the expectation of the number of clicks (or random jumps) required to get from the page back to itself.
One main disadvantage of PageRank is that it favors older pages. A new page, even a very good one, will not have many links unless it is part of an existing site (a site being a densely connected set of pages, such asWikipedia).
Several strategies have been proposed to accelerate the computation of PageRank.[34]
Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept,[citation needed]which purports to determine which documents are actually highly valued by the Web community.
Since December 2007, when it started actively penalizing sites selling paid text links, Google has combatted link farms and other schemes designed to artificially inflate PageRank. How Google identifies link farms and other PageRank manipulation tools is among Google's trade secrets.
PageRank can be computed either iteratively or algebraically. The iterative method can be viewed as the power iteration method,[35][36] or the power method. The basic mathematical operations performed are identical.
At t = 0, an initial probability distribution is assumed, usually PR(p_i; 0) = 1/N, where N is the total number of pages and PR(p_i; 0) is the PageRank of page i at time 0.
At each time step, the computation, as detailed above, yields the updated PageRank values, where d is the damping factor. This can also be written in matrix notation, with R_i(t) = PR(p_i; t) and 1 denoting the column vector of length N containing only ones. The matrix M is defined in terms of the link structure of the graph: A denotes the adjacency matrix of the graph and K is the diagonal matrix with the outdegrees on the diagonal.
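A common way of writing this iteration and the matrix M, consistent with the definitions above, is

R(t+1) = d\,M\,R(t) + \frac{1-d}{N}\,\mathbf{1}, \qquad M_{ij} = \begin{cases} 1/L(p_j) & \text{if } p_j \text{ links to } p_i \\ 0 & \text{otherwise}, \end{cases}

i.e., M = (K^{-1}A)^{T}.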
The probability calculation is made for each page at a time point, then repeated for the next time point. The computation ends when, for some small ε, |R(t+1) − R(t)| < ε, i.e., when convergence is assumed.
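The iteration described above can be sketched in a few lines of Python. This is only an illustrative implementation (not code from any particular library), assuming the graph is given as a dictionary mapping each page to the list of pages it links to, and that every linked page also appears as a key:

```python
import numpy as np

def pagerank(links, d=0.85, eps=1e-8, max_iter=100):
    """Power-iteration PageRank for a graph given as {page: [pages it links to]}."""
    pages = sorted(links)
    index = {p: i for i, p in enumerate(pages)}
    n = len(pages)

    # Column-stochastic matrix M: M[i, j] = 1/L(p_j) if p_j links to p_i, else 0.
    M = np.zeros((n, n))
    for p, outs in links.items():
        targets = {t for t in outs if t != p}   # ignore self-links, count duplicates once
        if targets:
            for t in targets:
                M[index[t], index[p]] = 1.0 / len(targets)
        else:
            M[:, index[p]] = 1.0 / n            # dangling page: spread its rank over all pages

    r = np.full(n, 1.0 / n)                     # initial distribution PR(p_i; 0) = 1/N
    for _ in range(max_iter):
        r_next = d * M @ r + (1.0 - d) / n
        converged = np.abs(r_next - r).sum() < eps
        r = r_next
        if converged:                           # stop when convergence is assumed
            break
    return dict(zip(pages, r))

# The four-page example from the beginning of this section:
print(pagerank({"A": [], "B": ["A", "C"], "C": ["A"], "D": ["A", "B", "C"]}))
```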
If the matrix M is a transition probability, i.e., column-stochastic, and R is a probability distribution (i.e., |R| = 1 and E R = 1, where E is the matrix of all ones), then equation (2) is equivalent to R = (d M + ((1 − d)/N) E) R =: M̂ R (equation (3)).
Hence PageRank R is the principal eigenvector of M̂. A fast and easy way to compute this is using the power method: starting with an arbitrary vector x(0), the operator M̂ is applied in succession, i.e., x(t+1) = M̂ x(t), until |x(t+1) − x(t)| < ε.
Note that in equation (3) the matrix on the right-hand side in the parenthesis can be interpreted in terms of an initial probability distribution P; in the current case, P is the uniform distribution P = (1/N) 1.
Finally, if M has columns with only zero values, they should be replaced with the initial probability vector P. In other words, the zero columns of M are replaced by copies of P; this correction is often written with an auxiliary matrix D whose columns equal P for dangling pages and are zero otherwise. In this case, the above two computations using M only give the same PageRank if their results are normalized.
The PageRank of an undirected graph G is statistically close to the degree distribution of the graph G,[37] but they are generally not identical: if R is the PageRank vector defined above, and D is the degree distribution vector with entries deg(p_i)/(2|E|), where deg(p_i) denotes the degree of vertex p_i and E is the edge set of the graph, then, with Y = (1/N) 1, [38] shows that:
\frac{1-d}{1+d}\,\|Y-D\|_{1} \leq \|R-D\|_{1} \leq \|Y-D\|_{1},
that is, the PageRank of an undirected graph equals the degree distribution vector if and only if the graph is regular, i.e., every vertex has the same degree.
A generalization of PageRank for the case of ranking two interacting groups of objects was described by Daugulis.[39]In applications it may be necessary to model systems having objects of two kinds where a weighted relation is defined on object pairs. This leads to consideringbipartite graphs. For such graphs two related positive or nonnegative irreducible matrices corresponding to vertex partition sets can be defined. One can compute rankings of objects in both groups as eigenvectors corresponding to the maximal positive eigenvalues of these matrices. Normed eigenvectors exist and are unique by the Perron or Perron–Frobenius theorem. Example: consumers and products. The relation weight is the product consumption rate.
Sarma et al. describe two random walk-based distributed algorithms for computing PageRank of nodes in a network.[40] One algorithm takes O(log n / ε) rounds with high probability on any graph (directed or undirected), where n is the network size and ε is the reset probability (1 − ε is called the damping factor) used in the PageRank computation. They also present a faster algorithm that takes O(√(log n) / ε) rounds in undirected graphs. In both algorithms, each node processes and sends a number of bits per round that are polylogarithmic in n, the network size.
The Google Toolbar long had a PageRank feature which displayed a visited page's PageRank as a whole number between 0 (least popular) and 10 (most popular). Google had not disclosed the specific method for determining a Toolbar PageRank value, which was to be considered only a rough indication of the value of a website. The "Toolbar PageRank" was available for verified site maintainers through the Google Webmaster Tools interface. However, on October 15, 2009, a Google employee confirmed that the company had removed PageRank from its Webmaster Tools section, saying that "We've been telling people for a long time that they shouldn't focus on PageRank so much. Many site owners seem to think it's the most important metric for them to track, which is simply not true."[41]
The "Toolbar Pagerank" was updated very infrequently. It was last updated in November 2013. In October 2014 Matt Cutts announced that another visible pagerank update would not be coming.[42]In March 2016 Google announced it would no longer support this feature, and the underlying API would soon cease to operate.[43]On April 15, 2016, Google turned off display of PageRank Data in Google Toolbar,[44]though the PageRank continued to be used internally to rank content in search results.[45]
The search engine results page (SERP) is the actual result returned by a search engine in response to a keyword query. The SERP consists of a list of links to web pages with associated text snippets, paid ads, featured snippets, and Q&A. The SERP rank of a web page refers to the placement of the corresponding link on the SERP, where higher placement means higher SERP rank. The SERP rank of a web page is a function not only of its PageRank, but of a relatively large and continuously adjusted set of factors (over 200).[46][unreliable source?] Search engine optimization (SEO) is aimed at influencing the SERP rank for a website or a set of web pages.
Positioning of a webpage on Google SERPs for a keyword depends on relevance and reputation, also known as authority and popularity. PageRank is Google's indication of its assessment of the reputation of a webpage: It is non-keyword specific. Google uses a combination of webpage and website authority to determine the overall authority of a webpage competing for a keyword.[47]The PageRank of the HomePage of a website is the best indication Google offers for website authority.[48]
After the introduction ofGoogle Placesinto the mainstream organic SERP, numerous other factors in addition to PageRank affect ranking a business in Local Business Results.[49]When Google elaborated on the reasons for PageRank deprecation at Q&A #March 2016, they announced Links and Content as the Top Ranking Factors. RankBrain had earlier in October 2015 been announced as the #3 Ranking Factor, so the Top 3 Factors have been confirmed officially by Google.[50]
TheGoogle DirectoryPageRank was an 8-unit measurement. Unlike the Google Toolbar, which shows a numeric PageRank value upon mouseover of the green bar, the Google Directory only displayed the bar, never the numeric values. Google Directory was closed on July 20, 2011.[51]
It was known that the PageRank shown in the Toolbar could easily be spoofed. Redirection from one page to another, either via an HTTP 302 response or a "Refresh" meta tag, caused the source page to acquire the PageRank of the destination page. Hence, a new page with PR 0 and no incoming links could have acquired PR 10 by redirecting to the Google home page. Spoofing can usually be detected by performing a Google search for a source URL; if the URL of an entirely different site is displayed in the results, the latter URL may represent the destination of a redirection.
For search engine optimization purposes, some companies offer to sell high PageRank links to webmasters.[52] As links from higher-PR pages are believed to be more valuable, they tend to be more expensive. It can be an effective and viable marketing strategy to buy link advertisements on content pages of quality and relevant sites to drive traffic and increase a webmaster's link popularity. However, Google has publicly warned webmasters that if they are or were discovered to be selling links for the purpose of conferring PageRank and reputation, their links will be devalued (ignored in the calculation of other pages' PageRanks). The practice of buying and selling links[53] is intensely debated across the webmaster community. Google advised webmasters to use the nofollow HTML attribute value on paid links. According to Matt Cutts, Google is concerned about webmasters who try to game the system, and thereby reduce the quality and relevance of Google search results.[52]
In 2019, Google announced two additional link attributes providing hints about which links to consider or exclude within Search: rel="ugc" as a tag for user-generated content, such as comments; and rel="sponsored" as a tag for advertisements or other types of sponsored content. Multiple rel values are also allowed; for example, rel="ugc sponsored" can be used to hint that the link came from user-generated content and is sponsored.[54]
Even though PageRank has become less important for SEO purposes, the existence of back-links from more popular websites continues to push a webpage higher up in search rankings.[55]
Another variant models a more intelligent surfer who probabilistically hops from page to page depending on the content of the pages and the query terms the surfer is looking for. This model is based on a query-dependent PageRank score of a page, which, as the name suggests, is also a function of the query. When given a multiple-term query Q = {q1, q2, ...}, the surfer selects a q according to some probability distribution P(q) and uses that term to guide its behavior for a large number of steps. It then selects another term according to the distribution to determine its behavior, and so on. The resulting distribution over visited web pages is QD-PageRank.[56]
The mathematics of PageRank are entirely general and apply to any graph or network in any domain. Thus, PageRank is now regularly used in bibliometrics, social and information network analysis, and for link prediction and recommendation. It is used for systems analysis of road networks, and in biology, chemistry, neuroscience, and physics.[57]
PageRank has been used to quantify the scientific impact of researchers. The underlying citation and collaboration networks are used in conjunction with the PageRank algorithm to produce a ranking system for individual publications, which propagates to individual authors. The new index, known as the pagerank-index (Pi), is demonstrated to be fairer than the h-index, which exhibits a number of known drawbacks.[58]
For the analysis of protein networks in biology PageRank is also a useful tool.[59][60]
In any ecosystem, a modified version of PageRank may be used to determine species that are essential to the continuing health of the environment.[61]
A similar newer use of PageRank is to rank academic doctoral programs based on their records of placing their graduates in faculty positions. In PageRank terms, academic departments link to each other by hiring their faculty from each other (and from themselves).[62]
A version of PageRank has recently been proposed as a replacement for the traditional Institute for Scientific Information (ISI) impact factor,[63] and implemented at Eigenfactor as well as at SCImago. Instead of merely counting total citations to a journal, the "importance" of each citation is determined in a PageRank fashion.
In neuroscience, the PageRank of a neuron in a neural network has been found to correlate with its relative firing rate.[64]
Personalized PageRank is used by Twitter to present users with other accounts they may wish to follow.[65]
Swiftype's site search product builds a "PageRank that's specific to individual websites" by looking at each website's signals of importance and prioritizing content based on factors such as number of links from the home page.[66]
A Web crawler may use PageRank as one of a number of importance metrics it uses to determine which URL to visit during a crawl of the web. One of the early working papers[67] that were used in the creation of Google is Efficient Crawling Through URL Ordering,[68] which discusses the use of a number of different importance metrics to determine how deeply, and how much of a site Google will crawl. PageRank is presented as one of a number of these importance metrics, though there are others listed, such as the number of inbound and outbound links for a URL, and the distance from the root directory on a site to the URL.
PageRank may also be used as a methodology to measure the apparent impact of a community like the blogosphere on the overall Web itself. This approach therefore uses PageRank to measure the distribution of attention in reflection of the scale-free network paradigm.[citation needed]
In 2005, in a pilot study in Pakistan, Structural Deep Democracy, SD2,[69][70] was used for leadership selection in a sustainable agriculture group called Contact Youth. SD2 uses PageRank for the processing of the transitive proxy votes, with the additional constraints of mandating at least two initial proxies per voter, and that all voters are proxy candidates. More complex variants can be built on top of SD2, such as adding specialist proxies and direct votes for specific issues, but SD2, as the underlying umbrella system, mandates that generalist proxies should always be used.
In sport the PageRank algorithm has been used to rank the performance of: teams in the National Football League (NFL) in the USA;[71]individual soccer players;[72]and athletes in the Diamond League.[73]
PageRank has been used to rank spaces or streets to predict how many people (pedestrians or vehicles) come to the individual spaces or streets.[74][75] In lexical semantics it has been used to perform Word Sense Disambiguation,[76] semantic similarity,[77] and also to automatically rank WordNet synsets according to how strongly they possess a given semantic property, such as positivity or negativity.[78]
How a traffic system changes its operational mode can be described by transitions between quasi-stationary states in correlation structures of traffic flow. PageRank has been used to identify and explore the dominant states among these quasi-stationary states in traffic systems.[79]
In early 2005, Google implemented a new value, "nofollow",[80] for the rel attribute of HTML link and anchor elements, so that website developers and bloggers can make links that Google will not consider for the purposes of PageRank—they are links that no longer constitute a "vote" in the PageRank system. The nofollow relationship was added in an attempt to help combat spamdexing.
As an example, people could previously create many message-board posts with links to their website to artificially inflate their PageRank. With the nofollow value, message-board administrators can modify their code to automatically insert "rel='nofollow'" to all hyperlinks in posts, thus preventing PageRank from being affected by those particular posts. This method of avoidance, however, also has various drawbacks, such as reducing the link value of legitimate comments. (See:Spam in blogs#nofollow)
In an effort to manually control the flow of PageRank among pages within a website, many webmasters practice what is known as PageRank Sculpting[81]—which is the act of strategically placing the nofollow attribute on certain internal links of a website in order to funnel PageRank towards those pages the webmaster deemed most important. This tactic had been used since the inception of the nofollow attribute, but may no longer be effective since Google announced that blocking PageRank transfer with nofollow does not redirect that PageRank to other links.[82]
|
https://en.wikipedia.org/wiki/PageRank
|
An ultramicroscope is a microscope with a system that lights the object in a way that allows viewing of tiny particles via light scattering, and not light reflection or absorption. When the diameter of a particle is below or near the wavelength of visible light (around 500 nanometers), the particle cannot be seen in a light microscope with the usual methods of illumination. The ultra- in ultramicroscope refers to the ability to see objects whose diameter is shorter than the wavelength of visible light, on the model of the ultra- in ultraviolet.
In the system, the particles to be observed are dispersed in a liquid or gas colloid (or less often in a coarser suspension). The colloid is placed in a light-absorbing, dark enclosure and illuminated with a convergent beam of intense light entering from one side. Light hitting the colloid particles will be scattered. In discussions about light scattering, the converging beam is called a "Tyndall cone". The scene is viewed through an ordinary microscope placed at right angles to the direction of the light beam. Under the microscope, the individual particles will appear as small fuzzy spots of light moving irregularly. The spots are inherently fuzzy because light scattering produces fuzzier images than light reflection. The particles are in Brownian motion in most kinds of liquid and gas colloids, which causes the movement of the spots. The ultramicroscope system can also be used to observe tiny nontransparent particles dispersed in a transparent solid or gel.
Ultramicroscopes have been used for general observation of aerosols and colloids, in studying Brownian motion, in observing ionization tracks in cloud chambers, and in studying biological ultrastructure.
In 1902, the ultramicroscope was developed by Richard Adolf Zsigmondy (1865–1929) and Henry Siedentopf (1872–1940), working for Carl Zeiss AG.[1] Applying bright sunlight for illumination, they were able to determine the size of nanoparticles as small as 4 nm in cranberry glass. Zsigmondy further improved the ultramicroscope and presented the immersion ultramicroscope in 1912, allowing the observation of suspended nanoparticles in defined fluidic volumes.[2][3] In 1925, he was awarded the Nobel Prize in Chemistry for his research on colloids and the ultramicroscope.
Later, the development of electron microscopes provided additional ways to see objects too small for light microscopy.
|
https://en.wikipedia.org/wiki/Ultramicroscope
|
In computing, data recovery is a process of retrieving deleted, inaccessible, lost, corrupted, damaged, overwritten or formatted data from secondary storage, removable media or files, when the data stored in them cannot be accessed in a usual way.[1] The data is most often salvaged from storage media such as internal or external hard disk drives (HDDs), solid-state drives (SSDs), USB flash drives, magnetic tapes, CDs, DVDs, RAID subsystems, and other electronic devices. Recovery may be required due to physical damage to the storage devices or logical damage to the file system that prevents it from being mounted by the host operating system (OS).[1]
Logical failures occur when the hard drive devices are functional but the user or the operating system cannot retrieve or access data stored on them. Logical failures can occur due to corruption of the engineering chip, lost partitions, firmware failure, or failures during formatting/re-installation.[2][3]
Data recovery can range from a very simple procedure to a significant technical challenge, which is why there are software companies that specialize in this field.[4]
The most common data recovery scenarios involve an operating system failure, malfunction of a storage device, logical failure of storage devices, accidental damage or deletion, etc. (typically, on a single-drive, single-partition, single-OS system), in which case the ultimate goal is simply to copy all important files from the damaged media to another new drive. This can be accomplished using a Live CD or DVD, by booting directly from a ROM or a USB drive instead of the corrupted drive in question. Many Live CDs or DVDs provide a means to mount the system drive and backup drives or removable media, and to move the files from the system drive to the backup media with a file manager or optical disc authoring software. Such cases can often be mitigated by disk partitioning and consistently storing valuable data files (or copies of them) on a different partition from the replaceable OS system files.
Another scenario involves a drive-level failure, such as a compromised file system or drive partition, or a hard disk drive failure. In any of these cases, the data is not easily read from the media devices. Depending on the situation, solutions involve repairing the logical file system, partition table, or master boot record, or updating the firmware, or drive recovery techniques ranging from software-based recovery of corrupted data, to hardware- and software-based recovery of damaged service areas (also known as the hard disk drive's "firmware"), to hardware replacement on a physically damaged drive which allows for the extraction of data to a new drive. If a drive recovery is necessary, the drive itself has typically failed permanently, and the focus is rather on a one-time recovery, salvaging whatever data can be read.
In a third scenario, files have been accidentally "deleted" from a storage medium by the users. Typically, the contents of deleted files are not removed immediately from the physical drive; instead, references to them in the directory structure are removed, and the space they occupy is made available for later overwriting. From the end user's perspective, deleted files are not discoverable through a standard file manager, but the deleted data still technically exist on the physical drive. In the meantime, the original file contents remain, often in a number of disconnected fragments, and may be recoverable if not overwritten by other data files.
The term "data recovery" is also used in the context offorensicapplications orespionage, where data which have beenencrypted, hidden, or deleted, rather than damaged, are recovered. Sometimes data present in the computer gets encrypted or hidden due to reasons like virus attacks which can only be recovered by some computer forensic experts.
A wide variety of failures can cause physical damage to storage media, which may result from human errors and natural disasters. CD-ROMs can have their metallic substrate or dye layer scratched off; hard disks can suffer from a multitude of mechanical failures, such as head crashes, PCB failure, and failed motors; tapes can simply break.
Physical damage to a hard drive, even in cases where a head crash has occurred, does not necessarily mean permanent data loss. However, in extreme cases, such as prolonged exposure to moisture and corrosion — like the lost Bitcoin hard drive of James Howells, buried in the Newport landfill for over a decade — recovery is usually impossible. In rare cases, forensic techniques like Magnetic Force Microscopy (MFM) have been explored to detect residual magnetic traces when data holds exceptional value.[5] Other techniques employed by many professional data recovery companies can typically salvage most, if not all, of the data that had been lost when the failure occurred.
Of course, there are exceptions to this, such as cases where severe damage to the hard drive platters may have occurred. However, if the hard drive can be repaired and a full image or clone created, then the logical file structure can be rebuilt in most instances.
Most physical damage cannot be repaired by end users. For example, opening a hard disk drive in a normal environment can allow airborne dust to settle on the platter and become caught between the platter and the read/write head. During normal operation, read/write heads float 3 to 6 nanometers above the platter surface, and the average dust particles found in a normal environment are typically around 30,000 nanometers in diameter.[6] When these dust particles get caught between the read/write heads and the platter, they can cause new head crashes that further damage the platter and thus compromise the recovery process. Furthermore, end users generally do not have the hardware or technical expertise required to make these repairs. Consequently, data recovery companies are often employed to salvage important data, with the more reputable ones using class 100 dust- and static-free cleanrooms.[7]
Recovering data from physically damaged hardware can involve multiple techniques. Some damage can be repaired by replacing parts in the hard disk. This alone may make the disk usable, but there may still be logical damage. A specialized disk-imaging procedure is used to recover every readable bit from the surface. Once this image is acquired and saved on a reliable medium, the image can be safely analyzed for logical damage and will possibly allow much of the original file system to be reconstructed.
A common misconception is that a damaged printed circuit board (PCB) may be simply replaced during recovery procedures by an identical PCB from a healthy drive. While this may work in rare circumstances on hard disk drives manufactured before 2003, it will not work on newer drives. Electronics boards of modern drives usually contain drive-specific adaptation data (generally a map of bad sectors and tuning parameters) and other information required to properly access data on the drive. Replacement boards often need this information to effectively recover all of the data. The replacement board may need to be reprogrammed. Some manufacturers (Seagate, for example) store this information on a serial EEPROM chip, which can be removed and transferred to the replacement board.[8][9]
Each hard disk drive has what is called a system area or service area; this portion of the drive, which is not directly accessible to the end user, usually contains the drive's firmware and adaptive data that help the drive operate within normal parameters.[10] One function of the system area is to log defective sectors within the drive, essentially telling the drive where it can and cannot write data.
The sector lists are also stored on various chips attached to the PCB, and they are unique to each hard disk drive. If the data on the PCB do not match what is stored on the platter, then the drive will not calibrate properly.[11]In most cases the drive heads will click because they are unable to find the data matching what is stored on the PCB.
The term "logical damage" refers to situations in which the error is not a problem in the hardware and requires software-level solutions.
In some cases, data on a hard disk drive can be unreadable due to damage to the partition table or file system, or to (intermittent) media errors. In the majority of these cases, at least a portion of the original data can be recovered by repairing the damaged partition table or file system using specialized data recovery software such as TestDisk; software like ddrescue can image media despite intermittent errors, and image raw data when there is partition table or file system damage. This type of data recovery can be performed by people without expertise in drive hardware, as it requires no special physical equipment or access to platters.
Sometimes data can be recovered using relatively simple methods and tools;[12] more serious cases can require expert intervention, particularly if parts of files are irrecoverable. Data carving is the recovery of parts of damaged files using knowledge of their structure.
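As a toy illustration of data carving, the sketch below scans a raw disk image for JPEG files by looking for their start-of-image and end-of-image byte signatures. Real carving tools handle fragmentation, many more file types, and damaged structures; the file names and paths here are hypothetical placeholders:

```python
def carve_jpegs(image_path, out_prefix="carved"):
    """Very small data-carving sketch: extract contiguous JPEGs from a raw image."""
    SOI = b"\xff\xd8\xff"   # JPEG start-of-image marker
    EOI = b"\xff\xd9"       # JPEG end-of-image marker

    with open(image_path, "rb") as f:
        data = f.read()     # fine for small images; real tools stream the data

    count = 0
    pos = 0
    while True:
        start = data.find(SOI, pos)
        if start == -1:
            break
        end = data.find(EOI, start)
        if end == -1:
            break
        with open(f"{out_prefix}_{count}.jpg", "wb") as out:
            out.write(data[start:end + len(EOI)])
        count += 1
        pos = end + len(EOI)
    return count

# Example (hypothetical path): carve_jpegs("disk.img")
```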
After data has been physically overwritten on a hard disk drive, it is generally assumed that the previous data are no longer possible to recover. In 1996, Peter Gutmann, a computer scientist, presented a paper that suggested overwritten data could be recovered through the use of magnetic force microscopy.[13] In 2001, he presented another paper on a similar topic.[14] To guard against this type of data recovery, Gutmann and Colin Plumb designed a method of irreversibly scrubbing data, known as the Gutmann method and used by several disk-scrubbing software packages.
Substantial criticism has followed, primarily dealing with the lack of any concrete examples of significant amounts of overwritten data being recovered.[15] Gutmann's article contains a number of errors and inaccuracies, particularly regarding information about how data is encoded and processed on hard drives.[16] Although Gutmann's theory may be correct, there is no practical evidence that overwritten data can be recovered, and research supports the conclusion that overwritten data cannot be recovered.[specify][17][18][19]
Solid-state drives (SSDs) overwrite data differently from hard disk drives (HDDs), which makes at least some of their data easier to recover. Most SSDs use flash memory to store data in pages and blocks, referenced by logical block addresses (LBAs), which are managed by the flash translation layer (FTL). When the FTL modifies a sector, it writes the new data to another location and updates the map so the new data appear at the target LBA. This leaves the pre-modification data in place, possibly across many generations, and recoverable by data recovery software.
Sometimes, data present in physical drives (internal/external hard disks, pen drives, etc.) gets lost, deleted or formatted due to circumstances like a virus attack, accidental deletion or accidental use of SHIFT+DELETE. In these cases, data recovery software is used to recover/restore the data files.
Among the logical failures of hard disks, a logical bad sector is the most common fault leading to data not being readable. Sometimes it is possible to sidestep error detection even in software, and perhaps, with repeated reading and statistical analysis, to recover at least some of the underlying stored data. Sometimes prior knowledge of the data stored and the error detection and correction codes can be used to recover even erroneous data. However, if the underlying physical drive is degraded badly enough, at least the hardware surrounding the data must be replaced, or it might even be necessary to apply laboratory techniques to the physical recording medium. Each of these approaches is progressively more expensive, and as such progressively more rarely sought.
Eventually, if the final, physical storage medium has indeed been disturbed badly enough, recovery will not be possible using any means; the information has irreversibly been lost.
Recovery experts do not always need to have physical access to the damaged hardware. When the lost data can be recovered by software techniques, they can often perform the recovery using remote access software over the Internet, LAN or other connection to the physical location of the damaged media. The process is essentially no different from what the end user could perform by themselves.[20]
Remote recovery requires a stable connection with an adequate bandwidth. However, it is not applicable where access to the hardware is required, as in cases of physical damage.
Usually, there are four phases when it comes to successful data recovery, though that can vary depending on the type of data corruption and recovery required.[21]
TheWindowsoperating system can be reinstalled on a computer that is already licensed for it. The reinstallation can be done by downloading the operating system or by using a "restore disk" provided by the computer manufacturer. Eric Lundgren was fined and sentenced to U.S. federal prison in April 2018 for producing 28,000 restore disks and intending to distribute them for about 25 cents each as a convenience to computer repair shops.[22]
Data recovery cannot always be done on a running system. As a result, a boot disk, live CD, live USB, or any other type of live distro containing a minimal operating system can be used.
|
https://en.wikipedia.org/wiki/List_of_data_recovery_software
|
Bluetooth Mesh is a computer mesh networking standard based on Bluetooth Low Energy that allows for many-to-many communication over Bluetooth radio. The Bluetooth Mesh specifications were defined in the Mesh Profile[1] and Mesh Model[2] specifications by the Bluetooth Special Interest Group (Bluetooth SIG). Bluetooth Mesh was conceived in 2014[3] and adopted on July 13, 2017.[4]
Bluetooth Mesh is a mesh networking standard that operates on a flood network principle. It is based on the nodes relaying the messages: every relay node that receives a network packet it has not seen recently, and whose TTL still permits further hops, can retransmit it with TTL = TTL − 1. Message caching is used to prevent relaying recently seen messages.
Communication is carried in messages that may be up to 384 bytes long when using the Segmentation and Reassembly (SAR) mechanism, but most messages fit in one segment, which is 11 bytes. Each message starts with an opcode, which may be a single byte (for special messages), 2 bytes (for standard messages), or 3 bytes (for vendor-specific messages).
Every message has a source and a destination address, determining which devices process messages. Devices publish messages to destinations, which can be single devices, groups of devices, or all devices.
Each message has a sequence number that protects the network against replay attacks.
Each message is encrypted and authenticated. Two kinds of keys are used to secure messages: (1) network keys, allocated to a single mesh network, and (2) application keys, specific to a given application functionality, e.g. turning a light on versus reconfiguring the light.
Messages have a time to live (TTL). Each time a message is received and retransmitted, the TTL is decremented, which limits the number of "hops" and eliminates endless loops.
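Conceptually, a relay node's forwarding decision combines the message cache and the TTL rules described above. The following is only an illustrative sketch of that logic, not code from any actual Bluetooth Mesh stack:

```python
class RelayNode:
    """Toy model of flood relaying with a message cache and TTL handling."""

    def __init__(self, cache_size=128):
        self.cache = []                 # recently seen message identifiers
        self.cache_size = cache_size

    def handle(self, msg_id, ttl):
        # Drop messages that were already seen recently (duplicate/loop suppression).
        if msg_id in self.cache:
            return None
        self.cache.append(msg_id)
        if len(self.cache) > self.cache_size:
            self.cache.pop(0)

        # A TTL of 0 or 1 means the message must not be relayed further.
        if ttl <= 1:
            return None

        # Retransmit with TTL decremented by one.
        return ttl - 1

node = RelayNode()
print(node.handle(msg_id=42, ttl=5))   # -> 4 (relayed)
print(node.handle(msg_id=42, ttl=5))   # -> None (already in cache)
```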
Bluetooth Mesh has a layered architecture, with multiple layers as below.
Nodes that support the various optional features can be combined to form a particular mesh network topology:
Relay nodes retransmit received messages over the advertising bearer to enable larger networks.
Proxy nodes relay messages between the GATT and advertising bearers.
Low Power nodes operate within a mesh network at significantly reduced receiver duty cycles, but only in conjunction with a node supporting the Friend feature.
Friend nodes help Low Power nodes to operate by storing messages destined for those nodes.
The practical limits of Bluetooth Mesh technology are unknown. Some limits that are built into the specification include:
The number of virtual groups is 2^128.
As of version 1.0 of the Bluetooth Mesh specification,[2] the following standard models and model groups have been defined:
Foundation models have been defined in the core specification. Two of them are mandatory for all mesh nodes.
Provisioning is a process of installing the device into a network. It is a mandatory step to build a Bluetooth Mesh network.
In the provisioning process, a provisioner securely distributes a network key and a unique address space to a device. The provisioning protocol uses P-256 Elliptic Curve Diffie–Hellman Key Exchange to create a temporary key to encrypt the network key and other information. This provides security from a passive eavesdropper.
It also provides various authentication mechanisms to protect network information from an active eavesdropper who uses a man-in-the-middle attack during the provisioning process.
A key unique to a device, known as the "Device Key", is derived from the elliptic curve shared secret on the provisioner and the device during the provisioning process. This device key is used by the provisioner to encrypt messages for that specific device.
The security of the provisioning process has been analyzed in a paper presented during the IEEE CNS 2018 conference.[5]
The provisioning can be performed using a Bluetooth GATT connection or advertising using the specific bearer.[1]
Free software and open source software implementations include the following:
|
https://en.wikipedia.org/wiki/Bluetooth_mesh_networking
|
This is a list of feature films and documentaries that include mathematicians, scientists who use mathematics, or references to mathematicians.
Films where mathematics is central to the plot:
Biographical films based on real-life mathematicians:
Films where one or more of the main characters are mathematicians, but that are not otherwise about mathematics:
Films where one or more of the members of the main cast is a mathematician:
|
https://en.wikipedia.org/wiki/List_of_films_about_mathematicians
|
A network is an abstract structure capturing only the basics of connection patterns and little else. Because it is a generalized pattern, tools developed for analyzing, modeling and understanding networks can theoretically be implemented across disciplines. As long as a system can be represented by a network, there is an extensive set of tools – mathematical, computational, and statistical – that are well developed and, if understood, can be applied to the analysis of the system of interest.
Tools that are currently employed in risk assessment are often sufficient, but model complexity and limits on computational power can prevent risk assessors from incorporating more causal connections and accounting for more Black Swan event outcomes. By applying network theory tools to risk assessment, computational limitations may be overcome, resulting in broader coverage of events with a narrowed range of uncertainties.[1]
Decision-making processes are not incorporated into routine risk assessments; however, they play a critical role in such processes.[2] It is therefore very important for risk assessors to minimize confirmation bias by carrying out their analysis and publishing their results with minimal involvement of external factors such as politics, media, and advocates. In reality, however, it is nearly impossible to break the iron triangle among politicians, scientists (in this case, risk assessors), and advocates and media.[3] Risk assessors need to be sensitive to the difference between risk studies and risk perceptions.[4][5] One way to bring the two closer is to provide decision-makers with data they can easily rely on and understand. Employing networks in the risk analysis process can visualize causal relationships and identify heavily weighted or important contributors to the probability of the critical event.[6]
Bow-tie diagrams, cause-and-effect diagrams, Bayesian networks (a directed acyclic network) and fault trees are a few examples of how network theories can be applied in risk assessment.[7]
In epidemiology risk assessments (Figures 7 and 9), once a network model is constructed, we can visualize and then quantify and evaluate the potential exposure or infection risk of people related to the well-connected patients (Patients 1, 6, 35, 130 and 127 in Figure 7) or high-traffic places (Hotel M in Figure 9). In ecological risk assessments (Figure 8), through a network model we can identify the keystone species and determine how widely the impacts will extend from the potential hazards being investigated.
Risk assessment is a method for dealing with uncertainty. For it to be beneficial to the overall risk management and decision-making process, it must be able to capture extreme and catastrophic events. Risk assessment involves two parts, risk analysis and risk evaluation, although the term "risk assessment" is often used interchangeably with "risk analysis". In general, risk assessment can be divided into these steps:[8]
Naturally, the number of steps required varies with each assessment. It depends on the scope of the analysis and the complexity of the study object.[9] Because there are always varying degrees of uncertainty involved in any risk analysis process, sensitivity and uncertainty analyses are usually carried out to mitigate the level of uncertainty and therefore improve the overall risk assessment result.
A network is a simplified representation that reduces a system to an abstract structure. Simply put, it is a collection of points linked together by lines. Each point is known as a "vertex" (plural: "vertices") or "node", and each line as an "edge" or "link".[10] Network modeling and studying have already been applied in many areas, including computer, physical, biological, ecological, logistical and social science. Through the studying of these models, we gain insights into the nature of individual components (i.e. vertices), connections or interactions between those components (i.e. edges), as well as the pattern of connections (i.e. the network).
Undoubtedly, modifications of the structure (or pattern) of any given network can have a big effect on the behavior of the system it depicts. For example, connections in a social network affect how people communicate, exchange news, travel, and, less obviously, spread diseases. In order to gain better understanding of how each of these systems functions, some knowledge of the structure of the network is necessary.
Small-World Effect
Degree, Hubs, and Paths
Centrality
Components
Directed Networks
Weighted Network
Trees
Early social network studies can be traced back to the end of the nineteenth century. However, well-documented studies and the foundation of this field are usually attributed to a psychiatrist named Jacob Moreno. He published a book entitled Who Shall Survive? in 1934, which laid out the foundation for sociometry (later known as social network analysis).
Another famous contributor to the early development of social network analysis is the experimental psychologist Stanley Milgram. His "small-world" experiments gave rise to concepts such as six degrees of separation and well-connected acquaintances (also known as "sociometric superstars"). This experiment was recently repeated by Dodds et al. by means of email messages, and the basic results were similar to Milgram's. The estimated true average path length (that is, the number of edges an email message has to pass from one unique individual to the intended targets in different countries) for the experiment was around five to seven, which does not deviate much from the original six degrees of separation.[14]
A food web, or food chain, is an example of a directed network, which describes the prey-predator relationships in a given ecosystem. Vertices in this type of network represent species, and the edges the prey-predator relationships. A collection of species may be represented by a single vertex if all members in that collection prey upon and are preyed on by the same organisms. A food web is often acyclic, with few exceptions such as adults preying on juveniles, and parasitism.[15]
Epidemiology is closely related to social networks. Contagious diseases can spread through connection networks such as work spaces, transportation, intimate body contact and water systems (see Figures 7 and 9). Though they exist only virtually, computer viruses spreading across internet networks are not much different from their physical counterparts. Therefore, understanding each of these network patterns can no doubt aid us in predicting the outcomes of epidemics more precisely and in preparing better disease prevention protocols.
The simplest model of infection is presented as an SI (susceptible–infected) model. Most diseases, however, do not behave in such a simple manner. Therefore, many modifications to this model were made, such as the SIR (susceptible–infected–recovered), the SIS (the second S denotes reinfection) and SIRS models. The idea of latency is taken into account in models such as SEIR (where E stands for exposed). The SIR model is also known as the Reed–Frost model.[16]
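For reference, in its simplest fully mixed (non-network) form, the SIR model can be written as the system of differential equations

\frac{dS}{dt} = -\beta S I, \qquad \frac{dI}{dt} = \beta S I - \gamma I, \qquad \frac{dR}{dt} = \gamma I,

where S, I and R are the fractions of susceptible, infected and recovered individuals, β is the transmission rate and γ is the recovery rate.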
To factor these into an outbreak network model, one must consider the degree distributions of the vertices in the giant component of the network (outbreaks in small components are isolated and die out quickly, which does not allow them to become epidemics). Theoretically, weighted networks can provide more accurate information on the exposure probability of vertices, but further proof is needed. Pastor-Satorras et al. pioneered much work in this area, which began with the simplest form (the SI model) applied to networks drawn from the configuration model.[17]
The biology of how an infection causes disease in an individual is complicated and is another type of disease pattern specialists are interested in (a process known as pathogenesis, which involves the immunology of the host and virulence factors of the pathogen).
|
https://en.wikipedia.org/wiki/Network_theory_in_risk_assessment
|
Software architecture is the set of structures needed to reason about a software system and the discipline of creating such structures and systems. Each structure comprises software elements, relations among them, and properties of both elements and relations.[1]
The architecture of a software system is a metaphor, analogous to the architecture of a building.[2] It functions as the blueprints for the system and the development project, which project management can later use to extrapolate the tasks necessary to be executed by the teams and people involved.
Software architecture is about making fundamental structural choices that are costly to change once implemented. Software architecture choices include specific structural options from possibilities in the design of the software. There are two fundamental laws in software architecture:[3][4]
An "architectural kata" is a team exercise that can be used to produce an architectural solution that fits the needs. Each team extracts and prioritizes architectural characteristics (a.k.a. non-functional requirements), then models the components accordingly. The team can use the C4 Model, which is a flexible method to model the architecture just enough. Note that synchronous communication between architectural components entangles them, and they must share the same architectural characteristics.[4]
Documenting software architecture facilitates communication between stakeholders, captures early decisions about the high-level design, and allows the reuse of design components between projects.[5]: 29–35
Software architecture design is commonly juxtaposed with software application design. Whilst application design focuses on the design of the processes and data supporting the required functionality (the services offered by the system), software architecture design focuses on designing the infrastructure within which application functionality can be realized and executed such that the functionality is provided in a way which meets the system's non-functional requirements.
Software architectures can be categorized into two main types, monolithic and distributed architecture, each having its own subcategories.[4]
Software architecture tends to become more complex over time. Software architects should use "fitness functions" to continuously keep the architecture in check.[4]
Opinions vary as to the scope of software architectures:[6]
There is no sharp distinction between software architecture versus design and requirements engineering (see Related fields below). They are all part of a "chain of intentionality" from high-level intentions to low-level details.[12]: 18
Software Architecture Pattern refers to a reusable, proven solution to a recurring problem at the system level, addressing concerns related to the overall structure, component interactions, and quality attributes of the system. Software architecture patterns operate at a higher level of abstraction than software design patterns, solving broader system-level challenges. While these patterns typically affect system-level concerns, the distinction between architectural patterns and architectural styles can sometimes be blurry. Examples include Circuit Breaker.[13][14][15]
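As an illustration of such a pattern, the following is a minimal sketch of a Circuit Breaker in Python. The class name, thresholds, and timing are arbitrary choices for the example; production implementations add half-open probing, metrics, and concurrency handling:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: stop calling a failing dependency for a cool-down period."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None              # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: dependency calls are short-circuited")
            # Cool-down elapsed: allow a trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.time()   # trip the breaker
            raise
        self.failures = 0                       # success resets the failure count
        return result
```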
Software Architecture Style refers to a high-level structural organization that defines the overall system organization, specifying how components are organized, how they interact, and the constraints on those interactions. Architecture styles typically include a vocabulary of component and connector types, as well as semantic models for interpreting the system's properties. These styles represent the most coarse-grained level of system organization. Examples include Layered Architecture, Microservices, and Event-Driven Architecture.[13][14][15]
The following architectural anti-patterns can arise when architects make decisions. These anti-patterns often follow a progressive sequence, where resolving one may lead to the emergence of another.[4]
Software architecture exhibits the following:
Multitude of stakeholders: software systems have to cater to a variety of stakeholders such as business managers, owners, users, and operators. These stakeholders all have their own concerns with respect to the system. Balancing these concerns and demonstrating that they are addressed is part of designing the system.[5]: 29–31 This implies that architecture involves dealing with a broad variety of concerns and stakeholders, and has a multidisciplinary nature.
Separation of concerns: the established way for architects to reduce complexity is to separate the concerns that drive the design. Architecture documentation shows that all stakeholder concerns are addressed by modeling and describing the architecture from separate points of view associated with the various stakeholder concerns.[16] These separate descriptions are called architectural views (see, for example, the 4+1 architectural view model).
Quality-driven: classic software design approaches (e.g. Jackson Structured Programming) were driven by required functionality and the flow of data through the system, but the current insight[5]: 26–28 is that the architecture of a software system is more closely related to its quality attributes such as fault tolerance, backward compatibility, extensibility, reliability, maintainability, availability, security, usability, and other such -ilities. Stakeholder concerns often translate into requirements on these quality attributes, which are variously called non-functional requirements, extra-functional requirements, behavioral requirements, or quality attribute requirements.
Recurring styles: like building architecture, the software architecture discipline has developed standard ways to address recurring concerns. These "standard ways" are called by various names at various levels of abstraction. Common terms for recurring solutions are architectural style,[12]: 273–277 tactic,[5]: 70–72 reference architecture and architectural pattern.[17][18][5]: 203–205
Conceptual integrity: a term introduced by Fred Brooks in his 1975 book The Mythical Man-Month to denote the idea that the architecture of a software system represents an overall vision of what it should do and how it should do it. This vision should be separated from its implementation. The architect assumes the role of "keeper of the vision", making sure that additions to the system are in line with the architecture, hence preserving conceptual integrity.[19]: 41–50
Cognitive constraints: an observation first made in a 1967 paper by computer programmer Melvin Conway that organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.[20] Fred Brooks introduced it to a wider audience when he cited the paper and the idea in The Mythical Man-Month, calling it Conway's Law.
Software architecture is an "intellectually graspable" abstraction of a complex system.[5]: 5–6 This abstraction provides a number of benefits:
The comparison between software design and (civil) architecture was first drawn in the late 1960s,[23] but the term "software architecture" did not see widespread usage until the 1990s.[24] The field of computer science had encountered problems associated with complexity since its formation.[25] Earlier problems of complexity were solved by developers by choosing the right data structures, developing algorithms, and by applying the concept of separation of concerns. Although the term "software architecture" is relatively new to the industry, the fundamental principles of the field have been applied sporadically by software engineering pioneers since the mid-1980s. Early attempts to capture and explain software architecture of a system were imprecise and disorganized, often characterized by a set of box-and-line diagrams.[26]
Software architecture as a concept has its origins in the research of Edsger Dijkstra in 1968 and David Parnas in the early 1970s. These scientists emphasized that the structure of a software system matters and getting the structure right is critical. During the 1990s there was a concerted effort to define and codify fundamental aspects of the discipline, with research work concentrating on architectural styles (patterns), architecture description languages, architecture documentation, and formal methods.[27]
Research institutions have played a prominent role in furthering software architecture as a discipline. Mary Shaw and David Garlan of Carnegie Mellon wrote a book titled Software Architecture: Perspectives on an Emerging Discipline in 1996, which promoted software architecture concepts such as components, connectors, and styles. The University of California, Irvine's Institute for Software Research's efforts in software architecture research are directed primarily at architectural styles, architecture description languages, and dynamic architectures.
IEEE 1471-2000, "Recommended Practice for Architecture Description of Software-Intensive Systems", was the first formal standard in the area of software architecture. It was adopted in 2007 by ISO asISO/IEC 42010:2007. In November 2011, IEEE 1471–2000 was superseded byISO/IEC/IEEE 42010:2011, "Systems and software engineering – Architecture description" (jointly published by IEEE and ISO).[16]
While inIEEE 1471, software architecture was about the architecture of "software-intensive systems", defined as "any system where software contributes essential influences to the design, construction, deployment, and evolution of the system as a whole", the 2011 edition goes a step further by including theISO/IEC 15288andISO/IEC 12207definitions of a system, which embrace not only hardware and software, but also "humans, processes, procedures, facilities, materials and naturally occurring entities". This reflects the relationship between software architecture,enterprise architectureandsolution architecture.
Making architectural decisions involves collecting sufficient relevant information, providing justification for the decision, documenting the decision and its rationale, and communicating it effectively to the appropriate stakeholders.[4]
It is the software architect's responsibility to match architectural characteristics (also known as non-functional requirements) with business requirements. For example:[4]
There are four core activities in software architecture design.[28]These core architecture activities are performed iteratively and at different stages of the initial software development life-cycle, as well as over the evolution of a system.
Architectural analysisis the process of understanding the environment in which a proposed system will operate and determining the requirements for the system. The input or requirements to the analysis activity can come from any number of stakeholders and include items such as:
The outputs of the analysis activity are those requirements that have a measurable impact on a software system's architecture, called architecturally significant requirements.[31]
Architectural synthesisor design is the process of creating an architecture. Given the architecturally significant requirements determined by the analysis, the current state of the design and the results of any evaluation activities, the design is created and improved.[28][5]: 311–326
Architecture evaluation is the process of determining how well the current design or a portion of it satisfies the requirements derived during analysis. An evaluation can occur while an architect is considering a design decision, after some portion of the design has been completed, after the final design has been completed, or after the system has been constructed. Some of the available software architecture evaluation techniques include the Architecture Tradeoff Analysis Method (ATAM) and TARA.[32] Frameworks for comparing these techniques are discussed in the SARA Report[21] and in Architecture Reviews: Practice and Experience.[33]
Architecture evolutionis the process of maintaining and adapting an existing software architecture to meet changes in requirements and environment. As software architecture provides a fundamental structure of a software system, its evolution and maintenance would necessarily impact its fundamental structure. As such, architecture evolution is concerned with adding new functionality as well as maintaining existing functionality and system behavior.
Architecture requires critical supporting activities. These supporting activities take place throughout the core software architecture process. They include knowledge management and communication, design reasoning and decision-making, and documentation.
Software architecture supporting activities are carried out during core software architecture activities. These supporting activities assist a software architect to carry out analysis, synthesis, evaluation, and evolution. For instance, an architect has to gather knowledge, make decisions, and document during the analysis phase.
Software architecture inherently deals with uncertainties, and the size of architectural components can significantly influence a system's outcomes, both positively and negatively. Neal Ford and Mark Richards propose an iterative approach to address the challenge of identifying and right-sizing components. This method emphasizes continuous refinement as teams develop a more nuanced understanding of system behavior and requirements.[4]
The approach typically involves a cycle with several stages:[4]
This cycle serves as a general framework and can be adapted to different domains.
There are also concerns that software architecture leads to too muchbig design up front, especially among proponents ofagile software development. A number of methods have been developed to balance the trade-offs of up-front design and agility,[38]including the agile methodDSDMwhich mandates a "Foundations" phase during which "just enough" architectural foundations are laid.IEEE Softwaredevoted a special issue to the interaction between agility and architecture.
Software architecture erosion refers to a gradual gap between the intended and implemented architecture of a software system over time.[39]The phenomenon of software architecture erosion was initially brought to light in 1992 by Perry and Wolf alongside their definition of software architecture.[2]
Software architecture erosion may occur in each stage of the software development life cycle and has varying impacts on development speed and the cost of maintenance. It occurs for various reasons, such as architectural violations, the accumulation of technical debt, and knowledge vaporization.[40] A famous case of architecture erosion is the failure of the Mozilla Web browser.[41] Mozilla is an application created by Netscape with a complex codebase that became harder to maintain due to continuous changes. Owing to the initial poor design and growing architecture erosion, Netscape spent two years redeveloping the Mozilla Web browser, demonstrating the importance of proactive architecture management in preventing costly repairs and project delays.
Architecture erosion can decrease software performance, substantially increase evolutionary costs, and degrade software quality. Various approaches and tools have been proposed to detect architecture erosion. These approaches are primarily classified into four categories: consistency-based, evolution-based, defect-based, and decision-based approaches.[39]For instance, automated architecture conformance checks, static code analysis tools, and refactoring techniques help identify and mitigate erosion early.
In addition, the measures used to address architecture erosion fall into two main types: preventative and remedial measures.[39] Preventative measures include enforcing architectural rules, regular code reviews, and automated testing, while remedial measures involve refactoring, redesign, and documentation updates.
Software architecture recovery (or reconstruction, orreverse engineering) includes the methods, techniques, and processes to uncover a software system's architecture from available information, including its implementation and documentation. Architecture recovery is often necessary to make informed decisions in the face of obsolete or out-of-date documentation andarchitecture erosion: implementation and maintenance decisions diverging from the envisioned architecture.[42]Practices exist to recover software architecture asstatic program analysis. This is a part of the subjects covered by thesoftware intelligencepractice.
Architecture isdesignbut not all design is architectural.[1]In practice, the architect is the one who draws the line between software architecture (architectural design) and detailed design (non-architectural design). There are no rules or guidelines that fit all cases, although there have been attempts to formalize the distinction.
According to theIntension/Locality Hypothesis,[43]the distinction between architectural and detailed design is defined by theLocality Criterion,[43]according to which a statement about software design is non-local (architectural) if and only if a program that satisfies it can be expanded into a program that does not. For example, theclient–serverstyle is architectural (strategic) because a program that is built on this principle can be expanded into a program that is not client–server—for example, by addingpeer-to-peernodes.
Requirements engineeringand software architecture can be seen as complementary approaches: while software architecture targets the 'solution space' or the 'how', requirements engineering addresses the 'problem space' or the 'what'.[44]Requirements engineering entails theelicitation,negotiation,specification,validation,documentation, andmanagementofrequirements. Both requirements engineering and software architecture revolve aroundstakeholderconcerns, needs, and wishes.
There is considerable overlap between requirements engineering and software architecture, as evidenced for example by a study into five industrial software architecture methods that concludes that"the inputs (goals, constraints, etc.) are usually ill-defined, and only get discovered or better understood as the architecture starts to emerge"and that while"most architectural concerns are expressed as requirements on the system, they can also include mandated design decisions".[28]In short, required behavior impacts solution architecture, which in turn may introduce new requirements.[45]Approaches such as the Twin Peaks model[46]aim to exploit thesynergisticrelation between requirements and architecture.
|
https://en.wikipedia.org/wiki/Software_architecture
|
Johann Bernoulli[a](also known asJeanin French orJohnin English; 6 August [O.S.27 July] 1667 – 1 January 1748) was aSwissmathematician and was one of the many prominent mathematicians in theBernoulli family. He is known for his contributions toinfinitesimal calculusand educatingLeonhard Eulerin the pupil's youth.
Johann was born in Basel, the son of Nicolaus Bernoulli, an apothecary, and his wife, Margarethe Schongauer, and began studying medicine at the University of Basel. His father desired that he study business so that he might take over the family spice trade, but Johann Bernoulli did not like business and convinced his father to allow him to study medicine instead. Johann Bernoulli began studying mathematics on the side with his older brother Jacob Bernoulli.[5] Throughout Johann Bernoulli's education at Basel University, the Bernoulli brothers worked together, spending much of their time studying the newly discovered infinitesimal calculus. They were among the first mathematicians not only to study and understand calculus but to apply it to various problems.[6] In 1690,[7] he completed a degree dissertation in medicine,[8] reviewed by Gottfried Leibniz,[7] whose title was De motu musculorum et de effervescentia et fermentatione.[9]
After graduating from Basel University, Johann Bernoulli moved to teachdifferential equations. Later, in 1694, he married Dorothea Falkner, the daughter of analdermanof Basel, and soon after accepted a position as the professor of mathematics at theUniversity of Groningen. At the request of hisfather-in-law, Bernoulli began the voyage back to his home town of Basel in 1705. Just after setting out on the journey he learned of his brother's death totuberculosis. Bernoulli had planned on becoming the professor of Greek at Basel University upon returning but instead was able to take over as professor of mathematics, his older brother's former position. As a student ofLeibniz's calculus, Bernoulli sided with him in 1713 in theLeibniz–Newton debateover who deserved credit for the discovery of calculus. Bernoulli defended Leibniz by showing that he had solved certain problems with his methods thatNewtonhad failed to solve. Bernoulli also promotedDescartes'vortex theoryoverNewton's theory of gravitation. This ultimately delayed acceptance of Newton's theory incontinental Europe.[10]
In 1724, Johann Bernoulli entered a competition sponsored by the FrenchAcadémie Royale des Sciences, which posed the question:
In defending a view previously espoused by Leibniz, he found himself postulating an infinite external force required to make the body elastic by overcoming the infinite internal force making the body hard. In consequence, he was disqualified for the prize, which was won byMaclaurin. However, Bernoulli's paper was subsequently accepted in 1726 when the Académie considered papers regarding elastic bodies, for which the prize was awarded to Pierre Mazière. Bernoulli received an honourable mention in both competitions.
Although Johann and his brother Jacob Bernoulli worked together before Johann graduated from Basel University, shortly after this, the two developed a jealous and competitive relationship. Johann was jealous of Jacob's position and the two often attempted to outdo each other. After Jacob's death, Johann's jealousy shifted toward his own talented son,Daniel. In 1738 the father–son duo nearly simultaneously published separate works onhydrodynamics(Daniel'sHydrodynamicain 1738 and Johann'sHydraulicain 1743). Johann attempted to take precedence over his son by purposely and falsely predating his work six years prior to his son's.[11][12]
The Bernoulli brothers often worked on the same problems, but not without friction. Their most bitter dispute concerned thebrachistochrone curveproblem, or the equation for the path followed by a particle from one point to another in the shortest amount of time, if the particle is acted upon by gravity alone. Johann presented the problem in 1696, offering a reward for its solution. Entering the challenge, Johann proposed the cycloid, the path of a point on a moving wheel, also pointing out the relation this curve bears to the path taken by a ray of light passing through layers of varied density. Jacob proposed the same solution, but Johann's derivation of the solution was incorrect, and he presented his brother Jacob's derivation as his own.[13]
Bernoulli was hired byGuillaume de l'Hôpitalfor tutoring in mathematics. Bernoulli and l'Hôpital signed a contract which gave l'Hôpital the right to use Bernoulli's discoveries as he pleased. L'Hôpital authored the first textbook on infinitesimal calculus,Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbesin 1696, which mainly consisted of the work of Bernoulli, including what is now known asl'Hôpital's rule.[14][15][16]Subsequently, in letters to Leibniz,Pierre Varignonand others, Bernoulli complained that he had not received enough credit for his contributions, in spite of the preface of his book:
I recognize I owe much to the insights of the Messrs. Bernoulli, especially to those of the younger (John), currently a professor in Groningen. I did unceremoniously use their discoveries, as well as those of Mr. Leibniz. For this reason I consent that they claim as much credit as they please, and will content myself with what they will agree to leave me.
|
https://en.wikipedia.org/wiki/Johann_Bernoulli#Disputes_and_controversy
|
In the context ofsoftware quality,defect criticalityis a measure of the impact of a software defect. It is defined as the product of severity, likelihood, and class.
Defects are different from user stories, and therefore their priority (severity) should be calculated from the defect criticality product defined above.
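For illustration, a minimal sketch of that product in C++, assuming hypothetical ordinal scales for severity, likelihood, and defect class (the definition above fixes only the multiplication, not the ranges or names used here):

```cpp
#include <iostream>

// Hypothetical ordinal scales; the definition above only says that
// criticality is the product severity x likelihood x class, so the
// particular ranges and names here are assumptions for illustration.
enum class Severity    { Low = 1, Medium = 2, High = 3, Critical = 4 };
enum class Likelihood  { Rare = 1, Occasional = 2, Frequent = 3, Constant = 4 };
enum class DefectClass { Cosmetic = 1, Functional = 2, DataLoss = 3 };

int defect_criticality(Severity s, Likelihood l, DefectClass c) {
    return static_cast<int>(s) * static_cast<int>(l) * static_cast<int>(c);
}

int main() {
    // A high-severity, frequently occurring functional defect: 3 * 3 * 2 = 18.
    std::cout << defect_criticality(Severity::High, Likelihood::Frequent,
                                    DefectClass::Functional) << '\n';
}
```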
|
https://en.wikipedia.org/wiki/Defect_criticality
|
The dollar sign, also known as the peso sign, is a currency symbol consisting of a capital ⟨S⟩ crossed with one or two vertical strokes (depending on typeface), used to indicate the unit of various currencies around the world, including most currencies denominated "dollar" or "peso". The explicitly double-barred sign is called cifrão in the Portuguese language.
The sign is also used in several compound currency symbols, such as theBrazilian real(R$) and theUnited States dollar(US$): in local use, the nationality prefix is usually omitted. In countries that have othercurrency symbols, the US dollar is often assumed and the "US" prefix omitted.
The one- and two-stroke versions are often considered mere stylistic (typeface) variants, although in some places and epochs one of them may have been specifically assigned, by law or custom, to a specific currency. TheUnicodecomputer encoding standard defines a single code for both.
In mostEnglish-speaking countries that use that symbol, it is placed to the left of the amount specified, e.g. "$1", read as "one dollar".
The symbol appears in business correspondence in the 1770s fromSpanish America, the early independent U.S., British America and Britain, referring to the Spanish American peso,[1][2]also known as "Spanish dollar" or "piece of eight" in British America. Those coins provided the model for the currency that the United States adopted in 1792, and for the larger coins of the new Spanish American republics, such as theMexican peso,Argentine peso,Peruvian real, andBolivian solcoins.
With theCoinage Act of 1792, the United States Congress created the U.S. dollar, defining it to have "the value of a Spanish milled dollar as the same is now current"[3][4]but a variety of foreign coins were deemed to belegal tenderuntil theCoinage Act of 1857ended this status.[5]
The earliest U.S. dollar coins did not have any dollar symbol. The first occurrence in print is claimed to be from the 1790s, by a Philadelphia printer, Archibald Binny, creator of the Monticello typeface.[6] The $1 United States Note issued by the United States in 1869 included a large symbol consisting of a "U" with the right bar overlapping an "S" like a single-bar dollar sign, as well as a very small double-stroke dollar sign in the legal warning against forgery.[7]
It is still uncertain, however, how the dollar sign came to represent the Spanish American peso. There are currently several competing hypotheses:
The following theories seem to have been discredited or contradicted by documentary evidence:
The numerous currencies called "dollar" use the dollar sign to express money amounts. The sign is also generally used for the many currencies called "peso" (except thePhilippine peso, which uses the symbol "₱"). Within a country the dollar/peso sign may be used alone. In other cases, and to avoid ambiguity in international usage, it is usually combined with other glyphs, e.g. CA$ or Can$ forCanadian dollar. Particularly in professional contexts, the unambiguousISO 4217 three letter code(AUD, MXN, USD, etc.) is preferred.
The dollar sign, alone or in combination with other glyphs, is or was used to denote several currencies with other names, including:
In the United States, Mexico, Australia, Argentina, Chile, Colombia, New Zealand, Hong Kong, Pacific Island nations, and English-speaking Canada, the sign is written before the number ("$5"), even though the word is written or spoken after it ("five dollars", "cinco pesos"). In French-speaking Canada, exceptionally, the dollar symbol usually appears after the number,[25]e.g., "5$". (Thecent symbolis written after the number in most countries that use it, e.g., "5¢".)
In Portugal, Brazil, and other parts of the Portuguese Empire, the two-stroke variant of the sign, named cifrão (Portuguese pronunciation: [siˈfɾɐ̃w]), has been used as the thousands separator in the national currency, the real (plural "réis", abbreviated "Rs."). For instance, 123$500 would be equivalent to 123,500 réis. This usage is attested in 1775, but may be older by a century or more.[14] The cifrão is always written with two vertical strokes, and is the official sign of the Cape Verdean escudo (ISO 4217: CVE).
In 1911, Portugal redefined the national currency as the escudo, worth 1000 réis and divided into 100 centavos; but the cifrão continued to be used as the decimal separator,[26] so that 123$50 meant 123.50 escudos, or 123 escudos and 50 centavos. This usage ended in 2002 when the country switched to the euro. (A similar scheme of using a letter symbol instead of a decimal point has been used by the RKM code in electrical engineering since 1952.)
Cape Verde, a republic and former Portuguese colony, similarly switched from the real to their localescudoand centavos in 1914, and retains thecifrãousage as decimals separator as of 2021. Local versions of the Portuguese escudo were for a time created also for other overseas colonies, includingEast Timor(1958–1975),Portuguese India(1958–1961),Angola(1914–1928 and 1958–1977),Mozambique(1914–1980),Portuguese Guinea(1914–1975), andSão Tomé and Príncipe(1914–1977); all using thecifrãoas decimals separator.[citation needed]
Brazilretained the real and thecifrãoas thousands separator until 1942, when it switched to theBrazilian cruzeiro, with comma as the decimals separator. The dollar sign, officially with one stroke but often rendered with two, was retained as part of the currency symbol"Cr$", so one would writeCr$13,50for 13 cruzeiros and 50 centavos.[27]
Thecifrãowas formerly used by thePortuguese escudo(ISO: PTE) before its replacement by theeuroand by thePortuguese Timor escudo(ISO: TPE) before its replacement by theIndonesian rupiahand theUS dollar.[28]In Portuguese and Cape Verdean usage, thecifrãois placed as a decimal point between the escudo andcentavovalues.[29]The name originates in theArabicṣifr(صِفْر), meaning 'zero'.[30]
Outside the Portuguese cultural sphere, the South Vietnamese đồng before 1975 used a method similar to the cifrão to separate values of đồng from its decimal subunit xu. For example, 7$50 meant 7 đồng and 50 xu.
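For illustration, a small hypothetical helper in C++ that renders amounts in the style described above, with the separator placed between the main unit and its decimal subunit; the single-bar '$' stands in for the cifrão because, as discussed below, Unicode gives the double-barred form no code point of its own.

```cpp
#include <cstdio>
#include <string>

// Formats an amount in the cifrao-as-decimal-separator style, e.g. 123 escudos
// and 50 centavos -> "123$50", or 7 dong and 50 xu -> "7$50". The single-bar
// '$' is used here as a stand-in for the double-barred cifrao.
std::string cifrao_format(long units, int subunits) {
    char buf[32];
    std::snprintf(buf, sizeof buf, "%ld$%02d", units, subunits);
    return buf;
}

int main() {
    std::printf("%s\n", cifrao_format(123, 50).c_str());  // prints "123$50"
    std::printf("%s\n", cifrao_format(7, 5).c_str());     // prints "7$05"
}
```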
In some places and at some times, the one- and two-stroke variants have been used in the same contexts to distinguish between the U.S. dollar and other local currency, such as the formerPortuguese escudo.[26]
However, such usage is not standardized, and the Unicode specification considers the two versions as graphic variants of the same symbol—a typeface design choice.[31] Computer and typewriter keyboards usually have a single key for that sign, and many character encodings (including ASCII and Unicode) reserve a single numeric code for it. Indeed, dollar signs in the same digital document may be rendered with one or two strokes if different computer fonts are used, but the underlying code point U+0024 (ASCII 36 in decimal) remains unchanged.
When a specific variant is not mandated by law or custom, the choice is usually a matter of expediency or aesthetic preference. Both versions were used in the US in the 18th century. (An 1861Civil War-era advertisement depicts the two-stroked symbol as a snake.[13]) The two-stroke version seems to be generally less popular today, though used in some "old-style" fonts likeBaskerville.
Because of its use in early American computer applications such as business accounting, the dollar sign is almost universally present in computercharacter sets, and thus has been appropriated for many purposes unrelated to money inprogramming languagesandcommand languages.
The dollar sign "$" has Unicode code point U+0024 (inherited fromASCIIviaLatin-1).[31]
There are no separate encodings for the one- and two-line variants; the choice is typeface-dependent, as they are allographs. However, there are three other code points that originate from other East Asian standards: the Taiwanese small form variant, the CJK fullwidth form, and the Japanese emoji. The glyphs for these code points are typically larger or smaller than the primary code point, but the difference is mostly aesthetic or typographic, and the meanings of the symbols are the same.
However, for usage as the special character in various computing applications (see following sections), U+0024 is typically the only code that is recognized.
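As a small illustration in C++, the snippet below prints the primary code point and the East Asian variants mentioned above; the specific variant code points (fullwidth U+FF04, small form U+FE69, emoji U+1F4B2) are supplied from the Unicode charts rather than from the text above, so they should be treated as assumptions to verify.

```cpp
#include <cstdio>

int main() {
    // The primary code point, inherited from ASCII: '$' is 0x24.
    std::printf("U+%04X\n", static_cast<unsigned>('$'));   // U+0024

    // UTF-8 string literals for the variant forms (assumed code points:
    // fullwidth U+FF04, small form U+FE69, emoji U+1F4B2).
    const char* fullwidth  = "\uFF04";
    const char* small_form = "\uFE69";
    const char* emoji      = "\U0001F4B2";
    std::printf("%s %s %s\n", fullwidth, small_form, emoji);
}
```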
Support for the two-line variant varies. As of 2019,[update]theUnicodestandard considers the distinction between one- and two-bar dollar signs astylistic distinction between fonts, and has no separatecode pointfor thecifrão. The symbol is not in the October 2019 "pipeline",[34]though it has been requested formally.[26]
Among others, the following fonts display a double-bar dollar sign for code point 0024:[citation needed]regular-weightBaskerville,Big Caslon,Bodoni MT,Garamond: ($)
In LaTeX, with the textcomp package installed, the cifrão can be input using the command \textdollaroldstyle. However, because of font substitution and the lack of a dedicated code point, the author of an electronic document who uses one of these fonts intending to represent a cifrão cannot be sure that every reader will see a double-bar glyph rather than the single-barred version. Because of the continued lack of support in Unicode, a single-bar dollar sign is frequently employed in its place even for official purposes.[29][35] Where there is any risk of misunderstanding, the ISO 4217 three-letter acronym is used.
The symbol is sometimes used derisively, in place of the letter S, to indicate greed or excess money such as in "Micro$oft", "Di$ney", "Chel$ea" and "GW$"; or supposed overt Americanisation as in "$ky". The dollar sign is also used intentionally to stylize names such asA$AP Rocky,Ke$ha, andTy Dolla $ignor words such as¥€$. In 1872,Ambrose Biercereferred to California governorLeland Stanfordas $tealand Landford.[38]
InScrabblenotation, a dollar sign is placed after a word to indicate that it is valid according to theNorth American word lists, but not according to the British word lists.[39]
A dollar symbol is used as the unit of reactivity for a nuclear reactor, 0 $ being the threshold of slow criticality, meaning a steady reaction rate, while 1 $ is the threshold of prompt criticality, which means a nuclear excursion or explosion.[40][41]
In the 1993 version of the Turkmen Latin alphabet, $ was used as a transliteration of the Cyrillic letter Ш; in 1999 it was replaced by the letter Ş.
|
https://en.wikipedia.org/wiki/Cifr%C3%A3o
|
In digital logic, a hazard is an undesirable effect caused by either a deficiency in the system or external influences, in both synchronous[citation needed] and asynchronous circuits.[1]: 43 Logic hazards are manifestations of a problem in which changes in the input variables do not change the output correctly due to some form of delay caused by logic elements (NOT, AND, OR gates, etc.). This results in the logic not performing its function properly. The three most common kinds of hazards are usually referred to as static, dynamic, and function hazards.
Hazards are a temporary problem, as the logic circuit will eventually settle to the desired function. Therefore, in synchronous designs, it is standard practice to register the output of a circuit before it is used in a different clock domain or routed out of the system, so that hazards do not cause any problems. If that is not the case, however, it is imperative that hazards be eliminated, as they can have an effect on other connected systems.
A static hazard is a change of a signal state twice in a row when the signal is expected to stay constant.[1]: 48When one input signal changes, the output changes momentarily before stabilizing to the correct value. There are two types of static hazards:
In properly formed two-level AND-OR logic based on a Sum Of Products expression, there will be no static-0 hazards (but may still have static-1 hazards). Conversely, there will be no static-1 hazards in an OR-AND implementation of a Product Of Sums expression (but may still have static-0 hazards).
The most commonly used method to eliminate static hazards is to add redundant logic (consensus terms in the logic expression).
Consider an imperfect circuit that suffers from a delay in the physical logic elements i.e. AND gates etc.
The simple circuit implements a two-level AND–OR function in which one product term contains A and the other contains the inverted A, noting that each logic element contributes its own propagation delay.
From a look at the starting diagram it is clear that if no delays were to occur, then the circuit would function normally. However, no two gates are ever manufactured exactly the same. Due to this imperfection, the delay for the first AND gate will be slightly different from that of its counterpart. Thus an error occurs when the input changes from 111 to 011, i.e. when A changes state.
Now that we know roughly how the hazard occurs, we can turn to the Karnaugh map for a clearer picture and for a way to solve the problem.
A theorem proved by Huffman[2]tells us that adding a redundant loop 'BC' will eliminate the hazard.
The amended function adds the redundant consensus term BC to the original sum of products.
Now we can see that even with imperfect logic elements, our example will not show signs of hazards when A changes state. This theory can be applied to any logic system. Computer programs deal with most of this work now, but for simple examples it is quicker to do the debugging by hand. When there are many input variables (say 6 or more) it will become quite difficult to 'see' the errors on a Karnaugh map.
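The behaviour described above can be reproduced with a short unit-delay simulation, sketched here in C++. It assumes the function F = AB + A'C (one product term containing A, the other containing the inverted A; the article's exact expression is an assumption here), gives every gate one step of delay, and compares the output with and without the redundant consensus term BC.

```cpp
#include <cstdio>

// Gate outputs of the two-level implementation of F = AB + A'C (+ BC).
struct Gates { bool notA, andAB, andA_C, andBC; };

void simulate(bool withConsensus) {
    bool A = true, B = true, C = true;
    Gates g{ !A, A && B, !A && C, B && C };   // steady-state gate outputs for A=B=C=1
    A = false;                                // the input changes: 111 -> 011
    std::printf(withConsensus ? "with BC:    " : "without BC: ");
    for (int t = 0; t < 4; ++t) {
        // The OR stage is read combinationally from the current gate outputs.
        bool F = g.andAB || g.andA_C || (withConsensus && g.andBC);
        std::printf("%d ", F ? 1 : 0);
        // Unit-delay update: each gate's next output is computed from the
        // inputs it sees now; A'C lags because it goes through the inverter.
        g = Gates{ !A, A && B, g.notA && C, B && C };
    }
    std::printf("\n");
}

int main() {
    simulate(false);  // prints "1 0 1 1": a momentary drop, the static-1 hazard
    simulate(true);   // prints "1 1 1 1": the consensus term BC masks the glitch
    return 0;
}
```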
A dynamic hazard is a series of changes of a signal state that happen several times in a row when the signal is expected to change state only once.[1]: 48 A dynamic hazard is the possibility of an output changing more than once as a result of a single input change.
Dynamic hazards often occur in larger logic circuits where there are different routes to the output (from the input). If each route has a different delay, then it quickly becomes clear that there is the potential for changing output values that differ from the required / expected output.
E.g. a logic circuit is meant to change output state from 1 to 0, but instead changes from 1 to 0, then to 1, and finally rests at the correct value 0. This is a dynamic hazard.
As a rule, dynamic hazards are more complex to resolve, but note that if all static hazards have been eliminated from a circuit, then dynamic hazards cannot occur.
In contrast to static and dynamic hazards, functional hazards are caused by a change applied to more than one input. There is no specific logical solution to eliminate them. One reliable method is to prevent inputs from changing simultaneously, which is not applicable in some cases. So circuits should be carefully designed to have equal delays in each path.[3]
|
https://en.wikipedia.org/wiki/Hazard_(logic)
|
Inmathematics, aself-similarobject is exactly or approximatelysimilarto a part of itself (i.e., the whole has the same shape as one or more of the parts). Many objects in the real world, such ascoastlines, are statistically self-similar: parts of them show the same statistical properties at many scales.[2]Self-similarity is a typical property offractals.Scale invarianceis an exact form of self-similarity where at any magnification there is a smaller piece of the object that issimilarto the whole. For instance, a side of theKoch snowflakeis bothsymmetricaland scale-invariant; it can be continually magnified 3x without changing shape. The non-trivial similarity evident in fractals is distinguished by their fine structure, or detail on arbitrarily small scales. As acounterexample, whereas any portion of astraight linemay resemble the whole, further detail is not revealed.
Peitgenet al.explain the concept as such:
If parts of a figure are small replicas of the whole, then the figure is calledself-similar....A figure isstrictly self-similarif the figure can be decomposed into parts which are exact replicas of the whole. Any arbitrary part contains an exact replica of the whole figure.[3]
Since mathematically, a fractal may show self-similarity under arbitrary magnification, it is impossible to recreate this physically. Peitgenet al.suggest studying self-similarity using approximations:
In order to give an operational meaning to the property of self-similarity, we are necessarily restricted to dealing with finite approximations of the limit figure. This is done using the method which we will call box self-similarity where measurements are made on finite stages of the figure using grids of various sizes.[4]
This vocabulary was introduced byBenoit Mandelbrotin 1964.[5]
Inmathematics,self-affinityis a feature of afractalwhose pieces arescaledby different amounts in thexandydirections. This means that to appreciate the self-similarity of these fractal objects, they have to be rescaled using ananisotropicaffine transformation.
A compact topological space X is self-similar if there exists a finite set S indexing a set of non-surjective homeomorphisms $\{f_{s} : s \in S\}$ for which
$$X = \bigcup_{s \in S} f_{s}(X).$$
If $X \subset Y$, we call X self-similar if it is the only non-empty subset of Y such that the equation above holds for $\{f_{s} : s \in S\}$. We call
$$\mathfrak{L} = \bigl(X,\, S,\, \{f_{s} : s \in S\}\bigr)$$
a self-similar structure. The homeomorphisms may be iterated, resulting in an iterated function system. The composition of functions creates the algebraic structure of a monoid. When the set S has only two elements, the monoid is known as the dyadic monoid. The dyadic monoid can be visualized as an infinite binary tree; more generally, if the set S has p elements, then the monoid may be represented as a p-adic tree.
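As a concrete illustration of this definition (a standard example, not drawn from the text above), the middle-thirds Cantor set is self-similar under two contractions of the unit interval:

```latex
% The middle-thirds Cantor set C \subset [0,1] as a self-similar structure:
% two non-surjective homeomorphisms onto the left and right thirds of C,
% whose images together cover C.
\[
  f_{1}(x) = \frac{x}{3}, \qquad
  f_{2}(x) = \frac{x+2}{3}, \qquad
  C = f_{1}(C) \cup f_{2}(C),
\]
\[
  \mathfrak{L} = \bigl( C,\; S = \{1,2\},\; \{f_{1}, f_{2}\} \bigr)
  \quad\text{is the corresponding self-similar structure.}
\]
```

Since S here has two elements, iterating $f_{1}$ and $f_{2}$ generates the dyadic monoid mentioned above.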
The automorphisms of the dyadic monoid form the modular group; the automorphisms can be pictured as hyperbolic rotations of the binary tree.
A more general notion than self-similarity isself-affinity.
TheMandelbrot setis also self-similar aroundMisiurewicz points.
Self-similarity has important consequences for the design of computer networks, as typical network traffic has self-similar properties. For example, inteletraffic engineering,packet switcheddata traffic patterns seem to be statistically self-similar.[6]This property means that simple models using aPoisson distributionare inaccurate, and networks designed without taking self-similarity into account are likely to function in unexpected ways.
Similarly,stock marketmovements are described as displayingself-affinity, i.e. they appear self-similar when transformed via an appropriateaffine transformationfor the level of detail being shown.[7]Andrew Lodescribes stock market log return self-similarity ineconometrics.[8]
Finite subdivision rulesare a powerful technique for building self-similar sets, including theCantor setand theSierpinski triangle.
Somespace filling curves, such as thePeano curveandMoore curve, also feature properties of self-similarity.[9]
The viable system model of Stafford Beer is an organizational model with an affine self-similar hierarchy: a given viable system is one element of the System One of a viable system one recursive level higher up, and the elements of its own System One are viable systems one recursive level lower down.
Self-similarity can be found in nature, as well. Plants, such asRomanesco broccoli, exhibit strong self-similarity.
|
https://en.wikipedia.org/wiki/Self-similarity
|
Functional fixednessis acognitive biasthat limits a person to use an object only in the way it is traditionally used. The concept of functional fixedness originated inGestalt psychology, a movement in psychology that emphasizesholisticprocessing.Karl Dunckerdefined functional fixedness as being a mental block against using an object in a new way that is required tosolve a problem.[1]This "block" limits the ability of an individual to use components given to them to complete a task, as they cannot move past the original purpose of those components. For example, if someone needs a paperweight, but they only have a hammer, they may not see how the hammer can be used as a paperweight. Functional fixedness is this inability to see a hammer's use as anything other than for pounding nails; the person fails to think to use the hammer in a way other than in its conventional function.
When tested, five-year-old children show no signs of functional fixedness. It has been argued that this is because at age five, any goal to be achieved with an object is equivalent to any other goal. However, by age seven, children have acquired the tendency to treat the originally intended purpose of an object as special.[2]
Experimental paradigms typically involvesolving problemsin novel situations in which the subject has the use of a familiar object in an unfamiliar context. The object may be familiar from the subject's past experience or from previous tasks within an experiment.
In a classic experiment demonstrating functional fixedness,Duncker(1945)[1]gave participants a candle, a box of thumbtacks, and a book of matches, and asked them to attach the candle to the wall so that it did not drip onto the table below. Duncker found that participants tried to attach the candle directly to the wall with the tacks, or to glue it to the wall by melting it. Very few of them thought of using the inside of the box as a candle-holder and tacking this to the wall. In Duncker's terms, the participants were "fixated" on the box's normal function of holding thumbtacks and could not re-conceptualize it in a manner that allowed them to solve the problem. For instance, participants presented with an empty tack box were two times more likely to solve the problem than those presented with the tack box used as a container.[3]
More recently, Frank and Ramscar (2003)[4]gave a written version of the candle problem to undergraduates atStanford University. When the problem was given with identical instructions to those in the original experiment, only 23% of the students were able to solve the problem. For another group of students, the noun phrases such as "box of matches" were underlined, and for a third group, the nouns (e.g., "box") were underlined. For these two groups, 55% and 47% were able to solve the problem effectively. In a follow-up experiment, all the nouns except "box" were underlined and similar results were produced. The authors concluded that students' performance was contingent on their representation of the lexical concept "box" rather than instructional manipulations. The ability to overcome functional fixedness was contingent on having a flexible representation of the word box which allows students to see that the box can be used when attaching a candle to a wall.
When Adamson (1952)[3]replicated Duncker's box experiment, Adamson split participants into two experimental groups: preutilization and no preutilization. In this experiment, when there is preutilization, meaning when objects are presented to participants in a traditional manner (materials are in the box, thus using the box as a container), participants are less likely to consider the box for any other use, whereas with no preutilization (when boxes are presented empty), participants are more likely to think of other uses for the box.
Birch and Rabinowitz (1951)[5]adapted the two-cord problem from experiments byNorman Maier(1930, 1931), where a participant would be shown two cords hanging from the ceiling and instructed to connect them, but the cords are far enough apart so that the participant cannot reach one while holding the other. The only solution was to tie a heavy object to one cord as a weight, making it possible to swing the cord as a pendulum, then catch the swinging cord while holding the stationary cord, and tie them together. The only heavy objects provided were an electrical switch and an electrical relay. Participants were questioned on their choice between the two objects after successfully solving the problem. The participants were split into three groups: Group R was given a pretest task to complete an electrical circuit using a relay, Group S completed an identical circuit using a switch, and Group C was the control group made up of engineering students and was given no pretraining. Participants from Group C used both objects equally as the pendulum weight, while Group R exclusively used the switch as the pendulum weight, and most from Group S used the relay. When questioned on their choice, participants argued that whichever object they had used was obviously better suited for solving the problem. Their previous experience emphasised the other object as an electrical object, and functional fixedness prevented them from seeing it as being used for another purpose.
The barometer question is an example of an incorrectly designed examination question demonstrating functional fixedness that causes a moral dilemma for the examiner. In its classic form, popularized by American test designer professor Alexander Calandra (1911–2006), the question asked the student to "show how it is possible to determine the height of a tall building with the aid of a barometer?"[6]The examiner was confident that there was one, and only one, correct answer. Contrary to the examiner's expectations, the student responded with a series of completely different answers. These answers were also correct, yet none of them proved the student's competence in the specific academic field being tested.
Calandra presented the incident as a real-life,first-personexperience that occurred during theSputnik crisis.[7]Calandra's essay, "Angels on a Pin", was published in 1959 inPride, a magazine of theAmerican College Public Relations Association.[8]It was reprinted inCurrent Sciencein 1964,[9]reprinted again inSaturday Reviewin 1968,[10]and included in the 1969 edition of Calandra'sThe Teaching of Elementary Science and Mathematics.[11]In the same year (1969), Calandra's essay became a subject of an academic discussion.[12]The essay has been referenced frequently since,[13]making its way into books on subjects ranging from teaching,[14]writing skills,[15]workplace counseling,[16]and investment inreal estate[17]tochemical industry,[18]computer programming,[19]andintegrated circuitdesign.[20]
Researchers have investigated whether functional fixedness is affected byculture.
In a recent study, preliminary evidence supporting the universality of functional fixedness was found.[21]The study's purpose was to test if individuals from non-industrialized societies, specifically with low exposure to "high-tech" artifacts, demonstrated functional fixedness. The study tested theShuar, hunter-horticulturalists of the Amazon region of Ecuador, and compared them to a control group from an industrial culture.
The Shuar community had been exposed to only a limited number of industrialized artifacts, such as machetes, axes, cooking pots, nails, shotguns, and fishhooks, all considered "low-tech". Participants were given two tasks for the study: the box task, in which they had to build a tower to help a character from a fictional storyline reach another character using a limited set of varied materials; and the spoon task, in which they were given a problem to solve based on a fictional story of a rabbit that had to cross a river (materials were used to represent the setting) and were provided with varied materials including a spoon. In the box task, participants were slower to select the materials than participants in control conditions, but no difference in time to solve the problem was seen. In the spoon task, participants were slower in both selection and completion of the task. The results showed that individuals from non-industrial ("technologically sparse") cultures were susceptible to functional fixedness: they were faster to use artifacts without priming than when the design function was explained to them. This occurred even though the participants had less exposure to industrially manufactured artifacts, and even though the few artifacts they did use were used in multiple ways regardless of their design.[21]
Investigators examined in two experiments "whether the inclusion of examples with inappropriate elements, in addition to the instructions for a design problem, would produce fixation effects in students naive to design tasks".[22]They examined the inclusion of examples of inappropriate elements, by explicitly depicting problematic aspects of the problem presented to the students through example designs. They tested non-expert participants on three problem conditions: with standard instruction, fixated (with inclusion of problematic design), and defixated (inclusion of problematic design accompanied with helpful methods). They were able to support their hypothesis by finding that a) problematic design examples produce significant fixation effects, and b) fixation effects can be diminished with the use of defixating instructions.
In "The Disposable Spill-Proof Coffee Cup Problem", adapted from Janson & Smith, 1991, participants were asked to construct as many designs as possible for an inexpensive, disposable, spill-proof coffee cup. Standard condition participants were presented only with instructions. In the fixated condition, participants were presented with instructions, a design, and problems they should be aware of. Finally, in the defixated condition, participants were presented the same as other conditions in addition to suggestions of design elements they should avoid using. The other two problems included building a bike rack, and designing a container for cream cheese.
Based on the assumption that students are functionally fixed, a study onanalogical transferin the science classroom shed light on significant data that could provide an overcoming technique for functional fixedness. The findings support the fact that students show positive transfer (performance) on problem solving after being presented with analogies of certain structure and format.[23]The present study expanded Duncker's experiments from 1945 by trying to demonstrate that when students were "presented with a single analogy formatted as a problem, rather than as a story narrative, they would orient the task of problem-solving and facilitate positive transfer".[23]
A total of 266 freshmen students from a high school science class participated in the study. The experiment was a 2x2 design where conditions: "task contexts" (type and format) vs. "prior knowledge" (specific vs. general) were attested. Students were classified into five different groups, where four were according to their prior science knowledge (ranging from specific to general), and one served as a control group (no analog presentation). The four different groups were then classified into "analog type and analog format" conditions, structural or surface types and problem or surface formats.
Inconclusive evidence was found for positive analogical transfer based on prior knowledge; however, groups did demonstrate variability. The problem format and the structural type of analog presentation showed the highest positive transference to problem solving. The researcher suggested that a well-thought and planned analogy relevant in format and type to the problem-solving task to be completed can be helpful for students to overcome functional fixedness. This study not only brought new knowledge about the human mind at work but also provides important tools for educational purposes and possible changes that teachers can apply as aids to lesson plans.[23]
One study suggests that functional fixedness can be combated by design decisions from functionally fixed designs so that the essence of the design is kept (Latour, 1994).[24]This helps the subjects who have created functionally fixed designs understand how to go about solving general problems of this type, rather than using the fixed solution for a specific problem. Latour performed an experiment researching this by having software engineers analyze a fairly standard bit of code—thequicksortalgorithm—and use it to create a partitioning function. Part of the quicksort algorithm involves partitioning a list into subsets so that it can be sorted; the experimenters wanted to use the code from within the algorithm to just do the partitioning. To do this, they abstracted each block of code in the function, discerning the purpose of it, and deciding if it is needed for the partitioning algorithm. This abstracting allowed them to reuse the code from the quicksort algorithm to create a working partition algorithm without having to design it from scratch.[24]
A comprehensive study exploring several classical functional fixedness experiments showed an overlying theme of overcoming prototypes. Those that were successful at completing the tasks had the ability to look beyond the prototype, or the original intention for the item in use. Conversely, those that could not create a successful finished product could not move beyond the original use of the item. This seemed to be the case for functional fixedness categorization studies as well. Reorganization into categories of seemingly unrelated items was easier for those that could look beyond intended function. Therefore, there is a need to overcome the prototype in order to avoid functional fixedness. Carnevale (1998)[25]suggests analyzing the object and mentally breaking it down into its components. After that is completed, it is essential to explore the possible functions of those parts. In doing so, an individual may familiarize themselves with new ways to use the items that are available to them at the givens. Individuals are therefore thinking creatively and overcoming the prototypes that limit their ability to successfully complete the functional fixedness problem.[25]
For each object, its function needs to be decoupled from its form. McCaffrey (2012)[26] describes a highly effective technique for doing so. As an object is broken into its parts, two questions are asked. "Can the current part be subdivided further?" If yes, do so. "Does the current description imply a use?" If yes, create a more generic description involving its shape and material. For example, a candle is initially divided into its parts: wick and wax. The word "wick" implies a use: burning to emit light. So it is described more generically as a string. Since "string" still implies a use, it is described more generically again: interwoven fibrous strands. This brings to mind other uses, such as using the wick to make a wig for a hamster. Since "interwoven fibrous strands" does not imply a use, work on the wick can stop and work on the wax can begin. People trained in this technique solved 67% more problems that suffered from functional fixedness than a control group. The technique systematically strips away all the layers of associated uses from an object and its parts.[27]
|
https://en.wikipedia.org/wiki/Functional_fixedness
|
Incomputer science, analgorithmis callednon-blockingif failure orsuspensionof anythreadcannot cause failure or suspension of another thread;[1]for some operations, these algorithms provide a useful alternative to traditionalblocking implementations. A non-blocking algorithm islock-freeif there is guaranteed system-wideprogress, andwait-freeif there is also guaranteed per-thread progress. "Non-blocking" was used as a synonym for "lock-free" in the literature until the introduction of obstruction-freedom in 2003.[2]
The word "non-blocking" was traditionally used to describetelecommunications networksthat could route a connection through a set of relays "without having to re-arrange existing calls"[This quote needs a citation](seeClos network). Also, if the telephone exchange "is not defective, it can always make the connection"[This quote needs a citation](seenonblocking minimal spanning switch).
The traditional approach to multi-threaded programming is to uselocksto synchronize access to sharedresources. Synchronization primitives such asmutexes,semaphores, andcritical sectionsare all mechanisms by which a programmer can ensure that certain sections of code do not execute concurrently, if doing so would corrupt shared memory structures. If one thread attempts to acquire a lock that is already held by another thread, the thread will block until the lock is free.
Blocking a thread can be undesirable for many reasons. An obvious reason is that while the thread is blocked, it cannot accomplish anything: if the blocked thread had been performing a high-priority orreal-timetask, it would be highly undesirable to halt its progress.
Other problems are less obvious. For example, certain interactions between locks can lead to error conditions such asdeadlock,livelock, andpriority inversion. Using locks also involves a trade-off between coarse-grained locking, which can significantly reduce opportunities forparallelism, and fine-grained locking, which requires more careful design, increases locking overhead and is more prone to bugs.
Unlike blocking algorithms, non-blocking algorithms do not suffer from these downsides, and in addition are safe for use ininterrupt handlers: even though thepreemptedthread cannot be resumed, progress is still possible without it. In contrast, global data structures protected by mutual exclusion cannot safely be accessed in an interrupt handler, as the preempted thread may be the one holding the lock. While this can be rectified by masking interrupt requests during the critical section, this requires the code in the critical section to have bounded (and preferably short) running time, or excessive interrupt latency may be observed.[3]
A lock-free data structure can be used to improve performance.
A lock-free data structure increases the amount of time spent in parallel execution rather than serial execution, improving performance on amulti-core processor, because access to the shared data structure does not need to be serialized to stay coherent.[4]
With few exceptions, non-blocking algorithms useatomicread-modify-writeprimitives that the hardware must provide, the most notable of which iscompare and swap (CAS).Critical sectionsare almost always implemented using standard interfaces over these primitives (in the general case, critical sections will be blocking, even when implemented with these primitives). In the 1990s all non-blocking algorithms had to be written "natively" with the underlying primitives to achieve acceptable performance. However, the emerging field ofsoftware transactional memorypromises standard abstractions for writing efficient non-blocking code.[5][6]
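To make the role of these primitives concrete, here is a minimal sketch in C++ of the usual compare-and-swap retry loop using the standard std::atomic interface; the function name and the particular memory orderings are illustrative choices, not taken from any specific library.

```cpp
#include <atomic>

// A minimal compare-and-swap retry loop: read the current value, compute the
// desired new value, and retry if another thread changed the counter in the
// meantime. A failed CAS implies some other thread's CAS succeeded, which is
// what makes the overall scheme non-blocking.
long fetch_add_cas(std::atomic<long>& counter, long n) {
    long expected = counter.load(std::memory_order_relaxed);
    while (!counter.compare_exchange_weak(expected, expected + n,
                                          std::memory_order_acq_rel,
                                          std::memory_order_relaxed)) {
        // On failure, 'expected' has been reloaded with the value that the
        // failed CAS observed, so the loop simply tries again.
    }
    return expected;  // the value seen just before this thread's increment
}
```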
Much research has also been done in providing basicdata structuressuch asstacks,queues,sets, andhash tables. These allow programs to easily exchange data between threads asynchronously.
Additionally, some non-blocking data structures are weak enough to be implemented without special atomic primitives. These exceptions include:
Several libraries internally use lock-free techniques,[7][8][9]but it is difficult to write lock-free code that is correct.[10][11][12][13]
Non-blocking algorithms generally involve a series of read, read-modify-write, and write instructions in a carefully designed order.
Optimizing compilers can aggressively re-arrange operations.
Even when they don't, many modern CPUs often re-arrange such operations (they have a "weak consistency model"),
unless a memory barrier is used to tell the CPU not to reorder. C++11 programmers can use std::atomic in <atomic>,
and C11 programmers can use <stdatomic.h>,
both of which supply types and functions that tell the compiler not to re-arrange such instructions, and to insert the appropriate memory barriers.[14]
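A small sketch of the point made above: with std::atomic and release/acquire ordering, the compiler and CPU are prevented from reordering the data write past the flag that publishes it. The variable names here are illustrative.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                    // plain (non-atomic) shared data
std::atomic<bool> ready{false};     // publication flag

void producer() {
    payload = 42;                                   // (1) ordinary write
    ready.store(true, std::memory_order_release);   // (2) cannot be reordered before (1)
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) { /* spin */ }
    assert(payload == 42);  // acquire pairs with release, so (1) is visible here
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```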
Wait-freedom is the strongest non-blocking guarantee of progress, combining guaranteed system-wide throughput withstarvation-freedom. An algorithm is wait-free if every operation has a bound on the number of steps the algorithm will take before the operation completes.[15]This property is critical for real-time systems and is always nice to have as long as the performance cost is not too high.
It was shown in the 1980s[16]that all algorithms can be implemented wait-free, and many transformations from serial code, calleduniversal constructions, have been demonstrated. However, the resulting performance does not in general match even naïve blocking designs. Several papers have since improved the performance of universal constructions, but still, their performance is far below blocking designs.
Several papers have investigated the difficulty of creating wait-free algorithms. For example, it has been shown[17]that the widely available atomicconditionalprimitives,CASandLL/SC, cannot provide starvation-free implementations of many common data structures without memory costs growing linearly in the number of threads.
However, these lower bounds do not present a real barrier in practice, as spending a cache line or exclusive reservation granule (up to 2 KB on ARM) of store per thread in the shared memory is not considered too costly for practical systems. Typically, the amount of store logically required is a word, but physically CAS operations on the same cache line will collide, and LL/SC operations in the same exclusive reservation granule will collide, so the amount of store physically required[citation needed]is greater.[clarification needed]
Wait-free algorithms were rare until 2011, both in research and in practice. However, in 2011 Kogan andPetrank[18]presented a wait-free queue building on theCASprimitive, generally available on common hardware. Their construction expanded the lock-free queue of Michael and Scott,[19]which is an efficient queue often used in practice. A follow-up paper by Kogan and Petrank[20]provided a method for making wait-free algorithms fast and used this method to make the wait-free queue practically as fast as its lock-free counterpart. A subsequent paper by Timnat and Petrank[21]provided an automatic mechanism for generating wait-free data structures from lock-free ones. Thus, wait-free implementations are now available for many data-structures.
Under reasonable assumptions, Alistarh, Censor-Hillel, and Shavit showed that lock-free algorithms are practically wait-free.[22]Thus, in the absence of hard deadlines, wait-free algorithms may not be worth the additional complexity that they introduce.
Lock-freedom allows individual threads to starve but guarantees system-wide throughput. An algorithm is lock-free if, when the program threads are run for a sufficiently long time, at least one of the threads makes progress (for some sensible definition of progress). All wait-free algorithms are lock-free.
In particular, if one thread is suspended, then a lock-free algorithm guarantees that the remaining threads can still make progress. Hence, if two threads can contend for the same mutex lock or spinlock, then the algorithm isnotlock-free. (If we suspend one thread that holds the lock, then the second thread will block.)
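To make the contrast concrete, here is a small sketch (illustrative names only): the first function guards a plain counter with a spinlock and is therefore not lock-free, while the second uses a single atomic read-modify-write and is lock-free wherever the hardware implements std::atomic<int> without locks.

```cpp
#include <atomic>

std::atomic_flag lock = ATOMIC_FLAG_INIT;
int counter_guarded = 0;              // protected by the spinlock below
std::atomic<int> counter_atomic{0};

// NOT lock-free: if the thread holding the flag is suspended inside the
// critical section, every other caller spins forever.
void increment_with_spinlock() {
    while (lock.test_and_set(std::memory_order_acquire)) { /* spin */ }
    ++counter_guarded;
    lock.clear(std::memory_order_release);
}

// Lock-free wherever std::atomic<int> is lock-free: a suspended thread cannot
// prevent the others from completing their fetch_add.
void increment_lock_free() {
    counter_atomic.fetch_add(1, std::memory_order_relaxed);
}
```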
An algorithm is lock-free if, infinitely often, operations by some processors will succeed in a finite number of steps. For instance, if N processors are trying to execute an operation, some of the N processors will succeed in finishing the operation in a finite number of steps, while others might fail and retry on failure. The difference between wait-free and lock-free is that with a wait-free algorithm the operation of each process is guaranteed to succeed in a finite number of steps, regardless of the other processors.
In general, a lock-free algorithm can run in four phases: completing one's own operation, assisting an obstructing operation, aborting an obstructing operation, and waiting. Completing one's own operation is complicated by the possibility of concurrent assistance and abortion, but is invariably the fastest path to completion.
The decision about when to assist, abort or wait when an obstruction is met is the responsibility of acontention manager. This may be very simple (assist higher priority operations, abort lower priority ones), or may be more optimized to achieve better throughput, or lower the latency of prioritized operations.
Correct concurrent assistance is typically the most complex part of a lock-free algorithm, and often very costly to execute: not only does the assisting thread slow down, but thanks to the mechanics of shared memory, the thread being assisted will be slowed, too, if it is still running.
Obstruction-freedom is the weakest natural non-blocking progress guarantee. An algorithm is obstruction-free if at any point, a single thread executed in isolation (i.e., with all obstructing threads suspended) for a bounded number of steps will complete its operation.[15]All lock-free algorithms are obstruction-free.
Obstruction-freedom demands only that any partially completed operation can be aborted and the changes made rolled back. Dropping concurrent assistance can often result in much simpler algorithms that are easier to validate. Preventing the system from continuallylive-lockingis the task of a contention manager.
Some obstruction-free algorithms use a pair of "consistency markers" in the data structure. Processes reading the data structure first read one consistency marker, then read the relevant data into an internal buffer, then read the other marker, and then compare the markers. The data is consistent if the two markers are identical. Markers may be non-identical when the read is interrupted by another process updating the data structure. In such a case, the process discards the data in the internal buffer and tries again.
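One common realization of this idea is the seqlock pattern, sketched below for the reader side only and under the assumption of a single writer; the version counter plays the role of both consistency markers, being read once before and once after the data is copied. The names are illustrative.

```cpp
#include <atomic>

// Reader side of a seqlock-style "consistency marker" scheme (sketch).
// Assumed writer protocol (single writer): bump `version` to an odd value,
// write the payload, then bump `version` again back to an even value.
struct Shared {
    std::atomic<unsigned> version{0};
    std::atomic<int>      payload[4]{};
};

void read_consistent(const Shared& s, int out[4]) {
    unsigned before, after;
    do {
        before = s.version.load(std::memory_order_acquire);
        for (int i = 0; i < 4; ++i)
            out[i] = s.payload[i].load(std::memory_order_relaxed);
        std::atomic_thread_fence(std::memory_order_acquire);
        after = s.version.load(std::memory_order_relaxed);
        // Retry if an update was in progress (odd) or happened in between.
    } while ((before & 1u) != 0 || before != after);
}
```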
|
https://en.wikipedia.org/wiki/Non-blocking_algorithm
|
TheiPlant Collaborative, renamedCyversein 2017, is avirtual organizationcreated by acooperative agreementfunded by the USNational Science Foundation(NSF) to createcyberinfrastructurefor the plant sciences (botany).[1]The NSF compared cyberinfrastructure to physicalinfrastructure, "... thedistributed computer,information and communication technologiescombined with the personnel and integrating components that provide a long-term platform to empower the modern scientific research endeavor".[2]In September 2013 it was announced that the National Science Foundation had renewed iPlant's funding for a second 5-year term with an expansion of scope to all non-human life science research.[3]
The project develops computing systems and software that combine computing resources, like those ofTeraGrid, andbioinformaticsandcomputational biologysoftware. Its goal is easier collaboration among researchers with improved data access and processing efficiency. Primarily centered in the United States, it collaborates internationally.
Biology is relying more and more on computers.[4] Plant biology is changing with the rise of new technologies.[5] With the advent of bioinformatics, computational biology, DNA sequencing, geographic information systems and other technologies, computers can greatly assist researchers who study plant life in search of solutions to challenges in medicine, biofuels, biodiversity, agriculture and problems like drought tolerance, plant breeding, and sustainable farming.[6] Many of these problems cross traditional disciplines, and facilitating collaboration between plant scientists of diverse backgrounds and specialties is necessary.[6][7][8]
In 2006, the NSF solicited proposals to create "a new type of organization – a cyberinfrastructure collaborative for plant science" with a program titled "Plant Science Cyberinfrastructure Collaborative" (PSCIC) with Christopher Greer as program director.[9]A proposal was accepted (adopting the convention of using the word "Collaborative" as a noun) and iPlant was officially created on February 1, 2008.[1][9]Funding was estimated as $10 million per year over five years.[10]
Richard Jorgensenled the team through the proposal stage and was theprincipal investigator(PI) from 2008 to 2009.[10]Gregory Andrews, Vicki Chandler, Sudha Ram and Lincoln Stein served as Co-Principal Investigators (Co-PIs) from 2008 to 2009. In late 2009, Stephen Goff was named PI and Daniel Stanzione was added as a Co-PI.[1][11][12]As of May 2014, Co-PI Stanzione was replaced by 4 new Co-PIs: Doreen Ware at Cold Spring Harbor, Nirav Merchant and Eric Lyons at the University of Arizona, and Matthew Vaughn at the Texas Advanced Computing Center.[13]
The iPlant project supports what has been callede-Science, which is a use of information systems technology that is being adopted by the research community in efforts such as theNational Center for Ecological Analysis and Synthesis(NCEAS), ELIXIR,[14]and the Bamboo Technology Project that started in September 2010.[15][16]iPlant is "designed to create the foundation to support the computational needs of the research community and facilitate progress toward solutions of major problems in plant biology."[6][17]
The project works as acollaboration. It seeks input from the wider plant science community on what to build.[18]Based on that input, it has enabled easier use of large data sets,[19]created a community-driven research environment to share existing data collections within a research area and between research areas[20]and shares data withprovenancetracking.[21][22]One model studied for collaboration wasWikipedia.[23][24]
Several more recent National Science Foundation awards mentioned iPlant explicitly in their descriptions, as either a design pattern to follow or a collaborator with whom the recipient will work.[25]
The primary institution for the iPlant project is theUniversity of Arizona, located within the BIO5 Institute inTucson.[26]Since its inception in 2008, personnel worked at other institutions includingCold Spring Harbor Laboratory,University of North Carolina, Wilmington, and theUniversity of Texas at Austinin theTexas Advanced Computing Center.[27]Purdue UniversityandArizona State Universitywere part of the original project group.[10]
Other collaborating institutions that received support from iPlant for their work on aGrand Challengeinphylogeneticsstarting in March 2009 includedYale University,University of Florida, and theUniversity of Pennsylvania.[27]A trait evolution group was led at theUniversity of Tennessee.[28]A visualization workshop employing iPlant was run byVirginia Techin 2011.[29]
The NSF requires that funding subcontracts stay within the United States, but international collaboration started in 2009 with theTechnical University Munich[27]andUniversity of Torontoin 2010.[29][30]East Main Evaluation & Consulting provides external oversight, advice, and assistance.[31]
The iPlant project makes its cyberinfrastructure available in several different ways and offers services to make it accessible to its primary audience. The design was meant to grow in response to needs of the research community it serves.[6]
The Discovery Environment integrates community-recommended software tools into a system that can handleterabytesof data using high-performance supercomputers to perform these tasks much more quickly. It has an interface designed to hide the complexity needed to do this from the end user. The goal was to make the cyberinfrastructure available to non-technical end users who are not as comfortable using acommand-line interface.[6][32]
A set of application programming interfaces (APIs) allows developers to access iPlant services, including authentication, data management, and high-performance computing resources, from custom, locally produced software.[6][33]
Atmosphere is acloud computingplatform that provides easy access to pre-configured, frequently used analysis routines, relevant algorithms, and data sets, and accommodates computationally and data-intensive bioinformatics tasks.[6]It uses theEucalyptusvirtualization platform.[34][35]
The iPlant Semantic Web effort uses an iPlant-created architecture, protocol, and platform called the Simple Semantic Web Architecture and Protocol (SSWAP) forsemantic weblinking using a plant science focusedontology.[6][36][37]SSWAP is based on the notion ofRESTfulweb services with an ontology based onWeb Ontology Language(OWL).[38][39]
The Taxonomic Name Resolution Service (TNRS) is a free utility for correcting and standardizing plant names. This is needed because plant names that are misspelled, out of date (because a newer synonym is preferred), or incomplete make it hard to use computers to process large lists.[6][40][41]
My-Plant.org is asocial networkingcommunity for plant biologists, educators and others to come together to share information and research, collaborate, and track the latest developments in plant science.[6][42]The My-Plant network uses the terminologycladesto group users in a manner similar tophylogeneticsof plants themselves.[42]It was implemented usingDrupalas itscontent management system.[42]
The DNA Subway website uses agraphical user interface(GUI) to generateDNAsequence annotations, explore plantgenomesfor members of gene andtransposonfamilies, and conductphylogeneticanalyses. It makes high-level DNA analysis available to faculty and students by simplifying annotation andcomparative genomicsworkflows.[6][43]It was developed for iPlant by theDolan DNA Learning Center.[44][45]
|
https://en.wikipedia.org/wiki/SSWAP
|
Network dynamicsis a research field for the study ofnetworkswhose status changes in time. The dynamics may refer to the structure of connections of the units of a network,[1][2]to the collective internal state of the network,[3][4]or both. The networked systems could be from the fields ofbiology,chemistry,physics,sociology,economics,computer science, etc. Networked systems are typically characterized ascomplex systemsconsisting of many units coupled by specific, potentially changing, interaction topologies.
For adynamical systems' approach to discrete network dynamics, seesequential dynamical system.
|
https://en.wikipedia.org/wiki/Network_dynamics
|
TheKahn–Kalai conjecture, also known as theexpectation threshold conjectureor more recently thePark-Pham Theorem, was aconjecturein the field ofgraph theoryandstatistical mechanics, proposed byJeff KahnandGil Kalaiin 2006.[1][2]It was proven in a paper published in 2024.[3]
This conjecture concerns the general problem of estimating whenphase transitionsoccur in systems.[1]For example, in arandom networkwithN{\displaystyle N}nodes, where each edge is included with probabilityp{\displaystyle p}, it is unlikely for the graph to contain aHamiltonian cycleifp{\displaystyle p}is less than a threshold value(logN)/N{\displaystyle (\log N)/N}, but highly likely ifp{\displaystyle p}exceeds that threshold.[4]
Threshold values are often difficult to calculate, but a lower bound for the threshold, the "expectation threshold", is generally easier to calculate.[1]The Kahn–Kalai conjecture is that the two values are generally close together in a precisely defined way, namely that there is auniversal constantK{\displaystyle K}for which the ratio between the two is less thanKlogl(F){\displaystyle K\log {l({\mathcal {F}})}}wherel(F){\displaystyle l({\mathcal {F}})}is the size of a largestminimal elementof an increasing familyF{\displaystyle {\mathcal {F}}}of subsets of a power set.[3]
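In symbols, writing p_c(F) for the threshold of an increasing family F and q(F) for its expectation threshold, the statement (as it is usually formulated) reads:

```latex
% Kahn–Kalai conjecture / Park–Pham theorem, usual formulation:
% there is a universal constant K such that, for every finite set X and every
% increasing family \mathcal{F} of subsets of X with \ell(\mathcal{F}) \ge 2,
p_c(\mathcal{F}) \;\le\; K\, q(\mathcal{F}) \, \log \ell(\mathcal{F}),
% where \ell(\mathcal{F}) is the size of a largest minimal element of \mathcal{F}.
```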
Jinyoung Parkand Huy Tuan Pham announced a proof of the conjecture in 2022; it was published in 2024.[4][3]
|
https://en.wikipedia.org/wiki/Kahn%E2%80%93Kalai_conjecture
|
Libertarianismis one of the mainphilosophicalpositions related to the problems offree willanddeterminismwhich are part of the larger domain ofmetaphysics.[1]In particular, libertarianism is anincompatibilistposition[2][3]which argues that free will is logically incompatible with a deterministic universe. Libertarianism states that since agents have free will, determinism must be false.[4]
One of the first clear formulations of libertarianism is found inJohn Duns Scotus. In a theological context, metaphysical libertarianism was notablydefendedby Jesuit authors likeLuis de MolinaandFrancisco Suárezagainst the rathercompatibilistThomistBañecianism. Other important metaphysical libertarians in theearly modern periodwereRené Descartes,George Berkeley,Immanuel KantandThomas Reid.[5]
Roderick Chisholmwas a prominent defender of libertarianism in the 20th century[6]and contemporary libertarians includeRobert Kane,Geert Keil,Peter van InwagenandRobert Nozick.
The first recorded use of the termlibertarianismwas in 1789 byWilliam Belshamin a discussion of free will and in opposition tonecessitarianordeterministviews.[7][8]
Metaphysical libertarianism is one philosophical viewpoint under that of incompatibilism. Libertarianism holds onto a concept of free will that requires theagentto be able to take more than one possible course of action under a given set of circumstances.
Accounts of libertarianism subdivide into non-physical theories and physical or naturalistic theories. Non-physical theories hold that the events in the brain that lead to the performance of actions do not have an entirely physical explanation, and consequently the world is not closed under physics. Suchinteractionist dualistsbelieve that some non-physicalmind, will, orsouloverrides physicalcausality.
Explanations of libertarianism that do not involve dispensing withphysicalismrequire physical indeterminism, such as probabilistic subatomic particle behavior—a theory unknown to many of the early writers on free will. Physical determinism, under the assumption of physicalism, implies there is only one possible future and is therefore not compatible with libertarian free will. Some libertarian explanations involve invokingpanpsychism, the theory that a quality ofmindis associated with all particles, and pervades the entire universe, in both animate and inanimate entities. Other approaches do not require free will to be a fundamental constituent of the universe; ordinary randomness is appealed to as supplying the "elbow room" believed to be necessary by libertarians.
Freevolitionis regarded as a particular kind of complex, high-level process with an element of indeterminism. An example of this kind of approach has been developed byRobert Kane,[9]where he hypothesizes that,
In each case, the indeterminism is functioning as a hindrance or obstacle to her realizing one of her purposes—a hindrance or obstacle in the form of resistance within her will which has to be overcome by effort.
Although at the timequantum mechanics(and physicalindeterminism) was only in the initial stages of acceptance, in his bookMiracles: A preliminary studyC. S. Lewis stated the logical possibility that if the physical world were proved indeterministic this would provide an entry point to describe an action of a non-physical entity on physical reality.[10]Indeterministicphysical models (particularly those involvingquantum indeterminacy) introduce random occurrences at an atomic or subatomic level. These events might affect brain activity, and could seemingly allowincompatibilistfree will if the apparent indeterminacy of some mental processes (for instance, subjective perceptions of control in consciousvolition) maps to the underlying indeterminacy of the physical construct. This relationship, however, requires a causative role over probabilities that is questionable,[11]and it is far from established that brain activity responsible for human action can be affected by such events. Secondarily, these incompatibilist models are dependent upon the relationship between action and conscious volition, as studied in theneuroscience of free will. It is evident that observation may disturb the outcome of the observation itself, rendering limited our ability to identify causality.[12]Niels Bohr, one of the main architects of quantum theory, suggested, however, that no connection could be made between indeterminism of nature and freedom of will.[13]
In non-physical theories of free will, agents are assumed to have power to intervene in the physical world, a view known asagent causation.[14][15][16][17][18][19][20][21]Proponents of agent causation includeGeorge Berkeley,[22]Thomas Reid,[23]andRoderick Chisholm.[24]
Most events can be explained as the effects of prior events. When a tree falls, it does so because of the force of the wind, its own structural weakness, and so on. However, when a person performs a free act, agent causation theorists say that the action was not caused by any other events or states of affairs, but rather was caused by the agent. Agent causation isontologicallyseparate from event causation. The action was not uncaused, because the agent caused it. But the agent's causing it was not determined by the agent's character, desires, or past, since that would just be event causation.[25]As Chisholm explains it, humans have "a prerogative which some would attribute only to God: each of us, when we act, is aprime moverunmoved. In doing what we do, we cause certain events to happen, and nothing—or no one—causes us to cause those events to happen."[26]
This theory involves a difficulty which has long been associated with the idea of an unmoved mover. If a free action was not caused by any event, such as a change in the agent or an act of the will, then what is the difference between saying that an agent caused the event and simply saying that the event happened on its own? AsWilliam Jamesput it, "If a 'free' act be a sheer novelty, that comes not from me, the previous me, but ex nihilo, and simply tacks itself on to me, how can I, the previous I, be responsible? How can I have any permanent character that will stand still long enough for praise or blame to be awarded?"[27]Agent causation advocates respond that agent causation is actually more intuitive than event causation. They point toDavid Hume's argument that when we see two events happen in succession, our belief that one event caused the other cannot be justified rationally (known as theproblem of induction). If that is so, where does our belief in causality come from? According to Thomas Reid, "the conception of an efficient cause may very probably be derived from the experience we have had ... of our own power to produce certain effects."[28]Our everyday experiences of agent causation provide the basis for the idea of event causation.[29]
Event-causal accounts of incompatibilist free will typically rely upon physicalist models of mind (like those of the compatibilist), yet they presuppose physical indeterminism, in which certain indeterministic events are said to be caused by the agent. A number of event-causal accounts of free will have been created, referenced here asdeliberative indeterminism,centred accounts, andefforts of will theory.[30]The first two accounts do not require free will to be a fundamental constituent of the universe. Ordinary randomness is appealed to as supplying the "elbow room" that libertarians believe necessary. A first common objection to event-causal accounts is that the indeterminism could be destructive and could therefore diminish control by the agent rather than provide it (related to the problem of origination). A second common objection to these models is that it is questionable whether such indeterminism could add any value to deliberation over that which is already present in a deterministic world.
Deliberative indeterminismasserts that the indeterminism is confined to an earlier stage in the decision process.[31][32]This is intended to provide an indeterminate set of possibilities to choose from, while not risking the introduction ofluck(random decision making). The selection process is deterministic, although it may be based on earlier preferences established by the same process. Deliberative indeterminism has been referenced byDaniel Dennett[33]andJohn Martin Fischer.[34]An obvious objection to such a view is that an agent cannot be assigned ownership over their decisions (or preferences used to make those decisions) to any greater degree than that of a compatibilist model.
Centred accountspropose that for any given decision between two possibilities, the strength of reason will be considered for each option, yet there is still a probability the weaker candidate will be chosen.[35][36][37][38][39][40][41]An obvious objection to such a view is that decisions are explicitly left up to chance, and origination or responsibility cannot be assigned for any given decision.
Efforts of will theoryis related to the role of will power in decision making. It suggests that the indeterminacy of agent volition processes could map to the indeterminacy of certain physical events—and the outcomes of these events could therefore be considered caused by the agent. Models ofvolitionhave been constructed in which it is seen as a particular kind of complex, high-level process with an element of physical indeterminism. An example of this approach is that ofRobert Kane, where he hypothesizes that "in each case, the indeterminism is functioning as a hindrance or obstacle to her realizing one of her purposes—a hindrance or obstacle in the form of resistance within her will which must be overcome by effort."[9]According to Robert Kane such "ultimate responsibility" is a required condition for free will.[42]An important factor in such a theory is that the agent cannot be reduced to physical neuronal events, but rather mental processes are said to provide an equally valid account of the determination of outcome as their physical processes (seenon-reductive physicalism).
Epicurus, an ancientHellenistic philosopher, argued that as atoms moved through the void, there were occasions when they would "swerve" (clinamen) from their otherwise determined paths, thus initiating new causal chains. Epicurus argued that these swerves would allow us to be more responsible for our actions, something impossible if every action was deterministically caused.
Epicurus did not say the swerve was directly involved in decisions. But followingAristotle, Epicurus thought human agents have the autonomous ability to transcend necessity and chance (both of which destroy responsibility), so that praise and blame are appropriate. Epicurus finds atertium quid, beyond necessity and beyond chance. Histertium quidis agent autonomy, what is "up to us."
[S]ome things happen of necessity (ἀνάγκη), others by chance (τύχη), others through our own agency (παρ' ἡμᾶς). [...]. [N]ecessity destroys responsibility and chance is inconstant; whereas our own actions are autonomous, and it is to them that praise and blame naturally attach.[43]
TheEpicureanphilosopherLucretius(1st century BC) saw the randomness as enabling free will, even if he could not explain exactly how, beyond the fact that random swerves would break the causal chain of determinism.
Again, if all motion is always one long chain, and new motion arises out of the old in order invariable, and if the first-beginnings do not make by swerving a beginning of motion such as to break the decrees of fate, that cause may not follow cause from infinity, whence comes this freedom (libera) in living creatures all over the earth, whence I say is this will (voluntas) wrested from the fates by which we proceed whither pleasure leads each, swerving also our motions not at fixed times and fixed places, but just where our mind has taken us? For undoubtedly it is his own will in each that begins these things, and from the will movements go rippling through the limbs.
However, the interpretation of these ancient philosophers is controversial. Tim O'Keefe has argued that Epicurus and Lucretius were not libertarians at all, but compatibilists.[44]
Robert Nozickput forward an indeterministic theory of free will inPhilosophical Explanations(1981).[45]
When human beings become agents through reflexive self-awareness, they express their agency by having reasons for acting, to which they assign weights. Choosing the dimensions of one's identity is a special case, in which the assigning of weight to a dimension is partly self-constitutive. But all acting for reasons is constitutive of the self in a broader sense, namely, by its shaping one's character and personality in a manner analogous to the shaping that law undergoes through the precedent set by earlier court decisions. Just as a judge does not merely apply the law but to some degree makes it through judicial discretion, so too a person does not merely discover weights but assigns them; one not only weighs reasons but also weights them. Set in train is a process of building a framework for future decisions that we are tentatively committed to.
The lifelong process of self-definition in this broader sense is construedindeterministicallyby Nozick. The weighting is "up to us" in the sense that it is undetermined by antecedent causal factors, even though subsequent action is fully caused by the reasons one has accepted. He compares assigning weights in this deterministic sense to "the currently orthodox interpretation of quantum mechanics", followingvon Neumannin understanding a quantum mechanical system as in a superposition or probability mixture of states, which changes continuously in accordance with quantum mechanical equations of motion and discontinuously via measurement or observation that "collapses the wave packet" from a superposition to a particular state. Analogously, a person before decision has reasons without fixed weights: he is in a superposition of weights. The process of decision reduces the superposition to a particular state that causes action.
One particularly influential contemporary theory of libertarian free will is that ofRobert Kane.[30][46][47]Kane argued that "(1) the existence of alternative possibilities (or the agent's power to do otherwise) is a necessary condition for acting freely, and that (2) determinism is not compatible with alternative possibilities (it precludes the power to do otherwise)".[48]The crux of Kane's position is grounded not in a defense of alternative possibilities (AP) but in the notion of what Kane refers to as ultimate responsibility (UR). Thus, AP is a necessary but insufficient criterion for free will.[49]It is necessary that there be (metaphysically) real alternatives for our actions, but that is not enough; our actions could be random without being in our control. The control is found in "ultimate responsibility".
Ultimate responsibility entails that agents must be the ultimate creators (or originators) and sustainers of their own ends and purposes. There must be more than one way for a person's life to turn out (AP). More importantly, whichever way it turns out must be based in the person's willing actions. Kane defines it as follows:
(UR) An agent isultimately responsiblefor some (event or state) E's occurring only if (R) the agent is personally responsible for E's occurring in a sense which entails that something the agent voluntarily (or willingly) did or omitted either was, or causally contributed to, E's occurrence and made a difference to whether or not E occurred; and (U) for every X and Y (where X and Y represent occurrences of events and/or states) if the agent is personally responsible for X and if Y is anarche(sufficient condition, cause or motive) for X, then the agent must also be personally responsible for Y.
In short, "an agent must be responsible for anything that is a sufficient reason (condition, cause or motive) for the action's occurring."[50]
What allows for ultimacy of creation in Kane's picture are what he refers to as "self-forming actions" or SFAs—those moments of indecision during which people experience conflicting wills. These SFAs are the undetermined, regress-stopping voluntary actions or refraining in the life histories of agents that are required for UR. UR does not require thateveryact done of our own free will be undetermined and thus that, for every act or choice, we could have done otherwise; it requires only that certain of our choices and actions be undetermined (and thus that we could have done otherwise), namely SFAs. These form our character or nature; they inform our future choices, reasons and motivations in action. If a person has had the opportunity to make a character-forming decision (SFA), they are responsible for the actions that are a result of their character.
Randolph Clarkeobjects that Kane's depiction of free will is not truly libertarian but rather a form ofcompatibilism. The objection asserts that although the outcome of an SFA is not determined, one's history up to the eventis; so the fact that an SFA will occur is also determined. The outcome of the SFA is based on chance, and from that point on one's life is determined. This kind of freedom, says Clarke, is no different from the kind of freedom argued for by compatibilists, who assert that even though our actions are determined, they are free because they are in accordance with our own wills, much like the outcome of an SFA.[51]
Kane responds that the difference between causal indeterminism and compatibilism is "ultimate control—the originative control exercised by agents when it is 'up to them' which of a set of possible choices or actions will now occur, and up to no one and nothing else over which the agents themselves do not also have control".[52]UR assures that the sufficient conditions for one's actions do not lie before one's own birth.
Galen Strawsonholds that there is a fundamental sense in whichfree willis impossible, whetherdeterminismis true or not. He argues for this position with what he calls his "basic argument", which aims to show that no-one is ever ultimately morally responsible for their actions, and hence that no one has free will in the sense that usually concerns us.
In his book defending compatibilism,Freedom Evolves, Daniel Dennett spends a chapter criticising Kane's theory.[53]Kane believes freedom is based on certain rare and exceptional events, which he calls self-forming actions or SFAs. Dennett notes that there is no guarantee such an event will occur in an individual's life. If it does not, the individual does not in fact have free will at all, according to Kane. Yet they will seem the same as anyone else. Dennett finds an essentially undetectable notion of free will to be incredible.
Metaphysical libertarianism has faced significant criticism from both scientific and philosophical perspectives.
One major objection comes from neuroscience. Experiments by Benjamin Libet and others suggest that the brain may initiate decisions before subjects become consciously aware of them[54], raising questions about whether conscious free will exists at all. Critics argue this challenges the libertarian notion of uncaused or agent-caused actions.
Another prominent critique is the "luck objection." This argument claims that if an action is not determined by prior causes, then it seems to happen by chance. In this view, libertarian freedom risks reducing choice to randomness, undermining meaningful moral responsibility.[55]
Compatibilists, such as Daniel Dennett, argue that free will is compatible with determinism and that libertarianism wrongly assumes that causal determinism automatically negates responsibility. They maintain that what matters is whether a person's actions stem from their internal motivations—not whether those actions are ultimately uncaused.[56]
Some philosophers also raise metaphysical concerns about agent-causation, arguing that positing the agent as a "first cause" introduces mysterious or incoherent forms of causation into an otherwise naturalistic worldview.[57]
|
https://en.wikipedia.org/wiki/Libertarianism_(metaphysics)
|
Automated machine learning(AutoML) is the process ofautomatingthe tasks of applyingmachine learningto real-world problems. It is the combination of automation and ML.[1]
AutoML potentially includes every stage from beginning with a raw dataset to building a machine learning model ready for deployment. AutoML was proposed as anartificial intelligence-based solution to the growing challenge of applying machine learning.[2][3]The high degree of automation in AutoML aims to allow non-experts to make use of machine learning models and techniques without requiring them to become experts in machine learning. Automating the process of applying machine learning end-to-end additionally offers the advantages of producing simpler solutions, faster creation of those solutions, and models that often outperform hand-designed models.[4]
Common techniques used in AutoML includehyperparameter optimization,meta-learningandneural architecture search.
In a typical machine learning application, practitioners have a set of input data points to be used for training.[5]The raw data may not be in a form that all algorithms can be applied to. To make the data amenable for machine learning, an expert may have to apply appropriatedata pre-processing,feature engineering,feature extraction, andfeature selectionmethods. After these steps, practitioners must then performalgorithm selectionandhyperparameter optimizationto maximize the predictive performance of their model. If deep learning is used, the architecture of the neural network must also be chosen manually by the machine learning expert.
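As a toy illustration of the hyperparameter-optimization step (not tied to any particular AutoML system; all names and data below are made up), the following sketch fits a one-parameter ridge-regression model for several values of the regularization strength and keeps the value with the lowest validation error:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Closed-form ridge fit for the model y ≈ w * x:
//   w(lambda) = sum(x_i * y_i) / (sum(x_i^2) + lambda)
double fit_w(const std::vector<double>& x, const std::vector<double>& y, double lambda) {
    double xy = 0.0, xx = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) { xy += x[i] * y[i]; xx += x[i] * x[i]; }
    return xy / (xx + lambda);
}

// Mean squared error of the fitted slope on a data set.
double mse(const std::vector<double>& x, const std::vector<double>& y, double w) {
    double s = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) { double e = y[i] - w * x[i]; s += e * e; }
    return s / x.size();
}

int main() {
    // Made-up training and validation data.
    std::vector<double> xtr{1, 2, 3, 4}, ytr{2.1, 3.9, 6.2, 7.8};
    std::vector<double> xva{5, 6},       yva{10.1, 11.8};

    double best_lambda = 0.0, best_err = 1e300;
    for (double lambda : {0.0, 0.01, 0.1, 1.0, 10.0}) {   // the search grid
        double w   = fit_w(xtr, ytr, lambda);
        double err = mse(xva, yva, w);
        if (err < best_err) { best_err = err; best_lambda = lambda; }
    }
    std::printf("best lambda = %g (validation MSE = %g)\n", best_lambda, best_err);
}
```

Real AutoML systems replace the toy model with full pipelines and the fixed grid with smarter search strategies (Bayesian optimization, bandits, evolutionary search), but the select-by-validation-error loop is the same.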
Each of these steps may be challenging, resulting in significant hurdles to using machine learning. AutoML aims to simplify these steps for non-experts, and to make it easier for them to use machine learning techniques correctly and effectively.
AutoML plays an important role within the broader approach of automatingdata science, which also includes challenging tasks such as data engineering, data exploration and model interpretation and prediction.[6]
Automated machine learning can target various stages of the machine learning process.[3]Steps to automate are:
There are a number of key challenges being tackled in automated machine learning. One major issue is referred to as "development as a cottage industry".[8] This phrase refers to the fact that machine learning development still relies heavily on manual decisions and the biases of experts, which contrasts with the goal of machine learning: creating systems that can learn and improve from their own usage and analysis of the data. The tension is between how much experts should be involved in the learning of the systems and how much freedom the machines should be given. Nevertheless, experts and developers must help create and guide these machines to prepare them for their own learning, and building such a system requires labor-intensive work and knowledge of machine learning algorithms and system design.[9]
Additionally, some other challenges include meta-learning challenges[10]and computational resource allocation.
|
https://en.wikipedia.org/wiki/Automated_machine_learning
|
Inmathematicsandtheoretical computer science, asemiautomatonis adeterministic finite automatonhaving inputs but no output. It consists of asetQofstates, a set Σ called the input alphabet, and a functionT:Q× Σ →Qcalled the transition function.
Associated with any semiautomaton is amonoidcalled thecharacteristic monoid,input monoid,transition monoidortransition systemof the semiautomaton, whichactson the set of statesQ. This may be viewed either as an action of thefree monoidofstringsin the input alphabet Σ, or as the inducedtransformation semigroupofQ.
In older books like Clifford and Preston (1967)semigroup actionsare called "operands".
Incategory theory, semiautomata essentially arefunctors.
Atransformation semigrouportransformation monoidis a pair(M,Q){\displaystyle (M,Q)}consisting of asetQ(often called the "set ofstates") and asemigroupormonoidMoffunctions, or "transformations", mappingQto itself. They are functions in the sense that every elementmofMis a mapm:Q→Q{\displaystyle m\colon Q\to Q}. Ifsandtare two functions of the transformation semigroup, their semigroup product is defined as theirfunction composition(st)(q)=(s∘t)(q)=s(t(q)){\displaystyle (st)(q)=(s\circ t)(q)=s(t(q))}.
Some authors regard "semigroup" and "monoid" as synonyms. Here a semigroup need not have anidentity element; a monoid is a semigroup with an identity element (also called "unit"). Since the notion of functions acting on a set always includes the notion of an identity function, which when applied to the set does nothing, a transformation semigroup can be made into a monoid by adding the identity function.
Let M be a monoid and Q be a non-empty set. If there exists a multiplicative operation μ : Q × M → Q, written μ(q, m) = qm, which satisfies the properties q1 = q for 1 the unit of the monoid, and q(st) = (qs)t for all q ∈ Q and s, t ∈ M, then the triple (Q, M, μ) is called a right M-act or simply a right act. In long-hand, μ is the right multiplication of elements of Q by elements of M. The right act is often written as Q_M.
A left act is defined similarly, with a multiplicative operation μ : M × Q → Q, written μ(m, q) = mq, satisfying 1q = q and (st)q = s(tq); it is often denoted as _MQ.
AnM-act is closely related to a transformation monoid. However the elements ofMneed not be functionsper se, they are just elements of some monoid. Therefore, one must demand that the action ofμ{\displaystyle \mu }be consistent with multiplication in the monoid (i.e.μ(q,st)=μ(μ(q,s),t){\displaystyle \mu (q,st)=\mu (\mu (q,s),t)}), as, in general, this might not hold for some arbitraryμ{\displaystyle \mu }, in the way that it does for function composition.
Once one makes this demand, it is completely safe to drop all parentheses, as the monoid product and the action of the monoid on the set are completely associative. In particular, this allows elements of the monoid to be represented as strings of letters, in the computer-science sense of the word "string". This abstraction then allows one to talk about string operations in general, and eventually leads to the concept of formal languages as being composed of strings of letters.
Another difference between anM-act and a transformation monoid is that for anM-actQ, two distinct elements of the monoid may determine the same transformation ofQ. If we demand that this does not happen, then anM-act is essentially the same as a transformation monoid.
For two M-acts Q_M and B_M sharing the same monoid M, an M-homomorphism f : Q_M → B_M is a map f : Q → B such that f(qm) = f(q)m for all q ∈ Q_M and m ∈ M. The set of all M-homomorphisms is commonly written as Hom(Q_M, B_M) or Hom_M(Q, B).
TheM-acts andM-homomorphisms together form acategorycalledM-Act.[1]
A semiautomaton is a triple (Q, Σ, T) where Σ is a non-empty set, called the input alphabet, Q is a non-empty set, called the set of states, and T is the transition function T : Q × Σ → Q.
When the set of statesQis a finite set—it need not be—, a semiautomaton may be thought of as adeterministic finite automaton(Q,Σ,T,q0,A){\displaystyle (Q,\Sigma ,T,q_{0},A)}, but without the initial stateq0{\displaystyle q_{0}}or set ofaccept statesA. Alternately, it is afinite-state machinethat has no output, and only an input.
Any semiautomaton induces an act of a monoid in the following way.
LetΣ∗{\displaystyle \Sigma ^{*}}be thefree monoidgenerated by thealphabetΣ{\displaystyle \Sigma }(so that the superscript * is understood to be theKleene star); it is the set of all finite-lengthstringscomposed of the letters inΣ{\displaystyle \Sigma }.
For every word w in Σ*, let T_w : Q → Q be the function defined recursively, for all q in Q, by T_ε(q) = q for the empty word ε, and T_{σv}(q) = T_v(T(q, σ)) for a letter σ ∈ Σ and a word v ∈ Σ*. Let M(Q, Σ, T) be the set {T_w : w ∈ Σ*}.
The setM(Q,Σ,T){\displaystyle M(Q,\Sigma ,T)}is closed underfunction composition; that is, for allv,w∈Σ∗{\displaystyle v,w\in \Sigma ^{*}}, one hasTw∘Tv=Tvw{\displaystyle T_{w}\circ T_{v}=T_{vw}}. It also containsTε{\displaystyle T_{\varepsilon }}, which is theidentity functiononQ. Since function composition isassociative, the setM(Q,Σ,T){\displaystyle M(Q,\Sigma ,T)}is a monoid: it is called theinput monoid,characteristic monoid,characteristic semigrouportransition monoidof the semiautomaton(Q,Σ,T){\displaystyle (Q,\Sigma ,T)}.
If the set of statesQis finite, then the transition functions are commonly represented asstate transition tables. The structure of all possible transitions driven by strings in the free monoid has a graphical depiction as ade Bruijn graph.
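As a concrete illustration (the particular automaton and all names below are made up), a finite semiautomaton can be stored directly as a state transition table, and the action T_w of a string w is just iterated table lookup:

```cpp
#include <string>

// A 2-state semiautomaton over the alphabet {a, b}, stored as a transition table.
// T[q][sigma] gives the next state; applying a word letter by letter computes
// exactly the map T_w described in the text.
int step(int q, char sigma) {
    static const int T[2][2] = {
        /* state 0: */ {1, 0},   // T(0,'a') = 1, T(0,'b') = 0
        /* state 1: */ {0, 1},   // T(1,'a') = 0, T(1,'b') = 1
    };
    return T[q][sigma == 'a' ? 0 : 1];
}

// T_w: apply the letters of w in order, starting from state q.
int act(int q, const std::string& w) {
    for (char sigma : w) q = step(q, sigma);
    return q;
}

// Example: act(0, "aab") == 0, since 'a' toggles the state and 'b' leaves it
// fixed; the transition monoid here consists of just the identity and the toggle.
```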
The set of statesQneed not be finite, or even countable. As an example, semiautomata underpin the concept ofquantum finite automata. There, the set of statesQare given by thecomplex projective spaceCPn{\displaystyle \mathbb {C} P^{n}}, and individual states are referred to asn-statequbits. State transitions are given byunitaryn×nmatrices. The input alphabetΣ{\displaystyle \Sigma }remains finite, and other typical concerns of automata theory remain in play. Thus, thequantum semiautomatonmay be simply defined as the triple(CPn,Σ,{Uσ1,Uσ2,…,Uσp}){\displaystyle (\mathbb {C} P^{n},\Sigma ,\{U_{\sigma _{1}},U_{\sigma _{2}},\dotsc ,U_{\sigma _{p}}\})}when the alphabetΣ{\displaystyle \Sigma }haspletters, so that there is one unitary matrixUσ{\displaystyle U_{\sigma }}for each letterσ∈Σ{\displaystyle \sigma \in \Sigma }. Stated in this way, the quantum semiautomaton has many geometrical generalizations. Thus, for example, one may take aRiemannian symmetric spacein place ofCPn{\displaystyle \mathbb {C} P^{n}}, and selections from its group ofisometriesas transition functions.
Thesyntactic monoidof aregular languageisisomorphicto the transition monoid of theminimal automatonaccepting the language.
|
https://en.wikipedia.org/wiki/Semiautomaton
|
Inmathematics, aCaccioppoli setis a subset ofRn{\displaystyle \mathbb {R} ^{n}}whoseboundaryis (in a suitable sense)measurableand has (at leastlocally) afinitemeasure. A synonym isset of (locally) finite perimeter. Basically, a set is a Caccioppoli set if itscharacteristic functionis afunction of bounded variation, and its perimeter is the total variation of the characteristic function.
The basic concept of a Caccioppoli set was first introduced by the Italian mathematicianRenato Caccioppoliin the paper (Caccioppoli 1927): considering a plane set or asurfacedefined on anopen setin theplane, he defined theirmeasureorareaas thetotal variationin the sense ofTonelliof their definingfunctions, i.e. of theirparametric equations,provided this quantity wasbounded. Themeasure of theboundary of a setwas defined as afunctional, precisely aset function, for the first time: also, being defined onopen sets, it can be defined on allBorel setsand its value can be approximated by the values it takes on an increasingnetofsubsets. Another clearly stated (and demonstrated) property of this functional was itslower semi-continuity.
In the paper (Caccioppoli 1928), he made the theory more precise by using a triangular mesh as an increasing net approximating the open domain, defining positive and negative variations whose sum is the total variation, i.e. the area functional. His inspiring point of view, as he explicitly admitted, was that of Giuseppe Peano, as expressed by the Peano–Jordan measure: to associate to every portion of a surface an oriented plane area in much the same way that an approximating chord is associated to a curve. Also, another theme found in this theory was the extension of a functional from a subspace to the whole ambient space: the use of theorems generalizing the Hahn–Banach theorem is frequently encountered in Caccioppoli's research. However, the restricted meaning of total variation in the sense of Tonelli added much complication to the formal development of the theory, and the use of a parametric description of the sets restricted its scope.
Lamberto Cesariintroduced the "right" generalization offunctions of bounded variationto the case of several variables only in 1936:[1]perhaps, this was one of the reasons that induced Caccioppoli to present an improved version of his theory only nearly 24 years later, in the talk (Caccioppoli 1953) at the IVUMICongress in October 1951, followed by five notes published in theRendicontiof theAccademia Nazionale dei Lincei. These notes were sharply criticized byLaurence Chisholm Youngin theMathematical Reviews.[2]
In 1952 Ennio De Giorgi presented his first results, developing the ideas of Caccioppoli, on the definition of the measure of boundaries of sets at the Salzburg Congress of the Austrian Mathematical Society: he obtained these results by using a smoothing operator, analogous to a mollifier, constructed from the Gaussian function, independently proving some results of Caccioppoli. Probably he was led to study this theory by his teacher and friend Mauro Picone, who had also been the teacher of Caccioppoli and was likewise his friend. De Giorgi met Caccioppoli in 1953 for the first time: during their meeting, Caccioppoli expressed a profound appreciation of his work, starting their lifelong friendship.[3] The same year he published his first paper on the topic, i.e. (De Giorgi 1953): however, this paper and the one closely following it did not attract much interest from the mathematical community. It was only with the paper (De Giorgi 1954), reviewed again by Laurence Chisholm Young in the Mathematical Reviews,[4] that his approach to sets of finite perimeter became widely known and appreciated: also, in the review, Young revised his previous criticism of the work of Caccioppoli.
The last paper of De Giorgi on the theory of perimeters was published in 1958: in 1959, after the death of Caccioppoli, he started to call sets of finite perimeter "Caccioppoli sets". Two years later Herbert Federer and Wendell Fleming published their paper (Federer & Fleming 1960), changing the approach to the theory. Basically they introduced two new kinds of currents, respectively normal currents and integral currents: in a subsequent series of papers and in his famous treatise,[5] Federer showed that Caccioppoli sets are normal currents of dimension n in n-dimensional euclidean spaces. However, even if the theory of Caccioppoli sets can be studied within the framework of the theory of currents, it is customary to study it through the "traditional" approach using functions of bounded variation, as the treatments found in many important monographs in mathematics and mathematical physics attest.[6]
In what follows, the definition and properties offunctions of bounded variationin then{\displaystyle n}-dimensional setting will be used.
Definition 1. Let Ω be an open subset of R^n and let E be a Borel set. The perimeter of E in Ω is defined as

P(E, Ω) = V(χ_E, Ω) = sup { ∫_E div φ(x) dx : φ ∈ C_c^1(Ω; R^n), ‖φ‖_∞ ≤ 1 },

where χ_E is the characteristic function of E. That is, the perimeter of E in an open set Ω is defined to be the total variation of its characteristic function on that open set. If Ω = R^n, then we write P(E) = P(E, R^n) for the (global) perimeter.
Definition 2. The Borel set E is a Caccioppoli set if and only if it has finite perimeter in every bounded open subset Ω of R^n, i.e. P(E, Ω) < +∞ for every bounded open set Ω ⊂ R^n.
Therefore, a Caccioppoli set has a characteristic function whose total variation is locally bounded. From the theory of functions of bounded variation it is known that this implies the existence of a vector-valued Radon measure Dχ_E such that

∫_E div φ(x) dx = −∫_{R^n} φ · dDχ_E   for every φ ∈ C_c^1(R^n; R^n).
As noted for the case of generalfunctions of bounded variation, this vectormeasureDχE{\displaystyle D\chi _{E}}is thedistributionalorweakgradientofχE{\displaystyle \chi _{E}}. The total variation measure associated withDχE{\displaystyle D\chi _{E}}is denoted by|DχE|{\displaystyle |D\chi _{E}|}, i.e. for every open setΩ⊂Rn{\displaystyle \Omega \subset \mathbb {R} ^{n}}we write|DχE|(Ω){\displaystyle |D\chi _{E}|(\Omega )}forP(E,Ω)=V(χE,Ω){\displaystyle P(E,\Omega )=V(\chi _{E},\Omega )}.
In his papers (De Giorgi 1953) and (De Giorgi 1954),Ennio De Giorgiintroduces the followingsmoothingoperator, analogous to theWeierstrass transformin the one-dimensionalcase
As one can easily prove,Wλχ(x){\displaystyle W_{\lambda }\chi (x)}is asmooth functionfor allx∈Rn{\displaystyle x\in \mathbb {R} ^{n}}, such that
also, itsgradientis everywhere well defined, and so is itsabsolute value
Having defined this function, De Giorgi gives the following definition ofperimeter:
Definition 3. LetΩ{\displaystyle \Omega }be anopen subsetofRn{\displaystyle \mathbb {R} ^{n}}and letE{\displaystyle E}be aBorel set. TheperimeterofE{\displaystyle E}inΩ{\displaystyle \Omega }is the value
Actually De Giorgi considered the caseΩ=Rn{\displaystyle \Omega =\mathbb {R} ^{n}}: however, the extension to the general case is not difficult. It can be proved that the two definitions are exactly equivalent: for a proof see the already cited De Giorgi's papers or the book (Giusti 1984). Now having defined what a perimeter is, De Giorgi gives the same definition 2 of what a set of(locally) finiteperimeter is.
The following properties are the ordinary properties which the general notion of aperimeteris supposed to have:
For any given Caccioppoli set E ⊂ R^n there exist two naturally associated analytic quantities: the vector-valued Radon measure Dχ_E and its total variation measure |Dχ_E|. Given that

|Dχ_E|(Ω) = P(E, Ω)

is the perimeter within any open set Ω, one should expect that Dχ_E alone should somehow account for the perimeter of E.
It is natural to try to understand the relationship between the objectsDχE{\displaystyle D\chi _{E}},|DχE|{\displaystyle |D\chi _{E}|}, and thetopological boundary∂E{\displaystyle \partial E}. There is an elementary lemma that guarantees that thesupport(in the sense ofdistributions) ofDχE{\displaystyle D\chi _{E}}, and therefore also|DχE|{\displaystyle |D\chi _{E}|}, is alwayscontainedin∂E{\displaystyle \partial E}:
Lemma. The support of the vector-valued Radon measureDχE{\displaystyle D\chi _{E}}is asubsetof thetopological boundary∂E{\displaystyle \partial E}ofE{\displaystyle E}.
Proof. To see this choosex0∉∂E{\displaystyle x_{0}\notin \partial E}: thenx0{\displaystyle x_{0}}belongs to theopen setRn∖∂E{\displaystyle \mathbb {R} ^{n}\setminus \partial E}and this implies that it belongs to anopen neighborhoodA{\displaystyle A}contained in theinteriorofE{\displaystyle E}or in the interior ofRn∖E{\displaystyle \mathbb {R} ^{n}\setminus E}. Letϕ∈Cc1(A;Rn){\displaystyle \phi \in C_{c}^{1}(A;\mathbb {R} ^{n})}. IfA⊆(Rn∖E)∘=Rn∖E−{\displaystyle A\subseteq (\mathbb {R} ^{n}\setminus E)^{\circ }=\mathbb {R} ^{n}\setminus E^{-}}whereE−{\displaystyle E^{-}}is theclosureofE{\displaystyle E}, thenχE(x)=0{\displaystyle \chi _{E}(x)=0}forx∈A{\displaystyle x\in A}and
Likewise, ifA⊆E∘{\displaystyle A\subseteq E^{\circ }}thenχE(x)=1{\displaystyle \chi _{E}(x)=1}forx∈A{\displaystyle x\in A}so
Withϕ∈Cc1(A,Rn){\displaystyle \phi \in C_{c}^{1}(A,\mathbb {R} ^{n})}arbitrary it follows thatx0{\displaystyle x_{0}}is outside the support ofDχE{\displaystyle D\chi _{E}}.
The topological boundary∂E{\displaystyle \partial E}turns out to be too crude for Caccioppoli sets because itsHausdorff measureovercompensates for the perimeterP(E){\displaystyle P(E)}defined above. Indeed, the Caccioppoli set
representing a square together with a line segment sticking out on the left has perimeterP(E)=4{\displaystyle P(E)=4}, i.e. the extraneous line segment is ignored, while its topological boundary
has one-dimensional Hausdorff measureH1(∂E)=5{\displaystyle {\mathcal {H}}^{1}(\partial E)=5}.
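One concrete choice of such a set, consistent with the values P(E) = 4 and H^1(∂E) = 5 quoted above, is a unit square with a unit segment attached to it; this particular parametrization is only an illustration:

```latex
% A unit square together with a unit segment sticking out on the left:
E \;=\; [0,1]\times[0,1] \;\cup\; \bigl([-1,0]\times\{0\}\bigr) \;\subset\; \mathbb{R}^2 .
% The segment is Lebesgue-negligible, so it does not affect the total variation
% of \chi_E and P(E) = 4, while the topological boundary contains the segment
% as well as the four sides of the square, giving \mathcal{H}^1(\partial E) = 5.
```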
The "correct" boundary should therefore be a subset of∂E{\displaystyle \partial E}. We define:
Definition 4. The reduced boundary of a Caccioppoli set E ⊂ R^n is denoted by ∂*E and is defined to be the collection of points x at which the limit

ν_E(x) := lim_{ρ→0} Dχ_E(B_ρ(x)) / |Dχ_E|(B_ρ(x))

exists and has length equal to one, i.e. |ν_E(x)| = 1.
One can remark that by the Radon–Nikodym theorem the reduced boundary ∂*E is necessarily contained in the support of Dχ_E, which in turn is contained in the topological boundary ∂E as explained in the section above. That is:

∂*E ⊆ support Dχ_E ⊆ ∂E.
The inclusions above are not necessarily equalities as the previous example shows. In that example,∂E{\displaystyle \partial E}is the square with the segment sticking out,supportDχE{\displaystyle \operatorname {support} D\chi _{E}}is the square, and∂∗E{\displaystyle \partial ^{*}E}is the square without its four corners.
For convenience, in this section we treat only the case where Ω = R^n, i.e. the set E has (globally) finite perimeter. De Giorgi's theorem provides geometric intuition for the notion of reduced boundaries and confirms that it is the more natural definition for Caccioppoli sets by showing

H^{n−1}(∂*E) = P(E),

i.e. that its Hausdorff measure equals the perimeter of the set. The statement of the theorem is quite long because it interrelates various geometric notions in one fell swoop.
Theorem. SupposeE⊂Rn{\displaystyle E\subset \mathbb {R} ^{n}}is a Caccioppoli set. Then at each pointx{\displaystyle x}of the reduced boundary∂∗E{\displaystyle \partial ^{*}E}there exists a multiplicity oneapproximate tangent spaceTx{\displaystyle T_{x}}of|DχE|{\displaystyle |D\chi _{E}|}, i.e. a codimension-1 subspaceTx{\displaystyle T_{x}}ofRn{\displaystyle \mathbb {R} ^{n}}such that
for every continuous, compactly supportedf:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }. In fact the subspaceTx{\displaystyle T_{x}}is theorthogonal complementof the unit vector
defined previously. This unit vector also satisfies
locally inL1{\displaystyle L^{1}}, so it is interpreted as an approximate inward pointingunitnormal vectorto the reduced boundary∂∗E{\displaystyle \partial ^{*}E}. Finally,∂∗E{\displaystyle \partial ^{*}E}is (n-1)-rectifiableand the restriction of (n-1)-dimensionalHausdorff measureHn−1{\displaystyle {\mathcal {H}}^{n-1}}to∂∗E{\displaystyle \partial ^{*}E}is|DχE|{\displaystyle |D\chi _{E}|}, i.e.
In other words, up toHn−1{\displaystyle {\mathcal {H}}^{n-1}}-measure zero the reduced boundary∂∗E{\displaystyle \partial ^{*}E}is the smallest set on whichDχE{\displaystyle D\chi _{E}}is supported.
From the definition of the vector Radon measure Dχ_E and from the properties of the perimeter, a Gauss–Green-type formula holds; it is one version of the divergence theorem for domains with non-smooth boundary. De Giorgi's theorem can be used to formulate the same identity in terms of the reduced boundary ∂*E and the approximate inward pointing unit normal vector ν_E. Both identities are written out below.
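With the inward-normal convention used above (with the outward normal the signs on the right-hand sides flip), the two identities read:

```latex
% Gauss–Green formula in terms of the distributional gradient D\chi_E,
% for every vector field \phi \in C_c^1(\mathbb{R}^n;\mathbb{R}^n):
\int_E \operatorname{div}\phi(x)\,\mathrm{d}x
  \;=\; -\int_{\mathbb{R}^n} \phi\cdot \mathrm{d}D\chi_E ,
% and, via De Giorgi's theorem, the same identity on the reduced boundary,
% with \nu_E the approximate inward unit normal:
\int_E \operatorname{div}\phi(x)\,\mathrm{d}x
  \;=\; -\int_{\partial^* E} \phi\cdot\nu_E \,\mathrm{d}\mathcal{H}^{n-1} .
```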
|
https://en.wikipedia.org/wiki/Caccioppoli_set
|
Acomputer number formatis the internal representation of numeric values in digital device hardware and software, such as in programmablecomputersandcalculators.[1]Numerical values are stored as groupings ofbits, such asbytesand words. The encoding between numerical values and bit patterns is chosen for convenience of the operation of the computer;[citation needed]the encoding used by the computer's instruction set generally requires conversion for external use, such as for printing and display. Different types of processors may have different internal representations of numerical values and different conventions are used for integer and real numbers. Most calculations are carried out with number formats that fit into a processor register, but some software systems allow representation of arbitrarily large numbers using multiple words of memory.
Computers represent data in sets of binary digits. The representation is composed of bits, which in turn are grouped into larger sets such as bytes.
Abitis abinarydigitthat represents one of twostates. The concept of a bit can be understood as a value of either1or0,onoroff,yesorno,trueorfalse, orencodedby a switch ortoggleof some kind.
While a single bit, on its own, is able to represent only two values, astring of bitsmay be used to represent larger values. For example, a string of three bits can represent up to eight distinct values as illustrated in Table 1.
As the number of bits composing a string increases, the number of possible 0 and 1 combinations increases exponentially. A single bit allows only two value-combinations, two bits combined can make four separate values, three bits for eight, and so on, increasing with the formula 2^n. The number of possible combinations doubles with each binary digit added, as illustrated in Table 2.
Groupings with a specific number of bits are used to represent varying things and have specific names.
Abyteis a bit string containing the number of bits needed to represent acharacter. On most modern computers, this is an eight bit string. Because the definition of a byte is related to the number of bits composing a character, some older computers have used a different bit length for their byte.[2]In manycomputer architectures, the byte is the smallestaddressable unit, the atom of addressability, say. For example, even though 64-bit processors may address memory sixty-four bits at a time, they may still split that memory into eight-bit pieces. This is called byte-addressable memory. Historically, manyCPUsread data in some multiple of eight bits.[3]Because the byte size of eight bits is so common, but the definition is not standardized, the termoctetis sometimes used to explicitly describe an eight bit sequence.
Anibble(sometimesnybble), is a number composed of four bits.[4]Being ahalf-byte, the nibble was named as a play on words. A person may need several nibbles for one bite from something; similarly, a nybble is a part of a byte. Because four bits allow for sixteen values, a nibble is sometimes known as ahexadecimal digit.[5]
Octaland hexadecimal encoding are convenient ways to represent binary numbers, as used by computers. Computer engineers often need to write out binary quantities, but in practice writing out a binary number such as 1001001101010001 is tedious and prone to errors. Therefore, binary quantities are written in a base-8, or "octal", or, much more commonly, a base-16, "hexadecimal" (hex), number format. In the decimal system, there are 10 digits, 0 through 9, which combine to form numbers. In an octal system, there are only 8 digits, 0 through 7. That is, the value of an octal "10" is the same as a decimal "8", an octal "20" is a decimal "16", and so on. In a hexadecimal system, there are 16 digits, 0 through 9 followed, by convention, with A through F. That is, a hexadecimal "10" is the same as a decimal "16" and a hexadecimal "20" is the same as a decimal "32". An example and comparison of numbers in different bases is described in the chart below.
When typing numbers, formatting characters are used to describe the number system, for example 000_0000B or 0b000_00000 for binary and 0F8H or 0xf8 for hexadecimal numbers.
Each of these number systems is a positional system, but while decimal weights are powers of 10, the octal weights are powers of 8 and the hexadecimal weights are powers of 16. To convert from hexadecimal or octal to decimal, for each digit one multiplies the value of the digit by the value of its position and then adds the results. For example:
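As a sketch of this positional rule (the digits below are illustrative choices, not the article's own example), the multiply-and-add loop can be written in Python:

```python
# Convert a digit string to decimal by repeatedly multiplying by the base
# and adding the next digit's value (the positional rule described above).
def to_decimal(digits: str, base: int) -> int:
    value = 0
    for ch in digits:
        value = value * base + int(ch, 16)  # int(ch, 16) maps 0-9 and A-F to 0-15
    return value

print(to_decimal("F8", 16))          # 15*16 + 8 = 248
print(to_decimal("20", 8))           # 2*8 + 0 = 16
print(int("F8", 16), int("20", 8))   # Python's built-in equivalent: 248 16
```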
Fixed-pointformatting can be useful to represent fractions in binary.
The number of bits needed for the precision and range desired must be chosen to store the fractional and integer parts of a number. For instance, using a 32-bit format, 16 bits may be used for the integer and 16 for the fraction.
The eight's bit is followed by the four's bit, then the two's bit, then the one's bit. The fractional bits continue the pattern set by the integer bits. The next bit is the half's bit, then the quarter's bit, then the eighth's bit, and so on. For example:
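As a small sketch of the idea (the numbers are illustrative and not taken from the original example), a 16.16 fixed-point encoding might look like this in Python:

```python
# Store a value as an integer count of 1/65536 units (16 integer bits,
# 16 fractional bits), then recover an approximation by dividing back.
FRACTION_BITS = 16
SCALE = 1 << FRACTION_BITS            # 2**16

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def from_fixed(raw: int) -> float:
    return raw / SCALE

raw = to_fixed(13.8125)               # 13 + 1/2 + 1/4 + 1/16 is exactly representable
print(bin(raw), from_fixed(raw))      # 0b11011101000000000000 13.8125
print(from_fixed(to_fixed(0.2)))      # 1/5 is only approximated: 0.1999969482421875
```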
This form of encoding cannot represent some values in binary. For example, for the fraction 1/5 (0.2 in decimal), the closest approximations would be as follows:
Even if more digits are used, an exact representation is impossible. The number1/3, written in decimal as 0.333333333..., continues indefinitely. If prematurely terminated, the value would not represent1/3precisely.
While both unsigned and signed integers are used in digital systems, even a 32-bit integer is not enough to handle the full range of numbers a calculator can handle, and that is without even including fractions. To approximate the greater range and precision of real numbers, we have to abandon signed integers and fixed-point numbers and go to a "floating-point" format.
In the decimal system, we are familiar with floating-point numbers of the form (scientific notation):
or, more compactly:
which means "1.1030402 times 1 followed by 5 zeroes". We have a certain numeric value (1.1030402) known as a "significand", multiplied by a power of 10 (E5, meaning 105or 100,000), known as an "exponent". If we have a negative exponent, that means the number is multiplied by a 1 that many places to the right of the decimal point. For example:
The advantage of this scheme is that by using the exponent we can get a much wider range of numbers, even if the number of digits in the significand, or the "numeric precision", is much smaller than the range. Similar binary floating-point formats can be defined for computers. There are a number of such schemes; the most popular has been defined by the Institute of Electrical and Electronics Engineers (IEEE). The IEEE 754-2008 standard specification defines a 64-bit floating-point format with a sign bit, an 11-bit binary exponent in "excess-1023" format, and a 52-bit significand.
With the bits stored in 8 bytes of memory:
where "S" denotes the sign bit, "x" denotes an exponent bit, and "m" denotes a significand bit. Once the bits here have been extracted, they are converted with the computation:
This scheme provides numbers valid out to about 15 decimal digits, with magnitudes ranging from roughly 2.2 × 10⁻³⁰⁸ (the smallest normalized value) up to about 1.8 × 10³⁰⁸.
The specification also defines several special values that are not defined numbers, and are known asNaNs, for "Not A Number". These are used by programs to designate invalid operations and the like.
Some programs also use 32-bit floating-point numbers. The most common scheme uses a 23-bit significand with a sign bit, plus an 8-bit exponent in "excess-127" format, giving seven valid decimal digits.
The bits are converted to a numeric value with the computation:
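The single-precision computation is not reproduced above; for normalized values it follows the same pattern as the 64-bit case, with the excess-127 exponent:

\[ \text{value} = (-1)^{S} \times 1.m \times 2^{\,x-127}, \]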
leading to magnitudes ranging from roughly 1.2 × 10⁻³⁸ (the smallest normalized value) up to about 3.4 × 10³⁸.
Such floating-point numbers are known as "reals" or "floats" in general, but with a number of variations:
A 32-bit float value is sometimes called a "real32" or a "single", meaning "single-precision floating-point value".
A 64-bit float is sometimes called a "real64" or a "double", meaning "double-precision floating-point value".
The relation between numbers and bit patterns is chosen for convenience in computer manipulation; eight bytes stored in computer memory may represent a 64-bit real, two 32-bit reals, or four signed or unsigned integers, or some other kind of data that fits into eight bytes. The only difference is how the computer interprets them. If the computer stored four unsigned integers and then read them back from memory as a 64-bit real, it almost always would be a perfectly valid real number, though it would be junk data.
Only a finite range of real numbers can be represented with a given number of bits. Arithmetic operations can overflow or underflow, producing a value too large or too small to be represented.
The representation has a limited precision. For example, only 15 decimal digits can be represented with a 64-bit real. If a very small floating-point number is added to a large one, the result is just the large one. The small number was too small to even show up in 15 or 16 digits of resolution, and the computer effectively discards it. Analyzing the effect of limited precision is a well-studied problem. Estimates of the magnitude of round-off errors and methods to limit their effect on large calculations are part of any large computation project. The precision limit is different from the range limit, as it affects the significand, not the exponent.
The significand is a binary fraction that doesn't necessarily perfectly match a decimal fraction. In many cases a sum of reciprocal powers of 2 does not match a specific decimal fraction, and the results of computations will be slightly off. For example, the decimal fraction "0.1" is equivalent to an infinitely repeating binary fraction: 0.000110011 ...[6]
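This effect is easy to observe directly (a minimal Python sketch):

```python
from decimal import Decimal
from fractions import Fraction

# The float literal 0.1 is stored as the nearest representable binary fraction.
print(Decimal(0.1))      # 0.1000000000000000055511151231257827021181583404541015625
print(Fraction(0.1))     # 3602879701896397/36028797018963968, a power-of-2 denominator
print(0.1 + 0.2 == 0.3)  # False, because the small representation errors accumulate
```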
Programming inassembly languagerequires the programmer to keep track of the representation of numbers. Where the processor does not support a required mathematical operation, the programmer must work out a suitable algorithm and instruction sequence to carry out the operation; on some microprocessors, even integer multiplication must be done in software.
High-levelprogramming languagessuch asRubyandPythonoffer an abstract number that may be an expanded type such asrational,bignum, orcomplex. Mathematical operations are carried out by library routines provided by the implementation of the language. A given mathematical symbol in the source code, byoperator overloading, will invoke different object code appropriate to the representation of the numerical type; mathematical operations on any number—whether signed, unsigned, rational, floating-point, fixed-point, integral, or complex—are written exactly the same way.
Some languages, such as REXX and Java, provide decimal floating-point operations, which produce rounding errors of a different form.
The initial version of this article was based on apublic domainarticle fromGreg Goebel's Vectorsite.
|
https://en.wikipedia.org/wiki/Computer_number_format
|
Innews mediaandsocial media, anecho chamberis an environment or ecosystem in which participants encounterbeliefsthat amplify or reinforce their preexisting beliefs by communication and repetition inside a closed system and insulated from rebuttal.[2][3][4]An echo chamber circulates existing views without encountering opposing views, potentially resulting inconfirmation bias. Echo chambers may increasesocialandpolitical polarizationandextremism.[5]On social media, it is thought that echo chambers limit exposure to diverse perspectives, and favor and reinforce presupposed narratives and ideologies.[4][6]
The term is a metaphor based on an acousticecho chamber, in which soundsreverberatein a hollow enclosure. Another emerging term for this echoing and homogenizing effect within social-media communities on the Internet isneotribalism.
Many scholars note the effects that echo chambers can have on citizens' stances and viewpoints, and specifically the implications this has for politics.[7]However, some studies have suggested that the effects of echo chambers are weaker than often assumed.[8]
The Internet has expanded the variety and amount of accessible political information. On the positive side, this may create a more pluralistic form of public debate; on the negative side, greater access to information may lead toselective exposureto ideologically supportive channels.[5]In an extreme "echo chamber", one purveyor of information will make a claim, which many like-minded people then repeat, overhear, and repeat again (often in an exaggerated or otherwise distorted form)[9]until most people assume that some extreme variation of the story is true.[10]
The echo chamber effect occurs online when a harmonious group of people amalgamate and develop tunnel vision. Participants in online discussions may find their opinions constantly echoed back to them, which reinforces their individual belief systems due to the declining exposure to others' opinions.[11]Their individual belief systems culminate in a confirmation bias regarding a variety of subjects. When an individual wants something to be true, they often will only gather the information that supports their existing beliefs and disregard any statements they find that are contradictory or speak negatively of their beliefs.[12]Individuals who participate in echo chambers often do so because they feel more confident that their opinions will be more readily accepted by others in the echo chamber.[13]This happens because the Internet has provided access to a wide range of readily available information. People are receiving their news online more rapidly through less traditional sources, such as Facebook, Google, and Twitter. These and many other social platforms and online media outlets have established personalized algorithms intended to cater specific information to individuals' online feeds. This method of curating content has replaced the function of the traditional news editor.[14]The mediated spread of information through online networks causes a risk of an algorithmic filter bubble, leading to concern regarding how the effects of echo chambers on the internet promote the division of online interaction.[15]
Members of an echo chamber are not fully responsible for their convictions. Once part of an echo chamber, an individual might adhere to seemingly acceptable epistemic practices and still be further misled. Many individuals may bestuckin echo chambers due to factors existing outside of their control, such as being raised in one.[3]
Furthermore, the function of an echo chamber does not entail eroding a member's interest intruth; it focuses upon manipulating their credibility levels so that fundamentally different establishments and institutions will be considered proper sources of authority.[16]
However, empirical findings to clearly support these concerns are needed[17]and the field is very fragmented when it comes to empirical results. There are some studies that do measure echo chamber effects, such as the study of Bakshy et al. (2015).[18][19]In this study the researchers found that people tend to share news articles they align with. Similarly, they discovered a homophily in online friendships, meaning people are more likely to be connected on social media if they have the samepolitical ideology. In combination, this can lead to echo chamber effects. Bakshy et al. found that a person's potential exposure to cross-cutting content (content that is opposite to their own political beliefs) through their own network is only 24% for liberals and 35% for conservatives. Other studies argue that expressing cross-cutting content is an important measure of echo chambers: Bossetta et al. (2023) find that 29% of Facebook comments during Brexit were cross-cutting expressions.[20]Therefore, echo chambers might be present in a person's media diet but not in how they interact with others on social media.
Another set of studies suggests that echo chambers exist, but that they are not a widespread phenomenon: based on survey data, Dubois and Blank (2018) show that most people do consume news from various sources, while around 8% consume media with low diversity.[21]Similarly, Rusche (2022) shows that most Twitter users do not show behavior that resembles that of an echo chamber. However, through high levels of online activity, the small group of users that do make up a substantial share of populist politicians' followers, thus creating homogeneous online spaces.[22]
Finally, there are other studies which contradict the existence of echo chambers. Some found that people also share news reports that don't align with their political beliefs.[23]Others found that people using social media are being exposed to more diverse sources than people not using social media.[24]In summation, it remains that clear and distinct findings are absent which either confirm or falsify the concerns of echo chamber effects.
Research on thesocial dynamicsof echo chambers shows that the fragmented nature ofonline culture, the importance of collective identity construction, and the argumentative nature of online controversies can generate echo chambers where participants encounter self-reinforcing beliefs.[2]Researchers show that echo chambers are prime vehicles to disseminatedisinformation, as participants exploit contradictions against perceived opponents amidst identity-driven controversies.[2]As echo chambers build uponidentity politicsand emotion, they can contribute topolitical polarizationandneotribalism.[25]
Echo chamber studies fail to achieve consistent and comparable results due to unclear definitions, inconsistent measurement methods, and unrepresentative data.[26]Social media platforms continually change their algorithms, and most studies are conducted in the US, limiting their application to political systems with more parties.
In recent years, closed epistemic networks have increasingly been held responsible for the era of post-truth andfake news.[27]However, the media frequently conflates two distinct concepts of socialepistemology: echo chambers and epistemic bubbles.[16]
An epistemic bubble is an informational network in which important sources have been excluded by omission, perhaps unintentionally. It is an impaired epistemic framework which lacks strong connectivity.[28]Members within epistemic bubbles are unaware of significant information and reasoning.
On the other hand, an echo chamber is an epistemic construct in which voices are actively excluded and discredited. It does not suffer from a lack in connectivity; rather it depends on a manipulation of trust by methodically discrediting all outside sources.[29]According to research conducted by theUniversity of Pennsylvania, members of echo chambers become dependent on the sources within the chamber and highly resistant to any external sources.[30]
An important distinction exists in the strength of the respective epistemic structures. Epistemic bubbles are not particularly robust. Relevant information has merely been left out, not discredited.[31]One can ‘pop’ an epistemic bubble by exposing a member to the information and sources that they have been missing.[3]
Echo chambers, however, are incredibly strong. By creating pre-emptive distrust between members and non-members, insiders will be insulated from the validity of counter-evidence and will continue to reinforce the chamber in the form of a closed loop.[29]Outside voices are heard, but dismissed.
As such, the two concepts are fundamentally distinct and cannot be utilized interchangeably. However, one must note that this distinction is conceptual in nature, and an epistemic community can exercise multiple methods of exclusion to varying extents.
Afilter bubble– a term coined by internet activistEli Pariser– is a state of intellectual isolation that allegedly can result frompersonalized searcheswhen a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click-behavior and search history. As a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles. The choices made by these algorithms are not transparent.
Homophily is the tendency of individuals to associate and bond with similar others, as in the proverb "birds of a feather flock together". The presence of homophily has been detected in a vast array of network studies. For example, a study conducted by Bakshy et al. explored the data of 10.1 million Facebook users. These users identified as either politically liberal, moderate, or conservative, and the vast majority of their friends were found to have a political orientation that was similar to their own. Facebook algorithms recognize this and select information with a bias towards this political orientation to showcase in their newsfeed.[32]
Recommender systemsare information filtering systems put in place on different platforms that provide recommendations depending on information gathered from the user. In general, recommendations are provided in three different ways: based on content that was previously selected by the user, content that has similar properties or characteristics to that which has been previously selected by the user, or a combination of both.[32]
Both echo chambers and filter bubbles relate to the ways individuals are exposed to content devoid of clashing opinions, and colloquially might be used interchangeably. However, echo chamber refers to the overall phenomenon by which individuals are exposed only to information from like-minded individuals, while filter bubbles are a result of algorithms that choose content based on previous online behavior, as with search histories or online shopping activity.[18]Indeed, specific combinations of homophily and recommender systems have been identified as significant drivers for determining the emergence of echo chambers.[33]
Culture wars are cultural conflicts between social groups that have conflicting values and beliefs. The term refers to "hot button" topics on which societal polarization occurs.[34]A culture war is defined as "the phenomenon in which multiple groups of people, who hold entrenched values and ideologies, attempt to contentiously steer public policy."[2]Echo chambers on social media have been identified as playing a role in how multiple social groups, holding distinct values and ideologies, form groups and circulate conversations through conflict and controversy.
Online social communities become fragmented by echo chambers when like-minded people group together and members hear arguments in one specific direction with no counter argument addressed. In certain online platforms, such as Twitter, echo chambers are more likely to be found when the topic is more political in nature compared to topics that are seen as more neutral.[35]Social networkingcommunities are communities that are considered to be some of the most powerful reinforcements of rumors[36]due to the trust in the evidence supplied by their own social group and peers, over the information circulating the news.[37][38]In addition to this, the reduction of fear that users can enjoy through projecting their views on the internet versus face-to-face allows for further engagement in agreement with their peers.[39]
This can create significant barriers to critical discourse within an online medium. Social discussion and sharing can potentially suffer when people have anarrow information baseand do not reach outside their network. Essentially, the filter bubble can distort one'srealityin ways which are not believed to be alterable by outside sources.[40]
Findings by Tokita et al. (2021) suggest that individuals’ behavior within echo chambers may dampen their access to information even from desirable sources. In highly polarized information environments, individuals who are highly reactive to socially-shared information are more likely than their less reactive counterparts to curate politically homogenous information environments and experience decreased information diffusion in order to avoid overreacting to news they deem unimportant. This makes these individuals more likely to develop extreme opinions and to overestimate the degree to which they are informed.[41]
Research has also shown that misinformation can become more viral as a result of echo chambers, as the echo chambers provide an initial seed which can fuel broader viral diffusion.[42]
Many offline communities are also segregated by politicalbeliefsand cultural views. The echo chamber effect may prevent individuals from noticing changes in language andcultureinvolving groups other than their own. Online echo chambers can sometimes influence an individual's willingness to participate in similar discussions offline. A 2016 study found that "Twitter users who felt their audience on Twitter agreed with their opinion were more willing to speak out on that issue in the workplace".[13]
Group polarization can occur as a result of growing echo chambers. The lack of external viewpoints and the presence of a majority of individuals sharing a similar opinion or narrative can lead to a more extreme belief set. Group polarization can also aid the spread of fake news and misinformation through social media platforms.[43]This can extend to offline interactions, with data revealing that offline interactions can be as polarizing as online interactions (Twitter), arguably due to social media-enabled debates being highly fragmented.[44]
Echo chambers have existed in many forms. Examples cited since the late 20th century include:
Since the creation of the internet, scholars have been curious to see the changes in political communication.[56]Due to the new changes in information technology and how it is managed, it is unclear how opposing perspectives can reach common ground in a democracy.[57]The effects of the echo chamber effect have largely been cited to occur in politics, such as on Twitter[58]and Facebook during the 2016 presidential election in the United States.[19]Some believe that echo chambers played a big part in the success of Donald Trump in the 2016 presidential elections.[59]
Some companies have also made efforts in combating the effects of an echo chamber onan algorithmic approach. A high-profile example of this is the changes Facebook made to its "Trending" page, which is an on-site news source for its users. Facebook modified their "Trending" page by transitioning from displaying a single news source to multiple news sources for a topic or event.[60]The intended purpose of this was to expand the breadth of news sources for any given headline, and therefore expose readers to a variety of viewpoints. There are startups building apps with the mission of encouraging users to open their echo chambers, such asUnFound.news.[61]Another example is a beta feature onBuzzFeed Newscalled "Outside Your Bubble",[62]which adds a module to the bottom of BuzzFeed News articles to show reactions from various platforms like Twitter, Facebook, and Reddit. This concept aims to bring transparency and prevent biased conversations, diversifying the viewpoints their readers are exposed to.[63]
|
https://en.wikipedia.org/wiki/Echo_chamber_(media)
|
Network switching subsystem (NSS) (or GSM core network) is the component of a GSM system that carries out call switching and mobility management functions for mobile phones roaming on the network of base stations. It is owned and deployed by mobile phone operators and allows mobile devices to communicate with each other and with telephones in the wider public switched telephone network (PSTN). The architecture contains specific features and functions which are needed because the phones are not fixed in one location.
The NSS originally consisted of the circuit-switchedcore network, used for traditionalGSM servicessuch as voice calls,SMS, andcircuit switched datacalls. It was extended with an overlay architecture to provide packet-switched data services known as theGPRS core network. This allows mobile phones to have access to services such asWAP,MMSand theInternet.
Themobile switching center(MSC) is the primary service delivery node for GSM/CDMA, responsible forroutingvoice calls and SMS as well as other services (such as conference calls, FAX, and circuit-switched data).
The MSC sets up and releases theend-to-end connection, handles mobility and hand-over requirements during the call and takes care of charging and real-time prepaid account monitoring.
In the GSM mobile phone system, in contrast with earlier analogue services, fax and data information is sent digitally encoded directly to the MSC. Only at the MSC is this re-coded into an "analogue" signal (although actually this will almost certainly mean sound is encoded digitally as apulse-code modulation(PCM) signal in a 64-kbit/s timeslot, known as aDS0in America).
There are various names for MSCs in different contexts, which reflects their complex role in the network; all of these terms could refer to the same MSC doing different things at different times.
Thegateway MSC(G-MSC) is the MSC that determines which "visited MSC" (V-MSC) the subscriber who is being called is currently located at. It also interfaces with the PSTN. All mobile to mobile calls and PSTN to mobile calls are routed through a G-MSC. The term is only valid in the context of one call, since any MSC may provide both the gateway function and the visited MSC function. However, some manufacturers design dedicated high capacity MSCs which do not have anybase station subsystems(BSS) connected to them. These MSCs will then be the gateway MSC for many of the calls they handle.
Thevisited MSC(V-MSC) is the MSC where a customer is currently located. Thevisitor location register(VLR) associated with this MSC will have the subscriber's data in it.
Theanchor MSCis the MSC from which ahandoverhas been initiated. Thetarget MSCis the MSC toward which a handover should take place. Amobile switching center serveris a part of the redesigned MSC concept starting from3GPP Release 4.
The mobile switching center server is a soft-switch variant of the mobile switching center (therefore it may be referred to as mobile soft switch, MSS), which provides circuit-switched call handling, mobility management, and GSM services to the mobile phones roaming within the area that it serves. The functionality enables split control between the control plane (signalling) and the user plane (the bearer, handled in a network element called the media gateway, MGW), which guarantees better placement of network elements within the network.
The MSS and media gateway (MGW) make it possible to cross-connect circuit-switched calls using IP and ATM AAL2 as well as TDM. More information is available in 3GPP TS 23.205.
The termCircuit switching(CS) used here originates from traditional telecommunications systems. However, modern MSS and MGW devices mostly use genericInternettechnologies and formnext-generation telecommunication networks. MSS software may run on generic computers orvirtual machinesincloudenvironment.
The MSC connects to the following elements:
Tasks of the MSC include:
Thehome location register(HLR) is a central database that contains details of each mobile phone subscriber that is authorized to use the GSM core network. There can be several logical, and physical, HLRs perpublic land mobile network(PLMN), though oneinternational mobile subscriber identity(IMSI)/MSISDN pair can be associated with only one logical HLR (which can span several physical nodes) at a time.
The HLRs store details of everySIM cardissued by the mobile phone operator. Each SIM has a unique identifier called an IMSI which is theprimary keyto each HLR record.
Another important item of data associated with the SIM are the MSISDNs, which are thetelephone numbersused by mobile phones to make and receive calls. The primary MSISDN is the number used for making and receiving voice calls and SMS, but it is possible for a SIM to have other secondary MSISDNs associated with it forfaxand data calls. Each MSISDN is also aunique keyto the HLR record. The HLR data is stored for as long as a subscriber remains with the mobile phone operator.
Examples of other data stored in the HLR against an IMSI record are:
The HLR is a system which directly receives and processesMAPtransactions and messages from elements in the GSM network, for example, the location update messages received as mobile phones roam around.
The HLR connects to the following elements:
The main function of the HLR is to manage the fact that SIMs and phones move around a lot. The following procedures are implemented to deal with this:
The authentication center (AuC) is a function to authenticate each SIM card that attempts to connect to the GSM core network (typically when the phone is powered on). Once the authentication is successful, the HLR is allowed to manage the SIM and services described above. An encryption key is also generated that is subsequently used to encrypt all wireless communications (voice, SMS, etc.) between the mobile phone and the GSM core network.
If the authentication fails, then no services are possible from that particular combination of SIM card and mobile phone operator attempted. There is an additional form of identification check performed on the serial number of the mobile phone described in the EIR section below, but this is not relevant to the AuC processing.
Proper implementation of security in and around the AuC is a key part of an operator's strategy to avoidSIM cloning.
The AuC does not engage directly in the authentication process, but instead generates data known astripletsfor the MSC to use during the procedure. The security of the process depends upon ashared secretbetween the AuC and the SIM called theKi. TheKiis securely burned into the SIM during manufacture and is also securely replicated onto the AuC. ThisKiis never transmitted between the AuC and SIM, but is combined with the IMSI to produce achallenge/responsefor identification purposes and an encryption key calledKcfor use in over the air communications.
The AuC connects to the following elements:
The AuC stores the following data for each IMSI:
When the MSC asks the AuC for a new set of triplets for a particular IMSI, the AuC first generates a random number known as RAND. This RAND is then combined with the Ki to produce two numbers: the signed response SRES, computed with the A3 algorithm, and the ciphering key Kc, computed with the A8 algorithm.
The numbers (RAND, SRES,Kc) form the triplet sent back to the MSC. When a particular IMSI requests access to the GSM core network, the MSC sends theRANDpart of the triplet to the SIM. The SIM then feeds this number and theKi(which is burned onto the SIM) into the A3 algorithm as appropriate and an SRES is calculated and sent back to the MSC. If this SRES matches with the SRES in the triplet (which it should if it is a valid SIM), then the mobile is allowed to attach and proceed with GSM services.
After successful authentication, the MSC sends the encryption keyKcto thebase station controller(BSC) so that all communications can be encrypted and decrypted. Of course, the mobile phone can generate theKcitself by feeding the same RAND supplied during authentication and theKiinto the A8 algorithm.
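A rough sketch of this triplet flow is given below. The real A3/A8 algorithms (e.g. COMP128) are operator-specific and are not shown here; an HMAC is used purely as a stand-in, and all names are choices of this sketch rather than part of any GSM specification:

```python
import hmac, hashlib, os

def a3_a8_stub(ki: bytes, rand: bytes):
    """Stand-in for an operator's A3/A8: derive SRES and Kc from Ki and RAND."""
    digest = hmac.new(ki, rand, hashlib.sha256).digest()
    return digest[:4], digest[4:12]      # 32-bit SRES, 64-bit Kc

# AuC side: generate a triplet for an IMSI whose Ki it holds.
ki = os.urandom(16)                      # secret provisioned into the SIM at manufacture
rand = os.urandom(16)
sres_expected, kc = a3_a8_stub(ki, rand)
triplet = (rand, sres_expected, kc)      # handed to the MSC/VLR

# SIM side: receives only RAND and computes SRES with its own copy of Ki.
sres_from_sim, _ = a3_a8_stub(ki, rand)

# MSC side: compares the SIM's answer with the SRES from the triplet.
print("authenticated:", hmac.compare_digest(sres_from_sim, sres_expected))
```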
The AuC is usually collocated with the HLR, although this is not necessary. Whilst the procedure is secure for most everyday use, it is by no means hack proof. Therefore, a new set of security methods was designed for 3G phones.
In practice, A3 and A8 algorithms are generally implemented together (known as A3/A8, seeCOMP128). An A3/A8 algorithm is implemented in Subscriber Identity Module (SIM) cards and in GSM network Authentication Centers. It is used to authenticate the customer and generate a key for encrypting voice and data traffic, as defined in 3GPP TS 43.020 (03.20 before Rel-4). Development of A3 and A8 algorithms is considered a matter for individual GSM network operators, although example implementations are available. To encrypt Global System for Mobile Communications (GSM) cellular communications A5 algorithm is used.[1]
TheVisitor Location Register (VLR)is a database of the MSs (Mobile stations) that have roamed into the jurisdiction of the Mobile Switching Center (MSC) which it serves. Each mainbase transceiver stationin the network is served by exactly one VLR (oneBTSmay be served by many MSCs in case of MSC in pool), hence a subscriber cannot be present in more than one VLR at a time.
The data stored in the VLR has either been received from theHome Location Register (HLR), or collected from the MS. In practice, for performance reasons, most vendors integrate the VLR directly to the V-MSC and, where this is not done, the VLR is very tightly linked with the MSC via a proprietary interface. Whenever an MSC detects a new MS in its network, in addition to creating a new record in the VLR, it also updates the HLR of the mobile subscriber, apprising it of the new location of that MS. If VLR data is corrupted it can lead to serious issues with text messaging and call services.
Data stored include:
The primary functions of the VLR are:
EIR is a system that handles real-time requests from the switching equipment (MSC, SGSN, MME) to check the IMEI (CheckIMEI) of mobile devices. The answer contains the result of the check, typically one of 'whitelisted', 'blacklisted', 'greylisted', or 'unknown equipment'.
The switching equipment must use the EIR response to determine whether or not to allow the device to register or re-register on the network. Since the response of the switching equipment to 'greylisted' and 'unknown equipment' results is not clearly described in the standard, these are most often not used.
Most often, EIR uses the IMEI blacklist feature, which contains the IMEIs of devices that need to be banned from the network. As a rule, these are stolen or lost devices. Mobile operators rarely use EIR capabilities to block devices on their own initiative; usually blocking begins when a law appears in the country which obliges all cellular operators of the country to do so. Therefore, deliveries of the basic components of the network switching subsystem (core network) often already include an EIR with basic functionality: a 'whitelisted' response to all CheckIMEI requests and the ability to populate an IMEI blacklist, whose entries will be given a 'blacklisted' response.
When the legislative framework for blocking registration of devices in cellular networks appears in the country, the telecommunications regulator usually has a Central EIR (CEIR) system, which is integrated with the EIR of all operators and transmits to them the actual lists of identifiers that must be used when processing CheckIMEI requests. In doing so, there may be many new requirements for EIR systems that are not present in the legacy EIR:
Other functions may be required in individual cases. For example, Kazakhstan has introduced mandatory registration of devices and their binding to subscribers. But when a subscriber appears in the network with a new device, network operation is not blocked completely, and the subscriber is allowed to register the device. To do this, all services are blocked except the following: calls to a specific service number and sending SMS to a specific service number, while all Internet traffic is redirected to a specific landing page. This is achieved by the fact that the EIR can send commands to several MNO systems (HLR, PCRF, SMSC, etc.).
The most common suppliers of individual EIR systems (not as part of a complex solution) are the companies BroadForward, Mahindra Comviva, Mavenir, Nokia, Eastwind.
Connected more or less directly to the GSM core network are many other functions.
The billing center is responsible for processing the toll tickets generated by the VLRs and HLRs and generating a bill for each subscriber. It is also responsible for generating billing data for roaming subscribers.
The multimedia messaging service center supports the sending of multimedia messages (e.g., images, audio, video and their combinations) to (or from) MMS-enabled handsets.
Thevoicemailsystemrecords and stores voicemail.
According to U.S. law, which has also been copied into many other countries, especially in Europe, all telecommunications equipment must provide facilities for monitoring the calls of selected users. There must be some level of support for this built into any of the different elements. The concept oflawful interceptionis also known, following the relevant U.S. law, asCALEA.
Generally, lawful interception is implemented similarly to a conference call: while A and B are talking with each other, C can join the call and listen silently.
|
https://en.wikipedia.org/wiki/Visitors_Location_Register
|
TheGauss–Newton algorithmis used to solvenon-linear least squaresproblems, which is equivalent to minimizing a sum of squared function values. It is an extension ofNewton's methodfor finding aminimumof a non-linearfunction. Since a sum of squares must be nonnegative, the algorithm can be viewed as using Newton's method to iteratively approximatezeroesof the components of the sum, and thus minimizing the sum. In this sense, the algorithm is also an effective method forsolving overdetermined systems of equations. It has the advantage that second derivatives, which can be challenging to compute, are not required.[1]
Non-linear least squares problems arise, for instance, innon-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations.
The method is named after the mathematiciansCarl Friedrich GaussandIsaac Newton, and first appeared in Gauss's 1809 workTheoria motus corporum coelestium in sectionibus conicis solem ambientum.[2]
Givenm{\displaystyle m}functionsr=(r1,…,rm){\displaystyle {\textbf {r}}=(r_{1},\ldots ,r_{m})}(often called residuals) ofn{\displaystyle n}variablesβ=(β1,…βn),{\displaystyle {\boldsymbol {\beta }}=(\beta _{1},\ldots \beta _{n}),}withm≥n,{\displaystyle m\geq n,}the Gauss–Newton algorithmiterativelyfinds the value ofβ{\displaystyle \beta }that minimize the sum of squares[3]S(β)=∑i=1mri(β)2.{\displaystyle S({\boldsymbol {\beta }})=\sum _{i=1}^{m}r_{i}({\boldsymbol {\beta }})^{2}.}
Starting with an initial guessβ(0){\displaystyle {\boldsymbol {\beta }}^{(0)}}for the minimum, the method proceeds by the iterationsβ(s+1)=β(s)−(JrTJr)−1JrTr(β(s)),{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}-\left(\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {J_{r}} \right)^{-1}\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right),}
where, ifrandβarecolumn vectors, the entries of theJacobian matrixare(Jr)ij=∂ri(β(s))∂βj,{\displaystyle \left(\mathbf {J_{r}} \right)_{ij}={\frac {\partial r_{i}\left({\boldsymbol {\beta }}^{(s)}\right)}{\partial \beta _{j}}},}
and the symbolT{\displaystyle ^{\operatorname {T} }}denotes thematrix transpose.
At each iteration, the updateΔ=β(s+1)−β(s){\displaystyle \Delta ={\boldsymbol {\beta }}^{(s+1)}-{\boldsymbol {\beta }}^{(s)}}can be found by rearranging the previous equation in the following two steps:
With substitutionsA=JrTJr{\textstyle A=\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {J_{r}} },b=−JrTr(β(s)){\displaystyle \mathbf {b} =-\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right)}, andx=Δ{\displaystyle \mathbf {x} =\Delta }, this turns into the conventional matrix equation of formAx=b{\displaystyle A\mathbf {x} =\mathbf {b} }, which can then be solved in a variety of methods (seeNotes).
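A one-step sketch of that linear solve (using NumPy; the names are chosen here and are not from the original):

```python
import numpy as np

def gauss_newton_step(J: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Solve the normal equations (J^T J) delta = -J^T r for a single update."""
    A = J.T @ J
    b = -J.T @ r
    return np.linalg.solve(A, b)
```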
Ifm=n, the iteration simplifies to
β(s+1)=β(s)−(Jr)−1r(β(s)),{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}-\left(\mathbf {J_{r}} \right)^{-1}\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right),}
which is a direct generalization ofNewton's methodin one dimension.
In data fitting, where the goal is to find the parametersβ{\displaystyle {\boldsymbol {\beta }}}such that a given model functionf(x,β){\displaystyle \mathbf {f} (\mathbf {x} ,{\boldsymbol {\beta }})}best fits some data points(xi,yi){\displaystyle (x_{i},y_{i})}, the functionsri{\displaystyle r_{i}}are theresiduals:ri(β)=yi−f(xi,β).{\displaystyle r_{i}({\boldsymbol {\beta }})=y_{i}-f\left(x_{i},{\boldsymbol {\beta }}\right).}
Then, the Gauss–Newton method can be expressed in terms of the JacobianJf=−Jr{\displaystyle \mathbf {J_{f}} =-\mathbf {J_{r}} }of the functionf{\displaystyle \mathbf {f} }asβ(s+1)=β(s)+(JfTJf)−1JfTr(β(s)).{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}+\left(\mathbf {J_{f}} ^{\operatorname {T} }\mathbf {J_{f}} \right)^{-1}\mathbf {J_{f}} ^{\operatorname {T} }\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right).}
Note that(JfTJf)−1JfT{\displaystyle \left(\mathbf {J_{f}} ^{\operatorname {T} }\mathbf {J_{f}} \right)^{-1}\mathbf {J_{f}} ^{\operatorname {T} }}is the leftpseudoinverseofJf{\displaystyle \mathbf {J_{f}} }.
The assumptionm≥nin the algorithm statement is necessary, as otherwise the matrixJrTJr{\displaystyle \mathbf {J_{r}} ^{T}\mathbf {J_{r}} }is not invertible and the normal equations cannot be solved (at least uniquely).
The Gauss–Newton algorithm can be derived bylinearly approximatingthe vector of functionsri. UsingTaylor's theorem, we can write at every iteration:r(β)≈r(β(s))+Jr(β(s))Δ{\displaystyle \mathbf {r} ({\boldsymbol {\beta }})\approx \mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right)+\mathbf {J_{r}} \left({\boldsymbol {\beta }}^{(s)}\right)\Delta }
withΔ=β−β(s){\displaystyle \Delta ={\boldsymbol {\beta }}-{\boldsymbol {\beta }}^{(s)}}. The task of findingΔ{\displaystyle \Delta }minimizing the sum of squares of the right-hand side; i.e.,min‖r(β(s))+Jr(β(s))Δ‖22,{\displaystyle \min \left\|\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right)+\mathbf {J_{r}} \left({\boldsymbol {\beta }}^{(s)}\right)\Delta \right\|_{2}^{2},}
is alinear least-squaresproblem, which can be solved explicitly, yielding the normal equations in the algorithm.
The normal equations arensimultaneous linear equations in the unknown incrementsΔ{\displaystyle \Delta }. They may be solved in one step, usingCholesky decomposition, or, better, theQR factorizationofJr{\displaystyle \mathbf {J_{r}} }. For large systems, aniterative method, such as theconjugate gradientmethod, may be more efficient. If there is a linear dependence between columns ofJr, the iterations will fail, asJrTJr{\displaystyle \mathbf {J_{r}} ^{T}\mathbf {J_{r}} }becomes singular.
Whenr{\displaystyle \mathbf {r} }is complexr:Cn→C{\displaystyle \mathbf {r} :\mathbb {C} ^{n}\to \mathbb {C} }the conjugate form should be used:(Jr¯TJr)−1Jr¯T{\displaystyle \left({\overline {\mathbf {J_{r}} }}^{\operatorname {T} }\mathbf {J_{r}} \right)^{-1}{\overline {\mathbf {J_{r}} }}^{\operatorname {T} }}.
In this example, the Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and model's predictions.
In a biology experiment studying the relation between substrate concentration[S]and reaction rate in an enzyme-mediated reaction, the data in the following table were obtained.
It is desired to find a curve (model function) of the formrate=Vmax⋅[S]KM+[S]{\displaystyle {\text{rate}}={\frac {V_{\text{max}}\cdot [S]}{K_{M}+[S]}}}
that fits best the data in the least-squares sense, with the parametersVmax{\displaystyle V_{\text{max}}}andKM{\displaystyle K_{M}}to be determined.
Denote byxi{\displaystyle x_{i}}andyi{\displaystyle y_{i}}the values of[S]andraterespectively, withi=1,…,7{\displaystyle i=1,\dots ,7}. Letβ1=Vmax{\displaystyle \beta _{1}=V_{\text{max}}}andβ2=KM{\displaystyle \beta _{2}=K_{M}}. We will findβ1{\displaystyle \beta _{1}}andβ2{\displaystyle \beta _{2}}such that the sum of squares of the residualsri=yi−β1xiβ2+xi,(i=1,…,7){\displaystyle r_{i}=y_{i}-{\frac {\beta _{1}x_{i}}{\beta _{2}+x_{i}}},\quad (i=1,\dots ,7)}
is minimized.
The JacobianJr{\displaystyle \mathbf {J_{r}} }of the vector of residualsri{\displaystyle r_{i}}with respect to the unknownsβj{\displaystyle \beta _{j}}is a7×2{\displaystyle 7\times 2}matrix with thei{\displaystyle i}-th row having the entries∂ri∂β1=−xiβ2+xi;∂ri∂β2=β1⋅xi(β2+xi)2.{\displaystyle {\frac {\partial r_{i}}{\partial \beta _{1}}}=-{\frac {x_{i}}{\beta _{2}+x_{i}}};\quad {\frac {\partial r_{i}}{\partial \beta _{2}}}={\frac {\beta _{1}\cdot x_{i}}{\left(\beta _{2}+x_{i}\right)^{2}}}.}
Starting with the initial estimates ofβ1=0.9{\displaystyle \beta _{1}=0.9}andβ2=0.2{\displaystyle \beta _{2}=0.2}, after five iterations of the Gauss–Newton algorithm, the optimal valuesβ^1=0.362{\displaystyle {\hat {\beta }}_{1}=0.362}andβ^2=0.556{\displaystyle {\hat {\beta }}_{2}=0.556}are obtained. The sum of squares of residuals decreased from the initial value of 1.445 to 0.00784 after the fifth iteration. The plot in the figure on the right shows the curve determined by the model for the optimal parameters with the observed data.
The Gauss-Newton iteration is guaranteed to converge toward a local minimum pointβ^{\displaystyle {\hat {\beta }}}under 4 conditions:[4]The functionsr1,…,rm{\displaystyle r_{1},\ldots ,r_{m}}are twice continuously differentiable in an open convex setD∋β^{\displaystyle D\ni {\hat {\beta }}}, the JacobianJr(β^){\displaystyle \mathbf {J} _{\mathbf {r} }({\hat {\beta }})}is of full column rank, the initial iterateβ(0){\displaystyle \beta ^{(0)}}is nearβ^{\displaystyle {\hat {\beta }}}, and the local minimum value|S(β^)|{\displaystyle |S({\hat {\beta }})|}is small. The convergence is quadratic if|S(β^)|=0{\displaystyle |S({\hat {\beta }})|=0}.
It can be shown[5]that the increment Δ is adescent directionforS, and, if the algorithm converges, then the limit is astationary pointofS. For large minimum value|S(β^)|{\displaystyle |S({\hat {\beta }})|}, however, convergence is not guaranteed, not evenlocal convergenceas inNewton's method, or convergence under the usual Wolfe conditions.[6]
The rate of convergence of the Gauss–Newton algorithm can approachquadratic.[7]The algorithm may converge slowly or not at all if the initial guess is far from the minimum or the matrixJrTJr{\displaystyle \mathbf {J_{r}^{\operatorname {T} }J_{r}} }isill-conditioned. For example, consider the problem withm=2{\displaystyle m=2}equations andn=1{\displaystyle n=1}variable, given byr1(β)=β+1,r2(β)=λβ2+β−1.{\displaystyle {\begin{aligned}r_{1}(\beta )&=\beta +1,\\r_{2}(\beta )&=\lambda \beta ^{2}+\beta -1.\end{aligned}}}
Forλ<1{\displaystyle \lambda <1},β=0{\displaystyle \beta =0}is a local optimum. Ifλ=0{\displaystyle \lambda =0}, then the problem is in fact linear and the method finds the optimum in one iteration. If |λ| < 1, then the method converges linearly and the error decreases asymptotically with a factor |λ| at every iteration. However, if |λ| > 1, then the method does not even converge locally.[8]
The Gauss-Newton iterationx(k+1)=x(k)−J(x(k))†f(x(k)),k=0,1,…{\displaystyle \mathbf {x} ^{(k+1)}=\mathbf {x} ^{(k)}-J(\mathbf {x} ^{(k)})^{\dagger }\mathbf {f} (\mathbf {x} ^{(k)})\,,\quad k=0,1,\ldots }is an effective method for solvingoverdetermined systemsof equations in the form off(x)=0{\displaystyle \mathbf {f} (\mathbf {x} )=\mathbf {0} }withf(x)=[f1(x1,…,xn)⋮fm(x1,…,xn)]{\displaystyle \mathbf {f} (\mathbf {x} )={\begin{bmatrix}f_{1}(x_{1},\ldots ,x_{n})\\\vdots \\f_{m}(x_{1},\ldots ,x_{n})\end{bmatrix}}}andm>n{\displaystyle m>n}whereJ(x)†{\displaystyle J(\mathbf {x} )^{\dagger }}is theMoore-Penrose inverse(also known aspseudoinverse) of theJacobian matrixJ(x){\displaystyle J(\mathbf {x} )}off(x){\displaystyle \mathbf {f} (\mathbf {x} )}.
It can be considered an extension ofNewton's methodand enjoys the same local quadratic convergence[4]toward isolated regular solutions.
If the solution doesn't exist but the initial iteratex(0){\displaystyle \mathbf {x} ^{(0)}}is near a pointx^=(x^1,…,x^n){\displaystyle {\hat {\mathbf {x} }}=({\hat {x}}_{1},\ldots ,{\hat {x}}_{n})}at which the sum of squares∑i=1m|fi(x1,…,xn)|2≡‖f(x)‖22{\textstyle \sum _{i=1}^{m}|f_{i}(x_{1},\ldots ,x_{n})|^{2}\equiv \|\mathbf {f} (\mathbf {x} )\|_{2}^{2}}reaches a small local minimum, the Gauss-Newton iteration linearly converges tox^{\displaystyle {\hat {\mathbf {x} }}}. The pointx^{\displaystyle {\hat {\mathbf {x} }}}is often called aleast squaressolution of the overdetermined system.
In what follows, the Gauss–Newton algorithm will be derived fromNewton's methodfor function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm can be quadratic under certain regularity conditions. In general (under weaker conditions), the convergence rate is linear.[9]
The recurrence relation for Newton's method for minimizing a functionSof parametersβ{\displaystyle {\boldsymbol {\beta }}}isβ(s+1)=β(s)−H−1g,{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}-\mathbf {H} ^{-1}\mathbf {g} ,}
wheregdenotes thegradient vectorofS, andHdenotes theHessian matrixofS.
SinceS=∑i=1mri2{\textstyle S=\sum _{i=1}^{m}r_{i}^{2}}, the gradient is given bygj=2∑i=1mri∂ri∂βj.{\displaystyle g_{j}=2\sum _{i=1}^{m}r_{i}{\frac {\partial r_{i}}{\partial \beta _{j}}}.}
Elements of the Hessian are calculated by differentiating the gradient elements,gj{\displaystyle g_{j}}, with respect toβk{\displaystyle \beta _{k}}:Hjk=2∑i=1m(∂ri∂βj∂ri∂βk+ri∂2ri∂βj∂βk).{\displaystyle H_{jk}=2\sum _{i=1}^{m}\left({\frac {\partial r_{i}}{\partial \beta _{j}}}{\frac {\partial r_{i}}{\partial \beta _{k}}}+r_{i}{\frac {\partial ^{2}r_{i}}{\partial \beta _{j}\partial \beta _{k}}}\right).}
The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated byHjk≈2∑i=1mJijJik,{\displaystyle H_{jk}\approx 2\sum _{i=1}^{m}J_{ij}J_{ik},}
whereJij=∂ri/∂βj{\textstyle J_{ij}={\partial r_{i}}/{\partial \beta _{j}}}are entries of the JacobianJr. Note that when the exact hessian is evaluated near an exact fit we have near-zerori{\displaystyle r_{i}}, so the second term becomes near-zero as well, which justifies the approximation. The gradient and the approximate Hessian can be written in matrix notation asg=2JrTr,H≈2JrTJr.{\displaystyle \mathbf {g} =2{\mathbf {J} _{\mathbf {r} }}^{\operatorname {T} }\mathbf {r} ,\quad \mathbf {H} \approx 2{\mathbf {J} _{\mathbf {r} }}^{\operatorname {T} }\mathbf {J_{r}} .}
These expressions are substituted into the recurrence relation above to obtain the operational equationsβ(s+1)=β(s)+Δ;Δ=−(JrTJr)−1JrTr.{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}+\Delta ;\quad \Delta =-\left(\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {J_{r}} \right)^{-1}\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {r} .}
Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation|ri∂2ri∂βj∂βk|≪|∂ri∂βj∂ri∂βk|{\displaystyle \left|r_{i}{\frac {\partial ^{2}r_{i}}{\partial \beta _{j}\partial \beta _{k}}}\right|\ll \left|{\frac {\partial r_{i}}{\partial \beta _{j}}}{\frac {\partial r_{i}}{\partial \beta _{k}}}\right|}
that needs to hold to be able to ignore the second-order derivative terms may be valid in two cases, for which convergence is to be expected:[10]
With the Gauss–Newton method the sum of squares of the residualsSmay not decrease at every iteration. However, since Δ is a descent direction, unlessS(βs){\displaystyle S\left({\boldsymbol {\beta }}^{s}\right)}is a stationary point, it holds thatS(βs+αΔ)<S(βs){\displaystyle S\left({\boldsymbol {\beta }}^{s}+\alpha \Delta \right)<S\left({\boldsymbol {\beta }}^{s}\right)}for all sufficiently smallα>0{\displaystyle \alpha >0}. Thus, if divergence occurs, one solution is to employ a fractionα{\displaystyle \alpha }of the increment vector Δ in the updating formula:βs+1=βs+αΔ.{\displaystyle {\boldsymbol {\beta }}^{s+1}={\boldsymbol {\beta }}^{s}+\alpha \Delta .}
In other words, the increment vector is too long, but it still points "downhill", so going just a part of the way will decrease the objective functionS. An optimal value forα{\displaystyle \alpha }can be found by using aline searchalgorithm, that is, the magnitude ofα{\displaystyle \alpha }is determined by finding the value that minimizesS, usually using adirect search methodin the interval0<α<1{\displaystyle 0<\alpha <1}or abacktracking line searchsuch asArmijo-line search. Typically,α{\displaystyle \alpha }should be chosen such that it satisfies theWolfe conditionsor theGoldstein conditions.[11]
In cases where the direction of the shift vector is such that the optimal fraction α is close to zero, an alternative method for handling divergence is the use of theLevenberg–Marquardt algorithm, atrust regionmethod.[3]The normal equations are modified in such a way that the increment vector is rotated towards the direction ofsteepest descent,(JTJ+λD)Δ=−JTr,{\displaystyle \left(\mathbf {J^{\operatorname {T} }J+\lambda D} \right)\Delta =-\mathbf {J} ^{\operatorname {T} }\mathbf {r} ,}
whereDis a positive diagonal matrix. Note that whenDis the identity matrixIandλ→+∞{\displaystyle \lambda \to +\infty }, thenλΔ=λ(JTJ+λI)−1(−JTr)=(I−JTJ/λ+⋯)(−JTr)→−JTr{\displaystyle \lambda \Delta =\lambda \left(\mathbf {J^{\operatorname {T} }J} +\lambda \mathbf {I} \right)^{-1}\left(-\mathbf {J} ^{\operatorname {T} }\mathbf {r} \right)=\left(\mathbf {I} -\mathbf {J^{\operatorname {T} }J} /\lambda +\cdots \right)\left(-\mathbf {J} ^{\operatorname {T} }\mathbf {r} \right)\to -\mathbf {J} ^{\operatorname {T} }\mathbf {r} }, therefore thedirectionof Δ approaches the direction of the negative gradient−JTr{\displaystyle -\mathbf {J} ^{\operatorname {T} }\mathbf {r} }.
The so-called Marquardt parameter $\lambda$ may also be optimized by a line search, but this is inefficient, as the shift vector must be recalculated every time $\lambda$ is changed. A more efficient strategy is this: when divergence occurs, increase the Marquardt parameter until there is a decrease in $S$; then retain the value from one iteration to the next, but decrease it if possible until a cut-off value is reached, at which point the Marquardt parameter can be set to zero and the minimization of $S$ becomes a standard Gauss–Newton minimization.
For large-scale optimization, the Gauss–Newton method is of special interest because it is often (though certainly not always) true that the matrix $\mathbf{J_r}$ is more sparse than the approximate Hessian $\mathbf{J_r}^{\mathsf{T}}\mathbf{J_r}$. In such cases, the step calculation itself will typically need to be done with an approximate iterative method appropriate for large and sparse problems, such as the conjugate gradient method.
In order to make this kind of approach work, one needs at least an efficient method for computing the product $\mathbf{J_r}^{\mathsf{T}}\mathbf{J_r}\,\mathbf{p}$ for some vector $\mathbf{p}$. With sparse matrix storage, it is in general practical to store the rows of $\mathbf{J_r}$ in a compressed form (e.g., without zero entries), making a direct computation of the above product tricky due to the transposition. However, if one defines $\mathbf{c}_i$ as row $i$ of the matrix $\mathbf{J_r}$, the following simple relation holds: $$\mathbf{J_r}^{\mathsf{T}}\mathbf{J_r}\,\mathbf{p} = \sum_i \mathbf{c}_i\left(\mathbf{c}_i \cdot \mathbf{p}\right),$$
so that every row contributes additively and independently to the product. In addition to respecting a practical sparse storage structure, this expression is well suited for parallel computations. Note that every row $\mathbf{c}_i$ is the gradient of the corresponding residual $r_i$; with this in mind, the formula above emphasizes the fact that residuals contribute to the problem independently of each other.
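As an illustration of the row-wise formula, the following Julia sketch (the function name jtjp and the small dense test Jacobian are illustrative assumptions, not part of the original text) accumulates the product one row at a time and checks it against the dense computation.

    using LinearAlgebra

    # Accumulate Jᵀ(Jp) row by row: each row c_i contributes c_i * (c_i ⋅ p).
    # `rows` may be any iterable of row vectors of J (e.g. sparse rows).
    function jtjp(rows, p)
        acc = zeros(length(p))
        for c in rows
            acc .+= c .* dot(c, p)   # independent, additive contribution of one row
        end
        return acc
    end

    # Quick check against the dense product Jᵀ * (J * p)
    J = [1.0 0.0 2.0; 0.0 3.0 0.0; 4.0 0.0 5.0; 0.0 0.0 6.0]
    p = [1.0, -1.0, 0.5]
    @assert jtjp(eachrow(J), p) ≈ J' * (J * p)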
In a quasi-Newton method, such as that due to Davidon, Fletcher and Powell or Broyden–Fletcher–Goldfarb–Shanno (the BFGS method), an estimate of the full Hessian $\frac{\partial^2 S}{\partial\beta_j\,\partial\beta_k}$ is built up numerically using first derivatives $\frac{\partial r_i}{\partial\beta_j}$ only, so that after $n$ refinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. apply only to nonlinear least-squares problems.
Another method for solving minimization problems using only first derivatives isgradient descent. However, this method does not take into account the second derivatives even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.
The following implementation in Julia provides one method which uses a provided Jacobian and another computing with automatic differentiation.
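A minimal sketch of the provided-Jacobian variant is given below (the function name gauss_newton, the toy exponential model, and its data points are illustrative assumptions rather than the article's own listing):

    using LinearAlgebra

    # One Gauss–Newton run: r(β) returns the residual vector, jac(β) its Jacobian J_r.
    function gauss_newton(r, jac, β; iters = 20, tol = 1e-10)
        for _ in 1:iters
            J = jac(β)
            Δ = -((J' * J) \ (J' * r(β)))   # solve the normal equations for the step
            β += Δ
            norm(Δ) < tol && break
        end
        return β
    end

    # Toy example: fit y ≈ β₁ exp(β₂ x) to three points (illustrative data).
    xs, ys = [0.0, 1.0, 2.0], [1.0, 2.7, 7.4]
    r(β)   = β[1] .* exp.(β[2] .* xs) .- ys
    jac(β) = hcat(exp.(β[2] .* xs), β[1] .* xs .* exp.(β[2] .* xs))
    βfit = gauss_newton(r, jac, [1.0, 0.5])   # should converge near (1.0, 1.0)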
https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm
In mathematics, an ellipse is a plane curve surrounding two focal points, such that for all points on the curve, the sum of the two distances to the focal points is a constant. It generalizes a circle, which is the special type of ellipse in which the two focal points are the same. The elongation of an ellipse is measured by its eccentricity $e$, a number ranging from $e = 0$ (the limiting case of a circle) to $e = 1$ (the limiting case of infinite elongation, no longer an ellipse but a parabola).
An ellipse has a simple algebraic solution for its area, but for its perimeter (also known as circumference), integration is required to obtain an exact solution.
The largest and smallest diameters of an ellipse, also known as its width and height, are typically denoted $2a$ and $2b$. An ellipse has four extreme points: two vertices at the endpoints of the major axis and two co-vertices at the endpoints of the minor axis.
Analytically, the equation of a standard ellipse centered at the origin is $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1.$$ Assuming $a \geq b$, the foci are $(\pm c, 0)$ where $c = \sqrt{a^2 - b^2}$, called the linear eccentricity, is the distance from the center to a focus. The standard parametric equation is $$(x, y) = (a\cos t,\, b\sin t) \quad \text{for} \quad 0 \leq t \leq 2\pi.$$
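As a quick numerical illustration of the focal-sum definition (a small sketch; the semi-axes 5 and 3 are arbitrary example values), points generated from the parametric equation all have the same sum of distances to the two foci:

    a, b = 5.0, 3.0
    c = sqrt(a^2 - b^2)                 # linear eccentricity
    F1, F2 = (-c, 0.0), (c, 0.0)        # the two foci

    for t in range(0, 2π; length = 9)
        x, y = a * cos(t), b * sin(t)   # point on the ellipse
        d = hypot(x - F1[1], y - F1[2]) + hypot(x - F2[1], y - F2[2])
        @assert isapprox(d, 2a; atol = 1e-12)   # the focal sum is the constant 2a
    end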
Ellipses are the closed type of conic section: a plane curve tracing the intersection of a cone with a plane (see figure). Ellipses have many similarities with the other two forms of conic sections, parabolas and hyperbolas, both of which are open and unbounded. An angled cross section of a right circular cylinder is also an ellipse.
An ellipse may also be defined in terms of one focal point and a line outside the ellipse called the directrix: for all points on the ellipse, the ratio between the distance to the focus and the distance to the directrix is a constant, called the eccentricity: $$e = \frac{c}{a} = \sqrt{1 - \frac{b^2}{a^2}}.$$
Ellipses are common in physics, astronomy and engineering. For example, the orbit of each planet in the Solar System is approximately an ellipse with the Sun at one focus point (more precisely, the focus is the barycenter of the Sun–planet pair). The same is true for moons orbiting planets and all other systems of two astronomical bodies. The shapes of planets and stars are often well described by ellipsoids. A circle viewed from a side angle looks like an ellipse: that is, the ellipse is the image of a circle under parallel or perspective projection. The ellipse is also the simplest Lissajous figure formed when the horizontal and vertical motions are sinusoids with the same frequency; a similar effect leads to elliptical polarization of light in optics.
The name, ἔλλειψις (élleipsis, "omission"), was given by Apollonius of Perga in his Conics.
An ellipse can be defined geometrically as a set orlocus of pointsin the Euclidean plane:
The midpointC{\displaystyle C}of the line segment joining the foci is called thecenterof the ellipse. The line through the foci is called themajor axis, and the line perpendicular to it through the center is theminor axis.The major axis intersects the ellipse at twoverticesV1,V2{\displaystyle V_{1},V_{2}}, which have distancea{\displaystyle a}to the center. The distancec{\displaystyle c}of the foci to the center is called thefocal distanceor linear eccentricity. The quotiente=ca{\displaystyle e={\tfrac {c}{a}}}is defined as theeccentricity.
The caseF1=F2{\displaystyle F_{1}=F_{2}}yields a circle and is included as a special type of ellipse.
The equation|PF2|+|PF1|=2a{\displaystyle \left|PF_{2}\right|+\left|PF_{1}\right|=2a}can be viewed in a different way (see figure):
c2{\displaystyle c_{2}}is called thecircular directrix(related to focusF2{\displaystyle F_{2}})of the ellipse.[1][2]This property should not be confused with the definition of an ellipse using a directrix line below.
UsingDandelin spheres, one can prove that any section of a cone with a plane is an ellipse, assuming the plane does not contain the apex and has slope less than that of the lines on the cone.
The standard form of an ellipse in Cartesian coordinates assumes that the origin is the center of the ellipse, thex-axis is the major axis, and:
For an arbitrary point(x,y){\displaystyle (x,y)}the distance to the focus(c,0){\displaystyle (c,0)}is(x−c)2+y2{\textstyle {\sqrt {(x-c)^{2}+y^{2}}}}and to the other focus(x+c)2+y2{\textstyle {\sqrt {(x+c)^{2}+y^{2}}}}. Hence the point(x,y){\displaystyle (x,\,y)}is on the ellipse whenever:(x−c)2+y2+(x+c)2+y2=2a.{\displaystyle {\sqrt {(x-c)^{2}+y^{2}}}+{\sqrt {(x+c)^{2}+y^{2}}}=2a\ .}
Removing theradicalsby suitable squarings and usingb2=a2−c2{\displaystyle b^{2}=a^{2}-c^{2}}(see diagram) produces the standard equation of the ellipse:[3]x2a2+y2b2=1,{\displaystyle {\frac {x^{2}}{a^{2}}}+{\frac {y^{2}}{b^{2}}}=1,}or, solved fory:y=±baa2−x2=±(a2−x2)(1−e2).{\displaystyle y=\pm {\frac {b}{a}}{\sqrt {a^{2}-x^{2}}}=\pm {\sqrt {\left(a^{2}-x^{2}\right)\left(1-e^{2}\right)}}.}
The width and height parametersa,b{\displaystyle a,\;b}are called thesemi-major and semi-minor axes. The top and bottom pointsV3=(0,b),V4=(0,−b){\displaystyle V_{3}=(0,\,b),\;V_{4}=(0,\,-b)}are theco-vertices. The distances from a point(x,y){\displaystyle (x,\,y)}on the ellipse to the left and right foci area+ex{\displaystyle a+ex}anda−ex{\displaystyle a-ex}.
It follows from the equation that the ellipse issymmetricwith respect to the coordinate axes and hence with respect to the origin.
Throughout this article, thesemi-major and semi-minor axesare denoteda{\displaystyle a}andb{\displaystyle b}, respectively, i.e.a≥b>0.{\displaystyle a\geq b>0\ .}
In principle, the canonical ellipse equationx2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}may havea<b{\displaystyle a<b}(and hence the ellipse would be taller than it is wide). This form can be converted to the standard form by transposing the variable namesx{\displaystyle x}andy{\displaystyle y}and the parameter namesa{\displaystyle a}andb.{\displaystyle b.}
This is the distance from the center to a focus: $c = \sqrt{a^2 - b^2}$.
The eccentricity can be expressed as $$e = \frac{c}{a} = \sqrt{1 - \left(\frac{b}{a}\right)^2},$$
assuming $a > b$. An ellipse with equal axes ($a = b$) has zero eccentricity, and is a circle.
The length of the chord through one focus, perpendicular to the major axis, is called the latus rectum. One half of it is the semi-latus rectum $\ell$. A calculation shows:[4] $$\ell = \frac{b^2}{a} = a\left(1 - e^2\right).$$
The semi-latus rectum $\ell$ is equal to the radius of curvature at the vertices (see section curvature).
An arbitrary lineg{\displaystyle g}intersects an ellipse at 0, 1, or 2 points, respectively called anexterior line,tangentandsecant. Through any point of an ellipse there is a unique tangent. The tangent at a point(x1,y1){\displaystyle (x_{1},\,y_{1})}of the ellipsex2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}has the coordinate equation:x1a2x+y1b2y=1.{\displaystyle {\frac {x_{1}}{a^{2}}}x+{\frac {y_{1}}{b^{2}}}y=1.}
A vectorparametric equationof the tangent is:x→=(x1y1)+s(−y1a2x1b2),s∈R.{\displaystyle {\vec {x}}={\begin{pmatrix}x_{1}\\y_{1}\end{pmatrix}}+s\left({\begin{array}{r}-y_{1}a^{2}\\x_{1}b^{2}\end{array}}\right),\quad s\in \mathbb {R} .}
Proof:Let(x1,y1){\displaystyle (x_{1},\,y_{1})}be a point on an ellipse andx→=(x1y1)+s(uv){\textstyle {\vec {x}}={\begin{pmatrix}x_{1}\\y_{1}\end{pmatrix}}+s{\begin{pmatrix}u\\v\end{pmatrix}}}be the equation of any lineg{\displaystyle g}containing(x1,y1){\displaystyle (x_{1},\,y_{1})}. Inserting the line's equation into the ellipse equation and respectingx12a2+y12b2=1{\textstyle {\frac {x_{1}^{2}}{a^{2}}}+{\frac {y_{1}^{2}}{b^{2}}}=1}yields:(x1+su)2a2+(y1+sv)2b2=1⟹2s(x1ua2+y1vb2)+s2(u2a2+v2b2)=0.{\displaystyle {\frac {\left(x_{1}+su\right)^{2}}{a^{2}}}+{\frac {\left(y_{1}+sv\right)^{2}}{b^{2}}}=1\ \quad \Longrightarrow \quad 2s\left({\frac {x_{1}u}{a^{2}}}+{\frac {y_{1}v}{b^{2}}}\right)+s^{2}\left({\frac {u^{2}}{a^{2}}}+{\frac {v^{2}}{b^{2}}}\right)=0\ .}There are then cases:
Using (1) one finds that(−y1a2x1b2){\displaystyle {\begin{pmatrix}-y_{1}a^{2}&x_{1}b^{2}\end{pmatrix}}}is a tangent vector at point(x1,y1){\displaystyle (x_{1},\,y_{1})}, which proves the vector equation.
If(x1,y1){\displaystyle (x_{1},y_{1})}and(u,v){\displaystyle (u,v)}are two points of the ellipse such thatx1ua2+y1vb2=0{\textstyle {\frac {x_{1}u}{a^{2}}}+{\tfrac {y_{1}v}{b^{2}}}=0}, then the points lie on twoconjugate diameters(seebelow). (Ifa=b{\displaystyle a=b}, the ellipse is a circle and "conjugate" means "orthogonal".)
If the standard ellipse is shifted to have center(x∘,y∘){\displaystyle \left(x_{\circ },\,y_{\circ }\right)}, its equation is(x−x∘)2a2+(y−y∘)2b2=1.{\displaystyle {\frac {\left(x-x_{\circ }\right)^{2}}{a^{2}}}+{\frac {\left(y-y_{\circ }\right)^{2}}{b^{2}}}=1\ .}
The axes are still parallel to thex- andy-axes.
Inanalytic geometry, the ellipse is defined as aquadric: the set of points(x,y){\displaystyle (x,\,y)}of theCartesian planethat, in non-degenerate cases, satisfy theimplicitequation[5][6]Ax2+Bxy+Cy2+Dx+Ey+F=0{\displaystyle Ax^{2}+Bxy+Cy^{2}+Dx+Ey+F=0}providedB2−4AC<0.{\displaystyle B^{2}-4AC<0.}
To distinguish thedegenerate casesfrom the non-degenerate case, let∆be thedeterminantΔ=|A12B12D12BC12E12D12EF|=ACF+14BDE−14(AE2+CD2+FB2).{\displaystyle \Delta ={\begin{vmatrix}A&{\frac {1}{2}}B&{\frac {1}{2}}D\\{\frac {1}{2}}B&C&{\frac {1}{2}}E\\{\frac {1}{2}}D&{\frac {1}{2}}E&F\end{vmatrix}}=ACF+{\tfrac {1}{4}}BDE-{\tfrac {1}{4}}(AE^{2}+CD^{2}+FB^{2}).}
Then the ellipse is a non-degenerate real ellipse if and only if CΔ < 0. If CΔ > 0, we have an imaginary ellipse, and if Δ = 0, we have a point ellipse.[7]: 63
The general equation's coefficients can be obtained from known semi-major axisa{\displaystyle a}, semi-minor axisb{\displaystyle b}, center coordinates(x∘,y∘){\displaystyle \left(x_{\circ },\,y_{\circ }\right)}, and rotation angleθ{\displaystyle \theta }(the angle from the positive horizontal axis to the ellipse's major axis) using the formulae:A=a2sin2θ+b2cos2θB=2(b2−a2)sinθcosθC=a2cos2θ+b2sin2θD=−2Ax∘−By∘E=−Bx∘−2Cy∘F=Ax∘2+Bx∘y∘+Cy∘2−a2b2.{\displaystyle {\begin{aligned}A&=a^{2}\sin ^{2}\theta +b^{2}\cos ^{2}\theta &B&=2\left(b^{2}-a^{2}\right)\sin \theta \cos \theta \\[1ex]C&=a^{2}\cos ^{2}\theta +b^{2}\sin ^{2}\theta &D&=-2Ax_{\circ }-By_{\circ }\\[1ex]E&=-Bx_{\circ }-2Cy_{\circ }&F&=Ax_{\circ }^{2}+Bx_{\circ }y_{\circ }+Cy_{\circ }^{2}-a^{2}b^{2}.\end{aligned}}}
These expressions can be derived from the canonical equationX2a2+Y2b2=1{\displaystyle {\frac {X^{2}}{a^{2}}}+{\frac {Y^{2}}{b^{2}}}=1}by a Euclidean transformation of the coordinates(X,Y){\displaystyle (X,\,Y)}:X=(x−x∘)cosθ+(y−y∘)sinθ,Y=−(x−x∘)sinθ+(y−y∘)cosθ.{\displaystyle {\begin{aligned}X&=\left(x-x_{\circ }\right)\cos \theta +\left(y-y_{\circ }\right)\sin \theta ,\\Y&=-\left(x-x_{\circ }\right)\sin \theta +\left(y-y_{\circ }\right)\cos \theta .\end{aligned}}}
Conversely, the canonical form parameters can be obtained from the general-form coefficients by the equations:[3]
a,b=−2(AE2+CD2−BDE+(B2−4AC)F)((A+C)±(A−C)2+B2)B2−4AC,x∘=2CD−BEB2−4AC,y∘=2AE−BDB2−4AC,θ=12atan2(−B,C−A),{\displaystyle {\begin{aligned}a,b&={\frac {-{\sqrt {2{\big (}AE^{2}+CD^{2}-BDE+(B^{2}-4AC)F{\big )}{\big (}(A+C)\pm {\sqrt {(A-C)^{2}+B^{2}}}{\big )}}}}{B^{2}-4AC}},\\x_{\circ }&={\frac {2CD-BE}{B^{2}-4AC}},\\[5mu]y_{\circ }&={\frac {2AE-BD}{B^{2}-4AC}},\\[5mu]\theta &={\tfrac {1}{2}}\operatorname {atan2} (-B,\,C-A),\end{aligned}}}
where atan2 is the 2-argument arctangent function.
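The two conversions invert each other, which can be checked numerically; in the sketch below the canonical parameters (arbitrary example values) are turned into general-form coefficients and then recovered with the expressions above.

    # canonical parameters → general-form coefficients (formulas above)
    a, b, x0, y0, θ = 2.0, 1.0, 0.5, -1.0, π / 6
    A = a^2 * sin(θ)^2 + b^2 * cos(θ)^2
    B = 2 * (b^2 - a^2) * sin(θ) * cos(θ)
    C = a^2 * cos(θ)^2 + b^2 * sin(θ)^2
    D = -2A * x0 - B * y0
    E = -B * x0 - 2C * y0
    F = A * x0^2 + B * x0 * y0 + C * y0^2 - a^2 * b^2

    # general-form coefficients → canonical parameters (formulas above)
    disc = B^2 - 4A * C
    num  = 2 * (A * E^2 + C * D^2 - B * D * E + disc * F)
    a_rec = -sqrt(num * ((A + C) + sqrt((A - C)^2 + B^2))) / disc
    b_rec = -sqrt(num * ((A + C) - sqrt((A - C)^2 + B^2))) / disc
    x0_rec = (2C * D - B * E) / disc
    y0_rec = (2A * E - B * D) / disc
    θ_rec  = atan(-B, C - A) / 2      # Julia's two-argument atan plays the role of atan2

    @assert all(isapprox.((a_rec, b_rec, x0_rec, y0_rec, θ_rec), (a, b, x0, y0, θ)))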
Usingtrigonometric functions, a parametric representation of the standard ellipsex2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}is:(x,y)=(acost,bsint),0≤t<2π.{\displaystyle (x,\,y)=(a\cos t,\,b\sin t),\ 0\leq t<2\pi \,.}
The parametert(called theeccentric anomalyin astronomy) is not the angle of(x(t),y(t)){\displaystyle (x(t),y(t))}with thex-axis, but has a geometric meaning due toPhilippe de La Hire(see§ Drawing ellipsesbelow).[8]
With the substitutionu=tan(t2){\textstyle u=\tan \left({\frac {t}{2}}\right)}and trigonometric formulae one obtainscost=1−u21+u2,sint=2u1+u2{\displaystyle \cos t={\frac {1-u^{2}}{1+u^{2}}}\ ,\quad \sin t={\frac {2u}{1+u^{2}}}}
and therationalparametric equation of an ellipse{x(u)=a1−u21+u2y(u)=b2u1+u2−∞<u<∞{\displaystyle {\begin{cases}x(u)=a\,{\dfrac {1-u^{2}}{1+u^{2}}}\\[10mu]y(u)=b\,{\dfrac {2u}{1+u^{2}}}\\[10mu]-\infty <u<\infty \end{cases}}}
which covers any point of the ellipsex2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}except the left vertex(−a,0){\displaystyle (-a,\,0)}.
Foru∈[0,1],{\displaystyle u\in [0,\,1],}this formula represents the right upper quarter of the ellipse moving counter-clockwise with increasingu.{\displaystyle u.}The left vertex is the limitlimu→±∞(x(u),y(u))=(−a,0).{\textstyle \lim _{u\to \pm \infty }(x(u),\,y(u))=(-a,\,0)\;.}
Alternately, if the parameter[u:v]{\displaystyle [u:v]}is considered to be a point on thereal projective lineP(R){\textstyle \mathbf {P} (\mathbf {R} )}, then the corresponding rational parametrization is[u:v]↦(av2−u2v2+u2,b2uvv2+u2).{\displaystyle [u:v]\mapsto \left(a{\frac {v^{2}-u^{2}}{v^{2}+u^{2}}},b{\frac {2uv}{v^{2}+u^{2}}}\right).}
Then[1:0]↦(−a,0).{\textstyle [1:0]\mapsto (-a,\,0).}
Rational representations of conic sections are commonly used incomputer-aided design(seeBézier curve).
A parametric representation, which uses the slopem{\displaystyle m}of the tangent at a point of the ellipse
can be obtained from the derivative of the standard representationx→(t)=(acost,bsint)T{\displaystyle {\vec {x}}(t)=(a\cos t,\,b\sin t)^{\mathsf {T}}}:x→′(t)=(−asint,bcost)T→m=−bacott→cott=−mab.{\displaystyle {\vec {x}}'(t)=(-a\sin t,\,b\cos t)^{\mathsf {T}}\quad \rightarrow \quad m=-{\frac {b}{a}}\cot t\quad \rightarrow \quad \cot t=-{\frac {ma}{b}}.}
With help oftrigonometric formulaeone obtains:cost=cott±1+cot2t=−ma±m2a2+b2,sint=1±1+cot2t=b±m2a2+b2.{\displaystyle \cos t={\frac {\cot t}{\pm {\sqrt {1+\cot ^{2}t}}}}={\frac {-ma}{\pm {\sqrt {m^{2}a^{2}+b^{2}}}}}\ ,\quad \quad \sin t={\frac {1}{\pm {\sqrt {1+\cot ^{2}t}}}}={\frac {b}{\pm {\sqrt {m^{2}a^{2}+b^{2}}}}}.}
Replacingcost{\displaystyle \cos t}andsint{\displaystyle \sin t}of the standard representation yields:c→±(m)=(−ma2±m2a2+b2,b2±m2a2+b2),m∈R.{\displaystyle {\vec {c}}_{\pm }(m)=\left(-{\frac {ma^{2}}{\pm {\sqrt {m^{2}a^{2}+b^{2}}}}},\;{\frac {b^{2}}{\pm {\sqrt {m^{2}a^{2}+b^{2}}}}}\right),\,m\in \mathbb {R} .}
Herem{\displaystyle m}is the slope of the tangent at the corresponding ellipse point,c→+{\displaystyle {\vec {c}}_{+}}is the upper andc→−{\displaystyle {\vec {c}}_{-}}the lower half of the ellipse. The vertices(±a,0){\displaystyle (\pm a,\,0)}, having vertical tangents, are not covered by the representation.
The equation of the tangent at pointc→±(m){\displaystyle {\vec {c}}_{\pm }(m)}has the formy=mx+n{\displaystyle y=mx+n}. The still unknownn{\displaystyle n}can be determined by inserting the coordinates of the corresponding ellipse pointc→±(m){\displaystyle {\vec {c}}_{\pm }(m)}:y=mx±m2a2+b2.{\displaystyle y=mx\pm {\sqrt {m^{2}a^{2}+b^{2}}}\,.}
This description of the tangents of an ellipse is an essential tool for the determination of theorthopticof an ellipse. The orthoptic article contains another proof, without differential calculus and trigonometric formulae.
Another definition of an ellipse usesaffine transformations:
An affine transformation of the Euclidean plane has the formx→↦f→0+Ax→{\displaystyle {\vec {x}}\mapsto {\vec {f}}\!_{0}+A{\vec {x}}}, whereA{\displaystyle A}is a regularmatrix(with non-zerodeterminant) andf→0{\displaystyle {\vec {f}}\!_{0}}is an arbitrary vector. Iff→1,f→2{\displaystyle {\vec {f}}\!_{1},{\vec {f}}\!_{2}}are the column vectors of the matrixA{\displaystyle A}, the unit circle(cos(t),sin(t)){\displaystyle (\cos(t),\sin(t))},0≤t≤2π{\displaystyle 0\leq t\leq 2\pi }, is mapped onto the ellipse:x→=p→(t)=f→0+f→1cost+f→2sint.{\displaystyle {\vec {x}}={\vec {p}}(t)={\vec {f}}\!_{0}+{\vec {f}}\!_{1}\cos t+{\vec {f}}\!_{2}\sin t\,.}
Heref→0{\displaystyle {\vec {f}}\!_{0}}is the center andf→1,f→2{\displaystyle {\vec {f}}\!_{1},\;{\vec {f}}\!_{2}}are the directions of twoconjugate diameters, in general not perpendicular.
The four vertices of the ellipse arep→(t0),p→(t0±π2),p→(t0+π){\displaystyle {\vec {p}}(t_{0}),\;{\vec {p}}\left(t_{0}\pm {\tfrac {\pi }{2}}\right),\;{\vec {p}}\left(t_{0}+\pi \right)}, for a parametert=t0{\displaystyle t=t_{0}}defined by:cot(2t0)=f→12−f→222f→1⋅f→2.{\displaystyle \cot(2t_{0})={\frac {{\vec {f}}\!_{1}^{\,2}-{\vec {f}}\!_{2}^{\,2}}{2{\vec {f}}\!_{1}\cdot {\vec {f}}\!_{2}}}.}
(Iff→1⋅f→2=0{\displaystyle {\vec {f}}\!_{1}\cdot {\vec {f}}\!_{2}=0}, thent0=0{\displaystyle t_{0}=0}.) This is derived as follows. The tangent vector at pointp→(t){\displaystyle {\vec {p}}(t)}is:p→′(t)=−f→1sint+f→2cost.{\displaystyle {\vec {p}}\,'(t)=-{\vec {f}}\!_{1}\sin t+{\vec {f}}\!_{2}\cos t\ .}
At a vertex parametert=t0{\displaystyle t=t_{0}}, the tangent is perpendicular to the major/minor axes, so:0=p→′(t)⋅(p→(t)−f→0)=(−f→1sint+f→2cost)⋅(f→1cost+f→2sint).{\displaystyle 0={\vec {p}}'(t)\cdot \left({\vec {p}}(t)-{\vec {f}}\!_{0}\right)=\left(-{\vec {f}}\!_{1}\sin t+{\vec {f}}\!_{2}\cos t\right)\cdot \left({\vec {f}}\!_{1}\cos t+{\vec {f}}\!_{2}\sin t\right).}
Expanding and applying the identitiescos2t−sin2t=cos2t,2sintcost=sin2t{\displaystyle \;\cos ^{2}t-\sin ^{2}t=\cos 2t,\ \ 2\sin t\cos t=\sin 2t\;}gives the equation fort=t0.{\displaystyle t=t_{0}\;.}
From Apollonios theorem (see below) one obtains:The area of an ellipsex→=f→0+f→1cost+f→2sint{\displaystyle \;{\vec {x}}={\vec {f}}_{0}+{\vec {f}}_{1}\cos t+{\vec {f}}_{2}\sin t\;}isA=π|det(f→1,f→2)|.{\displaystyle A=\pi \left|\det({\vec {f}}_{1},{\vec {f}}_{2})\right|.}
With the abbreviationsM=f→12+f→22,N=|det(f→1,f→2)|{\displaystyle \;M={\vec {f}}_{1}^{2}+{\vec {f}}_{2}^{2},\ N=\left|\det({\vec {f}}_{1},{\vec {f}}_{2})\right|}the statements of Apollonios's theorem can be written as:a2+b2=M,ab=N.{\displaystyle a^{2}+b^{2}=M,\quad ab=N\ .}Solving this nonlinear system fora,b{\displaystyle a,b}yields the semiaxes:a=12(M+2N+M−2N)b=12(M+2N−M−2N).{\displaystyle {\begin{aligned}a&={\frac {1}{2}}({\sqrt {M+2N}}+{\sqrt {M-2N}})\\[1ex]b&={\frac {1}{2}}({\sqrt {M+2N}}-{\sqrt {M-2N}})\,.\end{aligned}}}
Solving the parametric representation forcost,sint{\displaystyle \;\cos t,\sin t\;}byCramer's ruleand usingcos2t+sin2t−1=0{\displaystyle \;\cos ^{2}t+\sin ^{2}t-1=0\;}, one obtains the implicit representationdet(x→−f→0,f→2)2+det(f→1,x→−f→0)2−det(f→1,f→2)2=0.{\displaystyle \det {\left({\vec {x}}\!-\!{\vec {f}}\!_{0},{\vec {f}}\!_{2}\right)^{2}}+\det {\left({\vec {f}}\!_{1},{\vec {x}}\!-\!{\vec {f}}\!_{0}\right)^{2}}-\det {\left({\vec {f}}\!_{1},{\vec {f}}\!_{2}\right)^{2}}=0.}
Conversely: if the equation of an ellipse centered at the origin is given, then the two vectors $$\vec{f}_1 = \binom{e}{0}, \qquad \vec{f}_2 = \frac{e}{\sqrt{d^2 - c^2}}\binom{-c}{1}$$ point to two conjugate points and the tools developed above are applicable.
Example: For the ellipse with equation $x^2 + 2xy + 3y^2 - 1 = 0$ the vectors are $$\vec{f}_1 = \binom{1}{0}, \qquad \vec{f}_2 = \frac{1}{\sqrt{2}}\binom{-1}{1}.$$
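For this example the Apollonios relations from above recover the semi-axes numerically (a brief sketch; the parameter value t = 0.7 is an arbitrary test point):

    f1 = [1.0, 0.0]
    f2 = [-1.0, 1.0] ./ sqrt(2)

    M = sum(abs2, f1) + sum(abs2, f2)        # = a² + b²  (Apollonios)
    N = abs(f1[1] * f2[2] - f1[2] * f2[1])   # = |det(f1, f2)| = a·b

    a = (sqrt(M + 2N) + sqrt(M - 2N)) / 2    # ≈ 1.3066
    b = (sqrt(M + 2N) - sqrt(M - 2N)) / 2    # ≈ 0.5412

    # every point of the parametrization lies on x² + 2xy + 3y² = 1
    t = 0.7
    x, y = f1 .* cos(t) .+ f2 .* sin(t)
    @assert isapprox(x^2 + 2x * y + 3y^2, 1.0; atol = 1e-12)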
Forf→0=(00),f→1=a(cosθsinθ),f→2=b(−sinθcosθ){\displaystyle {\vec {f}}_{0}={0 \choose 0},\;{\vec {f}}_{1}=a{\cos \theta \choose \sin \theta },\;{\vec {f}}_{2}=b{-\sin \theta \choose \;\cos \theta }}one obtains a parametric representation of the standard ellipserotatedby angleθ{\displaystyle \theta }:x=xθ(t)=acosθcost−bsinθsint,y=yθ(t)=asinθcost+bcosθsint.{\displaystyle {\begin{aligned}x&=x_{\theta }(t)=a\cos \theta \cos t-b\sin \theta \sin t\,,\\y&=y_{\theta }(t)=a\sin \theta \cos t+b\cos \theta \sin t\,.\end{aligned}}}
The definition of an ellipse in this section gives a parametric representation of an arbitrary ellipse, even in space, if one allowsf→0,f→1,f→2{\displaystyle {\vec {f}}\!_{0},{\vec {f}}\!_{1},{\vec {f}}\!_{2}}to be vectors in space.
In polar coordinates, with the origin at the center of the ellipse and with the angular coordinate $\theta$ measured from the major axis, the ellipse's equation is[7]: 75 $$r(\theta) = \frac{ab}{\sqrt{(b\cos\theta)^2 + (a\sin\theta)^2}} = \frac{b}{\sqrt{1 - (e\cos\theta)^2}},$$ where $e$ is the eccentricity (not Euler's number).
If instead we use polar coordinates with the origin at one focus, with the angular coordinate $\theta = 0$ still measured from the major axis, the ellipse's equation is $$r(\theta) = \frac{a(1 - e^2)}{1 \pm e\cos\theta},$$
where the sign in the denominator is negative if the reference direction $\theta = 0$ points towards the center (as illustrated on the right), and positive if that direction points away from the center.
The angle $\theta$ is called the true anomaly of the point. The numerator $\ell = a(1 - e^2)$ is the semi-latus rectum.
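A short numeric check of the focal polar form (illustrative values a = 1 and e = 0.5; the plus sign in the denominator, i.e. the reference direction pointing away from the center, is used):

    a, e = 1.0, 0.5
    ℓ = a * (1 - e^2)                # semi-latus rectum
    r(θ) = ℓ / (1 + e * cos(θ))      # distance from the focus

    @assert r(0.0)   ≈ a * (1 - e)   # closest point to the focus (periapsis)
    @assert r(π)     ≈ a * (1 + e)   # farthest point from the focus (apoapsis)
    @assert r(π / 2) ≈ ℓ             # at θ = π/2 the radius equals the semi-latus rectum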
Each of the two lines parallel to the minor axis, and at a distance ofd=a2c=ae{\textstyle d={\frac {a^{2}}{c}}={\frac {a}{e}}}from it, is called adirectrixof the ellipse (see diagram).
The proof for the pairF1,l1{\displaystyle F_{1},l_{1}}follows from the fact that|PF1|2=(x−c)2+y2,|Pl1|2=(x−a2c)2{\textstyle \left|PF_{1}\right|^{2}=(x-c)^{2}+y^{2},\ \left|Pl_{1}\right|^{2}=\left(x-{\tfrac {a^{2}}{c}}\right)^{2}}andy2=b2−b2a2x2{\displaystyle y^{2}=b^{2}-{\tfrac {b^{2}}{a^{2}}}x^{2}}satisfy the equation|PF1|2−c2a2|Pl1|2=0.{\displaystyle \left|PF_{1}\right|^{2}-{\frac {c^{2}}{a^{2}}}\left|Pl_{1}\right|^{2}=0\,.}
The second case is proven analogously.
The converse is also true and can be used to define an ellipse (in a manner similar to the definition of a parabola):
The extension toe=0{\displaystyle e=0}, which is the eccentricity of a circle, is not allowed in this context in the Euclidean plane. However, one may consider the directrix of a circle to be theline at infinityin theprojective plane.
(The choicee=1{\displaystyle e=1}yields a parabola, and ife>1{\displaystyle e>1}, a hyperbola.)
LetF=(f,0),e>0{\displaystyle F=(f,\,0),\ e>0}, and assume(0,0){\displaystyle (0,\,0)}is a point on the curve. The directrixl{\displaystyle l}has equationx=−fe{\displaystyle x=-{\tfrac {f}{e}}}. WithP=(x,y){\displaystyle P=(x,\,y)}, the relation|PF|2=e2|Pl|2{\displaystyle |PF|^{2}=e^{2}|Pl|^{2}}produces the equations
The substitutionp=f(1+e){\displaystyle p=f(1+e)}yieldsx2(e2−1)+2px−y2=0.{\displaystyle x^{2}\left(e^{2}-1\right)+2px-y^{2}=0.}
This is the equation of anellipse(e<1{\displaystyle e<1}), or aparabola(e=1{\displaystyle e=1}), or ahyperbola(e>1{\displaystyle e>1}). All of these non-degenerate conics have, in common, the origin as a vertex (see diagram).
Ife<1{\displaystyle e<1}, introduce new parametersa,b{\displaystyle a,\,b}so that1−e2=b2a2,andp=b2a{\displaystyle 1-e^{2}={\tfrac {b^{2}}{a^{2}}},{\text{ and }}\ p={\tfrac {b^{2}}{a}}}, and then the equation above becomes(x−a)2a2+y2b2=1,{\displaystyle {\frac {(x-a)^{2}}{a^{2}}}+{\frac {y^{2}}{b^{2}}}=1\,,}
which is the equation of an ellipse with center(a,0){\displaystyle (a,\,0)}, thex-axis as major axis, and
the major/minor semi axisa,b{\displaystyle a,\,b}.
Because ofc⋅a2c=a2{\displaystyle c\cdot {\tfrac {a^{2}}{c}}=a^{2}}pointL1{\displaystyle L_{1}}of directrixl1{\displaystyle l_{1}}(see diagram) and focusF1{\displaystyle F_{1}}are inverse with respect to thecircle inversionat circlex2+y2=a2{\displaystyle x^{2}+y^{2}=a^{2}}(in diagram green). HenceL1{\displaystyle L_{1}}can be constructed as shown in the diagram. Directrixl1{\displaystyle l_{1}}is the perpendicular to the main axis at pointL1{\displaystyle L_{1}}.
If the focus isF=(f1,f2){\displaystyle F=\left(f_{1},\,f_{2}\right)}and the directrixux+vy+w=0{\displaystyle ux+vy+w=0}, one obtains the equation(x−f1)2+(y−f2)2=e2(ux+vy+w)2u2+v2.{\displaystyle \left(x-f_{1}\right)^{2}+\left(y-f_{2}\right)^{2}=e^{2}{\frac {\left(ux+vy+w\right)^{2}}{u^{2}+v^{2}}}\ .}
(The right side of the equation uses theHesse normal formof a line to calculate the distance|Pl|{\displaystyle |Pl|}.)
An ellipse possesses the following property:
Because the tangent line is perpendicular to the normal, an equivalent statement is that the tangent is the external angle bisector of the lines to the foci (see diagram).
LetL{\displaystyle L}be the point on the linePF2¯{\displaystyle {\overline {PF_{2}}}}with distance2a{\displaystyle 2a}to the focusF2{\displaystyle F_{2}}, wherea{\displaystyle a}is the semi-major axis of the ellipse. Let linew{\displaystyle w}be the external angle bisector of the linesPF1¯{\displaystyle {\overline {PF_{1}}}}andPF2¯.{\displaystyle {\overline {PF_{2}}}.}Take any other pointQ{\displaystyle Q}onw.{\displaystyle w.}By thetriangle inequalityand theangle bisector theorem,2a=|LF2|<{\displaystyle 2a=\left|LF_{2}\right|<{}}|QF2|+|QL|={\displaystyle \left|QF_{2}\right|+\left|QL\right|={}}|QF2|+|QF1|,{\displaystyle \left|QF_{2}\right|+\left|QF_{1}\right|,}soQ{\displaystyle Q}must be outside the ellipse. As this is true for every choice ofQ,{\displaystyle Q,}w{\displaystyle w}only intersects the ellipse at the single pointP{\displaystyle P}so must be the tangent line.
The rays from one focus are reflected by the ellipse to the second focus. This property has optical and acoustic applications similar to the reflective property of a parabola (seewhispering gallery).
Additionally, because of the focus-to-focus reflection property of ellipses, if the rays are allowed to continue propagating, reflected rays will eventually align closely with the major axis.
A circle has the following property:
An affine transformation preserves parallelism and midpoints of line segments, so this property is true for any ellipse. (Note that the parallel chords and the diameter are no longer orthogonal.)
Two diametersd1,d2{\displaystyle d_{1},\,d_{2}}of an ellipse areconjugateif the midpoints of chords parallel tod1{\displaystyle d_{1}}lie ond2.{\displaystyle d_{2}\ .}
From the diagram one finds:
Conjugate diameters in an ellipse generalize orthogonal diameters in a circle.
In the parametric equation for a general ellipse given above,x→=p→(t)=f→0+f→1cost+f→2sint,{\displaystyle {\vec {x}}={\vec {p}}(t)={\vec {f}}\!_{0}+{\vec {f}}\!_{1}\cos t+{\vec {f}}\!_{2}\sin t,}
any pair of pointsp→(t),p→(t+π){\displaystyle {\vec {p}}(t),\ {\vec {p}}(t+\pi )}belong to a diameter, and the pairp→(t+π2),p→(t−π2){\displaystyle {\vec {p}}\left(t+{\tfrac {\pi }{2}}\right),\ {\vec {p}}\left(t-{\tfrac {\pi }{2}}\right)}belong to its conjugate diameter.
For the common parametric representation(acost,bsint){\displaystyle (a\cos t,b\sin t)}of the ellipse with equationx2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}one gets: The points
In case of a circle the last equation collapses tox1x2+y1y2=0.{\displaystyle x_{1}x_{2}+y_{1}y_{2}=0\ .}
For an ellipse with semi-axesa,b{\displaystyle a,\,b}the following is true:[9][10]
Let the ellipse be in the canonical form with parametric equationp→(t)=(acost,bsint).{\displaystyle {\vec {p}}(t)=(a\cos t,\,b\sin t).}
The two pointsc→1=p→(t),c→2=p→(t+π2){\textstyle {\vec {c}}_{1}={\vec {p}}(t),\ {\vec {c}}_{2}={\vec {p}}\left(t+{\frac {\pi }{2}}\right)}are on conjugate diameters (see previous section). From trigonometric formulae one obtainsc→2=(−asint,bcost)T{\displaystyle {\vec {c}}_{2}=(-a\sin t,\,b\cos t)^{\mathsf {T}}}and|c→1|2+|c→2|2=⋯=a2+b2.{\displaystyle \left|{\vec {c}}_{1}\right|^{2}+\left|{\vec {c}}_{2}\right|^{2}=\cdots =a^{2}+b^{2}\,.}
The area of the triangle generated byc→1,c→2{\displaystyle {\vec {c}}_{1},\,{\vec {c}}_{2}}isAΔ=12det(c→1,c→2)=⋯=12ab{\displaystyle A_{\Delta }={\tfrac {1}{2}}\det \left({\vec {c}}_{1},\,{\vec {c}}_{2}\right)=\cdots ={\tfrac {1}{2}}ab}
and from the diagram it can be seen that the area of the parallelogram is 8 times that ofAΔ{\displaystyle A_{\Delta }}. HenceArea12=4ab.{\displaystyle {\text{Area}}_{12}=4ab\,.}
For the ellipsex2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}the intersection points oforthogonaltangents lie on the circlex2+y2=a2+b2{\displaystyle x^{2}+y^{2}=a^{2}+b^{2}}.
This circle is calledorthopticordirector circleof the ellipse (not to be confused with the circular directrix defined above).
Ellipses appear indescriptive geometryas images (parallel or central projection) of circles. There exist various tools to draw an ellipse. Computers provide the fastest and most accurate method for drawing an ellipse. However, technical tools (ellipsographs) to draw an ellipse without a computer exist. The principle was known to the 5th century mathematicianProclus, and the tool now known as anelliptical trammelwas invented byLeonardo da Vinci.[11]
If there is no ellipsograph available, one can draw an ellipse using anapproximation by the four osculating circles at the vertices.
For any method described below, knowledge of the axes and the semi-axes is necessary (or equivalently: the foci and the semi-major axis). If this presumption is not fulfilled one has to know at least two conjugate diameters. With help ofRytz's constructionthe axes and semi-axes can be retrieved.
The following construction of single points of an ellipse is due tode La Hire.[12]It is based on thestandard parametric representation(acost,bsint){\displaystyle (a\cos t,\,b\sin t)}of an ellipse:
The characterization of an ellipse as the locus of points so that sum of the distances to the foci is constant leads to a method of drawing one using twodrawing pins, a length of string, and a pencil. In this method, pins are pushed into the paper at two points, which become the ellipse's foci. A string is tied at each end to the two pins; its length after tying is2a{\displaystyle 2a}. The tip of the pencil then traces an ellipse if it is moved while keeping the string taut. Using two pegs and a rope, gardeners use this procedure to outline an elliptical flower bed—thus it is called thegardener's ellipse. The Byzantine architectAnthemius of Tralles(c.600) described how this method could be used to construct an elliptical reflector,[13]and it was elaborated in a now-lost 9th-century treatise byAl-Ḥasan ibn Mūsā.[14]
A similar method for drawingconfocal ellipseswith aclosedstring is due to the Irish bishopCharles Graves.
The two following methods rely on the parametric representation (see§ Standard parametric representation, above):(acost,bsint){\displaystyle (a\cos t,\,b\sin t)}
This representation can be modeled technically by two simple methods. In both cases center, the axes and semi axesa,b{\displaystyle a,\,b}have to be known.
The first method starts with
The point, where the semi axes meet is marked byP{\displaystyle P}. If the strip slides with both ends on the axes of the desired ellipse, then pointP{\displaystyle P}traces the ellipse. For the proof one shows that pointP{\displaystyle P}has the parametric representation(acost,bsint){\displaystyle (a\cos t,\,b\sin t)}, where parametert{\displaystyle t}is the angle of the slope of the paper strip.
A technical realization of the motion of the paper strip can be achieved by aTusi couple(see animation). The device is able to draw any ellipse with afixedsuma+b{\displaystyle a+b}, which is the radius of the large circle. This restriction may be a disadvantage in real life. More flexible is the second paper strip method.
A variation of the paper strip method 1 uses the observation that the midpointN{\displaystyle N}of the paper strip is moving on the circle with centerM{\displaystyle M}(of the ellipse) and radiusa+b2{\displaystyle {\tfrac {a+b}{2}}}. Hence, the paperstrip can be cut at pointN{\displaystyle N}into halves, connected again by a joint atN{\displaystyle N}and the sliding endK{\displaystyle K}fixed at the centerM{\displaystyle M}(see diagram). After this operation the movement of the unchanged half of the paperstrip is unchanged.[15]This variation requires only one sliding shoe.
The second method starts with
One marks the point, which divides the strip into two substrips of lengthb{\displaystyle b}anda−b{\displaystyle a-b}. The strip is positioned onto the axes as described in the diagram. Then the free end of the strip traces an ellipse, while the strip is moved. For the proof, one recognizes that the tracing point can be described parametrically by(acost,bsint){\displaystyle (a\cos t,\,b\sin t)}, where parametert{\displaystyle t}is the angle of slope of the paper strip.
This method is the base for severalellipsographs(see section below).
Similar to the variation of the paper strip method 1 avariation of the paper strip method 2can be established (see diagram) by cutting the part between the axes into halves.
Most ellipsographdraftinginstruments are based on the second paperstrip method.
FromMetric propertiesbelow, one obtains:
The diagram shows an easy way to find the centers of curvatureC1=(a−b2a,0),C3=(0,b−a2b){\displaystyle C_{1}=\left(a-{\tfrac {b^{2}}{a}},0\right),\,C_{3}=\left(0,b-{\tfrac {a^{2}}{b}}\right)}at vertexV1{\displaystyle V_{1}}and co-vertexV3{\displaystyle V_{3}}, respectively:
(proof: simple calculation.)
The centers for the remaining vertices are found by symmetry.
With help of aFrench curveone draws a curve, which has smooth contact to theosculating circles.
The following method to construct single points of an ellipse relies on theSteiner generation of a conic section:
For the generation of points of the ellipsex2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}one uses the pencils at the verticesV1,V2{\displaystyle V_{1},\,V_{2}}. LetP=(0,b){\displaystyle P=(0,\,b)}be an upper co-vertex of the ellipse andA=(−a,2b),B=(a,2b){\displaystyle A=(-a,\,2b),\,B=(a,\,2b)}.
P{\displaystyle P}is the center of the rectangleV1,V2,B,A{\displaystyle V_{1},\,V_{2},\,B,\,A}. The sideAB¯{\displaystyle {\overline {AB}}}of the rectangle is divided into n equal spaced line segments and this division is projected parallel with the diagonalAV2{\displaystyle AV_{2}}as direction onto the line segmentV1B¯{\displaystyle {\overline {V_{1}B}}}and assign the division as shown in the diagram. The parallel projection together with the reverse of the orientation is part of the projective mapping between the pencils atV1{\displaystyle V_{1}}andV2{\displaystyle V_{2}}needed. The intersection points of any two related linesV1Bi{\displaystyle V_{1}B_{i}}andV2Ai{\displaystyle V_{2}A_{i}}are points of the uniquely defined ellipse. With help of the pointsC1,…{\displaystyle C_{1},\,\dotsc }the points of the second quarter of the ellipse can be determined. Analogously one obtains the points of the lower half of the ellipse.
Steiner generation can also be defined for hyperbolas and parabolas. It is sometimes called aparallelogram methodbecause one can use other points rather than the vertices, which starts with a parallelogram instead of a rectangle.
The ellipse is a special case of thehypotrochoidwhenR=2r{\displaystyle R=2r}, as shown in the adjacent image. The special case of a moving circle with radiusr{\displaystyle r}inside a circle with radiusR=2r{\displaystyle R=2r}is called aTusi couple.
A circle with equation(x−x∘)2+(y−y∘)2=r2{\displaystyle \left(x-x_{\circ }\right)^{2}+\left(y-y_{\circ }\right)^{2}=r^{2}}is uniquely determined by three points(x1,y1),(x2,y2),(x3,y3){\displaystyle \left(x_{1},y_{1}\right),\;\left(x_{2},\,y_{2}\right),\;\left(x_{3},\,y_{3}\right)}not on a line. A simple way to determine the parametersx∘,y∘,r{\displaystyle x_{\circ },y_{\circ },r}uses theinscribed angle theoremfor circles:
Usually one measures inscribed angles in degrees or radians $\theta$, but here the following measurement is more convenient:
For four pointsPi=(xi,yi),i=1,2,3,4,{\displaystyle P_{i}=\left(x_{i},\,y_{i}\right),\ i=1,\,2,\,3,\,4,\,}no three of them on a line, we have the following (see diagram):
At first the measure is available only for chords not parallel to the y-axis, but the final formula works for any chord.
For example, forP1=(2,0),P2=(0,1),P3=(0,0){\displaystyle P_{1}=(2,\,0),\;P_{2}=(0,\,1),\;P_{3}=(0,\,0)}the three-point equation is:
Using vectors,dot productsanddeterminantsthis formula can be arranged more clearly, lettingx→=(x,y){\displaystyle {\vec {x}}=(x,\,y)}:(x→−x→1)⋅(x→−x→2)det(x→−x→1,x→−x→2)=(x→3−x→1)⋅(x→3−x→2)det(x→3−x→1,x→3−x→2).{\displaystyle {\frac {\left({\color {red}{\vec {x}}}-{\vec {x}}_{1}\right)\cdot \left({\color {red}{\vec {x}}}-{\vec {x}}_{2}\right)}{\det \left({\color {red}{\vec {x}}}-{\vec {x}}_{1},{\color {red}{\vec {x}}}-{\vec {x}}_{2}\right)}}={\frac {\left({\vec {x}}_{3}-{\vec {x}}_{1}\right)\cdot \left({\vec {x}}_{3}-{\vec {x}}_{2}\right)}{\det \left({\vec {x}}_{3}-{\vec {x}}_{1},{\vec {x}}_{3}-{\vec {x}}_{2}\right)}}.}
The center of the circle $(x_\circ, y_\circ)$ satisfies: $$\begin{bmatrix} 1 & \dfrac{y_1 - y_2}{x_1 - x_2} \\[2ex] \dfrac{x_1 - x_3}{y_1 - y_3} & 1 \end{bmatrix} \begin{bmatrix} x_\circ \\ y_\circ \end{bmatrix} = \begin{bmatrix} \dfrac{x_1^2 - x_2^2 + y_1^2 - y_2^2}{2(x_1 - x_2)} \\[2ex] \dfrac{y_1^2 - y_3^2 + x_1^2 - x_3^2}{2(y_1 - y_3)} \end{bmatrix}.$$
The radius is the distance between any of the three points and the center: $$r = \sqrt{(x_1 - x_\circ)^2 + (y_1 - y_\circ)^2} = \sqrt{(x_2 - x_\circ)^2 + (y_2 - y_\circ)^2} = \sqrt{(x_3 - x_\circ)^2 + (y_3 - y_\circ)^2}.$$
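With the example points P1 = (2, 0), P2 = (0, 1), P3 = (0, 0) used earlier, the system can be solved directly (a small sketch; the points are reordered so that the differences in the denominators are nonzero):

    # reordered so that x1 ≠ x2 and y1 ≠ y3, as the system requires
    (x1, y1), (x2, y2), (x3, y3) = (2.0, 0.0), (0.0, 0.0), (0.0, 1.0)

    m12 = (y1 - y2) / (x1 - x2)
    m21 = (x1 - x3) / (y1 - y3)
    M   = [1.0 m12; m21 1.0]
    rhs = [(x1^2 - x2^2 + y1^2 - y2^2) / (2 * (x1 - x2)),
           (y1^2 - y3^2 + x1^2 - x3^2) / (2 * (y1 - y3))]

    x0, y0 = M \ rhs                 # center, here (1.0, 0.5)
    r = hypot(x1 - x0, y1 - y0)      # radius, here √1.25 ≈ 1.118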
This section considers the family of ellipses defined by equations(x−x∘)2a2+(y−y∘)2b2=1{\displaystyle {\tfrac {\left(x-x_{\circ }\right)^{2}}{a^{2}}}+{\tfrac {\left(y-y_{\circ }\right)^{2}}{b^{2}}}=1}with afixedeccentricitye{\displaystyle e}. It is convenient to use the parameter:q=a2b2=11−e2,{\displaystyle {\color {blue}q}={\frac {a^{2}}{b^{2}}}={\frac {1}{1-e^{2}}},}
and to write the ellipse equation as:(x−x∘)2+q(y−y∘)2=a2,{\displaystyle \left(x-x_{\circ }\right)^{2}+{\color {blue}q}\,\left(y-y_{\circ }\right)^{2}=a^{2},}
whereqis fixed andx∘,y∘,a{\displaystyle x_{\circ },\,y_{\circ },\,a}vary over the real numbers. (Such ellipses have their axes parallel to the coordinate axes: ifq<1{\displaystyle q<1}, the major axis is parallel to thex-axis; ifq>1{\displaystyle q>1}, it is parallel to they-axis.)
Like a circle, such an ellipse is determined by three points not on a line.
For this family of ellipses, one introduces the followingq-analogangle measure, which isnota function of the usual angle measureθ:[16][17]
At first the measure is available only for chords which are not parallel to the y-axis. But the final formula works for any chord. The proof follows from a straightforward calculation. For the direction of proof given that the points are on an ellipse, one can assume that the center of the ellipse is the origin.
For example, forP1=(2,0),P2=(0,1),P3=(0,0){\displaystyle P_{1}=(2,\,0),\;P_{2}=(0,\,1),\;P_{3}=(0,\,0)}andq=4{\displaystyle q=4}one obtains the three-point form
Analogously to the circle case, the equation can be written more clearly using vectors:(x→−x→1)∗(x→−x→2)det(x→−x→1,x→−x→2)=(x→3−x→1)∗(x→3−x→2)det(x→3−x→1,x→3−x→2),{\displaystyle {\frac {\left({\color {red}{\vec {x}}}-{\vec {x}}_{1}\right)*\left({\color {red}{\vec {x}}}-{\vec {x}}_{2}\right)}{\det \left({\color {red}{\vec {x}}}-{\vec {x}}_{1},{\color {red}{\vec {x}}}-{\vec {x}}_{2}\right)}}={\frac {\left({\vec {x}}_{3}-{\vec {x}}_{1}\right)*\left({\vec {x}}_{3}-{\vec {x}}_{2}\right)}{\det \left({\vec {x}}_{3}-{\vec {x}}_{1},{\vec {x}}_{3}-{\vec {x}}_{2}\right)}},}
where∗{\displaystyle *}is the modifieddot productu→∗v→=uxvx+quyvy.{\displaystyle {\vec {u}}*{\vec {v}}=u_{x}v_{x}+{\color {blue}q}\,u_{y}v_{y}.}
Any ellipse can be described in a suitable coordinate system by an equationx2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}. The equation of the tangent at a pointP1=(x1,y1){\displaystyle P_{1}=\left(x_{1},\,y_{1}\right)}of the ellipse isx1xa2+y1yb2=1.{\displaystyle {\tfrac {x_{1}x}{a^{2}}}+{\tfrac {y_{1}y}{b^{2}}}=1.}If one allows pointP1=(x1,y1){\displaystyle P_{1}=\left(x_{1},\,y_{1}\right)}to be an arbitrary point different from the origin, then
This relation between points and lines is abijection.
Theinverse functionmaps
Such a relation between points and lines generated by a conic is calledpole-polar relationorpolarity. The pole is the point; the polar the line.
By calculation one can confirm the following properties of the pole-polar relation of the ellipse:
Pole-polar relations exist for hyperbolas and parabolas as well.
All metric properties given below refer to an ellipse with equation $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1 \qquad (1)$$
except for the section on the area enclosed by a tilted ellipse, where the generalized form of Eq. (1) will be given.
The area $A_\text{ellipse}$ enclosed by an ellipse is $$A_\text{ellipse} = \pi ab,$$
where $a$ and $b$ are the lengths of the semi-major and semi-minor axes, respectively. The area formula $\pi ab$ is intuitive: start with a circle of radius $b$ (so its area is $\pi b^2$) and stretch it by a factor $a/b$ to make an ellipse. This scales the area by the same factor: $\pi b^2 (a/b) = \pi ab$.[18] However, using the same approach for the circumference would be fallacious – compare the integrals $\int f(x)\,dx$ and $\int \sqrt{1 + f'^2(x)}\,dx$. It is also easy to rigorously prove the area formula using integration as follows. Equation (1) can be rewritten as $y(x) = b\sqrt{1 - x^2/a^2}$. For $x \in [-a, a]$, this curve is the top half of the ellipse. So twice the integral of $y(x)$ over the interval $[-a, a]$ will be the area of the ellipse: $$A_\text{ellipse} = \int_{-a}^{a} 2b\sqrt{1 - \frac{x^2}{a^2}}\,dx = \frac{b}{a}\int_{-a}^{a} 2\sqrt{a^2 - x^2}\,dx.$$
The second integral is the area of a circle of radius $a$, that is, $\pi a^2$. So $$A_\text{ellipse} = \frac{b}{a}\pi a^2 = \pi ab.$$
An ellipse defined implicitly by $Ax^2 + Bxy + Cy^2 = 1$ has area $2\pi/\sqrt{4AC - B^2}$.
The area can also be expressed in terms of eccentricity and the length of the semi-major axis as $a^2\pi\sqrt{1 - e^2}$ (obtained by solving for flattening, then computing the semi-minor axis).
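The three area expressions agree, as a quick check shows (a sketch with arbitrary semi-axes; the implicit form of an axis-aligned ellipse has A = 1/a², B = 0, C = 1/b²):

    a, b = 2.0, 1.0
    A, B, C = 1 / a^2, 0.0, 1 / b^2                # x²/a² + y²/b² = 1 in implicit form
    e = sqrt(1 - b^2 / a^2)

    @assert 2π / sqrt(4A * C - B^2) ≈ π * a * b    # implicit-form area formula
    @assert a^2 * π * sqrt(1 - e^2) ≈ π * a * b    # eccentricity form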
So far we have dealt with erect ellipses, whose major and minor axes are parallel to the $x$ and $y$ axes. However, some applications require tilted ellipses. In charged-particle beam optics, for instance, the enclosed area of an erect or tilted ellipse is an important property of the beam, its emittance. In this case a simple formula still applies, namely $$A_\text{ellipse} = \pi\, y_\text{int}\, x_\text{max} = \pi\, x_\text{int}\, y_\text{max},$$
where $y_\text{int}$, $x_\text{int}$ are intercepts and $x_\text{max}$, $y_\text{max}$ are maximum values. It follows directly from Apollonios's theorem.
The circumference $C$ of an ellipse is $$C = 4a\int_0^{\pi/2} \sqrt{1 - e^2\sin^2\theta}\;d\theta = 4a\,E(e),$$
where again $a$ is the length of the semi-major axis, $e = \sqrt{1 - b^2/a^2}$ is the eccentricity, and the function $E$ is the complete elliptic integral of the second kind, $$E(e) = \int_0^{\pi/2} \sqrt{1 - e^2\sin^2\theta}\;d\theta,$$ which is in general not an elementary function.
The circumference of the ellipse may be evaluated in terms ofE(e){\displaystyle E(e)}usingGauss's arithmetic-geometric mean;[19]this is a quadratically converging iterative method (seeherefor details).
The exactinfinite seriesis:C2πa=1−(12)2e2−(1⋅32⋅4)2e43−(1⋅3⋅52⋅4⋅6)2e65−⋯=1−∑n=1∞((2n−1)!!(2n)!!)2e2n2n−1=−∑n=0∞((2n−1)!!(2n)!!)2e2n2n−1,{\displaystyle {\begin{aligned}{\frac {C}{2\pi a}}&=1-\left({\frac {1}{2}}\right)^{2}e^{2}-\left({\frac {1\cdot 3}{2\cdot 4}}\right)^{2}{\frac {e^{4}}{3}}-\left({\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6}}\right)^{2}{\frac {e^{6}}{5}}-\cdots \\&=1-\sum _{n=1}^{\infty }\left({\frac {(2n-1)!!}{(2n)!!}}\right)^{2}{\frac {e^{2n}}{2n-1}}\\&=-\sum _{n=0}^{\infty }\left({\frac {(2n-1)!!}{(2n)!!}}\right)^{2}{\frac {e^{2n}}{2n-1}},\end{aligned}}}wheren!!{\displaystyle n!!}is thedouble factorial(extended to negative odd integers in the usual way, giving(−1)!!=1{\displaystyle (-1)!!=1}and(−3)!!=−1{\displaystyle (-3)!!=-1}).
This series converges, but by expanding in terms ofh=(a−b)2/(a+b)2,{\displaystyle h=(a-b)^{2}/(a+b)^{2},}James Ivory,[20]Bessel[21]andKummer[22]derived a series that converges much more rapidly. It is most concisely written in terms of thebinomial coefficient withn=1/2{\displaystyle n=1/2}:Cπ(a+b)=∑n=0∞(12n)2hn=∑n=0∞((2n−3)!!(2n)!!)2hn=∑n=0∞((2n−3)!!2nn!)2hn=∑n=0∞(1(2n−1)4n(2nn))2hn=1+h4+h264+h3256+25h416384+49h565536+441h6220+1089h7222+⋯.{\displaystyle {\begin{aligned}{\frac {C}{\pi (a+b)}}&=\sum _{n=0}^{\infty }{{\frac {1}{2}} \choose n}^{2}h^{n}\\&=\sum _{n=0}^{\infty }\left({\frac {(2n-3)!!}{(2n)!!}}\right)^{2}h^{n}\\&=\sum _{n=0}^{\infty }\left({\frac {(2n-3)!!}{2^{n}n!}}\right)^{2}h^{n}\\&=\sum _{n=0}^{\infty }\left({\frac {1}{(2n-1)4^{n}}}{\binom {2n}{n}}\right)^{2}h^{n}\\&=1+{\frac {h}{4}}+{\frac {h^{2}}{64}}+{\frac {h^{3}}{256}}+{\frac {25\,h^{4}}{16384}}+{\frac {49\,h^{5}}{65536}}+{\frac {441\,h^{6}}{2^{20}}}+{\frac {1089\,h^{7}}{2^{22}}}+\cdots .\end{aligned}}}The coefficients are slightly smaller (by a factor of2n−1{\displaystyle 2n-1}), but alsoe4/16≤h≤e4{\displaystyle e^{4}/16\leq h\leq e^{4}}is numerically much smaller thane{\displaystyle e}except ath=e=0{\displaystyle h=e=0}andh=e=1{\displaystyle h=e=1}. For eccentricities less than 0.5(h<0.005{\displaystyle h<0.005}),the error is at the limits ofdouble-precision floating-pointafter theh4{\displaystyle h^{4}}term.[23]
Srinivasa Ramanujan gave two close approximations for the circumference in §16 of "Modular Equations and Approximations to $\pi$";[24] they are $$\frac{C}{\pi} \approx 3(a + b) - \sqrt{(3a + b)(a + 3b)} = 3(a + b) - \sqrt{3(a + b)^2 + 4ab}$$ and $$\frac{C}{\pi(a + b)} \approx 1 + \frac{3h}{10 + \sqrt{4 - 3h}},$$ where $h$ takes on the same meaning as above. The errors in these approximations, which were obtained empirically, are of order $h^3$ and $h^5$, respectively.[25][26] This is because the second formula's infinite series expansion matches Ivory's formula up to the $h^4$ term.[25]: 3
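The defining integral and Ramanujan's second approximation are easy to compare numerically (a sketch; the semi-axes 3 and 1 and the midpoint-rule resolution are arbitrary choices):

    a, b = 3.0, 1.0
    e = sqrt(1 - b^2 / a^2)

    # C = 4a ∫₀^{π/2} √(1 − e² sin²θ) dθ, evaluated with a simple midpoint rule
    n  = 10_000
    θs = ((0.5:1:n) ./ n) .* (π / 2)
    C_integral = 4a * (π / 2 / n) * sum(sqrt.(1 .- e^2 .* sin.(θs) .^ 2))

    # Ramanujan's second approximation
    h = (a - b)^2 / (a + b)^2
    C_ramanujan = π * (a + b) * (1 + 3h / (10 + sqrt(4 - 3h)))

    @assert isapprox(C_integral, C_ramanujan; rtol = 1e-6)   # agree to better than 6 digits here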
More generally, thearc lengthof a portion of the circumference, as a function of the angle subtended (orxcoordinatesof any two points on the upper half of the ellipse), is given by an incompleteelliptic integral. The upper half of an ellipse is parameterized byy=b1−x2a2.{\displaystyle y=b\ {\sqrt {1-{\frac {x^{2}}{a^{2}}}\ }}~.}
Then the arc lengths{\displaystyle s}fromx1{\displaystyle \ x_{1}\ }tox2{\displaystyle \ x_{2}\ }is:s=−b∫arccosx1aarccosx2a1+(a2b2−1)sin2zdz.{\displaystyle s=-b\int _{\arccos {\frac {x_{1}}{a}}}^{\arccos {\frac {x_{2}}{a}}}{\sqrt {\ 1+\left({\tfrac {a^{2}}{b^{2}}}-1\right)\ \sin ^{2}z~}}\;dz~.}
This is equivalent tos=b[E(z|1−a2b2)]z=arccosx2aarccosx1a{\displaystyle s=b\ \left[\;E\left(z\;{\Biggl |}\;1-{\frac {a^{2}}{b^{2}}}\right)\;\right]_{z\ =\ \arccos {\frac {x_{2}}{a}}}^{\arccos {\frac {x_{1}}{a}}}}
whereE(z∣m){\displaystyle E(z\mid m)}is the incomplete elliptic integral of the second kind with parameterm=k2.{\displaystyle m=k^{2}.}
Some lower and upper bounds on the circumference of the canonical ellipsex2/a2+y2/b2=1{\displaystyle \ x^{2}/a^{2}+y^{2}/b^{2}=1\ }witha≥b{\displaystyle \ a\geq b\ }are[27]2πb≤C≤2πa,π(a+b)≤C≤4(a+b),4a2+b2≤C≤2πa2+b2.{\displaystyle {\begin{aligned}2\pi b&\leq C\leq 2\pi a\ ,\\\pi (a+b)&\leq C\leq 4(a+b)\ ,\\4{\sqrt {a^{2}+b^{2}\ }}&\leq C\leq {\sqrt {2\ }}\pi {\sqrt {a^{2}+b^{2}\ }}~.\end{aligned}}}
Here the upper bound2πa{\displaystyle \ 2\pi a\ }is the circumference of acircumscribedconcentric circlepassing through the endpoints of the ellipse's major axis, and the lower bound4a2+b2{\displaystyle 4{\sqrt {a^{2}+b^{2}}}}is the perimeter of aninscribedrhombuswithverticesat the endpoints of the major and the minor axes.
Given an ellipse whose axes are drawn, we can construct the endpoints of a particular elliptic arc whose length is one eighth of the ellipse's circumference using onlystraightedge and compassin a finite number of steps; for some specific shapes of ellipses, such as when the axes have a length ratio of2:1{\displaystyle {\sqrt {2}}:1}, it is additionally possible to construct the endpoints of a particular arc whose length is one twelfth of the circumference.[28](The vertices and co-vertices are already endpoints of arcs whose length is one half or one quarter of the ellipse's circumference.) However, the general theory of straightedge-and-compass elliptic division appears to be unknown, unlike inthe case of the circleandthe lemniscate. The division in special cases has been investigated byLegendrein his classical treatise.[29]
The curvature is given by $$\kappa = \frac{1}{a^2 b^2}\left(\frac{x^2}{a^4} + \frac{y^2}{b^4}\right)^{-\frac{3}{2}},$$
and the radius of curvature, $\rho = 1/\kappa$, at point $(x, y)$ is $$\rho = a^2 b^2\left(\frac{x^2}{a^4} + \frac{y^2}{b^4}\right)^{\frac{3}{2}} = \frac{1}{a^4 b^4}\sqrt{\left(a^4 y^2 + b^4 x^2\right)^3}.$$ The radius of curvature of an ellipse, as a function of angle $\theta$ from the center, is $$R(\theta) = \frac{a^2}{b}\left(\frac{1 - e^2(2 - e^2)\cos^2\theta}{1 - e^2\cos^2\theta}\right)^{3/2},$$ where $e$ is the eccentricity.
Radius of curvature at the two vertices $(\pm a, 0)$ and the centers of curvature: $$\rho_0 = \frac{b^2}{a} = p, \qquad \left(\pm\frac{c^2}{a} \,\Big|\, 0\right).$$
Radius of curvature at the two co-vertices $(0, \pm b)$ and the centers of curvature: $$\rho_1 = \frac{a^2}{b}, \qquad \left(0 \,\Big|\, \pm\frac{c^2}{b}\right).$$ The locus of all the centers of curvature is called an evolute. In the case of an ellipse, the evolute is an astroid.
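A small numeric check of these vertex values against the general formula (arbitrary semi-axes):

    a, b = 5.0, 3.0
    ρ(x, y) = a^2 * b^2 * (x^2 / a^4 + y^2 / b^4)^(3 / 2)   # radius of curvature at (x, y)

    @assert ρ(a, 0.0) ≈ b^2 / a   # at a vertex
    @assert ρ(0.0, b) ≈ a^2 / b   # at a co-vertex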
Ellipses appear in triangle geometry as
Ellipses appear as plane sections of the followingquadrics:
If the water's surface is disturbed at one focus of an elliptical water tank, the circular waves of that disturbance, afterreflectingoff the walls, converge simultaneously to a single point: thesecond focus. This is a consequence of the total travel length being the same along any wall-bouncing path between the two foci.
Similarly, if a light source is placed at one focus of an ellipticmirror, all light rays on the plane of the ellipse are reflected to the second focus. Since no other smooth curve has such a property, it can be used as an alternative definition of an ellipse. (In the special case of a circle with a source at its center all light would be reflected back to the center.) If the ellipse is rotated along its major axis to produce an ellipsoidal mirror (specifically, aprolate spheroid), this property holds for all rays out of the source. Alternatively, a cylindrical mirror with elliptical cross-section can be used to focus light from a linearfluorescent lampalong a line of the paper; such mirrors are used in somedocument scanners.
Sound waves are reflected in a similar way, so in a large elliptical room a person standing at one focus can hear a person standing at the other focus remarkably well. The effect is even more evident under avaulted roofshaped as a section of a prolate spheroid. Such a room is called awhisper chamber. The same effect can be demonstrated with two reflectors shaped like the end caps of such a spheroid, placed facing each other at the proper distance. Examples are theNational Statuary Hallat theUnited States Capitol(whereJohn Quincy Adamsis said to have used this property for eavesdropping on political matters); theMormon TabernacleatTemple SquareinSalt Lake City,Utah; at an exhibit on sound at theMuseum of Science and IndustryinChicago; in front of theUniversity of Illinois at Urbana–ChampaignFoellinger Auditorium; and also at a side chamber of the Palace of Charles V, in theAlhambra.
In the 17th century,Johannes Keplerdiscovered that the orbits along which the planets travel around the Sun are ellipses with the Sun [approximately] at one focus, in hisfirst law of planetary motion. Later,Isaac Newtonexplained this as a corollary of hislaw of universal gravitation.
More generally, in the gravitationaltwo-body problem, if the two bodies are bound to each other (that is, the total energy is negative), their orbits aresimilarellipses with the commonbarycenterbeing one of the foci of each ellipse. The other focus of either ellipse has no known physical significance. The orbit of either body in the reference frame of the other is also an ellipse, with the other body at the same focus.
Keplerian elliptical orbits are the result of any radially directed attraction force whose strength is inversely proportional to the square of the distance. Thus, in principle, the motion of two oppositely charged particles in empty space would also be an ellipse. (However, this conclusion ignores losses due toelectromagnetic radiationandquantum effects, which become significant when the particles are moving at high speed.)
For elliptical orbits, useful relations involving the eccentricity e are: {\displaystyle {\begin{aligned}e&={\frac {r_{a}-r_{p}}{r_{a}+r_{p}}}={\frac {r_{a}-r_{p}}{2a}}\\r_{a}&=(1+e)a\\r_{p}&=(1-e)a\end{aligned}}}
where r_a is the radius at apoapsis (the farthest distance from the focus), r_p is the radius at periapsis (the closest distance), and a is the length of the semi-major axis.
Also, in terms of r_a and r_p, the semi-major axis a is their arithmetic mean, the semi-minor axis b is their geometric mean, and the semi-latus rectum ℓ is their harmonic mean. In other words, {\displaystyle {\begin{aligned}a&={\frac {r_{a}+r_{p}}{2}}\\[2pt]b&={\sqrt {r_{a}r_{p}}}\\[2pt]\ell &={\frac {2}{{\frac {1}{r_{a}}}+{\frac {1}{r_{p}}}}}={\frac {2r_{a}r_{p}}{r_{a}+r_{p}}}.\end{aligned}}}
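A minimal numerical sketch of these relations (Python, with hypothetical apsis values), recovering e, a, b and ℓ from the apoapsis radius r_a and periapsis radius r_p:

```python
import math

def orbit_parameters(r_a, r_p):
    """Return (e, a, b, l) of an elliptical orbit from its apoapsis and periapsis radii."""
    e = (r_a - r_p) / (r_a + r_p)      # eccentricity
    a = (r_a + r_p) / 2.0              # semi-major axis: arithmetic mean
    b = math.sqrt(r_a * r_p)           # semi-minor axis: geometric mean
    l = 2.0 * r_a * r_p / (r_a + r_p)  # semi-latus rectum: harmonic mean
    return e, a, b, l

e, a, b, l = orbit_parameters(10.0, 6.0)   # hypothetical radii
print(e, a, b, l)                          # 0.25, 8.0, ~7.746, 7.5
assert math.isclose((1 + e) * a, 10.0) and math.isclose((1 - e) * a, 6.0)
```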
The general solution for aharmonic oscillatorin two or moredimensionsis also an ellipse. Such is the case, for instance, of a long pendulum that is free to move in two dimensions; of a mass attached to a fixed point by a perfectly elasticspring; or of any object that moves under influence of an attractive force that is directly proportional to its distance from a fixed attractor. Unlike Keplerian orbits, however, these "harmonic orbits" have the center of attraction at the geometric center of the ellipse, and have fairly simple equations of motion.
Inelectronics, the relative phase of two sinusoidal signals can be compared by feeding them to the vertical and horizontal inputs of anoscilloscope. If theLissajous figuredisplay is an ellipse, rather than a straight line, the two signals are out of phase.
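As a small illustrative sketch of this idea (not tied to any particular oscilloscope; the signals are hypothetical), plotting one sinusoid against an equal-frequency copy with a phase offset traces an ellipse, which degenerates to a straight line when the two signals are in phase:

```python
import math

def lissajous_points(phase, n=8):
    """Sample (x, y) pairs of two equal-frequency sinusoids with a given phase offset."""
    return [(math.sin(t), math.sin(t + phase))
            for t in (2 * math.pi * k / n for k in range(n))]

print(lissajous_points(0.0))           # y == x at every sample: a straight line
print(lissajous_points(math.pi / 2))   # x^2 + y^2 == 1: a circle, the special-case ellipse
```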
Twonon-circular gearswith the same elliptical outline, each pivoting around one focus and positioned at the proper angle, turn smoothly while maintaining contact at all times. Alternatively, they can be connected by alink chainortiming belt, or in the case of a bicycle the mainchainringmay be elliptical, or anovoidsimilar to an ellipse in form. Such elliptical gears may be used in mechanical equipment to produce variableangular speedortorquefrom a constant rotation of the driving axle, or in the case of a bicycle to allow a varying crank rotation speed with inversely varyingmechanical advantage.
Elliptical bicycle gears make it easier for the chain to slide off the cog when changing gears.[30]
An example gear application would be a device that winds thread onto a conicalbobbinon aspinningmachine. The bobbin would need to wind faster when the thread is near the apex than when it is near the base.[31]
Instatistics, a bivariaterandom vector(X,Y){\displaystyle (X,Y)}isjointly elliptically distributedif its iso-density contours—loci of equal values of the density function—are ellipses. The concept extends to an arbitrary number of elements of the random vector, in which case in general the iso-density contours are ellipsoids. A special case is themultivariate normal distribution. The elliptical distributions are important in the financial field because if rates of return on assets are jointly elliptically distributed then all portfolios can be characterized completely by their mean and variance—that is, any two portfolios with identical mean and variance of portfolio return have identical distributions of portfolio return.[34][35]
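For the multivariate normal special case, the iso-density contours are level sets of the quadratic form (x − μ)ᵀ Σ⁻¹ (x − μ); a brief sketch (with an arbitrary, purely illustrative covariance matrix) recovers the contour ellipse's axis directions and relative lengths from the eigendecomposition of Σ:

```python
import numpy as np

# Purely illustrative covariance matrix of a bivariate normal
Sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])

# Eigenvectors give the directions of the contour ellipse's axes;
# the square roots of the eigenvalues give the relative semi-axis lengths.
eigvals, eigvecs = np.linalg.eigh(Sigma)
print("relative semi-axis lengths:", np.sqrt(eigvals))
print("axis directions (columns):\n", eigvecs)
```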
Drawing an ellipse as a graphics primitive is common in standard display libraries, such as the Macintosh QuickDraw API and Direct2D on Windows. Jack Bresenham at IBM is most famous for the invention of 2D drawing primitives, including line and circle drawing, using only fast integer operations such as addition and branch on carry bit. M. L. V. Pitteway extended Bresenham's algorithm for lines to conics in 1967.[36] Another efficient generalization to draw ellipses was invented in 1984 by Jerry Van Aken.[37]
In 1970 Danny Cohen presented at the "Computer Graphics 1970" conference in England a linear algorithm for drawing ellipses and circles. In 1971, L. B. Smith published similar algorithms for all conic sections and proved them to have good properties.[38]These algorithms need only a few multiplications and additions to calculate each vector.
It is beneficial to use a parametric formulation in computer graphics because the density of points is greatest where there is the most curvature. Thus, the change in slope between each successive point is small, reducing the apparent "jaggedness" of the approximation.
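A minimal sketch of such a parametric approximation (plain Python, independent of any particular graphics library): sampling the ellipse at equal increments of the parameter automatically places points more densely near the sharply curved vertices than near the flatter co-vertices:

```python
import math

def ellipse_polyline(a, b, n):
    """Approximate x^2/a^2 + y^2/b^2 = 1 by n points at equal parameter steps."""
    return [(a * math.cos(2 * math.pi * k / n), b * math.sin(2 * math.pi * k / n))
            for k in range(n)]

points = ellipse_polyline(4.0, 1.0, 16)
# Consecutive points cluster near the vertices (+-a, 0), where curvature is highest,
# and spread out near the co-vertices (0, +-b), where the ellipse is flattest.
print(points[:4])
```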
Composite Bézier curvesmay also be used to draw an ellipse to sufficient accuracy, since any ellipse may be construed as anaffine transformationof a circle. The spline methods used to draw a circle may be used to draw an ellipse, since the constituentBézier curvesbehave appropriately under such transformations.
It is sometimes useful to find the minimum bounding ellipse on a set of points. Theellipsoid methodis quite useful for solving this problem.
|
https://en.wikipedia.org/wiki/Ellipse
|
Inmathematics, agroup actionof a groupG{\displaystyle G}on asetS{\displaystyle S}is agroup homomorphismfromG{\displaystyle G}to some group (underfunction composition) of functions fromS{\displaystyle S}to itself. It is said thatG{\displaystyle G}actsonS{\displaystyle S}.
Many sets oftransformationsform agroupunderfunction composition; for example, therotationsaround a point in the plane. It is often useful to consider the group as anabstract group, and to say that one has a group action of the abstract group that consists of performing the transformations of the group of transformations. The reason for distinguishing the group from the transformations is that, generally, a group of transformations of astructureacts also on various related structures; for example, the above rotation group also acts on triangles by transforming triangles into triangles.
If a group acts on a structure, it will usually also act on objects built from that structure. For example, the group ofEuclidean isometriesacts onEuclidean spaceand also on the figures drawn in it; in particular, it acts on the set of alltriangles. Similarly, the group ofsymmetriesof apolyhedronacts on thevertices, theedges, and thefacesof the polyhedron.
A group action on avector spaceis called arepresentationof the group. In the case of a finite-dimensional vector space, it allows one to identify many groups withsubgroupsof thegeneral linear groupGL(n,K){\displaystyle \operatorname {GL} (n,K)}, the group of theinvertible matricesofdimensionn{\displaystyle n}over afieldK{\displaystyle K}.
Thesymmetric groupSn{\displaystyle S_{n}}acts on anysetwithn{\displaystyle n}elements by permuting the elements of the set. Although the group of allpermutationsof a set depends formally on the set, the concept of group action allows one to consider a single group for studying the permutations of all sets with the samecardinality.
If G is a group with identity element e, and X is a set, then a (left) group action α of G on X is a function α : G × X → X that satisfies the following two axioms:[1] identity, α(e, x) = x, and compatibility, α(g, α(h, x)) = α(gh, x), for all g and h in G and all x in X.
The groupG{\displaystyle G}is then said to act onX{\displaystyle X}(from the left). A setX{\displaystyle X}together with an action ofG{\displaystyle G}is called a (left)G{\displaystyle G}-set.
It can be notationally convenient to curry the action α, so that, instead, one has a collection of transformations α_g : X → X, with one transformation α_g for each group element g ∈ G. The identity and compatibility relations then read α_e(x) = x and α_g(α_h(x)) = α_{gh}(x).
The second axiom states that the function composition is compatible with the group multiplication; they form acommutative diagram. This axiom can be shortened even further, and written asαg∘αh=αgh{\displaystyle \alpha _{g}\circ \alpha _{h}=\alpha _{gh}}.
With the above understanding, it is very common to avoid writing α entirely, and to replace it with either a dot, or with nothing at all. Thus, α(g, x) can be shortened to g⋅x or gx, especially when the action is clear from context. The axioms are then e⋅x = x and g⋅(h⋅x) = (gh)⋅x for all g and h in G and all x in X.
From these two axioms, it follows that for any fixedginG{\displaystyle G}, the function fromXto itself which mapsxtog⋅xis abijection, with inverse bijection the corresponding map forg−1. Therefore, one may equivalently define a group action ofGonXas a group homomorphism fromGinto the symmetric groupSym(X)of all bijections fromXto itself.[2]
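A minimal sketch of these axioms in Python (hypothetical encoding: the cyclic group Z/4Z, with addition modulo 4 as the group operation, acting on 4-tuples of symbols by cyclic rotation), checking the identity and compatibility conditions directly:

```python
N = 4  # order of the cyclic group Z/NZ

def act(g, x):
    """Left action of Z/NZ on length-N tuples by cyclic rotation: g . x."""
    return tuple(x[(i - g) % N] for i in range(N))

X = ("a", "b", "c", "d")
assert act(0, X) == X                                   # identity axiom: e . x == x
for g in range(N):
    for h in range(N):
        # compatibility axiom: g . (h . x) == (g*h) . x, the group operation being + mod N
        assert act(g, act(h, X)) == act((g + h) % N, X)
```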
Likewise, a right group action of G on X is a function α : X × G → X that satisfies the analogous axioms:[3] α(x, e) = x and α(α(x, g), h) = α(x, gh)
(withα(x,g)often shortened toxgorx⋅gwhen the action being considered is clear from context)
for allgandhinGand allxinX.
The difference between left and right actions is in the order in which a productghacts onx. For a left action,hacts first, followed bygsecond. For a right action,gacts first, followed byhsecond. Because of the formula(gh)−1=h−1g−1, a left action can be constructed from a right action by composing with the inverse operation of the group. Also, a right action of a groupGonXcan be considered as a left action of itsopposite groupGoponX.
Thus, for establishing general properties of group actions, it suffices to consider only left actions. However, there are cases where this is not possible. For example, the multiplication of a groupinducesboth a left action and a right action on the group itself—multiplication on the left and on the right, respectively.
LetGbe a group acting on a setX. The action is calledfaithfuloreffectiveifg⋅x=xfor allx∈Ximplies thatg=eG. Equivalently, thehomomorphismfromGto the group of bijections ofXcorresponding to the action isinjective.
The action is calledfree(orsemiregularorfixed-point free) if the statement thatg⋅x=xfor somex∈Xalready implies thatg=eG. In other words, no non-trivial element ofGfixes a point ofX. This is a much stronger property than faithfulness.
For example, the action of any group on itself by left multiplication is free. This observation implies Cayley's theorem that any group can be embedded in a symmetric group (which is infinite when the group is). A finite group may act faithfully on a set of size much smaller than its cardinality (however such an action cannot be free). For instance the abelian 2-group (Z/2Z)^n (of cardinality 2^n) acts faithfully on a set of size 2n. This is not always the case; for example, the cyclic group Z/2^nZ cannot act faithfully on a set of size less than 2^n.
In general the smallest set on which a faithful action can be defined can vary greatly for groups of the same size. For example, three groups of size 120 are the symmetric groupS5, the icosahedral groupA5×Z/ 2Zand the cyclic groupZ/ 120Z. The smallest sets on which faithful actions can be defined for these groups are of size 5, 7, and 16 respectively.
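A brute-force sketch of the (Z/2Z)^n example above (hypothetical encoding: group elements as n-bit tuples, acting on 2n points arranged as n pairs, with bit i swapping the i-th pair), confirming that the action is faithful although the set has only 2n elements:

```python
from itertools import product

n = 3                          # (Z/2Z)^3 has 2^3 = 8 elements but acts on only 2n = 6 points
points = range(2 * n)

def act(g, p):
    """Bit i of g swaps the pair {2i, 2i+1}; all other points are fixed."""
    return p ^ 1 if g[p // 2] else p

# Faithful: the identity is the only element fixing every point
fixing_all = [g for g in product((0, 1), repeat=n) if all(act(g, p) == p for p in points)]
assert fixing_all == [(0,) * n]
# Not free: e.g. (1, 0, 0) moves points 0 and 1 but fixes 2, 3, 4, 5
assert act((1, 0, 0), 2) == 2
```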
The action ofGonXis calledtransitiveif for any two pointsx,y∈Xthere exists ag∈Gso thatg⋅x=y.
The action issimply transitive(orsharply transitive, orregular) if it is both transitive and free. This means that givenx,y∈Xthere is exactly oneg∈Gsuch thatg⋅x=y. IfXis acted upon simply transitively by a groupGthen it is called aprincipal homogeneous spaceforGor aG-torsor.
For an integern≥ 1, the action isn-transitiveifXhas at leastnelements, and for any pair ofn-tuples(x1, ...,xn), (y1, ...,yn) ∈Xnwith pairwise distinct entries (that isxi≠xj,yi≠yjwheni≠j) there exists ag∈Gsuch thatg⋅xi=yifori= 1, ...,n. In other words, the action on the subset ofXnof tuples without repeated entries is transitive. Forn= 2, 3this is often called double, respectively triple, transitivity. The class of2-transitive groups(that is, subgroups of a finite symmetric group whose action is 2-transitive) and more generallymultiply transitive groupsis well-studied in finite group theory.
An action issharplyn-transitivewhen the action on tuples without repeated entries inXnis sharply transitive.
The action of the symmetric group ofXis transitive, in factn-transitive for anynup to the cardinality ofX. IfXhas cardinalityn, the action of thealternating groupis(n− 2)-transitive but not(n− 1)-transitive.
The action of the general linear group of a vector space V on the set V ∖ {0} of non-zero vectors is transitive, but not 2-transitive (similarly for the action of the special linear group if the dimension of V is at least 2). The action of the orthogonal group of a Euclidean space is not transitive on nonzero vectors but it is on the unit sphere.
The action ofGonXis calledprimitiveif there is nopartitionofXpreserved by all elements ofGapart from the trivial partitions (the partition in a single piece and itsdual, the partition intosingletons).
Assume thatXis atopological spaceand the action ofGis byhomeomorphisms.
The action iswanderingif everyx∈Xhas aneighbourhoodUsuch that there are only finitely manyg∈Gwithg⋅U∩U≠ ∅.[4]
More generally, a pointx∈Xis called a point of discontinuity for the action ofGif there is an open subsetU∋xsuch that there are only finitely manyg∈Gwithg⋅U∩U≠ ∅. Thedomain of discontinuityof the action is the set of all points of discontinuity. Equivalently it is the largestG-stable open subsetΩ ⊂Xsuch that the action ofGonΩis wandering.[5]In a dynamical context this is also called awandering set.
The action is properly discontinuous if for every compact subset K ⊂ X there are only finitely many g ∈ G such that g⋅K ∩ K ≠ ∅. This is strictly stronger than wandering; for instance the action of Z on R² ∖ {(0, 0)} given by n⋅(x, y) = (2^n x, 2^{−n} y) is wandering and free but not properly discontinuous.[6]
The action bydeck transformationsof thefundamental groupof a locallysimply connected spaceon auniversal coveris wandering and free. Such actions can be characterized by the following property: everyx∈Xhas a neighbourhoodUsuch thatg⋅U∩U= ∅for everyg∈G∖ {eG}.[7]Actions with this property are sometimes calledfreely discontinuous, and the largest subset on which the action is freely discontinuous is then called thefree regular set.[8]
An action of a groupGon alocally compact spaceXis calledcocompactif there exists a compact subsetA⊂Xsuch thatX=G⋅A. For a properly discontinuous action, cocompactness is equivalent to compactness of thequotient spaceX/G.
Now assumeGis atopological groupandXa topological space on which it acts by homeomorphisms. The action is said to becontinuousif the mapG×X→Xis continuous for theproduct topology.
The action is said to beproperif the mapG×X→X×Xdefined by(g,x) ↦ (x,g⋅x)isproper.[9]This means that given compact setsK,K′the set ofg∈Gsuch thatg⋅K∩K′ ≠ ∅is compact. In particular, this is equivalent to proper discontinuity ifGis adiscrete group.
It is said to belocally freeif there exists a neighbourhoodUofeGsuch thatg⋅x≠xfor allx∈Xandg∈U∖ {eG}.
The action is said to bestrongly continuousif the orbital mapg↦g⋅xis continuous for everyx∈X. Contrary to what the name suggests, this is a weaker property than continuity of the action.[citation needed]
If G is a Lie group and X a differentiable manifold, then the subspace of smooth points for the action is the set of points x ∈ X such that the map g ↦ g⋅x is smooth. There is a well-developed theory of Lie group actions, i.e. actions that are smooth on the whole space.
Ifgacts bylinear transformationson amoduleover acommutative ring, the action is said to beirreducibleif there are no proper nonzerog-invariant submodules. It is said to besemisimpleif it decomposes as adirect sumof irreducible actions.
Consider a groupGacting on a setX. Theorbitof an elementxinXis the set of elements inXto whichxcan be moved by the elements ofG. The orbit ofxis denoted byG⋅x:G⋅x={g⋅x:g∈G}.{\displaystyle G{\cdot }x=\{g{\cdot }x:g\in G\}.}
The defining properties of a group guarantee that the set of orbits of (pointsxin)Xunder the action ofGform apartitionofX. The associatedequivalence relationis defined by sayingx~yif and only ifthere exists aginGwithg⋅x=y. The orbits are then theequivalence classesunder this relation; two elementsxandyare equivalent if and only if their orbits are the same, that is,G⋅x=G⋅y.
The group action istransitiveif and only if it has exactly one orbit, that is, if there existsxinXwithG⋅x=X. This is the case if and only ifG⋅x=XforallxinX(given thatXis non-empty).
The set of all orbits of X under the action of G is written as X/G (or, less frequently, as G\X), and is called the quotient of the action. In geometric situations it may be called the orbit space, while in algebraic situations it may be called the space of coinvariants, and written X_G, by contrast with the invariants (fixed points), denoted X^G: the coinvariants are a quotient while the invariants are a subset. The coinvariant terminology and notation are used particularly in group cohomology and group homology, which use the same superscript/subscript convention.
IfYis asubsetofX, thenG⋅Ydenotes the set{g⋅y:g∈Gandy∈Y}. The subsetYis said to beinvariant underGifG⋅Y=Y(which is equivalentG⋅Y⊆Y). In that case,Galso operates onYbyrestrictingthe action toY. The subsetYis calledfixed underGifg⋅y=yfor allginGand allyinY. Every subset that is fixed underGis also invariant underG, but not conversely.
Every orbit is an invariant subset ofXon whichGactstransitively. Conversely, any invariant subset ofXis a union of orbits. The action ofGonXistransitiveif and only if all elements are equivalent, meaning that there is only one orbit.
A G-invariant element of X is x ∈ X such that g⋅x = x for all g ∈ G. The set of all such x is denoted X^G and called the G-invariants of X. When X is a G-module, X^G is the zeroth cohomology group of G with coefficients in X, and the higher cohomology groups are the derived functors of the functor of G-invariants.
GivenginGandxinXwithg⋅x=x, it is said that "xis a fixed point ofg" or that "gfixesx". For everyxinX, thestabilizer subgroupofGwith respect tox(also called theisotropy grouporlittle group[10]) is the set of all elements inGthat fixx:Gx={g∈G:g⋅x=x}.{\displaystyle G_{x}=\{g\in G:g{\cdot }x=x\}.}This is asubgroupofG, though typically not a normal one. The action ofGonXisfreeif and only if all stabilizers are trivial. The kernelNof the homomorphism with the symmetric group,G→ Sym(X), is given by theintersectionof the stabilizersGxfor allxinX. IfNis trivial, the action is said to be faithful (or effective).
Letxandybe two elements inX, and letgbe a group element such thaty=g⋅x. Then the two stabilizer groupsGxandGyare related byGy=gGxg−1. Proof: by definition,h∈Gyif and only ifh⋅(g⋅x) =g⋅x. Applyingg−1to both sides of this equality yields(g−1hg)⋅x=x; that is,g−1hg∈Gx. An opposite inclusion follows similarly by takingh∈Gxandx=g−1⋅y.
The above says that the stabilizers of elements in the same orbit areconjugateto each other. Thus, to each orbit, we can associate aconjugacy classof a subgroup ofG(that is, the set of all conjugates of the subgroup). Let(H)denote the conjugacy class ofH. Then the orbitOhas type(H)if the stabilizerGxof some/anyxinObelongs to(H). A maximal orbit type is often called aprincipal orbit type.
Orbits and stabilizers are closely related. For a fixed x in X, consider the map f : G → X given by g ↦ g⋅x. By definition the image f(G) of this map is the orbit G⋅x. The condition for two elements to have the same image is {\displaystyle f(g)=f(h)\iff g{\cdot }x=h{\cdot }x\iff g^{-1}h{\cdot }x=x\iff g^{-1}h\in G_{x}\iff h\in gG_{x}.} In other words, f(g) = f(h) if and only if g and h lie in the same coset for the stabilizer subgroup G_x. Thus, the fiber f^{-1}({y}) of f over any y in G⋅x is contained in such a coset, and every such coset also occurs as a fiber. Therefore f induces a bijection between the set G/G_x of cosets for the stabilizer subgroup and the orbit G⋅x, which sends gG_x ↦ g⋅x.[11] This result is known as the orbit-stabilizer theorem.
IfGis finite then the orbit-stabilizer theorem, together withLagrange's theorem, gives|G⋅x|=[G:Gx]=|G|/|Gx|,{\displaystyle |G\cdot x|=[G\,:\,G_{x}]=|G|/|G_{x}|,}in other words the length of the orbit ofxtimes the order of its stabilizer is theorder of the group. In particular that implies that the orbit length is a divisor of the group order.
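A brute-force sketch of the orbit-stabilizer theorem (hypothetical example: the rotation group Z/6Z acting on the 2-element subsets of the vertices of a regular hexagon), checking that the orbit length times the stabilizer order always equals the group order:

```python
from itertools import combinations

n = 6
G = range(n)                                  # Z/6Z, acting by rotating vertex labels
X = list(combinations(range(n), 2))           # 2-element subsets of the hexagon's vertices

def act(g, s):
    return tuple(sorted((v + g) % n for v in s))

for x in X:
    orbit = {act(g, x) for g in G}
    stabilizer = [g for g in G if act(g, x) == x]
    assert len(orbit) * len(stabilizer) == n  # orbit-stabilizer theorem
# e.g. the "diameter" {0, 3} has orbit length 3 and stabilizer {0, 3} of order 2: 3 * 2 = 6
```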
This result is especially useful since it can be employed for counting arguments (typically in situations whereXis finite as well).
A result closely related to the orbit-stabilizer theorem is Burnside's lemma: {\displaystyle |X/G|={\frac {1}{|G|}}\sum _{g\in G}|X^{g}|,} where X^g is the set of points fixed by g. This result is mainly of use when G and X are finite, when it can be interpreted as follows: the number of orbits is equal to the average number of points fixed per group element.
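A short sketch of Burnside's lemma in action (hypothetical example: counting necklaces of four beads in two colours, up to rotation), comparing the lemma's average-fixed-points count with a direct enumeration of orbits:

```python
from itertools import product

n, colours = 4, 2
G = range(n)                                              # Z/4Z acting by rotation
X = list(product(range(colours), repeat=n))               # all 2^4 bead colourings

def act(g, x):
    return tuple(x[(i - g) % n] for i in range(n))

fixed = sum(sum(1 for x in X if act(g, x) == x) for g in G)
orbits = {frozenset(act(g, x) for g in G) for x in X}
# Burnside: the number of orbits equals the average number of fixed colourings per rotation
assert fixed // len(G) == len(orbits) == 6                # six distinct necklaces
```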
Fixing a groupG, the set of formal differences of finiteG-sets forms a ring called theBurnside ringofG, where addition corresponds todisjoint union, and multiplication toCartesian product.
The notion of group action can be encoded by theactiongroupoidG′ =G⋉Xassociated to the group action. The stabilizers of the action are the vertex groups of the groupoid and the orbits of the action are its components.
IfXandYare twoG-sets, amorphismfromXtoYis a functionf:X→Ysuch thatf(g⋅x) =g⋅f(x)for allginGand allxinX. Morphisms ofG-sets are also calledequivariant mapsorG-maps.
The composition of two morphisms is again a morphism. If a morphismfis bijective, then its inverse is also a morphism. In this casefis called anisomorphism, and the twoG-setsXandYare calledisomorphic; for all practical purposes, isomorphicG-sets are indistinguishable.
Some example isomorphisms: every free G-set is isomorphic to a disjoint union of copies of G acting on itself by left multiplication, and every transitive G-set is isomorphic to the left-multiplication action of G on the set of left cosets G/H, where H is the stabilizer of any of its points.
With this notion of morphism, the collection of allG-sets forms acategory; this category is aGrothendieck topos(in fact, assuming a classicalmetalogic, thistoposwill even be Boolean).
We can also consider actions ofmonoidson sets, by using the same two axioms as above. This does not define bijective maps and equivalence relations however. Seesemigroup action.
Instead of actions on sets, we can define actions of groups and monoids on objects of an arbitrary category: start with an objectXof some category, and then define an action onXas a monoid homomorphism into the monoid ofendomorphismsofX. IfXhas an underlying set, then all definitions and facts stated above can be carried over. For example, if we take the category of vector spaces, we obtaingroup representationsin this fashion.
We can view a groupGas a category with a single object in which every morphism isinvertible.[15]A (left) group action is then nothing but a (covariant)functorfromGto thecategory of sets, and a group representation is a functor fromGto thecategory of vector spaces.[16]A morphism betweenG-sets is then anatural transformationbetween the group action functors.[17]In analogy, an action of agroupoidis a functor from the groupoid to the category of sets or to some other category.
In addition tocontinuous actionsof topological groups on topological spaces, one also often considerssmooth actionsof Lie groups onsmooth manifolds, regular actions ofalgebraic groupsonalgebraic varieties, andactionsofgroup schemesonschemes. All of these are examples ofgroup objectsacting on objects of their respective category.
|
https://en.wikipedia.org/wiki/Stabilizer_subgroup
|
An optical disc image (or ISO image, from the ISO 9660 file system used with CD-ROM media) is a disk image that contains everything that would be written to an optical disc, sector by sector, including the optical disc file system.[3] ISO images contain the binary image of an optical media file system (usually ISO 9660 and its extensions or UDF), including the data in its files in binary format, copied exactly as they were stored on the disc. The data inside the ISO image will be structured according to the file system that was used on the optical disc from which it was created.
ISO images can be created from optical discs bydisk imaging software, or from a collection offilesbyoptical disc authoring software, or from a differentdisk image fileby means ofconversion. Software distributed on bootable discs is often available for download in ISO image format; like any other ISO image, it may be written to an optical disc such as CD, DVD and Blu-Ray.
Optical-disc images are uncompressed and do not use a particular container format; they are a sector-by-sector copy of the data on an optical disc, stored inside a binary file. Besides ISO 9660, an ISO image might also contain a UDF (ISO/IEC 13346) file system (commonly used by DVDs and Blu-ray Discs), including the data in its files in binary format, copied exactly as they were stored on the disc.
The.isofile extensionis the one most commonly used for this type of disc images. The.imgextension can also be found on some ISO image files, such as in some images from MicrosoftDreamSpark; however,IMG files, which also use the.imgextension, tend to have slightly different contents. The.udffile extension is sometimes used to indicate that the file system inside the ISO image is actually UDF and not ISO 9660.
ISO files store only the user data from each sector on an optical disc, ignoring thecontrol headersand error correction data, and are therefore slightly smaller than a raw disc image of optical media. Since the size of the user-data portion of a sector (logical sector) in data optical discs is 2,048 bytes, the size of an ISO image will be a multiple of 2,048.
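A tiny sketch of that size relationship (plain Python; the file name is hypothetical): an ISO image's size should be an exact multiple of the 2,048-byte logical sector, which also yields the sector count:

```python
import os

SECTOR = 2048                          # user-data bytes per logical sector
path = "example.iso"                   # hypothetical image file

size = os.path.getsize(path)
if size % SECTOR == 0:
    print(f"{path}: {size // SECTOR} logical sectors ({size} bytes)")
else:
    print(f"{path}: not 2,048-byte aligned; likely a raw (e.g. 2,352 bytes/sector) image")
```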
Any single-trackCD-ROM,DVDorBlu-raydisc can be archived in ISO format as a true digital copy of the original. Unlike a physical optical disc, an image can be transferred over any data link or removable storage medium. An ISO image can be opened with almost every multi-formatfile archiver. Native support for handling ISO images varies from operating system to operating system.
With a suitabledriversoftware, an ISO can be "mounted" – allowing the operating system to interface with it, just as if the ISO were a physical optical disc. MostUnix-based operating systems, includingLinuxandmacOS, have this built-in capability to mount an ISO. Versions of Windows, beginning withWindows 8, also have such a capability.[4]For other operating systems, separately available software drivers can be installed to achieve the same objective.
A CD can have multipletracks, which can contain computer data, audio, or video.File systemssuch asISO 9660are stored inside one of these tracks. Since ISO images are expected to contain a binary copy of the file system and its contents, there is no concept of a "track" inside an ISO image, since a track is a container for the contents of an ISO image. This means that CDs with multiple tracks can not be stored inside a single ISO image; at most, an ISO image will contain the data inside one of those multiple tracks, and only if it is stored inside a standard file system.
This also means thataudio CDs, which are usually composed of multiple tracks, can not be stored inside an ISO image. Furthermore, not even a single track of an audio CD can be stored as an ISO image, since audio tracks do not contain a file system inside them, but only a continuous stream of encoded audio data. This audio is stored onsectors of 2352 bytesdifferent from those that store a file system and it is not stored inside files; it is addressed withtrack numbers,index pointsand aCD time codethat are encoded into thelead-inof each session of the CD-Audio disc.
Video CDs and Super Video CDs require at least two tracks on a CD, so it is also not possible to store an image of one of these discs inside an ISO image file; an .IMG file, however, can achieve this.
Formats such asCUE/BIN,CCD/IMGandMDS/MDFformats can be used to store multi-track disc images, including audio CDs. These formats store a raw disc image of the complete disc, including information from all tracks, along with a companion file describing the multiple tracks and the characteristics of each of those tracks. This would allow an optical media burning tool to have all the information required to correctly burn the image on a new disc. For audio CDs, one can also transfer the audio data into uncompressed audio files likeWAVorAIFF, optionally reserving the metadata (seeCD ripping).
Most software that is capable of writing from ISO images to hard disks or recordable media (CD / DVD / BD) is generally not able to write from ISO disk images toflash drives. This limitation is more related to the availability of software tools able to perform this task, than to problems in the format itself. However, since 2011, various software has existed to write raw image files to USB flash drives.[5][6]
.ISO files are commonly used inemulatorsto replicate aCDimage. Emulators such asDolphinandPCSX2use .iso files to emulateWiiandGameCubegames, andPlayStation 2games, respectively.[7][8]They can also be used as virtual CD-ROMs for hypervisors such asVMware WorkstationorVirtualBox. Other uses are burning disk images of operating systems to physical install media.
|
https://en.wikipedia.org/wiki/ISO_image
|