A logic gate is a device that performs a Boolean function, a logical operation performed on one or more binary inputs that produces a single binary output. Depending on the context, the term may refer to an ideal logic gate, one that has, for instance, zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device[1] (see ideal and real op-amps for comparison).
The primary way of building logic gates uses diodes or transistors acting as electronic switches. Today, most logic gates are made from MOSFETs (metal–oxide–semiconductor field-effect transistors).[2] They can also be constructed using vacuum tubes, electromagnetic relays with relay logic, fluidic logic, pneumatic logic, optics, molecules, acoustics,[3] or even mechanical or thermal[4] elements.
Logic gates can be cascaded in the same way that Boolean functions can be composed, allowing the construction of a physical model of all of Boolean logic, and therefore, all of the algorithms and mathematics that can be described with Boolean logic. Logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), and computer memory, all the way up through complete microprocessors,[5] which may contain more than 100 million logic gates.
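Cascading gates to compose Boolean functions can be illustrated with a minimal sketch in plain Python. The gate functions and the half-adder example below are our own illustration, not any standard API; they show how small gates combine into a more complex function.

```python
# Basic gates modeled as functions on the bits 0 and 1.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# Cascading the basic gates yields more complex functions,
# e.g. a 1-bit half adder (XOR built from AND/OR/NOT, plus a carry).
def half_adder(a, b):
    total = OR(AND(a, NOT(b)), AND(NOT(a), b))  # XOR of a and b
    carry = AND(a, b)
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Composing these functions in software mirrors wiring gate outputs to gate inputs in hardware, which is the sense in which logic circuits model Boolean algebra.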
Compound logic gates AND-OR-Invert (AOI) and OR-AND-Invert (OAI) are often employed in circuit design because their construction using MOSFETs is simpler and more efficient than the sum of the individual gates.[6]
The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705), influenced by the ancient I Ching's binary system.[7][8] Leibniz established that using the binary system combined the principles of arithmetic and logic.
The analytical engine devised by Charles Babbage in 1837 used mechanical logic gates based on gears.[9]
In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits.[10] Early electromechanical computers were constructed from switches and relay logic rather than the later innovations of vacuum tubes (thermionic valves) or transistors (from which later electronic computers were constructed). Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit,[11] received part of the 1954 Nobel Prize in Physics for the first modern electronic AND gate in 1924. Konrad Zuse designed and built electromechanical logic gates for his computer Z1 (from 1935 to 1938).
From 1934 to 1936, NEC engineer Akira Nakashima, Claude Shannon and Victor Shestakov introduced switching circuit theory in a series of papers showing that two-valued Boolean algebra, which they discovered independently, can describe the operation of switching circuits.[12][13][14][15] Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Switching circuit theory became the foundation of digital circuit design, as it became widely known in the electrical engineering community during and after World War II, with theoretical rigor superseding the ad hoc methods that had prevailed previously.[15]
In 1948, Bardeen and Brattain patented an insulated-gate transistor (IGFET) with an inversion layer. Their concept forms the basis of CMOS technology today.[16] In 1957, Frosch and Derick were able to manufacture PMOS and NMOS planar gates.[17] Later, a team at Bell Labs demonstrated a working MOS device with PMOS and NMOS gates.[18] Both types were later combined and adapted into complementary MOS (CMOS) logic by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963.[19]
There are two sets of symbols for elementary logic gates in common use, both defined in ANSI/IEEE Std 91-1984 and its supplement ANSI/IEEE Std 91a-1991. The "distinctive shape" set, based on traditional schematics, is used for simple drawings and derives from United States Military Standard MIL-STD-806 of the 1950s and 1960s.[20] It is sometimes unofficially described as "military", reflecting its origin. The "rectangular shape" set, based on ANSI Y32.14 and other early industry standards as later refined by IEEE and IEC, has rectangular outlines for all types of gate and allows representation of a much wider range of devices than is possible with the traditional symbols.[21] The IEC standard, IEC 60617-12, has been adopted by other standards, such as EN 60617-12:1999 in Europe, BS EN 60617-12:1999 in the United Kingdom, and DIN EN 60617-12:1998 in Germany.
The mutual goal of IEEE Std 91-1984 and IEC 617-12 was to provide a uniform method of describing the complex logic functions of digital circuits with schematic symbols. These functions were more complex than simple AND and OR gates: they ranged from medium-scale circuits such as a 4-bit counter to large-scale circuits such as a microprocessor.
IEC 617-12 and its renumbered successor IEC 60617-12 do not explicitly show the "distinctive shape" symbols, but do not prohibit them.[21]These are, however, shown in ANSI/IEEE Std 91 (and 91a) with this note: "The distinctive-shape symbol is, according to IEC Publication 617, Part 12, not preferred, but is not considered to be in contradiction to that standard." IEC 60617-12 correspondingly contains the note (Section 2.1) "Although non-preferred, the use of other symbols recognized by official national standards, that is distinctive shapes in place of symbols [list of basic gates], shall not be considered to be in contradiction with this standard. Usage of these other symbols in combination to form complex symbols (for example, use as embedded symbols) is discouraged." This compromise was reached between the respective IEEE and IEC working groups to permit the IEEE and IEC standards to be in mutual compliance with one another.
In the 1980s, schematics were the predominant method to design both circuit boards and custom ICs known as gate arrays. Today, custom ICs and field-programmable gate arrays are typically designed with hardware description languages (HDLs) such as Verilog or VHDL.
By use of De Morgan's laws, an AND function is identical to an OR function with negated inputs and outputs. Likewise, an OR function is identical to an AND function with negated inputs and outputs. A NAND gate is equivalent to an OR gate with negated inputs, and a NOR gate is equivalent to an AND gate with negated inputs.
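These equivalences can be checked exhaustively, since a two-input gate has only four input combinations. The sketch below (plain Python, function names our own) verifies that NAND equals an OR of negated inputs and NOR equals an AND of negated inputs:

```python
# Exhaustive check of the De Morgan gate equivalences over all inputs.
def NOT(a):     return 1 - a
def NAND(a, b): return NOT(a & b)
def NOR(a, b):  return NOT(a | b)

for a in (0, 1):
    for b in (0, 1):
        assert NAND(a, b) == NOT(a) | NOT(b)  # NAND = OR with negated inputs
        assert NOR(a, b) == NOT(a) & NOT(b)   # NOR = AND with negated inputs
print("De Morgan equivalences hold for all input combinations")
```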
This leads to an alternative set of symbols for basic gates that use the opposite core symbol (AND or OR) but with the inputs and outputs negated. Use of these alternative symbols can make logic circuit diagrams much clearer and help to show accidental connection of an active-high output to an active-low input, or vice versa. Any connection that has logic negations at both ends can be replaced by a negationless connection and a suitable change of gate, or vice versa. Any connection that has a negation at one end and no negation at the other can be made easier to interpret by instead using the De Morgan equivalent symbol at either of the two ends. When negation or polarity indicators on both ends of a connection match, there is no logic negation in that path (effectively, bubbles "cancel"), making it easier to follow logic states from one symbol to the next. This is commonly seen in real logic diagrams – thus the reader must not get into the habit of associating the shapes exclusively as OR or AND shapes, but must also take into account the bubbles at both inputs and outputs in order to determine the "true" logic function indicated.
A De Morgan symbol can show more clearly a gate's primary logical purpose and the polarity of its nodes that are considered in the "signaled" (active, on) state. Consider the simplified case where a two-input NAND gate is used to drive a motor when either of its inputs is brought low by a switch. The "signaled" state (motor on) occurs when either one OR the other switch is on. Unlike a regular NAND symbol, which suggests AND logic, the De Morgan version, a two negative-input OR gate, correctly shows that OR is of interest. The regular NAND symbol has a bubble at the output and none at the inputs (the opposite of the states that will turn the motor on), but the De Morgan symbol shows both inputs and output in the polarity that will drive the motor.
De Morgan's theorem is most commonly used to implement logic gates as combinations of only NAND gates, or as combinations of only NOR gates, for economic reasons.
Output comparison of various logic gates:
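The comparison table itself did not survive extraction here, but it can be regenerated with a short script. The gate selection and column layout below are our own choice for illustration:

```python
# Print a truth table comparing the outputs of common two-input gates.
gates = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
    "XNOR": lambda a, b: 1 - (a ^ b),
}

print("A B | " + " ".join(f"{name:>4}" for name in gates))
for a in (0, 1):
    for b in (0, 1):
        row = " ".join(f"{fn(a, b):>4}" for fn in gates.values())
        print(f"{a} {b} | {row}")
```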
Charles Sanders Peirce (during 1880–1881) showed that NOR gates alone (or alternatively NAND gates alone) can be used to reproduce the functions of all the other logic gates, but his work on it was unpublished until 1933.[24] The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called the Sheffer stroke; the logical NOR is sometimes called Peirce's arrow.[25] Consequently, these gates are sometimes called universal logic gates.[26]
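This universality is easy to demonstrate concretely. The sketch below (our own illustration, in plain Python) rebuilds NOT, AND, and OR using nothing but a two-input NAND:

```python
# NAND is functionally complete: every other basic gate can be built from it.
def NAND(a, b):
    return 1 - (a & b)

def NOT(a):    return NAND(a, a)           # tie both inputs together
def AND(a, b): return NOT(NAND(a, b))      # invert NAND's output
def OR(a, b):  return NAND(NOT(a), NOT(b)) # via De Morgan's law

for a in (0, 1):
    for b in (0, 1):
        assert NOT(a) == 1 - a
        assert AND(a, b) == (a & b)
        assert OR(a, b) == (a | b)
print("NOT, AND, and OR all realized from NAND alone")
```

An analogous construction works with NOR, which is why either gate alone suffices to build any digital logic circuit.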
Logic gates can also be used to hold a state, allowing data storage. A storage element can be constructed by connecting several gates in a "latch" circuit. Latching circuitry is used in static random-access memory. More complicated designs that use clock signals and that change only on a rising or falling edge of the clock are called edge-triggered "flip-flops". Formally, a flip-flop is called a bistable circuit, because it has two stable states which it can maintain indefinitely. The combination of multiple flip-flops in parallel, to store a multiple-bit value, is known as a register. When using any of these gate setups the overall system has memory; it is then called a sequential logic system, since its output can be influenced by its previous state(s), i.e. by the sequence of input states. In contrast, the output from combinational logic is purely a combination of its present inputs, unaffected by the previous input and output states.
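A toy simulation can show how two cross-coupled gates hold a bit. The sketch below models an SR latch built from two NOR gates, iterating the gate equations until the outputs settle; it models logic levels only, not timing, and all names are our own illustration:

```python
# SR latch from two cross-coupled NOR gates: feedback stores one bit.
def NOR(a, b):
    return 1 - (a | b)

def sr_latch(S, R, q=0, nq=1):
    """Apply Set/Reset inputs and iterate the feedback loop until stable."""
    for _ in range(4):                 # a few passes are enough to settle
        q, nq = NOR(R, nq), NOR(S, q)  # each output feeds the other gate
    return q, nq

q, nq = sr_latch(S=1, R=0)              # set: Q goes high
print(q, nq)
q, nq = sr_latch(S=0, R=0, q=q, nq=nq)  # hold: state is remembered
print(q, nq)
q, nq = sr_latch(S=0, R=1, q=q, nq=nq)  # reset: Q goes low
print(q, nq)
```

The "hold" step is the key point: with both inputs inactive, the feedback alone maintains the stored value, which is what makes the circuit a memory element rather than combinational logic.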
These logic circuits are used in computer memory. They vary in performance, based on factors of speed, complexity, and reliability of storage, and many different types of designs are used based on the application.
A functionally complete logic system may be composed of relays, valves (vacuum tubes), or transistors.
Electronic logic gates differ significantly from their relay-and-switch equivalents. They are much faster, consume much less power, and are much smaller (all by a factor of a million or more in most cases). Also, there is a fundamental structural difference. The switch circuit creates a continuous metallic path for current to flow (in either direction) between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gain voltage amplifier, which sinks a tiny current at its input and produces a low-impedance voltage at its output. It is not possible for current to flow between the output and the input of a semiconductor logic gate.
For small-scale logic, designers now use prefabricated logic gates from families of devices such as the TTL 7400 series by Texas Instruments, the CMOS 4000 series by RCA, and their more recent descendants. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack many mixed logic gates into a single integrated circuit. The field-programmable nature of programmable logic devices such as FPGAs has reduced the 'hard' property of hardware; it is now possible to change the logic design of a hardware system by reprogramming some of its components, thus allowing the features or function of a hardware implementation of a logic system to be changed.
An important advantage of standardized integrated circuit logic families, such as the 7400 and 4000 families, is that they can be cascaded. This means that the output of one gate can be wired to the inputs of one or several other gates, and so on. Systems with varying degrees of complexity can be built without great concern of the designer for the internal workings of the gates, provided the limitations of each integrated circuit are considered.
The output of one gate can only drive a finite number of inputs to other gates, a number called the 'fan-out limit'. Also, there is always a delay, called the 'propagation delay', from a change in input of a gate to the corresponding change in its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speed synchronous circuits. Additional delay can be caused when many inputs are connected to an output, due to the distributed capacitance of all the inputs and wiring and the finite amount of current that each output can provide.
There are several logic families with different characteristics (power consumption, speed, cost, size), such as: RDL (resistor–diode logic), RTL (resistor–transistor logic), DTL (diode–transistor logic), TTL (transistor–transistor logic) and CMOS. There are also sub-variants, e.g. standard CMOS logic versus advanced types still using CMOS technology but with optimizations to avoid loss of speed due to slower PMOS transistors.
The simplest family of logic gates uses bipolar transistors, and is called resistor–transistor logic (RTL). Unlike simple diode logic gates (which do not have a gain element), RTL gates can be cascaded indefinitely to produce more complex logic functions. RTL gates were used in early integrated circuits. For higher speed and better density, the resistors used in RTL were replaced by diodes, resulting in diode–transistor logic (DTL). Transistor–transistor logic (TTL) then supplanted DTL.
As integrated circuits became more complex, bipolar transistors were replaced with smaller field-effect transistors (MOSFETs); see PMOS and NMOS. To reduce power consumption still further, most contemporary chip implementations of digital systems now use CMOS logic. CMOS uses complementary (both n-channel and p-channel) MOSFET devices to achieve high speed with low power dissipation.
Other types of logic gates include, but are not limited to:[27]
A three-state logic gate is a type of logic gate that can have three different outputs: high (H), low (L) and high-impedance (Z). The high-impedance state plays no role in the logic, which is strictly binary. These devices are used on buses of the CPU to allow multiple chips to send data. A group of three-state outputs driving a line with a suitable control circuit is basically equivalent to a multiplexer, which may be physically distributed over separate devices or plug-in cards.
In electronics, a high output would mean the output is sourcing current from the positive power terminal (positive voltage). A low output would mean the output is sinking current to the negative power terminal (zero voltage). High impedance would mean that the output is effectively disconnected from the circuit.
Non-electronic implementations are varied, though few of them are used in practical applications. Many early electromechanical digital computers, such as the Harvard Mark I, were built from relay logic gates, using electromechanical relays. Logic gates can be made using pneumatic devices, such as the Sorteberg relay, or mechanical logic gates, including on a molecular scale.[29] Various types of fundamental logic gates have been constructed using molecules (molecular logic gates), which are based on chemical inputs and spectroscopic outputs.[30] Logic gates have been made out of DNA (see DNA nanotechnology)[31] and used to create a computer called MAYA (see MAYA-II). Logic gates can be made from quantum-mechanical effects; see quantum logic gate. Photonic logic gates use nonlinear optical effects.
In principle, any method that leads to a gate that is functionally complete (for example, either a NOR or a NAND gate) can be used to make any kind of digital logic circuit. Note that the use of 3-state logic for bus systems is not needed, and can be replaced by digital multiplexers, which can be built using only simple logic gates (such as NAND gates, NOR gates, or AND and OR gates).
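As a concrete instance of that replacement, a 2-to-1 multiplexer can be built from four NAND gates alone. The sketch below is our own illustration of the standard construction:

```python
# A 2-to-1 multiplexer built from four NAND gates only.
def NAND(a, b):
    return 1 - (a & b)

def mux2(sel, d0, d1):
    """Output d0 when sel is 0 and d1 when sel is 1."""
    nsel = NAND(sel, sel)   # inverter made from a NAND gate
    a = NAND(d0, nsel)      # passes d0 when sel is 0
    b = NAND(d1, sel)       # passes d1 when sel is 1
    return NAND(a, b)       # combine the two branches

for sel in (0, 1):
    for d0 in (0, 1):
        for d1 in (0, 1):
            assert mux2(sel, d0, d1) == (d1 if sel else d0)
print("4-NAND multiplexer selects the expected input in every case")
```

A wider multiplexer built the same way lets several sources share one line under the control of select signals, which is what makes three-state bus drivers unnecessary in principle.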
Source: https://en.wikipedia.org/wiki/Logic_gate
Key escrow (also known as a "fair" cryptosystem)[1] is an arrangement in which the keys needed to decrypt encrypted data are held in escrow so that, under certain circumstances, an authorized third party may gain access to those keys. These third parties may include businesses, who may want access to employees' secure business-related communications, or governments, who may wish to be able to view the contents of encrypted communications (also known as exceptional access).[2]
The technical problem is a largely structural one. Access to protected information must be provided only to the intended recipient and at least one third party. The third party should be permitted access only under carefully controlled conditions, for instance, a court order. Thus far, no system design has been shown to meet this requirement fully on a technical basis alone. All proposed systems also require correct functioning of some social linkage, for instance the process of request for access, examination of the request for 'legitimacy' (as by a court), and granting of access by technical personnel charged with access control. All such linkages/controls have serious problems from a system design security perspective. Systems in which the key may not be changed easily are rendered especially vulnerable, as the accidental release of the key will result in many devices becoming totally compromised, necessitating an immediate key change or replacement of the system.
On a national level, key escrow is controversial in many countries for at least two reasons. One involves mistrust of the security of the structural escrow arrangement. Many countries have a long history of less than adequate protection of others' information by assorted organizations, public and private, even when the information is held only under an affirmative legal obligation to protect it from unauthorized access. The other involves technical concern over the additional vulnerabilities likely to be introduced by supporting key escrow operations.[2] Thus far, no key escrow system has been designed which meets both objections, and nearly all have failed to meet even one.
Key escrow is proactive, anticipating the need for access to keys; a retroactive alternative is key disclosure law, where users are required to surrender keys upon demand by law enforcement or else face legal penalties. Key disclosure law avoids some of the technical issues and risks of key escrow systems, but also introduces new risks, like loss of keys, and legal issues, such as involuntary self-incrimination. The ambiguous term key recovery is applied to both types of systems.
Source: https://en.wikipedia.org/wiki/Key_recovery
Open spectrum (also known as free spectrum) is a movement to get the Federal Communications Commission to provide more unlicensed radio-frequency spectrum that is available for use by all. Proponents of the "commons model" of open spectrum advocate a future where all the spectrum is shared, and in which people use Internet protocols to communicate with each other, and smart devices, which would find the most effective energy level, frequency, and mechanism.[1] Previous government-imposed limits on who can have stations and who cannot would be removed,[2] and everyone would be given equal opportunity to use the airwaves for their own radio station, television station, or even to broadcast their own website. A notable advocate for open spectrum is Lawrence Lessig.
National governments currently allocate bands of spectrum (sometimes based on guidelines from the ITU) for use by anyone so long as they respect certain technical limits, most notably a limit on total transmission power. Unlicensed spectrum is decentralized: there are no license payments or central control for users. However, sharing spectrum between unlicensed equipment requires that mitigation techniques (e.g. power limitation, duty cycle, dynamic frequency selection) are imposed to ensure that these devices operate without interference.
Traditional users of unlicensed spectrum include cordless telephones and baby monitors. A collection of new technologies is taking advantage of unlicensed spectrum, including Wi-Fi, Ultra Wideband, spread spectrum, software-defined radio, cognitive radio, and mesh networks.[3]
Astronomers use many radio telescopes to look at objects such as pulsars in our own Galaxy and at distant radio galaxies up to about half the distance of the observable sphere of our Universe. The use of radio frequencies for communication creates pollution from the point of view of astronomers, at best creating noise or, at worst, totally blinding the astronomical community for certain types of observations of very faint objects. As more and more frequencies are used for communication, astronomical observations are getting more and more difficult.
Negotiations to defend the parts of the spectrum most useful for observing the Universe are mostly carried out by the international astronomical community, as a grassroots community effort, coordinated in the Scientific Committee on Frequency Allocations for Radio Astronomy and Space Science.
Source: https://en.wikipedia.org/wiki/Open_spectrum
Pirate decryption is the decryption, or decoding, of pay TV or pay radio signals without permission from the original broadcaster. The term "pirate" is used in the sense of copyright infringement. The MPAA and other groups which lobby in favour of intellectual property (specifically copyright and trademark) regulations have labelled such decryption as "signal theft"[1] and object to it, arguing that losing out on a potential chance to profit from a consumer's subscription fees counts as a loss of actual profit.
The concept of pay TV or pay television involves a broadcaster deliberately transmitting signals in a non-standard, scrambled or encrypted format in order to charge viewers a subscription fee for the use of a special decoder needed to receive the scrambled broadcast signal.[citation needed]
Early pay TV broadcasts in countries such as the United States used standard over-the-air transmitters; many restrictions applied, as anti-siphoning laws were enacted to prevent broadcasters of scrambled signals from engaging in activities to harm the development of standard free-to-air commercial broadcasting. Scrambled signals were limited to large communities which already had a certain minimum number of unencrypted broadcast stations, relegated to certain frequencies. Restrictions were placed on access of pay TV broadcasters to content such as recent feature films in order to give free TV broadcasters a chance to air these programs before they were siphoned away by pay channels.
Under these conditions, the pay TV concept was very slow to become commercially viable; most television and radio broadcasts remained in the clear and were funded by commercial advertising, individual and corporate donations to educational broadcasters, direct funding by governments, or license fees charged to the owners of receiving apparatus (the BBC in the UK, for example).
Pay TV only began to become common after the widespread installation of cable television systems in the 1970s and 1980s; early premium channels were most often movie broadcasters such as the US-based Home Box Office and Cinemax, both currently owned by Warner Bros. Discovery. Signals were obtained for distribution by cable companies using C-band satellite dish antennae of up to ten feet in diameter; the first satellite signals were originally unencrypted, as extremely few individual end users could afford the large and expensive satellite receiving apparatus.
As satellite dishes became smaller and more affordable, most satellite signal providers adopted various forms of encryption in order to limit reception to certain groups (such as hotels, cable companies, or paid subscribers) or to specific political regions. Early encryption attempts such as Videocipher II were common targets for pirate decryption, as dismayed viewers saw large amounts of formerly unencrypted programming vanishing. Nowadays some free-to-air satellite content in the USA still remains, but many of the channels still in the clear are ethnic channels, local over-the-air TV stations, international broadcasters, religious programming, backfeeds of network programming destined for local TV stations, or signals uplinked from mobile satellite trucks to provide live news and sports coverage.
Specialty channels and premium movie channels are most often encrypted; in most countries,[citation needed] broadcasts containing explicit pornography must be encrypted to prevent accidental viewing.
Initial attempts to encrypt broadcast signals were based on analogue techniques of questionable security, the most common being one or a combination of techniques such as:
These systems were designed to provide decoders to cable operators at low cost; a serious tradeoff was made in security. Some analogue decoders were addressable so that cable companies could turn channels on or off remotely, but this only gave the cable companies control of their own descramblers — valuable if needed to deactivate a stolen cable company decoder but useless against hardware designed by signal pirates.
The first encryption methods used for big-dish satellite systems used a hybrid approach: analogue video and digitally encrypted audio. This approach was somewhat more secure, but not completely free of problems due to piracy of video signals.
Digital TV services, by their nature, can more easily implement encryption technologies. When first introduced, digital DBS broadcasts were touted as being secure enough to put an end to piracy once and for all. Often these claims would be made in press releases.
The enthusiasm was short-lived. In theory the system was an ideal solution, but some corners had been cut in the initial implementations in the rush to launch the service. The first US DirecTV smart cards were based on the BSkyB VideoCrypt card known as the Sky 09 card. The Sky 09 card had been introduced in 1994 as a replacement for the compromised Sky 07 card; by 1995, it too had been totally compromised in Europe. The countermeasure employed by NDS Group, the designers of the VideoCrypt system, was to issue a new smartcard (known as the Sky 10 card) that included an ASIC in addition to the card's microcontroller. This innovation made it harder for pirates to manufacture pirate VideoCrypt cards. Previously, the program in the Sky card's microcontroller could be rewritten for other microcontrollers without too much difficulty. The addition of an ASIC took the battle between the system designers and pirates to another level, and it bought BSkyB at least six months of almost piracy-free broadcasting before the pirate Sky 10 cards appeared on the market in 1996. Initial pirate Sky 10 cards had an implementation of this ASIC, but once supplies ran out, pirates resorted to extracting the ASICs from deactivated Sky cards and reusing them.
The first US DirecTV "F" card did not contain an ASIC, and it was quickly compromised. Pirate DirecTV cards based on microcontrollers that were, ironically, often more secure than the one used in the official card became a major problem for DirecTV. Similar errors had been made by the developers of the UK's terrestrial digital Xtraview Encryption System, which provided no encryption and relied on hiding channels from listings.
The DirecTV "F" card was replaced with the "H" card, which contained an ASIC to handle decryption. However, due to similarities between the "H" and other existing cards, it became apparent that while the signal could not be received without the card and its ASIC, the card itself was vulnerable to tampering by reprogramming it to add channel tiers or additional programming, which allowed them to be viewed without paying.
Two more card swaps would be necessary before DirecTV could reduce piracy by a significant amount; a number of other providers are also in the middle of swapping out all of their subscribers' smartcards due to compromised encryption methods or technology.
A number of vulnerabilities exist even with digital encryption:
On May 15, 2008, a jury in the Echostar vs NDS civil lawsuit (8:2003cv00950) awarded Echostar just over US$1,500 in damages; Echostar originally sought $1 billion in damages from NDS. However, a jury was not convinced of the allegations Echostar had made against NDS and awarded damages only for the factual claims that were proven and for which the jury believed an award should be given in accordance with the laws of the United States.
In some cases, fraudulent cloning has been used to assign identical serial numbers to multiple receivers or cards; subscribe (or unsubscribe) one receiver and the same programming changes appear on all of the others. Various techniques have also been used to provide write protection for memory on the smartcards or receivers to make deactivation or sabotage of tampered cards by signal providers more difficult.
Systems based on removable smartcards do facilitate the implementation of renewable security, where compromised systems can be repaired by sending new and redesigned cards to legitimate subscribers, but they also make the task of replacing smartcards with tampered cards, or inserting devices between card and receiver, easier for pirates. In some European systems, the conditional-access module (CAM), which serves as a standardized interface between smartcard and DVB receiver, has also been targeted for tampering or replaced by third-party hardware.
Improvements in hardware and system design can be used to significantly reduce the risks of any encryption system being compromised, but many systems once thought secure have been proven vulnerable to sufficiently sophisticated and malicious attackers.
Two-way communication has also been used by designers of proprietary digital cable TV equipment in order to make tampering more difficult or easier to detect. A system involving the use of a high-pass filter on the line to prevent two-way communication has been widely promoted by some businesses as a means of disabling communication of billing information for pay-per-view programming, but this device is effectively worthless, as a cable operator remains free to unsubscribe a digital set-top box if two-way communication has been lost. As a device intended to pass signals in one direction only, the line filter offers nothing that couldn't be done (with the same results) by an inexpensive signal booster: a simple one-way RF amplifier widely and cheaply available for other purposes. Also, many such boxes will disallow access to pay-per-view content after a set number of programs are watched before the box can transmit this data to the headend, further reducing the usefulness of such a filter.
Some of the terminology used to describe various devices, programs and techniques dealing with Pay-TV piracy is named for the particular hacks. The "Season" interface for example is named after the Season7 hack on Sky TV which allowed a PC to emulate a legitimate Sky-TV smartcard. The Season7 referred to the seventh and final season ofStar Trek: The Next Generationwhich was then showing on Sky One. The "Phoenix" hack was named after the mythical bird which can reanimate itself. The hack itself reactivated smartcards that had been switched off by the providers.
Some of the terminology used on Internet discussion sites to describe the various devices, programs and techniques used in dealing with video piracy is strange, non-standard, or specific to one system. The terms are often no different from the brand names used by legitimate products and serve the same function.
Smart card piracy involves the unauthorised use of conditional-access smart cards in order to gain, and potentially provide to others, unauthorised access to pay-TV or even private media broadcasts. Smart card piracy generally occurs after a breach of security in the smart card, exploited by computer hackers in order to gain complete access to the card's encryption system.
Once access has been gained to the smart card's encryption system, the hacker can perform changes to the card's internal information, which in turn tricks the conditional-access system into believing that it has been allowed access, by the legitimate card provider, to other television channels using the same encryption system. In some cases, the channels do not even have to be from the same television provider, since many providers use similar encryption systems, or use cards which have the capacity to store information for decoding those channels also. The information on how to hack the card is normally held within small, underground groups, to which public access is not possible. Instead, the hacking groups may release their hack in several forms. One such way is simply to release the encryption algorithm and key. Another common release method is by releasing a computer program which can be used by the smart card user to reprogram their card. Once complete, the now illegally modified smart card is known as a "MOSC" (modified original smart card). A third such method, more common in recent times, is to sell the information gained on the encryption to a third party, who will then release their own smart card, such as the K3 card. This third party, for legal reasons, will then use a fourth party to release encrypted files, which then allow the card to decode encrypted content.
Along with modifying original cards, it is possible to use the information provided by the smart card to create an encryption emulator. This, in turn, can be programmed into a cable or satellite receiver's internal software, and offered for download on the internet as a firmware upgrade. This allows access to the encrypted channels by those who do not even own a smart card. In recent times, many underground forum websites dedicated to the hobby of satellite piracy and encryption-emulated Free To Air (FTA) receivers have been set up, giving up-to-date information on satellite and cable piracy, including making available firmware downloads for receivers and very detailed encryption system information to the public.
Upon learning that their system has been compromised, smart card providers often have several countermeasures against unauthorised viewing, which can be put in place over the air, in most cases causing virtually no disruption to legitimate viewers. One such measure is CI revocation. The simplest form of countermeasure is a key change. This halts unauthorised viewing only temporarily, since the new key can easily be accessed in the hacked card and implemented. There are often other, more complicated procedures which update a part of the smart card in order to make it inaccessible. These procedures can also, however, be hacked, once again allowing access. This leads to a game of "cat and mouse" between the smart card provider and the hackers. This, after several stages of progression, can leave the smart card provider in a situation where they no longer have any further countermeasures to implement. At that point they must perform a card and encryption change with all legitimate viewers, in order to eliminate the unauthorised viewing of the service, at least for the foreseeable future.
Such has been the success of implementing new smart card systems that another form of smart card piracy has grown in popularity. This method is called card sharing, which works by making the smart card decoding information available in real time to other users, via a computer network. Police monitoring of unsecured card sharing networks has led to prosecutions.
Virtually every common encryption system is publicly known to have been compromised. These include Viaccess, Nagravision, SECA Mediaguard and Conax. The MediaCipher system, owned by Motorola, along with Scientific Atlanta's PowerKEY system, are the only digital TV encryption systems which have not publicly been compromised. This is largely thanks to there being no PC card conditional-access modules (CAMs) available for either encryption system.
Despite the unauthorised decryption of media being illegal in many countries, smart card piracy is a crime which is very rarely punished, due to it being virtually undetectable, particularly in the case of satellite viewing. Laws in many countries do not clearly specify whether the decryption of foreign media services is illegal or not. This has caused much confusion in places such as Europe, where the proximity of many countries, coupled with the large land mass covered by satellite beams, allows signal access to many different providers. These providers are reluctant to pursue criminal charges against many viewers as they live in different countries. There have, however, been several high-profile prosecution cases in the USA, where satellite dealers have been taken to court, resulting in large fines or jail time.[2]
An Internet key sharing scheme consists of one smart card with a valid, paid subscription, located on an Internet server. It generates a stream of real-time decryption keys which are broadcast over the Internet to remotely located satellite receivers. The number of remotely located satellite receivers is limited by the network latency, the period between key updates, and the ability of the card client's receiver to use the decrypted key stream.[3]
Each receiver is configured in an identical manner, a clone receiving the same television signal from a satellite and, from the internet server, the same decryption keys to unlock that signal. As the server must have individually subscribed smart cards for each channel to be viewed, its continued operation tends to be costly and may require multiple subscriptions under different names and addresses. There is also a risk that, as the number of card clients on the card sharing network grows, it will attract the attention of the satellite TV service provider and law enforcement agencies; monitoring of IP addresses associated with the network may identify individual users and server operators, who then become targets for legal action by the satellite TV service provider or by legal authorities.
Key sharing schemes are typically used where replacement of compromised smart card systems (such as the deprecation of Nagra 1/2 in favour of Nagra 3) has made other pirate decryption methods non-functional.
In February 2014, an episode of the BBC's "Inside Out" disclosed that the complete Sky TV package could be obtained from black-market sources for as little as £10 per month through Internet key sharing. Swansea and Cardiff were highlighted as having significant activity in pubs using cracked boxes to show Premier League football.[4]
In some countries such as Canada and many Caribbean nations (except for the Dominican Republic), the black market in satellite TV piracy is closely tied to the gray market activity of using direct broadcast satellite signals to watch broadcasts intended for one country in some other, adjacent country. Many smaller countries have no domestic DBS operations and therefore few or no legal restrictions on the use of decoders which capture foreign signals.
The refusal of most providers to knowingly issue subscriptions outside their home country leads to a situation where pirate decryption is perceived as one of the few ways to obtain certain programming. If there is no domestic provider for a channel, a grey-market (subscribed using another address) or black-market (pirate) system is a prerequisite to receive many specific ethnic, sport or premium movie services.
Pirate or grey-market reception also provides viewers a means to bypass local blackout restrictions on sporting events and to access hard-core pornography where such content is not otherwise available.
The grey market for US satellite receivers in Canada at one point was estimated to serve as many as several hundred thousand English-speaking Canadian households. Canadian authorities, acting under pressure from cable companies and domestic broadcasters, have made many attempts to prevent Canadians from subscribing to US direct-broadcast services such as AT&T's DirecTV and Echostar's Dish Network.
While litigation has gone as far as the Supreme Court of Canada, no judicial ruling has yet been made on whether such restrictions violate the safeguards of the Canadian Charter of Rights and Freedoms which are intended to protect freedom of expression and prevent linguistic or ethnic discrimination. Domestic satellite and cable providers have adopted a strategy of judicial delay in which their legal counsel will file an endless series of otherwise-useless motions before the courts to ensure that the proponents of the grey-market systems run out of money before the "Charter Challenge" issue is decided.[citation needed]
According to K. William McKenzie, the Orillia, Ontario lawyer who won the case in the Supreme Court of Canada, a consortium headed by David Fuss and supported by Dawn Branton and others later launched a constitutional challenge to defeat section 9(1)(c) of the Radiocommunication Act on the basis that it breached the guarantee of freedom of expression enshrined in section 2(c) of the Canadian Charter of Rights.
The evidence compiled by Mr. McKenzie from his broadcasting clients in opposition to this challenge was so overwhelming that it was abandoned and the Court ordered that substantial costs be paid by the applicants.
In most cases, broadcast distributors will require a domestic billing address before issuing a subscription; post boxes and commercial mail receiving agencies are often used by grey-market subscribers to foreign providers to circumvent this restriction.
The situation in the US itself differs, as it is complicated by the legal question of subscriber access to distant local TV stations. Satellite providers are severely limited in their ability to offer subscriptions to distant locals due to the risk of further lawsuits by local affiliates of the same network in the subscriber's home designated market area. California stations have sued satellite providers who distributed New York signals nationally, as the distant stations would have an unfair advantage by broadcasting the same programming three hours earlier.
There is also a small "reverse gray market" for Canadian signals, transmitted with a footprint which sends full-strength DBS signals to many if not all of the contiguous 48 US states. This is desirable not only to receive Canadian-only content, but because some US-produced programs air in Canada in advance of their US broadcast. The question of signal substitution, by which Canadian cable and satellite providers substitute the signal of a local or domestic channel over a foreign or distant channel carrying the same program, is rendered more complex by the existence of a reverse grey market. Signal substitution had already been the cause of strong diplomatic protests by the United States, which considers the practice to constitute theft of advertising revenue.
The lack of domestic competition for premium movie channels in Canada is one factor encouraging grey-market reception; language is another key issue, as most Spanish-language programming in North America is on the US system and most French-language programming is on the Canadian system. A larger selection of sports and ethnic programming is also available to grey-market subscribers.
It could be said that the 1000-channel universe is a "reality" in North America, but only for the signal pirates as many legal and geographic restrictions are placed on the ability to subscribe to many if not most of the physically available channels.
Other countries, such as Nicaragua during Sandinista rule, Cuba, Iran (Islamic Republic of Iran), Afghanistan during Taliban rule and Iraq during the Saddam Hussein regime, have attempted to prohibit their citizens from receiving any satellite broadcasts from foreign sources.
The situation in Europe differs somewhat, due to the much greater linguistic diversity in that region and due to the use of standardized DVB receivers capable of receiving multiple providers and free-to-air signals. North American providers normally lock their subscribers into "package receivers" unable to tune outside their one package; often the receivers are sold at artificially low prices and the subscription cost for programming is increased in order to favour new subscribers over existing ones. Providers are also notorious for using sales tactics such as bundling, in which to obtain one desired channel a subscriber must purchase a block of anywhere from several to more than a hundred other channels at substantial cost.
Many European companies such as British Sky Broadcasting prohibit subscriptions outside the UK and Ireland. But other satellite providers such as Sky Deutschland do sell yearly subscription cards legally to customers in other European countries without the need for an address or other personal information. The latter also applies to virtually all the adult channel cards sold in Europe.
The Middle East entered the picture with the Kingdom of Saudi Arabia. In July 2019, global football authorities of various competitions collectively condemned BeoutQ, a pirate broadcasting channel of Saudi Arabia. The rights holders running the Premier League, FIFA World Cup and UEFA Champions League called on the authorities of the Arab nation to halt the operations of its homegrown pirate TV and broadcasting service, which is involved in illegal streaming of matches internationally.[5]
BeoutQ emerged in 2017, and has since been widely available across Saudi Arabia. However, the country denied that it is based in Riyadh, stating that the authorities are committed to fighting piracy. In February 2015, several sports bodies and broadcasters, including the U.S. National Basketball Association, the U.S. Tennis Association and Sky, demanded that the United States add Saudi Arabia to its "Priority Watch List" over TV piracy.[6] In April 2019, the Office of the United States Trade Representative (USTR) released a report placing Saudi Arabia on the Watch List.[7]
A number of strategies have been used by providers to control or prevent the widespread pirate decryption of their signals.
One approach has been to take legal action against dealers who sell equipment which may be of use to satellite pirates; in some cases the objective has been to obtain lists of clients in order to take or threaten to take costly legal action against end-users. Providers have created departments with names like the "office of signal integrity" or the "end-users group" to pursue alleged pirate viewers.
As some equipment (such as a computer interface to communicate with standard ISO/IEC 7816 smartcards) is useful for other purposes, this approach has drawn strong opposition from groups such as the Electronic Frontier Foundation. There have also been US counter-suits alleging that the legal tactics used by some DBS providers to demand large amounts of money from end-users may themselves appear unlawful or border on extortion.
Much of the equipment is perfectly lawful to own; in these cases, only the misuse of the equipment to pirate signals is prohibited. This makes provider attempts at legal harassment of would-be pirates awkward at best, a problem for providers which is growing due to the Internet distribution of third-party software to reprogram some otherwise legitimate free-to-air DVB receivers to decrypt pay TV broadcasts with no extra hardware.
US-based Internet sites containing information about the compromised encryption schemes have also been targeted by lawyers, often with the objective of costing the defendants enough in legal fees that they have to shut down or move their sites to offshore or foreign Internet hosts.
In some cases, the serial numbers of unsubscribed smartcards have been blacklisted by providers, causing receivers to display error messages. A "hashing" approach of writing arbitrary data to every available location on the card and requiring that this data be present as part of the decryption algorithm has also been tried as a way of leaving less free space available for third-party code supplied by pirates.
Another approach has been to load malicious code onto smartcards or receivers; these programs are intended to detect tampered cards and maliciously damage the cards or corrupt the contents of non-volatile memories within the receiver. This particular Trojan horse attack is often used as an ECM (electronic countermeasure) by providers, especially in North America, where cards and receivers are sold by the providers themselves and are easy targets for insertion of backdoors in their computer firmware. The most famous ECM incident was the "Black Sunday" attack launched against tampered DirecTV "H" cards on January 21, 2001.[8] It was intended to destroy the cards by overwriting a non-erasable part of the card's internal memory in order to lock the processor into an endless loop.
The results of a provider resorting to the use of malicious code are usually temporary at best, as knowledge of how to repair most damage tends to be distributed rapidly by hobbyists through various Internet forums. There is also a potential legal question involved (which has yet to be addressed), as the equipment is normally the property not of the provider but of the end user. Providers will often print on the smartcard itself that the card is the property of the signal provider, but at least one legal precedent indicates that marking "this is mine" on a card, putting it in a box with a receiver and then selling it can legally mean "this is not mine anymore". Malicious damage to receiver firmware puts providers on even shakier legal ground in the unlikely event that the matter were ever to be heard by the judiciary.
The only solution which has shown any degree of long-term success against tampered smartcards has been the use of digital renewable security; if the code has been broken and the contents of the smartcard's programming widely posted across the Internet, replacing every smartcard in every subscriber's receiver with one of a different, uncompromised design will effectively put an end to a piracy problem. Providers tend to be slow to go this route due to cost (as many have millions of legitimate subscribers, each of whom must be sent a new card) and due to concern that someone may eventually crack the code used in whatever new replacement card is used, causing the process to begin anew.
Premiere in Germany has replaced all of its smartcards with the Nagravision Aladin card; the US DirecTV system has replaced its three compromised card types ("F" had no encryption chip, "H" was vulnerable to being reprogrammed by pirates and "HU" were vulnerable to a "glitch" which could be used to make them skip an instruction). Both providers have been able to eliminate their problems with signal piracy by replacing the compromised smartcards after all other approaches had proved to provide at best limited results.
Dish Network and Bell Satellite TV had released new and more tamper-resistant smart cards over the years, known as the ROM2, ROM3, ROM10 and ROM11 series. All these cards used the Nagravision 1 access system. Despite the introduction of ever newer security measures, older cards were typically still able to decrypt the satellite signal after new cards were released (a lack of EEPROM space on the ROM2 cards eventually left them unable to receive the updates necessary to view programming). In an effort to stop piracy, as by this point the Nagravision 1 system had been thoroughly reverse-engineered by resourceful hobbyists, an incompatible Nagravision 2 encryption system was introduced along with a smart card swap-out for existing customers. As more cards were swapped, channel groups were slowly converted to the new encryption system, starting with pay-per-view and HDTV channels, followed by the premium movie channels. This effort culminated in a complete shutdown of the Nagravision 1 datastream for all major channels in September 2005. Despite these efforts to secure their programming, a software hack was released in late August 2005, allowing for the decryption of the new Nagravision 2 channels with a DVB-S card and a PC. Just a few months later, early revisions of the Nagravision 2 cards had themselves been compromised. Broadcast programming currently uses a simulcrypt of Nagravision 2 and Nagravision 3, a first step toward a possible future shutdown of Nagravision 2 systems.
Various groups have been targeted for lawsuits in connection with pirate decryption issues:
One of the most severe sentences handed out for satellite TV piracy in the United States was to a Canadian businessman, Martin Clement Mullen, widely known for over a decade in the satellite industry as "Marty" Mullen.
Mullen was sentenced to seven years in prison with no parole and ordered to pay DirecTV and smart card provider NDS Ltd. US$24 million in restitution. He pleaded guilty in a Tampa, Florida court in September 2003 after being arrested when he entered the United States using a British passport in the name "Martin Paul Stewart".
Mr. Mullen had operated his satellite piracy business from Florida, the Cayman Islands and from his home in London, Ontario, Canada. Testimony in the Florida court showed that he had a network of over 100 sub-dealers working for him and that during one six-week period, he cleared US$4.4 million in cash from re-programming DirecTV smartcards that had been damaged in an electronic counter measure.
NDS Inc. Chief of Security John Norris pursued Mullen for a decade in three different countries. When Mullen originally fled the United States to Canada in the mid-1990s, Norris launched an investigation that saw an undercover operator (a former Canadian police officer named Don Best) become one of Mullen's sub-dealers and his closest personal friend for over a year. In the summer of 2003, when Mullen travelled under another identity to visit his operations in Florida, US federal authorities were waiting for him at the airport, after being tipped off by Canadian investigators working for NDS Inc.
However, the NDS Group was accused in several lawsuits by Canal+ (a suit dismissed as part of an otherwise-unrelated corporate takeover deal) and by Echostar (now Dish Network) of hacking the Nagra encryption and releasing the information on the internet. The jury awarded EchoStar $45.69 in actual damages (one month's average subscription fee) in Claim 3.
Bell Satellite TV (as Bell ExpressVu) was sued by Vidéotron, a Québécor-owned rival which operates cable television systems in major Québec markets. Québécor also owns TVA, a broadcaster. Bell's inferior security and failure to replace compromised smartcards in a timely fashion cost Vidéotron cable subscribers, as viewers could obtain the same content for free from satellite under the compromised Nagra1 system from 1999 to 2005; pirate decryption also deprived TVA's French-language news channel LCN of a monthly 48¢/subscriber fee. The Superior Court of Quebec awarded $339,000 and $262,000 in damages and interest to Vidéotron and TVA Group in 2012. Québec's Court of Appeal ruled these dollar amounts "erroneous" and increased them in 2015; despite an attempt to appeal to the Supreme Court of Canada, a final award of $141 million in damages and interest was upheld.[21]
https://en.wikipedia.org/wiki/Pirate_decryption
In probability theory and statistics, Fisher's noncentral hypergeometric distribution is a generalization of the hypergeometric distribution where sampling probabilities are modified by weight factors. It can also be defined as the conditional distribution of two or more binomially distributed variables dependent upon their fixed sum.
The distribution may be illustrated by the following urn model. Assume, for example, that an urn contains m1 red balls and m2 white balls, totalling N = m1 + m2 balls. Each red ball has the weight ω1 and each white ball has the weight ω2. We will say that the odds ratio is ω = ω1 / ω2. Now we are taking balls randomly in such a way that the probability of taking a particular ball is proportional to its weight, but independent of what happens to the other balls. The number of balls taken of a particular color follows the binomial distribution. If the total number n of balls taken is known, then the conditional distribution of the number of taken red balls for given n is Fisher's noncentral hypergeometric distribution. To generate this distribution experimentally, we have to repeat the experiment until it happens to give n balls.
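The conditional probability described by this urn model can be written down directly: given n balls taken, the probability of x red balls is proportional to C(m1, x)·C(m2, n−x)·ω^x over the support of x. A minimal sketch in Python (the function name and argument order are ours, not from any library):

```python
from math import comb

def fisher_nchg_pmf(x, n, m1, m2, omega):
    """P(X = x | n balls taken) for the biased urn: proportional to
    C(m1, x) * C(m2, n - x) * omega**x over the support of x."""
    lo, hi = max(0, n - m2), min(m1, n)
    if not lo <= x <= hi:
        return 0.0
    den = sum(comb(m1, y) * comb(m2, n - y) * omega**y for y in range(lo, hi + 1))
    return comb(m1, x) * comb(m2, n - x) * omega**x / den

# With omega = 1 this reduces to the central hypergeometric distribution:
# fisher_nchg_pmf(2, 4, 6, 6, 1.0) equals C(6,2)*C(6,2)/C(12,4).
```
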
If we want to fix the value of n prior to the experiment then we have to take the balls one by one until we have n balls. The balls are therefore no longer independent. This gives a slightly different distribution known as Wallenius' noncentral hypergeometric distribution. It is far from obvious why these two distributions are different. See the entry for noncentral hypergeometric distributions for an explanation of the difference between these two distributions and a discussion of which distribution to use in various situations.
The two distributions are both equal to the (central) hypergeometric distribution when the odds ratio is 1.
Unfortunately, both distributions are known in the literature as "the" noncentral hypergeometric distribution. It is important to be specific about which distribution is meant when using this name.
Fisher's noncentral hypergeometric distribution was first given the name extended hypergeometric distribution (Harkness, 1965), and some authors still use this name today.
The probability function, mean and variance are given in the adjacent table.
An alternative expression of the distribution has both the number of balls taken of each color and the number of balls not taken as random variables, whereby the expression for the probability becomes symmetric.
The calculation time for the probability function can be high when the sum in P0 has many terms. The calculation time can be reduced by calculating the terms in the sum recursively relative to the term for y = x and ignoring negligible terms in the tails (Liao and Rosen, 2001).
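The recursion follows from the ratio of successive terms of the sum, t(y+1)/t(y) = ω(m1−y)(n−y)/((y+1)(m2−n+y+1)). A sketch of this idea (our own function name and stopping rule, not the authors' published code) that sums outward from the term for y = x and drops negligible tail terms:

```python
def fisher_nchg_pmf_rec(x, n, m1, m2, omega, eps=1e-16):
    """Pmf via the term ratio t(y+1)/t(y) = omega*(m1-y)*(n-y)/((y+1)*(m2-n+y+1)),
    summing outward from y = x (where t(x) = 1) and dropping tiny tail terms."""
    lo, hi = max(0, n - m2), min(m1, n)
    total, t = 1.0, 1.0
    for y in range(x, hi):                 # upward from y = x
        t *= omega * (m1 - y) * (n - y) / ((y + 1) * (m2 - n + y + 1))
        if t < eps * total:
            break
        total += t
    t = 1.0
    for y in range(x, lo, -1):             # downward from y = x
        t *= y * (m2 - n + y) / (omega * (m1 - y + 1) * (n - y + 1))
        if t < eps * total:
            break
        total += t
    return 1.0 / total                     # t(x) divided by the normalizing sum
```

Since the term ratio is monotonically decreasing in y, the terms are unimodal and the tail-truncation test is safe once past the mode.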
The mean can be approximated by: {\displaystyle \mu \approx {\frac {-2c}{b-{\sqrt {b^{2}-4ac}}}}}
where {\displaystyle a=\omega -1}, {\displaystyle b=m_{1}+n-N-(m_{1}+n)\omega }, {\displaystyle c=m_{1}n\omega }.
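This approximation is the root of the quadratic aμ² + bμ + c = 0 (Cornfield's approximation) that lies within the support of the distribution. A sketch that selects the admissible root numerically rather than committing to a closed-form sign (function name is ours):

```python
from math import sqrt

def approx_mean(n, m1, m2, omega):
    """Approximate mean of Fisher's noncentral hypergeometric distribution:
    the root of a*mu^2 + b*mu + c = 0 (a, b, c as in the text) lying in
    [max(0, n - m2), min(m1, n)]."""
    N = m1 + m2
    a = omega - 1
    b = m1 + n - N - (m1 + n) * omega
    c = m1 * n * omega
    if a == 0:                       # omega == 1: the equation is linear
        return -c / b                # gives the central mean n*m1/N
    d = sqrt(b * b - 4 * a * c)
    lo, hi = max(0, n - m2), min(m1, n)
    roots = ((-b - d) / (2 * a), (-b + d) / (2 * a))
    return next(r for r in roots if lo - 1e-9 <= r <= hi + 1e-9)
```

For omega = 1 this reproduces the central hypergeometric mean exactly; otherwise it is only an approximation to the true mean.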
The variance can be approximated by:
Better approximations to the mean and variance are given by Levin (1984, 1990), McCullagh and Nelder (1989), Liao (1992), and Eisinga and Pelzer (2011). The saddlepoint methods to approximate the mean and the variance suggested by Eisinga and Pelzer (2011) offer extremely accurate results.
The following symmetry relations apply:
Recurrence relation:
The univariate noncentral hypergeometric distribution may be derived alternatively as a conditional distribution in the context of two binomially distributed random variables, for example when considering the response to a particular treatment in two different groups of patients participating in a clinical trial. An important application of the noncentral hypergeometric distribution in this context is the computation of exact confidence intervals for the odds ratio comparing treatment response between the two groups.
Suppose X and Y are binomially distributed random variables counting the number of responders in two corresponding groups of size mX and mY respectively,
Their odds ratio is given as {\displaystyle \omega =\omega _{X}/\omega _{Y}}.
The responder prevalence {\displaystyle \pi _{i}} is fully defined in terms of the odds {\displaystyle \omega _{i}}, {\displaystyle i\in \{X,Y\}}, which correspond to the sampling bias in the urn scheme above, i.e.
The trial can be summarized and analyzed in terms of the following contingency table.
In the table, {\displaystyle n=x+y} corresponds to the total number of responders across groups, and N to the total number of patients recruited into the trial. The dots denote corresponding frequency counts of no further relevance.
The sampling distribution of responders in group X conditional upon the trial outcome and prevalences, {\displaystyle Pr(X=x\;|\;X+Y=n,m_{X},m_{Y},\omega _{X},\omega _{Y})},
is noncentral hypergeometric:
{\displaystyle {\begin{aligned}F(X,\omega ):&=Pr(X=x\;|\;X+Y=n,m_{X},m_{Y},\omega _{X},\omega _{Y})\\&={\frac {Pr(X=x,X+Y=n\;|\;m_{X},m_{Y},\omega _{X},\omega _{Y})}{Pr(X+Y=n\;|\;m_{X},m_{Y},\omega _{X},\omega _{Y})}}\\&={\frac {Pr(X=x\;|\;m_{X},\omega _{X})Pr(Y=n-x\;|\;m_{Y},\omega _{Y},X=x)}{Pr(X+Y=n\;|\;m_{X},m_{Y},\omega _{X},\omega _{Y})}}\\&={\frac {{\binom {m_{X}}{x}}\pi _{X}^{x}(1-\pi _{X})^{m_{X}-x}{\binom {m_{Y}}{n-x}}\pi _{Y}^{n-x}(1-\pi _{Y})^{m_{Y}-(n-x)}}{Pr(X+Y=n\;|\;m_{X},m_{Y},\omega _{X},\omega _{Y})}}\\&={\frac {{\binom {m_{X}}{x}}\omega _{X}^{x}(1-\pi _{X})^{m_{X}}{\binom {m_{Y}}{n-x}}\omega _{Y}^{n-x}(1-\pi _{Y})^{m_{Y}}}{Pr(X+Y=n\;|\;m_{X},m_{Y},\omega _{X},\omega _{Y})}}\\&={\frac {{\binom {m_{X}}{x}}{\binom {m_{Y}}{n-x}}\omega ^{x}(1-\pi _{X})^{m_{X}}\omega _{Y}^{n}(1-\pi _{Y})^{m_{Y}}}{(1-\pi _{X})^{m_{X}}\omega _{Y}^{n}(1-\pi _{Y})^{m_{Y}}\sum _{u=\max(0,n-m_{Y})}^{\min(m_{X},n)}{\binom {m_{X}}{u}}{\binom {m_{Y}}{n-u}}\omega ^{u}}}\\&={\frac {{\binom {m_{X}}{x}}{\binom {m_{Y}}{n-x}}\omega ^{x}}{\sum _{u=\max(0,n-m_{Y})}^{\min(m_{X},n)}{\binom {m_{X}}{u}}{\binom {m_{Y}}{n-u}}\omega ^{u}}}\end{aligned}}}
Note that the denominator is essentially just the numerator, summed over all events of the joint sample space {\displaystyle (X,Y)} for which it holds that {\displaystyle X+Y=n}. Terms independent of X can be factored out of the sum and cancel out with the numerator.
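The identity is easy to check numerically: conditioning two independent binomials on their sum reproduces the Fisher pmf with ω = ωX/ωY. A small self-contained sketch (all function names and the chosen prevalences are illustrative):

```python
from math import comb

def binom_pmf(k, m, p):
    return comb(m, k) * p**k * (1 - p)**(m - k)

def conditional_pmf(x, n, mX, mY, pX, pY):
    """Pr(X = x | X + Y = n) for independent X ~ Bin(mX, pX), Y ~ Bin(mY, pY)."""
    num = binom_pmf(x, mX, pX) * binom_pmf(n - x, mY, pY)
    den = sum(binom_pmf(u, mX, pX) * binom_pmf(n - u, mY, pY)
              for u in range(max(0, n - mY), min(mX, n) + 1))
    return num / den

def fnchg_pmf(x, n, mX, mY, omega):
    """Fisher's noncentral hypergeometric pmf from the closed form above."""
    num = comb(mX, x) * comb(mY, n - x) * omega**x
    den = sum(comb(mX, u) * comb(mY, n - u) * omega**u
              for u in range(max(0, n - mY), min(mX, n) + 1))
    return num / den

# The odds ratio corresponding to prevalences pX, pY:
# omega = (pX / (1 - pX)) / (pY / (1 - pY))
```
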
The distribution can be expanded to any number of colors c of balls in the urn. The multivariate distribution is used when there are more than two colors.
The probability function and a simple approximation to the mean are given to the right. Better approximations to the mean and variance are given by McCullagh and Nelder (1989).
The order of the colors is arbitrary so that any colors can be swapped.
The weights can be arbitrarily scaled:
Colors with zero number (mi = 0) or zero weight (ωi = 0) can be omitted from the equations.
Colors with the same weight can be joined:
where {\displaystyle \operatorname {hypg} (x;n,m,N)} is the (univariate, central) hypergeometric distribution probability.
Fisher's noncentral hypergeometric distribution is useful for models of biased sampling or biased selection where the individual items are sampled independently of each other with no competition. The bias or odds can be estimated from an experimental value of the mean. Use Wallenius' noncentral hypergeometric distribution instead if items are sampled one by one with competition.
Fisher's noncentral hypergeometric distribution is used mostly for tests in contingency tables where a conditional distribution for fixed margins is desired. This can be useful, for example, for testing or measuring the effect of a medicine. See McCullagh and Nelder (1989).
Breslow, N. E.; Day, N. E. (1980), Statistical Methods in Cancer Research, Lyon: International Agency for Research on Cancer.
Eisinga, R.; Pelzer, B. (2011), "Saddlepoint approximations to the mean and variance of the extended hypergeometric distribution" (PDF), Statistica Neerlandica, vol. 65, no. 1, pp. 22–31, doi:10.1111/j.1467-9574.2010.00468.x.
Fog, A. (2007), Random number theory.
Fog, A. (2008), "Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions", Communications in Statistics, Simulation and Computation, vol. 37, no. 2, pp. 241–257, doi:10.1080/03610910701790236, S2CID 14904723.
Johnson, N. L.; Kemp, A. W.; Kotz, S. (2005), Univariate Discrete Distributions, Hoboken, New Jersey: Wiley and Sons.
Levin, B. (1984), "Simple Improvements on Cornfield's approximation to the mean of a noncentral Hypergeometric random variable", Biometrika, vol. 71, no. 3, pp. 630–632, doi:10.1093/biomet/71.3.630.
Levin, B. (1990), "The saddlepoint correction in conditional logistic likelihood analysis", Biometrika, vol. 77, no. 2, pp. 275–285, doi:10.1093/biomet/77.2.275, JSTOR 2336805.
Liao, J. (1992), "An Algorithm for the Mean and Variance of the Noncentral Hypergeometric Distribution", Biometrics, vol. 48, no. 3, pp. 889–892, doi:10.2307/2532354, JSTOR 2532354.
Liao, J. G.; Rosen, O. (2001), "Fast and Stable Algorithms for Computing and Sampling from the Noncentral Hypergeometric Distribution", The American Statistician, vol. 55, no. 4, pp. 366–369, doi:10.1198/000313001753272547, S2CID 121279235.
McCullagh, P.; Nelder, J. A. (1989), Generalized Linear Models, 2nd ed., London: Chapman and Hall.
https://en.wikipedia.org/wiki/Fisher%27s_noncentral_hypergeometric_distribution
Superoptimization is the process where a compiler automatically finds the optimal instruction sequence for a loop-free sequence of instructions. Real-world compilers generally cannot produce genuinely optimal code, and while most standard compiler optimizations only improve code partially, a superoptimizer's goal is to find the optimal sequence, the canonical form. Superoptimizers can be used to improve conventional optimizers by highlighting missed opportunities, so a human can write additional rules.
The term superoptimization was first coined byAlexia Massalinin the 1987 paperSuperoptimizer: A Look at the Smallest Program.[1]The label "program optimization" has been given to a field that does not aspire to optimize but only to improve.
This misnomer forced Massalin to call her system a superoptimizer, which is actually an optimizer to find an optimal program.[2]
In 1992, the GNU Superoptimizer (GSO) was developed for integration into the GNU Compiler Collection (GCC).[3][4] Later work further developed and extended these ideas.
Traditionally, superoptimization is performed via exhaustive brute-force search in the space of valid instruction sequences. This is a costly method, and largely impractical for general-purpose compilers, yet it has been shown to be useful for optimizing performance-critical inner loops. It is also possible to use an SMT solver to approach the problem, vastly improving the search efficiency (although inputs more complex than a basic block remain out of reach).[5]
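As a toy illustration of the brute-force approach, the sketch below searches shortest-first over sequences drawn from a made-up four-instruction, single-register machine (not any real ISA), accepting a sequence only if it matches the target function on every test input:

```python
from itertools import product

# Toy single-register instruction set (made up for illustration).
INSTRUCTIONS = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "neg": lambda x: -x,
    "shr": lambda x: x >> 1,
}

def superoptimize(target, tests, max_len=4):
    """Exhaustively search, shortest first, for an instruction sequence that
    matches `target` on every test input; the first hit is optimal in length."""
    for length in range(1, max_len + 1):
        for seq in product(INSTRUCTIONS, repeat=length):
            ok = True
            for x in tests:
                y = x
                for op in seq:
                    y = INSTRUCTIONS[op](y)
                if y != target(x):
                    ok = False
                    break
            if ok:
                return seq
    return None

# Find a shortest sequence computing 2x + 2.
print(superoptimize(lambda x: 2 * x + 2, range(16)))   # ('inc', 'dbl')
```

A real superoptimizer would additionally prove equivalence on all inputs (not just a test set), which is where SMT solvers come in.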
In 2001, goal-directed superoptimization was demonstrated in the Denali project by Compaq research.[6] In 2006, answer set declarative programming was applied to superoptimization in the Total Optimisation using Answer Set Technology (TOAST) project[7] at the University of Bath.[8][9]
Superoptimization can be used to automatically generate general-purpose peephole optimizers.[10]
Several superoptimizers are available for free download.
|
https://en.wikipedia.org/wiki/Superoptimizer
|
Resource Public Key Infrastructure (RPKI), also known as Resource Certification, is a specialized public key infrastructure (PKI) framework designed to improve the security of the Internet's BGP routing infrastructure.
RPKI provides a way to connect Internet number resource information (such as Autonomous System numbers and IP addresses) to a trust anchor. The certificate structure mirrors the way in which Internet number resources are distributed: resources are initially distributed by the IANA to the regional Internet registries (RIRs), which in turn distribute them to local Internet registries (LIRs), which then distribute the resources to their customers. RPKI can be used by the legitimate holders of the resources to control the operation of Internet routing protocols and to prevent route hijacking and other attacks. In particular, RPKI is used to secure the Border Gateway Protocol (BGP) through BGP Route Origin Validation (ROV), as well as the Neighbor Discovery Protocol (ND) for IPv6 through the Secure Neighbor Discovery protocol (SEND).
The RPKI architecture is documented in RFC 6480. The RPKI specification is documented in a series of RFCs: RFC 6481, RFC 6482, RFC 6483, RFC 6484, RFC 6485, RFC 6486, RFC 6487, RFC 6488, RFC 6489, RFC 6490, RFC 6491, RFC 6492, and RFC 6493. SEND is documented in RFC 6494 and RFC 6495. These RFCs are a product of the IETF's SIDR ("Secure Inter-Domain Routing") working group[1] and are based on a threat analysis documented in RFC 4593. These standards cover BGP origin validation; path validation is provided by BGPsec, which has been standardized separately in RFC 8205. Several implementations of prefix origin validation already exist.[2]
RPKI uses X.509 PKI certificates (RFC 5280) with extensions for IP addresses and AS identifiers (RFC 3779). It allows the members of regional Internet registries, known as local Internet registries (LIRs), to obtain a resource certificate listing the Internet number resources they hold. This gives them verifiable proof of holdership, though the certificate does not contain identity information. Using the resource certificate, LIRs can create cryptographic attestations about the route announcements they authorise to be made with the prefixes and ASNs they hold. These attestations are described below.
A Route Origin Authorization (ROA)[3] states which autonomous system (AS) is authorised to originate certain IP prefixes. In addition, it can specify the maximum length of the prefix that the AS is authorised to advertise.
The maximum prefix length is an optional field. When it is not defined, the AS is only authorised to advertise exactly the prefix specified; any more specific announcement of the prefix will be considered invalid. This is a way to enforce aggregation and prevent hijacking through the announcement of a more specific prefix.
When present, this field specifies the length of the most specific IP prefix that the AS is authorised to advertise. For example, if the IP address prefix is 10.0.0.0/16 and the maximum length is 22, the AS is authorised to advertise any prefix under 10.0.0.0/16, as long as it is no more specific than /22. So, in this example, the AS would be authorised to advertise 10.0.0.0/16, 10.0.128.0/20 or 10.0.252.0/22, but not 10.0.255.0/24.
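The maxLength rule above can be sketched with Python's standard ipaddress module. This is a simplified coverage check only; real validators also match the origin AS and handle multiple, possibly overlapping ROAs:

```python
import ipaddress

def roa_covers(roa_prefix, max_length, announcement):
    """Is `announcement` covered by a ROA for `roa_prefix` with an optional
    maxLength?  (Origin-AS matching is assumed to happen elsewhere.)"""
    roa = ipaddress.ip_network(roa_prefix)
    ann = ipaddress.ip_network(announcement)
    if max_length is None:
        max_length = roa.prefixlen      # no maxLength: only the exact prefix
    return ann.subnet_of(roa) and ann.prefixlen <= max_length

# The /16-with-maxLength-22 example from the text:
for p in ["10.0.0.0/16", "10.0.128.0/20", "10.0.252.0/22", "10.0.255.0/24"]:
    print(p, roa_covers("10.0.0.0/16", 22, p))   # True, True, True, False
```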
An Autonomous System Provider Authorization (ASPA) states which networks are permitted to appear as direct upstream adjacencies of an autonomous system in BGP AS_PATHs.[4]
When a ROA is created for a certain combination of origin AS and prefix, this will have an effect on the RPKI validity[5] of one or more route announcements, which can be valid, invalid, or not found (unknown).
Note that invalid BGP updates may also be due to incorrectly configured ROAs.[6]
There are open source tools[7] available to run the certificate authority and manage the resource certificate and child objects such as ROAs. In addition, the RIRs have a hosted RPKI platform available in their member portals. This allows LIRs to choose to rely on a hosted system or to run their own software.
The system does not use a single repository publication point to publish RPKI objects. Instead, the RPKI repository system consists of multiple distributed and delegated repository publication points. Each repository publication point is associated with one or more RPKI certificates' publication points. In practice this means that when running a certificate authority, an LIR can either publish all cryptographic material themselves, or they can rely on a third party for publication. When an LIR chooses to use the hosted system provided by the RIR, in principle publication is done in the RIR repository.
Relying party software will fetch, cache, and validate repository data using rsync or the RPKI Repository Delta Protocol (RFC 8182).[8] It is important for a relying party to regularly synchronize with all the publication points to maintain a complete and timely view of repository data. Incomplete or stale data can lead to erroneous routing decisions.[9][10]
After validation of ROAs, the attestations can be compared to BGP routing and aid network operators in their decision-making process. This can be done manually, but the validated prefix origin data can also be sent to a supported router using the RPKI to Router Protocol (RFC 6810).[11] Cisco Systems offers native support on many platforms[12] for fetching the RPKI data set and using it in the router configuration.[13] Juniper offers support on all platforms[14] that run version 12.2 or newer. Quagga obtains this functionality through BGP Secure Routing Extensions (BGP-SRx)[15] or a fully RFC-compliant RPKI implementation[16] based on RTRlib. The RTRlib[17] provides an open-source C implementation of the RTR protocol and prefix origin verification. The library is useful for developers of routing software as well as for network operators.[18] Developers can integrate the RTRlib into a BGP daemon to extend their implementation towards RPKI. Network operators may use the RTRlib to develop monitoring tools (e.g., to check the proper operation of caches or to evaluate their performance).
RFC 6494 updates the certificate validation method of the Secure Neighbor Discovery protocol (SEND) security mechanisms for the Neighbor Discovery Protocol (ND) to use RPKI in IPv6. It defines a SEND certificate profile utilizing a modified RFC 6487 RPKI certificate profile, which must include a single RFC 3779 IP address delegation extension.
|
https://en.wikipedia.org/wiki/Resource_Public_Key_Infrastructure
|
Apple Partition Map (APM) is a partition scheme used to define the low-level organization of data on disks formatted for use with 68k and PowerPC Macintosh computers. It was introduced with the Macintosh II.[1]
Disks using the Apple Partition Map are divided into logical blocks, usually of 512 bytes each. The first block, Block 0, contains an Apple-specific data structure called the "Driver Descriptor Map", used by the Macintosh Toolbox ROM to load driver updates and patches before booting from an MFS or HFS partition.[2] Because APM allows 32 bits' worth of logical blocks, the size of an APM-formatted disk using small blocks[3] is limited to 2 TiB.[4]
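The 2 TiB figure follows directly from 32-bit block addressing with 512-byte blocks:

```python
# 2^32 addressable blocks x 512 bytes per block = 2^41 bytes = 2 TiB.
max_bytes = (2 ** 32) * 512
print(max_bytes == 2 * 2 ** 40)   # True: exactly 2 TiB
print(max_bytes)                  # 2199023255552
```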
The Apple Partition Map maps out all space on disk that is used (including the map itself) and unused (free space), unlike the minimal x86 master boot record, which only accounts for used non-map partitions. This means that every block on the disk, with the exception of the first block, Block 0, belongs to a partition.
Some hybrid disks contain both an ISO 9660 primary volume descriptor and an Apple Partition Map, allowing the disc to work on different types of computers, including Apple systems.
For accessing volumes, both APM and GPT partitions can be used in a standard manner with Mac OS X Tiger (10.4) and higher. For starting an operating system, PowerPC-based systems can only boot from APM disks.[5] In contrast, Intel-based systems generally boot from GPT disks.[1][6][7] Nevertheless, older Intel-based Macs are able to boot from APM, GPT (GUID Partition Table) and MBR (Master Boot Record, using the BIOS emulation called EFI-CSM, i.e. the Compatibility Support Module provided by EFI).
Intel-based models that came with Mac OS X Tiger (10.4) or Leopard (10.5) preinstalled had to be able to boot from both APM and GPT disks, because the installation media for these universal versions of Mac OS X were APM-partitioned in order to remain compatible with PowerPC-based systems.[8] However, the installation of OS X on an Intel-based Mac demands a GPT-partitioned disk or will refuse to continue, in the same way that installation on a PowerPC-based system demands an APM-partitioned destination volume. Cloning an already installed OS X to an APM partition on Intel systems remains bootable even on 2011 Intel-based Macs. Despite this apparent APM support, Apple never officially supported booting from an internal APM disk on an Intel-based system. The one exception for a universal version of Mac OS X (Tiger or Leopard) is an official Apple document describing how to set up a dual-bootable external APM disk for use with PowerPC and Intel.[9]
Each entry of the partition table is the size of one data block, which is normally 512 bytes.[1][10] Because the partition table itself is also a partition, the size of this first partition limits the number of entries in the partition table.
The normal case is that 64 sectors (64 × 512 = 32 KB) are used by the Apple Partition Map: one block for the Driver Descriptor Map as Block 0, one block for the partition table itself, and 62 blocks for a maximum of 62 data partitions.[11]
Each partition entry includes the starting sector and the size, as well as a name, a type, the position of the data area, and possible boot code. It also includes the total number of partitions in the partition table.[12] This ensures that, after reading the first partition table entry, the firmware knows how many more blocks to read from the media in order to process every partition table entry. All entries are in big-endian byte order.[citation needed]
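A minimal parser for the first fields of such an entry might look as follows. The field names and offsets here follow common descriptions of the APM entry layout and should be treated as illustrative, not authoritative:

```python
import struct

# Big-endian (as the text notes): signature "PM", pad, map entry count,
# start block, block count, then 32-byte name and type fields.
APM_ENTRY = struct.Struct(">2sHIII32s32s")

def parse_apm_entry(block: bytes):
    sig, _pad, map_blocks, start, size, name, ptype = APM_ENTRY.unpack_from(block)
    if sig != b"PM":
        raise ValueError("not an APM partition entry")
    return {
        "entries_in_map": map_blocks,   # total number of partitions in the map
        "start_block": start,           # starting sector of this partition
        "block_count": size,            # partition size in blocks
        "name": name.rstrip(b"\0").decode("mac_roman"),
        "type": ptype.rstrip(b"\0").decode("ascii"),
    }

# Build a synthetic 512-byte entry describing the map itself.
entry = APM_ENTRY.pack(b"PM", 0, 2, 1, 63, b"Apple", b"Apple_partition_map")
entry = entry.ljust(512, b"\0")
print(parse_apm_entry(entry))
```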
Types beginning with "Apple_" are reserved for assignment by Apple; all other custom-defined types are free to use. However, registration with Apple is encouraged.
Partition status is a bit field composed of the following flags:
|
https://en.wikipedia.org/wiki/Apple_Partition_Map
|
Standard form is a way of expressing numbers that are too large or too small to be conveniently written in decimal form, since to do so would require writing out an inconveniently long string of digits. It may be referred to as scientific form or standard index form, or scientific notation in the United States. This base-ten notation is commonly used by scientists, mathematicians, and engineers, in part because it can simplify certain arithmetic operations. On scientific calculators, it is usually known as "SCI" display mode.
In scientific notation, nonzero numbers are written in the form
m × 10^n,
or m times ten raised to the power of n, where n is an integer, and the coefficient m is a nonzero real number (usually between 1 and 10 in absolute value, and nearly always written as a terminating decimal). The integer n is called the exponent and the real number m is called the significand or mantissa.[1] The term "mantissa" can be ambiguous where logarithms are involved, because it is also the traditional name of the fractional part of the common logarithm. If the number is negative, then a minus sign precedes m, as in ordinary decimal notation. In normalized notation, the exponent is chosen so that the absolute value (modulus) of the significand m is at least 1 but less than 10.
Decimal floating point is a computer arithmetic system closely related to scientific notation.
For performing calculations with a slide rule, standard form expression is required; thus, the use of scientific notation increased as engineers and educators used that tool. See Slide rule § History.
Any real number can be written in the form m × 10^n in many ways: for example, 350 can be written as 3.5×10^2 or 35×10^1 or 350×10^0.
In normalized scientific notation (called "standard form" in the United Kingdom), the exponent n is chosen so that the absolute value of m remains at least one but less than ten (1 ≤ |m| < 10). Thus 350 is written as 3.5×10^2. This form allows easy comparison of numbers: numbers with bigger exponents are (due to the normalization) larger than those with smaller exponents, and subtraction of exponents gives an estimate of the number of orders of magnitude separating the numbers. It is also the form that is required when using tables of common logarithms. In normalized notation, the exponent n is negative for a number with absolute value between 0 and 1 (e.g. 0.5 is written as 5×10^−1). The 10 and exponent are often omitted when the exponent is 0. For a series of numbers that are to be added or subtracted (or otherwise compared), it can be convenient to use the same value of m for all elements of the series.
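A normalization routine following this rule can be sketched in a few lines (assuming a nonzero input):

```python
import math

def normalize(x: float):
    """Write x as (m, n) with x == m * 10**n and 1 <= |m| < 10 (x != 0)."""
    n = math.floor(math.log10(abs(x)))
    m = x / 10 ** n
    return m, n

print(normalize(350))   # (3.5, 2)
print(normalize(0.5))   # (5.0, -1)
print(normalize(-0.0040321))
```

Note that floating-point rounding can make the significand slightly off for some inputs; a decimal-based implementation would avoid that.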
Normalized scientific form is the typical form of expression of large numbers in many fields, unless an unnormalized or differently normalized form, such as engineering notation, is desired. Normalized scientific notation is often called exponential notation – although the latter term is more general and also applies when m is not restricted to the range 1 to 10 (as in engineering notation, for instance) and to bases other than 10 (for example, 3.15×2^20).
Engineering notation (often named "ENG" on scientific calculators) differs from normalized scientific notation in that the exponent n is restricted to multiples of 3. Consequently, the absolute value of m is in the range 1 ≤ |m| < 1000, rather than 1 ≤ |m| < 10. Though similar in concept, engineering notation is rarely called scientific notation. Engineering notation allows the numbers to explicitly match their corresponding SI prefixes, which facilitates reading and oral communication. For example, 12.5×10^−9 m can be read as "twelve-point-five nanometres" and written as 12.5 nm, while its scientific notation equivalent 1.25×10^−8 m would likely be read out as "one-point-two-five times ten-to-the-negative-eight metres".
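The same idea with the exponent forced down to a multiple of 3 gives engineering notation:

```python
import math

def engineering(x: float):
    """Write x as (m, n) with n a multiple of 3 and 1 <= |m| < 1000 (x != 0)."""
    n = 3 * math.floor(math.log10(abs(x)) / 3)
    return x / 10 ** n, n

m, n = engineering(12.5e-9)   # the nanometre example from the text
print(m, n)                   # m is (approximately) 12.5, n is -9
```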
Calculators and computer programs typically present very large or small numbers using scientific notation, and some can be configured to uniformly present all numbers that way. Because superscript exponents like 10^7 can be inconvenient to display or type, the letter "E" or "e" (for "exponent") is often used to represent "times ten raised to the power of", so that the notation mEn for a decimal significand m and integer exponent n means the same as m × 10^n. For example, 6.022×10^23 is written as 6.022E23 or 6.022e23, and 1.6×10^−35 is written as 1.6E-35 or 1.6e-35. While common in computer output, this abbreviated version of scientific notation is discouraged for published documents by some style guides.[2][3]
Most popular programming languages – including Fortran, C/C++, Python, and JavaScript – use this "E" notation, which comes from Fortran and was present in the first version released for the IBM 704 in 1956.[4] The E notation was already used by the developers of the SHARE Operating System (SOS) for the IBM 709 in 1958.[5] Later versions of Fortran (at least since FORTRAN IV as of 1961) also use "D" to signify double-precision numbers in scientific notation,[6] and newer Fortran compilers use "Q" to signify quadruple precision.[7] The MATLAB programming language supports the use of either "E" or "D".
The ALGOL 60 (1960) programming language uses a subscript-ten "₁₀" character instead of the letter "E", for example: 6.022₁₀23.[8][9] This presented a challenge for computer systems which did not provide such a character, so ALGOL W (1966) replaced the symbol with a single quote, e.g. 6.022'+23,[10] and some Soviet ALGOL variants allowed the use of the Cyrillic letter "ю", e.g. 6.022ю+23[citation needed]. Subsequently, the ALGOL 68 programming language provided a choice of characters: E, e, \, ⊥, or ₁₀.[11] The ALGOL "₁₀" character was included in the Soviet GOST 10859 text encoding (1964), and was added to Unicode 5.2 (2009) as U+23E8 ⏨ DECIMAL EXPONENT SYMBOL.[12]
Some programming languages use other symbols. For instance, Simula uses & (or && for long), as in 6.022&23.[13] Mathematica supports the shorthand notation 6.022*^23 (reserving the letter E for the mathematical constant e).
The first pocket calculators supporting scientific notation appeared in 1972.[14] To enter numbers in scientific notation, calculators include a button labeled "EXP" or "×10^x", among other variants. The displays of pocket calculators of the 1970s did not show an explicit symbol between significand and exponent; instead, one or more digits were left blank (e.g. 6.022 23, as seen in the HP-25), or a pair of smaller and slightly raised digits were reserved for the exponent (e.g. 6.022²³, as seen in the Commodore PR100). In 1976, Hewlett-Packard calculator user Jim Davidson coined the term decapower for the scientific-notation exponent to distinguish it from "normal" exponents, and suggested the letter "D" as a separator between significand and exponent in typewritten numbers (for example, 6.022D23); these gained some currency in the programmable calculator user community.[15] The letters "E" or "D" were used as a scientific-notation separator by Sharp pocket computers released between 1987 and 1995, with "E" used for 10-digit numbers and "D" used for 20-digit double-precision numbers.[16] The Texas Instruments TI-83 and TI-84 series of calculators (1996–present) use a small capital E for the separator.[17]
In 1962, Ronald O. Whitaker of Rowco Engineering Co. proposed a power-of-ten system nomenclature where the exponent would be circled; e.g. 6.022 × 10^3 would be written as "6.022③".[18]
A significant figure is a digit in a number that adds to its precision. This includes all nonzero digits, zeroes between significant digits, and zeroes indicated to be significant. Leading and trailing zeroes are not significant digits, because they exist only to show the scale of the number. Unfortunately, this leads to ambiguity. The number 1230400 is usually read to have five significant figures: 1, 2, 3, 0, and 4, with the final two zeroes serving only as placeholders and adding no precision. The same number, however, would be used if the last two digits were also measured precisely and found to equal 0 – seven significant figures.
When a number is converted into normalized scientific notation, it is scaled down to a number between 1 and 10. All of the significant digits remain, but the placeholding zeroes are no longer required. Thus 1230400 would become 1.2304×10^6 if it had five significant digits. If the number were known to six or seven significant figures, it would be shown as 1.23040×10^6 or 1.230400×10^6. Thus, an additional advantage of scientific notation is that the number of significant figures is unambiguous.
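Python's "e" format specifier makes this concrete: its precision field is the number of digits after the decimal point, i.e. one less than the number of significant figures shown:

```python
x = 1230400
print(f"{x:.4e}")   # 1.2304e+06    (five significant figures)
print(f"{x:.6e}")   # 1.230400e+06  (seven significant figures)
```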
It is customary in scientific measurement to record all the definitely known digits from the measurement and to estimate at least one additional digit if there is any information at all available on its value. The resulting number contains more information than it would without the extra digit, which may be considered a significant digit because it conveys some information leading to greater precision in measurements and in aggregations of measurements (adding them or multiplying them together).
Additional information about precision can be conveyed through additional notation. It is often useful to know how exact the final digit or digits are. For instance, the accepted value of the mass of the proton can properly be expressed as 1.67262192369(51)×10^−27 kg, which is shorthand for (1.67262192369±0.00000000051)×10^−27 kg. However, it is still unclear whether the error (5.1×10^−37 in this case) is the maximum possible error, the standard error, or some other confidence interval.
In normalized scientific notation, in E notation, and in engineering notation, the space (which in typesetting may be represented by a normal-width space or a thin space) that is allowed only before and after "×" or in front of "E" is sometimes omitted, though it is less common to do so before the alphabetical character.[19]
Converting a number in these cases means either converting the number into scientific notation form, converting it back into decimal form, or changing the exponent part. None of these alter the actual number, only how it is expressed.
First, move the decimal separator a sufficient number of places, n, to put the number's value within a desired range, between 1 and 10 for normalized notation. If the decimal was moved to the left, append × 10^n; to the right, × 10^−n. To represent the number 1,230,400 in normalized scientific notation, the decimal separator would be moved 6 digits to the left and × 10^6 appended, resulting in 1.2304×10^6. The number −0.0040321 would have its decimal separator shifted 3 digits to the right instead of the left, yielding −4.0321×10^−3 as a result.
To convert a number from scientific notation to decimal notation, first remove the × 10^n on the end, then shift the decimal separator n digits to the right (positive n) or left (negative n). The number 1.2304×10^6 would have its decimal separator shifted 6 digits to the right and become 1,230,400, while −4.0321×10^−3 would have its decimal separator moved 3 digits to the left and be −0.0040321.
Conversion between different scientific notation representations of the same number is achieved by performing opposite operations on the two parts: multiplication or division by a power of ten on the significand, and the corresponding subtraction or addition on the exponent. The decimal separator in the significand is shifted x places to the left (or right) and x is added to (or subtracted from) the exponent, as shown below.
Given two numbers in scientific notation, x₀ = m₀ × 10^n₀ and x₁ = m₁ × 10^n₁:
Multiplication and division are performed using the rules for operation with exponents:
x₀x₁ = m₀m₁ × 10^(n₀+n₁) and x₀/x₁ = (m₀/m₁) × 10^(n₀−n₁)
Some examples are:
5.67×10^−5 × 2.34×10^2 ≈ 13.3×10^(−5+2) = 13.3×10^−3 = 1.33×10^−2
and
2.34×10^2 / 5.67×10^−5 ≈ 0.413×10^(2−(−5)) = 0.413×10^7 = 4.13×10^6
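These worked examples can be checked numerically (the ≈ signs reflect the rounding of the significands to three figures):

```python
import math

product = 5.67e-5 * 2.34e2    # exactly 1.32678e-2, quoted as ~1.33e-2
quotient = 2.34e2 / 5.67e-5   # ~4.12698e6, quoted as ~4.13e6

assert math.isclose(product, 1.33e-2, rel_tol=1e-2)
assert math.isclose(quotient, 4.13e6, rel_tol=1e-2)
print(product, quotient)
```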
Addition and subtraction require the numbers to be represented using the same exponent, so that the significands can be simply added or subtracted. First, rewrite one of the numbers so that both share the same exponent, shifting its significand accordingly.
Next, add or subtract the significands: x₀ ± x₁ = (m₀ ± m₁) × 10^n₀
An example: 2.34×10^−5 + 5.67×10^−6 = 2.34×10^−5 + 0.567×10^−5 = 2.907×10^−5
While base ten is normally used for scientific notation, powers of other bases can be used too,[25] base 2 being the next most commonly used one.
For example, in base-2 scientific notation, the number 1001b in binary (= 9d) is written as 1.001b × 2d^11b or 1.001b × 10b^11b using binary numbers (or shorter 1.001 × 10^11 if the binary context is obvious).[citation needed] In E notation, this is written as 1.001bE11b (or shorter: 1.001E11), with the letter "E" now standing for "times two (10b) to the power" here. In order to better distinguish this base-2 exponent from a base-10 exponent, a base-2 exponent is sometimes also indicated by using the letter "B" instead of "E",[26] a shorthand notation originally proposed by Bruce Alan Martin of Brookhaven National Laboratory in 1968,[27] as in 1.001bB11b (or shorter: 1.001B11). For comparison, the same number in decimal representation is 1.125 × 2^3, or 1.125B3 (still using decimal representation). Some calculators use a mixed representation for binary floating-point numbers, where the exponent is displayed as a decimal number even in binary mode, so the above becomes 1.001b × 10b^3d, or shorter, 1.001B3.[26]
This is closely related to the base-2 floating-point representation commonly used in computer arithmetic, and to the usage of IEC binary prefixes (e.g. 1B10 for 1×2^10 (kibi), 1B20 for 1×2^20 (mebi), 1B30 for 1×2^30 (gibi), 1B40 for 1×2^40 (tebi)).
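In Python, math.frexp exposes this base-2 decomposition; rescaling its 0.5 ≤ |m| < 1 convention gives the 1 ≤ |m| < 2 form used above:

```python
import math

def base2_sci(x: float):
    """Write x as (m, e) with x == m * 2**e and 1 <= |m| < 2 (x != 0)."""
    m, e = math.frexp(x)    # frexp normalizes to 0.5 <= |m| < 1
    return m * 2, e - 1

print(base2_sci(9.0))   # (1.125, 3), i.e. 1.001b x 10b^11b
```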
Similar to "B" (or "b"[28]), the letters "H"[26] (or "h"[28]) and "O"[26] (or "o",[28] or "C"[26]) are sometimes also used to indicate times 16 or 8 to the power, as in 1.25 = 1.40h × 10h^0h = 1.40H0 = 1.40h0, or 98000 = 2.7732o × 10o^5o = 2.7732o5 = 2.7732C5.[26]
Another similar convention to denote base-2 exponents is using a letter "P" (or "p", for "power"). In this notation the significand is always meant to be hexadecimal, whereas the exponent is always meant to be decimal.[29] This notation can be produced by implementations of the printf family of functions following the C99 specification and the (Single Unix Specification) IEEE Std 1003.1 POSIX standard, when using the %a or %A conversion specifiers.[29][30][31] Starting with C++11, C++ I/O functions could parse and print the P notation as well. The notation has been fully adopted by the language standard since C++17.[32] Apple's Swift supports it as well.[33] It is also required by the IEEE 754-2008 binary floating-point standard. Example: 1.3DEp42 represents 1.3DEh × 2^42.
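Python's float.hex and float.fromhex use exactly this C99-style "p" notation, with a hexadecimal significand and a decimal base-2 exponent:

```python
print((9.0).hex())               # 0x1.2000000000000p+3  (i.e. 1.125 x 2^3)

# The 1.3DEh x 2^42 example from the text: 1.3DE_16 = 0x13DE / 2^12,
# so the value equals 0x13DE x 2^30.
x = float.fromhex("0x1.3DEp42")
print(x == 0x13DE * 2.0 ** 30)   # True
```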
Engineering notation can be viewed as base-1000 scientific notation.
Sayre, David, ed. (1956-10-15). The FORTRAN Automatic Coding System for the IBM 704 EDPM: Programmer's Reference Manual (PDF). New York: Applied Science Division and Programming Research Department, International Business Machines Corporation. pp. 9, 27. Retrieved 2022-07-04.
"6. Extensions: 6.1 Extensions implemented in GNU Fortran: 6.1.8 Q exponent-letter". The GNU Fortran Compiler. 2014-06-12. Retrieved 2022-12-21.
"The Unicode Standard" (v. 7.0.0 ed.). Retrieved 2018-03-23.
Vanderburgh, Richard C., ed. (November 1976). "Decapower" (PDF). 52-Notes – Newsletter of the SR-52 Users Club. 1 (6). Dayton, OH: 1. Retrieved 2017-05-28. (NB. The term decapower was frequently used in subsequent issues of this newsletter up to at least 1978.)
電言板6 PC-U6000 PROGRAM LIBRARY[Telephone board 6 PC-U6000 program library] (in Japanese). Vol. 6. University Co-op. 1993.
"TI-83 Programmer's Guide" (PDF). Retrieved 2010-03-09.
"INTOUCH 4GL a Guide to the INTOUCH Language". Archived from the original on 2015-05-03.
|
https://en.wikipedia.org/wiki/P_notation
|
Poly1305 is a universal hash family designed by Daniel J. Bernstein in 2002 for use in cryptography.[1][2]
As with any universal hash family, Poly1305 can be used as a one-time message authentication code to authenticate a single message using a secret key shared between sender and recipient,[3] similar to the way that a one-time pad can be used to conceal the content of a single message using a secret key shared between sender and recipient.
Originally Poly1305 was proposed as part of Poly1305-AES,[2] a Carter–Wegman authenticator[4][5][1] that combines the Poly1305 hash with AES-128 to authenticate many messages using a single short key and distinct message numbers.
Poly1305 was later applied with a single-use key generated for each message using XSalsa20 in the NaCl crypto_secretbox_xsalsa20poly1305 authenticated cipher,[6] and then using ChaCha in the ChaCha20-Poly1305 authenticated cipher[7][8][1] deployed in TLS on the internet.[9]
Poly1305 takes a 16-byte secret key r and an L-byte message m and returns a 16-byte hash Poly1305_r(m).
To do this, Poly1305 interprets the 16-byte chunks of m as the coefficients of a polynomial, evaluates that polynomial at the secret point r modulo the prime 2^130 − 5, and reduces the result modulo 2^128.[2][1]
The coefficients c_i of the polynomial c_1 r^q + c_2 r^(q−1) + ⋯ + c_q r, where q = ⌈L/16⌉, are:
c_i = m[16i−16] + 2^8 m[16i−15] + 2^16 m[16i−14] + ⋯ + 2^120 m[16i−1] + 2^128,
with the exception that, if L ≢ 0 (mod 16), then:
c_q = m[16q−16] + 2^8 m[16q−15] + ⋯ + 2^(8(L mod 16)−8) m[L−1] + 2^(8(L mod 16)).
The secret key r = (r[0], r[1], r[2], …, r[15]) is restricted to have the bytes r[3], r[7], r[11], r[15] ∈ {0, 1, 2, …, 15}, i.e., to have their top four bits clear, and to have the bytes r[4], r[8], r[12] ∈ {0, 4, 8, …, 252}, i.e., to have their bottom two bits clear.
Thus there are 2^106 distinct possible values of r.
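The restriction on r (usually implemented by "clamping" an arbitrary 16-byte key) and the 2^106 count can be checked directly; this is a sketch, not taken from any particular library:

```python
def clamp_r(key16: bytes) -> int:
    """Force a 16-byte key into the restricted form, read little-endian."""
    r = bytearray(key16)
    for i in (3, 7, 11, 15):
        r[i] &= 0x0F        # clear the top four bits
    for i in (4, 8, 12):
        r[i] &= 0xFC        # clear the bottom two bits
    return int.from_bytes(r, "little")

# 4 bytes lose 4 bits each and 3 bytes lose 2 bits each: 22 bits are forced
# to zero, leaving 128 - 22 = 106 free bits, hence 2^106 values of r.
print(128 - 4 * 4 - 3 * 2)   # 106
```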
If s is a secret 16-byte string interpreted as a little-endian integer, then
a := (Poly1305_r(m) + s) mod 2^128
is called the authenticator for the message m.
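The hash and authenticator can be transcribed almost directly from the definitions above. The following sketch is deliberately simple, and not constant-time, so it is unsuitable for production use; it can be checked against the Poly1305 test vector in RFC 8439:

```python
P = (1 << 130) - 5   # the prime 2^130 - 5

def poly1305_mac(msg: bytes, r_key: bytes, s_key: bytes) -> bytes:
    """Compute (Poly1305_r(msg) + s) mod 2^128 per the definitions above."""
    # Clamp r as required, then read r and s as little-endian integers.
    r = bytearray(r_key)
    for i in (3, 7, 11, 15):
        r[i] &= 0x0F        # clear the top four bits
    for i in (4, 8, 12):
        r[i] &= 0xFC        # clear the bottom two bits
    r = int.from_bytes(r, "little")
    s = int.from_bytes(s_key, "little")

    # Horner evaluation of c_1 r^q + ... + c_q r modulo 2^130 - 5.  Each
    # coefficient is a 16-byte (or shorter, final) chunk read little-endian
    # with one extra high bit appended, exactly as in the c_i formulas.
    acc = 0
    for i in range(0, len(msg), 16):
        chunk = msg[i:i + 16]
        c = int.from_bytes(chunk, "little") + (1 << (8 * len(chunk)))
        acc = (acc + c) * r % P
    return ((acc + s) % (1 << 128)).to_bytes(16, "little")

# RFC 8439 test vector:
tag = poly1305_mac(b"Cryptographic Forum Research Group",
                   bytes.fromhex("85d6be7857556d337f4452fe42d506a8"),
                   bytes.fromhex("0103808afb0db2fd4abff6af4149f51b"))
print(tag.hex())
```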
If a sender and recipient share the 32-byte secret key (r, s) in advance, chosen uniformly at random, then the sender can transmit an authenticated message (a, m).
When the recipient receives an alleged authenticated message (a′, m′) (which may have been modified in transit by an adversary), they can verify its authenticity by testing whether
a′ =? (Poly1305_r(m′) + s) mod 2^128.
Without knowledge of (r, s), the adversary has probability 8⌈L/16⌉/2^106 of finding any (a′, m′) ≠ (a, m) that will pass verification.
However, the same key (r, s) must not be reused for two messages.
If the adversary learns
a1=(Poly1305r(m1)+s)mod2128,a2=(Poly1305r(m2)+s)mod2128,{\displaystyle {\begin{aligned}a_{1}&={\bigl (}\operatorname {Poly1305} _{r}(m_{1})+s{\bigr )}{\bmod {2}}^{128},\\a_{2}&={\bigl (}\operatorname {Poly1305} _{r}(m_{2})+s{\bigr )}{\bmod {2}}^{128},\end{aligned}}}
form1≠m2{\displaystyle m_{1}\neq m_{2}}, they can subtract
a1−a2≡Poly1305r(m1)−Poly1305r(m2)(mod2128){\displaystyle a_{1}-a_{2}\equiv \operatorname {Poly1305} _{r}(m_{1})-\operatorname {Poly1305} _{r}(m_{2}){\pmod {2^{128}}}}
and find a root of the resulting polynomial to recover a small list of candidates for the secret evaluation pointr{\displaystyle r}, and from that the secret pads{\displaystyle s}.
The adversary can then use this to forge additional messages with high probability.
The original Poly1305-AES proposal[2] uses the Carter–Wegman structure[4][5] to authenticate many messages by taking $a_i := H_r(m_i) + p_i$ to be the authenticator on the $i$th message $m_i$, where $H_r$ is a universal hash family and $p_i$ is an independent uniform random hash value that serves as a one-time pad to conceal it.
Poly1305-AES uses AES-128 to generate $p_i := \operatorname{AES}_k(i)$, where $i$ is encoded as a 16-byte little-endian integer.
Specifically, a Poly1305-AES key is a 32-byte pair $(r, k)$ of a 16-byte evaluation point $r$, as above, and a 16-byte AES key $k$.
The Poly1305-AES authenticator on a message $m_i$ is
$$a_i := (\operatorname{Poly1305}_r(m_i) + \operatorname{AES}_k(i)) \bmod 2^{128},$$
where 16-byte strings and integers are identified by little-endian encoding.
Note that $r$ is reused between messages.
Without knowledge of $(r, k)$, the adversary has low probability of forging any authenticated messages that the recipient will accept as genuine.
Suppose the adversary sees $C$ authenticated messages and attempts $D$ forgeries, and can distinguish $\operatorname{AES}_k$ from a uniform random permutation with advantage at most $\delta$.
(Unless AES is broken, $\delta$ is very small.)
The adversary's chance of success at a single forgery is at most:
$$\delta + \frac{(1 - C/2^{128})^{-(C+1)/2} \cdot 8D\lceil L/16\rceil}{2^{106}}.$$
The message number $i$ must never be repeated with the same key $(r, k)$.
If it is, the adversary can recover a small list of candidates for $r$ and $\operatorname{AES}_k(i)$, as with the one-time authenticator, and use that to forge messages.
The NaCl crypto_secretbox_xsalsa20poly1305 authenticated cipher uses a message number $i$ with the XSalsa20 stream cipher to generate a per-message key stream, the first 32 bytes of which are taken as a one-time Poly1305 key $(r_i, s_i)$ and the rest of which is used for encrypting the message.
It then uses Poly1305 as a one-time authenticator for the ciphertext of the message.[6] ChaCha20-Poly1305 does the same but with ChaCha instead of XSalsa20.[8] XChaCha20-Poly1305, using XChaCha20 instead of XSalsa20, has also been described.[10]
The security of Poly1305 and its derivatives against forgery follows from its bounded difference probability as a universal hash family:
If $m_1$ and $m_2$ are messages of up to $L$ bytes each, and $d$ is any 16-byte string interpreted as a little-endian integer, then
$$\Pr[\operatorname{Poly1305}_r(m_1) - \operatorname{Poly1305}_r(m_2) \equiv d \pmod{2^{128}}] \leq \frac{8\lceil L/16\rceil}{2^{106}},$$
where $r$ is a uniform random Poly1305 key.[2]: Theorem 3.3, p. 8
This property is sometimes called $\epsilon$-almost-Δ-universality over $\mathbb{Z}/2^{128}\mathbb{Z}$, or $\epsilon$-AΔU,[11] where $\epsilon = 8\lceil L/16\rceil/2^{106}$ in this case.
With a one-time authenticator $a = (\operatorname{Poly1305}_r(m) + s) \bmod 2^{128}$, the adversary's success probability for any forgery attempt $(a', m')$ on a message $m'$ of up to $L$ bytes is:
$$\begin{aligned}
\Pr[a' = \operatorname{Poly1305}_r(m') + s \mid a = \operatorname{Poly1305}_r(m) + s]
&= \Pr[a' = \operatorname{Poly1305}_r(m') + a - \operatorname{Poly1305}_r(m)] \\
&= \Pr[\operatorname{Poly1305}_r(m') - \operatorname{Poly1305}_r(m) = a' - a] \\
&\leq 8\lceil L/16\rceil/2^{106}.
\end{aligned}$$
Here arithmetic inside the $\Pr[\cdots]$ is taken to be in $\mathbb{Z}/2^{128}\mathbb{Z}$ for simplicity.
For NaCl crypto_secretbox_xsalsa20poly1305 and ChaCha20-Poly1305, the adversary's success probability at forgery is the same for each message independently as for a one-time authenticator, plus the adversary's distinguishing advantage $\delta$ against XSalsa20 or ChaCha as pseudorandom functions used to generate the per-message key.
In other words, the probability that the adversary succeeds at a single forgery after $D$ attempts on messages of up to $L$ bytes is at most:
$$\delta + \frac{8D\lceil L/16\rceil}{2^{106}}.$$
The security of Poly1305-AES against forgery follows from the Carter–Wegman–Shoup structure, which instantiates a Carter–Wegman authenticator with a permutation to generate the per-message pad.[12] If an adversary sees $C$ authenticated messages and attempts $D$ forgeries of messages of up to $L$ bytes, and if the adversary has distinguishing advantage at most $\delta$ against AES-128 as a pseudorandom permutation, then the probability that the adversary succeeds at any one of the $D$ forgeries is at most:[2]
$$\delta + \frac{(1 - C/2^{128})^{-(C+1)/2} \cdot 8D\lceil L/16\rceil}{2^{106}}.$$
For instance, assuming that messages are packets of up to 1024 bytes; that the attacker sees $2^{64}$ messages authenticated under a Poly1305-AES key; that the attacker attempts a whopping $2^{75}$ forgeries; and that the attacker cannot break AES with probability above $\delta$; then, with probability at least $0.999999 - \delta$, all of the $2^{75}$ forgeries are rejected.
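As a sanity check, these parameters can be plugged into the bound above (the function name is illustrative; `log1p` is used because $1 - C/2^{128}$ rounds to 1.0 in double precision):

```python
from math import ceil, exp, log1p

def poly1305_aes_bound(C: int, D: int, L: int) -> float:
    """The non-delta part of the forgery bound above:
    (1 - C/2^128)^(-(C+1)/2) * 8 * D * ceil(L/16) / 2^106."""
    factor = exp(-((C + 1) / 2) * log1p(-C / 2**128))
    return factor * 8 * D * ceil(L / 16) / 2**106

b = poly1305_aes_bound(C=2**64, D=2**75, L=1024)
# b comes out below 10^-6, matching the "at least 0.999999 - delta" claim
```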
Poly1305-AES can be computed at high speed on various CPUs: for an $n$-byte message, no more than $3.1n + 780$ Athlon cycles are needed,[2] for example.
The author has released optimized source code for Athlon, Pentium Pro/II/III/M, PowerPC, and UltraSPARC, in addition to non-optimized reference implementations in C and C++ as public domain software.[13]
Below is a list of cryptography libraries that support Poly1305:
https://en.wikipedia.org/wiki/Poly1305
Neural architecture search (NAS)[1][2] is a technique for automating the design of artificial neural networks (ANN), a widely used model in the field of machine learning. NAS has been used to design networks that are on par with or outperform hand-designed architectures.[3][4] Methods for NAS can be categorized according to the search space, search strategy and performance estimation strategy used.[1]
NAS is closely related to hyperparameter optimization[5] and meta-learning[6] and is a subfield of automated machine learning (AutoML).[7]
Reinforcement learning (RL) can underpin a NAS search strategy. Barret Zoph and Quoc Viet Le[3] applied NAS with RL targeting the CIFAR-10 dataset and achieved a network architecture that rivals the best manually designed architecture for accuracy, with an error rate of 3.65, 0.09 percent better and 1.05x faster than a related hand-designed model. On the Penn Treebank dataset, that model composed a recurrent cell that outperforms LSTM, reaching a test set perplexity of 62.4, or 3.6 perplexity better than the prior leading system. On the PTB character language modeling task it achieved 1.214 bits per character.[3]
Learning a model architecture directly on a large dataset can be a lengthy process. NASNet[4][8] addressed this issue by transferring a building block designed for a small dataset to a larger dataset. The design was constrained to use two types of convolutional cells to return feature maps that serve two main functions when convoluting an input feature map: normal cells, which return maps of the same extent (height and width), and reduction cells, in which the returned feature map height and width are reduced by a factor of two. For the reduction cell, the initial operation applied to the cell's inputs uses a stride of two (to reduce the height and width).[4] The learned aspects of the design included elements such as which lower layer(s) each higher layer took as input, the transformations applied at that layer, and how to merge multiple outputs at each layer. In the studied example, the best convolutional layer (or "cell") was designed for the CIFAR-10 dataset and then applied to the ImageNet dataset by stacking copies of this cell, each with its own parameters. The approach yielded accuracy of 82.7% top-1 and 96.2% top-5. This exceeded the best human-invented architectures at a cost of 9 billion fewer FLOPS, a reduction of 28%. The system continued to exceed the manually designed alternative at varying computation levels. The image features learned from image classification can be transferred to other computer vision problems. For example, for object detection, the learned cells integrated with the Faster R-CNN framework improved performance by 4.0% on the COCO dataset.[4]
In the so-called Efficient Neural Architecture Search (ENAS), a controller discovers architectures by learning to search for an optimal subgraph within a large graph. The controller is trained with policy gradient to select a subgraph that maximizes the validation set's expected reward. The model corresponding to the subgraph is trained to minimize a canonical cross-entropy loss. Because multiple child models share parameters, ENAS requires fewer GPU-hours than other approaches, and 1000-fold fewer than "standard" NAS. On CIFAR-10, the ENAS design achieved a test error of 2.89%, comparable to NASNet. On Penn Treebank, the ENAS design reached a test perplexity of 55.8.[9]
An alternative approach to NAS is based on evolutionary algorithms, which has been employed by several groups.[10][11][12][13][14][15][16] An evolutionary algorithm for neural architecture search generally performs the following procedure.[17] First, a pool consisting of different candidate architectures along with their validation scores (fitness) is initialised. At each step, the architectures in the candidate pool are mutated (e.g., a 3×3 convolution instead of a 5×5 convolution). Next, the new architectures are trained from scratch for a few epochs and their validation scores are obtained. This is followed by replacing the lowest-scoring architectures in the candidate pool with the better, newer architectures. This procedure is repeated multiple times, refining the candidate pool over time. Mutations in the context of evolving ANNs are operations such as adding or removing a layer, changing the type of a layer (e.g., from convolution to pooling), changing the hyperparameters of a layer, or changing the training hyperparameters. On CIFAR-10 and ImageNet, evolution and RL performed comparably, while both slightly outperformed random search.[13][12]
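The procedure above can be sketched with a toy fitness function standing in for "train for a few epochs and measure validation accuracy" (all names, the operation set, and the fitness function are illustrative, not taken from any cited system):

```python
import random

OPS = ["conv3x3", "conv5x5", "maxpool"]

def toy_fitness(arch):
    # Stand-in for training and validation: rewards one particular op.
    return sum(1 for op in arch if op == "conv3x3") / len(arch)

def mutate(arch):
    # Replace one randomly chosen op, e.g. a 3x3 conv instead of a 5x5 conv.
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

def evolve(pool_size=10, arch_len=6, steps=100, seed=0):
    random.seed(seed)
    # Initialise a pool of random architectures with their fitness scores.
    pool = [[random.choice(OPS) for _ in range(arch_len)] for _ in range(pool_size)]
    scored = [(toy_fitness(a), a) for a in pool]
    for _ in range(steps):
        parent = max(random.sample(scored, 3))[1]   # tournament selection
        child = mutate(parent)
        scored.append((toy_fitness(child), child))
        scored.remove(min(scored))                  # drop the worst candidate
    return max(scored)

best_score, best_arch = evolve()
```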
Bayesian optimization (BO), which has proven to be an efficient method for hyperparameter optimization, can also be applied to NAS. In this context, the objective function maps an architecture to its validation error after being trained for a number of epochs. At each iteration, BO uses a surrogate to model this objective function based on previously obtained architectures and their validation errors. One then chooses the next architecture to evaluate by maximizing an acquisition function, such as expected improvement, which balances exploration and exploitation. Acquisition-function maximization and objective-function evaluation are often computationally expensive for NAS and make the application of BO challenging in this context. Recently, BANANAS[18] has achieved promising results in this direction by introducing a high-performing instantiation of BO coupled with a neural predictor.
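For instance, the expected-improvement acquisition has a simple closed form when the surrogate returns a Gaussian prediction for each candidate's validation error (the candidate names and numbers below are purely illustrative):

```python
from math import erf, exp, pi, sqrt

def expected_improvement(mu, sigma, best):
    """Closed-form EI for minimisation: the candidate's predicted error is
    N(mu, sigma^2); best is the lowest validation error observed so far."""
    if sigma == 0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    pdf = exp(-z * z / 2) / sqrt(2 * pi)   # standard normal density
    cdf = 0.5 * (1 + erf(z / sqrt(2)))     # standard normal CDF
    return (best - mu) * cdf + sigma * pdf

# Hypothetical surrogate predictions (mean error, std) for three candidates:
candidates = {"archA": (0.06, 0.01), "archB": (0.05, 0.001), "archC": (0.08, 0.05)}
best_err = 0.055
pick = max(candidates, key=lambda a: expected_improvement(*candidates[a], best_err))
```

Here the candidate with the worst predicted mean error wins, because its large predicted uncertainty makes a big improvement plausible: the exploration side of the tradeoff.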
Another group used a hill climbing procedure that applies network morphisms, followed by short cosine-annealing optimization runs. The approach yielded competitive results, requiring resources of the same order of magnitude as training a single network. For example, on CIFAR-10, the method designed and trained a network with an error rate below 5% in 12 hours on a single GPU.[19]
While most approaches solely focus on finding architectures with maximal predictive performance, for most practical applications other objectives are relevant, such as memory consumption, model size or inference time (i.e., the time required to obtain a prediction). Because of that, researchers created multi-objective search.[16][20]
LEMONADE[16] is an evolutionary algorithm that adopted Lamarckism to efficiently optimize multiple objectives. In every generation, child networks are generated to improve the Pareto frontier with respect to the current population of ANNs.
Neural Architect[20] is claimed to be a resource-aware multi-objective RL-based NAS with network embedding and performance prediction. Network embedding encodes an existing network into a trainable embedding vector. Based on the embedding, a controller network generates transformations of the target network. A multi-objective reward function considers network accuracy, computational resources and training time. The reward is predicted by multiple performance simulation networks that are pre-trained or co-trained with the controller network. The controller network is trained via policy gradient. Following a modification, the resulting candidate network is evaluated by both an accuracy network and a training-time network. The results are combined by a reward engine that passes its output back to the controller network.
RL-based or evolution-based NAS requires thousands of GPU-days of searching/training to achieve state-of-the-art computer vision results, as described in the NASNet, mNASNet and MobileNetV3 papers.[4][21][22]
To reduce computational cost, many recent NAS methods rely on the weight-sharing idea.[23][24] In this approach, a single overparameterized supernetwork (also known as the one-shot model) is defined. A supernetwork is a very large directed acyclic graph (DAG) whose subgraphs are different candidate neural networks. Thus, in a supernetwork, the weights are shared among a large number of different sub-architectures that have edges in common, each of which is considered a path within the supernet. The essential idea is to train one supernetwork that spans many options for the final design, rather than generating and training thousands of networks independently. In addition to the learned parameters, a set of architecture parameters is learnt to express preference for one module over another. Such methods reduce the required computational resources to only a few GPU-days.
More recent works further combine this weight-sharing paradigm with a continuous relaxation of the search space,[25][26][27][28] which enables the use of gradient-based optimization methods. These approaches are generally referred to as differentiable NAS and have proven very efficient in exploring the search space of neural architectures. One of the most popular algorithms among the gradient-based methods for NAS is DARTS.[27] However, DARTS faces problems such as performance collapse due to an inevitable aggregation of skip connections, and poor generalization, which were tackled by many later algorithms.[29][30][31][32] Some methods[30][31] aim at robustifying DARTS and making the validation-accuracy landscape smoother by introducing a Hessian-norm-based regularisation and random smoothing/adversarial attack, respectively. The cause of the performance degradation was later analyzed from the perspective of architecture selection.[33]
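The continuous relaxation at the heart of DARTS can be illustrated on a single edge of the supernetwork: instead of choosing one operation, the edge outputs a softmax(α)-weighted mixture of all candidates, so the architecture parameters α receive gradients (toy 1-D operations below, not the actual DARTS search space):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Toy candidate operations for one edge (stand-ins for the real op set)
ops = [
    lambda x: x,                   # skip connection
    lambda x: np.maximum(x, 0.0),  # "ReLU" op
    lambda x: 0.5 * x,             # "conv" stand-in
]

alpha = np.array([0.1, 2.0, -1.0])  # architecture parameters, learned by gradient descent

def mixed_op(x):
    """Edge output: softmax(alpha)-weighted sum over all candidate ops,
    making the architecture choice differentiable in alpha."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

# After search, discretise: keep only the op with the largest alpha.
chosen = ops[int(np.argmax(alpha))]
```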
Differentiable NAS has been shown to produce competitive results using a fraction of the search time required by RL-based search methods. For example, FBNet (Facebook Berkeley Network) demonstrated that supernetwork-based search produces networks that outperform the speed-accuracy tradeoff curve of mNASNet and MobileNetV2 on the ImageNet image-classification dataset, using over 400x less search time than was used for mNASNet.[34][35][36] Further, SqueezeNAS demonstrated that supernetwork-based NAS produces neural networks that outperform the speed-accuracy tradeoff curve of MobileNetV3 on the Cityscapes semantic segmentation dataset, using over 100x less search time than the MobileNetV3 authors' RL-based search.[37][38]
Neural architecture search often requires large computational resources, due to its expensive training and evaluation phases, and consequently has a large carbon footprint. To overcome this limitation, NAS benchmarks[39][40][41][42] have been introduced, from which one can either query or predict the final performance of neural architectures in seconds. A NAS benchmark is defined as a dataset with a fixed train-test split, a search space, and a fixed training pipeline (hyperparameters). There are primarily two types of NAS benchmark: surrogate benchmarks and tabular benchmarks. A surrogate benchmark uses a surrogate model (e.g., a neural network) to predict the performance of an architecture from the search space. A tabular benchmark, on the other hand, records the actual performance of architectures trained to convergence. Both kinds are queryable and can be used to efficiently simulate many NAS algorithms using only a CPU, by querying the benchmark instead of training architectures from scratch.
https://en.wikipedia.org/wiki/Neural_architecture_search
In software design and engineering, the observer pattern is a software design pattern in which an object, named the subject, maintains a list of its dependents, called observers, and notifies them automatically of any state changes, usually by calling one of their methods.
It is often used for implementing distributed event-handling systems in event-driven software. In such systems, the subject is usually named a "stream of events" or "stream source of events", while the observers are called "sinks of events". The stream nomenclature alludes to a physical setup in which the observers are physically separated and have no control over the emitted events from the subject/stream source. This pattern thus suits any process by which data arrives from some input that is not available to the CPU at startup, but instead may arrive at arbitrary or indeterminate times (HTTP requests, GPIO data, user input from peripherals and distributed databases, etc.).
The observer design pattern is a behavioural pattern listed among the 23 well-known "Gang of Four" design patterns that address recurring design challenges in order to design flexible and reusable object-oriented software, yielding objects that are easier to implement, change, test and reuse.[1]
The observer pattern addresses the following problems:[2]
Defining a one-to-many dependency between objects by defining one object (subject) that updates the state of dependent objects directly is inflexible because it couples the subject to particular dependent objects. However, it might be applicable from a performance point of view or if the object implementation is tightly coupled (such as low-level kernel structures that execute thousands of times per second). Tightly coupled objects can be difficult to implement in some scenarios and are not easily reused because they refer to and are aware of many objects with different interfaces. In other scenarios, tightly coupled objects can be a better option because the compiler is able to detect errors at compile time and optimize the code at the CPU instruction level.
The sole responsibility of a subject is to maintain a list of observers and to notify them of state changes by calling their update() operation. The responsibility of observers is to register and unregister themselves with a subject (in order to be notified of state changes) and to update their state (to synchronize it with the subject's state) when they are notified. This makes subject and observers loosely coupled: they have no explicit knowledge of each other, and observers can be added and removed independently at run time. This notification-registration interaction is also known as publish-subscribe.
The observer pattern can cause memory leaks, known as the lapsed listener problem, because a basic implementation requires both explicit registration and explicit deregistration, as in the dispose pattern, since the subject holds strong references to the observers, keeping them alive. This can be prevented if the subject holds weak references to the observers.
Typically, the observer pattern is implemented so that the subject being observed is part of the object for which state changes are being observed (and communicated to the observers). This type of implementation is considered tightly coupled, forcing both the observers and the subject to be aware of each other and have access to their internal parts, creating possible issues of scalability, speed, message recovery and maintenance (also called event or notification loss), lack of flexibility in conditional dispersion, and possible hindrance to desired security measures. In some (non-polling) implementations of the publish-subscribe pattern, this is solved by creating a dedicated message queue server (and sometimes an extra message handler object) as an extra stage between the observer and the object being observed, thus decoupling the components. In these cases, the message queue server is accessed by the observers with the observer pattern, subscribing to certain messages and knowing (or not knowing, in some cases) only about the expected message, while knowing nothing about the message sender itself; the sender may also know nothing about the observers. Other implementations of the publish-subscribe pattern, which achieve a similar effect of notification and communication to interested parties, do not use the observer pattern.[3][4]
In early implementations of multi-window operating systems such as OS/2 and Windows, the terms "publish-subscribe pattern" and "event-driven software development" were used as synonyms for the observer pattern.[5]
The observer pattern, as described in the Design Patterns book, is a very basic concept and does not address removing interest in changes to the observed subject or special logic to be performed by the observed subject before or after notifying the observers. The pattern also does not deal with recording change notifications or guaranteeing that they are received. These concerns are typically handled in message-queueing systems, in which the observer pattern plays only a small part.
Related patterns include publish-subscribe, mediator and singleton.
The observer pattern may be used in the absence of publish-subscribe, as when model status is frequently updated. Frequent updates may cause the view to become unresponsive (e.g., by invoking many repaint calls); such observers should instead use a timer. Instead of becoming overloaded by change messages, the observer will cause the view to represent the approximate state of the model at a regular interval. This mode of observer is particularly useful for progress bars, in which the underlying operation's progress changes frequently.
In this UML class diagram, the Subject class does not update the state of dependent objects directly. Instead, Subject refers to the Observer interface (update()) for updating state, which makes the Subject independent of how the state of dependent objects is updated. The Observer1 and Observer2 classes implement the Observer interface by synchronizing their state with the subject's state.
The UML sequence diagram shows the runtime interactions: the Observer1 and Observer2 objects call attach(this) on Subject1 to register themselves. Assuming that the state of Subject1 changes, Subject1 calls notify() on itself. notify() calls update() on the registered Observer1 and Observer2 objects, which request the changed data (getState()) from Subject1 to update (synchronize) their state.
While the library classes java.util.Observer and java.util.Observable exist, they have been deprecated in Java 9 because the model they implemented was quite limited.
Below is an example written in Java that takes keyboard input and handles each input line as an event. When a string is supplied from System.in, the method notifyObservers() is then called in order to notify all observers of the event's occurrence, in the form of an invocation of their update methods.
This is a C++11 implementation.
A similar example inPython:
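Following the structure in the UML description above (attach, notify, update, getState), a minimal sketch is:

```python
class Subject:
    """Maintains a list of observers and notifies them of state changes."""
    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer):
        self._observers.append(observer)

    def detach(self, observer):
        self._observers.remove(observer)

    def set_state(self, state):
        self._state = state
        self._notify()

    def get_state(self):
        return self._state

    def _notify(self):
        for observer in self._observers:
            observer.update(self)


class Observer:
    def __init__(self, name):
        self.name = name
        self.state = None

    def update(self, subject):
        # Pull the changed data from the subject to synchronise state.
        self.state = subject.get_state()
        print(f"{self.name} observed: {self.state}")


subject = Subject()
a, b = Observer("Observer1"), Observer("Observer2")
subject.attach(a)
subject.attach(b)
subject.set_state("event 1")   # both observers print and record the new state
```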
C# provides the IObservable<T>[7] and IObserver<T>[8] interfaces as well as documentation on how to implement the design pattern.[9]
JavaScript had a deprecated Object.observe function that was a more accurate implementation of the observer pattern.[10] It would fire events upon change to the observed object. Without the deprecated Object.observe function, the pattern may be implemented with more explicit code.[11]
https://en.wikipedia.org/wiki/Observer_pattern
A biometric passport (also known as an electronic passport, e-passport or a digital passport) is a passport that has an embedded electronic microprocessor chip, which contains biometric information that can be used to authenticate the identity of the passport holder. It uses contactless smart card technology, including a microprocessor chip (computer chip) and antenna (for both power to the chip and communication) embedded in the front or back cover, or centre page, of the passport. The passport's critical information is printed on the data page of the passport, repeated on the machine readable lines and stored in the chip. Public key infrastructure (PKI) is used to authenticate the data stored electronically in the passport chip, making it expensive and difficult to forge when all security mechanisms are fully and correctly implemented.
Most countries are issuing biometric passports to their citizens. Malaysia was the first country to issue biometric passports, in 1998.[1] By the end of 2008, 60 countries were issuing such passports,[2] which increased to over 150 by mid-2019.[3]
The currently standardised biometrics used for this type of identification system are facial recognition, fingerprint recognition, and iris recognition. These were adopted after assessment of several different kinds of biometrics, including retinal scan. Document and chip characteristics are documented in the International Civil Aviation Organization's (ICAO) Doc 9303 (ICAO 9303).[4] The ICAO defines the biometric file formats and communication protocols to be used in passports. Only the digital image (usually in JPEG or JPEG 2000 format) of each biometric feature is actually stored in the chip. The comparison of biometric features is performed outside the passport chip by electronic border control systems (e-borders). To store biometric data on the contactless chip, it includes a minimum of 32 kilobytes of EEPROM storage memory, and runs on an interface in accordance with the ISO/IEC 14443 international standard, amongst others. These standards are intended to ensure interoperability between different countries and different manufacturers of passport books.
Some national identity cards, such as those from Albania, Brazil, the Netherlands, and Saudi Arabia, are fully ICAO 9303 compliant biometric travel documents. However, others, such as the United States passport card, are not.[5]
Biometric passports have protection mechanisms to avoid and/or detect attacks.
To assure interoperability and functionality of the security mechanisms listed above, ICAO and the German Federal Office for Information Security (BSI) have specified several test cases. These test specifications are updated with every new protocol and cover details ranging from the paper used to the chip that is included.[9]
Since the introduction of biometric passports, several attacks have been presented and demonstrated.
Privacy proponents in many countries question and protest the lack of information about exactly what the passports' chip will contain, and whether it affects civil liberties. The main problem they point out is that data on the passports can be transferred with wireless RFID technology, which can become a major vulnerability. Although this could allow ID-check computers to obtain a person's information without a physical connection, it may also allow anyone with the necessary equipment to perform the same task. If the personal information and passport numbers on the chip are not encrypted, the information might wind up in the wrong hands.
On 15 December 2006, the BBC published an article[26] on the British ePassport, citing the above stories and adding that:
and adding that the Future of Identity in the Information Society (FIDIS) network's research team (a body of IT security experts funded by the European Union) has "also come out against the ePassport scheme... [stating that] European governments have forced a document on its people that dramatically decreases security and increases the risk of identity theft."[27]
Most security measures are designed against untrusted citizens (the "provers"), but the scientific security community has recently also addressed the threats from untrustworthy verifiers, such as corrupt governmental organizations, or nations using poorly implemented, insecure electronic systems.[28] New cryptographic solutions such as private biometrics are being proposed to mitigate threats of mass identity theft. These are under scientific study, but not yet implemented in biometric passports.
It was planned that, except for Denmark and Ireland, EU passports would have digital imaging and fingerprint scan biometrics placed on their RFID chips.[116] This combination of biometrics aims to strengthen protection against fraudulent identification papers. Technical specifications for the new passports have been established by the European Commission.[117] The specifications are binding for the Schengen agreement parties, i.e. the EU countries except Ireland, and the four European Free Trade Association countries: Iceland, Liechtenstein,[118][119] Norway and Switzerland.[120] These countries were obliged to implement machine-readable facial images in passports by 28 August 2006, and fingerprints by 26 June 2009.[121] The European Data Protection Supervisor has stated that the current legal framework fails to "address all the possible and relevant issues triggered by the inherent imperfections of biometric systems".[122]
Irish biometric passports only used a digital image and not fingerprinting. German passports printed after 1 November 2007 contain two fingerprints, one from each hand, in addition to a digital photograph. Romanian passports will also contain two fingerprints, one from each hand. The Netherlands also takes fingerprints and was[123] the only EU member that had plans to store these fingerprints centrally.[124] According to EU requirements, only nations that are signatories to the Schengen acquis are required to add fingerprint biometrics.[125]
The ICAO standard specifies a 35 × 45 mm image of adequate resolution, with the following requirements:
Though some countries, such as the USA, use a 2 × 2 inch (51 × 51 mm) photo format, they usually crop it closer to the 35:45 aspect ratio when issuing a passport.
https://en.wikipedia.org/wiki/E-passport
XScale is a microarchitecture for central processing units initially designed by Intel, implementing the ARM architecture (version 5) instruction set. XScale comprises several distinct families: IXP, IXC, IOP, PXA and CE (see more below), with some later models designed as a system-on-a-chip (SoC). Intel sold the PXA family to Marvell Technology Group in June 2006.[1] Marvell then extended the brand to include processors with other microarchitectures, such as Arm's Cortex.
The XScale architecture is based on the ARMv5TE ISA without the floating-point instructions. XScale uses a seven-stage integer and an eight-stage memory super-pipelined microarchitecture. It is the successor to the Intel StrongARM line of microprocessors and microcontrollers, which Intel acquired from DEC's Digital Semiconductor division as part of a settlement of a lawsuit between the two companies. Intel used the StrongARM to replace its ailing line of outdated RISC processors, the i860 and i960.
All generations of XScale are 32-bit ARMv5TE processors manufactured with a 0.18 μm or 0.13 μm (as in IXP43x parts) process and have a 32 KB data cache and a 32 KB instruction cache. First- and second-generation XScale multi-core processors also have a 2 KB mini data cache (claimed to "avoid 'thrashing' of the D-Cache for frequently changing data streams"[2]). Products based on the third-generation XScale have up to 512 KB of unified L2 cache.[3]
The XScale core is used in a number of microcontroller families manufactured by Intel and Marvell:
There are also standalone processors: the 80200 and 80219 (targeted primarily at PCI applications).
PXA system-on-a-chip (SoC) products were designed in Austin, Texas. The code names for this product line are small towns in Texas, primarily near deer-hunting leases frequented by the Intel XScale core and mobile-phone SoC marketing team. PXA SoC products were popular in smartphones and PDAs (running Windows Mobile, Symbian OS or Palm OS) from 2000 to 2006.[4]
The PXA210 was Intel's entry-level XScale, targeted at mobile phone applications. It was released with the PXA250 in February 2002, clocked at 133 MHz and 200 MHz.
The PXA25x family (code-named Cotulla) consists of the PXA250 and PXA255. The PXA250, released in February 2002, was Intel's first generation of XScale processors, offered in three clock speeds: 200 MHz, 300 MHz and 400 MHz. In March 2003, revision C0 of the PXA250 was renamed PXA255. The main differences were a doubled internal bus speed (100 MHz to 200 MHz) for faster data transfer, lower core voltage (only 1.3 V at 400 MHz) for lower power consumption, and write-back functionality for the data cache, the lack of which had severely impaired performance on the PXA250.
Intel XScale core features:
The PXA26x family (code-named Dalhart) consists of the PXA260 and PXA261–PXA263. The PXA260 is a stand-alone processor clocked at the same frequencies as the PXA25x, but features a TPBGA package about 53% smaller than the PXA25x's PBGA package. The PXA261–PXA263 are the same as the PXA260 but have Intel StrataFlash memory stacked on top of the processor in the same package: 16 MB of 16-bit memory in the PXA261, 32 MB of 16-bit memory in the PXA262 and 32 MB of 32-bit memory in the PXA263. The PXA26x family was released in March 2003.
The PXA27x family (code-named Bulverde) consists of the PXA270 and PXA271–PXA272 processors. This revision is a major update to the XScale family of processors. The PXA270 is clocked at four different speeds: 312 MHz, 416 MHz, 520 MHz and 624 MHz; it is a stand-alone processor with no packaged memory. The PXA271 can be clocked at 13, 104, 208 or 416 MHz and has 32 MB of 16-bit stacked StrataFlash memory and 32 MB of 16-bit SDRAM in the same package. The PXA272 can be clocked at 312 MHz, 416 MHz or 520 MHz and has 64 MB of 32-bit stacked StrataFlash memory.
Intel also added many new technologies to the PXA27x family such as:
The PXA27x family was released in April 2004. Along with it, Intel released the 2700G embedded graphics co-processor (code-named Marathon).
In August 2005, Intel announced the successor to Bulverde, code-named Monahans.
Intel demonstrated it playing back high-definition encoded video on a PDA screen.
The new processor was shown clocked at 1.25 GHz, but Intel said it offered only a 25% increase in performance (800 MIPS for the 624 MHz PXA270 versus 1000 MIPS for the 1.25 GHz Monahans). An announced successor to the 2700G graphics processor, code-named Stanwood, has since been canceled; some features of Stanwood were integrated into Monahans. For extra graphics capabilities, Intel recommends third-party chips such as the Nvidia GoForce chip family.
In November 2006, Marvell Semiconductor officially introduced the Monahans family as the Marvell PXA320, PXA300 and PXA310.[9] The PXA320 is currently shipping in high volume and is scalable up to 806 MHz. The PXA300 and PXA310 deliver performance "scalable to 624 MHz" and are software-compatible with the PXA320.
Code-named Manitoba, the Intel PXA800F was a SoC introduced by Intel in 2003 for use in GSM- and GPRS-enabled mobile phones. The chip was built around an XScale processor core of the kind used in PDAs, clocked at 312 MHz and manufactured with a 0.13 μm process, with 4 MB of integrated flash memory and a digital signal processor.[10]
A prototype board with the chip was demonstrated during the Intel Developer Forum.[11] Intel noted it was in talks with leading mobile phone manufacturers, such as Nokia, Motorola, Samsung, Siemens and Sony Ericsson, about incorporating Manitoba into their phones.[12]
The O2 XM, released in 2005, was the only mobile phone with a documented use of the Manitoba chip.[13] An Intel executive stated that the chip version used in the phone was reworked to be less expensive than the initial one.[14]
The PXA90x, code-named Hermon, was a successor to Manitoba with 3G support. The PXA90x is built using a 130 nm process.[15] The SoC continued to be marketed by Marvell after it acquired Intel's XScale business.[16][17]
The PXA16x is a processor designed by Marvell, combining the earlier Intel-designed PXA SoC components with a new ARMv5TE CPU core named Mohawk (or PJ1) from Marvell's Sheeva family, instead of an XScale or licensed ARM design. The CPU core is derived from the Feroceon core used in Marvell's embedded Kirkwood product line, but extended for instruction-level compatibility with the XScale iwMMXt instructions.
The PXA16x delivers strong performance at a mass-market price point for cost-sensitive consumer and embedded markets such as digital picture frames, e-readers, multifunction-printer user interface (UI) displays, interactive VoIP phones, IP surveillance cameras, and home-control devices.[18]
The PXA930 and PXA935 processor series were again built on the Sheeva microarchitecture developed by Marvell, but upgraded to ARMv7 instruction-set compatibility.[19] This core is a so-called Tri-core architecture[20] code-named Tavor; "Tri-core" means it supports the ARMv5TE, ARMv6 and ARMv7 instruction sets.[20][21] This new architecture was a significant leap from the old XScale architecture. The PXA930 uses 65 nm technology,[22] while the PXA935 is built using a 45 nm process.[21]
The PXA930 is used in the BlackBerry Bold 9700.
Little is known about the PXA940, although it is known to be ARM Cortex-A8 compliant.[23] It is used in the BlackBerry Torch 9800[24][25] and is built using 45 nm technology.
After XScale and Sheeva, the PXA98x uses a third CPU core design, this time licensed directly from ARM, in the form of dual-core Cortex-A9 application processors[26] used by devices such as the Samsung Galaxy Tab 3 7.0.[27]
It is a quad-core Cortex-A7 application processor with a Vivante GPU.[28]
The IXC1100 processor features clock speeds at 266, 400, and 533 MHz, a 133 MHz bus, 32 KB of instruction cache, 32 KB of data cache, and 2 KB of mini-data cache. It is also designed for low power consumption, using 2.4 W at 533 MHz. The chip comes in the 35 mm PBGA package.
The IOP line of processors is designed to allow computers and storage devices to transfer data and increase performance by offloading I/O functionality from the device's main CPU. The IOP3xx processors are based on the XScale architecture and designed to replace the older 80219 and i960 family of chips. Ten different IOP processors are currently available: IOP303, IOP310, IOP315, IOP321, IOP331, IOP332, IOP333, IOP341, IOP342 and IOP348. Clock speeds range from 100 MHz to 1.2 GHz. The processors also differ in PCI bus type, PCI bus speed, memory type, maximum addressable memory, and the number of processor cores.
The XScale core is used in the second generation of Intel's IXP network processor line, while the first generation used StrongARM cores. The IXP network processor family ranges from solutions aimed at small and medium office network applications (the IXP4xx) to high-performance network processors such as the IXP2850, capable of sustaining up to OC-192 line rates. In IXP4xx devices the XScale core serves as both a control- and data-plane processor, providing both system control and data processing. In IXP2xxx devices the XScale typically provides control-plane functionality only, with data processing performed by the microengines; examples of such control-plane tasks include routing table updates, microengine control, and memory management.
In April 2007, Intel announced an XScale-based processor targeting consumer electronics markets, the Intel CE 2110 (code-named Olo River).[29]
XScale microprocessors were used in RIM's BlackBerry handhelds, the Dell Axim family of Pocket PCs, most of the Zire, Treo and Tungsten handheld lines by Palm, later versions of the Sharp Zaurus, the Motorola A780, the Acer n50, the Compaq iPaq 3900 series, and other PDAs. It was the CPU in the Iyonix PC desktop computer running RISC OS, and in the NSLU2 ("Slug") running Linux. XScale was also used in devices such as portable video players (PVPs) and portable media centres (PMCs), including the Creative Zen portable media player and the Amazon Kindle e-book reader, and in industrial embedded systems.
At the other end of the market, the XScale IOP33x storage I/O processors are used in some Intel Xeon-based server platforms.
On June 27, 2006, the sale of Intel's XScale PXA mobile processor assets was announced. Intel agreed to sell the XScale PXA business to Marvell Technology Group for an estimated $600 million in cash and the assumption of unspecified liabilities. The move was intended to let Intel focus its resources on its core x86 and server businesses. Marvell holds a full architecture license for ARM, allowing it to design chips that implement the ARM instruction set, not merely license a processor core.[30]
The acquisition was completed on November 9, 2006. Intel was expected to continue manufacturing XScale processors until Marvell secured other manufacturing facilities, and it continued manufacturing and selling the IXP and IOP processors, which were not part of the deal.[31]
The XScale effort at Intel was initiated by the purchase of the StrongARM division from Digital Equipment Corporation in 1998.[32] Intel still holds an ARM license even after the sale of XScale;[32] this license is at the architectural level.[33]
https://en.wikipedia.org/wiki/XScale
Latent semantic analysis (LSA) is a technique in natural language processing, in particular distributional semantics, for analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text (the distributional hypothesis). A matrix containing word counts per document (rows represent unique words and columns represent each document) is constructed from a large piece of text, and a mathematical technique called singular value decomposition (SVD) is used to reduce the number of rows while preserving the similarity structure among columns. Documents are then compared by the cosine similarity between any two columns: values close to 1 represent very similar documents, while values close to 0 represent very dissimilar documents.[1]
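The pipeline described above can be sketched end-to-end on a hypothetical toy corpus (NumPy used for the SVD; all data here is illustrative, not from any real collection):

```python
# Minimal LSA sketch: build a word-count matrix, reduce it with a
# truncated SVD, then compare documents with cosine similarity in
# the reduced space.
import numpy as np

docs = [
    "human machine interface",
    "machine interface design",
    "graph tree minors",
]
vocab = sorted({w for d in docs for w in d.split()})

# Term-document matrix: rows = unique words, columns = documents.
X = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# SVD, keeping k = 2 dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dim vector per document

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents 0 and 1 share vocabulary, so their similarity is high;
# document 2 is about a different topic.
print(cosine(doc_vecs[0], doc_vecs[1]), cosine(doc_vecs[0], doc_vecs[2]))
```

With two of the three documents sharing most of their vocabulary, their reduced vectors end up nearly parallel, while the off-topic document stays nearly orthogonal.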
An information retrieval technique using latent semantic structure was patented in 1988[2] by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum and Lynn Streeter. In the context of its application to information retrieval, it is sometimes called latent semantic indexing (LSI).[3]
LSA can use a document-term matrix describing the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms and whose columns correspond to documents. A typical weighting of the matrix elements is tf-idf (term frequency–inverse document frequency): the weight of an element is proportional to the number of times the term appears in the document, with rare terms upweighted to reflect their relative importance.
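As an illustration, a minimal tf-idf weighting might look like this (a sketch with made-up counts; real systems use one of several tf-idf variants and often add normalization):

```python
# Simple tf-idf weighting of a small term-document count matrix:
# raw counts times inverse document frequency, so terms that occur
# in fewer documents receive larger weights.
import numpy as np

# counts[i, j] = occurrences of term i in document j (toy data)
counts = np.array([
    [2, 0, 1],   # a term appearing in two of three documents
    [0, 3, 0],   # a rarer term concentrated in one document
], dtype=float)

n_docs = counts.shape[1]
df = (counts > 0).sum(axis=1)     # document frequency of each term
idf = np.log(n_docs / df)         # rare terms get larger weights
tfidf = counts * idf[:, None]
print(tfidf)
```

Here the rare term's single heavy occurrence outweighs the common term's occurrences, which is exactly the upweighting of rare terms described above.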
This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.
After the construction of the occurrence matrix, LSA finds a low-rank approximation[5] to the term-document matrix. There can be various reasons for this approximation:
The consequence of the rank lowering is that some dimensions are combined and depend on more than one term:
This mitigates the problem of identifying synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also partially mitigates the problem with polysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend either simply to cancel out or, at worst, to be smaller than components in the directions corresponding to the intended sense.
Let X be a matrix where element (i, j) describes the occurrence of term i in document j (this can be, for example, the frequency). X will look like this:
Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document:
Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term:
Now the dot product t_i^T t_p between two term vectors gives the correlation between the terms over the set of documents. The matrix product X X^T contains all these dot products: element (i, p) (which is equal to element (p, i)) contains the dot product t_i^T t_p (= t_p^T t_i). Likewise, the matrix X^T X contains the dot products between all the document vectors, giving their correlation over the terms: d_j^T d_q = d_q^T d_j.
Now, from the theory of linear algebra, there exists a decomposition of X such that U and V are orthogonal matrices and Σ is a diagonal matrix. This is called a singular value decomposition (SVD): X = U Σ V^T.
The matrix products giving us the term and document correlations then become
Since Σ Σ^T and Σ^T Σ are diagonal, we see that U must contain the eigenvectors of X X^T, while V must contain the eigenvectors of X^T X. Both products have the same non-zero eigenvalues, given by the non-zero entries of Σ Σ^T or, equally, by the non-zero entries of Σ^T Σ. Now the decomposition looks like this:
The values σ_1, …, σ_l are called the singular values, and u_1, …, u_l and v_1, …, v_l the left and right singular vectors.
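The relationship between the SVD factors and the eigenvectors can be checked numerically on a small random matrix:

```python
# Numeric check: the columns of U from the SVD of X are eigenvectors
# of X X^T, with eigenvalues equal to the squared singular values
# (toy random matrix; NumPy).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# X X^T acting on a column of U scales it by sigma_i ** 2.
for i in range(len(s)):
    lhs = X @ X.T @ U[:, i]
    rhs = (s[i] ** 2) * U[:, i]
    assert np.allclose(lhs, rhs)
print("columns of U are eigenvectors of X X^T")
```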
Notice that the only part of U that contributes to t_i is the i-th row. Let this row vector be called t̂_i^T. Likewise, the only part of V^T that contributes to d_j is the j-th column, d̂_j. These are not the eigenvectors, but depend on all the eigenvectors.
It turns out that when you select the k largest singular values, and their corresponding singular vectors from U and V, you get the rank-k approximation to X with the smallest error in the Frobenius norm. But more importantly, the term and document vectors can now be treated as a "semantic space". The row "term" vector t̂_i^T then has k entries mapping it to a lower-dimensional space. These new dimensions do not relate to any comprehensible concepts; they are a lower-dimensional approximation of the higher-dimensional space. Likewise, the "document" vector d̂_j is an approximation in this lower-dimensional space. We write this approximation as X_k = U_k Σ_k V_k^T.
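The optimality of the rank-k truncation (the Eckart–Young result) can be illustrated numerically; the Frobenius error of the truncation equals the norm of the discarded singular values:

```python
# Sketch of the rank-k approximation: truncating the SVD to the k
# largest singular values gives the closest rank-k matrix to X in
# the Frobenius norm; the error equals the norm of the discarded
# singular values (toy random matrix; NumPy).
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2
Xk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # rank-k approximation
err = np.linalg.norm(X - Xk, "fro")
assert np.isclose(err, np.sqrt((s[k:] ** 2).sum()))
print("rank-%d Frobenius error: %.4f" % (k, err))
```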
You can now do the following:
To do the latter, you must first translate your query into the low-dimensional space, using the same transformation that you use on your documents: d̂_j = Σ_k^{-1} U_k^T d_j.
Note here that the inverse of the diagonal matrix Σ_k may be found by inverting each nonzero diagonal entry.
This means that if you have a query vector q, you must do the translation q̂ = Σ_k^{-1} U_k^T q before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo-term vectors: t̂_i = Σ_k^{-1} V_k^T t_i.
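The query translation q̂ = Σ_k^{-1} U_k^T q can be sketched with NumPy (toy matrices; the same transformation applies to any term-space vector):

```python
# Projecting a query into the k-dimensional LSA space with
# q_hat = inv(Sigma_k) @ U_k.T @ q (toy data; NumPy).
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((5, 4))                    # term-document matrix (toy)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
Uk, Sk, Vtk = U[:, :k], np.diag(s[:k]), Vt[:k]

q = np.array([1.0, 0.0, 1.0, 0.0, 0.0])  # query as a term-count vector
q_hat = np.linalg.inv(Sk) @ Uk.T @ q      # query in the k-dim space

# Sanity check: projecting an original document column this way
# reproduces that document's low-dimensional vector (its column
# of V_k^T), since X = U @ diag(s) @ Vt exactly.
d0_hat = np.linalg.inv(Sk) @ Uk.T @ X[:, 0]
assert np.allclose(d0_hat, Vtk[:, 0])
print(q_hat)
```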
The new low-dimensional space typically can be used to:
Synonymy and polysemy are fundamental problems in natural language processing:
LSA has been used to assist in performing prior-art searches for patents.[9]
The use of latent semantic analysis has been prevalent in the study of human memory, especially in the areas of free recall and memory search. There is a positive correlation between the semantic similarity of two words (as measured by LSA) and the probability that the words will be recalled one after another in free-recall tasks using study lists of random common nouns. Researchers also noted that in these situations the inter-response time between similar words was much shorter than between dissimilar words. These findings are referred to as the semantic proximity effect.[10]
When participants made mistakes in recalling studied items, these mistakes tended to be items that were more semantically related to the desired item and found in a previously studied list. These prior-list intrusions, as they have come to be called, seem to compete with items on the current list for recall.[11]
Another model, termed Word Association Spaces (WAS), is also used in memory studies; it was built by collecting free-association data from a series of experiments and includes measures of word relatedness for over 72,000 distinct word pairs.[12]
The SVD is typically computed using large-matrix methods (for example, Lanczos methods), but may also be computed incrementally and with greatly reduced resources via a neural-network-like approach, which does not require the large, full-rank matrix to be held in memory.[13] A fast, incremental, low-memory, large-matrix SVD algorithm has been developed.[14] MATLAB[15] and Python[16] implementations of these fast algorithms are available. Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's algorithm (2003) provides an exact solution.
In recent years progress has been made to reduce the computational complexity of SVD; for instance, by using a parallel ARPACK algorithm to perform parallel eigenvalue decomposition it is possible to speed up the SVD computation cost while providing comparable prediction quality.[17]
Some of LSA's drawbacks include:
In semantic hashing,[21] documents are mapped to memory addresses by means of a neural network in such a way that semantically similar documents are located at nearby addresses. The deep neural network essentially builds a graphical model of the word-count vectors obtained from a large set of documents. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash coding to approximate matching is much faster than locality-sensitive hashing.
Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between terms that occur in similar contexts.[22]
LSI is also an application of correspondence analysis, a multivariate statistical technique developed by Jean-Paul Benzécri[23] in the early 1970s, to a contingency table built from word counts in documents.
Called "latent semantic indexing" because of its ability to correlate semantically related terms that are latent in a collection of text, it was first applied to text at Bellcore in the late 1980s. The method, also called latent semantic analysis (LSA), uncovers the underlying latent semantic structure in the usage of words in a body of text and how it can be used to extract the meaning of the text in response to user queries, commonly referred to as concept searches. Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria even if the results do not share a specific word or words with the search criteria.
LSI helps overcome synonymy by increasing recall, one of the most problematic constraints of Boolean keyword queries and vector space models.[18] Synonymy is often the cause of mismatches between the vocabulary used by the authors of documents and that used by the users of information retrieval systems.[24] As a result, Boolean or keyword queries often return irrelevant results and miss relevant information.
LSI is also used to perform automated document categorization. In fact, several experiments have demonstrated a number of correlations between the way LSI and humans process and categorize text.[25] Document categorization is the assignment of documents to one or more predefined categories based on their similarity to the conceptual content of the categories.[26] LSI uses example documents to establish the conceptual basis for each category. During categorization, the concepts contained in the documents being categorized are compared to the concepts contained in the example items, and a category (or categories) is assigned to each document based on the similarities between the concepts they contain.
Dynamic clustering based on the conceptual content of documents can also be accomplished using LSI. Clustering is a way to group documents based on their conceptual similarity to each other without using example documents to establish the conceptual basis for each cluster. This is very useful when dealing with an unknown collection of unstructured text.
Because it uses a strictly mathematical approach, LSI is inherently independent of language. This enables LSI to elicit the semantic content of information written in any language without requiring auxiliary structures such as dictionaries and thesauri. LSI can also perform cross-linguistic concept searching and example-based categorization. For example, queries can be made in one language, such as English, and conceptually similar results will be returned even if they are composed of an entirely different language or of multiple languages.[citation needed]
LSI is not restricted to working only with words. It can also process arbitrary character strings. Any object that can be expressed as text can be represented in an LSI vector space. For example, tests with MEDLINE abstracts have shown that LSI is able to effectively classify genes based on conceptual modeling of the biological information contained in the titles and abstracts of the MEDLINE citations.[27]
LSI automatically adapts to new and changing terminology, and has been shown to be very tolerant of noise (i.e., misspelled words, typographical errors, unreadable characters, etc.).[28]This is especially important for applications using text derived from Optical Character Recognition (OCR) and speech-to-text conversion. LSI also deals effectively with sparse, ambiguous, and contradictory data.
Text does not need to be in sentence form for LSI to be effective. It can work with lists, free-form notes, email, Web-based content, etc. As long as a collection of text contains multiple terms, LSI can be used to identify patterns in the relationships between the important terms and concepts contained in the text.
LSI has proven to be a useful solution to a number of conceptual matching problems.[29][30]The technique has been shown to capture key relationship information, including causal, goal-oriented, and taxonomic information.[31]
LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text. In general, the process involves constructing a weighted term-document matrix, performing a singular value decomposition on the matrix, and using the resulting matrices to identify the concepts contained in the text.
LSI begins by constructing a term-document matrix, A, to identify the occurrences of the m unique terms within a collection of n documents. In a term-document matrix, each term is represented by a row and each document by a column, with each matrix cell, a_ij, initially representing the number of times the associated term appears in the indicated document, tf_ij. This matrix is usually very large and very sparse.
Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data. The weighting functions transform each cell, a_ij of A, into the product of a local term weight, l_ij, which describes the relative frequency of a term in a document, and a global weight, g_i, which describes the relative frequency of the term within the entire collection of documents.
Some common local weighting functions[33] are defined in the following table.
Some common global weighting functions are defined in the following table.
Empirical studies with LSI report that the log and entropy weighting functions work well in practice with many data sets.[34] In other words, each entry a_ij of A is computed as the entropy global weight times the log local weight, a_ij = g_i × log(tf_ij + 1).
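Assuming the standard log-entropy convention (local weight log(tf_ij + 1); global weight g_i = 1 + Σ_j p_ij log p_ij / log n with p_ij = tf_ij / gf_i, where gf_i is the term's total count), the weighting can be sketched as:

```python
# Log-entropy weighting sketch (standard LSI convention, stated here
# as an assumption): a_ij = g_i * log(tf_ij + 1), where g_i is the
# entropy global weight in [0, 1]. Terms concentrated in few
# documents get weights near 1; evenly spread terms get low weights.
import numpy as np

tf = np.array([[1.0, 2.0, 0.0],    # a term spread over two documents
               [3.0, 0.0, 0.0]])   # a term concentrated in one document
n = tf.shape[1]

gf = tf.sum(axis=1, keepdims=True)           # global frequency of each term
p = np.divide(tf, gf, out=np.zeros_like(tf), where=gf > 0)
with np.errstate(divide="ignore", invalid="ignore"):
    plogp = np.where(p > 0, p * np.log(p), 0.0)
g = 1.0 + plogp.sum(axis=1) / np.log(n)      # entropy global weight

A = g[:, None] * np.log(tf + 1.0)            # weighted term-document matrix
print(A)
```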
A rank-reduced singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI.[35] It computes the term and document vector spaces by approximating the single term-frequency matrix, A, as the product of three other matrices: an m-by-r term-concept vector matrix T, an r-by-r singular-values matrix S, and an n-by-r concept-document vector matrix D, which satisfy the following relations:
A ≈ T S D^T

T^T T = I_r and D^T D = I_r

S_{1,1} ≥ S_{2,2} ≥ … ≥ S_{r,r} > 0 and S_{i,j} = 0 where i ≠ j
In the formula, A is the supplied m-by-n weighted matrix of term frequencies in a collection of text, where m is the number of unique terms and n is the number of documents. T is a computed m-by-r matrix of term vectors, where r is the rank of A, a measure of its unique dimensions (≤ min(m, n)). S is a computed r-by-r diagonal matrix of decreasing singular values, and D is a computed n-by-r matrix of document vectors.
The SVD is then truncated to reduce the rank by keeping only the largest k ≪ r diagonal entries in the singular-value matrix S, where k is typically on the order of 100 to 300 dimensions.
This effectively reduces the term and document vector matrix sizes to m-by-k and n-by-k, respectively. The SVD operation, along with this reduction, has the effect of preserving the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space of A. This reduced set of matrices is often denoted with a modified formula such as A ≈ A_k = T_k S_k D_k^T.
Efficient LSI algorithms only compute the firstksingular values and term and document vectors as opposed to computing a full SVD and then truncating it.
Note that this rank reduction is essentially the same as doing principal component analysis (PCA) on the matrix A, except that PCA subtracts off the means. PCA loses the sparseness of the A matrix, which can make it infeasible for large lexicons.
The computed T_k and D_k matrices define the term and document vector spaces, which, with the computed singular values S_k, embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a factor of how close they are to each other, typically computed as a function of the angle between the corresponding vectors.
The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index. By a simple transformation of the A = T S D^T equation into the equivalent D = A^T T S^{-1} equation, a new vector, d, for a query or for a new document can be created by computing a new column in A and then multiplying the new column by T S^{-1}. The new column in A is computed using the originally derived global term weights and applying the same local weighting function to the terms in the query or in the new document.
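Folding a new document in via this D = A^T T S^{-1} transformation can be sketched as follows (toy matrices; in practice A would be the weighted term-document matrix of the original collection):

```python
# Folding a new document into an existing LSI space:
# d_new = a_new^T @ T_k @ inv(S_k), where a_new is the new document's
# weighted term vector (toy data; NumPy).
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((6, 5))                    # original weighted term-document matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 3
Tk, Sk = U[:, :k], np.diag(s[:k])         # term vectors and singular values

a_new = rng.random(6)                     # weighted term vector of a new document
d_new = a_new @ Tk @ np.linalg.inv(Sk)    # its k-dimensional document vector

# The folded-in vector can now be compared against the existing
# document vectors D_k by cosine similarity:
Dk = Vt[:k].T
sims = Dk @ d_new / (np.linalg.norm(Dk, axis=1) * np.linalg.norm(d_new))
print(sims)
```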
A drawback to computing vectors in this way, when adding new searchable documents, is that terms that were not known during the SVD phase for the original index are ignored. These terms will have no impact on the global weights and learned correlations derived from the original collection of text. However, the computed vectors for the new text are still very relevant for similarity comparisons with all other document vectors.
The process of augmenting the document vector spaces for an LSI index with new documents in this manner is called folding in. Although the folding-in process does not account for the new semantic content of the new text, adding a substantial number of documents in this way will still provide good results for queries as long as the terms and concepts they contain are well represented within the LSI index to which they are being added. When the terms and concepts of a new set of documents need to be included in an LSI index, either the term-document matrix and the SVD must be recomputed, or an incremental update method (such as the one described in[14]) is needed.
It is generally acknowledged that the ability to work with text on a semantic basis is essential to modern information retrieval systems. As a result, the use of LSI has significantly expanded in recent years as earlier challenges in scalability and performance have been overcome.
LSI is being used in a variety of information retrieval and text processing applications, although its primary application has been for concept searching and automated document categorization.[36] Below are some other ways in which LSI is being used:
LSI is increasingly being used for electronic document discovery (eDiscovery) to help enterprises prepare for litigation. In eDiscovery, the ability to cluster, categorize, and search large collections of unstructured text on a conceptual basis is essential. Concept-based searching using LSI has been applied to the eDiscovery process by leading providers as early as 2003.[51]
Early challenges to LSI focused on scalability and performance. LSI requires relatively high computational performance and memory in comparison to other information retrieval techniques.[52] However, with the implementation of modern high-speed processors and the availability of inexpensive memory, these considerations have been largely overcome. Real-world applications involving more than 30 million documents fully processed through the matrix and SVD computations are now common. A fully scalable (unlimited number of documents, online training) implementation of LSI is contained in the open source gensim software package.[53]
Another challenge to LSI has been the alleged difficulty in determining the optimal number of dimensions to use for performing the SVD. As a general rule, fewer dimensions allow for broader comparisons of the concepts contained in a collection of text, while a higher number of dimensions enables more specific (or more relevant) comparisons of concepts. The actual number of dimensions that can be used is limited by the number of documents in the collection. Research has demonstrated that around 300 dimensions will usually provide the best results with moderate-sized document collections (hundreds of thousands of documents) and perhaps 400 dimensions for larger document collections (millions of documents).[54] However, recent studies indicate that 50–1000 dimensions are suitable depending on the size and nature of the document collection.[55] Checking the proportion of variance retained, similar to PCA or factor analysis, to determine the optimal dimensionality is not suitable for LSI. Using a synonym test or prediction of missing words are two possible methods to find the correct dimensionality.[56] When LSI topics are used as features in supervised learning methods, one can use prediction error measurements to find the ideal dimensionality.
Due to its cross-domain applications in Information Retrieval, Natural Language Processing (NLP), Cognitive Science and Computational Linguistics, LSA has been implemented to support many different kinds of applications.
|
https://en.wikipedia.org/wiki/Latent_semantic_analysis
|
In functional analysis, a branch of mathematics, a closed linear operator (often simply a closed operator) is a linear operator whose graph is closed (see closed graph property). It is a basic example of an unbounded operator.
The closed graph theorem says a linear operator f : X → Y between Banach spaces is a closed operator if and only if it is a bounded operator and the domain of the operator is X. Hence, a closed linear operator that is used in practice is typically only defined on a dense subspace of a Banach space.
It is common in functional analysis to consider partial functions, which are functions defined on a subset of some space X. A partial function f is declared with the notation f : D ⊆ X → Y, which indicates that f has prototype f : D → Y (that is, its domain is D and its codomain is Y).
Every partial function is, in particular, a function and so all terminology for functions can be applied to them. For instance, the graph of a partial function f is the set graph(f) = {(x, f(x)) : x ∈ dom f}. However, one exception to this is the definition of "closed graph". A partial function f : D ⊆ X → Y is said to have a closed graph if graph f is a closed subset of X × Y in the product topology; importantly, note that the product space is X × Y and not D × Y = dom f × Y as it was defined above for ordinary functions. In contrast, when f : D → Y is considered as an ordinary function (rather than as the partial function f : D ⊆ X → Y), then "having a closed graph" would instead mean that graph f is a closed subset of D × Y. If graph f is a closed subset of X × Y then it is also a closed subset of dom(f) × Y, although the converse is not guaranteed in general.
Definition: If X and Y are topological vector spaces (TVSs), then a linear map f : D(f) ⊆ X → Y is called a closed linear operator if its graph is closed in X × Y.
A linear operator f : D ⊆ X → Y is closable in X × Y if there exists a vector subspace E ⊆ X containing D and a function (resp. multifunction) F : E → Y whose graph is equal to the closure of the set graph f in X × Y. Such an F is called a closure of f in X × Y, is denoted by f̄, and necessarily extends f.
If f : D ⊆ X → Y is a closable linear operator, then a core or an essential domain of f is a subset C ⊆ D such that the closure in X × Y of the graph of the restriction f|C : C → Y of f to C is equal to the closure of the graph of f in X × Y (i.e. the closure of graph f in X × Y is equal to the closure of graph f|C in X × Y).
A bounded operator is a closed operator. Here are examples of closed operators that are not bounded.
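A classical example of such an operator (stated here as an illustration, since the excerpt's example list did not survive extraction) is differentiation on continuous functions:

```latex
X = C([0,1]) \text{ with } \|f\|_\infty = \sup_{x \in [0,1]} |f(x)|,
\qquad D = C^1([0,1]) \subseteq X,
\qquad Af = f'.
```

A is unbounded, since fₙ(x) = xⁿ satisfies ‖fₙ‖∞ = 1 while ‖Afₙ‖∞ = n; yet A is closed, because if fₙ → f and fₙ′ → g uniformly, then f is differentiable with f′ = g, so (f, g) lies in the graph of A.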
The following properties are easily checked for a linear operator f : D(f) ⊆ X → Y between Banach spaces:
|
https://en.wikipedia.org/wiki/Closed_linear_operator
|
Data defined storage (also referred to as a data centric approach) is a marketing term for managing, protecting, and realizing the value from data by combining application, information and storage tiers.[1]
This is a process in which users, applications, and devices gain access to a repository of captured metadata that allows them to access, query and manipulate relevant data, transforming it into information while also establishing a flexible and scalable platform for storing the underlying data. The technology is said to abstract the data entirely from the storage, aiming to provide fully transparent access for users.
Data defined storage focuses on metadata, with an emphasis on the content, meaning and value of information over the media, type and location of data. Data-centric management enables organizations to adopt a single, unified approach to managing data across large, distributed locations, which includes the use of content and metadata indexing. The technology pillars include:
Data defined storage combines the benefits of both object storage and software-defined storage technologies. However, object and software-defined storage on their own only provide media-independent data storage, which enables a media-agnostic infrastructure, utilizing any type of storage, including low-cost commodity storage, to scale out to petabyte-level capacities. Data defined storage unifies all data repositories and exposes globally distributed stores through the global namespace, eliminating data silos and improving storage utilization.
The first marketing campaign to use the term data defined storage was from the company Tarmin, for its product GridBank. The term may have been mentioned as early as 2013.[2]
The term was used for object storage with open protocol access for file system virtualization, such as CIFS, NFS, FTP, as well as REST APIs and other cloud protocols such as Amazon S3, CDMI and OpenStack.
|
https://en.wikipedia.org/wiki/Data_defined_storage
|
Templates are a feature of the C++ programming language that allows functions and classes to operate with generic types. This allows a function or class declaration to reference, via a generic variable, another different class (built-in or newly declared data type) without creating a full declaration for each of these different classes.
In plain terms, a templated class or function would be the equivalent of (before "compiling") copying and pasting the templated block of code where it is used, and then replacing the template parameter with the actual one. For this reason, classes employing templated methods place the implementation in the headers (*.h files) as no symbol could be compiled without knowing the type beforehand.
The C++ Standard Library provides many useful functions within a framework of connected templates.
Major inspirations for C++ templates were the parameterized modules provided by the language CLU and the generics provided by Ada.[1]
There are three kinds of templates: function templates, class templates and, since C++14, variable templates. Since C++11, templates may be either variadic or non-variadic; in earlier versions of C++ they are always non-variadic.
C++ templates are Turing complete.[2]
A function template behaves like a function except that the template can have arguments of many different types (see example). In other words, a function template represents a family of functions. The format for declaring function templates with type parameters is:
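The two equivalent declaration forms discussed in the next sentence look like this (the identity functions here are illustrative stand-ins, not part of the original article):

```cpp
#include <cassert>

// Type parameter introduced with the keyword "class":
template <class T>
T identity_a(T value) { return value; }

// Type parameter introduced with "typename" -- identical meaning:
template <typename T>
T identity_b(T value) { return value; }
```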
Both expressions have the same meaning and behave in exactly the same way. The latter form was introduced to avoid confusion,[3] since a type parameter need not be a class until C++20. (It can be a basic type such as int or double.)
For example, the C++ Standard Library contains the function template max(x, y) which returns the larger of x and y. That function template could be defined like this:
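A sketch of that definition, following the common textbook form of this template:

```cpp
#include <cassert>

// Requires only that T be copy-constructible and support
// operator< yielding something convertible to bool.
template <typename T>
T max(T x, T y) {
    return x < y ? y : x;
}
```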
This single function definition works with many data types. Specifically, it works with all data types for which < (the less-than operator) is defined and returns a value with a type convertible to bool. The usage of a function template saves space in the source code file in addition to limiting changes to one function description and making the code easier to read.
An instantiated function template does not, however, produce smaller object code compared to writing separate functions for all the different data types used in a specific program. For example, if a program uses both an int and a double version of the max() function template above, the compiler will create an object-code version of max() that operates on int arguments and another object-code version that operates on double arguments.[citation needed] The compiler output will be identical to what would have been produced if the source code had contained two separate non-templated versions of max(), one written to handle int and one written to handle double.
Here is how the function template could be used:
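A usage sketch matching the three cases described below (the definition is repeated so the snippet stands alone; the variable names are illustrative):

```cpp
#include <cassert>

template <typename T>
T max(T x, T y) { return x < y ? y : x; }

void demo() {
    int    a = max(3, 7);            // T deduced as int
    double b = max(3.0, 7.0);        // T deduced as double
    double c = max<double>(3, 7.0);  // explicit: deduction would fail,
                                     // since the argument types differ
    assert(a == 7);
    assert(b == 7.0);
    assert(c == 7.0);
}
```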
In the first two cases, the template argument T is automatically deduced by the compiler to be int and double, respectively. In the third case automatic deduction of max(3, 7.0) would fail because the type of the parameters must in general match the template arguments exactly. Therefore, we explicitly instantiate the double version with max<double>().
This function template can be instantiated with any copy-constructible type for which the expression y < x is valid. For user-defined types, this implies that the less-than operator (<) must be overloaded in the type.
Since C++20, using auto or Concept auto in any of the parameters of a function declaration makes that declaration an abbreviated function template declaration.[4] Such a declaration declares a function template, and one invented template parameter for each placeholder is appended to the template parameter list:
A class template provides a specification for generating classes based on parameters. Class templates are generally used to implement containers. A class template is instantiated by passing a given set of types to it as template arguments.[5] The C++ Standard Library contains many class templates, in particular the containers adapted from the Standard Template Library, such as vector.
In C++14, templates can also be used for variables, as in the following example:
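A common illustration of a C++14 variable template (a sketch; the name pi is the conventional example, not necessarily the one the original article used):

```cpp
#include <cassert>

// A variable template: one definition yields a family of variables,
// one per type T.
template <typename T>
constexpr T pi = T(3.1415926535897932385L);

static_assert(pi<int> == 3, "value truncated when T is int");
```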
Although templating on types, as in the examples above, is the most common form of templating in C++, it is also possible to template on values. Thus, for example, a class declared with
can be instantiated with a specific int.
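A hedged sketch of the kind of class meant above, templated on an int value rather than a type (the name B and the buffer member are illustrative):

```cpp
#include <cassert>

// Templated on a value: N is a compile-time constant inside the class.
template <int N>
class B {
    int buffer[N > 0 ? N : 1];  // size fixed at instantiation time
public:
    static constexpr int size = N;
};

static_assert(B<5>::size == 5, "instantiated with the specific int 5");
```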
As a real-world example, the standard library fixed-size array type std::array is templated on both a type (representing the type of object that the array holds) and a number of type std::size_t (representing the number of elements the array holds). std::array can be declared as follows:
and an array of six chars might be declared:
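The two declarations described above look like this (the variable name six_chars is illustrative):

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// The primary template is declared in the standard library as:
//   template <class T, std::size_t N> struct array;

// An array of six chars:
std::array<char, 6> six_chars = {'a', 'b', 'c', 'd', 'e', 'f'};

static_assert(std::tuple_size<decltype(six_chars)>::value == 6,
              "the element count is part of the type");
```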
When a function or class is instantiated from a template, a specialization of that template is created by the compiler for the set of arguments used, and the specialization is referred to as a generated specialization.
Sometimes, the programmer may decide to implement a special version of a function (or class) for a given set of template type arguments; this is called an explicit specialization. In this way certain template types can have a specialized implementation that is optimized for the type, or a more meaningful implementation than the generic one.
Explicit specialization is used when the behavior of a function or class for particular choices of the template parameters must deviate from the generic behavior: that is, from the code generated by the main template, or templates. For example, the template definition below defines a specific implementation of max() for arguments of type const char*:
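A sketch of that specialization (the primary template is repeated so the snippet is self-contained):

```cpp
#include <cassert>
#include <cstring>

// Generic version:
template <typename T>
T max(T x, T y) { return x < y ? y : x; }

// Explicit (full) specialization for const char*: compare string
// contents with strcmp, not pointer values.
template <>
const char* max(const char* x, const char* y) {
    return std::strcmp(x, y) < 0 ? y : x;
}
```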
C++11 introduced variadic templates, which can take a variable number of arguments in a manner somewhat similar to variadic functions such as std::printf.
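A minimal variadic-template sketch (using a C++17 fold expression for brevity rather than the recursive pre-C++17 style; the function name sum is illustrative):

```cpp
#include <cassert>

// Args is a parameter pack holding zero or more type arguments,
// making sum a type-safe analogue of a variadic function.
template <typename... Args>
auto sum(Args... args) {
    return (args + ... + 0);  // binary right fold over operator+
}
```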
C++11 introduced template aliases, which act like parameterized typedefs.
The following code shows the definition of a template alias StrMap. This allows, for example, StrMap<int> to be used as shorthand for std::unordered_map<int, std::string>.
Initially, the concept of templates was not included in some languages, such as Java and C# 1.0. Java's adoption of generics mimics the behavior of templates, but is technically different. C# added generics (parameterized types) in .NET 2.0. The generics in Ada predate C++ templates.
Although C++ templates, Java generics, and .NET generics are often considered similar, generics only mimic the basic behavior of C++ templates.[6] Some of the advanced template features utilized by libraries such as Boost and STLSoft, and implementations of the STL, for template metaprogramming (explicit or partial specialization, default template arguments, template non-type arguments, template template arguments, ...) are unavailable with generics.
In C++ templates, compile-time cases were historically performed by pattern matching over the template arguments. For example, the template base class in the Factorial example below is implemented by matching 0 rather than with an inequality test, which was previously unavailable. However, the arrival in C++11 of standard library features such as std::conditional has provided another, more flexible way to handle conditional template instantiation.
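The Factorial definitions referenced below follow the classic pattern: a recursive primary template plus a base-case specialization matched on 0 (a standard sketch of this well-known example):

```cpp
#include <cassert>

// Primary template: recursive case.
template <unsigned n>
struct Factorial {
    static constexpr unsigned value = n * Factorial<n - 1>::value;
};

// Base case, selected by pattern matching on the argument 0
// rather than by an inequality test.
template <>
struct Factorial<0> {
    static constexpr unsigned value = 1;
};

static_assert(Factorial<6>::value == 720, "6! computed at compile time");
```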
With these definitions, one can compute, say, 6! at compile time using the expression Factorial<6>::value.
Alternatively, constexpr in C++11 / if constexpr in C++17 can be used to calculate such values directly using a function at compile time:
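A sketch of the constexpr alternative mentioned above:

```cpp
#include <cassert>

// constexpr (C++11): an ordinary-looking function that can be
// evaluated in constant expressions at compile time.
constexpr unsigned factorial(unsigned n) {
    return n == 0 ? 1 : n * factorial(n - 1);
}

static_assert(factorial(6) == 720, "evaluated at compile time");
```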
Because of this, template meta-programming is now mostly used to do operations on types.
|
https://en.wikipedia.org/wiki/Template_(C%2B%2B)
|
In computational complexity theory, a computational resource is a resource used by some computational models in the solution of computational problems.
The simplest computational resources are computation time, the number of steps necessary to solve a problem, and memory space, the amount of storage needed while solving the problem, but many more complicated resources have been defined.[citation needed]
A computational problem is generally[citation needed] defined in terms of its action on any valid input. Examples of problems might be "given an integer n, determine whether n is prime", or "given two numbers x and y, calculate the product x*y". As the inputs get bigger, the amount of computational resources needed to solve a problem will increase. Thus, the resources needed to solve a problem are described in terms of asymptotic analysis, by identifying the resources as a function of the length or size of the input. Resource usage is often partially quantified using big O notation.
Computational resources are useful because we can study which problems can be computed in a certain amount of each computational resource. In this way, we can determine whether algorithms for solving the problem are optimal and we can make statements about an algorithm's efficiency. The set of all of the computational problems that can be solved using a certain amount of a certain computational resource is a complexity class, and relationships between different complexity classes are one of the most important topics in complexity theory.
The term "computational resource" is also commonly used to describe accessible computing equipment and software. See utility computing.
There has been some effort to formally quantify computing capability. A bounded Turing machine has been used to model specific computations using the number of state transitions and alphabet size to quantify the computational effort required to solve a particular problem.[1][2]
|
https://en.wikipedia.org/wiki/Computational_resource
|
M-PESA (M for mobile, PESA is Swahili for money) is a mobile phone-based money transfer, payments and micro-financing service, launched in 2007 by Vodafone and Safaricom, the largest mobile network operator in Kenya.[1] It has since expanded to Tanzania, Mozambique, DRC, Lesotho, Ghana, Egypt, Afghanistan, South Africa and Ethiopia. The rollouts in India, Romania, and Albania were terminated amid low market uptake. M-PESA allows users to deposit, withdraw, and transfer money, pay for goods and services (Lipa na M-PESA, Swahili for "Pay with M-PESA"), and access credit and savings, all with a mobile device.[2]
The service allows users to deposit money into an account stored on their cell phones, to send balances using PIN-secured SMS text messages to other users, including sellers of goods and services, and to redeem deposits for regular money. Users are charged a fee for sending and withdrawing money using the service.[3]
M-PESA is a branchless banking service; M-PESA customers can deposit and withdraw money from a network of agents that includes airtime resellers and retail outlets acting as banking agents.
M-PESA spread quickly, and by 2010 had become the most successful mobile-phone-based financial service in the developing world.[4] By 2012, a stock of about 17 million M-PESA accounts had been registered in Kenya. By June 2016, a total of 7 million M-PESA accounts had been opened in Tanzania by Vodacom. The service has been lauded for giving millions of people access to the formal financial system and for reducing crime in otherwise largely cash-based societies.[5] However, the near-monopolistic providers of the M-PESA service are sometimes criticized for the high cost that the service imposes on its often poor users.
Safaricom and Vodafone launched M-PESA, a mobile-based payment service targeting the un-banked, pre-pay mobile subscribers in Kenya on a pilot basis in October 2005.[6] It was started as a public/private sector initiative after Vodafone was successful in winning funds from the Financial Deepening Challenge Fund competition established by the UK government's Department for International Development to encourage private sector companies to engage in innovative projects so as to deepen the provision of financial services in emerging economies.[7]
The initial obstacles in the pilot were gaining the agents' trust, encouraging them to process cash withdrawals, and agent training.[8] However, once Vodafone introduced the ability to buy airtime using M-PESA, the transaction volume increased rapidly. A 5% discount was offered on any airtime purchased through M-PESA and this served as an effective incentive. By 1 March 2006, KSh 50.7 million had been transferred through the system. The successful operation of the pilot was a key component in Vodafone and Safaricom's decision to take the product full scale. The learning from the pilot helped to confirm the market need for the service and although it mainly revolved around facilitating loan repayments and disbursements for Faulu customers, it also tested features such as airtime purchase and national remittance. The full commercial launch was initiated in March 2007.
A snapshot of the market at the time showed that only a small percentage of people in Kenya used traditional banking services. There were low levels of bank income and high bank fees incurred and charged; most of the services were out of geographical reach for rural Kenyans.[9] Notably, a high level of mobile penetration was evident throughout the country, making the adoption of mobile payments a viable alternative to the traditional banking channels. According to a survey done by CBS in 2005, Kenya then had over 5,970,600 people employed in the informal sector.
In 2002, researchers at Gamos and the Commonwealth Telecommunications Organisation, funded by the UK's Department for International Development (DFID), documented that in Uganda, Botswana and Ghana, people were using airtime as a proxy for money transfer.[10] Kenyans were transferring airtime to their relatives or friends who were then using it or reselling it. Gamos researchers approached MCel[11] in Mozambique, and in 2004 MCel introduced the first authorised airtime credit swapping, a precursor step towards M-PESA.[12] The idea was discussed by the Commission for Africa[13] and DFID introduced the researchers to Vodafone, who had been discussing supporting microfinance and back-office banking with mobile phones. S. Batchelor (Gamos) and N. Hughes (Vodafone CSR) discussed how a system of money transfer could be created in Kenya. DFID amended the terms of reference for its grant to Vodafone, and piloting began in 2005. Safaricom launched a new mobile phone-based payment and money transfer service, known as M-PESA.[4]
The initial work of developing the product was given to a product and technology development company known as Sagentia.[14] Development and second-line support responsibilities were transferred to IBM in September 2009, to which most of the original Sagentia team moved.[15] Following a three-year migration project to a new technology stack, as of 26 February 2017, IBM's responsibilities have been transferred to Huawei in all markets.[16]
The initial concept of M-PESA was to create a service which would allow microfinance borrowers to conveniently receive and repay loans using the network of Safaricom airtime resellers.[17] This would enable microfinance institutions (MFIs) to offer more competitive loan rates to their users, as costs are lower than when dealing in cash. The users of the service would gain through being able to track their finances more easily. When the service was piloted, customers adopted the service for a variety of alternative uses and complications arose with Faulu, the partnering MFI. In discussion with other parties, M-PESA was re-focused and launched with a different value proposition: sending remittances home across the country and making payments.[17]
M-PESA is operated by Safaricom and Vodacom, mobile network operators (MNOs) not classed as deposit-taking institutions such as banks. M-PESA customers can deposit and withdraw currency from a network of agents that includes airtime resellers and retail outlets acting as banking agents. The service enables its users to:
Partnerships with Kenyan banks offer expanded banking services like interest-bearing accounts, loans, and insurance.[22]
The user interface technology of M-PESA differs between Safaricom of Kenya and Vodacom of Tanzania, although the underlying platform is the same. While Safaricom uses SIM toolkit (STK) to provide handset menus for accessing the service, Vodacom relies mostly on USSD to provide users with menus, but also supports STK.[23]
Transaction charges depend on the amount of money being transferred and whether the payee is a registered user of the service. The actual cost is a fixed amount for a given range of transaction sizes; for example, Safaricom charges up to KSh 66 (US$0.60) for a transaction to an unregistered user for transactions between KSh 10 and KSh 500 (US$0.92–US$4.56). For registered users the charge is KSh 27 (US$0.25), or 5.4% to 27%, for the same amount. At the highest transfer bracket of KSh 50,001–70,000, the fee for a transfer to a registered user is KSh 110 (US$1), or 0.16–0.22%. The maximum amount that can be transferred to a non-registered user of the system is KSh 35,000 (US$319.23), with a fee of KSh 275 (US$2.51), or 0.8%. Cash withdrawal fees are also charged: KSh 10 (US$0.09), or 10% to 20%, for a withdrawal of KSh 50–100, and up to KSh 330 (US$3.01), or 0.47% to 0.66%, for a withdrawal of KSh 50,001–70,000.[24][25]
In an article published in 2015, Anja Bengelstorff cites the Central Bank of Kenya when she states that CHF 1 billion was moved in fiscal year 2014, with a profit of CHF 268 million, which is close to 27% of the moved money.[26] In 2016, M-PESA moved KSh 15 billion (US$147,776,845) per day, with a revenue of KSh 41 billion. In 2017, KSh 6,869 billion was moved according to a figure in Safaricom's own annual report, with a revenue of KSh 55 billion. This would put Safaricom's profit ratio at below 1% of total money transferred.[27][28][full citation needed]
With the support of Financial Sector Deepening Kenya and the Bill & Melinda Gates Foundation, Tavneet Suri from the Massachusetts Institute of Technology and William Jack from Georgetown University have produced a series of papers extolling the benefits of M-PESA. In particular, their 2016 article published in Science has been influential in the international development community. The much-cited result of the paper was that "access to M-PESA increased per capita consumption levels and lifted 194,000 households, or 2% of Kenyan households, out of poverty."[29] Global development institutions focusing on the development potential of financial technology frequently cite M-PESA as a major success story in this respect, citing the poverty-reduction claim and including a reference to Suri and Jack's 2016 signature article. In a report on "Financing for Development", the United Nations writes: "The digitalization of finance offers new possibilities for greater financial inclusion and alignment with the 2030 Agenda for Sustainable Development and implementation of the Social Development Goals. In Kenya, the expansion of mobile money lifted two per cent of households in the country above the poverty line."[30]
However, these findings on the role of M-PESA in reducing poverty have been contested in a 2019 paper, arguing that "Suri and Jack’s work contains so many serious errors, omissions, logical inconsistencies and flawed methodologies that it is actually correct to say that they have helped to catalyse into existence a largely false narrative surrounding the power of the fin-tech industry to advance the cause of poverty reduction and sustainable development in Africa (and elsewhere)".[31]
M-PESA was first launched by the Kenyan mobile network operator Safaricom, in which Vodafone is technically a minority shareholder (40%), in March 2007.[17] M-PESA quickly captured a significant market share for cash transfers, and grew to 17 million subscribers by December 2011 in Kenya alone.[1]
The growth of the service forced formal banking institutions to take note of the new venture. In December 2008, a group of banks reportedly lobbied the Kenyan finance minister to audit M-PESA, in an effort to at least slow the growth of the service. This ploy failed, as the audit found that the service was robust.[32] At this time the Banking Act did not provide a basis to regulate products offered by non-banks, of which M-PESA was one such very successful product. As of November 2014, M-PESA transactions for the 11 months of 2014 were valued at KSh 2.1 trillion, a 28% increase from 2013, and almost half the value of the country's GDP.[citation needed]
On 19 November 2014, Safaricom launched a companion Android app, Safaricom M-Ledger,[33][non-primary source needed] for its M-PESA users. The application, initially available only on Android, is now also supported on iOS devices. The app gives M-PESA users a historical view of all their transactions. Many other companies' business models rely on the M-PESA system in Kenya, such as M-kopa and Sportpesa.[34]
On 23 February 2018, it was reported that the Google Play store started taking payments for apps via Kenya's M-PESA service.[35] On 8 January 2019, Safaricom launched Fuliza, an M-PESA overdraft facility.[36]
M-PESA was launched in Tanzania by Vodacom in 2008 but its initial ability to attract customers fell short of expectations. In 2010, the International Finance Corporation released a report which explored many of these issues in greater depth and analyzed the strategic changes that Vodacom has implemented to improve their market position.[37] As of May 2013, M-PESA in Tanzania had five million subscribers.[38]
In 2008, Vodafone partnered with Roshan, Afghanistan's primary mobile operator, to provide M-PESA, the local brand of the service.[39] When the service was launched, it was initially used to pay policemen's salaries, set to be competitive with what the Taliban were earning. Soon after the product was launched, the Afghan National Police found that under the previous cash model, 10% of their workforce were ghost police officers who did not exist; their salaries had been pocketed by others. When corrected in the new system, many police officers believed that they had received a raise or that there had been a mistake, as their salaries rose significantly. The National Police discovered that there was so much corruption when payments had been made using the previous model that the policemen did not know their true salary. The service was so successful that it was expanded to include limited merchant payments, peer-to-peer transfers, loan disbursements and payments.[40]
In September 2010, Vodacom and Nedbank announced the launch of the service in South Africa, where there were estimated to be more than 13 million "economically active" people without a bank account.[41] M-PESA has been slow to gain a toehold in the South African market compared to Vodacom's projections that it would sign up 10 million users in the following three years. By May 2011, it had registered approximately 100,000 customers.[42] The gap between expectations for M-PESA's performance and its actual performance can be partly attributed to differences between the Kenyan and South African markets, including the banking regulations at the time of M-PESA's launch in each country.[43] According to MoneyWeb,[44] a South African investment website, "A tough regulatory environment with regards to customer registration and the acquisition of outlets also compounded the company's troubles, as the local regulations are more stringent in comparison to our African counterparts. Lack of education and product understanding also hindered efforts in the initial roll out of the product." In June 2011, Vodacom and Nedbank launched a campaign to re-position M-PESA, targeting the product to potential customers who have a higher Living Standards Measure (LSM)[45] than were first targeted.[46]
Despite these efforts, as of March 2015, M-PESA still struggled to grow its customer base. South Africa lags behind Tanzania and Kenya with only about 1 million subscribers. This is unsurprising, as South Africa's financial institutions are among the most mature and technologically innovative globally. According to Genesis Analytics, 70% of South Africans are "banked", meaning that they have at least one account with an established financial institution, and these institutions offer their own banking products that compete directly with the M-PESA offering.[47]
M-PESA was launched in India[48] as a close partnership with ICICI Bank in November 2011.[49] Development for the bank began as early as 2008. Vodafone India had partnered with ICICI Bank,[50] and ICICI launched M-Pesa on 18 April 2013.[51] Vodafone had planned to roll out this service throughout India.[52] Users needed to register for the service; registration was free, charges were levied per M-PESA transaction for money-transfer services, and DTH and prepaid recharges could be made through M-PESA for free.[53][54]
M-PESA was shut down from 15 July 2019 due to regulatory curbs and stress in the sector,[55]with Vodafone surrendering their PPI licence on 1 October 2019.[56]
In March 2014, M-PESA expanded into Romania, while mentioning that it may continue to expand elsewhere into Eastern Europe, as a number of individuals there possess mobile phones but do not possess traditional bank accounts. As of May 2014, however, it was considered unlikely that the service would expand into Western Europe anytime soon.[57] In December 2017, Vodafone closed its M-PESA product in Romania.[58]
In May 2015, M-PESA was also launched in Albania. It was shut down on 14 July 2017.[59]
M-PESA expanded into Mozambique, Lesotho, and Egypt in May, June, and July 2013, respectively. A full listing of countries in which M-PESA currently operates can be found on M-Pesa's website.[citation needed]
M-PESA sought to engage Kenyan regulators and keep them updated on the development process. M-PESA also reached out to international regulators, such as the UK's Financial Conduct Authority (FCA) and the payment card industry, to understand how best to protect client information and adhere to internationally recognized best practices.[60]
Know your customer(KYC) requirements impose obligations on prospective clients and on banks to collect identification documents of clients and then to have those documents verified by banks.[61]The Kenyan government issues national identity cards that M-PESA leveraged in their business processes to satisfy their KYC requirements.[62]
M-PESA obtained a "special" license from regulators, despite regulators' concerns that branchless banking could add to financial instability.
Safaricom released the new M-PESA platform dubbed M-PESA G2 to offer versatile integration capabilities for development partners.
Client-to-business and business-to-client disbursements are some of the features available through the API.
The near-monopolistic providers of the M-PESA service are sometimes criticized for the high cost that the service imposes on its often poor users. The Bill and Melinda Gates Foundation warned in 2013 that lack of competition could drive up prices for customers of mobile money services and used M-PESA in Kenya as a negative example. According to the Foundation, a transfer of $1.50 cost $0.30 at the time, while the same provider charged only a tenth of this in neighboring Tanzania, where it was exposed to more competition.[63] A study sponsored by USAID found that poor, uneducated customers, who often had bad vision, were a target of unfair practices within M-PESA. They had expensive subscriptions for ring-tones and similar unnecessary services pushed on them, with opaque pricing, and thus did not understand why their M-PESA deposits depleted so quickly. Even when they did understand, they were often unable to unsubscribe from those services without help. The authors concluded that it is not the marginalized people in Kenya who benefit from M-PESA, but mostly Safaricom.[64] A similar conclusion was reached by development economist Alan Gibson in a study commissioned by Financial Sector Deepening Trust Kenya (FSD Kenya) on the occasion of the 10th anniversary of FSD Kenya in 2016.[65] He wrote that credit to business did not improve due to M-PESA and that credit to the agricultural sector even declined. He concluded in his otherwise very friendly survey that the financial sector benefitted handsomely from the expansion of M-PESA, while the living conditions of the people were not noticeably improved.
Milford Bateman et al. even conclude that M-PESA's expansion resulted in holding back economic development in Kenya. They diagnose serious weaknesses in the much-cited paper by Suri and Jack, which had found positive effects on poverty, as M-PESA enabled female clients to move out of subsistence agriculture into micro-enterprise or small-scale trading activities. Alleged weaknesses include a failure to incorporate business failures and the crowding out of competitors in the analysis. Bateman et al. call M-Pesa an extractive activity, by which large profits are created from taxing small-scale payments that would be free if cash were used instead. As a large part of these profits are sent abroad to foreign shareholders of Safaricom, local spending power and demand are reduced, and with them the development potential for local enterprise.[66][67]
Kenya does not have a data protection law, which enables Safaricom to use sensitive data of its subscribers rather freely. A data scandal surfaced in 2019 when Safaricom was sued in court for the alleged breach of data privacy of an estimated 11.5 million subscribers who had used their Safaricom numbers for sports betting. The data was allegedly offered on the black market.[68]
|
https://en.wikipedia.org/wiki/M-Pesa
|
Cryptographic primitives are well-established, low-level cryptographic algorithms that are frequently used to build cryptographic protocols for computer security systems.[1] These routines include, but are not limited to, one-way hash functions and encryption functions.
When creating cryptographic systems, designers use cryptographic primitives as their most basic building blocks. Because of this, cryptographic primitives are designed to do one very specific task in a precisely defined and highly reliable fashion.
Since cryptographic primitives are used as building blocks, they must be very reliable, i.e. perform according to their specification. For example, if an encryption routine claims to be breakable only with X computer operations, and it is broken with significantly fewer than X operations, then that cryptographic primitive has failed. If a cryptographic primitive is found to fail, almost every protocol that uses it becomes vulnerable. Since creating cryptographic routines is very hard, and testing them to be reliable takes a long time, it is essentially never sensible (nor secure) to design a new cryptographic primitive to suit the needs of a new cryptographic system.
Cryptographic primitives are one of the building blocks of every cryptosystem, e.g.,TLS,SSL,SSH, etc. Cryptosystem designers, not being in a position to definitivelyprovetheir security, must take the primitives they use as secure. Choosing the best primitive available for use in a protocol usually provides the best available security. However, compositional weaknesses are possible in any cryptosystem and it is the responsibility of the designer(s) to avoid them.
Cryptographic primitives are not cryptographic systems, as they are quite limited on their own. For example, a bare encryption algorithm will provide no authentication mechanism, nor any explicit message integrity checking. Only when combined in security protocols can more than one security requirement be addressed. For example, to transmit a message that is not only encoded but also protected from tinkering (i.e. it is confidential and integrity-protected), an encoding routine, such as DES, and a hash routine such as SHA-1 can be used in combination. If the attacker does not know the encryption key, they cannot modify the message such that the message digest value(s) would remain valid.
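As a toy illustration of the encrypt-plus-MAC pattern just described (not the article's DES/SHA-1 construction, and emphatically not production-grade — real systems use vetted authenticated ciphers such as AES-GCM), the sketch below derives a keystream from SHA-256 in counter mode and authenticates the ciphertext with HMAC-SHA256. All function names are ours.

```python
import hashlib
import hmac

def _keystream(key: bytes, n: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode (illustrative only).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(enc_key: bytes, mac_key: bytes, msg: bytes) -> bytes:
    # Encrypt-then-MAC: confidentiality from the cipher, integrity from the MAC.
    ct = bytes(a ^ b for a, b in zip(msg, _keystream(enc_key, len(msg))))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag

def open_(enc_key: bytes, mac_key: bytes, sealed: bytes) -> bytes:
    ct, tag = sealed[:-32], sealed[-32:]
    # Verify the tag before decrypting; reject any tampering.
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        raise ValueError("integrity check failed")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, len(ct))))
```

Without the MAC key, an attacker who flips ciphertext bits cannot produce a valid tag, which is exactly the property the paragraph describes.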
Combining cryptographic primitives to make a security protocol is itself an entire specialization. Most exploitable errors (i.e., insecurities in cryptosystems) are due not to design errors in the primitives (assuming always that they were chosen with care), but to the way they are used, i.e. bad protocol design and buggy or insufficiently careful implementation. Mathematical analysis of protocols is, at the time of this writing, not mature.[citation needed] There are some basic properties that can be verified with automated methods, such as BAN logic. There are even methods for full verification (e.g. the SPI calculus) but they are extremely cumbersome and cannot be automated. Protocol design is an art requiring deep knowledge and much practice; even then mistakes are common. An illustrative example, for a real system, can be seen on the OpenSSL vulnerability news page.
|
https://en.wikipedia.org/wiki/Cryptographic_primitive
|
Narrowcasting is the dissemination of information to a specialised audience, rather than to the broader public-at-large; it is the opposite of broadcasting. It may refer to advertising or programming via radio, podcast, newspaper, television, or the Internet. The term "multicast" is sometimes used interchangeably, although strictly speaking this refers to the technology used, and narrowcasting to the business model. Narrowcasting is sometimes aimed at paid subscribers, such as in the case of cable television.
The evolution of narrowcasting came from broadcasting. In the early 20th century, Charles Herrold designated radio transmissions meant for a single receiver as distinct from broadcasting, which was meant for a general audience.[1] Merriam-Webster reports the first known use of the word in 1932.[2] The term was revived in the context of subscription radio programs in the late 1940s,[3] after which narrowcasting entered the common lexicon due to computer scientist and public broadcasting advocate J. C. R. Licklider, who in a 1967 report envisioned[4]
...a multiplicity of television networks aimed at serving the needs of smaller, specialized audiences. 'Here,' stated Licklider, 'I should like to coin the term "narrowcasting," using it to emphasize the rejection or dissolution of the constraints imposed by commitment to a monolithic mass-appeal, broadcast approach.'
The term "multicast" is sometimes used interchangeably, although strictly speaking this refers to the technology used, and narrowcasting to thebusiness model.[5]Narrowcasting is sometimes aimed at paid subscribers,[2]such as in the case ofcable television.[6]
In the beginning of the 1990s, when American television was still mainly ruled by three major networks (ABC, CBS and NBC), the prevailing goal was to create and promote content directed at the largest possible mass of people, while avoiding projects that might appeal to only a small audience. This was largely because, especially in the earlier days of television, there was little competition.[7] Nevertheless, this changed as independent stations, more cable channels, and the success of videocassettes gave audiences more options. Thus, the previous mass-oriented point of view started to shift toward a narrower one.[8]
It was the arrival of cable television that allowed a much larger number of producers and programmers to aim at smaller audiences, such as MTV, which started off as the channel for those who loved music.[7]
Narrowcasting has made its place in, for example, the way television networks schedule shows; on one night they might schedule shows directed at teenagers, while on another they might focus on a different specific audience, such as those interested in documentaries. In this way, they target what could be seen as a narrow audience, but gather its attention as a mass audience on a single night.[8]
Related to niche marketing or target marketing, narrowcasting involves aiming media messages at specific segments of the public defined by values, preferences, demographic attributes, or subscription.[9][10] Narrowcasting is based on the postmodern idea that mass audiences do not exist.[11]
Marketing experts are often interested in narrowcast media as a commercial advertising tool, since access to such content implies exposure to a specific and clearly defined prospective consumer audience. The theory is that, by identifying the particular demographics viewing such programs, advertisers can better target their markets. Pre-recorded television programs are often broadcast to captive audiences in taxi cabs, buses, elevators, and queues. For instance, the Cabvision network in London's black cabs shows limited pre-recorded television programs interspersed with targeted advertising to cab passengers.[12] Point-of-sale advertising is a form of narrowcasting.[13]
Interactive narrowcasting allows users to engage with products or services via touch screens or other technology, allowing them to interact with products before purchasing them.[14]
Narrowcasting has become increasingly used to target audiences with a particular political leaning, such as political satire by the entertainment industry.[15] It has often been pointed out that Donald Trump used narrowcasting as well as broadcasting to good effect.[16][17][18]
The term narrowcasting can also apply to the spread of information to an audience (private or public) which is by nature geographically limited—a group such as office employees, military troops, or conference attendees—and requires a localised dissemination of information from a shared source.[19] Hotels, hospitals, museums, and offices use narrowcasting to display relevant information to their visitors and staff.[13]
Both broadcasting and narrowcasting models are found on the Internet. Sites based on the latter require the user to register an account and log in before viewing content. Narrowcasting may also employ various types of push technologies, which send information directly to subscribers; electronic mailing lists, where emails are sent to subscribers, are an example of these.[5]
Narrowcasting is also sometimes applied to podcasting, since the audience for a podcast is often specific and sharply defined.[20]
This evolution towards narrowcasting was discussed in 1993 by Hamid Naficy, who focused on this change specifically in Los Angeles, and on how content directed at a narrow audience affected social culture. For example, with the rise of Middle Eastern television programs, content could be produced and promoted without the pressure of mass-audience appeal. This made it easier for minorities to feel better represented in television.[21]
Narrowcasting by definition focuses on specific groups of people, which may promote division or even conflict among groups. Political, social, or other ideologies promoted by narrowcasting have the potential to cause harm to society.[13]
There is also the danger of people living in afilter bubble, where the audience is not exposed to different viewpoints, opinions or ideologies, leading to thinking that their opinion is the correct one. The resulting narrow-mindedness can lead to conflict.[13]
The 2022 Australian sci-fi thriller Monolith, in which the sole on-screen actor is a podcaster, provides commentary on narrowcasting. Ari Mattes of Notre Dame University wrote in an article in The Conversation: "Monolith is one of the first Australian films to critically navigate the ramifications of narrowcasting technology... the strange solitude of interpersonal communication in the global information economy underpins the whole thing".[22]
|
https://en.wikipedia.org/wiki/Narrowcasting
|
In special relativity, four-momentum (also called momentum–energy or momenergy[1]) is the generalization of the classical three-dimensional momentum to four-dimensional spacetime. Momentum is a vector in three dimensions; similarly four-momentum is a four-vector in spacetime. The contravariant four-momentum of a particle with relativistic energy $E$ and three-momentum $\mathbf{p} = (p_x, p_y, p_z) = \gamma m\mathbf{v}$, where $\mathbf{v}$ is the particle's three-velocity and $\gamma$ the Lorentz factor, is
$$p=\left(p^{0},p^{1},p^{2},p^{3}\right)=\left({\frac {E}{c}},p_{x},p_{y},p_{z}\right).$$
The quantity $m\mathbf{v}$ above is the ordinary non-relativistic momentum of the particle and $m$ its rest mass. The four-momentum is useful in relativistic calculations because it is a Lorentz covariant vector. This means that it is easy to keep track of how it transforms under Lorentz transformations.
Calculating the Minkowski norm squared of the four-momentum gives a Lorentz invariant quantity equal (up to factors of the speed of light $c$) to the square of the particle's proper mass:
$$p\cdot p=\eta _{\mu \nu }p^{\mu }p^{\nu }=p_{\nu }p^{\nu }=-{E^{2} \over c^{2}}+|\mathbf {p} |^{2}=-m^{2}c^{2}$$
where the following denote:
$p$, the four-momentum vector of a particle,
$p\cdot p$, the Minkowski inner product of the four-momentum with itself,
$p^{\mu }$ and $p^{\nu }$, the contravariant components of the four-momentum vector,
$p_{\nu }$, the covariant form,
$E$, the energy of the particle,
$c$, the speed of light,
$|\mathbf {p} |$, the magnitude of the three-momentum vector,
$m$, the invariant mass (rest mass) of the particle,
and
$$\eta _{\mu \nu }={\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}$$
is the metric tensor of special relativity with metric signature for definiteness chosen to be (−1, 1, 1, 1). The negativity of the norm reflects that the momentum is a timelike four-vector for massive particles. The other choice of signature would flip signs in certain formulas (like for the norm here). This choice is not important, but once made it must for consistency be kept throughout.
The Minkowski norm is Lorentz invariant, meaning its value is not changed by Lorentz transformations/boosting into different frames of reference. More generally, for any two four-momentapandq, the quantityp⋅qis invariant.
For a massive particle, the four-momentum is given by the particle's invariant mass $m$ multiplied by the particle's four-velocity,
$$p^{\mu }=mu^{\mu },$$
where the four-velocity $u$ is
$$u=\left(u^{0},u^{1},u^{2},u^{3}\right)=\gamma _{v}\left(c,v_{x},v_{y},v_{z}\right),$$
and $\gamma _{v}:={1}/{\sqrt {1-{v^{2}}/{c^{2}}}}$ is the Lorentz factor (associated with the speed $v$), and $c$ is the speed of light.
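As a numerical check of the relations above, a short sketch (helper names are ours; $c$ is set to 1 for simplicity) computes $p^{\mu} = m\gamma(c, \mathbf{v})$ and verifies that its Minkowski norm squared equals $-m^2c^2$ under the (−1, 1, 1, 1) signature:

```python
import math

C = 1.0  # speed of light in natural units (an assumption of this sketch)

def four_momentum(m, vx, vy, vz, c=C):
    """p^mu = m * u^mu = m * gamma * (c, vx, vy, vz)."""
    v2 = vx * vx + vy * vy + vz * vz
    gamma = 1.0 / math.sqrt(1.0 - v2 / (c * c))
    return (m * gamma * c, m * gamma * vx, m * gamma * vy, m * gamma * vz)

def minkowski_norm_sq(p):
    """eta_{mu nu} p^mu p^nu with signature (-1, 1, 1, 1)."""
    return -p[0] ** 2 + p[1] ** 2 + p[2] ** 2 + p[3] ** 2
```

For example, a particle of mass 2 moving at $v = 0.6c$ has $\gamma = 1.25$, so $p = (2.5, 1.5, 0, 0)$ and the norm squared is $-4 = -m^2c^2$, independent of the frame.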
There are several ways to arrive at the correct expression for four-momentum. One way is to first define the four-velocity $u = dx/d\tau$ and simply define $p = mu$, being content that it is a four-vector with the correct units and correct behavior. Another, more satisfactory, approach is to begin with the principle of least action and use the Lagrangian framework to derive the four-momentum, including the expression for the energy.[2] One may at once, using the observations detailed below, define four-momentum from the action $S$. Given that in general for a closed system with generalized coordinates $q_i$ and canonical momenta $p_i$,[3]
$$p_{i}={\frac {\partial S}{\partial q_{i}}}={\frac {\partial S}{\partial x_{i}}},\quad E=-{\frac {\partial S}{\partial t}}=-c\cdot {\frac {\partial S}{\partial x^{0}}},$$
it is immediate (recalling $x^{0}=ct$, $x^{1}=x$, $x^{2}=y$, $x^{3}=z$ and $x_{0}=-x^{0}$, $x_{1}=x^{1}$, $x_{2}=x^{2}$, $x_{3}=x^{3}$ in the present metric convention) that
$$p_{\mu }={\frac {\partial S}{\partial x^{\mu }}}=\left(-{E \over c},\mathbf {p} \right)$$
is a covariant four-vector with the three-vector part being the canonical momentum.
Consider initially a system of one degree of freedom $q$. In the derivation of the equations of motion from the action using Hamilton's principle, one finds (generally) in an intermediate stage for the variation of the action,
$$\delta S=\left.\left[{\frac {\partial L}{\partial {\dot {q}}}}\delta q\right]\right|_{t_{1}}^{t_{2}}+\int _{t_{1}}^{t_{2}}\left({\frac {\partial L}{\partial q}}-{\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}}}\right)\delta q\,dt.$$
The assumption is then that the varied paths satisfy $\delta q(t_1) = \delta q(t_2) = 0$, from which Lagrange's equations follow at once. When the equations of motion are known (or simply assumed to be satisfied), one may let go of the requirement $\delta q(t_2) = 0$. In this case the path is assumed to satisfy the equations of motion, and the action is a function of the upper integration limit $\delta q(t_2)$, but $t_2$ is still fixed. The above equation becomes, with $S = S(q)$, defining $\delta q(t_2) = \delta q$, and letting in more degrees of freedom,
$$\delta S=\sum _{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}\delta q_{i}=\sum _{i}p_{i}\delta q_{i}.$$
Observing that
$$\delta S=\sum _{i}{\frac {\partial S}{\partial {q}_{i}}}\delta q_{i},$$
one concludes
$$p_{i}={\frac {\partial S}{\partial q_{i}}}.$$
In a similar fashion, keep the endpoints fixed, but let $t_2 = t$ vary. This time, the system is allowed to move through configuration space at "arbitrary speed" or with "more or less energy", the field equations still assumed to hold and variation can be carried out on the integral, but instead observe
$${\frac {dS}{dt}}=L$$
by the fundamental theorem of calculus. Compute using the above expression for canonical momenta,
$${\frac {dS}{dt}}={\frac {\partial S}{\partial t}}+\sum _{i}{\frac {\partial S}{\partial q_{i}}}{\dot {q}}_{i}={\frac {\partial S}{\partial t}}+\sum _{i}p_{i}{\dot {q}}_{i}=L.$$
Now using
$$H=\sum _{i}p_{i}{\dot {q}}_{i}-L,$$
where $H$ is the Hamiltonian, leads to, since $E = H$ in the present case,
$$E=H=-{\frac {\partial S}{\partial t}}.$$
Incidentally, using $H = H(q, p, t)$ with $p = \partial S/\partial q$ in the above equation yields the Hamilton–Jacobi equations. In this context, $S$ is called Hamilton's principal function.
The action $S$ is given by
$$S=-mc\int ds=\int L\,dt,\quad L=-mc^{2}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}},$$
where $L$ is the relativistic Lagrangian for a free particle. From this,
The variation of the action is
$$\delta S=-mc\int \delta ds.$$
To calculate $\delta ds$, observe first that $\delta ds^{2}=2\,ds\,\delta ds$ and that
$$\delta ds^{2}=\delta \eta _{\mu \nu }dx^{\mu }dx^{\nu }=\eta _{\mu \nu }\left(\delta \left(dx^{\mu }\right)dx^{\nu }+dx^{\mu }\delta \left(dx^{\nu }\right)\right)=2\eta _{\mu \nu }\delta \left(dx^{\mu }\right)dx^{\nu }.$$
So
$$\delta ds=\eta _{\mu \nu }\delta dx^{\mu }{\frac {dx^{\nu }}{ds}}=\eta _{\mu \nu }d\delta x^{\mu }{\frac {dx^{\nu }}{ds}},$$
or
$$\delta ds=\eta _{\mu \nu }{\frac {d\delta x^{\mu }}{d\tau }}{\frac {dx^{\nu }}{c\,d\tau }}d\tau ,$$
and thus
$$\delta S=-m\int \eta _{\mu \nu }{\frac {d\delta x^{\mu }}{d\tau }}{\frac {dx^{\nu }}{d\tau }}d\tau =-m\int \eta _{\mu \nu }{\frac {d\delta x^{\mu }}{d\tau }}u^{\nu }d\tau =-m\int \eta _{\mu \nu }\left[{\frac {d}{d\tau }}\left(\delta x^{\mu }u^{\nu }\right)-\delta x^{\mu }{\frac {d}{d\tau }}u^{\nu }\right]d\tau$$
which is just
$$\delta S=\left[-mu_{\mu }\delta x^{\mu }\right]_{t_{1}}^{t_{2}}+m\int _{t_{1}}^{t_{2}}\delta x^{\mu }{\frac {du_{\mu }}{ds}}ds$$
$$\delta S=\left[-mu_{\mu }\delta x^{\mu }\right]_{t_{1}}^{t_{2}}+m\int _{t_{1}}^{t_{2}}\delta x^{\mu }{\frac {du_{\mu }}{ds}}ds=-mu_{\mu }\delta x^{\mu }={\frac {\partial S}{\partial x^{\mu }}}\delta x^{\mu }=-p_{\mu }\delta x^{\mu },$$
where the second step employs the field equations $du_{\mu }/ds=0$, $(\delta x^{\mu })_{t_{1}}=0$, and $(\delta x^{\mu })_{t_{2}}\equiv \delta x^{\mu }$ as in the observations above. Now compare the last three expressions to find
$$p^{\mu }=\partial ^{\mu }[S]={\frac {\partial S}{\partial x_{\mu }}}=mu^{\mu }=m\left({\frac {c}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},{\frac {v_{x}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},{\frac {v_{y}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},{\frac {v_{z}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}\right),$$
with norm $-m^{2}c^{2}$, and the famed result for the relativistic energy,
$$E={\frac {mc^{2}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}=m_{r}c^{2},$$
where $m_r$ is the now unfashionable relativistic mass, follows. By comparing the expressions for momentum and energy directly, one has
$$\mathbf {p} =E{\frac {\mathbf {v} }{c^{2}}},$$
which holds for massless particles as well. Squaring the expressions for energy and three-momentum and relating them gives the energy–momentum relation,
$${\frac {E^{2}}{c^{2}}}=\mathbf {p} \cdot \mathbf {p} +m^{2}c^{2}.$$
Substituting
$$p_{\mu }\leftrightarrow -{\frac {\partial S}{\partial x^{\mu }}}$$
in the equation for the norm gives the relativistic Hamilton–Jacobi equation,[4]
$$\eta ^{\mu \nu }{\frac {\partial S}{\partial x^{\mu }}}{\frac {\partial S}{\partial x^{\nu }}}=-m^{2}c^{2}.$$
It is also possible to derive the results from the Lagrangian directly. By definition,[5]
$${\begin{aligned}\mathbf {p} &={\frac {\partial L}{\partial \mathbf {v} }}=\left({\partial L \over \partial {\dot {x}}},{\partial L \over \partial {\dot {y}}},{\partial L \over \partial {\dot {z}}}\right)=m(\gamma v_{x},\gamma v_{y},\gamma v_{z})=m\gamma \mathbf {v} =m\mathbf {u} ,\\[3pt]E&=\mathbf {p} \cdot \mathbf {v} -L={\frac {mc^{2}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},\end{aligned}}$$
which constitute the standard formulae for canonical momentum and energy of a closed (time-independent Lagrangian) system. With this approach it is less clear that the energy and momentum are parts of a four-vector.
The energy and the three-momentum areseparately conservedquantities for isolated systems in the Lagrangian framework. Hence four-momentum is conserved as well. More on this below.
More pedestrian approaches include expected behavior in electrodynamics.[6] In this approach, the starting point is application of the Lorentz force law and Newton's second law in the rest frame of the particle. The transformation properties of the electromagnetic field tensor, including invariance of electric charge, are then used to transform to the lab frame, and the resulting expression (again the Lorentz force law) is interpreted in the spirit of Newton's second law, leading to the correct expression for the relativistic three-momentum. The disadvantage, of course, is that it isn't immediately clear that the result applies to all particles, whether charged or not, and that it doesn't yield the complete four-vector.
It is also possible to avoid electromagnetism and use well-tuned thought experiments involving well-trained physicists throwing billiard balls, utilizing knowledge of the velocity addition formula and assuming conservation of momentum.[7][8] This too gives only the three-vector part.
As shown above, there are three conservation laws (not independent, the last two imply the first and vice versa):
Note that the invariant mass of a system of particles may be more than the sum of the particles' rest masses, since kinetic energy in the system center-of-mass frame and potential energy from forces between the particles contribute to the invariant mass. As an example, two particles with four-momenta (5 GeV/c, 4 GeV/c, 0, 0) and (5 GeV/c, −4 GeV/c, 0, 0) each have (rest) mass 3 GeV/c² separately, but their total mass (the system mass) is 10 GeV/c². If these particles were to collide and stick, the mass of the composite object would be 10 GeV/c².
One practical application from particle physics of the conservation of the invariant mass involves combining the four-momenta $p_A$ and $p_B$ of two daughter particles produced in the decay of a heavier particle with four-momentum $p_C$ to find the mass of the heavier particle. Conservation of four-momentum gives $p_{C}^{\mu }=p_{A}^{\mu }+p_{B}^{\mu }$, while the mass $M$ of the heavier particle is given by $-P_{C}\cdot P_{C}=M^{2}c^{2}$. By measuring the energies and three-momenta of the daughter particles, one can reconstruct the invariant mass of the two-particle system, which must be equal to $M$. This technique is used, e.g., in experimental searches for Z′ bosons at high-energy particle colliders, where the Z′ boson would show up as a bump in the invariant mass spectrum of electron–positron or muon–antimuon pairs.
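The reconstruction just described can be sketched in a few lines (units of GeV with $c = 1$; the function name is ours, not a standard analysis API):

```python
import math

def invariant_mass(*four_momenta):
    """Invariant mass of a system from summed four-momenta (E, px, py, pz), c = 1."""
    E = sum(p[0] for p in four_momenta)
    px = sum(p[1] for p in four_momenta)
    py = sum(p[2] for p in four_momenta)
    pz = sum(p[3] for p in four_momenta)
    # M^2 = E^2 - |p|^2 in natural units
    return math.sqrt(E * E - px * px - py * py - pz * pz)
```

This reproduces the example from the text: each particle (5, ±4, 0, 0) GeV has mass 3 GeV, while the two-particle system has mass 10 GeV.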
If the mass of an object does not change, the Minkowski inner product of its four-momentum and corresponding four-acceleration $A^{\mu}$ is simply zero. The four-acceleration is proportional to the proper time derivative of the four-momentum divided by the particle's mass, so
$$p^{\mu }A_{\mu }=\eta _{\mu \nu }p^{\mu }A^{\nu }=\eta _{\mu \nu }p^{\mu }{\frac {d}{d\tau }}{\frac {p^{\nu }}{m}}={\frac {1}{2m}}{\frac {d}{d\tau }}p\cdot p={\frac {1}{2m}}{\frac {d}{d\tau }}\left(-m^{2}c^{2}\right)=0.$$
For a charged particle of charge $q$, moving in an electromagnetic field given by the electromagnetic four-potential
$$A=\left(A^{0},A^{1},A^{2},A^{3}\right)=\left({\phi \over c},A_{x},A_{y},A_{z}\right)$$
where $\phi$ is the scalar potential and $\mathbf{A} = (A_x, A_y, A_z)$ the vector potential, the components of the (not gauge-invariant) canonical momentum four-vector $P$ are
$$P^{\mu }=p^{\mu }+qA^{\mu }.$$
This, in turn, allows the potential energy from the charged particle in an electrostatic potential and theLorentz forceon the charged particle moving in a magnetic field to be incorporated in a compact way, inrelativistic quantum mechanics.
In the case when there is a moving physical system with a continuous distribution of matter in curved spacetime, the primary expression for four-momentum is a four-vector with covariant index:[9]
Four-momentum $P_{\mu }$ is expressed through the energy $E$ of the physical system and the relativistic momentum $\mathbf {P}$. At the same time, the four-momentum $P_{\mu }$ can be represented as the sum of two non-local four-vectors of integral type:
Four-vector $p_{\mu }$ is the generalized four-momentum associated with the action of fields on particles; four-vector $K_{\mu }$ is the four-momentum of the fields arising from the action of particles on the fields.
Energy $E$ and momentum $\mathbf {P}$, as well as the components of four-vectors $p_{\mu }$ and $K_{\mu }$, can be calculated if the Lagrangian density ${\mathcal {L}}={\mathcal {L}}_{p}+{\mathcal {L}}_{f}$ of the system is given. The following formulas are obtained for the energy and momentum of the system:
Here ${\mathcal {L}}_{p}$ is that part of the Lagrangian density that contains terms with four-currents; $\mathbf {v}$ is the velocity of matter particles; $u^{0}$ is the time component of the four-velocity of particles; $g$ is the determinant of the metric tensor; $L_{f}=\int _{V}{\mathcal {L}}_{f}{\sqrt {-g}}\,dx^{1}dx^{2}dx^{3}$ is the part of the Lagrangian associated with the Lagrangian density ${\mathcal {L}}_{f}$; $\mathbf {v} _{n}$ is the velocity of a particle of matter with number $n$.
|
https://en.wikipedia.org/wiki/Four-momentum
|
A call gate is a mechanism in Intel's x86 architecture for changing the privilege level of a process when it executes a predefined function call using a CALL FAR instruction.
Call gates are intended to allow less privileged code to call code with a higher privilege level. This type of mechanism is essential in modern operating systems that employ memory protection, since it allows user applications to use kernel functions and system calls in a way that can be controlled by the operating system.
Call gates use a special selector value to reference a descriptor accessed via the Global Descriptor Table or the Local Descriptor Table, which contains the information needed for the call across privilege boundaries. This is similar to the mechanism used for interrupts.
Assuming a call gate has been set up already by the operating system kernel, code simply does a CALL FAR with the necessary segment selector (the offset field is ignored). The processor performs a number of checks to make sure the entry is valid and the code was operating at sufficient privilege to use the gate. Assuming all checks pass, a new CS/EIP is loaded from the segment descriptor, and continuation information is pushed onto the stack of the new privilege level (old SS, old ESP, old CS, old EIP, in that order). Parameters may also be copied from the old stack to the new stack if needed. The number of parameters to copy is located in the call gate descriptor.
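The descriptor fields described above can be illustrated by packing them into the two 32-bit halves of a call gate descriptor. The bit layout follows the Intel manuals (type 1100b for a 32-bit call gate), but the helper function and the example field values here are purely illustrative:

```python
# Packing a 32-bit call-gate descriptor from its fields (illustrative sketch).
def call_gate(offset, selector, dpl, param_count, present=True):
    # low dword:  bits 31:16 = segment selector, bits 15:0 = offset 15:0
    low = ((selector & 0xFFFF) << 16) | (offset & 0xFFFF)
    # high dword: bits 31:16 = offset 31:16, bit 15 = present, bits 14:13 = DPL,
    #             bits 11:8 = type (1100b = 32-bit call gate), bits 4:0 = parameter count
    high = ((offset & 0xFFFF0000)
            | (int(present) << 15)
            | ((dpl & 0x3) << 13)
            | (0xC << 8)
            | (param_count & 0x1F))
    return low, high

# e.g. a gate at offset 0x00123456 through code selector 0x08,
# callable from ring 3, copying 2 parameters to the new stack
low, high = call_gate(0x00123456, 0x0008, dpl=3, param_count=2)
```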
The kernel may return to the user space program by using a RET FAR instruction which pops the continuation information off the stack and returns to the outer privilege level.
Multics was the first user of call gates. The Honeywell 6180 had call gates as part of the architecture, but Multics simulated them on the older GE 645.
OS/2 was an early user of Intel call gates to transfer between application code running in ring 3, privileged code running in ring 2, and kernel code in ring 0.
Windows 95 executes drivers and process switching in ring 0, while applications, including API DLLs such as kernel32.dll and krnl386.exe, are executed in ring 3. The driver VWIN32.VXD provides key operating system primitives at ring 0 and allows driver functions to be called from 16-bit applications (MS-DOS and Win16). The entry-point address is obtained by calling INT 2Fh with 1684h in the AX register; to identify which VxD an entry point is being requested for, the BX register is set to the 16-bit VxD ID. Upon return from the INT instruction, the ES:DI registers contain a far pointer that can be called to transfer control to the VxD running at ring 0. The descriptor pointed to by ES is actually a call gate.[1] 32-bit applications, however, when they need to access Windows 95 driver code, call the undocumented VxDCall function in KERNEL32.DLL, which essentially invokes INT 30h, which changes the ring mode.
Modern x86 operating systems are transitioning away from CALL FAR call gates. With the introduction of x86 instructions for system calls (SYSENTER/SYSEXIT by Intel and SYSCALL/SYSRET by AMD), a new, faster mechanism was introduced for control transfers in x86 programs. As most other architectures do not support call gates, their use was rare even before these new instructions, as software interrupts or traps were preferred for portability, even though call gates are significantly faster than interrupts.
Call gates are more flexible than the SYSENTER/SYSEXIT and SYSCALL/SYSRET instructions since, unlike the latter two, call gates allow for changing from an arbitrary privilege level to an arbitrary (albeit higher or equal) privilege level. The fast SYS* instructions only allow control transfers from ring 3 to ring 0 and vice versa.
To preserve system security, the Global Descriptor Table must be held in protected memory; otherwise any program would be able to create its own call gate and use it to raise its privilege level. Call gates have been used in software security exploits when ways have been found around this protection.[2] One example of this is the e-mail worm Gurong.A, written to exploit the Microsoft Windows operating system, which uses \Device\PhysicalMemory to install a call gate.[3]
|
https://en.wikipedia.org/wiki/Call_gate_(Intel)
|
A Karnaugh map (KM or K-map) is a diagram that can be used to simplify a Boolean algebra expression. Maurice Karnaugh introduced the technique in 1953[1][2] as a refinement of Edward W. Veitch's 1952 Veitch chart,[3][4] which itself was a rediscovery of Allan Marquand's 1881 logical diagram[5][6] or Marquand diagram.[4] They are also known as Marquand–Veitch diagrams,[4] Karnaugh–Veitch (KV) maps, and (rarely) Svoboda charts.[7] An early advance in the history of formal logic methodology, Karnaugh maps remain relevant in the digital age, especially in the fields of logical circuit design and digital engineering.[4]
A Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition capability.[1] It also permits the rapid identification and elimination of potential race conditions.
The required Boolean results are transferred from a truth table onto a two-dimensional grid where, in Karnaugh maps, the cells are ordered in Gray code,[8][4] and each cell position represents one combination of input conditions. Cells are also known as minterms, while each cell value represents the corresponding output value of the Boolean function. Optimal groups of 1s or 0s are identified, which represent the terms of a canonical form of the logic in the original truth table.[9] These terms can be used to write a minimal Boolean expression representing the required logic.
Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using the minimal number of logic gates. A sum-of-products expression (SOP) can always be implemented using AND gates feeding into an OR gate, and a product-of-sums expression (POS) leads to OR gates feeding an AND gate. The POS expression is obtained from the complement of the function (if F is the function, its complement is F′).[10] Karnaugh maps can also be used to simplify logic expressions in software design. Boolean conditions, as used for example in conditional statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised, canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic operators.[11]
Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. For example, consider the Boolean function described by the following truth table.
Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean variables A, B, C, D and their inverses.
In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows, and the Karnaugh map has 16 positions. The Karnaugh map is therefore arranged in a 4 × 4 grid.
The row and column indices (shown across the top and down the left side of the Karnaugh map) are ordered in Gray code rather than binary numerical order. Gray code ensures that only one variable changes between each pair of adjacent cells. Each cell of the completed Karnaugh map contains a binary digit representing the function's output for that combination of inputs.
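The Gray-code ordering of the indices can be sketched with the usual i ^ (i >> 1) construction; for two variables it yields the sequence 00, 01, 11, 10, and adjacent codes (including the wrap-around from the last back to the first) differ in exactly one bit:

```python
def gray(n):
    """n-bit Gray code sequence: index i maps to i ^ (i >> 1)."""
    return [i ^ (i >> 1) for i in range(1 << n)]

codes = gray(2)   # row/column order used on the map axes: 00, 01, 11, 10

# adjacent entries, including the wrap-around, differ in exactly one bit
for i, c in enumerate(codes):
    nxt = codes[(i + 1) % len(codes)]
    assert bin(c ^ nxt).count("1") == 1
```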
After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms — acanonical form— for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify the expression. The minterms ('minimal terms') for the final expression are found by encircling groups of 1s in the map. Minterm groups must be rectangular and must have an area that is a power of two (i.e., 1, 2, 4, 8...). Minterm rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one larger. The optimal groupings in the example below are marked by the green, red and blue lines, and the red and green groups overlap. The red group is a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is indicated in brown.
The cells are often denoted by a shorthand which describes the logical value of the inputs that the cell covers. For example, AD would mean a cell which covers the 2×2 area where A and D are true, i.e. the cells numbered 13, 9, 15, 11 in the diagram above. On the other hand, AD¯ would mean the cells where A is true and D is false (that is, D¯ is true).
The grid is toroidally connected, which means that rectangular groups can wrap across the edges (see picture). Cells on the extreme right are actually 'adjacent' to those on the far left, in the sense that the corresponding input values only differ by one bit; similarly, so are those at the very top and those at the bottom. Therefore, AD¯ can be a valid term—it includes cells 12 and 8 at the top, and wraps to the bottom to include cells 10 and 14—as is B¯D¯, which includes the four corners.
Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic minterms can be found by examining which variables stay the same within each box.
For the red grouping:
Thus the first minterm in the Boolean sum-of-products expression is AC¯.
For the green grouping, A and B maintain the same state, while C and D change. B is 0 and has to be negated before it can be included. The second term is therefore AB¯. Note that it is acceptable that the green grouping overlaps with the red one.
In the same way, the blue grouping gives the term BCD¯.
The solutions of each grouping are combined: the normal form of the circuit isAC¯+AB¯+BCD¯{\displaystyle A{\overline {C}}+A{\overline {B}}+BC{\overline {D}}}.
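The result can be checked mechanically. The three groupings above cover the cells {8, 9, 12, 13} (red, AC̄), {8, 9, 10, 11} (green, AB̄) and {6, 14} (blue, BCD̄), so the simplified expression must be 1 exactly on their union. A short sketch, numbering minterms as A·8 + B·4 + C·2 + D:

```python
# cells covered by the three groups described above
red   = {8, 9, 12, 13}   # A = 1, C = 0        -> A C'
green = {8, 9, 10, 11}   # A = 1, B = 0        -> A B'
blue  = {6, 14}          # B = C = 1, D = 0    -> B C D'
minterms = red | green | blue

def f_simplified(A, B, C, D):
    """The normal form AC' + AB' + BCD' derived from the map."""
    return (A and not C) or (A and not B) or (B and C and not D)

# the simplified expression agrees with the truth table on all 16 input rows
for m in range(16):
    A, B, C, D = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert bool(f_simplified(A, B, C, D)) == (m in minterms)
```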
Thus the Karnaugh map has guided a simplification of
It would also have been possible to derive this simplification by carefully applying theaxioms of Boolean algebra, but the time it takes to do that grows exponentially with the number of terms.
The inverse of a function is solved in the same way by grouping the 0s instead.[nb 1]
The three terms to cover the inverse are all shown with grey boxes with different colored borders:
This yields the inverse:
Through the use ofDe Morgan's laws, theproduct of sumscan be determined:
Karnaugh maps also allow easier minimizations of functions whose truth tables include "don't care" conditions. A "don't care" condition is a combination of inputs for which the designer doesn't care what the output is. Therefore, "don't care" conditions can either be included in or excluded from any rectangular group, whichever makes it larger. They are usually indicated on the map with a dash or X.
The example on the right is the same as the example above but with the value off(1,1,1,1) replaced by a "don't care". This allows the red term to expand all the way down and, thus, removes the green term completely.
This yields the new minimum equation:
Note that the first term is just A, not AC¯. In this case, the don't care has dropped a term (the green rectangle); simplified another (the red one); and removed the race hazard (removing the yellow term as shown in the following section on race hazards).
The inverse case is simplified as follows:
Through the use ofDe Morgan's laws, theproduct of sumscan be determined:
Karnaugh maps are useful for detecting and eliminating race conditions. Race hazards are very easy to spot using a Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions circumscribed on the map. However, because of the nature of Gray coding, adjacent has a special definition explained above – we're in fact moving on a torus, rather than a rectangle, wrapping around the top, bottom, and the sides.
Whether glitches will actually occur depends on the physical nature of the implementation, and whether we need to worry about it depends on the application. In clocked logic, it is enough that the logic settles on the desired value in time to meet the timing deadline. In our example, we are not considering clocked logic.
In our case, an additional term ofAD¯{\displaystyle A{\overline {D}}}would eliminate the potential race hazard, bridging between the green and blue output states or blue and red output states: this is shown as the yellow region (which wraps around from the bottom to the top of the right half) in the adjacent diagram.
The term isredundantin terms of the static logic of the system, but such redundant, orconsensus terms, are often needed to assure race-free dynamic performance.
Similarly, an additional term ofA¯D{\displaystyle {\overline {A}}D}must be added to the inverse to eliminate another potential race hazard. Applying De Morgan's laws creates another product of sums expression forf, but with a new factor of(A+D¯){\displaystyle \left(A+{\overline {D}}\right)}.
The following are all the possible 2-variable, 2 × 2 Karnaugh maps. Listed with each are the minterms as a function of ∑m(){\textstyle \sum m()} and the race-hazard-free (see previous section) minimum equation. A minterm is defined as an expression that gives the most minimal form of expression of the mapped variables. All possible horizontal and vertical interconnected blocks can be formed. These blocks must have a size that is a power of 2 (1, 2, 4, 8, 16, 32, ...). These expressions create a minimal logical mapping of the minimal logic variable expressions for the binary expressions to be mapped. Here are all the blocks with one field.
A block can be continued across the bottom, top, left, or right of the chart. That can even wrap beyond the edge of the chart for variable minimization. This is because each logic variable corresponds to each vertical column and horizontal row. A visualization of the k-map can be considered cylindrical. The fields at edges on the left and right are adjacent, and the top and bottom are adjacent. K-Maps for four variables must be depicted as a donut or torus shape. The four corners of the square drawn by the k-map are adjacent. Still more complex maps are needed for 5 variables and more.
Related graphical minimization methods include:
|
https://en.wikipedia.org/wiki/Karnaugh_map
|
Spectral regularizationis any of a class ofregularizationtechniques used inmachine learningto control the impact of noise and preventoverfitting. Spectral regularization can be used in a broad range of applications, from deblurring images to classifying emails into a spam folder and a non-spam folder. For instance, in the email classification example, spectral regularization can be used to reduce the impact of noise and prevent overfitting when a machine learning system is being trained on a labeled set of emails to learn how to tell a spam and a non-spam email apart.
Spectral regularization algorithms rely on methods that were originally defined and studied in the theory ofill-posedinverse problems(for instance, see[1]) focusing on the inversion of a linear operator (or a matrix) that possibly has a badcondition numberor an unbounded inverse. In this context, regularization amounts to substituting the original operator by abounded operatorcalled the "regularization operator" that has a condition number controlled by a regularization parameter,[2]a classical example beingTikhonov regularization. To ensure stability, this regularization parameter is tuned based on the level of noise.[2]The main idea behind spectral regularization is that each regularization operator can be described using spectral calculus as an appropriate filter on the eigenvalues of the operator that defines the problem, and the role of the filter is to "suppress the oscillatory behavior corresponding to small eigenvalues".[2]Therefore, each algorithm in the class of spectral regularization algorithms is defined by a suitable filter function (which needs to be derived for that particular algorithm). Three of the most commonly used regularization algorithms for which spectral filtering is well-studied are Tikhonov regularization,Landweber iteration, andtruncated singular value decomposition(TSVD). As for choosing the regularization parameter, examples of candidate methods to compute this parameter include the discrepancy principle, generalizedcross validation, and the L-curve criterion.[3]
It is of note that the notion of spectral filtering studied in the context of machine learning is closely connected to the literature onfunction approximation(in signal processing).
The training set is defined as S={(x1,y1),…,(xn,yn)}{\displaystyle S=\{(x_{1},y_{1}),\dots ,(x_{n},y_{n})\}}, where X{\displaystyle X} is the n×d{\displaystyle n\times d} input matrix and Y=(y1,…,yn){\displaystyle Y=(y_{1},\dots ,y_{n})} is the output vector. Where applicable, the kernel function is denoted by k{\displaystyle k}, the n×n{\displaystyle n\times n} kernel matrix is denoted by K{\displaystyle K}, which has entries Kij=k(xi,xj){\displaystyle K_{ij}=k(x_{i},x_{j})}, and H{\displaystyle {\mathcal {H}}} denotes the Reproducing Kernel Hilbert Space (RKHS) with kernel k{\displaystyle k}. The regularization parameter is denoted by λ{\displaystyle \lambda }.
(Note: Forg∈G{\displaystyle g\in G}andf∈F{\displaystyle f\in F}, withG{\displaystyle G}andF{\displaystyle F}being Hilbert spaces, given a linear, continuous operatorL{\displaystyle L}, assume thatg=Lf{\displaystyle g=Lf}holds. In this setting, the direct problem would be to solve forg{\displaystyle g}givenf{\displaystyle f}and the inverse problem would be to solve forf{\displaystyle f}giveng{\displaystyle g}. If the solution exists, is unique and stable, the inverse problem (i.e. the problem of solving forf{\displaystyle f}) is well-posed; otherwise, it is ill-posed.)
The connection between the regularized least squares (RLS) estimation problem (Tikhonov regularization setting) and the theory of ill-posed inverse problems is an example of how spectral regularization algorithms are related to the theory of ill-posed inverse problems.
The RLS estimator solvesminf∈H1n∑i=1n(yi−f(xi))2+λ‖f‖H2{\displaystyle \min _{f\in {\mathcal {H}}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}+\lambda \left\|f\right\|_{\mathcal {H}}^{2}}and the RKHS allows for expressing this RLS estimator asfSλ(X)=∑i=1ncik(x,xi){\displaystyle f_{S}^{\lambda }(X)=\sum _{i=1}^{n}c_{i}k(x,x_{i})}where(K+nλI)c=Y{\displaystyle (K+n\lambda I)c=Y}withc=(c1,…,cn){\displaystyle c=(c_{1},\dots ,c_{n})}.[4]The penalization term is used for controlling smoothness and preventing overfitting. Since the solution of empirical risk minimizationminf∈H1n∑i=1n(yi−f(xi))2{\displaystyle \min _{f\in {\mathcal {H}}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}}can be written asfSλ(X)=∑i=1ncik(x,xi){\displaystyle f_{S}^{\lambda }(X)=\sum _{i=1}^{n}c_{i}k(x,x_{i})}such thatKc=Y{\displaystyle Kc=Y}, adding the penalty function amounts to the following change in the system that needs to be solved:[5]{minf∈H1n∑i=1n(yi−f(xi))2→minf∈H1n∑i=1n(yi−f(xi))2+λ‖f‖H2}≡{Kc=Y→(K+nλI)c=Y}.{\displaystyle \left\{\min _{f\in {\mathcal {H}}}{\frac {1}{n}}\sum _{i=1}^{n}\left(y_{i}-f(x_{i})\right)^{2}\rightarrow \min _{f\in {\mathcal {H}}}{\frac {1}{n}}\sum _{i=1}^{n}\left(y_{i}-f(x_{i})\right)^{2}+\lambda \left\|f\right\|_{\mathcal {H}}^{2}\right\}\equiv {\biggl \{}Kc=Y\rightarrow \left(K+n\lambda I\right)c=Y{\biggr \}}.}
In this learning setting, the kernel matrix can be decomposed asK=QΣQT{\displaystyle K=Q\Sigma Q^{T}}, withσ=diag(σ1,…,σn),σ1≥σ2≥⋯≥σn≥0{\displaystyle \sigma =\operatorname {diag} (\sigma _{1},\dots ,\sigma _{n}),~\sigma _{1}\geq \sigma _{2}\geq \cdots \geq \sigma _{n}\geq 0}andq1,…,qn{\displaystyle q_{1},\dots ,q_{n}}are the corresponding eigenvectors. Therefore, in the initial learning setting, the following holds:c=K−1Y=QΣ−1QTY=∑i=1n1σi⟨qi,Y⟩qi.{\displaystyle c=K^{-1}Y=Q\Sigma ^{-1}Q^{T}Y=\sum _{i=1}^{n}{\frac {1}{\sigma _{i}}}\langle q_{i},Y\rangle q_{i}.}
Thus, for small eigenvalues, even small perturbations in the data can lead to considerable changes in the solution. Hence, the problem is ill-conditioned, and solving this RLS problem amounts to stabilizing a possibly ill-conditioned matrix inversion problem, which is studied in the theory of ill-posed inverse problems; in both problems, a main concern is to deal with the issue ofnumerical stability.
Each algorithm in the class of spectral regularization algorithms is defined by a suitable filter function, denoted here byGλ(⋅){\displaystyle G_{\lambda }(\cdot )}. If the Kernel matrix is denoted byK{\displaystyle K}, thenλ{\displaystyle \lambda }should control the magnitude of the smaller eigenvalues ofGλ(K){\displaystyle G_{\lambda }(K)}. In a filtering setup, the goal is to find estimatorsfSλ(X):=∑i=1ncik(x,xi){\displaystyle f_{S}^{\lambda }(X):=\sum _{i=1}^{n}c_{i}k(x,x_{i})}wherec=Gλ(K)Y{\displaystyle c=G_{\lambda }(K)Y}. To do so, a scalar filter functionGλ(σ){\displaystyle G_{\lambda }(\sigma )}is defined using the eigen-decomposition of the kernel matrix:Gλ(K)=QGλ(Σ)QT,{\displaystyle G_{\lambda }(K)=QG_{\lambda }(\Sigma )Q^{T},}which yieldsGλ(K)Y=∑i=1nGλ(σi)⟨qi,Y⟩qi.{\displaystyle G_{\lambda }(K)Y~=~\sum _{i=1}^{n}G_{\lambda }(\sigma _{i})\langle q_{i},Y\rangle q_{i}.}
Typically, an appropriate filter function should have the following properties:[5]
While the above items give a rough characterization of the general properties of filter functions for all spectral regularization algorithms, the derivation of the filter function (and hence its exact form) varies depending on the specific regularization method that spectral filtering is applied to.
In the Tikhonov regularization setting, the filter function for RLS is described below. As shown in,[4] in this setting, c=(K+nλI)−1Y{\displaystyle c=\left(K+n\lambda I\right)^{-1}Y}. Thus,c=(K+nλI)−1Y=Q(Σ+nλI)−1QTY=∑i=1n1σi+nλ⟨qi,Y⟩qi.{\displaystyle c=(K+n\lambda I)^{-1}Y=Q(\Sigma +n\lambda I)^{-1}Q^{T}Y=\sum _{i=1}^{n}{\frac {1}{\sigma _{i}+n\lambda }}\langle q_{i},Y\rangle q_{i}.}
The undesired components are filtered out using regularization:
The filter function for Tikhonov regularization is therefore defined as:[5]Gλ(σ)=1σ+nλ.{\displaystyle G_{\lambda }(\sigma )={\frac {1}{\sigma +n\lambda }}.}
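A quick numerical sketch (with an arbitrary Gaussian kernel matrix and made-up data) confirms that applying the filter Gλ(σ) = 1/(σ + nλ) to the eigenvalues of K reproduces the direct solve of (K + nλI)c = Y:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 20, 0.1
X = rng.normal(size=(n, 3))
# Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2)
K = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
Y = rng.normal(size=n)

# direct RLS solve: (K + n*lam*I) c = Y
c_direct = np.linalg.solve(K + n * lam * np.eye(n), Y)

# spectral form: c = Q G(Sigma) Q^T Y with G(sigma) = 1 / (sigma + n*lam)
sigma, Q = np.linalg.eigh(K)           # K is symmetric positive semi-definite
c_filter = Q @ ((Q.T @ Y) / (sigma + n * lam))
```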
The idea behind the Landweber iteration isgradient descent:[5]
In this setting, if n{\displaystyle n} is larger than K{\displaystyle K}'s largest eigenvalue, the above iteration converges by choosing η=2/n{\displaystyle \eta =2/n} as the step-size.[5] The above iteration is equivalent to minimizing 1n‖Y−Kc‖22{\displaystyle {\frac {1}{n}}\left\|Y-Kc\right\|_{2}^{2}} (i.e. the empirical risk) via gradient descent; using induction, it can be proved that at the t{\displaystyle t}-th iteration, the solution is given by[5] c=η∑i=0t−1(I−ηK)iY.{\displaystyle c=\eta \sum _{i=0}^{t-1}\left(I-\eta K\right)^{i}Y.}
Thus, the appropriate filter function is defined by:Gλ(σ)=η∑i=0t−1(I−ησ)i.{\displaystyle G_{\lambda }(\sigma )=\eta \sum _{i=0}^{t-1}\left(I-\eta \sigma \right)^{i}.}
It can be shown that this filter function corresponds to a truncated power expansion ofK−1{\displaystyle K^{-1}};[5]to see this, note that the relation∑i≥0xi=1/(1−x){\displaystyle \sum _{i\geq 0}x^{i}=1/(1-x)}, would still hold ifx{\displaystyle x}is replaced by a matrix; thus, ifK{\displaystyle K}(the kernel matrix), or ratherI−ηK{\displaystyle I-\eta K}, is considered, the following holds:K−1=η∑i=0∞(I−ηK)i∼η∑i=0t−1(I−ηK)i.{\displaystyle K^{-1}=\eta \sum _{i=0}^{\infty }\left(I-\eta K\right)^{i}\sim \eta \sum _{i=0}^{t-1}\left(I-\eta K\right)^{i}.}
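The equivalence between the Landweber iteration and the truncated power series can likewise be checked numerically on a made-up symmetric positive semi-definite matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 15, 50
M = rng.normal(size=(n, n))
K = M @ M.T / n                          # symmetric PSD "kernel" matrix
Y = rng.normal(size=n)
eta = 1.0 / np.linalg.eigvalsh(K)[-1]    # step-size below 2 / sigma_max

# iterative form: c_0 = 0,  c_i = c_{i-1} + eta * (Y - K c_{i-1})
c = np.zeros(n)
for _ in range(t):
    c = c + eta * (Y - K @ c)

# closed form: c = eta * sum_{i=0}^{t-1} (I - eta K)^i Y
A = np.eye(n) - eta * K
term, c_closed = Y.copy(), np.zeros(n)
for _ in range(t):
    c_closed += eta * term
    term = A @ term
```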
In this setting, the number of iterations gives the regularization parameter; roughly speaking,t∼1/λ{\displaystyle t\sim 1/\lambda }.[5]Ift{\displaystyle t}is large, overfitting may be a concern. Ift{\displaystyle t}is small, oversmoothing may be a concern. Thus, choosing an appropriate time for early stopping of the iterations provides a regularization effect.
In the TSVD setting, given the eigen-decompositionK=QΣQT{\displaystyle K=Q\Sigma Q^{T}}and using a prescribed thresholdλn{\displaystyle \lambda n}, a regularized inverse can be formed for the kernel matrix by discarding all the eigenvalues that are smaller than this threshold.[5]Thus, the filter function for TSVD can be defined asGλ(σ)={1/σ,ifσ≥λn0,otherwise{\displaystyle G_{\lambda }(\sigma )={\begin{cases}1/\sigma ,&{\text{if }}\sigma \geq \lambda n\\[1ex]0,&{\text{otherwise}}\end{cases}}}
It can be shown that TSVD is equivalent to the (unsupervised) projection of the data using (kernel)Principal Component Analysis(PCA), and that it is also equivalent to minimizing the empirical risk on the projected data (without regularization).[5]Note that the number of components kept for the projection is the onlyfree parameterhere.
|
https://en.wikipedia.org/wiki/Regularization_by_spectral_filtering
|
Elliptic-curve Diffie–Hellman (ECDH) is a key agreement protocol that allows two parties, each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel.[1][2][3] This shared secret may be directly used as a key, or to derive another key. The key, or the derived key, can then be used to encrypt subsequent communications using a symmetric-key cipher. It is a variant of the Diffie–Hellman protocol using elliptic-curve cryptography.
The following example illustrates how a shared key is established. Suppose Alice wants to establish a shared key with Bob, but the only channel available for them may be eavesdropped by a third party. Initially, the domain parameters (that is, (p,a,b,G,n,h){\displaystyle (p,a,b,G,n,h)} in the prime case or (m,f(x),a,b,G,n,h){\displaystyle (m,f(x),a,b,G,n,h)} in the binary case) must be agreed upon. Also, each party must have a key pair suitable for elliptic curve cryptography, consisting of a private key d{\displaystyle d} (a randomly selected integer in the interval [1,n−1]{\displaystyle [1,n-1]}) and a public key represented by a point Q{\displaystyle Q} (where Q=d⋅G{\displaystyle Q=d\cdot G}, that is, the result of adding G{\displaystyle G} to itself d{\displaystyle d} times). Let Alice's key pair be (dA,QA){\displaystyle (d_{\text{A}},Q_{\text{A}})} and Bob's key pair be (dB,QB){\displaystyle (d_{\text{B}},Q_{\text{B}})}. Each party must know the other party's public key prior to execution of the protocol.
Alice computes the point (xk,yk)=dA⋅QB{\displaystyle (x_{k},y_{k})=d_{\text{A}}\cdot Q_{\text{B}}}. Bob computes the point (xk,yk)=dB⋅QA{\displaystyle (x_{k},y_{k})=d_{\text{B}}\cdot Q_{\text{A}}}. The shared secret is xk{\displaystyle x_{k}} (the x coordinate of the point). Most standardized protocols based on ECDH derive a symmetric key from xk{\displaystyle x_{k}} using some hash-based key derivation function.
The shared secret calculated by both parties is equal, becausedA⋅QB=dA⋅dB⋅G=dB⋅dA⋅G=dB⋅QA{\displaystyle d_{\text{A}}\cdot Q_{\text{B}}=d_{\text{A}}\cdot d_{\text{B}}\cdot G=d_{\text{B}}\cdot d_{\text{A}}\cdot G=d_{\text{B}}\cdot Q_{\text{A}}}.
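The whole exchange can be sketched on a deliberately tiny curve. The curve y² = x³ + 2x + 2 over F₁₇ with base point G = (5, 1) of order 19 is a standard textbook toy example, far too small to be secure; the private keys below are arbitrary:

```python
# Toy ECDH over y^2 = x^3 + 2x + 2 (mod 17), base point G = (5, 1) of order 19.
# Illustration only -- real ECDH uses curves with ~256-bit parameters.
p, a = 17, 2
G = (5, 1)

def add(P, Q):
    """Group law on the curve in affine coordinates; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    """Scalar multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

d_A, d_B = 5, 7                          # private keys (arbitrary)
Q_A, Q_B = mul(d_A, G), mul(d_B, G)      # public keys exchanged over the channel
shared_A = mul(d_A, Q_B)                 # Alice's view of the shared point
shared_B = mul(d_B, Q_A)                 # Bob's view -- the same point d_A*d_B*G
```

Both parties arrive at the same point because scalar multiplication commutes through the group: dA·(dB·G) = dB·(dA·G).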
The only information about her key that Alice initially exposes is her public key. So, no party except Alice can determine Alice's private key (Alice of course knows it by having selected it), unless that party can solve the elliptic curvediscrete logarithmproblem. Bob's private key is similarly secure. No party other than Alice or Bob can compute the shared secret, unless that party can solve the elliptic curveDiffie–Hellman problem.
The public keys are either static (and trusted, say via a certificate) or ephemeral (also known as ECDHE, where the final 'E' stands for "ephemeral"). Ephemeral keys are temporary and not necessarily authenticated, so if authentication is desired, authenticity assurances must be obtained by other means. Authentication is necessary to avoid man-in-the-middle attacks. If one of either Alice's or Bob's public keys is static, then man-in-the-middle attacks are thwarted. Static public keys provide neither forward secrecy nor key-compromise impersonation resilience, among other advanced security properties. Holders of static private keys should validate the other public key, and should apply a secure key derivation function to the raw Diffie–Hellman shared secret to avoid leaking information about the static private key. For schemes with other security properties, see MQV.
If Alice maliciously chooses invalid curve points for her key and Bob does not validate that Alice's points are part of the selected group, she can collect enough residues of Bob's key to derive his private key. SeveralTLSlibraries were found to be vulnerable to this attack.[4]
The shared secret is uniformly distributed on a subset of[0,p){\displaystyle [0,p)}of size(n+1)/2{\displaystyle (n+1)/2}. For this reason, the secret should not be used directly as a symmetric key, but it can be used as entropy for a key derivation function.
LetA,B∈Fp{\displaystyle A,B\in F_{p}}such thatB(A2−4)≠0{\displaystyle B(A^{2}-4)\neq 0}. The Montgomery form elliptic curveEM,A,B{\displaystyle E_{M,A,B}}is the set of all(x,y)∈Fp×Fp{\displaystyle (x,y)\in F_{p}\times F_{p}}satisfying the equationBy2=x(x2+Ax+1){\displaystyle By^{2}=x(x^{2}+Ax+1)}along with the point at infinity denoted as∞{\displaystyle \infty }. This is called the affine form of the curve. The set of allFp{\displaystyle F_{p}}-rational points ofEM,A,B{\displaystyle E_{M,A,B}}, denoted asEM,A,B(Fp){\displaystyle E_{M,A,B}(F_{p})}is the set of all(x,y)∈Fp×Fp{\displaystyle (x,y)\in F_{p}\times F_{p}}satisfyingBy2=x(x2+Ax+1){\displaystyle By^{2}=x(x^{2}+Ax+1)}along with∞{\displaystyle \infty }. Under a suitably defined addition operation,EM,A,B(Fp){\displaystyle E_{M,A,B}(F_{p})}is a group with∞{\displaystyle \infty }as the identity element. It is known that the order of this group is a multiple of 4. In fact, it is usually possible to obtainA{\displaystyle A}andB{\displaystyle B}such that the order ofEM,A,B{\displaystyle E_{M,A,B}}is4q{\displaystyle 4q}for a primeq{\displaystyle q}. For more extensive discussions of Montgomery curves and their arithmetic one may follow.[5][6][7]
For computational efficiency, it is preferable to work with projective coordinates. The projective form of the Montgomery curveEM,A,B{\displaystyle E_{M,A,B}}isBY2Z=X(X2+AXZ+Z2){\displaystyle BY^{2}Z=X(X^{2}+AXZ+Z^{2})}. For a pointP=[X:Y:Z]{\displaystyle P=[X:Y:Z]}onEM,A,B{\displaystyle E_{M,A,B}}, thex{\displaystyle x}-coordinate mapx{\displaystyle x}is the following:[7]x(P)=[X:Z]{\displaystyle x(P)=[X:Z]}ifZ≠0{\displaystyle Z\neq 0}andx(P)=[1:0]{\displaystyle x(P)=[1:0]}ifP=[0:1:0]{\displaystyle P=[0:1:0]}.Bernstein[8][9]introduced the mapx0{\displaystyle x_{0}}as follows:x0(X:Z)=XZp−2{\displaystyle x_{0}(X:Z)=XZ^{p-2}}which is defined for all values ofX{\displaystyle X}andZ{\displaystyle Z}inFp{\displaystyle F_{p}}. Following Miller,[10]Montgomery[5]and Bernstein,[9]the Diffie-Hellman key agreement can be carried out on a Montgomery curve as follows. LetQ{\displaystyle Q}be a generator of a prime order subgroup ofEM,A,B(Fp){\displaystyle E_{M,A,B}(F_{p})}. Alice chooses a secret keys{\displaystyle s}and has public keyx0(sQ){\displaystyle x_{0}(sQ)};
Bob chooses a secret key t{\displaystyle t} and has public key x0(tQ){\displaystyle x_{0}(tQ)}. The shared secret key of Alice and Bob is x0(stQ){\displaystyle x_{0}(stQ)}. Using classical computers, the best known method of obtaining x0(stQ){\displaystyle x_{0}(stQ)} from Q,x0(sQ){\displaystyle Q,x_{0}(sQ)} and x0(tQ){\displaystyle x_{0}(tQ)} requires about O(p1/2){\displaystyle O(p^{1/2})} time using Pollard's rho algorithm.[11]
The most famous example of Montgomery curve isCurve25519which was introduced by Bernstein.[9]For Curve25519,p=2255−19,A=486662{\displaystyle p=2^{255}-19,A=486662}andB=1{\displaystyle B=1}.
The other Montgomery curve which is part of TLS 1.3 is Curve448, which was introduced by Hamburg.[12] For Curve448, p=2448−2224−1,A=156326{\displaystyle p=2^{448}-2^{224}-1,A=156326} and B=1{\displaystyle B=1}. A couple of Montgomery curves, named M[4698] and M[4058], competitive with Curve25519 and Curve448 respectively, have been proposed in.[13] For M[4698], p=2251−9,A=4698,B=1{\displaystyle p=2^{251}-9,A=4698,B=1} and for M[4058], p=2444−17,A=4058,B=1{\displaystyle p=2^{444}-17,A=4058,B=1}. At the 256-bit security level, three Montgomery curves named M[996558], M[952902] and M[1504058] have been proposed in.[14] For M[996558], p=2506−45,A=996558,B=1{\displaystyle p=2^{506}-45,A=996558,B=1}, for M[952902], p=2510−75,A=952902,B=1{\displaystyle p=2^{510}-75,A=952902,B=1} and for M[1504058], p=2521−1,A=1504058,B=1{\displaystyle p=2^{521}-1,A=1504058,B=1} respectively. Apart from these, other proposals of Montgomery curves can be found in.[15]
|
https://en.wikipedia.org/wiki/Elliptic_curve_Diffie%E2%80%93Hellman
|
The generically named Macintosh Processor Upgrade Card[1] (code named STP[2]) is a central processing unit upgrade card sold by Apple Computer, designed for many Motorola 68040-powered Macintosh LC, Quadra and Performa models. The card contains a PowerPC 601 CPU and plugs into the 68040 CPU socket of the upgraded machine.[3] The upgrade card required that the original CPU be plugged back into the card itself, and gave the machine the ability to run in its original 68040 configuration or, through a software configuration utility, to boot as a PowerPC 601 computer running at twice the original clock speed (50 MHz or 66 MHz) with 32 KB of L1 cache, 256 KB of L2 cache and a PowerPC floating point unit available to software. The Macintosh Processor Upgrade requires, and shipped with, System 7.5.[4]
Development of the card started in July 1993.[2]The upgrade card was announced in January 1994 at the MacWorld Expo in San Francisco. Apple described the Macintosh Processor Upgrade Card as giving a performance increase of "two to four times" for general purposes, or "up to 10 times" for floating point intensive programs.
While the Macintosh Processor Upgrade did not plug into the LC Processor Direct Slot, LC PDS cards could not be fitted while the card was installed, due to the power it drew and the space it occupied.[3] This limited the usefulness of the Processor Upgrade Card, as internal Ethernet, Apple IIe compatibility, video cards and other LC PDS expansion options had to be removed.
The Macintosh Processor Upgrade Card allows a 68k Mac that can normally only run up to Mac OS 8.1 to be upgraded to Mac OS 8.6 or newer, as long as the card is always in use. If the user turns off or disconnects the card, the machine will display a Sad Mac, as newer versions of Mac OS are not compatible with 68k processors. The Macintosh Processor Upgrade Card can only run up to Mac OS 9.1, as 9.2 onward requires at least a G3 processor.
DayStar Digital manufactured the Macintosh Processor Upgrade Card for Apple, sold the same card as their DayStar PowerCard 601-50/66, and also manufactured a DayStar PowerCard 601/100 which reached 100 MHz.[5] After DayStar went out of business, the 100 MHz model was manufactured and sold by Sonnet Technologies as their Sonnet Presto PPC 605.[6]
|
https://en.wikipedia.org/wiki/Macintosh_Processor_Upgrade_Card
|
Schröder's equation,[1][2][3] named after Ernst Schröder, is a functional equation with one independent variable: given the function h, find the function Ψ such that
∀xΨ(h(x))=sΨ(x).{\displaystyle \forall x\;\;\;\Psi {\big (}h(x){\big )}=s\Psi (x).}
Schröder's equation is an eigenvalue equation for the composition operator Ch that sends a function f to f(h(·)).
If a is a fixed point of h, meaning h(a) = a, then either Ψ(a) = 0 (or ∞) or s = 1. Thus, provided that Ψ(a) is finite and Ψ′(a) does not vanish or diverge, the eigenvalue s is given by s = h′(a).
Fora= 0, ifhis analytic on the unit disk, fixes0, and0 < |h′(0)| < 1, thenGabriel Koenigsshowed in 1884 that there is an analytic (non-trivial)Ψsatisfying Schröder's equation. This is one of the first steps in a long line of theorems fruitful for understanding composition operators on analytic function spaces, cf.Koenigs function.
Equations such as Schröder's are suitable to encodingself-similarity, and have thus been extensively utilized in studies ofnonlinear dynamics(often referred to colloquially aschaos theory). It is also used in studies ofturbulence, as well as therenormalization group.[4][5]
An equivalent transpose form of Schröder's equation for the inverseΦ = Ψ−1of Schröder's conjugacy function ish(Φ(y)) = Φ(sy). The change of variablesα(x) = log(Ψ(x))/log(s)(theAbel function) further converts Schröder's equation to the olderAbel equation,α(h(x)) = α(x) + 1. Similarly, the change of variablesΨ(x) = log(φ(x))converts Schröder's equation toBöttcher's equation,φ(h(x)) = (φ(x))s.
Moreover, for the velocity,[5]β(x) = Ψ/Ψ′,Julia's equation,β(f(x)) =f′(x)β(x), holds.
The n-th power of a solution of Schröder's equation provides a solution of Schröder's equation with eigenvalue {\displaystyle s^{n}} instead. In the same vein, for an invertible solution Ψ(x) of Schröder's equation, the (non-invertible) function Ψ(x)·k(log Ψ(x)) is also a solution, for any periodic function k(x) with period log(s). All solutions of Schröder's equation are related in this manner.
Schröder's equation was solved analytically by Gabriel Koenigs (1884) in the case that a is an attracting (but not superattracting) fixed point, that is, 0 < |h′(a)| < 1.[6][7]
In the case of a superattracting fixed point,|h′(a)| = 0, Schröder's equation is unwieldy, and had best be transformed toBöttcher's equation.[8]
There are a good number of particular solutions dating back to Schröder's original 1870 paper.[1]
The series expansion around a fixed point and the relevant convergence properties of the solution for the resulting orbit and its analyticity properties are cogently summarized bySzekeres.[9]Several of the solutions are furnished in terms ofasymptotic series, cf.Carleman matrix.
It is used to analyse discrete dynamical systems by finding a new coordinate system in which the system (orbit) generated byh(x) looks simpler, a mere dilation.
More specifically, a system for which a discrete unit time step amounts to x → h(x) can have its smooth orbit (or flow) reconstructed from the solution of the above Schröder's equation, its conjugacy equation.
That is,h(x) = Ψ−1(sΨ(x)) ≡h1(x).
In general,all of its functional iterates(itsregular iterationgroup, seeiterated function) are provided by theorbit
ht(x)=Ψ−1(stΨ(x)),{\displaystyle h_{t}(x)=\Psi ^{-1}{\big (}s^{t}\Psi (x){\big )},}
fortreal — not necessarily positive or integer. (Thus a fullcontinuous group.)
The set ofhn(x), i.e., of all positive integer iterates ofh(x)(semigroup) is called thesplinter(or Picard sequence) ofh(x).
However,all iterates(fractional, infinitesimal, or negative) ofh(x)are likewise specified through the coordinate transformationΨ(x)determined to solve Schröder's equation: a holographic continuous interpolation of the initial discrete recursionx→h(x)has been constructed;[10]in effect, the entireorbit.
For instance, thefunctional square rootish1/2(x) = Ψ−1(s1/2Ψ(x)), so thath1/2(h1/2(x)) =h(x), and so on.
For example,[11] special cases of the logistic map such as the chaotic case h(x) = 4x(1 − x) were already worked out by Schröder in his original article[1] (p. 306), with solution {\displaystyle \Psi (x)=(\arcsin {\sqrt {x}})^{2}}, s = 4, and hence orbit {\displaystyle h_{t}(x)=\sin ^{2}{\big (}2^{t}\arcsin {\sqrt {x}}{\big )}}.
In fact, this solution is seen to result as motion dictated by a sequence of switchback potentials,[12]V(x) ∝x(x− 1) (nπ+ arcsin√x)2, a generic feature of continuous iterates effected by Schröder's equation.
A nonchaotic case he also illustrated with his method, h(x) = 2x(1 − x), yields {\displaystyle \Psi (x)=-{\tfrac {1}{2}}\ln(1-2x)}, and hence {\displaystyle h_{t}(x)={\tfrac {1}{2}}{\big (}1-(1-2x)^{2^{t}}{\big )}}.
Likewise, for the Beverton–Holt model, h(x) = x/(2 − x), one readily finds[10] Ψ(x) = x/(1 − x), so that[13] {\displaystyle h_{t}(x)=\Psi ^{-1}{\big (}2^{-t}\Psi (x){\big )}={\frac {x}{2^{t}+(1-2^{t})x}}.}
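The Beverton–Holt case lends itself to a quick numerical check. A minimal Python sketch, using only the formulas in the text (Ψ(x) = x/(1 − x), s = h′(0) = 1/2), confirms that h_t(x) = Ψ⁻¹(sᵗΨ(x)) recovers h at t = 1 and yields a functional square root at t = 1/2:

```python
# Regular iteration of the Beverton-Holt map h(x) = x/(2 - x) via Schroeder's
# equation: Psi(x) = x/(1 - x) satisfies Psi(h(x)) = (1/2) Psi(x).
def h(x):
    return x / (2 - x)

def psi(x):
    return x / (1 - x)

def psi_inv(y):
    return y / (1 + y)

def h_t(t, x, s=0.5):
    """Continuous iterate h_t(x) = Psi^{-1}(s^t Psi(x))."""
    return psi_inv(s ** t * psi(x))

x = 0.3
assert abs(h_t(1, x) - h(x)) < 1e-12        # t = 1 recovers h itself
half = lambda u: h_t(0.5, u)
assert abs(half(half(x)) - h(x)) < 1e-12    # functional square root
assert abs(h_t(2, x) - h(h(x))) < 1e-12     # t = 2 is h composed with h
```

The same three checks pass for any x in [0, 1), since the closed-form orbit is exact rather than a numerical approximation.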
|
https://en.wikipedia.org/wiki/Schr%C3%B6der%27s_equation
|
In the theory of computation, a branch of theoretical computer science, a deterministic finite automaton (DFA)—also known as a deterministic finite acceptor (DFA), deterministic finite-state machine (DFSM), or deterministic finite-state automaton (DFSA)—is a finite-state machine that accepts or rejects a given string of symbols, by running through a state sequence uniquely determined by the string.[1] Deterministic refers to the uniqueness of the computation run. In search of the simplest models to capture finite-state machines, Warren McCulloch and Walter Pitts were among the first researchers to introduce a concept similar to finite automata in 1943.[2][3]
The figure illustrates a deterministic finite automaton using astate diagram. In this example automaton, there are three states: S0, S1, and S2(denoted graphically by circles). The automaton takes a finitesequenceof 0s and 1s as input. For each state, there is a transition arrow leading out to a next state for both 0 and 1. Upon reading a symbol, a DFA jumpsdeterministicallyfrom one state to another by following the transition arrow. For example, if the automaton is currently in state S0and the current input symbol is 1, then it deterministically jumps to state S1. A DFA has astart state(denoted graphically by an arrow coming in from nowhere) where computations begin, and asetofaccept states(denoted graphically by a double circle) which help define when a computation is successful.
A DFA is defined as an abstract mathematical concept, but is often implemented in hardware and software for solving various specific problems such aslexical analysisandpattern matching. For example, a DFA can model software that decides whether or not online user input such as email addresses are syntactically valid.[4]
DFAs have been generalized tonondeterministic finite automata(NFA)which may have several arrows of the same label starting from a state. Using thepowerset constructionmethod, every NFA can be translated to a DFA that recognizes the same language. DFAs, and NFAs as well, recognize exactly the set ofregular languages.[1]
A deterministic finite automatonMis a 5-tuple,(Q, Σ,δ,q0,F), consisting of
Letw=a1a2...anbe a string over the alphabetΣ. The automatonMaccepts the stringwif a sequence of states,r0,r1, ...,rn, exists inQwith the following conditions:
In words, the first condition says that the machine starts in the start stateq0. The second condition says that given each character of stringw, the machine will transition from state to state according to the transition functionδ. The last condition says that the machine acceptswif the last input ofwcauses the machine to halt in one of the accepting states. Otherwise, it is said that the automatonrejectsthe string. The set of strings thatMaccepts is thelanguagerecognizedbyMand this language is denoted byL(M).
A deterministic finite automaton without accept states and without a starting state is known as atransition systemorsemiautomaton.
For a more comprehensive introduction to the formal definition, see automata theory.
The following example is of a DFAM, with a binary alphabet, which requires that the input contains an even number of 0s.
M= (Q, Σ,δ,q0,F)where
The stateS1represents that there has been an even number of 0s in the input so far, whileS2signifies an odd number. A 1 in the input does not change the state of the automaton. When the input ends, the state will show whether the input contained an even number of 0s or not. If the input did contain an even number of 0s,Mwill finish in stateS1, an accepting state, so the input string will be accepted.
The language recognized byMis theregular languagegiven by theregular expression(1*) (0 (1*) 0 (1*))*, where*is theKleene star, e.g.,1*denotes any number (possibly zero) of consecutive ones.
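The automaton M translates directly into code. A minimal Python sketch (the dictionary encoding of δ is my own):

```python
# DFA M accepting binary strings with an even number of 0s.
# States: "S1" (even number of 0s so far, accepting), "S2" (odd number).
DELTA = {
    ("S1", "0"): "S2", ("S1", "1"): "S1",
    ("S2", "0"): "S1", ("S2", "1"): "S2",
}
START, ACCEPT = "S1", {"S1"}

def accepts(w):
    """Run M on w; accept iff the final state lies in ACCEPT."""
    state = START
    for symbol in w:
        state = DELTA[(state, symbol)]
    return state in ACCEPT

assert accepts("")          # zero 0s counts as even
assert accepts("11010")     # two 0s
assert not accepts("1101")  # one 0
```

As in the text, reading a 1 leaves the state unchanged, and each 0 toggles between S1 and S2.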
According to the above definition, deterministic finite automata are alwayscomplete: they define from each state a transition for each input symbol.
While this is the most common definition, some authors use the term deterministic finite automaton for a slightly different notion: an automaton that definesat mostone transition for each state and each input symbol; the transition function is allowed to bepartial.[5]When no transition is defined, such an automaton halts.
Alocal automatonis a DFA, not necessarily complete, for which all edges with the same label lead to a single vertex. Local automata accept the class oflocal languages, those for which membership of a word in the language is determined by a "sliding window" of length two on the word.[6][7]
AMyhill graphover an alphabetAis adirected graphwithvertex setAand subsets of vertices labelled "start" and "finish". The language accepted by a Myhill graph is the set of directed paths from a start vertex to a finish vertex: the graph thus acts as an automaton.[6]The class of languages accepted by Myhill graphs is the class of local languages.[8]
When the start state and accept states are ignored, a DFA ofnstates and an alphabet of sizekcan be seen as adigraphofnvertices in which all vertices havekout-arcs labeled1, ...,k(ak-out digraph). It is known that whenk≥ 2is a fixed integer, with high probability, the largeststrongly connected component(SCC) in such ak-out digraph chosen uniformly at random is of linear size and it can be reached by all vertices.[9]It has also been proven that ifkis allowed to increase asnincreases, then the whole digraph has a phase transition for strong connectivity similar toErdős–Rényi modelfor connectivity.[10]
In a random DFA, the maximum number of vertices reachable from one vertex is very close to the number of vertices in the largestSCCwith high probability.[9][11]This is also true for the largestinduced sub-digraphof minimum in-degree one, which can be seen as a directed version of1-core.[10]
If DFAs recognize the languages obtained by applying an operation to DFA-recognizable languages, then DFAs are said to be closed under the operation. The DFAs are closed under the following operations.
For each operation, an optimal construction with respect to the number of states has been determined instate complexityresearch.
Since DFAs areequivalenttonondeterministic finite automata(NFA), these closures may also be proved using closure properties of NFA.
A run of a given DFA can be seen as a sequence of compositions of a very general formulation of the transition function with itself. Here we construct that function.
For a given input symbola∈Σ{\displaystyle a\in \Sigma }, one may construct a transition functionδa:Q→Q{\displaystyle \delta _{a}:Q\rightarrow Q}by definingδa(q)=δ(q,a){\displaystyle \delta _{a}(q)=\delta (q,a)}for allq∈Q{\displaystyle q\in Q}. (This trick is calledcurrying.) From this perspective,δa{\displaystyle \delta _{a}}"acts" on a state in Q to yield another state. One may then consider the result offunction compositionrepeatedly applied to the various functionsδa{\displaystyle \delta _{a}},δb{\displaystyle \delta _{b}}, and so on. Given a pair of lettersa,b∈Σ{\displaystyle a,b\in \Sigma }, one may define a new functionδ^ab=δa∘δb{\displaystyle {\widehat {\delta }}_{ab}=\delta _{a}\circ \delta _{b}}, where∘{\displaystyle \circ }denotes function composition.
Clearly, this process may be recursively continued, giving the following recursive definition ofδ^:Q×Σ⋆→Q{\displaystyle {\widehat {\delta }}:Q\times \Sigma ^{\star }\rightarrow Q}:
δ^{\displaystyle {\widehat {\delta }}}is defined for all wordsw∈Σ∗{\displaystyle w\in \Sigma ^{*}}. A run of the DFA is a sequence of compositions ofδ^{\displaystyle {\widehat {\delta }}}with itself.
Repeated function composition forms amonoid. For the transition functions, this monoid is known as thetransition monoid, or sometimes thetransformation semigroup. The construction can also be reversed: given aδ^{\displaystyle {\widehat {\delta }}}, one can reconstruct aδ{\displaystyle \delta }, and so the two descriptions are equivalent.
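The curried functions δa and their composition can be made concrete in Python, reusing the even-number-of-0s automaton from the example above (names are illustrative):

```python
# Curried transition functions for the even-number-of-0s DFA: each input
# symbol becomes a function acting on the state set {"S1", "S2"}.
from functools import reduce

delta = {
    ("S1", "0"): "S2", ("S1", "1"): "S1",
    ("S2", "0"): "S1", ("S2", "1"): "S2",
}

def curried(a):
    """delta_a : Q -> Q, the action of the single letter a."""
    return lambda q: delta[(q, a)]

def delta_hat(q, w):
    """Extended transition function: apply the letter actions in reading order."""
    return reduce(lambda state, a: curried(a)(state), w, q)

assert delta_hat("S1", "1010") == "S1"  # two 0s: back to the even state
assert delta_hat("S1", "0") == "S2"
```

Each δa is an element of the transition monoid; delta_hat just composes those elements along the word, which is exactly the recursive definition of δ̂.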
DFAs are one of the most practical models of computation, since there is a trivial linear time, constant-space,online algorithmto simulate a DFA on a stream of input. Also, there are efficient algorithms to find a DFA recognizing:
Because DFAs can be reduced to acanonical form(minimal DFAs), there are also efficient algorithms to determine:
DFAs are equivalent in computing power to nondeterministic finite automata (NFAs). This is because, firstly, any DFA is also an NFA, so an NFA can do what a DFA can do. Conversely, given an NFA, using the powerset construction one can build a DFA that recognizes the same language as the NFA, although the DFA could have an exponentially larger number of states than the NFA.[15][16] However, even though NFAs are computationally equivalent to DFAs, the above-mentioned problems are not necessarily solved efficiently for NFAs. The non-universality problem for NFAs is PSPACE-complete, since there are small NFAs whose shortest rejected word has exponential length. A DFA is universal if and only if all states are final states, but this does not hold for NFAs. The equality, inclusion and minimization problems are also PSPACE-complete, since they require forming the complement of an NFA, which results in an exponential blow-up of size.[17]
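The powerset construction mentioned above can be sketched compactly in Python (the example NFA and all names are illustrative): each state of the constructed DFA is a set of NFA states, built breadth-first from the start set.

```python
# Subset (powerset) construction: determinize an NFA given as
# delta: (state, symbol) -> set of successor states.
from collections import deque

def determinize(alphabet, delta, start, accept):
    """Return (dfa_delta, dfa_start, dfa_accept) with frozenset states."""
    dfa_start = frozenset([start])
    dfa_delta, dfa_accept = {}, set()
    queue, seen = deque([dfa_start]), {dfa_start}
    while queue:
        S = queue.popleft()
        if S & accept:
            dfa_accept.add(S)
        for a in alphabet:
            T = frozenset(q2 for q in S for q2 in delta.get((q, a), ()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return dfa_delta, dfa_start, dfa_accept

# NFA accepting binary strings whose second-to-last symbol is 1 --
# a classic case where determinization multiplies the state count.
nfa_delta = {
    ("p", "0"): {"p"}, ("p", "1"): {"p", "q"},
    ("q", "0"): {"r"}, ("q", "1"): {"r"},
}
dd, ds, da = determinize("01", nfa_delta, "p", {"r"})

def run(w):
    S = ds
    for a in w:
        S = dd[(S, a)]
    return S in da
```

Because transitions are computed for every reachable subset before it is dequeued, the resulting DFA is complete over the reachable part of the powerset.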
On the other hand, finite-state automata are of strictly limited power in the languages they can recognize; many simple languages, including any problem that requires more than constant space to solve, cannot be recognized by a DFA. The classic example of a simply described language that no DFA can recognize is bracket orDyck language, i.e., the language that consists of properly paired brackets such as word "(()())". Intuitively, no DFA can recognize the Dyck language because DFAs are not capable of counting: a DFA-like automaton needs to have a state to represent any possible number of "currently open" parentheses, meaning it would need an unbounded number of states. Another simpler example is the language consisting of strings of the formanbnfor some finite but arbitrary number ofa's, followed by an equal number ofb's.[18]
Given a set ofpositivewordsS+⊂Σ∗{\displaystyle S^{+}\subset \Sigma ^{*}}and a set ofnegativewordsS−⊂Σ∗{\displaystyle S^{-}\subset \Sigma ^{*}}one can construct a DFA that accepts all words fromS+{\displaystyle S^{+}}and rejects all words fromS−{\displaystyle S^{-}}: this problem is calledDFA identification(synthesis, learning).
While some DFA consistent with the data can be constructed in linear time, the problem of identifying a DFA with the minimal number of states is NP-complete.[19] The first algorithm for minimal DFA identification was proposed by Trakhtenbrot and Barzdin[20] and is called the TB-algorithm.
However, the TB-algorithm assumes that all words over {\displaystyle \Sigma } up to a given length are contained in {\displaystyle S^{+}\cup S^{-}}.
Later, K. Lang proposed an extension of the TB-algorithm that does not use any assumptions aboutS+{\displaystyle S^{+}}andS−{\displaystyle S^{-}}, theTraxbaralgorithm.[21]However, Traxbar does not guarantee the minimality of the constructed DFA.
In his work[19]E.M. Gold also proposed a heuristic algorithm for minimal DFA identification.
Gold's algorithm assumes thatS+{\displaystyle S^{+}}andS−{\displaystyle S^{-}}contain acharacteristic setof the regular language; otherwise, the constructed DFA will be inconsistent either withS+{\displaystyle S^{+}}orS−{\displaystyle S^{-}}.
Other notable DFA identification algorithms include the RPNI algorithm,[22]the Blue-Fringe evidence-driven state-merging algorithm,[23]and Windowed-EDSM.[24]Another research direction is the application ofevolutionary algorithms: the smart state labeling evolutionary algorithm[25]allowed to solve a modified DFA identification problem in which the training data (setsS+{\displaystyle S^{+}}andS−{\displaystyle S^{-}}) isnoisyin the sense that some words are attributed to wrong classes.
Yet another step forward is due to the application of SAT solvers by Marijn J. H. Heule and S. Verwer: the minimal DFA identification problem is reduced to deciding the satisfiability of a Boolean formula.[26] The main idea is to build an augmented prefix-tree acceptor (a trie containing all input words with corresponding labels) based on the input sets, and to reduce the problem of finding a DFA with {\displaystyle C} states to coloring the tree vertices with {\displaystyle C} colors in such a way that when vertices with one color are merged into one state, the generated automaton is deterministic and complies with {\displaystyle S^{+}} and {\displaystyle S^{-}}.
Though this approach allows finding the minimal DFA, it suffers from exponential blow-up of execution time when the size of input data increases.
Therefore, Heule and Verwer's initial algorithm has later been augmented with making several steps of the EDSM algorithm prior to SAT solver execution: the DFASAT algorithm.[27]This allows reducing the search space of the problem, but leads to loss of the minimality guarantee.
Another way of reducing the search space has been proposed by Ulyantsev et al.[28] by means of new symmetry-breaking predicates based on the breadth-first search algorithm: the sought DFA's states are constrained to be numbered according to the BFS algorithm launched from the initial state. This approach reduces the search space by a factor of {\displaystyle C!} by eliminating isomorphic automata.
Read-only right-moving Turing machines are a particular type of Turing machine whose head only moves right; these are almost exactly equivalent to DFAs.[29] The definition based on a singly infinite tape is a 7-tuple
where
The machine always accepts a regular language. There must exist at least one element of the setF(aHALTstate) for the language to be nonempty.
|
https://en.wikipedia.org/wiki/Deterministic_finite_automaton
|
The MD2 Message-Digest Algorithm is a cryptographic hash function developed by Ronald Rivest in 1989.[2] The algorithm is optimized for 8-bit computers. MD2 is specified in IETF RFC 1319.[3] The "MD" in MD2 stands for "Message Digest".
Even though MD2 is not yet fully compromised, the IETF retired MD2 to "historic" status in 2011, citing "signs of weakness". It is deprecated in favor ofSHA-256and other strong hashing algorithms.[4]
Nevertheless, as of 2014[update], it remained in use inpublic key infrastructuresas part ofcertificatesgenerated with MD2 andRSA.[citation needed]
The 128-bit hash value of any message is formed by padding it to a multiple of the block length (128 bits or 16bytes) and adding a 16-bytechecksumto it. For the actual calculation, a 48-byte auxiliary block and a 256-byteS-tableare used. The constants were generated by shuffling the integers 0 through 255 using a variant ofDurstenfeld's algorithmwith apseudorandom number generatorbased on decimal digits ofπ(pi)[3][5](seenothing up my sleeve number). The algorithm runs through a loop where it permutes each byte in the auxiliary block 18 times for every 16 input bytes processed. Once all of the blocks of the (lengthened) message have been processed, the first partial block of the auxiliary block becomes the hash value of the message.
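The block structure just described can be illustrated with a short Python sketch. Caution: to stay self-contained it uses a stand-in permutation for the S-table rather than the π-derived table of RFC 1319, so its outputs are not real MD2 digests; only the padding, checksum, and 18-round mixing structure match the description above.

```python
# Structural sketch of MD2 (RFC 1319): pad to a 16-byte multiple, append a
# 16-byte checksum, then absorb each block into a 48-byte auxiliary state
# with 18 mixing rounds. CAUTION: S below is a stand-in permutation of
# 0..255, NOT the pi-derived table of the RFC, so digests are not real MD2.
S = [(41 * i + 7) % 256 for i in range(256)]  # placeholder S-table
assert sorted(S) == list(range(256))          # any permutation works here

def md2_like(msg: bytes) -> bytes:
    pad = 16 - (len(msg) % 16)
    msg = msg + bytes([pad] * pad)            # RFC 1319 padding rule
    # Checksum pass over the padded message.
    C, L = bytearray(16), 0
    for i in range(0, len(msg), 16):
        for j in range(16):
            C[j] ^= S[msg[i + j] ^ L]
            L = C[j]
    msg += bytes(C)
    # Digest pass: 48-byte auxiliary block, 18 rounds per 16-byte block.
    X = bytearray(48)
    for i in range(0, len(msg), 16):
        for j in range(16):
            X[16 + j] = msg[i + j]
            X[32 + j] = X[16 + j] ^ X[j]
        t = 0
        for r in range(18):
            for k in range(48):
                X[k] ^= S[t]
                t = X[k]
            t = (t + r) % 256
    return bytes(X[:16])                      # first 16 bytes are the digest
```

Swapping in the genuine S-table from RFC 1319 would turn this sketch into the real algorithm; the control flow is otherwise the one the text describes.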
The S-table values in hex are:
The 128-bit (16-byte) MD2 hashes (also termedmessage digests) are typically represented as 32-digithexadecimalnumbers. The following demonstrates a 43-byteASCIIinput and the corresponding MD2 hash:
As the result of theavalanche effectin MD2, even a small change in the input message will (with overwhelming probability) result in a completely different hash. For example, changing the letterdtocin the message results in:
The hash of the zero-length string is:
Rogier and Chauvaud presented in 1995[6] collisions of MD2's compression function, although they were unable to extend the attack to the full MD2. A description of these collisions was published in 1997.[7]
In 2004, MD2 was shown to be vulnerable to apreimage attackwithtime complexityequivalent to 2104applications of the compression function.[8]The author concludes, "MD2 can no longer be considered a secure one-way hash function".
In 2008, the preimage attack on MD2 was further improved, to a time complexity of 273 compression function evaluations and memory requirements of 273 message blocks.[9]
In 2009, MD2 was shown to be vulnerable to acollision attackwithtime complexityof 263.3compression function evaluations and memory requirements of 252hash values. This is slightly better than thebirthday attackwhich is expected to take 265.5compression function evaluations.[10]
In 2009, security updates were issued disabling MD2 inOpenSSL,GnuTLS, andNetwork Security Services.[11]
|
https://en.wikipedia.org/wiki/MD2_(cryptography)
|
In mathematics, the notions of an absolutely monotonic function and a completely monotonic function are two very closely related concepts. Both imply very strong monotonicity properties, and both types of functions have derivatives of all orders. An absolutely monotonic function has non-negative derivatives of all orders throughout its domain of definition, so the function and all of its derivatives are monotonically increasing there. A completely monotonic function has derivatives that are alternately non-negative and non-positive in its domain of definition, so the function and its successive derivatives are alternately monotonically decreasing and monotonically increasing.
Such functions were first studied by S. Bernshtein in 1914 and the terminology is also due to him.[1][2][3]There are several other related notions like the concepts of almost completely monotonic function, logarithmically completely monotonic function, strongly logarithmically completely monotonic function, strongly completely monotonic function and almost strongly completely monotonic function.[4][5]Another related concept is that of acompletely/absolutely monotonic sequence. This notion was introduced by Hausdorff in 1921.
The notions of completely and absolutely monotone function/sequence play an important role in several areas of mathematics. For example, in classical analysis they occur in the proof of the positivity of integrals involving Bessel functions or the positivity of Cesàro means of certain
Jacobi series.[6]Such functions occur in other areas of mathematics such as probability theory, numerical analysis, and elasticity.[7]
A real valued functionf(x){\displaystyle f(x)}defined over an intervalI{\displaystyle I}in the real line is called an absolutely monotonic function if it has derivativesf(n)(x){\displaystyle f^{(n)}(x)}of all ordersn=0,1,2,…{\displaystyle n=0,1,2,\ldots }andf(n)(x)≥0{\displaystyle f^{(n)}(x)\geq 0}for allx{\displaystyle x}inI{\displaystyle I}.[1]The functionf(x){\displaystyle f(x)}is called a completely monotonic function if(−1)nf(n)(x)≥0{\displaystyle (-1)^{n}f^{(n)}(x)\geq 0}for allx{\displaystyle x}inI{\displaystyle I}.[1]
The two notions are mutually related. The functionf(x){\displaystyle f(x)}is completely monotonic if and only iff(−x){\displaystyle f(-x)}is absolutely monotonic on−I{\displaystyle -I}where−I{\displaystyle -I}the interval obtained by reflectingI{\displaystyle I}with respect to the origin. (Thus, ifI{\displaystyle I}is the interval(a,b){\displaystyle (a,b)}then−I{\displaystyle -I}is the interval(−b,−a){\displaystyle (-b,-a)}.)
In applications, the interval on the real line that is usually considered is the closed-open right half of the real line, that is, the interval[0,∞){\displaystyle [0,\infty )}.
The following functions are absolutely monotonic in the specified regions.[8]: 142–143
A sequence {\displaystyle \{\mu _{n}\}_{n=0}^{\infty }} is called an absolutely monotonic sequence if its elements are non-negative and its successive differences are all non-negative, that is, if {\displaystyle \Delta ^{k}\mu _{n}\geq 0} for {\displaystyle n,k=0,1,2,\ldots },
whereΔkμn=∑m=0k(−1)m(km)μn+k−m{\displaystyle \Delta ^{k}\mu _{n}=\sum _{m=0}^{k}(-1)^{m}{k \choose m}\mu _{n+k-m}}.
A sequence {\displaystyle \{\mu _{n}\}_{n=0}^{\infty }} is called a completely monotonic sequence if its elements are non-negative and its successive differences are alternately non-positive and non-negative,[8]: 101 that is, if {\displaystyle (-1)^{k}\Delta ^{k}\mu _{n}\geq 0} for {\displaystyle n,k=0,1,2,\ldots }.
The sequences{1n+1}0∞{\displaystyle \left\{{\frac {1}{n+1}}\right\}_{0}^{\infty }}and{cn}0∞{\displaystyle \{c^{n}\}_{0}^{\infty }}for0≤c≤1{\displaystyle 0\leq c\leq 1}are completely monotonic sequences.
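These claims are easy to verify mechanically. A small Python check, using exact rational arithmetic and the difference operator defined above, confirms the sign condition (−1)^k Δ^k μ_n ≥ 0 for small k and n:

```python
# Check complete monotonicity of mu_n = 1/(n+1) and mu_n = c^n (0 <= c <= 1)
# via the forward-difference formula from the text.
from fractions import Fraction
from math import comb

def diff(mu, k, n):
    """Delta^k mu_n = sum_{m=0}^{k} (-1)^m C(k, m) mu_{n+k-m}."""
    return sum((-1) ** m * comb(k, m) * mu(n + k - m) for m in range(k + 1))

mu = lambda n: Fraction(1, n + 1)
for k in range(8):
    for n in range(8):
        assert (-1) ** k * diff(mu, k, n) >= 0

# The geometric sequence c^n has Delta^k c^n = c^n (c-1)^k, so the
# alternating signs are immediate; the numeric check agrees.
c = Fraction(1, 3)
geo = lambda n: c ** n
assert all((-1) ** k * diff(geo, k, n) >= 0
           for k in range(8) for n in range(8))
```

Using Fraction avoids the floating-point cancellation that plagues high-order finite differences.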
Both the extensions and applications of the theory of absolutely monotonic functions derive from theorems.
The following is a selection from the large body of literature on absolutely/completely monotonic functions/sequences.
|
https://en.wikipedia.org/wiki/Absolutely_and_completely_monotonic_functions_and_sequences
|
In mathematics, Farkas' lemma is a solvability theorem for a finite system of linear inequalities. It was originally proven by the Hungarian mathematician Gyula Farkas.[1] Farkas' lemma is the key result underpinning linear programming duality and has played a central role in the development of mathematical optimization (alternatively, mathematical programming). It is used amongst other things in the proof of the Karush–Kuhn–Tucker theorem in nonlinear programming.[2] Remarkably, in the area of the foundations of quantum theory, the lemma also underlies the complete set of Bell inequalities in the form of necessary and sufficient conditions for the existence of a local hidden-variable theory, given data from any specific set of measurements.[3]
Generalizations of Farkas' lemma concern the solvability theorem for convex inequalities,[4] i.e., infinite systems of linear inequalities. Farkas' lemma belongs to a class of statements called "theorems of the alternative": a theorem stating that exactly one of two systems has a solution.[5]
There are a number of slightly different (but equivalent) formulations of the lemma in the literature. The one given here is due to Gale, Kuhn and Tucker (1951).[6]
Farkas' lemma—LetA∈Rm×n{\displaystyle \mathbf {A} \in \mathbb {R} ^{m\times n}}andb∈Rm.{\displaystyle \mathbf {b} \in \mathbb {R} ^{m}.}Then exactly one of the following two assertions is true:
Here, the notationx≥0{\displaystyle \mathbf {x} \geq 0}means that all components of the vectorx{\displaystyle \mathbf {x} }are nonnegative.
Let m, n = 2, {\displaystyle \mathbf {A} ={\begin{bmatrix}6&4\\3&0\end{bmatrix}},} and {\displaystyle \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\end{bmatrix}}.} The lemma says that exactly one of the following two statements must be true (depending on b1 and b2):
Here is a proof of the lemma in this special case:
Consider theclosedconvex coneC(A){\displaystyle C(\mathbf {A} )}spanned by the columns ofA; that is,
Observe thatC(A){\displaystyle C(\mathbf {A} )}is the set of the vectorsbfor which the first assertion in the statement of Farkas' lemma holds. On the other hand, the vectoryin the second assertion is orthogonal to ahyperplanethat separatesbandC(A).{\displaystyle C(\mathbf {A} ).}The lemma follows from the observation thatbbelongs toC(A){\displaystyle C(\mathbf {A} )}if and only ifthere is no hyperplane that separates it fromC(A).{\displaystyle C(\mathbf {A} ).}
More precisely, leta1,…,an∈Rm{\displaystyle \mathbf {a} _{1},\dots ,\mathbf {a} _{n}\in \mathbb {R} ^{m}}denote the columns ofA. In terms of these vectors, Farkas' lemma states that exactly one of the following two statements is true:
The sumsx1a1+⋯+xnan{\displaystyle x_{1}\mathbf {a} _{1}+\dots +x_{n}\mathbf {a} _{n}}with nonnegative coefficientsx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}form the cone spanned by the columns ofA. Therefore, the first statement tells thatbbelongs toC(A).{\displaystyle C(\mathbf {A} ).}
The second statement tells that there exists a vectorysuch that the angle ofywith the vectorsaiis at most 90°, while the angle ofywith the vectorbis more than 90°. The hyperplane normal to this vector has the vectorsaion one side and the vectorbon the other side. Hence, this hyperplane separates the cone spanned bya1,…,an{\displaystyle \mathbf {a} _{1},\dots ,\mathbf {a} _{n}}from the vectorb.
For example, letn,m= 2,a1= (1, 0)T, anda2= (1, 1)T. The convex cone spanned bya1anda2can be seen as a wedge-shaped slice of the first quadrant in thexyplane. Now, supposeb= (0, 1). Certainly,bis not in the convex conea1x1+a2x2. Hence, there must be a separating hyperplane. Lety= (1, −1)T. We can see thata1·y= 1,a2·y= 0, andb·y= −1. Hence, the hyperplane with normalyindeed separates the convex conea1x1+a2x2fromb.
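The worked example can be verified with a few lines of plain Python (no solver needed); the grid search at the end is only a sanity check, not a proof:

```python
# Numeric check of the example: a1 = (1, 0), a2 = (1, 1), b = (0, 1),
# separating vector y = (1, -1).
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a1, a2, b, y = (1, 0), (1, 1), (0, 1), (1, -1)

# Second alternative of Farkas' lemma: y makes an angle of at most 90
# degrees with each generator a_i but more than 90 degrees with b.
assert dot(a1, y) >= 0 and dot(a2, y) >= 0 and dot(b, y) < 0

# Hence no nonnegative x1, x2 satisfy x1*a1 + x2*a2 = b: taking inner
# products with y would give 0 <= x1*(a1.y) + x2*(a2.y) = b.y = -1.
# A coarse grid search over nonnegative combinations agrees:
for i in range(200):
    for j in range(200):
        x1, x2 = i / 10, j / 10
        v = (x1 * a1[0] + x2 * a2[0], x1 * a1[1] + x2 * a2[1])
        assert v != b
```

Geometrically, every nonnegative combination lands at (x1 + x2, x2) with first coordinate at least the second, so (0, 1) can never be reached; y is the normal of the separating hyperplane.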
A particularly suggestive and easy-to-remember version is the following: if a set of linear inequalities has no solution, then a contradiction can be produced from it by linear combination with nonnegative coefficients. In formulas: ifAx≤b{\displaystyle \mathbf {Ax} \leq \mathbf {b} }is unsolvable theny⊤A=0,{\displaystyle \mathbf {y} ^{\top }\mathbf {A} =0,}y⊤b=−1,{\displaystyle \mathbf {y} ^{\top }\mathbf {b} =-1,}y≥0{\displaystyle \mathbf {y} \geq 0}has a solution.[7]Note thaty⊤A{\displaystyle \mathbf {y} ^{\top }\mathbf {A} }is a combination of the left-hand sides,y⊤b{\displaystyle \mathbf {y} ^{\top }\mathbf {b} }a combination of the right-hand side of the inequalities. Since the positive combination produces a zero vector on the left and a −1 on the right, the contradiction is apparent.
Thus, Farkas' lemma can be viewed as a theorem of logical completeness: Ax ≤ b is a set of "axioms", the linear combinations are the "derivation rules", and the lemma says that, if the set of axioms is inconsistent, then it can be refuted using the derivation rules.[8]: 92–94
Farkas' lemma implies that the decision problem "Given a system of linear equations, does it have a non-negative solution?" is in the intersection of NP and co-NP. This is because, according to the lemma, both a "yes" answer and a "no" answer have a proof that can be verified in polynomial time. The problems in NP ∩ co-NP are also called well-characterized problems. It is a long-standing open question whether NP ∩ co-NP is equal to P. In particular, the question of whether a system of linear equations has a non-negative solution was not known to be in P until it was proved using the ellipsoid method.[9]: 25
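This well-characterization can be made concrete: both certificates are just vectors, checked by arithmetic that runs in time polynomial in the input size. A sketch (my own illustration; the function names are not standard):

```python
def verify_yes(A, b, x):
    """'Yes' certificate: a vector x >= 0 with A x = b."""
    if any(xi < 0 for xi in x):
        return False
    return all(sum(aij * xj for aij, xj in zip(row, x)) == bi
               for row, bi in zip(A, b))

def verify_no(A, b, y):
    """'No' certificate (Farkas): a vector y with y^T A >= 0 and y^T b < 0."""
    m, n = len(A), len(A[0])
    yTA = [sum(y[i] * A[i][j] for i in range(m)) for j in range(n)]
    yTb = sum(yi * bi for yi, bi in zip(y, b))
    return all(c >= 0 for c in yTA) and yTb < 0

# x1 + x2 = 2 has the nonnegative solution x = (1, 1)...
assert verify_yes([[1, 1]], [2], [1, 1])
# ...while x1 + x2 = -1 has none: y = (1,) certifies this.
assert verify_no([[1, 1]], [-1], [1])
```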
Farkas' lemma has several variants with different sign constraints (the first one is the original version):[8]: 92
The latter variant is mentioned for completeness; it is not strictly a "Farkas lemma", since it contains only equalities. Its proof is an exercise in linear algebra.
There are also Farkas-like lemmas for integer programs.[9]: 12–14 For systems of equations, the lemma is simple:
For systems of inequalities, the lemma is much more complicated. It is based on the following two rules of inference:
The lemma says that:
The variants are summarized in the table below.
Generalized Farkas' lemma: Let A ∈ R^{m×n} and b ∈ R^m, let S be a closed convex cone in R^n, and let the dual cone of S be S* = {z ∈ R^n : z^T x ≥ 0 for all x ∈ S}. If the convex cone C(A) = {Ax : x ∈ S} is closed, then exactly one of the following two statements is true:
Generalized Farkas' lemma can be interpreted geometrically as follows: either a vector is in a given closed convex cone, or there exists a hyperplane separating the vector from the cone; there are no other possibilities. The closedness condition is necessary; see Separation theorem I in Hyperplane separation theorem. For the original Farkas' lemma, S is the nonnegative orthant R^n_+, hence the closedness condition holds automatically. More generally, the closedness condition holds for any polyhedral convex cone, i.e., whenever there exists a B ∈ R^{n×k} such that S = {Bx : x ∈ R^k_+}. In convex optimization, various kinds of constraint qualification, e.g. Slater's condition, are responsible for the closedness of the underlying convex cone C(A).
By setting S = R^n and S* = {0} in the generalized Farkas' lemma, we obtain the following corollary about the solvability of a finite system of linear equalities:
Corollary: Let A ∈ R^{m×n} and b ∈ R^m. Then exactly one of the following two statements is true:
Farkas' lemma can be varied into many further theorems of the alternative by simple modifications,[5] such as Gordan's theorem: either Ax < 0 has a solution x, or A^T y = 0 has a nonzero solution y with y ≥ 0.
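Both branches of Gordan's alternative can be exhibited on small matrices (my own illustration):

```python
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

# Case 1: A = I (2x2). Here x = (-1, -1) solves A x < 0.
A1 = [[1, 0], [0, 1]]
assert all(v < 0 for v in matvec(A1, [-1, -1]))

# Case 2: A has rows (1,) and (-1,). Then A x < 0 would require both
# x < 0 and -x < 0, which is impossible, so the alternative must hold:
# y = (1, 1) is nonzero, nonnegative, and satisfies A^T y = 0.
A2 = [[1], [-1]]
y = [1, 1]
ATy = [sum(A2[i][j] * y[i] for i in range(2)) for j in range(1)]
assert ATy == [0] and any(yi != 0 for yi in y) and all(yi >= 0 for yi in y)
```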
Common applications of Farkas' lemma include proving the strong duality theorem associated with linear programming and the Karush–Kuhn–Tucker conditions. An extension of Farkas' lemma can be used to analyze the strong duality conditions for, and construct the dual of, a semidefinite program. It is sufficient to prove the existence of the Karush–Kuhn–Tucker conditions using the Fredholm alternative, but for the conditions to be necessary, one must apply von Neumann's minimax theorem to show that the equations derived by Cauchy are not violated.
This is used in Dill's Reluplex method for verifying deep neural networks.
https://en.wikipedia.org/wiki/Farkas%27_lemma
Quasi-set theory is a formal mathematical theory for dealing with collections of objects, some of which may be indistinguishable from one another. Quasi-set theory is mainly motivated by the assumption that certain objects treated in quantum physics are indistinguishable and do not have individuality.
The American Mathematical Society sponsored a 1974 meeting to evaluate the resolution and consequences of the 23 problems Hilbert proposed in 1900. An outcome of that meeting was a new list of mathematical problems, the first of which, due to Manin (1976, p. 36), questioned whether classical set theory was an adequate paradigm for treating collections of indistinguishable elementary particles in quantum mechanics. He suggested that such collections cannot be sets in the usual sense, and that the study of such collections required a "new language".
The use of the term quasi-set follows a suggestion in da Costa's 1980 monograph Ensaio sobre os Fundamentos da Lógica (see da Costa and Krause 1994), in which he explored possible semantics for what he called "Schrödinger Logics". In these logics, the concept of identity is restricted to some objects of the domain, motivated by Schrödinger's claim that the concept of identity does not make sense for elementary particles (Schrödinger 1952). Thus, in order to provide a semantics that fits the logic, da Costa submitted that "a theory of quasi-sets should be developed", encompassing "standard sets" as particular cases, yet da Costa did not develop this theory in any concrete way. To the same end, and independently of da Costa, Dalla Chiara and di Francia (1993) proposed a theory of quasets to enable a semantic treatment of the language of microphysics. The first quasi-set theory was proposed by D. Krause in his PhD thesis in 1990 (see Krause 1992). A related physics theory, based on the logic of adding fundamental indistinguishability to equality and inequality, was developed and elaborated independently in the book The Theory of Indistinguishables by A. F. Parker-Rhodes.[1]
We now expound Krause's (1992) axiomatic theory 𝔔, the first quasi-set theory; other formulations and improvements have since appeared. For an updated paper on the subject, see French and Krause (2010). Krause builds on the set theory ZFU, consisting of Zermelo–Fraenkel set theory with an ontology extended to include two kinds of urelements:
Quasi-sets (q-sets) are collections resulting from applying axioms, very similar to those for ZFU, to a basic domain composed of m-atoms, M-atoms, and aggregates of these. The axioms of 𝔔 include equivalents of extensionality, in a weaker form termed the "weak extensionality axiom"; axioms asserting the existence of the empty set, unordered pair, union set, and power set; the axiom of separation; an axiom stating that the image of a q-set under a q-function is also a q-set; and q-set equivalents of the axioms of infinity, regularity, and choice. Q-set theories based on other set-theoretical frameworks are, of course, possible.
𝔔 has a primitive concept of quasi-cardinal, governed by eight additional axioms, intuitively standing for the quantity of objects in a collection. The quasi-cardinal of a quasi-set is not defined in the usual sense (by means of ordinals) because the m-atoms are assumed (absolutely) indistinguishable. Furthermore, it is possible to define a translation from the language of ZFU into the language of 𝔔 in such a way that there is a 'copy' of ZFU in 𝔔. In this copy, all the usual mathematical concepts can be defined, and the 'sets' (in reality, the '𝔔-sets') turn out to be those q-sets whose transitive closure contains no m-atoms.
In 𝔔 there may exist q-sets, called "pure" q-sets, whose elements are all m-atoms, and the axiomatics of 𝔔 provides the grounds for saying that, for certain pure q-sets, nothing in 𝔔 distinguishes their elements from one another. Within the theory, the idea that there is more than one entity in x is expressed by an axiom stating that the power quasi-set of x has quasi-cardinal 2^qc(x), where qc(x) is the quasi-cardinal of x (a cardinal obtained in the 'copy' of ZFU just mentioned).
What exactly does this mean? Consider the 2p level of a sodium atom, in which there are six indiscernible electrons. Even so, physicists reason as if there are in fact six entities in that level, and not only one. In this way, by saying that the quasi-cardinal of the power quasi-set of x is 2^qc(x) (supposing qc(x) = 6 to follow the example), we are not excluding the hypothesis that there can exist six subquasi-sets of x that are 'singletons', although we cannot distinguish among them. Whether or not there are six elements in x is something that cannot be decided by the theory (although the notion is compatible with it). If the theory could answer this question, the elements of x would be individualized and hence counted, contradicting the basic assumption that they cannot be distinguished.
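The arithmetic behind the axiom is worth spelling out (my own illustration of the example above, not part of the theory):

```python
from math import comb

qc_x = 6  # quasi-cardinal of x: the six 2p electrons

# The axiom assigns the power quasi-set the quasi-cardinal 2^qc(x)...
assert 2 ** qc_x == 64
# ...which matches counting subsets of each size as if the elements were
# individuals: sum over k of C(6, k) = 2^6.
assert sum(comb(qc_x, k) for k in range(qc_x + 1)) == 64
# By contrast, if subcollections could be told apart only by their size,
# there would be just qc(x) + 1 = 7 distinguishable ones -- exactly the
# question the theory deliberately leaves undecided.
assert qc_x + 1 == 7
```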
In other words, we may consistently (within the axiomatics of 𝔔) reason as if there are six entities in x, but x must be regarded as a collection whose elements cannot be discerned as individuals. Using quasi-set theory, we can express some facts of quantum physics without introducing symmetry conditions (Krause et al. 1999, 2005). As is well known, in order to express indistinguishability, the particles are deemed to be individuals, say by attaching them to coordinates or to adequate functions/vectors like |ψ⟩. Thus, given two quantum systems labeled |ψ1⟩ and |ψ2⟩ at the outset, we need to consider a function like |ψ12⟩ = |ψ1⟩|ψ2⟩ ± |ψ2⟩|ψ1⟩ (up to certain constants), which keeps the quanta indistinguishable under permutations; the probability density of the joint system is independent of which quantum is #1 and which is #2. (Note that precision requires that we talk of "two" quanta without distinguishing them, which is impossible in conventional set theories.) In 𝔔, we can dispense with this "identification" of the quanta; for details, see Krause et al. (1999, 2005) and French and Krause (2006).
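The permutation invariance of the symmetrized state is easy to verify numerically; a toy sketch with real amplitudes on a two-level system (my own illustration, not from the source):

```python
import math

def kron(u, v):
    """Tensor product of two 2-dimensional state vectors."""
    return [ui * vj for ui in u for vj in v]

def combine(u, v, sign):
    return [a + sign * b for a, b in zip(u, v)]

# Two (unnormalized) single-particle states.
psi1 = [1.0, 2.0]
psi2 = [3.0, 1.0]

for sign in (+1.0, -1.0):  # symmetric (bosons) / antisymmetric (fermions)
    psi12 = combine(kron(psi1, psi2), kron(psi2, psi1), sign)
    # Probability density at basis state (i, j) equals that at (j, i):
    # nothing in the joint state says which particle is "#1".
    dens = [[abs(psi12[2 * i + j]) ** 2 for j in range(2)] for i in range(2)]
    assert all(math.isclose(dens[i][j], dens[j][i])
               for i in range(2) for j in range(2))
```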
Quasi-set theory is a way to operationalize Heinz Post's (1963) claim that quanta should be deemed indistinguishable "right from the start."
https://en.wikipedia.org/wiki/Quasi-set_theory
Tor[6] is a free overlay network for enabling anonymous communication. Built on free and open-source software and more than seven thousand volunteer-operated relays worldwide, it lets users have their Internet traffic routed via a random path through the network.[7][8]
Using Tor makes it more difficult to trace a user's Internet activity by preventing any single point on the Internet (other than the user's device) from being able to view both where traffic originated and where it is ultimately going at the same time.[9] This conceals a user's location and usage from anyone performing network surveillance or traffic analysis from any such point, protecting the user's freedom and ability to communicate confidentially.[10]
The core principle of Tor, known as onion routing, was developed in the mid-1990s by United States Naval Research Laboratory employees, mathematician Paul Syverson and computer scientists Michael G. Reed and David Goldschlag, to protect American intelligence communications online.[11] Onion routing is implemented by means of encryption in the application layer of the communication protocol stack, nested like the layers of an onion. The alpha version of Tor, developed by Syverson and computer scientists Roger Dingledine and Nick Mathewson and then called The Onion Routing project (which was later given the acronym "Tor"), was launched on 20 September 2002.[12][13] The first public release occurred a year later.[14]
In 2004, the Naval Research Laboratory released the code for Tor under a free license, and the Electronic Frontier Foundation (EFF) began funding Dingledine and Mathewson to continue its development.[12] In 2006, Dingledine, Mathewson, and five others founded The Tor Project, a Massachusetts-based 501(c)(3) research-education nonprofit organization responsible for maintaining Tor. The EFF acted as The Tor Project's fiscal sponsor in its early years, and early financial supporters included the U.S. Bureau of Democracy, Human Rights, and Labor and International Broadcasting Bureau, Internews, Human Rights Watch, the University of Cambridge, Google, and Netherlands-based Stichting NLnet.[15][16]
Over the course of its existence, various Tor vulnerabilities have been discovered and occasionally exploited. Attacks against Tor are an active area of academic research[17][18] that is welcomed by The Tor Project itself.[19]
Tor enables its users to surf the Internet, chat, and send instant messages anonymously, and is used by a wide variety of people for both licit and illicit purposes.[22] Tor has, for example, been used by criminal enterprises, hacktivism groups, and law enforcement agencies at cross purposes, sometimes simultaneously;[23][24] likewise, agencies within the U.S. government variously fund Tor (the U.S. State Department, the National Science Foundation, and – through the Broadcasting Board of Governors, which itself partially funded Tor until October 2012 – Radio Free Asia) and seek to subvert it.[25][11] Tor was one of a dozen circumvention tools, alongside Ultrasurf, Hotspot Shield, and Freegate, evaluated by a Freedom House-funded report based on user experience from China in 2010.[26]
Tor is not meant to completely solve the issue of anonymity on the web; it is not designed to erase tracking entirely but rather to reduce the likelihood that sites can trace actions and data back to the user.[27]
Tor can be used for both legal and illegal activities: for privacy protection or censorship circumvention,[28] but also for the distribution of child abuse content, drug sales, or malware distribution.[29]
Tor has been described by The Economist, in relation to Bitcoin and Silk Road, as being "a dark corner of the web".[30] It has been targeted by the American National Security Agency and the British GCHQ signals intelligence agencies, albeit with marginal success,[25] and more successfully by the British National Crime Agency in its Operation Notarise.[31] At the same time, GCHQ has been using a tool named "Shadowcat" for "end-to-end encrypted access to VPS over SSH using the Tor network".[32][33] Tor can be used for anonymous defamation, unauthorized news leaks of sensitive information, copyright infringement, distribution of illegal sexual content,[34][35][36] selling controlled substances,[37] weapons, and stolen credit card numbers,[38] money laundering,[39] bank fraud,[40] credit card fraud, identity theft, and the exchange of counterfeit currency;[41] the black market utilizes the Tor infrastructure, at least in part, in conjunction with Bitcoin.[23] It has also been used to brick IoT devices.[42]
In its complaint against Ross William Ulbricht of Silk Road, the US Federal Bureau of Investigation acknowledged that Tor has "known legitimate uses".[43][44] According to CNET, Tor's anonymity function is "endorsed by the Electronic Frontier Foundation (EFF) and other civil liberties groups as a method for whistleblowers and human rights workers to communicate with journalists".[45] EFF's Surveillance Self-Defense guide includes a description of where Tor fits in a larger strategy for protecting privacy and anonymity.[46]
In 2014, the EFF's Eva Galperin told Businessweek that "Tor's biggest problem is press. No one hears about that time someone wasn't stalked by their abuser. They hear how somebody got away with downloading child porn."[47]
The Tor Project states that Tor users include "normal people" who wish to keep their Internet activities private from websites and advertisers, people concerned about cyber-spying, and users who are evading censorship such as activists, journalists, and military professionals. In November 2013, Tor had about four million users.[48] According to the Wall Street Journal, in 2012 about 14% of Tor's traffic connected from the United States, with people in "Internet-censoring countries" as its second-largest user base.[49] Tor is increasingly used by victims of domestic violence and the social workers and agencies that assist them, even though shelter workers may or may not have had professional training on cyber-security matters.[50] Properly deployed, however, it precludes digital stalking, which has increased due to the prevalence of digital media in contemporary online life.[51] Along with SecureDrop, Tor is used by news organizations such as The Guardian, The New Yorker, ProPublica, and The Intercept to protect the privacy of whistleblowers.[52]
In March 2015, the Parliamentary Office of Science and Technology released a briefing which stated that "There is widespread agreement that banning online anonymity systems altogether is not seen as an acceptable policy option in the U.K." and that "Even if it were, there would be technical challenges." The report further noted that Tor "plays only a minor role in the online viewing and distribution of indecent images of children" (due in part to its inherent latency); its usage by the Internet Watch Foundation, the utility of its onion services for whistleblowers, and its circumvention of the Great Firewall of China were touted.[53]
Tor's executive director, Andrew Lewman, also said in August 2014 that agents of the NSA and the GCHQ have anonymously provided Tor with bug reports.[54]
The Tor Project's FAQ offers supporting reasons for the EFF's endorsement:
Criminals can already do bad things. Since they're willing to break laws, they already have lots of options available that provide better privacy than Tor provides...
Tor aims to provide protection for ordinary people who want to follow the law. Only criminals have privacy right now, and we need to fix that...
So yes, criminals could in theory use Tor, but they already have better options, and it seems unlikely that taking Tor away from the world will stop them from doing their bad things. At the same time, Tor and other privacy measures can fight identity theft, physical crimes like stalking, and so on.
Tor aims to conceal its users' identities and their online activity from surveillance and traffic analysis by separating identification and routing. It is an implementation of onion routing, which encrypts and then randomly bounces communications through a network of relays run by volunteers around the globe. These onion routers employ encryption in a multi-layered manner (hence the onion metaphor) to ensure perfect forward secrecy between relays, thereby providing users with anonymity in network location. That anonymity extends to the hosting of censorship-resistant content by Tor's anonymous onion service feature.[7] Furthermore, by keeping some of the entry relays (bridge relays) secret, users can evade Internet censorship that relies upon blocking public Tor relays.[56]
Because the IP addresses of the sender and the recipient are not both in cleartext at any hop along the way, anyone eavesdropping at any point along the communication channel cannot directly identify both ends. Furthermore, to the recipient it appears that the last Tor node (called the exit node), rather than the sender, is the originator of the communication.
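The nested-layers idea can be sketched with a toy involutive cipher (XOR against a hash-derived keystream). This is an illustration only, not Tor's actual cryptography; the point is that each relay's key removes exactly one layer, and only the exit sees cleartext:

```python
import hashlib

def layer(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a SHA-256-derived keystream.
    # XOR is its own inverse, so the same call adds or peels a layer.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

message = b"hello from the client"
relay_keys = [b"guard", b"middle", b"exit"]  # circuit in path order

# The client wraps the message innermost-first, so the guard's layer
# ends up outermost.
onion = message
for key in reversed(relay_keys):
    onion = layer(key, onion)

# Each relay peels only its own layer; after the exit, cleartext remains.
for key in relay_keys:
    onion = layer(key, onion)
assert onion == message
```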
A Tor user's SOCKS-aware applications can be configured to direct their network traffic through a Tor instance's SOCKS interface, which listens on TCP port 9050 (for standalone Tor) or 9150 (for the Tor Browser bundle) at localhost.[57] Tor periodically creates virtual circuits through the Tor network, through which it can multiplex and onion-route that traffic to its destination. Once inside the Tor network, the traffic is sent from router to router along the circuit, ultimately reaching an exit node, at which point the cleartext packet is available and is forwarded on to its original destination. Viewed from the destination, the traffic appears to originate at the Tor exit node.
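For example, an HTTP client that supports SOCKS proxies can be pointed at these ports. A sketch of the settings (the helper name is my own; the `socks5h` scheme is the standard way to make hostname resolution also go through the proxy):

```python
def tor_socks_proxies(browser_bundle: bool = False) -> dict:
    """Proxy settings pointing an application at a local Tor SOCKS port.

    Standalone Tor listens on 9050; the Tor Browser bundle on 9150.
    socks5h resolves hostnames through the proxy too, so DNS lookups
    do not leak outside Tor.
    """
    port = 9150 if browser_bundle else 9050
    url = f"socks5h://127.0.0.1:{port}"
    return {"http": url, "https": url}

# e.g. with the requests library:
#   requests.get("https://example.com", proxies=tor_socks_proxies())
assert tor_socks_proxies() == {"http": "socks5h://127.0.0.1:9050",
                               "https": "socks5h://127.0.0.1:9050"}
```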
Tor's application independence sets it apart from most other anonymity networks: it works at the Transmission Control Protocol (TCP) stream level. Applications whose traffic is commonly anonymized using Tor include Internet Relay Chat (IRC), instant messaging, and World Wide Web browsing.
Tor can also provide anonymity to websites and other servers. Servers configured to receive inbound connections only through Tor are called onion services (formerly, hidden services).[58] Rather than revealing a server's IP address (and thus its network location), an onion service is accessed through its onion address, usually via the Tor Browser or some other software designed to use Tor. The Tor network understands these addresses by looking up their corresponding public keys and introduction points from a distributed hash table within the network. It can route data to and from onion services, even those hosted behind firewalls or network address translators (NAT), while preserving the anonymity of both parties. Tor is necessary to access these onion services.[59] Because the connection never leaves the Tor network and is handled by the Tor application on both ends, the connection is always end-to-end encrypted.
Onion services were first specified in 2003[60] and have been deployed on the Tor network since 2004.[61] They are unlisted by design, and can only be discovered on the network if the onion address is already known,[62] though a number of sites and services do catalog publicly known onion addresses. Popular sources of .onion links include Pastebin, Twitter, Reddit, other Internet forums, and tailored search engines.[63]
While onion services are often discussed in terms of websites, they can be used for any TCP service, and are commonly used for increased security or easier routing to non-web services, such as secure shell remote login, chat services such as IRC and XMPP, or file sharing.[58] They have also become a popular means of establishing peer-to-peer connections in messaging[64][65] and file sharing applications.[66] Web-based onion services can be accessed from a standard web browser without a client-side connection to the Tor network using services like Tor2web, which remove client anonymity.[67]
Like all software with an attack surface, Tor's protections have limitations, and Tor's implementation and design have been vulnerable to attacks at various points throughout its history.[68][19] While most of these limitations and attacks are minor, either being fixed without incident or proving inconsequential, others are more notable.
Tor is designed to provide relatively high-performance network anonymity against an attacker with a single vantage point on the connection (e.g., control over one of the three relays, the destination server, or the user's internet service provider). Like all current low-latency anonymity networks, Tor cannot and does not attempt to protect against an attacker performing simultaneous monitoring of traffic at the boundaries of the Tor network, i.e., the traffic entering and exiting the network. While Tor does provide protection against traffic analysis, it cannot prevent traffic confirmation via end-to-end correlation.[69][70]
There are no documented cases of this limitation being used at scale; as of the 2013 Snowden leaks, agencies such as the NSA were unable to perform dragnet surveillance on Tor itself, and relied on attacking other software used in conjunction with Tor, such as vulnerabilities in web browsers.[71] However, targeted attacks have been able to make use of traffic confirmation on individual Tor users, via police surveillance or investigations confirming that a particular person already under suspicion was sending Tor traffic at the exact times the connections in question occurred.[72][73] The relay early traffic confirmation attack also relied on traffic confirmation as part of its mechanism, though on requests for onion service descriptors rather than traffic to the destination server.
Like many decentralized systems, Tor relies on a consensus mechanism to periodically update its current operating parameters. For Tor, these include network parameters such as which nodes are good and bad relays, exits, and guards, and how much traffic each can handle. Tor's architecture for deciding the consensus relies on a small number of directory authority nodes voting on current network parameters. Currently, there are nine directory authority nodes, and their health is publicly monitored.[74] The IP addresses of the authority nodes are hard-coded into each Tor client. The authority nodes vote every hour to update the consensus, and clients download the most recent consensus on startup.[75][76][77] A compromise of the majority of the directory authorities could alter the consensus in a way that is beneficial to an attacker. Alternatively, a network congestion attack, such as a DDoS, could theoretically prevent the consensus nodes from communicating and thus prevent voting to update the consensus (though such an attack would be visible).[citation needed]
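The majority-vote aspect, and why compromising a majority of authorities matters, can be sketched in a few lines (a toy illustration only; Tor's real consensus protocol is considerably more involved):

```python
from collections import Counter

def consensus(votes: list[dict]) -> dict:
    """Toy majority vote over per-authority parameter maps."""
    params = {k for v in votes for k in v}
    return {k: Counter(v[k] for v in votes if k in v).most_common(1)[0][0]
            for k in params}

# Nine authorities vote on whether a hypothetical relay is a good guard.
votes = [{"relayX-guard": True}] * 6 + [{"relayX-guard": False}] * 3
assert consensus(votes)["relayX-guard"] is True

# If an attacker controls a majority (5 of 9), the consensus flips with it.
votes = [{"relayX-guard": False}] * 5 + [{"relayX-guard": True}] * 4
assert consensus(votes)["relayX-guard"] is False
```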
Tor makes no attempt to conceal the IP addresses of exit relays, or to hide from a destination server the fact that a user is connecting via Tor.[78] Operators of Internet sites therefore have the ability to prevent traffic from Tor exit nodes or to offer reduced functionality for Tor users. For example, Wikipedia generally forbids all editing when using Tor or when using an IP address also used by a Tor exit node,[79] and the BBC blocks the IP addresses of all known Tor exit nodes from its iPlayer service.[80]
Apart from intentional restrictions of Tor traffic, Tor use can trigger defense mechanisms on websites intended to block traffic from IP addresses observed to generate malicious or abnormal traffic.[81][82] Because traffic from all Tor users is shared by a comparatively small number of exit relays, tools can misidentify distinct sessions as originating from the same user, attribute the actions of a malicious user to a non-malicious user, or observe an unusually large volume of traffic for one IP address. Conversely, a site may observe a single session connecting from different exit relays, with different Internet geolocations, and assume the connection is malicious, or trigger geo-blocking. When these defense mechanisms are triggered, the result can be the site blocking access or presenting captchas to the user.
In July 2014, the Tor Project issued a security advisory for a "relay early traffic confirmation" attack, disclosing the discovery of a group of relays attempting to de-anonymize onion service users and operators.[83]A set of onion service directory nodes (i.e., the Tor relays responsible for providing information about onion services) were found to be modifying traffic of requests. The modifications made it so the requesting client's guard relay, if controlled by the same adversary as the onion service directory node, could easily confirm that the traffic was from the same request. This would allow the adversary to simultaneously know the onion service involved in the request, and the IP address of the client requesting it (where the requesting client could be a visitor or owner of the onion service).[83]
The attacking nodes joined the network on 30 January, using a Sybil attack to comprise 6.4% of guard relay capacity, and were removed on 4 July.[83] In addition to removing the attacking relays, the Tor application was patched to prevent the specific traffic modifications that made the attack possible.
In November 2014, there was speculation in the aftermath of Operation Onymous, which resulted in 17 arrests internationally, that a Tor weakness had been exploited. A representative of Europol was secretive about the method used, saying: "This is something we want to keep for ourselves. The way we do this, we can't share with the whole world, because we want to do it again and again and again."[84] A BBC source cited a "technical breakthrough"[85] that allowed the tracking of servers' physical locations, and the initial number of infiltrated sites led to the exploit speculation. A Tor Project representative downplayed this possibility, suggesting that execution of more traditional police work was more likely.[86][87]
In November 2015, court documents suggested a connection between the attack and the arrests, and raised concerns about security research ethics.[88][89] The documents revealed that the FBI obtained IP addresses of onion services and their visitors from a "university-based research institute", leading to the arrests. Reporting from Motherboard found that the timing and nature of the relay early traffic confirmation attack matched the description in the court documents. Multiple experts, including a senior researcher at the ICSI of UC Berkeley, Edward Felten of Princeton University, and the Tor Project, agreed that the CERT Coordination Center of Carnegie Mellon University was the institute in question.[88][90][89] Concerns raised included the role of an academic institution in policing, sensitive research involving non-consenting users, the non-targeted nature of the attack, and the lack of disclosure about the incident.[88][90][89]
Many attacks targeted at Tor users result from flaws in applications used with Tor, either in the application itself or in how it operates in combination with Tor. For example, researchers with Inria in 2011 performed an attack on BitTorrent users by attacking clients that established connections both using and not using Tor, then associating other connections shared by the same Tor circuit.[91]
When using Tor, applications may still provide data tied to a device, such as information about screen resolution, installed fonts, language configuration, or supported graphics functionality, reducing the set of users a connection could possibly originate from, or uniquely identifying them.[92] This information is known as the device fingerprint, or browser fingerprint in the case of web browsers. Applications implemented with Tor in mind, such as Tor Browser, can be designed to minimize the amount of information leaked by the application and reduce its fingerprint.[92][93]
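The fingerprinting idea reduces to hashing whatever attributes an application exposes; a toy sketch (the attribute names and values are invented for illustration):

```python
import hashlib

def browser_fingerprint(attrs: dict) -> str:
    """Hash of observable attributes -- a toy model of a device fingerprint."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

alice = {"screen": "1920x1080", "fonts": "Arial,Noto,Papyrus", "lang": "en-US"}
bob = {"screen": "2560x1440", "fonts": "Arial,Noto", "lang": "de-DE"}

# Distinct configurations yield distinct fingerprints, so each user can
# be recognized across sites even without cookies or an IP address...
assert browser_fingerprint(alice) != browser_fingerprint(bob)

# ...whereas a browser that reports only uniform, bucketed values (the
# strategy Tor Browser pursues) makes unrelated users indistinguishable.
uniform = {"screen": "1000x1000", "fonts": "default", "lang": "en-US"}
assert browser_fingerprint(dict(uniform)) == browser_fingerprint(dict(uniform))
```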
Tor cannot encrypt the traffic between an exit relay and the destination server.
If an application does not add an additional layer of end-to-end encryption between the client and the server, such as Transport Layer Security (TLS, used in HTTPS) or the Secure Shell (SSH) protocol, this allows the exit relay to capture and modify traffic.[94][95] Attacks from malicious exit relays have recorded usernames and passwords,[96] and modified Bitcoin addresses to redirect transactions.[97]
Some of these attacks involved actively removing the HTTPS protections that would have otherwise been used.[94]To attempt to prevent this, Tor Browser has since made it so only connections via onion services or HTTPS are allowed by default.[98]
In 2011, the Dutch authority investigating child pornography discovered the IP address of a Tor onion service site from an unprotected administrator's account and gave it to the FBI, who traced it to Aaron McGrath.[99] After a year of surveillance, the FBI launched "Operation Torpedo", which resulted in McGrath's arrest and allowed them to install their Network Investigative Technique (NIT) malware on the servers to retrieve information from the users of the three onion service sites that McGrath controlled.[100] The technique exploited a vulnerability in Firefox/Tor Browser that had already been patched, and therefore targeted users who had not updated. A Flash application sent a user's IP address directly back to an FBI server,[101][102][103][104] revealing at least 25 US users as well as numerous users from other countries.[105] McGrath was sentenced to 20 years in prison in early 2014, while at least 18 others (including a former Acting HHS Cyber Security Director) were sentenced in subsequent cases.[106][107]
In August 2013, it was discovered[108][109] that the Firefox browsers in many older versions of the Tor Browser Bundle were vulnerable to a JavaScript-deployed shellcode attack, as NoScript was not enabled by default.[110] Attackers used this vulnerability to extract users' MAC and IP addresses and Windows computer names.[111][112][113] News reports linked this to an FBI operation targeting Freedom Hosting's owner, Eric Eoin Marques, who was arrested on a provisional extradition warrant issued by a United States court on 29 July.[114] The FBI extradited Marques from Ireland to the state of Maryland on four charges: distributing, conspiring to distribute, and advertising child pornography, as well as aiding and abetting advertising of child pornography.[115][116][117] The FBI acknowledged the attack in a 12 September 2013 court filing in Dublin;[118] further technical details from a training presentation leaked by Edward Snowden revealed the code name for the exploit as "EgotisticalGiraffe".[71]
In 2022, Kaspersky researchers found that when looking up "Tor Browser" in Chinese on YouTube, one of the URLs provided under the top-ranked Chinese-language video actually pointed to malware disguised as Tor Browser. Once installed, it saved browsing history and form data that the genuine Tor Browser discards by default, and downloaded malicious components if the device's IP address was in China. Kaspersky researchers noted that the malware was not stealing data to sell for profit, but was designed to identify users.[119]
Like client applications that use Tor, servers relying on onion services for protection can introduce their own weaknesses. Servers that are reachable through Tor onion services and the public Internet can be subject to correlation attacks, and all onion services are susceptible to misconfigured services (e.g., identifying information included by default in web server error responses), leaking uptime and downtime statistics, intersection attacks, or various user errors.[120][121] The OnionScan program, written by independent security researcher Sarah Jamie Lewis, comprehensively examines onion services for such flaws and vulnerabilities.[122]
The main implementation of Tor is written primarily in C.[123] Starting in 2020, the Tor Project began development of a full rewrite of the C Tor codebase in Rust.[124] The project, named Arti, was publicly announced in July 2021.[125]
The Tor Browser[131] is a web browser capable of accessing the Tor network. It was created as the Tor Browser Bundle by Steven J. Murdoch[132] and announced in January 2008.[133] The Tor Browser consists of a modified Mozilla Firefox ESR web browser, the TorButton, TorLauncher, NoScript, and the Tor proxy.[134][135] Users can run the Tor Browser from removable media. It can operate under Microsoft Windows, macOS, Android, and Linux.[136]
The default search engine is DuckDuckGo (until version 4.5, Startpage.com was its default). The Tor Browser automatically starts Tor background processes and routes traffic through the Tor network. Upon termination of a session, the browser deletes privacy-sensitive data such as HTTP cookies and the browsing history.[135] This is effective in reducing web tracking and canvas fingerprinting, and it also helps to prevent creation of a filter bubble.[citation needed]
To allow downloads from places where accessing the Tor Project URL may be risky or blocked, a GitHub repository is maintained with links for releases hosted in other domains.[137]
On 29 October 2015, the Tor Project released Tor Messenger Beta, an instant messaging program based on Instantbird with Tor and OTR built in and used by default.[138] Like Pidgin and Adium, Tor Messenger supports multiple different instant messaging protocols; however, it accomplishes this without relying on libpurple, implementing all chat protocols in the memory-safe language JavaScript instead.[141][142]
According to Lucian Armasu of Tom's Hardware, in April 2018, the Tor Project shut down the Tor Messenger project for three reasons: the developers of "Instabird" [sic] had discontinued support for their own software, limited resources, and known metadata problems.[143] The Tor Messenger developers explained that overcoming any vulnerabilities discovered in the future would be impossible due to the project relying on outdated software dependencies.[144]
In 2016, Tor developer Mike Perry announced a prototype Tor-enabled smartphone based on CopperheadOS.[145][146] It was meant as a direction for Tor on mobile.[147] The project was called 'Mission Improbable'. Copperhead's then lead developer Daniel Micay welcomed the prototype.[148]
The Vuze (formerly Azureus) BitTorrent client,[149] the Bitmessage anonymous messaging system,[150] and the TorChat instant messenger include Tor support. The Briar messenger routes all messaging via Tor by default. OnionShare allows users to share files using Tor.[66]
The Guardian Project is actively developing a free and open-source suite of applications and firmware for the Android operating system to improve the security of mobile communications.[151] The applications include the ChatSecure instant messaging client,[152] the Orbot Tor implementation[153] (also available for iOS),[154] the Orweb (discontinued) privacy-enhanced mobile browser,[155][156] Orfox, the mobile counterpart of the Tor Browser, the ProxyMob Firefox add-on,[157] and ObscuraCam.[158]
Onion Browser[159] is an open-source, privacy-enhancing web browser for iOS which uses Tor.[160] It is available in the iOS App Store,[161] and source code is available on GitHub.[162]
Brave added support for Tor in its desktop browser's private-browsing mode.[163][164]
In September 2024, it was announced that Tails, a security-focused operating system, had become part of the Tor Project.[165] Other security-focused operating systems that make or made extensive use of Tor include Hardened Linux From Scratch, Incognito, Liberté Linux, Qubes OS, Subgraph, Parrot OS, Tor-ramdisk, and Whonix.[166]
Tor has been praised for providing privacy and anonymity to vulnerable Internet users such as political activists fearing surveillance and arrest, ordinary web users seeking to circumvent censorship, and people who have been threatened with violence or abuse by stalkers.[168][169] The U.S. National Security Agency (NSA) has called Tor "the king of high-secure, low-latency Internet anonymity",[25] and BusinessWeek magazine has described it as "perhaps the most effective means of defeating the online surveillance efforts of intelligence agencies around the world".[11] Other media have described Tor as "a sophisticated privacy tool",[170] "easy to use"[171] and "so secure that even the world's most sophisticated electronic spies haven't figured out how to crack it".[47]
Advocates for Tor say it supports freedom of expression, including in countries where the Internet is censored, by protecting the privacy and anonymity of users. The mathematical underpinnings of Tor lead it to be characterized as acting "like a piece of infrastructure, and governments naturally fall into paying for infrastructure they want to use".[172]
The project was originally developed on behalf of the U.S. intelligence community and continues to receive U.S. government funding, and has been criticized as "more resembl[ing] a spook project than a tool designed by a culture that values accountability or transparency".[173] As of 2012[update], 80% of The Tor Project's $2M annual budget came from the United States government, with the U.S. State Department, the Broadcasting Board of Governors, and the National Science Foundation as major contributors,[174] aiming "to aid democracy advocates in authoritarian states".[175] Other public sources of funding include DARPA, the U.S. Naval Research Laboratory, and the Government of Sweden.[176][177] Some have proposed that the government values Tor's commitment to free speech, and uses the darknet to gather intelligence.[178][need quotation to verify] Tor also receives funding from NGOs including Human Rights Watch, and private sponsors including Reddit and Google.[179] Dingledine said that the United States Department of Defense funds are more similar to a research grant than a procurement contract. Tor executive director Andrew Lewman said that even though it accepts funds from the U.S. federal government, the Tor service did not collaborate with the NSA to reveal identities of users.[180]
Critics say that Tor is not as secure as it claims,[181] pointing to U.S. law enforcement's investigations and shutdowns of Tor-using sites such as web-hosting company Freedom Hosting and online marketplace Silk Road.[173] In October 2013, after analyzing documents leaked by Edward Snowden, The Guardian reported that the NSA had repeatedly tried to crack Tor and had failed to break its core security, although it had had some success attacking the computers of individual Tor users.[25] The Guardian also published a 2012 NSA classified slide deck, entitled "Tor Stinks", which said: "We will never be able to de-anonymize all Tor users all the time", but "with manual analysis we can de-anonymize a very small fraction of Tor users".[182] When Tor users are arrested, it is typically due to human error, not to the core technology being hacked or cracked.[183] On 7 November 2014, for example, a joint operation by the FBI, ICE Homeland Security Investigations, and European law enforcement agencies led to 17 arrests and the seizure of 27 sites containing 400 pages.[184][dubious–discuss] A late 2014 report by Der Spiegel using a new cache of Snowden leaks revealed, however, that as of 2012[update] the NSA deemed Tor on its own as a "major threat" to its mission, and when used in conjunction with other privacy tools such as OTR, Cspace, ZRTP, RedPhone, Tails, and TrueCrypt was ranked as "catastrophic," leading to a "near-total loss/lack of insight to target communications, presence..."[185][186]
In March 2011, The Tor Project received the Free Software Foundation's 2010 Award for Projects of Social Benefit. The citation read, "Using free software, Tor has enabled roughly 36 million people around the world to experience freedom of access and expression on the Internet while keeping them in control of their privacy and anonymity. Its network has proved pivotal in dissident movements in both Iran and more recently Egypt."[187]
Iran tried to block Tor at least twice in 2011. One attempt simply blocked all servers with 2-hour-expiry security certificates; it was successful for less than 24 hours.[188][189]
In 2012, Foreign Policy magazine named Dingledine, Mathewson, and Syverson among its Top 100 Global Thinkers "for making the web safe for whistleblowers".[190]
In 2013, Jacob Appelbaum described Tor as a "part of an ecosystem of software that helps people regain and reclaim their autonomy. It helps to enable people to have agency of all kinds; it helps others to help each other and it helps you to help yourself. It runs, it is open and it is supported by a large community spread across all walks of life."[191]
In June 2013, whistleblower Edward Snowden used Tor to send information about PRISM to The Washington Post and The Guardian.[192]
In 2014, the Russian government offered a $111,000 contract to "study the possibility of obtaining technical information about users and users' equipment on the Tor anonymous network".[193][194]
In September 2014, in response to reports that Comcast had been discouraging customers from using the Tor Browser, Comcast issued a public statement that "We have no policy against Tor, or any other browser or software."[195]
In October 2014, The Tor Project hired the public relations firm Thomson Communications to improve its public image (particularly regarding the terms "Dark Net" and "hidden services," which are widely viewed as being problematic) and to educate journalists about the technical aspects of Tor.[196]
Turkey blocked downloads of Tor Browser from the Tor Project.[197]
In June 2015, the special rapporteur from the United Nations' Office of the High Commissioner for Human Rights specifically mentioned Tor in the context of the debate in the U.S. about allowing so-called backdoors in encryption programs for law enforcement purposes[198] in an interview for The Washington Post.
In July 2015, the Tor Project announced an alliance with the Library Freedom Project to establish exit nodes in public libraries.[199][200] The pilot program, which established a middle relay running on the excess bandwidth afforded by the Kilton Library in Lebanon, New Hampshire, making it the first library in the U.S. to host a Tor node, was briefly put on hold when the local city manager and deputy sheriff voiced concerns over the cost of defending search warrants for information passed through the Tor exit node. Although the Department of Homeland Security (DHS) had alerted New Hampshire authorities to the fact that Tor is sometimes used by criminals, the Lebanon Deputy Police Chief and the Deputy City Manager averred that no pressure to strong-arm the library was applied, and the service was re-established on 15 September 2015.[201] U.S. Rep. Zoe Lofgren (D-Calif.) released a letter on 10 December 2015 in which she asked the DHS to clarify its procedures, stating that "While the Kilton Public Library's board ultimately voted to restore their Tor relay, I am no less disturbed by the possibility that DHS employees are pressuring or persuading public and private entities to discontinue or degrade services that protect the privacy and anonymity of U.S. citizens."[202][203][204] In a 2016 interview, Kilton Library IT Manager Chuck McAndrew stressed the importance of getting libraries involved with Tor: "Librarians have always cared deeply about protecting privacy, intellectual freedom, and access to information (the freedom to read). Surveillance has a very well-documented chilling effect on intellectual freedom. It is the job of librarians to remove barriers to information."[205] The second library to host a Tor node was the Las Naves Public Library in Valencia, Spain, implemented in the first months of 2016.[206]
In August 2015, an IBM security research group called "X-Force" put out a quarterly report that advised companies to block Tor on security grounds, citing a "steady increase" in attacks from Tor exit nodes as well as botnet traffic.[207]
In September 2015, Luke Millanta created OnionView (now defunct), a web service that plots the location of active Tor relay nodes onto an interactive map of the world. The project's purpose was to detail the network's size and escalating growth rate.[208]
In December 2015, Daniel Ellsberg (of the Pentagon Papers),[209] Cory Doctorow (of Boing Boing),[210] Edward Snowden,[211] and artist-activist Molly Crabapple,[212] amongst others, announced their support of Tor.
In March 2016, New Hampshire state representative Keith Ammon introduced a bill[213] allowing public libraries to run privacy software. The bill specifically referenced Tor. The text was crafted with extensive input from Alison Macrina, the director of the Library Freedom Project.[214] The bill was passed by the House 268–62.[215]
Also in March 2016, the first Tor node, specifically a middle relay, was established at a library in Canada, the Graduate Resource Centre (GRC) in the Faculty of Information and Media Studies (FIMS) at the University of Western Ontario.[216] Given that the running of a Tor exit node is an unsettled area of Canadian law,[217] and that in general institutions are more capable than individuals of coping with legal pressures, Alison Macrina of the Library Freedom Project has opined that in some ways she would like to see intelligence agencies and law enforcement attempt to intervene in the event that an exit node were established.[218]
On 16 May 2016, CNN reported on the case of core Tor developer "isis agora lovecruft",[219] who had fled to Germany under the threat of a subpoena by the FBI during the Thanksgiving break of the previous year. The Electronic Frontier Foundation legally represented lovecruft.[220]
On 2 December 2016, The New Yorker reported on burgeoning digital privacy and security workshops in the San Francisco Bay Area, particularly at the hackerspace Noisebridge, in the wake of the 2016 United States presidential election; downloading the Tor browser was mentioned.[221] Also in December 2016, Turkey blocked the usage of Tor, together with ten of the most-used VPN services in Turkey, which were popular ways of accessing banned social media sites and services.[222]
Tor (and Bitcoin) was fundamental to the operation of the dark web marketplace AlphaBay, which was taken down in an international law enforcement operation in July 2017.[223] Despite federal claims that Tor would not shield a user, however,[224] elementary operational security errors outside of the ambit of the Tor network led to the site's downfall.[225]
In June 2017, the Democratic Socialists of America recommended intermittent Tor usage for politically active organizations and individuals as a defensive mitigation against information security threats.[226][227] And in August 2017, according to reportage, cybersecurity firms which specialize in monitoring and researching the dark web (which relies on Tor as its infrastructure) on behalf of banks and retailers routinely share their findings with the FBI and with other law enforcement agencies "when possible and necessary" regarding illegal content. The Russian-speaking underground offering a crime-as-a-service model is regarded as being particularly robust.[228]
In June 2018, Venezuela blocked access to the Tor network. The block affected both direct connections to the network and connections being made via bridge relays.[229]
On 20 June 2018, Bavarian police raided the homes of the board members of the non-profit Zwiebelfreunde, a member of torservers.net, which handles the European financial transactions of riseup.net, in connection with a blog post there which apparently promised violence against the upcoming Alternative for Germany convention.[230][231] Tor came out strongly against the raid on its support organization, which provides legal and financial aid for the setting up and maintenance of high-speed relays and exit nodes.[232] According to torservers.net, on 23 August 2018 the German court at Landgericht München ruled that the raid and seizures were illegal. The hardware and documentation seized had been kept under seal, and purportedly were neither analyzed nor evaluated by the Bavarian police.[233][234]
Since October 2018, Chinese online communities within Tor have begun to dwindle due to increased efforts by the Chinese government to stop them.[235]
In November 2019, Edward Snowden called for a full, unabridged simplified Chinese translation of his autobiography, Permanent Record, as the Chinese publisher had violated their agreement by expurgating all mentions of Tor and other matters deemed politically sensitive by the Chinese Communist Party.[236][237]
On 8 December 2021, the Russian government agency Roskomnadzor announced it had banned Tor and six VPN services for failing to abide by the Russian Internet blacklist.[238] Russian ISPs unsuccessfully attempted to block Tor's main website as well as several bridges beginning on 1 December 2021.[239] The Tor Project has appealed to Russian courts over this ban.[240]
In response to Internet censorship during the Russian invasion of Ukraine, the BBC and VOA have directed Russian audiences to Tor.[241] The Russian government increased efforts to block access to Tor through technical and political means, while the network reported an increase in traffic from Russia and increased Russian use of its anti-censorship Snowflake tool.[242]
Russian courts temporarily lifted the blockade on Tor's website (but not connections to relays) on May 24, 2022[243]due to Russian law requiring that the Tor Project be involved in the case. However, the blockade was reinstated on July 21, 2022.[244]
Iran implemented rolling internet blackouts during the Mahsa Amini protests, and Tor and Snowflake were used to circumvent them.[245][246][247][248]
China, with its highly centralized control of its internet, had effectively blocked Tor.[242]
Tor has responded to the vulnerabilities listed above by patching them and improving security, but human (user) errors can still lead to detection. The Tor Project website provides best practices on how to properly use the Tor Browser; when used improperly, Tor is not secure. For example, Tor warns its users that not all traffic is protected; only the traffic routed through the Tor Browser is protected. Users are also warned to use HTTPS versions of websites, not to torrent with Tor, not to enable browser plugins, not to open documents downloaded through Tor while online, and to use safe bridges.[249] Users are also warned that they cannot provide their name or other revealing information in web forums over Tor and stay anonymous at the same time.[250]
Despite intelligence agencies' claims in 2013 that 80% of Tor users would be de-anonymized within six months,[251] that has still not happened. In fact, as late as September 2016, the FBI could not locate, de-anonymize, and identify the Tor user who hacked into the email account of a staffer on Hillary Clinton's email server.[252]
The best tactic of law enforcement agencies to de-anonymize users appears to remain with Tor-relay adversaries running poisoned nodes, as well as counting on the users themselves using the Tor browser improperly. For example, downloading a video through the Tor browser and then opening the same file on an unprotected hard drive while online can make the users' real IP addresses available to authorities.[253]
When properly used, the odds of being de-anonymized through Tor are said to be extremely low. Tor Project co-founder Nick Mathewson explained that the problem of "Tor-relay adversaries" running poisoned nodes means that a theoretical adversary of this kind is not the network's greatest threat:
"No adversary is truly global, but no adversary needs to be truly global," he says. "Eavesdropping on the entire Internet is a several-billion-dollar problem. Running a few computers to eavesdrop on a lot of traffic, a selective denial of service attack to drive traffic to your computers, that's like a tens-of-thousands-of-dollars problem." At the most basic level, an attacker who runs two poisoned Tor nodes—one entry, one exit—is able to analyse traffic and thereby identify the tiny, unlucky percentage of users whose circuit happened to cross both of those nodes. In 2016 the Tor network offers a total of around 7,000 relays, around 2,000 guard (entry) nodes and around 1,000 exit nodes. So the odds of such an event happening are one in two million (1⁄2000×1⁄1000), give or take."[251]
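Mathewson's back-of-the-envelope figure can be reproduced directly. The node counts below (roughly 2,000 guards and 1,000 exits) are the 2016 figures quoted above, and the sketch assumes the adversary runs exactly one guard and one exit node:

```python
from fractions import Fraction

# 2016 figures quoted above: ~2,000 guard (entry) nodes, ~1,000 exit nodes.
guards, exits = 2000, 1000

# A circuit picks one guard and one exit; both must belong to the
# attacker for the two-node correlation attack to succeed.
p = Fraction(1, guards) * Fraction(1, exits)

print(p)  # 1/2000000 — "one in two million, give or take"
```

Using `Fraction` keeps the arithmetic exact; with floats the result would only be approximately 5e-7.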
Tor does not provide protection against end-to-end timing attacks: if an attacker can watch the traffic coming out of the target computer, and also the traffic arriving at the target's chosen destination (e.g. a server hosting a .onion site), that attacker can use statistical analysis to discover that they are part of the same circuit.[250]
A similar attack has been used by German authorities to track down users related to Boystown.[254]
Depending on individual user needs, the Tor Browser offers three levels of security, located under the Security Level icon (the small gray shield at the top-right of the screen) > Advanced Security Settings. In addition to encrypting the data, including constantly changing an IP address through a virtual circuit comprising successive, randomly selected Tor relays, several other layers of security are at a user's disposal:[255][256]
Standard: At this level, all Tor Browser and website features are enabled.
Safer: This level disables website features that are often dangerous to the user, which may cause some sites to lose functionality. JavaScript is disabled on all non-HTTPS sites; some fonts and mathematical symbols are disabled; audio and video (HTML5 media) are click-to-play.
Safest: This level only allows website features required for static sites and basic services. These changes affect images, media, and scripts. JavaScript is disabled by default on all sites; some fonts, icons, math symbols, and images are disabled; audio and video (HTML5 media) are click-to-play.
In 2023, Tor unveiled a new defense mechanism to safeguard its onion services against denial of service (DoS) attacks. With the release of Tor 0.4.8, this proof-of-work (PoW) defense promises to prioritize legitimate network traffic while deterring malicious attacks.[257]
https://en.wikipedia.org/wiki/Tor_(network)
Minimum message length (MML) is a Bayesian information-theoretic method for statistical model comparison and selection.[1] It provides a formal information theory restatement of Occam's Razor: even when models are equal in their measure of fit-accuracy to the observed data, the one generating the most concise explanation of data is more likely to be correct (where the explanation consists of the statement of the model, followed by the lossless encoding of the data using the stated model). MML was invented by Chris Wallace, first appearing in the seminal paper "An information measure for classification".[2] MML is intended not just as a theoretical construct, but as a technique that may be deployed in practice.[3] It differs from the related concept of Kolmogorov complexity in that it does not require use of a Turing-complete language to model data.[4]
Shannon's A Mathematical Theory of Communication (1948) states that in an optimal code, the message length (in binary) of an event E, length(E), where E has probability P(E), is given by length(E) = −log2(P(E)).
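Shannon's formula is easy to check numerically: for instance, an event with probability 1/4 costs exactly two bits in an optimal code. A minimal sketch:

```python
import math

def optimal_length(p):
    """Length in bits of an optimal (Shannon) code word for an event
    with probability p: length(E) = -log2(P(E))."""
    return -math.log2(p)

print(optimal_length(0.25))  # 2.0 — a 1-in-4 event takes two bits
print(optimal_length(0.5))   # 1.0 — a fair coin flip takes one bit
```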
Bayes's theorem states that the probability of a (variable) hypothesis H given fixed evidence E is proportional to P(E|H)P(H), which, by the definition of conditional probability, is equal to P(H ∧ E). We want the model (hypothesis) with the highest such posterior probability. Suppose we encode a message which represents (describes) both model and data jointly. Since length(H ∧ E) = −log2(P(H ∧ E)), the most probable model will have the shortest such message. The message breaks into two parts: −log2(P(H ∧ E)) = −log2(P(H)) + −log2(P(E|H)). The first part encodes the model itself. The second part contains information (e.g., values of parameters, or initial conditions, etc.) that, when processed by the model, outputs the observed data.
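The two-part decomposition can be sketched with a toy comparison. The priors and likelihoods below are made-up numbers, not from any real model: a simple hypothesis with a high prior but mediocre fit is weighed against a complex one with a low prior but a better fit to the same evidence E:

```python
import math

def two_part_length(prior_h, likelihood_e_given_h):
    """Total two-part message length -log2 P(H) + -log2 P(E|H), in bits."""
    return -math.log2(prior_h) - math.log2(likelihood_e_given_h)

# Hypothetical numbers for illustration only.
simple = two_part_length(prior_h=0.5, likelihood_e_given_h=0.01)
complex_ = two_part_length(prior_h=0.05, likelihood_e_given_h=0.04)

# MML selects the hypothesis with the shortest total message.
best = min(("simple", simple), ("complex", complex_), key=lambda t: t[1])
print(best[0])  # simple — here the complex model does not pay for itself
```

With these numbers the complex model's better fit (a shorter second part) does not compensate for its longer first part, which is exactly the trade-off described below.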
MML naturally and precisely trades model complexity for goodness of fit. A more complicated model takes longer to state (longer first part) but probably fits the data better (shorter second part). So, an MML metric won't choose a complicated model unless that model pays for itself.
One reason why a model might be longer would be simply because its various parameters are stated to greater precision, thus requiring transmission of more digits. Much of the power of MML derives from its handling of how accurately to state parameters in a model, and a variety of approximations that make this feasible in practice. This makes it possible to usefully compare, say, a model with many parameters imprecisely stated against a model with fewer parameters more accurately stated.
https://en.wikipedia.org/wiki/Minimum_message_length
Madhava's sine table is the table of trigonometric sines constructed by the 14th-century Kerala mathematician-astronomer Madhava of Sangamagrama (c. 1340 – c. 1425). The table lists the jya-s or Rsines of the twenty-four angles from 3.75° to 90° in steps of 3.75° (1/24 of a right angle, 90°). The Rsine is just the sine multiplied by a selected radius and given as an integer. In this table, as in Aryabhata's earlier table, R is taken as 21600 ÷ 2π ≈ 3437.75.
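The value of R and a sample Rsine can be checked in a few lines. This is a sketch using modern floating point; the rounding to whole arcminutes for the 90° entry is the table's convention of giving Rsines as integers:

```python
import math

# Traditional radius: a circumference of 21600 arcminutes divided by 2*pi.
R = 21600 / (2 * math.pi)
print(round(R, 2))  # 3437.75

# Rsine of the last tabulated angle, 90 degrees, is just R itself,
# rounded to the nearest arcminute.
print(round(R * math.sin(math.radians(90))))  # 3438
```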
The table is encoded in the letters of the Sanskrit alphabet using the Katapayadi system, giving the entries the appearance of the verses of a poem.
Madhava's original work containing the table has not been found. The table is reproduced in the Aryabhatiyabhashya of Nilakantha Somayaji[1] (1444–1544) and also in the Yuktidipika/Laghuvivrti commentary on the Tantrasamgraha by Sankara Variar (c. 1500–1560).[2]: 114–123
The verses below are given as in Cultural Foundations of Mathematics by C. K. Raju.[2]: 114–123 They are also given, slightly differently, in the Malayalam commentary of the Karanapaddhati by P. K. Koru.[3]
The verses are:
श्रेष्ठं नाम वरिष्ठानां हिमाद्रिर्वेदभावनः ।तपनो भानु सूक्तज्ञो मध्यमं विद्धि दोहनम् ॥ १ ॥धिगाज्यो नाशनं कष्टं छन्नभोगाशयाम्बिका ।मृगाहारो नरेशोयं वीरो रणजयोत्सुकः ॥ २ ॥मूलं विशुद्धं नाळस्य गानेषु विरळा नराः ।अशुद्धिगुप्ता चोरश्रीः शङ्कुकर्णो नगेश्वरः ॥ ३ ॥तनुजो गर्भजो मित्रं श्रीमानत्र सुखी सखे ।शशी रात्रौ हिमाहारौ वेगज्ञः पथि सिन्धुरः ॥ ४ ॥छाया लयो गजो नीलो निर्मलो नास्ति सत्कुले ।रात्रौ दर्पणमभ्राङ्गं नागस्तुङ्गनखो बली ॥ ५ ॥धीरो युवा कथालोलः पूज्यो नारीजनैर्भगः ।कन्यागारे नागवल्ली देवो विश्वस्थली भृगुः ॥ ६ ॥तत्परादिकलान्तास्तु महाज्या माधवोदिताः ।स्वस्वपूर्वविशुद्धे तु शिष्टास्तत्खण्डमौर्विकाः ॥ ७ ॥
The quarters of the first six verses represent entries for the twenty-four angles from 3.75° to 90° in steps of 3.75° (first column). The second column contains the Rsine values encoded as Sanskrit words (in Devanagari). The third column contains the same in ISO 15919 transliteration. The fourth column contains the numbers decoded into arcminutes, arcseconds, and arcthirds in modern numerals. The fifth column gives the modern values scaled by the traditional "radius" (21600 ÷ 2π), computed with the modern value of π and given to two decimals of the arcthirds.
The last verse means: “These are the great R-sines as said by Madhava, comprising arcminutes, seconds and thirds. Subtracting from each the previous will give the R-sine-differences.”
Comparison shows that Madhava's values are accurate to the declared precision of thirds, except for Rsin(15°), which arguably should have been rounded up to 889′45″16‴ instead.
Note that in the Katapayadi system the digits are written in reverse order: for example, the literal entry corresponding to 15° is 51549880, which is reversed and then read as 0889′45″15‴. The leading 0 does not carry a value but is used for the metre of the poem alone.
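The decoding of the 15° entry can be sketched as follows. The digit string 51549880 is the one quoted above; the variable names and the final division by R (explained below) are mine:

```python
import math

# Literal Katapayadi digits for 15 degrees, as encoded in the verse.
encoded = "51549880"

# Katapayadi digits are written in reverse, so flip the string first:
# "51549880" -> "08894515" -> 0889'45"15''' (the leading 0 is metrical).
reversed_digits = encoded[::-1]
minutes = int(reversed_digits[0:4])  # 889
seconds = int(reversed_digits[4:6])  # 45
thirds = int(reversed_digits[6:8])   # 15

# Convert to arcminutes, then divide by R to recover the plain sine.
R = 21600 / (2 * math.pi)
value_arcmin = minutes + seconds / 60 + thirds / 3600
print(round(value_arcmin / R, 6))  # 0.258819, matching sin 15 degrees
```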
Without going into the philosophy of why the value R = 21600 ÷ 2π was chosen, the simplest way to relate the jya tables to our modern concept of sine tables is as follows:
Even today sine tables are given as decimals to a certain precision. If sin(15°) is given as 0.2588, it means the rational 2588 ÷ 10000 is a good approximation of the actual infinite-precision number. The only difference is that in earlier days they had not standardized on decimal values (powers of ten as denominators) for fractions. Hence they used other denominators based on other considerations (which are not discussed here).
Hence the sine values represented in the tables may simply be taken as approximated by the given integer values divided by theRchosen for the table.
Another possible point of confusion is the usage of angle measures like arcminutes in expressing the R-sines. Modern sines are unitless ratios. Jya-s, or R-sines, are the same multiplied by a measure of length or distance. However, since these tables were mostly used for astronomy, and distance on the celestial sphere is expressed in angle measures, these values are also given likewise. The unit is not really important and need not be taken too seriously, as the value will anyhow be used as part of a ratio and the unit will cancel out.
However, this also leads to the usage of sexagesimal subdivisions in Madhava's refinement of Aryabhata's earlier table. Instead of choosing a larger R, he gave the extra precision he had determined on top of the earlier given minutes by using seconds and thirds. As before, these may simply be taken as a different way of expressing fractions and not necessarily as angle measures.
Consider some angle whose measure is A. Consider a circle of unit radius and centre O. Let the arc PQ of the circle subtend the angle A at the centre O. Drop the perpendicular QR from Q to OP; the length of the line segment RQ is then the value of the trigonometric sine of A. Let PS be an arc of the circle whose length equals the length of RQ. For various angles A, Madhava's table gives the measure of the corresponding angle ∠POS in arcminutes, arcseconds and sixtieths of an arcsecond.
As an example, let A be an angle of 22.50°. In Madhava's table, the entry corresponding to 22.50° is the measure, in arcminutes, arcseconds and sixtieths of an arcsecond, of the angle whose radian measure is the value of sin 22.50°, which is 0.3826834.
For an angle whose measure is A, let
Then:
Each of the lines in the table specifies eight digits. Let the digits corresponding to the angle A (read from left to right) be:
Then, according to the rules of the Katapayadi system, they are to be read from right to left, and we have:
The value of the above angle B, expressed in radians, corresponds to the sine value of A.
As said earlier, this is the same as dividing the encoded value by the chosen R:
The table lists the following digits corresponding to the angle A = 45.00°:
This yields the angle with measure:
From which we get:
The value of the sine of A = 45.00° as given in Madhava's table is then just B converted to radians:
Evaluating the above, one can find that sin 45° is 0.70710681… This is accurate to 6 decimal places.
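The arithmetic can be reproduced in a few lines. Published renderings of Madhava's table give the 45° entry as 2430′51″15‴; that specific value is assumed here (the digits themselves were lost in this text), and it reproduces the stated result:

```python
from math import pi

R = 21600 / (2 * pi)                # radius in arcminutes
value = 2430 + 51 / 60 + 15 / 3600  # 2430′51″15‴, assumed table entry
sine_45 = value / R
print(f"{sine_45:.8f}")             # 0.70710681, as stated above
```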
No work of Madhava detailing the methods he used to compute the sine table has survived. However, the writings of later Kerala mathematicians, including Nilakantha Somayaji (Tantrasangraha) and Jyeshtadeva (Yuktibhāṣā), give ample references to Madhava's accomplishments, and from these it is conjectured that Madhava computed his sine table using the power series expansion of sin x:
|
https://en.wikipedia.org/wiki/Madhava%27s_sine_table
|
Laws of Form (hereinafter LoF) is a book by G. Spencer-Brown, published in 1969, that straddles the boundary between mathematics and philosophy. LoF describes three distinct logical systems:
"Boundary algebra" is Meguire's (2011) term for the union of the primary algebra and the primary arithmetic. Laws of Form sometimes loosely refers to the "primary algebra" as well as to LoF.
The preface states that the work was first explored in 1959, and Spencer Brown cites Bertrand Russell as being supportive of his endeavour.[a] He also thanks J. C. P. Miller of University College London for helping with the proofreading and offering other guidance. In 1963 Spencer Brown was invited by Harry Frost, staff lecturer in the physical sciences at the Department of Extra-Mural Studies of the University of London, to deliver a course on the mathematics of logic.[citation needed]
LoF emerged from work in electronic engineering its author did around 1960. Key ideas of LoF were first outlined in his 1961 manuscript Design with the Nor, which remained unpublished until 2021,[1] and were further refined during subsequent lectures on mathematical logic he gave under the auspices of the University of London's extension program. LoF has appeared in several editions. The second series of editions appeared in 1972 with the "Preface to the First American Edition", which emphasised the use of self-referential paradoxes;[2] the most recent edition is a 1997 German translation. LoF has never gone out of print.
LoF's mystical and declamatory prose and its love of paradox make it a challenging read. Spencer-Brown was influenced by Wittgenstein and R. D. Laing. LoF also echoes a number of themes from the writings of Charles Sanders Peirce, Bertrand Russell, and Alfred North Whitehead.
The work has had curious effects on some of its readers. For example, it has been claimed, on obscure grounds, that the entire book is written in an operational way, giving instructions to the reader instead of telling them what "is", and that, in accordance with G. Spencer-Brown's interest in paradoxes, the only sentence that states that something is, is the sentence saying that no such statements are used in the book.[3] The claim further asserts that, except for this one sentence, the book can be seen as an example of E-Prime. The basis for the claim is obscure, whether in terms of incentive, logical merit, or fact, because the book routinely and naturally uses the verb to be throughout, in all its grammatical forms, as may be seen both in the original and in the quotes shown below.[4]
Ostensibly a work of formal mathematics and philosophy, LoF became something of a cult classic: it was praised by Heinz von Foerster when he reviewed it for the Whole Earth Catalog.[5] Admirers point to LoF as embodying an enigmatic "mathematics of consciousness", its algebraic symbolism capturing an (perhaps even "the") implicit root of cognition: the ability to "distinguish". LoF argues that the primary algebra reveals striking connections among logic, Boolean algebra, and arithmetic, and the philosophy of language and mind.
Stafford Beer wrote in a review for Nature, "When one thinks of all that Russell went through sixty years ago, to write the Principia, and all we his readers underwent in wrestling with those three vast volumes, it is almost sad".[6]
Banaschewski (1977)[7] argues that the primary algebra is nothing but new notation for Boolean algebra. Indeed, the two-element Boolean algebra 2 can be seen as the intended interpretation of the primary algebra. Yet the notation of the primary algebra:
Moreover, the syntax of the primary algebra can be extended to formal systems other than 2 and sentential logic, resulting in boundary mathematics (see § Related work below).
LoF has influenced, among others, Heinz von Foerster, Louis Kauffman, Niklas Luhmann, Humberto Maturana, Francisco Varela and William Bricken. Some of these authors have modified the primary algebra in a variety of interesting ways.
LoF claimed that certain well-known mathematical conjectures of very long standing, such as the four color theorem, Fermat's Last Theorem, and the Goldbach conjecture, are provable using extensions of the primary algebra. Spencer-Brown eventually circulated a purported proof of the four color theorem, but it was met with skepticism.[8]
The symbol:
Also called the "mark" or "cross", this symbol is the essential feature of the Laws of Form. In Spencer-Brown's inimitable and enigmatic fashion, the Mark symbolizes the root of cognition: the dualistic Mark indicates the capability of differentiating a "this" from "everything else but this".
In LoF, a Cross denotes the drawing of a "distinction", and can be thought of as signifying all of the following at once:
All three readings imply an action on the part of the cognitive entity (e.g., a person) making the distinction. As LoF puts it:
"The first command:
can well be expressed in such ways as:
Or:
The counterpoint to the Marked state is the Unmarked state, which is simply nothing: the void, or the unexpressable infinite, represented by a blank space. It is the absence of a Cross; no distinction has been made, and nothing has been crossed. The Marked state and the void are the two primitive values of the Laws of Form.
The Cross can be seen as denoting the distinction between two states, one "considered as a symbol" and another not so considered. From this fact arises a curious resonance with some theories of consciousness and language. Paradoxically, the Form is at once Observer and Observed, and is also the creative act of making an observation. LoF (excluding back matter) closes with the words:
...the first distinction, the Mark and the observer are not only interchangeable, but, in the form, identical.
C. S. Peirce came to a related insight in the 1890s; see § Related work.
The syntax of the primary arithmetic goes as follows. There are just two atomic expressions:
There are two inductive rules:
The semantics of the primary arithmetic are perhaps nothing more than the sole explicit definition in LoF: "Distinction is perfect continence".
Let the "unmarked state" be a synonym for the void. Let an empty Cross denote the "marked state". To cross is to move from one value, the unmarked or marked state, to the other. We can now state the "arithmetical" axioms A1 and A2, which ground the primary arithmetic (and hence all of the Laws of Form):
"A1. The law of Calling". Calling twice from a state is indistinguishable from calling once. To make a distinction twice has the same effect as making it once. For example, saying "Let there be light" and then saying "Let there be light" again, is the same as saying it once. Formally:
"A2. The law of Crossing". After crossing from the unmarked to the marked state, crossing again ("recrossing") starting from the marked state returns one to the unmarked state. Hence recrossing annuls crossing. Formally:
In both A1 and A2, the expression to the right of '=' has fewer symbols than the expression to the left. This suggests that every primary arithmetic expression can, by repeated application of A1 and A2, be simplified to one of two states: the marked or the unmarked state. This is indeed the case, and the result is the expression's "simplification". The two fundamental metatheorems of the primary arithmetic state that:
Thus the relation of logical equivalence partitions all primary arithmetic expressions into two equivalence classes: those that simplify to the Cross, and those that simplify to the void.
A1 and A2 have loose analogs in the properties of series and parallel electrical circuits, and in other ways of diagramming processes, including flowcharting. A1 corresponds to a parallel connection and A2 to a series connection, with the understanding that making a distinction corresponds to changing how two points in a circuit are connected, and not simply to adding wiring.
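The simplification procedure can be sketched as string rewriting, with Crosses written as parenthesis pairs (a convenient ASCII stand-in for the notation, not LoF's own):

```python
def simplify(expr):
    """Reduce a primary-arithmetic expression to '()' (marked) or '' (unmarked).

    Crosses are parenthesis pairs; concatenation is juxtaposition.
    A1 (calling) and A2 (crossing) are applied until no rule fires;
    confluence makes the order of application irrelevant.
    """
    prev = None
    while expr != prev:
        prev = expr
        expr = expr.replace("(())", "")    # A2: recrossing annuls crossing
        expr = expr.replace("()()", "()")  # A1: calling twice = calling once
    return expr

# e.g. simplify("((()))") -> "()"   and   simplify("(()())") -> ""
```

Every variable-free expression reaches one of the two primitive values, illustrating the two metatheorems stated above.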
The primary arithmetic is analogous to the following formal languages from mathematics and computer science:
The phrase "calculus of indications" in LoF is a synonym for "primary arithmetic".
While LoF does not formally define "canon", the following two excerpts from the Notes to chapter 2 are apt:
The more important structures of command are sometimes calledcanons. They are the ways in which the guiding injunctions appear to group themselves in constellations, and are thus by no means independent of each other. A canon bears the distinction of being outside (i.e., describing) the system under construction, but a command to construct (e.g., 'draw a distinction'), even though it may be of central importance, is not a canon. A canon is an order, or set of orders, to permit or allow, but not to construct or create.
...the primary form of mathematical communication is not description but injunction... Music is a similar art form, the composer does not even attempt to describe the set of sounds he has in mind, much less the set of feelings occasioned through them, but writes down a set of commands which, if they are obeyed by the performer, can result in a reproduction, to the listener, of the composer's original experience.
These excerpts relate to the distinction in metalogic between the object language, the formal language of the logical system under discussion, and the metalanguage, a language (often a natural language) distinct from the object language, employed to exposit and discuss the object language. The first quote seems to assert that the canons are part of the metalanguage. The second quote seems to assert that statements in the object language are essentially commands addressed to the reader by the author. Neither assertion holds in standard metalogic.
Given any valid primary arithmetic expression, insert into one or more locations any number of Latin letters bearing optional numerical subscripts; the result is a primary algebra formula. Letters so employed in mathematics and logic are called variables. A primary algebra variable indicates a location where one can write the primitive value or its complement. Multiple instances of the same variable denote multiple locations of the same primitive value.
The sign '=' may link two logically equivalent expressions; the result is an equation. By "logically equivalent" is meant that the two expressions have the same simplification. Logical equivalence is an equivalence relation over the set of primary algebra formulas, governed by the rules R1 and R2. Let "C" and "D" be formulae each containing at least one instance of the subformula A:
R2 is employed very frequently in primary algebra demonstrations (see below), almost always silently. These rules are routinely invoked in logic and most of mathematics, nearly always unconsciously.
The primary algebra consists of equations, i.e., pairs of formulae linked by an infix operator '='. R1 and R2 enable transforming one equation into another. Hence the primary algebra is an equational formal system, like the many algebraic structures, including Boolean algebra, that are varieties. Equational logic was common before Principia Mathematica (e.g. Johnson (1892)), and has present-day advocates (Gries & Schneider (1993)).
Conventional mathematical logic consists of tautological formulae, signalled by a prefixed turnstile. To denote that the primary algebra formula A is a tautology, simply write "A = ". If one replaces '=' in R1 and R2 with the biconditional, the resulting rules hold in conventional logic. However, conventional logic relies mainly on the rule modus ponens; thus conventional logic is ponential. The equational-ponential dichotomy distills much of what distinguishes mathematical logic from the rest of mathematics.
An initial is a primary algebra equation verifiable by a decision procedure and as such is not an axiom. LoF lays down the initials:
The absence of anything to the right of the "=" above is deliberate.
J2 is the familiar distributive law of sentential logic and Boolean algebra.
Another set of initials, friendlier to calculations, is:
It is thanks to C2 that the primary algebra is a lattice. By virtue of J1a, it is a complemented lattice whose upper bound is the Cross. By J0, the double Cross (a Cross within a Cross) is the corresponding lower bound and identity element. J0 is also an algebraic version of A2 and makes clear the sense in which the double Cross aliases with the blank page.
T13 in LoF generalizes C2 as follows. Any primary algebra (or sentential logic) formula B can be viewed as an ordered tree with branches. Then:
T13: A subformula A can be copied at will into any depth of B greater than that of A, as long as A and its copy are in the same branch of B. Also, given multiple instances of A in the same branch of B, all instances but the shallowest are redundant.
While a proof of T13 would require induction, the intuition underlying it should be clear.
C2 or its equivalent is named:
Perhaps the first instance of an axiom or rule with the power of C2 was the "Rule of (De)Iteration", combining T13 and AA = A, of C. S. Peirce's existential graphs.
LoF asserts that concatenation can be read as commuting and associating by default and hence need not be explicitly assumed or demonstrated. (Peirce made a similar assertion about his existential graphs.) Let a period be a temporary notation to establish grouping. That concatenation commutes and associates may then be demonstrated from the:
Once associativity has been demonstrated, the period can be discarded.
The initials in Meguire (2011) are AC.D = CD.A, called B1; B2, J0 above; B3, J1a above; and B4, C2. By design, these initials are very similar to the axioms for an abelian group, G1-G3 below.
The primary algebra contains three kinds of proved assertions:
The distinction between consequence and theorem holds for all formal systems, including mathematics and logic, but is usually not made explicit. A demonstration or decision procedure can be carried out and verified by computer. The proof of a theorem cannot be.
Let A and B be primary algebra formulas. A demonstration of A = B may proceed in either of two ways:
Once A = B has been demonstrated, A = B can be invoked to justify steps in subsequent demonstrations. Primary algebra demonstrations and calculations often require no more than J1a, J2, C2, and the consequences (C3 in LoF), (C1), and AA = A (C5).
The consequence (C7' in LoF) enables an algorithm, sketched in LoF's proof of T14, that transforms an arbitrary primary algebra formula to an equivalent formula whose depth does not exceed two. The result is a normal form, the primary algebra analog of the conjunctive normal form. LoF (T14-15) proves the primary algebra analog of the well-known Boolean algebra theorem that every formula has a normal form.
Let A be a subformula of some formula B. When paired with C3, J1a can be viewed as the closure condition for calculations: B is a tautology if and only if A and (A) both appear in depth 0 of B. A related condition appears in some versions of natural deduction. A demonstration by calculation is often little more than:
The last step of a calculation always invokes J1a.
LoF includes elegant new proofs of the following standard metatheory:
That sentential logic is complete is taught in every first university course in mathematical logic. But university courses in Boolean algebra seldom mention the completeness of 2.
If the Marked and Unmarked states are read as the Boolean values 1 and 0 (or True and False), the primary algebra interprets 2 (or sentential logic). LoF shows how the primary algebra can interpret the syllogism. Each of these interpretations is discussed in a subsection below. Extending the primary algebra so that it could interpret standard first-order logic has yet to be done, but Peirce's beta existential graphs suggest that this extension is feasible.
The primary algebra is an elegant minimalist notation for the two-element Boolean algebra 2. Let:
If join (meet) interprets AC, then meet (join) interprets \overline{{\overline{A|}}\ {\overline{C|}}\,|}. Hence the primary algebra and 2 are isomorphic but for one detail: primary algebra complementation can be nullary, in which case it denotes a primitive value. Modulo this detail, 2 is a model of the primary algebra. The primary arithmetic suggests the following arithmetic axiomatization of 2: 1+1 = 1+0 = 0+1 = 1 = ~0, and 0+0 = 0 = ~1.
The set B = {marked, unmarked} is the Boolean domain or carrier. In the language of universal algebra, the primary algebra is the algebraic structure ⟨B, − −, \overline{−\,|}, \overline{\ |}⟩ of type ⟨2,1,0⟩. The expressive adequacy of the Sheffer stroke points to the primary algebra also being a ⟨B, \overline{−\ −\,|}, \overline{\ |}⟩ algebra of type ⟨2,0⟩. In both cases, the identities are J1a, J0, C2, and ACD = CDA. Since the primary algebra and 2 are isomorphic, 2 can be seen as a ⟨B, +, ¬, 1⟩ algebra of type ⟨2,1,0⟩. This description of 2 is simpler than the conventional one, namely a ⟨B, +, ×, ¬, 1, 0⟩ algebra of type ⟨2,2,1,0,0⟩.
The two possible interpretations are dual to each other in the Boolean sense. (In Boolean algebra, exchanging AND ↔ OR and 1 ↔ 0 throughout an equation yields an equally valid equation.) The identities remain invariant regardless of which interpretation is chosen, so the transformations or modes of calculation remain the same; only the interpretation of each form differs. Example: J1a is \overline{A|}\,A = \overline{\ |}. Interpreting juxtaposition as OR and the Cross as 1, this translates to ¬A ∨ A = 1, which is true. Interpreting juxtaposition as AND and the Cross as 0, this translates to ¬A ∧ A = 0, which is true as well (and the dual of ¬A ∨ A = 1).
The marked state, the empty Cross, is both an operator (e.g., the complement) and an operand (e.g., the value 1). This can be summarized neatly by defining two functions m(x) and u(x) for the marked and unmarked state, respectively: let m(x) = 1 − max({0} ∪ x) and u(x) = max({0} ∪ x), where x is a (possibly empty) set of boolean values.
This reveals that u is either the value 0 or the OR operator, while m is either the value 1 or the NOR operator, depending on whether x is the empty set or not. As noted above, there is a dual form of these functions exchanging AND ↔ OR and 1 ↔ 0.
Let the blank page denote False, and let a Cross be read as Not. Then the primary arithmetic has the following sentential reading:
The primary algebra interprets sentential logic as follows. A letter represents any given sentential expression. Thus:
Thus any expression in sentential logic has a primary algebra translation. Equivalently, the primary algebra interprets sentential logic. Given an assignment of every variable to the Marked or Unmarked states, this primary algebra translation reduces to a primary arithmetic expression, which can be simplified. Repeating this exercise for all possible assignments of the two primitive values to each variable reveals whether the original expression is tautological or satisfiable. This is an example of a decision procedure, one more or less in the spirit of conventional truth tables. Given some primary algebra formula containing N variables, this decision procedure requires simplifying 2^N primary arithmetic formulae. For a less tedious decision procedure more in the spirit of Quine's "truth value analysis", see Meguire (2003).
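The brute-force procedure can be sketched with Crosses written as parentheses (an ASCII stand-in for the notation): substitute the marked or unmarked state for each variable, simplify with A1 and A2, and check that every assignment yields the Cross:

```python
from itertools import product

def simplify(e):
    """Reduce a variable-free expression to '()' (marked) or '' (unmarked)."""
    prev = None
    while e != prev:
        prev = e
        e = e.replace("(())", "").replace("()()", "()")  # A2 then A1
    return e

def is_tautology(formula, variables):
    """Try all 2^N assignments of marked/unmarked to the N variables."""
    for values in product(["()", ""], repeat=len(variables)):
        instance = formula
        for var, val in zip(variables, values):
            instance = instance.replace(var, val)
        if simplify(instance) != "()":
            return False
    return True

# J1a read as a tautology: (a) a simplifies to the Cross for either
# value of a, so is_tautology("(a)a", ["a"]) holds.
```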
Schwartz (1981) proved that the primary algebra is equivalent (syntactically, semantically, and proof-theoretically) with the classical propositional calculus. Likewise, it can be shown that the primary algebra is syntactically equivalent with expressions built up in the usual way from the classical truth values true and false, the logical connectives NOT, OR, and AND, and parentheses.
Interpreting the Unmarked State as False is wholly arbitrary; that state can equally well be read as True. All that is required is that the interpretation of concatenation change from OR to AND; IF A THEN B then receives the dual translation. More generally, the primary algebra is "self-dual", meaning that any primary algebra formula has two sentential or Boolean readings, each the dual of the other. Another consequence of self-duality is the irrelevance of De Morgan's laws; those laws are built into the syntax of the primary algebra from the outset.
The true nature of the distinction between the primary algebra on the one hand, and 2 and sentential logic on the other, now emerges. In the latter formalisms, complementation/negation operating on "nothing" is not well-formed. But an empty Cross is a well-formed primary algebra expression, denoting the Marked state, a primitive value. Hence a nonempty Cross is an operator, while an empty Cross is an operand, because it denotes a primitive value. Thus the primary algebra reveals that the heretofore distinct mathematical concepts of operator and operand are in fact merely different facets of a single fundamental action, the making of a distinction.
Appendix 2 of LoF shows how to translate traditional syllogisms and sorites into the primary algebra. A valid syllogism is simply one whose primary algebra translation simplifies to an empty Cross. Let A* denote a literal, i.e., either A or \overline{A|}, indifferently. Then every syllogism that does not require that one or more terms be assumed nonempty is one of 24 possible permutations of a generalization of Barbara whose primary algebra equivalent is \overline{A^{*}B\,|}\ \overline{{\overline{B|}}\,C^{*}|}\ A^{*}C^{*}. These 24 permutations include the 19 syllogistic forms deemed valid in Aristotelian and medieval logic. This primary algebra translation of syllogistic logic also suggests that the primary algebra can interpret monadic and term logic, and that the primary algebra has affinities to the Boolean term schemata of Quine (1982), Part II.
The following calculation of Leibniz's nontrivial Praeclarum Theorema exemplifies the demonstrative power of the primary algebra. Let C1 be \overline{{\overline{A|}}\,|} = A, C2 be A\,\overline{AB|} = A\,\overline{B|}, C3 be \overline{\ |}\,A = \overline{\ |}, J1a be \overline{A|}\,A = \overline{\ |}, and let OI mean that variables and subformulae have been reordered in a way that commutativity and associativity permit.
The primary algebra embodies a point noted by Huntington in 1933: Boolean algebra requires, in addition to one unary operation, one, and not two, binary operations. Hence the seldom-noted fact that Boolean algebras are magmas. (Magmas were called groupoids until the latter term was appropriated by category theory.) To see this, note that the primary algebra is a commutative:
Groups also require a unary operation, called inverse, the group counterpart of Boolean complementation. Let \overline{a|} denote the inverse of a, and let the empty Cross denote the group identity element. Then groups and the primary algebra have the same signatures: they are both ⟨− −, \overline{−\,|}, \overline{\ |}⟩ algebras of type ⟨2,1,0⟩. Hence the primary algebra is a boundary algebra. The axioms for an abelian group, in boundary notation, are:
From G1 and G2, the commutativity and associativity of concatenation may be derived, as above. Note that G3 and J1a are identical. G2 and J0 would be identical if \overline{{\overline{\ |}}\,|} = \overline{\ |} replaced A2; this is the defining arithmetical identity of group theory, in boundary notation.
The primary algebra differs from an abelian group in two ways:
Both A2 and C2 follow from B's being an ordered set.
Chapter 11 of LoF introduces equations of the second degree, composed of recursive formulae that can be seen as having "infinite" depth. Some recursive formulae simplify to the marked or unmarked state. Others "oscillate" indefinitely between the two states depending on whether a given depth is even or odd. Specifically, certain recursive formulae can be interpreted as oscillating between true and false over successive intervals of time, in which case a formula is deemed to have an "imaginary" truth value. Thus the flow of time may be introduced into the primary algebra.
Turney (1986) shows how these recursive formulae can be interpreted via Alonzo Church's Restricted Recursive Arithmetic (RRA). Church introduced RRA in 1955 as an axiomatic formalization of finite automata. Turney presents a general method for translating equations of the second degree into Church's RRA, illustrating his method using the formulae E1, E2, and E4 in chapter 11 of LoF. This translation into RRA sheds light on the names Spencer-Brown gave to E1 and E4, namely "memory" and "counter". RRA thus formalizes and clarifies LoF's notion of an imaginary truth value.
Gottfried Leibniz, in memoranda not published before the late 19th and early 20th centuries, invented Boolean logic. His notation was isomorphic to that of LoF: concatenation read as conjunction, and "non-(X)" read as the complement of X. Recognition of Leibniz's pioneering role in algebraic logic was foreshadowed by Lewis (1918) and Rescher (1954). But a full appreciation of Leibniz's accomplishments had to await the work of Wolfgang Lenzen, published in the 1980s and reviewed in Lenzen (2004).
Charles Sanders Peirce (1839-1914) anticipated the primary algebra in three veins of work:
LoF cites vol. 4 of Peirce's Collected Papers, the source for the formalisms in (2) and (3) above.
(1)-(3) were virtually unknown at the time (the 1960s) and in the place (the UK) where LoF was written. Peirce's semiotics, about which LoF is silent, may yet shed light on the philosophical aspects of LoF.
Kauffman (2001) discusses another notation similar to that of LoF, that of a 1917 article by Jean Nicod, who was a disciple of Bertrand Russell's.
The above formalisms are, like the primary algebra, all instances of boundary mathematics, i.e., mathematics whose syntax is limited to letters and brackets (enclosing devices). A minimalist syntax of this nature is a "boundary notation". Boundary notation is free of infix, prefix, or postfix operator symbols. The very well known curly braces ('{', '}') of set theory can be seen as a boundary notation.
The work of Leibniz, Peirce, and Nicod is innocent of metatheory, as they wrote before Emil Post's landmark 1920 paper (which LoF cites), proving that sentential logic is complete, and before Hilbert and Łukasiewicz showed how to prove axiom independence using models.
Craig (1979) argued that the world, and how humans perceive and interact with that world, has a rich Boolean structure. Craig was an orthodox logician and an authority on algebraic logic.
Second-generation cognitive science emerged in the 1970s, after LoF was written. On cognitive science and its relevance to Boolean algebra, logic, and set theory, see Lakoff (1987) (see index entries under "Image schema examples: container") and Lakoff & Núñez (2000). Neither book cites LoF.
The biologists and cognitive scientists Humberto Maturana and his student Francisco Varela both discuss LoF in their writings, which identify "distinction" as the fundamental cognitive act. The Berkeley psychologist and cognitive scientist Eleanor Rosch has written extensively on the closely related notion of categorization.
Other formal systems with possible affinities to the primary algebra include:
The primary arithmetic and algebra are a minimalist formalism for sentential logic and Boolean algebra. Other minimalist formalisms having the power of set theory include:
|
https://en.wikipedia.org/wiki/Laws_of_Form
|
In machine learning, instance-based learning (sometimes called memory-based learning[1]) is a family of learning algorithms that, instead of performing explicit generalization, compare new problem instances with instances seen in training, which have been stored in memory. Because computation is postponed until a new instance is observed, these algorithms are sometimes referred to as "lazy".[2]
It is called instance-based because it constructs hypotheses directly from the training instances themselves.[3] This means that the hypothesis complexity can grow with the data:[3] in the worst case, a hypothesis is a list of n training items, and the computational complexity of classifying a single new instance is O(n). One advantage that instance-based learning has over other methods of machine learning is its ability to adapt its model to previously unseen data: an instance-based learner may simply store a new instance or throw an old instance away.
Examples of instance-based learning algorithms are the k-nearest neighbors algorithm, kernel machines and RBF networks.[2]: ch. 8 These store (a subset of) their training set; when predicting a value/class for a new instance, they compute distances or similarities between this instance and the training instances to make a decision.
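A minimal k-nearest-neighbour classifier illustrates the idea: "training" is just storage, and all computation happens at prediction time. This is a sketch using Euclidean distance and majority vote, not any particular library's API:

```python
from collections import Counter
from math import dist  # Euclidean distance (Python 3.8+)

class KNNClassifier:
    """Lazy learner: fitting only stores the instances."""

    def __init__(self, k=3):
        self.k = k
        self.instances = []  # list of (features, label) pairs

    def fit(self, X, y):
        self.instances = list(zip(X, y))
        return self

    def predict(self, x):
        # O(n) scan of the stored instances for each query
        nearest = sorted(self.instances,
                         key=lambda inst: dist(inst[0], x))[:self.k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]

clf = KNNClassifier(k=3).fit([(0, 0), (0, 1), (5, 5), (6, 5)],
                             ["a", "a", "b", "b"])
label = clf.predict((5, 6))  # majority of the 3 nearest neighbours: "b"
```

Note that the hypothesis is literally the list of stored instances, so its size (and the per-query cost) grows with the training data, as described above.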
To combat the memory cost of storing all training instances, as well as the risk of overfitting to noise in the training set, instance reduction algorithms have been proposed.[4]
|
https://en.wikipedia.org/wiki/Instance-based_learning
|
The Gittins index is a measure of the reward that can be achieved through a given stochastic process with certain properties, namely: the process has an ultimate termination state and evolves with an option, at each intermediate state, of terminating. Upon terminating at a given state, the reward achieved is the sum of the probabilistic expected rewards associated with every state from the actual terminating state to the ultimate terminal state, inclusive. The index is a real scalar.
To illustrate the theory, consider two technologies from a developing sector such as electricity generation: wind power and wave power. If we are presented with the two technologies when both are merely proposed ideas, we cannot say which will be better in the long run, as we have, as yet, no data on which to base our judgments.[1] It would be easy to say that wave power would be too problematic to develop, as it seems easier to put up many wind turbines than to make long floating generators, tow them out to sea, and lay the necessary cables.
If we were to make a judgment call at that early time in development we could be condemning one technology to being put on the shelf and the other would be developed and put into operation. If we develop both technologies we would be able to make a judgment call on each by comparing the progress of each technology at a set time interval such as every three months. The decisions we make about investment in the next stage would be based on those results.[1]
In a 1979 paper called Bandit Processes and Dynamic Allocation Indices, John C. Gittins suggests a solution for problems such as this. He takes the two basic functions of a "scheduling problem" and a "multi-armed bandit" problem[2] and shows how these problems can be solved using dynamic allocation indices. He first takes the "scheduling problem" and reduces it to a machine which has to perform jobs and has a set time period, every hour or day for example, to finish each job in. The machine is given a reward value, based on finishing or not within the time period, and a probability value of whether it will finish or not is calculated for each job. The problem is "to decide which job to process next at each stage so as to maximize the total expected reward."[1] He then moves on to the "multi-armed bandit problem", where each pull on a "one armed bandit" lever is allocated a reward for a successful pull, and a zero reward for an unsuccessful pull. The sequence of successes forms a Bernoulli process and has an unknown probability of success. There are multiple "bandits", and the distribution of successful pulls is calculated and different for each machine. Gittins states that the problem here is "to decide which arm to pull next at each stage so as to maximize the total expected reward from an infinite sequence of pulls."[1]
Gittins says that "Both the problems described above involve a sequence of decisions, each of which is based on more information than its predecessors, and both problems may be tackled by dynamic allocation indices."[2]
In applied mathematics, the "Gittins index" is a real scalar value associated to the state of a stochastic process with a reward function and with a probability of termination. It is a measure of the reward that can be achieved by the process evolving from that state on, under the probability that it will be terminated in future. The "index policy" induced by the Gittins index, consisting of choosing at any time the stochastic process with the currently highest Gittins index, is the solution of some stopping problems such as the one of dynamic allocation, where a decision-maker has to maximize the total reward by distributing a limited amount of effort to a number of competing projects, each returning a stochastic reward. If the projects are independent from each other and only one project at a time may evolve, the problem is called multi-armed bandit (one type of stochastic scheduling problem) and the Gittins index policy is optimal. If multiple projects can evolve, the problem is called restless bandit and the Gittins index policy is a known good heuristic, but no optimal solution exists in general. In fact, in general this problem is NP-complete and it is generally accepted that no feasible solution can be found.
Questions about the optimal stopping policies in the context of clinical trials have been open from the 1940s, and in the 1960s a few authors analyzed simple models leading to optimal index policies,[3] but it was only in the 1970s that Gittins and his collaborators demonstrated in a Markovian framework that the optimal solution of the general case is an index policy whose "dynamic allocation index" is computable in principle for every state of each project as a function of the single project's dynamics.[2][4] In parallel to Gittins, Martin Weitzman established the same result in the economics literature.[5]
Soon after the seminal paper of Gittins, Peter Whittle[6] demonstrated that the index emerges as a Lagrange multiplier from a dynamic programming formulation of the problem called the retirement process, and conjectured that the same index would be a good heuristic in a more general setup named the restless bandit. The question of how to actually calculate the index for Markov chains was first addressed by Varaiya and his collaborators[7] with an algorithm that computes the indices from the largest down to the smallest, and by Chen and Katehakis,[8] who showed that standard LP could be used to calculate the index of a state without requiring its calculation for all states with higher index values.
L. C. M. Kallenberg[9] provided a parametric LP implementation to compute the indices for all states of a Markov chain. Further, Katehakis and Veinott[10] demonstrated that the index is the expected reward of a Markov decision process constructed over the Markov chain, known as Restart in State, and can be calculated exactly by solving that problem with the policy iteration algorithm, or approximately with the value iteration algorithm. This approach also has the advantage of calculating the index for one specific state without having to calculate all the greater indices, and it is valid under more general state space conditions. A faster algorithm for the calculation of all indices was obtained in 2004 by Sonin[11] as a consequence of his elimination algorithm for the optimal stopping of a Markov chain. In this algorithm the termination probability of the process may depend on the current state rather than being a fixed factor. A faster algorithm was proposed in 2007 by Niño-Mora[12] by exploiting the structure of a parametric simplex to reduce the computational effort of the pivot steps, thereby achieving the same complexity as the Gaussian elimination algorithm. Cowan and Katehakis (2014)[13] provide a solution to the problem with potentially non-Markovian, uncountable-state-space reward processes, under frameworks in which either the discount factors may be non-uniform and vary over time, or the periods of activation of each bandit may not be fixed or uniform, being subject instead to a possibly stochastic duration of activation before a change to a different bandit is allowed. The solution is based on generalized restart-in-state indices.
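The restart-in-state characterization lends itself to a direct numerical sketch. The following is a minimal value-iteration computation for a small finite Markov reward chain, assuming a transition matrix P and reward vector r; the function name and the (1 − β) normalization convention are choices of this sketch, not part of the cited formulations:

```python
import numpy as np

def gittins_indices(P, r, beta, iters=2000):
    """Approximate Gittins indices of a finite Markov reward chain by value
    iteration on the restart-in-state MDP (sketch after Katehakis and Veinott).

    P: (n, n) transition matrix, r: (n,) rewards, beta: discount in (0, 1).
    In the MDP M_i the two actions at any state are "continue from the current
    state" or "restart from state i"; the index of i is (1 - beta) times the
    optimal value at i.
    """
    n = len(r)
    idx = np.empty(n)
    for i in range(n):
        V = np.zeros(n)
        for _ in range(iters):
            cont = r + beta * (P @ V)            # continue from each state
            restart = r[i] + beta * (P[i] @ V)   # restart from state i
            V = np.maximum(cont, restart)
        idx[i] = (1 - beta) * V[i]
    return idx

# Two absorbing states with constant rewards 1 and 2: with this
# normalization, each state's index equals its per-step reward.
print(gittins_indices(np.eye(2), np.array([1.0, 2.0]), 0.5))  # [1. 2.]
```

The inner loop is plain value iteration, so convergence is geometric in β; policy iteration would reach the exact fixed point in finitely many steps.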
The classical definition by Gittins et al. is:
ν(i) = sup_{τ>0} ⟨∑_{t=0}^{τ−1} β^t R[Z(t)]⟩_{Z(0)=i} / ⟨∑_{t=0}^{τ−1} β^t⟩_{Z(0)=i}
where Z(·) is a stochastic process, R(i) is the utility (also called reward) associated to the discrete state i, β < 1 is the probability that the stochastic process does not terminate, and ⟨·⟩_c is the conditional expectation operator given c:
⟨X⟩_c = ∑_{x∈χ} x P(X = x | c)
with χ being the domain of X.
The dynamic programming formulation in terms of retirement process, given by Whittle, is:
where v(i, k) is the value function, with the same notation as above. It holds that
If Z(·) is a Markov chain with rewards, the interpretation of Katehakis and Veinott (1987) associates to every state the action of restarting from one arbitrary state i, thereby constructing a Markov decision process M_i.
The Gittins index of that state i is the highest total reward which can be achieved on M_i if one can always choose to continue or restart from that state i.
where π indicates a policy over M_i. It holds that
If the probability of survival β(i) depends on the state i, a generalization introduced by Sonin[11] (2008) defines the Gittins index α(i) as the maximum discounted total reward per chance of termination.
where
If β^t is replaced by ∏_{j=0}^{t−1} β[Z(j)] in the definitions of ν(i), w(i) and h(i), then it holds that
this observation leads Sonin[11] to conclude that α(i), and not ν(i), is the "true meaning" of the Gittins index.
In queueing theory, the Gittins index is used to determine the optimal scheduling of jobs, e.g., in an M/G/1 queue. The mean completion time of jobs under a Gittins index schedule can be determined using the SOAP approach.[14] Note that the dynamics of the queue are intrinsically Markovian, and stochasticity is due to the arrival and service processes. This is in contrast to most of the works in the learning literature, where stochasticity is explicitly accounted for through a noise term.
While conventional Gittins indices induce a policy that optimizes the accrual of a reward, a common problem setting consists of optimizing the ratio of accrued rewards. For example, this is the case for systems maximizing bandwidth, consisting of data over time, or minimizing power consumption, consisting of energy over time.
This class of problems is different from the optimization of a semi-Markov reward process, because the latter might select states with a disproportionate sojourn time just for accruing a higher reward. Instead, it corresponds to the class of linear-fractional Markov reward optimization problems.
However, a detrimental aspect of such ratio optimizations is that, once the achieved ratio in some state is high, the optimization might select states leading to a low ratio because they bear a high probability of termination, so that the process is likely to terminate before the ratio drops significantly. A problem setting to prevent such early terminations consists of defining the optimization as maximization of the future ratio seen by each state. An indexation is conjectured to exist for this problem, to be computable as a simple variation on existing restart-in-state or state elimination algorithms, and has been evaluated to work well in practice.[15]
|
https://en.wikipedia.org/wiki/Gittins_index
|
In statistics, a quadratic classifier is a statistical classifier that uses a quadratic decision surface to separate measurements of two or more classes of objects or events. It is a more general version of the linear classifier.
Statistical classification considers a set of vectors of observations x of an object or event, each of which has a known type y. This set is referred to as the training set. The problem is then to determine, for a given new observation vector, what the best class should be. For a quadratic classifier, the correct solution is assumed to be quadratic in the measurements, so y will be decided based on {\displaystyle \mathbf {x^{T}Ax} +\mathbf {b^{T}x} +c}.
In the special case where each observation consists of two measurements, this means that the surfaces separating the classes will be conic sections (i.e., either a line, a circle or ellipse, a parabola or a hyperbola). In this sense, we can state that a quadratic model is a generalization of the linear model, and its use is justified by the desire to extend the classifier's ability to represent more complex separating surfaces.
Quadratic discriminant analysis (QDA) is closely related to linear discriminant analysis (LDA), where it is assumed that the measurements from each class are normally distributed.[1] Unlike LDA, however, in QDA there is no assumption that the covariance of each of the classes is identical.[2] When the normality assumption is true, the best possible test for the hypothesis that a given measurement is from a given class is the likelihood ratio test. Suppose there are only two groups, with means μ0, μ1 and covariance matrices Σ0, Σ1 corresponding to y = 0 and y = 1 respectively. Then the likelihood ratio is given by {\displaystyle {\text{Likelihood ratio}}={\frac {{\sqrt {2\pi |\Sigma _{1}|}}^{-1}\exp \left(-{\frac {1}{2}}(\mathbf {x} -{\boldsymbol {\mu }}_{1})^{T}\Sigma _{1}^{-1}(\mathbf {x} -{\boldsymbol {\mu }}_{1})\right)}{{\sqrt {2\pi |\Sigma _{0}|}}^{-1}\exp \left(-{\frac {1}{2}}(\mathbf {x} -{\boldsymbol {\mu }}_{0})^{T}\Sigma _{0}^{-1}(\mathbf {x} -{\boldsymbol {\mu }}_{0})\right)}}<t} for some threshold t. After some rearrangement, it can be shown that the resulting separating surface between the classes is a quadratic. The sample estimates of the mean vector and variance-covariance matrices will substitute for the population quantities in this formula.
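Assuming known (or estimated) class means and covariances, the log of this ratio can be computed directly; constants common to both Gaussian densities cancel in the ratio, so this sketch omits them:

```python
import numpy as np

def qda_log_likelihood_ratio(x, mu0, S0, mu1, S1):
    """Log of p(x | class 1) / p(x | class 0) for two Gaussian classes.

    Classify as class 1 when this exceeds log(t) for the chosen threshold t.
    Normalization constants shared by both densities are dropped (they cancel).
    """
    def log_gauss(x, mu, S):
        d = x - mu
        _, logdet = np.linalg.slogdet(S)          # log |Sigma|, numerically stable
        return -0.5 * (logdet + d @ np.linalg.solve(S, d))
    return log_gauss(x, mu1, S1) - log_gauss(x, mu0, S0)

# Symmetric example: equal identity covariances, means at (-1, 0) and (1, 0).
mu0, mu1 = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
print(qda_log_likelihood_ratio(np.array([1.0, 0.0]), mu0, np.eye(2), mu1, np.eye(2)))
```

With equal covariances (as in the example) the decision boundary is linear, recovering LDA; with unequal covariances the quadratic terms no longer cancel and the boundary becomes a conic.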
While QDA is the most commonly used method for obtaining a classifier, other methods are also possible. One such method is to create a longer measurement vector from the old one by adding all pairwise products of individual measurements. For instance, the vector {\displaystyle [x_{1},\;x_{2},\;x_{3}]} would become {\displaystyle [x_{1},\;x_{2},\;x_{3},\;x_{1}^{2},\;x_{1}x_{2},\;x_{1}x_{3},\;x_{2}^{2},\;x_{2}x_{3},\;x_{3}^{2}].}
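The expansion above can be sketched as follows (the helper name is illustrative):

```python
from itertools import combinations_with_replacement

def quad_expand(x):
    """Append all pairwise products x_i * x_j (i <= j) to x, so that a linear
    classifier on the expanded vector is a quadratic classifier on the original."""
    pairs = combinations_with_replacement(range(len(x)), 2)
    return list(x) + [x[i] * x[j] for i, j in pairs]

print(quad_expand([1, 2, 3]))
# [1, 2, 3, 1, 2, 3, 4, 6, 9]
```

For a vector of length d, the expansion has d + d(d + 1)/2 components, matching the nine-component example in the text for d = 3.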
Finding a quadratic classifier for the original measurements would then become the same as finding a linear classifier based on the expanded measurement vector. This observation has been used in extending neural network models;[3] the "circular" case, which corresponds to introducing only the sum of pure quadratic terms {\displaystyle \;x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+\cdots \;} with no mixed products ({\displaystyle \;x_{1}x_{2},\;x_{1}x_{3},\;\ldots \;}), has been proven to be the optimal compromise between extending the classifier's representation power and controlling the risk of overfitting (Vapnik–Chervonenkis dimension).[4]
For linear classifiers based only on dot products, these expanded measurements do not have to be actually computed, since the dot product in the higher-dimensional space is simply related to that in the original space. This is an example of the so-called kernel trick, which can be applied to linear discriminant analysis as well as the support vector machine.
|
https://en.wikipedia.org/wiki/Quadratic_classifier
|
Exceptionality may refer to:
|
https://en.wikipedia.org/wiki/Exceptionality_(disambiguation)
|
In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.
A (nonzero) vector v of dimension N is an eigenvector of a square N × N matrix A if it satisfies a linear equation of the form Av = λv for some scalar λ. Then λ is called the eigenvalue corresponding to v. Geometrically speaking, the eigenvectors of A are the vectors that A merely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem.
This yields an equation for the eigenvalues p(λ) = det(A − λI) = 0. We call p(λ) the characteristic polynomial, and the equation, called the characteristic equation, is an Nth-order polynomial equation in the unknown λ. This equation will have Nλ distinct solutions, where 1 ≤ Nλ ≤ N. The set of solutions, that is, the eigenvalues, is called the spectrum of A.[1][2][3]
If the field of scalars is algebraically closed, then we can factor p as p(λ) = (λ − λ1)^{n1}(λ − λ2)^{n2}⋯(λ − λNλ)^{nNλ} = 0. The integer ni is termed the algebraic multiplicity of eigenvalue λi. The algebraic multiplicities sum to N: ∑_{i=1}^{Nλ} ni = N.
For each eigenvalue λi, we have a specific eigenvalue equation (A − λiI)v = 0. There will be 1 ≤ mi ≤ ni linearly independent solutions to each eigenvalue equation. The linear combinations of the mi solutions (except the one which gives the zero vector) are the eigenvectors associated with the eigenvalue λi. The integer mi is termed the geometric multiplicity of λi. It is important to keep in mind that the algebraic multiplicity ni and geometric multiplicity mi may or may not be equal, but we always have mi ≤ ni. The simplest case is of course when mi = ni = 1. The total number of linearly independent eigenvectors, Nv, can be calculated by summing the geometric multiplicities: ∑_{i=1}^{Nλ} mi = Nv.
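The gap between the two multiplicities can be checked numerically; a shear matrix makes a minimal example (NumPy here is a convenience, not part of the definitions):

```python
import numpy as np

# The shear matrix below has eigenvalue 1 with algebraic multiplicity n_1 = 2,
# but only m_1 = 1 linearly independent eigenvector, illustrating m_i <= n_i.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
eigvals = np.linalg.eigvals(A)
print(eigvals)  # [1. 1.]  -> algebraic multiplicity 2

# Geometric multiplicity = dim null(A - lambda I) = N - rank(A - lambda I).
geom_mult = 2 - np.linalg.matrix_rank(A - 1.0 * np.eye(2))
print(geom_mult)  # 1
```

Computing the geometric multiplicity via the rank of A − λI is exact here because A − I is exactly [[0, 1], [0, 0]], avoiding the numerical ambiguity of testing eigenvectors for parallelism.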
The eigenvectors can be indexed by eigenvalues, using a double index, with vij being the jth eigenvector for the ith eigenvalue. The eigenvectors can also be indexed using the simpler notation of a single index vk, with k = 1, 2, ..., Nv.
Let A be a square n × n matrix with n linearly independent eigenvectors qi (where i = 1, ..., n). Then A can be factored as A = QΛQ⁻¹, where Q is the square n × n matrix whose ith column is the eigenvector qi of A, and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, Λii = λi. Note that only diagonalizable matrices can be factorized in this way. For example, the defective matrix {\displaystyle \left[{\begin{smallmatrix}1&1\\0&1\end{smallmatrix}}\right]} (which is a shear matrix) cannot be diagonalized.
The n eigenvectors qi are usually normalized, but they need not be. A non-normalized set of n eigenvectors, vi, can also be used as the columns of Q. That can be understood by noting that the magnitude of the eigenvectors in Q gets canceled in the decomposition by the presence of Q⁻¹. If one of the eigenvalues λi has multiple linearly independent eigenvectors (that is, the geometric multiplicity of λi is greater than 1), then these eigenvectors for this eigenvalue λi can be chosen to be mutually orthogonal; however, if two eigenvectors belong to two different eigenvalues, it may be impossible for them to be orthogonal to each other (see Example below). One special case is that if A is a normal matrix, then by the spectral theorem, it is always possible to diagonalize A in an orthonormal basis {qi}.
The decomposition can be derived from the fundamental property of eigenvectors: Av = λv implies AQ = QΛ, hence A = QΛQ⁻¹. The linearly independent eigenvectors qi with nonzero eigenvalues form a basis (not necessarily orthonormal) for all possible products Ax, for x ∈ Cⁿ, which is the same as the image (or range) of the corresponding matrix transformation, and also the column space of the matrix A. The number of linearly independent eigenvectors qi with nonzero eigenvalues is equal to the rank of the matrix A, and also the dimension of the image (or range) of the corresponding matrix transformation, as well as its column space.
The linearly independent eigenvectors qi with an eigenvalue of zero form a basis (which can be chosen to be orthonormal) for the null space (also known as the kernel) of the matrix transformation A.
The 2 × 2 real matrixAA=[1013]{\displaystyle \mathbf {A} ={\begin{bmatrix}1&0\\1&3\\\end{bmatrix}}}may be decomposed into a diagonal matrix through multiplication of a non-singular matrixQQ=[abcd]∈R2×2.{\displaystyle \mathbf {Q} ={\begin{bmatrix}a&b\\c&d\end{bmatrix}}\in \mathbb {R} ^{2\times 2}.}
Then[abcd]−1[1013][abcd]=[x00y],{\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}^{-1}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}x&0\\0&y\end{bmatrix}},}for some real diagonal matrix[x00y]{\displaystyle \left[{\begin{smallmatrix}x&0\\0&y\end{smallmatrix}}\right]}.
Multiplying both sides of the equation on the left byQ:[1013][abcd]=[abcd][x00y].{\displaystyle {\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}a&b\\c&d\end{bmatrix}}{\begin{bmatrix}x&0\\0&y\end{bmatrix}}.}The above equation can be decomposed into twosimultaneous equations:{[1013][ac]=[axcx][1013][bd]=[bydy].{\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}={\begin{bmatrix}ax\\cx\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}={\begin{bmatrix}by\\dy\end{bmatrix}}\end{cases}}.}Factoring out theeigenvaluesxandy:{[1013][ac]=x[ac][1013][bd]=y[bd]{\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}=x{\begin{bmatrix}a\\c\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}=y{\begin{bmatrix}b\\d\end{bmatrix}}\end{cases}}}Lettinga=[ac],b=[bd],{\displaystyle \mathbf {a} ={\begin{bmatrix}a\\c\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}b\\d\end{bmatrix}},}this gives us two vector equations:{Aa=xaAb=yb{\displaystyle {\begin{cases}\mathbf {A} \mathbf {a} =x\mathbf {a} \\\mathbf {A} \mathbf {b} =y\mathbf {b} \end{cases}}}And can be represented by a single vector equation involving two solutions as eigenvalues:Au=λu{\displaystyle \mathbf {A} \mathbf {u} =\lambda \mathbf {u} }whereλrepresents the two eigenvaluesxandy, andurepresents the vectorsaandb.
Shiftingλuto the left hand side and factoringuout(A−λI)u=0{\displaystyle \left(\mathbf {A} -\lambda \mathbf {I} \right)\mathbf {u} =\mathbf {0} }SinceQis non-singular, it is essential thatuis nonzero. Therefore,det(A−λI)=0{\displaystyle \det(\mathbf {A} -\lambda \mathbf {I} )=0}Thus(1−λ)(3−λ)=0{\displaystyle (1-\lambda )(3-\lambda )=0}giving us the solutions of the eigenvalues for the matrixAasλ= 1orλ= 3, and the resulting diagonal matrix from the eigendecomposition ofAis thus[1003]{\displaystyle \left[{\begin{smallmatrix}1&0\\0&3\end{smallmatrix}}\right]}.
Putting the solutions back into the above simultaneous equations{[1013][ac]=1[ac][1013][bd]=3[bd]{\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}=1{\begin{bmatrix}a\\c\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}=3{\begin{bmatrix}b\\d\end{bmatrix}}\end{cases}}}
Solving the equations, we havea=−2candb=0,c,d∈R.{\displaystyle a=-2c\quad {\text{and}}\quad b=0,\qquad c,d\in \mathbb {R} .}Thus the matrixQrequired for the eigendecomposition ofAisQ=[−2c0cd],c,d∈R,{\displaystyle \mathbf {Q} ={\begin{bmatrix}-2c&0\\c&d\end{bmatrix}},\qquad c,d\in \mathbb {R} ,}that is:[−2c0cd]−1[1013][−2c0cd]=[1003],c,d∈R{\displaystyle {\begin{bmatrix}-2c&0\\c&d\end{bmatrix}}^{-1}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}-2c&0\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\0&3\end{bmatrix}},\qquad c,d\in \mathbb {R} }
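The worked example can be checked numerically; taking c = d = 1 in the Q found above:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
Q = np.array([[-2.0, 0.0],   # eigenvector for eigenvalue 1 (c = 1)
              [ 1.0, 1.0]])  # eigenvector for eigenvalue 3 (d = 1)

# Q^{-1} A Q should recover the diagonal matrix of eigenvalues.
Lam = np.linalg.inv(Q) @ A @ Q
print(np.round(Lam, 10))
```

Any other choice of nonzero c and d rescales the columns of Q, and the rescaling cancels between Q and Q⁻¹, leaving the same diagonal factor.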
If a matrix A can be eigendecomposed and if none of its eigenvalues are zero, then A is invertible and its inverse is given by A⁻¹ = QΛ⁻¹Q⁻¹. If A is a symmetric matrix, since Q is formed from the eigenvectors of A, Q is guaranteed to be an orthogonal matrix, therefore Q⁻¹ = Qᵀ. Furthermore, because Λ is a diagonal matrix, its inverse is easy to calculate: [Λ⁻¹]ii = 1/λi.
When eigendecomposition is used on a matrix of measured, real data, the inverse may be less valid when all eigenvalues are used unmodified in the form above. This is because as eigenvalues become relatively small, their contribution to the inversion is large. Those near zero or at the "noise" of the measurement system will have undue influence and could hamper solutions (detection) using the inverse.[4]
Two mitigations have been proposed: truncating small or zero eigenvalues, and extending the lowest reliable eigenvalue to those below it. See alsoTikhonov regularizationas a statistically motivated but biased method for rolling off eigenvalues as they become dominated by noise.
The first mitigation method is similar to a sparse sample of the original matrix, removing components that are not considered valuable. However, if the solution or detection process is near the noise level, truncating may remove components that influence the desired solution.
The second mitigation extends the eigenvalue so that lower values have much less influence over inversion, but do still contribute, such that solutions near the noise will still be found.
The reliable eigenvalue can be found by assuming that eigenvalues of extremely similar and low value are a good representation of measurement noise (which is assumed low for most systems).
If the eigenvalues are rank-sorted by value, then the reliable eigenvalue can be found by minimization of the Laplacian of the sorted eigenvalues:[5] min |∇²λs|, where the eigenvalues are subscripted with an s to denote being sorted. The position of the minimization is the lowest reliable eigenvalue. In measurement systems, the square root of this reliable eigenvalue is the average noise over the components of the system.
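A minimal sketch of the first mitigation (truncation) for a symmetric matrix; the threshold `tol` is an assumed parameter here, and choosing it well is exactly what the Laplacian criterion above is for:

```python
import numpy as np

def truncated_inverse(A, tol=1e-8):
    """Invert a symmetric matrix via its eigendecomposition, discarding
    eigenvalues with magnitude below `tol`; truncated components contribute
    nothing to the result instead of blowing up as 1/lambda."""
    lam, Q = np.linalg.eigh(A)
    inv_lam = np.zeros_like(lam)
    keep = np.abs(lam) > tol
    inv_lam[keep] = 1.0 / lam[keep]
    return Q @ np.diag(inv_lam) @ Q.T
```

On a well-conditioned matrix this agrees with the ordinary inverse; on a near-singular one it acts like a pseudo-inverse restricted to the reliable eigencomponents.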
The eigendecomposition allows for much easier computation of power series of matrices. If f(x) is given by f(x) = a₀ + a₁x + a₂x² + ⋯, then we know that f(A) = Q f(Λ) Q⁻¹. Because Λ is a diagonal matrix, functions of Λ are very easy to calculate: [f(Λ)]ii = f(λi).
The off-diagonal elements of f(Λ) are zero; that is, f(Λ) is also a diagonal matrix. Therefore, calculating f(A) reduces to just calculating the function on each of the eigenvalues.
A similar technique works more generally with the holomorphic functional calculus, using A⁻¹ = QΛ⁻¹Q⁻¹ from above. Once again, we find that [f(Λ)]ii = f(λi).
A2=(QΛQ−1)(QΛQ−1)=QΛ(Q−1Q)ΛQ−1=QΛ2Q−1An=QΛnQ−1expA=Qexp(Λ)Q−1{\displaystyle {\begin{aligned}\mathbf {A} ^{2}&=\left(\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}\right)\left(\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}\right)=\mathbf {Q} \mathbf {\Lambda } \left(\mathbf {Q} ^{-1}\mathbf {Q} \right)\mathbf {\Lambda } \mathbf {Q} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{2}\mathbf {Q} ^{-1}\\[1.2ex]\mathbf {A} ^{n}&=\mathbf {Q} \mathbf {\Lambda } ^{n}\mathbf {Q} ^{-1}\\[1.2ex]\exp \mathbf {A} &=\mathbf {Q} \exp(\mathbf {\Lambda } )\mathbf {Q} ^{-1}\end{aligned}}}which are examples for the functionsf(x)=x2,f(x)=xn,f(x)=expx{\displaystyle f(x)=x^{2},\;f(x)=x^{n},\;f(x)=\exp {x}}. Furthermore,expA{\displaystyle \exp {\mathbf {A} }}is thematrix exponential.
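These identities can be sketched for the symmetric case, where Q is orthogonal so Q⁻¹ = Qᵀ (the helper name is illustrative):

```python
import numpy as np

def matrix_func(A, f):
    """Apply a scalar function f to a symmetric matrix via A = Q Lambda Q^T:
    f(A) = Q f(Lambda) Q^T, i.e. f is applied to each eigenvalue."""
    lam, Q = np.linalg.eigh(A)
    return Q @ np.diag(f(lam)) @ Q.T

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
print(matrix_func(A, np.exp))  # approx diag(7.389, 20.086), i.e. diag(e^2, e^3)
```

For f(x) = x² this reproduces A @ A exactly, which makes a convenient sanity check of the implementation.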
Spectral matrices are matrices that possess distinct eigenvalues and a complete set of eigenvectors. This characteristic allows spectral matrices to be fully diagonalizable, meaning they can be decomposed into simpler forms using eigendecomposition. This decomposition process reveals fundamental insights into the matrix's structure and behavior, particularly in fields such as quantum mechanics, signal processing, and numerical analysis.[6]
A complex-valued square matrix A is normal (meaning A*A = AA*, where A* is the conjugate transpose) if and only if it can be decomposed as A = UΛU*, where U is a unitary matrix (meaning U* = U⁻¹) and Λ = diag(λ1, ..., λn) is a diagonal matrix.[7] The columns u1, ..., un of U form an orthonormal basis and are eigenvectors of A with corresponding eigenvalues λ1, ..., λn.[8]
For example, consider the 2 x 2 normal matrixA=[1221]{\displaystyle \mathbf {A} ={\begin{bmatrix}1&2\\2&1\end{bmatrix}}}.
The eigenvalues areλ1=3{\displaystyle \lambda _{1}=3}andλ2=−1{\displaystyle \lambda _{2}=-1}.
The (normalized) eigenvectors corresponding to these eigenvalues areu1=12[11]{\displaystyle \mathbf {u} _{1}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\1\end{bmatrix}}}andu2=12[−11]{\displaystyle \mathbf {u} _{2}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}-1\\1\end{bmatrix}}}.
The diagonalization isA=UΛU∗{\displaystyle \mathbf {A} =\mathbf {U} \mathbf {\Lambda } \mathbf {U} ^{*}}, whereU=[1/21/21/2−1/2]{\displaystyle \mathbf {U} ={\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}},Λ={\displaystyle \mathbf {\Lambda } =}[300−1]{\displaystyle {\begin{bmatrix}3&0\\0&-1\end{bmatrix}}}andU∗=U−1={\displaystyle \mathbf {U} ^{*}=\mathbf {U} ^{-1}=}[1/21/21/2−1/2]{\displaystyle {\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}}.
The verification isUΛU∗={\displaystyle \mathbf {U} \mathbf {\Lambda } \mathbf {U} ^{*}=}[1/21/21/2−1/2]{\displaystyle {\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}}[300−1]{\displaystyle {\begin{bmatrix}3&0\\0&-1\end{bmatrix}}}[1/21/21/2−1/2]{\displaystyle {\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}}=[1221]=A{\displaystyle ={\begin{bmatrix}1&2\\2&1\end{bmatrix}}=\mathbf {A} }.
This example illustrates the process of diagonalizing a normal matrixA{\displaystyle \mathbf {A} }by finding its eigenvalues and eigenvectors, forming the unitary matrixU{\displaystyle \mathbf {U} }, the diagonal matrixΛ{\displaystyle \mathbf {\Lambda } }, and verifying the decomposition.
As a special case, for every n × n real symmetric matrix, the eigenvalues are real and the eigenvectors can be chosen real and orthonormal. Thus a real symmetric matrix A can be decomposed as A = QΛQᵀ, where Q is an orthogonal matrix whose columns are the real, orthonormal eigenvectors of A, and Λ is a diagonal matrix whose entries are the eigenvalues of A.[9]
Diagonalizable matrices can be decomposed using eigendecomposition, provided they have a full set of linearly independent eigenvectors. They can be expressed as A = PDP⁻¹, where P is a matrix whose columns are eigenvectors of A and D is a diagonal matrix consisting of the corresponding eigenvalues of A.[8]
Positive definite matrices are matrices for which all eigenvalues are positive. They can be decomposed as A = LLᵀ using the Cholesky decomposition, where L is a lower triangular matrix.[10]
Unitary matrices satisfy UU* = I, where U* denotes the conjugate transpose; in the real case these are the orthogonal matrices, satisfying UUᵀ = I. They are diagonalized by unitary transformations.[8]
Hermitian matrices satisfy H = H†, where H† denotes the conjugate transpose. They can be diagonalized using unitary or, in the real symmetric case, orthogonal matrices.[8]
Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using the characteristic polynomial. However, this is often impossible for larger matrices, in which case we must use a numerical method.
In practice, eigenvalues of large matrices are not computed using the characteristic polynomial. Computing the polynomial becomes expensive in itself, and exact (symbolic) roots of a high-degree polynomial can be difficult to compute and express: the Abel–Ruffini theorem implies that the roots of high-degree (5 or above) polynomials cannot in general be expressed simply using nth roots. Therefore, general algorithms to find eigenvectors and eigenvalues are iterative.
Iterative numerical algorithms for approximating roots of polynomials exist, such as Newton's method, but in general it is impractical to compute the characteristic polynomial and then apply these methods. One reason is that small round-off errors in the coefficients of the characteristic polynomial can lead to large errors in the eigenvalues and eigenvectors: the roots are an extremely ill-conditioned function of the coefficients.[11]
A simple and accurate iterative method is the power method: a random vector v is chosen and a sequence of unit vectors is computed as Av‖Av‖,A2v‖A2v‖,A3v‖A3v‖,…{\displaystyle {\frac {\mathbf {A} \mathbf {v} }{\left\|\mathbf {A} \mathbf {v} \right\|}},{\frac {\mathbf {A} ^{2}\mathbf {v} }{\left\|\mathbf {A} ^{2}\mathbf {v} \right\|}},{\frac {\mathbf {A} ^{3}\mathbf {v} }{\left\|\mathbf {A} ^{3}\mathbf {v} \right\|}},\ldots }
This sequence will almost always converge to an eigenvector corresponding to the eigenvalue of greatest magnitude, provided that v has a nonzero component of this eigenvector in the eigenvector basis (and also provided that there is only one eigenvalue of greatest magnitude). This simple algorithm is useful in some practical applications; for example, Google uses it to calculate the page rank of documents in their search engine.[12] Also, the power method is the starting point for many more sophisticated algorithms. For instance, by keeping not just the last vector in the sequence, but instead looking at the span of all the vectors in the sequence, one can get a better (faster converging) approximation for the eigenvector, and this idea is the basis of Arnoldi iteration.[11] Alternatively, the important QR algorithm is also based on a subtle transformation of a power method.[11]
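The power method can be sketched in a few lines; the Rayleigh quotient is used to recover the eigenvalue (the matrix and iteration count are illustrative choices):

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    """Power iteration: returns an approximate dominant eigenvalue
    (via the Rayleigh quotient) and the corresponding unit eigenvector."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)        # keep the iterate a unit vector
    lam = v @ A @ v / (v @ v)         # Rayleigh quotient
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(A)
# Compare against the exact dominant eigenvalue from a dense solver.
assert np.isclose(lam, np.max(np.linalg.eigvalsh(A)))
assert np.allclose(A @ v, lam * v, atol=1e-6)
```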
Once the eigenvalues are computed, the eigenvectors could be calculated by solving the equation (A−λiI)vi,j=0{\displaystyle \left(\mathbf {A} -\lambda _{i}\mathbf {I} \right)\mathbf {v} _{i,j}=\mathbf {0} } using Gaussian elimination or any other method for solving matrix equations.
However, in practical large-scale eigenvalue methods, the eigenvectors are usually computed in other ways, as a byproduct of the eigenvalue computation. In power iteration, for example, the eigenvector is actually computed before the eigenvalue (which is typically computed by the Rayleigh quotient of the eigenvector).[11] In the QR algorithm for a Hermitian matrix (or any normal matrix), the orthonormal eigenvectors are obtained as a product of the Q matrices from the steps in the algorithm.[11] (For more general matrices, the QR algorithm yields the Schur decomposition first, from which the eigenvectors can be obtained by a backsubstitution procedure.[13]) For Hermitian matrices, the divide-and-conquer eigenvalue algorithm is more efficient than the QR algorithm if both eigenvectors and eigenvalues are desired.[11]
Recall that the geometric multiplicity of an eigenvalue can be described as the dimension of the associated eigenspace, the nullspace of λI−A. The algebraic multiplicity can also be thought of as a dimension: it is the dimension of the associated generalized eigenspace (1st sense), which is the nullspace of the matrix (λI−A)k for any sufficiently large k. That is, it is the space of generalized eigenvectors (first sense), where a generalized eigenvector is any vector which eventually becomes 0 if λI−A is applied to it enough times successively. Any eigenvector is a generalized eigenvector, and so each eigenspace is contained in the associated generalized eigenspace. This provides an easy proof that the geometric multiplicity is always less than or equal to the algebraic multiplicity.
This usage should not be confused with the generalized eigenvalue problem described below.
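The inequality between the two multiplicities can be observed numerically; a sketch with a 2×2 Jordan block (the example matrix is an assumption for illustration):

```python
import numpy as np

# A defective matrix: eigenvalue 1 has algebraic multiplicity 2
# but geometric multiplicity 1 (a single Jordan block).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
I = np.eye(2)

# Geometric multiplicity: dimension of the nullspace of (A - I).
geo = 2 - np.linalg.matrix_rank(A - I)
# Algebraic multiplicity: dimension of the nullspace of (A - I)^k
# for large enough k (here k = 2 already suffices, since (A - I)^2 = 0).
alg = 2 - np.linalg.matrix_rank(np.linalg.matrix_power(A - I, 2))

assert geo == 1 and alg == 2   # geometric <= algebraic
```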
A conjugate eigenvector or coneigenvector is a vector sent after transformation to a scalar multiple of its conjugate, where the scalar is called the conjugate eigenvalue or coneigenvalue of the linear transformation. The coneigenvectors and coneigenvalues represent essentially the same information and meaning as the regular eigenvectors and eigenvalues, but arise when an alternative coordinate system is used. The corresponding equation is Av=λv∗.{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {v} ^{*}.} For example, in coherent electromagnetic scattering theory, the linear transformation A represents the action performed by the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave. In optics, the coordinate system is defined from the wave's viewpoint, known as the Forward Scattering Alignment (FSA), and gives rise to a regular eigenvalue equation, whereas in radar, the coordinate system is defined from the radar's viewpoint, known as the Back Scattering Alignment (BSA), and gives rise to a coneigenvalue equation.
A generalized eigenvalue problem (second sense) is the problem of finding a (nonzero) vector v that obeys Av=λBv{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {B} \mathbf {v} } where A and B are matrices. If v obeys this equation, with some λ, then we call v the generalized eigenvector of A and B (in the second sense), and λ is called the generalized eigenvalue of A and B (in the second sense) which corresponds to the generalized eigenvector v. The possible values of λ must obey the following equation: det(A−λB)=0.{\displaystyle \det(\mathbf {A} -\lambda \mathbf {B} )=0.}
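A hedged sketch with SciPy's generalized eigensolver, which accepts the pencil (A, B) directly (the matrices are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# Generalized eigenpairs of the pencil (A, B): A v = lam B v.
lam, V = eig(A, B)
for i in range(2):
    v = V[:, i]
    assert np.allclose(A @ v, lam[i] * (B @ v))
    # Each lam is a root of det(A - lam B) = 0.
    assert np.isclose(np.linalg.det(A - lam[i] * B), 0.0, atol=1e-10)
```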
If n linearly independent vectors {v1, …, vn} can be found, such that for every i ∈ {1, …, n}, Avi=λiBvi, then we define the matrices P and D such that P=[v1⋯vn]{\displaystyle \mathbf {P} ={\begin{bmatrix}\mathbf {v} _{1}&\cdots &\mathbf {v} _{n}\end{bmatrix}}} and (D)ij={λi if i=j, 0 otherwise}{\displaystyle (\mathbf {D} )_{ij}={\begin{cases}\lambda _{i},&{\text{if }}i=j\\0,&{\text{otherwise}}\end{cases}}}. Then the following equality holds: A=BPDP−1{\displaystyle \mathbf {A} =\mathbf {B} \mathbf {P} \mathbf {D} \mathbf {P} ^{-1}}. The proof is AP=[Av1⋯Avn]=[λ1Bv1⋯λnBvn]=[Bv1⋯Bvn]D=BPD{\displaystyle \mathbf {A} \mathbf {P} ={\begin{bmatrix}\mathbf {A} \mathbf {v} _{1}&\cdots &\mathbf {A} \mathbf {v} _{n}\end{bmatrix}}={\begin{bmatrix}\lambda _{1}\mathbf {B} \mathbf {v} _{1}&\cdots &\lambda _{n}\mathbf {B} \mathbf {v} _{n}\end{bmatrix}}={\begin{bmatrix}\mathbf {B} \mathbf {v} _{1}&\cdots &\mathbf {B} \mathbf {v} _{n}\end{bmatrix}}\mathbf {D} =\mathbf {B} \mathbf {P} \mathbf {D} },
and since P is invertible, we multiply the equation from the right by its inverse, finishing the proof.
The set of matrices of the form A−λB, where λ is a complex number, is called a pencil; the term matrix pencil can also refer to the pair (A, B) of matrices.[14]
If B is invertible, then the original problem can be written in the form B−1Av=λv{\displaystyle \mathbf {B} ^{-1}\mathbf {A} \mathbf {v} =\lambda \mathbf {v} }, which is a standard eigenvalue problem. However, in most situations it is preferable not to perform the inversion, but rather to solve the generalized eigenvalue problem as stated originally. This is especially important if A and B are Hermitian matrices, since in this case B−1A is not generally Hermitian and important properties of the solution are no longer apparent.
If A and B are both symmetric or Hermitian, and B is also a positive-definite matrix, the eigenvalues λi are real and eigenvectors v1 and v2 with distinct eigenvalues are B-orthogonal (v1*Bv2 = 0).[15] In this case, eigenvectors can be chosen so that the matrix P defined above satisfies P∗BP=I{\displaystyle \mathbf {P} ^{*}\mathbf {B} \mathbf {P} =\mathbf {I} } or PP∗B=I,{\displaystyle \mathbf {P} \mathbf {P} ^{*}\mathbf {B} =\mathbf {I} ,} and there exists a basis of generalized eigenvectors (it is not a defective problem).[14] This case is sometimes called a Hermitian definite pencil or definite pencil.[14]
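For a definite pencil, `scipy.linalg.eigh` returns real eigenvalues and B-orthonormal eigenvectors; a minimal check (the matrices are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import eigh

# Symmetric A and symmetric positive-definite B: a definite pencil.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# eigh solves A v = lam B v; the eigenvalues come out real.
lam, P = eigh(A, B)
assert np.all(np.isreal(lam))

# Eigenvectors are B-orthonormal: P^T B P = I.
assert np.allclose(P.T @ B @ P, np.eye(2))
assert np.allclose(A @ P, B @ P @ np.diag(lam))
```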
|
https://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix
|
In mathematics, a skew gradient of a harmonic function over a simply connected domain with two real dimensions is a vector field that is everywhere orthogonal to the gradient of the function and that has the same magnitude as the gradient.
The skew gradient can be defined using complex analysis and the Cauchy–Riemann equations.
Let f(z(x,y))=u(x,y)+iv(x,y){\displaystyle f(z(x,y))=u(x,y)+iv(x,y)} be a complex-valued analytic function, where u, v are real-valued scalar functions of the real variables x, y.
A skew gradient is defined as: ∇⊥u=(−∂u/∂y,∂u/∂x){\displaystyle \nabla ^{\perp }u=\left(-{\frac {\partial u}{\partial y}},{\frac {\partial u}{\partial x}}\right)}, that is, the gradient of u rotated by 90°,
and from the Cauchy–Riemann equations, it is derived that ∇⊥u=∇v.{\displaystyle \nabla ^{\perp }u=\nabla v.}
The skew gradient has two interesting properties. It is everywhere orthogonal to the gradient of u, and of the same length: ∇u⋅∇⊥u=0,|∇u|=|∇⊥u|.{\displaystyle \nabla u\cdot \nabla ^{\perp }u=0,\quad |\nabla u|=|\nabla ^{\perp }u|.}
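These properties can be verified symbolically, taking the convention ∇⊥u = (−∂u/∂y, ∂u/∂x) (a 90° rotation of ∇u, one of the two possible sign conventions) and the example f(z) = z², i.e. u = x² − y², v = 2xy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# u + i v = z^2: u is harmonic, v is its harmonic conjugate.
u = x**2 - y**2
v = 2*x*y

grad_u = sp.Matrix([sp.diff(u, x), sp.diff(u, y)])
# Skew gradient: gradient of u rotated by 90 degrees
# (sign convention assumed here: (-du/dy, du/dx)).
skew_u = sp.Matrix([-sp.diff(u, y), sp.diff(u, x)])
grad_v = sp.Matrix([sp.diff(v, x), sp.diff(v, y)])

# Cauchy-Riemann: the skew gradient of u equals the gradient of v.
assert sp.simplify(skew_u - grad_v) == sp.zeros(2, 1)
# Orthogonal to grad u, and of the same length.
assert sp.simplify(grad_u.dot(skew_u)) == 0
assert sp.simplify(grad_u.dot(grad_u) - skew_u.dot(skew_u)) == 0
```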
|
https://en.wikipedia.org/wiki/Skew_gradient
|
The iterative rational Krylov algorithm (IRKA) is an iterative algorithm, useful for model order reduction (MOR) of single-input single-output (SISO) linear time-invariant dynamical systems.[1] At each iteration, IRKA performs a Hermite-type interpolation of the original system transfer function. Each interpolation requires solving r{\displaystyle r} shifted pairs of linear systems, each of size n×n{\displaystyle n\times n}, where n{\displaystyle n} is the original system order and r{\displaystyle r} is the desired reduced model order (usually r≪n{\displaystyle r\ll n}).
The algorithm was first introduced by Gugercin, Antoulas and Beattie in 2008.[2] It is based on a first-order necessary optimality condition, initially investigated by Meier and Luenberger in 1967.[3] The first convergence proof of IRKA was given by Flagg, Beattie and Gugercin in 2012,[4] for a particular class of systems.
Consider a SISO linear time-invariant dynamical system, with input v(t){\displaystyle v(t)} and output y(t){\displaystyle y(t)}: x˙(t)=Ax(t)+bv(t),y(t)=cTx(t),{\displaystyle {\dot {x}}(t)=Ax(t)+b\,v(t),\quad y(t)=c^{T}x(t),} with A∈Rn×n{\displaystyle A\in \mathbb {R} ^{n\times n}} and b,c∈Rn{\displaystyle b,c\in \mathbb {R} ^{n}}.
Applying the Laplace transform, with zero initial conditions, we obtain the transfer function G{\displaystyle G}, which is a fraction of polynomials: G(s)=cT(sIn−A)−1b.{\displaystyle G(s)=c^{T}(sI_{n}-A)^{-1}b.}
Assume G{\displaystyle G} is stable. Given r<n{\displaystyle r<n}, MOR tries to approximate the transfer function G{\displaystyle G} by a stable rational transfer function Gr{\displaystyle G_{r}} of order r{\displaystyle r}: Gr(s)=crT(sIr−Ar)−1br.{\displaystyle G_{r}(s)=c_{r}^{T}(sI_{r}-A_{r})^{-1}b_{r}.}
A possible approximation criterion is to minimize the absolute error in H2{\displaystyle H_{2}} norm: ‖G−Gr‖H2.{\displaystyle \|G-G_{r}\|_{H_{2}}.}
This is known as the H2{\displaystyle H_{2}} optimization problem. This problem has been studied extensively, and it is known to be non-convex,[4] which implies that usually it will be difficult to find a global minimizer.
The following first-order necessary optimality condition for the H2{\displaystyle H_{2}} problem is of great importance for the IRKA algorithm.
Theorem ([2][Theorem 3.4], [4][Theorem 1.2]) — Assume that the H2{\displaystyle H_{2}} optimization problem admits a solution Gr{\displaystyle G_{r}} with simple poles. Denote these poles by: λ1(Ar),…,λr(Ar){\displaystyle \lambda _{1}(A_{r}),\ldots ,\lambda _{r}(A_{r})}. Then, Gr{\displaystyle G_{r}} must be an Hermite interpolator of G{\displaystyle G}, through the reflected poles of Gr{\displaystyle G_{r}}: Gr(σi)=G(σi),Gr′(σi)=G′(σi),σi=−λi(Ar),i=1,…,r.{\displaystyle G_{r}(\sigma _{i})=G(\sigma _{i}),\quad G_{r}'(\sigma _{i})=G'(\sigma _{i}),\quad \sigma _{i}=-\lambda _{i}(A_{r}),\quad i=1,\ldots ,r.}
Note that the poles λi(Ar){\displaystyle \lambda _{i}(A_{r})} are the eigenvalues of the reduced r×r{\displaystyle r\times r} matrix Ar{\displaystyle A_{r}}.
An Hermite interpolant Gr{\displaystyle G_{r}} of the rational function G{\displaystyle G}, through r{\displaystyle r} distinct points σ1,…,σr∈C{\displaystyle \sigma _{1},\ldots ,\sigma _{r}\in \mathbb {C} }, has components: Ar=(WrTVr)−1WrTAVr,br=(WrTVr)−1WrTb,crT=cTVr,{\displaystyle A_{r}=(W_{r}^{T}V_{r})^{-1}W_{r}^{T}AV_{r},\quad b_{r}=(W_{r}^{T}V_{r})^{-1}W_{r}^{T}b,\quad c_{r}^{T}=c^{T}V_{r},}
where the matrices Vr=(v1∣…∣vr)∈Cn×r{\displaystyle V_{r}=(v_{1}\mid \ldots \mid v_{r})\in \mathbb {C} ^{n\times r}} and Wr=(w1∣…∣wr)∈Cn×r{\displaystyle W_{r}=(w_{1}\mid \ldots \mid w_{r})\in \mathbb {C} ^{n\times r}} may be found by solving r{\displaystyle r} dual pairs of linear systems, one for each shift[4][Theorem 1.1]: (σiIn−A)vi=b,(σiIn−A)Twi=c,i=1,…,r.{\displaystyle (\sigma _{i}I_{n}-A)v_{i}=b,\quad (\sigma _{i}I_{n}-A)^{T}w_{i}=c,\quad i=1,\ldots ,r.}
As can be seen from the previous section, finding an Hermite interpolator Gr{\displaystyle G_{r}} of G{\displaystyle G}, through r{\displaystyle r} given points, is relatively easy. The difficult part is to find the correct interpolation points. IRKA tries to iteratively approximate these "optimal" interpolation points.
For this, it starts with r{\displaystyle r} arbitrary interpolation points (closed under conjugation), and then, at each iteration m{\displaystyle m}, it imposes the first-order necessary optimality condition of the H2{\displaystyle H_{2}} problem:
1. find the Hermite interpolant Gr{\displaystyle G_{r}} of G{\displaystyle G}, through the actual r{\displaystyle r} shift points: σ1m,…,σrm{\displaystyle \sigma _{1}^{m},\ldots ,\sigma _{r}^{m}}.
2. update the shifts by using the poles of the new Gr{\displaystyle G_{r}}: σim+1=−λi(Ar),∀i=1,…,r.{\displaystyle \sigma _{i}^{m+1}=-\lambda _{i}(A_{r}),\,\forall \,i=1,\ldots ,r.}
The iteration is stopped when the relative change in the set of shifts of two successive iterations is less than a given tolerance ϵ. This condition may be stated as: maxi|σim+1−σim|/|σim|<ϵ.{\displaystyle \max _{i}{\frac {|\sigma _{i}^{m+1}-\sigma _{i}^{m}|}{|\sigma _{i}^{m}|}}<\epsilon .}
As already mentioned, each Hermite interpolation requires solving r{\displaystyle r} shifted pairs of linear systems, each of size n×n{\displaystyle n\times n}: (σiIn−A)vi=b,(σiIn−A)Twi=c,i=1,…,r.{\displaystyle (\sigma _{i}I_{n}-A)v_{i}=b,\quad (\sigma _{i}I_{n}-A)^{T}w_{i}=c,\quad i=1,\ldots ,r.}
Also, updating the shifts requires finding the r{\displaystyle r} poles of the new interpolant Gr{\displaystyle G_{r}}; that is, finding the r{\displaystyle r} eigenvalues of the reduced r×r{\displaystyle r\times r} matrix Ar{\displaystyle A_{r}}.
Pseudocode for the IRKA algorithm is given in [2][Algorithm 4.1].
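A simplified sketch of the iteration in Python (not the pseudocode from [2]; the projection used to form the reduced matrices, the initial shifts, and the restriction to real shifts are assumptions made for illustration):

```python
import numpy as np

def irka(A, b, c, r, tol=1e-6, maxit=100):
    """Sketch of the IRKA iteration for a SISO system (A, b, c).
    Returns reduced matrices (Ar, br, cr). Real shifts only."""
    n = A.shape[0]
    shifts = np.linspace(1.0, 2.0, r)            # arbitrary initial shifts
    for _ in range(maxit):
        # Solve the r shifted pairs of n x n linear systems.
        V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, b) for s in shifts])
        W = np.column_stack([np.linalg.solve((s * np.eye(n) - A).T, c) for s in shifts])
        # Oblique projection of A onto the interpolation subspaces.
        M = W.T @ V
        Ar = np.linalg.solve(M, W.T @ A @ V)
        # New shifts: mirrored poles of the reduced model.
        new_shifts = np.sort(-np.linalg.eigvals(Ar).real)
        if np.max(np.abs(new_shifts - shifts) / np.abs(shifts)) < tol:
            shifts = new_shifts
            break
        shifts = new_shifts
    br = np.linalg.solve(M, W.T @ b)
    cr = V.T @ c
    return Ar, br, cr

# Tiny symmetric-state-space example: A symmetric and stable, b = c.
A = np.diag([-1.0, -2.0, -5.0])
b = c = np.ones(3)
Ar, br, cr = irka(A, b, c, r=1)
assert np.all(np.linalg.eigvals(Ar).real < 0)    # reduced model is stable
```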
A SISO linear system is said to have symmetric state space (SSS) whenever: A=AT,b=c.{\displaystyle A=A^{T},\,b=c.} This type of system appears in many important applications, such as in the analysis of RC circuits and in inverse problems involving 3D Maxwell's equations.[4] For SSS systems with distinct poles, the following convergence result has been proven:[4] "IRKA is a locally convergent fixed point iteration to a local minimizer of the H2{\displaystyle H_{2}} optimization problem."
Although there is no convergence proof for the general case, numerous experiments have shown that IRKA often converges rapidly for different kinds of linear dynamical systems.[1][4]
The IRKA algorithm has been extended by the original authors to multiple-input multiple-output (MIMO) systems, and also to discrete-time and differential-algebraic systems.[1][2][Remark 4.1]
Model order reduction
|
https://en.wikipedia.org/wiki/Iterative_rational_Krylov_algorithm
|
Model-theoretic grammars, also known as constraint-based grammars, contrast with generative grammars in the way they define sets of sentences: they state constraints on syntactic structure rather than providing operations for generating syntactic objects.[1] A generative grammar provides a set of operations such as rewriting, insertion, deletion, movement, or combination, and is interpreted as a definition of the set of all and only the objects that these operations are capable of producing through iterative application. A model-theoretic grammar simply states a set of conditions that an object must meet, and can be regarded as defining the set of all and only the structures of a certain sort that satisfy all of the constraints.[2] The approach applies the mathematical techniques of model theory to the task of syntactic description: a grammar is a theory in the logician's sense (a consistent set of statements) and the well-formed structures are the models that satisfy the theory.
David E. JohnsonandPaul M. Postalintroduced the idea of model-theoretic syntax in their 1980 bookArc Pair Grammar.[3]
The following is a sample of grammars falling under the model-theoretic umbrella:
One benefit of model-theoretic grammars over generative grammars is that they allow for gradience in grammaticality. A structure may deviate only slightly from a theory or it may be highly deviant. Generative grammars, in contrast, "entail a sharp boundary between the perfect and the nonexistent, and do not even permit gradience in ungrammaticality to be represented."[7]
|
https://en.wikipedia.org/wiki/Model-theoretic_grammar
|
In linear algebra, the Frobenius normal form or rational canonical form of a square matrix A with entries in a field F is a canonical form for matrices obtained by conjugation by invertible matrices over F. The form reflects a minimal decomposition of the vector space into subspaces that are cyclic for A (i.e., spanned by some vector and its repeated images under A). Since only one normal form can be reached from a given matrix (whence the "canonical"), a matrix B is similar to A if and only if it has the same rational canonical form as A. Since this form can be found without any operations that might change when extending the field F (whence the "rational"), notably without factoring polynomials, this shows that whether two matrices are similar does not change upon field extensions. The form is named after German mathematician Ferdinand Georg Frobenius.
Some authors use the term rational canonical form for a somewhat different form that is more properly called the primary rational canonical form. Instead of decomposing into a minimum number of cyclic subspaces, the primary form decomposes into a maximum number of cyclic subspaces. It is also defined over F, but has somewhat different properties: finding the form requires factorization of polynomials, and as a consequence the primary rational canonical form may change when the same matrix is considered over an extension field of F. This article mainly deals with the form that does not require factorization, and explicitly mentions "primary" when the form using factorization is meant.
When trying to find out whether two square matrices A and B are similar, one approach is to try, for each of them, to decompose the vector space as far as possible into a direct sum of stable subspaces, and compare the respective actions on these subspaces. For instance if both are diagonalizable, then one can take the decomposition into eigenspaces (for which the action is as simple as it can get, namely by a scalar), and then similarity can be decided by comparing eigenvalues and their multiplicities. While in practice this is often a quite insightful approach, there are various drawbacks this has as a general method. First, it requires finding all eigenvalues, say as roots of the characteristic polynomial, but it may not be possible to give an explicit expression for them. Second, a complete set of eigenvalues might exist only in an extension of the field one is working over, and then one does not get a proof of similarity over the original field. Finally, A and B might not be diagonalizable even over this larger field, in which case one must instead use a decomposition into generalized eigenspaces, and possibly into Jordan blocks.
But obtaining such a fine decomposition is not necessary to just decide whether two matrices are similar. The rational canonical form is based on instead using a direct sum decomposition into stable subspaces that are as large as possible, while still allowing a very simple description of the action on each of them. These subspaces must be generated by a single nonzero vector v and all its images by repeated application of the linear operator associated to the matrix; such subspaces are called cyclic subspaces (by analogy with cyclic subgroups) and they are clearly stable under the linear operator. A basis of such a subspace is obtained by taking v and its successive images as long as they are linearly independent. The matrix of the linear operator with respect to such a basis is the companion matrix of a monic polynomial; this polynomial (the minimal polynomial of the operator restricted to the subspace, which notion is analogous to that of the order of a cyclic subgroup) determines the action of the operator on the cyclic subspace up to isomorphism, and is independent of the choice of the vector v generating the subspace.
A direct sum decomposition into cyclic subspaces always exists, and finding one does not require factoring polynomials. However it is possible that cyclic subspaces do allow a decomposition as direct sum of smaller cyclic subspaces (essentially by the Chinese remainder theorem). Therefore, just having for both matrices some decomposition of the space into cyclic subspaces, and knowing the corresponding minimal polynomials, is not in itself sufficient to decide their similarity. An additional condition is imposed to ensure that for similar matrices one gets decompositions into cyclic subspaces that exactly match: in the list of associated minimal polynomials each one must divide the next (and the constant polynomial 1 is forbidden to exclude trivial cyclic subspaces). The resulting list of polynomials are called the invariant factors of (the K[X]-module defined by) the matrix, and two matrices are similar if and only if they have identical lists of invariant factors. The rational canonical form of a matrix A is obtained by expressing it on a basis adapted to a decomposition into cyclic subspaces whose associated minimal polynomials are the invariant factors of A; two matrices are similar if and only if they have the same rational canonical form.
Consider the following matrix A, over Q:
A has minimal polynomial μ=X6−4X4−2X3+4X2+4X+1{\displaystyle \mu =X^{6}-4X^{4}-2X^{3}+4X^{2}+4X+1}, so that the dimension of a subspace generated by the repeated images of a single vector is at most 6. The characteristic polynomial is χ=X8−X7−5X6+2X5+10X4+2X3−7X2−5X−1{\displaystyle \chi =X^{8}-X^{7}-5X^{6}+2X^{5}+10X^{4}+2X^{3}-7X^{2}-5X-1}, which is a multiple of the minimal polynomial by a factor X2−X−1{\displaystyle X^{2}-X-1}. There always exist vectors such that the cyclic subspace that they generate has the same minimal polynomial as the operator has on the whole space; indeed most vectors will have this property, and in this case the first standard basis vector e1{\displaystyle e_{1}} does so: the vectors Ak(e1){\displaystyle A^{k}(e_{1})} for k=0,1,…,5{\displaystyle k=0,1,\ldots ,5} are linearly independent and span a cyclic subspace with minimal polynomial μ{\displaystyle \mu }. There exist complementary stable subspaces (of dimension 2) to this cyclic subspace, and the space generated by vectors v=(3,4,8,0,−1,0,2,−1)⊤{\displaystyle v=(3,4,8,0,-1,0,2,-1)^{\top }} and w=(5,4,5,9,−1,1,1,−2)⊤{\displaystyle w=(5,4,5,9,-1,1,1,-2)^{\top }} is an example. In fact one has A⋅v=w{\displaystyle A\cdot v=w}, so the complementary subspace is a cyclic subspace generated by v{\displaystyle v}; it has minimal polynomial X2−X−1{\displaystyle X^{2}-X-1}. Since μ{\displaystyle \mu } is the minimal polynomial of the whole space, it is clear that X2−X−1{\displaystyle X^{2}-X-1} must divide μ{\displaystyle \mu } (and it is easily checked that it does), and we have found the invariant factors X2−X−1{\displaystyle X^{2}-X-1} and μ=X6−4X4−2X3+4X2+4X+1{\displaystyle \mu =X^{6}-4X^{4}-2X^{3}+4X^{2}+4X+1} of A. Then the rational canonical form of A is the block diagonal matrix with the corresponding companion matrices as diagonal blocks, namely
A basis on which this form is attained is formed by the vectors v,w{\displaystyle v,w} above, followed by Ak(e1){\displaystyle A^{k}(e_{1})} for k=0,1,…,5{\displaystyle k=0,1,\ldots ,5}; explicitly this means that for
one has A=PCP−1.{\displaystyle A=PCP^{-1}.}
Fix a base field F and a finite-dimensional vector space V over F. Given a polynomial P ∈ F[X], there is associated to it a companion matrix CP whose characteristic polynomial and minimal polynomial are both equal to P.
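This correspondence is easy to check numerically; a sketch that builds the companion matrix of an illustrative monic polynomial and recovers its characteristic polynomial:

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial
    x^n + c[n-1] x^{n-1} + ... + c[1] x + c[0]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # subdiagonal of ones
    C[:, -1] = -np.asarray(coeffs)   # last column: minus the coefficients
    return C

# P(x) = x^3 - 2x + 5  ->  coefficients (c0, c1, c2) = (5, -2, 0)
C = companion([5.0, -2.0, 0.0])

# The characteristic polynomial of C recovers P
# (numpy lists the leading coefficient first).
assert np.allclose(np.poly(C), [1.0, 0.0, -2.0, 5.0])
```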
Theorem: Let V be a finite-dimensional vector space over a field F, and A a square matrix over F. Then V (viewed as an F[X]-module with the action of X given by A) admits an F[X]-module isomorphism V≅F[X]/(f1)⊕⋯⊕F[X]/(fk){\displaystyle V\cong F[X]/(f_{1})\oplus \cdots \oplus F[X]/(f_{k})}
where the fi ∈ F[X] may be taken to be monic polynomials of positive degree (so they are non-units in F[X]) that satisfy the relations f1∣f2∣⋯∣fk{\displaystyle f_{1}\mid f_{2}\mid \cdots \mid f_{k}}
(where "a | b" is notation for "a divides b"); with these conditions the list of polynomials fi is unique.
Sketch of Proof: Apply the structure theorem for finitely generated modules over a principal ideal domain to V, viewing it as an F[X]-module. The structure theorem provides a decomposition into cyclic factors, each of which is a quotient of F[X] by a proper ideal; the zero ideal cannot be present since the resulting free module would be infinite-dimensional as an F-vector space, while V is finite-dimensional. For the polynomials fi one then takes the unique monic generators of the respective ideals, and since the structure theorem ensures containment of every ideal in the preceding ideal, one obtains the divisibility conditions for the fi. See [DF] for details.
Given an arbitrary square matrix, the elementary divisors used in the construction of the Jordan normal form do not exist over F[X], so the invariant factors fi as given above must be used instead. The last of these factors fk is then the minimal polynomial, which all the invariant factors therefore divide, and the product of the invariant factors gives the characteristic polynomial. Note that this implies that the minimal polynomial divides the characteristic polynomial (which is essentially the Cayley–Hamilton theorem), and that every irreducible factor of the characteristic polynomial also divides the minimal polynomial (possibly with lower multiplicity).
For each invariant factor fi one takes its companion matrix Cfi, and the block diagonal matrix formed from these blocks yields the rational canonical form of A. When the minimal polynomial is identical to the characteristic polynomial (the case k = 1), the Frobenius normal form is the companion matrix of the characteristic polynomial. As the rational canonical form is uniquely determined by the unique invariant factors associated to A, and these invariant factors are independent of basis, it follows that two square matrices A and B are similar if and only if they have the same rational canonical form.
The Frobenius normal form does not reflect any form of factorization of the characteristic polynomial, even if it does exist over the ground field F. This implies that it is invariant when F is replaced by a different field (as long as it contains the entries of the original matrix A). On the other hand, this makes the Frobenius normal form rather different from other normal forms that do depend on factoring the characteristic polynomial, notably the diagonal form (if A is diagonalizable) or more generally the Jordan normal form (if the characteristic polynomial splits into linear factors). For instance, the Frobenius normal form of a diagonal matrix with distinct diagonal entries is just the companion matrix of its characteristic polynomial.
There is another way to define a normal form that, like the Frobenius normal form, is always defined over the same field F as A, but that does reflect a possible factorization of the characteristic polynomial (or equivalently the minimal polynomial) into irreducible factors over F, and which reduces to the Jordan normal form when this factorization only contains linear factors (corresponding to eigenvalues). This form[1] is sometimes called the generalized Jordan normal form, or primary rational canonical form. It is based on the fact that the vector space can be canonically decomposed into a direct sum of stable subspaces corresponding to the distinct irreducible factors P of the characteristic polynomial (as stated by the lemme des noyaux[fr][2]), where the characteristic polynomial of each summand is a power of the corresponding P. These summands can be further decomposed, non-canonically, as a direct sum of cyclic F[x]-modules (as is done for the Frobenius normal form above), where the characteristic polynomial of each summand is still a (generally smaller) power of P. The primary rational canonical form is a block diagonal matrix corresponding to such a decomposition into cyclic modules, with a particular form called a generalized Jordan block in the diagonal blocks, corresponding to a particular choice of a basis for the cyclic modules. This generalized Jordan block is itself a block matrix of the form
where C is the companion matrix of the irreducible polynomial P, and U is a matrix whose sole nonzero entry is a 1 in the upper right-hand corner. For the case of a linear irreducible factor P = x − λ, these blocks are reduced to single entries C = λ and U = 1, and one finds a (transposed) Jordan block. In any generalized Jordan block, all entries immediately below the main diagonal are 1. A basis of the cyclic module giving rise to this form is obtained by choosing a generating vector v (one that is not annihilated by Pk−1(A), where the minimal polynomial of the cyclic module is Pk), and taking as basis
where d = deg P.
|
https://en.wikipedia.org/wiki/Frobenius_normal_form
|
A binary recompiler is a compiler that takes executable binary files as input, analyzes their structure, applies transformations and optimizations, and outputs new optimized executable binaries.[1]
The foundations of binary recompilation were laid by Gary Kildall[2][3][4][5][6][7][8] with the development of the optimizing assembly-code translator XLT86 in 1981.[4][9][10][11]
|
https://en.wikipedia.org/wiki/Binary_recompiler
|
Nearest neighbor search (NNS), as a form of proximity search, is the optimization problem of finding the point in a given set that is closest (or most similar) to a given point. Closeness is typically expressed in terms of a dissimilarity function: the less similar the objects, the larger the function values.
Formally, the nearest-neighbor (NN) search problem is defined as follows: given a set S of points in a space M and a query point q ∈ M, find the closest point in S to q. Donald Knuth in vol. 3 of The Art of Computer Programming (1973) called it the post-office problem, referring to an application of assigning to a residence the nearest post office. A direct generalization of this problem is a k-NN search, where we need to find the k closest points.
Most commonly M is a metric space and dissimilarity is expressed as a distance metric, which is symmetric and satisfies the triangle inequality. Even more common, M is taken to be the d-dimensional vector space where dissimilarity is measured using the Euclidean distance, Manhattan distance or other distance metric. However, the dissimilarity function can be arbitrary. One example is asymmetric Bregman divergence, for which the triangle inequality does not hold.[1]
The nearest neighbor search problem arises in numerous fields of application, including:
Various solutions to the NNS problem have been proposed. The quality and usefulness of the algorithms are determined by the time complexity of queries as well as the space complexity of any search data structures that must be maintained. The informal observation usually referred to as the curse of dimensionality states that there is no general-purpose exact solution for NNS in high-dimensional Euclidean space using polynomial preprocessing and polylogarithmic search time.
The simplest solution to the NNS problem is to compute the distance from the query point to every other point in the database, keeping track of the "best so far". This algorithm, sometimes referred to as the naive approach, has a running time of O(dN), where N is the cardinality of S and d is the dimensionality of S. There are no search data structures to maintain, so the linear search has no space complexity beyond the storage of the database. Naive search can, on average, outperform space partitioning approaches on higher dimensional spaces.[5]
The absolute distance is not required for distance comparison, only the relative distance. In geometric coordinate systems the distance calculation can be sped up considerably by omitting the square root from the calculation of the distance between two coordinates: since the square root is monotonic, comparing squared distances still yields identical results.
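A minimal sketch of the naive linear scan using squared distances (the point set and query are illustrative):

```python
import numpy as np

def nearest_neighbor(points, q):
    """Naive linear scan in O(dN): compare squared distances, which
    preserve the ordering of true distances (sqrt is monotonic)."""
    d2 = np.sum((points - q) ** 2, axis=1)   # no square root needed
    return int(np.argmin(d2))

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))
q = np.array([0.5, 0.5, 0.5])

i = nearest_neighbor(pts, q)
# Cross-check against the true (rooted) distances.
dists = np.linalg.norm(pts - q, axis=1)
assert i == int(np.argmin(dists))
assert np.all(dists[i] <= dists)
```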
Since the 1970s, the branch and bound methodology has been applied to the problem. In the case of Euclidean space, this approach encompasses spatial index or spatial access methods. Several space-partitioning methods have been developed for solving the NNS problem. Perhaps the simplest is the k-d tree, which iteratively bisects the search space into two regions containing half of the points of the parent region. Queries are performed via traversal of the tree from the root to a leaf by evaluating the query point at each split. Depending on the distance specified in the query, neighboring branches that might contain hits may also need to be evaluated. For constant dimension query time, average complexity is O(log N)[6] in the case of randomly distributed points; worst case complexity is O(kN^(1−1/k)).[7] Alternatively the R-tree data structure was designed to support nearest neighbor search in a dynamic context, as it has efficient algorithms for insertions and deletions such as the R* tree.[8] R-trees can yield nearest neighbors not only for Euclidean distance, but can also be used with other distances.
In the case of general metric space, the branch-and-bound approach is known as the metric tree approach. Particular examples include vp-tree and BK-tree methods.
Using a set of points taken from a 3-dimensional space and put into a BSP tree, and given a query point taken from the same space, a possible solution to the problem of finding the nearest point-cloud point to the query point is given in the following description of an algorithm.
(Strictly speaking, no single such point may exist, because the nearest point may not be unique. In practice, though, we usually care only about finding any one of the point-cloud points that lie at the shortest distance to a given query point.) The idea is, for each branching of the tree, guess that the closest point in the cloud resides in the half-space containing the query point. This may not be the case, but it is a good heuristic. After having recursively solved the problem for the guessed half-space, compare the distance returned by this result with the shortest distance from the query point to the partitioning plane. This latter distance is that between the query point and the closest possible point that could exist in the half-space not searched. If this distance is greater than that returned in the earlier result, then clearly there is no need to search the other half-space. If there is such a need, then the problem must be solved for the other half-space as well, and its result compared with the former result before returning the better of the two. The performance of this algorithm is nearer to logarithmic time than linear time when the query point is near the cloud, because as the distance between the query point and the closest point-cloud point nears zero, the algorithm need only perform a look-up using the query point as a key to get the correct result.
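The guess-then-prune recursion described above can be sketched with a k-d tree, a special case of a BSP tree with axis-aligned splitting planes. This is a minimal illustration, not a production structure; the sample points and query are invented:

```python
import math

def build_kdtree(points, depth=0):
    """Recursively bisect the point set, cycling through one axis per level."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, query, best=None):
    """Branch-and-bound descent: search the half-space containing the
    query first, then visit the other half only if the splitting plane
    is closer than the best distance found so far."""
    if node is None:
        return best
    d = math.dist(query, node["point"])
    if best is None or d < best[1]:
        best = (node["point"], d)
    axis = node["axis"]
    diff = query[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if abs(diff) < best[1]:       # the other half-space may still hold a closer point
        best = nearest(far, query, best)
    return best

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
point, dist = nearest(tree, (9, 2))
print(point, round(dist, 3))  # → (8, 1) 1.414
```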
An approximate nearest neighbor search algorithm is allowed to return points whose distance from the query is at most c times the distance from the query to its nearest points. The appeal of this approach is that, in many cases, an approximate nearest neighbor is almost as good as the exact one. In particular, if the distance measure accurately captures the notion of user quality, then small differences in the distance should not matter.[9]
Proximity graph methods (such as navigable small world graphs[10] and HNSW[11][12]) are considered the current state-of-the-art for the approximate nearest neighbors search.
The methods are based on greedy traversal of proximity neighborhood graphs G(V, E) in which every point xᵢ ∈ S is uniquely associated with a vertex vᵢ ∈ V. The search for the nearest neighbors to a query q in the set S takes the form of searching for a vertex in the graph G(V, E).
The basic algorithm – greedy search – works as follows: the search starts from an enter-point vertex vᵢ ∈ V by computing the distances from the query q to each vertex of its neighborhood {vⱼ : (vᵢ, vⱼ) ∈ E}, and then finds the vertex with the minimal distance value. If the distance between the query and the selected vertex is smaller than the distance between the query and the current element, the algorithm moves to the selected vertex, which becomes the new enter point. The algorithm stops when it reaches a local minimum: a vertex whose neighborhood does not contain a vertex closer to the query than the vertex itself.
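The greedy routing just described can be sketched as follows. This is a toy illustration (the graph, coordinates, and enter point are invented); a real system such as HNSW layers many refinements, including multi-layer graphs and candidate queues, on top of this basic loop:

```python
import math

def greedy_search(graph, coords, entry, query):
    """Greedy routing on a proximity graph: repeatedly move to the
    neighbor closest to the query; stop at a local minimum."""
    current = entry
    while True:
        d_curr = math.dist(coords[current], query)
        # neighbor of `current` that is nearest to the query
        best = min(graph[current], key=lambda v: math.dist(coords[v], query))
        if math.dist(coords[best], query) < d_curr:
            current = best        # move, making it the new enter point
        else:
            return current        # local minimum reached

# A small chain graph: vertex 3 is nearest to the query (4.1, 1.9).
coords = {0: (0, 0), 1: (2, 0), 2: (4, 0), 3: (4, 2)}
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(greedy_search(graph, coords, 0, (4.1, 1.9)))  # → 3
```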
The idea of proximity neighborhood graphs was exploited in multiple publications, including the seminal paper by Arya and Mount,[13] in the VoroNet system for the plane,[14] in the RayNet system for Eⁿ,[15] and in the Navigable Small World,[10] Metrized Small World[16] and HNSW[11][12] algorithms for the general case of spaces with a distance function. These works were preceded by a pioneering paper by Toussaint, in which he introduced the concept of a relative neighborhood graph.[17]
Locality-sensitive hashing (LSH) is a technique for grouping points in space into 'buckets' based on some distance metric operating on the points. Points that are close to each other under the chosen metric are mapped to the same bucket with high probability.[18]
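One common LSH family for angular distance hashes each point by the sign of its projection onto a set of random hyperplanes, so that points separated by a small angle tend to share a bucket key. The sketch below is a minimal illustration of this bucketing idea, not any particular library's API:

```python
import random

def make_hasher(dim, n_planes, seed=0):
    """Sign-of-projection LSH: each random hyperplane contributes one
    bit of the bucket key, so nearby points (small angle between them)
    tend to land in the same bucket."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

    def bucket(point):
        return tuple(int(sum(a * x for a, x in zip(plane, point)) >= 0)
                     for plane in planes)

    return bucket

bucket = make_hasher(dim=2, n_planes=8)
print(bucket((1.0, 0.9)))  # an 8-bit bucket key such as (1, 0, 1, ...)
```

Two nearby points usually hash to the same key, while an antipodal point flips every bit; a full index would keep a dictionary from bucket keys to point lists and scan only the query's bucket.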
The cover tree has a theoretical bound that is based on the dataset's doubling constant. The bound on search time is O(c^12 log n), where c is the expansion constant of the dataset.
In the special case where the data is a dense 3D map of geometric points, the projection geometry of the sensing technique can be used to dramatically simplify the search problem.
This approach requires that the 3D data is organized by a projection to a two-dimensional grid and assumes that the data is spatially smooth across neighboring grid cells with the exception of object boundaries.
These assumptions are valid when dealing with 3D sensor data in applications such as surveying, robotics and stereo vision but may not hold for unorganized data in general.
In practice this technique has an average search time of O(1) or O(K) for the k-nearest neighbor problem when applied to real world stereo vision data.[4]
In high-dimensional spaces, tree indexing structures become useless because an increasing percentage of the nodes need to be examined anyway. To speed up linear search, a compressed version of the feature vectors stored in RAM is used to prefilter the datasets in a first run. The final candidates are determined in a second stage using the uncompressed data from the disk for distance calculation.[19]
The VA-file approach is a special case of a compression based search, where each feature component is compressed uniformly and independently. The optimal compression technique in multidimensional spaces is Vector Quantization (VQ), implemented through clustering. The database is clustered and the most "promising" clusters are retrieved. Huge gains over VA-file, tree-based indexes and sequential scan have been observed.[20][21] Also note the parallels between clustering and LSH.
There are numerous variants of the NNS problem and the two most well-known are the k-nearest neighbor search and the ε-approximate nearest neighbor search.
k-nearest neighbor search identifies the top k nearest neighbors to the query. This technique is commonly used in predictive analytics to estimate or classify a point based on the consensus of its neighbors. k-nearest neighbor graphs are graphs in which every point is connected to its k nearest neighbors.
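A brute-force top-k query can be written with a bounded heap, which keeps the pass over N points at O(N log k) rather than a full sort. A sketch assuming Euclidean distance and an invented point set:

```python
import heapq
import math

def k_nearest(query, points, k):
    """Return the k points nearest to `query`, closest first.
    heapq.nsmallest maintains a heap of size k during a single pass."""
    return heapq.nsmallest(k, points, key=lambda p: math.dist(p, query))

pts = [(0, 0), (1, 1), (5, 5), (2, 2), (9, 9)]
print(k_nearest((0, 0), pts, 3))  # → [(0, 0), (1, 1), (2, 2)]
```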
In some applications it may be acceptable to retrieve a "good guess" of the nearest neighbor. In those cases, we can use an algorithm which doesn't guarantee to return the actual nearest neighbor in every case, in return for improved speed or memory savings. Often such an algorithm will find the nearest neighbor in a majority of cases, but this depends strongly on the dataset being queried.
Algorithms that support the approximate nearest neighbor search include locality-sensitive hashing, best bin first and balanced box-decomposition tree based search.[22]
Nearest neighbor distance ratio applies the threshold not to the direct distance from the original point to the challenger neighbor, but to the ratio of that distance to the distance of the previous neighbor. It is used in CBIR to retrieve pictures through a "query by example" using the similarity between local features. More generally, it is involved in several matching problems.
Fixed-radius near neighbors is the problem where one wants to efficiently find all points given in Euclidean space within a given fixed distance from a specified point. The distance is assumed to be fixed, but the query point is arbitrary.
For some applications (e.g. entropy estimation), we may have N data-points and wish to know which is the nearest neighbor for every one of those N points. This could, of course, be achieved by running a nearest-neighbor search once for every point, but an improved strategy would be an algorithm that exploits the information redundancy between these N queries to produce a more efficient search. As a simple example: when we find the distance from point X to point Y, that also tells us the distance from point Y to point X, so the same calculation can be reused in two different queries.
Given a fixed dimension, a semi-definite positive norm (thereby including every Lp norm), and n points in this space, the nearest neighbour of every point can be found in O(n log n) time and the m nearest neighbours of every point can be found in O(mn log n) time.[23][24]
https://en.wikipedia.org/wiki/Nearest_neighbor_search
A Strange Loop is a musical with book, music, and lyrics by Michael R. Jackson, and winner of the 2020 Pulitzer Prize for Drama.[1] First produced off-Broadway in 2019, then staged in Washington, D.C. in 2021,[2] A Strange Loop premiered on Broadway at the Lyceum Theatre in April 2022.[3][4] The show won Best Musical and Best Book of a Musical at the 75th Tony Awards.
While working as an usher at The Lion King, aspiring musical theater writer Usher contemplates the show he is writing, wanting it to represent what it is like to "travel the world in a fat, black, queer body" ("Intermission Song"). He plans to change himself, but his thoughts are too disruptive ("Today"). His mother, who constantly reminds him how hard she and his father worked to raise him, calls with a request that he write a Tyler Perry-style gospel play ("We Wanna Know").
Usher wishes he could act more like his "inner white girl" but is held back by expectations put on Black boys ("Inner White Girl"). His Thoughts criticize the show, claiming the main character should have more sex appeal and telling him to add certain elements. Usher's father leaves a voicemail saying he found Scott Rudin's number and urges him to leverage their common sexuality to make a connection ("Didn't Want Nothin'").
At a medical checkup, Usher's doctor inquires about his sex life and prescribes Truvada. Usher starts using dating apps but is rejected, causing him to rage against the gay community ("Exile In Gayville"). A stranger flirts with Usher before revealing he is a figment of Usher's imagination and dismissing the war between the "second-wave feminist" and "the dick-sucking Black gay man" within him ("Second Wave").
Usher's agent tells him Tyler Perry is seeking a ghostwriter for a gospel play, but Usher has a low opinion of Perry's work. Appearing as famous Black figures, his Thoughts accuse him of being a race traitor and persuade him to take the job ("Tyler Perry Writes Real Life"). Usher writes the play, acting out all the characters as caricatures ("Writing a Gospel Play").
At work, Usher tells a patron he cannot continue the show without confronting his parents with his artistic self. The patron advises him to live without fear ("A Sympathetic Ear"). Usher speaks with his parents over the phone, his father asking if he has contracted HIV and his mother asking about the status of the gospel play. After hooking up with "Inwood Daddy", a white man who fetishizes him and calls him racial slurs, Usher questions his "Boundaries".
On his birthday, Usher's mother leaves a voicemail reminding him that homosexuality is a sin ("Periodically") while his father calls to inform him their church does not approve of his music ("Didn't Want Nothin' [Reprise]"). While Usher argues with his parents, his mother, horrified by her portrayal in the play, accuses him of hating and disappointing her. Usher recalls how his friend Darnell, thinking he deserved to die, refused HIV medication, and concludes living with AIDS is worse than dying from it ("Precious Little Dream/AIDS Is God's Punishment"). His mother tells Usher he is loved but thinks he is struggling because of his homosexuality.
The Thought who is portraying Usher's mother breaks the fourth wall, asking Usher if he wants the show to end with hateful caricatures of his parents. Usher claims that he is depicting life as it was when he was 17, to which the Thoughts remind him he is now 26. Usher then reflects on his childhood, realizing his perceptions must change before he can change ("Memory Song"). With his back to the audience, Usher wonders what will happen when the show ends. He turns around, concluding that he does not need to change because change is an illusion, and he, like everyone else, is in "A Strange Loop".
A Strange Loop began previews at off-Broadway venue Playwrights Horizons on May 24, 2019. It opened on June 17, 2019, with closing scheduled for July 7, 2019,[9] before extending to July 28, 2019.[10] The show featured Larry Owens as Usher. The creative team credits included Michael R. Jackson as writer of book, music, and lyrics, Stephen Brackett as director, Raja Feather Kelly as choreographer, Charlie Rosen as orchestrator, and Rona Siddiqui as music director.[11]
The Washington, DC production at Woolly Mammoth Theatre Company was originally scheduled for September 2020, but postponed to December 2021 due to the COVID-19 pandemic.[12][2] The six-week limited run began previews November 22, 2021, and opened December 3, 2021.[13] The show extended another week, changing its closing date from January 2 to 9, 2022.[14]
The Broadway production of A Strange Loop was announced December 20, 2021.[15] Many notable people from the entertainment industry served as the show's producers. They included lead producer Barbara Whitman, as well as Benj Pasek, Justin Paul, Jennifer Hudson, RuPaul Charles, Marc Platt, Megan Ellison of Annapurna Pictures, Don Cheadle, Frank Marshall, James L. Nederlander, Alan Cumming, Ilana Glazer, Mindy Kaling and Billy Porter.[16] Previews were scheduled to begin on April 6, but were postponed to April 14 due to COVID-19 outbreaks among the cast.[17] The show officially opened April 26, 2022. In October 2022, it was announced that the show would play its final performance on Broadway on January 15, 2023.[18]
The London production opened at the Barbican Centre on June 17, 2023, for a limited run until September 9. It was produced by Howard Panter for Trafalgar Theatre Productions, the National Theatre, Barbara Whitman and Wessex Grove.[19] Brackett, Kelly, Rosen, Siddiqui, and the rest of the Broadway creative team returned for the London production.[20] Kyle Ramar Freeman, who understudied Usher on Broadway, starred in the production alongside an all-British cast of Thoughts.[21] Freeman played his final performance on August 12 due to prior commitments. Kyle Birch stepped into the role of Usher on August 14 and continued through the show's final performance.[22]
A Strange Loop played as a co-production between American Conservatory Theater (A.C.T.) in San Francisco and the Center Theatre Group in Los Angeles, presented at A.C.T. from April 18 to May 12, 2024, and the Center Theatre Group's Ahmanson Theatre from June 5 to June 30, 2024. The production was helmed by the same creative team as the Broadway and London mountings and featured a largely new cast as Usher and the Thoughts, with John-Andrew Morrison reprising his role as Thought 4.[23][24]
Upon opening off-Broadway on June 17, 2019, A Strange Loop received critical acclaim. In particular, it was praised for its emotional honesty and for the meta themes within both the writing and the musical compositions. The performances of the cast were also praised, with critics calling them “physically exhaustive.” However, the show was deemed unlikely to become a Broadway show due to its potential to get "too easily lost in a Broadway House."[25]
A Strange Loop opened on Broadway on April 26, 2022, and also received critical acclaim. In particular, it was praised for retaining the themes and tone of the off-Broadway version while polishing them for the larger, more consumer-based crowd found on Broadway. There was light critique that working within the large institution of Broadway, instead of merely peering in, made some of the commentary shallow.[26][27]
The original off-Broadway cast recording was released on September 27, 2019, on Yellow Sound Label.[28] The album peaked at number 6 on the Billboard Cast Albums chart.[29] A Broadway cast album was recorded on April 10, 2022, and released on June 10, 2022, through Sh-K-Boom Records, Yellow Sound Label, Barbara Whitman Productions, and Ghostlight Records.[30] It debuted at number two on the Cast Albums chart.[31]
On June 14, 2022, Deadline reported that the musical filled 98% of its available seats during the week ending June 12. The musical grossed $676,316 for seven performances.[32] The musical also broke the Lyceum Theatre box office house record for a standard 8-performance week, taking $860,496 for the week ending June 26, a $15,183 bump over the previous week.[33] As of September 2022, A Strange Loop had grossed around $14.2 million from an attendance of 136,777 over 157 performances.[34]
On May 4, 2020, the Pulitzer Prize for Drama was awarded to Jackson for the musical, with the committee citing the show as "a metafictional musical that tracks the creative process of an artist transforming issues of identity, race, and sexuality that once pushed him to the margins of the cultural mainstream into a meditation on universal human fears and insecurities." The show is the tenth musical to win the award, as well as the first musical written by a Black person to win and the first musical to win without a Broadway run.[35] The show premiered on Broadway in April 2022 and won the Tony Award for Best Musical. As one of its producers, Jennifer Hudson became the second Black woman to receive all four of the major American entertainment awards (EGOT).[36]
The cast recording received a nomination for the Grammy Award for Best Musical Theater Album at the 2023 Grammy Awards.
https://en.wikipedia.org/wiki/A_Strange_Loop
In formal semantics, conservativity is a proposed linguistic universal which states that any determiner D must obey the equivalence D(A, B) ↔ D(A, A ∩ B). For instance, the English determiner "every" can be seen to be conservative by the equivalence of the following two sentences, schematized in generalized quantifier notation to the right.[1][2][3]
Conceptually, conservativity can be understood as saying that the elements of B which are not elements of A are not relevant for evaluating the truth of the determiner phrase as a whole. For instance, the truth of the first sentence above does not depend on which biting non-aardvarks exist.[1][2][3]
Conservativity is significant to semantic theory because there are many logically possible determiners which are not attested as denotations of natural language expressions. For instance, consider the imaginary determiner shmore, defined so that shmore(A, B) is true iff |A| > |B|. If there are 50 biting aardvarks, 50 non-biting aardvarks, and millions of non-aardvark biters, shmore(A, B) will be false but shmore(A, A ∩ B) will be true.[1][2][3]
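Treating determiners as functions on finite sets makes the definition easy to test mechanically. In this sketch (with invented toy sets), a single pair (A, B) suffices to witness that shmore is not conservative; the check for "every" is merely illustrative, since genuine conservativity quantifies over all pairs of sets:

```python
def every(A, B):
    """'Every A is B' as a relation between sets: A is a subset of B."""
    return A <= B

def shmore(A, B):
    """The imaginary non-conservative determiner from the text."""
    return len(A) > len(B)

def looks_conservative(det, A, B):
    """Check the conservativity equivalence D(A, B) <-> D(A, A & B)
    on one particular pair of sets."""
    return det(A, B) == det(A, A & B)

aardvarks = {"a1", "a2"}
biters = {"a1", "b1", "b2", "b3"}   # mostly non-aardvark biters

print(looks_conservative(every, aardvarks, biters))   # → True
print(looks_conservative(shmore, aardvarks, biters))  # → False
```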
Some potential counterexamples to conservativity have been observed, notably, the English expression "only". This expression has been argued to not be a determiner since it can stack with bona fide determiners and can combine with non-nominal constituents such as verb phrases.[4]
Different analyses have treated conservativity as a constraint on the lexicon, as a structural constraint arising from the architecture of the syntax-semantics interface, and as a constraint on learnability.[5][6][7]
https://en.wikipedia.org/wiki/Conservativity
In statistics and psychometrics, reliability is the overall consistency of a measure.[1] A measure is said to have a high reliability if it produces similar results under consistent conditions:
It is the characteristic of a set of test scores that relates to the amount of random error from the measurement process that might be embedded in the scores. Scores that are highly reliable are precise, reproducible, and consistent from one testing occasion to another. That is, if the testing process were repeated with a group of test takers, essentially the same results would be obtained. Various kinds of reliability coefficients, with values ranging between 0.00 (much error) and 1.00 (no error), are usually used to indicate the amount of error in the scores.[2]
For example, measurements of people's height and weight are often extremely reliable.[3][4]
There are several general classes of reliability estimates:
Reliability does not implyvalidity. That is, a reliable measure that is measuring something consistently is not necessarily measuring what is supposed to be measured. For example, while there are many reliable tests of specific abilities, not all of them would be valid for predicting, say, job performance.
While reliability does not implyvalidity, reliability does place a limit on the overall validity of a test. A test that is not perfectly reliable cannot be perfectly valid, either as a means of measuring attributes of a person or as a means of predicting scores on a criterion. While a reliable test may provide useful valid information, a test that is not reliable cannot possibly be valid.[7]
For example, if a set ofweighing scalesconsistently measured the weight of an object as 500 grams over the true weight, then the scale would be very reliable, but it would not be valid (as the returned weight is not the true weight). For the scale to be valid, it should return the true weight of an object. This example demonstrates that a perfectly reliable measure is not necessarily valid, but that a valid measure necessarily must be reliable.
In practice, testing measures are never perfectly consistent. Theories of test reliability have been developed to estimate the effects of inconsistency on the accuracy of measurement. The basic starting point for almost all theories of test reliability is the idea that test scores reflect the influence of two sorts of factors:[7]
These factors include:[7]
The goal of estimating reliability is to determine how much of the variability in test scores is due to measurement errors and how much is due to variability in true scores (true values).[7]
A true score is the replicable feature of the concept being measured. It is the part of the observed score that would recur across different measurement occasions in the absence of error.
Errors of measurement are composed of both random error and systematic error. They represent the discrepancies between scores obtained on tests and the corresponding true scores.
This conceptual breakdown is typically represented by the simple equation:
X = T + E, where X is the observed test score, T is the true score, and E is the measurement error.
The goal of reliability theory is to estimate errors in measurement and to suggest ways of improving tests so that errors are minimized.
The central assumption of reliability theory is that measurement errors are essentially random. This does not mean that errors arise from random processes. For any individual, an error in measurement is not a completely random event. However, across a large number of individuals, the causes of measurement error are assumed to be so varied that measurement errors act as random variables.[7]
If errors have the essential characteristics of random variables, then it is reasonable to assume that errors are equally likely to be positive or negative, and that they are not correlated with true scores or with errors on other tests.
It is assumed that:[8]
Reliability theory shows that the variance of obtained scores is simply the sum of the variance of true scores plus the variance of errors of measurement.[7]
This equation suggests that test scores vary as the result of two factors:
The reliability coefficient ρxx′ provides an index of the relative influence of true and error scores on attained test scores. In its general form, the reliability coefficient is defined as the ratio of true score variance to the total variance of test scores, or, equivalently, one minus the ratio of the variance of the error score to the variance of the observed score: ρxx′ = σT²/σX² = 1 − σE²/σX².
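True scores cannot be observed in real data, but in a simulation we can generate them and check the variance ratio directly. A sketch with invented parameters (true-score SD 15, error SD 5, so the population reliability is 15²/(15² + 5²) = 0.9):

```python
import random
import statistics

random.seed(42)
N = 10_000
true_scores = [random.gauss(100, 15) for _ in range(N)]  # T
errors = [random.gauss(0, 5) for _ in range(N)]          # E, independent of T
observed = [t + e for t, e in zip(true_scores, errors)]  # X = T + E

# reliability = true-score variance / total observed variance
reliability = statistics.pvariance(true_scores) / statistics.pvariance(observed)
print(round(reliability, 3))  # close to 15**2 / (15**2 + 5**2) = 0.9
```

With a finite sample the estimate fluctuates around 0.9; the practical methods discussed below exist precisely because, outside of a simulation, the numerator cannot be computed from data.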
Unfortunately, there is no way to directly observe or calculate the true score, so a variety of methods are used to estimate the reliability of a test.
Some examples of the methods to estimate reliability include test-retest reliability, internal consistency reliability, and parallel-test reliability. Each method comes at the problem of figuring out the source of error in the test somewhat differently.
It was well known to classical test theorists that measurement precision is not uniform across the scale of measurement. Tests tend to distinguish better for test-takers with moderate trait levels and worse among high- and low-scoring test-takers. Item response theory extends the concept of reliability from a single index to a function called the information function. The IRT information function is the inverse of the conditional observed score standard error at any given test score.
The goal of estimating reliability is to determine how much of the variability in test scores is due to errors in measurement and how much is due to variability in true scores.
Four practical strategies have been developed that provide workable methods of estimating test reliability:[7]
The test-retest reliability method directly assesses the degree to which test scores are consistent from one test administration to the next. It involves:
The correlation between scores on the first test and the scores on the retest is used to estimate the reliability of the test using the Pearson product-moment correlation coefficient; see also item-total correlation.
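A sketch of that computation with invented scores from two administrations, computing the Pearson coefficient by hand rather than relying on any statistics package:

```python
test1 = [12, 15, 9, 20, 17, 11]    # first administration
retest = [13, 14, 10, 19, 18, 10]  # same examinees, later administration

n = len(test1)
m1, m2 = sum(test1) / n, sum(retest) / n
# Pearson r = covariance / (product of standard deviations)
cov = sum((a - m1) * (b - m2) for a, b in zip(test1, retest))
r = cov / (sum((a - m1) ** 2 for a in test1) ** 0.5
           * sum((b - m2) ** 2 for b in retest) ** 0.5)
print(round(r, 3))  # → 0.964
```

A value this close to 1 would indicate highly consistent scores across the two occasions; carryover and reactivity effects, discussed next, can inflate it.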
The key to this method is the development of alternate test forms that are equivalent in terms of content, response processes and statistical characteristics. For example, alternate forms exist for several tests of general intelligence, and these tests are generally seen as equivalent.[7]
With the parallel test model it is possible to develop two forms of a test that are equivalent in the sense that a person's true score on form A would be identical to their true score on form B. If both forms of the test were administered to a number of people, differences between scores on form A and form B may be due to errors in measurement only.[7] It involves:
The correlation between scores on the two alternate forms is used to estimate the reliability of the test.
This method provides a partial solution to many of the problems inherent in the test-retest reliability method. For example, since the two forms of the test are different, the carryover effect is less of a problem. Reactivity effects are also partially controlled: although taking the first test may change responses to the second test, it is reasonable to assume that the effect will not be as strong with alternate forms of the test as with two administrations of the same test.[7]
However, this technique has its disadvantages:
This method treats the two halves of a measure as alternate forms. It provides a simple solution to the problem that the parallel-forms method faces: the difficulty in developing alternate forms.[7] It involves:
The correlation between these two split halves is used in estimating the reliability of the test. This half-test reliability estimate is then stepped up to the full test length using the Spearman–Brown prediction formula.
There are several ways of splitting a test to estimate reliability. For example, a 40-item vocabulary test could be split into two subtests, the first one made up of items 1 through 20 and the second made up of items 21 through 40. However, the responses from the first half may be systematically different from responses in the second half due to an increase in item difficulty and fatigue.[7]
In splitting a test, the two halves would need to be as similar as possible, both in terms of their content and in terms of the probable state of the respondent. The simplest method is to adopt an odd-even split, in which the odd-numbered items form one half of the test and the even-numbered items form the other. This arrangement guarantees that each half will contain an equal number of items from the beginning, middle, and end of the original test.[7]
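The odd-even split and the Spearman–Brown step-up can be sketched as follows, with rows as respondents and columns as items (the small score matrix is invented):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation, computed directly."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """Odd-even split, then step the half-test correlation r up to the
    full test length via Spearman-Brown: r_full = 2r / (1 + r)."""
    odd = [sum(row[0::2]) for row in item_scores]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in item_scores]  # items 2, 4, 6, ...
    r = pearson_r(odd, even)
    return 2 * r / (1 + r)

scores = [[1, 1, 0, 1], [0, 1, 0, 0], [1, 1, 1, 1], [0, 0, 0, 1]]
print(round(split_half_reliability(scores), 3))  # ≈ 0.95
```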
Internal consistency assesses the consistency of results across items within a test. The most common internal consistency measure is Cronbach's alpha, which is usually interpreted as the mean of all possible split-half coefficients.[9] Cronbach's alpha is a generalization of an earlier form of estimating internal consistency, Kuder–Richardson Formula 20.[9] Although it is the most commonly used measure, there are some misconceptions regarding Cronbach's alpha.[10][11]
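Cronbach's alpha is straightforward to compute from the item variances and the variance of the total scores; a sketch with an invented score matrix (rows are respondents, columns are items):

```python
import statistics

def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    where k is the number of items."""
    k = len(item_scores[0])
    columns = list(zip(*item_scores))  # one tuple per item
    item_vars = sum(statistics.pvariance(col) for col in columns)
    total_var = statistics.pvariance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - item_vars / total_var)

scores = [[1, 1, 0, 1], [0, 1, 0, 0], [1, 1, 1, 1], [0, 0, 0, 1]]
print(round(cronbach_alpha(scores), 3))  # ≈ 0.691
```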
These measures of reliability differ in their sensitivity to different sources of error and so need not be equal. Also, reliability is a property of the scores of a measure rather than the measure itself, and is thus said to be sample dependent. Reliability estimates from one sample might differ from those of a second sample (beyond what might be expected due to sampling variations) if the second sample is drawn from a different population because the true variability is different in this second population. (This is true of measures of all types—yardsticks might measure houses well yet have poor reliability when used to measure the lengths of insects.)
Reliability may be improved by clarity of expression (for written assessments), lengthening the measure,[9] and other informal means. However, formal psychometric analysis, called item analysis, is considered the most effective way to increase reliability. This analysis consists of computation of item difficulties and item discrimination indices, the latter index involving computation of correlations between the items and the sum of the item scores of the entire test. If items that are too difficult, too easy, and/or have near-zero or negative discrimination are replaced with better items, the reliability of the measure will increase.
https://en.wikipedia.org/wiki/Reliability_(statistics)
UML state machine,[1] formerly known as UML statechart, is an extension of the mathematical concept of a finite automaton in computer science applications as expressed in the Unified Modeling Language (UML) notation.
The concepts behind it are about organizing the way a device, computer program, or other (often technical) process works such that an entity or each of its sub-entities is always in exactly one of a number of possible states and where there are well-defined conditional transitions between these states.
UML state machine is an object-based variant of the Harel statechart,[2] adapted and extended by UML.[1][3] The goal of UML state machines is to overcome the main limitations of traditional finite-state machines while retaining their main benefits.
UML statecharts introduce the new concepts of hierarchically nested states and orthogonal regions, while extending the notion of actions. UML state machines have the characteristics of both Mealy machines and Moore machines. They support actions that depend on both the state of the system and the triggering event, as in Mealy machines, as well as entry and exit actions, which are associated with states rather than transitions, as in Moore machines.[4]
The term "UML state machine" can refer to two kinds of state machines:behavioral state machinesandprotocol state machines.
Behavioral state machines can be used to model the behavior of individual entities (e.g., class instances), a subsystem, a package, or even an entire system.
Protocol state machines are used to express usage protocols and can be used to specify the legal usage scenarios of classifiers, interfaces, and ports.
Many software systems are event-driven, which means that they continuously wait for the occurrence of some external or internal event such as a mouse click, a button press, a time tick, or an arrival of a data packet. After recognizing the event, such systems react by performing the appropriate computation that may include manipulating the hardware or generating “soft” events that trigger other internal software components. (That's why event-driven systems are alternatively called reactive systems.) Once the event handling is complete, the system goes back to waiting for the next event.
The response to an event generally depends on both the type of the event and on the internalstateof the system and can include a change of state leading to astate transition. The pattern of events, states, and state transitions among those states can be abstracted and represented as afinite-state machine(FSM).
The concept of a FSM is important inevent-driven programmingbecause it makes the event handling explicitly dependent on both the event-type and on the state of the system. When used correctly, a state machine can drastically cut down the number of execution paths through the code, simplify the conditions tested at each branching point, and simplify the switching between different modes of execution.[5]Conversely, using event-driven programming without an underlying FSM model can lead programmers to produce error prone, difficult to extend and excessively complex application code.[6]
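The keyboard behavior described above can be sketched as a tiny explicit FSM. This is a minimal illustration of event handling that depends on both the event type and the current state; the class and method names are illustrative, not prescribed by UML or any tool:

```java
// Minimal FSM sketch for the keyboard example: the reaction to a
// keystroke depends on both the event and the current state.
enum KeyboardState { DEFAULT, CAPS_LOCKED }

class Keyboard {
    KeyboardState state = KeyboardState.DEFAULT;

    // Dispatch one key event; returns the character code produced
    // (or 0 for the CAPS_LOCK event, which produces no character).
    char onKey(char key, boolean isCapsLock) {
        if (isCapsLock) {
            // CAPS_LOCK toggles between the two states.
            state = (state == KeyboardState.DEFAULT)
                    ? KeyboardState.CAPS_LOCKED : KeyboardState.DEFAULT;
            return 0;
        }
        // ANY_KEY: the emitted code depends on the current state.
        return (state == KeyboardState.CAPS_LOCKED)
                ? Character.toUpperCase(key) : Character.toLowerCase(key);
    }
}
```

Because all state-dependent logic funnels through one dispatch point, there are only two execution paths per event instead of scattered flag checks.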
UML preserves the general form of traditional state diagrams. UML state diagrams are directed graphs in which nodes denote states and connectors denote state transitions. For example, Figure 1 shows a UML state diagram corresponding to the computer keyboard state machine. In UML, states are represented as rounded rectangles labeled with state names. Transitions, represented as arrows, are labeled with the triggering events, optionally followed by the list of executed actions. The initial transition originates from the solid circle and specifies the default state when the system first begins. Every state diagram should have such a transition, which should not be labeled, since it is not triggered by an event. The initial transition can have associated actions.
An event is something that happens that affects the system.
Strictly speaking, in the UML specification,[1] the term event refers to the type of occurrence rather than to any concrete instance of that occurrence.
For example, Keystroke is an event for the keyboard, but each press of a key is not an event but a concrete instance of the Keystroke event. Another event of interest for the keyboard might be Power-on, but turning the power on tomorrow at 10:05:36 will be just an instance of the Power-on event.
An event can have associated parameters, allowing the event instance to convey not only the occurrence of some interesting incident but also quantitative information regarding that occurrence. For example, the Keystroke event generated by pressing a key on a computer keyboard has associated parameters that convey the character scan code as well as the status of the Shift, Ctrl, and Alt keys.
An event instance outlives the instantaneous occurrence that generated it and might convey this occurrence to one or more state machines. Once generated, the event instance goes through a processing life cycle that can consist of up to three stages. First, the event instance is received when it is accepted and waiting for processing (e.g., it is placed on the event queue). Later, the event instance is dispatched to the state machine, at which point it becomes the current event. Finally, it is consumed when the state machine finishes processing the event instance. A consumed event instance is no longer available for processing.
Each state machine has a state, which governs the reaction of the state machine to events. For example, when you strike a key on a keyboard, the character code generated will be either an uppercase or a lowercase character, depending on whether Caps Lock is active. Therefore, the keyboard's behavior can be divided into two states: the "default" state and the "caps_locked" state. (Most keyboards include an LED that indicates that the keyboard is in the "caps_locked" state.) The behavior of a keyboard depends only on certain aspects of its history, namely whether the Caps Lock key has been pressed, but not, for example, on how many and exactly which other keys have been pressed previously. A state can abstract away all possible (but irrelevant) event sequences and capture only the relevant ones.
In the context of software state machines (and especially classical FSMs), the term state is often understood as a single state variable that can assume only a limited number of a priori determined values (e.g., two values in the case of the keyboard, or, more generally, some kind of variable with an enum type in many programming languages). The idea of the state variable (and the classical FSM model) is that the value of the state variable fully defines the current state of the system at any given time. The concept of the state reduces the problem of identifying the execution context in the code to testing just the state variable instead of many variables, thus eliminating a lot of conditional logic.
In practice, however, interpreting the whole state of the state machine as a single state variable quickly becomes impractical for all but the simplest state machines. Indeed, even a single 32-bit integer in the machine state could give rise to over 4 billion different states, leading to a premature state explosion. This interpretation is not practical, so in UML state machines the whole state of the state machine is commonly split into (a) an enumerable state variable and (b) all the other variables, which are named the extended state. Another way to see it is to interpret the enumerable state variable as the qualitative aspect and the extended state as the quantitative aspects of the whole state. In this interpretation, a change of a variable does not always imply a change of the qualitative aspects of the system behavior and therefore does not lead to a change of state.[7]
State machines supplemented with extended state variables are called extended state machines, and UML state machines belong to this category. Extended state machines can apply the underlying formalism to much more complex problems than is practical without extended state variables. For example, to implement some kind of limit in an FSM (say, limiting the number of keystrokes on the keyboard to 1000), without extended state we would need to create and process 1000 states, which is not practical; with an extended state machine, however, we can introduce a key_count variable, which is initialized to 1000 and decremented by every keystroke without changing the state variable.
The state diagram from Figure 2 is an example of an extended state machine, in which the complete condition of the system (called the extended state) is the combination of a qualitative aspect—the state variable—and the quantitative aspects—the extended state variables.
The obvious advantage of extended state machines is flexibility. For example, changing the limit governed by key_count from 1000 to 10000 keystrokes would not complicate the extended state machine at all. The only modification required would be changing the initialization value of the key_count extended state variable.
This flexibility of extended state machines comes with a price, however, because of the complex coupling between the "qualitative" and the "quantitative" aspects of the extended state. The coupling occurs through the guard conditions attached to transitions, as shown in Figure 2.
Guard conditions (or simply guards) are Boolean expressions evaluated dynamically based on the value of extended state variables and event parameters. Guard conditions affect the behavior of a state machine by enabling actions or transitions only when they evaluate to TRUE and disabling them when they evaluate to FALSE. In the UML notation, guard conditions are shown in square brackets (e.g., [key_count == 0] in Figure 2).
The need for guards is the immediate consequence of adding memory (extended state variables) to the state machine formalism. Used sparingly, extended state variables and guards make up a powerful mechanism that can simplify designs.
On the other hand, it is possible to abuse extended states and guards quite easily.[8]
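The key_count example can be sketched in code. This is a minimal illustration of an extended state machine with a guard, assuming the topology of Figure 2 (one "default" state and one final state); the class and method names are illustrative:

```java
// Sketch of an extended state machine: key_count is an extended
// state variable, and the transition out of DEFAULT fires only
// when the guard [key_count == 0] evaluates to TRUE.
class LimitedKeyboard {
    enum S { DEFAULT, FINAL_STATE }

    S state = S.DEFAULT;
    int keyCount = 1000; // extended state variable

    void anyKey() {
        if (state != S.DEFAULT) return; // no transitions out of the final state
        keyCount--;                     // quantitative change only...
        if (keyCount == 0) {            // ...unless the guard [key_count == 0] holds
            state = S.FINAL_STATE;      // qualitative change: a real state transition
        }
    }
}
```

Note that 999 of the 1000 keystrokes change only the extended state; only the last one triggers a state transition, which is exactly the economy the article describes.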
When an event instance is dispatched, the state machine responds by performing actions, such as changing a variable, performing I/O, invoking a function, generating another event instance, or changing to another state. Any parameter values associated with the current event are available to all actions directly caused by that event.
Switching from one state to another is called a state transition, and the event that causes it is called the triggering event, or simply the trigger. In the keyboard example, if the keyboard is in the "default" state when the Caps Lock key is pressed, the keyboard will enter the "caps_locked" state. However, if the keyboard is already in the "caps_locked" state, pressing Caps Lock will cause a different transition—from the "caps_locked" to the "default" state. In both cases, pressing Caps Lock is the triggering event.
In extended state machines, a transition can have a guard, which means that the transition can "fire" only if the guard evaluates to TRUE. A state can have many transitions in response to the same trigger, as long as they have nonoverlapping guards; however, this situation could create problems in the sequence of evaluation of the guards when the common trigger occurs. The UML specification[1] intentionally does not stipulate any particular order; rather, UML puts the burden on the designer to devise guards in such a way that the order of their evaluation does not matter. Practically, this means that guard expressions should have no side effects, at least none that would alter evaluation of other guards having the same trigger.
State machine formalisms, including UML state machines, universally assume that a state machine completes the processing of each event before it can start processing the next event. This model of execution is called run to completion, or RTC.
In the RTC model, the system processes events in discrete, indivisible RTC steps. New incoming events cannot interrupt the processing of the current event and must be stored (typically in an event queue) until the state machine becomes idle again. These semantics completely avoid any internal concurrency issues within a single state machine. The RTC model also gets around the conceptual problem of processing actions associated with transitions, where the state machine is not in a well-defined state (it is between two states) for the duration of the action. During event processing, the system is unresponsive (unobservable), so the ill-defined state during that time has no practical significance.
Note, however, that RTC does not mean that a state machine has to monopolize the CPU until the RTC step is complete.[1] The preemption restriction only applies to the task context of the state machine that is already busy processing events. In a multitasking environment, other tasks (not related to the task context of the busy state machine) can be running, possibly preempting the currently executing state machine. As long as other state machines do not share variables or other resources with each other, there are no concurrency hazards.
The key advantage of RTC processing is simplicity. Its biggest disadvantage is that the responsiveness of a state machine is determined by its longest RTC step. Achieving short RTC steps can often significantly complicate real-time designs.
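The RTC discipline can be sketched with a small dispatcher. This is a minimal illustration, not any library's API: events posted while an RTC step is in progress are queued rather than processed re-entrantly, so each step runs to completion before the next begins.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of run-to-completion dispatch: a "busy" flag prevents
// re-entrant processing; events generated during a step are queued.
class RtcProcessor {
    private final Queue<String> queue = new ArrayDeque<>();
    private boolean busy = false;
    final StringBuilder log = new StringBuilder();

    void post(String event) {
        queue.add(event);
        if (busy) return;          // already inside an RTC step: just enqueue
        busy = true;
        String e;
        while ((e = queue.poll()) != null) {
            handle(e);             // one indivisible RTC step per event
        }
        busy = false;
    }

    private void handle(String event) {
        log.append('[').append(event).append(']');
        if (event.equals("A")) post("B"); // handler generates an internal event
    }
}
```

Posting "A" logs "[A][B]": the internally generated "B" is queued during A's step and handled only after A completes, never in the middle of it.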
Though traditional FSMs are an excellent tool for tackling smaller problems, it is also generally known that they tend to become unmanageable even for moderately involved systems. Due to the phenomenon known as state and transition explosion, the complexity of a traditional FSM tends to grow much faster than the complexity of the system it describes. This happens because the traditional state machine formalism forces repetition. For example, if you try to represent the behavior of a simple pocket calculator with a traditional FSM, you will immediately notice that many events (e.g., the Clear or Off button presses) are handled identically in many states. A conventional FSM, shown in the figure below, has no means of capturing such a commonality and requires repeating the same actions and transitions in many states. What is missing in traditional state machines is a mechanism for factoring out the common behavior in order to share it across many states.
UML state machines address exactly this shortcoming of conventional FSMs. They provide a number of features for eliminating the repetitions, so that the complexity of a UML state machine no longer explodes but tends to faithfully represent the complexity of the reactive system it describes. Obviously, these features are very interesting to software developers, because they alone make the whole state machine approach truly applicable to real-life problems.
The most important innovation of UML state machines over traditional FSMs is the introduction of hierarchically nested states (that is why statecharts are also called hierarchical state machines, or HSMs). The semantics associated with state nesting are as follows (see Figure 3): if a system is in a nested state, for example "result" (called the substate), it also (implicitly) is in the surrounding state "on" (called the superstate). This state machine will attempt to handle any event in the context of the substate, which conceptually is at the lower level of the hierarchy. However, if the substate "result" does not prescribe how to handle the event, the event is not quietly discarded as in a traditional "flat" state machine; rather, it is automatically handled at the higher-level context of the superstate "on". This is what is meant by the system being in state "result" as well as "on". Of course, state nesting is not limited to one level only, and the simple rule of event processing applies recursively to any level of nesting.
States that contain other states are called composite states; conversely, states without internal structure are called simple states. A nested state is called a direct substate when it is not contained by any other state; otherwise, it is referred to as a transitively nested substate.
Because the internal structure of a composite state can be arbitrarily complex, any hierarchical state machine can be viewed as the internal structure of some (higher-level) composite state. It is conceptually convenient to define one composite state as the ultimate root of the state machine hierarchy. In the UML specification, every state machine has a region (the abstract root of every state machine hierarchy),[9] which contains all the other elements of the entire state machine.
The graphical rendering of this all-enclosing region is optional.
As you can see, the semantics of hierarchical state decomposition are designed to facilitate the reuse of behavior. The substates (nested states) need only define the differences from the superstates (containing states). A substate can easily inherit[6] the common behavior from its superstate(s) by simply ignoring commonly handled events, which are then automatically handled by higher-level states. In other words, hierarchical state nesting enables programming by difference.[10]
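The bubbling-up rule can be sketched directly. This is a minimal illustration of programming by difference, loosely based on the calculator example; the class names and event strings are invented for the sketch:

```java
// Sketch of hierarchical event processing: a substate handles only
// what differs; unhandled events bubble up to the superstate.
abstract class HierState {
    final HierState parent;
    HierState(HierState parent) { this.parent = parent; }

    // Returns the action taken, or null if this state does not
    // prescribe how to handle the event.
    abstract String tryHandle(String event);

    // Recursive dispatch: unhandled events go to the superstate;
    // only at the root is an event quietly discarded.
    String dispatch(String event) {
        String result = tryHandle(event);
        if (result != null) return result;
        return (parent != null) ? parent.dispatch(event) : "discarded";
    }
}

// The "on" superstate handles CLEAR in exactly one place...
class On extends HierState {
    On() { super(null); }
    String tryHandle(String e) { return e.equals("CLEAR") ? "cleared" : null; }
}

// ...so the "result" substate need not repeat that transition.
class Result extends HierState {
    Result(On on) { super(on); }
    String tryHandle(String e) { return e.equals("DIGIT") ? "new-entry" : null; }
}
```

A flat FSM would have to copy the CLEAR handling into every state; here the single superstate handler is shared by all substates.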
The aspect of state hierarchy emphasized most often is abstraction—an old and powerful technique for coping with complexity. Instead of addressing all aspects of a complex system at the same time, it is often possible to ignore (abstract away) some parts of the system. Hierarchical states are an ideal mechanism for hiding internal details because the designer can easily zoom out or zoom in to hide or show nested states.
However, composite states don't simply hide complexity; they also actively reduce it through the powerful mechanism of hierarchical event processing. Without such reuse, even a moderate increase in system complexity could lead to an explosive increase in the number of states and transitions. For example, the hierarchical state machine representing the pocket calculator (Figure 3) avoids repeating the transitions Clear and Off in virtually every state. Avoiding repetition allows the growth of HSMs to remain proportionate to growth in system complexity. As the modeled system grows, the opportunity for reuse also increases and thus potentially counteracts the disproportionate increase in numbers of states and transitions typical of traditional FSMs.
Analysis by hierarchical state decomposition can include the application of the operation 'exclusive-OR' to any given state. For example, if a system is in the "on" superstate (Figure 3), it may be the case that it is also in either the "operand1" substate OR the "operand2" substate OR the "opEntered" substate OR the "result" substate. This would lead to a description of the "on" superstate as an 'OR-state'.
UML statecharts also introduce the complementary AND-decomposition. Such decomposition means that a composite state can contain two or more orthogonal regions (orthogonal means compatible and independent in this context) and that being in such a composite state entails being in all its orthogonal regions simultaneously.[11]
Orthogonal regions address the frequent problem of a combinatorial increase in the number of states when the behavior of a system is fragmented into independent, concurrently active parts. For example, apart from the main keypad, a computer keyboard has an independent numeric keypad. From the previous discussion, recall the two states of the main keypad already identified: "default" and "caps_locked" (see Figure 1). The numeric keypad also can be in two states—"numbers" and "arrows"—depending on whether Num Lock is active. The complete state space of the keyboard in the standard decomposition is therefore the Cartesian product of the two components (main keypad and numeric keypad) and consists of four states: "default–numbers," "default–arrows," "caps_locked–numbers," and "caps_locked–arrows." However, this would be an unnatural representation because the behavior of the numeric keypad does not depend on the state of the main keypad and vice versa. The use of orthogonal regions allows the mixing of independent behaviors as a Cartesian product to be avoided and, instead, for them to remain separate, as shown in Figure 4.
Note that if the orthogonal regions are fully independent of each other, their combined complexity is simply additive, which means that the number of independent states needed to model the system is simply the sum k + l + m + ..., where k, l, m, ... denote the numbers of OR-states in each orthogonal region. The general case of mutual dependency, on the other hand, results in multiplicative complexity, so in general, the number of states needed is the product k × l × m × ....
In most real-life situations, orthogonal regions are only approximately orthogonal (i.e., not truly independent). Therefore, UML statecharts provide a number of ways for orthogonal regions to communicate and synchronize their behaviors. Among this rich set of (sometimes complex) mechanisms, perhaps the most important feature is that orthogonal regions can coordinate their behaviors by sending event instances to each other.
Even though orthogonal regions imply independence of execution (allowing more or less concurrency), the UML specification does not require that a separate thread of execution be assigned to each orthogonal region (although this can be done if desired). In fact, most commonly, orthogonal regions execute within the same thread.[12] The UML specification requires only that the designer does not rely on any particular order for event instances to be dispatched to the relevant orthogonal regions.
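The keyboard's two regions can be sketched as two independent state variables inside one object, with every event dispatched to both. This is a minimal single-threaded illustration (names invented for the sketch), showing why the model needs 2 + 2 rather than 2 × 2 states:

```java
// Sketch of orthogonal regions: the main keypad and the numeric
// keypad are independent regions of one composite state, each
// tracked by its own state variable.
class KeyboardRegions {
    enum Main { DEFAULT, CAPS_LOCKED }
    enum Numeric { NUMBERS, ARROWS }

    Main main = Main.DEFAULT;
    Numeric numeric = Numeric.NUMBERS;

    // Each event is offered to both regions; each region reacts
    // only to the events it cares about and ignores the rest.
    void dispatch(String event) {
        if (event.equals("CAPS_LOCK"))
            main = (main == Main.DEFAULT) ? Main.CAPS_LOCKED : Main.DEFAULT;
        if (event.equals("NUM_LOCK"))
            numeric = (numeric == Numeric.NUMBERS) ? Numeric.ARROWS : Numeric.NUMBERS;
    }
}
```

Toggling Num Lock leaves the main-keypad region untouched, which is exactly the independence the Cartesian-product encoding obscures.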
Every state in a UML statechart can have optional entry actions, which are executed upon entry to the state, as well as optional exit actions, which are executed upon exit from the state. Entry and exit actions are associated with states, not transitions. Regardless of how a state is entered or exited, all its entry and exit actions will be executed. Because of this characteristic, statecharts behave like Moore machines. The UML notation for state entry and exit actions is to place the reserved word "entry" (or "exit") in the state right below the name compartment, followed by a forward slash and the list of arbitrary actions (see Figure 5).
The value of entry and exit actions is that they provide means for guaranteed initialization and cleanup, very much like class constructors and destructors in object-oriented programming. For example, consider the "door_open" state from Figure 5, which corresponds to the toaster-oven behavior while the door is open. This state has a very important safety-critical requirement: always disable the heater when the door is open. Additionally, while the door is open, the internal lamp illuminating the oven should light up.
Of course, such behavior could be modeled by adding appropriate actions (disabling the heater and turning on the light) to every transition path leading to the "door_open" state (the user may open the door at any time during "baking" or "toasting", or when the oven is not used at all). One must also remember to extinguish the internal lamp with every transition leaving the "door_open" state. However, such a solution would cause the repetition of actions in many transitions. More importantly, such an approach leaves the design error-prone during subsequent amendments to behavior (e.g., the next programmer working on a new feature, such as top-browning, might simply forget to disable the heater on transition to "door_open").
Entry and exit actions allow implementation of the desired behavior in a safer, simpler, and more intuitive way. As shown in Figure 5, it could be specified that the exit action from "heating" disables the heater, the entry action to "door_open" lights up the oven lamp, and the exit action from "door_open" extinguishes the lamp. Using entry and exit actions is preferable to placing an action on a transition because it avoids repetitive coding and eliminates a safety hazard (heater on while the door is open). The semantics of exit actions guarantees that, regardless of the transition path, the heater will be disabled when the toaster is not in the "heating" state.
Because entry actions are executed automatically whenever an associated state is entered, they often determine the conditions of operation or the identity of the state, very much as a class constructor determines the identity of the object being constructed. For example, the identity of the "heating" state is determined by the fact that the heater is turned on. This condition must be established before entering any substate of "heating" because entry actions to a substate of "heating," like "toasting," rely on proper initialization of the "heating" superstate and perform only the differences from this initialization. Consequently, the order of execution of entry actions must always proceed from the outermost state to the innermost state (top-down).
Not surprisingly, this order is analogous to the order in which class constructors are invoked. Construction of a class always starts at the very root of the class hierarchy and follows through all inheritance levels down to the class being instantiated. The execution of exit actions, which corresponds to destructor invocation, proceeds in the exact reverse order (bottom-up).
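The constructor/destructor ordering can be sketched with a small state-configuration stack. This is a minimal illustration (the Oven class and its logging are invented for the sketch): entering a nested configuration runs entry actions top-down, and exiting runs exit actions bottom-up.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of entry/exit action ordering in an HSM, using the toaster
// example: the active state configuration is kept as a stack from
// the outermost state to the innermost state.
class Oven {
    final StringBuilder log = new StringBuilder();
    final Deque<String> active = new ArrayDeque<>(); // outermost .. innermost

    // Enter a nested state configuration, outermost state first
    // (analogous to constructors running from the base class down).
    void enter(String... outerToInner) {
        for (String s : outerToInner) {
            active.addLast(s);
            log.append("entry:").append(s).append(';');
        }
    }

    // Exit the whole configuration, innermost state first
    // (analogous to destructors running in reverse order).
    void exitAll() {
        while (!active.isEmpty()) {
            log.append("exit:").append(active.removeLast()).append(';');
        }
    }
}
```

Entering "heating" then "toasting" and exiting again yields the order entry:heating, entry:toasting, exit:toasting, exit:heating, mirroring the constructor/destructor analogy above.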
Very commonly, an event causes only some internal actions to execute but does not lead to a change of state (state transition). In this case, all the actions executed comprise the internal transition. For example, when one types on a keyboard, it responds by generating different character codes. However, unless the Caps Lock key is pressed, the state of the keyboard does not change (no state transition occurs). In UML, this situation should be modeled with internal transitions, as shown in Figure 6. The UML notation for internal transitions follows the general syntax used for exit (or entry) actions, except that instead of the word entry (or exit) the internal transition is labeled with the triggering event (e.g., see the internal transition triggered by the ANY_KEY event in Figure 6).
In the absence of entry and exit actions, internal transitions would be identical to self-transitions (transitions in which the target state is the same as the source state). In fact, in a classical Mealy machine, actions are associated exclusively with state transitions, so the only way to execute actions without changing state is through a self-transition (depicted as a directed loop in Figure 1 at the top of this article). However, in the presence of entry and exit actions, as in UML statecharts, a self-transition involves the execution of exit and entry actions and is therefore distinctly different from an internal transition.
In contrast to a self-transition, no entry or exit actions are ever executed as a result of an internal transition, even if the internal transition is inherited from a higher level of the hierarchy than the currently active state. Internal transitions inherited from superstates at any level of nesting act as if they were defined directly in the currently active state.
State nesting combined with entry and exit actions significantly complicates the state transition semantics in HSMs compared to traditional FSMs. When dealing with hierarchically nested states and orthogonal regions, the simple term current state can be quite confusing. In an HSM, more than one state can be active at once. If the state machine is in a leaf state that is contained in a composite state (which is possibly contained in a higher-level composite state, and so on), all the composite states that either directly or transitively contain the leaf state are also active. Furthermore, because some of the composite states in this hierarchy might have orthogonal regions, the current active state is actually represented by a tree of states starting with the single region at the root down to individual simple states at the leaves. The UML specification refers to such a state tree as a state configuration.[1]
In UML, a state transition can directly connect any two states. These two states, which may be composite, are designated as the main source and the main target of a transition. Figure 7 shows a simple transition example and explains the state roles in that transition. The UML specification prescribes that taking a state transition involves executing the actions in the following predefined sequence (see Section 14.2.3.9.6 of OMG Unified Modeling Language (OMG UML)[1]):
The transition sequence is easy to interpret in the simple case where both the main source and the main target are nested at the same level. For example, transition T1 shown in Figure 7 causes the evaluation of the guard g(), followed by the sequence of actions a(); b(); t(); c(); d(); and e(), assuming that the guard g() evaluates to TRUE.
However, in the general case of source and target states nested at different levels of the state hierarchy, it might not be immediately obvious how many levels of nesting need to be exited.
The UML specification[1] prescribes that a transition involves exiting all nested states from the current active state (which might be a direct or transitive substate of the main source state) up to, but not including, the least common ancestor (LCA) state of the main source and main target states.
As the name indicates, the LCA is the lowest composite state that is simultaneously a superstate (ancestor) of both the source and the target states. As described before, the order of execution of exit actions is always from the most deeply nested state (the current active state) up the hierarchy to the LCA, but without exiting the LCA. For instance, the LCA(s1, s2) of states "s1" and "s2" shown in Figure 7 is state "s".
Entering the target state configuration commences from the level where the exit actions left off (i.e., from inside the LCA). As described before, entry actions must be executed starting from the highest-level state down the state hierarchy to the main target state. If the main target state is composite, the UML semantics prescribes "drilling" into its submachine recursively using the local initial transitions. The target state configuration is completely entered only after a leaf state that has no initial transitions is encountered.
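Finding the LCA itself is a small graph computation over the parent links of the state hierarchy. This is a minimal sketch (state names and the parent-map representation are invented for the illustration, not part of the UML specification's own algorithm):

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of least-common-ancestor lookup: state nesting is modeled
// as a map from each state to its parent (containing) state.
class LcaDemo {
    static String lca(String source, String target, Map<String, String> parent) {
        // Collect all proper ancestors of the source state.
        Set<String> ancestors = new HashSet<>();
        for (String s = parent.get(source); s != null; s = parent.get(s))
            ancestors.add(s);
        // Walk up from the target; the first shared ancestor is the LCA.
        for (String s = parent.get(target); s != null; s = parent.get(s))
            if (ancestors.contains(s)) return s;
        return null; // unreachable if both states live under one root region
    }
}
```

For the Figure 7 hierarchy, where "s1" and "s2" are both directly nested in "s", the lookup returns "s", matching LCA(s1, s2) = s in the text.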
Before UML 2,[1] the only transition semantics in use was the external transition, in which the main source of the transition is always exited and the main target of the transition is always entered.
UML 2 preserved the "external transition" semantics for backward compatibility, but also introduced a new kind of transition called the local transition (see Section 14.2.3.4.4 of Unified Modeling Language (UML)[1]).
For many transition topologies, external and local transitions are actually identical. However, a local transition does not cause exit from and reentry to the main source state if the main target state is a substate of the main source. In addition, a local transition does not cause exit from and reentry to the main target state if the main target is a superstate of the main source state.
Figure 8 contrasts local (a) and external (b) transitions. In the top row, you see the case of the main source containing the main target. The local transition does not cause exit from the source, while the external transition causes exit and reentry to the source. In the bottom row of Figure 8, you see the case of the main target containing the main source. The local transition does not cause entry to the target, whereas the external transition causes exit and reentry to the target.
Sometimes an event arrives at a particularly inconvenient time, when a state machine is in a state that cannot handle the event. In many cases, the nature of the event is such that it can be postponed (within limits) until the system enters another state, in which it is better prepared to handle the original event.
UML state machines provide a special mechanism for deferring events in states. In every state, you can include a clause [event list]/defer. If an event in the current state's deferred-event list occurs, the event will be saved (deferred) for future processing until a state is entered that does not list the event in its deferred-event list. Upon entry to such a state, the UML state machine will automatically recall any saved events that are no longer deferred and will then either consume or discard these events. It is possible for a superstate to have a transition defined on an event that is deferred by a substate. Consistent with other areas in the specification of UML state machines, the substate takes precedence over the superstate: the event will be deferred, and the transition for the superstate will not be executed. In the case of orthogonal regions where one orthogonal region defers an event and another consumes the event, the consumer takes precedence: the event is consumed, not deferred.
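The save-and-recall behavior can be sketched with a small deferral queue. This is a minimal illustration (state names, event strings, and the per-state deferred sets are invented for the sketch): an event deferred by the current state is saved, and on entry to a state that no longer defers it, the saved event is recalled and processed.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Set;

// Sketch of UML event deferral: the "busy" state defers REQUEST
// events; on transition to "idle", saved events are recalled.
class Deferring {
    String state = "busy";
    final Deque<String> deferred = new ArrayDeque<>();
    final StringBuilder handled = new StringBuilder();

    // The deferred-event list of each state (hypothetical states).
    private Set<String> deferredIn(String s) {
        return s.equals("busy") ? Set.of("REQUEST") : Set.of();
    }

    void post(String event) {
        if (deferredIn(state).contains(event)) {
            deferred.add(event);                 // save for later
        } else {
            handled.append(event).append(';');   // process normally
        }
    }

    void transitionTo(String target) {
        state = target;
        // Recall saved events; any still deferred by the new state
        // simply re-enter the deferral queue via post().
        int n = deferred.size();
        for (int i = 0; i < n; i++) {
            post(deferred.poll());
        }
    }
}
```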
Harel statecharts, which are the precursors of UML state machines, were invented as "a visual formalism for complex systems",[2] so from their inception they have been inseparably associated with graphical representation in the form of state diagrams.
However, it is important to understand that the concept of UML state machine transcends any particular notation, graphical or textual.
The UML specification[1]makes this distinction apparent by clearly separating state machine semantics from the notation.
However, the notation of UML statecharts is not purely visual. Any nontrivial state machine requires a large amount of textual information (e.g., the specification of actions and guards). The exact syntax of action and guard expressions isn't defined in the UML specification, so many people use either structured English or, more formally, expressions in an implementation language such asC,C++, orJava.[13]In practice, this means that UML statechart notation depends heavily on the specificprogramming language.
Nevertheless, most of the statecharts semantics are heavily biased toward graphical notation. For example, state diagrams poorly represent the sequence of processing, be it order of evaluation ofguardsor order of dispatching events toorthogonal regions. The UML specification sidesteps these problems by putting the burden on the designer not to rely on any particular sequencing. However, it is the case that when UML state machines are actually implemented, there is inevitably full control over order of execution, giving rise to criticism that the UML semantics may be unnecessarily restrictive. Similarly, statechart diagrams require a lot of plumbing gear (pseudostates, like joins, forks, junctions, choicepoints, etc.) to represent the flow of control graphically. In other words, these elements of the graphical notation do not add much value in representing flow of control as compared to plainstructured code.
The UML notation and semantics are really geared toward computerized UML tools. A UML state machine, as represented in a tool, is not just the state diagram, but rather a mixture of graphical and textual representation that precisely captures both the state topology and the actions. The users of the tool can get several complementary views of the same state machine, both visual and textual, whereas the generated code is just one of the many available views.
|
https://en.wikipedia.org/wiki/UML_state_machine
|
In computer programming languages, a recursive data type (also known as a recursively defined, inductively defined or inductive data type) is a data type for values that may contain other values of the same type. Data of recursive types are usually viewed as directed graphs.[citation needed]
An important application of recursion in computer science is in defining dynamic data structures such as Lists and Trees. Recursive data structures can dynamically grow to an arbitrarily large size in response to runtime requirements; in contrast, a static array's size requirements must be set at compile time.
Sometimes the term "inductive data type" is used for algebraic data types which are not necessarily recursive.
An example is the list type, in Haskell:
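The listing is not preserved here; the standard form of this declaration reads:

```haskell
-- A list of a's is either empty (Nil) or a cons cell holding a head
-- of type a and a tail that is again a list of a's.
data List a = Nil | Cons a (List a)
```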
This indicates that a list of a's is either an empty list or a cons cell containing an 'a' (the "head" of the list) and another list (the "tail").
Another example is a similar singly linked type in Java:
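A minimal sketch of such a class (the field and constructor names here are illustrative):

```java
// A non-empty list node of type E: a value plus a reference to the
// rest of the list; null marks the end of the list.
class List<E> {
    E value;
    List<E> rest;

    List(E value, List<E> rest) {
        this.value = value;
        this.rest = rest;
    }
}
```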
This indicates that a non-empty list of type E contains a data member of type E, and a reference to another List object for the rest of the list (or a null reference to indicate that this is the end of the list).
Data types can also be defined by mutual recursion. The most important basic example of this is a tree, which can be defined mutually recursively in terms of a forest (a list of trees). Symbolically:
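In schematic form (matching the prose description that follows):

```latex
f = [t_1, \ldots, t_k] \qquad t = (v, f)
```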
A forest f consists of a list of trees, while a tree t consists of a pair of a value v and a forest f (its children). This definition is elegant and easy to work with abstractly (such as when proving theorems about properties of trees), as it expresses a tree in simple terms: a list of one type, and a pair of two types.
This mutually recursive definition can be converted to a singly recursive definition by inlining the definition of a forest:
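The inlined form can be written as:

```latex
t = (v, [t_1, \ldots, t_k])
```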
A tree t consists of a pair of a value v and a list of trees (its children). This definition is more compact, but somewhat messier: a tree consists of a pair of one type and a list of another, which requires disentangling to prove results about.
In Standard ML, the tree and forest data types can be mutually recursively defined as follows, allowing empty trees:[1]
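One common way to write this pair of declarations (the mutual recursion uses the and keyword):

```sml
datatype 'a tree   = Empty | Node of 'a * 'a forest
and      'a forest = Nil   | Cons of 'a tree * 'a forest
```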
In Haskell, the tree and forest data types can be defined similarly:
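A corresponding pair of Haskell declarations might read (constructor names chosen to mirror the Standard ML version):

```haskell
-- A tree is empty or holds a value and a forest of children;
-- a forest is a hand-rolled list of trees.
data Tree a   = Empty | Node a (Forest a)
data Forest a = Nil   | Cons (Tree a) (Forest a)
```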
In type theory, a recursive type has the general form μα.T where the type variable α may appear in the type T and stands for the entire type itself.
For example, the natural numbers (see Peano arithmetic) may be defined by the Haskell datatype:
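The usual declaration, with the Zero and Succ constructors discussed below:

```haskell
-- A natural number is either zero or the successor of another natural.
data Nat = Zero | Succ Nat
```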
In type theory, we would say: nat = μα. 1 + α, where the two arms of the sum type represent the Zero and Succ data constructors. Zero takes no arguments (thus represented by the unit type) and Succ takes another Nat (thus another element of μα. 1 + α).
There are two forms of recursive types: the so-called isorecursive types, and equirecursive types. The two forms differ in how terms of a recursive type are introduced and eliminated.
With isorecursive types, the recursive type μα.T and its expansion (or unrolling) T[μα.T/α] (where the notation X[Y/Z] indicates that all instances of Z are replaced with Y in X) are distinct (and disjoint) types with special term constructs, usually called roll and unroll, that form an isomorphism between them. To be precise: roll : T[μα.T/α] → μα.T and unroll : μα.T → T[μα.T/α], and these two are inverse functions.
Under equirecursive rules, a recursive type μα.T and its unrolling T[μα.T/α] are equal – that is, those two type expressions are understood to denote the same type. In fact, most theories of equirecursive types go further and essentially specify that any two type expressions with the same "infinite expansion" are equivalent. As a result of these rules, equirecursive types contribute significantly more complexity to a type system than isorecursive types do. Algorithmic problems such as type checking and type inference are more difficult for equirecursive types as well. Since direct structural comparison does not make sense for equirecursive types, they can be converted into a canonical form in O(n log n) time, and the canonical forms can easily be compared.[2]
Isorecursive types capture the form of self-referential (or mutually referential) type definitions seen in nominal object-oriented programming languages, and also arise in the type-theoretic semantics of objects and classes. In functional programming languages, isorecursive types (in the guise of datatypes) are common too.[3]
In TypeScript, recursion is allowed in type aliases.[4] Thus, the following example is allowed.
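For instance (the alias and helper function here are chosen for illustration, not taken from the TypeScript documentation):

```typescript
// A recursive type alias: a value is either a number or an
// arbitrarily nested array of such values.
type NestedNumbers = number | NestedNumbers[];

// Walk the recursive structure to flatten it into a plain array.
const flatten = (v: NestedNumbers): number[] =>
    typeof v === "number" ? [v] : v.flatMap(flatten);

console.log(flatten([1, [2, [3, 4]], 5]));  // [ 1, 2, 3, 4, 5 ]
```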
However, recursion is not allowed in type synonyms in Miranda, OCaml (unless the -rectypes flag is used or it is a record or variant), or Haskell; so, for example, the following Haskell types are illegal:
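For example (both declarations are rejected by the compiler; the name Bad matches the expansion discussed below, and the second synonym is an additional illustrative case):

```haskell
type Bad  = (Int, Bad)
type Evil = Bool -> Evil
```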
Instead, they must be wrapped inside an algebraic data type (even if it has only one constructor):
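For example, using the data keyword (constructor names here are illustrative):

```haskell
-- A data (or newtype) declaration introduces a distinct type, so the
-- recursion goes through a constructor rather than a bare alias.
data Good = Good (Int, Good)
data Fine = Fine (Bool -> Fine)
```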
This is because type synonyms, like typedefs in C, are replaced with their definition at compile time. (Type synonyms are not "real" types; they are just "aliases" for convenience of the programmer.) But if this is attempted with a recursive type, it will loop infinitely because no matter how many times the alias is substituted, it still refers to itself; e.g., "Bad" will grow indefinitely: Bad → (Int, Bad) → (Int, (Int, Bad)) → ....
Another way to see it is that a level of indirection (the algebraic data type) is required to allow the isorecursive type system to figure out when to roll and unroll.
|
https://en.wikipedia.org/wiki/Recursive_data_type
|
In statistics, response surface methodology (RSM) explores the relationships between several explanatory variables and one or more response variables. RSM is an empirical modeling approach that uses mathematical and statistical techniques to relate input variables, otherwise known as factors, to the response. RSM became very useful because the alternatives, such as purely theoretical models, could be cumbersome, time-consuming, inefficient, error-prone, and unreliable. The method was introduced by George E. P. Box and K. B. Wilson in 1951. The main idea of RSM is to use a sequence of designed experiments to obtain an optimal response. Box and Wilson suggest using a second-degree polynomial model to do this. They acknowledge that this model is only an approximation, but they use it because such a model is easy to estimate and apply, even when little is known about the process.
Statistical approaches such as RSM can be employed to maximize the production of a substance by optimizing operational factors. Recently, RSM, using a proper design of experiments (DoE), has become extensively used for formulation optimization.[1] In contrast to conventional methods, the interaction among process variables can be determined by statistical techniques.[2]
An easy way to estimate a first-degree polynomial model is to use a factorial experiment or a fractional factorial design. This is sufficient to determine which explanatory variables affect the response variable(s) of interest. Once it is suspected that only significant explanatory variables are left, then a more complicated design, such as a central composite design, can be implemented to estimate a second-degree polynomial model, which is still only an approximation at best. However, the second-degree model can be used to optimize (maximize, minimize, or attain a specific target for) the response variable(s) of interest.
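The second-degree step can be sketched numerically: fit a quadratic response surface to experimental data by ordinary least squares, then locate its stationary point by solving the gradient system. The data below are synthetic (a known quadratic plus noise), standing in for a real designed experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two factors x1, x2 and a noisy quadratic response with its true
# optimum at (1.0, -0.5).
x1 = rng.uniform(-2, 2, 200)
x2 = rng.uniform(-2, 2, 200)
y = 5 - (x1 - 1.0)**2 - 2*(x2 + 0.5)**2 + rng.normal(0, 0.1, 200)

# Design matrix for the full second-degree model:
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: set both partial derivatives to zero and solve
# [2*b11  b12 ] [x1]   [-b1]
# [b12   2*b22] [x2] = [-b2]
H = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])
opt = np.linalg.solve(H, [-b[1], -b[2]])
print(opt)  # close to the true optimum (1.0, -0.5)
```

In a real study the factors would come from a central composite design rather than random sampling, and one would also check whether the stationary point is a maximum, minimum, or saddle point via the eigenvalues of the quadratic part.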
Cubic designs are discussed by Kiefer, by Atkinson, Donev, and Tobias and by Hardin and Sloane.
Spherical designs are discussed by Kiefer and by Hardin and Sloane.
Mixture experiments are discussed in many books on the design of experiments, and in the response-surface methodology textbooks of Box and Draper and of Atkinson, Donev and Tobias. An extensive discussion and survey appears in the advanced textbook by John Cornell.
Some extensions of response surface methodology deal with the multiple response problem. Multiple response variables create difficulty because what is optimal for one response may not be optimal for other responses. Other extensions are used to reduce variability in a single response while targeting a specific value, or attaining a near maximum or minimum while preventing variability in that response from getting too large.
Response surface methodology uses statistical models, and therefore practitioners need to be aware that even the best statistical model is an approximation to reality. In practice, both the models and the parameter values are unknown, and subject to uncertainty on top of ignorance. Of course, an estimated optimum point need not be optimum in reality, because of the errors of the estimates and of the inadequacies of the model.
Nonetheless, response surface methodology has an effective track record of helping researchers improve products and services: For example, Box's original response-surface modeling enabled chemical engineers to improve a process that had been stuck at a saddle point for years. The engineers had not been able to afford to fit a cubic three-level design to estimate a quadratic model, and their biased linear models estimated the gradient to be zero. Box's design reduced the costs of experimentation so that a quadratic model could be fit, which led to a (long-sought) ascent direction.[3][4]
|
https://en.wikipedia.org/wiki/Response_surface_methodology
|
Wiretapping, also known as wire tapping or telephone tapping, is the monitoring of telephone and Internet-based conversations by a third party, often by covert means. The wire tap received its name because, historically, the monitoring connection was an actual electrical tap on an analog telephone or telegraph line. Legal wiretapping by a government agency is also called lawful interception. Passive wiretapping monitors or records the traffic, while active wiretapping alters or otherwise affects it.[1][2]
Lawful interception is officially strictly controlled in many countries to safeguard privacy; this is the case in all liberal democracies. In theory, telephone tapping often needs to be authorized by a court and, again in theory, is normally only approved when evidence shows it is not possible to detect criminal or subversive activity in less intrusive ways. Often, the law and regulations require that the crime investigated must be at least of a certain severity.[3][4] Illegal or unauthorized telephone tapping is often a criminal offense.[3] In certain jurisdictions, such as Germany and France, courts will accept illegally recorded phone calls without the other party's consent as evidence, but the unauthorized telephone tapping will still be prosecuted.[5][6]
In the United States, under the Foreign Intelligence Surveillance Act, federal intelligence agencies can get approval for wiretaps from the United States Foreign Intelligence Surveillance Court, a court with secret proceedings, or in certain circumstances from the Attorney General without a court order.[7][8]
The telephone call recording laws in most U.S. states require only one party to be aware of the recording, while twelve states require both parties to be aware.[9][10] In Nevada, the state legislature enacted a law making it legal for a party to record a conversation if one party to the conversation consented, but the Nevada Supreme Court issued two judicial opinions changing the law and requiring all parties to consent to the recording of a private conversation for it to be legal.[11] It is considered better practice to announce at the beginning of a call that the conversation is being recorded.[12][13]
The Fourth Amendment to the United States Constitution protects privacy rights by requiring a warrant to search a person. However, telephone tapping is the subject of controversy surrounding violations of this right. There are arguments that wiretapping invades a person's personal privacy and therefore violates their Fourth Amendment rights. On the other hand, there are certain rules and regulations which permit wiretapping. A notable example of this is the Patriot Act, which, in certain circumstances, gives the government permission to wiretap citizens.[14] In addition, wiretapping laws vary per state, making it even more difficult to determine whether the Fourth Amendment is being violated.[15]
In Canadian law, police are allowed to wiretap without authorization from a court when there is a risk of imminent harm, such as kidnapping or a bomb threat.[16] They must believe that the interception is immediately necessary to prevent an unlawful act that could cause serious harm to any person or to property. This was introduced by Rob Nicholson on February 11, 2013, and is also known as Bill C-55.[16] The Supreme Court gave Parliament twelve months to rewrite a new law. Bill C-51 (also known as the Anti-Terrorism Act) was then released in 2015, which transformed the Canadian Security Intelligence Service from an intelligence-gathering agency to an agency actively engaged in countering national security threats.
Legal protection extends to 'private communications' where the participants would not expect unintended persons to learn the content of the communication. A single participant can legally and covertly record a conversation. Otherwise, police normally need a judicial warrant based upon probable grounds to record a conversation they are not a part of. To be valid, a wiretap authorization must state: 1) the offense being investigated by the wiretap, 2) the type of communication, 3) the identity of the people or places targeted, and 4) the period of validity (60 days from issue).[17]
In India, the lawful interception of communication by authorized law enforcement agencies (LEAs) is carried out in accordance with Section 5(2) of the Indian Telegraph Act, 1885, read with Rule 419A of the Indian Telegraph (Amendment) Rules, 2007. Directions for interception of any message or class of messages under sub-section (2) of Section 5 of the Indian Telegraph Act, 1885 shall not be issued except by an order made by the Secretary to the Government of India in the Ministry of Home Affairs in the case of the Government of India, and by the Secretary to the State Government in charge of the Home Department in the case of a state government.[18] The government has set up the Centralized Monitoring System (CMS) to automate the process of lawful interception and monitoring of telecommunications technology. On 2 December 2015, in a reply to parliament question no. 595 on the scope, objectives and framework of the CMS, the government of India stated that, to strike a balance between national security, online privacy and free speech, lawful interception and monitoring is governed by Section 5(2) of the Indian Telegraph Act, 1885, read with Rule 419A of the Indian Telegraph (Amendment) Rules, 2007, wherein an oversight mechanism exists in the form of a review committee chaired by the Cabinet Secretary at the central government level and by the Chief Secretary of the State at the state government level.[19][20] Section 5(2) also allows the government to intercept messages in cases of public emergency or in the interest of public safety.[21]
In Pakistan, the Inter-Services Intelligence (ISI) was authorised in July 2024 by the Ministry of Information Technology and Telecommunication to intercept and trace telecommunications, as stipulated under Section 54 of the relevant Act.[22] Under the authorization, ISI officers of at least grade 18, subject to periodic designation, are empowered to surveil calls and messages.[22]
The contracts or licenses by which the state controls telephone companies often require that the companies must provide access to tapping lines to law enforcement. In the U.S., telecommunications carriers are required by law to cooperate in the interception of communications for law enforcement purposes under the terms of the Communications Assistance for Law Enforcement Act (CALEA).[23]
When telephone exchanges were mechanical, a tap had to be installed by technicians, linking circuits together to route the audio signal from the call. Now that many exchanges have been converted to digital technology, tapping is far simpler and can be ordered remotely by computer. This central office switch wiretapping technology using the Advanced Intelligent Network (AIN) was invented by Wayne Howe and Dale Malik at BellSouth's Advanced Technology R&D group in 1995 and was issued as US Patent #5,590,171.[24] Telephone services provided by cable TV companies also use digital switching technology. If the tap is implemented at a digital switch, the switching computer simply copies the digitized bits that represent the phone conversation to a second line, and it is impossible to tell whether a line is being tapped. A well-designed tap installed on a phone wire can be difficult to detect. In some places, law enforcement may even be able to access a mobile phone's internal microphone while it isn't actively being used on a phone call (unless the battery is removed or drained).[25] The noises that some people believe to be telephone taps are simply crosstalk created by the coupling of signals from other phone lines.[26]
Data on the calling and called number, time of call and duration will generally be collected automatically on all calls and stored for later use by the billing department of the phone company. These data can be accessed by security services, often with fewer legal restrictions than for a tap. This information used to be collected using special equipment known as pen registers and trap and trace devices, and U.S. law still refers to it under those names. Today, a list of all calls to a specific number can be obtained by sorting billing records. A telephone tap during which only the call information is recorded but not the contents of the phone calls themselves is called a pen register tap.
For telephone services via digital exchanges, the information collected may additionally include a log of the type of communications media being used (some services treat data and voice communications differently, in order to conserve bandwidth).
Conversations can be recorded or monitored unofficially, either by tapping by a third party without the knowledge of the parties to the conversation or recorded by one of the parties. This may or may not be illegal, according to the circumstances and the jurisdiction.
There are a number of ways to monitor telephone conversations. One of the parties may record the conversation, either on a tape or solid-state recording device, or they may use a computer running call recording software. The recording, whether overt or covert, may be started manually, automatically when it detects sound on the line (VOX), or automatically whenever the phone is off the hook.
The conversation may be monitored (listened to or recorded) covertly by a third party by using an induction coil or a direct electrical connection to the line using a beige box. An induction coil is usually placed underneath the base of a telephone or on the back of a telephone handset to pick up the signal inductively. An electrical connection can be made anywhere in the telephone system, and need not be in the same premises as the telephone. Some apparatus may require occasional access to replace batteries or tapes. Poorly designed tapping or transmitting equipment can cause interference audible to users of the telephone.[citation needed]
The tapped signal may either be recorded at the site of the tap or transmitted by radio or over the telephone wires. As of 2007, state-of-the-art equipment operates in the 30–300 GHz range to keep up with telephone technology, compared to the 772 kHz systems used in the past.[29][30] The transmitter may be powered from the line to be maintenance-free, and only transmits when a call is in progress. These devices are low-powered as not much power can be drawn from the line, but a state-of-the-art receiver could be located as far away as ten kilometers under ideal conditions, though usually located much closer. Research has shown that a satellite can be used to receive terrestrial transmissions with a power of a few milliwatts.[31] Any sort of radio transmitter whose presence is suspected is detectable with suitable equipment.
Conversations on many early cordless telephones could be picked up with a simple radio scanner or sometimes even a domestic radio. Widespread digital spread spectrum technology and encryption have made eavesdropping increasingly difficult.
A problem with recording a telephone conversation is that the recorded volume of the two speakers may be very different. A simple tap will have this problem. An in-ear microphone, while involving an additional distorting step by converting the electrical signal to sound and back again, in practice gives better-matched volume. Dedicated, and relatively expensive, telephone recording equipment equalizes the sound at both ends from a direct tap much better.
Mobile phones are, in surveillance terms, a major liability.
For mobile phones, the major threat is the collection of communications data.[32][33] This data does not only include information about the time, duration, originator and recipient of the call, but also the identification of the base station where the call was made from, which equals its approximate geographical location. This data is stored with the details of the call and has utmost importance for traffic analysis.
It is also possible to get greater resolution of a phone's location by combining information from a number of cells surrounding the location, cells which routinely communicate (to agree on the next handoff for a moving phone), and by measuring the timing advance, a correction for the speed of light in the GSM standard. This additional precision must be specifically enabled by the telephone company—it is not part of the network's ordinary operation.
In 1995, Peter Garza, a Special Agent with the Naval Criminal Investigative Service, conducted the first court-ordered Internet wiretap in the United States while investigating Julio Cesar "Griton" Ardita.[34][35]
As technologies emerge, including VoIP, new questions are raised about law enforcement access to communications (see VoIP recording). In 2004, the Federal Communications Commission was asked to clarify how the Communications Assistance for Law Enforcement Act (CALEA) related to Internet service providers. The FCC stated that “providers of broadband Internet access and voice over Internet protocol (“VoIP”) services are regulable as “telecommunications carriers” under the Act.”[36] Those affected by the Act will have to provide access to law enforcement officers who need to monitor or intercept communications transmitted through their networks. As of 2009, warrantless surveillance of internet activity has consistently been upheld in FISA court.[37]
The Internet Engineering Task Force has decided not to consider requirements for wiretapping as part of the process for creating and maintaining IETF standards.[38]
Typically, illegal Internet wiretapping is conducted via Wi-Fi connection to someone's Internet by cracking the WEP or WPA key, using a tool such as Aircrack-ng or Kismet.[39][40] Once in, the intruder relies on a number of potential tactics, for example an ARP spoofing attack, allowing the intruder to view packets in a tool such as Wireshark or Ettercap.
The first generation of mobile phones (c. 1978 through 1990) could be easily monitored by anyone with a 'scanning all-band receiver' because the system used an analog transmission system, like an ordinary radio transmitter. In contrast, digital phones are harder to monitor because they use digitally encoded and compressed transmission. However, the government can tap mobile phones with the cooperation of the phone company.[41] It is also possible for organizations with the correct technical equipment to monitor mobile phone communications and decrypt the audio.[citation needed]
To the mobile phones in its vicinity, a device called an "IMSI-catcher" pretends to be a legitimate base station of the mobile phone network, thus subjecting the communication between the phone and the network to a man-in-the-middle attack. This is possible because, while the mobile phone has to authenticate itself to the mobile telephone network, the network does not authenticate itself to the phone.[42][failed verification] There is no defense against IMSI-catcher based eavesdropping, except using end-to-end call encryption; products offering this feature, secure telephones, are already beginning to appear on the market, though they tend to be expensive and incompatible with each other, which limits their proliferation.[43]
Logging the IP addresses of users that access certain websites is commonly called "webtapping".[44]
Webtapping is used to monitor websites that presumably contain dangerous or sensitive materials, and the people that access them. It is allowed in the US by the Patriot Act, but is considered a questionable practice by many.[45]
In Canada, anyone is legally allowed to record a conversation as long as they are involved in the conversation. The police must apply for a warrant beforehand to legally eavesdrop on a conversation, which requires that it is expected to reveal evidence of a crime. State agents may record conversations, but must obtain a warrant to use them as evidence in court.
The history of voice communication technology began in 1876 with the invention of Alexander Graham Bell's telephone. In the 1890s, "law enforcement agencies begin tapping wires on early telephone networks".[46] Remote voice communications "were carried almost exclusively by circuit-switched systems," where telephone switches would connect wires to form a continuous circuit and disconnect the wires when the call ended. All other telephone services, such as call forwarding and message taking, were handled by human operators.[47]
The earliest wiretaps were extra wires — physically inserted to the line between the switchboard and the subscriber — that carried the signal to a pair of earphones and a recorder. Later, wiretaps were installed at the central office on the frames that held the incoming wires.[47] In late 1940, the Nazis tried to secure some telephone lines between their forward headquarters in Paris and a variety of Fuhrerbunkers in Germany. They did this by constantly monitoring the voltage on the lines, looking for any sudden drops or increases in voltage indicating that other wiring had been attached. However, the French telephone engineer Robert Keller succeeded in attaching taps without alerting the Nazis. This was done through an isolated rental property just outside of Paris. Keller's group became known to SOE (and later Allied military intelligence generally) as "Source K". They were later betrayed by a mole within the French resistance, and Keller was murdered in the Bergen-Belsen concentration camp in April 1945. The first computerized telephone switch was developed by Bell Labs in 1965; it did not support standard wiretapping techniques.[46]
In the Greek telephone tapping case 2004–2005, more than 100 mobile phone numbers belonging mostly to members of the Greek government, including the Prime Minister of Greece, and top-ranking civil servants were found to have been illegally tapped for a period of at least one year. The Greek government concluded this had been done by a foreign intelligence agency, for security reasons related to the 2004 Olympic Games, by unlawfully activating the lawful interception subsystem of the Vodafone Greece mobile network. An Italian tapping case which surfaced in November 2007 revealed significant manipulation of the news at the national television company RAI.[48]
Many state legislatures in the United States enacted statutes that prohibited anyone from listening in on telegraph communication. Telephone wiretapping began in the 1890s,[3] and its constitutionality was established in the Prohibition-era conviction of bootlegger Roy Olmstead. Wiretapping has also been carried out in the US under most presidents, sometimes with a lawful warrant since the Supreme Court ruled it constitutional in 1928.
Before the attack on Pearl Harbor and the subsequent entry of the United States into World War II, the U.S. House of Representatives held hearings on the legality of wiretapping for national defense. Significant legislation and judicial decisions on the legality and constitutionality of wiretapping had taken place years before World War II.[49] However, the issue took on new urgency at that time of national crisis. The actions of the government regarding wiretapping for the purpose of national defense in the current war on terror have drawn considerable attention and criticism. In the World War II era, the public was also aware of the controversy over the question of the constitutionality and legality of wiretapping. Furthermore, the public was concerned with the decisions that the legislative and judicial branches of the government were making regarding wiretapping.[50]
On October 19, 1963, U.S. Attorney General Robert F. Kennedy, who served under John F. Kennedy and Lyndon B. Johnson, authorized the FBI to begin wiretapping the communications of Rev. Martin Luther King Jr. The wiretaps remained in place until April 1965 at his home and June 1966 at his office.[51] In 1967, the U.S. Supreme Court ruled that wiretapping (or "intercepting communications") requires a warrant in Katz v. United States.[52] In 1968, Congress passed a law that provided warrants for wiretapping in criminal investigations.[53]
In the 1970s, optical fibers became a medium for telecommunications. These fiber lines, "long, thin strands of glass that carry signals via laser light," are more secure than radio and have become very cheap. From the 1990s to the present, the majority of communications between fixed locations has been achieved by fiber. Because these fiber communications are wired, they are protected under U.S. law.[46][47]
In 1978, the US Foreign Intelligence Surveillance Act (FISA) created a "secret federal court" for issuing wiretap warrants in national security cases. This was in response to findings from the Watergate break-in, which allegedly uncovered a history of presidential operations that had used surveillance on domestic and foreign political organizations.[54]
A difference between wiretapping in the US and elsewhere is that, when operating in other countries, "American intelligence services could not place wiretaps on phone lines as easily as they could in the U.S." Also, in the US, wiretapping is regarded as an extreme investigative technique, while communications are often intercepted in some other countries. The National Security Agency (NSA) "spends billions of dollars every year intercepting foreign communications from ground bases, ships, airplanes and satellites".[47]
FISA distinguishes between U.S. persons and foreigners, between communications inside and outside the U.S., and between wired and wireless communications. Wired communications within the United States are protected, since intercepting them requires a warrant,[47]but there is no regulation of US wiretapping elsewhere.
In 1994, Congress approved the Communications Assistance for Law Enforcement Act (CALEA), which “requires telephone companies to be able to install more effective wiretaps. In 2004, theFederal Bureau of Investigation(FBI),United States Department of Justice(DOJ),Bureau of Alcohol, Tobacco, Firearms, and Explosives(ATF), andDrug Enforcement Administration(DEA) wanted to expand CALEA requirements to VoIP service.”[46][47]
The Federal Communications Commission (FCC) ruled in August 2005 that "broadband-service providers and interconnected VoIP providers fall within CALEA's scope". Currently, instant messaging, web boards and site visits are not included in CALEA's jurisdiction.[55] In 2007 Congress amended FISA to "allow the government to monitor more communications without a warrant". In 2008 President George W. Bush expanded the surveillance of internet traffic to and from the U.S. government by signing a national security directive.[46]
The NSA warrantless surveillance (2001–2007) controversy was discovered in December 2005. It aroused much controversy after then-President George W. Bush admitted to violating a specific federal statute (FISA) and the warrant requirement of the Fourth Amendment to the United States Constitution. The President claimed his authorization was consistent with other federal statutes (AUMF) and other provisions of the Constitution, also stating that it was necessary to keep America safe from terrorism and could lead to the capture of notorious terrorists responsible for the September 11 attacks in 2001.[56]
In 2008, Wired and other media reported that a whistleblower disclosed a "Quantico Circuit", a 45-megabit/second DS-3 line linking a carrier's most sensitive network, in an affidavit that was the basis for a lawsuit against Verizon Wireless. According to the filing, the circuit provides direct access to all content and all information concerning the origin and termination of telephone calls placed on the Verizon Wireless network, as well as the actual content of calls.[57]
|
https://en.wikipedia.org/wiki/Telephone_tapping
|
In cryptography, a pepper is a secret added to an input such as a password during hashing with a cryptographic hash function. This value differs from a salt in that it is not stored alongside a password hash, but rather the pepper is kept separate in some other medium, such as a Hardware Security Module.[1] Note that the National Institute of Standards and Technology refers to this value as a secret key rather than a pepper. A pepper is similar in concept to a salt or an encryption key. It is like a salt in that it is a randomized value that is added to a password hash, and it is similar to an encryption key in that it should be kept secret.
A pepper performs a comparable role to a salt or an encryption key, but while a salt is not secret (merely unique) and can be stored alongside the hashed output, a pepper is secret and must not be stored with the output. The hash and salt are usually stored in a database, but a pepper must be stored separately to prevent it from being obtained by the attacker in case of a database breach.[2] A pepper should be long enough to remain secret from brute force attempts to discover it (NIST recommends at least 112 bits).
The idea of a site- or service-specific salt (in addition to a per-user salt) has a long history, with Steven M. Bellovin proposing a local parameter in a Bugtraq post in 1995.[3] In 1996 Udi Manber also described the advantages of such a scheme, terming it a secret salt.[4] The term pepper has been used, by analogy to salt, but with a variety of meanings. For example, when discussing a challenge-response scheme, pepper has been used for a salt-like quantity, though not used for password storage;[5] it has been used for a data transmission technique where a pepper must be guessed;[6] and even as a part of jokes.[7]
The term pepper was proposed for a secret or local parameter stored separately from the password in a discussion of protecting passwords from rainbow table attacks.[8] This usage did not immediately catch on: for example, Fred Wenzel added support to Django password hashing for storage based on a combination of bcrypt and HMAC with separately stored nonces, without using the term.[9] Usage has since become more common.[10][11][12]
There are multiple different types of pepper: a pepper shared by all users of an application, and a pepper that is unique to each user.
An incomplete example of using a pepper constant to save passwords is given below.
This table contains two combinations of username and password. The password is not saved; only the hash output is stored, and the 8-byte (64-bit) pepper 44534C70C6883DE2 is saved in a safe place, separate from the output values of the hash.
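The scheme described above can be sketched as follows. This is a minimal illustration, not the exact table from the example: the pepper value is taken from the text, while the username, password, and the choice of SHA-256 are assumptions for demonstration.

```python
import hashlib

# Pepper from the example above: an 8-byte secret stored separately
# from the database (e.g., in an HSM or application configuration).
PEPPER = bytes.fromhex("44534C70C6883DE2")

def hash_password(password: str) -> str:
    """Hash the password concatenated with the secret pepper."""
    return hashlib.sha256(password.encode("utf-8") + PEPPER).hexdigest()

# The database stores only the username and the peppered hash,
# never the password and never the pepper itself.
stored = {"alice": hash_password("password1")}

def verify(username: str, attempt: str) -> bool:
    return stored.get(username) == hash_password(attempt)
```

An attacker who steals only `stored` cannot run an ordinary dictionary attack, since every guess would also require guessing the pepper, which is exactly why NIST recommends the secret be long enough to resist exhaustive search.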
Unlike the salt, the pepper does not provide protection to users who use the same password, but it protects against dictionary attacks, unless the attacker has the pepper value available. Since the same pepper is not shared between different applications, an attacker is unable to reuse the hashes of one compromised database against another. A complete scheme for saving passwords usually includes both salt and pepper.
In the case of a shared-secret pepper, a single compromised password (via password reuse or other attack) along with a user's salt can lead to an attack to discover the pepper, rendering it ineffective. If an attacker knows a plaintext password and a user's salt, as well as the algorithm used to hash the password, then discovering the pepper can be a matter of brute forcing the values of the pepper. This is why NIST recommends the secret value be at least 112 bits, so that discovering it by exhaustive search is intractable. The pepper must be generated anew for every application it is deployed in, otherwise a breach of one application would result in lowered security of another application. Without knowledge of the pepper, other passwords in the database will be far more difficult to extract from their hashed values, as the attacker would need to guess the password as well as the pepper.
A pepper adds security to a database of salts and hashes because unless the attacker is able to obtain the pepper, cracking even a single hash is intractable, no matter how weak the original password. Even with a list of (salt, hash) pairs, an attacker must also guess the secret pepper in order to find the password which produces the hash. The NIST specification for a secret salt suggests using a Password-Based Key Derivation Function (PBKDF) with an approved Pseudorandom Function such as HMAC with SHA-3 as the hash function of the HMAC. The NIST recommendation is also to perform at least 1000 iterations of the PBKDF, and a further minimum 1000 iterations using the secret salt in place of the non-secret salt.
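One plausible reading of that recommendation can be sketched with the standard library. SHA-256 is used here in place of SHA-3 purely for portability, and the function name and parameters are assumptions, not the exact NIST construction.

```python
import hashlib

def peppered_kdf(password: bytes, salt: bytes, pepper: bytes,
                 iterations: int = 1000) -> bytes:
    # Stage 1: PBKDF2-HMAC over the public, per-user salt.
    stage1 = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    # Stage 2: a further round of iterations using the secret pepper
    # (the "secret salt") in place of the non-secret salt.
    return hashlib.pbkdf2_hmac("sha256", stage1, pepper, iterations)
```

Only the salt and the final output would be stored in the database; the pepper lives elsewhere, so an attacker with the database alone cannot even begin to iterate the KDF against password guesses.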
In the case of a pepper that is unique to each user, the tradeoff is gaining extra security at the cost of storing more information securely. Compromising one password hash and revealing its secret pepper will have no effect on other password hashes and their secret pepper, so each pepper must be individually discovered, which greatly increases the time taken to attack the password hashes.
|
https://en.wikipedia.org/wiki/Pepper_(cryptography)
|
In the study of complex networks, a network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally. In the particular case of non-overlapping community finding, this implies that the network divides naturally into groups of nodes with dense connections internally and sparser connections between groups. But overlapping communities are also allowed. The more general definition is based on the principle that pairs of nodes are more likely to be connected if they are both members of the same community(ies), and less likely to be connected if they do not share communities. A related but different problem is community search, where the goal is to find a community that a certain vertex belongs to.
In the study of networks, such as computer and information networks, social networks and biological networks, a number of different characteristics have been found to occur commonly, including the small-world property, heavy-tailed degree distributions, and clustering, among others. Another common characteristic is community structure.[1][2][3][4][5] In the context of networks, community structure refers to the occurrence of groups of nodes in a network that are more densely connected internally than with the rest of the network. This inhomogeneity of connections suggests that the network has certain natural divisions within it.
Communities are often defined in terms of the partition of the set of vertices, that is, each node is put into one and only one community. This is a useful simplification, and most community detection methods find this type of community structure. However, in some cases a better representation could be one where vertices are in more than one community. This might happen in a social network where each vertex represents a person, and the communities represent the different groups of friends: one community for family, another community for co-workers, one for friends in the same sports club, and so on. The use of cliques for community detection discussed below is just one example of how such overlapping community structure can be found.
Some networks may not have any meaningful community structure. Many basic network models, such as the random graph and the Barabási–Albert model, do not display community structure.
Community structures are quite common in real networks. Social networks include community groups (the origin of the term, in fact) based on common location, interests, occupation, etc.[5][6]
Finding an underlying community structure in a network, if it exists, is important for a number of reasons. Communities allow us to create a large scale map of a network since individual communities act like meta-nodes in the network which makes its study easier.[7]
Individual communities also shed light on the function of the system represented by the network, since communities often correspond to functional units of the system. In metabolic networks, such functional groups correspond to cycles or pathways, whereas in the protein interaction network, communities correspond to proteins with similar functionality inside a biological cell. Similarly, citation networks form communities by research topic.[1] Being able to identify these sub-structures within a network can provide insight into how network function and topology affect each other. Such insight can be useful in improving some algorithms on graphs such as spectral clustering.[8]
Importantly, communities often have very different properties than the average properties of the networks. Thus, concentrating only on the average properties usually misses many important and interesting features inside the networks. For example, in a given social network, both gregarious and reticent groups might exist simultaneously.[7]
Existence of communities also generally affects various processes like rumour spreading or epidemic spreading happening on a network. Hence to properly understand such processes, it is important to detect communities and also to study how they affect the spreading processes in various settings.
Finally, an important application that community detection has found in network science is the prediction of missing links and the identification of false links in the network. During the measurement process, some links may not get observed for a number of reasons. Similarly, some links could falsely enter the data because of errors in the measurement. Both these cases are well handled by community detection algorithms, since they allow one to assign the probability of existence of an edge between a given pair of nodes.[9]
Finding communities within an arbitrary network can be a computationally difficult task. The number of communities, if any, within the network is typically unknown and the communities are often of unequal size and/or density. Despite these difficulties, however, several methods for community finding have been developed and employed with varying levels of success.[4]
One of the oldest algorithms for dividing networks into parts is the minimum cut method (and variants such as ratio cut and normalized cut). This method sees use, for example, in load balancing for parallel computing in order to minimize communication between processor nodes.
In the minimum-cut method, the network is divided into a predetermined number of parts, usually of approximately the same size, chosen such that the number of edges between groups is minimized. The method works well in many of the applications for which it was originally intended but is less than ideal for finding community structure in general networks since it will find communities regardless of whether they are implicit in the structure, and it will find only a fixed number of them.[10]
Another method for finding community structures in networks is hierarchical clustering. In this method one defines a similarity measure quantifying some (usually topological) type of similarity between node pairs. Commonly used measures include the cosine similarity, the Jaccard index, and the Hamming distance between rows of the adjacency matrix. Then one groups similar nodes into communities according to this measure. There are several common schemes for performing the grouping, the two simplest being single-linkage clustering, in which two groups are considered separate communities if and only if all pairs of nodes in different groups have similarity lower than a given threshold, and complete linkage clustering, in which all nodes within every group have similarity greater than a threshold. An important step is how to determine the threshold at which to stop the agglomerative clustering, indicating a near-to-optimal community structure. A common strategy consists of building one or several metrics monitoring global properties of the network, which peak at a given step of the clustering. An interesting approach in this direction is the use of various similarity or dissimilarity measures, combined through convex sums.[11] Another approximation is the computation of a quantity monitoring the density of edges within clusters with respect to the density between clusters, such as the partition density, which has been proposed when the similarity metric is defined between edges (which permits the definition of overlapping communities),[12] and extended when the similarity is defined between nodes, which allows one to consider alternative definitions of communities such as guilds (i.e. groups of nodes sharing a similar number of links with respect to the same neighbours but not necessarily connected themselves).[13] These methods can be extended to consider multidimensional networks, for instance when we are dealing with networks having nodes with different types of links.[13]
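As a small illustration of the similarity measures mentioned above, the Jaccard index between two nodes' neighbour sets can be computed directly from an adjacency list. The toy graph here is an arbitrary assumption for demonstration.

```python
def jaccard(adj, u, v):
    """Jaccard index between the neighbour sets of nodes u and v."""
    a, b = set(adj[u]), set(adj[v])
    return len(a & b) / len(a | b) if a | b else 0.0

# Toy graph as an adjacency list: nodes 0 and 1 share most of their
# neighbours, while node 4 hangs off node 3.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1], 3: [0, 1, 4], 4: [3]}
```

For example, nodes 0 and 1 have neighbour sets {1, 2, 3} and {0, 2, 3}, sharing {2, 3} out of a union of four nodes, giving similarity 0.5; an agglomerative scheme would merge such high-similarity pairs first.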
Another commonly used algorithm for finding communities is the Girvan–Newman algorithm.[1] This algorithm identifies edges in a network that lie between communities and then removes them, leaving behind just the communities themselves. The identification is performed by employing the graph-theoretic measure betweenness centrality, which assigns a number to each edge which is large if the edge lies "between" many pairs of nodes.
The Girvan–Newman algorithm returns results of reasonable quality and is popular because it has been implemented in a number of standard software packages. But it also runs slowly, taking time O(m²n) on a network of n vertices and m edges, making it impractical for networks of more than a few thousand nodes.[14]
In spite of its known drawbacks, one of the most widely used methods for community detection is modularity maximization.[14] Modularity is a benefit function that measures the quality of a particular division of a network into communities. The modularity maximization method detects communities by searching over possible divisions of a network for one or more that have particularly high modularity. Since exhaustive search over all possible divisions is usually intractable, practical algorithms are based on approximate optimization methods such as greedy algorithms, simulated annealing, or spectral optimization, with different approaches offering different balances between speed and accuracy.[15][16] A popular modularity maximization approach is the Louvain method, which iteratively optimizes local communities until global modularity can no longer be improved given perturbations to the current community state.[17][18]
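Modularity itself is straightforward to compute for a given partition: Q = Σ_c [L_c/m − (d_c/2m)²], where L_c is the number of edges inside community c, d_c is the total degree of its nodes, and m is the total number of edges. A minimal sketch follows, using a toy graph of two triangles joined by one bridge edge; the graph and partition are illustrative assumptions.

```python
from collections import Counter

def modularity(edges, community):
    """Newman modularity Q of a partition, given an edge list and a
    node -> community mapping."""
    m = len(edges)
    internal = Counter()   # edges with both endpoints in community c
    degree = Counter()     # total degree of the nodes in community c
    for u, v in edges:
        degree[community[u]] += 1
        degree[community[v]] += 1
        if community[u] == community[v]:
            internal[community[u]] += 1
    return sum(internal[c] / m - (degree[c] / (2 * m)) ** 2
               for c in degree)

# Two triangles {0,1,2} and {3,4,5} joined by the bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
partition = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
```

The natural two-triangle split scores Q = 5/14 ≈ 0.357, while lumping every node into one community scores 0, which is the gap that modularity maximization algorithms exploit when searching over partitions.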
The usefulness of modularity optimization is questionable, as it has been shown that modularity optimization often fails to detect clusters smaller than some scale, depending on the size of the network (resolution limit[19]); on the other hand the landscape of modularity values is characterized by a huge degeneracy of partitions with high modularity, close to the absolute maximum, which may be very different from each other.[20]
Methods based on statistical inference attempt to fit a generative model to the network data, which encodes the community structure. The overall advantage of this approach compared to the alternatives is its more principled nature, and the capacity to inherently address issues of statistical significance. Most methods in the literature are based on the stochastic block model[21] as well as variants including mixed membership,[22][23] degree-correction,[24] and hierarchical structures.[25] Model selection can be performed using principled approaches such as minimum description length[26][27] (or equivalently, Bayesian model selection[28]) and likelihood-ratio tests.[29] Currently many algorithms exist to perform efficient inference of stochastic block models, including belief propagation[30][31] and agglomerative Monte Carlo.[32]
In contrast to approaches that attempt to cluster a network given an objective function, this class of methods is based on generative models, which not only serve as a description of the large-scale structure of the network, but also can be used to generalize the data and predict the occurrence of missing or spurious links in the network.[33][34]
Cliques are subgraphs in which every node is connected to every other node in the clique. As nodes cannot be more tightly connected than this, it is not surprising that there are many approaches to community detection in networks based on the detection of cliques in a graph and the analysis of how these overlap. Note that as a node can be a member of more than one clique, a node can be a member of more than one community in these methods, giving an "overlapping community structure".
One approach is to find the "maximal cliques", that is, the cliques which are not the subgraph of any other clique. The classic algorithm to find these is the Bron–Kerbosch algorithm. The overlap of these can be used to define communities in several ways. The simplest is to consider only maximal cliques bigger than a minimum size (number of nodes). The union of these cliques then defines a subgraph whose components (disconnected parts) then define communities.[35] Such approaches are often implemented in social network analysis software such as UCInet.
The alternative approach is to use cliques of fixed size k. The overlap of these can be used to define a type of k-regular hypergraph or a structure which is a generalisation of the line graph (the case when k = 2) known as a "clique graph".[36] The clique graphs have vertices which represent the cliques in the original graph, while the edges of the clique graph record the overlap of the cliques in the original graph. Applying any of the previous community detection methods (which assign each node to a community) to the clique graph then assigns each clique to a community. This can then be used to determine community membership of nodes in the cliques. Again, as a node may be in several cliques, it can be a member of several communities.
For instance, the clique percolation method[37] defines communities as percolation clusters of k-cliques. To do this it finds all k-cliques in a network, that is, all the complete sub-graphs of k nodes. It then defines two k-cliques to be adjacent if they share k − 1 nodes; this adjacency is used to define edges in a clique graph. A community is then defined to be the maximal union of k-cliques in which we can reach any k-clique from any other k-clique through a series of k-clique adjacencies. That is, communities are just the connected components in the clique graph. Since a node can belong to several different k-clique percolation clusters at the same time, the communities can overlap with each other.
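The clique percolation procedure just described can be sketched in pure Python on a toy graph. The brute-force clique enumeration, the function name, and the example graph are illustrative assumptions suitable only for small networks.

```python
from itertools import combinations

def clique_percolation(nodes, edges, k):
    """Communities as connected components of the k-clique graph:
    two k-cliques are adjacent if they share k - 1 nodes."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Enumerate all k-cliques by brute force (fine for toy graphs).
    cliques = [frozenset(c) for c in combinations(nodes, k)
               if all(v in adj[u] for u, v in combinations(c, 2))]
    # Union-find merge of cliques sharing exactly k - 1 nodes.
    parent = {c: c for c in cliques}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    for a, b in combinations(cliques, 2):
        if len(a & b) == k - 1:
            parent[find(a)] = find(b)
    groups = {}
    for c in cliques:
        groups.setdefault(find(c), set()).update(c)
    return set(frozenset(g) for g in groups.values())
```

On a graph made of two overlapping triangles {0,1,2} and {1,2,3} plus a disjoint triangle {4,5,6}, the k = 3 communities are {0,1,2,3} and {4,5,6}: the first two triangles percolate into one cluster because they share two nodes.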
A network can be represented or projected onto a latent space via representation learning methods to efficiently represent a system. Then, various clustering methods can be employed to detect community structures. For Euclidean spaces, methods like embedding-based Silhouette community detection[38] can be utilized. For hypergeometric latent spaces, the critical gap method or modified density-based, hierarchical, or partitioning-based clustering methods can be utilized.[39]
The evaluation of algorithms, to detect which are better at detecting community structure, is still an open question. It must be based on analyses of networks of known structure. A typical example is the "four groups" test, in which a network is divided into four equally-sized groups (usually of 32 nodes each) and the probabilities of connection within and between groups varied to create more or less challenging structures for the detection algorithm. Such benchmark graphs are a special case of the planted l-partition model[40] of Condon and Karp, or more generally of "stochastic block models", a general class of random network models containing community structure. Other more flexible benchmarks have been proposed that allow for varying group sizes and nontrivial degree distributions, such as the LFR benchmark,[41][42] which is an extension of the four groups benchmark that includes heterogeneous distributions of node degree and community size, making it a more severe test of community detection methods.[43][44]
Commonly used computer-generated benchmarks start with a network of well-defined communities. Then, this structure is degraded by rewiring or removing links and it gets harder and harder for the algorithms to detect the original partition. At the end, the network reaches a point where it is essentially random. This kind of benchmark may be called "open". The performance on these benchmarks is evaluated by measures such as normalized mutual information or variation of information. They compare the solution obtained by an algorithm[42] with the original community structure, evaluating the similarity of both partitions.
During recent years, a rather surprising result has been obtained by various groups which shows that a phase transition exists in the community detection problem, showing that as the density of connections inside communities and between communities become more and more equal or both become smaller (equivalently, as the community structure becomes too weak or the network becomes too sparse), suddenly the communities become undetectable. In a sense, the communities themselves still exist, since the presence and absence of edges is still correlated with the community memberships of their endpoints; but it becomes information-theoretically impossible to label the nodes better than chance, or even distinguish the graph from one generated by a null model such as the Erdős–Rényi model without community structure. This transition is independent of the type of algorithm being used to detect communities, implying that there exists a fundamental limit on our ability to detect communities in networks, even with optimal Bayesian inference (i.e., regardless of our computational resources).[45][46][47]
Consider a stochastic block model with n nodes in total, q = 2 groups of equal size, and let p_in and p_out be the connection probabilities inside and between the groups respectively. If p_in > p_out, the network possesses community structure, since the link density inside the groups is then greater than the density of links between the groups. In the sparse case, p_in and p_out scale as O(1/n) so that the average degree is constant: p_in = c_in/n and p_out = c_out/n for constants c_in and c_out.
Then it becomes impossible to detect the communities when:[46] |c_in − c_out| ≤ √(2(c_in + c_out)).
|
https://en.wikipedia.org/wiki/Community_structure
|
Subtraction (which is signified by the minus sign, −) is one of the four arithmetic operations along with addition, multiplication and division. Subtraction is an operation that represents removal of objects from a collection.[1] For example, 5 − 2 peaches means 5 peaches with 2 taken away, resulting in a total of 3 peaches. Therefore, the difference of 5 and 2 is 3; that is, 5 − 2 = 3. While primarily associated with natural numbers in arithmetic, subtraction can also represent removing or decreasing physical and abstract quantities using different kinds of objects including negative numbers, fractions, irrational numbers, vectors, decimals, functions, and matrices.[2]
In a sense, subtraction is the inverse of addition. That is, c = a − b if and only if c + b = a. In words: the difference of two numbers is the number that gives the first one when added to the second one.
Subtraction follows several important patterns. It is anticommutative, meaning that changing the order changes the sign of the answer. It is also not associative, meaning that when one subtracts more than two numbers, the order in which subtraction is performed matters. Because 0 is the additive identity, subtraction of it does not change a number. Subtraction also obeys predictable rules concerning related operations, such as addition and multiplication. All of these rules can be proven, starting with the subtraction of integers and generalizing up through the real numbers and beyond. General binary operations that follow these patterns are studied in abstract algebra.
In computability theory, since subtraction is not well-defined over the natural numbers, operations between numbers are actually defined using "truncated subtraction" or monus.[3]
Subtraction is usually written using the minus sign "−" between the terms; that is, in infix notation. The result is expressed with an equals sign. For example, 2 − 1 = 1 (pronounced as "two minus one equals one") and 4 − 6 = −2 (pronounced as "four minus six equals negative two"). Nonetheless, there are some situations where subtraction is "understood", even though no symbol appears; in accounting, a column of two numbers, with the lower number in red, usually indicates that the lower number in the column is to be subtracted, with the difference written below, under a line.[4]
The number being subtracted is the subtrahend, while the number it is subtracted from is the minuend. The result is the difference; that is:[5] minuend − subtrahend = difference.
All of this terminology derives from Latin. "Subtraction" is an English word derived from the Latin verb subtrahere, which in turn is a compound of sub "from under" and trahere "to pull". Thus, to subtract is to draw from below, or to take away.[6] Using the gerundive suffix -nd results in "subtrahend", "thing to be subtracted".[a] Likewise, from minuere "to reduce or diminish", one gets "minuend", which means "thing to be diminished".
Imagine a line segment of length b with the left end labeled a and the right end labeled c.
Starting from a, it takes b steps to the right to reach c. This movement to the right is modeled mathematically by addition: a + b = c.
From c, it takes b steps to the left to get back to a. This movement to the left is modeled by subtraction: c − b = a.
Now consider a line segment labeled with the numbers 1, 2, and 3. From position 3, it takes no steps to the left to stay at 3, so 3 − 0 = 3. It takes 2 steps to the left to get to position 1, so 3 − 2 = 1. This picture is inadequate to describe what would happen after going 3 steps to the left of position 3. To represent such an operation, the line must be extended.
To subtract arbitrary natural numbers, one begins with a line containing every natural number (0, 1, 2, 3, 4, 5, 6, ...). From 3, it takes 3 steps to the left to get to 0, so 3 − 3 = 0. But 3 − 4 is still invalid, since it again leaves the line. The natural numbers are not a useful context for subtraction.
The solution is to consider the integer number line (..., −3, −2, −1, 0, 1, 2, 3, ...). This way, it takes 4 steps to the left from 3 to get to −1: 3 − 4 = −1.
Subtraction of natural numbers is not closed: the difference is not a natural number unless the minuend is greater than or equal to the subtrahend. For example, 26 cannot be subtracted from 11 to give a natural number. Such a case uses one of two approaches: either declare that the difference is undefined, treating subtraction as a partial function, or give the answer as an integer representing a negative number.
The field of real numbers can be defined specifying only two binary operations, addition and multiplication, together with unary operations yielding additive and multiplicative inverses. The subtraction of a real number (the subtrahend) from another (the minuend) can then be defined as the addition of the minuend and the additive inverse of the subtrahend. For example, 3 − π = 3 + (−π). Alternatively, instead of requiring these unary operations, the binary operations of subtraction and division can be taken as basic.
Subtraction is anti-commutative, meaning that if one reverses the terms in a difference left-to-right, the result is the negative of the original result. Symbolically, if a and b are any two numbers, then a − b = −(b − a).
Subtraction is non-associative, which comes up when one tries to define repeated subtraction. In general, the expression "a − b − c" can be defined to mean either (a − b) − c or a − (b − c), but these two possibilities lead to different answers. To resolve this issue, one must establish an order of operations, with different orders yielding different results.
In the context of integers, subtraction of one also plays a special role: for any integer a, the integer (a − 1) is the largest integer less than a, also known as the predecessor of a.
When subtracting two numbers with units of measurement such as kilograms or pounds, they must have the same unit. In most cases, the difference will have the same unit as the original numbers.
Changes in percentages can be reported in at least two forms, percentage change and percentage point change. Percentage change represents the relative change between the two quantities as a percentage, while percentage point change is simply the number obtained by subtracting the two percentages.[7][8][9]
As an example, suppose that 30% of widgets made in a factory are defective. Six months later, 20% of widgets are defective. The percentage change is (20% − 30%) / 30% = −1/3 = −33 1/3 %, while the percentage point change is −10 percentage points.
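The two measures from the widget example can be computed directly; the defect rates are taken from the text, and the variable names are arbitrary:

```python
old, new = 0.30, 0.20  # defect rates: 30%, then 20% six months later

# Difference of the two percentages, in percentage points.
percentage_point_change = (new - old) * 100
# Relative change between the two quantities, as a percentage.
percentage_change = (new - old) / old * 100
```

The first quantity is −10 percentage points, the second −33 1/3 %, matching the two forms distinguished above.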
The method of complements is a technique used to subtract one number from another using only the addition of positive numbers. This method was commonly used in mechanical calculators, and is still used in modern computers.
To subtract a binary numbery(the subtrahend) from another numberx(the minuend), the ones' complement ofyis added toxand one is added to the sum. The leading digit "1" of the result is then discarded.
The method of complements is especially useful in binary (radix 2) since the ones' complement is very easily obtained by inverting each bit (changing "0" to "1" and vice versa). And adding 1 to get the two's complement can be done by simulating a carry into the least significant bit. For example:
01100100 (decimal 100) − 00010110 (decimal 22)
becomes the sum:
01100100 (decimal 100) + 11101001 (ones' complement of 22) + 1 (carry into the least significant bit) = 101001110
Dropping the initial "1" gives the answer: 01001110 (equals decimal 78)
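The procedure can be sketched in code; the 8-bit word width matches the worked example, and the function name is an illustrative assumption:

```python
def subtract_via_complement(x: int, y: int, bits: int = 8) -> int:
    """Compute x - y using only addition: add the ones' complement
    of y, add 1 (the simulated carry), and drop the leading "1"."""
    mask = (1 << bits) - 1
    ones_complement = y ^ mask        # invert each bit of y
    total = x + ones_complement + 1   # carry into the least significant bit
    return total & mask               # discard the bit above the top position
```

With x = 100 and y = 22 this reproduces the example: the masked sum is 01001110, which is decimal 78.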
Methods used to teach subtraction to elementary school students vary from country to country, and within a country, different methods are adopted at different times. In what is known in the United States as traditional mathematics, a specific process is taught to students at the end of the 1st year (or during the 2nd year) for use with multi-digit whole numbers, and is extended in either the fourth or fifth grade to include decimal representations of fractional numbers.
Almost all American schools currently teach a method of subtraction using borrowing or regrouping (the decomposition algorithm) and a system of markings called crutches.[10][11] Although a method of borrowing had been known and published in textbooks previously, the use of crutches in American schools spread after William A. Brownell published a study claiming that crutches were beneficial to students using this method.[12] This system caught on rapidly, displacing the other methods of subtraction in use in America at that time.
Some European schools employ a method of subtraction called the Austrian method, also known as the additions method. There is no borrowing in this method. There are also crutches (markings to aid memory), which vary by country.[13][14]
Both these methods break up the subtraction as a process of one-digit subtractions by place value. Starting with the least significant digit, a subtraction of the subtrahend s_n s_(n−1) ... s_1 from the minuend m_n m_(n−1) ... m_1, where each s_i and m_i is a digit, proceeds by writing down m_1 − s_1, m_2 − s_2, and so forth, as long as s_i does not exceed m_i. Otherwise, m_i is increased by 10 and some other digit is modified to correct for this increase. The American method corrects by attempting to decrease the minuend digit m_(i+1) by one (or by continuing the borrow leftwards until there is a non-zero digit from which to borrow). The European method corrects by increasing the subtrahend digit s_(i+1) by one.
Example: 704 − 512.

     −1              ← carry
  C   D   U
  7   0   4          ← Minuend
  5   1   2          ← Subtrahend
  1   9   2          ← Rest or Difference
The minuend is 704, the subtrahend is 512. The minuend digits are m_3 = 7, m_2 = 0 and m_1 = 4. The subtrahend digits are s_3 = 5, s_2 = 1 and s_1 = 2. Beginning at the ones place, 4 is not less than 2, so the difference 2 is written down in the result's ones place. In the tens place, 0 is less than 1, so the 0 is increased by 10, and the difference with 1, which is 9, is written down in the tens place. The American method corrects for the increase of ten by reducing the digit in the minuend's hundreds place by one. That is, the 7 is struck through and replaced by a 6. The subtraction then proceeds in the hundreds place, where 6 is not less than 5, so the difference is written down in the result's hundreds place. We are now done; the result is 192.
The Austrian method does not reduce the 7 to 6. Rather, it increases the subtrahend hundreds digit by one. A small mark is made near or below this digit (depending on the school). Then the subtraction proceeds by asking what number, when increased by 1 and with 5 added to it, makes 7. The answer is 1, and it is written down in the result's hundreds place.
There is an additional subtlety in that the student always employs a mental subtraction table in the American method. The Austrian method often encourages the student to mentally use the addition table in reverse. In the example above, rather than adding 1 to 5, getting 6, and subtracting that from 7, the student is asked to consider what number, when increased by 1, and 5 is added to it, makes 7.
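The American decomposition method can be sketched as a short routine (`subtract_american` is an illustrative name; the code assumes the minuend is at least the subtrahend):

```python
def subtract_american(minuend, subtrahend):
    """Digit-by-digit subtraction with borrowing, least significant digit first."""
    m = [int(d) for d in str(minuend)][::-1]
    s = [int(d) for d in str(subtrahend)][::-1]
    s += [0] * (len(m) - len(s))        # pad the subtrahend with zeros
    digits = []
    for i in range(len(m)):
        if m[i] < s[i]:
            m[i] += 10                  # increase the minuend digit by ten...
            m[i + 1] -= 1               # ...and borrow from the digit to the left
        digits.append(m[i] - s[i])
    while len(digits) > 1 and digits[-1] == 0:
        digits.pop()                    # strip leading zeros
    return int("".join(map(str, digits[::-1])))
```

A chain of borrows is handled automatically: a borrowed-from digit may drop to −1, which triggers the same correction at the next step.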
In this method, each digit of the subtrahend is subtracted from the digit above it, starting from right to left. If the top number is too small to subtract the bottom number from it, we add 10 to it; this 10 is "borrowed" from the top digit to the left, from which we subtract 1. Then we move on to subtracting the next digit and borrowing as needed, until every digit has been subtracted.
In a variant of the American method, all borrowing is done before all subtraction.[15]
The partial differences method is different from other vertical subtraction methods because no borrowing or carrying takes place. In their place, one places plus or minus signs depending on whether the minuend is greater or smaller than the subtrahend. The sum of the partial differences is the total difference.[16]
Instead of finding the difference digit by digit, one can count up the numbers between the subtrahend and the minuend.[17]
Example:
1234 − 567 can be found by the following steps:
567 + 3 = 570
570 + 30 = 600
600 + 400 = 1000
1000 + 234 = 1234
Add up the value from each step to get the total difference: 3 + 30 + 400 + 234 = 667.
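One way to mechanize the counting-up method, assuming (as in the example) that one counts first to the next multiple of 10, then 100, and so on:

```python
def count_up_subtract(minuend, subtrahend):
    """Return the increments used to count from the subtrahend up to the minuend."""
    steps, current, unit = [], subtrahend, 10
    while current < minuend and unit <= minuend:
        target = min(minuend, -(current // -unit) * unit)  # next multiple of unit
        if target > current:
            steps.append(target - current)
            current = target
        unit *= 10
    if current < minuend:
        steps.append(minuend - current)  # final jump to the minuend
    return steps

steps = count_up_subtract(1234, 567)  # [3, 30, 400, 234]; the sum is 667
```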
Another method that is useful for mental arithmetic is to split up the subtraction into small steps.[18]
Example:
1234 − 567 can be solved in the following way:
1234 − 500 = 734
734 − 60 = 674
674 − 7 = 667
The same change method uses the fact that adding or subtracting the same number from the minuend and subtrahend does not change the answer. One simply adds the amount needed to get zeros in the subtrahend.[19]
Example:
"1234 − 567" can be solved as follows:
1234 − 567 = 1237 − 570 = 1267 − 600 = 667
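A minimal sketch of the same change method, assuming the adjustments zero out the subtrahend's trailing digits one power of ten at a time:

```python
def same_change(minuend, subtrahend):
    """Add the same amount to both numbers until the subtrahend is round,
    then subtract directly."""
    unit = 10
    while unit <= subtrahend:
        remainder = subtrahend % unit
        if remainder:
            delta = unit - remainder   # amount needed to reach a multiple of unit
            minuend += delta
            subtrahend += delta
        unit *= 10
    return minuend - subtrahend        # e.g. 1267 - 600

difference = same_change(1234, 567)
```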
https://en.wikipedia.org/wiki/Subtraction
In vector calculus, the curl, also known as rotor, is a vector operator that describes the infinitesimal circulation of a vector field in three-dimensional Euclidean space. The curl at a point in the field is represented by a vector whose length and direction denote the magnitude and axis of the maximum circulation.[1] The curl of a field is formally defined as the circulation density at each point of the field.
A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The corresponding form of the fundamental theorem of calculus is Stokes' theorem, which relates the surface integral of the curl of a vector field to the line integral of the vector field around the boundary curve.
The notation curl F is more common in North America. In the rest of the world, particularly in 20th century scientific literature, the alternative notation rot F is traditionally used, which comes from the "rate of rotation" that it represents. To avoid confusion, modern authors tend to use the cross product notation with the del (nabla) operator, as in ∇×F{\displaystyle \nabla \times \mathbf {F} },[2] which also reveals the relation between the curl (rotor), divergence, and gradient operators.
Unlike the gradient and divergence, curl as formulated in vector calculus does not generalize simply to other dimensions; some generalizations are possible, but only in three dimensions is the geometrically defined curl of a vector field again a vector field. This deficiency is a direct consequence of the limitations of vector calculus; on the other hand, when expressed as an antisymmetric tensor field via the wedge operator of geometric calculus, the curl generalizes to all dimensions. The circumstance is similar to that attending the 3-dimensional cross product, and indeed the connection is reflected in the notation ∇×{\displaystyle \nabla \times } for the curl.
The name "curl" was first suggested by James Clerk Maxwell in 1871,[3] but the concept was apparently first used in the construction of an optical field theory by James MacCullagh in 1839.[4][5]
The curl of a vector field F, denoted by curl F, or ∇×F{\displaystyle \nabla \times \mathbf {F} }, or rot F, is an operator that maps Ck functions in R3 to Ck−1 functions in R3, and in particular, it maps continuously differentiable functions R3 → R3 to continuous functions R3 → R3. It can be defined in several ways, to be mentioned below:
One way to define the curl of a vector field at a point is implicitly through its components along various axes passing through the point: if u^{\displaystyle \mathbf {\hat {u}} } is any unit vector, the component of the curl of F along the direction u^{\displaystyle \mathbf {\hat {u}} } may be defined to be the limiting value of a closed line integral in a plane perpendicular to u^{\displaystyle \mathbf {\hat {u}} } divided by the area enclosed, as the path of integration is contracted indefinitely around the point.
More specifically, the curl is defined at a point p as[6][7] (∇×F)(p)⋅u^=deflimA→01|A|∮C(p)F⋅dr{\displaystyle (\nabla \times \mathbf {F} )(p)\cdot \mathbf {\hat {u}} \ {\overset {\underset {\mathrm {def} }{}}{{}={}}}\lim _{A\to 0}{\frac {1}{|A|}}\oint _{C(p)}\mathbf {F} \cdot \mathrm {d} \mathbf {r} } where the line integral is calculated along the boundary C of the area A containing the point p, |A| being the magnitude of the area. This equation defines the component of the curl of F along the direction u^{\displaystyle \mathbf {\hat {u}} }. The infinitesimal surfaces bounded by C have u^{\displaystyle \mathbf {\hat {u}} } as their normal. C is oriented via the right-hand rule.
The above formula means that the component of the curl of a vector field along a certain axis is the infinitesimal area density of the circulation of the field in a plane perpendicular to that axis. This formula does not a priori define a legitimate vector field, for the individual circulation densities with respect to various axes a priori need not relate to each other in the same way as the components of a vector do; that they do indeed relate to each other in this precise manner must be proven separately.
To this definition fits naturally the Kelvin–Stokes theorem, as a global formula corresponding to the definition. It equates the surface integral of the curl of a vector field to the above line integral taken around the boundary of the surface.
Another way one can define the curl vector of a function F at a point is explicitly as the limiting value of a vector-valued surface integral around a shell enclosing p divided by the volume enclosed, as the shell is contracted indefinitely around p.
More specifically, the curl may be defined by the vector formula (∇×F)(p)=deflimV→01|V|∮Sn^×FdS{\displaystyle (\nabla \times \mathbf {F} )(p){\overset {\underset {\mathrm {def} }{}}{{}={}}}\lim _{V\to 0}{\frac {1}{|V|}}\oint _{S}\mathbf {\hat {n}} \times \mathbf {F} \ \mathrm {d} S} where the surface integral is calculated along the boundary S of the volume V, |V| being the magnitude of the volume, and n^{\displaystyle \mathbf {\hat {n}} } pointing outward from the surface S perpendicularly at every point in S.
In this formula, the cross product in the integrand measures the tangential component of F at each point on the surface S, and points along the surface at right angles to the tangential projection of F. Integrating this cross product over the whole surface results in a vector whose magnitude measures the overall circulation of F around S, and whose direction is at right angles to this circulation. The above formula says that the curl of a vector field at a point is the infinitesimal volume density of this "circulation vector" around the point.
To this definition fits naturally another global formula (similar to the Kelvin-Stokes theorem) which equates thevolume integralof the curl of a vector field to the above surface integral taken over the boundary of the volume.
Whereas the above two definitions of the curl are coordinate free, there is another "easy to memorize" definition of the curl in curvilinear orthogonal coordinates, e.g. in Cartesian coordinates, spherical, cylindrical, or even elliptical or parabolic coordinates: (curlF)1=1h2h3(∂(h3F3)∂u2−∂(h2F2)∂u3),(curlF)2=1h3h1(∂(h1F1)∂u3−∂(h3F3)∂u1),(curlF)3=1h1h2(∂(h2F2)∂u1−∂(h1F1)∂u2).{\displaystyle {\begin{aligned}&(\operatorname {curl} \mathbf {F} )_{1}={\frac {1}{h_{2}h_{3}}}\left({\frac {\partial (h_{3}F_{3})}{\partial u_{2}}}-{\frac {\partial (h_{2}F_{2})}{\partial u_{3}}}\right),\\[5pt]&(\operatorname {curl} \mathbf {F} )_{2}={\frac {1}{h_{3}h_{1}}}\left({\frac {\partial (h_{1}F_{1})}{\partial u_{3}}}-{\frac {\partial (h_{3}F_{3})}{\partial u_{1}}}\right),\\[5pt]&(\operatorname {curl} \mathbf {F} )_{3}={\frac {1}{h_{1}h_{2}}}\left({\frac {\partial (h_{2}F_{2})}{\partial u_{1}}}-{\frac {\partial (h_{1}F_{1})}{\partial u_{2}}}\right).\end{aligned}}}
The equation for each component (curl F)_k can be obtained by exchanging each occurrence of a subscript 1, 2, 3 in cyclic permutation: 1 → 2, 2 → 3, and 3 → 1 (where the subscripts represent the relevant indices).
If (x1, x2, x3) are the Cartesian coordinates and (u1, u2, u3) are the orthogonal coordinates, then hi=(∂x1∂ui)2+(∂x2∂ui)2+(∂x3∂ui)2{\displaystyle h_{i}={\sqrt {\left({\frac {\partial x_{1}}{\partial u_{i}}}\right)^{2}+\left({\frac {\partial x_{2}}{\partial u_{i}}}\right)^{2}+\left({\frac {\partial x_{3}}{\partial u_{i}}}\right)^{2}}}} is the length of the coordinate vector corresponding to ui. The remaining two components of curl result from cyclic permutation of indices: 3,1,2 → 1,2,3 → 2,3,1.
In practice, the two coordinate-free definitions described above are rarely used because in virtually all cases, the curl operator can be applied using some set of curvilinear coordinates, for which simpler representations have been derived.
The notation ∇×F{\displaystyle \nabla \times \mathbf {F} } has its origins in the similarities to the 3-dimensional cross product, and it is useful as a mnemonic in Cartesian coordinates if ∇{\displaystyle \nabla } is taken as a vector differential operator del. Such notation involving operators is common in physics and algebra.
Expanded in 3-dimensional Cartesian coordinates (see Del in cylindrical and spherical coordinates for spherical and cylindrical coordinate representations), ∇×F{\displaystyle \nabla \times \mathbf {F} } is, for F{\displaystyle \mathbf {F} } composed of [Fx,Fy,Fz]{\displaystyle [F_{x},F_{y},F_{z}]} (where the subscripts indicate the components of the vector, not partial derivatives): ∇×F=|ı^ȷ^k^∂∂x∂∂y∂∂zFxFyFz|{\displaystyle \nabla \times \mathbf {F} ={\begin{vmatrix}{\boldsymbol {\hat {\imath }}}&{\boldsymbol {\hat {\jmath }}}&{\boldsymbol {\hat {k}}}\\[5mu]{\dfrac {\partial }{\partial x}}&{\dfrac {\partial }{\partial y}}&{\dfrac {\partial }{\partial z}}\\[5mu]F_{x}&F_{y}&F_{z}\end{vmatrix}}} where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. This expands as follows:[8] ∇×F=(∂Fz∂y−∂Fy∂z)ı^+(∂Fx∂z−∂Fz∂x)ȷ^+(∂Fy∂x−∂Fx∂y)k^{\displaystyle \nabla \times \mathbf {F} =\left({\frac {\partial F_{z}}{\partial y}}-{\frac {\partial F_{y}}{\partial z}}\right){\boldsymbol {\hat {\imath }}}+\left({\frac {\partial F_{x}}{\partial z}}-{\frac {\partial F_{z}}{\partial x}}\right){\boldsymbol {\hat {\jmath }}}+\left({\frac {\partial F_{y}}{\partial x}}-{\frac {\partial F_{x}}{\partial y}}\right){\boldsymbol {\hat {k}}}}
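The Cartesian component formula can be spot-checked numerically with central finite differences; the helper below is an illustrative sketch, not a library routine:

```python
def curl(F, p, h=1e-5):
    """Evaluate (curl F)(p) from the Cartesian component formula,
    approximating each partial derivative by a central difference."""
    def d(i, j):  # dF_i / dx_j at the point p
        a, b = list(p), list(p)
        a[j] += h
        b[j] -= h
        return (F(*a)[i] - F(*b)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

# F = (y^2, z^2, x^2) has curl (-2z, -2x, -2y); at (1, 2, 3) this is (-6, -2, -4)
c = curl(lambda x, y, z: (y * y, z * z, x * x), (1.0, 2.0, 3.0))
```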
Although expressed in terms of coordinates, the result is invariant under proper rotations of the coordinate axes, but it inverts under reflection.
In a general coordinate system, the curl is given by[1] (∇×F)k=1gεkℓm∇ℓFm{\displaystyle (\nabla \times \mathbf {F} )^{k}={\frac {1}{\sqrt {g}}}\varepsilon ^{k\ell m}\nabla _{\ell }F_{m}} where ε denotes the Levi-Civita tensor, ∇ the covariant derivative, g is the determinant of the metric tensor, and the Einstein summation convention implies that repeated indices are summed over. Due to the symmetry of the Christoffel symbols participating in the covariant derivative, this expression reduces to the partial derivative: (∇×F)=1gRkεkℓm∂ℓFm{\displaystyle (\nabla \times \mathbf {F} )={\frac {1}{\sqrt {g}}}\mathbf {R} _{k}\varepsilon ^{k\ell m}\partial _{\ell }F_{m}} where Rk are the local basis vectors. Equivalently, using the exterior derivative, the curl can be expressed as: ∇×F=(⋆(dF♭))♯{\displaystyle \nabla \times \mathbf {F} =\left(\star {\big (}{\mathrm {d} }\mathbf {F} ^{\flat }{\big )}\right)^{\sharp }}
Here ♭ and ♯ are the musical isomorphisms, and ★ is the Hodge star operator. This formula shows how to calculate the curl of F in any coordinate system, and how to extend the curl to any oriented three-dimensional Riemannian manifold. Since this depends on a choice of orientation, curl is a chiral operation. In other words, if the orientation is reversed, then the direction of the curl is also reversed.
Suppose the vector field describes the velocity field of a fluid flow (such as a large tank of liquid or gas) and a small ball is located within the fluid or gas (the center of the ball being fixed at a certain point). If the ball has a rough surface, the fluid flowing past it will make it rotate. The rotation axis (oriented according to the right hand rule) points in the direction of the curl of the field at the center of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point.[9] The curl of the vector field at any point is given by the rotation of an infinitesimal area in the xy-plane (for the z-axis component of the curl), the zx-plane (for the y-axis component of the curl) and the yz-plane (for the x-axis component of the curl vector). This can be seen in the examples below.
The vector field F(x,y,z)=yı^−xȷ^{\displaystyle \mathbf {F} (x,y,z)=y{\boldsymbol {\hat {\imath }}}-x{\boldsymbol {\hat {\jmath }}}} can be decomposed as Fx=y,Fy=−x,Fz=0.{\displaystyle F_{x}=y,F_{y}=-x,F_{z}=0.}
Upon visual inspection, the field can be described as "rotating". If the vectors of the field were to represent a linear force acting on objects present at that point, and an object were to be placed inside the field, the object would start to rotate clockwise around itself. This is true regardless of where the object is placed.
Calculating the curl:∇×F=0ı^+0ȷ^+(∂∂x(−x)−∂∂yy)k^=−2k^{\displaystyle \nabla \times \mathbf {F} =0{\boldsymbol {\hat {\imath }}}+0{\boldsymbol {\hat {\jmath }}}+\left({\frac {\partial }{\partial x}}(-x)-{\frac {\partial }{\partial y}}y\right){\boldsymbol {\hat {k}}}=-2{\boldsymbol {\hat {k}}}}
The resulting vector field describing the curl would at all points be pointing in the negative z direction. The results of this equation align with what could have been predicted using the right-hand rule using a right-handed coordinate system. Being a uniform vector field, the object described before would have the same rotational intensity regardless of where it was placed.
For the vector field F(x,y,z)=−x2ȷ^{\displaystyle \mathbf {F} (x,y,z)=-x^{2}{\boldsymbol {\hat {\jmath }}}}
the curl is not as obvious from the graph. However, taking the object in the previous example, and placing it anywhere on the line x = 3, the force exerted on the right side would be slightly greater than the force exerted on the left, causing it to rotate clockwise. Using the right-hand rule, it can be predicted that the resulting curl would be straight in the negative z direction. Inversely, if placed on x = −3, the object would rotate counterclockwise and the right-hand rule would result in a positive z direction.
Calculating the curl:∇×F=0ı^+0ȷ^+∂∂x(−x2)k^=−2xk^.{\displaystyle {\nabla }\times \mathbf {F} =0{\boldsymbol {\hat {\imath }}}+0{\boldsymbol {\hat {\jmath }}}+{\frac {\partial }{\partial x}}\left(-x^{2}\right){\boldsymbol {\hat {k}}}=-2x{\boldsymbol {\hat {k}}}.}
The curl points in the negative z direction when x is positive and vice versa. In this field, the intensity of rotation would be greater as the object moves away from the plane x = 0.
In general curvilinear coordinates (not only in Cartesian coordinates), the curl of a cross product of vector fields v and F can be shown to be ∇×(v×F)=((∇⋅F)+F⋅∇)v−((∇⋅v)+v⋅∇)F.{\displaystyle \nabla \times \left(\mathbf {v\times F} \right)={\Big (}\left(\mathbf {\nabla \cdot F} \right)+\mathbf {F\cdot \nabla } {\Big )}\mathbf {v} -{\Big (}\left(\mathbf {\nabla \cdot v} \right)+\mathbf {v\cdot \nabla } {\Big )}\mathbf {F} \ .}
Interchanging the vector field v and the ∇ operator, we arrive at the cross product of a vector field with the curl of a vector field: v×(∇×F)=∇F(v⋅F)−(v⋅∇)F,{\displaystyle \mathbf {v\ \times } \left(\mathbf {\nabla \times F} \right)=\nabla _{\mathbf {F} }\left(\mathbf {v\cdot F} \right)-\left(\mathbf {v\cdot \nabla } \right)\mathbf {F} \ ,} where ∇F is the Feynman subscript notation, which considers only the variation due to the vector field F (i.e., in this case, v is treated as being constant in space).
Another example is the curl of a curl of a vector field. It can be shown that in general coordinates ∇×(∇×F)=∇(∇⋅F)−∇2F,{\displaystyle \nabla \times \left(\mathbf {\nabla \times F} \right)=\mathbf {\nabla } (\mathbf {\nabla \cdot F} )-\nabla ^{2}\mathbf {F} \ ,} and this identity defines the vector Laplacian of F, symbolized as ∇2F.
The curl of the gradient of any scalar field φ is always the zero vector field ∇×(∇φ)=0{\displaystyle \nabla \times (\nabla \varphi )={\boldsymbol {0}}} which follows from the antisymmetry in the definition of the curl, and the symmetry of second derivatives.
The divergence of the curl of any vector field is equal to zero: ∇⋅(∇×F)=0.{\displaystyle \nabla \cdot (\nabla \times \mathbf {F} )=0.}
If φ is a scalar valued function and F is a vector field, then ∇×(φF)=∇φ×F+φ∇×F{\displaystyle \nabla \times (\varphi \mathbf {F} )=\nabla \varphi \times \mathbf {F} +\varphi \nabla \times \mathbf {F} }
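The two vanishing identities can be spot-checked numerically with nested central differences (an illustrative sketch; the tolerance absorbs floating-point and truncation error):

```python
def partial(f, p, j, h=1e-4):
    # central-difference approximation of df/dx_j at p
    a, b = list(p), list(p)
    a[j] += h
    b[j] -= h
    return (f(*a) - f(*b)) / (2 * h)

def grad(phi):
    return lambda x, y, z: tuple(partial(phi, (x, y, z), j) for j in range(3))

def curl(F):
    def c(x, y, z):
        d = lambda i, j: partial(lambda *q: F(*q)[i], (x, y, z), j)
        return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))
    return c

def div(F):
    return lambda x, y, z: sum(
        partial(lambda *q: F(*q)[j], (x, y, z), j) for j in range(3))

p = (0.7, -1.2, 0.5)
phi = lambda x, y, z: x * x * y + y * z * z
F = lambda x, y, z: (y * z, x * x, x * y * z)
curl_grad = curl(grad(phi))(*p)   # should be (0, 0, 0) up to numerical error
div_curl = div(curl(F))(*p)       # should be 0 up to numerical error
```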
The vector calculus operations of grad, curl, and div are most easily generalized in the context of differential forms, which involves a number of steps. In short, they correspond to the derivatives of 0-forms, 1-forms, and 2-forms, respectively. The geometric interpretation of curl as rotation corresponds to identifying bivectors (2-vectors) in 3 dimensions with the special orthogonal Lie algebra so(3){\displaystyle {\mathfrak {so}}(3)} of infinitesimal rotations (in coordinates, skew-symmetric 3 × 3 matrices), while representing rotations by vectors corresponds to identifying 1-vectors (equivalently, 2-vectors) and so(3){\displaystyle {\mathfrak {so}}(3)}, these all being 3-dimensional spaces.
In 3 dimensions, a differential 0-form is a real-valued function f(x,y,z){\displaystyle f(x,y,z)}; a differential 1-form is the following expression, where the coefficients are functions: a1dx+a2dy+a3dz;{\displaystyle a_{1}\,dx+a_{2}\,dy+a_{3}\,dz;} a differential 2-form is the formal sum, again with function coefficients: a12dx∧dy+a13dx∧dz+a23dy∧dz;{\displaystyle a_{12}\,dx\wedge dy+a_{13}\,dx\wedge dz+a_{23}\,dy\wedge dz;} and a differential 3-form is defined by a single term with one function as coefficient: a123dx∧dy∧dz.{\displaystyle a_{123}\,dx\wedge dy\wedge dz.} (Here the a-coefficients are real functions of three variables; the wedge products, e.g. dx∧dy{\displaystyle {\text{d}}x\wedge {\text{d}}y}, can be interpreted as oriented plane segments, dx∧dy=−dy∧dx{\displaystyle {\text{d}}x\wedge {\text{d}}y=-{\text{d}}y\wedge {\text{d}}x}, etc.)
The exterior derivative of a k-form in R3 is defined as the (k + 1)-form from above, and in Rn if, e.g., ω(k)=∑1≤i1<i2<⋯<ik≤nai1,…,ikdxi1∧⋯∧dxik,{\displaystyle \omega ^{(k)}=\sum _{1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n}a_{i_{1},\ldots ,i_{k}}\,dx_{i_{1}}\wedge \cdots \wedge dx_{i_{k}},} then the exterior derivative d leads to dω(k)=∑j=1i1<⋯<ikn∂ai1,…,ik∂xjdxj∧dxi1∧⋯∧dxik.{\displaystyle d\omega ^{(k)}=\sum _{\scriptstyle {j=1} \atop \scriptstyle {i_{1}<\cdots <i_{k}}}^{n}{\frac {\partial a_{i_{1},\ldots ,i_{k}}}{\partial x_{j}}}\,dx_{j}\wedge dx_{i_{1}}\wedge \cdots \wedge dx_{i_{k}}.}
The exterior derivative of a 1-form is therefore a 2-form, and that of a 2-form is a 3-form. On the other hand, because of the interchangeability of mixed derivatives,∂2∂xi∂xj=∂2∂xj∂xi,{\displaystyle {\frac {\partial ^{2}}{\partial x_{i}\,\partial x_{j}}}={\frac {\partial ^{2}}{\partial x_{j}\,\partial x_{i}}},}and antisymmetry,dxi∧dxj=−dxj∧dxi{\displaystyle dx_{i}\wedge dx_{j}=-dx_{j}\wedge dx_{i}}
the twofold application of the exterior derivative yields0{\displaystyle 0}(the zerok+2{\displaystyle k+2}-form).
Thus, denoting the space of k-forms by Ωk(R3){\displaystyle \Omega ^{k}(\mathbb {R} ^{3})} and the exterior derivative by d, one gets a sequence: 0⟶dΩ0(R3)⟶dΩ1(R3)⟶dΩ2(R3)⟶dΩ3(R3)⟶d0.{\displaystyle 0\,{\overset {d}{\longrightarrow }}\;\Omega ^{0}\left(\mathbb {R} ^{3}\right)\,{\overset {d}{\longrightarrow }}\;\Omega ^{1}\left(\mathbb {R} ^{3}\right)\,{\overset {d}{\longrightarrow }}\;\Omega ^{2}\left(\mathbb {R} ^{3}\right)\,{\overset {d}{\longrightarrow }}\;\Omega ^{3}\left(\mathbb {R} ^{3}\right)\,{\overset {d}{\longrightarrow }}\,0.}
Here Ωk(Rn){\displaystyle \Omega ^{k}(\mathbb {R} ^{n})} is the space of sections of the exterior algebra Λk(Rn){\displaystyle \Lambda ^{k}(\mathbb {R} ^{n})} vector bundle over Rn, whose dimension is the binomial coefficient (nk){\displaystyle {\binom {n}{k}}}; note that Ωk(R3)=0{\displaystyle \Omega ^{k}(\mathbb {R} ^{3})=0} for k > 3 or k < 0. Writing only dimensions, one obtains a row of Pascal's triangle:
0→1→3→3→1→0;{\displaystyle 0\rightarrow 1\rightarrow 3\rightarrow 3\rightarrow 1\rightarrow 0;}
the 1-dimensional fibers correspond to scalar fields, and the 3-dimensional fibers to vector fields, as described below. Modulo suitable identifications, the three nontrivial occurrences of the exterior derivative correspond to grad, curl, and div.
Differential forms and the differential can be defined on any Euclidean space, or indeed any manifold, without any notion of a Riemannian metric. On a Riemannian manifold, or more generally a pseudo-Riemannian manifold, k-forms can be identified with k-vector fields (k-forms are k-covector fields, and a pseudo-Riemannian metric gives an isomorphism between vectors and covectors), and on an oriented vector space with a nondegenerate form (an isomorphism between vectors and covectors), there is an isomorphism between k-vectors and (n − k)-vectors; in particular on (the tangent space of) an oriented pseudo-Riemannian manifold. Thus on an oriented pseudo-Riemannian manifold, one can interchange k-forms, k-vector fields, (n − k)-forms, and (n − k)-vector fields; this is known as Hodge duality. Concretely, on R3 this is given by:
Thus, identifying 0-forms and 3-forms with scalar fields, and 1-forms and 2-forms with vector fields:
On the other hand, the fact that d2 = 0 corresponds to the identities ∇×(∇f)=0{\displaystyle \nabla \times (\nabla f)=\mathbf {0} } for any scalar field f, and ∇⋅(∇×v)=0{\displaystyle \nabla \cdot (\nabla \times \mathbf {v} )=0} for any vector field v.
Grad and div generalize to all oriented pseudo-Riemannian manifolds, with the same geometric interpretation, because the spaces of 0-forms and n-forms at each point are always 1-dimensional and can be identified with scalar fields, while the spaces of 1-forms and (n − 1)-forms are always fiberwise n-dimensional and can be identified with vector fields.
Curl does not generalize in this way to 4 or more dimensions (or down to 2 or fewer dimensions); in 4 dimensions the dimensions are 0 → 1 → 4 → 6 → 4 → 1 → 0;
so the curl of a 1-vector field (fiberwise 4-dimensional) is a 2-vector field, which at each point belongs to a 6-dimensional vector space, and so one has ω(2)=∑i<k=1,2,3,4ai,kdxi∧dxk,{\displaystyle \omega ^{(2)}=\sum _{i<k=1,2,3,4}a_{i,k}\,dx_{i}\wedge dx_{k},} which yields a sum of six independent terms, and cannot be identified with a 1-vector field. Nor can one meaningfully go from a 1-vector field to a 2-vector field to a 3-vector field (4 → 6 → 4), as taking the differential twice yields zero (d2 = 0). Thus there is no curl function from vector fields to vector fields in other dimensions arising in this way.
However, one can define a curl of a vector field as a2-vector fieldin general, as described below.
2-vectors correspond to the exterior power Λ2V; in the presence of an inner product, in coordinates these are the skew-symmetric matrices, which are geometrically considered as the special orthogonal Lie algebra so{\displaystyle {\mathfrak {so}}}(V) of infinitesimal rotations. This has (n choose 2) = 1/2 n(n − 1) dimensions, and allows one to interpret the differential of a 1-vector field as its infinitesimal rotations. Only in 3 dimensions (or trivially in 0 dimensions) do we have n = 1/2 n(n − 1), which is the most elegant and common case. In 2 dimensions the curl of a vector field is not a vector field but a function, as 2-dimensional rotations are given by an angle (a scalar; an orientation is required to choose whether one counts clockwise or counterclockwise rotations as positive); this is not the div, but is rather perpendicular to it. In 3 dimensions the curl of a vector field is a vector field as is familiar (in 1 and 0 dimensions the curl of a vector field is 0, because there are no non-trivial 2-vectors), while in 4 dimensions the curl of a vector field is, geometrically, at each point an element of the 6-dimensional Lie algebra so(4){\displaystyle {\mathfrak {so}}(4)}.
The curl of a 3-dimensional vector field which only depends on 2 coordinates (say x and y) is simply a vertical vector field (in the z direction) whose magnitude is the curl of the 2-dimensional vector field, as in the examples on this page.
Considering curl as a 2-vector field (an antisymmetric 2-tensor) has been used to generalize vector calculus and associated physics to higher dimensions.[10]
In the case where the divergence of a vector field V is zero, a vector field W exists such that V = curl(W).[citation needed] This is why the magnetic field, characterized by zero divergence, can be expressed as the curl of a magnetic vector potential.
If W is a vector field with curl(W) = V, then adding any gradient vector field grad(f) to W will result in another vector field W + grad(f) such that curl(W + grad(f)) = V as well. This can be summarized by saying that the inverse curl of a three-dimensional vector field can be obtained up to an unknown irrotational field with the Biot–Savart law.
https://en.wikipedia.org/wiki/Curl_(mathematics)
In statistics, a fixed-effect Poisson model is a Poisson regression model used for static panel data when the outcome variable is count data. Hausman, Hall, and Griliches pioneered the method in the mid-1980s. Their outcome of interest was the number of patents filed by firms, where they wanted to develop methods to control for the firm fixed effects.[1] Linear panel data models use the linear additivity of the fixed effects to difference them out and circumvent the incidental parameter problem. Even though Poisson models are inherently nonlinear, the use of the linear index and the exponential link function leads to multiplicative separability; more specifically,[2] E[y_it | x_i, a_i] = a_i exp(x_it b_0), t = 1, …, T.
This formula looks very similar to the standard Poisson mean premultiplied by the term a_i. As the conditioning set includes the observables over all periods, we are in the static panel data world and are imposing strict exogeneity.[3] Hausman, Hall, and Griliches then use Andersen's conditional maximum likelihood methodology to estimate b_0. Using n_i = Σ_t y_it allows them to obtain the following nice distributional result for y_i: conditional on n_i and the covariates, (y_i1, …, y_iT) follows a multinomial distribution with cell probabilities p_t(b_0) = exp(x_it b_0) / Σ_r exp(x_ir b_0), in which the fixed effect a_i no longer appears.
At this point, the estimation of the fixed-effect Poisson model is transformed in a useful way: the model can be estimated by maximum likelihood techniques for multinomial log likelihoods. This is computationally not necessarily very restrictive, but the distributional assumptions up to this point are fairly stringent. Wooldridge provided evidence that these models have nice robustness properties as long as the conditional mean assumption (i.e. equation 1) holds.[5] Chamberlain also provided semiparametric efficiency bounds for these estimators under slightly weaker exogeneity assumptions. However, these bounds are practically difficult to attain, as the proposed methodology needs high-dimensional nonparametric regressions for attaining these bounds.
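A self-contained sketch of the conditional (multinomial) likelihood on simulated data; the sample sizes, the single scalar covariate, and the crude grid search are illustrative assumptions, not the original authors' implementation:

```python
import math
import random

random.seed(0)

def rpois(lam):
    # Knuth's Poisson sampler (adequate for the small means used here)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Simulate a panel: firm fixed effects a_i, one covariate x_it, true b0 = 0.5
N, T, b0 = 500, 3, 0.5
x = [[random.gauss(0, 1) for _ in range(T)] for _ in range(N)]
a = [math.exp(random.gauss(0, 0.5)) for _ in range(N)]
y = [[rpois(a[i] * math.exp(b0 * x[i][t])) for t in range(T)] for i in range(N)]

def cond_loglik(b):
    # sum_i sum_t y_it * log p_it(b), with p_it = exp(b x_it) / sum_r exp(b x_ir);
    # the fixed effect a_i has cancelled out of these probabilities
    ll = 0.0
    for i in range(N):
        log_denom = math.log(sum(math.exp(b * x[i][t]) for t in range(T)))
        for t in range(T):
            ll += y[i][t] * (b * x[i][t] - log_denom)
    return ll

# Crude grid maximization; a real implementation would use Newton's method.
b_hat = max((k / 100 for k in range(101)), key=cond_loglik)
```

With the fixed effects cancelled, the objective is exactly a multinomial (conditional logit style) log likelihood, which is why standard multinomial routines apply.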
https://en.wikipedia.org/wiki/Fixed-effect_Poisson_model
In the mathematical field of graph theory, a graph homomorphism is a mapping between two graphs that respects their structure. More concretely, it is a function between the vertex sets of two graphs that maps adjacent vertices to adjacent vertices.
Homomorphisms generalize various notions of graph colorings and allow the expression of an important class of constraint satisfaction problems, such as certain scheduling or frequency assignment problems.[1] The fact that homomorphisms can be composed leads to rich algebraic structures: a preorder on graphs, a distributive lattice, and a category (one for undirected graphs and one for directed graphs).[2] The computational complexity of finding a homomorphism between given graphs is prohibitive in general, but a lot is known about special cases that are solvable in polynomial time. Boundaries between tractable and intractable cases have been an active area of research.[3]
In this article, unless stated otherwise, graphs are finite, undirected graphs with loops allowed, but multiple edges (parallel edges) disallowed.
A graph homomorphism[4] f from a graph G = (V(G), E(G)) to a graph H = (V(H), E(H)), written f : G → H, is a function from V(G) to V(H) that preserves edges. Formally, (u, v) ∈ E(G) implies (f(u), f(v)) ∈ E(H), for all pairs of vertices u, v in V(G).
If there exists any homomorphism from G to H, then G is said to be homomorphic to H or H-colorable. This is often denoted as just G → H.
The above definition is extended to directed graphs. Then, for a homomorphism f : G → H, (f(u), f(v)) is an arc (directed edge) of H whenever (u, v) is an arc of G.
There is an injective homomorphism from G to H (i.e., one that maps distinct vertices in G to distinct vertices in H) if and only if G is isomorphic to a subgraph of H.
If a homomorphism f : G → H is a bijection, and its inverse function f⁻¹ is also a graph homomorphism, then f is a graph isomorphism.[5]
Covering maps are a special kind of homomorphism that mirrors the definition and many properties of covering maps in topology.[6] They are defined as surjective homomorphisms (i.e., something maps to each vertex) that are also locally bijective, that is, a bijection on the neighbourhood of each vertex.
An example is the bipartite double cover, formed from a graph by splitting each vertex v into v0 and v1 and replacing each edge u,v with edges u0,v1 and v0,u1. The function mapping v0 and v1 in the cover to v in the original graph is a homomorphism and a covering map.
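The double-cover construction is mechanical enough to sketch in a few lines of Python; the function names here are illustrative, not from any particular library:

```python
# A minimal sketch of the bipartite double cover (function names are my own).
def bipartite_double_cover(vertices, edges):
    # Split each vertex v into (v, 0) and (v, 1); replace each edge {u, v}
    # by {(u, 0), (v, 1)} and {(u, 1), (v, 0)}.
    cover_vertices = [(v, i) for v in vertices for i in (0, 1)]
    cover_edges = set()
    for u, v in edges:
        cover_edges.add(frozenset([(u, 0), (v, 1)]))
        cover_edges.add(frozenset([(u, 1), (v, 0)]))
    return cover_vertices, cover_edges

def covering_map(cover_vertex):
    # The projection (v, i) -> v is the covering map described above.
    return cover_vertex[0]

# Example: the double cover of the triangle K3 is the 6-cycle.
verts, edges = bipartite_double_cover([0, 1, 2], [(0, 1), (1, 2), (2, 0)])
```

One can check that the projection sends every cover edge to an original edge (so it is a homomorphism) and is a bijection on each neighbourhood.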
Graph homeomorphism is a different notion, not related directly to homomorphisms. Roughly speaking, it requires injectivity, but allows mapping edges to paths (not just to edges). Graph minors are a still more relaxed notion.
Two graphs G and H are homomorphically equivalent if G → H and H → G.[4] The maps are not necessarily surjective or injective. For instance, the complete bipartite graphs K2,2 and K3,3 are homomorphically equivalent: each map can be defined as taking the left (resp. right) half of the domain graph and mapping it to just one vertex in the left (resp. right) half of the image graph.
A retraction is a homomorphism r from a graph G to a subgraph H of G such that r(v) = v for each vertex v of H. In this case the subgraph H is called a retract of G.[7]
A core is a graph with no homomorphism to any proper subgraph. Equivalently, a core can be defined as a graph that does not retract to any proper subgraph.[8] Every graph G is homomorphically equivalent to a unique core (up to isomorphism), called the core of G.[9] Notably, this is not true in general for infinite graphs.[10] However, the same definitions apply to directed graphs, and a directed graph is also equivalent to a unique core.
Every graph and every directed graph contains its core as a retract and as an induced subgraph.[7]
For example, all complete graphs Kn and all odd cycles (cycle graphs of odd length) are cores.
Every 3-colorable graph G that contains a triangle (that is, has the complete graph K3 as a subgraph) is homomorphically equivalent to K3. This is because, on one hand, a 3-coloring of G is the same as a homomorphism G → K3, as explained below. On the other hand, every subgraph of G trivially admits a homomorphism into G, implying K3 → G. This also means that K3 is the core of any such graph G. Similarly, every bipartite graph that has at least one edge is equivalent to K2.[11]
A k-coloring, for some integer k, is an assignment of one of k colors to each vertex of a graph G such that the endpoints of each edge get different colors. The k-colorings of G correspond exactly to homomorphisms from G to the complete graph Kk.[12] Indeed, the vertices of Kk correspond to the k colors, and two colors are adjacent as vertices of Kk if and only if they are different. Hence a function defines a homomorphism to Kk if and only if it maps adjacent vertices of G to different colors (i.e., it is a k-coloring). In particular, G is k-colorable if and only if it is Kk-colorable.[12]
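For small graphs this correspondence can be checked directly by brute force. A minimal sketch (the function names are my own, and the exhaustive search is of course only feasible for tiny instances):

```python
from itertools import product

# Exhaustively search for a homomorphism G -> H; feasible only for tiny graphs.
def find_homomorphism(g_vertices, g_edges, h_vertices, h_edges):
    h = {frozenset(e) for e in h_edges}
    for values in product(h_vertices, repeat=len(g_vertices)):
        f = dict(zip(g_vertices, values))
        # Every edge of G must land on an edge of H; since K_k has no loops,
        # adjacent vertices of G cannot receive the same color.
        if all(frozenset((f[u], f[v])) in h for u, v in g_edges):
            return f          # f is a valid H-coloring of G
    return None

def complete_graph(k):
    vs = list(range(k))
    return vs, [(i, j) for i in vs for j in vs if i < j]

# C5 has chromatic number 3: no homomorphism to K2, but one to K3.
c5 = ([0, 1, 2, 3, 4], [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
print(find_homomorphism(*c5, *complete_graph(2)))               # None
print(find_homomorphism(*c5, *complete_graph(3)) is not None)   # True
```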
If there are two homomorphisms G → H and H → Kk, then their composition G → Kk is also a homomorphism.[13] In other words, if a graph H can be colored with k colors, and there is a homomorphism from G to H, then G can also be k-colored. Therefore, G → H implies χ(G) ≤ χ(H), where χ denotes the chromatic number of a graph (the least k for which it is k-colorable).[14]
General homomorphisms can also be thought of as a kind of coloring: if the vertices of a fixed graph H are the available colors and the edges of H describe which colors are compatible, then an H-coloring of G is an assignment of colors to vertices of G such that adjacent vertices get compatible colors.
Many notions of graph coloring fit into this pattern and can be expressed as graph homomorphisms into different families of graphs. Circular colorings can be defined using homomorphisms into circular complete graphs, refining the usual notion of colorings.[15] Fractional and b-fold coloring can be defined using homomorphisms into Kneser graphs.[16] T-colorings correspond to homomorphisms into certain infinite graphs.[17] An oriented coloring of a directed graph is a homomorphism into any oriented graph.[18] An L(2,1)-coloring is a homomorphism into the complement of the path graph that is locally injective, meaning it is required to be injective on the neighbourhood of every vertex.[19]
Another interesting connection concerns orientations of graphs.
An orientation of an undirected graph G is any directed graph obtained by choosing one of the two possible orientations for each edge.
An example of an orientation of the complete graph Kk is the transitive tournament T→k with vertices 1, 2, …, k and arcs from i to j whenever i < j.
A homomorphism between orientations of graphs G and H yields a homomorphism between the undirected graphs G and H, simply by disregarding the orientations.
On the other hand, given a homomorphism G → H between undirected graphs, any orientation H→ of H can be pulled back to an orientation G→ of G so that G→ has a homomorphism to H→.
Therefore, a graph G is k-colorable (has a homomorphism to Kk) if and only if some orientation of G has a homomorphism to T→k.[20]
A folklore theorem states that for all k, a directed graph G has a homomorphism to T→k if and only if it admits no homomorphism from the directed path P→k+1.[21] Here P→n is the directed graph with vertices 1, 2, …, n and arcs from i to i + 1, for i = 1, 2, …, n − 1.
Therefore, a graph is k-colorable if and only if it has an orientation that admits no homomorphism from P→k+1.
This statement can be strengthened slightly to say that a graph is k-colorable if and only if some orientation contains no directed path of length k (no P→k+1 as a subgraph).
This is the Gallai–Hasse–Roy–Vitaver theorem.
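For small graphs the theorem can be verified by enumerating all orientations. The sketch below (helper names are my own) recovers χ(C5) = 3 as the minimum, over orientations, of the number of vertices on a longest directed path:

```python
from itertools import product

def longest_path(vertices, arcs):
    # Number of vertices on a longest directed path, by exhaustive DFS
    # (fine for tiny graphs; the general problem is NP-hard).
    out = {v: [] for v in vertices}
    for u, v in arcs:
        out[u].append(v)
    def dfs(v, seen):
        best = 1
        for w in out[v]:
            if w not in seen:
                best = max(best, 1 + dfs(w, seen | {w}))
        return best
    return max(dfs(v, {v}) for v in vertices)

def chromatic_number_via_orientations(vertices, edges):
    # Gallai-Hasse-Roy-Vitaver: chi(G) is the minimum, over all 2^|E|
    # orientations, of the number of vertices on a longest directed path.
    best = len(vertices)
    for choice in product([0, 1], repeat=len(edges)):
        arcs = [(u, v) if c == 0 else (v, u) for (u, v), c in zip(edges, choice)]
        best = min(best, longest_path(vertices, arcs))
    return best

c5 = ([0, 1, 2, 3, 4], [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
print(chromatic_number_via_orientations(*c5))  # 3 = chi(C5)
```

On C5, no orientation avoids a 2-arc path (that would require every vertex to be a source or a sink, impossible on an odd cycle), which is exactly why the minimum is 3 rather than 2.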
Some scheduling problems can be modeled as a question about finding graph homomorphisms.[22][23] As an example, one might want to assign workshop courses to time slots in a calendar so that two courses attended by the same student are not too close to each other in time. The courses form a graph G, with an edge between any two courses that are attended by some common student. The time slots form a graph H, with an edge between any two slots that are distant enough in time. For instance, if one wants a cyclical, weekly schedule, such that each student gets their workshop courses on non-consecutive days, then H would be the complement graph of C7. A graph homomorphism from G to H is then a schedule assigning courses to time slots, as specified.[22] To add a requirement saying that, e.g., no single student has courses on both Friday and Monday, it suffices to remove the corresponding edge from H.
A simple frequency allocation problem can be specified as follows: a number of transmitters in a wireless network must choose a frequency channel on which they will transmit data. To avoid interference, transmitters that are geographically close should use channels with frequencies that are far apart. If this condition is approximated with a single threshold to define 'geographically close' and 'far apart', then a valid channel choice again corresponds to a graph homomorphism. It should go from the graph of transmitters G, with edges between pairs that are geographically close, to the graph of channels H, with edges between channels that are far apart. While this model is rather simplified, it does admit some flexibility: transmitter pairs that are not close but could interfere because of geographical features can be added to the edges of G. Those that do not communicate at the same time can be removed from it. Similarly, channel pairs that are far apart but exhibit harmonic interference can be removed from the edge set of H.[24]
In each case, these simplified models display many of the issues that have to be handled in practice.[25] Constraint satisfaction problems, which generalize graph homomorphism problems, can express various additional types of conditions (such as individual preferences, or bounds on the number of coinciding assignments). This allows the models to be made more realistic and practical.
Graphs and directed graphs can be viewed as a special case of the far more general notion called relational structures (defined as a set with a tuple of relations on it). Directed graphs are structures with a single binary relation (adjacency) on the domain (the vertex set).[26][3] Under this view, homomorphisms of such structures are exactly graph homomorphisms.
In general, the question of finding a homomorphism from one relational structure to another is a constraint satisfaction problem (CSP).
The case of graphs gives a concrete first step that helps to understand more complicated CSPs.
Many algorithmic methods for finding graph homomorphisms, like backtracking, constraint propagation and local search, apply to all CSPs.[3]
For graphs G and H, the question of whether G has a homomorphism to H corresponds to a CSP instance with only one kind of constraint,[3] as follows. The variables are the vertices of G, and the domain for each variable is the vertex set of H. An evaluation is a function that assigns to each variable an element of the domain, that is, a function f from V(G) to V(H). Each edge or arc (u, v) of G then corresponds to the constraint ((u, v), E(H)). This constraint expresses that the evaluation should map the arc (u, v) to a pair (f(u), f(v)) that is in the relation E(H), that is, to an arc of H. A solution to the CSP is an evaluation that respects all constraints, so it is exactly a homomorphism from G to H.
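The translation just described is literal enough to write down. Here is a sketch of the encoding together with a naive backtracking solver (all names are illustrative, not from any CSP library):

```python
def homomorphism_csp(g_vertices, g_edges, h_vertices, h_edges):
    # One variable per vertex of G; the domain of every variable is V(H);
    # one constraint per edge of G, with E(H) as the allowed relation.
    variables = list(g_vertices)
    domains = {v: list(h_vertices) for v in variables}
    relation = {frozenset(e) for e in h_edges}
    constraints = [((u, v), relation) for u, v in g_edges]
    return variables, domains, constraints

def backtrack(variables, domains, constraints, assignment=None):
    # Naive backtracking: extend a partial assignment one variable at a
    # time, rejecting it as soon as a fully-assigned constraint fails.
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        ok = all(frozenset((assignment[u], assignment[v])) in rel
                 for (u, v), rel in constraints
                 if u in assignment and v in assignment)
        if ok:
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# A solution of this CSP is exactly a homomorphism G -> H.
k3 = ([0, 1, 2], [(0, 1), (1, 2), (2, 0)])
print(backtrack(*homomorphism_csp(*k3, *k3)) is not None)  # True
```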
Compositions of homomorphisms are homomorphisms.[13] In particular, the relation → on graphs is transitive (and reflexive, trivially), so it is a preorder on graphs.[27] Let the equivalence class of a graph G under homomorphic equivalence be [G].
The equivalence class can also be represented by the unique core in [G].
The relation → is a partial order on those equivalence classes; it defines a poset.[28]
Let G < H denote that there is a homomorphism from G to H, but no homomorphism from H to G.
The relation → is a dense order, meaning that for all (undirected) graphs G, H such that G < H, there is a graph K such that G < K < H (this holds except for the trivial cases G = K0 or K1).[29][30] For example, between any two complete graphs (except K0, K1, K2) there are infinitely many circular complete graphs, corresponding to rational numbers between natural numbers.[31]
The poset of equivalence classes of graphs under homomorphisms is a distributive lattice, with the join of [G] and [H] defined as (the equivalence class of) the disjoint union [G ∪ H], and the meet of [G] and [H] defined as the tensor product [G × H] (the choice of graphs G and H representing the equivalence classes [G] and [H] does not matter).[32] The join-irreducible elements of this lattice are exactly the connected graphs. This can be shown using the fact that a homomorphism maps a connected graph into one connected component of the target graph.[33][34] The meet-irreducible elements of this lattice are exactly the multiplicative graphs. These are the graphs K such that a product G × H has a homomorphism to K only when one of G or H also does. Identifying multiplicative graphs lies at the heart of Hedetniemi's conjecture.[35][36]
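The tensor product used as the meet is easy to construct explicitly; a sketch with illustrative names:

```python
# Tensor (categorical) product G x H: vertex set V(G) x V(H); (g, h) and
# (g', h') are adjacent iff g ~ g' in G and h ~ h' in H.
def tensor_product(g_vertices, g_edges, h_vertices, h_edges):
    vertices = [(g, h) for g in g_vertices for h in h_vertices]
    edges = set()
    for a, b in g_edges:
        for x, y in h_edges:
            edges.add(frozenset([(a, x), (b, y)]))
            edges.add(frozenset([(a, y), (b, x)]))
    return vertices, edges

# The coordinate projections are homomorphisms G x H -> G and G x H -> H,
# which is why [G x H] sits below both [G] and [H] in the homomorphism order.
# Example: K2 x K3 is the 6-cycle (the bipartite double cover of K3).
verts, edges = tensor_product([0, 1], [(0, 1)], ['a', 'b', 'c'],
                              [('a', 'b'), ('b', 'c'), ('c', 'a')])
```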
Graph homomorphisms also form a category, with graphs as objects and homomorphisms as arrows.[37] The initial object is the empty graph, while the terminal object is the graph with one vertex and one loop at that vertex.
The tensor product of graphs is the category-theoretic product and the exponential graph is the exponential object for this category.[36][38] Since these two operations are always defined, the category of graphs is a cartesian closed category.
For the same reason, the lattice of equivalence classes of graphs under homomorphisms is in fact a Heyting algebra.[36][38]
For directed graphs the same definitions apply. In particular, → is a partial order on equivalence classes of directed graphs. It is distinct from the order → on equivalence classes of undirected graphs, but contains it as a suborder. This is because every undirected graph can be thought of as a directed graph in which every arc (u, v) appears together with its inverse arc (v, u), and this does not change the definition of homomorphism. The order → for directed graphs is again a distributive lattice and a Heyting algebra, with join and meet operations defined as before. However, it is not dense. There is also a category with directed graphs as objects and homomorphisms as arrows, which is again a cartesian closed category.[39][38]
There are many pairs of graphs that are incomparable with respect to the homomorphism preorder, that is, pairs such that neither graph admits a homomorphism into the other.[40] One way to construct them is to consider the odd girth of a graph G, the length of its shortest odd-length cycle.
The odd girth is, equivalently, the smallest odd number g for which there exists a homomorphism from the cycle graph on g vertices to G. For this reason, if G → H, then the odd girth of G is greater than or equal to the odd girth of H.[41]
On the other hand, if G → H, then the chromatic number of G is less than or equal to the chromatic number of H.
Therefore, if G has strictly larger odd girth than H and strictly larger chromatic number than H, then G and H are incomparable.[40] For example, the Grötzsch graph is 4-chromatic and triangle-free (it has girth 4 and odd girth 5),[42] so it is incomparable to the triangle graph K3.
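This incomparability can be confirmed computationally: the Grötzsch graph is the Mycielskian of C5, and a brute-force search (helper names are my own; exponential, but fine at this size) finds no homomorphism in either direction between it and K3:

```python
from itertools import product

def mycielskian(vertices, edges):
    # Mycielski construction: a shadow copy ('u', v) of each vertex v,
    # plus one apex vertex 'w' joined to all shadow vertices.
    new_vertices = list(vertices) + [('u', v) for v in vertices] + ['w']
    new_edges = list(edges)
    for a, b in edges:
        new_edges += [(('u', a), b), (a, ('u', b))]
    new_edges += [(('u', v), 'w') for v in vertices]
    return new_vertices, new_edges

def has_homomorphism(g_vertices, g_edges, h_vertices, h_edges):
    h = {frozenset(e) for e in h_edges}
    return any(all(frozenset((f[u], f[v])) in h for u, v in g_edges)
               for f in (dict(zip(g_vertices, vals))
                         for vals in product(h_vertices, repeat=len(g_vertices))))

c5 = ([0, 1, 2, 3, 4], [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
k3 = (['r', 'g', 'b'], [('r', 'g'), ('g', 'b'), ('b', 'r')])
grotzsch = mycielskian(*c5)

print(has_homomorphism(*grotzsch, *k3))  # False: the Grötzsch graph is not 3-colorable
print(has_homomorphism(*k3, *grotzsch))  # False: it is triangle-free
```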
Examples of graphs with arbitrarily large values of odd girth and chromatic number are Kneser graphs[43] and generalized Mycielskians.[44] A sequence of such graphs, with simultaneously increasing values of both parameters, gives infinitely many incomparable graphs (an antichain in the homomorphism preorder).[45] Other properties, such as density of the homomorphism preorder, can be proved using such families.[46] Constructions of graphs with large values of chromatic number and girth, not just odd girth, are also possible, but more complicated (see Girth and graph coloring).
Among directed graphs, it is much easier to find incomparable pairs. For example, consider the directed cycle graphs C→n, with vertices 1, 2, …, n and arcs from i to i + 1 (for i = 1, 2, …, n − 1) and from n to 1.
There is a homomorphism from C→n to C→k (n, k ≥ 3) if and only if n is a multiple of k.
In particular, the directed cycle graphs C→n with n prime are all mutually incomparable.[47]
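The divisibility criterion is easy to check by brute force on small cycles; a sketch with illustrative names:

```python
from itertools import product

# Directed homomorphism check: every arc of G must map onto an arc of H.
def has_digraph_homomorphism(g_vertices, g_arcs, h_vertices, h_arcs):
    h = set(h_arcs)
    return any(all((f[u], f[v]) in h for u, v in g_arcs)
               for f in (dict(zip(g_vertices, vals))
                         for vals in product(h_vertices, repeat=len(g_vertices))))

def directed_cycle(n):
    vs = list(range(n))
    return vs, [(i, (i + 1) % n) for i in vs]

# C_n -> C_k exists exactly when k divides n:
print(has_digraph_homomorphism(*directed_cycle(6), *directed_cycle(3)))  # True
print(has_digraph_homomorphism(*directed_cycle(5), *directed_cycle(3)))  # False
```

For 6 → 3 the witnessing map is simply i ↦ i mod 3; for coprime lengths no such wrap-around is possible.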
In the graph homomorphism problem, an instance is a pair of graphs (G, H) and a solution is a homomorphism from G to H. The general decision problem, asking whether there is any solution, is NP-complete.[48] However, limiting the allowed instances gives rise to a variety of different problems, some of which are much easier to solve. Methods that apply when restricting the left side G are very different from those for the right side H, but in each case a dichotomy (a sharp boundary between easy and hard cases) is known or conjectured.
The homomorphism problem with a fixed graph H on the right side of each instance is also called the H-coloring problem. When H is the complete graph Kk, this is the graph k-coloring problem, which is solvable in polynomial time for k = 0, 1, 2, and NP-complete otherwise.[49] In particular, K2-colorability of a graph G is equivalent to G being bipartite, which can be tested in linear time.
More generally, whenever H is a bipartite graph, H-colorability is equivalent to K2-colorability (or K0/K1-colorability when H is empty/edgeless), hence equally easy to decide.[50] Pavol Hell and Jaroslav Nešetřil proved that, for undirected graphs, no other case is tractable: if H is non-bipartite and loopless, then the H-coloring problem is NP-complete.
This is also known as the dichotomy theorem for (undirected) graph homomorphisms, since it divides H-coloring problems into NP-complete and P problems, with no intermediate cases.
For directed graphs, the situation is more complicated and in fact equivalent to the much more general question of characterizing the complexity of constraint satisfaction problems.[53] It turns out that H-coloring problems for directed graphs are just as general and as diverse as CSPs with any other kinds of constraints.[54][55] Formally, a (finite) constraint language (or template) Γ is a finite domain together with a finite set of relations over this domain, and CSP(Γ) is the constraint satisfaction problem where instances are only allowed to use constraints in Γ. Every problem CSP(Γ) is polynomial-time equivalent to an H-coloring problem for some directed graph H.[54]
Intuitively, this means that every algorithmic technique or complexity result that applies to H-coloring problems for directed graphs H applies just as well to general CSPs. In particular, one can ask whether the Hell–Nešetřil theorem can be extended to directed graphs. By the above equivalence, this is the same as the Feder–Vardi conjecture (also known as the CSP conjecture or dichotomy conjecture), which states that for every constraint language Γ, CSP(Γ) is NP-complete or in P.[48] This conjecture was proved in 2017 independently by Dmitry Zhuk and Andrei Bulatov, leading to the following corollary: for every directed graph H, the H-coloring problem is either in P or NP-complete.
The homomorphism problem with a single fixed graph G on the left side of input instances can be solved by brute force in time |V(H)|^O(|V(G)|), so polynomial in the size of the input graph H.[56] In other words, the problem is trivially in P for graphs G of bounded size. The interesting question is then what other properties of G, besides size, make polynomial algorithms possible.
The crucial property turns out to be treewidth, a measure of how tree-like the graph is. For a graph G of treewidth at most k and a graph H, the homomorphism problem can be solved in time |V(H)|^O(k) with a standard dynamic programming approach. In fact, it is enough to assume that the core of G has treewidth at most k. This holds even if the core is not known.[57][58]
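For the simplest case, treewidth 1 (trees), the dynamic program is short enough to sketch. This version counts, rather than just detects, homomorphisms from a tree G into H in roughly |V(G)|·|V(H)|² time instead of the brute-force |V(H)|^|V(G)| (helper names are my own, not the formal algorithm from the cited papers):

```python
def count_tree_homomorphisms(tree_adj, root, h_adj):
    # Bottom-up DP over the tree G: table[h] = number of homomorphisms of
    # the subtree rooted at v that send v to the vertex h of H.
    def count(v, parent):
        table = {h: 1 for h in h_adj}
        for child in tree_adj[v]:
            if child == parent:
                continue
            child_table = count(child, v)
            for h in table:
                # The child may map to any neighbour of h in H.
                table[h] *= sum(child_table[h2] for h2 in h_adj[h])
        return table
    return sum(count(root, None).values())

# Homomorphisms from the path a-b-c into K3 are its proper 3-colorings
# as a path: 3 * 2 * 2 = 12.
path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
k3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(count_tree_homomorphisms(path, 'a', k3))  # 12
```

The treewidth-k algorithm generalizes this idea: instead of one vertex per DP state, it keeps a table over the at most k+1 vertices of each bag of a tree decomposition, giving the |V(H)|^O(k) bound.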
The exponent in the |V(H)|^O(k)-time algorithm cannot be lowered significantly: no algorithm with running time |V(H)|^o(tw(G)/log tw(G)) exists, assuming the exponential time hypothesis (ETH), even if the inputs are restricted to any class of graphs of unbounded treewidth.[59] The ETH is an unproven assumption similar to P ≠ NP, but stronger.
Under the same assumption, there are also essentially no other properties that can be used to get polynomial time algorithms. This is formalized as follows:
One can ask whether the problem is at least solvable in a time arbitrarily highly dependent on G, but with a fixed polynomial dependency on the size of H.
The answer is again positive if we limit G to a class of graphs with cores of bounded treewidth, and negative for every other class.[58] In the language of parameterized complexity, this formally states that the homomorphism problem in 𝒢 parameterized by the size (number of edges) of G exhibits a dichotomy: it is fixed-parameter tractable if graphs in 𝒢 have cores of bounded treewidth, and W[1]-complete otherwise.
The same statements hold more generally for constraint satisfaction problems (in other words, for relational structures). The only assumption needed is that constraints can involve only a bounded number of variables (all relations have some bounded arity, 2 in the case of graphs). The relevant parameter is then the treewidth of the primal constraint graph.[59]
https://en.wikipedia.org/wiki/Graph_homomorphism
In rail transportation, a rolling highway or rolling road is a form of combined transport involving the conveying of road trucks by rail, referred to as Ro-La trains. The concept is a form of piggyback transportation.
The technical challenges to implementing rolling highways vary from region to region. In North America, the loading gauge is often high enough to accommodate double-stack containers, so the height of a semi-trailer on a flatcar is no issue. In Europe, however, except for purpose-built lines such as the Channel Tunnel or the Gotthard Base Tunnel, the loading gauge height is much smaller, and it is necessary to transport the trailers with the tires about 30 cm (11.81 in) above the rails, so the trailers cannot simply be parked on the surface of a flat car above the wagon wheels or bogies. Making the wagon wheels smaller limits the maximum speed, so many designs allow the trailer to be transported with its wheels lower than the rail wagon wheels. An early approach in France was the Kangourou wagon[1] with modified trailers. This technology did not survive, owing to market resistance to modified trailers. Today, three designs for these special wagons are in commercial service: "Modalohr", "CargoBeamer" and "Niederflurwagen".[2][3][4]
During a rolling-highway journey, if the drivers accompany the trailer, they are accommodated in a passenger car or a sleeping car. At both ends of the rail link there are purpose-built terminals that allow the train to be easily loaded and unloaded.
Rolling highways are mostly used for transit routes, e.g. through the Alps or from western to eastern Europe.
In Austria, rolling highways run from Bavaria via Tyrol to Italy or to Eastern Europe. Traditionally, Austria is a transit country, and the rolling highway is therefore of environmental importance. In 1999 the Austrian Federal Railways (ÖBB) carried 254,000 trucks, which equals 8,500,000 tonnes (8,400,000 long tons; 9,400,000 short tons) of load, including the vehicles' weight (158,989 trucks in 1993). The rolling highway trains in Austria are operated by Ökombi GmbH, a division of Rail Cargo Austria, the cargo division of ÖBB. There is a direct rolling highway between Salzburg and the harbour of Trieste, Italy, where the trucks arrive on ferries from Turkey. In those cases, drivers arrive by plane via Ljubljana airport to take over the trucks.
In 1999, the Konkan Railway Corporation introduced the Roll On Roll Off (RORO) service on the section between Kolad in Maharashtra and Verna in Goa,[5] which was extended up to Surathkal in Karnataka in 2004.[6][7] The RORO service, the first of its kind in India, allowed trucks to be transported on flatcars. It was highly popular,[8] carrying about 110,000 trucks and bringing in about ₹740 million worth of earnings to the corporation until 2007.[9] These services are now being extended to other parts of India.[10][11]
In Switzerland, rolling highways across the Alps exist for both the Gotthard and the Lötschberg–Simplon routes. They are operated by RAlpin AG, headquartered in Olten.[12] On April 15, 2015, BLS Cargo launched a service between Cologne and Milan capable of transporting four-metre (13 ft) articulated lorry trailers.[13]
By 2018, 51% of the TEN-T network had been made adequate for the P/C 80 loading gauge required by the ERA Technical Specifications for Interoperability for conveying road trucks by train. Further upgrades are underway.[14]
Two rolling highways are currently in operation in France, both using French Modalohr technology: the 175 km (109 mi) Autoroute Ferroviaire Alpine, connecting the Savoy region to Turin through the Fréjus Rail Tunnel, owned and operated jointly by SNCF and Trenitalia; and the 1,050 km (650 mi) Lorry-Rail, which connects Bettembourg, Luxembourg, to Perpignan, operated by SNCF. Lorry-Rail only carries unaccompanied trailers, while the AFA carries accompanied and unaccompanied trailers. Since June 2012, these two have been operated under the brand "VIIA" by SNCF Geodis.
In 2013, plans were announced to add more routes in France.[15] One was planned to link Dourges (near Lille) to Tarnos (near Bayonne) in spring 2016,[16] and the other was an extension north from Bettembourg to Calais. Eurotunnel announced its intention to build a terminal at Folkestone to extend the Dourges–Tarnos route to the UK.[17] However, in April 2015 the French ministry of transportation announced the cancellation of the Dourges–Tarnos route, citing financial concerns.[18]
In July 2020, the government announced two further routes, Sète–Calais and Cherbourg–Bayonne.[19] French Transport Minister Jean-Baptiste Djebbari confirmed in September 2021 €15m of funding in 2021 for further development of autoroutes ferroviaires, including Calais–Sète, Cherbourg–Bayonne and Perpignan–Rungis.[20]
As of August 2021, the following operators offer routes in France:
- VIIA[21]
- Cargobeamer[22]
https://en.wikipedia.org/wiki/Rolling_highway
Metrication or metrification is the act or process of converting to the metric system of measurement.[1] All over the world, countries have transitioned from local and traditional units of measurement to the metric system. This process began in France during the 1790s and has advanced steadily over two centuries, culminating in 95% of the world officially using only the modern metric system.[2] Nonetheless, certain countries and sectors are either still transitioning or have chosen not to fully adopt the metric system.
The process of metrication is typically initiated and overseen by a country's government, generally motivated by the necessity of establishing a uniform measurement system for effective international cooperation in fields like trade and science. Governments achieve metrication through either mandatory changes to existing units within a specified timeframe or through voluntary adoption.
While metric use is mandatory in some countries and voluntary in others, all countries, including the United States, have recognised and adopted the SI, albeit to different degrees. As of 2011, ninety-five percent of the world's population lived in countries where the metric system is the only legal system of measurement.[3]: p. 49, ch 2
According to the National Institute of Standards and Technology (NIST), only three countries do not have mandatory metric laws (Liberia, Myanmar, and the United States);[4][5] however, a research paper by Vera (2011) stated that in practice there were four additional countries, namely the United States COFA countries (the Federated States of Micronesia, the Marshall Islands and Palau) and Samoa.
Samoa has since mandated metric trade.[6][3]: 60, 494–496
In 2018, the Liberian government pledged to adopt the metric system.[7] In 2013, the Myanmar Ministry of Commerce announced that Myanmar was preparing to adopt the metric system as the country's official system of measurement, and metrication in Myanmar began with some progress (road signs and temperature are legislated to be in metric); however, there has been very little progress in local trade.[8][9]
As of 2023, the United States has a national policy of adopting the metric system, based on the Metric Conversion Act of 1975, amended by the Omnibus Trade and Competitiveness Act of 1988 and Presidential Executive Order 12770 of 1991, and all United States government agencies are required to adopt it.[10]
The metrication process can take years to implement and complete: for instance, Guyana adopted the metric system in 2002 and was only able to make it mandatory in local trade in 2017, after the metric system was fully adopted in schools.[11] Antigua and Barbuda, also officially metric, is moving slowly in its metrication process, with a new push in 2011 for all government agencies to convert by 2013 and the entire country to use the metric system by the first quarter of 2015.[12] Other metric Caribbean countries, such as Saint Lucia (officially metric since 2000), are still in the process of full conversion.[13]
The United Kingdom has officially embraced a dual measurement system. As of 2007 it halted its metrication process, retaining imperial units for the mile and yard in road markings, pints for returnable milk containers, and (with Ireland) the pint for draught beer and cider sold in pubs.[14] Throughout the 1990s, the European Commission helped accelerate the metrication process for member states, implementing the Units of Measure Directive to promote trade. This acceleration caused public backlash in the United Kingdom, and in 2007 the United Kingdom announced that it had secured the permanent exemptions listed above and, to appease British public opinion and to facilitate trade with the United States, that the option to include imperial units alongside metric units could continue indefinitely.[14][15]
The United Kingdom and the United States face ongoing resistance toward metrication, which may be partially rooted in a belief that their cultural identity is intertwined with the traditional measurement systems they have historically used.[16] This resistance prompted a review by the UK government of the mandatory use of metric units in sales and trade. The outcome of this review, with over 100,000 respondents, was that a majority had limited or no appetite for increased use of imperial measures.[17]
The metre was adopted as the exclusive measure in 1801 under the French Consulate, then under the First French Empire until 1812, when Napoleon decreed the introduction of the mesures usuelles, which remained in use in France up to 1840, in the reign of Louis Philippe.[18] Meanwhile, the metre had been adopted by the Republic of Geneva.[19] After the canton of Geneva joined Switzerland in 1815, Guillaume Henri Dufour published the first Swiss official map, for which the metre was adopted as the unit of length.[20][21] A Swiss-French binational officer, Louis Napoléon Bonaparte, was present when a baseline was measured near Zürich for the Dufour map, which would win the gold medal for the national map at the Exposition Universelle of 1855.[22][23][24] Among the scientific instruments calibrated on the metre that were displayed at the Exposition Universelle was the Brunner apparatus, a geodetic instrument devised for measuring the central baseline of Spain, whose designer, Carlos Ibáñez e Ibáñez de Ibero, would represent Spain at the International Statistical Institute. In addition to the Exposition Universelle and the second Statistical Congress held in Paris, an International Association for obtaining a uniform decimal system of measures, weights, and coins was created there in 1855.[25][26][27][28] Copies of the Spanish standard would be made for Egypt, France and Germany.[29][30] These standards were compared with each other and with the Borda apparatus, which was the main reference for measuring all geodetic baselines in France.[31][32][33] These comparisons were essential because of the expansibility of solid materials with a rise in temperature.
Indeed, one fact had constantly dominated all the fluctuations of ideas on the measurement of geodesic bases: the constant concern to accurately assess the temperature of standards in the field. The determination of this variable, on which the length of the measuring instrument depended, had always been considered by geodesists as so difficult and so important that one could almost say that the history of measuring instruments is identical with that of the precautions taken to avoid temperature errors.[34] In 1867, the second general Conference of the European Arc Measurement recommended the adoption of the metre in replacement of the toise. In 1869, the Saint Petersburg Academy of Sciences sent the French Academy of Sciences a report drafted by Otto Wilhelm von Struve, Heinrich von Wild and Moritz von Jacobi inviting its French counterpart to undertake joint action with a view to ensuring the universal use of the metric system in all scientific work.[35] The same year, Napoleon III convened the International Metre Commission, which was to meet in Paris in 1870. The Franco-Prussian War broke out and the Second French Empire collapsed, but the metre survived.[36][37]
During the nineteenth century the metric system of weights and measures proved a convenient political compromise during the unification processes in the Netherlands, Germany and Italy. In 1814, Portugal became the second country not part of the French Empire to officially adopt the metric system. Spain found it expedient in 1849 to follow the French example, and within a decade Latin America had also adopted the metric system, or had already adopted it, as in the case of Chile by 1848. There was considerable resistance to metrication in the United Kingdom and in the United States. Despite this, they were actually the first countries in the world to use a metric standard for cartography.[38][39][40][33][41]
The introduction of the metric system into France in 1795 was done on a district by district basis, with Paris being the first district. By modern standards the transition was poorly managed. Although thousands of pamphlets were distributed, the Agency of Weights and Measures, which oversaw the introduction, underestimated the work involved. Paris alone needed 500,000 metre sticks, yet one month after the metre became the sole legal unit of measure, only 25,000 were in store.[42]: 269 This, combined with the excesses of the Revolution and the high level of illiteracy in 18th-century France, made the metric system unpopular.
Napoleon himself ridiculed the metric system but, as an able administrator, recognised the value of a sound basis for a system of measurement. Under the décret impérial du 12 février 1812 (imperial decree of 12 February 1812), a new system of measure – the mesures usuelles ("customary measures") – was introduced for use in small retail businesses; all government, legal and similar works still had to use the metric system, and the metric system continued to be taught at all levels of education.[43] That system reintroduced the names of many units used during the ancien régime, but their values were redefined in terms of metric units. Thus the toise was defined as being two metres, with six pieds making up one toise, twelve pouces making up one pied and twelve lignes making up one pouce. Likewise the livre was defined as being 500 g, each livre comprising sixteen onces and each once eight gros, and the aune as 120 centimetres.[44] This intermediate step eased the transition to a metric-based system.
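The redefinitions above form a simple conversion table. The following sketch encodes them; the constant names are illustrative, and the values are taken directly from the text (toise = 2 m, 6 pieds per toise, 12 pouces per pied, 12 lignes per pouce; livre = 500 g, 16 onces per livre, 8 gros per once; aune = 120 cm).

```python
# Mesures usuelles of 1812, expressed in metric terms as described above.
TOISE_M = 2.0
PIED_M = TOISE_M / 6     # one sixth of a toise, ~0.3333 m
POUCE_M = PIED_M / 12    # ~27.8 mm
LIGNE_M = POUCE_M / 12   # ~2.31 mm

LIVRE_G = 500.0
ONCE_G = LIVRE_G / 16    # 31.25 g
GROS_G = ONCE_G / 8      # ~3.91 g

AUNE_M = 1.20
```

Note that the subdivision chain is duodecimal (base 12) even though each unit's value is defined metrically; this is exactly the compromise the decree struck between familiar names and the new standards.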
By the Loi du 4 juillet 1837 (the law of 4 July 1837), Louis Philippe I effectively revoked the use of mesures usuelles by reaffirming the laws of measurement of 1795 and 1799, to be used from 1 May 1840.[45][46] However, many units of measure, such as the livre (for half a kilogram), remained in everyday use for many years,[46][47] and to a residual extent up to this day.
At the outbreak of the French Revolution, much of modern-day Germany and Austria were part of the Holy Roman Empire, which had become a loose federation of kingdoms, principalities, free cities, bishoprics and other fiefdoms, each with its own system of measurement, though in most cases the systems were loosely derived from the Carolingian system instituted by Charlemagne a thousand years earlier.
During the Napoleonic era, some of the German states moved to reform their systems of measurement using the prototype metre and kilogram as the basis of the new units. Baden, in 1810, for example, redefined the Ruthe (rod) as being 3.0 m exactly and defined its subunits as 1 Ruthe = 10 Fuß (feet) = 100 Zoll (inches) = 1,000 Linie (lines) = 10,000 Punkt (points) (for simplicity at the expense of grammar, these are the singular forms of each name), while the Pfund was defined as 500 g, divided into 30 Loth, each of 16.67 g.[48][49] Bavaria, in its reform of 1811, trimmed the Bavarian Pfund from 561.288 g to 560 g exactly, consisting of 32 Loth, each of 17.5 g,[50] while the Prussian Pfund remained at 467.711 g.[51]
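Baden's reform replaced the old duodecimal subdivisions with a purely decimal chain. This sketch writes out the values quoted above (constant names are illustrative):

```python
# Baden's 1810 decimal reform as described in the text:
# Ruthe = 3.0 m exactly, each subunit one tenth of the previous.
RUTHE_M = 3.0
FUSS_M = RUTHE_M / 10     # 0.3 m
ZOLL_M = FUSS_M / 10      # 3 cm
LINIE_M = ZOLL_M / 10     # 3 mm
PUNKT_M = LINIE_M / 10    # 0.3 mm

PFUND_G = 500.0
LOTH_G = PFUND_G / 30     # ~16.67 g, matching the figure quoted above
```

The decimal chain made mental conversion to and from metric trivial, which is why such "metric-valued" traditional units served as stepping stones toward full metrication.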
After the Congress of Vienna there was a degree of commercial cooperation between the various German states, resulting in the German Customs Union (Zollverein). There were, however, still many barriers to trade until Bavaria took the lead in establishing the General German Commercial Code in 1856. As part of the code, the Zollverein introduced the Zollpfund (customs pound), which was defined as exactly 500 g and could be split into 30 Loth.[52] This unit was used for inter-state movement of goods, but was not applied in all states for internal use.
In 1832, Carl Friedrich Gauss studied the Earth's magnetic field and proposed adding the second to the basic units of the metre and the kilogram in the form of the CGS system (centimetre, gram, second). In 1836, he founded the Magnetischer Verein, the first international scientific association, in collaboration with Alexander von Humboldt and Wilhelm Eduard Weber. Geophysics (the study of the Earth by the means of physics) preceded physics[citation needed] and contributed to the development of its methods. It was primarily a natural philosophy whose object was the study of natural phenomena such as the Earth's magnetic field, lightning and gravity. The coordination of the observation of geophysical phenomena at different points of the globe was of paramount importance and was at the origin of the creation of the first international scientific associations. The foundation of the Magnetischer Verein would be followed by that of the Central European Arc Measurement (German: Mitteleuropäische Gradmessung) on the initiative of Johann Jacob Baeyer in 1863, and by that of the International Meteorological Organisation, whose second president, the Swiss meteorologist and physicist Heinrich von Wild, represented Russia at the International Committee for Weights and Measures (CIPM).[53][54][55] In 1867, the European Arc Measurement (German: Europäische Gradmessung) called for the creation of a new international prototype metre (IPM) and the arrangement of a system whereby national standards could be compared with it. The French government gave practical support to the creation of an International Metre Commission, which met in Paris in 1870 and again in 1872 with the participation of about thirty countries. The Metre Convention was signed on 20 May 1875 in Paris, and the International Bureau of Weights and Measures was created under the supervision of the CIPM.
Although the Zollverein collapsed after the Austro-Prussian War of 1866, the metric system became the official system of measurement in the newly formed German Empire in 1872[42]: 350 and of Austria in 1875.[56] The Zollpfund ceased to be legal in Germany after 1877.[57]
The Cisalpine Republic, a North Italian republic set up by Napoleon in 1797 with its capital at Milan, first adopted a modified form of the metric system based on the braccio cisalpino (Cisalpine cubit), which was defined to be half a metre.[58] In 1802 the Cisalpine Republic was renamed the Italian Republic, with Napoleon as its head of state. The following year the Cisalpine system of measure was replaced by the metric system.[58]
In 1806, the Italian Republic was replaced by the Kingdom of Italy, with Napoleon as its emperor. By 1812, all of Italy from Rome northwards was under the control of Napoleon, either as French departments or as part of the Kingdom of Italy, ensuring that the metric system was in use throughout this region.
After the Congress of Vienna, the various Italian states reverted to their original systems of measurement, but in 1845 the Kingdom of Piedmont and Sardinia passed legislation to introduce the metric system within five years. By 1860, most of Italy had been unified under the King of Sardinia, Victor Emmanuel II, and under Law 132 of 28 July 1861 the metric system became the official system of measurement throughout the kingdom. Numerous Tavole di ragguaglio (conversion tables) were displayed in shops until 31 December 1870.[58]
The Netherlands (as the revolutionary Batavian Republic) began to use the metric system from 1799 but, as with its co-revolutionaries in France, encountered numerous practical difficulties. Subsequently, as part of the First French Empire from 1809, the Netherlands used Napoleon's mesures usuelles from their introduction in 1812 until the fall of his Empire in 1815. Under the (Dutch) Weights and Measures Act of 21 August 1816 and the Royal decree of 27 March 1817 (Koningklijk besluit van den 27 Maart 1817), the newly formed Kingdom of the Netherlands abandoned the mesures usuelles in favour of the "Dutch" metric system (Nederlands metrisch stelsel), in which metric units were simply given the names of units of measure that were then in use: for instance, the ons (ounce) was defined as 100 g.[59]
In 1875, Norway was the first country to ratify the Metre Convention, which was seen as an important step towards Norwegian independence. The decision to adopt the metric system is said to have been the Norwegian Parliament's fastest decision in peacetime.
In August 1814, Portugal officially adopted the metric system, but with the names of the units substituted by Portuguese traditional ones. In this system the basic units were the mão-travessa (hand) = 1 decimetre (10 mão-travessas = 1 vara (yard) = 1 metre), the canada = 1 litre and the libra (pound) = 1 kilogram.[60]
Until the ascent of the Bourbon monarchy in Spain in 1700, each region of Spain had its own system of measurement. The new Bourbon monarchy tried to centralise control and, with it, the system of measurement. There were debates regarding the desirability of retaining the Castilian units of measure or, in the interests of harmonisation, adopting the French system.[61] Although Spain assisted Méchain in his meridian survey, the government feared the French revolutionary movement and reinforced the Castilian units of measure to counter such movements. By 1849, however, it proved difficult to maintain the old system, and in that year the metric system became the legal system of measure in Spain.[61]
The Spanish Royal Academy of Science urged the government to approve the creation of a large-scale map of Spain in 1852. The following year Carlos Ibáñez e Ibáñez de Ibero was appointed to undertake this task. All the scientific and technical material had to be created. Ibáñez e Ibáñez de Ibero and Saavedra went to Paris to supervise the production by Brunner of a measuring instrument which they had devised and which they later compared with Borda's double-toise N° 1, which was the main reference for measuring all geodetic baselines in France and whose length was by definition 3.8980732 metres at a specified temperature.[38][62]
In 1865 the triangulation of Spain was connected with those of Portugal and France. In 1866, at the conference of the Association of Geodesy in Neuchâtel, Ibáñez announced that Spain would collaborate in remeasuring the French meridian arc. In 1879 Ibáñez and François Perrier (representing France) completed the junction between the geodetic networks of Spain and Algeria and thus completed the measurement of the French meridian arc, which extended from Shetland to the Sahara.
In 1866, Spain and Portugal joined the Central European Arc Measurement, which would become the European Arc Measurement the next year. In 1867, at the second general conference of the geodetic association held in Berlin, the question of an international standard unit of length was discussed in order to combine the measurements made in different countries to determine the size and shape of the Earth. Following recommendations drawn up by a committee chaired by Otto Wilhelm von Struve, director of the Pulkovo Observatory in St. Petersburg, the conference proposed the adoption of the metre and the creation of an international metre commission, after a preliminary discussion held in Neuchâtel between Johann Jacob Baeyer, director of the Royal Prussian Geodetic Institute, Adolphe Hirsch, founder of the Neuchâtel Observatory, and Carlos Ibáñez e Ibáñez de Ibero, the Spanish representative and founder and first director of the Instituto Geográfico Nacional.[63][64][65][66][67]
In November 1869 the French government issued invitations to join this commission. Spain accepted, and Carlos Ibáñez e Ibáñez de Ibero took part in the Committee of preparatory research from the first meeting of the International Metre Commission in 1870. He became president of the permanent Committee of the International Metre Commission in 1872. In 1874 he was elected president of the Permanent Commission of the European Arc Measurement. He also presided over the General Conference of the European Arc Measurement held in Paris in 1875, when the association decided on the creation of an international geodetic standard for baseline measurement.[68] He represented Spain at the 1875 conference of the Metre Convention, which was ratified the same year in Paris. The Spanish geodesist was elected the first president of the International Committee for Weights and Measures. His activities resulted in the distribution of a platinum-iridium prototype of the metre to all States parties to the Metre Convention during the first meeting of the General Conference on Weights and Measures in 1889. These prototypes defined the metre right up until 1960.
In 1801, the Helvetic Republic, at the instigation of Johann Georg Tralles, promulgated a law introducing the metric system. However, this was never applied, because in 1803 the competence for weights and measures returned to the cantons. On the territory of the current canton of Jura, then annexed to France (Mont-Terrible), the metre was adopted in 1800. The Canton of Geneva adopted the metric system in 1813, the canton of Vaud in 1822, the canton of Valais in 1824 and the canton of Neuchâtel in 1857. In 1835, twelve cantons of the Swiss Plateau and the north-east adopted a concordat based on the federal foot (exactly 0.3 m), which entered into force in 1836. The cantons of central and eastern Switzerland, as well as the Alpine cantons, continued to use the old measures.[19][69]
Guillaume-Henri Dufour founded a topographic office in Geneva in 1838 (the future Federal Office of Topography), which published under his direction, from 1845 to 1864, the first official map of Switzerland, on the basis of new cantonal measurements. This map, at 1:100,000 and engraved on copper, suggested the relief by hatching and shadows. The map projection adopted by the commission was the Bonne projection, centred on the Bern Observatory (5° 6′ 10.8″ east of the Paris meridian), although this point was much closer to the western end of Switzerland than to its eastern end; but its position was well known, and there was no more central observatory. The scale was set at 1:100,000 because it was considered more suitable for a country as rugged as Switzerland than the 1:80,000 adopted for the large map of France; the two maps were in any case inconsistent, as the meridians of the map of Switzerland tilted in the opposite direction to those of the map of France. The map commission wanted to adopt decimal measures, and Switzerland did not have an already existing map which, like the Cassini map, used a scale close to 1:86,400, i.e. 1 ligne (1⁄12 of a French inch) to 100 toises (i.e. 600 French feet). The metre was adopted as the linear measure, and the entire map was divided into twenty-five sheets: five east–west and five north–south. Each sheet of the map showed two scales, one purely metric, the other in Swiss leagues of 4,800 metres. The frame was divided into sexagesimal minutes and centesimal minutes; the latter, each subdivided into ten parts, had the advantage of showing kilometres in the direction of the meridians, so that there were new scales on the sides of the sheet to evaluate distances.[20][21][70]
According to the 1848 Constitution, the federal foot was to come into force throughout the country. In Geneva, a committee chaired by Guillaume Henri Dufour campaigned in favour of maintaining the decimal metric system in the French-speaking cantons and against the standardisation of weights and measures in Switzerland on the basis of the metric foot. In 1868 the metric system was legalised alongside the federal foot, a first step towards its definitive introduction. Cantonal calibrators were supervised by a Federal Bureau of Verification created in 1862, whose management was entrusted to Heinrich von Wild from 1864. In 1875, the responsibility for weights and measures was transferred back from the cantons to the Confederation, and Switzerland (represented by Adolphe Hirsch) joined the Metre Convention. The same year a federal law imposed the metric system from 1 January 1877. In 1977, Switzerland adopted the International System of Units.[19][54][71][72][73]
The Weights and Measures Act 1824 (5 Geo. 4. c. 74) imposed one standard 'imperial' system of weights and measures on the British Empire.[74] The effect of this act was to standardise existing British units of measure rather than to align them with the metric system.
During the next eighty years a number of parliamentary select committees recommended the adoption of the metric system, each with a greater degree of urgency, but Parliament prevaricated. A select committee report of 1862 recommended compulsory metrication, but with an "intermediate permissive phase"; Parliament responded in 1864 by legalising metric units only for 'contracts and dealings'.[75] The United Kingdom initially declined to sign the Treaty of the Metre, but did so in 1883. Meanwhile, British scientists and technologists were at the forefront of the metrication movement – it was the British Association for the Advancement of Science that promoted the CGS system of units as a coherent system,[76]: 109 and it was the British firm Johnson Matthey that was accepted by the CGPM in 1889 to cast the international prototype metre and kilogram.[77]
In 1895, another parliamentary select committee recommended the compulsory adoption of the metric system after a two-year permissive period. The Weights and Measures (Metric System) Act 1897 (60 & 61 Vict. c. 46) legalised metric units for trade, but did not make them mandatory.[75] A bill to make the metric system compulsory, to help British industry fight off the challenge of the nascent German industrial base, passed through the House of Lords in 1904, but did not pass in the House of Commons before the next general election was called. Following opposition by the Lancashire cotton industry, a similar bill was defeated in the House of Commons in 1907 by 150 votes to 118.[75]
In 1965, the UK began an official programme of metrication and, as of 2025, the metric system is the official measurement system in the United Kingdom for all regulated trading by weight or measure; however, the imperial pint remains the sole legal unit for milk in returnable bottles and for draught beer and cider in British pubs. Imperial units are also legally permitted to be used alongside metric units on food packaging and in price indications for goods sold loose.[78] The UK government undertook a "Choice on units of measurement" consultation, found that just over 1% of respondents wished to increase the use of imperial units, and accordingly kept the current regulations on the sale of goods.[79]
In addition, imperial units may be used exclusively where a product is sold by description rather than by weight/mass/volume: e.g. television screen and clothing sizes tend to be given in inches only, but a piece of material priced per inch would be unlawful unless the metric price was also shown.
The general public still uses imperial units in common language for height and weight, and imperial units are the norm when discussing longer distances such as journeys by car, but otherwise metric measurements are often used.[citation needed]
In 1805 a Swiss geodesist, Ferdinand Rudolph Hassler, brought copies of the French metre and kilogram to the United States.[80][81] In 1830 Congress decided to create uniform standards for length and weight in the United States.[82] Hassler was mandated to work out the new standards and proposed to adopt the metric system.[82] Congress instead opted for the British Parliamentary Standard of 1758 and the troy pound of Great Britain of 1824 as the length and weight standards.[82] Nevertheless, the primary baseline of the Survey of the Coast (renamed the United States Coast Survey in 1836 and the United States Coast and Geodetic Survey in 1878) was measured in 1834 at Fire Island using four 2-metre (6 ft 7 in) iron bars constructed to Hassler's specification in the United Kingdom and brought back to the United States in 1815.[83][84][81] All distances measured by the Survey of the Coast, Coast Survey, and Coast and Geodetic Survey were referred to the metre.[38][39] In 1866 the United States Congress passed a bill making it lawful to use the metric system in the United States. The bill, which was permissive rather than mandatory in nature, defined the metric system in terms of customary units rather than with reference to the international prototype metre and kilogram.[85][86]: 10–13 Ferdinand Rudolph Hassler's use of the metre in coastal surveying, which had been an argument for the Metric Act of 1866 allowing the use of the metre in the United States, probably also played a role in the choice of the metre as the international scientific unit of length and in the proposal by the European Arc Measurement (German: Europäische Gradmessung) to "establish a European international bureau for weights and measures".[39][38][87][88]
By 1893, the reference standards for customary units had become unreliable. Moreover, the United States, being a signatory of the Metre Convention, was in possession of national prototype metres and kilograms that were calibrated against those in use elsewhere in the world. This led to the Mendenhall Order, which redefined the customary units by referring to the national metric prototypes, but used the conversion factors of the 1866 act.[86]: 16–20 In 1896, a bill that would make the metric system mandatory in the United States was presented to Congress. Twenty-three of the 29 people who gave evidence before the congressional committee that was considering the bill were in favor of it, but six were against. Four of these six dissenters represented manufacturing interests and the other two were from the United States Revenue service. The grounds cited were the cost and inconvenience of the change-over. The bill was not enacted. Subsequent bills suffered a similar fate.[56]
The United States mandated the acceptance of the metric system in 1866 for commercial and legal proceedings, without displacing its customary units.[89] The non-mandatory nature of the adoption of the SI has resulted in a much slower pace of adoption in the US than in other countries.[90]
In 1971, the US National Bureau of Standards completed a three-year study of the impact of increasing worldwide metric use on the US. The study concluded with a report to Congress entitled A Metric America – A Decision Whose Time Has Come. Since then metric use has increased in the US, principally in the manufacturing and educational sectors. Public Law 93-380, enacted 21 August 1974, states that it is the policy of the US to encourage educational agencies and institutions to prepare students to use the metric system of measurement with ease and facility as a part of the regular education program. On 23 December 1975, President Gerald Ford signed Public Law 94–168, the Metric Conversion Act of 1975. This act declares a national policy of coordinating the increasing use of the metric system in the US. It established a US Metric Board whose functions, as of 1 October 1982, were transferred to the Department of Commerce, Office of Metric Programs, to coordinate the voluntary conversion to the metric system.[91]
In January 2007 NASA decided to use metric units for all future Moon missions, in line with the practice of other space agencies.[92]
The British metrication programme signalled the start of metrication programmes elsewhere in the Commonwealth, though India had started its programme in 1959, six years before the United Kingdom. South Africa (then not a member of the Commonwealth) set up a Metrication Advisory Board in 1967, New Zealand set up its Metric Advisory Board in 1969, Australia passed the Metric Conversion Act in 1970 and Canada appointed a Metrication Commission in 1971.
Metrication in Australia, New Zealand and South Africa was essentially complete within a decade, while in Canada metrication has been halted since the 1970s. In Canada, the square foot is still widespread for commercial and residential advertisements and partially in construction because of the close trade relations with the United States. Metric measurements on food products such as canned food are often merely the equivalent of the still widely used imperial units of measurement such as the ounce and the pound; butter in Canada is sold in 454 g packages, the equivalent of one pound. The railways of Canada, such as the Canadian National and Canadian Pacific, as well as commuter rail services, continue to measure their trackage in miles and speed limits in miles per hour because they also operate in the United States (although urban railways, including subways and light rail, have adopted kilometres and kilometres per hour).[93] Canadian railcars show weight figures in both imperial and metric. Most other Commonwealth countries adopted the metric system during the 1970s.[94]
Apart from the United Kingdom and Canada, which have effectively halted their metrication programmes, the great majority of countries using the imperial system completed official metrication during the second half of the 20th century or the first decade of the 21st century. The most recent to complete this process was the Republic of Ireland, which began metric conversion in the 1970s and completed it in early 2005.[95] Hong Kong uses three systems (Chinese, imperial, and metric), and all three are permitted for use in trade.[96]
There are three common ways that nations convert from traditional measurement systems to the metric system. The first is the quick or "Big-Bang" route. The second way is to phase in units over time and progressively outlaw traditional units. This method, favoured by some industrial nations, is slower and generally less complete. The third way is to redefine traditional units in metric terms. This has been used successfully where traditional units were ill-defined and had regional variations.
The "Big-Bang" way is to simultaneously outlaw the use ofpre-metricmeasurement, metricate, reissue all government publications and laws, and change education systems to metric.Indiawas the first Commonwealth country to use this method of conversion. Its changeover lasted from 1 April 1960, when metric measurements became legal, to 1 April 1962, when all other systems were banned. The Indian model was extremely successful and was copied over much of the developing world. Two industrialized Commonwealth countries, Australia andNew Zealand, also did a quick conversion to metric.
The phase-in way is to pass a law permitting the use of metric units in parallel with traditional ones, followed by education in metric units, then a progressive ban on the use of the older measures. This has generally been a slow route to metric. The British Empire permitted the use of metric measures in 1873, but the changeover was not completed in most Commonwealth countries other than India and Australia until the 1970s and 1980s, when governments took an active role in metric conversion. In the United Kingdom and Canada, the process is still incomplete. Japan also followed this route and did not complete the changeover for 70 years. By law, loose goods sold with reference to units of quantity have to be weighed and sold using the metric system. In 2001, the EU directive 80/181/EEC stated that supplementary units (imperial units alongside metric, including labelling on packages) would become illegal from the beginning of 2010. In September 2007,[15] a consultation process was started which resulted in the directive being modified to permit supplementary units to be used indefinitely.
The third method is to redefine traditional units in terms of metric values. These redefined "quasi-metric" units often stay in use long after metrication is said to have been completed. Resistance to metrication in post-revolutionary France convinced Napoleon to revert to mesures usuelles (usual measures), and, to some extent, the names remain throughout Europe. In 1814, Portugal adopted the metric system, but with the names of the units substituted by Portuguese traditional ones. In this system, the basic units were the mão-travessa (hand) = 1 decimetre (10 mão-travessas = 1 vara (yard) = 1 metre), the canada = 1 litre and the libra (pound) = 1 kilogram.[60] In the Netherlands, 500 g is informally referred to as a pond (pound) and 100 g as an ons (ounce), and in Germany and France, 500 g is informally referred to respectively as ein Pfund and une livre ("one pound").[137]
In Denmark, the re-defined pund (500 g) is occasionally used, particularly among older people and (older) fruit growers, since these were originally paid according to the number of pounds of fruit produced. In Sweden and Norway, a mil (Scandinavian mile) is informally equal to 10 km, and this has continued to be the predominantly used unit in conversation when referring to geographical distances. In the 19th century, Switzerland had a non-metric system completely based on metric terms (e.g. 1 Fuss (foot) = 30 cm, 1 Zoll (inch) = 3 cm, 1 Linie (line) = 3 mm). In China, the jin now has a value of 500 g and the liang is 50 g.
Surveys are performed by various interest groups or the government to determine the degree to which ordinary people change to using metric in their daily lives. In countries that have recently changed, older segments of the population tend still to use the older units.[138]
As of 2024, the metric system predominates in most of the world; however, specific industries are more resistant to metrication. For example:
Air and sea transportation commonly use the nautical mile. This is about one minute of arc of latitude along any meridian arc and is precisely defined as 1,852 metres (about 1.151 miles). It is not an SI unit. The prime unit of speed or velocity for maritime and air navigation remains the knot (nautical mile per hour).
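Since the nautical mile is defined as exactly 1,852 m, knot conversions reduce to simple arithmetic. A minimal sketch (the helper names are illustrative, not from any navigation library):

```python
# The nautical mile is exactly 1852 m; a knot is one nautical mile per hour.
NM_M = 1852.0

def knots_to_kmh(knots):
    """Convert a speed in knots to kilometres per hour."""
    return knots * NM_M / 1000.0

def knots_to_ms(knots):
    """Convert a speed in knots to metres per second."""
    return knots * NM_M / 3600.0

speed_kmh = knots_to_kmh(10)  # 10 kn = 18.52 km/h exactly
```

The tidy 18.52 km/h figure for 10 knots follows directly from the exact metric definition, which is one reason the nautical mile survives alongside SI rather than against it.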
The prime unit of measure for aviation (altitude, or flight level) is usually estimated based on air pressure values, and in many countries it is still described in nominal feet, although many others employ nominal metres. The International Civil Aviation Organization (ICAO) sets policies relating to measurement.
Consistent with ICAO policy, aviation has undergone a significant amount of metrication over the years. For example, runway lengths are usually given in metres. The United States metricated the data interchange format (METAR) for temperature reports in 1996 and has since indicated temperature in Celsius.[146] Metrication is also gradually taking place in cargo mass and dimensions and in fuel volume and mass.
In former Soviet countries and China, the metric system is used in aviation (whereby in Russia altitudes above the transition level are given in feet).[147][148] Sailplanes use the metric system in many European countries.
In 1975, the assembly of the International Maritime Organization (IMO) decided that future conventions of the International Convention for the Safety of Life at Sea (SOLAS) and other future IMO instruments should use SI units only.[149]
In the United Kingdom, some of the population continues to resist metrication to varying degrees. The traditional imperial measures are preferred by a majority and continue to have widespread use in some applications.[150][151] The metric system is used by most businesses[152] and is used for most trade transactions. Metric units must be used for certain trading activities (selling by weight or measure, for example), although imperial units may continue to be displayed in parallel.[153]
British law has enacted the provisions of European Union directive 80/181/EEC, which catalogues the units of measure that may be used for "economic, public health, public safety and administrative purposes".[154] These units consist of the recommendations of the General Conference on Weights and Measures,[76] supplemented by some additional units of measure that may be used for specified purposes.[155] Metric units could be legally used for trading purposes for nearly a century before metrication efforts began in earnest. The government had been making preparations for the conversion of the imperial units since the 1862 Select Committee on Weights and Measures recommended the conversion,[156] and the Weights and Measures Act of 1864 and the Weights and Measures (Metric System) Act of 1896 legalised the metric system.[157]
In 1965, with lobbying from British industries and the prospect of joining the Common Market, the government set a 10-year target for full conversion and created the Metrication Board in 1969. Metrication occurred in some areas during this period, including the re-surveying of Ordnance Survey maps in 1970, decimalisation of the currency in 1971, and the teaching of the metric system in schools. No plans were made to make the use of the metric system compulsory, and the Metrication Board was abolished in 1980 following a change in government.[158]
The United Kingdom avoided having to comply with the 1989 European Units of Measurement Directive (89/617/EEC), which required all member states to make the metric system compulsory, by negotiating derogations (delayed switchovers), including for miles on road signs and for pints for draught beer, cider, and milk sales.[159]
Immediately following the United Kingdom's vote to withdraw from the European Union, it was reported that some retailers requested to revert to imperial units, with some reverting without permission. A poll following the 2016 vote also found that 45% of Britons sought to revert to selling produce in imperial units.[160]
The UK government started a consultation on 3 June 2022 on the choice of units of measurement markings.[161]
Imperial units remain in common everyday use for human body measurements, in particular stones and pounds for weight, and feet and inches for height.
Fuel economy is often advertised in miles per imperial gallon, which may cause confusion for users accustomed to the US gallons quoted for American-manufactured cars.[162]
Heating, air conditioning, and gas cooking appliances occasionally display power in British thermal units per hour (BTU/h).[163]
Over time, the metric system has influenced the United States through international trade and standardisation. The use of the metric system was made legal as a system of measurement in 1866,[164] and the United States was a founding member of the International Bureau of Weights and Measures in 1875.[165] The system was officially adopted by the federal government in 1975 for use in the military and government agencies, and as the preferred system for trade and commerce.[166] Attempts in the 1990s to make metric units mandatory on federal and state road signage failed, and their use remains voluntary.[167]
A 1992 amendment to the Fair Packaging and Labeling Act (FPLA), which took effect in 1994, required labels on federally regulated "consumer commodities"[168] to include both metric and US customary units. As of 2013, all but one US state (New York) have passed laws permitting metric-only labels for the products they regulate.[169]
After many years of informal or optional metrication, the American public and much of private business and industry still use US customary units today.[170] At least two states, Kentucky and California, have even moved towards demetrication of highway construction projects.[171][172][173]
Canada legally allows for dual labelling of goods provided that the metric unit is listed first and that there is a distinction of whether a liquid measure is a US or a Canadian (imperial) unit.[174]
Belize, which is a former British colony, uses both the metric and British imperial systems.[175][176] Miles are the most commonly used unit for measuring distance,[177] and gasoline is sold in US gallons (similar to neighboring countries in Central America).
Confusion over units during the process of metrication can sometimes lead to accidents. In 1983, an Air Canada Boeing 767, nicknamed the "Gimli Glider" following the incident, ran out of fuel in midflight. The incident was caused in large part by confusion over the conversion between litres, kilograms, and pounds, resulting in the aircraft receiving 22,300 pounds (10,100 kg) of fuel instead of the required 22,300 kilograms (49,200 lb).[178]
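The scale of that error is easy to check with the standard conversion factor of about 2.20462 lb/kg (a quick arithmetic sketch, not an official reconstruction of the incident):

```python
# Re-doing the Gimli Glider arithmetic: the aircraft needed 22,300 kg of
# fuel, but the loaded quantity was computed as if the figure were pounds.
LB_PER_KG = 2.20462  # pounds per kilogram

required_kg = 22_300
loaded_kg = 22_300 / LB_PER_KG   # 22,300 lb expressed in kilograms

# roughly 10,100 kg on board -- less than half of what was needed
print(f"loaded {loaded_kg:,.0f} kg of the required {required_kg:,} kg")
```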
While not strictly an example of national metrication, the use of two different measurement systems was a contributing factor in the loss of the Mars Climate Orbiter in 1999. The National Aeronautics and Space Administration (NASA) specified metric units in the contract. NASA and other organisations worked in metric units, but one subcontractor, Lockheed Martin, provided thruster performance data to the team in pound-force seconds instead of newton-seconds. The spacecraft was intended to orbit Mars at an altitude of about 150 kilometres (93 mi), but the incorrect data meant that it descended to about 57 kilometres (35 mi). As a result, it burned up in the Martian atmosphere.[179]
https://en.wikipedia.org/wiki/Metrication
Fractional calculus is a branch of mathematical analysis that studies the several different possibilities of defining real number powers or complex number powers of the differentiation operator $D$,
$$Df(x) = \frac{d}{dx} f(x)\,,$$
and of the integration operator $J$,[Note 1]
$$Jf(x) = \int_0^x f(s)\,ds\,,$$
and developing a calculus for such operators, generalizing the classical one.
In this context, the term powers refers to iterative application of a linear operator $D$ to a function $f$, that is, repeatedly composing $D$ with itself, as in
$$D^n(f) = (\underbrace{D \circ D \circ D \circ \cdots \circ D}_{n})(f) = \underbrace{D(D(D(\cdots D}_{n}(f)\cdots))).$$
For example, one may ask for a meaningful interpretation of
$$\sqrt{D} = D^{1/2}$$
as an analogue of the functional square root for the differentiation operator, that is, an expression for some linear operator that, when applied twice to any function, will have the same effect as differentiation. More generally, one can look at the question of defining a linear operator $D^a$
for every real number $a$ in such a way that, when $a$ takes an integer value $n \in \mathbb{Z}$, it coincides with the usual $n$-fold differentiation $D$ if $n > 0$, and with the $n$-th power of $J$ when $n < 0$.
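On monomials this program can be carried out explicitly with the standard rule $D^a t^k = \frac{\Gamma(k+1)}{\Gamma(k+1-a)}\, t^{k-a}$; a minimal sketch (standard library only) checking that applying $D^{1/2}$ twice reproduces the ordinary derivative:

```python
from math import gamma, isclose

def frac_diff(coeff, power, a):
    """D^a acting on coeff * t**power, via D^a t^k = G(k+1)/G(k+1-a) t^(k-a)."""
    return coeff * gamma(power + 1) / gamma(power + 1 - a), power - a

# apply the half-derivative twice to t^3 ...
c, p = frac_diff(*frac_diff(1.0, 3, 0.5), 0.5)
# ... and compare with the ordinary derivative D t^3 = 3 t^2
assert isclose(c, 3.0) and isclose(p, 2.0)
```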
One of the motivations behind the introduction and study of these sorts of extensions of the differentiation operator $D$ is that the sets of operator powers $\{D^a \mid a \in \mathbb{R}\}$ defined in this way are continuous semigroups with parameter $a$, of which the original discrete semigroup of $\{D^n \mid n \in \mathbb{Z}\}$ for integer $n$ is a denumerable subgroup: since continuous semigroups have a well-developed mathematical theory, they can be applied to other branches of mathematics.
Fractional differential equations, also known as extraordinary differential equations,[1] are a generalization of differential equations through the application of fractional calculus.
In applied mathematics and mathematical analysis, a fractional derivative is a derivative of arbitrary order, real or complex. Its first appearance is in a letter written to Guillaume de l'Hôpital by Gottfried Wilhelm Leibniz in 1695.[2] Around the same time, Leibniz wrote to Johann Bernoulli about derivatives of "general order".[3] In the correspondence between Leibniz and John Wallis in 1697, Wallis's infinite product for $\pi/2$ is discussed, and Leibniz suggested using differential calculus to achieve this result. Leibniz further used the notation $d^{1/2}y$ to denote the derivative of order 1/2.[3]
Fractional calculus was introduced in one of Niels Henrik Abel's early papers,[4] where all the elements can be found: the idea of fractional-order integration and differentiation, the mutually inverse relationship between them, the understanding that fractional-order differentiation and integration can be considered as the same generalized operation, and the unified notation for differentiation and integration of arbitrary real order.[5] Independently, the foundations of the subject were laid by Liouville in a paper from 1832.[6][7][8] Oliver Heaviside introduced the practical use of fractional differential operators in electrical transmission line analysis circa 1890.[9] The theory and applications of fractional calculus expanded greatly over the 19th and 20th centuries, and numerous contributors have given different definitions for fractional derivatives and integrals.[10]
Let $f(x)$ be a function defined for $x > 0$. Form the definite integral from 0 to $x$, and call this
$$(Jf)(x) = \int_0^x f(t)\,dt\,.$$
Repeating this process gives
$$(J^2 f)(x) = \int_0^x (Jf)(t)\,dt = \int_0^x \left(\int_0^t f(s)\,ds\right) dt\,,$$
and this can be extended arbitrarily.
The Cauchy formula for repeated integration, namely
$$(J^n f)(x) = \frac{1}{(n-1)!} \int_0^x (x-t)^{n-1} f(t)\,dt\,,$$
leads in a straightforward way to a generalization for real $n$: using the gamma function to remove the discrete nature of the factorial function gives a natural candidate for the fractional integral operator,
$$(J^\alpha f)(x) = \frac{1}{\Gamma(\alpha)} \int_0^x (x-t)^{\alpha-1} f(t)\,dt\,.$$
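This candidate can be checked numerically against the closed form $J^\alpha t = t^{\alpha+1}/\Gamma(\alpha+2)$. A sketch using the trapezoid rule, which is adequate here because choosing $\alpha > 1$ keeps the kernel $(x-t)^{\alpha-1}$ bounded:

```python
from math import gamma

def frac_integral(f, alpha, x, n=4000):
    """Riemann-Liouville fractional integral (J^alpha f)(x), trapezoid rule.
    Valid here for alpha > 1 so the kernel (x-t)^(alpha-1) stays bounded."""
    h = x / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoid end-point weights
        total += w * (x - t) ** (alpha - 1) * f(t)
    return total * h / gamma(alpha)

# check against the closed form J^alpha t = t^(alpha+1) / Gamma(alpha+2)
alpha, x = 1.5, 1.0
exact = x ** (alpha + 1) / gamma(alpha + 2)
approx = frac_integral(lambda t: t, alpha, x)
```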
This is in fact a well-defined operator.
It is straightforward to show that the $J$ operator satisfies
$$(J^\alpha)(J^\beta f)(x) = (J^\beta)(J^\alpha f)(x) = (J^{\alpha+\beta} f)(x) = \frac{1}{\Gamma(\alpha+\beta)} \int_0^x (x-t)^{\alpha+\beta-1} f(t)\,dt\,.$$
$$\begin{aligned}(J^\alpha)(J^\beta f)(x) &= \frac{1}{\Gamma(\alpha)} \int_0^x (x-t)^{\alpha-1} (J^\beta f)(t)\,dt \\ &= \frac{1}{\Gamma(\alpha)\Gamma(\beta)} \int_0^x \int_0^t (x-t)^{\alpha-1} (t-s)^{\beta-1} f(s)\,ds\,dt \\ &= \frac{1}{\Gamma(\alpha)\Gamma(\beta)} \int_0^x f(s) \left(\int_s^x (x-t)^{\alpha-1} (t-s)^{\beta-1}\,dt\right) ds\,,\end{aligned}$$
where in the last step we exchanged the order of integration and pulled the $f(s)$ factor out of the $t$ integration.
Changing variables to $r$ defined by $t = s + (x-s)r$,
$$(J^\alpha)(J^\beta f)(x) = \frac{1}{\Gamma(\alpha)\Gamma(\beta)} \int_0^x (x-s)^{\alpha+\beta-1} f(s) \left(\int_0^1 (1-r)^{\alpha-1} r^{\beta-1}\,dr\right) ds\,.$$
The inner integral is the beta function, which satisfies
$$\int_0^1 (1-r)^{\alpha-1} r^{\beta-1}\,dr = B(\alpha, \beta) = \frac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha+\beta)}\,.$$
Substituting back into the equation gives
$$(J^\alpha)(J^\beta f)(x) = \frac{1}{\Gamma(\alpha+\beta)} \int_0^x (x-s)^{\alpha+\beta-1} f(s)\,ds = (J^{\alpha+\beta} f)(x)\,.$$
Interchanging $\alpha$ and $\beta$ shows that the order in which the $J$ operator is applied is irrelevant, which completes the proof.
This relationship is called the semigroup property of fractional differintegral operators.
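The semigroup property can be spot-checked exactly on monomials, where $J^a t^k = \frac{\Gamma(k+1)}{\Gamma(k+1+a)}\, t^{k+a}$. A small sketch:

```python
from math import gamma, isclose

def frac_int(coeff, power, a):
    """J^a on coeff * t**power: J^a t^k = G(k+1)/G(k+1+a) t^(k+a)."""
    return coeff * gamma(power + 1) / gamma(power + 1 + a), power + a

# semigroup property on a monomial: J^0.3 J^0.7 = J^1, a single integration
c, p = frac_int(*frac_int(1.0, 2, 0.7), 0.3)
assert isclose(c, 1 / 3) and isclose(p, 3.0)   # J^1 t^2 = t^3 / 3
```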
The classical form of fractional calculus is given by the Riemann–Liouville integral, which is essentially what has been described above. The theory of fractional integration for periodic functions (therefore including the "boundary condition" of repeating after a period) is given by the Weyl integral. It is defined on Fourier series, and requires the constant Fourier coefficient to vanish (thus, it applies to functions on the unit circle whose integrals evaluate to zero). The Riemann–Liouville integral exists in two forms, upper and lower. Considering the interval $[a, b]$, the integrals are defined as
$${}_a D_t^{-\alpha} f(t) = {}_a I_t^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_a^t (t-\tau)^{\alpha-1} f(\tau)\,d\tau\,,$$
$${}_t D_b^{-\alpha} f(t) = {}_t I_b^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_t^b (\tau-t)^{\alpha-1} f(\tau)\,d\tau\,,$$
where the former is valid for $t > a$ and the latter is valid for $t < b$.[11]
It has been suggested[12] that the integral on the positive real axis (i.e. $a = 0$) would be more appropriately named the Abel–Riemann integral, on the basis of its history of discovery and use, and, in the same vein, that the integral over the entire real line be named the Liouville–Weyl integral.
By contrast, the Grünwald–Letnikov derivative starts with the derivative instead of the integral.
The Hadamard fractional integral was introduced by Jacques Hadamard[13] and is given by the formula
$${}_a \mathbf{D}_t^{-\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_a^t \left(\log\frac{t}{\tau}\right)^{\alpha-1} f(\tau)\,\frac{d\tau}{\tau}\,, \qquad t > a\,.$$
The Atangana–Baleanu fractional integral of a continuous function is defined as
$${}^{\operatorname{AB}}_{a} I_t^{\alpha} f(t) = \frac{1-\alpha}{\operatorname{AB}(\alpha)}\, f(t) + \frac{\alpha}{\operatorname{AB}(\alpha)\,\Gamma(\alpha)} \int_a^t (t-\tau)^{\alpha-1} f(\tau)\,d\tau\,.$$
Unfortunately, the comparable process for the derivative operator $D$ is significantly more complex; it can be shown that $D$ is neither commutative nor additive in general.[14]
Unlike classical Newtonian derivatives, fractional derivatives can be defined in a variety of different ways that often do not all lead to the same result even for smooth functions. Some of these are defined via a fractional integral. Because of the incompatibility of definitions, it is frequently necessary to be explicit about which definition is used.
The corresponding derivative is calculated using Lagrange's rule for differential operators. To find the $\alpha$-th order derivative, the $n$-th order derivative of the integral of order $(n-\alpha)$ is computed, where $n$ is the smallest integer greater than $\alpha$ (that is, $n = \lceil\alpha\rceil$). The Riemann–Liouville fractional derivative and integral have multiple applications, for instance in solutions of equations for tokamak systems and for variable-order fractional parameters.[15][16] Similar to the definitions for the Riemann–Liouville integral, the derivative has upper and lower variants:[17]
$${}_a D_t^{\alpha} f(t) = \frac{d^n}{dt^n}\, {}_a D_t^{-(n-\alpha)} f(t) = \frac{d^n}{dt^n}\, {}_a I_t^{n-\alpha} f(t)\,,$$
$${}_t D_b^{\alpha} f(t) = \frac{d^n}{dt^n}\, {}_t D_b^{-(n-\alpha)} f(t) = \frac{d^n}{dt^n}\, {}_t I_b^{n-\alpha} f(t)\,.$$
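Following this recipe by hand for $f(t) = t$ and $\alpha = 1/2$ (so $n = 1$): integrate to order $1/2$, then differentiate once. A sketch verifying the classical result $D^{1/2} t = 2\sqrt{t/\pi}$:

```python
from math import gamma, pi, sqrt, isclose

# Riemann-Liouville D^(1/2) of f(t) = t, built exactly as defined:
# integrate to order n - alpha = 1/2, then take n = 1 ordinary derivative.
alpha, n = 0.5, 1
# step 1: J^(1/2) t = G(2)/G(2 + 1/2) * t^(3/2)
coeff = gamma(2) / gamma(2 + (n - alpha))
# step 2: d/dt of coeff * t^(3/2) = 1.5 * coeff * t^(1/2)
t = 0.81
d_half = 1.5 * coeff * sqrt(t)
assert isclose(d_half, 2 * sqrt(t / pi), rel_tol=1e-12)  # known result
```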
Another option for computing fractional derivatives is the Caputo fractional derivative, introduced by Michele Caputo in his 1967 paper.[18] In contrast to the Riemann–Liouville fractional derivative, when solving differential equations using Caputo's definition it is not necessary to define fractional-order initial conditions. Caputo's definition is as follows, where again $n = \lceil\alpha\rceil$:
$${}^{C} D_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_0^t \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha+1-n}}\,d\tau\,.$$
There is also the Caputo fractional derivative defined as
$$D^{\nu} f(t) = \frac{1}{\Gamma(n-\nu)} \int_0^t (t-u)^{n-\nu-1} f^{(n)}(u)\,du\,, \qquad (n-1) < \nu < n\,,$$
which has the advantage that it is zero when $f(t)$ is constant, and its Laplace transform is expressed by means of the initial values of the function and its derivative. Moreover, there is the Caputo fractional derivative of distributed order, defined as
$${}_a^b D^{\nu} f(t) = \int_a^b \phi(\nu)\left[D^{(\nu)} f(t)\right] d\nu = \int_a^b \left[\frac{\phi(\nu)}{\Gamma(1-\nu)} \int_0^t (t-u)^{-\nu} f'(u)\,du\right] d\nu\,,$$
where $\phi(\nu)$ is a weight function, used to represent mathematically the presence of multiple memory formalisms.
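A numeric illustration of the stated advantage (a sketch using a midpoint rule, which sidesteps the integrable kernel singularity): the Caputo half-derivative of a constant is exactly zero, while for $f(t) = t$ it approaches the known value $2\sqrt{t/\pi} \approx 1.1284$ at $t = 1$.

```python
from math import gamma

def caputo_half(f_prime, t, n=2000):
    """Caputo D^(1/2) f(t) = (1/G(1/2)) * integral of f'(tau) (t-tau)^(-1/2),
    midpoint rule to avoid evaluating the kernel at tau = t."""
    h = t / n
    s = sum(f_prime((i + 0.5) * h) * (t - (i + 0.5) * h) ** (-0.5)
            for i in range(n))
    return s * h / gamma(0.5)

t = 1.0
const = caputo_half(lambda tau: 0.0, t)    # f constant => f' = 0 => exactly 0
linear = caputo_half(lambda tau: 1.0, t)   # f(t) = t, exact value 2*sqrt(t/pi)
```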
In a 2015 paper, M. Caputo and M. Fabrizio presented a definition of fractional derivative with a non-singular kernel, for a function $f(t)$ of $C^1$, given by
$${}^{\text{CF}}_{a} D_t^{\alpha} f(t) = \frac{1}{1-\alpha} \int_a^t f'(\tau)\, e^{-\alpha\frac{t-\tau}{1-\alpha}}\, d\tau\,,$$
where $a < 0$ and $\alpha \in (0, 1]$.[19]
In 2016, Atangana and Baleanu suggested differential operators based on the generalized Mittag-Leffler function $E_\alpha$. The aim was to introduce fractional differential operators with a non-singular, nonlocal kernel. Their fractional differential operators are given below in the Riemann–Liouville sense and the Caputo sense, respectively. For a function $f(t)$ of $C^1$, the Caputo-sense derivative is[20][21]
$${}^{\text{ABC}}_{a} D_t^{\alpha} f(t) = \frac{\operatorname{AB}(\alpha)}{1-\alpha} \int_a^t f'(\tau)\, E_\alpha\!\left(-\alpha\,\frac{(t-\tau)^\alpha}{1-\alpha}\right) d\tau\,.$$
If the function is continuous, the Atangana–Baleanu derivative in the Riemann–Liouville sense is given by
$${}^{\text{ABR}}_{a} D_t^{\alpha} f(t) = \frac{\operatorname{AB}(\alpha)}{1-\alpha}\, \frac{d}{dt} \int_a^t f(\tau)\, E_\alpha\!\left(-\alpha\,\frac{(t-\tau)^\alpha}{1-\alpha}\right) d\tau\,.$$
The kernel used in the Atangana–Baleanu fractional derivative has some properties of a cumulative distribution function. For example, for all $\alpha \in (0, 1]$, the function $E_\alpha$ is increasing on the real line, converges to $0$ at $-\infty$, and $E_\alpha(0) = 1$. Therefore, the function $x \mapsto 1 - E_\alpha(-x^\alpha)$ is the cumulative distribution function of a probability measure on the positive real numbers. This distribution, and any of its multiples, is called a Mittag-Leffler distribution of order $\alpha$. All of these probability distributions are absolutely continuous. In particular, the Mittag-Leffler function has the special case $E_1$, the exponential function, so the Mittag-Leffler distribution of order $1$ is an exponential distribution. However, for $\alpha \in (0, 1)$, the Mittag-Leffler distributions are heavy-tailed, with Laplace transform
$$\mathbb{E}\left(e^{-\lambda X_\alpha}\right) = \frac{1}{1+\lambda^\alpha}\,.$$
This directly implies that, for $\alpha \in (0, 1)$, the expectation is infinite. In addition, these distributions are geometric stable distributions.
The Riesz derivative is defined as
$$\mathcal{F}\left\{\frac{\partial^\alpha u}{\partial|x|^\alpha}\right\}(k) = -|k|^\alpha\, \mathcal{F}\{u\}(k)\,,$$
where $\mathcal{F}$ denotes the Fourier transform.[22][23]
The conformable fractional derivative of a function $f$ of order $\alpha$ is given by
$$T_a(f)(t) = \lim_{\epsilon \to 0} \frac{f(t + \epsilon t^{1-\alpha}) - f(t)}{\epsilon}\,.$$
Unlike other definitions of the fractional derivative, the conformable fractional derivative obeys the product and quotient rules and has analogues of Rolle's theorem and the mean value theorem.[24][25] However, this fractional derivative produces significantly different results from the Riemann–Liouville and Caputo fractional derivatives. In 2020, Feng Gao and Chunmei Chi defined the improved Caputo-type conformable fractional derivative, which more closely approximates the behavior of the Caputo fractional derivative:[25]
$${}_a^C \widetilde{T}(f)(t) = \lim_{\epsilon \to 0}\left[(1-\alpha)\,(f(t) - f(a)) + \alpha\,\frac{f(t + \epsilon (t-a)^{1-\alpha}) - f(t)}{\epsilon}\right],$$
where $a$ and $t$ are real numbers and $a < t$. They also defined the improved Riemann–Liouville-type conformable fractional derivative to similarly approximate the Riemann–Liouville fractional derivative:[25]
$${}_a^{RL} \widetilde{T}(f)(t) = \lim_{\epsilon \to 0}\left[(1-\alpha)\,f(t) + \alpha\,\frac{f(t + \epsilon (t-a)^{1-\alpha}) - f(t)}{\epsilon}\right],$$
where $a$ and $t$ are real numbers and $a < t$. Both improved conformable fractional derivatives have analogues of Rolle's theorem and the interior extremum theorem.[26]
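For differentiable $f$ the conformable derivative reduces to $t^{1-\alpha} f'(t)$; a quick numerical check of the limit definition (a sketch, with a small finite $\epsilon$ standing in for the limit):

```python
# Approximate the conformable derivative T(f)(t) by its limit definition.
def conformable(f, t, alpha, eps=1e-6):
    return (f(t + eps * t ** (1 - alpha)) - f(t)) / eps

# f(t) = t^2 at t = 4, alpha = 1/2: exact value t^(1-alpha) * f'(t) = 2 t^(3/2)
t, alpha = 4.0, 0.5
approx = conformable(lambda x: x * x, t, alpha)
exact = t ** (1 - alpha) * 2 * t   # = 16 at t = 4
```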
Classical fractional derivatives include:
New fractional derivatives include:
The Coimbra derivative is used for physical modeling.[35] A number of applications in both mechanics and optics can be found in works by Coimbra and collaborators,[36][37][38][39][40][41][42] as well as additional applications to physical problems and numerical implementations studied in a number of works by other authors.[43][44][45][46]
where the lower limit $a$ can be taken as either $0^-$ or $-\infty$, as long as $f(t)$ is identically zero from $-\infty$ to $0^-$. Note that this operator returns the correct fractional derivatives for all values of $t$ and can be applied either to the dependent function itself, $f(t)$, with a variable order of the form $q(f(t))$, or to the independent variable, with a variable order of the form $q(t)$.[1]
The Coimbra derivative can be generalized to any order,[47] leading to the Coimbra Generalized Order Differintegration Operator (GODO),[48]
where $m$ is an integer larger than the largest value of $q(t)$ over all values of $t$. Note that the second (summation) term on the right side of the definition above can be expressed as
so as to keep the denominator on the positive branch of the Gamma ($\Gamma$) function, and for ease of numerical calculation.
The $a$-th derivative of a function $f$ at a point $x$ is a local property only when $a$ is an integer; this is not the case for non-integer power derivatives. In other words, a non-integer fractional derivative of $f$ at $x = c$ depends on all values of $f$, even those far away from $c$. Therefore, it is expected that the fractional derivative operation involves some sort of boundary conditions, involving information on the function further out.[49]
The fractional derivative of a function of order $a$ is nowadays often defined by means of the Fourier or Mellin integral transforms.[citation needed]
The Erdélyi–Kober operator is an integral operator introduced by Arthur Erdélyi (1940)[50] and Hermann Kober (1940)[51] and is given by
$$\frac{x^{-\nu-\alpha+1}}{\Gamma(\alpha)} \int_0^x (t-x)^{\alpha-1} t^{-\alpha-\nu} f(t)\,dt\,,$$
which generalizes the Riemann–Liouville fractional integral and the Weyl integral.
In the context of functional analysis, functions $f(D)$ more general than powers are studied in the functional calculus of spectral theory. The theory of pseudo-differential operators also allows one to consider powers of $D$. The operators arising are examples of singular integral operators, and the generalisation of the classical theory to higher dimensions is called the theory of Riesz potentials. So there are a number of contemporary theories available within which fractional calculus can be discussed. See also the Erdélyi–Kober operator, important in special function theory (Kober 1940), (Erdélyi 1950–1951).
As described by Wheatcraft and Meerschaert (2008),[52] a fractional conservation of mass equation is needed to model fluid flow when the control volume is not large enough compared to the scale of heterogeneity and when the flux within the control volume is non-linear. In the referenced paper, the fractional conservation of mass equation for fluid flow is
$$-\rho\left(\nabla^\alpha \cdot \vec{u}\right) = \Gamma(\alpha+1)\,\Delta x^{1-\alpha}\,\rho\left(\beta_s + \phi\beta_w\right)\frac{\partial p}{\partial t}\,.$$
When studying the redox behavior of a substrate in solution, a voltage is applied at an electrode surface to force electron transfer between electrode and substrate. The resulting electron transfer is measured as a current. The current depends upon the concentration of substrate at the electrode surface. As substrate is consumed, fresh substrate diffuses to the electrode as described by Fick's laws of diffusion. Taking the Laplace transform of Fick's second law yields an ordinary second-order differential equation (here in dimensionless form):
$$\frac{d^2}{dx^2} C(x,s) = s\,C(x,s)\,,$$
whose solution $C(x,s)$ contains a one-half power dependence on $s$. Taking the derivative of $C(x,s)$ and then the inverse Laplace transform yields the relationship
$$\frac{d}{dx} C(x,t) = \frac{d^{1/2}}{dt^{1/2}} C(x,t)\,,$$
which relates the concentration of substrate at the electrode surface to the current.[53]This relationship is applied in electrochemical kinetics to elucidate mechanistic behavior. For example, it has been used to study the rate of dimerization of substrates upon electrochemical reduction.[54]
In 2013–2014, Atangana et al. described some groundwater flow problems using the concept of a derivative of fractional order.[55][56] In these works, the classical Darcy law is generalized by regarding the water flow as a function of a non-integer order derivative of the piezometric head. This generalized law and the law of conservation of mass are then used to derive a new equation for groundwater flow.
This equation[clarification needed] has been shown to be useful for modeling contaminant flow in heterogeneous porous media.[57][58][59]
Atangana and Kilicman extended the fractional advection dispersion equation to a variable-order equation. In their work, the hydrodynamic dispersion equation was generalized using the concept of a variational order derivative. The modified equation was numerically solved via the Crank–Nicolson method. The stability and convergence in numerical simulations showed that the modified equation is more reliable in predicting the movement of pollution in deformable aquifers than equations with constant fractional and integer derivatives.[60]
Anomalous diffusion processes in complex media can be well characterized by using fractional-order diffusion equation models.[61][62] The time derivative term corresponds to long-time heavy-tail decay and the spatial derivative to diffusion nonlocality. The time-space fractional diffusion governing equation can be written as
$$\frac{\partial^\alpha u}{\partial t^\alpha} = -K(-\Delta)^\beta u\,.$$
A simple extension of the fractional derivative is the variable-order fractional derivative: $\alpha$ and $\beta$ are changed into $\alpha(x,t)$ and $\beta(x,t)$. Its applications in anomalous diffusion modeling can be found in the references.[60][63][64]
Fractional derivatives are used to model viscoelastic damping in certain types of materials such as polymers.[12]
Generalizing PID controllers to use fractional orders can increase their degrees of freedom. The new equation relating the control variable $u(t)$ to a measured error value $e(t)$ can be written as
$$u(t) = K_p\, e(t) + K_i\, D_t^{-\alpha} e(t) + K_d\, D_t^{\beta} e(t)\,,$$
where $\alpha$ and $\beta$ are positive fractional orders and $K_p$, $K_i$, and $K_d$, all non-negative, denote the coefficients for the proportional, integral, and derivative terms, respectively (sometimes denoted P, I, and D).[65]
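One common way to realize such a controller digitally is the Grünwald–Letnikov approximation of $D^{\pm\alpha}$ applied to the sampled error history. A sketch; the function names (`gl_weights`, `fopid`) and the discrete realization are illustrative assumptions, not taken from the cited reference:

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k C(alpha, k), by recurrence."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def frac_diff_gl(samples, alpha, h):
    """D^alpha of a sampled signal at the last sample; alpha < 0 integrates."""
    w = gl_weights(alpha, len(samples))
    return sum(wk * samples[-1 - k] for k, wk in enumerate(w)) / h ** alpha

# hypothetical fractional control law u = Kp e + Ki D^-alpha e + Kd D^beta e
def fopid(err_hist, h, Kp, Ki, Kd, alpha, beta):
    return (Kp * err_hist[-1]
            + Ki * frac_diff_gl(err_hist, -alpha, h)
            + Kd * frac_diff_gl(err_hist, beta, h))
```

Sanity checks: with `alpha = 1` the GL weights reduce to a first difference, and with `alpha = -1` to a rectangle-rule running sum, so the classical PID is recovered at integer orders.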
The propagation of acoustic waves in complex media, such as biological tissue, commonly implies attenuation obeying a frequency power law. This kind of phenomenon may be described using a causal wave equation which incorporates fractional time derivatives:
$$\nabla^2 u - \frac{1}{c_0^2}\frac{\partial^2 u}{\partial t^2} + \tau_\sigma^\alpha \frac{\partial^\alpha}{\partial t^\alpha}\nabla^2 u - \frac{\tau_\epsilon^\beta}{c_0^2}\frac{\partial^{\beta+2} u}{\partial t^{\beta+2}} = 0\,.$$
See also Holm & Näsholm (2011)[66] and the references therein. Such models are linked to the commonly recognized hypothesis that multiple relaxation phenomena give rise to the attenuation measured in complex media. This link is further described in Näsholm & Holm (2011b)[67] and in the survey paper,[68] as well as in the acoustic attenuation article. See Holm & Näsholm (2013)[69] for a paper comparing fractional wave equations that model power-law attenuation. A book on power-law attenuation covers the topic in more detail.[70]
Pandey and Holm gave a physical meaning to fractional differential equations by deriving them from physical principles and interpreting the fractional order in terms of the parameters of the acoustical medium, for example in fluid-saturated granular unconsolidated marine sediments.[71] Notably, Pandey and Holm derived Lomnitz's law in seismology and Nutting's law in non-Newtonian rheology using the framework of fractional calculus.[72] Nutting's law was used to model wave propagation in marine sediments using fractional derivatives.[71]
The fractional Schrödinger equation, a fundamental equation of fractional quantum mechanics, has the following form:[73][74]
$$i\hbar\,\frac{\partial\psi(\mathbf{r},t)}{\partial t} = D_\alpha\left(-\hbar^2\Delta\right)^{\alpha/2}\psi(\mathbf{r},t) + V(\mathbf{r},t)\,\psi(\mathbf{r},t)\,,$$
where the solution of the equation is the wavefunction $\psi(\mathbf{r},t)$, the quantum mechanical probability amplitude for the particle to have a given position vector $\mathbf{r}$ at any given time $t$, and $\hbar$ is the reduced Planck constant. The potential energy function $V(\mathbf{r},t)$ depends on the system.
Further, $\Delta = \frac{\partial^2}{\partial\mathbf{r}^2}$ is the Laplace operator, and $D_\alpha$ is a scale constant with physical dimension $[D_\alpha] = \mathrm{J}^{1-\alpha}\cdot\mathrm{m}^{\alpha}\cdot\mathrm{s}^{-\alpha} = \mathrm{kg}^{1-\alpha}\cdot\mathrm{m}^{2-\alpha}\cdot\mathrm{s}^{\alpha-2}$ (at $\alpha = 2$, $D_2 = \frac{1}{2m}$ for a particle of mass $m$), and the operator $(-\hbar^2\Delta)^{\alpha/2}$ is the 3-dimensional fractional quantum Riesz derivative defined by
$$(-\hbar^2\Delta)^{\alpha/2}\psi(\mathbf{r},t) = \frac{1}{(2\pi\hbar)^3}\int d^3p\; e^{\frac{i}{\hbar}\mathbf{p}\cdot\mathbf{r}}\, |\mathbf{p}|^\alpha\, \varphi(\mathbf{p},t)\,.$$
The indexαin the fractional Schrödinger equation is the Lévy index,1 <α≤ 2.
As a natural generalization of thefractional Schrödinger equation, the variable-order fractional Schrödinger equation has been exploited to study fractional quantum phenomena:[75]iℏ∂ψα(r)(r,t)∂tα(r)=(−ℏ2Δ)β(t)2ψ(r,t)+V(r,t)ψ(r,t),{\displaystyle i\hbar {\frac {\partial \psi ^{\alpha (\mathbf {r} )}(\mathbf {r} ,t)}{\partial t^{\alpha (\mathbf {r} )}}}=\left(-\hbar ^{2}\Delta \right)^{\frac {\beta (t)}{2}}\psi (\mathbf {r} ,t)+V(\mathbf {r} ,t)\psi (\mathbf {r} ,t),}
whereΔ=∂2∂r2{\textstyle \Delta ={\frac {\partial ^{2}}{\partial \mathbf {r} ^{2}}}}is theLaplace operatorand the operator(−ħ2Δ)β(t)/2is the variable-order fractional quantum Riesz derivative.
|
https://en.wikipedia.org/wiki/Fractional_calculus
|
In data mining and machine learning, the k q-flats algorithm[1][2] is an iterative method which aims to partition m observations into k clusters, where each cluster is close to a q-flat, with q a given integer.
It is a generalization of the k-means algorithm: in the k-means algorithm, clusters are formed so that each cluster is close to one point, which is a 0-flat. The k q-flats algorithm can give better clustering results than the k-means algorithm for some data sets.
Given a set A of m observations (a1,a2,…,am){\displaystyle (a_{1},a_{2},\dots ,a_{m})} where each observation ai{\displaystyle a_{i}} is an n-dimensional real vector, the k q-flats algorithm aims to partition the m observation points by generating k q-flats that minimize the sum of the squared distances from each observation to its nearest q-flat.
A q-flat is a subset of Rn{\displaystyle \mathbb {R} ^{n}} that is congruent to Rq{\displaystyle \mathbb {R} ^{q}}. For example, a 0-flat is a point; a 1-flat is a line; a 2-flat is a plane; an (n−1)-flat is a hyperplane. A q-flat can be characterized by the solution set of a linear system of equations: F={x∣x∈Rn,W′x=γ}{\displaystyle F=\left\{x\mid x\in \mathbb {R} ^{n},W'x=\gamma \right\}}, where W∈Rn×(n−q){\displaystyle W\in \mathbb {R} ^{n\times (n-q)}}, γ∈R1×(n−q){\displaystyle \gamma \in \mathbb {R} ^{1\times (n-q)}}.
Denote a partition of {1,2,…,m}{\displaystyle \{1,2,\dots ,m\}} as S=(S1,S2,…,Sk){\displaystyle S=(S_{1},S_{2},\dots ,S_{k})}. The problem can be formulated as
where PFi(aj){\displaystyle P_{F_{i}}(a_{j})} is the projection of aj{\displaystyle a_{j}} onto Fi{\displaystyle F_{i}}. Note that ‖aj−PFi(aj)‖=dist(aj,Fi){\displaystyle \|a_{j}-P_{F_{i}}(a_{j})\|=\operatorname {dist} (a_{j},F_{i})} is the distance from aj{\displaystyle a_{j}} to Fi{\displaystyle F_{i}}.
The algorithm is similar to the k-means algorithm (i.e. Lloyd's algorithm) in that it alternates between cluster assignment and cluster update. Specifically, the algorithm starts with an initial set of q-flats Fl(0)={x∈Rn∣(Wl(0))′x=γl(0)},l=1,…,k{\displaystyle F_{l}^{(0)}=\left\{x\in R^{n}\mid \left(W_{l}^{(0)}\right)'x=\gamma _{l}^{(0)}\right\},l=1,\dots ,k}, and proceeds by alternating between the following two steps:
Stop whenever the assignments no longer change.
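The alternation of the two steps can be sketched as follows. This is a minimal NumPy illustration, not the reference implementation: the function and variable names are assumptions, and each flat is fit to its cluster via the singular value decomposition (the flat passes through the cluster mean, with W holding the normal directions).

```python
import numpy as np

def k_q_flats(points, k, q, n_iter=50, seed=0):
    """Alternate cluster assignment and cluster update for k q-flats.
    Each flat is stored as (W, gamma) with orthonormal W, so the squared
    distance from a point a to a flat is ||W'a - gamma||^2."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(points))
    for _ in range(n_iter):
        # cluster update: best-fit q-flat for each cluster via SVD
        flats = []
        for j in range(k):
            cluster = points[labels == j]
            if len(cluster) == 0:          # re-seed an empty cluster
                cluster = points[rng.integers(0, len(points), size=q + 1)]
            mean = cluster.mean(axis=0)
            _, _, Vt = np.linalg.svd(cluster - mean, full_matrices=True)
            W = Vt[q:].T                   # normal directions of the flat
            flats.append((W, W.T @ mean))
        # cluster assignment: nearest flat by ||W'a - gamma||^2
        dists = np.stack([((points @ W - g) ** 2).sum(axis=1)
                          for W, g in flats], axis=1)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break                          # assignments no longer change
        labels = new_labels
    return labels

# Toy data: points near two lines in R^2
t = np.linspace(-1, 1, 20)[:, None]
pts = np.vstack([np.hstack([t, t]), np.hstack([t, -t + 5])])
labels = k_q_flats(pts, k=2, q=1)
```

As with k-means, the result depends on the initial assignment, so in practice several random restarts are typically used.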
The cluster assignment step uses the following fact: given a q-flat Fl={x∣W′x=γ}{\displaystyle F_{l}=\{x\mid W'x=\gamma \}} and a vector a, where W′W=I{\displaystyle W'W=I}, the distance from a to the q-flat Fl{\displaystyle F_{l}} is dist(a,Fl)=minx:W′x=γ‖x−a‖2=‖W(W′W)−1(W′a−γ)‖2=‖W′a−γ‖2.{\textstyle \operatorname {dist} (a,F_{l})=\min _{x:W'x=\gamma }\left\|x-a\right\|^{2}=\left\|W(W'W)^{-1}(W'a-\gamma )\right\|^{2}=\left\|W'a-\gamma \right\|^{2}.}
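As a concrete illustration of this fact, the squared distance can be computed directly from W and γ. A minimal sketch using NumPy; the example flat (the x-axis in R³, a 1-flat) and the function name are illustrative assumptions, not from the source.

```python
import numpy as np

# A hypothetical 1-flat (line) in R^3: the x-axis, i.e. {x : y = 0, z = 0}.
W = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])   # orthonormal columns spanning the normal space
gamma = np.zeros(2)

def dist_sq_to_flat(a, W, gamma):
    """Squared distance ||W'a - gamma||^2 (valid when W'W = I)."""
    r = W.T @ a - gamma
    return float(r @ r)

a = np.array([5.0, 3.0, 4.0])
print(dist_sq_to_flat(a, W, gamma))  # 3^2 + 4^2 = 25.0
```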
The key part of this algorithm is the cluster update: given m points, find a q-flat that minimizes the sum of squared distances from each point to the q-flat. Mathematically, this problem is: given A∈Rm×n,{\displaystyle A\in R^{m\times n},} solve the quadratic optimization problem
whereA∈Rm×n{\displaystyle A\in \mathbb {R} ^{m\times n}}is given, ande=(1,…,1)′∈Rm×1{\displaystyle e=(1,\dots ,1)'\in \mathbb {R} ^{m\times 1}}.
The problem can be solved using the method of Lagrange multipliers, and the solution is as given in the cluster update step.
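A standard way to realize this solution numerically is via the singular value decomposition of the centered data: the minimizing q-flat passes through the mean and is spanned by the top q principal directions, so W collects the remaining n − q directions. A minimal sketch; the helper name and the toy data are illustrative assumptions.

```python
import numpy as np

def fit_q_flat(points, q):
    """Best-fit q-flat for points (m x n): returns (W, gamma) with W'W = I,
    where the flat is {x : W'x = gamma}. The flat passes through the mean
    and is spanned by the top-q principal directions, so W holds the
    remaining n - q right singular vectors (the normal directions)."""
    mean = points.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(points - mean, full_matrices=True)
    W = Vt[q:].T              # n x (n - q) orthonormal normal basis
    gamma = W.T @ mean
    return W, gamma

# Hypothetical example: noisy points near the line y = x in R^2 (q = 1)
rng = np.random.default_rng(0)
t = rng.normal(size=(50, 1))
pts = np.hstack([t, t]) + 0.01 * rng.normal(size=(50, 2))
W, gamma = fit_q_flat(pts, q=1)
residual = pts @ W - gamma    # small: the points lie close to the flat
```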
It can be shown that the algorithm terminates in a finite number of iterations (no more than the total number of possible assignments, which is bounded by km{\displaystyle k^{m}}). In addition, the algorithm terminates at a point where the overall objective cannot be decreased either by a different assignment or by defining new cluster planes for these clusters (such a point is called "locally optimal" in the references).
This convergence result is a consequence of the fact thatproblem (P2)can be solved exactly.
The same convergence result holds fork-means algorithm because the cluster update problem can be solved exactly.
The k q-flats algorithm is a generalization of the k-means algorithm; in fact, the k-means algorithm is the k 0-flats algorithm, since a point is a 0-flat. Despite this connection, the two should be used in different scenarios: the k q-flats algorithm is suited to data that lie in a few low-dimensional subspaces, whereas the k-means algorithm is desirable when the clusters are of the ambient dimension. For example, if all observations lie on two lines, the k q-flats algorithm with q=1{\displaystyle q=1} may be used; if the observations are two Gaussian clouds, the k-means algorithm may be used.
Natural signals lie in a high-dimensional space. For example, a 1024-by-1024 image has dimension about 10^6, which is far too high for most signal processing algorithms. One way to escape the high dimensionality is to find a set of basis functions such that the high-dimensional signal can be represented by only a few of them. In other words, the coefficients of the signal representation lie in a low-dimensional space, to which it is easier to apply signal processing algorithms. In the literature, the wavelet transform is usually used in image processing, and the Fourier transform is usually used in audio processing. The set of basis functions is usually called a dictionary.
However, it is not clear what the best dictionary is for a given signal data set. One popular approach is to learn a dictionary from the data using sparse dictionary learning, which seeks a dictionary under which the signals can be sparsely represented. The optimization problem can be written as follows.
where
The idea of the k q-flats algorithm is similar in nature to sparse dictionary learning. If we restrict each q-flat to a q-dimensional subspace, then the k q-flats algorithm is simply finding the closest q-dimensional subspace to a given signal. Sparse dictionary learning does the same thing, except for an additional constraint on the sparsity of the representation. Mathematically, it is possible to show that the k q-flats algorithm has the form of sparse dictionary learning with an additional block structure on R.
Let Bk{\displaystyle B_{k}} be a d×q{\displaystyle d\times q} matrix whose columns are a basis of the k-th flat. Then the projection of the signal x onto the k-th flat is Bkrk{\displaystyle B_{k}r_{k}}, where rk{\displaystyle r_{k}} is a q-dimensional coefficient vector. Let B=[B1,⋯,BK]{\displaystyle B=[B_{1},\cdots ,B_{K}]} denote the concatenation of the bases of the K flats; it is easy to show that the k q-flats algorithm is equivalent to the following.
The block structure of R refers to the fact that each signal is assigned to exactly one flat. Comparing the two formulations, k q-flats is the same as sparse dictionary modeling when l=K×q{\displaystyle l=K\times q} and with an additional block structure on R. Users may refer to Szlam's paper[3] for more discussion of the relationship between the two concepts.
Classification is a procedure that classifies an input signal into different classes. One example is to classify an email into spam or non-spam classes. Classification algorithms usually require a supervised learning stage. In the supervised learning stage, training data for each class is used for the algorithm to learn the characteristics of the class. In the classification stage, a new observation is classified into a class by using the characteristics that were already trained.
The k q-flats algorithm can be used for classification. Suppose there are m classes in total. For each class, k flats are trained a priori on a training data set. When a new observation arrives, find the flat that is closest to it; the observation is then assigned to the class of that closest flat.
However, the classification performance can be further improved if we impose some structure on the flats. One possible choice is to require that flats from different classes be sufficiently far apart. Some researchers[4] have used this idea to develop a discriminative k q-flats algorithm.
Source:[3]
In the k q-flats algorithm, ‖x−PF(x)‖2{\displaystyle \|x-P_{F}(x)\|^{2}} is used to measure the representation error, where PF(x){\displaystyle P_{F}(x)} denotes the projection of x onto the flat F. If the data lie in a q-dimensional flat, then a single q-flat can represent the data very well. On the contrary, if the data lie in a very high-dimensional space but near a common center, then the k-means algorithm is a better way than the k q-flats algorithm to represent the data, because the k-means algorithm uses ‖x−xc‖2{\displaystyle \|x-x_{c}\|^{2}} to measure the error, where xc{\displaystyle x_{c}} denotes the center. K-metrics is a generalization that uses both the idea of a flat and that of a mean. In k-metrics, the error is measured by the following Mahalanobis metric.
‖x−y‖A2=(x−y)TA(x−y){\displaystyle \left\|x-y\right\|_{A}^{2}=(x-y)^{\mathsf {T}}A(x-y)}
whereAis a positive semi-definite matrix.
If A is the identity matrix, then the Mahalanobis metric is exactly the error measure used in k-means. If A is not the identity matrix, then ‖x−y‖A2{\displaystyle \|x-y\|_{A}^{2}} will favor certain directions, as the k q-flats error measure does.
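A small sketch of the k-metrics error measure, assuming NumPy; the matrices and points below are illustrative, not from the source.

```python
import numpy as np

def mahalanobis_sq(x, y, A):
    """Squared Mahalanobis error (x - y)^T A (x - y), for positive semi-definite A."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(d @ A @ d)

x, y = np.array([1.0, 2.0]), np.array([0.0, 0.0])
print(mahalanobis_sq(x, y, np.eye(2)))       # identity A: ordinary squared distance, 5.0
A = np.array([[2.0, 0.0], [0.0, 0.5]])       # weights the first coordinate more heavily
print(mahalanobis_sq(x, y, A))               # 2*1 + 0.5*4 = 4.0
```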
|
https://en.wikipedia.org/wiki/K_q-flats
|
Jungle computing is a form of high-performance computing that distributes computational work across cluster, grid and cloud computing.[1][2]
The increasing complexity of the high-performance computing environment has provided a range of choices besides traditional supercomputers and clusters. Scientists can now use grid and cloud infrastructures, in a variety of combinations along with traditional supercomputers, all connected via fast networks. The emergence of many-core technologies such as GPUs, as well as supercomputers-on-chip within these environments, has added to the complexity. Thus, high-performance computing can now use multiple diverse platforms and systems simultaneously, giving rise to the term "computing jungle".[1]
|
https://en.wikipedia.org/wiki/Jungle_computing
|
In mathematics, the Pythagorean theorem or Pythagoras' theorem is a fundamental relation in Euclidean geometry between the three sides of a right triangle. It states that the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares on the other two sides.
The theorem can be written as an equation relating the lengths of the sides a, b and the hypotenuse c, sometimes called the Pythagorean equation:[1]
The theorem is named for the Greek philosopher Pythagoras, born around 570 BC. The theorem has been proved numerous times by many different methods – possibly the most for any mathematical theorem. The proofs are diverse, including both geometric proofs and algebraic proofs, with some dating back thousands of years.
WhenEuclidean spaceis represented by aCartesian coordinate systeminanalytic geometry,Euclidean distancesatisfies the Pythagorean relation: the squared distance between two points equals the sum of squares of the difference in each coordinate between the points.
The theorem can begeneralizedin various ways: tohigher-dimensional spaces, tospaces that are not Euclidean, to objects that are not right triangles, and to objects that are not triangles at all butn-dimensionalsolids.
In one rearrangement proof, two squares are used whose sides have a measure of a+b{\displaystyle a+b} and which contain four right triangles whose sides are a, b and c, with the hypotenuse being c. In the square on the right side, the triangles are placed such that the corners of the square correspond to the corners of the right angle in the triangles, forming a square in the center whose sides are length c. Each outer square has an area of (a+b)2{\displaystyle (a+b)^{2}} as well as 2ab+c2{\displaystyle 2ab+c^{2}}, with 2ab{\displaystyle 2ab} representing the total area of the four triangles. Within the big square on the left side, the four triangles are moved to form two similar rectangles with sides of length a and b. These rectangles in their new position delineate two new squares: one of side length a in the bottom-left corner, and another of side length b in the top-right corner. In this new position, the left side now has a square of area (a+b)2{\displaystyle (a+b)^{2}} as well as 2ab+a2+b2{\displaystyle 2ab+a^{2}+b^{2}}. Since both squares have the area (a+b)2{\displaystyle (a+b)^{2}}, the two other expressions for the areas must also be equal: 2ab+c2{\displaystyle 2ab+c^{2}} = 2ab+a2+b2{\displaystyle 2ab+a^{2}+b^{2}}. With the area of the four triangles removed from both sides of the equation, what remains is a2+b2=c2.{\displaystyle a^{2}+b^{2}=c^{2}.}[2]
In another proof, the rectangles in the second box can instead be placed such that both have one corner that corresponds to consecutive corners of the square. In this way they again form two squares, this time in consecutive corners, with areas a2{\displaystyle a^{2}} and b2{\displaystyle b^{2}}, which again leads to a second square with the area 2ab+a2+b2{\displaystyle 2ab+a^{2}+b^{2}}.
English mathematicianSir Thomas Heathgives this proof in his commentary on Proposition I.47 inEuclid'sElements, and mentions the proposals of German mathematiciansCarl Anton BretschneiderandHermann Hankelthat Pythagoras may have known this proof. Heath himself favors a different proposal for a Pythagorean proof, but acknowledges from the outset of his discussion "that the Greek literature which we possess belonging to the first five centuries after Pythagoras contains no statement specifying this or any other particular great geometric discovery to him."[3]Recent scholarship has cast increasing doubt on any sort of role for Pythagoras as a creator of mathematics, although debate about this continues.[4]
The theorem can be proved algebraically using four copies of the same triangle arranged symmetrically around a square with sidec, as shown in the lower part of the diagram.[5]This results in a larger square, with sidea+band area(a+b)2. The four triangles and the square sidecmust have the same area as the larger square,
giving
A similar proof uses four copies of a right triangle with sidesa,bandc, arranged inside a square with sidecas in the top half of the diagram.[6]The triangles are similar with area12ab{\displaystyle {\tfrac {1}{2}}ab}, while the small square has sideb−aand area(b−a)2. The area of the large square is therefore
But this is a square with sidecand areac2, so
This theorem may have more known proofs than any other (the law of quadratic reciprocity being another contender for that distinction); the book The Pythagorean Proposition contains 370 proofs.[7]
This proof is based on theproportionalityof the sides of threesimilartriangles, that is, upon the fact that theratioof any two corresponding sides of similar triangles is the same regardless of the size of the triangles.
LetABCrepresent a right triangle, with theright anglelocated atC, as shown on the figure. Draw thealtitudefrom pointC, and callHits intersection with the sideAB. PointHdivides the length of the hypotenusecinto partsdande. The new triangle,ACH,issimilarto triangleABC, because they both have a right angle (by definition of the altitude), and they share the angle atA, meaning that the third angle will be the same in both triangles as well, marked asθin the figure. By a similar reasoning, the triangleCBHis also similar toABC. The proof of similarity of the triangles requires thetriangle postulate: The sum of the angles in a triangle is two right angles, and is equivalent to theparallel postulate. Similarity of the triangles leads to the equality of ratios of corresponding sides:
The first result equates thecosinesof the anglesθ, whereas the second result equates theirsines.
These ratios can be written as
Summing these two equalities results in
which, after simplification, demonstrates the Pythagorean theorem:
The role of this proof in history is the subject of much speculation. The underlying question is why Euclid did not use this proof, but invented another. Oneconjectureis that the proof by similar triangles involved a theory of proportions, a topic not discussed until later in theElements, and that the theory of proportions needed further development at that time.[8]
Albert Einsteingave a proof by dissection in which the pieces do not need to be moved.[9]Instead of using a square on the hypotenuse and two squares on the legs, one can use any other shape that includes the hypotenuse, and twosimilarshapes that each include one of two legs instead of the hypotenuse (seeSimilar figures on the three sides). In Einstein's proof, the shape that includes the hypotenuse is the right triangle itself. The dissection consists of dropping a perpendicular from the vertex of the right angle of the triangle to the hypotenuse, thus splitting the whole triangle into two parts. Those two parts have the same shape as the original right triangle, and have the legs of the original triangle as their hypotenuses, and the sum of their areas is that of the original triangle. Because the ratio of the area of a right triangle to the square of its hypotenuse is the same for similar triangles, the relationship between the areas of the three triangles holds for the squares of the sides of the large triangle as well.
In outline, here is how the proof inEuclid'sElementsproceeds. The large square is divided into a left and right rectangle. A triangle is constructed that has half the area of the left rectangle. Then another triangle is constructed that has half the area of the square on the left-most side. These two triangles are shown to becongruent, proving this square has the same area as the left rectangle. This argument is followed by a similar version for the right rectangle and the remaining square. Putting the two rectangles together to reform the square on the hypotenuse, its area is the same as the sum of the area of the other two squares. The details follow.
LetA,B,Cbe theverticesof a right triangle, with a right angle atA. Drop a perpendicular fromAto the side opposite the hypotenuse in the square on the hypotenuse. That line divides the square on the hypotenuse into two rectangles, each having the same area as one of the two squares on the legs.
For the formal proof, we require four elementarylemmata:
Next, each top square is related to a triangle congruent with another triangle related in turn to one of two rectangles making up the lower square.[10]
The proof is as follows:
This proof, which appears in Euclid'sElementsas that of Proposition 47 in Book 1, demonstrates that the area of the square on the hypotenuse is the sum of the areas of the other two squares.[12][13]This is quite distinct from the proof by similarity of triangles, which is conjectured to be the proof that Pythagoras used.[14][15]
Another by rearrangement is given by the middle animation. A large square is formed with areac2, from four identical right triangles with sidesa,bandc, fitted around a small central square. Then two rectangles are formed with sidesaandbby moving the triangles. Combining the smaller square with these rectangles produces two squares of areasa2andb2, which must have the same area as the initial large square.[16]
The third, rightmost image also gives a proof. The upper two squares are divided as shown by the blue and green shading, into pieces that when rearranged can be made to fit in the lower square on the hypotenuse – or conversely the large square can be divided as shown into pieces that fill the other two. This way of cutting one figure into pieces and rearranging them to get another figure is calleddissection. This shows the area of the large square equals that of the two smaller ones.[17]
As shown in the accompanying animation, area-preservingshear mappingsand translations can transform the squares on the sides adjacent to the right-angle onto the square on the hypotenuse, together covering it exactly.[18]Each shear leaves the base and height unchanged, thus leaving the area unchanged too. The translations also leave the area unchanged, as they do not alter the shapes at all. Each square is first sheared into a parallelogram, and then into a rectangle which can be translated onto one section of the square on the hypotenuse.
A relatedproof by U.S. President James A. Garfieldwas published before he was elected president; while he was aU.S. Representative.[19][20][21]Instead of a square it uses atrapezoid, which can be constructed from the square in the second of the above proofs by bisecting along a diagonal of the inner square, to give the trapezoid as shown in the diagram. Thearea of the trapezoidcan be calculated to be half the area of the square, that is
The inner square is similarly halved, and there are only two triangles so the proof proceeds as above except for a factor of12{\displaystyle {\frac {1}{2}}}, which is removed by multiplying by two to give the result.
One can arrive at the Pythagorean theorem by studying how changes in a side produce a change in the hypotenuse and employingcalculus.[22][23][24]
The triangleABCis a right triangle, as shown in the upper part of the diagram, withBCthe hypotenuse. At the same time the triangle lengths are measured as shown, with the hypotenuse of lengthy, the sideACof lengthxand the sideABof lengtha, as seen in the lower diagram part.
Ifxis increased by a small amountdxby extending the sideACslightly toD, thenyalso increases bydy. These form two sides of a triangle,CDE, which (withEchosen soCEis perpendicular to the hypotenuse) is a right triangle approximately similar toABC. Therefore, the ratios of their sides must be the same, that is:
This can be rewritten asydy=xdx{\displaystyle y\,dy=x\,dx}, which is adifferential equationthat can be solved by direct integration:
giving
The constant can be deduced fromx= 0,y=ato give the equation
This is more of an intuitive proof than a formal one: it can be made more rigorous if proper limits are used in place ofdxanddy.
Theconverseof the theorem is also true:[25]
Given a triangle with sides of lengtha,b, andc, ifa2+b2=c2,then the angle between sidesaandbis aright angle.
For any three positivereal numbersa,b, andcsuch thata2+b2=c2, there exists a triangle with sidesa,bandcas a consequence of theconverse of the triangle inequality.
This converse appears in Euclid'sElements(Book I, Proposition 48): "If in a triangle the square on one of the sides equals the sum of the squares on the remaining two sides of the triangle, then the angle contained by the remaining two sides of the triangle is right."[26]
It can be proved using thelaw of cosinesor as follows:
LetABCbe a triangle with side lengthsa,b, andc, witha2+b2=c2.Construct a second triangle with sides of lengthaandbcontaining a right angle. By the Pythagorean theorem, it follows that the hypotenuse of this triangle has lengthc=√a2+b2, the same as the hypotenuse of the first triangle. Since both triangles' sides are the same lengthsa,bandc, the triangles arecongruentand must have the same angles. Therefore, the angle between the side of lengthsaandbin the original triangle is a right angle.
The above proof of the converse makes use of the Pythagorean theorem itself. The converse can also be proved without assuming the Pythagorean theorem.[27][28]
A corollary of the Pythagorean theorem's converse is a simple means of determining whether a triangle is right, obtuse, or acute, as follows. Let c be chosen to be the longest of the three sides and a + b > c (otherwise there is no triangle according to the triangle inequality). The following statements apply:[29] if a2 + b2 = c2, then the triangle is right; if a2 + b2 > c2, then the triangle is acute; and if a2 + b2 < c2, then the triangle is obtuse.
Edsger W. Dijkstra has stated this proposition about acute, right, and obtuse triangles in this language: sgn(α + β − γ) = sgn(a2 + b2 − c2),
whereαis the angle opposite to sidea,βis the angle opposite to sideb,γis the angle opposite to sidec, and sgn is thesign function.[30]
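The corollary above translates directly into a classification routine: sort the sides so c is the longest, then compare a2 + b2 with c2. A minimal sketch (the function name is illustrative):

```python
import math

def triangle_type(a, b, c):
    """Classify a triangle by comparing the square of the longest side with
    the sum of the squares of the other two (corollary of the converse of
    the Pythagorean theorem)."""
    a, b, c = sorted((a, b, c))          # ensure c is the longest side
    if a + b <= c:
        raise ValueError("no such triangle (triangle inequality violated)")
    lhs, rhs = a * a + b * b, c * c
    if math.isclose(lhs, rhs):
        return "right"
    return "acute" if lhs > rhs else "obtuse"

print(triangle_type(3, 4, 5))   # right
print(triangle_type(2, 3, 4))   # 4 + 9 = 13 < 16 -> obtuse
print(triangle_type(5, 5, 5))   # 25 + 25 > 25 -> acute
```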
A Pythagorean triple has three positive integersa,b, andc, such thata2+b2=c2.In other words, a Pythagorean triple represents the lengths of the sides of a right triangle where all three sides have integer lengths.[1]Such a triple is commonly written(a,b,c).Some well-known examples are(3, 4, 5)and(5, 12, 13).
A primitive Pythagorean triple is one in whicha,bandcarecoprime(thegreatest common divisorofa,bandcis 1).
The following is a list of primitive Pythagorean triples with values less than 100:
There are many formulas for generating Pythagorean triples. Of these, Euclid's formula is the most well-known: given arbitrary positive integers m and n with m > n, the formula states that the integers a = m2 − n2, b = 2mn, c = m2 + n2 form a Pythagorean triple.
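Euclid's formula can be checked mechanically; a brief sketch:

```python
def euclid_triple(m, n):
    """Pythagorean triple (m^2 - n^2, 2mn, m^2 + n^2); requires m > n > 0."""
    if not (m > n > 0):
        raise ValueError("need m > n > 0")
    return m * m - n * n, 2 * m * n, m * m + n * n

for m, n in [(2, 1), (3, 2), (4, 1)]:
    a, b, c = euclid_triple(m, n)
    assert a * a + b * b == c * c        # each output really is a triple
    print((a, b, c))
# (3, 4, 5), (5, 12, 13), (15, 8, 17)
```

Note that the formula yields a primitive triple only when m and n are coprime and not both odd; other choices of m, n give non-primitive triples.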
Given a right triangle with sides a,b,c{\displaystyle a,b,c} and altitude d{\displaystyle d} (a line from the right angle and perpendicular to the hypotenuse c{\displaystyle c}), the Pythagorean theorem gives a2 + b2 = c2, while the inverse Pythagorean theorem relates the two legs a,b{\displaystyle a,b} to the altitude d{\displaystyle d}:[31] 1/a2 + 1/b2 = 1/d2.
The equation can be transformed to 1/(xz)2 + 1/(yz)2 = 1/(xy)2, where x2+y2=z2{\displaystyle x^{2}+y^{2}=z^{2}} for any non-zero real x,y,z{\displaystyle x,y,z}. If a,b,d{\displaystyle a,b,d} are to be integers, the smallest solution a>b>d{\displaystyle a>b>d} is then a = 20, b = 15, d = 12, using the smallest Pythagorean triple 3,4,5{\displaystyle 3,4,5}. The reciprocal Pythagorean theorem is a special case of the optic equation 1/p + 1/q = 1/r, in which the denominators are squares, and also for a heptagonal triangle whose sides p,q,r{\displaystyle p,q,r} are square numbers.
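The reciprocal relation for the smallest integer solution can be verified exactly with rational arithmetic; a brief sketch:

```python
from fractions import Fraction

# Check the inverse Pythagorean theorem 1/a^2 + 1/b^2 = 1/d^2 exactly for
# the smallest integer solution (a, b, d) = (20, 15, 12), obtained from the
# 3-4-5 triple via (xz, yz, xy) with (x, y, z) = (4, 3, 5) up to ordering.
a, b, d = 20, 15, 12
lhs = Fraction(1, a * a) + Fraction(1, b * b)
print(lhs == Fraction(1, d * d))  # True

# Cross-check: the altitude of the right triangle with legs 15 and 20
# (hypotenuse 25) is d = a*b/c = 15*20/25 = 12, consistent with the identity.
print(15 * 20 // 25)  # 12
```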
One of the consequences of the Pythagorean theorem is that line segments whose lengths areincommensurable(so the ratio of which is not arational number) can be constructed using astraightedge and compass. Pythagoras' theorem enables construction of incommensurable lengths because the hypotenuse of a triangle is related to the sides by thesquare rootoperation.
The figure on the right shows how to construct line segments whose lengths are in the ratio of the square root of any positive integer.[32]Each triangle has a side (labeled "1") that is the chosen unit for measurement. In each right triangle, Pythagoras' theorem establishes the length of the hypotenuse in terms of this unit. If a hypotenuse is related to the unit by the square root of a positive integer that is not a perfect square, it is a realization of a length incommensurable with the unit, such as√2,√3,√5. For more detail, seeQuadratic irrational.
Incommensurable lengths conflicted with the Pythagorean school's concept of numbers as only whole numbers. The Pythagorean school dealt with proportions by comparison of integer multiples of a common subunit.[33]According to one legend,Hippasus of Metapontum(ca.470 B.C.) was drowned at sea for making known the existence of the irrational or incommensurable.[34]A careful discussion of Hippasus's contributions is found inFritz.[35]
For anycomplex number
theabsolute valueor modulus is given by
So the three quantities,r,xandyare related by the Pythagorean equation,
Note that r is defined to be a positive number or zero but x and y can be negative as well as positive. Geometrically r is the distance of z from zero, or the origin O, in the complex plane.
This can be generalised to find the distance between two points,z1andz2say. The required distance is given by
so again they are related by a version of the Pythagorean equation,
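In Python, for instance, the built-in complex type makes the Pythagorean nature of the modulus easy to check (a brief illustrative sketch):

```python
import math

# The modulus |z| = sqrt(x^2 + y^2) is the Pythagorean relation in the complex plane.
z = complex(3, 4)
print(abs(z))                      # 5.0
print(math.hypot(z.real, z.imag))  # same value via the Pythagorean formula

# The distance between two complex numbers z1, z2 is |z1 - z2|.
z1, z2 = complex(1, 1), complex(4, 5)
print(abs(z1 - z2))                # sqrt(3^2 + 4^2) = 5.0
```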
The distance formula inCartesian coordinatesis derived from the Pythagorean theorem.[36]If(x1,y1)and(x2,y2)are points in the plane, then the distance between them, also called theEuclidean distance, is given by
More generally, inEuclideann-space, the Euclidean distance between two points,A=(a1,a2,…,an){\displaystyle A\,=\,(a_{1},a_{2},\dots ,a_{n})}andB=(b1,b2,…,bn){\displaystyle B\,=\,(b_{1},b_{2},\dots ,b_{n})}, is defined, by generalization of the Pythagorean theorem, as:
If instead of Euclidean distance the square of this value (the squared Euclidean distance, or SED) is used, the resulting equation avoids square roots and is simply a sum of the squared differences of the coordinates:
The squared form is a smooth,convex functionof both points, and is widely used inoptimization theoryandstatistics, forming the basis ofleast squares.
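A minimal sketch of the two distance forms (function names are illustrative):

```python
import math

def squared_euclidean(p, q):
    """Squared Euclidean distance: a sum of per-coordinate squared differences."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def euclidean(p, q):
    """Euclidean distance: the square root of the SED (Pythagorean formula)."""
    return math.sqrt(squared_euclidean(p, q))

p, q = (1.0, 2.0, 2.0), (4.0, 6.0, 2.0)
print(squared_euclidean(p, q))  # 9 + 16 + 0 = 25.0
print(euclidean(p, q))          # 5.0
```

Avoiding the square root is why least-squares objectives and many clustering criteria are written in terms of the SED.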
If Cartesian coordinates are not used, for example, ifpolar coordinatesare used in two dimensions or, in more general terms, ifcurvilinear coordinatesare used, the formulas expressing the Euclidean distance are more complicated than the Pythagorean theorem, but can be derived from it. A typical example where the straight-line distance between two points is converted to curvilinear coordinates can be found in theapplications of Legendre polynomials in physics. The formulas can be discovered by using Pythagoras' theorem with the equations relating the curvilinear coordinates to Cartesian coordinates. For example, the polar coordinates(r,θ)can be introduced as:
Then two points with locations (r1,θ1) and (r2,θ2) are separated by a distance s:
Performing the squares and combining terms, the Pythagorean formula for distance in Cartesian coordinates produces the separation in polar coordinates as:
using the trigonometric product-to-sum formulas. This formula is the law of cosines, sometimes called the generalized Pythagorean theorem.[37] From this result, for the case where the radii to the two locations are at right angles, the enclosed angle Δθ = π/2, and the form corresponding to Pythagoras' theorem is regained: s2=r12+r22.{\displaystyle s^{2}=r_{1}^{2}+r_{2}^{2}.} The Pythagorean theorem, valid for right triangles, therefore is a special case of the more general law of cosines, valid for arbitrary triangles.
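The polar-coordinate separation (the law of cosines) can be checked against the Cartesian form; a brief illustrative sketch:

```python
import math

def polar_distance(r1, t1, r2, t2):
    """Distance between (r1, theta1) and (r2, theta2) via the law of cosines:
    s^2 = r1^2 + r2^2 - 2*r1*r2*cos(theta1 - theta2)."""
    return math.sqrt(r1 * r1 + r2 * r2 - 2 * r1 * r2 * math.cos(t1 - t2))

# When the radii are at right angles, the Pythagorean form is regained:
print(polar_distance(3, 0.0, 4, math.pi / 2))  # 5.0

# Cross-check an arbitrary pair of points against Cartesian coordinates:
x1, y1 = 3 * math.cos(0.2), 3 * math.sin(0.2)
x2, y2 = 4 * math.cos(1.3), 4 * math.sin(1.3)
print(math.isclose(polar_distance(3, 0.2, 4, 1.3),
                   math.hypot(x1 - x2, y1 - y2)))  # True
```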
In a right triangle with sidesa,band hypotenusec,trigonometrydetermines thesineandcosineof the angleθbetween sideaand the hypotenuse as:
From that it follows:
where the last step applies Pythagoras' theorem. This relation between sine and cosine is sometimes called the fundamental Pythagorean trigonometric identity.[38] In similar triangles, the ratios of the sides are the same regardless of the size of the triangles, and depend upon the angles. Consequently, in the figure, the triangle with hypotenuse of unit size has opposite side of size sin θ and adjacent side of size cos θ in units of the hypotenuse.
The Pythagorean theorem relates thecross productanddot productin a similar way:[39]
This can be seen from the definitions of the cross product and dot product, as
withnaunit vectornormal to bothaandb. The relationship follows from these definitions and the Pythagorean trigonometric identity.
This can also be used to define the cross product. By rearranging the following equation is obtained
This can be considered as a condition on the cross product and so part of its definition, for example inseven dimensions.[40][41]
If the first four of the Euclidean geometry axioms are assumed to be true then the Pythagorean theorem is equivalent to the fifth. That is, Euclid's fifth postulate implies the Pythagorean theorem and vice versa.
The Pythagorean theorem generalizes beyond the areas of squares on the three sides to any similar figures. This was known by Hippocrates of Chios in the 5th century BC,[42] and was included by Euclid in his Elements:[43]
If one erects similar figures (see Euclidean geometry) with corresponding sides on the sides of a right triangle, then the sum of the areas of the ones on the two smaller sides equals the area of the one on the larger side.
This extension assumes that the sides of the original triangle are the corresponding sides of the three similar figures (so the common ratios of sides between the similar figures are a : b : c).[44] While Euclid's proof only applied to convex polygons, the theorem also applies to concave polygons and even to similar figures that have curved boundaries (but still with part of a figure's boundary being the side of the original triangle).[44]
The basic idea behind this generalization is that the area of a plane figure is proportional to the square of any linear dimension, and in particular is proportional to the square of the length of any side. Thus, if similar figures with areas A, B and C are erected on sides with corresponding lengths a, b and c then:
A/a² = B/b² = C/c², so A + B = (a² + b²)/c² · C.
But, by the Pythagorean theorem, a² + b² = c², so A + B = C.
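A small numeric illustration (not from the source): erect semicircles, one family of similar figures, on the sides of a 3-4-5 right triangle and check that the two smaller areas sum to the larger one.

```python
import math

def semicircle_area(side):
    """Area of a semicircle erected on a segment (diameter = side)."""
    r = side / 2
    return math.pi * r * r / 2

a, b, c = 3.0, 4.0, 5.0          # a right triangle, since 3^2 + 4^2 = 5^2
A, B, C = (semicircle_area(s) for s in (a, b, c))

# Areas of similar figures scale with the square of the side, so A + B = C.
assert math.isclose(A + B, C)
```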
Conversely, if we can prove that A + B = C for three similar figures without using the Pythagorean theorem, then we can work backwards to construct a proof of the theorem. For example, the starting center triangle can be replicated and used as a triangle C on its hypotenuse, and two similar right triangles (A and B) constructed on the other two sides, formed by dividing the central triangle by its altitude. The sum of the areas of the two smaller triangles therefore is that of the third, thus A + B = C and reversing the above logic leads to the Pythagorean theorem a² + b² = c². (See also Einstein's proof by dissection without rearrangement.)
The Pythagorean theorem is a special case of the more general theorem relating the lengths of sides in any triangle, the law of cosines, which states that a² + b² − 2ab cos θ = c², where θ is the angle between sides a and b.[45]
When θ is π/2 radians (90°), then cos θ = 0, and the formula reduces to the usual Pythagorean theorem.
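This reduction can be checked numerically (an illustration, not from the source):

```python
import math

def law_of_cosines(a, b, theta):
    """Length of the side opposite angle theta (between sides a and b)."""
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(theta))

# At theta = pi/2 the cosine term vanishes and we recover Pythagoras.
a, b = 3.0, 4.0
c = law_of_cosines(a, b, math.pi / 2)
assert math.isclose(c, math.hypot(a, b))   # the Euclidean hypotenuse, 5.0
```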
At any selected angle of a general triangle of sides a, b, c, inscribe an isosceles triangle such that the equal angles at its base θ are the same as the selected angle. Suppose the selected angle θ is opposite the side labeled c. Inscribing the isosceles triangle forms triangle CAD with angle θ opposite side b and with side r along c. A second triangle is formed with angle θ opposite side a and a side with length s along c, as shown in the figure. Thābit ibn Qurra stated that the sides of the three triangles were related as:[47][48]
a² + b² = c(r + s).
As the angle θ approaches π/2, the base of the isosceles triangle narrows, and lengths r and s overlap less and less. When θ = π/2, ADB becomes a right triangle, r + s = c, and the original Pythagorean theorem is regained.
One proof observes that triangle ABC has the same angles as triangle CAD, but in opposite order. (The two triangles share the angle at vertex A, both contain the angle θ, and so also have the same third angle by the triangle postulate.) Consequently, ABC is similar to the reflection of CAD, the triangle DAC in the lower panel. Taking the ratio of sides opposite and adjacent to θ,
Likewise, for the reflection of the other triangle,
Clearing fractions and adding these two relations:
the required result.
The theorem remains valid if the angle θ is obtuse, so that the lengths r and s are non-overlapping.
Pappus's area theorem is a further generalization that applies to triangles that are not right triangles, using parallelograms on the three sides in place of squares (squares are a special case, of course). The upper figure shows that for a scalene triangle, the area of the parallelogram on the longest side is the sum of the areas of the parallelograms on the other two sides, provided the parallelogram on the long side is constructed as indicated (the dimensions labeled with arrows are the same, and determine the sides of the bottom parallelogram). This replacement of squares with parallelograms bears a clear resemblance to the original Pythagoras' theorem, and was considered a generalization by Pappus of Alexandria in the 4th century AD.[49][50]
The lower figure shows the elements of the proof. Focus on the left side of the figure. The left green parallelogram has the same area as the left, blue portion of the bottom parallelogram because both have the same base b and height h. However, the left green parallelogram also has the same area as the left green parallelogram of the upper figure, because they have the same base (the upper left side of the triangle) and the same height normal to that side of the triangle. Repeating the argument for the right side of the figure, the bottom parallelogram has the same area as the sum of the two green parallelograms.
In terms of solid geometry, Pythagoras' theorem can be applied to three dimensions as follows. Consider the cuboid shown in the figure. The length of face diagonal AC is found from Pythagoras' theorem as:
AC² = AB² + BC²,
where these three sides form a right triangle. Using diagonal AC and the horizontal edge CD, the length of body diagonal AD then is found by a second application of Pythagoras' theorem as:
AD² = AC² + CD²,
or, doing it all in one step:
AD² = AB² + BC² + CD².
This result is the three-dimensional expression for the magnitude of a vector v (the diagonal AD) in terms of its orthogonal components {vk} (the three mutually perpendicular sides):
‖v‖² = v₁² + v₂² + v₃².
This one-step formulation may be viewed as a generalization of Pythagoras' theorem to higher dimensions. However, this result is really just the repeated application of the original Pythagoras' theorem to a succession of right triangles in a sequence of orthogonal planes.
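The two-step and one-step computations can be compared numerically (an illustration with hypothetical edge lengths; the labels AB, BC, CD follow the figure described in the text):

```python
import math

# Edges of a cuboid along three mutually perpendicular directions
ab, bc, cd = 1.0, 2.0, 2.0

# Two applications of Pythagoras: face diagonal first, then body diagonal.
ac = math.hypot(ab, bc)
ad = math.hypot(ac, cd)

# One-step form: the Euclidean norm of the vector of components.
ad_one_step = math.sqrt(ab**2 + bc**2 + cd**2)

assert math.isclose(ad, ad_one_step)   # 3.0 for edges (1, 2, 2)
```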
A substantial generalization of the Pythagorean theorem to three dimensions is de Gua's theorem, named for Jean Paul de Gua de Malves: If a tetrahedron has a right angle corner (like a corner of a cube), then the square of the area of the face opposite the right angle corner is the sum of the squares of the areas of the other three faces. This result can be generalized as in the "n-dimensional Pythagorean theorem":[51]
Let x₁, x₂, …, xₙ be orthogonal vectors in ℝⁿ. Consider the n-dimensional simplex S with vertices 0, x₁, …, xₙ. (Think of the (n − 1)-dimensional simplex with vertices x₁, …, xₙ not including the origin as the "hypotenuse" of S and the remaining (n − 1)-dimensional faces of S as its "legs".) Then the square of the volume of the hypotenuse of S is the sum of the squares of the volumes of the n legs.
This statement is illustrated in three dimensions by the tetrahedron in the figure. The "hypotenuse" is the base of the tetrahedron at the back of the figure, and the "legs" are the three sides emanating from the vertex in the foreground. As the depth of the base from the vertex increases, the area of the "legs" increases, while that of the base is fixed. The theorem suggests that when this depth is at the value creating a right vertex, the generalization of Pythagoras' theorem applies. In a different wording:[52]
Given an n-rectangular n-dimensional simplex, the square of the (n − 1)-content of the facet opposing the right vertex will equal the sum of the squares of the (n − 1)-contents of the remaining facets.
The Pythagorean theorem can be generalized to inner product spaces,[53] which are generalizations of the familiar 2-dimensional and 3-dimensional Euclidean spaces. For example, a function may be considered as a vector with infinitely many components in an inner product space, as in functional analysis.[54]
In an inner product space, the concept of perpendicularity is replaced by the concept of orthogonality: two vectors v and w are orthogonal if their inner product ⟨v, w⟩ is zero. The inner product is a generalization of the dot product of vectors. The dot product is called the standard inner product or the Euclidean inner product. However, other inner products are possible.[55]
The concept of length is replaced by the concept of the norm ‖v‖ of a vector v, defined as:[56]
‖v‖ = √⟨v, v⟩.
In an inner-product space, the Pythagorean theorem states that for any two orthogonal vectors v and w we have
‖v + w‖² = ‖v‖² + ‖w‖².
Here the vectors v and w are akin to the sides of a right triangle with hypotenuse given by the vector sum v + w. This form of the Pythagorean theorem is a consequence of the properties of the inner product:
‖v + w‖² = ⟨v + w, v + w⟩ = ⟨v, v⟩ + ⟨w, w⟩ + ⟨v, w⟩ + ⟨w, v⟩ = ‖v‖² + ‖w‖²,
where ⟨v, w⟩ = ⟨w, v⟩ = 0 because of orthogonality.
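A minimal numeric check (not from the source): for orthogonal vectors under the standard inner product, the squared norm of the sum equals the sum of squared norms.

```python
def dot(u, v):
    """Standard (Euclidean) inner product of two equal-length vectors."""
    return sum(x * y for x, y in zip(u, v))

v = [1.0, 2.0, 2.0]
w = [2.0, 1.0, -2.0]      # chosen so that <v, w> = 2 + 2 - 4 = 0

assert dot(v, w) == 0.0   # orthogonal
s = [x + y for x, y in zip(v, w)]
assert dot(s, s) == dot(v, v) + dot(w, w)   # Pythagorean identity: 18 = 9 + 9
```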
A further generalization of the Pythagorean theorem in an inner product space to non-orthogonal vectors is the parallelogram law:[56]
2‖v‖² + 2‖w‖² = ‖v + w‖² + ‖v − w‖²,
which says that twice the sum of the squares of the lengths of the sides of a parallelogram is the sum of the squares of the lengths of the diagonals. Any norm that satisfies this equality is ipso facto a norm corresponding to an inner product.[56]
The Pythagorean identity can be extended to sums of more than two orthogonal vectors. If v₁, v₂, …, vₙ are pairwise-orthogonal vectors in an inner-product space, then application of the Pythagorean theorem to successive pairs of these vectors (as described for 3 dimensions in the section on solid geometry) results in the equation[57]
‖v₁ + v₂ + ⋯ + vₙ‖² = ‖v₁‖² + ‖v₂‖² + ⋯ + ‖vₙ‖².
Another generalization of the Pythagorean theorem applies to Lebesgue-measurable sets of objects in any number of dimensions. Specifically, the square of the measure of an m-dimensional set of objects in one or more parallel m-dimensional flats in n-dimensional Euclidean space is equal to the sum of the squares of the measures of the orthogonal projections of the object(s) onto all m-dimensional coordinate subspaces.[58]
In mathematical terms:
where:
The Pythagorean theorem is derived from the axioms of Euclidean geometry, and in fact, were the Pythagorean theorem to fail for some right triangle, then the plane in which this triangle is contained cannot be Euclidean. More precisely, the Pythagorean theorem implies, and is implied by, Euclid's Parallel (Fifth) Postulate.[59][60] Thus, right triangles in a non-Euclidean geometry[61] do not satisfy the Pythagorean theorem. For example, in spherical geometry, all three sides of the right triangle (say a, b, and c) bounding an octant of the unit sphere have length equal to π/2, and all its angles are right angles, which violates the Pythagorean theorem because a² + b² = 2c² > c².
Here two cases of non-Euclidean geometry are considered: spherical geometry and hyperbolic plane geometry; in each case, as in the Euclidean case for non-right triangles, the result replacing the Pythagorean theorem follows from the appropriate law of cosines.
However, the Pythagorean theorem remains true in hyperbolic geometry and elliptic geometry if the condition that the triangle be right is replaced with the condition that two of the angles sum to the third, say A + B = C. The sides are then related as follows: the sum of the areas of the circles with diameters a and b equals the area of the circle with diameter c.[62]
For any right triangle on a sphere of radius R (for example, if γ in the figure is a right angle), with sides a, b, c, the relation between the sides takes the form:[63]
cos(c/R) = cos(a/R) cos(b/R).
This equation can be derived as a special case of the spherical law of cosines that applies to all spherical triangles:
cos(c/R) = cos(a/R) cos(b/R) + sin(a/R) sin(b/R) cos γ.
For infinitesimal triangles on the sphere (or equivalently, for finite spherical triangles on a sphere of infinite radius), the spherical relation between the sides of a right triangle reduces to the Euclidean form of the Pythagorean theorem. To see how, assume we have a spherical triangle of fixed side lengths a, b, and c on a sphere with expanding radius R. As R approaches infinity the quantities a/R, b/R, and c/R tend to zero and the spherical Pythagorean identity reduces to 1 = 1, so we must look at its asymptotic expansion.
The Maclaurin series for the cosine function can be written as cos x = 1 − ½x² + O(x⁴), with the remainder term in big O notation. Letting x = c/R be a side of the triangle, and treating the expression as an asymptotic expansion in terms of R for a fixed c,
cos(c/R) = 1 − c²/(2R²) + O(R⁻⁴),
and likewise for a and b. Substituting the asymptotic expansion for each of the cosines into the spherical relation for a right triangle yields
1 − c²/(2R²) + O(R⁻⁴) = (1 − a²/(2R²) + O(R⁻⁴)) (1 − b²/(2R²) + O(R⁻⁴)).
Subtracting 1 and then negating each side,
c²/(2R²) = a²/(2R²) + b²/(2R²) + O(R⁻⁴).
Multiplying through by 2R², the asymptotic expansion for c in terms of fixed a, b and variable R is
c² = a² + b² + O(R⁻²).
The Euclidean Pythagorean relationship c² = a² + b² is recovered in the limit, as the remainder vanishes when the radius R approaches infinity.
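This limit can be observed numerically (an illustration, not from the source; the side lengths are hypothetical):

```python
import math

a, b = 3.0, 4.0   # fixed side lengths of a spherical right triangle

def spherical_hypotenuse(a, b, R):
    """Solve cos(c/R) = cos(a/R) * cos(b/R) for c, on a sphere of radius R."""
    return R * math.acos(math.cos(a / R) * math.cos(b / R))

# As R grows, the hypotenuse approaches the Euclidean value sqrt(3^2 + 4^2) = 5.
for R in (10.0, 100.0, 10000.0):
    print(R, spherical_hypotenuse(a, b, R))

assert math.isclose(spherical_hypotenuse(a, b, 1e4), 5.0, rel_tol=1e-6)
```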
For practical computation in spherical trigonometry with small right triangles, cosines can be replaced with sines using the double-angle identity cos 2θ = 1 − 2 sin² θ to avoid loss of significance. Then the spherical Pythagorean theorem can alternately be written as
sin²(c/2R) = sin²(a/2R) + sin²(b/2R) − 2 sin²(a/2R) sin²(b/2R).
In a hyperbolic space with uniform Gaussian curvature −1/R², for a right triangle with legs a, b, and hypotenuse c, the relation between the sides takes the form:[64]
cosh(c/R) = cosh(a/R) cosh(b/R),
where cosh is the hyperbolic cosine. This formula is a special form of the hyperbolic law of cosines that applies to all hyperbolic triangles:[65]
cosh(c/R) = cosh(a/R) cosh(b/R) − sinh(a/R) sinh(b/R) cos γ,
with γ the angle at the vertex opposite the side c.
By using the Maclaurin series for the hyperbolic cosine, cosh x ≈ 1 + x²/2, it can be shown that as a hyperbolic triangle becomes very small (that is, as a, b, and c all approach zero), the hyperbolic relation for a right triangle approaches the form of Pythagoras' theorem.
For small right triangles (a, b ≪ R), the hyperbolic cosines can be eliminated to avoid loss of significance, giving
sinh²(c/2R) = sinh²(a/2R) + sinh²(b/2R) + 2 sinh²(a/2R) sinh²(b/2R).
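The hyperbolic small-triangle limit parallels the spherical one and can be checked the same way (an illustration with hypothetical leg lengths, not from the source):

```python
import math

a, b = 3.0, 4.0   # legs of a hyperbolic right triangle

def hyperbolic_hypotenuse(a, b, R):
    """Solve cosh(c/R) = cosh(a/R) * cosh(b/R) for c, at curvature -1/R^2."""
    return R * math.acosh(math.cosh(a / R) * math.cosh(b / R))

# As the curvature flattens out (R -> infinity), the hypotenuse approaches
# the Euclidean value sqrt(3^2 + 4^2) = 5.
for R in (5.0, 50.0, 5000.0):
    print(R, hyperbolic_hypotenuse(a, b, R))

assert math.isclose(hyperbolic_hypotenuse(a, b, 5000.0), 5.0, rel_tol=1e-5)
```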
For any uniform curvature K (positive, zero, or negative), in very small right triangles (|K|a², |K|b² ≪ 1) with hypotenuse c, it can be shown that
The Pythagorean theorem applies to infinitesimal triangles seen in differential geometry. In three-dimensional space, the distance between two infinitesimally separated points satisfies
ds² = dx² + dy² + dz²,
with ds the element of distance and (dx, dy, dz) the components of the vector separating the two points. Such a space is called a Euclidean space. However, in Riemannian geometry, a generalization of this expression useful for general coordinates (not just Cartesian) and general spaces (not just Euclidean) takes the form:[66]
ds² = Σᵢⱼ gᵢⱼ dxᵢ dxⱼ,
where the matrix of coefficients gᵢⱼ is called the metric tensor. (Sometimes, by abuse of language, the same term is applied to the set of coefficients gᵢⱼ.) It may be a function of position, and often describes curved space. A simple example is Euclidean (flat) space expressed in curvilinear coordinates. For example, in polar coordinates:
ds² = dr² + r² dθ².
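As a sanity check (an illustration, not from the source), the polar line element can be integrated numerically along a straight segment and compared with the ordinary Euclidean distance between its endpoints:

```python
import math

p0 = (1.0, 0.0)           # Cartesian endpoints of a straight segment
p1 = (0.0, 2.0)

# Integrate ds^2 = dr^2 + r^2 dtheta^2 along the segment in small steps.
n = 100_000
length = 0.0
prev_r, prev_t = math.hypot(*p0), math.atan2(p0[1], p0[0])
for i in range(1, n + 1):
    x = p0[0] + (p1[0] - p0[0]) * i / n
    y = p0[1] + (p1[1] - p0[1]) * i / n
    r, t = math.hypot(x, y), math.atan2(y, x)
    dr, dt = r - prev_r, t - prev_t
    length += math.sqrt(dr * dr + (prev_r * dt) ** 2)   # ds from the metric
    prev_r, prev_t = r, t

euclidean = math.dist(p0, p1)   # sqrt(1 + 4)
assert abs(length - euclidean) < 1e-3
```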
There is debate whether the Pythagorean theorem was discovered once, or many times in many places, and the date of first discovery is uncertain, as is the date of the first proof. Historians of Mesopotamian mathematics have concluded that the Pythagorean rule was in widespread use during the Old Babylonian period (20th to 16th centuries BC), over a thousand years before Pythagoras was born.[68][69][70][71] The history of the theorem can be divided into four parts: knowledge of Pythagorean triples, knowledge of the relationship among the sides of a right triangle, knowledge of the relationships among adjacent angles, and proofs of the theorem within some deductive system.
Written c. 1800 BC, the Egyptian Middle Kingdom Berlin Papyrus 6619 includes a problem whose solution is the Pythagorean triple 6:8:10, but the problem does not mention a triangle. The Mesopotamian tablet Plimpton 322, written near Larsa also c. 1800 BC, contains many entries closely related to Pythagorean triples.[72]
In India, the Baudhayana Shulba Sutra, the dates of which are given variously as between the 8th and 5th century BC,[73] contains a list of Pythagorean triples and a statement of the Pythagorean theorem, both in the special case of the isosceles right triangle and in the general case, as does the Apastamba Shulba Sutra (c. 600 BC).[a]
Byzantine Neoplatonic philosopher and mathematician Proclus, writing in the fifth century AD, states two arithmetic rules, "one of them attributed to Plato, the other to Pythagoras",[76] for generating special Pythagorean triples. The rule attributed to Pythagoras (c. 570 – c. 495 BC) starts from an odd number and produces a triple with leg and hypotenuse differing by one unit; the rule attributed to Plato (428/427 or 424/423 – 348/347 BC) starts from an even number and produces a triple with leg and hypotenuse differing by two units. According to Thomas L. Heath (1861–1940), no specific attribution of the theorem to Pythagoras exists in the surviving Greek literature from the five centuries after Pythagoras lived.[77] However, when authors such as Plutarch and Cicero attributed the theorem to Pythagoras, they did so in a way which suggests that the attribution was widely known and undoubted.[78][79] Classicist Kurt von Fritz wrote, "Whether this formula is rightly attributed to Pythagoras personally ... one can safely assume that it belongs to the very oldest period of Pythagorean mathematics."[35] Around 300 BC, in Euclid's Elements, the oldest extant axiomatic proof of the theorem is presented.[80]
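The two rules Proclus describes correspond to the classical formulas (a reconstruction; the text above does not spell them out): for odd m, Pythagoras' rule gives (m, (m² − 1)/2, (m² + 1)/2); for even m, Plato's rule gives (m, (m/2)² − 1, (m/2)² + 1). A short sketch:

```python
def pythagoras_rule(m):
    """Rule attributed to Pythagoras: odd m >= 3 gives a triple whose
    leg and hypotenuse differ by one."""
    assert m >= 3 and m % 2 == 1
    return (m, (m * m - 1) // 2, (m * m + 1) // 2)

def plato_rule(m):
    """Rule attributed to Plato: even m >= 4 gives a triple whose leg
    and hypotenuse differ by two."""
    assert m >= 4 and m % 2 == 0
    return (m, (m // 2) ** 2 - 1, (m // 2) ** 2 + 1)

triples = [pythagoras_rule(m) for m in (3, 5, 7)] + \
          [plato_rule(m) for m in (4, 6, 8)]
for a, b, c in triples:
    assert a * a + b * b == c * c
    print(a, b, c)
# e.g. (3, 4, 5), (5, 12, 13), (7, 24, 25) and (4, 3, 5), (6, 8, 10), (8, 15, 17)
```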
With contents known much earlier, but in surviving texts dating from roughly the 1st century BC, the Chinese text Zhoubi Suanjing (周髀算经) (The Arithmetical Classic of the Gnomon and the Circular Paths of Heaven) gives a reasoning for the Pythagorean theorem for the (3, 4, 5) triangle; in China it is called the "Gougu theorem" (勾股定理).[81][82] During the Han Dynasty (202 BC to 220 AD), Pythagorean triples appear in The Nine Chapters on the Mathematical Art,[83] together with a mention of right triangles.[84] Some believe the theorem arose first in China in the 11th century BC,[85] where it is alternatively known as the "Shang Gao theorem" (商高定理),[86] named after the Duke of Zhou's astronomer and mathematician, whose reasoning composed most of what was in the Zhoubi Suanjing.[87]
|
https://en.wikipedia.org/wiki/Pythagorean_theorem#Non-Euclidean_geometry
|
Adultism is a bias or prejudice against children or youth.[1][2] It has been defined as "the power adults have over children", or the abuse thereof,[2] as well as "prejudice and accompanying systematic discrimination against young people",[3] and "bias towards adults... and the social addiction to adults, including their ideas, activities, and attitudes". It can be considered a subtype of ageism, or prejudice and discrimination due to age in general.
This phenomenon is said to affect families, schools, justice systems and the economy, in addition to other areas of society. Its impacts are largely regarded as negative, except in cases related to child protection and the overriding social contract.[4] Increased study of adultism has recently occurred in the fields of education, psychology, civic engagement, higher education and further, with contributions from Europe, North America and South America.[5]
According to one writer, "the term 'adultism' has been varyingly employed since at least the 1840s, when it was used to describe traits of an animal that matured faster than expected."[6]More familiar to current usage, the word was used by Patterson Du Bois in 1903,[7]with a meaning broadly similar to that used by Jack Flasher in a journal article seventy-five years later.
In France in the 1930s, the same word was used for an entirely different topic, the author describing a condition wherein a child possessed adult-like "physique and spirit":
That 1930s usage of the word in France was superseded by a late 1970s American journal article proposing that adultism is the abuse of the power that adults have over children. The author identified examples not only in parents but also in teachers, psychotherapists, the clergy, police, judges, and juries.[2]
John Bell in 1995 defined adultism as "behaviors and attitudes based on the assumptions that adults are better than young people, and entitled to act upon young people without agreement".[9][10] Adam Fletcher in 2016 called it "an addiction to the attitudes, ideas, beliefs, and actions of adults."[11] Adultism is popularly used to describe any discrimination against young people and is sometimes distinguished from ageism, which is simply prejudice on the grounds of age, although it commonly refers to prejudice against older people, not specifically against youth. It has been suggested that adultism, which is associated with a view of the self that trades on rejecting and excluding child-subjectivity, has always been present in Western culture.[12]
Fletcher[4] suggests that adultism has three main expressions in society:
A study by the Crisis Prevention Institute on the prevalence of adultism found an increasing number of local youth-serving organizations addressing the issue.[13] For instance, a local program (Youth Together) in Oakland, California, describes the impact of adultism, which "hinders the development of youth, in particular, their self-esteem and self-worth, ability to form positive relationships with caring adults, or even see adults as allies", on their website.[14]
Adultism has been used to describe the oppression of children and young people by adults, which is seen as having the same power dimension in the lives of young people as racism and sexism.[15] When used in this sense it is a generalization of paternalism, describing the force of all adults rather than only male adults, and may be witnessed in the infantilization of children and youth. Pedophobia (the fear of children) and ephebiphobia (the fear of youth) have been proposed as antecedents to adultism.[16]
Terms such as adult privilege, adultarchy, and adultcentrism have been proposed as descriptions of particular aspects or variants of adultism.[17]
National Youth Rights Association describes discrimination against youth as ageism, taking that word as any form of discrimination against anyone due to their age. Advocates of using the term 'ageism' for this issue also believe it makes common cause with older people fighting against their own form of age discrimination.[18] However, a national organization called Youth On Board counters this based on a different meaning of "ageism", arguing that "addressing adultist behavior by calling it ageism is discrimination against youth in itself."[19]
In his seminal 1978 article, Flasher says that adultism is born of the belief that children are inferior, and he says it can be manifested as excessive nurturing, possessiveness, or over-restrictiveness, all of which are consciously or unconsciously geared toward excessive control of a child.[20] Adultism has been associated with psychological projection and splitting, a process whereby 'the one with the power attributes his or her unconscious, unresolved sexual and aggressive material' to the child – 'both the dark and the light side... hence the divine child/deficit child'[21] split.
Theologians Heather Eaton and Matthew Fox proposed, "Adultism derives from adults repressing the inner child."[22] John Holt stated, "An understanding of adultism might begin to explain what I mean when I say that much of what is known as children's art is an adult invention."[23] That perspective is seemingly supported by Maya Angelou, who remarked:
We are all creative, but by the time we are three or four years old, someone has knocked the creativity out of us. Some people shut up the kids who start to tell stories. Kids dance in their cribs, but someone will insist they sit still. By the time the creative people are ten or twelve, they want to be like everyone else.[24]
A 2006/2007 survey conducted by theChildren's Rights Alliance for Englandand theNational Children's Bureauasked 4,060 children and young people whether they have ever been treated unfairly based on various criteria (race, age, sex, sexual orientation, etc.). A total of 43% of British youth surveyed reported experiencing discrimination based on their age, substantially more than other categories of discrimination like sex (27%), race (11%), or sexual orientation (6%).[25]
In addition to Fletcher,[4] other experts have identified multiple forms of adultism, offering a typology that includes the above categories of internalized adultism,[26] institutionalized adultism,[27] cultural adultism, and other forms.
In a publication of the W. K. Kellogg Foundation, University of Michigan professor Barry Checkoway asserts that internalized adultism causes youth to "question their own legitimacy, doubt their ability to make a difference" and perpetuate a "culture of silence" among young people.[28]
"Adultism convinces us as children that children don't really count," reports an investigative study, and it "becomes extremely important to us [children] to have the approval of adults and be 'in good' with them, even if it means betraying our fellow children. This aspect of internalized adultism leads to such phenomena as tattling on our siblings or being the 'teacher's pet,' to name just two examples."
Other examples of internalized adultism include many forms of violence imposed upon children and youth by adults who are reliving the violence they faced as young people, such as corporal punishment, sexual abuse, verbal abuse, and community incidents that include store policies prohibiting youth from visiting shops without adults, and police, teachers, or parents chasing young people from areas without just cause.[9]
Institutional adultism may be apparent in any instance of systemic bias, where formalized limitations or demands are placed on people simply because of their young age. Policies, laws, rules, organizational structures, and systematic procedures each serve as mechanisms to leverage, perpetuate, and instill adultism throughout society. These limitations are often reinforced through physical force, coercion or police actions and are often seen as double standards.[29] This treatment is increasingly seen as a form of gerontocracy.[30][31]
Institutions perpetuating adultism may include the fiduciary, legal, educational, communal, religious, and governmental sectors of a community. Social science literature has identified adultism as "within the context of the social inequality and the oppression of children, where children are denied human rights and are disproportionately victims of maltreatment and exploitation."[32]
Institutional adultism may be present in:
as well as Legal issues affecting adolescence and Total institutions.
Cultural adultism is a much more ambiguous, yet much more prevalent, form of "discrimination or intolerance towards youth".[36] Any restriction or exploitation of people because of their youth, as opposed to their ability, comprehension, or capacity, may be said to be adultist. These restrictions are often attributed to euphemisms afforded to adults on the basis of age alone, such as "better judgment" or "the wisdom of age". A parenting magazine editor comments, "Most of the time people talk differently to kids than to adults, and often they act differently, too."[37]
Discrimination against age is increasingly recognized as a form of bigotry in social and cultural settings around the world. An increasing number of social institutions are acknowledging the positions of children and teenagers as an oppressed minority group.[38] Many youth are rallying against the adultist myths spread through mass media from the 1970s through the 1990s.[39][40]
Research compiled from two sources (a Cornell University nationwide study and a Harvard University study on youth) has shown that social stratification between age groups causes stereotyping and generalization; for instance, the media-perpetuated myth that all adolescents are immature, violent and rebellious.[41] Opponents of adultism contend that this has led to a growing number of youth, academics, researchers, and other adults rallying against adultism and ageism, such as organizing education programs, protesting statements, and creating organizations devoted to publicizing the concept and addressing it.[42]
Simultaneously, research shows that young people who struggle against adultism within community organizations have a high rate of impact upon said agencies, as well as their peers, the adults who work with them, and the larger community to which the organization belongs.[43]
There may be many negative effects of adultism, including ephebiphobia and a growing generation gap. A reactive social response to adultism takes the form of the children's rights movement, led by young people who strike against being exploited for their labor. Numerous popular outlets are employed to strike out against adultism, particularly music and movies. Additionally, many youth-led social change efforts have inherently responded to adultism, particularly those associated with youth activism and student activism, each of which in their own respects have struggled with the effects of institutionalized and cultural adultism.[42]
A growing number of governmental, academic, and educational institutions around the globe have created policy, conducted studies, and created publications that respond to many of the insinuations and implications of adultism. Much of popular researcher Margaret Mead's work can be said to be a response to adultism.[44] Current researchers whose work analyzes the effects of adultism include sociologist Mike Males[45] and critical theorist Henry Giroux. The topic has recently been addressed in liberation psychology literature, as well.[46]
Any inanimate or animate exhibition of adultism is said to be "adultist". This may include behaviors, policies, practices, institutions, or individuals. It is legal in most countries towards people under 18.
Educator John Holt proposed that teaching adults about adultism is a vital step to addressing the effects of adultism,[47] and at least one organization[48] and one curriculum[49] do just that. Several educators have created curricula that seek to teach youth about adultism, as well.[50] Currently, organizations responding to the negative effects of adultism include the United Nations, which has conducted a great deal of research[51] in addition to recognizing the need to counter adultism through policy and programs. The CRC has particular Articles (5 and 12) which are specifically committed to combating adultism.[citation needed] The international organization Human Rights Watch has done the same.[52]
Common practice accepts the engagement of youth voice and the formation of youth-adult partnerships as essential steps to resisting adultism.[53]
Some ways to challenge adultism also include youth-led programming and participating in youth-led organizations. These are both ways of children stepping up and taking action to call out the bias towards adults. Youth-led programming allows the voices of the youth to be heard and taken into consideration.[54] Taking control of their autonomy can help children take control of their sexuality, as well. Moving away from an adultist framework leads to moving away from the idea that children are not capable of handling information about sex and their own sexuality. Accepting that children are ready to learn about themselves will decrease the amount of misinformation spread to them by their peers and allow them to receive accurate information from individuals educated on the topic.[55]
|
https://en.wikipedia.org/wiki/Adultism
|
The principle of least effort is a broad theory that covers diverse fields from evolutionary biology to webpage design. It postulates that animals, people, and even well-designed machines will naturally choose the path of least resistance or "effort". It is closely related to many other similar principles (see principle of least action or other articles listed below).
This is perhaps best known, or at least documented, among researchers in the field of library and information science. Their principle states that an information-seeking client will tend to use the most convenient search method in the least exacting mode available. Information-seeking behavior stops as soon as minimally acceptable results are found. This theory holds true regardless of the user's proficiency as a searcher, or their level of subject expertise. Also, this theory takes into account the user's previous information-seeking experience. The user will use the tools that are most familiar and easy to use that find results. The principle of least effort is known as a "deterministic description of human behavior".[1]
The principle of least effort applies not only in the library context, but also to any information-seeking activity. For example, one might consult a generalist co-worker down the hall rather than a specialist in another building, so long as the generalist's answers were within the threshold of acceptability.
The principle of least effort is analogous to thepath of least resistance.
The principle was first articulated by the Italian philosopher Guillaume Ferrero in an article in the Revue philosophique de la France et de l'étranger of 1 January 1894.[2] About fifty years later, the principle was studied by the linguist George Kingsley Zipf, who wrote Human Behaviour and the Principle of Least Effort: An Introduction to Human Ecology, first published in 1949. He theorised that the observed distribution of word use was due to a tendency to communicate efficiently with the least effort; this theory is known as Zipf's law.[3]
Within the context of information seeking, the principle of least effort was studied by Herbert Poole, who wrote Theories of the Middle Range in 1985.[4] Librarian Thomas Mann lists the principle of least effort as one of several principles guiding information-seeking behavior in his 1987 book, A Guide to Library Research Methods.[5]
Likewise, one of the most common measures of information-seeking behavior, library circulation statistics, also follows the 80-20 rule. This suggests that information-seeking behavior is a manifestation not of a normal distribution curve, but of a power law curve.
(Regarding the Law of Least Effort, see William James, The Principles of Psychology, 1890, page 944.)
The principle of least effort is especially important when considering design for libraries and research in the context of the modern library. Libraries must take into consideration the user's desire to find information quickly and easily. The principle must be considered when designing individual Online Public Access Catalogs (OPACs) as well as other library tools.
The principle is a guiding force for the push to provide access to electronic media in libraries. It was further explored in a study of the library behavior of graduate students by Zao Liu and Zheng Ye (Lan) Lang, published in 2004. The study sampled Texas A&M distance-learning graduate students to test which library resources they used and why. The Internet was used the most, with libraries the next most used resource for conducting class research. Most students chose these resources for their speed and accessibility from home, and the study found that the principle of least effort was the primary behavior model of most distance-learning students.[6] This means that modern libraries, especially academic libraries, need to analyze their electronic databases in order to cater successfully to the changing realities of information science.
Professional writers employ the principle of least effort during audience analysis. The writer analyzes the reader's environment, previous knowledge, and other similar information which the reader may already know. In technical writing, recursive organization, where parts resemble the organization of the whole, helps readers find their way. Consistency of navigational features is a common concern in software design.
|
https://en.wikipedia.org/wiki/Principle_of_least_effort
|
In computer science, radix sort is a non-comparative sorting algorithm. It avoids comparison by creating and distributing elements into buckets according to their radix. For elements with more than one significant digit, this bucketing process is repeated for each digit, while preserving the ordering of the prior step, until all digits have been considered. For this reason, radix sort has also been called bucket sort and digital sort.
Radix sort can be applied to data that can be sorted lexicographically, be they integers, words, punch cards, playing cards, or the mail.
Radix sort dates back as far as 1887 to the work of Herman Hollerith on tabulating machines.[1] Radix sorting algorithms came into common use as a way to sort punched cards as early as 1923.[2]
The first memory-efficient computer algorithm for this sorting method was developed in 1954 at MIT by Harold H. Seward. Computerized radix sorts had previously been dismissed as impractical because of the perceived need for variable allocation of buckets of unknown size. Seward's innovation was to use a linear scan to determine the required bucket sizes and offsets beforehand, allowing for a single static allocation of auxiliary memory. The linear scan is closely related to Seward's other algorithm, counting sort.
In the modern era, radix sorts are most commonly applied to collections of binary strings and integers. In some benchmarks they have been shown to be faster than other, more general-purpose sorting algorithms, sometimes 50% to three times faster.[3][4][5]
Radix sorts can be implemented to start at either the most significant digit (MSD) or the least significant digit (LSD). For example, with 1234, one could start with 1 (MSD) or 4 (LSD).
LSD radix sorts typically use the following sorting order: short keys come before longer keys, and keys of the same length are sorted lexicographically. This coincides with the normal order of integer representations, as in the sequence [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. LSD sorts are generally stable sorts.
MSD radix sorts are most suitable for sorting strings or fixed-length integer representations. A sequence like [b, c, e, d, f, g, ba] would be sorted as [b, ba, c, d, e, f, g]. If lexicographic ordering is used to sort variable-length integers in base 10, then numbers from 1 to 10 would be output as [1, 10, 2, 3, 4, 5, 6, 7, 8, 9], as if the shorter keys were left-justified and padded on the right with blank characters to make them as long as the longest key. MSD sorts are not necessarily stable if the original ordering of duplicate keys must always be maintained.
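This variable-length behavior can be sketched with a small recursive MSD string sort in Python (a hypothetical illustration; the function name and bucket scheme are choices made here, not taken from a reference implementation):

```python
def msd_string_sort(strs, pos=0):
    """Recursive MSD radix sort for lowercase ASCII strings. Strings that
    end at the current position go into a 'finished' bucket emitted first,
    which yields the lexicographic order [b, ba, c, ...] described above."""
    if len(strs) <= 1:
        return list(strs)
    done = [s for s in strs if len(s) == pos]   # keys exhausted at this depth
    buckets = [[] for _ in range(26)]           # one bucket per letter
    for s in strs:
        if len(s) > pos:
            buckets[ord(s[pos]) - ord('a')].append(s)
    out = done
    for b in buckets:                           # recurse on the next character
        out.extend(msd_string_sort(b, pos + 1))
    return out

print(msd_string_sort(["b", "c", "e", "d", "f", "g", "ba"]))
# ['b', 'ba', 'c', 'd', 'e', 'f', 'g']
```

Note that each bucket is sorted independently of its siblings, which is the subdivision property discussed below.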
Other than the traversal order, MSD and LSD sorts differ in their handling of variable length input.
LSD sorts can group by length, radix sort each group, then concatenate the groups in size order. MSD sorts must effectively 'extend' all shorter keys to the size of the largest key and sort them accordingly, which can be more complicated than the grouping required by LSD.
However, MSD sorts are more amenable to subdivision and recursion. Each bucket created by an MSD step can itself be radix sorted using the next most significant digit, without reference to any other buckets created in the previous step. Once the last digit is reached, concatenating the buckets is all that is required to complete the sort.
Input list:
Starting from the rightmost (last) digit, sort the numbers based on that digit:
Sorting by the next left digit:
And finally by the leftmost digit:
Each step requires just a single pass over the data, since each item can be placed in its bucket without comparison with any other element.
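The digit-by-digit passes can be sketched in Python as follows (a minimal sketch with an assumed input list; lsd_radix_sort is a name chosen for this illustration):

```python
def lsd_radix_sort(nums, base=10):
    """Sort non-negative integers by bucketing on each digit, least
    significant first. Each pass is stable, so the ordering established
    by earlier passes is preserved among equal digits."""
    if not nums:
        return []
    passes = len(str(max(nums)))          # digits in the longest key
    for p in range(passes):
        buckets = [[] for _ in range(base)]
        divisor = base ** p
        for n in nums:                    # single pass, no comparisons
            buckets[(n // divisor) % base].append(n)
        nums = [n for bucket in buckets for n in bucket]
    return nums

print(lsd_radix_sort([170, 45, 75, 90, 2, 802, 2, 66]))
# [2, 2, 45, 66, 75, 90, 170, 802]
```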
Some radix sort implementations allocate space for buckets by first counting the number of keys that belong in each bucket before moving keys into those buckets. The number of times that each digit occurs is stored in an array.
Although it is always possible to pre-determine the bucket boundaries using counts, some implementations opt to use dynamic memory allocation instead.
Input list, fixed width numeric strings with leading zeros:
First digit, with brackets indicating buckets:
Next digit:
Final digit:
All that remains is concatenation:
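A single counting-based pass of this kind might look like the following in Python (a sketch; counting_pass and its parameters are hypothetical names, shown for fixed-width keys as above):

```python
def counting_pass(keys, get_digit, base=10):
    """One stable radix-sort pass: count digit occurrences first, turn the
    counts into starting offsets, then place each key directly into a
    single statically allocated output array."""
    counts = [0] * base
    for k in keys:
        counts[get_digit(k)] += 1
    offsets = [0] * base                  # prefix sums: first index per digit
    for d in range(1, base):
        offsets[d] = offsets[d - 1] + counts[d - 1]
    out = [None] * len(keys)
    for k in keys:                        # scan in input order -> stable
        d = get_digit(k)
        out[offsets[d]] = k
        offsets[d] += 1
    return out

# bucket fixed-width numeric strings by their last digit
print(counting_pass(["170", "045", "075", "090", "002", "802"],
                    lambda s: int(s[-1])))
# ['170', '090', '002', '802', '045', '075']
```

Repeating the pass for each digit position, from last to first, completes an LSD sort.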
Radix sort operates in O(n·w) time, where n is the number of keys and w is the key length. LSD variants can achieve a lower bound for w of 'average key length' when splitting variable-length keys into groups as discussed above.
Optimized radix sorts can be very fast when working in a domain that suits them.[6] They are constrained to lexicographic data, but for many practical applications this is not a limitation. Large key sizes can hinder LSD implementations when the induced number of passes becomes the bottleneck.[2]
Binary MSD radix sort, also called binary quicksort, can be implemented in-place by splitting the input array into two bins: the 0s bin and the 1s bin. The 0s bin is grown from the beginning of the array, whereas the 1s bin is grown from the end of the array. The 0s bin boundary is placed before the first array element; the 1s bin boundary is placed after the last array element. The most significant bit of the first array element is examined. If this bit is a 1, then the first element is swapped with the element in front of the 1s bin boundary (the last element of the array), and the 1s bin is grown by one element by decrementing the 1s boundary array index. If this bit is a 0, then the first element remains at its current location, and the 0s bin is grown by one element. The next array element examined is the one in front of the 0s bin boundary (i.e. the first element that is not in the 0s bin or the 1s bin). This process continues until the 0s bin and the 1s bin reach each other. The 0s bin and the 1s bin are then sorted recursively based on the next bit of each array element. Recursive processing continues until the least significant bit has been used for sorting.[7][8] Handling signed two's complement integers requires treating the most significant bit with the opposite sense, followed by unsigned treatment of the rest of the bits.
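A compact Python sketch of this binary partitioning scheme, assuming non-negative keys below 2**32 (the function name and the starting bit are choices made here):

```python
def binary_msd_sort(a, lo=0, hi=None, bit=31):
    """In-place binary MSD radix sort ('binary quicksort'). Grows a 0s bin
    from the front and a 1s bin from the back, then recurses on each bin
    with the next lower bit. Not stable."""
    if hi is None:
        hi = len(a)
    if hi - lo <= 1 or bit < 0:
        return
    i, j = lo, hi                 # i: 0s-bin boundary, j: 1s-bin boundary
    while i < j:
        if (a[i] >> bit) & 1:
            j -= 1
            a[i], a[j] = a[j], a[i]   # move into the 1s bin at the end;
                                      # re-examine the swapped-in element
        else:
            i += 1                    # element joins the 0s bin
    binary_msd_sort(a, lo, i, bit - 1)
    binary_msd_sort(a, i, hi, bit - 1)

nums = [170, 45, 75, 90, 2, 802, 2, 66]
binary_msd_sort(nums)
print(nums)  # [2, 2, 45, 66, 75, 90, 170, 802]
```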
In-place MSD binary-radix sort can be extended to a larger radix while retaining the in-place property. Counting sort is used to determine the size of each bin and its starting index. Swapping is used to place the current element into its bin, followed by expanding the bin boundary. As the array elements are scanned, the bins are skipped over and only elements between bins are processed, until the entire array has been processed and all elements end up in their respective bins. The number of bins is the same as the radix used (e.g. 16 bins for radix 16). Each pass is based on a single digit (e.g. 4 bits per digit in the case of radix 16), starting from the most significant digit. Each bin is then processed recursively using the next digit, until all digits have been used for sorting.[9][10]
Neither in-place binary-radix sort nor n-bit-radix sort, discussed in the paragraphs above, is a stable algorithm.
MSD radix sort can be implemented as a stable algorithm, but this requires a memory buffer of the same size as the input array. The extra memory allows the input buffer to be scanned from the first array element to the last, and the array elements to be moved to the destination bins in the same order. Thus, equal elements will be placed in the memory buffer in the same order they were in the input array. The MSD-based algorithm uses the extra memory buffer as the output on the first level of recursion, but swaps the input and output on the next level of recursion, to avoid the overhead of copying the output result back to the input buffer. Each of the bins is recursively processed, as is done for the in-place MSD radix sort. After the sort by the last digit has been completed, the output buffer is checked to see if it is the original input array, and if it is not, then a single copy is performed. If the digit size is chosen such that the key size divided by the digit size is an even number, the copy at the end is avoided.[11]
Radix sort, such as the two-pass method where counting sort is used during the first pass of each level of recursion, has a large constant overhead. Thus, when the bins get small, other sorting algorithms should be used, such as insertion sort. A good implementation of insertion sort is fast for small arrays, stable, in-place, and can significantly speed up radix sort.
This recursive sorting algorithm has particular application to parallel computing, as each of the bins can be sorted independently. In this case, each bin is passed to the next available processor. A single processor would be used at the start (the most significant digit). By the second or third digit, all available processors would likely be engaged. Ideally, as each subdivision is fully sorted, fewer and fewer processors would be utilized. In the worst case, all of the keys will be identical or nearly identical to each other, with the result that there will be little to no advantage to using parallel computing to sort the keys.
In the top level of recursion, the opportunity for parallelism lies in the counting sort portion of the algorithm. Counting is highly parallel, amenable to the parallel_reduce pattern, and splits the work well across multiple cores until reaching the memory bandwidth limit. This portion of the algorithm has data-independent parallelism. Processing each bin in subsequent recursion levels is data-dependent, however. For example, if all keys were of the same value, then there would be only a single bin with any elements in it, and no parallelism would be available. For random inputs all bins would be nearly equally populated and a large amount of parallelism opportunity would be available.[12]
There are faster parallel sorting algorithms available: for example, the optimal-complexity O(log(n)) algorithms of Ajtai, Komlós, and Szemerédi (the "three Hungarians") and of Richard Cole,[13][14] and Batcher's bitonic merge sort, which has an algorithmic complexity of O(log2(n)), all of which have a lower algorithmic time complexity than radix sort on a CREW-PRAM. The fastest known PRAM sorts were described in 1991 by David M. W. Powers with a parallelized quicksort that can operate in O(log(n)) time on a CRCW-PRAM with n processors by performing partitioning implicitly, as well as a radix sort that operates using the same trick in O(k), where k is the maximum key length.[15] However, neither the PRAM architecture nor a single sequential processor can actually be built in a way that will scale without the number of constant fan-out gate delays per cycle increasing as O(log(n)), so that in effect a pipelined version of Batcher's bitonic mergesort and the O(log(n)) PRAM sorts are all O(log2(n)) in terms of clock cycles, with Powers acknowledging that Batcher's would have a lower constant in terms of gate delays than his parallel quicksort and radix sort, or Cole's merge sort, for a keylength-independent sorting network of O(nlog2(n)).[16]
Radix sorting can also be accomplished by building a tree (or radix tree) from the input set and doing a pre-order traversal. This is similar to the relationship between heapsort and the heap data structure. This can be useful for certain data types; see burstsort.
|
https://en.wikipedia.org/wiki/Radix_sort
|
Disease surveillance is an epidemiological practice by which the spread of disease is monitored in order to establish patterns of progression. The main role of disease surveillance is to predict, observe, and minimize the harm caused by outbreak, epidemic, and pandemic situations, as well as to increase knowledge about which factors contribute to such circumstances. A key part of modern disease surveillance is the practice of disease case reporting.[1]
In modern times, the reporting of disease outbreaks has been transformed from manual record keeping to instant worldwide internet communication.
The number of cases could be gathered from hospitals – which would be expected to see most of the occurrences – collated, and eventually made public. With the advent of modern communication technology, this has changed dramatically. Organizations like the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) can now report cases and deaths from significant diseases within days – sometimes within hours – of the occurrence. Further, there is considerable public pressure to make this information available quickly and accurately.[2][failed verification]
Formal reporting of notifiable infectious diseases is a requirement placed upon health care providers by many regional and national governments, and upon national governments by the World Health Organization, to monitor spread as a result of the transmission of infectious agents. Since 1969, WHO has required that all cases of the following diseases be reported to the organization: cholera, plague, yellow fever, smallpox, relapsing fever and typhus. In 2005, the list was extended to include polio and SARS. Regional and national governments typically monitor a larger set of communicable diseases (around 80 in the U.S.) that can potentially threaten the general population. Tuberculosis, HIV, botulism, hantavirus, anthrax, and rabies are examples of such diseases. The incidence counts of diseases are often used as health indicators to describe the overall health of a population.[citation needed]
The World Health Organization (WHO) is the lead agency for coordinating the global response to major diseases. The WHO maintains websites for a number of diseases and has active teams in many countries where these diseases occur.[3]
During the SARS outbreak in early 2004, for example, the Beijing staff of the WHO produced updates every few days for the duration of the outbreak.[2] Beginning in January 2004, the WHO has produced similar updates for H5N1.[4] These results are widely reported and closely watched.[citation needed]
WHO's Epidemic and Pandemic Alert and Response (EPR) programme – established to detect, rapidly verify, and respond appropriately to epidemic-prone and emerging disease threats – covers the following diseases:[5]
As the lead organization in global public health, the WHO occupies a delicate role in global politics. It must maintain good relationships with each of the many countries in which it is active. As a result, it may only report results within a particular country with the agreement of the country's government. Because some governments regard the release of any information on disease outbreaks as a state secret, this can place the WHO in a difficult position.[citation needed]
The WHO-coordinated International Outbreak Alert and Response is designed to ensure that "outbreaks of potential international importance are rapidly verified and information is quickly shared within the Network" – but not necessarily with the public – and to integrate and coordinate "activities to support national efforts" rather than challenge national authority, in order to "respect the independence and objectivity of all partners". The commitment that "All Network responses will proceed with full respect for ethical standards, human rights, national and local laws, cultural sensitivities and tradition" assures each nation that its security, financial, and other interests will be given full weight.[6]
Testing for a disease can be expensive, and distinguishing between two diseases can be prohibitively difficult in many countries. One standard means of determining whether a person has had a particular disease is to test for the presence of antibodies that are particular to that disease. In the case of H5N1, for example, there is a low-pathogenic H5N1 strain in wild birds in North America that a human could conceivably have antibodies against. It would be extremely difficult to distinguish between antibodies produced by this strain and antibodies produced by Asian-lineage HPAI A(H5N1). Similar difficulties are common, and make it difficult to determine how widely a disease may have spread.[citation needed]
There is currently little available data on the spread of H5N1 in wild birds in Africa and Asia. Without such data, predicting how the disease might spread in the future is difficult. Information that scientists and decision makers need to make useful medical products and informed decisions for health care, but currently lack, includes:[citation needed]
Surveillance of H5N1 in humans, poultry, wild birds, cats and other animals remains very weak in many parts of Asia and Africa. Much remains unknown about the exact extent of its spread.[citation needed]
H5N1 in China is less than fully reported. Blogs have described many discrepancies between official Chinese government announcements concerning H5N1 and what people in China see with their own eyes. Many reports of total H5N1 cases have excluded China due to widespread disbelief in China's official numbers.[7][8][9][10] (See Disease surveillance in China.)
"Only half the world's human bird flu cases are being reported to the World Health Organization within two weeks of being detected, a response time that must be improved to avert a pandemic, a senior WHO official said Saturday. Shigeru Omi, WHO's regional director for the Western Pacific, said it is estimated that countries would have only two to three weeks to stamp out, or at least slow, a pandemic flu strain after it began spreading in humans."[11]
David Nabarro, chief avian flu coordinator for the United Nations, says avian flu has too many unanswered questions.[12][13]
CIDRAP reported on 25 August 2006 on a new US government website[14] that allows the public to view current information about testing of wild birds for H5N1 avian influenza, which is part of a national wild-bird surveillance plan that "includes five strategies for early detection of highly pathogenic avian influenza. Sample numbers from three of these will be available on HEDDS: live wild birds, subsistence hunter-killed birds, and investigations of sick and dead wild birds. The other two strategies involve domestic bird testing and environmental sampling of water and wild-bird droppings. [...] A map on the new USGS site shows that 9,327 birds from Alaska have been tested so far this year, with only a few from most other states. Last year, officials tested just 721 birds from Alaska and none from most other states, another map shows. The goal of the surveillance program for 2006 is to collect 75,000 to 100,000 samples from wild birds and 50,000 environmental samples, officials have said".[15]
|
https://en.wikipedia.org/wiki/Disease_surveillance
|
The mean absolute difference (univariate) is a measure of statistical dispersion equal to the average absolute difference of two independent values drawn from a probability distribution. A related statistic is the relative mean absolute difference, which is the mean absolute difference divided by the arithmetic mean, and equal to twice the Gini coefficient.
The mean absolute difference is also known as the absolute mean difference (not to be confused with the absolute value of the mean signed difference) and the Gini mean difference (GMD).[1] The mean absolute difference is sometimes denoted by Δ or as MD.
The mean absolute difference is defined as the "average" or "mean", formally the expected value, of the absolute difference of two random variables X and Y independently and identically distributed with the same (unknown) distribution, henceforth called Q:

MD = E[|X − Y|].

Specifically, in the discrete case, with probability mass function f,

MD = Σ_i Σ_j f(x_i) f(x_j) |x_i − x_j|.

In the continuous case, with probability density function f,

MD = ∫∫ f(x) f(y) |x − y| dx dy.

An alternative form of the equation, in terms of the cumulative distribution function F, is given by:

MD = 2 ∫ F(x) (1 − F(x)) dx.
When the probability distribution has a finite and nonzero arithmetic mean AM, the relative mean absolute difference, sometimes denoted by Δ or RMD, is defined by

RMD = MD / AM.
The relative mean absolute difference quantifies the mean absolute difference in comparison to the size of the mean and is a dimensionless quantity. The relative mean absolute difference is equal to twice the Gini coefficient, which is defined in terms of the Lorenz curve. This relationship gives complementary perspectives on both the relative mean absolute difference and the Gini coefficient, including alternative ways of calculating their values.
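The relationship can be summarized compactly (standard identities restated here for reference, where G denotes the Gini coefficient and the second expression is the discrete population form over n values with mean x̄):

```latex
\mathrm{RMD} \;=\; \frac{\mathrm{MD}}{\mathrm{AM}} \;=\; 2G,
\qquad
G \;=\; \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} \lvert x_i - x_j \rvert}{2 n^{2} \bar{x}}
```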
The mean absolute difference is invariant to translations and negation, and varies proportionally to positive scaling. That is to say, if X is a random variable and c is a constant:

MD(X + c) = MD(X),
MD(−X) = MD(X),
MD(cX) = c MD(X) for c > 0.
The relative mean absolute difference is invariant to positive scaling, commutes with negation, and varies under translation in proportion to the ratio of the original and translated arithmetic means. That is to say, if X is a random variable and c is a constant:

RMD(X + c) = RMD(X) · E[X] / (E[X] + c),
RMD(−X) = −RMD(X),
RMD(cX) = RMD(X) for c > 0.
If a random variable has a positive mean, then its relative mean absolute difference will always be greater than or equal to zero. If, additionally, the random variable can only take on values that are greater than or equal to zero, then its relative mean absolute difference will be less than 2.
The mean absolute difference is twice the L-scale (the second L-moment), while the standard deviation is the square root of the variance about the mean (the second conventional central moment). The differences between L-moments and conventional moments are first seen in comparing the mean absolute difference and the standard deviation (the first L-moment and first conventional moment are both the mean).
Both the standard deviation and the mean absolute difference measure dispersion – how spread out are the values of a population or the probabilities of a distribution. The mean absolute difference is not defined in terms of a specific measure of central tendency, whereas the standard deviation is defined in terms of the deviation from the arithmetic mean. Because the standard deviation squares its differences, it tends to give more weight to larger differences and less weight to smaller differences compared to the mean absolute difference. When the arithmetic mean is finite, the mean absolute difference will also be finite, even when the standard deviation is infinite. See the examples for some specific comparisons.
The recently introduced distance standard deviation plays a similar role to the mean absolute difference, but the distance standard deviation works with centered distances. See also E-statistics.
For a random sample S from a random variable X, consisting of n values y_i, the statistic

MD(S) = (Σ_i Σ_j |y_i − y_j|) / (n(n − 1))

is a consistent and unbiased estimator of MD(X). The statistic

RMD(S) = (Σ_i Σ_j |y_i − y_j|) / ((n − 1) Σ_i y_i)

is a consistent estimator of RMD(X), but is not, in general, unbiased.
Confidence intervals for RMD(X) can be calculated using bootstrap sampling techniques.
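These estimators can be sketched in Python (a brute-force O(n²) illustration; the function names are chosen here):

```python
from itertools import combinations

def mean_abs_difference(sample):
    """Unbiased sample estimator of MD: the average of |y_i - y_j|
    over all ordered pairs with i != j."""
    n = len(sample)
    pair_sum = sum(abs(x - y) for x, y in combinations(sample, 2))
    return 2 * pair_sum / (n * (n - 1))   # each unordered pair counted twice

def relative_md(sample):
    """Consistent (but generally biased) estimator of RMD: sample MD
    divided by the sample mean."""
    return mean_abs_difference(sample) * len(sample) / sum(sample)

data = [1, 2, 4, 8]
print(mean_abs_difference(data))  # 23/6 ≈ 3.8333
print(relative_md(data))          # (23/6)/3.75 ≈ 1.0222
```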
There does not exist, in general, an unbiased estimator for RMD(X), in part because of the difficulty of finding an unbiased estimate for multiplying by the inverse of the mean. For example, suppose the sample is known to be taken from a random variable X(p) for an unknown p, where X(p) − 1 has the Bernoulli distribution, so that Pr(X(p) = 1) = 1 − p and Pr(X(p) = 2) = p. Then

RMD(X(p)) = 2p(1 − p) / (1 + p).
But the expected value of any estimator R(S) of RMD(X(p)) will be a polynomial in p, of the form:[citation needed]

E(R(S)) = Σ_{i=0}^{n} r_i p^i,

where the r_i are constants. So E(R(S)) can never equal RMD(X(p)) for all p between 0 and 1.
|
https://en.wikipedia.org/wiki/Mean_absolute_difference
|
Actuarial science is the discipline that applies mathematical and statistical methods to assess risk in insurance, pension, finance, investment and other industries and professions.
Actuaries are professionals trained in this discipline. In many countries, actuaries must demonstrate their competence by passing a series of rigorous professional examinations focused on fields such as probability and predictive analysis.
Actuarial science includes a number of interrelated subjects, including mathematics, probability theory, statistics, finance, economics, financial accounting and computer science. Historically, actuarial science used deterministic models in the construction of tables and premiums. The science has gone through revolutionary changes since the 1980s due to the proliferation of high-speed computers and the union of stochastic actuarial models with modern financial theory.[1]
Many universities have undergraduate and graduate degree programs in actuarial science. In 2010,[needs update] a study published by the job search website CareerCast ranked actuary as the #1 job in the United States.[2] The study used five key criteria to rank jobs: environment, income, employment outlook, physical demands, and stress. In 2024, U.S. News & World Report ranked actuary as the third-best job in the business sector and the eighth-best job in STEM.[3]
Actuarial science became a formal mathematical discipline in the late 17th century with the increased demand for long-term insurance coverage such as burial, life insurance, and annuities. These long-term coverages required that money be set aside to pay future benefits, such as annuity and death benefits, many years into the future. This requires estimating future contingent events, such as rates of mortality by age, as well as the development of mathematical techniques for discounting the value of funds set aside and invested. This led to the development of an important actuarial concept, referred to as the present value of a future sum. Certain aspects of the actuarial methods for discounting pension funds have come under criticism from modern financial economics.[citation needed]
Actuarial science is also applied to property, casualty, liability, and general insurance. In these forms of insurance, coverage is generally provided on a renewable period (such as yearly). Coverage can be cancelled at the end of the period by either party.[citation needed]
Property and casualty insurance companies tend to specialize because of the complexity and diversity of risks.[citation needed] One division is to organize around personal and commercial lines of insurance. Personal lines of insurance are for individuals and include fire, auto, homeowners, theft and umbrella coverages. Commercial lines address the insurance needs of businesses and include property, business continuation, product liability, fleet/commercial vehicle, workers compensation, fidelity and surety, and D&O insurance. The insurance industry also provides coverage for exposures such as catastrophe, weather-related risks, earthquakes, patent infringement and other forms of corporate espionage, terrorism, and "one-of-a-kind" risks (e.g., satellite launch). Actuarial science provides data collection, measurement, estimating, forecasting, and valuation tools to provide financial and underwriting data for management to assess marketing opportunities and the nature of the risks. Actuarial science often helps to assess the overall risk from catastrophic events in relation to the insurer's underwriting capacity or surplus.[citation needed]
In the reinsurance field, actuarial science can be used to design and price reinsurance and retrocession arrangements, and to establish reserve funds for known claims, future claims, and catastrophes.[citation needed]
There is an increasing trend to recognize that actuarial skills can be applied to a range of applications outside the traditional fields of insurance, pensions, etc. One notable example is the use in some US states of actuarial models to set criminal sentencing guidelines. These models attempt to predict the chance of re-offending according to rating factors which include the type of crime, age, educational background and ethnicity of the offender.[7]However, these models have been open to criticism as providing justification for discrimination against specific ethnic groups by law enforcement personnel. Whether this is statistically correct or a self-fulfilling correlation remains under debate.[8]
Another example is the use of actuarial models to assess the risk of sex offense recidivism. Actuarial models and associated tables, such as the MnSOST-R, Static-99, and SORAG, have been used since the late 1990s to determine the likelihood that a sex offender will re-offend and thus whether he or she should be institutionalized or set free.[9]
Traditional actuarial science and modern financial economics in the US have different practices, which is caused by different ways of calculating funding and investment strategies, and by different regulations.[citation needed]
Regulations derive from the Armstrong investigation of 1905, the Glass–Steagall Act of 1932, the adoption of the Mandatory Security Valuation Reserve by the National Association of Insurance Commissioners, which cushioned market fluctuations, and the Financial Accounting Standards Board (FASB) in the US and Canada, which regulates pension valuations and funding.[citation needed]
Historically, much of the foundation of actuarial theory predated modern financial theory. In the early twentieth century, actuaries were developing many techniques that can be found in modern financial theory, but for various historical reasons, these developments did not achieve much recognition.[10]
As a result, actuarial science developed along a different path, becoming more reliant on assumptions, as opposed to the arbitrage-free risk-neutral valuation concepts used in modern finance. The divergence is not related to the use of historical data and statistical projections of liability cash flows, but is instead caused by the manner in which traditional actuarial methods combine market data with those numbers. For example, one traditional actuarial method suggests that changing the asset allocation mix of investments can change the value of liabilities and assets (by changing the discount rate assumption). This concept is inconsistent with financial economics.[citation needed]
The potential of modern financial economics theory to complement existing actuarial science was recognized by actuaries in the mid-twentieth century.[11] In the late 1980s and early 1990s, there was a distinct effort by actuaries to incorporate financial theory and stochastic methods into their established models.[12] Ideas from financial economics became increasingly influential in actuarial thinking, and actuarial science has started to embrace more sophisticated mathematical modelling of finance.[13] Today, the profession, both in practice and in the educational syllabi of many actuarial organizations, is cognizant of the need to reflect the combined approach of tables, loss models, stochastic methods, and financial theory.[14] However, assumption-dependent concepts are still widely used (such as the setting of the discount rate assumption as mentioned earlier), particularly in North America.[citation needed]
Product design adds another dimension to the debate. Financial economists argue that pension benefits are bond-like and should not be funded with equity investments without reflecting the risks of not achieving expected returns. But some pension products do reflect the risks of unexpected returns. In some cases, the pension beneficiary assumes the risk, or the employer assumes the risk. The debate now seems to focus on four principles:
Essentially, financial economics holds that pension assets should not be invested in equities for a variety of theoretical and practical reasons.[15]
Elementary mutual aid agreements and pensions arose in antiquity.[16] Early in the Roman empire, associations were formed to meet the expenses of burial, cremation, and monuments, precursors to burial insurance and friendly societies. A small sum was paid into a communal fund on a weekly basis, and upon the death of a member, the fund would cover the expenses of rites and burial. These societies sometimes sold shares in the building of columbāria, or burial vaults, owned by the fund, the precursor to mutual insurance companies.[17] Other early examples of mutual surety and assurance pacts can be traced back to various forms of fellowship within the Saxon clans of England and their Germanic forebears, and to Celtic society.[18] However, many of these earlier forms of surety and aid would often fail due to lack of understanding and knowledge.[19]
The 17th century was a period of advances in mathematics in Germany, France and England. At the same time there was a rapidly growing desire and need to place the valuation of personal risk on a more scientific basis. Independently of each other, compound interest was studied and probability theory emerged as a well-understood mathematical discipline. Another important advance came in 1662 from a London draper, the father of demography, John Graunt, who showed that there were predictable patterns of longevity and death in a group, or cohort, of people of the same age, despite the uncertainty of the date of death of any one individual. This study became the basis for the original life table. One could now set up an insurance scheme to provide life insurance or pensions for a group of people, and calculate with some degree of accuracy how much each person in the group should contribute to a common fund assumed to earn a fixed rate of interest. The first person to demonstrate publicly how this could be done was Edmond Halley (of Halley's comet fame). Halley constructed his own life table, and showed how it could be used to calculate the premium amount someone of a given age should pay to purchase a life annuity.[20]
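Halley's pricing idea can be sketched numerically: the value of a life annuity is the sum, over future years, of each payment discounted both for interest and for the probability of surviving to receive it. The cohort figures and interest rate below are hypothetical illustrations, not Halley's Breslau data.

```python
def annuity_value(survivors, interest):
    """Expected present value of a life annuity paying 1 at the end of
    each year the annuitant is alive. survivors[t] is the number of the
    cohort still alive after t years (survivors[0] = alive now)."""
    v = 1 / (1 + interest)  # one-year discount factor
    return sum((n / survivors[0]) * v**t
               for t, n in enumerate(survivors) if t > 0)

# Hypothetical cohort of 1000, valued at 6% interest:
price = annuity_value([1000, 950, 890, 820, 740, 650], 0.06)
```

Each term weights the discount factor `v**t` by the fraction of the cohort expected to survive to year `t`, which is exactly the life-table reasoning Graunt's cohorts made possible.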
James Dodson's pioneering work on long-term insurance contracts under which the same premium is charged each year led to the formation of the Society for Equitable Assurances on Lives and Survivorship (now commonly known as Equitable Life) in London in 1762.[21] William Morgan is often considered the father of modern actuarial science for his work in the field in the 1780s and 90s. Many other life insurance companies and pension funds were created over the following 200 years. Equitable Life was the first to use the word "actuary" for its chief executive officer in 1762.[22] Previously, "actuary" meant an official who recorded the decisions, or "acts", of ecclesiastical courts.[19] Other companies that did not use such mathematical and scientific methods most often failed or were forced to adopt the methods pioneered by Equitable.[23]
In the 18th and 19th centuries, calculations were performed without computers. The computations of life insurance premiums and reserving requirements are rather complex, and actuaries developed techniques to make the calculations as easy as possible, for example "commutation functions" (essentially precalculated columns of summations over time of discounted values of survival and death probabilities).[24] Actuarial organizations were founded to support and further both actuaries and actuarial science, and to protect the public interest by promoting competency and ethical standards.[25] However, calculations remained cumbersome, and actuarial shortcuts were commonplace. Non-life actuaries followed in the footsteps of their life insurance colleagues during the 20th century. The 1920 revision of rates for the New York-based National Council on Workmen's Compensation Insurance took over two months of around-the-clock work by day and night teams of actuaries.[26] In the 1930s and 1940s, the mathematical foundations for stochastic processes were developed.[27] Actuaries could now begin to estimate losses using models of random events, instead of the deterministic methods they had used in the past. The introduction and development of the computer further revolutionized the actuarial profession. From pencil-and-paper to punchcards to current high-speed devices, the modeling and forecasting ability of the actuary has rapidly improved, and actuaries have had to adjust to this new world, while their results remain heavily dependent on the assumptions input into the models.[28]
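The commutation-function shortcut can be illustrated with a toy example. Using the standard definitions D_x = v^x · l_x (discounted survivors at age x) and N_x = sum of D_y for y ≥ x, a whole-life annuity-due value collapses to the single division N_x / D_x. The life table and interest rate below are made up for illustration.

```python
# Commutation functions: precomputed columns that turn annuity valuation
# into one division, the kind of shortcut used before computers.
v = 1 / 1.04                                    # discount factor at 4% interest
l = {95: 100, 96: 60, 97: 30, 98: 10, 99: 0}    # hypothetical survivors by age

D = {x: v**x * lx for x, lx in l.items()}                       # D_x = v^x * l_x
N = {x: sum(Dy for y, Dy in D.items() if y >= x) for x in D}    # N_x = sum_{y>=x} D_y

# Whole-life annuity-due factor at age 95 (PV of 1/yr at start of each year alive):
annuity_due_95 = N[95] / D[95]
```

Once the D and N columns were tabulated for a mortality table and interest rate, every age's annuity factor came from a single lookup and division.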
|
https://en.wikipedia.org/wiki/Actuarial_science
|
In mathematics, specifically set theory, the continuum hypothesis (abbreviated CH) is a hypothesis about the possible sizes of infinite sets. It states:
There is no set whose cardinality is strictly between that of the integers and the real numbers.
Or equivalently:
Any subset of the real numbers is either finite, or countably infinite, or has the cardinality of the real numbers.
In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: $2^{\aleph_0} = \aleph_1$, or even shorter with beth numbers: $\beth_1 = \aleph_1$.
The continuum hypothesis was advanced by Georg Cantor in 1878,[1] and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940.[2]
The name of the hypothesis comes from the term "the continuum" for the real numbers.
Cantor believed the continuum hypothesis to be true and for many years tried in vain to prove it.[3] It became the first on David Hilbert's list of important open questions that was presented at the International Congress of Mathematicians in the year 1900 in Paris. Axiomatic set theory was at that point not yet formulated. Kurt Gödel proved in 1940 that the negation of the continuum hypothesis, i.e., the existence of a set with intermediate cardinality, could not be proved in standard set theory.[2] The second half of the independence of the continuum hypothesis, i.e., unprovability of the nonexistence of an intermediate-sized set, was proved in 1963 by Paul Cohen.[4]
Two sets are said to have the same cardinality or cardinal number if there exists a bijection (a one-to-one correspondence) between them. Intuitively, for two sets $S$ and $T$ to have the same cardinality means that it is possible to "pair off" elements of $S$ with elements of $T$ in such a fashion that every element of $S$ is paired off with exactly one element of $T$ and vice versa. Hence, the set {banana, apple, pear} has the same cardinality as {yellow, red, green} despite the sets themselves containing different elements.
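The "pairing off" idea can be made concrete for finite sets: any total, one-to-one, onto pairing witnesses equal cardinality, regardless of what the elements are. A minimal sketch:

```python
# Equal cardinality means a bijection exists: every element of one set
# pairs with exactly one element of the other, and vice versa.
fruits = ["banana", "apple", "pear"]
colors = ["yellow", "red", "green"]

pairing = dict(zip(fruits, colors))  # one possible pairing

# Check the pairing is a bijection: total, injective, and onto.
assert len(pairing) == len(fruits)            # every fruit is paired
assert set(pairing.values()) == set(colors)   # every color used exactly once
```

For infinite sets the same definition applies, but the pairing must be given by a rule rather than an exhaustive list.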
With infinite sets such as the set of integers or rational numbers, the existence of a bijection between two sets becomes more difficult to demonstrate. The rational numbers $\mathbb{Q}$ seemingly form a counterexample to the continuum hypothesis: the integers form a proper subset of the rationals, which themselves form a proper subset of the reals, so intuitively, there are more rational numbers than integers and more real numbers than rational numbers. However, this intuitive analysis is flawed since it does not take into account the fact that all three sets are infinite. Perhaps more importantly, it in fact conflates the concept of "size" of the set $\mathbb{Q}$ with the order or topological structure placed on it. In fact, it turns out the rational numbers can actually be placed in one-to-one correspondence with the integers, and therefore the set of rational numbers is the same size (cardinality) as the set of integers: they are both countable sets.[5]
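The correspondence between the naturals and the positive rationals can even be computed explicitly. The sketch below uses the Calkin–Wilf sequence (one standard enumeration among several), which lists every positive rational exactly once:

```python
from fractions import Fraction

def rationals():
    """Enumerate all positive rationals without repetition via the
    Calkin-Wilf recurrence q -> 1 / (2*floor(q) - q + 1); each positive
    rational appears exactly once, giving a bijection with the naturals."""
    q = Fraction(1, 1)
    while True:
        yield q
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)

gen = rationals()
first = [next(gen) for _ in range(5)]
# first five terms: 1, 1/2, 2, 1/3, 3/2
```

Pairing the n-th natural number with the n-th term of this sequence is exactly the kind of bijection the text describes.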
Cantor gave two proofs that the cardinality of the set of integers is strictly smaller than that of the set of real numbers (see Cantor's first uncountability proof and Cantor's diagonal argument). His proofs, however, give no indication of the extent to which the cardinality of the integers is less than that of the real numbers. Cantor proposed the continuum hypothesis as a possible solution to this question.
In simple terms, the continuum hypothesis (CH) states that the set of real numbers has the minimal possible cardinality which is greater than the cardinality of the set of integers. That is, every set $S \subseteq \mathbb{R}$ of real numbers can either be mapped one-to-one into the integers or the real numbers can be mapped one-to-one into $S$. Since the real numbers are equinumerous with the power set of the integers, i.e. $|\mathbb{R}| = 2^{\aleph_0}$, CH can be restated as follows:
Continuum Hypothesis: $\nexists S\colon \aleph_0 < |S| < 2^{\aleph_0}$.
Assuming the axiom of choice, there is a unique smallest cardinal number $\aleph_1$ greater than $\aleph_0$, and the continuum hypothesis is in turn equivalent to the equality $2^{\aleph_0} = \aleph_1$.[6][7]
The independence of the continuum hypothesis (CH) from Zermelo–Fraenkel set theory (ZF) follows from combined work of Kurt Gödel and Paul Cohen.
Gödel[8][2] showed that CH cannot be disproved from ZF, even if the axiom of choice (AC) is adopted, i.e. from ZFC. Gödel's proof shows that both CH and AC hold in the constructible universe $L$, an inner model of ZF set theory, assuming only the axioms of ZF. The existence of an inner model of ZF in which additional axioms hold shows that the additional axioms are (relatively) consistent with ZF, provided ZF itself is consistent. The latter condition cannot be proved in ZF itself, due to Gödel's incompleteness theorems, but is widely believed to be true and can be proved in stronger set theories.
Cohen[4][9] showed that CH cannot be proven from the ZFC axioms, completing the overall independence proof. To prove his result, Cohen developed the method of forcing, which has become a standard tool in set theory. Essentially, this method begins with a model of ZF in which CH holds and constructs another model which contains more sets than the original, in such a way that CH does not hold in the new model. Cohen was awarded the Fields Medal in 1966 for his proof.
Cohen's independence proof shows that CH is independent of ZFC. Further research has shown that CH is independent of all known large cardinal axioms in the context of ZFC.[10] Moreover, it has been shown that the cardinality of the continuum $\mathfrak{c} = 2^{\aleph_0}$ can be any cardinal consistent with Kőnig's theorem. A result of Solovay, proved shortly after Cohen's result on the independence of the continuum hypothesis, shows that in any model of ZFC, if $\kappa$ is a cardinal of uncountable cofinality, then there is a forcing extension in which $2^{\aleph_0} = \kappa$. However, per Kőnig's theorem, it is not consistent to assume $2^{\aleph_0}$ is $\aleph_\omega$ or $\aleph_{\omega_1+\omega}$ or any cardinal with cofinality $\omega$.
The continuum hypothesis is closely related to many statements in analysis, point-set topology and measure theory. As a result of its independence, many substantial conjectures in those fields have subsequently been shown to be independent as well.
The independence from ZFC means that proving or disproving the CH within ZFC is impossible. However, Gödel and Cohen's negative results are not universally accepted as disposing of all interest in the continuum hypothesis. The continuum hypothesis remains an active topic of research: see Woodin[11][12] and Koellner[13] for an overview of the current research status.
The continuum hypothesis and the axiom of choice were among the first genuinely mathematical statements shown to be independent of ZF set theory. The existence of some statements independent of ZFC had, however, already been known more than two decades prior: for example, assuming good soundness properties and the consistency of ZFC, Gödel's incompleteness theorems, published in 1931, establish that there is a formal statement Con(ZFC) (one for each appropriate Gödel numbering scheme) expressing the consistency of ZFC, that is also independent of it. The latter independence result indeed holds for many theories.
Gödel believed that CH is false, and that his proof that CH is consistent with ZFC only shows that the Zermelo–Fraenkel axioms do not adequately characterize the universe of sets. Gödel was a Platonist and therefore had no problems with asserting the truth and falsehood of statements independent of their provability. Cohen, though a formalist,[14] also tended towards rejecting CH.
Historically, mathematicians who favored a "rich" and "large" universe of sets were against CH, while those favoring a "neat" and "controllable" universe favored CH. Parallel arguments were made for and against the axiom of constructibility, which implies CH. More recently, Matthew Foreman has pointed out that ontological maximalism can actually be used to argue in favor of CH, because among models that have the same reals, models with "more" sets of reals have a better chance of satisfying CH.[15]
Another viewpoint is that the conception of set is not specific enough to determine whether CH is true or false. This viewpoint was advanced as early as 1923 by Skolem, even before Gödel's first incompleteness theorem. Skolem argued on the basis of what is now known as Skolem's paradox, and it was later supported by the independence of CH from the axioms of ZFC, since these axioms are enough to establish the elementary properties of sets and cardinalities. In order to argue against this viewpoint, it would be sufficient to demonstrate new axioms that are supported by intuition and resolve CH in one direction or another. Although the axiom of constructibility does resolve CH, it is not generally considered to be intuitively true any more than CH is generally considered to be false.[16]
At least two other axioms have been proposed that have implications for the continuum hypothesis, although these axioms have not currently found wide acceptance in the mathematical community. In 1986, Chris Freiling[17] presented an argument against CH by showing that the negation of CH is equivalent to Freiling's axiom of symmetry, a statement derived by arguing from particular intuitions about probabilities. Freiling believes this axiom is "intuitively clear"[17] but others have disagreed.[18][19]
A difficult argument against CH developed by W. Hugh Woodin has attracted considerable attention since the year 2000.[11][12] Foreman does not reject Woodin's argument outright but urges caution.[20] Woodin proposed a new hypothesis that he labeled the "(*)-axiom", or "Star axiom". The Star axiom would imply that $2^{\aleph_0}$ is $\aleph_2$, thus falsifying CH. The Star axiom was bolstered by an independent May 2021 proof showing the Star axiom can be derived from a variation of Martin's maximum. However, Woodin stated in the 2010s that he now instead believes CH to be true, based on his belief in his new "ultimate L" conjecture.[21][22]
Solomon Feferman argued that CH is not a definite mathematical problem.[23] He proposed a theory of "definiteness" using a semi-intuitionistic subsystem of ZF that accepts classical logic for bounded quantifiers but uses intuitionistic logic for unbounded ones, and suggested that a proposition $\phi$ is mathematically "definite" if the semi-intuitionistic theory can prove $(\phi \lor \neg\phi)$. He conjectured that CH is not definite according to this notion, and proposed that CH should, therefore, be considered not to have a truth value. Peter Koellner wrote a critical commentary on Feferman's article.[24]
Joel David Hamkins proposes a multiverse approach to set theory and argues that "the continuum hypothesis is settled on the multiverse view by our extensive knowledge about how it behaves in the multiverse, and, as a result, it can no longer be settled in the manner formerly hoped for".[25] In a related vein, Saharon Shelah wrote that he does "not agree with the pure Platonic view that the interesting problems in set theory can be decided, that we just have to discover the additional axiom. My mental picture is that we have many possible set theories, all conforming to ZFC".[26]
The generalized continuum hypothesis (GCH) states that if an infinite set's cardinality lies between that of an infinite set $S$ and that of the power set $\mathcal{P}(S)$ of $S$, then it has the same cardinality as either $S$ or $\mathcal{P}(S)$. That is, for any infinite cardinal $\lambda$ there is no cardinal $\kappa$ such that $\lambda < \kappa < 2^\lambda$. GCH is equivalent to: $\aleph_{\alpha+1} = 2^{\aleph_\alpha}$ for every ordinal $\alpha$
(occasionally called Cantor's aleph hypothesis).
The beth numbers provide an alternative notation for this condition: $\aleph_\alpha = \beth_\alpha$ for every ordinal $\alpha$. The continuum hypothesis is the special case for the ordinal $\alpha = 1$. GCH was first suggested by Philip Jourdain.[27] For the early history of GCH, see Moore.[28]
Like CH, GCH is also independent of ZFC, but Sierpiński proved that ZF + GCH implies the axiom of choice (AC) (and therefore the negation of the axiom of determinacy, AD), so choice and GCH are not independent in ZF; there are no models of ZF in which GCH holds and AC fails. To prove this, Sierpiński showed GCH implies that every cardinality $n$ is smaller than some aleph number, and thus can be ordered. This is done by showing that $n$ is smaller than $2^{\aleph_0 + n}$, which is smaller than its own Hartogs number; this uses the equality $2^{\aleph_0 + n} = 2 \cdot 2^{\aleph_0 + n}$. For the full proof, see Gillman.[29]
Kurt Gödel showed that GCH is a consequence of ZF + V=L (the axiom that every set is constructible relative to the ordinals), and is therefore consistent with ZFC. As GCH implies CH, Cohen's model in which CH fails is a model in which GCH fails, and thus GCH is not provable from ZFC. W. B. Easton used the method of forcing developed by Cohen to prove Easton's theorem, which shows it is consistent with ZFC for arbitrarily large cardinals $\aleph_\alpha$ to fail to satisfy $2^{\aleph_\alpha} = \aleph_{\alpha+1}$. Much later, Foreman and Woodin proved that (assuming the consistency of very large cardinals) it is consistent that $2^\kappa > \kappa^+$ holds for every infinite cardinal $\kappa$. Later Woodin extended this by showing the consistency of $2^\kappa = \kappa^{++}$ for every $\kappa$. Carmi Merimovich[30] showed that, for each $n \geq 1$, it is consistent with ZFC that for each infinite cardinal $\kappa$, $2^\kappa$ is the $n$th successor of $\kappa$ (assuming the consistency of some large cardinal axioms). On the other hand, László Patai[31] proved that if $\gamma$ is an ordinal and for each infinite cardinal $\kappa$, $2^\kappa$ is the $\gamma$th successor of $\kappa$, then $\gamma$ is finite.
For any infinite sets $A$ and $B$, if there is an injection from $A$ to $B$ then there is an injection from subsets of $A$ to subsets of $B$. Thus for any infinite cardinals $A$ and $B$, $A < B \to 2^A \leq 2^B$. If $A$ and $B$ are finite, the stronger inequality $A < B \to 2^A < 2^B$ holds. GCH implies that this strict, stronger inequality holds for infinite cardinals as well as finite cardinals.
Although the generalized continuum hypothesis refers directly only to cardinal exponentiation with 2 as the base, one can deduce from it the values of cardinal exponentiation $\aleph_\alpha^{\aleph_\beta}$ in all cases. GCH implies that for ordinals $\alpha$ and $\beta$:[32]
$\aleph_\alpha^{\aleph_\beta} = \aleph_{\beta+1}$ when $\alpha \leq \beta+1$;
$\aleph_\alpha^{\aleph_\beta} = \aleph_\alpha$ when $\beta+1 < \alpha$ and $\aleph_\beta < \operatorname{cf}(\aleph_\alpha)$;
$\aleph_\alpha^{\aleph_\beta} = \aleph_{\alpha+1}$ when $\beta+1 < \alpha$ and $\aleph_\beta \geq \operatorname{cf}(\aleph_\alpha)$.
The first equality (when $\alpha \leq \beta+1$) follows from:
$\aleph_\alpha^{\aleph_\beta} \leq \aleph_{\beta+1}^{\aleph_\beta} = (2^{\aleph_\beta})^{\aleph_\beta} = 2^{\aleph_\beta \cdot \aleph_\beta} = 2^{\aleph_\beta} = \aleph_{\beta+1}$, while:
$\aleph_{\beta+1} = 2^{\aleph_\beta} \leq \aleph_\alpha^{\aleph_\beta}$.
The third equality (when $\beta+1 < \alpha$ and $\aleph_\beta \geq \operatorname{cf}(\aleph_\alpha)$) follows from:
$\aleph_\alpha^{\aleph_\beta} \geq \aleph_\alpha^{\operatorname{cf}(\aleph_\alpha)} > \aleph_\alpha$
by Kőnig's theorem, while:
$\aleph_\alpha^{\aleph_\beta} \leq \aleph_\alpha^{\aleph_\alpha} \leq (2^{\aleph_\alpha})^{\aleph_\alpha} = 2^{\aleph_\alpha \cdot \aleph_\alpha} = 2^{\aleph_\alpha} = \aleph_{\alpha+1}$
|
https://en.wikipedia.org/wiki/Continuum_hypothesis
|
In mathematics, the first uncountable ordinal, traditionally denoted by $\omega_1$ or sometimes by $\Omega$, is the smallest ordinal number that, considered as a set, is uncountable. It is the supremum (least upper bound) of all countable ordinals. When considered as a set, the elements of $\omega_1$ are the countable ordinals (including finite ordinals),[1] of which there are uncountably many.
Like any ordinal number (in von Neumann's approach), $\omega_1$ is a well-ordered set, with set membership serving as the order relation. $\omega_1$ is a limit ordinal, i.e. there is no ordinal $\alpha$ such that $\omega_1 = \alpha + 1$.
The cardinality of the set $\omega_1$ is the first uncountable cardinal number, $\aleph_1$ (aleph-one). The ordinal $\omega_1$ is thus the initial ordinal of $\aleph_1$. Under the continuum hypothesis, the cardinality of $\omega_1$ is $\beth_1$, the same as that of $\mathbb{R}$, the set of real numbers.[2]
In most constructions, $\omega_1$ and $\aleph_1$ are considered equal as sets. To generalize: if $\alpha$ is an arbitrary ordinal, we define $\omega_\alpha$ as the initial ordinal of the cardinal $\aleph_\alpha$.
The existence of $\omega_1$ can be proven without the axiom of choice. For more, see Hartogs number.
Any ordinal number can be turned into a topological space by using the order topology. When viewed as a topological space, $\omega_1$ is often written as $[0,\omega_1)$, to emphasize that it is the space consisting of all ordinals smaller than $\omega_1$.
If the axiom of countable choice holds, every increasing $\omega$-sequence of elements of $[0,\omega_1)$ converges to a limit in $[0,\omega_1)$. The reason is that the union (i.e., supremum) of every countable set of countable ordinals is another countable ordinal.
The topological space $[0,\omega_1)$ is sequentially compact, but not compact. As a consequence, it is not metrizable. It is, however, countably compact and thus not Lindelöf (a countably compact space is compact if and only if it is Lindelöf). In terms of axioms of countability, $[0,\omega_1)$ is first-countable, but neither separable nor second-countable.
The space $[0,\omega_1] = \omega_1 + 1$ is compact and not first-countable. $\omega_1$ is used to define the long line and the Tychonoff plank, two important counterexamples in topology.
|
https://en.wikipedia.org/wiki/First_uncountable_ordinal
|
In computational complexity theory, L/poly is the complexity class of logarithmic space machines with a polynomial amount of advice. L/poly is a non-uniform logarithmic space class, analogous to the non-uniform polynomial time class P/poly.[1]
Formally, for a formal language L to belong to L/poly, there must exist an advice function f that maps an integer n to a string of length polynomial in n, and a Turing machine M with two read-only input tapes and one read-write tape of size logarithmic in the input size, such that an input x of length n belongs to L if and only if machine M accepts the input pair (x, f(n)).[2] Alternatively and more simply, L is in L/poly if and only if it can be recognized by branching programs of polynomial size.[3] One direction of the proof that these two models of computation are equivalent in power is the observation that, if a branching program of polynomial size exists, it can be specified by the advice function and simulated by the Turing machine. In the other direction, a Turing machine with logarithmic writable space and a polynomial advice tape may be simulated by a branching program whose states represent the combination of the configuration of the writable tape and the positions of the Turing machine heads on the other two tapes.
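The branching-program model is easy to make concrete. The toy evaluator below is a hypothetical illustration (not from the cited sources): a program is a table of nodes, each recording which input bit it queries and which node to go to on 0 or 1, ending in accept/reject terminals. Evaluation needs only a single pointer into the table, which is the logarithmic-workspace intuition behind the equivalence.

```python
# A branching program node is (variable index, next node on 0, next node on 1);
# terminals are True (accept) / False (reject). Evaluating needs only one
# index into the table: O(log |program|) workspace, as in L/poly.
def eval_bp(program, x):
    node = 0
    while isinstance(program[node], tuple):
        var, if0, if1 = program[node]
        node = if1 if x[var] else if0
    return program[node]

# Width-2 program deciding the parity of 3 input bits:
parity3 = [
    (0, 1, 2),   # node 0 queries x[0]
    (1, 3, 4),   # parity so far = 0, queries x[1]
    (1, 4, 3),   # parity so far = 1, queries x[1]
    (2, 5, 6),   # parity so far = 0, queries x[2]
    (2, 6, 5),   # parity so far = 1, queries x[2]
    False,       # terminal: even parity -> reject
    True,        # terminal: odd parity  -> accept
]
assert eval_bp(parity3, [1, 0, 1]) is False  # parity of 2 bits set: even
assert eval_bp(parity3, [1, 1, 1]) is True   # parity of 3 bits set: odd
```

Here the n-th advice string would simply be (an encoding of) the program table for input length n.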
In 1979, Aleliunas et al. showed that symmetric logspace is contained in L/poly.[4] However, this result was superseded by Omer Reingold's result that SL collapses to uniform logspace.[5]
BPL is contained in L/poly; this containment is a variant of Adleman's theorem.[6]
|
https://en.wikipedia.org/wiki/L/poly
|
Yan Tan Tethera or yan-tan-tethera is a sheep-counting system traditionally used by shepherds in Yorkshire, Northern England and some other parts of Britain.[1] The words may be derived from numbers in Brythonic Celtic languages such as Cumbric, which had died out in most of Northern England by the sixth century, but they were commonly used for sheep counting and counting stitches in knitting until the Industrial Revolution, especially in the fells of the Lake District. Though most of these number systems fell out of use by the turn of the 20th century, some are still in use.
Sheep-counting systems ultimately derive from Brythonic Celtic languages, such as Cumbric; Tim Gay writes: "[Sheep-counting systems from all over the British Isles] all compared very closely to 18th-century Cornish and modern Welsh".[2] It is impossible, given the corrupted form in which they have survived, to be sure of their exact origin. The counting systems have changed considerably over time. A particularly common tendency is for certain pairs of adjacent numbers to come to resemble each other by rhyme (notably the words for 1 and 2, 3 and 4, 6 and 7, or 8 and 9). Still, multiples of five tend to be fairly conservative; compare bumfit with Welsh pymtheg, in contrast with standard English fifteen.
Like most Celtic numbering systems, they tend to be vigesimal (based on the number twenty), but they usually lack words to describe quantities larger than twenty; this is not a limitation of either modernised decimal Celtic counting systems or the older ones. To count a large number of sheep, a shepherd would repeatedly count to twenty, placing a mark on the ground, moving a hand to another mark on a shepherd's crook, or dropping a pebble into a pocket to represent each score (e.g. 5 score sheep = 100 sheep).
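The pebble-and-score tally amounts to simple base-20 bookkeeping; as a trivial sketch:

```python
# Shepherd's tally: count to twenty ("a score"), drop a pebble, start again.
# The flock size is 20 per pebble plus whatever the final partial count reached.
def flock_size(pebbles, partial_count):
    return 20 * pebbles + partial_count

assert flock_size(5, 0) == 100   # "5 score sheep = 100 sheep"
assert flock_size(3, 17) == 77   # three full scores plus seventeen
```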
Their use is also attested in a "knitting song" known to be sung around the middle of the nineteenth century in Wensleydale, Yorkshire, beginning "yahn, tayhn, tether, mether, mimph".[3]
The counting system has been used for products sold within Northern England and Yorkshire, such as prints,[4] beers,[5] alcoholic sparkling water (hard seltzer in the U.S.),[6] and yarns,[7] as well as in artistic works referencing the region, such as Harrison Birtwistle's 1986 opera Yan Tan Tethera.
Jake Thackray's song "Old Molly Metcalfe"[8] from his 1972 album Bantam Cock uses the Swaledale "Yan Tan Tether Mether Pip" as a repeating lyrical theme.
Garth Nix used the counting system to name the seven Grotesques in his novel Grim Tuesday.[9]
The word yan or yen for 'one' in Cumbrian, Northumbrian, and some Yorkshire dialects generally represents a regular development in Northern English in which the Old English long vowel /ɑː/ <ā> was broken into /ie/, /ia/ and so on. This explains the shift to yan and ane from the Old English ān, which is itself derived from the Proto-Germanic *ainaz.[10][11] Another example of this development is the Northern English word for 'home', hame, which has forms such as hyem, yem and yam, all deriving from the Old English hām.[12]
|
https://en.wikipedia.org/wiki/Yan_tan_tethera
|
A bilingual pun is a pun created by a word or phrase in one language sounding similar to a different word or phrase in another language. The result of a bilingual pun can be a joke that makes sense in more than one language (a joke that can be translated) or a joke which requires understanding of both languages (a joke specifically for those that are bilingual). A bilingual pun can be made with a word from another language that has the same meaning, or an opposite meaning.
A bilingual pun involves a word from one language that has the same or a similar sound to a word in another language. The word is often homophonic, whether on purpose or by accident.[1] Another feature of the bilingual pun is that the person does not always need to be able to speak both languages in order to understand the pun. The bilingual pun can also demonstrate common ground with a person who speaks another language.[2]
There are what appear to be Biblical bilingual puns. In Exodus 10:10, Moses is warned by the Egyptian Pharaoh that evil awaits him. In Hebrew the word "ra" (רע) means evil, but in Egyptian "Ra" is the sun god. So when Moses was warned, the word "ra" can mean that the sun god stands in the way, or that evil stands in the way.[3]
Unintentional bilingual puns occur in translations of one of Shakespeare's plays, Henry V. The line spoken by Katherine, "I cannot speak your England", becomes political in French.[4][specify]
The famous paper "Fun with F1" is a French-English pun, as "1" is "un" in French.
Wario's name is a portmanteau of the name Mario and the Japanese word warui (悪い), meaning "bad", reflecting how he is a bad version of Mario. But in English, "Wario" can be seen as either a portmanteau of the name Mario and "war", or as a flip of the M in "Mario".
|
https://en.wikipedia.org/wiki/Bilingual_pun
|
Significant figures, also referred to as significant digits, are specific digits within a number written in positional notation that carry both reliability and necessity in conveying a particular quantity. When presenting the outcome of a measurement (such as length, pressure, volume, or mass), if the number of digits exceeds what the measurement instrument can resolve, only the digits determined by the resolution are dependable and therefore considered significant.
For instance, if a length measurement yields 114.8 mm, using a ruler with the smallest interval between marks at 1 mm, the first three digits (1, 1, and 4, representing 114 mm) are certain and constitute significant figures. Further, digits that are uncertain yet meaningful are also included in the significant figures. In this example, the last digit (8, contributing 0.8 mm) is likewise considered significant despite its uncertainty.[1]Therefore, this measurement contains four significant figures.
Another example involves a volume measurement of 2.98 L with an uncertainty of ± 0.05 L. The actual volume falls between 2.93 L and 3.03 L. Even if certain digits are not completely known, they are still significant if they are meaningful, as they indicate the actual volume within an acceptable range of uncertainty. In this case, the actual volume might be 2.94 L or possibly 3.02 L, so all three digits are considered significant.[1]Thus, there are three significant figures in this example.
The following types of digits are not considered significant:[2]
A zero after a decimal (e.g., 1.0) is significant, and care should be used when appending such a decimal of zero. Thus, in the case of 1.0, there are two significant figures, whereas 1 (without a decimal) has one significant figure.
Among a number's significant digits, the most significant digit is the one with the greatest exponent value (the leftmost significant digit/figure), while the least significant digit is the one with the lowest exponent value (the rightmost significant digit/figure). For example, in the number "123" the "1" is the most significant digit, representing hundreds (10^2), while the "3" is the least significant digit, representing ones (10^0).
To avoid conveying a misleading level of precision, numbers are often rounded. For instance, it would create false precision to present a measurement as 12.34525 kg when the measuring instrument only provides accuracy to the nearest gram (0.001 kg). In this case, the significant figures are the first five digits (1, 2, 3, 4, and 5) from the leftmost digit, and the number should be rounded to these significant figures, resulting in 12.345 kg as the accurate value. The rounding error (in this example, 0.00025 kg = 0.25 g) approximates the numerical resolution or precision. Numbers can also be rounded for simplicity, not necessarily to indicate measurement precision, such as for the sake of expediency in news broadcasts.
Significance arithmetic encompasses a set of approximate rules for preserving significance through calculations. More advanced scientific rules are known as the propagation of uncertainty.
Radix 10 (base-10, decimal numbers) is assumed in the following. (See Unit in the last place for extending these concepts to other bases.)
Identifying the significant figures in a number requires knowing which digits are meaningful, which requires knowing the resolution with which the number is measured, obtained, or processed. For example, if the measurable smallest mass is 0.001 g, then in a measurement given as 0.00234 g the "4" is not useful and should be discarded, while the "3" is useful and should often be retained.[3]
The significance of trailing zeros in a number not containing a decimal point can be ambiguous. For example, it may not always be clear if the number 1300 is precise to the nearest unit (just happens coincidentally to be an exact multiple of a hundred) or if it is only shown to the nearest hundreds due to rounding or uncertainty. Many conventions exist to address this issue. However, these are not universally used and would only be effective if the reader is familiar with the convention:
As the conventions above are not in general use, the following more widely recognized options are available for indicating the significance of a number with trailing zeros:
Rounding to significant figures is a more general-purpose technique than rounding to n digits, since it handles numbers of different scales in a uniform way. For example, the population of a city might only be known to the nearest thousand and be stated as 52,000, while the population of a country might only be known to the nearest million and be stated as 52,000,000. The former might be in error by hundreds, and the latter might be in error by hundreds of thousands, but both have two significant figures (5 and 2). This reflects the fact that the significance of the error is the same in both cases, relative to the size of the quantity being measured.
To round a number tonsignificant figures:[8][9]
In financial calculations, a number is often rounded to a given number of places, for example to two places after the decimal separator for many world currencies. This is done because greater precision is immaterial, and usually it is not possible to settle a debt of less than the smallest currency unit.
In UK personal tax returns, income is rounded down to the nearest pound, whilst tax paid is calculated to the nearest penny.
As an illustration, the decimal quantity 12.345 can be expressed with various numbers of significant figures or decimal places. If insufficient precision is available, then the number is rounded in some manner to fit the available precision. The following table shows the results for various total precision under two rounding methods (N/A stands for Not Applicable).
Another example for 0.012345. (Remember that the leading zeros are not significant.)
The representation of a non-zero number x to a precision of p significant digits has a numerical value that is given by the formula:[citation needed]
{\displaystyle 10^{n}\cdot \operatorname {round} \left({\frac {x}{10^{n}}}\right)}
where
{\displaystyle n=\lfloor \log _{10}(|x|)\rfloor +1-p}
which may need to be written with a specific marking as detailedaboveto specify the number of significant trailing zeros.
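As a sketch of this formula, a small Python helper (the function name is my own, not from the article) rounds a value to p significant figures:

```python
import math

def round_to_sig_figs(x: float, p: int) -> float:
    """Round non-zero x to p significant figures via
    10**n * round(x / 10**n), with n = floor(log10(|x|)) + 1 - p."""
    if x == 0:
        return 0.0
    n = math.floor(math.log10(abs(x))) + 1 - p
    return 10.0 ** n * round(x / 10.0 ** n)

print(round_to_sig_figs(12.345, 3))    # 12.3
print(round_to_sig_figs(0.012345, 2))  # 0.012
print(round_to_sig_figs(52000, 2))     # 52000.0
```

Note that Python's round uses round-half-to-even rather than round-half-up, and that a float result such as 52000.0 cannot by itself convey how many of its trailing zeros are significant, which is exactly the marking problem discussed above.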
It is recommended for a measurement result to include the measurement uncertainty, written as {\displaystyle x_{\text{best}}\pm \sigma _{x}}, where x_best and σ_x are the best estimate and uncertainty in the measurement respectively.[10] x_best can be the average of measured values and σ_x can be the standard deviation or a multiple of the measurement deviation. The rules to write {\displaystyle x_{\text{best}}\pm \sigma _{x}} are:[11]
Uncertainty may be implied by the last significant figure if it is not explicitly expressed.[1] The implied uncertainty is ± half of the minimum scale at the last significant figure position. For example, if the mass of an object is reported as 3.78 kg without mentioning uncertainty, then a measurement uncertainty of ± 0.005 kg may be implied. If the mass of an object is estimated as 3.78 ± 0.07 kg, then the actual mass is probably somewhere in the range 3.71 to 3.85 kg; if it is desired to report it with a single number, then 3.8 kg is the best number to report, since its implied uncertainty of ± 0.05 kg gives a mass range of 3.75 to 3.85 kg, which is close to the measurement range. If the uncertainty is a bit larger, i.e. 3.78 ± 0.09 kg, then 3.8 kg is still the best single number to quote, since if "4 kg" were reported then a lot of information would be lost.
If there is a need to write the implied uncertainty of a number, then it can be written as {\displaystyle x\pm \sigma _{x}}, stated explicitly as the implied uncertainty (to prevent readers from mistaking it for the measurement uncertainty), where x and σ_x are the number with an extra zero digit (to follow the rules for writing uncertainty above) and its implied uncertainty respectively. For example, 6 kg with the implied uncertainty ± 0.5 kg can be stated as 6.0 ± 0.5 kg.
As there are rules to determine the significant figures in directly measured quantities, there are also guidelines (not rules) to determine the significant figures in quantities calculated from these measured quantities.
Significant figures in measured quantities are most important in the determination of significant figures in quantities calculated from them. A mathematical or physical constant (e.g., π in the formula for the area of a circle with radius r as πr²) has no effect on the determination of the significant figures in the result of a calculation with it if its known digits are equal to or more than the significant figures in the measured quantities used in the calculation. An exact number such as 1/2 in the formula for the kinetic energy of a mass m with velocity v as 1/2mv² has no bearing on the significant figures in the calculated kinetic energy, since its number of significant figures is infinite (0.500000...).
The guidelines described below are intended to avoid a calculation result more precise than the measured quantities, but they do not ensure that the resulting implied uncertainty is close enough to the measured uncertainties. This problem can be seen in unit conversion. If the guidelines give an implied uncertainty too far from the measured ones, then it may be necessary to choose significant digits that give comparable uncertainty.
For quantities created from measured quantities via multiplication and division, the calculated result should have as many significant figures as the least number of significant figures among the measured quantities used in the calculation.[12] For example,
with one, two, and one significant figures respectively. (2 here is assumed not to be an exact number.) For the first example, the first multiplication factor has four significant figures and the second has one significant figure. The factor with the fewest significant figures is the second one with only one, so the final calculated result should also have one significant figure.
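The multiplication guideline can be sketched in Python (a simplified illustration of my own: it counts every written digit after leading zeros as significant, so it ignores the trailing-zero ambiguity discussed above):

```python
import math

def sig_figs(numeral: str) -> int:
    """Count significant figures in a decimal numeral string:
    drop the sign, the decimal point, and leading zeros."""
    return len(numeral.lstrip("-").replace(".", "").lstrip("0"))

def multiply_measured(a: str, b: str) -> float:
    """Round the product to the least significant-figure count of the factors."""
    p = min(sig_figs(a), sig_figs(b))
    x = float(a) * float(b)
    n = math.floor(math.log10(abs(x))) + 1 - p
    return 10.0 ** n * round(x / 10.0 ** n)

print(sig_figs("0.012345"))             # 5
print(multiply_measured("1.234", "2"))  # 1.234 x 2 = 2.468 -> 2.0 (one sig fig)
```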
For unit conversion, the implied uncertainty of the result can be unsatisfactorily higher than that in the previous unit if this rounding guideline is followed. For example, 8 inches has an implied uncertainty of ± 0.5 inch = ± 1.27 cm. If it is converted to the centimeter scale and the rounding guideline for multiplication and division is followed, then 20.32 cm ≈ 20 cm with an implied uncertainty of ± 5 cm. If this implied uncertainty is considered too large an overestimate, then more appropriate significant digits in the unit conversion result may be 20.32 cm ≈ 20. cm with an implied uncertainty of ± 0.5 cm.
Another exception to the above rounding guideline is multiplying a number by an integer, such as 1.234 × 9. If the above guideline is followed, then the result is rounded as 1.234 × 9.000... = 11.106 ≈ 11.11. However, this multiplication is essentially adding 1.234 to itself 9 times, i.e. 1.234 + 1.234 + … + 1.234, so the rounding guideline for addition and subtraction described below is the more appropriate approach.[13] As a result, the final answer is 1.234 + 1.234 + … + 1.234 = 11.106 (one significant digit increase).
For quantities created from measured quantities via addition and subtraction, the last significant figure position (e.g., hundreds, tens, ones, tenths, hundredths, and so forth) in the calculated result should be the same as the leftmost or largest digit position among the last significant figures of the measured quantities in the calculation. For example,
with the last significant figures in the ones place, tenths place, ones place, and thousands place respectively. (2 here is assumed not to be an exact number.) For the first example, the first term has its last significant figure in the thousandths place and the second term has its last significant figure in the ones place. The leftmost or largest digit position among the last significant figures of these terms is the ones place, so the calculated result should also have its last significant figure in the ones place.
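A sketch of the addition guideline in Python (my own illustration, with numbers of my own choosing; Decimal's exponent gives the place of the last written digit, so "18.0" is taken to the tenths and a bare "1200" to the ones):

```python
from decimal import Decimal

def last_sig_place(numeral: str) -> int:
    """Base-10 place of the last quoted digit: '12.11' -> -2 (hundredths),
    '18.0' -> -1 (tenths)."""
    return Decimal(numeral).as_tuple().exponent

def add_measured(a: str, b: str) -> float:
    """Round the sum to the leftmost (largest) last-significant-digit
    place among the terms."""
    place = max(last_sig_place(a), last_sig_place(b))
    return float(round(Decimal(a) + Decimal(b), -place))

print(add_measured("12.11", "18.0"))  # 30.11 kept only to the tenths place -> 30.1
```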
The rule for calculating significant figures in multiplication and division is not the same as the rule for addition and subtraction. For multiplication and division, only the total number of significant figures in each of the factors in the calculation matters; the digit position of the last significant figure in each factor is irrelevant. For addition and subtraction, only the digit position of the last significant figure in each of the terms in the calculation matters; the total number of significant figures in each term is irrelevant.[citation needed] However, greater accuracy will often be obtained if some non-significant digits are maintained in intermediate results that are used in subsequent calculations.[citation needed]
The base-10 logarithm of a normalized number (i.e., a × 10^b with 1 ≤ a < 10 and b an integer) is rounded such that its decimal part (called the mantissa) has as many significant figures as the normalized number.
When taking the antilogarithm of a normalized number, the result is rounded to have as many significant figures as the decimal part of the number whose antilogarithm is taken.
If a transcendental function f(x) (e.g., the exponential function, the logarithm, and the trigonometric functions) is differentiable at its domain element x, then its number of significant figures (denoted as "significant figures of f(x)") is approximately related to the number of significant figures in x (denoted as "significant figures of x") by the formula
{\displaystyle {\rm {(significant~figures~of~f(x))}}\approx {\rm {(significant~figures~of~x)}}-\log _{10}\left(\left\vert {{\frac {df(x)}{dx}}{\frac {x}{f(x)}}}\right\vert \right)},
where {\displaystyle \left\vert {{\frac {df(x)}{dx}}{\frac {x}{f(x)}}}\right\vert } is the condition number.
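The condition-number relation can be checked numerically. The following sketch (the function name is my own) estimates the derivative by a central difference rather than symbolically:

```python
import math

def sig_fig_loss(f, x: float, h: float = 1e-6) -> float:
    """Approximate log10 of the condition number |f'(x) * x / f(x)|:
    the number of significant figures lost when applying f at x."""
    deriv = (f(x + h) - f(x - h)) / (2.0 * h)
    return math.log10(abs(deriv * x / f(x)))

# Near x = 1 the logarithm is badly conditioned: ln(1.001) retains
# roughly three fewer significant figures than its argument.
print(round(sig_fig_loss(math.log, 1.001), 1))  # 3.0
```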
When performing multiple stage calculations, do not round intermediate stage calculation results; keep as many digits as is practical (at least one more digit than the rounding rule allows per stage) until the end of all the calculations to avoid cumulative rounding errors while tracking or recording the significant figures in each intermediate result. Then, round the final result, for example, to the fewest number of significant figures (for multiplication or division) or leftmost last significant digit position (for addition or subtraction) among the inputs in the final calculation.[14]
When using a ruler, initially use the smallest mark as the first estimated digit. For example, if a ruler's smallest mark is 0.1 cm, and 4.5 cm is read, then it is 4.5 (±0.1 cm) or 4.4 cm to 4.6 cm as to the smallest mark interval. However, in practice a measurement can usually be estimated by eye to closer than the interval between the ruler's smallest mark, e.g. in the above case it might be estimated as between 4.51 cm and 4.53 cm.[15]
It is also possible that the overall length of a ruler may not be accurate to the degree of the smallest mark, and the marks may be imperfectly spaced within each unit. However, assuming a normal good-quality ruler, it should be possible to estimate tenths between the nearest two marks to achieve an extra decimal place of accuracy.[16] Failing to do this adds the error in reading the ruler to any error in the calibration of the ruler.
When estimating the proportion of individuals carrying some particular characteristic in a population, from a random sample of that population, the number of significant figures should not exceed the maximum precision allowed by that sample size.
Traditionally, in various technical fields, "accuracy" refers to the closeness of a given measurement to its true value; "precision" refers to the stability of that measurement when repeated many times. Thus, it is possible to be "precisely wrong". Hoping to reflect the way in which the term "accuracy" is actually used in the scientific community, there is a recent standard, ISO 5725, which keeps the same definition of precision but defines the term "trueness" as the closeness of a given measurement to its true value and uses the term "accuracy" as the combination of trueness and precision. (See the accuracy and precision article for a full discussion.) In either case, the number of significant figures roughly corresponds to precision, not to accuracy or the newer concept of trueness.
Computer representations of floating-point numbers use a form of rounding to significant figures (while usually not keeping track of how many), in general with binary numbers. The number of correct significant figures is closely related to the notion of relative error (which has the advantage of being a more accurate measure of precision, and is independent of the radix, also known as the base, of the number system used).
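The link between relative error and correct significant figures can be made concrete: as a rule of thumb (an approximation, not an exact statement), the count of correct decimal figures is about −log10 of the relative error:

```python
import math

def decimal_figs_from_rel_error(rel_err: float) -> float:
    """Approximate number of correct significant decimal figures
    implied by a given relative error."""
    return -math.log10(rel_err)

# IEEE 754 double precision has machine epsilon 2**-52, corresponding
# to roughly 15-16 correct decimal significant figures.
print(decimal_figs_from_rel_error(2.0 ** -52))  # ~15.65
```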
Electronic calculators supporting a dedicated significant-figures display mode are relatively rare.
Among the calculators to support related features are the Commodore M55 Mathematician (1976)[17] and the S61 Statistician (1976),[18] which support two display modes, where DISP+n will give n significant digits in total, while DISP+.+n will give n decimal places.
The Texas Instruments TI-83 Plus (1999) and TI-84 Plus (2004) families of graphical calculators support a Sig-Fig Calculator mode in which the calculator will evaluate the count of significant digits of entered numbers and display it in square brackets behind the corresponding number. The results of calculations will be adjusted to only show the significant digits as well.[19]
For the HP 20b/30b-based community-developed WP 34S (2011) and WP 31S (2014) calculators, significant-figures display modes SIG+n and SIG0+n (with zero padding) are available as a compile-time option.[20][21] The SwissMicros DM42-based community-developed calculators WP 43C (2019)[22] / C43 (2022) / C47 (2023) support a significant-figures display mode as well.
|
https://en.wikipedia.org/wiki/Decimal_place
|
ISO 26262, titled "Road vehicles – Functional safety", is an international standard for functional safety of electrical and/or electronic systems installed in series production road vehicles (excluding mopeds), defined by the International Organization for Standardization (ISO) in 2011 and revised in 2018.
Functional safety features form an integral part of each automotive product development phase, ranging from the specification to design, implementation, integration, verification, validation, and production release. The standard ISO 26262 is an adaptation of the functional safety standard IEC 61508 for automotive electric/electronic systems. ISO 26262 defines functional safety for automotive equipment applicable throughout the lifecycle of all automotive electronic and electrical safety-related systems.
The first edition (ISO 26262:2011), published on 11 November 2011, was limited to electrical and/or electronic systems installed in "series production passenger cars" with a maximum gross weight of 3,500 kilograms (7,700 lb). The second edition (ISO 26262:2018), published in December 2018, extended the scope from passenger cars to all road vehicles except mopeds.[1]
The standard aims to address possible hazards caused by the malfunctioning behaviour of electronic and electrical systems in vehicles. Although entitled "Road vehicles – Functional safety", the standard relates to the functional safety of electrical and electronic systems as well as that of systems as a whole or of their mechanical subsystems.
Like its parent standard, IEC 61508, ISO 26262 is a risk-based safety standard, where the risk of hazardous operational situations is qualitatively assessed and safety measures are defined to avoid or control systematic failures and to detect or control random hardware failures, or mitigate their effects.
Goals of ISO 26262:
ISO 26262:2018 consists of twelve parts, ten normative parts (parts 1 to 9 and 12) and two guidelines (parts 10 and 11):[citation needed]
In comparison, ISO 26262:2011 consisted of just 10 parts, with slightly different naming:
ISO 26262 specifies a vocabulary (a project glossary) of terms, definitions, and abbreviations for application in all parts of the standard.[1] Of particular importance is the careful definition of fault, error, and failure, as these terms are key to the standard's definitions of functional safety processes,[3] particularly in the consideration that "A fault can manifest itself as an error ... and the error can ultimately cause a failure".[1] A resulting malfunction that has a hazardous effect represents a loss of functional safety.
Note: In contrast to other functional safety standards and the updated ISO 26262:2018, fault tolerance was not explicitly defined in ISO 26262:2011, since it was assumed impossible to comprehend all possible faults in a system.[4]
Note: ISO 26262 does not use the IEC 61508 term safe failure fraction (SFF). The terms single-point faults metric and latent faults metric are used instead.[5]
ISO 26262 provides a standard for functional safety management for automotive applications, defining standards for overall organizational safety management as well as standards for a safety life cycle for the development and production of individual automotive products.[6][7][8][9] The ISO 26262 safety life cycle described in the next section operates on the following safety management concepts:[1]
Processes within the ISO 26262 safety life cycle identify and assess hazards (safety risks), establish specific safety requirements to reduce those risks to acceptable levels, and manage and track those safety requirements to produce reasonable assurance that they are accomplished in the delivered product. These safety-relevant processes may be viewed as being integrated with, or running in parallel to, a managed requirements life cycle of a conventional quality management system:[10][11]
ISO 26262 defines objectives for integral processes that support the safety life cycle processes but are continuously active throughout all phases, and also defines additional considerations that support accomplishment of general process objectives.
Automotive Safety Integrity Level (ASIL) refers to an abstract classification of inherent safety risk in an automotive system or elements of such a system. ASIL classifications are used within ISO 26262 to express the level of risk reduction required to prevent a specific hazard, with ASIL D representing the highest hazard level and ASIL A the lowest. The ASIL assessed for a given hazard is then assigned to the safety goal set to address that hazard and is then inherited by the safety requirements derived from that goal.[12]
The determination of ASIL is the result of hazard analysis and risk assessment.[13] In the context of ISO 26262, a hazard is assessed based on the relative impact of hazardous effects related to a system, as adjusted for the relative likelihood of the hazard manifesting those effects. That is, each hazardous event is assessed in terms of the severity of possible injuries, within the context of the relative amount of time a vehicle is exposed to the possibility of the hazard happening, as well as the relative likelihood that a typical driver can act to prevent the injury.[14]
At the beginning of the safety life cycle, hazard analysis and risk assessment is performed, resulting in the assignment of an ASIL to all identified hazardous events and safety goals.
Each hazardous event is classified according to the severity (S) of the injuries it can be expected to cause:
Risk management recognizes that consideration of the severity of a possible injury is modified by how likely the injury is to happen; that is, for a given hazard, a hazardous event is considered a lower risk if it is less likely to happen. Within the hazard analysis and risk assessment process of this standard, the likelihood of an injurious hazard is further classified according to a combination of
In terms of these classifications, an Automotive Safety Integrity Level D hazardous event (abbreviated ASIL D) is defined as an event having a reasonable possibility of causing a life-threatening (survival uncertain) or fatal injury, with the injury being physically possible in most operating conditions, and with little chance the driver can do something to prevent the injury. That is, ASIL D is the combination of the S3, E4, and C3 classifications. For each single reduction in any one of these classifications from its maximum value (excluding reduction of C1 to C0), there is a single-level reduction in the ASIL from D.[15] [For example, a hypothetical uncontrollable (C3) fatal-injury (S3) hazard could be classified as ASIL A if the hazard has a very low probability (E1).] The ASIL level below A is the lowest level, QM. QM refers to the standard's consideration that below ASIL A there is no safety relevance, and only standard quality management processes are required.[13]
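The decrement rule described above can be sketched as a lookup (my own simplification, under the stated S3/E4/C3 decrement reading; an actual assessment uses the risk-graph table in ISO 26262-3):

```python
def asil(s: int, e: int, c: int) -> str:
    """Classify a hazardous event from severity S1-S3, exposure E1-E4,
    and controllability C1-C3: S3/E4/C3 is ASIL D, and each single-step
    reduction in any classification drops the ASIL by one level,
    bottoming out at QM (no safety relevance)."""
    steps = (3 - s) + (4 - e) + (3 - c)  # reductions from the S3/E4/C3 maximum
    return {0: "ASIL D", 1: "ASIL C", 2: "ASIL B", 3: "ASIL A"}.get(steps, "QM")

print(asil(3, 4, 3))  # ASIL D
print(asil(3, 1, 3))  # ASIL A (uncontrollable, fatal, but very low probability)
print(asil(1, 1, 1))  # QM
```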
These Severity, Exposure, and Controllability definitions are informative, not prescriptive, and effectively leave some room for subjective variation or discretion between various automakers and component suppliers.[14][16] In response, SAE International has issued J2980 – Considerations for ISO 26262 ASIL Hazard Classification to provide more explicit guidance for assessing Exposure, Severity, and Controllability for a given hazard.[17]
|
https://en.wikipedia.org/wiki/ISO_26262
|
The following is a comparison of notable file system defragmentation software:
|
https://en.wikipedia.org/wiki/List_of_defragmentation_software
|
SHA-3 (Secure Hash Algorithm 3) is the latest[4] member of the Secure Hash Algorithm family of standards, released by NIST on August 5, 2015.[5][6][7] Although part of the same series of standards, SHA-3 is internally different from the MD5-like structure of SHA-1 and SHA-2.
SHA-3 is a subset of the broader cryptographic primitive family Keccak (/ˈkɛtʃæk/ or /ˈkɛtʃɑːk/),[8][9] designed by Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche, building upon RadioGatún. Keccak's authors have proposed additional uses for the function, not (yet) standardized by NIST, including a stream cipher, an authenticated encryption system, a "tree" hashing scheme for faster hashing on certain architectures,[10][11] and the AEAD ciphers Keyak and Ketje.[12][13]
Keccak is based on a novel approach called sponge construction.[14] Sponge construction is based on a wide random function or random permutation, and allows inputting ("absorbing" in sponge terminology) any amount of data and outputting ("squeezing") any amount of data, while acting as a pseudorandom function with regard to all previous inputs. This leads to great flexibility.
As of 2022, NIST does not plan to withdraw SHA-2 or remove it from the revised Secure Hash Standard.[15]The purpose of SHA-3 is that it can be directly substituted for SHA-2 in current applications if necessary, and to significantly improve the robustness of NIST's overall hash algorithm toolkit.[16]
For small message sizes, the creators of the Keccak algorithms and the SHA-3 functions suggest using the faster function KangarooTwelve with adjusted parameters and a new tree hashing mode without extra overhead.
The Keccak algorithm is the work of Guido Bertoni, Joan Daemen (who also co-designed the Rijndael cipher with Vincent Rijmen), Michaël Peeters, and Gilles Van Assche. It is based on the earlier hash function designs PANAMA and RadioGatún. PANAMA was designed by Daemen and Craig Clapp in 1998. RadioGatún, a successor of PANAMA, was designed by Daemen, Peeters, and Van Assche, and was presented at the NIST Hash Workshop in 2006.[17] The reference implementation source code was dedicated to the public domain via a CC0 waiver.[18]
In 2006, NIST started to organize the NIST hash function competition to create a new hash standard, SHA-3. SHA-3 is not meant to replace SHA-2, as no significant attack on SHA-2 has been publicly demonstrated[needs update]. Because of the successful attacks on MD5, SHA-0 and SHA-1,[19][20] NIST perceived a need for an alternative, dissimilar cryptographic hash, which became SHA-3.
After a setup period, admissions were to be submitted by the end of 2008. Keccak was accepted as one of the 51 candidates. In July 2009, 14 algorithms were selected for the second round. Keccak advanced to the last round in December 2010.[21]
During the competition, entrants were permitted to "tweak" their algorithms to address issues that were discovered. Changes that have been made to Keccak are:[22][23]
On October 2, 2012, Keccak was selected as the winner of the competition.[8]
In 2014, NIST published draft FIPS 202, "SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions".[24] FIPS 202 was approved on August 5, 2015.[25]
On August 5, 2015, NIST announced that SHA-3 had become a hashing standard.[26]
In early 2013 NIST announced they would select different values for the "capacity", the overall strength vs. speed parameter, for the SHA-3 standard, compared to the submission.[27][28]The changes caused some turmoil.
The hash function competition called for hash functions at least as secure as the SHA-2 instances. This means that a d-bit output should have d/2-bit resistance to collision attacks and d-bit resistance to preimage attacks, the maximum achievable for d bits of output. Keccak's security proof allows an adjustable level of security based on a "capacity" c, providing c/2-bit resistance to both collision and preimage attacks. To meet the original competition rules, Keccak's authors proposed c = 2d. The announced change was to accept the same d/2-bit security for all forms of attack and standardize c = d. This would have sped up Keccak by allowing an additional d bits of input to be hashed each iteration. However, the hash functions would no longer have been drop-in replacements with the same preimage resistance as SHA-2; it would have been cut in half, making it vulnerable to advances in quantum computing, which effectively would cut it in half once more.[29]
In September 2013, Daniel J. Bernstein suggested on the NIST hash-forum mailing list[30] strengthening the security to the 576-bit capacity that was originally proposed as the default Keccak, in addition to and not included in the SHA-3 specifications.[31] This would have provided at least a SHA3-224 and SHA3-256 with the same preimage resistance as their SHA-2 predecessors, but SHA3-384 and SHA3-512 would have had significantly less preimage resistance than their SHA-2 predecessors. In late September, the Keccak team responded by stating that they had already proposed 128-bit security by setting c = 256 as an option in their SHA-3 proposal.[32] Although the reduced capacity was justifiable in their opinion, in light of the negative response they proposed raising the capacity to c = 512 bits for all instances. This would be as much as any previous standard up to the 256-bit security level, while providing reasonable efficiency,[33] but not the 384-/512-bit preimage resistance offered by SHA2-384 and SHA2-512. The authors stated that "claiming or relying on security strength levels above 256 bits is meaningless".
In early October 2013, Bruce Schneier criticized NIST's decision on the basis of its possible detrimental effects on the acceptance of the algorithm, saying:
There is too much mistrust in the air. NIST risks publishing an algorithm that no one will trust and no one (except those forced) will use.[34]
He later retracted his earlier statement, saying:
I misspoke when I wrote that NIST made "internal changes" to the algorithm. That was sloppy of me. The Keccak permutation remains unchanged. What NIST proposed was reducing the hash function's capacity in the name of performance. One of Keccak's nice features is that it's highly tunable.[34]
Paul Crowley, a cryptographer and senior developer at an independent software development company, expressed his support of the decision, saying that Keccak is supposed to be tunable and there is no reason for different security levels within one primitive. He also added:
Yes, it's a bit of a shame for the competition that they demanded a certain security level for entrants, then went to publish a standard with a different one. But there's nothing that can be done to fix that now, except re-opening the competition. Demanding that they stick to their mistake doesn't improve things for anyone.[35]
There was some confusion that internal changes may have been made to Keccak, which were cleared up by the original team, stating that NIST's proposal for SHA-3 is a subset of the Keccak family, for which one can generate test vectors using their reference code submitted to the contest, and that this proposal was the result of a series of discussions between them and the NIST hash team.[36]
In response to the controversy, in November 2013 John Kelsey of NIST proposed to go back to the original c = 2d proposal for all SHA-2 drop-in replacement instances.[37] The reversion was confirmed in subsequent drafts[38] and in the final release.[5]
SHA-3 uses the sponge construction,[14] in which data is "absorbed" into the sponge, then the result is "squeezed" out. In the absorbing phase, message blocks are XORed into a subset of the state, which is then transformed as a whole using a permutation function (or transformation) f. In the "squeeze" phase, output blocks are read from the same subset of the state, alternated with the state transformation function f. The size of the part of the state that is written and read is called the "rate" (denoted r), and the size of the part that is untouched by input/output is called the "capacity" (denoted c). The capacity determines the security of the scheme. The maximum security level is half the capacity.
Given an input bit string N, a padding function pad, a permutation function f that operates on bit blocks of width b, a rate r and an output length d, we have capacity c = b − r, and the sponge construction Z = sponge[f, pad, r](N, d), yielding a bit string Z of length d, works as follows:[6]: 18
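The absorb/squeeze flow just described can be sketched as follows. This is a toy illustration only: the permutation below is a placeholder (a rotation plus XOR), not Keccak-f, and the byte-wise padding is a simplification of the bit-level pad10*1 rule.

```python
# Toy sponge construction: absorb r-byte blocks, then squeeze output.
RATE = 8                  # r, in bytes (real SHA3-256 uses r = 1088 bits)
CAPACITY = 8              # c, in bytes
WIDTH = RATE + CAPACITY   # b = r + c

def toy_permutation(state: bytes) -> bytes:
    # Placeholder for f: rotate the state by one byte and XOR a constant.
    # This is a bijection on b-byte strings, which is all the sketch needs.
    rotated = state[1:] + state[:1]
    return bytes(b ^ 0x5C for b in rotated)

def pad(message: bytes, rate: int) -> bytes:
    # Byte-wise stand-in for pad10*1: 0x80, zero bytes, 0x01
    # (or a single 0x81 when only one byte of padding fits).
    gap = rate - (len(message) % rate)
    if gap == 1:
        return message + b"\x81"
    return message + b"\x80" + b"\x00" * (gap - 2) + b"\x01"

def sponge(message: bytes, out_len: int) -> bytes:
    state = bytearray(WIDTH)
    # Absorb: XOR each r-byte block into the first r bytes of the state,
    # then transform the whole state with f.
    padded = pad(message, RATE)
    for i in range(0, len(padded), RATE):
        for j, byte in enumerate(padded[i:i + RATE]):
            state[j] ^= byte
        state = bytearray(toy_permutation(bytes(state)))
    # Squeeze: read r bytes at a time, applying f between reads.
    output = b""
    while len(output) < out_len:
        output += bytes(state[:RATE])
        state = bytearray(toy_permutation(bytes(state)))
    return output[:out_len]
```

Note how the capacity bytes are never directly read or written from outside; only the permutation mixes them with the rate portion.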
The fact that the internal state S contains c additional bits of information in addition to what is output to Z prevents the length extension attacks that SHA-2, SHA-1, MD5 and other hashes based on the Merkle–Damgård construction are susceptible to.
In SHA-3, the state S consists of a 5 × 5 array of w-bit words (with w = 64), b = 5 × 5 × w = 5 × 5 × 64 = 1600 bits total. Keccak is also defined for smaller power-of-2 word sizes w down to 1 bit (total state of 25 bits). Small state sizes can be used to test cryptanalytic attacks, and intermediate state sizes (from w = 8, 200 bits, to w = 32, 800 bits) can be used in practical, lightweight applications.[12][13]
For SHA3-224, SHA3-256, SHA3-384, and SHA3-512 instances, r is greater than d, so there is no need for additional block permutations in the squeezing phase; the leading d bits of the state are the desired hash. However, SHAKE128 and SHAKE256 allow an arbitrary output length, which is useful in applications such as optimal asymmetric encryption padding.
To ensure the message can be evenly divided into r-bit blocks, padding is required. SHA-3 uses the pattern 10...01 in its padding function: a 1 bit, followed by zero or more 0 bits (maximum r − 1) and a final 1 bit.
The maximum of r − 1 zero bits occurs when the last message block is r − 1 bits long. Then another block is added after the initial 1 bit, containing r − 1 zero bits before the final 1 bit.
The two 1 bits will be added even if the length of the message is already divisible by r.[6]: 5.1 In this case, another block is added to the message, containing a 1 bit, followed by a block of r − 2 zero bits and another 1 bit. This is necessary so that a message with length divisible by r ending in something that looks like padding does not produce the same hash as the message with those bits removed.
The initial 1 bit is required so messages differing only in a few additional 0 bits at the end do not produce the same hash.
The position of the final 1 bit indicates which rate r was used (multi-rate padding), which is required for the security proof to work for different hash variants. Without it, different hash variants of the same short message would be the same up to truncation.
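The pad10*1 rule above can be sketched at the bit level (a minimal illustration, representing the message as a list of 0/1 integers rather than a packed bit string):

```python
def pad10star1(message_bits: list, r: int) -> list:
    """Multi-rate padding: append a 1 bit, then 0 bits, then a final
    1 bit, so the total length is a multiple of the rate r."""
    padded = list(message_bits) + [1]      # initial 1 bit, always added
    while (len(padded) + 1) % r != 0:      # leave room for the final 1
        padded.append(0)
    padded.append(1)                       # final 1 bit, always added
    return padded
```

The edge cases in the text fall out directly: a message of r − 1 bits, or one whose length is already a multiple of r, gains a whole extra block.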
The block transformation f, which is Keccak-f[1600] for SHA-3, is a permutation that uses XOR, AND and NOT operations, and is designed for easy implementation in both software and hardware.
It is defined for any power-of-two word size, w = 2^ℓ bits. The main SHA-3 submission uses 64-bit words, ℓ = 6.
The state can be considered to be a 5 × 5 × w array of bits. Let a[i][j][k] be bit (5i + j) × w + k of the input, using a little-endian bit numbering convention and row-major indexing. I.e. i selects the row, j the column, and k the bit.
Index arithmetic is performed modulo 5 for the first two dimensions and modulo w for the third.
The basic block permutation function consists of 12 + 2ℓ rounds of five steps (θ, ρ, π, χ, and ι):
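As an illustration, the linear mixing step θ (theta) alone can be sketched on the 5 × 5 array of w-bit words described above. This is a sketch of a single step, not the full permutation; the remaining steps (ρ, π, χ, ι) are omitted, and indexing follows the modulo-5 convention from the text.

```python
W = 64                    # word size w for Keccak-f[1600]
MASK = (1 << W) - 1

def rot(v: int, n: int) -> int:
    """Rotate a w-bit word left by n bits."""
    n %= W
    return ((v << n) | (v >> (W - n))) & MASK

def theta(a):
    """a[x][y] is a w-bit word; returns the theta-transformed state.
    Each bit is XORed with the parities of two nearby columns."""
    # Column parities: c[x] = XOR of the five words in column x.
    c = [a[x][0] ^ a[x][1] ^ a[x][2] ^ a[x][3] ^ a[x][4] for x in range(5)]
    # d[x] combines the parity of column x-1 and the rotated parity
    # of column x+1 (indices taken modulo 5).
    d = [c[(x - 1) % 5] ^ rot(c[(x + 1) % 5], 1) for x in range(5)]
    return [[a[x][y] ^ d[x] for y in range(5)] for x in range(5)]
```

Because θ only XORs in column parities, the all-zero state is a fixed point, and flipping one input bit affects eleven output bits.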
The speed of SHA-3 hashing of long messages is dominated by the computation of f = Keccak-f[1600] and XORing S with the extended Pi, an operation on b = 1600 bits. However, since the last c bits of the extended Pi are 0 anyway, and XOR with 0 is a NOP, it is sufficient to perform XOR operations only for r bits (r = 1600 − 2 × 224 = 1152 bits for SHA3-224, 1088 bits for SHA3-256, 832 bits for SHA3-384 and 576 bits for SHA3-512). The lower r is (and, conversely, the higher c = b − r = 1600 − r), the less efficient but more secure the hashing becomes, since fewer bits of the message can be XORed into the state (a quick operation) before each application of the computationally expensive f.
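The arithmetic in this paragraph can be checked directly; a small sketch deriving the rate of each fixed-output instance from b = 1600 and c = 2d:

```python
B = 1600  # Keccak-f[1600] state width in bits

def rate_and_capacity(d: int) -> tuple:
    """For SHA3-d, the capacity is c = 2d and the rate is r = b - c."""
    c = 2 * d
    return B - c, c

for d in (224, 256, 384, 512):
    r, c = rate_and_capacity(d)
    print(f"SHA3-{d}: rate = {r} bits, capacity = {c} bits")
```

This reproduces the rates quoted above: 1152, 1088, 832 and 576 bits.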
The authors report the following speeds for software implementations of Keccak-f[1600] plus XORing 1024 bits,[1] which roughly corresponds to SHA3-256:
For the exact SHA3-256 on x86-64, Bernstein measures 11.7–12.25 cpb depending on the CPU.[40]: 7 SHA-3 has been criticized for being slow on instruction set architectures (CPUs) which do not have instructions meant specially for computing Keccak functions faster – SHA2-512 is more than twice as fast as SHA3-512, and SHA-1 is more than three times as fast on an Intel Skylake processor clocked at 3.2 GHz.[41] The authors have reacted to this criticism by suggesting the use of SHAKE128 and SHAKE256 instead of SHA3-256 and SHA3-512, at the expense of halving the preimage resistance (while keeping the collision resistance). With this, performance is on par with SHA2-256 and SHA2-512.
However, in hardware implementations, SHA-3 is notably faster than all other finalists,[42] and also faster than SHA-2 and SHA-1.[41]
As of 2018, ARM's ARMv8[43] architecture includes special instructions which enable Keccak algorithms to execute faster, and IBM's z/Architecture[44] includes a complete implementation of SHA-3 and SHAKE in a single instruction. There have also been extension proposals for RISC-V to add Keccak-specific instructions.[45]
The NIST standard defines the following instances, for message M and output length d:[6]: 20, 23
With the following definitions
SHA-3 instances are drop-in replacements for SHA-2, intended to have identical security properties.
SHAKE will generate as many bits from its sponge as requested, making the SHAKE functions extendable-output functions (XOFs). For example, SHAKE128(M, 256) can be used as a hash function with a 256-bit output length and 128-bit security strength. Because arbitrarily long outputs can be requested, the SHAKE functions can also serve as pseudo-random number generators. Alternately, SHAKE256(M, 128) can be used as a hash function with a 128-bit output length and 128-bit resistance.[6]
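Python's hashlib exposes SHAKE128 and SHAKE256, which makes the extendable-output behaviour easy to observe: requesting more output simply squeezes the sponge further, so a shorter digest is a prefix of a longer one.

```python
import hashlib

msg = b"The quick brown fox"

# SHAKE produces as many output bytes as requested.
h32 = hashlib.shake_128(msg).digest(32)   # 256 bits of output
h64 = hashlib.shake_128(msg).digest(64)   # 512 bits of output

# The shorter output is a prefix of the longer one: the sponge is
# simply squeezed further for longer requests.
assert h64[:32] == h32
```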
All instances append some bits to the message, the rightmost of which represent the domain separation suffix. The purpose of this is to ensure that it is not possible to construct messages that produce the same hash output for different applications of the Keccak hash function. The following domain separation suffixes exist:[6][46]
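The effect of domain separation can be observed with Python's hashlib: SHA3-256 and SHAKE256 use the same rate and capacity (r = 1088, c = 512), yet equal-length outputs for the same message differ, because different suffixes are appended before padding.

```python
import hashlib

msg = b"domain separation demo"

# SHA3-256 appends the suffix 01, SHAKE256 appends 1111, so even with
# identical sponge parameters and output lengths the digests differ.
sha3_out = hashlib.sha3_256(msg).digest()       # 32 bytes
shake_out = hashlib.shake_256(msg).digest(32)   # 32 bytes
assert sha3_out != shake_out
```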
In December 2016, NIST published a new document, NIST SP.800-185,[47] describing additional SHA-3-derived functions:
• X is the main input bit string. It may be of any length, including zero.
• L is an integer representing the requested output length in bits.
• N is a function-name bit string, used by NIST to define functions based on cSHAKE. When no function other than cSHAKE is desired, N is set to the empty string.
• S is a customization bit string. The user selects this string to define a variant of the function. When no customization is desired, S is set to the empty string.
• K is a key bit string of any length, including zero.
• B is the block size in bytes for parallel hashing. It may be any integer such that 0 < B < 2^2040.
In 2016 the same team that made the SHA-3 functions and the Keccak algorithm introduced faster reduced-round alternatives (12 and 14 rounds, down from the 24 in SHA-3) which can exploit the availability of parallel execution through the use of tree hashing: KangarooTwelve and MarsupilamiFourteen.[49]
These functions differ from ParallelHash, the FIPS-standardized Keccak-based parallelizable hash function, in their approach to parallelism: they are faster than ParallelHash for small message sizes.
The reduced number of rounds is justified by the huge cryptanalytic effort focused on Keccak which did not produce practical attacks on anything close to twelve-round Keccak. These higher-speed algorithms are not part of SHA-3 (as they are a later development), and thus are not FIPS compliant; but because they use the same Keccak permutation they are secure for as long as there are no attacks on SHA-3 reduced to 12 rounds.[49]
KangarooTwelve is a higher-performance reduced-round (from 24 to 12 rounds) version of Keccak which claims to have 128 bits of security[50] while having performance as high as 0.55 cycles per byte on a Skylake CPU.[51] This algorithm is an IETF RFC draft.[52]
MarsupilamiFourteen, a slight variation on KangarooTwelve, uses 14 rounds of the Keccak permutation and claims 256 bits of security. Note that 256-bit security is not more useful in practice than 128-bit security, but may be required by some standards.[50] 128 bits are already sufficient to defeat brute-force attacks on current hardware, so having 256-bit security does not add practical value, unless the user is worried about significant advancements in the speed of classical computers. For resistance against quantum computers, see below.
KangarooTwelve and MarsupilamiFourteen are extendable-output functions, similar to SHAKE; they therefore generate closely related output for a common message with different output lengths (the longer output is an extension of the shorter output). This property is not exhibited by hash functions such as SHA-3 or ParallelHash (except for their XOF variants).[6]
In 2016, the Keccak team released a different construction called the Farfalle construction, and Kravatte, an instance of Farfalle using the Keccak-p permutation,[53] as well as two authenticated encryption algorithms, Kravatte-SANE and Kravatte-SANSE.[54]
RawSHAKE is the basis for the Sakura coding for tree hashing, which has not been standardized yet. Sakura uses a suffix of 1111 for single nodes, equivalent to SHAKE, and other generated suffixes depending on the shape of the tree.[46]: 16
There is a general result (Grover's algorithm) that quantum computers can perform a structured preimage attack in √(2^d) = 2^(d/2) evaluations, while a classical brute-force attack needs 2^d. A structured preimage attack implies a second preimage attack[29] and thus a collision attack. A quantum computer can also perform a birthday attack, thus breaking collision resistance, in ∛(2^d) = 2^(d/3)[55] (although that is disputed).[56] Noting that the maximum strength can be c/2, this gives the following upper[57] bounds on the quantum security of SHA-3:
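A small sketch tabulating these bounds for the fixed-output instances (using integer division as an approximation, and the relation c = 2d stated earlier):

```python
# Upper bounds from the text: quantum preimage cost ~ 2^(d/2) (Grover)
# and quantum collision cost ~ 2^(d/3) (quantum birthday attack),
# each capped by the overall strength ceiling c/2.
def quantum_bounds(d: int) -> dict:
    c = 2 * d          # capacity of the SHA3-d instance
    ceiling = c // 2   # maximum strength is c/2
    return {
        "preimage_bits": min(d // 2, ceiling),
        "collision_bits": min(d // 3, ceiling),
    }

for d in (224, 256, 384, 512):
    print(f"SHA3-{d}:", quantum_bounds(d))
```

For the fixed-output instances the c/2 ceiling equals d, so it never binds; it matters for the SHAKE variants, where output length and capacity are independent.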
It has been shown that the Merkle–Damgård construction, as used by SHA-2, is collapsing and, by consequence, quantum collision-resistant,[58] but for the sponge construction used by SHA-3, the authors provide proofs only for the case when the block function f is not efficiently invertible; Keccak-f[1600], however, is efficiently invertible, and so their proof does not apply.[59][original research]
The following hash values are from NIST.gov:[60]
Changing a single bit causes each bit in the output to change with 50% probability, demonstrating an avalanche effect:
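The avalanche effect is easy to demonstrate with Python's hashlib: flipping one input bit changes roughly half of the 256 output bits of SHA3-256.

```python
import hashlib

def hamming_distance(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = bytearray(b"The quick brown fox jumps over the lazy dog")
h1 = hashlib.sha3_256(bytes(msg)).digest()

msg[0] ^= 0x01  # flip a single input bit
h2 = hashlib.sha3_256(bytes(msg)).digest()

diff = hamming_distance(h1, h2)
print(f"{diff} of 256 output bits changed")  # expect roughly 128
```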
In the table below,internal statemeans the number of bits that are carried over to the next block.
Optimized implementations using AVX-512VL (i.e. from OpenSSL, running on Skylake-X CPUs) of SHA3-256 achieve about 6.4 cycles per byte for large messages,[66] and about 7.8 cycles per byte when using AVX2 on Skylake CPUs.[67] Performance on other x86, Power and ARM CPUs, depending on the instructions used and the exact CPU model, varies from about 8 to 15 cycles per byte,[68][69][70] with some older x86 CPUs up to 25–40 cycles per byte.[71]
Below is a list of cryptography libraries that support SHA-3:
Apple A13 ARMv8 six-core SoC CPU cores have support[72] for accelerating SHA-3 (and SHA-512) using specialized instructions (EOR3, RAX1, XAR, BCAX) from the ARMv8.2-SHA crypto extension set.[73]
Some software libraries use vectorization facilities of CPUs to accelerate usage of SHA-3. For example, Crypto++ can use SSE2 on x86 for accelerating SHA-3,[74] and OpenSSL can use MMX, AVX-512 or AVX-512VL on many x86 systems too.[75] Also, POWER8 CPUs implement 2x64-bit vector rotate, defined in PowerISA 2.07, which can accelerate SHA-3 implementations.[76] Most implementations for ARM do not use Neon vector instructions as scalar code is faster. ARM implementations can however be accelerated using SVE and SVE2 vector instructions; these are available in the Fujitsu A64FX CPU, for instance.[77]
The IBMz/Architecturesupports SHA-3 since 2017 as part of the Message-Security-Assist Extension 6.[78]The processors support a complete implementation of the entire SHA-3 and SHAKE algorithms via the KIMD and KLMD instructions using a hardware assist engine built into each core.
Ethereum uses the Keccak-256 hash function (as per version 3 of the winning entry to the SHA-3 contest by Bertoni et al., which is different from the final SHA-3 specification).[79]
|
https://en.wikipedia.org/wiki/Keccak
|
A ternary computer, also called a trinary computer, is one that uses ternary logic (i.e., base 3) instead of the more common binary system (i.e., base 2) in its calculations. Ternary computers use trits instead of binary bits.
Ternary computing deals with three discrete states, but the ternary digits themselves can be defined differently:[1]
Ternary quantum computers use qutrits rather than trits. A qutrit is a quantum state that is a complex unit vector in three dimensions, which can be written as |Ψ⟩ = α|0⟩ + β|1⟩ + γ|2⟩ in the bra-ket notation.[2] The labels given to the basis vectors (|0⟩, |1⟩, |2⟩) can be replaced with other labels, for example those given above.
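As a unit vector, the qutrit's complex amplitudes must satisfy the normalization condition:

```latex
|\alpha|^{2} + |\beta|^{2} + |\gamma|^{2} = 1,
\qquad \alpha, \beta, \gamma \in \mathbb{C},
```

so that measuring the qutrit yields outcome 0, 1 or 2 with probabilities |α|², |β|² and |γ|² respectively.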
I often reflect that had the Ternary instead of thedenaryNotation been adopted in the Infancy of Society, machines something like the present would long ere this have been common, as the transition from mental to mechanical calculation would have been so very obvious and simple.
One early calculating machine, built entirely from wood by Thomas Fowler in 1840, operated in balanced ternary.[4][5][3] The first modern, electronic ternary computer, Setun, was built in 1958 in the Soviet Union at Moscow State University by Nikolay Brusentsov,[6][7] and it had notable advantages over the binary computers that eventually replaced it, such as lower electricity consumption and lower production cost.[citation needed] In 1970 Brusentsov built an enhanced version of the computer, which he called Setun-70.[6] In the United States, Ternac, a ternary computing emulator running on a binary machine, was developed in 1973.[8]: 22
The ternary computer QTC-1 was developed in Canada.[9]
Ternary computing is commonly implemented in terms of balanced ternary, which uses the three digits −1, 0, and +1. The negative value of any balanced ternary digit can be obtained by replacing every + with a − and vice versa. It is easy to subtract a number by inverting the + and − digits and then using normal addition. Balanced ternary can express negative values as easily as positive ones, without the need for a leading negative sign as with unbalanced numbers. These advantages make some calculations more efficient in ternary than binary.[10] Since digit signs are mandatory and nonzero digits have magnitude 1 only, a notation that drops the '1's and uses only zero and the + and − signs is more concise than one that includes the 1s.
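A minimal sketch of balanced-ternary encoding, using the characters '+', '0' and '-' for the digits +1, 0 and −1 (a hypothetical helper for illustration, not taken from any of the cited designs); it shows how negation reduces to swapping the '+' and '−' signs:

```python
def to_balanced_ternary(n: int) -> str:
    """Encode an integer using the digits '+', '0', '-' (+1, 0, -1)."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3  # Python's % is non-negative, so negative n works too
        if r == 0:
            digits.append("0")
            n //= 3
        elif r == 1:
            digits.append("+")
            n //= 3
        else:  # remainder 2 becomes digit -1, carrying 1 upward
            digits.append("-")
            n = (n + 1) // 3
    return "".join(reversed(digits))

def from_balanced_ternary(s: str) -> int:
    """Decode a '+'/'0'/'-' string back to an integer."""
    value = 0
    for ch in s:
        value = value * 3 + {"+": 1, "0": 0, "-": -1}[ch]
    return value

def negate(s: str) -> str:
    """Negation is just swapping '+' and '-'."""
    return s.translate(str.maketrans("+-", "-+"))
```

For example, 6 encodes as "+-0" (9 − 3 + 0), and swapping signs gives "-+0", the encoding of −6, with no leading sign needed.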
Ternary computing can also be implemented in terms of unbalanced ternary, which uses the three digits 0, 1, 2. The digits 0 and 1 are represented as in an ordinary binary computer, while the third state, 2, is represented using leakage current.
The world's first unbalanced ternary semiconductor design on a large wafer was implemented by the research team led by Kim Kyung-rok at Ulsan National Institute of Science and Technology in South Korea, which will help development of low-power and high-computing microchips in the future. This research theme was selected as one of the future projects funded by Samsung in 2017, published on July 15, 2019.[11]
With the advent of mass-produced binary components for computers, ternary computers have diminished in significance. However, Donald Knuth argues that they will be brought back into development in the future to take advantage of ternary logic's elegance and efficiency.[10] One possible way this could happen is by combining an optical computer with the ternary logic system.[12] A ternary computer using fiber optics could use dark as 0 and two orthogonal polarizations of light as +1 and −1.[13]
The Josephson junction has been proposed as a balanced ternary memory cell, using circulating superconducting currents, either clockwise, counterclockwise, or off. "The advantages of the proposed memory circuit are capability of high speed computation, low power consumption and very simple construction with fewer elements due to the ternary operation."[14]
Ternary computing shows promise for implementing fast ternary large language models (LLMs) and potentially other AI applications, in lieu of floating-point arithmetic.[15]
In Robert A. Heinlein's novel Time Enough for Love, the sapient computers of Secundus, the planet on which part of the framing story is set, including Minerva, use an unbalanced ternary system. Minerva, in reporting a calculation result, says "three hundred forty one thousand six hundred forty... the original ternary readout is unit pair pair comma unit nil nil comma unit pair pair comma unit nil nil point nil".[16]
With the emergence of carbon nanotube transistors, many researchers have shown interest in designing ternary logic gates using them. During 2020–2024, more than 1,000 papers about this subject were published on IEEE Xplore.[17]
|
https://en.wikipedia.org/wiki/Ternary_computer
|
Lists of filename extensions include:
|
https://en.wikipedia.org/wiki/List_of_filename_extensions
|
Oikocredit (in full Oikocredit, Ecumenical Development Cooperative Society U.A.[1]) is a cooperative society that offers loans or investment capital for microfinance institutions, cooperatives and small and medium-sized enterprises in developing countries. It is one of the world's largest private financiers of the microfinance sector. The idea for Oikocredit came from a 1968 meeting of the World Council of Churches. Following this, Oikocredit was established in 1975 in the Netherlands.[2]
As a co-operative, Oikocredit finances and invests infair tradeorganisations, co-operatives, microfinance institutions, and small to medium-sized enterprises in many developing countries. In 2016 Oikocredit had offices in 31 countries.[2]
On 25 March 2010, Oikocredit announced that it had reached €1 billion in cumulative committed loans and investments since the organisation began its operations. In 2009, a year when financial institutions encountered difficult market conditions, Oikocredit achieved record high inflows: Oikocredit's capital inflow reached €62.9 million and its total assets grew by 13% to €537 million at the year end.[3]
Oikocredit’s financial resources primarily derive from investments (£150 minimum, no maximum) contributed by 54,000 individuals,[2]institutions and faith-based organisations worldwide. In return, investors have received stable gross dividends paid (or re-invested) annually since 2000 with no fixed notice period, together with annual social performance reports that monitor the impact of Oikocredit and their partner organisations in developing countries.
As of 2014, Oikocredit channeled €40.3m (£31.7m) of its investment capital into 85 fair trade partner organisations, many in coffee and cocoa enterprises in Latin America and Africa.[4]In the same year Oikocredit launched an agriculture unit led from Lima, Peru. The cooperative also diversified into renewable energy, recruiting a renewable energy financing expert, to source deals in low-income countries.
On 16 November 2016,Alternative Bank Schweiz(ABS) launched a banking partnership with Oikocredit.[5]
"Oiko" is derived from the Greek word "οἶκος" or "oikos", meaning "the house, the place where the people live together", which is also the root of the word, "economy".
There are allegations that Oikocredit has failed to conduct adequate due diligence on its investments in Cambodia's microfinance sector since at least 2017, despite evidence of harms directly linked to those investments.[6] The Cambodian League for the Promotion and Defense of Human Rights (LICADHO), Equitable Cambodia (EC) and FIAN Germany filed a complaint against Oikocredit at NCP Netherlands.[7] In response, Oikocredit said that it had not identified any forced land sales and that customer protection is its top priority.[8]
|
https://en.wikipedia.org/wiki/Oikocredit
|
Smith v Lloyds TSB Bank plc[2005] EWHC 246was a judicial decision of the English High Court relating to theData Protection Act 1998.[1]
The claimant was seeking data from the bank, and he sought to advance two relatively novel lines of argument. The first was referred to in the case as the "once processed always processed" argument, i.e. that even if the respondent no longer held the data in electronic form, if they once held it in electronic form they are obligated to provide it. The second was that if data was held in a non-electronic form but could readily be turned into electronic form, then it constituted data for the purposes of the act.[2]Both arguments failed.
The claimant, Mr Smith, was the formermanaging directorand controlling shareholder of a company called Display Electronics Ltd (referred to in the judgment as "DEL"). At some time in 1988, Mr Smith decided to transfer the banking for DEL from Barclays Bank Plc toLloyds Bank. At that time DEL owed Barclays over £250,000. An agreement was entered into between Mr Smith, DEL and Lloyds under which Lloyds would take over the funding of the development, but one of the terms of this agreement was that both Mr Smith's personal borrowings (which at that time were very small) and DEL's borrowings would be subject to asecurity interestover the development in favour of the bank, and also by amortgageon Mr Smith's home. DEL did not prosper, and eventually Lloyds called in its loans. As a result, DEL went intoliquidation, and Mr Smith lost his home. The bank also lodged abankruptcypetition with respect to Mr Smith personally. A number of litigation cases ensued between Mr Smith and Lloyds. One of the assertions Mr Smith made in these cases was that he and Lloyds had entered into anoral agreementto the effect that Lloyds would make available to DEL long term finance in a substantial amount. Lloyds always denied the existence of any such oral agreement. In at least two of the actions findings of fact had been made to the effect that no such oral agreement existed. But Mr Smith believed that certain documentation held by Lloyds will prove his contentions.
In the various prior proceedings between Mr Smith and the bank there had been only very limiteddisclosureby Lloyds. Accordingly, Mr Smith felt that the crucial documents evidencing the oral agreement have been withheld from the courts. The purpose of his application was to secure access to them pursuant to the provisions of the Data Protection Act 1998. Specifically he sought a declaration that certain Notes and Memoranda recorded by Lloyds, whether filed under his own name or that of DEL, are Mr Smith's personal data in a relevant filing system, as defined in the 1998 Act, and an order that Lloyds provide copies to Mr Smith of certain documents.
Laddie Jcommenced his judgment by noting that Mr Smith's case appeared to run contrary to the two leading decisions in this area of the law: the judgment of theCourt of AppealinDurant v Financial Services Authority[2003] EWCA Civ 1746and a former decision of Laddie J himself inJohnson v Medical Defence Union[2004] EWHC 347 (Ch).
Mr Smith sought to advance broadly two arguments:[2]
The bank resisted those arguments, and raised two counter-arguments:
Laddie J noted that the first point had been previously decided byJohnson, an authority which was binding upon him, and so he was bound to rule against it. Counsel for the claimants accepted this, but noted that he was bound to make that claim in case he wanted to challenge the correctness ofJohnsonin the Court of Appeal.
He also rejected the argument relating to physical documents which were convertible into electronic form. Counsel for Mr Smith had argued "that any selection of paper documents is scannable" and therefore should be treated as "data". The Court was unable to accept the width of that submission, as that would mean that every document in the world would be treated as electronic data under the legislation. It was also felt that this construction (apart from being enormously wide) was inconsistent with Recital (27) ofDirective 95/46/ECupon which the Act was based, which stated "nonetheless, as regards manual processing, this Directive covers only filing systems, not unstructured files".
Having ruled accordingly, Laddie J acknowledged that "it is not strictly necessary to deal with [the] argument ... that, if the documents here contained data within the meaning of the 1998 Act, it is not personal data." However, because he felt that it was a short point and could be dealt with simply, he did so anyway. He noted that the statutory definition of "personal data" was considered inDurant. Applying the principles of that case, he held that it was clear that the documents held by Lloyds and the information contained within them are not personal to Mr Smith in the relevant sense - the files that do exist all relate to the loans to DEL, and not Mr Smith personally.
In the final paragraph of the judgment the court also added that it was not necessary to consider the bank's additional alternative argument that this was not a case where the court's discretion should be exercised in Mr Smith's favour because he intends to use any material obtained from Lloyds for the purpose of re-opening the arguments which he has advanced and lost in at least two earlier sets of proceedings, and that would be anabuse of process.
|
https://en.wikipedia.org/wiki/Smith_v_Lloyds_TSB_Bank_plc
|
Security culture is a set of practices used by activists, notably contemporary anarchists, to avoid, or mitigate the effects of, police surveillance and harassment and state control.[1][2]
Security culture recognizes the possibility that anarchist spaces and movements are surveilled and/or infiltrated byinformantsorundercover operatives.[3]Security culture has three components: determining when and how surveillance is occurring, protecting anarchist communities if infiltration occurs, and responding to security breaches.[4]Its origins are uncertain, though some anarchists identify its genesis in thenew social movementsof the 1960s, which were targeted by theFederal Bureau of Investigation'sCOINTELPROprojects.[5]The sociologist Christine M. Robinson has identified security culture as a response to the labelling of anarchists as terrorists in theaftermath of the September 11 attacks.[6]
The geographer Nathan L. Clough describes security culture as "a technique for cultivating a new affective structure".[3]The political scientist Sean Parson offers the following definition: "'security culture' ... includes such rules as not disclosing full names, one's activist history, or anything else that could be used to identify oneself or others to authorities. The goal of security culture is to weaken the influence of infiltrators and 'snitches,' which allows groups to more readily engage in illegal acts with less concern for arrest."[7]The media scholar Laura Portwood-Stacer defines security culture as "the norms of privacy and information control developed by anarchists in response to regular infiltration of their groups and surveillance by law enforcement personnel."[8]
Security culture does not involve abandoning confrontational political tactics, but rather eschews boasting about such deeds on the basis that doing so facilitates the targeting and conviction of anarchist activists.[3]Advocates of security culture aim to make its practices instinctive, automatic or unconscious.[3]Participants in anarchist movements see security culture as vital to their ability to function, especially in the context of theWar on Terror.[5]
Portwood-Stacer observes that security culture impacts upon research on anarchistsubculturesand that, while subcultures are often resistant to observation, "the stakes are often much higher for anarchist activists, because they are a frequent target of state surveillance and repression."[8]
Security culture regulates what topics can be discussed, in what context, and among whom.[9]It prohibits speaking to law enforcement, and certain media and locations are identified as security risks; the Internet, telephone and mail, individuals' homes and vehicles, and community meeting places are assumed to containcovert listening devices.[9]Security culture prohibits or discourages discussing involvement in illegal or covert activities.[9]Three exceptions, however, are drawn: discussing plans with others involved, discussing criminal activities for which one has been convicted, and discussing past actions anonymously inzinesor with trusted media are permitted.[9]Robinson identifies theblack bloctactic, in which anarchists cover their faces and wear black clothing, as a component of security culture.[10]Other practices include the use ofpseudonymsand "[i]nverting the gaze to inspect others' corporeality".[11]Breaches of security culture may be met by avoiding, isolating or shunning those responsible.[12]
In his discussion of security culture during the protests around the 2008 Republican National Convention (RNC), Clough notes that "fear of surveillance and infiltration" impeded trust among activists and led to energy being directed toward counter-measures.[3] He also suggests that security culture practices may cause newer participants in movements to feel less welcome or less trusted, and therefore less likely to commit to causes,[3] and, in the context of the 2008 RNC, prevented those who did not conform to anarchist norms from assuming prominent positions within the RNC Welcoming Committee.[13] Assessing the role of security culture in the anti-RNC mobilisation, which was infiltrated by four police operatives, Clough finds that it had "a mixed record", succeeding in frustrating shorter-term infiltrators operating at the movement's peripheries, but failing to prevent longer-term infiltrators from gaining others' trust.[14]
|
https://en.wikipedia.org/wiki/Security_culture
|
Compartmentalizationorcompartmentalisationmay refer to:
|
https://en.wikipedia.org/wiki/Compartmentalization
|
Matroska (styled Matroška) is a project to create a container format that can hold an unlimited number of video, audio, picture, or subtitle tracks in one file.[4] The Matroska Multimedia Container is similar in concept to other containers like AVI, MP4, or Advanced Systems Format (ASF), but is an open standard.
Matroska file extensions are .mkv for video (which may include subtitles or audio), .mk3d for stereoscopic video, .mka for audio-only files (which may include subtitles), and .mks for subtitles only.[5]
The project was announced on 6 December 2002[6] as a fork of the Multimedia Container Format (MCF), after disagreements between MCF lead developer Lasse Kärkkäinen and soon-to-be Matroska founder Steve Lhomme about the use of the Extensible Binary Meta Language (EBML) instead of a binary format.[7] This coincided with a 6-month coding break by the MCF's lead developer for his military service, during which most of the community quickly migrated to the new project.[citation needed]
In 2010, it was announced that the WebM audio/video format would be based on a profile of the Matroska container format together with VP8 video and Vorbis audio.[8]
On 31 October 2014, Microsoft confirmed that Windows 10 would support HEVC and Matroska out of the box, according to a statement from Gabriel Aul, the leader of Microsoft Operating Systems Group's Data and Fundamentals Team.[9][10] Windows 10 Technical Preview Build 9860 added platform-level support for HEVC and Matroska.[11][12]
"Matroska" is derived from matryoshka (Russian: матрёшка [mɐˈtrʲɵʂkə]), the Russian name for the hollow wooden dolls, better known in English as Russian nesting dolls, which open to expose another smaller doll, that in turn opens to expose another doll, and so on. The logo writes it as "Matroška"; the letter š, an "s" with a caron over it, represents the "sh" sound (/ʂ/) in various languages.[13]
The use of EBML allows extension for future format changes. The Matroska team has expressed some of their long-term goals on the Doom9.org and Hydrogenaudio forums. Thus, the following are "goals", not necessarily existing features, of Matroska:[14]
Matroska is supported by a non-profit organization (association loi 1901) in France,[17] and the specifications are open to everyone. It is a royalty-free open standard that is free to use, and its technical specifications are available for private and commercial use. The Matroska development team licenses its libraries under the LGPL, with parsing and playback libraries available under BSD licenses.[14]
Software supporting Matroska includes all ffmpeg/libav-based software,[18] including, notably, MPlayer, mpv, VLC, Foobar2000, Media Player Classic-HC, BS.Player, Google Chrome, Mozilla Firefox, Blender, Kdenlive, HandBrake, and MKVToolNix, as well as YouTube (which uses WebM extensively)[19] and OBS Studio.[20]
Outside of ffmpeg, Windows 10 supports Matroska natively as well.[21] Earlier versions relied on codec packs (like K-Lite Codec Pack or Combined Community Codec Pack) to integrate ffmpeg (via ffdshow) and other additions into Windows' native DirectShow.
Apple's native QuickTime player for macOS notably lacks support.
|
https://en.wikipedia.org/wiki/Matroska
|
In combinatorics, the rule of division is a counting principle. It states that there are n/d ways to do a task if it can be done using a procedure that can be carried out in n ways, and for each way w, exactly d of the n ways correspond to the way w.
In a nutshell, the division rule is a common way to ignore "unimportant" differences when counting things.[1]
In the terms of a set: "If the finite set A is the union of n pairwise disjoint subsets each with d elements, then n = |A|/d."[1]
The rule of division formulated in terms of functions: "If f is a function from A to B where A and B are finite sets, and for every value y ∈ B there are exactly d values x ∈ A such that f(x) = y (in which case, we say that f is d-to-one), then |B| = |A|/d."[1]
Example 1
- How many different ways are there to seat four people around a circular table, where two seatings are considered the same when each person has the same left neighbor and the same right neighbor?
Example 2
- We have 6 coloured bricks in total, 4 of them red and 2 white. In how many ways can we arrange them?
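Both examples above can be checked directly by applying the division rule n/d. This is a minimal sketch; the counts n and d follow from the reasoning stated in each example:

```python
from math import factorial

# Example 1: circular seatings of 4 people.
# There are n = 4! linear orderings; each circular seating corresponds
# to exactly d = 4 of them (one per rotation), so the answer is n/d.
circular_seatings = factorial(4) // 4
print(circular_seatings)  # 6

# Example 2: arrangements of 6 bricks, 4 red and 2 white.
# There are n = 6! orderings of distinct bricks; permuting the 4 red
# bricks among themselves (4! ways) and the 2 white bricks (2! ways)
# leaves the visible arrangement unchanged, so d = 4! * 2!.
brick_arrangements = factorial(6) // (factorial(4) * factorial(2))
print(brick_arrangements)  # 15
```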
|
https://en.wikipedia.org/wiki/Rule_of_division_(combinatorics)
|
A transformation language is a computer language designed to transform some input text in a certain formal language into a modified output text that meets some specific goal.
Program transformation systems such as Stratego/XT, TXL, Tom, DMS, and ASF+SDF all have transformation languages as a major component. The transformation languages for these systems are driven by declarative descriptions of the structure of the input text (typically a grammar), allowing them to be applied to a wide variety of formal languages and documents.
Macro languages are a kind of transformation language, transforming a metalanguage into a specific higher-level programming language such as Java, C++, or Fortran, or into lower-level assembly language.
In the model-driven engineering technical space, there are model transformation languages (MTLs) that take as input models conforming to a given metamodel and produce as output models conforming to a different metamodel. An example of such a language is QVT, an OMG standard.
There are also low-level languages such as the Lx family,[1] implemented by the bootstrapping method. The L0 language may be considered as assembler for transformation languages. There is also a high-level graphical language built upon Lx called MOLA.[2]
There are a number of XML transformation languages. These include Tritium, XSLT, XQuery, STX, FXT, XDuce, CDuce, HaXml, XMLambda, and FleXML.
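To make the idea concrete, the following is a minimal sketch of what an XML transformation does, written with Python's standard library rather than any of the languages listed above. The catalogue/listing schema is purely hypothetical; the structure-driven rewrite is analogous to a trivial XSLT template:

```python
import xml.etree.ElementTree as ET

# Input document in one formal language (a hypothetical <catalogue> schema).
source = ET.fromstring(
    "<catalogue>"
    "<book title='SICP'/><book title='TAPL'/>"
    "</catalogue>"
)

# Transform: rewrite each <book> element into an <li> item of a flat list,
# driven by the structure of the input rather than its raw text.
output = ET.Element("ul")
for book in source.iter("book"):
    item = ET.SubElement(output, "li")
    item.text = book.get("title")

print(ET.tostring(output, encoding="unicode"))
# <ul><li>SICP</li><li>TAPL</li></ul>
```

A dedicated transformation language expresses the same mapping declaratively (e.g. as an XSLT template matching `book`), instead of with explicit traversal code.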
Concepts:
Languages and typical transforms:
|
https://en.wikipedia.org/wiki/Transformation_language
|
Renewal theory is the branch of probability theory that generalizes the Poisson process for arbitrary holding times. Instead of exponentially distributed holding times, a renewal process may have any independent and identically distributed (IID) holding times that have finite expectation. A renewal-reward process additionally has a random sequence of rewards incurred at each holding time, which are IID but need not be independent of the holding times.
A renewal process has asymptotic properties analogous to the strong law of large numbers and central limit theorem. The renewal function m(t) (expected number of arrivals) and reward function g(t) (expected reward value) are of key importance in renewal theory. The renewal function satisfies a recursive integral equation, the renewal equation. The key renewal equation gives the limiting value of the convolution of m′(t) with a suitable non-negative function. The superposition of renewal processes can be studied as a special case of Markov renewal processes.
Applications include calculating the best strategy for replacing worn-out machinery in a factory; comparing the long-term benefits of different insurance policies; and modelling the transmission of infectious disease, where "One of the most widely adopted means of inference of the reproduction number is via the renewal equation".[1] The inspection paradox relates to the fact that observing a renewal interval at time t gives an interval with average value larger than that of an average renewal interval.
The renewal process is a generalization of the Poisson process. In essence, the Poisson process is a continuous-time Markov process on the positive integers (usually starting at zero) which has independent exponentially distributed holding times at each integer i before advancing to the next integer, i + 1. In a renewal process, the holding times need not have an exponential distribution; rather, the holding times may have any distribution on the positive numbers, so long as the holding times are independent and identically distributed (IID) and have finite mean.
Let (S_i)_{i≥1} be a sequence of positive independent identically distributed random variables with finite expected value 0 < E[S_i] < ∞.
We refer to the random variable S_i as the "i-th holding time".
Define for each n > 0:

J_n = S_1 + S_2 + ⋯ + S_n;
each J_n is referred to as the "n-th jump time" and the intervals [J_n, J_{n+1}] are called "renewal intervals".
Then (X_t)_{t≥0} is given by the random variable

X_t = Σ_{n≥1} I{J_n ≤ t},
where I{J_n ≤ t} is the indicator function of the event {J_n ≤ t}.
(X_t)_{t≥0} represents the number of jumps that have occurred by time t, and is called a renewal process.
If one considers events occurring at random times, one may choose to think of the holding times {S_i : i ≥ 1} as the random time elapsed between two consecutive events. For example, if the renewal process is modelling the numbers of breakdowns of different machines, then the holding time represents the time between successive breakdowns.
The Poisson process is the unique renewal process with the Markov property,[2] as the exponential distribution is the unique continuous random variable with the property of memorylessness.
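The definitions above translate directly into a simulation: accumulate holding times S_i into jump times J_n, and count how many jump times fall at or below t. This is a sketch with an assumed Uniform(0, 2) holding-time distribution (mean 1), deliberately non-exponential so the process is a renewal process but not a Poisson process:

```python
import random

def renewal_count(t, holding_time_sampler, rng):
    """X_t: count how many partial sums J_n = S_1 + ... + S_n satisfy J_n <= t."""
    total, count = 0.0, 0
    while True:
        total += holding_time_sampler(rng)  # draw the next holding time S_i
        if total > t:
            return count
        count += 1

rng = random.Random(0)
# Holding times uniform on (0, 2): mean E[S] = 1, but not memoryless.
counts = [renewal_count(1000.0, lambda r: r.uniform(0, 2), rng)
          for _ in range(20)]
print(sum(counts) / len(counts))  # close to t / E[S] = 1000
```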
Let W_1, W_2, … be a sequence of IID random variables (rewards) satisfying E[|W_i|] < ∞.
Then the random variable

Y_t = Σ_{i=1}^{X_t} W_i
is called a renewal-reward process. Note that unlike the S_i, each W_i may take negative values as well as positive values.
The random variable Y_t depends on two sequences: the holding times S_1, S_2, … and the rewards W_1, W_2, …. These two sequences need not be independent. In particular, W_i may be a function of S_i.
In the context of the above interpretation of the holding times as the time between successive malfunctions of a machine, the "rewards" W_1, W_2, … (which in this case happen to be negative) may be viewed as the successive repair costs incurred as a result of the successive malfunctions.
An alternative analogy is that we have a magic goose which lays eggs at intervals (holding times) distributed as S_i. Sometimes it lays golden eggs of random weight, and sometimes it lays toxic eggs (also of random weight) which require responsible (and costly) disposal. The "rewards" W_i are the successive (random) financial losses/gains resulting from successive eggs (i = 1, 2, 3, ...) and Y_t records the total financial "reward" at time t.
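A renewal-reward process can be simulated the same way, drawing a reward W_i at each renewal. In this sketch the distributions are assumptions chosen for illustration: S_i ~ Uniform(0, 2), and W_i = 3·S_i − 2 is a deterministic function of S_i (so it can be negative, and S_i and W_i are clearly not independent), giving E[W] = 1 and E[S] = 1:

```python
import random

def renewal_reward(t, rng):
    """Simulate Y_t: the sum of rewards W_i over all renewals completed by time t."""
    clock, y = 0.0, 0.0
    while True:
        s = rng.uniform(0, 2)   # holding time S_i, mean 1
        if clock + s > t:
            return y            # next renewal would land after t
        clock += s
        y += 3 * s - 2          # reward W_i depends on S_i; may be negative

rng = random.Random(1)
ys = [renewal_reward(10000.0, rng) for _ in range(10)]
# Long-run reward rate Y_t / t should approach E[W]/E[S] = 1/1 = 1.
print(sum(ys) / len(ys) / 10000.0)
```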
We define the renewal function as the expected value of the number of jumps observed up to some time t:

m(t) = E[X_t].
The renewal function satisfies the elementary renewal theorem:

lim_{t→∞} m(t)/t = 1/E[S_1].
To prove the elementary renewal theorem, it is sufficient to show that {X_t/t ; t ≥ 0} is uniformly integrable.
To do this, consider some truncated renewal process where the holding times are defined by S̄_n = a·I{S_n > a}, where a is a point such that 0 < F(a) = p < 1, which exists for all non-deterministic renewal processes. This new renewal process X̄_t is an upper bound on X_t, and its renewals can only occur on the lattice {na ; n ∈ ℕ}. Furthermore, the number of renewals at each lattice time is geometric with parameter p.
We define the reward function:

g(t) = E[Y_t].
The reward function satisfies

lim_{t→∞} g(t)/t = E[W_1]/E[S_1].
The renewal function satisfies the renewal equation

m(t) = F_S(t) + ∫_0^t m(t − s) f_S(s) ds,
where F_S is the cumulative distribution function of S_1 and f_S is the corresponding probability density function.
From the definition of the renewal process, we have

m(t) = E[X_t] = E[E[X_t | S_1]], where E[X_t | S_1 = s] = 1 + m(t − s) for s ≤ t and 0 for s > t.

So

m(t) = ∫_0^t (1 + m(t − s)) f_S(s) ds = F_S(t) + ∫_0^t m(t − s) f_S(s) ds,

as required.
Let X be a renewal process with renewal function m(t) and interrenewal mean μ. Let g : [0, ∞) → [0, ∞) be a function satisfying:

- g is monotone non-increasing
- g is Lebesgue integrable: ∫_0^∞ g(x) dx < ∞.
The key renewal theorem states that, as t → ∞:[4]

∫_0^t g(t − x) m′(x) dx → (1/μ) ∫_0^∞ g(x) dx.
Considering g(x) = I_[0,h](x) for any h > 0 gives as a special case the renewal theorem:[5]

m(t + h) − m(t) → h/μ as t → ∞.
The result can be proved using integral equations or by a coupling argument.[6] Though a special case of the key renewal theorem, it can be used to deduce the full theorem, by considering step functions and then increasing sequences of step functions.[4]
Renewal processes and renewal-reward processes have properties analogous to the strong law of large numbers, which can be derived from the same theorem. If (X_t)_{t≥0} is a renewal process and (Y_t)_{t≥0} is a renewal-reward process then:

X_t/t → 1/E[S_1] and Y_t/t → E[W_1]/E[S_1] as t → ∞,

almost surely.
Since J_{X_t} ≤ t < J_{X_t+1} for all t ≥ 0, we have

J_{X_t}/X_t ≤ t/X_t < J_{X_t+1}/X_t

for all t ≥ 0.
Now since 0 < E[S_i] < ∞ we have X_t → ∞ as t → ∞ almost surely (with probability 1). Hence:
J_{X_t}/X_t → E[S_1]

almost surely (using the strong law of large numbers); similarly:

J_{X_t+1}/X_t = (J_{X_t+1}/(X_t+1)) · ((X_t+1)/X_t) → E[S_1] · 1

almost surely.
Thus (since t/X_t is sandwiched between the two terms)

t/X_t → E[S_1], i.e. X_t/t → 1/E[S_1],

almost surely.[4]
Next consider (Y_t)_{t≥0}. We have

Y_t/t = (Y_t/X_t) · (X_t/t) → E[W_1] · (1/E[S_1])

almost surely (using the first result and using the law of large numbers on Y_t).
Renewal processes additionally have a property analogous to the central limit theorem:[7]

(X_t − t/μ) / √(t σ²/μ³) → N(0, 1),

where μ and σ² are the mean and variance of the holding times.
A curious feature of renewal processes is that if we wait some predetermined time t and then observe how large the renewal interval containing t is, we should expect it to be typically larger than a renewal interval of average size.
Mathematically the inspection paradox states: for any t > 0 the renewal interval containing t is stochastically larger than the first renewal interval. That is, for all x > 0 and for all t > 0:

P(S_{X_t+1} > x) ≥ P(S_1 > x) = 1 − F_S(x),
where F_S is the cumulative distribution function of the IID holding times S_i. A vivid example is the bus waiting time paradox: for a given random distribution of bus arrivals, the average rider at a bus stop observes more delays than the average operator of the buses.
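The paradox is easy to see numerically: sample the interval containing a fixed time t and compare its mean length to that of a typical interval. This sketch assumes exponential holding times with mean 1 for concreteness, for which the interval containing a large t has mean length about 2:

```python
import random

def interval_containing(t, sampler, rng):
    """Return the length of the renewal interval that contains time t."""
    clock = 0.0
    while True:
        s = sampler(rng)
        if clock + s > t:
            return s          # this interval straddles t
        clock += s

rng = random.Random(2)
sampler = lambda r: r.expovariate(1.0)  # exponential holding times, mean 1

n = 20000
plain_mean = sum(sampler(rng) for _ in range(n)) / n
inspected_mean = sum(interval_containing(100.0, sampler, rng)
                     for _ in range(n)) / n
print(plain_mean)      # ~1.0: a typical renewal interval
print(inspected_mean)  # ~2.0: the size-biased interval containing t
```

Longer intervals are more likely to cover any fixed t, which is exactly the size-biased sampling described in the resolution below.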
The resolution of the paradox is that our sampled distribution at timetis size-biased (seesampling bias), in that the likelihood an interval is chosen is proportional to its size. However, a renewal interval of average size is not size-biased.
since both (1 − F(x))/(1 − F(t − s)) and 1 are greater than or equal to 1 − F(x) for all values of s.
Unless the renewal process is a Poisson process, the superposition (sum) of two independent renewal processes is not a renewal process.[8] However, such processes can be described within a larger class of processes called the Markov-renewal processes.[9] The cumulative distribution function of the first inter-event time in the superposition process is given by[10]
where R_k(t) and α_k > 0 are the CDF of the inter-event times and the arrival rate of process k.[11]
Eric the entrepreneur has n machines, each having an operational lifetime uniformly distributed between zero and two years. Eric may let each machine run until it fails, with replacement cost €2600; alternatively he may replace a machine at any time while it is still functional, at a cost of €200.
What is his optimal replacement policy?
If Eric decides at the start of a machine's life to replace it at time 0 < t < 2 but the machine happens to fail before that time, then the lifetime S of the machine is uniformly distributed on [0, t] and thus has expectation 0.5t. Since the machine fails before t with probability t/2, the overall expected lifetime of the machine is:

E[S] = (t/2)·(t/2) + (1 − t/2)·t = t − t²/4,
and the expected cost W per machine is:

E[W] = (t/2)·2600 + (1 − t/2)·200 = 200 + 1200t.
So by the strong law of large numbers, his long-term average cost per unit time is:

E[W]/E[S] = (200 + 1200t)/(t − t²/4);
then differentiating with respect to t:

d/dt [E[W]/E[S]] = [1200·(t − t²/4) − (200 + 1200t)·(1 − t/2)] / (t − t²/4)²;
this implies that the turning points satisfy:

1200·(t − t²/4) − (200 + 1200t)·(1 − t/2) = 0, i.e. 300t² + 100t − 200 = 0,
and thus

3t² + t − 2 = (3t − 2)(t + 1) = 0.
We take the only solution t in [0, 2]: t = 2/3. This is indeed a minimum (and not a maximum) since the cost per unit time tends to infinity as t tends to zero, meaning that the cost is decreasing as t increases, until the point 2/3 where it starts to increase.
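The optimum can be confirmed numerically. Taking E[S] = t − t²/4 and E[W] = 200 + 1200t (which follow from the Uniform(0, 2) lifetime: the machine fails before t with probability t/2, costing €2600, and otherwise is replaced at cost €200), a grid search over (0, 2] recovers t = 2/3:

```python
def cost_rate(t):
    """Long-run average cost per year for replacement threshold t in (0, 2]."""
    return (200 + 1200 * t) / (t - t ** 2 / 4)

# Scan a fine grid over (0, 2] for the minimizing threshold.
ts = [i / 10000 for i in range(1, 20001)]
best = min(ts, key=cost_rate)
print(best, cost_rate(best))  # ~0.6667 (= 2/3), ~1800 euros per year
```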
|
https://en.wikipedia.org/wiki/Renewal_theory
|
Statistical language acquisition, a branch of developmental psycholinguistics, studies the process by which humans develop the ability to perceive, produce, comprehend, and communicate with natural language in all of its aspects (phonological, syntactic, lexical, morphological, semantic) through the use of general learning mechanisms operating on statistical patterns in the linguistic input. Statistical learning accounts of acquisition claim that infants' language-learning is based on pattern perception rather than an innate biological grammar. Several statistical elements such as frequency of words, frequent frames, phonotactic patterns and other regularities provide information on language structure and meaning for the facilitation of language acquisition.
Fundamental to the study of statistical language acquisition is the centuries-old debate between rationalism (or its modern manifestation in the psycholinguistic community, nativism) and empiricism, with researchers in this field falling strongly in support of the latter category. Nativism is the position that humans are born with innate domain-specific knowledge, especially inborn capacities for language learning. Ranging from seventeenth-century rationalist philosophers such as Descartes, Spinoza, and Leibniz to contemporary philosophers such as Richard Montague and linguists such as Noam Chomsky, nativists posit an innate learning mechanism with the specific function of language acquisition.[1]
In modern times, this debate has largely surrounded Chomsky's support of a universal grammar, properties that all natural languages must have, through the controversial postulation of a language acquisition device (LAD), an instinctive mental 'organ' responsible for language learning which searches all possible language alternatives and chooses the parameters that best match the learner's environmental linguistic input. Much of Chomsky's theory is founded on the poverty of the stimulus (POTS) argument, the assertion that a child's linguistic data is so limited and corrupted that learning language from this data alone is impossible. As an example, many proponents of POTS claim that because children are never exposed to negative evidence, that is, information about what phrases are ungrammatical, the language structure they learn would not resemble that of correct speech without a language-specific learning mechanism.[2] Chomsky's argument for an internal system responsible for language, biolinguistics, poses a three-factor model. "Genetic endowment" allows the infant to extract linguistic information, detect rules, and have universal grammar. "External environment" illuminates the need to interact with others and the benefits of language exposure at an early age. The last factor encompasses the brain properties, learning principles, and computational efficiencies that enable children to pick up on language rapidly using patterns and strategies.
Standing in stark contrast to this position is empiricism, the epistemological theory that all knowledge comes from sensory experience. This school of thought often characterizes the nascent mind as a tabula rasa, or blank slate, and can in many ways be associated with the nurture perspective of the "nature vs. nurture" debate. This viewpoint has a long historical tradition that parallels that of rationalism, beginning with seventeenth-century empiricist philosophers such as Locke, Bacon, Hobbes, and, in the following century, Hume. The basic tenet of empiricism is that information in the environment is structured enough that its patterns are both detectable and extractable by domain-general learning mechanisms.[1] In terms of language acquisition, these patterns can be either linguistic or social in nature.
Chomsky is very critical of this empirical theory of language acquisition. He has said, "It's true there's been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures." He claims the idea of using statistical methods to acquire language is simply a mimicry of the process, rather than a true understanding of how language is acquired.[3]
One of the most used experimental paradigms in investigations of infants' capacities for statistical language acquisition is the Headturn Preference Procedure (HPP), developed by Stanford psychologist Anne Fernald in 1985 to study infants' preferences for prototypical child-directed speech over normal adult speech.[4] In the classic HPP paradigm, infants are allowed to freely turn their heads and are seated between two speakers with mounted lights. The light of either the right or left speaker then flashes as that speaker provides some type of audial or linguistic input stimulus to the infant. Reliable orientation to a given side is taken to be an indication of a preference for the input associated with that side's speaker. This paradigm has since become increasingly important in the study of infant speech perception, especially for input at levels higher than syllable chunks, though with some modifications, including using listening times instead of side preference as the relevant dependent measure.[5]
Similar to HPP, the Conditioned Headturn Procedure also makes use of an infant's differential preference for a given side as an indication of a preference for, or more often a familiarity with, the input or speech associated with that side. Used in studies of prosodic boundary markers by Gout et al. (2004)[5] and later by Werker in her classic studies of categorical perception of native-language phonemes,[6] infants are conditioned by some attractive image or display to look in one of two directions every time a certain input is heard, a whole word in Gout's case and a single phonemic syllable in Werker's. After the conditioning, new or more complex input is then presented to the infant, and their ability to detect the earlier target word or distinguish the input of the two trials is observed by whether they turn their head in expectation of the conditioned display or not.
While HPP and the Conditioned Headturn Procedure allow for observations of behavioral responses to stimuli and after-the-fact inferences about what the subject's expectations must have been to motivate this behavior, the Anticipatory Eye Movement paradigm allows researchers to directly observe a subject's expectations before the event occurs. By tracking subjects' eye movements researchers have been able to investigate infant decision-making and the ways in which infants encode and act on probabilistic knowledge to make predictions about their environments.[7] This paradigm also offers the advantage of comparing differences in eye movement behavior across a wider range of ages than others.
Artificial languages, that is, small-scale languages that typically have an extremely limited vocabulary and simplified grammar rules, are a commonly used paradigm for psycholinguistic researchers. Artificial languages allow researchers to isolate variables of interest and wield a greater degree of control over the input the subject will receive. Unfortunately, the overly simplified nature of these languages and the absence of a number of phenomena common to all human natural languages such as rhythm, pitch changes, and sequential regularities raise questions of external validity for any findings obtained using this paradigm, even after attempts have been made to increase the complexity and richness of the languages used.[8] An artificial language's reduced complexity fails to account for a child's need to recognize a given syllable in natural language regardless of the sound variability inherent to natural language, though "it is possible that the complexity of natural language actually facilitates learning."[9]
As such, artificial language experiments are typically conducted to explore what the relevant linguistic variables are, what sources of information infants are able to use and when, and how researchers can go about modeling the learning and acquisition process.[5] Aslin and Newport, for example, have used artificial languages to explore what features of linguistic input make certain patterns salient and easily detectable by infants, allowing them to easily contrast the detection of syllable repetition with that of word-final syllables and make conclusions about the conditions under which either feature is recognized as important.[10]
Statistical learning has been shown to play a large role in language acquisition, but social interaction appears to be a necessary component of learning as well. In one study, infants presented with audio or audiovisual recordings of Mandarin speakers failed to distinguish the phonemes of the language.[11][12] This implies that simply hearing the sounds is not sufficient for language learning; social interaction cues the infant to take statistics. Interaction geared particularly towards infants is known as "child-directed" language because it is more repetitive and associative, which makes it easier to learn. These "child-directed" interactions could also be the reason why it is easier to learn a language as a child rather than as an adult.
Studies of bilingual infants, such as a study by Bijeljac-Babic et al. on French-learning infants, have offered insight into the role of prosody in language acquisition.[13] The Bijeljac-Babic study found that language dominance influences "sensitivity to prosodic contrasts." Although this was not a study on statistical learning, its findings on prosodic pattern recognition might have implications for statistical learning.
It is possible that the kinds of language experience and knowledge gained through the statistical learning of the first language influence one's acquisition of a second language. Some research points to the possibility that the difficulty of learning a second language may derive from the structural patterns and language cues that one has already picked up during acquisition of the first language. In that sense, the knowledge of and skills to process the first language gained from statistical acquisition may act as a complicating factor when one tries to learn a new language with different sentence structures, grammatical rules, and speech patterns.[citation needed]
The first step in developing knowledge of a system as complex as natural language is learning to distinguish the important language-specific classes of sounds, called phonemes, that distinguish meaning between words. UBC psychologist Janet Werker, since her influential series of experiments in the 1980s, has been one of the most prominent figures in the effort to understand the process by which human babies develop these phonological distinctions. While adults who speak different languages are unable to distinguish meaningful sound differences in other languages that do not delineate different meanings in their own, babies are born with the ability to universally distinguish all speech sounds. Werker's work has shown that while infants at six to eight months are still able to perceive the difference between certain Hindi and English consonants, they have completely lost this ability by 11 to 13 months.[6]
It is now commonly accepted that children use some form of perceptual distributional learning, by which categories are discovered by clumping similar instances of an input stimulus, to form phonetic categories early in life.[5] Developing children have been found to be effective judges of linguistic authority, screening the input they model their language on by attending less to speakers who mispronounce words.[5] Infants also use statistical tracking to calculate the likelihood that particular phonemes will follow each other.[14]
Parsing is the process by which a continuous speech stream is segmented into its discrete meaningful units, e.g. sentences, words, and syllables. Saffran (1996) represents a singularly seminal study in this line of research. Infants were presented with two minutes of continuous speech of an artificial language from a computerized voice, to remove any interference from extraneous variables such as prosody or intonation. After this presentation, infants were able to distinguish words from nonwords, as measured by longer looking times in the second case.[15]
An important concept in understanding these results is that of transitional probability, the likelihood of an element, in this case a syllable, following or preceding another element. In this experiment, syllables that went together in words had a much higher transitional probability than did syllables at word boundaries that just happened to be adjacent.[5][8][15] Remarkably, infants, after a short two-minute presentation, were able to keep track of these statistics and recognize high-probability words. Further research has since replicated these results with natural languages unfamiliar to infants, indicating that learning infants also keep track of the direction (forward or backward) of the transitional probabilities.[8] Though the neural processes behind this phenomenon remain largely unknown, recent research reports increased activity in the left inferior frontal gyrus and the middle frontal gyrus during the detection of word boundaries.[16]
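Transitional probability is simple to compute: for adjacent syllables a and b, it is P(b | a) = count(a followed by b) / count(a). The sketch below builds a toy Saffran-style stream from two made-up trisyllabic nonsense words (hypothetical stimuli, not the actual experimental materials) and shows the within-word vs. boundary contrast:

```python
import random
from collections import Counter

# Two nonsense words, concatenated in random order with no pauses,
# imitating the continuous artificial-language stream described above.
words = [("tu", "pi", "ro"), ("go", "la", "bu")]
rng = random.Random(3)
stream = []
for _ in range(500):
    stream.extend(rng.choice(words))

pair_counts = Counter(zip(stream, stream[1:]))  # adjacent syllable pairs
syll_counts = Counter(stream[:-1])              # occurrences with a successor

def transitional_probability(a, b):
    """Forward transitional probability P(next syllable is b | current is a)."""
    return pair_counts[(a, b)] / syll_counts[a]

# Within a word, the next syllable is fully determined; across a word
# boundary it depends on which word is chosen next (here ~0.5 each).
print(transitional_probability("tu", "pi"))  # 1.0
print(transitional_probability("ro", "go"))  # ~0.5
```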
The development of syllable-ordering biases is an important step along the way to full language development. The ability to categorize syllables and group together frequently co-occurring sequences may be critical in the development of a protolexicon, a set of common language-specific word templates based on characteristic patterns in the words an infant hears. The development of this protolexicon may in turn allow for the recognition of new types of patterns, e.g. the high frequency of word-initial stressed syllables in English, which would allow infants to further parse words by recognizing common prosodic phrasings as autonomous linguistic units, restarting the dynamic cycle of word and language learning.[5]
The question of how novice language-users are capable of associating learned labels with the appropriate referent, the person or object in the environment which the label names, has been at the heart of philosophical considerations of language and meaning from Plato to Quine to Hofstadter.[17] This problem, that of finding some solid relationship between word and object, of finding a word's meaning without succumbing to an infinite recursion of dictionary look-up, is known as the symbol grounding problem.[18]
Researchers have shown that this problem is intimately linked with the ability to parse language, and that those words that are easy to segment due to their high transitional probabilities are also easier to map to an appropriate referent.[8] This serves as further evidence of the developmental progression of language acquisition, with children requiring an understanding of the sound distributions of natural languages to form phonetic categories, parse words based on these categories, and then use these parses to map them to objects as labels.
The developmentally earliest understanding of word-to-referent associations has been reported at six months old, with infants comprehending the words 'mommy' and 'daddy' or their familial or cultural equivalents. Further studies have shown that infants quickly develop in this capacity and by seven months are capable of learning associations between moving images and nonsense words and syllables.[5]
It is important to note that there is a distinction, often confounded in acquisition research, between mapping a label to a specific instance or individual and mapping a label to an entire class of objects. This latter process is sometimes referred to as generalization or rule learning. Research has shown that if input is encoded in terms of perceptually salient dimensions rather than specific details, and if patterns in the input indicate that a number of objects are named interchangeably in the same context, a language learner will be much more likely to generalize that name to every instance with the relevant features. This tendency is heavily dependent on the consistency of context clues and the degree to which word contexts overlap in the input.[10] These differences are furthermore linked to the well-known patterns of under- and overgeneralization in infant word learning. Research has also shown that the frequency of co-occurrence of referents is tracked as well, which helps create associations and dispel ambiguities in object-referent models.[19]
The ability to appropriately generalize to whole classes of yet unseen words, coupled with the abilities to parse continuous speech and keep track of word-ordering regularities, may be the critical skills necessary to develop proficiency with and knowledge of syntax and grammar.[5]
According to recent research, there is no neural evidence of statistical language learning in children with autism spectrum disorders.[citation needed] When exposed to a continuous stream of artificial speech, children without autism displayed less cortical activity in the dorsolateral frontal cortices (specifically the middle frontal gyrus) as cues for word boundaries increased. However, activity in these networks remained unchanged in autistic children, regardless of the verbal cues provided. This evidence, highlighting the importance of proper frontal lobe function, supports the "executive functions" theory, which has been used to explain some of the biologically based causes of autistic language deficits. With impairments to working memory, decision making, planning, and goal setting, all vital functions of the frontal lobe, autistic children are at a loss when it comes to socializing and communication (Ozonoff et al., 2004). Additionally, researchers have found that the level of communicative impairment in autistic children was inversely correlated with signal increases in these same regions during exposure to artificial languages. Based on this evidence, researchers have concluded that children with autism spectrum disorders lack the neural architecture to identify word boundaries in continuous speech. Early word segmentation skills have been shown to predict later language development, which could explain why language delay is a hallmark feature of autism spectrum disorders.[20]
Language learning takes place in different contexts, with both the infant and the caregiver engaging in social interactions. Recent research has investigated how infants and adults use cross-situational statistics to learn not only the meanings of words but also the constraints within a context. For example, Smith and colleagues proposed that infants learn language by acquiring a bias to extend labels to similar objects drawn from well-defined categories. Important to this view is the idea that the constraints that assist word learning are not independent of the input itself or of the infant's experience. Rather, constraints come about as infants learn about the ways that words are used and begin to pay attention to certain characteristics of objects that have previously been used to represent the words.
An inductive learning problem can occur because words are oftentimes used in ambiguous situations in which more than one possible referent is available. This can confuse infants, as they may not be able to distinguish which words should be extended to label the objects being referred to. Smith and Yu proposed that one way to resolve such ambiguous situations is to track word-referent pairings over multiple scenes. For instance, an infant who hears a word in the presence of object A and object B will be unsure whether the word refers to object A or object B. However, if the infant then hears the label again in the presence of object B and object C, the infant can conclude that object B is the referent of the label, because object B consistently pairs with the label across different situations.
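Cross-situational tracking of this kind reduces to tallying word-object co-occurrences across scenes. The sketch below is a minimal illustration of the idea; the nonsense words and object names are invented for the example.

```python
from collections import defaultdict

# Each scene pairs the words heard with the objects visible at the time.
scenes = [
    ({"dax"}, {"A", "B"}),   # "dax" heard with objects A and B present
    ({"dax"}, {"B", "C"}),   # "dax" heard again, now with B and C
    ({"wug"}, {"A", "C"}),
    ({"wug"}, {"A", "D"}),
]

# Tally how often each word co-occurs with each object.
counts = defaultdict(lambda: defaultdict(int))
for spoken, visible in scenes:
    for word in spoken:
        for obj in visible:
            counts[word][obj] += 1

def best_referent(word):
    """The object that most consistently co-occurs with the word."""
    return max(counts[word], key=counts[word].get)

print(best_referent("dax"))  # B is present in both "dax" scenes
print(best_referent("wug"))  # A is present in both "wug" scenes
```

A single ambiguous scene cannot identify the referent, but aggregating over scenes singles out the object that consistently accompanies the label, exactly as in the Smith and Yu proposal described above.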
Computational models have long been used to explore the mechanisms by which language learners process and manipulate linguistic information. Models of this type allow researchers to systematically control important learning variables that are oftentimes difficult to manipulate at all in human participants.[21]
Associative neural network models of language acquisition are one of the oldest types of cognitive model, using distributed representations and changes in the weights of the connections between the nodes that make up these representations to simulate learning in a manner reminiscent of the plasticity-based neuronal reorganization that forms the basis of human learning and memory.[22] Associative models represent a break with classical cognitive models, characterized by discrete and context-free symbols, in favor of a dynamical systems approach to language better capable of handling temporal considerations.[23]
A precursor to this approach, and one of the first model types to account for the dimension of time in linguistic comprehension and production, was Elman's simple recurrent network (SRN). By making use of a feedback network to represent the system's past states, SRNs were able, in a word-prediction task, to cluster input into self-organized grammatical categories based solely on statistical co-occurrence patterns.[23][24]
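An Elman-style SRN can be sketched in a few dozen lines of NumPy. This is a minimal illustration, not Elman's original setup: the vocabulary, corpus, network size, and learning rate are all invented, and backpropagation is truncated to a single step for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus with a deterministic word order (purely illustrative vocabulary).
vocab = ["boy", "runs", "girl", "jumps"]
idx = {w: i for i, w in enumerate(vocab)}
corpus = ["boy", "runs", "girl", "jumps"] * 100

V, H = len(vocab), 8                      # vocabulary size, hidden units
Wxh = rng.normal(0.0, 0.5, (H, V))        # input -> hidden
Whh = rng.normal(0.0, 0.5, (H, H))        # context (previous hidden) -> hidden
Why = rng.normal(0.0, 0.5, (V, H))        # hidden -> output

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Next-word prediction with one-step truncated backpropagation.
lr = 0.05
for _ in range(30):
    h = np.zeros(H)
    for cur, nxt in zip(corpus, corpus[1:]):
        x = one_hot(idx[cur])
        h_prev = h
        h = np.tanh(Wxh @ x + Whh @ h_prev)   # hidden state carries the past
        p = softmax(Why @ h)
        dy = p - one_hot(idx[nxt])            # softmax cross-entropy gradient
        dh = (Why.T @ dy) * (1.0 - h ** 2)    # backprop through tanh
        Why -= lr * np.outer(dy, h)
        Wxh -= lr * np.outer(dh, x)
        Whh -= lr * np.outer(dh, h_prev)

# After training, query the network for the word that follows "boy".
h = np.tanh(Wxh @ one_hot(idx["boy"]))
p = softmax(Why @ h)
print(vocab[int(p.argmax())])
```

The feedback connection (`Whh` acting on the previous hidden state) is what lets the network represent its own past, which is the architectural point of the SRN; the prediction task drives the hidden layer to organize words by their statistical co-occurrence roles.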
Early successes such as these paved the way for dynamical systems research into linguistic acquisition, answering many questions about early linguistic development but leaving many others unanswered, such as how these statistically acquired lexemes are represented.[23] Of particular importance in recent research has been the effort to understand the dynamic interaction of learning (e.g. language-based) and learner (e.g. speaker-based) variables in lexical organization and competition in bilinguals.[21] In the ceaseless effort to move toward more psychologically realistic models, many researchers have turned to a subset of associative models, self-organizing maps (SOMs), as established, cognitively plausible models of language development.[25][26]
SOMs have been helpful to researchers in identifying and investigating the constraints and variables of interest in a number of acquisition processes, and in exploring the consequences of these findings on linguistic and cognitive theories. By identifying working memory as an important constraint both for language learners and for current computational models, researchers have been able to show that manipulation of this variable allows for syntactic bootstrapping, drawing not just categorical but actual content meaning from words' positional co-occurrence in sentences.[27]
Some recent models of language acquisition have centered around methods of Bayesian inference to account for infants' abilities to appropriately parse streams of speech and acquire word meanings. Models of this type rely heavily on the notion of conditional probability (the probability of A given B), in line with findings concerning infants' use of transitional probabilities of words and syllables to learn words.[15]
Models that make use of these probabilistic methods have been able to merge the previously dichotomous language acquisition perspectives of social theories, which emphasize the importance of learning speaker intentions, and statistical and associative theories, which rely on cross-situational contexts, into a single joint-inference problem. This approach has led to important results in explaining acquisition phenomena such as mutual exclusivity, one-trial learning or fast mapping, and the use of social intentions.[28]
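The flavor of such joint inference can be illustrated for mutual exclusivity with a toy Bayesian score over candidate referents. All numbers and names here are arbitrary illustrative assumptions, not parameters from any published model: the prior simply penalizes mapping a novel word onto an object that already has a known label.

```python
# One object already has a label; the other is novel and unlabeled.
known_labels = {"ball_object": "ball", "novel_object": None}

def referent_posterior(objects, me_penalty=0.1):
    """Normalized scores for which object a novel word refers to.

    The prior encodes mutual exclusivity by penalizing already-labeled
    objects; the likelihood of the word given each referent is taken as
    uniform in this toy version.
    """
    scores = {o: (me_penalty if known_labels[o] else 1.0) for o in objects}
    total = sum(scores.values())
    return {o: s / total for o, s in scores.items()}

post = referent_posterior(["ball_object", "novel_object"])
print(max(post, key=post.get))  # the unlabeled novel object wins
```

Hearing "dax" in the presence of a ball and a novel object, the learner concentrates posterior mass on the novel object, reproducing in miniature the one-trial fast-mapping behavior these models explain.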
While these results seem to be robust, studies concerning these models' abilities to handle more complex situations such as multiple referent to single label mapping, multiple label to single referent mapping, and bilingual language acquisition in comparison to associative models' successes in these areas have yet to be explored. Hope remains, though, that these model types may be merged to provide a comprehensive account of language acquisition.[29]
Along the lines of probabilistic frequencies, the C/V hypothesis states that language hearers use consonantal frequencies, rather than vowels, to distinguish between words (lexical distinctions) in continuous speech strings; vowels are more pertinent to rhythmic identification. Several follow-up studies supported this finding, showing that vowels are processed independently of their local statistical distribution.[30] Other research has shown that the consonant-vowel ratio does not influence the size of lexicons when comparing distinct languages. In languages with a higher consonant ratio, children may depend more on consonant neighbors than on rhyme or vowel frequency.[31]
Some models of language acquisition have been based on adaptive parsing[32] and grammar induction algorithms.[33]
|
https://en.wikipedia.org/wiki/Statistical_language_acquisition
|
In machine learning and computer vision, M-theory is a learning framework inspired by feed-forward processing in the ventral stream of the visual cortex and originally developed for the recognition and classification of objects in visual scenes. M-theory was later applied to other areas, such as speech recognition. On certain image recognition tasks, algorithms based on a specific instantiation of M-theory, HMAX, achieved human-level performance.[1]
The core principle of M-theory is extracting representations invariant under various transformations of images (translation, scale, 2D and 3D rotation, and others). In contrast with other approaches that use invariant representations, in M-theory the representations are not hardcoded into the algorithms but learned. M-theory also shares some principles with compressed sensing. The theory proposes a multilayered hierarchical learning architecture, similar to that of the visual cortex.
A great challenge in visual recognition tasks is that the same object can be seen in a variety of conditions. It can be seen from different distances, from different viewpoints, under different lighting, partially occluded, etc. In addition, for particular classes of objects, such as faces, highly complex specific transformations may be relevant, such as changing facial expressions. For learning to recognize images, it is greatly beneficial to factor out these variations. Doing so results in a much simpler classification problem and, consequently, in a great reduction of the sample complexity of the model.
A simple computational experiment illustrates this idea. Two instances of a classifier were trained to distinguish images of planes from those of cars. For training and testing of the first instance, images with arbitrary viewpoints were used. The other instance received only images seen from a particular viewpoint, which was equivalent to training and testing the system on an invariant representation of the images. The second classifier performed quite well even after receiving a single example from each category, while the performance of the first classifier was close to random guessing even after seeing 20 examples.
Invariant representations have been incorporated into several learning architectures, such as neocognitrons. Most of these architectures, however, provided invariance through custom-designed features or properties of the architecture itself. While this helps to take into account some sorts of transformations, such as translations, it is very nontrivial to accommodate other sorts, such as 3D rotations and changing facial expressions. M-theory provides a framework for how such transformations can be learned. In addition to this higher flexibility, the theory also suggests how the human brain may have similar capabilities.
Another core idea of M-theory is close in spirit to ideas from the field of compressed sensing. An implication of the Johnson–Lindenstrauss lemma says that a particular number of images can be embedded into a low-dimensional feature space with the same distances between images by using random projections. This result suggests that the dot product between an observed image and some other image stored in memory, called a template, can be used as a feature to help distinguish the image from other images. The template need not be related to the image in any way; it can be chosen randomly.
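The distance-preservation claim is easy to check numerically. The sketch below is an illustration with arbitrary sizes (20 random "images" in 2000 dimensions, projected to 500), not an implementation of M-theory itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 20, 2000, 500                    # images, ambient dim, projected dim
X = rng.normal(size=(n, d))                # random high-dimensional "images"
R = rng.normal(size=(d, k)) / np.sqrt(k)   # random Gaussian projection matrix
Y = X @ R                                  # low-dimensional embeddings

# Pairwise distances survive the projection up to a small relative error.
errors = []
for i in range(n):
    for j in range(i + 1, n):
        orig = np.linalg.norm(X[i] - X[j])
        proj = np.linalg.norm(Y[i] - Y[j])
        errors.append(abs(proj - orig) / orig)
print(max(errors))  # typically well under 0.2 at these sizes
```

Each column of `R` plays the role of a random template: its dot products with the images form one coordinate of the embedding, and collectively these random features preserve the geometry needed to tell the images apart.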
The two ideas outlined in the previous sections can be brought together to construct a framework for learning invariant representations. The key observation is how the dot product between an image $I$ and a template $t$ behaves when the image is transformed (by such transformations as translations, rotations, and scalings). If the transformation $g$ is a member of a unitary group of transformations, then the following holds:
$$\langle gI,t\rangle =\langle I,g^{-1}t\rangle \qquad (1)$$
In other words, the dot product of the transformed image and a template is equal to the dot product of the original image and the inversely transformed template. For instance, for an image rotated by 90 degrees, the inversely transformed template would be rotated by −90 degrees.
Consider the set of dot products of an image $I$ with all possible transformations of the template: $\{\langle I,g't\rangle \mid g'\in G\}$. If one applies a transformation $g$ to $I$, the set becomes $\{\langle gI,g't\rangle \mid g'\in G\}$. But because of property (1), this is equal to $\{\langle I,g^{-1}g't\rangle \mid g'\in G\}$. The set $\{g^{-1}g'\mid g'\in G\}$ is equal to the set of all elements in $G$. To see this, note that every $g^{-1}g'$ is in $G$ due to the closure property of groups, and for every $g''$ in $G$ there exists a preimage $g'$ such that $g''=g^{-1}g'$ (namely, $g'=gg''$). Thus, $\{\langle I,g^{-1}g't\rangle \mid g'\in G\}=\{\langle I,g''t\rangle \mid g''\in G\}$. One can see that the set of dot products remains the same even though a transformation was applied to the image. This set by itself may serve as a (very cumbersome) invariant representation of an image. More practical representations can be derived from it.
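Both property (1) and the invariance of the dot-product set can be verified concretely for the group of cyclic translations acting on a 1-D "image" (a small worked stand-in for the 2-D case; the signal size and template are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
I = rng.normal(size=d)                 # the "image": a 1-D signal
t = rng.normal(size=d)                 # a randomly chosen template

def shift(v, g):
    """Group element: cyclic translation by g positions (unitary)."""
    return np.roll(v, g)

# Property (1): <gI, t> = <I, g^{-1} t>.
g = 3
print(np.isclose(shift(I, g) @ t, I @ shift(t, -g)))  # True

# The set of dot products with all transformed templates is invariant:
def dot_set(v):
    return np.sort([v @ shift(t, k) for k in range(d)])

print(np.allclose(dot_set(I), dot_set(shift(I, g))))  # True
```

Shifting the image merely permutes which transformed template produces which dot product, so the multiset of values (here compared after sorting) is unchanged, exactly as the argument above concludes.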
In the introductory section, it was claimed that M-theory allows invariant representations to be learned. This is because templates and their transformed versions can be learned from visual experience, by exposing the system to sequences of transformations of objects. It is plausible that similar visual experiences occur in the early period of human life, for instance when infants twiddle toys in their hands. Because templates may be totally unrelated to the images that the system will later try to classify, memories of these visual experiences may serve as a basis for recognizing many different kinds of objects in later life. However, as shown later, for some kinds of transformations, specific templates are needed.
To implement the ideas described in the previous sections, one needs to know how to derive a computationally efficient invariant representation of an image. Such a unique representation for each image can be characterized by a set of one-dimensional probability distributions (empirical distributions of the dot products between the image and a set of templates stored during unsupervised learning). These probability distributions can in turn be described by either histograms or a set of their statistical moments, as shown below.
The orbit $O_I$ is the set of images $gI$ generated from a single image $I$ under the action of the group $G$, for all $g\in G$.
In other words, the images of an object and of its transformations correspond to an orbit $O_I$. If two orbits have a point in common they are identical everywhere,[2] i.e. an orbit is an invariant and unique representation of an image. So, two images are called equivalent when they belong to the same orbit: $I\sim I'$ if $\exists g\in G$ such that $I'=gI$. Conversely, two orbits are different if none of the images in one orbit coincides with any image in the other.[3]
A natural question arises: how can one compare two orbits? There are several possible approaches. One of them employs the fact that, intuitively, two empirical orbits are the same irrespective of the ordering of their points. Thus, one can consider the probability distribution $P_I$ induced by the group's action on the image $I$ ($gI$ can be seen as a realization of a random variable).
This probability distribution $P_I$ can be almost uniquely characterized by $K$ one-dimensional probability distributions $P_{\langle I,t^k\rangle}$ induced by the (one-dimensional) results of the projections $\langle I,t^k\rangle$, where $t^k$, $k=1,\ldots,K$, are a set of templates (randomly chosen images), based on the Cramér–Wold theorem[4] and concentration of measure.
Consider $n$ images $X_n\subset X$. Let $K\geq \frac{2}{c\varepsilon^{2}}\log\frac{n}{\delta}$, where $c$ is a universal constant. Then
with probability $1-\delta^{2}$, for all $I,I'\in X_n$.
This result (informally) says that an approximately invariant and unique representation of an image $I$ can be obtained from the estimates of $K$ one-dimensional probability distributions $P_{\langle I,t^k\rangle}$ for $k=1,\ldots,K$. The number $K$ of projections needed to discriminate $n$ orbits, induced by $n$ images, up to precision $\varepsilon$ (and with confidence $1-\delta^{2}$) is $K\geq \frac{2}{c\varepsilon^{2}}\log\frac{n}{\delta}$, where $c$ is a universal constant.
To classify an image, the following "recipe" can be used:
Estimates of such one-dimensional probability density functions (PDFs) $P_{\langle I,t^k\rangle}$ can be written in terms of histograms as $\mu_{n}^{k}(I)=\frac{1}{|G|}\sum_{i=1}^{|G|}\eta_{n}(\langle I,g_{i}t^{k}\rangle)$, where $\eta_{n}$, $n=1,\ldots,N$, is a set of nonlinear functions. These one-dimensional probability distributions can be characterized with $N$-bin histograms or a set of statistical moments. For example, HMAX represents an architecture in which pooling is done with a max operation.
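A pooled signature of this kind can be sketched for cyclic translations, with max pooling standing in for the histogram (as in HMAX). The signal size and number of templates below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 8, 5
I = rng.normal(size=d)                  # the "image": a 1-D signal
templates = rng.normal(size=(K, d))     # K randomly chosen templates

def pooled_signature(image, pool=np.max):
    """One number per template: pool the dot products of the image with
    all cyclic translations of that template (max pooling, as in HMAX)."""
    sig = []
    for t in templates:
        dots = [image @ np.roll(t, k) for k in range(d)]
        sig.append(pool(dots))
    return np.array(sig)

# The pooled signature does not change when the image is translated.
shifted = np.roll(I, 3)
print(np.allclose(pooled_signature(I), pooled_signature(shifted)))  # True
```

Translating the image only permutes the per-template dot products, so any permutation-insensitive pooling (max, mean, histogram bins) yields the same $K$-dimensional signature, which is the invariant representation the "recipe" calls for.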
In the "recipe" for image classification, groups of transformations are approximated with a finite number of transformations. Such an approximation is possible only when the group is compact.
Such groups as all translations and all scalings of the image are not compact, as they allow arbitrarily large transformations. However, they are locally compact. For locally compact groups, invariance is achievable within a certain range of transformations.[2]
Assume that $G_0$ is a subset of transformations from $G$ for which the transformed patterns exist in memory. For an image $I$ and template $t^k$, assume that $\langle I,g^{-1}t^k\rangle$ is equal to zero everywhere except on some subset of $G_0$. This subset is called the support of $\langle I,g^{-1}t^k\rangle$ and denoted $\operatorname{supp}(\langle I,g^{-1}t^k\rangle)$. It can be proven that if, for a transformation $g'$, this support set also lies within $g'G_0$, then the signature of $I$ is invariant with respect to $g'$.[2] This theorem determines the range of transformations for which invariance is guaranteed to hold.
One can see that the smaller $\operatorname{supp}(\langle I,g^{-1}t^k\rangle)$ is, the larger the range of transformations for which invariance is guaranteed to hold. This means that for a group that is only locally compact, not all templates work equally well. Preferable templates are those with a reasonably small $\operatorname{supp}(\langle gI,t^k\rangle)$ for a generic image. This property is called localization: templates are sensitive only to images within a small range of transformations. Although minimizing $\operatorname{supp}(\langle gI,t^k\rangle)$ is not absolutely necessary for the system to work, it improves the approximation of invariance. Requiring localization simultaneously for translation and scale yields a very specific kind of template: Gabor functions.[2]
The desirability of custom templates for non-compact groups is in conflict with the principle of learning invariant representations. However, for certain kinds of regularly encountered image transformations, templates might be the result of evolutionary adaptations. Neurobiological data suggest that there is Gabor-like tuning in the first layer of the visual cortex.[5] The optimality of Gabor templates for translations and scales is a possible explanation of this phenomenon.
Many interesting transformations of images do not form groups. For instance, transformations of images associated with 3D rotation of the corresponding 3D object do not form a group, because it is impossible to define an inverse transformation (two objects may look the same from one angle but different from another). However, approximate invariance is still achievable even for non-group transformations, if the localization condition for templates holds and the transformation can be locally linearized.
As stated in the previous section, for the specific case of translations and scalings, the localization condition can be satisfied by the use of generic Gabor templates. However, for general (non-group) transformations, the localization condition can be satisfied only for a specific class of objects.[2] More specifically, in order to satisfy the condition, templates must be similar to the objects one would like to recognize. For instance, to build a system that recognizes 3D-rotated faces, one needs to use other 3D-rotated faces as templates. This may explain the existence of specialized modules in the brain, such as the one responsible for face recognition.[2] Even with custom templates, a noise-like encoding of images and templates is necessary for localization. It can be naturally achieved if the non-group transformation is processed on any layer other than the first in a hierarchical recognition architecture.
The previous section suggests one motivation for hierarchical image recognition architectures. However, they have other benefits as well.
Firstly, hierarchical architectures best accomplish the goal of 'parsing' a complex visual scene with many objects consisting of many parts, whose relative positions may vary greatly. In this case, different elements of the system must react to different objects and parts. In hierarchical architectures, representations of parts at different levels of the embedding hierarchy can be stored at different layers of the hierarchy.
Secondly, hierarchical architectures that have invariant representations for parts of objects may facilitate learning of complex compositional concepts. This facilitation may happen through reuse of learned representations of parts constructed previously in the process of learning other concepts. As a result, the sample complexity of learning compositional concepts may be greatly reduced.
Finally, hierarchical architectures have better tolerance to clutter. The clutter problem arises when the target object is in front of a non-uniform background, which functions as a distractor for the visual task. A hierarchical architecture provides signatures for parts of target objects, which do not include parts of the background and are not affected by background variations.[6]
In hierarchical architectures, one layer is not necessarily invariant to all transformations that are handled by the hierarchy as a whole. Some transformations may pass through that layer to upper layers, as in the case of the non-group transformations described in the previous section. For other transformations, an element of the layer may produce invariant representations only within a small range of transformations. For instance, elements of the lower layers of the hierarchy have a small visual field and thus can handle only a small range of translations. For such transformations, the layer should provide covariant, rather than invariant, signatures. The property of covariance can be written as $\operatorname{distr}(\langle \mu_{l}(gI),\mu_{l}(t)\rangle)=\operatorname{distr}(\langle \mu_{l}(I),\mu_{l}(g^{-1}t)\rangle)$, where $l$ is a layer, $\mu_{l}(I)$ is the signature of the image on that layer, and $\operatorname{distr}$ stands for the distribution of values of the expression over all $g\in G$.
M-theory is based on a quantitative theory of the ventral stream of visual cortex.[7][8] Understanding how visual cortex works in object recognition is still a challenging task for neuroscience. Humans and primates are able to memorize and recognize objects after seeing just a couple of examples, unlike state-of-the-art machine vision systems, which usually require a lot of data in order to recognize objects. The use of visual neuroscience in computer vision has previously been limited to early vision, for deriving stereo algorithms (e.g.,[9]) and to justify the use of DoG (derivative-of-Gaussian) filters and, more recently, Gabor filters.[10][11] No real attention has been given to biologically plausible features of higher complexity. While mainstream computer vision has always been inspired and challenged by human vision, it seems never to have advanced past the very first stages of processing in the simple cells of V1 and V2. Although some of the systems inspired, to various degrees, by neuroscience have been tested on at least some natural images, neurobiological models of object recognition in cortex have not yet been extended to deal with real-world image databases.[12]
M-theory learning framework employs a novel hypothesis about the main computational function of the ventral stream: the representation of new objects/images in terms of a signature, which is invariant to transformations learned during visual experience. This allows recognition from very few labeled examples – in the limit, just one.
Neuroscience suggests that a natural functional for a neuron to compute is a high-dimensional dot product between an "image patch" and another image patch (called a template),
which is stored in terms of synaptic weights (synapses per neuron). The standard computational model of a neuron is based on a dot product and a threshold. Another important feature of the visual cortex is that it consists of simple and complex cells. This idea was originally proposed by Hubel and Wiesel,[9] and M-theory employs it. Simple cells compute dot products of an image and transformations of templates $\langle I,g_{i}t^{k}\rangle$ for $i=1,\ldots,|G|$ ($|G|$ is the number of simple cells). Complex cells are responsible for pooling and computing empirical histograms or their statistical moments. The following formula for constructing a histogram can be computed by neurons:
where $\sigma$ is a smooth version of the step function, $\Delta$ is the width of a histogram bin, and $n$ is the number of the bin.
In [13] and [14], the authors applied M-theory to unconstrained face recognition in natural photographs. Unlike the DAR (detection, alignment, and recognition) approach, which handles clutter by detecting objects and cropping closely around them so that very little background remains, this approach accomplishes detection and alignment implicitly by storing transformations of training images (templates) rather than explicitly detecting and aligning or cropping faces at test time. This system is built according to the principles of a recent theory of invariance in hierarchical networks and can evade the clutter problem, which is generally problematic for feedforward systems.
The resulting end-to-end system achieves a drastic improvement in the state of the art on this end-to-end task, reaching the same level of performance as the best systems operating on aligned, closely cropped images (no outside training data). It also performs well on two newer datasets similar to LFW but more difficult: a significantly jittered (misaligned) version of LFW, and SUFR-W. For example, the model's accuracy in the LFW "unaligned & no outside data used" category is 87.55±1.41%, compared to 81.70±1.78% for the state-of-the-art APEM (adaptive probabilistic elastic matching).
The theory was also applied to a range of recognition tasks: from invariant single-object recognition in clutter to multiclass categorization problems on publicly available data sets (CalTech5, CalTech101, MIT-CBCL) and complex (street) scene understanding tasks that require the recognition of both shape-based and texture-based objects (on the StreetScenes data set).[12] The approach performs well: it can learn from only a few training examples and was shown to outperform several more complex state-of-the-art systems, such as constellation models and the hierarchical SVM-based face-detection system. A key element of the approach is a new set of scale- and position-tolerant feature detectors, which are biologically plausible and agree quantitatively with the tuning properties of cells along the ventral stream of the visual cortex. These features are adaptive to the training set, though a universal feature set, learned from a set of natural images unrelated to any categorization task, likewise achieves good performance.
The theory can also be extended to the speech recognition domain.
As an example, [15] proposed an extension of the theory for unsupervised learning of invariant visual representations to the auditory domain and empirically evaluated its validity for voiced speech sound classification. The authors empirically demonstrated that a single-layer, phone-level representation, extracted from base speech features, improves segment classification accuracy and decreases the number of training examples required, in comparison with standard spectral and cepstral features, for an acoustic classification task on the TIMIT dataset.[16]
|
https://en.wikipedia.org/wiki/M-theory_(learning_framework)
|
The Common Object Request Broker Architecture (CORBA) is a standard defined by the Object Management Group (OMG) designed to facilitate the communication of systems that are deployed on diverse platforms. CORBA enables collaboration between systems on different operating systems, programming languages, and computing hardware. CORBA uses an object-oriented model, although the systems that use CORBA do not have to be object-oriented. CORBA is an example of the distributed object paradigm.
While briefly popular in the mid to late 1990s, CORBA's complexity, inconsistency, and high licensing costs have relegated it to being a niche technology.[1]
CORBA enables communication between software written in different languages and running on different computers. Implementation details from specific operating systems, programming languages, and hardware platforms are all removed from the responsibility of developers who use CORBA. CORBA normalizes the method-call semantics between application objects residing either in the same address-space (application) or in remote address-spaces (same host, or remote host on a network). Version 1.0 was released in October 1991.
CORBA uses an interface definition language (IDL) to specify the interfaces that objects present to the outer world. CORBA then specifies a mapping from IDL to a specific implementation language like C++ or Java. Standard mappings exist for Ada, C, C++, C++11, COBOL, Java, Lisp, PL/I, Object Pascal, Python, Ruby, and Smalltalk. Non-standard mappings exist for C#, Erlang, Perl, Tcl, and Visual Basic, implemented by object request brokers (ORBs) written for those languages. Versions of IDL have changed significantly, with annotations replacing some pragmas.
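As a sketch (the module, interface, and operation names here are hypothetical, not from any OMG specification), an IDL definition that an IDL compiler maps into stubs and skeletons for each target language might look like:

```
// Hypothetical IDL interface; an IDL compiler maps it to C++, Java, etc.
module Demo {
    interface Greeter {
        string greet(in string name);
    };
};
```

The generated client stub lets a caller invoke greet() on a remote object as if it were a local call, while the generated server skeleton dispatches incoming requests to the servant.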
The CORBA specification dictates there shall be an ORB through which an application would interact with other objects. This is how it is implemented in practice:
Some IDL mappings are more difficult to use than others. For example, due to the nature of Java, the IDL-Java mapping is rather straightforward and makes usage of CORBA very simple in a Java application. This is also true of the IDL to Python mapping. The C++ mapping requires the programmer to learn datatypes that predate the C++ Standard Template Library (STL). By contrast, the C++11 mapping is easier to use, but requires heavy use of the STL. Since the C language is not object-oriented, the IDL to C mapping requires a C programmer to manually emulate object-oriented features.
In order to build a system that uses or implements a CORBA-based distributed object interface, a developer must either obtain or write the IDL code that defines the object-oriented interface to the logic the system will use or implement. Typically, an ORB implementation includes a tool called an IDL compiler that translates the IDL interface into the target language for use in that part of the system. A traditional compiler then compiles the generated code to create the linkable-object files for use in the application. This diagram illustrates how the generated code is used within the CORBA infrastructure:
This figure illustrates the high-level paradigm for remote interprocess communications using CORBA. The CORBA specification further addresses data typing, exceptions, network protocols, communication timeouts, etc. For example: normally the server side has the Portable Object Adapter (POA) that redirects calls either to the local servants or (to balance the load) to the other servers. The CORBA specification (and thus this figure) leaves various aspects of the distributed system to the application to define, including object lifetimes (although reference-counting semantics are available to applications), redundancy/fail-over, memory management, dynamic load balancing, and application-oriented models such as the separation between display/data/control semantics (e.g. see Model–view–controller).
In addition to providing users with a language- and platform-neutral remote procedure call (RPC) specification, CORBA defines commonly needed services such as transactions and security, events, time, and other domain-specific interface models.
This table presents the history of CORBA standard versions.[2][3][4]
Note that IDL changes have progressed with annotations (e.g. @unit, @topic) replacing some pragmas.
A servant is the invocation target containing methods for handling the remote method invocations. In the newer CORBA versions, the remote object (on the server side) is split into the object (that is exposed to remote invocations) and the servant (to which the former part forwards the method calls). There can be one servant per remote object, or the same servant can support several (possibly all) objects associated with the given Portable Object Adapter. The servant for each object can be set or found "once and forever" (servant activation) or dynamically chosen each time the method on that object is invoked (servant location). Both servant locator and servant activator can forward the calls to another server. In total, this system provides a very powerful means to balance the load, distributing requests between several machines. In object-oriented languages, both the remote object and its servant are objects from the viewpoint of object-oriented programming.
Incarnation is the act of associating a servant with a CORBA object so that it may service requests. Incarnation provides a concrete servant form for the virtual CORBA object. Activation and deactivation refer only to CORBA objects, while the terms incarnation and etherealization refer to servants; the lifetimes of objects and servants are independent. A servant is normally incarnated before activate_object() is called, but the reverse is also possible: create_reference() activates an object without incarnating a servant, and servant incarnation is later done on demand with a servant manager.
The Portable Object Adapter (POA) is the CORBA object responsible for splitting the server-side remote invocation handler into the remote object and its servant. The object is exposed for the remote invocations, while the servant contains the methods that actually handle the requests. The servant for each object can be chosen either statically (once) or dynamically (for each remote invocation), in both cases allowing call forwarding to another server.
On the server side, the POAs form a tree-like structure, where each POA is responsible for one or more objects being served. The branches of this tree can be independently activated or deactivated, use different code for servant location or activation, and apply different request-handling policies.
The following describes some of the most significant ways that CORBA can be used to facilitate communication among distributed objects.
This reference is either acquired through a stringified Uniform Resource Locator (URL), NameService lookup (similar to the Domain Name System (DNS)), or passed in as a method parameter during a call.
Object references are lightweight objects matching the interface of the real object (remote or local). Method calls on the reference result in subsequent calls to the ORB and blocking on the thread while waiting for a reply, success, or failure. The parameters, return data (if any), and exception data are marshaled internally by the ORB according to the local language and OS mapping.
The CORBA Interface Definition Language provides the language- and OS-neutral inter-object communication definition. CORBA objects are passed by reference, while data (integers, doubles, structs, enums, etc.) are passed by value. The combination of objects-by-reference and data-by-value provides the means to enforce strong data typing while compiling clients and servers, yet preserve the flexibility inherent in the CORBA problem-space.
Apart from remote objects, CORBA and RMI-IIOP define the concepts of objects by value (OBV) and valuetypes. The code inside the methods of valuetype objects is executed locally by default. If the OBV has been received from the remote side, the needed code must be either known a priori for both sides or dynamically downloaded from the sender. To make this possible, the record defining an OBV contains the code base, a space-separated list of URLs from which this code should be downloaded. The OBV can also have remote methods.
The CORBA Component Model (CCM) is an addition to the family of CORBA definitions.[5] It was introduced with CORBA 3 and describes a standard application framework for CORBA components. Though not dependent on the language-dependent Enterprise JavaBeans (EJB), it is a more general form of EJB, providing four component types instead of the two that EJB defines. It provides an abstraction of entities that can provide and accept services through well-defined named interfaces called ports.
The CCM has a component container, where software components can be deployed. The container offers a set of services that the components can use. These services include (but are not limited to) notification, authentication, persistence, and transaction processing. These are the most-used services any distributed system requires, and, by moving the implementation of these services from the software components to the component container, the complexity of the components is dramatically reduced.
Portable interceptors are the "hooks" used by CORBA and RMI-IIOP to mediate the most important functions of the CORBA system. The CORBA standard defines the following types of interceptors:
The interceptors can attach specific information to the messages being sent and the IORs being created. This information can later be read by the corresponding interceptor on the remote side. Interceptors can also throw forwarding exceptions, redirecting the request to another target.
The General Inter-ORB Protocol (GIOP) is an abstract protocol by which object request brokers (ORBs) communicate. Standards associated with the protocol are maintained by the Object Management Group (OMG). The GIOP architecture provides several concrete protocols, including:
Each standard CORBA exception includes a minor code to designate the subcategory of the exception. Minor exception codes are of type unsigned long and consist of a 20-bit "Vendor Minor Codeset ID" (VMCID), which occupies the high order 20 bits, and the minor code proper which occupies the low order 12 bits.
Minor codes for the standard exceptions are prefaced by the VMCID assigned to OMG, defined as the unsigned long constant CORBA::OMGVMCID, which has the VMCID allocated to OMG occupying the high order 20 bits. The minor exception codes associated with the standard exceptions that are found in Table 3–13 on page 3-58 are or-ed with OMGVMCID to get the minor code value that is returned in the ex_body structure (see Section 3.17.1, "Standard Exception Definitions", on page 3-52 and Section 3.17.2, "Standard Minor Exception Codes", on page 3-58).
Within a vendor-assigned space, the assignment of values to minor codes is left to the vendor. Vendors may request allocation of VMCIDs by sending email to tagrequest@omg.org. A list of currently assigned VMCIDs can be found on the OMG website at: https://www.omg.org/cgi-bin/doc?vendor-tags
The VMCIDs 0 and 0xfffff are reserved for experimental use. The VMCID OMGVMCID (Section 3.17.1, "Standard Exception Definitions", on page 3-52) and 1 through 0xf are reserved for OMG use.
The Common Object Request Broker: Architecture and Specification (CORBA 2.3)
Corba Location (CorbaLoc) refers to a stringified object reference for a CORBA object that looks similar to a URL.
All CORBA products must support two OMG-defined URLs: "corbaloc:" and "corbaname:". The purpose of these is to provide a human readable and editable way to specify a location where an IOR can be obtained.
An example of corbaloc is shown below:
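A representative corbaloc URL (the host, port, and object key here are illustrative) names the protocol, the endpoint address, and the object key:

```
corbaloc:iiop:1.2@example.com:2809/NameService
```

Here iiop:1.2 selects the protocol and its version, example.com:2809 is the endpoint, and NameService is the object key.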
A CORBA product may optionally support the "http:", "ftp:", and "file:" formats. The semantics of these is that they provide details of how to download a stringified IOR (or, recursively, download another URL that will eventually provide a stringified IOR). Some ORBs do deliver additional formats which are proprietary for that ORB.
CORBA's benefits include language- and OS-independence, freedom from technology-linked implementations, strong data-typing, high level of tunability, and freedom from the details of distributed data transfers.
While CORBA delivered much in the way code was written and software constructed, it has been the subject of criticism.[8]
Much of the criticism of CORBA stems from poor implementations of the standard and not deficiencies of the standard itself. Some of the failures of the standard itself were due to the process by which the CORBA specification was created and the compromises inherent in the politics and business of writing a common standard sourced by many competing implementors.
|
https://en.wikipedia.org/wiki/CORBA
|
SnapTag, invented by SpyderLynk, is a 2D mobile barcode alternative similar to a QR code, but that uses an icon or company logo and code ring rather than a square pattern of black dots.[1][2]
Similar to a QR code, SnapTags can be used to take consumers to a brand's website, but can also facilitate mobile purchases,[3] coupon downloads, free sample requests, video views, promotional entries,[4] Facebook Likes, Pinterest Pins, Twitter Follows, Posts and Tweets.[5] SnapTags offer back-end data mining capabilities.[6]
SnapTags can be used on Google's Android mobile operating system[7] and iOS devices (iPhone/iPod/iPad)[8] using the SnapTag Reader app or third-party apps that have integrated the SnapTag Reader SDK. SnapTags can also be used from standard camera phones by taking a picture of the SnapTag and texting it to the designated short code or email address.[9][10]
|
https://en.wikipedia.org/wiki/SnapTag
|
HTTP Public Key Pinning (HPKP) is an obsolete Internet security mechanism delivered via an HTTP header which allows HTTPS websites to resist impersonation by attackers using misissued or otherwise fraudulent digital certificates.[1] A server uses it to deliver to the client (e.g. a web browser) a set of hashes of public keys that must appear in the certificate chain of future connections to the same domain name.
For example, attackers might compromise a certificate authority and then mis-issue certificates for a web origin. To combat this risk, the HTTPS web server serves a list of "pinned" public key hashes valid for a given time; on subsequent connections, during that validity time, clients expect the server to use one or more of those public keys in its certificate chain. If it does not, an error message is shown, which cannot be (easily) bypassed by the user.
The technique does not pin certificates, but public key hashes. This means that one can use the key pair to get a certificate from any certificate authority, as long as one has access to the private key. The user can also pin public keys of root or intermediate certificates (created by certificate authorities), restricting the site to certificates issued by the said certificate authority.
Due to the complexity of the HPKP mechanism and the possibility of accidental misuse (system administrators could cause a lockout condition), browsers deprecated HPKP in 2017 and removed support in 2018, in favor of Certificate Transparency.[2][3]
The server communicates the HPKP policy to the user agent via an HTTP response header field named Public-Key-Pins (or Public-Key-Pins-Report-Only for reporting-only purposes).
The HPKP policy specifies hashes of the subject public key info of one of the certificates in the website's authentic X.509 public key certificate chain (and at least one backup key) in pin-sha256 directives, and a period of time during which the user agent shall enforce public key pinning in the max-age directive, an optional includeSubDomains directive to include all subdomains (of the domain that sent the header) in the pinning policy, and an optional report-uri directive with the URL where to send pinning violation reports. At least one of the public keys of the certificates in the certificate chain needs to match a pinned public key in order for the chain to be considered valid by the user agent.
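As a sketch (the base64 hash values and report URL are placeholders, not real pins), a policy pinning one production key and one backup key for 60 days, covering subdomains, might look like:

```
Public-Key-Pins:
    pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM=";
    pin-sha256="E9CZ9INDbd+2eRQozYqqbQ2yXLVKB9+xcprMF+44U1g=";
    max-age=5184000; includeSubDomains;
    report-uri="https://example.net/hpkp-report"
```

Each pin-sha256 value is the base64 encoding of a SHA-256 hash of a subject public key info structure; max-age is given in seconds.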
At the time of publishing, RFC 7469 only allowed the SHA-256 hash algorithm. (Appendix A of RFC 7469 mentions some tools and required arguments that can be used to produce hashes for HPKP policies.)
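The pin value itself is the base64 encoding of the SHA-256 digest of the certificate's DER-encoded SubjectPublicKeyInfo. A minimal sketch of that computation (the input bytes here are a dummy stand-in, not a real DER structure):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class PinSha256 {
    // Computes a pin-sha256 value from (assumed) DER-encoded SPKI bytes.
    static String pin(byte[] spkiDer) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            return Base64.getEncoder().encodeToString(md.digest(spkiDer));
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    public static void main(String[] args) {
        // Dummy bytes; a real pin would hash the SPKI extracted from an X.509 certificate.
        byte[] fakeSpki = "not-a-real-spki".getBytes(StandardCharsets.UTF_8);
        System.out.println("pin-sha256=\"" + pin(fakeSpki) + "\"");
    }
}
```

Since a SHA-256 digest is 32 bytes, the resulting base64 string is always 44 characters long, ending in one padding character.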
A website operator can choose to either pin the root certificate public key of a particular root certificate authority, allowing only that certificate authority (and all intermediate authorities signed by its key) to issue valid certificates for the website's domain, and/or to pin the key(s) of one or more intermediate issuing certificates, or to pin the end-entity public key.
At least one backup key must be pinned, in case the current pinned key needs to be replaced. HPKP is not valid without this backup key (a backup key is defined as a public key not present in the current certificate chain).[4]
HPKP is standardized in RFC 7469.[1] It expands on static certificate pinning, which hardcodes public key hashes of well-known websites or services within web browsers and applications.[5]
Most browsers disable pinning for certificate chains with private root certificates to enable various corporate content inspection scanners[6] and web debugging tools (such as mitmproxy or Fiddler). The RFC 7469 standard recommends disabling pinning violation reports for "user-defined" root certificates, where it is "acceptable" for the browser to disable pin validation.[7]
If the user agent performs pin validation and fails to find a valid SPKI fingerprint in the served certificate chain, it will POST a JSON-formatted violation report to the host specified in the report-uri directive, containing details of the violation. This URI may be served via HTTP or HTTPS; however, the user agent cannot send HPKP violation reports to an HTTPS URI in the same domain as the domain for which it is reporting the violation. Hosts may either use HTTP for the report-uri, use an alternative domain, or use a reporting service.[8]
Some browsers also support the Public-Key-Pins-Report-Only header, which only triggers this reporting while not showing an error to the user.
During its peak adoption, HPKP was reported to be used by 3,500 of the top 1 million internet sites, a figure that declined to about 650 by the end of 2019.[9]
Criticism and concern revolved around malicious or human error scenarios known as HPKP Suicide and RansomPKP.[10]In such scenarios, a website owner would have their ability to publish new contents to their domain severely hampered by either losing access to their own keys or having new keys announced by a malicious attacker.
|
https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning
|
The min-entropy, in information theory, is the smallest of the Rényi family of entropies, corresponding to the most conservative way of measuring the unpredictability of a set of outcomes, as the negative logarithm of the probability of the most likely outcome. The various Rényi entropies are all equal for a uniform distribution, but measure the unpredictability of a nonuniform distribution in different ways. The min-entropy is never greater than the ordinary or Shannon entropy (which measures the average unpredictability of the outcomes), and that in turn is never greater than the Hartley or max-entropy, defined as the logarithm of the number of outcomes with nonzero probability.
As with the classical Shannon entropy and its quantum generalization, the von Neumann entropy, one can define a conditional version of min-entropy. The conditional quantum min-entropy is a one-shot, or conservative, analog of conditional quantum entropy.
To interpret a conditional information measure, suppose Alice and Bob were to share a bipartite quantum state $\rho_{AB}$. Alice has access to system $A$ and Bob to system $B$. The conditional entropy measures the average uncertainty Bob has about Alice's state upon sampling from his own system. The min-entropy can be interpreted as the distance of a state from a maximally entangled state.
This concept is useful in quantum cryptography, in the context of privacy amplification (see for example [1]).
If $P=(p_1,\ldots,p_n)$ is a classical finite probability distribution, its min-entropy can be defined as[2]
$$H_{\min}(P)=\log\frac{1}{P_{\max}},\qquad P_{\max}\equiv\max_i p_i.$$
One way to justify the name of the quantity is to compare it with the more standard definition of entropy, which reads $H(P)=\sum_i p_i\log(1/p_i)$ and can thus be written concisely as the expectation value of $\log(1/p_i)$ over the distribution. If instead of taking the expectation value of this quantity we take its minimum value, we get precisely the above definition of $H_{\min}(P)$.
From an operational perspective, the min-entropy equals the negative logarithm of the probability of successfully guessing the outcome of a random draw from $P$.
This is because it is optimal to guess the element with the largest probability and the chance of success equals the probability of that element.
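As a concrete check of these definitions (the distribution is an arbitrary example), for $P=(1/2,1/4,1/4)$ the best guess succeeds with probability $1/2$, so $H_{\min}(P)=-\log_2(1/2)=1$ bit, while the Shannon entropy is $1.5$ bits:

```java
public class MinEntropy {
    // H_min(P) = -log2(max_i p_i): negative log of the best guessing probability.
    static double minEntropy(double[] p) {
        double pMax = 0;
        for (double pi : p) pMax = Math.max(pMax, pi);
        return -Math.log(pMax) / Math.log(2);
    }

    // Shannon entropy H(P) = sum_i p_i * log2(1/p_i): the average surprisal.
    static double shannon(double[] p) {
        double h = 0;
        for (double pi : p) if (pi > 0) h -= pi * Math.log(pi) / Math.log(2);
        return h;
    }

    public static void main(String[] args) {
        double[] p = {0.5, 0.25, 0.25};
        System.out.println(minEntropy(p)); // 1.0
        System.out.println(shannon(p));    // 1.5
    }
}
```

The example also illustrates the ordering stated above: the min-entropy (1 bit) does not exceed the Shannon entropy (1.5 bits), which does not exceed the max-entropy $\log_2 3 \approx 1.585$ bits.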
A natural way to generalize "min-entropy" from classical to quantum states is to leverage the simple observation that quantum states define classical probability distributions when measured in some basis. There is however the added difficulty that a single quantum state can result in infinitely many possible probability distributions, depending on how it is measured. A natural path is then, given a quantum state $\rho$, to still define $H_{\min}(\rho)$ as $\log(1/P_{\max})$, but this time defining $P_{\max}$ as the maximum possible probability that can be obtained measuring $\rho$, maximizing over all possible projective measurements.

Using this, one gets the operational definition that the min-entropy of $\rho$ equals the negative logarithm of the probability of successfully guessing the outcome of any measurement of $\rho$.

Formally, this leads to the definition
$$H_{\min}(\rho)=\max_{\Pi}\log\frac{1}{\max_i \operatorname{tr}(\Pi_i\rho)}=-\max_{\Pi}\log\max_i \operatorname{tr}(\Pi_i\rho),$$
where we are maximizing over the set of all projective measurements $\Pi=(\Pi_i)_i$, the $\Pi_i$ represent the measurement outcomes in the POVM formalism, and $\operatorname{tr}(\Pi_i\rho)$ is therefore the probability of observing the $i$-th outcome when the measurement is $\Pi$.

A more concise method to write the double maximization is to observe that any element of any POVM is a Hermitian operator such that $0\le\Pi\le I$, and thus we can equivalently directly maximize over these to get
$$H_{\min}(\rho)=-\max_{0\le\Pi\le I}\log \operatorname{tr}(\Pi\rho).$$
In fact, this maximization can be performed explicitly and the maximum is obtained when $\Pi$ is the projection onto (any of) the largest eigenvalue(s) of $\rho$. We thus get yet another expression for the min-entropy as
$$H_{\min}(\rho)=-\log\|\rho\|_{\rm op},$$
remembering that the operator norm of a Hermitian positive semidefinite operator equals its largest eigenvalue.
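As a numerical check (the 2×2 density matrix below is an arbitrary example), the min-entropy is the negative base-2 logarithm of the largest eigenvalue, which for a real symmetric 2×2 matrix has a simple closed form:

```java
public class QuantumMinEntropy {
    // Largest eigenvalue of the 2x2 real symmetric matrix [[a, b], [b, d]],
    // via the quadratic formula for its characteristic polynomial.
    static double largestEigenvalue(double a, double b, double d) {
        return ((a + d) + Math.sqrt((a - d) * (a - d) + 4 * b * b)) / 2;
    }

    // H_min(rho) = -log2(largest eigenvalue) for a density matrix rho.
    static double minEntropy(double a, double b, double d) {
        return -Math.log(largestEigenvalue(a, b, d)) / Math.log(2);
    }

    public static void main(String[] args) {
        // Example density matrix: trace 1 and positive semidefinite.
        double a = 0.75, b = 0.25, d = 0.25;
        System.out.println(largestEigenvalue(a, b, d)); // ~0.8536
        System.out.println(minEntropy(a, b, d));        // ~0.2284
    }
}
```

For a maximally mixed qubit ($a = d = 0.5$, $b = 0$) the largest eigenvalue is $1/2$ and the min-entropy reaches its maximum of 1 bit; for a pure state (largest eigenvalue 1) it is 0.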
Let $\rho_{AB}$ be a bipartite density operator on the space $\mathcal{H}_A\otimes\mathcal{H}_B$. The min-entropy of $A$ conditioned on $B$ is defined to be
$$H_{\min}(A|B)_\rho \equiv -\inf_{\sigma_B} D_{\max}(\rho_{AB}\,\|\,I_A\otimes\sigma_B)$$
where the infimum ranges over all density operators $\sigma_B$ on the space $\mathcal{H}_B$. The measure $D_{\max}$ is the maximum relative entropy, defined as
$$D_{\max}(\rho\|\sigma)=\inf_\lambda\{\lambda : \rho\le 2^\lambda\sigma\}.$$
The smooth min-entropy is defined in terms of the min-entropy.
$$H_{\min}^\epsilon(A|B)_\rho = \sup_{\rho'} H_{\min}(A|B)_{\rho'}$$
where the supremum ranges over density operators $\rho'_{AB}$ which are $\epsilon$-close to $\rho_{AB}$. This measure of $\epsilon$-closeness is defined in terms of the purified distance
$$P(\rho,\sigma)=\sqrt{1-F(\rho,\sigma)^2},$$
where $F(\rho,\sigma)$ is the fidelity measure.
These quantities can be seen as generalizations of the von Neumann entropy. Indeed, the von Neumann entropy can be expressed as
$$S(A|B)_\rho = \lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{1}{n}\, H_{\min}^\epsilon(A^n|B^n)_{\rho^{\otimes n}}.$$
This is called the fully quantum asymptotic equipartition theorem.[3] The smoothed entropies share many interesting properties with the von Neumann entropy. For example, the smooth min-entropy satisfies a data-processing inequality:[4]
$$H_{\min}^\epsilon(A|B)_\rho \ge H_{\min}^\epsilon(A|BC)_\rho.$$

Henceforth, we shall drop the subscript $\rho$ from the min-entropy when it is obvious from the context on what state it is evaluated.
Suppose an agent had access to a quantum system $B$ whose state $\rho_B^x$ depends on some classical variable $X$. Furthermore, suppose that each of its elements $x$ is distributed according to some distribution $P_X(x)$. This can be described by the following state over the system $XB$:
$$\rho_{XB}=\sum_x P_X(x)\,|x\rangle\langle x|\otimes\rho_B^x,$$
where $\{|x\rangle\}$ form an orthonormal basis. We would like to know what the agent can learn about the classical variable $x$. Let $p_g(X|B)$ be the probability that the agent guesses $X$ when using an optimal measurement strategy:
$$p_g(X|B)=\sum_x P_X(x)\operatorname{tr}(E_x\rho_B^x),$$
where $E_x$ is the POVM that maximizes this expression. It can be shown[5] that this optimum can be expressed in terms of the min-entropy as
$$p_g(X|B)=2^{-H_{\min}(X|B)}.$$

If the state $\rho_{XB}$ is a product state, i.e. $\rho_{XB}=\sigma_X\otimes\tau_B$ for some density operators $\sigma_X$ and $\tau_B$, then there is no correlation between the systems $X$ and $B$. In this case, it turns out that
$$2^{-H_{\min}(X|B)}=\max_x P_X(x).$$

Since the conditional min-entropy is always smaller than the conditional von Neumann entropy, it follows that
$$p_g(X|B)\ge 2^{-S(A|B)_\rho}.$$
The maximally entangled state $|\phi^+\rangle$ on a bipartite system $\mathcal{H}_A\otimes\mathcal{H}_B$ of dimension $d\times d$ is defined as
$$|\phi^+\rangle_{AB}=\frac{1}{\sqrt{d}}\sum_{x}|x\rangle_A|x\rangle_B$$
where $\{|x\rangle_A\}$ and $\{|x\rangle_B\}$ form orthonormal bases for the spaces $A$ and $B$ respectively.

For a bipartite quantum state $\rho_{AB}$, we define the maximum overlap with the maximally entangled state as
$$q_c(A|B)=d_A\,\max_{\mathcal{E}} F\bigl((I_A\otimes\mathcal{E})\rho_{AB},\,|\phi^+\rangle\langle\phi^+|\bigr)^2$$
where the maximum is over all CPTP operations $\mathcal{E}$ and $d_A$ is the dimension of subsystem $A$. This is a measure of how correlated the state $\rho_{AB}$ is. It can be shown that $q_c(A|B)=2^{-H_{\min}(A|B)}$. If the information contained in $A$ is classical, this reduces to the expression above for the guessing probability.
The proof is from a paper by König, Schaffner and Renner (2008).[6] It involves the machinery of semidefinite programs.[7] Suppose we are given some bipartite density operator $\rho_{AB}$. From the definition of the min-entropy, we have
$$H_{\min}(A|B)=-\inf_{\sigma_B}\inf_\lambda\{\lambda \mid \rho_{AB}\le 2^\lambda(I_A\otimes\sigma_B)\}.$$
This can be re-written as
$$-\log\inf_{\sigma_B}\operatorname{Tr}(\sigma_B)$$
subject to the conditions
$$\sigma_B\ge 0,\qquad I_A\otimes\sigma_B\ge\rho_{AB}.$$
We notice that the infimum is taken over compact sets and hence can be replaced by a minimum. This can then be expressed succinctly as a semidefinite program. Consider the primal problem
$$\begin{cases}\text{min: }\operatorname{Tr}(\sigma_B)\\ \text{subject to: } I_A\otimes\sigma_B\ge\rho_{AB}\\ \phantom{\text{subject to: }}\sigma_B\ge 0.\end{cases}$$
This primal problem can also be fully specified by the matrices $(\rho_{AB}, I_B, \operatorname{Tr}^*)$, where $\operatorname{Tr}^*$ is the adjoint of the partial trace over $A$. The action of $\operatorname{Tr}^*$ on operators on $B$ can be written as
$$\operatorname{Tr}^*(X)=I_A\otimes X.$$
We can express the dual problem as a maximization over operators $E_{AB}$ on the space $AB$ as
$$\begin{cases}\text{max: }\operatorname{Tr}(\rho_{AB}E_{AB})\\ \text{subject to: }\operatorname{Tr}_A(E_{AB})=I_B\\ \phantom{\text{subject to: }}E_{AB}\ge 0.\end{cases}$$
Using the Choi–Jamiołkowski isomorphism, we can define the channel $\mathcal{E}$ such that $d_A\, I_A\otimes\mathcal{E}^\dagger(|\phi^+\rangle\langle\phi^+|)=E_{AB}$, where the Bell state is defined over the space $AA'$. This means that we can express the objective function of the dual problem as
$$\begin{aligned}\langle\rho_{AB},E_{AB}\rangle &= d_A\,\langle\rho_{AB},\, I_A\otimes\mathcal{E}^\dagger(|\phi^+\rangle\langle\phi^+|)\rangle\\ &= d_A\,\langle I_A\otimes\mathcal{E}(\rho_{AB}),\, |\phi^+\rangle\langle\phi^+|\rangle\end{aligned}$$
as desired.

Notice that in the event that the system $A$ is a partly classical state as above, the quantity that we are after reduces to
$$\max P_X(x)\,\langle x|\mathcal{E}(\rho_B^x)|x\rangle.$$
We can interpret $\mathcal{E}$ as a guessing strategy, and this then reduces to the interpretation given above where an adversary wants to find the string $x$ given access to quantum information via system $B$.
|
https://en.wikipedia.org/wiki/Min-entropy
|
In computer programming, the return type (or result type) defines and constrains the data type of the value returned from a subroutine or method.[1] In many programming languages (especially statically-typed programming languages such as C, C++, and Java) the return type must be explicitly specified when declaring a function.
In the Java example:
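A minimal method consistent with the discussion (the method name and body are assumed for illustration):

```java
public class Example {
    // The declared return type int guarantees callers an int result.
    static int sum(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(sum(2, 3)); // prints 5
    }
}
```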
the return type is int. The program can therefore rely on the method returning a value of type int. Various mechanisms are used for the case where a subroutine does not return any value; e.g., a return type of void is used in some programming languages.
A method returns to the code that invoked it when it completes all the statements in the method, reaches a return statement, or throws an exception, whichever occurs first.
You declare a method's return type in its method declaration. Within the body of the method, you use the return statement to return the value.
Any method declared void doesn't return a value. It does not need to contain a return statement, but it may do so. In such a case, a return statement can be used to branch out of a control flow block and exit the method and is simply used like this:
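For instance (the method here is hypothetical), a bare return statement can exit a void method early:

```java
public class VoidReturn {
    // A void method may still use "return;" to branch out of a control flow
    // block and exit the method without producing a value.
    static void printIfPositive(int n) {
        if (n <= 0) {
            return; // exit early; nothing is returned
        }
        System.out.println(n);
    }

    public static void main(String[] args) {
        printIfPositive(7);  // prints 7
        printIfPositive(-1); // prints nothing
    }
}
```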
If you try to return a value from a method that is declared void, you will get a compiler error.
Any method that is not declared void must contain a return statement with a corresponding return value, like this:
The data type of the return value must match the method's declared return type; you can't return an integer value from a method declared to return a boolean.
The getArea() method in the Rectangle class that was discussed in the sections on objects returns an integer:
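A sketch consistent with the surrounding description (the field names width and height are assumed):

```java
class Rectangle {
    int width;
    int height;

    Rectangle(int width, int height) {
        this.width = width;
        this.height = height;
    }

    // Declared return type int: the method returns the value of width * height.
    public int getArea() {
        return width * height;
    }
}
```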
This method returns the integer that the expression width * height evaluates to.
The getArea method returns a primitive type. A method can also return a reference type. For example, in a program to manipulate Bicycle objects, we might have a method like this:
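A sketch of such a method (the Bicycle field and the comparison rule are assumed for illustration): the declared return type is the reference type Bicycle, so the caller receives a reference to an existing object rather than a primitive value:

```java
class Bicycle {
    final double speed; // assumed field for illustration

    Bicycle(double speed) {
        this.speed = speed;
    }
}

class Race {
    // Return type is the reference type Bicycle: the caller gets back a
    // reference to one of the argument objects, not a copy.
    static Bicycle fastest(Bicycle a, Bicycle b) {
        return a.speed >= b.speed ? a : b;
    }
}
```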
|
https://en.wikipedia.org/wiki/Return_type
|