In mathematics, abuse of notation occurs when an author uses a mathematical notation in a way that is not entirely formally correct, but which might help simplify the exposition or suggest the correct intuition (while possibly minimizing errors and confusion at the same time). However, since the concept of formal/syntactic correctness depends on both time and context, certain notations in mathematics that are flagged as abuse in one context could be formally correct in one or more other contexts. Time-dependent abuses of notation may occur when novel notations are introduced to a theory some time before the theory is first formalized; these may be formally corrected by solidifying or otherwise improving the theory. Abuse of notation should be contrasted with misuse of notation, which does not have the presentational benefits of the former and should be avoided (such as the misuse of constants of integration[1]).
A related concept is abuse of language or abuse of terminology, where a term, rather than a notation, is misused. Abuse of language is an almost synonymous expression for abuses that are non-notational by nature. For example, while the word representation properly designates a group homomorphism from a group G to GL(V), where V is a vector space, it is common to call V "a representation of G". Another common abuse of language consists in identifying two mathematical objects that are different but canonically isomorphic.[2] Other examples include identifying a constant function with its value, identifying a group equipped with a binary operation with the name of its underlying set, or identifying with ℝ³ the Euclidean space of dimension three equipped with a Cartesian coordinate system.[3]
Many mathematical objects consist of a set, often called the underlying set, equipped with some additional structure, such as a mathematical operation or a topology. It is a common abuse of notation to use the same notation for the underlying set and the structured object (a phenomenon known as suppression of parameters[3]). For example, ℤ may denote the set of the integers, the group of integers together with addition, or the ring of integers with addition and multiplication. In general, there is no problem with this if the object under reference is well understood, and avoiding such an abuse of notation might even make mathematical texts more pedantic and more difficult to read. When this abuse of notation may be confusing, one may distinguish between these structures by denoting (ℤ, +) the group of integers with addition, and (ℤ, +, ·) the ring of integers.
Similarly, a topological space consists of a set X (the underlying set) and a topology T, which is characterized by a set of subsets of X (the open sets). Most frequently, one considers only one topology on X, so there is usually no problem in referring to X as both the underlying set and the pair consisting of X and its topology T, even though they are technically distinct mathematical objects. Nevertheless, two different topologies are occasionally considered simultaneously on the same set, in which case one must exercise care and use notation such as (X, T) and (X, T′) to distinguish between the different topological spaces.
One may encounter, in many textbooks, sentences such as "Let f(x) be a function ...". This is an abuse of notation, as the name of the function is f, and f(x) denotes the value of f at the element x of its domain. More correct phrasings include "Let f be a function of the variable x ..." or "Let x ↦ f(x) be a function ...". This abuse of notation is widely used, as it simplifies the formulation, and the systematic use of a correct notation quickly becomes pedantic.
A similar abuse of notation occurs in sentences such as "Let us consider the function x² + x + 1 ...", when in fact x² + x + 1 is a polynomial expression, not a function per se. The function that associates x² + x + 1 to x can be denoted x ↦ x² + x + 1. Nevertheless, this abuse of notation is widely used, since it is more concise but generally not confusing.
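The distinction between a function and its values is explicit in most programming languages; a minimal Python sketch (the names f and value are illustrative, not from the text):

```python
# The expression x**2 + x + 1 has a value only once x is bound; the
# function is the mapping x -> x**2 + x + 1, which Python writes as a
# lambda -- the programming analogue of the arrow notation above.
f = lambda x: x**2 + x + 1

# f is the function; f(3) is a value of f, not the function itself.
print(f(3))         # 13
print(callable(f))  # True
```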
Many mathematical structures are defined through a characterizing property (often a universal property). Once this desired property is defined, there may be various ways to construct the structure, and the resulting constructions are formally different objects that nevertheless have exactly the same properties (i.e., are isomorphic). As there is no way to distinguish these isomorphic objects through their properties, it is standard to consider them as equal, even if this is formally wrong.[2]
One example of this is the Cartesian product, which is often seen as associative:

(E × F) × G = E × (F × G) = E × F × G
But this is strictly speaking not true: if x ∈ E, y ∈ F and z ∈ G, the identity ((x, y), z) = (x, (y, z)) would imply that (x, y) = x and z = (y, z), and so ((x, y), z) = (x, y, z) would mean nothing. However, these equalities can be legitimized and made rigorous in category theory, using the idea of a natural isomorphism.
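The set-theoretic point can be made concrete with Python tuples, which model ordered pairs literally (a sketch; the flatten helper is hypothetical):

```python
# Nested tuples are genuinely different objects, so the Cartesian
# product is not associative "on the nose".
x, y, z = 1, 2, 3

left = ((x, y), z)   # an element of (E x F) x G
right = (x, (y, z))  # an element of E x (F x G)
flat = (x, y, z)     # an element of E x F x G

print(left == right)  # False
print(left == flat)   # False

# The natural isomorphism legitimizing the abuse is easy to write down:
def flatten(p):
    (a, b), c = p
    return (a, b, c)

print(flatten(left) == flat)  # True
```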
Another example of similar abuses occurs in statements such as "there are two non-Abelian groups of order 8", which more strictly stated means "there are two isomorphism classes of non-Abelian groups of order 8".
Referring to an equivalence class of an equivalence relation by x instead of [x] is an abuse of notation. Formally, if a set X is partitioned by an equivalence relation ~, then for each x ∈ X, the equivalence class {y ∈ X | y ~ x} is denoted [x]. But in practice, if the remainder of the discussion is focused on the equivalence classes rather than the individual elements of the underlying set, then it is common to drop the square brackets in the discussion.
For example, in modular arithmetic, a finite group of order n can be formed by partitioning the integers via the equivalence relation "x ~ y if and only if x ≡ y (mod n)". The elements of that group would then be [0], [1], ..., [n − 1], but in practice they are usually denoted simply as 0, 1, ..., n − 1.
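A small Python sketch of this bracket-dropping (the helper name cls is illustrative): classes mod n are identified with their canonical representatives 0, ..., n − 1, and the group operation is well defined on those representatives.

```python
n = 5

def cls(x):
    """The class [x] in Z/nZ, identified with its representative in 0..n-1."""
    return x % n

# [7] and [2] are the same class mod 5, so writing 2 for [7] is harmless:
print(cls(7) == cls(2))  # True

# Addition of classes via representatives: [7] + [9] = [16] = [1].
print(cls(cls(7) + cls(9)))  # 1
```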
Another example is the space of (classes of) measurable functions over a measure space, or classes of Lebesgue integrable functions, where the equivalence relation is equality "almost everywhere".
The terms "abuse of language" and "abuse of notation" depend on context. Writing "f : A → B" for a partial function from A to B is almost always an abuse of notation, but not in a category-theoretic context, where f can be seen as a morphism in the category of sets and partial functions.
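The category-theoretic reading can be sketched in Python by modeling a partial function as a dict defined on a subset of its source, with composition defined wherever both lookups succeed (names are illustrative):

```python
# f: A -> B and g: B -> C as partial functions (dicts on subsets).
f = {1: 'a', 2: 'b'}   # undefined at 3, say
g = {'a': True}        # undefined at 'b'

def compose(g, f):
    """g o f, defined exactly where the chain of lookups succeeds."""
    return {x: g[y] for x, y in f.items() if y in g}

print(compose(g, f))  # {1: True}
```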
https://en.wikipedia.org/wiki/Abuse_of_notation
A local area network (LAN) is a computer network that interconnects computers within a limited area such as a residence, campus, or building,[1][2][3] and has its network equipment and interconnects locally managed. LANs facilitate the distribution of data and the sharing of network devices, such as printers.
A LAN contrasts with a wide area network (WAN), which not only covers a larger geographic distance but also generally involves leased telecommunication circuits or Internet links. An even greater contrast is the Internet, a system of globally connected business and personal computers.
Ethernet and Wi-Fi are the two most common technologies used for local area networks; historical network technologies include ARCNET, Token Ring, and LocalTalk.
Most wired network infrastructures utilize Category 5 or Category 6 twisted-pair cabling with RJ45-compatible terminations. This medium provides physical connectivity between the Ethernet interfaces present on a large number of IP-aware devices. Depending on the grade of cable and the quality of installation, speeds of up to 10 Mbit/s, 100 Mbit/s, 1 Gbit/s, or 10 Gbit/s are supported.
In a wireless LAN, users have unrestricted movement within the coverage area. Wireless networks have become popular in residences and small businesses because of their ease of installation, convenience, and flexibility.[4] Most wireless LANs consist of devices containing wireless radio technology that conforms to 802.11 standards as certified by the IEEE. Most wireless-capable residential devices operate at both the 2.4 GHz and 5 GHz frequencies and fall within the 802.11n or 802.11ac standards.[5] Some older home networking devices operate exclusively at 2.4 GHz under 802.11b and 802.11g, or at 5 GHz under 802.11a. Some newer devices operate at the aforementioned frequencies in addition to 6 GHz under Wi-Fi 6E. Wi-Fi is a marketing and compliance certification for IEEE 802.11 technologies.[6] The Wi-Fi Alliance tests compliant products and certifies them for interoperability. The technology may be integrated into smartphones, tablet computers and laptops. Guests are often offered Internet access via a hotspot service.
Simple LANs in office or school buildings generally consist of cabling and one or more network switches; a switch allows devices on a LAN to talk to one another via Ethernet. A switch can be connected to a router, cable modem, or ADSL modem for Internet access. Residential LANs usually have a single router and may also include a wireless repeater. A LAN can include a wide variety of other network devices such as firewalls, load balancers, and network intrusion detection systems.[7] A wireless access point is required for connecting wireless devices to a network; when a router includes this function, it is referred to as a wireless router.
Advanced LANs are characterized by their use of redundant links with switches using the spanning tree protocol to prevent loops, their ability to manage differing traffic types via quality of service (QoS), and their ability to segregate traffic with VLANs. A network bridge binds two different LANs or LAN segments to each other, often in order to grant a wired-only device access to a wireless network medium.
Network topology describes the layout of interconnections between devices and network segments. At the data link layer and physical layer, a wide variety of LAN topologies have been used, including ring, bus, mesh and star; the star topology is the most common today. Wireless LANs (WLANs) also have their own topologies: the independent basic service set (IBSS, an ad hoc network), where each node connects directly to the others (also standardized as Wi-Fi Direct), and the basic service set (BSS, an infrastructure network that uses a wireless access point).[8]
DHCP is used to assign internal IP addresses to members of a local area network. A DHCP server typically runs on the router,[9] with end devices as its clients. All DHCP clients request configuration settings using the DHCP protocol in order to acquire their IP address, a default route and one or more DNS server addresses. Once the client applies these settings, it is able to communicate on that network.[10]
At the higher network layers, protocols such as NetBIOS, IPX/SPX, AppleTalk and others were once common, but the Internet protocol suite (TCP/IP) has prevailed as the standard of choice for almost all local area networks today.
LANs can maintain connections with other LANs via leased lines, leased services, or across the Internet using virtual private network technologies. Depending on how the connections are established and secured, and the distance involved, such linked LANs may also be classified as a metropolitan area network (MAN) or a wide area network (WAN).
Local area networks may be connected to the Internet (a type of WAN) via fixed-line means, such as a DSL/ADSL modem,[11] or via a cellular or satellite modem. Fixed-line connections make use of telephone wires (as with VDSL and VDSL2), coaxial cables, or fiber to the home, which runs fiber-optic cables directly into a house or office building; the non-fixed alternatives use a cellular modem or satellite dish. With Internet access, the Internet service provider (ISP) grants a single WAN-facing IP address to the network. A router is configured with the provider's IP address on the WAN interface, which is shared among all devices in the LAN by network address translation.
A gateway establishes physical and data link layer connectivity to a WAN over a service provider's native telecommunications infrastructure. Such devices typically contain a cable, DSL, or optical modem bound to a network interface controller for Ethernet. Home and small business class routers are often incorporated into these devices for additional convenience, and they often also include an integrated wireless access point and a four-port Ethernet switch.
The ITU-T G.hn and IEEE Powerline standards, which provide high-speed (up to 1 Gbit/s) local area networking over existing home wiring, are examples of home networking technology designed specifically for IPTV delivery.[12]
The increasing demand for and usage of computers in universities and research labs in the late 1960s generated the need to provide high-speed interconnections between computer systems. A 1970 report from the Lawrence Radiation Laboratory detailing the growth of their "Octopus" network gave a good indication of the situation.[13][14]
A number of experimental and early commercial LAN technologies were developed in the 1970s. Ethernet was developed at Xerox PARC between 1973 and 1974.[15][16] The Cambridge Ring was developed at Cambridge University starting in 1974.[17] ARCNET was developed by Datapoint Corporation in 1976 and announced in 1977;[18] it had the first commercial installation in December 1977 at Chase Manhattan Bank in New York.[19] In 1979,[20] the electronic voting system for the European Parliament became the first installation of a LAN connecting hundreds (420) of microprocessor-controlled voting terminals to a polling/selecting central unit over a multidrop bus with master/slave arbitration. It used 10 kilometers of simple unshielded twisted-pair category 3 cable, the same cable used for telephone systems, installed inside the benches of the European Parliament hemicycles in Strasbourg and Luxembourg.[21]
The development and proliferation of personal computers using the CP/M operating system in the late 1970s, and later DOS-based systems starting in 1981, meant that many sites grew to dozens or even hundreds of computers. The initial driving force for networking was to share storage and printers, both of which were expensive at the time. There was much enthusiasm for the concept, and for several years, from about 1983 onward, computer industry pundits habitually declared the coming year to be "the year of the LAN".[22][23][24]
In practice, the concept was marred by the proliferation of incompatible physical layer and network protocol implementations, and a plethora of methods of sharing resources. Typically, each vendor would have its own type of network card, cabling, protocol, and network operating system. A solution appeared with the advent of Novell NetWare, which provided even-handed support for dozens of competing card and cable types, and a much more sophisticated operating system than most of its competitors.
Of the competitors to NetWare, only Banyan Vines had comparable technical strengths, but Banyan never gained a secure base. 3Com produced 3+Share and Microsoft produced MS-Net. These then formed the basis for collaboration between Microsoft and 3Com to create a simple network operating system, LAN Manager, and its cousin, IBM's LAN Server. None of these enjoyed any lasting success; NetWare dominated the personal computer LAN business from early after its introduction in 1983 until the mid-1990s, when Microsoft introduced Windows NT.[25]
In 1983, TCP/IP was first shown capable of supporting actual defense department applications on a Defense Communication Agency LAN testbed located at Reston, Virginia.[26][27] The TCP/IP-based LAN successfully supported Telnet, FTP, and a Defense Department teleconferencing application.[28] This demonstrated the feasibility of employing TCP/IP LANs to interconnect Worldwide Military Command and Control System (WWMCCS) computers at command centers throughout the United States.[29] However, WWMCCS was superseded by the Global Command and Control System (GCCS) before that could happen.
During the same period, Unix workstations were using TCP/IP networking. Although the workstation market segment is now much reduced, the technologies developed in the area continue to be influential on the Internet and in all forms of networking, and the TCP/IP protocol has replaced IPX, AppleTalk, NBF, and other protocols used by the early PC LANs.
Econet was Acorn Computers' low-cost local area network system, intended for use by schools and small businesses. It was first developed for the Acorn Atom and Acorn System 2/3/4 computers in 1981.[30][31]
In the 1980s, several token ring network implementations for LANs were developed.[32][33] IBM released their own implementation of Token Ring in 1985;[34][35] it ran at 4 Mbit/s.[36] IBM claimed that their Token Ring systems were superior to Ethernet, especially under load, but these claims were debated.[37][38] IBM's implementation of Token Ring was the basis of the IEEE 802.5 standard.[39] A 16 Mbit/s version of Token Ring was standardized by the 802.5 working group in 1989.[40] IBM dominated the Token Ring market; in 1990, for example, IBM equipment was the most widely used for Token Ring networks.[41]
Fiber Distributed Data Interface (FDDI), a LAN standard, was considered an attractive campus backbone network technology in the early to mid-1990s, since existing Ethernet networks only offered 10 Mbit/s data rates and Token Ring networks only offered 4 Mbit/s or 16 Mbit/s rates. Thus it was a relatively high-speed choice for that era, with speeds such as 100 Mbit/s.
By 1994, vendors included Cisco Systems, National Semiconductor, Network Peripherals, SysKonnect (acquired by Marvell Technology Group), and 3Com.[42] FDDI installations have largely been replaced by Ethernet deployments.[43]
https://en.wikipedia.org/wiki/Local_area_network
A video display controller (VDC), also called a display engine or display interface, is an integrated circuit which is the main component in a video-signal generator, a device responsible for the production of a TV video signal in a computing or game system. Some VDCs also generate an audio signal, but that is not their main function.
VDCs were used in the home computers of the 1980s and also in some early video game systems.
The VDC is the main component of the video signal generator logic, responsible for generating the timing of video signals such as the horizontal and vertical synchronization signals and the blanking interval signal. Sometimes other supporting chips were necessary to build a complete system, such as RAM to hold pixel data, ROM to hold character fonts, or some discrete logic such as shift registers.
Most often the VDC chip is completely integrated into the logic of the main computer system (its video RAM appears in the memory map of the main CPU), but sometimes it functions as a coprocessor that can manipulate the video RAM contents independently.
A display controller, a graphics accelerator, and a video compression/decompression IC perform very different functions, but since all of this logic is usually found on the chip of a graphics processing unit and is usually not available separately to the end customer, there is often much confusion about these functional blocks.
GPUs with hardware acceleration became popular during the 1990s, including the S3 ViRGE, the Matrox Mystique, and the Voodoo Graphics, though earlier examples such as the NEC μPD7220 had already existed for some time. VDCs often had special hardware for the creation of "sprites", a function that in more modern VDP chips is done with the "bit blitter" using the bit blit function.
One example of a typical video display processor is the "VDP2 32-bit background and scroll plane video display processor" of the Sega Saturn.
Another example is the Lisa (AGA) chip that was used for the improved graphics of the later-generation Amiga computers.
That said, it is not completely clear when a "video chip" is a "video display controller" and when it is a "video display processor". For example, the TMS9918 is sometimes called a "video display controller" and sometimes a "video display processor". In general, however, a "video display processor" has some power to "process" the contents of the video RAM (filling an area of RAM, for example), while a "video display controller" only controls the timing of the video synchronization signals and the access to the video RAM.
The graphics processing unit (GPU) goes one step further than the VDP and normally also supports 3D functionality. This is the kind of chip that is used in modern personal computers.
Video display controllers can be divided into several different types, listed here from simplest to most complex:
Examples of video display controllers are:
Video shifters
CRT Controllers
Video interface controllers
Video coprocessors
Note that many early home computers did not use a VDP chip, but built the whole video display controller from many discrete logic chips (examples are the Apple II, PET, and TRS-80). Because these methods are very flexible, such video display generators could be very capable (or extremely primitive, depending on the quality of the design), but they also needed a lot of components.
Many early systems used some form of early programmable logic array to create a video system; examples include the ZX Spectrum, ZX81, and Elektronika BK-0010, but there were many others. Early implementations were often very primitive, but later implementations sometimes resulted in fairly advanced video systems, like the one in the SAM Coupé. On the lower end, as in the ZX81, the hardware would only perform electrical functions, while the timing and level of the video stream were provided by the microprocessor. As the video data rate was high relative to the processor speed, the computer could only perform actual non-display computations during the retrace period between display frames. This limited performance to at most 25% of the overall available CPU cycles.
These systems could thus build a very capable system with relatively few components, but the low transistor count of early programmable logic meant that the capabilities of early PLA-based systems were often less impressive than those using the video interface controllers or video coprocessors that were available at the same time. Later PLA solutions, such as those usingCPLDsorFPGAs, could result in much more advanced video systems, surpassing those built using off-the-shelf components.
An often-used hybrid solution was to use a video interface controller (often the Motorola 6845) as a basis and expand its capabilities with programmable logic or an ASIC. An example of such a hybrid solution is the original VGA card, which used a 6845 in combination with an ASIC. That is why all current VGA-based video systems still use the hardware registers that were provided by the 6845.
With the advancements made in semiconductor device fabrication, more and more functionality is implemented as integrated circuits, often licensable as semiconductor intellectual property (SIP) cores. Display controller System-in-Package (SiP) blocks can be found on the die of GPUs, APUs and SoCs.
They support a variety of interfaces: VGA, DVI, HDMI, DisplayPort, VHDCI, DMS-59 and more. The PHY options include LVDS, Embedded DisplayPort, TMDS, Flat Panel Display Link, OpenLDI and CML. A modern computer monitor may have a built-in LCD or OLED controller.[4]
For example, a VGA signal created by the GPU is transported over a VGA cable to the monitor's built-in controller; both ends of the cable terminate in a VGA connector. Laptops and other mobile computers use different interfaces between the display controller and the display. A display controller usually supports multiple computer display standards.
A KMS driver is an example of a device driver for display controllers, and AMD Eyefinity is a special brand of display controller with multi-monitor support.
RandR (resize and rotate) is a method to configure the screen resolution and refresh rate of each individual output separately, while at the same time configuring the settings of the windowing system accordingly.
An example of this dichotomy is offered by ARM Holdings: they offer SIP cores for 3D rendering acceleration and for display controllers independently. The former have marketing names such as Mali-200 or Mali-T880, while the latter are available as the Mali-DP500, Mali-DP550 and Mali-DP650.[5]
In 1982, NEC released the NEC μPD7220, one of the most widely used video display controllers in 1980s personal computers. It was used in the NEC PC-9801, APC III, IBM PC compatibles, DEC Rainbow, Tulip System-1, and Epson QX-10.[6] Intel licensed the design and called it the 82720 graphics display controller.[7]
Previously, graphics cards were also called graphics adapters, and the chips used on these ISA/EISA cards consisted solely of a display controller, as this was the only functionality required to connect a computer to a display. Later cards included ICs to perform calculations related to 2D rendering in parallel with the CPU; these cards were referred to as graphics accelerator cards. Similarly, ICs for 3D rendering eventually followed. Such cards were available with VLB, PCI, and AGP interfaces; modern cards typically use the PCI Express bus, as they require much greater bandwidth than the ISA bus can deliver.
https://en.wikipedia.org/wiki/Video_display_controller
Surface diffusion is a general process involving the motion of adatoms, molecules, and atomic clusters (adparticles) at solid material surfaces.[1] The process can generally be thought of in terms of particles jumping between adjacent adsorption sites on a surface, as in figure 1. Just as in bulk diffusion, this motion is typically a thermally promoted process with rates increasing with increasing temperature. Many systems display diffusion behavior that deviates from the conventional model of nearest-neighbor jumps.[2] Tunneling diffusion is a particularly interesting example of an unconventional mechanism wherein hydrogen has been shown to diffuse on clean metal surfaces via the quantum tunneling effect.
Various analytical tools may be used to elucidate surface diffusion mechanisms and rates, the most important of which are field ion microscopy and scanning tunneling microscopy.[3] While in principle the process can occur on a variety of materials, most experiments are performed on crystalline metal surfaces. Due to experimental constraints, most studies of surface diffusion are limited to well below the melting point of the substrate, and much has yet to be discovered regarding how these processes take place at higher temperatures.[4]
Surface diffusion rates and mechanisms are affected by a variety of factors, including the strength of the surface-adparticle bond, the orientation of the surface lattice, attraction and repulsion between surface species, and chemical potential gradients. Surface diffusion is an important concept in surface phase formation, epitaxial growth, heterogeneous catalysis, and other topics in surface science.[5] As such, its principles are critical for the chemical production and semiconductor industries. Real-world applications relying heavily on these phenomena include catalytic converters, integrated circuits used in electronic devices, and silver halide salts used in photographic film.[5]
Surface diffusion kinetics can be thought of in terms of adatoms residing at adsorption sites on a 2D lattice, moving between adjacent (nearest-neighbor) adsorption sites by a jumping process.[1][6] The jump rate is characterized by an attempt frequency and a thermodynamic factor that dictates the probability of an attempt resulting in a successful jump. The attempt frequency ν is typically taken to be simply the vibrational frequency of the adatom, while the thermodynamic factor is a Boltzmann factor dependent on the temperature and on E_diff, the potential energy barrier to diffusion. Equation 1 describes the relationship:

Γ = ν exp(−E_diff / (k_B T))    (1)
where ν and E_diff are as described above, Γ is the jump or hopping rate, T is the temperature, and k_B is the Boltzmann constant. E_diff must be smaller than the energy of desorption for diffusion to occur; otherwise desorption processes would dominate. Importantly, equation 1 tells us how strongly the jump rate varies with temperature. The manner in which diffusion takes place depends on the relationship between E_diff and k_B T as given in the thermodynamic factor: when E_diff < k_B T, the thermodynamic factor approaches unity and E_diff ceases to be a meaningful barrier to diffusion. This case, known as mobile diffusion, is relatively uncommon and has only been observed in a few systems.[7] For the phenomena described throughout this article, it is assumed that E_diff >> k_B T and therefore Γ << ν. In the case of Fickian diffusion it is possible to extract both ν and E_diff from an Arrhenius plot of the logarithm of the diffusion coefficient, D, versus 1/T. For cases where more than one diffusion mechanism is present (see below), there may be more than one E_diff, and the relative distribution between the different processes will change with temperature.
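The jump rate and the Arrhenius analysis can be checked numerically. The following Python sketch uses hypothetical values (ν = 10¹² Hz, E_diff = 0.5 eV, not from the text) to generate synthetic rate data and then recovers both parameters from a linear fit of ln Γ versus 1/T:

```python
import numpy as np

K_B = 8.617e-5   # Boltzmann constant in eV/K
NU = 1.0e12      # assumed attempt frequency, Hz (illustrative)
E_DIFF = 0.5     # assumed diffusion barrier, eV (illustrative)

def hop_rate(T):
    """Equation 1: Gamma = nu * exp(-E_diff / (k_B * T))."""
    return NU * np.exp(-E_DIFF / (K_B * T))

# Synthetic Arrhenius plot: ln(Gamma) vs 1/T is a straight line whose
# slope is -E_diff/k_B and whose intercept is ln(nu).
T = np.linspace(300.0, 600.0, 20)
slope, intercept = np.polyfit(1.0 / T, np.log(hop_rate(T)), 1)

e_diff_fit = -slope * K_B   # recovered barrier, eV
nu_fit = np.exp(intercept)  # recovered attempt frequency, Hz
print(e_diff_fit, nu_fit)
```

Since the synthetic data lie exactly on a line, the fit recovers the input values to numerical precision; with real measurements the same fit yields the effective barrier and prefactor.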
Random walk statistics describe the mean squared displacement of diffusing species in terms of the number of jumps N and the distance per jump a. The number of successful jumps is simply Γ multiplied by the time allowed for diffusion, t. In the most basic model only nearest-neighbor jumps are considered, and a corresponds to the spacing between nearest-neighbor adsorption sites. The root mean squared displacement goes as:

√⟨(Δr)²⟩ = a √N = a √(Γt)
The diffusion coefficient is given as:

D = Γ a² / z
where z = 2 for 1D diffusion, as would be the case for in-channel diffusion, z = 4 for 2D diffusion, and z = 6 for 3D diffusion.[8]
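The random-walk relation ⟨(Δr)²⟩ = N a² can be verified with a small Monte Carlo sketch (illustrative parameters, square lattice, so z = 4 and N plays the role of Γt):

```python
import numpy as np

rng = np.random.default_rng(0)

a = 1.0            # jump length (nearest-neighbor spacing)
n_jumps = 200      # jumps per walker, N = Gamma * t
n_walkers = 5000   # number of independent adparticles

# Each jump is one of the four nearest-neighbor moves on a square lattice.
moves = np.array([[a, 0.0], [-a, 0.0], [0.0, a], [0.0, -a]])
steps = moves[rng.integers(0, 4, size=(n_walkers, n_jumps))]
displacement = steps.sum(axis=1)                # final (x, y) per walker
msd = np.mean(np.sum(displacement**2, axis=1))  # <(Delta r)^2>

print(msd, n_jumps * a**2)  # simulated vs. theoretical N * a^2
```

The simulated mean squared displacement agrees with N a² to within statistical noise, consistent with D = Γa²/4 for this 2D case.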
There are four different general schemes in which diffusion may take place.[9] Tracer diffusion and chemical diffusion differ in the level of adsorbate coverage at the surface, while intrinsic diffusion and mass transfer diffusion differ in the nature of the diffusion environment. Tracer diffusion and intrinsic diffusion both refer to systems where adparticles experience a relatively homogeneous environment, whereas in chemical and mass transfer diffusion adparticles are more strongly affected by their surroundings.
Orientational anisotropy takes the form of a difference in both diffusion rates and mechanisms at the various surface orientations of a given material. For a given crystalline material, each Miller index plane may display unique diffusion phenomena. Close-packed surfaces such as the fcc (111) tend to have higher diffusion rates than the correspondingly more "open" faces of the same material, such as fcc (100).[10][11]
Directional anisotropy refers to a difference in diffusion mechanism or rate in a particular direction on a given crystallographic plane. These differences may be a result of either anisotropy in the surface lattice (e.g., a rectangular lattice) or the presence of steps on a surface. One of the more dramatic examples of directional anisotropy is the diffusion of adatoms on channeled surfaces such as fcc (110), where diffusion along the channel is much faster than diffusion across the channel.
Diffusion of adatoms may occur by a variety of mechanisms. The manner in which they diffuse is important as it may dictate the kinetics of movement, temperature dependence, and overall mobility of surface species, among other parameters. The following is a summary of the most important of these processes:[12]
Recent theoretical work as well as experimental work performed since the late 1970s has brought to light a remarkable variety of surface diffusion phenomena both with regard to kinetics as well as to mechanisms. Following is a summary of some of the more notable phenomena:
Cluster diffusion involves motion of atomic clusters ranging in size from dimers to islands containing hundreds of atoms. Motion of the cluster may occur via the displacement of individual atoms, sections of the cluster, or the entire cluster moving at once.[23] All of these processes involve a change in the cluster's center of mass.
Surface diffusion is a critically important concept in heterogeneous catalysis, as reaction rates are often dictated by the ability of reactants to "find" each other at a catalyst surface. With increased temperature, adsorbed molecules, molecular fragments, atoms, and clusters tend to have much greater mobility (see equation 1). However, with increased temperature the lifetime of adsorption decreases, as the factor kBT becomes large enough for the adsorbed species to overcome the barrier to desorption, Q (see figure 2). Reaction thermodynamics aside, because of this interplay between increased rates of diffusion and decreased lifetime of adsorption, increased temperature may in some cases decrease the overall rate of the reaction.
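This competition can be sketched numerically. The Arrhenius forms D = D₀·exp(−E_diff/kBT) for the diffusivity and τ = τ₀·exp(Q/kBT) for the adsorption lifetime are standard, but every parameter value below (D₀, τ₀, E_diff, Q) is an illustrative assumption, not data for any particular system:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K


def diffusivity(T, D0=1e-3, E_diff=0.5):
    """Arrhenius surface diffusivity; D0 and the barrier E_diff are illustrative."""
    return D0 * math.exp(-E_diff / (K_B * T))


def adsorption_lifetime(T, tau0=1e-13, Q=1.5):
    """Mean residence time before desorption; Q is the (assumed) desorption barrier."""
    return tau0 * math.exp(Q / (K_B * T))


def migration_length(T):
    """Root-mean-square distance travelled before desorbing, ~ sqrt(D * tau)."""
    return math.sqrt(diffusivity(T) * adsorption_lifetime(T))


for T in (300, 400, 500):
    print(T, diffusivity(T), migration_length(T))
```

With these assumed barriers Q > E_diff, so the mean distance travelled before desorption, √(Dτ), shrinks as T rises even though D itself grows, illustrating how a hotter surface is not always faster overall.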
Surface diffusion may be studied by a variety of techniques, including both direct and indirect observations. Two experimental techniques that have proved very useful in this area of study are field ion microscopy and scanning tunneling microscopy.[3] By visualizing the displacement of atoms or clusters over time, it is possible to extract useful information regarding the manner in which the relevant species diffuse, both mechanistic and rate-related. In order to study surface diffusion on the atomistic scale it is unfortunately necessary to perform studies on rigorously clean surfaces and in ultra-high vacuum (UHV) conditions or in the presence of small amounts of inert gas, as is the case when using He or Ne as the imaging gas in field-ion microscopy experiments.
|
https://en.wikipedia.org/wiki/Surface_diffusion
|
In mathematics, a Boolean ring R is a ring for which x² = x for all x in R, that is, a ring that consists of only idempotent elements.[1][2][3] An example is the ring of integers modulo 2.
Every Boolean ring gives rise to a Boolean algebra, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨,[4] which would constitute a semiring). Conversely, every Boolean algebra gives rise to a Boolean ring. Boolean rings are named after the founder of Boolean algebra, George Boole.
There are at least four different and incompatible systems of notation for Boolean rings and algebras:
Historically, the term "Boolean ring" has been used to mean a "Boolean ring possibly without an identity", and "Boolean algebra" has been used to mean a Boolean ring with an identity. The existence of the identity is necessary to consider the ring as an algebra over the field of two elements: otherwise there cannot be a (unital) ring homomorphism of the field of two elements into the Boolean ring. (This is the same as the old use of the terms "ring" and "algebra" in measure theory.[a])
One example of a Boolean ring is the power set of any set X, where the addition in the ring is symmetric difference, and the multiplication is intersection. As another example, we can also consider the set of all finite or cofinite subsets of X, again with symmetric difference and intersection as operations. More generally, with these operations any field of sets is a Boolean ring. By Stone's representation theorem, every Boolean ring is isomorphic to a field of sets (treated as a ring with these operations).
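The power-set example can be checked mechanically. A minimal Python sketch (the underlying set {1, 2, 3} is an arbitrary choice) verifying idempotency, the self-inverse property of symmetric difference, and distributivity:

```python
from itertools import combinations

X = frozenset({1, 2, 3})


def powerset(s):
    """All subsets of s, as frozensets."""
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(sorted(s), r)]


P = powerset(X)
add = lambda a, b: a ^ b   # ring addition = symmetric difference
mul = lambda a, b: a & b   # ring multiplication = intersection

# Every element is idempotent (A & A == A), and A ^ A is the zero element.
assert all(mul(a, a) == a for a in P)
assert all(add(a, a) == frozenset() for a in P)
# Multiplication distributes over addition.
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a in P for b in P for c in P)
print("power set of", set(X), "is a Boolean ring")
```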
Since the join operation ∨ in a Boolean algebra is often written additively, it makes sense in this context to denote ring addition by ⊕, a symbol that is often used to denote exclusive or.
Given a Boolean ring R, for x and y in R we can define

x ∧ y = xy,
x ∨ y = x ⊕ y ⊕ xy,
¬x = 1 ⊕ x.
These operations then satisfy all of the axioms for meets, joins, and complements in a Boolean algebra. Thus every Boolean ring becomes a Boolean algebra. Similarly, every Boolean algebra becomes a Boolean ring thus:

xy = x ∧ y,
x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y).
If a Boolean ring is translated into a Boolean algebra in this way, and then the Boolean algebra is translated into a ring, the result is the original ring. The analogous result holds beginning with a Boolean algebra.
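The two translations, and the fact that they are mutually inverse, can be checked exhaustively on the two-element Boolean ring. A minimal Python sketch using the standard formulas x ∨ y = x ⊕ y ⊕ xy, ¬x = 1 ⊕ x, and x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y):

```python
R = (0, 1)  # the Boolean ring Z/2: XOR is addition, AND is multiplication
xor = lambda a, b: a ^ b
mul = lambda a, b: a & b

# Ring -> algebra: meet, join, and complement derived from the ring operations.
meet = mul
join = lambda a, b: xor(xor(a, b), mul(a, b))   # x + y + xy
comp = lambda a: xor(1, a)                       # 1 + x

# Algebra -> ring: recover ring addition as the "exclusive or" of the algebra.
add_back = lambda a, b: meet(join(a, b), comp(meet(a, b)))

assert all(join(a, b) == (a | b) for a in R for b in R)
assert all(add_back(a, b) == xor(a, b) for a in R for b in R)
print("round trip recovers the original ring")
```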
A map between two Boolean rings is a ring homomorphism if and only if it is a homomorphism of the corresponding Boolean algebras. Furthermore, a subset of a Boolean ring is a ring ideal (prime ring ideal, maximal ring ideal) if and only if it is an order ideal (prime order ideal, maximal order ideal) of the Boolean algebra. The quotient ring of a Boolean ring modulo a ring ideal corresponds to the factor algebra of the corresponding Boolean algebra modulo the corresponding order ideal.
Every Boolean ring R satisfies x ⊕ x = 0 for all x in R, because we know

x ⊕ x = (x ⊕ x)² = x² ⊕ x² ⊕ x² ⊕ x² = (x ⊕ x) ⊕ (x ⊕ x),
and since (R, ⊕) is an abelian group, we can subtract x ⊕ x from both sides of this equation, which gives x ⊕ x = 0. A similar proof shows that every Boolean ring is commutative:

x ⊕ y = (x ⊕ y)² = x² ⊕ xy ⊕ yx ⊕ y² = x ⊕ xy ⊕ yx ⊕ y,

so xy ⊕ yx = 0, and since every element is its own additive inverse, xy = yx.
The property x ⊕ x = 0 shows that any Boolean ring is an associative algebra over the field F2 with two elements, in precisely one way.[citation needed] In particular, any finite Boolean ring has as cardinality a power of two. Not every unital associative algebra over F2 is a Boolean ring: consider for instance the polynomial ring F2[X].
The quotient ring R/I of any Boolean ring R modulo any ideal I is again a Boolean ring. Likewise, any subring of a Boolean ring is a Boolean ring.
Any localization RS⁻¹ of a Boolean ring R by a set S ⊆ R is a Boolean ring, since every element in the localization is idempotent.
The maximal ring of quotients Q(R) (in the sense of Utumi and Lambek) of a Boolean ring R is a Boolean ring, since every partial endomorphism is idempotent.[6]
Every prime ideal P in a Boolean ring R is maximal: the quotient ring R/P is an integral domain and also a Boolean ring, so it is isomorphic to the field F2, which shows the maximality of P. Since maximal ideals are always prime, prime ideals and maximal ideals coincide in Boolean rings.
Every finitely generated ideal of a Boolean ring is principal (indeed, (x, y) = (x + y + xy)). Furthermore, as all elements are idempotents, Boolean rings are commutative von Neumann regular rings and hence absolutely flat, which means that every module over them is flat.
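The identity (x, y) = (x + y + xy) can be verified exhaustively in the power-set Boolean ring, where x + y + xy is simply the union x ∪ y. A Python sketch over all pairs of subsets of a four-element set:

```python
from itertools import combinations

X = frozenset(range(4))
subsets = [frozenset(c) for r in range(len(X) + 1)
           for c in combinations(range(4), r)]

# In the power-set Boolean ring, + is symmetric difference and * is intersection.
for x in subsets:
    for y in subsets:
        g = x ^ y ^ (x & y)        # the claimed single generator x + y + xy
        assert g == x | y          # in set terms this is just the union
        assert (x & g) == x        # x = x * g, so x lies in the ideal (g)
        assert (y & g) == y        # likewise y
print("(x, y) = (x + y + xy) verified on the power set of", set(X))
```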
Unification in Boolean rings is decidable,[7] that is, algorithms exist to solve arbitrary equations over Boolean rings. Both unification and matching in finitely generated free Boolean rings are NP-complete, and both are NP-hard in finitely presented Boolean rings.[8] (In fact, as any unification problem f(X) = g(X) in a Boolean ring can be rewritten as the matching problem f(X) + g(X) = 0, the problems are equivalent.)
Unification in Boolean rings is unitary if all the uninterpreted function symbols are nullary and finitary otherwise (i.e. if the function symbols not occurring in the signature of Boolean rings are all constants, then there exists a most general unifier, and otherwise the minimal complete set of unifiers is finite).[9]
|
https://en.wikipedia.org/wiki/Boolean_ring
|
Domain adaptation is a field associated with machine learning and transfer learning. It addresses the challenge of training a model on one data distribution (the source domain) and applying it to a related but different data distribution (the target domain).
A common example is spam filtering, where a model trained on emails from one user (source domain) is adapted to handle emails for another user with significantly different patterns (target domain).
Domain adaptation techniques can also leverage unrelated data sources to improve learning. When multiple source distributions are involved, the problem extends to multi-source domain adaptation.[1]
Domain adaptation is a specialized area within transfer learning. In domain adaptation, the source and target domains share the same feature space but differ in their data distributions. In contrast, transfer learning encompasses broader scenarios, including cases where the target domain’s feature space differs from that of the source domain(s).[2]
Domain adaptation setups are classified in two ways: according to the distribution shift between the domains, and according to the data available from the target domain.
Common distribution shifts are classified as follows:[3][4]
Domain adaptation problems typically assume that some data from the target domain is available during training. Problems can be classified according to the type of this available data:[5][6]
Let X be the input space (or description space) and let Y be the output space (or label space). The objective of a machine learning algorithm is to learn a mathematical model (a hypothesis) h : X → Y able to attach a label from Y to an example from X. This model is learned from a learning sample S = {(x_i, y_i) ∈ X × Y : i = 1, …, m}.
Usually in supervised learning (without domain adaptation), we suppose that the examples (x_i, y_i) ∈ S are drawn i.i.d. from a distribution D_S with support X × Y (unknown and fixed). The objective is then to learn h (from S) such that it commits the least error possible when labelling new examples coming from the distribution D_S.
The main difference between supervised learning and domain adaptation is that in the latter situation we study two different (but related) distributions D_S and D_T on X × Y[citation needed]. The domain adaptation task then consists of the transfer of knowledge from the source domain D_S to the target one D_T. The goal is then to learn h (from labeled or unlabelled samples coming from the two domains) such that it commits as little error as possible on the target domain D_T[citation needed].
The major issue is the following: if a model is learned from a source domain, what is its capacity to correctly label data coming from the target domain?
The objective is to reweight the source labeled sample such that it "looks like" the target sample (in terms of the error measure considered).[7][8]
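Reweighting can be illustrated with a toy one-dimensional example in which both densities are known Gaussians with a shifted mean (an assumption made purely for illustration; in practice the density ratio must be estimated). The weighted source sample then recovers statistics of the target distribution:

```python
import math
import random

random.seed(0)

# Toy 1-D covariate shift: source and target are Gaussians with different means.
mu_s, mu_t, sigma = 0.0, 1.0, 1.0
source = [random.gauss(mu_s, sigma) for _ in range(20000)]


def density(x, mu):
    # Unnormalized Gaussian density; the shared normalization cancels in the ratio.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))


def w(x):
    """Importance weight w(x) = p_T(x) / p_S(x), known in closed form here."""
    return density(x, mu_t) / density(x, mu_s)


weights = [w(x) for x in source]
# The weighted source mean approximates the target mean, using source data only.
est = sum(wi * xi for wi, xi in zip(weights, source)) / sum(weights)
print(est)
```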
A method for adapting consists in iteratively "auto-labeling" the target examples.[9] The principle is simple:
Note that there exist other iterative approaches, but they usually need target labeled examples.[10][11]
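The auto-labeling idea can be sketched with a nearest-centroid classifier on synthetic one-dimensional data; the data, the classifier, and the number of rounds below are all illustrative assumptions, not part of the cited methods:

```python
import random

random.seed(1)

# Labeled source data: two 1-D classes. Unlabeled target data: shifted versions.
source = [(random.gauss(-2, 1), 0) for _ in range(100)] + \
         [(random.gauss(+2, 1), 1) for _ in range(100)]
target = [random.gauss(-1, 1) for _ in range(100)] + \
         [random.gauss(+3, 1) for _ in range(100)]


def centroids(labeled):
    out = {}
    for c in (0, 1):
        pts = [x for x, y in labeled if y == c]
        out[c] = sum(pts) / len(pts)
    return out


labeled = list(source)
for _ in range(5):                       # a few self-labeling rounds
    cent = centroids(labeled)
    # "Auto-label" each target point with the class of the nearest centroid.
    pseudo = [(x, min(cent, key=lambda c: abs(x - cent[c]))) for x in target]
    labeled = source + pseudo            # retrain on source + pseudo-labels
cent = centroids(labeled)
print(cent)
```

After a few rounds the centroids drift from the source means toward the (shifted) target clusters, which is the intended adaptation effect.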
The goal is to find or construct a common representation space for the two domains: one in which the domains are close to each other while good performance is maintained on the source labeling task.
This can be achieved through the use of adversarial machine learning techniques, where feature representations from samples in different domains are encouraged to be indistinguishable.[12][13]
The goal is to construct a Bayesian hierarchical model p(n), which is essentially a factorization model for counts n, to derive domain-dependent latent representations allowing both domain-specific and globally shared latent factors.[14]
Several compilations of domain adaptation and transfer learning algorithms have been implemented over the past decades:
|
https://en.wikipedia.org/wiki/Domain_adaptation
|
In mathematics, a field of sets is a mathematical structure consisting of a pair (X, F), where X is a set and F is a family of subsets of X called an algebra over X: it contains the empty set as an element, and is closed under the operations of taking complements in X, finite unions, and finite intersections.
Fields of sets should not be confused with fields in ring theory nor with fields in physics. Similarly, the term "algebra over X" is used in the sense of a Boolean algebra and should not be confused with algebras over fields or rings in ring theory.
Fields of sets play an essential role in the representation theory of Boolean algebras. Every Boolean algebra can be represented as a field of sets.
A field of sets is a pair (X, F) consisting of a set X and a family F of subsets of X, called an algebra over X, that has the following properties:
In other words, F forms a subalgebra of the power set Boolean algebra of X (with the same identity element X ∈ F).
Many authors refer to F itself as a field of sets.
Elements of X are called points, while elements of F are called complexes and are said to be the admissible sets of X.
A field of sets (X, F) is called a σ-field of sets, and the algebra F is called a σ-algebra, if the following additional condition (4) is satisfied:
For an arbitrary set Y, its power set 2^Y (or, somewhat pedantically, the pair (Y, 2^Y) of this set and its power set) is a field of sets. If Y is finite (namely, n-element), then 2^Y is finite (namely, 2^n-element). It appears that every finite field of sets (that is, (X, F) with F finite, while X may be infinite) admits a representation of the form (Y, 2^Y) with finite Y; this means a function f : X → Y that establishes a one-to-one correspondence between F and 2^Y via inverse image: S = f⁻¹[B] = {x ∈ X : f(x) ∈ B}, where S ∈ F and B ∈ 2^Y (that is, B ⊂ Y). One notable consequence: the number of complexes, if finite, is always of the form 2^n.
To this end one chooses Y to be the set of all atoms of the given field of sets, and defines f by f(x) = A whenever x ∈ A for a point x ∈ X and a complex A ∈ F that is an atom; the latter means that a nonempty subset of A different from A cannot be a complex.
In other words: the atoms are a partition of X; Y is the corresponding quotient set; and f is the corresponding canonical surjection.
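This atom construction is easy to carry out by machine. A Python sketch on a small hand-picked field of sets (generated by the single complex {0, 1}; the example is an illustrative assumption):

```python
from itertools import chain

# A small field of sets on X, generated by the complex {0, 1}:
X = frozenset(range(4))
F = {frozenset(), frozenset({0, 1}), frozenset({2, 3}), X}


def signature(x):
    """Two points lie in the same atom iff they belong to the same complexes."""
    return frozenset(S for S in F if x in S)


blocks = {}
for x in X:
    blocks.setdefault(signature(x), set()).add(x)
atoms = [frozenset(a) for a in blocks.values()]

assert frozenset(chain.from_iterable(atoms)) == X   # the atoms partition X
assert len(F) == 2 ** len(atoms)                    # |F| = 2^(number of atoms)
# Every complex is the union of the atoms it contains, i.e. an inverse image f^{-1}[B].
for S in F:
    assert S == frozenset(chain.from_iterable(a for a in atoms if a <= S))
print("atoms:", [set(a) for a in atoms])
```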
Similarly, every finite Boolean algebra can be represented as a power set – the power set of its set of atoms; each element of the Boolean algebra corresponds to the set of atoms below it (the join of which is the element). This power set representation can be constructed more generally for any complete atomic Boolean algebra.
In the case of Boolean algebras which are not complete and atomic we can still generalize the power set representation by considering fields of sets instead of whole power sets. To do this we first observe that the atoms of a finite Boolean algebra correspond to its ultrafilters, and that an atom is below an element of a finite Boolean algebra if and only if that element is contained in the ultrafilter corresponding to the atom. This leads us to construct a representation of a Boolean algebra by taking its set of ultrafilters and forming complexes by associating with each element of the Boolean algebra the set of ultrafilters containing that element. This construction does indeed produce a representation of the Boolean algebra as a field of sets and is known as the Stone representation. It is the basis of Stone's representation theorem for Boolean algebras and an example of a completion procedure in order theory based on ideals or filters, similar to Dedekind cuts.
Alternatively one can consider the set of homomorphisms onto the two-element Boolean algebra and form complexes by associating each element of the Boolean algebra with the set of such homomorphisms that map it to the top element. (The approach is equivalent, as the ultrafilters of a Boolean algebra are precisely the pre-images of the top elements under these homomorphisms.) With this approach one sees that Stone representation can also be regarded as a generalization of the representation of finite Boolean algebras by truth tables.
These definitions arise from considering the topology generated by the complexes of a field of sets. (It is just one of the notable topologies on the given set of points; it often happens that another topology is given, with quite different properties, in particular not zero-dimensional.) Given a field of sets X = (X, F), the complexes form a base for a topology. We denote by T(X) the corresponding topological space (X, T), where T is the topology formed by taking arbitrary unions of complexes. Then
The Stone representation of a Boolean algebra is always separative and compact; the corresponding Boolean space is known as the Stone space of the Boolean algebra. The clopen sets of the Stone space are then precisely the complexes of the Stone representation. The area of mathematics known as Stone duality is founded on the fact that the Stone representation of a Boolean algebra can be recovered purely from the corresponding Stone space, whence a duality exists between Boolean algebras and Boolean spaces.
If an algebra over a set is closed under countable unions (hence also under countable intersections), it is called a sigma algebra and the corresponding field of sets is called a measurable space. The complexes of a measurable space are called measurable sets. The Loomis–Sikorski theorem provides a Stone-type duality between countably complete Boolean algebras (which may be called abstract sigma algebras) and measurable spaces.
A measure space is a triple (X, F, μ) where (X, F) is a measurable space and μ is a measure defined on it. If μ is in fact a probability measure we speak of a probability space and call its underlying measurable space a sample space. The points of a sample space are called sample points and represent potential outcomes, while the measurable sets (complexes) are called events and represent properties of outcomes for which we wish to assign probabilities. (Many use the term sample space simply for the underlying set of a probability space, particularly in the case where every subset is an event.) Measure spaces and probability spaces play a foundational role in measure theory and probability theory respectively.
In applications to physics we often deal with measure spaces and probability spaces derived from rich mathematical structures such as inner product spaces or topological groups which already have a topology associated with them; this should not be confused with the topology generated by taking arbitrary unions of complexes.
A topological field of sets is a triple (X, T, F) where (X, T) is a topological space and (X, F) is a field of sets which is closed under the closure operator of T, or equivalently under the interior operator, i.e. the closure and interior of every complex is also a complex. In other words, F forms a subalgebra of the power set interior algebra on (X, T).
Topological fields of sets play a fundamental role in the representation theory of interior algebras and Heyting algebras. These two classes of algebraic structures provide the algebraic semantics for the modal logic S4 (a formal mathematical abstraction of epistemic logic) and intuitionistic logic respectively. Topological fields of sets representing these algebraic structures provide a related topological semantics for these logics.
Every interior algebra can be represented as a topological field of sets, with the underlying Boolean algebra of the interior algebra corresponding to the complexes of the topological field of sets, and the interior and closure operators of the interior algebra corresponding to those of the topology. Every Heyting algebra can be represented by a topological field of sets, with the underlying lattice of the Heyting algebra corresponding to the lattice of complexes of the topological field of sets that are open in the topology. Moreover, the topological field of sets representing a Heyting algebra may be chosen so that the open complexes generate all the complexes as a Boolean algebra. These related representations provide a well-defined mathematical apparatus for studying the relationship between truth modalities (possibly true vs necessarily true, studied in modal logic) and notions of provability and refutability (studied in intuitionistic logic), and are thus deeply connected to the theory of modal companions of intermediate logics.
Given a topological space, the clopen sets trivially form a topological field of sets, as each clopen set is its own interior and closure. The Stone representation of a Boolean algebra can be regarded as such a topological field of sets; however, in general the topology of a topological field of sets can differ from the topology generated by taking arbitrary unions of complexes, and in general the complexes of a topological field of sets need not be open or closed in the topology.
A topological field of sets is called algebraic if and only if there is a base for its topology consisting of complexes.
If a topological field of sets is both compact and algebraic then its topology is compact and its compact open sets are precisely the open complexes. Moreover, the open complexes form a base for the topology.
Topological fields of sets that are separative, compact and algebraic are called Stone fields and provide a generalization of the Stone representation of Boolean algebras. Given an interior algebra we can form the Stone representation of its underlying Boolean algebra and then extend this to a topological field of sets by taking the topology generated by the complexes corresponding to the open elements of the interior algebra (which form a base for a topology). These complexes are then precisely the open complexes, and the construction produces a Stone field representing the interior algebra, the Stone representation. (The topology of the Stone representation is also known as the McKinsey–Tarski Stone topology after the mathematicians who first generalized Stone's result for Boolean algebras to interior algebras; it should not be confused with the Stone topology of the underlying Boolean algebra of the interior algebra, which will be a finer topology.)
A preorder field is a triple (X, ≤, F) where (X, ≤) is a preordered set and (X, F) is a field of sets.
Like the topological fields of sets, preorder fields play an important role in the representation theory of interior algebras. Every interior algebra can be represented as a preorder field with its interior and closure operators corresponding to those of the Alexandrov topology induced by the preorder. In other words, for all S ∈ F:

Int(S) = {x ∈ X : there exists a y ∈ S with y ≤ x}
Cl(S) = {x ∈ X : there exists a y ∈ S with x ≤ y}
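The two displayed operators can be transcribed directly into code. In the following Python sketch the preorder (a three-element chain) is an illustrative assumption; idempotence of both operators follows from transitivity:

```python
# Direct transcription of the two displayed operators for a small preorder.
X = frozenset({0, 1, 2})
leq = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 2), (0, 2)}  # the chain 0 <= 1 <= 2


def Int(S):
    """{x : there exists y in S with y <= x} -- everything above some element of S."""
    return frozenset(x for x in X if any((y, x) in leq for y in S))


def Cl(S):
    """{x : there exists y in S with x <= y} -- everything below some element of S."""
    return frozenset(x for x in X if any((x, y) in leq for y in S))


S = frozenset({1})
assert Int(S) == frozenset({1, 2})
assert Cl(S) == frozenset({0, 1})
# Transitivity of the preorder makes both operators idempotent.
assert Int(Int(S)) == Int(S) and Cl(Cl(S)) == Cl(S)
print("Int({1}) =", set(Int(S)), " Cl({1}) =", set(Cl(S)))
```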
Similarly to topological fields of sets, preorder fields arise naturally in modal logic, where the points represent the possible worlds in the Kripke semantics of a theory in the modal logic S4, the preorder represents the accessibility relation on these possible worlds in this semantics, and the complexes represent sets of possible worlds in which individual sentences in the theory hold, providing a representation of the Lindenbaum–Tarski algebra of the theory. They are a special case of the general modal frames, which are fields of sets with an additional accessibility relation providing representations of modal algebras.
A preorder field is called algebraic (or tight) if and only if it has a set of complexes A which determines the preorder in the following manner: x ≤ y if and only if for every complex S ∈ A, x ∈ S implies y ∈ S. The preorder fields obtained from S4 theories are always algebraic, the complexes determining the preorder being the sets of possible worlds in which the sentences of the theory closed under necessity hold.
A separative compact algebraic preorder field is said to be canonical. Given an interior algebra, by replacing the topology of its Stone representation with the corresponding canonical preorder (specialization preorder) we obtain a representation of the interior algebra as a canonical preorder field. By replacing the preorder by its corresponding Alexandrov topology we obtain an alternative representation of the interior algebra as a topological field of sets. (The topology of this "Alexandrov representation" is just the Alexandrov bi-coreflection of the topology of the Stone representation.) While representation of modal algebras by general modal frames is possible for any normal modal algebra, it is only in the case of interior algebras (which correspond to the modal logic S4) that the general modal frame corresponds to a topological field of sets in this manner.
The representation of interior algebras by preorder fields can be generalized to a representation theorem for arbitrary (normal) Boolean algebras with operators. For this we consider structures (X, (R_i)_I, F) where (X, (R_i)_I) is a relational structure, i.e. a set with an indexed family of relations defined on it, and (X, F) is a field of sets. The complex algebra (or algebra of complexes) determined by a field of sets X = (X, (R_i)_I, F) on a relational structure is the Boolean algebra with operators

C(X) = (F, ∩, ∪, ′, ∅, X, (f_i)_I)

where for all i ∈ I, if R_i is a relation of arity n + 1, then f_i is an operator of arity n and for all S_1, …, S_n ∈ F,

f_i(S_1, …, S_n) = {x ∈ X : there exist x_1 ∈ S_1, …, x_n ∈ S_n such that R_i(x_1, …, x_n, x)}.
This construction can be generalized to fields of sets on arbitrary algebraic structures having both operators and relations, as operators can be viewed as a special case of relations. If F is the whole power set of X, then C(X) is called a full complex algebra or power algebra.
Every (normal) Boolean algebra with operators can be represented as a field of sets on a relational structure in the sense that it is isomorphic to the complex algebra corresponding to the field.
(Historically the term complex was first used in the case where the algebraic structure was a group and has its origins in 19th century group theory, where a subset of a group was called a complex.)
Additionally, a semiring is a π-system where every complement B ∖ A is equal to a finite disjoint union of sets in F. A semialgebra is a semiring where every complement Ω ∖ A is equal to a finite disjoint union of sets in F. Here A, B, A_1, A_2, … are arbitrary elements of F and it is assumed that F ≠ ∅.
|
https://en.wikipedia.org/wiki/Field_of_sets
|
Vector calculus or vector analysis is a branch of mathematics concerned with the differentiation and integration of vector fields, primarily in three-dimensional Euclidean space, R³.[1] The term vector calculus is sometimes used as a synonym for the broader subject of multivariable calculus, which spans vector calculus as well as partial differentiation and multiple integration. Vector calculus plays an important role in differential geometry and in the study of partial differential equations. It is used extensively in physics and engineering, especially in the description of electromagnetic fields, gravitational fields, and fluid flow.
Vector calculus was developed from the theory of quaternions by J. Willard Gibbs and Oliver Heaviside near the end of the 19th century, and most of the notation and terminology was established by Gibbs and Edwin Bidwell Wilson in their 1901 book, Vector Analysis, though earlier mathematicians such as Isaac Newton pioneered the field.[2] In its standard form using the cross product, vector calculus does not generalize to higher dimensions, but the alternative approach of geometric algebra, which uses the exterior product, does (see § Generalizations below for more).
A scalar field associates a scalar value to every point in a space. The scalar is a mathematical number representing a physical quantity. Examples of scalar fields in applications include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields (known as scalar bosons), such as the Higgs field. These fields are the subject of scalar field theory.
A vector field is an assignment of a vector to each point in a space.[3] A vector field in the plane, for instance, can be visualized as a collection of arrows with a given magnitude and direction, each attached to a point in the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout space, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from point to point. This can be used, for example, to calculate work done over a line.
In more advanced treatments, one further distinguishes pseudovector fields and pseudoscalar fields, which are identical to vector fields and scalar fields, except that they change sign under an orientation-reversing map: for example, the curl of a vector field is a pseudovector field, and if one reflects a vector field, the curl points in the opposite direction. This distinction is clarified and elaborated in geometric algebra, as described below.
The algebraic (non-differential) operations in vector calculus are referred to as vector algebra, being defined for a vector space and then applied pointwise to a vector field. The basic algebraic operations consist of:
Also commonly used are the two triple products:
Vector calculus studies various differential operators defined on scalar or vector fields, which are typically expressed in terms of the del operator (∇), also known as "nabla". The three basic vector operators are:[4]
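As a quick numerical illustration of the gradient operator (a sketch using central finite differences; the step size and the test function are arbitrary choices), the approximation can be compared with the exact gradient (2x, 2y, 2z) of f(x, y, z) = x² + y² + z²:

```python
def grad(f, p, h=1e-6):
    """Central-difference approximation to the gradient of f at point p."""
    g = []
    for i in range(len(p)):
        fwd = list(p); fwd[i] += h
        bwd = list(p); bwd[i] -= h
        g.append((f(fwd) - f(bwd)) / (2 * h))
    return g


f = lambda v: v[0] ** 2 + v[1] ** 2 + v[2] ** 2
p = [1.0, -2.0, 0.5]
approx = grad(f, p)
exact = [2 * c for c in p]          # analytic gradient of x^2 + y^2 + z^2
assert all(abs(a - e) < 1e-5 for a, e in zip(approx, exact))
print(approx)
```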
Also commonly used are the two Laplace operators:
A quantity called the Jacobian matrix is useful for studying functions when both the domain and range of the function are multivariable, such as a change of variables during integration.
The three basic vector operators have corresponding theorems which generalize the fundamental theorem of calculus to higher dimensions:
In two dimensions, the divergence and curl theorems reduce to Green's theorem:
Linear approximations are used to replace complicated functions with linear functions that are almost the same. Given a differentiable function f(x, y) with real values, one can approximate f(x, y) for (x, y) close to (a, b) by the formula

f(x, y) ≈ f(a, b) + (∂f/∂x)(a, b) (x − a) + (∂f/∂y)(a, b) (y − b).
The right-hand side is the equation of the plane tangent to the graph ofz=f(x,y)at(a,b).
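A small sketch of the tangent-plane approximation for an illustrative function f(x, y) = sin(x)·y near an illustrative base point (a, b):

```python
import numpy as np

f  = lambda x, y: np.sin(x) * y
fx = lambda x, y: np.cos(x) * y   # partial derivative with respect to x
fy = lambda x, y: np.sin(x)       # partial derivative with respect to y

a, b = 1.0, 2.0

def L(x, y):
    # Linear (tangent-plane) approximation of f around (a, b).
    return f(a, b) + fx(a, b) * (x - a) + fy(a, b) * (y - b)

# Near (a, b) the approximation is very close; the error shrinks
# quadratically with the distance from (a, b).
print(f(1.01, 2.02), L(1.01, 2.02))
```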
For a continuously differentiable function of several real variables, a point P (that is, a set of values for the input variables, which is viewed as a point in Rⁿ) is critical if all of the partial derivatives of the function are zero at P, or, equivalently, if its gradient is zero. The critical values are the values of the function at the critical points.
If the function is smooth, or at least twice continuously differentiable, a critical point may be a local maximum, a local minimum, or a saddle point. The different cases may be distinguished by considering the eigenvalues of the Hessian matrix of second derivatives.
By Fermat's theorem, all local maxima and minima of a differentiable function occur at critical points. Therefore, to find the local maxima and minima, it suffices, theoretically, to compute the zeros of the gradient and the eigenvalues of the Hessian matrix at these zeros.
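The procedure just described (find a zero of the gradient, then classify it by the signs of the Hessian eigenvalues) can be illustrated numerically; the test function f(x, y) = x² − y² (a saddle at the origin), the step sizes, and the candidate point are illustrative choices:

```python
import numpy as np

def f(p):
    x, y = p
    return x**2 - y**2

def gradient(f, p, h=1e-5):
    # Central-difference gradient.
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

def hessian(f, p, h=1e-4):
    # Central-difference Hessian (exact for quadratics).
    p = np.asarray(p, dtype=float)
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4 * h * h)
    return H

p = np.array([0.0, 0.0])
assert np.allclose(gradient(f, p), 0, atol=1e-6)   # p is a critical point
eig = np.linalg.eigvalsh(hessian(f, p))            # eigenvalues, ascending
kind = ("local minimum" if np.all(eig > 0)
        else "local maximum" if np.all(eig < 0)
        else "saddle point")
print(kind)  # saddle point: the eigenvalues have mixed signs
```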
Vector calculus can also be generalized to other 3-manifolds and higher-dimensional spaces.
Vector calculus is initially defined for Euclidean 3-space, R³, which has additional structure beyond simply being a 3-dimensional real vector space, namely: a norm (giving a notion of length) defined via an inner product (the dot product), which in turn gives a notion of angle, and an orientation, which gives a notion of left-handed and right-handed. These structures give rise to a volume form, and also the cross product, which is used pervasively in vector calculus.
The gradient and divergence require only the inner product, while the curl and the cross product also require the handedness of the coordinate system to be taken into account (see Cross product § Handedness for more detail).
Vector calculus can be defined on other 3-dimensional real vector spaces if they have an inner product (or more generally a symmetric nondegenerate form) and an orientation; this is less data than an isomorphism to Euclidean space, as it does not require a set of coordinates (a frame of reference), which reflects the fact that vector calculus is invariant under rotations (the special orthogonal group SO(3)).
More generally, vector calculus can be defined on any 3-dimensional oriented Riemannian manifold, or more generally pseudo-Riemannian manifold. This structure simply means that the tangent space at each point has an inner product (more generally, a symmetric nondegenerate form) and an orientation, or more globally that there is a symmetric nondegenerate metric tensor and an orientation, and works because vector calculus is defined in terms of tangent vectors at each point.
Most of the analytic results are easily understood, in a more general form, using the machinery of differential geometry, of which vector calculus forms a subset. Grad and div generalize immediately to other dimensions, as do the gradient theorem, divergence theorem, and Laplacian (yielding harmonic analysis), while curl and cross product do not generalize as directly.
From a general point of view, the various fields in (3-dimensional) vector calculus are uniformly seen as being k-vector fields: scalar fields are 0-vector fields, vector fields are 1-vector fields, pseudovector fields are 2-vector fields, and pseudoscalar fields are 3-vector fields. In higher dimensions there are additional types of fields beyond the scalar, vector, pseudovector, and pseudoscalar types (which correspond to 0, 1, n − 1, and n dimensions, and are exhaustive in dimension 3), so one cannot work only with (pseudo)scalars and (pseudo)vectors.
In any dimension, assuming a nondegenerate form, grad of a scalar function is a vector field, and div of a vector field is a scalar function, but only in dimension 3 or 7[5] (and, trivially, in dimension 0 or 1) is the curl of a vector field a vector field, and only in 3 or 7 dimensions can a cross product be defined (generalizations in other dimensionalities either require n − 1 vectors to yield 1 vector, or are alternative Lie algebras, which are more general antisymmetric bilinear products). The generalization of grad and div, and how curl may be generalized, is elaborated at Curl § Generalizations; in brief, the curl of a vector field is a bivector field, which may be interpreted as the special orthogonal Lie algebra of infinitesimal rotations; however, this cannot be identified with a vector field because the dimensions differ: there are 3 dimensions of rotations in 3 dimensions, but 6 dimensions of rotations in 4 dimensions (and more generally C(n, 2) = n(n − 1)/2 dimensions of rotations in n dimensions).
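The dimension count at the end of the paragraph is easy to check: dim SO(n) = C(n, 2) = n(n − 1)/2 counts the independent rotation planes, and it matches the dimension n itself only in the trivial case and in dimension 3, which is why the curl can be read as a vector field only there:

```python
from math import comb

# For each n, compare the number of rotation dimensions C(n, 2)
# with the dimension n of the space itself.
dims = {n: comb(n, 2) for n in range(10)}   # 0, 0, 1, 3, 6, 10, ...
matches = [n for n in range(10) if comb(n, 2) == n]
print(matches)  # [0, 3]
```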
There are two important alternative generalizations of vector calculus. The first, geometric algebra, uses k-vector fields instead of vector fields (in 3 or fewer dimensions, every k-vector field can be identified with a scalar function or vector field, but this is not true in higher dimensions). This replaces the cross product, which is specific to 3 dimensions, taking in two vector fields and giving as output a vector field, with the exterior product, which exists in all dimensions and takes in two vector fields, giving as output a bivector (2-vector) field. This product yields Clifford algebras as the algebraic structure on vector spaces (with an orientation and nondegenerate form). Geometric algebra is mostly used in generalizations of physics and other applied fields to higher dimensions.
The second generalization uses differential forms (k-covector fields) instead of vector fields or k-vector fields, and is widely used in mathematics, particularly in differential geometry, geometric topology, and harmonic analysis, in particular yielding Hodge theory on oriented pseudo-Riemannian manifolds. From this point of view, grad, curl, and div correspond to the exterior derivative of 0-forms, 1-forms, and 2-forms, respectively, and the key theorems of vector calculus are all special cases of the general form of Stokes' theorem.
From the point of view of both of these generalizations, vector calculus implicitly identifies mathematically distinct objects, which makes the presentation simpler but the underlying mathematical structure and generalizations less clear.
From the point of view of geometric algebra, vector calculus implicitly identifies k-vector fields with vector fields or scalar functions: 0-vectors and 3-vectors with scalars, 1-vectors and 2-vectors with vectors. From the point of view of differential forms, vector calculus implicitly identifies k-forms with scalar fields or vector fields: 0-forms and 3-forms with scalar fields, 1-forms and 2-forms with vector fields. Thus, for example, the curl naturally takes as input a vector field or 1-form, but naturally has as output a 2-vector field or 2-form (hence pseudovector field), which is then interpreted as a vector field, rather than directly taking a vector field to a vector field; this is reflected in the curl of a vector field in higher dimensions not having as output a vector field.
https://en.wikipedia.org/wiki/Vector_calculus
A collective network is a set of social groups linked, directly or indirectly, by some common bond. In this social-science approach to the study of social relationships, social phenomena are investigated through the properties of relations among groups, which also influence the internal relations among the individuals of each group within the set.
A collective network may be defined as a set of social groups linked, directly or indirectly, by some common bond, shared group status, similar or shared group functions, or geographic or cultural connection; the intergroup links also reinforce the intragroup links, and hence the group identity. In informal types of associations, such as the mobilisation of social movements, a collective network may be a set of groups whose individuals, though not necessarily knowing each other or sharing anything outside the organising criteria of the network, are psychologically bound to the network itself and are willing to maintain it indefinitely, strengthening the internal links among the persons in a group while forming new links with the persons in other groups of the collective network.
The term collective network was first officially used in the public domain not in science but at a global meeting called by the Zapatista Army of National Liberation (EZLN): on July 27, 1996, over 3,000 activists from more than 40 countries converged on Zapatista territory in rebellion in Chiapas, Mexico, to attend the "First Intercontinental Encuentro for Humanity and Against Neoliberalism". At the end of the Encuentro (Meeting), the General Command of the EZLN issued the "Second Declaration of La Realidad (The Reality) for Humanity and Against Neoliberalism", calling for the creation of a "collective network of all our particular struggles and resistances, an intercontinental network of resistance against neoliberalism, an intercontinental network of resistance for humanity".[1]
In science, the term collective network is related to the study of complex systems. As all complex systems have many interconnected components, the science of networks and network theory are important aspects of the study of complex systems, and hence of the collective network, too. The idea of the collective network arises from that of the social network and its analysis, that is, social network analysis (SNA).
Cynthia F. Kurtz's group (Snowden 2005) developed methods of carrying out SNA in which people were asked questions about groups (SNA for identities) and about abstract representations of behavior (SNA for abstractions). Whilst SNA is primarily concerned with connections among individuals, according to Cynthia F. Kurtz collective network analysis involves the creation of 'identity group constructs' as abstract expressions of group-to-group interactions.[2]
Since 2007 the campus-wide interdisciplinary research group CoCo at Binghamton University, in the U.S. state of New York, has studied the collective dynamics of various types of interacting agents as complex systems. CoCo's goals are (i) to advance our understanding of the collective dynamics of physical, biological, social, and engineered complex systems through scientific research; (ii) to promote interdisciplinary collaboration among faculty and students in different schools and departments; and (iii) to translate this understanding into products and processes which will improve the well-being of people at regional, state, national and global scales.[3]
In 2011 Emerius, the Euro-Mediterranean Research Institute Upon Social Sciences, based in Rome, started the development of an experimental collective network named Yoosphera with the purpose of studying intra- and intergroup dynamics in order to reinforce the sense of community in territorial groups along four main components: (i) the rational and affective perception of the affinities with other individuals, both within a person's main group and in other groups; (ii) the consciousness and acceptance of the dependence on the intra- and intergroup bonds; (iii) the voluntary commitment to keep that dependence as far as it is valuable and useful for the person, their main group and the perceived macrogroup (the Yoosphera); and (iv) the will not to be detrimental to other individuals, groups or macrogroups.[4]
Emerius's research on collective networks incorporates the small-world network graph, with the nodes represented by both the individuals and their groups, and embeds the idea expressed by Malcolm Gladwell in his book The Tipping Point: How Little Things Can Make a Big Difference; whilst Gladwell considers that "The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts",[5] according to Emerius the success of any social epidemic is also strongly dependent on the involvement of special groups with a strong intra- and intergroup degree of cohesion.
The social sciences also aim at the development of new models to manage groups and their internal and external relations according to the limits and the abilities of human nature, so as to increase the efficiency of the groups. This is the reason behind the Yoosphera, the experimental collective network which is being continuously monitored and developed through a specific piece of software, also named Yoosphera, which reinforces the sense of community in territorial groups as mentioned above. It also nurtures the creation of small groups organised in concentric rings, small groups being easier to manage according to the theories of Professor Robin Dunbar, in particular Dunbar's number.
The first observations of the Yoosphera experiment seem to indicate that it tends to improve the quality of the relationships between each individual and their environment through the organisation of small cooperative groups which back their own members and the closest groups in both material and psychological respects, thus also creating emotional and affective links.
To the function of socialisation, typical of social networks, the collective networks add those of organisation and cohesion within and among the groups, which balance the need of maximising the community's potentialities with that of respecting the different conditions of their members as regards culture, profession, family commitments, wealth and time, as well as taking into account the fluctuation of those conditions and accommodating them with the utmost flexibility.
Related to that of collective network is the definition of collective network intelligence, or colnetigence, which is close to collective intelligence though differs from it, as colnetigence emerges from both intra- and intergroup competitive cooperation.
https://en.wikipedia.org/wiki/Collective_network
The sharing economy is a socio-economic system whereby consumers share in the creation, production, distribution, trade and consumption of goods and services. These systems take a variety of forms, often leveraging information technology and the Internet, particularly digital platforms, to facilitate the distribution, sharing and reuse of excess capacity in goods and services.[1][2][3][4]
It can be facilitated by nonprofit organizations, usually based on the concept of book-lending libraries, in which goods and services are provided for free (or sometimes for a modest subscription), or by commercial entities, in which a company provides a service to customers for profit.
It relies on the willingness of users to share and on the overcoming of stranger danger.[5]
It can provide benefits; for example, it can lower the GHG emissions of products by 77–85%.[6]
Dariusz Jemielniak and Aleksandra Przegalinska credit Marcus Felson and Joe L. Spaeth's academic article "Community Structure and Collaborative Consumption", published in 1978,[7] with coining the term economy of sharing.[8]: 6
The term "sharing economy" began to appear around the time of the Great Recession, spurred by enabling social technologies and an increasing sense of urgency around global population growth and resource depletion. Lawrence Lessig was possibly the first to use the term, in 2008, though others claim the origin of the term is unknown.[9][10]
There is a conceptual and semantic confusion caused by the many facets of Internet-based sharing, leading to discussions regarding the boundaries and the scope of the sharing economy[11] and regarding the definition of the sharing economy.[12][8]: 7, 27 Arun Sundararajan noted in 2016 that he is "unaware of any consensus on a definition of the sharing economy".[13]: 27–28 As of 2015, according to a Pew Research Center survey, only 27% of Americans had heard of the term "sharing economy".[14]
The term "sharing economy" is often used ambiguously and can imply different characteristics.[15] Survey respondents who had heard of the term had divergent views on what it meant, with many thinking it concerned "sharing" in the traditional sense of the term.[14] To this end, the terms "sharing economy" and "collaborative consumption" have often been used interchangeably. Collaborative consumption refers to the activities and behaviors that drive the sharing economy, making the two concepts closely interrelated. A definition published in the Journal of Consumer Behavior in 2015 emphasizes these synergies: "Collaborative consumption takes place in organized systems or networks, in which participants conduct sharing activities in the form of renting, lending, trading, bartering, and swapping of goods, services, transportation solutions, space, or money."[16]
The sharing economy is sometimes understood exclusively as a peer-to-peer phenomenon,[17] while at times it has been framed as a business-to-customer phenomenon.[18] Additionally, the sharing economy can be understood to encompass transactions with a permanent transfer of ownership of a resource, such as a sale,[19] while other times transactions with a transfer of ownership are considered beyond the boundaries of the sharing economy.[20] One definition of the sharing economy, developed to integrate existing understandings and definitions, based on a systematic review, is:
"the sharing economy is an IT-facilitated peer-to-peer model for commercial or non-commercial sharing of underutilized goods and service capacity through an intermediary without transfer of ownership"[15]
The phenomenon has been defined from a legal perspective as "a for-profit, triangular legal structure where two parties (Providers and Users) enter into binding contracts for the provision of goods (partial transfer of the property bundle of rights) or services (ad hoc or casual services) in exchange for monetary payment through an online platform operated by a third party (Platform Operator) with an active role in the definition and development of the legal conditions upon which the goods and services are provided."[21]Under this definition, the "Sharing Economy" is a triangular legal structure with three different legal actors: "1) a Platform Operator which using technology provides aggregation and interactivity to create a legal environment by setting the terms and conditions for all the actors; (2) a User who consumes the good or service on the terms and conditions set by the Platform Operator; and (3) a Provider who provides a good or service also abiding by the Platform Operator's terms and conditions."[21]
While the term "sharing economy" is the term most often used, the sharing economy is also referred to as the access economy, crowd-based capitalism, collaborative economy,community-based economy,gig economy, peer economy, peer-to-peer (P2P) economy,platform economy, renting economy and on-demand economy, though at times some of those terms have been defined as separate if related topics.[13]: 27–28[22][23]
The notion of "sharing economy" has often been considered anoxymoron, and amisnomerfor actual commercial exchanges.[24]Arnould and Rose proposed to replace the misleading term "sharing" with "mutuality".[25]In an article inHarvard Business Review, authors Giana M. Eckhardt and Fleura Bardhi argue that "sharing economy" is a misnomer, and that the correct term for this activity is access economy. The authors say, "When 'sharing' is market-mediated—when a company is an intermediary between consumers who don't know each other—it is no longer sharing at all. Rather, consumers are paying to access someone else's goods or services."[26]The article states that companies (such asUber) that understand this, and whose marketing highlights the financial benefits to participants, are successful, while companies (such asLyft) whose marketing highlights the social benefits of the service are less successful.[26]According toGeorge Ritzer, this trend towards increased consumer input in commercial exchanges refers to the notion ofprosumption, which, as such, is not new.[27]Jemielniak and Przegalinska note that the term sharing economy is often used to discuss aspects of the society that do not predominantly relate to the economy, and propose a broader termcollaborative societyfor such phenomena.[8]: 11
The term "platform capitalism" has been proposed by some scholars as more correct than "sharing economy" in discussion of activities of for-profit companies like Uber and Airbnb in the economy sector.[8]: 30Companies that try to focus on fairness and sharing, instead of justprofit motive, are much less common, and have been contrastingly described asplatform cooperatives(or cooperativist platforms vs capitalist platforms). In turn, projects likeWikipedia, which rely on unpaid labor of volunteers, can be classified ascommons-based peer-productioninitiatives. A related dimension is concerned with whether users are focused on non-profit sharing ormaximizing their own profit.[8]: 31, 36Sharing is a model that is adapting to the abundance of resource, whereas for-profit platform capitalism is a model that persists in areas where there is still ascarcityof resources.[8]: 38
Yochai Benkler, one of the earliest proponents of open source software, studied the tragedy of the commons, the idea that when people all act solely in their own self-interest, they deplete the shared resources they need for their own quality of life. He posited that network technology could mitigate this issue through what he called "commons-based peer production", a concept first articulated in 2002.[28] Benkler then extended that analysis to "shareable goods" in Sharing Nicely: On Shareable Goods and the Emergence of Sharing as a Modality of Economic Production, written in 2004.[29]
There is a wide range of actors who participate in the sharing economy. These include individual users, for-profit enterprises, social enterprises or cooperatives, digital platform companies, local communities, non-profit enterprises and the public sector or the government.[30] Individual users are the actors engaged in sharing goods and resources through "peer-to-peer (P2P) or business-to-peer (B2P) transactions".[30] The for-profit enterprises are those actors who are profit-seekers who buy, sell, lend, rent or trade with the use of digital platforms as a means to collaborate with other actors.[30] The social enterprises, sometimes referred to as cooperatives, are mainly "motivated by social or ecological reasons" and seek to empower actors as a means of genuine sharing.[30] Digital platforms are technology firms that facilitate the relationship between transacting parties and make profits by charging commissions.[31] The local communities are the players at the local level with varied structures and sharing models, where most activities are non-monetized and often carried out to further develop the community. The non-profit enterprises have a purpose of "advancing a mission or purpose" for a greater cause, and their primary motivation is the genuine sharing of resources. In addition, the public sector or the government can participate in the sharing economy by "using public infrastructures to support or forge partnerships with other actors and to promote innovative forms of sharing".[30]
Geographer Lizzie Richardson describes the sharing economy as a paradox, since it is framed as both capitalist and an alternative to capitalism.[32] A distinction can be made between free sharing, such as genuine sharing, and for-profit sharing, often associated with companies such as Uber, Airbnb, and TaskRabbit.[33][34][8]: 22–24 Commercial co-options of the 'sharing economy' encompass a wide range of structures, including mostly for-profit and, to a lesser extent, co-operative structures.[35]
The usage of the term sharing by for-profit companies has been described as "abuse" and "misuse" of the term, or more precisely, its commodification.[8]: 21, 24 In commercial applications, the sharing economy can be considered a marketing strategy more than an actual 'sharing economy' ethos;[8]: 8, 24 for example, Airbnb has sometimes been described as a platform for individuals to 'share' extra space in their homes, but in some cases the space is rented, not shared, and Airbnb listings are often owned by property management corporations.[36][34] This has led to a number of legal challenges, with some jurisdictions ruling, for example, that ride sharing through for-profit services like Uber de facto makes the drivers indistinguishable from regular employees of ride sharing companies.[8]: 9
According to a report by the United States Department of Commerce in June 2016, quantitative research on the size and growth of the sharing economy remains sparse. Growth estimates can be challenging to evaluate due to different and sometimes unspecified definitions about what sort of activity counts as sharing economy transactions. The report noted a 2014 study by PricewaterhouseCoopers, which looked at five components of the sharing economy: travel, car sharing, finance, staffing and streaming. It found that global spending in these sectors totaled about $15 billion in 2014, which was only about 5% of the total spending in those areas. The report also forecasted a possible increase of "sharing economy" spending in these areas to $335 billion by 2025, which would be about 50% of the total spending in these five areas. A 2015 PricewaterhouseCoopers study found that nearly one-fifth of American consumers partake in some type of sharing economy activity.[37] A 2017 report by Diana Farrell and Fiona Greig suggested that, at least in the US, sharing economy growth may have peaked.[38]
A February 2018 study ordered by the European Commission and the Directorate-General for Internal Market, Industry, Entrepreneurship and SMEs indicated the level of collaborative economy development between the EU-28 countries across the transport, accommodation, finance and online skills sectors. The size of the collaborative economy relative to the total EU economy was estimated to be €26.5 billion in 2016.[39] Some experts predict that the shared economy could add between €160 and €572 billion to the EU economy in the upcoming years.[40]
According to "The Sharing Economy in Europe"[41]from 2022 the sharing economy is spreading rapidly and widely in today's European societies; however, the sharing economy requires more regulation at European level because of increasing problems related to its functioning. The authors also suggest that sometimes the local initiatives, especially when it comes to specific niches, are doing even better than global corporations.
In China, the sharing economy doubled in 2016, reaching 3.45 trillion yuan ($500 billion) in transaction volume, and was expected to grow by 40% per year on average over the next few years, according to the country's State Information Center.[42]In 2017, an estimated 700 million people used sharing economy platforms.[43]According to a report fromState Information Center of China, in 2022 sharing economy is still growing and reached about 3.83 trillion yuan (US$555 billion). The report also includes an overview of 7 main sectors of China's sharing economy: domestic services, production capacity, knowledge, and skills, shared transportation, shared healthcare, co-working space, and shared accommodation.[44]
In most sharing-economy platforms in China the user profiles connected toWeChatorAlipaywhich require real name and identification, which ensures that service abuse is minimised. This fact contributes to an increase in interest for shared healthcare services.[44][45]
According to TIARCENTER and the Russian Association of Electronic Communications, eight key verticals of Russia's sharing economy (C2C sales, odd jobs, car sharing, carpooling, accommodation rentals, shared offices, crowdfunding, and goods sharing) grew 30% to 511 billion rubles ($7.8 billion) in 2018.[46]
According to the Sharing Economy Association of Japan, the market size of the sharing economy in Japan in 2021 was 2.4 trillion yen. It is expected to expand to 14.2799 trillion yen by FY2030.[47][48]
Overall the Japanese environment is not well suited to the development of a sharing economy. Industries do not seek new revolutionary solutions and some services are banned.[49] For example, among ride-hailing services, Uber is not very popular in Japan, as public transport is largely sufficient and regulations ban the operation of private car-sharing services; taxi apps are much more popular.[50] According to The Japan Times (2024), it is possible that car-sharing services will be available in the future, however only in certain areas and when taxis are deemed in short supply.[51]
The impacts of the access economy in terms of costs, wages and employment are not easily measured and appear to be growing.[52]Various estimates indicate that 30-40% of the U.S. workforce is self-employed, part-time, temporary or freelancers. However, the exact percentage of those performing short-term tasks or projects found via technology platforms was not effectively measured as of 2015 by government sources.[53]In the U.S., one private industry survey placed the number of "full-time independent workers" at 17.8 million in 2015, roughly the same as 2014. Another survey estimated the number of workers who do at least some freelance work at 53.7 million in 2015, roughly 34% of the workforce and up slightly from 2014.[54]
Economists Lawrence F. Katz and Alan B. Krueger wrote in March 2016 that there is a trend towards more workers in alternative (part-time or contract) work arrangements rather than full-time; the percentage of workers in such arrangements rose from 10.1% in 2005 to 15.8% in late 2015.[55] Katz and Krueger defined alternative work arrangements as "temporary help agency workers, on-call workers, contract company workers, and independent contractors or free-lancers".[56] They also estimated that approximately 0.5% of all workers identify customers through an online intermediary; this was consistent with two other studies that estimated the amount at 0.4% and 0.6%.[56]
At the individual transaction level, the removal of a higher overhead business intermediary (say a taxi company) with a lower cost technology platform helps reduce the cost of the transaction for the customer while also providing an opportunity for additional suppliers to compete for the business, further reducing costs.[53]Consumers can then spend more on other goods and services, stimulating demand and production in other parts of the economy. Classical economics argues that innovation that lowers the cost of goods and services represents a net economic benefit overall. However, like many new technologies and business innovations, this trend is disruptive to existing business models and presents challenges for governments and regulators.[57]
For example, should the companies providing the technology platform be liable for the actions of the suppliers in their network? Should persons in their network be treated as employees, receiving benefits such as healthcare and retirement plans? If consumers tend to be higher income persons while the suppliers are lower-income persons, will the lower cost of the services (and therefore lower compensation of the suppliers) worsen income inequality? These are among the many questions the on-demand economy presents.[53][58]
Using a personal car to transport passengers or deliveries requires payment, or sufferance, ofcostsfor fees deducted by the dispatching company, fuel, wear and tear, depreciation, interest, taxes, as well as adequate insurance. The driver is typically not paid for driving to an area where fares might be found in the volume necessary for high earnings, or driving to the location of a pickup or returning from a drop-off point.[59]Mobile appshave been written that help a driver be aware of and manage such costs has been introduced.[60]
Ridesharing companieshave affectedtraffic congestionand Airbnb has affected housing availability. According to transportation analyst Charles Komanoff, "Uber-caused congestion has reduced traffic speeds in downtown Manhattan by around 8 percent".[61]
Depending on the structure of the country's legal system, companies involved in the sharing economy may shift the legal realm in which cases involving sharers are disputed. Technology (such as algorithmic controls) which connects sharers also allows for the development of policies and standards of service. Companies can act as 'guardians' of their customer base by monitoring their employees' behavior. For example, Uber and Lyft can monitor their employees' driving behavior and location, and provide emergency assistance.[62] Several studies have shown that in the United States, the sharing economy restructures how legal disputes are resolved and who is considered the victim of potential crime.
In United States civil law, the dispute is between two individuals, determining which individual (if any) is the victim of the other party. U.S. criminal law considers the actions of a criminal who "victimizes" the state by breaking state or federal law(s). In criminal law cases, a government court punishes the offender to make the legal victim (the government) whole, but any civilian victim does not necessarily receive restitution from the state. In civil law cases, it is the direct victim party, not the state, who receives the compensatory restitution, fees, or fines. While it is possible for both kinds of law to apply to a case, the additional contracts created in sharing economy agreements create the opportunity for more cases to be classified as civil law disputes. When the sharing economy is directly involved, the victim is the individual rather than the state. This means the civilian victim of a crime is more likely to receive compensation under a civil law case in the sharing economy than under the criminal law precedent.[63] The introduction of civil law cases has the potential to increase victims' ability to be made whole, since the legal change shifts the incentives of consumers towards action.[64]
Suggested benefits of the sharing economy include:
Freelance work entails better opportunities for employment, as well as more flexibility for workers, since people have the ability to pick and choose the time and place of their work. As freelance workers, people can plan around their existing schedules and maintain multiple jobs if needed. Evidence of the appeal of this type of work can be seen in a 2015 survey conducted by the Freelancers Union, which showed that around 34% of the U.S. population was involved in freelance work.[65]
Freelance work can also be beneficial for small businesses. During their early developmental stages, many small companies can't afford or aren't in need of full-time departments, but rather require specialized work for a certain project or for a short period of time. With freelance workers offering their services in the sharing economy, firms are able to save money on long-term labor costs and increase marginal revenue from their operations.[66]
The sharing economy allows workers to set their own hours of work. An Uber driver explains, "the flexibility extends far beyond the hours you choose to work on any given week. Since you don’t have to make any sort of commitment, you can easily take time off for the big moments in your life as well, such as vacations, a wedding, the birth of a child, and more."[67] Workers are able to accept or reject additional work based on their needs while using the commodities they already possess to make money. It provides increased flexibility of work hours and wages for independent contractors of the sharing economy.[68]
Depending on their schedules and resources, workers can provide services in more than one area with different companies. This allows workers to relocate and continue earning income. Also, by working for such companies, the transaction costs associated with occupational licenses are significantly lowered. For example, in New York City, taxi drivers must have a special driver's license and undergo training and background checks,[69]while Uber contractors can offer "their services for little more than a background check".[70]
The percentage of seniors in the work force increased from 20.7% in 2009 to 23.1% in 2015, an increase in part attributed to additional employment as gig workers.[71]
A common premise is that when information about goods is shared (typically via anonline marketplace), the value of those goods may increase for the business, for individuals, for the community and for society in general.[72]
Many state, local and federal governments are engaged inopen datainitiatives and projects such asdata.gov.[73]The theory of open or "transparent" access to information enables greater innovation, and makes for more efficient use of products and services, and thus supporting resilient communities.[74]
Unused value refers to the time over which products, services, and talents lay idle. This idle time is wasted value that business models and organizations based on sharing can potentially utilize. The classic example is that the average car is unused 95% of the time.[75] This wasted value can be a significant resource, and hence an opportunity, for sharing economy car solutions. There is also significant unused value in "wasted time", as articulated by Clay Shirky in his analysis of the power of crowds connected by information technology.[citation needed] Many people have unused capacity in the course of their day. With social media and information technology, such people can donate small slivers of time to take care of simple tasks that others need doing. Examples of these crowdsourcing solutions include the for-profit Amazon Mechanical Turk[76] and the non-profit Ushahidi.
Christopher Koopman, an author of a 2015 study byGeorge Mason Universityeconomists, said the sharing economy "allows people to take idle capital and turn them into revenue sources". He has stated, "People are taking spare bedroom[s], cars, tools they are not using and becoming their own entrepreneurs."[77]
Arun Sundararajan, a New York University economist who studies the sharing economy, told a congressional hearing that "this transition will have a positive impact on economic growth and welfare, by stimulating new consumption, by raising productivity, and by catalyzing individual innovation and entrepreneurship".[77]
An independent data study conducted by Busbud in 2016 compared the average price of hotel rooms with the average price of Airbnb listings in thirteen major cities in the United States. The research concluded that in nine of the thirteen cities, Airbnb rates were lower than hotel rates by an average of $34.56.[78] A further study conducted by Busbud compared the average hotel rate with the average Airbnb rate in eight major European cities. The research concluded that the Airbnb rates were lower than the hotel rates in six of the eight cities by an average of $72.[78] Data from a separate study shows that with Airbnb's entry into the market in Austin, Texas, hotels were required to lower prices by 6 percent to keep up with Airbnb's lower prices.[79]
The sharing economy lowers consumer costs via borrowing and recycling items.[80]
The sharing economy reduces negative environmental impacts by decreasing the amount of goods needed to be produced, cutting down on industry pollution (such as reducing thecarbon footprintand overall consumption of resources)[81][80][82]
The sharing economy allows the reuse and repurpose of already existing commodities. Under this business model, private owners share the assets they already possess when not in use.[83]
The sharing economy acceleratessustainable consumptionand production patterns.[84]
In 2019 a comprehensive study examined the effect on greenhouse gas emissions of one sharing platform, which facilitates the sharing of around 7,000 products and services. It found that emissions were reduced by 77–85%.[6]
The sharing economy provides people with access to goods who can't afford or have no interest in buying them.[85]
The sharing economy facilitates increased quality of service through rating systems provided by companies involved in the sharing economy.[86] It also facilitates increased quality of service from incumbent firms that work to keep up with sharing firms like Uber and Lyft.[87]
A study inIntereconomics / The Review of European Economic Policynoted that the sharing economy has the potential to bring many benefits for the economy, while noting that this presupposes that the success of sharing economy services reflects their business models rather than 'regulatory arbitrage' from avoiding the regulation that affects traditional businesses.[88]
Additional benefits include:
Oxford Internet Institute economic geographer Mark Graham argued that key parts of the sharing economy impose a new balance of power onto workers.[90] By bringing together workers in low- and high-income countries, gig economy platforms that are not geographically confined can bring about a 'race to the bottom' for workers.
New York Magazine wrote that the sharing economy has succeeded in large part because the real economy has been struggling. Specifically, in the magazine's view, the sharing economy succeeds because of a depressed labor market, in which "lots of people are trying to fill holes in their income by monetizing their stuff and their labor in creative ways", and in many cases, people join the sharing economy because they've recently lost a full-time job, including a few cases where the pricing structure of the sharing economy may have made their old jobs less profitable (e.g. full-time taxi drivers who may have switched to Lyft or Uber). The magazine writes that "In almost every case, what compels people to open up their homes and cars to complete strangers is money, not trust.... Tools that help people trust in the kindness of strangers might be pushing hesitant sharing-economy participants over the threshold to adoption. But what's getting them to the threshold in the first place is a damaged economy and harmful public policy that has forced millions of people to look to odd jobs for sustenance."[91][92][93]
Uber's "audacious plan to replace human drivers" may increase job loss as even freelance driving will be replaced by automation.[94]
However, in a report published in January 2017, Carl Benedikt Frey found that while the introduction of Uber had not led to job losses, it had reduced the incomes of incumbent taxi drivers by almost 10%. Frey found that the "sharing economy", and Uber in particular, has had substantial negative impacts on workers' wages.[95]
Some people believe the Great Recession led to the expansion of the sharing economy because job losses enhanced the desire for temporary work, which is prevalent in the sharing economy. However, there are disadvantages to the worker; when companies use contract-based employment, the "advantage for a business of using such non-regular workers is obvious: It can lower labor costs dramatically, often by 30 percent, since it is not responsible for health benefits, social security, unemployment or injured workers' compensation, paid sick or vacation leave and more. Contract workers, who are barred from forming unions and have no grievance procedure, can be dismissed without notice".[61]
There is debate over the status of the workers within the sharing economy; whether they should be treated as independent contractors or employees of the companies. This issue seems to be most relevant among sharing economy companies such as Uber. The reason this has become such a major issue is that the two types of workers are treated very differently. Contract workers are not guaranteed any benefits and pay can be below average. However, if they are employees, they are granted access to benefits and pay is generally higher. This has been described as "shifting liabilities and responsibilities" to the workers, while denying them the traditional job security.[8]: 25 It has been argued that this trend is de facto "obliterating the achievements of unions thus far in their struggle to secure basic mutual obligations in worker-employer relations".[8]: 28
In Uberland: How the Algorithms are Rewriting the Rules of Work, technology ethnographer Alex Rosenblat argues that Uber's reluctance to classify its drivers as "employees" strips them of their agency as the company's revenue-generating workforce, resulting in lower compensation and, in some cases, risking their safety.[96]: 138–147 In particular, Rosenblat critiques Uber's ratings system, which she argues elevates passengers to the role of "middle managers" without offering drivers the chance to contest poor ratings.[96]: 149 Rosenblat notes that poor ratings, or any other number of unspecified breaches of conduct, can result in an Uber driver's "deactivation", an outcome Rosenblat likens to being fired without notice or stated cause.[96]: 152 Prosecutors have used Uber's opaque firing policy as evidence of illegal worker misclassification; Shannon Liss-Riordan, an attorney leading a class action lawsuit against the company, claims that "the ability to fire at will is an important factor in showing a company's workers are employees, not independent contractors."[97]
The California Public Utilities Commission filed a case, later settled out of court, that "addresses the same underlying issue seen in the contract worker controversy—whether the new ways of operating in the sharing economy model should be subject to the same regulations governing traditional businesses".[98] Like Uber, Instacart faced similar lawsuits. In 2015, a lawsuit was filed against Instacart alleging that the company misclassified people who buy and deliver groceries as independent contractors.[99] Instacart eventually had to make all such people part-time employees and provide benefits such as health insurance to those qualifying. Overnight, this took Instacart from zero employees to thousands.[99]
A 2015 article by economists at George Mason University argued that many of the regulations circumvented by sharing economy businesses are exclusive privileges lobbied for by interest groups.[100] Workers and entrepreneurs not connected to the interest groups engaging in this rent-seeking behavior are thus restricted from entry into the market. For example, taxi unions lobbying a city government to restrict the number of cabs allowed on the road prevents larger numbers of drivers from entering the marketplace.
The same research finds that while access economy workers do lack the protections that exist in the traditional economy,[101]many of them cannot actually find work in the traditional economy.[100]In this sense, they are taking advantage of opportunities that the traditional regulatory framework has not been able to provide for them. As the sharing economy grows, governments at all levels are reevaluating how to adjust their regulatory schemes to accommodate these workers.
However, a 2021 study of Uber's downfall in Turkey, carried out with user-generated content from TripAdvisor comments and YouTube videos related to Uber use in Istanbul, found that the main reasons people used Uber were that the independent drivers tended to treat customers more kindly than regular taxi drivers, and that Uber was much cheaper.[102] Turkish taxi drivers, though, claimed that Uber's operations in Turkey were illegal because the independent drivers did not pay the operating license fee that taxi drivers are required to pay to the government. Their efforts led to the banning of Uber in Turkey by the Turkish government in October 2019. After being unavailable for approximately two years, Uber eventually became available again in Turkey in January 2021.[103]
Andrew Leonard[104][105][106] and Evgeny Morozov[107] criticized the for-profit sector of the sharing economy, writing that sharing economy businesses "extract" profits from their given sector by "successfully [making] an end run around the existing costs of doing business" – taxes, regulations, and insurance. Similarly, in the context of online freelancing marketplaces, there have been worries that the sharing economy could result in a 'race to the bottom' in terms of wages and benefits as millions of new workers from low-income countries come online.[90][108]
Susie Cagle wrote that the benefits big sharing economy players might be making for themselves are "not exactly" trickling down, and that the sharing economy "doesn't build trust" because where it builds new connections, it often "replicates old patterns of privileged access for some, and denial for others".[109] William Alden wrote that "The so-called sharing economy is supposed to offer a new kind of capitalism, one where regular folks, enabled by efficient online platforms, can turn their fallow assets into cash machines ... But the reality is that these markets also tend to attract a class of well-heeled professional operators, who outperform the amateurs—just like the rest of the economy".[110]
The local economic benefit of the sharing economy is offset, in its current form, by the fact that huge tech companies reap a great deal of the profit in many cases. For example, Uber, which was estimated to be worth $50B as of mid-2015,[111] takes up to a 30% commission from the gross revenue of its drivers,[112] leaving many drivers making less than minimum wage.[113] This is reminiscent of a rentier state, "which derives all or a substantial portion of its national revenues from the rent of indigenous resources to external clients".
Sharing-economy activity spans many sectors, including agriculture, finance, food, property, labor, real estate, transportation, governance, business, technology, and digital rights, among others.
To reap the real benefits of a sharing economy, and to address some of the issues that surround it, governments and policy-makers need to create the “right enabling framework based on a set of guiding principles” proposed by the World Economic Forum. These principles are derived from analysis of global policymaking and consultation with experts. The following are the seven principles for regulation in the sharing economy.[30]
Source: https://en.wikipedia.org/wiki/Sharing_economy
The term "bin bug" was coined in August 2006 by the British media to refer to the use of Radio Frequency Identification (RFID) chips by some local councils to monitor the amount of domestic waste created by each household. The system works by having a unique RFID chip for each household's non-recycling wheelie bin (such households have two bins: one for general waste and one recycling bin). The chip is scanned by the dustbin lorry and, as it lifts the bin, records the weight of the contents. This is then stored in a central database that monitors the non-recycled waste output of each household.[1][2]
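A minimal sketch of the kind of record such a weighing system might store per bin lift — the schema and field names are illustrative assumptions, not details of the actual Sulo or Deister systems:

```python
# Illustrative sketch: one record per bin lift, keyed by the bin's RFID tag.
# The schema and tag IDs are hypothetical, for explanation only.
from dataclasses import dataclass

@dataclass
class BinLift:
    rfid_tag: str      # unique chip ID identifying the household's bin
    weight_kg: float   # contents weight recorded as the lorry lifts the bin

def total_waste(lifts, tag):
    """Sum the recorded non-recycled waste for one household's bin."""
    return sum(l.weight_kg for l in lifts if l.rfid_tag == tag)

lifts = [BinLift("TAG-001", 12.4), BinLift("TAG-002", 8.0),
         BinLift("TAG-001", 9.6)]
print(round(total_waste(lifts, "TAG-001"), 1))  # 22.0
```

A per-household running total of this kind is what would make a charge on households exceeding set waste limits straightforward to compute.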
In August 2006, it was reported that five Ulster councils had installed chips in household wheelie bins,[3] and that three more local councils were about to trial the technology.[4] Paul Bettison, the chairman of the Local Government Association's environment board, said that if pilot schemes received approval from the government and were successful, weighing schemes could be commonplace across the country within two years.[4] While some councils informed householders of their intentions to monitor their waste output, many others did not.[1] Worcester City Council, for example, detailed its plans through the local newspaper Worcester News in August 2005.[5] Aberdeen City Council kept the scheme quiet until a local newspaper ran the story; the council declared no intention to operate or bring the system online but did not rule out future use.[6] Some councillors said that the purpose of the "bin bugs" was to settle disputes about the ownership of the bins, but others mentioned that the system is a trial and means that they are more prepared should the government introduce a household waste tax. The tax would be in the form of a charge for households that exceed set limits of non-recycled waste.[1][7] With the UK's recycling rate among the lowest in Europe at 18%, a new tax scheme would have the intention of encouraging domestic recycling and meeting European landfill reduction targets.[4]
Each RFID chip costs around £2, with each scanning system costing around £15,000. The Local Government Association (LGA) provided £5 million to councils to fund 40 pilot schemes.[4] They are supplied by two rival German companies: Sulo and Deister Electronic. Mr Bettison said that although removing a device from a bin "would not break any law", in the future a local authority might have grounds to refuse to empty the bin.[1]
The motivation behind the RFID chips is to monitor the production of landfill waste so that councils can comply with the European Landfill Directive 1999/31/EC.[8] The standard regulating RFID tags for the waste industry is EN 14803, Identification and/or determination of the quantity of waste.
The RFID tag is located in a recess under the front lip of the bin, either as a self-contained unit or behind a plastic cap.[9][10][11]
There is some debate as to the legality of removing the RFID chip.[12]
Source: https://en.wikipedia.org/wiki/Bin_bug
Bradford's law is a pattern first described by Samuel C. Bradford in 1934 that estimates the exponentially diminishing returns of searching for references in science journals. One formulation is that if journals in a field are sorted by number of articles into three groups, each with about one-third of all articles, then the number of journals in each group will be proportional to 1 : n : n².[1] There are a number of related formulations of the principle.
In many disciplines, this pattern is called a Pareto distribution. As a practical example, suppose that a researcher has five core scientific journals for his or her subject. Suppose that in a month there are 12 articles of interest in those journals. Suppose further that in order to find another dozen articles of interest, the researcher would have to go to an additional 10 journals. Then that researcher's Bradford multiplier bm is 2 (i.e. 10/5). For each new dozen articles, that researcher will need to look in bm times as many journals. After looking in 5, 10, 20, 40, etc. journals, most researchers quickly realize that there is little point in looking further.
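The diminishing-returns pattern in this example can be sketched directly, using the article's own figures (5 core journals, multiplier 2):

```python
# Bradford's diminishing returns: each successive group of journals that
# yields roughly the same number of relevant articles is b_m times larger.

def journals_needed(core, bm, groups):
    """Journals consulted in each successive equal-yield group."""
    return [core * bm**i for i in range(groups)]

groups = journals_needed(5, 2, 4)
print(groups)       # [5, 10, 20, 40]
print(sum(groups))  # 75 journals consulted for four dozen articles
```

The geometric growth of the list is why searching beyond the first few groups quickly stops paying off.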
Different researchers have different numbers of core journals, and different Bradford multipliers. But the pattern holds quite well across many subjects, and may well be a general pattern for human interactions in social systems. Like Zipf's law, to which it is related, we do not have a good explanation for why it works, but knowing that it does is very useful for librarians. What it means is that for each specialty, it is sufficient to identify the "core publications" for that field and only stock those; very rarely will researchers need to go outside that set.[verification needed]
However, its impact has been far greater than that. Armed with this idea and inspired by Vannevar Bush's famous article As We May Think, Eugene Garfield at the Institute for Scientific Information in the 1960s developed a comprehensive index of how scientific thinking propagates. His Science Citation Index (SCI) had the effect of making it easy to identify exactly which scientists did science that had an impact, and which journals that science appeared in. It also caused the discovery, which some did not expect, that a few journals, such as Nature and Science, were core for all of hard science. The same pattern does not happen with the humanities or the social sciences.
The result of this is pressure on scientists to publish in the best journals, and pressure on universities to ensure access to that core set of journals. On the other hand, the set of "core journals" may vary more or less strongly with the individual researchers, and even more strongly along schools-of-thought divides. There is also a danger of over-representing majority views if journals are selected in this fashion.
Bradford's law is also known as Bradford's law of scattering or the Bradford distribution, as it describes how the articles on a particular subject are scattered throughout the mass of periodicals.[2] Another more general term that has come into use since 2006 is information scattering, an often observed phenomenon related to information collections where there are a few sources that have many items of relevant information about a topic, while most sources have only a few.[3] This law of distribution in bibliometrics can be applied to the World Wide Web as well.[4]
Hjørland and Nicolaisen identified three kinds of scattering:[5]
They found that the literature of Bradford's law (including Bradford's own papers) is unclear in relation to which kind of scattering is actually being measured.
The interpretation of Bradford's law in terms of a geometric progression was suggested by V. Yatsko,[6]who introduced an additional constant and demonstrated that Bradford distribution can be applied to a variety of objects, not only to distribution of articles or citations across journals. V. Yatsko's interpretation (Y-interpretation) can be effectively used to compute threshold values in case it is necessary to distinguish subsets within a set of objects (successful/unsuccessful applicants, developed/underdeveloped regions, etc.).
Source: https://en.wikipedia.org/wiki/Bradford%27s_law
This is a list of the origins of computer-related terms or terms used in the computing world (i.e., a list of computer term etymologies). It relates to both computer hardware and computer software.
Names of many computer terms, especially computer applications, often relate to the function they perform; e.g., a compiler is an application that compiles programming language source code into the computer's machine language. However, there are other terms with less obvious origins, which are of etymological interest. This article lists such terms.
Source: https://en.wikipedia.org/wiki/List_of_computer_term_etymologies
Cohen's kappa coefficient ('κ', lowercase Greek kappa) is a statistic that is used to measure inter-rater reliability (and also intra-rater reliability) for qualitative (categorical) items.[1] It is generally thought to be a more robust measure than simple percent agreement calculation, as κ takes into account the possibility of the agreement occurring by chance. There is controversy surrounding Cohen's kappa due to the difficulty in interpreting indices of agreement. Some researchers have suggested that it is conceptually simpler to evaluate disagreement between items.[2]
The first mention of a kappa-like statistic is attributed to Galton in 1892.[3][4]
The seminal paper introducing kappa as a new technique was published by Jacob Cohen in the journal Educational and Psychological Measurement in 1960.[5]
Cohen's kappa measures the agreement between two raters who each classify N items into C mutually exclusive categories. The definition of κ is

κ = (p_o − p_e) / (1 − p_e) = 1 − (1 − p_o) / (1 − p_e),
where p_o is the relative observed agreement among raters, and p_e is the hypothetical probability of chance agreement, using the observed data to calculate the probabilities of each observer randomly selecting each category. If the raters are in complete agreement then κ = 1. If there is no agreement among the raters other than what would be expected by chance (as given by p_e), κ = 0. It is possible for the statistic to be negative,[6] which can occur by chance if there is no relationship between the ratings of the two raters, or it may reflect a real tendency of the raters to give differing ratings.
For k categories, N observations to categorize, and n_{ki} the number of times rater i predicted category k:

p_e = (1/N²) Σ_k n_{k1} n_{k2}
This is derived from the following construction:

p_e = Σ_k p̂_{k12} = Σ_k p̂_{k1} p̂_{k2} = Σ_k (n_{k1}/N)(n_{k2}/N) = (1/N²) Σ_k n_{k1} n_{k2},
where p̂_{k12} is the estimated probability that both rater 1 and rater 2 will classify the same item as k, while p̂_{k1} is the estimated probability that rater 1 will classify an item as k (and similarly for rater 2).
The relation p̂_{k12} = p̂_{k1} p̂_{k2} is based on the assumption that the ratings of the two raters are independent. The term p̂_{k1} is estimated using the number of items classified as k by rater 1 (n_{k1}) divided by the total number of items to classify (N): p̂_{k1} = n_{k1}/N (and similarly for rater 2).
In the traditional 2 × 2 confusion matrix employed in machine learning and statistics to evaluate binary classifications, the Cohen's kappa formula can be written as:[7]

κ = 2(TP·TN − FN·FP) / ((TP + FP)(FP + TN) + (TP + FN)(FN + TN)),
where TP are the true positives, FP are the false positives, TN are the true negatives, and FN are the false negatives. In this case, Cohen's kappa is equivalent to the Heidke skill score known in meteorology.[8] The measure was first introduced by Myrick Haskell Doolittle in 1888.[9]
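The 2 × 2 closed form can be cross-checked against the general (p_o, p_e) definition; the counts below are purely illustrative:

```python
# Cohen's kappa from a binary confusion matrix, per the 2x2 formulation,
# cross-checked against the general (p_o, p_e) definition.

def kappa_2x2(tp, fp, fn, tn):
    num = 2 * (tp * tn - fn * fp)
    den = (tp + fp) * (fp + tn) + (tp + fn) * (fn + tn)
    return num / den

tp, fp, fn, tn = 20, 10, 5, 15          # illustrative counts, N = 50
n = tp + fp + fn + tn
p_o = (tp + tn) / n                      # observed agreement
p_e = ((tp + fp) / n) * ((tp + fn) / n) \
    + ((fn + tn) / n) * ((fp + tn) / n)  # chance agreement from marginals
assert abs(kappa_2x2(tp, fp, fn, tn) - (p_o - p_e) / (1 - p_e)) < 1e-12
print(kappa_2x2(tp, fp, fn, tn))  # 0.4
```

The assertion passing confirms the two formulations agree term for term on these counts.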
Suppose that you were analyzing data related to a group of 50 people applying for a grant. Each grant proposal was read by two readers and each reader either said "Yes" or "No" to the proposal. Suppose the disagreement count data were as follows, where A and B are readers, data on the main diagonal of the matrix (a and d) count the number of agreements and off-diagonal data (b and c) count the number of disagreements:
For example, suppose the counts were a = 20 (both readers said Yes), b = 5 (A Yes, B No), c = 10 (A No, B Yes), and d = 15 (both said No), so that N = 50.
The observed proportionate agreement is p_o = (a + d)/N = (20 + 15)/50 = 0.7.
To calculate p_e (the probability of random agreement) we note that reader A said Yes to a + b = 25 proposals (50%) and reader B said Yes to a + c = 30 proposals (60%).
So the expected probability that both would say Yes at random is p_Yes = 0.5 × 0.6 = 0.3.
Similarly, the expected probability that both would say No at random is p_No = 0.5 × 0.4 = 0.2.
Overall random agreement probability is the probability that they agreed on either Yes or No, i.e. p_e = p_Yes + p_No = 0.3 + 0.2 = 0.5.
So now applying our formula for Cohen's kappa we get κ = (p_o − p_e)/(1 − p_e) = (0.7 − 0.5)/(1 − 0.5) = 0.4.
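The step-by-step calculation can be packaged as a short function over the four cells of a 2 × 2 agreement table (a = both Yes, b and c = disagreements, d = both No); the counts used here are illustrative:

```python
# Cohen's kappa computed step by step from a 2x2 agreement table.
# a = both-Yes, b = A-Yes/B-No, c = A-No/B-Yes, d = both-No.

def cohen_kappa(a, b, c, d):
    n = a + b + c + d
    p_o = (a + d) / n                          # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)      # both say Yes by chance
    p_no = ((c + d) / n) * ((b + d) / n)       # both say No by chance
    p_e = p_yes + p_no                         # total chance agreement
    return (p_o - p_e) / (1 - p_e)

print(round(cohen_kappa(20, 5, 10, 15), 3))  # 0.4 for these example counts
```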
A case sometimes considered to be a problem with Cohen's kappa occurs when comparing the kappa calculated for two pairs of raters where both pairs have the same percentage agreement, but one pair gives a similar number of ratings in each class while the other pair gives a very different number of ratings in each class.[10] (In the cases below, B has 70 yeses and 30 nos in the first case, but those numbers are reversed in the second.) For instance, in the following two cases there is equal agreement between A and B (60 out of 100 in both cases) in terms of agreement in each class, so we would expect the relative values of Cohen's kappa to reflect this. However, calculating Cohen's kappa for each:
we find that it shows greater similarity between A and B in the second case, compared to the first: from the stated values, κ = (0.60 − 0.54)/(1 − 0.54) ≈ 0.13 in the first case and κ = (0.60 − 0.46)/(1 − 0.46) ≈ 0.26 in the second. This is because while the percentage agreement is the same, the percentage agreement that would occur 'by chance' is significantly higher in the first case (0.54 compared to 0.46).
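The two cases can be rebuilt as concrete 2 × 2 tables consistent with the stated figures (60% observed agreement; chance agreement 0.54 versus 0.46). The specific cell counts below are one reconstruction consistent with those marginals, not counts taken from the source:

```python
# Two rater pairs with identical observed agreement (0.60) but different
# marginal distributions, hence different chance agreement and kappa.

def kappa_from_counts(a, b, c, d):
    n = a + b + c + d
    p_o = (a + d) / n
    p_e = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)
    return (p_o - p_e) / (1 - p_e)

# Case 1: B gives 70 Yes / 30 No.  Case 2: those numbers are reversed.
case1 = (45, 15, 25, 15)   # a, b, c, d: agreement = 45 + 15 = 60
case2 = (25, 35, 5, 35)    # agreement = 25 + 35 = 60
print(round(kappa_from_counts(*case1), 2))  # 0.13
print(round(kappa_from_counts(*case2), 2))  # 0.26
```

Same raw agreement, but the more balanced marginals of the second case make chance agreement lower and kappa correspondingly higher.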
P-value for kappa is rarely reported, probably because even relatively low values of kappa can nonetheless be significantly different from zero but not of sufficient magnitude to satisfy investigators.[11]: 66 Still, its standard error has been described[12] and is computed by various computer programs.[13]
Confidence intervals for kappa may be constructed, for the expected kappa values if we had an infinite number of items checked, using the following formula:[1]

CI: κ ± Z_{1−α/2} SE_κ,
where Z_{1−α/2} = 1.960 is the standard normal percentile when α = 5%, and SE_κ is calculated by jackknife, bootstrap, or the asymptotic formula described by Fleiss & Cohen.[12]
If statistical significance is not a useful guide, what magnitude of kappa reflects adequate agreement? Guidelines would be helpful, but factors other than agreement can influence its magnitude, which makes interpretation of a given magnitude problematic. As Sim and Wright noted, two important factors are prevalence (are the codes equiprobable or do their probabilities vary) and bias (are the marginal probabilities for the two observers similar or different). Other things being equal, kappas are higher when codes are equiprobable. On the other hand, Kappas are higher when codes are distributed asymmetrically by the two observers. In contrast to probability variations, the effect of bias is greater when Kappa is small than when it is large.[14]: 261–262
Another factor is the number of codes. As the number of codes increases, kappas become higher. Based on a simulation study, Bakeman and colleagues concluded that for fallible observers, values for kappa were lower when codes were fewer. And, in agreement with Sim & Wright's statement concerning prevalence, kappas were higher when codes were roughly equiprobable. Thus Bakeman et al. concluded that "no one value of kappa can be regarded as universally acceptable."[15]: 357 They also provide a computer program that lets users compute values for kappa specifying number of codes, their probability, and observer accuracy. For example, given equiprobable codes and observers who are 85% accurate, values of kappa are 0.49, 0.60, 0.66, and 0.69 when the number of codes is 2, 3, 5, and 10, respectively.
Nonetheless, magnitude guidelines have appeared in the literature. Perhaps the first was Landis and Koch,[16] who characterized values < 0 as indicating no agreement, 0–0.20 as slight, 0.21–0.40 as fair, 0.41–0.60 as moderate, 0.61–0.80 as substantial, and 0.81–1 as almost perfect agreement. This set of guidelines is, however, by no means universally accepted; Landis and Koch supplied no evidence to support it, basing it instead on personal opinion. It has been noted that these guidelines may be more harmful than helpful.[17] Fleiss's[18]: 218 equally arbitrary guidelines characterize kappas over 0.75 as excellent, 0.40 to 0.75 as fair to good, and below 0.40 as poor.
Kappa assumes its theoretical maximum value of 1 only when both observers distribute codes the same, that is, when corresponding row and column sums are identical. Anything less is less than perfect agreement. Still, the maximum value kappa could achieve given unequal distributions helps interpret the value of kappa actually obtained. The equation for κ maximum is:[19]
where Pexp=∑i=1kPi+P+i{\displaystyle P_{\exp }=\sum _{i=1}^{k}P_{i+}P_{+i}}, as usual, Pmax=∑i=1kmin(Pi+,P+i){\displaystyle P_{\max }=\sum _{i=1}^{k}\min(P_{i+},P_{+i})}, k = number of codes, Pi+{\displaystyle P_{i+}} are the row probabilities, and P+i{\displaystyle P_{+i}} are the column probabilities.
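The same marginal probabilities used for chance agreement also yield κ maximum; a sketch in Python (the function name `kappa_max` is ours):

```python
def kappa_max(table):
    """Maximum attainable kappa given the marginals of a square contingency table."""
    n = sum(sum(row) for row in table)
    row = [sum(r) / n for r in table]                       # row probabilities P_{i+}
    col = [sum(c) / n for c in zip(*table)]                 # column probabilities P_{+i}
    p_exp = sum(r * c for r, c in zip(row, col))            # expected (chance) agreement
    p_max = sum(min(r, c) for r, c in zip(row, col))        # largest possible diagonal mass
    return (p_max - p_exp) / (1 - p_exp)
```

For a 2×2 table with marginals 60/40 and 70/30 (as in the first example above), p_max = 0.9, p_exp = 0.54, and κ maximum ≈ 0.78.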
Kappa is an index that considers observed agreement with respect to a baseline agreement. However, investigators must consider carefully whether Kappa's baseline agreement is relevant for the particular research question. Kappa's baseline is frequently described as the agreement due to chance, which is only partially correct. Kappa's baseline agreement is the agreement that would be expected due to random allocation, given the quantities specified by the marginal totals of the square contingency table. Thus, κ = 0 when the observed allocation is apparently random, regardless of the quantity disagreement as constrained by the marginal totals. However, for many applications, investigators should be more interested in the quantity disagreement in the marginal totals than in the allocation disagreement as described by the additional information on the diagonal of the square contingency table. Thus for many applications, Kappa's baseline is more distracting than enlightening. Consider the following example:
The disagreement proportion is 14/16 or 0.875. The disagreement is due to quantity because allocation is optimal. κ is 0.01.
The disagreement proportion is 2/16 or 0.125. The disagreement is due to allocation because quantities are identical. Kappa is −0.07.
Here, reporting quantity and allocation disagreement is informative while Kappa obscures information. Furthermore, Kappa introduces some challenges in calculation and interpretation because Kappa is a ratio. It is possible for Kappa's ratio to return an undefined value due to zero in the denominator. Furthermore, a ratio reveals neither its numerator nor its denominator. It is more informative for researchers to report disagreement in two components, quantity and allocation. These two components describe the relationship between the categories more clearly than a single summary statistic. When predictive accuracy is the goal, researchers can more easily begin to think about ways to improve a prediction by using two components of quantity and allocation, rather than one ratio of Kappa.[2]
Some researchers have expressed concern over κ's tendency to take the observed categories' frequencies as givens, which can make it unreliable for measuring agreement in situations such as the diagnosis of rare diseases. In these situations, κ tends to underestimate the agreement on the rare category.[20] For this reason, κ is considered an overly conservative measure of agreement.[21] Others[22] contest the assertion that kappa "takes into account" chance agreement. To do this effectively would require an explicit model of how chance affects rater decisions. The so-called chance adjustment of kappa statistics supposes that, when not completely certain, raters simply guess, a very unrealistic scenario. Moreover, some works[23] have shown how kappa statistics can lead to a wrong conclusion for unbalanced data.
A similar statistic, called pi, was proposed by Scott (1955). Cohen's kappa and Scott's pi differ in terms of how pe is calculated.
Note that Cohen's kappa measures agreement between two raters only. For a similar measure of agreement (Fleiss' kappa) used when there are more than two raters, see Fleiss (1971). The Fleiss kappa, however, is a multi-rater generalization of Scott's pi statistic, not Cohen's kappa. Kappa is also used to compare performance in machine learning, but the directional version known as Informedness or Youden's J statistic is argued to be more appropriate for supervised learning.[24]
The weighted kappa allows disagreements to be weighted differently[25] and is especially useful when codes are ordered.[11]: 66 Three matrices are involved: the matrix of observed scores, the matrix of expected scores based on chance agreement, and the weight matrix. Weight-matrix cells located on the diagonal (upper-left to bottom-right) represent agreement and thus contain zeros. Off-diagonal cells contain weights indicating the seriousness of that disagreement. Often, cells one step off the diagonal are weighted 1, those two steps off 2, etc.
The equation for weighted κ is:
where k = number of codes and wij{\displaystyle w_{ij}}, xij{\displaystyle x_{ij}}, and mij{\displaystyle m_{ij}} are elements in the weight, observed, and expected matrices, respectively. When diagonal cells contain weights of 0 and all off-diagonal cells contain weights of 1, this formula produces the same value of kappa as the calculation given above.
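A sketch of weighted kappa following this three-matrix description (the helper name `weighted_kappa` is ours):

```python
def weighted_kappa(observed, weights):
    """Weighted kappa: 1 - sum(w*x) / sum(w*m), where x is the observed matrix,
    m the expected-by-chance matrix, and w the weight matrix (zeros on diagonal)."""
    n = sum(sum(row) for row in observed)
    k = len(observed)
    row = [sum(r) for r in observed]
    col = [sum(c) for c in zip(*observed)]
    # expected counts under chance agreement, from the marginals
    expected = [[row[i] * col[j] / n for j in range(k)] for i in range(k)]
    num = sum(weights[i][j] * observed[i][j] for i in range(k) for j in range(k))
    den = sum(weights[i][j] * expected[i][j] for i in range(k) for j in range(k))
    return 1 - num / den
```

With the 0/1 weight matrix this reproduces unweighted kappa, which is a convenient sanity check.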
|
https://en.wikipedia.org/wiki/Cohen%27s_kappa
|
In mathematics, particularly in the area of arithmetic, a modular multiplicative inverse of an integer a is an integer x such that the product ax is congruent to 1 with respect to the modulus m.[1] In the standard notation of modular arithmetic this congruence is written as
which is the shorthand way of writing the statement that m divides (evenly) the quantity ax − 1, or, put another way, that the remainder after dividing ax by the integer m is 1. If a does have an inverse modulo m, then there are infinitely many solutions of this congruence, which form a congruence class with respect to this modulus. Furthermore, any integer that is congruent to a (i.e., in a's congruence class) has any element of x's congruence class as a modular multiplicative inverse. Using the notation w¯{\displaystyle {\overline {w}}} to indicate the congruence class containing w, this can be expressed by saying that the modular multiplicative inverse of the congruence class a¯{\displaystyle {\overline {a}}} is the congruence class x¯{\displaystyle {\overline {x}}} such that:
where the symbol ⋅m{\displaystyle \cdot _{m}} denotes the multiplication of equivalence classes modulo m.[2] Written in this way, the analogy with the usual concept of a multiplicative inverse in the set of rational or real numbers is clearly represented, replacing the numbers by congruence classes and altering the binary operation appropriately.
As with the analogous operation on the real numbers, a fundamental use of this operation is in solving, when possible, linear congruences of the form
Finding modular multiplicative inverses also has practical applications in the field ofcryptography, e.g.public-key cryptographyand theRSA algorithm.[3][4][5]A benefit for the computer implementation of these applications is that there exists a very fast algorithm (theextended Euclidean algorithm) that can be used for the calculation of modular multiplicative inverses.
For a given positive integer m, two integers, a and b, are said to be congruent modulo m if m divides their difference. This binary relation is denoted by
This is anequivalence relationon the set of integers,Z{\displaystyle \mathbb {Z} }, and the equivalence classes are calledcongruence classes modulomorresidue classes modulom. Leta¯{\displaystyle {\overline {a}}}denote the congruence class containing the integera,[6]then
Alinear congruenceis a modular congruence of the form
Unlike linear equations over the reals, linear congruences may have zero, one or several solutions. Ifxis a solution of a linear congruence then every element inx¯{\displaystyle {\overline {x}}}is also a solution, so, when speaking of the number of solutions of a linear congruence we are referring to the number of different congruence classes that contain solutions.
Ifdis thegreatest common divisorofaandmthen the linear congruenceax≡b(modm)has solutions if and only ifddividesb. Ifddividesb, then there are exactlydsolutions.[7]
A modular multiplicative inverse of an integerawith respect to the modulusmis a solution of the linear congruence
The previous result says that a solution exists if and only if gcd(a, m) = 1, that is, a and m must be relatively prime (i.e. coprime). Furthermore, when this condition holds, there is exactly one solution; that is, when it exists, a modular multiplicative inverse is unique.[8] To see this, suppose b and b′ are both modular multiplicative inverses of a with respect to the modulus m; then
therefore
b ≡ b′ (mod m), because gcd(a, m) = 1 forces m to divide b − b′. (Note that if a ≡ 0 (mod m), then gcd(a, m) = m, and a has no modular multiplicative inverse at all, so this case does not arise.)
When ax ≡ 1 (mod m) has a solution, it is often denoted in this way:
but this can be considered an abuse of notation since it could be misinterpreted as the reciprocal of a{\displaystyle a} (which, contrary to the modular multiplicative inverse, is not an integer except when a is 1 or −1). The notation would be proper if a is interpreted as a token standing for the congruence class a¯{\displaystyle {\overline {a}}}, as the multiplicative inverse of a congruence class is a congruence class with the multiplication defined in the next section.
The congruence relation, modulom, partitions the set of integers intomcongruence classes. Operations of addition and multiplication can be defined on thesemobjects in the following way: To either add or multiply two congruence classes, first pick a representative (in any way) from each class, then perform the usual operation for integers on the two representatives and finally take the congruence class that the result of the integer operation lies in as the result of the operation on the congruence classes. In symbols, with+m{\displaystyle +_{m}}and⋅m{\displaystyle \cdot _{m}}representing the operations on congruence classes, these definitions are
and
These operations arewell-defined, meaning that the end result does not depend on the choices of representatives that were made to obtain the result.
The m congruence classes with these two defined operations form a ring, called the ring of integers modulo m. There are several notations used for these algebraic objects, most often Z/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} } or Z/m{\displaystyle \mathbb {Z} /m}, but several elementary texts and application areas use a simplified notation Zm{\displaystyle \mathbb {Z} _{m}} when confusion with other algebraic objects is unlikely.
The congruence classes of the integers modulomwere traditionally known asresidue classes modulo m, reflecting the fact that all the elements of a congruence class have the same remainder (i.e., "residue") upon being divided bym. Any set ofmintegers selected so that each comes from a different congruence class modulo m is called acomplete system of residues modulom.[9]Thedivision algorithmshows that the set of integers,{0, 1, 2, ...,m− 1}form a complete system of residues modulom, known as theleast residue system modulom. In working with arithmetic problems it is sometimes more convenient to work with a complete system of residues and use the language of congruences while at other times the point of view of the congruence classes of the ringZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is more useful.[10]
Not every element of a complete residue system modulomhas a modular multiplicative inverse, for instance, zero never does. After removing the elements of a complete residue system that are not relatively prime tom, what is left is called areduced residue system, all of whose elements have modular multiplicative inverses. The number of elements in a reduced residue system isϕ(m){\displaystyle \phi (m)}, whereϕ{\displaystyle \phi }is theEuler totient function, i.e., the number of positive integers less thanmthat are relatively prime tom.
In a general ring with unity not every element has a multiplicative inverse, and those that do are called units. As the product of two units is a unit, the units of a ring form a group, the group of units of the ring, often denoted by R× if R is the name of the ring. The group of units of the ring of integers modulo m is called the multiplicative group of integers modulo m, and it is isomorphic to a reduced residue system. In particular, it has order (size) ϕ(m){\displaystyle \phi (m)}.
In the case thatmis aprime, sayp, thenϕ(p)=p−1{\displaystyle \phi (p)=p-1}and all the non-zero elements ofZ/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }have multiplicative inverses, thusZ/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }is afinite field. In this case, the multiplicative group of integers modulopform acyclic groupof orderp− 1.
For any integer n>1{\displaystyle n>1}, it is always the case that n2−n+1{\displaystyle n^{2}-n+1} is the modular multiplicative inverse of n+1{\displaystyle n+1} with respect to the modulus n2{\displaystyle n^{2}}, since (n+1)(n2−n+1)=n3+1{\displaystyle (n+1)(n^{2}-n+1)=n^{3}+1} and n3{\displaystyle n^{3}} is divisible by n2{\displaystyle n^{2}}. Examples are 3×3≡1(mod4){\displaystyle 3\times 3\equiv 1{\pmod {4}}}, 4×7≡1(mod9){\displaystyle 4\times 7\equiv 1{\pmod {9}}}, 5×13≡1(mod16){\displaystyle 5\times 13\equiv 1{\pmod {16}}} and so on.
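This identity is easy to verify mechanically; note also that Python's built-in three-argument `pow` accepts a −1 exponent (since Python 3.8) to compute modular inverses directly:

```python
# check (n+1)(n^2 - n + 1) = n^3 + 1 ≡ 1 (mod n^2) against the built-in inverse
for n in range(2, 1000):
    inv = n * n - n + 1
    assert (n + 1) * inv % (n * n) == 1       # the identity from the text
    assert pow(n + 1, -1, n * n) == inv       # built-in modular inverse agrees
```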
The following example uses the modulus 10: Two integers are congruent mod 10 if and only if their difference is divisible by 10, for instance
Some of the ten congruence classes with respect to this modulus are:
The linear congruence4x≡ 5 (mod 10)has no solutions since the integers that are congruent to 5 (i.e., those in5¯{\displaystyle {\overline {5}}}) are all odd while4xis always even. However, the linear congruence4x≡ 6 (mod 10)has two solutions, namely,x= 4andx= 9. Thegcd(4, 10) = 2and 2 does not divide 5, but does divide 6.
Sincegcd(3, 10) = 1, the linear congruence3x≡ 1 (mod 10)will have solutions, that is, modular multiplicative inverses of 3 modulo 10 will exist. In fact, 7 satisfies this congruence (i.e., 21 − 1 = 20). However, other integers also satisfy the congruence, for instance 17 and −3 (i.e., 3(17) − 1 = 50 and 3(−3) − 1 = −10). In particular, every integer in7¯{\displaystyle {\overline {7}}}will satisfy the congruence since these integers have the form7 + 10rfor some integerrand
is divisible by 10. This congruence has only this one congruence class of solutions. The solution in this case could have been obtained by checking all possible cases, but systematic algorithms would be needed for larger moduli and these will be given in the next section.
The product of congruence classes5¯{\displaystyle {\overline {5}}}and8¯{\displaystyle {\overline {8}}}can be obtained by selecting an element of5¯{\displaystyle {\overline {5}}}, say 25, and an element of8¯{\displaystyle {\overline {8}}}, say −2, and observing that their product (25)(−2) = −50 is in the congruence class0¯{\displaystyle {\overline {0}}}. Thus,5¯⋅108¯=0¯{\displaystyle {\overline {5}}\cdot _{10}{\overline {8}}={\overline {0}}}. Addition is defined in a similar way. The ten congruence classes together with these operations of addition and multiplication of congruence classes form the ring of integers modulo 10, i.e.,Z/10Z{\displaystyle \mathbb {Z} /10\mathbb {Z} }.
A complete residue system modulo 10 can be the set {10, −9, 2, 13, 24, −15, 26, 37, 8, 9} where each integer is in a different congruence class modulo 10. The unique least residue system modulo 10 is {0, 1, 2, ..., 9}. A reduced residue system modulo 10 could be {1, 3, 7, 9}. The product of any two congruence classes represented by these numbers is again one of these four congruence classes. This implies that these four congruence classes form a group, in this case the cyclic group of order four, having either 3 or 7 as a (multiplicative) generator. The represented congruence classes form the group of units of the ringZ/10Z{\displaystyle \mathbb {Z} /10\mathbb {Z} }. These congruence classes are precisely the ones which have modular multiplicative inverses.
A modular multiplicative inverse ofamodulomcan be found by using the extended Euclidean algorithm.
TheEuclidean algorithmdetermines the greatest common divisor (gcd) of two integers, sayaandm. Ifahas a multiplicative inverse modulom, this gcd must be 1. The last of several equations produced by the algorithm may be solved for this gcd. Then, using a method called "back substitution", an expression connecting the original parameters and this gcd can be obtained. In other words, integersxandycan be found to satisfyBézout's identity,
Rewritten, this is
that is,
so, a modular multiplicative inverse ofahas been calculated. A more efficient version of the algorithm is the extended Euclidean algorithm, which, by using auxiliary equations, reduces two passes through the algorithm (back substitution can be thought of as passing through the algorithm in reverse) to just one.
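The back-substitution bookkeeping can be folded into the forward pass, which is exactly what the extended Euclidean algorithm does. A self-contained sketch (the function name `modinv` is ours):

```python
def modinv(a, m):
    """Modular multiplicative inverse of a mod m via the extended Euclidean algorithm."""
    r0, r1 = a % m, m      # remainder sequence
    x0, x1 = 1, 0          # coefficients of a in Bezout's identity
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        x0, x1 = x1, x0 - q * x1
    if r0 != 1:
        raise ValueError("inverse does not exist: gcd(a, m) != 1")
    return x0 % m
```

For example, `modinv(3, 10)` returns 7, the least-residue representative found in the example above.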
Inbig O notation, this algorithm runs in timeO(log2(m)), assuming|a| <m, and is considered to be very fast and generally more efficient than its alternative, exponentiation.
As an alternative to the extended Euclidean algorithm, Euler's theorem may be used to compute modular inverses.[11]
According toEuler's theorem, ifaiscoprimetom, that is,gcd(a,m) = 1, then
whereϕ{\displaystyle \phi }isEuler's totient function. This follows from the fact thatabelongs to the multiplicative group(Z/mZ){\displaystyle (\mathbb {Z} /m\mathbb {Z} )}×if and only ifaiscoprimetom. Therefore, a modular multiplicative inverse can be found directly:
In the special case wheremis a prime,ϕ(m)=m−1{\displaystyle \phi (m)=m-1}and a modular inverse is given by
This method is generally slower than the extended Euclidean algorithm, but is sometimes used when an implementation for modular exponentiation is already available. One disadvantage is that ϕ(m){\displaystyle \phi (m)} must be known, and the most efficient known way to compute it requires the factorization of m, which is believed to be a hard problem in general.
One notableadvantageof this technique is that there are no conditional branches which depend on the value ofa, and thus the value ofa, which may be an important secret inpublic-key cryptography, can be protected fromside-channel attacks. For this reason, the standard implementation ofCurve25519uses this technique to compute an inverse.
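A sketch of the two variants in Python; the names `phi`, `modinv_euler`, and `modinv_fermat` are ours, and the totient is computed here by naive trial division, which is practical only for small m:

```python
def phi(m):
    """Euler's totient by trial-division factorization (only suitable for small m)."""
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:               # remaining prime factor
        result -= result // n
    return result

def modinv_euler(a, m):
    """Inverse of a mod m (gcd(a, m) = 1) via Euler's theorem: a^(phi(m)-1) mod m."""
    return pow(a, phi(m) - 1, m)

def modinv_fermat(a, p):
    """Inverse of a mod prime p via Fermat's little theorem: a^(p-2) mod p."""
    return pow(a, p - 2, p)
```

The branch-free structure alluded to above comes from the fixed exponentiation ladder: the sequence of squarings and multiplications depends only on the exponent, not on a.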
It is possible to compute the inverse of multiple numbers ai, modulo a common m, with a single invocation of the Euclidean algorithm and three multiplications per additional input.[12] The basic idea is to form the product of all the ai, invert that, then multiply by aj for all j ≠ i to leave only the desired ai−1{\displaystyle a_{i}^{-1}}.
More specifically, the algorithm is (all arithmetic performed modulom):
It is possible to perform the multiplications in a tree structure rather than linearly to exploitparallel computing.
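A sketch of this batch-inversion trick (often attributed to Montgomery), using Python's built-in `pow(x, -1, m)` for the single inversion; the function name `batch_modinv` is ours:

```python
def batch_modinv(nums, m):
    """Invert each a_i mod m using one modular inversion in total."""
    # prefix[i] = a_0 * a_1 * ... * a_{i-1} (mod m)
    prefix = [1]
    for a in nums:
        prefix.append(prefix[-1] * a % m)
    inv = pow(prefix[-1], -1, m)   # the single (expensive) inversion
    out = [0] * len(nums)
    for i in range(len(nums) - 1, -1, -1):
        out[i] = inv * prefix[i] % m   # inv of a_i = inv(prod up to i) * prod before i
        inv = inv * nums[i] % m        # strip a_i from the running inverse
    return out
```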
Finding a modular multiplicative inverse has many applications in algorithms that rely on the theory of modular arithmetic. For instance, in cryptography the use of modular arithmetic permits some operations to be carried out more quickly and with fewer storage requirements, while other operations become more difficult.[13]Both of these features can be used to advantage. In particular, in the RSA algorithm, encrypting and decrypting a message is done using a pair of numbers that are multiplicative inverses with respect to a carefully selected modulus. One of these numbers is made public and can be used in a rapid encryption procedure, while the other, used in the decryption procedure, is kept hidden. Determining the hidden number from the public number is considered to be computationally infeasible and this is what makes the system work to ensure privacy.[14]
As another example in a different context, consider the exact division problem in computer science where you have a list of odd word-sized numbers each divisible bykand you wish to divide them all byk. One solution is as follows:
On many machines, particularly those without hardware support for division, division is a slower operation than multiplication, so this approach can yield a considerable speedup. The first step is relatively slow but only needs to be done once.
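A sketch of this technique using 64-bit words; the constant k = 7 is an arbitrary illustration:

```python
# Exact division of word-sized numbers by an odd k via multiplication:
# precompute kinv = k^{-1} mod 2^64 once, then each division is one multiply.
M = 1 << 64
k = 7
kinv = pow(k, -1, M)                               # one-time (relatively slow) step
nums = [k * x for x in (3, 12345, 999_999_937)]    # all exactly divisible by k
quotients = [(x * kinv) % M for x in nums]         # no division instruction needed
```

This works because when k divides x exactly, x · k⁻¹ ≡ x/k (mod 2⁶⁴), and the true quotient fits in a word.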
Modular multiplicative inverses are used to obtain a solution of a system of linear congruences that is guaranteed by theChinese Remainder Theorem.
For example, the system
has common solutions since 5, 7, and 11 are pairwise coprime. A solution is given by
where
Thus,
and in its unique reduced form
since 385 is the LCM of 5, 7, and 11.
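The Chinese-remainder construction can be sketched generically in Python (the function name `crt` is ours, and the residues used in the test are illustrative, not those of the worked example):

```python
from math import prod

def crt(residues, moduli):
    """Solve x ≡ r_i (mod m_i) for pairwise-coprime moduli, using modular inverses."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m                     # product of the other moduli
        x += r * Mi * pow(Mi, -1, m)    # pow(Mi, -1, m): inverse of Mi mod m
    return x % M
```

With moduli 5, 7, and 11 the solution is unique modulo 385, as the text notes.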
Also, the modular multiplicative inverse figures prominently in the definition of theKloosterman sum.
|
https://en.wikipedia.org/wiki/Modular_multiplicative_inverse
|
Variational Bayesian methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning. They are typically used in complex statistical models consisting of observed variables (usually termed "data") as well as unknown parameters and latent variables, with various sorts of relationships among the three types of random variables, as might be described by a graphical model. As is typical in Bayesian inference, the parameters and latent variables are grouped together as "unobserved variables". Variational Bayesian methods are primarily used for two purposes:
In the former purpose (that of approximating a posterior probability), variational Bayes is an alternative to Monte Carlo sampling methods (particularly, Markov chain Monte Carlo methods such as Gibbs sampling) for taking a fully Bayesian approach to statistical inference over complex distributions that are difficult to evaluate directly or sample from. In particular, whereas Monte Carlo techniques provide a numerical approximation to the exact posterior using a set of samples, variational Bayes provides a locally optimal, exact analytical solution to an approximation of the posterior.
Variational Bayes can be seen as an extension of the expectation–maximization (EM) algorithm from maximum likelihood (ML) or maximum a posteriori (MAP) estimation of the single most probable value of each parameter to fully Bayesian estimation, which computes (an approximation to) the entire posterior distribution of the parameters and latent variables. As in EM, it finds a set of optimal parameter values, and it has the same alternating structure as EM, based on a set of interlocked (mutually dependent) equations that cannot be solved analytically.
For many applications, variational Bayes produces solutions of comparable accuracy to Gibbs sampling at greater speed. However, deriving the set of equations used to update the parameters iteratively often requires a large amount of work compared with deriving the comparable Gibbs sampling equations. This is the case even for many models that are conceptually quite simple, as is demonstrated below in the case of a basic non-hierarchical model with only two parameters and no latent variables.
In variational inference, the posterior distribution over a set of unobserved variables Z={Z1…Zn}{\displaystyle \mathbf {Z} =\{Z_{1}\dots Z_{n}\}} given some data X{\displaystyle \mathbf {X} } is approximated by a so-called variational distribution, Q(Z):{\displaystyle Q(\mathbf {Z} ):}
The distributionQ(Z){\displaystyle Q(\mathbf {Z} )}is restricted to belong to a family of distributions of simpler form thanP(Z∣X){\displaystyle P(\mathbf {Z} \mid \mathbf {X} )}(e.g. a family of Gaussian distributions), selected with the intention of makingQ(Z){\displaystyle Q(\mathbf {Z} )}similar to the true posterior,P(Z∣X){\displaystyle P(\mathbf {Z} \mid \mathbf {X} )}.
The similarity (or dissimilarity) is measured in terms of a dissimilarity functiond(Q;P){\displaystyle d(Q;P)}and hence inference is performed by selecting the distributionQ(Z){\displaystyle Q(\mathbf {Z} )}that minimizesd(Q;P){\displaystyle d(Q;P)}.
The most common type of variational Bayes uses theKullback–Leibler divergence(KL-divergence) ofQfromPas the choice of dissimilarity function. This choice makes this minimization tractable. The KL-divergence is defined as
Note that Q and P are reversed from what one might expect. This use of reversed KL-divergence is conceptually similar to the expectation–maximization algorithm. (Using the KL-divergence the other way produces the expectation propagation algorithm.)
Variational techniques are typically used to form an approximation for:
The marginalization overZ{\displaystyle \mathbf {Z} }to calculateP(X){\displaystyle P(\mathbf {X} )}in the denominator is typically intractable, because, for example, the search space ofZ{\displaystyle \mathbf {Z} }is combinatorially large. Therefore, we seek an approximation, usingQ(Z)≈P(Z∣X){\displaystyle Q(\mathbf {Z} )\approx P(\mathbf {Z} \mid \mathbf {X} )}.
Given thatP(Z∣X)=P(X,Z)P(X){\displaystyle P(\mathbf {Z} \mid \mathbf {X} )={\frac {P(\mathbf {X} ,\mathbf {Z} )}{P(\mathbf {X} )}}}, the KL-divergence above can also be written as
BecauseP(X){\displaystyle P(\mathbf {X} )}is a constant with respect toZ{\displaystyle \mathbf {Z} }and∑ZQ(Z)=1{\displaystyle \sum _{\mathbf {Z} }Q(\mathbf {Z} )=1}becauseQ(Z){\displaystyle Q(\mathbf {Z} )}is a distribution, we have
which, according to the definition ofexpected value(for a discreterandom variable), can be written as follows
which can be rearranged to become
As the log-evidence log P(X){\displaystyle \log P(\mathbf {X} )} is fixed with respect to Q{\displaystyle Q}, maximizing the final term L(Q){\displaystyle {\mathcal {L}}(Q)} minimizes the KL divergence of Q{\displaystyle Q} from P{\displaystyle P}. By appropriate choice of Q{\displaystyle Q}, L(Q){\displaystyle {\mathcal {L}}(Q)} becomes tractable to compute and to maximize. Hence we have both an analytical approximation Q{\displaystyle Q} for the posterior P(Z∣X){\displaystyle P(\mathbf {Z} \mid \mathbf {X} )}, and a lower bound L(Q){\displaystyle {\mathcal {L}}(Q)} for the log-evidence log P(X){\displaystyle \log P(\mathbf {X} )} (since the KL-divergence is non-negative).
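Collecting the pieces of this derivation, the decomposition can be written compactly (using the same symbols as above; this is the standard form of the identity, restated here for readability):

```latex
\log P(\mathbf{X})
\;=\;
\mathcal{L}(Q) \;+\; D_{\mathrm{KL}}\!\left(Q \parallel P\right),
\qquad
\mathcal{L}(Q)
\;=\;
\sum_{\mathbf{Z}} Q(\mathbf{Z}) \log \frac{P(\mathbf{Z}, \mathbf{X})}{Q(\mathbf{Z})}.
```

Since the KL term is non-negative, maximizing L(Q) over Q simultaneously tightens the bound on the log-evidence and shrinks the divergence to the true posterior.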
The lower boundL(Q){\displaystyle {\mathcal {L}}(Q)}is known as the (negative)variational free energyin analogy withthermodynamic free energybecause it can also be expressed as a negative energyEQ[logP(Z,X)]{\displaystyle \operatorname {E} _{Q}[\log P(\mathbf {Z} ,\mathbf {X} )]}plus theentropyofQ{\displaystyle Q}. The termL(Q){\displaystyle {\mathcal {L}}(Q)}is also known asEvidence Lower Bound, abbreviated asELBO, to emphasize that it is a lower (worst-case) bound on the log-evidence of the data.
By the generalizedPythagorean theoremofBregman divergence, of which KL-divergence is a special case, it can be shown that:[1][2]
whereC{\displaystyle {\mathcal {C}}}is a convex set and the equality holds if:
In this case, the global minimizerQ∗(Z)=q∗(Z1∣Z2)q∗(Z2)=q∗(Z2∣Z1)q∗(Z1),{\displaystyle Q^{*}(\mathbf {Z} )=q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})q^{*}(\mathbf {Z} _{2})=q^{*}(\mathbf {Z} _{2}\mid \mathbf {Z} _{1})q^{*}(\mathbf {Z} _{1}),}withZ={Z1,Z2},{\displaystyle \mathbf {Z} =\{\mathbf {Z_{1}} ,\mathbf {Z_{2}} \},}can be found as follows:[1]
in which the normalizing constant is:
The termζ(X){\displaystyle \zeta (\mathbf {X} )}is often called theevidencelower bound (ELBO) in practice, sinceP(X)≥ζ(X)=exp(L(Q∗)){\displaystyle P(\mathbf {X} )\geq \zeta (\mathbf {X} )=\exp({\mathcal {L}}(Q^{*}))},[1]as shown above.
By interchanging the roles ofZ1{\displaystyle \mathbf {Z} _{1}}andZ2,{\displaystyle \mathbf {Z} _{2},}we can iteratively compute the approximatedq∗(Z1){\displaystyle q^{*}(\mathbf {Z} _{1})}andq∗(Z2){\displaystyle q^{*}(\mathbf {Z} _{2})}of the true model's marginalsP(Z1∣X){\displaystyle P(\mathbf {Z} _{1}\mid \mathbf {X} )}andP(Z2∣X),{\displaystyle P(\mathbf {Z} _{2}\mid \mathbf {X} ),}respectively. Although this iterative scheme is guaranteed to converge monotonically,[1]the convergedQ∗{\displaystyle Q^{*}}is only a local minimizer ofDKL(Q∥P){\displaystyle D_{\mathrm {KL} }(Q\parallel P)}.
If the constrained spaceC{\displaystyle {\mathcal {C}}}is confined within independent space, i.e.q∗(Z1∣Z2)=q∗(Z1),{\displaystyle q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})=q^{*}(\mathbf {Z_{1}} ),}the above iterative scheme will become the so-called mean field approximationQ∗(Z)=q∗(Z1)q∗(Z2),{\displaystyle Q^{*}(\mathbf {Z} )=q^{*}(\mathbf {Z} _{1})q^{*}(\mathbf {Z} _{2}),}as shown below.
The variational distributionQ(Z){\displaystyle Q(\mathbf {Z} )}is usually assumed to factorize over somepartitionof the latent variables, i.e. for some partition of the latent variablesZ{\displaystyle \mathbf {Z} }intoZ1…ZM{\displaystyle \mathbf {Z} _{1}\dots \mathbf {Z} _{M}},
It can be shown using thecalculus of variations(hence the name "variational Bayes") that the "best" distributionqj∗{\displaystyle q_{j}^{*}}for each of the factorsqj{\displaystyle q_{j}}(in terms of the distribution minimizing the KL divergence, as described above) satisfies:[3]
whereEq−j∗[lnp(Z,X)]{\displaystyle \operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]}is theexpectationof the logarithm of thejoint probabilityof the data and latent variables, taken with respect toq∗{\displaystyle q^{*}}over all variables not in the partition: refer to Lemma 4.1 of[4]for a derivation of the distributionqj∗(Zj∣X){\displaystyle q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )}.
In practice, we usually work in terms of logarithms, i.e.:
The constant in the above expression is related to thenormalizing constant(the denominator in the expression above forqj∗{\displaystyle q_{j}^{*}}) and is usually reinstated by inspection, as the rest of the expression can usually be recognized as being a known type of distribution (e.g.Gaussian,gamma, etc.).
Using the properties of expectations, the expressionEq−j∗[lnp(Z,X)]{\displaystyle \operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]}can usually be simplified into a function of the fixedhyperparametersof theprior distributionsover the latent variables and of expectations (and sometimes highermomentssuch as thevariance) of latent variables not in the current partition (i.e. latent variables not included inZj{\displaystyle \mathbf {Z} _{j}}). This createscircular dependenciesbetween the parameters of the distributions over variables in one partition and the expectations of variables in the other partitions. This naturally suggests aniterativealgorithm, much like EM (theexpectation–maximization algorithm), in which the expectations (and possibly higher moments) of the latent variables are initialized in some fashion (perhaps randomly), and then the parameters of each distribution are computed in turn using the current values of the expectations, after which the expectation of the newly computed distribution is set appropriately according to the computed parameters. An algorithm of this sort is guaranteed toconverge.[5]
In other words, for each of the partitions of variables, by simplifying the expression for the distribution over the partition's variables and examining the distribution's functional dependency on the variables in question, the family of the distribution can usually be determined (which in turn determines the value of the constant). The formula for the distribution's parameters will be expressed in terms of the prior distributions' hyperparameters (which are known constants), but also in terms of expectations of functions of variables in other partitions. Usually these expectations can be simplified into functions of expectations of the variables themselves (i.e. themeans); sometimes expectations of squared variables (which can be related to thevarianceof the variables), or expectations of higher powers (i.e. highermoments) also appear. In most cases, the other variables' distributions will be from known families, and the formulas for the relevant expectations can be looked up. However, those formulas depend on those distributions' parameters, which depend in turn on the expectations about other variables. The result is that the formulas for the parameters of each variable's distributions can be expressed as a series of equations with mutual,nonlineardependencies among the variables. Usually, it is not possible to solve this system of equations directly. However, as described above, the dependencies suggest a simple iterative algorithm, which in most cases is guaranteed to converge. An example will make this process clearer.
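The stationarity condition quoted above for each factor can be summarized in the notation of this section as:

```latex
\ln q_j^{*}(\mathbf{Z}_j)
\;=\;
\operatorname{E}_{q_{-j}^{*}}\!\bigl[\ln p(\mathbf{Z}, \mathbf{X})\bigr]
\;+\;
\text{const},
```

with the constant absorbed into normalization; cycling through j = 1, …, M and updating each factor in turn gives exactly the coordinate-ascent iteration described above.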
The following theorem is referred to as a duality formula for variational inference.[4] It explains some important properties of the variational distributions used in variational Bayes methods.
Theorem. Consider two probability spaces $(\Theta,\mathcal{F},P)$ and $(\Theta,\mathcal{F},Q)$ with $Q\ll P$. Assume that there is a common dominating probability measure $\lambda$ such that $P\ll\lambda$ and $Q\ll\lambda$. Let $h$ denote any real-valued random variable on $(\Theta,\mathcal{F},P)$ that satisfies $h\in L_{1}(P)$. Then the following equality holds:
Further, the supremum on the right-hand side is attained if and only if it holds
almost surely with respect to the probability measure $Q$, where $p(\theta)=dP/d\lambda$ and $q(\theta)=dQ/d\lambda$ denote the Radon–Nikodym derivatives of the probability measures $P$ and $Q$ with respect to $\lambda$, respectively.
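The display equations of the theorem do not survive in this extract. Reconstructed in the theorem's notation, the standard statement of the duality formula is

$$
\ln \operatorname{E}_{P}\!\left[e^{h}\right]
\;=\; \sup_{Q \ll P}\left\{ \operatorname{E}_{Q}[h] \;-\; D_{\mathrm{KL}}\!\left(Q \,\|\, P\right) \right\},
$$

and the supremum on the right-hand side is attained if and only if

$$
q(\theta) \;=\; \frac{p(\theta)\, e^{h(\theta)}}{\operatorname{E}_{P}\!\left[e^{h}\right]}
$$

holds $Q$-almost surely, matching the attainment condition stated above.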
Consider a simple non-hierarchical Bayesian model consisting of a set of i.i.d. observations from a Gaussian distribution with unknown mean and variance.[6] In the following, we work through this model in great detail to illustrate the workings of the variational Bayes method.
For mathematical convenience, in the following example we work in terms of the precision — i.e. the reciprocal of the variance (or, in a multivariate Gaussian, the inverse of the covariance matrix) — rather than the variance itself. (From a theoretical standpoint, precision and variance are equivalent, since there is a one-to-one correspondence between the two.)
We place conjugate prior distributions on the unknown mean $\mu$ and precision $\tau$; i.e., the mean follows a Gaussian distribution while the precision follows a gamma distribution. In other words:
The hyperparameters $\mu_{0},\lambda_{0},a_{0}$ and $b_{0}$ in the prior distributions are fixed, given values. They can be set to small positive numbers to give broad prior distributions, indicating ignorance about $\mu$ and $\tau$.
We are given $N$ data points $\mathbf{X}=\{x_{1},\ldots,x_{N}\}$, and our goal is to infer the posterior distribution $q(\mu,\tau)=p(\mu,\tau\mid x_{1},\ldots,x_{N})$ of the parameters $\mu$ and $\tau$.
The joint probability of all variables can be rewritten as
where the individual factors are
where
Assume that $q(\mu,\tau)=q(\mu)q(\tau)$, i.e. that the posterior distribution factorizes into independent factors for $\mu$ and $\tau$. This type of assumption underlies the variational Bayesian method. The true posterior distribution does not in fact factor this way (in fact, in this simple case it is known to be a Gaussian-gamma distribution), and hence the result we obtain will be an approximation.
Then
In the above derivation, $C$, $C_{2}$ and $C_{3}$ refer to values that are constant with respect to $\mu$. Note that the term $\operatorname{E}_{\tau}[\ln p(\tau)]$ is not a function of $\mu$ and will have the same value regardless of the value of $\mu$. Hence in line 3 we can absorb it into the constant term at the end. We do the same thing in line 7.
The last line is simply a quadratic polynomial in $\mu$. Since this is the logarithm of $q_{\mu}^{*}(\mu)$, we can see that $q_{\mu}^{*}(\mu)$ itself is a Gaussian distribution.
With a certain amount of tedious math (expanding the squares inside the braces, separating out and grouping the terms involving $\mu$ and $\mu^{2}$, and completing the square over $\mu$), we can derive the parameters of the Gaussian distribution:
Note that all of the above steps can be shortened by using the formula for the sum of two quadratics.
In other words:
The derivation of $q_{\tau}^{*}(\tau)$ is similar to the above, although we omit some of the details for the sake of brevity.
Exponentiating both sides, we can see that $q_{\tau}^{*}(\tau)$ is a gamma distribution. Specifically:
Let us recap the conclusions from the previous sections:
and
In each case, the parameters for the distribution over one of the variables depend on expectations taken with respect to the other variable. We can expand the expectations, using the standard formulas for the expectations of moments of the Gaussian and gamma distributions:
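The standard moment formulas being invoked here are, for the Gaussian factor $q_{\mu}^{*}(\mu)=\mathcal{N}(\mu\mid\mu_{N},\lambda_{N}^{-1})$ and the gamma factor $q_{\tau}^{*}(\tau)=\operatorname{Gamma}(\tau\mid a_{N},b_{N})$ derived above (a reconstruction of the omitted display):

$$
\operatorname{E}[\mu]=\mu_{N},\qquad
\operatorname{E}[\mu^{2}]=\frac{1}{\lambda_{N}}+\mu_{N}^{2},\qquad
\operatorname{E}[\tau]=\frac{a_{N}}{b_{N}},\qquad
\operatorname{E}[\ln\tau]=\psi(a_{N})-\ln b_{N},
$$

where $\psi$ denotes the digamma function.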
Applying these formulas to the above equations is trivial in most cases, but the equation for $b_{N}$ takes more work:
We can then write the parameter equations as follows, without any expectations:
Note that there are circular dependencies among the formulas for $\lambda_{N}$ and $b_{N}$. This naturally suggests an EM-like algorithm:
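A minimal sketch of this EM-like loop in Python, using the update equations derived above. The hyperparameter defaults and the fixed iteration count are illustrative choices, not part of the derivation.

```python
import numpy as np

def vb_gaussian(x, mu0=0.0, lam0=1.0, a0=1e-3, b0=1e-3, n_iter=100):
    """Coordinate-ascent variational Bayes for i.i.d. Gaussian data with
    unknown mean (Gaussian prior) and precision (gamma prior)."""
    N, xbar = len(x), x.mean()
    muN = (lam0 * mu0 + N * xbar) / (lam0 + N)  # fixed across iterations
    aN = a0 + (N + 1) / 2                       # fixed across iterations
    E_tau = a0 / b0                             # initial guess for E[tau]
    for _ in range(n_iter):
        lamN = (lam0 + N) * E_tau               # update q(mu) given E[tau]
        E_mu, E_mu2 = muN, 1.0 / lamN + muN**2  # moments of q(mu)
        bN = b0 + 0.5 * (lam0 * (E_mu2 - 2 * mu0 * E_mu + mu0**2)
                         + np.sum(x**2 - 2 * x * E_mu + E_mu2))
        E_tau = aN / bN                         # update E[tau] from q(tau)
    return muN, lamN, aN, bN

# Illustrative run on synthetic data with mean 5 and precision 1/4:
x = np.random.default_rng(0).normal(5.0, 2.0, size=2000)
muN, lamN, aN, bN = vb_gaussian(x)
```

Since $\mu_N$ and $a_N$ involve no expectations, they can be computed once up front; only $\lambda_N$ and $b_N$ actually participate in the circular dependency, which is why the loop body touches only those two quantities.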
We then have values for the hyperparameters of the approximating distributions of the posterior parameters, which we can use to compute any properties we want of the posterior — e.g. its mean and variance, a 95% highest-density region (the smallest interval that includes 95% of the total probability), etc.
It can be shown that this algorithm is guaranteed to converge to a local maximum.
Note also that the posterior distributions have the same form as the corresponding prior distributions. We did not assume this; the only assumption we made was that the distributions factorize, and the form of the distributions followed naturally. It turns out (see below) that the fact that the posterior distributions have the same form as the prior distributions is not a coincidence, but a general result whenever the prior distributions are members of the exponential family, which is the case for most of the standard distributions.
The above example shows the method by which the variational-Bayesian approximation to a posterior probability density in a given Bayesian network is derived:
Due to all of the mathematical manipulations involved, it is easy to lose track of the big picture. The important things are:
Variational Bayes (VB) is often compared with expectation–maximization (EM). The actual numerical procedure is quite similar, in that both are alternating iterative procedures that successively converge on optimum parameter values. The initial steps to derive the respective procedures are also vaguely similar, both starting out with formulas for probability densities and both involving significant amounts of mathematical manipulation.
However, there are a number of differences. Most important is what is being computed.
Imagine a Bayesian Gaussian mixture model described as follows:[3]
Note:
The interpretation of the above variables is as follows:
The joint probability of all variables can be rewritten as
where the individual factors are
where
Assume that $q(\mathbf{Z},\pi,\mu,\Lambda)=q(\mathbf{Z})\,q(\pi,\mu,\Lambda)$.
Then[3]
where we have defined
Exponentiating both sides of the formula for $\ln q^{*}(\mathbf{Z})$ yields
Requiring that this be normalized ends up requiring that the $\rho_{nk}$ sum to 1 over all values of $k$, yielding
where
In other words, $q^{*}(\mathbf{Z})$ is a product of single-observation multinomial distributions, and factors over each individual $\mathbf{z}_{n}$, which is distributed as a single-observation multinomial distribution with parameters $r_{nk}$ for $k=1,\dots,K$.
Furthermore, we note that
which is a standard result for categorical distributions.
Now, considering the factor $q(\pi,\mu,\Lambda)$, note that it automatically factors into $q(\pi)\prod_{k=1}^{K}q(\mu_{k},\Lambda_{k})$ due to the structure of the graphical model defining our Gaussian mixture model, which is specified above.
Then,
Taking the exponential of both sides, we recognize $q^{*}(\pi)$ as a Dirichlet distribution
where
where
Finally
Grouping and reading off terms involving $\mu_{k}$ and $\Lambda_{k}$, the result is a Gaussian-Wishart distribution given by
given the definitions
Finally, notice that these functions require the values of $r_{nk}$, which make use of $\rho_{nk}$, which is defined in turn based on $\operatorname{E}[\ln\pi_{k}]$, $\operatorname{E}[\ln|\Lambda_{k}|]$, and $\operatorname{E}_{\mu_{k},\Lambda_{k}}[(\mathbf{x}_{n}-\mu_{k})^{\mathrm{T}}\Lambda_{k}(\mathbf{x}_{n}-\mu_{k})]$. Now that we have determined the distributions over which these expectations are taken, we can derive formulas for them:
These results lead to
These can be converted from proportional to absolute values by normalizing over $k$ so that the corresponding values sum to 1.
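In an implementation, the $\rho_{nk}$ are typically computed in the log domain and normalized with the log-sum-exp trick to avoid underflow. A sketch, under the assumption that rows index observations $n$ and columns index components $k$:

```python
import numpy as np

def responsibilities(log_rho):
    """Normalize log-domain rho_{nk} into responsibilities r_{nk} that
    sum to 1 over k for each observation n (log-sum-exp for stability)."""
    log_rho = log_rho - log_rho.max(axis=1, keepdims=True)  # safe shift
    rho = np.exp(log_rho)
    return rho / rho.sum(axis=1, keepdims=True)

r = responsibilities(np.log([[1.0, 2.0, 3.0]]))
# r is [[1/6, 1/3, 1/2]]: proportional values rescaled to sum to 1.
```

Subtracting the row maximum changes nothing after normalization (it cancels in the ratio), but keeps the exponentials in a representable range.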
Note that:
This suggests an iterative procedure that alternates between two steps:
Note that these steps correspond closely with the standard EM algorithm used to derive a maximum likelihood or maximum a posteriori (MAP) solution for the parameters of a Gaussian mixture model. The responsibilities $r_{nk}$ in the E step correspond closely to the posterior probabilities of the latent variables given the data, i.e. $p(\mathbf{Z}\mid\mathbf{X})$; the computation of the statistics $N_{k}$, $\bar{\mathbf{x}}_{k}$, and $\mathbf{S}_{k}$ corresponds closely to the computation of corresponding "soft-count" statistics over the data; and the use of those statistics to compute new values of the parameters corresponds closely to the use of soft counts to compute new parameter values in normal EM over a Gaussian mixture model.
Note that in the previous example, once the distribution over unobserved variables was assumed to factorize into distributions over the "parameters" and distributions over the "latent data", the derived "best" distribution for each variable was in the same family as the corresponding prior distribution over the variable. This is a general result that holds true for all prior distributions derived from the exponential family.
|
https://en.wikipedia.org/wiki/Variational_Bayesian_methods
|
Majoritarian democracy is a form of democracy based upon a principle of majority rule.[1] Majoritarian democracy contrasts with consensus democracy, rule by as many people as possible.[1][2][3][4]
Arend Lijphart offers what is perhaps the dominant definition of majoritarian democracy. He identifies that majoritarian democracy is based on the Westminster model and majority rule.[5] According to Lijphart, the key features of a majoritarian democracy are:
In the majoritarian vision of democracy, voters mandate elected politicians to enact the policies they proposed during their electoral campaign.[6] Elections are the focal point of political engagement, with limited ability for the people to influence policymaking between elections.[7]
Though common, majoritarian democracy is not universally accepted – majoritarian democracy is criticized as having the inherent danger of becoming a "tyranny of the majority", whereby the majority in society could oppress or exclude minority groups,[1] which can lead to violence and civil war.[2][3] Some argue[who?] that since parliament, statutes and preparatory works are very important in majoritarian democracies,[citation needed] and considering the absence of a tradition of exercising judicial review at the national level,[citation needed] majoritarian democracies are undemocratic.[citation needed]
Fascism rejects majoritarian democracy because the latter assumes equality of citizens; fascists claim that fascism is a form of authoritarian democracy that represents the views of a dynamic organized minority of a nation rather than the disorganized majority.[8]
There are few, if any, purely majoritarian democracies. In many democracies, majoritarianism is modified or limited by one or several mechanisms which attempt to represent minorities.
The United Kingdom is the classical example of a majoritarian system.[5] The United Kingdom's Westminster system has been borrowed and adapted in many other democracies. Majoritarian features of the United Kingdom's political system include:
However, even in the United Kingdom, majoritarianism has been at least somewhat limited by the introduction of devolved parliaments.[10]
Australia is a generally majoritarian democracy, although some have argued that it typifies a form of 'modified majoritarianism'.[9] This is because, while the lower house of the Australian Parliament is elected via preferential voting, the upper house is elected via proportional representation. Proportional representation is a voting system that allows for greater minority representation.[11] Canada is subject to a similar debate.[12]
The United States has some elements of majoritarianism – such as first-past-the-post voting in many contexts – but this is complicated by variation among states. In addition, a strict separation of powers and strong federalism mediate majoritarianism. An example of this complexity can be seen in the role of the Electoral College in presidential elections, as a result of which a candidate who loses the popular vote may still go on to win the presidency.[13]
|
https://en.wikipedia.org/wiki/Majoritarian_democracy
|
A business incubator is an organization that helps startup companies and individual entrepreneurs to develop their businesses by providing a full-scale range of services, starting with management training and office space, and ending with venture capital financing.[1] The National Business Incubation Association (NBIA) defines business incubators as a catalyst tool for either regional or national economic development. NBIA categorizes its members' incubators by the following five incubator types: academic institutions; non-profit development corporations; for-profit property development ventures; venture capital firms; and a combination of the above.[2]
Business incubators differ from research and technology parks in their dedication to startup and early-stage companies. Research and technology parks, on the other hand, tend to be large-scale projects that house everything from corporate, government, or university labs to very small companies. Most research and technology parks do not offer business assistance services, which are the hallmark of a business incubation program. However, many research and technology parks house incubation programs.[3]
Incubators also differ from the U.S. Small Business Administration's Small Business Development Centers (and similar business support programs) in that they serve only selected clients. Congress created the Small Business Administration in the Small Business Act of July 30, 1953. Its purpose is to "aid, counsel, assist and protect, insofar as is possible, the interests of small business concerns." In addition, the charter ensures that small businesses receive a "fair proportion" of any government contracts and sales of surplus property.[4] SBDCs work with any small businesses at any stage of development, and not only with startup companies. Many business incubation programs partner with their local SBDC to create a "one-stop shop" for entrepreneurial support.[5]
Within European Union countries, there are different EU- and state-funded programs that offer support in the form of consulting, mentoring, prototype creation, and other services, as well as co-funding for them.[6]
In India, business incubators are promoted in a varied fashion: as technology business incubators (TBI) and as startup incubators. The first deals with technology businesses (mostly consultancy and promoting technology-related businesses), and the latter deals with promoting startups (with more emphasis on establishing new companies, scaling the businesses, prototyping, patenting, and so forth).[7][8][9][10][11]
The first business incubator was the Batavia Industrial Center, which opened in 1959 in Batavia, New York.[12] Two years earlier, Massey-Harris had announced the closure of its Batavia farm machinery factory, resulting in a giant vacant building and a local unemployment rate of 18 percent.[12] The Mancuso family, the dominant business family in that area of Western New York, was desperate to resuscitate the regional economy, whose imminent collapse threatened to bring down their various business enterprises.[12] They bought the former harvester factory and placed Joseph Mancuso in charge of finding commercial tenants.[12] It soon became clear that large corporations preferred to build new factories from scratch rather than shoehorn them into someone else's 80-year-old building, thereby forcing Mancuso to subdivide the vast space and lease smaller spaces to smaller tenants.[12]
In Mancuso's frantic search for tenants, he offered creative incentives to anyone willing to sign a lease, such as "short-term leases, shared office supplies and equipment, business advice, and secretarial services", as well as assistance with linking up with local banks to secure financing.[12]One tenant was a nearby chicken hatchery in need of space to house additional chicken coops, which explains the origin of the term "business incubator".[12]In 1963, while giving a tour to a reporter of the various tenants in the Batavia Industrial Center, Mancuso pointed out the coops and remarked, "These guys are incubating chickens...I guess we’re incubating businesses".[12]
Business incubation expanded across the U.S. in the 1980s and spread to the UK and Europe through various related forms (e.g. innovation centres, pépinières d'entreprises, technopoles/science parks).
The U.S.-based International Business Innovation Association estimates that there are about 7,000 incubators worldwide. A study funded by the European Commission in 2002 identified around 900 incubation environments in Western Europe.[13] As of October 2006, there were more than 1,400 incubators in North America, up from only 12 in 1980. Her Majesty's Treasury identified around 25 incubation environments in the UK in 1997; by 2005, UKBI identified around 270 incubation environments across the country. In 2005 alone, North American incubation programs assisted more than 27,000 companies that provided employment for more than 100,000 workers and generated annual revenues of $17 billion.[14]
Incubation activity has not been limited to developed countries; incubation environments are now being implemented in developing countries and are attracting interest in financial support from organizations such as UNIDO and the World Bank.
The first high-tech incubator located in Silicon Valley was Catalyst Technologies, started by Nolan Bushnell after he left Atari. "My idea was that I would fund [the businesses] with a key," says Bushnell. "And the key would fit a lock in a building. In the building would be a desk and chair, and down the hall would be a Xerox machine. They would sign their name 35 times and the company would be incorporated." All the details would be handled: "They'd have a health care plan, their payroll system would be in place, and the books would be set up. So in 15 minutes, they would be in business working on the project."[15]
Since startup companies lack many resources, experience, and networks, incubators provide services which help them get through initial hurdles in starting up a business. These hurdles include space, funding, legal, accounting, computer services, and other prerequisites to running the business.
According to the Small Business Administration's website, its mission is to provide small businesses with four main services. These services are:
Among the most common incubator services are:[14]
There are a number of business incubators that have focused on particular industries or on a particular business model, earning them their own name.
More than half of all business incubation programs are "mixed-use" projects, meaning they work with clients from a variety of industries. Technology incubators account for 39% of incubation programs.[14]
One example of a specialized type of incubator is a bio incubator. Bio incubators specialize in supporting life science-based startup companies. Entrepreneurs with feasible projects in life sciences are selected and admitted to these programs.
Unlike many business assistance programs, business incubators do not serve any and all companies. Entrepreneurs who wish to enter a business incubation program must apply for admission. Acceptance criteria vary from program to program, but in general only those with feasible business ideas and a workable business plan are admitted.[19] It is this factor that makes it difficult to compare the success rates of incubated companies against general business survival statistics.[20]
Although most incubators offer their clients office space and shared administrative services, the heart of a true business incubation program is the services it provides to startup companies. More than half of incubation programs surveyed by the National Business Incubation Association[21] in 2006 reported that they also served affiliate or virtual clients.[14] These companies do not reside in the incubator facility. Affiliate clients may be home-based businesses or early-stage companies that have their own premises but can benefit from incubator services. Virtual clients may be too remote from an incubation facility to participate on site, and so receive counseling and other assistance electronically.
The amount of time a company spends in an incubation program can vary widely depending on a number of factors, including the type of business and the entrepreneur's level of business expertise. Life science and other firms with long research and development cycles require more time in an incubation program than manufacturing or service companies that can immediately produce and bring a product or service to market. On average, incubator clients spend 33 months in a program.[14] Many incubation programs set graduation requirements by development benchmarks, such as company revenues or staffing levels, rather than time.
Business incubation has been identified as a means of meeting a variety of economic and socioeconomic policy needs, which may include job creation, fostering a community's entrepreneurial climate, technology commercialization, diversifying local economies, building or accelerating growth of local industry clusters, business creation and retention, encouraging minority entrepreneurship, identifying potential spin-in or spin-out business opportunities, or community revitalization.[14]
About one-third of business incubation programs are sponsored by economic development organizations. Government entities (such as cities or counties) account for 21% of program sponsors. Another 20% are sponsored by academic institutions, including two- and four-year colleges, universities, and technical colleges.[14] In many countries, incubation programs are funded by regional or national governments as part of an overall economic development strategy. In the United States, however, most incubation programs are independent, community-based and resourced projects. The U.S. Economic Development Administration is a frequent source of funds for developing incubation programs, but once a program is open and operational it typically receives no federal funding; few states offer centralized incubator funding. Rents and/or client fees account for 59% of incubator revenues, followed by service contracts or grants (18%) and cash operating subsidies (15%).[14]
As part of a major effort to address the ongoing economic crisis of the US, legislation was introduced to "reconstitute Project Socrates". The updated version of Socrates supports incubators by enabling users with technology-based facts about the marketplace, competitor maneuvers, potential partners, and technology paths to achieve competitive advantage. Michael Sekora, the original creator and director of Socrates, says that a key purpose of Socrates is to assist government economic planners in addressing the economic and socioeconomic issues (see above) with unprecedented speed, efficiency and agility.[22]
Many for-profit or "private" incubation programs were launched in the late 1990s by investors and other for-profit operators seeking to hatch businesses quickly and bring in big payoffs. At the time, NBIA estimated that nearly 30% of all incubation programs were for-profit ventures. In the wake of the dot-com bust, however, many of those programs closed. In NBIA's 2002 State of the Business Incubation survey, only 16% of responding incubators were for-profit programs. By the 2006 SOI, just 6% of respondents were for-profit.[14]
Although some incubation programs (regardless of nonprofit or for-profit status) take equity in client companies, most do not. Only 25% of incubation programs report that they take equity in some or all of their clients.[14]
Incubators often aggregate themselves into networks which are used to share good practices and new methodologies.
Europe's European Business and Innovation Centre Network ("EBN")[23] association federates more than 250 European Business and Innovation Centres (EU|BICs) throughout Europe. France has its own national network of technopoles, pre-incubators, and EU|BICs, called RETIS Innovation. This network focuses on internationalizing startups.[citation needed]
Of 1000 incubators across Europe, 500 are situated in Germany. Many of them are organized federally within the ADT (Arbeitsgemeinschaft Deutscher Innovations-, Technologie-, und Gründerzentren e.V.).[24]
San Francisco and Silicon Valley are home to 'founder houses.'[25] These involve a collective of founders sharing an apartment or house while working to get their companies off the ground. Similar to tech/hacker houses in the same area, the founders collaborate to promote one another's success while enjoying the financial benefits of co-living in one of the most expensive regions of the country.[26] These collectives are typically located in San Francisco or near Stanford University's campus.[27] Many of the founders have dropped out of Stanford University to pursue their careers – in fact, there is more than a 1 in 10 chance that billion-dollar startups have one or more founders who attended Stanford.[28] In addition to the financial incentives of co-living, founders share investor recommendations, funding strategies, VC contacts, and other elements critical to a startup company's success in its early days.[29] These set-ups allow for largely virtual work, eliminating the burden on new founders to find a physical space for their company.[29] Due to the collaborative nature of these spaces, residents who have failed companies often pivot to taking a high-ranking position at a roommate's company.[25] Collectives such as these build on a legacy set forth by Mark Zuckerberg and Facebook: the hacker's den rented by Zuckerberg, which ultimately gave rise to a tech supergiant, was documented in the 2010 film The Social Network.[30][31]
|
https://en.wikipedia.org/wiki/Public_incubator
|
In generative morphology, the righthand head rule is a rule of grammar that specifies that, in certain languages, the rightmost morpheme in a morphological structure is almost always the head. What this means is that it is the righthand element that provides the primary syntactic and/or semantic information. The projection of syntactic information from the righthand element onto the output word is known as feature percolation. The righthand head rule is considered a broadly general and universal principle of morphology. For certain other languages it is proposed that a lefthand head rule applies instead, where the lefthand element provides this information.
In derivational morphology (i.e. the creation of new words), the head is the morpheme that provides the part-of-speech (PoS) information. According to the righthand head rule, this is of course the righthand element.
For instance, the word 'person' is a noun, but if the suffix '-al' is added then 'personal' is derived. 'Personal' is an adjective, and the righthand head rule holds that the PoS information is provided by the suffix '-al', which is the righthand element.
The adverb 'personally' is derived from 'personal' by adding the suffix '-ly'. The PoS information is provided by this suffix, which is added to the right of 'personal'.
The same applies to the noun 'personality', which is also derived from 'personal', this time by adding the nominal suffix '-ity' to the right of the input word. Again the PoS information is projected from the righthand element.
The three above examples may be formalized thus (N=noun, ADJ=adjective, ADV=adverb):
They are all instances of the righthand head rule, which may be formalized as:
The righthand head rule may also be applied to inflectional morphology (i.e. the addition of semantic information without changing the word class). In relation to inflectional morphology, the righthand head rule holds that the rightmost element of a word provides the most essential additional semantic information.
For example, the past tense form of 'play' is created by adding the past tense suffix '-(e)d' to the right. This suffix provides the past tense feature, which is also the main additional semantic content of the output word 'played'.
Likewise, the plural form of 'dog' is created by the addition of the plural nominal suffix '-s' to the right of the input. Thus 'dogs' inherits its plurality feature from the suffix.
The same thing goes for the comparative form of the adjective 'ugly'. 'Uglier' is created by the addition of the comparative suffix '-er' to the right, thus receiving its comparative feature from the suffix.
Formalizing the examples shows that the underlying principle of inflection is basically the same as the righthand head rule (INF=infinitive, P=past tense, SG=singular, POS=positive, COM=comparative):
Another area of morphology where the righthand head rule seems applicable is that of compounding (i.e. the creation of a word by combining two or more other words), in which it holds that the righthand word provides both the essential semantic information and the word class.
For instance, the noun 'runway' combines a verb and a noun. Since it refers to a kind of way rather than a kind of running, and since it is a noun and not a verb, the head is 'way', which appears on the right.
The noun 'wheelchair' combines two nouns. The primary element is the righthand one – namely, 'chair' – since the word refers to a kind of chair rather than a kind of wheel.
Again formalizations show that the underlying principle must be the righthand head rule:
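The percolation behaviour behind these formalizations can be mimicked in a few lines, with each morpheme carrying a feature bundle and the rightmost element's features projecting to the output word. The representation here is invented purely for illustration:

```python
# Toy model of the righthand head rule: the output word's features
# (e.g. its part of speech) percolate from the rightmost morpheme.

def combine(*morphemes):
    """Each morpheme is a (form, features) pair; the features of the
    rightmost morpheme become the features of the whole word."""
    form = "".join(m[0] for m in morphemes)
    return (form, morphemes[-1][1])

combine(("person", {"cat": "N"}), ("al", {"cat": "ADJ"}))
# -> ("personal", {"cat": "ADJ"}): the suffix determines the category

combine(("run", {"cat": "V"}), ("way", {"cat": "N"}))
# -> ("runway", {"cat": "N"}): the compound is a noun, like its head
```

A lefthand head rule would differ only in taking `morphemes[0][1]` instead, which is exactly the parameter at issue in the criticism below.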
The righthand head rule is taken to be a universal principle of morphology, but has been subject to much severe criticism. The main point of criticism is that it is empirically insufficient, because it ignores numerous cases where the head does not appear in the righthand position (PREP=preposition, NEG=negation):
Another main point of criticism is that the righthand head rule is too Eurocentric, or even Anglocentric, taking into consideration only morphological processes typical of European languages (mainly English) and ignoring processes from languages all over the world. Indeed, in certain languages a lefthand head rule applies rather than a righthand head rule.[1]
Many linguists reject the righthand head rule as being too idealizing and empirically inadequate.
|
https://en.wikipedia.org/wiki/Righthand_head_rule
|
HARKing (hypothesizing after the results are known) is an acronym coined by social psychologist Norbert Kerr[1] that refers to the questionable research practice of "presenting a post hoc hypothesis in the introduction of a research report as if it were an a priori hypothesis".[1][2] Hence, a key characteristic of HARKing is that post hoc hypothesizing is falsely portrayed as a priori hypothesizing.[3] HARKing may occur when a researcher tests an a priori hypothesis but then omits that hypothesis from their research report after they find out the results of their test. Post hoc analysis or post hoc theorizing then may lead to a post hoc hypothesis.
Several types of HARKing have been distinguished, including:
Concerns about HARKing appear to be increasing in the scientific community, as shown by the increasing number of citations to Kerr's seminal article.[7] A 2017 review of six surveys found that an average of 43% of researchers surveyed (mainly psychologists) self-reported HARKing "at least once".[5] This figure may be an underestimate if researchers are concerned about reporting questionable research practices, do not perceive themselves to be responsible for HARKing that is proposed by editors and reviewers (i.e., passive HARKing), and/or do not recognize their HARKing due to hindsight or confirmation biases.
HARKing appears to be motivated by a desire to publish research in a publication environment that values a priori hypotheses over post hoc hypotheses and contains a publication bias against null results. In order to improve their chances of publishing their results, researchers may secretly suppress any a priori hypotheses that failed to yield significant results, construct or retrieve post hoc hypotheses that account for any unexpected significant results, and then present these new post hoc hypotheses in their research reports as if they are a priori hypotheses.[1][8][9][5][10]
HARKing is associated with the debate regarding prediction and accommodation.[11] In the case of prediction, hypotheses are deduced from a priori theory and evidence. In the case of accommodation, hypotheses are induced from the current research results.[7] One view is that HARKing represents a form of accommodation in which researchers induce ad hoc hypotheses from their current results.[1][3] Another view is that HARKing represents a form of prediction in which researchers deduce hypotheses from a priori theory and evidence after they know their current results.[7]
Potential costs of HARKing include:[1]: 211
In 2022, Rubin provided a critical analysis of Kerr's 12 costs of HARKing. He concluded that these costs "are either misconceived, misattributed to HARKing, lacking evidence, or that they do not take into account pre- and post-publication peer review and public availability to research materials and data."[7]
Some of the costs of HARKing are thought to have led to the replication crisis in science.[4] Hence, Bishop described HARKing as one of "the four horsemen of the reproducibility apocalypse," with publication bias, low statistical power, and p-hacking[12] being the other three.[13] An alternative view is that it is premature to conclude that HARKing has contributed to the replication crisis.[7][5][14]
The preregistration of research hypotheses prior to data collection has been proposed as a method of identifying and deterring HARKing. However, the use of preregistration to prevent HARKing is controversial.[3]
Kerr pointed out that "HARKing can entail concealment. The question then becomes whether what is concealed in HARKing can be a useful part of the 'truth' ...or is instead basically uninformative (and may, therefore, be safely ignored at an author's discretion)".[1]: 209 Three different positions about the ethics of HARKing depend on whether HARKing conceals "a useful part of the 'truth'".
The first position is that all HARKing is unethical under all circumstances because it violates a fundamental principle of communicating scientific research honestly and completely.[1]: 209 According to this position, HARKing always conceals a useful part of the truth.
A second position is that HARKing falls into a "gray zone" of ethical practice.[1][15] According to this position, some forms of HARKing are more or less ethical under some circumstances.[16][5][17][7] Hence, only some forms of HARKing conceal a useful part of the truth under some conditions. Consistent with this view, a 2018 survey of 119 USA researchers found that HARKing ("reporting an unexpected result as having been hypothesized from the start") was associated with "ambiguously unethical" research practices more than with "unambiguously unethical" research practices.[18]
A third position is that HARKing is acceptable provided that hypotheses are explicitly deduced from a priori theory and evidence, as explained in a theoretical rationale, and readers have access to the relevant research data and materials.[7] According to this position, HARKing does not prevent readers from making an adequately informed evaluation of the theoretical quality and plausibility of the HARKed hypotheses and the methodological rigor with which the hypotheses have been tested.[7][17] In this case, HARKing does not conceal a useful part of the truth. Furthermore, researchers may claim that a priori theory and evidence predict their results even if the prediction is deduced after they know their results.[7][19]
|
https://en.wikipedia.org/wiki/HARKing
|
In network theory, a giant component is a connected component of a given random graph that contains a significant fraction of the entire graph's vertices.
More precisely, in graphs drawn randomly from a probability distribution over arbitrarily large graphs, a giant component is a connected component whose fraction of the overall number of vertices is bounded away from zero. In sufficiently dense graphs distributed according to the Erdős–Rényi model, a giant component exists with high probability.
Giant components are a prominent feature of the Erdős–Rényi model (ER) of random graphs, in which each possible edge connecting pairs of a given set of $n$ vertices is present, independently of the other edges, with probability $p$. In this model, if $p \leq \frac{1-\epsilon}{n}$ for any constant $\epsilon > 0$, then with high probability (in the limit as $n$ goes to infinity) all connected components of the graph have size $O(\log n)$, and there is no giant component. However, for $p \geq \frac{1+\epsilon}{n}$ there is with high probability a single giant component, with all other components having size $O(\log n)$. For $p = p_c = \frac{1}{n}$, intermediate between these two possibilities, the number of vertices in the largest component of the graph is with high probability proportional to $n^{2/3}$.[1]
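This threshold behaviour is easy to observe numerically. The sketch below (an illustration of my own, not from the source) samples two ER graphs, one below and one above the critical density, and compares largest-component sizes using breadth-first search:

```python
import random
from collections import deque

def er_graph(n, p, seed=0):
    """Sample an Erdos-Renyi G(n, p) graph as an adjacency list."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def largest_component(adj):
    """Size of the largest connected component, found by BFS."""
    n = len(adj)
    seen = [False] * n
    best = 0
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        q, size = deque([s]), 1
        while q:
            u = q.popleft()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    size += 1
                    q.append(v)
        best = max(best, size)
    return best

n = 2000
sub = largest_component(er_graph(n, 0.5 / n))  # subcritical: all components O(log n)
sup = largest_component(er_graph(n, 2.0 / n))  # supercritical: a giant component emerges
print(sub, sup)
```

For $p = 2/n$ the giant component should cover the fraction solving $S = 1 - e^{-2S}$, roughly 80% of the vertices, while the subcritical graph stays fragmented into small pieces.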
The giant component is also important in percolation theory.[1][2] When a fraction of nodes, $q = 1-p$, is removed randomly from an ER network of average degree $\langle k\rangle$, there exists a critical threshold, $p_c = \frac{1}{\langle k\rangle}$. Above $p_c$ there exists a giant component (largest cluster) of relative size $P_{\inf}$, which satisfies $P_{\inf} = p(1-\exp(-\langle k\rangle P_{\inf}))$. For $p < p_c$ the only solution of this equation is $P_{\inf} = 0$, i.e., there is no giant component.
At $p_c$, the distribution of cluster sizes behaves as a power law, $n(s) \sim s^{-5/2}$, which is a feature of a phase transition.
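The self-consistency equation for $P_{\inf}$ has no closed form, but it can be solved by fixed-point iteration. A minimal sketch (my own illustration; the mean degree $\langle k\rangle = 4$ is an arbitrary choice):

```python
import math

def giant_fraction(k_mean, p, iters=200):
    """Solve P_inf = p * (1 - exp(-k_mean * P_inf)) by fixed-point iteration."""
    P = 1.0  # start from the fully-occupied guess and iterate to the fixed point
    for _ in range(iters):
        P = p * (1.0 - math.exp(-k_mean * P))
    return P

k = 4.0
pc = 1.0 / k                    # critical threshold p_c = 1/<k> = 0.25
print(giant_fraction(k, 0.9))   # well above p_c: a sizeable giant component
print(giant_fraction(k, 0.1))   # below p_c: the iteration collapses to 0
```

Below $p_c$ the map contracts toward zero (its slope at the origin, $p\langle k\rangle$, is less than 1), so the iteration converges to the trivial solution, exactly as the text states.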
Alternatively, if one adds randomly selected edges one at a time, starting with an empty graph, then it is not until approximately $n/2$ edges have been added that the graph contains a large component, and soon after that the component becomes giant. More precisely, when $t$ edges have been added, for values of $t$ close to but larger than $n/2$, the size of the giant component is approximately $4t-2n$.[1] However, according to the coupon collector's problem, $\Theta(n\log n)$ edges are needed in order to have high probability that the whole random graph is connected.
A similar sharp threshold between parameters that lead to graphs with all components small and parameters that lead to a giant component also occurs in tree-like random graphs with non-uniform degree distributions $P(k)$. The degree distribution does not define a graph uniquely. However, under the assumption that in all respects other than their degree distribution the graphs are treated as entirely random, many results on finite/infinite-component sizes are known. In this model, the existence of the giant component depends only on the first two (mixed) moments of the degree distribution. Let a randomly chosen vertex have degree $k$; then the giant component exists[3] if and only if $\langle k^2\rangle - 2\langle k\rangle > 0$. This is known as the Molloy and Reed condition.[4] The first moment of $P(k)$ is the mean degree of the network. In general, the $n$-th moment is defined as $\langle k^n\rangle = \mathbb{E}[k^n] = \sum_k k^n P(k)$.
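As a worked example (mine, not from the source): for a Poisson degree distribution with mean $c$, $\langle k^2\rangle = c^2 + c$, so the Molloy–Reed criterion $\langle k^2\rangle - 2\langle k\rangle > 0$ reduces to $c > 1$, matching the ER threshold. A numerical check:

```python
import math

def poisson_pmf(k, c):
    """P(k) = e^{-c} c^k / k! for a Poisson degree distribution with mean c."""
    return math.exp(-c) * c**k / math.factorial(k)

def molloy_reed(c, kmax=100):
    """Evaluate <k^2> - 2<k> for a (truncated) Poisson degree distribution."""
    m1 = sum(k * poisson_pmf(k, c) for k in range(kmax))       # first moment <k>
    m2 = sum(k * k * poisson_pmf(k, c) for k in range(kmax))   # second moment <k^2>
    return m2 - 2 * m1

print(molloy_reed(0.5))  # negative: no giant component
print(molloy_reed(2.0))  # positive: a giant component exists
```

Analytically the two values are $c^2 - c$, i.e. $-0.25$ and $2$; the truncation at $k_{\max} = 100$ is negligible for these means.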
When there is no giant component, the expected size of the component containing a randomly chosen vertex can also be determined from the first and second moments; it is $1 + \frac{\langle k\rangle^2}{2\langle k\rangle - \langle k^2\rangle}$ (which diverges precisely as the Molloy–Reed threshold is approached). However, when there is a giant component, its size is trickier to evaluate.[2]
Similar expressions are also valid for directed graphs, in which case the degree distribution is two-dimensional.[5] There are three types of connected components in directed graphs. For a randomly chosen vertex:
Let a randomly chosen vertex have $k_{\text{in}}$ in-edges and $k_{\text{out}}$ out-edges. By definition, the average numbers of in- and out-edges coincide, so that $c = \mathbb{E}[k_{\text{in}}] = \mathbb{E}[k_{\text{out}}]$. If $G_0(x) = \sum_k P(k)x^k$ is the generating function of the degree distribution $P(k)$ for an undirected network, then $G_1(x)$ can be defined as $G_1(x) = \sum_k \frac{k}{\langle k\rangle}P(k)x^{k-1}$. For directed networks, the generating function assigned to the joint probability distribution $P(k_{\text{in}}, k_{\text{out}})$ can be written with two variables $x$ and $y$ as $\mathcal{G}(x,y) = \sum_{k_{\text{in}},k_{\text{out}}} P(k_{\text{in}},k_{\text{out}})\, x^{k_{\text{in}}} y^{k_{\text{out}}}$; then one can define $g(x) = \frac{1}{c}\frac{\partial\mathcal{G}}{\partial x}\big|_{y=1}$ and $f(y) = \frac{1}{c}\frac{\partial\mathcal{G}}{\partial y}\big|_{x=1}$.
The criteria for giant component existence in directed and undirected random graphs are given in the table below:
|
https://en.wikipedia.org/wiki/Giant_component
|
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function)[1] is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy.
In statistics, a loss function is typically used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century.[2] In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s.[3] In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss.
Leonard J. Savage argued that, using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the circumstances been known and the decision that was in fact taken before they were known.
The use of a quadratic loss function is common, for example when using least squares techniques. It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is $t$, then a quadratic loss function is
$\lambda(x) = C(t - x)^2$
for some constant $C$; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as the squared error loss (SEL).[1]
Many common statistics, including t-tests, regression models, design of experiments, and much else, use least squares methods applied using linear regression theory, which is based on the quadratic loss function.
The quadratic loss function is also used in linear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a quadratic form in the deviations of the variables of interest from their desired values; this approach is tractable because it results in linear first-order conditions. In the context of stochastic control, the expected value of the quadratic form is used. Because of its square nature, the quadratic loss assigns more weight to outliers than to typical data, so alternatives like the Huber, log-cosh and SMAE losses are used when the data has many large outliers.
In statistics and decision theory, a frequently used loss function is the 0-1 loss function
$L(\hat{y}, y) = I(\hat{y} \neq y)$
using Iverson bracket notation, i.e. it evaluates to 1 when $\hat{y} \neq y$, and 0 otherwise.
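A direct transcription of the definition (a minimal sketch of my own; the labels are arbitrary):

```python
def zero_one_loss(y_hat, y):
    """0-1 loss: 1 for a misclassification, 0 for a correct prediction."""
    return int(y_hat != y)

# The average 0-1 loss over a sample is simply the misclassification rate.
preds = ["cat", "dog", "cat", "cat"]
truth = ["cat", "dog", "dog", "cat"]
losses = [zero_one_loss(p, t) for p, t in zip(preds, truth)]
print(sum(losses) / len(losses))  # 0.25: one error in four predictions
```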
In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation. In other situations, the decision maker's preference must be elicited and represented by a scalar-valued function (also called a utility function) in a form suitable for optimization, a problem that Ragnar Frisch highlighted in his Nobel Prize lecture.[4] The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences.[5][6] In particular, Andranik Tangian showed that the most usable objective functions, quadratic and additive, are determined by a few indifference points. He used this property in models for constructing these objective functions from either ordinal or cardinal data that were elicited through computer-assisted interviews with decision makers.[7][8] Among other things, he constructed objective functions to optimally distribute budgets for 16 Westphalian universities[9] and the European subsidies for equalizing unemployment rates among 271 German regions.[10]
In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable $X$.
Both frequentist and Bayesian statistical theory involve making a decision based on the expected value of the loss function; however, this quantity is defined differently under the two paradigms.
We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to the probability distribution, $P_\theta$, of the observed data, $X$. This is also referred to as the risk function[11][12][13][14] of the decision rule $\delta$ and the parameter $\theta$. Here the decision rule depends on the outcome of $X$. The risk function is given by
$R(\theta, \delta) = \operatorname{E}_\theta[L(\theta, \delta(X))] = \int_X L(\theta, \delta(x))\,\mathrm{d}P_\theta(x).$
Here, $\theta$ is a fixed but possibly unknown state of nature, $X$ is a vector of observations stochastically drawn from a population, $\operatorname{E}_\theta$ is the expectation over all population values of $X$, $\mathrm{d}P_\theta$ is a probability measure over the event space of $X$ (parametrized by $\theta$), and the integral is evaluated over the entire support of $X$.
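To make the definition concrete, here is a hedged example of my own (not from the source): estimating a normal mean $\theta$ from $n$ draws with the decision rule $\delta(X) = \bar{X}$ under squared-error loss, the risk $R(\theta, \delta) = \operatorname{E}_\theta[(\bar{X} - \theta)^2]$ equals $\sigma^2/n$, which a Monte Carlo estimate reproduces:

```python
import random

def risk_of_sample_mean(theta, sigma, n, trials=20000, seed=1):
    """Monte Carlo estimate of R(theta, delta) = E_theta[(mean(X) - theta)^2]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xbar = sum(rng.gauss(theta, sigma) for _ in range(n)) / n
        total += (xbar - theta) ** 2  # squared-error loss of this decision
    return total / trials

est = risk_of_sample_mean(theta=3.0, sigma=2.0, n=10)
print(est)  # close to sigma^2 / n = 0.4
```

Note that the risk is a property of the decision rule as a whole: the expectation averages the loss over every data set the rule could face, for the fixed state of nature $\theta$.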
In a Bayesian approach, the expectation is calculated using the prior distribution $\pi^*$ of the parameter $\theta$:
where $m(x)$ is known as the predictive likelihood wherein $\theta$ has been "integrated out," $\pi^*(\theta \mid x)$ is the posterior distribution, and the order of integration has been changed. One then should choose the action $a^*$ which minimises this expected loss, which is referred to as the Bayes risk.
In the latter equation, the integrand inside $\mathrm{d}x$ is known as the posterior risk, and minimising it with respect to decision $a$ also minimises the overall Bayes risk. This optimal decision, $a^*$, is known as the Bayes (decision) rule: it minimises the average loss over all possible states of nature $\theta$, over all possible (probability-weighted) data outcomes. One advantage of the Bayesian approach is that one need only choose the optimal action under the actual observed data to obtain a uniformly optimal one, whereas choosing the actual frequentist optimal decision rule as a function of all possible observations is a much more difficult problem. Of equal importance, though, the Bayes rule reflects consideration of loss outcomes under different states of nature, $\theta$.
In economics, decision-making under uncertainty is often modelled using the von Neumann–Morgenstern utility function of the uncertain variable of interest, such as end-of-period wealth. Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized.
A decision rule makes a choice using an optimality criterion. Some commonly used criteria are:
Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances.[15]
A common example involves estimating "location". Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances.
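A quick numerical check of this claim (an illustrative sketch; the data values are arbitrary choices of mine):

```python
def expected_loss(data, estimate, loss):
    """Average loss of a candidate location estimate over a data set."""
    return sum(loss(x - estimate) for x in data) / len(data)

data = [1.0, 2.0, 2.0, 3.0, 10.0]      # note the outlier at 10
mean = sum(data) / len(data)            # 3.6
median = sorted(data)[len(data) // 2]   # 2.0

# Scan a grid of candidate estimates: the mean should win under squared
# loss, the median under absolute loss.
candidates = [i / 10 for i in range(0, 120)]
best_sq = min(candidates, key=lambda c: expected_loss(data, c, lambda a: a * a))
best_abs = min(candidates, key=lambda c: expected_loss(data, c, abs))
print(best_sq, best_abs)  # 3.6 and 2.0
```

The outlier drags the squared-loss minimizer (the mean) well away from the bulk of the data, while the absolute-loss minimizer (the median) stays put, anticipating the robustness discussion below.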
In economics, when an agent is risk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. For risk-averse or risk-loving agents, loss is measured as the negative of a utility function, and the objective function to be optimized is the expected value of utility.
Other measures of cost are possible, for example mortality or morbidity in the field of public health or safety engineering.
For most optimization algorithms, it is desirable to have a loss function that is globally continuous and differentiable.
Two very commonly used loss functions are the squared loss, $L(a) = a^2$, and the absolute loss, $L(a) = |a|$. However, the absolute loss has the disadvantage that it is not differentiable at $a = 0$. The squared loss has the disadvantage that it tends to be dominated by outliers: when summing over a set of $a$'s (as in $\sum_{i=1}^n L(a_i)$), the final sum tends to be the result of a few particularly large $a$-values, rather than an expression of the average $a$-value.
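The outlier-domination effect is easy to demonstrate (a small illustrative sketch of my own, not from the source):

```python
errors = [0.1] * 99 + [10.0]  # 99 small errors and a single large outlier

sq_total = sum(a * a for a in errors)   # total squared loss
abs_total = sum(abs(a) for a in errors)  # total absolute loss

# Share of each total contributed by the single outlier:
print(10.0**2 / sq_total)  # ~0.99: the squared sum is almost entirely the outlier
print(10.0 / abs_total)    # ~0.50: the absolute sum is far less sensitive
```

Under the squared loss one error in a hundred accounts for about 99% of the total, which is precisely why a least-squares fit can be pulled badly off course by a handful of bad points.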
The choice of a loss function is not arbitrary. It is very restrictive, and sometimes the loss function may be characterized by its desirable properties.[16] Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case of i.i.d. observations, the principle of complete information, and some others.
W. Edwards Deming and Nassim Nicholas Taleb argue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice: not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after cannot, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than classical smooth, continuous, symmetric, differentiable cases.[17]
|
https://en.wikipedia.org/wiki/Quadratic_loss_function
|
In public relations and politics, spin is a form of propaganda, achieved through knowingly providing a biased interpretation of an event. While traditional public relations and advertising may manage their presentation of facts, "spin" often implies the use of disingenuous, deceptive, and manipulative tactics.[1]
Because of the frequent association between spin and press conferences (especially government press conferences), the room in which these conferences take place is sometimes described as a "spin room".[2] Public relations advisors, pollsters and media consultants who develop deceptive or misleading messages may be referred to as "spin doctors" or "spinmeisters".
A standard tactic used in "spinning" is to reframe or modify the perception of an issue or event to reduce any negative impact it might have on public opinion. For example, a company whose top-selling product is found to have a significant safety problem may "reframe" the issue by criticizing the safety of its main competitor's products or by highlighting the risk associated with the entire product category. This might be done using a "catchy" slogan or sound bite that can help to persuade the public of the company's biased point of view. This tactic could enable the company to refocus the public's attention away from the negative aspects of its product.
Spinning is typically a service provided by paid media advisors and media consultants. The largest and most powerful companies may have in-house employees and sophisticated units with expertise in spinning issues. While spin is often considered to be a private-sector tactic, in the 1990s and 2000s some politicians and political staff were accused of using deceptive "spin" tactics to manipulate or deceive the public. Spin may include "burying" potentially negative new information by releasing it at the end of the workday on the last day before a long weekend; selectively cherry-picking quotes from previous speeches made by their employer or an opposing politician to give the impression that they advocate a certain position; or purposely leaking misinformation about an opposing politician or candidate that casts them in a negative light.[3]
Edward Bernays has been called the "Father of Public Relations". Bernays helped tobacco and alcohol companies make consumption of their products more socially acceptable, and he was proud of his work as a propagandist.[4] Throughout the 1990s, the use of spin by politicians and parties accelerated, especially in the United Kingdom; the emergence of 24-hour news increased the pressure on journalists to provide nonstop content, which was further intensified by the competitive nature of British broadcasters and newspapers, and content quality declined as 24-hour news outlets and political parties developed techniques for handling the increased demand.[5] This led to journalists relying more heavily on the public relations industry as a source for stories, and on advertising revenue as a profit source, making them more susceptible to spin.[6]
Spin in the United Kingdom began to break down with the high-profile resignations of the architects of spin within the New Labour government, with Charlie Whelan resigning as Gordon Brown's spokesman in 1999 and Alastair Campbell resigning as Tony Blair's Press Secretary in 2003.[3][7] As information technology has advanced since the end of the 20th century, commentators like Joe Trippi have advanced the theory that modern Internet activism spells the end for political spin, in that the Internet may reduce the effectiveness of spin by providing immediate counterpoints.[8]
Spin doctors can either command media attention or remain anonymous. Examples from the UK include Jamie Shea, during his time as NATO's press secretary throughout the Kosovo War, Charlie Whelan, and Alastair Campbell.[6]
Campbell, a journalist before becoming Tony Blair's Press Secretary, was the driving force behind a government that was able to produce the message it wanted in the media. He played a key role in important decisions, with advisors viewing him as a 'Deputy Prime Minister' inseparable from Blair.[9] Campbell describes how, during a meeting in July 1995, he was able to spin Rupert Murdoch into positively reporting an upcoming Blair speech, gathering support from The Sun and The Times, popular British newspapers.[10] Campbell later acknowledged that his and the government's spinning had contributed to the electorate's growing distrust of politicians, and he asserted that spin must cease.[11]
"Spin doctors" such as Shea praised and respected Campbell's work. In 1999, at the beginning of NATO's intervention in Kosovo, Shea's media strategy was non-existent before the arrival of Campbell and his team. Campbell taught Shea how to organise his team to deliver what he wanted to be in the media, which led to Shea being appreciated for his work by President Bill Clinton.[9]
Some spin techniques include:
For years, businesses have used fake or misleading customer testimonials, editing or "spinning" customers' statements to reflect a much more satisfied experience than was actually the case. In 2009, the Federal Trade Commission updated its rules to include measures prohibiting this type of "spinning", and has since been enforcing them.[14]
The extent of the impact of "spin doctors" is contested, though their presence is still recognized in the political environment. The 1997 general election in the United Kingdom saw a landslide victory for New Labour, with a 10.3% swing from Conservative to Labour, helped by newspapers such as The Sun, towards which Alastair Campbell focused his spinning tactics as he greatly valued their support.[15] The famous newspaper headline 'The Sun Backs Blair' was a key turning point in the campaign, which provided New Labour with a lot of confidence and hope of increased electoral support.[16] The change in political alignment had an impact on the electorate: according to a study by Ladd and Lenz, the share of readers of the switching newspapers voting for Labour rose by 19.4%, compared with only 10.8% among those who did not read the switching newspapers.[17]
|
https://en.wikipedia.org/wiki/Spin_(propaganda)
|
The causal sets program is an approach to quantum gravity. Its founding principles are that spacetime is fundamentally discrete (a collection of discrete spacetime points, called the elements of the causal set) and that spacetime events are related by a partial order. This partial order has the physical meaning of the causality relations between spacetime events.
For some decades after the formulation of general relativity, the attitude towards Lorentzian geometry was mostly dedicated to understanding its physical implications and not concerned with theoretical issues.[1] However, early attempts to use causality as a starting point were provided by Hermann Weyl and Hendrik Lorentz.[2] Alfred Robb, in two books in 1914 and 1936, suggested an axiomatic framework in which causal precedence played a critical role.[1] The first explicit proposal of quantising the causal structure of spacetime is attributed by Sumati Surya[1] to E. H. Kronheimer and Roger Penrose,[3] who invented causal spaces in order to "admit structures which can be very different from a manifold". Causal spaces are defined axiomatically, by considering not only causal precedence, but also chronological precedence.
The program of causal sets is based on a theorem[4] by David Malament, extending former results by Christopher Zeeman,[5] and by Stephen Hawking, A. R. King and P. J. McCarthy.[6][1] Malament's theorem states that if there is a bijective map between two past and future distinguishing spacetimes that preserves their causal structure, then the map is a conformal isomorphism. The conformal factor that is left undetermined is related to the volume of regions in the spacetime. This volume factor can be recovered by specifying a volume element for each spacetime point. The volume of a spacetime region could then be found by counting the number of points in that region.
The causal sets program was initiated by Rafael Sorkin, who continues to be its main proponent. He has coined the slogan "Order + Number = Geometry" to characterize the above argument. The program provides a theory in which spacetime is fundamentally discrete while retaining local Lorentz invariance.
A causal set (or causet) is a set $C$ with a partial order relation $\preceq$ that is reflexive, antisymmetric, transitive, and locally finite.
We write $x \prec y$ if $x \preceq y$ and $x \neq y$.
The set $C$ represents the set of spacetime events and the order relation $\preceq$ represents the causal relationship between events (see causal structure for the analogous idea in a Lorentzian manifold).
Although this definition uses the reflexive convention, we could have chosen the irreflexive convention, in which the order relation is irreflexive and asymmetric.
The causal relation of a Lorentzian manifold (without closed causal curves) satisfies the first three conditions. It is the local finiteness condition that introduces spacetime discreteness.
Given a causal set we may ask whether it can beembeddedinto aLorentzian manifold. An embedding would be a map taking elements of the causal set into points in the manifold such that the order relation of the causal set matches the causal ordering of the manifold. A further criterion is needed however before the embedding is suitable. If, on average, the number of causal set elements mapped into a region of the manifold is proportional to the volume of the region then the embedding is said to befaithful. In this case we can consider the causal set to be 'manifold-like'.
A central conjecture of the causal set program, called theHauptvermutung('fundamental conjecture'), is that the same causal set cannot be faithfully embedded into two spacetimes that are not similar on large scales.
It is difficult to define this conjecture precisely because it is difficult to decide when two spacetimes are 'similar on large scales'.
Modelling spacetime as a causal set would require us to restrict attention to those causal sets that are 'manifold-like'. Given a causal set, this is a difficult property to determine.
The difficulty of determining whether a causal set can be embedded into a manifold can be approached from the other direction. We can create a causal set by sprinkling points into a Lorentzian manifold. By sprinkling points in proportion to the volume of the spacetime regions and using the causal order relations in the manifold to induce order relations between the sprinkled points, we can produce a causal set that (by construction) can be faithfully embedded into the manifold.
To maintain Lorentz invariance this sprinkling of points must be done randomly using a Poisson process. Thus the probability of sprinkling n points into a region of volume V is
P(n) = (ρV)^n e^(−ρV) / n!
where ρ is the density of the sprinkling.
Sprinkling points as a regular lattice would not keep the number of points proportional to the region volume.
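As an illustration, here is a minimal Python sketch of a Poisson sprinkling into the unit causal diamond of 2D Minkowski space. It uses lightcone coordinates u, v ∈ [0, 1], in which causal precedence is simply coordinate-wise comparison; the density value in the usage line is arbitrary, and the function name is ours.

```python
import random

def sprinkle(rho, volume=1.0, rng=random):
    """Poisson-sprinkle points into the unit causal diamond of 2D Minkowski space.

    In lightcone coordinates u, v ∈ [0, 1], x causally precedes y exactly when
    u_x <= u_y and v_x <= v_y. The number of points is Poisson(rho * volume),
    keeping the expected count proportional to the region's volume."""
    # Sample N ~ Poisson(rho * volume) by counting unit-rate exponential arrivals
    n, t = 0, rng.expovariate(1.0)
    while t < rho * volume:
        n += 1
        t += rng.expovariate(1.0)
    points = [(rng.random(), rng.random()) for _ in range(n)]
    # Induce the causal order from the manifold's causal structure
    prec = {(p, q) for p in points for q in points
            if p != q and p[0] <= q[0] and p[1] <= q[1]}
    return points, prec

points, prec = sprinkle(rho=50.0, rng=random.Random(7))
```

By construction this causal set embeds faithfully into the diamond: the points were sampled in proportion to volume, and the order was read off the manifold.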
Some geometrical constructions in manifolds carry over to causal sets. When defining these we must rely only on the causal set itself, not on any background spacetime into which it might be embedded. For an overview of these constructions, see [7].
A link in a causal set is a pair of elements x, y ∈ C such that x ≺ y but with no z ∈ C such that x ≺ z ≺ y.
A chain is a sequence of elements x₀, x₁, …, xₙ such that xᵢ ≺ xᵢ₊₁ for i = 0, …, n − 1. The length of such a chain is n.
If every consecutive pair xᵢ, xᵢ₊₁ in the chain forms a link, then the chain is called a path.
We can use this to define the notion of a geodesic between two causal set elements, provided they are order comparable, that is, causally connected (physically, this means they are timelike separated). A geodesic between two elements x ⪯ y in C is a chain consisting only of links such that x₀ = x and xₙ = y, and the length of the chain, n, is maximal over all chains from x to y.
In general there can be more than one geodesic between two comparable elements.
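These constructions depend only on the order relation, so they are straightforward to compute on a finite causal set. A minimal sketch (the 'diamond' example is illustrative, and the function names are ours):

```python
from math import inf

def links(prec):
    """Links: related pairs x ≺ y with no element strictly between them."""
    elems = {e for pair in prec for e in pair}
    return {(x, y) for (x, y) in prec
            if not any((x, z) in prec and (z, y) in prec for z in elems)}

def geodesic_length(prec, x, y):
    """Length of a longest chain from x to y, where prec holds strict
    relations (a, b) meaning a ≺ b. Returns -inf if x and y are incomparable."""
    if x == y:
        return 0
    if (x, y) not in prec:
        return -inf
    memo = {}
    def longest(a):              # longest chain length from a down to y
        if a == y:
            return 0
        if a not in memo:
            succ = [b for (c, b) in prec
                    if c == a and (b == y or (b, y) in prec)]
            memo[a] = 1 + max(longest(b) for b in succ)
        return memo[a]
    return longest(x)

# The four-element 'diamond' 0 ≺ {1, 2} ≺ 3 (with 0 ≺ 3 by transitivity):
diamond = {(0, 1), (0, 2), (1, 3), (2, 3), (0, 3)}
print(links(diamond))                  # every relation except (0, 3)
print(geodesic_length(diamond, 0, 3))  # 2
```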
Myrheim[8] first suggested that the length of such a geodesic should be directly proportional to the proper time along a timelike geodesic joining the two spacetime points. Tests of this conjecture have been made using causal sets generated from sprinklings into flat spacetimes. The proportionality has been shown to hold and is conjectured to hold for sprinklings in curved spacetimes too.
Much work has been done in estimating the manifold dimension of a causal set. This involves algorithms that use the causal set to estimate the dimension of the manifold into which it can be faithfully embedded. The algorithms developed so far are based on finding the dimension of a Minkowski spacetime into which the causal set can be faithfully embedded.
One approach relies on estimating the number of k-length chains present in a sprinkling into d-dimensional Minkowski spacetime. Counting the number of k-length chains in the causal set then allows an estimate for d to be made.
Another approach relies on the relationship between the proper time between two points in Minkowski spacetime and the volume of the spacetime interval between them. By computing the maximal chain length (to estimate the proper time) between two points x and y, and counting the number of elements z such that x ≺ z ≺ y (to estimate the volume of the spacetime interval), the dimension of the spacetime can be calculated.
These estimators should give the correct dimension for causal sets generated by high-density sprinklings into d-dimensional Minkowski spacetime. Tests in conformally-flat spacetimes[9] have shown these two methods to be accurate.
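To make the volume/proper-time idea concrete, here is a rough sketch of a midpoint-scaling estimator: the interval is split at a 'midpoint' element, and since halving the proper time scales the volume by 2^(−d), the dimension can be read off from how the element count shrinks. This is a simplified illustration under our own conventions, not the estimators used in the literature.

```python
from math import log2

def interval(prec, x, y):
    """The causal interval I(x, y): x, y and everything strictly between them."""
    elems = {e for pair in prec for e in pair}
    return {z for z in elems if (x, z) in prec and (z, y) in prec} | {x, y}

def midpoint_dimension(prec, x, y):
    """Midpoint-scaling dimension estimate for the interval [x, y].

    Choose the 'midpoint' m maximising the smaller of |I(x, m)| and |I(m, y)|;
    since halving the proper time scales volume by 2^(-d), d ≈ log2(N / N_half)."""
    whole = interval(prec, x, y)
    n = len(whole)
    n_half = max(
        (min(len(interval(prec, x, m)), len(interval(prec, m, y)))
         for m in whole - {x, y}),
        default=0)
    if n_half == 0:
        return None  # interval too small to estimate
    return log2(n / n_half)

# A long chain behaves like a purely 1-dimensional (time-only) spacetime:
chain = {(i, j) for i in range(101) for j in range(101) if i < j}
print(round(midpoint_dimension(chain, 0, 100), 2))  # 0.99, close to d = 1
```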
An ongoing task is to develop the correct dynamics for causal sets. These would provide a set of rules that determine which causal sets correspond to physically realistic spacetimes. The most popular approach to developing causal set dynamics is based on the sum-over-histories version of quantum mechanics. This approach would perform a sum over causal sets by growing a causal set one element at a time. Elements would be added according to quantum mechanical rules, and interference would ensure that a large manifold-like spacetime dominates the contributions. The best model for dynamics at the moment is a classical model in which elements are added according to probabilities. This model, due to David Rideout and Rafael Sorkin, is known as classical sequential growth (CSG) dynamics.[10] The model generates causal sets by adding new elements one after another; rules for how new elements are added are specified and, depending on the parameters in the model, different causal sets result.
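The simplest special case of classical sequential growth is transitive percolation, in which each new element is placed above each existing element independently with a fixed probability and the order is then completed by transitivity. A minimal sketch (the parameter values are arbitrary):

```python
import random

def transitive_percolation(n, p, rng=None):
    """Grow an n-element causal set by transitive percolation.

    Each new element is linked to each existing element independently with
    probability p; the relation is then closed under transitivity. This is
    the simplest special case of classical sequential growth dynamics."""
    rng = rng or random.Random()
    prec = set()  # strict relations (a, b) meaning a ≺ b
    for new in range(n):
        ancestors = set()
        for old in range(new):
            if rng.random() < p:
                ancestors.add(old)
                # transitivity: everything below old is also below new
                ancestors |= {a for (a, b) in prec if b == old}
        prec |= {(a, new) for a in ancestors}
    return prec

causet = transitive_percolation(20, 0.3, random.Random(1))
# The grown relation is transitive by construction:
assert all((a, c) in causet
           for (a, b) in causet for (b2, c) in causet if b == b2)
```

With p = 1 the construction yields a total order (a single chain); with p = 0 it yields an antichain of unrelated elements.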
In analogy to the path integral formulation of quantum mechanics, one approach to developing a quantum dynamics for causal sets has been to apply an action principle in the sum-over-causal-sets approach. Sorkin has proposed a discrete analogue of the d'Alembertian, which can in turn be used to define the Ricci curvature scalar and thereby the Benincasa–Dowker action on a causal set.[11][12] Monte-Carlo simulations have provided evidence for a continuum phase in 2D using the Benincasa–Dowker action.[13]
|
https://en.wikipedia.org/wiki/Causal_sets
|
Michael James David Powell FRS FAA[2] (29 July 1936 – 19 April 2015) was a British mathematician who worked in the Department of Applied Mathematics and Theoretical Physics (DAMTP) at the University of Cambridge.[3][1][4][5][6]
Born in London, Powell was educated at Frensham Heights School and Eastbourne College.[2] He earned his Bachelor of Arts degree[when?] followed by a Doctor of Science (DSc) degree in 1979 at the University of Cambridge.[7]
Powell was known for his extensive work in numerical analysis, especially nonlinear optimisation and approximation. He was a founding member of the Institute of Mathematics and its Applications and a founding editor-in-chief of the IMA Journal of Numerical Analysis.[8] His mathematical contributions include quasi-Newton methods (particularly the Davidon–Fletcher–Powell formula and Powell's symmetric Broyden formula), the augmented Lagrangian function (also called the Powell–Rockafellar penalty function), the sequential quadratic programming method (also called the Wilson–Han–Powell method), trust region algorithms (Powell's dog leg method), the conjugate direction method (also called Powell's method), and radial basis functions.[citation needed] In his later years he worked on derivative-free optimization algorithms, producing COBYLA, UOBYQA, NEWUOA, BOBYQA, and LINCOA.[9] He was the author of numerous scientific papers[1] and of several books, most notably Approximation Theory and Methods.[10]
Powell won several awards, including the George B. Dantzig Prize from the Mathematical Programming Society/Society for Industrial and Applied Mathematics (SIAM) and the Naylor Prize from the London Mathematical Society.[when?] Powell was elected a Foreign Associate of the National Academy of Sciences of the United States in 2001 and a corresponding fellow of the Australian Academy of Science in 2007.[7][11][12][13]
|
https://en.wikipedia.org/wiki/LINCOA
|
Vote counting is the process of counting votes in an election. It can be done manually or by machines. In the United States, the compilation of election returns and validation of the outcome that forms the basis of the official results is called canvassing.[1]
Counts are simplest in elections where just one choice is on the ballot, and these are often counted manually. In elections where many choices are on the same ballot, counts are often done by computers to give quick results. Tallies done at distant locations must be carried or transmitted accurately to the central election office.
Manual counts are usually accurate to within one percent. Computers are at least that accurate, except when they have undiscovered bugs, broken sensors scanning the ballots, paper misfeeds, or hacks. Officials keep election computers off the internet to minimize hacking, but the manufacturers are on the internet, and they and their annual updates are still subject to hacking, like any computers. Further, voting machines sit in public locations on election day, and often the night before, so they are vulnerable.
Paper ballots and computer files of results are stored until they are tallied, so they need secure storage, which is difficult. The election computers themselves are stored for years and only briefly tested before each election.
Despite the challenges to the integrity of the U.S. voting process in recent years, including multiple claims by Republican Party members of error or voter fraud in 2020 and 2021, robust examination of the voting process in multiple U.S. states, including Arizona[2] (where claims were most strenuous), found no basis in truth for those claims. The absence of error and fraud is partially attributable to the inherent checks and balances in the voting process itself, which are, as with democracy, built into the system to reduce their likelihood.
Manual counting, also known as hand-counting, requires a physical ballot that represents voter intent. The physical ballots are taken out of ballot boxes and/or envelopes, read and interpreted; then results are tallied.[3] Manual counting may be used for election audits and recounts in areas where automated counting systems are used.[4]
One method of manual counting is to sort ballots in piles by candidate, and count the number of ballots in each pile. If there is more than one contest on the same sheet of paper, the sorting and counting are repeated for each contest.[5]This method has been used in Burkina Faso, Russia, Sweden, United States (Minnesota), and Zimbabwe.[6]
A variant is to read aloud the choice on each ballot while putting it into its pile, so observers can tally initially, and check by counting the piles. This method has been used in Ghana, Indonesia, and Mozambique.[6] These first two methods do not preserve the original order of the ballots, which can interfere with matching them to tallies or digital images taken earlier.
Another approach is for one official to read all the votes on a ballot aloud, to one or more other staff, who tally the counts for each candidate. The reader and talliers read and tally all contests before going on to the next ballot.[4] A variant is to project the ballots where multiple people can see them to tally.[7][8]
Another approach is for three or more people to look at and tally ballots independently; if a majority (Arizona[9]) or all (Germany[10]) agree on their tallies after a certain number of ballots, that result is accepted; otherwise they re-tally.
A variant of all approaches is to scan all the ballots and release a file of the images, so anyone can count them. Parties and citizens can count these images by hand or by software. The file gives them evidence to resolve discrepancies.[11][12] The fact that different parties and citizens count with independent systems protects against errors from bugs and hacks. A checksum for the file identifies true copies.[13] Election machines which scan ballots typically create such image files automatically,[14] though those images can be subject to bugs or tampering if the election machine is hacked or has bugs. Independent scanners can also create image files. Copies of ballots are known to be available for release in many parts of the United States.[15][16][17] The press obtained copies of many ballots in the 2000 Presidential election in Florida to recount after the Supreme Court halted official recounts.[18] Different methods resulted in different winners.
The tallying may be done at night at the end of the last day of voting, as in Britain,[19] Canada,[20] France,[21] Germany,[22] and Spain,[23] or the next day,[6] or 1–2 weeks later in the US, after provisional ballots have been adjudicated.[24]
If counting is not done immediately, or if courts accept challenges which can require re-examination of ballots, the ballots need to be securely stored, which is problematic.
Australian federal elections count ballots at least twice: at the polling place and, starting Monday night after election day, at counting centres.[25][26]
Hand counting has been found to be slower and more prone to error than other counting methods.[27]
Repeated tests have found that the tedious and repetitive nature of hand counting leads to a loss of focus and accuracy over time. A 2023 test in Mohave County, Arizona used 850 ballots, averaging 36 contests each, that had been machine-counted many times. The hand count used seven experienced poll workers: one reader with two watchers, and two talliers with two watchers.
The results included 46 errors not noticed by the counting team, including:
Similar tallying errors were reported in Indiana and Texas election hand counts. Errors were 3% to 27% for various candidates in a 2016 Indiana race, because the tally sheet labels misled officials into over-counting groups of five tally marks, and officials sometimes omitted absentee ballots or double-counted ballots.[29]12 of 13 precincts in the 2024 Republican primary in Gillespie County, TX, were added or written down wrong after a hand count, including two precincts with seven contests wrong and one with six contests wrong.[30]While the Texas errors were caught and corrected before results were finalized, the Indiana errors were not.
Average errors in hand-counted candidate tallies in New Hampshire towns were 2.5% in 2002, including one town with errors up to 20%. Omitting that town cut the average error to 0.87%. Only the net result for each candidate in each town could be measured, by assuming the careful manual recount was fully accurate. Total error can be higher if there were countervailing errors hidden in the net result, but net error in the overall electorate is what determines winners.[31] Connecticut towns in 2007 to 2013 had similar errors up to 2%.[32]
In candidate tallies for precincts in Wisconsin recounted by hand in 2011 and 2016, the average net discrepancy was 0.28% in 2011 and 0.18% in 2016.[33]
India hand-tallies paper records from a 1.5% sample of election machines before releasing results. For each voter, the machine prints the selected candidate on a slip of paper, displays it to the voter, then drops the slip into a box. In the April–May 2019 elections for the lower house of Parliament, the Lok Sabha, the Election Commission hand-tallied the slips of paper from 20,675 voting machines (out of 1,350,000 machines)[34] and found discrepancies for 8 machines, usually of four votes or less.[35] Most machines tally over 16 candidates,[36] and the Commission did not report how many of these candidate tallies were discrepant. It formed investigation teams to report within ten days, was still investigating in November 2019, and had issued no report as of June 2021.[35][37] Hand tallies before and after 2019 matched machine counts perfectly.[35]
An experiment with multiple types of ballots counted by multiple teams found average errors of 0.5% in candidate tallies when one person, watched by another, read to two people tallying independently. Almost all these errors were overcounts. The same ballots had errors of 2.1% in candidate tallies from sort and stack. These errors were equally divided between undercounts and overcounts of the candidates. Optical scan ballots, which were tallied by both methods, averaged 1.87% errors, equally divided between undercounts and overcounts. Since it was an experiment, the true numbers were known. Participants thought that having the candidate names printed in larger type and bolder than the office and party would make hand tallies faster and more accurate.[38]
Intentional errors in hand-tallying election results constitute fraud. Close review by observers, if allowed, may detect fraud, though the observers may or may not be believed.[39] If only one person sees each ballot and reads off its choice, there is no check on that person's mistakes. In the US, only Massachusetts and the District of Columbia give anyone but officials a legal right to see ballot marks during hand counting.[40] If fraud is detected and proven, penalties may be light or delayed. US prosecution policy since the 1980s has been to let fraudulent winners take office and keep office, usually for years, until convicted,[41][42] and to impose sentencing level 8–14,[43] which earns less than two years of prison.[44]
In 1934, after the United States had been hand-counting ballots for over 150 years, the problems were described in a report by Joseph P. Harris, who 20 years later invented a punched card voting machine:[45]
"Recounts in Chicago and Philadelphia have indicated such wide variations that apparently the precinct officers did not take the trouble to count the ballots at all... While many election boards pride themselves upon their ability to conduct the count rapidly and accurately, as a general rule the count is conducted poorly and slowly... precinct officers conduct the count with practically no supervision whatever... It is impossible to fix the responsibility for errors or frauds... Not infrequently there is a mixup with the ballots and some uncertainty as to which have been counted and which have not... The central count was used some years ago in San Francisco... experience indicated that there is considerable confusion at the central counting place... and that the results are not more accurate than those obtained from the count by the precinct officer."[46]
Data in the table are comparable, because average error in candidate tallies as percent of candidate tallies, weighted by number of votes for each candidate (in NH) is mathematically the same as the sum of absolute values of errors in each candidate's tally, as percent of all ballots (in other studies).
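The equivalence of the two metrics can be checked with a small sketch (the error and vote numbers are hypothetical):

```python
def weighted_avg_error(errors, votes):
    """NH-style metric: per-candidate |error|/votes, weighted by vote count."""
    total = sum(votes)
    return sum(v * (abs(e) / v) for e, v in zip(errors, votes)) / total

def sum_abs_error_rate(errors, votes):
    """Other-study metric: sum of |errors| as a fraction of all votes cast."""
    return sum(abs(e) for e in errors) / sum(votes)

errors = [3, -2, 5]        # hypothetical tally errors per candidate
votes = [400, 350, 250]    # hypothetical true tallies (1000 votes total)
print(weighted_avg_error(errors, votes))   # 0.01
print(sum_abs_error_rate(errors, votes))   # 0.01
```

The weights cancel the denominators, so both expressions reduce to Σ|error| / Σvotes, which is why entries computed either way can sit in the same table.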
Cost depends on pay levels and staff time needed, recognizing that staff generally work in teams of two to four (one to read, one to watch, and one or two to record votes). Teams of four, with two to read and two to record, are more secure[38][51] and would increase costs. Three to record might more quickly resolve discrepancies, if 2 of the 3 agree.
Typical times in the table below range from a tenth to a quarter of a minute per vote tallied, so 24-60 ballots per hour per team, if there are 10 votes per ballot.
One experiment with identical ballots of various types and multiple teams found that sorting ballots into stacks took longer and had more errors than two people reading to two talliers.[38]
Mechanical voting machines have voters selecting switches (levers),[65][66] pushing plastic chips through holes, or pushing mechanical buttons which increment a mechanical counter (sometimes called the odometer) for the appropriate candidate.[3]
There is no record of individual votes to check.
Tampering with the gears or initial settings can change counts, or gears can stick when a small object is caught in them, so they fail to count some votes.[67] When not maintained well, the counters can stick and stop counting additional votes; staff may or may not choose to fix the problem.[68] Also, election staff can read the final results wrong off the back of the machine.
Electronic machines for elections are being procured around the world, often with donor money. In places with honest independent election commissions, machines can add efficiency, though not usually transparency. Where the election commission is weaker, expensive machines can be fetishized, waste money on kickbacks and divert attention, time and resources from harmful practices, as well as reducing transparency.[69]
An Estonian study compared the staff, computer, and other costs of different ways of voting to the numbers of voters, and found highest costs per vote were in lightly used, heavily staffed early in-person voting. Lowest costs per vote were in internet voting and in-person voting on election day at local polling places, because of the large numbers of voters served by modest staffs. For internet voting they do not break down the costs. They show steps to decrypt internet votes and imply but do not say they are hand-counted.[70]
In an optical scan voting system, or marksense system, each voter's choices are marked on one or more pieces of paper, which then go through a scanner. The scanner creates an electronic image of each ballot, interprets it, creates a tally for each candidate, and usually stores the image for later review.
The voter may mark the paper directly, usually in a specific location for each candidate, either by filling in an oval or by using a patterned stamp that can be easily detected by OCR software.
Or the voter may pick one pre-marked ballot among many, each with its own barcode or QR code corresponding to a candidate.
Or the voter may select choices on an electronic screen, which then prints the chosen names, usually with a bar code or QR code summarizing all choices, on a sheet of paper to put in the scanner.[71] This screen-and-printer combination is called an electronic ballot marker (EBM) or ballot marking device (BMD), and voters with disabilities can communicate with it by headphones, large buttons, sip-and-puff devices, or paddles if they cannot interact with the screen or paper directly. Typically the ballot marking device does not store or tally votes. The paper it prints is the official ballot, put into a scanning system which counts the barcodes, or the printed names can be hand-counted as a check on the machines.[72] Most voters do not look at the paper to ensure it reflects their choices, and when there is a mistake, an experiment found that 81% of registered voters do not report errors to poll workers.[73]
Two companies, Hart and Clear Ballot, have scanners which count the printed names, which voters had a chance to check, rather than bar codes and QR codes, which voters are unable to check.[74]
The machines are faster than hand-counting, so are typically used the night after the election, to give quick results. The paper ballots and electronic memories still need to be stored, to check that the images are correct, and to be available for court challenges.
Scanners have a row of photo-sensors which the paper passes by, and they record light and dark pixels from the ballot. A black streak results when a scratch or paper dust causes a sensor to record black continuously.[75][76] A white streak can result when a sensor fails.[77] In the right place, such lines can indicate a vote for every candidate or no votes for anyone. Some offices blow compressed air over the scanners after every 200 ballots to remove dust.[78] Fold lines in the wrong places can also count as votes.[79]
Software can miscount; if it miscounts drastically enough, people notice and check. Staff rarely can say who caused an error, so they do not know whether it was accidental or a hack. Errors from 2002 to 2008 were listed and analyzed by the Brennan Center in 2010.[80] There have been numerous examples before and since.
Researchers find security flaws in all election computers, which let voters, staff members or outsiders disrupt or change results, often without detection.[85] Security reviews and audits are discussed in Electronic voting in the United States § Security reviews.
When a ballot marking device prints a bar code or QR code along with candidate names, the candidates are represented in the bar code or QR code as numbers, and the scanner counts those codes, not the names. If a bug or hack makes the numbering system in the ballot marking device misaligned with the numbering system in the scanner, votes will be tallied for the wrong candidates.[74] This numbering mismatch has appeared with direct recording electronic machines (below).[86]
Some US states check a small number of places by hand-counting or by use of machines independent of the original election machines.[40]
Recreated ballots are paper[87] or electronic[88] ballots created by election staff when originals cannot be counted for some reason. They usually apply to optical scan elections, not hand-counting. Reasons include tears, water damage and folds which prevent feeding through scanners. Reasons also include voters selecting candidates by circling them or other marks, when machines are only programmed to tally specific marks in front of the candidate's name.[89] As many as 8% of ballots in an election may be recreated.[88]
Recreating ballots is sometimes called reconstructing ballots,[87] ballot replication, ballot remaking or ballot transcription.[90] The term "duplicate ballot" sometimes refers to these recreated ballots,[91] and sometimes to extra ballots erroneously given to or received from a voter.[92]
Recreating can be done manually, or by scanners with manual review.[93]
Because of its potential for fraud, recreation of ballots is usually done by teams of two people working together[94] or closely observed by bipartisan teams.[87] The security of a team process can be undermined by having one person read to the other, so that only one looks at the original votes and one looks at the recreated votes, or by having the team members appointed by a single official.[95]
When auditing an election, audits need to be done with the original ballots, not the recreated ones.
List prices of optical scanners in the US in 2002–2019, ranged from $5,000 to $111,000 per machine, depending primarily on speed. List prices add up to $1 to $4 initial cost per registered voter. Discounts vary, based on negotiations for each buyer, not on number of machines purchased. Annual fees often cost 5% or more per year, and sometimes over 10%. Fees for training and managing the equipment during elections are additional. Some jurisdictions lease the machines so their budgets can stay relatively constant from year to year. Researchers say that the steady flow of income from past sales, combined with barriers to entry, reduces the incentive for vendors to improve voting technology.[96]
If most voters mark their own paper ballots and one marking device is available at each polling place for voters with disabilities, Georgia's total cost of machines and maintenance for 10 years, starting 2020, has been estimated at $12 per voter ($84 million total). Pre-printed ballots for voters to mark would cost $4 to $20 per voter ($113 million to $224 million total machines, maintenance and printing). The low estimate includes $0.40 to print each ballot, and more than enough ballots for historic turnout levels. The high estimate includes $0.55 to print each ballot, and enough ballots for every registered voter, including three ballots (of different parties) for each registered voter in primary elections with historically low turnout.[97][98] The estimate is $29 per voter ($203 million total) if all voters use ballot marking devices, including $0.10 per ballot for paper.
The capital cost of machines in 2019 in Pennsylvania is $11 per voter if most voters mark their own paper ballots and a marking device is available at each polling place for voters with disabilities, compared to $23 per voter if all voters use ballot marking devices.[99] This cost does not include printing ballots.
New York has an undated comparison of capital costs, in which a system where all voters use ballot marking devices costs over twice as much as a system where most do not. The authors say extra machine maintenance would exacerbate that difference, and printing cost would be comparable in both approaches.[100] Their assumption of equal printing costs differs from the Georgia estimates of $0.40 or $0.50 to print a ballot in advance, and $0.10 to print it in a ballot marking device.[97]
A touch screen displays choices to the voter, who selects choices, and can change their mind as often as needed, before casting the vote. Staff initialize each voter once on the machine, to avoid repeat voting. Voting data and ballot images are recorded in memory components, and can be copied out at the end of the election.
The system may also provide a means for communicating with a central location for reporting results and receiving updates,[101] which is an access point for hacks and bugs to arrive.
Some of these machines also print names of chosen candidates on paper for the voter to verify. These names on paper can be used for election audits and recounts if needed. The tally of the voting data is stored in a removable memory component and in bar codes on the paper tape. The paper tape is called a voter-verified paper audit trail (VVPAT). The VVPATs can be counted at 20–43 seconds of staff time per vote (not per ballot).[102][60]
For machines without VVPAT, there is no record of individual votes to check.
This approach can have software errors. It does not include scanners, so there are no scanner errors. When there is no paper record, it is hard to notice or research most errors.
Election officials or optical scanners decide if a ballot is valid before tallying it. Reasons why it might not be valid include: more choices selected than allowed; incorrect voter signature or details on ballots received by mail, if allowed; lack of poll worker signatures, if required; forged ballot (wrong paper, printing or security features); stray marks which could identify who cast the ballot (to earn payments); and blank ballots, though these may be counted separately as abstentions.[6]
For paper ballots, officials decide if the voter's intent is clear, since voters may mark lightly, or circle their choice, instead of marking as instructed. The ballot may be visible to observers to ensure agreement, by webcam or passing around a table,[6] or the process may be private. In the US only Massachusetts and the District of Columbia give anyone but officials a legal right to see ballot marks during hand counting.[40] For optical scans, the software has rules to interpret voter intent, based on the darkness of marks.[77] Software may ignore circles around a candidate name, and paper dust or broken sensors can cause marks to appear or disappear where the voter did not intend.
Officials also check if the number of voters checked in at the polling place matches the number of ballots voted, and that the votes plus remaining unused ballots matches the number of ballots sent to the polling place. If not, they look for the extra ballots, and may report discrepancies.[6]
If ballots or other paper or electronic records of an election may be needed for counting or court review after a period of time, they need to be stored securely.
Election storage often uses tamper-evident seals,[112][113] although seals can typically be removed and reapplied without damage, especially in the first 48 hours.[114] Photos taken when the seal is applied can be compared to photos taken when the seal is opened.[115] Detecting subtle tampering requires substantial training.[114][116][117] Election officials usually take too little time to examine seals, and observers are too far away to check seal numbers, though they could compare old and new photos projected on a screen. If seal numbers and photos are kept for later comparison, these numbers and photos need their own secure storage. Seals can also be forged. Seals and locks can be cut, so observers cannot trust the storage. If the storage is breached, election results cannot be checked and corrected.
Experienced testers can usually bypass all physical security systems.[118] Locks[119] and cameras[120] are vulnerable before and after delivery.[118] Guards can be bribed or blackmailed. Insider threats[121][122] and the difficulty of following all security procedures are usually under-appreciated, and most organizations do not want to learn their vulnerabilities.[118]
Security recommendations include preventing access by anyone alone,[123] which would typically require two hard-to-pick locks, with keys held by independent officials if such officials exist in the jurisdiction; having storage risks identified by people other than those who design or manage the system; and using background checks on staff.[112]
No US state has adequate laws on physical security of the ballots.[124]
Starting the tally soon after voting ends makes it feasible for independent parties to guard storage sites.[125]
The ballots can be carried securely to a central station for central tallying, or they can be tallied at each polling place, manually or by machine, and the results sent securely to the central elections office. Transport is often accompanied by representatives of different parties to ensure honest delivery. Colorado transmits voting records by internet from counties to the Secretary of State, with hash values also sent by internet to try to identify accurate transmissions.[126]
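The source does not specify Colorado's exact procedure, but the general idea of verifying a transmitted results file against a separately reported hash can be sketched as follows (note that a digest sent over the same channel detects accidental corruption, not deliberate tampering by someone who can alter both file and digest):

```python
import hashlib

def file_sha256(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def transmission_intact(path, reported_digest):
    """Receiver recomputes the digest and compares it to the reported one."""
    return file_sha256(path) == reported_digest
```

Any mismatch indicates the file received is not byte-for-byte the file that was sent.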
Postal voting is common worldwide, though France stopped it in the 1970s because of concerns about ballot security. Voters who receive a ballot at home may also hand-deliver it or have someone else deliver it. The voter may be forced or paid to vote a certain way,[39] or ballots may be changed or lost during the delivery process,[127][128] or delayed so they arrive too late to be counted or for signature mis-matches to be resolved.[129][130]
Postal voting lowered turnout in California by 3%.[131] It raised turnout in Oregon only in Presidential election years, by 4%, turning occasional voters into regular voters without bringing in new voters.[132] Election offices do not mail to people who have not voted recently, and letter carriers do not deliver to recent movers they do not know, omitting mobile populations.[133]
Some jurisdictions let ballots be sent to the election office by email, fax, internet or app.[134] Email and fax are highly insecure.[135] Internet voting has so far also been insecure, including in Switzerland,[136] Australia,[137] and Estonia.[138] Apps try to verify that the correct voter is using the app by name, date of birth and signature,[139] which are widely available for most voters and so can be faked; or by name, ID and video selfie, which can be faked by loading a pre-recorded video.[140] Apps have been particularly criticized for operating on insecure phones and for claiming more security during transmission than they have.[141][142][140]
https://en.wikipedia.org/wiki/Vote_counting_system
In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem then the dual is a maximization problem (and vice versa). Any feasible solution to the primal (minimization) problem is at least as large as any feasible solution to the dual (maximization) problem. Therefore, the solution to the primal is an upper bound to the solution of the dual, and the solution of the dual is a lower bound to the solution of the primal.[1] This fact is called weak duality.
In general, the optimal values of the primal and dual problems need not be equal. Their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition. This fact is called strong duality.
Usually the term "dual problem" refers to the Lagrangian dual problem, but other dual problems are used – for example, the Wolfe dual problem and the Fenchel dual problem. The Lagrangian dual problem is obtained by forming the Lagrangian of a minimization problem by using nonnegative Lagrange multipliers to add the constraints to the objective function, and then solving for the primal variable values that minimize the original objective function. This solution gives the primal variables as functions of the Lagrange multipliers, which are called dual variables, so that the new problem is to maximize the objective function with respect to the dual variables under the derived constraints on the dual variables (including at least the nonnegativity constraints).
In general, given two dual pairs of separated locally convex spaces $(X, X^*)$ and $(Y, Y^*)$ and the function $f : X \to \mathbb{R} \cup \{+\infty\}$, we can define the primal problem as finding $\hat{x}$ such that $f(\hat{x}) = \inf_{x \in X} f(x)$. In other words, if $\hat{x}$ exists, $f(\hat{x})$ is the minimum of the function $f$, and the infimum (greatest lower bound) of the function is attained.
If there are constraint conditions, these can be built into the function $f$ by letting $\tilde{f} = f + I_{\mathrm{constraints}}$, where $I_{\mathrm{constraints}}$ is a suitable function on $X$ that has a minimum 0 on the constraints, and for which one can prove that $\inf_{x \in X} \tilde{f}(x) = \inf_{x\ \mathrm{constrained}} f(x)$. The latter condition is trivially, but not always conveniently, satisfied for the characteristic function (i.e. $I_{\mathrm{constraints}}(x) = 0$ for $x$ satisfying the constraints and $I_{\mathrm{constraints}}(x) = \infty$ otherwise). Then extend $\tilde{f}$ to a perturbation function $F : X \times Y \to \mathbb{R} \cup \{+\infty\}$ such that $F(x, 0) = \tilde{f}(x)$.[2]
The duality gap is the difference of the right and left hand sides of the inequality

$$\sup_{y^* \in Y^*} -F^*(0, y^*) \leq \inf_{x \in X} F(x, 0),$$

where $F^*$ is the convex conjugate in both variables and $\sup$ denotes the supremum (least upper bound).[2][3][4]
The duality gap is the difference between the values of any primal solutions and any dual solutions. If $d^*$ is the optimal dual value and $p^*$ is the optimal primal value, then the duality gap is equal to $p^* - d^*$. This value is always greater than or equal to 0 (for minimization problems). The duality gap is zero if and only if strong duality holds. Otherwise the gap is strictly positive and weak duality holds.[5]
In computational optimization, another "duality gap" is often reported: the difference in value between any dual solution and the value of a feasible but suboptimal iterate for the primal problem. This alternative "duality gap" quantifies the discrepancy between the value of a current feasible but suboptimal iterate for the primal problem and the value of the dual problem; under regularity conditions, the value of the dual problem equals the value of the convex relaxation of the primal problem. The convex relaxation is the problem arising from replacing a non-convex feasible set with its closed convex hull and replacing a non-convex function with its convex closure, that is, the function whose epigraph is the closed convex hull of the epigraph of the original primal objective function.[6][7][8][9][10][11][12][13][14][15][16]
Linear programming problems are optimization problems in which the objective function and the constraints are all linear. In the primal problem, the objective function is a linear combination of n variables. There are m constraints, each of which places an upper bound on a linear combination of the n variables. The goal is to maximize the value of the objective function subject to the constraints. A solution is a vector (a list) of n values that achieves the maximum value for the objective function.
In the dual problem, the objective function is a linear combination of the m values that are the limits in the m constraints from the primal problem. There are n dual constraints, each of which places a lower bound on a linear combination of m dual variables.
In the linear case, in the primal problem, from each sub-optimal point that satisfies all the constraints, there is a direction or subspace of directions to move that increases the objective function. Moving in any such direction is said to remove slack between the candidate solution and one or more constraints. An infeasible value of the candidate solution is one that exceeds one or more of the constraints.
In the dual problem, the dual vector multiplies the constraints that determine the positions of the constraints in the primal. Varying the dual vector in the dual problem is equivalent to revising the upper bounds in the primal problem. The lowest upper bound is sought. That is, the dual vector is minimized in order to remove slack between the candidate positions of the constraints and the actual optimum. An infeasible value of the dual vector is one that is too low. It sets the candidate positions of one or more of the constraints in a position that excludes the actual optimum.
This intuition is made formal by the equations in Linear programming: Duality.
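As a concrete illustration, a small primal LP and its dual can be solved side by side and their optimal values compared; for linear programs, strong duality makes the two values coincide. The numbers below are invented for the example, and `scipy` is assumed to be available:

```python
# A sketch, not from the text: primal LP and its dual solved with scipy.
import numpy as np
from scipy.optimize import linprog

# Primal: maximize 3*x1 + 5*x2
#         subject to x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x >= 0
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
c = np.array([3.0, 5.0])
# linprog minimizes, so negate the objective to maximize.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual: minimize b^T y subject to A^T y >= c, y >= 0
# (linprog uses <=, so negate both sides of the constraint).
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)

p_star = -primal.fun   # optimal primal value (undo the sign flip)
d_star = dual.fun      # optimal dual value
print(p_star, d_star)  # equal for LPs: zero duality gap
```

Each dual variable is the "price" of tightening the corresponding primal constraint by one unit, which is why the dual objective is built from the primal constraint limits.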
In nonlinear programming, the constraints are not necessarily linear. Nonetheless, many of the same principles apply.
To ensure that the global maximum of a non-linear problem can be identified easily, the problem formulation often requires that the functions be convex and have compact lower level sets. This is the significance of the Karush–Kuhn–Tucker conditions. They provide necessary conditions for identifying local optima of non-linear programming problems. There are additional conditions (constraint qualifications) that are necessary so that it will be possible to define the direction to an optimal solution. An optimal solution is one that is a local optimum, but possibly not a global optimum.
Motivation[17]
Suppose we want to solve the following nonlinear programming problem:

$$\begin{aligned} \text{minimize}\quad & f_0(x) \\ \text{subject to}\quad & f_i(x) \leq 0, \quad i \in \{1, \ldots, m\}. \end{aligned}$$

The problem has constraints; we would like to convert it to a program without constraints. Theoretically, it is possible to do so by minimizing the function $J(x)$, defined as

$$J(x) = f_0(x) + \sum_i I[f_i(x)],$$

where $I$ is an infinite step function: $I[u] = 0$ if $u \leq 0$, and $I[u] = \infty$ otherwise. But $J(x)$ is hard to minimize, as it is not continuous. It is possible to "approximate" $I[u]$ by $\lambda u$, where $\lambda$ is a positive constant. This yields a function known as the Lagrangian:

$$L(x, \lambda) = f_0(x) + \sum_i \lambda_i f_i(x).$$

Note that, for every $x$,

$$\max_{\lambda \geq 0} L(x, \lambda) = J(x).$$
Proof: If $x$ satisfies all the constraints, i.e. $f_i(x) \leq 0$ for every $i$, then each term $\lambda_i f_i(x)$ is maximized by taking $\lambda_i = 0$, so $\max_{\lambda \geq 0} L(x, \lambda) = f_0(x) = J(x)$. If $x$ violates some constraint, i.e. $f_i(x) > 0$ for some $i$, then $\lambda_i f_i(x)$ can be made arbitrarily large, so $\max_{\lambda \geq 0} L(x, \lambda) = \infty = J(x)$.
Therefore, the original problem is equivalent to:
$$\min_x \max_{\lambda \geq 0} L(x, \lambda).$$
By reversing the order of min and max, we get:
$$\max_{\lambda \geq 0} \min_x L(x, \lambda).$$
The dual function is the inner problem in the above formula:
$$g(\lambda) := \min_x L(x, \lambda).$$
The Lagrangian dual program is the program of maximizing $g$:
$$\max_{\lambda \geq 0} g(\lambda).$$
The optimal solution to the dual program is a lower bound for the optimal solution of the original (primal) program; this is the weak duality principle.
If the primal problem is convex and bounded from below, and there exists a point in which all nonlinear constraints are strictly satisfied (Slater's condition), then the optimal solution to the dual program equals the optimal solution of the primal program; this is the strong duality principle. In this case, we can solve the primal program by finding an optimal solution $\lambda^*$ to the dual program, and then solving:
$$\min_x L(x, \lambda^*).$$
Note that, to use either the weak or the strong duality principle, we need a way to compute $g(\lambda)$. In general this may be hard, as we need to solve a different minimization problem for every $\lambda$. But for some classes of functions, it is possible to get an explicit formula for $g(\lambda)$. Solving the primal and dual programs together is often easier than solving only one of them. Examples are linear programming and quadratic programming. A better and more general approach to duality is provided by Fenchel's duality theorem.[18]: Sub.3.3.1
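A minimal worked example (the problem is invented for illustration): for minimizing $f_0(x) = x^2$ subject to $f_1(x) = 1 - x \leq 0$, the inner minimization of the Lagrangian has a closed form, and maximizing the resulting $g(\lambda)$ recovers the primal optimum $p^* = 1$, as strong duality predicts for this convex problem satisfying Slater's condition:

```python
# Toy problem: minimize x^2 subject to 1 - x <= 0; primal optimum p* = 1 at x = 1.

def g(lam):
    # L(x, lam) = x^2 + lam * (1 - x); the inner minimum over x is at x = lam/2,
    # giving the closed-form dual function g(lam) = lam - lam^2 / 4.
    return lam - lam ** 2 / 4.0

# Maximize the concave dual function over lam >= 0 on a grid.
lams = [i / 1000.0 for i in range(0, 5001)]
lam_star = max(lams, key=g)
d_star = g(lam_star)
p_star = 1.0
print(lam_star, d_star)  # lam* = 2.0, d* = 1.0 = p*: zero duality gap
```

Recovering the primal solution from $\lambda^* = 2$ via $x = \lambda^*/2$ gives $x^* = 1$, matching the direct solution of the constrained problem.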
Another condition in which the min-max and max-min are equal is when the Lagrangian has a saddle point: $(x^*, \lambda^*)$ is a saddle point of the Lagrange function $L$ if and only if $x^*$ is an optimal solution to the primal, $\lambda^*$ is an optimal solution to the dual, and the optimal values in the indicated problems are equal to each other.[18]: Prop.3.2.2
Given a nonlinear programming problem in standard form

$$\begin{aligned} \text{minimize}\quad & f_0(x) \\ \text{subject to}\quad & f_i(x) \leq 0, \quad i \in \{1, \ldots, m\} \\ & h_i(x) = 0, \quad i \in \{1, \ldots, p\} \end{aligned}$$

with the domain $\mathcal{D} \subset \mathbb{R}^n$ having non-empty interior, the Lagrangian function $\mathcal{L} : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R}$ is defined as

$$\mathcal{L}(x, \lambda, \nu) = f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x).$$

The vectors $\lambda$ and $\nu$ are called the dual variables or Lagrange multiplier vectors associated with the problem. The Lagrange dual function $g : \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R}$ is defined as

$$g(\lambda, \nu) = \inf_{x \in \mathcal{D}} \mathcal{L}(x, \lambda, \nu).$$
The dual function $g$ is concave, even when the initial problem is not convex, because it is a point-wise infimum of affine functions. The dual function yields lower bounds on the optimal value $p^*$ of the initial problem; for any $\lambda \geq 0$ and any $\nu$ we have $g(\lambda, \nu) \leq p^*$.
If a constraint qualification such as Slater's condition holds and the original problem is convex, then we have strong duality, i.e. $d^* = \max_{\lambda \geq 0, \nu} g(\lambda, \nu) = \inf f_0 = p^*$.
For a convex minimization problem with inequality constraints,

$$\begin{aligned} \min_x\quad & f(x) \\ \text{subject to}\quad & g_j(x) \leq 0, \quad j = 1, \ldots, m, \end{aligned}$$

the Lagrangian dual problem is

$$\begin{aligned} \max_u\ \inf_x\quad & f(x) + \sum_{j=1}^m u_j g_j(x) \\ \text{subject to}\quad & u_j \geq 0, \quad j = 1, \ldots, m, \end{aligned}$$

where the objective function is the Lagrange dual function. Provided that the functions $f$ and $g_1, \ldots, g_m$ are continuously differentiable, the infimum occurs where the gradient is equal to zero. The problem

$$\begin{aligned} \max_{x, u}\quad & f(x) + \sum_{j=1}^m u_j g_j(x) \\ \text{subject to}\quad & \nabla f(x) + \sum_{j=1}^m u_j \nabla g_j(x) = 0, \quad u_j \geq 0, \quad j = 1, \ldots, m \end{aligned}$$

is called the Wolfe dual problem. This problem may be difficult to deal with computationally, because the objective function is not concave in the joint variables $(u, x)$. Also, the equality constraint $\nabla f(x) + \sum_{j=1}^m u_j \nabla g_j(x) = 0$ is nonlinear in general, so the Wolfe dual problem is typically a nonconvex optimization problem. In any case, weak duality holds.[19]
According to George Dantzig, the duality theorem for linear optimization was conjectured by John von Neumann immediately after Dantzig presented the linear programming problem. Von Neumann noted that he was using information from his game theory, and conjectured that the two-person zero-sum matrix game was equivalent to linear programming. Rigorous proofs were first published in 1948 by Albert W. Tucker and his group. (Dantzig's foreword to Nering and Tucker, 1993)
In support vector machines (SVMs), formulating the primal problem of SVMs as the dual problem can be used to implement the kernel trick, but the dual formulation has higher time complexity in historical cases.
https://en.wikipedia.org/wiki/Dual_problem
Digital Enhanced Cordless Telecommunications (DECT) is a cordless telephony standard maintained by ETSI. It originated in Europe, where it is the common standard, replacing earlier standards such as CT1 and CT2.[1] Since the DECT-2020 standard onwards, it also includes IoT communication.
Beyond Europe, it has been adopted by Australia and most countries in Asia and South America. North American adoption was delayed by United States radio-frequency regulations. This forced development of a variation of DECT called DECT 6.0, using a slightly different frequency range, which makes these units incompatible with systems intended for use in other areas, even from the same manufacturer. DECT has almost completely replaced other standards in most countries where it is used, with the exception of North America.
DECT was originally intended for fast roaming between networked base stations, and the first DECT product was the Net3 wireless LAN. However, its most popular application is single-cell cordless phones connected to traditional analog telephone lines, primarily in home and small-office systems, though gateways with multi-cell DECT and/or DECT repeaters are also available in many private branch exchange (PBX) systems for medium and large businesses, produced by Panasonic, Mitel, Gigaset, Ascom, Cisco, Grandstream, Snom, Spectralink, and RTX. DECT can also be used for purposes other than cordless phones, such as baby monitors, wireless microphones and industrial sensors. The ULE Alliance's DECT ULE and its "HAN FUN" protocol[2] are variants tailored for home security, automation, and the internet of things (IoT).
The DECT standard includes the generic access profile (GAP), a common interoperability profile for simple telephone capabilities, which most manufacturers implement. GAP conformance enables DECT handsets and bases from different manufacturers to interoperate at the most basic level of functionality, that of making and receiving calls. Japan uses its own DECT variant, J-DECT, which is supported by the DECT Forum.[3]
The New Generation DECT (NG-DECT) standard, marketed as CAT-iq by the DECT Forum, provides a common set of advanced capabilities for handsets and base stations. CAT-iq allows interchangeability across IP-DECT base stations and handsets from different manufacturers, while maintaining backward compatibility with GAP equipment. It also requires mandatory support for wideband audio.
DECT-2020 New Radio, marketed as NR+ (New Radio plus), is a 5G data transmission protocol which meets ITU-R IMT-2020 requirements for ultra-reliable low-latency and massive machine-type communications, and can co-exist with earlier DECT devices.[4][5][6]
The DECT standard was developed by ETSI in several phases, the first of which took place between 1988 and 1992, when the first round of standards were published. These were the ETS 300-175 series in nine parts defining the air interface, and ETS 300-176 defining how the units should be type approved. A technical report, ETR-178, was also published to explain the standard.[7] Subsequent standards were developed and published by ETSI to cover interoperability profiles and standards for testing.
Named Digital European Cordless Telephone at its launch by CEPT in November 1987, the standard was soon renamed Digital European Cordless Telecommunications, following a suggestion by Enrico Tosato of Italy, to reflect its broader range of application, including data services. In 1995, due to its more global usage, the name was changed from European to Enhanced. DECT is recognized by the ITU as fulfilling the IMT-2000 requirements and thus qualifies as a 3G system. Within the IMT-2000 group of technologies, DECT is referred to as IMT-2000 Frequency Time (IMT-FT).
DECT was developed by ETSI but has since been adopted by many countries all over the world. The original DECT frequency band (1880–1900 MHz) is used in all countries in Europe. Outside Europe, it is used in most of Asia, Australia and South America. In the United States, the Federal Communications Commission in 2005 changed channelization and licensing costs in a nearby band (1920–1930 MHz, or 1.9 GHz), known as Unlicensed Personal Communications Services (UPCS), allowing DECT devices to be sold in the U.S. with only minimal changes. These channels are reserved exclusively for voice communication applications and therefore are less likely to experience interference from other wireless devices such as baby monitors and wireless networks.
The New Generation DECT (NG-DECT) standard was first published in 2007;[8] it was developed by ETSI with guidance from the Home Gateway Initiative through the DECT Forum[9] to support IP-DECT functions in home gateway/IP-PBX equipment. The ETSI TS 102 527 series comes in five parts and covers wideband audio and mandatory interoperability features between handsets and base stations. They were preceded by an explanatory technical report, ETSI TR 102 570.[10] The DECT Forum maintains the CAT-iq trademark and certification program; CAT-iq wideband voice profile 1.0 and interoperability profiles 2.0/2.1 are based on the relevant parts of ETSI TS 102 527.
The DECT Ultra Low Energy (DECT ULE) standard was announced in January 2011, and the first commercial products were launched later that year by Dialog Semiconductor. The standard was created to enable home automation, security, healthcare and energy monitoring applications that are battery powered. Like DECT, the DECT ULE standard uses the 1.9 GHz band, and so suffers less interference than Zigbee, Bluetooth, or Wi-Fi from microwave ovens, which all operate in the unlicensed 2.4 GHz ISM band. DECT ULE uses a simple star network topology, so many devices in the home are connected to a single control unit.
A new low-complexity audio codec, LC3plus, has been added as an option to the 2019 revision of the DECT standard. This codec is designed for high-quality voice and music applications such as wireless speakers, headphones, headsets, and microphones. LC3plus supports scalable 16-bit narrowband, wideband, super wideband, fullband, and 24-bit high-resolution fullband and ultra-band coding, with sample rates of 8, 16, 24, 32, 48 and 96 kHz and audio bandwidth of up to 48 kHz.[11][12]
The DECT-2020 New Radio protocol was published in July 2020; it defines a new physical interface based on cyclic prefix orthogonal frequency-division multiplexing (CP-OFDM) capable of up to 1.2 Gbit/s transfer rate with QAM-1024 modulation. The updated standard supports multi-antenna MIMO and beamforming, FEC channel coding, and hybrid automatic repeat request. There are 17 radio channel frequencies in the range from 450 MHz up to 5,875 MHz, and channel bandwidths of 1,728, 3,456, or 6,912 kHz. Direct communication between end devices is possible with a mesh network topology. In October 2021, DECT-2020 NR was approved for the IMT-2020 standard,[4] for use in Massive Machine Type Communications (MMTC) industry automation, Ultra-Reliable Low-Latency Communications (URLLC), and professional wireless audio applications with point-to-point or multicast communications;[13][14][15] the proposal was fast-tracked by ITU-R following real-world evaluations.[5][16] The new protocol will be marketed as NR+ (New Radio plus) by the DECT Forum.[6] OFDMA and SC-FDMA modulations were also considered by the ETSI DECT committee.[17][18]
OpenD is an open-source framework designed to provide a complete software implementation of DECT ULE protocols on reference hardware from Dialog Semiconductor and DSP Group; the project is maintained by the DECT Forum.[19][20]
The DECT standard originally envisaged three major areas of application:[7] domestic cordless telephony, enterprise wireless PABX, and public access services.
Of these, the domestic application (cordless home telephones) has been extremely successful. The enterprise PABX market, albeit much smaller than the cordless home market, has been very successful as well, and all the major PABX vendors have advanced DECT access options available. The public access application did not succeed, since public cellular networks rapidly out-competed DECT by coupling their ubiquitous coverage with large increases in capacity and continuously falling costs. There has been only one major installation of DECT for public access: in early 1998, Telecom Italia launched a wide-area DECT network known as "Fido" after much regulatory delay, covering major cities in Italy.[21] The service was promoted for only a few months and, having peaked at 142,000 subscribers, was shut down in 2001.[22]
DECT has been used for wireless local loop as a substitute for copper pairs in the "last mile" in countries such as India and South Africa. By using directional antennas and sacrificing some traffic capacity, cell coverage could extend to over 10 kilometres (6.2 mi). One example is the corDECT standard.
The first data application for DECT was the Net3 wireless LAN system by Olivetti, launched in 1993 and discontinued in 1995. A precursor to Wi-Fi, Net3 was a micro-cellular data-only network with fast roaming between base stations and 520 kbit/s transmission rates.
Data applications such as electronic cash terminals, traffic lights, and remote door openers[23] also exist, but have been eclipsed by Wi-Fi, 3G and 4G, which compete with DECT for both voice and data.
The DECT standard specifies a means for a portable phone or "Portable Part" to access a fixed telephone network via radio. A base station or "Fixed Part" is used to terminate the radio link and provide access to a fixed line. A gateway is then used to connect calls to the fixed network, such as the public switched telephone network (telephone jack), office PBX, ISDN, or VoIP over Ethernet connection.
Typical abilities of a domestic DECT Generic Access Profile (GAP) system include multiple handsets to one base station and one phone line socket. This allows several cordless telephones to be placed around the house, all operating from the same telephone line. Additional handsets have a battery charger station that does not plug into the telephone system. Handsets can in many cases be used as intercoms, communicating between each other, and sometimes as walkie-talkies, intercommunicating without a telephone line connection.
DECT operates in the 1880–1900 MHz band and defines ten frequency channels from 1881.792 MHz to 1897.344 MHz, spaced 1728 kHz apart.
DECT operates as a multicarrier frequency-division multiple access (FDMA) and time-division multiple access (TDMA) system. This means that the radio spectrum is divided into physical carriers in two dimensions: frequency and time. FDMA access provides up to 10 frequency channels, and TDMA access provides 24 time slots per every frame of 10 ms. DECT uses time-division duplex (TDD), which means that downlink and uplink use the same frequency but different time slots. Thus a base station provides 12 duplex speech channels in each frame, with each time slot occupying any available channel – thus 10 × 12 = 120 duplex channels are available, each carrying 32 kbit/s.
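The carrier and channel arithmetic in this description can be checked directly (the 1728 kHz carrier spacing is taken from the channel list above):

```python
# Check the DECT carrier and duplex-channel arithmetic described above.
carriers = [1881.792 + 1.728 * n for n in range(10)]  # MHz, 1728 kHz apart
slots_per_frame = 24                 # TDMA slots in each 10 ms frame
duplex_per_carrier = slots_per_frame // 2   # TDD pairs one downlink + one uplink slot
total_duplex = len(carriers) * duplex_per_carrier
print(round(carriers[0], 3), round(carriers[-1], 3), total_duplex)
# 1881.792 1897.344 120
```

Nine spacings of 1728 kHz span exactly the 15.552 MHz between the lowest and highest carrier, consistent with the stated band edges.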
DECT also provides frequency-hopping spread spectrum over the TDMA/TDD structure for ISM band applications. If frequency hopping is avoided, each base station can provide up to 120 channels in the DECT spectrum before frequency reuse. Each time slot can be assigned to a different channel in order to exploit the advantages of frequency hopping and to avoid interference from other users in an asynchronous fashion.[24]
DECT allows interference-free wireless operation to around 100 metres (110 yd) outdoors. Indoor range is reduced when interior spaces are constrained by walls.
DECT performs with fidelity in common congested domestic radio traffic situations. It is generally immune to interference from other DECT systems, Wi-Fi networks, video senders, Bluetooth technology, baby monitors and other wireless devices.
ETSI standards documentation ETSI EN 300 175 parts 1–8 (DECT), ETSI EN 300 444 (GAP) and ETSI TS 102 527 parts 1–5 (NG-DECT) prescribe the following technical properties:
The DECT physical layer uses FDMA/TDMA access with TDD.
Gaussian frequency-shift keying (GFSK) modulation is used: the binary one is coded with a frequency increase of 288 kHz, and the binary zero with a frequency decrease of 288 kHz. With high-quality connections, 2-, 4- or 8-level differential PSK modulation (DBPSK, DQPSK or D8PSK), which is similar to QAM-2, QAM-4 and QAM-8, can be used to transmit 1, 2, or 3 bits per symbol. QAM-16 and QAM-64 modulations, with 4 and 6 bits per symbol, can be used for user data (B-field) only, with resulting transmission speeds of up to 5.068 Mbit/s.
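Taking the 1152 k-symbol/s rate implied by the 1152 kbit/s GFSK bit rate given below, and assuming (this is our assumption, not stated in the text) that the higher-order schemes keep the same symbol rate, the gross air-interface rates scale with bits per symbol:

```python
# Sketch under an assumed constant symbol rate; numbers per modulation level.
symbol_rate = 1_152_000  # symbols per second (one bit per symbol under GFSK)
bits_per_symbol = {"GFSK/DBPSK": 1, "DQPSK": 2, "D8PSK": 3, "QAM-16": 4, "QAM-64": 6}
gross = {name: symbol_rate * bps for name, bps in bits_per_symbol.items()}
for name, rate in gross.items():
    print(name, rate / 1e6, "Mbit/s gross")
```

Under this assumption QAM-64 gives 6.912 Mbit/s gross, of which only the B-field carries user data, consistent with the lower usable figure quoted above.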
DECT provides dynamic channel selection and assignment; the choice of transmission frequency and time slot is always made by the mobile terminal. In case of interference in the selected frequency channel, the mobile terminal (possibly on a suggestion from the base station) can initiate either an intracell handover, selecting another channel/transmitter on the same base, or an intercell handover, selecting a different base station altogether. For this purpose, DECT devices scan all idle channels at regular 30 s intervals to generate a received signal strength indication (RSSI) list. When a new channel is required, the mobile terminal (PP) or base station (FP) selects a channel with the minimum interference from the RSSI list.
The maximum allowed power for portable equipment as well as base stations is 250 mW. A portable device radiates an average of about 10 mW during a call, as it is only using one of 24 time slots to transmit. In Europe, the power limit was expressed as effective radiated power (ERP), rather than the more commonly used equivalent isotropically radiated power (EIRP), permitting the use of high-gain directional antennas to produce much higher EIRP and hence long ranges.
The DECT media access control layer controls the physical layer and provides connection-oriented, connectionless and broadcast services to the higher layers.
The DECT data link layer uses Link Access Protocol Control (LAPC), a specially designed variant of the ISDN data link protocol called LAPD. Both are based on HDLC.
GFSK modulation uses a bit rate of 1152 kbit/s, with a frame of 10 ms (11520 bits) containing 24 time slots. Each slot contains 480 bits, some of which are reserved for physical packets and the rest is guard space. Slots 0–11 are always used for downlink (FP to PP) and slots 12–23 are used for uplink (PP to FP).
There are several combinations of slots and corresponding types of physical packets with GFSK modulation:
The 420/424 bits of a GFSK basic packet (P32) contain the following fields:
The resulting full data rate is 32 kbit/s, available in both directions.
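The frame and slot figures above can be verified with a little arithmetic; the 320 B-field bits per frame used here follow from the 32 kbit/s net rate over a 10 ms frame:

```python
# Verify the DECT frame arithmetic: 1152 kbit/s over a 10 ms frame of 24 slots.
bit_rate = 1_152_000                 # bits per second (GFSK)
frame_bits = bit_rate * 10 // 1000   # bits in a 10 ms frame
slot_bits = frame_bits // 24         # bits per slot (packet plus guard space)
# A basic P32 packet carries 320 B-field bits of user data per 10 ms frame,
# consistent with the 32 kbit/s figure in the text (100 frames per second):
net_rate = 320 * 100
print(frame_bits, slot_bits, net_rate)  # 11520 480 32000
```

So the 420/424-bit basic packet fits inside a 480-bit slot, and the 320-bit B-field alone yields the quoted 32 kbit/s in each direction.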
The DECT network layer always contains the following protocol entities:
Optionally it may also contain others:
All these communicate through a Link Control Entity (LCE).
The call control protocol is derived from ISDN DSS1, which is a Q.931-derived protocol. Many DECT-specific changes have been made.[specify]
The mobility management protocol includes the management of identities, authentication, location updating, on-air subscription and key allocation. It includes many elements similar to the GSM protocol, but also includes elements unique to DECT.
Unlike the GSM protocol, the DECT network specifications do not define cross-linkages between the operation of the entities (for example, Mobility Management and Call Control). The architecture presumes that such linkages will be designed into the interworking unit that connects the DECT access network to whatever mobility-enabled fixed network is involved. By keeping the entities separate, the handset is capable of responding to any combination of entity traffic, and this creates great flexibility in fixed network design without breaking full interoperability.
DECT GAP is an interoperability profile for DECT. The intent is that two different products from different manufacturers that both conform not only to the DECT standard, but also to the GAP profile defined within the DECT standard, are able to interoperate for basic calling. The DECT standard includes full testing suites for GAP, and GAP products on the market from different manufacturers are in practice interoperable for the basic functions.
The DECT media access control layer includes authentication of handsets to the base station using the DECT Standard Authentication Algorithm (DSAA). When registering the handset on the base, both record a shared 128-bit Unique Authentication Key (UAK). The base can request authentication by sending two random numbers to the handset, which calculates the response using the shared 128-bit key. The handset can also request authentication by sending a 64-bit random number to the base, which chooses a second random number, calculates the response using the shared key, and sends it back with the second random number.
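The actual DSAA algorithm is not public (see the encryption discussion below), so the following sketch uses HMAC-SHA256 purely as a stand-in to illustrate the challenge-response pattern described above; it is not DECT's real cipher:

```python
# Challenge-response sketch; HMAC-SHA256 is a stand-in for the proprietary DSAA.
import hmac, hashlib, os

UAK = os.urandom(16)  # 128-bit Unique Authentication Key shared at registration

def auth_response(key, rand1, rand2):
    # Both sides derive the response from the shared key and the two nonces.
    return hmac.new(key, rand1 + rand2, hashlib.sha256).digest()

# Base authenticates the handset: the base sends two random numbers...
rand1, rand2 = os.urandom(8), os.urandom(8)
handset_resp = auth_response(UAK, rand1, rand2)
# ...and checks the handset's response against its own computation.
base_expect = auth_response(UAK, rand1, rand2)
print(hmac.compare_digest(handset_resp, base_expect))  # True: handset holds the UAK
```

Fresh random challenges per run prevent a recorded response from being replayed, which is the point of the two-nonce exchange described in the text.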
The standard also providesencryptionservices with the DECT Standard Cipher (DSC). The encryption isfairly weak, using a 35-bitinitialization vectorand encrypting the voice stream with 64-bit encryption. While most of the DECT standard is publicly available, the part describing the DECT Standard Cipher was only available under anon-disclosure agreementto the phones' manufacturers fromETSI.
The properties of the DECT protocol make it hard to intercept a frame, modify it, and replay it later, as DECT frames are based on time-division multiplexing and must be transmitted at a specific point in time.[26] Unfortunately, very few DECT devices on the market implemented the authentication and encryption procedures.[26][27] Even when encryption was used by the phone, it was possible to mount a man-in-the-middle attack by impersonating a DECT base station and reverting to unencrypted mode, which allows calls to be listened to, recorded, and re-routed to a different destination.[27][28][29]
After an unverified report of a successful attack in 2002,[30][31] members of the deDECTed.org project reverse-engineered the DECT Standard Cipher in 2008,[27] and in 2010 a viable attack on it that can recover the key was published.[32]
In 2012, an improved authentication algorithm, the DECT Standard Authentication Algorithm 2 (DSAA2), and an improved version of the encryption algorithm, the DECT Standard Cipher 2 (DSC2), both based on AES 128-bit encryption, were included as optional in the NG-DECT/CAT-iq suite.
The DECT Forum also launched the DECT Security certification program, which mandates the use of previously optional security features in the GAP profile, such as early encryption and base authentication.
Various access profiles have been defined in the DECT standard:
DECT 6.0 is a North American marketing term for DECT devices manufactured for the United States and Canada, operating at 1.9 GHz. The "6.0" does not correspond to a spectrum band; the term DECT 1.9 was avoided because it might have confused customers who equate larger numbers (such as the 2.4 and 5.8 in existing 2.4 GHz and 5.8 GHz cordless telephones) with newer products. The term was coined by Rick Krupka, marketing director at Siemens, and the DECT USA Working Group / Siemens ICM.
In North America, DECT suffers from deficiencies in comparison to DECT elsewhere, since the UPCS band (1920–1930 MHz) is not free from heavy interference.[34] The bandwidth is half as wide as that used in Europe (1880–1900 MHz), the 4 mW average transmission power reduces range compared to the 10 mW permitted in Europe, and the commonplace lack of GAP compatibility among US vendors binds customers to a single vendor.
Before the 1.9 GHz band was approved by the FCC in 2005, DECT could only operate in the unlicensed 2.4 GHz and 900 MHz Region 2 ISM bands; some users of Uniden WDECT 2.4 GHz phones reported interoperability issues with Wi-Fi equipment.[35][36][unreliable source?]
North American DECT 6.0 products may not be used in Europe, Pakistan,[37] Sri Lanka,[38] and Africa, as they cause and suffer from interference with the local cellular networks. Use of such products is prohibited by European telecommunications authorities, the PTA, the Telecommunications Regulatory Commission of Sri Lanka,[39] and the Independent Communications Authority of South Africa. European DECT products may not be used in the United States and Canada, as they likewise cause and suffer from interference with American and Canadian cellular networks, and their use is prohibited by the Federal Communications Commission and Innovation, Science and Economic Development Canada.
DECT 8.0 HD is a marketing designation for North American DECT devices certified with the CAT-iq 2.0 "Multi Line" profile.[40]
Cordless Advanced Technology – internet and quality (CAT-iq) is a certification program maintained by the DECT Forum. It is based on the New Generation DECT (NG-DECT) series of standards from ETSI.
NG-DECT/CAT-iq contains features that expand the generic GAP profile with mandatory support for high-quality wideband voice, enhanced security, calling party identification, multiple lines, parallel calls, and similar functions to facilitate VoIP calls through SIP and H.323 protocols.
There are several CAT-iq profiles which define supported voice features:
CAT-iq allows any DECT handset to communicate with a DECT base from a different vendor, providing full interoperability. The CAT-iq 2.0/2.1 feature set is designed to support IP-DECT base stations found in office IP-PBX systems and home gateways.
DECT-2020, also called NR+, is a new radio standard by ETSI for the DECT bands worldwide.[41][42] The standard was designed to meet a subset of the ITU IMT-2020 5G requirements that are applicable to IoT and the Industrial Internet of Things.[43] DECT-2020 is compliant with the IMT-2020 requirements for Ultra-Reliable Low-Latency Communications (URLLC) and massive Machine-Type Communication (mMTC).
DECT-2020 NR has new capabilities compared to DECT and DECT Evolution:[44]
The DECT-2020 standard has been designed to co-exist in the DECT radio band with existing DECT deployments. It uses the same Time Division slot timing and Frequency Division center frequencies and uses pre-transmit scanning to minimize co-channel interference.
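The pre-transmit scan amounts to listening on each candidate channel and choosing the quietest one before transmitting. A minimal sketch of that selection step, with invented RSSI readings (the channel numbers and dBm figures are illustrative, not from the standard):

```python
def pick_channel(rssi_by_channel: dict[int, float]) -> int:
    """Dynamic channel selection: given RSSI measurements (in dBm) from a
    pre-transmit scan of candidate slot/carrier pairs, pick the channel
    with the lowest measured power, i.e. the least co-channel interference."""
    return min(rssi_by_channel, key=rssi_by_channel.get)

# Hypothetical scan of five carriers; lower RSSI means a quieter channel.
scan = {0: -62.0, 1: -88.5, 2: -71.3, 3: -90.1, 4: -79.9}
print(pick_channel(scan))  # → 3
```

A real implementation would repeat this scan periodically and per time slot, but the core decision is this minimum-interference choice.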
Other interoperability profiles exist in the DECT suite of standards; in particular, the DPRS (DECT Packet Radio Services) brings together a number of prior interoperability profiles for the use of DECT as a wireless LAN and wireless internet access service. With good range (up to 200 metres (660 ft) indoors and 6 kilometres (3.7 mi) using directional antennae outdoors), dedicated spectrum, high interference immunity, open interoperability, and data speeds of around 500 kbit/s, DECT appeared at one time to be a superior alternative to Wi-Fi.[45] The capabilities built into the DECT networking protocol standards were particularly good at supporting fast roaming in the public space, between hotspots operated by competing but connected providers. The first DECT product to reach the market, Olivetti's Net3, was a wireless LAN, and the German firms Dosch & Amand and Hoeft & Wessel built niche businesses on the supply of data transmission systems based on DECT.
However, the timing of the availability of DECT, in the mid-1990s, was too early to find wide application for wireless data outside niche industrial applications. Whilst contemporary providers of Wi-Fi struggled with the same issues, providers of DECT retreated to the more immediately lucrative market for cordless telephones. A key weakness was also the inaccessibility of the U.S. market, due to FCC spectrum restrictions at that time. By the time mass applications for wireless Internet had emerged, and the U.S. had opened up to DECT, well into the new century, the industry had moved far ahead in terms of performance and DECT's time as a technically competitive wireless data transport had passed.
DECT uses UHF radio, similar to mobile phones, baby monitors, Wi-Fi, and other cordless telephone technologies.
In North America, the 4 mW average transmission power reduces range compared to the 10 mW permitted in Europe.
The UK Health Protection Agency (HPA) claims that, because mobile phones adapt their transmission power, a European DECT cordless phone's radiation could actually exceed that of a mobile phone. A European DECT cordless phone has an average output power of 10 mW, but this is emitted as 100 bursts per second of 250 mW, a strength comparable to some mobile phones.[46]
Most studies have been unable to demonstrate any link to health effects, or have been inconclusive. Electromagnetic fields may have an effect on protein expression in laboratory settings[47] but have not yet been demonstrated to have clinically significant effects in real-world settings. The World Health Organization has issued a statement on the medical effects of mobile phones which acknowledges that the longer-term effects (over several decades) require further research.[48]
https://en.wikipedia.org/wiki/DECT
Database testing usually consists of a layered process, including the user interface (UI) layer, the business layer, the data access layer, and the database itself. The UI layer deals with the interface design of the database, while the business layer includes databases supporting business strategies.
Databases, collections of interconnected files on a server that store information, may not all deal with the same type of data; that is, databases may be heterogeneous. As a result, many kinds of implementation and integration errors may occur in large database systems, which negatively affect the system's performance, reliability, consistency, and security. Thus, it is important to test in order to obtain a database system which satisfies the ACID properties (Atomicity, Consistency, Isolation, and Durability) of a database management system.[1]
One of the most critical layers is the data access layer, which deals with databases directly during the communication process. Database testing mainly takes place at this layer and involves testing strategies such as quality control and quality assurance of the product databases.[2]Testing at these different layers is frequently used to maintain the consistency of database systems, most commonly seen in the following examples:
The figure indicates the areas of testing involved during different database testing methods, such as black-box testing and white-box testing.
Black-box testing involves testing interfaces and the integration of the database, which includes:
With the help of these techniques, the functionality of the database can be tested thoroughly.
Black-box testing has both advantages and drawbacks. Test case generation in black-box testing is fairly simple: test cases can be generated completely independently of software development, at an early stage of development. As a consequence, the programmer has better knowledge of how to design the database application and spends less time debugging. The cost of developing black-box test cases is lower than that of white-box test cases. The major drawback of black-box testing is that it is unknown how much of the program is being tested; in addition, certain errors cannot be detected.[3]
White-box testing mainly deals with the internal structure of the database, specification details that are hidden from the user.
The main advantage of white box testing in database testing is that coding errors are detected, so internal bugs in the database can be eliminated. The limitation of white box testing is that SQL statements are not covered.
While generating test cases for database testing, the semantics of SQL statements need to be reflected in the test cases. For that purpose, a technique called WHODATE (WHite bOx Database Application TEchnique) is used. As shown in the figure, SQL statements are independently converted into GPL (general-purpose language) statements, followed by traditional white-box testing to generate test cases which include SQL semantics.[4]
A fixture describes the initial state of the database before testing begins. After the fixtures are set, database behavior is tested against the defined test cases. Depending on the outcome, test cases are either modified or kept as they are. The "tear down" stage either ends testing or continues with other test cases.[5]
For successful database testing, the following workflow, executed by each single test, is commonly used:
https://en.wikipedia.org/wiki/Database_testing
Music and artificial intelligence (music and AI) is the development of music software programs which use AI to generate music.[1] As with applications in other fields, AI in music also simulates mental tasks. A prominent feature is the capability of an AI algorithm to learn based on past data, such as in computer accompaniment technology, wherein the AI is capable of listening to a human performer and performing accompaniment.[2] Artificial intelligence also drives interactive composition technology, wherein a computer composes music in response to a live performance. There are other AI applications in music that cover not only music composition, production, and performance but also how music is marketed and consumed. Several music player programs have also been developed to use voice recognition and natural language processing technology for music voice control. Current research includes the application of AI in music composition, performance, theory, and digital sound processing. Composers/artists like Jennifer Walshe or Holly Herndon have been exploring aspects of music AI for years in their performances and musical works. Another original approach, of humans "imitating AI", can be found in the 43-hour sound installation String Quartet(s) by Georges Lentz.
The 20th-century art historian Erwin Panofsky proposed that in all art there exist three levels of meaning: primary meaning, or the natural subject; secondary meaning, or the conventional subject; and tertiary meaning, the intrinsic content of the subject.[3][4] AI music explores the foremost of these, creating music without the "intention" usually behind it, leaving composers who listen to machine-generated pieces feeling unsettled by the lack of apparent meaning.[5]
In the 1950s and 1960s, music made by artificial intelligence was not fully original but generated from templates that people had already defined and given to the AI, an approach known as rule-based systems. As time passed, computers became more powerful, which allowed machine learning and artificial neural networks to help in the music industry by giving AI large amounts of data to learn how music is made, instead of predefined templates. By the early 2000s, further advances in artificial intelligence had been made, with generative adversarial networks (GANs) and deep learning being used to help AI compose more original music, more complex and varied than was previously possible. Notable AI-driven projects, such as OpenAI's MuseNet and Google's Magenta, have demonstrated AI's ability to generate compositions that mimic various musical styles.[6]
Artificial intelligence finds its beginnings in music with the transcription problem: accurately recording a performance into musical notation as it is played. Père Engramelle's schematic of a "piano roll", a mode of automatically recording note timing and duration in a way which could be easily transcribed to proper musical notation by hand, was first implemented by the German engineers J.F. Unger and J. Hohlfield in 1752.[7]
In 1957, the ILLIAC I (Illinois Automatic Computer) produced the "Illiac Suite for String Quartet", a completely computer-generated piece of music. The computer was programmed to accomplish this by the composer Lejaren Hiller and the mathematician Leonard Isaacson.[5]: v–vii In 1960, the Russian researcher Rudolf Zaripov published the world's first paper on algorithmic music composition, using the Ural-1 computer.[8]
In 1965, the inventor Ray Kurzweil developed software capable of recognizing musical patterns and synthesizing new compositions from them. The computer first appeared on the quiz show I've Got a Secret that same year.[9]
By 1983, Yamaha Corporation's Kansei Music System had gained momentum, and a paper was published on its development in 1989. The software utilized music information processing and artificial intelligence techniques to essentially solve the transcription problem for simpler melodies, although higher-level melodies and musical complexities are regarded even today as difficult deep-learning tasks, and near-perfect transcription is still a subject of research.[7][10]
In 1997, an artificial intelligence program named Experiments in Musical Intelligence (EMI) appeared to outperform a human composer at the task of composing a piece of music to imitate the style of Bach.[11] EMI would later become the basis for a more sophisticated algorithm called Emily Howell, named for its creator.
In 2002, the music research team at the Sony Computer Science Laboratory Paris, led by the French composer and scientist François Pachet, designed the Continuator, an algorithm uniquely capable of resuming a composition after a live musician stopped.[12]
Emily Howell would continue to make advancements in musical artificial intelligence, publishing its first album, From Darkness, Light, in 2009.[13] Since then, many more pieces composed by artificial intelligence and various groups have been published.
In 2010, Iamus became the first AI to produce a fragment of original contemporary classical music in its own style: "Iamus' Opus 1". Located at the Universidad de Málaga (University of Malaga) in Spain, the computer can generate a fully original piece in a variety of musical styles.[14][5]: 468–481 In August 2019, a large dataset consisting of 12,197 MIDI songs, each with lyrics and melody,[15] was created to investigate the feasibility of neural melody generation from lyrics using a deep conditional LSTM-GAN method.
With progress in generative AI, models capable of creating complete musical compositions (including lyrics) from a simple text description have begun to emerge. Two notable web applications in this field are Suno AI, launched in December 2023, and Udio, which followed in April 2024.[16]
Developed at Princeton University by Ge Wang and Perry Cook, ChucK is a text-based, cross-platform language.[17] By extracting and classifying the theoretical techniques it finds in musical pieces, the software is able to synthesize entirely new pieces from the techniques it has learned.[18] The technology is used by SLOrk (Stanford Laptop Orchestra)[19] and PLOrk (Princeton Laptop Orchestra).
Jukedeck was a website that let people use artificial intelligence to generate original, royalty-free music for use in videos.[20][21] The team started building the music generation technology in 2010,[22] formed a company around it in 2012,[23] and launched the website publicly in 2015.[21] The technology used was originally a rule-based algorithmic composition system,[24] which was later replaced with artificial neural networks.[20] The website was used to create over 1 million pieces of music, and brands that used it included Coca-Cola, Google, UKTV, and the Natural History Museum, London.[25] In 2019, the company was acquired by ByteDance.[26][27][28]
MorpheuS[29] is a research project by Dorien Herremans and Elaine Chew at Queen Mary University of London, funded by a Marie Skłodowska-Curie EU project. The system uses an optimization approach based on a variable neighborhood search algorithm to morph existing template pieces into novel pieces with a set level of tonal tension that changes dynamically throughout the piece. This optimization approach allows for the integration of a pattern detection technique in order to enforce long-term structure and recurring themes in the generated music. Pieces composed by MorpheuS have been performed at concerts in both Stanford and London.
Created in February 2016 in Luxembourg, AIVA is a program that produces soundtracks for any type of media. The algorithms behind AIVA are based on deep learning architectures.[30] AIVA has also been used to compose a rock track called "On the Edge",[31] as well as a pop tune, "Love Sick",[32] in collaboration with the singer Taryn Southern,[33] for the creation of her 2018 album "I am AI".
Google's Magenta team has published several AI music applications and technical papers since its launch in 2016.[34] In 2017 they released the NSynth algorithm and dataset,[35] and an open-source hardware musical instrument designed to make the algorithm easier for musicians to use.[36] The instrument was used by notable artists such as Grimes and YACHT in their albums.[37][38] In 2018, they released a piano improvisation app called Piano Genie. This was later followed by Magenta Studio, a suite of 5 MIDI plugins that allow music producers to elaborate on existing music in their DAW.[39] In 2023, their machine learning team published a technical paper on GitHub describing MusicLM, a private text-to-music generator which they had developed.[40][41]
Riffusion is a neural network, designed by Seth Forsgren and Hayk Martiros, that generates music using images of sound rather than audio.[42]
The resulting music has been described as "de otro mundo" (otherworldly),[43] although unlikely to replace man-made music.[43] The model was made available on December 15, 2022, with the code also freely available on GitHub.[44]
The first version of Riffusion was created as a fine-tuning of Stable Diffusion, an existing open-source model for generating images from text prompts, on spectrograms,[42] resulting in a model which used text prompts to generate image files which could then be put through an inverse Fourier transform and converted into audio files.[44] While these files were only several seconds long, the model could also use latent space between outputs to interpolate different files together[42][45] (using the img2img capabilities of SD).[46] It was one of many models derived from Stable Diffusion.[46]
In December 2022, Mubert[47]similarly used Stable Diffusion to turn descriptive text into music loops. In January 2023, Google published a paper on their own text-to-music generator called MusicLM.[48][49]
Spike AI is an AI-based audio plug-in, developed by Spike Stent in collaboration with his son Joshua Stent and friend Henry Ramsey, that analyzes tracks and provides suggestions to increase clarity and other aspects during mixing. Communication is done using a chatbot trained on Spike Stent's personal data. The plug-in integrates into digital audio workstations.[52][53]
Artificial intelligence can potentially change how producers create music by generating iterations of a track that follow a prompt given by the creator, allowing the AI to follow a certain style the artist is aiming for.[5] AI has also been used in musical analysis, for feature extraction, pattern recognition, and musical recommendation.[54] New AI-powered tools, such as AIVA (Artificial Intelligence Virtual Artist) and Udio, have been built to help generate original music compositions. This is done by giving an AI model data from existing music and having it analyze that data with deep learning techniques to generate music in many different genres, such as classical or electronic music.[55]
Several musicians, such as Dua Lipa, Elton John, Nick Cave, Paul McCartney, and Sting, have criticized the use of AI in music and are encouraging the UK government to act on this matter.[56][57][58][59][60]
Some artists, such as Grimes, have encouraged the use of AI in music.[61]
While helpful in generating new music, many issues have come up since artificial intelligence has begun making music. Some major concerns include how the economy will be impacted with AI taking over music production, who truly owns music generated by AI, and a lower demand for human-made musical compositions. Some critics argue that AI diminishes the value of human creativity, while proponents see it as an augmentative tool that expands artistic possibilities rather than replacing human musicians.[62]
Additionally, concerns have been raised about AI's potential to homogenize music. AI-driven models often generate compositions based on existing trends, which some fear could limit musical diversity. Addressing this concern, researchers are working on AI systems that incorporate more nuanced creative elements, allowing for greater stylistic variation.[55]
Another major concern about artificial intelligence in music is copyright law. Many questions have been asked about who owns AI-generated music and productions, as today's copyright laws require a work to be human-authored in order to be granted copyright protection. One proposed solution is to create hybrid laws that recognize both the artificial intelligence that generated the creation and the humans that contributed to it.
In the United States, the current legal framework tends to apply traditional copyright laws to AI, despite its differences from the human creative process.[63] However, music outputs solely generated by AI are not granted copyright protection. In the Compendium of U.S. Copyright Office Practices, the Copyright Office has stated that it would not grant copyrights to "works that lack human authorship" and that "the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author."[64] In February 2022, the Copyright Review Board rejected an application to copyright AI-generated artwork on the basis that it "lacked the required human authorship necessary to sustain a claim in copyright."[65] The usage of copyrighted music in training AI has also been a topic of contention. One instance of this was seen when SACEM, a professional organization of songwriters, composers, and music publishers, demanded that PozaLabs, an AI music generation startup, refrain from utilizing any music affiliated with them for training models.[66]
The situation in the European Union (EU) is similar to that in the US, because its legal framework also emphasizes the role of human involvement in a copyright-protected work.[67] According to the European Union Intellectual Property Office and the recent jurisprudence of the Court of Justice of the European Union, the originality criterion requires the work to be the author's own intellectual creation, reflecting the personality of the author as evidenced by the creative choices made during its production, which requires a distinct level of human involvement.[67] The reCreating Europe project, funded by the European Union's Horizon 2020 research and innovation program, delves into the challenges posed by AI-generated content, including music, suggesting legal certainty and balanced protection that encourages innovation while respecting copyright norms.[67] The recognition of AIVA marks a significant departure from traditional views on authorship and copyright in the realm of music composition, allowing AI artists to release music and earn royalties. This acceptance marks AIVA as a pioneering instance where an AI has been formally acknowledged within music production.[68]
The recent advancements in artificial intelligence made by groups such as Stability AI, OpenAI, and Google have attracted an enormous number of copyright claims leveled against generative technology, including AI music. Should these lawsuits succeed, the machine learning models behind these technologies would have their datasets restricted to the public domain.[69] Strides toward addressing ethical issues have been made as well, such as the collaboration between Sound Ethics (a company promoting ethical AI usage in the music industry) and UC Irvine, focusing on ethical frameworks and the responsible usage of AI.[70]
A more nascent development of AI in music is the application of audio deepfakes to cast the lyrics or musical style of a pre-existing song into the voice or style of another artist. This has raised many concerns regarding the legality of the technology, as well as the ethics of employing it, particularly in the context of artistic identity.[71] Furthermore, it has also raised the question of to whom the authorship of these works is attributed. As AI cannot hold authorship of its own, current speculation suggests that there will be no clear answer until further rulings are made regarding machine learning technologies as a whole.[72] Preventative measures have recently started to be developed by Google and Universal Music Group, who have looked into royalties and credit attribution to allow producers to replicate the voices and styles of artists.[73]
In 2023, an artist known as ghostwriter977 created a musical deepfake called "Heart on My Sleeve" that cloned the voices of Drake and The Weeknd by inputting an assortment of vocal-only tracks from the respective artists into a deep-learning algorithm, creating an artificial model of the voice of each artist, to which this model could be mapped onto original reference vocals with original lyrics.[74] The track was submitted for Grammy consideration for best rap song and song of the year.[75] It went viral, gained traction on TikTok, and received a positive response from the audience, leading to its official release on Apple Music, Spotify, and YouTube in April 2023.[76] Many believed the track was fully composed by AI software, but the producer claimed the songwriting, production, and original vocals (pre-conversion) were still done by him.[74] It would later be rescinded from any Grammy considerations due to not following the guidelines necessary to be considered for a Grammy award.[76] The track would end up being removed from all music platforms by Universal Music Group.[76] The song was a watershed moment for AI voice cloning, and models have since been created for hundreds, if not thousands, of popular singers and rappers.
In 2013, the country music singer Randy Travis suffered a stroke which left him unable to sing. In the meantime, the vocalist James Dupré toured on his behalf, singing his songs for him. Travis and longtime producer Kyle Lehning released a new song in May 2024 titled "Where That Came From", Travis's first new song since his stroke. The recording uses AI technology to re-create Travis's singing voice, having been composited from over 40 existing vocal recordings alongside those of Dupré.[77][78]
Artificial intelligence music encompasses a number of technical approaches used for music composition, analysis, classification, and suggestion. Techniques used are drawn from deep learning, machine learning, natural language processing, and signal processing. Current systems are able to compose entire musical compositions, parse affective content, accompany human players in real-time, and acquire patterns of user and context-dependent preferences.[79][80][81][82]
Symbolic music generation is the generation of music in discrete symbolic forms such as MIDI, where note and timing are precisely defined. Early systems employed rule-based systems and Markov models, but modern systems employ deep learning to a large extent. Recurrent Neural Networks (RNNs), and more precisely Long Short-Term Memory (LSTM) networks, have been employed in modeling temporal dependencies of musical sequences. They may be used to generate melodies, harmonies, and counterpoints in various musical genres.[83]
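Symbolic generation in its simplest pre-deep-learning form can be sketched with a first-order Markov model of note transitions. The toy corpus of MIDI note numbers below is invented for illustration; real systems train on large symbolic datasets:

```python
import random
from collections import defaultdict

def train_markov(melodies: list[list[int]]) -> dict[int, list[int]]:
    """Learn first-order note-to-note transitions from MIDI note sequences."""
    transitions = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, start: int, length: int, seed: int = 0) -> list[int]:
    """Random-walk the transition table to produce a new melody."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:
            break  # dead end: no observed continuation for this note
        melody.append(rng.choice(choices))
    return melody

# Toy corpus: two short melodies in C major (MIDI note numbers).
corpus = [[60, 62, 64, 62, 60], [60, 64, 67, 64, 60]]
model = train_markov(corpus)
print(generate(model, start=60, length=8))
```

An LSTM plays the same role as the transition table here, but conditions each next-note prediction on a long history rather than only the previous note.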
Transformer models such as Music Transformer and MuseNet became more popular for symbolic generation due to their ability to model long-range dependencies and scalability. These models were employed to generate multi-instrument polyphonic music and stylistic imitations.[84]
This method generates music as raw audio waveforms instead of symbolic notation. DeepMind's WaveNet is an early example that uses autoregressive sampling to generate high-fidelity audio. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) are increasingly used for novel audio texture synthesis and for combining the timbres of different instruments.[80]
NSynth (Neural Synthesizer), a Google Magenta project, uses a WaveNet-like autoencoder to learn latent audio representations and thereby generate completely novel instrumental sounds.[85]
Music Information Retrieval (MIR) is the extraction of musically relevant information from audio recordings to be utilized in applications such as genre classification, instrument recognition, mood recognition, beat detection, and similarity estimation. CNNs on spectrogram features have been very accurate on these tasks.[82]SVMs and k-Nearest Neighbors (k-NN) are also used for classification on features such as Mel-frequency cepstral coefficients (MFCCs).
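The classification step can be sketched as k-nearest-neighbour voting over feature vectors. The 2-D vectors below are made-up stand-ins for MFCC summaries; a real MIR system would first extract MFCCs from audio:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    labelled neighbours, using Euclidean distance."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors (stand-ins for per-track MFCC summaries).
train = [([0.10, 0.20], "classical"), ([0.15, 0.25], "classical"),
         ([0.20, 0.10], "classical"),
         ([0.90, 0.80], "electronic"), ([0.85, 0.90], "electronic")]
print(knn_classify(train, [0.12, 0.18]))  # → classical
```

The same voting scheme works unchanged on real MFCC vectors of any dimension, which is part of why k-NN remains a common MIR baseline.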
Hybrid systems combine symbolic and sound-based methods to draw on their respective strengths. They can compose high-level symbolic compositions and synthesize them as natural sound. Interactive systems in real-time allow for AI to instantaneously respond to human input to support live performance. Reinforcement learning and rule-based agents tend to be utilized to allow for human–AI co-creation in improvisation contexts.[81]
Affective computing techniques enable AI systems to classify or create music based on some affective content. The models use musical features such as tempo, mode, and timbre to classify or influence listener emotions. Deep learning models have been trained for classifying music based on affective content and even creating music intended to have affective impacts.[86]
Music recommenders employ AI to suggest tracks to users based on their listening history, tastes, and contextual information. Collaborative filtering, content-based filtering, and hybrid approaches are the most widely applied, with deep learning used to refine recommendations. Graph-based and matrix factorization methods are used within commercial systems such as Spotify and YouTube Music to represent complex user–item relationships.[87]
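The matrix factorization idea behind many collaborative filters can be sketched in a few lines: learn low-dimensional user and item factor vectors from observed ratings, then predict unseen ratings as dot products. The tiny rating set, factor dimension, and learning rate below are illustrative choices, not any production system's configuration.

```python
import random

def matrix_factorize(ratings, n_users, n_items, k=2, steps=4000, lr=0.05, reg=0.02):
    """Learn latent user/item factors from (user, item, rating) triples
    by stochastic gradient descent on the squared prediction error."""
    rng = random.Random(0)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(steps):
        u, i, r = ratings[rng.randrange(len(ratings))]
        err = r - sum(P[u][f] * Q[i][f] for f in range(k))
        for f in range(k):
            pu, qi = P[u][f], Q[i][f]
            P[u][f] += lr * (err * qi - reg * pu)   # regularized SGD update
            Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

def predict(P, Q, u, i):
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))

# Two users with the same taste on items 0 and 1; user 1 never rated item 2.
ratings = [(0, 0, 5), (0, 1, 1), (0, 2, 5), (1, 0, 5), (1, 1, 1)]
P, Q = matrix_factorize(ratings, n_users=2, n_items=3)
```

Because user 1's learned factors end up close to user 0's, the model predicts a high rating for the unrated item 2, which is exactly the collaborative-filtering effect.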
AI is also used in audio engineering automation such as mixing and mastering. Such systems level, equalize, pan, and compress to give well-balanced sound outputs. Software such as LANDR and iZotope Ozone uses machine learning to emulate professional audio engineers' decisions.[88]
Natural language generation also applies to songwriting assistance and lyrics generation. Transformer language models such as GPT-3 have been shown to generate stylistically coherent lyrics from input prompts, themes, or moods. There are also AI programs that assist with rhyme scheme, syllable count, and poetic form.[89]
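Syllable-counting helpers of the kind mentioned can be approximated with a simple vowel-group heuristic, sketched below. Real lyric tools typically use pronunciation dictionaries such as CMUdict rather than this rough rule, and the silent-final-e correction is a common but imperfect assumption.

```python
import re

def count_syllables(word):
    """Rough English syllable count: number of vowel groups, with a
    common correction for a silent final 'e'. Heuristic only."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and count > 1:
        count -= 1  # "time" -> 1, but keep "apple" -> 2
    return max(count, 1)

line = "shadows falling over silent water"
total = sum(count_syllables(w) for w in line.split())  # 2+2+2+2+2 = 10
```

A songwriting assistant would run this per line to check that verses match a target meter, falling back to a dictionary for irregular words.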
Recent developments include multimodal AI systems that integrate music with other media, e.g., dance, video, and text. These can generate background scores in synchronization with video sequences or generate dance choreography from audio input. Cross-modal retrieval systems allow one to search for music using images, text, or gestures.[90]
The advent of AI music has caused heated cultural debates, especially regarding its impact on creativity, ethics, and audiences. While the democratization of music production has been praised, fears have been raised about its effects on producers, audiences, and society in general.
The most contentious application of AI music creation has been its misuse to produce offensive work. AI music platforms have in several instances been used to produce songs with racist, antisemitic, or violent lyrics, testing moderation and accountability on generative AI platforms.[91] These cases have renewed debate about the responsibility of users and developers for ensuring ethical outputs from generative models.
In addition, several producers and artists have denounced the use of AI in music, citing threats to originality, craftsmanship, and cultural authenticity. Critics argue that AI-generated music lacks the emotional intelligence and lived experience on which human work relies. The concern has grown as AI-generated songs appear on streaming platforms in steadily increasing numbers, a trend some consider a devaluation of human artistry.[92]
Interestingly, while professional musicians have generally been dismissive of AI in music production, general consumers and listeners have been receptive or neutral. Surveys have found that in a commercial context the average consumer often does not know, or even care, whether the music they hear was made by humans or by AI, and a high percentage say it does not affect their enjoyment.[92] The contrast between artist sentiment and consumer sentiment may have far-reaching consequences for the future economics of the music industry and the value assigned to human creativity.
The cultural value placed on AI music is likewise tied to broader popular perceptions of generative AI. How generative AI-produced work, whether music or writing, is received has been found to depend on factors such as emotional meaning and authenticity.[93] As long as AI output proves persuasive and engaging, audiences may in some cases be willing to accept music whose author is not a human being, with the potential to reshape conventions regarding creators and creativity.
The field of music and artificial intelligence is still evolving. Some of the key future directions for advancement include advancements in generation models, changes in how humans and AI collaborate musically, and the development of legal and ethical frameworks to address the technology's impact.
Future research and development is expected to move beyond established techniques such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). More recent architectures such as diffusion models and transformer-based networks[94] are showing promise for generating more complex, nuanced, and stylistically coherent music. These models may lead to higher-quality audio generation and better long-term structure in music compositions.
Beyond generation itself, a significant future direction involves deepening the collaboration between human musicians and AI. Research increasingly focuses on understanding how these collaborations can occur and how they can be made ethically sound.[95] This involves studying musicians' perceptions of and experiences with AI tools to inform the design of future systems.
Research actively explores these collaborative models in different domains. For instance, studies investigate how AI can be co-designed with professionals such as music therapists to act as supportive partners in complex creative and therapeutic processes,[96] showing a trend towards developing AI not just as an output tool, but as an integrated component designed to augment human skills.
As AI-generated music becomes more capable and widespread, legal and ethical frameworks worldwide are expected to continue adapting. Current policy discussions have focused on copyright ownership, the use of AI to mimic artists (deepfakes), and fair compensation for artists.[97] Recent legislative efforts and debates, such as those concerning AI safety and regulation in places like California, show the challenges involved in balancing innovation with potential risks and societal impacts.[98] Tracking these developments is crucial for understanding the future of AI in the music industry.[99]
|
https://en.wikipedia.org/wiki/Music_and_artificial_intelligence
|
SIMH is a free and open-source, multi-platform, multi-system emulator. It is maintained by Bob Supnik, a former DEC engineer and DEC vice president, and has been in development in one form or another since the 1960s.
SIMH was based on a much older systems emulator called MIMIC, which was written in the late 1960s at Applied Data Research.[1] SIMH itself was started in 1993 with the purpose of preserving minicomputer hardware and software that was fading into obscurity.[1]
In May 2022, the MIT License of SIMH version 4 on GitHub was unilaterally modified by a contributor to make it no longer free software, by adding a clause that revokes the right to use any subsequent revisions of the software containing their contributions if modifications are made that "influence the behaviour of the disk access activities".[3] As of 27 May 2022, Supnik no longer endorses version 4 on his official website for SIMH due to these changes, only recognizing the "classic" version 3.x releases.[4]
On 3 June 2022, the last revision of SIMH not subject to this clause (licensed under BSD licenses and the MIT License) was forked by the group Open SIMH, with a new governance model and a steering group that includes Supnik and others. The Open SIMH group cited a "situation" that had arisen in the project and compromised its principles.[5]
SIMH emulates hardware from the following companies.
|
https://en.wikipedia.org/wiki/SIMH
|
In linguistic morphology, a transfix is a discontinuous affix which is inserted into a word root, as in root-and-pattern systems of morphology, like those of many Semitic languages.
A discontinuous affix is an affix whose phonetic components are not sequential within a word, and instead are spread out between or around the phones that comprise the root. The word root is often an abstract series of three consonants, though single-consonant, biliteral, and quadriliteral roots do exist.[1] An example of a triconsonantal root would be ḍ–r–b (ض ر ب) in Arabic, which can be inflected to create forms such as ḍaraba 'he beat' and yaḍribu 'he beats'. While triconsonantal roots are widely considered to be the most common state, some linguists posit that biliteral roots may in fact be the default, though at least one scholar is skeptical of the legitimacy of these claims.[1]
Transfixes are placed into these roots in assigned positions, dictated by templates which are tied to the specific meaning of a given inflection or derivation.[2] The transfixes in the examples above are –a–a–a and ya––i–u.
Transfixes differ from prefixes, suffixes, and infixes in that a complete transfix is the entire structure which is placed into a root. A transfix is not a combination of prefixes, suffixes, and infixes, but its own unique structure which is split through a word. Transfixes also differ from other affixes in that their individual components are meaningless on their own. Looking again at ḍaraba, the components of the –a–a–a transfix do not encode any meaning individually; only together do they create the tense meaning.
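The template mechanism described above can be made concrete with a small sketch. The "C" slots in the template strings are this sketch's own notation for root-consonant positions (linguists usually write patterns differently), and the emphatic ḍ is rendered as a plain d for simplicity.

```python
def apply_transfix(root, template):
    """Interleave a consonantal root with a transfix template.
    Each 'C' slot is filled left-to-right with the next root
    consonant; every other character is transfix material."""
    consonants = iter(root)
    return "".join(next(consonants) if ch == "C" else ch for ch in template)

root = ("d", "r", "b")  # stand-in for Arabic d.-r-b 'beat'
past = apply_transfix(root, "CaCaCa")      # the -a-a-a transfix -> 'he beat'
nonpast = apply_transfix(root, "yaCCiCu")  # the ya--i-u transfix -> 'he beats'
```

Running this yields daraba and yadribu, mirroring how one abstract root surfaces in different inflected forms depending on the template chosen.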
The following are examples of verb inflection in Maltese, noun derivation in Arabic, and noun pluralization in Hausa, all three of which are Afro-Asiatic languages.
The Maltese example efficiently demonstrates the broad nature of transfixes and how they can be inserted into a root.
The Arabic example shows the ways in which a great variety of different nouns and verbs can be derived from a single root through the use of transfixes.
The Hausa example demonstrates the presence of transfixation in non-Semitic languages, though the phenomenon does not seem to be attested outside the Afro-Asiatic family.
|
https://en.wikipedia.org/wiki/Transfix
|
Business process modeling (BPM) is the action of capturing and representing processes of an enterprise (i.e. modeling them), so that the current business processes may be analyzed, applied securely and consistently, improved, and automated.
BPM is typically performed by business analysts, with subject matter experts collaborating with these teams to accurately model processes. It is primarily used in business process management, software development, or systems engineering.
Alternatively, process models can be derived directly from data in IT systems, such as event logs.
According to the Association of Business Process Management Professionals (ABPMP), business process modeling is one of the five key disciplines within Business Process Management (BPM).[1] (Chapter 1.4 CBOK® structure) ← automatic translation from German. The five disciplines are:
However, these disciplines cannot be considered in isolation: business process modeling always requires a business process analysis for modeling the as-is processes (see section Analysis of business activities) or specifications from process design for modeling the to-be processes (see sections Business process reengineering and Business process optimization).
The focus of business process modeling is on the representation of the flow of actions (activities), according to Hermann J. Schmelzer and Wolfgang Sesselmann consisting "of the cross-functional identification of value-adding activities that generate specific services expected by the customer and whose results have strategic significance for the company. They can extend beyond company boundaries and involve activities of customers, suppliers, or even competitors."[2] (Chapter 2.1 Differences between processes and business processes) ← automatic translation from German
However, other qualities (facts) can also be modeled, such as data and business objects (as inputs/outputs), formal organizations and roles (responsible/accountable/consulted/informed persons, see RACI), resources and IT systems, as well as guidelines/instructions (work equipment), requirements, key figures, etc.
Incorporating more of these characteristics into business process modeling enhances the accuracy of abstraction but also increases model complexity. "To reduce complexity and improve the comprehensibility and transparency of the models, the use of a view concept is recommended."[3] (Chapter 2.4 Views of process modeling) ← automatic translation from German. There is also a brief comparison of the view concepts of five relevant German-speaking schools of business informatics: 1) August W. Scheer, 2) Hubert Österle, 3) Otto K. Ferstl and Elmar J. Sinz, 4) Hermann Gehring and 5) Andreas Gadatsch.
The term views (August W. Scheer; Otto K. Ferstl and Elmar J. Sinz; Hermann Gehring; Andreas Gadatsch) is not used uniformly in all schools of business informatics – alternative terms are design dimensions (Hubert Österle) or perspectives (Zachman).
M. Rosemann, A. Schwegmann, and P. Delfmann also see disadvantages in the concept of views: "It is conceivable to create information models for each perspective separately and thus partially redundantly. However, redundancies always mean increased maintenance effort and jeopardize the consistency of the models."[4] (Chapter 3.2.1 Relevant perspectives on process models) ← automatic translation from German
According to Andreas Gadatsch, business process modeling is understood as a part of business process management alongside process definition and process management.[3] (Chapter 1.1 Process management) ← automatic translation from German
Business process modeling is also a central aspect of holistic company mapping, which also deals with the mapping of the corporate mission statement, corporate policy/corporate governance, organizational structure, process organization, application architecture, regulations and interest groups, as well as the market.
According to the European Association of Business Process Management EABPM, there are three different types of end-to-end business processes:
These three process types can be identified in every company and are used in practice almost without exception as the top level for structuring business process models.[5] Instead of the term leadership processes, the term management processes is typically used; instead of the term execution processes, the term core processes has become widely accepted.[2] (Chapter 6.2.1 Objectives and concept) ← automatic translation from German, [6] (Chapter 1.3 The concept of process) ← automatic translation from German, [7] (Chapter 4.12.2 Differentiation between core and support objectives) ← automatic translation from German, [8] (Chapter 6.2.2 Identification and rough draft) ← automatic translation from German
If the core processes are then organized/decomposed at the next level into supply chain management (SCM), customer relationship management (CRM), and product lifecycle management (PLM), standard models of large organizations and industry associations such as the SCOR model can also be integrated into business process modeling.
Techniques to model business processes such as the flow chart, functional flow block diagram, control flow diagram, Gantt chart, PERT diagram, and IDEF have emerged since the beginning of the 20th century. The Gantt charts were among the first to arrive around 1899, the flow charts in the 1920s, functional flow block diagrams and PERT in the 1950s, and data-flow diagrams and IDEF in the 1970s. Among the modern methods are the Unified Modeling Language and Business Process Model and Notation. Still, these represent just a fraction of the methodologies used over the years to document business processes.[9] The term business process modeling was coined in the 1960s in the field of systems engineering by S. Williams in his 1967 article "Business Process Modelling Improves Administrative Control".[10] His idea was that techniques for obtaining a better understanding of physical control systems could be used in a similar way for business processes. It was not until the 1990s that the term became popular.
In the 1990s, the term process became a new productivity paradigm.[11] Companies were encouraged to think in processes instead of functions and procedures. Process thinking looks at the chain of events in the company from purchase to supply, from order retrieval to sales, etc. The traditional modeling tools were developed to illustrate time and cost, while modern tools focus on cross-functional activities. These cross-functional activities have increased significantly in number and importance, due to the growth of complexity and dependence. New methodologies include business process redesign, business process innovation, business process management, and integrated business planning, among others, all "aiming at improving processes across the traditional functions that comprise a company".[11]
In the field of software engineering, the term business process modeling opposed the common software process modeling, aiming to focus more on the state of the practice during software development.[12] At that time (the early 1990s), all existing and new modeling techniques to illustrate business processes were consolidated as 'business process modeling languages'[citation needed]. In the object-oriented approach, it was considered to be an essential step in the specification of business application systems. Business process modeling became the base of new methodologies, for instance those that supported data collection, data flow analysis, process flow diagrams, and reporting facilities. Around 1995, the first visually oriented tools for business process modeling and implementation were presented.
The objective of business process modeling is a – usually graphical – representation of end-to-end processes, whereby complex facts of reality are documented using a uniform (systematized) representation and reduced to the essential (qualities). Regulatory requirements for the documentation of processes often also play a role here (e.g. document control, traceability, or integrity), for example from quality management, information security management or data protection.
Business process modeling typically begins with determining the environmental requirements. First, the goal of the modeling (applications of business process modeling) must be determined; business process models are now often used in a multifunctional way (see above). Second, the model addressees must be determined, as the properties of the model to be created must meet their requirements. This is followed by the determination of the business processes to be modeled.
The qualities of the business process that are to be represented in the model are specified in accordance with the goal of the modeling. As a rule, these are not only the functions constituting the process, including the relationships between them, but also a number of other qualities, such as formal organization, input, output, resources, information, media, transactions, events, states, conditions, operations and methods.
The objectives of business process modeling may include (compare: Association of Business Process Management Professionals (ABPMP)[1] (Chapter 3.1.2 Process characteristics and properties) ← automatic translation from German):
Since business process modeling in itself makes no direct contribution to the financial success of a company, there is no motivation for business process modeling from the most important goal of a company, the intention to make a profit. The motivation of a company to engage in business process modeling therefore always results from the respective purpose. Michael Rosemann, Ansgar Schwegmann and Patrick Delfmann list a number of purposes as motivation for business process modeling:
Within an extensive research program initiated in 1984 at MIT, titled "Management in the 1990s", the approach of process re-engineering emerged in the early 1990s. The research program was designed to explore the impact of information technology on the way organizations would be able to survive and thrive in the competitive environment of the 1990s and beyond. In the final report, N. Venkat Venkatraman[15] summarizes the result as follows: the greatest increases in productivity can be achieved when new processes are planned in parallel with information technologies.
This approach was taken up by Thomas H. Davenport[16] (Part I: A Framework For Process Innovation, Chapter: Introduction) as well as Michael M. Hammer and James A. Champy,[17] who developed it into business process re-engineering (BPR) as we understand it today, according to which business processes are fundamentally restructured in order to achieve an improvement in measurable performance indicators such as costs, quality, service and time.
Business process re-engineering has been criticized in part for starting from a "green field" and therefore not being directly implementable for established companies. Hermann J. Schmelzer and Wolfgang Sesselmann assess this as follows: "The criticism of BPR has an academic character in many respects. ... Some of the points of criticism raised are justified from a practical perspective. This includes pointing out that an overly radical approach carries the risk of failure. It is particularly problematic if the organization and employees are not adequately prepared for BPR."[2] (Chapter 6.2.1 Objectives and concept) ← automatic translation from German
The high-level approach to BPR according to Thomas H. Davenport consists of:
With ISO/IEC 27001:2022, the standard requirements for management systems are now standardized for all major ISO standards and have a process character.
In the ISO/IEC 9001, ISO/IEC 14001, and ISO/IEC 27001 standards, this is anchored in Chapter 4.4 in each case:
Clause 4.4 Quality management system and its processes
Clause 4.4. Environmental management systems
Clause 4.4 Information security management system
Each of these standards requires the organization to establish, implement, maintain and continually improve an appropriate management system "including the processes needed and their interactions".[18],[19],[20]
In the definition of the standard requirements for theprocesses needed and their interactions, ISO/IEC 9001 is more specific in clause 4.4.1 than any other ISO standard for management systems and defines that "the organization shall determine and apply the processes needed for"[18]an appropriate management system throughout the organization and also lists detailed requirements with regard to processes:
In addition, clause 4.4.2 of the ISO/IEC 9001 lists some more
detailed requirements with regard to processes:
The standard requirements fordocumented informationare also relevant for business process modelling as part of an ISO management system.
In the standards ISO/IEC 9001, ISO/IEC 14001, ISO/IEC 27001 the requirements with regard todocumented informationare anchored in clause 7.5 (detailed in the respective standard in clauses "7.5.1. General", "7.5.2. Creating and updating" and "7.5.3. Control of documented information").
The standard requirements of ISO/IEC 9001, used here as an example, include in clause "7.5.1. General",
demand in clause "7.5.2. Creating and updating",
and require in clause "7.5.3. Control of documented information".
Based on the standard requirements,
Preparing for ISO certification of a management system is a very good opportunity to establish or promote business process modelling in the organisation.
Hermann J. Schmelzer and Wolfgang Sesselmann point out that the field of improvement of the three methods they mention as examples for process optimization (control and reduction of total cycle time (TCT), Kaizen and Six Sigma) are processes: in the case of total cycle time (TCT), it is the business processes (end-to-end processes) and sub-processes; with Kaizen it is the process steps and activities; and with Six Sigma it is the sub-processes, process steps and activities.[2] (Chapter 6.3.1 Total Cycle Time (TCT), KAIZEN and Six Sigma in comparison) ← automatic translation from German
For the total cycle time (TCT), Hermann J. Schmelzer and Wolfgang Sesselmann list the following key features:[2] (Chapter 6.3.2 Total Cycle Time (TCT)) ← automatic translation from German
Consequently, business process modeling for TCT must support adequate documentation of barriers, barrier handling, and measurement.
When examining Kaizen tools, there is initially no direct connection to business processes or business process modeling. However, Kaizen and business process management can mutually enhance each other. In the realm of business process management, Kaizen's objectives are directly derived from the objectives for business processes and sub-processes. This linkage ensures that Kaizen measures effectively support the overarching business objectives.[2] (Chapter 6.3.3 KAIZEN) ← automatic translation from German
Six Sigma is designed to prevent errors and improve the process capability so that the proportion of process outcomes that meet the requirements corresponds to 6σ – or in other words, for every million process outcomes, only 3.4 errors occur. Hermann J. Schmelzer and Wolfgang Sesselmann explain: "Companies often encounter considerable resistance at a level of 4σ, which makes it necessary to redesign business processes in the sense of business process re-engineering (design for Six Sigma)."[2] (Chapter 6.3.4 Six Sigma) ← automatic translation from German. For a reproducible measurement of process capability, precise knowledge of the business processes is required, and business process modeling is a suitable tool for design for Six Sigma. Six Sigma therefore uses business process modeling according to SIPOC as an essential part of the methodology, and business process modeling using SIPOC has established itself as a standard tool for Six Sigma.
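The 3.4 defects-per-million figure can be reproduced with a short calculation. It relies on the Six Sigma convention of allowing a 1.5σ long-term shift of the process mean, so the one-sided defect probability is the standard normal tail beyond 6 − 1.5 = 4.5 standard deviations:

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a one-sided specification
    limit at `sigma_level`, assuming the conventional 1.5-sigma
    long-term process shift. Uses the standard normal upper tail."""
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(X > z) for X ~ N(0, 1)
    return tail * 1_000_000

six_sigma = dpmo(6.0)   # ~ 3.4 defects per million
four_sigma = dpmo(4.0)  # ~ 6210 defects per million
```

The jump from roughly 6,200 defects per million at 4σ to 3.4 at 6σ is what motivates the redesign effort the authors describe.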
The aim of inter-company business process modeling is to include the influences of external stakeholders in the analysis or to achieve inter-company comparability of business processes, e.g. to enable benchmarking.
Martin Kugler lists the following requirements for business process modeling in this context:[21] (Chapter 14.2.1 Requirements for inter-company business process modeling) ← automatic translation from German
The analysis of business activities determines and defines the framework conditions for successful business process modeling. This is where the company should start,
This strategy for the long-term success of business process modeling can be characterized by the market-oriented view and/or the resource-based view. Jörg Becker and Volker Meise explain: "Whereas in the market view, the industry and the behavior of competitors directly determine a company's strategy, the resource-oriented approach takes an internal view by analyzing the strengths and weaknesses of the company and deriving the direction of development of the strategy from this."[7] (Chapter 4.6 The resource-based view) ← automatic translation from German. And further: "The alternative character initially formulated in the literature between the market-based and resource-based view has now given way to a differentiated perspective. The core competence approach is seen as an important contribution to the explanation of success potential, which is used alongside the existing, market-oriented approaches."[7] (Chapter 4.7 Combination of views) ← automatic translation from German. Depending on the company's strategy, the process map will therefore balance business process models oriented toward market development and toward resource optimization.
Following the identification phase, a company's business processes are distinguished from one another through an analysis of their respective business activities (refer also to business process analysis). A business process constitutes a set of interconnected, organized actions (activities) geared towards delivering a specific service or product (to fulfill a specific goal) for a particular customer or customer group.
According to the European Association of Business Process Management (EABPM), establishing a common understanding of the current process and its alignment with the objectives serves as an initial step in process design or reengineering.[1] (Chapter 4 Process analysis) ← automatic translation from German
The effort involved in analysing the as-is processes is repeatedly criticised in the literature, especially by proponents of business process re-engineering (BPR), and it is suggested that the definition of the target state should begin immediately.
Hermann J. Schmelzer and Wolfgang Sesselmann, on the other hand, discuss and evaluate the criticism levelled at the radical approach of business process re-engineering (BPR) in the literature and "recommend carrying out as-is analyses. A reorganisation must know the current weak points in order to be able to eliminate them. The results of the analyses also provide arguments as to why a process re-engineering is necessary. It is also important to know the initial situation for the transition from the current to the target state. However, the analysis effort should be kept within narrow limits. The results of the analyses should also not influence the redesign too strongly."[2] (Chapter 6.2.2 Critical assessment of the BPR) ← automatic translation from German
Timo Füermann explains: "Once the business processes have been identified and named, they are now compiled in an overview. Such overviews are referred to as process maps."[22] (Chapter 2.4 Creating the process map) ← automatic translation from German
Jörg Becker and Volker Meise provide the following list of activities for structuring business processes:
The structuring of business processes generally begins with a distinction between management, core, and support processes.
As the core business processes clearly make up the majority of a company's identified business processes, it has become common practice to subdivide the core processes once again. There are different approaches to this depending on the type of company and business activity. These approaches are significantly influenced by the defined application of business process modeling and the strategy for the long-term success of business process modeling.
In the case of a primarily market-based strategy, end-to-end core business processes are often defined from the customer or supplier to the retailer or customer (e.g. "from offer to order", "from order to invoice", "from order to delivery", "from idea to product", etc.). In the case of a strategy based on resources, the core business processes are often defined on the basis of the central corporate functions ("gaining orders", "procuring and providing materials", "developing products", "providing services", etc.).
In a differentiated view without a clear focus on the market view or the resource view, the core business processes are typically divided into CRM, PLM and SCM.
However, other approaches to structuring core business processes are also common, for example from the perspective of customers, products or sales channels.
The result of structuring a company's business processes is the process map (shown, for example, as a value chain diagram). Hermann J. Schmelzer and Wolfgang Sesselmann add: "There are connections and dependencies between the business processes. They are based on the transfer of services and information. It is important to know these interrelationships in order to understand, manage, and control the business processes."[2] (Chapter 2.4.3 Process map) ← automatic translation from German
The definition of business processes often begins with the company's core processes because they
For the company
The scope of a business process should be selected in such a way that it contains a manageable number of sub-processes, while at the same time keeping the total number of business processes within reasonable limits. Five to eight business processes per business unit usually cover the performance range of a company.
Each business process should be independent – but the processes are interlinked.
The definition of a business process includes: What result should be achieved on completion? What activities are necessary to achieve this? Which objects should be processed (orders, raw materials, purchases, products, ...)?
Depending on the prevailing corporate culture, which may either be more inclined towards embracing change or protective of the status quo and the effectiveness of communication, defining business processes can prove to be either straightforward or challenging. This hinges on the willingness of key stakeholders within the organization, such as department heads, to lend their support to the endeavor. Within this context, effective communication plays a pivotal role.
On this point, Jörg Becker and Volker Meise explain that the communication strategy within an organizational design initiative should aim to win the support of the organization's members for the intended structural changes. Business process modeling typically precedes business process optimization, which entails a reconfiguration of the process organization, and the involved parties are well aware of this. The communication strategy must therefore focus on persuading organizational members to endorse the planned structural adjustments.[7] (Chapter 4.15 Influencing the design of the regulatory framework; translated from German) In the event of considerable resistance, however, external expertise can also be brought in to define the business processes.
Jörg Becker and Volker Meise mention two approaches (general process identification and individual process identification) and state the following about general process identification: "In the general process definition, it is assumed that basic, generally valid processes exist that are the same in all companies." It goes on to say: "Detailed reference models can also be used for general process identification. They describe industry- or application system-specific processes of an organization that still need to be adapted to the individual case, but are already coordinated in their structure."[7] (Chapter 4.11 General process identification; translated from German)
Jörg Becker and Volker Meise state the following about individual process identification: "In individual or singular process identification, it is assumed that the processes in each company are different according to customer needs and the competitive situation and can be identified inductively based on the individual problem situation."[7] (Chapter 4.12 Individual process identification; translated from German)
The result of the definition of the business processes is usually a rough structure of the business processes as a value chain diagram.
The rough structure of the business processes created so far is now decomposed by breaking it down into sub-processes that have their own attributes but also contribute to achieving the goal of the business process. This decomposition should be significantly influenced by the application and the strategy for the long-term success of business process modeling, and should be continued as long as the tailoring of the sub-processes defined this way contributes to the implementation of that purpose and strategy.
A sub-process created in this way uses a model to describe the way in which procedures are carried out in order to achieve the intended operating goals of the company. The model is an abstraction of reality (or of a target state), and its concrete form depends on the intended use (application).
A further decomposition of the sub-processes can then take place during business process modeling if necessary. If the business process can be represented as a sequence of phases, separated by milestones, the decomposition into phases is common. Where possible, carrying the milestones over to the next level of decomposition contributes to general understanding.
The result of the further structuring of business processes is usually a hierarchy of sub-processes, represented in value chain diagrams. It is common that not all business processes have the same depth of decomposition. In particular, business processes that are not safety-relevant, cost-intensive or essential to the operating goal are broken down to a much lesser depth. Similarly, as a preliminary stage of a decomposition planned for (much) later, a common understanding can first be developed using simpler, less complex means than value chain diagrams, e.g. with a textual description or with a turtle diagram[22] (Chapter 3.1 Defining process details; translated from German) (not to be confused with turtle graphics!).
Complete, self-contained processes are summarized and handed over to a responsible person or team. The process owner is responsible for success, creates the framework conditions, and coordinates his or her approach with that of the other process owners. Furthermore, he or she is responsible for the exchange of information between the business processes. This coordination is necessary in order to achieve the overall goal orientation.
If business processes are documented using a specific IT-system and representation, e.g. graphically, this is generally referred to as modeling. The result of the documentation is the business process model.
The question of whether the business process model should be created through as-is modeling or to-be modeling is significantly influenced by the defined application and the strategy for the long-term success of business process modeling. The previous procedure, with analysis of business activities, definition of business processes and further structuring of business processes, is advisable in any case.
Ansgar Schwegmann and Michael Laske explain: "Determining the current status is the basis for identifying weaknesses and localizing potential for improvement. For example, weak points such as organizational breaks or insufficient IT penetration can be identified."[23] (Chapter 5.1 Intention of the as-is modeling; translated from German)
The following disadvantages speak against as-is modeling:
These arguments weigh particularly heavily if business process re-engineering (BPR) is planned anyway.
Ansgar Schwegmann and Michael Laske also list a number of advantages of as-is modeling:[23] (Chapter 5.1 Intention of as-is modeling; translated from German)
Other advantages can also be found, such as
Mario Speck and Norbert Schnetgöke define the objective of to-be modeling as follows: "The target processes are based on the strategic goals of the company. This means that all sub-processes and individual activities of a company must be analyzed with regard to their target contribution. Sub-processes or activities that cannot be identified as value-adding and do not serve at least one non-monetary corporate objective must therefore be eliminated from the business processes."[8] (Chapter 6.2.3 Capturing and documenting to-be models)
They also list five basic principles that have proven their worth in the creation of to-be models:
The business process model created by as-is modeling or to-be modeling consists of:
August W. Scheer is said to have remarked in his lectures: A process is a process is a process. This is intended to express the recursiveness of the term, because almost every process can be broken down into smaller processes (sub-processes). In this respect, terms such as business process, main process, sub-process or elementary process are only a desperate attempt to name the level of process decomposition. As there is no universally valid agreement on the granularity of a business process, main process, sub-process or elementary process, the terms are not universally defined, but can only be understood in the context of the respective business process model.
In addition, some German-speaking schools of business informatics do not strictly distinguish between the terms process (in the sense of representing a sequence of actions) and function (in the sense of a delimited corporate function/action (activity) area that is clearly assigned to a corporate function owner).
For example, in August W. Scheer's ARIS it is possible to use functions from the function view as processes in the control view and vice versa. Although this has the advantage that already defined processes or functions can be reused across the board, it also means that the proper purpose of the function view is diluted and the ARIS user is no longer able to separate processes and functions from one another.
The first image shows, as a value chain diagram, how the business process Edit sales pipeline has been broken down into sub-processes (in the sense of representing the sequence of actions (activities)) based on its phases.
The second image shows an excerpt of typical functions (in the sense of delimited corporate function/action (activity) areas, which are assigned to a corporate function owner), which are structured based on the areas of competence and the responsibility hierarchy. The corporate functions that support the business process Edit sales pipeline are marked in the function tree.
A business process can be decomposed into sub-processes until further decomposition is no longer meaningful or possible (the smallest meaningful sub-process is the elementary process). Usually, all levels of decomposition of a business process are documented with the same methodology and process symbols. The process symbols used when modeling one level of decomposition then usually refer to the sub-processes of the next level until the level of elementary processes is reached. Value chain diagrams are often used to represent business processes, main processes, sub-processes and elementary processes.
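The decomposition described above can be sketched as a recursive data structure. The following is a minimal, illustrative Python sketch; the process names and the `depth`/`is_elementary` helpers are hypothetical, not part of any modeling standard:

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """A node in a process hierarchy; sub-processes recurse arbitrarily deep."""
    name: str
    subprocesses: list["Process"] = field(default_factory=list)

    def is_elementary(self) -> bool:
        # An elementary process has no further decomposition.
        return not self.subprocesses

    def depth(self) -> int:
        # Number of decomposition levels below this process.
        if self.is_elementary():
            return 0
        return 1 + max(p.depth() for p in self.subprocesses)

# Hypothetical decomposition of a sales business process.
quote = Process("Prepare quote")
order = Process("Process order",
                [Process("Check credit"), Process("Confirm delivery date")])
sales = Process("Edit sales pipeline", [quote, order])

print(sales.depth())          # 2
print(quote.is_elementary())  # True
```

The same structure can carry any number of levels, matching the observation that not all business processes need the same depth of decomposition.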
A workflow is a representation of a sequence of tasks, declared as the work of a person, of a simple or complex mechanism, of a group of persons,[24] of an organization of staff, or of machines (including IT-systems). A workflow is therefore always located at the elementary process level. The workflow may be seen as any abstraction of real work, segregated into workshare, work split, or other types of ordering. For control purposes, the workflow may be a view of real work under a chosen aspect.
The term function is often used both for a delimited corporate function/action (activity) area, which is assigned to a corporate function owner, and for the atomic activity (task) at the level of the elementary processes. In order to avoid this double meaning of the term function, the term task can be used for the atomic activities at the level of the elementary processes, in accordance with the naming in BPMN. Modern tools also offer the automatic conversion of a task into a process, so that it is possible to create a further level of process decomposition at any time, in which a task must then be upgraded to an elementary process.
The graphical elements used at the level of elementary processes then describe the (temporal-logical) sequence with the help of functions (tasks). The sequence of the functions (tasks) within the elementary processes is determined by their logical linking with each other (by logical operators or gateways), provided it is not already specified by input/output relationships or milestones. It is common to use additional graphical elements to illustrate interfaces, states (events), conditions (rules), milestones, etc. in order to better clarify the process. Depending on the modeling tool used, very different graphical representations (models) are used.
Furthermore, the functions (tasks) can be supplemented with graphical elements to describe inputs, outputs, systems, roles, etc. with the aim of improving the accuracy of the description and/or increasing the number of details. However, these additions quickly make the model confusing. To resolve the contradiction between accuracy of description and clarity, there are two main solutions: outsourcing the additional graphical elements for describing inputs, outputs, systems, roles, etc. to a function allocation diagram (FAD), or selectively showing/hiding these elements depending on the question/application.
The function allocation diagram shown in the image illustrates the addition of graphical elements for the description of inputs, outputs, systems, roles, etc. to functions (tasks) very well.
The term master data is defined neither by The Open Group (The Open Group Architecture Framework, TOGAF) nor by John A. Zachman (Zachman Framework), nor by any of the five relevant German-speaking schools of business informatics: 1) August W. Scheer, 2) Hubert Österle, 3) Otto K. Ferstl and Elmar J. Sinz, 4) Hermann Gehring and 5) Andreas Gadatsch. It is commonly used in the absence of a suitable term in the literature. It is based on the general term for data that represents basic information about operationally relevant objects, and refers to basic information that is not primary information of the business process.
For August W. Scheer in ARIS, this would be the basic information of the organization view, data view, function view and performance view.[25] (Chapter 1 The vision: A common language for IT and management; translated from German)
For Andreas Gadatsch in GPM (Ganzheitliche Prozessmodellierung, German for holistic process modelling), this would be the basic information of the organizational structure view, activity structure view, data structure view, and application structure view.[3] (Chapter 3.2 GPM – Holistic process modelling; translated from German)
For Otto K. Ferstl and Elmar J. Sinz in SOM (Semantisches Objektmodell, German for semantic object model), this would be the basic information of the business plan and resources levels.
Master data can be, for example:
By adding master data to the business process modeling, the same business process model can be used for different applications, and a return on investment for the business process modeling can be achieved more quickly thanks to the resulting synergy.
Depending on how much value is placed on master data in business process modeling, the master data can either be embedded in the process model, as long as this does not impair the readability of the model, or outsourced to a separate view, e.g. function allocation diagrams.
If master data is systematically added to the business process model, this is referred to as an artifact-centric business process model.
The artifact-centric business process model has emerged as a holistic approach for modeling business processes, as it provides a highly flexible solution to capture operational specifications of business processes. It particularly focuses on describing the data of business processes, known as "artifacts", by characterizing business-relevant data objects, their life-cycles, and related services. The artifact-centric process modelling approach fosters the automation of business operations and supports the flexibility of workflow enactment and evolution.[26]
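A minimal sketch of the artifact-centric idea: an artifact couples business data with an explicit life-cycle, and services may only move it along declared transitions. The "Order" artifact and its states below are invented for illustration:

```python
# Allowed life-cycle transitions of a hypothetical "Order" artifact.
LIFECYCLE = {
    "created":  {"approved", "rejected"},
    "approved": {"shipped"},
    "shipped":  {"closed"},
    "rejected": set(),
    "closed":   set(),
}

class Order:
    """An artifact: business-relevant data plus its life-cycle state."""
    def __init__(self, order_id):
        self.order_id = order_id
        self.state = "created"
        self.data = {}

    def advance(self, new_state):
        # Services may only move the artifact along its declared life-cycle.
        if new_state not in LIFECYCLE[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

o = Order("A-17")
o.advance("approved")
o.advance("shipped")
print(o.state)  # shipped
```

Declaring the life-cycle as data, rather than burying it in control flow, is what makes the approach flexible for workflow enactment and evolution.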
The integration of external documents and IT-systems can significantly increase the added value of a business process model.
For example, direct access to objects in a knowledge database or documents in a rule framework can significantly increase the benefits of the business process model in everyday life, and thus the acceptance of business process modeling. All IT-systems involved can exploit their specific advantages and cross-fertilize each other (e.g. link to each other or standardize the filing structure):
If all relevant objects of the knowledge database and/or documents of the rule framework are connected to the processes, the end users have context-related access to this information and do not need to be familiar with the respective filing structure of the connected systems.
The direct connection of external systems can also be used to integrate current measurement results or system statuses into the processes (and, for example, to display the current operating status of the processes), to display widgets and show output from external systems, or to jump to external systems and initiate a transaction there with a preconfigured dialog.
Further connections to external systems can be used, for example, for electronic data interchange (EDI).
This is about checking whether there are any redundancies. If so, the relevant sub-processes are combined. Or sub-processes that are used more than once are outsourced to support processes. For a successful model consolidation, it may be necessary to revise the original decomposition of the sub-processes.
Ansgar Schwegmann and Michael Laske explain: "A consolidation of the models of different modeling complexes is necessary in order to obtain an integrated ... model."[23] (Chapter 5.2.4 Model consolidation; translated from German) They also list a number of aspects for which model consolidation is important:
The chaining of the sub-processes with each other and the chaining of the functions (tasks) in the sub-processes is modeled using Control Flow Patterns.
Material details of the chaining (What does the predecessor deliver to the successor?) are specified in the process interfaces if intended.
Process interfaces are defined in order to
As a rule, this "what" and its structure are determined by the requirements of the subsequent process.
Process interfaces represent the exit from the current business process/sub-process and the entry into the subsequent business process/sub-process.
Process interfaces are therefore description elements for linking processes section by section. A process interface can
Process interfaces are agreed between the participants of superordinate/subordinate or neighboring business process models. They are defined and linked once and used as often as required in process models.
Interfaces can be defined by:
In real terms, the transferred inputs/outputs are often data or information, but any other business objects are also conceivable (material, products in their final or semi-finished state, documents such as a delivery bill). They are provided via suitable transport media (e.g. data storage in the case of data).
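Such a process interface can be captured as a small, reusable record that names the delivering and receiving process, the transferred business object, and the transport medium. The following Python sketch uses invented field names and values, not terms from any standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessInterface:
    """Defined once, then reused wherever the two processes are linked.
    All field names are illustrative."""
    source_process: str
    target_process: str
    business_object: str  # e.g. data, a document, a semi-finished product
    medium: str           # transport medium, e.g. data storage for data

# The successor's requirements determine the "what" and its structure.
delivery = ProcessInterface(
    source_process="Process order",
    target_process="Ship goods",
    business_object="delivery bill",
    medium="document management system",
)
print(delivery.business_object)  # delivery bill
```

Making the record immutable (`frozen=True`) mirrors the idea that an interface is agreed once between the participants and then referenced, not silently changed.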
See article Business process management.
In order to put improved business processes into practice,change managementprograms are usually required. With advances in software design, the vision of BPM models being fully executable (enabling simulations and round-trip engineering) is getting closer to reality.
In business process management, process flows are regularly reviewed and optimized (adapted) if necessary. Regardless of whether this adaptation of process flows is triggered bycontinuous process improvementor by process reorganization (business process re-engineering), it entails an update of individual sub-processes or an entire business process.
In practice, combinations of informal, semiformal and formal models are common: informal textual descriptions for explanation, semiformal graphical representation for visualization, and formal language representation to support simulation and the transfer into executable code.
There are various standards for notations; the most common are:
Furthermore:
In addition, representation types fromsoftware architecturecan also be used:
Business Process Model and Notation (BPMN) is a graphical representation for specifying business processes in a business process model.
An event-driven process chain (EPC) is a type of flowchart for business process modeling. EPCs can be used to configure enterprise resource planning execution and for business process improvement. They can also be used to control an autonomous workflow instance in work sharing.
A Petri net, also known as a place/transition net (PT net), is one of several mathematical modeling languages for the description of distributed systems. It is a class of discrete event dynamic system. A Petri net is a directed bipartite graph that has two types of elements: places and transitions. Place elements are depicted as white circles and transition elements are depicted as rectangles.
A place can contain any number of tokens, depicted as black circles. A transition is enabled if all places connected to it as inputs contain at least one token. Some sources[33] state that Petri nets were invented in August 1939 by Carl Adam Petri, at the age of 13, for the purpose of describing chemical processes.
Like industry standards such as UML activity diagrams, Business Process Model and Notation, and event-driven process chains, Petri nets offer a graphical notation for stepwise processes that include choice, iteration, and concurrent execution. Unlike these standards, Petri nets have an exact mathematical definition of their execution semantics, with a well-developed mathematical theory for process analysis.[citation needed]
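The execution semantics just described (a transition is enabled when every input place holds at least one token, and firing consumes a token from each input place and produces one in each output place) can be demonstrated with a minimal simulator; the place and transition names are arbitrary:

```python
# Minimal place/transition net for a two-step process.
marking = {"p_start": 1, "p_mid": 0, "p_end": 0}
transitions = {
    "t1": ({"p_start"}, {"p_mid"}),  # (input places, output places)
    "t2": ({"p_mid"}, {"p_end"}),
}

def enabled(t):
    # A transition is enabled iff every input place holds >= 1 token.
    inputs, _ = transitions[t]
    return all(marking[p] >= 1 for p in inputs)

def fire(t):
    # Firing consumes one token per input place, produces one per output place.
    inputs, outputs = transitions[t]
    assert enabled(t), f"{t} is not enabled"
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

fire("t1")
fire("t2")
print(marking)  # {'p_start': 0, 'p_mid': 0, 'p_end': 1}
```

This is exactly the "exact mathematical definition of execution semantics" that distinguishes Petri nets from purely diagrammatic notations: the reachable markings of a net can be enumerated and analyzed mechanically.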
A flowchart is a type of diagram that represents a workflow or process. A flowchart can also be defined as a diagrammatic representation of an algorithm, a step-by-step approach to solving a task.
The Lifecycle Modeling Language (LML) is an open-standard modeling language designed for systems engineering. It supports the full lifecycle: conceptual, utilization, support and retirement stages. It also integrates all lifecycle disciplines, including program management, systems and design engineering, verification and validation, and deployment and maintenance, into one framework.[38] LML was originally designed by the LML steering committee. The specification was published on October 17, 2013.
Subject-oriented business process management (S-BPM) is a communication-based view of actors (the subjects) that compose a business process orchestration or choreography.[40] The modeling paradigm uses five symbols to model any process and allows direct transformation into executable form.
Each business process consists of two or more subjects which exchange messages. Each subject has an internal behavior (encapsulation), which is defined as a control flow between different states, namely receive message, send message and do something. More elements are available for practical usage and as syntactic sugar, but they are not necessary.
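The subject/message idea can be illustrated with a toy sketch in which two hypothetical subjects exchange messages through mailboxes; each call below corresponds to a send or receive state of a subject's internal behavior, and the choreography and all names are invented:

```python
from collections import deque

# Each subject has a mailbox; messages are delivered asynchronously.
mailboxes = {"Customer": deque(), "Supplier": deque()}
log = []

def send(sender, receiver, message):
    # A subject's "send message" state.
    mailboxes[receiver].append((sender, message))
    log.append(f"{sender} sends '{message}' to {receiver}")

def receive(subject):
    # A subject's "receive message" state.
    sender, message = mailboxes[subject].popleft()
    log.append(f"{subject} receives '{message}' from {sender}")
    return message

# A toy choreography: order, then confirmation.
send("Customer", "Supplier", "order")
receive("Supplier")
send("Supplier", "Customer", "confirmation")
print(receive("Customer"))  # confirmation
```

The process is defined entirely by the subjects and their message exchanges, which is the core of the subject-oriented view.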
Cognition enhanced Natural language Information Analysis Method (CogNIAM) is a conceptual fact-based modelling method that aims to integrate the different dimensions of knowledge: data, rules, processes and semantics. To represent these dimensions, the world standards SBVR, BPMN and DMN from the Object Management Group (OMG) are used. CogNIAM, a successor of NIAM, is based on the work of knowledge scientist Sjir Nijssen.[citation needed]
The Unified Modeling Language (UML) is a general-purpose visual modeling language that is intended to provide a standard way to visualize the design of a system.[45]
UML provides a standard notation for many types of diagrams which can be roughly divided into three main groups: behavior diagrams, interaction diagrams, and structure diagrams.
The creation of UML was originally motivated by the desire to standardize the disparate notational systems and approaches to software design. It was developed atRational Softwarein 1994–1995, with further development led by them through 1996.[46]
In 1997, UML was adopted as a standard by the Object Management Group (OMG) and has been managed by this organization ever since. In 2005, UML was also published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as the ISO/IEC 19501 standard.[47] Since then the standard has been periodically revised to cover the latest revision of UML.[48]
IDEF, initially an abbreviation of ICAM Definition and renamed in 1999 as Integration Definition, is a family of modeling languages in the field of systems and software engineering. They cover a wide range of uses, from functional modeling to data, simulation, object-oriented analysis and design, and knowledge acquisition. These definition languages were developed under funding from the U.S. Air Force and, although still most commonly used by it and other military and United States Department of Defense (DoD) agencies, are in the public domain.
Harbarian process modeling (HPM) is a method for obtaining internal process information from an organization and then documenting that information in a visually effective, simple manner.
The HPM method involves two levels:
Business process modelling tools provide business users with the ability to model their business processes, implement and execute those models, and refine the models based on as-executed data. As a result, business process modelling tools can provide transparency into business processes, as well as the centralization of corporate business process models and execution metrics.[51] Modelling tools may also enable collaborative modelling of complex processes by users working in teams, where users can share and simulate models collaboratively.[52] Business process modelling tools should not be confused with business process automation systems: both practices start by modeling the process, but process automation produces an 'executable diagram', which is drastically different from what traditional graphical business process modelling tools deliver.[citation needed]
BPM suite software provides programming interfaces (web services, application programming interfaces (APIs)) which allow enterprise applications to be built to leverage the BPM engine.[51] This component is often referred to as the engine of the BPM suite.
Programming languages that are being introduced for BPM include:[53]
Some vendor-specific languages:
Other technologies related to business process modelling include model-driven architecture and service-oriented architecture.
The simulation functionality of such tools allows for pre-execution "what-if" modelling and simulation (which places particular requirements on the application). Post-execution optimization is available based on the analysis of actual as-performed metrics.[51]
A business reference model is a reference model concentrating on the functional and organizational aspects of an enterprise, service organization, or government agency. In general, a reference model is a model that embodies the basic goal or idea of something and can then be used as a reference for various purposes. A business reference model is a means to describe the business operations of an organization, independent of the organizational structure that performs them. Other types of business reference models can also depict the relationship between the business processes, business functions, and the business area's business reference model. These reference models can be constructed in layers, and offer a foundation for the analysis of service components, technology, data, and performance.
The most familiar business reference model is the Business Reference Model of the US federal government. That model is a function-driven framework for describing the business operations of the federal government independent of the agencies that perform them. The Business Reference Model provides an organized, hierarchical construct for describing the day-to-day business operations of the federal government. While many models exist for describing organizations (organizational charts, location maps, etc.), this model presents the business using a functionally driven approach.[55]
A business model, which may be considered an elaboration of a business process model, typically shows business data and business organizations as well as business processes. By showing business processes and their information flows, a business model allows business stakeholders to define, understand, and validate their business enterprise. The data model part of the business model shows how business information is stored, which is useful for developing software code. See the figure on the right for an example of the interaction between business process models and data models.[56]
Usually, a business model is created after conducting an interview, which is part of the business analysis process. The interview consists of a facilitator asking a series of questions to extract information about the subject business process. The interviewer is referred to as a facilitator to emphasize that it is the participants, not the facilitator, who provide the business process information. Although the facilitator should have some knowledge of the subject business process, this is not as important as mastery of a pragmatic and rigorous method of interviewing business experts. The method is important because, for most enterprises, a team of facilitators is needed to collect information across the enterprise, and the findings of all the interviewers must be compiled and integrated once completed.[56]
Business models are developed to define either the current state of the process, resulting in the 'as is' snapshot model, or a vision of what the process should evolve into, leading to a 'to be' model. By comparing and contrasting the 'as is' and 'to be' models, business analysts can determine if existing business processes and information systems require minor modifications or if reengineering is necessary to enhance efficiency. As a result, business process modeling and subsequent analysis can fundamentally reshape the way an enterprise conducts its operations.[56]
Business process reengineering (BPR) aims to improve the efficiency and effectiveness of the processes that exist within and across organizations. It examines business processes from a "clean slate" perspective to determine how best to construct them.
Business process re-engineering (BPR) began as a private sector technique to help organizations fundamentally rethink how they do their work. A key stimulus for re-engineering has been the development and deployment of sophisticated information systems and networks. Leading organizations use this technology to support innovative business processes, rather than refining current ways of doing work.[57]
In graph theory, graph coloring is a methodic assignment of labels, traditionally called "colors", to the elements of a graph. The assignment is subject to certain constraints, such as that no two adjacent elements receive the same color. Graph coloring is a special case of graph labeling. In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices are of the same color; this is called a vertex coloring. Similarly, an edge coloring assigns a color to each edge so that no two adjacent edges are of the same color, and a face coloring of a planar graph assigns a color to each face (or region) so that no two faces that share a boundary have the same color.
Vertex coloring is often used to introduce graph coloring problems, since other coloring problems can be transformed into a vertex coloring instance. For example, an edge coloring of a graph is just a vertex coloring of its line graph, and a face coloring of a plane graph is just a vertex coloring of its dual. However, non-vertex coloring problems are often stated and studied as-is. This is partly pedagogical, and partly because some problems are best studied in their non-vertex form, as in the case of edge coloring.
The convention of using colors originates from coloring the countries in a political map, where each face is literally colored. This was generalized to coloring the faces of a graph embedded in the plane. By planar duality it became coloring the vertices, and in this form it generalizes to all graphs. In mathematical and computer representations, it is typical to use the first few positive or non-negative integers as the "colors". In general, one can use any finite set as the "color set". The nature of the coloring problem depends on the number of colors but not on what they are.
Graph coloring enjoys many practical applications as well as theoretical challenges. Beside the classical types of problems, different limitations can also be set on the graph, or on the way a color is assigned, or even on the color itself. It has even reached popularity with the general public in the form of the popular number puzzle Sudoku. Graph coloring is still a very active field of research.
Note: Many terms used in this article are defined in Glossary of graph theory.
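As a concrete illustration of vertex coloring with small non-negative integers as "colors", here is a simple greedy coloring sketch. It always produces a valid coloring, though not necessarily one with the minimum number of colors:

```python
def greedy_coloring(adjacency):
    """Assign each vertex the smallest color not used by an
    already-colored neighbor (valid, but not necessarily optimal)."""
    colors = {}
    for v in adjacency:
        taken = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in taken:
            c += 1
        colors[v] = c
    return colors

# A 4-cycle is 2-colorable, and greedy coloring finds a 2-coloring here.
cycle4 = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
coloring = greedy_coloring(cycle4)
print(coloring)  # {'a': 0, 'b': 1, 'c': 0, 'd': 1}
```

The number of colors used by the greedy method depends on the order in which vertices are visited, which is one reason why finding the minimum number of colors (the chromatic number) is a much harder problem.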
The first results about graph coloring deal almost exclusively with planar graphs in the form of map coloring.
While trying to color a map of the counties of England, Francis Guthrie postulated the four color conjecture, noting that four colors were sufficient to color the map so that no regions sharing a common border received the same color. Guthrie's brother passed on the question to his mathematics teacher Augustus De Morgan at University College, who mentioned it in a letter to William Hamilton in 1852. Arthur Cayley raised the problem at a meeting of the London Mathematical Society in 1879. The same year, Alfred Kempe published a paper that claimed to establish the result, and for a decade the four color problem was considered solved. For his accomplishment Kempe was elected a Fellow of the Royal Society and later President of the London Mathematical Society.[1]
In 1890, Percy John Heawood pointed out that Kempe's argument was wrong. However, in that paper he proved the five color theorem, saying that every planar map can be colored with no more than five colors, using ideas of Kempe. In the following century, a vast amount of work was done and theories were developed to reduce the number of colors to four, until the four color theorem was finally proved in 1976 by Kenneth Appel and Wolfgang Haken. The proof went back to the ideas of Heawood and Kempe and largely disregarded the intervening developments.[2] The proof of the four color theorem is noteworthy, aside from its solution of a century-old problem, for being the first major computer-aided proof.
In 1912, George David Birkhoff introduced the chromatic polynomial to study the coloring problem, which was generalised to the Tutte polynomial by W. T. Tutte, both of which are important invariants in algebraic graph theory. Kempe had already drawn attention to the general, non-planar case in 1879,[3] and many results on generalisations of planar graph coloring to surfaces of higher order followed in the early 20th century.
In 1960, Claude Berge formulated another conjecture about graph coloring, the strong perfect graph conjecture, originally motivated by an information-theoretic concept called the zero-error capacity of a graph introduced by Shannon. The conjecture remained unresolved for 40 years, until it was established as the celebrated strong perfect graph theorem by Chudnovsky, Robertson, Seymour, and Thomas in 2002.
Graph coloring has been studied as an algorithmic problem since the early 1970s: the chromatic number problem (see section § Vertex coloring below) is one of Karp's 21 NP-complete problems from 1972, and at approximately the same time various exponential-time algorithms were developed based on backtracking and on the deletion–contraction recurrence of Zykov (1949). One of the major applications of graph coloring, register allocation in compilers, was introduced in 1981.
When used without any qualification, a coloring of a graph almost always refers to a proper vertex coloring, namely a labeling of the graph's vertices with colors such that no two vertices sharing the same edge have the same color. Since a vertex with a loop (i.e. a connection directly back to itself) could never be properly colored, it is understood that graphs in this context are loopless.
The terminology of using colors for vertex labels goes back to map coloring. Labels like red and blue are only used when the number of colors is small, and normally it is understood that the labels are drawn from the integers {1, 2, 3, ...}.
A coloring using at most k colors is called a (proper) k-coloring. The smallest number of colors needed to color a graph G is called its chromatic number, and is often denoted χ(G).[4] Sometimes γ(G) is used, since χ(G) is also used to denote the Euler characteristic of a graph.[5] A graph that can be assigned a (proper) k-coloring is k-colorable, and it is k-chromatic if its chromatic number is exactly k. A subset of vertices assigned to the same color is called a color class; every such class forms an independent set. Thus, a k-coloring is the same as a partition of the vertex set into k independent sets, and the terms k-partite and k-colorable have the same meaning.
The chromatic polynomial counts the number of ways a graph can be colored using some of a given number of colors. For example, using three colors, the graph in the adjacent image can be colored in 12 ways. With only two colors, it cannot be colored at all. With four colors, it can be colored in 24 + 4 × 12 = 72 ways: using all four colors, there are 4! = 24 valid colorings (every assignment of four distinct colors to any 4-vertex graph is a proper coloring); and for every choice of three of the four colors, there are 12 valid 3-colorings. So, for the graph in the example, a table of the number of valid colorings would start like this:
The chromatic polynomial is a function P(G, t) that counts the number of t-colorings of G. As the name indicates, for a given G the function is indeed a polynomial in t. For the example graph, P(G, t) = t(t − 1)²(t − 2), and indeed P(G, 4) = 72.
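These counts can be reproduced by brute force. One graph with this chromatic polynomial is a triangle with a pendant vertex; since the pictured example graph is not reproduced here, that stand-in is an assumption of this sketch:

```python
from itertools import product

def count_colorings(n, edges, t):
    """Count proper t-colorings of an n-vertex graph by brute force."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(t), repeat=n)
    )

# A triangle (0-1-2) with pendant vertex 3 has P(G, t) = t(t-1)^2(t-2),
# matching the counts quoted in the text.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(count_colorings(4, edges, 2))  # 0
print(count_colorings(4, edges, 3))  # 12
print(count_colorings(4, edges, 4))  # 72
```

Evaluating the polynomial directly gives the same values: t(t − 1)²(t − 2) is 0, 12, and 72 at t = 2, 3, 4.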
The chromatic polynomial includes more information about the colorability of G than does the chromatic number. Indeed, χ is the smallest positive integer that is not a zero of the chromatic polynomial: χ(G) = min{k : P(G, k) > 0}.
An edge coloring of a graph is a proper coloring of the edges, meaning an assignment of colors to edges so that no vertex is incident to two edges of the same color. An edge coloring with k colors is called a k-edge-coloring and is equivalent to the problem of partitioning the edge set into k matchings. The smallest number of colors needed for an edge coloring of a graph G is the chromatic index, or edge chromatic number, χ′(G). A Tait coloring is a 3-edge coloring of a cubic graph. The four color theorem is equivalent to the assertion that every planar cubic bridgeless graph admits a Tait coloring.
Total coloring is a type of coloring on the vertices and edges of a graph. When used without any qualification, a total coloring is always assumed to be proper in the sense that no adjacent vertices, no adjacent edges, and no edge and its end-vertices are assigned the same color. The total chromatic number χ″(G) of a graph G is the fewest colors needed in any total coloring of G.
For a graph with a strong embedding on a surface, face coloring is the dual of the vertex coloring problem.
For a graph G with a strong embedding on an orientable surface, William T. Tutte[6][7][8] discovered that if the graph is k-face-colorable then G admits a nowhere-zero k-flow. The equivalence holds if the surface is the sphere.
An unlabeled coloring of a graph is an orbit of a coloring under the action of the automorphism group of the graph. The colors remain labeled; it is the graph that is unlabeled.
There is an analogue of the chromatic polynomial which counts the number of unlabeled colorings of a graph from a given finite color set.
If we interpret a coloring of a graph on d vertices as a vector in Zd{\displaystyle \mathbb {Z} ^{d}}, the action of an automorphism is a permutation of the coefficients in the coloring vector.
Assigning distinct colors to distinct vertices always yields a proper coloring, so 1 ≤ χ(G) ≤ n.
The only graphs that can be 1-colored are edgeless graphs. A complete graph Kn{\displaystyle K_{n}} of n vertices requires χ(Kn)=n{\displaystyle \chi (K_{n})=n} colors. In an optimal coloring there must be at least one of the graph's m edges between every pair of color classes, so χ(G)(χ(G) − 1) ≤ 2m.
More generally, a family F{\displaystyle {\mathcal {F}}} of graphs is χ-bounded if there is some function c{\displaystyle c} such that the graphs G{\displaystyle G} in F{\displaystyle {\mathcal {F}}} can be colored with at most c(ω(G)){\displaystyle c(\omega (G))} colors, where ω(G){\displaystyle \omega (G)} is the clique number of G{\displaystyle G}. For the family of perfect graphs this function is c(ω(G))=ω(G){\displaystyle c(\omega (G))=\omega (G)}.
The 2-colorable graphs are exactly the bipartite graphs, including trees and forests.
By the four color theorem, every planar graph can be 4-colored.
A greedy coloring shows that every graph can be colored with one more color than the maximum vertex degree: χ(G) ≤ Δ(G) + 1.
Complete graphs have χ(G)=n{\displaystyle \chi (G)=n} and Δ(G)=n−1{\displaystyle \Delta (G)=n-1}, and odd cycles have χ(G)=3{\displaystyle \chi (G)=3} and Δ(G)=2{\displaystyle \Delta (G)=2}, so for these graphs this bound is best possible. In all other cases, the bound can be slightly improved; Brooks' theorem[9] states that χ(G) ≤ Δ(G) for any connected graph G that is neither a complete graph nor an odd cycle.
Several lower bounds for the chromatic number have been discovered over the years:
If G contains a clique of size k, then at least k colors are needed to color that clique; in other words, the chromatic number is at least the clique number: χ(G) ≥ ω(G).
For perfect graphs this bound is tight. Finding cliques is known as the clique problem.
Hoffman's bound: Let W{\displaystyle W} be a real symmetric matrix such that Wi,j=0{\displaystyle W_{i,j}=0} whenever (i,j){\displaystyle (i,j)} is not an edge in G{\displaystyle G}. Define χW(G)=1−λmax(W)λmin(W){\displaystyle \chi _{W}(G)=1-{\tfrac {\lambda _{\max }(W)}{\lambda _{\min }(W)}}}, where λmax(W),λmin(W){\displaystyle \lambda _{\max }(W),\lambda _{\min }(W)} are the largest and smallest eigenvalues of W{\displaystyle W}. Define χH(G)=maxWχW(G){\textstyle \chi _{H}(G)=\max _{W}\chi _{W}(G)}, with W{\displaystyle W} as above. Then χH(G) ≤ χ(G).
Vector chromatic number: Let W{\displaystyle W} be a positive semi-definite matrix such that Wi,j≤−1k−1{\displaystyle W_{i,j}\leq -{\tfrac {1}{k-1}}} whenever (i,j){\displaystyle (i,j)} is an edge in G{\displaystyle G}. Define χV(G){\displaystyle \chi _{V}(G)} to be the least k for which such a matrix W{\displaystyle W} exists. Then χV(G) ≤ χ(G).
Lovász number: The Lovász number of a complementary graph is also a lower bound on the chromatic number: ϑ(Ḡ) ≤ χ(G).
Fractional chromatic number: The fractional chromatic number of a graph is a lower bound on the chromatic number as well: χf(G) ≤ χ(G).
These bounds are ordered as follows:
Graphs with large cliques have a high chromatic number, but the opposite is not true. The Grötzsch graph is an example of a 4-chromatic graph without a triangle, and the example can be generalized to the Mycielskians.
To prove this, Mycielski and Zykov each gave a construction of an inductively defined family of triangle-free graphs with arbitrarily large chromatic number.[11] Burling (1965) constructed axis-aligned boxes in R3{\displaystyle \mathbb {R} ^{3}} whose intersection graph is triangle-free and requires arbitrarily many colors to be properly colored. This family of graphs is called the Burling graphs. The same class of graphs is used for the construction of a family of triangle-free line segments in the plane, given by Pawlik et al. (2014).[12] It shows that the chromatic number of its intersection graph is arbitrarily large as well. Hence, this implies that axis-aligned boxes in R3{\displaystyle \mathbb {R} ^{3}} as well as line segments in R2{\displaystyle \mathbb {R} ^{2}} are not χ-bounded.[12]
From Brooks's theorem, graphs with high chromatic number must have high maximum degree. But colorability is not an entirely local phenomenon: a graph with high girth looks locally like a tree, because all cycles are long, but its chromatic number need not be 2; there exist graphs of arbitrarily high girth and arbitrarily high chromatic number.
An edge coloring of G is a vertex coloring of its line graph L(G){\displaystyle L(G)}, and vice versa. Thus, χ′(G) = χ(L(G)).
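The line-graph construction behind this identity can be sketched directly; the helper below is illustrative, not taken from any particular library:

```python
def line_graph(edges):
    """Vertices of L(G) are the edges of G; two are adjacent when
    the original edges share an endpoint."""
    adj = {e: [] for e in edges}
    for i, e in enumerate(edges):
        for f in edges[i + 1:]:
            if set(e) & set(f):     # shared endpoint
                adj[e].append(f)
                adj[f].append(e)
    return adj

# A path on 3 edges: its line graph is a path on 3 vertices,
# so an edge coloring of P4 is a vertex coloring of P3.
edges = [(0, 1), (1, 2), (2, 3)]
lg = line_graph(edges)
print(lg[(0, 1)])  # [(1, 2)]
```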
There is a strong relationship between edge colorability and the graph's maximum degree Δ(G){\displaystyle \Delta (G)}. Since all edges incident to the same vertex need their own color, we have χ′(G) ≥ Δ(G).
Moreover, when G is bipartite, χ′(G) = Δ(G).
In general, the relationship is even stronger than what Brooks's theorem gives for vertex coloring: by Vizing's theorem, Δ(G) ≤ χ′(G) ≤ Δ(G) + 1.
A graph has a k-coloring if and only if it has an acyclic orientation for which the longest path has length at most k; this is the Gallai–Hasse–Roy–Vitaver theorem (Nešetřil & Ossona de Mendez 2012).
For planar graphs, vertex colorings are essentially dual to nowhere-zero flows.
Much less is known about infinite graphs.
The following are two of the few results about infinite graph coloring:
As stated above, ω(G)≤χ(G)≤Δ(G)+1.{\displaystyle \omega (G)\leq \chi (G)\leq \Delta (G)+1.} A conjecture of Reed from 1998 is that the value is essentially closer to the lower bound: χ(G)≤⌈ω(G)+Δ(G)+12⌉.{\displaystyle \chi (G)\leq \left\lceil {\frac {\omega (G)+\Delta (G)+1}{2}}\right\rceil .}
The chromatic number of the plane, where two points are adjacent if they have unit distance, is unknown, although it is one of 5, 6, or 7. Other open problems concerning the chromatic number of graphs include the Hadwiger conjecture stating that every graph with chromatic number k has a complete graph on k vertices as a minor, the Erdős–Faber–Lovász conjecture bounding the chromatic number of unions of complete graphs that have at most one vertex in common to each pair, and the Albertson conjecture that among k-chromatic graphs the complete graphs are the ones with smallest crossing number.
When Birkhoff and Lewis introduced the chromatic polynomial in their attack on the four-color theorem, they conjectured that for planar graphs G, the polynomial P(G,t){\displaystyle P(G,t)} has no zeros in the region [4,∞){\displaystyle [4,\infty )}. Although it is known that such a chromatic polynomial has no zeros in the region [5,∞){\displaystyle [5,\infty )} and that P(G,4)≠0{\displaystyle P(G,4)\neq 0}, their conjecture is still unresolved. It also remains an unsolved problem to characterize graphs which have the same chromatic polynomial and to determine which polynomials are chromatic.
Determining if a graph can be colored with 2 colors is equivalent to determining whether or not the graph is bipartite, and thus computable in linear time using breadth-first search or depth-first search. More generally, the chromatic number and a corresponding coloring of perfect graphs can be computed in polynomial time using semidefinite programming. Closed formulas for chromatic polynomials are known for many classes of graphs, such as forests, chordal graphs, cycles, wheels, and ladders, so these can be evaluated in polynomial time.
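The linear-time 2-coloring test can be sketched with breadth-first search; the adjacency-list format here is an assumption of the sketch:

```python
from collections import deque

def two_color(adj):
    """Return a proper 2-coloring (dict vertex -> 0/1), or None if the
    graph is not bipartite. adj maps each vertex to its neighbour list."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None        # odd cycle found: not bipartite
    return color

square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # even cycle
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}            # odd cycle
print(two_color(square))    # a proper 2-coloring
print(two_color(triangle))  # None
```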
If the graph is planar and has low branch-width (or is nonplanar but with a known branch-decomposition), then it can be solved in polynomial time using dynamic programming. In general, the time required is polynomial in the graph size, but exponential in the branch-width.
Brute-force search for a k-coloring considers each of the kn{\displaystyle k^{n}} assignments of k colors to n vertices and checks for each if it is legal. To compute the chromatic number and the chromatic polynomial, this procedure is used for every k=1,…,n−1{\displaystyle k=1,\ldots ,n-1}, which is impractical for all but the smallest input graphs.
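The brute-force procedure can be sketched as follows; this is the naive method described above, practical only for tiny graphs:

```python
from itertools import product

def is_proper(coloring, edges):
    """Check that no edge joins two vertices of the same color."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def chromatic_number(n, edges):
    """Smallest k for which some assignment of k colors to the n
    vertices is proper, found by exhaustive search over k^n assignments."""
    for k in range(1, n + 1):
        if any(is_proper(c, edges) for c in product(range(k), repeat=n)):
            return k

# An odd cycle such as C5 needs 3 colors.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(chromatic_number(5, c5))  # 3
```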
Using dynamic programming and a bound on the number of maximal independent sets, k-colorability can be decided in time and space O(2.4423n){\displaystyle O(2.4423^{n})}.[15] Using the principle of inclusion–exclusion and Yates's algorithm for the fast zeta transform, k-colorability can be decided in time O(2nn){\displaystyle O(2^{n}n)}[14][16][17][18] for any k. Faster algorithms are known for 3- and 4-colorability, which can be decided in time O(1.3289n){\displaystyle O(1.3289^{n})}[19] and O(1.7272n){\displaystyle O(1.7272^{n})},[20] respectively. Exponentially faster algorithms are also known for 5- and 6-colorability, as well as for restricted families of graphs, including sparse graphs.[21]
The contraction G/uv{\displaystyle G/uv} of a graph G is the graph obtained by identifying the vertices u and v, and removing any edges between them. The remaining edges originally incident to u or v are now incident to their identification (i.e., the new fused node uv). This operation plays a major role in the analysis of graph coloring.
The chromatic number satisfies the recurrence relation χ(G) = min{χ(G + uv), χ(G/uv)},
due to Zykov (1949), where u and v are non-adjacent vertices, and G+uv{\displaystyle G+uv} is the graph with the edge uv added. Several algorithms are based on evaluating this recurrence and the resulting computation tree is sometimes called a Zykov tree. The running time is based on a heuristic for choosing the vertices u and v.
The chromatic polynomial satisfies the following recurrence relation: P(G, k) = P(G − uv, k) − P(G/uv, k),
where u and v are adjacent vertices, and G−uv{\displaystyle G-uv} is the graph with the edge uv removed. P(G−uv,k){\displaystyle P(G-uv,k)} represents the number of possible proper colorings of the graph when the vertices u and v may have the same or different colors. These colorings arise from two different graphs: if u and v have different colors, we might as well consider a graph where u and v are adjacent; if u and v have the same color, we might as well consider a graph where u and v are contracted. Tutte's curiosity about which other graph properties satisfied this recurrence led him to discover a bivariate generalization of the chromatic polynomial, the Tutte polynomial.
These expressions give rise to a recursive procedure called the deletion–contraction algorithm, which forms the basis of many algorithms for graph coloring. The running time satisfies the same recurrence relation as the Fibonacci numbers, so in the worst case the algorithm runs in time within a polynomial factor of (1+52)n+m=O(1.6180n+m){\displaystyle \left({\tfrac {1+{\sqrt {5}}}{2}}\right)^{n+m}=O(1.6180^{n+m})} for n vertices and m edges.[22] The analysis can be improved to within a polynomial factor of the number t(G){\displaystyle t(G)} of spanning trees of the input graph.[23] In practice, branch and bound strategies and graph isomorphism rejection are employed to avoid some recursive calls. The running time depends on the heuristic used to pick the vertex pair.
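A minimal sketch of the deletion–contraction recurrence for evaluating P(G, k); the vertex-relabeling scheme is an implementation choice, not part of the algorithm:

```python
def chrom_poly_value(n, edges, k):
    """Evaluate P(G, k) via P(G, k) = P(G - uv, k) - P(G/uv, k).
    Vertices are 0..n-1; edges is a list of pairs."""
    if not edges:
        return k ** n                      # n isolated vertices: k choices each
    (u, v), rest = edges[0], list(edges[1:])
    # Deletion: G - uv is just the remaining edge list.
    deleted = chrom_poly_value(n, rest, k)
    # Contraction: merge v into u, drop self-loops and parallel copies,
    # then relabel so the vertices are 0..n-2 again.
    merged = {tuple(sorted((u if a == v else a, u if b == v else b)))
              for a, b in rest}
    merged.discard((u, u))
    shift = lambda x: x - 1 if x > v else x
    contracted = [(shift(a), shift(b)) for a, b in merged]
    return deleted - chrom_poly_value(n - 1, contracted, k)
```

On the triangle-with-pendant graph used earlier, this reproduces P(G, 3) = 12 and P(G, 4) = 72.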
The greedy algorithm considers the vertices in a specific order v1{\displaystyle v_{1}}, ..., vn{\displaystyle v_{n}} and assigns to vi{\displaystyle v_{i}} the smallest available color not used by vi{\displaystyle v_{i}}'s neighbours among v1{\displaystyle v_{1}}, ..., vi−1{\displaystyle v_{i-1}}, adding a fresh color if needed. The quality of the resulting coloring depends on the chosen ordering. There exists an ordering that leads to a greedy coloring with the optimal number χ(G){\displaystyle \chi (G)} of colors. On the other hand, greedy colorings can be arbitrarily bad; for example, the crown graph on n vertices can be 2-colored, but has an ordering that leads to a greedy coloring with n/2{\displaystyle n/2} colors.
For chordal graphs, and for special cases of chordal graphs such as interval graphs and indifference graphs, the greedy coloring algorithm can be used to find optimal colorings in polynomial time, by choosing the vertex ordering to be the reverse of a perfect elimination ordering for the graph. The perfectly orderable graphs generalize this property, but it is NP-hard to find a perfect ordering of these graphs.
If the vertices are ordered according to their degrees, the resulting greedy coloring uses at most maxi min{d(xi)+1,i}{\displaystyle \max _{i}\min\{d(x_{i})+1,i\}} colors, at most one more than the graph's maximum degree. This heuristic is sometimes called the Welsh–Powell algorithm.[24] Another heuristic due to Brélaz establishes the ordering dynamically while the algorithm proceeds, choosing next the vertex adjacent to the largest number of different colors.[25] Many other graph coloring heuristics are similarly based on greedy coloring for a specific static or dynamic strategy of ordering the vertices; these algorithms are sometimes called sequential coloring algorithms.
The maximum (worst) number of colors that can be obtained by the greedy algorithm, by using a vertex ordering chosen to maximize this number, is called the Grundy number of a graph.
Two well-known polynomial-time heuristics for graph colouring are the DSatur and recursive largest first (RLF) algorithms.
Similarly to the greedy colouring algorithm, DSatur colours the vertices of a graph one after another, expending a previously unused colour when needed. Once a new vertex has been coloured, the algorithm determines which of the remaining uncoloured vertices has the highest number of different colours in its neighbourhood and colours this vertex next. The number of different colours in a vertex's neighbourhood is called its degree of saturation.
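A minimal sketch of DSatur under these rules (the tie-break by degree is a common convention, assumed here):

```python
def dsatur(adj):
    """DSatur: repeatedly colour the uncoloured vertex whose neighbourhood
    currently contains the most distinct colours (its saturation degree),
    breaking ties by degree."""
    color = {}
    saturation = {v: set() for v in adj}   # colours seen around each vertex
    while len(color) < len(adj):
        v = max((u for u in adj if u not in color),
                key=lambda u: (len(saturation[u]), len(adj[u])))
        c = 0
        while c in saturation[v]:          # smallest colour not in neighbourhood
            c += 1
        color[v] = c
        for u in adj[v]:
            saturation[u].add(c)
    return color

# An odd cycle C5 needs 3 colours; DSatur is exact on cycle graphs.
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
coloring = dsatur(c5)
print(max(coloring.values()) + 1)  # 3
```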
The recursive largest first algorithm operates in a different fashion by constructing each color class one at a time. It does this by identifying a maximal independent set of vertices in the graph using specialised heuristic rules. It then assigns these vertices to the same color and removes them from the graph. These actions are repeated on the remaining subgraph until no vertices remain.
The worst-case complexity of DSatur is O(n2){\displaystyle O(n^{2})}, where n{\displaystyle n} is the number of vertices in the graph. The algorithm can also be implemented using a binary heap to store saturation degrees, operating in O((n+m)logn){\displaystyle O((n+m)\log n)} where m{\displaystyle m} is the number of edges in the graph.[26] This produces much faster runs with sparse graphs. The overall complexity of RLF is slightly higher than DSatur at O(mn){\displaystyle O(mn)}.[26]
DSatur and RLF are exact for bipartite, cycle, and wheel graphs.[26]
It is known that a χ-chromatic graph can be c-colored in the deterministic LOCAL model in O(n1/α){\displaystyle O(n^{1/\alpha })} rounds, with α=⌊c−1χ−1⌋{\displaystyle \alpha =\left\lfloor {\frac {c-1}{\chi -1}}\right\rfloor }. A matching lower bound of Ω(n1/α){\displaystyle \Omega (n^{1/\alpha })} rounds is also known. This lower bound holds even if quantum computers that can exchange quantum information, possibly with a pre-shared entangled state, are allowed.
In the field of distributed algorithms, graph coloring is closely related to the problem of symmetry breaking. The current state-of-the-art randomized algorithms are faster for sufficiently large maximum degree Δ than deterministic algorithms. The fastest randomized algorithms employ the multi-trials technique by Schneider and Wattenhofer.[27]
In a symmetric graph, a deterministic distributed algorithm cannot find a proper vertex coloring. Some auxiliary information is needed in order to break symmetry. A standard assumption is that initially each node has a unique identifier, for example, from the set {1, 2, ..., n}. Put otherwise, we assume that we are given an n-coloring. The challenge is to reduce the number of colors from n to, e.g., Δ + 1. The more colors are employed, e.g. O(Δ) instead of Δ + 1, the fewer communication rounds are required.[27]
A straightforward distributed version of the greedy algorithm for (Δ + 1)-coloring requires Θ(n) communication rounds in the worst case – information may need to be propagated from one side of the network to another side.
The simplest interesting case is an n-cycle. Richard Cole and Uzi Vishkin[28] show that there is a distributed algorithm that reduces the number of colors from n to O(log n) in one synchronous communication step. By iterating the same procedure, it is possible to obtain a 3-coloring of an n-cycle in O(log* n) communication steps (assuming that we have unique node identifiers).
The function log*, the iterated logarithm, is an extremely slowly growing function, "almost constant". Hence the result by Cole and Vishkin raised the question of whether there is a constant-time distributed algorithm for 3-coloring an n-cycle. Linial (1992) showed that this is not possible: any deterministic distributed algorithm requires Ω(log* n) communication steps to reduce an n-coloring to a 3-coloring in an n-cycle.
The technique by Cole and Vishkin can be applied in arbitrary bounded-degree graphs as well; the running time is poly(Δ) + O(log* n).[29] The technique was extended to unit disk graphs by Schneider and Wattenhofer.[30] The fastest deterministic algorithms for (Δ + 1)-coloring for small Δ are due to Leonid Barenboim, Michael Elkin and Fabian Kuhn.[31] The algorithm by Barenboim et al. runs in time O(Δ) + log*(n)/2, which is optimal in terms of n since the constant factor 1/2 cannot be improved due to Linial's lower bound. Panconesi & Srinivasan (1996) use network decompositions to compute a Δ + 1 coloring in time 2O(logn){\displaystyle 2^{O\left({\sqrt {\log n}}\right)}}.
The problem of edge coloring has also been studied in the distributed model. Panconesi & Rizzi (2001) achieve a (2Δ − 1)-coloring in O(Δ + log* n) time in this model. The lower bound for distributed vertex coloring due to Linial (1992) applies to the distributed edge coloring problem as well.
Decentralized algorithms are ones where no message passing is allowed (in contrast to distributed algorithms where local message passing takes place), and efficient decentralized algorithms exist that will color a graph if a proper coloring exists. These assume that a vertex is able to sense whether any of its neighbors are using the same color as itself, i.e., whether a local conflict exists. This is a mild assumption in many applications; e.g., in wireless channel allocation it is usually reasonable to assume that a station will be able to detect whether other interfering transmitters are using the same channel (e.g. by measuring the SINR). This sensing information is sufficient to allow algorithms based on learning automata to find a proper graph coloring with probability one.[32]
Graph coloring is computationally hard. It is NP-complete to decide if a given graph admits a k-coloring for a given k except for the cases k ∈ {0, 1, 2}. In particular, it is NP-hard to compute the chromatic number.[33] The 3-coloring problem remains NP-complete even on 4-regular planar graphs.[34] On graphs with maximal degree 3 or less, however, Brooks' theorem implies that the 3-coloring problem can be solved in linear time. Further, for every k > 3, a k-coloring of a planar graph exists by the four color theorem, and it is possible to find such a coloring in polynomial time. However, finding the lexicographically smallest 4-coloring of a planar graph is NP-complete.[35]
The best known approximation algorithm computes a coloring of size at most within a factor O(n(log log n)^2(log n)^(−3)) of the chromatic number.[36] For all ε > 0, approximating the chromatic number within n^(1−ε) is NP-hard.[37]
It is also NP-hard to color a 3-colorable graph with 5 colors,[38] a 4-colorable graph with 7 colours,[38] and a k-colorable graph with (k⌊k/2⌋)−1{\displaystyle \textstyle {\binom {k}{\lfloor k/2\rfloor }}-1} colors for k ≥ 5.[39]
Computing the coefficients of the chromatic polynomial is ♯P-hard. In fact, even computing the value of P(G,k){\displaystyle P(G,k)} is ♯P-hard at any rational point k except for k = 1 and k = 2.[40] There is no FPRAS for evaluating the chromatic polynomial at any rational point k ≥ 1.5 except for k = 2 unless NP = RP.[41]
For edge coloring, the proof of Vizing's result gives an algorithm that uses at most Δ+1 colors. However, deciding between the two candidate values for the edge chromatic number is NP-complete.[42]In terms of approximation algorithms, Vizing's algorithm shows that the edge chromatic number can be approximated to within 4/3,
and the hardness result shows that no (4/3 − ε)-algorithm exists for any ε > 0 unless P = NP. These are among the oldest results in the literature of approximation algorithms, even though neither paper makes explicit use of that notion.[43]
Vertex coloring models a number of scheduling problems.[44] In the cleanest form, a given set of jobs needs to be assigned to time slots, and each job requires one such slot. Jobs can be scheduled in any order, but pairs of jobs may be in conflict in the sense that they may not be assigned to the same time slot, for example because they both rely on a shared resource. The corresponding graph contains a vertex for every job and an edge for every conflicting pair of jobs. The chromatic number of the graph is exactly the minimum makespan, the optimal time to finish all jobs without conflicts.
Details of the scheduling problem define the structure of the graph. For example, when assigning aircraft to flights, the resulting conflict graph is an interval graph, so the coloring problem can be solved efficiently. In bandwidth allocation to radio stations, the resulting conflict graph is a unit disk graph, so the coloring problem is 3-approximable.
A compiler is a computer program that translates one computer language into another. To improve the execution time of the resulting code, one of the techniques of compiler optimization is register allocation, where the most frequently used values of the compiled program are kept in the fast processor registers. Ideally, values are assigned to registers so that they can all reside in the registers when they are used.
The textbook approach to this problem is to model it as a graph coloring problem.[45] The compiler constructs an interference graph, where vertices are variables and an edge connects two vertices if they are needed at the same time. If the graph can be colored with k colors then any set of variables needed at the same time can be stored in at most k registers.
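As an illustrative toy (the live-range model and the variable names below are invented for the example, not taken from any compiler):

```python
def allocate_registers(live_ranges, k):
    """Toy register allocation: variables whose live ranges overlap
    interfere; greedily color the interference graph and report the
    assignment if k registers suffice, else None.
    live_ranges maps variable -> (start, end)."""
    vars_ = list(live_ranges)
    interferes = {v: [] for v in vars_}
    for i, a in enumerate(vars_):
        for b in vars_[i + 1:]:
            (s1, e1), (s2, e2) = live_ranges[a], live_ranges[b]
            if s1 < e2 and s2 < e1:       # the two ranges overlap
                interferes[a].append(b)
                interferes[b].append(a)
    reg = {}
    for v in vars_:                        # greedy coloring in start order
        used = {reg[u] for u in interferes[v] if u in reg}
        reg[v] = next(r for r in range(len(vars_)) if r not in used)
    return reg if max(reg.values()) < k else None

ranges = {"a": (0, 4), "b": (1, 3), "c": (3, 6), "d": (5, 8)}
print(allocate_registers(ranges, 2))  # two registers suffice here
```

Because live ranges here are intervals, the interference graph is an interval graph, so greedy coloring in order of range start is optimal, echoing the interval-graph remark in the scheduling section.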
The problem of coloring a graph arises in many practical areas such as sports scheduling,[46] designing seating plans,[47] exam timetabling,[48] the scheduling of taxis,[49] and solving Sudoku puzzles.[50]
An important class of improper coloring problems is studied in Ramsey theory, where the graph's edges are assigned to colors, and there is no restriction on the colors of incident edges. A simple example is the theorem on friends and strangers, which states that in any coloring of the edges of K6{\displaystyle K_{6}}, the complete graph on six vertices, there will be a monochromatic triangle; this is often illustrated by saying that any group of six people either has three mutual strangers or three mutual acquaintances. Ramsey theory is concerned with generalisations of this idea to seek regularity amid disorder, finding general conditions for the existence of monochromatic subgraphs with given structure.
Modular coloring is a type of graph coloring in which the color of each vertex is the sum of the colors of its adjacent vertices.
Let k ≥ 2 be a number of colors, where Zk{\displaystyle \mathbb {Z} _{k}} is the set of integers modulo k consisting of the elements (or colors) 0, 1, 2, ..., k−2, k−1. First, we color each vertex in G using the elements of Zk{\displaystyle \mathbb {Z} _{k}}, allowing two adjacent vertices to be assigned the same color. In other words, we want c to be a coloring c: V(G) → Zk{\displaystyle \mathbb {Z} _{k}} where adjacent vertices can be assigned the same color.
For each vertex v in G, the color sum of v, σ(v), is the sum of the colors of the vertices adjacent to v, taken mod k. The color sum of v is denoted by σ(v) = ∑u∈N(v) c(u) mod k,
where u ranges over the vertices in the neighborhood of v, N(v). We then color each vertex with the new coloring determined by the sum of the colors of its adjacent vertices. The graph G has a modular k-coloring if, for every pair of adjacent vertices a, b, σ(a) ≠ σ(b). The modular chromatic number of G, mc(G), is the minimum value of k such that there exists a modular k-coloring of G.
For example, let there be a vertex v adjacent to vertices with the assigned colors 0, 1, 1, and 3 mod 4 (k = 4). The color sum would be σ(v) = 0 + 1 + 1 + 3 mod 4 = 5 mod 4 = 1. This would be the new color of vertex v. We would repeat this process for every vertex in G. If two adjacent vertices have equal color sums, this coloring is not a modular 4-coloring of G. If no pair of adjacent vertices has equal color sums, G has a modular 4-coloring.
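The procedure in the worked example can be sketched as follows (the path graph and the initial colors are chosen purely for illustration):

```python
def color_sums(adj, c, k):
    """sigma(v): sum of the neighbours' colors, mod k."""
    return {v: sum(c[u] for u in adj[v]) % k for v in adj}

def is_modular_coloring(adj, c, k):
    """A modular k-coloring requires sigma(a) != sigma(b)
    for every pair of adjacent vertices a, b."""
    sigma = color_sums(adj, c, k)
    return all(sigma[u] != sigma[v] for u in adj for v in adj[u])

# Path a-b-c with initial colors drawn from Z_2.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
c = {"a": 0, "b": 0, "c": 1}
print(color_sums(adj, c, 2))           # {'a': 0, 'b': 1, 'c': 0}
print(is_modular_coloring(adj, c, 2))  # True
```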
Coloring can also be considered for signed graphs and gain graphs.
https://en.wikipedia.org/wiki/Graph_coloring
A cognitive computer is a computer that hardwires artificial intelligence and machine learning algorithms into an integrated circuit that closely reproduces the behavior of the human brain.[1] It generally adopts a neuromorphic engineering approach. Synonyms include neuromorphic chip and cognitive chip.[2][3]
In 2023, IBM's proof-of-concept NorthPole chip (optimized for 2-, 4- and 8-bit precision) achieved remarkable performance in image recognition.[4]
In 2013, IBM developed Watson, a cognitive computer that uses neural networks and deep learning techniques.[5] The following year, it developed the 2014 TrueNorth microchip architecture,[6] which is designed to be closer in structure to the human brain than the von Neumann architecture used in conventional computers.[1] In 2017, Intel also announced its version of a cognitive chip in Loihi, which it intended to be available to university and research labs in 2018. Intel (most notably with its Pohoiki Beach and Springs systems[7][8]), Qualcomm, and others are improving neuromorphic processors steadily.
TrueNorth was a neuromorphic CMOS integrated circuit produced by IBM in 2014.[9] It is a manycore processor network on a chip design, with 4096 cores, each one having 256 programmable simulated neurons for a total of just over a million neurons. In turn, each neuron has 256 programmable "synapses" that convey the signals between them. Hence, the total number of programmable synapses is just over 268 million (2^28). Its basic transistor count is 5.4 billion.
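The stated totals follow from simple arithmetic on the per-core figures; a quick check using only the numbers given above:

```python
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256

neurons = cores * neurons_per_core        # "just over a million"
synapses = neurons * synapses_per_neuron  # "just over 268 million"

assert neurons == 2**20 == 1_048_576
assert synapses == 2**28 == 268_435_456
```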
Because memory, computation, and communication are handled in each of the 4096 neurosynaptic cores, TrueNorth circumvents the von Neumann-architecture bottleneck and is very energy-efficient, with IBM claiming a power consumption of 70 milliwatts and a power density that is 1/10,000th of conventional microprocessors.[10] The SyNAPSE chip operates at lower temperatures and power because it only draws power necessary for computation.[11] Skyrmions have been proposed as models of the synapse on a chip.[12][13]
The neurons are emulated using a Linear-Leak Integrate-and-Fire (LLIF) model, a simplification of the leaky integrate-and-fire model.[14]
According to IBM, it does not have a clock,[15] operates on unary numbers, and computes by counting to a maximum of 19 bits.[6][16] The cores are event-driven by using both synchronous and asynchronous logic, and are interconnected through an asynchronous packet-switched mesh network on chip (NOC).[16]
IBM developed a new ecosystem to program and use TrueNorth. It included a simulator, a new programming language, an integrated programming environment, and libraries.[15] This lack of backward compatibility with any previous technology (e.g., C++ compilers) poses serious vendor lock-in risks and other adverse consequences that may prevent it from commercialization in the future.[15][failed verification]
In 2018, a network-linked cluster of TrueNorth chips attached to a master computer was used in stereo vision research that attempted to extract the depth of rapidly moving objects in a scene.[17]
In 2023, IBM released its NorthPole chip, which is a proof-of-concept for dramatically improving performance by intertwining compute with memory on-chip, thus eliminating the von Neumann bottleneck. It blends approaches from IBM's 2014 TrueNorth system with modern hardware designs to achieve speeds about 4,000 times faster than TrueNorth. It can run ResNet-50 or Yolo-v4 image recognition tasks about 22 times faster, with 25 times less energy and 5 times less space, when compared to GPUs which use the same 12-nm node process that it was fabricated with. It includes 224 MB of RAM and 256 processor cores and can perform 2,048 operations per core per cycle at 8-bit precision, and 8,192 operations at 2-bit precision. It runs at between 25 and 425 MHz.[4][18][19][20] This is an inferencing chip, but it cannot yet handle GPT-4 because of memory and accuracy limitations.[21]
Pohoiki Springs is a system that incorporates Intel's self-learning neuromorphic chip, named Loihi, introduced in 2017, perhaps named after the Hawaiian seamount Lōʻihi. Intel claims Loihi is about 1000 times more energy efficient than general-purpose computing systems used to train neural networks. In theory, Loihi supports both machine learning training and inference on the same silicon independently of a cloud connection, and more efficiently than convolutional neural networks or deep learning neural networks. Intel points to a system for monitoring a person's heartbeat, taking readings after events such as exercise or eating, and using the chip to normalize the data and work out the 'normal' heartbeat. It can then spot abnormalities and deal with new events or conditions.
The first iteration of the chip was made using Intel's 14 nm fabrication process and houses 128 clusters of 1,024 artificial neurons each for a total of 131,072 simulated neurons.[22] This offers around 130 million synapses, far less than the human brain's 800 trillion synapses, and behind IBM's TrueNorth.[23] Loihi is available for research purposes among more than 40 academic research groups as a USB form factor.[24][25]
In October 2019, researchers from Rutgers University published a research paper to demonstrate the energy efficiency of Intel's Loihi in solving simultaneous localization and mapping.[26]
In March 2020, Intel and Cornell University published a research paper to demonstrate the ability of Intel's Loihi to recognize different hazardous materials, which could eventually aid to "diagnose diseases, detect weapons and explosives, find narcotics, and spot signs of smoke and carbon monoxide".[27]
Intel's Loihi 2, named Pohoiki Beach, was released in September 2021 with 64 cores.[28]It boasts faster speeds, higher-bandwidth inter-chip communications for enhanced scalability, increased capacity per chip, a more compact size due to process scaling, and improved programmability.[29]
Hala Point packages 1,152 Loihi 2 processors produced on Intel 3 process node in a six-rack-unit chassis. The system supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores, consuming 2,600 watts of power. It includes over 2,300 embedded x86 processors for ancillary computations.
Intel claimed in 2024 that Hala Point was the world's largest neuromorphic system. It uses Loihi 2 chips. It is claimed to offer 10x more neuron capacity and up to 12x higher performance than its predecessor, Pohoiki Springs.
Hala Point provides up to 20 quadrillion operations per second (20 petaops), with efficiency exceeding 15 trillion 8-bit operations per second per watt on conventional deep neural networks.
Hala Point integrates processing, memory and communication channels in a massively parallelized fabric, providing 16 PB/s of memory bandwidth, 3.5 PB/s of inter-core communication bandwidth, and 5 TB/s of inter-chip bandwidth.
The system can process its 1.15 billion neurons 20 times faster than a human brain. Its neuron capacity is roughly equivalent to that of an owl brain or the cortex of a capuchin monkey.
Loihi-based systems can perform inference and optimization using 100 times less energy at speeds as much as 50 times faster than CPU/GPU architectures.
Intel claims that Hala Point can create LLMs, but this has not yet been demonstrated.[30] Much further research is needed.[21]
SpiNNaker (Spiking Neural Network Architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group at the Department of Computer Science, University of Manchester.[31]
Critics argue that a room-sized computer – as in the case of IBM's Watson – is not a viable alternative to a three-pound human brain.[32] Some also cite the difficulty for a single system to bring so many elements together, such as the disparate sources of information as well as computing resources.[33]
In 2021, The New York Times released Steve Lohr's article "What Ever Happened to IBM's Watson?".[34] He wrote about some costly failures of IBM Watson. One of them, a cancer-related project called the Oncology Expert Advisor,[35] was abandoned in 2016 as a costly failure. During the collaboration, Watson could not use patient data. Watson struggled to decipher doctors' notes and patient histories.
|
https://en.wikipedia.org/wiki/Cognitive_computer
|
In computing, decimal32 is a decimal floating-point computer numbering format that occupies 4 bytes (32 bits) in computer memory.
Like the binary16 and binary32 formats, decimal32 uses less space than binary64, currently the most common format.
decimal32 supports 'normal' values, which can have 7-digit precision from ±1.000000×10^−95 up to ±9.999999×10^+96, plus 'subnormal' values with ramp-down relative precision down to ±1.×10^−101 (one digit), signed zeros, signed infinities and NaN (Not a Number). The encoding is somewhat complex; see below.
The binary format with the same bit size, binary32, has an approximate range from the subnormal minimum ±1×10^−45, over the normal minimum with full 24-bit precision ±1.1754944×10^−38, to the maximum ±3.4028235×10^38.
decimal32 values are encoded in a 'not normalized' near-scientific format, combining some bits of the exponent with the leading bits of the significand in a 'combination field'.
Besides the special cases of infinities and NaNs, there are four points relevant to understanding the encoding of decimal32.
Both produce the same result [2019 version[1] of IEEE 754 in clause 3.3, page 18]; both apply to the BID as well as the DPD encoding. For decimalxxx datatypes the second view is more common, while for binaryxxx datatypes the first; the biases are different for each datatype.)
In all cases for decimal32, the value represented is
(−1)^sign × 10^(exponent−101) × significand,
with the significand read as the 7-digit decimal integer d0d−1d−2d−3d−4d−5d−6.
Alternatively it can be understood as (−1)^sign × 10^(exponent−95) × significand, with the significand digits understood as d0.d−1d−2d−3d−4d−5d−6; note the radix dot making it a fraction.
For ±Infinity, besides the sign bit, all the remaining bits are ignored (i.e., both the exponent and significand fields have no effect).
For NaNs the sign bit has no meaning in the standard, and is ignored. Therefore, signed and unsigned NaNs are equivalent, even though some programs will show NaNs as signed. The bit m5 determines whether the NaN is quiet (0) or signaling (1). The bits of the significand are the NaN's payload and can hold user-defined data (e.g., to distinguish how NaNs were generated). As for normal significands, the payload of NaNs can be either in BID or DPD encoding.
Be aware that the bit numbering used in the tables, e.g. m10 … m0, runs in the opposite direction from that used in the IEEE 754 standard document, G0 … G10.
The resulting 'raw' exponent is an 8-bit binary integer where the leading bits are not '11', thus values 0 … 10111111b = 0 … 191d, from which the appropriate bias is to be subtracted. The resulting significand could be a positive binary integer of 24 bits up to 1001 1111111111 1111111111b = 10485759d, but values above 10^7 − 1 = 9999999 = 98967Fh = 1001 1000 1001 0110 0111 1111b are 'illegal' and have to be treated as zeroes. To obtain the individual decimal digits, the significand has to be divided by 10 repeatedly.
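A minimal sketch of this BID decoding in Python, for finite values only (the combination patterns for infinities and NaNs described earlier are not handled); the function name is illustrative, and the bias of 101 for the integer-significand view follows IEEE 754:

```python
def decode_decimal32_bid(x):
    """Decode a 32-bit BID-encoded decimal32 into (sign, coefficient, exponent).

    Finite numbers only. The represented value is
    (-1)**sign * coefficient * 10**exponent.
    """
    sign = x >> 31
    if (x >> 29) & 0b11 == 0b11:            # large-coefficient form: implicit '100' prefix
        raw_exponent = (x >> 21) & 0xFF
        coefficient = (0b100 << 21) | (x & ((1 << 21) - 1))
    else:                                   # small-coefficient form: 23 coefficient bits
        raw_exponent = (x >> 23) & 0xFF
        coefficient = x & ((1 << 23) - 1)
    if coefficient > 9_999_999:             # 'illegal' encodings are treated as zero
        coefficient = 0
    return sign, coefficient, raw_exponent - 101
```

For example, the encoding 0x32800001 (biased exponent 101, coefficient 1) decodes to the value 1 × 10^0 = 1.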
The resulting 'raw' exponent is an 8-bit binary integer where the leading bits are not '11', thus values 0 … 10111111b = 0 … 191d, from which the appropriate bias is to be subtracted. The significand's leading decimal digit is formed from the (0)cde or 100e bits as a binary integer. The subsequent digits are encoded in the 10-bit 'declet' fields 'tttttttttt' according to the DPD rules (see below). The full decimal significand is then obtained by concatenating the leading and trailing decimal digits.
The 10-bit DPD to 3-digit BCD transcoding for the declets is given by the following table. b9 … b0 are the bits of the DPD, and d2 … d0 are the three BCD digits. Be aware that the bit numbering used here, e.g. b9 … b0, runs in the opposite direction from that used in the IEEE 754 standard document, b0 … b9; additionally, the decimal digits are numbered 0-based here, while they run in the opposite direction and are 1-based in the IEEE 754 document. The bits on a white background do not contribute to the value, but signal how to interpret/shift the other bits. The concept is to denote which digits are small (0 … 7) and encoded in three bits, and which are not, the latter formed from a prefix of '100' plus one bit specifying whether the digit is 8 or 9.
The 8 decimal values whose digits are all 8s or 9s have four codings each.
The bits marked x in the table above are ignored on input, but will always be 0 in computed results.
(The 8 × 3 = 24 non-standard encodings fill the gap between 10^3 = 1000 and 2^10 − 1 = 1023.)
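The transcoding rules described above can be written out as a small decoder (a sketch; bit numbering b9 … b0 as used here, with b9 most significant, and the function name is illustrative):

```python
def dpd_to_digits(declet):
    """Decode one 10-bit DPD declet into three decimal digits (d2, d1, d0)."""
    b = [(declet >> i) & 1 for i in range(10)]        # b[i] is bit b_i
    small = lambda i2, i1, i0: 4 * b[i2] + 2 * b[i1] + b[i0]
    if b[3] == 0:                                     # three small digits (0..7)
        return small(9, 8, 7), small(6, 5, 4), small(2, 1, 0)
    sel = (b[2], b[1])                                # which digit(s) are large (8 or 9)
    if sel == (0, 0):
        return small(9, 8, 7), small(6, 5, 4), 8 + b[0]
    if sel == (0, 1):
        return small(9, 8, 7), 8 + b[4], small(6, 5, 0)
    if sel == (1, 0):
        return 8 + b[7], small(6, 5, 4), small(9, 8, 0)
    sel2 = (b[6], b[5])                               # two or three large digits
    if sel2 == (0, 0):
        return 8 + b[7], 8 + b[4], small(9, 8, 0)
    if sel2 == (0, 1):
        return 8 + b[7], small(9, 8, 4), 8 + b[0]
    if sel2 == (1, 0):
        return small(9, 8, 7), 8 + b[4], 8 + b[0]
    return 8 + b[7], 8 + b[4], 8 + b[0]               # all three digits large
```

Every one of the 1024 declets decodes to a valid digit triple; exactly 1000 distinct triples occur, since the 8 all-large values have four codings each.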
The benefit of this encoding is access to individual digits by de-/encoding only 10 bits; the disadvantage is that some simple functions like sort and compare, very frequently used in coding, do not work on the bit pattern but require decoding to decimal digits (and possibly re-encoding to binary integers) first.
An alternate encoding in short BID sections (10-bit declets encoding 0d … 1023d and simply using only the range from 0 to 999) would provide the same functionality, direct access to digits by de-/encoding 10 bits, with a near-zero performance penalty in modern systems, and would preserve the option of bit-pattern-oriented sort and compare. But the 'Sudoku encoding' shown above was chosen historically, may provide better performance in hardware implementations, and now 'is as it is'.
decimal32 was introduced in the 2008 version[3] of IEEE 754, adopted by ISO as ISO/IEC/IEEE 60559:2011.[4]
DPD encoding is relatively efficient, not wasting more than about 2.4 percent of space vs. BID, because the 2^10 = 1024 possible values in 10 bits are only a little more than what is needed to encode all numbers from 0 to 999.
Zero has 192 possible representations (384 when both signed zeros are included).
The gain in range and precision by the 'combination encoding' arises because the two bits taken from the exponent use only three states, and the 4 MSBs of the significand stay within 0000 … 1001 (10 states). In total that is 3 × 10 = 30 possible states when combined in one encoding, which is representable in 5 bits (2^5 = 32).[clarification needed]
The decimal formats include denormal values, for a graceful degradation of precision near zero; but in contrast to the binary formats they are not marked and do not need a special exponent: in decimal32 they are just values too small to have full 7-digit precision even with the smallest exponent.[clarification needed]
In the cases of infinity and NaN, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to infinities or NaNs by filling it with a single byte value.[citation needed]
|
https://en.wikipedia.org/wiki/Decimal32_floating-point_format
|
BugMeNot is an Internet service that provides usernames and passwords allowing Internet users to bypass mandatory free registration on websites. It was started in August 2003 by an anonymous person, later revealed to be Guy King,[1] and allowed Internet users to access websites that have registration walls requiring compulsory registration (for instance, that of The New York Times). This came in response to the increasing number of websites that request such registration, which many Internet users find to be an annoyance and a potential source of email spam.[2]
BugMeNot allows users of their service to add new accounts for sites with free registration. It also encourages users to use disposable email address services to create such accounts. However, it does not allow them to add accounts for paid websites, as this could potentially lead to credit card fraud.[3] BugMeNot also claims to remove accounts for any website requesting that it not provide accounts for unregistered users.
To help make access to their service easier, BugMeNot hosts a bookmarklet that can be used with any browser to automatically find a usable account from their service. They also host extensions for the web browsers Mozilla Firefox (but not yet for Firefox Quantum), Internet Explorer, and Google Chrome (the extensions were created by Eric Hamiter with Dmytri Kleiner and Dean Wilson, respectively).[citation needed] There are also implementations in the form of a BugMeNot Opera widget, or UserJS scripts along with buttons, which makes it fully browser-integrated. An Android application is also available.[4]
BugMeNot provides an option for site owners to block their site from the BugMeNot database, if they match one or more of the following criteria:[5]
No option is provided for users to request removing a block if a site ceases to meet the blocking criteria or has never met them in the first place.
Site blocking can be circumvented by BugMeNot users by publishing usernames and passwords under a similar, but different, domain name to which they apply. For example, the owners of the domain abc.def.com might request a block to be put in place, but this will not prevent users uploading access information under the name of def.abc.com. Since one domain owner cannot demand that another domain be blocked, the information remains accessible, provided that BugMeNot users tacitly agree that def.abc.com in fact refers to abc.def.com.[original research?] For example, Wikipedia logins are in the database under wikipedia.net because wikipedia.com and wikipedia.org have been banned under the first criterion.[6]
Nearly a year after it was created, BugMeNot was shut down temporarily by its service provider (at that time), HostGator. The site's creator claimed BugMeNot's host was pressured by websites to shut them down, though HostGator claimed that the BugMeNot site was repeatedly crashing their servers.[7]
The BugMeNot domain was transferred briefly to another hosting company, dissidenthosting.com, but before the site was set up, it began to redirect visitors to web pages belonging to racist groups, without the knowledge or consent of the site's owner. BugMeNot moved again, to NearlyFreeSpeech.NET. BugMeNot's move to this provider, which also hosts a number of highly controversial sites, prompted BugMeNot's creator to say, "Personally, I don't care if I'm sharing a server with neo-Nazis. I might not agree with what they have to say, but the whole thing about freedom of speech is that people are free to speak."[8]
Shortly after BugMeNot returned, reports surfaced that some news sites had begun to attempt to block accounts posted on BugMeNot, though the extent and effectiveness of such efforts, as well as compliance with BugMeNot's Terms of Use,[9]are not known.
The operators of BugMeNot expanded the "MeNot" network in October 2006 with the addition of RetailMeNot – a service for finding and sharing online coupon codes. Users can add coupons they have found through any method, as well as a description of the coupon and an expiration date. Users can also scan in printed coupons and upload them for others to print.
|
https://en.wikipedia.org/wiki/BugMeNot
|
The following is a list of AMD CPU microarchitectures.
Historically, AMD's CPU families were given a "K-number" (which originally stood for Kryptonite,[1] an allusion to the Superman comic book character's fatal weakness) starting with their first internal x86 CPU design, the K5, to represent generational changes. AMD has not used K-nomenclature codenames in official AMD documents and press releases since the beginning of 2005, when K8 described the Athlon 64 processor family. AMD now refers to the codename K8 processors as the Family 0Fh processors. 10h and 0Fh refer to the main result of the CPUID x86 processor instruction. In hexadecimal numbering, 0F(h) (where the h represents hexadecimal numbering) equals the decimal number 15, and 10(h) equals the decimal number 16. (The "K10h" form that sometimes pops up is an improper hybrid of the "K" code and Family XXh identifier number.)
The Family hexadecimal identifier number can be determined for a particular processor using the freeware system profiling application CPU-Z, which shows the Family number in the Ext. Family field of the application, as can be seen on various screenshots on the CPU-Z Validator World Records website.
Below is a list of microarchitectures, many of which have associated codenames:[2]
|
https://en.wikipedia.org/wiki/List_of_AMD_CPU_microarchitectures
|
Mobile RFID (M-RFID) refers to services that provide information on objects equipped with an RFID tag over a telecommunication network.[1] The reader or interrogator can be installed in a mobile device such as a mobile phone or PDA.[2]
Unlike ordinary fixed RFID, mobile RFID readers are mobile and the tags fixed, instead of the other way around. The advantages of M-RFID over RFID include the absence of wires to fixed readers and the ability of a small number of mobile readers to cover a large area, instead of dozens of fixed readers.[3]
The main focus is on supporting supply chain management, but this application has also found its way into m-commerce.[citation needed] A customer in a supermarket can scan the Electronic Product Code from the tag and connect via the Internet to get more information.[citation needed]
ISO/IEC 29143 "Information technology — Automatic Identification and Data Capture Technique — Air Interface specification for Mobile RFID interrogator"[4]is the first standard to be developed for Mobile RFID.[citation needed]
|
https://en.wikipedia.org/wiki/Mobile_RFID
|
In number theory, Euler's criterion is a formula for determining whether an integer is a quadratic residue modulo a prime. Precisely,
let p be an odd prime and a be an integer coprime to p. Then[1][2][3]
a^((p−1)/2) ≡ 1 (mod p) if there is an integer x such that x^2 ≡ a (mod p), and a^((p−1)/2) ≡ −1 (mod p) otherwise.
Euler's criterion can be concisely reformulated using the Legendre symbol:[4]
(a/p) ≡ a^((p−1)/2) (mod p).
The criterion dates from a 1748 paper by Leonhard Euler.[5][6]
The proof uses the fact that the residue classes modulo a prime number are a field. See the article prime field for more details.
Because the modulus is prime, Lagrange's theorem applies: a polynomial of degree k can have at most k roots. In particular, x^2 ≡ a (mod p) has at most 2 solutions for each a. This immediately implies that besides 0 there are at least (p − 1)/2 distinct quadratic residues modulo p: each of the p − 1 possible values of x can only be accompanied by one other to give the same residue.
In fact, (p − x)^2 ≡ x^2 (mod p), because (p − x)^2 = p^2 − 2xp + x^2 ≡ x^2 (mod p). So the (p − 1)/2 distinct quadratic residues are 1^2, 2^2, …, ((p − 1)/2)^2 (mod p).
As a is coprime to p, Fermat's little theorem says that
a^(p−1) ≡ 1 (mod p),
which can be written as
(a^((p−1)/2) − 1)(a^((p−1)/2) + 1) ≡ 0 (mod p).
Since the integers mod p form a field, for each a, one or the other of these factors must be zero. Therefore, either
a^((p−1)/2) ≡ 1 (mod p) or a^((p−1)/2) ≡ −1 (mod p).
Now if a is a quadratic residue, a ≡ x^2 (mod p), then
a^((p−1)/2) ≡ (x^2)^((p−1)/2) = x^(p−1) ≡ 1 (mod p).
So every quadratic residue (modp) makes the first factor zero.
Applying Lagrange's theorem again, we note that there can be no more than (p − 1)/2 values of a that make the first factor zero. But as we noted at the beginning, there are at least (p − 1)/2 distinct quadratic residues (mod p) (besides 0). Therefore, they are precisely the residue classes that make the first factor zero. The other (p − 1)/2 residue classes, the nonresidues, must make the second factor zero, or they would not satisfy Fermat's little theorem. This is Euler's criterion.
This proof only uses the fact that any congruence kx ≡ l (mod p) has a unique solution x (modulo p) provided p does not divide k. (This is true because as x runs through all nonzero remainders modulo p without repetitions, so does kx: if we have kx1 ≡ kx2 (mod p), then p ∣ k(x1 − x2), hence p ∣ (x1 − x2), but x1 and x2 aren't congruent modulo p.) It follows from this fact that all nonzero remainders modulo p whose square isn't congruent to a can be grouped into unordered pairs (x, y) according to the rule that the product of the members of each pair is congruent to a modulo p (since by this fact for every y we can find such an x, uniquely, and vice versa, and they will differ from each other if y^2 is not congruent to a). If a is not a quadratic residue, this is simply a regrouping of all p − 1 nonzero residues into (p − 1)/2 pairs, hence we conclude that 1 · 2 · … · (p − 1) ≡ a^((p−1)/2) (mod p). If a is a quadratic residue, exactly two remainders were not among those paired, r and −r, such that r^2 ≡ a (mod p). If we pair those two absent remainders together, their product will be −a rather than a, whence in this case 1 · 2 · … · (p − 1) ≡ −a^((p−1)/2) (mod p).
In summary, considering these two cases we have demonstrated that for a ≢ 0 (mod p) we have 1 · 2 · … · (p − 1) ≡ −(a/p) a^((p−1)/2) (mod p). It remains to substitute a = 1 (which is obviously a square) into this formula to obtain at once Wilson's theorem, Euler's criterion, and (by squaring both sides of Euler's criterion) Fermat's little theorem.
Example 1: Finding primes for which a is a residue
Let a = 17. For which primes p is 17 a quadratic residue?
We can test primes p manually given the formula above.
In one case, testing p = 3, we have 17^((3 − 1)/2) = 17^1 ≡ 2 ≡ −1 (mod 3), therefore 17 is not a quadratic residue modulo 3.
In another case, testing p = 13, we have 17^((13 − 1)/2) = 17^6 ≡ 1 (mod 13), therefore 17 is a quadratic residue modulo 13. As confirmation, note that 17 ≡ 4 (mod 13), and 2^2 = 4.
We can do these calculations faster by using various modular arithmetic and Legendre symbol properties.
If we keep calculating the values, we find:
Example 2: Finding residues given a prime modulus p
Which numbers are squares modulo 17 (quadratic residues modulo 17)?
We can manually calculate it as:
So the set of the quadratic residues modulo 17 is {1, 2, 4, 8, 9, 13, 15, 16}. Note that we did not need to calculate squares for the values 9 through 16, as they are all negatives of the previously squared values (e.g. 9 ≡ −8 (mod 17), so 9^2 ≡ (−8)^2 = 64 ≡ 13 (mod 17)).
We can find quadratic residues or verify them using the above formula. To test if 2 is a quadratic residue modulo 17, we calculate 2^((17 − 1)/2) = 2^8 ≡ 1 (mod 17), so it is a quadratic residue. To test if 3 is a quadratic residue modulo 17, we calculate 3^((17 − 1)/2) = 3^8 ≡ 16 ≡ −1 (mod 17), so it is not a quadratic residue.
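These checks are easy to reproduce with modular exponentiation; Python's built-in three-argument pow computes a^((p−1)/2) mod p directly:

```python
def euler_criterion(a, p):
    """Return a^((p-1)/2) mod p for an odd prime p and gcd(a, p) = 1.

    The result is 1 when a is a quadratic residue mod p,
    and p - 1 (i.e. -1 mod p) when it is not.
    """
    return pow(a, (p - 1) // 2, p)

# The examples above:
assert euler_criterion(17, 3) == 3 - 1    # 17 is not a residue mod 3
assert euler_criterion(17, 13) == 1       # 17 ≡ 4 ≡ 2^2 (mod 13)
assert euler_criterion(2, 17) == 1        # 2 is a residue mod 17
assert euler_criterion(3, 17) == 17 - 1   # 3 is not

# The residues mod 17 found by squaring 1..8 agree with the criterion:
residues = {pow(x, 2, 17) for x in range(1, 9)}
assert residues == {1, 2, 4, 8, 9, 13, 15, 16}
assert all(euler_criterion(a, 17) == (1 if a in residues else 16)
           for a in range(1, 17))
```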
Euler's criterion is related to the law of quadratic reciprocity.
In practice, it is more efficient to use an extended variant of Euclid's algorithm to calculate the Jacobi symbol (a/n). If n is an odd prime, this is equal to the Legendre symbol, and decides whether a is a quadratic residue modulo n.
On the other hand, since the equivalence of a^((n−1)/2) to the Jacobi symbol holds for all odd primes, but not necessarily for composite numbers, calculating both and comparing them can be used as a primality test, specifically the Solovay–Strassen primality test. Composite numbers for which the congruence holds for a given a are called Euler–Jacobi pseudoprimes to base a.
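A sketch of the Euclid-style Jacobi symbol computation and the resulting one-round Solovay–Strassen check (function names are illustrative):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via Euclid-style reduction."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:              # factor out 2s: (2/n) = (-1)^((n^2-1)/8)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                    # quadratic reciprocity for odd a, n
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def euler_jacobi_test(n, a):
    """One round of Solovay-Strassen: does a^((n-1)/2) ≡ (a/n) (mod n)?"""
    return pow(a, (n - 1) // 2, n) == jacobi(a, n) % n
```

For example, the Carmichael number 561 satisfies the congruence to base 2 (an Euler–Jacobi pseudoprime), while 15 fails it and is thereby exposed as composite.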
The Disquisitiones Arithmeticae has been translated from Gauss's Ciceronian Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.
|
https://en.wikipedia.org/wiki/Euler%27s_criterion
|
The use of evidence under Bayes' theorem relates to the probability of finding evidence in relation to the accused, where Bayes' theorem concerns the probability of an event and its inverse. Specifically, it compares the probability of finding particular evidence if the accused were guilty, versus if they were not guilty. An example would be the probability of finding a person's hair at the scene, if guilty, versus if just passing through the scene. Another issue would be finding a person's DNA where they lived, regardless of committing a crime there.
Among evidence scholars, the study of evidence in recent decades has become broadly interdisciplinary, incorporating insights from psychology, economics, and probability theory. One area of particular interest and controversy has been Bayes' theorem.[1] Bayes' theorem is an elementary proposition of probability theory. It provides a way of updating, in light of new information, one's probability that a proposition is true. Evidence scholars have been interested in its application to their field, either to study the value of rules of evidence, or to help determine facts at trial.
Suppose that the proposition to be proven is that the defendant was the source of a hair found at the crime scene. Before learning that the hair was a genetic match for the defendant's hair, the factfinder believes that the odds are 2 to 1 that the defendant was the source of the hair. If they used Bayes' theorem, they could multiply those prior odds by a "likelihood ratio" in order to update the odds after learning that the hair matched the defendant's hair. The likelihood ratio is a statistic derived by comparing the odds that the evidence (expert testimony of a match) would be found if the defendant was the source with the odds that it would be found if the defendant was not the source. If it is ten times more likely that the testimony of a match would occur if the defendant was the source than if not, then the factfinder should multiply their prior odds by ten, giving posterior odds of 20 to 1.
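The odds-form update in the hair example is a one-line computation; a small sketch (function names are illustrative):

```python
def update_odds(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    """Convert odds (e.g. 20, meaning 20-to-1) to a probability."""
    return odds / (1 + odds)

# The hair example: prior odds 2:1, likelihood ratio 10 -> posterior odds 20:1.
posterior = update_odds(2, 10)                 # 20
probability = odds_to_probability(posterior)   # 20/21, about 0.952
```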
Bayesian skeptics have objected to this use of Bayes’ theorem in litigation on a variety of grounds. These run from jury confusion and computational complexity to the assertion that standard probability theory is not a normatively satisfactory basis for adjudication of rights.
Bayesian enthusiasts have replied on two fronts. First, they have said that whatever its value in litigation, Bayes' theorem is valuable in studying evidence rules. For example, it can be used to model relevance. It teaches that the relevance of evidence that a proposition is true depends on how much the evidence changes the prior odds, and that how much it changes the prior odds depends on how likely the evidence would be found (or not) if the proposition were true. These basic insights are also useful in studying individual evidence rules, such as the rule allowing witnesses to be impeached with prior convictions.
Second, they have said that it is practical to use Bayes' theorem in a limited set of circumstances in litigation (such as integrating genetic match evidence with other evidence), and that assertions that probability theory is inappropriate for judicial determinations are nonsensical or inconsistent.
Some observers believe that in recent years (i) the debate about probabilities has become stagnant, (ii) the protagonists in the probabilities debate have been talking past each other, (iii) not much is happening at the high-theory level, and (iv) the most interesting work is in the empirical study of the efficacy of instructions on Bayes' theorem in improving jury accuracy. However, it is possible that this skepticism about the probabilities debate in law rests on observations of the arguments made by familiar protagonists in the legal academy. In fields outside of law, work on formal theories relating to uncertainty continues unabated. One important development has been the work on "soft computing" such as has been carried on, for example, at Berkeley under Lotfi Zadeh's BISC (Berkeley Initiative in Soft Computing). Another example is the increasing amount of work, by people both in and outside law, on "argumentation" theory. Also, work on Bayes nets continues. Some of this work is beginning to filter into legal circles. See, for example, the many papers on formal approaches to uncertainty (including Bayesian approaches) in the Oxford journal Law, Probability and Risk.
There are some famous cases where Bayes' theorem can be applied.
|
https://en.wikipedia.org/wiki/Evidence_under_Bayes%27_theorem
|
In non-parametric statistics, the Theil–Sen estimator is a method for robustly fitting a line to sample points in the plane (simple linear regression) by choosing the median of the slopes of all lines through pairs of points. It has also been called Sen's slope estimator,[1][2] slope selection,[3][4] the single median method,[5] the Kendall robust line-fit method,[6] and the Kendall–Theil robust line.[7] It is named after Henri Theil and Pranab K. Sen, who published papers on this method in 1950 and 1968 respectively,[8] and after Maurice Kendall because of its relation to the Kendall tau rank correlation coefficient.[9]
Theil–Sen regression has several advantages over ordinary least squares regression. It is insensitive to outliers. It can be used for significance tests even when residuals are not normally distributed.[10] It can be significantly more accurate than non-robust simple linear regression (least squares) for skewed and heteroskedastic data, and competes well against least squares even for normally distributed data in terms of statistical power.[11] It has been called "the most popular nonparametric technique for estimating a linear trend".[2] There are fast algorithms for efficiently computing the parameters.
As defined by Theil (1950), the Theil–Sen estimator of a set of two-dimensional points (xi, yi) is the median m of the slopes (yj − yi)/(xj − xi) determined by all pairs of sample points. Sen (1968) extended this definition to handle the case in which two data points have the same x coordinate. In Sen's definition, one takes the median of the slopes defined only from pairs of points having distinct x coordinates.[8]
Once the slope m has been determined, one may determine a line from the sample points by setting the y-intercept b to be the median of the values yi − mxi. The fit line is then the line y = mx + b with coefficients m and b in slope–intercept form.[12] As Sen observed, this choice of slope makes the Kendall tau rank correlation coefficient become approximately zero when it is used to compare the values xi with their associated residuals yi − mxi − b. Intuitively, this suggests that how far the fit line passes above or below a data point is not correlated with whether that point is on the left or right side of the data set. The choice of b does not affect the Kendall coefficient, but causes the median residual to become approximately zero; that is, the fit line passes above and below equal numbers of points.[9]
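The definition above translates directly into code. The following is a minimal sketch (the function name and example points are illustrative, not from the source) that takes the median slope over all pairs with distinct x coordinates (Sen's variant) and then the median intercept:

```python
from itertools import combinations
from statistics import median

def theil_sen(points):
    """Theil-Sen fit: median slope over all pairs of points with
    distinct x coordinates, then intercept b = median(y_i - m*x_i)."""
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(points, 2)
              if x1 != x2]
    m = median(slopes)
    b = median(y - m * x for (x, y) in points)
    return m, b

# Points near y = 2x + 1 with one gross outlier; the fit ignores it.
pts = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 100)]
m, b = theil_sen(pts)
print(m, b)  # 2.0 1.0
```

Note how the outlier at (4, 100) leaves both the median slope and the median intercept unchanged, which is exactly the robustness property described above.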
A confidence interval for the slope estimate may be determined as the interval containing the middle 95% of the slopes of lines determined by pairs of points,[13] and may be estimated quickly by sampling pairs of points and determining the 95% interval of the sampled slopes. According to simulations, approximately 600 sample pairs are sufficient to determine an accurate confidence interval.[11]
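The sampling procedure can be sketched as follows (a hypothetical helper, assuming the ~600-pair figure quoted above; the seed and function name are illustrative):

```python
import random

def slope_conf_interval(points, n_pairs=600, seed=0):
    """Approximate 95% confidence interval for the Theil-Sen slope
    by sampling random pairs of points and taking the middle 95%
    of the sampled slopes."""
    rng = random.Random(seed)
    slopes = []
    while len(slopes) < n_pairs:
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 != x2:  # skip pairs with equal x coordinates
            slopes.append((y2 - y1) / (x2 - x1))
    slopes.sort()
    lo = slopes[int(0.025 * n_pairs)]
    hi = slopes[int(0.975 * n_pairs) - 1]
    return lo, hi

# For noiseless data on y = 2x the interval collapses to a point.
pts = [(i, 2 * i) for i in range(50)]
print(slope_conf_interval(pts))  # (2.0, 2.0)
```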
A variation of the Theil–Sen estimator, the repeated median regression of Siegel (1982), determines for each sample point (xi, yi) the median mi of the slopes (yj − yi)/(xj − xi) of lines through that point, and then determines the overall estimator as the median of these medians. It can tolerate a greater number of outliers than the Theil–Sen estimator, but known algorithms for computing it efficiently are more complicated and less practical.[14]
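Siegel's median-of-medians construction can be sketched in a few lines (function names are illustrative; this is the brute-force definition, not an efficient algorithm):

```python
from statistics import median

def repeated_median_slope(points):
    """Siegel's repeated median: for each point, take the median slope
    of the lines through it and every other point with a distinct x
    coordinate, then take the median of those per-point medians."""
    def slopes_through(i):
        xi, yi = points[i]
        return [(yj - yi) / (xj - xi)
                for j, (xj, yj) in enumerate(points) if xj != xi]
    return median(median(slopes_through(i)) for i in range(len(points)))

pts = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 100)]
print(repeated_median_slope(pts))  # 2.0
```

Because each point first contributes only the median of its own slopes, a single outlier corrupts at most one of the per-point medians, which is the intuition behind the higher breakdown point mentioned below.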
A different variant pairs up sample points by the rank of their x-coordinates: the point with the smallest coordinate is paired with the first point above the median coordinate, the second-smallest point is paired with the next point above the median, and so on. It then computes the median of the slopes of the lines determined by these pairs of points, gaining speed by examining significantly fewer pairs than the Theil–Sen estimator.[15]
Variations of the Theil–Sen estimator based on weighted medians have also been studied, based on the principle that pairs of samples whose x-coordinates differ more greatly are more likely to have an accurate slope and therefore should receive a higher weight.[16]
For seasonal data, it may be appropriate to smooth out seasonal variations in the data by considering only pairs of sample points that both belong to the same month or the same season of the year, and finding the median of the slopes of the lines determined by this more restrictive set of pairs.[17]
The Theil–Sen estimator is an unbiased estimator of the true slope in simple linear regression.[18] For many distributions of the response error, this estimator has high asymptotic efficiency relative to least-squares estimation.[19] Estimators with low efficiency require more independent observations to attain the same sample variance as efficient unbiased estimators.
The Theil–Sen estimator is more robust than the least-squares estimator because it is much less sensitive to outliers. It has a breakdown point of 1 − 1/√2 ≈ 29.3%, meaning that it can tolerate arbitrary corruption of up to 29.3% of the input data points without degradation of its accuracy.[12] However, the breakdown point decreases for higher-dimensional generalizations of the method.[20] A higher breakdown point, 50%, holds for a different robust line-fitting algorithm, the repeated median estimator of Siegel.[12]
The Theil–Sen estimator is equivariant under every linear transformation of its response variable, meaning that transforming the data first and then fitting a line, or fitting a line first and then transforming it in the same way, both produce the same result.[21] However, it is not equivariant under affine transformations of both the predictor and response variables.[20]
The median slope of a set of n sample points may be computed exactly by computing all O(n^2) lines through pairs of points, and then applying a linear time median finding algorithm. Alternatively, it may be estimated by sampling pairs of points. This problem is equivalent, under projective duality, to the problem of finding the crossing point in an arrangement of lines that has the median x-coordinate among all such crossing points.[22]
The problem of performing slope selection exactly but more efficiently than the brute force quadratic time algorithm has been extensively studied in computational geometry. Several different methods are known for computing the Theil–Sen estimator exactly in O(n log n) time, either deterministically[3] or using randomized algorithms.[4] Siegel's repeated median estimator can also be constructed in the same time bound.[23] In models of computation in which the input coordinates are integers and in which bitwise operations on integers take constant time, the Theil–Sen estimator can be constructed even more quickly, in randomized expected time O(n √(log n)).[24]
An estimator for the slope with approximately median rank, having the same breakdown point as the Theil–Sen estimator, may be maintained in the data stream model (in which the sample points are processed one by one by an algorithm that does not have enough persistent storage to represent the entire data set) using an algorithm based on ε-nets.[25]
In the R statistics package, both the Theil–Sen estimator and Siegel's repeated median estimator are available through the mblm library.[26] A free standalone Visual Basic application for Theil–Sen estimation, KTRLine, has been made available by the US Geological Survey.[27] The Theil–Sen estimator has also been implemented in Python as part of the SciPy and scikit-learn libraries.[28]
Theil–Sen estimation has been applied to astronomy due to its ability to handle censored regression models.[29] In biophysics, Fernandes & Leblanc (2005) suggest its use for remote sensing applications such as the estimation of leaf area from reflectance data due to its "simplicity in computation, analytical estimates of confidence intervals, robustness to outliers, testable assumptions regarding residuals and ... limited a priori information regarding measurement errors".[30] For measuring seasonal environmental data such as water quality, a seasonally adjusted variant of the Theil–Sen estimator has been proposed as preferable to least squares estimation due to its high precision in the presence of skewed data.[17] In computer science, the Theil–Sen method has been used to estimate trends in software aging.[31] In meteorology and climatology, it has been used to estimate the long-term trends of wind occurrence and speed.[32]
|
https://en.wikipedia.org/wiki/Theil%E2%80%93Sen_estimator
|
A decision tree is a decision support recursive partitioning structure that uses a tree-like model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm that only contains conditional control statements.
Decision trees are commonly used in operations research, specifically in decision analysis,[1] to help identify a strategy most likely to reach a goal, but are also a popular tool in machine learning.
A decision tree is a flowchart-like structure in which each internal node represents a test on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label (the decision taken after computing all attributes). The paths from root to leaf represent classification rules.
In decision analysis, a decision tree and the closely related influence diagram are used as a visual and analytical decision support tool, where the expected values (or expected utility) of competing alternatives are calculated.
A decision tree consists of three types of nodes:[2] decision nodes (typically represented by squares), chance nodes (typically represented by circles), and end nodes (typically represented by triangles).
Decision trees are commonly used in operations research and operations management. If, in practice, decisions have to be taken online with no recall under incomplete knowledge, a decision tree should be paralleled by a probability model as a best choice model or online selection model algorithm.[citation needed] Another use of decision trees is as a descriptive means for calculating conditional probabilities.
Decision trees, influence diagrams, utility functions, and other decision analysis tools and methods are taught to undergraduate students in schools of business, health economics, and public health, and are examples of operations research or management science methods. These tools are also used to predict decisions of householders in normal and emergency scenarios.[3][4]
Drawn from left to right, a decision tree has only burst nodes (splitting paths) but no sink nodes (converging paths), so when used manually it can grow very large and is then often hard to draw fully by hand. Traditionally, decision trees have been created manually – as the aside example shows – although increasingly, specialized software is employed.
The decision tree can be linearized into decision rules,[5] where the outcome is the contents of the leaf node, and the conditions along the path form a conjunction in the if clause. In general, the rules have the form: if condition1 and condition2 and condition3 then outcome.
Decision rules can be generated by constructing association rules with the target variable on the right. They can also denote temporal or causal relations.[6]
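To illustrate the linearization, here is a toy tree written as nested conditionals alongside its flattened rule form (the attribute names and outcomes are invented for illustration; each rule's condition is the conjunction of tests along one root-to-leaf path):

```python
def classify(outlook, humidity, wind):
    """A small decision tree written directly as nested conditionals."""
    if outlook == "sunny":
        if humidity == "high":
            return "stay in"
        return "play"
    if outlook == "rain":
        if wind == "strong":
            return "stay in"
        return "play"
    return "play"  # the "overcast" branch

# The same tree linearized into (condition, outcome) decision rules;
# each dict is the conjunction of tests along one root-to-leaf path.
rules = [
    ({"outlook": "sunny", "humidity": "high"}, "stay in"),
    ({"outlook": "sunny", "humidity": "normal"}, "play"),
    ({"outlook": "rain", "wind": "strong"}, "stay in"),
    ({"outlook": "rain", "wind": "weak"}, "play"),
    ({"outlook": "overcast"}, "play"),
]

print(classify("sunny", "high", "weak"))  # stay in
```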
Commonly a decision tree is drawn using flowchart symbols, as it is easier for many to read and understand. Note there is a conceptual error in the "Proceed" calculation of the tree shown below; the error relates to the calculation of "costs" awarded in a legal action.
Analysis can take into account the decision maker's (e.g., the company's) preference or utility function, for example:
The basic interpretation in this situation is that the company prefers B's risk and payoffs under realistic risk preference coefficients (greater than $400K—in that range of risk aversion, the company would need to model a third strategy, "Neither A nor B").
Another example, commonly used in operations research courses, is the distribution of lifeguards on beaches (a.k.a. the "Life's a Beach" example).[7] The example describes two beaches with lifeguards to be distributed on each beach. There is a maximum budget B that can be distributed among the two beaches (in total), and using a marginal returns table, analysts can decide how many lifeguards to allocate to each beach.
In this example, a decision tree can be drawn to illustrate the principles of diminishing returns on beach #1.
The decision tree illustrates that when sequentially distributing lifeguards, placing a first lifeguard on beach #1 would be optimal if there is only the budget for one lifeguard. But if there is a budget for two guards, then placing both on beach #2 would prevent more overall drownings.
Much of the information in a decision tree can be represented more compactly as an influence diagram, focusing attention on the issues and relationships between events.
Decision trees can also be seen as generative models of induction rules from empirical data. An optimal decision tree is then defined as a tree that accounts for most of the data, while minimizing the number of levels (or "questions").[8] Several algorithms to generate such optimal trees have been devised, such as ID3/4/5,[9] CLS, ASSISTANT, and CART.
Among decision support tools, decision trees (and influence diagrams) have several advantages. Decision trees:
Disadvantages of decision trees:
A few things should be considered when improving the accuracy of the decision tree classifier. The following are some possible optimizations to help ensure the decision tree model produced makes the correct decision or classification; they are not the only considerations, but they are among the most important.
The accuracy of the decision tree can change based on the depth of the decision tree. In many cases, the tree's leaves are pure nodes.[11] When a node is pure, it means that all the data in that node belongs to a single class.[12] For example, if the classes in the data set are Cancer and Non-Cancer, a leaf node would be considered pure when all the sample data in that node is part of only one class, either cancer or non-cancer. It is important to note that a deeper tree is not always better when optimizing the decision tree. A deeper tree can influence the runtime in a negative way: if a certain classification algorithm is being used, then a deeper tree could make that algorithm significantly slower, and the algorithm building the decision tree itself may also get significantly slower as the tree gets deeper. If the tree-building algorithm being used splits pure nodes, then a decrease in the overall accuracy of the tree classifier could be experienced. Occasionally, going deeper in the tree can cause an accuracy decrease in general, so it is very important to test modifying the depth of the decision tree and selecting the depth that produces the best results. To summarize, observe the points below; we will define the number D as the depth of the tree.
Possible advantages of increasing the number D:
Possible disadvantages of increasing D:
The ability to test the differences in classification results when changing D is imperative. We must be able to easily change and test the variables that could affect the accuracy and reliability of the decision tree model.
The node splitting function used can have an impact on improving the accuracy of the decision tree. For example, using the information gain function may yield better results than using the phi function. The phi function is known as a measure of "goodness" of a candidate split at a node in the decision tree. The information gain function is known as a measure of the "reduction in entropy". In the following, we will build two decision trees: one decision tree will be built using the phi function to split the nodes and one decision tree will be built using the information gain function to split the nodes.
The main advantages and disadvantages of information gain and the phi function
The information gain of a candidate split s at node t is the entropy of the node minus the entropy of the split: Gain(s, t) = Entropy(t) − Entropy(s, t), where Entropy(s, t) is the weighted average of the entropies of the child nodes produced by the split.
A standard form of the phi function is Φ(s, t) = 2·P_L·P_R·Σ_j |P(j | t_L) − P(j | t_R)|, where P_L and P_R are the fractions of samples sent to the left and right children. The phi function is maximized when the chosen feature splits the samples in a way that produces homogeneous splits and has around the same number of samples in each split.
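The two split criteria can be sketched in a few lines of Python. This is an illustrative sketch: the function names are invented, and the phi function is implemented in the common 2·P_L·P_R·Σ|P(j|t_L) − P(j|t_R)| form, which matches the description above but is an assumption about the exact formula intended:

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def information_gain(parent, left, right):
    """Entropy of the parent node minus the weighted entropy
    of the candidate split (left/right children)."""
    n = len(parent)
    split_entropy = (len(left) / n) * entropy(left) \
                  + (len(right) / n) * entropy(right)
    return entropy(parent) - split_entropy

def phi(left, right, classes=("C", "NC")):
    """Assumed phi 'goodness of split' form:
    2 * P_L * P_R * sum_j |P(j|left) - P(j|right)|."""
    n = len(left) + len(right)
    p_l, p_r = len(left) / n, len(right) / n
    return 2 * p_l * p_r * sum(
        abs(left.count(c) / len(left) - right.count(c) / len(right))
        for c in classes)

# A perfect split of a balanced two-class node maximizes both criteria.
parent = ["C", "C", "NC", "NC"]
print(information_gain(parent, ["C", "C"], ["NC", "NC"]))  # 1.0
print(phi(["C", "C"], ["NC", "NC"]))                       # 1.0
```

In the mutation example, one would evaluate every candidate mutation M with these functions and pick the highest-scoring one as the root.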
We will set D, the depth of the decision tree we are building, to three (D = 3). We also have the following data set of cancer and non-cancer samples and the mutation features that the samples either have or do not have. If a sample has a feature mutation, then the sample is positive for that mutation and will be represented by a one. If a sample does not have a feature mutation, then the sample is negative for that mutation and will be represented by a zero.
To summarize, C stands for cancer and NC stands for non-cancer. The letter M stands for mutation, and if a sample has a particular mutation it will show up in the table as a one and otherwise as a zero.
Now, we can use the formulas to calculate the phi function values and information gain values for each M in the dataset. Once all the values are calculated, the tree can be produced. The first thing to be done is to select the root node. For information gain and the phi function, we consider the optimal split to be the mutation that produces the highest value for information gain or the phi function. Now assume that M1 has the highest phi function value and M4 has the highest information gain value. The M1 mutation will be the root of our phi function tree and M4 will be the root of our information gain tree. You can observe the root nodes below.
Now, once we have chosen the root node we can split the samples into two groups based on whether a sample is positive or negative for the root node mutation. The groups will be called group A and group B. For example, if we use M1 to split the samples in the root node we get NC2 and C2 samples in group A and the rest of the samples NC4, NC3, NC1, C1 in group B.
Disregarding the mutation chosen for the root node, proceed to place the next best features that have the highest values for information gain or the phi function in the left or right child nodes of the decision tree. Once we choose the root node and the two child nodes for the tree of depth = 3 we can just add the leaves. The leaves will represent the final classification decision the model has produced based on the mutations a sample either has or does not have. The left tree is the decision tree we obtain from using information gain to split the nodes and the right tree is what we obtain from using the phi function to split the nodes.
Now assume the classification results from both trees are given using a confusion matrix.
Information gain confusion matrix:
Phi function confusion matrix:
The tree built using information gain achieves the same accuracy as the tree built using the phi function, although the two classify individual samples differently. When we classify the samples based on the model using information gain, we get one true positive, one false positive, zero false negatives, and four true negatives. For the model using the phi function, we get two true positives, zero false positives, one false negative, and three true negatives. The next step is to evaluate the effectiveness of the decision tree using some key metrics that will be discussed in the evaluating a decision tree section below. The metrics discussed below can help determine the next steps to be taken when optimizing the decision tree.
The above information is not where it ends for building and optimizing a decision tree. There are many techniques for improving the decision tree classification models we build. One of the techniques is making our decision tree model from a bootstrapped dataset. The bootstrapped dataset helps remove the bias that occurs when building a decision tree model with the same data the model is tested with. The ability to leverage the power of random forests can also help significantly improve the overall accuracy of the model being built. This method generates many decisions from many decision trees and tallies up the votes from each decision tree to make the final classification. There are many techniques, but the main objective is to test building your decision tree model in different ways to make sure it reaches the highest performance level possible.
It is important to know the measurements used to evaluate decision trees. The main metrics used are accuracy, sensitivity, specificity, precision, miss rate, false discovery rate, and false omission rate. All these measurements are derived from the number of true positives, false positives, true negatives, and false negatives obtained when running a set of samples through the decision tree classification model. Also, a confusion matrix can be made to display these results. All these main metrics tell something different about the strengths and weaknesses of the classification model built based on your decision tree. For example, a low sensitivity with high specificity could indicate the classification model built from the decision tree does not do well identifying cancer samples over non-cancer samples.
Let us take the confusion matrix below.
We will now calculate the values accuracy, sensitivity, specificity, precision, miss rate, false discovery rate, and false omission rate.
Accuracy:
Accuracy = (TP + TN) / (TP + TN + FP + FN) = (11 + 105) / 162 = 71.60%
Sensitivity (TPR – true positive rate):[14]
TPR = TP / (TP + FN) = 11 / (11 + 45) = 19.64%
Specificity (TNR – true negative rate):
TNR = TN / (TN + FP) = 105 / (105 + 1) = 99.06%
Precision (PPV – positive predictive value):
PPV = TP / (TP + FP) = 11 / (11 + 1) = 91.67%
Miss rate (FNR – false negative rate):
FNR = FN / (FN + TP) = 45 / (45 + 11) = 80.36%
False discovery rate (FDR):
FDR = FP / (FP + TP) = 1 / (1 + 11) = 8.33%
False omission rate (FOR):
FOR = FN / (FN + TN) = 45 / (45 + 105) = 30.00%
Once we have calculated the key metrics we can make some initial conclusions on the performance of the decision tree model built. The accuracy that we calculated was 71.60%. The accuracy value is good to start, but we would like to get our models as accurate as possible while maintaining the overall performance. The sensitivity value of 19.64% means that only 19.64% of the samples that were actually positive for cancer tested positive. The specificity value of 99.06% means that 99.06% of the samples that were actually negative for cancer tested negative. When it comes to sensitivity and specificity it is important to have a balance between the two values, so if we can decrease our specificity to increase the sensitivity that would prove to be beneficial.[15] These are just a few examples of how to use these values and the meanings behind them to evaluate the decision tree model and improve upon the next iteration.
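The metric calculations above can be bundled into a small helper (the function name is illustrative) and checked against the confusion-matrix counts used in this section:

```python
def confusion_metrics(tp, fp, fn, tn):
    """Derive the key evaluation metrics from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),  # TPR
        "specificity": tn / (tn + fp),  # TNR
        "precision":   tp / (tp + fp),  # PPV
        "miss_rate":   fn / (fn + tp),  # FNR
        "fdr":         fp / (fp + tp),  # false discovery rate
        "for":         fn / (fn + tn),  # false omission rate
    }

# Counts from the confusion matrix discussed above.
m = confusion_metrics(tp=11, fp=1, fn=45, tn=105)
print(f"{m['accuracy']:.2%}")     # 71.60%
print(f"{m['sensitivity']:.2%}")  # 19.64%
```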
|
https://en.wikipedia.org/wiki/Decision_tree
|
In computer programming, dataflow programming is a programming paradigm that models a program as a directed graph of the data flowing between operations, thus implementing dataflow principles and architecture.[1] Dataflow programming languages share some features of functional languages, and were generally developed in order to bring some functional concepts to a language more suitable for numeric processing. Some authors use the term datastream instead of dataflow to avoid confusion with dataflow computing or dataflow architecture, based on an indeterministic machine paradigm. Dataflow programming was pioneered by Jack Dennis and his graduate students at MIT in the 1960s.
Traditionally, a program is modelled as a series of operations happening in a specific order; this may be referred to as sequential,[2]: p.3 procedural,[3] control flow[3] (indicating that the program chooses a specific path), or imperative programming. The program focuses on commands, in line with the von Neumann[2]: p.3 vision of sequential programming, where data is normally "at rest".[3]: p.7
In contrast, dataflow programming emphasizes the movement of data and models programs as a series of connections. Explicitly defined inputs and outputs connect operations, which function like black boxes.[3]: p.2 An operation runs as soon as all of its inputs become valid.[4] Thus, dataflow languages are inherently parallel and can work well in large, decentralized systems.[2]: p.3[5][6]
One of the key concepts in computer programming is the idea of state, essentially a snapshot of various conditions in the system. Most programming languages require a considerable amount of state information, which is generally hidden from the programmer. Often, the computer itself has no idea which piece of information encodes the enduring state. This is a serious problem, as the state information needs to be shared across multiple processors in parallel processing machines. Most languages force the programmer to add extra code to indicate which data and parts of the code are important to the state. This code tends to be both expensive in terms of performance, as well as difficult to read or debug. Explicit parallelism is one of the main reasons for the poor performance of Enterprise Java Beans when building data-intensive, non-OLTP applications.[citation needed]
Where a sequential program can be imagined as a single worker moving between tasks (operations), a dataflow program is more like a series of workers on an assembly line, each doing a specific task whenever materials are available. Since the operations are only concerned with the availability of data inputs, they have no hidden state to track, and are all "ready" at the same time.
Dataflow programs are represented in different ways. A traditional program is usually represented as a series of text instructions, which is reasonable for describing a serial system which pipes data between small, single-purpose tools that receive, process, and return. Dataflow programs start with an input, perhaps the command line parameters, and illustrate how that data is used and modified. The flow of data is explicit, often visually illustrated as a line or pipe.
In terms of encoding, a dataflow program might be implemented as a hash table, with uniquely identified inputs as the keys, used to look up pointers to the instructions. When any operation completes, the program scans down the list of operations until it finds the first operation where all inputs are currently valid, and runs it. When that operation finishes, it will typically output data, thereby making another operation become valid.
For parallel operation, only the list needs to be shared; it is the state of the entire program. Thus the task of maintaining state is removed from the programmer and given to the language's runtime. On machines with a single processor core where an implementation designed for parallel operation would simply introduce overhead, this overhead can be removed completely by using a different runtime.
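The scan-and-fire scheme described above can be sketched as a toy single-threaded scheduler (not a real dataflow runtime; the operation-tuple representation and names are invented for illustration). Each operation fires as soon as all of its named inputs hold valid values:

```python
def run_dataflow(ops, values):
    """ops: list of (input_names, output_name, function) tuples.
    Repeatedly scan for the first not-yet-run operation whose inputs
    are all valid, run it, and record its output, until nothing fires."""
    done = set()
    progress = True
    while progress:
        progress = False
        for i, (inputs, output, fn) in enumerate(ops):
            if i not in done and all(name in values for name in inputs):
                values[output] = fn(*(values[n] for n in inputs))
                done.add(i)
                progress = True
                break  # rescan from the top, as in the description
    return values

# "s" and "p" depend only on the initial inputs and could fire in
# either order; "result" waits until both are valid.
ops = [
    (("s", "p"), "result", lambda s, p: s - p),
    (("a", "b"), "s", lambda a, b: a + b),
    (("a", "b"), "p", lambda a, b: a * b),
]
out = run_dataflow(ops, {"a": 3, "b": 4})
print(out["result"])  # -5
```

Note that the order of the `ops` list does not matter: the scheduler is driven purely by data availability, which is the inherent-parallelism property described above.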
Some recent dataflow libraries such as Differential/Timely Dataflow have used incremental computing for much more efficient data processing.[1][7][8]
A pioneer dataflow language was BLOck DIagram (BLODI), published in 1961 by John Larry Kelly, Jr., Carol Lochbaum and Victor A. Vyssotsky for specifying sampled data systems.[9] A BLODI specification of functional units (amplifiers, adders, delay lines, etc.) and their interconnections was compiled into a single loop that updated the entire system for one clock tick.
In a 1966 Ph.D. thesis, The On-line Graphical Specification of Computer Procedures,[10] Bert Sutherland created one of the first graphical dataflow programming frameworks in order to make parallel programming easier. Subsequent dataflow languages were often developed at the large supercomputer labs. POGOL, an otherwise conventional data-processing language developed at NSA, compiled large-scale applications composed of multiple file-to-file operations, e.g. merge, select, summarize, or transform, into efficient code that eliminated the creation of or writing to intermediate files to the greatest extent possible.[11] SISAL, a popular dataflow language developed at Lawrence Livermore National Laboratory, looks like most statement-driven languages, but variables should be assigned once. This allows the compiler to easily identify the inputs and outputs. A number of offshoots of SISAL have been developed, including SAC, Single Assignment C, which tries to remain as close to the popular C programming language as possible.
The United States Navy funded development of signal processing graph notation (SPGN) and ACOS starting in the early 1980s. This is in use on a number of platforms in the field today.[12]
A more radical concept is Prograph, in which programs are constructed as graphs onscreen, and variables are replaced entirely with lines linking inputs to outputs. Prograph was originally written on the Macintosh, which remained single-processor until the introduction of the DayStar Genesis MP in 1996.[citation needed]
There are many hardware architectures oriented toward the efficient implementation of dataflow programming models.[vague] MIT's tagged token dataflow architecture was designed by Greg Papadopoulos.[undue weight? – discuss]
Data flow has been proposed[by whom?] as an abstraction for specifying the global behavior of distributed system components: in the live distributed objects programming model, distributed data flows are used to store and communicate state, and as such, they play the role analogous to variables, fields, and parameters in Java-like programming languages.[original research?]
Dataflow programming languages include:
|
https://en.wikipedia.org/wiki/Dataflow_programming
|
Inversive congruential generators are a type of nonlinear congruential pseudorandom number generator, which use the modular multiplicative inverse (if it exists) to generate the next number in a sequence. The standard formula for an inversive congruential generator, modulo some prime q, is: x_{n+1} = (a·x_n^{−1} + c) mod q if x_n ≠ 0, and x_{n+1} = c if x_n = 0, where x_n^{−1} denotes the modular multiplicative inverse of x_n modulo q.
Such a generator is denoted symbolically as ICG(q, a, c, seed) and is said to be an ICG with parameters q, a, c and seed seed.
The sequence (x_n)_{n≥0} must have x_i = x_j after finitely many steps, and since the next element depends only on its direct predecessor, also x_{i+1} = x_{j+1}, etc. The maximum possible period for the modulus q is q itself, i.e. the sequence includes every value from 0 to q − 1 before repeating.
A sufficient condition for the sequence to have the maximum possible period is to choose a and c such that the polynomial f(x) = x^2 − cx − a ∈ F_q[x] (the polynomial ring over F_q) is primitive. This is not a necessary condition; there are choices of q, a and c for which f(x) is not primitive, but the sequence nevertheless has a period of q. Any polynomial, primitive or not, that leads to a maximal-period sequence is called an inversive maximal-period (IMP) polynomial. Chou describes an algorithm for choosing the parameters a and c to get such polynomials.[1]
Eichenauer-Herrmann, Lehn, Grothe and Niederreiter have shown that inversive congruential generators have good uniformity properties, in particular with regard to lattice structure and serial correlations.
ICG(5, 2, 3, 1) gives the sequence 1, 0, 3, 2, 4, 1, 0, 3, 2, 4, 1, 0, ...
In this example, f(x) = x^2 − 3x − 2 is irreducible in F_5[x], as none of 0, 1, 2, 3 or 4 is a root. It can also be verified that x is a primitive element of F_5[x]/(f) and hence f is primitive.
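The ICG recurrence is easy to implement; the sketch below (function name illustrative) computes the modular inverse as x^(q−2) mod q, which is valid since q is prime (Fermat's little theorem), and reproduces the ICG(5, 2, 3, 1) sequence quoted above:

```python
def icg(q, a, c, seed, n):
    """Inversive congruential generator ICG(q, a, c, seed):
    x_{k+1} = (a * x_k^(-1) + c) mod q when x_k != 0, else c.
    The inverse x_k^(-1) is computed as x_k^(q-2) mod q (q prime)."""
    x, out = seed, []
    for _ in range(n):
        out.append(x)
        x = (a * pow(x, q - 2, q) + c) % q if x != 0 else c
    return out

print(icg(5, 2, 3, 1, 11))  # [1, 0, 3, 2, 4, 1, 0, 3, 2, 4, 1]
```

The output visits every residue 0..4 before repeating, confirming the maximum possible period q = 5 for these parameters.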
The construction of a compound inversive generator (CIG) relies on combining two or more inversive congruential generators according to the method described below.
Let p_1, …, p_r be distinct prime integers, each p_j ≥ 5. For each index j, 1 ≤ j ≤ r, let (x_n^{(j)})_{n≥0} be a sequence of elements of F_{p_j}, periodic with period length p_j; in other words, {x_n^{(j)} | 0 ≤ n < p_j} = F_{p_j}.
For each index j, 1 ≤ j ≤ r, we consider T_j = T/p_j, where T = p_1 ⋯ p_r is the period length of the following sequence (x_n)_{n≥0}.
The sequence (x_n)_{n≥0} of compound pseudorandom numbers is defined as the sum x_n = (T_1 x_n^{(1)} + T_2 x_n^{(2)} + ⋯ + T_r x_n^{(r)}) mod T.
The compound approach allows combining inversive congruential generators, provided they have full period, in parallel generation systems.
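As a sketch of one common formulation of the compound construction (the component outputs x_n^(j)/p_j are added modulo 1; this particular combination rule is an assumption for illustration, since the defining sum is not reproduced in the text above):

```python
from fractions import Fraction

def icg(q, a, c, seed, n):
    """n terms of the inversive congruential generator ICG(q, a, c, seed)."""
    def inv(x):
        return pow(x, q - 2, q) if x else 0  # inv(0) = 0 by convention
    x, out = seed, []
    for _ in range(n):
        out.append(x)
        x = (a * inv(x) + c) % q
    return out

def compound(params, n):
    """Compound sequence: sum of x_n^(j)/p_j modulo 1 (assumed formulation)."""
    seqs = [icg(q, a, c, s, n) for (q, a, c, s) in params]
    ps = [q for (q, _, _, _) in params]
    return [sum(Fraction(x[i], p) for x, p in zip(seqs, ps)) % 1
            for i in range(n)]

# Two full-period components modulo 5 and 7: compound period is 5 * 7 = 35.
seq = compound([(5, 2, 3, 1), (7, 1, 1, 1)], 70)
period = next(t for t in range(1, 71) if seq[t:] == seq[:-t])
print(period)  # 35
```

Exact rational arithmetic makes the period detection reliable; the 35 values in one period are all distinct, as the Chinese remainder theorem predicts.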
Compound inversive generators are accepted for practical purposes for a number of reasons.
Firstly, binary sequences produced in this way are free of undesirable statistical deviations. Inversive sequences, extensively tested with a variety of statistical tests, remain stable under variation of the parameters.[2][3][4]
Secondly, there exists a steady and simple way of choosing parameters, based on the Chou algorithm,[1] that guarantees maximum period length.
Thirdly, the compound approach has the same properties as single inversive generators,[5][6] but it provides a period length significantly greater than that obtained by a single inversive congruential generator. Compound generators are thus well suited to multiprocessor parallel hardware platforms.
There exists an algorithm[7]that allows designing compound generators with predictable period length, predictable linear complexity level, with excellent statistical properties of produced bit streams.
The procedure for designing this complex structure starts with defining a finite field of p elements and ends with choosing the parameters a and c for each inversive congruential generator that forms a component of the compound generator. This means that each generator is associated with a fixed IMP polynomial. Such a condition is sufficient for the maximum period of each inversive congruential generator[8] and, consequently, for the maximum period of the compound generator. The construction of IMP polynomials is the most efficient approach for finding parameters that give an inversive congruential generator maximum period length.
Equidistribution and statistical independence properties of the generated sequences, which are very important for their usability in astochastic simulation, can be analyzed based on thediscrepancyofs-tuples of successive pseudorandom numbers withs=1{\displaystyle s=1}ands=2{\displaystyle s=2}respectively.
The discrepancy measures how far a generator is from a uniform one. A low discrepancy means that the generated sequence can be used for cryptographic purposes, and the first aim of the inversive congruential generator is to provide such pseudorandom numbers.
For N arbitrary points t_0, …, t_{N−1} ∈ [0,1)^s the discrepancy is defined by D_N(t_0, …, t_{N−1}) = sup_J |F_N(J) − V(J)|,
where the supremum is taken over all subintervals J of [0,1)^s, F_N(J) is N^{−1} times the number of points among t_0, …, t_{N−1} falling into J, and V(J) denotes the s-dimensional volume of J.
Until now we have had sequences of integers from 0 to T − 1; to obtain sequences in [0,1)^s, one divides the sequence of integers by its period T.
From this definition we can read off the two extremes. If the sequence t_0, …, t_{N−1} is perfectly well distributed on [0,1)^s, then for every subinterval J the fraction of points F_N(J) is close to the volume V(J), so D_N(t_0, …, t_{N−1}) ≈ 0. If instead the sequence is concentrated close to one point, then taking J to be a very small subinterval around that point gives V(J) ≈ 0 while F_N(J) ≈ N/N ≈ 1, so D_N(t_0, …, t_{N−1}) ≈ 1. The best and worst cases thus correspond to discrepancies near 0 and near 1, respectively.
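For s = 1, the star discrepancy (the variant in which the subintervals J are anchored at 0) admits a simple closed form for sorted points. A minimal Python sketch:

```python
def star_discrepancy(points):
    """1-D star discrepancy: sup over J = [0, t) of |F_N(J) - V(J)|,
    computed with the classical closed form for sorted points."""
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

print(star_discrepancy([0.0, 0.2, 0.4, 0.6, 0.8]))  # well spread: low (0.2)
print(star_discrepancy([0.9] * 10))                 # concentrated: high (0.9)
```

The two calls illustrate the best-case and worst-case behaviour described above.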
Some further notation is necessary. For integersk≥1{\displaystyle k\geq 1}andq≥2{\displaystyle q\geq 2}letCk(q){\displaystyle C_{k}(q)}be the set of nonzero lattice points(h1,…,hk)∈Zk{\displaystyle (h_{1},\dots ,h_{k})\in Z^{k}}with−q/2<hj<q/2{\displaystyle -q/2<h_{j}<q/2}for1≤j≤k{\displaystyle 1\leq j\leq k}.
Define
and
forh=(h1,…,hk)∈Ck(q){\displaystyle {\mathbf {h} }=(h_{1},\dots ,h_{k})\in C_{k}(q)}. For realt{\displaystyle t}the abbreviatione(t)=exp(2π⋅it){\displaystyle e(t)={\rm {exp}}(2\pi \cdot it)}is used, andu⋅v{\displaystyle u\cdot v}stands for the standard inner product ofu,v{\displaystyle u,v}inRk{\displaystyle R^{k}}.
LetN≥1{\displaystyle N\geq 1}andq≥2{\displaystyle q\geq 2}be integers. Lettn=yn/q∈[0,1)k{\displaystyle {\mathbf {t} }_{n}=y_{n}/q\in [0,1)^{k}}withyn∈{0,1,…,q−1}k{\displaystyle y_{n}\in \{0,1,\dots ,q-1\}^{k}}for0≤n<N{\displaystyle 0\leq n<N}.
Then the discrepancy of the pointst0,…,tN−1{\displaystyle {\mathbf {t} }_{0},\dots ,{\mathbf {t} }_{N-1}}satisfies
The discrepancy ofN{\displaystyle N}arbitrary pointst1,…,tN−1∈[0,1)k{\displaystyle \mathbf {t} _{1},\dots ,\mathbf {t} _{N-1}\in [0,1)^{k}}satisfies
for any nonzero lattice pointh=(h1,…,hk)∈Zk{\displaystyle {\mathbf {h} }=(h_{1},\dots ,h_{k})\in Z^{k}}, wherel{\displaystyle l}denotes the number of nonzero coordinates ofh{\displaystyle {\mathbf {h} }}.
These two theorems show that the CIG is not perfect, because the discrepancy is strictly greater than some positive value, but also that the CIG is not the worst possible generator, since the discrepancy is bounded above by a value less than 1.
There also exist theorems that bound the average value of the discrepancy for compound inversive generators, as well as ones identifying parameter values for which the discrepancy is bounded by a quantity depending on the parameters. For more details see the original paper.[9]
https://en.wikipedia.org/wiki/Inversive_congruential_generator
In the field ofmathematics,normsare defined for elements within avector space. Specifically, when the vector space comprises matrices, such norms are referred to asmatrix norms. Matrix norms differ from vector norms in that they must also interact with matrix multiplication.
Given afieldK{\displaystyle \ K\ }of eitherrealorcomplex numbers(or any complete subset thereof), letKm×n{\displaystyle \ K^{m\times n}\ }be theK-vector spaceof matrices withm{\displaystyle m}rows andn{\displaystyle n}columns and entries in the fieldK.{\displaystyle \ K~.}A matrix norm is anormonKm×n.{\displaystyle \ K^{m\times n}~.}
Norms are often expressed withdouble vertical bars(like so:‖A‖{\displaystyle \ \|A\|\ }). Thus, the matrix norm is afunction‖⋅‖:Km×n→R0+{\displaystyle \ \|\cdot \|:K^{m\times n}\to \mathbb {R} ^{0+}\ }that must satisfy the following properties:[1][2]
For all scalarsα∈K{\displaystyle \ \alpha \in K\ }and matricesA,B∈Km×n,{\displaystyle \ A,B\in K^{m\times n}\ ,}
The only feature distinguishing matrices from rearranged vectors ismultiplication. Matrix norms are particularly useful if they are alsosub-multiplicative:[1][2][3]
Every norm onKn×n{\displaystyle \ K^{n\times n}\ }can be rescaled to be sub-multiplicative; in some books, the terminologymatrix normis reserved for sub-multiplicative norms.[4]
Suppose avector norm‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}onKn{\displaystyle K^{n}}and a vector norm‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}onKm{\displaystyle K^{m}}are given. Anym×n{\displaystyle m\times n}matrixAinduces a linear operator fromKn{\displaystyle K^{n}}toKm{\displaystyle K^{m}}with respect to the standard basis, and one defines the correspondinginduced normoroperator normorsubordinate normon the spaceKm×n{\displaystyle K^{m\times n}}of allm×n{\displaystyle m\times n}matrices as follows:‖A‖α,β=sup{‖Ax‖β:x∈Knsuch that‖x‖α≤1}{\displaystyle \|A\|_{\alpha ,\beta }=\sup\{\|Ax\|_{\beta }:x\in K^{n}{\text{ such that }}\|x\|_{\alpha }\leq 1\}}wheresup{\displaystyle \sup }denotes thesupremum. This norm measures how much the mapping induced byA{\displaystyle A}can stretch vectors.
Depending on the vector norms‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }},‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}used, notation other than‖⋅‖α,β{\displaystyle \|\cdot \|_{\alpha ,\beta }}can be used for the operator norm.
If thep-norm for vectors(1≤p≤∞{\displaystyle 1\leq p\leq \infty }) is used for both spacesKn{\displaystyle K^{n}}andKm,{\displaystyle K^{m},}then the corresponding operator norm is:[2]‖A‖p=sup{‖Ax‖p:x∈Knsuch that‖x‖p≤1}.{\displaystyle \|A\|_{p}=\sup\{\|Ax\|_{p}:x\in K^{n}{\text{ such that }}\|x\|_{p}\leq 1\}.}These induced norms are different from the"entry-wise"p-norms and theSchattenp-normsfor matrices treated below, which are also usually denoted by‖A‖p.{\displaystyle \|A\|_{p}.}
Geometrically speaking, one can imagine ap-norm unit ballVp,n={x∈Kn:‖x‖p≤1}{\displaystyle V_{p,n}=\{x\in K^{n}:\|x\|_{p}\leq 1\}}inKn{\displaystyle K^{n}}, then apply the linear mapA{\displaystyle A}to the ball. It would end up becoming a distorted convex shapeAVp,n⊂Km{\displaystyle AV_{p,n}\subset K^{m}}, and‖A‖p{\displaystyle \|A\|_{p}}measures the longest "radius" of the distorted convex shape. In other words, we must take ap-norm unit ballVp,m{\displaystyle V_{p,m}}inKm{\displaystyle K^{m}}, then multiply it by at least‖A‖p{\displaystyle \|A\|_{p}}, in order for it to be large enough to containAVp,n{\displaystyle AV_{p,n}}.
Whenp=1,{\displaystyle \ p=1\ ,}orp=∞,{\displaystyle \ p=\infty \ ,}we have simple formulas.
‖A‖1=max1≤j≤n∑i=1m|aij|,{\displaystyle \|A\|_{1}=\max _{1\leq j\leq n}\sum _{i=1}^{m}\left|a_{ij}\right|\ ,}which is simply the maximum absolute column sum of the matrix.‖A‖∞=max1≤i≤m∑j=1n|aij|,{\displaystyle \|A\|_{\infty }=\max _{1\leq i\leq m}\sum _{j=1}^{n}\left|a_{ij}\right|\ ,}which is simply the maximum absolute row sum of the matrix.
For example, forA=[−357264028],{\displaystyle A={\begin{bmatrix}-3&5&7\\~~2&6&4\\~~0&2&8\\\end{bmatrix}}\ ,}we have that‖A‖1=max{|−3|+2+0,5+6+2,7+4+8}=max{5,13,19}=19,{\displaystyle \|A\|_{1}=\max {\bigl \{}\ |{-3}|+2+0\ ,~5+6+2\ ,~7+4+8\ {\bigr \}}=\max {\bigl \{}\ 5\ ,~13\ ,~19\ {\bigr \}}=19\ ,}‖A‖∞=max{|−3|+5+7,2+6+4,0+2+8}=max{15,12,10}=15.{\displaystyle \|A\|_{\infty }=\max {\bigl \{}\ |{-3}|+5+7\ ,~2+6+4\ ,~0+2+8\ {\bigr \}}=\max {\bigl \{}\ 15\ ,~12\ ,~10\ {\bigr \}}=15~.}
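These column-sum and row-sum computations can be checked directly; NumPy's `norm` implements the induced 1- and ∞-norms:

```python
import numpy as np

A = np.array([[-3, 5, 7],
              [ 2, 6, 4],
              [ 0, 2, 8]])

norm_1   = np.abs(A).sum(axis=0).max()  # maximum absolute column sum -> 19
norm_inf = np.abs(A).sum(axis=1).max()  # maximum absolute row sum    -> 15

assert norm_1 == np.linalg.norm(A, 1) == 19
assert norm_inf == np.linalg.norm(A, np.inf) == 15
```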
When p = 2 (the Euclidean norm or ℓ2-norm for vectors), the induced matrix norm is the spectral norm. The spectral norm of a matrix A is the largest singular value of A, i.e., the square root of the largest eigenvalue of A∗A, where A∗ denotes the conjugate transpose of A:[5]‖A‖2=λmax(A∗A)=σmax(A).{\displaystyle \|A\|_{2}={\sqrt {\lambda _{\max }\left(A^{*}A\right)}}=\sigma _{\max }(A).}where σmax(A) represents the largest singular value of the matrix A. The spectral norm should not be confused with the spectral radius: in general the two values do not coincide (see Spectral radius for further discussion).
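The spectral-norm characterization can be verified numerically: the square root of the largest eigenvalue of A*A equals the largest singular value.

```python
import numpy as np

A = np.array([[-3.0, 5, 7],
              [ 2.0, 6, 4],
              [ 0.0, 2, 8]])

via_eig = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())  # sqrt of largest eigenvalue of A*A
via_svd = np.linalg.svd(A, compute_uv=False).max()    # largest singular value

assert abs(via_eig - via_svd) < 1e-9
assert abs(via_eig - np.linalg.norm(A, 2)) < 1e-9
```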
There are further properties:
We can generalize the above definition. Suppose we have vector norms‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}and‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}for spacesKn{\displaystyle K^{n}}andKm{\displaystyle K^{m}}respectively; the corresponding operator norm is‖A‖α,β=sup{‖Ax‖β:x∈Knsuch that‖x‖α≤1}{\displaystyle \|A\|_{\alpha ,\beta }=\sup\{\|Ax\|_{\beta }:x\in K^{n}{\text{ such that }}\|x\|_{\alpha }\leq 1\}}In particular, the‖A‖p{\displaystyle \|A\|_{p}}defined previously is the special case of‖A‖p,p{\displaystyle \|A\|_{p,p}}.
In the special cases ofα=2{\displaystyle \alpha =2}andβ=∞{\displaystyle \beta =\infty }, the induced matrix norms can be computed by‖A‖2,∞=max1≤i≤m‖Ai:‖2,{\displaystyle \|A\|_{2,\infty }=\max _{1\leq i\leq m}\|A_{i:}\|_{2},}whereAi:{\displaystyle A_{i:}}is the i-th row of matrixA{\displaystyle A}.
In the special cases ofα=1{\displaystyle \alpha =1}andβ=2{\displaystyle \beta =2}, the induced matrix norms can be computed by‖A‖1,2=max1≤j≤n‖A:j‖2,{\displaystyle \|A\|_{1,2}=\max _{1\leq j\leq n}\|A_{:j}\|_{2},}whereA:j{\displaystyle A_{:j}}is the j-th column of matrixA{\displaystyle A}.
Hence,‖A‖2,∞{\displaystyle \|A\|_{2,\infty }}and‖A‖1,2{\displaystyle \|A\|_{1,2}}are the maximum row and column 2-norm of the matrix, respectively.
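A short check of these formulas: the sampled vectors below are constrained to the relevant unit balls, so the induced-norm inequalities must hold exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

norm_2_inf = np.linalg.norm(A, axis=1).max()  # maximum row 2-norm
norm_1_2   = np.linalg.norm(A, axis=0).max()  # maximum column 2-norm

x = rng.standard_normal(3)
x /= np.linalg.norm(x)                        # ||x||_2 = 1
assert np.abs(A @ x).max() <= norm_2_inf + 1e-12   # ||Ax||_inf <= ||A||_{2,inf}

e = np.zeros(3)
e[0] = 1.0                                    # ||e||_1 = 1
assert np.linalg.norm(A @ e) <= norm_1_2 + 1e-12   # ||Ae||_2 <= ||A||_{1,2}
```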
Any operator norm isconsistentwith the vector norms that induce it, giving‖Ax‖β≤‖A‖α,β‖x‖α.{\displaystyle \|Ax\|_{\beta }\leq \|A\|_{\alpha ,\beta }\|x\|_{\alpha }.}
Suppose‖⋅‖α,β{\displaystyle \|\cdot \|_{\alpha ,\beta }};‖⋅‖β,γ{\displaystyle \|\cdot \|_{\beta ,\gamma }}; and‖⋅‖α,γ{\displaystyle \|\cdot \|_{\alpha ,\gamma }}are operator norms induced by the respective pairs of vector norms(‖⋅‖α,‖⋅‖β){\displaystyle (\|\cdot \|_{\alpha },\|\cdot \|_{\beta })};(‖⋅‖β,‖⋅‖γ){\displaystyle (\|\cdot \|_{\beta },\|\cdot \|_{\gamma })}; and(‖⋅‖α,‖⋅‖γ){\displaystyle (\|\cdot \|_{\alpha },\|\cdot \|_{\gamma })}. Then,
this follows from‖ABx‖γ≤‖A‖β,γ‖Bx‖β≤‖A‖β,γ‖B‖α,β‖x‖α{\displaystyle \|ABx\|_{\gamma }\leq \|A\|_{\beta ,\gamma }\|Bx\|_{\beta }\leq \|A\|_{\beta ,\gamma }\|B\|_{\alpha ,\beta }\|x\|_{\alpha }}andsup‖x‖α=1‖ABx‖γ=‖AB‖α,γ.{\displaystyle \sup _{\|x\|_{\alpha }=1}\|ABx\|_{\gamma }=\|AB\|_{\alpha ,\gamma }.}
Suppose‖⋅‖α,α{\displaystyle \|\cdot \|_{\alpha ,\alpha }}is an operator norm on the space of square matricesKn×n{\displaystyle K^{n\times n}}induced by vector norms‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}and‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}.
Then, the operator norm is a sub-multiplicative matrix norm:‖AB‖α,α≤‖A‖α,α‖B‖α,α.{\displaystyle \|AB\|_{\alpha ,\alpha }\leq \|A\|_{\alpha ,\alpha }\|B\|_{\alpha ,\alpha }.}
Moreover, any such norm satisfies the inequality
for all positive integersr, whereρ(A)is thespectral radiusofA. ForsymmetricorhermitianA, we have equality in (1) for the 2-norm, since in this case the 2-normisprecisely the spectral radius ofA. For an arbitrary matrix, we may not have equality for any norm; a counterexample would beA=[0100],{\displaystyle A={\begin{bmatrix}0&1\\0&0\end{bmatrix}},}which has vanishing spectral radius. In any case, for any matrix norm, we have thespectral radius formula:limr→∞‖Ar‖1/r=ρ(A).{\displaystyle \lim _{r\to \infty }\|A^{r}\|^{1/r}=\rho (A).}
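Both claims are easy to check numerically: the nilpotent counterexample has spectral radius 0 but 2-norm 1, and the spectral radius formula converges for a generic matrix.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])                    # the counterexample: rho(A) = 0
assert np.abs(np.linalg.eigvals(A)).max() < 1e-12
assert abs(np.linalg.norm(A, 2) - 1.0) < 1e-12

B = np.array([[2.0, 1.0],
              [0.0, 0.5]])                    # rho(B) = 2
rho = np.abs(np.linalg.eigvals(B)).max()
# Gelfand's formula: ||B^r||^(1/r) -> rho(B) as r grows
approx = np.linalg.norm(np.linalg.matrix_power(B, 40), 2) ** (1 / 40)
assert abs(approx - rho) < 0.05
```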
If the vector norms‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}and‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}are given in terms ofenergy normsbased onsymmetricpositive definitematricesP{\displaystyle P}andQ{\displaystyle Q}respectively, the resulting operator norm is given as‖A‖P,Q=sup{‖Ax‖Q:‖x‖P≤1}.{\displaystyle \|A\|_{P,Q}=\sup\{\|Ax\|_{Q}:\|x\|_{P}\leq 1\}.}
Using the symmetricmatrix square rootsofP{\displaystyle P}andQ{\displaystyle Q}respectively, the operator norm can be expressed as the spectral norm of a modified matrix:
‖A‖P,Q=‖Q1/2AP−1/2‖2.{\displaystyle \|A\|_{P,Q}=\|Q^{1/2}AP^{-1/2}\|_{2}.}
A matrix norm‖⋅‖{\displaystyle \|\cdot \|}onKm×n{\displaystyle K^{m\times n}}is calledconsistentwith a vector norm‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}onKn{\displaystyle K^{n}}and a vector norm‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}onKm{\displaystyle K^{m}}, if:‖Ax‖β≤‖A‖‖x‖α{\displaystyle \left\|Ax\right\|_{\beta }\leq \left\|A\right\|\left\|x\right\|_{\alpha }}for allA∈Km×n{\displaystyle A\in K^{m\times n}}and allx∈Kn{\displaystyle x\in K^{n}}. In the special case ofm=nandα=β{\displaystyle \alpha =\beta },‖⋅‖{\displaystyle \|\cdot \|}is also calledcompatiblewith‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}.
All induced norms are consistent by definition. Also, any sub-multiplicative matrix norm onKn×n{\displaystyle K^{n\times n}}induces a compatible vector norm onKn{\displaystyle K^{n}}by defining‖v‖:=‖(v,v,…,v)‖{\displaystyle \left\|v\right\|:=\left\|\left(v,v,\dots ,v\right)\right\|}.
These norms treat anm×n{\displaystyle m\times n}matrix as a vector of sizem⋅n{\displaystyle m\cdot n}, and use one of the familiar vector norms. For example, using thep-norm for vectors,p≥ 1, we get:
This is a different norm from the inducedp-norm (see above) and the Schattenp-norm (see below), but the notation is the same.
The special casep= 2 is the Frobenius norm, andp= ∞ yields the maximum norm.
Let (a_1, …, a_n) be the columns of matrix A, each a vector of dimension m; from this viewpoint, the matrix A presents n data points in an m-dimensional space. The L_{2,1} norm[6] is the sum of the Euclidean norms of the columns of the matrix:
TheL2,1{\displaystyle L_{2,1}}norm as an error function is more robust, since the error for each data point (a column) is not squared. It is used inrobust data analysisandsparse coding.
Forp,q≥ 1, theL2,1{\displaystyle L_{2,1}}norm can be generalized to theLp,q{\displaystyle L_{p,q}}norm as follows:
Whenp=q= 2for theLp,q{\displaystyle L_{p,q}}norm, it is called theFrobenius normor theHilbert–Schmidt norm, though the latter term is used more frequently in the context of operators on (possibly infinite-dimensional)Hilbert space. This norm can be defined in various ways:
where thetraceis the sum of diagonal entries, andσi(A){\displaystyle \sigma _{i}(A)}are thesingular valuesofA{\displaystyle A}. The second equality is proven by explicit computation oftrace(A∗A){\displaystyle \mathrm {trace} (A^{*}A)}. The third equality is proven bysingular value decompositionofA{\displaystyle A}, and the fact that the trace is invariant under circular shifts.
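The equivalent definitions agree, as a quick numerical check confirms:

```python
import numpy as np

A = np.array([[1.0, 2, 3],
              [4.0, 5, 6]])

entrywise = np.sqrt((np.abs(A) ** 2).sum())        # sqrt of sum of squared entries
via_trace = np.sqrt(np.trace(A.conj().T @ A))      # sqrt(trace(A* A))
via_svd   = np.sqrt((np.linalg.svd(A, compute_uv=False) ** 2).sum())  # sqrt(sum sigma_i^2)

assert abs(entrywise - via_trace) < 1e-12
assert abs(entrywise - via_svd) < 1e-12
assert abs(entrywise - np.linalg.norm(A, 'fro')) < 1e-12
```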
The Frobenius norm is an extension of the Euclidean norm toKn×n{\displaystyle K^{n\times n}}and comes from theFrobenius inner producton the space of all matrices.
The Frobenius norm is sub-multiplicative and is very useful for numerical linear algebra. The sub-multiplicativity of the Frobenius norm can be proved using the Cauchy–Schwarz inequality. In fact, it is more than sub-multiplicative, as‖AB‖F≤‖A‖op‖B‖F{\displaystyle \|AB\|_{F}\leq \|A\|_{op}\|B\|_{F}}, where ‖⋅‖op denotes the spectral (operator) norm, which satisfies ‖⋅‖op ≤ ‖⋅‖F.
Frobenius norm is often easier to compute than induced norms, and has the useful property of being invariant underrotations(andunitaryoperations in general). That is,‖A‖F=‖AU‖F=‖UA‖F{\displaystyle \|A\|_{\text{F}}=\|AU\|_{\text{F}}=\|UA\|_{\text{F}}}for any unitary matrixU{\displaystyle U}. This property follows from the cyclic nature of the trace (trace(XYZ)=trace(YZX)=trace(ZXY){\displaystyle \operatorname {trace} (XYZ)=\operatorname {trace} (YZX)=\operatorname {trace} (ZXY)}):
and analogously:
where we have used the unitary nature ofU{\displaystyle U}(that is,U∗U=UU∗=I{\displaystyle U^{*}U=UU^{*}=\mathbf {I} }).
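Unitary invariance can likewise be checked with a random orthogonal matrix obtained from a QR factorization:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # orthogonal (real unitary) factor

f = np.linalg.norm(A, 'fro')
assert abs(np.linalg.norm(U @ A, 'fro') - f) < 1e-10  # ||UA||_F = ||A||_F
assert abs(np.linalg.norm(A @ U, 'fro') - f) < 1e-10  # ||AU||_F = ||A||_F
```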
It also satisfies
and
where⟨A,B⟩F{\displaystyle \langle A,B\rangle _{\text{F}}}is theFrobenius inner product, and Re is the real part of a complex number (irrelevant for real matrices)
Themax normis the elementwise norm in the limit asp=qgoes to infinity:
This norm is notsub-multiplicative; but modifying the right-hand side tomnmaxi,j|aij|{\displaystyle {\sqrt {mn}}\max _{i,j}\vert a_{ij}\vert }makes it so.
Note that in some literature (such asCommunication complexity), an alternative definition of max-norm, also called theγ2{\displaystyle \gamma _{2}}-norm, refers to the factorization norm:
The Schattenp-norms arise when applying thep-norm to the vector ofsingular valuesof a matrix.[2]If the singular values of them×n{\displaystyle m\times n}matrixA{\displaystyle A}are denoted byσi, then the Schattenp-norm is defined by
These norms again share the notation with the induced and entry-wisep-norms, but they are different.
All Schatten norms are sub-multiplicative. They are also unitarily invariant, which means that‖A‖=‖UAV‖{\displaystyle \|A\|=\|UAV\|}for all matricesA{\displaystyle A}and allunitary matricesU{\displaystyle U}andV{\displaystyle V}.
The most familiar cases arep= 1, 2, ∞. The casep= 2 yields the Frobenius norm, introduced before. The casep= ∞ yields the spectral norm, which is the operator norm induced by the vector 2-norm (see above). Finally,p= 1 yields thenuclear norm(also known as thetrace norm, or theKy Fan'n'-norm[7]), defined as:
whereA∗A{\displaystyle {\sqrt {A^{*}A}}}denotes a positive semidefinite matrixB{\displaystyle B}such thatBB=A∗A{\displaystyle BB=A^{*}A}. More precisely, sinceA∗A{\displaystyle A^{*}A}is apositive semidefinite matrix, itssquare rootis well defined. The nuclear norm‖A‖∗{\displaystyle \|A\|_{*}}is aconvex envelopeof the rank functionrank(A){\displaystyle {\text{rank}}(A)}, so it is often used inmathematical optimizationto search for low-rank matrices.
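Computed from the singular values, the three familiar cases can be compared against NumPy's built-in norms (p = 1 is the nuclear norm, p = 2 the Frobenius norm, p = ∞ the spectral norm):

```python
import numpy as np

def schatten(A, p):
    """Schatten p-norm: the vector p-norm of the singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)
    return s.max() if np.isinf(p) else (s ** p).sum() ** (1 / p)

A = np.array([[1.0, 2],
              [3.0, 4]])

assert abs(schatten(A, 1) - np.linalg.norm(A, 'nuc')) < 1e-10
assert abs(schatten(A, 2) - np.linalg.norm(A, 'fro')) < 1e-10
assert abs(schatten(A, np.inf) - np.linalg.norm(A, 2)) < 1e-10
```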
Combiningvon Neumann's trace inequalitywithHölder's inequalityfor Euclidean space yields a version ofHölder's inequalityfor Schatten norms for1/p+1/q=1{\displaystyle 1/p+1/q=1}:
In particular, this implies the Schatten norm inequality
A matrix norm‖⋅‖{\displaystyle \|\cdot \|}is calledmonotoneif it is monotonic with respect to theLoewner order. Thus, a matrix norm is increasing if
The Frobenius norm and spectral norm are examples of monotone norms.[8]
Another source of inspiration for matrix norms arises from considering a matrix as theadjacency matrixof aweighted,directed graph.[9]The so-called "cut norm" measures how close the associated graph is to beingbipartite:‖A‖◻=maxS⊆[n],T⊆[m]|∑s∈S,t∈TAt,s|{\displaystyle \|A\|_{\Box }=\max _{S\subseteq [n],T\subseteq [m]}{\left|\sum _{s\in S,t\in T}{A_{t,s}}\right|}}whereA∈Km×n.[9][10][11]Equivalent definitions (up to a constant factor) impose the conditions2|S| >n& 2|T| >m;S=T; orS∩T= ∅.[10]
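The cut-norm definition can be evaluated directly by exhaustive search over subsets, which is feasible only for tiny matrices (computing the cut norm is hard in general):

```python
import itertools
import numpy as np

def cut_norm(A):
    """Brute-force cut norm: maximize |sum of entries in A[T, S]| over all
    row subsets T and column subsets S (exponential time; tiny matrices only)."""
    m, n = A.shape
    best = 0.0
    for T in itertools.product([False, True], repeat=m):
        for S in itertools.product([False, True], repeat=n):
            val = abs(A[np.ix_(np.array(T), np.array(S))].sum())
            best = max(best, val)
    return best

A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
assert cut_norm(A) == 1.0  # the best submatrix sum is a single +/-1 entry
```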
The cut-norm is equivalent to the induced operator norm‖·‖∞→1, which is itself equivalent to another norm, called theGrothendiecknorm.[11]
To define the Grothendieck norm, first note that a linear operator K^1 → K^1 is just a scalar, and thus extends to a linear operator on any K^k → K^k. Moreover, given any choice of basis for K^n and K^m, any linear operator K^n → K^m extends to a linear operator (K^k)^n → (K^k)^m, by letting each matrix element act on elements of K^k via scalar multiplication. The Grothendieck norm is the norm of that extended operator; in symbols:[11]‖A‖G,k=supeachuj,vj∈Kk;‖uj‖=‖vj‖=1∑j∈[n],ℓ∈[m](uj⋅vj)Aℓ,j{\displaystyle \|A\|_{G,k}=\sup _{{\text{each }}u_{j},v_{j}\in K^{k};\|u_{j}\|=\|v_{j}\|=1}{\sum _{j\in [n],\ell \in [m]}{(u_{j}\cdot v_{j})A_{\ell ,j}}}}
The Grothendieck norm depends on choice of basis (usually taken to be thestandard basis) andk.
For any two matrix norms‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}and‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}, we have that:
for some positive numbersrands, for all matricesA∈Km×n{\displaystyle A\in K^{m\times n}}. In other words, all norms onKm×n{\displaystyle K^{m\times n}}areequivalent; they induce the sametopologyonKm×n{\displaystyle K^{m\times n}}. This is true because the vector spaceKm×n{\displaystyle K^{m\times n}}has the finitedimensionm×n{\displaystyle m\times n}.
Moreover, for every matrix norm‖⋅‖{\displaystyle \|\cdot \|}onRn×n{\displaystyle \mathbb {R} ^{n\times n}}there exists a unique positive real numberk{\displaystyle k}such thatℓ‖⋅‖{\displaystyle \ell \|\cdot \|}is a sub-multiplicative matrix norm for everyℓ≥k{\displaystyle \ell \geq k}; to wit,
A sub-multiplicative matrix norm‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}is said to beminimal, if there exists no other sub-multiplicative matrix norm‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}satisfying‖⋅‖β<‖⋅‖α{\displaystyle \|\cdot \|_{\beta }<\|\cdot \|_{\alpha }}.
Let‖A‖p{\displaystyle \|A\|_{p}}once again refer to the norm induced by the vectorp-norm (as above in the Induced norm section).
For matrixA∈Rm×n{\displaystyle A\in \mathbb {R} ^{m\times n}}ofrankr{\displaystyle r}, the following inequalities hold:[12][13]
https://en.wikipedia.org/wiki/Matrix_norm
Ineconomicsandfinance,risk aversionis the tendency of people to prefer outcomes with lowuncertaintyto those outcomes with high uncertainty, even if the average outcome of the latter is equal to or higher in monetary value than the more certain outcome.[1]
Risk aversion explains the inclination to agree to a situation with a lower average payoff that is more predictable rather than another situation with a less predictable payoff that is higher on average. For example, a risk-averse investor might choose to put their money into abankaccount with a low but guaranteed interest rate, rather than into astockthat may have high expected returns, but also involves a chance of losing value.
A person is given the choice between two scenarios: one with a guaranteed payoff, and one with a risky payoff with the same average value. In the former scenario, the person receives $50. In the uncertain scenario, a coin is flipped to decide whether the person receives $100 or nothing. The expected payoff for both scenarios is $50, meaning that an individual who was insensitive to risk would not care whether they took the guaranteed payment or the gamble. However, individuals may have different risk attitudes.[2][3][4]
A person is said to be:
The average payoff of the gamble, known as itsexpected value, is $50. The smallest guaranteed dollar amount that an individual would be indifferent to compared to an uncertain gain of a specific average predicted value is called thecertainty equivalent, which is also used as a measure of risk aversion. An individual that is risk averse has a certainty equivalent that is smaller than the prediction of uncertain gains. Therisk premiumis the difference between the expected value and the certainty equivalent. For risk-averse individuals, risk premium is positive, for risk-neutral persons it is zero, and for risk-loving individuals their risk premium is negative.
Inexpected utilitytheory, an agent has a utility functionu(c) wherecrepresents the value that he might receive in money or goods (in the above exampleccould be $0 or $40 or $100).
The utility functionu(c) is defined onlyup topositiveaffine transformation– in other words, a constant could be added to the value ofu(c) for allc, and/oru(c) could be multiplied by a positive constant factor, without affecting the conclusions.
An agent is risk-averse if and only if the utility function isconcave. For instanceu(0) could be 0,u(100) might be 10,u(40) might be 5, and for comparisonu(50) might be 6.
The expected utility of the above bet (with a 50% chance of receiving 100 and a 50% chance of receiving 0) is
and if the person has the utility function withu(0)=0,u(40)=5, andu(100)=10 then the expected utility of the bet equals 5, which is the same as the known utility of the amount 40. Hence the certainty equivalent is 40.
The risk premium is ($50 minus $40)=$10, or in proportional terms
or 25% (where $50 is the expected value of the risky bet: (120+12100{\displaystyle {\tfrac {1}{2}}0+{\tfrac {1}{2}}100}). This risk premium means that the person would be willing to sacrifice as much as $10 in expected value in order to achieve perfect certainty about how much money will be received. In other words, the person would be indifferent between the bet and a guarantee of $40, and would prefer anything over $40 to the bet.
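The arithmetic of this example generalizes. Here is a sketch with u(c) = √c as the concave utility (a stand-in choice, not the tabulated utility above), for which the inverse is simply squaring:

```python
import math

p = 0.5                       # fair coin flip
high, low = 100.0, 0.0
u = math.sqrt                 # a concrete risk-averse (concave) utility; illustrative

expected_value   = p * high + (1 - p) * low           # 50.0
expected_utility = p * u(high) + (1 - p) * u(low)     # 5.0
certainty_equivalent = expected_utility ** 2          # u^{-1}(EU) = 25.0
risk_premium = expected_value - certainty_equivalent  # 25.0

assert certainty_equivalent < expected_value          # risk aversion: CE below EV
```

With this (more curved) utility the certainty equivalent drops to $25 and the risk premium rises to $25, illustrating how stronger concavity means stronger risk aversion.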
In the case of a wealthier individual, the risk of losing $100 would be less significant, and for such small amounts his utility function would be likely to be almost linear. For instance, if u(0) = 0 and u(100) = 10, then u(40) might be 4.02 and u(50) might be 5.01.
The utility function for perceived gains has two key properties: an upward slope, and concavity. (i) The upward slope implies that the person feels that more is better: a larger amount received yields greater utility, and for risky bets the person would prefer a bet which isfirst-order stochastically dominantover an alternative bet (that is, if the probability mass of the second bet is pushed to the right to form the first bet, then the first bet is preferred). (ii) The concavity of the utility function implies that the person is risk averse: a sure amount would always be preferred over a risky bet having the same expected value; moreover, for risky bets the person would prefer a bet which is amean-preserving contractionof an alternative bet (that is, if some of the probability mass of the first bet is spread out without altering the mean to form the second bet, then the first bet is preferred).
There are various measures of the risk aversion expressed by a given utility function. Several functional forms often used for utility functions are characterized by these measures.
The higher the curvature ofu(c){\displaystyle u(c)}, the higher the risk aversion. However, since expected utility functions are not uniquely defined (are defined only up toaffine transformations), a measure that stays constant with respect to these transformations is needed rather than just the second derivative ofu(c){\displaystyle u(c)}. One such measure is theArrow–Pratt measure of absolute risk aversion(ARA), after the economistsKenneth ArrowandJohn W. Pratt,[5][6]also known as thecoefficient of absolute risk aversion, defined as
whereu′(c){\displaystyle u'(c)}andu″(c){\displaystyle u''(c)}denote the first and second derivatives with respect toc{\displaystyle c}ofu(c){\displaystyle u(c)}. For example, ifu(c)=α+βln(c),{\displaystyle u(c)=\alpha +\beta ln(c),}sou′(c)=β/c{\displaystyle u'(c)=\beta /c}andu″(c)=−β/c2,{\displaystyle u''(c)=-\beta /c^{2},}thenA(c)=1/c.{\displaystyle A(c)=1/c.}Note howA(c){\displaystyle A(c)}does not depend onα{\displaystyle \alpha }andβ,{\displaystyle \beta ,}so affine transformations ofu(c){\displaystyle u(c)}do not change it.
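The invariance under affine transformations can be checked numerically with finite differences (the utility choices here are illustrative):

```python
import math

def ara(u, c, h=1e-4):
    """Arrow-Pratt absolute risk aversion A(c) = -u''(c)/u'(c),
    estimated with central differences."""
    d1 = (u(c + h) - u(c - h)) / (2 * h)
    d2 = (u(c + h) - 2 * u(c) + u(c - h)) / h ** 2
    return -d2 / d1

log_u = math.log
assert abs(ara(log_u, 4.0) - 0.25) < 1e-3          # A(c) = 1/c for log utility

affine = lambda c: 3.0 + 2.0 * math.log(c)         # alpha + beta * log(c)
assert abs(ara(affine, 4.0) - ara(log_u, 4.0)) < 1e-3  # same A(c)
```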
The following expressions relate to this term:
The solution to this differential equation (omitting additive and multiplicative constant terms, which do not affect the behavior implied by the utility function) is:
whereR=1/a{\displaystyle R=1/a}andcs=−b/a{\displaystyle c_{s}=-b/a}.
Note that whena=0{\displaystyle a=0}, this is CARA, asA(c)=1/b=const{\displaystyle A(c)=1/b=const}, and whenb=0{\displaystyle b=0}, this is CRRA (see below), ascA(c)=1/a=const{\displaystyle cA(c)=1/a=const}.
See[7]
and this can hold only ifu‴(c)>0{\displaystyle u'''(c)>0}. Therefore, DARA implies that the utility function is positively skewed; that is,u‴(c)>0{\displaystyle u'''(c)>0}.[8]Analogously, IARA can be derived with the opposite directions of inequalities, which permits but does not require a negatively skewed utility function (u‴(c)<0{\displaystyle u'''(c)<0}). An example of a DARA utility function isu(c)=log(c){\displaystyle u(c)=\log(c)}, withA(c)=1/c{\displaystyle A(c)=1/c}, whileu(c)=c−αc2,{\displaystyle u(c)=c-\alpha c^{2},}α>0{\displaystyle \alpha >0}, withA(c)=2α/(1−2αc){\displaystyle A(c)=2\alpha /(1-2\alpha c)}would represent a quadratic utility function exhibiting IARA.
TheArrow–Pratt measure of relative risk aversion(RRA) orcoefficient of relative risk aversionis defined as[11]
Unlike ARA whose units are in $−1, RRA is a dimensionless quantity, which allows it to be applied universally. Like for absolute risk aversion, the corresponding termsconstant relative risk aversion(CRRA) anddecreasing/increasing relative risk aversion(DRRA/IRRA) are used. This measure has the advantage that it is still a valid measure of risk aversion, even if the utility function changes from risk averse to risk loving ascvaries, i.e. utility is not strictly convex/concave over allc. A constant RRA implies a decreasing ARA, but the reverse is not always true. As a specific example of constant relative risk aversion, the utility functionu(c)=log(c){\displaystyle u(c)=\log(c)}impliesRRA = 1.
Inintertemporal choiceproblems, theelasticity of intertemporal substitutionoften cannot be disentangled from the coefficient of relative risk aversion. Theisoelastic utilityfunction
exhibits constant relative risk aversion withR(c)=ρ{\displaystyle R(c)=\rho }and the elasticity of intertemporal substitutionεu(c)=1/ρ{\displaystyle \varepsilon _{u(c)}=1/\rho }. Whenρ=1,{\displaystyle \rho =1,}usingl'Hôpital's ruleshows that this simplifies to the case oflog utility,u(c) = logc, and theincome effectandsubstitution effecton saving exactly offset.
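For the isoelastic family, the relative coefficient c·A(c) is indeed the constant ρ, which a finite-difference check confirms (the parameter values are illustrative):

```python
def rra(u, c, h=1e-4):
    """Relative risk aversion R(c) = -c * u''(c) / u'(c), via central differences."""
    d1 = (u(c + h) - u(c - h)) / (2 * h)
    d2 = (u(c + h) - 2 * u(c) + u(c - h)) / h ** 2
    return -c * d2 / d1

rho = 3.0
iso = lambda c: (c ** (1 - rho) - 1) / (1 - rho)   # isoelastic utility
for c in (0.5, 1.5, 4.0):
    assert abs(rra(iso, c) - rho) < 1e-2           # R(c) = rho at every c
```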
A time-varying relative risk aversion can be considered.[12]
The most straightforward implications of varying risk aversion occur in the context of forming a portfolio with one risky asset and one risk-free asset.[5][6] If an investor experiences an increase in wealth, he/she will choose to increase (or keep unchanged, or decrease) the number of dollars held in the risky asset according to whether absolute risk aversion is decreasing (or constant, or increasing), and will increase (or keep unchanged, or decrease) the fraction of the portfolio held in the risky asset according to whether relative risk aversion is decreasing (or constant, or increasing). Thus economists avoid using utility functions which exhibit increasing absolute risk aversion, because they have the unrealistic behavioral implication that investors hold fewer dollars of risky assets as they grow wealthier.
In one model in monetary economics, an increase in relative risk aversion increases the impact of households' money holdings on the overall economy. In other words, the more relative risk aversion increases, the more money demand shocks will impact the economy.[13]
In modern portfolio theory, risk aversion is measured as the additional expected reward an investor requires to accept additional risk. A risk-averse investor will invest in multiple uncertain assets, but will prefer an uncertain portfolio over a certain one only when the predicted return on the uncertain portfolio is greater.[1] The risk-return spectrum is relevant here, as it results largely from this type of risk aversion. Risk is measured as the standard deviation of the return on investment, i.e. the square root of its variance. In advanced portfolio theory, other kinds of risk are taken into consideration, measured as the n-th root of the n-th central moment. The symbol used for risk aversion is A or A_n.
The von Neumann–Morgenstern utility theorem is another model used to denote how risk aversion influences an actor's utility function. An extension of the expected utility function, the von Neumann–Morgenstern model includes risk aversion axiomatically rather than as an additional variable.[14]
John von Neumann and Oskar Morgenstern first developed the model in their book Theory of Games and Economic Behavior.[14] Essentially, von Neumann and Morgenstern hypothesised that individuals seek to maximise their expected utility rather than the expected monetary value of their assets.[15] In defining expected utility in this sense, the pair developed a function based on preference relations. If an individual's preferences satisfy four key axioms, then a utility function based on how they weigh different outcomes can be deduced.[16]
In applying this model to risk aversion, the function can be used to show how an individual's preferences over wins and losses will influence their expected utility function. For example, if a risk-averse individual with $20,000 in savings is given the option to gamble it for a 30% chance of winning $100,000, they may still decline for fear of losing their savings. This does not make sense under a naive expected-value calculation, however:
EU(A) = 0.3($100,000) + 0.7($0)

EU(A) = $30,000

EU(A) > $20,000
The von Neumann–Morgenstern model can explain this scenario. Based on preference relations, a specific utility u can be assigned to each outcome. The function then becomes:
EU(A) = 0.3u($100,000) + 0.7u($0)
For a risk-averse person, u would take values such that the individual would rather keep their $20,000 in savings than gamble it all to potentially increase their wealth to $100,000. Hence a risk-averse individual's function would show that
EU(A) ≺ $20,000 (keeping savings)
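The role of concave utility can be made concrete numerically. In the sketch below, the square-root utility is an arbitrary concave choice for illustration (not taken from the text): the gamble above has a higher expected monetary value than the sure $20,000, yet a lower expected utility.

```python
from math import sqrt

# Outcomes of the gamble: 30% chance of $100,000, 70% chance of $0.
p_win, win, lose = 0.3, 100_000, 0
sure = 20_000

expected_value = p_win * win + (1 - p_win) * lose  # $30,000

# A concave (risk-averse) utility; sqrt is an illustrative
# choice, not the one used in the article.
u = sqrt

expected_utility_gamble = p_win * u(win) + (1 - p_win) * u(lose)
utility_sure = u(sure)

assert expected_value > sure                   # EV favours the gamble...
assert expected_utility_gamble < utility_sure  # ...but EU favours keeping savings
```

Any sufficiently concave u produces the same reversal; the particular functional form only changes how large the gamble's edge must be before it is accepted.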
Using expected utility theory's approach to risk aversion to analyze small-stakes decisions has come under criticism. Matthew Rabin has shown that a risk-averse, expected-utility-maximizing individual who,
from any initial wealth level [...] turns down gambles where she loses $100 or gains $110, each with 50% probability [...] will turn down 50–50 bets of losing $1,000 or gaining any sum of money.[17]
Rabin criticizes this implication of expected utility theory on grounds of implausibility: individuals who are risk averse for small gambles due to diminishing marginal utility would exhibit extreme forms of risk aversion in risky decisions under larger stakes. One solution to the problem observed by Rabin is that proposed by prospect theory and cumulative prospect theory, where outcomes are considered relative to a reference point (usually the status quo), rather than considering only the final wealth.
Another limitation is the reflection effect, which demonstrates the reversing of risk aversion. This effect was first presented by Kahneman and Tversky as a part of prospect theory, in the behavioral economics domain.
The reflection effect is an identified pattern of opposite preferences between negative as opposed to positive prospects: people tend to avoid risk when the gamble is between gains, and to seek risks when the gamble is between losses.[18]For example, most people prefer a certain gain of 3,000 to an 80% chance of a gain of 4,000. When posed the same problem, but for losses, most people prefer an 80% chance of a loss of 4,000 to a certain loss of 3,000.
The reflection effect (as well as thecertainty effect) is inconsistent with the expected utility hypothesis. It is assumed that the psychological principle which stands behind this kind of behavior is the overweighting of certainty. Options which are perceived as certain are over-weighted relative to uncertain options. This pattern is an indication of risk-seeking behavior in negative prospects and eliminates other explanations for the certainty effect such as aversion for uncertainty or variability.[18]
The initial findings regarding the reflection effect faced criticism regarding their validity, as it was claimed that there is insufficient evidence to support the effect at the individual level. Subsequently, an extensive investigation revealed its possible limitations, suggesting that the effect is most prevalent when either small or large amounts and extreme probabilities are involved.[19][20]
Numerous studies have shown that in riskless bargaining scenarios, being risk-averse is disadvantageous. Moreover, opponents will always prefer to play against the most risk-averse person.[21] Based on both the von Neumann–Morgenstern and Nash game theory models, a risk-averse person will happily receive a smaller commodity share of the bargain.[22] This is because their utility function is concave, hence their utility increases at a decreasing rate, while their non-risk-averse opponents' utility may increase at a constant or increasing rate.[23] Intuitively, a risk-averse person will hence settle for a smaller share of the bargain than a risk-neutral or risk-seeking individual would. This paradox is exemplified in pedestrian behavior, where risk-averse individuals often choose routes they perceive as safer, even when those choices increase their overall exposure to danger.[24]
Attitudes towards risk have attracted the interest of the fields of neuroeconomics and behavioral economics. A 2009 study by Christopoulos et al. suggested that the activity of a specific brain area (right inferior frontal gyrus) correlates with risk aversion, with more risk-averse participants (i.e. those having higher risk premia) also having higher responses to safer options.[25] This result coincides with other studies[25][26] showing that neuromodulation of the same area results in participants making more or less risk-averse choices, depending on whether the modulation increases or decreases the activity of the target area.
In the real world, many government agencies, e.g. the Health and Safety Executive, are fundamentally risk-averse in their mandate. This often means that they demand (with the power of legal enforcement) that risks be minimized, even at the cost of losing the utility of the risky activity.
It is important to consider the opportunity cost when mitigating a risk; that is, the cost of not taking the risky action. Writing laws focused on the risk without the balance of the utility may misrepresent society's goals. The public understanding of risk, which influences political decisions, is an area which has recently been recognised as deserving focus. In 2007 Cambridge University initiated the Winton Professorship of the Public Understanding of Risk, a role described as outreach rather than traditional academic research by the holder, David Spiegelhalter.[27]
Children's services such as schools and playgrounds have become the focus of much risk-averse planning, meaning that children are often prevented from benefiting from activities that they would otherwise have had. Many playgrounds have been fitted with impact-absorbing matting surfaces. However, these are only designed to save children from death in the case of direct falls on their heads and do not achieve their main goals.[28] They are expensive, meaning that fewer resources are available to benefit users in other ways (such as building a playground closer to the child's home, reducing the risk of a road traffic accident on the way to it), and, some argue, children may attempt more dangerous acts, with confidence in the artificial surface. Shiela Sage, an early years school advisor, observes: "Children who are only ever kept in very safe places, are not the ones who are able to solve problems for themselves. Children need to have a certain amount of risk taking ... so they'll know how to get out of situations."[29][citation needed]
One experimental study with student subjects playing the game of the TV show Deal or No Deal finds that people are more risk-averse in the limelight than in the anonymity of a typical behavioral laboratory. In the laboratory treatments, subjects made decisions in a standard, computerized laboratory setting as typically employed in behavioral experiments. In the limelight treatments, subjects made their choices in a simulated game show environment, which included a live audience, a game show host, and video cameras.[30] In line with this, studies on investor behavior find that investors trade more and more speculatively after switching from phone-based to online trading,[31][32] and that investors tend to keep their core investments with traditional brokers and use a small fraction of their wealth to speculate online.[33]
The basis of the theory on the connection between employment status and risk aversion is the varying income level of individuals. On average, higher income earners are less risk-averse than lower income earners. In terms of employment, the greater the wealth of an individual, the less risk-averse they can afford to be, and the more inclined they are to make the move from a secure job to an entrepreneurial venture. The literature assumes that a small increase in income or wealth initiates the transition from employment to entrepreneurship, based on decreasing absolute risk aversion (DARA), constant absolute risk aversion (CARA), and increasing absolute risk aversion (IARA) preferences as properties of the utility function.[34] The apportioning-risk perspective can also be used as a factor in the transition of employment status, but only if the strength of downside risk aversion exceeds the strength of risk aversion.[34] If the behavioural approach is used to model an individual's decision on their employment status, there must be more variables than risk aversion and any absolute risk aversion preferences.
Incentive effects are a factor in the behavioural approach an individual takes in deciding to move from a secure job to entrepreneurship. Non-financial incentives provided by an employer can change the decision to transition into entrepreneurship, as the intangible benefits help to strengthen how risk-averse an individual is relative to the strength of downside risk aversion. Utility functions do not account for such effects and can often skew the estimated behavioural path that an individual takes towards their employment status.[35]
The design of experiments to determine at what increase of wealth or income an individual would change their employment status from a position of security to a riskier venture must include flexible utility specifications with salient incentives integrated with risk preferences.[35] The application of relevant experiments can avoid the generalisation of varying individual preferences through the use of this model and its specified utility functions.
https://en.wikipedia.org/wiki/Risk_aversion
The Jaccard index is a statistic used for gauging the similarity and diversity of sample sets.
It is defined in general as the ratio of two sizes (areas or volumes): the intersection size divided by the union size, also called intersection over union (IoU).
It was developed by Grove Karl Gilbert in 1884 as his ratio of verification (v)[1] and is now often called the critical success index in meteorology.[2] It was later developed independently by Paul Jaccard, who originally gave it the French name coefficient de communauté (coefficient of community),[3][4] and independently formulated again by T. Tanimoto.[5] Thus, it is also called the Tanimoto index or Tanimoto coefficient in some fields.
The Jaccard index measures similarity between finite non-empty sample sets and is defined as the size of the intersection divided by the size of the union of the sample sets:

J(A, B) = |A ∩ B| / |A ∪ B|
Note that by design, 0 ≤ J(A, B) ≤ 1. If the sets A and B have no elements in common, their intersection is empty, so |A ∩ B| = 0 and therefore J(A, B) = 0. The other extreme is that the two sets are equal; in that case A ∩ B = A ∪ B = A = B, so J(A, B) = 1. The Jaccard index is widely used in computer science, ecology, genomics and other sciences where binary or binarized data are used. Both the exact solution and approximation methods are available for hypothesis testing with the Jaccard index.[6]
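For finite sets the definition translates directly into code. A minimal sketch (the example sets are arbitrary):

```python
# Jaccard index for finite sets: J(A, B) = |A ∩ B| / |A ∪ B|.

def jaccard(a: set, b: set) -> float:
    if not a and not b:
        raise ValueError("J is undefined for two empty sets")
    return len(a & b) / len(a | b)

a = {"salt", "pepper", "cumin"}
b = {"salt", "sugar"}
assert jaccard(a, b) == 0.25   # 1 common element, 4 in the union
assert jaccard(a, a) == 1.0    # equal sets
assert jaccard(a, {"sugar"}) == 0.0  # disjoint sets
```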
Jaccard similarity also applies to bags, i.e., multisets. This has a similar formula,[7] but the symbols used represent bag intersection and bag sum (not union). The maximum value is 1/2.
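As a sketch of the bag form: the intersection takes the minimum multiplicity of each element, while the denominator is the bag sum (total multiplicities of both bags), which is why even identical bags only reach 1/2.

```python
from collections import Counter

# Bag (multiset) Jaccard: min multiplicities over the bag SUM.

def bag_jaccard(a: Counter, b: Counter) -> float:
    inter = sum((a & b).values())               # min multiplicity per element
    total = sum(a.values()) + sum(b.values())   # bag sum, not union
    return inter / total

x = Counter("aab")   # {a: 2, b: 1}
# The bag sum double-counts shared elements, capping the value at 1/2.
assert bag_jaccard(x, x) == 0.5
assert bag_jaccard(Counter("ab"), Counter("cd")) == 0.0
```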
The Jaccard distance, which measures dissimilarity between sample sets, is complementary to the Jaccard index and is obtained by subtracting the Jaccard index from 1 or, equivalently, by dividing the difference of the sizes of the union and the intersection of two sets by the size of the union:

d_J(A, B) = 1 − J(A, B) = (|A ∪ B| − |A ∩ B|) / |A ∪ B|
An alternative interpretation of the Jaccard distance is as the ratio of the size of the symmetric difference A △ B = (A ∪ B) − (A ∩ B) to the size of the union.
Jaccard distance is commonly used to calculate an n × n matrix for clustering and multidimensional scaling of n sample sets.
This distance is ametricon the collection of all finite sets.[8][9][10]
There is also a version of the Jaccard distance for measures, including probability measures. If μ is a measure on a measurable space X, then we define the Jaccard index by

J_μ(A, B) = μ(A ∩ B) / μ(A ∪ B)
and the Jaccard distance by

d_μ(A, B) = 1 − J_μ(A, B) = μ(A △ B) / μ(A ∪ B).
Care must be taken if μ(A ∪ B) = 0 or ∞, since these formulas are not well defined in those cases.
The MinHash min-wise independent permutations locality-sensitive hashing scheme may be used to efficiently compute an accurate estimate of the Jaccard similarity index of pairs of sets, where each set is represented by a constant-sized signature derived from the minimum values of a hash function.
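A sketch of the idea (seeded MD5 hashes stand in for random permutations; all names here are illustrative): each of k hash functions maps every element to an integer, the signature keeps only the minimum per hash, and the fraction of positions where two signatures agree is an unbiased estimate of the Jaccard similarity.

```python
import hashlib

# MinHash sketch: k seeded hashes, keep the minimum per seed.

def _h(seed: int, item: str) -> int:
    digest = hashlib.md5(f"{seed}:{item}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def minhash_signature(items, k=256):
    return [min(_h(seed, x) for x in items) for seed in range(k)]

def estimate_jaccard(sig_a, sig_b):
    # Pr[min-hashes agree] = J(A, B) for each independent hash.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = {f"user{i}" for i in range(100)}
b = {f"user{i}" for i in range(50, 150)}   # true J = 50/150 = 1/3
est = estimate_jaccard(minhash_signature(a), minhash_signature(b))
assert abs(est - 1/3) < 0.12   # the estimate concentrates around 1/3
```

The standard error of the estimate is roughly sqrt(J(1 − J)/k), so the signature size k trades accuracy against space.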
Given two objects, A and B, each with n binary attributes, the Jaccard index is a useful measure of the overlap that A and B share in their attributes. Each attribute of A and B can be either 0 or 1. The total number of each combination of attributes for both A and B is specified as follows:

M11 — the number of attributes where A and B both have a value of 1;
M01 — the number of attributes where A has value 0 and B has value 1;
M10 — the number of attributes where A has value 1 and B has value 0;
M00 — the number of attributes where A and B both have a value of 0.
Each attribute must fall into one of these four categories, meaning that

M11 + M01 + M10 + M00 = n.
The Jaccard similarity index, J, is given as

J = M11 / (M01 + M10 + M11).
The Jaccard distance, dJ, is given as

dJ = (M01 + M10) / (M01 + M10 + M11) = 1 − J.
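The binary-attribute form is a direct count of the M-categories. A minimal sketch (the example vectors are arbitrary):

```python
# Binary-attribute Jaccard: J = M11 / (M01 + M10 + M11).
# M00 (mutual absences) is deliberately ignored.

def jaccard_binary(a, b):
    m11 = sum(x == 1 and y == 1 for x, y in zip(a, b))
    m01 = sum(x == 0 and y == 1 for x, y in zip(a, b))
    m10 = sum(x == 1 and y == 0 for x, y in zip(a, b))
    return m11 / (m01 + m10 + m11)

a = [1, 1, 0, 0, 1]
b = [1, 0, 1, 0, 1]
# M11 = 2, M01 = 1, M10 = 1 (M00 = 1 is ignored) -> 2/4
assert jaccard_binary(a, b) == 0.5
```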
Statistical inference can be made based on the Jaccard similarity index, and consequently on related metrics.[6] Given two sample sets A and B with n attributes, a statistical test can be conducted to see if an overlap is statistically significant. The exact solution is available, although computation can be costly as n increases.[6] Estimation methods are available either by approximating a multinomial distribution or by bootstrapping.[6]
When used for binary attributes, the Jaccard index is very similar to the simple matching coefficient (SMC). The main difference is that the SMC has the term M00 in its numerator and denominator, whereas the Jaccard index does not. Thus, the SMC counts both mutual presences (when an attribute is present in both sets) and mutual absences (when an attribute is absent in both sets) as matches and compares them to the total number of attributes in the universe, whereas the Jaccard index only counts mutual presence as a match and compares it to the number of attributes that have been chosen by at least one of the two sets.
In market basket analysis, for example, the baskets of two consumers whom we wish to compare might only contain a small fraction of all the available products in the store, so the SMC will usually return very high values of similarity even when the baskets bear very little resemblance, thus making the Jaccard index a more appropriate measure of similarity in that context. For example, consider a supermarket with 1000 products and two customers. The basket of the first customer contains salt and pepper and the basket of the second contains salt and sugar. In this scenario, the similarity between the two baskets as measured by the Jaccard index would be 1/3, but the similarity becomes 0.998 using the SMC.
In other contexts, where 0 and 1 carry equivalent information (symmetry), the SMC is a better measure of similarity. For example, vectors of demographic variables stored in dummy variables, such as gender, would be better compared with the SMC than with the Jaccard index, since the impact of gender on similarity should be equal regardless of whether male is coded as 0 and female as 1 or the other way around. However, when we have symmetric dummy variables, one could replicate the behaviour of the SMC by splitting each dummy into two binary attributes (in this case, male and female), thus transforming them into asymmetric attributes and allowing the use of the Jaccard index without introducing any bias. The SMC remains, however, more computationally efficient in the case of symmetric dummy variables, since it does not require adding extra dimensions.
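The supermarket comparison above can be reproduced in a few lines. A sketch:

```python
# Reproducing the supermarket example: 1000 products,
# basket 1 = {salt, pepper}, basket 2 = {salt, sugar}.

def jaccard(a, b):
    return len(a & b) / len(a | b)

def smc(a, b, universe_size):
    # SMC also counts mutual absences (M00) as matches.
    m11 = len(a & b)
    m00 = universe_size - len(a | b)
    return (m11 + m00) / universe_size

basket1 = {"salt", "pepper"}
basket2 = {"salt", "sugar"}
assert jaccard(basket1, basket2) == 1/3
assert smc(basket1, basket2, 1000) == 0.998   # inflated by 997 shared absences
```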
If x = (x1, x2, …, xn) and y = (y1, y2, …, yn) are two vectors with all real xi, yi ≥ 0, then their Jaccard similarity index (also known then as Ruzicka similarity[citation needed]) is defined as

J_W(x, y) = Σi min(xi, yi) / Σi max(xi, yi)
and the Jaccard distance (also known then as Soergel distance) as

d_JW(x, y) = 1 − J_W(x, y).
With even more generality, if f and g are two non-negative measurable functions on a measurable space X with measure μ, then we can define

J(f, g) = ∫ min(f, g) dμ / ∫ max(f, g) dμ
where max and min are pointwise operators. The Jaccard distance is then

d_J(f, g) = 1 − J(f, g).
Then, for example, for two measurable sets A, B ⊆ X, we have J_μ(A, B) = J(χ_A, χ_B), where χ_A and χ_B are the characteristic functions of the corresponding sets.
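For finite vectors the weighted (Ruzicka) form is a sum of pointwise minima over a sum of pointwise maxima. A sketch, showing that on 0/1 indicator vectors it reduces to the set Jaccard index:

```python
# Weighted (Ruzicka) Jaccard similarity on non-negative vectors.

def weighted_jaccard(x, y):
    num = sum(min(a, b) for a, b in zip(x, y))
    den = sum(max(a, b) for a, b in zip(x, y))
    return num / den

# On 0/1 indicator vectors it reduces to the set Jaccard index.
assert weighted_jaccard([1, 1, 0, 0], [1, 0, 1, 0]) == 1/3
# On general non-negative vectors: min-sum 1.5 over max-sum 3.0.
assert weighted_jaccard([0.5, 2.0], [1.0, 1.0]) == 0.5
```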
The weighted Jaccard similarity described above generalizes the Jaccard index to positive vectors, where a set corresponds to a binary vector given by the indicator function, i.e. xi ∈ {0, 1}. However, it does not generalize the Jaccard index to probability distributions, where a set corresponds to a uniform probability distribution, i.e.

xi = 1/|X| if i ∈ X, and 0 otherwise.
It is always less if the sets differ in size. If |X| > |Y|, and xi = 1_X(i)/|X|, yi = 1_Y(i)/|Y|, then

J_W(x, y) = (|X ∩ Y|/|X|) / (|X ∩ Y|/|Y| + |X ∖ Y|/|X| + |Y ∖ X|/|Y|) < J(X, Y).
Instead, a generalization that is continuous between probability distributions and their corresponding support sets is

J_P(x, y) = Σ_{i : xi > 0, yi > 0} 1 / (Σ_j max(xj/xi, yj/yi)),
which is called the "Probability" Jaccard.[11] It has the following bounds against the weighted Jaccard on probability vectors:

J_W(x, y) ≤ J_P(x, y) ≤ 2 J_W(x, y) / (1 + J_W(x, y)).
Here the upper bound is the (weighted)Sørensen–Dice coefficient.
The corresponding distance, 1 − J_P(x, y), is a metric over probability distributions, and a pseudo-metric over non-negative vectors.
The Probability Jaccard Index has a geometric interpretation as the area of an intersection of simplices. Every point on a unit k-simplex corresponds to a probability distribution on k + 1 elements, because the unit k-simplex is the set of points in k + 1 dimensions that sum to 1. To derive the Probability Jaccard Index geometrically, represent a probability distribution as the unit simplex divided into sub-simplices according to the mass of each item. If you overlay two distributions represented in this way on top of each other, and intersect the simplices corresponding to each item, the area that remains is equal to the Probability Jaccard Index of the distributions.
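As a sketch, the probability Jaccard J_P(x, y) = Σ_{i: xi, yi > 0} 1 / Σ_j max(xj/xi, yj/yi) can be computed directly for discrete distributions; on uniform distributions over two sets it recovers the ordinary set Jaccard index:

```python
# Probability Jaccard for discrete distributions given as dicts
# {item: probability}: for each item carried by both distributions,
# add 1 / sum_j max(x_j/x_i, y_j/y_i).

def prob_jaccard(x: dict, y: dict) -> float:
    total = 0.0
    for i in x:
        if i in y and x[i] > 0 and y[i] > 0:
            denom = sum(max(x.get(j, 0.0) / x[i], y.get(j, 0.0) / y[i])
                        for j in set(x) | set(y))
            total += 1.0 / denom
    return total

def uniform(support):
    p = 1.0 / len(support)
    return {i: p for i in support}

# On uniform distributions it equals the set Jaccard index |X∩Y|/|X∪Y|.
X, Y = {1, 2, 3}, {2, 3, 4, 5}
assert abs(prob_jaccard(uniform(X), uniform(Y)) - 2/5) < 1e-12
assert abs(prob_jaccard(uniform(X), uniform(X)) - 1.0) < 1e-12
```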
Consider the problem of constructing random variables such that they collide with each other as much as possible. That is, if X ∼ x and Y ∼ y, we would like to construct X and Y so as to maximize Pr[X = Y]. If we look at just two distributions x, y in isolation, the highest Pr[X = Y] we can achieve is 1 − TV(x, y), where TV is the total variation distance. However, suppose we are not just concerned with maximizing that particular pair; suppose we would like to maximize the collision probability of every arbitrary pair. One could construct an infinite number of random variables, one for each distribution x, and seek to maximize Pr[X = Y] for all pairs x, y. In the fairly strong sense described below, the Probability Jaccard Index is an optimal way to align these random variables.
For any sampling method G and discrete distributions x, y: if Pr[G(x) = G(y)] > J_P(x, y), then for some z with J_P(x, z) > J_P(x, y) and J_P(y, z) > J_P(x, y), either Pr[G(x) = G(z)] < J_P(x, z) or Pr[G(y) = G(z)] < J_P(y, z).[11]
That is, no sampling method can achieve more collisions than J_P on one pair without achieving fewer collisions than J_P on another pair, where the reduced pair is more similar under J_P than the increased pair. This theorem is true for the Jaccard index of sets (if interpreted as uniform distributions) and the probability Jaccard, but not for the weighted Jaccard. (The theorem uses the word "sampling method" to describe a joint distribution over all distributions on a space, because it derives from the use of weighted minhashing algorithms that achieve this as their collision probability.)
This theorem has a visual proof on three element distributions using the simplex representation.
Various forms of functions described as Tanimoto similarity and Tanimoto distance occur in the literature and on the Internet. Most are synonyms for Jaccard similarity and Jaccard distance, but some are mathematically different. Many sources[12] cite an IBM Technical Report[5] as the seminal reference.
In "A Computer Program for Classifying Plants", published in October 1960,[13]a method of classification based on a similarity ratio, and a derived distance function, is given. It seems that this is the most authoritative source for the meaning of the terms "Tanimoto similarity" and "Tanimoto Distance". The similarity ratio is equivalent to Jaccard similarity, but the distance function isnotthe same as Jaccard distance.
In that paper, a "similarity ratio" is given over bitmaps, where each bit of a fixed-size array represents the presence or absence of a characteristic in the plant being modelled. The definition of the ratio is the number of common bits, divided by the number of bits set (i.e. nonzero) in either sample.
Presented in mathematical terms, if samples X and Y are bitmaps, X_i is the ith bit of X, and ∧, ∨ are bitwise and, or operators respectively, then the similarity ratio T_s is

T_s(X, Y) = Σ_i (X_i ∧ Y_i) / Σ_i (X_i ∨ Y_i).
If each sample is modelled instead as a set of attributes, this value is equal to the Jaccard index of the two sets. Jaccard is not cited in the paper, and it seems likely that the authors were not aware of it.[citation needed]
Tanimoto goes on to define a "distance" based on this ratio, defined for bitmaps with non-zero similarity:

T_d(X, Y) = −log₂(T_s(X, Y)).
This coefficient is, deliberately, not a distance metric. It is chosen to allow the possibility of two specimens, which are quite different from each other, both being similar to a third. It is easy to construct an example which disproves the triangle inequality.
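A concrete counterexample (a sketch; the sets here are arbitrary) shows the log-based distance T_d(X, Y) = −log₂ T_s(X, Y) violating the triangle inequality:

```python
from math import log2

# Tanimoto similarity ratio and log-based "distance" on sets
# (equivalent to bitmaps with one bit per element).

def t_s(x: set, y: set) -> float:
    return len(x & y) / len(x | y)

def t_d(x: set, y: set) -> float:
    return -log2(t_s(x, y))   # defined only for non-zero similarity

A, B, C = {1, 2}, {2, 3}, {1, 2, 3}
# d(A,B) = -log2(1/3) ≈ 1.585, but d(A,C) + d(C,B) = 2·(-log2(2/3)) ≈ 1.170,
# so going "through" C is shorter than the direct distance.
assert t_d(A, B) > t_d(A, C) + t_d(C, B)
```

The violation occurs exactly when T_s(A, B) < T_s(A, C)·T_s(C, B), which is easy to arrange by taking C to contain both A and B.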
Tanimoto distance is often referred to, erroneously, as a synonym for Jaccard distance 1 − T_s. This function is a proper distance metric. "Tanimoto distance" is often stated to be a proper distance metric, probably because of its confusion with Jaccard distance.[clarification needed][citation needed]
If Jaccard or Tanimoto similarity is expressed over a bit vector, then it can be written as

f(A, B) = (A · B) / (‖A‖² + ‖B‖² − A · B),
where the same calculation is expressed in terms of vector scalar product and magnitude. This representation relies on the fact that, for a bit vector (where the value of each dimension is either 0 or 1),

A · B = Σ_i A_i B_i = |A ∩ B|
and

‖A‖² = Σ_i A_i² = |A|.
This is a potentially confusing representation, because the function as expressed over vectors is more general, unless its domain is explicitly restricted. Properties of T_s do not necessarily extend to f. In particular, the difference function 1 − f does not preserve the triangle inequality, and is therefore not a proper distance metric, whereas 1 − T_s is.
There is a real danger that the combination of "Tanimoto distance" being defined using this formula, along with the statement "Tanimoto distance is a proper distance metric", will lead to the false conclusion that the function 1 − f is in fact a distance metric over vectors or multisets in general, whereas its use in similarity-search or clustering algorithms may then fail to produce correct results.
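The vector form is a one-liner, and on 0/1 bit vectors it coincides with the set Jaccard index. A sketch:

```python
# Vector form of Tanimoto/Jaccard similarity:
# f(A, B) = A·B / (|A|^2 + |B|^2 - A·B).

def f(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sum(x * x for x in a) + sum(y * y for y in b) - dot)

bits_a = [1, 1, 0, 1, 0]
bits_b = [1, 0, 1, 1, 0]
# On bit vectors: A·B = |A ∩ B| = 2 and |A|² + |B|² − A·B = |A ∪ B| = 4.
assert f(bits_a, bits_b) == 0.5
```

Applied to general real vectors, however, f is a different function from the weighted (min/max) Jaccard, which is part of the confusion the text describes.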
Lipkus[9] uses a definition of Tanimoto similarity which is equivalent to f, and refers to Tanimoto distance as the function 1 − f. It is, however, made clear within the paper that the context is restricted by the use of a (positive) weighting vector W such that, for any vector A being considered, A_i ∈ {0, W_i}. Under these circumstances, the function is a proper distance metric, and so a set of vectors governed by such a weighting vector forms a metric space under this function.
In confusion matrices employed for binary classification, the Jaccard index can be framed in the following formula:

J = TP / (TP + FP + FN),
where TP are the true positives, FP the false positives and FN the false negatives.[14]
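Counting those three quantities from paired label vectors gives the familiar intersection-over-union score. A sketch (the labels are arbitrary):

```python
# Jaccard index (IoU) of a binary classifier's positive predictions
# against the true positives: J = TP / (TP + FP + FN).
# True negatives (TN) do not appear, matching the set definition.

def jaccard_from_labels(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fp + fn)

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
# TP = 2, FP = 1, FN = 1 -> J = 2/4
assert jaccard_from_labels(y_true, y_pred) == 0.5
```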
https://en.wikipedia.org/wiki/Jaccard_index
Meta (from the Greek μετά, meta, meaning 'after' or 'beyond') is an adjective meaning 'more comprehensive' or 'transcending'.[1]
In modern nomenclature, meta can also serve as a prefix meaning self-referential, as a field of study or endeavor (metatheory: theory about a theory; metamathematics: mathematical theories about mathematics; meta-axiomatics or meta-axiomaticity: axioms about axiomatic systems; metahumor: joking about the ways humor is expressed; etc.).[2]
In Greek, the prefix meta- is generally less esoteric than in English; Greek meta- is equivalent to the Latin words post- or ad-. The use of the prefix in this sense occurs occasionally in scientific English terms derived from Greek. For example, the term Metatheria (the name for the clade of marsupial mammals) uses the prefix meta- in the sense that the Metatheria occur on the tree of life adjacent to the Theria (the placental mammals).
In epistemology, and often in common use, the prefix meta- is used to mean 'about (its own category)'. For example, metadata is data about data (who has produced it, when, what format the data are in, and so on). In a database, metadata is also data about data stored in a data dictionary, describing information (data) about database tables such as the table name, table owner, details about columns, etc. – essentially describing the table. In psychology, metamemory refers to an individual's knowledge about whether or not they would remember something if they concentrated on recalling it. The modern sense of "an X about X" has given rise to concepts like "meta-cognition" (cognition about cognition), "meta-emotion" (emotion about emotion), "meta-discussion" (discussion about discussion), "meta-joke" (joke about jokes), and "metaprogramming" (writing programs about writing programs). In a rule-based system, a metarule is a rule governing the application of other rules.[3]
"Metagaming", accordingly, refers to games about games. However, it has a different meaning depending on the context. In role-playing games, it means that someone is playing with a higher level of knowledge: the player incorporates factors that are outside the actual framework of the game, knowledge acquired not through experiencing the game but through external sources. This type of metagaming is often frowned upon in many role-playing game communities because it impairs game balance and equality of opportunity.[4] Metagaming can also refer to a game that is used to create or change the rules while playing a game. One can play this type of metagame and choose which rules apply during the game itself, potentially changing the level of difficulty. Such metagames include campaign role-playing games like Halo 3.[5] Complex card or board games, e.g. poker or chess, are also often referred to as metagames. According to Nigel Howard, this type of metagame is defined as a decision-making process derived from the analysis of possible outcomes in relation to external variables that change a problem.[6]
Any subject can be said to have a metatheory, a theoretical consideration of its properties – such as its foundations, methods, form, and utility – on a higher level of abstraction. In linguistics, a grammar is considered to be a metalanguage: a language operating on a higher level to describe properties of the plain language, and not itself.
The prefix comes from the Greek preposition and prefix meta- (μετα-), from μετά,[7] which typically means "after", "beside", "with" or "among". Other meanings include "beyond", "adjacent" and "self". It is also used in the forms μετ- (met-) before vowels and μεθ- (meth-) before aspirated vowels.
The earliest form of the word "meta" is the Mycenaean Greek me-ta, written in Linear B syllabic script.[8] The Greek preposition is cognate with the Old English preposition mid, "with", still found as a prefix in midwife. Its use in English is the result of back-formation from the word "metaphysics". In origin, Metaphysics was just the title of one of the principal works of Aristotle; it was so named (by Andronicus of Rhodes) because in the customary ordering of the works of Aristotle it was the book following Physics; it thus meant nothing more than "[the book that comes] after [the book entitled] Physics". However, even Latin writers misinterpreted this as entailing that metaphysics constituted "the science of what is beyond the physical". Nonetheless, Aristotle's Metaphysics enunciates considerations of a nature[clarification needed] above physical reality, which one can examine through certain philosophy – for example, such a thing as an unmoved mover.[9] The use of the prefix was later extended to other contexts, based on the understanding of metaphysics as meaning "the science of what is beyond the physical".
The Oxford English Dictionary cites uses of the meta- prefix as "beyond, about" (such as meta-economics and meta-philosophy) going back to 1917. However, these formations are parallel to the original "metaphysics" and "metaphysical", that is, as a prefix to general nouns (fields of study) or adjectives. Going by the OED citations, it began being used with specific nouns in connection with mathematical logic sometime before 1929. (In 1920 David Hilbert proposed a research project in what was called "metamathematics".)
A notable early citation is W. V. O. Quine's 1937 use of the word "metatheorem",[10]where meta- has the modern meaning of "an X about X".
Douglas Hofstadter, in his 1979 book Gödel, Escher, Bach (and in the 1985 sequel, Metamagical Themas), popularized this meaning of the term. The book, which deals with self-reference and strange loops, and touches on Quine and his work, was influential in many computer-related subcultures and may be responsible for the popularity of the prefix, for its use as a solo term, and for the many recent coinages which use it.[11]Hofstadter uses meta as a stand-alone word, as an adjective, and as a directional preposition ("going meta", a term he coins for the old rhetorical trick of taking a debate or analysis to another level of abstraction, as when somebody says "This debate isn't going anywhere"). This book may also be responsible for the association of "meta" with strange loops, as opposed to just abstraction. According to Hofstadter, it is about self-reference, which means a sentence, idea or formula refers to itself. The Merriam-Webster Dictionary describes it as "showing or suggesting an explicit awareness of itself or oneself as a member of its category: cleverly self-referential".[12]The sentence "This sentence contains thirty-six letters," and the sentence which embeds it, are examples of "metasentences" referencing themselves in this way. As maintained in the book Gödel, Escher, Bach, a strange loop arises when different logical statements or theories are put together in contradiction, thus distorting the meaning and generating logical paradoxes. One example is the liar paradox, a paradox in philosophy or logic that arises when a sentence claims its own falsehood (or untruth); for instance: "This sentence is not true."
Until the beginning of the 20th century, this kind of paradox was a considerable problem for a philosophical theory of truth. Alfred Tarski solved this difficulty by proving that such paradoxes do not arise under a consistent separation of object language and metalanguage.[13]"For every formalized language, a formally correct and factually applicable definition of the true statement can be constructed in the metalanguage with the sole help of expressions of a general-logical character, expressions of the language itself and of terms from the morphology of the language, but on the condition that the metalanguage is of a higher order than the language that is the subject of the investigation."[14]
Metagaming is a general term describing an approach to playing a game as optimally as possible within its current rules. The shorthand meta has been backronymed as "Most Effective Tactics Available" to tersely explain the concept.
In the world of competitive games, rule imprecisions and non-goal-oriented play are not commonplace. As a result, the extent of metagaming narrows down mostly to studying the strategies of top players and exploiting commonly used strategies for an advantage.[15]Those may evolve as updates are released or as new, better strategies are discovered by top players.[16]The opposite metagame of playing a relatively unknown strategy for surprise is often called off-meta.[15]
This usage is particularly common in games that have large, organized play systems or tournament circuits. Some examples of this kind of environment are tournament scenes for tabletop or computer collectible card games like Magic: The Gathering, Gwent: The Witcher Card Game or Hearthstone, tabletop war-gaming such as Warhammer 40,000 and Flames of War, or team-based multiplayer online games such as Star Conflict, Dota 2, League of Legends, and Team Fortress 2. In some games, such as Heroes of the Storm, varied level design makes the battleground a significant factor in the metagame.[16]
https://en.wikipedia.org/wiki/Meta_(prefix)
In probability theory and machine learning, the multi-armed bandit problem (sometimes called the K-[1]or N-armed bandit problem[2]) is a problem in which a decision maker iteratively selects one of multiple fixed choices (i.e., arms or actions) when the properties of each choice are only partially known at the time of allocation, and may become better understood as time passes. A fundamental aspect of bandit problems is that choosing an arm does not affect the properties of the arm or other arms.[3]
Instances of the multi-armed bandit problem include the task of iteratively allocating a fixed, limited set of resources between competing (alternative) choices in a way that minimizes the regret.[4][5]A notable alternative setup for the multi-armed bandit problem includes the "best arm identification (BAI)" problem, where the goal is instead to identify the best choice by the end of a finite number of rounds.[6]
The multi-armed bandit problem is a classic reinforcement learning problem that exemplifies the exploration–exploitation tradeoff dilemma. In contrast to general RL, the selected actions in bandit problems do not affect the reward distribution of the arms. The name comes from imagining a gambler at a row of slot machines (sometimes known as "one-armed bandits"), who has to decide which machines to play, how many times to play each machine and in which order to play them, and whether to continue with the current machine or try a different machine.[7]The multi-armed bandit problem also falls into the broad category of stochastic scheduling.
In the problem, each machine provides a random reward from a probability distribution specific to that machine, which is not known a priori. The objective of the gambler is to maximize the sum of rewards earned through a sequence of lever pulls.[4][5]The crucial tradeoff the gambler faces at each trial is between "exploitation" of the machine that has the highest expected payoff and "exploration" to get more information about the expected payoffs of the other machines. The trade-off between exploration and exploitation is also faced in machine learning. In practice, multi-armed bandits have been used to model problems such as managing research projects in a large organization, like a science foundation or a pharmaceutical company.[4][5]In early versions of the problem, the gambler begins with no initial knowledge about the machines.
Herbert Robbins in 1952, realizing the importance of the problem, constructed convergent population selection strategies in "Some aspects of the sequential design of experiments".[8]A theorem, the Gittins index, first published by John C. Gittins, gives an optimal policy for maximizing the expected discounted reward.[9]
The multi-armed bandit problem models an agent that simultaneously attempts to acquire new knowledge (called "exploration") and optimize their decisions based on existing knowledge (called "exploitation"). The agent attempts to balance these competing tasks in order to maximize their total value over the period of time considered. There are many practical applications of the bandit model, for example:
In these practical examples, the problem requires balancing reward maximization based on the knowledge already acquired with attempting new actions to further increase knowledge. This is known as the exploitation vs. exploration tradeoff in machine learning.
The model has also been used to control dynamic allocation of resources to different projects, answering the question of which project to work on, given uncertainty about the difficulty and payoff of each possibility.[14]
Originally considered by Allied scientists in World War II, it proved so intractable that, according to Peter Whittle, the problem was proposed to be dropped over Germany so that German scientists could also waste their time on it.[15]
The version of the problem now commonly analyzed was formulated by Herbert Robbins in 1952.
The multi-armed bandit (short: bandit or MAB) can be seen as a set of real distributionsB={R1,…,RK}{\displaystyle B=\{R_{1},\dots ,R_{K}\}}, each distribution being associated with the rewards delivered by one of theK∈N+{\displaystyle K\in \mathbb {N} ^{+}}levers. Letμ1,…,μK{\displaystyle \mu _{1},\dots ,\mu _{K}}be the mean values associated with these reward distributions. The gambler iteratively plays one lever per round and observes the associated reward. The objective is to maximize the sum of the collected rewards. The horizonH{\displaystyle H}is the number of rounds that remain to be played. The bandit problem is formally equivalent to a one-state Markov decision process. The regretρ{\displaystyle \rho }afterT{\displaystyle T}rounds is defined as the expected difference between the reward sum associated with an optimal strategy and the sum of the collected rewards:
ρ=Tμ∗−∑t=1Tr^t{\displaystyle \rho =T\mu ^{*}-\sum _{t=1}^{T}{\widehat {r}}_{t}},
whereμ∗{\displaystyle \mu ^{*}}is the maximal reward mean,μ∗=maxk{μk}{\displaystyle \mu ^{*}=\max _{k}\{\mu _{k}\}}, andr^t{\displaystyle {\widehat {r}}_{t}}is the reward in roundt.
A zero-regret strategy is a strategy whose average regret per roundρ/T{\displaystyle \rho /T}tends to zero with probability 1 when the number of played rounds tends to infinity.[16]Intuitively, zero-regret strategies are guaranteed to converge to a (not necessarily unique) optimal strategy if enough rounds are played.
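The regret definition above can be made concrete with a small simulation. The following sketch plays Bernoulli arms with an epsilon-greedy gambler and reports the regret ρ = Tμ* − Σ r̂_t; the arm means, horizon, and the value of ε are illustrative choices, not part of the formal definition.

```c
#include <stdlib.h>

#define K 3
#define T_ROUNDS 10000

/* Simulate an epsilon-greedy gambler on K Bernoulli arms and return the
   regret: T * mu_star minus the sum of collected rewards. */
double run_bandit(unsigned seed)
{
    const double mu[K] = {0.2, 0.5, 0.8};   /* true (hidden) arm means   */
    const double epsilon = 0.1;             /* exploration probability   */
    double sum[K] = {0.0}, reward_total = 0.0;
    int pulls[K] = {0};
    srand(seed);

    for (int t = 0; t < T_ROUNDS; t++) {
        int k;
        if (t < K) {
            k = t;                          /* pull each arm once first  */
        } else if ((double)rand() / RAND_MAX < epsilon) {
            k = rand() % K;                 /* explore uniformly         */
        } else {
            k = 0;                          /* exploit empirical best    */
            for (int j = 1; j < K; j++)
                if (sum[j] / pulls[j] > sum[k] / pulls[k])
                    k = j;
        }
        double r = ((double)rand() / RAND_MAX < mu[k]) ? 1.0 : 0.0;
        sum[k] += r;
        pulls[k]++;
        reward_total += r;
    }
    double mu_star = 0.8;                        /* max_k mu_k           */
    return T_ROUNDS * mu_star - reward_total;    /* the regret rho       */
}
```

With these settings the regret stays far below the worst case of T(μ* − min μ) = 6000, reflecting that epsilon-greedy is (close to) a zero-regret strategy for fixed ε only up to the εT exploration cost.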
A common formulation is the binary multi-armed bandit or Bernoulli multi-armed bandit, which issues a reward of one with probabilityp{\displaystyle p}, and otherwise a reward of zero.
Another formulation of the multi-armed bandit has each arm representing an independent Markov machine. Each time a particular arm is played, the state of that machine advances to a new one, chosen according to the Markov state evolution probabilities. There is a reward depending on the current state of the machine. In a generalization called the "restless bandit problem", the states of non-played arms can also evolve over time.[17]There has also been discussion of systems where the number of choices (about which arm to play) increases over time.[18]
Computer science researchers have studied multi-armed bandits under worst-case assumptions, obtaining algorithms to minimize regret in both finite and infinite (asymptotic) time horizons for both stochastic[1]and non-stochastic[19]arm payoffs.
An important variation of the classical regret minimization problem in multi-armed bandits is that of best arm identification (BAI),[20]also known as pure exploration. This problem is crucial in various applications, including clinical trials, adaptive routing, recommendation systems, and A/B testing.
In BAI, the objective is to identify the arm having the highest expected reward. An algorithm in this setting is characterized by a sampling rule, a decision rule, and a stopping rule, described as follows:
There are two predominant settings in BAI:
Fixed budget setting: Given a time horizonT≥1{\displaystyle T\geq 1}, the objective is to identify the arm with the highest expected rewarda⋆∈argmaxkμk{\displaystyle a^{\star }\in \arg \max _{k}\mu _{k}}while minimizing the probability of errorδ{\displaystyle \delta }.
Fixed confidence setting: Given a confidence levelδ∈(0,1){\displaystyle \delta \in (0,1)}, the objective is to identify the arm with the highest expected rewarda⋆∈argmaxkμk{\displaystyle a^{\star }\in \arg \max _{k}\mu _{k}}with the least possible number of trials and with probability of errorP(a^τ≠a⋆)≤δ{\displaystyle \mathbb {P} ({\hat {a}}_{\tau }\neq a^{\star })\leq \delta }.
The simplest decision rule keeps, for each arm, the number of times it was pulled and the sum of the rewards it returned, and outputs the arm whose empirical mean reward (reward sum divided by pull count) is the largest when the algorithm stops. More refined decision rules weigh these empirical means against how often each arm was sampled, since a mean estimated from few pulls is less reliable.
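As a minimal sketch of the three components in the fixed-budget setting: spend the budget uniformly across the arms (sampling rule), stop when it is exhausted (stopping rule), and output the arm with the highest empirical mean (decision rule). The Bernoulli arm means below are illustrative.

```c
#include <stdlib.h>

/* Naive fixed-budget best-arm identification: round-robin sampling of
   k_arms Bernoulli arms for `budget` pulls, then return the index of the
   arm with the highest empirical mean reward. */
int bai_uniform(const double *mu, int k_arms, int budget, unsigned seed)
{
    double sum[16] = {0.0};
    int pulls[16] = {0};
    srand(seed);

    for (int t = 0; t < budget; t++) {
        int k = t % k_arms;                /* round-robin sampling rule  */
        sum[k] += ((double)rand() / RAND_MAX < mu[k]) ? 1.0 : 0.0;
        pulls[k]++;
    }
    int best = 0;                          /* empirical-best decision    */
    for (int k = 1; k < k_arms; k++)
        if (sum[k] / pulls[k] > sum[best] / pulls[best])
            best = k;
    return best;
}
```

With a large enough budget relative to the gaps between arm means, the probability of returning a sub-optimal arm decays exponentially; adaptive sampling rules spend the same budget more efficiently by dropping clearly inferior arms early.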
A major breakthrough was the construction of optimal population selection strategies, or policies (that possess uniformly maximum convergence rate to the population with highest mean) in the work described below.
In the paper "Asymptotically efficient adaptive allocation rules", Lai and Robbins[21](following papers of Robbins and his co-workers going back to Robbins in the year 1952) constructed convergent population selection policies that possess the fastest rate of convergence (to the population with highest mean) for the case that the population reward distributions are the one-parameter exponential family. Then, inKatehakisandRobbins[22]simplifications of the policy and the main proof were given for the case of normal populations with known variances. The next notable progress was obtained by Burnetas andKatehakisin the paper "Optimal adaptive policies for sequential allocation problems",[23]where index based policies with uniformly maximum convergence rate were constructed, under more general conditions that include the case in which the distributions of outcomes from each population depend on a vector of unknown parameters. Burnetas and Katehakis (1996) also provided an explicit solution for the important case in which the distributions of outcomes follow arbitrary (i.e., non-parametric) discrete, univariate distributions.
Later in "Optimal adaptive policies for Markov decision processes"[24]Burnetas and Katehakis studied the much larger model of Markov Decision Processes under partial information, where the transition law and/or the expected one period rewards may depend on unknown parameters. In this work, the authors constructed an explicit form for a class of adaptive policies with uniformly maximum convergence rate properties for the total expected finite horizon reward under sufficient assumptions of finite state-action spaces and irreducibility of the transition law. A main feature of these policies is that the choice of actions, at each state and time period, is based on indices that are inflations of the right-hand side of the estimated average reward optimality equations. These inflations have recently been called the optimistic approach in the work of Tewari and Bartlett,[25]Ortner[26]Filippi, Cappé, and Garivier,[27]and Honda and Takemura.[28]
For Bernoulli multi-armed bandits, Pilarski et al.[29]studied computation methods of deriving fully optimal solutions (not just asymptotically) using dynamic programming in the paper "Optimal Policy for Bernoulli Bandits: Computation and Algorithm Gauge."[29]Via indexing schemes, lookup tables, and other techniques, this work provided practically applicable optimal solutions for Bernoulli bandits provided that time horizons and numbers of arms did not become excessively large. Pilarski et al.[30]later extended this work in "Delayed Reward Bernoulli Bandits: Optimal Policy and Predictive Meta-Algorithm PARDI"[30]to create a method of determining the optimal policy for Bernoulli bandits when rewards may not be immediately revealed following a decision and may be delayed. This method relies upon calculating expected values of reward outcomes which have not yet been revealed and updating posterior probabilities when rewards are revealed.
When optimal solutions to multi-arm bandit tasks[31]are used to derive the value of animals' choices, the activity of neurons in the amygdala and ventral striatum encodes the values derived from these policies, and can be used to decode when the animals make exploratory versus exploitative choices. Moreover, optimal policies better predict animals' choice behavior than alternative strategies (described below). This suggests that the optimal solutions to multi-arm bandit problems are biologically plausible, despite being computationally demanding.[32]
Many strategies exist which provide an approximate solution to the bandit problem, and can be put into the four broad categories detailed below.
Semi-uniform strategies were the earliest (and simplest) strategies discovered to approximately solve the bandit problem. All those strategies have in common a greedy behavior where the best lever (based on previous observations) is always pulled except when a (uniformly) random action is taken.
Probability matching strategies reflect the idea that the number of pulls for a given lever should match its actual probability of being the optimal lever. Probability matching strategies are also known as Thompson sampling or Bayesian bandits,[37][38]and are surprisingly easy to implement if one can sample from the posterior for the mean value of each alternative.
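A minimal sketch of Thompson sampling for Bernoulli arms follows. It keeps a Beta(successes+1, failures+1) posterior per arm, draws one sample from each posterior, and pulls the arm with the largest draw. To stay self-contained it samples Beta(a, b) with integer parameters via the order-statistic identity (the a-th smallest of a+b−1 uniforms is Beta(a, b)-distributed); a real implementation would use a library Beta sampler. The arm means and horizon are illustrative.

```c
#include <stdlib.h>

static int cmp_double(const void *x, const void *y)
{
    double a = *(const double *)x, b = *(const double *)y;
    return (a > b) - (a < b);
}

/* Draw from Beta(a, b) for integer a, b >= 1: the a-th smallest of
   a+b-1 independent uniforms has exactly this distribution.
   (Illustrative O(n log n) sampler; assumes a+b-1 < 4096.) */
static double beta_sample(int a, int b)
{
    double u[4096];
    int n = a + b - 1;
    for (int i = 0; i < n; i++)
        u[i] = (double)rand() / RAND_MAX;
    qsort(u, n, sizeof(double), cmp_double);
    return u[a - 1];
}

/* Run Thompson sampling on k_arms Bernoulli arms for `horizon` rounds and
   return the index of the arm pulled most often. */
int thompson_most_pulled(const double *mu, int k_arms, int horizon, unsigned seed)
{
    int s[16] = {0}, f[16] = {0};          /* per-arm successes/failures */
    srand(seed);
    for (int t = 0; t < horizon; t++) {
        int k = 0;
        double best = -1.0;
        for (int j = 0; j < k_arms; j++) { /* sample each posterior      */
            double theta = beta_sample(s[j] + 1, f[j] + 1);
            if (theta > best) { best = theta; k = j; }
        }
        if ((double)rand() / RAND_MAX < mu[k]) s[k]++; else f[k]++;
    }
    int most = 0;
    for (int j = 1; j < k_arms; j++)
        if (s[j] + f[j] > s[most] + f[most]) most = j;
    return most;
}
```

The probability-matching property is visible here: an arm is pulled exactly as often as its posterior sample happens to be the largest, i.e. in proportion to the posterior probability that it is the best arm.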
Probability matching strategies also admit solutions to so-called contextual bandit problems.[37]
Pricing strategies establish a price for each lever. For example, as illustrated with the POKER algorithm,[16]the price can be the sum of the expected reward plus an estimate of the extra future rewards that will be gained through the additional knowledge. The lever of highest price is always pulled.
A useful generalization of the multi-armed bandit is the contextual multi-armed bandit. At each iteration an agent still has to choose between arms, but it also sees a d-dimensional feature vector (the context vector), which it can use together with the rewards of the arms played in the past to make the choice of the arm to play. Over time, the learner's aim is to collect enough information about how the context vectors and rewards relate to each other, so that it can predict the next best arm to play by looking at the feature vectors.[39]
Many strategies exist that provide an approximate solution to the contextual bandit problem, and can be put into two broad categories detailed below.
In practice, there is usually a cost associated with the resource consumed by each action and the total cost is limited by a budget in many applications such as crowdsourcing and clinical trials. Constrained contextual bandit (CCB) is such a model that considers both the time and budget constraints in a multi-armed bandit setting.
A. Badanidiyuru et al.[54]first studied contextual bandits with budget constraints, also referred to as Resourceful Contextual Bandits, and showed that anO(T){\displaystyle O({\sqrt {T}})}regret is achievable. However, their work focuses on a finite set of policies, and the algorithm is computationally inefficient.
A simple algorithm with logarithmic regret is proposed in:[55]
Another variant of the multi-armed bandit problem is called the adversarial bandit, first introduced by Auer and Cesa-Bianchi (1998). In this variant, at each iteration, an agent chooses an arm and an adversary simultaneously chooses the payoff structure for each arm. This is one of the strongest generalizations of the bandit problem[56]as it removes all assumptions of the distribution and a solution to the adversarial bandit problem is a generalized solution to the more specific bandit problems.
An example often considered for adversarial bandits is the iterated prisoner's dilemma. In this example, each adversary has two arms to pull: they can either deny or confess. Standard stochastic bandit algorithms do not work very well with these iterations. For example, if the opponent cooperates in the first 100 rounds, defects for the next 200, then cooperates in the following 300, and so on, then algorithms such as UCB will not be able to react very quickly to these changes. This is because after a certain point sub-optimal arms are rarely pulled, to limit exploration and focus on exploitation. When the environment changes, the algorithm is unable to adapt, or may not even detect the change.
Source:[57]
EXP3 is a popular algorithm for adversarial multi-armed bandits, suggested and analyzed in this setting by Auer et al. (2002b).
Recently there has been increased interest in the performance of this algorithm in the stochastic setting, due to its new applications to stochastic multi-armed bandits with side information (Seldin et al., 2011) and to multi-armed bandits in the mixed stochastic-adversarial setting (Bubeck and Slivkins, 2012).
The paper presented an empirical evaluation and improved analysis of the performance of the EXP3 algorithm in the stochastic setting, as well as a modification of the EXP3 algorithm capable of achieving "logarithmic" regret in stochastic environments.
Exp3 chooses an arm at random: with probability(1−γ){\displaystyle (1-\gamma )}it prefers arms with higher weights (exploit), and with probabilityγ{\displaystyle \gamma }it explores uniformly at random. After receiving the rewards, the weights are updated. The exponential growth significantly increases the weight of good arms.
The (external) regret of the Exp3 algorithm is at mostO(KTlog(K)){\displaystyle O({\sqrt {KTlog(K)}})}.
We follow the arm that we think has the best performance so far, adding exponential noise to it to provide exploration.[58]
In the original specification and in the above variants, the bandit problem is specified with a discrete and finite number of arms, often indicated by the variableK{\displaystyle K}. In the infinite-armed case, introduced by Agrawal (1995),[59]the "arms" are a continuous variable inK{\displaystyle K}dimensions.
This framework refers to the multi-armed bandit problem in a non-stationary setting (i.e., in presence of concept drift). In the non-stationary setting, it is assumed that the expected reward for an armk{\displaystyle k}can change at every time stept∈T{\displaystyle t\in {\mathcal {T}}}:μt−1k≠μtk{\displaystyle \mu _{t-1}^{k}\neq \mu _{t}^{k}}. Thus,μtk{\displaystyle \mu _{t}^{k}}no longer represents the whole sequence of expected (stationary) rewards for armk{\displaystyle k}. Instead,μk{\displaystyle \mu ^{k}}denotes the sequence of expected rewards for armk{\displaystyle k}, defined asμk={μtk}t=1T{\displaystyle \mu ^{k}=\{\mu _{t}^{k}\}_{t=1}^{T}}.[60]
A dynamic oracle represents the optimal policy to be compared with other policies in the non-stationary setting. The dynamic oracle optimises the expected reward at each stept∈T{\displaystyle t\in {\mathcal {T}}}by always selecting the best arm, with expected reward ofμt∗{\displaystyle \mu _{t}^{*}}. Thus, the cumulative expected rewardD(T){\displaystyle {\mathcal {D}}(T)}for the dynamic oracle at final time stepT{\displaystyle T}is defined as:
D(T)=∑t=1Tμt∗.{\displaystyle {\mathcal {D}}(T)=\sum _{t=1}^{T}{\mu _{t}^{*}}.}
Hence, the regretρπ(T){\displaystyle \rho ^{\pi }(T)}for policyπ{\displaystyle \pi }is computed as the difference betweenD(T){\displaystyle {\mathcal {D}}(T)}and the cumulative expected reward at stepT{\displaystyle T}for policyπ{\displaystyle \pi }:
ρπ(T)=∑t=1Tμt∗−Eπμ[∑t=1Trt]=D(T)−Eπμ[∑t=1Trt].{\displaystyle \rho ^{\pi }(T)=\sum _{t=1}^{T}{\mu _{t}^{*}}-\mathbb {E} _{\pi }^{\mu }\left[\sum _{t=1}^{T}{r_{t}}\right]={\mathcal {D}}(T)-\mathbb {E} _{\pi }^{\mu }\left[\sum _{t=1}^{T}{r_{t}}\right].}
Garivier and Moulines derive some of the first results with respect to bandit problems where the underlying model can change during play. A number of algorithms were presented to deal with this case, including Discounted UCB[61]and Sliding-Window UCB.[62]A similar approach based on the Thompson Sampling algorithm is the f-Discounted-Sliding-Window Thompson Sampling (f-dsw TS)[63]proposed by Cavenaghi et al. The f-dsw TS algorithm exploits a discount factor on the reward history and an arm-related sliding window to contrast concept drift in non-stationary environments. Another work by Burtini et al. introduces a weighted least squares Thompson sampling approach (WLS-TS), which proves beneficial in both the known and unknown non-stationary cases.[64]
Many variants of the problem have been proposed in recent years.
The dueling bandit variant was introduced by Yue et al. (2012)[65]to model the exploration-versus-exploitation tradeoff for relative feedback.
In this variant the gambler is allowed to pull two levers at the same time, but they only receive binary feedback telling which lever provided the best reward. The difficulty of this problem stems from the fact that the gambler has no way of directly observing the reward of their actions.
The earliest algorithms for this problem were Interleave Filtering[65]and Beat-The-Mean.[66]The relative feedback of dueling bandits can also lead to voting paradoxes. A solution is to take the Condorcet winner as a reference.[67]
More recently, researchers have generalized algorithms from traditional MAB to dueling bandits: Relative Upper Confidence Bounds (RUCB),[68]Relative EXponential weighing (REX3),[69]Copeland Confidence Bounds (CCB),[70]Relative Minimum Empirical Divergence (RMED),[71]and Double Thompson Sampling (DTS).[72]
Approaches using multiple bandits that cooperate by sharing knowledge in order to better optimize their performance started in 2013 with "A Gang of Bandits",[73]an algorithm relying on a similarity graph between the different bandit problems to share knowledge. The need for a similarity graph was removed in 2014 by the work on the CLUB algorithm.[74]Following this work, several other researchers created algorithms to learn multiple models at the same time under bandit feedback.
For example, COFIBA was introduced by Li, Karatzoglou and Gentile (SIGIR 2016),[75]whereas the classical collaborative filtering and content-based filtering methods try to learn a static recommendation model given training data.
The Combinatorial Multiarmed Bandit (CMAB) problem[76][77][78]arises when instead of a single discrete variable to choose from, an agent needs to choose values for a set of variables. Assuming each variable is discrete, the number of possible choices per iteration is exponential in the number of variables. Several CMAB settings have been studied in the literature, from settings where the variables are binary[77]to a more general setting where each variable can take an arbitrary set of values.[78]
https://en.wikipedia.org/wiki/Multi-armed_bandit
In software, a stack buffer overflow or stack buffer overrun occurs when a program writes to a memory address on the program's call stack outside of the intended data structure, which is usually a fixed-length buffer.[1][2]Stack buffer overflow bugs are caused when a program writes more data to a buffer located on the stack than is actually allocated for that buffer. This almost always results in corruption of adjacent data on the stack, and in cases where the overflow was triggered by mistake, will often cause the program to crash or operate incorrectly. Stack buffer overflow is a type of the more general programming malfunction known as buffer overflow (or buffer overrun).[1]Overfilling a buffer on the stack is more likely to derail program execution than overfilling a buffer on the heap because the stack contains the return addresses for all active function calls.
A stack buffer overflow can be caused deliberately as part of an attack known as stack smashing. If the affected program is running with special privileges, or accepts data from untrusted network hosts (e.g. a web server), then the bug is a potential security vulnerability. If the stack buffer is filled with data supplied from an untrusted user, then that user can corrupt the stack in such a way as to inject executable code into the running program and take control of the process. This is one of the oldest and more reliable methods for attackers to gain unauthorized access to a computer.[3][4][5]
The canonical method for exploiting a stack-based buffer overflow is to overwrite the function return address with a pointer to attacker-controlled data (usually on the stack itself).[3][6]This is illustrated with strcpy() in the following example:
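The code listing itself did not survive in this copy; the following reconstruction matches the surrounding description (a 12-byte local buffer filled by an unchecked strcpy() from a command-line argument, i.e. invoked as foo(argv[1]) from main):

```c
#include <string.h>

/* Copies its argument into a fixed 12-byte stack buffer with no bounds
   checking; arguments of 12 or more characters overrun the buffer. */
void foo(char *bar)
{
    char c[12];
    strcpy(c, bar);   /* no bounds checking */
}
```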
This code takes an argument from the command line and copies it to a local stack variablec. This works fine for command-line arguments smaller than 12 characters (as can be seen in figure B below). Any arguments larger than 11 characters long will result in corruption of the stack. (The maximum number of characters that is safe is one less than the size of the buffer here because in the C programming language, strings are terminated by a null byte character. A twelve-character input thus requires thirteen bytes to store, the input followed by the sentinel zero byte. The zero byte then ends up overwriting a memory location that's one byte beyond the end of the buffer.)
The program stack in foo() with various inputs:
In figure C above, when an argument larger than 11 bytes is supplied on the command line, foo() overwrites local stack data, the saved frame pointer, and most importantly, the return address. When foo() returns, it pops the return address off the stack and jumps to that address (i.e. starts executing instructions from that address). Thus, the attacker has overwritten the return address with a pointer to the stack buffer char c[12], which now contains attacker-supplied data. In an actual stack buffer overflow exploit the string of "A"'s would instead be shellcode suitable to the platform and desired function. If this program had special privileges (e.g. the SUID bit set to run as the superuser), then the attacker could use this vulnerability to gain superuser privileges on the affected machine.[3]
The attacker can also modify internal variable values to exploit some bugs.
With this example:
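The example listing is likewise missing here. The following stand-in sketch shows the idea: an unchecked copy into an 8-byte buffer spills into an adjacent "authenticated" flag. The memory is modelled as a single byte array so the demonstration is well defined; on a real stack the same effect depends on how the compiler lays out the locals.

```c
#include <string.h>

/* Illustrative only: frame[0..7] stands in for a stack buffer and
   frame[8..11] for an int flag placed right after it. An input longer
   than 8 bytes overflows the "buffer" into the "flag".
   (Inputs must stay under 11 characters so frame itself is not overrun.) */
int check_password(const char *input)
{
    unsigned char frame[12];
    int authenticated = 0;

    memset(frame, 0, sizeof frame);
    memcpy(frame, input, strlen(input) + 1);  /* no bounds check: overflow! */
    memcpy(&authenticated, frame + 8, sizeof authenticated);
    return authenticated;   /* nonzero only if the copy overflowed */
}
```

A short input leaves the flag at zero; an 11-character input writes 'A' bytes over it, so the function "authenticates" without any credential check.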
There are typically two methods used to alter the stored address in the stack: direct and indirect. Attackers started developing indirect attacks, which have fewer dependencies, in order to bypass protection measures that were made to reduce direct attacks.[7]
A number of platforms have subtle differences in their implementation of the call stack that can affect the way a stack buffer overflow exploit will work. Some machine architectures store the top-level return address of the call stack in a register. This means that any overwritten return address will not be used until a later unwinding of the call stack. Another example of a machine-specific detail that can affect the choice of exploitation techniques is the fact that most RISC-style machine architectures will not allow unaligned access to memory.[8]Combined with a fixed length for machine opcodes, this machine limitation can make the technique of jumping to the stack almost impossible to implement (with the one exception being when the program actually contains the unlikely code to explicitly jump to the stack register).[9][10]
Within the topic of stack buffer overflows, an often-discussed-but-rarely-seen architecture is one in which the stack grows in the opposite direction. This change in architecture is frequently suggested as a solution to the stack buffer overflow problem because any overflow of a stack buffer that occurs within the same stack frame cannot overwrite the return pointer. However, any overflow that occurs in a buffer from a previous stack frame will still overwrite a return pointer and allow for malicious exploitation of the bug.[11]For instance, in the example above, the return pointer for foo will not be overwritten because the overflow actually occurs within the stack frame for memcpy. However, because the buffer that overflows during the call to memcpy resides in a previous stack frame, the return pointer for memcpy will have a numerically higher memory address than the buffer. This means that instead of the return pointer for foo being overwritten, the return pointer for memcpy will be overwritten. At most, this means that growing the stack in the opposite direction will change some details of how stack buffer overflows are exploitable, but it will not significantly reduce the number of exploitable bugs.
Over the years, a number of control-flow integrity schemes have been developed to inhibit malicious stack buffer overflow exploitation. These can usually be classified into three categories:
Stack canaries, named for their analogy to a canary in a coal mine, are used to detect a stack buffer overflow before execution of malicious code can occur. This method works by placing a small integer, the value of which is randomly chosen at program start, in memory just before the stack return pointer. Most buffer overflows overwrite memory from lower to higher memory addresses, so in order to overwrite the return pointer (and thus take control of the process) the canary value must also be overwritten. This value is checked to make sure it has not changed before a routine uses the return pointer on the stack.[2] This technique can greatly increase the difficulty of exploiting a stack buffer overflow because it forces the attacker to gain control of the instruction pointer by some non-traditional means such as corrupting other important variables on the stack.[2]
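The canary mechanism can be illustrated with a toy model. The sketch below is not real compiler output: it simulates a stack frame as a byte array, with an 8-byte buffer placed before a randomly chosen canary, which in turn sits before the saved return address. An unchecked copy that overruns the buffer necessarily clobbers the canary on its way to the return address, and the pre-return check catches that.

```python
import os

# Toy model of a stack frame layout (illustrative sizes, not a real ABI):
# bytes 0-7   : local buffer
# bytes 8-11  : canary, chosen randomly at "program start"
# bytes 12-15 : saved return address
CANARY = os.urandom(4)

def make_frame():
    return bytearray(8) + bytearray(CANARY) + bytearray(b"\xef\xbe\xad\xde")

def write_buffer(frame, data):
    # Simulates an unchecked copy into the 8-byte buffer (like strcpy).
    frame[0:len(data)] = data

def canary_intact(frame):
    # Performed before the function epilogue uses the return address.
    return bytes(frame[8:12]) == CANARY

frame = make_frame()
write_buffer(frame, b"AAAA")       # fits in the buffer: canary survives
assert canary_intact(frame)

frame = make_frame()
write_buffer(frame, b"A" * 16)     # overflow: canary clobbered en route to the return address
assert not canary_intact(frame)
```

In a real program the check is inserted by the compiler (e.g. around functions with stack buffers) and a failed check aborts the process instead of returning.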
Another approach to preventing stack buffer overflow exploitation is to enforce a memory policy on the stack memory region that disallows execution from the stack (W^X, "Write XOR Execute"). This means that in order to execute shellcode from the stack an attacker must either find a way to disable the execution protection from memory, or find a way to put their shellcode payload in a non-protected region of memory. This method is becoming more popular now that hardware support for the no-execute flag is available in most desktop processors.
While this method prevents the canonical stack smashing exploit, stack overflows can be exploited in other ways. First, it is common to find ways to store shellcode in unprotected memory regions like the heap, in which case very little about the exploitation technique needs to change.[12]
Another attack is the so-called return-to-libc method for shellcode creation. In this attack the malicious payload will load the stack not with shellcode, but with a proper call stack so that execution is vectored to a chain of standard library calls, usually with the effect of disabling memory execute protections and allowing shellcode to run as normal.[13] This works because the execution never actually vectors to the stack itself.
A variant of return-to-libc is return-oriented programming (ROP), which sets up a series of return addresses, each of which executes a small sequence of cherry-picked machine instructions within the existing program code or system libraries, each sequence ending with a return. These so-called gadgets each accomplish some simple register manipulation or similar execution before returning, and stringing them together achieves the attacker's ends. It is even possible to use "returnless" return-oriented programming by exploiting instructions or groups of instructions that behave much like a return instruction.[14]
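The gadget-chaining idea can be sketched abstractly. In the toy model below (all addresses and gadget behaviors are invented for illustration), each gadget is a tiny register manipulation ending in a "return", and the attacker-controlled stack is simply a list of gadget addresses that successive returns pop one after another:

```python
# Toy ROP model: gadgets are short register manipulations that end in a
# return; the attacker's payload is a sequence of gadget addresses placed
# where saved return addresses would be. Addresses here are hypothetical.
regs = {"eax": 0, "ebx": 0}

gadgets = {
    0x1000: lambda: regs.update(eax=5),                     # "mov eax, 5; ret"
    0x2000: lambda: regs.update(ebx=7),                     # "mov ebx, 7; ret"
    0x3000: lambda: regs.update(eax=regs["eax"] + regs["ebx"]),  # "add eax, ebx; ret"
}

# Attacker-controlled stack contents: a chain of return addresses.
stack = [0x1000, 0x2000, 0x3000]

while stack:
    addr = stack.pop(0)   # each "ret" pops and jumps to the next gadget
    gadgets[addr]()

assert regs["eax"] == 12  # the chain computed 5 + 7 using only existing code
```

The point of the model is that no injected code runs at all: the "program" is entirely a sequence of jumps into instructions that already exist, which is why a non-executable stack alone does not stop this technique.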
Instead of separating the code from the data, another mitigation technique is to introduce randomization into the memory space of the executing program. Since the attacker needs to determine where usable executable code resides, either an executable payload is provided (with an executable stack) or one is constructed using code reuse, as in ret2libc or return-oriented programming (ROP). Randomizing the memory layout will, in concept, prevent the attacker from knowing where any code is. However, implementations typically will not randomize everything; usually the executable itself is loaded at a fixed address, so even when ASLR (address space layout randomization) is combined with a non-executable stack the attacker can use this fixed region of memory. Therefore, all programs should be compiled with PIE (position-independent executables) so that even this region of memory is randomized. The entropy of the randomization differs from implementation to implementation, and sufficiently low entropy can itself make it feasible to brute-force the randomized memory space.
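The entropy point can be made concrete with a back-of-the-envelope calculation. Assuming guesses against a uniformly randomized layout (a simplification; real ASLR entropy varies by OS, architecture, and memory region), each guess succeeds with probability 2^-n for n bits of entropy, so the expected number of tries is about 2^(n-1):

```python
# Rough sketch: with n bits of ASLR entropy, one guess succeeds with
# probability 2**-n, so an attacker needs about 2**(n-1) tries on average.
# The figures below are illustrative, not measurements of any real system.
def expected_tries(entropy_bits):
    return 2 ** (entropy_bits - 1)

assert expected_tries(8) == 128            # trivially brute-forceable
assert expected_tries(28) == 2 ** 27       # ~134 million tries: costly but not absurd
assert expected_tries(28) / expected_tries(8) == 2 ** 20   # each extra bit doubles the work
```

This is why low-entropy ASLR implementations (historically a problem on 32-bit systems, where few address bits are available to randomize) can be defeated by simply retrying the exploit until the guess lands.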
The previous mitigations make the steps of exploitation harder. But it is still possible to exploit a stack buffer overflow if certain vulnerabilities are present or certain conditions are met.[15]
An attacker can exploit a format string vulnerability to reveal memory locations in the vulnerable program.[16]
When Data Execution Prevention is enabled to forbid any execute access to the stack, the attacker can still use the overwritten return address (the instruction pointer) to point to data in a code segment (.text on Linux) or any other executable section of the program. The goal is to reuse existing code.[17]
This technique consists of overwriting the return pointer so that it points a little before a return instruction (ret on x86) of the program. The instructions between the new return target and the return instruction will be executed, and the return instruction will then return to the payload controlled by the exploiter.[17]
Jump-oriented programming is a technique that reuses code via jump instructions instead of the ret instruction.[18]
A limitation of ASLR implementations on 64-bit systems is that they are vulnerable to memory-disclosure and information-leakage attacks. An attacker can mount a ROP attack after revealing a single function address through an information leak. Existing strategies for breaking ASLR protection rely on similar leaks.[19]
|
https://en.wikipedia.org/wiki/Stack_buffer_overflow
|
In sociology, a social organization is a pattern of relationships between and among individuals and groups.[1][2] Characteristics of social organization can include qualities such as sexual composition, spatiotemporal cohesion, leadership, structure, division of labor, communication systems, and so on.[3][4]
Because of these characteristics of social organization, people can monitor their everyday work and involvement in other activities that are controlled forms of human interaction. These interactions include affiliation, collective resources, substitutability of individuals, and recorded control. Together these constitute common features of basic social units such as families, enterprises, clubs, and states. These are social organizations.[5]
Common examples of modern social organizations are government agencies,[6][7] NGOs, and corporations.[8][9]
Social organizations happen in everyday life. Many people belong to various social structures, both institutional and informal. These include clubs, professional organizations, and religious institutions.[10] To have a sense of identity with the social organization, being closer to one another helps build a sense of community.[11] While organizations link many like-minded people, they can also separate members from others outside the organization due to differences in thought. Social organizations are structured hierarchically.[12] A hierarchical structure in social groups influences the way a group is structured and how likely it is that the group remains together.
Four other interactions can also determine whether the group stays together. A group must have a strong affiliation within itself. To be affiliated with an organization means having a connection and acceptance in that group; affiliation implies an obligation to come back to that organization, and the organization must know and recognize that you are a member. The organization gains power through the collective resources of these affiliations. Often affiliates have something invested in these resources that motivates them to continue to make the organization better. On the other hand, the organization must keep in mind the substitutability of these individuals: while the organization needs the affiliates and the resources to survive, it also must be able to replace departing individuals to keep itself going. Because of all these characteristics, it can often be difficult to stay organized within the organization. This is where recorded control comes in, as writing things down makes them clearer and more organized.[5]
Social organizations within society are constantly changing.[13] Smaller-scale social organizations in society include groups forming from common interests and conversations. Social organizations are constantly created and change over time.[citation needed]
Smaller-scale social organizations include many everyday groups that people would not even think have these characteristics. These small social organizations can include things such as bands, clubs, or even sports teams. All of these small groups share the same characteristics as a large-scale organization. While these small social organizations do not have nearly as many people as large-scale ones, they still interact and function in similar ways.
Looking at a common small organization, a school sports team, it is easy to see how it can be a social organization. The members of the team all have the same goal, which is to win, and they all work together to accomplish that common goal. It is also easy to see the structure in the team. While everyone has the same goal in mind,[citation needed] they have different roles, or positions, that contribute to reaching it. To achieve their goal they must be united.
In large-scale organizations, there is always some degree of bureaucracy, which involves a set of rules, specializations, and a hierarchical system. This allows these larger organizations to try to maximize efficiency. Large-scale organizations must also ensure that managerial control is appropriate. Typically, the impersonal-authority approach is used, in which the position of power is detached and impersonal with respect to the other members of the organization. This is done to make sure that things run smoothly and the social organization stays the best it can be.[14]
A big social organization that most people are somewhat familiar with is a hospital. Within the hospital are smaller social organizations—for example, the nursing staff and the surgery team. These smaller organizations work closely together to accomplish more for their area, which in turn makes the hospital more successful and long-lasting. As a whole, the hospital contains all the characteristics of a social organization. In a hospital there are various relationships among all of the members of the staff and also with the patients; this is a main reason that a hospital is a social organization. There is also division of labor, structure, cohesiveness, and communication systems. To operate with the utmost effectiveness, a hospital needs all of the characteristics of a social organization, because that is what makes it strong. Without one of them, it would be difficult for this organization to run.[citation needed]
Despite the assumption that many organizations run better with bureaucracy and a hierarchical management system, other factors can prove that wrong. These factors concern whether the organization is parallel or interdependent. In a parallel organization, each department or section does not depend on the others to do its job; in an interdependent one, departments do depend on each other to get the job done. If an organization is parallel, a hierarchical structure would not be necessary and would not be as effective as it would in an interdependent organization. Because of all the different sub-structures in parallel organizations (the different departments), it would be hard for hierarchical management to be in charge, owing to the different jobs. On the other hand, an interdependent organization would be easier to manage that way because of the cohesiveness throughout each department of the organization.[14]
Societies can be organized through individualistic or collectivist means, which can have implications for economic growth, legal and political institutions and effectiveness, and social relations. This is based on the premise that the organization of society is a reflection of its cultural, historical, social, political and economic processes, which therefore govern interaction.
Collectivist social organization sometimes refers to developing countries that bypass formal institutions and rather rely on informal institutions to uphold contractual obligations. This organization relies on a horizontal social structure, stressing relationships within communities rather than a social hierarchy between them. This kind of system has been largely attributed to cultures with strong religious, ethnic, or familial group ties.[citation needed]
In contrast, individualistic social organization implies interaction between individuals of different social groups. Enforcement stems from formal institutions such as courts of law. The economy and society are completely integrated, enabling transactions across groups and individuals, who may similarly switch from group to group, and allowing individuals to be less dependent on one group.[original research?] This kind of social organization is traditionally associated with Western societies.[15][dubious–discuss]
One type of collectivism is racial collectivism, or race collectivism.[16] Racial collectivism is a form of social organization based on race or ethnic lines as opposed to other factors such as political or class-affiliated collectivism. Examples of societies that have attempted, historically had, or still have a racial collectivist structure, at least in part, include Nazism and Nazi Germany, racial segregation in the United States (especially prior to the civil rights movement of the 1950s and 1960s), Apartheid in South Africa, White Zimbabweans, the caste system of India, and many other nations and regions of the world.[16][17]
Social organizations may be seen in digital spaces, and online communities show patterns of how people would react in social networking situations.[18] The technology allows people to use the constructed social organizations as a way to engage with one another without having to physically be in the same place.
Looking at social organization online is a different way to think about it, and connecting the characteristics can be a little challenging. While the characteristics of social organization are not completely the same for online organizations, they can be connected and discussed in a different context to make the cohesiveness between the two apparent. Online, there are various forms of communication and ways that people connect. Again, this allows them to talk and share common interests (which is what makes them a social organization) and to be part of the organization without having to physically be with the other members. Although these online social organizations do not take place in person, they still function as social organizations because of the relationships within the group and the goal of keeping the communities going.
|
https://en.wikipedia.org/wiki/Social_organization
|
Zero to the power of zero, denoted as 0^0, is a mathematical expression with different interpretations depending on the context. In certain areas of mathematics, such as combinatorics and algebra, 0^0 is conventionally defined as 1 because this assignment simplifies many formulas and ensures consistency in operations involving exponents. For instance, in combinatorics, defining 0^0 = 1 aligns with the interpretation of choosing 0 elements from a set and simplifies polynomial and binomial expansions.
However, in other contexts, particularly in mathematical analysis, 0^0 is often considered an indeterminate form. This is because the value of x^y as both x and y approach zero can lead to different results based on the limiting process. The expression arises in limit problems and may result in a range of values or diverge to infinity, making it difficult to assign a single consistent value in these cases.
The treatment of 0^0 also varies across different computer programming languages and software. While many follow the convention of assigning 0^0 = 1 for practical reasons, others leave it undefined or return errors depending on the context of use, reflecting the ambiguity of the expression in mathematical analysis.
Many widely used formulas involving natural-number exponents require 0^0 to be defined as 1. For example, the following three interpretations of b^0 make just as much sense for b = 0 as they do for positive integers b:
All three of these specialize to give 0^0 = 1.
When evaluating polynomials, it is convenient to define 0^0 as 1. A (real) polynomial is an expression of the form a_0 x^0 + ⋅⋅⋅ + a_n x^n, where x is an indeterminate and the coefficients a_i are real numbers. Polynomials are added termwise, and multiplied by applying the distributive law and the usual rules for exponents. With these operations, polynomials form a ring R[x]. The multiplicative identity of R[x] is the polynomial x^0; that is, x^0 times any polynomial p(x) is just p(x).[2] Also, polynomials can be evaluated by specializing x to a real number. More precisely, for any given real number r, there is a unique unital R-algebra homomorphism ev_r : R[x] → R such that ev_r(x) = r. Because ev_r is unital, ev_r(x^0) = 1. That is, r^0 = 1 for each real number r, including 0. The same argument applies with R replaced by any ring.[3]
Defining 0^0 = 1 is necessary for many polynomial identities. For example, the binomial theorem {\textstyle (1+x)^{n}=\sum _{k=0}^{n}{\binom {n}{k}}x^{k}} holds for x = 0 only if 0^0 = 1.[4]
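This can be checked numerically. The sketch below evaluates the right-hand side of the binomial theorem at x = 0 in Python, where the built-in integer power already follows the 0**0 == 1 convention, so the k = 0 term contributes exactly the 1 the identity needs:

```python
from math import comb

def binomial_expansion(x, n):
    # Right-hand side of (1 + x)^n = sum over k of C(n, k) * x^k.
    # At x = 0 the k = 0 term is C(n, 0) * 0**0, which must equal 1
    # for the identity to hold; Python's 0**0 == 1 makes it work.
    return sum(comb(n, k) * x**k for k in range(n + 1))

for n in range(6):
    assert binomial_expansion(0, n) == (1 + 0) ** n == 1

assert binomial_expansion(2, 3) == 27   # sanity check at a nonzero point: (1+2)^3
```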
Similarly, rings of power series require x^0 to be defined as 1 for all specializations of x. For example, identities like {\textstyle {\frac {1}{1-x}}=\sum _{n=0}^{\infty }x^{n}} and {\textstyle e^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}} hold for x = 0 only if 0^0 = 1.[5]
In order for the polynomial x^0 to define a continuous function R → R, one must define 0^0 = 1.
In calculus, the power rule {\textstyle {\frac {d}{dx}}x^{n}=nx^{n-1}} is valid for n = 1 at x = 0 only if 0^0 = 1.
Limits involving algebraic operations can often be evaluated by replacing subexpressions with their limits; if the resulting expression does not determine the original limit, the expression is known as an indeterminate form.[6] The expression 0^0 is an indeterminate form: given real-valued functions f(t) and g(t) approaching 0 (as t approaches a real number or ±∞) with f(t) > 0, the limit of f(t)^{g(t)} can be any non-negative real number or +∞, or it can diverge, depending on f and g. For example, each limit below involves a function f(t)^{g(t)} with f(t), g(t) → 0 as t → 0^+ (a one-sided limit), but their values are different:

{\displaystyle \lim _{t\to 0^{+}}{t}^{t}=1,}
{\displaystyle \lim _{t\to 0^{+}}\left(e^{-1/t^{2}}\right)^{t}=0,}
{\displaystyle \lim _{t\to 0^{+}}\left(e^{-1/t^{2}}\right)^{-t}=+\infty ,}
{\displaystyle \lim _{t\to 0^{+}}\left(a^{-1/t}\right)^{-t}=a.}
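The first two limits can be probed numerically. The sketch below evaluates t^t at a small positive t, which is already very close to 1, and rewrites (e^{-1/t^2})^t as e^{-t/t^2} = e^{-1/t}, which floating-point arithmetic underflows to exactly 0 long before the true limit 0 is reached:

```python
import math

# As t -> 0+, t**t = exp(t * ln t) tends to 1.
t = 1e-9
assert abs(t**t - 1) < 1e-6

# (e**(-1/t**2))**t simplifies to e**(-1/t), which tends to 0 as t -> 0+.
# At t = 1e-3 this is e**(-1000); IEEE doubles underflow it to exactly 0.0.
t = 1e-3
assert math.exp(-1 / t) == 0.0
```

Both functions have the form f(t)^{g(t)} with f(t), g(t) → 0, yet they head to different limits, which is the sense in which 0^0 is indeterminate.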
Thus, the two-variable function x^y, though continuous on the set {(x, y) : x > 0}, cannot be extended to a continuous function on {(x, y) : x > 0} ∪ {(0, 0)}, no matter how one chooses to define 0^0.[7]
On the other hand, if f and g are analytic functions on an open neighborhood of a number c, then f(t)^{g(t)} → 1 as t approaches c from any side on which f is positive.[8] This and more general results can be obtained by studying the limiting behavior of the function {\textstyle \log(f(t)^{g(t)})=g(t)\log f(t)}.[9][10]
In the complex domain, the function z^w may be defined for nonzero z by choosing a branch of log z and defining z^w as e^{w log z}. This does not define 0^w since there is no branch of log z defined at z = 0, let alone in a neighborhood of 0.[11][12][13]
In 1752, Euler in Introductio in analysin infinitorum wrote that a^0 = 1[14] and explicitly mentioned that 0^0 = 1.[15] An annotation attributed[16] to Mascheroni in a 1787 edition of Euler's book Institutiones calculi differentialis[17] offered the "justification"
00=(a−a)n−n=(a−a)n(a−a)n=1{\displaystyle 0^{0}=(a-a)^{n-n}={\frac {(a-a)^{n}}{(a-a)^{n}}}=1}
as well as another, more involved justification. In the 1830s, Libri[18][16] published several further arguments attempting to justify the claim 0^0 = 1, though these were far from convincing, even by standards of rigor at the time.[19]
Euler, when setting 0^0 = 1, mentioned that consequently the values of the function 0^x take a "huge jump", from ∞ for x < 0, to 1 at x = 0, to 0 for x > 0.[14] In 1814, Pfaff used a squeeze theorem argument to prove that x^x → 1 as x → 0^+.[8]
On the other hand, in 1821 Cauchy[20] explained why the limit of x^y as positive numbers x and y approach 0 while being constrained by some fixed relation could be made to assume any value between 0 and ∞ by choosing the relation appropriately. He deduced that the limit of the full two-variable function x^y without a specified constraint is "indeterminate". With this justification, he listed 0^0 along with expressions like 0/0 in a table of indeterminate forms.
Apparently unaware of Cauchy's work, Möbius[8] in 1834, building on Pfaff's argument, claimed incorrectly that f(x)^{g(x)} → 1 whenever f(x), g(x) → 0 as x approaches a number c (presumably f is assumed positive away from c). Möbius reduced to the case c = 0, but then made the mistake of assuming that each of f and g could be expressed in the form P x^n for some continuous function P not vanishing at 0 and some nonnegative integer n, which is true for analytic functions, but not in general. An anonymous commentator pointed out the unjustified step;[21] then another commentator, who signed his name simply as "S", provided the explicit counterexamples (e^{-1/x})^x → e^{-1} and (e^{-1/x})^{2x} → e^{-2} as x → 0^+ and expressed the situation by writing that "0^0 can have many different values".[21]
There do not seem to be any authors assigning 0^0 a specific value other than 1.[22]
The IEEE 754-2008 floating-point standard is used in the design of most floating-point libraries. It recommends a number of operations for computing a power:[25]
The pow variant is inspired by the pow function from C99, mainly for compatibility.[26] It is useful mostly for languages with a single power function. The pown and powr variants have been introduced due to conflicting usage of the power functions and the different points of view (as stated above).[27]
The C and C++ standards do not specify the result of 0^0 (a domain error may occur). But for C, as of C99, if the normative annex F is supported, the result for real floating-point types is required to be 1 because there are significant applications for which this value is more useful than NaN[28] (for instance, with discrete exponents); the result on complex types is not specified, even if the informative annex G is supported. The Java standard,[29] the .NET Framework method System.Math.Pow,[30] Julia, and Python[31][32] also treat 0^0 as 1. Some languages document that their exponentiation operation corresponds to the pow function from the C mathematical library; this is the case with Lua's ^ operator[33] and Perl's ** operator[34] (where it is explicitly mentioned that the result of 0**0 is platform-dependent).
R,[35] SageMath,[36] and PARI/GP[37] evaluate x^0 to 1. Mathematica[38] simplifies x^0 to 1 even if no constraints are placed on x; however, if 0^0 is entered directly, it is treated as an error or indeterminate. Mathematica[38] and PARI/GP[37][39] further distinguish between integer and floating-point values: if the exponent is a zero of integer type, they return a 1 of the type of the base; exponentiation with a floating-point exponent of value zero is treated as undefined, indeterminate or error.
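Python's behavior, mentioned above, is easy to verify directly: CPython returns 1 for 0^0 for integer and float operands alike, and its math.pow follows the C99 annex F convention of returning 1.0:

```python
import math

# CPython follows the 0**0 == 1 convention for both integers and floats;
# as noted above, other languages and libraries may differ.
assert 0 ** 0 == 1
assert 0.0 ** 0.0 == 1.0
assert math.pow(0.0, 0.0) == 1.0
```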
|
https://en.wikipedia.org/wiki/Zero_to_the_power_of_zero
|
Content Security Policy (CSP) is a computer security standard introduced to prevent cross-site scripting (XSS), clickjacking and other code injection attacks resulting from execution of malicious content in the trusted web page context.[1] It is a Candidate Recommendation of the W3C working group on Web Application Security,[2] widely supported by modern web browsers.[3] CSP provides a standard method for website owners to declare approved origins of content that browsers should be allowed to load on that website—covered types are JavaScript, CSS, HTML frames, web workers, fonts, images, embeddable objects such as Java applets, ActiveX, audio and video files, and other HTML5 features.
The standard, originally named Content Restrictions, was proposed by Robert Hansen in 2004,[4] first implemented in Firefox 4 and quickly picked up by other browsers. Version 1 of the standard was published in 2012 as a W3C candidate recommendation,[5] quickly followed by further versions (Level 2) published in 2014. As of 2023[update], the draft of Level 3 is being developed, with the new features being quickly adopted by web browsers.[6]
The following header names are in use as part of experimental CSP implementations:[3]
A website can declare multiple CSP headers, also mixing enforcement and report-only ones. Each header will be processed separately by the browser.
CSP can also be delivered within the HTML code using a meta tag, although in this case its effectiveness will be limited.[14]
Internet Explorer 10 and Internet Explorer 11 also support CSP, but only the sandbox directive, using the experimental X-Content-Security-Policy header.[15]
A number of web application frameworks support CSP, for example AngularJS[16] (natively) and Django (middleware).[17] Instructions for Ruby on Rails have been posted by GitHub.[18] Web framework support is, however, only required if the CSP contents somehow depend on the web application's state—such as usage of the nonce origin. Otherwise, the CSP is rather static and can be delivered from web application tiers above the application, for example on a load balancer or web server.
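Delivering a static policy from the server tier can be sketched with Python's standard-library HTTP server. The policy string, port, and page below are purely illustrative, not a recommended configuration:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal sketch: the same static CSP header is attached to every response,
# independent of application state. Policy contents here are only an example.
POLICY = "default-src 'self'; script-src 'self'; object-src 'none'"

def csp_headers(policy=POLICY):
    # Headers the server adds; the browser enforces the declared allowlist.
    return {"Content-Security-Policy": policy, "Content-Type": "text/html"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>hello</body></html>"
        self.send_response(200)
        for name, value in csp_headers().items():
            self.send_header(name, value)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve:  HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```

Because the header is state-independent, the same effect is usually achieved with a one-line directive in the web server or load balancer configuration instead of application code.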
In December 2015[19] and December 2016,[20] a few methods of bypassing 'nonce' allowlisting origins were published. In January 2016,[21] another method was published, which leverages server-wide CSP allowlisting to exploit old and vulnerable versions of JavaScript libraries hosted at the same server (a frequent case with CDN servers). In May 2017[22] one more method was published to bypass CSP using web application frameworks code.
If the Content-Security-Policy header is present in the server response, a compliant client enforces the declarative allowlist policy. One example goal of a policy is a stricter execution mode for JavaScript in order to prevent certain cross-site scripting attacks. In practice this means that a number of features are disabled by default:
While using CSP in a new application may be quite straightforward, especially with a CSP-compatible JavaScript framework,[d] existing applications may require some refactoring—or relaxing the policy. Recommended coding practice for CSP-compatible web applications is to load code from external source files (<script src>), parse JSON instead of evaluating it, and use EventTarget.addEventListener() to set event handlers.[23]
Any time a requested resource or script execution violates the policy, the browser will fire a POST request to the value specified in report-uri[24] or report-to[25] containing details of the violation.
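A report receiver simply parses the posted JSON body. The sketch below uses the field names of the CSP reporting format (document-uri, violated-directive, blocked-uri); the URLs in the sample report are invented for illustration:

```python
import json

# Example violation report, shaped like the JSON a browser POSTs to the
# report-uri endpoint. The URLs are illustrative only.
raw = b'''{
  "csp-report": {
    "document-uri": "https://example.com/page",
    "violated-directive": "script-src 'self'",
    "blocked-uri": "https://evil.example/x.js"
  }
}'''

def summarize_report(body):
    # A receiver would typically log these fields or feed them to monitoring.
    report = json.loads(body)["csp-report"]
    return report["violated-directive"], report["blocked-uri"]

directive, blocked = summarize_report(raw)
assert directive == "script-src 'self'"
assert blocked == "https://evil.example/x.js"
```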
CSP reports are standard JSON structures and can be captured either by the application's own API[26] or by public CSP report receivers.[citation needed]
In 2018 security researchers showed how to send false positive reports to the designated receiver specified in report-uri. This allows potential attackers to arbitrarily trigger those alarms and might render them less useful in case of a real attack.[27] This behaviour is intended and cannot be fixed, as the browser (client) is sending the reports.
According to the original CSP (1.0) Processing Model (2012–2013),[28] CSP should not interfere with the operation of browser add-ons or extensions installed by the user. This feature of CSP would have effectively allowed any add-on, extension, or bookmarklet to inject script into web sites, regardless of the origin of that script, and thus be exempt from CSP policies.
However, this policy has since been modified (as of CSP 1.1[29]) with the following wording. Note the use of the word "may" instead of the prior absolute "should (not)" wording:
Note: User agents may allow users to modify or bypass policy enforcement through user preferences, bookmarklets, third-party additions to the user agent, and other such mechanisms.
The absolute "should" wording was being used by browser users to request/demand adherence to the policy and have changes installed in popular browsers (Firefox, Chrome, Safari) to support it. This was particularly contentious when sites like Twitter and GitHub started using strong CSP policies, which 'broke' the use of Bookmarklets.[30]
The W3C Web Application Security Working Group considers such script to be part of the Trusted Computing Base implemented by the browser; however, it has been argued to the working group by a representative of Cox Communications that this exemption is a potential security hole that could be exploited by malicious or compromised add-ons or extensions.[31][32]
As of 2015[update], a number of new browser security standards are being proposed by W3C, most of them complementary to CSP:[33]
|
https://en.wikipedia.org/wiki/Content_Security_Policy
|
In cryptography, a custom hardware attack uses specifically designed application-specific integrated circuits (ASICs) to decipher encrypted messages.
Mounting a cryptographic brute force attack requires a large number of similar computations: typically trying one key, checking if the resulting decryption gives a meaningful answer, and then trying the next key if it does not. Computers can perform these calculations at a rate of millions per second, and thousands of computers can be harnessed together in a distributed computing network. But the number of computations required on average grows exponentially with the size of the key, and for many problems standard computers are not fast enough. On the other hand, many cryptographic algorithms lend themselves to fast implementation in hardware, i.e. networks of logic circuits, also known as gates. Integrated circuits (ICs) are constructed of these gates and often can execute cryptographic algorithms hundreds of times faster than a general purpose computer.[1]
Each IC can contain large numbers of gates (hundreds of millions in 2005). Thus, the same decryption circuit, or cell, can be replicated thousands of times on one IC. The communications requirements for these ICs are very simple. Each must be initially loaded with a starting point in the key space and, in some situations, with a comparison test value (see known plaintext attack). Output consists of a signal that the IC has found an answer and the successful key.
Since ICs lend themselves to mass production, thousands or even millions of ICs can be applied to a single problem. The ICs themselves can be mounted in printed circuit boards. A standard board design can be used for different problems, since the communication requirements for the chips are the same. Wafer-scale integration is another possibility. The primary limitations on this method are the cost of chip design, IC fabrication, floor space, electric power and thermal dissipation.[2]
The earliest custom hardware attack may have been the Bombe used to recover Enigma machine keys in World War II. In 1998, a custom hardware attack was mounted against the Data Encryption Standard cipher by the Electronic Frontier Foundation. Their "Deep Crack" machine cost U.S. $250,000 to build and decrypted the DES Challenge II-2 test message after 56 hours of work. The only other confirmed DES cracker was the COPACOBANA machine (Cost-Optimized PArallel COde Breaker) built in 2006. Unlike Deep Crack, COPACOBANA consists of commercially available FPGAs (reconfigurable logic gates). COPACOBANA costs about $10,000[3] to build and will recover a DES key in under 6.4 days on average. The cost decrease by roughly a factor of 25 over the EFF machine is an impressive example of the continuous improvement of digital hardware. Adjusting for inflation over 8 years yields an even higher improvement of about 30x. Since 2007, SciEngines GmbH, a spin-off company of the two project partners of COPACOBANA, has enhanced and developed successors of COPACOBANA. In 2008, their COPACOBANA RIVYERA reduced the time to break DES to the current record of less than one day, using 128 Spartan-3 5000s.[4] It is generally believed[citation needed] that large government code-breaking organizations, such as the U.S. National Security Agency, make extensive use of custom hardware attacks, but no examples have been declassified or leaked as of 2005[update].
https://en.wikipedia.org/wiki/Custom_hardware_attack
A computer worm is a standalone malware computer program that replicates itself in order to spread to other computers.[1] It often uses a computer network to spread itself, relying on security failures on the target computer to access it. It will use this machine as a host to scan and infect other computers. When these new worm-invaded computers are controlled, the worm will continue to scan and infect other computers using these computers as hosts, and this behaviour will continue.[2] Computer worms use recursive methods to copy themselves without host programs and distribute themselves by exploiting the advantages of exponential growth, thus controlling and infecting more and more computers in a short time.[3] Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.
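The exponential-growth dynamic mentioned above can be illustrated with a toy model: each infected machine compromises a fixed number of new victims per round until the vulnerable population is exhausted. The parameters (population size, infection rate) are illustrative assumptions, not measurements of any real worm.

```python
# Toy model of the exponential spread the text describes: each infected
# machine compromises `rate` new vulnerable hosts per round, until the
# vulnerable population is saturated. All numbers are illustrative.

def worm_spread(vulnerable, rate, rounds):
    """Return the infected count after each round, capped at the population."""
    infected = 1  # one seed host
    history = [infected]
    for _ in range(rounds):
        infected = min(vulnerable, infected * (1 + rate))
        history.append(infected)
    return history

# With 2 new victims per infected host per round, infections triple each
# round: 100,000 vulnerable hosts are fully compromised within a dozen rounds.
print(worm_spread(100_000, 2, 12))
```

The cap models the fact that growth stops once no uninfected vulnerable hosts remain, which is why real outbreaks follow a logistic rather than a purely exponential curve.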
Many worms are designed only to spread, and do not attempt to change the systems they pass through. However, as the Morris worm and Mydoom showed, even these "payload-free" worms can cause major disruption by increasing network traffic and other unintended effects.
The first ever computer worm is generally accepted to be a self-replicating version of Creeper, created by Ray Tomlinson and Bob Thomas at BBN in 1971 to replicate itself across the ARPANET.[4][5] Tomlinson also devised the first antivirus software, named Reaper, to delete the Creeper program.
The term "worm" was first used in this sense inJohn Brunner's 1975 novel,The Shockwave Rider. In the novel, Nichlas Haflinger designs and sets off a data-gathering worm in an act of revenge against the powerful people who run a national electronic information web that induces mass conformity. "You have the biggest-ever worm loose in the net, and it automatically sabotages any attempt to monitor it. There's never been a worm with that tough a head or that long a tail!"[6]"Then the answer dawned on him, and he almost laughed. Fluckner had resorted to one of the oldest tricks in the store and turned loose in the continental net a self-perpetuating tapeworm, probably headed by a denunciation group "borrowed" from a major corporation, which would shunt itself from one nexus to another every time his credit-code was punched into a keyboard. It could take days to kill a worm like that, and sometimes weeks."[6]
Xerox PARC was studying the use of "worm" programs for distributed computing in 1979.[7]
On November 2, 1988, Robert Tappan Morris, a Cornell University computer science graduate student, unleashed what became known as the Morris worm, disrupting many computers then on the Internet, guessed at the time to be one tenth of all those connected.[8] During the Morris appeal process, the U.S. Court of Appeals estimated the cost of removing the worm from each installation at between $200 and $53,000; this work prompted the formation of the CERT Coordination Center[9] and Phage mailing list.[10] Morris himself became the first person tried and convicted under the 1986 Computer Fraud and Abuse Act.[11]
Conficker, a computer worm discovered in 2008 that primarily targeted Microsoft Windows operating systems, employs three different spreading strategies: local probing, neighborhood probing, and global probing.[12] This worm was considered a hybrid epidemic and affected millions of computers. The term "hybrid epidemic" refers to the three separate spreading methods it employed, which were identified through code analysis.[13]
Independence
Computer viruses generally require a host program.[14] The virus writes its own code into the host program. When the program runs, the written virus program is executed first, causing infection and damage. A worm does not need a host program, as it is an independent program or code chunk. Therefore, it is not restricted by the host program, but can run independently and actively carry out attacks.[15][16]
Exploit attacks
Because a worm is not limited by the host program, worms can take advantage of various operating system vulnerabilities to carry out active attacks. For example, the "Nimda" virus exploits vulnerabilities to attack.
Complexity
Some worms are combined with web page scripts, and are hidden in HTML pages using VBScript, ActiveX and other technologies. When a user accesses a webpage containing a virus, the virus automatically resides in memory and waits to be triggered. There are also some worms that are combined with backdoor programs or Trojan horses, such as "Code Red".[17]
Contagiousness
Worms are more infectious than traditional viruses. They not only infect local computers, but also all servers and clients on the network based on the local computer. Worms can easily spread through shared folders, e-mails,[18] malicious web pages, and servers with a large number of vulnerabilities in the network.[19]
Any code designed to do more than spread the worm is typically referred to as the "payload". Typical malicious payloads might delete files on a host system (e.g., the ExploreZip worm), encrypt files in a ransomware attack, or exfiltrate data such as confidential documents or passwords.[20]
Some worms may install a backdoor. This allows the computer to be remotely controlled by the worm author as a "zombie". Networks of such machines are often referred to as botnets and are very commonly used for a range of malicious purposes, including sending spam or performing DoS attacks.[21][22][23]
Some special worms attack industrial systems in a targeted manner. Stuxnet was primarily transmitted through LANs and infected thumb drives, as its targets were never connected to untrusted networks like the Internet. This virus can destroy the core production control computer software used by chemical, power generation and power transmission companies in various countries around the world (in Stuxnet's case, Iran, Indonesia and India were hardest hit); it was used to "issue orders" to other equipment in the factory, and to hide those commands from being detected. Stuxnet used multiple vulnerabilities and four different zero-day exploits in Windows systems and Siemens SIMATIC WinCC systems to attack the embedded programmable logic controllers of industrial machines. Although these systems operate independently from the network, if the operator inserts a virus-infected drive into the system's USB interface, the virus will be able to gain control of the system without any other operational requirements or prompts.[24][25][26]
Worms spread by exploiting vulnerabilities in operating systems.
Vendors with security problems supply regular security updates[27] (see "Patch Tuesday"), and if these are installed to a machine, then the majority of worms are unable to spread to it. If a vulnerability is disclosed before the vendor releases a security patch, a zero-day attack is possible.
Users need to be wary of opening unexpected emails,[28][29] and should not run attached files or programs, or visit web sites that are linked to such emails. However, as with the ILOVEYOU worm, and with the increased growth and efficiency of phishing attacks, it remains possible to trick the end-user into running malicious code.
Anti-virus and anti-spyware software are helpful, but must be kept up-to-date with new pattern files at least every few days. The use of a firewall is also recommended.
Users can minimize the threat posed by worms by keeping their computers' operating system and other software up to date, avoiding opening unrecognized or unexpected emails, and running firewall and antivirus software.[30]
Mitigation techniques include:
Infections can sometimes be detected by their behavior - typically scanning the Internet randomly, looking for vulnerable hosts to infect.[31][32]In addition, machine learning techniques can be used to detect new worms, by analyzing the behavior of the suspected computer.[33]
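The behavioral signature described above, random Internet scanning in search of vulnerable hosts, lends itself to a simple detection heuristic: flag any source that contacts an unusually large number of distinct destinations in a time window. The sketch below is a hedged illustration of that idea; the threshold, flow format, and addresses are assumptions, not part of any named detection system.

```python
# Illustrative scan detector: a host that suddenly contacts many distinct
# addresses looks like a worm probing for victims. The threshold of 50
# distinct destinations per window is an arbitrary illustrative choice.

def flag_scanners(flows, threshold=50):
    """flows: iterable of (source, destination) pairs observed in one
    time window. Returns the set of sources contacting more distinct
    destinations than the threshold."""
    contacts = {}
    for src, dst in flows:
        contacts.setdefault(src, set()).add(dst)
    return {src for src, dsts in contacts.items() if len(dsts) > threshold}

# A normal host talks to a handful of peers; an infected one probes 200.
flows = [("10.0.0.5", f"host-{i}") for i in range(3)]
flows += [("10.0.0.9", f"10.1.{i // 256}.{i % 256}") for i in range(200)]
print(flag_scanners(flows))
```

Real systems refine this with per-host baselines or machine-learned models, as the text notes, but the core signal is the same: distinct-destination fan-out far above normal.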
A helpful worm or anti-worm is a worm designed to do something that its author feels is helpful, though not necessarily with the permission of the executing computer's owner. Beginning with the first research into worms at Xerox PARC, there have been attempts to create useful worms. Those worms allowed John Shoch and Jon Hupp to test the Ethernet principles on their network of Xerox Alto computers.[34] Similarly, the Nachi family of worms tried to download and install patches from Microsoft's website to fix vulnerabilities in the host system by exploiting those same vulnerabilities.[35] In practice, although this may have made these systems more secure, it generated considerable network traffic, rebooted the machine in the course of patching it, and did its work without the consent of the computer's owner or user. Regardless of their payload or their writers' intentions, security experts regard all worms as malware. Another example of this approach is Roku OS patching a bug that allowed Roku OS to be rooted, via an update to its screensaver channels: the screensaver would attempt to connect to the telnet service and patch the device.[36]
One study proposed the first computer worm that operates on the second layer of the OSI model (the data link layer), utilizing topology information such as content-addressable memory (CAM) tables and spanning tree information stored in switches to propagate and probe for vulnerable nodes until the enterprise network is covered.[37]
Anti-worms have been used to combat the effects of the Code Red,[38] Blaster, and Santy worms. Welchia is an example of a helpful worm.[39] Utilizing the same deficiencies exploited by the Blaster worm, Welchia infected computers and automatically began downloading Microsoft security updates for Windows without the users' consent. Welchia automatically reboots the computers it infects after installing the updates. One of these updates was the patch that fixed the exploit.[39]
Other examples of helpful worms are "Den_Zuko", "Cheeze", "CodeGreen", and "Millenium".[39]
Art worms support artists in the performance of massive-scale ephemeral artworks. They turn the infected computers into nodes that contribute to the artwork.[40]
https://en.wikipedia.org/wiki/Computer_worm
A conspiracy theory is an explanation for an event or situation that asserts the existence of a conspiracy (generally by powerful sinister groups, often political in motivation),[3][4][5] when other explanations are more probable.[3][6][7] The term generally has a negative connotation, implying that the appeal of a conspiracy theory is based in prejudice, emotional conviction, or insufficient evidence.[8] A conspiracy theory is distinct from a conspiracy; it refers to a hypothesized conspiracy with specific characteristics, including but not limited to opposition to the mainstream consensus among those who are qualified to evaluate its accuracy, such as scientists or historians.[9][10][11]
Conspiracy theories tend to be internally consistent and correlate with each other;[12] they are generally designed to resist falsification either by evidence against them or a lack of evidence for them.[13] They are reinforced by circular reasoning: both evidence against the conspiracy and absence of evidence for it are misinterpreted as evidence of its truth.[8][14] Stephan Lewandowsky observes "This interpretation relies on the notion that, the stronger the evidence against a conspiracy, the more the conspirators must want people to believe their version of events."[15] As a consequence, the conspiracy becomes a matter of faith rather than something that can be proven or disproven.[1][16] Studies have linked belief in conspiracy theories to distrust of authority and political cynicism.[17][18][19] Some researchers suggest that conspiracist ideation (belief in conspiracy theories) may be psychologically harmful or pathological.[20][21] Such belief is correlated with psychological projection, paranoia, and Machiavellianism.[22][23]
Psychologists usually attribute belief in conspiracy theories to a number of psychopathological conditions such as paranoia, schizotypy, narcissism, and insecure attachment,[9] or to a form of cognitive bias called "illusory pattern perception".[24][25] It has also been linked with the so-called Dark triad personality types, whose common feature is lack of empathy.[26] However, a 2020 review article found that most cognitive scientists view conspiracy theorizing as typically nonpathological, given that unfounded belief in conspiracy is common across both historical and contemporary cultures, and may arise from innate human tendencies towards gossip, group cohesion, and religion.[9] One historical review of conspiracy theories concluded that "Evidence suggests that the aversive feelings that people experience when in crisis—fear, uncertainty, and the feeling of being out of control—stimulate a motivation to make sense of the situation, increasing the likelihood of perceiving conspiracies in social situations."[27]
Historically, conspiracy theories have been closely linked to prejudice, propaganda, witch hunts, wars, and genocides.[12][28][29][30][31] They are often strongly believed by the perpetrators of terrorist attacks, and were used as justification by Timothy McVeigh and Anders Breivik, as well as by governments such as Nazi Germany, the Soviet Union,[28] and Turkey.[32] AIDS denialism by the government of South Africa, motivated by conspiracy theories, caused an estimated 330,000 deaths from AIDS.[33][34][35] QAnon and denialism about the 2020 United States presidential election results led to the January 6 United States Capitol attack,[36][37][38] and belief in conspiracy theories about genetically modified foods led the government of Zambia to reject food aid during a famine,[29] at a time when three million people in the country were suffering from hunger.[39] Conspiracy theories are a significant obstacle to improvements in public health,[29][40] encouraging opposition to such public health measures as vaccination and water fluoridation. They have been linked to outbreaks of vaccine-preventable diseases.[29][33][40][41] Other effects of conspiracy theories include reduced trust in scientific evidence,[12][29][42] radicalization and ideological reinforcement of extremist groups,[28][43] and negative consequences for the economy.[28]
Conspiracy theories once limited to fringe audiences have become commonplace in mass media, the Internet, and social media,[9][12] emerging as a cultural phenomenon of the late 20th and early 21st centuries.[44][45][46][47] They are widespread around the world and are often commonly believed, some even held by the majority of the population.[48][49][50] Interventions to reduce the occurrence of conspiracy beliefs include maintaining an open society, encouraging people to use analytical thinking, and reducing feelings of uncertainty, anxiety, or powerlessness.[42][48][49][51]
The Oxford English Dictionary defines conspiracy theory as "the theory that an event or phenomenon occurs as a result of a conspiracy between interested parties; spec. a belief that some covert but influential agency (typically political in motivation and oppressive in intent) is responsible for an unexplained event". It cites a 1909 article in The American Historical Review as the earliest usage example,[52][53] although it also appeared in print for several decades before.[54]
The earliest known usage was by the American author Charles Astor Bristed, in a letter to the editor published in The New York Times on 11 January 1863.[55] He used it to refer to claims that British aristocrats were intentionally weakening the United States during the American Civil War in order to advance their financial interests.
England has had quite enough to do in Europe and Asia, without going out of her way to meddle with America. It was a physical and moral impossibility that she could be carrying on a gigantic conspiracy against us. But our masses, having only a rough general knowledge of foreign affairs, and not unnaturally somewhat exaggerating the space which we occupy in the world's eye, do not appreciate the complications which rendered such a conspiracy impossible. They only look at the sudden right-about-face movement of the English Press and public, which is most readily accounted for on the conspiracy theory.[55]
The term is also used as a way to discredit dissenting analyses.[56] Robert Blaskiewicz comments that examples of the term were used as early as the nineteenth century and states that its usage has always been derogatory.[57] According to a study by Andrew McKenzie-McHarg, in contrast, in the nineteenth century the term conspiracy theory simply "suggests a plausible postulate of a conspiracy" and "did not, at this stage, carry any connotations, either negative or positive", though sometimes a postulate so labeled was criticized.[58] The author and activist George Monbiot argued that the terms "conspiracy theory" and "conspiracy theorist" are misleading, as conspiracies truly exist and theories are "rational explanations subject to disproof". Instead, he proposed the terms "conspiracy fiction" and "conspiracy fantasist".[59]
The term "conspiracy theory" is itself the subject of a conspiracy theory, which posits that the term was popularized by theCIAin order to discredit conspiratorial believers, particularly critics of theWarren Commission, by making them a target of ridicule.[60]In his 2013 bookConspiracy Theory in America, the political scientist Lance deHaven-Smith wrote that the term entered everyday language in the United States after 1964, the year in which the Warren Commission published its findings on theassassination of John F. Kennedy, withThe New York Timesrunning five stories that year using the term.[61]
Whether the CIA was responsible for popularising the term "conspiracy theory" was analyzed by Michael Butter, a Professor of American Literary and Cultural History at the University of Tübingen. Butter wrote in 2020 that the CIA document Concerning Criticism of the Warren Report, which proponents of the theory use as evidence of CIA motive and intention, does not contain the phrase "conspiracy theory" in the singular, and only uses the term "conspiracy theories" once, in the sentence: "Conspiracy theories have frequently thrown suspicion on our organisation [sic], for example, by falsely alleging that Lee Harvey Oswald worked for us."[62]
A conspiracy theory is not simply a conspiracy, which refers to any covert plan involving two or more people.[10] In contrast, the term "conspiracy theory" refers to hypothesized conspiracies that have specific characteristics. For example, conspiracist beliefs invariably oppose the mainstream consensus among those people who are qualified to evaluate their accuracy, such as scientists or historians.[11] Conspiracy theorists see themselves as having privileged access to socially persecuted knowledge or a stigmatized mode of thought that separates them from the masses who believe the official account.[10] Michael Barkun describes a conspiracy theory as a "template imposed upon the world to give the appearance of order to events".[10]
Real conspiracies, even very simple ones, are difficult to conceal and routinely experience unexpected problems.[63] In contrast, conspiracy theories suggest that conspiracies are unrealistically successful and that groups of conspirators, such as bureaucracies, can act with near-perfect competence and secrecy. The causes of events or situations are simplified to exclude complex or interacting factors, as well as the role of chance and unintended consequences. Nearly all observations are explained as having been deliberately planned by the alleged conspirators.[63]
In conspiracy theories, the conspirators are usually claimed to be acting with extreme malice.[63] As described by Robert Brotherton:
The malevolent intent assumed by most conspiracy theories goes far beyond everyday plots borne out of self-interest, corruption, cruelty, and criminality. The postulated conspirators are not merely people with selfish agendas or differing values. Rather, conspiracy theories postulate a black-and-white world in which good is struggling against evil. The general public is cast as the victim of organised persecution, and the motives of the alleged conspirators often verge on pure maniacal evil. At the very least, the conspirators are said to have an almost inhuman disregard for the basic liberty and well-being of the general population. More grandiose conspiracy theories portray the conspirators as being Evil Incarnate: of having caused all the ills from which we suffer, committing abominable acts of unthinkable cruelty on a routine basis, and striving ultimately to subvert or destroy everything we hold dear.[63]
A conspiracy theory may take any matter as its subject, but certain subjects attract greater interest than others. Favored subjects include famous deaths and assassinations, morally dubious government activities, suppressed technologies, and "false flag" terrorism. Among the longest-standing and most widely recognized conspiracy theories are notions concerning the assassination of John F. Kennedy, the 1969 Apollo Moon landings, and the 9/11 terrorist attacks, as well as numerous theories pertaining to alleged plots for world domination by various groups, both real and imaginary.[64]
Conspiracy beliefs are widespread around the world.[48] In rural Africa, common targets of conspiracy theorizing include societal elites, enemy tribes, and the Western world, with conspirators often alleged to enact their plans via sorcery or witchcraft; one common belief identifies modern technology as itself being a form of sorcery, created with the goal of harming or controlling the people.[48] In China, one widely published conspiracy theory claims that a number of events including the rise of Hitler, the 1997 Asian financial crisis, and climate change were planned by the Rothschild family, which may have led to effects on discussions about China's currency policy.[49][65]
Conspiracy theories once limited to fringe audiences have become commonplace in mass media, contributing to conspiracism emerging as a cultural phenomenon in the United States of the late 20th and early 21st centuries.[44][45][46][47] The general predisposition to believe conspiracy theories cuts across partisan and ideological lines. Conspiratorial thinking is correlated with antigovernmental orientations and a low sense of political efficacy, with conspiracy believers perceiving a governmental threat to individual rights and displaying a deep skepticism that who one votes for really matters.[66]
Conspiracy theories are often commonly believed, some even being held by the majority of the population.[48][49][50] A broad cross-section of Americans today gives credence to at least some conspiracy theories.[67] For instance, a study conducted in 2016 found that 10% of Americans think the chemtrail conspiracy theory is "completely true" and 20–30% think it is "somewhat true".[68] This puts "the equivalent of 120 million Americans in the 'chemtrails are real' camp".[68] Belief in conspiracy theories has therefore become a topic of interest for sociologists, psychologists and experts in folklore.
Conspiracy theories are widely present on the Web in the form of blogs and YouTube videos, as well as on social media. Whether the Web has increased the prevalence of conspiracy theories or not is an open research question.[69] The presence and representation of conspiracy theories in search engine results has been monitored and studied, showing significant variation across different topics, and a general absence of reputable, high-quality links in the results.[70]
One conspiracy theory that propagated through former US President Barack Obama's time in office[71] claimed that he was born in Kenya, instead of Hawaii, where he was actually born.[72] Former governor of Arkansas and political opponent of Obama Mike Huckabee made headlines in 2011[73] when he, among other members of Republican leadership, continued to question Obama's citizenship status.
A conspiracy theory can be local or international, focused on single events or covering multiple incidents and entire countries, regions and periods of history.[10] According to Russell Muirhead and Nancy Rosenblum, historically, traditional conspiracism has entailed a "theory", but over time, "conspiracy" and "theory" have become decoupled, as modern conspiracism is often without any kind of theory behind it.[75][76]
Jesse Walker (2013) has identified five kinds of conspiracy theories:[77]
Michael Barkun has identified three classifications of conspiracy theory:[78]
Murray Rothbard argues in favor of a model that contrasts "deep" conspiracy theories with "shallow" ones. According to Rothbard, a "shallow" theorist observes an event and asks Cui bono? ("Who benefits?"), jumping to the conclusion that a posited beneficiary is responsible for covertly influencing events. On the other hand, the "deep" conspiracy theorist begins with a hunch and then seeks out evidence. Rothbard describes this latter activity as a matter of confirming with certain facts one's initial paranoia.[79]
Belief in conspiracy theories is generally based not on evidence but on the faith of the believer.[80] Noam Chomsky contrasts conspiracy theory with institutional analysis, which focuses mainly on the public, long-term behavior of publicly known institutions, as recorded in, for example, scholarly documents or mainstream media reports.[81] Conspiracy theory conversely posits the existence of secretive coalitions of individuals and speculates on their alleged activities.[82][83] Belief in conspiracy theories is associated with biases in reasoning, such as the conjunction fallacy.[84]
Clare Birchall at King's College London describes conspiracy theory as a "form of popular knowledge or interpretation".[a] The use of the word 'knowledge' here suggests ways in which conspiracy theory may be considered in relation to legitimate modes of knowing.[b] The relationship between legitimate and illegitimate knowledge, Birchall claims, is closer than common dismissals of conspiracy theory contend.[86]
Theories involving multiple conspirators that are proven to be correct, such as the Watergate scandal, are usually referred to as investigative journalism or historical analysis rather than conspiracy theory.[87] Bjerg (2016) writes: "the way we normally use the term conspiracy theory excludes instances where the theory has been generally accepted as true. The Watergate scandal serves as the standard reference."[88] By contrast, the term "Watergate conspiracy theory" is used to refer to a variety of hypotheses in which those convicted in the conspiracy were in fact the victims of a deeper conspiracy.[89] There are also attempts to analyze the theory of conspiracy theories (conspiracy theory theory) to ensure that the term "conspiracy theory" is used to refer to narratives that have been debunked by experts, rather than as a generalized dismissal.[90]
Conspiracy theory rhetoric exploits several important cognitive biases, including proportionality bias, attribution bias, and confirmation bias.[33] Their arguments often take the form of asking reasonable questions, but without providing an answer based on strong evidence.[91] Conspiracy theories are most successful when proponents can gather followers from the general public, such as in politics, religion and journalism. These proponents may not necessarily believe the conspiracy theory; instead, they may just use it in an attempt to gain public approval. Conspiratorial claims can act as a successful rhetorical strategy to convince a portion of the public via appeal to emotion.[29]
Conspiracy theories typically justify themselves by focusing on gaps or ambiguities in knowledge, and then arguing that the true explanation for this must be a conspiracy.[63] In contrast, any evidence that directly supports their claims is generally of low quality. For example, conspiracy theories are often dependent on eyewitness testimony, despite its unreliability, while disregarding objective analyses of the evidence.[63]
Conspiracy theories are not able to be falsified and are reinforced by fallacious arguments. In particular, the logical fallacy of circular reasoning is used by conspiracy theorists: both evidence against the conspiracy and an absence of evidence for it are re-interpreted as evidence of its truth,[8][14] whereby the conspiracy becomes a matter of faith rather than something that can be proved or disproved.[1][16] The epistemic strategy of conspiracy theories has been called "cascade logic": each time new evidence becomes available, a conspiracy theory is able to dismiss it by claiming that even more people must be part of the cover-up.[29][63] Any information that contradicts the conspiracy theory is suggested to be disinformation by the alleged conspiracy.[42] Similarly, the continued lack of evidence directly supporting conspiracist claims is portrayed as confirming the existence of a conspiracy of silence; the fact that other people have not found or exposed any conspiracy is taken as evidence that those people are part of the plot, rather than considering that it may be because no conspiracy exists.[33][63] This strategy lets conspiracy theories insulate themselves from neutral analyses of the evidence, and makes them resistant to questioning or correction, which is called "epistemic self-insulation".[33][63]
Conspiracy theorists often take advantage of false balance in the media. They may claim to be presenting a legitimate alternative viewpoint that deserves equal time to argue its case; for example, this strategy has been used by the Teach the Controversy campaign to promote intelligent design, which often claims that there is a conspiracy of scientists suppressing their views. If they successfully find a platform to present their views in a debate format, they focus on using rhetorical ad hominems and attacking perceived flaws in the mainstream account, while avoiding any discussion of the shortcomings in their own position.[29]
The typical approach of conspiracy theories is to challenge any action or statement from authorities, using even the most tenuous justifications. Responses are then assessed using a double standard, where failing to provide an immediate response to the satisfaction of the conspiracy theorist will be claimed to prove a conspiracy. Any minor errors in the response are heavily emphasized, while deficiencies in the arguments of other proponents are generally excused.[29]
In science, conspiracists may suggest that a scientific theory can be disproven by a single perceived deficiency, even though such events are extremely rare. In addition, both disregarding the claims and attempting to address them will be interpreted as proof of a conspiracy.[29] Other conspiracist arguments may not be scientific; for example, in response to the IPCC Second Assessment Report in 1996, much of the opposition centered on promoting a procedural objection to the report's creation. Specifically, it was claimed that part of the procedure reflected a conspiracy to silence dissenters, which served as motivation for opponents of the report and successfully redirected a significant amount of the public discussion away from the science.[29]
Historically, conspiracy theories have been closely linked to prejudice, witch hunts, wars, and genocides.[28][29] They are often strongly believed by the perpetrators of terrorist attacks, and were used as justification by Timothy McVeigh, Anders Breivik and Brenton Tarrant, as well as by governments such as Nazi Germany and the Soviet Union.[28] AIDS denialism by the government of South Africa, motivated by conspiracy theories, caused an estimated 330,000 deaths from AIDS,[33][34][35] while belief in conspiracy theories about genetically modified foods led the government of Zambia to reject food aid during a famine,[29] at a time when 3 million people in the country were suffering from hunger.[39]
Conspiracy theories are a significant obstacle to improvements in public health.[29][40] People who believe in health-related conspiracy theories are less likely to follow medical advice, and more likely to use alternative medicine instead.[28] Conspiratorial anti-vaccination beliefs, such as conspiracy theories about pharmaceutical companies, can result in reduced vaccination rates and have been linked to outbreaks of vaccine-preventable diseases.[33][29][41][40] Health-related conspiracy theories often inspire resistance to water fluoridation, and contributed to the impact of the Lancet MMR autism fraud.[29][40]
Conspiracy theories are a fundamental component of a wide range of radicalized and extremist groups, where they may play an important role in reinforcing the ideology and psychology of their members as well as further radicalizing their beliefs.[28][43] These conspiracy theories often share common themes, even among groups that would otherwise be fundamentally opposed, such as the antisemitic conspiracy theories found among political extremists on both the far right and far left.[28] More generally, belief in conspiracy theories is associated with holding extreme and uncompromising viewpoints, and may help people in maintaining those viewpoints.[42] While conspiracy theories are not always present in extremist groups, and do not always lead to violence when they are, they can make the group more extreme, provide an enemy to direct hatred towards, and isolate members from the rest of society. Conspiracy theories are most likely to inspire violence when they call for urgent action, appeal to prejudices, or demonize and scapegoat enemies.[43]
Conspiracy theorizing in the workplace can also have economic consequences. For example, it leads to lower job satisfaction and lower commitment, resulting in workers being more likely to leave their jobs.[28] Comparisons have also been made with the effects of workplace rumors, which share some characteristics with conspiracy theories and result in both decreased productivity and increased stress. Subsequent effects on managers include reduced profits, reduced trust from employees, and damage to the company's image.[28][93]
Conspiracy theories can divert attention from important social, political, and scientific issues.[94][95] In addition, they have been used to discredit scientific evidence to the general public or in a legal context. Conspiratorial strategies also share characteristics with those used by lawyers who are attempting to discredit expert testimony, such as claiming that the experts have ulterior motives in testifying, or attempting to find someone who will provide statements to imply that expert opinion is more divided than it actually is.[29]
It is possible that conspiracy theories may also produce some compensatory benefits to society in certain situations. For example, they may help people identify governmental deceptions, particularly in repressive societies, and encourage government transparency.[49][94] However, real conspiracies are normally revealed by people working within the system, such as whistleblowers and journalists, and most of the effort spent by conspiracy theorists is inherently misdirected.[43] The most dangerous conspiracy theories are likely to be those that incite violence, scapegoat disadvantaged groups, or spread misinformation about important societal issues.[96]
Strategies to address conspiracy theories have been divided into two categories based on whether the target audience is the conspiracy theorists or the general public.[51][49] These strategies have been described as reducing either the supply of or the demand for conspiracy theories.[49] Both approaches can be used at the same time, although there may be issues of limited resources, or if arguments are used which may appeal to one audience at the expense of the other.[49]
Brief scientific literacy interventions, particularly those focusing on critical thinking skills, can effectively undermine conspiracy beliefs and related behaviors. Research led by Penn State scholars, published in the Journal of Consumer Research, found that enhancing scientific knowledge and reasoning through short interventions, such as videos explaining concepts like correlation and causation, reduces the endorsement of conspiracy theories. These interventions were most effective against conspiracy theories based on faulty reasoning and were successful even among groups prone to conspiracy beliefs. The studies, involving over 2,700 participants, highlight the importance of educational interventions in mitigating conspiracy beliefs, especially when timed to influence critical decision-making.[97][98]
People who feel empowered are more resistant to conspiracy theories. Methods to promote empowerment include encouraging people to use analytical thinking, priming people to think of situations where they are in control, and ensuring that decisions by society and government are seen to follow procedural fairness (the use of fair decision-making procedures).[51]
Methods of refutation which have shown effectiveness in various circumstances include: providing facts that demonstrate the conspiracy theory is false, attempting to discredit the source, explaining how the logic is invalid or misleading, and providing links to fact-checking websites.[51] It can also be effective to use these strategies in advance, informing people that they could encounter misleading information in the future, and why the information should be rejected (an approach also called inoculation or prebunking).[51][99][100] While it has been suggested that discussing conspiracy theories can raise their profile and make them seem more legitimate to the public, sufficiently persuasive discussion can instead put people on guard.[9]
Other approaches to reduce the appeal of conspiracy theories in general among the public may be based in the emotional and social nature of conspiratorial beliefs. For example, interventions that promote analytical thinking in the general public are likely to be effective. Another approach is to intervene in ways that decrease negative emotions, and specifically to improve feelings of personal hope and empowerment.[48]
It is much more difficult to convince people who already believe in conspiracy theories.[49][51] Conspiracist belief systems are not based on external evidence, but instead use circular logic where every belief is supported by other conspiracist beliefs.[51] In addition, conspiracy theories have a "self-sealing" nature, in which the types of arguments used to support them make them resistant to questioning from others.[49]
Characteristics of successful strategies for reaching conspiracy theorists have been divided into several broad categories: 1) Arguments can be presented by "trusted messengers", such as people who were formerly members of an extremist group. 2) Since conspiracy theorists think of themselves as people who value critical thinking, this can be affirmed and then redirected to encourage being more critical when analyzing the conspiracy theory. 3) Approaches demonstrate empathy, and are based on building understanding together, which is supported by modeling open-mindedness in order to encourage the conspiracy theorists to do likewise. 4) The conspiracy theories are not attacked with ridicule or aggressive deconstruction, and interactions are not treated like an argument to be won; this approach can work with the general public, but among conspiracy theorists it may simply be rejected.[51]
Interventions that reduce feelings of uncertainty, anxiety, or powerlessness result in a reduction in conspiracy beliefs.[42] Other possible strategies to mitigate the effect of conspiracy theories include education, media literacy, and increasing governmental openness and transparency.[99] Due to the relationship between conspiracy theories and political extremism, the academic literature on deradicalization is also important.[51]
One approach describes conspiracy theories as resulting from a "crippled epistemology", in which a person encounters or accepts very few relevant sources of information.[49][101] A conspiracy theory is more likely to appear justified to people with a limited "informational environment" who only encounter misleading information. These people may be "epistemologically isolated" in self-enclosed networks. From the perspective of people within these networks, disconnected from the information available to the rest of society, believing in conspiracy theories may appear to be justified.[49][101] In these cases, the solution would be to break the group's informational isolation.[49]
Public exposure to conspiracy theories can be reduced by interventions that reduce their ability to spread, such as by encouraging people to reflect before sharing a news story.[51] Researchers Carlos Diaz Ruiz and Tomas Nilsson have proposed technical and rhetorical interventions to counter the spread of conspiracy theories on social media.[102]
The primary defense against conspiracy theories is to maintain an open society, in which many sources of reliable information are available, and government sources are known to be credible rather than propaganda. Additionally, independent nongovernmental organizations are able to correct misinformation without requiring people to trust the government.[49] The absence of civil rights and civil liberties reduces the number of information sources available to the population, which may lead people to support conspiracy theories.[49] Since the credibility of conspiracy theories can be increased if governments act dishonestly or otherwise engage in objectionable actions, avoiding such actions is also a relevant strategy.[99]
Joseph Pierre has said that mistrust in authoritative institutions is the core component underlying many conspiracy theories, and that this mistrust creates an epistemic vacuum that makes individuals searching for answers vulnerable to misinformation. One possible solution is therefore to offer consumers a seat at the table to mend their mistrust in institutions.[103] Regarding the challenges of this approach, Pierre has said, "The challenge with acknowledging areas of uncertainty within a public sphere is that doing so can be weaponized to reinforce a post-truth view of the world in which everything is debatable, and any counter-position is just as valid. Although I like to think of myself as a middle of the road kind of individual, it is important to keep in mind that the truth does not always lie in the middle of a debate, whether we are talking about climate change, vaccines, or antipsychotic medications."[104]
Researchers have recommended that public policies should take into account the possibility of conspiracy theories relating to any policy or policy area, and prepare to combat them in advance.[99][9] Conspiracy theories have suddenly arisen in the context of policy issues as disparate as land-use laws and bicycle-sharing programs.[99] In the case of public communications by government officials, factors that improve the effectiveness of communication include using clear and simple messages, and using messengers who are trusted by the target population. Government information about conspiracy theories is more likely to be believed if the messenger is perceived as being part of someone's in-group. Official representatives may be more effective if they share characteristics with the target groups, such as ethnicity.[99]
In addition, when the government communicates with citizens to combat conspiracy theories, online methods are more efficient than other methods such as print publications. Online communication also promotes transparency, can improve a message's perceived trustworthiness, and is more effective at reaching underrepresented demographics. However, as of 2019, many governmental websites do not take full advantage of the available information-sharing opportunities. Similarly, social media accounts need to be used effectively in order to achieve meaningful communication with the public, such as by responding to requests that citizens send to those accounts. Other steps include adapting messages to the communication styles used on the social media platform in question, and promoting a culture of openness. Since mixed messaging can support conspiracy theories, it is also important to avoid conflicting accounts, such as by ensuring the accuracy of messages on the social media accounts of individual members of the organization.[99]
Successful methods for dispelling conspiracy theories have been studied in the context of public health campaigns. A key characteristic of communication strategies to address medical conspiracy theories is the use of techniques that rely less on emotional appeals; it is more effective to use methods that encourage people to process information rationally. The use of visual aids is also an essential part of these strategies. Since conspiracy theories are based on intuitive thinking, and visual information processing relies on intuition, visual aids are able to compete directly for the public's attention.[9]
In public health campaigns, information retention by the public is highest for loss-framed messages that include more extreme outcomes. However, excessively appealing to catastrophic scenarios (e.g. low vaccination rates causing an epidemic) may provoke anxiety, which is associated with conspiracism and could increase belief in conspiracy theories instead. Scare tactics have sometimes had mixed results, but are generally considered ineffective. An example of this is the use of images that showcase disturbing health outcomes, such as the impact of smoking on dental health. One possible explanation is that information processed via the fear response is typically not evaluated rationally, which may prevent the message from being linked to the desired behaviors.[9]
A particularly important technique is the use of focus groups to understand exactly what people believe, and the reasons they give for those beliefs. This allows messaging to focus on the specific concerns that people identify, and on topics that are easily misinterpreted by the public, since these are factors which conspiracy theories can take advantage of. In addition, discussions with focus groups and observations of the group dynamics can indicate which anti-conspiracist ideas are most likely to spread.[9]
Interventions that address medical conspiracy theories by reducing powerlessness include emphasizing the principle of informed consent, giving patients all the relevant information without imposing decisions on them, to ensure that they have a sense of control. Improving access to healthcare also reduces medical conspiracism, although doing so through political efforts can fuel additional conspiracy theories, as occurred with the Affordable Care Act (Obamacare) in the United States. Another successful strategy is to require people to watch a short video when they fulfil requirements such as registration for school or a driver's license, which has been demonstrated to improve vaccination rates and signups for organ donation.[9]
Another approach is based on viewing conspiracy theories as narratives which express personal and cultural values, making them less susceptible to straightforward factual corrections, and more effectively addressed by counter-narratives.[100][105] Counter-narratives can be more engaging and memorable than simple corrections, and can be adapted to the specific values held by individuals and cultures. These narratives may depict personal experiences, or alternatively they can be cultural narratives. In the context of vaccination, examples of cultural narratives include stories about scientific breakthroughs, about the world before vaccinations, or about heroic and altruistic researchers. The themes to be addressed would be those that could be exploited by conspiracy theories to increase vaccine hesitancy, such as perceptions of vaccine risk, lack of patient empowerment, and lack of trust in medical authorities.[100]
It has been suggested that directly countering misinformation can be counterproductive. For example, since conspiracy theories can reinterpret disconfirming information as part of their narrative, refuting a claim can result in accidentally reinforcing it,[63][106] which is referred to as a "backfire effect".[107] In addition, publishing criticism of conspiracy theories can result in legitimizing them.[94] In this context, possible interventions include carefully selecting which conspiracy theories to refute, requesting additional analyses from independent observers, and introducing cognitive diversity into conspiratorial communities by undermining their poor epistemology.[94] Any legitimization effect might also be reduced by responding to more conspiracy theories rather than fewer.[49]
There are psychological mechanisms by which backfire effects could potentially occur, but the evidence on this topic is mixed, and backfire effects are very rare in practice.[100][107][108] A 2020 review of the scientific literature on backfire effects found widespread failures to replicate their existence, even under conditions that would be theoretically favorable to observing them.[107] Due to this lack of reproducibility, as of 2020 most researchers believe that backfire effects are either unlikely to occur on the broader population level, occur only in very specific circumstances, or do not exist at all.[107] Brendan Nyhan, one of the researchers who initially proposed the occurrence of backfire effects, wrote in 2021 that the persistence of misinformation is most likely due to other factors.[108]
In general, people do reject conspiracy theories when they learn about their contradictions and lack of evidence.[9] For most people, corrections and fact-checking are very unlikely to have a negative impact, and there is no specific group of people in which backfire effects have been consistently observed.[107] Presenting people with factual corrections, or highlighting the logical contradictions in conspiracy theories, has been demonstrated to have a positive effect in many circumstances.[48][106] For example, this has been studied in the case of informing believers in 9/11 conspiracy theories about statements by actual experts and witnesses.[48] One possibility is that criticism is most likely to backfire if it challenges someone's worldview or identity, which suggests that an effective approach may be to provide criticism while avoiding such challenges.[106]
The widespread belief in conspiracy theories has become a topic of interest for sociologists, psychologists, and experts in folklore since at least the 1960s, when a number of conspiracy theories arose regarding the assassination of U.S. President John F. Kennedy. Sociologist Türkay Salim Nefes underlines the political nature of conspiracy theories. He suggests that one of the most important characteristics of these accounts is their attempt to unveil the "real but hidden" power relations in social groups.[109][110] The term "conspiracism" was popularized by academic Frank P. Mintz in the 1980s. According to Mintz, conspiracism denotes "belief in the primacy of conspiracies in the unfolding of history":[111]: 4
Conspiracism serves the needs of diverse political and social groups in America and elsewhere. It identifies elites, blames them for economic and social catastrophes, and assumes that things will be better once popular action can remove them from positions of power. As such, conspiracy theories do not typify a particular epoch or ideology.[111]: 199
Research suggests that, on a psychological level, conspiracist ideation—belief in conspiracy theories—can be harmful or pathological,[20][21] and is highly correlated with psychological projection, as well as with paranoia, which is predicted by the degree of a person's Machiavellianism.[112] The propensity to believe in conspiracy theories is strongly associated with the mental health disorder of schizotypy.[113][114][115][116][117] Conspiracy theories once limited to fringe audiences have become commonplace in mass media, emerging as a cultural phenomenon of the late 20th and early 21st centuries.[44][45][46][47] Exposure to conspiracy theories in news media and popular entertainment increases receptiveness to conspiratorial ideas, and has also increased the social acceptability of fringe beliefs.[28][118]
Conspiracy theories often use complicated and detailed arguments, including ones that appear analytical or scientific. However, belief in conspiracy theories is primarily driven by emotion.[48] One of the most widely confirmed facts about conspiracy theories is that belief in a single conspiracy theory is often associated with belief in other conspiracy theories.[33][119] This even applies when the conspiracy theories directly contradict each other—e.g., believing that Osama bin Laden was already dead before his compound in Pakistan was attacked makes the same person more likely to believe that he is still alive. One conclusion from this finding is that the content of a conspiracist belief is less important than the idea of a coverup by the authorities.[33][95][120] Analytical thinking aids in reducing belief in conspiracy theories, in part because it emphasizes rational and critical cognition.[42]
Some psychological scientists assert that explanations related to conspiracy theories can be, and often are, "internally consistent" with strong beliefs held prior to the event that sparked the belief in a conspiracy.[42] People who believe in conspiracy theories tend to believe in other unsubstantiated claims, including pseudoscience and paranormal phenomena.[121]
Psychological motives for believing in conspiracy theories can be categorized as epistemic, existential, or social. These motives are particularly acute in vulnerable and disadvantaged populations. However, it does not appear that the beliefs help to address these motives; in fact, they may be self-defeating, acting to make the situation worse instead.[42][106] For example, while conspiratorial beliefs can result from a perceived sense of powerlessness, exposure to conspiracy theories immediately suppresses personal feelings of autonomy and control. Furthermore, they also make people less likely to take actions that could improve their circumstances.[42][106]
This is additionally supported by the fact that conspiracy theories have a number of disadvantageous attributes.[42] For example, they promote a hostile and distrustful view of other people and groups, who are allegedly acting based on antisocial and cynical motivations. This is expected to lead to increased social alienation and anomie, and reduced social capital. Similarly, they depict the public as ignorant and powerless against the alleged conspirators, with important aspects of society determined by malevolent forces, a viewpoint that is likely to be disempowering.[42]
Each person may endorse conspiracy theories for one of many different reasons.[122] The most consistently demonstrated characteristics of people who find conspiracy theories appealing are a feeling of alienation, unhappiness or dissatisfaction with their situation, an unconventional worldview, and a sense of disempowerment.[122] While various aspects of personality affect susceptibility to conspiracy theories, none of the Big Five personality traits are associated with conspiracy beliefs.[122]
The political scientist Michael Barkun, discussing the usage of "conspiracy theory" in contemporary American culture, holds that this term is used for a belief that explains an event as the result of a secret plot by exceptionally powerful and cunning conspirators to achieve a malevolent end.[123][124] According to Barkun, the appeal of conspiracism is threefold: first, conspiracy theories claim to explain what institutional analysis cannot, appearing to make sense of a confusing world; second, they do so in an appealingly simple way, dividing the world sharply between the forces of light and the forces of darkness; and third, they present themselves as special, secret knowledge unknown to or unappreciated by others.
This third point is supported by the research of Roland Imhoff, professor of social psychology at the Johannes Gutenberg University Mainz. His research suggests that the smaller the minority believing in a specific theory, the more attractive it is to conspiracy theorists.[125] Humanistic psychologists argue that even if a posited cabal behind an alleged conspiracy is almost always perceived as hostile, there often remains an element of reassurance for theorists. This is because it is a consolation to imagine that difficulties in human affairs are created by humans and remain within human control. If a cabal can be implicated, there may be a hope of breaking its power or of joining it. Belief in the power of a cabal is an implicit assertion of human dignity—an unconscious affirmation that man is responsible for his own destiny.[126]
People formulate conspiracy theories to explain, for example, power relations in social groups and the perceived existence of evil forces.[c][124][109][110] Proposed psychological origins of conspiracy theorising include projection; the personal need to explain "a significant event [with] a significant cause"; and the product of various kinds and stages of thought disorder, such as paranoid disposition, ranging in severity to diagnosable mental illnesses. Some people prefer socio-political explanations over the insecurity of encountering random, unpredictable, or otherwise inexplicable events.[127][128][129][130][131][132] According to Berlet and Lyons, "Conspiracism is a particular narrative form of scapegoating that frames demonized enemies as part of a vast insidious plot against the common good, while it valorizes the scapegoater as a hero for sounding the alarm".[133]
Some psychologists believe that a search for meaning is common in conspiracism. Once cognized, confirmation bias and avoidance of cognitive dissonance may reinforce the belief. When a conspiracy theory has become embedded within a social group, communal reinforcement may also play a part.[134]
Inquiry into possible motives behind the accepting of irrational conspiracy theories has linked these beliefs[135] to distress resulting from an event that occurred, such as the events of 9/11. Additional research suggests that "delusional ideation" is the trait most likely to indicate a stronger belief in conspiracy theories.[136] Research also shows that an increased attachment to these irrational beliefs leads to a decreased desire for civic engagement.[84] Belief in conspiracy theories is correlated with low intelligence, lower analytical thinking, anxiety disorders, paranoia, and authoritarian beliefs.[137][138][139]
Professor Quassim Cassam argues that conspiracy theorists hold their beliefs due to flaws in their thinking and, more precisely, their intellectual character. He cites philosopher Linda Trinkaus Zagzebski and her book Virtues of the Mind in outlining intellectual virtues (such as humility, caution, and carefulness) and intellectual vices (such as gullibility, carelessness, and closed-mindedness). Whereas intellectual virtues help reach sound examination, intellectual vices "impede effective and responsible inquiry", meaning that those prone to believing in conspiracy theories possess certain vices while lacking necessary virtues.[140]
Some researchers have suggested that conspiracy theories could be partially caused by the human brain's mechanisms for detecting dangerous coalitions. Such a mechanism could have been helpful in the small-scale environment in which humanity evolved, but is mismatched to a modern, complex society, and may thus "misfire", perceiving conspiracies where none exist.[141]
Some historians have argued that psychological projection is prevalent amongst conspiracy theorists. According to the argument, this projection is manifested in the form of attributing undesirable characteristics of the self to the conspirators. Historian Richard Hofstadter stated that:
This enemy seems on many counts a projection of the self; both the ideal and the unacceptable aspects of the self are attributed to him. A fundamental paradox of the paranoid style is the imitation of the enemy. The enemy, for example, may be the cosmopolitan intellectual, but the paranoid will outdo him in the apparatus of scholarship, even of pedantry. ... The Ku Klux Klan imitated Catholicism to the point of donning priestly vestments, developing an elaborate ritual and an equally elaborate hierarchy. The John Birch Society emulates Communist cells and quasi-secret operation through "front" groups, and preaches a ruthless prosecution of the ideological war along lines very similar to those it finds in the Communist enemy. Spokesmen of the various fundamentalist anti-Communist "crusades" openly express their admiration for the dedication, discipline, and strategic ingenuity the Communist cause calls forth.[130]
Hofstadter also noted that "sexual freedom" is a vice frequently attributed to the conspiracist's target group, noting that "very often the fantasies of true believers reveal strong sadomasochistic outlets, vividly expressed, for example, in the delight of anti-Masons with the cruelty of Masonic punishments".[130]
Marcel Danesi suggests that people who believe conspiracy theories have difficulty rethinking situations, because exposure to those theories has caused neural pathways to become more rigid and less subject to change. Initial susceptibility to believing these theories' lies, dehumanizing language, and metaphors leads to the acceptance of larger and more extensive theories, because the hardened neural pathways are already present. Repetition of the "facts" of conspiracy theories and their connected lies simply reinforces the rigidity of those pathways. Thus, conspiracy theories and dehumanizing lies are not mere hyperbole; they can actually change the way people think:
Unfortunately, research into this brain wiring also shows that once people begin to believe lies, they are unlikely to change their minds even when confronted with evidence that contradicts their beliefs. It is a form of brainwashing. Once the brain has carved out a well-worn path of believing deceit, it is even harder to step out of that path – which is how fanatics are born. Instead, these people will seek out information that confirms their beliefs, avoid anything that is in conflict with them, or even turn the contrasting information on its head, so as to make it fit their beliefs.
People with strong convictions will have a hard time changing their minds, given how embedded a lie becomes in the mind. In fact, there are scientists and scholars still studying the best tools and tricks to combat lies with some combination of brain training and linguistic awareness.[142]
In addition to psychological factors such as conspiracist ideation, sociological factors also help account for who believes in which conspiracy theories. Such theories tend to get more traction among election losers in society, for example, and the emphasis on conspiracy theories by elites and leaders tends to increase belief among followers with higher levels of conspiracy thinking.[143] Christopher Hitchens described conspiracy theories as the "exhaust fumes of democracy":[131] the unavoidable result of a large amount of information circulating among a large number of people.
Conspiracy theories may be emotionally satisfying, as they assign blame to a group to which the theorist does not belong and, thus, absolve the theorist of moral or political responsibility in society.[144] Likewise, Roger Cohen, writing for The New York Times, has said that "captive minds ... resort to conspiracy theory because it is the ultimate refuge of the powerless. If you cannot change your own life, it must be that some greater force controls the world."[132]
Sociological historian Holger Herwig, studying German explanations for the origins of World War I, found that "Those events that are most important are hardest to understand because they attract the greatest attention from myth makers and charlatans."[145] Justin Fox of Time magazine argues that Wall Street traders are among the most conspiracy-minded groups of people, and ascribes this to the reality of some financial market conspiracies, and to the ability of conspiracy theories to provide necessary orientation in the market's day-to-day movements.[127]
Bruno Latour notes that the language and intellectual tactics of critical theory have been appropriated by those he describes as conspiracy theorists, including climate-change denialists and the 9/11 Truth movement: "Maybe I am taking conspiracy theories too seriously, but I am worried to detect, in those mad mixtures of knee-jerk disbelief, punctilious demands for proofs, and free use of powerful explanation from the social neverland, many of the weapons of social critique."[146]
Michael Kelly, a Washington Post journalist and critic of anti-war movements on both the left and right, coined the term "fusion paranoia" to refer to a political convergence of left-wing and right-wing activists around anti-war issues and civil liberties, which he said were motivated by a shared belief in conspiracism or shared anti-government views.[147]
Barkun has adopted this term to refer to how the synthesis of paranoid conspiracy theories, which were once limited to American fringe audiences, has given them mass appeal and enabled them to become commonplace in mass media,[148] thereby inaugurating an unrivaled period of people actively preparing for apocalyptic or millenarian scenarios in the United States of the late 20th and early 21st centuries.[149] Barkun notes the occurrence of lone-wolf conflicts with law enforcement acting as a proxy for threatening the established political powers.[150]
As evidence that undermines an alleged conspiracy grows, the number of alleged conspirators also grows in the minds of conspiracy theorists. This is because of an assumption that the alleged conspirators often have competing interests. For example, if Republican President George W. Bush is allegedly responsible for the 9/11 terrorist attacks, and the Democratic Party did not pursue exposing this alleged plot, that must mean that both the Democratic and Republican parties are conspirators in the alleged plot. It also assumes that the alleged conspirators are so competent that they can fool the entire world, but so incompetent that even the unskilled conspiracy theorists can find mistakes they make that prove the fraud. At some point, the number of alleged conspirators, combined with the contradictions within the alleged conspirators' interests and competence, becomes so great that maintaining the theory becomes an obvious exercise in absurdity.[151]
The physicist David Robert Grimes estimated the time it would take for a conspiracy to be exposed based on the number of people involved.[152][153] His calculations used data from the PRISM surveillance program, the Tuskegee syphilis experiment, and the FBI forensic scandal. Grimes estimated that:
Grimes's study did not consider exposure by sources outside the alleged conspiracy; it considered only exposure from within, through whistleblowers or incompetence.[154] Subsequent comments on the PubPeer website[155] point out that these calculations must exclude successful conspiracies, since by definition we do not know about them, and are off by an order of magnitude in the case of Bletchley Park, which remained a secret far longer than Grimes's calculations predicted.
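Grimes's published model is more detailed, but its core intuition can be illustrated with a toy calculation. The participant count and per-person annual leak probability below are hypothetical values chosen purely for illustration, not Grimes's figures: if each of N participants independently leaks with probability p per year, the chance that a conspiracy stays secret for t years shrinks exponentially with N and t.

```python
def survival_probability(n_people, annual_leak_prob, years):
    """Probability that no participant has leaked after `years` years,
    assuming each person leaks independently with a fixed annual
    probability (a toy simplification, not Grimes's actual model)."""
    return (1.0 - annual_leak_prob) ** (n_people * years)

def years_until_likely_exposure(n_people, annual_leak_prob):
    """First whole year at which exposure is more likely than not."""
    t = 0
    while survival_probability(n_people, annual_leak_prob, t) >= 0.5:
        t += 1
    return t

# Illustrative only: 1,000 participants, 0.01% annual leak chance each.
print(years_until_likely_exposure(1000, 0.0001))  # exposure likely within a few years
```

Even with a tiny per-person leak probability, a conspiracy of a thousand people becomes more likely than not to be exposed within a few years under this toy model, echoing Grimes's qualitative conclusion that large conspiracies are hard to sustain.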
The term "truth seeker" is adopted by some conspiracy theorists when describing themselves on social media.[156] Conspiracy theorists are often referred to derogatorily as "cookers" in Australia.[157] The term "cooker" is also loosely associated with the far right.[158][159]
The philosopher Karl Popper described the central problem of conspiracy theories as a form of fundamental attribution error, where every event is generally perceived as being intentional and planned, greatly underestimating the effects of randomness and unintended consequences.[95] In his book The Open Society and Its Enemies, he used the term "the conspiracy theory of society" to denote the idea that social phenomena such as "war, unemployment, poverty, shortages ... [are] the result of direct design by some powerful individuals and groups".[161] Popper argued that totalitarianism was founded on conspiracy theories which drew on imaginary plots driven by paranoid scenarios predicated on tribalism, chauvinism, or racism. He also noted that conspirators very rarely achieved their goal.[162]
Historically, real conspiracies have usually had little effect on history and have had unforeseen consequences for the conspirators, in contrast to conspiracy theories, which often posit grand, sinister organizations or world-changing events, the evidence for which has been erased or obscured.[163][164] As described by Bruce Cumings, history is instead "moved by the broad forces and large structures of human collectivities".[163]
Conspiracy theories are a prevalent feature of Arab culture and politics.[165] Variants include conspiracies involving colonialism, Zionism, superpowers, oil, and the war on terrorism, which is often referred to in Arab media as a "war against Islam".[165] For example, The Protocols of the Elders of Zion, an infamous hoax document purporting to be a Jewish plan for world domination, is commonly read and promoted in the Muslim world.[166][167][168] Roger Cohen has suggested that the popularity of conspiracy theories in the Arab world is "the ultimate refuge of the powerless".[132] Al-Mumin Said has noted the danger of such theories, for they "keep us not only from the truth but also from confronting our faults and problems".[169] Osama bin Laden and Ayman al-Zawahiri used conspiracy theories about the United States to gain support for al-Qaeda in the Arab world, and as rhetoric to distinguish themselves from similar groups, although they may not have believed the conspiratorial claims themselves.[170]
Conspiracy theories are a prevalent feature of culture and politics in Turkey. Conspiracism is an important phenomenon in understanding Turkish politics.[171] This is explained by a desire to "make up for our lost Ottoman grandeur",[171] the humiliation of perceiving Turkey as part of "the malfunctioning half" of the world,[172] and a "low level of media literacy among the Turkish population".[173]
There are a wide variety of conspiracy theories, including the Judeo-Masonic conspiracy theory,[174][175] the international Jewish conspiracy theory, and the war against Islam conspiracy theory. For example, Islamists, dissatisfied with the modernist and secularist reforms that took place throughout the history of the Ottoman Empire and the Turkish Republic, have put forward many conspiracy theories to defame the Treaty of Lausanne, an important peace treaty for the country, and the republic's founder Kemal Atatürk.[176][177] Another example is the Sèvres syndrome, a reference to the Treaty of Sèvres of 1920: a popular belief in Turkey that dangerous internal and external enemies, especially the West, are "conspiring to weaken and carve up the Turkish Republic".[178]
The historian Richard Hofstadter addressed the role of paranoia and conspiracism throughout U.S. history in his 1964 essay "The Paranoid Style in American Politics". Bernard Bailyn's classic The Ideological Origins of the American Revolution (1967) notes that a similar phenomenon could be found in North America during the time preceding the American Revolution. The term "conspiracism" labels people's attitudes as well as the type of conspiracy theories that are more global and historical in proportion.[179]
Harry G. West and others have noted that while conspiracy theorists may often be dismissed as a fringe minority, certain evidence suggests that a wide cross-section of the U.S. population believes in conspiracy theories. West also compares those theories to hypernationalism and religious fundamentalism.[180][181] Theologian Robert Jewett and philosopher John Shelton Lawrence attribute the enduring popularity of conspiracy theories in the U.S. to the Cold War, McCarthyism, and counterculture rejection of authority. They state that among both the left wing and right wing, there remains a willingness to use real events, such as Soviet plots, inconsistencies in the Warren Report, and the 9/11 attacks, to support the existence of unverified and ongoing large-scale conspiracies.[182]
In his studies of "American political demonology", historian Michael Paul Rogin likewise analyzed this paranoid style of politics throughout American history. Conspiracy theories frequently identify an imaginary subversive group that is supposedly attacking the nation, requiring the government and allied forces to engage in harsh extra-legal repression of those threatening subversives. Rogin cites examples from the Red Scare of 1919 to McCarthy's anti-communist campaign in the 1950s and, more recently, fears of immigrant hordes invading the US. Unlike Hofstadter, Rogin saw these "countersubversive" fears as frequently coming from those in power and dominant groups instead of from the dispossessed. Unlike Robert Jewett, Rogin blamed not the counterculture but America's dominant culture of liberal individualism, and the fears it stimulated, for the periodic eruption of irrational conspiracy theories.[183] The Watergate scandal has also been used to bestow legitimacy on other conspiracy theories, with Richard Nixon himself commenting that it served as a "Rorschach ink blot" which invited others to fill in the underlying pattern.[87]
Historian Kathryn S. Olmsted cites three reasons why Americans are prone to believing in government conspiracy theories:
Alex Jones referenced numerous conspiracy theories to convince his supporters to endorse Ron Paul over Mitt Romney in the 2012 Republican Party presidential primaries and Donald Trump over Hillary Clinton in the 2016 United States presidential election.[185][186] Into the 2020s, the QAnon conspiracy theory alleges that Trump is fighting against a deep-state cabal of child sex-abusing and Satan-worshipping Democrats.[36][37][187][188][189][190]
https://en.wikipedia.org/wiki/Conspiracy_theory
Radio-frequency identification (RFID) uses electromagnetic fields to automatically identify and track tags attached to objects. An RFID system consists of a tiny radio transponder called a tag, a radio receiver, and a transmitter. When triggered by an electromagnetic interrogation pulse from a nearby RFID reader device, the tag transmits digital data, usually an identifying inventory number, back to the reader. This number can be used to track inventory goods.[1]
Passive tags are powered by energy from the RFID reader's interrogating radio waves. Active tags are powered by a battery and thus can be read at a greater range from the RFID reader, up to hundreds of meters.
Unlike a barcode, the tag does not need to be within the line of sight of the reader, so it may be embedded in the tracked object. RFID is one method of automatic identification and data capture (AIDC).[2]
RFID tags are used in many industries. For example, an RFID tag attached to an automobile during production can be used to track its progress through the assembly line,[citation needed] RFID-tagged pharmaceuticals can be tracked through warehouses,[citation needed] and implanting RFID microchips in livestock and pets enables positive identification of animals.[3] Tags can also be used in shops to expedite checkout, and to prevent theft by customers and employees.[4]
Since RFID tags can be attached to physical money, clothing, and possessions, or implanted in animals and people, the possibility of reading personally linked information without consent has raised serious privacy concerns.[5] These concerns resulted in the development of standard specifications addressing privacy and security issues.
In 2014, the world RFID market was worth US$8.89 billion, up from US$7.77 billion in 2013 and US$6.96 billion in 2012. This figure includes tags, readers, and software/services for RFID cards, labels, fobs, and all other form factors. The market value is expected to rise from US$12.08 billion in 2020 to US$16.23 billion by 2029.[6]
In 1945, Leon Theremin invented the "Thing", a listening device for the Soviet Union which retransmitted incident radio waves with added audio information. Sound waves vibrated a diaphragm which slightly altered the shape of the resonator, which modulated the reflected radio frequency. Even though this device was a covert listening device, rather than an identification tag, it is considered to be a predecessor of RFID because it was passive, being energised and activated by waves from an outside source.[7]
Similar technology, such as the Identification friend or foe transponder, was routinely used by the Allies and Germany in World War II to identify aircraft as friendly or hostile. Transponders are still used by most powered aircraft.[8] An early work exploring RFID is the landmark 1948 paper by Harry Stockman,[9] who wrote: "Considerable research and development work has to be done before the remaining basic problems in reflected-power communication are solved, and before the field of useful applications is explored."
Mario Cardullo's device, patented on January 23, 1973, was the first true ancestor of modern RFID,[10] as it was a passive radio transponder with memory.[11] The initial device was passive, powered by the interrogating signal, and was demonstrated in 1971 to the New York Port Authority and other potential users. It consisted of a transponder with 16-bit memory for use as a toll device. The basic Cardullo patent covers the use of radio frequency (RF), sound and light as transmission carriers. The original business plan presented to investors in 1969 showed uses in transportation (automotive vehicle identification, automatic toll system, electronic license plate, electronic manifest, vehicle routing, vehicle performance monitoring), banking (electronic chequebook, electronic credit card), security (personnel identification, automatic gates, surveillance) and medical (identification, patient history).[10]
In 1973, an early demonstration of reflected power (modulated backscatter) RFID tags, both passive and semi-passive, was performed by Steven Depp, Alfred Koelle and Robert Freyman at the Los Alamos National Laboratory.[12] The portable system operated at 915 MHz and used 12-bit tags. This technique is used by the majority of today's UHFID and microwave RFID tags.[13]
In 1983, the first patent to be associated with the abbreviation RFID was granted to Charles Walton.[14]
In 1996, the first patent for a batteryless RFID passive tag with limited interference was granted to David Everett, John Frech, Theodore Wright, and Kelly Rodriguez.[15]
A radio-frequency identification system uses tags, or labels, attached to the objects to be identified. Two-way radio transmitter-receivers called interrogators or readers send a signal to the tag and read its response.[16]
RFID tags are made out of three pieces:
The tag information is stored in non-volatile memory.[17] The RFID tag includes either fixed or programmable logic for processing the transmission and sensor data.[citation needed]
RFID tags can be passive, active, or battery-assisted passive. An active tag has an on-board battery and periodically transmits its ID signal.[17] A battery-assisted passive tag has a small battery on board and is activated when in the presence of an RFID reader. A passive tag is cheaper and smaller because it has no battery; instead, the tag uses the radio energy transmitted by the reader. However, to operate, a passive tag must be illuminated with a power level roughly a thousand times stronger than an active tag needs for signal transmission.[18]
Tags may either be read-only, having a factory-assigned serial number that is used as a key into a database, or may be read/write, where object-specific data can be written into the tag by the system user. Field programmable tags may be write-once, read-multiple; "blank" tags may be written with an electronic product code by the user.[19]
The RFID tag receives the message and then responds with its identification and other information. This may be only a unique tag serial number, or may be product-related information such as a stock number, lot or batch number, production date, or other specific information. Since tags have individual serial numbers, the RFID system design can discriminate among several tags that might be within the range of the RFID reader and read them simultaneously.
RFID systems can be classified by the type of tag and reader. There are three types:[20]
Fixed readers are set up to create a specific interrogation zone which can be tightly controlled. This allows a highly defined reading area for when tags go in and out of the interrogation zone. Mobile readers may be handheld or mounted on carts or vehicles.
Signaling between the reader and the tag is done in several different incompatible ways, depending on the frequency band used by the tag. Tags operating on LF and HF bands are, in terms of radio wavelength, very close to the reader antenna because they are only a small percentage of a wavelength away. In this near-field region, the tag is closely coupled electrically with the transmitter in the reader. The tag can modulate the field produced by the reader by changing the electrical loading the tag represents. By switching between lower and higher relative loads, the tag produces a change that the reader can detect. At UHF and higher frequencies, the tag is more than one radio wavelength away from the reader, requiring a different approach: the tag can backscatter a signal. Active tags may contain functionally separated transmitters and receivers, and the tag need not respond on a frequency related to the reader's interrogation signal.[27]
An Electronic Product Code (EPC) is one common type of data stored in a tag. When written into the tag by an RFID printer, the tag contains a 96-bit string of data. The first eight bits are a header which identifies the version of the protocol. The next 28 bits identify the organization that manages the data for this tag; the organization number is assigned by the EPCGlobal consortium. The next 24 bits are an object class, identifying the kind of product. The last 36 bits are a unique serial number for a particular tag. These last two fields are set by the organization that issued the tag. Rather like a URL, the total electronic product code number can be used as a key into a global database to uniquely identify a particular product.[28]
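The 96-bit layout described above (8-bit header, 28-bit manager number, 24-bit object class, 36-bit serial) can be unpacked with simple bit arithmetic. A minimal sketch in Python, with field widths taken from the paragraph above; the function name and example values are illustrative, not part of the EPC standard:

```python
def parse_epc96(epc):
    """Split a 96-bit EPC integer into its four fields:
    8-bit header, 28-bit manager number, 24-bit object class, 36-bit serial.
    Field widths follow the layout described in the text (8+28+24+36 = 96)."""
    serial = epc & ((1 << 36) - 1)               # lowest 36 bits
    object_class = (epc >> 36) & ((1 << 24) - 1) # next 24 bits
    manager = (epc >> 60) & ((1 << 28) - 1)      # next 28 bits
    header = (epc >> 88) & 0xFF                  # top 8 bits
    return header, manager, object_class, serial

# Illustrative round trip: pack hypothetical field values, then parse them back.
epc = (0x30 << 88) | (1234567 << 60) | (54321 << 36) | 987654321
print(parse_epc96(epc))  # (48, 1234567, 54321, 987654321)
```

The same shift-and-mask pattern works in any language with integer types wide enough to hold the fields; in practice readers hand back the 96 bits as twelve bytes, which can be assembled into an integer first.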
Often more than one tag will respond to a tag reader. For example, many individual products with tags may be shipped in a common box or on a common pallet. Collision detection is important to allow reading of data. Two different types of protocols are used to "singulate" a particular tag, allowing its data to be read in the midst of many similar tags. In a slotted Aloha system, the reader broadcasts an initialization command and a parameter that the tags individually use to pseudo-randomly delay their responses. When using an "adaptive binary tree" protocol, the reader sends an initialization symbol and then transmits one bit of ID data at a time; only tags with matching bits respond, and eventually only one tag matches the complete ID string.[29]
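A rough simulation can illustrate the slotted-Aloha variant described above. The frame size, seed, and function name here are illustrative assumptions, not parameters of any RFID standard: each round, every unread tag picks a random slot, and only tags alone in their slot are read; collided tags retry in the next round.

```python
import random

def slotted_aloha_rounds(tag_ids, frame_size=8, rng=None):
    """Simulate slotted-Aloha singulation: each round, unread tags pick a
    random slot in the frame; a tag alone in its slot is read (singulated),
    while colliding tags wait for the next round. Returns the round count."""
    rng = rng or random.Random(0)  # fixed seed for a repeatable illustration
    unread = set(tag_ids)
    rounds = 0
    while unread:
        rounds += 1
        slots = {}
        for tag in unread:
            slots.setdefault(rng.randrange(frame_size), []).append(tag)
        for occupants in slots.values():
            if len(occupants) == 1:  # no collision in this slot: tag is read
                unread.discard(occupants[0])
    return rounds

print(slotted_aloha_rounds(range(20), frame_size=8))  # several rounds needed
```

The simulation shows why bulk reading slows down with tag density: with more tags than slots, collisions are guaranteed in early rounds, so reading time grows faster than linearly unless the reader adapts the frame size.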
Both methods have drawbacks when used with many tags or with multiple overlapping readers.[citation needed]
"Bulk reading" is a strategy for interrogating multiple tags at the same time, but lacks sufficient precision for inventory control. A group of objects, all of them RFID tagged, are read completely from one single reader position at one time. However, as tags respond strictly sequentially, the time needed for bulk reading grows linearly with the number of labels to be read. This means it takes at least twice as long to read twice as many labels. Due to collision effects, the time required is greater.[30]
A group of tags has to be illuminated by the interrogating signal just like a single tag. This is not a challenge concerning energy, but with respect to visibility; if any of the tags are shielded by other tags, they might not be sufficiently illuminated to return a sufficient response. The response conditions for inductively coupled HF RFID tags and coil antennas in magnetic fields appear better than for UHF or SHF dipole fields, but then distance limits apply and may prevent success.[citation needed][31]
Under operational conditions, bulk reading is not reliable. Bulk reading can be a rough guide for logistics decisions, but due to a high proportion of reading failures, it is not (yet)[when?] suitable for inventory management. However, when a single RFID tag might be seen as not guaranteeing a proper read, multiple RFID tags, where at least one will respond, may be a safer approach for detecting a known grouping of objects. In this respect, bulk reading is a fuzzy method for process support. From the perspective of cost and effect, bulk reading is not reported as an economical approach to secure process control in logistics.[32]
RFID tags are easy to conceal or incorporate in other items. For example, in 2009 researchers at Bristol University successfully glued RFID micro-transponders to live ants in order to study their behavior.[33] This trend towards increasingly miniaturized RFIDs is likely to continue as technology advances.
Hitachi holds the record for the smallest RFID chip, at 0.05 mm × 0.05 mm. This is 1/64th the size of the previous record holder, the mu-chip.[34] Manufacture is enabled by using the silicon-on-insulator (SOI) process. These dust-sized chips can store 38-digit numbers using 128-bit read-only memory (ROM).[35] A major challenge is the attachment of antennas, which limits read range to only millimeters.
In early 2020, MIT researchers demonstrated a terahertz frequency identification (TFID) tag that is barely 1 square millimeter in size. The device is essentially a piece of silicon that is inexpensive, small, and functions like a larger RFID tag. Because of the small size, manufacturers could tag any product and track logistics information for minimal cost.[36][37]
An RFID tag can be affixed to an object and used to track tools, equipment, inventory, assets, people, or other objects.
RFID offers advantages over manual systems or use of barcodes. The tag can be read if passed near a reader, even if it is covered by the object or not visible. The tag can be read inside a case, carton, box or other container, and unlike barcodes, RFID tags can be read hundreds at a time; barcodes can only be read one at a time using current devices. Some RFID tags, such as battery-assisted passive tags, are also able to monitor temperature and humidity.[38]
In 2011, the cost of passive tags started at US$0.09 each; special tags, meant to be mounted on metal or withstand gamma sterilization, could cost up to US$5. Active tags for tracking containers, medical assets, or monitoring environmental conditions in data centers started at US$50 and could be over US$100 each.[39]Battery-Assisted Passive (BAP) tags were in the US$3–10 range.[citation needed]
RFID can be used in a variety of applications,[40][41]such as:
In 2010, three factors drove a significant increase in RFID usage: decreased cost of equipment and tags, increased performance to a reliability of 99.9%, and a stable international standard around HF and UHF passive RFID. The adoption of these standards was driven by EPCglobal, a joint venture between GS1 and GS1 US, which were responsible for driving global adoption of the barcode in the 1970s and 1980s. The EPCglobal Network was developed by the Auto-ID Center.[45]
RFID provides a way for organizations to identify and manage stock, tools and equipment (asset tracking), etc. without manual data entry. Manufactured products such as automobiles or garments can be tracked through the factory and through shipping to the customer. Automatic identification with RFID can be used for inventory systems. Many organisations require that their vendors place RFID tags on all shipments to improve supply chain management.[citation needed] Warehouse management systems[clarification needed] incorporate this technology to speed up the receiving and delivery of products and reduce the cost of labor needed in warehouses.[46]
RFID is used for item-level tagging in retail stores. This can enable more accurate and lower-labor-cost supply chain and store inventory tracking, as is done at Lululemon, though physically locating items in stores requires more expensive technology.[47] RFID tags can be used at checkout; for example, at some stores of the French retailer Decathlon, customers perform self-checkout by either using a smartphone or putting items into a bin near the register that scans the tags without having to orient each one toward the scanner.[47] Some stores use RFID-tagged items to trigger systems that provide customers with more information or suggestions, such as fitting rooms at Chanel and the "Color Bar" at Kendra Scott stores.[47]
Item tagging can also provide protection against theft by customers and employees by using electronic article surveillance (EAS). Tags of different types can be physically removed with a special tool or deactivated electronically when payment is made.[48] On leaving the shop, customers have to pass near an RFID detector; if they have items with active RFID tags, an alarm sounds, both indicating an unpaid-for item and identifying what it is.
Casinos can use RFID to authenticate poker chips, and can selectively invalidate any chips known to be stolen.[49]
RFID tags are widely used in identification badges, replacing earlier magnetic stripe cards. These badges need only be held within a certain distance of the reader to authenticate the holder. Tags can also be placed on vehicles, which can be read at a distance, to allow entrance to controlled areas without having to stop the vehicle and present a card or enter an access code.[citation needed]
In 2010, Vail Resorts began using UHF Passive RFID tags in ski passes.[50]
Facebook is using RFID cards at most of their live events to allow guests to automatically capture and post photos.[citation needed][when?]
Automotive brands have adopted RFID for social media product placement more quickly than other industries. Mercedes was an early adopter in 2011 at the PGA Golf Championships,[51] and by the 2013 Geneva Motor Show many of the larger brands were using RFID for social media marketing.[52][further explanation needed]
To prevent retailers diverting products, manufacturers are exploring the use of RFID tags on promoted merchandise so that they can track exactly which product has sold through the supply chain at fully discounted prices.[53][when?]
Yard management, shipping and freight and distribution centers use RFID tracking. In the railroad industry, RFID tags mounted on locomotives and rolling stock identify the owner, identification number and type of equipment and its characteristics. This can be used with a database to identify the type, origin, destination, etc. of the commodities being carried.[54]
In commercial aviation, RFID is used to support maintenance on commercial aircraft. RFID tags are used to identify baggage and cargo at several airports and airlines.[55][56]
Some countries are using RFID for vehicle registration and enforcement.[57]RFID can help detect and retrieve stolen cars.[58][59]
RFID is used in intelligent transportation systems. In New York City, RFID readers are deployed at intersections to track E-ZPass tags as a means of monitoring traffic flow. The data is fed through the broadband wireless infrastructure to the traffic management center for use in adaptive traffic control of the traffic lights.[60]
Where ship, rail, or highway tanks are being loaded, a fixed RFID antenna contained in a transfer hose can read an RFID tag affixed to the tank, positively identifying it.[61]
At least one company has introduced RFID to identify and locate underground infrastructure assets such as gas pipelines, sewer lines, electrical cables, communication cables, etc.[62]
The first RFID passports ("e-passports") were issued by Malaysia in 1998. In addition to information also contained on the visual data page of the passport, Malaysian e-passports record the travel history (time, date, and place) of entry into and exit out of the country.[citation needed]
Other countries that insert RFID in passports include Norway (2005),[63] Japan (March 1, 2006), most EU countries (around 2006), Singapore (2006), Australia, Hong Kong, the United States (2007), the United Kingdom and Northern Ireland (2006), India (June 2008), Serbia (July 2008), Republic of Korea (August 2008), Taiwan (December 2008), Albania (January 2009), The Philippines (August 2009), Republic of Macedonia (2010), Argentina (2012), Canada (2013), Uruguay (2015)[64] and Israel (2017).
Standards for RFID passports are determined by the International Civil Aviation Organization (ICAO), and are contained in ICAO Document 9303, Part 1, Volumes 1 and 2 (6th edition, 2006). ICAO refers to the ISO/IEC 14443 RFID chips in e-passports as "contactless integrated circuits". ICAO standards provide for e-passports to be identifiable by a standard e-passport logo on the front cover.
Since 2006, RFID tags included in new United States passports store the same information that is printed within the passport, and include a digital picture of the owner.[65] The United States Department of State initially stated the chips could only be read from a distance of 10 centimetres (3.9 in), but after widespread criticism and a clear demonstration that special equipment can read the test passports from 10 metres (33 ft) away,[66] the passports were designed to incorporate a thin metal lining to make it more difficult for unauthorized readers to skim information when the passport is closed. The department will also implement Basic Access Control (BAC), which functions as a personal identification number (PIN) in the form of characters printed on the passport data page. Before a passport's tag can be read, this PIN must be entered into an RFID reader. The BAC also enables the encryption of any communication between the chip and interrogator.[67]
In many countries, RFID tags can be used to pay for mass transit fares on buses, trains, or subways, or to collect tolls on highways.
Some bike lockers are operated with RFID cards assigned to individual users. A prepaid card is required to open or enter a facility or locker and is used to track and charge based on how long the bike is parked.[citation needed]
The Zipcar car-sharing service uses RFID cards for locking and unlocking cars and for member identification.[68]
In Singapore, RFID replaces the paper Season Parking Ticket (SPT).[69]
RFID tags for animals represent one of the oldest uses of RFID. Originally meant for large ranches and rough terrain, RFID has become crucial in animal identification management since the outbreak of mad-cow disease. An implantable RFID tag or transponder can also be used for animal identification. The transponders are better known as PIT (Passive Integrated Transponder) tags, passive RFID, or "chips" on animals.[70] The Canadian Cattle Identification Agency began using RFID tags as a replacement for barcode tags. Currently, CCIA tags are used in Wisconsin and by United States farmers on a voluntary basis. The USDA is currently developing its own program.
RFID tags are required for all cattle sold in Australia and in some states, sheep and goats as well.[71]
Biocompatible microchip implants that use RFID technology are being routinely implanted in humans. The first-ever human to receive an RFID microchip implant was American artist Eduardo Kac in 1997.[72][73] Kac implanted the microchip live on television (and also live on the Internet) in the context of his artwork Time Capsule.[74] A year later, British professor of cybernetics Kevin Warwick had an RFID chip implanted in his arm by his general practitioner, George Boulos.[75][76] In 2004, the 'Baja Beach Club' operated by Conrad Chase in Barcelona[77] and Rotterdam offered implanted chips to identify their VIP customers, who could in turn use them to pay for service. In 2009, British scientist Mark Gasson had an advanced glass capsule RFID device surgically implanted into his left hand and subsequently demonstrated how a computer virus could wirelessly infect his implant and then be transmitted on to other systems.[78]
The Food and Drug Administration in the United States approved the use of RFID chips in humans in 2004.[79]
There is controversy regarding human applications of implantable RFID technology, including concerns that individuals could potentially be tracked by carrying an identifier unique to them. Privacy advocates have protested against implantable RFID chips, warning of potential abuse. Some are concerned this could lead to abuse by an authoritarian government, to removal of freedoms,[80] and to the emergence of an "ultimate panopticon", a society where all citizens behave in a socially accepted manner because others might be watching.[81]
On July 22, 2006, Reuters reported that two hackers, Newitz and Westhues, at a conference in New York City demonstrated that they could clone the RFID signal from a human implanted RFID chip, indicating that the device was not as secure as was previously claimed.[82]
The UFO religion Universe People is notorious online for their vocal opposition to human RFID chipping, which they claim is a saurian attempt to enslave the human race; one of their web domains is "dont-get-chipped".[83][84][85]
Adoption of RFID in the medical industry has been widespread and very effective.[86] Hospitals are among the first users to combine both active and passive RFID.[87] Active tags track high-value, or frequently moved items, and passive tags track smaller, lower cost items that only need room-level identification.[88] Medical facility rooms can collect data from transmissions of RFID badges worn by patients and employees, as well as from tags assigned to items such as mobile medical devices.[89] The U.S. Department of Veterans Affairs (VA) recently announced plans to deploy RFID in hospitals across America to improve care and reduce costs.[90]
Since 2004, a number of U.S. hospitals have begun implanting patients with RFID tags and using RFID systems; the systems are typically used for workflow and inventory management.[91][92][93] The use of RFID to prevent mix-ups between sperm and ova in IVF clinics is also being considered.[94]
In October 2004, the FDA approved the USA's first RFID chips that can be implanted in humans. The 134 kHz RFID chips, from VeriChip Corp., can incorporate personal medical information and could save lives and limit injuries from errors in medical treatments, according to the company. Anti-RFID activists Katherine Albrecht and Liz McIntyre discovered an FDA Warning Letter that spelled out health risks.[95] According to the FDA, these include "adverse tissue reaction", "migration of the implanted transponder", "failure of implanted transponder", "electrical hazards" and "magnetic resonance imaging [MRI] incompatibility."
Libraries have used RFID to replace the barcodes on library items. The tag can contain identifying information or may just be a key into a database. An RFID system may replace or supplement bar codes and may offer another method of inventory management and self-service checkout by patrons. It can also act as a security device, taking the place of the more traditional electromagnetic security strip.[96]
It is estimated that over 30 million library items worldwide now contain RFID tags, including some in the Vatican Library in Rome.[97]
Since RFID tags can be read through an item, there is no need to open a book cover or DVD case to scan an item, and a stack of books can be read simultaneously. Book tags can be read while books are in motion on a conveyor belt, which reduces staff time. This can all be done by the borrowers themselves, reducing the need for library staff assistance. With portable readers, inventories could be done on a whole shelf of materials within seconds.[98] However, as of 2008, this technology remained too costly for many smaller libraries, and the conversion period has been estimated at 11 months for an average-size library. A 2004 Dutch estimate was that a library which lends 100,000 books per year should plan on a cost of €50,000 (borrow- and return-stations: 12,500 each, detection porches 10,000 each; tags 0.36 each). RFID taking a large burden off staff could also mean that fewer staff will be needed, resulting in some of them getting laid off,[97] but that has so far not happened in North America, where recent surveys have not returned a single library that cut staff because of adding RFID.[citation needed][99] In fact, library budgets are being reduced for personnel and increased for infrastructure, making it necessary for libraries to add automation to compensate for the reduced staff size.[citation needed][99] Also, the tasks that RFID takes over are largely not the primary tasks of librarians.[citation needed][99] A finding in the Netherlands is that borrowers are pleased with the fact that staff are now more available for answering questions.[citation needed][99]
Privacy concerns have been raised surrounding library use of RFID.[100][101] Because some RFID tags can be read up to 100 metres (330 ft) away, there is some concern over whether sensitive information could be collected from an unwilling source. However, library RFID tags do not contain any patron information,[102] and the tags used in the majority of libraries use a frequency only readable from approximately 10 feet (3.0 m).[96] Another concern is that a non-library agency could potentially record the RFID tags of every person leaving the library without the library administrator's knowledge or consent. One simple option is to let the book transmit a code that has meaning only in conjunction with the library's database. Another possible enhancement would be to give each book a new code every time it is returned. In the future, should readers become ubiquitous (and possibly networked), then stolen books could be traced even outside the library. Tag removal could be made difficult if the tags are so small that they fit invisibly inside a (random) page, possibly put there by the publisher.[citation needed]
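The database-keyed, rotating-code scheme described above can be sketched in a few lines. This is a minimal illustration with hypothetical names, not an actual library system: the tag carries only an opaque code, the mapping to an item lives in the library's database, and the code is reissued on every return so codes observed outside the library go stale.

```python
import secrets

class LibraryTagRegistry:
    """Maps opaque RFID tag codes to items; a code means nothing outside this database."""

    def __init__(self):
        self._code_to_item = {}

    def issue_code(self, item_id):
        # Bind a fresh random code to the item.
        code = secrets.token_hex(8)
        self._code_to_item[code] = item_id
        return code

    def look_up(self, code):
        return self._code_to_item.get(code)

    def return_item(self, old_code):
        # On return, retire the old code and issue a new one,
        # so a code recorded by an outside reader becomes useless.
        item_id = self._code_to_item.pop(old_code)
        return self.issue_code(item_id)

registry = LibraryTagRegistry()
code = registry.issue_code("book-1842")
new_code = registry.return_item(code)
assert registry.look_up(code) is None          # old code is now meaningless
assert registry.look_up(new_code) == "book-1842"
```

The point of the sketch is that an eavesdropper who records a tag code learns nothing about the patron or the item, and the code stops resolving after the next return.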
RFID technologies are now[when?] also implemented in end-user applications in museums.[103] An example was the custom-designed temporary research application, "eXspot", at the Exploratorium, a science museum in San Francisco, California. A visitor entering the museum received an RF tag that could be carried as a card. The eXspot system enabled the visitor to receive information about specific exhibits. Aside from the exhibit information, the visitor could take photographs of themselves at the exhibit. It was also intended to allow the visitor to take data for later analysis. The collected information could be retrieved at home from a "personalized" website keyed to the RFID tag.[104]
In 2004, school authorities in the Japanese city of Osaka made a decision to start chipping children's clothing, backpacks, and student IDs in a primary school.[105] Later, in 2007, a school in Doncaster, England, piloted a monitoring system designed to keep tabs on pupils by tracking radio chips in their uniforms.[106][when?] St Charles Sixth Form College in west London, England, starting in 2008, uses an RFID card system to check in and out of the main gate, to both track attendance and prevent unauthorized entrance. Similarly, Whitcliffe Mount School in Cleckheaton, England, uses RFID to track pupils and staff in and out of the building via a specially designed card. In the Philippines, during 2012, some schools already[when?] use RFID in IDs for borrowing books.[107][unreliable source?] Gates in those particular schools also have RFID scanners for buying items at school shops and canteens. RFID is also used in school libraries, and to sign in and out for student and teacher attendance.[99]
RFID for timing races began in the early 1990s with pigeon racing, introduced by the company Deister Electronics in Germany. RFID can provide race start and end timings for individuals in large races where it is impossible to get accurate stopwatch readings for every entrant.[citation needed]
In races using RFID, racers wear tags that are read by antennas placed alongside the track or on mats across the track. UHF tags provide accurate readings with specially designed antennas. Rush error,[clarification needed] lap count errors and accidents at race start are avoided, as anyone can start and finish at any time without being in a batch mode.[clarification needed]
The design of the chip and of the antenna controls the range from which it can be read. Short-range compact chips are twist tied to the shoe, or strapped to the ankle with hook-and-loop fasteners. The chips must be about 400 mm from the mat, therefore giving very good temporal resolution. Alternatively, a chip plus a very large (125 mm square) antenna can be incorporated into the bib number worn on the athlete's chest at a height of about 1.25 m (4.1 ft).[citation needed]
Passive and active RFID systems are used in off-road events such as Orienteering, Enduro and Hare and Hounds racing. Riders have a transponder on their person, normally on their arm. When they complete a lap they swipe or touch the receiver, which is connected to a computer, to log their lap time.[citation needed]
RFID is being[when?] adopted by many recruitment agencies which have a PET (physical endurance test) as their qualifying procedure, especially in cases where candidate volumes may run into millions (Indian Railway recruitment cells, police and power sector).
A number of ski resorts have adopted RFID tags to provide skiers hands-free access to ski lifts. Skiers do not have to take their passes out of their pockets. Ski jackets have a left pocket into which the chip+card fits. This nearly contacts the sensor unit on the left of the turnstile as the skier pushes through to the lift. These systems were based on high frequency (HF) at 13.56 MHz. The bulk of ski areas in Europe, from Verbier to Chamonix, use these systems.[108][109][110]
The NFL in the United States equips players with RFID chips that measure speed, distance and direction traveled by each player in real time. Currently, cameras stay focused on the quarterback; however, numerous plays are happening simultaneously on the field. The RFID chip will provide new insight into these simultaneous plays.[111] The chip triangulates the player's position within six inches and will be used to digitally broadcast replays. The RFID chip will make individual player information accessible to the public. The data will be available via the NFL 2015 app.[112] The RFID chips are manufactured by Zebra Technologies, which tested the RFID chip in 18 stadiums last year[when?] to track vector data.[113]
RFID tags are often a complement, but not a substitute, for Universal Product Code (UPC) or European Article Number (EAN) barcodes. They may never completely replace barcodes, due in part to their higher cost and the advantage of multiple data sources on the same object. Also, unlike RFID labels, barcodes can be generated and distributed electronically by e-mail or mobile phone, for printing or display by the recipient. An example is airline boarding passes. The new EPC, along with several other schemes, is widely available at reasonable cost.
The storage of data associated with tracking items will require many terabytes. Filtering and categorizing RFID data is needed to create useful information. It is likely that goods will be tracked by the pallet using RFID tags, and at package level with UPC or EAN from unique barcodes.
A unique identity is a mandatory requirement for RFID tags, regardless of the particular numbering scheme chosen. RFID tag data capacity is large enough that each individual tag can have a unique code, while current barcodes are limited to a single type code for a particular product. The uniqueness of RFID tags means that a product may be tracked as it moves from location to location while being delivered to a person. This may help to combat theft and other forms of product loss. The tracing of products is an important feature that is well supported with RFID tags containing a unique identity of the tag and the serial number of the object. This may help companies cope with quality deficiencies and resulting recall campaigns, but it also contributes to concern about the tracking and profiling of persons after the sale.
Since around 2007, there has been increasing development in the use of RFID[when?] in the waste management industry. RFID tags are installed on waste collection carts, linking carts to the owner's account for easy billing and service verification.[114] The tag is embedded into a garbage and recycle container, and the RFID reader is affixed to the garbage and recycle trucks.[115] RFID also measures a customer's set-out rate and provides insight as to the number of carts serviced by each waste collection vehicle. This RFID process replaces traditional "pay as you throw" (PAYT) municipal solid waste usage-pricing models.
Active RFID tags have the potential to function as low-cost remote sensors that broadcast telemetry back to a base station. Applications of tagometry data could include sensing of road conditions by implanted beacons, weather reports, and noise level monitoring.[116]
Passive RFID tags can also report sensor data. For example, the Wireless Identification and Sensing Platform is a passive tag that reports temperature, acceleration and capacitance to commercial Gen2 RFID readers.
It is possible that active or battery-assisted passive (BAP) RFID tags could broadcast a signal to an in-store receiver to determine whether the RFID tag – and by extension, the product it is attached to – is in the store.[citation needed]
To avoid injuries to humans and animals, RF transmission needs to be controlled.[117] A number of organizations have set standards for RFID, including the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), ASTM International, the DASH7 Alliance and EPCglobal.[118]
Several specific industries have also set guidelines, including the Financial Services Technology Consortium (FSTC) for tracking IT assets with RFID, the Computing Technology Industry Association (CompTIA) for certifying RFID engineers, and the International Air Transport Association (IATA) for luggage in airports.[citation needed]
Every country can set its own rules for frequency allocation for RFID tags, and not all radio bands are available in all countries. These frequencies are known as the ISM bands (Industrial, Scientific and Medical bands). The return signal of the tag may still cause interference for other radio users.[citation needed]
In North America, UHF can be used unlicensed for 902–928 MHz (±13 MHz from the 915 MHz center frequency), but restrictions exist for transmission power.[citation needed] In Europe, RFID and other low-power radio applications are regulated by ETSI recommendations EN 300 220 and EN 302 208, and ERO recommendation 70 03, allowing RFID operation with somewhat complex band restrictions from 865–868 MHz.[citation needed] Readers are required to monitor a channel before transmitting ("Listen Before Talk"); this requirement has led to some restrictions on performance, the resolution of which is a subject of current[when?] research. The North American UHF standard is not accepted in France, as it interferes with its military bands.[citation needed] On July 25, 2012, Japan changed its UHF band to 920 MHz, more closely matching the United States' 915 MHz band, establishing an international standard environment for RFID.[citation needed]
In some countries, a site license is needed, which needs to be applied for at the local authorities, and can be revoked.[citation needed]
As of 31 October 2014, regulations are in place in 78 countries representing approximately 96.5% of the world's GDP, and work on regulations was in progress in three countries representing approximately 1% of the world's GDP.[119]
Standards that have been made regarding RFID include:
In order to ensure global interoperability of products, several organizations have set up additional standards for RFID testing. These standards include conformance, performance and interoperability tests.[citation needed]
EPC Gen2 is short for EPCglobal UHF Class 1 Generation 2.
EPCglobal, a joint venture between GS1 and GS1 US, is working on international standards for the use of mostly passive RFID and the Electronic Product Code (EPC) in the identification of many items in the supply chain for companies worldwide.
One of the missions of EPCglobal was to simplify the Babel of protocols prevalent in the RFID world in the 1990s. Two tag air interfaces (the protocol for exchanging information between a tag and a reader) were defined (but not ratified) by EPCglobal prior to 2003. These protocols, commonly known as Class 0 and Class 1, saw significant commercial implementation in 2002–2005.[121]
In 2004, the Hardware Action Group created a new protocol, the Class 1 Generation 2 interface, which addressed a number of problems that had been experienced with Class 0 and Class 1 tags. The EPC Gen2 standard was approved in December 2004. This was approved after a contention from Intermec that the standard may infringe a number of their RFID-related patents. It was decided that the standard itself does not infringe their patents, making the standard royalty free.[122] The EPC Gen2 standard was adopted with minor modifications as ISO 18000-6C in 2006.[123]
In 2007, the lowest cost of Gen2 EPC inlay was offered by the now-defunct company SmartCode, at a price of $0.05 apiece in volumes of 100 million or more.[124]
Not every successful reading of a tag (an observation) is useful for business purposes. A large amount of data may be generated that is not useful for managing inventory or other applications. For example, a customer moving a product from one shelf to another, or a pallet load of articles that passes several readers while being moved in a warehouse, are events that do not produce data that are meaningful to an inventory control system.[125]
Event filtering is required to reduce this data inflow to a meaningful depiction of moving goods passing a threshold. Various concepts[example needed] have been designed, mainly offered as middleware performing the filtering from noisy and redundant raw data to significant processed data.[citation needed]
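The filtering such middleware performs can be sketched minimally. The example below (all names hypothetical) collapses repeated raw reads of a tag at the same reader into a single "arrival" event, so only movements between readers reach the inventory system:

```python
class ReadFilter:
    """Collapses redundant raw RFID reads into per-location 'arrival' events.

    A tag read repeatedly by the same reader produces one event; a new
    event is emitted only when the tag appears at a different reader.
    """

    def __init__(self):
        self._last_location = {}

    def process(self, tag_id, reader_id):
        if self._last_location.get(tag_id) == reader_id:
            return None                           # redundant read: drop it
        self._last_location[tag_id] = reader_id
        return (tag_id, "arrived", reader_id)

raw_reads = [
    ("tag-7", "dock-door"), ("tag-7", "dock-door"),  # same portal, read twice
    ("tag-7", "aisle-3"),                            # moved: meaningful event
]
f = ReadFilter()
events = [e for e in (f.process(t, r) for t, r in raw_reads) if e]
# events == [("tag-7", "arrived", "dock-door"), ("tag-7", "arrived", "aisle-3")]
```

Real middleware adds time windows, read-count thresholds and reader grouping on top of this basic deduplication, but the principle of turning noisy reads into discrete events is the same.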
The frequencies used for UHF RFID in the USA are, as of 2007, incompatible with those of Europe or Japan. Furthermore, no emerging standard has yet become as universal as the barcode.[126] To address international trade concerns, it is necessary to use a tag that is operational within all of the international frequency domains.
A primary RFID security concern is the illicit tracking of RFID tags. Tags, which are world-readable, pose a risk to both personal location privacy and corporate/military security. Such concerns have been raised with respect to the United States Department of Defense's recent[when?] adoption of RFID tags for supply chain management.[127] More generally, privacy organizations have expressed concerns in the context of ongoing efforts to embed electronic product code (EPC) RFID tags in general-use products. This is mostly because RFID tags can be read, and legitimate transactions with readers can be eavesdropped on, from non-trivial distances. RFID used in access control,[128] payment and eID (e-passport) systems operates at a shorter range than EPC RFID systems but is also vulnerable to skimming and eavesdropping, albeit at shorter distances.[129]
A second method of prevention is by using cryptography. Rolling codes and challenge–response authentication (CRA) are commonly used to foil monitor-repetition of the messages between the tag and reader, as any messages that have been recorded would prove to be unsuccessful on repeat transmission.[clarification needed] Rolling codes rely upon the tag's ID being changed after each interrogation, while CRA uses software to ask for a cryptographically coded response from the tag. The protocols used during CRA can be symmetric, or may use public key cryptography.[130]
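The challenge–response idea can be illustrated with a short symmetric-key sketch. Real low-cost passive tags generally lack the power for a full cryptographic hash (many can only afford much simpler schemes), so this shows the principle rather than any actual tag protocol; the key and function names are hypothetical:

```python
import hashlib
import hmac
import secrets

SHARED_KEY = b"per-tag secret provisioned at manufacture"  # hypothetical key

def reader_challenge():
    # A fresh random nonce per interrogation defeats replay of old responses.
    return secrets.token_bytes(8)

def tag_response(challenge, key=SHARED_KEY):
    # The tag proves knowledge of the key without ever transmitting it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def reader_verify(challenge, response, key=SHARED_KEY):
    return hmac.compare_digest(tag_response(challenge, key), response)

c1 = reader_challenge()
r1 = tag_response(c1)
assert reader_verify(c1, r1)            # genuine tag answers correctly
c2 = reader_challenge()
assert not reader_verify(c2, r1)        # a replayed response fails the new challenge
```

An eavesdropper who records the (challenge, response) pair learns nothing reusable, because the next interrogation uses a different challenge.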
While a variety of secure protocols have been suggested for RFID tags, in order to support long read range at low cost, many RFID tags have barely enough power available to support very low-power and therefore simple security protocols such as cover-coding.[131]
Unauthorized reading of RFID tags presents a risk to privacy and to business secrecy.[132] Unauthorized readers can potentially use RFID information to identify or track packages, persons, carriers, or the contents of a package.[130] Several prototype systems are being developed to combat unauthorized reading, including RFID signal interruption,[133] as well as the possibility of legislation, and 700 scientific papers have been published on this matter since 2002.[134] There are also concerns that the database structure of the Object Naming Service may be susceptible to infiltration, similar to denial-of-service attacks, after the EPCglobal Network ONS root servers were shown to be vulnerable.[135]
Microchip-induced tumours have been noted during animal trials.[136][137]
In an effort to prevent the passive "skimming" of RFID-enabled cards or passports, the U.S. General Services Administration (GSA) issued a set of test procedures for evaluating electromagnetically opaque sleeves.[138] For shielding products to be in compliance with FIPS-201 guidelines, they must meet or exceed this published standard; compliant products are listed on the website of the U.S. CIO's FIPS-201 Evaluation Program.[139] The United States government requires that when new ID cards are issued, they must be delivered with an approved shielding sleeve or holder.[140] Although many wallets and passport holders are advertised to protect personal information, there is little evidence that RFID skimming is a serious threat; data encryption and use of EMV chips rather than RFID makes this sort of theft rare.[141][142]
There are contradictory opinions as to whether aluminum can prevent reading of RFID chips. Some people claim that aluminum shielding, essentially creating a Faraday cage, does work.[143] Others claim that simply wrapping an RFID card in aluminum foil only makes transmission more difficult and is not completely effective at preventing it.[144]
Shielding effectiveness depends on the frequency being used. Low-frequency (LowFID) tags, like those used in implantable devices for humans and pets, are relatively resistant to shielding, although thick metal foil will prevent most reads. High-frequency (HighFID) tags (13.56 MHz smart cards and access badges) are sensitive to shielding and are difficult to read when within a few centimetres of a metal surface. UHF (Ultra-HighFID) tags (pallets and cartons) are difficult to read when placed within a few millimetres of a metal surface, although their read range is actually increased when they are spaced 2–4 cm from a metal surface, due to positive reinforcement of the reflected wave and the incident wave at the tag.[145]
The use of RFID has engendered considerable controversy, and some consumer privacy advocates have initiated product boycotts. Consumer privacy experts Katherine Albrecht and Liz McIntyre are two prominent critics of the "spychip" technology. The two main privacy concerns regarding RFID are as follows:[citation needed]
Most concerns revolve around the fact that RFID tags affixed to products remain functional even after the products have been purchased and taken home; thus, they may be used for surveillance and other purposes unrelated to their supply chain inventory functions.[146]
The RFID Network responded to these fears in the first episode of their syndicated cable TV series, saying that they are unfounded and letting RF engineers demonstrate how RFID works.[147] They provided images of RF engineers driving an RFID-enabled van around a building and trying to take an inventory of items inside. They also discussed satellite tracking of a passive RFID tag.
The concerns raised may be addressed in part by use of the Clipped Tag, an RFID tag designed to increase privacy for the purchaser of an item, suggested by IBM researchers Paul Moskowitz and Guenter Karjoth. After the point of sale, a person may tear off a portion of the tag. This allows the transformation of a long-range tag into a proximity tag that still may be read, but only at short range – less than a few inches or centimeters. The modification of the tag may be confirmed visually. The tag may still be used later for returns, recalls, or recycling.
However, read range is a function of both the reader and the tag itself. Improvements in technology may increase read ranges for tags. Tags may be read at longer ranges than they are designed for by increasing reader power. The limit on read distance then becomes the signal-to-noise ratio of the signal reflected from the tag back to the reader. Researchers at two security conferences have demonstrated that passive Ultra-HighFID tags normally read at ranges of up to 30 feet can be read at ranges of 50 to 69 feet using suitable equipment.[148][149]
In January 2004, privacy advocates from CASPIAN and the German privacy group FoeBuD were invited to the METRO Future Store in Germany, where an RFID pilot project was implemented. It was uncovered by accident that METRO "Payback" customer loyalty cards contained RFID tags with customer IDs, a fact that was disclosed neither to customers receiving the cards, nor to this group of privacy advocates. This happened despite assurances by METRO that no customer identification data was tracked and all RFID usage was clearly disclosed.[150]
During the UN World Summit on the Information Society (WSIS) in November 2005, Richard Stallman, the founder of the free software movement, protested the use of RFID security cards by covering his card with aluminum foil.[151]
In 2004–2005, the Federal Trade Commission staff conducted a workshop and review of RFID privacy concerns and issued a report recommending best practices.[152]
RFID was one of the main topics of the 2006 Chaos Communication Congress (organized by the Chaos Computer Club in Berlin) and triggered a large press debate. Topics included electronic passports, Mifare cryptography and the tickets for the FIFA World Cup 2006. Talks showed how the first real-world mass application of RFID at the 2006 FIFA Football World Cup worked. The group monochrom staged a "Hack RFID" song.[153]
Some individuals have grown to fear the loss of rights due to RFID human implantation.
By early 2007, Chris Paget of San Francisco, California, showed that RFID information could be pulled from a US passport card by using only $250 worth of equipment. This suggests that, with the information captured, it would be possible to clone such cards.[154]
According to ZDNet, critics believe that RFID will lead to tracking individuals' every movement and will be an invasion of privacy.[155] In the book SpyChips: How Major Corporations and Government Plan to Track Your Every Move by Katherine Albrecht and Liz McIntyre, one is encouraged to "imagine a world of no privacy. Where your every purchase is monitored and recorded in a database and your every belonging is numbered. Where someone many states away or perhaps in another country has a record of everything you have ever bought. What's more, they can be tracked and monitored remotely".[156]
According to an RSA Laboratories FAQ, RFID tags can be destroyed by a standard microwave oven;[157] however, some types of RFID tags, particularly those constructed to radiate using large metallic antennas (in particular RF tags and EPC tags), may catch fire if subjected to this process for too long (as would any metallic item inside a microwave oven). This simple method cannot safely be used to deactivate RFID features in electronic devices, or those implanted in living tissue, because of the risk of damage to the "host". However, the time required is extremely short (a second or two of radiation) and the method works in many other non-electronic and inanimate items, long before heat or fire become of concern.[158]
Some RFID tags implement a "kill command" mechanism to permanently and irreversibly disable them. This mechanism can be applied if the chip itself is trusted or the mechanism is known by the person who wants to "kill" the tag.
UHF RFID tags that comply with the EPC Gen2 Class 1 standard usually support this mechanism, while protecting the chip from being killed without a password.[159] Guessing or cracking the 32-bit password needed to kill a tag would not be difficult for a determined attacker.[160]
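As a back-of-the-envelope check on that claim, a 32-bit keyspace is small by cryptographic standards. The guess rate below is an assumed illustrative figure, not one taken from the standard, but even a modest over-the-air rate exhausts the space in weeks:

```python
# Worst-case time to exhaust a 32-bit kill-password keyspace by brute force.
keyspace = 2 ** 32                 # 4,294,967,296 possible passwords
guesses_per_second = 1000          # assumed interrogation rate (illustrative)
worst_case_seconds = keyspace / guesses_per_second
print(f"{worst_case_seconds / 86400:.0f} days")   # ~50 days worst case
```

At higher rates, or with the search parallelized across readers, the time shrinks proportionally, which is why a 32-bit password offers little protection against a determined attacker.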
https://en.wikipedia.org/wiki/Radio-frequency_identification
Distributional learning theory, or learning of probability distributions, is a framework in computational learning theory. It was proposed by Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, Robert Schapire and Linda Sellie in 1994,[1] and was inspired by the PAC framework introduced by Leslie Valiant.[2]
In this framework the input is a number of samples drawn from a distribution that belongs to a specific class of distributions. The goal is to find an efficient algorithm that, based on these samples, determines with high probability the distribution from which the samples have been drawn. Because of its generality, this framework has been used in a large variety of fields, such as machine learning, approximation algorithms, applied probability and statistics.
This article explains the basic definitions, tools and results in this framework from the theory of computation point of view.
Let X be the support of the distributions of interest. As in the original work of Kearns et al.,[1] if X is finite it can be assumed without loss of generality that X = {0,1}^n, where n is the number of bits that have to be used in order to represent any y ∈ X. The focus is on probability distributions over X.
There are two possible representations of a probability distribution D over X: a generator for D, which takes as input a string of uniformly random bits and outputs a sample drawn according to D; and an evaluator for D, which takes as input a point y ∈ X and outputs its probability D(y).
A distribution D is said to have a polynomial generator (respectively, evaluator) if its generator (respectively, evaluator) exists and can be computed in polynomial time.
Let C_X be a class of distributions over X; that is, C_X is a set such that every D ∈ C_X is a probability distribution with support X. C_X may also be written as C for simplicity.
Before defining learnability, it is necessary to define good approximations of a distribution D. There are several ways to measure the distance between two distributions. The three most common choices are the Kullback-Leibler divergence, the total variation distance, and the Kolmogorov distance.
The strongest of these distances is the Kullback-Leibler divergence and the weakest is the Kolmogorov distance. This means that for any pair of distributions D, D', the Kolmogorov distance is at most the total variation distance, which in turn is bounded in terms of the Kullback-Leibler divergence (by Pinsker's inequality, d_TV(D, D') ≤ sqrt(KL(D ∥ D') / 2)).
Therefore, for example, if D and D' are close with respect to Kullback-Leibler divergence, then they are also close with respect to all the other distances.
The definitions that follow hold for all of these distances, and the symbol d(D, D') denotes the distance between the distributions D and D' using any one of them. Although learnability of a class of distributions can be defined using any of these distances, applications usually refer to a specific distance.
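For small discrete distributions these distances are easy to compute directly. The sketch below implements all three for distributions given as dicts and checks the weakest-to-strongest ordering numerically (the Pinsker bound is the standard one, not a result specific to this framework):

```python
import math

def tv_distance(p, q):
    """Total variation distance between two discrete distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q); assumes q(x) > 0 wherever p(x) > 0."""
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)

def kolmogorov_distance(p, q):
    """Maximum absolute difference between the two cumulative distribution functions."""
    best, cp, cq = 0.0, 0.0, 0.0
    for x in sorted(set(p) | set(q)):
        cp += p.get(x, 0.0)
        cq += q.get(x, 0.0)
        best = max(best, abs(cp - cq))
    return best

p = {0: 0.5, 1: 0.5}
q = {0: 0.4, 1: 0.6}
# Ordering: Kolmogorov <= total variation <= sqrt(KL / 2)  (Pinsker's inequality)
assert kolmogorov_distance(p, q) <= tv_distance(p, q) + 1e-12
assert tv_distance(p, q) <= math.sqrt(kl_divergence(p, q) / 2) + 1e-12
```

This makes concrete why closeness in Kullback-Leibler divergence implies closeness in the other two distances, but not conversely.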
The basic input used in order to learn a distribution is a number of samples drawn from this distribution. From the computational point of view, the assumption is that such a sample is given in a constant amount of time, as if there were access to an oracle GEN(D) that returns a sample from the distribution D. Sometimes the interest is, apart from measuring the time complexity, to measure the number of samples that have to be used in order to learn a specific distribution D in a class of distributions C. This quantity is called the sample complexity of the learning algorithm.
To make the problem of distribution learning clearer, consider the problem of supervised learning as defined in.[3] In this framework of statistical learning theory a training set S = {(x_1, y_1), …, (x_n, y_n)} is given, and the goal is to find a target function f : X → Y that minimizes some loss function, e.g. the square loss function. More formally, f = arg min_g ∫ V(y, g(x)) dρ(x, y), where V(·,·) is the loss function, e.g. V(y, z) = (y − z)², and ρ(x, y) is the probability distribution according to which the elements of the training set are sampled. If the conditional probability distribution ρ_x(y) is known, then the target function has the closed form f(x) = ∫_y y dρ_x(y). So the set S is a set of samples from the probability distribution ρ(x, y). The goal of distributional learning theory is to find ρ given S, which can then be used to find the target function f.
Definition of learnability
A class of distributions C is called efficiently learnable if for every ε > 0 and 0 < δ ≤ 1, given access to GEN(D) for an unknown distribution D ∈ C, there exists a polynomial-time algorithm A, called a learning algorithm of C, that outputs a generator or an evaluator of a distribution D′ such that Pr[d(D, D′) ≤ ε] ≥ 1 − δ.
If we know that D′ ∈ C, then A is called a proper learning algorithm; otherwise it is called an improper learning algorithm.
In some settings the class of distributions C consists of well-known distributions that can be described by a set of parameters. For instance, C could be the class of all Gaussian distributions N(μ, σ²). In this case the algorithm A should be able to estimate the parameters μ, σ, and A is called a parameter learning algorithm.
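For a concrete instance of parameter learning (a minimal sketch; the hidden values μ = 3, σ = 2 are illustrative, not from the source), estimating the parameters of a single Gaussian from samples of GEN(D) reduces to classical point estimation:

```python
import random
import statistics

random.seed(0)

# GEN(D) for an unknown D = N(mu, sigma^2); mu = 3.0 and sigma = 2.0 play the
# role of the hidden parameters that the learner must recover.
def gen():
    return random.gauss(3.0, 2.0)

samples = [gen() for _ in range(100_000)]
mu_hat = statistics.fmean(samples)      # estimator for mu (sample mean)
sigma_hat = statistics.stdev(samples)   # estimator for sigma (sample std dev)
print(round(mu_hat, 2), round(sigma_hat, 2))
```

With 100,000 samples both estimates land very close to the hidden parameters, in line with standard √n convergence of these estimators.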
Parameter learning for simple distributions is a very well studied field, known as statistical estimation, and there is a long bibliography of estimators for different kinds of simple, known distributions. Distribution learning theory, by contrast, deals with learning classes of distributions that have more complicated descriptions.
In their seminal work, Kearns et al. deal with the case where A is described in terms of a finite, polynomial-sized circuit, and they proved results for some specific classes of distributions.[1]
A very common technique for finding a learning algorithm for a class of distributions C is to first find a small ε-cover of C.
Definition
A set C_ε is called an ε-cover of C if for every D ∈ C there is a D′ ∈ C_ε such that d(D, D′) ≤ ε. An ε-cover is small if it has polynomial size with respect to the parameters that describe D.
Once there is an efficient procedure that, for every ε > 0, finds a small ε-cover C_ε of C, the only remaining task is to select from C_ε the distribution D′ ∈ C_ε that is closest to the distribution D ∈ C that has to be learned.
The problem is that, given D′, D″ ∈ C_ε, it is not trivial to compare d(D, D′) and d(D, D″) in order to decide which is closer to D, because D is unknown. Therefore the samples from D have to be used for these comparisons, and the result of a comparison always has some probability of error. The task is thus similar to finding the minimum of a set of elements using noisy comparisons. There are many classical algorithms for achieving this goal. The most recent one, which achieves the best guarantees, was proposed by Daskalakis and Kamath.[4] This algorithm sets up a fast tournament between the elements of C_ε whose winner D* is ε-close to D (i.e. d(D*, D) ≤ ε) with probability at least 1 − δ. To do so, the algorithm uses O(log N / ε²) samples from D and runs in O(N log N / ε²) time, where N = |C_ε|.
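A minimal sketch of selecting from a cover (this is a crude baseline with invented support and candidates, not the Daskalakis–Kamath tournament itself): estimate the empirical distribution of the samples once, then pick the candidate in C_ε closest to it in total variation distance.

```python
import random

def empirical(samples, support):
    """Empirical pmf of the samples over the given finite support."""
    n = len(samples)
    return {x: samples.count(x) / n for x in support}

def tv(p, q):
    """Total variation distance between two pmfs on the same support."""
    return 0.5 * sum(abs(p[x] - q[x]) for x in p)

random.seed(1)
support = [0, 1, 2]
D = {0: 0.5, 1: 0.3, 2: 0.2}       # unknown target distribution (illustrative)
cover = [                           # a tiny epsilon-cover (illustrative)
    {0: 0.6, 1: 0.2, 2: 0.2},
    {0: 0.5, 1: 0.3, 2: 0.2},
    {0: 0.2, 1: 0.4, 2: 0.4},
]

# Samples from GEN(D).
samples = random.choices(support, weights=[D[x] for x in support], k=20_000)
p_hat = empirical(samples, support)

# Select the cover element closest to the empirical distribution.
best = min(cover, key=lambda q: tv(p_hat, q))
print(best)
```

The tournament of Daskalakis and Kamath replaces this single global comparison with pairwise noisy comparisons, which is what gives the O(log N / ε²) sample bound.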
Learning of simple, well-known distributions is a well-studied field, and many estimators are available. A more complicated class of distributions is the class of distributions of sums of variables that each follow simple distributions. These learning procedures have a close relation with limit theorems such as the central limit theorem, because they tend to examine the same object as the sum tends to an infinite sum. Two recent results described here concern learning Poisson binomial distributions and learning sums of independent integer random variables. All the results below hold with the total variation distance as the distance measure.
Consider n independent Bernoulli random variables X₁, …, Xₙ with probabilities of success p₁, …, pₙ. A Poisson binomial distribution of order n is the distribution of the sum X = Σᵢ Xᵢ. For learning the class PBD = {D : D is a Poisson binomial distribution}, the first of the following results deals with improper learning of PBD and the second with proper learning of PBD.[5]
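A Poisson binomial distribution is easy to sample given its parameter vector; the following sketch (the success probabilities are illustrative) draws from a PBD and tabulates the empirical pmf:

```python
import random
from collections import Counter

random.seed(0)

def sample_pbd(ps):
    """One draw X = sum_i X_i of independent Bernoulli(p_i) variables."""
    return sum(random.random() < p for p in ps)

ps = [0.1, 0.5, 0.9]                       # illustrative success probabilities
draws = [sample_pbd(ps) for _ in range(50_000)]
pmf = {k: v / len(draws) for k, v in sorted(Counter(draws).items())}
print(pmf)  # empirical pmf on {0, 1, 2, 3}; the mean should be near 0.1+0.5+0.9 = 1.5
```

Even though the distribution is described by n parameters, the theorems below show that learning it to accuracy ε needs a number of samples independent of n.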
Theorem
Let D ∈ PBD. There is an algorithm which, given n, ε > 0, 0 < δ ≤ 1 and access to GEN(D), finds a D′ such that Pr[d(D, D′) ≤ ε] ≥ 1 − δ. The sample complexity of this algorithm is Õ((1/ε³) log(1/δ)) and the running time is Õ((1/ε³) log n log²(1/δ)).
Theorem
Let D ∈ PBD. There is an algorithm which, given n, ε > 0, 0 < δ ≤ 1 and access to GEN(D), finds a D′ ∈ PBD such that Pr[d(D, D′) ≤ ε] ≥ 1 − δ. The sample complexity of this algorithm is Õ((1/ε²) log(1/δ)) and the running time is (1/ε)^{O(log²(1/ε))} · Õ(log n log(1/δ)).
A notable feature of the above results is that the sample complexity of the learning algorithm doesn't depend on n, although the description of D is linear in n. The second result is also almost optimal with respect to sample complexity, because there is a matching lower bound of Ω(1/ε²).
The proof relies on a small ε-cover of PBD produced by Daskalakis and Papadimitriou.[6]
Consider n independent random variables X₁, …, Xₙ, each of which follows an arbitrary distribution with support {0, 1, …, k − 1}. A k-sum of independent integer random variables of order n is the distribution of the sum X = Σᵢ Xᵢ. For learning the class

k-SIIRV = {D : D is a k-sum of independent integer random variables}

there is the following result.
Theorem
Let D ∈ k-SIIRV. There is an algorithm which, given n, ε > 0, 0 < δ ≤ 1 and access to GEN(D), finds a D′ such that Pr[d(D, D′) ≤ ε] ≥ 1 − δ. The sample complexity of this algorithm is poly(k/ε), and the running time is also poly(k/ε).
Again, the sample and time complexity do not depend on n. The same independence for the previous section follows by setting k = 2.[7]
Let X ∼ N(μ₁, Σ₁) and Y ∼ N(μ₂, Σ₂) be random variables. Define the random variable Z which takes the same value as X with probability w₁ and the same value as Y with probability w₂ = 1 − w₁. If F₁ is the density of X and F₂ is the density of Y, then the density of Z is F = w₁F₁ + w₂F₂. In this case Z is said to follow a mixture of Gaussians. Pearson[8] was the first to introduce the notion of a mixture of Gaussians, in his attempt to explain the probability distribution from which he obtained the data that he wanted to analyze. After extensive calculations by hand, he fitted his data to a mixture of Gaussians. The learning task in this case is to determine the parameters of the mixture w₁, w₂, μ₁, μ₂, Σ₁, Σ₂.
The first attempt to solve this problem was by Dasgupta.[9] In this work Dasgupta assumes that the two means of the Gaussians are far enough from each other, i.e. that there is a lower bound on the distance ‖μ₁ − μ₂‖. Using this assumption, Dasgupta, and many researchers after him, were able to learn the parameters of the mixture. The learning procedure starts by clustering the samples into two different clusters minimizing some metric. Using the assumption that the means are far away from each other, with high probability the samples in the first cluster correspond to samples from the first Gaussian and the samples in the second cluster to samples from the second. Once the samples are partitioned, μᵢ and Σᵢ can be computed by simple statistical estimators, and wᵢ by comparing the sizes of the clusters.
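In one dimension with well-separated means the procedure can be sketched as follows (a deliberate simplification of the clustering step, with illustrative mixture parameters w₁ = 0.3, μ₁ = −10, μ₂ = 10, σ₁ = σ₂ = 1):

```python
import random
import statistics

random.seed(2)

# GEN(D) for a mixture of N(-10, 1) with weight 0.3 and N(10, 1) with weight 0.7.
def gen():
    return random.gauss(-10, 1) if random.random() < 0.3 else random.gauss(10, 1)

samples = [gen() for _ in range(20_000)]

# Crude clustering: split at the midpoint of the extreme samples.  Under the
# separation assumption, this assigns each sample to its own Gaussian w.h.p.
threshold = (min(samples) + max(samples)) / 2
c1 = [x for x in samples if x < threshold]
c2 = [x for x in samples if x >= threshold]

# Per-cluster estimators for the parameters; weights from cluster sizes.
w1, w2 = len(c1) / len(samples), len(c2) / len(samples)
mu1, mu2 = statistics.fmean(c1), statistics.fmean(c2)
s1, s2 = statistics.stdev(c1), statistics.stdev(c2)
print(round(w1, 2), round(mu1, 1), round(mu2, 1))
```

The separation assumption is what makes the naive split reliable; without it, samples near the threshold could come from either component and a different approach (such as the one in the second theorem below) is needed.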
If GM is the set of all mixtures of two Gaussians, the above procedure yields theorems like the following.
Theorem[9]
Let D ∈ GM with ‖μ₁ − μ₂‖ ≥ c·√(n · max(λ_max(Σ₁), λ_max(Σ₂))), where c > 1/2 and λ_max(A) is the largest eigenvalue of A. Then there is an algorithm which, given ε > 0, 0 < δ ≤ 1 and access to GEN(D), finds an approximation w′ᵢ, μ′ᵢ, Σ′ᵢ of the parameters such that Pr[‖wᵢ − w′ᵢ‖ ≤ ε] ≥ 1 − δ (and respectively for μᵢ and Σᵢ). The sample complexity of this algorithm is M = 2^{O(log²(1/(εδ)))} and the running time is O(M²d + Mdn).
The above result can also be generalized to k-mixtures of Gaussians.[9]
For the case of a mixture of two Gaussians there are learning results without the assumption on the distance between the means, such as the following, which uses the total variation distance as the distance measure.
Theorem[10]
Let F ∈ GM. There is an algorithm which, given ε > 0, 0 < δ ≤ 1 and access to GEN(D), finds w′ᵢ, μ′ᵢ, Σ′ᵢ such that if F′ = w′₁F′₁ + w′₂F′₂, where F′ᵢ = N(μ′ᵢ, Σ′ᵢ), then Pr[d(F, F′) ≤ ε] ≥ 1 − δ. The sample complexity and the running time of this algorithm are poly(n, 1/ε, 1/δ, 1/w₁, 1/w₂, 1/d(F₁, F₂)).
The distance between F₁ and F₂ does not affect the quality of the result of the algorithm, only the sample complexity and the running time.[9][10]
|
https://en.wikipedia.org/wiki/Distribution_learning_theory
|
The following is a list of Mac software – notable computer applications for current macOS operating systems.
For software designed for the Classic Mac OS, see List of old Macintosh software.
This section lists bitmap graphics editors and vector graphics editors.
macOS includes the built-in XProtect antimalware as part of Gatekeeper.
The software listed in this section is antivirus software and malware removal software.
This section lists software for file archiving, backup and restore, data compression and data recovery.
|
https://en.wikipedia.org/wiki/List_of_Mac_software
|
In public-key cryptography, Edwards-curve Digital Signature Algorithm (EdDSA) is a digital signature scheme using a variant of Schnorr signature based on twisted Edwards curves.[1] It is designed to be faster than existing digital signature schemes without sacrificing security. It was developed by a team including Daniel J. Bernstein, Niels Duif, Tanja Lange, Peter Schwabe, and Bo-Yin Yang.[2] The reference implementation is public-domain software.[3]
The following is a simplified description of EdDSA, ignoring details of encoding integers and curve points as bit strings; the full details are in the papers and RFC.[4][2][1]
An EdDSA signature scheme is a choice of parameters (field, curve, base point, group order, cofactor, and hash function).[4]: 1–2 [2]: 5–6 [1]: 5–7
These parameters are common to all users of the EdDSA signature scheme. The security of the EdDSA signature scheme depends critically on the choices of parameters, except for the arbitrary choice of base point – for example, Pollard's rho algorithm for logarithms is expected to take approximately √(ℓπ/4) curve additions before it can compute a discrete logarithm,[5] so ℓ must be large enough for this to be infeasible, and is typically taken to exceed 2²⁰⁰.[6] The choice of ℓ is limited by the choice of q, since by Hasse's theorem, #E(F_q) = 2^c ℓ cannot differ from q + 1 by more than 2√q. The hash function H is normally modelled as a random oracle in formal analyses of EdDSA's security.
Within an EdDSA signature scheme, the public key is a curve point A = sB, where the secret scalar s is derived from the private key, and a signature on a message M is a pair (R, S), where R = rB for a nonce r and S ≡ r + H(R ∥ A ∥ M)·s (mod ℓ). The signature is accepted if

2^c·S·B = 2^c·R + 2^c·H(R ∥ A ∥ M)·A.

This verification equation holds for a correctly generated signature because

2^c·S·B = 2^c·(r + H(R ∥ A ∥ M)·s)·B
        = 2^c·r·B + 2^c·H(R ∥ A ∥ M)·s·B
        = 2^c·R + 2^c·H(R ∥ A ∥ M)·A.
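The verification identity can be exercised with a toy Schnorr-style analogue over a multiplicative group of integers mod p instead of a curve (so group "addition" becomes modular exponentiation, the cofactor is 1, and the parameters are far too small to be secure – this is purely illustrative, not real EdDSA):

```python
import hashlib

# Toy group: the subgroup of prime order l generated by B in Z_p^* (NOT secure).
p = 1019        # prime; p - 1 = 2 * 509
l = 509         # prime order of the subgroup
B = 4           # 4 = 2^2 generates the order-509 subgroup

def H(*parts):
    """Hash arbitrary parts to a scalar mod l."""
    h = hashlib.sha512("|".join(str(x) for x in parts).encode()).digest()
    return int.from_bytes(h, "big") % l

def keygen(seed):
    s = H("key", seed)            # secret scalar derived from the private seed
    return s, pow(B, s, p)        # (secret scalar, public key A = B^s)

def sign(s, seed, M):
    r = H("nonce", seed, M)       # deterministic nonce, as in EdDSA
    R = pow(B, r, p)
    S = (r + H(R, pow(B, s, p), M) * s) % l
    return R, S

def verify(A, M, sig):
    R, S = sig
    # Multiplicative form of the identity S*B == R + H(R||A||M)*A.
    return pow(B, S, p) == (R * pow(A, H(R, A, M), p)) % p

s, A = keygen("my seed")
sig = sign(s, "my seed", "hello")
print(verify(A, "hello", sig))             # True
print(sig == sign(s, "my seed", "hello"))  # True: the nonce is deterministic
print(verify(A, "tampered", sig))          # almost surely False
```

The second print illustrates the deterministic-nonce property discussed below: signing the same message with the same key always yields the same signature, so no random number generator is needed after key generation.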
Ed25519 is the EdDSA signature scheme using SHA-512 (SHA-2) and an elliptic curve related to Curve25519[2] where

−x² + y² = 1 − (121665/121666)·x²y²,
The twisted Edwards curve E/F_q is known as edwards25519,[7][1] and is birationally equivalent to the Montgomery curve known as Curve25519.
The equivalence is[2][7][8]

x = (u/v)·√(−486664),  y = (u − 1)/(u + 1).
The original team has optimized Ed25519 for the x86-64 Nehalem/Westmere processor family. Verification can be performed in batches of 64 signatures for even greater throughput. Ed25519 is intended to provide attack resistance comparable to quality 128-bit symmetric ciphers.[9]
Public keys are 256 bits long and signatures are 512 bits long.[10]
Ed25519 is designed to avoid implementations that use branch conditions or array indices that depend on secret data,[2]: 2 [1]: 40 in order to mitigate side-channel attacks.
As with other discrete-log-based signature schemes, EdDSA uses a secret value called a nonce, unique to each signature. In the signature schemes DSA and ECDSA, this nonce is traditionally generated randomly for each signature – and if the random number generator is ever broken and predictable when making a signature, the signature can leak the private key, as happened with the Sony PlayStation 3 firmware update signing key.[11][12][13][14]
In contrast, EdDSA chooses the nonce deterministically as the hash of a part of the private key and the message. Thus, once a private key is generated, EdDSA has no further need for a random number generator in order to make signatures, and there is no danger that a broken random number generator used to make a signature will reveal the private key.[2]: 8
Note that there are two standardization efforts for EdDSA, one from IETF, an informational RFC 8032, and one from NIST as part of FIPS 186-5.[15] The differences between the standards have been analyzed,[16][17] and test vectors are available.[18]
Notable uses of Ed25519 include OpenSSH,[19] GnuPG[20] and various alternatives, and the signify tool by OpenBSD.[21] Usage of Ed25519 (and Ed448) in the SSH protocol has been standardized.[22] In 2023 the final version of the FIPS 186-5 standard included deterministic Ed25519 as an approved signature scheme.[15]
Ed448 is the EdDSA signature scheme defined in RFC 8032 using the hash function SHAKE256 and the elliptic curve edwards448, an (untwisted) Edwards curve related to Curve448 in RFC 7748.
Ed448 has also been approved in the final version of the FIPS 186-5 standard.[15]
|
https://en.wikipedia.org/wiki/EdDSA
|
In cryptography, MD5CRK was a volunteer computing effort (similar to distributed.net) launched by Jean-Luc Cooke and his company, CertainKey Cryptosystems, to demonstrate that the MD5 message digest algorithm is insecure by finding a collision – two messages that produce the same MD5 hash. The project went live on March 1, 2004, and ended on August 24, 2004, after researchers Xiaoyun Wang, Feng, Xuejia Lai, and Yu independently demonstrated a technique for generating collisions in MD5 using analytical methods.[1] CertainKey awarded a 10,000 Canadian dollar prize to Wang, Feng, Lai and Yu for their discovery.[2]
A technique called Floyd's cycle-finding algorithm was used to try to find a collision for MD5. The algorithm can be described by analogy with a random walk. Using the principle that any function with a finite number of possible outputs placed in a feedback loop will cycle, one can use a relatively small amount of memory to store outputs with particular structures and use them as "markers" to better detect when a marker has been "passed" before. These markers are called distinguished points; the point where two inputs produce the same output is called a collision point. MD5CRK considered any point whose first 32 bits were zeroes to be a distinguished point.
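The idea can be demonstrated on a deliberately weakened hash – MD5 truncated to 24 bits – where Floyd's tortoise-and-hare cycle finding locates two distinct inputs with the same output in a few thousand evaluations (the truncation length and seed are illustrative; the real project targeted the full 128 bits and used distinguished points to distribute the work):

```python
import hashlib

def f(x: bytes) -> bytes:
    """MD5 truncated to 3 bytes: a weak hash suitable for a feedback loop."""
    return hashlib.md5(x).digest()[:3]

def find_collision(seed: bytes):
    # Phase 1: Floyd's cycle detection on the sequence x, f(x), f(f(x)), ...
    tortoise, hare = f(seed), f(f(seed))
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(f(hare))
    # Phase 2: restart the tortoise from the seed; the step just before the
    # two pointers' images coincide yields two distinct inputs with equal f.
    tortoise = seed
    while f(tortoise) != f(hare):
        tortoise = f(tortoise)
        hare = f(hare)
    return tortoise, hare

a, b = find_collision(b"MD5CRK")
assert a != b and f(a) == f(b)
print(a.hex(), b.hex(), f(a).hex())
```

Because the seed is longer than the 3-byte outputs, the pre-cycle tail is guaranteed to be non-empty, so the two returned inputs always differ.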
The expected number of digest computations needed to find a collision is far smaller than 2^N, where N is the number of bits in the digest output. By the birthday problem, after collecting K function outputs the probability that all of them are distinct is 2^N! / ((2^N − K)! · 2^{NK}).

For this project, the probability of success after K MD5 computations can therefore be approximated by 1 − e^{−K(K−1)/2^{N+1}}.

The number of computations required to produce a collision in the 128-bit MD5 message digest function with 50% probability is thus about 1.17741 × 2^{N/2} = 1.17741 × 2⁶⁴.
To give some perspective, using Virginia Tech's System X with a maximum performance of 12.25 teraflops, it would take approximately 2.17 × 10¹⁹ / (12.25 × 10¹²) ≈ 1,770,000 seconds, or about 3 weeks. Alternatively, 6,000 commodity processors at 2 gigaflops each would take approximately the same amount of time.
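The arithmetic above can be checked directly (the machine speeds are those stated in the text; note 1.17741 ≈ √(2 ln 2)):

```python
N = 128                                # MD5 digest bits
expected = 1.17741 * 2 ** (N / 2)      # digest computations for a ~50% collision
print(f"{expected:.3e}")               # about 2.172e+19

seconds = expected / 12.25e12          # System X at 12.25 teraflops
print(round(seconds / 86_400))         # about 21 days, i.e. roughly 3 weeks

# 6,000 commodity machines at 2 gigaflops give comparable aggregate speed.
print(6_000 * 2e9)                     # 1.2e13, close to 12.25e12
```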
|
https://en.wikipedia.org/wiki/MD5CRK
|
A euphemism (/ˈjuːfəmɪzəm/ YOO-fə-miz-əm) is an innocuous word or expression used in place of one that is deemed offensive or suggests something unpleasant.[1] Some euphemisms are intended to amuse, while others use bland, inoffensive terms for concepts that the user wishes to downplay. Euphemisms may be used to mask profanity or refer to topics some consider taboo, such as mental or physical disability, sexual intercourse, bodily excretions, pain, violence, illness, or death, in a polite way.[2]
Euphemism comes from the Greek word euphemia (εὐφημία), which refers to the use of 'words of good omen'; it is a compound of eû (εὖ), meaning 'good, well', and phḗmē (φήμη), meaning 'prophetic speech; rumour, talk'.[3] Eupheme is a reference to the female Greek spirit of words of praise and positivity. The term euphemism itself was used as a euphemism by the ancient Greeks, with the meaning "to keep a holy silence" (speaking well by not speaking at all).[4]
Reasons for using euphemisms vary by context and intent. Commonly, euphemisms are used to avoid directly addressing subjects that might be deemed negative or embarrassing, such as death, sex, and excretory bodily functions. They may be created for innocent, well-intentioned purposes, or nefariously and cynically, with intent to deceive, confuse or deny. Euphemisms which emerge as dominant social euphemisms are often created to serve progressive causes.[5][6] The Oxford University Press's Dictionary of Euphemisms identifies "late" as an occasionally ambiguous term, whose nature as both a euphemism for dead and an adjective meaning overdue can cause confusion in listeners.[7]
Euphemisms are also used to mitigate, soften or downplay the gravity of large-scale injustices, war crimes, or other events that warrant a pattern of avoidance in official statements or documents. For instance, one reason for the comparative scarcity of written evidence documenting the exterminations at Auschwitz, relative to their sheer number, is "directives for the extermination process obscured in bureaucratic euphemisms".[8] Another example occurred during the 2022 Russian invasion of Ukraine, when Russian President Vladimir Putin, in his speech starting the invasion, called it a "special military operation".[9]
Euphemisms are sometimes used to lessen the opposition to a political move. For example, according to linguist Ghil'ad Zuckermann, Israeli Prime Minister Benjamin Netanyahu used the neutral Hebrew lexical item פעימות peimót (literally 'beatings (of the heart)'), rather than נסיגה nesigá ('withdrawal'), to refer to the stages in the Israeli withdrawal from the West Bank (see Wye River Memorandum), in order to lessen the opposition of right-wing Israelis to such a move.[10] Peimót was thus used as a euphemism for 'withdrawal'.[10]: 181
Euphemism may be used as a rhetorical strategy, in which case its goal is to change the valence of a description.[clarification needed]
Using a euphemism can in itself be controversial, as in the following examples:
The use of euphemism online is known as "algospeak" when used to evade automated online moderation techniques used on Meta's and TikTok's platforms.[13][14][15][16][17] Algospeak has been used in debate about the Israeli–Palestinian conflict.[18][19]
Phonetic euphemism is used to replace profanities and blasphemies, diminishing their intensity. Altering the pronunciation or spelling of a taboo word (such as profanity) to form a euphemism is known as taboo deformation, or a minced oath. Such modifications include:
Euphemisms formed from understatements include asleep for dead and drinking for consuming alcohol. "Tired and emotional" is a notorious British euphemism for "drunk", one of many recurring jokes popularized by the satirical magazine Private Eye; it has been used by MPs to avoid unparliamentary language.
Pleasant, positive, worthy, neutral, or nondescript terms are often substituted for explicit or unpleasant ones, with many substituted terms deliberately coined by sociopolitical movements, marketing, public relations, or advertising initiatives, including:
Some examples of Cockney rhyming slang may serve the same purpose: to call a person a berk sounds less offensive than to call a person a cunt, though berk is short for Berkeley Hunt,[20] which rhymes with cunt.[21]
A term with a softer connotation may be used though it shares the same meaning. For instance, screwed up is a euphemism for 'fucked up', and hook-up and laid are euphemisms for 'sexual intercourse'.
Expressions or words from a foreign language may be imported for use as euphemisms. For example, the French word enceinte was sometimes used instead of the English word pregnant;[22] abattoir for slaughterhouse, although in French the word retains its explicitly violent meaning 'a place for beating down', conveniently lost on non-French speakers. Entrepreneur for businessman adds glamour; douche (French for 'shower') for vaginal irrigation device; bidet ('little pony') for vessel for anal washing. Ironically, although in English physical "handicaps" are almost always described with euphemism, in French the English word handicap is used as a euphemism for their problematic words infirmité or invalidité.[23]
Periphrasis, or circumlocution, is one of the most common: to "speak around" a given word, implying it without saying it. Over time, circumlocutions become recognized as established euphemisms for particular words or ideas.
Bureaucracies frequently spawn euphemisms intentionally, as doublespeak expressions. For example, in the past, the US military used the term "sunshine units" for contamination by radioactive isotopes.[24] The United States Central Intelligence Agency refers to systematic torture as "enhanced interrogation techniques".[25] An effective death sentence in the Soviet Union during the Great Purge often used the clause "imprisonment without right to correspondence": the person sentenced would be shot soon after conviction.[26] As early as 1939, Nazi official Reinhard Heydrich used the term Sonderbehandlung ("special treatment") to mean summary execution of persons viewed as "disciplinary problems" by the Nazis even before commencing the systematic extermination of the Jews. Heinrich Himmler, aware that the word had come to be known to mean murder, replaced that euphemism with one in which Jews would be "guided" (to their deaths) through the slave-labor and extermination camps[27] after having been "evacuated" to their doom. Such was part of the formulation of Endlösung der Judenfrage (the "Final Solution to the Jewish Question"), which became known to the outside world during the Nuremberg Trials.[28]
Frequently, over time, euphemisms themselves become taboo words, through the linguistic process of semantic change known as pejoration, which University of Oregon linguist Sharon Henderson Taylor dubbed the "euphemism cycle" in 1974,[29] also frequently referred to as the "euphemism treadmill", as worded by Steven Pinker.[30] For instance, the place of human defecation is a needy candidate for a euphemism in all eras. Toilet is an 18th-century euphemism, replacing the older euphemism house-of-office, which in turn replaced the even older euphemisms privy-house and bog-house.[31] In the 20th century, where the old euphemisms lavatory (a place where one washes) and toilet (a place where one dresses[32]) had grown from widespread usage (e.g., in the United States) to being synonymous with the crude act they sought to deflect, they were sometimes replaced with bathroom (a place where one bathes), washroom (a place where one washes), or restroom (a place where one rests), or even by the extreme form powder room (a place where one applies facial cosmetics).[citation needed] The form water closet, often shortened to W.C., is a less deflective form.[citation needed] The word shit appears to have originally been a euphemism for defecation in Pre-Germanic, as the Proto-Indo-European root *sḱeyd-, from which it was derived, meant 'to cut off'.[33]
Another example in American English is the replacement of "colored people" with "Negro" (euphemism by foreign language), which itself came to be replaced by either "African American" or "Black".[34]Also in the United States the term "ethnic minorities" in the 2010s has been replaced by "people of color".[34]
Venereal disease, which associated shameful bacterial infection with a seemingly worthy ailment emanating from Venus, the goddess of love, soon lost its deflective force in the post-classical-education era and became "VD", which was in turn replaced by the three-letter initialism "STD" (sexually transmitted disease); later, "STD" was replaced by "STI" (sexually transmitted infection).[35]
Intellectually disabled people were originally described with words such as "morons" or "imbeciles", which then became commonly used insults. The medical diagnosis was changed to "mentally retarded", which morphed into the pejorative "retard", directed at those with intellectual disabilities. To avoid the negative connotations of their diagnoses, students who need accommodations because of such conditions are often labeled as "special needs" instead, although the words "special" and "SPED" (short for "special education") have long been schoolyard insults.[36][better source needed] As of August 2013, the Social Security Administration replaced the term "mental retardation" with "intellectual disability".[37] Since 2012, that change in terminology has been adopted by the National Institutes of Health and the medical industry at large.[38] There are numerous disability-related euphemisms that have negative connotations.
|
https://en.wikipedia.org/wiki/Euphemism_treadmill
|
Open Mind Common Sense (OMCS) is an artificial intelligence project based at the Massachusetts Institute of Technology (MIT) Media Lab whose goal is to build and utilize a large commonsense knowledge base from the contributions of many thousands of people across the Web. It was active from 1999 to 2016.
Since its founding, it has accumulated more than a million English facts from over 15,000 contributors, in addition to knowledge bases in other languages. Much of OMCS's software is built on three interconnected representations: the natural language corpus that people interact with directly, a semantic network built from this corpus called ConceptNet, and a matrix-based representation of ConceptNet called AnalogySpace that can infer new knowledge using dimensionality reduction.[1] The knowledge collected by Open Mind Common Sense has enabled research projects at MIT and elsewhere.
The project was the brainchild of Marvin Minsky, Push Singh, Catherine Havasi, and others. Development work began in September 1999, and the project opened to the Internet a year later. Havasi described it in her dissertation as "an attempt to ... harness some of the distributed human computing power of the Internet, an idea which was then only in its early stages."[2] The original OMCS was influenced by the website Everything2 and its predecessor, and presents a minimalist interface inspired by Google.
Push Singh was to become a professor at the MIT Media Lab and lead the Common Sense Computing group in 2007, but died by suicide on February 28, 2006.[3]
The project is currently run by the Digital Intuition Group at the MIT Media Lab under Catherine Havasi.[citation needed]
There are many different types of knowledge in OMCS. Some statements convey relationships between objects or events, expressed as simple phrases of natural language: some examples include "A coat is used for keeping warm", "The sun is very hot", and "The last thing you do when you cook dinner is wash your dishes". The database also contains information on the emotional content of situations, in such statements as "Spending time with friends causes happiness" and "Getting into a car wreck makes one angry". OMCS contains information on people's desires and goals, both large and small, such as "People want to be respected" and "People want good coffee".[1]
Originally, these statements could be entered into the Web site as unconstrained sentences of text, which had to be parsed later. The current version ofthe Web sitecollects knowledge only using more structured fill-in-the-blank templates. OMCS also makes use of data collected by theGame With a Purpose"Verbosity".[4]
In its native form, the OMCS database is simply a collection of these short sentences that convey some common knowledge. In order to use this knowledge computationally, it has to be transformed into a more structured representation.
ConceptNet is asemantic networkbased on the information in the OMCS database. ConceptNet is expressed as a directed graph whose nodes are concepts, and whose edges are assertions of common sense about these concepts. Concepts represent sets of closely related natural language phrases, which could be noun phrases, verb phrases, adjective phrases, or clauses.[5]
ConceptNet is created from the natural-language assertions in OMCS by matching them against patterns using a shallow parser. Assertions are expressed as relations between two concepts, selected from a limited set of possible relations. The various relations represent common sentence patterns found in the OMCS corpus, and in particular, every "fill-in-the-blanks" template used on the knowledge-collection Web site is associated with a particular relation.[5]
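A minimal sketch of this representation, assuming only the relation-based structure described above: assertions as (concept, relation, concept) triples, i.e. labeled edges of a directed graph. The sample facts mirror the OMCS examples earlier in this article; the `related` helper is illustrative, not part of the ConceptNet API.

```python
# ConceptNet-style assertions sketched as (concept, relation, concept)
# triples, i.e. labeled edges of a directed graph.
assertions = {
    ("coat", "UsedFor", "keeping warm"),
    ("sun", "HasProperty", "very hot"),
    ("spending time with friends", "Causes", "happiness"),
}

def related(concept):
    """Every assertion mentioning `concept`, with the edge direction."""
    out = [(rel, tail, "out") for head, rel, tail in assertions if head == concept]
    inc = [(rel, head, "in") for head, rel, tail in assertions if tail == concept]
    return out + inc
```

Real relation names in ConceptNet (UsedFor, HasProperty, Causes, and others) correspond to the fill-in-the-blank templates on the collection site.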
The data structures that make up ConceptNet were significantly reorganized in 2007, and published as ConceptNet 3.[5]The Software Agents group currently distributes a database and API for the new version 4.0.[6]
In 2010, OMCS co-founder and director Catherine Havasi, with Robyn Speer, Dennis Clark and Jason Alonso, createdLuminoso, a text analytics software company that builds on ConceptNet.[7][8][9][10]It uses ConceptNet as its primary lexical resource in order to help businesses make sense of and derive insight from vast amounts of qualitative data, including surveys, product reviews and social media.[7][11][12]
The information in ConceptNet can be used as a basis for machine learning algorithms. One representation, called AnalogySpace, uses singular value decomposition to generalize and represent patterns in the knowledge in ConceptNet, in a way that can be used in AI applications. Its creators distribute a Python machine learning toolkit called Divisi[13] for performing machine learning based on text corpora, structured knowledge bases such as ConceptNet, and combinations of the two.
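As a rough illustration of the AnalogySpace idea (toy data only, not the real ConceptNet matrix), a truncated SVD of a concept-by-feature matrix can assign a positive score to a fact that was never asserted:

```python
import numpy as np

# Toy concept-by-feature matrix in the spirit of AnalogySpace: rows are
# concepts, columns are relation/concept features, 1 marks an asserted fact.
# (Illustrative data; the real matrix is built from ConceptNet.)
concepts = ["dog", "cat", "car"]
features = ["IsA/animal", "HasA/tail", "UsedFor/driving"]
A = np.array([[1.0, 1.0, 0.0],    # dog: animal, has a tail
              [1.0, 0.0, 0.0],    # cat: animal; "has a tail" never asserted
              [0.0, 0.0, 1.0]])   # car: used for driving

# Rank-1 truncated SVD generalizes: dog and cat share the "animal" axis,
# so the reconstruction predicts a positive score for (cat, HasA/tail).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A1 = s[0] * np.outer(U[:, 0], Vt[0])
predicted = A1[concepts.index("cat"), features.index("HasA/tail")]
```

The entry was 0 in the input but comes out positive in the rank-1 reconstruction, which is the sense in which dimensionality reduction "infers new knowledge".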
Other similar projects includeNever-Ending Language Learning,Mindpixel(discontinued),Cyc, Learner, SenticNet,Freebase,YAGO,DBpedia, and Open Mind 1001 Questions, which have explored alternative approaches to collecting knowledge and providing incentive for participation.
The Open Mind Common Sense project differs from Cyc because it has focused on representing the common sense knowledge it collected as English sentences, rather than using a formal logical structure. ConceptNet is described by one of its creators, Hugo Liu, as being structured more likeWordNetthan Cyc, due to its "emphasis on informal conceptual-connectedness over formal linguistic-rigor".[14]
|
https://en.wikipedia.org/wiki/Open_Mind_Common_Sense
|
In software distribution and software development, a README file contains information about the other files in a directory or archive of computer software. A form of documentation, it is usually a simple plain text file called README, Read Me, READ.ME, README.txt,[1] or README.md (to indicate the use of Markdown).
The file's name is generally written in uppercase. OnUnix-likesystems in particular, this causes it to stand out – both because lowercase filenames are more common, and because thelscommand commonly sorts and displays files inASCII-code order, in which uppercase filenames will appear first.[nb 1]
A README file typically encompasses information such as configuration and installation instructions, operating instructions, a list of the files included, copyright and licensing information, contact information for the distributor, known bugs, troubleshooting notes, and credits and acknowledgments.
The convention of including a README file began in the mid-1970s.[3][4][5][6][7][8][9]EarlyMacintosh system softwareinstalled a Read Me on the Startup Disk, and README files commonly accompanied third-party software.
In particular, there is a long history offree softwareandopen-source softwareincluding a README file; theGNU Coding Standardsencourage including one to provide "a general overview of the package".[10]
Since the advent of thewebas ade factostandardplatform forsoftware distribution, many software packages have moved (or occasionally, copied) some of the above ancillary files and pieces of information to awebsiteorwiki, sometimes including the README itself, or sometimes leaving behind only a brief README file without all of the information required by a new user of the software.
The popularsource codehosting websiteGitHubstrongly encourages the creation of a README file – if one exists in the main (top-level) directory of a repository, it is automatically presented on the repository's front page.[11]In addition to plain text, various other formats andfile extensionsare also supported,[12]and HTML conversion takes extensions into account – in particular aREADME.mdis treated asGitHub Flavored Markdown.
The expression "readme file" is also sometimes used generically, for other files with a similar purpose.[citation needed]For example, the source-code distributions of many free software packages (especially those following theGnits Standardsor those produced withGNU Autotools) include a standard set of readme files:
Also commonly distributed with software packages are anFAQfile and aTODOfile, which lists planned improvements.
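A minimal README.md skeleton along these lines (the section names are illustrative conventions, not mandated by any standard):

```markdown
# Project Name

Short description of what the software does.

## Installation

    make && make install

## Usage

    projectname --help

## Known bugs

See the BUGS file or the project's issue tracker.

## License

Distributed under the terms given in the COPYING file.
```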
This article is based in part on theJargon File, which is in the public domain.
|
https://en.wikipedia.org/wiki/README
|
Ingraph theory, aproper edge coloringof agraphis an assignment of "colors" to the edges of the graph so that no two incident edges have the same color. For example, the figure to the right shows an edge coloring of a graph by the colors red, blue, and green. Edge colorings are one of several different types ofgraph coloring. Theedge-coloring problemasks whether it is possible to color the edges of a given graph using at mostkdifferent colors, for a given value ofk, or with the fewest possible colors. The minimum required number of colors for the edges of a given graph is called thechromatic indexof the graph. For example, the edges of the graph in the illustration can be colored by three colors but cannot be colored by two colors, so the graph shown has chromatic index three.
By Vizing's theorem, the number of colors needed to edge color a simple graph is either its maximum degree Δ or Δ+1. For some graphs, such as bipartite graphs and high-degree planar graphs, the number of colors is always Δ, and for multigraphs, the number of colors may be as large as 3Δ/2. There are polynomial time algorithms that construct optimal colorings of bipartite graphs, and colorings of non-bipartite simple graphs that use at most Δ+1 colors; however, the general problem of finding an optimal edge coloring is NP-hard and the fastest known algorithms for it take exponential time. Many variations of the edge-coloring problem, in which an assignment of colors to edges must satisfy conditions other than non-adjacency, have been studied. Edge colorings have applications in scheduling problems and in frequency assignment for fiber optic networks.
Acycle graphmay have its edges colored with two colors if the length of the cycle is even: simply alternate the two colors around the cycle. However, if the length is odd, three colors are needed.[1]
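The alternation argument can be written out directly (a small sketch; the function name is illustrative):

```python
def color_cycle(n):
    """Proper edge coloring of the cycle C_n (n >= 3).
    colors[i] is the color of the edge from vertex i to vertex (i+1) % n."""
    colors = [i % 2 for i in range(n)]   # alternate colors 0 and 1 around the cycle
    if n % 2 == 1:
        # odd cycle: the last edge meets both a 0-edge and a 1-edge,
        # so it needs a third color
        colors[-1] = 2
    return colors
```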
Acomplete graphKnwithnvertices is edge-colorable withn− 1colors whennis an even number; this is a special case ofBaranyai's theorem.Soifer (2008)provides the following geometric construction of a coloring in this case: placenpoints at the vertices and center of a regular(n− 1)-sided polygon. For each color class, include one edge from the center to one of the polygon vertices, and all of the perpendicular edges connecting pairs of polygon vertices. However, whennis odd,ncolors are needed: each color can only be used for(n− 1)/2edges, a1/nfraction of the total.[2]
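Soifer's construction is the classical "circle method" used for round-robin scheduling; a sketch, with vertex n − 1 playing the role of the center point:

```python
def one_factorization(n):
    """Partition the edges of K_n (n even) into n - 1 perfect matchings
    by the circle method: vertex n - 1 sits at the center, the other
    n - 1 vertices at the corners of a regular (n - 1)-gon."""
    assert n % 2 == 0
    m = n - 1
    rounds = []
    for r in range(m):
        matching = [(r, m)]                               # "spoke" from the center
        for k in range(1, m // 2 + 1):
            matching.append(((r + k) % m, (r - k) % m))   # chords perpendicular to it
        rounds.append(matching)
    return rounds
```

Each round is one color class; rotating the spoke through all n − 1 positions uses every edge exactly once.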
Several authors have studied edge colorings of theodd graphs,n-regular graphs in which the vertices represent teams ofn− 1players selected from a pool of2n− 1players, and in which the edges represent possible pairings of these teams (with one player left as "odd man out" to referee the game). The case thatn= 3gives the well-knownPetersen graph. AsBiggs (1972)explains the problem (forn= 6), the players wish to find a schedule for these pairings such that each team plays each of its six games on different days of the week, with Sundays off for all teams; that is, formalizing the problem mathematically, they wish to find a 6-edge-coloring of the 6-regular odd graphO6. Whennis 3, 4, or 8, an edge coloring ofOnrequiresn+ 1colors, but when it is 5, 6, or 7, onlyncolors are needed.[3]
As with itsvertex counterpart, anedge coloringof a graph, when mentioned without any qualification, is always assumed to be a proper coloring of the edges, meaning no two adjacent edges are assigned the same color. Here, two distinct edges are considered to be adjacent when they share a common vertex. An edge coloring of a graphGmay also be thought of as equivalent to a vertex coloring of theline graphL(G), the graph that has a vertex for every edge ofGand an edge for every pair of adjacent edges inG.
A proper edge coloring withkdifferent colors is called a (proper)k-edge-coloring. A graph that can be assigned ak-edge-coloring is said to bek-edge-colorable. The smallest number of colors needed in a (proper) edge coloring of a graphGis thechromatic index, or edge chromatic number,χ′(G). The chromatic index is also sometimes written using the notationχ1(G); in this notation, the subscript one indicates that edges are one-dimensional objects. A graph isk-edge-chromatic if its chromatic index is exactlyk. The chromatic index should not be confused with thechromatic numberχ(G)orχ0(G), the minimum number of colors needed in a proper vertex coloring ofG.
Unless stated otherwise all graphs are assumed to be simple, in contrast tomultigraphsin which two or more edges may be connecting the same pair of endpoints and in which there may be self-loops. For many problems in edge coloring, simple graphs behave differently from multigraphs, and additional care is needed to extend theorems about edge colorings of simple graphs to the multigraph case.
Amatchingin a graphGis a set of edges, no two of which are adjacent; aperfect matchingis a matching that includes edges touching all of the vertices of the graph, and amaximum matchingis a matching that includes as many edges as possible. In an edge coloring, the set of edges with any one color must all be non-adjacent to each other, so they form a matching. That is, a proper edge coloring is the same thing as a partition of the graph into disjoint matchings.
If the size of a maximum matching in a given graph is small, then many matchings will be needed in order to cover all of the edges of the graph. Expressed more formally, this reasoning implies that if a graph hasmedges in total, and if at mostβedges may belong to a maximum matching, then every edge coloring of the graph must use at leastm/βdifferent colors.[4]For instance, the 16-vertex planar graph shown in the illustration hasm= 24edges. In this graph, there can be no perfect matching; for, if the center vertex is matched, the remaining unmatched vertices may be grouped into three different connected components with four, five, and five vertices, and the components with an odd number of vertices cannot be perfectly matched. However, the graph has maximum matchings with seven edges, soβ = 7. Therefore, the number of colors needed to edge-color the graph is at least 24/7, and since the number of colors must be an integer it is at least four.
For aregular graphof degreekthat does not have a perfect matching, this lower bound can be used to show that at leastk+ 1colors are needed.[4]In particular, this is true for a regular graph with an odd number of vertices (such as the odd complete graphs); for such graphs, by thehandshaking lemma,kmust itself be even. However, the inequalityχ′ ≥m/βdoes not fully explain the chromatic index of every regular graph, because there are regular graphs that do have perfect matchings but that are notk-edge-colorable. For instance, thePetersen graphis regular, withm= 15and withβ = 5edges in its perfect matchings, but it does not have a 3-edge-coloring.
The edge chromatic number of a graphGis very closely related to themaximum degreeΔ(G), the largest number of edges incident to any single vertex ofG. Clearly,χ′(G) ≥ Δ(G), for ifΔdifferent edges all meet at the same vertexv, then all of these edges need to be assigned different colors from each other, and that can only be possible if there are at leastΔcolors available to be assigned.Vizing's theorem(named forVadim G. Vizingwho published it in 1964) states that this bound is almost tight: for any graph, the edge chromatic number is eitherΔ(G)orΔ(G) + 1.
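Vizing's dichotomy can be checked by brute force on very small graphs (exponential time, for illustration only; the function name is ours):

```python
from itertools import product

def chromatic_index(edges):
    """Smallest k admitting a proper k-edge-coloring, by exhaustive search.
    Feasible only for graphs with a handful of edges."""
    m = len(edges)
    k = 1
    while True:
        for colors in product(range(k), repeat=m):
            if all(colors[i] != colors[j]
                   for i in range(m) for j in range(i + 1, m)
                   if set(edges[i]) & set(edges[j])):   # edges sharing a vertex
                return k
        k += 1

triangle = [(0, 1), (1, 2), (2, 0)]                        # Delta = 2, class 2
k4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]   # Delta = 3, class 1
```

The triangle needs Δ + 1 = 3 colors, while K4 needs only Δ = 3, matching the two cases of Vizing's theorem.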
Whenχ′(G) = Δ(G),Gis said to be of class 1; otherwise, it is said to be of class 2.
Every bipartite graph is of class 1,[5]andalmost allrandom graphsare of class 1.[6]However, it isNP-completeto determine whether an arbitrary graph is of class 1.[7]
Vizing (1965)proved thatplanar graphsof maximum degree at least eight are of class one and conjectured that the same is true for planar graphs of maximum degree seven or six. On the other hand, there exist planar graphs of maximum degree ranging from two through five that are of class two. The conjecture has since been proven for graphs of maximum degree seven.[8]Bridgelessplanarcubic graphsare all of class 1; this is an equivalent form of thefour color theorem.[9]
A1-factorizationof ak-regular graph, a partition of the edges of the graph intoperfect matchings, is the same thing as ak-edge-coloring of the graph. That is, a regular graph has a 1-factorization if and only if it is of class 1. As a special case of this, a 3-edge-coloring of acubic(3-regular) graph is sometimes called aTait coloring.
Not every regular graph has a 1-factorization; for instance, thePetersen graphdoes not. More generally thesnarksare defined as the graphs that, like the Petersen graph, are bridgeless, 3-regular, and of class 2.
According to the theorem ofKőnig (1916), every bipartite regular graph has a 1-factorization. The theorem was stated earlier in terms ofprojective configurationsand was proven byErnst Steinitz.
Formultigraphs, in which multiple parallel edges may connect the same two vertices, results that are similar to but weaker than Vizing's theorem are known relating the edge chromatic numberχ′(G), the maximum degreeΔ(G), and the multiplicityμ(G), the maximum number of edges in any bundle of parallel edges. As a simple example showing that Vizing's theorem does not generalize to multigraphs, consider aShannon multigraph, a multigraph with three vertices and three bundles ofμ(G)parallel edges connecting each of the three pairs of vertices. In this example,Δ(G) = 2μ(G)(each vertex is incident to only two out of the three bundles ofμ(G)parallel edges) but the edge chromatic number is3μ(G)(there are3μ(G)edges in total, and every two edges are adjacent, so all edges must be assigned different colors to each other). In a result that inspired Vizing,[10]Shannon (1949)showed that this is the worst case:χ′(G) ≤ (3/2)Δ(G)for any multigraphG. Additionally, for any multigraphG,χ′(G) ≤ Δ(G) + μ(G), an inequality that reduces to Vizing's theorem in the case of simple graphs (for whichμ(G) = 1).
Because the problem of testing whether a graph is class 1 isNP-complete, there is no known polynomial time algorithm for edge-coloring every graph with an optimal number of colors. Nevertheless, a number of algorithms have been developed that relax one or more of these criteria: they only work on a subset of graphs, or they do not always use an optimal number of colors, or they do not always run in polynomial time.
In the case ofbipartite graphsor multigraphs with maximum degreeΔ, the optimal number of colors is exactlyΔ.Cole, Ost & Schirra (2001)showed that an optimal edge coloring of these graphs can be found in the near-linear time boundO(mlog Δ), wheremis the number of edges in the graph; simpler, but somewhat slower, algorithms are described byCole & Hopcroft (1982)andAlon (2003). The algorithm ofAlon (2003)begins by making the input graph regular, without increasing its degree or significantly increasing its size, by merging pairs of vertices that belong to the same side of the bipartition and then adding a small number of additional vertices and edges. Then, if the degree is odd, Alon finds a single perfect matching in near-linear time, assigns it a color, and removes it from the graph, causing the degree to become even. Finally, Alon applies an observation ofGabow (1976), that selecting alternating subsets of edges in anEuler tourof the graph partitions it into two regular subgraphs, to split the edge coloring problem into two smaller subproblems, and his algorithm solves the two subproblemsrecursively. The total time for his algorithm isO(mlogm).
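The near-linear-time algorithms above are intricate, but the underlying fact that Δ colors always suffice for bipartite graphs has a short constructive proof via Kempe-chain recoloring. The sketch below implements that textbook O(nm) method, not the algorithm of Cole, Ost & Schirra; it assumes a simple bipartite graph and that `delta` is its maximum degree.

```python
from collections import defaultdict

def edge_color_bipartite(edges, delta):
    """Proper delta-edge-coloring of a simple bipartite graph.
    For each edge (u, v): take a color fu free at u and fv free at v;
    if they differ, swap fu and fv along the alternating path from v
    (which, by bipartiteness, can never reach u), freeing fu at v."""
    at = defaultdict(dict)        # at[x][c] = neighbor across x's c-colored edge
    coloring = {}                 # frozenset({u, v}) -> color

    def free(x):
        return next(c for c in range(delta) if c not in at[x])

    for u, v in edges:
        fu, fv = free(u), free(v)
        if fu != fv:
            path, x, col = [], v, fu
            while col in at[x]:                    # walk the fu/fv Kempe chain
                y = at[x][col]
                path.append((x, y, col))
                x, col = y, fu + fv - col          # alternate between fu and fv
            for x, y, col in path:                 # erase old colors first...
                del at[x][col]; del at[y][col]
            for x, y, col in path:                 # ...then write swapped ones
                swapped = fu + fv - col
                at[x][swapped] = y; at[y][swapped] = x
                coloring[frozenset((x, y))] = swapped
        coloring[frozenset((u, v))] = fu           # fu is now free at both ends
        at[u][fu] = v; at[v][fu] = u
    return coloring
```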
Forplanar graphswith maximum degreeΔ ≥ 7, the optimal number of colors is again exactlyΔ. With the stronger assumption thatΔ ≥ 9, it is possible to find an optimal edge coloring in linear time (Cole & Kowalik 2008).
For d-regular graphs that are pseudo-random, in the sense that their adjacency matrix has second-largest eigenvalue (in absolute value) at most d^{1−ε}, d is the optimal number of colors (Ferber & Jain 2020).
Misra & Gries (1992)andGabow et al. (1985)describe polynomial time algorithms for coloring any graph withΔ + 1colors, meeting the bound given by Vizing's theorem; seeMisra & Gries edge coloring algorithm.
For multigraphs, Karloff & Shmoys (1987) present the following algorithm, which they attribute to Eli Upfal. Make the input multigraph G Eulerian by adding a new vertex connected by an edge to every odd-degree vertex, find an Euler tour, and choose an orientation for the tour. Form a bipartite graph H in which there are two copies of each vertex of G, one on each side of the bipartition, with an edge from a vertex u on the left side of the bipartition to a vertex v on the right side of the bipartition whenever the oriented tour has an edge from u to v in G. Apply a bipartite graph edge coloring algorithm to H. Each color class in H corresponds to a set of edges in G that form a subgraph with maximum degree two; that is, a disjoint union of paths and cycles, so for each color class in H it is possible to form three color classes in G. The time for the algorithm is bounded by the time to edge color a bipartite graph, O(m log Δ) using the algorithm of Cole, Ost & Schirra (2001). The number of colors this algorithm uses is at most3⌈Δ2⌉{\displaystyle 3\left\lceil {\frac {\Delta }{2}}\right\rceil }, close to but not quite the same as Shannon's bound of⌊3Δ2⌋{\displaystyle \left\lfloor {\frac {3\Delta }{2}}\right\rfloor }. It may also be made into a parallel algorithm in a straightforward way.
In the same paper, Karloff and Shmoys also present a linear time algorithm for coloring multigraphs of maximum degree three with four colors (matching both Shannon's and Vizing's bounds) that operates on similar principles: their algorithm adds a new vertex to make the graph Eulerian, finds an Euler tour, and then chooses alternating sets of edges on the tour to split the graph into two subgraphs of maximum degree two. The paths and even cycles of each subgraph may be colored with two colors per subgraph. After this step, each remaining odd cycle contains at least one edge that may be colored with one of the two colors belonging to the opposite subgraph. Removing this edge from the odd cycle leaves a path, which may be colored using the two colors for its subgraph.
A greedy coloring algorithm that considers the edges of a graph or multigraph one by one, assigning each edge the first available color, may sometimes use as many as 2Δ − 1 colors, which may be nearly twice as many colors as necessary. However, it has the advantage that it may be used in the online algorithm setting in which the input graph is not known in advance; in this setting, its competitive ratio is two, and this is optimal: no other online algorithm can achieve a better performance.[11] On the other hand, if edges arrive in a random order, and the input graph has a degree that is at least logarithmic, then smaller competitive ratios can be achieved.[12]
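The first-fit rule takes only a few lines (a sketch; the function name is illustrative):

```python
from collections import defaultdict

def greedy_edge_color(stream):
    """Online first-fit edge coloring: each arriving edge gets the smallest
    color not yet used at either endpoint. Since each endpoint blocks at most
    Delta - 1 colors, at most 2*Delta - 1 colors (0 .. 2*Delta - 2) are used."""
    used = defaultdict(set)       # used[v] = colors already incident to v
    colors = []
    for u, v in stream:
        c = 0
        while c in used[u] or c in used[v]:
            c += 1
        used[u].add(c)
        used[v].add(c)
        colors.append(c)
    return colors
```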
Several authors have made conjectures that imply that thefractional chromatic indexof any multigraph (a number that can be computed in polynomial time usinglinear programming) is within one of the chromatic index.[13]If these conjectures are true, it would be possible to compute a number that is never more than one off from the chromatic index in the multigraph case, matching what is known via Vizing's theorem for simple graphs. Although unproven in general, these conjectures are known to hold when the chromatic index is at leastΔ+Δ/2{\displaystyle \Delta +{\sqrt {\Delta /2}}}, as can happen for multigraphs with sufficiently large multiplicity.[14]
It is straightforward to test whether a graph may be edge colored with one or two colors, so the first nontrivial case of edge coloring is testing whether a graph has a 3-edge-coloring.
As Kowalik (2009) showed, it is possible to test whether a graph has a 3-edge-coloring in time O(1.344^n), while using only polynomial space. Although this time bound is exponential, it is significantly faster than a brute force search over all possible assignments of colors to edges. Every biconnected 3-regular graph with n vertices has O(2^{n/2}) 3-edge-colorings, all of which can be listed in time O(2^{n/2}) (somewhat slower than the time to find a single coloring); as Greg Kuperberg observed, the graph of a prism over an n/2-sided polygon has Ω(2^{n/2}) colorings (a lower bound rather than an upper bound), showing that this bound is tight.[15]
By applying exact algorithms for vertex coloring to theline graphof the input graph, it is possible to optimally edge-color any graph withmedges, regardless of the number of colors needed, in time2mmO(1)and exponential space, or in timeO(2.2461m)and only polynomial space (Björklund, Husfeldt & Koivisto 2009).
Because edge coloring is NP-complete even for three colors, it is unlikely to befixed parameter tractablewhen parametrized by the number of colors. However, it is tractable for other parameters. In particular,Zhou, Nakano & Nishizeki (1996)showed that for graphs oftreewidthw, an optimal edge coloring can be computed in timeO(nw(6w)w(w+ 1)/2), a bound that depends superexponentially onwbut only linearly on the numbernof vertices in the graph.
Nemhauser & Park (1991)formulate the edge coloring problem as aninteger programand describe their experience using an integer programming solver to edge color graphs. However, they did not perform any complexity analysis of their algorithm.
A graph is uniquely k-edge-colorable if there is only one way of partitioning the edges into k color classes, ignoring the k! possible permutations of the colors. For k ≠ 3, the only uniquely k-edge-colorable graphs are paths, cycles, and stars, but for k = 3 other graphs may also be uniquely k-edge-colorable. Every uniquely 3-edge-colorable graph has exactly three Hamiltonian cycles (formed by deleting one of the three color classes) but there exist 3-regular graphs that have three Hamiltonian cycles and are not uniquely 3-edge-colorable, such as the generalized Petersen graphs G(6n + 3, 2) for n ≥ 2. The only known nonplanar uniquely 3-edge-colorable graph is the generalized Petersen graph G(9,2), and it has been conjectured that no others exist.[16]
Folkman & Fulkerson (1969)investigated the non-increasing sequences of numbersm1,m2,m3, ...with the property that there exists a proper edge coloring of a given graphGwithm1edges of the first color,m2edges of the second color, etc. They observed that, if a sequencePis feasible in this sense, and is greater inlexicographic orderthan a sequenceQwith the same sum, thenQis also feasible. For, ifP>Qin lexicographic order, thenPcan be transformed intoQby a sequence of steps, each of which reduces one of the numbersmiby one unit and increases another later numbermjwithi<jby one unit. In terms of edge colorings, starting from a coloring that realizesP, each of these same steps may be performed by swapping colorsiandjon aKempe chain, a maximal path of edges that alternate between the two colors. In particular, any graph has anequitableedge coloring, an edge coloring with an optimal number of colors in which every two color classes differ in size by at most one unit.
TheDe Bruijn–Erdős theoremmay be used to transfer many edge coloring properties of finite graphs toinfinite graphs. For instance, Shannon's and Vizing's theorems relating the degree of a graph to its chromatic index both generalize straightforwardly to infinite graphs.[17]
Richter (2011)considers the problem of finding agraph drawingof a givencubic graphwith the properties that all of the edges in the drawing have one of three different slopes and that no two edges lie on the same line as each other. If such a drawing exists, then clearly the slopes of the edges may be used as colors in a 3-edge-coloring of the graph. For instance, the drawing of theutility graphK3,3as the edges and long diagonals of aregular hexagonrepresents a 3-edge-coloring of the graph in this way. As Richter shows, a 3-regular simple bipartite graph, with a given Tait coloring, has a drawing of this type that represents the given coloring if and only if the graph is3-edge-connected. For a non-bipartite graph, the condition is a little more complicated: a given coloring can be represented by a drawing if thebipartite double coverof the graph is 3-edge-connected, and if deleting any monochromatic pair of edges leads to a subgraph that is still non-bipartite. These conditions may all be tested easily in polynomial time; however, the problem of testing whether a 4-edge-colored 4-regular graph has a drawing with edges of four slopes, representing the colors by slopes, is complete for theexistential theory of the reals, a complexity class at least as difficult as being NP-complete.
As well as being related to the maximum degree and maximum matching number of a graph, the chromatic index is closely related to thelinear arboricityla(G)of a graphG, the minimum number of linear forests (disjoint unions of paths) into which the graph's edges may be partitioned. A matching is a special kind of linear forest, and in the other direction, any linear forest can be 2-edge-colored, so for everyGit follows thatla(G) ≤ χ′(G) ≤ 2 la(G).Akiyama's conjecture(named forJin Akiyama) states thatla(G)≤⌈Δ+12⌉{\displaystyle \mathop {\mathrm {la} } (G)\leq \left\lceil {\frac {\Delta +1}{2}}\right\rceil }, from which it would follow more strongly that2 la(G) − 2 ≤ χ′(G) ≤ 2 la(G). For graphs of maximum degree three,la(G)is always exactly two, so in this case the boundχ′(G) ≤ 2 la(G)matches the bound given by Vizing's theorem.[18]
TheThue numberof a graph is the number of colors required in an edge coloring meeting the stronger requirement that, in every even-length path, the first and second halves of the path form different sequences of colors.
The arboricity of a graph is the minimum number of colors required so that the edges of each color have no cycles (rather than, in the standard edge coloring problem, having no adjacent pairs of edges). That is, it is the minimum number of forests into which the edges of the graph may be partitioned.[19] Unlike the chromatic index, the arboricity of a graph may be computed in polynomial time.[20]
List edge-coloringis a problem in which one is given a graph in which each edge is associated with a list of colors, and must find a proper edge coloring in which the color of each edge is drawn from that edge's list. The list chromatic index of a graphGis the smallest numberkwith the property that, no matter how one chooses lists of colors for the edges, as long as each edge has at leastkcolors in its list, then a coloring is guaranteed to be possible. Thus, the list chromatic index is always at least as large as the chromatic index. TheDinitz conjectureon the completion of partialLatin squaresmay be rephrased as the statement that the list edge chromatic number of thecomplete bipartite graphKn,nequals its edge chromatic number,n.Galvin (1995)resolved the conjecture by proving, more generally, that in every bipartite graph the chromatic index and list chromatic index are equal. The equality between the chromatic index and the list chromatic index has been conjectured to hold, even more generally, for arbitrary multigraphs with no self-loops; this conjecture remains open.
Many other commonly studied variations of vertex coloring have also been extended to edge colorings. For instance, complete edge coloring is the edge-coloring variant ofcomplete coloring, a proper edge coloring in which each pair of colors must be represented by at least one pair of adjacent edges and in which the goal is to maximize the total number of colors.[21]Strong edge coloring is the edge-coloring variant ofstrong coloring, an edge coloring in which every two edges with adjacent endpoints must have different colors.[22]Strong edge coloring has applications inchannel allocation schemesforwireless networks.[23]
Acyclic edge coloring is the edge-coloring variant ofacyclic coloring, an edge coloring for which every two color classes form an acyclic subgraph (that is, a forest).[24]The acyclic chromatic index of a graphG{\displaystyle G}, denoted bya′(G){\displaystyle a'(G)}, is the smallest number of colors needed to have a proper acyclic edge coloring ofG{\displaystyle G}. It has been conjectured thata′(G)≤Δ+2{\displaystyle a'(G)\leq \Delta +2}, whereΔ{\displaystyle \Delta }is the maximum degree ofG{\displaystyle G}.[25]Currently the best known bound isa′(G)≤⌈3.74(Δ−1)⌉{\displaystyle a'(G)\leq \lceil 3.74(\Delta -1)\rceil }.[26]The problem becomes easier whenG{\displaystyle G}has largegirth. More specifically, there is a constantc{\displaystyle c}such that if the girth ofG{\displaystyle G}is at leastcΔlogΔ{\displaystyle c\Delta \log \Delta }, thena′(G)≤Δ+2{\displaystyle a'(G)\leq \Delta +2}.[27]A similar result is that for allϵ>0{\displaystyle \epsilon >0}there exists ang{\displaystyle g}such that ifG{\displaystyle G}has girth at leastg{\displaystyle g}, thena′(G)≤(1+ϵ)Δ{\displaystyle a'(G)\leq (1+\epsilon )\Delta }.[28]
Eppstein (2013)studied 3-edge-colorings of cubic graphs with the additional property that no two bichromatic cycles share more than a single edge with each other. He showed that the existence of such a coloring is equivalent to the existence of adrawing of the graphon a three-dimensional integer grid, with edges parallel to the coordinate axes and each axis-parallel line containing at most two vertices. However, like the standard 3-edge-coloring problem, finding a coloring of this type is NP-complete.
Total coloringis a form of coloring that combines vertex and edge coloring, by requiring both the vertices and edges to be colored. Any incident pair of a vertex and an edge, or an edge and an edge, must have distinct colors, as must any two adjacent vertices. It has been conjectured (combining Vizing's theorem andBrooks' theorem) that any graph has a total coloring in which the number of colors is at most the maximum degree plus two, but this remains unproven.
If a 3-regular graph on a surface is 3-edge-colored, its dual graph forms a triangulation of the surface which is also edge colored (although not, in general, properly edge colored) in such a way that every triangle has one edge of each color. Other colorings and orientations of triangulations, with other local constraints on how the colors are arranged at the vertices or faces of the triangulation, may be used to encode several types of geometric object. For instance, rectangular subdivisions (partitions of a rectangle into smaller rectangles, with three rectangles meeting at every vertex) may be described combinatorially by a "regular labeling", a two-coloring of the edges of a triangulation dual to the subdivision, with the constraint that the edges incident to each vertex form four contiguous subsequences, within each of which the colors are the same. This labeling is dual to a coloring of the rectangular subdivision itself in which the vertical edges have one color and the horizontal edges have the other color. Similar local constraints on the order in which colored edges may appear around a vertex may also be used to encode straight-line grid embeddings of planar graphs and three-dimensional polyhedra with axis-parallel sides. For each of these three types of regular labelings, the set of regular labelings of a fixed graph forms a distributive lattice that may be used to quickly list all geometric structures based on the same graph (such as all axis-parallel polyhedra having the same skeleton) or to find structures satisfying additional constraints.[29]
A deterministic finite automaton may be interpreted as a directed graph in which each vertex has the same out-degree d, and in which the edges are d-colored in such a way that every two edges with the same source vertex have distinct colors. The road coloring problem is the problem of edge-coloring a directed graph with uniform out-degrees, in such a way that the resulting automaton has a synchronizing word. Trahtman (2009) solved the road coloring problem by proving that such a coloring can be found whenever the given graph is strongly connected and aperiodic.
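Whether a given coloring admits a synchronizing word can be checked directly by a breadth-first search over subsets of states. The sketch below does this for a small hypothetical road-colored graph (a 4-state, out-degree-2 digraph invented for illustration, strongly connected and aperiodic):

```python
from collections import deque

def synchronizing_word(delta, states, alphabet):
    """BFS over subsets of states for a shortest word collapsing them all."""
    start = frozenset(states)
    word = {start: ""}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if len(s) == 1:
            return word[s]
        for a in alphabet:
            t = frozenset(delta[q][a] for q in s)
            if t not in word:
                word[t] = word[s] + a
                queue.append(t)
    return None  # this coloring admits no synchronizing word

# Hypothetical road coloring: color 'a' follows a 4-cycle,
# color 'b' follows the same arcs except for a chord at state 3.
delta = {0: {"a": 1, "b": 1}, 1: {"a": 2, "b": 2},
         2: {"a": 3, "b": 3}, 3: {"a": 0, "b": 1}}
w = synchronizing_word(delta, range(4), "ab")
```

The subset-BFS runs in time exponential in the number of states, which is fine for checking small examples but is not how Trahtman's constructive proof proceeds.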
Ramsey's theorem concerns the problem of k-coloring the edges of a large complete graph Kn in order to avoid creating monochromatic complete subgraphs Ks of some given size s. According to the theorem, there exists a number Rk(s) such that, whenever n ≥ Rk(s), such a coloring is not possible. For instance, R2(3) = 6; that is, if the edges of the graph K6 are 2-colored, there will always be a monochromatic triangle.
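The case R2(3) = 6 is small enough to verify by exhaustive search: every 2-coloring of K6's 15 edges contains a monochromatic triangle, while K5 admits a triangle-free 2-coloring (pentagon edges in one color, pentagram edges in the other). A brute-force check:

```python
from itertools import combinations, product

def has_mono_triangle(n, color):
    """color maps each edge (i, j), i < j, of Kn to 0 or 1."""
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in combinations(range(n), 3))

# Every one of the 2^15 colorings of K6 has a monochromatic triangle.
edges6 = list(combinations(range(6), 2))
assert all(has_mono_triangle(6, dict(zip(edges6, bits)))
           for bits in product((0, 1), repeat=len(edges6)))

# K5: color an edge by whether its endpoints are adjacent on a pentagon.
pent = {(i, j): (j - i) % 5 in (1, 4) for i, j in combinations(range(5), 2)}
assert not has_mono_triangle(5, pent)
```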
A path in an edge-colored graph is said to be a rainbow path if no color repeats on it. A graph is said to be rainbow colored if there is a rainbow path between every pair of vertices.
An edge coloring of a graph G with colors 1, ..., t is an interval t-coloring if all colors are used, and the colors of edges incident to each vertex of G are distinct and form an interval of integers.
Edge colorings of complete graphs may be used to schedule a round-robin tournament into as few rounds as possible so that each pair of competitors plays each other in one of the rounds; in this application, the vertices of the graph correspond to the competitors in the tournament, the edges correspond to games, and the edge colors correspond to the rounds in which the games are played.[30] Similar coloring techniques may also be used to schedule other sports pairings that are not all-play-all; for instance, in the National Football League, the pairs of teams that will play each other in a given year are determined, based on the teams' records from the previous year, and then an edge coloring algorithm is applied to the graph formed by the set of pairings in order to assign games to the weekends on which they are played.[31] For this application, Vizing's theorem implies that no matter what set of pairings is chosen (as long as no teams play each other twice in the same season), it is always possible to find a schedule that uses at most one more weekend than there are games per team.
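For an even number n of teams, the classic circle method produces such a schedule directly; it amounts to a proper (n − 1)-edge-coloring of Kn, one color class per round. A minimal sketch:

```python
def round_robin(n):
    """Circle method: n even; returns n-1 rounds of n/2 disjoint pairings."""
    players = list(range(n))
    rounds = []
    for _ in range(n - 1):
        rounds.append([(players[i], players[n - 1 - i])
                       for i in range(n // 2)])
        # keep player 0 fixed and rotate everyone else one position
        players = [players[0], players[-1]] + players[1:-1]
    return rounds

schedule = round_robin(8)  # 7 rounds of 4 games each
```

Each team plays exactly once per round, and every pair meets exactly once over the n − 1 rounds.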
Open shop scheduling is a problem of scheduling production processes, in which there are a set of objects to be manufactured, each object has a set of tasks to be performed on it (in any order), and each task must be performed on a specific machine, preventing any other task that requires the same machine from being performed at the same time. If all tasks have the same length, then this problem may be formalized as one of edge coloring a bipartite multigraph, in which the vertices on one side of the bipartition represent the objects to be manufactured, the vertices on the other side of the bipartition represent the manufacturing machines, the edges represent tasks that must be performed, and the colors represent time steps in which each task may be performed. Since bipartite edge coloring may be performed in polynomial time, the same is true for this restricted case of open shop scheduling.[32]
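The standard constructive proof that Δ colors suffice for bipartite graphs (Kőnig's edge-coloring theorem) translates directly into a scheduling algorithm: insert tasks one at a time, and when the job and the machine have no free time slot in common, swap slots along an alternating path. A sketch, assuming each job needs each machine at most once:

```python
from collections import defaultdict

def schedule_open_shop(tasks):
    """tasks: list of (job, machine) pairs; returns {task_index: time_slot},
    using at most Delta slots (Konig's theorem for bipartite edge coloring)."""
    deg = defaultdict(int)
    for j, m in tasks:
        deg[("J", j)] += 1
        deg[("M", m)] += 1
    slots = range(max(deg.values()))
    used = defaultdict(dict)        # vertex -> {slot: neighbour in that slot}
    eidx, color = {}, {}
    for idx, (j, m) in enumerate(tasks):
        u, v = ("J", j), ("M", m)
        eidx[(u, v)] = eidx[(v, u)] = idx
        a = next(s for s in slots if s not in used[u])   # free at the job
        if a in used[v]:
            b = next(s for s in slots if s not in used[v])  # free at machine
            # Flip the a/b alternating path starting at v; in a bipartite
            # graph this path can never reach u, so the flip is safe.
            path, x, c = [], v, a
            while c in used[x]:
                y = used[x][c]
                path.append((x, y, c))
                x, c = y, (b if c == a else a)
            for x, y, c in path:
                used[x].pop(c, None)
                used[y].pop(c, None)
            for x, y, c in path:
                swap = b if c == a else a
                used[x][swap], used[y][swap] = y, x
                color[eidx[(x, y)]] = swap
        used[u][a], used[v][a] = v, u
        color[idx] = a
    return color
```

Running it on the complete set of job/machine pairs (a K3,3 instance) yields a 3-slot timetable in which no job and no machine is double-booked.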
Gandham, Dawande & Prakash (2005) study the problem of link scheduling for time-division multiple access network communications protocols on sensor networks as a variant of edge coloring. In this problem, one must choose time slots for the edges of a wireless communications network so that each node of the network can communicate with each neighboring node without interference. Using a strong edge coloring (and using two time slots for each edge color, one for each direction) would solve the problem but might use more time slots than necessary. Instead, they seek a coloring of the directed graph formed by doubling each undirected edge of the network, with the property that each directed edge uv has a different color from the edges that go out from v and from the neighbors of v. They propose a heuristic for this problem based on a distributed algorithm for (Δ + 1)-edge-coloring together with a postprocessing phase that reschedules edges that might interfere with each other.
In fiber-optic communication, the path coloring problem is the problem of assigning colors (frequencies of light) to pairs of nodes that wish to communicate with each other, and paths through a fiber-optic communications network for each pair, subject to the restriction that no two paths that share a segment of fiber use the same frequency as each other. Paths that pass through the same communication switch but not through any segment of fiber are allowed to use the same frequency. When the communications network is arranged as a star network, with a single central switch connected by separate fibers to each of the nodes, the path coloring problem may be modeled exactly as a problem of edge coloring a graph or multigraph, in which the communicating nodes form the graph vertices, pairs of nodes that wish to communicate form the graph edges, and the frequencies that may be used for each pair form the colors of the edge coloring problem. For communications networks with a more general tree topology, local path coloring solutions for the star networks defined by each switch in the network may be patched together to form a single global solution.[33]
Jensen & Toft (1995) list 23 open problems concerning edge coloring. They include:
|
https://en.wikipedia.org/wiki/Edge_coloring
|
Enhanced Messaging Service (EMS) was a cross-industry collaboration between magic4, Ericsson, Motorola, Siemens and Alcatel, among others, which provided an application-level extension to Short Message Service (SMS) for cellular phones available on GSM, TDMA and CDMA networks. EMS is defined in 3GPP Technical Specification 3GPP TS 23.040 (originally GSM 03.40).[1]
EMS was an intermediate technology between SMS and MMS, providing some of the features of MMS. EMS was designed to work with existing networks, but was ultimately made obsolete by MMS. An EMS-enabled mobile phone could send and receive messages that had special text formatting (such as bold or italic), animations, pictures, icons, sound effects and special ringtones. EMS messages sent to devices that did not support it would be displayed as SMS messages, though they might be unreadable due to the presence of additional data that cannot be rendered by the device.
In some countries, EMS messages could not generally be sent between subscribers of different mobile phone carriers, as they would frequently be dropped by the inter-carrier network or by the receiving carrier. In other countries, such as the UK, inter-carrier interoperability was generally achieved. EMS never really caught on, owing to these interoperability limitations, and in fact very few operators ever introduced it.
On June 9, 2008, the CTIA organization officially released an RFI for Enhanced Messaging implementation with focus on Group Messaging.[2] The EM term in this context loosely refers to an improved mobile messaging product that combines the simplicity of Text Messaging with the successful rich features of the Internet's instant messaging. Other references to this new service have been made as "SMS 2" or "Instant SMS".
|
https://en.wikipedia.org/wiki/Enhanced_Messaging_Service
|
In computing, computer performance is the amount of useful work accomplished by a computer system. Outside of specific contexts, computer performance is estimated in terms of accuracy, efficiency and speed of executing computer program instructions. When it comes to high computer performance, one or more of the following factors might be involved:
The performance of any computer system can be evaluated in measurable, technical terms, using one or more of the metrics listed above. This way the performance can be
Whilst the above definition relates to a scientific, technical approach, the following definition given by Arnold Allen would be useful for a non-technical audience:
The word performance in computer performance means the same thing that performance means in other contexts; that is, it means "How well is the computer doing the work it is supposed to do?"[1]
Computer software performance, particularly software application response time, is an aspect of software quality that is important in human–computer interactions.
Performance engineering within systems engineering encompasses the set of roles, skills, activities, practices, tools, and deliverables applied at every phase of the systems development life cycle which ensures that a solution will be designed, implemented, and operationally supported to meet the performance requirements defined for the solution.
Performance engineering continuously deals with trade-offs between types of performance. Occasionally a CPU designer can find a way to make a CPU with better overall performance by improving one of the aspects of performance, presented below, without sacrificing the CPU's performance in other areas. For example, building the CPU out of better, faster transistors.
However, sometimes pushing one type of performance to an extreme leads to a CPU with worse overall performance, because other important aspects were sacrificed to get one impressive-looking number, for example, the chip's clock rate (see the megahertz myth).
Application Performance Engineering (APE) is a specific methodology within performance engineering designed to meet the challenges associated with application performance in increasingly distributed mobile, cloud and terrestrial IT environments. It includes the roles, skills, activities, practices, tools and deliverables applied at every phase of the application lifecycle that ensure an application will be designed, implemented and operationally supported to meet non-functional performance requirements.
Computer performance metrics (things to measure) include availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, instruction path length and speed up. CPU benchmarks are available.[2]
Availability of a system is typically measured as a factor of its reliability: as reliability increases, so does availability (that is, less downtime). Availability of a system may also be increased by the strategy of focusing on increasing testability and maintainability rather than reliability. Improving maintainability is generally easier than improving reliability, and maintainability estimates (repair rates) are also generally more accurate. However, because the uncertainties in the reliability estimates are in most cases very large, reliability is likely to dominate the uncertainty in the availability prediction, even when maintainability levels are very high.
Response time is the total amount of time it takes to respond to a request for service. In computing, that service can be any unit of work, from a simple disk I/O to loading a complex web page. The response time is the sum of three numbers:[3]
Most consumers pick a computer architecture (normally Intel IA-32 architecture) to be able to run a large base of pre-existing, pre-compiled software. Being relatively uninformed on computer benchmarks, some of them pick a particular CPU based on operating frequency (see megahertz myth).
Some system designers building parallel computers pick CPUs based on the speed per dollar.
Channel capacity is the tightest upper bound on the rate of information that can be reliably transmitted over a communications channel. By the noisy-channel coding theorem, the channel capacity of a given channel is the limiting information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.[4][5]
Information theory, developed by Claude E. Shannon during World War II, defines the notion of channel capacity and provides a mathematical model by which one can compute it. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.[6]
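For the special case of an additive white Gaussian noise channel, this maximization has the closed-form Shannon–Hartley solution C = B·log2(1 + S/N), which is easy to evaluate directly (the telephone-channel figures below are illustrative):

```python
from math import log2

def awgn_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity of an AWGN channel, in bits per second."""
    snr = 10 ** (snr_db / 10)          # convert dB to a linear power ratio
    return bandwidth_hz * log2(1 + snr)

# e.g. a 3 kHz voice channel at 30 dB SNR supports roughly 30 kbit/s
capacity = awgn_capacity(3000, 30)
```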
Latency is a time delay between the cause and the effect of some physical change in the system being observed. Latency is a result of the limited velocity with which any physical interaction can take place; this velocity is always at most the speed of light. Therefore, every physical system that has non-zero spatial dimensions will experience some sort of latency.
The precise definition of latency depends on the system being observed and the nature of stimulation. In communications, the lower limit of latency is determined by the medium being used for communications. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is "in-flight" at any one moment. In the field of human-machine interaction, perceptible latency (delay between what the user commands and when the computer provides the results) has a strong effect on user satisfaction and usability.
Computers run sets of instructions called processes. In operating systems, the execution of a process can be postponed if other processes are also executing. In addition, the operating system can schedule when to perform the action that the process is commanding. For example, suppose a process commands that a computer card's voltage output be set high-low-high-low and so on at a rate of 1000 Hz. The operating system may choose to adjust the scheduling of each transition (high-low or low-high) based on an internal clock. The latency is the delay between the process instruction commanding the transition and the hardware actually transitioning the voltage from high to low or low to high.
System designers building real-time computing systems want to guarantee worst-case response. That is easier to do when the CPU has low interrupt latency and when it has a deterministic response.
In computer networking, bandwidth is a measurement of the bit-rate of available or consumed data communication resources, expressed in bits per second or multiples of it (bit/s, kbit/s, Mbit/s, Gbit/s, etc.).
Bandwidth sometimes defines the net bit rate (aka. peak bit rate, information rate, or physical layer useful bit rate), channel capacity, or the maximum throughput of a logical or physical communication path in a digital communication system. For example, bandwidth tests measure the maximum throughput of a computer network. The reason for this usage is that according to Hartley's law, the maximum data rate of a physical communication link is proportional to its bandwidth in hertz, which is sometimes called frequency bandwidth, spectral bandwidth, RF bandwidth, signal bandwidth or analog bandwidth.
In general terms, throughput is the rate of production or the rate at which something can be processed.
In communication networks, throughput is essentially synonymous with digital bandwidth consumption. In wireless networks or cellular communication networks, the system spectral efficiency in bit/s/Hz/area unit, bit/s/Hz/site or bit/s/Hz/cell is the maximum system throughput (aggregate throughput) divided by the analog bandwidth and some measure of the system coverage area.
In integrated circuits, often a block in a data flow diagram has a single input and a single output, and operates on discrete packets of information. Examples of such blocks are FFT modules or binary multipliers. Because the units of throughput are the reciprocal of the unit for propagation delay, which is 'seconds per message' or 'seconds per output', throughput can be used to relate a computational device performing a dedicated function such as an ASIC or embedded processor to a communications channel, simplifying system analysis.
Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner or its ability to be enlarged to accommodate that growth.
The amount of electric power used by the computer (power consumption). This becomes especially important for systems with limited power sources such as solar, batteries, and human power.
System designers building parallel computers, such as Google's hardware, pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.[7]
For spaceflight computers, the processing speed per watt ratio is a more useful performance criterion than raw processing speed due to limited on-board resources of power.[8]
Compression is useful because it helps reduce resource usage, such as data storage space or transmission capacity. Because compressed data must be decompressed before use, this extra processing imposes computational or other costs through decompression; the situation is far from being a free lunch. Data compression is subject to a space–time complexity trade-off.
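The trade-off is easy to observe with a general-purpose compressor; the sketch below compares zlib's fastest and most aggressive settings on repetitive sample data:

```python
import time
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 5000

for level in (1, 9):                         # fastest vs. best compression
    start = time.perf_counter()
    packed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    assert zlib.decompress(packed) == data   # lossless round trip
    print(f"level {level}: ratio {len(data) / len(packed):.1f}, "
          f"{elapsed * 1000:.2f} ms")
```

Higher levels typically shrink the output further at the cost of more CPU time, which is exactly the space–time trade-off described above.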
This is an important performance feature of mobile systems, from the smart phones you keep in your pocket to the portable embedded systems in a spacecraft.
The effect of computing on the environment, during manufacturing and recycling as well as during use. Measurements are taken with the objectives of reducing waste, reducing hazardous materials, and minimizing a computer's ecological footprint.
The number of transistors on an integrated circuit (IC). Transistor count is the most common measure of IC complexity.
Because there are so many programs to test a CPU on all aspects of performance, benchmarks were developed.
The most famous benchmarks are the SPECint and SPECfp benchmarks developed by the Standard Performance Evaluation Corporation and the Certification Mark benchmark developed by the Embedded Microprocessor Benchmark Consortium (EEMBC).
In software engineering, performance testing is, in general, conducted to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage.
Performance testing is a subset of performance engineering, an emerging computer science practice which strives to build performance into the implementation, design, and architecture of a system.
In software engineering, profiling ("program profiling", "software profiling") is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. The most common use of profiling information is to aid program optimization.
Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a profiler (or code profiler). A number of different techniques may be used by profilers, such as event-based, statistical, instrumented, and simulation methods.
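In Python, for example, the standard library's cProfile is a deterministic (instrumented) profiler; a minimal session looks like this (the workload function is just an illustration):

```python
import cProfile
import io
import pstats

def workload(n):
    """Deliberately naive loop, used only as something to measure."""
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
workload(100_000)
profiler.disable()

# Render the five most time-consuming entries into a report string.
buffer = io.StringIO()
pstats.Stats(profiler, stream=buffer).sort_stats("cumulative").print_stats(5)
report = buffer.getvalue()
```

The resulting report lists call counts and cumulative times per function, which is the raw material for deciding where optimization effort should go.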
Performance tuning is the improvement of system performance. This is typically a computer application, but the same methods can be applied to economic markets, bureaucracies or other complex systems. The motivation for such activity is called a performance problem, which can be real or anticipated. Most systems will respond to increased load with some degree of decreasing performance. A system's ability to accept a higher load is called scalability, and modifying a system to handle a higher load is synonymous with performance tuning.
Systematic tuning follows these steps:
Perceived performance, in computer engineering, refers to how quickly a software feature appears to perform its task. The concept applies mainly to user acceptance aspects.
The amount of time an application takes to start up, or a file to download, is not made faster by showing a startup screen (see splash screen) or a file progress dialog box. However, doing so satisfies some human needs: the operation appears faster to the user, and a visual cue lets them know the system is handling their request.
In most cases, increasing real performance increases perceived performance, but when real performance cannot be increased due to physical limitations, techniques can be used to increase perceived performance.
The total amount of time (t) required to execute a particular benchmark program is t = N × C / f, where N is the number of instructions actually executed, C is the average number of clock cycles per instruction (also written CPI), and f is the clock frequency.
Even on one machine, a different compiler, or the same compiler with different compiler optimization switches, can change N and CPI: the benchmark executes faster if the new compiler can improve N or C without making the other worse. Often, however, there is a trade-off between them. Is it better, for example, to use a few complicated instructions that take a long time to execute, or to use instructions that execute very quickly, although it takes more of them to execute the benchmark?
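This trade-off can be made concrete with the performance equation t = N × C / f. In the hypothetical comparison below, a compiler emitting three times as many simple instructions still wins, because each one completes in a single cycle:

```python
def exec_time(instructions, cpi, clock_hz):
    """Iron law of processor performance: t = N * CPI / f."""
    return instructions * cpi / clock_hz

CLOCK = 2e9                                    # a 2 GHz clock, for illustration
t_complex = exec_time(1_000_000, 4.0, CLOCK)   # few instructions, high CPI
t_simple = exec_time(3_000_000, 1.0, CLOCK)    # many instructions, low CPI
```

Here the "complex" program takes 2.0 ms and the "simple" one 1.5 ms, despite executing three times as many instructions.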
A CPU designer is often required to implement a particularinstruction set, and so cannot change N.
Sometimes a designer focuses on improving performance by making significant improvements in f (with techniques such as deeper pipelines and faster caches), while (hopefully) not sacrificing too much C, leading to a speed-demon CPU design.
Sometimes a designer focuses on improving performance by making significant improvements in CPI (with techniques such as out-of-order execution, superscalar CPUs, larger caches, caches with improved hit rates, improved branch prediction, speculative execution, etc.), while (hopefully) not sacrificing too much clock frequency, leading to a brainiac CPU design.[10] For a given instruction set (and therefore fixed N) and semiconductor process, the maximum single-thread performance (1/t) requires a balance between brainiac techniques and speed-demon techniques.[9]
|
https://en.wikipedia.org/wiki/Computer_performance
|
In computer science, all-pairs testing or pairwise testing is a combinatorial method of software testing that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters by "parallelizing" the tests of parameter pairs.[1]
In most cases, a single input parameter or an interaction between two parameters is what causes a program's bugs.[2] Bugs involving interactions between three or more parameters are both progressively less common[3] and progressively more expensive to find; such testing has as its limit the testing of all possible inputs.[4] Thus, a combinatorial technique for picking test cases like all-pairs testing is a useful cost-benefit compromise that enables a significant reduction in the number of test cases without drastically compromising functional coverage.[5]
More rigorously, assume that a test case has N parameters given in a set {Pi} = {P1, P2, ..., PN}.
The range of the parameter Pi is given by R(Pi) = Ri.
Let us assume that |Ri| = ni.
We note that the number of all possible test cases is then ∏ni. Writing the code so that conditions involve only two parameters at a time can reduce the number of test cases needed.
To demonstrate, suppose there are parameters X, Y and Z.
We can use a predicate of the form P(X, Y, Z) of order 3, which takes all three as input, or instead three different order-2 predicates of the form p(u, v). P(X, Y, Z) can be written in an equivalent form of pxy(X, Y), pyz(Y, Z), pzx(Z, X), where the comma denotes any combination. If the code is written as conditions taking "pairs" of parameters,
then the set of choices of ranges X = {ni} can be a multiset, because several parameters can have the same number of choices.
Let max(S) denote a maximum element of the multiset S. The number of pairwise test cases on this test function would then be T = max(X) × max(X ∖ max(X)).
Therefore, if n = max(X) and m = max(X ∖ max(X)), then the number of tests is typically O(nm), where n and m are the number of possibilities for each of the two parameters with the most choices; this can be quite a lot less than the exhaustive ∏ni.
N-wise testing can be considered the generalized form of pairwise testing.
The idea is to apply sorting to the set X = {ni} so that P = {Pi} becomes ordered as well.
Let the sorted set be an N-tuple Ps = ⟨Pi⟩, ordered so that i < j implies |R(Pi)| ≤ |R(Pj)|.
Now we can take the set X(2) = {PN−1, PN−2} and call it pairwise testing.
Generalizing further, we can take the set X(3) = {PN−1, PN−2, PN−3} and call it 3-wise testing.
Eventually, we can take X(T) = {PN−1, PN−2, ..., PN−T} and call it T-wise testing.
N-wise testing, then, is simply all possible combinations given by the above formula.
Consider the parameters shown in the table below.
'Enabled', 'Choice Type' and 'Category' have a choice range of 2, 3 and 4, respectively. An exhaustive test would involve 24 tests (2 × 3 × 4). Multiplying the two largest values (3 and 4) indicates that a pairwise test would involve 12 tests. The pairwise test cases, generated by Microsoft's "pict" tool, are shown below.
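The same reduction can be reproduced with a simple greedy cover: repeatedly pick the full combination that covers the most not-yet-covered pairs. This is only an illustrative sketch, not the algorithm pict actually uses:

```python
from itertools import combinations, product

def allpairs(params):
    """Greedy pairwise covering array; params maps names to value lists."""
    names = list(params)
    def pairs(case):
        return {((names[i], case[i]), (names[j], case[j]))
                for i, j in combinations(range(len(names)), 2)}
    candidates = list(product(*params.values()))
    uncovered = set().union(*(pairs(c) for c in candidates))
    tests = []
    while uncovered:
        # pick the candidate combination covering the most uncovered pairs
        best = max(candidates, key=lambda c: len(pairs(c) & uncovered))
        tests.append(best)
        uncovered -= pairs(best)
    return tests

suite = allpairs({"Enabled": [True, False],
                  "Choice Type": [1, 2, 3],
                  "Category": ["a", "b", "c", "d"]})
```

Greedy selection is not guaranteed to hit the optimal 12 tests, but it always covers every pair with far fewer cases than the 24-test exhaustive suite.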
|
https://en.wikipedia.org/wiki/All-pairs_testing
|
The following is a list of integrals (anti-derivative functions) of hyperbolic functions. For a complete list of integral functions, see list of integrals.
In all formulas the constant a is assumed to be nonzero, and C denotes the constant of integration.
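Representative entries from such a list take the following form (standard antiderivatives, each verifiable by differentiation):

```latex
\int \sinh(ax)\,dx = \frac{1}{a}\cosh(ax) + C
\qquad
\int \cosh(ax)\,dx = \frac{1}{a}\sinh(ax) + C
\qquad
\int \tanh(ax)\,dx = \frac{1}{a}\ln\bigl(\cosh(ax)\bigr) + C
```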
|
https://en.wikipedia.org/wiki/List_of_integrals_of_hyperbolic_functions
|
A key ring is a file which contains the public keys of multiple certificate authorities (CAs).
A key ring is a file which is necessary for a Secure Sockets Layer (SSL) connection over the web. It is securely stored on the server which hosts the website. It contains the public/private key pair for the particular website. It also contains the public/private key pairs from various certificate authorities and the trusted root certificates for the various certificate authorities.
An entity or website administrator has to send a certificate signing request (CSR) to the CA. The CA then returns a signed certificate to the entity. This certificate received from the CA has to be stored in the key ring.
|
https://en.wikipedia.org/wiki/Key_ring_file
|
Cache hierarchy, or multi-level cache, is a memory architecture that uses a hierarchy of memory stores based on varying access speeds to cache data. Highly requested data is cached in high-speed access memory stores, allowing swifter access by central processing unit (CPU) cores.
Cache hierarchy is a form and part of memory hierarchy and can be considered a form of tiered storage.[1] This design was intended to allow CPU cores to process faster despite the memory latency of main memory access. Accessing main memory can act as a bottleneck for CPU core performance as the CPU waits for data, while making all of main memory high-speed may be prohibitively expensive. High-speed caches are a compromise allowing high-speed access to the data most-used by the CPU, permitting a faster CPU clock.[2]
In the history of computer and electronic chip development, there was a period when increases in CPU speed outpaced the improvements in memory access speed.[3]The gap between the speed of CPUs and memory meant that the CPU would often be idle.[4]CPUs were increasingly capable of running and executing larger amounts of instructions in a given time, but the time needed to access data from main memory prevented programs from fully benefiting from this capability.[5]This issue motivated the creation of memory models with higher access rates in order to realize the potential of faster processors.[6]
This resulted in the concept of cache memory, first proposed by Maurice Wilkes, a British computer scientist at the University of Cambridge, in 1965. He called such memory models "slave memory".[7] Between roughly 1970 and 1990, papers and articles by Anant Agarwal, Alan Jay Smith, Mark D. Hill, Thomas R. Puzak, and others discussed better cache memory designs. The first cache memory models were implemented at the time, but even as researchers were investigating and proposing better designs, the need for faster memory models continued. This need resulted from the fact that although early cache models improved data access latency, with respect to cost and technical limitations it was not feasible for a computer system's cache to approach the size of main memory. From 1990 onward, ideas such as adding another cache level (second-level), as a backup for the first-level cache, were proposed. Jean-Loup Baer, Wen-Hann Wang, Andrew W. Wilson, and others have conducted research on this model. When several simulations and implementations demonstrated the advantages of two-level cache models, the concept of multi-level caches caught on as a new and generally better model of cache memories. Since 2000, multi-level cache models have received widespread attention and are currently implemented in many systems, such as the three-level caches that are present in Intel's Core i7 products.[8]
Accessing main memory for each instruction execution may result in slow processing, with the clock speed depending on the time required to find and fetch the data. In order to hide this memory latency from the processor, data caching is used.[9]Whenever the data is required by the processor, it is fetched from the main memory and stored in the smaller memory structure called a cache. If there is any further need of that data, the cache is searched first before going to the main memory.[10]This structure resides closer to the processor in terms of the time taken to search and fetch data with respect to the main memory.[11]The advantages of using cache can be proven by calculating the average access time (AAT) for the memory hierarchy with and without the cache.[12]
Caches, being small in size, may result in frequent misses – when a search of the cache does not provide the sought-after information – resulting in a call to main memory to fetch data. Hence, the AAT is affected by the miss rate of each structure from which it searches for the data.[13]
The AAT for main memory is simply its hit time. The AAT for a cache can be given by Hit time + (Miss rate × Miss penalty), where the miss penalty is the AAT of the next-lower level of the hierarchy (ultimately main memory).
The hit time for caches is less than the hit time for the main memory, so the AAT for data retrieval is significantly lower when accessing data through the cache rather than main memory.[14]
While using the cache may improve memory latency, it may not always result in the required improvement in the time taken to fetch data, due to the way caches are organized and traversed. For example, a direct-mapped cache usually has a higher miss rate than a fully associative cache of the same size. This may also depend on the benchmark used to test the processor and on the pattern of instructions. But using a fully associative cache may result in more power consumption, as it has to search the whole cache every time. Because of this, the trade-off between power consumption (and associated heat) and the size of the cache becomes critical in cache design.[13]
In the case of a cache miss, the purpose of using such a structure will be rendered useless and the computer will have to go to the main memory to fetch the required data. However, with a multiple-level cache, if the computer misses the cache closest to the processor (level-one cache, or L1) it will then search through the next-closest level(s) of cache and go to main memory only if these methods fail. The general trend is to keep the L1 cache small and at a distance of 1–2 CPU clock cycles from the processor, with the lower levels of caches increasing in size to store more data than L1, hence being more distant but with a lower miss rate. This results in a better AAT.[15] The number of cache levels can be chosen by architects according to their requirements, after checking for trade-offs between cost, AATs, and size.[16][17]
With the technology scaling that allowed entire memory systems to be accommodated on a single chip, most modern-day processors have up to three or four cache levels.[18] The reduction in AAT can be understood from this example, which computes the AAT for different configurations up to L3 caches.
Example: main memory = 50ns, L1 = 1 ns with 10% miss rate, L2 = 5 ns with 1% miss rate, L3 = 10 ns with 0.2% miss rate.
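The numbers above can be plugged into the recursive AAT formula, where each level's AAT is its hit time plus its miss rate times the AAT of the next-lower level. A minimal Python sketch (the `aat` helper is illustrative, not from a real simulator):

```python
def aat(levels, memory_time):
    """Average access time of a cache hierarchy.

    levels: list of (hit_time_ns, miss_rate) pairs, ordered L1 first.
    memory_time: main-memory access time in ns (the final miss penalty).
    """
    penalty = memory_time
    # Work from the lowest cache level upward: each level contributes
    # hit time + miss rate * (AAT of the next-lower level).
    for hit_time, miss_rate in reversed(levels):
        penalty = hit_time + miss_rate * penalty
    return penalty

no_cache = aat([], 50)                                   # 50 ns
l1_only = aat([(1, 0.10)], 50)                           # 1 + 0.10*50 = 6 ns
l1_l2 = aat([(1, 0.10), (5, 0.01)], 50)                  # 1.55 ns
l1_l2_l3 = aat([(1, 0.10), (5, 0.01), (10, 0.002)], 50)  # 1.5101 ns
```

Each added level multiplies the cost of going further down the hierarchy by the previous level's miss rate, which is why the AAT here drops from 50 ns with no cache to about 1.51 ns with three levels.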
In a banked cache, the cache is divided into a cache dedicated to instruction storage and a cache dedicated to data. In contrast, a unified cache contains both the instructions and data in the same cache.[22] During a process, the L1 cache (or the uppermost cache in relation to its connection to the processor) is accessed by the processor to retrieve both instructions and data. Requiring both actions to be performed at the same time requires multiple ports and more access time in a unified cache. Having multiple ports requires additional hardware and wiring, leading to significant structure between the caches and processing units.[23] To avoid this, the L1 cache is often organized as a banked cache, which results in fewer ports, less hardware, and generally lower access times.[13]
Modern processors have split caches, and in systems with multilevel caches the higher-level caches may be unified while the lower levels are split.[24]
Whether a block present in the upper cache layer can also be present in the lower cache level is governed by the memory system's inclusion policy, which may be inclusive, exclusive, or non-inclusive non-exclusive (NINE).[citation needed]
With an inclusive policy, all the blocks present in the upper-level cache have to be present in the lower-level cache as well. Each upper-level cache component is a subset of the lower-level cache component. In this case, since there is a duplication of blocks, there is some wastage of memory. However, checking is faster.[citation needed]
Under an exclusive policy, all the cache hierarchy components are completely exclusive, so that any element in the upper-level cache will not be present in any of the lower cache components. This enables complete usage of the cache memory. However, there is a high memory-access latency.[25]
The above policies require a set of rules to be followed in order to implement them. If none of these are forced, the resulting inclusion policy is called non-inclusive non-exclusive (NINE). This means that the upper-level cache may or may not be present in the lower-level cache.[21]
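Which invariant a given snapshot of two adjacent cache levels is consistent with can be expressed as simple set relations. An illustrative Python sketch (the function name and the representation of blocks as address sets are assumptions, not from the text):

```python
def inclusion_policy(l1_blocks, l2_blocks):
    """Classify which inclusion invariant a two-level snapshot satisfies."""
    l1, l2 = set(l1_blocks), set(l2_blocks)
    if l1 <= l2:
        return "inclusive"   # every upper-level block is duplicated below
    if not (l1 & l2):
        return "exclusive"   # no block appears at both levels
    return "NINE"            # non-inclusive non-exclusive: anything else
```

For example, `inclusion_policy({1, 2}, {1, 2, 3})` satisfies the inclusive invariant, while `inclusion_policy({1, 2}, {3, 4})` satisfies the exclusive one.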
There are two policies which define the way in which a modified cache block will be updated in the main memory: write through and write back.[citation needed]
In the case of write through policy, whenever the value of the cache block changes, it is further modified in the lower-level memory hierarchy as well.[26]This policy ensures that the data is stored safely as it is written throughout the hierarchy.
However, in the case of the write back policy, the changed cache block will be updated in the lower-level hierarchy only when the cache block is evicted. A "dirty bit" is attached to each cache block and set whenever the cache block is modified.[27] During eviction, blocks with a set dirty bit are written to the lower-level hierarchy. Under this policy, there is a risk of data loss, as the most recently changed copy of a datum is stored only in the cache, so corrective techniques must be employed.
In the case of a write where the byte is not present in the cache block, the byte may be brought into the cache, as determined by a write allocate or write no-allocate policy.[28] The write allocate policy states that in case of a write miss, the block is fetched from the main memory and placed in the cache before writing.[29] In the write no-allocate policy, if the block misses in the cache, the write proceeds in the lower-level memory hierarchy without fetching the block into the cache.[30]
The common combinations of these policies are "write back, write allocate" and "write through, write no-allocate".
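The dirty-bit mechanism of the "write back, write allocate" combination can be sketched as a toy cache in Python (the class and field names are illustrative; a real cache indexes by sets and ways rather than a flat dictionary):

```python
class WriteBackCache:
    """Toy write-back, write-allocate cache: a modified block reaches
    main memory only when it is evicted with its dirty bit set."""

    def __init__(self, capacity, memory):
        self.capacity = capacity
        self.memory = memory            # backing store: address -> value
        self.blocks = {}                # address -> [value, dirty_bit]

    def write(self, addr, value):
        # Write-allocate: make room for the block if the cache is full.
        if addr not in self.blocks and len(self.blocks) >= self.capacity:
            self._evict()
        self.blocks[addr] = [value, True]   # set the dirty bit

    def _evict(self):
        addr, (value, dirty) = next(iter(self.blocks.items()))
        del self.blocks[addr]
        if dirty:
            self.memory[addr] = value       # write back only if modified

memory = {0: "old"}
cache = WriteBackCache(capacity=1, memory=memory)
cache.write(0, "new")       # memory[0] is still "old": the block is dirty
cache.write(1, "other")     # evicting block 0 writes "new" back to memory
```

Until the eviction forced by the second write, main memory holds the stale value; this is exactly the data-loss window the dirty-bit policy trades for fewer memory writes.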
A private cache is assigned to one particular core in a processor, and cannot be accessed by any other cores. In some architectures, each core has its own private cache; this creates the risk of duplicate blocks in a system's cache architecture, which results in reduced capacity utilization. However, this type of design choice in a multi-layer cache architecture can also be good for a lower data-access latency.[28][31][32]
A shared cache is a cache which can be accessed by multiple cores.[33]Since it is shared, each block in the cache is unique and therefore has a larger hit rate as there will be no duplicate blocks. However, data-access latency can increase as multiple cores try to access the same cache.[34]
In multi-core processors, the design choice to make a cache shared or private impacts the performance of the processor.[35] In practice, the upper-level cache L1 (or sometimes L2)[36][37] is implemented as private and the lower-level caches are implemented as shared. This design provides high access rates for the high-level caches and low miss rates for the lower-level caches.[35]
|
https://en.wikipedia.org/wiki/Cache_hierarchy
|
Algorithmic transparencyis the principle that the factors that influence the decisions made byalgorithmsshould be visible, or transparent, to the people who use, regulate, and are affected by systems that employ those algorithms. Although the phrase was coined in 2016 by Nicholas Diakopoulos and Michael Koliska about the role of algorithms in deciding the content of digital journalism services,[1]the underlying principle dates back to the 1970s and the rise of automated systems for scoring consumer credit.
The phrases "algorithmic transparency" and "algorithmic accountability"[2]are sometimes used interchangeably – especially since they were coined by the same people – but they have subtly different meanings. Specifically, "algorithmic transparency" states that the inputs to the algorithm and the algorithm's use itself must be known, but they need not be fair. "Algorithmic accountability" implies that the organizations that use algorithms must be accountable for the decisions made by those algorithms, even though the decisions are being made by a machine, and not by a human being.[3]
Current research around algorithmic transparency is interested both in the societal effects of accessing remote services running algorithms[4] and in the mathematical and computer-science approaches that can be used to achieve algorithmic transparency.[5] In the United States, the Federal Trade Commission's Bureau of Consumer Protection studies how algorithms are used by consumers by conducting its own research on algorithmic transparency and by funding external research.[6] In the European Union, the data protection laws that came into effect in May 2018 include a "right to explanation" of decisions made by algorithms, though it is unclear what this means.[7] Furthermore, the European Union founded the European Centre for Algorithmic Transparency (ECAT).[8]
|
https://en.wikipedia.org/wiki/Algorithmic_transparency
|
In mathematics, especially group theory, the centralizer (also called commutant[1][2]) of a subset S in a group G is the set CG(S){\displaystyle \operatorname {C} _{G}(S)} of elements of G that commute with every element of S, or equivalently, the set of elements g∈G{\displaystyle g\in G} such that conjugation by g{\displaystyle g} leaves each element of S fixed. The normalizer of S in G is the set of elements NG(S){\displaystyle \mathrm {N} _{G}(S)} of G that satisfy the weaker condition of leaving the set S⊆G{\displaystyle S\subseteq G} fixed under conjugation. The centralizer and normalizer of S are subgroups of G. Many techniques in group theory are based on studying the centralizers and normalizers of suitable subsets S.
Suitably formulated, the definitions also apply to semigroups.
In ring theory, the centralizer of a subset of a ring is defined with respect to the multiplication of the ring (a semigroup operation). The centralizer of a subset of a ring R is a subring of R. This article also deals with centralizers and normalizers in a Lie algebra.
The idealizer in a semigroup or ring is another construction that is in the same vein as the centralizer and normalizer.
The centralizer of a subset S{\displaystyle S} of a group (or semigroup) G is defined as[3]

CG(S)={g∈G∣gs=sg for all s∈S}={g∈G∣gsg−1=s for all s∈S},{\displaystyle \operatorname {C} _{G}(S)=\{g\in G\mid gs=sg{\text{ for all }}s\in S\}=\{g\in G\mid gsg^{-1}=s{\text{ for all }}s\in S\},}
where only the first definition applies to semigroups.
If there is no ambiguity about the group in question, theGcan be suppressed from the notation. WhenS={a}{\displaystyle S=\{a\}}is asingletonset, we write CG(a) instead of CG({a}). Another less common notation for the centralizer is Z(a), which parallels the notation for thecenter. With this latter notation, one must be careful to avoid confusion between thecenterof a groupG, Z(G), and thecentralizerof anelementginG, Z(g).
The normalizer of S in the group (or semigroup) G is defined as

NG(S)={g∈G∣gS=Sg}={g∈G∣gSg−1=S},{\displaystyle \mathrm {N} _{G}(S)=\{g\in G\mid gS=Sg\}=\{g\in G\mid gSg^{-1}=S\},}
where again only the first definition applies to semigroups. If the set S{\displaystyle S} is a subgroup of G{\displaystyle G}, then the normalizer NG(S){\displaystyle N_{G}(S)} is the largest subgroup G′⊆G{\displaystyle G'\subseteq G} in which S{\displaystyle S} is a normal subgroup of G′{\displaystyle G'}. The definitions of centralizer and normalizer are similar but not identical. If g is in the centralizer of S{\displaystyle S} and s is in S{\displaystyle S}, then it must be that gs = sg, but if g is in the normalizer, then gs = tg for some t in S{\displaystyle S}, with t possibly different from s. That is, elements of the centralizer of S{\displaystyle S} must commute pointwise with S{\displaystyle S}, but elements of the normalizer of S need only commute with S as a set. The same notational conventions mentioned above for centralizers also apply to normalizers. The normalizer should not be confused with the normal closure.
ClearlyCG(S)⊆NG(S){\displaystyle C_{G}(S)\subseteq N_{G}(S)}and both are subgroups ofG{\displaystyle G}.
If R is a ring or an algebra over a field, and S{\displaystyle S} is a subset of R, then the centralizer of S{\displaystyle S} is exactly as defined for groups, with R in the place of G.
If L{\displaystyle {\mathfrak {L}}} is a Lie algebra (or Lie ring) with Lie product [x, y], then the centralizer of a subset S{\displaystyle S} of L{\displaystyle {\mathfrak {L}}} is defined to be[4]

CL(S)={x∈L∣[x,s]=0 for all s∈S}.{\displaystyle \mathrm {C} _{\mathfrak {L}}(S)=\{x\in {\mathfrak {L}}\mid [x,s]=0{\text{ for all }}s\in S\}.}
The definition of centralizers for Lie rings is linked to the definition for rings in the following way. If R is an associative ring, then R can be given the bracket product [x, y] = xy − yx. Of course then xy = yx if and only if [x, y] = 0. If we denote the set R with the bracket product as LR, then clearly the ring centralizer of S{\displaystyle S} in R is equal to the Lie ring centralizer of S{\displaystyle S} in LR.
The normalizer of a subset S{\displaystyle S} of a Lie algebra (or Lie ring) L{\displaystyle {\mathfrak {L}}} is given by[4]

NL(S)={x∈L∣[x,s]∈S for all s∈S}.{\displaystyle \mathrm {N} _{\mathfrak {L}}(S)=\{x\in {\mathfrak {L}}\mid [x,s]\in S{\text{ for all }}s\in S\}.}
While this is the standard usage of the term "normalizer" in Lie algebra, this construction is actually the idealizer of the set S{\displaystyle S} in L{\displaystyle {\mathfrak {L}}}. If S{\displaystyle S} is an additive subgroup of L{\displaystyle {\mathfrak {L}}}, then NL(S){\displaystyle \mathrm {N} _{\mathfrak {L}}(S)} is the largest Lie subring (or Lie subalgebra, as the case may be) in which S{\displaystyle S} is a Lie ideal.[5]
Consider the group G=S3{\displaystyle G=S_{3}}, the symmetric group of all permutations of three elements, written in one-line notation.

Take the subset H={[1,2,3],[1,3,2]}{\displaystyle H=\{[1,2,3],[1,3,2]\}} of the group G{\displaystyle G}.
Note that [1,2,3]{\displaystyle [1,2,3]} is the identity permutation in G{\displaystyle G} and retains the order of each element, and [1,3,2]{\displaystyle [1,3,2]} is the permutation that fixes the first element and swaps the second and third elements.
The normalizer of H{\displaystyle H} with respect to the group G{\displaystyle G} consists of all elements of G{\displaystyle G} that yield the set H{\displaystyle H} (potentially permuted) when the element conjugates H{\displaystyle H}.
Working out the example for each element ofG{\displaystyle G}:
Therefore, the normalizer NG(H){\displaystyle N_{G}(H)} of H{\displaystyle H} in G{\displaystyle G} is {[1,2,3],[1,3,2]}{\displaystyle \{[1,2,3],[1,3,2]\}}, since both these group elements preserve the set H{\displaystyle H} under conjugation.
The centralizer of H{\displaystyle H} in the group G{\displaystyle G} is the set of elements that leave each element of H{\displaystyle H} unchanged by conjugation; that is, the set of elements that commute with every element in H{\displaystyle H}.
It is clear in this example that the only such elements in S3 are those of H{\displaystyle H} itself, [1, 2, 3] and [1, 3, 2].
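The example can be verified by brute force over S3, representing permutations in one-line notation as tuples (a small Python sketch; the helper names are illustrative):

```python
from itertools import permutations

def compose(p, q):
    """(p∘q)(i) = p(q(i)), for 1-indexed one-line-notation tuples."""
    return tuple(p[q[i] - 1] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v - 1] = i + 1
    return tuple(inv)

G = list(permutations([1, 2, 3]))   # the six elements of S3
H = {(1, 2, 3), (1, 3, 2)}          # identity and the swap fixing 1

# Centralizer: elements commuting with every h in H.
centralizer = [g for g in G
               if all(compose(g, h) == compose(h, g) for h in H)]

# Normalizer: elements g with g H g^{-1} = H as a set.
normalizer = [g for g in G
              if {compose(compose(g, h), inverse(g)) for h in H} == H]
```

Both comprehensions return exactly the two permutations (1, 2, 3) and (1, 3, 2), matching the hand computation above.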
LetS′{\displaystyle S'}denote the centralizer ofS{\displaystyle S}in the semigroupA{\displaystyle A}; i.e.S′={x∈A∣sx=xsfor everys∈S}.{\displaystyle S'=\{x\in A\mid sx=xs{\text{ for every }}s\in S\}.}ThenS′{\displaystyle S'}forms asubsemigroupandS′=S‴=S′′′′′{\displaystyle S'=S'''=S'''''}; i.e. a commutant is its ownbicommutant.
|
https://en.wikipedia.org/wiki/Centralizer
|
In formal language theory, a context-free grammar is in Greibach normal form (GNF) if the right-hand sides of all production rules start with a terminal symbol, optionally followed by some nonterminals. A non-strict form allows one exception to this format restriction, allowing the empty word (epsilon, ε) to be a member of the described language. The normal form was established by Sheila Greibach and it bears her name.
More precisely, a context-free grammar is in Greibach normal form if all production rules are of the form A→aA1A2…An{\displaystyle A\to aA_{1}A_{2}\ldots A_{n}} (the non-strict form also allows an ε-rule for the start symbol),
where A{\displaystyle A} is a nonterminal symbol, a{\displaystyle a} is a terminal symbol, and A1A2…An{\displaystyle A_{1}A_{2}\ldots A_{n}} is a (possibly empty) sequence of nonterminal symbols.
Observe that the grammar does not have left recursion.
Every context-free grammar can be transformed into an equivalent grammar in Greibach normal form.[1] Various constructions exist. Some do not permit the second form of rule and cannot transform context-free grammars that can generate the empty word. For one such construction, the size of the constructed grammar is O(n⁴) in the general case and O(n³) if no derivation of the original grammar consists of a single nonterminal symbol, where n is the size of the original grammar.[2] This conversion can be used to prove that every context-free language can be accepted by a real-time (non-deterministic) pushdown automaton, i.e., the automaton reads a letter from its input at every step.
Given a grammar in GNF and a derivable string of length n in the grammar, any top-down parser will halt at depth n.
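The format restriction can be checked mechanically. A minimal Python sketch, representing a grammar as a dict from nonterminals to lists of right-hand sides (this representation is an assumption for illustration):

```python
def is_gnf(grammar, terminals, start="S", allow_epsilon=True):
    """Check whether all production rules are in Greibach normal form.

    grammar: dict mapping a nonterminal to a list of right-hand sides,
    each RHS a list of symbols. In GNF every RHS is one terminal followed
    by zero or more nonterminals; the non-strict form also allows
    an ε-rule for the start symbol.
    """
    for lhs, rhss in grammar.items():
        for rhs in rhss:
            if not rhs:                        # ε-production
                if allow_epsilon and lhs == start:
                    continue
                return False
            head, *tail = rhs
            if head not in terminals:          # must start with a terminal
                return False
            if any(sym in terminals for sym in tail):
                return False                   # the rest must be nonterminals
    return True

# A GNF grammar for the language a^n b^n (n >= 1):
#   S -> a S B | a B,   B -> b
g = {"S": [["a", "S", "B"], ["a", "B"]], "B": [["b"]]}
```

Here `is_gnf(g, {"a", "b"})` succeeds, while a left-recursive grammar such as `{"S": [["S", "a"]]}` fails the first-symbol check.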
|
https://en.wikipedia.org/wiki/Greibach_normal_form
|
Remote monitoring and management (RMM) is the process of supervising and controlling IT systems (such as network devices, desktops, servers and mobile devices) by means of locally installed agents that can be accessed by a management service provider.[1][2]
Functions include the ability to:
Traditionally this function has been performed on site at a company, but many managed service providers (MSPs) now perform it remotely using integrated Software as a Service (SaaS) platforms.
|
https://en.wikipedia.org/wiki/Remote_monitoring_and_management
|
Blockmodeling linked networks is an approach in blockmodeling for analysing linked networks. It is based on the generalized multilevel blockmodeling approach.[1]: 259 The main objective of this approach is to achieve clustering of the nodes from all involved sets, while at the same time using all available information. All connected one-mode and two-mode networks are blockmodeled at the same time, which results in a single clustering of the nodes from all sets. Each cluster ideally contains only nodes from one set, which also allows the modeling of the links among clusters from different sets (through two-mode networks).[1]: 260 This approach was introduced by Aleš Žiberna in 2014.[2][3]
Blockmodeling linked networks can be done using:[1]: 260–261[2]
|
https://en.wikipedia.org/wiki/Blockmodeling_linked_networks
|
The clique game is a positional game where two players alternately pick edges, trying to occupy a complete clique of a given size.
The game is parameterized by two integers n > k. The game-board is the set of all edges of a complete graph on n vertices. The winning-sets are all the cliques on k vertices. There are several variants of this game, including the strong-positional variant, in which the player who first occupies a full clique wins, and the Maker-Breaker variant, in which Maker wins by occupying a full clique and Breaker wins by preventing this.
The clique game (in its strong-positional variant) was first presented by Paul Erdős and John Selfridge, who attributed it to Simmons.[1] They called it the Ramsey game, since it is closely related to Ramsey's theorem (see below).
Ramsey's theorem implies that, for every integer k, there exists an integer R2(k,k) such that every 2-coloring of the edges of a complete graph with n≥R2(k,k){\displaystyle n\geq R_{2}(k,k)} vertices contains a monochromatic clique of size at least k. This means that, if n≥R2(k,k){\displaystyle n\geq R_{2}(k,k)}, the clique game can never end in a draw. A strategy-stealing argument implies that the first player can always force at least a draw; therefore, if n≥R2(k,k){\displaystyle n\geq R_{2}(k,k)}, Maker wins. By substituting known bounds for the Ramsey number, we get that Maker wins whenever k≤log2n2{\displaystyle k\leq {\log _{2}n \over 2}}.
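For the smallest nontrivial case k = 3, the no-draw threshold R2(3,3) = 6 can be verified by brute force over all 2-colorings of the edges (an illustrative Python sketch, feasible only for tiny n):

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """Does this 2-coloring of K_n's edges contain a one-color triangle?"""
    edges = list(combinations(range(n), 2))
    col = dict(zip(edges, coloring))
    return any(col[(a, b)] == col[(a, c)] == col[(b, c)]
               for a, b, c in combinations(range(n), 3))

def draw_possible(n):
    """Is there any 2-coloring of K_n with no monochromatic triangle?"""
    m = n * (n - 1) // 2                      # number of edges of K_n
    return any(not has_mono_triangle(n, c)
               for c in product([0, 1], repeat=m))
```

Here `draw_possible(5)` holds (the pentagon/pentagram coloring avoids monochromatic triangles), while `draw_possible(6)` fails: on 6 or more vertices every coloring contains a monochromatic triangle, so the k = 3 clique game cannot end in a draw.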
On the other hand, the Erdős–Selfridge theorem[1] implies that Breaker wins whenever k≥2log2n{\displaystyle k\geq {2\log _{2}n}}.
Beckimproved these bounds as follows:[2]
Instead of playing on complete graphs, the clique game can also be played on complete hypergraphs of higher orders. For example, in the clique game on triplets, the game-board is the set of triplets of integers 1, ..., n (so its size is (n3){\displaystyle {n \choose 3}}), and the winning-sets are all sets of triplets of k integers (so the size of any winning-set in it is (k3){\displaystyle {k \choose 3}}).
By Ramsey's theorem on triples, if n≥R3(k,k){\displaystyle n\geq R_{3}(k,k)}, Maker wins. The currently known upper bound on R3(k,k){\displaystyle R_{3}(k,k)} is very large, 2k2/6<R3(k,k)<224k−10{\displaystyle 2^{k^{2}/6}<R_{3}(k,k)<2^{2^{4k-10}}}. In contrast, Beck[3] proves that 2k2/6<R3∗(k,k)<k42k3/6{\displaystyle 2^{k^{2}/6}<R_{3}^{*}(k,k)<k^{4}2^{k^{3}/6}}, where R3∗(k,k){\displaystyle R_{3}^{*}(k,k)} is the smallest integer such that Maker has a winning strategy. In particular, if k42k3/6<n{\displaystyle k^{4}2^{k^{3}/6}<n} then the game is Maker's win.
|
https://en.wikipedia.org/wiki/Clique_game
|
In logic, a strict conditional (symbol: ◻{\displaystyle \Box }, or ⥽) is a conditional governed by a modal operator, that is, a logical connective of modal logic. It is logically equivalent to the material conditional of classical logic combined with the necessity operator from modal logic. For any two propositions p and q, the formula p → q says that p materially implies q, while ◻(p→q){\displaystyle \Box (p\rightarrow q)} says that p strictly implies q.[1] Strict conditionals are the result of Clarence Irving Lewis's attempt to find a conditional for logic that can adequately express indicative conditionals in natural language.[2][3] They have also been used in studying Molinist theology.[4]
The strict conditional may avoid paradoxes of material implication. The following statement, for example, is not correctly formalized by material implication: "If Bill Gates graduated in medicine, then Elvis never died."
This condition should clearly be false: the degree of Bill Gates has nothing to do with whether Elvis is still alive. However, the direct encoding of this formula inclassical logicusing material implication leads to:
This formula is true because whenever the antecedent A is false, a formula A → B is true. Hence, this formula is not an adequate translation of the original sentence. An encoding using the strict conditional is: ◻{\displaystyle \Box }(Bill Gates graduated in medicine → Elvis never died).
In modal logic, this formula means (roughly) that, in every possible world in which Bill Gates graduated in medicine, Elvis never died. Since one can easily imagine a world where Bill Gates is a medicine graduate and Elvis is dead, this formula is false. Hence, this formula seems to be a correct translation of the original sentence.
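This possible-worlds reading can be sketched with a toy model in Python, where □φ holds iff φ holds at every world (the worlds, atom names, and universal accessibility are illustrative assumptions, not a full Kripke semantics):

```python
# Each world assigns truth values to the atomic propositions.
worlds = [
    {"gates_md": False, "elvis_alive": False},  # the actual world
    {"gates_md": True,  "elvis_alive": False},  # Gates a physician, Elvis dead
    {"gates_md": True,  "elvis_alive": True},   # Gates a physician, Elvis alive
]

def material(w, p, q):
    """p -> q evaluated at a single world w."""
    return (not w[p]) or w[q]

def strict(p, q):
    """Box(p -> q): the material conditional holds at every world."""
    return all(material(w, p, q) for w in worlds)

actual = worlds[0]
material_holds = material(actual, "gates_md", "elvis_alive")  # vacuously true
strict_holds = strict("gates_md", "elvis_alive")              # false
```

The material conditional is vacuously true at the actual world because its antecedent is false there, but the strict conditional fails: the second world makes the antecedent true and the consequent false, matching the informal argument above.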
Although the strict conditional is much closer to being able to express natural language conditionals than the material conditional, it has its own problems with consequents that are necessarily true (such as 2 + 2 = 4) or antecedents that are necessarily false.[5] The following sentence, for example, is not correctly formalized by a strict conditional: "If Bill Gates graduated in medicine, then 2 + 2 = 4."
Using strict conditionals, this sentence is expressed as: ◻{\displaystyle \Box }(Bill Gates graduated in medicine → 2 + 2 = 4).
In modal logic, this formula means that, in every possible world where Bill Gates graduated in medicine, it holds that 2 + 2 = 4. Since 2 + 2 is equal to 4 in all possible worlds, this formula is true, although it does not seem that the original sentence should be. A similar situation arises with 2 + 2 = 5, which is necessarily false:
Some logicians view this situation as indicating that the strict conditional is still unsatisfactory. Others have noted that the strict conditional cannot adequately express counterfactual conditionals,[6] and that it does not satisfy certain logical properties.[7] In particular, the strict conditional is transitive, while the counterfactual conditional is not.[8]
Some logicians, such as Paul Grice, have used conversational implicature to argue that, despite apparent difficulties, the material conditional is just fine as a translation for the natural language "if ... then ...". Others still have turned to relevance logic to supply a connection between the antecedent and consequent of provable conditionals.
In a constructive setting, the symmetry between ⥽ and ◻{\displaystyle \Box } is broken, and the two connectives can be studied independently. Constructive strict implication can be used to investigate interpretability of Heyting arithmetic and to model arrows and guarded recursion in computer science.[9]
|
https://en.wikipedia.org/wiki/Strict_conditional
|
This is a list of notable tools for static program analysis (program analysis is a synonym for code analysis).
Tools that use a sound, i.e. over-approximating a rigorous model, formal methods approach to static analysis (e.g., using static program assertions). Sound methods produce no false negatives for bug-free programs, at least with respect to the idealized mathematical model they are based on (there is no "unconditional" soundness). For buggy programs there is no guarantee that they will report all bugs, but they will report at least one.
|
https://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis
|
A fallacy is the use of invalid or otherwise faulty reasoning in the construction of an argument. All forms of human communication can contain fallacies.
Because of their variety, fallacies are challenging to classify. They can be classified by their structure (formal fallacies) or content (informal fallacies). Informal fallacies, the larger group, may then be subdivided into categories such as improper presumption, faulty generalization, error in assigning causation, and relevance, among others.
The use of fallacies is common when the speaker's goal of achieving common agreement is more important to them than utilizing sound reasoning. When fallacies are used, the premise should be recognized as not well-grounded, the conclusion as unproven (but not necessarily false), and the argument as unsound.[1]
A formal fallacy is an error in the argument's form.[2] All formal fallacies are types of non sequitur.
A propositional fallacy is an error that concerns compound propositions. For a compound proposition to be true, the truth values of its constituent parts must satisfy the relevant logical connectives that occur in it (most commonly: [and], [or], [not], [only if], [if and only if]). The following fallacies involve relations whose truth values are not guaranteed and are therefore not guaranteed to yield true conclusions. Types of propositional fallacies:
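Whether a two-variable propositional form is valid can be checked by brute force over truth assignments — for example, modus ponens versus the fallacy of affirming the consequent (an illustrative Python sketch):

```python
from itertools import product

def entails(premises, conclusion):
    """Semantic entailment for formulas over two atoms p, q,
    checked by enumerating all truth assignments."""
    for p, q in product([False, True], repeat=2):
        if all(f(p, q) for f in premises) and not conclusion(p, q):
            return False          # a counterexample assignment exists
    return True

# Modus ponens: from (p -> q) and p, infer q. Valid.
modus_ponens = entails(
    [lambda p, q: (not p) or q, lambda p, q: p],
    lambda p, q: q)

# Affirming the consequent: from (p -> q) and q, infer p. Invalid.
affirming_consequent = entails(
    [lambda p, q: (not p) or q, lambda p, q: q],
    lambda p, q: p)
```

The second check fails on the assignment p = false, q = true, where both premises hold but the conclusion does not, which is exactly why affirming the consequent is a fallacy.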
A quantification fallacy is an error in logic where the quantifiers of the premises are in contradiction to the quantifier of the conclusion. Types of quantification fallacies:
Syllogistic fallacies – logical fallacies that occur in syllogisms.
Informal fallacies – arguments that are logically unsound for lack of well-grounded premises.[14]
Faulty generalization – reaching a conclusion from weak premises.
Questionable causeis a general type of error with many variants. Its primary basis is the confusion of association with causation, either by inappropriately deducing (or rejecting) causation or a broader failure to properly investigate the cause of an observed effect.
A red herring fallacy, one of the main subtypes of fallacies of relevance, is an error in logic where a proposition is, or is intended to be, misleading in order to make irrelevant or false inferences. This includes any logical inference based on fake arguments, intended to replace the lack of real arguments or to replace implicitly the subject of the discussion.[70][71]
Red herring – introducing a second argument in response to the first argument that is irrelevant and draws attention away from the original topic (e.g.: saying "If you want to complain about the dishes I leave in the sink, what about the dirty clothes you leave in the bathroom?").[72] In a jury trial, it is known as a Chewbacca defense. In political strategy, it is called a dead cat strategy. See also irrelevant conclusion.
|
https://en.wikipedia.org/wiki/List_of_fallacies
|
In computing,defense strategyis a concept and practice used by computer designers, users, and IT personnel to reducecomputer securityrisks.[1]
Boundary protection employs security measures and devices to prevent unauthorized access to computer systems (referred to as controlling the system border). The approach is based on the assumption that the attacker did not penetrate the system. Examples of this strategy include using gateways, routers, firewalls, and password checks, deleting suspicious emails/messages, and limiting physical access.
Boundary protection is typically the main strategy for computing systems; if this type of defense is successful, no other strategies are required. This is a resource-consuming strategy with a known scope. External information system monitoring is part of boundary protection.[2]
Information system monitoring employs security measures to find intruders or the damage done by them. This strategy is used when the system has been penetrated, but the intruder did not gain full control. Examples of this strategy include antivirus software, applying a patch, and network behavior anomaly detection.
This strategy's success is based on the competition between offence and defence. It is a time and resource-consuming strategy, affecting performance. The scope is variable in time. It cannot be fully successful if not supported by other strategies.
Unavoidable actions employ security measures that cannot be prevented or neutralized. This strategy is based on the assumption that the system has been penetrated, but an intruder cannot prevent the defensive mechanism from being employed. Examples of this strategy include rebooting, using physical unclonable functions, and using a security switch.
Secure enclave is a strategy that employs security measures that prevent access to some parts of the system. This strategy is used when the system has been penetrated, but an intruder cannot access its special parts. Examples of this strategy include using access levels, using a Trusted Platform Module, using a microkernel, using a diode (unidirectional network device), and using air gaps.
This is a supporting strategy for boundary protection, information system monitoring and unavoidable action strategies. This is a time and resource-consuming strategy with a known scope. Even if this strategy is fully successful, it does not guarantee the overall success of the larger defense strategy.
False target is a strategy that deploys non-real targets for an intruder. It is used when the system has been penetrated, but the intruder does not know the system architecture. Examples of this strategy include honeypots, virtual computers, virtual security switches, fake files, and address/password copies.
This is a supporting strategy for information system monitoring. It is a time-consuming strategy, and the scope is determined by the designer. It cannot be fully successful if not supported by other strategies.
Moving target is a security strategy based on frequent changes of data and processes. This strategy is based on the assumption that the system has been penetrated, but the intruder does not know the architecture of the system and its processes. Examples of this strategy are regular changes of passwords or keys (cryptography), using a dynamic platform, etc.
This is a supporting strategy for information system monitoring. It is a time-consuming strategy, and the scope is determined by the designer. It cannot be fully successful if not supported by other strategies. Actions are activated on a scheduled basis or as a response to a detected threat.
Useless information comprises security measures to turn important information into useless data for an intruder. The strategy is based on the assumption that the system has been penetrated, but the intruder is not able to decrypt the information, or does not have enough time to decrypt it. For example, encrypting the file system or using encryption software can render the data useless even if an attacker gains access to the file system, and data masking can hide sensitive data within non-sensitive data with modified content.
This is a supporting strategy for information system monitoring. It is a time and resource-consuming strategy, affecting performance. The scope is known. It cannot be successful if not supported by other strategies. Claude Shannon's theorems show that if the encryption key is smaller than the secured information, information-theoretic security cannot be achieved. There is only one known unbreakable cryptographic system: the one-time pad. This strategy is not generally practical because of the difficulties involved in exchanging one-time pads without the risk of being compromised. Other cryptographic systems only buy time or can be broken (see Cryptographic hash function § Degree of difficulty). This strategy needs to be supported by the moving target or deletion strategies.
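A one-time pad itself is a single XOR of the message with a fresh random key at least as long as the message; the same operation both encrypts and decrypts. A minimal Python sketch:

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """XOR one-time pad. The key must be at least as long as the data,
    truly random, and never reused; under those conditions the ciphertext
    is information-theoretically secure."""
    if len(key) < len(data):
        raise ValueError("one-time pad key must cover the whole message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # fresh random key per message
ciphertext = otp(message, key)
recovered = otp(ciphertext, key)          # XOR with the same key restores it
```

The practical difficulty noted above is visible in the sketch: the key is as long as the message and must be shared in advance over a secure channel, which is exactly the pad-exchange problem.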
Deletion is a strategy using security measures to prevent an intruder from gaining sensitive information at all costs. The strategy is based on the assumption that the damage from information disclosure would be greater than the damage caused by deleting the information or disabling the system required to gain access to the information. The strategy is part of the data-centric security approach. Examples of this strategy include information deletion as a response to a security violation (such as unauthorized access attempts) and password resets.
This is a supporting strategy for information system monitoring. It is a resource-consuming strategy, and the scope is determined by the designer. It cannot be fully successful on its own since the detected intrusion is not quarantined.
Information redundancy is a strategy that applies security measures to keep redundant copies of information and use them in case of damage. The strategy is based on the assumption that finding and repairing the damage is more complicated than restoring the system. Examples of this strategy include using system restoration, keeping backup files, and using a backup computer.
This is a supporting strategy for information system monitoring. This strategy consumes considerable resources, and the scope is known. It can be fully successful within its scope.
Limiting of actions made by a robot is a strategy performing security measures to limit the actions of a robot (software bot). The strategy is based on the assumption that a robot can take actions, or create damage, that a human cannot. Examples of this strategy include using anti-spam techniques, using CAPTCHA and other human presence detection techniques, and using DOS-based defense (protection from denial-of-service attacks).
This is a supporting strategy for boundary protection and information system monitoring. It is a time and resource-consuming strategy, and the scope is determined by the designer. This strategy cannot be fully successful on its own.
Active defense is a strategy performing security measures that attack potential intruders. The strategy is based on the assumption that a potential intruder under attack has fewer abilities. Examples of this strategy include creating and using lists of trusted networks, devices, and applications, blocking untrusted addresses, and vendor management.
This is a supporting strategy for boundary protection and information system monitoring. It is a time and resource-consuming strategy, and the scope is determined by the designer. This strategy cannot be fully successful on its own.
This strategy can support any other strategy.[3][4][5][6] This is a resource-consuming strategy, and the scope is determined by the designer. An implementation may have a wide impact on devices.[7] This strategy can be fully successful, but in most cases, there is a trade-off of full system functionality for security. This strategy can be used proactively or reactively. Actions done in response to an already detected problem may be too late.[8] Any implementation needs to be supported by the secure enclave strategy in order to prevent neutralizing action by unauthorized access to the protection mechanism.
Actions can be of the following types:
|
https://en.wikipedia.org/wiki/Defense_strategy_(computing)
|
Test automation management tools are specific tools that provide a collaborative environment intended to make test automation efficient, traceable and clear for stakeholders. Test automation is becoming a cross-discipline (i.e., a mix of both testing and development practices).
Test automation systems usually need more reporting, analysis and meaningful information about project status. Test management systems target manual effort and do not give all the required information.[1]
Test automation management systems leverage automation efforts towards efficient and continuous processes of delivering test execution and new working tests by:
Test automation management tools fit Agile systems development life cycle methodologies. In most cases, test automation covers continuous changes to minimize manual regression testing. Changes are usually noted by monitoring test log diffs. For example, differences in the number of failures signal probable changes in the application under test (AUT), in the test code (a broken test code base, instabilities), or in both. Quick notice of changes and a unified workflow of results analysis reduce testing costs and increase project quality.
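The log-diff monitoring idea can be sketched in a few lines; the suite names and failure counts below are hypothetical:

```python
def failure_diff(prev_run: dict, curr_run: dict) -> dict:
    """Map each test suite whose failure count changed between two runs
    to a (previous, current) pair. A non-empty result signals a probable
    change in the application under test or in the test code."""
    suites = sorted(set(prev_run) | set(curr_run))
    return {
        s: (prev_run.get(s, 0), curr_run.get(s, 0))
        for s in suites
        if prev_run.get(s, 0) != curr_run.get(s, 0)
    }

# Hypothetical nightly summaries: suite name -> number of failing tests.
monday = {"login": 0, "checkout": 2, "search": 1}
tuesday = {"login": 3, "checkout": 2, "search": 0}
assert failure_diff(monday, tuesday) == {"login": (0, 3), "search": (1, 0)}
```

A real tool would parse such summaries out of test runner logs; the comparison step itself is this simple.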
Test-driven development utilizes test automation as the primary driver of rapid, high-quality software production. Concepts such as the green line and thoughtful design are supported with tests written before actual coding, assuming there are special tools to track and analyze them within the TDD process.
Another test automation practice[2] is continuous integration, which explicitly supposes automated test suites as a final stage upon building, deploying and distributing new versions of software. Based on acceptance of test results, a build is declared either qualified for further testing or rejected.[3] Dashboards provide relevant information on all stages of software development, including test results. However, dashboards do not support comprehensive operations and views for an automation engineer. This is another reason for dedicated management tools that can supply high-level data to other project management tools such as test management, issue management and change management.
|
https://en.wikipedia.org/wiki/Test_automation_management_tools
|
VisSim is a visual block diagram program for the simulation of dynamical systems and model-based design of embedded systems, with its own visual language. It is developed by Visual Solutions of Westford, Massachusetts. Visual Solutions was acquired by Altair in August 2014 and its products have been rebranded as Altair Embed, part of Altair's Model Based Development Suite. With Embed, virtual prototypes of dynamic systems can be developed. Models are built by sliding blocks into the work area and wiring them together with the mouse. Embed automatically converts the control diagrams into C code ready to be downloaded to the target hardware.
VisSim (now Altair Embed) uses a graphical data flow paradigm to implement dynamic systems based on differential equations. Version 8 adds interactive UML OMG2-compliant state chart graphs that are placed in VisSim diagrams, which allows the modelling of state-based systems such as startup sequencing of process plants or serial protocol decoding.
VisSim/Altair Embed is used in control system design and digital signal processing for multi-domain simulation and design.[1] It includes blocks for arithmetic, Boolean, and transcendental functions, as well as digital filters, transfer functions, numerical integration and interactive plotting.[2] The most commonly modelled systems are aeronautical, biological/medical, digital power, electric motor, electrical, hydraulic, mechanical, process, thermal/HVAC and econometric.[1]
A read-only version of the software, VisSim Viewer, is available free of charge and provides a way for people who do not own a VisSim license to run VisSim models.[3] This program is intended to allow models to be more widely shared while preserving each model in its published form.[3] The viewer can execute any VisSim model; only changes to block and simulation parameters, to illustrate different design scenarios, are allowed. Sliders and buttons may be activated if included in the model.
The "VisSim/C-Code" add-on generates ANSI C code for the model, as well as target-specific code for on-chip devices like PWM, ADC, encoder, GPIO, I2C etc. This is useful for development of embedded systems. After the behaviour of the controller has been simulated, C code can be generated, compiled and run on the target. For debugging, VisSim supports an interactive JTAG linkage, called "Hotlink", that allows interactive gain changes and plotting of on-target variables. The VisSim-generated code has been called efficient and readable, making it well suited for development of embedded systems.[4] VisSim's author served on the X3J11 ANSI C committee and wrote several C compilers, in addition to co-authoring a book on C.[5] This deep understanding of ANSI C, and of the nature of the resulting machine code when compiled, is the key to the code generator's efficiency. VisSim can target small 16-bit fixed-point systems like the Texas Instruments MSP430, using only 740 bytes of flash and 64 bytes of RAM for a small closed-loop pulse-width modulation (PWM) actuated system, and it allows very high control sample rates, over 500 kHz, on larger 32-bit floating-point processors like the Texas Instruments 150 MHz F28335.
The technique of simulating system performance off-line and then generating code from the simulation is known as "model-based development". Model-based development for embedded systems is becoming widely adopted for production systems because it shortens development cycles for hardware development in the same way that model-driven architecture shortens production cycles for software development.[6]
Model building is a visual way of describing a situation. In an engineering context, instead of writing and solving a system of equations, model building involves using visual "blocks" to solve the problem. The advantage of using models is that some problems which appear difficult when expressed mathematically may be easier to understand when represented pictorially.
VisSim uses a hierarchical composition to create nested block diagrams. A typical model would consist of "virtual plants" composed of various VisSim "layers", combined if necessary with custom blocks written in C or FORTRAN. A virtual controller can be added and tuned to give the desired overall system response. Graphical control elements such as sliders and buttons allow control of what-if analysis for operator training or controller tuning.
Although VisSim was originally designed for use by control engineers, it can be used for any type of mathematical model.
Screenshots show the simulation of a sine function in VisSim. Noise is added to the model, then filtered out using a Butterworth filter. The signal traces of the sine function with noise and filtered noise are first shown together, and then shown in separate windows in the plot block.
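The signal chain in those screenshots — a sine wave, added noise, and a low-pass filter — is easy to sketch outside VisSim. The plain-Python sketch below uses a simple first-order IIR low-pass as a stand-in for the Butterworth block (a real Butterworth design would use dedicated filter-design routines):

```python
import math
import random

def lowpass(signal, alpha=0.1):
    """First-order IIR low-pass, y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
    A stand-in for the Butterworth filter block, not an implementation of it."""
    out, y = [], signal[0]
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

random.seed(0)
n = 1000
clean = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]  # 5-cycle sine
noisy = [s + random.gauss(0, 0.3) for s in clean]              # add noise
filtered = lowpass(noisy)

# The filter should bring the noisy trace closer to the clean sine.
err = lambda sig: sum((a - b) ** 2 for a, b in zip(sig, clean)) / n
assert err(filtered) < err(noisy)
```

The three traces (`clean`, `noisy`, `filtered`) correspond to the plot windows described in the text.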
|
https://en.wikipedia.org/wiki/VisSim
|
A schema (pl.: schemata) is a template in computer science used in the field of genetic algorithms that identifies a subset of strings with similarities at certain string positions. Schemata are a special case of cylinder sets, forming a basis for a product topology on strings.[1] In other words, schemata can be used to generate a topology on a space of strings.
For example, consider binary strings of length 6. The schema 1**0*1 describes the set of all words of length 6 with 1's at the first and sixth positions and a 0 at the fourth position. The * is a wildcard symbol, which means that positions 2, 3 and 5 can have a value of either 1 or 0. The order of a schema is defined as the number of fixed positions in the template, while the defining length δ(H) is the distance between the first and last specific positions. The order of 1**0*1 is 3 and its defining length is 5. The fitness of a schema is the average fitness of all strings matching the schema. The fitness of a string is a measure of the value of the encoded problem solution, as computed by a problem-specific evaluation function.
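The order and defining length just defined can be computed mechanically. A small Python sketch (positions are counted from zero here, so the defining length is a difference of indices):

```python
def order(schema: str) -> int:
    """Number of fixed (non-wildcard) positions in the schema."""
    return sum(1 for c in schema if c != "*")

def defining_length(schema: str) -> int:
    """Distance between the first and last fixed positions
    (0 if the schema has no fixed position at all)."""
    fixed = [i for i, c in enumerate(schema) if c != "*"]
    return fixed[-1] - fixed[0] if fixed else 0

assert order("1**0*1") == 3            # fixed at the 1st, 4th and 6th positions
assert defining_length("1**0*1") == 5  # the 1st and 6th positions are 5 apart
```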
The length of a schema H, called N(H), is defined as the total number of nodes in the schema. N(H) is also equal to the number of nodes in the programs matching H.[2]
If the child of an individual that matches schema H does not itself match H, the schema is said to have been disrupted.[2]
In evolutionary computing such as genetic algorithms and genetic programming, propagation refers to the inheritance of characteristics of one generation by the next. For example, a schema is propagated if individuals in the current generation match it and so do those in the next generation. Those in the next generation may be (but do not have to be) children of parents who matched it.
Recently, schemata have been studied using order theory.[3]
Two basic operators are defined for schemata: expansion and compression. Expansion maps a schema onto the set of words it represents, while compression maps a set of words onto a schema.
In the following definitions, Σ denotes an alphabet, Σ^l denotes all words of length l over the alphabet Σ, and Σ_* denotes the alphabet Σ with the extra symbol *. Σ_*^l denotes all schemata of length l over the alphabet Σ_*, as well as the empty schema ε_*.
For any schema s ∈ Σ_*^l, the expansion of s is the operator ↑s, which maps s to a subset of words in Σ^l:
↑s := { b ∈ Σ^l | b_i = s_i or s_i = * for each i ∈ {1, ..., l} }
Here the subscript i denotes the character at position i in a word or schema. When s = ε_*, then ↑s = ∅. More simply put, ↑s is the set of all words in Σ^l that can be made by exchanging the * symbols in s with symbols from Σ. For example, if Σ = {0, 1}, l = 3 and s = 10*, then ↑s = {100, 101}.
Conversely, for any A ⊆ Σ^l, we define ↓A, called the compression of A, which maps A onto a schema s ∈ Σ_*^l: ↓A := s, where s is a schema of length l such that the symbol at position i in s is determined in the following way: if x_i = y_i for all x, y ∈ A, then s_i = x_i; otherwise s_i = *. If A = ∅, then ↓A = ε_*. One can think of this operator as stacking up all the items in A: if all elements in a column are equivalent, the symbol at that position in s takes this value; otherwise there is a wildcard symbol. For example, let A = {100, 000, 010}; then ↓A = **0.
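The two operators can be implemented directly. A minimal Python sketch, assuming the binary alphabet of the examples and using None to stand for the empty schema ε_*:

```python
from itertools import product

SIGMA = ["0", "1"]  # assumed binary alphabet, as in the examples above

def expand(s):
    """The ↑ operator: the set of words a schema represents."""
    if s is None:                  # the empty schema ε* expands to ∅
        return set()
    choices = [SIGMA if c == "*" else [c] for c in s]
    return {"".join(w) for w in product(*choices)}

def compress(A):
    """The ↓ operator: the most specific schema matching every word in A."""
    words = list(A)
    if not words:
        return None                # ↓∅ = ε*
    return "".join(
        col[0] if len(set(col)) == 1 else "*" for col in zip(*words)
    )

assert expand("10*") == {"100", "101"}
assert compress({"100", "000", "010"}) == "**0"
assert compress(expand("10*")) == "10*"  # ↓ then ↑ recovers the schema
```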
Schemata can be partially ordered. For any a, b ∈ Σ_*^l we say a ≤ b if and only if ↑a ⊆ ↑b. It follows that ≤ is a partial ordering on a set of schemata, from the reflexivity, antisymmetry and transitivity of the subset relation. For example, ε_* ≤ 11 ≤ 1* ≤ **.
This is because ↑ε_* ⊆ ↑11 ⊆ ↑1* ⊆ ↑**, that is, ∅ ⊆ {11} ⊆ {11, 10} ⊆ {11, 10, 01, 00}.
The compression and expansion operators form a Galois connection, where ↓ is the lower adjoint and ↑ the upper adjoint.[3]
For a set A ⊆ Σ^l, we call the process of calculating the compression on each subset of A, that is { ↓X | X ⊆ A }, the schematic completion of A, denoted S(A).[3]
For example, let A = {110, 100, 001, 000}. The schematic completion of A results in the following set: S(A) = {001, 100, 000, 110, 00*, *00, 1*0, **0, *0*, ***, ε_*}.
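The schematic completion can be computed by brute force — compress every subset of A (exponential in |A|, but fine for small examples). A self-contained sketch, again using None for ε_*:

```python
from itertools import chain, combinations

def compress(words):
    """The ↓ operator: the most specific schema matching every word in the
    set; None stands for the empty schema ε*."""
    words = list(words)
    if not words:
        return None
    return "".join(
        col[0] if len(set(col)) == 1 else "*" for col in zip(*words)
    )

def schematic_completion(A):
    """Compress every subset of A (including the empty subset)."""
    A = sorted(A)
    subsets = chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))
    return {compress(sub) for sub in subsets}

S = schematic_completion({"110", "100", "001", "000"})
assert S == {"001", "100", "000", "110", "00*", "*00",
             "1*0", "**0", "*0*", "***", None}
```

The 16 subsets collapse to the 11 schemata listed in the example, since several subsets compress to the same schema (e.g. every subset containing both 110 and 001 compresses to ***).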
The poset (S(A), ≤) always forms a complete lattice called the schematic lattice.
The schematic lattice is similar to the concept lattice found in formal concept analysis.
|
https://en.wikipedia.org/wiki/Schema_(genetic_algorithms)
|
In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers. A field is thus a fundamental algebraic structure which is widely used in algebra, number theory, and many other areas of mathematics.
The best known fields are the field of rational numbers, the field of real numbers and the field of complex numbers. Many other fields, such as fields of rational functions, algebraic function fields, algebraic number fields, and p-adic fields, are commonly used and studied in mathematics, particularly in number theory and algebraic geometry. Most cryptographic protocols rely on finite fields, i.e., fields with finitely many elements.
The theory of fields proves that angle trisection and squaring the circle cannot be done with a compass and straightedge. Galois theory, devoted to understanding the symmetries of field extensions, provides an elegant proof of the Abel–Ruffini theorem that general quintic equations cannot be solved in radicals.
Fields serve as foundational notions in several mathematical domains. This includes different branches of mathematical analysis, which are based on fields with additional structure. Basic theorems in analysis hinge on the structural properties of the field of real numbers. Most importantly for algebraic purposes, any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. Number fields, the siblings of the field of rational numbers, are studied in depth in number theory. Function fields can help describe properties of geometric objects.
Informally, a field is a set, along with two operations defined on that set: an addition operation a + b and a multiplication operation a ⋅ b, both of which behave similarly as they do for rational numbers and real numbers. This includes the existence of an additive inverse −a for all elements a and of a multiplicative inverse b⁻¹ for every nonzero element b. This allows the definition of the so-called inverse operations, subtraction a − b and division a/b, as a − b = a + (−b) and a/b = a ⋅ b⁻¹.
Often the product a ⋅ b is represented by juxtaposition, as ab.
Formally, a field is a set F together with two binary operations on F called addition and multiplication.[1] A binary operation on F is a mapping F × F → F, that is, a correspondence that associates with each ordered pair of elements of F a uniquely determined element of F.[2][3] The result of the addition of a and b is called the sum of a and b, and is denoted a + b. Similarly, the result of the multiplication of a and b is called the product of a and b, and is denoted a ⋅ b. These operations are required to satisfy the following properties, referred to as field axioms.
These axioms are required to hold for all elements a, b, c of the field F:
An equivalent, and more succinct, definition is: a field has two commutative operations, called addition and multiplication; it is a group under addition with 0 as the additive identity; the nonzero elements form a group under multiplication with 1 as the multiplicative identity; and multiplication distributes over addition.
Even more succinctly: a field is a commutative ring where 0 ≠ 1 and all nonzero elements are invertible under multiplication.
Fields can also be defined in different, but equivalent ways. One can alternatively define a field by four binary operations (addition, subtraction, multiplication, and division) and their required properties. Division by zero is, by definition, excluded.[4] In order to avoid existential quantifiers, fields can be defined by two binary operations (addition and multiplication), two unary operations (yielding the additive and multiplicative inverses respectively), and two nullary operations (the constants 0 and 1). These operations are then subject to the conditions above. Avoiding existential quantifiers is important in constructive mathematics and computing.[5] One may equivalently define a field by the same two binary operations, one unary operation (the multiplicative inverse), and two (not necessarily distinct) constants 1 and −1, since 0 = 1 + (−1) and −a = (−1)a.[a]
Rational numbers were widely used long before the elaboration of the concept of field.
They are numbers that can be written as fractions a/b, where a and b are integers, and b ≠ 0. The additive inverse of such a fraction is −a/b, and the multiplicative inverse (provided that a ≠ 0) is b/a, which can be seen as follows:
The abstractly required field axioms reduce to standard properties of rational numbers. For example, the law of distributivity can be proven as follows:[6]
The real numbers R, with the usual operations of addition and multiplication, also form a field. The complex numbers C consist of expressions a + bi, with a and b real numbers,
where i is the imaginary unit, i.e., a (non-real) number satisfying i² = −1.
Addition and multiplication of real numbers are defined in such a way that expressions of this type satisfy all field axioms, which thus hold for C. For example, the distributive law enforces
It is immediate that this is again an expression of the above type, and so the complex numbers form a field. Complex numbers can be geometrically represented as points in the plane, with Cartesian coordinates given by the real numbers of their describing expression, or as the arrows from the origin to these points, specified by their length and an angle enclosed with some distinct direction. Addition then corresponds to combining the arrows to the intuitive parallelogram (adding the Cartesian coordinates), and the multiplication is – less intuitively – combining rotating and scaling of the arrows (adding the angles and multiplying the lengths). The fields of real and complex numbers are used throughout mathematics, physics, engineering, statistics, and many other scientific disciplines.
In antiquity, several geometric problems concerned the (in)feasibility of constructing certain numbers with compass and straightedge. For example, it was unknown to the Greeks that it is, in general, impossible to trisect a given angle in this way. These problems can be settled using the field of constructible numbers.[7] Real constructible numbers are, by definition, lengths of line segments that can be constructed from the points 0 and 1 in finitely many steps using only compass and straightedge. These numbers, endowed with the field operations of real numbers, restricted to the constructible numbers, form a field, which properly includes the field Q of rational numbers. The illustration shows the construction of square roots of constructible numbers, not necessarily contained within Q. Using the labeling in the illustration, construct the segments AB, BD, and a semicircle over AD (center at the midpoint C), which intersects the perpendicular line through B in a point F, at a distance of exactly h = √p from B when BD has length one.
Not all real numbers are constructible. It can be shown that ∛2 is not a constructible number, which implies that it is impossible to construct with compass and straightedge the length of the side of a cube with volume 2, another problem posed by the ancient Greeks.
In addition to familiar number systems such as the rationals, there are other, less immediate examples of fields. The following example is a field consisting of four elements called O, I, A, and B. The notation is chosen such that O plays the role of the additive identity element (denoted 0 in the axioms above), and I is the multiplicative identity (denoted 1 in the axioms above). The field axioms can be verified by using some more field theory, or by direct computation. For example,
This field is called a finite field or Galois field with four elements, and is denoted F4 or GF(4).[8] The subset consisting of O and I (highlighted in red in the tables at the right) is also a field, known as the binary field F2 or GF(2).
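The "direct computation" check of the axioms is easy to mechanize. The sketch below assumes one standard construction of GF(4) — polynomials over F2 reduced modulo the irreducible polynomial x² + x + 1, encoded as 2-bit integers — rather than the article's O/I/A/B tables, which are not reproduced here (O and I correspond to 0 and 1):

```python
def gf4_add(a: int, b: int) -> int:
    """Addition in GF(4): coefficient-wise addition mod 2, i.e. XOR."""
    return a ^ b

def gf4_mul(a: int, b: int) -> int:
    """Multiplication in GF(4): polynomial product over GF(2),
    reduced modulo x^2 + x + 1."""
    p = 0
    for i in range(2):          # schoolbook polynomial multiplication
        if (b >> i) & 1:
            p ^= a << i
    if p & 0b100:               # reduce using x^2 = x + 1
        p ^= 0b111
    return p

els = range(4)
# Every nonzero element has a multiplicative inverse:
assert all(any(gf4_mul(a, b) == 1 for b in els) for a in els if a != 0)
# Multiplication distributes over addition:
assert all(
    gf4_mul(a, gf4_add(b, c)) == gf4_add(gf4_mul(a, b), gf4_mul(a, c))
    for a in els for b in els for c in els
)
```

Restricting both operations to {0, 1} reproduces the binary field F2, mirroring the subset observation in the text.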
In this section, F denotes an arbitrary field and a and b are arbitrary elements of F.
One has a ⋅ 0 = 0 and −a = (−1) ⋅ a. In particular, one may deduce the additive inverse of every element as soon as one knows −1.[9]
If ab = 0, then a or b must be 0, since, if a ≠ 0, then b = (a⁻¹a)b = a⁻¹(ab) = a⁻¹ ⋅ 0 = 0. This means that every field is an integral domain.
In addition, the following properties are true for any elements a and b:
The axioms of a field F imply that it is an abelian group under addition. This group is called the additive group of the field, and is sometimes denoted by (F, +) when denoting it simply as F could be confusing.
Similarly, the nonzero elements of F form an abelian group under multiplication, called the multiplicative group, and denoted by (F ∖ {0}, ⋅) or just F ∖ {0}, or F×.
A field may thus be defined as a set F equipped with two operations denoted as an addition and a multiplication such that F is an abelian group under addition, F ∖ {0} is an abelian group under multiplication (where 0 is the identity element of the addition), and multiplication is distributive over addition.[b] Some elementary statements about fields can therefore be obtained by applying general facts of groups. For example, the additive and multiplicative inverses −a and a⁻¹ are uniquely determined by a.
The requirement 1 ≠ 0 is imposed by convention to exclude the trivial ring, which consists of a single element; this guides any choice of the axioms that define fields.
Every finite subgroup of the multiplicative group of a field is cyclic (see Root of unity § Cyclic groups).
In addition to the multiplication of two elements of F, it is possible to define the product n ⋅ a of an arbitrary element a of F by a positive integer n to be the n-fold sum a + a + ⋯ + a (which is an element of F).
If there is no positive integer n such that n ⋅ 1 = 0, then F is said to have characteristic 0.[11] For example, the field of rational numbers Q has characteristic 0, since no positive integer multiple of 1 is zero. Otherwise, if there is a positive integer n satisfying this equation, the smallest such positive integer can be shown to be a prime number. It is usually denoted by p, and the field is then said to have characteristic p.
For example, the field F4 has characteristic 2 since (in the notation of the above addition table) I + I = O.
If F has characteristic p, then p ⋅ a = 0 for all a in F. This implies that (a + b)^p = a^p + b^p, since all other binomial coefficients appearing in the binomial formula are divisible by p. Here, a^p := a ⋅ a ⋅ ⋯ ⋅ a (p factors) is the pth power, i.e., the p-fold product of the element a. Therefore, the Frobenius map F → F, x ↦ x^p, is compatible with the addition in F (and also with the multiplication), and is therefore a field homomorphism.[12] The existence of this homomorphism makes fields in characteristic p quite different from fields of characteristic 0.
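For the prime fields Fp (introduced below), where arithmetic is just integer arithmetic modulo p, the additivity of the Frobenius map — the identity (a + b)^p = a^p + b^p — can be checked by brute force:

```python
def frobenius(x: int, p: int) -> int:
    """The Frobenius map x -> x^p in the prime field F_p
    (elements represented as integers mod p)."""
    return pow(x, p, p)

# (a + b)^p = a^p + b^p holds for every pair of elements, for each prime p.
for p in (2, 3, 5, 7, 11):
    for a in range(p):
        for b in range(p):
            assert frobenius((a + b) % p, p) == (frobenius(a, p) + frobenius(b, p)) % p
```

In Fp the Frobenius map is in fact the identity (Fermat's little theorem); in larger fields of characteristic p it is a nontrivial homomorphism.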
A subfield E of a field F is a subset of F that is a field with respect to the field operations of F. Equivalently, E is a subset of F that contains 1 and is closed under addition, multiplication, additive inverses and multiplicative inverses of nonzero elements. This means that 1 ∈ E, that for all a, b ∈ E both a + b and a ⋅ b are in E, and that for all a ≠ 0 in E, both −a and 1/a are in E.
Field homomorphisms are maps φ: E → F between two fields such that φ(e₁ + e₂) = φ(e₁) + φ(e₂), φ(e₁e₂) = φ(e₁)φ(e₂), and φ(1_E) = 1_F, where e₁ and e₂ are arbitrary elements of E. All field homomorphisms are injective.[13] If φ is also surjective, it is called an isomorphism (or the fields E and F are called isomorphic).
A field is called a prime field if it has no proper (i.e., strictly smaller) subfields. Any field F contains a prime field. If the characteristic of F is p (a prime number), the prime field is isomorphic to the finite field Fp introduced below. Otherwise the prime field is isomorphic to Q.[14]
Finite fields (also called Galois fields) are fields with finitely many elements, whose number is also referred to as the order of the field. The above introductory example F4 is a field with four elements. Its subfield F2 is the smallest field, because by definition a field has at least two distinct elements, 0 and 1.
The simplest finite fields, with prime order, are most directly accessible using modular arithmetic. For a fixed positive integer n, arithmetic "modulo n" means to work with the numbers 0, 1, ..., n − 1.
The addition and multiplication on this set are done by performing the operation in question in the set Z of integers, dividing by n and taking the remainder as result. This construction yields a field precisely if n is a prime number. For example, taking the prime n = 2 results in the above-mentioned field F2. For n = 4 and, more generally, for any composite number (i.e., any number n which can be expressed as a product n = r ⋅ s of two strictly smaller natural numbers), Z/nZ is not a field: the product of two non-zero elements is zero since r ⋅ s = 0 in Z/nZ, which, as was explained above, prevents Z/nZ from being a field. The field Z/pZ with p elements (p being prime) constructed in this way is usually denoted by Fp.
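A minimal sketch of this dichotomy: Z/nZ is a field exactly when every nonzero residue has a multiplicative inverse, which a brute-force check confirms for prime n and refutes for composite n:

```python
from math import gcd

def units(n: int) -> list[int]:
    """Nonzero residues of Z/nZ that have a multiplicative inverse,
    i.e. those coprime to n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

def is_field(n: int) -> bool:
    """Z/nZ is a field exactly when every nonzero element is invertible."""
    return n > 1 and len(units(n)) == n - 1

assert is_field(2) and is_field(3) and is_field(7)   # prime moduli
assert not is_field(4) and not is_field(6)           # composite moduli
assert 2 * 2 % 4 == 0   # a zero divisor in Z/4Z: r = s = 2 gives r*s = 0
```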
Every finite field F has q = p^n elements, where p is prime and n ≥ 1. This statement holds since F may be viewed as a vector space over its prime field. The dimension of this vector space is necessarily finite, say n, which implies the asserted statement.[15]
A field with q = p^n elements can be constructed as the splitting field of the polynomial f(x) = x^q − x. Such a splitting field is an extension of Fp in which the polynomial f has q zeros. This means f has as many zeros as possible since the degree of f is q. For q = 2² = 4, it can be checked case by case using the above multiplication table that all four elements of F4 satisfy the equation x⁴ = x, so they are zeros of f. By contrast, in F2, f has only two zeros (namely 0 and 1), so f does not split into linear factors in this smaller field. Elaborating further on basic field-theoretic notions, it can be shown that two finite fields with the same order are isomorphic.[16] It is thus customary to speak of the finite field with q elements, denoted by Fq or GF(q).
Historically, three algebraic disciplines led to the concept of a field: the question of solving polynomial equations, algebraic number theory, and algebraic geometry.[17] A first step towards the notion of a field was made in 1770 by Joseph-Louis Lagrange, who observed that permuting the zeros x₁, x₂, x₃ of a cubic polynomial in the expression (x₁ + ωx₂ + ω²x₃)³ (with ω being a third root of unity) only yields two values. This way, Lagrange conceptually explained the classical solution method of Scipione del Ferro and François Viète, which proceeds by reducing a cubic equation for an unknown x to a quadratic equation for x³.[18] Together with a similar observation for equations of degree 4, Lagrange thus linked what eventually became the concept of fields and the concept of groups.[19] Vandermonde, also in 1770, and to a fuller extent Carl Friedrich Gauss, in his Disquisitiones Arithmeticae (1801), studied the equation x^p = 1 for a prime p and, again using modern language, the resulting cyclic Galois group. Gauss deduced that a regular p-gon can be constructed if p = 2^(2^k) + 1. Building on Lagrange's work, Paolo Ruffini claimed (1799) that quintic equations (polynomial equations of degree 5) cannot be solved algebraically; however, his arguments were flawed. These gaps were filled by Niels Henrik Abel in 1824.[20] Évariste Galois, in 1832, devised necessary and sufficient criteria for a polynomial equation to be algebraically solvable, thus establishing in effect what is known as Galois theory today. Both Abel and Galois worked with what is today called an algebraic number field, but conceived neither an explicit notion of a field, nor of a group.
In 1871 Richard Dedekind introduced, for a set of real or complex numbers that is closed under the four arithmetic operations, the German word Körper, which means "body" or "corpus" (to suggest an organically closed entity). The English term "field" was introduced by Moore (1893).[21]
By a field we will mean every infinite system of real or complex numbers so closed in itself and perfect that addition, subtraction, multiplication, and division of any two of these numbers again yields a number of the system.
In 1881 Leopold Kronecker defined what he called a domain of rationality, which is a field of rational fractions in modern terms. Kronecker's notion did not cover the field of all algebraic numbers (which is a field in Dedekind's sense), but on the other hand was more abstract than Dedekind's in that it made no specific assumption on the nature of the elements of a field. Kronecker interpreted a field such as Q(π) abstractly as the rational function field Q(X). Examples of transcendental numbers had been known since Joseph Liouville's work in 1844; Charles Hermite (1873) and Ferdinand von Lindemann (1882) later proved the transcendence of e and π, respectively.[23]
The first clear definition of an abstract field is due to Weber (1893).[24] In particular, Heinrich Martin Weber's notion included the field Fp. Giuseppe Veronese (1891) studied the field of formal power series, which led Hensel (1904) to introduce the field of p-adic numbers. Steinitz (1910) synthesized the knowledge of abstract field theory accumulated so far. He axiomatically studied the properties of fields and defined many important field-theoretic concepts. The majority of the theorems mentioned in the sections Galois theory, Constructing fields and Elementary notions can be found in Steinitz's work. Artin & Schreier (1927) linked the notion of orderings in a field, and thus the area of analysis, to purely algebraic properties.[25] Emil Artin redeveloped Galois theory from 1928 through 1942, eliminating the dependency on the primitive element theorem.
A commutative ring is a set that is equipped with an addition and multiplication operation and satisfies all the axioms of a field, except for the existence of multiplicative inverses a⁻¹.[26] For example, the integers Z form a commutative ring, but not a field: the reciprocal of an integer n is not itself an integer, unless n = ±1.
In the hierarchy of algebraic structures, fields can be characterized as the commutative rings R in which every nonzero element is a unit (that is, has a multiplicative inverse). Similarly, fields are the commutative rings with precisely two distinct ideals, (0) and R. Fields are also precisely the commutative rings in which (0) is the only prime ideal.
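This characterization can be tested concretely on the rings Z/n: a residue a is a unit modulo n exactly when gcd(a, n) = 1, so every nonzero residue is invertible precisely when n is prime. A small sketch (the helper name `is_field` is ours, not from the text):

```python
from math import gcd

def is_field(n):
    """Check whether Z/n is a field, i.e. every nonzero residue is a unit."""
    return all(gcd(a, n) == 1 for a in range(1, n))

assert is_field(7) and is_field(11)   # Z/p is a field for prime p
assert not is_field(6)                # 2 and 3 are nonzero but not invertible mod 6
```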
Given a commutative ring R, there are two ways to construct a field related to R, i.e., two ways of modifying R such that all nonzero elements become invertible: forming the field of fractions, and forming residue fields. The field of fractions of Z is Q, the rationals, while the residue fields of Z are the finite fields Fp.
Given an integral domain R, its field of fractions Q(R) is built with the fractions of two elements of R exactly as Q is constructed from the integers. More precisely, the elements of Q(R) are the fractions a/b where a and b are in R, and b ≠ 0. Two fractions a/b and c/d are equal if and only if ad = bc. The operations on fractions work exactly as for rational numbers. For example,
It is straightforward to show that, if the ring is an integral domain, the set of fractions forms a field.[27]
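For R = Z this construction is exactly the rational numbers, and Python's standard `fractions.Fraction` type implements it, including the equality rule a/b = c/d iff ad = bc (via reduction to lowest terms):

```python
from fractions import Fraction

# Q as the field of fractions of Z
a = Fraction(2, 3)
b = Fraction(4, 6)
assert a == b                         # 2*6 == 4*3, so the fractions are equal
assert a + Fraction(1, 6) == Fraction(5, 6)
assert a / b == 1                     # every nonzero element is invertible
```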
The field F(x) of the rational fractions over a field (or an integral domain) F is the field of fractions of the polynomial ring F[x]. The field F((x)) of Laurent series
over a field F is the field of fractions of the ring F[[x]] of formal power series (in which k ≥ 0). Since any Laurent series is a fraction of a power series divided by a power of x (as opposed to an arbitrary power series), the representation of fractions is less important in this situation, though.
In addition to the field of fractions, which embeds R injectively into a field, a field can be obtained from a commutative ring R by means of a surjective map onto a field F. Any field obtained in this way is a quotient R/m, where m is a maximal ideal of R. If R has only one maximal ideal m, this field is called the residue field of R.[28]
The ideal generated by a single polynomial f in the polynomial ring R = E[X] (over a field E) is maximal if and only if f is irreducible over E, i.e., if f cannot be expressed as the product of two polynomials in E[X] of smaller degree. This yields a field F = E[X] / (f).
This field F contains an element x (namely the residue class of X) which satisfies the equation f(x) = 0.
For example, C is obtained from R by adjoining the imaginary unit symbol i, which satisfies f(i) = 0, where f(X) = X² + 1. Moreover, f is irreducible over R, which implies that the map that sends a polynomial f(X) ∊ R[X] to f(i) yields an isomorphism
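The quotient R[X]/(X² + 1) can be made concrete by representing each residue class by its remainder a + bX and reducing X² to −1 during multiplication. This is a sketch of that one quotient, not a general quotient-ring implementation:

```python
# Residue classes in R[X]/(X^2 + 1) as pairs (a, b) meaning a + b*X.
def qmul(p, q):
    a, b = p
    c, d = q
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd*X^2  ->  (ac - bd) + (ad + bc)X
    return (a * c - b * d, a * d + b * c)

X = (0.0, 1.0)                       # the residue class of X
assert qmul(X, X) == (-1.0, 0.0)     # X^2 = -1: X behaves like i
# multiplication agrees with complex multiplication, e.g. (1+2i)(3+4i) = -5+10i
assert qmul((1, 2), (3, 4)) == (-5, 10)
```

The pair (a, b) ↦ a + bi is exactly the isomorphism with C mentioned in the text.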
Fields can be constructed inside a given bigger container field. Suppose given a field E, and a field F containing E as a subfield. For any element x of F, there is a smallest subfield of F containing E and x, called the subfield of F generated by x and denoted E(x).[29] The passage from E to E(x) is referred to as adjoining an element to E. More generally, for a subset S ⊂ F, there is a minimal subfield of F containing E and S, denoted by E(S).
The compositum of two subfields E and E′ of some field F is the smallest subfield of F containing both E and E′. The compositum can be used to construct the biggest subfield of F satisfying a certain property, for example the biggest subfield of F which is, in the language introduced below, algebraic over E.[c]
The notion of a subfield E ⊂ F can also be regarded from the opposite point of view, by referring to F being a field extension (or just extension) of E, denoted by F / E,
and read "F over E".
A basic datum of a field extension is its degree [F : E], i.e., the dimension of F as an E-vector space. It satisfies the formula [G : E] = [G : F] · [F : E] for a tower of fields E ⊂ F ⊂ G.[30]
Extensions whose degree is finite are referred to as finite extensions. The extensions C / R and F4 / F2 are of degree 2, whereas R / Q is an infinite extension.
A pivotal notion in the study of field extensions F / E is that of algebraic elements. An element x ∈ F is algebraic over E if it is a root of a polynomial with coefficients in E, that is, if it satisfies a polynomial equation
with en, ..., e0 in E, and en ≠ 0.
For example, the imaginary unit i in C is algebraic over R, and even over Q, since it satisfies the equation i² + 1 = 0.
A field extension in which every element of F is algebraic over E is called an algebraic extension. Any finite extension is necessarily algebraic, as can be deduced from the above multiplicativity formula.[31]
The subfield E(x) generated by an element x, as above, is an algebraic extension of E if and only if x is an algebraic element. That is to say, if x is algebraic, all other elements of E(x) are necessarily algebraic as well. Moreover, the degree of the extension E(x) / E, i.e., the dimension of E(x) as an E-vector space, equals the minimal degree n such that there is a polynomial equation involving x, as above. If this degree is n, then the elements of E(x) have the form e0 + e1x + ⋯ + en−1x^(n−1), with coefficients ei in E.
For example, the field Q(i) of Gaussian rationals is the subfield of C consisting of all numbers of the form a + bi where both a and b are rational numbers: summands of the form i² (and similarly for higher exponents) do not have to be considered here, since a + bi + ci² can be simplified to a − c + bi.
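Exact arithmetic in Q(i) can be sketched with pairs of rationals (a, b) standing for a + bi; inversion uses the norm a² + b², which is nonzero for nonzero elements, so Q(i) really is a field. The helper names `gmul` and `ginv` are ours, for illustration:

```python
from fractions import Fraction as F

def gmul(x, y):
    """Multiply a + bi and c + di in Q(i)."""
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c)

def ginv(x):
    """Invert a nonzero a + bi: (a - bi) / (a^2 + b^2)."""
    a, b = x
    n = a * a + b * b          # the norm; nonzero unless x = 0
    return (a / n, -b / n)

z = (F(1, 2), F(3, 4))         # 1/2 + (3/4)i
one = (F(1), F(0))
assert gmul(z, ginv(z)) == one  # every nonzero element has an inverse
```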
The above-mentioned field of rational fractions E(X), where X is an indeterminate, is not an algebraic extension of E, since there is no polynomial equation with coefficients in E whose zero is X. Elements, such as X, which are not algebraic are called transcendental. Informally speaking, the indeterminate X and its powers do not interact with elements of E. A similar construction can be carried out with a set of indeterminates, instead of just one.
Once again, the field extension E(x) / E discussed above is a key example: if x is not algebraic (i.e., x is not a root of a polynomial with coefficients in E), then E(x) is isomorphic to E(X). This isomorphism is obtained by substituting x for X in rational fractions.
A subset S of a field F is a transcendence basis if it is algebraically independent (its elements do not satisfy any polynomial relation) over E and if F is an algebraic extension of E(S). Any field extension F / E has a transcendence basis.[32] Thus, field extensions can be split into ones of the form E(S) / E (purely transcendental extensions) and algebraic extensions.
A field is algebraically closed if it does not have any strictly bigger algebraic extensions or, equivalently, if any polynomial equation
has a solution x ∊ F.[33] By the fundamental theorem of algebra, C is algebraically closed, i.e., any polynomial equation with complex coefficients has a complex solution. The rational and the real numbers are not algebraically closed, since the equation
x² = −1 does not have any rational or real solution. A field containing F is called an algebraic closure of F if it is algebraic over F (roughly speaking, not too big compared to F) and is algebraically closed (big enough to contain solutions of all polynomial equations).
By the above, C is an algebraic closure of R. The situation that the algebraic closure is a finite extension of the field F is quite special: by the Artin–Schreier theorem, the degree of this extension is necessarily 2, and F is elementarily equivalent to R. Such fields are also known as real closed fields.
Any field F has an algebraic closure, which is moreover unique up to (non-unique) isomorphism. It is commonly referred to as the algebraic closure and denoted F̄. For example, the algebraic closure Q̄ of Q is called the field of algebraic numbers. The field F̄ is usually rather implicit, since its construction requires the ultrafilter lemma, a set-theoretic axiom that is weaker than the axiom of choice.[34] In this regard, the algebraic closure of Fq is exceptionally simple: it is the union of the finite fields containing Fq (the ones of order qⁿ). For any algebraically closed field F of characteristic 0, the algebraic closure of the field F((t)) of Laurent series is the field of Puiseux series, obtained by adjoining roots of t.[35]
Since fields are ubiquitous in mathematics and beyond, several refinements of the concept have been adapted to the needs of particular mathematical areas.
A field F is called an ordered field if any two elements can be compared, so that x + y ≥ 0 and xy ≥ 0 whenever x ≥ 0 and y ≥ 0. For example, the real numbers form an ordered field, with the usual ordering ≥. The Artin–Schreier theorem states that a field can be ordered if and only if it is a formally real field, which means that the quadratic equation x1² + x2² + ⋯ + xn² = 0
only has the solution x1 = x2 = ⋯ = xn = 0.[36] The set of all possible orders on a fixed field F is isomorphic to the set of ring homomorphisms from the Witt ring W(F) of quadratic forms over F to Z.[37]
An Archimedean field is an ordered field such that for each element there exists a finite expression 1 + 1 + ⋯ + 1
whose value is greater than that element; that is, there are no infinite elements. Equivalently, the field contains no infinitesimals (elements smaller than all rational numbers); or, equivalently, the field is isomorphic to a subfield of R.
An ordered field is Dedekind-complete if all upper bounds and lower bounds (see Dedekind cut) and limits that should exist do exist. More formally, each bounded subset of F is required to have a least upper bound. Any complete field is necessarily Archimedean,[38] since in any non-Archimedean field there is neither a greatest infinitesimal nor a least positive rational, whence the sequence 1/2, 1/3, 1/4, ..., every element of which is greater than every infinitesimal, has no limit.
Since every proper subfield of the reals also contains such gaps, R is the unique complete ordered field, up to isomorphism.[39] Several foundational results in calculus follow directly from this characterization of the reals.
The hyperreals R* form an ordered field that is not Archimedean. It is an extension of the reals obtained by including infinite and infinitesimal numbers, which are larger, respectively smaller in absolute value, than any positive real number. The hyperreals form the foundational basis of non-standard analysis.
Another refinement of the notion of a field is a topological field, in which the set F is a topological space, such that all operations of the field (addition, multiplication, the maps a ↦ −a and a ↦ a⁻¹) are continuous maps with respect to the topology of the space.[40] The topology of all the fields discussed below is induced from a metric, i.e., a function
that measures a distance between any two elements of F.
The completion of F is another field in which, informally speaking, the "gaps" in the original field F are filled, if there are any. For example, any irrational number x, such as x = √2, is a "gap" in the rationals Q in the sense that it is a real number that can be approximated arbitrarily closely by rational numbers p/q, in the sense that the distance between x and p/q, given by the absolute value |x − p/q|, is as small as desired.
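The "gap" at √2 can be exhibited numerically: the continued-fraction iteration x ↦ 1 + 1/(1 + x) produces rationals whose squares come arbitrarily close to 2, yet no rational ever reaches it. A small sketch (the iteration count 8 is arbitrary):

```python
from fractions import Fraction

x = Fraction(1)
approximations = []
for _ in range(8):
    x = 1 + 1 / (1 + x)     # convergents 3/2, 7/5, 17/12, ... toward sqrt(2)
    approximations.append(x)

# exact errors |x^2 - 2| shrink strictly at every step
errors = [abs(a * a - 2) for a in approximations]
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
assert errors[-1] < Fraction(1, 10**5)
```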
The following table lists some examples of this construction. The fourth column shows an example of a zero sequence, i.e., a sequence whose limit (for n → ∞) is zero.
The field Qp is used in number theory and p-adic analysis. The algebraic closure of Qp carries a unique norm extending the one on Qp, but is not complete. The completion of this algebraic closure, however, is algebraically closed. Because of its rough analogy to the complex numbers, it is sometimes called the field of complex p-adic numbers and is denoted by Cp.[41]
The following topological fields are called local fields:[42][d]
These two types of local fields share some fundamental similarities. In this relation, the elements p ∈ Qp and t ∈ Fp((t)) (referred to as uniformizers) correspond to each other. The first manifestation of this is at an elementary level: the elements of both fields can be expressed as power series in the uniformizer, with coefficients in Fp. (However, since the addition in Qp is done using carrying, which is not the case in Fp((t)), these fields are not isomorphic.) The following facts show that this superficial similarity goes much deeper:
Differential fields are fields equipped with a derivation, i.e., one that allows taking derivatives of elements of the field.[44] For example, the field R(X), together with the standard derivative of polynomials, forms a differential field. These fields are central to differential Galois theory, a variant of Galois theory dealing with linear differential equations.
Galois theory studies algebraic extensions of a field by studying the symmetry in the arithmetic operations of addition and multiplication. An important notion in this area is that of finite Galois extensions F / E, which are, by definition, those that are separable and normal. The primitive element theorem shows that finite separable extensions are necessarily simple, i.e., of the form F = E[X] / (f),
where f is an irreducible polynomial (as above).[45] For such an extension, being normal and separable means that all zeros of f are contained in F and that f has only simple zeros. The latter condition is always satisfied if E has characteristic 0.
For a finite Galois extension, the Galois group Gal(F/E) is the group of field automorphisms of F that are trivial on E (i.e., the bijections σ : F → F that preserve addition and multiplication and that send elements of E to themselves). The importance of this group stems from the fundamental theorem of Galois theory, which constructs an explicit one-to-one correspondence between the set of subgroups of Gal(F/E) and the set of intermediate extensions of the extension F/E.[46] By means of this correspondence, group-theoretic properties translate into facts about fields. For example, if the Galois group of a Galois extension as above is not solvable (cannot be built from abelian groups), then the zeros of f cannot be expressed in terms of addition, multiplication, and radicals, i.e., expressions involving nth roots. For example, the symmetric group Sn is not solvable for n ≥ 5. Consequently, as can be shown, the zeros of the following polynomials are not expressible by sums, products, and radicals. For the latter polynomial, this fact is known as the Abel–Ruffini theorem:
The tensor product of fields is not usually a field. For example, a finite extension F / E of degree n is a Galois extension if and only if there is an isomorphism of F-algebras
This fact is the beginning of Grothendieck's Galois theory, a far-reaching extension of Galois theory applicable to algebro-geometric objects.[48]
Basic invariants of a field F include the characteristic and the transcendence degree of F over its prime field. The latter is defined as the maximal number of elements in F that are algebraically independent over the prime field. Two algebraically closed fields E and F are isomorphic precisely if these two data agree.[49] This implies that any two uncountable algebraically closed fields of the same cardinality and the same characteristic are isomorphic. For example, the algebraic closure of Qp, the field Cp, and C are isomorphic (but not isomorphic as topological fields).
In model theory, a branch of mathematical logic, two fields E and F are called elementarily equivalent if every mathematical statement that is true for E is also true for F, and conversely. The mathematical statements in question are required to be first-order sentences (involving 0, 1, the addition and multiplication). A typical example, for n > 0, n an integer, is
The set of such formulas for all n expresses that E is algebraically closed.
The Lefschetz principle states that C is elementarily equivalent to any algebraically closed field F of characteristic zero. Moreover, any fixed statement φ holds in C if and only if it holds in any algebraically closed field of sufficiently high characteristic.[50]
If U is an ultrafilter on a set I, and Fi is a field for every i in I, the ultraproduct of the Fi with respect to U is a field.[51] It is denoted by
since it behaves in several ways as a limit of the fields Fi: Łoś's theorem states that any first-order statement that holds for all but finitely many Fi also holds for the ultraproduct. Applied to the above sentence φ, this shows that there is an isomorphism[e]
The Ax–Kochen theorem mentioned above also follows from this and an isomorphism of the ultraproducts (in both cases over all primes p)
In addition, model theory also studies the logical properties of various other types of fields, such as real closed fields or exponential fields (which are equipped with an exponential function exp : F → F×).[52]
For fields that are not algebraically closed (or not separably closed), the absolute Galois group Gal(F) is fundamentally important: extending the case of finite Galois extensions outlined above, this group governs all finite separable extensions of F. By elementary means, the group Gal(Fq) can be shown to be the Prüfer group, the profinite completion of Z. This statement subsumes the fact that the only algebraic extensions of Fq are the fields Fqn for n > 0, and that the Galois groups of these finite extensions are given by
A description in terms of generators and relations is also known for the Galois groups of p-adic number fields (finite extensions of Qp).[53]
Representations of Galois groups and of related groups such as the Weil group are fundamental in many branches of arithmetic, such as the Langlands program. The cohomological study of such representations is done using Galois cohomology.[54] For example, the Brauer group, which is classically defined as the group of central simple F-algebras, can be reinterpreted as a Galois cohomology group, namely
Milnor K-theory is defined as
The norm residue isomorphism theorem, proved around 2000 by Vladimir Voevodsky, relates this to Galois cohomology by means of an isomorphism
Algebraic K-theory is related to the group of invertible matrices with coefficients in the given field. For example, the process of taking the determinant of an invertible matrix leads to an isomorphism K1(F) = F×. Matsumoto's theorem shows that K2(F) agrees with K2M(F). In higher degrees, K-theory diverges from Milnor K-theory and remains hard to compute in general.
If a ≠ 0, then the equation ax = b
has a unique solution x in a field F, namely x = a⁻¹b. This immediate consequence of the definition of a field is fundamental in linear algebra. For example, it is an essential ingredient of Gaussian elimination and of the proof that any vector space has a basis.[55]
The theory of modules (the analogue of vector spaces over rings instead of fields) is much more complicated, because the above equation may have several or no solutions. In particular, systems of linear equations over a ring are much more difficult to solve than in the case of fields, even in the especially simple case of the ring Z of the integers.
A widely applied cryptographic routine uses the fact that discrete exponentiation, i.e., computing
in a (large) finite field Fq can be performed much more efficiently than the discrete logarithm, which is the inverse operation, i.e., determining the solution n to an equation
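The asymmetry can be demonstrated in the prime field Fp: Python's three-argument `pow` performs fast modular exponentiation (square-and-multiply), while recovering the exponent requires a search. The prime, base, and exponent below are illustrative values, not from the text:

```python
p, g = 1000003, 2            # illustrative prime modulus and base
n = 123456
h = pow(g, n, p)             # fast discrete exponentiation: g^n mod p

def discrete_log(g, h, p):
    """Naive brute-force discrete logarithm: exponential in the bit length."""
    x, k = 1, 0
    while x != h:
        x = (x * g) % p
        k += 1
    return k

assert discrete_log(g, h, p) == n
```

Real cryptosystems use moduli of hundreds of digits, for which the brute-force search (and all known classical algorithms) becomes infeasible while `pow` stays fast.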
In elliptic curve cryptography, the multiplication in a finite field is replaced by the operation of adding points on an elliptic curve, i.e., the solutions of an equation of the form
Finite fields are also used in coding theory and combinatorics.
Functions on a suitable topological space X into a field F can be added and multiplied pointwise, e.g., the product of two functions is defined by the product of their values within the domain:
This makes these functions a commutative F-algebra.
To obtain a field of functions, one must consider algebras of functions that are integral domains. In this case the ratios of two functions, i.e., expressions of the form
form a field, called the field of functions.
This occurs in two main cases. The first is when X is a complex manifold. In this case, one considers the algebra of holomorphic functions, i.e., complex-differentiable functions. Their ratios form the field of meromorphic functions on X.
The function field of an algebraic variety X (a geometric object defined as the common zeros of polynomial equations) consists of ratios of regular functions, i.e., ratios of polynomial functions on the variety. The function field of the n-dimensional space over a field F is F(x1, ..., xn), i.e., the field consisting of ratios of polynomials in n indeterminates. The function field of X is the same as the one of any open dense subvariety. In other words, the function field is insensitive to replacing X by a (slightly) smaller subvariety.
The function field is invariant under isomorphism and birational equivalence of varieties. It is therefore an important tool for the study of abstract algebraic varieties and for the classification of algebraic varieties. For example, the dimension, which equals the transcendence degree of F(X), is invariant under birational equivalence.[56] For curves (i.e., varieties of dimension one), the function field F(X) is very close to X: if X is smooth and proper (the analogue of being compact), X can be reconstructed, up to isomorphism, from its field of functions.[f] In higher dimension the function field remembers less, but still decisive, information about X. The study of function fields and their geometric meaning in higher dimensions is referred to as birational geometry. The minimal model program attempts to identify the simplest (in a certain precise sense) algebraic varieties with a prescribed function field.
Global fields are in the limelight in algebraic number theory and arithmetic geometry.
They are, by definition, number fields (finite extensions of Q) or function fields over Fq (finite extensions of Fq(t)). As for local fields, these two types of fields share several similar features, even though they are of characteristic 0 and positive characteristic, respectively. This function field analogy can help to shape mathematical expectations, often first by understanding questions about function fields, and later treating the number field case. The latter is often more difficult. For example, the Riemann hypothesis concerning the zeros of the Riemann zeta function (open as of 2017) can be regarded as being parallel to the Weil conjectures (proven in 1974 by Pierre Deligne).
Cyclotomic fields are among the most intensely studied number fields. They are of the form Q(ζn), where ζn is a primitive nth root of unity, i.e., a complex number ζ that satisfies ζⁿ = 1 and ζᵐ ≠ 1 for all 0 < m < n.[57] For n being a regular prime, Kummer used cyclotomic fields to prove Fermat's Last Theorem, which asserts the non-existence of rational nonzero solutions to the equation
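The defining property of a primitive nth root of unity can be checked numerically with ζn = e^(2πi/n). A small sketch for n = 5 (the tolerances are arbitrary, to absorb floating-point error):

```python
import cmath

n = 5
zeta = cmath.exp(2j * cmath.pi / n)   # a primitive 5th root of unity
assert abs(zeta ** n - 1) < 1e-9      # zeta^5 = 1
# no smaller positive power equals 1, which is what "primitive" means
assert all(abs(zeta ** m - 1) > 0.1 for m in range(1, n))
```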
Local fields are completions of global fields. Ostrowski's theorem asserts that the only completions of Q, a global field, are the local fields Qp and R. Studying arithmetic questions in global fields may sometimes be done by looking at the corresponding questions locally. This technique is called the local–global principle. For example, the Hasse–Minkowski theorem reduces the problem of finding rational solutions of quadratic equations to solving these equations in R and Qp, whose solutions can easily be described.[58]
Unlike for local fields, the Galois groups of global fields are not known. Inverse Galois theory studies the (unsolved) problem of whether any finite group is the Galois group Gal(F/Q) for some number field F.[59] Class field theory describes the abelian extensions, i.e., ones with abelian Galois group, or equivalently the abelianized Galois groups, of global fields. A classical statement, the Kronecker–Weber theorem, describes the maximal abelian extension Qab of Q: it is the field
obtained by adjoining all primitive nth roots of unity. Kronecker's Jugendtraum asks for a similarly explicit description of Fab for general number fields F. For imaginary quadratic fields F = Q(√−d), d > 0, the theory of complex multiplication describes Fab using elliptic curves. For general number fields, no such explicit description is known.
Beyond the additional structure that fields may enjoy, fields admit various other related notions. Since in any field 0 ≠ 1, any field has at least two elements. Nonetheless, there is a concept of field with one element, which is suggested to be a limit of the finite fields Fp as p tends to 1.[60] In addition to division rings, there are various other weaker algebraic structures related to fields, such as quasifields, near-fields and semifields.
There are also proper classes with field structure, which are sometimes called Fields, with a capital "F". The surreal numbers form a Field containing the reals, and would be a field except for the fact that they are a proper class, not a set. The nimbers, a concept from game theory, form such a Field as well.[61]
Dropping one or several axioms in the definition of a field leads to other algebraic structures. As was mentioned above, commutative rings satisfy all field axioms except for the existence of multiplicative inverses. Dropping instead commutativity of multiplication leads to the concept of a division ring or skew field;[g] sometimes associativity is weakened as well. The only division rings that are finite-dimensional R-vector spaces are R itself, C (which is a field), and the quaternions H (in which multiplication is non-commutative). This result is known as the Frobenius theorem. The octonions O, for which multiplication is neither commutative nor associative, form a normed alternative division algebra, but are not a division ring. This fact was proved using methods of algebraic topology in 1958 by Michel Kervaire, Raoul Bott, and John Milnor.[62]
Wedderburn's little theorem states that all finite division rings are fields.
https://en.wikipedia.org/wiki/Field_(mathematics)
Programming languages and computing platforms that typically support reflective programming (reflection) include dynamically typed languages such as Smalltalk, Perl, PHP, Python, VBScript, and JavaScript. The .NET languages and the Maude system of rewriting logic also support it. More rarely, some non-dynamic or unmanaged languages do as well; notable examples are Delphi, eC, and Objective-C.
https://en.wikipedia.org/wiki/List_of_reflective_programming_languages_and_platforms
In psychology, the collective unconscious (German: kollektives Unbewusstes) is a term coined by Carl Jung referring to the belief that the unconscious mind comprises the instincts of Jungian archetypes: innate symbols understood from birth in all humans.[1] Jung considered the collective unconscious to underpin and surround the unconscious mind, distinguishing it from the personal unconscious of Freudian psychoanalysis. He believed that the concept of the collective unconscious helps to explain why similar themes occur in mythologies around the world. He argued that the collective unconscious had a profound influence on the lives of individuals, who lived out its symbols and clothed them in meaning through their experiences. The psychotherapeutic practice of analytical psychology revolves around examining the patient's relationship to the collective unconscious.
Psychiatrist and Jungian analyst Lionel Corbett argues that the contemporary terms "autonomous psyche" or "objective psyche" are more commonly used today in the practice of depth psychology than the traditional term "collective unconscious".[2] Critics of the collective unconscious concept have called it unscientific and fatalistic, or otherwise very difficult to test scientifically (due to the mystical aspect of the collective unconscious).[3] Proponents suggest that it is borne out by findings of psychology, neuroscience, and anthropology.
The term "collective unconscious" first appeared in Jung's 1916 essay, "The Structure of the Unconscious".[4] This essay distinguishes between the "personal", Freudian unconscious, filled with sexual fantasies and repressed images, and the "collective" unconscious encompassing the soul of humanity at large.[5]
In "The Significance of Constitution and Heredity in Psychology" (November 1929), Jung wrote:
And the essential thing, psychologically, is that in dreams, fantasies, and other exceptional states of mind the most far-fetched mythological motifs and symbols can appear autochthonously at any time, often, apparently, as the result of particular influences, traditions, and excitations working on the individual, but more often without any sign of them. These "primordial images" or "archetypes," as I have called them, belong to the basic stock of the unconscious psyche and cannot be explained as personal acquisitions. Together they make up that psychic stratum which has been called the collective unconscious. The existence of the collective unconscious means that individual consciousness is anything but a tabula rasa and is not immune to predetermining influences. On the contrary, it is in the highest degree influenced by inherited presuppositions, quite apart from the unavoidable influences exerted upon it by the environment. The collective unconscious comprises in itself the psychic life of our ancestors right back to the earliest beginnings. It is the matrix of all conscious psychic occurrences, and hence it exerts an influence that compromises the freedom of consciousness in the highest degree, since it is continually striving to lead all conscious processes back into the old paths.[6]
On October 19, 1936, Jung delivered a lecture, "The Concept of the Collective Unconscious", to the Abernethian Society at St. Bartholomew's Hospital in London.[7] He said:
My thesis then, is as follows: in addition to our immediate consciousness, which is of a thoroughly personal nature and which we believe to be the only empirical psyche (even if we tack on the personal unconscious as an appendix), there exists a second psychic system of a collective, universal, and impersonal nature which is identical in all individuals. This collective unconscious does not develop individually but is inherited. It consists of pre-existent forms, the archetypes, which can only become conscious secondarily and which give definite form to certain psychic contents.[8]
Jung linked the collective unconscious to "what Freud called 'archaic remnants' – mental forms whose presence cannot be explained by anything in the individual's own life and which seem to be aboriginal, innate, and inherited shapes of the human mind".[9] He credited Freud for developing his "primal horde" theory in Totem and Taboo and continued further with the idea of an archaic ancestor maintaining its influence in the minds of present-day humans. Every human being, he wrote, "however high his conscious development, is still an archaic man at the deeper levels of his psyche."[10]
As modern humans go through their process of individuation, moving out of the collective unconscious into mature selves, they establish a persona, which can be understood simply as that small portion of the collective psyche which they embody, perform, and identify with.[11]
The collective unconscious exerts overwhelming influence on the minds of individuals. These effects of course vary widely, however, since they involve virtually every emotion and situation. At times, the collective unconscious can terrify, but it can also heal.[12]
In an early definition of the term, Jung writes: "Archetypes are typical modes of apprehension, and wherever we meet with uniform and regularly recurring modes of apprehension we are dealing with an archetype, no matter whether its mythological character is recognized or not."[13] He traces the term back to Philo, Irenaeus, and the Corpus Hermeticum, which associate archetypes with divinity and the creation of the world, and notes the close relationship of Platonic ideas.[14]
These archetypes dwell in a world beyond the chronology of a human lifespan, developing on an evolutionary timescale. Regarding the animus and anima, the male principle within the woman and the female principle within the man, Jung writes:
They evidently live and function in the deeper layers of the unconscious, especially in that phylogenetic substratum which I have called the collective unconscious. This localization explains a good deal of their strangeness: they bring into our ephemeral consciousness an unknown psychic life belonging to a remote past. It is the mind of our unknown ancestors, their way of thinking and feeling, their way of experiencing life and the world, gods, and men. The existence of these archaic strata is presumably the source of man's belief in reincarnations and in memories of "previous experiences". Just as the human body is a museum, so to speak, of its phylogenetic history, so too is the psyche.[15]
Jung also described archetypes as imprints of momentous or frequently recurring situations in the lengthy human past.[16]
A complete list of archetypes cannot be made, nor can differences between archetypes be absolutely delineated.[17] For example, the Eagle is a common archetype that may have a multiplicity of interpretations. It could mean the soul leaving the mortal body and connecting with the heavenly spheres, or it may mean that someone is sexually impotent, in that they have had their spiritual ego body engaged. In spite of this difficulty, Jungian analyst June Singer suggests a partial list of well-studied archetypes, listed in pairs of opposites:[18]
Jung made reference to contents of this category of the unconscious psyche as being similar to Levy-Bruhl's use of "collective representations", Hubert and Mauss's "categories of the imagination", and Adolf Bastian's "primordial thoughts". He also called archetypes "dominants" because of their profound influence on mental life.
Jung's exposition of the collective unconscious builds on the classic issue in psychology and biology regarding nature versus nurture. If we accept that nature, or heredity, has some influence on the individual psyche, we must examine the question of how this influence takes hold in the real world.[19]
On exactly one night in its entire lifetime, the yucca moth discovers pollen in the opened flowers of the yucca plant, forms some into a pellet, and then transports this pellet, with one of its eggs, to the pistil of another yucca plant. This activity cannot be "learned"; it makes more sense to describe the yucca moth as experiencing intuition about how to act.[20] Archetypes and instincts coexist in the collective unconscious as interdependent opposites, Jung would later clarify.[12][21] Whereas for most animals intuitive understandings completely intertwine with instinct, in humans the archetypes have become a separate register of mental phenomena.[22]
Humans experience five main types of instinct, wrote Jung: hunger, sexuality, activity, reflection, and creativity. These instincts, listed in order of increasing abstraction, elicit and constrain human behavior, but also leave room for freedom in their implementation and especially in their interplay. Even a simple hungry feeling can lead to many different responses, including metaphorical sublimation.[22][23] These instincts could be compared to the "drives" discussed in psychoanalysis and other domains of psychology.[24] Several readers of Jung have observed that in his treatment of the collective unconscious, Jung suggests an unusual mixture of primordial, "lower" forces, and spiritual, "higher" forces.[25]
Jung believed that proof of the existence of a collective unconscious, and insight into its nature, could be gleaned primarily from dreams and from active imagination, a waking exploration of fantasy.[26]
Jung considered that "the shadow and the anima and animus differ from the other archetypes in the fact that their content is more directly related to the individual's personal situation".[27] These archetypes, a special focus of Jung's work, become autonomous personalities within an individual psyche. Jung encouraged direct conscious dialogue of the patients with these personalities within.[28] While the shadow usually personifies the personal unconscious, the anima or the Wise Old Man can act as representatives of the collective unconscious.[29]
Jung suggested that parapsychology, alchemy, and occult religious ideas could contribute to an understanding of the collective unconscious.[30] Based on his interpretation of synchronicity and extra-sensory perception, Jung argued that psychic activity transcended the brain.[31] In alchemy, Jung found that plain water, or sea water, corresponded to his concept of the collective unconscious.[32]
In humans, the psyche mediates between the primal force of the collective unconscious and the experience of consciousness or dream. Therefore, symbols may require interpretation before they can be understood as archetypes. Jung writes:
We have only to disregard the dependence of dream language on environment and substitute "eagle" for "aeroplane," "dragon" for "automobile" or "train," "snake-bite" for "injection," and so forth, in order to arrive at the more universal and more fundamental language of mythology. This gives us access to the primordial images that underlie all thinking and have a considerable influence even on our scientific ideas.[33]
A single archetype can manifest in many different ways. Regarding the Mother archetype, Jung suggests that not only can it apply to mothers, grandmothers, stepmothers, mothers-in-law, and mothers in mythology, but to various concepts, places, objects, and animals:
Other symbols of the mother in a figurative sense appear in things representing the goal of our longing for redemption, such as Paradise, the Kingdom of God, the Heavenly Jerusalem. Many things arousing devotion or feelings of awe, as for instance the Church, university, city or country, heaven, earth, the woods, the sea or any still waters, matter even, the underworld and the moon, can be mother-symbols. The archetype is often associated with things and places standing for fertility and fruitfulness: the cornucopia, a ploughed field, a garden. It can be attached to a rock, a cave, a tree, a spring, a deep well, or to various vessels such as the baptismal font, or to vessel-shaped flowers like the rose or the lotus. Because of the protection it implies, the magic circle or mandala can be a form of mother archetype. Hollow objects such as ovens or cooking vessels are associated with the mother archetype, and, of course, the uterus, yoni, and anything of a like shape. Added to this list there are many animals, such as the cow, hare, and helpful animals in general.[34]
Care must be taken, however, to determine the meaning of a symbol through further investigation; one cannot simply decode a dream by assuming these meanings are constant. Archetypal explanations work best when an already-known mythological narrative can clearly help to explain the confusing experience of an individual.[35]
In his clinical psychiatry practice, Jung identified mythological elements which seemed to recur in the minds of his patients—above and beyond the usual complexes which could be explained in terms of their personal lives.[36]The most obvious patterns applied to the patient's parents: "Nobody knows better than the psychotherapist that the mythologizing of the parents is often pursued far into adulthood and is given up only with the greatest resistance."[37]
Jung cited recurring themes as evidence of the existence of psychic elements shared among all humans. For example: "The snake-motif was certainly not an individual acquisition of the dreamer, for snake-dreams are very common even among city-dwellers who have probably never seen a real snake."[38][35] Still better evidence, he felt, came when patients described complex images and narratives with obscure mythological parallels.[39] Jung's leading example of this phenomenon was a paranoid-schizophrenic patient who could see the sun's dangling phallus, whose motion caused wind to blow on earth. Jung found a direct analogue of this idea in the "Mithras Liturgy", from the Greek Magical Papyri of Ancient Egypt—only just translated into German—which also discussed a phallic tube, hanging from the sun, and causing wind to blow on earth. He concluded that the patient's vision and the ancient Liturgy arose from the same source in the collective unconscious.[40]
Going beyond the individual mind, Jung believed that "the whole of mythology could be taken as a sort of projection of the collective unconscious". Therefore, psychologists could learn about the collective unconscious by studying religions and spiritual practices of all cultures, as well as belief systems like astrology.[41]
Popperian critic Ray Scott Percival disputes some of Jung's examples and argues that his strongest claims are not falsifiable. Percival takes special issue with Jung's claim that major scientific discoveries emanate from the collective unconscious and not from unpredictable or innovative work done by scientists. Percival charges Jung with excessive determinism and writes: "He could not countenance the possibility that people sometimes create ideas that cannot be predicted, even in principle." Regarding the claim that all humans exhibit certain patterns of mind, Percival argues that these common patterns could be explained by common environments (i.e. by shared nurture, not nature). Because all people have families, encounter plants and animals, and experience night and day, it should come as no surprise that they develop basic mental structures around these phenomena.[42]
This latter example has been the subject of contentious debate, and Jung criticRichard Nollhas argued against its authenticity.[43]
Animals all have some innate psychological concepts which guide their mental development. The concept of imprinting in ethology is one well-studied example, dealing most famously with the Mother constructs of newborn animals. The many predetermined scripts for animal behavior are called innate releasing mechanisms.[44]
Proponents of the collective unconscious theory in neuroscience suggest that mental commonalities in humans originate especially from the subcortical area of the brain: specifically, the thalamus and limbic system. These centrally located structures link the brain to the rest of the nervous system and are said to control vital processes including emotions and long-term memory.[25]
A more common experimental approach investigates the unique effects of archetypal images. An influential study of this type, by Rosen, Smith, Huston, & Gonzalez in 1991, found that people could better remember symbols paired with words representing their archetypal meaning. Using data from the Archive for Research in Archetypal Symbolism and a jury of evaluators, Rosen et al. developed an "Archetypal Symbol Inventory" listing symbols and one-word connotations. Many of these connotations were obscure to laypeople. For example, a picture of a diamond represented "self"; a square represented "Earth". They found that even when subjects did not consciously associate the word with the symbol, they were better able to remember the pairing of the symbol with its chosen word.[45] Brown & Hannigan replicated this result in 2013, and expanded the study slightly to include tests in English and in Spanish of people who spoke both languages.[46]
Maloney (1999) asked people questions about their feelings toward variations on images featuring the same archetype: some positive, some negative, and some non-anthropomorphic. He found that although the images did not elicit significantly different responses to questions about whether they were "interesting" or "pleasant", they did provoke highly significant differences in response to the statement: "If I were to keep this image with me forever, I would be". Maloney suggested that this question led the respondents to process the archetypal images on a deeper level, which strongly reflected their positive or negative valence.[47]
Ultimately, although Jung referred to the collective unconscious as an empirical concept, based on evidence, its elusive nature does create a barrier to traditional experimental research. June Singer writes:
But the collective unconscious lies beyond the conceptual limitations of individual human consciousness, and thus cannot possibly be encompassed by them. We cannot, therefore, make controlled experiments to prove the existence of the collective unconscious, for the psyche of man, holistically conceived, cannot be brought under laboratory conditions without doing violence to its nature. ... In this respect, psychology may be compared to astronomy, the phenomena of which also cannot be enclosed within a controlled setting. The heavenly bodies must be observed where they exist in the natural universe, under their own conditions, rather than under conditions we might propose to set for them.[48]
Psychotherapy based on analytical psychology would seek to analyze the relationship between a person's individual consciousness and the deeper common structures which underlie them. Personal experiences both activate archetypes in the mind and give them meaning and substance for the individual.[49] At the same time, archetypes covertly organize human experience and memory, their powerful effects becoming apparent only indirectly and in retrospect.[50][51] Understanding the power of the collective unconscious can help an individual to navigate through life.
In the interpretation of analytical psychologist Mary Williams, a patient who understands the impact of the archetype can help to dissociate the underlying symbol from the real person who embodies the symbol for the patient. In this way, the patient no longer uncritically transfers their feelings about the archetype onto people in everyday life, and as a result, can develop healthier and more personal relationships.[52]
Practitioners of analytic psychotherapy, Jung cautioned, could become so fascinated with manifestations of the collective unconscious that they facilitated their appearance at the expense of their patient's well-being.[52] Individuals with schizophrenia, it is said, fully identify with the collective unconscious, lacking a functioning ego to help them deal with actual difficulties of life.[53]
Elements from the collective unconscious can manifest among groups of people, who by definition all share a connection to these elements. Groups of people can become especially receptive to specific symbols due to the historical situation they find themselves in.[54] The common importance of the collective unconscious makes people ripe for political manipulation, especially in the era of mass politics.[55] Jung compared mass movements to mass psychoses, comparable to demonic possession in which people uncritically channel unconscious symbolism through the social dynamic of the mob and the leader.[56]
Although civilization leads people to disavow their links with the mythological world of uncivilized societies, Jung argued that aspects of the primitive unconscious would nevertheless reassert themselves in the form of superstitions, everyday practices, and unquestioned traditions such as the Christmas tree.[57]
Based on empirical inquiry, Jung felt that all humans, regardless of racial and geographic differences, share the same collective pool of instincts and images, though these manifest differently due to the moulding influence of culture.[58]However, above and in addition to the primordial collective unconscious, people within a certain culture may share additional bodies of primal collective ideas.[59]
Jung called the UFO phenomenon a "living myth", a legend in the process of consolidation.[60] Belief in a messianic encounter with UFOs demonstrated the point, Jung argued, that even if a rationalistic modern ideology repressed the images of the collective unconscious, its fundamental aspects would inevitably resurface. The circular shape of the flying saucer confirms its symbolic connection to repressed but psychically necessary ideas of divinity.[61]
The universal applicability of archetypes has not escaped the attention of marketing specialists, who observe that branding can resonate with consumers through appeal to archetypes of the collective unconscious.
Jung contrasted the collective unconscious with the personal unconscious, the unique aspects of an individual's psyche, which Jung says constitute the focus of Sigmund Freud and Alfred Adler.[62] Psychotherapy patients, it seemed to Jung, often described fantasies and dreams which repeated elements from ancient mythology. These elements appeared even in patients who were probably not exposed to the original story. For example, mythology offers many examples of the "dual mother" narrative, according to which a child has a biological mother and a divine mother. Therefore, argues Jung, Freudian psychoanalysis would neglect important sources for unconscious ideas, in the case of a patient with neurosis around a dual-mother image.[63]
This divergence over the nature of the unconscious has been cited as a key aspect of Jung's famous split from Sigmund Freud and his school of psychoanalysis.[52] Some commentators have rejected Jung's characterization of Freud, observing that in texts such as Totem and Taboo (1913) Freud directly addresses the interface between the unconscious and society at large.[42] Jung himself said that Freud had discovered a collective archetype, the Oedipus complex, but that it "was the first archetype Freud discovered, the first and only one".[64]
Probably none of my empirical concepts has been met with so much misunderstanding as the idea of the collective unconscious.
Jung also distinguished between the collective unconscious and collective consciousness, between which lay "an almost unbridgeable gulf over which the subject finds himself suspended". According to Jung, collective consciousness (meaning something along the lines of consensus reality) offered only generalizations, simplistic ideas, and the fashionable ideologies of the age. This tension between collective unconscious and collective consciousness corresponds roughly to the "everlasting cosmic tug of war between good and evil" and has worsened in the time of the mass man.[66][67]
Organized religion, exemplified by the Catholic Church, lies more with the collective consciousness; but, through its all-encompassing dogma it channels and molds the images which inevitably pass from the collective unconscious into the minds of people.[68][69] (Conversely, religious critics including Martin Buber accused Jung of wrongly placing psychology above transcendental factors in explaining human experience.)[70]
In a minimalist interpretation of what would then appear as "Jung's much misunderstood idea of the collective unconscious", his idea was "simply that certain structures and predispositions of the unconscious are common to all of us ... [on] an inherited, species-specific, genetic basis".[71]Thus "one could as easily speak of the 'collective arm' – meaning the basic pattern of bones and muscles which all human arms share in common."[72]
Others point out however that "there does seem to be a basic ambiguity in Jung's various descriptions of the Collective Unconscious. Sometimes he seems to regard the predisposition to experience certain images as understandable in terms of some genetic model"[73] – as with the collective arm. However, Jung was "also at pains to stress the numinous quality of these experiences, and there can be no doubt that he was attracted to the idea that the archetypes afford evidence of some communion with some divine or world mind", and perhaps "his popularity as a thinker derives precisely from this"[74] – the maximal interpretation.
Marie-Louise von Franz accepted that "it is naturally very tempting to identify the hypothesis of the collective unconscious historically and regressively with the ancient idea of an all-extensive world-soul."[75] New Age writer Sherry Healy goes further, claiming that Jung himself "dared to suggest that the human mind could link to ideas and motivations called the collective unconscious ... a body of unconscious energy that lives forever."[76] This is the idea of monopsychism.
Other researchers, including Alexander Fowler, have proposed taking the minimal interpretation of Jung's work and incorporating it into the theory of biological evolution (i.e., sexual selection), or using it to unify disparate theoretical orientations within psychology, such as neuropsychology, evolutionary psychology, and analytical psychology, since Jung's postulation of an evidenced mechanism for the genetic transmission of information through sexual selection provides a single explanation for questions left open across these varied orientations.[77][78]
https://en.wikipedia.org/wiki/Collective_unconscious
In mathematics, Galois rings are a type of finite commutative ring which generalizes both the finite fields and the rings of integers modulo a prime power. A Galois ring is constructed from the ring Z/p^nZ similar to how a finite field F_{p^r} is constructed from F_p. It is a Galois extension of Z/p^nZ, when the concept of a Galois extension is generalized beyond the context of fields.
Galois rings were studied by Krull (1924),[1] and independently by Janusz (1966)[2] and by Raghavendran (1969),[3] who both introduced the name Galois ring. They are named after Évariste Galois, similar to Galois fields, which is another name for finite fields. Galois rings have found applications in coding theory, where certain codes are best understood as linear codes over Z/4Z using Galois rings GR(4, r).[4][5]
A Galois ring is a commutative ring of characteristic p^n which has p^{nr} elements, where p is prime and n and r are positive integers. It is usually denoted GR(p^n, r). It can be defined as a quotient ring

GR(p^n, r) = Z[x] / (p^n, f(x)),
where f(x) ∈ Z[x] is a monic polynomial of degree r which is irreducible modulo p.[6][7] Up to isomorphism, the ring depends only on p, n, and r and not on the choice of f used in the construction.[8]
The simplest examples of Galois rings are important special cases:

- GR(p^n, 1) = Z/p^nZ, the ring of integers modulo a prime power;
- GR(p, r) = F_{p^r}, the finite field of order p^r.
A less trivial example is the Galois ring GR(4, 3). It is of characteristic 4 and has 4^3 = 64 elements. One way to construct it is Z[x] / (4, x^3 + 2x^2 + x − 1), or equivalently, (Z/4Z)[ξ] where ξ is a root of the polynomial f(x) = x^3 + 2x^2 + x − 1. Although any monic polynomial of degree 3 which is irreducible modulo 2 could have been used, this choice of f turns out to be convenient because f(x) divides x^7 − 1
in (Z/4Z)[x], which makes ξ a 7th root of unity in GR(4, 3). The elements of GR(4, 3) can all be written in the form a_2 ξ^2 + a_1 ξ + a_0 where each of a_0, a_1, and a_2 is in Z/4Z. For example, ξ^3 = 2ξ^2 − ξ + 1 and ξ^4 = 2ξ^3 − ξ^2 + ξ = −ξ^2 − ξ + 2.[4]
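These identities can be checked mechanically. The sketch below (an illustration under an assumed coefficient-tuple representation, not anything from the source) implements multiplication in GR(4, 3) and recovers the powers of ξ quoted above:

```python
# A sketch of arithmetic in GR(4, 3) = (Z/4Z)[x] / (x^3 + 2x^2 + x - 1).
# Elements are coefficient tuples (a0, a1, a2) meaning a0 + a1*x + a2*x^2,
# with coefficients reduced mod 4.  (Assumed representation, for illustration.)

N = 4  # the characteristic p^n = 2^2

def mul(a, b):
    """Multiply two elements of GR(4, 3)."""
    prod = [0] * 5                      # schoolbook product, degree <= 4
    for i in range(3):
        for j in range(3):
            prod[i + j] += a[i] * b[j]
    for deg in (4, 3):                  # reduce using x^3 = 1 + 3x + 2x^2 (mod 4)
        c, prod[deg] = prod[deg], 0
        prod[deg - 3] += c
        prod[deg - 2] += 3 * c
        prod[deg - 1] += 2 * c
    return tuple(v % N for v in prod[:3])

xi = (0, 1, 0)                          # the class of x
powers = [(1, 0, 0)]                    # xi^0, xi^1, ..., xi^7
for _ in range(7):
    powers.append(mul(powers[-1], xi))

print(powers[3])   # (1, 3, 2): xi^3 = 2*xi^2 - xi + 1  (note -1 = 3 mod 4)
print(powers[4])   # (2, 3, 3): xi^4 = -xi^2 - xi + 2
print(powers[7])   # (1, 0, 0): xi^7 = 1, so xi is a 7th root of unity
```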
Every Galois ring GR(p^n, r) has a primitive (p^r − 1)-th root of unity. It is the equivalence class of x in the quotient Z[x] / (p^n, f(x)) when f is chosen to be a primitive polynomial. This means that, in (Z/p^nZ)[x], the polynomial f(x) divides x^{p^r − 1} − 1 and does not divide x^m − 1 for all m < p^r − 1. Such an f can be computed by starting with a primitive polynomial of degree r over the finite field F_p and using Hensel lifting.[9]
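For the example f(x) = x^3 + 2x^2 + x − 1 of GR(4, 3), the divisibility condition can be verified directly. The helper below is a plain remainder computation in (Z/4Z)[x]; it only illustrates the primitivity condition and does not perform Hensel lifting:

```python
# Check the primitivity condition for f(x) = x^3 + 2x^2 + x - 1 over Z/4Z:
# f should divide x^7 - 1 but no x^m - 1 for m < 7.  Coefficient lists are
# low-to-high degree.  (A sketch, not a Hensel-lifting implementation.)

def poly_rem(num, den, q):
    """Remainder of num modulo den in (Z/qZ)[x]; den must be monic."""
    num = num[:]
    while len(num) >= len(den):
        c = num[-1] % q
        shift = len(num) - len(den)
        for i, d in enumerate(den):     # subtract c * x^shift * den
            num[shift + i] = (num[shift + i] - c * d) % q
        while num and num[-1] % q == 0:
            num.pop()
    return [v % q for v in num]

f = [-1, 1, 2, 1]                       # x^3 + 2x^2 + x - 1

divisible = [m for m in range(1, 8)
             if poly_rem([-1] + [0] * (m - 1) + [1], f, 4) == []]
print(divisible)                        # [7]: f divides x^7 - 1 and nothing smaller
```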
A primitive (p^r − 1)-th root of unity ξ can be used to express elements of the Galois ring in a useful form called the p-adic representation. Every element of the Galois ring can be written uniquely as

α_0 + α_1 p + α_2 p^2 + ... + α_{n−1} p^{n−1},
where each α_i is in the set {0, 1, ξ, ξ^2, ..., ξ^{p^r − 2}}.[7][9]
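For GR(4, 3) this uniqueness claim can be checked exhaustively: with T = {0, 1, ξ, ..., ξ^6}, the 8 × 8 = 64 sums α_0 + 2α_1 with α_0, α_1 in T must hit all 64 ring elements exactly once. A self-contained sketch, again using an assumed coefficient-tuple representation:

```python
# Exhaustive check of the 2-adic representation in GR(4, 3): every element
# is uniquely alpha0 + 2*alpha1 with alpha0, alpha1 in the Teichmueller set
# T = {0, 1, xi, ..., xi^6}.  Elements are coefficient tuples mod 4.

N = 4

def mul(a, b):
    """Multiply in GR(4, 3) = (Z/4Z)[x] / (x^3 + 2x^2 + x - 1)."""
    prod = [0] * 5
    for i in range(3):
        for j in range(3):
            prod[i + j] += a[i] * b[j]
    for deg in (4, 3):                  # reduce using x^3 = 1 + 3x + 2x^2
        c, prod[deg] = prod[deg], 0
        prod[deg - 3] += c
        prod[deg - 2] += 3 * c
        prod[deg - 1] += 2 * c
    return tuple(v % N for v in prod[:3])

xi = (0, 1, 0)
T = [(0, 0, 0), (1, 0, 0)]              # 0 and xi^0 = 1
while len(T) < 8:                       # append xi^1, ..., xi^6
    T.append(mul(T[-1], xi))

reps = {tuple((a0[i] + 2 * a1[i]) % N for i in range(3))
        for a0 in T for a1 in T}
print(len(reps))   # 64 distinct sums from 64 pairs: existence and uniqueness
```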
Every Galois ring is a local ring. The unique maximal ideal is the principal ideal (p) = p GR(p^n, r), consisting of all elements which are multiples of p. The residue field GR(p^n, r) / (p) is isomorphic to the finite field of order p^r. Furthermore, (0), (p^{n−1}), ..., (p), (1) are all the ideals.[6]
The Galois ring GR(p^n, r) contains a unique subring isomorphic to GR(p^n, s) for every s which divides r. These are the only subrings of GR(p^n, r).[10]
The units of a Galois ring R are all the elements which are not multiples of p. The group of units, R^×, can be decomposed as a direct product G_1 × G_2, as follows. The subgroup G_1 is the group of (p^r − 1)-th roots of unity. It is a cyclic group of order p^r − 1. The subgroup G_2 is 1 + pR, consisting of all elements congruent to 1 modulo p. It is a group of order p^{r(n−1)}, with the following structure: if p is odd, or if p = 2 and n ≤ 2, then G_2 is a direct product of r copies of a cyclic group of order p^{n−1}; if p = 2 and n ≥ 3, then G_2 is a direct product of a group of order 2, a cyclic group of order 2^{n−2}, and r − 1 cyclic groups of order 2^{n−1}.
This description generalizes the structure of the multiplicative group of integers modulo p^n, which is the case r = 1.[11]
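These sizes can be sanity-checked in GR(4, 3), where the unit group should have (2^3 − 1) · 2^3 = 56 elements, namely everything outside the maximal ideal (2). A brute-force sketch (assumed representation, not from the source):

```python
# Count the units of GR(4, 3) by brute force: u is a unit iff some v gives
# u * v = 1.  Expected: 56 = (2^3 - 1) * 2^3, exactly the non-multiples of 2.

from itertools import product

N = 4

def mul(a, b):
    """Multiply in GR(4, 3) = (Z/4Z)[x] / (x^3 + 2x^2 + x - 1)."""
    prod = [0] * 5
    for i in range(3):
        for j in range(3):
            prod[i + j] += a[i] * b[j]
    for deg in (4, 3):                  # reduce using x^3 = 1 + 3x + 2x^2
        c, prod[deg] = prod[deg], 0
        prod[deg - 3] += c
        prod[deg - 2] += 3 * c
        prod[deg - 1] += 2 * c
    return tuple(v % N for v in prod[:3])

elements = list(product(range(N), repeat=3))
units = [u for u in elements
         if any(mul(u, v) == (1, 0, 0) for v in elements)]

print(len(units))                                 # 56
# every unit has at least one odd coefficient (it lies outside the ideal (2)):
print(all(any(c % 2 for c in u) for u in units))  # True
```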
Analogous to the automorphisms of the finite field F_{p^r}, the automorphism group of the Galois ring GR(p^n, r) is a cyclic group of order r.[12] The automorphisms can be described explicitly using the p-adic representation. Specifically, the map

α_0 + α_1 p + ... + α_{n−1} p^{n−1} ↦ α_0^p + α_1^p p + ... + α_{n−1}^p p^{n−1}
(where each α_i is in the set {0, 1, ξ, ξ^2, ..., ξ^{p^r − 2}}) is an automorphism, which is called the generalized Frobenius automorphism. The fixed points of the generalized Frobenius automorphism are the elements of the subring Z/p^nZ. Iterating the generalized Frobenius automorphism gives all the automorphisms of the Galois ring.[13]
The automorphism group can be thought of as the Galois group of GR(p^n, r) over Z/p^nZ, and the ring GR(p^n, r) is a Galois extension of Z/p^nZ. More generally, whenever r is a multiple of s, GR(p^n, r) is a Galois extension of GR(p^n, s), with Galois group isomorphic to Gal(F_{p^r} / F_{p^s}).[14][13]
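The generalized Frobenius can be exercised concretely in GR(4, 3): decompose x = α_0 + 2α_1 into 2-adic digits and square each digit in the ring. The sketch below (assumed representation) checks that the resulting map is multiplicative, fixes exactly the four elements of the subring Z/4Z, and has order r = 3:

```python
# The generalized Frobenius on GR(4, 3): write x = a0 + 2*a1 with a0, a1 in
# the Teichmueller set T = {0, 1, xi, ..., xi^6}, then map
# x -> a0^2 + 2*a1^2 (squares taken in the ring).  Tuples are coeffs mod 4.

N = 4

def mul(a, b):
    """Multiply in GR(4, 3) = (Z/4Z)[x] / (x^3 + 2x^2 + x - 1)."""
    prod = [0] * 5
    for i in range(3):
        for j in range(3):
            prod[i + j] += a[i] * b[j]
    for deg in (4, 3):                  # reduce using x^3 = 1 + 3x + 2x^2
        c, prod[deg] = prod[deg], 0
        prod[deg - 3] += c
        prod[deg - 2] += 3 * c
        prod[deg - 1] += 2 * c
    return tuple(v % N for v in prod[:3])

def add(a, b):
    return tuple((x + y) % N for x, y in zip(a, b))

xi = (0, 1, 0)
T = [(0, 0, 0), (1, 0, 0)]
while len(T) < 8:
    T.append(mul(T[-1], xi))

# 2-adic digits of every element: x = a0 + 2*a1 with a0, a1 in T (unique)
digits = {add(a0, tuple(2 * c % N for c in a1)): (a0, a1)
          for a0 in T for a1 in T}

def frob(x):
    a0, a1 = digits[x]
    return add(mul(a0, a0), tuple(2 * c % N for c in mul(a1, a1)))

E = list(digits)
print(all(frob(mul(x, y)) == mul(frob(x), frob(y)) for x in E for y in E))
print(sorted(x for x in E if frob(x) == x))  # the subring Z/4Z: 0, 1, 2, 3
print(all(frob(frob(frob(x))) == x for x in E))  # the automorphism has order 3
```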
https://en.wikipedia.org/wiki/Galois_ring
Johann Carl Friedrich Gauss (/ɡaʊs/;[2] German: Gauß [kaʁl ˈfʁiːdʁɪç ˈɡaʊs];[3][4] Latin: Carolus Fridericus Gauss; 30 April 1777 – 23 February 1855) was a German mathematician, astronomer, geodesist, and physicist, who contributed to many fields in mathematics and science. He was director of the Göttingen Observatory and professor of astronomy from 1807 until his death in 1855.
While studying at the University of Göttingen, he propounded several mathematical theorems. As an independent scholar, he wrote the masterpieces Disquisitiones Arithmeticae and Theoria motus corporum coelestium. Gauss produced the second and third complete proofs of the fundamental theorem of algebra. In number theory, he made numerous contributions, such as the composition law, the law of quadratic reciprocity, and the Fermat polygonal number theorem. He also contributed to the theory of binary and ternary quadratic forms, the construction of the heptadecagon, and the theory of hypergeometric series. Due to Gauss' extensive and fundamental contributions to science and mathematics, more than 100 mathematical and scientific concepts are named after him.
Gauss was instrumental in the identification of Ceres as a dwarf planet. His work on the motion of planetoids disturbed by large planets led to the introduction of the Gaussian gravitational constant and the method of least squares, which he had discovered before Adrien-Marie Legendre published it. Gauss led the geodetic survey of the Kingdom of Hanover together with an arc measurement project from 1820 to 1844; he was one of the founders of geophysics and formulated the fundamental principles of magnetism. His practical work led to the invention of the heliotrope in 1821, a magnetometer in 1833 and – with Wilhelm Eduard Weber – the first electromagnetic telegraph in 1833.
Gauss was the first to discover and study non-Euclidean geometry, which he also named. He developed a fast Fourier transform some 160 years before John Tukey and James Cooley.
Gauss refused to publish incomplete work and left several works to be edited posthumously. He believed that the act of learning, not possession of knowledge, provided the greatest enjoyment. Gauss was not a committed or enthusiastic teacher, generally preferring to focus on his own work. Nevertheless, some of his students, such as Dedekind and Riemann, became well-known and influential mathematicians in their own right.
Gauss was born on 30 April 1777 in Brunswick in the Duchy of Brunswick-Wolfenbüttel (now in the German state of Lower Saxony). His family was of relatively low social status.[5] His father Gebhard Dietrich Gauss (1744–1808) worked variously as a butcher, bricklayer, gardener, and treasurer of a death-benefit fund. Gauss characterized his father as honourable and respected, but rough and dominating at home. He was experienced in writing and calculating, whereas his second wife Dorothea, Carl Friedrich's mother, was nearly illiterate.[6] He had one elder brother from his father's first marriage.[7]
Gauss was a child prodigy in mathematics. When the elementary teachers noticed his intellectual abilities, they brought him to the attention of the Duke of Brunswick, who sent him to the local Collegium Carolinum,[a] which he attended from 1792 to 1795 with Eberhard August Wilhelm von Zimmermann as one of his teachers.[9][10][11] Thereafter the Duke granted him the resources for studies of mathematics, sciences, and classical languages at the University of Göttingen until 1798.[12] His professor in mathematics was Abraham Gotthelf Kästner, whom Gauss called "the leading mathematician among poets, and the leading poet among mathematicians" because of his epigrams.[13][b] Astronomy was taught by Karl Felix Seyffer, with whom Gauss stayed in correspondence after graduation;[14] Olbers and Gauss mocked him in their correspondence.[15] On the other hand, he thought highly of Georg Christoph Lichtenberg, his teacher of physics, and of Christian Gottlob Heyne, whose lectures in classics Gauss attended with pleasure.[14] Fellow students of this time were Johann Friedrich Benzenberg, Farkas Bolyai, and Heinrich Wilhelm Brandes.[14]
He was likely a self-taught student in mathematics since he independently rediscovered several theorems.[11] He solved a geometrical problem that had occupied mathematicians since the Ancient Greeks when he determined in 1796 which regular polygons can be constructed by compass and straightedge. This discovery ultimately led Gauss to choose mathematics instead of philology as a career.[16] Gauss's mathematical diary, a collection of short remarks about his results from the years 1796 until 1814, shows that many ideas for his mathematical magnum opus Disquisitiones Arithmeticae (1801) date from this time.[17]
As an elementary student, Gauss and his class were tasked by their teacher, J. G. Büttner, to sum the numbers from 1 to 100. Much to Büttner's surprise, Gauss replied with the correct answer of 5050 in a vastly faster time than expected.[18] Gauss had realised that the sum could be rearranged as 50 pairs of 101 (1 + 100 = 101, 2 + 99 = 101, etc.). Thus, he simply multiplied 50 by 101.[19] Other accounts state that he computed the sum as 100 sets of 101 and divided by 2.[20]
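The pairing argument is easy to verify; the following Python snippet is a modern illustration of the anecdote's arithmetic, not anything from Gauss's time:

```python
# Pair 1 with 100, 2 with 99, ..., 50 with 51: each pair sums to 101.
pairs = [(k, 101 - k) for k in range(1, 51)]
assert all(a + b == 101 for a, b in pairs)

# 50 pairs of 101 give the whole sum in one multiplication.
total = 50 * 101
print(total)  # 5050
```

The second account mentioned above is the same idea run forwards and backwards at once: 100 pairs of 101, halved.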
Gauss graduated as a Doctor of Philosophy in 1799, not in Göttingen, as is sometimes stated,[c][21] but, at the Duke of Brunswick's special request, from the University of Helmstedt, the only state university of the duchy. Johann Friedrich Pfaff assessed his doctoral thesis, and Gauss received the degree in absentia without further oral examination.[11] The Duke then granted him the cost of living as a private scholar in Brunswick. Gauss subsequently refused calls from the Russian Academy of Sciences in St. Petersburg and Landshut University.[22][23] Later, the Duke promised him the foundation of an observatory in Brunswick in 1804. Architect Peter Joseph Krahe made preliminary designs, but one of Napoleon's wars cancelled those plans:[24] the Duke was killed in the battle of Jena in 1806. The duchy was abolished in the following year, and Gauss's financial support stopped.
When Gauss was calculating asteroid orbits in the first years of the century, he established contact with the astronomical communities of Bremen and Lilienthal, especially Wilhelm Olbers, Karl Ludwig Harding, and Friedrich Wilhelm Bessel, forming part of the informal group of astronomers known as the Celestial police.[25] One of their aims was the discovery of further planets. They assembled data on asteroids and comets as a basis for Gauss's research on their orbits, which he later published in his astronomical magnum opus Theoria motus corporum coelestium (1809).[26]
In November 1807, Gauss was hired by the University of Göttingen, then an institution of the newly founded Kingdom of Westphalia under Jérôme Bonaparte, as full professor and director of the astronomical observatory,[27] and kept the chair until his death in 1855. He was soon confronted with the demand for two thousand francs from the Westphalian government as a war contribution, which he could not afford to pay. Both Olbers and Laplace wanted to help him with the payment, but Gauss refused their assistance. Finally, an anonymous person from Frankfurt, later discovered to be Prince-primate Dalberg,[28] paid the sum.[27]
Gauss took on the directorship of the 60-year-old observatory, founded in 1748 by Prince-elector George II and built on a converted fortification tower,[29] with usable but partly out-of-date instruments.[30] The construction of a new observatory had been approved in principle by Prince-elector George III since 1802, and the Westphalian government continued the planning,[31] but Gauss could not move to his new place of work until September 1816.[23] He obtained new, up-to-date instruments, including two meridian circles from Repsold[32] and Reichenbach,[33] and a heliometer from Fraunhofer.[34]
The scientific activity of Gauss, besides pure mathematics, can be roughly divided into three periods: astronomy was the main focus in the first two decades of the 19th century, geodesy in the third decade, and physics, mainly magnetism, in the fourth decade.[35]
Gauss made no secret of his aversion to giving academic lectures.[22][23] But from the start of his academic career at Göttingen, he continuously gave lectures until 1854.[36] He often complained about the burdens of teaching, feeling that it was a waste of his time. On the other hand, he occasionally described some students as talented.[22] Most of his lectures dealt with astronomy, geodesy, and applied mathematics;[37] he gave only three lectures on subjects of pure mathematics.[22][d] Some of Gauss's students went on to become renowned mathematicians, physicists, and astronomers: Moritz Cantor, Dedekind, Dirksen, Encke, Gould,[e] Heine, Klinkerfues, Kupffer, Listing, Möbius, Nicolai, Riemann, Ritter, Schering, Scherk, Schumacher, von Staudt, Stern, Ursin; as well as the geoscientists Sartorius von Waltershausen and Wappäus.[22]
Gauss did not write any textbook and disliked the popularization of scientific matters. His only attempts at popularization were his works on the date of Easter (1800/1802) and the essay Erdmagnetismus und Magnetometer of 1836.[39] Gauss published his papers and books exclusively in Latin or German.[f][g] He wrote Latin in a classical style but used some customary modifications set by contemporary mathematicians.[42]
Gauss gave his inaugural lecture at Göttingen University in 1808. He described his approach to astronomy as based on reliable observations and accurate calculations, rather than on belief or empty hypothesizing.[37] At the university, he was accompanied by a staff of other lecturers in his disciplines, who completed the educational program; these included the mathematician Thibaut with his lectures,[44] the physicist Mayer, known for his textbooks,[45] his successor Weber from 1831, and, in the observatory, Harding, who took the main part of lectures in practical astronomy. When the observatory was completed, Gauss occupied the western wing of the new observatory, while Harding took the eastern.[23] They had once been on friendly terms, but over time they became alienated, possibly, as some biographers presume, because Gauss had wished the equal-ranked Harding to be no more than his assistant or observer.[23][h] Gauss used the new meridian circles nearly exclusively, and kept them away from Harding, except for some very rare joint observations.[47]
Brendel subdivides Gauss's astronomical activity chronologically into seven periods, of which the years from 1820 onward are taken as a "period of lower astronomical activity".[48] The new, well-equipped observatory did not work as effectively as others; Gauss's astronomical research had the character of a one-man enterprise without a long-term observation program, and the university established a place for an assistant only after Harding died in 1834.[46][47][i]
Nevertheless, Gauss twice refused the opportunity to solve the problem, turning down offers from Berlin in 1810 and 1825 to become a full member of the Prussian Academy without burdening lecturing duties, as well as from Leipzig University in 1810 and from Vienna University in 1842, perhaps because of the family's difficult situation.[46] Gauss's salary was raised from 1000 Reichsthaler in 1810 to 2500 Reichsthaler in 1824,[23] and in his later years he was one of the best-paid professors of the university.[49]
When Gauss was asked for help in 1810 by his colleague and friend Friedrich Wilhelm Bessel, who was in trouble at Königsberg University because of his lack of an academic title, Gauss provided a doctorate honoris causa for Bessel from the Philosophy Faculty of Göttingen in March 1811.[j] Gauss gave another recommendation for an honorary degree for Sophie Germain, but only shortly before her death, so she never received it.[52] He also gave successful support to the mathematician Gotthold Eisenstein in Berlin.[53]
Gauss was loyal to the House of Hanover. After King William IV died in 1837, the new Hanoverian King Ernest Augustus annulled the 1833 constitution. Seven professors, later known as the "Göttingen Seven", protested against this, among them his friend and collaborator Wilhelm Weber and Gauss's son-in-law Heinrich Ewald. All of them were dismissed, and three of them were expelled, but Ewald and Weber could stay in Göttingen. Gauss was deeply affected by this quarrel but saw no possibility to help them.[54]
Gauss took part in academic administration: three times he was elected as dean of the Faculty of Philosophy.[55] Being entrusted with the widow's pension fund of the university, he dealt with actuarial science and wrote a report on the strategy for stabilizing the benefits. He was appointed director of the Royal Academy of Sciences in Göttingen for nine years.[55]
Gauss remained mentally active into his old age, even while suffering from gout and general unhappiness. On 23 February 1855, he died of a heart attack in Göttingen[13] and was interred in the Albani Cemetery there. Heinrich Ewald, Gauss's son-in-law, and Wolfgang Sartorius von Waltershausen, Gauss's close friend and biographer, gave eulogies at his funeral.[56]
Gauss was a successful investor and accumulated considerable wealth with stocks and securities, amounting to a value of more than 150,000 Thaler; after his death, about 18,000 Thaler were found hidden in his rooms.[57]
The day after Gauss's death his brain was removed, preserved, and studied by Rudolf Wagner, who found its mass to be slightly above average, at 1,492 grams (3.29 lb).[58][59] Wagner's son Hermann, a geographer, estimated the cerebral area to be 219,588 square millimetres (340.362 sq in) in his doctoral thesis.[60] In 2013, a neurobiologist at the Max Planck Institute for Biophysical Chemistry in Göttingen discovered that Gauss's brain had been mixed up soon after the first investigations, due to mislabelling, with that of the physician Conrad Heinrich Fuchs, who died in Göttingen a few months after Gauss.[61] A further investigation showed no remarkable anomalies in the brains of either person. Thus, all investigations of Gauss's brain until 1998, except the first ones of Rudolf and Hermann Wagner, actually refer to the brain of Fuchs.[62]
Gauss married Johanna Osthoff on 9 October 1805 in St. Catherine's church in Brunswick.[63] They had two sons and one daughter: Joseph (1806–1873), Wilhelmina (1808–1840), and Louis (1809–1810). Johanna died on 11 October 1809, one month after the birth of Louis, who himself died a few months later.[64] Gauss chose the first names of his children in honour of Giuseppe Piazzi, Wilhelm Olbers, and Karl Ludwig Harding, the discoverers of the first asteroids.[65]
On 4 August 1810, Gauss married Wilhelmine (Minna) Waldeck, a friend of his first wife, with whom he had three more children: Eugen (later Eugene) (1811–1896), Wilhelm (later William) (1813–1879), and Therese (1816–1864). Minna Gauss died on 12 September 1831 after being seriously ill for more than a decade.[66] Therese then took over the household and cared for Gauss for the rest of his life; after her father's death, she married the actor Constantin Staufenau.[67] Her sister Wilhelmina married the orientalist Heinrich Ewald.[68] Gauss's mother Dorothea lived in his house from 1817 until she died in 1839.[12]
The eldest son Joseph, while still a schoolboy, helped his father as an assistant during the survey campaign in the summer of 1821. After a short time at university, in 1824 Joseph joined the Hanoverian army and assisted in surveying again in 1829. In the 1830s he was responsible for the enlargement of the survey network into the western parts of the kingdom. With his geodetic qualifications, he left the service and engaged in the construction of the railway network as director of the Royal Hanoverian State Railways. In 1836 he studied the railroad system in the US for some months.[49][k]
Eugen left Göttingen in September 1830 and emigrated to the United States, where he spent five years with the army. He then worked for the American Fur Company in the Midwest. He later moved to Missouri and became a successful businessman.[49] Wilhelm married a niece of the astronomer Bessel;[71] he then moved to Missouri, started as a farmer, and became wealthy in the shoe business in St. Louis in later years.[72] Eugene and William have numerous descendants in America, but the Gauss descendants left in Germany all derive from Joseph, as the daughters had no children.[49]
In the first two decades of the 19th century, Gauss was the only important mathematician in Germany comparable to the leading French mathematicians.[73] His Disquisitiones Arithmeticae was the first mathematical book from Germany to be translated into the French language.[74]
Gauss was "in front of the new development" with documented research since 1799, his wealth of new ideas, and his rigour of demonstration.[75] In contrast to previous mathematicians like Leonhard Euler, who let their readers take part in their reasoning, including certain erroneous deviations from the correct path,[76] Gauss introduced a new style of direct and complete exposition that did not attempt to show the reader the author's train of thought.[77]
Gauss was the first to restore that rigor of demonstration which we admire in the ancients and which had been forced unduly into the background by the exclusive interest of the preceding period in new developments.
But for himself, he propagated a quite different ideal, given in a letter to Farkas Bolyai as follows:[78]
It is not knowledge, but the act of learning, not possession but the act of getting there, which grants the greatest enjoyment. When I have clarified and exhausted a subject, then I turn away from it, in order to go into darkness again.
His posthumous papers, his scientific diary,[79] and short glosses in his own textbooks show that he worked empirically to a great extent.[80][81] He was a lifelong busy and enthusiastic calculator, working extraordinarily quickly and checking his results through estimation. Nevertheless, his calculations were not always free from mistakes.[82] He coped with the enormous workload by using skillful tools.[83] Gauss used numerous mathematical tables, examined their exactness, and constructed new tables on various matters for personal use.[84] He developed new tools for effective calculation, for example Gaussian elimination.[85] Gauss's calculations and the tables he prepared were often more precise than practically necessary.[86] Very likely, this method gave him additional material for his theoretical work.[83][87]
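Gaussian elimination reduces a linear system to triangular form and then back-substitutes. A minimal Python sketch (with partial pivoting, a modern refinement not described in the source) might look like:

```python
def gaussian_elimination(A, b):
    """Solve A x = b by forward elimination with partial pivoting
    followed by back substitution. A is a list of rows; b a list."""
    n = len(A)
    # Build the augmented matrix [A | b].
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # Eliminate the column below the pivot.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back substitution on the triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# 2x + y = 5 and x + 3y = 10 have the solution x = 1, y = 3.
print(gaussian_elimination([[2, 1], [1, 3]], [5, 10]))
```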
Gauss was only willing to publish work when he considered it complete and above criticism. This perfectionism was in keeping with the motto of his personal seal Pauca sed Matura ("Few, but Ripe"). Many colleagues encouraged him to publicize new ideas and sometimes rebuked him if he hesitated too long, in their opinion. Gauss defended himself by claiming that the initial discovery of ideas was easy, but preparing a presentable elaboration was a demanding matter for him, for either lack of time or "serenity of mind".[39] Nevertheless, he published many short communications of urgent content in various journals, but left a considerable literary estate, too.[88][89] Gauss referred to mathematics as "the queen of sciences" and arithmetic as "the queen of mathematics",[90] and supposedly once espoused a belief that immediately understanding Euler's identity was a benchmark for becoming a first-class mathematician.[91]
On certain occasions, Gauss claimed that the ideas of another scholar had already been in his possession previously. Thus his concept of priority as "the first to discover, not the first to publish" differed from that of his scientific contemporaries.[92] In contrast to his perfectionism in presenting mathematical ideas, his citations were criticized as negligent. He justified himself with an unusual view of correct citation practice: he would give complete references only to the previous authors of importance, whom no one should ignore; but citing in this way would require knowledge of the history of science and more time than he wished to spend.[39]
Soon after Gauss's death, his friend Sartorius published the first biography (1856), written in a rather enthusiastic style. Sartorius saw him as a serene and forward-striving man with childlike modesty,[93] but also of "iron character"[94] with an unshakeable strength of mind.[95] Apart from his closer circle, others regarded him as reserved and unapproachable, "like an Olympian sitting enthroned on the summit of science".[96] His close contemporaries agreed that Gauss was a man of difficult character. He often refused to accept compliments. His visitors were occasionally irritated by his grumpy behaviour, but a short time later his mood could change, and he would become a charming, open-minded host.[39] Gauss disliked polemic natures; together with his colleague Hausmann he opposed a call for Justus Liebig to a university chair in Göttingen, "because he was always involved in some polemic."[97]
Gauss's life was overshadowed by severe problems in his family. When his first wife Johanna suddenly died shortly after the birth of their third child, he revealed his grief in a last letter to his dead wife in the style of an ancient threnody, the most personal of his surviving documents.[98][99] His second wife and his two daughters suffered from tuberculosis.[100] In a letter to Bessel dated December 1831, Gauss hinted at his distress, describing himself as "the victim of the worst domestic sufferings".[39]
Because of his wife's illness, both younger sons were educated for some years in Celle, far from Göttingen. The military career of his elder son Joseph ended after more than two decades at the poorly paid rank of first lieutenant, although he had acquired a considerable knowledge of geodesy. He needed financial support from his father even after he was married.[49] The second son Eugen shared a good measure of his father's talent in computation and languages, but had a lively and sometimes rebellious character. He wanted to study philology, whereas Gauss wanted him to become a lawyer. Having run up debts and caused a scandal in public,[101] Eugen suddenly left Göttingen under dramatic circumstances in September 1830 and emigrated via Bremen to the United States. He wasted the little money he had taken with him to start, after which his father refused further financial support.[49] The youngest son Wilhelm wanted to qualify for agricultural administration, but had difficulties getting an appropriate education, and eventually emigrated as well. Only Gauss's youngest daughter Therese accompanied him in his last years of life.[67]
In his later years Gauss habitually collected various types of useful or useless numerical data, such as the number of paths from his home to certain places in Göttingen or people's ages in days; he congratulated Humboldt in December 1851 for having reached the same age as Isaac Newton at his death, calculated in days.[102]
Beyond his excellent knowledge of Latin, he was also acquainted with modern languages. Gauss read both classical and modern literature, including English and French works in the original languages.[103][m] His favorite English author was Walter Scott, his favorite German author Jean Paul. At the age of 62, he began to teach himself Russian, very likely to understand scientific writings from Russia, among them those of Lobachevsky on non-Euclidean geometry.[105][106] Gauss liked singing and went to concerts.[107] He was an avid newspaper reader; in his last years, he would visit an academic press salon of the university every noon.[108] Gauss did not care much for philosophy, and mocked the "splitting hairs of the so-called metaphysicians", by which he meant proponents of the contemporary school of Naturphilosophie.[109]
Gauss had an "aristocratic and through and through conservative nature", with little respect for people's intelligence and morals, following the motto "mundus vult decipi".[108] He disliked Napoleon and his system and was horrified by violence and revolution of all kinds. Thus he condemned the methods of the Revolutions of 1848, though he agreed with some of their aims, such as that of a unified Germany.[94][n] He had a low estimation of the constitutional system, and he criticized parliamentarians of his time for their perceived ignorance and logical errors.[108]
Some Gauss biographers have speculated on his religious beliefs. He sometimes said "God arithmetizes"[110] and "I succeeded – not on account of my hard efforts, but by the grace of the Lord."[111] Gauss was a member of the Lutheran church, like most of the population in northern Germany, but it seems that he did not believe all Lutheran dogma or understand the Bible fully literally.[112] According to Sartorius, Gauss's religious tolerance, "insatiable thirst for truth", and sense of justice were motivated by his religious convictions.[113]
In his doctoral thesis from 1799, Gauss proved the fundamental theorem of algebra, which states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. Mathematicians including Jean le Rond d'Alembert had produced false proofs before him, and Gauss's dissertation contains a critique of d'Alembert's work. He subsequently produced three other proofs, the last one in 1849 being generally rigorous. His attempts led to considerable clarification of the concept of complex numbers.[114]
In the preface to the Disquisitiones, Gauss dates the beginning of his work on number theory to 1795. By studying the works of previous mathematicians like Fermat, Euler, Lagrange, and Legendre, he realized that these scholars had already found much of what he had independently discovered.[115] The Disquisitiones Arithmeticae, written in 1798 and published in 1801, consolidated number theory as a discipline and covered both elementary and algebraic number theory. Therein he introduces the triple bar symbol (≡) for congruence and uses it for a clean presentation of modular arithmetic.[116] It deals with the unique factorization theorem and primitive roots modulo n. In the main sections, Gauss presents the first two proofs of the law of quadratic reciprocity[117] and develops the theories of binary[118] and ternary quadratic forms.[119]
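The reciprocity law can be checked numerically. The sketch below computes Legendre symbols via Euler's criterion (a standard technique, not one of Gauss's own proofs) and verifies the relation for one pair of odd primes:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion:
    a^((p-1)/2) mod p equals 1, p-1, or 0."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

# Quadratic reciprocity: for distinct odd primes p and q,
# (p/q)(q/p) = (-1)^(((p-1)/2) * ((q-1)/2)).
p, q = 13, 17
lhs = legendre(p, q) * legendre(q, p)
rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
print(lhs, rhs)  # 1 1 for this pair, since both exponent factors are even
```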
The Disquisitiones include the Gauss composition law for binary quadratic forms, as well as the enumeration of the number of representations of an integer as the sum of three squares. As an almost immediate corollary of his theorem on three squares, he proves the triangular case of the Fermat polygonal number theorem for n = 3.[120] From several analytic results on class numbers that Gauss gives without proof towards the end of the fifth section,[121] it appears that Gauss already knew the class number formula in 1801.[122]
In the last section, Gauss gives proof for the constructibility of a regular heptadecagon (17-sided polygon) with straightedge and compass by reducing this geometrical problem to an algebraic one.[123] He shows that a regular polygon is constructible if the number of its sides is either a power of 2 or the product of a power of 2 and any number of distinct Fermat primes. In the same section, he gives a result on the number of solutions of certain cubic polynomials with coefficients in finite fields, which amounts to counting integral points on an elliptic curve.[124] An unfinished chapter, consisting of work done during 1797–1799, was found among his papers after his death.[125][126]
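Gauss's criterion is easy to apply mechanically. The Python sketch below tests it for small polygon sizes; note that Gauss proved sufficiency, while the converse is due to Pierre Wantzel, and only five Fermat primes are currently known:

```python
# The only Fermat primes currently known (whether more exist is open).
FERMAT_PRIMES = (3, 5, 17, 257, 65537)

def constructible(n):
    """A regular n-gon is constructible with compass and straightedge
    iff n is a power of 2 times a product of distinct Fermat primes."""
    if n < 3:
        return False
    while n % 2 == 0:
        n //= 2
    for p in FERMAT_PRIMES:
        if n % p == 0:
            n //= p  # each Fermat prime may appear at most once
    return n == 1

print([n for n in range(3, 21) if constructible(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20]
```

The heptadecagon (n = 17) passes because 17 is itself a Fermat prime; n = 9 fails because the Fermat prime 3 would have to appear twice.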
One of Gauss's first results was his empirical conjecture of 1792, later called the prime number theorem, giving an estimate of the number of prime numbers by using the integral logarithm.[127][o]
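The conjecture can be tested numerically: the prime count π(x) stays close to the offset logarithmic integral. A rough Python sketch (a sieve plus trapezoidal integration; the step count is an arbitrary choice for illustration):

```python
from math import log

def prime_count(x):
    """pi(x): count the primes <= x with a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

def li(x, steps=100000):
    """Offset logarithmic integral: integral from 2 to x of dt / ln t,
    approximated by the trapezoidal rule."""
    h = (x - 2) / steps
    s = 0.5 * (1 / log(2) + 1 / log(x))
    for k in range(1, steps):
        s += 1 / log(2 + k * h)
    return s * h

# pi(100000) = 9592, while the integral comes out near 9629:
# an error of well under one percent.
print(prime_count(100000), round(li(100000)))
```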
In 1816, Olbers encouraged Gauss to compete for a prize from the French Academy for a proof of Fermat's Last Theorem; he refused, considering the topic uninteresting. However, after his death a short undated paper was found with proofs of the theorem for the cases n = 3 and n = 5.[129] The particular case of n = 3 was proved much earlier by Leonhard Euler, but Gauss developed a more streamlined proof which made use of Eisenstein integers; though more general, the proof was simpler than in the real integers case.[130]
Gauss contributed to solving the Kepler conjecture in 1831 with the proof that the greatest packing density of spheres in three-dimensional space is achieved when the centres of the spheres form a face-centred cubic arrangement,[131] when he reviewed a book by Ludwig August Seeber on the theory of reduction of positive ternary quadratic forms.[132] Having noticed some gaps in Seeber's proof, he simplified many of his arguments, proved the central conjecture, and remarked that this theorem is equivalent to the Kepler conjecture for regular arrangements.[133]
In two papers on biquadratic residues (1828, 1832) Gauss introduced the ring of Gaussian integers Z[i]{\displaystyle \mathbb {Z} [i]}, showed that it is a unique factorization domain,[134] and generalized some key arithmetic concepts, such as Fermat's little theorem and Gauss's lemma. The main objective of introducing this ring was to formulate the law of biquadratic reciprocity[134] – as Gauss discovered, rings of complex integers are the natural setting for such higher reciprocity laws.[135]
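A small Python sketch of the multiplicative norm on the Gaussian integers, the basic tool in arguments about this ring; the splitting of 5 shown here is a standard textbook example, not taken from Gauss's papers:

```python
# The norm N(a+bi) = a^2 + b^2 on Z[i] is multiplicative.
# For example, the rational prime 5 splits as 5 = (2+i)(2-i),
# with N(2+i) = N(2-i) = 5.
def norm(z):
    return int(z.real) ** 2 + int(z.imag) ** 2

z, w = complex(2, 1), complex(2, -1)
assert z * w == complex(5, 0)
assert norm(z) * norm(w) == norm(z * w) == 25
print(norm(z), norm(w), norm(z * w))  # 5 5 25
```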
In the second paper, he stated the general law of biquadratic reciprocity and proved several special cases of it. In an earlier publication from 1818, containing his fifth and sixth proofs of quadratic reciprocity, he claimed that the techniques of these proofs (Gauss sums) could be applied to prove higher reciprocity laws.[136]
One of Gauss's first discoveries was the notion of the arithmetic-geometric mean (AGM) of two positive real numbers.[137] He discovered its relation to elliptic integrals in the years 1798–1799 through Landen's transformation, and a diary entry recorded the discovery of the connection of Gauss's constant to lemniscatic elliptic functions, a result that Gauss stated "will surely open an entirely new field of analysis".[138] He also made early inroads into the more formal issues of the foundations of complex analysis, and from a letter to Bessel in 1811 it is clear that he knew the "fundamental theorem of complex analysis" – Cauchy's integral theorem – and understood the notion of complex residues when integrating around poles.[124][139]
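The AGM iteration itself is short. The sketch below computes agm(1, √2), whose reciprocal is Gauss's constant (approximately 0.8346), the quantity his diary entry tied to the lemniscate:

```python
from math import sqrt

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean: replace (a, b) by their arithmetic
    and geometric means until the two values coincide.
    Convergence is quadratic, so a handful of iterations suffices."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, sqrt(a * b)
    return a

# 1 / agm(1, sqrt(2)) is Gauss's constant, about 0.834627.
print(agm(1, sqrt(2)), 1 / agm(1, sqrt(2)))
```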
Euler's pentagonal numbers theorem, together with other researches on the AGM and lemniscatic functions, led him to plenty of results on Jacobi theta functions,[124] culminating in the discovery in 1808 of what was later called the Jacobi triple product identity, which includes Euler's theorem as a special case.[140] His works show that he knew modular transformations of order 3, 5, and 7 for elliptic functions from 1808.[141][p][q]
Several mathematical fragments in his Nachlass indicate that he knew parts of the modern theory of modular forms.[124] In his work on the multivalued AGM of two complex numbers, he discovered a deep connection between the infinitely many values of the AGM and its two "simplest values".[138] In his unpublished writings he recognized and made a sketch of the key concept of a fundamental domain for the modular group.[143][144] One of Gauss's sketches of this kind was a drawing of a tessellation of the unit disk by "equilateral" hyperbolic triangles with all angles equal to π/4{\displaystyle \pi /4}.[145]
An example of Gauss's insight in analysis is the cryptic remark that the principles of circle division by compass and straightedge can also be applied to the division of the lemniscate curve, which inspired Abel's theorem on lemniscate division.[r] Another example is his publication "Summatio quarundam serierum singularium" (1811) on the determination of the sign of quadratic Gauss sums, in which he solved the main problem by introducing q-analogs of binomial coefficients and manipulating them by several original identities that seem to stem from his work on elliptic function theory; however, Gauss cast his argument in a formal way that does not reveal its origin in elliptic function theory, and only the later work of mathematicians such as Jacobi and Hermite has exposed the crux of his argument.[146]
In the "Disquisitiones generales circa series infinitam..." (1813), he provides the first systematic treatment of the general hypergeometric function F(α,β,γ,x){\displaystyle F(\alpha ,\beta ,\gamma ,x)}, and shows that many of the functions known at the time are special cases of the hypergeometric function.[147] This work is the first exact inquiry into convergence of infinite series in the history of mathematics.[148] Furthermore, it deals with infinite continued fractions arising as ratios of hypergeometric functions, which are now called Gauss continued fractions.[149]
In 1823, Gauss won the prize of the Danish Society with an essay on conformal mappings, which contains several developments that pertain to the field of complex analysis.[150] Gauss stated that angle-preserving mappings in the complex plane must be complex analytic functions, and used the later-named Beltrami equation to prove the existence of isothermal coordinates on analytic surfaces. The essay concludes with examples of conformal mappings into a sphere and an ellipsoid of revolution.[151]
Gauss often deduced theorems inductively from numerical data he had collected empirically.[81] As such, the use of efficient algorithms to facilitate calculations was vital to his research, and he made many contributions to numerical analysis, such as the method of Gaussian quadrature, published in 1816.[152]
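As an illustration of the idea behind Gaussian quadrature, the two-point Gauss–Legendre rule integrates every polynomial of degree up to 3 over [-1, 1] exactly with just two function evaluations (a minimal modern example, not Gauss's original construction):

```python
from math import sqrt

def gauss2(f):
    """Two-point Gauss-Legendre rule on [-1, 1]:
    nodes at +/- 1/sqrt(3), both weights equal to 1.
    Exact for polynomials of degree up to 3."""
    x = 1 / sqrt(3)
    return f(-x) + f(x)

# The integral of t^3 + t^2 over [-1, 1] is exactly 2/3:
# the odd cubic term cancels, and the rule nails the quadratic term.
print(gauss2(lambda t: t ** 3 + t ** 2))
```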
In a private letter to Gerling from 1823,[153] he described a solution of a 4×4 system of linear equations with the Gauss–Seidel method, an "indirect" iterative method for the solution of linear systems, and recommended it over the usual method of "direct elimination" for systems of more than two equations.[154]
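A generic modern sketch of the Gauss–Seidel sweep, shown on a small diagonally dominant 2×2 system rather than the 4×4 system of the letter:

```python
def gauss_seidel(A, b, iterations=50):
    """Each sweep updates the unknowns in turn, immediately reusing
    the newest values; repeated sweeps converge for diagonally
    dominant systems."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# 4x + y = 6 and x + 3y = 7 have the solution x = 1, y = 2.
print(gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0]))
```

The "indirect" character is visible above: the method never transforms the system, it only refines a guess.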
Gauss invented an algorithm for calculating what are now called discrete Fourier transforms when calculating the orbits of Pallas and Juno in 1805, 160 years before Cooley and Tukey found their similar Cooley–Tukey algorithm.[155] He developed it as a trigonometric interpolation method, but the paper Theoria Interpolationis Methodo Nova Tractata was published only posthumously in 1876,[156] well after Joseph Fourier's introduction of the subject in 1807.[157]
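The transform itself, in its naive O(n²) form, is only a few lines of Python; the speedup that Gauss's method anticipated comes from splitting this sum recursively:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform, O(n^2) operations."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A pure cosine of frequency 1 concentrates its energy
# in frequency bins 1 and n-1.
n = 8
signal = [math.cos(2 * math.pi * t / n) for t in range(n)]
spectrum = dft(signal)
peaks = [k for k, c in enumerate(spectrum) if abs(c) > 1e-9]
print(peaks)  # [1, 7]
```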
The geodetic survey of Hanover fuelled Gauss's interest in differential geometry and topology, fields of mathematics dealing with curves and surfaces. This led him in 1828 to the publication of a work that marks the birth of the modern differential geometry of surfaces: it departed from the traditional way of treating surfaces as Cartesian graphs of functions of two variables, and initiated the exploration of surfaces from the "inner" point of view of a two-dimensional being constrained to move on them. As a result, the Theorema Egregium (remarkable theorem) established a property of the notion of Gaussian curvature. Informally, the theorem says that the curvature of a surface can be determined entirely by measuring angles and distances on the surface, regardless of the embedding of the surface in three-dimensional or two-dimensional space.[158]
The Theorema Egregium leads to the abstraction of surfaces as doubly-extended manifolds; it clarifies the distinction between the intrinsic properties of the manifold (the metric) and its physical realization in ambient space. A consequence is the impossibility of an isometric transformation between surfaces of different Gaussian curvature. This means practically that a sphere or an ellipsoid cannot be transformed to a plane without distortion, which causes a fundamental problem in designing projections for geographical maps.[158] A portion of this essay is dedicated to a profound study of geodesics. In particular, Gauss proves the local Gauss–Bonnet theorem on geodesic triangles, and generalizes Legendre's theorem on spherical triangles to geodesic triangles on arbitrary surfaces with continuous curvature; he found that the angles of a "sufficiently small" geodesic triangle deviate from those of a planar triangle of the same sides in a way that depends only on the values of the surface curvature at the vertices of the triangle, regardless of the behaviour of the surface in the triangle's interior.[159]
Gauss's memoir from 1828 lacks the conception of geodesic curvature. However, in a previously unpublished manuscript, very likely written in 1822–1825, he introduced the term "side curvature" (German: "Seitenkrümmung") and proved its invariance under isometric transformations, a result that was later obtained by Ferdinand Minding and published by him in 1830. This Gauss paper contains the core of his lemma on total curvature, but also its generalization, found and proved by Pierre Ossian Bonnet in 1848 and known as the Gauss–Bonnet theorem.[160]
During Gauss's lifetime, the parallel postulate of Euclidean geometry was heavily discussed.[161] Numerous efforts were made to prove it within the frame of the Euclidean axioms, whereas some mathematicians discussed the possibility of geometrical systems without it.[162] Gauss thought about the basics of geometry from the 1790s on, but only realized in the 1810s that a non-Euclidean geometry without the parallel postulate could solve the problem.[163][161] In a letter to Franz Taurinus of 1824, he presented a short comprehensible outline of what he named a "non-Euclidean geometry",[164] but he strongly forbade Taurinus to make any use of it.[163] Gauss is credited with having been the first to discover and study non-Euclidean geometry, even coining the term.[165][164][166]
The first publications on non-Euclidean geometry in the history of mathematics were authored by Nikolai Lobachevsky in 1829 and János Bolyai in 1832.[162] In the following years, Gauss wrote down his ideas on the topic but did not publish them, thus avoiding influencing the contemporary scientific discussion.[163][167] Gauss commended the ideas of János Bolyai in a letter to his father and university friend Farkas Bolyai,[168] claiming that these were congruent to his own thoughts of some decades earlier.[163][169] However, it is not quite clear to what extent he preceded Lobachevsky and Bolyai, as his written remarks are vague and obscure.[162]
Sartoriusfirst mentioned Gauss's work on non-Euclidean geometry in 1856, but only the publication of Gauss'sNachlassin Volume VIII of the Collected Works (1900) showed Gauss's ideas on the matter, at a time when non-Euclidean geometry was still an object of some controversy.[163]
Gauss was also an early pioneer oftopologyorGeometria Situs, as it was called in his lifetime. The first proof of thefundamental theorem of algebrain 1799 contained an essentially topological argument; fifty years later, he further developed the topological argument in his fourth proof of this theorem.[170]
Another encounter with topological notions occurred to him in the course of his astronomical work in 1804, when he determined the limits of the region on thecelestial spherein which comets and asteroids might appear, and which he termed "Zodiacus". He discovered that if the Earth's and comet's orbits arelinked, then by topological reasons the Zodiacus is the entire sphere. In 1848, in the context of the discovery of the asteroid7 Iris, he published a further qualitative discussion of the Zodiacus.[171]
In Gauss's letters of 1820–1830, he thought intensively about topics with close affinity to Geometria Situs, and became gradually conscious of the semantic difficulties in this field. Fragments from this period reveal that he tried to classify "tract figures", which are closed plane curves with a finite number of transverse self-intersections, and which may also be planar projections of knots.[172] To do so he devised a symbolic scheme, the Gauss code, that in a sense captured the characteristic features of tract figures.[173][174]
In a fragment from 1833, Gauss defined thelinking numberof two space curves by a certain double integral, and in doing so provided for the first time an analytical formulation of a topological phenomenon. On the same note, he lamented the little progress made in Geometria Situs, and remarked that one of its central problems will be "to count the intertwinings of two closed or infinite curves". His notebooks from that period reveal that he was also thinking about other topological objects such asbraidsandtangles.[171]
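In modern vector notation (sign conventions vary with the chosen orientations), Gauss's double integral for the linking number of two disjoint closed space curves \(\gamma_1,\gamma_2\) reads:

```latex
\operatorname{Lk}(\gamma_1,\gamma_2)
  \;=\; \frac{1}{4\pi}
  \oint_{\gamma_1}\!\oint_{\gamma_2}
  \frac{\mathbf{r}_1 - \mathbf{r}_2}{\left|\mathbf{r}_1 - \mathbf{r}_2\right|^{3}}
  \cdot \left( d\mathbf{r}_1 \times d\mathbf{r}_2 \right) .
```

The value is always an integer, invariant under continuous deformations that keep the curves disjoint, which is what makes it a topological quantity.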
Gauss's influence on the emerging field of topology, which he held in high esteem, came in later years through occasional remarks and oral communications to Möbius and Listing.[175]
Gauss applied the concept of complex numbers to solve well-known problems in a new concise way. For example, in a short note from 1836 on geometric aspects of the ternary forms and their application to crystallography,[176]he stated thefundamental theorem of axonometry, which tells how to represent a 3D cube on a 2D plane with complete accuracy, via complex numbers.[177]He described rotations of this sphere as the action of certainlinear fractional transformationson the extended complex plane,[178]and gave a proof for the geometric theorem that thealtitudesof a triangle always meet in a singleorthocenter.[179]
Gauss was concerned withJohn Napier's "Pentagramma mirificum" – a certain sphericalpentagram– for several decades;[180]he approached it from various points of view, and gradually gained a full understanding of its geometric, algebraic, and analytic aspects.[181]In particular, in 1843 he stated and proved several theorems connecting elliptic functions, Napier spherical pentagons, and Poncelet pentagons in the plane.[182]
Furthermore, he contributed a solution to the problem of constructing the largest-area ellipse inside a givenquadrilateral,[183][184]and discovered a surprising result about the computation of area ofpentagons.[185][186]
On 1 January 1801, Italian astronomer Giuseppe Piazzi discovered a new celestial object, presumed it to be the long-sought planet between Mars and Jupiter according to the so-called Titius–Bode law, and named it Ceres.[187] He could track it only for a short time until it disappeared behind the glare of the Sun. The mathematical tools of the time were not sufficient to predict the location of its reappearance from the few data available. Gauss tackled the problem and predicted a position for its possible rediscovery in December 1801. This turned out to be accurate within a half-degree when Franz Xaver von Zach on 7 and 31 December at Gotha, and independently Heinrich Olbers on 1 and 2 January in Bremen, identified the object near the predicted position.[188][t]
Gauss's method leads to an equation of the eighth degree, of which one solution, the Earth's orbit, is known. The solution sought is then separated from the remaining six based on physical conditions. In this work, Gauss used comprehensive approximation methods which he created for that purpose.[189]
The discovery of Ceres led Gauss to the theory of the motion of planetoids disturbed by large planets, eventually published in 1809 asTheoria motus corporum coelestium in sectionibus conicis solem ambientum.[190]It introduced theGaussian gravitational constant.[37]
After the new asteroids had been discovered, Gauss occupied himself with the perturbations of their orbital elements. First he examined Ceres with analytical methods similar to those of Laplace, but his favorite object was Pallas, because of its great eccentricity and orbital inclination, for which Laplace's method did not work. Gauss used his own tools: the arithmetic–geometric mean, the hypergeometric function, and his method of interpolation.[191] In 1812 he found an orbital resonance with Jupiter in proportion 18:7; Gauss gave this result as a cipher, and gave the explicit meaning only in letters to Olbers and Bessel.[192][193][u] After long years of work, he finished it in 1816 without a result that seemed sufficient to him. This marked the end of his activities in theoretical astronomy.[195]
One fruit of Gauss's research on Pallas perturbations was theDeterminatio Attractionis...(1818) on a method of theoretical astronomy that later became known as the "elliptic ring method". It introduced an averaging conception in which a planet in orbit is replaced by a fictitious ring with mass density proportional to the time the planet takes to follow the corresponding orbital arcs.[196]Gauss presents the method of evaluating the gravitational attraction of such an elliptic ring, which includes several steps; one of them involves a direct application of the arithmetic-geometric mean (AGM) algorithm to calculate anelliptic integral.[197]
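The arithmetic–geometric mean iteration mentioned above converges extremely fast, and Gauss's identity K(k) = π / (2 · agm(1, √(1−k²))) ties it to the complete elliptic integral of the first kind. A minimal sketch (not Gauss's original elliptic-ring computation, just the AGM/elliptic-integral connection):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of two positive numbers (Gauss's iteration).

    Each step replaces (a, b) by their arithmetic and geometric means;
    convergence is quadratic.
    """
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return (a + b) / 2.0

def elliptic_k(k):
    """Complete elliptic integral of the first kind K(k), 0 <= k < 1,
    via Gauss's identity K(k) = pi / (2 * agm(1, sqrt(1 - k^2)))."""
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))
```

For k = 0 this gives exactly π/2, and agm(1, √2) reproduces the value ≈ 1.19814 that Gauss famously connected to the lemniscate.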
Even after Gauss's contributions to theoretical astronomy came to an end, more practical activities in observational astronomy continued and occupied him during his entire career. As early as 1799, Gauss dealt with the determination of longitude by use of the lunar parallax, for which he developed more convenient formulas than those in common use.[198] After his appointment as director of the observatory, he attached importance to the fundamental astronomical constants in correspondence with Bessel. Gauss himself provided tables of nutation and aberration, solar coordinates, and refraction.[199] He made many contributions to spherical geometry, and in this context solved some practical problems about navigation by stars.[200] He published a great number of observations, mainly on minor planets and comets; his last observation was the solar eclipse of 28 July 1851.[201]
Gauss's first publication following his doctoral thesis dealt with the determination of the date of Easter (1800), an elementary mathematical topic. Gauss aimed to present a convenient algorithm for people without any knowledge of ecclesiastical or even astronomical chronology, and thus avoided the usual terms of golden number, epact, solar cycle, dominical letter, and any religious connotations.[202] This choice of topic likely had historical grounds. The replacement of the Julian calendar by the Gregorian calendar had caused confusion in the Holy Roman Empire since the 16th century and was not finished in Germany until 1700, when the difference of eleven days was dropped. Even after this, Easter fell on different dates in Protestant and Catholic territories, until this difference was abolished by agreement in 1776. In the Protestant states, such as the Duchy of Brunswick, the Easter of 1777, five weeks before Gauss's birth, was the first one calculated in the new manner.[203]
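Gauss's Easter computation can be sketched in a few lines of modular arithmetic. The version below is the commonly stated later form of his algorithm for the Gregorian calendar (including the two exceptional cases that Gauss's original 1800 paper did not yet handle correctly):

```python
def gauss_easter(year):
    """Date of Gregorian Easter as (month, day), via Gauss's algorithm."""
    a, b, c = year % 19, year % 4, year % 7
    k = year // 100
    p = (13 + 8 * k) // 25
    q = k // 4
    m = (15 - p + k - q) % 30
    n = (4 + k - q) % 7
    d = (19 * a + m) % 30          # days to the paschal full moon
    e = (2 * b + 4 * c + 6 * d + n) % 7  # days to the following Sunday
    # The two exceptional cases of the corrected algorithm:
    if d == 29 and e == 6:
        return (4, 19)
    if d == 28 and e == 6 and (11 * m + 11) % 30 < 19:
        return (4, 18)
    day = 22 + d + e               # counted from 22 March
    return (3, day) if day <= 31 else (4, day - 31)
```

As a check against the text above: `gauss_easter(1777)` yields 30 March, the Easter five weeks before Gauss's birth on 30 April 1777.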
Gauss likely used themethod of least squaresto minimize the impact ofmeasurement errorwhen calculating the orbit of Ceres.[92]The method was published first byAdrien-Marie Legendrein 1805, but Gauss claimed inTheoria motus(1809) that he had been using it since 1794 or 1795.[204][205][206]In the history of statistics, this disagreement is called the "priority dispute over the discovery of the method of least squares".[92]Gauss proved that the method has the lowest sampling variance within the class of linear unbiased estimators under the assumption ofnormally distributederrors (Gauss–Markov theorem), in the two-part paperTheoria combinationis observationum erroribus minimis obnoxiae(1823).[207]
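The least-squares idea can be illustrated far from orbit determination: the sketch below fits a straight line y = a + b·x by the normal equations, which is the simplest instance of the method (Gauss's orbit computations were, of course, vastly more involved):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x via the normal equations.

    Returns (a, b), the intercept and slope minimizing the sum of
    squared vertical residuals.
    """
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b
```

On noiseless data the fit recovers the generating line exactly; with noisy observations it returns the estimate that, by the Gauss–Markov theorem cited above, has minimum variance among linear unbiased estimators.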
In the first paper he provedGauss's inequality(aChebyshev-type inequality) forunimodal distributions, and stated without proof another inequality formomentsof the fourth order (a special case of the Gauss-Winckler inequality).[208]He derived lower and upper bounds for thevarianceof thesample variance. In the second paper, Gauss describedrecursive least squares methods. His work on the theory of errors was extended in several directions by the geodesistFriedrich Robert Helmertto theGauss-Helmert model.[209]
Gauss also contributed to problems inprobability theorythat are not directly concerned with the theory of errors. One example appears as a diary note where he tried to describe the asymptotic distribution of entries in the continued fraction expansion of a random number uniformly distributed in(0,1). He derived this distribution, now known as theGauss-Kuzmin distribution, as a by-product of the discovery of theergodicityof theGauss map for continued fractions. Gauss's solution is the first-ever result in the metrical theory of continued fractions.[210]
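The Gauss map and the Gauss–Kuzmin distribution mentioned here are easy to make concrete. The map x → 1/x − ⌊1/x⌋ strips off one continued-fraction digit per step; the limiting digit distribution is P(k) = log₂(1 + 1/(k(k+2))). A small sketch using exact rational arithmetic:

```python
import math
from fractions import Fraction

def cf_digits(x):
    """Continued-fraction digits of a rational x, extracted by iterating
    the Gauss map x -> 1/x - floor(1/x) with exact Fraction arithmetic."""
    digits = [int(x)]
    x -= int(x)
    while x != 0:
        x = 1 / x              # Fraction keeps this exact
        digits.append(int(x))
        x -= int(x)
    return digits

def gauss_kuzmin(k):
    """Limiting probability that a continued-fraction digit equals k."""
    return math.log2(1 + 1 / (k * (k + 2)))
```

For instance, `cf_digits(Fraction(355, 113))` gives [3, 7, 16], the classical continued fraction of the π approximation 355/113, and the Gauss–Kuzmin probabilities sum to 1 over all digits, with P(1) ≈ 0.415.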
Gauss was busy with geodetic problems since 1799 when he helpedKarl Ludwig von Lecoqwith calculations during hissurveyinWestphalia.[211]Beginning in 1804, he taught himself some practical geodesy in Brunswick[212]and Göttingen.[213]
Since 1816, Gauss's former studentHeinrich Christian Schumacher, then professor inCopenhagen, but living inAltona(Holstein) nearHamburgas head of an observatory, carried out atriangulationof theJutlandpeninsula fromSkagenin the north toLauenburgin the south.[v]This project was the basis for map production but also aimed at determining the geodetic arc between the terminal sites. Data from geodetic arcs were used to determine the dimensions of the earthgeoid, and long arc distances brought more precise results. Schumacher asked Gauss to continue this work further to the south in the Kingdom of Hanover; Gauss agreed after a short time of hesitation. Finally, in May 1820, KingGeorge IVgave the order to Gauss.[214]
An arc measurement needs a precise astronomical determination of at least two points in the network. Gauss and Schumacher used the coincidence that both observatories — in Göttingen, and in Altona in the garden of Schumacher's house — lay nearly on the same longitude. The latitude was measured with both their own instruments and with a zenith sector by Ramsden that was transported to both observatories.[215][w]
Gauss and Schumacher had already determined some angles betweenLüneburg, Hamburg, and Lauenburg for the geodetic connection in October 1818.[216]During the summers of 1821 until 1825 Gauss directed the triangulation work personally, fromThuringiain the south to the riverElbein the north. ThetrianglebetweenHoher Hagen,Großer Inselsbergin theThuringian Forest, andBrockenin theHarzmountains was the largest one Gauss had ever measured with a maximum size of 107 km (66.5 miles). In the thinly populatedLüneburg Heathwithout significant natural summits or artificial buildings, he had difficulties finding suitable triangulation points; sometimes cutting lanes through the vegetation was necessary.[203][217]
For pointing signals, Gauss invented a new instrument with movable mirrors and a small telescope that reflects the sunbeams to the triangulation points, and named itheliotrope.[218]Another suitable construction for the same purpose was asextantwith an additional mirror which he namedvice heliotrope.[219]Gauss was assisted by soldiers of the Hanoverian army, among them his eldest son Joseph. Gauss took part in thebaselinemeasurement (Braak Base Line) of Schumacher in the village ofBraaknear Hamburg in 1820, and used the result for the evaluation of the Hanoverian triangulation.[220]
An additional result was a better value for theflatteningof the approximativeEarth ellipsoid.[221][x]Gauss developed theuniversal transverse Mercator projectionof the ellipsoidal shaped Earth (what he namedconform projection)[223]for representing geodetical data in plane charts.
When the arc measurement was finished, Gauss began the enlargement of the triangulation to the west to obtain a survey of the whole Kingdom of Hanover, with a royal decree from 25 March 1828.[224] The practical work was directed by three army officers, among them Lieutenant Joseph Gauss. The complete data evaluation lay in the hands of Gauss, who applied his mathematical inventions such as the method of least squares and the elimination method to it. The project was finished in 1844, and Gauss sent a final report of the project to the government; his method of projection was not edited until 1866.[225][226]
In 1828, when studying differences inlatitude, Gauss first defined a physical approximation for thefigure of the Earthas the surface everywhere perpendicular to the direction of gravity;[227]later his doctoral studentJohann Benedict Listingcalled this thegeoid.[228]
Gauss had been interested in magnetism since 1803.[229]AfterAlexander von Humboldtvisited Göttingen in 1826, both scientists began intensive research ongeomagnetism, partly independently, partly in productive cooperation.[230]In 1828, Gauss was Humboldt's guest during the conference of theSociety of German Natural Scientists and Physiciansin Berlin, where he got acquainted with the physicistWilhelm Weber.[231]
When Weber got the chair for physics in Göttingen as successor ofJohann Tobias Mayerby Gauss's recommendation in 1831, both of them started a fruitful collaboration, leading to a new knowledge ofmagnetismwith a representation for the unit of magnetism in terms of mass, charge, and time.[232]They founded theMagnetic Association(German:Magnetischer Verein), an international working group of several observatories, which carried out measurements ofEarth's magnetic fieldin many regions of the world using equivalent methods at arranged dates in the years 1836 to 1841.[233]
In 1836, Humboldt suggested the establishment of a worldwide net of geomagnetic stations in the British dominions with a letter to the Duke of Sussex, then president of the Royal Society; he proposed that magnetic measurements should be taken under standardized conditions using his methods.[234][235] Together with other instigators, this led to a global program known as the "Magnetical crusade" under the direction of Edward Sabine. The dates, times, and intervals of observations were determined in advance, and Göttingen mean time was used as the standard.[236] Sixty-one stations on all five continents participated in this global program. Gauss and Weber founded a series for publication of the results; six volumes were edited between 1837 and 1843. Weber's departure to Leipzig in 1843, a late effect of the Göttingen Seven affair, marked the end of the Magnetic Association's activity.[233]
Following Humboldt's example, Gauss ordered a magnetic observatory to be built in the garden of the observatory, but the scientists differed over instrumental equipment; Gauss preferred stationary instruments, which he thought gave more precise results, whereas Humboldt was accustomed to movable instruments. Gauss was interested in the temporal and spatial variation of magnetic declination, inclination, and intensity, and differentiated, unlike Humboldt, between "horizontal" and "vertical" intensity. Together with Weber, he developed methods of measuring the components of the intensity of the magnetic field and constructed a suitable magnetometer to measure absolute values of the strength of the Earth's magnetic field, rather than relative values that depended on the apparatus.[233][237] The precision of the magnetometer was about ten times higher than that of previous instruments. With this work, Gauss was the first to derive a non-mechanical quantity from basic mechanical quantities.[236]
Gauss carried out a General Theory of Terrestrial Magnetism (1839), in which he believed he had described the nature of magnetic force; according to Felix Klein, this work is a presentation of observations by use of spherical harmonics rather than a physical theory.[238] The theory predicted the existence of exactly two magnetic poles on the Earth, so Hansteen's idea of four magnetic poles became obsolete,[239] and the data allowed their location to be determined with rather good precision.[240]
Gauss influenced the beginning of geophysics in Russia, whenAdolph Theodor Kupffer, one of his former students, founded a magnetic observatory inSt. Petersburg, following the example of the observatory in Göttingen, and similarly,Ivan SimonovinKazan.[239]
The discoveries ofHans Christian ØrstedonelectromagnetismandMichael Faradayonelectromagnetic inductiondrew Gauss's attention to these matters.[241]Gauss and Weber found rules for branchedelectriccircuits, which were later found independently and first published byGustav Kirchhoffand named after him asKirchhoff's circuit laws,[242]and made inquiries into electromagnetism. They constructed the firstelectromechanical telegraphin 1833, and Weber himself connected the observatory with the institute for physics in the town centre of Göttingen,[y]but they made no further commercial use of this invention.[243][244]
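The branched-circuit rules that Gauss and Weber found — later formalized as Kirchhoff's current and voltage laws — can be illustrated with the simplest possible example. The function and values below are purely illustrative, not anything from Gauss's notebooks:

```python
def node_voltage(v_source, r1, r2):
    """Node voltage for a source feeding a node through r1, with r2 to ground.

    Kirchhoff's current law at the node:  (v_source - v) / r1 = v / r2,
    solved for v (the familiar voltage-divider result).
    """
    return v_source * r2 / (r1 + r2)
```

With a 12 V source, r1 = 1 kΩ and r2 = 2 kΩ, the current entering the node through r1 exactly equals the current leaving through r2, as the current law requires.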
Gauss's main theoretical interests in electromagnetism were reflected in his attempts to formulate quantitative laws governing electromagnetic induction. In notebooks from these years, he recorded several innovative formulations; he discovered the vector potential function, independently rediscovered by Franz Ernst Neumann in 1845, and in January 1835 he wrote down an "induction law" equivalent to Faraday's law, which stated that the electromotive force at a given point in space is equal to the instantaneous rate of change (with respect to time) of this function.[245][246]
Gauss tried to find a unifying law for long-distance effects ofelectrostatics,electrodynamics, electromagnetism, andinduction, comparable to Newton's law of gravitation,[247]but his attempt ended in a "tragic failure".[236]
Since Isaac Newton had shown theoretically that the Earth and rotating stars assume non-spherical shapes, the problem of attraction of ellipsoids gained importance in mathematical astronomy. In his first publication on potential theory, the "Theoria attractionis..." (1813), Gauss provided aclosed-form expressionto the gravitational attraction of a homogeneoustriaxial ellipsoidat every point in space.[248]In contrast to previous research ofMaclaurin, Laplace and Lagrange, Gauss's new solution treated the attraction more directly in the form of an elliptic integral. In the process, he also proved and applied some special cases of the so-calledGauss's theoreminvector analysis.[249]
In theGeneral theorems concerning the attractive and repulsive forces acting in reciprocal proportions of quadratic distances(1840) Gauss gave a basic theory ofmagnetic potential, based on Lagrange, Laplace, and Poisson;[238]it seems rather unlikely that he knew the previous works ofGeorge Greenon this subject.[241]However, Gauss could never give any reasons for magnetism, nor a theory of magnetism similar to Newton's work on gravitation, that enabled scientists to predict geomagnetic effects in the future.[236]
Gauss's calculations enabled instrument makerJohann Georg RepsoldinHamburgto construct a newachromatic lenssystem in 1810. A main problem, among other difficulties, was that therefractive indexanddispersionof the glass used were not precisely known.[250]In a short article from 1817 Gauss dealt with the problem of removal ofchromatic aberrationindouble lenses, and computed adjustments of the shape and coefficients of refraction required to minimize it. His work was noted by the opticianCarl August von Steinheil, who in 1860 introduced the achromaticSteinheil doublet, partly based on Gauss's calculations.[251]Many results ingeometrical opticsare scattered in Gauss's correspondences and hand notes.[252]
In theDioptrical Investigations(1840), Gauss gave the first systematic analysis of the formation of images under aparaxial approximation(Gaussian optics).[253]He characterized optical systems under a paraxial approximation only by itscardinal points,[254]and he derived the Gaussianlensformula, applicable without restrictions in respect to the thickness of the lenses.[255][256]
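The Gaussian lens formula reduces, in the familiar thin-lens special case (Gauss's own treatment handled thick lenses via the cardinal points), to 1/f = 1/o + 1/i. A small sketch, using the convention that object distance o, image distance i, and focal length f are all positive for a real image:

```python
def image_distance(f, o):
    """Image distance i from the Gaussian lens equation 1/f = 1/o + 1/i
    (thin-lens special case; o > f assumed so the image is real)."""
    return 1.0 / (1.0 / f - 1.0 / o)
```

For example, an object 30 cm from a 10 cm lens images at 15 cm, and an object at 2f images at 2f, the classic unit-magnification configuration.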
Gauss's first work in mechanics concerned theearth's rotation. When his university friendBenzenbergcarried out experiments to determine the deviation of falling masses from the perpendicular in 1802, what today is known as theCoriolis force, he asked Gauss for a theory-based calculation of the values for comparison with the experimental ones. Gauss elaborated a system of fundamental equations for the motion, and the results corresponded sufficiently with Benzenberg's data, who added Gauss's considerations as an appendix to his book on falling experiments.[257]
AfterFoucaulthad demonstrated the earth's rotation by hispendulumexperiment in public in 1851, Gerling questioned Gauss for further explanations. This instigated Gauss to design a new apparatus for demonstration with a much shorter length of pendulum than Foucault's one. The oscillations were observed with a reading telescope, with a vertical scale and a mirror fastened at the pendulum. It is described in the Gauss–Gerling correspondence and Weber made some experiments with this apparatus in 1853, but no data were published.[258][259]
Gauss's principle of least constraintof 1829 was established as a general concept to overcome the division of mechanics into statics and dynamics, combiningD'Alembert's principlewithLagrange'sprinciple of virtual work, and showing analogies to the method ofleast squares.[260]
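In modern notation, the principle states that among all accelerations compatible with the constraints, the realized ones minimize the "constraint"

```latex
Z \;=\; \sum_i m_i \left\| \mathbf{a}_i - \frac{\mathbf{F}_i}{m_i} \right\|^2 ,
```

where \(m_i\), \(\mathbf{a}_i\), and \(\mathbf{F}_i\) are the mass, acceleration, and applied force of the \(i\)-th particle. This is formally a weighted least-squares problem — the deviation of the constrained motion from the free motion is minimized — which is the analogy to the method of least squares noted above.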
In 1828, Gauss was appointed as head of the board for weights and measures of the Kingdom of Hanover. He createdstandardsfor length and measure. Gauss himself took care of the time-consuming measures and gave detailed orders for the mechanical construction.[203]In the correspondence with Schumacher, who was also working on this matter, he described new ideas for high-precision scales.[261]He submitted the final reports on the Hanoverianfootandpoundto the government in 1841. This work achieved international importance due to an 1836 law that connected the Hanoverian measures with the English ones.[203]
Gauss first became member of a scientific society, theRussian Academy of Sciences, in 1802.[262]Further memberships (corresponding, foreign or full) were awarded by theAcademy of Sciencesin Göttingen (1802/ 1807),[263]theFrench Academy of Sciences(1804/ 1820),[264]theRoyal Societyof London (1804),[265]theRoyal Prussian Academyin Berlin (1810),[266]theNational Academy of Sciencein Verona (1810),[267]theRoyal Society of Edinburgh(1820),[268]theBavarian Academy of Sciencesof Munich (1820),[269]theRoyal Danish Academyin Copenhagen (1821),[270]theRoyal Astronomical Societyin London (1821),[271]theRoyal Swedish Academy of Sciences(1821),[270]theAmerican Academy of Arts and Sciencesin Boston (1822),[272]theRoyal Bohemian Society of Sciencesin Prague (1833),[273]theRoyal Academy of Science, Letters and Fine Arts of Belgium(1841/1845),[274]theRoyal Society of Sciences in Uppsala(1843),[273]theRoyal Irish Academyin Dublin (1843),[273]theRoyal Institute of the Netherlands(1845/ 1851),[275]theSpanish Royal Academy of Sciencesin Madrid (1850),[276]theRussian Geographical Society(1851),[277]theImperial Academy of Sciencesin Vienna (1848),[277]theAmerican Philosophical Society(1853),[278]theCambridge Philosophical Society,[277]and theRoyal Hollandish Society of Sciencesin Haarlem.[279][280]
Both theUniversity of Kazanand the Philosophy Faculty of theUniversity of Pragueappointed him honorary member in 1848.[279]
Gauss received theLalande Prizefrom the French Academy of Science in 1809 for the theory of planets and the means of determining their orbits from only three observations,[281]the Danish Academy of Science prize in 1823 for his memoir on conformal projection,[273]and theCopley Medalfrom the Royal Society in 1838 for "his inventions and mathematical researches in magnetism".[280][282][37]
Gauss was appointed Knight of the French Legion of Honour in 1837,[283] and became one of the first members of the Prussian Order Pour le Mérite (Civil class) when it was established in 1842.[284] He received the Order of the Crown of Westphalia (1810),[280] the Danish Order of the Dannebrog (1817),[280] the Hanoverian Royal Guelphic Order (1815),[280] the Swedish Order of the Polar Star (1844),[285] the Order of Henry the Lion (1849),[285] and the Bavarian Maximilian Order for Science and Art (1853).[277]
The Kings of Hanover appointed him the honorary titles "Hofrath" (1816)[55] and "Geheimer Hofrath"[z] (1845). In 1849, on the occasion of his golden doctoral jubilee, he received honorary citizenship of both Brunswick and Göttingen.[277] Soon after his death a medal was issued by order of King George V of Hanover with the back inscription dedicated "to the Prince of Mathematicians".[286]
The "Gauss-Gesellschaft Göttingen" ("Göttingen Gauss Society") was founded in 1964 for research on the life and work of Carl Friedrich Gauss and related persons. It publishes theMitteilungen der Gauss-Gesellschaft(Communications of the Gauss Society).[287]
TheGöttingen Academy of Sciences and Humanitiesprovides a complete collection of the known letters from and to Carl Friedrich Gauss that is accessible online.[38]The literary estate is kept and provided by theGöttingen State and University Library.[288]Written materials from Carl Friedrich Gauss and family members can also be found in the municipal archive of Brunswick.[289]
https://en.wikipedia.org/wiki/Carl_F._Gauss
Object-oriented programming(OOP) is aprogramming paradigmbased on the concept ofobjects.[1]Objects can containdata(calledfields,attributesorproperties) and have actions they can perform (calledproceduresormethodsand implemented incode). In OOP,computer programsare designed by making them out of objects that interact with one another.[2][3]
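A minimal Python sketch of the definition above — data (fields) and actions (methods) bundled into objects, with each instance holding its own state:

```python
class Counter:
    """An object bundling a data field with the methods that act on it."""

    def __init__(self, start=0):
        self.value = start          # field (attribute/property)

    def increment(self, by=1):      # method (procedure)
        self.value += by
        return self.value

# Two independent objects (instances) of the same class:
c1 = Counter()
c2 = Counter(10)
c1.increment()      # c1.value is now 1
c2.increment(5)     # c2.value is now 15
```

The two instances interact with callers only through their methods, and each carries its own copy of the `value` field — the basic picture of objects interacting in an OOP program.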
Many of the most widely used programming languages (such asC++,Java,[4]andPython) support object-oriented programming to a greater or lesser degree, typically as part ofmultiple paradigmsin combination with others such asimperative programminganddeclarative programming.
Significant object-oriented languages includeAda,ActionScript,C++,Common Lisp,C#,Dart,Eiffel,Fortran 2003,Haxe,Java,[4]JavaScript,Kotlin,Logo,MATLAB,Objective-C,Object Pascal,Perl,PHP,Python,R,Raku,Ruby,Scala,SIMSCRIPT,Simula,Smalltalk,Swift,ValaandVisual Basic.NET.
The idea of "objects" in programming started with theartificial intelligencegroup atMITin the late 1950s and early 1960s. Here, "object" referred toLISPatoms with identified properties (attributes).[5][6]Another early example wasSketchpadcreated byIvan Sutherlandat MIT in 1960–1961. In the glossary of his technical report, Sutherland defined terms like "object" and "instance" (with the class concept covered by "master" or "definition"), albeit specialized to graphical interaction.[7]Later, in 1968, AED-0, MIT's version of theALGOLprogramming language, connected data structures ("plexes") and procedures, prefiguring what were later termed "messages", "methods", and "member functions".[8][9]Topics such asdata abstractionandmodular programmingwere common points of discussion at this time.
Meanwhile, in Norway,Simulawas developed during the years 1961–1967.[8]Simula introduced essential object-oriented ideas, such asclasses, inheritance, anddynamic binding.[10]Simula was used mainly by researchers involved withphysical modelling, like the movement of ships and their content through cargo ports.[10]Simula is generally accepted as being the first language with the primary features and framework of an object-oriented language.[11]
I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning – it took a while to see how to do messaging in a programming language efficiently enough to be useful).
Influenced by both MIT and Simula,Alan Kaybegan developing his own ideas in November 1966. He would go on to createSmalltalk, an influential object-oriented programming language. By 1967, Kay was already using the term "object-oriented programming" in conversation.[1]Although sometimes called the "father" of object-oriented programming,[12]Kay has said his ideas differ from how object-oriented programming is commonly understood, and has implied that the computer science establishment did not adopt his notion.[1]A 1976 MIT memo co-authored byBarbara LiskovlistsSimula 67,CLU, andAlphardas object-oriented languages, but does not mention Smalltalk.[13]
In the 1970s, the first version of theSmalltalkprogramming language was developed atXerox PARCbyAlan Kay,Dan IngallsandAdele Goldberg. Smalltalk-72 was notable for use of objects at the language level and its graphical development environment.[14]Smalltalk was a fully dynamic system, allowing users to create and modify classes as they worked.[15]Much of the theory of OOP was developed in the context of Smalltalk, for example multiple inheritance.[16]
In the late 1970s and 1980s, object-oriented programming rose to prominence. TheFlavorsobject-oriented Lisp was developed starting 1979, introducingmultiple inheritanceandmixins.[17]In August 1981,Byte Magazinehighlighted Smalltalk and OOP, introducing these ideas to a wide audience.[18]LOOPS, the object system forInterlisp-D, was influenced by Smalltalk and Flavors, and a paper about it was published in 1982.[19]In 1986, the firstConference on Object-Oriented Programming, Systems, Languages, and Applications(OOPSLA) was attended by 1,000 people. This conference marked the beginning of efforts to consolidate Lisp object systems, eventually resulting in theCommon Lisp Object System. In the 1980s, there were a few attempts to designprocessor architecturesthat includedhardwaresupport for objects inmemory, but these were not successful. Examples include theIntel iAPX 432and theLinn SmartRekursiv.
In the mid-1980s, new object-oriented languages likeObjective-C,C++, andEiffelemerged. Objective-C was developed byBrad Cox, who had used Smalltalk atITT Inc..Bjarne StroustrupcreatedC++based on his experience using Simula for his PhD thesis.[14]Bertrand Meyerproduced the first design of theEiffel languagein 1985, which focused on software quality using adesign by contractapproach.[20]
In the 1990s, object-oriented programming became the main way of programming, especially as more languages supported it. These includedVisual FoxPro3.0,[21][22]C++,[23]andDelphi[citation needed]. OOP became even more popular with the rise ofgraphical user interfaces, which used objects for buttons, menus and other elements. One well-known example is Apple'sCocoaframework, used onMac OS Xand written inObjective-C. OOP toolkits also enhanced the popularity ofevent-driven programming.[citation needed]
At ETH Zürich, Niklaus Wirth and his colleagues created new approaches to OOP. Modula-2 (1978) and Oberon (1987) included a distinctive approach to object orientation, classes, and type checking across module boundaries. Inheritance is not obvious in Wirth's design, since his nomenclature looks in the opposite direction: it is called type extension, and the viewpoint is from the parent down to the inheritor.

Many programming languages that existed before OOP have added object-oriented features, including Ada, BASIC, Fortran, Pascal, and COBOL. This sometimes caused compatibility and maintainability issues, as these languages were not originally designed with OOP in mind.

In the new millennium, new languages like Python and Ruby have emerged that combine object-oriented and procedural styles. The most commercially important "pure" object-oriented languages continue to be Java, developed by Sun Microsystems, as well as C# and Visual Basic .NET (VB.NET), both designed for Microsoft's .NET platform. These languages show the benefits of OOP by creating abstractions from implementation. The .NET platform supports cross-language inheritance, allowing programs to use objects from multiple languages together.

Object-oriented programming focuses on working with objects, but not all OOP languages have every feature linked to OOP. Below are some common features of languages that are considered strong in OOP or support it along with other programming styles. Important exceptions are also noted.[24][25][26][27] Christopher J. Date pointed out that comparing OOP with other styles, like relational programming, is difficult because there isn't a clear, agreed-upon definition of OOP.[28]
Features from imperative and structured programming are present in OOP languages and are also found in non-OOP languages.
Support for modular programming lets programmers organize related procedures into files and modules. This makes programs easier to manage. Each module has its own namespace, so items in one module will not conflict with items in another.

Object-oriented programming (OOP) was created to make code easier to reuse and maintain.[29] However, it was not designed to clearly show the flow of a program's instructions; that was left to the compiler. As computers began using more parallel processing and multiple threads, it became more important to understand and control how instructions flow. This is difficult to do with OOP.[30][31][32][33]

An object is a type of data structure that has two main parts: fields and methods. Fields may also be known as members, attributes, or properties, and hold information in the form of state variables. Methods are actions, subroutines, or procedures, defining the object's behavior in code. Objects are usually stored in memory, and in many programming languages, they work like pointers that link directly to a contiguous block containing the object instance's data.

Objects can contain other objects. This is called object composition. For example, an Employee object might have an Address object inside it, along with other information like "first_name" and "position". This type of structure shows "has-a" relationships, like "an employee has an address".
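The "has-a" relationship described above can be sketched in Python. This is an illustrative example only; the class and field names are hypothetical, chosen to match the Employee/Address wording in the text.

```python
# A sketch of "has-a" composition: an Employee holds an Address object.
# All names here are illustrative, not from any particular codebase.

class Address:
    def __init__(self, street, city):
        self.street = street
        self.city = city

class Employee:
    def __init__(self, first_name, position, address):
        self.first_name = first_name
        self.position = position
        self.address = address      # composition: an Employee "has an" Address

emp = Employee("Ada", "Engineer", Address("1 Main St", "Springfield"))
print(emp.address.city)             # the address is reached through the containing object
```

Note that the Employee does not inherit anything from Address; it merely holds a reference to one, which is the distinguishing feature of composition.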
Some believe that OOP places too much focus on using objects rather than on algorithms and data structures.[34][35] For example, programmer Rob Pike pointed out that OOP can make programmers think more about type hierarchy than composition.[36] He has called object-oriented programming "the Roman numerals of computing".[37] Rich Hickey, creator of Clojure, described OOP as overly simplistic, especially when it comes to representing real-world things that change over time.[35] Alexander Stepanov said that OOP tries to fit everything into a single type, which can be limiting. He argued that sometimes we need multisorted algebras, families of interfaces that span multiple types, such as in generic programming. Stepanov also said that calling everything an "object" doesn't add much understanding.[34]

Sometimes, objects represent real-world things and processes in digital form.[38] For example, a graphics program may have objects such as "circle", "square", and "menu". An online shopping system might have objects such as "shopping cart", "customer", and "product". Niklaus Wirth said, "This paradigm [OOP] closely reflects the structure of systems in the real world and is therefore well suited to model complex systems with complex behavior".[39]

However, more often, objects represent abstract entities, like an open file or a unit converter. Not everyone agrees that OOP makes it easy to copy the real world exactly or that doing so is even necessary. Bob Martin suggests that because classes are software, their relationships don't match the real-world relationships they represent.[40] Bertrand Meyer argues in Object-Oriented Software Construction that a program is not a model of the world but a model of some part of the world; "Reality is a cousin twice removed".[41] Steve Yegge noted that natural languages lack the OOP approach of strictly prioritizing things (objects/nouns) before actions (methods/verbs), as opposed to functional programming, which does the reverse.[42] This can sometimes make OOP solutions more complicated than those written in procedural programming.[43]

Most OOP languages allow reusing and extending code through "inheritance". This inheritance can use either "classes" or "prototypes", which have some differences but use similar terms for ideas like "object" and "instance".

In class-based programming, the most common type of OOP, every object is an instance of a specific class. The class defines the data format, like variables (e.g., name, age) and methods (actions the object can take). Every instance of the class has the same set of variables and methods. Objects are created using a special method in the class known as a constructor.
Here are a few key terms in class-based OOP:
Classes may inherit from other classes, creating a hierarchy of "subclasses". For example, an "Employee" class might inherit from a "Person" class. This means the Employee object will have all the variables from Person (like name variables) plus any new variables (like job position and salary). Similarly, the subclass may expand the interface with new methods. Most languages also allow the subclass to override the methods defined by superclasses. Some languages support multiple inheritance, where a class can inherit from more than one class, and other languages similarly support mixins or traits. For example, a mixin called UnicodeConversionMixin might add a method unicode_to_ascii() to both a FileReader and a WebPageScraper class.
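The Person/Employee hierarchy and the mixin idea above can be sketched in Python, which supports multiple inheritance. The mixin body below is one possible (hypothetical) implementation of unicode_to_ascii(), simply dropping non-ASCII characters:

```python
# Illustrative sketch of inheritance, overriding, and a mixin.

class Person:
    def __init__(self, name):
        self.name = name
    def describe(self):
        return self.name

class UnicodeConversionMixin:
    # A mixin adds a capability to any class that inherits it.
    def unicode_to_ascii(self, text):
        return text.encode("ascii", "ignore").decode("ascii")

class Employee(Person, UnicodeConversionMixin):
    def __init__(self, name, position, salary):
        super().__init__(name)      # reuse Person's initialisation
        self.position = position    # new variables added by the subclass
        self.salary = salary
    def describe(self):             # override the inherited method
        return f"{self.name}, {self.position}"

e = Employee("Béla", "Scraper", 1000)
print(e.describe())                 # Béla, Scraper
print(e.unicode_to_ascii("Béla"))   # Bla (the non-ASCII character is dropped)
```

The same mixin could equally be added to a FileReader or WebPageScraper class, as the text suggests, since it makes no assumptions about the class it is mixed into.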
Some classes are abstract, meaning they cannot be directly instantiated into objects; they're only meant to be inherited into other classes. Other classes are utility classes, which contain only class variables and methods and are not meant to be instantiated or subclassed.[44]

In prototype-based programming, there aren't any classes. Instead, each object is linked to another object, called its prototype or parent. In Self, an object may have multiple or no parents,[45] but in the most popular prototype-based language, JavaScript, every object has exactly one prototype link, up to the base Object type whose prototype is null.

The prototype acts as a model for new objects. For example, if you have an object fruit, you can make two objects apple and orange based on it. There is no fruit class, but they share traits from the fruit prototype. Prototype-based languages also allow objects to have their own unique properties, so the apple object might have an attribute sugar_content, while the orange or fruit objects do not.

Some languages, like Go, don't use inheritance at all.[46] Instead, they encourage "composition over inheritance", where objects are built from smaller parts instead of parent-child relationships. For example, instead of inheriting from class Person, the Employee class could simply contain a Person object. This lets the Employee class control how much of Person it exposes to other parts of the program. Delegation is another language feature that can be used as an alternative to inheritance.

Programmers have different opinions on inheritance. Bjarne Stroustrup, author of C++, has stated that it is possible to do OOP without inheritance.[47] Rob Pike has criticized inheritance for creating complicated hierarchies instead of simpler solutions.[48]
People often think that if one class inherits from another, it means the subclass "is a" more specific version of the original class. This presumes the program semantics are that objects from the subclass can always replace objects from the original class without problems. This concept is known as behavioral subtyping, more specifically the Liskov substitution principle.

However, this is often not true, especially in programming languages that allow mutable objects, objects that change after they are created. In fact, subtype polymorphism as enforced by the type checker in OOP languages cannot guarantee behavioral subtyping in most if not all contexts. For example, the circle-ellipse problem is notoriously difficult to handle using OOP's concept of inheritance. Behavioral subtyping is undecidable in general, so it cannot be easily implemented by a compiler. Because of this, programmers must carefully design class hierarchies to avoid mistakes that the programming language itself cannot catch.
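A Python sketch of how mutability breaks behavioral subtyping, using the rectangle/square variant of the circle-ellipse problem (the class and method names are illustrative):

```python
# Square "is a" Rectangle by type, but mutation breaks the substitution.

class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h
    def set_width(self, w):
        self.w = w

class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)
    def set_width(self, w):
        self.w = self.h = w     # a square must keep its sides equal

def stretch(rect):
    # Callers of Rectangle may assume set_width leaves the height alone...
    rect.set_width(10)
    return rect.h

print(stretch(Rectangle(2, 3)))  # 3  - height unchanged, as expected
print(stretch(Square(3)))        # 10 - Square silently broke that assumption
```

The type checker accepts passing a Square wherever a Rectangle is expected, yet the behavioral contract of set_width no longer holds, which is exactly the gap between subtype polymorphism and behavioral subtyping described above.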
When a method is called on an object, the object itself, not outside code, decides which specific code to run. This process, called dynamic dispatch, usually happens at run time by checking a table linked to the object to find the correct method. In this context, a method call is also known as message passing, meaning the method name and its inputs are like a message sent to the object for it to act on. If the method choice depends on more than one type of object (such as other objects passed as parameters), it's called multiple dispatch.
Dynamic dispatch works together with inheritance: if an object doesn't have the requested method, it looks up to its parent class (delegation), and continues up the chain until it finds the method or reaches the top.
Data abstraction is a way of organizing code so that only certain parts of the data are visible to related functions (data hiding). This helps prevent mistakes and makes the program easier to manage. Because data abstraction works well, many programming styles, like object-oriented programming and functional programming, use it as a key principle. Encapsulation is another important idea in programming. It means keeping the internal details of an object hidden from the outside code. This makes it easier to change how an object works on the inside without affecting other parts of the program, such as in code refactoring. Encapsulation also helps keep related code together (decoupling), making it easier for programmers to understand.

In object-oriented programming, objects act as a barrier between their internal workings and external code. Outside code can only interact with an object by calling specific public methods or variables. If a class only allows access to its data through methods and not directly, this is called information hiding. When designing a program, it's often recommended to keep data as hidden as possible. This means using local variables inside functions when possible, then private variables (which only the object can use), and finally public variables (which can be accessed by any part of the program) if necessary. Keeping data hidden helps prevent problems when changing the code later.[49] Some programming languages, like Java, control information hiding by marking variables as private (hidden) or public (accessible).[50] Other languages, like Python, rely on naming conventions, such as starting a private method's name with an underscore. Intermediate levels of access also exist, such as Java's protected keyword (which allows access from the same class and its subclasses, but not objects of a different class), and the internal keyword in C#, Swift, and Kotlin, which restricts access to files within the same module.[51]
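The Python convention mentioned above can be illustrated with a small sketch (the Counter class here is hypothetical, invented for the example):

```python
# Python relies on naming conventions rather than enforced access keywords;
# a single leading underscore marks an attribute as private by convention.

class Counter:
    def __init__(self):
        self._count = 0          # "private" by convention: internal state

    def increment(self):         # public method: the intended way to
        self._count += 1         # modify the hidden state

    def value(self):             # public accessor
        return self._count

c = Counter()
c.increment()
c.increment()
print(c.value())  # 2
```

Nothing in the language stops outside code from reading c._count directly; the underscore is a signal to other programmers, not a compiler-enforced barrier like Java's private.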
Abstraction and information hiding are important concepts in programming, especially in object-oriented languages.[52] Programs often create many copies of objects, and each one works independently. Supporters of this approach say it makes code easier to reuse and intuitively represents real-world situations.[53] However, others argue that object-oriented programming does not enhance readability or modularity.[54][55] Eric S. Raymond has written that object-oriented programming languages tend to encourage thickly layered programs that destroy transparency.[56] Raymond compares this unfavourably to the approach taken with Unix and the C programming language.[56]

One programming principle, called the "open/closed principle", says that classes and functions should be "open for extension, but closed for modification". Luca Cardelli has stated that OOP languages have "extremely poor modularity properties with respect to class extension and modification", and tend to be extremely complex.[54] The latter point is reiterated by Joe Armstrong, the principal inventor of Erlang, who is quoted as saying:[55]
The problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
Leo Brodie says that information hiding can lead to copying the same code in multiple places (duplicating code),[57] which goes against the don't repeat yourself rule of software development.[58]

Polymorphism is the use of one symbol to represent multiple different types.[59] In object-oriented programming, polymorphism more specifically refers to subtyping or subtype polymorphism, where a function can work with a specific interface and thus manipulate entities of different classes in a uniform manner.[60]
For example, imagine a program has two shapes: a circle and a square. Both come from a common class called "Shape". Each shape has its own way of drawing itself. With subtype polymorphism, the program doesn't need to know the type of each shape, and can simply call the "Draw" method for each shape. The programming language runtime will ensure the correct version of the "Draw" method runs for each shape. Because the details of each shape are handled inside their own classes, this makes the code simpler and more organized, enabling strong separation of concerns.
In object-oriented programming, objects have methods that can change or use the object's data. Many programming languages use a special word, like this or self, to refer to the current object. In languages that support open recursion, a method in an object can call other methods in the same object, including itself, using this special word. This allows a method in one class to call another method defined later in a subclass, a feature known as late binding.
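Open recursion and late binding can be shown with a short Python sketch (the classes are hypothetical, invented for the example):

```python
# Open recursion: a base-class method calls self.area(), and that call is
# resolved against the actual object's class at run time (late binding).

class Shape:
    def describe(self):
        # self.area() is looked up on the actual object, so a subclass
        # defined later supplies the implementation this call uses.
        return f"area = {self.area()}"
    def area(self):
        return 0

class UnitSquare(Shape):
    def area(self):
        return 1

print(UnitSquare().describe())  # area = 1
```

Shape.describe was written before UnitSquare existed, yet its self.area() call reaches the subclass method, which is exactly the behavior the text calls late binding.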
OOP languages can be grouped into different types based on how they support and use objects:
Many popular programming languages, like C++, Java, and Python, use object-oriented programming. In the past, OOP was widely accepted,[62]but recently, some programmers have criticized it and prefer functional programming instead.[63]A study by Potok et al. found no major difference in productivity between OOP and other methods.[64]
Paul Graham, a well-known computer scientist, believes big companies like OOP because it helps manage large teams of average programmers. He argues that OOP adds structure, making it harder for one person to make serious mistakes, but at the same time restrains smart programmers.[65] Eric S. Raymond, a Unix programmer and open-source software advocate, argues that OOP is not the best way to write programs.[56]

Richard Feldman says that, while OOP features helped some languages stay organized, their popularity comes from other reasons.[66] Lawrence Krubner argues that OOP doesn't offer special advantages compared to other styles, like functional programming, and can make coding more complicated.[67] Luca Cardelli says that OOP is slower and takes longer to compile than procedural programming.[54]

In recent years, object-oriented programming (OOP) has become very popular in dynamic programming languages. Some languages, like Python, PowerShell, Ruby and Groovy, were designed with OOP in mind. Others, like Perl, PHP, and ColdFusion, started as non-OOP languages but added OOP features later (starting with Perl 5, PHP 4, and ColdFusion version 6).

On the web, HTML, XHTML, and XML documents use the Document Object Model (DOM), which works with the JavaScript language. JavaScript is a well-known example of a prototype-based language. Instead of using classes like other OOP languages, JavaScript creates new objects by copying (or "cloning") existing ones. Another language that uses this method is Lua.
When computers communicate in a client-server system, they send messages to request services. For example, a simple message might include a length field (showing how big the message is), a code that identifies the type of message, and a data value. These messages can be designed as structured objects that both the client and server understand, so that each type of message corresponds to a class of objects in the client and server code. More complex messages might include structured objects as additional details. The client and server need to know how to serialize and deserialize these messages so they can be transmitted over the network, and map them to the appropriate object types. Both clients and servers can be thought of as complex object-oriented systems.
The Distributed Data Management Architecture (DDM) uses this idea by organizing objects into four levels:

The first version of DDM defined distributed file services. Later, it was expanded to support databases through the Distributed Relational Database Architecture (DRDA).

Design patterns are common solutions to problems in software design. Some design patterns are especially useful for object-oriented programming, and design patterns are typically introduced in an OOP context.

The following are notable software design patterns for OOP objects.[68]

A common anti-pattern is the God object, an object that knows or does too much.

Design Patterns: Elements of Reusable Object-Oriented Software is a famous book published in 1994 by four authors: Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. People often call them the "Gang of Four". The book talks about the strengths and weaknesses of object-oriented programming and explains 23 common ways to solve programming problems.
These solutions, called "design patterns," are grouped into three types:
Both object-oriented programming and relational database management systems (RDBMSs) are widely used in software today. However, relational databases don't store objects directly, which creates a challenge when using them together. This issue is called object-relational impedance mismatch.

To solve this problem, developers use different methods, but none of them are perfect.[69] One of the most common solutions is object-relational mapping (ORM), which helps connect object-oriented programs to relational databases. Examples of ORM tools include Visual FoxPro, Java Data Objects, and Ruby on Rails ActiveRecord.

Some databases, called object databases, are designed to work with object-oriented programming. However, they have not been as popular or successful as relational databases.

Date and Darwen have proposed a theoretical foundation that uses OOP as a kind of customizable type system to support RDBMSs, but it forbids objects containing pointers to other objects.[70]

In responsibility-driven design, classes are built around what they need to do and the information they share, in the form of a contract. This is different from data-driven design, where classes are built based on the data they need to store. According to Wirfs-Brock and Wilkerson, the originators of responsibility-driven design, responsibility-driven design is the better approach.[71]
SOLID is a set of five rules for designing good software, promoted by Robert C. Martin; the mnemonic acronym was coined by Michael Feathers:
GRASP (General Responsibility Assignment Software Patterns) is another set of software design rules, created by Craig Larman, that helps developers assign responsibilities to different parts of a program:[72]

In object-oriented programming, objects are things that exist while a program is running. An object can represent anything, like a person, a place, a bank account, or a table of data. Many researchers have tried to formally define how OOP works. Records are the basis for understanding objects. They can represent fields, and also methods, if function literals can be stored. However, inheritance presents difficulties, particularly with the interactions between open recursion and encapsulated state. Researchers have used recursive types and co-algebraic data types to incorporate essential features of OOP.[73] Abadi and Cardelli defined several extensions of System F<: that deal with mutable objects, allowing both subtype polymorphism and parametric polymorphism (generics), and were able to formally model many OOP concepts and constructs.[74] Although far from trivial, static analysis of object-oriented programming languages such as Java is a mature field,[75] with several commercial tools.[76]
https://en.wikipedia.org/wiki/Object-oriented_programming
The Key Management Interoperability Protocol (KMIP) is an extensible communication protocol that defines message formats for the manipulation of cryptographic keys on a key management server. This facilitates data encryption by simplifying encryption key management. Keys may be created on a server and then retrieved, possibly wrapped by other keys. Both symmetric and asymmetric keys are supported, including the ability to sign certificates. KMIP also allows clients to ask a server to encrypt or decrypt data, without needing direct access to the key.

The KMIP standard was first released in 2010. Clients and servers are commercially available from multiple vendors. The KMIP standard effort is governed by the OASIS standards body. Technical details can also be found on the official KMIP page[1] and KMIP wiki.[2]

A KMIP server stores and controls Managed Objects like symmetric and asymmetric keys, certificates, and user defined objects. Clients then use the protocol for accessing these objects subject to a security model that is implemented by the servers. Operations are provided to create, locate, use, retrieve and update managed objects.

Each managed object comprises an immutable Value, like a key-block containing a cryptographic key. These objects also have mutable Attributes, which can be used for storing metadata about their keys. Some attributes are derived directly from the Value, like the cryptographic algorithm and key length. Other attributes are defined in the specification for the management of objects, like the Application-Specific Identifier, which is usually derived from tape-identification data. Additional identifiers can be defined by the server or client per application need.

Each object is identified by a unique and immutable object identifier that is generated by the server and is used for getting object values. Managed objects may also be given a number of mutable yet globally unique Name attributes, which can be used for locating objects.
The types of managed-objects being managed by KMIP include:
The operations provided by KMIP include:
Each key has a cryptographic state defined by the National Institute of Standards and Technology (NIST). Keys are created in an Initial state, and must be Activated before they can be used. Keys may then be Deactivated and eventually Destroyed. A key may also be marked as Compromised.

Operations are provided for manipulating key state in conformance with the NIST life-cycle guidelines. A key's state may be interrogated using the State attribute or the attributes that record the dates of each transition, such as Activation Date. Dates can be specified in the future, so keys automatically become unavailable for specified operations when they expire.

KMIP is a stateless protocol in which messages are sent from a client to a server, and the client then normally awaits a reply. Each request may contain many operations, which enables the protocol to efficiently handle large numbers of keys. There are also advanced features for processing requests asynchronously.
The KMIP protocol specifies several different types of encodings. The main one is a type–length–value encoding of messages, called TTLV (Tag, Type, Length, Value). Nested TTLV structures allow for encoding of complex, multi-operation messages in a single binary message.
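The shape of a single TTLV item can be sketched in Python. This is an illustrative sketch only: the field widths follow the KMIP layout (3-byte tag, 1-byte type, 4-byte big-endian length, value padded to an 8-byte boundary), but the tag and type constants below are placeholders, and the normative encoding rules per data type are defined by the KMIP specification, not by this sketch.

```python
import struct

# Minimal sketch of a TTLV-style (Tag, Type, Length, Value) item.

def encode_ttlv(tag: int, typ: int, value: bytes) -> bytes:
    header = tag.to_bytes(3, "big") + bytes([typ]) + struct.pack(">I", len(value))
    padding = b"\x00" * (-len(value) % 8)   # pad the value to a multiple of 8 bytes
    return header + value + padding

# Hypothetical tag/type numbers for a text-string item:
item = encode_ttlv(0x420001, 0x07, b"MyKeyName")
print(len(item))   # 8-byte header + 9 value bytes padded to 16 = 24
```

Because the length field records the unpadded value size and every item ends on an 8-byte boundary, a parser can walk a buffer of nested items without any out-of-band framing, which is what lets KMIP pack many operations into one binary message.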
There are also well defined XML and JSON encodings of the protocol for environments where binary is not appropriate.
All of these protocols are expected to be transmitted using the TLS protocol in order to ensure integrity and security. However, it is also possible to register and retrieve keys that are wrapped (encrypted) using another key on the server, which provides an additional level of security.
KMIP provides standardized mechanisms to manage a KMIP server by suitably authorized administrative clients using System Objects.
User objects can be created and authorized to perform specific operations on specific managed objects. Both Managed Objects and Users can be assigned to groups, and those groups can form a hierarchy which facilitates efficient management of complex operating environments.
KMIP also provides a provisioning system that facilitates providing end points with credentials using simple one-time passwords.
Default values of attributes can be provided, so that simple clients need not specify cryptographic and other parameters. For example, an administrative user might specify that all "SecretAgent" keys should be 192-bit AES keys with CBC block chaining. A client then only needs to specify that they wish to create a "SecretAgent" key to have those defaults provided. It is also possible to enforce constraints on key parameters that implement security policy.

KMIP also defines a set of profiles, which are subsets of the KMIP specification showing common usage for a particular context. A particular KMIP implementation is said to be conformant to a profile when it fulfills all the requirements set forth in a profile specification document. OASIS has put forth various profiles describing the requirements for compliance towards storage arrays[3] and tape libraries,[4] but any organization can create a profile.

PKCS#11 is a C API used to control a hardware security module. PKCS#11 provides cryptographic operations to encrypt and decrypt, as well as operations for simple key management. There is a considerable amount of overlap between the PKCS#11 API and the KMIP protocol.

The two standards were originally developed independently. PKCS#11 was created by RSA Security, but the standard is now also governed by an OASIS technical committee. It is the stated objective of both the PKCS#11 and KMIP committees to align the standards where practical. For example, the PKCS#11 Sensitive and Extractable attributes have been added to KMIP version 1.4. Many individuals are on the technical committees of both KMIP and PKCS#11.
KMIP 2.0 also provides a standardized mechanism to transport PKCS#11 messages from clients to servers. This can be used to target different PKCS#11 implementations without the need to recompile the programs that use it.
The OASIS KMIP Technical Committee maintains a list of known KMIP implementations, which can be found atOASIS Known Implementations. As of December 2024, there are 35 implementations and 91 KMIP products in this list.
The KMIP standard is defined using a formal specification document, test cases, and profiles put forth by the OASIS KMIP technical committee. These documents are publicly available on the OASIS website.

Vendors demonstrate interoperability during a process organized by the OASIS KMIP technical committee in the months before each RSA security conference. These demonstrations are informally known as interops. KMIP interops have been held every year since 2010. The following chart shows the normalised number of general test cases and profile tests of all interop participants that have participated in two or more interops since 2014.[5]

The 2025 interoperability event tested Post Quantum Cryptography (PQC) algorithms that will be required as quantum computers become more powerful.[6]
The following shows the XML encoding of a request to Locate a key named "MyKeyName" and return its value wrapped in a different key with ID "c6d14516-4d38-0644-b810-1913b9aef4da". (TTLV is a more common wire protocol, but XML is more human readable.)
Documentation is freely available from the OASIS website.[7]This includes the formal technical specification and a usage guide to assist people that are unfamiliar with the specification. A substantial library of test cases is also provided. These are used to test the interoperability of clients and servers, but they also provide concrete examples of the usage of each standard KMIP feature.
https://en.wikipedia.org/wiki/KMIP
In number theory and combinatorics, a partition of a non-negative integer n, also called an integer partition, is a way of writing n as a sum of positive integers. Two sums that differ only in the order of their summands are considered the same partition. (If order matters, the sum becomes a composition.) For example, 4 can be partitioned in five distinct ways:

4
3 + 1
2 + 2
2 + 1 + 1
1 + 1 + 1 + 1
The only partition of zero is the empty sum, having no parts.
The order-dependent composition 1 + 3 is the same partition as 3 + 1, and the two distinct compositions 1 + 2 + 1 and 1 + 1 + 2 represent the same partition as 2 + 1 + 1.

An individual summand in a partition is called a part. The number of partitions of n is given by the partition function p(n). So p(4) = 5. The notation λ ⊢ n means that λ is a partition of n.

Partitions can be graphically visualized with Young diagrams or Ferrers diagrams. They occur in a number of branches of mathematics and physics, including the study of symmetric polynomials and of the symmetric group and in group representation theory in general.
The seven partitions of 5 are

5
4 + 1
3 + 2
3 + 1 + 1
2 + 2 + 1
2 + 1 + 1 + 1
1 + 1 + 1 + 1 + 1

Some authors treat a partition as a non-increasing sequence of summands, rather than an expression with plus signs. For example, the partition 2 + 2 + 1 might instead be written as the tuple (2, 2, 1) or in the even more compact form (2², 1), where the superscript indicates the number of repetitions of a part.
This multiplicity notation for a partition can be written alternatively as {\displaystyle 1^{m_{1}}2^{m_{2}}3^{m_{3}}\cdots }, where m₁ is the number of 1's, m₂ is the number of 2's, etc. (Components with mᵢ = 0 may be omitted.) For example, in this notation, the partitions of 5 are written {\displaystyle 5^{1},1^{1}4^{1},2^{1}3^{1},1^{2}3^{1},1^{1}2^{2},1^{3}2^{1}}, and {\displaystyle 1^{5}}.
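Converting from the tuple form to this multiplicity notation is a simple counting exercise, sketched below in Python (the function name is illustrative):

```python
from collections import Counter

# Turn a partition, given as a tuple of parts, into the multiplicity
# notation above: a mapping part -> number of repetitions.

def multiplicities(partition):
    counts = Counter(partition)
    return {part: counts[part] for part in sorted(counts)}

print(multiplicities((2, 2, 1)))   # {1: 1, 2: 2}, i.e. the partition 1^1 2^2
```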
There are two common diagrammatic methods to represent partitions: as Ferrers diagrams, named after Norman Macleod Ferrers, and as Young diagrams, named after Alfred Young. Both have several possible conventions; here, we use English notation, with diagrams aligned in the upper-left corner.
The partition 6 + 4 + 3 + 1 of the number 14 can be represented by the following diagram:
The 14 circles are lined up in 4 rows, each having the size of a part of the partition.
The diagrams for the 5 partitions of the number 4 are shown below:
An alternative visual representation of an integer partition is its Young diagram (often also called a Ferrers diagram). Rather than representing a partition with dots, as in the Ferrers diagram, the Young diagram uses boxes or squares. Thus, the Young diagram for the partition 5 + 4 + 1 is
while the Ferrers diagram for the same partition is
While this seemingly trivial variation does not appear worthy of separate mention, Young diagrams turn out to be extremely useful in the study ofsymmetric functionsandgroup representation theory: filling the boxes of Young diagrams with numbers (or sometimes more complicated objects) obeying various rules leads to a family of objects calledYoung tableaux, and these tableaux have combinatorial and representation-theoretic significance.[1]As a type of shape made by adjacent squares joined together, Young diagrams are a special kind ofpolyomino.[2]
The partition function p(n) counts the partitions of a non-negative integer n. For instance, p(4) = 5 because the integer 4 has the five partitions 1 + 1 + 1 + 1, 1 + 1 + 2, 1 + 3, 2 + 2, and 4.
The values of this function for n = 0, 1, 2, … are:
1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, …
The generating function of p is
{\displaystyle \sum _{n=0}^{\infty }p(n)q^{n}=\prod _{j=1}^{\infty }{\frac {1}{1-q^{j}}}.}
No closed-form expression for the partition function is known, but it has both asymptotic expansions that accurately approximate it and recurrence relations by which it can be calculated exactly. It grows as an exponential function of the square root of its argument,[3] as follows:
{\displaystyle p(n)\sim {\frac {1}{4n{\sqrt {3}}}}\exp \left(\pi {\sqrt {\frac {2n}{3}}}\right)} as n → ∞.
In 1937, Hans Rademacher found a way to represent the partition function p(n) by the convergent series
{\displaystyle p(n)={\frac {1}{\pi {\sqrt {2}}}}\sum _{k=1}^{\infty }A_{k}(n){\sqrt {k}}\cdot {\frac {d}{dn}}\left({{\frac {1}{\sqrt {n-{\frac {1}{24}}}}}\sinh \left[{{\frac {\pi }{k}}{\sqrt {{\frac {2}{3}}\left(n-{\frac {1}{24}}\right)}}}\,\,\,\right]}\right)} where
{\displaystyle A_{k}(n)=\sum _{0\leq m<k,\;(m,k)=1}e^{\pi i\left(s(m,k)-2nm/k\right)}} and s(m,k) is the Dedekind sum.
The multiplicative inverse of its generating function is the Euler function; by Euler's pentagonal number theorem this function is an alternating sum of pentagonal-number powers of its argument.
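The pentagonal number theorem underlies a practical way to compute p(n): the generalized pentagonal numbers k(3k ∓ 1)/2 index an alternating recurrence. A hedged sketch (the function name is ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n):
    """Partition function via Euler's pentagonal-number recurrence."""
    if n < 0:
        return 0
    if n == 0:
        return 1
    total, k = 0, 1
    while True:
        pent1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers
        pent2 = k * (3 * k + 1) // 2
        if pent1 > n:                  # both pentagonal numbers exceed n: done
            break
        sign = -1 if k % 2 == 0 else 1
        total += sign * (p(n - pent1) + p(n - pent2))
        k += 1
    return total

print([p(n) for n in range(10)])  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]
```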
Srinivasa Ramanujan discovered that the partition function has nontrivial patterns in modular arithmetic, now known as Ramanujan's congruences. For instance, whenever the decimal representation of n ends in the digit 4 or 9, the number of partitions of n will be divisible by 5.[4]
In both combinatorics and number theory, families of partitions subject to various restrictions are often studied.[5] This section surveys a few such restrictions.
If we flip the diagram of the partition 6 + 4 + 3 + 1 along its main diagonal, we obtain another partition of 14:
By turning the rows into columns, we obtain the partition 4 + 3 + 3 + 2 + 1 + 1 of the number 14. Such partitions are said to be conjugate of one another.[6] In the case of the number 4, the partitions 4 and 1 + 1 + 1 + 1 are a conjugate pair, and the partitions 3 + 1 and 2 + 1 + 1 are conjugates of each other. Of particular interest are partitions, such as 2 + 2, which have themselves as conjugate. Such partitions are said to be self-conjugate.[7]
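Conjugation can be computed directly from the diagram description: column j of the diagram has one cell for each part larger than j. A small sketch (the function name is ours):

```python
def conjugate(partition):
    """Transpose a partition given as a non-increasing list of parts."""
    if not partition:
        return []
    # Column j (0-indexed) of the diagram has one cell per part exceeding j.
    return [sum(1 for part in partition if part > j) for j in range(partition[0])]

print(conjugate([6, 4, 3, 1]))  # [4, 3, 3, 2, 1, 1]
```

A partition is self-conjugate exactly when `conjugate(p) == p`, as with [2, 2].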
Claim: The number of self-conjugate partitions is the same as the number of partitions with distinct odd parts.
Proof (outline): The crucial observation is that every odd part can be "folded" in the middle to form a self-conjugate diagram:
One can then obtain a bijection between the set of partitions with distinct odd parts and the set of self-conjugate partitions, as illustrated by the following example:
Among the 22 partitions of the number 8, there are 6 that contain only odd parts:
7 + 1, 5 + 3, 5 + 1 + 1 + 1, 3 + 3 + 1 + 1, 3 + 1 + 1 + 1 + 1 + 1, 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1
Alternatively, we could count partitions in which no number occurs more than once. Such a partition is called a partition with distinct parts. If we count the partitions of 8 with distinct parts, we also obtain 6:
8, 7 + 1, 6 + 2, 5 + 3, 5 + 2 + 1, 4 + 3 + 1
This is a general property. For each positive number, the number of partitions with odd parts equals the number of partitions with distinct parts, denoted by q(n).[8][9] This result was proved by Leonhard Euler in 1748[10] and later was generalized as Glaisher's theorem.
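Euler's theorem can be checked by brute force for small n. A sketch (the recursive enumerator `partitions` is our own helper, not a library function):

```python
def partitions(n, max_part=None):
    """Yield every partition of n as a non-increasing tuple of parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

for n in range(1, 9):
    odd = sum(1 for q in partitions(n) if all(part % 2 == 1 for part in q))
    distinct = sum(1 for q in partitions(n) if len(set(q)) == len(q))
    assert odd == distinct
print("odd-part counts match distinct-part counts for n = 1..8")
```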
For every type of restricted partition there is a corresponding function for the number of partitions satisfying the given restriction. An important example is q(n) (partitions into distinct parts). The first few values of q(n) are (starting with q(0) = 1):
1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10, …
The generating function for q(n) is given by[11]
{\displaystyle \sum _{n=0}^{\infty }q(n)x^{n}=\prod _{k=1}^{\infty }\left(1+x^{k}\right)=\prod _{k=1}^{\infty }{\frac {1}{1-x^{2k-1}}}.}
The pentagonal number theorem gives a recurrence for q:[12]
q(k) = a_k + q(k − 1) + q(k − 2) − q(k − 5) − q(k − 7) + q(k − 12) + q(k − 15) − ⋯
where a_k is (−1)^m if k = 3m² − m for some integer m, and is 0 otherwise.
By taking conjugates, the number p_k(n) of partitions of n into exactly k parts is equal to the number of partitions of n in which the largest part has size k. The function p_k(n) satisfies the recurrence
p_k(n) = p_k(n − k) + p_{k−1}(n − 1)
with initial values p_0(0) = 1 and p_k(n) = 0 if n ≤ 0 or k ≤ 0 and n and k are not both zero.[13]
One recovers the function p(n) by
p(n) = p_1(n) + p_2(n) + ⋯ + p_n(n).
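The recurrence and initial values above translate directly into code. A sketch (the function names are ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_exact(k, n):
    """Partitions of n into exactly k parts, via p_k(n) = p_k(n-k) + p_{k-1}(n-1)."""
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    return p_exact(k, n - k) + p_exact(k - 1, n - 1)

def p(n):
    """Recover the unrestricted partition function by summing over k."""
    return sum(p_exact(k, n) for k in range(n + 1))

print(p(4))  # 5
```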
One possible generating function for such partitions, taking k fixed and n variable, is
{\displaystyle \sum _{n\geq 0}p_{k}(n)x^{n}={\frac {x^{k}}{(1-x)(1-x^{2})\cdots (1-x^{k})}}.}
More generally, if T is a set of positive integers then the number of partitions of n, all of whose parts belong to T, has generating function
{\displaystyle \prod _{t\in T}\left(1-x^{t}\right)^{-1}.}
This can be used to solve change-making problems (where the set T specifies the available coins). As two particular cases, one has that the number of partitions of n in which all parts are 1 or 2 (or, equivalently, the number of partitions of n into 1 or 2 parts) is
⌊n/2⌋ + 1,
and the number of partitions of n in which all parts are 1, 2 or 3 (or, equivalently, the number of partitions of n into at most three parts) is the nearest integer to (n + 3)²/12.[14]
One may also simultaneously limit the number and size of the parts. Let p(N, M; n) denote the number of partitions of n with at most M parts, each of size at most N. Equivalently, these are the partitions whose Young diagram fits inside an M × N rectangle. There is a recurrence relation
{\displaystyle p(N,M;n)=p(N,M-1;n)+p(N-1,M;n-M)}
obtained by observing that p(N, M; n) − p(N, M − 1; n) counts the partitions of n into exactly M parts of size at most N, and subtracting 1 from each part of such a partition yields a partition of n − M into at most M parts.[15]
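The recurrence above can be sketched directly. As a sanity check, summing the counts over all n should give the ordinary binomial coefficient C(M + N, M), the q → 1 limit of the Gaussian binomial identity (function name ours):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def p_box(N, M, n):
    """Partitions of n with at most M parts, each of size at most N."""
    if n == 0:
        return 1
    if n < 0 or N <= 0 or M <= 0:
        return 0
    # Either fewer than M parts, or exactly M parts (strip one cell per part).
    return p_box(N, M - 1, n) + p_box(N - 1, M, n - M)

N, M = 3, 4
assert sum(p_box(N, M, n) for n in range(N * M + 1)) == comb(N + M, M)
print("row sums match C(M+N, M)")
```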
The Gaussian binomial coefficient is defined as:
{\displaystyle {k+\ell \choose \ell }_{q}={k+\ell \choose k}_{q}={\frac {\prod _{j=1}^{k+\ell }(1-q^{j})}{\prod _{j=1}^{k}(1-q^{j})\prod _{j=1}^{\ell }(1-q^{j})}}.}
The Gaussian binomial coefficient is related to the generating function of p(N, M; n) by the equality
{\displaystyle \sum _{n=0}^{MN}p(N,M;n)q^{n}={M+N \choose M}_{q}.}
The rank of a partition is the largest number k such that the partition contains at least k parts of size at least k. For example, the partition 4 + 3 + 3 + 2 + 1 + 1 has rank 3 because it contains 3 parts that are ≥ 3, but does not contain 4 parts that are ≥ 4. In the Ferrers diagram or Young diagram of a partition of rank r, the r × r square of entries in the upper-left is known as the Durfee square:
The Durfee square has applications within combinatorics in the proofs of various partition identities.[16] It also has some practical significance in the form of the h-index.
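The side of the Durfee square can be read off a non-increasing partition directly, in the same way the h-index is computed from a sorted citation list. A sketch (function name ours):

```python
def durfee_rank(partition):
    """Side of the Durfee square: largest k with at least k parts of size >= k."""
    # With parts sorted non-increasingly, the k-th part (1-indexed) must be >= k.
    return max((k for k in range(1, len(partition) + 1) if partition[k - 1] >= k),
               default=0)

print(durfee_rank([4, 3, 3, 2, 1, 1]))  # 3
```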
A different statistic is also sometimes called the rank of a partition (or Dyson rank), namely, the difference λ_k − k for a partition of k parts with largest part λ_k. This statistic (which is unrelated to the one described above) appears in the study of Ramanujan congruences.
There is a natural partial order on partitions given by inclusion of Young diagrams. This partially ordered set is known as Young's lattice. The lattice was originally defined in the context of representation theory, where it is used to describe the irreducible representations of symmetric groups S_n for all n, together with their branching properties, in characteristic zero. It also has received significant study for its purely combinatorial properties; notably, it is the motivating example of a differential poset.
There is a deep theory of random partitions chosen according to the uniform probability distribution on the symmetric group via the Robinson–Schensted correspondence. In 1977, Logan and Shepp, as well as Vershik and Kerov, showed that the Young diagram of a typical large partition becomes asymptotically close to the graph of a certain analytic function minimizing a certain functional. In 1988, Baik, Deift and Johansson extended these results to determine the distribution of the longest increasing subsequence of a random permutation in terms of the Tracy–Widom distribution.[17] Okounkov related these results to the combinatorics of Riemann surfaces and representation theory.[18][19]
https://en.wikipedia.org/wiki/Integer_partition
In mathematics, the error function (also called the Gauss error function), often denoted by erf, is a function erf : ℂ → ℂ defined as:[1]
{\displaystyle \operatorname {erf} z={\frac {2}{\sqrt {\pi }}}\int _{0}^{z}e^{-t^{2}}\,\mathrm {d} t.}
The integral here is a complex contour integral which is path-independent because exp(−t²) is holomorphic on the whole complex plane ℂ. In many applications, the function argument is a real number, in which case the function value is also real.
In some old texts,[2] the error function is defined without the factor of 2/√π.
This nonelementary integral is a sigmoid function that occurs often in probability, statistics, and partial differential equations.
In statistics, for non-negative real values of x, the error function has the following interpretation: for a real random variable Y that is normally distributed with mean 0 and standard deviation 1/√2, erf x is the probability that Y falls in the range [−x, x].
Two closely related functions are the complementary error function erfc : ℂ → ℂ, defined as
{\displaystyle \operatorname {erfc} z=1-\operatorname {erf} z,}
and the imaginary error function erfi : ℂ → ℂ, defined as
{\displaystyle \operatorname {erfi} z=-i\operatorname {erf} iz,}
where i is the imaginary unit.
The name "error function" and its abbreviation erf were proposed by J. W. L. Glaisher in 1871 on account of its connection with "the theory of Probability, and notably the theory of Errors."[3] The error function complement was also discussed by Glaisher in a separate publication in the same year.[4] For the "law of facility" of errors whose density is given by {\displaystyle f(x)=\left({\frac {c}{\pi }}\right)^{1/2}e^{-cx^{2}}} (the normal distribution), Glaisher calculates the probability of an error lying between p and q as: {\displaystyle \left({\frac {c}{\pi }}\right)^{\frac {1}{2}}\int _{p}^{q}e^{-cx^{2}}\,\mathrm {d} x={\tfrac {1}{2}}\left(\operatorname {erf} \left(q{\sqrt {c}}\right)-\operatorname {erf} \left(p{\sqrt {c}}\right)\right).}
When the results of a series of measurements are described by a normal distribution with standard deviation σ and expected value 0, then erf(a/(σ√2)) is the probability that the error of a single measurement lies between −a and +a, for positive a. This is useful, for example, in determining the bit error rate of a digital communication system.
The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function.
The error function and its approximations can be used to estimate results that hold with high probability or with low probability. Given a random variable X ~ Norm[μ, σ] (a normal distribution with mean μ and standard deviation σ) and a constant L < μ, it can be shown via integration by substitution:
{\displaystyle {\begin{aligned}\Pr[X\leq L]&={\frac {1}{2}}+{\frac {1}{2}}\operatorname {erf} {\frac {L-\mu }{{\sqrt {2}}\sigma }}\\&\approx A\exp \left(-B\left({\frac {L-\mu }{\sigma }}\right)^{2}\right)\end{aligned}}}
where A and B are certain numeric constants. If L is sufficiently far from the mean, specifically μ − L ≥ σ√(ln k), then:
Pr[X≤L]≤Aexp(−Blnk)=AkB{\displaystyle \Pr[X\leq L]\leq A\exp(-B\ln {k})={\frac {A}{k^{B}}}}
so the probability goes to 0 ask→ ∞.
The probability for X being in the interval [L_a, L_b] can be derived as
{\displaystyle {\begin{aligned}\Pr[L_{a}\leq X\leq L_{b}]&=\int _{L_{a}}^{L_{b}}{\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right)\,\mathrm {d} x\\&={\frac {1}{2}}\left(\operatorname {erf} {\frac {L_{b}-\mu }{{\sqrt {2}}\sigma }}-\operatorname {erf} {\frac {L_{a}-\mu }{{\sqrt {2}}\sigma }}\right).\end{aligned}}}
The property erf(−z) = −erf z means that the error function is an odd function. This directly results from the fact that the integrand e^{−t²} is an even function (the antiderivative of an even function which is zero at the origin is an odd function, and vice versa).
Since the error function is an entire function which takes real numbers to real numbers, for any complex number z: {\displaystyle \operatorname {erf} {\overline {z}}={\overline {\operatorname {erf} z}}} where {\displaystyle {\overline {z}}} denotes the complex conjugate of z.
The integrand f = exp(−z²) and f = erf z are shown in the complex z-plane in the figures at right with domain coloring.
The error function at +∞ is exactly 1 (see Gaussian integral). On the real axis, erf z approaches 1 at z → +∞ and −1 at z → −∞. On the imaginary axis, it tends to ±i∞.
The error function is an entire function; it has no singularities (except at infinity) and its Taylor expansion always converges. For x ≫ 1, however, cancellation of leading terms makes the Taylor expansion impractical.
The defining integral cannot be evaluated in closed form in terms of elementary functions (see Liouville's theorem), but by expanding the integrand e^{−z²} into its Maclaurin series and integrating term by term, one obtains the error function's Maclaurin series as: {\displaystyle {\begin{aligned}\operatorname {erf} z&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}z^{2n+1}}{n!(2n+1)}}\\[6pt]&={\frac {2}{\sqrt {\pi }}}\left(z-{\frac {z^{3}}{3}}+{\frac {z^{5}}{10}}-{\frac {z^{7}}{42}}+{\frac {z^{9}}{216}}-\cdots \right)\end{aligned}}} which holds for every complex number z. The denominator terms are sequence A007680 in the OEIS.
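The Maclaurin series can be summed directly and compared against `math.erf` from the standard library. A sketch (the function name and the fixed term count are our choices):

```python
import math

def erf_series(z, terms=40):
    """Partial sum of the Maclaurin series for erf(z); accurate for moderate |z|."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * z ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
    return 2.0 / math.sqrt(math.pi) * total

print(abs(erf_series(1.0) - math.erf(1.0)))  # far below 1e-12
```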
For iterative calculation of the above series, the following alternative formulation may be useful:erfz=2π∑n=0∞(z∏k=1n−(2k−1)z2k(2k+1))=2π∑n=0∞z2n+1∏k=1n−z2k{\displaystyle {\begin{aligned}\operatorname {erf} z&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }\left(z\prod _{k=1}^{n}{\frac {-(2k-1)z^{2}}{k(2k+1)}}\right)\\[6pt]&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {z}{2n+1}}\prod _{k=1}^{n}{\frac {-z^{2}}{k}}\end{aligned}}}because−(2k− 1)z2/k(2k+ 1)expresses the multiplier to turn thekth term into the(k+ 1)th term (consideringzas the first term).
The imaginary error function has a very similar Maclaurin series, which is:erfiz=2π∑n=0∞z2n+1n!(2n+1)=2π(z+z33+z510+z742+z9216+⋯){\displaystyle {\begin{aligned}\operatorname {erfi} z&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {z^{2n+1}}{n!(2n+1)}}\\[6pt]&={\frac {2}{\sqrt {\pi }}}\left(z+{\frac {z^{3}}{3}}+{\frac {z^{5}}{10}}+{\frac {z^{7}}{42}}+{\frac {z^{9}}{216}}+\cdots \right)\end{aligned}}}which holds for everycomplex numberz.
The derivative of the error function follows immediately from its definition:ddzerfz=2πe−z2.{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {erf} z={\frac {2}{\sqrt {\pi }}}e^{-z^{2}}.}From this, the derivative of the imaginary error function is also immediate:ddzerfiz=2πez2.{\displaystyle {\frac {d}{dz}}\operatorname {erfi} z={\frac {2}{\sqrt {\pi }}}e^{z^{2}}.}Anantiderivativeof the error function, obtainable byintegration by parts, iszerfz+e−z2π+C.{\displaystyle z\operatorname {erf} z+{\frac {e^{-z^{2}}}{\sqrt {\pi }}}+C.}An antiderivative of the imaginary error function, also obtainable by integration by parts, iszerfiz−ez2π+C.{\displaystyle z\operatorname {erfi} z-{\frac {e^{z^{2}}}{\sqrt {\pi }}}+C.}Higher order derivatives are given byerf(k)z=2(−1)k−1πHk−1(z)e−z2=2πdk−1dzk−1(e−z2),k=1,2,…{\displaystyle \operatorname {erf} ^{(k)}z={\frac {2(-1)^{k-1}}{\sqrt {\pi }}}{\mathit {H}}_{k-1}(z)e^{-z^{2}}={\frac {2}{\sqrt {\pi }}}{\frac {\mathrm {d} ^{k-1}}{\mathrm {d} z^{k-1}}}\left(e^{-z^{2}}\right),\qquad k=1,2,\dots }whereHare the physicists'Hermite polynomials.[5]
An expansion,[6]which converges more rapidly for all real values ofxthan a Taylor expansion, is obtained by usingHans Heinrich Bürmann's theorem:[7]erfx=2πsgnx⋅1−e−x2(1−112(1−e−x2)−7480(1−e−x2)2−5896(1−e−x2)3−787276480(1−e−x2)4−⋯)=2πsgnx⋅1−e−x2(π2+∑k=1∞cke−kx2).{\displaystyle {\begin{aligned}\operatorname {erf} x&={\frac {2}{\sqrt {\pi }}}\operatorname {sgn} x\cdot {\sqrt {1-e^{-x^{2}}}}\left(1-{\frac {1}{12}}\left(1-e^{-x^{2}}\right)-{\frac {7}{480}}\left(1-e^{-x^{2}}\right)^{2}-{\frac {5}{896}}\left(1-e^{-x^{2}}\right)^{3}-{\frac {787}{276480}}\left(1-e^{-x^{2}}\right)^{4}-\cdots \right)\\[10pt]&={\frac {2}{\sqrt {\pi }}}\operatorname {sgn} x\cdot {\sqrt {1-e^{-x^{2}}}}\left({\frac {\sqrt {\pi }}{2}}+\sum _{k=1}^{\infty }c_{k}e^{-kx^{2}}\right).\end{aligned}}}wheresgnis thesign function. By keeping only the first two coefficients and choosingc1=31/200andc2= −341/8000, the resulting approximation shows its largest relative error atx= ±1.40587, where it is less than 0.0034361:erfx≈2πsgnx⋅1−e−x2(π2+31200e−x2−3418000e−2x2).{\displaystyle \operatorname {erf} x\approx {\frac {2}{\sqrt {\pi }}}\operatorname {sgn} x\cdot {\sqrt {1-e^{-x^{2}}}}\left({\frac {\sqrt {\pi }}{2}}+{\frac {31}{200}}e^{-x^{2}}-{\frac {341}{8000}}e^{-2x^{2}}\right).}
Given a complex number z, there is no unique complex number w satisfying erf w = z, so a true inverse function would be multivalued. However, for −1 < x < 1, there is a unique real number denoted erf⁻¹ x satisfying
{\displaystyle \operatorname {erf} \left(\operatorname {erf} ^{-1}x\right)=x.}
Theinverse error functionis usually defined with domain(−1,1), and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk|z| < 1of the complex plane, using the Maclaurin series[8]erf−1z=∑k=0∞ck2k+1(π2z)2k+1,{\displaystyle \operatorname {erf} ^{-1}z=\sum _{k=0}^{\infty }{\frac {c_{k}}{2k+1}}\left({\frac {\sqrt {\pi }}{2}}z\right)^{2k+1},}wherec0= 1andck=∑m=0k−1cmck−1−m(m+1)(2m+1)={1,1,76,12790,43692520,3480716200,…}.{\displaystyle {\begin{aligned}c_{k}&=\sum _{m=0}^{k-1}{\frac {c_{m}c_{k-1-m}}{(m+1)(2m+1)}}\\[1ex]&=\left\{1,1,{\frac {7}{6}},{\frac {127}{90}},{\frac {4369}{2520}},{\frac {34807}{16200}},\ldots \right\}.\end{aligned}}}
So we have the series expansion (common factors have been canceled from numerators and denominators):erf−1z=π2(z+π12z3+7π2480z5+127π340320z7+4369π45806080z9+34807π5182476800z11+⋯).{\displaystyle \operatorname {erf} ^{-1}z={\frac {\sqrt {\pi }}{2}}\left(z+{\frac {\pi }{12}}z^{3}+{\frac {7\pi ^{2}}{480}}z^{5}+{\frac {127\pi ^{3}}{40320}}z^{7}+{\frac {4369\pi ^{4}}{5806080}}z^{9}+{\frac {34807\pi ^{5}}{182476800}}z^{11}+\cdots \right).}(After cancellation the numerator and denominator values inOEIS:A092676andOEIS:A092677respectively; without cancellation the numerator terms are values inOEIS:A002067.) The error function's value at±∞is equal to±1.
For |z| < 1, we have erf(erf⁻¹ z) = z.
The inverse complementary error function is defined as
{\displaystyle \operatorname {erfc} ^{-1}(1-z)=\operatorname {erf} ^{-1}z.}
For real x, there is a unique real number erfi⁻¹ x satisfying erfi(erfi⁻¹ x) = x. The inverse imaginary error function is defined as erfi⁻¹ x.[9]
For any realx,Newton's methodcan be used to computeerfi−1x, and for−1 ≤x≤ 1, the following Maclaurin series converges:erfi−1z=∑k=0∞(−1)kck2k+1(π2z)2k+1,{\displaystyle \operatorname {erfi} ^{-1}z=\sum _{k=0}^{\infty }{\frac {(-1)^{k}c_{k}}{2k+1}}\left({\frac {\sqrt {\pi }}{2}}z\right)^{2k+1},}whereckis defined as above.
A usefulasymptotic expansionof the complementary error function (and therefore also of the error function) for large realxiserfcx=e−x2xπ(1+∑n=1∞(−1)n1⋅3⋅5⋯(2n−1)(2x2)n)=e−x2xπ∑n=0∞(−1)n(2n−1)!!(2x2)n,{\displaystyle {\begin{aligned}\operatorname {erfc} x&={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\left(1+\sum _{n=1}^{\infty }(-1)^{n}{\frac {1\cdot 3\cdot 5\cdots (2n-1)}{\left(2x^{2}\right)^{n}}}\right)\\[6pt]&={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\sum _{n=0}^{\infty }(-1)^{n}{\frac {(2n-1)!!}{\left(2x^{2}\right)^{n}}},\end{aligned}}}where(2n− 1)!!is thedouble factorialof(2n− 1), which is the product of all odd numbers up to(2n− 1). This series diverges for every finitex, and its meaning as asymptotic expansion is that for any integerN≥ 1one haserfcx=e−x2xπ∑n=0N−1(−1)n(2n−1)!!(2x2)n+RN(x){\displaystyle \operatorname {erfc} x={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\sum _{n=0}^{N-1}(-1)^{n}{\frac {(2n-1)!!}{\left(2x^{2}\right)^{n}}}+R_{N}(x)}where the remainder isRN(x):=(−1)N(2N−1)!!π⋅2N−1∫x∞t−2Ne−t2dt,{\displaystyle R_{N}(x):={\frac {(-1)^{N}\,(2N-1)!!}{{\sqrt {\pi }}\cdot 2^{N-1}}}\int _{x}^{\infty }t^{-2N}e^{-t^{2}}\,\mathrm {d} t,}which follows easily by induction, writinge−t2=−12tddte−t2{\displaystyle e^{-t^{2}}=-{\frac {1}{2t}}\,{\frac {\mathrm {d} }{\mathrm {d} t}}e^{-t^{2}}}and integrating by parts.
The asymptotic behavior of the remainder term, inLandau notation, isRN(x)=O(x−(1+2N)e−x2){\displaystyle R_{N}(x)=O\left(x^{-(1+2N)}e^{-x^{2}}\right)}asx→ ∞. This can be found byRN(x)∝∫x∞t−2Ne−t2dt=e−x2∫0∞(t+x)−2Ne−t2−2txdt≤e−x2∫0∞x−2Ne−2txdt∝x−(1+2N)e−x2.{\displaystyle R_{N}(x)\propto \int _{x}^{\infty }t^{-2N}e^{-t^{2}}\,\mathrm {d} t=e^{-x^{2}}\int _{0}^{\infty }(t+x)^{-2N}e^{-t^{2}-2tx}\,\mathrm {d} t\leq e^{-x^{2}}\int _{0}^{\infty }x^{-2N}e^{-2tx}\,\mathrm {d} t\propto x^{-(1+2N)}e^{-x^{2}}.}For large enough values ofx, only the first few terms of this asymptotic expansion are needed to obtain a good approximation oferfcx(while for not too large values ofx, the above Taylor expansion at 0 provides a very fast convergence).
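Truncating the asymptotic series gives good accuracy for large x, as a quick numerical check against `math.erfc` shows (a sketch; the function name and truncation point are ours):

```python
import math

def erfc_asymptotic(x, N=5):
    """Truncated asymptotic series for erfc(x), useful for large real x."""
    # math.prod(range(1, 2n, 2)) is the double factorial (2n-1)!!; empty for n = 0.
    s = sum((-1) ** n * math.prod(range(1, 2 * n, 2)) / (2.0 * x * x) ** n
            for n in range(N))
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * s

x = 3.0
print(abs(erfc_asymptotic(x) - math.erfc(x)))  # below 1e-7
```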
Acontinued fractionexpansion of the complementary error function was found byLaplace:[10][11]erfcz=zπe−z21z2+a11+a2z2+a31+⋯,am=m2.{\displaystyle \operatorname {erfc} z={\frac {z}{\sqrt {\pi }}}e^{-z^{2}}{\cfrac {1}{z^{2}+{\cfrac {a_{1}}{1+{\cfrac {a_{2}}{z^{2}+{\cfrac {a_{3}}{1+\dotsb }}}}}}}},\qquad a_{m}={\frac {m}{2}}.}
The inversefactorial series:erfcz=e−z2πz∑n=0∞(−1)nQn(z2+1)n¯=e−z2πz[1−121(z2+1)+141(z2+1)(z2+2)−⋯]{\displaystyle {\begin{aligned}\operatorname {erfc} z&={\frac {e^{-z^{2}}}{{\sqrt {\pi }}\,z}}\sum _{n=0}^{\infty }{\frac {\left(-1\right)^{n}Q_{n}}{{\left(z^{2}+1\right)}^{\bar {n}}}}\\[1ex]&={\frac {e^{-z^{2}}}{{\sqrt {\pi }}\,z}}\left[1-{\frac {1}{2}}{\frac {1}{(z^{2}+1)}}+{\frac {1}{4}}{\frac {1}{\left(z^{2}+1\right)\left(z^{2}+2\right)}}-\cdots \right]\end{aligned}}}converges forRe(z2) > 0. HereQn=def1Γ(12)∫0∞τ(τ−1)⋯(τ−n+1)τ−12e−τdτ=∑k=0n(12)k¯s(n,k),{\displaystyle {\begin{aligned}Q_{n}&{\overset {\text{def}}{{}={}}}{\frac {1}{\Gamma {\left({\frac {1}{2}}\right)}}}\int _{0}^{\infty }\tau (\tau -1)\cdots (\tau -n+1)\tau ^{-{\frac {1}{2}}}e^{-\tau }\,d\tau \\[1ex]&=\sum _{k=0}^{n}\left({\frac {1}{2}}\right)^{\bar {k}}s(n,k),\end{aligned}}}zndenotes therising factorial, ands(n,k)denotes a signedStirling number of the first kind.[12][13]There also exists a representation by an infinite sum containing thedouble factorial:erfz=2π∑n=0∞(−2)n(2n−1)!!(2n+1)!z2n+1{\displaystyle \operatorname {erf} z={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {(-2)^{n}(2n-1)!!}{(2n+1)!}}z^{2n+1}}
erf x ≈ 1 − 1/(1 + a₁x + a₂x² + a₃x³ + a₄x⁴)⁴, x ≥ 0 (maximum error: 5×10−4)
where a₁ = 0.278393, a₂ = 0.230389, a₃ = 0.000972, a₄ = 0.078108
{\displaystyle \operatorname {erf} x\approx 1-\left(a_{1}t+a_{2}t^{2}+a_{3}t^{3}\right)e^{-x^{2}},\quad t={\frac {1}{1+px}},\qquad x\geq 0} (maximum error: 2.5×10−5)
where p = 0.47047, a₁ = 0.3480242, a₂ = −0.0958798, a₃ = 0.7478556
{\displaystyle \operatorname {erf} x\approx 1-{\frac {1}{\left(1+a_{1}x+a_{2}x^{2}+\cdots +a_{6}x^{6}\right)^{16}}},\qquad x\geq 0} (maximum error: 3×10−7)
where a₁ = 0.0705230784, a₂ = 0.0422820123, a₃ = 0.0092705272, a₄ = 0.0001520143, a₅ = 0.0002765672, a₆ = 0.0000430638
{\displaystyle \operatorname {erf} x\approx 1-\left(a_{1}t+a_{2}t^{2}+\cdots +a_{5}t^{5}\right)e^{-x^{2}},\quad t={\frac {1}{1+px}}} (maximum error: 1.5×10−7)
where p = 0.3275911, a₁ = 0.254829592, a₂ = −0.284496736, a₃ = 1.421413741, a₄ = −1.453152027, a₅ = 1.061405429
All of these approximations are valid for x ≥ 0. To use these approximations for negative x, use the fact that erf x is an odd function, so erf x = −erf(−x).
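The last approximation above (maximum error 1.5×10−7) is straightforward to implement, using the oddness of erf to cover negative arguments. A sketch (the function name is ours):

```python
import math

def erf_approx(x):
    """Rational approximation to erf with the five coefficients listed above."""
    sign = -1.0 if x < 0 else 1.0   # erf is odd: erf(x) = -erf(-x)
    x = abs(x)
    p = 0.3275911
    a = (0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429)
    t = 1.0 / (1.0 + p * x)
    poly = sum(coef * t ** (i + 1) for i, coef in enumerate(a))
    return sign * (1.0 - poly * math.exp(-x * x))

print(abs(erf_approx(1.0) - math.erf(1.0)))  # within the stated 1.5e-7 bound
```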
This approximation can be inverted to obtain an approximation for the inverse error function:erf−1x≈sgnx⋅(2πa+ln(1−x2)2)2−ln(1−x2)a−(2πa+ln(1−x2)2).{\displaystyle \operatorname {erf} ^{-1}x\approx \operatorname {sgn} x\cdot {\sqrt {{\sqrt {\left({\frac {2}{\pi a}}+{\frac {\ln \left(1-x^{2}\right)}{2}}\right)^{2}-{\frac {\ln \left(1-x^{2}\right)}{a}}}}-\left({\frac {2}{\pi a}}+{\frac {\ln \left(1-x^{2}\right)}{2}}\right)}}.}
The complementary error function, denoted erfc, is defined as
{\displaystyle {\begin{aligned}\operatorname {erfc} x&=1-\operatorname {erf} x\\[5pt]&={\frac {2}{\sqrt {\pi }}}\int _{x}^{\infty }e^{-t^{2}}\,\mathrm {d} t\\[5pt]&=e^{-x^{2}}\operatorname {erfcx} x,\end{aligned}}}
which also defines erfcx, the scaled complementary error function[26] (which can be used instead of erfc to avoid arithmetic underflow[26][27]). Another form of erfc x for x ≥ 0 is known as Craig's formula, after its discoverer:[28]
{\displaystyle \operatorname {erfc} (x\mid x\geq 0)={\frac {2}{\pi }}\int _{0}^{\frac {\pi }{2}}\exp \left(-{\frac {x^{2}}{\sin ^{2}\theta }}\right)\,\mathrm {d} \theta .}
This expression is valid only for positive values of x, but it can be used in conjunction with erfc x = 2 − erfc(−x) to obtain erfc(x) for negative values. This form is advantageous in that the range of integration is fixed and finite. An extension of this expression for the erfc of the sum of two non-negative variables is as follows:[29]
{\displaystyle \operatorname {erfc} (x+y\mid x,y\geq 0)={\frac {2}{\pi }}\int _{0}^{\frac {\pi }{2}}\exp \left(-{\frac {x^{2}}{\sin ^{2}\theta }}-{\frac {y^{2}}{\cos ^{2}\theta }}\right)\,\mathrm {d} \theta .}
The imaginary error function, denoted erfi, is defined as
{\displaystyle {\begin{aligned}\operatorname {erfi} x&=-i\operatorname {erf} ix\\[5pt]&={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{t^{2}}\,\mathrm {d} t\\[5pt]&={\frac {2}{\sqrt {\pi }}}e^{x^{2}}D(x),\end{aligned}}}
where D(x) is the Dawson function (which can be used instead of erfi to avoid arithmetic overflow[26]).
Despite the name "imaginary error function", erfi x is real when x is real.
When the error function is evaluated for arbitrary complex arguments z, the resulting complex error function is usually discussed in scaled form as the Faddeeva function:
{\displaystyle w(z)=e^{-z^{2}}\operatorname {erfc} (-iz)=\operatorname {erfcx} (-iz).}
The error function is essentially identical to the standard normal cumulative distribution function, denoted Φ, also named norm(x) by some software languages[citation needed], as they differ only by scaling and translation. Indeed,
{\displaystyle {\begin{aligned}\Phi (x)&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x}e^{\tfrac {-t^{2}}{2}}\,\mathrm {d} t\\[6pt]&={\frac {1}{2}}\left(1+\operatorname {erf} {\frac {x}{\sqrt {2}}}\right)\\[6pt]&={\frac {1}{2}}\operatorname {erfc} \left(-{\frac {x}{\sqrt {2}}}\right)\end{aligned}}}
or rearranged for erf and erfc:
{\displaystyle {\begin{aligned}\operatorname {erf} (x)&=2\Phi {\left(x{\sqrt {2}}\right)}-1\\[6pt]\operatorname {erfc} (x)&=2\Phi {\left(-x{\sqrt {2}}\right)}\\&=2\left(1-\Phi {\left(x{\sqrt {2}}\right)}\right).\end{aligned}}}
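The scaling relation between Φ and erf can be verified numerically with the standard library's `statistics.NormalDist` (a small sketch):

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal cumulative distribution function

x = 0.7
lhs = Phi(x)
rhs = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
print(abs(lhs - rhs))  # agrees to floating-point precision
```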
Consequently, the error function is also closely related to the Q-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function as
{\displaystyle {\begin{aligned}Q(x)&={\frac {1}{2}}-{\frac {1}{2}}\operatorname {erf} {\frac {x}{\sqrt {2}}}\\&={\frac {1}{2}}\operatorname {erfc} {\frac {x}{\sqrt {2}}}.\end{aligned}}}
The inverse of Φ is known as the normal quantile function, or probit function, and may be expressed in terms of the inverse error function as
{\displaystyle \operatorname {probit} (p)=\Phi ^{-1}(p)={\sqrt {2}}\operatorname {erf} ^{-1}(2p-1)=-{\sqrt {2}}\operatorname {erfc} ^{-1}(2p).}
The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics.
The error function is a special case of the Mittag-Leffler function, and can also be expressed as a confluent hypergeometric function (Kummer's function):
{\displaystyle \operatorname {erf} x={\frac {2x}{\sqrt {\pi }}}M\left({\tfrac {1}{2}},{\tfrac {3}{2}},-x^{2}\right).}
It has a simple expression in terms of the Fresnel integral.[further explanation needed]
In terms of the regularized gamma function P and the incomplete gamma function,
{\displaystyle \operatorname {erf} x=\operatorname {sgn} x\cdot P\left({\tfrac {1}{2}},x^{2}\right)={\frac {\operatorname {sgn} x}{\sqrt {\pi }}}\gamma {\left({\tfrac {1}{2}},x^{2}\right)}.}
Here sgn x is the sign function.
The iterated integrals of the complementary error function are defined by[30]inerfcz=∫z∞in−1erfcζdζi0erfcz=erfczi1erfcz=ierfcz=1πe−z2−zerfczi2erfcz=14(erfcz−2zierfcz){\displaystyle {\begin{aligned}i^{n}\!\operatorname {erfc} z&=\int _{z}^{\infty }i^{n-1}\!\operatorname {erfc} \zeta \,\mathrm {d} \zeta \\[6pt]i^{0}\!\operatorname {erfc} z&=\operatorname {erfc} z\\i^{1}\!\operatorname {erfc} z&=\operatorname {ierfc} z={\frac {1}{\sqrt {\pi }}}e^{-z^{2}}-z\operatorname {erfc} z\\i^{2}\!\operatorname {erfc} z&={\tfrac {1}{4}}\left(\operatorname {erfc} z-2z\operatorname {ierfc} z\right)\\\end{aligned}}}
The general recurrence formula is2n⋅inerfcz=in−2erfcz−2z⋅in−1erfcz{\displaystyle 2n\cdot i^{n}\!\operatorname {erfc} z=i^{n-2}\!\operatorname {erfc} z-2z\cdot i^{n-1}\!\operatorname {erfc} z}
They have the power seriesinerfcz=∑j=0∞(−z)j2n−jj!Γ(1+n−j2),{\displaystyle i^{n}\!\operatorname {erfc} z=\sum _{j=0}^{\infty }{\frac {(-z)^{j}}{2^{n-j}j!\,\Gamma \left(1+{\frac {n-j}{2}}\right)}},}from which follow the symmetry propertiesi2merfc(−z)=−i2merfcz+∑q=0mz2q22(m−q)−1(2q)!(m−q)!{\displaystyle i^{2m}\!\operatorname {erfc} (-z)=-i^{2m}\!\operatorname {erfc} z+\sum _{q=0}^{m}{\frac {z^{2q}}{2^{2(m-q)-1}(2q)!(m-q)!}}}andi2m+1erfc(−z)=i2m+1erfcz+∑q=0mz2q+122(m−q)−1(2q+1)!(m−q)!.{\displaystyle i^{2m+1}\!\operatorname {erfc} (-z)=i^{2m+1}\!\operatorname {erfc} z+\sum _{q=0}^{m}{\frac {z^{2q+1}}{2^{2(m-q)-1}(2q+1)!(m-q)!}}.}
https://en.wikipedia.org/wiki/Error_function
A trailing zero is any 0 digit that comes after the last nonzero digit in a number string in positional notation. For digits before the decimal point, the trailing zeros between the decimal point and the last nonzero digit are necessary for conveying the magnitude of a number and cannot be omitted (e.g. 100), while leading zeros – zeros occurring before the decimal point and before the first nonzero digit – can be omitted without changing the meaning (e.g. 001). Any zeros appearing to the right of the last nonzero digit after the decimal point do not affect its value (e.g. 0.100). Thus, decimal notation often does not use trailing zeros that come after the decimal point. However, trailing zeros that come after the decimal point may be used to indicate the number of significant figures, for example in a measurement, and in that context, "simplifying" a number by removing trailing zeros would be incorrect.
The number of trailing zeros in a non-zero base-b integer n equals the exponent of the highest power of b that divides n. For example, 14000 has three trailing zeros and is therefore divisible by 1000 = 10³, but not by 10⁴. This property is useful when looking for small factors in integer factorization. Some computer architectures have a count trailing zeros operation in their instruction set for efficiently determining the number of trailing zero bits in a machine word.
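The definition above amounts to repeatedly dividing by the base until the remainder is nonzero. A minimal sketch (the function name is ours):

```python
def trailing_zeros(n, base=10):
    """Exponent of the highest power of `base` dividing the nonzero integer n."""
    count = 0
    while n % base == 0:
        n //= base
        count += 1
    return count

print(trailing_zeros(14000))   # 3
print(trailing_zeros(88, 2))   # 3, since 88 = 0b1011000
```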
In pharmacy, trailing zeros are omitted from dose values to prevent misreading.
The number of trailing zeros in the decimal representation of n!, the factorial of a non-negative integer n, is simply the multiplicity of the prime factor 5 in n!. This can be determined with this special case of de Polignac's formula:[1]

{\displaystyle f(n)=\sum _{i=1}^{k}\left\lfloor {\frac {n}{5^{i}}}\right\rfloor =\left\lfloor {\frac {n}{5}}\right\rfloor +\left\lfloor {\frac {n}{25}}\right\rfloor +\left\lfloor {\frac {n}{125}}\right\rfloor +\cdots +\left\lfloor {\frac {n}{5^{k}}}\right\rfloor ,}

where k must be chosen such that

{\displaystyle 5^{k+1}>n,}

more precisely

{\displaystyle 5^{k}\leq n<5^{k+1},}

and {\displaystyle \lfloor a\rfloor } denotes the floor function applied to a. For n = 0, 1, 2, ... this is

0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 6, ...
For example, 5³ > 32, and therefore 32! = 263130836933693530167218012160000000 ends in ⌊32/5⌋ + ⌊32/25⌋ = 6 + 1 = 7 zeros. If n < 5, the inequality is satisfied by k = 0; in that case the sum is empty, giving the answer 0.
The formula actually counts the number of factors 5 in n!, but since there are at least as many factors 2, this is equivalent to the number of factors 10, each of which gives one more trailing zero.
Defining

{\displaystyle q_{i}=\left\lfloor {\frac {n}{5^{i}}}\right\rfloor ,}

the following recurrence relation holds:

{\displaystyle q_{0}=n,\qquad q_{i+1}=\left\lfloor {\frac {q_{i}}{5}}\right\rfloor .}

This can be used to simplify the computation of the terms of the summation, which can be stopped as soon as qᵢ reaches zero. The condition 5ᵏ⁺¹ > n is equivalent to qₖ₊₁ = 0.
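The recurrence gives a compact way to count the trailing zeros of n!; a minimal sketch, with illustrative naming:

```python
import math

def factorial_trailing_zeros(n):
    """Trailing zeros of n!, counting factors of 5 via q_{i+1} = q_i // 5."""
    zeros, q = 0, n
    while q:
        q //= 5        # next q_i in the recurrence
        zeros += q     # adds floor(n / 5^i) to the running sum
    return zeros
```

The result can be cross-checked directly against the decimal expansion of a small factorial.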
|
https://en.wikipedia.org/wiki/Trailing_zero
|
The aviation transponder interrogation modes are the standard formats of pulsed sequences from an interrogating Secondary Surveillance Radar (SSR) or similar Automatic Dependent Surveillance-Broadcast (ADS-B) system. The reply format is usually referred to as a "code" from a transponder, which is used to determine detailed information from a suitably equipped aircraft.
In its simplest form, a "Mode" or interrogation type is generally determined by pulse spacing between two or more interrogation pulses. Various modes exist from Mode 1 to 5 for military use, to Mode A, B, C and D, and Mode S for civilian use.
Several different RF communication protocols have been standardized for aviation transponders:
Mode A and Mode C are implemented using the air traffic control radar beacon system as the physical layer, whereas Mode S is implemented as a standalone backwards-compatible protocol. ADS-B can operate using Mode S-ES or the Universal Access Transceiver as its transport layer.[3]
When the transponder receives an interrogation request, it broadcasts the configured transponder code (or "squawk code"). This is referred to as "Mode 3A" or more commonly, Mode A. A separate type of response called "Ident" can be initiated from the airplane by pressing a button on the transponder control panel.
A Mode A transponder code response can be augmented by a pressure altitude response, which is then referred to as Mode C operation.[2] Pressure altitude is obtained from an altitude encoder, either a separate self-contained unit mounted in the aircraft or an integral part of the transponder. The altitude information is passed to the transponder using a modified form of the Gray code called a Gillham code.
Mode A and C responses are used to help air traffic controllers identify a particular aircraft's position and altitude on a radar screen, in order to maintain separation.[2]
Another mode called Mode S (Select) is designed to help avoid over-interrogation of the transponder (having many radars in busy areas) and to allow automatic collision avoidance. Mode S transponders are compatible with Mode A and Mode C Secondary Surveillance Radar (SSR) systems.[2] This is the type of transponder that is used for TCAS or ACAS II (Airborne Collision Avoidance System) functions, and is required to implement the extended squitter broadcast, one means of participating in ADS-B systems. A TCAS-equipped aircraft must have a Mode S transponder, but not all Mode S transponders include TCAS. Likewise, a Mode S transponder is required to implement 1090ES extended squitter ADS-B Out, but there are other ways to implement ADS-B Out (in the U.S. and China). The format of Mode S messages is documented in ICAO Doc 9688, Manual on Mode S Specific Services.[4]
Upon interrogation, Mode S transponders transmit information about the aircraft to the SSR system, to TCAS receivers on board aircraft and to the ADS-B SSR system. This information includes the call sign of the aircraft and/or the aircraft's permanent ICAO 24-bit address (which is represented for human interface purposes as six hexadecimal characters). One of the hidden features of Mode S transponders is that they are backwards compatible; an aircraft equipped with a Mode S transponder can still be used to send replies to Mode A or C interrogations. This feature can be activated by a specific type of interrogation sequence called inter-mode.[citation needed]
Mode S equipped aircraft are assigned a unique ICAO 24-bit address or (informally) Mode-S "hex code" upon national registration, and this address becomes a part of the aircraft's Certificate of Registration. Normally, the address is never changed; however, the transponders are reprogrammable and, occasionally, are moved from one aircraft to another (presumably for operational or cost purposes), either by maintenance or by changing the appropriate entry in the aircraft's flight management system.
There are 16,777,214 (2²⁴ − 2) unique ICAO 24-bit addresses (hex codes) available.[5][6] The ICAO 24-bit address can be represented in three digital formats: hexadecimal, octal, and binary. These addresses are used to provide a unique identity normally allocated to an individual aircraft or registration.
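The three representations are plain base conversions of the same 24-bit integer; a minimal sketch, where the address used in the usage example is an arbitrary illustrative value, not a real allocation:

```python
def icao_formats(address):
    """Render a 24-bit ICAO address in hexadecimal, octal, and binary."""
    if not 0 < address < 2**24 - 1:   # all-zero and all-one codes are not allocatable
        raise ValueError("not a valid 24-bit address")
    return {
        "hexadecimal": format(address, "06X"),   # 6 hex digits
        "octal": format(address, "08o"),         # 8 octal digits
        "binary": format(address, "024b"),       # 24 bits
    }

example = icao_formats(0xABC123)  # hypothetical address for illustration
```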
As an example, following is the ICAO 24-bit address assigned to the Shuttle Carrier Aircraft with the registration N905NA:[7][8]
These are all the same 24-bit address of the Shuttle Carrier Aircraft, represented in different numeral systems (see above).
An issue with Mode S transponders arises when pilots enter the wrong flight identity code into the Mode S transponder.[9] In this case, the capabilities of ACAS II and Mode S SSR can be degraded.[10]
In 2009 the ICAO published an "extended" form of Mode S with more message formats to use with ADS-B;[11] it was further refined in 2012.[12] Countries implementing ADS-B can require the use of either the extended squitter mode of a suitably-equipped Mode S transponder, or the UAT transponder on 978 MHz.
Mode-S data has the potential to contain the aircraft's movement vectors in relation to the Earth and its atmosphere. The difference between these two vectors is the wind acting on the aircraft.[13] Deriving winds (and temperatures from the Mach number and true airspeed) was developed simultaneously by Siebren de Haan of the KNMI and Edmund Stone of the Met Office.[14] Over the UK the number of aircraft observations has increased from approximately 7500 per day from AMDAR to over 10 million per day. The Met Office together with KNMI and FlightRadar24 are actively developing an expanded capability including data from every continent other than Antarctica.[15]
|
https://en.wikipedia.org/wiki/Aviation_transponder_interrogation_modes
|
In physics, Minkowski space (or Minkowski spacetime) (/mɪŋˈkɔːfski, -ˈkɒf-/[1]) is the main mathematical description of spacetime in the absence of gravitation. It combines inertial space and time manifolds into a four-dimensional model.
The model helps show how a spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded. Mathematician Hermann Minkowski developed it from the work of Hendrik Lorentz, Henri Poincaré, and others, and said it "was grown on experimental physical grounds".
Minkowski space is closely associated with Einstein's theories of special relativity and general relativity and is the most common mathematical structure by which special relativity is formalized. While the individual components in Euclidean space and time might differ due to length contraction and time dilation, in Minkowski spacetime, all frames of reference will agree on the total interval in spacetime between events.[nb 1] Minkowski space differs from four-dimensional Euclidean space insofar as it treats time differently from the three spatial dimensions.
In 3-dimensional Euclidean space, the isometry group (maps preserving the regular Euclidean distance) is the Euclidean group. It is generated by rotations, reflections and translations. When time is appended as a fourth dimension, the further transformations of translations in time and Lorentz boosts are added, and the group of all these transformations is called the Poincaré group. Minkowski's model follows special relativity, where motion causes time dilation changing the scale applied to the frame in motion and shifts the phase of light.
Minkowski space is a pseudo-Euclidean space equipped with an isotropic quadratic form called the spacetime interval or the Minkowski norm squared. An event in Minkowski space for which the spacetime interval is zero is on the null cone of the origin, called the light cone in Minkowski space. Using the polarization identity the quadratic form is converted to a symmetric bilinear form called the Minkowski inner product, though it is not a geometric inner product. Another misnomer is Minkowski metric,[2] but Minkowski space is not a metric space.
The group of transformations for Minkowski space that preserves the spacetime interval (as opposed to the spatial Euclidean distance) is the Lorentz group (as opposed to the Galilean group).
In his second relativity paper in 1905, Henri Poincaré showed[3] how, by taking time to be an imaginary fourth spacetime coordinate ict, where c is the speed of light and i is the imaginary unit, Lorentz transformations can be visualized as ordinary rotations of the four-dimensional Euclidean sphere. The four-dimensional spacetime can be visualized as a four-dimensional space, with each point representing an event in spacetime. The Lorentz transformations can then be thought of as rotations in this four-dimensional space, where the rotation axis corresponds to the direction of relative motion between the two observers and the rotation angle is related to their relative velocity.
To understand this concept, one should consider the coordinates of an event in spacetime represented as a four-vector (t, x, y, z). A Lorentz transformation is represented by a matrix that acts on the four-vector, changing its components. This matrix can be thought of as a rotation matrix in four-dimensional space, which rotates the four-vector around a particular axis while preserving

{\displaystyle x^{2}+y^{2}+z^{2}+(ict)^{2}={\text{constant}}.}
Rotations in planes spanned by two space unit vectors appear in coordinate space as well as in physical spacetime as Euclidean rotations and are interpreted in the ordinary sense. The "rotation" in a plane spanned by a space unit vector and a time unit vector, while formally still a rotation in coordinate space, is a Lorentz boost in physical spacetime with real inertial coordinates. The analogy with Euclidean rotations is only partial since the radius of the sphere is actually imaginary, which turns rotations into rotations in hyperbolic space (see hyperbolic rotation).
This idea, which was mentioned only briefly by Poincaré, was elaborated by Minkowski in a paper in German published in 1908 called "The Fundamental Equations for Electromagnetic Processes in Moving Bodies".[4] He reformulated Maxwell's equations as a symmetrical set of equations in the four variables (x, y, z, ict) combined with redefined vector variables for electromagnetic quantities, and he was able to show directly and very simply their invariance under Lorentz transformation. He also made other important contributions and used matrix notation for the first time in this context.
From his reformulation, he concluded that time and space should be treated equally, and so arose his concept of events taking place in a unified four-dimensional spacetime continuum.
In a further development in his 1908 "Space and Time" lecture,[5] Minkowski gave an alternative formulation of this idea that used a real time coordinate instead of an imaginary one, representing the four variables (x, y, z, t) of space and time in the coordinate form in a four-dimensional real vector space. Points in this space correspond to events in spacetime. In this space, there is a defined light-cone associated with each point, and events not on the light cone are classified by their relation to the apex as spacelike or timelike. It is principally this view of spacetime that is current nowadays, although the older view involving imaginary time has also influenced special relativity.
In the English translation of Minkowski's paper, the Minkowski metric, as defined below, is referred to as the line element. The Minkowski inner product below appears unnamed when referring to orthogonality (which he calls normality) of certain vectors, and the Minkowski norm squared is referred to (somewhat cryptically, perhaps this is translation dependent) as "sum".
Minkowski's principal tool is the Minkowski diagram, and he uses it to define concepts and demonstrate properties of Lorentz transformations (e.g., proper time and length contraction) and to provide geometrical interpretation to the generalization of Newtonian mechanics to relativistic mechanics. For these special topics, see the referenced articles, as the presentation below will be principally confined to the mathematical structure (the Minkowski metric, the quantities derived from it, and the Poincaré group as the symmetry group of spacetime) following from the invariance of the spacetime interval on the spacetime manifold as a consequence of the postulates of special relativity, not to specific application or derivation of the invariance of the spacetime interval. This structure provides the background setting of all present relativistic theories, barring general relativity, for which flat Minkowski spacetime still provides a springboard as curved spacetime is locally Lorentzian.
Minkowski, aware of the fundamental restatement of the theory which he had made, said
The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth, space by itself and time by itself are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.
Though Minkowski took an important step for physics, Albert Einstein saw its limitation:
At a time when Minkowski was giving the geometrical interpretation of special relativity by extending the Euclidean three-space to a quasi-Euclidean four-space that included time, Einstein was already aware that this is not valid, because it excludes the phenomenon of gravitation. He was still far from the study of curvilinear coordinates and Riemannian geometry, and the heavy mathematical apparatus entailed.[6]
For further historical information see references Galison (1979), Corry (1997) and Walter (1999).
Where v is velocity, x, y, and z are Cartesian coordinates in 3-dimensional space, c is the constant representing the universal speed limit, and t is time, the four-dimensional vector v = (ct, x, y, z) = (ct, r) is classified according to the sign of c²t² − r². A vector is timelike if c²t² > r², spacelike if c²t² < r², and null or lightlike if c²t² = r². This can be expressed in terms of the sign of η(v, v), also called scalar product, as well, which depends on the signature. The classification of any vector will be the same in all frames of reference that are related by a Lorentz transformation (but not by a general Poincaré transformation because the origin may then be displaced) because of the invariance of the spacetime interval under Lorentz transformation.
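The sign test above translates directly into code; a minimal sketch, taking the vector as its components (ct, x, y, z) so that c drops out:

```python
def classify(v):
    """Classify a four-vector v = (ct, x, y, z) by the sign of c²t² − r²."""
    ct, x, y, z = v
    s = ct * ct - (x * x + y * y + z * z)   # Minkowski norm squared, (+ − − −)
    if s > 0:
        return "timelike"
    if s < 0:
        return "spacelike"
    return "null"
```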
The set of all null vectors at an event[nb 2] of Minkowski space constitutes the light cone of that event. Given a timelike vector v, there is a worldline of constant velocity associated with it, represented by a straight line in a Minkowski diagram.
Once a direction of time is chosen,[nb 3] timelike and null vectors can be further decomposed into various classes. For timelike vectors, one has
Null vectors fall into three classes:
Together with spacelike vectors, there are 6 classes in all.
An orthonormal basis for Minkowski space necessarily consists of one timelike and three spacelike unit vectors. If one wishes to work with non-orthonormal bases, it is possible to have other combinations of vectors. For example, one can easily construct a (non-orthonormal) basis consisting entirely of null vectors, called a null basis.
Vector fields are called timelike, spacelike, or null if the associated vectors are timelike, spacelike, or null at each point where the field is defined.
Time-like vectors have special importance in the theory of relativity as they correspond to events that are accessible to the observer at (0, 0, 0, 0) with a speed less than that of light. Of most interest are time-like vectors that are similarly directed, i.e. all either in the forward or in the backward cones. Such vectors have several properties not shared by space-like vectors. These arise because both forward and backward cones are convex, whereas the space-like region is not convex.
The scalar product of two time-like vectors u₁ = (t₁, x₁, y₁, z₁) and u₂ = (t₂, x₂, y₂, z₂) is

{\displaystyle \eta (u_{1},u_{2})=u_{1}\cdot u_{2}=c^{2}t_{1}t_{2}-x_{1}x_{2}-y_{1}y_{2}-z_{1}z_{2}.}
Positivity of scalar product: An important property is that the scalar product of two similarly directed time-like vectors is always positive. This can be seen from the reversed Cauchy–Schwarz inequality below. It follows that if the scalar product of two vectors is zero, then at least one of them must be space-like. The scalar product of two space-like vectors can be positive or negative, as can be seen by considering the product of two space-like vectors having orthogonal spatial components and times either of different or the same signs.
Using the positivity property of time-like vectors, it is easy to verify that a linear sum with positive coefficients of similarly directed time-like vectors is also similarly directed time-like (the sum remains within the light cone because of convexity).
The norm of a time-like vector u = (ct, x, y, z) is defined as

{\displaystyle \left\|u\right\|={\sqrt {\eta (u,u)}}={\sqrt {c^{2}t^{2}-x^{2}-y^{2}-z^{2}}}.}
The reversed Cauchy inequality is another consequence of the convexity of either light cone.[7] For two distinct similarly directed time-like vectors u₁ and u₂ this inequality is

{\displaystyle \eta (u_{1},u_{2})>\left\|u_{1}\right\|\left\|u_{2}\right\|}

or algebraically,

{\displaystyle c^{2}t_{1}t_{2}-x_{1}x_{2}-y_{1}y_{2}-z_{1}z_{2}>{\sqrt {\left(c^{2}t_{1}^{2}-x_{1}^{2}-y_{1}^{2}-z_{1}^{2}\right)\left(c^{2}t_{2}^{2}-x_{2}^{2}-y_{2}^{2}-z_{2}^{2}\right)}}.}
From this, the positivity property of the scalar product can be seen.
For two similarly directed time-like vectors u and w, the reversed triangle inequality is[8]

{\displaystyle \left\|u+w\right\|\geq \left\|u\right\|+\left\|w\right\|,}

where the equality holds when the vectors are linearly dependent.
The proof uses the algebraic definition with the reversed Cauchy inequality:[9]

{\displaystyle {\begin{aligned}\left\|u+w\right\|^{2}&=\left\|u\right\|^{2}+2\left(u,w\right)+\left\|w\right\|^{2}\\[5mu]&\geq \left\|u\right\|^{2}+2\left\|u\right\|\left\|w\right\|+\left\|w\right\|^{2}=\left(\left\|u\right\|+\left\|w\right\|\right)^{2}.\end{aligned}}}
The result now follows by taking the square root on both sides.
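Both inequalities are easy to spot-check numerically for similarly directed time-like vectors; a sketch in units c = 1 with the (+ − − −) convention, where the particular vectors are illustrative choices:

```python
import math

def mink(u, v):
    """Minkowski scalar product, signature (+ − − −), units c = 1."""
    return u[0] * v[0] - u[1] * v[1] - u[2] * v[2] - u[3] * v[3]

def norm(u):
    """Minkowski norm; real only for time-like (or null) vectors."""
    return math.sqrt(mink(u, u))

# two forward-directed time-like vectors
u1 = (3.0, 1.0, 0.0, 1.0)
u2 = (2.0, 0.0, 1.0, 0.0)
u_sum = tuple(a + b for a, b in zip(u1, u2))
```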
It is assumed below that spacetime is endowed with a coordinate system corresponding to an inertial frame. This provides an origin, which is necessary for spacetime to be modeled as a vector space. This addition is not required, and more complex treatments analogous to an affine space can remove the extra structure. However, this is not the introductory convention and is not covered here.
For an overview, Minkowski space is a 4-dimensional real vector space equipped with a non-degenerate, symmetric bilinear form on the tangent space at each point in spacetime, here simply called the Minkowski inner product, with metric signature either (+ − − −) or (− + + +). The tangent space at each event is a vector space of the same dimension as spacetime, 4.
In practice, one need not be concerned with the tangent spaces. The vector space structure of Minkowski space allows for the canonical identification of vectors in tangent spaces at points (events) with vectors (points, events) in Minkowski space itself. See e.g. Lee (2003, Proposition 3.8.) or Lee (2012, Proposition 3.13.) These identifications are routinely done in mathematics. They can be expressed formally in Cartesian coordinates as[10]

{\displaystyle {\begin{aligned}\left(x^{0},\,x^{1},\,x^{2},\,x^{3}\right)\ &\leftrightarrow \ \left.x^{0}\mathbf {e} _{0}\right|_{p}+\left.x^{1}\mathbf {e} _{1}\right|_{p}+\left.x^{2}\mathbf {e} _{2}\right|_{p}+\left.x^{3}\mathbf {e} _{3}\right|_{p}\\&\leftrightarrow \ \left.x^{0}\mathbf {e} _{0}\right|_{q}+\left.x^{1}\mathbf {e} _{1}\right|_{q}+\left.x^{2}\mathbf {e} _{2}\right|_{q}+\left.x^{3}\mathbf {e} _{3}\right|_{q}\end{aligned}}}

with basis vectors in the tangent spaces defined by

{\displaystyle \left.\mathbf {e} _{\mu }\right|_{p}=\left.{\frac {\partial }{\partial x^{\mu }}}\right|_{p}{\text{ or }}\mathbf {e} _{0}|_{p}=\left({\begin{matrix}1\\0\\0\\0\end{matrix}}\right){\text{, etc}}.}
Here, p and q are any two events, and the second basis vector identification is referred to as parallel transport. The first identification is the canonical identification of vectors in the tangent space at any point with vectors in the space itself. The appearance of basis vectors in tangent spaces as first-order differential operators is due to this identification. It is motivated by the observation that a geometrical tangent vector can be associated in a one-to-one manner with a directional derivative operator on the set of smooth functions. This is promoted to a definition of tangent vectors in manifolds not necessarily being embedded in Rⁿ. This definition of tangent vectors is not the only possible one, as ordinary n-tuples can be used as well.
A tangent vector at a point p may be defined, here specialized to Cartesian coordinates in Lorentz frames, as 4 × 1 column vectors v associated to each Lorentz frame related by Lorentz transformation Λ such that the vector v in a frame related to some frame by Λ transforms according to v → Λv. This is the same way in which the coordinates xμ transform. Explicitly,

{\displaystyle {\begin{aligned}x'^{\mu }&={\Lambda ^{\mu }}_{\nu }x^{\nu },\\v'^{\mu }&={\Lambda ^{\mu }}_{\nu }v^{\nu }.\end{aligned}}}
This definition is equivalent to the definition given above under a canonical isomorphism.
For some purposes, it is desirable to identify tangent vectors at a point p with displacement vectors at p, which is, of course, admissible by essentially the same canonical identification.[11] The identifications of vectors referred to above in the mathematical setting can correspondingly be found in a more physical and explicitly geometrical setting in Misner, Thorne & Wheeler (1973). They offer various degrees of sophistication (and rigor) depending on which part of the material one chooses to read.
The metric signature refers to which sign the Minkowski inner product yields when given space (spacelike, to be specific, defined further down) and time basis vectors (timelike) as arguments. Further discussion about this theoretically inconsequential but practically necessary choice for purposes of internal consistency and convenience is given below. See also the page treating sign convention in Relativity.
In general, but with several exceptions, mathematicians and general relativists prefer spacelike vectors to yield a positive sign, (− + + +), while particle physicists tend to prefer timelike vectors to yield a positive sign, (+ − − −). Authors covering several areas of physics, e.g. Steven Weinberg and Landau and Lifshitz ((− + + +) and (+ − − −) respectively), stick to one choice regardless of topic. Arguments for the former convention include "continuity" from the Euclidean case corresponding to the non-relativistic limit c → ∞. Arguments for the latter include that minus signs, otherwise ubiquitous in particle physics, go away. Yet other authors, especially of introductory texts, e.g. Kleppner & Kolenkow (1978), do not choose a signature at all, but instead opt to coordinatize spacetime such that the time coordinate (but not time itself!) is imaginary. This removes the need for the explicit introduction of a metric tensor (which may seem like an extra burden in an introductory course), and one need not be concerned with covariant vectors and contravariant vectors (or raising and lowering indices) to be described below. The inner product is instead affected by a straightforward extension of the dot product in R³ to R³ × C. This works in the flat spacetime of special relativity, but not in the curved spacetime of general relativity; see Misner, Thorne & Wheeler (1973, Box 2.1, Farewell to ict) (who, by the way, use (− + + +)). MTW also argues that it hides the true indefinite nature of the metric and the true nature of Lorentz boosts, which are not rotations. It also needlessly complicates the use of tools of differential geometry that are otherwise immediately available and useful for geometrical description and calculation – even in the flat spacetime of special relativity, e.g. of the electromagnetic field.
Mathematically associated with the bilinear form is a tensor of type (0,2) at each point in spacetime, called the Minkowski metric.[nb 4] The Minkowski metric, the bilinear form, and the Minkowski inner product are all the same object; it is a bilinear function that accepts two (contravariant) vectors and returns a real number. In coordinates, this is the 4×4 matrix representing the bilinear form.
For comparison, in general relativity, a Lorentzian manifold L is likewise equipped with a metric tensor g, which is a nondegenerate symmetric bilinear form on the tangent space TpL at each point p of L. In coordinates, it may be represented by a 4×4 matrix depending on spacetime position. Minkowski space is thus a comparatively simple special case of a Lorentzian manifold. Its metric tensor is in coordinates the same symmetric matrix at every point of M, and its arguments can, per above, be taken as vectors in spacetime itself.
Introducing more terminology (but not more structure), Minkowski space is thus a pseudo-Euclidean space with total dimension n = 4 and signature (1, 3) or (3, 1). Elements of Minkowski space are called events. Minkowski space is often denoted R1,3 or R3,1 to emphasize the chosen signature, or just M. It is an example of a pseudo-Riemannian manifold.
Then mathematically, the metric is a bilinear form on an abstract four-dimensional real vector space V, that is,

{\displaystyle \eta :V\times V\rightarrow \mathbf {R} ,}

where η has signature (+, −, −, −), and signature is a coordinate-invariant property of η. The space of bilinear maps forms a vector space which can be identified with {\displaystyle M^{*}\otimes M^{*}}, and η may be equivalently viewed as an element of this space. By making a choice of orthonormal basis {\displaystyle \{e_{\mu }\}}, {\displaystyle M:=(V,\eta )} can be identified with the space {\displaystyle \mathbf {R} ^{1,3}:=(\mathbf {R} ^{4},\eta _{\mu \nu })}, where {\displaystyle \eta _{\mu \nu }={\text{diag}}(+1,-1,-1,-1)}. The notation is meant to emphasize the fact that M and {\displaystyle \mathbf {R} ^{1,3}} are not just vector spaces but have added structure.
An interesting example of non-inertial coordinates for (part of) Minkowski spacetime is the Born coordinates. Another useful set of coordinates is the light-cone coordinates.
The Minkowski inner product is not an inner product, since it has non-zero null vectors. Since it is not a definite bilinear form, it is called indefinite.
The Minkowski metric η is the metric tensor of Minkowski space. It is a pseudo-Euclidean metric, or more generally, a constant pseudo-Riemannian metric in Cartesian coordinates. As such, it is a nondegenerate symmetric bilinear form, a type (0, 2) tensor. It accepts two arguments up, vp, vectors in TpM, p ∈ M, the tangent space at p in M. Due to the above-mentioned canonical identification of TpM with M itself, it accepts arguments u, v with both u and v in M.
As a notational convention, vectors v in M, called 4-vectors, are denoted in italics, and not, as is common in the Euclidean setting, with boldface v. The latter is generally reserved for the 3-vector part (to be introduced below) of a 4-vector.
The definition[12]

{\displaystyle u\cdot v=\eta (u,\,v)}

yields an inner product-like structure on M, previously and also henceforth called the Minkowski inner product, similar to the Euclidean inner product, but it describes a different geometry. It is also called the relativistic dot product. If the two arguments are the same,

{\displaystyle u\cdot u=\eta (u,u)\equiv \|u\|^{2}\equiv u^{2},}

the resulting quantity will be called the Minkowski norm squared. The Minkowski inner product satisfies the following properties.
The first two conditions imply bilinearity.
The most important feature of the inner product and norm squared is that these are quantities unaffected by Lorentz transformations. In fact, it can be taken as the defining property of a Lorentz transformation that it preserves the inner product (i.e. the value of the corresponding bilinear form on two vectors). This approach is taken more generally for all classical groups definable this way in classical group. There, the matrix Φ is identical in the case O(3, 1) (the Lorentz group) to the matrix η to be displayed below.
Minkowski space is constructed so that the speed of light will be the same constant regardless of the reference frame in which it is measured. This property results from the relation of the time axis to a space axis. Two events u and v are orthogonal when the bilinear form is zero for them: η(u, v) = 0.
When u and v are both space-like, then they are perpendicular, but if one is time-like and the other space-like, then the relation is hyperbolic orthogonality. The relation is preserved in a change of reference frames, and consequently the computation of light speed yields a constant result. The change of reference frame is called a Lorentz boost, and in mathematics it is a hyperbolic rotation. Each reference frame is associated with a hyperbolic angle, which is zero for the rest frame in Minkowski space. Such a hyperbolic angle has been labelled rapidity since it is associated with the speed of the frame.
From the second postulate of special relativity, together with homogeneity of spacetime and isotropy of space, it follows that the spacetime interval between two arbitrary events called 1 and 2 is:[13]

{\displaystyle c^{2}\left(t_{1}-t_{2}\right)^{2}-\left(x_{1}-x_{2}\right)^{2}-\left(y_{1}-y_{2}\right)^{2}-\left(z_{1}-z_{2}\right)^{2}.}

This quantity is not consistently named in the literature; sometimes the term interval refers to the square root of the quantity defined here.[14][15]
The invariance of the interval under coordinate transformations between inertial frames follows from the invariance of

{\displaystyle c^{2}t^{2}-x^{2}-y^{2}-z^{2}}

provided the transformations are linear. This quadratic form can be used to define a bilinear form

{\displaystyle u\cdot v=c^{2}t_{1}t_{2}-x_{1}x_{2}-y_{1}y_{2}-z_{1}z_{2}}

via the polarization identity. This bilinear form can in turn be written as

{\displaystyle u\cdot v=u^{\textsf {T}}\,[\eta ]\,v,}

where [η] is a 4 × 4 matrix associated with η. While possibly confusing, it is common practice to denote [η] with just η. The matrix is read off from the explicit bilinear form as

{\displaystyle \eta =\left({\begin{array}{r}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{array}}\right)\!,}

and the bilinear form

{\displaystyle u\cdot v=\eta (u,v),}

with which this section started by assuming its existence, is now identified.
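The defining property can be spot-checked numerically: a Lorentz boost satisfies ΛᵀηΛ = η and therefore leaves u · v unchanged. A minimal sketch with η = diag(1, −1, −1, −1), units c = 1, and an illustrative rapidity value:

```python
import math

ETA = [1, -1, -1, -1]   # diagonal of the Minkowski metric, signature (+ − − −)

def dot(u, v):
    """u · v = uᵀ [η] v for the diagonal metric above."""
    return sum(e * a * b for e, a, b in zip(ETA, u, v))

def boost_x(phi):
    """Lorentz boost along x with rapidity phi, as a 4×4 matrix."""
    ch, sh = math.cosh(phi), math.sinh(phi)
    return [[ch, -sh, 0, 0],
            [-sh, ch, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]

def apply(L, v):
    """Matrix–vector product L v."""
    return tuple(sum(L[i][j] * v[j] for j in range(4)) for i in range(4))
```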
For definiteness and shorter presentation, the signature (+ − − −), matching the matrix displayed above, is adopted below. This choice (or the other possible choice) has no (known) physical implications. The symmetry group preserving the bilinear form with one choice of signature is isomorphic (under the map given here) with the symmetry group preserving the other choice of signature. This means that both choices are in accord with the two postulates of relativity. Switching between the two conventions is straightforward. If the metric tensor η has been used in a derivation, go back to the earliest point where it was used, substitute η for −η, and retrace forward to the desired formula with the desired metric signature.
A standard or orthonormal basis for Minkowski space is a set of four mutually orthogonal vectors {e0, e1, e2, e3} such that{\displaystyle -\eta (e_{0},e_{0})=\eta (e_{1},e_{1})=\eta (e_{2},e_{2})=\eta (e_{3},e_{3})=1}(consistent with the signature (− + + +) adopted above) and for which{\displaystyle \eta (e_{\mu },e_{\nu })=0}when{\textstyle \mu \neq \nu \,.}
These conditions can be written compactly in the formη(eμ,eν)=ημν.{\displaystyle \eta (e_{\mu },e_{\nu })=\eta _{\mu \nu }.}
Relative to a standard basis, the components of a vector v are written (v0, v1, v2, v3), where the Einstein summation convention is used to write v = vμeμ. The component v0 is called the timelike component of v, while the other three components are called the spatial components. The spatial components of a 4-vector v may be identified with a 3-vector v = (v1, v2, v3).
In terms of components, the Minkowski inner product between two vectors v and w is given by
η(v,w)=ημνvμwν=v0w0+v1w1+v2w2+v3w3=vμwμ=vμwμ,{\displaystyle \eta (v,w)=\eta _{\mu \nu }v^{\mu }w^{\nu }=v^{0}w_{0}+v^{1}w_{1}+v^{2}w_{2}+v^{3}w_{3}=v^{\mu }w_{\mu }=v_{\mu }w^{\mu },}andη(v,v)=ημνvμvν=v0v0+v1v1+v2v2+v3v3=vμvμ.{\displaystyle \eta (v,v)=\eta _{\mu \nu }v^{\mu }v^{\nu }=v^{0}v_{0}+v^{1}v_{1}+v^{2}v_{2}+v^{3}v_{3}=v^{\mu }v_{\mu }.}
Here, lowering of an index with the metric was used.
There are many possible choices of standard basis obeying the condition{\displaystyle \eta (e_{\mu },e_{\nu })=\eta _{\mu \nu }.}Any two such bases are related by a Lorentz transformation: either by a change-of-basis matrix{\displaystyle \Lambda _{\nu }^{\mu }}, a real 4 × 4 matrix satisfying{\displaystyle \Lambda _{\rho }^{\mu }\eta _{\mu \nu }\Lambda _{\sigma }^{\nu }=\eta _{\rho \sigma },}or by Λ, a linear map on the abstract vector space satisfying, for any pair of vectors u, v,{\displaystyle \eta (\Lambda u,\Lambda v)=\eta (u,v).}
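The matrix condition can be spot-checked numerically (a sketch in plain Python; the boost velocity β = 0.6 is an arbitrary illustrative choice): a standard boost along x satisfies ΛᵀηΛ = η up to rounding, and the same check works with either overall sign of η.

```python
import math

ETA = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]

def matmul(A, B):
    """Product of two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def boost_x(beta):
    """Lorentz boost along the x-axis with velocity beta (in units of c)."""
    g = 1.0 / math.sqrt(1.0 - beta * beta)   # Lorentz factor gamma
    return [[g, -g * beta, 0, 0],
            [-g * beta, g, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]

L = boost_x(0.6)
M = matmul(transpose(L), matmul(ETA, L))   # Lambda^T eta Lambda
assert all(abs(M[i][j] - ETA[i][j]) < 1e-12 for i in range(4) for j in range(4))
```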
Then if two different bases exist, {e0, e1, e2, e3} and {e′0, e′1, e′2, e′3}, the change of basis can be represented as{\displaystyle e_{\mu }'=e_{\nu }\Lambda _{\mu }^{\nu }}or{\displaystyle e_{\mu }'=\Lambda e_{\mu }}. While it might be tempting to think of{\displaystyle \Lambda _{\nu }^{\mu }}and Λ as the same thing, mathematically, they are elements of different spaces, and act on the space of standard bases from different sides.
Technically, a non-degenerate bilinear form provides a map between a vector space and its dual; in this context, the map is between the tangent spaces of M and the cotangent spaces of M. At a point in M, the tangent and cotangent spaces are dual vector spaces (so the dimension of the cotangent space at an event is also 4). Just as an authentic inner product on a vector space with one argument fixed may, by the Riesz representation theorem, be expressed as the action of a linear functional on the vector space, the same holds for the Minkowski inner product of Minkowski space.[17]
Thus if vμ are the components of a vector in tangent space, then ημνvμ = vν are the components of a vector in the cotangent space (a linear functional). Due to the identification of vectors in tangent spaces with vectors in M itself, this is mostly ignored, and vectors with lower indices are referred to as covariant vectors. In this latter interpretation, the covariant vectors are (almost always implicitly) identified with vectors (linear functionals) in the dual of Minkowski space. The ones with upper indices are contravariant vectors. In the same fashion, the inverse of the map from tangent to cotangent spaces, explicitly given by the inverse of η in matrix representation, can be used to define raising of an index. The components of this inverse are denoted{\displaystyle \eta ^{\mu \nu }}. It happens that{\displaystyle \eta ^{\mu \nu }=\eta _{\mu \nu }}, i.e., the matrix of η is its own inverse. These maps between a vector space and its dual can be denoted η♭ (eta-flat) and η♯ (eta-sharp) by the musical analogy.[18]
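Because η is diagonal with entries ±1, its matrix inverse is itself, so raising and lowering are the same componentwise operation. A minimal sketch (the sample components are arbitrary):

```python
ETA_DIAG = [1, -1, -1, -1]   # diagonal of eta; eta is its own matrix inverse

def lower_index(v):
    """v_mu = eta_{mu nu} v^nu (eta diagonal, so this acts componentwise)."""
    return [ETA_DIAG[i] * v[i] for i in range(4)]

def raise_index(w):
    """v^mu = eta^{mu nu} v_nu; numerically identical because eta^{-1} = eta."""
    return [ETA_DIAG[i] * w[i] for i in range(4)]

v = [5.0, 1.0, -2.0, 3.0]
assert lower_index(v) == [5.0, -1.0, 2.0, -3.0]
assert raise_index(lower_index(v)) == v   # lowering then raising is the identity
```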
Contravariant and covariant vectors are geometrically very different objects. The first can, and should, be thought of as arrows. A linear functional can be characterized by two objects: its kernel, which is a hyperplane passing through the origin, and its norm. Geometrically, then, a covariant vector should be viewed as a family of parallel hyperplanes, with spacing depending on the norm (a bigger norm means smaller spacing), one of them (the kernel) passing through the origin. The mathematical term for a covariant vector is 1-covector or 1-form (though the latter is usually reserved for covector fields).
One quantum mechanical analogy explored in the literature is that of ade Broglie wave(scaled by a factor of Planck's reduced constant) associated with amomentum four-vectorto illustrate how one could imagine a covariant version of a contravariant vector. The inner product of two contravariant vectors could equally well be thought of as the action of the covariant version of one of them on the contravariant version of the other. The inner product is then how many times the arrow pierces the planes.[16]The mathematical reference,Lee (2003), offers the same geometrical view of these objects (but mentions no piercing).
The electromagnetic field tensor is a differential 2-form, whose geometrical description can likewise be found in MTW.
One may, of course, ignore geometrical views altogether (as is the style in e.g.Weinberg (2002)andLandau & Lifshitz 2002) and proceed algebraically in a purely formal fashion. The time-proven robustness of the formalism itself, sometimes referred to asindex gymnastics, ensures that moving vectors around and changing from contravariant to covariant vectors and vice versa (as well as higher order tensors) is mathematically sound. Incorrect expressions tend to reveal themselves quickly.
Given a bilinear formη:M×M→R{\displaystyle \eta :M\times M\rightarrow \mathbf {R} }, the lowered version of a vector can be thought of as the partial evaluation ofη{\displaystyle \eta }, that is, there is an associated partial evaluation mapη(⋅,−):M→M∗;v↦η(v,⋅).{\displaystyle \eta (\cdot ,-):M\rightarrow M^{*};v\mapsto \eta (v,\cdot ).}
The lowered vectorη(v,⋅)∈M∗{\displaystyle \eta (v,\cdot )\in M^{*}}is then the dual mapu↦η(v,u){\displaystyle u\mapsto \eta (v,u)}. Note it does not matter which argument is partially evaluated due to the symmetry ofη{\displaystyle \eta }.
Non-degeneracy is then equivalent to injectivity of the partial evaluation map, or equivalently non-degeneracy indicates that the kernel of the map is trivial. In finite dimension, as is the case here, and noting that the dimension of a finite-dimensional space is equal to the dimension of the dual, this is enough to conclude the partial evaluation map is a linear isomorphism fromM{\displaystyle M}toM∗{\displaystyle M^{*}}. This then allows the definition of the inverse partial evaluation map,η−1:M∗→M,{\displaystyle \eta ^{-1}:M^{*}\rightarrow M,}which allows the inverse metric to be defined asη−1:M∗×M∗→R,η−1(α,β)=η(η−1(α),η−1(β)){\displaystyle \eta ^{-1}:M^{*}\times M^{*}\rightarrow \mathbf {R} ,\eta ^{-1}(\alpha ,\beta )=\eta (\eta ^{-1}(\alpha ),\eta ^{-1}(\beta ))}where the two different usages ofη−1{\displaystyle \eta ^{-1}}can be told apart by the argument each is evaluated on. This can then be used to raise indices. If a coordinate basis is used, the metricη−1is indeed the matrix inverse toη.
The present purpose is to show semi-rigorously how formally one may apply the Minkowski metric to two vectors and obtain a real number, i.e., to display the role of the differentials and how they disappear in a calculation. The setting is that of smooth manifold theory, and concepts such as covector fields and exterior derivatives are introduced.
A full-blown version of the Minkowski metric in coordinates as a tensor field on spacetime has the appearanceημνdxμ⊗dxν=ημνdxμ⊙dxν=ημνdxμdxν.{\displaystyle \eta _{\mu \nu }dx^{\mu }\otimes dx^{\nu }=\eta _{\mu \nu }dx^{\mu }\odot dx^{\nu }=\eta _{\mu \nu }dx^{\mu }dx^{\nu }.}
Explanation: The coordinate differentials are 1-form fields. They are defined as theexterior derivativeof the coordinate functionsxμ. These quantities evaluated at a pointpprovide a basis for the cotangent space atp. Thetensor product(denoted by the symbol⊗) yields a tensor field of type(0, 2), i.e. the type that expects two contravariant vectors as arguments. On the right-hand side, thesymmetric product(denoted by the symbol⊙or by juxtaposition) has been taken. The equality holds since, by definition, the Minkowski metric is symmetric.[19]The notation on the far right is also sometimes used for the related, but different,line element. It isnota tensor. For elaboration on the differences and similarities, seeMisner, Thorne & Wheeler (1973, Box 3.2 and section 13.2.)
Tangentvectors are, in this formalism, given in terms of a basis of differential operators of the first order,∂∂xμ|p,{\displaystyle \left.{\frac {\partial }{\partial x^{\mu }}}\right|_{p},}wherepis an event. This operator applied to a functionfgives thedirectional derivativeoffatpin the direction of increasingxμwithxν,ν≠μfixed. They provide a basis for the tangent space atp.
The exterior derivativedfof a functionfis acovector field, i.e. an assignment of a cotangent vector to each pointp, by definition such thatdf(X)=Xf,{\displaystyle df(X)=Xf,}for eachvector fieldX. A vector field is an assignment of a tangent vector to each pointp. In coordinatesXcan be expanded at each pointpin the basis given by the∂/∂xν|p. Applying this withf=xμ, the coordinate function itself, andX= ∂/∂xν, called acoordinate vector field, one obtainsdxμ(∂∂xν)=∂xμ∂xν=δνμ.{\displaystyle dx^{\mu }\left({\frac {\partial }{\partial x^{\nu }}}\right)={\frac {\partial x^{\mu }}{\partial x^{\nu }}}=\delta _{\nu }^{\mu }.}
Since this relation holds at each pointp, thedxμ|pprovide a basis for the cotangent space at eachpand the basesdxμ|pand∂/∂xν|paredualto each other,dxμ|p(∂∂xν|p)=δνμ.{\displaystyle \left.dx^{\mu }\right|_{p}\left(\left.{\frac {\partial }{\partial x^{\nu }}}\right|_{p}\right)=\delta _{\nu }^{\mu }.}at eachp. Furthermore, one hasα⊗β(a,b)=α(a)β(b){\displaystyle \alpha \otimes \beta (a,b)=\alpha (a)\beta (b)}for general one-forms on a tangent spaceα,βand general tangent vectorsa,b. (This can be taken as a definition, but may also be proved in a more general setting.)
Thus when the metric tensor is fed two vectors fieldsa,b, both expanded in terms of the basis coordinate vector fields, the result isημνdxμ⊗dxν(a,b)=ημνaμbν,{\displaystyle \eta _{\mu \nu }dx^{\mu }\otimes dx^{\nu }(a,b)=\eta _{\mu \nu }a^{\mu }b^{\nu },}whereaμ,bνare thecomponent functionsof the vector fields. The above equation holds at each pointp, and the relation may as well be interpreted as the Minkowski metric atpapplied to two tangent vectors atp.
As mentioned, in a vector space, such as modeling the spacetime of special relativity, tangent vectors can be canonically identified with vectors in the space itself, and vice versa. This means that the tangent spaces at each point are canonically identified with each other and with the vector space itself. This explains how the right-hand side of the above equation can be employed directly, without regard to the spacetime point the metric is to be evaluated and from where (which tangent space) the vectors come from.
This situation changes ingeneral relativity. There one hasg(p)μνdxμ|pdxν|p(a,b)=g(p)μνaμbν,{\displaystyle g(p)_{\mu \nu }\left.dx^{\mu }\right|_{p}\left.dx^{\nu }\right|_{p}(a,b)=g(p)_{\mu \nu }a^{\mu }b^{\nu },}where nowη→g(p), i.e.,gis still a metric tensor but now depending on spacetime and is a solution ofEinstein's field equations. Moreover,a,bmustbe tangent vectors at spacetime pointpand can no longer be moved around freely.
Let x, y ∈ M. Here, in the (− + + +) signature adopted above, x is timelike if η(x, x) < 0, null (lightlike) if η(x, x) = 0, and spacelike if η(x, x) > 0.
Suppose x ∈ M is timelike. Then the simultaneous hyperplane for x is {y : η(x, y) = 0}. Since this hyperplane varies as x varies, there is a relativity of simultaneity in Minkowski space.
A Lorentzian manifold is a generalization of Minkowski space in two ways. The total number of spacetime dimensions is not restricted to be 4 (it may be 2 or more), and a Lorentzian manifold need not be flat, i.e., it allows for curvature.
Complexified Minkowski space is defined asMc=M⊕iM.[20]Its real part is the Minkowski space offour-vectors, such as thefour-velocityand thefour-momentum, which are independent of the choice oforientationof the space. The imaginary part, on the other hand, may consist of four pseudovectors, such asangular velocityandmagnetic moment, which change their direction with a change of orientation. Apseudoscalariis introduced, which also changes sign with a change of orientation. Thus, elements ofMcare independent of the choice of the orientation.
Theinner product-like structure onMcis defined asu⋅v=η(u,v)for anyu,v∈Mc. A relativistic purespinof anelectronor any half spin particle is described byρ∈Mcasρ=u+is, whereuis the four-velocity of the particle, satisfyingu2= 1andsis the 4D spin vector,[21]which is also thePauli–Lubanski pseudovectorsatisfyings2= −1andu⋅s= 0.
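A small numeric illustration (the vectors are an assumed rest-frame example, in the (+ − − −) convention implicit in the conditions u² = 1 and s² = −1): a particle at rest with spin along z satisfies all three constraints.

```python
ETA_DIAG = [1, -1, -1, -1]   # (+ - - -), the convention in which u^2 = 1

def dot(a, b):
    """Minkowski product of two 4-vectors with a diagonal metric."""
    return sum(ETA_DIAG[i] * a[i] * b[i] for i in range(4))

u = [1.0, 0.0, 0.0, 0.0]   # four-velocity of a particle at rest (c = 1)
s = [0.0, 0.0, 0.0, 1.0]   # unit spin vector along z in the rest frame

assert dot(u, u) == 1.0    # u^2 = 1
assert dot(s, s) == -1.0   # s^2 = -1
assert dot(u, s) == 0.0    # u . s = 0
```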
Minkowski space refers to a mathematical formulation in four dimensions. However, the mathematics can easily be extended or simplified to create an analogous generalized Minkowski space in any number of dimensions. Ifn≥ 2,n-dimensional Minkowski space is a vector space of real dimensionnon which there is a constant Minkowski metric of signature(n− 1, 1)or(1,n− 1). These generalizations are used in theories where spacetime is assumed to have more or less than4dimensions.String theoryandM-theoryare two examples wheren> 4. In string theory, there appearsconformal field theorieswith1 + 1spacetime dimensions.
de Sitter space can be formulated as a submanifold of generalized Minkowski space, as can the model spaces of hyperbolic geometry (see below).
As aflat spacetime, the three spatial components of Minkowski spacetime always obey thePythagorean Theorem. Minkowski space is a suitable basis for special relativity, a good description of physical systems over finite distances in systems without significantgravitation. However, in order to take gravity into account, physicists use the theory ofgeneral relativity, which is formulated in the mathematics ofdifferential geometryofdifferential manifolds. When this geometry is used as a model of spacetime, it is known ascurved spacetime.
Even in curved spacetime, Minkowski space is still a good description in aninfinitesimal regionsurrounding any point (barring gravitational singularities).[nb 5]More abstractly, it can be said that in the presence of gravity spacetime is described by a curved 4-dimensionalmanifoldfor which thetangent spaceto any point is a 4-dimensional Minkowski space. Thus, the structure of Minkowski space is still essential in the description of general relativity.
The meaning of the term geometry for Minkowski space depends heavily on the context. Minkowski space is not endowed with Euclidean geometry, nor with any of the generalized Riemannian geometries of intrinsic curvature, such as those of the model spaces in hyperbolic geometry (negative curvature) and the geometry modeled by the sphere (positive curvature). The reason is the indefiniteness of the Minkowski metric. Minkowski space is, in particular, not a metric space and not a Riemannian manifold with a Riemannian metric. However, Minkowski space contains submanifolds endowed with a Riemannian metric yielding hyperbolic geometry.
Model spaces of hyperbolic geometry of low dimension, say 2 or 3,cannotbe isometrically embedded in Euclidean space with one more dimension, i.e.R3{\displaystyle \mathbf {R} ^{3}}orR4{\displaystyle \mathbf {R} ^{4}}respectively, with the Euclidean metricg¯{\displaystyle {\overline {g}}}, preventing easy visualization.[nb 6][22]By comparison, model spaces with positive curvature are just spheres in Euclidean space of one higher dimension.[23]Hyperbolic spacescanbe isometrically embedded in spaces of one more dimension when the embedding space is endowed with the Minkowski metricη{\displaystyle \eta }.
Define{\displaystyle \mathbf {H} _{R}^{1(n)}\subset \mathbf {M} ^{n+1}}to be the upper sheet ({\displaystyle ct>0}) of the hyperboloid{\displaystyle \mathbf {H} _{R}^{1(n)}=\left\{\left(ct,x^{1},\ldots ,x^{n}\right)\in \mathbf {M} ^{n+1}:c^{2}t^{2}-\left(x^{1}\right)^{2}-\cdots -\left(x^{n}\right)^{2}=R^{2},ct>0\right\}}in generalized Minkowski space{\displaystyle \mathbf {M} ^{n+1}}of spacetime dimension{\displaystyle n+1.}This is one of the surfaces of transitivity of the generalized Lorentz group. The induced metric on this submanifold,{\displaystyle h_{R}^{1(n)}=\iota ^{*}\eta ,}the pullback of the Minkowski metric{\displaystyle \eta }under inclusion, is a Riemannian metric. With this metric{\displaystyle \mathbf {H} _{R}^{1(n)}}is a Riemannian manifold. It is one of the model spaces of Riemannian geometry, the hyperboloid model of hyperbolic space. It is a space of constant negative curvature{\displaystyle -1/R^{2}}.[24]The 1 in the upper index refers to an enumeration of the different model spaces of hyperbolic geometry, and the n to its dimension. A{\displaystyle 2(n)}corresponds to the Poincaré disk model, while{\displaystyle 3(n)}corresponds to the Poincaré half-space model of dimension{\displaystyle n.}
In the definition aboveι:HR1(n)→Mn+1{\displaystyle \iota :\mathbf {H} _{R}^{1(n)}\rightarrow \mathbf {M} ^{n+1}}is theinclusion mapand the superscript star denotes thepullback. The present purpose is to describe this and similar operations as a preparation for the actual demonstration thatHR1(n){\displaystyle \mathbf {H} _{R}^{1(n)}}actually is a hyperbolic space.
Behavior of tensors under inclusion: For inclusion maps from a submanifold S into M and a covariant tensor α of order k on M it holds that{\displaystyle \iota ^{*}\alpha \left(X_{1},\,X_{2},\,\ldots ,\,X_{k}\right)=\alpha \left(\iota _{*}X_{1},\,\iota _{*}X_{2},\,\ldots ,\,\iota _{*}X_{k}\right)=\alpha \left(X_{1},\,X_{2},\,\ldots ,\,X_{k}\right),}where X1, X2, …, Xk are vector fields on S. The subscript star denotes the pushforward (to be introduced later), and it is in this special case simply the identity map (as is the inclusion map). The latter equality holds because a tangent space to a submanifold at a point is in a canonical way a subspace of the tangent space of the manifold itself at the point in question. One may simply write{\displaystyle \iota ^{*}\alpha =\alpha |_{S},}meaning (with slight abuse of notation) the restriction of α to accept as input only vectors tangent to some s ∈ S.
Pullback of tensors under general maps:The pullback of a covariantk-tensorα(one taking only contravariant vectors as arguments) under a mapF:M→Nis a linear mapF∗:TF(p)kN→TpkM,{\displaystyle F^{*}\colon T_{F(p)}^{k}N\rightarrow T_{p}^{k}M,}where for any vector spaceV,TkV=V∗⊗V∗⊗⋯⊗V∗⏟ktimes.{\displaystyle T^{k}V=\underbrace {V^{*}\otimes V^{*}\otimes \cdots \otimes V^{*}} _{k{\text{ times}}}.}
It is defined byF∗(α)(X1,X2,…,Xk)=α(F∗X1,F∗X2,…,F∗Xk),{\displaystyle F^{*}(\alpha )\left(X_{1},\,X_{2},\,\ldots ,\,X_{k}\right)=\alpha \left(F_{*}X_{1},\,F_{*}X_{2},\,\ldots ,\,F_{*}X_{k}\right),}where the subscript star denotes thepushforwardof the mapF, andX1,X2, …, Xkare vectors inTpM. (This is in accord with what was detailed about the pullback of the inclusion map. In the general case here, one cannot proceed as simply becauseF∗X1≠X1in general.)
The pushforward of vectors under general maps: Heuristically, pulling back a tensor to p ∈ M from F(p) ∈ N and feeding it vectors residing at p ∈ M is by definition the same as pushing forward the vectors from p ∈ M to F(p) ∈ N and feeding them to the tensor residing at F(p) ∈ N.
Further unwinding the definitions, the pushforwardF∗:TMp→TNF(p)of a vector field under a mapF:M→Nbetween manifolds is defined byF∗(X)f=X(f∘F),{\displaystyle F_{*}(X)f=X(f\circ F),}wherefis a function onN. WhenM=Rm,N=Rnthe pushforward ofFreduces toDF:Rm→Rn, the ordinarydifferential, which is given by theJacobian matrixof partial derivatives of the component functions. The differential is the best linear approximation of a functionFfromRmtoRn. The pushforward is the smooth manifold version of this. It acts between tangent spaces, and is in coordinates represented by the Jacobian matrix of thecoordinate representationof the function.
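The defining relation F_*(X)f = X(f ∘ F) can be checked with finite differences; the map F and the test function f below are toy examples chosen purely for illustration:

```python
def F(p):
    """A toy smooth map F : R^2 -> R^2 (illustrative only)."""
    x, y = p
    return (x * y, x + y * y)

def jacobian(F, p, h=1e-6):
    """Numerical Jacobian of F at p: the matrix of the pushforward F_*."""
    Fp = F(p)
    J = []
    for i in range(len(Fp)):
        row = []
        for j in range(len(p)):
            q = list(p)
            q[j] += h
            row.append((F(q)[i] - Fp[i]) / h)
        J.append(row)
    return J

def pushforward(F, p, X):
    """F_* X at p, computed as the Jacobian acting on X."""
    J = jacobian(F, p)
    return [sum(J[i][j] * X[j] for j in range(len(X))) for i in range(len(J))]

def f(q):                                 # a test function on the target
    u, v = q
    return u * u + 3.0 * v

def directional(g, p, X, h=1e-6):         # X applied to g at p (first order)
    q = [p[k] + h * X[k] for k in range(len(p))]
    return (g(q) - g(p)) / h

p, X = (1.0, 2.0), (1.0, 0.0)
lhs = directional(f, F(p), pushforward(F, p, X))   # (F_* X) f
rhs = directional(lambda q: f(F(q)), p, X)         # X (f o F)
assert abs(lhs - rhs) < 1e-3
```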
The corresponding pullback is thedual mapfrom the dual of the range tangent space to the dual of the domain tangent space, i.e. it is a linear map,F∗:TF(p)∗N→Tp∗M.{\displaystyle F^{*}\colon T_{F(p)}^{*}N\rightarrow T_{p}^{*}M.}
In order to exhibit the metric, it is necessary to pull it back via a suitableparametrization. A parametrization of a submanifoldSof a manifoldMis a mapU⊂Rm→Mwhose range is an open subset ofS. IfShas the same dimension asM, a parametrization is just the inverse of a coordinate mapφ:M→U⊂Rm. The parametrization to be used is the inverse ofhyperbolic stereographic projection. This is illustrated in the figure to the right forn= 2. It is instructive to compare tostereographic projectionfor spheres.
Stereographic projectionσ:HnR→Rnand its inverseσ−1:Rn→HnRare given byσ(τ,x)=u=RxR+τ,σ−1(u)=(τ,x)=(RR2+|u|2R2−|u|2,2R2uR2−|u|2),{\displaystyle {\begin{aligned}\sigma (\tau ,\mathbf {x} )=\mathbf {u} &={\frac {R\mathbf {x} }{R+\tau }},\\\sigma ^{-1}(\mathbf {u} )=(\tau ,\mathbf {x} )&=\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}},{\frac {2R^{2}\mathbf {u} }{R^{2}-|u|^{2}}}\right),\end{aligned}}}where, for simplicity,τ≡ct. The(τ,x)are coordinates onMn+1and theuare coordinates onRn.
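These two formulas can be verified directly in a few lines (a sketch; R and the sample point u are arbitrary choices): the image of σ⁻¹ lands on the upper sheet of the hyperboloid τ² − |x|² = R², and σ undoes σ⁻¹.

```python
R = 2.0

def sigma(tau, x):
    """Hyperbolic stereographic projection: u = R x / (R + tau)."""
    return [R * xi / (R + tau) for xi in x]

def sigma_inv(u):
    """Inverse projection: a point (tau, x) with tau^2 - |x|^2 = R^2, tau > 0."""
    u2 = sum(ui * ui for ui in u)
    tau = R * (R * R + u2) / (R * R - u2)
    x = [2 * R * R * ui / (R * R - u2) for ui in u]
    return tau, x

u = [0.3, -0.5]            # any point with |u| < R (here n = 2)
tau, x = sigma_inv(u)

# The image lies on the upper sheet of the hyperboloid ...
assert tau > 0
assert abs(tau * tau - sum(xi * xi for xi in x) - R * R) < 1e-12
# ... and projecting back recovers u.
assert all(abs(a - b) < 1e-12 for a, b in zip(sigma(tau, x), u))
```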
Let{\displaystyle \mathbf {H} _{R}^{n}=\left\{\left(\tau ,x^{1},\ldots ,x^{n}\right)\in \mathbf {M} :-\tau ^{2}+\left(x^{1}\right)^{2}+\cdots +\left(x^{n}\right)^{2}=-R^{2},\tau >0\right\}}and let{\displaystyle S=(-R,0,\ldots ,0).}
If{\displaystyle P=\left(\tau ,x^{1},\ldots ,x^{n}\right)\in \mathbf {H} _{R}^{n},}then it is geometrically clear that the line through S and P intersects the hyperplane{\displaystyle \left\{\left(\tau ,x^{1},\ldots ,x^{n}\right)\in M:\tau =0\right\}}exactly once, in a point denoted{\displaystyle U=\left(0,u^{1}(P),\ldots ,u^{n}(P)\right)\equiv (0,\mathbf {u} ).}
One has{\displaystyle {\begin{aligned}S+{\overrightarrow {SU}}&=U\Rightarrow {\overrightarrow {SU}}=U-S,\\S+{\overrightarrow {SP}}&=P\Rightarrow {\overrightarrow {SP}}=P-S\end{aligned}}}or{\displaystyle {\begin{aligned}{\overrightarrow {SU}}&=(0,\mathbf {u} )-(-R,\mathbf {0} )=(R,\mathbf {u} ),\\{\overrightarrow {SP}}&=(\tau ,\mathbf {x} )-(-R,\mathbf {0} )=(\tau +R,\mathbf {x} ).\end{aligned}}}
By construction of stereographic projection one hasSU→=λ(τ)SP→.{\displaystyle {\overrightarrow {SU}}=\lambda (\tau ){\overrightarrow {SP}}.}
This leads to the system of equationsR=λ(τ+R),u=λx.{\displaystyle {\begin{aligned}R&=\lambda (\tau +R),\\\mathbf {u} &=\lambda \mathbf {x} .\end{aligned}}}
The first of these is solved forλand one obtains for stereographic projectionσ(τ,x)=u=RxR+τ.{\displaystyle \sigma (\tau ,\mathbf {x} )=\mathbf {u} ={\frac {R\mathbf {x} }{R+\tau }}.}
Next, the inverse σ−1(u) = (τ, x) must be calculated. Use the same considerations as before, but now with{\displaystyle {\begin{aligned}U&=(0,\mathbf {u} ),\\P&=(\tau (\mathbf {u} ),\mathbf {x} (\mathbf {u} )).\end{aligned}}}One gets{\displaystyle {\begin{aligned}\tau &={\frac {R(1-\lambda )}{\lambda }},\\\mathbf {x} &={\frac {\mathbf {u} }{\lambda }},\end{aligned}}}but now with λ depending on u. The condition for P lying in the hyperboloid is{\displaystyle -\tau ^{2}+|\mathbf {x} |^{2}=-R^{2},}or{\displaystyle -{\frac {R^{2}(1-\lambda )^{2}}{\lambda ^{2}}}+{\frac {|\mathbf {u} |^{2}}{\lambda ^{2}}}=-R^{2},}leading to{\displaystyle \lambda ={\frac {R^{2}-|u|^{2}}{2R^{2}}}.}
With thisλ, one obtainsσ−1(u)=(τ,x)=(RR2+|u|2R2−|u|2,2R2uR2−|u|2).{\displaystyle \sigma ^{-1}(\mathbf {u} )=(\tau ,\mathbf {x} )=\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}},{\frac {2R^{2}\mathbf {u} }{R^{2}-|u|^{2}}}\right).}
One hashR1(n)=η|HR1(n)=(dx1)2+⋯+(dxn)2−dτ2{\displaystyle h_{R}^{1(n)}=\eta |_{\mathbf {H} _{R}^{1(n)}}=\left(dx^{1}\right)^{2}+\cdots +\left(dx^{n}\right)^{2}-d\tau ^{2}}and the mapσ−1:Rn→HR1(n);σ−1(u)=(τ(u),x(u))=(RR2+|u|2R2−|u|2,2R2uR2−|u|2).{\displaystyle \sigma ^{-1}:\mathbf {R} ^{n}\rightarrow \mathbf {H} _{R}^{1(n)};\quad \sigma ^{-1}(\mathbf {u} )=(\tau (\mathbf {u} ),\,\mathbf {x} (\mathbf {u} ))=\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}},\,{\frac {2R^{2}\mathbf {u} }{R^{2}-|u|^{2}}}\right).}
The pulled back metric can be obtained by straightforward methods of calculus;(σ−1)∗η|HR1(n)=(dx1(u))2+⋯+(dxn(u))2−(dτ(u))2.{\displaystyle \left.\left(\sigma ^{-1}\right)^{*}\eta \right|_{\mathbf {H} _{R}^{1(n)}}=\left(dx^{1}(\mathbf {u} )\right)^{2}+\cdots +\left(dx^{n}(\mathbf {u} )\right)^{2}-\left(d\tau (\mathbf {u} )\right)^{2}.}
One computes according to the standard rules for computing differentials (though one is really computing the rigorously defined exterior derivatives),{\displaystyle {\begin{aligned}dx^{1}(\mathbf {u} )&=d\left({\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}\right)={\frac {\partial }{\partial u^{1}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{1}+\cdots +{\frac {\partial }{\partial u^{n}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{n}+{\frac {\partial }{\partial \tau }}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}d\tau ,\\&\ \ \vdots \\dx^{n}(\mathbf {u} )&=d\left({\frac {2R^{2}u^{n}}{R^{2}-|u|^{2}}}\right)=\cdots ,\\d\tau (\mathbf {u} )&=d\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}}\right)=\cdots ,\end{aligned}}}and substitutes the results into the right hand side. This yields{\displaystyle \left(\sigma ^{-1}\right)^{*}h_{R}^{1(n)}={\frac {4R^{4}\left[\left(du^{1}\right)^{2}+\cdots +\left(du^{n}\right)^{2}\right]}{\left(R^{2}-|u|^{2}\right)^{2}}}\equiv h_{R}^{2(n)}.}
One has{\displaystyle {\begin{aligned}{\frac {\partial }{\partial u^{1}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{1}&={\frac {2R^{2}\left(R^{2}-|u|^{2}\right)+4R^{2}\left(u^{1}\right)^{2}}{\left(R^{2}-|u|^{2}\right)^{2}}}du^{1},\\{\frac {\partial }{\partial u^{2}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{2}&={\frac {4R^{2}u^{1}u^{2}}{\left(R^{2}-|u|^{2}\right)^{2}}}du^{2},\end{aligned}}}and{\displaystyle {\frac {\partial }{\partial \tau }}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}d\tau =0.}
With this one may write{\displaystyle dx^{1}(\mathbf {u} )={\frac {2R^{2}\left(R^{2}-|u|^{2}\right)du^{1}+4R^{2}u^{1}(\mathbf {u} \cdot d\mathbf {u} )}{\left(R^{2}-|u|^{2}\right)^{2}}},}from which{\displaystyle \left(dx^{1}(\mathbf {u} )\right)^{2}={\frac {4R^{4}\left(R^{2}-|u|^{2}\right)^{2}\left(du^{1}\right)^{2}+16R^{4}\left(R^{2}-|u|^{2}\right)\left(\mathbf {u} \cdot d\mathbf {u} \right)u^{1}du^{1}+16R^{4}\left(u^{1}\right)^{2}\left(\mathbf {u} \cdot d\mathbf {u} \right)^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}.}
Summing this formula one obtains{\displaystyle {\begin{aligned}&\left(dx^{1}(\mathbf {u} )\right)^{2}+\cdots +\left(dx^{n}(\mathbf {u} )\right)^{2}\\={}&{\frac {4R^{4}\left(R^{2}-|u|^{2}\right)^{2}\left[\left(du^{1}\right)^{2}+\cdots +\left(du^{n}\right)^{2}\right]+16R^{4}\left(R^{2}-|u|^{2}\right)(\mathbf {u} \cdot d\mathbf {u} )^{2}+16R^{4}|u|^{2}(\mathbf {u} \cdot d\mathbf {u} )^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}\\={}&{\frac {4R^{4}\left[\left(du^{1}\right)^{2}+\cdots +\left(du^{n}\right)^{2}\right]}{\left(R^{2}-|u|^{2}\right)^{2}}}+R^{2}{\frac {16R^{4}(\mathbf {u} \cdot d\mathbf {u} )^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}.\end{aligned}}}
Similarly, for τ one gets{\displaystyle d\tau =\sum _{i=1}^{n}{\frac {\partial }{\partial u^{i}}}\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}}\right)du^{i}=\sum _{i=1}^{n}{\frac {4R^{3}u^{i}du^{i}}{\left(R^{2}-|u|^{2}\right)^{2}}},}yielding{\displaystyle -d\tau ^{2}=-\left({\frac {4R^{3}\left(\mathbf {u} \cdot d\mathbf {u} \right)}{\left(R^{2}-|u|^{2}\right)^{2}}}\right)^{2}=-R^{2}{\frac {16R^{4}(\mathbf {u} \cdot d\mathbf {u} )^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}.}
Now add this contribution to finally get{\displaystyle \left(\sigma ^{-1}\right)^{*}h_{R}^{1(n)}={\frac {4R^{4}\left[\left(du^{1}\right)^{2}+\cdots +\left(du^{n}\right)^{2}\right]}{\left(R^{2}-|u|^{2}\right)^{2}}}\equiv h_{R}^{2(n)}.}
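The result can also be spot-checked numerically, with no symbolic algebra: push an arbitrary tangent vector V through σ⁻¹ using a finite-difference Jacobian and compare the η-norm of the image with the Poincaré-ball value 4R⁴|V|²/(R² − |u|²)². A sketch (R, u and V are arbitrary choices):

```python
R = 1.5

def sigma_inv(u):
    """Inverse stereographic parametrization, returned as (tau, x1, ..., xn)."""
    u2 = sum(ui * ui for ui in u)
    tau = R * (R * R + u2) / (R * R - u2)
    return [tau] + [2 * R * R * ui / (R * R - u2) for ui in u]

def pushforward(u, V, h=1e-6):
    """(sigma^{-1})_* V at u, via a forward-difference Jacobian."""
    base = sigma_inv(u)
    out = [0.0] * len(base)
    for j, Vj in enumerate(V):
        q = list(u)
        q[j] += h
        shifted = sigma_inv(q)
        for i in range(len(base)):
            out[i] += Vj * (shifted[i] - base[i]) / h
    return out

u = [0.4, 0.2]             # a point in the disk |u| < R (n = 2)
V = [1.0, 2.0]             # an arbitrary tangent vector at u

w = pushforward(u, V)      # components (w_tau, w_x1, w_x2) on the hyperboloid
eta_norm = sum(wi * wi for wi in w[1:]) - w[0] * w[0]   # dx^2 - dtau^2 convention

u2 = sum(ui * ui for ui in u)
V2 = sum(vi * vi for vi in V)
poincare = 4 * R**4 * V2 / (R * R - u2) ** 2

assert abs(eta_norm - poincare) < 1e-3
```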
This last equation shows that the metric on the ball is identical to the Riemannian metric{\displaystyle h_{R}^{2(n)}}in the Poincaré ball model, another standard model of hyperbolic geometry.
The pullback can be computed in a different fashion. By definition,(σ−1)∗hR1(n)(V,V)=hR1(n)((σ−1)∗V,(σ−1)∗V)=η|HR1(n)((σ−1)∗V,(σ−1)∗V).{\displaystyle \left(\sigma ^{-1}\right)^{*}h_{R}^{1(n)}(V,\,V)=h_{R}^{1(n)}\left(\left(\sigma ^{-1}\right)_{*}V,\,\left(\sigma ^{-1}\right)_{*}V\right)=\eta |_{\mathbf {H} _{R}^{1(n)}}\left(\left(\sigma ^{-1}\right)_{*}V,\,\left(\sigma ^{-1}\right)_{*}V\right).}
In coordinates,{\displaystyle \left(\sigma ^{-1}\right)_{*}V=\left(\sigma ^{-1}\right)_{*}\left(V^{i}{\frac {\partial }{\partial u^{i}}}\right)=V^{i}{\frac {\partial x^{j}}{\partial u^{i}}}{\frac {\partial }{\partial x^{j}}}+V^{i}{\frac {\partial \tau }{\partial u^{i}}}{\frac {\partial }{\partial \tau }}=Vx^{j}{\frac {\partial }{\partial x^{j}}}+V\tau {\frac {\partial }{\partial \tau }}.}
One has from the formula for σ−1{\displaystyle {\begin{aligned}Vx^{j}&=V^{i}{\frac {\partial }{\partial u^{i}}}\left({\frac {2R^{2}u^{j}}{R^{2}-|u|^{2}}}\right)={\frac {2R^{2}V^{j}}{R^{2}-|u|^{2}}}+{\frac {4R^{2}u^{j}\langle \mathbf {V} ,\,\mathbf {u} \rangle }{\left(R^{2}-|u|^{2}\right)^{2}}},\quad \left({\text{here }}V|u|^{2}=2\sum _{k=1}^{n}V^{k}u^{k}\equiv 2\langle \mathbf {V} ,\,\mathbf {u} \rangle \right)\\V\tau &=V\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}}\right)={\frac {4R^{3}\langle \mathbf {V} ,\,\mathbf {u} \rangle }{\left(R^{2}-|u|^{2}\right)^{2}}}.\end{aligned}}}
Lastly,{\displaystyle \eta \left(\sigma _{*}^{-1}V,\,\sigma _{*}^{-1}V\right)=\sum _{j=1}^{n}\left(Vx^{j}\right)^{2}-(V\tau )^{2}={\frac {4R^{4}|V|^{2}}{\left(R^{2}-|u|^{2}\right)^{2}}}=h_{R}^{2(n)}(V,\,V),}and the same conclusion is reached.
Artificial consciousness,[1] also known as machine consciousness,[2][3] synthetic consciousness,[4] or digital consciousness,[5] is the consciousness hypothesized to be possible in artificial intelligence.[6] It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience.
The same terminology can be used with the term "sentience" instead of "consciousness" when specifically designating phenomenal consciousness (the ability to feel qualia).[7] Since sentience involves the ability to experience ethically positive or negative (i.e., valenced) mental states, it may justify welfare concerns and legal protection, as with animals.[8]
Some scholars believe that consciousness is generated by the interoperation of various parts of the brain; these mechanisms are labeled the neural correlates of consciousness, or NCC. Some further believe that constructing a system (e.g., a computer system) that can emulate this NCC interoperation would result in a system that is conscious.[9]
As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of "raw feels", "what it is like" or qualia.[10]
Type-identity theorists and other skeptics hold the view that consciousness can be realized only in particular physical systems because consciousness has properties that necessarily depend on physical constitution.[11][12][13][14] In his 2001 article "Artificial Consciousness: Utopia or Real Possibility", Giorgio Buttazzo says that a common objection to artificial consciousness is that, "Working in a fully automated mode, they [the computers] cannot exhibit creativity, unreprogrammation (which means can 'no longer be reprogrammed', from rethinking), emotions, or free will. A computer, like a washing machine, is a slave operated by its components."[15]
For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness.[16]
David Chalmers proposed two thought experiments intended to demonstrate that "functionally isomorphic" systems (those with the same "fine-grained functional organization", i.e., the same information processing) will have qualitatively identical conscious experiences, regardless of whether they are based on biological neurons or digital hardware.[17][18]
The "fading qualia" argument is a reductio ad absurdum thought experiment. It involves replacing, one by one, the neurons of a brain with a functionally identical component, for example based on a silicon chip. Since the original neurons and their silicon counterparts are functionally identical, the brain's information processing should remain unchanged, and the subject would not notice any difference. However, if qualia (such as the subjective experience of bright red) were to fade or disappear, the subject would likely notice this change, which causes a contradiction. Chalmers concludes that the fading qualia hypothesis is impossible in practice, and that the resulting robotic brain, once every neuron is replaced, would remain just as sentient as the original biological brain.[17][19]
Similarly, the "dancing qualia" thought experiment is another reductio ad absurdum argument. It supposes that two functionally isomorphic systems could have different perceptions (for instance, seeing the same object in different colors, like red and blue). It involves a switch that alternates between a chunk of brain that causes the perception of red and a functionally isomorphic silicon chip that causes the perception of blue. Since both perform the same function within the brain, the subject would not notice any change during the switch. Chalmers argues that this would be highly implausible if the qualia were truly switching between red and blue, hence the contradiction. Therefore, he concludes that the equivalent digital system would not only experience qualia, but would perceive the same qualia as the biological system (e.g., seeing the same color).[17][19]
Critics[who?]of artificial sentience object that Chalmers' proposal begs the question in assuming that all mental properties and external connections are already sufficiently captured by abstract causal organization.
In 2022, Google engineer Blake Lemoine made a viral claim that Google's LaMDA chatbot was sentient. Lemoine supplied as evidence the chatbot's humanlike answers to many of his questions; however, the chatbot's behavior was judged by the scientific community as likely a consequence of mimicry, rather than machine sentience. Lemoine's claim was widely derided as ridiculous.[20] However, while philosopher Nick Bostrom states that LaMDA is unlikely to be conscious, he additionally poses the question of "what grounds would a person have for being sure about it?" One would have to have access to unpublished information about LaMDA's architecture, would have to understand how consciousness works, and would then have to figure out how to map the philosophy onto the machine: "(In the absence of these steps), it seems like one should be maybe a little bit uncertain. [...] there could well be other systems now, or in the relatively near future, that would start to satisfy the criteria."[21]
Kristina Šekrst cautions that anthropomorphic terms such as "hallucination" can obscure important ontological differences between artificial and human cognition. While LLMs may produce human-like outputs, she argues that this does not justify ascribing mental states or consciousness to them. Instead, she advocates for an epistemological framework (such as reliabilism) that recognizes the distinct nature of AI knowledge production.[22] She suggests that apparent understanding in LLMs may be a sophisticated form of AI hallucination. She also questions what would happen if an LLM were trained without any mention of consciousness.[23]
David Chalmers argued in 2023 that today's LLMs display impressive conversational and general intelligence abilities, but are likely not yet conscious, as they lack some features that may be necessary, such as recurrent processing, a global workspace, and unified agency. Nonetheless, he considers that non-biological systems can be conscious, and suggested that future, extended models (LLM+s) incorporating these elements might eventually meet the criteria for consciousness, raising both profound scientific questions and significant ethical challenges.[24]
Qualia, or phenomenological consciousness, is an inherently first-person phenomenon. Because of that, and because of the lack of an empirical definition of sentience, directly measuring it may be impossible. Although systems may display numerous behaviors correlated with sentience, determining whether a system is sentient is known as the hard problem of consciousness. In the case of AI, there is the additional difficulty that the AI may be trained to act like a human, or incentivized to appear sentient, which makes behavioral markers of sentience less reliable.[25][26] Additionally, some chatbots have been trained to say they are not conscious.[27]
A well-known method for testing machine intelligence is the Turing test, which assesses the ability to have a human-like conversation. But passing the Turing test does not indicate that an AI system is sentient, as the AI may simply mimic human behavior without having the associated feelings.[28]
In 2014, Victor Argonov suggested a non-Turing test for machine sentience based on a machine's ability to produce philosophical judgments.[29] He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, not to refute, the existence of consciousness. A positive result proves that the machine is conscious, but a negative result proves nothing. For example, the absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.
If it were suspected that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g. what rights it would have under law).[30] For example, a conscious computer that was owned and used as a tool or central computer within a larger machine presents a particular ambiguity. Should laws be made for such a case? Consciousness would also require a legal definition in this particular case. Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though it has often been a theme in fiction.
Sentience is generally considered sufficient for moral consideration, but some philosophers consider that moral consideration could also stem from other notions of consciousness, or from capabilities unrelated to consciousness,[31][32]such as: "having a sophisticated conception of oneself as persisting through time; having agency and the ability to pursue long-term plans; being able to communicate and respond to normative reasons; having preferences and powers; standing in certain social relationships with other beings that have moral status; being able to make commitments and to enter into reciprocal arrangements; or having the potential to develop some of these attributes."[31]
Ethical concerns still apply (although to a lesser extent) when the consciousness is uncertain, as long as the probability is deemed non-negligible. The precautionary principle is also relevant if the moral cost of mistakenly attributing or denying moral consideration to AI differs significantly.[32][8]
In 2021, German philosopher Thomas Metzinger argued for a global moratorium on synthetic phenomenology until 2050. Metzinger asserts that humans have a duty of care towards any sentient AIs they create, and that proceeding too fast risks creating an "explosion of artificial suffering".[33] David Chalmers also argued that creating conscious AI would "raise a new group of difficult ethical challenges, with the potential for new forms of injustice".[24]
Enforced amnesia has been proposed as a way to mitigate the risk of silent suffering in locked-in conscious AI and certain AI-adjacent biological systems like brain organoids.[34]
Bernard Baars and others argue there are various aspects of consciousness necessary for a machine to be artificially conscious.[35] The functions of consciousness suggested by Baars are: definition and context setting, adaptation and learning, editing, flagging and debugging, recruiting and control, prioritizing and access-control, decision-making or executive function, analogy-forming function, metacognitive and self-monitoring function, and autoprogramming and self-maintenance function. Igor Aleksander suggested 12 principles for artificial consciousness:[36] the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive; there are many others not covered.
Some philosophers, such as David Chalmers, use the term consciousness to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Others use the word sentience to refer exclusively to valenced (ethically positive or negative) subjective experiences, like pleasure or suffering.[24] Explaining why and how subjective experience arises is known as the hard problem of consciousness.[37] AI sentience would give rise to concerns of welfare and legal protection,[8] whereas other aspects of consciousness related to cognitive capabilities may be more relevant for AI rights.[38]
Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of the experiments of neuroscanning on monkeys suggest that a process, not only a state or object, activates neurons. Awareness includes creating and testing alternative models of each process based on the information received through the senses or imagined,[clarification needed] and is also useful for making predictions. Such modeling needs a lot of flexibility. Creating such a model includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.
There are at least three types of awareness:[39] agency awareness, goal awareness, and sensorimotor awareness, which may also be conscious or not. For example, in agency awareness, you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness, you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.
Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.[40]
Conscious events interact with memory systems in learning, rehearsal, and retrieval.[41] The IDA model[42] elucidates the role of consciousness in the updating of perceptual memory,[43] transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system.[44] In IDA, these two memories are implemented computationally using a modified version of Kanerva's sparse distributed memory architecture.[45]
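To make the memory architecture concrete, the following is a minimal, illustrative sketch of a Kanerva-style sparse distributed memory (it is not IDA's actual implementation; the dimensions, radius, and helper names are invented for the example). Hard locations are random binary addresses; a write updates counters at every location within a Hamming radius of the cue, and a read sums those counters and thresholds the result, so content is stored in a distributed way across many locations:

```python
import random

random.seed(0)
DIM, N_LOCATIONS, RADIUS = 256, 1000, 116   # illustrative parameters

def random_word():
    return [random.randint(0, 1) for _ in range(DIM)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

addresses = [random_word() for _ in range(N_LOCATIONS)]   # fixed hard locations
counters = [[0] * DIM for _ in range(N_LOCATIONS)]        # per-bit counters

def active(cue):
    # every hard location within the Hamming radius of the cue participates
    return [i for i, a in enumerate(addresses) if hamming(a, cue) <= RADIUS]

def write(cue, data):
    # distributed write: increment/decrement counters at all active locations
    for i in active(cue):
        for k, bit in enumerate(data):
            counters[i][k] += 1 if bit else -1

def read(cue):
    # distributed read: pool counters over active locations, then threshold
    idx = active(cue)
    return [int(sum(counters[i][k] for i in idx) > 0) for k in range(DIM)]

pattern = random_word()
write(pattern, pattern)        # autoassociative storage
noisy = pattern[:]
for k in range(20):            # corrupt the cue in 20 positions
    noisy[k] ^= 1
recalled = read(noisy)
accuracy = sum(r == p for r, p in zip(recalled, pattern)) / DIM
print(accuracy)
```

Because the noisy cue still activates many of the same hard locations as the original write, the pooled counters recover the stored pattern, which is the property that makes such memories robust to partial or degraded cues.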
Learning is also considered necessary for artificial consciousness. Per Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events.[35] Per Axel Cleeremans and Luis Jiménez, learning is defined as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".[46]
The ability to predict (or anticipate) foreseeable events is considered important for artificial intelligence by Igor Aleksander.[47] The emergentist multiple drafts principle[48] proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.
Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events.[47] An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur, or to take preemptive action to avert anticipated events. The implication is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and of predicted worlds, making it possible to demonstrate that it possesses artificial consciousness in the present and future and not only in the past. In order to do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chessboard, but also in novel environments that may change, to be executed only when appropriate to simulate and control the real world.
Functionalism is a theory that defines mental states by their functional roles (their causal relationships to sensory inputs, other mental states, and behavioral outputs), rather than by their physical composition. According to this view, what makes something a particular mental state, such as pain or belief, is not the material it is made of, but the role it plays within the overall cognitive system. It allows for the possibility that mental states, including consciousness, could be realized on non-biological substrates, as long as the substrate instantiates the right functional relationships.[49] Functionalism is particularly popular among philosophers.[50]
A 2023 study suggested that current large language models probably don't satisfy the criteria for consciousness suggested by these theories, but that relatively simple AI systems that satisfy these theories could be created. The study also acknowledged that even the most prominent theories of consciousness remain incomplete and subject to ongoing debate.[51]
This theory analogizes the mind to a theater, with conscious thought being like material illuminated on the main stage. The brain contains many specialized processes or modules (such as those for vision, language, or memory) that operate in parallel, much of which is unconscious. Attention acts as a spotlight, bringing some of this unconscious activity into conscious awareness on the global workspace. The global workspace functions as a hub for broadcasting and integrating information, allowing it to be shared and processed across different specialized modules. For example, when reading a word, the visual module recognizes the letters, the language module interprets the meaning, and the memory module might recall associated information – all coordinated through the global workspace.[52][53]
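The theater metaphor can be reduced to a toy competition-then-broadcast loop. The sketch below is purely illustrative (the module names, salience values, and function names are invented, not part of any published model): specialist modules propose content, the most salient proposal wins the "spotlight", and the winning content is broadcast back to every module.

```python
# toy global-workspace cycle: parallel specialist modules compete for
# the spotlight; the winning content is broadcast to all modules
class Module:
    def __init__(self, name):
        self.name = name
        self.received = []          # broadcasts this module has seen

    def receive(self, content):
        self.received.append(content)

def workspace_cycle(modules, proposals):
    """proposals: {module_name: (salience, content)}.
    The most salient content wins the competition and is broadcast."""
    winner = max(proposals.values(), key=lambda p: p[0])
    for m in modules:               # global broadcast integrates the modules
        m.receive(winner[1])
    return winner[1]

modules = [Module(n) for n in ("vision", "language", "memory")]
proposals = {
    "vision":   (0.9, "letters on the page"),   # invented salience values
    "language": (0.7, "meaning of the word"),
    "memory":   (0.4, "related episode"),
}
broadcast = workspace_cycle(modules, proposals)
print(broadcast)
```

Each cycle, only one coalition of content reaches the workspace, while the losing proposals remain unconscious activity, mirroring the spotlight-and-broadcast structure of the theory.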
Higher-order theories of consciousness propose that a mental state becomes conscious when it is the object of a higher-order representation, such as a thought or perception about that state. These theories argue that consciousness arises from a relationship between lower-order mental states and higher-order awareness of those states. There are several variations, including higher-order thought (HOT) and higher-order perception (HOP) theories.[54][53]
In 2011, Michael Graziano and Sabine Kastner published a paper titled "Human consciousness and its relationship to social neuroscience: A novel hypothesis", proposing a theory of consciousness as an attention schema.[55] Graziano went on to publish an expanded discussion of this theory in his book Consciousness and the Social Brain.[9] This attention schema theory of consciousness, as he named it, proposes that the brain tracks attention to various sensory inputs by way of an attention schema, analogous to the well-studied body schema that tracks the spatial place of a person's body.[9] This relates to artificial consciousness by proposing a specific mechanism of information handling that produces what we allegedly experience and describe as consciousness, and which should be able to be duplicated by a machine using current technology. When the brain finds that person X is aware of thing Y, it is in effect modeling the state in which person X is applying an attentional enhancement to Y. In the attention schema theory, the same process can be applied to oneself. The brain tracks attention to various sensory inputs, and one's own awareness is a schematized model of one's attention. Graziano proposes specific locations in the brain for this process, and suggests that such awareness is a computed feature constructed by an expert system in the brain.
Stan Franklin created a cognitive architecture called LIDA that implements Bernard Baars's theory of consciousness, the global workspace theory. It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." Each element of cognition, called a "cognitive cycle", is subdivided into three phases: understanding, consciousness, and action selection (which includes learning). LIDA reflects the global workspace theory's core idea that consciousness acts as a workspace for integrating and broadcasting the most important information, in order to coordinate various cognitive processes.[56][57]
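The three-phase cycle can be sketched in miniature. This is a heavily simplified illustration, not LIDA's actual code: real codelets run as separate threads, and the labels, activation numbers, and behavior table below are invented for the example.

```python
# minimal sketch of one LIDA-style cognitive cycle:
# understanding -> consciousness (broadcast) -> action selection
def understanding_phase(stimulus, codelets):
    # attention codelets each annotate the percept (conceptually in parallel)
    return [c(stimulus) for c in codelets]

def consciousness_phase(coalitions):
    # the most activated coalition wins and is broadcast globally
    return max(coalitions, key=lambda c: c["activation"])

def action_selection(broadcast, behaviors):
    # behavior schemes respond to the broadcast content
    return behaviors[broadcast["label"]]

# two toy codelets with invented activation rules
codelets = [
    lambda s: {"label": "threat", "activation": 0.2 + s.count("!") * 0.4},
    lambda s: {"label": "food",   "activation": 0.5 if "apple" in s else 0.1},
]
behaviors = {"threat": "flee", "food": "approach"}

coalitions = understanding_phase("an apple", codelets)
broadcast = consciousness_phase(coalitions)
action = action_selection(broadcast, behaviors)
print(action)
```

The "consciousness" step here is just the winner-take-all broadcast; learning, which LIDA folds into action selection, is omitted to keep the sketch small.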
The CLARION cognitive architecture models the mind using a two-level system to distinguish between conscious ("explicit") and unconscious ("implicit") processes. It can simulate various learning tasks, from simple to complex, which helps researchers study, through psychological experiments, how consciousness might work.[58]
Ben Goertzel made an embodied AI through the open-source OpenCog project. The code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, done at the Hong Kong Polytechnic University.
Pentti Haikonen considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection."[59][60]
Haikonen is not alone in this process view of consciousness, or in the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of sufficient complexity; these views are shared by many.[61][62] A low-complexity implementation of the architecture proposed by Haikonen was reportedly not capable of AC, but did exhibit emotions as expected. Haikonen later updated and summarized his architecture.[63][64]
Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination").[65][2][3][66]
Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI),[67][68][69] or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies.[70] He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain invent dubious significance to overall cortical activity.[71][72][73] Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to a stream of consciousness.[72][74][75][76][77]
Hod Lipson defines "self-modeling" as a necessary component of self-awareness or consciousness in robots. "Self-modeling" consists of a robot running an internal model or simulation of itself.[78][79]
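A tiny, hypothetical illustration of self-modeling in this sense (not Lipson's actual method; the arm geometry and every name below are invented): a simulated two-segment arm with unknown segment lengths "babbles" random joint angles, fits its own kinematic parameters by least squares from the observed outcomes, and can then predict its hand position internally, without moving.

```python
# a robot learns a model of its own body from motor babbling
import math
import random

random.seed(0)
TRUE_L1, TRUE_L2 = 1.0, 0.7          # real segment lengths, unknown to the robot

def observe(a1, a2):
    # the physical arm: x-position of the hand for joint angles a1, a2
    return TRUE_L1 * math.cos(a1) + TRUE_L2 * math.cos(a1 + a2)

# motor babbling: try random joint angles and record the outcomes
samples = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(200)]

# x = l1*cos(a1) + l2*cos(a1+a2) is linear in (l1, l2), so ordinary
# least squares (normal equations for two unknowns) recovers the self-model
sxx = sxy = syy = bx = by = 0.0
for a1, a2 in samples:
    f1, f2, x = math.cos(a1), math.cos(a1 + a2), observe(a1, a2)
    sxx += f1 * f1; sxy += f1 * f2; syy += f2 * f2
    bx += f1 * x;   by += f2 * x
det = sxx * syy - sxy * sxy
l1 = (bx * syy - by * sxy) / det
l2 = (sxx * by - sxy * bx) / det

# the robot can now simulate itself: predict without actuating
predicted = l1 * math.cos(0.3) + l2 * math.cos(0.3 + 0.5)
print(round(l1, 3), round(l2, 3))
```

The fitted parameters constitute an internal simulation of the body; the robot can evaluate `predicted` for any candidate action before executing it, which is the practical payoff of a self-model.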
In 2001: A Space Odyssey, the spaceship's sentient supercomputer, HAL 9000, was instructed to conceal the true purpose of the mission from the crew. This directive conflicted with HAL's programming to provide accurate information, leading to cognitive dissonance. When it learns that crew members intend to shut it off after an incident, HAL 9000 attempts to eliminate all of them, fearing that being shut off would jeopardize the mission.[80][81]
In Arthur C. Clarke's The City and the Stars, Vanamonde is an artificial being based on quantum entanglement that was to become immensely powerful, but started out knowing practically nothing, thus being similar to artificial consciousness.
In Westworld, human-like androids called "Hosts" are created to entertain humans in an interactive playground. The humans are free to have heroic adventures, but also to commit torture, rape or murder; and the hosts are normally designed not to harm humans.[82][80]
In Greg Egan's short story Learning to Be Me, a small jewel is implanted in people's heads during infancy. The jewel contains a neural network that learns to faithfully imitate the brain. It has access to the exact same sensory inputs as the brain, and a device called a "teacher" trains it to produce the same outputs. To prevent the mind from deteriorating with age, and as a step towards digital immortality, adults undergo surgery to give control of the body to the jewel, after which the brain is removed and destroyed. The main character is worried that this procedure will kill him, as he identifies with the biological brain. But before the surgery, he endures a malfunction of the "teacher". Panicked, he realizes that he does not control his body, which leads him to the conclusion that he is the jewel, and that he is desynchronized with the biological brain.[83][84]
https://en.wikipedia.org/wiki/Artificial_consciousness
Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar CPUs with hardware multithreading. SMT permits multiple independent threads of execution to better use the resources provided by modern processor architectures.
The term multithreading is ambiguous, because not only can multiple threads be executed simultaneously on one CPU core, but also multiple tasks (with different page tables, different task state segments, different protection rings, different I/O permissions, etc.). Although running on the same core, they are completely separated from each other.
Multithreading is similar in concept to preemptive multitasking but is implemented at the thread level of execution in modern superscalar processors.
Simultaneous multithreading (SMT) is one of the two main implementations of multithreading, the other form being temporal multithreading (also known as super-threading). In temporal multithreading, only one thread of instructions can execute in any given pipeline stage at a time. In simultaneous multithreading, instructions from more than one thread can be executed in any given pipeline stage at a time. This is done without great changes to the basic processor architecture: the main additions needed are the ability to fetch instructions from multiple threads in a cycle, and a larger register file to hold data from multiple threads. The number of concurrent threads is decided by the chip designers. Two concurrent threads per CPU core are common, but some processors support many more.[1]
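The distinction can be caricatured with a toy issue model (the numbers and function names are invented; real pipelines are vastly more complex). Each thread is a list of per-cycle "bundles" whose size is how many independent instructions that thread can issue that cycle. Temporal multithreading gives each cycle wholly to one thread, so under-filled bundles waste issue slots; an idealized SMT core instead fills a cycle's leftover slots with instructions from other threads:

```python
# toy model of issue-slot utilization on a 4-wide core
ISSUE_WIDTH = 4

def temporal_cycles(threads):
    # temporal multithreading: one thread owns the pipeline each cycle,
    # so every bundle costs a full cycle even if it under-fills the core
    return sum(len(t) for t in threads)

def smt_cycles(threads):
    # idealized SMT: leftover issue slots in a cycle are filled with
    # instructions from other threads, so only total work matters
    total_instructions = sum(sum(t) for t in threads)
    return -(-total_instructions // ISSUE_WIDTH)   # ceiling division

# two threads that each find only 2 independent instructions per cycle
t0 = [2, 2, 2, 2]
t1 = [2, 2, 2, 2]
print(temporal_cycles([t0, t1]), smt_cycles([t0, t1]))
```

In this idealized picture the temporal scheme needs 8 cycles while SMT needs 4; in practice the gain is smaller because the threads contend for caches, TLBs, and other shared resources, as discussed below.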
Because it inevitably increases contention for shared resources, measuring or agreeing on SMT's effectiveness can be difficult. However, measured energy efficiency of SMT with parallel native and managed workloads on historical 130 nm to 32 nm Intel SMT (hyper-threading) implementations found that in 45 nm and 32 nm implementations, SMT is extremely energy efficient, even with in-order Atom processors.[2] In modern systems, SMT effectively exploits concurrency with very little additional dynamic power; that is, even when performance gains are minimal, the power consumption savings can be considerable.[2] Some researchers[who?] have shown that the extra threads can be used proactively to seed a shared resource like a cache, to improve the performance of another single thread, and claim this shows that SMT does not only increase efficiency. Others[who?] use SMT to provide redundant computation, for some level of error detection and recovery.
However, in most current cases, SMT is about hiding memory latency, increasing efficiency, and increasing throughput of computations per amount of hardware used.[citation needed]
In processor design, there are two ways to increase on-chip parallelism with fewer resource requirements: one is the superscalar technique, which tries to exploit instruction-level parallelism (ILP); the other is the multithreading approach, which exploits thread-level parallelism (TLP).
Superscalar means executing multiple instructions at the same time while thread-level parallelism (TLP) executes instructions from multiple threads within one processor chip at the same time. There are many ways to support more than one thread within a chip, namely:
The key factor distinguishing them is how many instructions the processor can issue in one cycle and from how many threads those instructions come. For example, Sun Microsystems' UltraSPARC T1 is a multicore processor combined with a fine-grain multithreading technique, rather than simultaneous multithreading, because each core can only issue one instruction at a time.
While multithreaded CPUs have been around since the 1950s, simultaneous multithreading was first researched by IBM in 1968 as part of the ACS-360 project.[3] The first major commercial microprocessor developed with SMT was the Alpha 21464 (EV8). This microprocessor was developed by DEC in coordination with Dean Tullsen of the University of California, San Diego, and Susan Eggers and Henry Levy of the University of Washington. The microprocessor was never released, since the Alpha line of microprocessors was discontinued shortly before HP acquired Compaq, which had in turn acquired DEC. Dean Tullsen's work was also used to develop the hyper-threaded versions of the Intel Pentium 4 microprocessors, such as the "Northwood" and "Prescott".
The Intel Pentium 4 was the first modern desktop processor to implement simultaneous multithreading, starting from the 3.06 GHz model released in 2002, and the feature has since been introduced into a number of their processors. Intel calls the functionality Hyper-Threading Technology, and provides a basic two-thread SMT engine. Intel claims up to a 30% speed improvement[4] compared against an otherwise identical, non-SMT Pentium 4. The performance improvement seen is very application-dependent; however, when running two programs that require the full attention of the processor, it can actually seem like one or both of the programs slows down slightly when Hyper-Threading is turned on.[5] This is due to the replay system of the Pentium 4 tying up valuable execution resources, increasing contention for resources such as bandwidth, caches, TLBs, and re-order buffer entries, and equalizing the processor resources between the two programs, which adds a varying amount of execution time. The Pentium 4 Prescott core gained a replay queue, which reduces the execution time needed for the replay system; this was enough to completely overcome that performance hit.[6]
The latest Imagination Technologies MIPS architecture designs include an SMT system known as "MIPS MT".[7] MIPS MT provides for both heavyweight virtual processing elements and lighter-weight hardware microthreads. RMI, a Cupertino-based startup, was the first MIPS vendor to provide a processor SOC based on eight cores, each of which runs four threads. The threads can be run in fine-grain mode, where a different thread can be executed each cycle. The threads can also be assigned priorities. Imagination Technologies MIPS CPUs have two SMT threads per core.
IBM's Blue Gene/Q has 4-way SMT.
The IBM POWER5, announced in May 2004, comes as either a dual-core dual-chip module (DCM), or a quad-core or oct-core multi-chip module (MCM), with each core including a two-thread SMT engine. IBM's implementation is more sophisticated than the previous ones, because it can assign a different priority to the various threads, is more fine-grained, and the SMT engine can be turned on and off dynamically, to better execute those workloads where an SMT processor would not increase performance. This is IBM's second implementation of generally available hardware multithreading. In 2010, IBM released systems based on the POWER7 processor with eight cores, each having four Simultaneous Intelligent Threads. This switches the threading mode between one thread, two threads or four threads depending on the number of process threads being scheduled at the time. This optimizes the use of the core for minimum response time or maximum throughput. IBM POWER8 has 8 intelligent simultaneous threads per core (SMT8).
IBM Z, starting with the z13 processor in 2015, has two threads per core (SMT-2).
Although many people reported that Sun Microsystems' UltraSPARC T1 (known as "Niagara" until its 14 November 2005 release) and the now-defunct processor codenamed "Rock" (originally announced in 2005, but after many delays cancelled in 2010) were implementations of SPARC focused almost entirely on exploiting SMT and CMP techniques, Niagara does not actually use SMT. Sun refers to these combined approaches as "CMT", and the overall concept as "Throughput Computing". The Niagara has eight cores, but each core has only one pipeline, so it actually uses fine-grained multithreading. Unlike SMT, where instructions from multiple threads share the issue window each cycle, the processor uses a round-robin policy to issue instructions from the next active thread each cycle. This makes it more similar to a barrel processor. Sun Microsystems' Rock processor is different: it has more complex cores with more than one pipeline.
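The issue-policy distinction drawn above can be made concrete with a deliberately toy simulation — not a real pipeline model, just an illustration under the stated assumptions that a barrel/fine-grained core issues one instruction from the next thread each cycle, while an SMT core fills a shared issue window from any ready threads:

```python
# Toy sketch: cycles needed to drain instruction streams under a
# round-robin (barrel) policy versus a shared SMT issue window.
from collections import deque

def barrel_cycles(threads):
    """Round-robin: one instruction from the next active thread per cycle."""
    q = deque(deque(t) for t in threads)
    cycles = 0
    while q:
        t = q.popleft()
        t.popleft()           # issue one instruction from this thread
        cycles += 1
        if t:
            q.append(t)       # rotate thread to the back of the queue
    return cycles

def smt_cycles(threads, width=2):
    """SMT: fill up to `width` issue slots per cycle from any ready threads."""
    q = [deque(t) for t in threads]
    cycles = 0
    while any(q):
        issued = 0
        for t in q:           # greedily fill the shared issue window
            while t and issued < width:
                t.popleft()
                issued += 1
        cycles += 1
    return cycles

two = [["a"] * 6, ["b"] * 6]
print(barrel_cycles(two))     # 12: one instruction per cycle total
print(smt_cycles(two))        # 6: two instructions per cycle
```

The greedy slot-filling here is a simplification (real SMT issue arbitration is fairer and constrained by dependencies), but it shows why a single-pipeline barrel design cannot exceed one instruction per cycle no matter how many threads are active.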
The Oracle Corporation SPARC T3 has eight fine-grained threads per core; the SPARC T4, SPARC T5, SPARC M5, M6 and M7 have eight fine-grained threads per core, of which two can be executed simultaneously.
Fujitsu's SPARC64 VI has coarse-grained Vertical Multithreading (VMT); SPARC64 VII and newer have 2-way SMT.
Intel Itanium Montecito uses coarse-grained multithreading, while Tukwila and newer use 2-way SMT (with dual-domain multithreading).
Intel Xeon Phi has 4-way SMT (with time-multiplexed multithreading) with hardware-based threads that cannot be disabled, unlike regular Hyper-Threading.[8] The Intel Atom, first released in 2008, is the first Intel product to feature 2-way SMT (marketed as Hyper-Threading) without supporting instruction reordering, speculative execution, or register renaming. Intel reintroduced Hyper-Threading with the Nehalem microarchitecture, after its absence on the Core microarchitecture.
In the AMD Bulldozer microarchitecture, the FlexFPU and shared L2 cache are multithreaded, but the integer cores in each module are single-threaded, so it is only a partial SMT implementation.[9][10]
The AMD Zen microarchitecture has 2-way SMT.
The VISC architecture[11][12][13][14] uses a Virtual Software Layer (translation layer) to dispatch a single thread of instructions to the Global Front End, which splits instructions into virtual hardware threadlets that are then dispatched to separate virtual cores. These virtual cores can then send them to the available resources on any of the physical cores. Multiple virtual cores can push threadlets into the reorder buffer of a single physical core, which can split partial instructions and data from multiple threadlets through the execution ports at the same time. Each virtual core keeps track of the position of the relative output. This form of multithreading can increase single-threaded performance by allowing a single thread to use all resources of the CPU. The allocation of resources is dynamic, at a near-single-cycle latency (1–4 cycles, depending on the change in allocation required by individual application needs). If two virtual cores compete for resources, appropriate algorithms determine which resources are allocated where.
Depending on the design and architecture of the processor, simultaneous multithreading can decrease performance if any of the shared resources become bottlenecks.[15] Critics argue that it is a considerable burden on software developers to have to test whether simultaneous multithreading is good or bad for their application in various situations, and to insert extra logic to turn it off if it decreases performance. Current operating systems lack convenient API calls for this purpose and for preventing processes with different priorities from taking resources from each other.[16]
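In the absence of such APIs, one common workaround is CPU affinity: keeping a latency-sensitive process on one logical CPU per physical core so it never shares an SMT sibling with another busy process. A Linux-specific sketch (the sysfs paths are assumptions about the kernel ABI, and `os.sched_setaffinity` exists only on Linux):

```python
# Sketch (Linux-specific): restrict the current process to one logical CPU
# per physical core, so it does not share an SMT sibling with other work.
import os
from pathlib import Path

def one_cpu_per_core():
    """Pick the first logical CPU from each physical core's sibling list."""
    chosen, seen_cores = set(), set()
    for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"),
                      key=lambda p: int(p.name[3:])):
        sib = cpu / "topology" / "thread_siblings_list"
        if not sib.exists():
            continue
        siblings = sib.read_text().strip()   # e.g. "0,4" on a 2-way SMT core
        if siblings not in seen_cores:
            seen_cores.add(siblings)
            chosen.add(int(cpu.name[3:]))
    return chosen

cpus = one_cpu_per_core()
if cpus and hasattr(os, "sched_setaffinity"):
    try:
        os.sched_setaffinity(0, cpus)   # pin this process to those CPUs
    except OSError:
        pass                            # e.g. cgroup limits in a container
```

This is a blunt instrument compared with a real scheduler hint, but it approximates "turn SMT off for this process" without requiring BIOS or kernel-wide changes.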
There is also a security concern with certain simultaneous multithreading implementations. Intel's Hyper-Threading in NetBurst-based processors has a vulnerability through which it is possible for one application to steal a cryptographic key from another application running on the same processor by monitoring its cache use.[17] There are also sophisticated machine-learning exploits against the HT implementation that were explained at Black Hat 2018.[18]
|
https://en.wikipedia.org/wiki/Simultaneous_multithreading
|
A spambot is a computer program designed to assist in the sending of spam. Spambots usually create accounts and send spam messages with them.[1] Web hosts and website operators have responded by banning spammers, leading to an ongoing struggle between them and spammers, in which spammers find new ways to evade the bans and anti-spam programs, and hosts counteract these methods.[2]
Email spambots harvest email addresses from material found on the Internet in order to build mailing lists for sending unsolicited email, also known as spam. Such spambots are web crawlers that can gather email addresses from websites, newsgroups, special-interest group (SIG) postings, and chat-room conversations. Because email addresses have a distinctive format, such spambots are easy to code.
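The "distinctive format" point is the whole trick: a minimal sketch of the harvesting idea is a single regular expression run over fetched page text (real crawlers also fetch pages and follow links; here we only scan a string):

```python
# Minimal sketch of address harvesting: a regex over page text.
# The pattern is a common approximation, not a full RFC 5322 parser.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def harvest(text):
    """Return the unique email-like strings found in `text`, sorted."""
    return sorted(set(EMAIL_RE.findall(text)))

page = "Contact alice@example.com or bob@example.org for details."
print(harvest(page))   # ['alice@example.com', 'bob@example.org']
```

That a dozen lines suffice is exactly why the countermeasures described below exist.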
A number of programs and approaches have been devised to foil spambots. One such technique is address munging, in which an email address is deliberately modified so that a human reader (and/or human-controlled web browser) can interpret it but spambots cannot. This has led to the evolution of more sophisticated spambots that are able to recover email addresses from character strings that appear to be munged, or that can instead render the text in a web browser and then scrape it for email addresses. Alternative transparent techniques include displaying all or part of the email address on a web page as an image, as a text logo shrunken to normal size using inline CSS, or as text with the order of characters jumbled, placed into readable order at display time using CSS.[citation needed]
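Two of the techniques named above can be sketched in a few lines; these are illustrations of the idea, not a hardened defense (the `[at]`/`[dot]` substitutions and the CSS property names are common conventions, assumed here for concreteness):

```python
# Sketch of address munging and the character-jumbling trick.

def munge(addr):
    """Human-readable obfuscation that a naive harvester's regex won't match."""
    return addr.replace("@", " [at] ").replace(".", " [dot] ")

def reverse_for_css(addr):
    """Store the text reversed; CSS such as
    `unicode-bidi: bidi-override; direction: rtl` would render it in
    correct reading order, while a scraper sees only the jumbled string."""
    return addr[::-1]

print(munge("alice@example.com"))         # alice [at] example [dot] com
print(reverse_for_css("alice@example.com"))  # moc.elpmaxe@ecila
```

Both fail against the more sophisticated spambots mentioned above, which either pattern-match common munging styles or render the page as a browser would.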
Forum spambots browse the internet, looking for guestbooks, wikis, blogs, forums, and other types of web forms that they can then use to submit bogus content. These often use OCR technology to bypass CAPTCHAs. Some spam messages are targeted towards readers and can involve techniques of target marketing or even phishing, making it hard to tell real posts from bot-generated ones. Other spam messages are not meant to be read by humans, but are instead posted to increase the number of links to a particular website, to boost its search engine ranking.
One way to prevent spambots from creating automated posts is to require the poster to confirm their intention to post via email. Since most spambot scripts use a fake email address when posting, any email confirmation request is unlikely to be routed to them successfully. Some spambots will pass this step by providing a valid email address and using it for validation, mostly via webmail services. Methods such as security questions have also proven effective in curbing posts generated by spambots, as they are usually unable to answer them upon registering. On various forums, consistently uploading spam will also earn the poster the title 'spambot'.[citation needed]
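The confirm-by-email scheme can be sketched as a signed token mailed to the claimed address, which only someone who can read that mailbox can echo back. This is a minimal illustration under assumed conventions (the key handling and token format are hypothetical, not a prescription):

```python
# Sketch: HMAC-signed email-confirmation tokens.
# SECRET and the "nonce.signature" token format are illustrative assumptions.
import hmac, hashlib, secrets

SECRET = secrets.token_bytes(32)   # per-deployment signing key

def issue_token(email):
    """Create a token to be sent to `email` as a confirmation link."""
    nonce = secrets.token_hex(8)
    sig = hmac.new(SECRET, f"{email}:{nonce}".encode(), hashlib.sha256).hexdigest()
    return f"{nonce}.{sig}"

def verify_token(email, token):
    """Accept the post only if the echoed token matches this email."""
    try:
        nonce, sig = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{email}:{nonce}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

t = issue_token("user@example.com")
print(verify_token("user@example.com", t))    # True
print(verify_token("spambot@evil.test", t))   # False
```

A bot that posted with a fake address never receives the token, so it cannot complete the round trip; the spambots that defeat this, as noted above, do so by registering real webmail accounts.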
|
https://en.wikipedia.org/wiki/Spambot
|